Study on the "Defamiliarization" Narrative in Mo Yan's Novel The Republic of Wine

Mo Yan understands the narration of defamiliarization profoundly. With his extraordinary creative personality, compassionate human feeling, and unique aesthetic judgment, he makes a pioneering exploration and innovation in his novel The Republic of Wine. The defamiliarization of the novel's language is first reflected in the change of names, and the renewal of semantic connotations gives the work a "new," "defamiliarized" character. As the core imagery and aesthetic element, the meaning of "wine" extends to a connotation of, and reflection on, national character. At the same time, the novel's ironic rhetoric gives plain words a strong critical meaning. Furthermore, the novel adopts a variable narrative point of view and a changing narrative subject, so that the story presents a rich plurality in pursuing the value of the subject and exploring artistic methods, overflowing with the brilliance of interweaving the real and the fantastic and overlapping the realistic and the absurd, and reflecting the torture of human nature and the search for the soul in a defamiliarized expression with a modern temperament and contemporary spirit.

Introduction

Mo Yan's The Republic of Wine is considered "a book with deep and symbolic meaning" [1]. Mo Yan himself said, "I am actually quite satisfied with the structure of The Republic of Wine. On the one hand, it has a strong social-critical intention, and on the other hand, it has a variety of language parodies, playfully imitating various genres of the time." The author sets up a structure of the real and the imaginary, which gives The Republic of Wine a great sense of absurdity. The "defamiliarized" language, diverse narrative points of view, and diverse narrative subjects coincide with the author's desire to express his sorrow for the country and his strong social criticism. The defamiliarized narrative of The Republic of Wine creates a mysterious, eerie, horrifying and yet attractive literary world. The concept of "defamiliarization" was introduced by Shklovsky and deepened at the textual level by Brecht, who combined it with theater theory; it was continuously refined and developed by their successors. Mo Yan, for his part, "opens the way to a pure literary writing that returns to the weight and depth of words in an extraordinary and highly perceptible form of defamiliarization" [2]. Starting from the theory of "defamiliarization," this paper focuses on the embodiment, formation, and effect of the "defamiliarization" technique in The Republic of Wine, trying to explore how Mo Yan expresses his discoveries and reflections in the story of The Republic of Wine, a mixture of reality and fiction, and uses the creation of a textual world to complete his artistic critique of real society, culture and human nature. [3]

Language Innovation · Realistic Criticism · Ironic Tension

The "defamiliarization" of The Republic of Wine rests first of all on the defamiliarization of its language. Mo Yan's artistic processing of events through "perverse" descriptions gives the work a "new," "defamiliarized" reading effect. The use of language defamiliarization in The Republic of Wine is first reflected in the change of names.
Specifically, "the first is the temporary creation of new words, the second is the temporary creation of new uses, and the third is the replacement or updating of ready-made words with descriptive or explanatory phrases, or the breaking up of the usual combination of words and phrases through the intentional accumulation of idioms, idioms or colloquialisms" [4]. There are many examples of this in The Republic of Wine, for example, "Goodbye, little girl. I have the finest fertilizers, specially to improve the alkaline soil." [5] In this sentence, Ding Hooker uses a metaphor to refer to the "alkaline soil" as a woman's inability to get pregnant, giving the language a subtle beauty. Another example is "Fertilizers!" She grins and says, "You're still here, aren't you?" [6] "Fertilizer" refers to the ding hooker that can help infertile women regain their fertility. In addition, there are also the words "crash" as "let me kiss you" and so on. In this way, the literary language is constructed by means of defamiliarization, making the semantics newer, thus expanding the aesthetic distance between the reader and the language and creating a witty and humorous artistic effect. As Shklovsky says: "Adding new adjectives to old words to expand their meaning into a new series makes people's ears and eyes ...... feel the unusualness of things, thus changing their usual view of it." [7] It is worth noting that Mo Yan breaks the norms of word usage not only to touch people's hearts, but also to better realize the epithet function on the basis of a more contextual meaning and to strengthen the expressive power of language. Mo Yan's technique of dealing with linguistic defamiliarization makes The Republic of Wine possess a richer allegorical meaning and a fresh quality while breaking the shackles of preexistence. The German theorist Brecht defined "defamiliarization" as "an artistic technique that evokes a novel artistic sensation" [8] and then moved the aesthetic category of "defamiliarization" to the field of social criticism. He then moved the aesthetic category of "defamiliarization" to the field of social criticism. Thanks to Brecht's creative change, the theory of defamiliarization became a means to intervene in life and criticize reality, to integrate into the broad social life. In The Republic of Wine, this concept of critical reality is fully implemented. In the world of The Republic of Wine, wine is food, and wine is the passport to this mysterious country. Mo Yan says that he originally wanted to write only about the relationship between wine and human life, but only after writing did he realize that this was difficult to achieve, because his initial motivation for writing was rooted in a strong sense of social responsibility. From the wine jug that Ding Hooker carries with him, to Dr. Li Yidou, an alcoholic doctor in the brewing college, to the legend of the wine moth that Yu Yizhu narrates, and finally the writer "Mo Yan" attends the wine banquet. The central imagery of "wine" throughout the text carries the writer's criticism of the social status quo of official corruption and heroic drinking but ineffective rectification, and thus has a metaphorical overtone. Because "an 'image' can be transformed into a metaphor once, but if it is repeated as a presentation and reproduction, it becomes a symbol, or even part of a symbolic system." 
[9] Whether it is Li Yidou's sober intoxication, Ding Hooker's unconscious drunkenness, or the writer "Mo Yan" taking the initiative to let Vice Mayor Wang pour wine into his mouth and feeling grateful for it, they all point directly to the ignorance and numbness of the national character over thousands of years. The Republic of Wine is thus not only a virtual world described by language; it also possesses a strong otherness as well as rich historical and cultural connotations and artistic content. The seeming authenticity of the social environment of The Republic of Wine is a metaphor for, and a question put to, the real world. In addition, the effect of "defamiliarization" creates the tension of irony. Irony is understood as "a rhetorical strategy of euphemism, negation and concealment chosen by the author to maintain a balance between the complex elements of the object of expression in terms of content and form, phenomenon and essence, and so on." [10] The far-fetched rhetoric of King Kong Diamond and others in their toasts to Ding Hooker, and the culture of the Republic of Wine that Li Yidou introduces to the writer "Mo Yan," are refractions of the corruption of real life. When we find the object of this rhetoric so unpleasant, we experience the deviation of serious political discourse from dirty reality, and the opposition of connotation and extension creates a great ironic tension. This tension is not limited to the narrative effect brought by the defamiliarized writing of "wine"; it is also at work in the writing of "food." For example, in Li Yidou's novel Donkey Avenue, the "Whole Donkey Banquet" and the famous dish "Dragon and Phoenix" in the "One Size Hotel" turn out, once stripped of the gorgeous packaging of their names, to be nothing more than genitalia dressed up as expensive delicacies. As for "Cooking Class," in which Li Yidou's mother-in-law teaches her students to cook the dish of braised babies, the detail is more shocking still. The irony in the novel's content is like a light of the soul, reflecting a splendid artistic world, and Mo Yan, the masterly narrator, uses irony to arrive at a critique of reality and a reflection on human nature.

Narrative Perspective · Narrative Subject · Noise of the Crowd

Narrative perspective refers to "the particular vision and perspective from which a work or a text sees the world." [11] It is expressed in four elements: "who is telling the story when the act of narrating takes place, in whose eyes the story is seen, whose story is told, and to whom the story is told." [12] The narrative perspective of The Republic of Wine is fluid, dominated by an omniscient point of view but interleaved with limited narration, sometimes using a "metafictional" [13] narrative strategy. Its multiple narrative points of view and unique narrative intelligence escape the barriers of artistic genres and transcend the dichotomies of good and evil, beauty and ugliness in human nature, creating a vibrant world of the Republic of Wine on multiple levels. Notice first that the novel uses an omniscient and omnipotent perspective outside the text to explain the circumstances surrounding the arrival of the protagonist, senior detective Ding Hooker, in the City of Wine to investigate a major case in which a senior official cooked a baby.
The novel tells how, on the way, Ding Hooker becomes entangled with a spirited and rude female truck driver; he later kills and thereby commits a capital crime, and finally falls into an open pit and dies. Using this perspective, the author stands above the novel and uses his "golden eye" to observe the words, actions and heart of the main character, Ding Hooker: "He felt his spirit like a potato full of young blue shoots, dripping and slithering, rolled into her basket." [14] The defamiliarizing narrative effect is reflected in this bold language, which also makes readers feel that Mo Yan's characters are not vague, airy fabrications but have a "down-to-earth" reality. Meanwhile, Mo Yan also narrates from limited points of view. Sometimes the narrator becomes King Kong Diamond, for example at the beginning of Chapter 4: "Dear friends and students, when I learned that I had been appointed as a visiting professor at the University of Brewing, the immense honor was like a warm spring breeze in the cold winter months, blowing over my bare heart, my green intestines, my green lungs, and my purple, hard-working liver." [15] Through this shift in narrative point of view, Mo Yan brings out the image of King Kong Diamond as a man eager to pay compliments, given to exaggerated official language, and inconsistent in appearance and substance. Sometimes the point of view of the nearly unconscious Ding Hooker is used, as in the scene in which he is taken to a room by the service girls: "They took me in and the door closed. Sure enough, it was an elevator, descending rapidly. I thought admiringly ... The intense white light shone out of my eyes ..." [16] After Ding Hooker is caught in adultery with the female driver by King Kong Diamond and beaten, this perspective is also used to reveal his inner thoughts, and the huge contrast of contexts strengthens the ironic tension of the words: "Remembering the diamond, remembering the sacred mission, gnashing my teeth. Go! Sleeping with your wife is a matter of lifestyle, and cooking babies is the worst crime." [17] In the novel about the prodigy that Li Yidou sends to "writer Mo Yan," the character Li Yidou becomes the narrator: "The image he establishes as soon as he appears: a boy's body of less than three feet, a dense, stiff head of hair, two conspiratorial eyes, two thick, large ears, and a hoarse voice." [18] The interweaving of the external story and the characters' internal psychological activity creates a relatively complex narrative. Mo Yan's ability to make a simple story treacherous also owes to his skillful use of multiple eyes focused on the same thing. A typical example is the narration of the middle-aged writer "Mo Yan" on his way to the City of Wine. The novel first characterizes him from an omniscient and omnipotent perspective outside the text: "Lying in the comfort - as opposed to the hard seat - of the hard-sleeper middle berth, Mo Yan, a middle-aged writer with a swollen physique, thinning hair, tiny eyes, and a tilted mouth, was not the least bit sleepy." [19] Mo Yan then writes from his own perspective: "I am like a hermit crab, and 'Mo Yan' is the shell I live in."
[20] In this single incident the point of view is constantly changed, and the characteristics of the different points of view give the characters a multi-angle, three-dimensional presence, so that the narrative achieves the effect of "defamiliarization." In addition, Mo Yan consciously explores "metafiction," exposing the sources and narrative origins of his stories and producing "intertextual" narrative features in which the characters comment on the narrative segments. For example, in Child Prodigy it is written, "Gentlemen, our story has actually begun a long time ago." [21] At the beginning of Caiyan, the reader is immersed in the discussion of the mother-in-law's agelessness when the narrator suddenly interrupts the reader's thoughts and inserts a remark: "according to the popular narrative style of fiction nowadays, I can say that our story is about to begin." Li Yidou often breaks away from the original storytelling perspective to display the narrator's behavior, interrupting the novel's normal narration with self-commentary and telling the reader that the story is fictional, thus exposing the story's creative machinery and giving the reader an unfamiliar feeling. The multiplicity of narrative points of view reflects Mo Yan's ingenious thinking and artistic technique, and confirms his statement, "There is one thing I always insist on, and that is personalized writing and personalized works." [22] The subject is "the source of the subjective perception, awareness, judgment, opinion, etc. expressed" [23]. The narrator of a story generally has one of two identities: an outsider looking down on the whole, or a person in the play narrating part of the plot. The diversity of narrative subjects means that "the characters of the narrative, both primary and secondary, occupy a part of the subject's consciousness. The narrator is not necessarily the most important spokesperson for the subject, but his or her voice cannot be ignored. And there may be more than one narrator." [24] Mo Yan once said that he does not consider himself superior to the characters in his own works. [25] The Republic of Wine fuses the traditional narrative resources of China with Western modernist writing techniques to create a diverse and ever-changing narrative subject. The novel includes the main narrative of Ding Hooker's investigation of the baby-eating case: "Ding Hooker hurriedly read the letter of denunciation composed of the man's defamiliarized and odd handwriting, apparently written with his left hand." [26] Here an extra-textual narrator controls the fate of the characters and the course of the story. At the same time, the correspondence between a doctoral student at the brewing college of the City of Wine, who goes by the pen name "Li Yidou," and "Mo Yan" is inserted into the text; in it Li Yidou cynically writes: "Teacher, last night I wrote another novel, entitled Meat Boy. In this novel, I think I have used Lu Xun's brushwork in a pure way, turning the pen in my hand into a sharp knife with a cow's ear, peeling off the skin of a gorgeous spiritual civilization and revealing the core of a cruel moral barbarism." [27] This narrative subject is used to sum up the main themes of Li Yidou's nine novels, which in turn introduce the narrative subjects of those novels.
In their correspondence, the two men also exchange views on literary creation and literary criticism, together with mutual compliments and appraisals of wine. The content of these letters is closely tied to the story of the novel being written by "writer Mo Yan," who admits in the correspondence that "the long novel I am working on has reached the most difficult stage, and that ghostly senior scout has been working against me at every turn, so I don't know whether to let him shoot himself or simply die drunk. Well, in the last chapter, I let him get drunk again." [28] It can be seen that the character Mo Yan has become a narrative subject of the novel. In this regard, Mo Yan himself has remarked that The Republic of Wine takes into account the relationship between the narrative subject and the writer: the supreme narrative subject in the novel is the writer Mo Yan, but this Mo Yan is different from the real Mo Yan; he is both the narrative subject and a character in the novel. In the last chapter there is also an all-seeing narrator, who tells in a calm tone how "writer Mo Yan" replies to Li Yidou's letter, travels to the City of Wine, goes shopping with Li Yidou and attends a luncheon, giving the character a multifaceted image. The second and third sections of that chapter, for example, use the direct dialogue form of "Mo Yan says," "Li Yidou says," and "Yu Yizhu says" to show the "greasy, well-groomed" Hu. Building on the original story, the appearance of this narrative subject allows the text to keep adding new clues and new conflicts; in this process of deconstruction and reconstruction the fictionality of the story itself is put in question, the absurdity of the plot and the complexity of human nature become more prominent, and an overall effect of intertwined fiction and truth is achieved. [29] Like Dostoevsky's novels, The Republic of Wine contains a number of separate consciousnesses that are not merged into one another, what Bakhtin called "polyphony," originally a musical term. According to Bakhtin, "one of the outstanding features of the 'monologue' novel is that the many personalities and destinies form a unified objective world, which unfolds in layers under the unified will of the author." [30] In The Republic of Wine, although the main characters Ding Hooker and Li Yidou are both objects depicted by the author, they have a fairly strong sense of self, each with his own characteristics, one self-possessed yet weak, the other misbehaving yet proud, each expressing his opinions on an equal footing rather than being subsumed under the author's unified will. The plurality of narrative subjects breaks the reader's reading experience and horizon of expectation, so that discussion is no longer limited to portraying characters or unfolding the plot, and the work can reflect the real state of human nature in real society. The pluralistic narrative subject makes the characters no longer "flat" images but figures of flesh and blood, making the novel a polyphonic "symphony" that plays a compassionate music of interwoven good and evil.
The Real and the Imaginary · The Trial of Humanity · Aesthetic Inspiration

Mo Yan aptly uses the technique of "defamiliarization" to blur the "basic features" of words in The Republic of Wine through experiments with various genres and changes in everyday language usage and specific contexts. This technique lends the work a color of fantasy, so that people see things with a poetic gaze, transcending all the stakes of real life. In A Splash of Heroism, one of the novels Li Yidou sends to "me," the author adopts a distinctly literary style: "I was sitting alone, crying, when I suddenly felt a stone sink beneath my body, thunder in my ears, and a golden light in front of me." [31] This singular style puts the narrative in a playful state and places the authenticity of the text in question, so the game between the real and the fantastic becomes especially prominent. Moreover, the author does not write in straightforward language or with a single narrative subject and perspective, nor does he lay the novel's theme bare, which would dissolve its proper hidden meaning and the aesthetic flavor of literature; instead he fills the sometimes playful words with painful thought. The bloody "baby feast" in the novel, for example, as a mnemonic symbol, not only points to the traditional cultural memory of "eating people" written about by Lu Xun, but also builds on the memory of the past and prefigures the future development of the "Republic of Wine market," carrying the cultural significance of an intersection of past, present and future. Again, the goblins in Li Yidou's writing "have a treacherous, evil and ferocious smile on the corners of their mouths" [32]; they are presented in the Republic of Wine with a sense of self-mockery and freshness. The protagonist Ding Hooker tries to enter another "world" by going to the Republic of Wine to investigate the baby-eating case, but he is trapped in the food and sex of the place and forever wanders on the periphery, unable to approach the case itself. Throughout the narrative the writer downplays the truth of the case while rendering the details with extraordinary realism, giving the work an unattainable, dreamlike appearance. "Mo Yan is no longer a writer who can be summarized and described only by certain cultural or aesthetic neologisms, but has become an exceptionally multifaceted and fecund writer who encompasses almost all propositions in the vast field of complex humanities, history, morality and art." [33] Through the interweaving of the real and the fantastic, Mo Yan creates a tense aesthetic appeal and renders the many faces of life in this world. "A good novelist is concerned with people in social life and their inescapable desires, as well as the difficult struggle of human beings trying to free themselves from the control of their desires." [34] Mo Yan is acutely aware of the depravity of human nature that may be exposed under the impact of the tide of reform and opening up, and this concern is demonstrated particularly well in The Republic of Wine. Baby-eating is the shocking event at the novel's center.
In ancient times there were accounts of people "eating their sons" [35] in wartime, and stories of demons and monsters trying to eat the flesh of the Tang Monk to live forever. Such chilling events are hard to expect in real life, but Mo Yan says that the absurd plot of "baby-eating" in The Republic of Wine grew out of his rumination on his youth and his hometown experiences. In the text, Li Yidou's mother-in-law is so consumed by physical desire that her humanity is nearly extinguished, and she maintains that the babies she is about to slaughter and cook are not actually human beings but merely goods delivered under mutually consented contracts. This almost deranged psychology undoubtedly increases the difficulty of artistic perception and reveals the author's profound reflection on the dark psychology and morbid desires common to power and to human beings. Such defamiliarized writing not only strengthens literature's power to focus on reality; by re-sensitizing the everyday it also makes people feel the presence of the heterogeneous and ponder the work's connotations at a deeper level. As Qian Zhongshu said in The Art of Talking, "The so-called 'this text' is originally 'this nothing': as if a jade onion were peeled layer by layer, no core of inner content is to be found." [36] On the surface this black-and-white world is full of deviation and revolt, heroes fall and rats run rampant, but on reflection what the novel wants to express is a meditation on human nature and a lashing of servility. Even in such a black hole of gluttony and abyss of human nature, The Republic of Wine does not lack light. Faced with a massage from a young woman, "writer Mo Yan" fixes his mind on a pair of cold handcuffs to keep himself from going astray, suggesting the need to build a legally binding social system. Li Yidou withstands his mother-in-law's carnal temptations and does not break ethical taboos, believing that to do so would degrade his identity as a "proper man," reflecting the positive role of traditional "humanistic indoctrination" in contemporary society. Mo Yan stands in the context of his time and examines the powers that be, the executioners and the heroes, while also interrogating his own inability to overcome human weakness, using the technique of "defamiliarization" to ask why irrational phenomena still exist in the real world, and thereby greatly enhancing the artistic charm and literary beauty of the novel. Mo Yan's defamiliarized narrative has a strong symbolic and philosophical connotation, which plays an important role in driving the narrative, shaping the characters and creating a unique atmosphere, constructing a contemplation of human nature and the soul, as well as "an understanding of life and of the spirit of the subject of life." [37]

Conclusion

Zhang Qinghua has commented, "Mo Yan is able to strive to construct his unique genre expression within a profound tacit understanding with traditional artistic elements, and this tacit, drawn-upon relationship is not one-sided, isolated and fragmented, but rather a fusion and overall tangency based on an accurate grasp of its intrinsic artistic spirit." Through its unique linguistic construction and narrative point of view, The Republic of Wine coalesces into a defamiliarized narrative landscape. The comprehensive use of the omniscient perspective, the limited perspective and meta-narrative constructs a rich, dense, highly discursive and open literary world while escaping monotonous narration.
The defamiliarized narrative of The Republic of Wine not only speaks to its times and society but also reflects Mo Yan's thinking in the pursuit of narrative art. The use of irony and the shifting of narrative subjects serve as a distinctive window, characterizing the novel by the interplay of the real and the fantastic, of reality and absurdity. Beyond the shell of core imagery such as wine, it also reveals the author's examination of, and reflection on, reality and the national soul, as well as his compassion for all the faces of the world, as Ding Hooker's meaningful epitaph has it: "In the age of chaos and corruption, brothers, do not judge your own brothers." The poetic world of The Republic of Wine, forged through defamiliarized narrative language and point of view, is interspersed with unique aesthetic experiences, coalescing with Mo Yan's singular artistic inspiration and narrative wisdom, revealing spiritual secrets and glowing with a charm all its own.
Cosmic Rays from the Ankle to the Cut-Off

Recent advances in measuring and interpreting cosmic rays from the spectral ankle to the highest energies are briefly reviewed. The prime question at the highest energies concerns the origin of the flux suppression observed at E ~ 4·10^19 eV. Is this the long-awaited GZK-effect or the exhaustion of sources? The key to answering this question will be provided by the largely unknown mass composition at the highest energies. The high level of isotropy observed even at the highest energies challenges models of a proton-dominated composition if extragalactic magnetic fields are on the order of a few nG or less. We shall discuss the experimental and theoretical progress in the field and the prospects for the next decade.

Introduction

In the last decade, a new generation of ultra-high energy cosmic ray (UHECR) observatories has come into operation: the Pierre Auger Observatory in the Southern hemisphere and the Telescope Array (TA) in the Northern one. Apart from a significant advance in size over their predecessors, both observatories have implemented, for the first time, hybrid designs combining fluorescence telescopes with large ground arrays.

Energy Spectra

The all-particle energy spectrum is perhaps the most prominent observable of cosmic rays being investigated. It carries combined information about the UHECR sources and about the galactic and/or intergalactic media in which CRs propagate. The ankle, a hardening seen in the all-particle spectrum at about 5·10^18 eV, is generally considered to mark the transition from galactic to extragalactic cosmic rays. However, recent measurements of KASCADE-Grande [5,6] suggest that this transition may occur more than an order of magnitude lower in energy, i.e. around 10^17 eV. At this energy, the component of light elements is subdominant but exhibits a hardening and becomes dominant at the ankle. The so-called dip model of the ankle [7] interprets the ankle as the imprint of protons suffering e+e- pair production on the CMB. Thus, it requires protons to be dominant at energies significantly above and below the ankle, and the galactic-extragalactic transition to occur below the ankle energy. Obviously, the models differ in the energy spectra expected for different mass groups and thereby in the cosmic ray mass composition as a function of energy. Related to this, one also expects different levels of anisotropy in the arrival directions, as it will be difficult to fully isotropize EeV protons in galactic magnetic fields [8]. At the highest energies, a flux suppression due to energy losses by photo-pion production and photo-disintegration on the CMB is expected for protons and nuclei, respectively. In fact, this so-called GZK-effect [9,10] is the only firm prediction ever made concerning the shape of the UHECR spectrum. First observations of a cut-off were reported by HiRes and Auger [11,12]. However, at present we cannot be sure whether this flux suppression is an imprint of the aforementioned GZK energy losses or whether it is related to the maximum cosmic ray acceleration energy at the sources. A first comprehensive comparison of available data was performed by a joint working group of Auger, TA, HiRes, and Yakutsk and is presented in [13]. It is found that the energy spectra determined by the Auger and TA observatories are consistent in normalization and shape if the uncertainties in the energy scale - at that time quoted for each experiment to be about 20% [14,15] - are taken into account.
This is a quite notable achievement and demonstrates how well the data of the current observatories are understood. The most recent updates of the cosmic ray energy spectra were presented at the ICRC 2013 conference. Auger has reported an exposure of about 40 000 km^2 sr yr in the zenith angle range up to 80°. TA, due to its later start and more than 4 times smaller area, has collected about a tenth of the events. The TA collaboration restricts its analysis to zenith angles below 45°, which can be understood from the smaller vertical dimensions of the scintillator slabs compared to the 1.2 m height of the water tanks. Accounting for recent precise measurements of the fluorescence yield [17], taking advantage of a better estimate of the invisible energy, and drawing on a deeper understanding of the detector and a consequently improved event reconstruction, the Pierre Auger Collaboration has recently updated its cosmic ray energy scale and reduced its systematic uncertainties to 14% [18]. The corresponding results of the two experiments are presented in Fig. 1. [Fig. 1 caption: The TA data are fit to a model of extragalactic proton sources, distributed cosmologically according to (1+z)^4.4 and injecting a power-law spectrum E^-2.39 at the sources (blue line). The Auger data are compared to a model assuming a maximum acceleration energy E_max = 10^18.7 eV × Z with injection index γ = 1 and an enhanced galactic cosmic ray composition from [16]; an additional galactic component is plotted as a dotted black line.] The energy spectra of the two observatories clearly exhibit the ankle at ~5·10^18 eV and a flux suppression above ~4·10^19 eV, and they are compared to simplified astrophysical scenarios with parameters given in the figure caption. As can be seen from this comparison, the ankle occurs at an energy compatible with the dip model under the assumption of a pure proton composition. Also, the flux suppression at the highest energies is in accordance with the energy-loss processes of the GZK-effect. In the case of Auger, however, the suppression starts at lower energies than in the propagation calculations unless the maximum energy of the sources is set to approximately 10^20 eV [15]. It is important to realize that the suppression region of the spectrum can also be described by assuming pure Fe emission from the sources. In this case, however, the ankle would require another component of cosmic rays to contribute to the flux at lower energies. Another interpretation of the suppression region has been presented e.g. in [19,20,21,22]. In this group of models, the flux suppression is primarily caused by the limiting acceleration energy at the sources rather than by the GZK-effect. A good description of the Auger all-particle energy spectrum is obtained for E_max,p ≈ 10^18.7 eV with a mix of protons and heavier nuclei being accelerated up to the same rigidity, so that their maximum energy scales like E_max,Z ∝ Z × E_max,p (colored histograms in Fig. 1 [16]). Obviously, the latter class of models (which also accounts for all relevant energy-loss processes during propagation [23]) leads to an increasingly heavier composition towards the suppression region. We shall return to this aspect in the next section. Another notable feature of this class of models is the requirement of injection spectra considerably harder than those expected from Fermi acceleration, as pointed out also e.g. in Refs. [22,16,24].
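To make the maximum-energy scenario concrete, the following minimal sketch evaluates the kind of parameterization described above: each nuclear species is injected with a hard power law E^-γ and an exponential cut-off at a common rigidity, so that E_max,Z = Z × E_max,p. The species fractions below are invented for illustration and are not the fitted values of Refs. [19,20,21,22]; propagation losses are ignored.

```python
# Sketch of a "maximum-energy" source model: power law with a
# rigidity-dependent exponential cut-off, summed over nuclear species.
# Species fractions are illustrative assumptions, not fitted values.
import numpy as np

E = np.logspace(17.5, 20.5, 100)   # energy grid [eV]
E_max_p = 10**18.7                 # proton cut-off energy [eV] (as in the text)
gamma = 1.0                        # hard injection index (as in the text)

# (Z, relative normalization) -- an assumed mix of H, He, N, Si, Fe
species = [(1, 1.0), (2, 0.5), (7, 0.3), (14, 0.2), (26, 0.1)]

def injected_flux(E, Z, norm):
    """Power law with cut-off at rigidity E_max,Z = Z * E_max,p."""
    return norm * E**(-gamma) * np.exp(-E / (Z * E_max_p))

total = sum(injected_flux(E, Z, f) for Z, f in species)
# Heavier nuclei dominate the summed flux near the suppression region,
# which is why such models predict an increasingly heavy composition there.
frac_fe = injected_flux(E, 26, 0.1) / total
print(f"Fe fraction at 10^20 eV: {frac_fe[np.argmin(np.abs(E - 1e20))]:.2f}")
```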
However, as recently discussed in [25], diffusion of high-energy cosmic rays in turbulent extragalactic magnetic fields counteracts the requirement of hard injection spectra (γ < 2.0) for a reasonable range of magnetic field strengths and coherence lengths. The different interpretations of the Auger and TA energy spectra demonstrate the ambiguity left by the all-particle energy spectrum, and they underline the importance of understanding the absolute cosmic ray energy scales to a high level of precision. While perfect agreement is seen up to the ankle and beyond, one finds that the flux suppression in the Auger data not only starts at somewhat lower energies but also falls off more steeply than in the TA data. This difference - despite being still compatible with the quoted systematic uncertainties of TA and Auger of 20% and 14% - deserves further attention.

Mass Composition

Obviously the all-particle energy spectrum by itself, despite the high level of precision reached, does not allow one to draw conclusions about the origin of the spectral structures and thereby about the origin of cosmic rays from the ankle to the highest energies. Additional key information is obtained from the mass composition of cosmic rays. Unfortunately, the measurement of primary masses is the most difficult task in air shower physics, as it relies on comparisons of data to extensive air shower (EAS) simulations, with the latter serving as reference [26,27]. EAS simulations, however, are subject to uncertainties, mostly because hadronic interaction models need to be employed at energies far beyond those accessible to man-made particle accelerators. Therefore, the advent of LHC data, particularly those measured in the extreme forward region of the collisions, is of great importance to cosmic ray and air shower physics and has been awaited with great interest [26]. Remarkably, the interaction models employed in air shower simulations provided somewhat better predictions of global observables (multiplicities, p_⊥ distributions, forward and transverse energy flow, etc.) than typical tunes of HEP models such as PYTHIA or PHOJET [28]. This shows that the cosmic ray community has taken great care in extrapolating models to the highest energies. Moreover, as demonstrated e.g. in [29], cosmic ray data provide important information about particle physics at centre-of-mass energies ten or more times higher than accessible at the LHC. The pp inelastic cross section extracted from data of the Pierre Auger Observatory supports only a modest rise with energy [29]. A careful analysis of composition data from various experiments has been performed and reviewed in [26,31]. Updated results from the TA and Auger Observatories, as well as a comparison of the two, were presented at the ICRC 2013, with exemplary results depicted in Fig. 2. The data from the Pierre Auger Observatory (Fig. 2, top panel) suggest an increasingly heavy mass composition above 4·10^18 eV when compared to post-LHC interaction models. The TA data are compatible with a proton-dominated composition at all energies (Fig. 2, bottom left) but have much larger statistical uncertainties and are compared to pre-LHC interaction models, which showed a larger scatter and mostly predicted shallower showers. It is important to note that the data points and model predictions of TA and Auger cannot be compared directly to each other.
This is because TA applies detector-specific acceptance cuts to data and Monte Carlo simulations, while Auger applies fiducial volume cuts aimed at selecting a bias-free event sample. This is done by using a high-quality hybrid data set and applying fiducial volume cuts, based on the shower geometry, that ensure that the viewable Xmax range for each shower is large enough to accommodate the full Xmax distribution [32]. The price to be paid for these so-called anti-bias cuts, which enable a direct data-to-model comparison, is that significantly more statistics are required than in the classical method of applying the same cuts to models and data. Because of this, such an analysis is not yet available for the TA data. The event statistics surviving all cuts and entering the Xmax energy bins of the Auger and TA data samples are specified in Fig. 2. Because of these complications, both collaborations have started to jointly investigate the origin of the differences in Xmax by injecting the composition measured by the Pierre Auger Observatory into the TA Monte Carlo. That preliminary study shows that a proton-like and an Auger-like composition cannot be discriminated from one another within the presently available TA statistics [33]. It will be interesting to see this puzzle solved in the near future, both by refined and improved reconstruction and analysis techniques and by collecting more data. A (pre-ICRC 2013) compilation of composition data from various experiments is depicted in Fig. 2 (bottom right). These data complement those of the energy spectrum in a remarkable way. As can be seen, the breaks in the energy spectrum coincide with the turning points of changes in the composition: the mean mass becomes increasingly heavy above the knee, reaches a maximum near the 'iron knee', falls to a minimum at the ankle, and then starts to rise modestly again towards the highest energies. Different interaction models give the same answer concerning changes in the composition but differ in their absolute values of ln A [26,34]. [Fig. 2 caption, partly recovered: Top: Xmax measurements from Auger [15]. Bottom left: Xmax as a function of energy from TA [30]. Bottom right: average logarithmic mass of CRs as a function of energy derived from Xmax measurements with optical detectors for the EPOS 1.99 interaction model; lines are estimates of the experimental systematics, i.e. upper and lower boundaries of the data presented [26].] The interpretation of the all-particle energy spectrum in terms of the exhaustion of sources rather than the GZK-effect, discussed in the previous section (see histograms in Fig. 1), also provides a good description of the evolution of Xmax and RMS(Xmax) with energy as seen by Auger. This is demonstrated exemplarily in Fig. 3 for the archetypal model of Ref. [22]. Similar results are reported e.g. in Refs. [35,16]. The mixture of light and intermediate/heavy primaries at the highest energies predicted by the maximum-energy models may also explain the low level of directional correlations with nearby AGN. Enhancements presently foreseen by the Pierre Auger Collaboration will address this issue (see below). Moreover, improving the composition measurement in the ankle region will also be key to discriminating between the different models proposed to explain the transition from galactic to extragalactic CRs. This has been a prime motivation for the HEAT and TALE extensions of the Pierre Auger and TA Observatories, respectively [36,37].
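As an aside, the way an average logarithmic mass is extracted from Xmax (as in the bottom-right panel of Fig. 2) can be illustrated with the superposition model, in which a nucleus of mass A behaves like A protons of energy E/A. The sketch below uses illustrative reference values for the proton Xmax scale and the elongation rate; they are assumptions for the example, not the values of any particular hadronic interaction model.

```python
# Superposition-model sketch: <Xmax>(E, A) ~ Xmax_p(E) - (D10/ln 10) * ln A,
# with D10 the proton elongation rate per decade of energy.
# Reference values below are illustrative assumptions only.
import numpy as np

D10 = 60.0          # proton elongation rate [g/cm^2 per decade] (assumed)
XMAX_P_19 = 800.0   # proton <Xmax> at 10^19 eV [g/cm^2] (assumed)

def mean_lnA(xmax_measured):
    """Invert the superposition relation at 10^19 eV."""
    return (XMAX_P_19 - xmax_measured) * np.log(10.0) / D10

for xmax in (800.0, 760.0, 700.0):
    print(f"<Xmax> = {xmax:5.0f} g/cm^2  ->  <ln A> = {mean_lnA(xmax):.2f}")
# ln A = 0 for protons and ln 56 ~ 4.0 for iron; shifting the assumed proton
# scale shifts the absolute <ln A>, which is why different interaction models
# agree on composition *changes* but not on absolute values, as noted above.
```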
Clearly, the importance of measuring the composition up to the highest energies cannot be overstated, as it will be the key to answering the questions about the origin of the GZK-like flux suppression and about the transition from galactic to extragalactic cosmic rays discussed above.

Data for anisotropy searches

Further important information about the nature and origin of UHECR is contained in the distribution of their arrival directions over the sky. Unlike energies or primary mass, the arrival directions of cosmic ray events are practically free from systematic errors. Modern cosmic ray experiments are well suited for studying UHECR anisotropies at angular scales from about a degree up to the largest scales corresponding to the whole sky. The bulk of the arrival directions of UHECR events - those measured by the ground arrays - have an angular resolution of about ~1° [38,39]. The angular resolution may be up to an order of magnitude better for selected events observed by the fluorescence detectors in stereo or hybrid mode [2], but the number of such events is much smaller. Most of the anisotropy studies discussed in what follows concern data from the ground arrays. At E > 10^19 eV the total number of events accumulated to date exceeds 10^4. The ground arrays of Auger and TA are fully efficient at energies above 3·10^18 eV [40] and 10^19 eV [39], respectively. Above the efficiency thresholds (and certainly above 10^19 eV) the integrated exposures of both experiments are very close to the geometrical one [41]. This makes anisotropy studies at high energies straightforward. Possible (mild) deviations from the geometrical exposure have to be studied and taken into account at energies below the efficiency threshold. Together, Auger and TA cover the whole sky.

Are anisotropies expected?

Apart from the (unknown) distribution of sources over the sky, the two main factors that determine the UHECR anisotropy are deflections in cosmic magnetic fields and attenuation due to interactions with the radiation backgrounds. The extragalactic magnetic fields are known quite poorly. From measurements of the Faraday rotation of extragalactic sources, they are usually assumed to have a magnitude not exceeding 10^-9 G [42] and a correlation length up to ~1 Mpc. In such a field, a proton of 10^20 eV would be deflected by 2° over a distance of 50 Mpc. Small deflections in the extragalactic fields are supported by simulations [43] which indicate that the extragalactic fields are small everywhere except in galaxy clusters and filaments (see, however, [44] and further discussion in [45,46,47]). Arguments based on the analysis of gamma-ray propagation [48,49] also point in this direction. An open, even though somewhat exotic, possibility is that the Milky Way itself is embedded in a filament with relatively strong magnetic fields, or that the galactic wind has magnetized the space around our Galaxy [50,51]. The Galactic magnetic field is known much better. Models of its regular component have been constructed based on the existing measurements of the Faraday rotation of extragalactic sources [52,53]. This field would deflect a proton of 10^20 eV by about 2-4° depending on the direction. Deflections in the random component of the Galactic field were argued to be subdominant [54,55]. Energy losses of UHECR become important at energies in excess of about 5·10^19 eV (the GZK-effect [9,10]).
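The deflection and energy-loss figures quoted above can be reproduced with back-of-the-envelope estimates. The sketch below uses the standard small-angle deflection formula for a turbulent field of r.m.s. strength B and coherence length lam_c traversed over a distance L, and the photo-pion production threshold on a CMB photon of mean energy; both are order-of-magnitude checks, not the full calculations of the cited references.

```python
# Rough numerical checks of two figures quoted in the text.

def deflection_deg(E_eV, Z=1, B_nG=1.0, L_Mpc=50.0, lam_Mpc=1.0):
    # theta ~ 0.8 deg * Z * (10^20 eV/E) * (B/nG) * sqrt(L/10 Mpc) * sqrt(lam_c/Mpc)
    return 0.8 * Z * (1e20 / E_eV) * B_nG * (L_Mpc / 10.0) ** 0.5 * lam_Mpc ** 0.5

print(f"proton, 10^20 eV, 1 nG, 50 Mpc: {deflection_deg(1e20):.1f} deg")  # ~2 deg

# Photo-pion threshold for p + gamma_CMB -> Delta: s >= (m_p + m_pi)^2,
# evaluated for a head-on collision with a mean-energy CMB photon.
m_p, m_pi = 938.3e6, 135.0e6   # rest energies [eV]
eps_cmb = 6.3e-4               # mean CMB photon energy [eV]
E_th = m_pi * (2 * m_p + m_pi) / (4 * eps_cmb)
print(f"GZK threshold: {E_th:.1e} eV")  # ~10^20 eV; the high-energy tail of the
# thermal photon spectrum makes the suppression effective already near 5e19 eV
```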
Although the mass composition of UHECR is not well known, both protons and heavier nuclei are subject to a similar attenuation and have a propagation horizon of a few tens of Mpc at the highest energies. As is clear from the above numbers, if the primary particles are predominantly protons, one might expect to recover the distribution of sources over the sky, with possibly bright spots of a few degrees in size corresponding to individual bright sources. On the other hand, if the primary particles are heavier nuclei, the flux distribution should be anisotropic in a manner similar (but not identical) to the source distribution at the scale of a few tens of degrees, but all the small-scale structure would be washed out. Note that because of the small propagation distance, at the highest energies the sources are expected to be distributed anisotropically owing to the large-scale structure of the Universe. None of these anisotropies is observed in the data. Below we summarize the tests that have been performed and discuss possible implications of the results.

Searches for localized excesses of the UHECR flux

Two techniques are most commonly used to search for local excesses of the UHECR flux. One is based on the two-point angular correlation function (see, e.g., [56] for the realization of this method in the case of UHECR). This method is particularly useful when there are no very bright spots but rather many excesses of small amplitude and similar angular size; one then expects an excess in correlations at the corresponding angular scale. Both the Auger and TA data were examined in this way, so far with negative results [57,41]. Individual bright spots can be identified by looking for excesses in a moving window of given angular size and estimating the background either from Monte Carlo simulations or directly from the data. The overall significance should be corrected for the effective number of trials, which is typically calculated by Monte Carlo simulations. The Pierre Auger Collaboration has performed this kind of blind search with window sizes of 5° and 15° in the data set with energy E > 1 EeV [18]. No significant excesses were found. In the TA data analogous searches were performed in several energy bands around 1 EeV with a search window of 20° [58] and with a position-dependent window of several degrees [59]. No significant deviation from isotropy was found. At high energies (around and above the cutoff in the spectrum) the situation is more interesting. The Auger collaboration has reported an excess of UHECR events with E > 55 EeV around the direction of the Centaurus supercluster, at a distance of about 60 Mpc, and of Centaurus A, a nearby AGN at a distance of about 3.5 Mpc. The largest excess was found for a circular region of angular size 18°. This region includes 10 out of the 60 events above 55 EeV in the data set of this analysis, while 2.44 are expected from isotropy [60]. At lower energies no excess was found. The cumulative number of events (with the background expectation subtracted) as a function of the angular distance from the direction of Cen A is shown in Fig. 4, together with 1-, 2- and 3σ bands representing fluctuations of the background. In the Northern sky, the TA collaboration has also observed some deviation from isotropy in the data set with E > 57 EeV at similar angular scales [61], in a direction about 20° from the Supergalactic plane with no evident astrophysical structures in the close vicinity. The corresponding sky map is shown in Fig. 5.
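For orientation, the local (pre-trial) chance probability of the Cen A region excess quoted above, 10 events observed where 2.44 are expected, follows from simple Poisson statistics. As stressed above, the penalty for the effective number of trials in the blind scan must still be applied before quoting a significance.

```python
# Local Poisson probability of the Cen A region excess: 10 observed events
# vs. 2.44 expected from isotropy in the 18-degree window.  This is the
# pre-trial value only; the blind-scan trial factor weakens it considerably.
from scipy.stats import poisson

mu, n_obs = 2.44, 10
p_local = poisson.sf(n_obs - 1, mu)   # P(N >= 10 | mu = 2.44)
print(f"local p-value: {p_local:.1e}")  # of order 1e-4 before trial correction
```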
The statistical significance of this TA "hot spot" has not been reported.

Search for point sources

If the UHECR composition is light and deflections are dominated by the Galactic magnetic field, or if the primary particles are neutral, one might expect that at the highest energies the arrival directions of UHECR events roughly point back to their sources. Because of the GZK cutoff, the propagation distance of trans-GZK events, i.e. events exceeding the GZK threshold, is limited to 50-100 Mpc. The number of potential UHECR sources in this volume is limited, and one may expect directional correlations between the positions of candidate sources and the CR event directions. This kind of analysis is complementary to the one described above in the sense that it is optimized for the situation when none of the sources is sufficiently bright to produce a significant hot spot (cf. the discussion above). The Auger collaboration has studied the correlation of the highest-energy events, above 55 EeV, with nearby Active Galactic Nuclei (AGNs) from the Véron-Cetty and Véron catalog (VCV) [63]. The parameters of the correlation (the energy threshold of 55 EeV, the maximum distance in the catalog of 75 Mpc and the maximum opening angle of 3.1°) were fixed from exploratory scans in an independent data set [64,65]. The latest result of this study [62] is presented in Fig. 6 (left), which shows the most likely fraction of correlating events plotted as a function of the total number of events, together with the 1, 2, 3σ bands which allow one to see how far the observed number of correlated events deviates from the expectation for an isotropic background. One can see that while in the early part of the data there was a substantial deviation from isotropy, with the accumulation of events the correlation strength has decreased to 33±5%, compared to 21% expected from isotropy. The statistical significance of the departure from isotropy has over this period remained almost constant, at a level between 2 and 3σ. The correlation with the same set of AGN, with the parameters fixed at the values set by the Auger analysis, has been studied by the HiRes collaboration [66], with a negative result, and by the TA collaboration [41]. The most recent update of the TA analysis is presented in Fig. 6 (right), which shows the number of correlating events as a function of the total number of events. There is a slight excess of correlating events over the expected background, compatible both with the background and with the latest update on the AGN correlations from Auger. The expectation from the latest Auger data [62] is depicted by the 1- and 2σ bands, which demonstrates an excellent agreement of the two data sets. The combined probability to observe such a correlation from an isotropic distribution is below p = 10^-3, still too large to draw firm conclusions.

Harmonic analysis

A standard tool in searches for medium- and large-scale anisotropies is the harmonic analysis. In the case of UHECR, the application of this method is limited by the incomplete sky coverage of the presently existing observatories, which cover either the southern (in the case of Auger) or the northern (in the case of TA) part of the sky. For this reason, not all components of the low multipoles can be extracted unambiguously from the data of a single experiment.
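The workhorse behind such searches is the classical first-harmonic (Rayleigh) analysis in right ascension, sketched below for an idealized uniform exposure; real analyses additionally correct for the actual exposure and for detector and weather effects. The amplitude and phase obtained this way are directly related to the equatorial dipole components discussed next.

```python
# First-harmonic (Rayleigh) analysis in right ascension, assuming a
# perfectly uniform exposure in RA (an idealization for illustration).
import numpy as np

rng = np.random.default_rng(1)
alpha = rng.uniform(0.0, 2.0 * np.pi, 5000)   # toy isotropic RA values [rad]

a = 2.0 / alpha.size * np.sum(np.cos(alpha))
b = 2.0 / alpha.size * np.sum(np.sin(alpha))
r = np.hypot(a, b)                            # first-harmonic amplitude
phase = np.degrees(np.arctan2(b, a))          # first-harmonic phase
p_iso = np.exp(-alpha.size * r**2 / 4.0)      # chance probability (isotropy)

print(f"r = {r:.4f}, phase = {phase:.1f} deg, P(>=r | iso) = {p_iso:.2f}")
```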
For instance, because of the (approximate) azimuthal symmetry of the exposure function, only the (x,y)-components of the dipole (in equatorial coordinates) can be obtained in a straightforward way by a single experiment. Results of a search for the equatorial dipole have been reported by the Pierre Auger collaboration [18,68]. Fig. 7 (left panel) shows the measured dipole amplitude as a function of energy; different analysis techniques have been used in different energy bins, as indicated in the plot. The measured amplitude of the dipole is consistent with expectations for an isotropic background. It is interesting to note, however, that the dipole amplitude is not the most sensitive observable [68], because of the energy binning and the related loss in statistics. Even when the dipole amplitude is not large enough to be detected, its phase may show regular behavior with energy, which would be an indication of a non-zero dipole. The right panel of Fig. 7 shows the phase of the dipole as a function of energy. One can observe that the values of the phase are correlated in adjacent energy bins, and the phase behavior with energy is consistent with a continuous curve. This may indicate the presence of a non-zero dipole in the Auger data whose amplitude is just below the detection threshold. The problem of incomplete sky coverage may be resolved by combining the data of the two observatories. This is not a straightforward procedure because of the uncertainty in the relative flux calibration, resulting mainly from possible differences in the energy scales of the experiments. The difficulty, however, may be overcome, and the corresponding analysis is presently underway [69]; a first all-sky UHECR intensity map was presented at the ICRC 2013, with no significant under- or overdensities found yet [70].

Large-scale anisotropy

If the deflections of UHECR do not exceed 10-20°, as in the case of a (predominantly) proton composition and small extragalactic magnetic fields, one should expect a correlation of UHECR arrival directions with the local large-scale structure (LSS). The largest correlations are expected at or above the GZK threshold energy, because in this energy range the propagation distance is limited to 50-100 Mpc and the contribution of the local structures is enhanced. With enough statistics, by checking such a correlation one may either discover it or put a lower limit on the UHECR deflections. With some assumptions about cosmic magnetic fields, this information may also help to understand the UHECR composition. The distribution of the UHECR flux expected in a generic model where sources trace the distribution of matter in the nearby Universe was calculated, e.g., in Ref. [71]. An improved version of this map, obtained using a larger catalog of galaxies, is presented in Fig. 8. This map was calculated assuming that the UHECR are protons of energy 57 EeV, smeared over an angular scale of 6°. The expected flux map may be compared to the actual UHECR distribution by making use of an appropriate statistical test (see, e.g., [71]). The results of this analysis using the latest TA data set are shown in Fig. 9 for two data sets, with E > 10 EeV and E > 57 EeV.
One can see that at low energies, E > 10 EeV, the data are compatible with isotropy and incompatible with the LSS model for all but the largest smearing angles. At high energies, on the contrary, the data are compatible with the structure and not compatible with isotropy (the latter may be another manifestation of the "hot spot" discussed above). A similar analysis has been performed using the first 69 publicly released Auger events [72] with energies E > 55 EeV. It was found that the correlation of the Auger events with the LSS prediction is larger than expected in the isotropic model, but smaller than in the model where the UHECR sources follow the matter distribution in the Universe.

Other searches

If galactic TeV gamma-rays originate from energetic protons suffering pion-production interactions with ambient photons, protons, or nuclei, one should expect that neutrons are also produced. At energies higher than 10^18 eV, neutrons can reach us from large parts of the Galaxy before they decay (their decay length is about 9.2 kpc × E/EeV). Since neutrons are not deflected by magnetic fields, they should point back to their sources. The Pierre Auger Collaboration has performed a dedicated search for Galactic sources of neutrons [73]. Several classes of sources were considered, such as H.E.S.S. TeV sources, several classes of pulsars, microquasars, and magnetars. These sources were stacked in their respective classes. The search window was set to the angular resolution of the detector. In addition to these sources, the Galactic plane and the Galactic Center were considered as possible sources. The advantage of this analysis over a blind search is that the penalty for trials is substantially reduced. No statistically significant excess was detected for any of the catalogs, including the Galactic plane and the Galactic Center. In a related analysis [74], a search for point sources of EeV photons was performed. With no photon point source detected, upper limits on the photon flux have been derived for every direction within the Auger exposure map. None exceeds an energy flux of 0.25 eV cm^-2 s^-1 in any part of the sky, assuming a photon flux following E^-2. These limits are of considerable astrophysical interest, because the energy flux in TeV gamma rays exceeds 1 eV cm^-2 s^-1 for some Galactic sources with a differential spectral index of -2 [75].

Conclusions and Outlook

To summarize, the new generation of experiments - the Pierre Auger Observatory and the Telescope Array - has been constructed and operated in the last decade. Both experiments proved the advantage of the hybrid detector design, in which fluorescence telescopes are combined with a ground array of detectors. The former are used for calorimetric energy measurements and for calibrating the ground array energy scale, while the ground array takes advantage of its 100% duty cycle to accumulate large statistics. As a result, the uncertainty in the energy estimate has been reduced to well below 20%, and a more than 10-fold increase in statistics has been achieved. This has led to a number of important advances. First, the features in the UHECR energy spectrum - the ankle and the suppression at the highest energies - have been established beyond doubt. The spectral slopes before and after the ankle have been measured to the second digit and agree between the two experiments. The positions of the ankle also agree within the quoted errors and are compatible with the existing model(s).
The parameters of the break at the highest energies are known less accurately. There seems to be some discrepancy concerning the shape of the spectrum around the break, but more statistics are needed for a firm conclusion. The position of the break is compatible with the GZK cutoff for protons, but other explanations are also possible. The substantial increase in statistics has allowed one to put stringent constraints on the previously claimed deviations of the arrival directions from the isotropic distribution. This concerns the clustering of the UHECR events, as well as their correlations with different classes of putative sources. Unfortunately, no significant deviation from isotropy has been confirmed yet. As far as the mass composition of UHECR is concerned, the situation is less definite, and a consistent picture has not yet emerged. While the Pierre Auger Observatory sees a change of the composition towards a heavier one at the highest energies, the TA observes no such trend and is compatible with a pure proton composition. This difference in the data has profound consequences: the Auger data suggest that we see the maximum energy of sources, similarly to what is observed at the knee in the cosmic ray spectrum, while the TA data suggest we observe the GZK effect. Seeing the GZK effect would naturally allow one to interpret the ankle in terms of e⁺e⁻ pair-production losses in the CMB, while the maximum-energy scenario relates the ankle to the transition from galactic to extragalactic cosmic rays. The hard injection spectra required by the maximum-energy model would either call for non-standard acceleration processes or require a contribution of nearby sources to the all-particle flux. Moreover, the different compositions in the GZK and maximum-energy scenarios will affect the level of anisotropies expected to be seen in the data. As already mentioned, a pure proton composition up to the highest energies starts to conflict with the highly isotropic UHECR sky, unless extremely strong galactic and extragalactic magnetic fields are assumed. Thus, despite the major advances, a number of key questions remain open: (i) a more accurate absolute energy calibration is needed to clarify the physical interpretation of the ankle and the high-energy break in the spectrum; (ii) the apparent differences in the observed mass composition at the highest energies need to be understood; a more accurate modeling of air showers may be required for that, in addition to a better understanding of systematic biases; (iii) the apparent absence of anisotropies, especially at the highest energies, has to be reconciled with the mass composition and our knowledge of the cosmic magnetic fields and the existing source models. An important lesson from the existing picture is that the above open problems are closely interrelated. It is not inconceivable that a breakthrough in one of these questions will lead to the understanding of the others, and finally to the emergence of a consistent picture of UHECR. The next advance in the experimental techniques, presently being prepared by both collaborations, is therefore likely to be the last crucial step in our understanding of the nature and origin of these highest-energy particles ever observed in Nature.
Enhanced Formation of Azoxymethane-induced Colorectal Adenocarcinoma in γδ T Lymphocyte-deficient Mice

T cell receptor (TCR) γδ-positive T lymphocytes, which are localized mostly within the intraepithelial space of the intestinal epithelium, have been suggested to play a role in maintaining the normal configuration of the intestinal epithelium. However, the role of TCRγδ-positive T lymphocytes in the formation and progression of colorectal adenocarcinoma, which originates from colorectal epithelial cells, remains to be elucidated. In this study, TCRαβ- and TCRγδ-positive T lymphocyte-deficient mice (homozygous TCRα- and TCRδ-gene knockout mice) and the background wild-type mice were administered azoxymethane, and the formation of macroscopic tumors and microscopic aberrant crypt foci in the colorectal mucosa was compared among the three types of mice. Well-differentiated adenocarcinoma appeared 5 months after 5 administrations of azoxymethane (10 mg/kg body weight) only in a few TCRδ-gene knockout mice, and the frequency of carcinoma-bearing mice increased at 7 and 9 months after the administration. Aberrant crypt foci were also detected in the colorectal mucosa of TCRδ-gene knockout mice to a greater extent than in the colorectal mucosa of wild-type mice 1 month after the azoxymethane administration. These results suggest that TCRγδ-positive T lymphocytes, which are present mainly in the intraepithelial space, play a role in suppression of the formation and progression of colorectal adenocarcinoma in mice.

Previous research has demonstrated the presence of lymphocytes in the space between intestinal epithelial cells (intestinal intraepithelial lymphocytes: iIEL), and most of the iIEL were shown to carry T cell receptor (TCR) γδ molecules (γδ T cells) on the surface. 1) γδ T cells have been suggested to differentiate in the intestinal mucosa 2) and to play a unique role in concert with surrounding cells such as helper T cells, cytotoxic T cells (CTL), macrophages, natural killer (NK) cells and epithelial cells. 3,4) γδ T cells have been reported to secrete a cytokine which supports the elimination of impaired epithelial cells and to maintain the normal configuration of the intestinal epithelium. 5,6) However, the precise functions of γδ T cells have not been elucidated. Some recent studies have described the effect of γδ T cells on intestinal tumor cells. γδ T cells regulated the cytotoxic activities of NK cells and CTL whose targets were intestinal tumor cells. 7) The frequency of tumor-infiltrating γδ T cells was lowered in human well-to-moderately differentiated colorectal adenocarcinoma, 8) suggesting a relationship between γδ T cells and the formation and progression of colorectal adenocarcinoma. Notably, γδ T cells in which the variable region on the TCRδ chain is Vδ1 (Vδ1-γδ T cells) were reported to be predominant in iIEL, to recognize the major histocompatibility complex class I-related molecules A and B (MICA and MICB) and to have cytotoxic activity against intestinal tumor cells. 9) The above report also suggests that γδ T cells in iIEL may affect the formation and progression of colorectal adenocarcinoma. The present study was designed to examine the effect of γδ T cells on the formation and progression of chemically induced colorectal adenocarcinoma using γδ T cell-deficient mice.
Azoxymethane (AOM) was administered to wild-type C57BL/6 mice, αβ T cell-deficient mice (TCRα-gene knockout mice) and γδ T cell-deficient mice (TCRδ-gene knockout mice), and the formation of macroscopic tumors and microscopic aberrant crypt foci in the colorectal mucosa was examined. The effect of γδ T cells on the formation and progression of colorectal adenocarcinoma is discussed in the light of the results of the present study.

MATERIALS AND METHODS
Mice Homozygous TCRα-gene knockout mice, 10) homozygous TCRδ-gene knockout mice 11) and wild-type C57BL/6 mice (the background strain of the knockout mice) were purchased from Jackson Laboratories (Bar Harbor, ME). The mice were bred and maintained in a clean air system (EBAC-S, Clea Japan Inc., Tokyo). Two- to 3-month-old mice (body weight: 15-20 g) were used for experiments.
Induction of colorectal tumors and aberrant crypt foci For the induction of colorectal tumors, mice were administered AOM (10 mg/kg body weight) intraperitoneally (i.p.) once a week for 1-5 weeks. In the preliminary experiments, 4-5 administrations of AOM were required for the formation of colorectal tumors, and 5 administrations of AOM were used in further experiments. The colorectal part was resected at 3.5, 5, 7 and 9 months after the last administration, and the formation of macroscopic tumors in the mucosa was examined. The macroscopic tumor and a small piece of colorectal tissue were cut out and fixed in 10% formalin. Thin sections of paraffin-embedded tissues were analyzed under a microscope after staining. For the induction of aberrant crypt foci (ACF), mice were administered AOM (10 mg/kg body weight) once a week for 1-4 weeks. The formation of ACF was examined 1-1.5 months after the last administration by the method previously described. Briefly, the oral terminus of the colon and the anal terminus of the rectum were knotted with strings and the whole colorectal part was resected. Then, 10% formalin was infused into the intestinal cavity with a syringe and left for 5 min. After the fixation, the mucosal side of the colorectal tissue was exposed by cutting one side of the tissue longitudinally. The tissue was spread on filter paper and fixed again with 10% formalin overnight. The surface of the fixed colorectal mucosa was stained with 0.2% methylene blue in 0.1 M phosphate buffer (pH 7.4), and the number of ACF was counted under a microscope at low magnification.
Morphological analysis of colorectal tissue Whole colorectal tissue was obtained from each mouse, and small pieces were resected from the normal and tumor parts of the tissue. The pieces were fixed with 10% formalin or periodate-lysine-2% paraformaldehyde (PLP) fixatives. 12) Thin sections prepared from formalin-fixed tissue were stained with hematoxylin and eosin (HE). Thin sections prepared from formalin-fixed and PLP-fixed tissues were stained by enzyme immunostaining (LSAB kit, DAKO Japan, Kyoto) using monoclonal antibodies against proliferating cell nuclear antigen (PCNA; PC10, DAKO, Glostrup, Denmark) and mutated p53 (Ab-3, Oncogene Res. Prod., Cambridge, MA).
Formation of macroscopic colorectal tumors The formation of macroscopic colorectal tumors was dependent on the frequency of AOM administration and the duration after administration. Colorectal tumors were observed in a few TCRδ-gene knockout mice 5 months after 5 administrations, but not 3.5 months after 3 administrations.
Additionally, tumors were observed only in TCRδ-gene knockout mice 5 months after 5 AOM administrations, and the frequency of tumor-bearing mice and the number of tumors per mouse had increased at 7 and 9 months after the last administration (Table I). All tumors were 1-5 mm in diameter and projected into the mucosal cavity of the descending colon and rectum (Fig. 1B). In contrast, TCRα-gene knockout mice and wild-type C57BL/6 mice showed no macroscopic tumor formation by 9 months after AOM administration (Fig. 1A).
Morphological findings of colorectal tumors Morphological analysis of the colorectal tumors revealed well-differentiated adenocarcinoma. The tumor cells were similar in size and formed a villous structure, with destruction of the normal configuration (Fig. 2, A and B). The large nuclei of the tumor cells were stained with anti-PCNA antibody more intensely than the nuclei of the neighboring non-malignant cells (Fig. 2C). The tumor cells were also stained by the monoclonal antibody against mutated p53.
Formation of ACF Because ACF formation, which appears within a few weeks after AOM administration, is suggested to be the first step of colorectal carcinogenesis, ACF formation was examined in each group at 1 or 1.5 months after 1 to 4 AOM administrations (Table II). The resected colon was fixed with 10% formalin and stained with methylene blue. Using this method, the ACF were clearly observed under a microscope at low magnification (Fig. 3). The ACF were observed mostly in the mucosa of the descending colon and rectum. A few ACF were observed in some TCRδ-gene knockout mice 1.5 months after AOM administration, and the frequency of ACF-bearing mice and the number of ACF per mouse increased with increasing frequency of AOM administration. A total of 70%, 70% and 100% of the TCRδ-gene knockout mice bore 1 to 16 ACF one month after 2, 3 and 4 AOM administrations, respectively. The number of ACF in TCRδ-gene knockout mice was highest at 1 month after AOM administration and slightly decreased thereafter (data not shown). ACF were also observed in wild-type mice and TCRα-gene knockout mice 1 month after 3-4 administrations, but the frequency of ACF-bearing mice and the number of ACF per mouse were much lower than in TCRδ-gene knockout mice (Table II).
DISCUSSION
In the present study, the formation of colorectal adenocarcinoma was observed only in TCRδ-gene knockout mice (γδ T cell-deficient mice), and more ACF were detected in these mice. The results suggest that γδ T cells might play a role in the suppression of colorectal adenocarcinoma formation and progression. The difference between TCRδ-gene knockout mice and TCRα-gene knockout mice appeared not to be caused by the methods involved in preparing the knockout mice. Both knockout mice used in the study originated from the same inbred mouse strain, C57BL/6, the materials used to prepare the knockout mice were almost identical, and the other subsets of T cells were left almost intact in both types of knockout mice. 10,11) Many dietary carcinogens and pathogens which affect the formation and progression of colorectal adenocarcinoma have been reported. 13) In the present study, mice were given ad libitum access to the same lab chow and water, and were bred and maintained in the same isolator. Thus, the difference is not likely to be due to these factors. The major cells affecting colorectal adenocarcinoma cells have been reported to be cytotoxic T lymphocytes, NK cells and activated macrophages.
The major tumor-infiltrating cells of colorectal adenocarcinoma have been reported to be αβ T cells and activated macrophages. 14) In contrast, the function of γδ T cells in the formation of colorectal tumors has not been precisely analyzed. We reported that the frequency of γδ T cells present in colorectal adenocarcinoma tissues was lower than that in normal iIEL. 8) These results suggest that γδ T cells might also be related to the formation and progression of colorectal adenocarcinoma. Recent reports demonstrated that γδ T cells play a role in the repair of damaged intestinal epithelial cells 5,6) and have the capacity to recognize certain antigens, such as special antigens on the surface of tumor cells and some superantigens, and that their cytotoxic activity is enhanced by TNFα. 15-19) Notably, it was reported that Vδ1-γδ T cells, which are predominant in the human intestinal mucosa, recognized the major histocompatibility antigen-related molecules MICA and MICB, which are expressed on the surface of intestinal epithelial cells. The expression of these molecules was enhanced in colon tumor cells. Thus, Vδ1-γδ T cells recognize the colorectal tumor cells through MICA/MICB and show cytotoxic activity against the tumor cells. 9) Taken together, γδ T cells in iIEL appear to eliminate aberrant intestinal epithelial cells, such as transformed cells and microbe-infected cells, and thus play an important role in immune surveillance at the intestinal mucosa to maintain the normal configuration of the intestinal epithelium.
[Table II footnote, displaced in extraction: Each mouse group was given AOM (10 mg/kg) intraperitoneally once a week for 1-4 weeks. Formation of ACF was compared 1.5 or 1 month after the last administration between groups with AOM administration (AOM (+)) and without AOM administration (AOM (−)). Fractions indicate the number of ACF-bearing mice in the total number of mice treated. The numbers in parentheses show the mean number of ACF per group and the number of ACF on each ACF-bearing mouse. ND: not done.]
[Fig. 3 caption, displaced in extraction: Aberrant crypt foci in the colorectal mucosa of a TCRδ-gene knockout mouse. Colorectal tissue was obtained from a TCRδ-gene knockout mouse 1 month after AOM administration and was fixed with 10% formalin. The surface of the tissue was stained with 0.2% methylene blue and the mucosal surface was observed under a microscope at low magnification (×25).]
In contrast, Egawa et al. reported that γδ T cells isolated from mouse spleen cells secrete a factor which inhibits the activity of CTL and NK cells, and thus these γδ T cells enhance the formation of colorectal tumors. 20,21) In the present study, γδ T cells in iIEL were suggested to act directly on aberrant intestinal cells rather than to modulate the activity of CTL and NK cells, because αβ T cell-deficient mice did not form any macroscopic tumor. The discrepancy may reflect the diversity of γδ T cells. γδ T cells have been reported to express different Vγ regions according to their sites of localization. The γδ T cells which are present in intestinal intraepithelial spaces express exclusively Vγ7 or Vγ1, while those in the circulation/lymph nodes/spleen express diverse Vγ, and the processes of differentiation and their functions have been suggested to be different. 22) In connection with this, the difference in target tumor cells (AOM-induced intestinal adenocarcinoma cells in the present study versus mammary carcinoma and hepatoma cells in their report) may be responsible for the discrepancy.
Interestingly, human γδ T cells in the peripheral blood of patients showed direct tumoricidal activity against glioblastoma cells. 23,24) Further experiments are required to distinguish the functions of γδ T cells in the intestinal mucosa and in the circulation/lymph nodes/spleen in colorectal adenocarcinoma formation. Improper functioning of γδ T cells, or their reduced frequency due to such factors as aging, malnutrition and chemical carcinogens, may result in the formation and progression of colorectal adenocarcinoma in humans. Alternatively, the transformed cells may secrete factors that negate the cytotoxic activity of γδ T cells. In the present study, γδ T cells were suggested to suppress the formation and progression of colorectal adenocarcinoma. They may influence the pathogenesis of colorectal adenocarcinoma and may also be useful for the prevention and immunotherapy of colorectal adenocarcinoma. However, further analysis is required to confirm this hypothesis.
Rupture of Testicular Tunica Albuginea: A Urological Emergency

Background: Road traffic accident, commonly abbreviated as "RTA", is a leading cause of morbidity and mortality worldwide. However, for a urologist, an "RTA" could also imply "Rupture of the testicular Tunica Albuginea", with equivalent, if not higher, psychosocial, anatomical and hormonal morbidity and/or mortality. Blunt or penetrating trauma, or degloving injuries, may lead to an RTA, with extrusion of the testicular seminiferous tubules, mandating an early diagnosis and prompt intervention in order to prevent future complications. Method: A thorough PubMed search was conducted with the terms "testicular rupture" and "tunica albuginea rupture", and all English-language articles with these terms in the title or abstract were included in this review. Results/Conclusion: The following review highlights this urological emergency as an important differential for an "acute scrotum" and provides an insight into the currently available literature documenting testicular ruptures, as well as the various diagnostic modalities and management practices. Additional food for thought remains the need for long-term follow-up of these patients, in order to assess for hypogonadism or infertility, as well as the need to understand the role of the "blood-testis barrier" and the possible implications of its breach, with auto-antibody production.

Testicular Rupture
Testicular rupture, by definition, is a breach or tear in the tunica albuginea, resulting in extrusion of the testicular contents, including the seminiferous tubules [1].
Types of Rupture
Based on their mode of trauma, testicular injuries have been classified into 3 types: blunt trauma injury; penetrating trauma injury; and degloving injury [2]. Any of these mechanisms may be responsible for testicular rupture and extrusion of intratesticular contents.
Mechanism of Injury
Blunt trauma usually involves a fierce blow to the testicle, forcing it against the thigh or pubic bone, with a resultant intraparenchymal bleed and rupture of the tunica albuginea [3]. As per previous studies, a force of around 50 kg or more is needed to breach this "holy barrier" of the tunica albuginea [3]. Penetrating testicular trauma and rupture usually have a different mechanism of injury [4,5]. Assaults on account of gunshots, stabbings, war injuries, straddle injuries and especially bomb blasts are most commonly implicated as the causative factors [5]. However, these are usually associated with concomitant multi-organ involvement [5]. Degloving injuries associated with testicular rupture are rare and usually associated with animal bites (mainly dogs), with multiple soft tissue and bony injuries alongside [6,7]. Sports injuries and fisticuff injuries, where patients have been directly struck on the groin, account for nearly 50% of all testicular ruptures [8,9].
Clinical Presentation and Diagnosis
Testicular rupture is, in itself, a urological emergency, and it is reported that more than 90% of ruptured testes can be salvaged if identified and explored early [4]. A thorough history and physical examination are the cornerstones of early diagnosis of testicular rupture injuries [9]. Symptomatically, there may be no difference from other causes of acute scrotum, and only a history of trauma prior to the development of symptoms may act as a pointer towards the diagnosis [2]. Most patients would present with a history of trauma in the recent past with pain, redness and increasing scrotal size [10].
Clinical examination would reveal tenderness, ecchymosis and swelling of the affected hemiscrotum, with the testis often not palpable due to extreme tenderness, or masked by an expanding hematoma or complete traumatic dislocation [10]. More often than not, it is nearly impossible to conduct an adequate examination of the scrotum in the presence of a hematoma and exquisite tenderness, and this difficulty argues against defaulting to "conservative treatment" for this entity.
Investigations
In such an acute scrotum, with severe scrotal edema, swelling and tenderness, it may not always be possible to examine the testis clinically. Herein lies the importance of a testicular ultrasound. Ultrasound of the scrotum is now considered the first-line investigation for suspected testicular rupture, with specificity and sensitivity rates comparable to physical examination [4,10]. Heterogeneous testicular echotexture and discontinuity of the tunica albuginea are the ultrasound parameters that are markers for rupture (fig. 1, fig. 2) (sensitivity: 100%; specificity: 93.5%) [4,10]; a worked example of what such figures imply for post-test probability is given below. In addition to diagnosis, ultrasound also aids in decision making. Buckley et al. [4] have reported that they depend on the initial size (clinical or sonographic) as well as hematoma dynamics in order to consider surgical intervention or opt for conservative measures. In their retrospective review of 65 patients, 32 patients had a scrotal ultrasound suggestive of a testicular rupture (heterogeneous echotexture), and all of these underwent immediate surgical exploration, with 30/32 having an actual rupture [4]. Color flow and duplex Doppler help assess testicular vascularity and viability as an adjunct to routine ultrasound [10]. These too may help in the decision between conservative and surgical management. Testicular MRI or CT scans do not provide much additional information; however, if doubt persists despite an adequate ultrasound, these would be the second-line imaging modalities to resort to [4,10-13].
Management
Management of testicular rupture depends on the history, duration of injury, preoperative imaging and intraoperative appearance of the affected testis, as well as the condition of the opposite testis [4]. Conservative management, with non-steroidal analgesics, local ice packs and testicular elevation, is recommended in insignificant testicular injuries without any signs of a hematocele, as well as in cases where a hematocele may be present but is smaller than three times the size of the contralateral testis [10,14,15]. Whether one can rely only on the clinical findings, though, remains less well understood. In a documented testicular rupture, immediate surgical exploration, debridement and excision of the devitalized testicular tissue, with closure of the tunica albuginea and preservation of as much of the viable testicular tissue as possible, forms the mainstay of treatment (fig. 3-5) [16]. Where approximation of the tunica albuginea is not possible due to extensive tissue loss, tunica vaginalis flaps can be used to bridge the gap and preserve as much of the testicular parenchyma as possible [17]. If the testis, on exploration, is found to be non-viable, with no possibility of tissue preservation, or if there is extensive necrotic tissue, an orchidectomy may be mandated, irrespective of the status of the contralateral testis [10].
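To make the quoted test characteristics concrete, they can be translated into predictive values with Bayes' rule; the pre-test probability p used here is purely illustrative (suppose half of the scanned trauma patients truly have a rupture), while the sensitivity of 100% and specificity of 93.5% are the figures reported above:

\[
\mathrm{PPV} = \frac{\mathrm{sens}\cdot p}{\mathrm{sens}\cdot p + (1-\mathrm{spec})(1-p)}
= \frac{1.00 \times 0.5}{1.00 \times 0.5 + 0.065 \times 0.5} \approx 0.94,
\qquad
\mathrm{NPV} = \frac{\mathrm{spec}\,(1-p)}{\mathrm{spec}\,(1-p) + (1-\mathrm{sens})\,p} = 1.00.
\]

Under these figures a negative scan is maximally reassuring, while roughly 1 in 16 positive scans would be a false positive at the assumed prevalence, in line with the 30/32 confirmed ruptures among positive scans in the series of Buckley et al. [4].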
A delay in presentation or surgical exploration has its own hazards, including chronic pain, superadded infection, testicular atrophy, impairment of hormonal status, as well as an increase in long-term orchidectomy rates [4,10]. Successful repair has been documented in 90% of cases presenting within 72 hours of injury, dropping to 45% once this window has elapsed [1]. There have been proponents of surgical exploration in all cases, as well as those opting for conservative management initially and then reverting to surgery in the absence of significant clinical improvement. The optimal strategy, though, is still not known, and the path to be trodden has to be chosen by the surgeon, keeping in mind the possible pros and cons of the decision. As long as the contralateral testis remains functional, there is no need for hormone replacement (testosterone supplementation). Infertility, if seen, occurs only in cases of bilateral testicular injuries warranting bilateral orchidectomies, and in such patients lifelong testosterone supplementation is necessary [10,18].
For the Future
Follow-up of patients with testicular rupture may be mandated to assess the extent to which late-onset hypogonadism may actually occur following such an injury. The blood-testis barrier, also known as the Sertoli cell-seminiferous epithelium barrier [19,20], provides a specialized micro-environment for meiosis I and II, as well as spermiogenesis and spermiation, to take place [21]. In addition, it also has an important immunological function that cannot be even transiently compromised, so as to avoid the production of auto-antibodies against the germ cells [21]. Francavilla et al. [22] have even stated that these anti-sperm antibodies may lead to male infertility in the future. Whether this barrier is damaged by testicular rupture, leading to auto-antibody formation, and whether this is a likely cause of late-onset hypogonadism, needs to be explored in the future. The implications could be in terms of ensuring long-term follow-up of such patients, as well as keeping this entity in mind in subsequent cases of idiopathic infertility.
Enhancement of Vacuum Gas Oil Viscosity Using Ultrasound

Ultrasonic treatment is a suitable method for refinery processes. Acoustic cavitation is a technique that allows high levels of energy to be released into the liquid, which leads to changes in fluid properties such as a decrease in viscosity. Additionally, it is an effective way to improve the economic feasibility of physicochemical processing to enhance the quality of the product. In this work, vacuum gas oil with a viscosity of 8.4 cSt, provided by Iraqi refineries, was treated by ultrasound radiation, and the effect of several parameters on viscosity was studied: sonication time (5, 10, 15, 20, 30) min, power amplitude (10, 20, 30, 40, 50) watt, and frequency (20, 30, 40, 50) kHz. It was found from the results that the viscosity decreased from 8.4 cSt to 5.82 cSt, which represents a percentage reduction of up to 30.7% compared to the value before treatment. This result was obtained after 30 min; 50% of the ultrasound power was the most appropriate for reducing the viscosity, and the experiments showed that a 20 kHz ultrasound frequency has a decreasing effect on the viscosity, with the reduction reaching 30%.

Upgrading consists of a set of techniques through which the hydrogenation of the molecules is achieved by the addition of hydrogen, which results in a lighter synthetic crude oil [10]. Classic crude upgrading methods applied in surface operations include dilution and heating, which generally involve a very high investment in equipment and complex infrastructure, resulting in an increase in capital and operating costs [11]. There are also emerging technologies, which are based on principles such as viscosity reduction, chemical changes, and friction reduction between the pipe and the fluid. Some are relatively developed, while others are still in the study and application stages [12]. Some methods involve catalytic cracking using ionic liquids, as well as the use of nanoparticles to improve the properties of crude oil. The emerging technologies for crude oil upgrading are classified into four categories: i) hydrogen addition, ii) carbon rejection, iii) extraction, and iv) ultrasound [13]. Pressure changes produce small bubbles or cavities, which violently expand and collapse, sending millions of shock waves into the liquid [18]. The aim of the research is to improve the physical and chemical characteristics of vacuum gas oil, such as viscosity and density, by means of ultrasound technology, and to evaluate the optimum parameters (power amplitude, sonication time in minutes, and frequency).
Feedstock
The feedstock in this study was vacuum gas oil, obtained from Iraqi refineries. The properties of the vacuum gas oil are given in Table (1) (boiling range: 299-538 ºC).
Instruments
The instruments used in this research were an ultrasound device and a viscometer. The ultrasound device is the first important apparatus employed in these experiments. It is a powerful ultrasonic wave processor, with programmable action and a numerical display of the running parameters [20]. It has a frequency of 20 kHz or more and a maximum ultrasound power of 100-1200 watts. Table (2) shows the ultrasound specifications.
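For clarity, the percentage reductions quoted in the results below are simple relative changes with respect to the untreated viscosity; a minimal sketch using the before/after values reported in the abstract reproduces the headline figure:

```python
def percent_reduction(nu_before: float, nu_after: float) -> float:
    """Viscosity reduction relative to the untreated value, in percent."""
    return 100.0 * (nu_before - nu_after) / nu_before

# Reported viscosities (cSt) before and after 30 min of sonication
# at 50% power and 20 kHz:
print(f"{percent_reduction(8.4, 5.82):.1f}%")  # -> 30.7%
```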
Treatment of gas oil
In this series of tests, an ultrasonic horn reactor was used to treat the vacuum gas oil. A fixed volume of vacuum gas oil (150 mL) was put in a 250 mL beaker and subjected to ultrasonic radiation for (5, 10, 15, 20, and 30) minutes, at a power of 100 to 500 watts and a frequency of 20 to 50 kHz. The oil samples were exposed to the ultrasonic radiation and then cooled to the ambient temperature of 25 °C. Following the conclusion of each program, the characteristics of the vacuum gas oil were evaluated.
Effects of power amplitude
Figures (2) to (6) show the relationship between power and the viscosity of the gas oil over different time periods. From the results, it is clear that the decrease in viscosity increases with an increase in power, the best result being obtained at 50% power. These results can be interpreted as a consequence of the mechanical churning and cavitation produced by the ultrasonic processing. The gas oil molecules experienced a variety of changes, and the characteristics of the typical molecular structure were affected by microstructural changes [22]. The experiments indicate that the cavitation phenomena become more intense as the ultrasonic power increases, resulting in a greater viscosity-reduction effect for the vacuum gas oil. Beyond a certain amount of ultrasonic power, cavitation becomes saturated, making the impact of the ultrasonic waves on lowering the gas oil viscosity more stable. The same results were obtained by [23].
Effects of frequency
Figure 7 shows the relationship between frequency and the viscosity of the gas oil. The experiments proved that increasing the frequency reduces the effect on viscosity. This is due to the ultrasonic cavitation effect, in which the fluid produces a nucleated cavity during the expansion phase of the sound wave, so it is preferable to select a lower frequency. The best result was achieved at a frequency of 20 kHz and a power of 50%, which led to a reduction in viscosity of 30% after 30 min. The rheology of the liquid is impacted by the explosion that occurs when the hollow core is crushed by the sound wave; this explosion produces high temperatures and high pressures in an instant. The time available for the sound wave to expand and for the cavitation bubble to develop and collapse is decreased by increasing the ultrasonic frequency [24], which prevents the growth and collapse of cavitation bubbles. Therefore, when using ultrasound to reduce the viscosity of vacuum gas oil under otherwise fixed conditions, a lower frequency should be chosen. As shown in Table (4), the viscosity of the vacuum gas oil was reduced by 30.7%, since the viscosity decreases with increasing time and power. This finding demonstrates how effectively viscosity may be decreased using ultrasound. Viscosity is a measure of an oil's internal resistance to flow and shear and is determined by adding the contributions of the dispersion medium and the dispersed phase [26]. It depends on how the dispersed phase interacts with the dispersing medium, the particle form and size, and the interactions within that phase [27].
Conclusion
The following conclusions may be drawn from the results of this study: 1. Ultrasound is a developing technology that may be employed in oil production. It is critical to achieve a large drop in viscosity at this point in the oil chain. Ultrasound treatment may be used to enhance the qualities of Iraqi vacuum gas oil in order to minimize viscosity, but only if the final properties for the best model are established via trials. 2.
The viscosity decreases with increasing power and duration, but too long a sonication time increases the cost of operation and shortens the life of the sonic device. Therefore, the ideal sonication time for vacuum gas oil is 30 min. 3. Low frequency and high power are predicted to be the future trends in ultrasonic industrial development. The lower the frequency, the lower the acoustic cavitation threshold and the higher the compression ratio of the cavitation bubbles. The lower the frequency, the lower the acoustic energy attenuation coefficient and the greater the penetration distance. Furthermore, significant power is necessary to generate substantial acoustic cavitation. 4. It is regarded as a helpful approach for oil refineries, since it is a safe, affordable, and effective method that can be applied immediately without the need for additional equipment. 5. During the procedure, no associated gases are released.
National clinical and financial outcomes associated with acute kidney injury following esophagectomy for cancer

Background: Esophagectomy is a complex oncologic operation associated with high rates of postoperative complications. While respiratory and septic complications have been well defined, the implications of acute kidney injury (AKI) remain unclear. Using a nationally representative database, we aimed to characterize the association of AKI with mortality, resource use, and 30-day readmission. Methods: All adults undergoing elective esophagectomy with a diagnosis of esophageal or gastric cancer were identified in the 2010-2019 Nationwide Readmissions Database. Study cohorts were stratified based on the presence of AKI. Multivariable regressions and Royston-Parmar survival analysis were used to evaluate the independent association between AKI and the outcomes of interest. Results: Of an estimated 40,438 patients, 3,210 (7.9%) developed AKI. Over the 10-year study period, the incidence of AKI increased from 6.4% to 9.7%. Prior radiation/chemotherapy and minimally invasive operations were associated with reduced odds of AKI, whereas public insurance coverage and concurrent infectious and respiratory complications carried greater risk of AKI. After risk adjustment, AKI remained independently associated with greater odds of in-hospital mortality (AOR: 4.59, 95% CI: 3.62-5.83) and with significantly increased attributable costs ($112,000 vs $54,000) and length of stay (25.7 vs 13.3 days) compared to patients without AKI. Furthermore, AKI demonstrated a significantly increased hazard of 30-day readmission (hazard ratio: 1.16, 95% CI: 1.01-1.32). Conclusions: AKI after esophagectomy is associated with greater risk of mortality, hospitalization costs, and 30-day readmission. Given the significant adverse consequences of AKI, careful perioperative management to mitigate this complication may improve the quality of esophageal surgical care at the national level.

Introduction
Surgical resection, with or without neoadjuvant chemoradiotherapy, is a mainstay of treatment for esophageal and gastroesophageal junction cancers [1]. Significant advances in surveillance programs, perioperative management, and multimodal treatment regimens have helped improve postoperative outcomes, with 5-year survival nearing 50% [2]. Despite such progress, esophagectomy remains a high-risk procedure regardless of operative approach, with reported complication rates ranging from 40-60% [3,4]. Importantly, pneumonia, anastomotic leak, and infectious complications after esophagectomy have been linked to early cancer recurrence and reduced long-term survival [5,6]. More recently, acute kidney injury (AKI) and its associated outcomes have garnered significant attention as an adverse event across many operations [7-9]. In the context of esophagectomy, a single-center study of 898 patients in the US reported an incidence of 11.9%, whereas a multi-center study of 1,135 patients in the UK and Ireland reported postoperative AKI in 18.3% of patients [10,11]. Age, preoperative renal insufficiency, operative time, and perioperative blood transfusions have all been cited as risk factors associated with AKI [10,12,13]. With prior studies limited in sample size and primarily focused on identifying risk factors, the impact of AKI on postoperative outcomes at the national level has yet to be characterized.
The present study characterized the incidence, risk factors, and in-hospital outcomes associated with AKI among a contemporary national cohort of patients receiving esophagectomy for cancer. We hypothesized AKI to be independently associated with increased odds of index mortality, hospitalization costs, length of stay, and readmission.

Methods
This was a cross-sectional study using the 2010-2019 Nationwide Readmissions Database (NRD). Maintained by the Healthcare Cost and Utilization Project (HCUP), the NRD is the largest publicly available all-payer readmissions database in the United States [14]. Hospital discharge data in the NRD are collected from individual State Inpatient Databases, which contain deidentified unique patient linkage numbers used to track patients across hospitals within a state. Each sampled institution has assigned discharge weights allowing for survey-weighted national estimates of 36 million discharges each year, representing approximately 60% of all hospitalizations in the United States [14]. All elective adult hospitalizations (≥18 years) for esophagectomy with a diagnosis of esophageal or gastric cancer were identified using relevant International Classification of Diseases 9th/10th Revision (ICD-9/10) diagnosis and procedure codes (S1 Table). Patients were stratified into the AKI group if specific diagnostic codes for acute kidney injury (584, N17) were present (otherwise, the no-AKI group, nAKI). Patients with a history of end-stage renal disease or chronic dialysis dependence were excluded.

Patient and hospital characteristics, including age, sex, income quartile, primary payer, and hospital teaching status, were defined using the HCUP Data Dictionary [14]. History of radiation/chemotherapy and comorbidities such as diabetes, hypertension, chronic kidney disease stages 1-5, lung disease, liver disease, congestive heart failure, pulmonary circulation disorders, and neurologic disorders were identified using ICD-9/10 diagnosis codes (S1 Table). The Elixhauser Comorbidity Index, a validated composite of 30 comorbidities, was additionally used to quantify the overall burden of chronic conditions at index admission [15]. ICD-9/10 procedure codes were used to ascertain open and minimally invasive (MIS), including laparoscopic and robotic, surgical approaches, as well as the requirement of renal replacement therapy (S1 Table). Hospitals were stratified into low-, medium-, and high-volume tertiles based on annual institutional case volume of esophagectomy. Perioperative complications included cerebrovascular (stroke), thromboembolic (deep vein thrombosis, pulmonary embolism), cardiac (cardiac arrest, myocardial infarction), pulmonary (respiratory failure, prolonged mechanical ventilation, pneumonia), infectious (septicemia, abscess, wound infection), and intraoperative (hemorrhage, accidental puncture, phrenic or vagus nerve injury) complications, as well as the requirement of blood transfusion (S1 Table). The Clavien-Dindo classification system was used to classify the severity of postoperative complications as no complication/grade I, grade II, grade III, and grade IV/V, according to the ICD algorithm developed by Lentine et al. [16]. Hospitalization costs were calculated from charges using hospital-specific cost-to-charge ratios and were inflation-adjusted to the 2019 Patient Health Care Index [17]. The primary outcome of interest was in-hospital mortality, while secondary outcomes included index length of stay (LOS), hospitalization cost, non-home discharge, and 30-day nonelective readmission.
Categorical variables are reported as frequencies (%) and compared using Pearson's chi-square test. Continuous variables are reported as means with standard deviation (SD) or medians with interquartile range (IQR) and compared using the adjusted Wald or Mann-Whitney U tests, respectively. The significance of temporal trends was assessed using Cuzick's nonparametric test for trends (nptrend) [18]. Multivariable linear and logistic regression models were developed to identify risk factors for AKI and assess its independent association with the outcomes of interest. Variable selection was performed by applying the Least Absolute Shrinkage and Selection Operator (LASSO) to enhance model generalizability and minimize overfitting and collinearity between independent variables [19]; an illustrative sketch of this selection step is given below. Models were evaluated using the receiver operating characteristic curve as well as the Akaike and Bayesian information criteria.

The cumulative risk of nonelective readmission within 30 days of index discharge was evaluated using Royston-Parmar flexible parametric regression [20]. This methodology allows for varying hazards of readmission over time and accounts for differences in patient, operative, and hospital characteristics between groups. The hazards were calculated over time to readmission using 2 restricted cubic spline knots. Regression results are reported as adjusted odds ratios (AOR) for dichotomous outcomes and beta coefficients (β) for continuous variables, with 95% confidence intervals (CI). The Stata "margins" command was used to predict absolute risk-adjusted values for costs and LOS based on the output of the relevant regressions. Statistical significance for all analyses was set at α = 0.05. All statistical analyses were performed using Stata 16.1 (StataCorp, College Station, TX). This study was deemed exempt from full review by the Institutional Review Board at the University of California, Los Angeles, due to the de-identified nature of the NRD (accessed July 18, 2022).

Results
Of an estimated 40,438 cancer patients undergoing esophagectomy, 3,210 (7.9%) developed AKI. Among patients with AKI, 5.7% required renal replacement therapy. Over the 10-year study period, the incidence of AKI increased from 6.4% to 9.7% (nptrend < 0.001, Fig 1). On examination of concurrent temporal trends that may help explain the increasing AKI incidence, we found that the average age, Elixhauser Comorbidity Index, and prevalence of diabetes and chronic kidney disease also significantly increased over the study period (nptrend < 0.001). In addition, the proportion of patients with prior chemoradiation therapy increased significantly from 17.3% in 2010 to 38.5% in 2019 (nptrend < 0.001), while patients with MIS operations also increased from 6.6% to 39.2% (nptrend < 0.001). Compared to nAKI, AKI patients were older (66 ± 9 vs 64 ± 10 years, p<0.001) and had a higher burden of comorbidities (Elixhauser Index 4.1 ± 1.5 vs 3.6 ± 1.5, p<0.001, Table 1). Patients with AKI were less commonly female (12.7 vs 18.3%, p<0.001) and less often had prior radiation/chemotherapy (14.0 vs 32.2%, p<0.001). In addition, the AKI cohort less frequently had private insurance (31.1 vs 42.9%, p<0.001) or robotic-assisted operations (5. ...). Unadjusted clinical and financial outcomes are shown in Table 2.
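The LASSO selection step referenced in the methods can be sketched schematically. This is not the authors' Stata workflow: the sketch below runs on synthetic data, the predictor matrix and true effects are invented, the event rate (about 8%) is chosen only to echo the observed AKI incidence, and scikit-learn's cross-validated L1-penalized logistic regression stands in for the LASSO step:

```python
import numpy as np
from sklearn.linear_model import LogisticRegressionCV
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)

# Synthetic stand-in for the design matrix: rows = admissions, columns =
# candidate predictors (age, Elixhauser index, comorbidity flags, operative
# approach, ...); only the first three columns carry real effects here.
X = rng.normal(size=(5000, 20))
beta = np.zeros(20)
beta[:3] = [0.8, -0.6, 0.5]
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-(X @ beta - 2.5))))  # ~8% events

# The L1 (LASSO) penalty shrinks uninformative coefficients exactly to zero,
# performing variable selection; cross-validation picks the penalty strength.
model = LogisticRegressionCV(penalty="l1", solver="saga", Cs=10, cv=5,
                             scoring="roc_auc", max_iter=5000)
model.fit(StandardScaler().fit_transform(X), y)
print("selected predictor columns:", np.flatnonzero(model.coef_.ravel()))
```

On this synthetic example, typically only the informative columns survive; in the study, the surviving variables then enter the reported multivariable models.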
Compared to nAKI, the AKI cohort exhibited significantly increased rates of in-hospital mortality (21.7 vs 2.2%, p<0.001). Concurrent complications, including infectious (43.1 vs 11.7%, p<0.001) and respiratory (57.8 vs 25.2%, p<0.001) events, were also more common among patients with AKI. The AKI group had a significantly greater proportion of patients with a postoperative complication severity grading of Clavien-Dindo Grade IV/V (55.9 vs 20.4%, p<0.001) compared to nAKI. Furthermore, the AKI group experienced significantly greater LOS (19 vs 10 days, p<0.001) and index hospitalization costs ($83,600 vs $43,900, p<0.001) relative to nAKI. Of note, non-home discharge (49.7 vs 14.9%, p<0.001) and 30-day nonelective readmission (16.1 vs 13.7%, p = 0.03) were significantly more common among individuals with AKI. Total costs, including the index hospitalization and all readmission costs within 30 days, remained greater among AKI patients ($95,500 vs $54,500, p<0.001) compared to nAKI.

Discussion
Using a nationally representative cohort of patients undergoing esophagectomy for cancer, the present study evaluated the clinical and financial outcomes associated with the development of perioperative AKI. Over the 10-year study period, the incidence of AKI following esophagectomy increased from 6.4% to 9.7%. Risk factors for AKI included public insurance coverage and chronic kidney disease, while prior radiation/chemotherapy and MIS operative approaches were associated with reduced odds of AKI. Notably, patients with AKI had significantly increased odds of in-hospital mortality, non-home discharge, and 30-day nonelective readmission compared to patients without AKI. In addition, AKI was associated with greater hospitalization costs and LOS. Several of these findings warrant further discussion.
The overall incidence of AKI (7.9%) was lower than reported in previous multi-center studies (12-18%), potentially due to underreporting in an administrative database [10,11]. Comparisons with prior literature are limited by variations in the definitions of AKI, surgical approach, and patient risk factors. Nevertheless, AKI remains a deleterious complication that is highly predictive of mortality and resource use following esophagectomy. Of note, the present study observed increasing rates of AKI over the past decade, affecting nearly 10% of esophagectomy patients in 2019. Surgical quality improvement efforts have generally resulted in decreased mortality and complications after most types of operations. Yet AKI in particular has also been noted to be increasing in incidence after emergency abdominal operations [8]. Interestingly, we found that the average age and burden of comorbidities, including chronic kidney disease, among esophagectomy patients also significantly increased over the study period. Given the independent association of age and comorbidities with AKI, the older and more frail patient population with worsening baseline renal function in recent years may be responsible for the increase in the incidence of AKI [21]. Recent advancements in diagnosing AKI during perioperative care may also contribute to the increase in incidence. While creatinine levels are often confounded by hemodilution perioperatively, promising novel biomarkers, including cystatin C, neutrophil gelatinase-associated lipocalin, and kidney injury molecule-1, highlight early signs of renal stress before any deterioration in function and are specific to renal injury [22]. Further efforts to incorporate novel technology and standardize the diagnosis of AKI in the perioperative setting may help mitigate patient morbidity.
The present study identified several risk factors associated with AKI. Consistent with prior literature, chronic comorbidities including diabetes, congestive heart failure, and preexisting kidney disease were more common among AKI patients [10,11]. This finding suggests that a lack of access to comprehensive primary and preventive care may be contributing to increased risk of AKI. In addition, neoadjuvant chemoradiation therapy was associated with decreased odds of AKI, similar to a prior national study of 1,446 esophagectomy patients with prior chemoradiation that demonstrated reduced septic and renal complications [23]. Interestingly, we found that the use of neoadjuvant therapy significantly increased over time, while the incidence of AKI also increased. Chemotherapy-induced nephrotoxicity and worse baseline renal function may explain this observation [24]. However, after adjustment for comorbidities including preexisting renal failure, the independent association of chemoradiation with decreased AKI may ultimately reflect the beneficial impact of neoadjuvant therapy. Careful consideration of individual patient risk factors and improved access to comprehensive cancer centers may help guide the provision of chemoradiation and prevent perioperative renal injury. Furthermore, MIS operations demonstrated significantly reduced odds of AKI following esophagectomy, as has similarly been reported in several prior studies [25,26]. MIS approaches may lead to decreased blood loss, fluid shifts, and risk for renal injury compared to open esophagectomy approaches [27]. The persistent incidence of AKI despite the increasing use of MIS over time suggests that access to financial capital and experienced MIS esophagectomy centers may potentially be contributing to disparities in perioperative AKI. In addition, the increasing age and burden of comorbidities among esophagectomy patients over time may reflect the expansion of MIS approaches, allowing for operations on more frail patients but potentially leading to a greater incidence of AKI as well.
As expected, several perioperative complications, including sepsis, pneumonia, and respiratory failure, were associated with AKI. AKI patients also had significantly increased severity of complications as assessed using the Clavien-Dindo classification. Of note, AKI likely triggers the development of multiple non-renal complications that collectively deteriorate a patient's condition, which becomes challenging to quantify [11,28]. The Comprehensive Complication Index (CCI), a weighted algorithm that adjusts for both the number and severity of complications, has previously been used in randomized trials to quantify the overall morbidity, as opposed to the burden of individual complications, on the patient [29,30]. While the CCI has not yet been validated in administrative databases, incorporation of such an index into future standardized data collection for surgical complications may be warranted to further understand the multiplicative impact of complications on the risk of death [31]. The pathophysiology underlying AKI after esophagectomy is likely multifactorial. Major intraoperative fluid shifts, ischemic reperfusion events, the use of nephrotoxic drugs, and the marked systemic inflammation often induced by surgical trauma may contribute to renal tubular injury [32,33]. Goal-directed therapy (GDT) has been suggested to decrease the risk of postoperative renal injury through perioperative hemodynamic monitoring and a combination of fluids and inotropes to reach adequate cardiac output (CO) and oxygen delivery (DO2) [34]. These findings are supported by several systematic reviews of randomized controlled trials, and recent guidelines by the Kidney Disease: Improving Global Outcomes (KDIGO) group provide a strength of recommendation of 2C for GDT in the prevention of perioperative AKI [34-37]. However, interventions to optimize hemodynamics remain widely variable in targets, timing, design, and technology. Standardized algorithms to guide fluid resuscitation and interdisciplinary care coordination between anesthetic and surgical teams may help mitigate AKI following esophagectomy [38].
Independent of other perioperative complications, AKI was associated with over 4-fold greater odds of in-hospital mortality, as well as 16% increased odds of 30-day readmission. This study of esophagectomy patients adds to a growing body of literature demonstrating an association between even small postoperative changes in serum creatinine and worse outcomes [39-41]. While we observed a stark difference in mortality, readmissions were only moderately increased in the presence of AKI. Only 5.7% of AKI patients required renal replacement therapy, suggesting that renal injury may generally be self-limited after esophagectomy. In a recent UK study of 1,135 patients undergoing esophageal cancer operations, 70% of those with AKI exhibited recovery of renal function within 48 hours [11]. However, prior literature has demonstrated that patients with complete renal resolution of postoperative AKI still had an increased hazard ratio for long-term death of 1.20 (95% CI 1.10-1.31) [41]. The increased odds of 30-day unplanned readmission remain a key indicator of the adverse long-term outcomes of AKI. Moreover, AKI was associated with significantly greater resource use and nearly doubled the index hospitalization costs and LOS, in addition to the costs accrued at readmission. In light of the rising incidence of AKI, these findings raise significant financial concern and further highlight the need for systemic efforts to mitigate AKI and reduce healthcare expenditure [39]. Further research on early screening and risk stratification for AKI, as well as perioperative interventions to prevent organ hypoperfusion, is needed.

The present study has several limitations inherent to its retrospective nature and the use of an administrative database. The ICD-9/10 diagnosis codes used to define the development of AKI in this study were not based on the AKI Network criteria or the risk, injury, failure, loss of kidney function, and end-stage renal disease (RIFLE) criteria, due to the absence of values for serum creatinine or baseline renal function [42,43]. In addition, the NRD lacks clinical granularity regarding cancer staging, time of cancer diagnosis, and intraoperative variables such as anesthesia duration and urinary output. Anastomotic leak is not specified in ICD-9/10 coding and was approximated through the presence of clinical manifestations, including postoperative infection, septicemia, or abscess, as reported in prior analyses [44,45]. Of note, prior use of the Clavien-Dindo classification system with HCUP data has been limited to abdominal and urological operations [16,46,47]. In addition, the Clavien-Dindo system does not reflect the overall impact of multiple complications on patient morbidity. The Comprehensive Complication Index (CCI) could not be assessed, as the CCI has not been validated in administrative databases such as the NRD and may be skewed by ICD-level coding of complications [29,30]. ICD coding is often influenced by provider and center practices among participating hospitals in the NRD, and the transition from ICD-9 to ICD-10 may introduce variations in coding. Furthermore, our analysis was limited to the duration of each admission and did not include outpatient data, thus potentially underestimating the diagnosis of postoperative AKI and complications after hospital discharge. Despite these limitations, we used the largest all-payer readmissions database and robust statistical methods to enhance the generalizability of our findings at the national level.
Conclusions

The present study used a nationally representative database to demonstrate that AKI development after esophagectomy for cancer has increased over the past decade. Notably, AKI appears to be independently associated with greater risk of mortality, resource use, and 30-day readmission. Given the substantial clinical and financial implications, standardized reporting of AKI and careful perioperative management to improve end-organ perfusion are needed to help mitigate this pernicious complication. Particularly among high-risk cancer patients, predischarge interventions and care coordination to limit readmission warrant further investigation and may improve the quality of esophageal surgical care at the national level.

Table 1. Patient, operative, and hospital characteristics stratified by incidence of acute kidney injury (AKI) after esophagectomy for cancer. SD: standard deviation.
2024-03-30T05:17:34.456Z
2024-03-28T00:00:00.000
{ "year": 2024, "sha1": "80bf70492d092cc9e0c8ceb6a8925d3f8dfa940a", "oa_license": "CCBY", "oa_url": null, "oa_status": null, "pdf_src": "PubMedCentral", "pdf_hash": "80bf70492d092cc9e0c8ceb6a8925d3f8dfa940a", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
269493863
pes2o/s2orc
v3-fos-license
Students’ Perception of the Teaching Profession as an Antecedent to Sustainability in Teacher Education

This paper explores students' perception of the teaching profession as an antecedent of sustainability in teacher education in Kenya. Sustainability and sustainable development have increasingly become critical issues in teacher education and development. Since sustainable development in education is impossible without the professional competence of teachers, there has been growing pressure for the reorientation of teacher education all over the world, and in Kenya in particular. However, referent literature indicates that scholars have not examined the extent to which the transformations in higher learning in Kenya have integrated sustainability. The present research focused on the socio-psychological model of sustainable behaviour. The study utilised an ex post facto cross-sectional design, and purposive sampling methods were used to select four institutions of higher learning in Kenya. Data were collected using a structured self-response questionnaire and an interview schedule. The researcher used descriptive and inferential statistics to analyse quantitative data with the aid of the Statistical Package for Social Sciences (SPSS) version 24.0. A sample of 376 respondents filled the questionnaire, resulting in a response rate of 94%. The observed mean age was 22 years, with a standard deviation of 2.23. Among respondents, 216 (57.4%) were male while 160 (42.6%) were female. Research findings indicate that a high proportion of respondents, 220 (59%), perceived sustainability in teacher education, followed by a moderate perception among 148 (39%), giving a combined perception level of 98%. The findings imply that the idea of sustainability in teacher education has great potential for future developments in the programme. These study findings present significant implications for teacher preparation strategies for sustainable development in education. The findings also shed light on the state of preparedness, as well as advances made in Kenyan higher education in compliance with global trends in best practices, for teacher education in the face of sustainable development.

Introduction

According to Barth, Michelsen, Rieckmann, and Thomas (2016), teacher education features prominently in recent academic research and publications on Education for Sustainable Development (ESD) concerning the themes covered in Sustainable Development Goal (SDG) 4.7. Besides, teacher education has increasingly recognised the need to respond to the economic, social, cultural and political challenges taking place globally. For example, the changing cultural composition of many societies has led to many countries including inter-cultural competencies within their training of teachers (Cushner, 2011, 2012; Grant & Portera, 2011). Several factors, such as the increasingly multicultural nature of societies and the work of international development organisations, have resulted in increased interest in global citizenship and development education (Baily, O'Flaherty, & Hogan, 2017). Research places greater importance on pointing out the inequalities that exist in the world and the role we all play in causing or preventing such inequalities (Liddy & Parker-Jenkins, 2013). The Sustainable Development Goals decided by the United Nations include a goal centred on learners gaining the necessary knowledge and skills to promote sustainable development (UNESCO, 2015).
Studies from European and US scholars present extrinsic, intrinsic, and altruistic motives for choosing a teaching career (Balyer & Özcan, 2014; Kyriacou & Coulthard, 2000; Thomson, Turner, & Nietfeld, 2012; Yüce et al., 2013). Psychologists view aspects not inherent in the immediate work, such as salary, status, and working conditions, as extrinsic motives. On the other hand, inherent aspects relating to the meaning of and passion for teaching, subject knowledge, and expertise are intrinsic motives. Altruistic motives entail perceptions of teaching as a valuable and important profession and the desires to support children's development and to make a difference in society. Intrinsic and altruistic reasons seem to be more frequent in what are termed 'developed countries' than in developing countries, where extrinsic reasons are more prominent (Watt et al., 2012). Likewise, Klassen et al. (2011) claim that motives for entering teacher education differ based on cultural background and that, accordingly, there is no universal pattern of motives.

Sustainability and sustainable development have increasingly become critical issues in teacher education and development. Since sustainable development in education is impossible without the professional competence of teachers, there has been growing pressure for the reorientation of teacher education all over the world, and in Kenya in particular. However, referent literature indicates that scholars have not examined the extent to which the transformations in higher learning in Kenya have integrated sustainability. This paper explores students' perception of the teaching profession as an antecedent of sustainability in teacher education in Kenya. In this paper, we focus on assessment of the perception of the teaching profession across Kenyan universities. Moreover, we also explore the influence of demographic characteristics on the perception of the teaching profession across these universities.

Literature Review

Alkhawaldeh (2017) argued that teacher education for sustainable development is an educational paradigm that considers life-long professional development and learning of teachers as the central hub of teaching practice. Sustainability and sustainable development have recently become widely discussed in the educational arena in general, and in teacher education and development in particular. For example, Salīte (2015) called for the reorientation of teacher education towards sustainable development. The core aspect of the debate on the sustainable professional development of teachers is a shift from the traditional model to more school-based teacher professional development. Recently, the integration of sustainable education and teacher education into the broader system of higher education has attracted the attention of policymakers, educationists and researchers. Teachers are urged to equip themselves with new skills and high-standard professional knowledge to assume new roles and responsibilities in sustainable education in their societies (Kabadayi, 2016). Sustainable development in teacher education calls for a paradigm shift to focus on transformative models of education. The 21st-century demands on humanity have changed since the world we live in is more globalised than before (Bell, 2016). With this new trend in teacher education, teachers are mainly required to exhibit teacher renewal and professionalism.
According to McDiarmid (2008), the continuum of teacher learning, as well as teacher education, turns out to be indispensable in a lifelong learning process, which implies the demand for extended teacher professionalism. Teacher education and learning, which form the premise for the present study, should continue through the whole of teacher development and should feature in all teacher experiences during career-long learning. Eslamian, Jafari, and Neyestani (2017) claim that nowadays, educational systems have an essential mission of responding to the needs of different communities. The complex organisational nature of educational centres, accompanied by evolving pedagogies, requires multiple professional development strategies (Mohammadi & Moradi, 2017). The professional competence of teachers is the nexus for the sustainable development of teacher education. Special attention should also centre on the training of teachers, youth leaders and other educators (UNESCO, 2005). In this way, the problem of improving teachers' professional competence is relevant in terms of the sustainable development of education (Korsun, 2017), and for educational improvement, teacher professionalism is essential (Reid & Horváthová, 2016). Yoo (2016) has argued that to ensure sustainable development, educators should focus on studies related to teacher programmes.

Continuous professional development is necessary to help teachers understand sustainable development concepts and issues and to help them become effective mentors for sustainable education. It requires teachers to be learners, researchers, and collaborators, to reflect on their teaching practices and improve professional proficiency (Mohammadi & Moradi, 2017). Kabadayi (2016) argued that understanding teachers' professional proficiency and their training needs could lead to more responsive teacher education programmes.

The Concept of Sustainability

According to Gaudiano, Meira-Cartea and Martínez-Fernández (2015), the incorporation of sustainability into higher education institutions is a relatively new process. Its history traces back to the foundation of the Environmental Sciences Formation International Center in 1975. Then, in 1985, the University and Environment in Latin America and Caribbean Seminary was founded in Bogotá, Colombia. However, despite progress, sustainable development is not yet a finished concept. Sustainable development is development that addresses the current generation's needs without compromising the capacity of future generations to satisfy their own needs (UNESCO, 2017). It is a paradigm for thinking about a future in which environmental, social and economic considerations balance with the search for a better quality of life.

The sustainable development concept initially had a political connotation. Later, "sustainability" was used in a more critical sense that was lost over time. Some higher education institutions conventionally used either concept without considering the implications (Gaudiano, Meira-Cartea and Martínez-Fernández, 2015; Martínez-Fernández & Gaudiano, 2015). According to Gutiérrez and Martínez (2010), the emphasis was first on the environment, but sustainable development now emphasises social, economic, political and religious dimensions. As these polysemic concepts of sustainable development and sustainability developed, environmental education emerged as a strategy to understand and address the growing environmental problems.
Methodology

Research Design of the Study

This was a mixed-method study that included qualitative and quantitative data, utilising an ex post facto cross-sectional survey design. The purpose of the inferential approach was to provide data from which to compute and examine correlations and the relationships between variables. Kothari (2014) suggests the use of ex post facto designs in studies in which the researcher does not manipulate the variables under study, which was the case in the present study. The cross-sectional survey design was appropriate for this study because the researcher collected data from a cross-section of sampled universities.

Location of the Study

The researcher conducted the study in selected institutions of higher learning in Nakuru, Laikipia and Kericho Counties in Kenya. Each county consists of urban and rural regions where there is a cross-section of both private and public university campuses. With the establishment of centralised university placement, the populations in these institutions are not only cosmopolitan but also represent a cross-section of all communities in Kenya. The choice of the study location significantly enhanced the external validity of the study findings.

Sampling Procedure and Sample Size

A multi-stage sampling procedure was used to generate a sample of 400 respondents. First, a stratified sampling procedure was used to group the target population into two strata based on university ownership. One stratum was public universities (4) and the other private universities (3). University ownership may present unique characteristics that might have implications for the research findings. The researcher stratified the target population to ensure heterogeneity in the final sample.

Instrumentation

The instrument comprised demographic items, a sustainable teacher education scale, and a sustainable distance learning scale. The reliability of the instrument was measured using Cronbach's alpha, which was found to be 0.86. This value is above .7, so the questions used in this test can be considered reliable with the sample.

Data Collection Procedure

The researchers obtained the required research authorisations before data collection commenced. Data collection in each university took place in the same location to help improve the rate of return. After the sampling exercise, the data collection exercise began with a brief explanation of the aim of the study, followed by the distribution of the questionnaire. Questionnaires were then distributed to those willing to participate. The exercise ended when all the respondents returned the questionnaires to the researcher.

Data Analysis

Collected data were quantitative and were therefore analysed using both descriptive and inferential statistics. Data analysis was done utilising statistical tools with the aid of computer software, SPSS version 25.0.
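As a concrete illustration of the reliability statistic reported above, the following minimal Python sketch computes Cronbach's alpha from a respondents-by-items matrix using the standard formula, alpha = k/(k-1) x (1 - sum of item variances / variance of scale totals). The score matrix is a small made-up example, not the study data.

```python
# Cronbach's alpha for a Likert-type scale: standard formula, toy data.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: respondents x items matrix of Likert scores."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of scale totals
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

scores = np.array([[5, 4, 5], [4, 4, 4], [2, 3, 2], [5, 5, 4], [3, 3, 3]])
print(round(cronbach_alpha(scores), 2))  # values near 1 indicate high reliability
```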
Ethical Consideration

Ethical principles are concerned with protecting the rights, dignity, and welfare of research participants (Baker, Pistrang & Elliott, 2002). The critical areas for consideration within this study centred on anonymity, confidentiality, informed consent, voluntary participation of respondents, and data handling and storage. The respondents were required to read the consent forms and acknowledge that they had understood what was involved in the study and that they were willing to participate. The respondents were assured of confidentiality in writing, with an indication that the responses or data collected would not be presented in a way that could be identified with any respondent or university.

Results

A sample of 376 respondents filled the questionnaire, resulting in a response rate of 94%, which was considered suitable for survey research not only according to Babbie (1995) but also according to the findings of Asch and colleagues (1997). The observed mean age was 22 years, with a standard deviation of 2.23. Among respondents, 216 (57.4%) were male, while 160 (42.6%) were female. It was also observed that 16 (4.3%) of respondents were married, 356 (94.7%) were single, and 4 (1.1%) were neither 'married' nor 'single', a category designated 'other'. Data revealed that 118 (31.4%) of respondents were from public universities, while 258 (68.6%) were from private universities. It was observed that 297 (79%) were in the full-time mode of study, compared to 79 (21%) who were enrolled in online and distance learning. Data revealed that 250 (66.5%) of the respondents were in the second year, compared to 67 (17.8%) in the fourth, 35 (9.3%) in the third, and 19 (5.1%) in the first, while 5 (1.3%) belonged to other years of study.

Perception of Teacher Education

Sustainability in teacher education was made operational through 11 items measuring the perceptions of respondents. Each respondent was required to respond on a 5-point Likert scale measuring the level of agreement. The findings are presented in Table 1.

The study findings presented a favourable implication for the perception of teacher education as currently constituted in Kenyan teacher preparation institutions. A high perception index was observed for many items measuring the perception of teacher education. Among sampled respondents, 220 (58.5%) presented an observed agreement level of 5 and 20 (5.3%) an agreement level of 4 for item 1, which stated that a career in education was my first choice for a university degree. This is reflective of the findings of Ulrika, Stefan, Lena, and Annbritt (2018), who observed that among students, intrinsic and altruistic motives for choosing a career in education are frequent.
Respondents whose level of agreement with item 2 of the tool (Given another chance I would still choose a career in education) was above 4 accounted for a total of 151 (59.6%) of the sample. Ulrika, Stefan, Lena, and Annbritt (2018) observed that students' own experiences at school form the basis for the feelings they express concerning teaching and teacher education. Pop and Turner (2009) expressed a similar view, with which the findings of this research concurred. Such experiences, in turn, play a significant role in the construction of pedagogic identities. A reasonable consequence of the difference between the teacher programmes is that future upper secondary school teachers will try to recreate their positive experiences to a higher degree, while compulsory school student teachers will seek to create a somewhat different school than the one they experienced. Thus, the pedagogic identities of the former group will function retrospectively and conservatively, and the pedagogic identities of the latter group will function progressively and autonomously, in a de-centred manner (Bernstein, 2000). This explains the feeling that respondents would still choose the teaching programme had they been given another chance.

Respondents who felt that teachers should be more appreciated in society accounted for 24 (6.4%) and 244 (64.9%) at levels 4 and 5 respectively, giving a total of 71.3% at a high level of agreement. Environmental education is an essential element of sustainability and a core component in teacher preparation. Respondents who perceived that importance accounted for 236 (62.7%), comprising 28 (7.4%) and 208 (55.3%) at levels 4 and 5 respectively, agreeing that environmental education is an essential part of teacher training in our university. An observed total of 64.9% of the sample scored high on the perception that all students can succeed in education. Besides, an observed total of 63.8% of selected respondents highly agreed with the statement: I am satisfied with the assessment strategies for the educational course at the university. High perceptions were observed among respondents who felt that it is easier to get a job with a degree in education, with 70.2% at levels 4 and 5. Respondents who registered high levels of agreement with the statement that teacher education is relevant for national development accounted for a total of 71.2%.

Finally, the highest scores were observed for respondents who felt that 'Teaching is a comfortable job': 68 (18.1%) presented agreement level 4 and 244 (64.9%) were at level 5, giving a combined agreement of 83%. The perception of "comfortable" in the career of teaching was taken in this study to imply a profession that is comparatively less stressful than other prospective careers. This could be attributed in part to school holidays, ample family time, perceived job security, and the 'break' times that characterise the daily routine. This is supported by research carried out among 157 teacher candidates in Turkey, where it was observed that a significant proportion of respondents chose a teaching career because of the long holidays and comfortable working conditions (Cermik et al., 2010).
The findings are also congruent with a study by Gao and Trent (2009) on the motivations of students from Mainland China enrolled in teacher education programmes in Hong Kong, where it was established that students' choice of a teaching career was based on the perception of the profession as pleasant and devoid of the complexities involved in other disciplines. However, the findings were at variance with those of Foley and Murphy (2015) as well as Hassan, Jani, Som, Hamid, and Azzizam (2015), who reported that teaching is a stressful career.

Overall Perception of Teacher Education

A dummy variable that grouped the perception index into low, moderate and high was generated to explore the distribution of the overall perception of sustainability in teacher education. Out of the 55 possible points comprising 11 items, where each had five possible points, the transition points were <18 and >36 for the low and high perception index respectively, with moderate perception at >18 and <37. The frequencies for each category were run, and the results are presented in Figure 1.

Figure 1: Overall Respondents' Perception of Sustainability in Teacher Education (N = 376)

Data presented in Figure 1 indicate that the percentages of respondents who perceived sustainability in teacher education clustered around the high zone, 220 (59%), followed by moderate, 148 (39%), and then low, 8 (2%). Combining the moderate and high perception levels gives a total of 98% of respondents who felt that the model was viable. This implies that the idea of sustainability in teacher education has great potential for future developments in the programme.

The Gender Factor in Perception of Teacher Education

In this study, gender was considered an essential factor in the perception of teacher education. Analyses were therefore grouped according to gender, and the findings are presented in Table 2. Data presented in Table 2 indicate that male respondents presented higher percentages on items measuring the perception of teacher education.

Perception of Teacher Education and Type of University

The researcher set out to explore how respondents' placement influences their perception of teacher education. Placement was operationalised as either public or private universities. Analyses were therefore grouped according to the category of the university, and the findings are presented in Table 3. Table 3 presents equal proportions of responses concerning the perception of teacher education and the type of university. A cursory inspection shows that respondents from both private and public universities showed a similar perception of teacher education.

Correlation Matrix for Teacher Education by Demographic Characteristics

A correlation analysis was done for the various demographic variables taken to be critical factors in the perception of teacher education. The findings are presented in Table 4. A chi-square test of significance for the age factor in the perception of teacher education yielded χ2 = 29.58 (df = 20), p = 0.77, which was not statistically significant. It was therefore concluded that age does not influence the perception of teacher education and is therefore not an antecedent to sustainability in teacher education. However, the gender factor yielded χ2 = 13.96 (df = 2), p = 0.001, which was statistically significant. This implied that there was a significant difference in the perception of teacher education based on gender. Based on these findings, it was inferred that gender was a cogent antecedent to sustainability in teacher education.
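A minimal sketch of these two analysis steps follows: collapsing the 55-point index into the stated bands, and a chi-square test of independence between gender and perception level. The contingency counts below are invented for illustration and are not the study's table; only the cut points and the test itself follow the text.

```python
# (1) Band the 11-item, 55-point perception index using the stated cut points.
# (2) Chi-square test of independence between gender and perception level.
import numpy as np
from scipy.stats import chi2_contingency

def perception_level(score: int) -> str:
    if score < 18:
        return "low"
    return "high" if score > 36 else "moderate"

print(perception_level(44))  # 'high'

# Rows: male, female; columns: low, moderate, high (counts invented).
table = np.array([[3, 78, 135],
                  [5, 70, 85]])
chi2, p, dof, expected = chi2_contingency(table)
print(round(chi2, 2), dof, round(p, 3))  # significant if p < 0.05, as in the text
```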
Similarly, respondents' year of study yielded χ2 = 23.642 (df = 8), p = 0.003, which was statistically significant, implying that academic level influenced perception, making it a definite antecedent to sustainability in teacher education. It was observed that respondents' mode of study (conceptualised as part-time and full-time) yielded χ2 = 2.156 (df = 2), p = 0.284, which was not statistically significant, leading to the conclusion that mode of study was not an antecedent to sustainability in teacher education. Finally, it was observed that the type of university yielded χ2 = 3.863 (df = 2), p = 0.15, which was not statistically significant. It was therefore concluded that university placement does not significantly influence the perception of teacher education and is therefore not an antecedent to sustainability in teacher education.

Conclusion

The sustainability of teacher education is mainly dependent on the social and professional perception of the discipline. These perceptions lend impetus to choosing the teaching career and shape preconceptions of professions related to education. This, in turn, presents significant implications for the sustainability of the profession of teaching. The most important implication of this study is that institutions of higher learning draw their clientele from a cross-section of Kenyan society. With a large sample coming from different regions of Kenya, the research findings provide useful insights into students' perception of teaching as a career, which was viewed in this study as a motivation that acts as an antecedent of the sustainability of teacher education. Second, the study draws attention to variations in the perception of aspects of the teaching profession through socio-cultural values, which are a crucial component in career choice and a factor in the sustainability of the education discipline. These values include the social perception of the teaching profession, its conditions as a profession, and the ease of securing a job for graduates of the education programme. Understanding these antecedents presents significant implications for teacher educators and curriculum planners, helping them grasp more clearly how public perceptions affect teachers' and prospective teachers' attitudes, and thereby make teacher training programmes more attractive. This would ultimately ensure continuity and sustainability in teacher education, the consequence of which would be the achievement of sustainable development in the other foci of the SDGs.

Recommendations and Areas for Further Study

First, this was a cross-sectional descriptive survey in which the data collection tool was administered at only one point in time. Taking cognisance of the effect of time in the shaping of attitudes towards phenomena, it is reasonable to suppose that perceptions of teacher education might change during a student's academic progression, and this possibility (along with the reasons for any changes) would be worth investigating.

Table 1: Respondents' Perception of Teacher Education
Table 2: Distribution of Respondents' Perception of Teacher Education by Gender
Table 3: Distribution of Respondents' Perception of Teacher Education by Type of University
Table 4: Correlation Matrix for Teacher Education by Demographic Characteristics
2024-05-02T15:03:17.188Z
2020-11-19T00:00:00.000
{ "year": 2020, "sha1": "edb444a3b5cf6b2fb9c090d4446f04866db157d8", "oa_license": "CCBY", "oa_url": "https://journals.kabarak.ac.ke/index.php/kjri/article/download/107/106", "oa_status": "HYBRID", "pdf_src": "Anansi", "pdf_hash": "06d93571cf563729f88824cc515263543a3b4b4d", "s2fieldsofstudy": [ "Education", "Environmental Science" ], "extfieldsofstudy": [] }
67933064
pes2o/s2orc
v3-fos-license
Communicable Diseases

In a world of rapid international transport and contact between populations, systems are needed to monitor the potentially explosive spread of pathogens that may be transferred from their normal habitat. The potential for the international spread of new or reinvigorated infectious diseases constitutes a threat to mankind akin to ecological and other man-made disasters. Public health has addressed the issues of communicable disease as one of its key concerns in protecting individual and population health. Methods of intervention include classic public health measures such as sanitation and immunization, and extend well beyond that into nutrition, education, case finding and treatment, and changing human behavior. The knowledge, attitudes, beliefs, and practices of policy makers, health care providers, and parents are as important to the success of communicable disease control as the technology available and the methods of financing health systems. Together, these encompass the broad programmatic approach of the New Public Health to the control of communicable diseases. This approach is important for all health providers and public health personnel, so as to be able to cope with the scale of these problems and to absorb new technologies as they emerge from scientific advances and experience, and to apply them successfully.

"... lived. It was to be observed, indeed, that it did not come straight on toward us; for the city, that is to say within the walls, was indifferently healthy still; nor was it got over the water into Southwark; for though there died that week 1,268 of all distempers, whereof it might be supposed above 900 died of the plague, yet there was but 28 in Southwark, Lambeth parish included; whereas in the parishes of St. Giles ..."

HOST-AGENT-ENVIRONMENT TRIAD

The agent-host-environment triad, discussed in Chapter 2, is fundamental to understanding the transmission of infectious diseases and their control, including those well known, those changing their patterns, and those newly emerging or escaping current methods of control. Infection occurs when the organism successfully invades the host body, where it multiplies and produces an illness. A host is a person or other living animal, including birds and arthropods, that provides a place for growth and sustenance to an infectious agent under natural, as opposed to experimental, conditions. Some organisms, such as protozoa or helminths, may pass successive stages of their life cycle in different hosts, but the primary or definitive host is the one in which the organism passes its sexual stage. The secondary or intermediate host is where the parasite passes the larval or asexual stage. A transport host is a carrier in which the organism remains alive but does not develop.

An agent of an infectious disease is necessary, but not always sufficient, to cause a disease or disorder. The infective dose is the quantity of the organism needed to cause clinical disease. A disease may have a single agent as a cause, or it may occur as a result of the agent in company with contributory factors whose presence is also essential for the development of the disease. A disease may be present in an infected person in a dormant form, such as tuberculosis, or a subclinical form, such as poliomyelitis or HIV. The virulence or pathogenicity of an infective agent is its capacity to enter the host, replicate, damage tissue, and cause disease in an exposed and susceptible host.
Virulence is indicated by the severity of clinical disease and case fatality rates. The environment provides a reservoir for the organism and the mode of transmission by which the organism reaches a new host. The reservoir is the natural habitat where an infectious agent lives and multiplies, from which it can be transmitted directly or indirectly to a new host. The reservoir refers to the natural habitat of the organism, which may be in people, animals, arthropods, plants, soil, or substances in which an organism normally lives and multiplies, on which it depends for survival, or in which it survives in a dormant form. Contacts are persons or animals who have been in association with an infected person, animal, or contaminated inanimate object or environment that might provide an opportunity for acquiring the infective agent. Persons or animals that harbor a specific infectious agent, often in the absence of discernible clinical disease, and who serve as a source of infection or contamination of food, water, or other materials, are carriers. A carrier may have an inapparent infection (a healthy carrier) or may be in the incubation or convalescent stage of the infection.

CLASSIFICATION OF DISEASES

Communicable diseases may be classified by a variety of methods: by organism, by mode of transmission, by methods of prevention (e.g., vaccine preventable, vector controllable), or by major organism classification, that is, viral, bacterial, and parasitic disease. A virus is a nucleic acid molecule (RNA or DNA) encapsulated in a protein coat or capsid. The virus is not a complete cell and can only replicate inside a complete cell. The capsid may have a protective envelope of a lipid-containing membrane. The capsid and membrane facilitate attachment to and penetration of a host cell. Inside the host cell, the nucleic acid molecule may cause the cell's chromosomes to be changed to incorporate its own genetic material, so that there is cellular manufacture and virus replication. Viroids are smaller RNA structures without capsids which can cause plant disease. Prions are recently discovered (Stanley Prusiner, Nobel Prize, 1997) variants of viruses or viroids which are the infective agents causing scrapie in sheep and similar degenerative central nervous system diseases in cattle and in man (mad cow disease, or Creutzfeldt-Jakob disease in humans).

Bacteria are unicellular organisms that reproduce sexually or asexually, grow on cell-free media, and can exist in an environment with oxygen (aerobic) or in one lacking oxygen (anaerobic). Some may enter a dormant state and form spores in which they are protected from the environment and may remain viable for years. Bacteria include a nucleus of chromosomal DNA material within a membrane surrounded by cytoplasm, itself enclosed by the cellular membrane. Bacteria are often characterized by their coloration under Gram's stain, as gram-negative or gram-positive, as well as by their microscopic morphology, colony patterns on growth media, the diseases they may cause, and antibody and molecular (DNA) marking techniques. Bacteria include both indigenous flora (normal resident) bacteria and pathogenic (disease-causing) bacteria. Pathogenic bacteria cause disease by invading, overcoming natural or acquired resistance, and multiplying in the body. Bacteria may produce a toxin or poison that can affect a body site distant from where the bacterial replication occurs, such as in tetanus.
Bacteria may also initiate an excessive immune response, producing damage to other body tissues away from the site of infection, e.g., acute rheumatic fever and glomerulonephritis. Parasitology studies protozoa, helminths, and arthropods that live within, on, or at the expense of a host. These include flagellate, unicellular organisms such as Giardia and Trichomonas, and amoebas such as Entamoeba, important in enteric and gynecologic disorders. Sporozoa are parasites with complex life cycles in different hosts, such as cryptosporidium or malarial parasites. Parasitic disease usually refers to infestation; fungi, molds, and yeasts can also affect humans. Helminths are worms that infest humans, especially in areas of poor sanitation and in tropical regions.

MODES OF TRANSMISSION OF DISEASE

Transmission of disease is the spread of an infectious agent from a source or reservoir to a person (Table 4.1). Direct transmission from one host to another occurs during touching, biting, kissing, and sexual intercourse, by projection via droplets, as in sneezing, coughing, or spitting, or by entry through the skin. Indirect transmission includes transmission via aerosols of long-lasting suspended particles in air, fecal-oral transmission, such as food- and waterborne spread, and spread under poor hygienic conditions via inanimate materials such as soiled clothes, handkerchiefs, toys, or other objects. Vector-borne diseases are transmitted via crawling or flying insects, in some cases with multiplication and development of the organism in the vector, as in malaria. The subsequent transmission to humans is by injection of salivary gland fluid during biting, or by deposition of feces, urine, or other material capable of penetrating the skin through a bite wound or other trauma. Transmission may also occur with insects as a transport mechanism, as in salmonella on the legs of a housefly. Airborne transmission occurs indirectly via infective organisms in small aerosols that may remain suspended for long periods of time and which easily enter the respiratory tract. Small particles of dust may spread organisms from soil, clothing, or bedding. Vertical transmission occurs from one generation to another, or from one stage of the insect life cycle to another stage. Maternal-infant transmission occurs during pregnancy (transplacental, e.g., congenital syphilis), delivery (as in gonorrhoea), or breast-feeding (e.g., HIV), with transfer of infectious agents from mother to fetus or newborn.

IMMUNITY

Resistance to infectious diseases is related to many host and environmental factors, including age, sex, pregnancy, nutrition, trauma, fatigue, living and socioeconomic conditions, and emotional status. Good nutritional status has a protective effect against the results of an infection. Vitamin A supplements reduce complication rates of measles and enteric infections. Tuberculosis may be present in an individual whose resistance is sufficient to prevent clinical disease, but the infected person is a carrier of an organism which can be transmitted to another or cause clinical disease if the person's susceptibility is reduced. Immunity is resistance to infection resulting from the presence of antibodies or cells that specifically act on the microorganism associated with a specific disease or toxin.
Immunity to a specific organism can be acquired by having the disease, that is, natural immunity; by immunization, active or passive; or by protection through elimination of circulation of the organism in the community.

BOX 4.3 VACCINES AND PREVENTION

"The Greeks had two gods of health, Aesculapius and Hygeia, therapy and prevention, respectively. Medicine in the twentieth century retains those two concepts, and vaccination is a powerful means of prevention. What follows is information on the vaccines that, together with sanitation, make modern society possible, and that if wisely used will continue to bestow on mankind the gift of prevention, which according to proverb is worth far more than cure."

Source: Plotkin, S. A., Mortimer, E. A. 1994. Vaccines. Second edition. Philadelphia: WB Saunders (with permission).

DEFINITIONS OF INFECTIOUS DISEASES

Infectious agent: a pathogenic organism (e.g., virus, bacteria, rickettsia, fungus, protozoa, or helminth) capable of producing infection or an infectious disease.

Infection: the process of entry, development, and proliferation of an infectious agent in the body tissue of a living organism (human, animal, or plant), overcoming body defense mechanisms and resulting in an inapparent or clinically manifest disease.

Antigen: a substance (e.g., protein, polysaccharide) capable of inducing specific response mechanisms in the body. An antigen may be introduced into the body by invasion of an infectious agent, by immunization, inhalation, or ingestion, through the skin or wounds, or via transplantation.

Antibody: a protein molecule formed by the body in response to a foreign substance (an antigen) or acquired by passive transfer. Antibodies bind to the specific antigen that elicits their production, making the infective agent susceptible to the body's immune defense mechanisms against infection, e.g., humoral and cellular.

Immunoglobulins: antibodies that meet different types of antigenic challenges. They are present in blood or other body fluids, and can cross from a mother to the fetus in utero, providing protection during part of the first year of life. There are five major classes (IgG, IgM, IgA, IgD, and IgE) and subclasses based on molecular weight.

Antisera or antitoxin: materials prepared in animals for use in passive immunization against infection or toxins.

Source: Jawetz, Melnick, and Adelberg, Medical Microbiology, 1998.

Immunity may be conferred by antibodies produced by the host body or transferred from externally produced antibodies. The body also reacts to infective antigens by cellular responses, including those that directly defend against invading organisms and other cells which produce antibodies. The immune response is the resistance of a body to specific infectious organisms or their toxins, provided by a complex interaction of antibodies and cells, including:

a. B cells (bone marrow and spleen), which produce antibodies that circulate in the blood, i.e., humoral immunity;
b. T cell-mediated immunity, provided by sensitization of lymphocytes of thymus origin to mature into cytotoxic cells capable of destroying virus-infected or foreign cells;
c. Complement, a humoral response which causes lysis of foreign cells;
d. Phagocytosis, a cellular mechanism which ingests foreign microorganisms (macrophages and leukocytes).

SURVEILLANCE

Surveillance of disease is the continuous scrutiny of all aspects of occurrence and spread of disease pertinent to effective control of that disease.
Maintaining ongoing surveillance is one of the basic duties of a public health system, and is vital to the control of communicable disease, providing the essential data for tracking of disease, planning interventions, and responding to future disease challenges. Surveillance of infectious disease incidence relies on reports of notifiable diseases by physicians, supplemented by individual and summary reports of public health laboratories. Such a system must concern itself with the completeness and quality of reporting and with potential errors and artifacts. Quality is maintained by seeking clinical and laboratory support to confirm first reports. Completeness, rapidity, and quality of reporting by physicians and laboratories should be emphasized in undergraduate and postgraduate medical education. Enforcement of legal sanctions may be needed where standards are not met.

Surveillance of infectious diseases includes the following:

1. Morbidity reports from clinics to public health offices;
2. Mortality reports from attending doctors to vital records;
3. Reports from selected sentinel centers;
4. Special field investigations of epidemics or individual cases;
5. Laboratory monitoring of infectious agents in population samples;
6. Data on supply, use, and side effects of vaccines, toxoids, and immune globulins;
7. Data on vector control activities, such as insecticide use;
8. Immunity levels in samples of the population at risk;
9. Review of current literature on the disease;
10. Epidemiologic and clinical reports from other jurisdictions.

Epidemiologic monitoring based on individual and aggregated reports of infectious diseases provides data vital to planning interventions at the community level or for the individually exposed patient and his contacts, along with other information sources such as hospital discharge data and monitoring of sentinel centers. These may be specific medical or community sites that are representative of the population and are able to provide good levels of reporting to monitor an area or population group. A sentinel center can be a pediatric practice site, a hospital emergency room, or another location which will provide a "finger on the pulse" to assess the degree and kind of morbidity occurring in the community. It can also include monitoring in a location previously known for disease transmission, such as Hong Kong in relation to influenza.

Epidemiologic analysis provided by government public health agencies should be published weekly, monthly, and annually and distributed to a wide audience of public health and health-related professionals throughout the country. Feedback to those in the field on whose initial reports the data are based is vital in order to promote involvement and improved quality of data, as well as to allow evaluation of the local situation in comparison to other areas. In a federal system of government, national agencies report regularly on all state or provincial health patterns. State or provincial health authorities provide data to the counties and cities in their jurisdictions. Such data should also be readily available to researchers in other government agencies, universities, and other academic settings for further research and analysis, both in internet and hard-copy publications. Notifiable diseases are those which a physician is legally required to report to state or local public health officials, by reason of their contagiousness, severity, frequency, or other public health importance (Table 4.2).
Public health laboratory services provide validation of clinical and epidemiologic reports. They also provide day-to-day supervision of public health conditions, and can monitor communicable disease and vaccine efficacy and coverage. In addition, they support standards of clinical laboratories in biochemistry, microbiology, and genetic screening.

Nosocomial Infections

Nosocomial or hospital-acquired infections constitute a major health hazard associated with care in institutions. In the United States, they occur in 5-10% of hospital admissions and are the cause of lengthened hospital stays and an estimated 30,000 deaths per year. In developing countries, nosocomial infections may occur in up to 65% of hospitalizations. This category of infectious disease most commonly includes infections of the urinary tract, surgical wounds, and lower respiratory tract (pneumonias), and blood poisoning or septicemias. In the United States, up to 60% of hospital-acquired infections are caused by multidrug-resistant organisms. Staphylococcus infections resistant to many current antibiotics, for example, methicillin and vancomycin, are a notable cause of prolonged hospitalization or even death. The increasing number of immunodeficient patients has increased the importance of prevention of nosocomial infections. Where standards of infection control are lacking, in both developed and developing countries, hospital staff are vulnerable to serious infection. In developing countries, deadly new viruses, such as the Ebola and Marburg viruses, mainly affect nursing, medical, and other staff as secondary cases. Surveillance and control measures are important elements of hospital management. Hospital epidemiologists and infection control staff are part of modern hospital staffing. The cost to the health system of nosocomial infections is a major consideration in planning health budgets. Reducing the risk of acquiring such infections in hospital justifies substantial expenditures for hospital epidemiology and infection control activities. With diagnosis-related group payment for hospital care (by diagnosis rather than by days of stay), the good manager has a major incentive to ensure that the risk of nosocomial infections is minimized, since they can greatly prolong hospital stays, raising patient dissatisfaction and health care costs.

ENDEMIC AND EPIDEMIC DISEASE

An endemic disease is the constant, usual presence of a disease or infectious agent in a given geographic area or population group. Hyperendemic refers to the persistence of high levels of incidence of the disease. Holoendemic means that the disease appears early in life and affects most of the population, as in malaria or hepatitis A and B in some regions. An epidemic is the occurrence in a community or region of a number of cases of an illness in excess of the usual or expected number of cases. The number of cases constituting an epidemic varies with the disease, and factors such as previous epidemiological patterns of the disease, the time and place of the occurrence, and the population involved must be taken into account. A single case of a disease long absent from an area, such as polio, constitutes an epidemic, and therefore a public health emergency, because a clinical case may represent a hundred carriers with nonparalytic or subclinical poliomyelitis. In the 1990s, two to three or more cases of measles linked in time and place may be considered sufficient evidence of transmission and presumed to be an epidemic.
A pandemic is the occurrence of a disease over a very wide area, crossing international boundaries and affecting a large proportion of the population.

Epidemic Investigation

Each epidemic should be regarded as a unique natural experiment. The investigation of an epidemic requires preparation and field investigation in conjunction with local health and other relevant authorities. Verification of cases and the scope of the epidemic will require case definition and laboratory confirmation. Tabulation of known cases according to time, place, and person is important for immediate control measures and for formulation of the hypothesis as to the nature of the epidemic. An epidemic curve is a graphic plotting of the distribution of cases by the time of onset or reporting, which gives a picture of the timing, spread, and extent of the disease from the time of the initial index cases through the secondary spread.

Epidemic investigation requires a series of steps. It starts with confirmation of the initial report and a preliminary investigation: defining who is affected, determining the nature of the illness and confirming the clinical diagnosis, recording when and where the first (index) and follow-up (secondary) cases occurred, and establishing how the disease was transmitted. Samples are taken from index case patients (e.g., blood, feces, throat swabs) as well as from possible vectors (e.g., food, water, sewage, environment). A working hypothesis is established based on the first findings, taking into account all plausible explanations. The epidemic pattern is studied, establishing common sources or risk factors, such as food, water, contact, or environment, and drawing a time line of cases to define the epidemic curve. How many are ill (the numerator) and the population at risk (the denominator) establish the attack rate, namely, the percentage of sick among those exposed to the common factor. What is a reasonable explanation of the occurrence? Is there a previous pattern, with the present episode a recurrence, or is it a new event? Consultation with colleagues and the literature helps to establish both biological and epidemiologic plausibility. What steps are needed to prevent spread and recurrence of the disease? Coordination with relevant health and other officials and providers is required to establish surveillance and control systems, document and distribute reports, and respond to the public's right to know.

The first reports of excess cases may come from a medical clinic or hospital. The initial (sentinel or index) cases provide the first clues that may point to a common source. Investigation of an epidemic is designed to quickly elucidate the cause and the points of potential intervention to stop its continuation. This requires skilled investigation and interpretation. Epidemiologic investigations have defined many public health problems. Rubella syndrome, Legionnaire's disease, AIDS, and Lyme and hantavirus diseases were first identified clinically when unusually large numbers of cases appeared with common features. The suspicions that were raised led to a search for causes and the identification of control methods. A working hypothesis of the nature of an epidemic is developed based on the initial assessment, the type of presentation, the condition involved, and previous local, regional, national, and international experience. The hypothesis provides the basis for further investigation, control measures, and planning additional clinical and laboratory studies.
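Two of the quantities described above, the epidemic curve and the attack rate, are simple to compute once a line list of cases exists. The sketch below uses invented onset dates and an invented exposed cohort purely to illustrate the calculations.

```python
# Epidemic curve (cases tabulated by date of onset) and attack rate
# (percent ill among those exposed). All data here are invented.
from collections import Counter
from datetime import date

onsets = [date(2020, 7, d) for d in (3, 3, 4, 4, 4, 5, 5, 6, 8)]
curve = Counter(onsets)                 # cases per day of onset
for day in sorted(curve):
    print(day, "#" * curve[day])        # crude text histogram of the curve

ill, exposed = len(onsets), 60          # e.g., 60 people shared the exposure
print(f"attack rate = {100 * ill / exposed:.1f}%")  # 15.0%
```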
Surveillance will then monitor the effectiveness of control measures. Communication of findings to local, regional, national, and international health reporting systems is important for sharing the knowledge with other potential support groups or other areas where similar epidemics may occur.

The Centers for Disease Control and Prevention (CDC), originally organized in 1942 as the Office for Malaria Control in War Areas, is part of the U.S. Public Health Service. As of 1993, the CDC had a budget of $1.5 billion and 7,300 employees, including epidemiologists, microbiologists, and many other professionals. The CDC includes national centers for environmental health and injury control, chronic disease prevention and health promotion, infectious diseases, prevention services, health statistics, occupational safety and health, and international health. The Epidemic Intelligence Service (EIS) of the CDC in the United States is an excellent model for the organization of the national control of communicable diseases. Young clinicians are trained to carry out epidemiologic investigations as part of training to become public health professionals. EIS officers are assigned to state health departments, other public health units, and research centers as part of their training, carrying out epidemic investigations and special tasks in disease control. The CDC, in cooperation with the WHO, has developed and offers free of charge a personal computer program to support field epidemiology, including epidemic investigations (EPI-INFO), which can be accessed and downloaded from the worldwide web. This program should be adopted widely in order to improve field investigations, to encourage reporting in real time, and to develop high standards in this discipline. CDC's Morbidity and Mortality Weekly Report (MMWR) is a weekly publication of the CDC's epidemiologic data, also available free on the internet. It includes special summaries of reportable infectious diseases as well as noncommunicable diseases.

CONTROL OF COMMUNICABLE DISEASES

Although an infectious disease is an event affecting an individual, it is communicable to others, and therefore its control requires both individual and community measures of protection. Control of a disease is a reduction in its incidence, prevalence, morbidity, and mortality. Elimination of a disease in a specified geographic area may be achieved as a result of intervention programs, such as individual protection against tetanus; elimination of infections such as measles requires stopping circulation of the organism. Eradication is success in reducing to zero both the incidence of the disease and the presence in nature of the organism, as with smallpox. Extinction means that a specific organism no longer exists in nature or in laboratories.

Public health applies a wide variety of tools for the prevention of infectious diseases and their transmission. These include activities ranging from filtration and disinfection of community drinking water to environmental vector control, pasteurization of milk, and immunization programs (see Table 4.3). No less important are organized programs to promote self-protection, case finding, and effective treatment of infections to stop their spread to other susceptible persons (e.g., HIV, sexually transmitted diseases, tuberculosis, malaria). Planning measures to control and eradicate specific communicable diseases is one of the principal activities of public health and remains so for the twenty-first century.
Treatment

Treating an infection once it has occurred is vital to the control of a communicable disease. Each person infected may become a vector and continue the chain of transmission. Successful treatment of the infected person reduces the potential for an uninfected contact to acquire the infection. Bacteriostatic agents or drugs such as sulfonamides inhibit growth or stop replication of the organism, allowing normal body defenses to overcome it. Bactericidal drugs such as penicillin act to kill pathogenic organisms. Traditional medical emphasis on single antibiotics has changed to the use of multiple drug combinations for tuberculosis and, more recently, for hospital-acquired infections. Antibiotics have made enormous contributions to clinical medicine and public health. However, pathogenic organisms are able to adapt or mutate and develop resistance to antibiotics, resulting in drug resistance. Wide-scale use of antibiotics has led to an increasing incidence of resistant organisms. Multidrug resistance constitutes one of the major public health challenges at the end of the twentieth century. Antiviral agents (e.g., ribavirin) are important additions to the medical treatment repertoire, as are "cocktails" of antiviral agents for the management of HIV infection. Antibiotic use is a health problem requiring the attention of clinicians and their teachers as well as the public health community and health care managers, representing the interaction of health issues across the entire spectrum of services.

Methods of Prevention

Organized public health services are responsible for advocating legislation and for regulating and monitoring programs to prevent infectious disease occurrence and/or spread. They function to educate the population in measures to reduce or prevent the spread of disease. Health promotion is one of the most essential instruments of infectious disease control. It promotes compliance with, and community support of, preventive measures. These include personal hygiene and safe handling of water, milk, and food supplies. For sexually transmitted diseases, health education is the major method of prevention. Each of the infectious diseases or groups of infectious diseases has one or more preventive or control approaches (Table 4.3). These may involve the coordinated intervention of different disciplines and modalities, including epidemiologic monitoring, laboratory confirmation, environmental measures, immunization, and health education. This requires teamwork and organized collaboration.

Very great progress has been made in infectious disease control by clinical, public health, and societal means since 1900 in the industrialized countries and since the 1970s in the developing world. This is attributable to a variety of factors, including organized public health services; the rapid development and wide use of new and improved vaccines and antibiotics; better access to health care; and improved sanitation, living conditions, and nutrition. Triumphs have been achieved in the eradication of smallpox and in the increasing control of other vaccine-preventable diseases. However, there remain serious problems with TB, STDs, malaria, and new infections such as HIV, and an increase in multiple drug-resistant organisms.

VACCINE-PREVENTABLE DISEASES

Vaccines are one of the most important tools of public health in the control of infectious diseases, especially for child health.
TABLE 4.4 Annual Incidence of Selected Vaccine-Preventable Infectious Diseases, in Rates per 100,000 Population, Selected Years, United States, 1950-1996 [table data not reproduced]

The body responds to invasion of disease-causing organisms by antigen-antibody reactions and cellular responses. Together, these act to restrain or destroy the disease-causing potential. Strengthening this defense mechanism through immunization is one of the outstanding achievements of public health, as treatment of infectious diseases by antimicrobials is a major element of clinical medicine.

BOX 4.5 DEFINITIONS OF IMMUNIZING AGENTS AND PROCESSES

Vaccines: a suspension of live or killed microorganisms, or an antigenic portion of those agents, presented to a potential host to induce immunity to prevent the specific disease caused by that organism. Vaccines may be prepared from:

a. Live attenuated organisms, which have been passed repeatedly in tissue culture or chick embryos so that they have lost their capacity to cause disease but retain an ability to induce antibody response, such as polio-Sabin, measles, rubella, mumps, yellow fever, BCG, typhoid, and plague.

b. Inactivated or killed organisms, which have been killed by heat or chemicals but retain an ability to induce antibody response; they are generally safe but less efficacious than live vaccines and require multiple doses, such as polio-Salk, influenza, rabies, and Japanese encephalitis.

c. Cellular fractions, usually of a polysaccharide fraction of the cell wall of a disease-causing organism, such as pneumococcal pneumonia or meningococcal meningitis.

d. Recombinant vaccines, produced by recombinant DNA methods in which specific DNA sequences are inserted by molecular engineering techniques, such as DNA sequences spliced to vaccinia virus grown in cell culture to produce influenza and hepatitis B vaccines.

Toxoids or antisera: modified toxins made nontoxic to stimulate formation of an antitoxin, such as for tetanus, diphtheria, botulism, gas gangrene, and snake and scorpion venom.

Immune globulin: an antibody-containing solution derived from immunized animals or human blood plasma, used primarily for short-term passive immunization, e.g., rabies, for immunocompromised persons.

Antitoxin: an antibody derived from serum of animals after stimulation with specific antigens and used to provide passive immunity, e.g., tetanus.

Immunization (vaccination) is a process used to increase host resistance to specific microorganisms to prevent them from causing disease. It induces primary and secondary responses in the human or animal body:

a. Primary response occurs on first exposure to an antigen. After a lag or latent period of 3-14 days (depending on the antigen), specific antibodies appear in the blood. Antibody production ceases after several weeks, but memory cells that can recognize the antigen and respond to it remain ready to respond to a further challenge by the same antigen.

b. Secondary (booster) response is the response to a second and subsequent exposure to an antigen. The lag period is shorter than in the primary response, and the peak is higher and lasts longer. The antibodies produced have a higher affinity for the antigen, and a much smaller dose of the antigen is required to initiate a response.

c. Immunologic memory exists even when circulating antibodies are insufficient to protect against the antigen.
When the body is exposed to the same antigen again, it responds by rapidly producing high levels of antibody to destroy the antigen before it can replicate and cause disease.

Immunization protects susceptible individuals from communicable disease by administration of a living modified agent, a subunit of the agent, a suspension of killed organisms, or an inactivated toxin (see Table 4.5) to stimulate development of antibodies to that agent. In disease control, individual immunity may also protect other individuals. Herd immunity occurs when sufficient persons are protected (naturally or by immunization) against a specific infectious disease to reduce circulation of the organism, thereby lowering the chance that an unprotected person will become infected. Each pathogen has different characteristics of infectivity, and therefore different levels of herd immunity are required to protect the nonimmune individual.

Immunization Coverage

The critical proportion of a population that must be immunized in order to interrupt local circulation of the organism varies from disease to disease. Eradication of smallpox was achieved with approximately 80% world coverage, followed by concentration on new case finding and immunization of contacts and surrounding communities. For highly infectious diseases, such as measles, immunization coverage of over 95% is needed to achieve local eradication. Immunization coverage in a community must be monitored in order to gauge the extent of protection and the need for program modification to achieve targets of disease control. Immunization coverage is expressed as a proportion in which the numerator is the number of persons in the target group immunized at a specific age, and the denominator is the number of persons in the target cohort who should have been immunized according to the accepted standard:

Vaccine coverage (%) = (no. of persons immunized in a specific age group / no. of persons in that age group during that year) x 100

(A short worked example is given below.) Immunization coverage in the United States is regularly monitored by the National Immunization Survey, a household survey in all 50 states as well as selected urban areas considered to be at high risk for undervaccination. An initial telephone survey is followed by confirmation, where possible, from documentation from the parents or health care providers. The survey for July 1994-June 1995 examined children born between August 1991 and November 1993 (i.e., aged 19-35 months, median age 27 months). The results show improving coverage, with 95% having received three or more doses of DPT (diphtheria, pertussis, and tetanus), 88% with three or more doses of OPV (oral polio vaccine), and 92% with three or more doses of Haemophilus influenzae type b (Hib), but only 62% with three or more doses of hepatitis B. However, only 75% had received all recommended vaccines at the recommended ages.

Present technology allows for control or eradication of important infectious diseases that still cause millions of deaths globally each year. Other important infectious diseases are still not subject to vaccine control because of difficulties in their development. In some cases, a microorganism can mutate, changing its antigenic makeup. Viruses can undergo antigenic shifts in the molecular structure of the organism, producing completely new subtypes; hosts previously exposed to other strains may have little or no immunity to the new strains. Antigenic drift refers to relatively minor antigenic changes which occur in viruses and is responsible for frequent epidemics.
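To make the coverage calculation above concrete, here is a minimal Python sketch applying the formula to the National Immunization Survey percentages quoted in the text. The cohort size is a hypothetical round number, and the herd-immunity threshold formula (1 - 1/R0) and the measles R0 of roughly 12-18 are standard epidemiologic values assumed here rather than stated in this chapter.

```python
def vaccine_coverage(immunized: int, cohort: int) -> float:
    """Coverage (%) = persons immunized in the age group, divided by
    persons in the age group during that year, times 100."""
    return 100.0 * immunized / cohort

def herd_immunity_threshold(r0: float) -> float:
    """Standard approximation (assumed, not from the text): the fraction
    of a population that must be immune to interrupt transmission."""
    return 1.0 - 1.0 / r0

# Hypothetical cohort of 100,000 children, echoing the 1994-1995 survey:
print(vaccine_coverage(95_000, 100_000))  # 95.0 -- DPT, 3+ doses
print(vaccine_coverage(62_000, 100_000))  # 62.0 -- hepatitis B, 3 doses

# Measles is highly infectious (R0 commonly cited as 12-18), which is
# why the text calls for >95% coverage for local eradication:
print(round(herd_immunity_threshold(15), 3))  # 0.933
```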
Antigenic shift is believed to explain the occurrence of new strains of influenza virus, necessitating, for example, annual reformulation of the influenza vaccine, and is associated with large-scale epidemics and pandemics. New variants of poliovirus strains are similar enough to the three main types that immunity to one strain is carried over to the new strain. Molecular epidemiology is a powerful new technique used to specify the geographic origin of organisms such as poliomyelitis and measles viruses, permitting tracking of the source of the virus and of the epidemic.

Combining more than one vaccine is now common practice, with a trend to enlarging the cocktail of vaccines in order to minimize the number of injections and visits required. This reduces the number of visits needed to carry out routine immunization, saving staff time and costs as well as increasing compliance. There are virtually no contraindications to use of multiple antigens simultaneously. Examples of vaccine cocktails include DPT (diphtheria, pertussis, and tetanus) in combination with Haemophilus influenzae b, poliomyelitis, and varicella, or MMR (measles, mumps, and rubella) vaccines.

Interventions in the form of effective vaccines save millions of lives each year and contribute to improved health of countless children and adults throughout the world. Vaccination is now accepted as one of the most cost-effective health interventions currently available. Continuous policy review is needed regarding allocation of adequate resources, logistical organization, and continued scientific effort to seek effective, safe, and inexpensive vaccines for other important diseases such as malaria and HIV. New technology of recombinant vaccines, such as that for hepatitis B, holds promise for important vaccine breakthroughs in the decades ahead.

Internationally, much progress was made in the 1980s in the control of vaccine-preventable diseases. At the end of the 1970s, fewer than 10% of the world's children were being immunized. WHO, UNICEF, and other international organizations mobilized to promote an Expanded Programme on Immunization (EPI) with a target of reaching 80% coverage by 1990. Immunization coverage increased in the developing countries, preventing some 3 million child deaths annually. Bacillus Calmette-Guérin (BCG) coverage rose from 31 to 85%; poliomyelitis coverage with OPV (three doses) from 24 to 80%; and tetanus toxoid coverage for pregnant women from 14 to 54%. Since 1991, there has been a decline in coverage in some parts of the world, mainly in sub-Saharan Africa. The challenge remains to achieve control or eradication of vaccine-preventable diseases, thus saving millions more lives. Part of the Health for All (HFA) strategy stresses the EPI approach, which includes immunization against diphtheria, pertussis, tetanus, poliomyelitis, measles, and tuberculosis. An extended form of this is the EPI PLUS program, which combines EPI with immunization against hepatitis B and yellow fever and, where appropriate, supplementation with vitamin A and iodine. The success in international eradication of smallpox is now being followed by campaigns to eradicate poliomyelitis and other important infectious diseases.

Vaccine-Preventable Diseases

Diphtheria. Diphtheria is an acute bacterial disease of the tonsils, nasopharynx, and larynx caused by the organism Corynebacterium diphtheriae. It occurs in colder months in temperate climates where the organism is present in human hosts and is spread by contact with patients or carriers. It has an incubation period of 2-5 days.
In the past, this was primarily an infection of children and was a major contributor to child mortality in the prevaccine and preantibiotic eras. Diphtheria has been virtually eliminated in countries with well-established immunization programs. In the 1980s, an outbreak of diphtheria occurred in the countries of the former Soviet Union among people over age 15. It reached epidemic proportions in the 1990s, with 140,000 cases (1991-1995) and 1100 deaths in 1994 in Russia alone. This indicates a failure of the vaccination program in several respects: it used only three doses of DPT in infancy; no boosters were given at school age or subsequently; the efficacy of the diphtheria vaccine may have been low; and coverage was below 80%. Efforts to control the epidemic include mass vaccination campaigns for persons over 3 years of age with a single dose of dT (diphtheria and tetanus) and increasing coverage of routine DPT vaccination to four doses by age 2 years. The epidemic and its control measures have led to improved coverage with dT for those over 18 years, and 93% coverage among children aged 12-23 months. WHO recommends three doses of DPT in the first year of life and a booster at school entry. This is considered by many to be insufficient to produce long-lasting immunity. The United States and other industrialized countries use a four-dose schedule and recommend periodic boosters for adults with dT.

Pertussis. Pertussis is an acute bacterial disease of the respiratory tract caused by the bacillus Bordetella pertussis. After an initial coldlike (catarrhal) stage, the patient develops a severe cough which comes in spasms (paroxysms). The disease can last 1-2 months. The paroxysms can become violent and may be followed by a characteristic crowing or high-pitched inspiratory whooping sound, followed by expulsion of a tenacious clear sputum and often by vomiting. In poorly immunized populations and those with malnutrition, pneumonia often follows and death is common. Pertussis declined dramatically in the industrialized countries as a result of widespread coverage with DPT. However, because the pertussis component of the vaccine caused some reactions, many physicians avoided its use, using DT alone. During the 1970s in the United Kingdom, many physicians recommended against vaccination with DPT. As a result, pertussis incidence increased with substantial mortality rates. This led to a reappraisal of the immunization program, with institution of incentive payments to general practitioners for completion of vaccination schedules. As a result of these measures, vaccination coverage, with resulting pertussis control, improved dramatically in the United Kingdom. Pertussis continues to be a public health threat and recurs wherever there is inadequate immunization in infancy. A new acellular vaccine is ready for widespread use; it is safer, with fewer and less severe reactions in infants, increasing the potential for improved confidence in and support for routine vaccination. Use of the new vaccine is spreading in the United States and forms part of the U.S. recommended vaccination schedule.

Tetanus. Tetanus is an acute disease caused by an exotoxin of the tetanus bacillus (Clostridium tetani), which grows anaerobically at the site of an injury. The bacillus is universally present in the environment and enters the human body via penetrating injuries. Following an incubation period of 3-21 days, it causes an acute condition of painful muscular contractions.
Unless modern medical care is available, patients are at risk of high case fatality rates of 30-90% (highest in infants and the elderly). Antitetanus serum (ATS) was discovered in 1890, and during World War I, ATS contributed to saving the lives of many thousands of wounded soldiers. Tetanus toxoid was developed in the 1920s. The organism, because of its universal presence in the environment, cannot be eradicated. However, the disease can be controlled by effective immunization of every child during infancy and school age. Adults should receive routine boosters of tetanus toxoid once every decade. Newborns are infected by tetanus spores (tetanus neonatorum) where unsanitary conditions or practices are present. It can occur when traditional birth attendants at home deliveries use unclean instruments to sever the umbilical cord, or dress the severed cord with contaminated material. Tetanus neonatorum remains a serious public health problem in developing countries. Immunization of pregnant women and women of childbearing age is reducing the problem by conferring passive immunity on the newborn. The training of traditional birth attendants in hygienic practice and the use of medically supervised birth centers for delivery also decrease the incidence of tetanus neonatorum. Elimination of tetanus neonatorum by the year 2000 was made a health target by the World Summit for Children in 1990. In that year, the number of deaths from neonatal tetanus was reported by UNICEF as 700,000 infants worldwide, declining to 600,000 in 1993. Immunization coverage of pregnant women increased from under 20% in 1984 to 52% in 1995-1997. Despite progress, coverage is still too low to achieve the target of elimination.

Poliomyelitis. Poliovirus infection may be asymptomatic or cause an acute nonspecific febrile illness. It may reach more severe forms of aseptic meningitis and acute flaccid paralysis, with long-term residual paralysis or death during the acute phase. Poliomyelitis is transmitted mainly by direct person-to-person contact, but also via sewage contamination. Large-scale epidemics of the disease, with attendant paralysis and death, occurred in industrialized countries in the 1940s and 1950s, engendering widespread fear and panic and thousands of clinical cases of "infantile paralysis." Growth of the poliovirus in tissue culture by John Enders and colleagues in 1949 led to development of the first inactivated polio vaccine by Jonas Salk in the mid-1950s and gave hope and considerable success in the control of the disease. The development of the live attenuated oral poliomyelitis vaccine by Albert Sabin, licensed in 1960, added a new dimension to its control because of the effectiveness, low cost, and ease of administration of the vaccine. The two vaccines in their more modern forms, enhanced-strength inactivated polio vaccine (eIPV) and triple oral polio vaccine (TOPV), have been used in different settings with great success. Oral polio vaccine (OPV) induces both humoral and cellular, including intestinal, immunity. The presence of OPV in the environment, through contact with immunized infants and via excreta of immunized persons in the sewage, gives a booster effect in the community. Immunization using OPV, in both routine programs and National Immunization Days (NIDs), has proven effective in dramatically reducing poliomyelitis and circulation of the wild virus in many parts of the world.
Use of the enhanced-strength IPV (eIPV) produces early and high levels of circulating antibodies, as well as protecting against vaccine-associated disease. In rare cases OPV can cause vaccine-associated paralytic poliomyelitis (VAPP), with a risk of 1 case per 520,000 initial doses and 1 case per more than 12 million subsequent doses. Approximately eight to ten cases of VAPP occur annually in the United States, with clinical, ethical, and legal implications. Use of IPV as initial protection eliminates this problem. Experience in Gaza and the West Bank in the 1970s and 1980s, and later in Israel, showed that a combination of IPV and OPV is effective in overcoming endemic and imported poliovirus. OPV requires multiple doses to achieve protective antibody levels. Where there are many enteroviruses in the environment, as is the case in most developing countries, interference in the uptake of OPV may result in cases of paralytic poliomyelitis among persons who have received three or even four doses of adequate OPV. Controversy as to the relative advantages of each vaccine continues. The OPV program of mass repeated vaccination in control of poliomyelitis in the Americas established the primacy of OPV in practical public health, and the momentum to eradicate poliomyelitis is building. A combined schedule of IPV and OPV would eliminate the wild virus and protect against vaccine-associated disease. The sequential use of IPV and OPV was adopted as part of the routine infant immunization program in the United States in 1997, but IPV alone was adopted in 1999. There are concerns that exclusive use of either vaccine alone will not lead to the desired goal of eradication of poliomyelitis.

Progress in global eradication of polio has been impressive. Global coverage of infants with three doses of OPV was 81% in 1996, as compared to 83% in 1995. The African region of WHO had an increase in OPV coverage from 58% in 1995 to 60% in 1996. National Immunization Days were conducted in 62 countries in 1995 and 82 in 1996, covering 419 million children in 1996. Mopping-up operations to reinforce coverage of children in still-endemic areas are proceeding, along with increased emphasis on monitoring of acute flaccid paralysis (AFP). Confirmed polio cases continued to be reported at 5000-6000 per year in 1997-1998. With continued national and international emphasis, and the support of WHO, Rotary International, UNICEF, donor countries, and others, there is a real prospect of a world without polio, if not by the year 2000, then shortly thereafter.

Measles. Measles is an acute disease caused by a virus of the paramyxovirus family. It is highly infectious, with a very high ratio of clinical to subclinical cases (99:1). Measles has a characteristic clinical presentation with fever, white spots (Koplik spots) on the membranes of the mouth, and a red blotchy rash appearing on the 3rd-7th day and lasting 4-7 days. Mortality rates are high in young children with compromised nutritional status, especially vitamin A deficiency. The measles virus evolved from a virus disease of cattle (rinderpest) some 3000-5000 years ago, becoming an important disease of humans with high mortality rates in debilitated, poorly nourished children, and causing significant mortality and morbidity even in industrialized countries. In the prevaccine era, measles was endemic worldwide, and even in the late 1990s it remains one of the major childhood infectious diseases. It is one of the commonest causes of death for school-age children worldwide.
Despite earlier predictions that measles deaths would be halved to 500,000 by 1996, WHO reported 1.1 million measles deaths in that year and over 1 million in 1997. Eradication in the first decade of the next century is a feasible goal, provided that there is an adequate international effort. Measles immunization coverage increased from under 40% worldwide in 1985 to 79% in 1995-1996, but was 56% in sub-Saharan Africa. Single-dose immunization failed to meet control or eradication requirements even in the most developed parts of the world. A live vaccine, licensed in 1963, was later replaced by a more effective and heat-stable vaccine, but still with a primary vaccination failure rate (i.e., failure to produce protective antibodies) of 4-8% and a secondary failure rate (i.e., antibodies are produced but protection is lost over time) of 4%. A two-dose policy incorporates a booster dose, usually at school age, in addition to maximum feasible infant coverage of children in the 9-15 month period (timing varies in different countries). Catch-up campaigns among school-age children should be carried out until the routine two-dose policy has time to take full effect. Nearly universal primary education in developing countries offers an opportunity for mass coverage of school-age children with a second dose of measles vaccine, with a resulting increase in herd immunity to reduce transmission of the virus. The two-dose policy adopted in many countries should be supplemented with catch-up campaigns in schools to provide the booster effect for those previously immunized and to cover those previously unimmunized, especially in developing countries. The CDC considers that domestic transmission in the United States has been interrupted and that most localized outbreaks were traceable to imported cases. South America and the Caribbean countries are now considered free of indigenous measles, based on their successful use of NIDs, although a large epidemic occurred in 1999 in Brazil. It now appears that eradication has become a feasible target during the early part of the next century, with a strategy of high levels of coverage in infancy, a two-dose policy supplemented by catch-up campaigns for older children and young adults, and outbreak control.

Mumps. Mumps is an acute viral disease characterized by fever, swelling, and tenderness usually of the parotid glands, but also of other glands. The incubation period ranges between 12 and 25 days. Orchitis, or inflammation of the testicles, occurs in 20-30% of postpubertal males, and oophoritis, or inflammation of the ovaries, in 5% of postpubertal females. Sterility is an extremely rare result of mumps. Central nervous system involvement can occur in the form of aseptic meningitis, almost always without sequelae. Encephalitis is reported in 1-2 per 10,000 cases, with an overall case fatality rate of 0.01%. Pancreatitis, neuritis, nerve deafness, mastitis, nephritis, thyroiditis, and pericarditis, although rare, may occur. Most persons born before 1957 are immune to the disease because of the nearly universal exposure to it before that time. The live attenuated vaccine introduced in the United States in 1967 is available as a single vaccine or in combination with measles and rubella vaccines as the measles-mumps-rubella (MMR) vaccine. It provides long-lasting immunity in 95% of cases. Mumps vaccine is now recommended in a two-dose policy, with the first dose of MMR given between 12 and 15 months of age and a second dose given either at school entry or in early adolescence.
MMR in two doses is now standard policy in the United States, Sweden, Canada, Israel, the United Kingdom, and other countries. The incidence of mumps has consequently declined rapidly. Local eradication of this disease is worthwhile and should be part of a basic international immunization program.

Rubella. Rubella (German measles) is generally a mild viral disease with lymphadenopathy and a diffuse, raised red rash. Low-grade fever, malaise, coryza, and lymphadenopathy characterize the prodromal period. The incubation period is usually 16-18 days. Differentiation from scarlet fever, measles, or other febrile diseases with rash may require laboratory testing and recovery of the virus from nasopharyngeal, blood, stool, and urine specimens. In 1941, Norman Gregg, an Australian ophthalmologist, noted an epidemic of congenital cataract in newborns associated with a history of rubella in the mother during the first trimester. Subsequent investigation demonstrated that intrauterine death, spontaneous abortion, and congenital anomalies occur commonly when rubella occurs early in pregnancy. Congenital rubella syndrome (CRS) involves single or multiple congenital anomalies, including deafness, cataracts, microphthalmia, congenital glaucoma, microcephaly, meningoencephalitis, congenital heart defects, and others. Moderate and severe cases are recognizable at birth, but mild cases may not be detected for months or years after birth. Insulin-dependent diabetes is suspected as a late sequela of congenital rubella. Each case of CRS is estimated to cost some $250,000 in health care costs during the patient's lifetime. Prior to the availability of the attenuated live rubella vaccine in 1969, the disease was universally endemic, with epidemics or peak incidence every 6-9 years. In unvaccinated populations, rubella is primarily a disease of childhood. In areas where children are well vaccinated, adolescent and young adult infection is more apparent, with epidemics in institutions, colleges, and among military personnel. A sharp reduction of rubella cases was seen in the United States following introduction of the vaccine in 1970, but incidence increased in 1978, following rubella epidemics in 1976-1978. A further reduction in cases was followed by a sharp upswing of rubella and CRS in 1988-1990. An outbreak of rubella among the Amish in the United States, who refuse immunization on religious grounds, resulted in 7 cases of CRS in 1991. It is now thought that vaccination of sufficient numbers in the United States has reduced circulation of the virus and protected most vulnerable groups in the population. In the past, immunization policy in some countries was to vaccinate schoolgirls aged 12 to protect them for the period of fertility. The current approach is to give a routine dose of MMR in early childhood, followed by a second dose at early school age to reduce the pool of susceptible persons. Women of reproductive age should be tested to confirm immunity before pregnancy and immunized if not already immune. Should a woman become infected during pregnancy, termination of pregnancy was previously recommended; management now relies on hyperimmune globulin. Infection of pregnant women during the first trimester of pregnancy is the primary public health concern of rubella. The emotional and financial burden of CRS, including the cost of treatment of its congenital defects, makes this vaccination program cost-effective. Its inclusion in a modern immunization program is fully justified.
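A rough sense of the economics just cited: multiplying the per-case lifetime cost by even a small outbreak illustrates why rubella vaccination pays. The short sketch below uses only figures quoted above; it ignores discounting and program costs, so it is an illustration rather than a formal cost-benefit analysis.

```python
# Back-of-the-envelope CRS cost illustration using figures from the text.
cost_per_crs_case = 250_000   # estimated lifetime health care cost (USD)
amish_outbreak_cases = 7      # CRS cases in the 1991 outbreak cited above

lifetime_costs = amish_outbreak_cases * cost_per_crs_case
print(f"${lifetime_costs:,}")  # $1,750,000 from a single small outbreak
```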
Elimination of CRS should be one of the primary goals of a program for prevention of vaccine-preventable disease in developed and developing countries. Adoption of MMR and the two-dose policy will gradually lead to eradication of rubella and congenital rubella syndrome.

Viral Hepatitis. Viral hepatitis is a group of diseases of increasing public health importance due to their large-scale worldwide prevalence, their serious consequences, and our increasing ability to take preventive action. The viral hepatitis diseases each have specific etiologic, clinical, epidemiologic, serologic, and pathologic characteristics. They have important short- and long-term sequelae. Vaccine development is of high priority for control and ultimate eradication.

HEPATITIS A. Hepatitis A (HAV) was previously known as infectious hepatitis or epidemic jaundice. HAV is mainly transmitted by the fecal-oral route. Clinical severity varies from a mild illness of 1-2 weeks to a debilitating illness lasting several months. The norm is complete recovery within 9 weeks, but a fulminating or even fatal hepatitis can occur. Severity of the disease worsens with increasing age. HAV is sporadic and endemic worldwide. Improving sanitation raises the age of exposure, with accompanying complications. Infection now occurs particularly in persons from industrialized countries exposed to situations of poor hygiene, or among young adults traveling to areas where the disease is endemic. Common-source outbreaks occur in school-aged children and young adults from case contact, or from food contaminated by infected handlers. Hepatitis A may be a serious public health problem in a disaster situation. Prevention involves improving personal and community hygiene, with safe chlorinated water and proper food handling. Hepatitis A vaccine has recently been licensed for use in the United States and will probably soon be recommended for routine vaccination programs, as well as for persons traveling to endemic areas.

HEPATITIS B. Hepatitis B (HBV), once called serum jaundice, was thought to be transmitted only by injections of blood or blood products. It is now known to be present in all body fluids and easily transmissible by household and sexual contact, perinatal spread from mother to newborn, and between toddlers. However, it is not spread by the oral-fecal route. Hepatitis B virus is endemic worldwide and is especially prevalent in developing countries. Carrier status with persistent viremia varies from under 1% of adults in North America to 20% in some parts of the world. Carriers have detectable levels of HBsAg, the surface antigen (i.e., the Australia antigen), in their blood. High-risk groups in developed countries include intravenous drug users, homosexual men, persons with high numbers of sexual partners, those receiving tattoos, body piercing, or acupuncture treatments, and residents or staff of institutions such as group homes and prisons. Immunocompromised and hemodialysis patients are commonly carriers of HBV. HBV may also be spread within a health system by use of inadequately sterilized reusable syringes, as in China and the former Soviet Union. Transmission is reduced by screening blood and blood products for HBsAg and by strict technique in handling blood and body fluids in health settings. HBV is clinically recognizable in less than 10% of infected children but is apparent in 30-50% of infected adults. Clinically, HBV has an insidious onset with anorexia, abdominal discomfort, nausea, vomiting, and jaundice.
The disease can vary in severity from subclinical or very mild to fulminating liver necrosis and death. It is a major cause of primary liver cancer, chronic liver disease, and liver failure, all devastating to health and expensive to treat. Hepatitis B virus is considered to be the cause of 60% of primary cancer of the liver in the world and the most common carcinogen after cigarette smoking. The WHO estimates that more than 2 billion people alive today have been infected with HBV. It is also estimated that 350 million persons are chronic carriers of HBV, with an estimated 1-1.5 million deaths per year from cirrhosis or primary liver cancer. This makes hepatitis B control a vital issue in the revision of health priorities in many countries. Strict discipline in blood banks and testing of all blood donations for HBV, as well as for HIV and hepatitis C, is mandatory, with destruction of units that test positive. Contacts should be immunized following exposure with HBV immunoglobulin and HBV vaccine. The inexpensive recombinant HBV vaccine should be adopted by all countries and included in routine vaccination of infants. Catch-up immunization for older children is also desirable. Immunization programs should include those exposed at work, such as health, prison, or sex workers, and adults in group settings. HBV immunization has been included in WHO's EPI PLUS expanded program of immunization.

HEPATITIS C. First identified in 1989, and previously known as non-A, non-B hepatitis, hepatitis C (HCV) has an insidious onset with jaundice, fatigue, abdominal pain, nausea, and vomiting. It may cause mild to moderate illness, but chronicity is common, progressing to cirrhosis and liver failure. The CDC estimates that 4 million Americans are chronically infected with HCV, with 8000-10,000 resulting deaths per annum; HCV infection is also the main reason for liver transplants. HCV is transmitted most commonly in blood products, but also among injecting drug users (90% of intravenous drug users were HCV-positive in a Vancouver study in 1998), and is also a risk for health workers. The disease may also occur in dialysis centers and other medical settings. Person-to-person spread is unclear. Prevention of transmission includes routine testing of blood donations, antiviral treatment of blood products, needle exchange programs, and hygiene. In 1998, the WHO declared hepatitis C a major public health crisis, with an estimated 170 million persons infected worldwide (1996), stressing that this "silent epidemic" is being neglected and that screening of blood products is vital to reduce transmission of this disease, as for HIV. HCV is a major cause of chronic cirrhosis and liver cancer. No vaccine is available at present, but an experimental vaccine is undergoing field trials. Interferon and ribavirin treatment is reportedly effective in 40% of cases.

HEPATITIS D. Hepatitis D virus (HDV), also known as delta hepatitis, may be self-limiting or may progress to chronic hepatitis. It is caused by a viruslike particle which infects cells along with HBV, as a coinfection or in chronic carriers of HBV. HDV occurs worldwide in the same groups at risk for HBV. It also occurs in epidemics and is endemic in South America, Africa, and among drug users. Prevention is by measures similar to those for HBV. Management of HDV is by passive immunity with immunoglobulin for contacts and high-risk groups, and should include HBV vaccination, as the diseases often coincide. There is currently no vaccine for HDV.

HEPATITIS E.
Hepatitis E virus (HEV) has an epidemiological and clinical course similar to that of HAV. There is no evidence of a chronic form of HEV. One striking characteristic of HEV is its high mortality rate among pregnant women. The disease is caused by a viruslike particle with an incubation period of 15-64 days and is most common in young adults. Sporadic cases as well as epidemics have been identified in India, Pakistan, Burma, China, Russia, Mexico, and North Africa. HEV results from waterborne epidemics or occurs as sporadic cases in areas with poor hygiene, spread via the oral-fecal route. It is a hazard in disaster situations with crowding and poor sanitary conditions. Prevention is by safe management of water supplies and sanitation. Disease management is supportive care; passive immunization is not helpful and no vaccine is currently available.

Haemophilus influenzae type b. Haemophilus influenzae type b (Hib) is a bacterium which causes meningitis and other serious infections in children under 18 months of age. Before the introduction of effective vaccines, as many as 1 in 200 children developed invasive Hib infection. Two-thirds of these had Hib meningitis, with a case fatality rate of 2-5%. Long-term sequelae such as hearing impairment and neurological deficits occurred in 15-30% of survivors. The first Hib vaccine was licensed in 1985, based on capsular material from the bacterium. Extensive clinical trials in Finland demonstrated a high degree of efficacy, but less impressive results were seen in postmarketing efficacy studies. By 1989, a conjugate vaccine, in which an added protein component enhances the immunologic response, was introduced. Several conjugate vaccines are now available. The conjugate vaccines are now combined with DPT, as their schedule is simultaneous with that of DPT. Although the Hib vaccine has been found to be cost-effective, it was initially as costly as all the basic vaccines combined (i.e., DPT, OPV, MMR, and HBV). For this reason, its use thus far has been limited largely to industrialized countries. The vaccine is a valuable addition to the immunologic armamentarium. It showed dramatic results in local eradication of this serious early childhood infection in a number of European countries and a sharp reduction in the United States. Impressive field trials in the Gambia showed a sharp reduction in mortality from invasive streptococcal diseases. The price of the vaccine has also fallen dramatically since the mid-1990s. As a result, in 1997, the World Health Organization recommended inclusion of Hib vaccine in routine immunization programs in developing countries.

Influenza. Influenza is an acute viral respiratory illness characterized by fever, headache, myalgia, prostration, and cough. Transmission is rapid, by close contact with infected individuals and by airborne particles, with an incubation period of 1-5 days. It is generally mild and self-limited, with recovery in 2-7 days. However, in certain population groups, such as the elderly and chronically ill, infection can lead to severe sequelae. Gastrointestinal symptoms commonly occur in children. During epidemics, mortality rates from respiratory diseases increase because of the large numbers of persons affected, although the case fatality rates are generally low. Over the past century, influenza pandemics have occurred in 1889, 1918, 1957, and 1968, while epidemics are annual events.
The influenza pandemic of 1918 caused millions of deaths among young adults, by some estimates killing more than had died in World War I. It was the fear of a recurrence of this pandemic which led the CDC to launch a massive immunization program in the United States in 1976 to prevent swine flu (a virus strain antigenically similar to that of the 1918 pandemic influenza) from spreading from an isolated outbreak in an army camp. The effort was stopped after millions of persons had been immunized with an urgently produced vaccine, when serious reactions occurred (Guillain-Barré syndrome, a type of paralysis) and no further cases of swine flu were seen. This demonstrated the difficulty of extrapolating scenarios from historical experience. Each year, epidemiologic services of the WHO and collaborating centers such as the CDC recommend which strains should be used in vaccine preparation for use among susceptible population groups. These vaccines are prepared with the currently anticipated epidemic strains. The three main types of influenza (A, B, and C) have different epidemiological characteristics. Type A and its subtypes, which are subject to antigenic shift, are associated with widespread epidemics and pandemics. Type B undergoes antigenic drift and is associated with less widespread epidemics. Influenza type C is even more localized. Active immunization against the prevailing wild strain of influenza virus produces a 70-80% level of protection in high-risk groups. The benefits of annual immunization outweigh the costs, and it has proven effective in reducing cases of influenza and its secondary complications, such as pneumonia and death from respiratory complications in high-risk groups.

Pneumococcal Disease. Pneumococcal diseases, which are caused by Streptococcus pneumoniae, include pneumonia, meningitis, and otitis media. The 23 capsular types of pneumococci selected for the vaccine out of 83 known types of the organism are those responsible for 88% of pneumococcal pneumonia cases and 10-25% of all pneumonia cases in the United States, and for some 40,000 deaths per year. The vaccine has been found to be cost-effective for high-risk groups, including persons with chronic disease, HIV carriers, patients whose spleens have been removed, the elderly, and those with immunosuppressive conditions. It should be included in preventive-oriented health programs, especially for long-term care of the chronically ill. Because pneumococci cause bacterial meningitis, pneumococcal vaccine may be a future candidate for use in routine immunization programs for children (over age 2).

Varicella (Chicken Pox, Shingles, Herpes Zoster). Varicella is an acute, generalized virus disease caused by the varicella zoster virus (VZV). Despite its reputation as an innocuous disease of childhood, varicella patients can be quite ill. A mild fever and characteristic generalized red rash last for a few hours, followed by vesicles occurring in successive crops over various areas of the body. Affected areas may include the membranes of the eyes, mouth, and respiratory tract. The disease may be so mild as to escape observation or may be quite severe, especially in adults. Death can occur from viral pneumonia in adults and from sepsis or encephalitis in children. Neonates whose mothers develop the disease within 2 days of delivery are at increased risk, with a case fatality rate of up to 30%.
Long-term sequelae include herpes zoster, or shingles, with a severely painful vesicular rash along the distribution of sensory nerves, which can last for months. Its occurrence increases with age, and it is primarily seen in the elderly. It can, however, occur in immunocompromised children (especially those on cancer chemotherapy), AIDS patients, and others. Some 15% of a population will experience herpes zoster during their lifetimes. Reye's syndrome is an increasingly rare but serious complication of varicella or influenza B. It occurs in children and affects the liver and central nervous system. Congenital varicella syndrome, with birth defects similar to those of congenital rubella syndrome, has been identified recently. Varicella vaccine is now recommended for routine immunization at age 12-18 months in the United States, with catch-up for children up to age 13 years and for occupationally exposed persons in health or child care settings. Varicella vaccine is also recommended for nonpregnant women of childbearing years. Cost-benefit studies indicate a 1:5 ratio if both direct and indirect costs are included (see Chapter 11). Varicella vaccine is likely to be added to a "cocktail vaccine" containing DPT, polio (IPV), and Hib.

Meningococcal Meningitis. Meningococcal meningitis, caused by the bacterium Neisseria meningitidis, is characterized by headache, fever, neck stiffness, delirium, coma, and/or convulsions. The incubation period is 2-10 days. It has a case fatality rate of 5-15% if treated early and adequately, rising to up to 50% in the absence of treatment. There are several important serogroups (A, B, C, X, Y, and Z). Serogroups A and C are the main causes of epidemics, with B causing sporadic cases and local outbreaks. Transmission is by direct contact and droplet spread. Meningitis (group A) is common in sub-Saharan African countries, but epidemics have occurred worldwide. During epidemics, children, teenagers, and young adults are the most severely affected. In developed countries, outbreaks occur most frequently in military and student populations. In 1997, meningococcal meningitis spread widely in the "meningitis belt" of Central Africa. Epidemic control is achieved by mass chemoprophylaxis with antibiotics (e.g., rifampin or sulfa drugs) among case contacts, although the emergence of resistant strains is a concern. Vaccines against serogroups A and C (bivalent) or A, C, W, and Y are available. Their use is effective in epidemic control and in prevention in institutions and among military recruits, especially for the A and C serogroups.

ESSENTIALS OF AN IMMUNIZATION PROGRAM

Vaccination is one of the key modalities of primary prevention. Immunization is cost-effective and prevents wide-scale disease and death, with high levels of safety. Despite the general consensus in public health regarding the central role of vaccination, there are many areas of controversy and unfulfilled expectations. A vaccination program should aim at 95% coverage at appropriate times, including infants, school children, and adults. Immunization policy should be adapted from current international standards, applying the best available program to national circumstances and financial capacities (Table 4.6). Public health personnel with expertise in vaccine-preventable disease control are needed to advise ministries of health and the practicing pediatric community on current issues in vaccination and to monitor implementation and evolution of control programs.
Controversies and changing views are common in immunization policy, so discussions must be conducted on a continuing basis. Policy should be under continuing review by a ministerially appointed national immunization advisory committee, including professionals from public health, academia, immunology, laboratory sciences, economics, and relevant clinical fields. Vaccine supply should be adequate and continuous. Supplies should be ordered from known manufacturers meeting international standards of good manufacturing practice. All batches should be tested for safety and efficacy prior to release for use. There should be an adequate and continuously monitored cold chain to protect heat-labile vaccines, sera, and other active biological preparations against high temperatures. The cold chain should include all stages of storage, transport, and maintenance at the site of usage. Only disposable syringes should be used in vaccination programs to prevent any possible transmission of blood-borne infection. A vaccination program depends on a readily available service with no barriers or unnecessary prerequisites, free to parents or with a minimum fee, administering vaccines in disposable syringes by properly trained individuals using patient-oriented and community-oriented approaches. Ongoing education and training on current immunization practices are needed. Incentive payments by insuring agencies or managed care systems promote complete, on-time coverage. All clinical encounters should be used to screen, immunize, and educate parents/guardians. Contraindications to vaccination are very few; vaccines may be given even during mild illness with or without fever, during antibiotic therapy, during convalescence from illness, following recent exposure to an infectious disease, and to persons having a history of mild/moderate local reactions or convulsions or a family history of sudden infant death syndrome (SIDS). Simultaneous administration of vaccines and vaccine "cocktails" reduces the number of visits and thereby improves coverage; there are no known interferences between vaccine antigens. Accurate and complete recording, with computerization of records and automatic reminders, helps promote compliance, as does co-scheduling of immunization appointments with other services. Adverse events should be reported promptly, accurately, and completely. A tracking system should operate with reminders of upcoming or overdue immunizations, using mail, telephone, and home visits, especially for high-risk families, with semiannual audits to assess coverage and review patient records in the population served to determine the percentage of children covered by their second birthday. Tracking should identify children needing completion of the immunization schedule and assess the quality of documentation. It is important to maintain up-to-date, easily retrievable medical protocols where vaccines are administered, noting vaccine dosage, contraindications, and management of adverse events. All health care providers and managers should be trained in education, promotion, and management of immunization policy. Health education should target parents as well as the general public. Monitoring of vaccines used and children immunized, individually and by category of vaccination, can be facilitated by computerization of immunization records or regular manual review of child care records.
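As an illustration of the tracking-and-reminder function described above, here is a minimal Python sketch that flags overdue doses against a schedule. The schedule ages and the record structure are hypothetical simplifications for illustration; real schedules are set by national advisory bodies.

```python
from datetime import date, timedelta

# Hypothetical, simplified schedule: vaccine -> ages in days at which
# successive doses are due (not an official schedule).
SCHEDULE = {"DPT": [60, 120, 180, 450], "MMR": [365, 1800]}

def overdue_doses(birth_date: date, doses_given: dict, today: date):
    """Return (vaccine, dose_number) pairs that are past due, mirroring
    the reminder function of the tracking system described in the text."""
    age_days = (today - birth_date).days
    overdue = []
    for vaccine, due_ages in SCHEDULE.items():
        given = doses_given.get(vaccine, 0)  # doses already documented
        for dose, due_age in enumerate(due_ages, start=1):
            if dose > given and age_days > due_age:
                overdue.append((vaccine, dose))
    return overdue

# A child of about 20 months with three DPT doses and no MMR on record:
birth = date(2023, 1, 1)
print(overdue_doses(birth, {"DPT": 3}, birth + timedelta(days=600)))
# -> [('DPT', 4), ('MMR', 1)]
```

In a real system, the same check would drive the mail, telephone, and home-visit reminders the text describes, and the semiannual audit would simply run it over every record in the register.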
Where immunization is done by physicians in private practice, as in the United States, determination of coverage is by periodic surveys.

Regulation of Vaccines

Inspection of vaccines for safety, purity, potency, and standards is part of the regulatory function. Vaccines are defined as biological products and are therefore subject to regulation by national health authorities. In the United States, this comes under the legislative authority of the Public Health Service Act, as well as the Food, Drug and Cosmetics Act, with applicable regulations in the Code of Federal Regulations. The federal agency empowered to carry out this regulatory function is the Center for Drugs and Biologics of the federal Food and Drug Administration. Litigation regarding adverse effects of vaccines led to inflation of legal costs and to efforts to limit court settlements. The U.S. federal government enacted the Child Vaccine Injury Act of 1988. This legislation requires providers to document vaccines given and to report complications or reactions. It was intended to pay benefits to persons injured by vaccines faster and by means of a less expensive procedure than a civil suit for resolving claims. Under this no-fault system, petitioners do not need to prove that manufacturers or vaccine givers were at fault; they must only show that the vaccine is related to the injury in order to receive compensation. The vaccines covered by this legislation include DTP, MMR, OPV, and IPV.

Vaccine Development

The development of vaccines, from Jenner in the eighteenth century to the advent of recombinant hepatitis B vaccine in 1987 and of vaccines for acellular pertussis, varicella, hepatitis A, and rotavirus in the 1990s, has provided one of the pillars of public health and led to enormous savings of human life. Vaccines for human viral infections such as HIV, respiratory syncytial virus, papillomavirus, Epstein-Barr virus, dengue fever, and hantavirus are under intense research with genetic approaches using recombinant techniques. The potential of future vaccines will be greatly influenced by scientific advances in genetic engineering, with the possibility of vaccines attached to bacteria or to proteins in plants, which may be given in combination for an increasing range of organisms or their harmful products. Recombinant DNA technology has revolutionized basic and biomedical research since the 1970s. The biotechnology industry has produced important diagnostic tests, such as those for HIV, with great potential for vaccine development. Traditional whole-organism vaccines, live or killed, may contain toxic products that can cause mild to severe reactions. Subunit vaccines are prepared from components of a whole organism. This avoids the use of live organisms that can cause the disease or create toxic products which cause reactions. Subunit vaccines traditionally prepared by inactivation of partially purified toxins are costly, difficult to prepare, and weakly immunogenic. Recombinant techniques are an important development for production of new whole-cell or subunit vaccines that are safe, inexpensive, and more productive of antibodies than other approaches. Their potential contribution to the future of immunology is enormous. Molecular biology and genetic engineering have made it feasible to create new, improved, and less costly vaccines. New vaccines should be inexpensive, easily administered, capable of being stored and transported without refrigeration, and given orally.
The search for inexpensive and effective vaccines for groups of viruses causing diarrheal diseases led to development of the rotavirus vaccine. Some research focuses on "edible vaccines," the genetic programming of plants to produce vaccine antigens and DNA. Vaccine manufacturers, who spend huge sums of money and years of research on new products, tend to work on those which will bring great financial rewards to the company and are critical to the local health care community. This has led to less effort being made in developing vaccines for diseases such as malaria. Yet industry plays a crucial role in continued progress in the field.

CONTROL/ERADICATION OF INFECTIOUS DISEASES

Since the eradication of smallpox, much attention has focused on the possibility of similarly eradicating other diseases, and a list of potential candidates has emerged. Some of these have been abandoned because of practical difficulties with current technology. Diseases that have been under discussion for eradication have included measles, TB, and some tropical diseases, such as malaria and dracunculiasis. Eradication is defined as the achievement of a situation whereby no further cases of a disease occur anywhere and continued control measures are unnecessary. Reducing epidemics of infectious diseases, through control and eradication in selected areas or target groups, can in certain instances achieve eradication of the disease. Local eradication can be achieved where domestic circulation of an organism is interrupted, with cases occurring from importation only. This requires a strong, sustained immunization program with adaptation to meet the challenges of importation by carriers and changing epidemiologic patterns.

Smallpox

Smallpox was one of the major pandemic diseases of the Middle Ages, and its recorded history goes back to antiquity. Prevention of smallpox was discussed in ancient China by Ho Kung (circa AD 320), and inoculation against the disease was practiced there from the eleventh century AD. Prevention was carried out by nasal inhalation of powdered dried smallpox scabs. Children were deliberately exposed to smallpox at ages when the mortality rate was lowest, in the hope that they would acquire a weakened form of the disease, and it was observed that a person could have smallpox only once in a lifetime. Isolation and quarantine were widely practiced in Europe during the sixteenth and seventeenth centuries. Variolation was the practice of inoculating youngsters with material from scabs of pustules from mild cases of smallpox in the hope that they would develop a mild form of the disease. Although this practice was associated with substantial mortality, it was widely adopted because mortality from variolation was well below that of smallpox acquired during epidemics. Introduced into England in 1721 (see Chapter 1), it was commonly practiced as a lucrative medical specialty during the eighteenth century. In the 1720s, variolation was also introduced into the American colonies and Russia, and subsequently into Sweden and Denmark. Despite all efforts, in the early eighteenth century smallpox was a leading cause of death in all age groups. Toward the end of the eighteenth century, an estimated 400,000 persons died annually from smallpox in Europe. Vaccination, or the use of cowpox vaccinia virus to protect against smallpox, was initiated late in the eighteenth century. In 1774, a cattle breeder in Yorkshire, England, inoculated his wife and two children with cowpox to protect them during a smallpox epidemic.
In 1796, Edward Jenner, an English country general practitioner, experimented with inoculation from a milkmaid's cowpox pustule to a healthy youngster, who subsequently proved resistant to smallpox by variolation (see Chapter 1). Vaccination, the deliberate inoculation of cowpox material, was slow to be adopted universally, but by 1801, over 100,000 persons in England had been vaccinated. Vaccination gathered support in the nineteenth century in military establishments and in some countries which adopted it universally. Opposition to vaccination remained strong for nearly a century, based on religious grounds, on observed failures of vaccination to give lifelong immunity, and because it was seen as an infringement by the state on the rights of the individual. Often the protest was led by medical variolationists whose practice and large incomes were threatened by the mass movement to vaccination. Resistance was also offered by "sanitarians" who opposed the germ theory and thought cleanliness the best method of prevention. Universal vaccination was increasingly adopted in Europe and America in the early nineteenth century, and eradication of smallpox in developed countries was achieved by the mid-twentieth century. In 1958, the Soviet Union proposed to the World Health Assembly a program to eradicate smallpox internationally and subsequently donated 140 million doses of vaccine per year as part of the 250 million needed to promote vaccination of at least 80% of the world population. In 1967, WHO adopted a target for the eradication of smallpox. A program was developed which included a massive increase in coverage to reduce the circulation of the virus through person-to-person contact. Where smallpox was endemic, with a substantial number of unvaccinated persons, the aim of the mass vaccination phase was 80% coverage. Increasing vaccination coverage in developing countries reduced the disease to periodic and increasingly localized outbreaks. In 1967, 33 countries were considered endemic for smallpox, and another 11 experienced importation of cases. By 1970, the number of endemic countries was down to 17, and by 1973 only 6 countries were still endemic, including India, Pakistan, Bangladesh, and Nepal. In these countries, a new strategy was needed, based on a search for cases and vaccination of all contacts, working with a case incidence below 5 per 100,000. The program then moved into the consolidation phase, with emphasis on vaccination of newborns and new arrivals. Surveillance and case detection were improved, with case contact or risk group vaccination. The maintenance phase began when surveillance and reporting were switched to the national or regional health service, with intensive follow-up of any suspect case. The mass epidemic era had been controlled by mass vaccination, reducing the total burden of the disease, but eradication required the isolation of individual cases with vaccination of potential contacts. Technical innovations greatly eased the problems associated with mass vaccination worldwide. During the 1920s, there was wide variation in sources of smallpox vaccine. In the 1930s, efforts to standardize and further attenuate the strains used reduced complication rates from vaccination. The development of lyophilization (freeze-drying) of the vaccine in England in the 1950s produced a heat-stable vaccine that could be effective in tropical field conditions in developing countries.
Technical innovations greatly eased the problems associated with mass vaccination worldwide. During the 1920s, there was wide variation in sources of smallpox vaccine. In the 1930s, efforts to standardize and further attenuate the strains used reduced complication rates from vaccinations. The development of lyophilization (freeze-drying) of the vaccine in England in the 1950s made a heat-stable vaccine that could be effective in tropical field conditions in developing countries. The invention of the bifurcated needle (Benjamin Rubin, 1961) allowed for easier and more widespread vaccination by lesser trained personnel in remote areas. The net result of these innovations was increased world coverage and a reduction in the spread of the disease. Smallpox became more and more confined by increasing herd immunity, thus allowing transition to the phase of monitoring and isolation of individual cases. In 1977 the last case of smallpox was identified in Somalia, and in 1980 the WHO declared the disease eradicated. No subsequent cases have been found except for several associated with a laboratory accident in the United Kingdom in 1978. The cost of the eradication program was $112 million, or $8 million per year. Worldwide savings are estimated at $1 billion annually. This monumental public health achievement set the precedent for eradication of other infectious diseases. The World Health Assembly decided to destroy the last two remaining stocks of the smallpox virus, in Atlanta and Moscow, in 1999. Destruction was delayed from 1999 to 2002 because of concern that illegal stocks might be held by some states or potential bioterrorists for use in weapons of mass destruction, concern regarding the appearance of monkeypox, and a wish to retain the virus for further research.

Eradication of Poliomyelitis

In 1988, the WHO established a target of eradication of poliomyelitis by the year 2000. Global immunization coverage with three doses of OPV increased from some 45% in 1984 to over 80% in 1990, with a slight decline in the period 1991-1993. Support from member countries and international agencies such as UNICEF and Rotary International has led to widescale increases in immunization coverage throughout many parts of the world. The World Health Organization promotes use of OPV only, as part of routine infant immunization or National Immunization Days (NIDs). This strategy has been successful in the Americas and in China, but India and the Middle East remain problematic. Eradication of wild poliomyelitis by the year 2000 will require flexibility in vaccination strategies and may require the combined approach, using OPV and IPV, as adopted in the United States in 1997 to prevent vaccine-associated clinical cases. The combination of OPV and IPV may be needed where enteric disease is common and leads to interference in OPV uptake, especially in tropical areas where endemic poliovirus and diarrheal diseases are still found. The World Bank estimated that achievement of global eradication would save $300 million annually in the United States alone.
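The economics cited above invite a simple payback calculation. The sketch below uses the figures from the text; the arithmetic itself is only illustrative, but it shows why eradication, when achievable, ranks among the best investments in public health:

    # Payback arithmetic for eradication programs, using figures cited
    # above (all amounts in millions of US dollars).
    smallpox_total_cost = 112.0       # total program cost over its lifetime
    smallpox_annual_savings = 1000.0  # estimated worldwide savings per year

    payback_years = smallpox_total_cost / smallpox_annual_savings
    print(f"Smallpox investment repaid in about {payback_years * 12:.1f} months")

    # Polio: World Bank estimate of $300 million saved annually in the
    # United States alone once global eradication is achieved.
    polio_annual_savings_us = 300.0
    print(f"US savings over a decade: ${polio_annual_savings_us * 10:,.0f} million")

Against recurring annual savings of this size, even a program costing hundreds of millions of dollars is repaid within months to a few years of certified eradication.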
Other Candidates for Eradication

As noted above, a list of potential candidates for eradication has emerged since the eradication of smallpox, although some have been set aside because of practical difficulties with current technology. Eradication of malaria was thought to be possible in the 1950s, when major gains were seen in malaria control by aggressive environmental control, case finding, and management. However, lack of sustained vector control and of an effective vaccine has prevented global eradication. Malaria control suffered serious setbacks because of failure in political resolve and capacity to continue support needed for necessary programs. In the 1960s and 1970s, control efforts were not sustained in many countries, and a dreadful comeback of the disease occurred in Africa and Asia in the 1980s. The emergence of mosquitoes resistant to insecticides, and of malarial strains resistant to antimalarial drugs, has made malaria control even more difficult and expensive. Renewed effort in malaria control may require new approaches. Use of community health workers (CHWs) in small villages in highly endemic regions of Colombia resulted in a major drop in malaria mortality during the 1990s. The CHWs investigate suspect cases by taking clinical histories and blood smears. They examine smears for malaria parasites and a diagnosis is made. Therapy is instituted and the patient is followed. Quality control monitoring shows high levels of accuracy in reading of slides compared to professional laboratories.

Box 4.8 CRITERIA FOR SELECTION OF DISEASES FOR ERADICATION
1. Scientific feasibility
a. Epidemiologic vulnerability: lack of nonhuman reservoir, ease of spread, no natural immunity, relapse potential;
b. Effective practical intervention available: vaccine or other primary preventive or curative treatment, or vectoricide that is safe, inexpensive, long lasting, and easily used in the field;
c. Demonstrated feasibility of elimination in specific locations, such as an island or other geographic unit.
2. Political will/popular support

In the late 1970s, there was widespread discussion in the literature of the potential for eradication of measles and TB. Measles eradication was set back as breakthrough epidemics occurred in the United States, Canada, and many other countries during the 1980s and early 1990s, but regional eradication was achieved by combining the two-dose policy with catch-up campaigns for older children or in National Immunization Days, as in the Caribbean countries. Tuberculosis has also increased in the United States and several European countries for the first time in many decades. Unrealistic expectations can lead to inappropriate assessments and policy when confounding factors alter the epidemiologic course of events. Such is the case with TB, where control and eradication have receded from the picture. This deadly disease has returned to developed countries, partly in association with HIV infection and multiple-drug-resistant strains, as well as homelessness, rising prison populations, poverty, and other deleterious social conditions. Directly observed therapy is an important recent advance, making more effective use of available technology, and will play a major role in TB control in the twenty-first century.

Future Candidates for Eradication

A decade after the eradication of smallpox was achieved, the International Task Force for Disease Eradication (ITFDE) was established to systematically evaluate the potential for global eradicability of candidate diseases. Its goals were to identify specific barriers to the eradication of these diseases that might be surmountable and to promote eradication efforts. The subject of eradication versus control of infectious diseases is of central public health importance as technology expands the armamentarium of immunization and vector control into the twenty-first century. The control of epidemics, followed by interruption of transmission and ultimately eradication, will save countless lives and prevent serious damage to children throughout the world.
The smallpox achievement, momentous in itself, points to the potential for the eradication of other deadly diseases. The skillful use of existing and new technology is an important priority in the New Public Health. Flexibility and adaptability are as vital as resources and personnel. Selecting diseases for eradication is not purely a professional issue of resources such as vaccines and manpower, organization and financing. It is also a matter of political will and perception of the burden of disease. There will be many controversies. The selection of polio for eradication while deferring measles, when polio kills few and measles kills many, may be questioned. The CDC's published criteria for selection of diseases for eradication are shown in Box 4.8. The WHO, in a 1998 review of health targets in the field of infectious disease control for the twenty-first century, selected the following targets: eradication of Chagas' disease by 2010; eradication of neonatal tetanus by 2010; eradication of leprosy by 2010; eradication of measles by 2020; eradication of trachoma by 2020; and reversing the current trend of increasing tuberculosis and HIV/AIDS. In 1998, a conference in Atlanta, Georgia, reviewed the subject, which is still very much in a state of flux. Table 4.7 summarizes the diseases which are presently seen as controllable and those considered to be potentially eradicable. The subject will be under review in the years ahead.

TUBERCULOSIS

Tuberculosis (TB) is caused by a group of organisms including Mycobacterium tuberculosis in humans and M. bovis in cattle. The disease is primarily found in humans, but it is also a disease of cattle and occasionally of other primates in certain regions of the world. It is transmitted via airborne droplet nuclei from persons with pulmonary or laryngeal TB during coughing, sneezing, talking, or singing. The initial infection may go unnoticed, but tuberculin sensitivity appears within a few weeks. About 95% of those infected enter a latent phase with a lifelong risk of reactivation. Approximately 5% go from initial infection directly to pulmonary TB. Less commonly, the infection develops as extrapulmonary TB, involving meninges, lymph nodes, pleura, pericardium, bones, kidneys, or other organs. Untreated, about half of the patients with active TB will die of the disease within 2 years, but modern chemotherapy almost always results in a cure. Pulmonary TB presents with cough and weight loss, with clinical findings on chest examination and changes in the chest X-ray, and is confirmed by tubercle bacilli in stained smears of sputum and, where possible, growth of the organism on culture media. Tuberculosis affects people in their adult working years, with 80-90% of cases in persons between the ages of 15 and 49. Its devastating effects on the work force and economic development contribute to a high cost-effectiveness for TB control. The tubercle bacillus infects approximately 1.7 billion people in the world today, causing over 7 million cases and nearly 3 million deaths in 1997. During 1995, new cases of TB included 2.8 million (40%) in the southeast Asia and western Pacific regions of WHO, with 2.3 million cases in India and 0.5 million in Indonesia. By 2005, the incidence of TB may increase to 11.9 million new cases per year, a 58% increase over 1990. Between 1990 and 1999, WHO estimates there were 88 million new cases of TB, of which 8 million were in association with HIV infection.
During the 1990s, an estimated 30 million persons died of TB, including 2.9 million with HIV infection. A new and dangerous period for TB resurgence has resulted from parallel epidemiologic events: first, the advent of HIV infection and, second, the occurrence of multiple-drug-resistant TB (MDRTB), that is, organisms resistant at least to both isoniazid (INH) and rifampicin, two mainstays of TB treatment. MDRTB can have a case fatality rate as high as 70%. HIV reduces cellular immunity, so that people with latent TB have a high risk of activation of the disease. It is estimated that HIV-negative persons have a 5-10% lifetime risk of TB; HIV-positive people have a risk of 10% per year of developing clinical tuberculosis. Drug resistance, the long period of treatment, and the socioeconomic profile of most TB patients combine to require a new approach to therapy. Directly observed treatment, short-course (DOTS), has shown itself to be highly effective with patients in poor self-care settings, such as the homeless, drug users, and those with AIDS. The strategy of DOTS uses community health workers to visit the patient and observe him or her taking the various medications, providing incentive, support, and moral coercion to complete the needed 6 to 8 month therapy. DOTS has been shown to cure up to 95% of cases, at a cost of as little as $11 per patient. It is one of the few hopes of containing the TB pandemic. In 1994, WHO released a new strategy for control of tuberculosis over the next decade. The plan calls for new guidelines for control, new aid funds for developing countries, and enlistment of NGOs to assist in the fight. The new guidelines stress short-course chemotherapy in well-managed programs of DOTS, stressing strict compliance with therapy for infectious cases, with a goal of an 85% cure rate. Even under adverse conditions, DOTS produces excellent results. It is one of the most cost-effective health interventions, combining public health and clinical medical approaches.

Tuberculosis incidence in the United States decreased steadily until 1985, increased in 1990, and has declined again since. From 1986 to 1992, there was an excess of 51,600 cases over the number expected if the previous decline in case incidence had continued. This rise was largely due to the HIV/AIDS epidemic and the emergence of MDRTB, but also to greater incidence among immigrants from areas of higher TB incidence, drug abusers, the homeless, and those with limited access to health care. This is particularly true in New York City, where MDRTB has appeared in outbreaks among prison inmates and hospital staff. From 1992 to 1997, TB incidence in the United States declined by 26%, and in some states, including New York, by 50% or more. This turnaround was due to stronger TB control programs that promptly identified persons with TB and initiated and ensured completion of appropriate therapy. Aggressive staff training, outreach, and case management approaches were vital to this success. Concern over rising rates among recent immigrants and the continued challenge of HIV/AIDS and coincidental transmission of hepatitis A, B, and C among drug users and marginal population groups show that continued support for TB control is needed. Bacille Calmette-Guérin (BCG) is an attenuated strain of the tubercle bacillus used widely as a vaccination to prevent TB, especially in high incidence areas.
It induces tuberculin sensitivity, an antigen-antibody reaction in which the antibodies produced may be somewhat protective against the tubercle bacillus, in 90% of vaccinees. Although the evidence supporting its general use is contradictory, there is evidence from case-control and contact studies of positive protection against TB meningitis and disseminated TB in children under the age of 5. In some developed, low-incidence countries, it is used not routinely but selectively. It may also be used in asymptomatic HIV-positive persons or other high risk groups. The BCG vaccine for tuberculosis remains controversial. While used widely internationally, in the United States and other industrialized countries it is thought to hinder rather than help in the fight against TB. This concern is based on the usefulness of tuberculin testing for diagnosis of the disease. Where BCG has been administered, the diagnostic value of tuberculin testing is reduced, especially in the period soon after the BCG is given. Studies showing equivocal benefit of BCG in preventing tuberculosis have added to the controversy. While those in the field in the United States continue to oppose the use of BCG, internationally it is still felt to be of benefit in preventing TB, primarily in children. A 1994 meta-analysis of the BCG literature carried out by the Technology Assessment Group at Harvard School of Public Health concluded: On average, BCG vaccine significantly reduces the risk of TB by 50%. Protection is observed across many populations, study designs, and forms of TB. Age at vaccination did not enhance predictiveness of BCG efficacy. Protection against tuberculous death, meningitis, and disseminated disease is higher than for total TB cases, although this result may reflect reduced error in disease classification rather than greater BCG efficacy. [Colditz et al., JAMA, 1994.]

Box 4.9 CONTROL OF TUBERCULOSIS
1. Identifying persons with clinically active TB;
2. Diagnostic methods--clinical suspicion, sputum smear for bacteriologic examination, tuberculin skin testing, chest radiograph;
3. Case finding and investigation programs in high risk groups;
4. Contact investigation;
5. Isolation techniques during initial therapy;
6. Treatment, mainly ambulatory, of persons with clinically active TB;
7. Treatment of contacts;
8. Directly observed treatment, short-course (DOTS), where compliance is suspect;
9. Environmental control in treatment settings to reduce droplet infection;
10. Education of health care providers on suspicion of TB and investigation of suspects.

Currently, the WHO recommends use of BCG as close to birth as possible as part of the Expanded Programme on Immunization (EPI). Tuberculosis control remains feasible with current medical and public health methods. Deterioration in its control should not lead to despair and passivity. The recent trend to successful control by DOTS, despite the growing problem of MDRTB, suggests that control and gradual reduction can be achieved by an activist, community outreach approach. The WHO in 1999 made TB control one of its major priorities, expressing grave concern that the MDR organism, now widely spread in countries of Asia, eastern Europe, and the former Soviet Union, may spread the disease much more widely. The disease constitutes one of the great challenges to public health at the start of the new century.
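Two of the figures above lend themselves to quick quantitative illustration: the per-year TB risk in HIV-positive persons compounds rapidly into a high cumulative risk, and the meta-analysis result is a statement of vaccine efficacy, conventionally defined as 1 minus the relative risk in vaccinated versus unvaccinated groups. A minimal sketch, in which the time horizons and attack rates are assumptions chosen for illustration, not source data:

    # Cumulative risk of clinical TB for an HIV-positive person,
    # assuming an independent 10% risk per year (figure cited above).
    annual_risk = 0.10
    for years in (1, 5, 10):
        cumulative = 1 - (1 - annual_risk) ** years
        print(f"{years} year(s): cumulative risk = {cumulative:.0%}")
    # 1 year: 10%; 5 years: 41%; 10 years: 65%

    # Vaccine efficacy: VE = 1 - (attack rate in vaccinated /
    # attack rate in unvaccinated). The rates below are hypothetical,
    # chosen to reproduce the ~50% average protection reported for BCG.
    ar_vaccinated, ar_unvaccinated = 0.005, 0.010
    ve = 1 - ar_vaccinated / ar_unvaccinated
    print(f"Vaccine efficacy = {ve:.0%}")   # 50%

The compounding of annual risk explains why HIV infection so sharply changes the arithmetic of TB control: within a decade, a majority of co-infected persons may develop clinical disease.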
STREPTOCOCCAL DISEASES

Acute infectious diseases caused by group A streptococci include streptococcal sore throat, scarlet fever, puerperal fever, septicemia, erysipelas, cellulitis, mastoiditis, otitis media, pneumonia, peritonsillitis (quinsy), wound infections, toxic shock syndrome, and fasciitis, the "flesh-eating bacteria." Streptococcus pyogenes group A includes some 80 serologically distinct types which vary in geographic location and clinical significance. Transmission is by droplet, person-to-person direct contact, or food infected by carriers. Important complications from a public health point of view include acute rheumatic fever and acute glomerulonephritis, but also skin infections and pneumonia. Acute rheumatic fever is a complication of strep A infection that has virtually disappeared from industrialized countries as a result of improved standards of living and antibiotic therapy. However, outbreaks were recorded in the United States in 1985, and an increasing number of cases have been seen since 1990. In developing countries, rheumatic fever remains a serious public health problem affecting school-age children, particularly those in crowded living arrangements. Long-term sequelae include disease of the mitral and aortic heart valves, which requires cardiac care and surgery for repair or replacement with artificial valves. Acute glomerulonephritis is a reaction to toxins of the streptococcal infection in the kidney tissue. This can result in long-term kidney failure and the need for dialysis or kidney transplantation. This disease has become far less common in the industrialized countries but remains a public health problem in developing countries. The streptococcal diseases are controllable by early diagnosis and treatment with antibiotics. This is a major function of primary care systems. Recent increases in rheumatic fever may herald a return of the problem, perhaps due to inadequate access to primary care in the United States for large sectors of the population, along with increased social hygiene problems. Where access to primary care services is limited, infections with streptococci can result in a heavy burden of chronic heart and kidney disease with substantial health, emotional, and financial tolls. Measures to improve access to care and public information are needed to assure rapid and effective care to prevent chronic and costly conditions.

ZOONOSES

Zoonoses are infectious diseases transmissible from vertebrate animals to humans. Common examples of zoonoses of public health importance in nonindustrialized countries include brucellosis and rabies. In industrialized countries, salmonellosis, "mad cow disease," and influenza have reinforced the importance of the relationship between animal and human health. Strong cooperation between public health and veterinary public health authorities is required to monitor and prevent such diseases.

Brucellosis

Brucellosis is a disease occurring in cattle (Brucella abortus), in dogs (Br. canis), in goats and sheep (Br. melitensis), and in pigs (Br. suis). Humans are affected mainly through ingestion of contaminated milk products, by contact, or by inhalation. Brucellosis (also known as relapsing, undulant, Malta, or Mediterranean fever) is a systemic bacterial disease of acute or insidious onset characterized by fever, headache, weakness, sweating, chills, arthralgia, depression, weight loss, and generalized malaise.
Spread is by contact with tissues, blood, urine, and vaginal discharges, but mainly by ingestion of raw milk and dairy products from infected animals. The disease may last from a few days to a year or more. Complications include osteoarthritis and relapses. Case fatality is under 2%, but disability is common and can be pronounced. The disease is primarily seen in Mediterranean countries, the Middle East, India, central Asia, and Central and South America. Brucellosis occurs primarily as an occupational disease of persons working in contact with tissues, blood, and urine of infected animals, especially goats and sheep. It is an occupational hazard for veterinarians, packinghouse workers, butchers, tanners, and laboratory workers. It is also transmitted to consumers of unpasteurized milk from infected animals. Animal reservoirs include wild animals, so that eradication is virtually impossible. Diagnosis is confirmed by laboratory findings of the organism in blood or other tissue samples, or by rising antibody titers in the blood, with confirmation by blood cultures. Clinical cases are treated with antibiotics. Epidemiologic investigation may help track down contaminated animal flocks. Routine immunization of animals, monitoring of animals in high risk areas, quarantining sick animals, destroying infected animals, and pasteurizing milk and milk products prevent spread of the disease. Control measures include educating farmers and the public not to use unpasteurized milk. Individuals who work with animals (cattle, swine, goats, sheep, dogs, coyotes) should take special precautions when handling animal carcasses and materials. Testing animals, destroying carriers, and enforcing mandatory pasteurization will restrict the spread of the disease. This is an economic as well as a public health problem, requiring full cooperation between ministries of health and of agriculture.

Rabies

Rabies is primarily a disease of animals, with a variety of wild animals serving as a reservoir for this disease, including foxes, wolves, bats, skunks, and raccoons, which may infect domestic animals such as dogs, cats, and farm animals. Animal bites break the skin or mucous membrane, allowing entry of the virus from the infected saliva into the bloodstream. The incubation period of the virus is 2-8 weeks; it can be as long as several years or as short as 5 days, so that postexposure preventive treatment is a public health emergency. The clinical disease often begins with a feeling of apprehension, headache, and pyrexia, followed by muscle spasms, acute encephalitis, and death. Fear of water ("hydrophobia") or fear of swallowing is a characteristic of the disease. Rabies is almost always fatal within a week of onset of symptoms. The disease is estimated to cause 30,000 deaths annually, primarily in developing countries. It is uncommon in developed countries. Rabies control focuses on prevention in humans, domestic animals, and wildlife. Prevention in humans is based on preexposure prophylaxis for groups at risk (e.g., veterinarians, zoo workers) and postexposure immunization for persons bitten by potentially rabid animals. Because reducing exposure of pets to wild animals is difficult, immunization of domestic animals is one of the most important preventive measures. Prevention in domestic animals is by mandatory immunization of household pets. All domestic animals should be immunized at age 3 months and revaccinated according to veterinary instructions.
Prevention in wild animals to reduce the reservoir is successful in achieving local eradication in settings where reentry from neighboring areas is limited. Since 1978, the use of oral rabies immunization has been successful in reducing the population of wild animals infected by the rabies virus. Rabies eradication efforts, using aerial distribution of baits containing fox rabies vaccine in affected areas of Belgium, France, Germany, Italy, and Luxembourg, have been underway since 1989. The number of rabies cases in these affected areas has declined by some 70%. Switzerland is now virtually rabies-free because of this vaccination program. The potential exists for focal eradication, especially on islands or in partially restricted areas with limited possibilities of wild animal entry. Livestock need not be routinely immunized against rabies, except in high risk areas. Where bats are major reservoirs of the disease, as in the United States, eradication is not presently feasible.

Salmonella

Salmonella, discussed later in this chapter under diarrheal diseases, is one of the commonest of all infectious diseases among animals and is easily spread to humans via poultry, meat, eggs, and dairy products. Specific antigenic types are associated with food-borne transmission to humans, causing generalized illness and gastroenteritis. Severity of the disease varies widely, but it can be devastating among vulnerable population groups, such as young children, the elderly, and the immunocompromised. Epidemiologic investigation of common food source outbreaks may uncover hazardous food handling practices. Laboratory confirmation of serotypes helps in monitoring the disease. Prevention is by maintaining high standards of food hygiene in processing, inspection and regulation, food handling practices, and hygiene education.

Anthrax

Bacillus anthracis causes a bacterial infection in herbivorous animals. Its spores contaminate soil worldwide. It affects humans exposed in occupational settings. Transmission is cutaneous by contact, gastrointestinal by ingestion, or respiratory by inhalation. It has gained recent attention (Iraq, 1997) as a highly potent agent for germ warfare or terrorism. Limited supplies of vaccine are available.

Creutzfeldt-Jakob Disease

Creutzfeldt-Jakob disease is a degenerative disease of the central nervous system, in its new-variant form linked to consumption of beef from cattle infected with bovine spongiform encephalopathy. It is transmitted by prions in animal feed prepared from contaminated animal material and in transplanted organs. This disease was identified in the United Kingdom, linked to infected cattle, leading to a 1997 ban on British beef in many parts of the world and slaughter of large numbers of potentially contaminated animals.

Other Major Zoonotic Diseases

The tapeworm causing diphyllobothriasis (Diphyllobothrium latum) is widespread in North American freshwater fish, passing from crustaceans to fish and then to humans who eat raw freshwater fish. It is especially common among Inuit peoples and may be asymptomatic or cause severe general and abdominal disorders. Food hygiene (freezing and cooking of fish) is recommended; treatment is by anthelminthics. The leptospiroses are a group of zoonotic bacterial diseases found worldwide in rats, raccoons, and domestic animals. They affect farmers, sewer workers, dairy and abattoir workers, veterinarians, military personnel, and miners, with transmission by exposure to or ingestion of urine-contaminated water or tissues of infected animals.
Infection is often asymptomatic or mild, but may cause generalized illness resembling influenza, meningitis, or encephalitis. Prevention requires education of the public in self-protection and immunization of workers in hazardous occupations, along with immunization and segregation of domestic animals and control of wild animals.

VECTOR-BORNE DISEASES

Vector-borne diseases are a group of diseases in which the infectious agent is transmitted to humans by crawling or flying insects. The vector is the intermediary between the reservoir and the host. Both the vector and the host may be affected by climatic conditions; mosquitoes thrive in warm, wet weather and are suppressed by cold weather, and humans may wear less protective clothing in warm weather.

Malaria

The only important reservoir of malaria is humans. Its mode of transmission is from person to person via the bite of an infected female anopheles mosquito (Ronald Ross, Nobel Prize, 1902). The causative organism is a single-cell parasite with four species: Plasmodium vivax, P. malariae, P. falciparum, and P. ovale. Clinical symptoms are produced by the parasite invading and destroying red blood cells. The incubation period is approximately 12-30 days, depending on the specific plasmodium involved. Some strains of P. vivax may have a protracted incubation period of 8-10 months, and even longer for P. ovale. The disease can also be transmitted through infected blood transfusions. Confirmation of diagnosis is by demonstrating malaria parasites on blood smears. Falciparum malaria, the most serious form, presents with fever, chills, sweats, and headache. It may progress to jaundice, bleeding disorders, shock, renal or liver failure, encephalopathy, coma, and death. Prompt treatment is essential. Case fatality rates in untreated children and adults are above 10%. An untreated attack may last 18 months. Other forms of malaria may present as a nonspecific fever. Relapse of P. ovale malaria may occur up to 5 years after initial infection; malaria may persist in chronic form for up to 50 years. Malaria control advanced during the 1940s-1960s through improved chloroquine treatment and use of DDT for vector control, with optimism for eradication of the disease. However, control regressed in many developing countries as allocations for environmental control and case finding/treatment were reduced. There has also been an increase in drug resistance, so that this disease is now an extremely serious public health problem in many parts of the world. The need for a vaccine for malaria control is now more apparent than ever. The World Health Organization estimated that, in 1997, sub-Saharan Africa (SSA) had 270 million new malaria cases, with some 5% of children up to age 5 affected. Over 1 million deaths occur annually from malaria, more than two-thirds of them in SSA. Large areas, particularly in forest or savannah regions with high rainfall, are holoendemic. At higher altitudes, endemicity is lower, but epidemics do occur. Chloroquine-resistant P. falciparum has spread throughout Africa, accompanied by an increasing incidence of severe clinical forms of the disease. The World Bank estimates that 11% of all disability-adjusted life years (DALYs) lost per year in SSA are from malaria, which places a heavy economic burden on the health systems.
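The DALY estimate above combines mortality and disability into a single measure. A minimal sketch of the standard formulation follows, omitting the age weighting and discounting used in the original Global Burden of Disease studies; all input numbers are illustrative assumptions, not source data:

    # DALYs = years of life lost to premature death (YLL)
    #       + years lived with disability, weighted by severity (YLD).
    def dalys(deaths, years_lost_per_death, cases, disability_weight, duration_years):
        yll = deaths * years_lost_per_death
        yld = cases * disability_weight * duration_years
        return yll + yld

    # Hypothetical district: 1,000 malaria deaths in young children
    # (roughly 35 expected life years lost each) plus 200,000
    # uncomplicated episodes of low weight and short duration.
    print(dalys(1_000, 35, 200_000, 0.01, 0.05))   # 35,100 DALYs

The dominance of the mortality term in this arithmetic explains why childhood malaria deaths weigh so heavily in burden-of-disease accounting for SSA.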
In the Americas, the number of cases detected has risen every year since 1974, and the WHO estimates there to have been 2.2-2.5 million cases in 1991. The nine most endemic countries in the Americas achieved a 60% reduction in malaria mortality between 1994 and 1997. The Southeast Asian region reported some 3.4 million cases of malaria in 1996 and 8000 deaths; this accounts for more than one-third of all non-African malaria cases. There is increasing resistance of malarial strains to the major available drugs and of the mosquitoes to the insecticides in use. Vector control, case finding, and treatment remain the mainstay of control. Use of insecticide-impregnated bed nets and curtains, residual house spraying, and strengthened vector control activities are important, as are early diagnosis and carefully monitored treatment, with monitoring for resistance. Control of malaria will ultimately depend on a safe, effective, and inexpensive vaccine. Attempts to develop a malaria vaccine have been unsuccessful to date due to the large number of genetic types of P. falciparum even in localized areas. A Colombian-developed vaccine is being field-tested, with partial effectiveness. Research in vaccines for malaria has also been hampered by the fact that it is a relatively low priority for vaccine manufacturers because of the minimal potential for financial benefit. Research on malaria concentrates on the pharmacological aspects of the disease because of increasing drug resistance. In 1998, WHO initiated a new campaign to "Roll Back Malaria" and maintain the dream of eradication in the future. Effective low-technology interventions include community-based case finding, early treatment of good quality, insecticide use, and vector control. The use of community health workers in endemic areas has shown promising results. Local control and even eradication can be achieved with currently available technology. This requires an integration of public health and clinical approaches with strong political commitment.

Rickettsial Infections

The rickettsiae are obligate parasites, i.e., they can replicate only in living cells, but otherwise they have the characteristics of bacteria. They cause a group of clinically similar diseases, usually characterized by severe headache, fever, myalgia, rash, and capillary bleeding causing damage to brain, lungs, kidneys, and heart. Identification is by serological testing for antibodies, but the organisms can also be cultured in laboratory animals, embryonic eggs, or cell cultures. The organisms are transmitted by arthropod vectors such as lice, fleas, ticks, and mites. The diseases caused millions of deaths during war and famine periods prior to the advent of antibiotics. These diseases appear in nature in ways that make them impossible to eradicate, but clinical diagnosis, host protection, and vector control can help reduce the burden of disease and deal with outbreaks that may occur. Public education regarding self-protection, appropriate clothing, tick removal, and localized control measures such as spraying and habitat modification are useful. Epidemic typhus, first identified in 1836, is due to Rickettsia prowazekii. Spread primarily by the body louse, typhus caused an estimated 3 million deaths during war and famine in Poland and the Soviet Union from 1915 to 1922. Untreated, the fatality rate is 5-40%. Typhus responds well to antibiotics. It is currently largely confined to endemic foci in central Africa, central Asia, eastern Europe, and South America. It is preventable by hygiene and pediculicides such as DDT and lindane. A vaccine is available for exposed laboratory personnel.
Murine typhus is a mild form of typhus due to Rickettsia typhi, which is found worldwide and spread from rodent reservoirs. Scrub typhus, also known as tsutsugamushi or Japanese river fever, is found throughout the Far East and the Pacific islands and was a serious health problem for U.S. armed forces in the Pacific during World War II. It is spread by Rickettsia tsutsugamushi and has wide variation in case fatality according to region, organism, and age of patient. Rocky Mountain spotted fever is a well-known and severe form of tick-borne typhus due to Rickettsia rickettsii, occurring in the Americas; related tick-borne spotted fevers occur in Europe and Asia. Q fever is a tick-borne disease caused by Coxiella burnetii and is worldwide in distribution, usually associated with farm workers, in both acute and chronic forms. Regular anti-tick spraying of sheep, cows, and goats helps protect exposed workers. Protective clothing and regular removal of body ticks help protect exposed persons.

Arboviruses (Arthropod-Borne Viral Diseases)

Arthropod-borne viral diseases are caused by a diverse group of viruses which are transmitted between vertebrate animals (often farm animals or small rodents) and people by the bite of blood-feeding vectors such as mosquitoes, ticks, and sandflies, and by direct contact with infected animal carcasses. Usually the viruses have the capacity to multiply in the salivary glands of the vector, but some are carried mechanically in their mouthparts. These viruses cause acute central nervous system infections (meningoencephalitis), myocarditis, undifferentiated viral illnesses with polyarthritis and rashes, or severe hemorrhagic febrile illnesses. Arbovirus diseases are often asymptomatic in vertebrates but may be severe in humans. Over 250 antigenically distinct arboviruses are associated with disease in humans, varying from benign fevers of short duration to severe hemorrhagic fevers. Each has specific geographic, vector, clinical, and virologic characteristics. They are of international public health importance because of the potential for spread via natural phenomena and modern rapid transportation of vectors and of persons incubating the disease or ill with it, with potential for further spread at the point of destination.

Encephalitides

Arboviruses are responsible for a large number of encephalitic diseases characterized by mode of transmission and geographic area. Mosquito-borne arboviruses causing encephalitis include Eastern and Western equine, Venezuelan, Japanese, and Murray Valley encephalitides. Japanese encephalitis is caused by a mosquito-borne arbovirus found in Asia and is associated with rice-growing areas. It is characterized by headache, fever, convulsions, and paralysis, with fatality rates in severe cases as high as 60%. A currently available vaccine is used routinely in endemic areas (Japan, Korea, Thailand, India, and Taiwan) and for persons traveling to infected areas. Tick-borne arboviruses causing encephalitis include the Powassan virus, which occurs sporadically in the United States and Canada. Tick-borne encephalitis is endemic in eastern Europe, Scandinavia, and the former Soviet Union. An epidemic of mosquito-borne encephalitis in New York City in 1999 included 54 cases and 6 deaths, due to the West Nile virus, never before found in the United States.

Rift Valley Fever

Rift Valley fever (RVF) is a virus spread by mosquitoes and other insect vectors. It affects animals and humans who are in direct contact with the meat or blood of affected animals.
The virus causes a generalized illness in humans, with encephalitis, hemorrhages, retinitis and retinal hemorrhage leading to partial or total blindness, and death (1-2%). It also causes abortion in virtually all infected ewes and a high percentage of deaths in lambs. The normal habitat is in the Rift Valley of eastern Africa (the Great Syrian-African Rift), often spreading to southern Africa, depending on climatic conditions. The primary reservoir and vector is the Aedes mosquito, and affected animals serve to multiply the virus, which is transmitted by other vectors and by direct contact with animal fluids to humans. An unusual spread of RVF northward to the Sudan and along the Aswan Dam reservoir to Egypt in 1977-1978 caused hundreds of thousands of animal deaths, with 18,000 human cases and 598 deaths. RVF appeared again in Egypt in 1993. This disease is suspected to be one of the ten plagues of Egypt leading to the exodus of the Children of Israel from Egypt during pharaonic-biblical times. In 1997, an outbreak of RVF in Kenya, initially thought to be anthrax, with hundreds of cases and dozens of deaths, was related to abnormal rainy season and vector conditions. Satellite monitoring of rainfall and vegetation is being used to predict epidemics in Kenya and surrounding countries. Animal immunization, monitoring, vector control, and reduced contact with infected animals can limit the spread of this disease.

Hemorrhagic Fevers

Arboviruses can also cause hemorrhagic fevers. These are acute febrile illnesses, with extensive hemorrhagic phenomena (internal and external), liver damage, shock, and often high mortality rates. The potential for international transmission is high.

Yellow Fever. Yellow fever is an acute viral disease of short duration and varying severity with jaundice. It can progress to liver disease and severe intestinal bleeding. The case fatality rate is <5% in endemic areas, but may be as high as 50% in nonendemic areas and in epidemics. It caused major epidemics in the Americas in the past, but was controlled by elimination of the vector, Aedes aegypti. A live attenuated vaccine is used in routine immunization in endemic areas and is recommended for travelers to infected areas. Determining the mode of transmission and vector control of yellow fever played a major role in the development of public health (see Chapter 1). In 1997, the WHO reported 200,000 cases and 30,000 deaths from yellow fever globally.

Dengue Hemorrhagic Fever. Dengue hemorrhagic fever is an acute, sudden-onset viral disease, with 3-5 days of fever, intense headache, myalgia, arthralgia, gastrointestinal disturbance, and rash. Hemorrhagic phenomena can cause case fatality rates of up to 50%. Epidemics can be explosive, but adequate treatment can greatly reduce the number of deaths.

BOX 4.10 DENGUE FEVER AND DENGUE HEMORRHAGIC FEVER, 1996
Dengue fever, a severe influenza-like illness, and dengue hemorrhagic fever are closely related conditions caused by four distinct viruses transmitted by Aedes aegypti mosquitoes. Dengue is the world's most important mosquito-borne virus disease. A total of 2,500 million people worldwide are at risk of infection. An estimated 20 million cases occur each year, of whom 500,000 need to be hospitalized. This is a spreading problem, especially in cities in tropical and subtropical areas. Major outbreaks were reported in Colombia, Cuba, and many other locations in 1997.
Source: World Health Organization, World Health Report 1998.
Dengue occurs in Southeast Asia, the Pacific islands, Australia, West Africa, the Caribbean, and Central and South America. An epidemic in Cuba in 1981 included more than 500,000 cases and 158 deaths. Vector control of the A. aegypti mosquito resulted in control of the disease during the 1950s-1970s, but reinfestation of mosquitoes led to increased transmission and epidemics in the Pacific islands, the Caribbean, and Central and South America in the 1980s and 1990s. Outbreaks in Vietnam included 370,000 cases in 1987, another 116,000 cases in 1990, and a similar-sized outbreak in 1997. Indonesia had over 13,000 cases in 1997 with 240 deaths, and in 1998 over 19,000 cases (January-May) with at least 531 deaths. In 1998, epidemics of dengue were reported in Fiji, the Cook Islands, New Caledonia, and northern Australia. The WHO estimates 140,000 deaths and 3.1 million cases worldwide in 1997. Monkeys are the main reservoir, and the vector is the A. aegypti mosquito. No vaccine is currently available, and management is by vector control.

Other Hemorrhagic Fevers

Lassa Fever. Lassa fever was first isolated in Lassa, Nigeria, in 1969 and is widely distributed in West Africa, with 200,000-400,000 cases and 5000 deaths annually. It is spread by direct contact with blood, urine, or secretions of infected rodents and by direct person-to-person contact in hospital settings. The disease is characterized by a persistent or spiking fever for 2-4 weeks, and may include severe hypotension, shock, and hemorrhaging. The case fatality rate is 15%.

Marburg Disease. Marburg disease is a viral disease with sudden onset of generalized illness, malaise, fever, myalgia, headache, diarrhea, vomiting, rash, and hemorrhages. It was first seen in Marburg, Germany, in 1967, following exposure to green monkeys. Person-to-person spread occurs via blood, secretions, organs, and semen. Case fatality rates can be over 50%.

Ebola Fever. Ebola fever is a viral disease with sudden onset of generalized illness, malaise, fever, myalgia, headache, diarrhea, vomiting, rash, and hemorrhages. It was first found in Zaire and Sudan in 1976 in outbreaks which killed more than 400 persons. It is spread from person to person by the blood, vomitus, urine, stools, and other secretions of sick patients, with a short incubation period. The disease has case fatality rates of up to 90%. An outbreak of Ebola among laboratory monkeys in a medical laboratory near Washington, D.C., was contained with no human cases. The reservoir for the virus is thought to be rodents. An outbreak of Ebola in May 1995 in the town of Kikwit, Zaire, killed 245 persons out of 316 cases (a 78% case fatality rate). This outbreak caused international concern that the disease could spread, but it remained localized. Another outbreak of Ebola virus occurred in Gabon in early 1996, with 37 cases, 21 of whom had direct exposure to an infected monkey, the remainder infected by human-to-human contact or not established; 21 of the cases died (57%). This disease is considered highly dangerous unless outbreaks are effectively controlled. In Zaire, lack of basic sanitary supplies, such as surgical gloves for hospitals, almost ensures that this disease will spread when it recurs.

Lyme Disease

Lyme disease is characterized by a rash and musculoskeletal, neurologic, and cardiovascular symptoms. Confirmation is by laboratory investigation. It is the most common vector-borne disease in the United States, with 33,000 cases reported between 1993 and 1995.
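The figures above also illustrate two routine epidemiologic calculations: the case fatality rate (deaths among diagnosed cases) and the average annual incidence rate per 100,000 population. A minimal sketch, in which the U.S. population figure is an assumption for illustration, not a number from the text:

    # Case fatality rate: deaths / cases among the diagnosed.
    deaths, cases = 245, 316                 # Ebola, Kikwit, Zaire, 1995
    print(f"CFR = {deaths / cases:.0%}")     # CFR = 78%

    # Average annual incidence per 100,000: Lyme disease, using the
    # 33,000 cases reported over the three years 1993-1995 and an
    # assumed U.S. population of roughly 260 million.
    annual_cases = 33_000 / 3
    population = 260_000_000
    rate = annual_cases / population * 100_000
    print(f"Lyme incidence = {rate:.1f} per 100,000 per year")   # 4.2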
Lyme disease primarily affects children in the 5-14 age group and adults aged 30-49. It is preventable by avoiding contact with ticks, applying insect repellent, wearing long pants and long sleeves in infected areas, and early removal of attached ticks. Several U.S. manufacturers have produced vaccines which are approved for animal and human use. In the mid-1970s, a mother of two young boys who were recently diagnosed with arthritis in the town of Lyme, Connecticut, conducted a private investigation among other town residents. She mapped each of the six arthritis cases in the town, cases which had occurred in a short time span among boys living in close proximity. This suggested that this syndrome of "juvenile rheumatoid arthritis" was perhaps connected with the boys playing in the woods. She presented her data to the head of rheumatology at Yale Medical School in New Haven, who investigated this "cluster of a new disease entity." Some parents reported that their sons had experienced tick bites and a rash before onset of the arthritis. A tick-borne, spiral-shaped bacterium, a spirochete, Borrelia burgdorferi, was identified as the organism, and ticks were shown to be the vector. Cases respond well to antibiotic therapy. In 1996 over 16,000 cases (6.2 per 100,000) were reported from 45 states, an increase from 11,000 in 1995 and 13,000 in 1994. Cases were mainly located in the northeast, north central, and mid-Atlantic regions. The disease accounts for over 90% of vector-borne disease in the United States and was the ninth leading reported infection in 1995. Lyme disease has been identified in many parts of North America, Europe, the former Soviet Union, China, and Japan. A newly licensed vaccine is effective for people exposed to ticks but is not recommended for general usage. Personal hygiene for protection from ticks and environmental modification are important to limit spread of the disease.

PARASITIC DISEASES

Medically important parasites are animals that live, take nourishment, and thrive in the body of a host; they may or may not harm the host, but they never bring benefit. Parasitic diseases include those caused by unicellular organisms (protozoa), such as malaria, amebiasis, and cryptosporidiosis, and those caused by helminths (worms), which are categorized as nematodes, cestodes, and trematodes (the last including the agents of schistosomiasis). Public health continues to face the problems of parasitic diseases in the developing world. Increasingly, parasitic diseases are being recognized in industrialized countries. Giardiasis and cryptosporidium infections in waterborne and other outbreaks have occurred in the United States. Parasitic diseases are among the most common causes of illness and death in the world, e.g., malaria. Milder illnesses such as giardiasis and trichomoniasis cause widespread morbidity. Intestinal infestations with worms may cause severe complications, although they commonly cause chronic low-grade symptomatology and iron deficiency anemia.

Echinococcosis

Echinococcosis (hydatid cyst disease) is infection with Echinococcus granulosus, a small dog tapeworm. The tapeworm forms unilocular (single, noncompartmental) cysts in the host, primarily in the liver and lungs, but they can also grow in the kidney, spleen, central nervous system, or bones. Cysts, which may grow up to 10 cm in size, may be asymptomatic or, if untreated, may cause severe symptoms and even death. This parasite is common where dogs are used to herd grazing animals and also have intimate contact with humans.
The Middle East, Greece, Sardinia, North Africa, and South America are endemic areas, as are a few areas in the United States and Canada. The human disease has been eliminated in Cyprus and Australia. While the dog is the major host, intermediate hosts include sheep, cattle, pigs, horses, moose, and wolves. Preventive measures include education in food and animal contact hygiene, destroying wild and stray dogs, and keeping dogs from the viscera of slaughtered animals. A similar, but multilocular, cystic hydatid disease is widely found in wild animal hosts in areas of the northern hemisphere, including central Europe, the former Soviet Union, Japan, Alaska, Canada, and the north-central United States. Another echinococcal disease (Echinococcus vogeli) is found in South America, where its natural host is the bush dog and its intermediate host is the rat. The domestic dog also serves as a source of human infection. Surgical resection is not always successful, and long-term medical treatment may be required. Control is through awareness and hygiene, as well as the control of wild animals that come in contact with humans and domestic animals. Control may require cooperation between neighboring countries.

Tapeworm

Tapeworm infestation (taeniasis) is common in tropical countries where hygienic standards are low. Beef (Taenia saginata) and pork (T. solium) tapeworms are common where animals are fed with water or food exposed to human feces. Freezing or cooking meat will destroy the tapeworm. Fish tapeworm (Diphyllobothrium latum) is common in populations living primarily on uncooked fish, such as Inuit people. These tapeworms are usually associated with northern climates. Toddlers are especially susceptible to dog tapeworm (Dipylidium caninum), which is present worldwide; domestic pets are often the source of oral-fecal transmission of the eggs. The disease is usually asymptomatic. Similarly, dwarf tapeworm (Hymenolepis nana) is transmitted through oral-fecal contamination from person to person, or via contaminated food or water. Rat tapeworm (Hymenolepis diminuta) also mostly affects young children.

Onchocerciasis

Onchocerciasis (river blindness) is a disease caused by a parasitic worm, which produces millions of larvae that move through the body causing intense itching, debilitation, and eventually blindness. The disease is spread by a blackfly that transmits the larvae from infected to uninfected people. It is primarily located in sub-Saharan Africa and in Latin America, with over 120 million persons at risk. Control is by a combination of activities, including environmental control with larvicidal sprays to reduce the vector population, protection of potential hosts by protective clothing and insect repellents, and case treatment. A WHO-initiated program for onchocerciasis control started in 1974 is sponsored by four international agencies: the Food and Agriculture Organization (FAO), the United Nations Development Program (UNDP), the World Bank, and WHO. It covers 11 countries in sub-Saharan Africa, focusing on control of the blackfly by destroying its larvae, mainly via insecticides sprayed from the air. Prevalence in 1997 was reported by WHO as over 17 million persons. The program has been successful in protecting some 30 million persons and helping 1.5 million infected persons to recover from this disease. WHO estimates that the program will have prevented 500,000 cases of blindness by the year 2000 and has freed 25 million hectares of land for resettlement and cultivation.
The program cost $570 million. This investment is considered by the World Bank to have a return of 16-28% in terms of large-scale land reuse and improved output of the population. A further WHO program, the African Program for Onchocerciasis Control (APOC), started in 1996, uses a new drug (ivermectin) and selective vector control efforts by spraying. It involves 30 countries in Africa, with 6 more in a similar program in South America, and is financed by many donor countries, international organizations, Merck & Company, and NGOs (see http://www.who.int/ocp).

Dracunculiasis

Dracunculiasis (Guinea worm disease) is a parasitic disease of great public health importance in India, Pakistan, and Central and West Africa. It is an infection of the subcutaneous and deeper tissues caused by a large (60 cm) nematode, usually affecting the lower extremities and causing pain and disability. The nematode causes a burning blister on the skin when it is ready to release its larvae. After the blister ruptures, the worm discharges larvae whenever the extremity is in water. The larvae released in water are ingested by minute crustaceans and remain infective for as long as a month. The infective larvae are in turn ingested by humans in contaminated drinking water and migrate through the viscera to locate as adults in the subcutaneous tissue of the leg. Incubation is about 12 months. Prevention is based on improving the safety of water supplies and on preventing contamination by infected persons. Education of persons in endemic areas to stay out of water sources and to filter drinking water reduces transmission. Insecticides remove the crustaceans. Chlorine also kills the larvae and the crustaceans which prolong larval infectivity. There is no vaccine. Treatment is helpful, but not definitive. Dracunculiasis was traditionally endemic in a belt from West Africa through the Middle East to India and central Asia. It was successfully eliminated from central Asia and Iran and has disappeared from the Middle East and from some African countries (Gambia and Guinea). The World Health Organization has promoted the eradication of dracunculiasis, and major progress has been made in this direction. Worldwide prevalence is reported to have been reduced from 12 million cases in 1980 to 3 million in 1990, 152,814 in 1996, and 77,863 cases in 1997. Eradication was anticipated for the year 2000, and in 1995 the WHO established a commission to monitor and certify eradication in formerly endemic areas. India's reported cases fell from 17,000 in 1987 to 900 in 1992, and the country was free of transmission in 1997. In 1997, formerly high-prevalence countries such as Kenya reported no cases, while Chad, Senegal, Cameroon, Yemen, and the Central African Republic reported fewer than 30 cases each. Eradication of this disease appears to be imminent. The WHO eradication program was developed successfully as an independent program with its own direction and field staff, but further progress will require the integration of this program with other basic primary care programs in order to be self-sustaining as an integral part of community health. Community-based surveillance systems for this disease are being converted to work for monitoring of other health conditions in the community.

Schistosomiasis

Schistosomiasis (snail fever or bilharziasis) is a parasitic infection caused by a trematode (blood fluke) and transmitted from person to person via an intermediate host, the snail. It is endemic in 74 countries in Africa, South America, the Caribbean, and Asia.
There are an estimated 200 million persons infected worldwide and more than 600 million at risk for the disease. The clinical symptoms include fever, nausea, vomiting, abdominal pain, diarrhea, and hematuria. The organisms Schistosoma mansoni and S. japonicum cause intestinal and hepatic symptoms, including diarrhea and abdominal pain. Schistosoma haematobium affects the genitourinary tract, causing chronic cystitis and pyelonephritis, with high risk for bladder cancer, the ninth most common cause of cancer deaths globally. Infection is acquired by skin contact with freshwater containing contaminated snails. The cercariae of the organism penetrate the skin and, in the human host, mature into adult worms that mate and produce eggs. The eggs are disseminated to other parts of the body from the worm's location in the veins surrounding the bladder or the intestines, and may result in neurological symptoms. Eggs may be detected on microscopic examination of urine and stools. Sensitive serologic tests are also available. Treatment is effective against all three major species of schistosomiasis. Eradication of the disease can be achieved with proper management of irrigation canals, prevention of contamination of water sources by urine and feces of infected persons, treatment of infected persons, destruction of snails, and health education in affected areas. Persons exposed to freshwater lakes, streams, and rivers in endemic areas should be warned of the danger of infection. Mass chemotherapy in communities at risk and improved water and sanitation facilities are resulting in improved control of this disease.

Leishmaniasis

Leishmaniasis causes both cutaneous and visceral disease. The cutaneous form is a chronic ulcer of the skin, called by various names, e.g., rose of Jericho, oriental sore, and Aleppo boil. It is caused by Leishmania tropica, L. braziliensis, L. mexicana, or the L. donovani complex. The chronic ulcer may last from weeks to more than a year. Diagnosis is by biopsy, culture, and serologic tests. The organism multiplies in the gut of sandflies (Phlebotomus and Lutzomyia) and is transmitted to humans, dogs, and rodents through bites. The parasites may remain in the untreated lesion for 5-24 months, and the lesion does not heal until the parasites are eliminated. Prevention is through limiting exposure to the phlebotomines and reducing the sandfly population by environmental control measures. Insecticide use near breeding places and homes has been successful in destroying the vector sandflies in their breeding places. Case detection and treatment reduce the incidence of new cases. There is no vaccine, and treatment is with specific antimonials and antibiotics.

Visceral Leishmaniasis

Visceral leishmaniasis (kala azar) is a chronic systemic disease in which the parasite multiplies in the cells of the host's visceral organs. The disease is characterized by fever, enlargement of the liver and spleen, lymphadenopathy, anemia, leukopenia, and progressive weakness and emaciation. Diagnosis is by culture of the organism from biopsy or aspirated material, or by demonstration of intracellular (Leishman-Donovan) bodies in stained smears from bone marrow, spleen, liver, or blood. Kala azar is a rural disease occurring in the Indian subcontinent, China, the southern republics of the former U.S.S.R., the Middle East, Latin America, and sub-Saharan Africa. It usually occurs as scattered cases among infants, children, and adolescents.
Transmission is by the bite of the infected sandfly, with an incubation period of 2-4 months. There is no vaccine, but specific treatment is effective and environmental control measures reduce the disease prevalence. These measures include the use of residual insecticides, as in antimalaria campaigns. In localities where the dog population has been reduced, the disease is less prevalent.

Trypanosomiasis

Sleeping Sickness. Sleeping sickness is a disease caused by Trypanosoma brucei, transmitted by the tsetse fly, primarily in the African savannahs, affecting cattle and humans. Some 55 million persons are at risk in sub-Saharan Africa. WHO reported 200,000 new cases, a total prevalence of 300,000 cases, and 150,000 deaths from this disease in 1996. Prevention depends on vector control and effective treatment of human cases.

Chagas Disease (American Trypanosomiasis). Chagas disease is a chronic and incurable parasitic disease (Trypanosoma cruzi), borne by vectors and blood transfusion, which causes disability and death. It affects some 17 million persons, mainly in Latin America, with some 300,000 new cases and 45,000 deaths occurring annually. About 30% of affected persons develop severe heart disease. Brazil, which accounts for 40% of the cases prevalent in Latin America, achieved elimination of transmission in 1998, after Uruguay (1996) and Venezuela (1997), and was followed by Argentina (1999). Elimination of transmission is projected by WHO by the year 2010. Control is difficult, but control measures include reducing the animal host and vector insect populations in their habitats by ecological and insecticide measures, education of the population in prevention by clothing, bednets, and repellents, and chemotherapy for case management.

Other Parasitic Diseases

Amebiasis. Amebiasis is an infection with a protozoan parasite (Entamoeba histolytica) which exists as an infective cyst. Infection may be asymptomatic or may cause acute, severe diarrhea with blood and mucus, alternating with constipation. Amebic colitis can be confused with ulcerative colitis. Diagnosis is by microscopic examination of fresh fecal specimens showing trophozoites or cysts. Transmission is generally via ingestion of fecally contaminated food or water containing cysts, or by oral-anal sexual practices. Amebiasis is found worldwide. Sand filtration of community water supplies removes nearly all cysts. Suspect water should be boiled. Education regarding hygienic practices, safe food and water handling, and disposal of human feces is the basis for control.

Ascariasis. Ascariasis is infestation of the small intestine with the roundworm Ascaris lumbricoides, which may appear in the stool, occasionally in the nose or mouth, or may be coughed up from lung infestation. The roundworm is very common in tropical countries, where infestation may reach or exceed 50% of the population. Children aged 3-8 years are especially susceptible. Infestation can cause pulmonary symptoms and frequently contributes to malnutrition, especially iron deficiency anemia. Transmission is by ingestion of infective eggs, common among children playing in contaminated areas, or via the ingestion of uncooked produce grown in infected soil. Eggs may remain viable in the soil for years. Mebendazole (Vermox) and other treatments are effective. Prevention is through education, adequate sanitary facilities for excretion, and improved hygienic practices, especially with food. Use of human feces for fertilizer, even after partial treatment, may spread the infestation. Mass treatment is indicated in high-prevalence communities.
Pinworm Disease or Enterobiasis. Pinworm disease (oxyuriasis) is common worldwide in all socioeconomic classes; however, it is more widespread where crowded and unsanitary living conditions exist. The Enterobius vermicularis infestation of the intestine may be symptomless or may cause severe perianal itching or vulvovaginitis. It primarily affects schoolchildren and preschoolers. More severe complications may occur. Adult worms may be seen with the naked eye or identified by microscopic examination of stool specimens or perianal swabs. Transmission is by fecal-oral ingestion of eggs. The larvae grow in the small intestine and upper colon. Prevention is by educating the public regarding hygiene and adequate sanitary facilities, as well as by treating cases and investigating contacts. Treatment is the same as for ascariasis. Mass treatment is indicated in high-prevalence communities.

Ectoparasites. Ectoparasites include scabies (Sarcoptes scabiei), the common bed bug (Cimex lectularius), fleas, and lice, including the body louse (Pediculus humanus), pubic louse (Phthirus pubis), and head louse (Pediculus humanus capitis). Their severity ranges from nuisance value to serious public health hazard. Head lice are common in schoolchildren worldwide and are mainly a distressing nuisance. The body louse serves as a vector for epidemic typhus, trench fever, and louse-borne relapsing fever. In disaster situations, disinfection and hygienic practices may be essential to prevent epidemic typhus. The flea plays an important role in the spread of plague by transmitting the organism from the rat to humans. Control of rats has reduced the flea population, but during war and disasters, rat and flea populations may thrive. Scabies, which is caused by a mite, is common worldwide and is transmitted from person to person. The mite burrows under the skin and causes intense itching. All of these ectoparasites are preventable by proper hygiene and the treatment of cases. The spread of these diseases is rapid and therefore warrants attention in school health and public health policy.

Legionnaires' Disease. Legionnaires' disease (legionellosis) is an acute bacterial disease caused by Legionella, a gram-negative group of bacilli with 35 species and many serogroups. The first documented case was reported in the United States in 1947, and the first disease outbreak was reported in the United States in 1976 among participants of a war veterans convention. General malaise, anorexia, myalgia, and headache are followed by fever, cough, abdominal pain, and diarrhea. Pneumonia may develop, followed by respiratory failure. The case fatality rate can be as high as 40% of hospitalized cases. A milder, nonpneumonic form of the disease (Pontiac fever) is associated with virtually no mortality. The organism is found in water reservoirs and is transmitted through heating, cooling, and air conditioning systems, as well as from tap water, showers, saunas, and jacuzzi baths. The disease has been reported in Australia, Canada, South America, Europe, Israel, and on cruise ships. Prevention requires the cleaning of water towers and cooling systems, including whirlpool spas. Hyperchlorination of water systems and the replacement of filters are required where cases and/or organisms have been identified. Antibiotic treatment with erythromycin is effective.

LEPROSY

Leprosy (Hansen's disease) was widely prevalent in Europe and Mediterranean countries for many centuries, with some 19,000 leprosaria in the year 1300.
Leprosy was largely wiped out during the Black Death in the fourteenth century, but continued in endemic form until the twentieth century. Leprosy is a chronic bacterial infection of the skin, peripheral nerves, and upper airway. In the lepromatous form, there is diffuse infiltration of the skin with nodules and macules, usually bilateral and extensive. The tuberculoid form of the disease is characterized by clearly demarcated skin lesions with peripheral nerve involvement. Diagnosis is based on clinical examination of the skin and signs of peripheral nerve damage, skin scrapings, and skin biopsy. Transmission of the Mycobacterium leprae organism is by close contact from person to person, with incubation periods of between 9 months and 20 years (average 4-8 years). Rifampicin and other medications make the patient noninfectious in a short time, so that ambulatory treatment is possible. Multidrug therapy (MDT) has been shown to be highly effective in combating the disease, with a very low relapse rate. Treatment with MDT ensures that the bacillus does not develop drug resistance. MDT covered 91% of known cases in 1996, according to WHO reports, as compared to only 55% in 1994; the increase has been associated with improved case finding. BCG may be useful in reducing tuberculoid leprosy among contacts. Investigation of contacts over 5 years is recommended. The disease is still highly endemic primarily in five countries (India, Brazil, Indonesia, Myanmar, and Bangladesh) and is still present in some 80 countries in Southeast Asia (including the Philippines), sub-Saharan Africa, the Middle East (Sudan, Egypt, Iran), and parts of Latin America (Mexico, Colombia), with isolated cases in the United States. World prevalence has declined from 10.5 million cases in 1980 and 5.5 million in 1990 to less than 1 million cases in 1995. The World Health Organization expects to eliminate leprosy as a public health problem by the year 2000, defined as a prevalence of less than 1 per 10,000 population, or less than 300,000 cases.

TRACHOMA

Trachoma is currently responsible for 6 million blind persons, or 15% of total blindness in the world. The causative organism, Chlamydia trachomatis, is a bacterium that can survive only within a cell. It is spread through contact with eye discharges, usually by flies, or by household items (e.g., handkerchiefs, washcloths). Trachoma is common in poor rural areas of Central America, Brazil, Africa, parts of Asia, and some countries in the eastern Mediterranean. The resulting infection leads to conjunctival scarring and, if untreated, to blindness. WHO estimates there are 148 million cases of active disease in 46 endemic countries. Hygiene, vector control, and treatment with antibiotic eye ointments or simple surgery for scarring of eyelids and inturned eyelashes prevent the blindness. A new drug, azithromycin, is effective in curing the disease. The WHO is promoting a program for the global elimination of trachoma using azithromycin and hygiene education in endemic areas. Chlamydia pneumoniae is suspected of playing a role in coronary artery disease through intraarterial infection, with plaque formation and occlusion of the artery by thrombi consisting mainly of platelets. If borne out, this will provide potential for low-cost intervention to reduce the burden of the leading worldwide cause of death.
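Returning to the leprosy elimination target above, note that the WHO definition is a rate (less than 1 case per 10,000 population), so the corresponding absolute case threshold depends on the population to which the rate is applied. A minimal Python sketch of this arithmetic follows; the 3 billion denominator is an illustrative assumption (roughly the population of the endemic regions), not a figure from the text:

# Arithmetic behind the WHO leprosy elimination target
# ("elimination as a public health problem" = prevalence < 1 per 10,000).

def elimination_threshold(population: int) -> float:
    # Maximum case count consistent with < 1 case per 10,000 population.
    return population / 10_000

# World prevalence figures cited in the text (cases).
prevalence = {1980: 10_500_000, 1990: 5_500_000, 1995: 1_000_000}
decline = 1 - prevalence[1995] / prevalence[1980]
print(f"Decline 1980-1995: {decline:.0%}")  # about 90%

# An assumed 3 billion people in endemic regions gives a threshold of
# 300,000 cases, matching the figure quoted above.
print(f"{elimination_threshold(3_000_000_000):,.0f} cases")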
SEXUALLY TRANSMITTED DISEASES

Sexually transmitted diseases (STDs) are widespread internationally, with an estimated 330 million new cases per year. AIDS, with 5.8 million new cases, over 30 million total cases, and 2.3 million deaths in 1997, has captured world attention over the past decade. The global burden of STDs is enormous (Table 4.8), and the public health and social consequences are devastating in many countries. Sexually transmitted diseases, especially in women, may be asymptomatic, so that severe sequelae may occur before patients seek care. Infection by one STD increases the risk of infection by the other diseases in this group.

Syphilis

Syphilis is caused by the spirochete Treponema pallidum. After an incubation period of 10-90 days (mean 21 days), primary syphilis develops as a painless ulcer or chancre on the penis, cervix, nose, mouth, or anus, lasting 4-6 weeks. The patient may first present with secondary syphilis 6-8 weeks (up to 12 weeks) after infection, with a general rash and malaise, fever, hair loss, arthritis, and jaundice. These symptoms spontaneously disappear within weeks or up to 12 months later. Tertiary syphilis may appear 5-20 years after the initial infection. Complications of tertiary syphilis include catastrophic cardiovascular and central nervous system conditions. Early antibiotic treatment is highly effective when given in a large initial dose, but longer term therapy may be needed if treatment is delayed.

Gonorrhea

Gonorrhea (GC) is caused by the bacterium Neisseria gonorrhoeae. The incubation period is 1-14 days. Gonorrhea is often associated with concurrent chlamydia infection. In women, GC may be asymptomatic or it may cause vaginal discharge, pain on urination, bleeding on intercourse, or lower abdominal pain. Untreated, it can lead to sterility. In men, GC causes urethral discharge and painful urination. Treatment with antibiotics ends infectivity, but untreated cases can be infectious for months. Drug resistance to penicillin and tetracycline has increased in many countries, so that more expensive and often unavailable drugs are necessary for treatment. Prevention of gonococcal eye infection in newborns is based on the routine use of antibiotic ointments in the eyes of newborns.

Other Sexually Transmitted Diseases

Chancroid. Chancroid is caused by Haemophilus ducreyi. In women, chancroid may cause a painful, irregular ulcer near the vagina, resulting in pain on intercourse, urination, and defecation, but it may be asymptomatic. In men, it causes a painful, irregular ulcer on the penis. The incubation period is usually 3-5 days, but may be up to 14 days. An individual is infectious as long as there are ulcers, usually 1-3 months. Treatment is with erythromycin or azithromycin.

Herpes Simplex. Herpes simplex is caused by herpes simplex virus types 1 and 2 and has an incubation period of 2-12 days. Genital herpes causes painful blisters around the mouth, vagina, penis, or anus. The genital lesions are infectious for 7-12 days. Herpes may lead to central nervous system meningoencephalitis. It can be transmitted to newborns during vaginal delivery, causing infection, encephalitis, and death. Cesarean delivery is therefore necessary when a mother is infected. Antiviral drugs are used in treatment, orally, topically, or intravenously.

Chlamydia. Chlamydia is caused by Chlamydia trachomatis. In women, it is usually asymptomatic but may cause vaginal discharge, spotting, pain on urination, lower abdominal pain, and pelvic inflammatory disease (PID).
In newborns, chlamydia may cause eye and respiratory infections. In men, chlamydia causes urethral discharge and pain on urination. The incubation period is 7-21 days and the infectious period is unknown. Treatment for chlamydia is doxycycline, azithromycin, or erythromycin. Chlamydia infection, not necessarily venereal in transmission, may be transmitted to newborns of infected mothers. Chlamydia pneumoniae, presently under investigation as a possible cause of or contributor to coronary heart disease, is widespread in poor hygienic conditions.

Trichomoniasis. Trichomoniasis is caused by Trichomonas vaginalis. The incubation period is 4-20 days (mean 7 days). In women, trichomoniasis may be asymptomatic or may cause a frothy vaginal discharge with a foul odor, and painful urination and intercourse. In men, the disease is usually mild, causing pain on urination. Treatment is with metronidazole taken orally. Without treatment, the disease may persist and remain infectious for years.

Condyloma. Condyloma, or viral wart, is caused by human papilloma virus (HPV). It is a sporadic disease which may be associated with cervical neoplasia and cancer of the cervix. HPV includes many types associated with a variety of conditions. The search for an HPV vaccine to prevent cancer of the cervix looks promising.

Control of Sexually Transmitted Infections

In areas where a full range of diagnostic services is lacking, a "syndromic approach" is recommended for the control of STDs. The diagnosis is based on a group of symptoms, and treatment follows a protocol addressing all the diseases that could possibly cause those symptoms, without expensive laboratory tests and repeated visits. Early treatment without laboratory confirmation helps to cure persons who might not return for follow-up, or may place them in a noninfective stage so that even without follow-up they will not transmit the disease. STD incidence between 1950 and 1996 is shown in Table 4.9, with an overall decline except for a rise around 1990, followed by a further fall in incidence. [Table 4.9: STD incidence, selected years 1950-1996; rates are cases per 100,000 population, rounded. The rise in syphilis around 1985-1990 and its subsequent decline by more than 50% in reported cases include all three stages of the disease as well as congenital syphilis.] Screening in prenatal and family planning clinics, prison medical services, and clinics serving prostitutes, homosexuals, or other potential risk groups will detect subclinical cases of various STDs. Treatment can be carried out cheaply and immediately. For instance, the screening test for syphilis costs $0.10 and treatment with a benzathine penicillin injection costs about $0.40 (1998 prices). Partner notification is a controversial issue, but may be needed to identify contacts who may be the source of transmission to others. Control of STDs through a syndromic approach based on primary care providers is being promoted by WHO. Health education directed at high-risk target groups is essential. Providing easy and cost-free access to acceptable, nonthreatening treatment is vital in promoting the early treatment of cases and thereby reducing the risk of transmission. Promoting prevention through the use of condoms and/or monogamy requires long-term educational efforts that are now fostered by the HIV/AIDS pandemic. Increased use of condoms for HIV prevention is associated with reduced risk of other STDs.
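As a back-of-the-envelope illustration of why such screening is attractive, the cost of finding and curing one case can be computed from the unit costs quoted above. The following Python sketch does this; the 2% prevalence used in the example is a hypothetical value for a high-risk clinic population, not a figure from the text:

# Cost per syphilis case detected and treated, using the 1998 unit
# costs quoted above (screening test $0.10, benzathine penicillin $0.40).
TEST_COST = 0.10
TREATMENT_COST = 0.40

def cost_per_case_found(prevalence: float) -> float:
    # Expected number of tests to find one case is 1/prevalence.
    return TEST_COST / prevalence + TREATMENT_COST

# At an assumed 2% prevalence among those screened:
print(f"${cost_per_case_found(0.02):.2f} per case detected and treated")  # $5.40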
Training medical care providers in STD awareness should be stressed in undergraduate and continuing educational efforts, including personal protection as caregivers.

HIV/AIDS

Human immunodeficiency virus (HIV) is a retrovirus that infects various cells of the immune system and also affects the central nervous system. Two types have been identified: HIV-1, worldwide in distribution, and the less pathogenic HIV-2, found mainly in West Africa. HIV is transmitted by sexual contact, exposure to blood and blood products, perinatally, and via breast milk. The period of communicability is unknown, but studies indicate that infectiousness is high both during the initial period after infection and later in the disease. Antibodies to HIV usually appear within 1-3 months. Within several weeks to months of the infection, many persons develop an acute self-limited flulike syndrome. They may then be free of any signs or symptoms for months to more than 10 years. Onset of illness is usually insidious, with nonspecific symptoms including sweats, diarrhea, weight loss, and fatigue. AIDS represents the later clinical stage of HIV infection. According to the revised CDC case definition (1993), AIDS involves any one or more of the following: low CD4 count, severe systemic symptoms, opportunistic infections such as pneumocystis pneumonia or TB, aggressive cancers such as Kaposi's sarcoma or lymphoma, and/or neurological manifestations, including dementia and neuropathy. The WHO case definition is more clinically oriented, relying less on often unavailable laboratory diagnoses for indicator diseases. AIDS was first recognized clinically in 1981 in Los Angeles and New York. By mid-1982 it was considered an epidemic in those and other U.S. cities. It was primarily seen among homosexual men and recipients of blood products. After initial errors, testing of blood and blood products became standard and has subsequently closed off this method of transmission. Transmission has changed markedly since the initial onslaught of the disease, with needle sharing among intravenous drug users, heterosexual transmission, and maternal-fetal transmission becoming major factors. Comorbidity with other STDs apparently increases HIV infectivity and may have helped to convert the epidemiology to a greater degree of heterosexual transmission. The disease grew exponentially in the United States (Table 4.10), but the incidence of new cases has declined since 1993. AIDS has become a major public health problem in most developed and developing countries, reaching catastrophic proportions in some sub-Saharan African countries, affecting up to 30% of the population. HIV-related deaths were the eighth leading cause of all deaths in 1993 in the U.S., the leading cause among men aged 25-44 years, and the fourth leading cause for women in this age group. By 1996, AIDS had been diagnosed in 548,000 persons in the United States and 343,000 had died. It is estimated that up to 1 million persons are HIV infected in the United States. Globally, deaths from AIDS totalled 2.3 million in 1997, with an estimated 11.7 million persons having died in this pandemic up to 1997. By 1998, an estimated 31 million persons were living with HIV infection, with 5.8 million new infections in 1997. The declining incidence of new cases in the industrialized countries may be the result of greater awareness of the disease and of methods of prevention of transmission.
Improving early diagnosis and access to care, especially the combined therapy programs that are very effective in delaying the onset of symptoms, are important parts of the public health management of the AIDS crisis. Until an effective vaccine is available, preventive reliance will continue to be on behavioral risk reduction and other prevention strategies such as needle and condom distribution among high-risk population groups. Throughout the world, HIV continues to spread rapidly, especially in poor countries in Africa, Asia, and South and Central America. The United Nations reports that 21 million persons are living with HIV/AIDS, 90% of them in developing countries, where transmission is 85% by heterosexual contact. Every day, more than 8500 persons are infected, including 1000 children. In Thailand, 1 person in 50 is now infected. In sub-Saharan Africa, 1 person in 40 is infected, and in some cities as many as 1 person in 3 carries the virus. Estimates of new infections per year in sub-Saharan Africa range from 1 to 2 million persons, while in Asia the range is from 1.2 to 3.5 million newly infected persons per year. Lessons are still being learned from the AIDS pandemic. The explosive spread of this infection, from an estimated 100,000 people in 1980 to an anticipated 40 million HIV-infected persons, shows that the world is still vulnerable to pandemics of "new" infectious diseases. Enormous movements of tourists, business people, truck drivers, migrants, soldiers, and refugees promote the spread of such diseases. Widespread sexual exchange, traffic in blood products, and illicit drug use all promote the international potential for pandemics. War and massive refugee situations promote rape and prostitution, worsening the AIDS situation in some settings in Africa. HIV has arrived in almost every country. However, there is the somewhat hopeful indication that the rate of increase has slowed in the United States. This may be an indication either of higher levels of self-protective behavior, or that the most susceptible population groups have already been affected and the spread into the general population is at a slower rate. It is also possible that this may yet prove to be only a lull in the storm, as heterosexual contact becomes a more important mode of transmission. The Eleventh International Conference on AIDS, held in Vancouver, Canada, in July 1996, reported signs that combinations of several drugs from among a number of antiretroviral medications are showing promise in suppressing the AIDS virus in infected people. At a current annual price of $10,000-15,000 per patient, these sums are well beyond the capacity of most developing countries. Development of methods of measuring the HIV viral load has allowed for better evaluation of potential therapies and monitoring of patients receiving therapy. In developed countries, transmission by blood products has been largely controlled by screening tests; transmission among homosexuals has been reduced by safe sex practices; and transmission to newborns has been reduced by recent therapeutic advances. Safe sex practices and condom use may have helped in reducing heterosexual transmission. Further advances in therapy and prevention with a vaccine are expected over the next decade.
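The "explosive spread" described above can be made concrete with a simple exponential model fitted to the two endpoints given in the text: roughly 100,000 infected in 1980 and an anticipated 40 million. The Python sketch below assumes the 40 million figure refers to about the year 2000 (an assumption, since the text gives no year) and ignores the flattening that real epidemic curves show as susceptible pools saturate:

import math

# Exponential-growth sketch for the HIV pandemic from the text's endpoints.
n0, year0 = 100_000, 1980        # estimated infections in 1980
n1, year1 = 40_000_000, 2000     # anticipated total (target year assumed)

rate = math.log(n1 / n0) / (year1 - year0)   # continuous annual growth rate
doubling_time = math.log(2) / rate

print(f"Implied growth rate: {rate:.1%} per year")          # about 30%/year
print(f"Implied doubling time: {doubling_time:.1f} years")  # about 2.3 years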
The HIV/AIDS pandemic is one of the great challenges to public health for the 21st century due to its complexity, its international spread, its sexual and other modes of transmission, its devastating and costly clinical effects, and its impact on parallel diseases such as tuberculosis, respiratory infections, and cancer. The cost of care for the AIDS patient can be very high. Needed programs include home care and community health workers to improve nutrition and self-care, and mutual help among HIV carriers and AIDS patients. The ethical issues associated with AIDS are also complex, regarding screening of pregnant women and newborns, partner notification, reporting, and contact tracing, as well as financing the cost of care.

DIARRHEAL DISEASES

Diarrheal diseases are caused by a wide variety of bacteria, parasites, and viruses (Table 4.11) infecting the intestinal tract and causing secretion of fluids and dissolved salts into the gut, with mild to severe or fatal complications. In developing countries, diarrheal diseases account for half of all morbidity and a quarter of all mortality. Diarrhea itself does not cause death, but the dehydration resulting from fluid and electrolyte loss is one of the most common causes of death in children worldwide. Deaths from dehydration can be prevented by use of oral rehydration therapy (ORT), an inexpensive and simple method of intervention easily used by a nonmedical primary care worker and by the mother of the child as a home intervention. In 1983, diarrheal diseases were the cause of almost 4 million child deaths, but by 1996 this had declined to 2.4 million, largely under the impact of increased use of ORT. Diarrheal diseases are transmitted by water, by food, and directly from person to person via oral-fecal contamination. Diarrheal diseases occur in epidemics in situations of food poisoning or contaminated water sources, but can also be present at high levels when common-source contamination is not found. Contamination of drinking water by sewage and poor management of water supplies are also major causes of diarrheal disease. The use of sewage for the irrigation of vegetables is a common cause of diarrheal disease in many areas.

Salmonella

Salmonella are a group of bacterial organisms causing acute gastroenteritis, associated with generalized illness including headache, fever, abdominal pains, and dehydration. There are over 2000 serotypes of salmonella, many of which are pathogenic in humans, the most common being Salmonella typhimurium, S. enteritidis, and S. typhi. Transmission is by ingestion of the organisms in food derived from fecal material of animal or human origin. Common sources include raw or undercooked eggs, raw milk, meat, poultry and their products, as well as pet turtles or chicks. Fecal-oral transmission from person to person is common. Prevention lies in safe animal and food handling, refrigeration, sanitary preparation and storage, protection against rodent and insect contamination, and the use of sterile techniques during patient care. Antibiotics may not eliminate the carrier state and may produce resistant strains.

Shigella

Shigella are a group of bacteria that are pathogenic in man, with four groups: Type A = Shigella dysenteriae, Type B = S. flexneri, Type C = S. boydii, and Type D = S. sonnei. Types A, B, and C are each further divided, into a total of 40 serotypes. Shigella are transmitted by direct or indirect fecal-oral routes from a patient or carrier, and illness follows ingestion of even a few organisms.
Water and milk transmission occurs as a result of contamination. Flies can transmit the organism, and in nonrefrigerated foods the organism may multiply to an infectious dose. Control lies in hygienic practices and in the safe handling of water and food.

Escherichia coli

E. coli are common fecal contaminants of inadequately prepared and cooked food. Particularly virulent strains such as O157:H7 can cause explosive outbreaks of severe (enterohemorrhagic) diarrheal disease with a hemolytic-uremic syndrome and death, as occurred in Japan in 1998 with cases and deaths due to a foodborne epidemic. Other, milder strains cause travelers' diarrhea and nursery infections. Inadequately cooked hamburger, unpasteurized milk, and other food vectors are discussed under food safety in Chapter 8.

Cholera

Cholera is an acute bacterial enteric disease caused by Vibrio cholerae, with sudden onset, profuse painless watery stools, occasional vomiting, and, if untreated, rapid dehydration, circulatory collapse, and death. Asymptomatic infection or carrier status and mild cases are common. In severe, untreated cases, mortality is over 50%, but with adequate treatment, mortality is under 1%. Diagnosis is based on clinical signs and epidemiologic features, with serologic and bacteriologic confirmation by culture. The two types of cholera are the classic and El Tor (with Inaba and Ogawa serotypes). In 1991, a large-scale epidemic of cholera spread through much of South America. It was imported via a Chinese freighter, whose sewage contaminated shellfish in Lima harbor in Peru (Box 4.12). The South American cholera epidemic has caused hundreds of thousands of cases and thousands of deaths since 1991. Prevention requires sanitation, particularly the chlorination of drinking water, prohibiting the use of raw sewage for the irrigation of vegetable crops, and high standards of community, food, and personal hygiene. Treatment is prompt fluid therapy with electrolytes in large volume to replace all fluid loss. Oral rehydration should be accomplished using standard ORT. Tetracycline shortens the duration of the disease, and chemoprophylaxis for contacts, following stool samples, may help in reducing its spread. A vaccine is available but is of no value in the prevention of outbreaks.

[Box 4.12: Cholera in the Americas] In the 1980s, Peruvian officials stopped the chlorination of community water supplies because of concern over possible carcinogenic effects of trihalomethanes, a view encouraged by officials of the U.S. Environmental Protection Agency (EPA) and the U.S. Public Health Service. In January 1991, a Chinese freighter arrived in Lima, Peru, and dumped bilge (sewage) in the harbor, apparently contaminating local shellfish. Consumption of raw shellfish, a popular local delicacy (ceviche), was associated with cases of cholera seen in local hospitals. Contamination of local water supplies from sewage resulted in a geometric increase in cases, and by the end of 1992 the Pan American Health Organization (PAHO) reported an epidemic of 391,000 cases and 4002 deaths. The epidemic spread to 21 countries, and in 1992 there were a further 339,000 cases and 2321 deaths over much of South America, continuing in 1999. In the United States, 102 cases of cholera were reported in 1992; of these, 75 cases and 1 death were among passengers of an airplane flying from South America to Los Angeles in which contaminated seafood was served. In 1993, 91 cases of cholera were reported in the United States which were unrelated to international travel. These occurred mostly among persons consuming shellfish from the Gulf coast infected with a strain of cholera similar to the South American strain, also possibly introduced in ship ballast. Cholera organisms have been reported in harbor waters in other parts of the United States (ProMED, 1999).

Viral Gastroenteritis

Viral gastroenteritis can occur in sporadic or epidemic forms, in infants, children, or adults. Some viruses, such as the rotaviruses and enteric adenoviruses, affect mainly infants and young children, and may be severe enough to cause hospitalization for dehydration. Others, such as Norwalk and Norwalk-like viruses, affect older children and adults in self-limited acute gastroenteritis in family, institution, or community outbreaks.

Rotaviruses. Rotaviruses cause acute gastroenteritis in infants and young children, with fever and vomiting, followed by watery diarrhea and occasionally severe dehydration and death if not adequately treated. Diagnosis is by examination of stool or rectal swabs with commercial immunologic kits. In both developed and developing countries, rotavirus is the cause of about one-third of all hospitalizations for diarrheal diseases in infants and children up to age 5. Most children in developing countries experience this disease by the age of 4 years, with the majority of cases between 6 and 24 months of age. In developing countries, rotaviruses are estimated to cause over 800,000 deaths per year. The virus is found in temperate climates in the cooler months and in tropical countries throughout the year. Breastfeeding does not prevent the disease but may reduce its severity. Oral rehydration therapy is the key treatment. A live attenuated vaccine was approved by the FDA in 1998 and adopted in the 1999 U.S. recommended routine vaccination programs for infants.

Adenoviruses. Adenoviruses, Norwalk, and a variety of other viruses (including astrovirus, calicivirus, and other groups) cause sporadic acute gastroenteritis worldwide, mostly in outbreaks. Spread is by the oral-fecal route, often in hospital or other communal settings, with secondary spread among family contacts. Food-borne and waterborne transmission are both likely. These can be a serious problem in disaster situations. No vaccines are available. Management is with fluid replacement and hygienic measures to prevent secondary spread.

Parasitic Gastroenteritis

Giardiasis. Giardiasis (caused by Giardia lamblia) is a protozoan parasitic infection of the upper small intestine, usually asymptomatic, but sometimes associated with chronic diarrhea, abdominal cramps, bloating, frequent loose greasy stools, fatigue, and weight loss. Malabsorption of fats and vitamins may lead to malnutrition. Diagnosis is by the presence of cysts or other forms of the organism in stools, duodenal fluid, or intestinal mucosa from a biopsy. This disease is prevalent worldwide and affects mostly children. It is spread in areas of poor sanitation and in preschool settings and swimming pools, and is of increasing importance as a secondary infection among immunocompromised patients, especially those with AIDS. Waterborne giardia was recognized as a serious problem in the United States in the 1980s and 1990s, since the protozoan is not readily inactivated by chlorine, but requires adequate filtration before chlorination.
Person-to-person transmission in day-care centers is common, as is transmission by unfiltered stream or lake water where contamination by human or animal feces is to be expected. An asymptomatic carrier state is common. Prevention relies on careful hygiene in settings such as day-care centers, filtration of public water supplies, and the boiling of water in emergency situations.

Cryptosporidium. Cryptosporidium parvum is a parasitic infection of the gastrointestinal tract in humans and in a wide range of mammals and other vertebrates. Infection may be asymptomatic or may cause a profuse, watery diarrhea, abdominal cramps, general malaise, fever, anorexia, nausea, and vomiting. In immunosuppressed patients, such as persons with AIDS, it can be a serious problem. The disease is most common in children under 2 years of age and those in close contact with them, as well as in homosexual men. Diagnosis is by identification of cryptosporidium oocysts in stools. The disease is present worldwide. In Europe and the United States, the organism has been found in <1 to 4.5% of individuals sampled. Spread by person-to-person contact through fecal-oral contamination is common, especially in such settings as day-care centers. Raw milk and waterborne outbreaks have also been identified in recent years. A large waterborne disease outbreak due to cryptosporidium occurred in Milwaukee in 1993, described in Chapter 9. Management is by rehydration, and prevention is by careful hygiene in food and water safety.

Helicobacter pylori. Helicobacter pylori, first identified in 1986, is a bacterium causally linked to duodenal ulcers and gastritis, contributing to high rates of gastric cancer (Chapter 5). It is an important example of the link between infection and chronic disease. This has enormous implications for the prevention of cancer of the stomach and chronic peptic ulcers and for the large-scale use of hospitals and other medical resources (see Chapter 5).

A Program Approach to Diarrheal Disease Control

The control of diarrheal diseases requires a comprehensive program involving a wide range of activities, including good management of food and water supplies, education in hygiene, and, particularly where morbidity and mortality are high, education in the use of oral rehydration therapy (ORT). ORT is considered by UNICEF and WHO to have resulted in the saving of 1 million lives each year in the 1990s. Proper management of an episode of diarrhea by ORT (Table 4.12), along with continued feeding, not only saves the child from dehydration and immediate death, but also contributes to the early restoration of nutritional adequacy, sparing the child the prolonged effects of malnutrition. The World Summit for Children (WSC) in 1990 called for a reduction in child deaths from diarrheal diseases by one-third and from malnutrition by one-half, with emphasis on the widest possible availability of, education for, and use of ORT. This requires a programmatic approach. Public health leadership must train primary care doctors, pediatricians, pharmacists, drug manufacturers, and primary care health workers of all kinds in ORT principles and usage. They must be backed by the widest possible publicity to raise awareness among parents. Oral rehydration therapy is an important public health modality in developed countries as well as in developing countries.
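For concreteness, the widely published classic WHO/UNICEF oral rehydration salts (ORS) recipe of that period can be scaled to any batch volume, as in the Python sketch below. The gram figures are the standard published formulation, cited here for illustration only and not as clinical guidance; the text's Table 4.12 is not reproduced here:

# Classic WHO/UNICEF ORS formulation (grams per liter of clean water).
ORS_G_PER_L = {
    "glucose": 20.0,
    "sodium chloride": 3.5,
    "trisodium citrate, dihydrate": 2.9,  # or 2.5 g sodium bicarbonate
    "potassium chloride": 1.5,
}

def ors_batch(liters: float) -> dict:
    # Scale the per-liter recipe to a batch of the given volume.
    return {salt: grams * liters for salt, grams in ORS_G_PER_L.items()}

for salt, grams in ors_batch(2.0).items():  # a 2-liter home batch
    print(f"{salt}: {grams:.1f} g")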
Diarrheal disease may not cause death as frequently in developed countries, but it is still a significant factor in infant and child health and, even under the most optimal conditions, can cause setbacks in the nutritional state and physical development of a child. Use of ORT does not prevent the disease (i.e., it is not a primary prevention), but it is excellent in secondary prevention, by preventing the complications of diarrhea, and should be available in every home for the symptomatic treatment of diarrheal diseases. An adaptation of ORT has found its place in popular culture in the United States: a form of ORT, marketed as "sports drinks," is used in sports where athletes lose large quantities of water and salts in sweat and in insensible loss from the respiratory tract. The wider application of the principles of ORT for adults in dry, hot climates and for adults under severe physical exertion with inadequate fluid and salt intake requires further exploration. Management of diarrheal diseases should be part of a wider approach to child nutrition. The child who goes through an episode of diarrheal disease may have a faltering in growth and development. Supportive measures may be needed following the episode as well as during it. This involves providing primary care services that are attuned to monitoring individual infant and child growth. Growth monitoring surveillance is important to assess the health status of the individual child and of the child population. Supplementation of infant feeding with vitamins A and D, and with iron to prevent anemia, is important for routine infant and child care, and more so in conditions affecting total nutrition such as a diarrheal disease.

ACUTE RESPIRATORY INFECTIONS

In the developing world, respiratory infections account for over one-quarter of all deaths and illnesses in children. As diarrheal disease deaths are reduced, the major cause of death among infants in developing countries is becoming acute respiratory infections (ARIs). In industrialized countries, ARIs are important for their potentially devastating effects on the elderly and chronically ill. They are also the major cause of morbidity in infants in developed countries, causing much anxiety to parents even in areas with good living conditions. Cigarette smoking, chronic bronchitis, poorly controlled diabetes or congestive heart failure, and chronic liver and kidney disease increase susceptibility to ARIs. ARIs place a heavy burden on health care systems and individual families. Improved methods of management of such chronic diseases are needed to reduce the associated toll of morbidity, mortality, and the considerable expenses of health care. Acute respiratory infections are due to a broad range of viral and, to a lesser extent, bacterial infections. It is the latter which can progress to pneumonia, with mortality rates of 10-20%. Acute viral respiratory diseases include those affecting the upper respiratory tract, such as acute viral rhinitis, pharyngitis, and laryngitis, as well as those affecting the lower respiratory tract: tracheobronchitis, bronchitis, bronchiolitis, and pneumonia. ARIs are frequently associated with vaccine-preventable diseases, including measles, varicella, and influenza. They are caused by a large number of viruses, producing a wide spectrum of acute respiratory illness. Some organisms affect any part of the respiratory tract, while others affect specific parts, and all predispose to secondary bacterial infection.
While children and the elderly are especially susceptible to morbidity and mortality from acute respiratory disease, the vast numbers of respiratory illnesses among adults cause large-scale economic loss from work absence. Bacterial agents causing upper respiratory tract infection include group A streptococcus, Mycoplasma pneumoniae, pertussis, and parapertussis. Pneumonia, or acute bacterial infection of the lower respiratory tract and lung tissue, may be due to pneumococcal infection with Streptococcus pneumoniae. There are 83 known types of this organism, distinguished by capsule characteristics; 23 account for 88% of pneumococcal infections in the United States. An excellent polyvalent vaccine based on these types is available for high-risk groups such as the elderly, immunodeficient patients, and persons with chronic heart, lung, liver, or blood disorders, or diabetes. Opportunistic infections attack the chronically ill, especially those with compromised immune systems, often with life-threatening ARIs. Mycoplasma (primary atypical pneumonia) is a lower respiratory tract infection which sometimes progresses to pneumonia. TB and Pneumocystis carinii are especially problematic for AIDS patients. Other organisms causing pneumonias include Chlamydia pneumoniae, Haemophilus influenzae, Klebsiella pneumoniae, Escherichia coli, Staphylococcus, rickettsia (Q fever), and Legionella. Parasitic infestation of the lungs may occur with nematodes (e.g., ascariasis). Fungal infections of the lung may be caused by aspergillosis, histoplasmosis, and coccidioidomycosis, often as a complication of antibiotic therapy. Access to primary care and early institution of treatment are vital to control excess mortality from ARIs. In developed countries, ARIs as contributors to infant deaths are largely a problem in minority and deprived population groups. Because these groups contribute disproportionately to childhood mortality, infant mortality reduction has been slower in countries such as the United States and Russia than in other industrialized countries. The continuing gap in mortality rates between white and black children in the United States can, to a large extent, be attributed to ARIs and lesser access to organized primary care. Children are brought to emergency rooms for care when the disease process is already advanced and more dangerous than had it been attended to professionally earlier in its course. Many field trials of ARI prevention programs involving parent education and training of primary care workers in early assessment and, if necessary, initiation of treatment have proved successful. This needs field testing in multiple settings. Reliance on vaccines to prevent respiratory infectious diseases is not currently feasible. ARIs are caused by a very wide spectrum of viruses, and the development of vaccines in this field has been slow and limited. The vaccine for pneumococcal pneumonia has been an important breakthrough, but it is still inadequately utilized by the chronically ill because of its limitations, costs, and lack of sufficient awareness, and it is too expensive for developing countries. Improvements in bacterial and viral vaccine development will potentially help to reduce the burden of ARIs. A programmatic approach with clinical guidelines and education of families and caregivers is currently the only feasible way to reduce the still enormous morbidity and mortality from ARIs among the young and the elderly.
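The logic behind a polyvalent vaccine covering 88% of infections with 23 of 83 serotypes can be illustrated with a simple greedy selection: include the most frequent serotypes until a target share of infections is covered. The serotype frequencies in the Python sketch below are hypothetical, chosen only to demonstrate the calculation:

# Greedy serotype selection for a polyvalent vaccine (illustrative).
def select_serotypes(freqs: dict, target: float) -> list:
    chosen, covered = [], 0.0
    # Take serotypes in descending order of their share of infections.
    for serotype, share in sorted(freqs.items(), key=lambda kv: -kv[1]):
        if covered >= target:
            break
        chosen.append(serotype)
        covered += share
    return chosen

# Hypothetical distribution: a few common types, a long tail of rare ones.
freqs = {f"type{i}": share for i, share in enumerate(
    [0.12, 0.10, 0.08] + [0.05] * 10 + [0.02] * 10, start=1)}
print(len(select_serotypes(freqs, 0.88)), "types needed for 88% coverage")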
COMMUNICABLE DISEASE CONTROL IN THE NEW PUBLIC HEALTH

The success of sanitation, vaccines, and antibiotics led many to assume that all infectious diseases would sooner or later succumb to public health and medical technology. Unfortunately, this is a premature and even dangerous assumption. Despite the longstanding availability of an effective and inexpensive vaccine, the persistence of measles as a major killer of 1 million children per year represents a failure in the effective use of both the vaccine and the health system. The resurgence of TB and malaria has led to new strategies, such as managed or directly observed care, with community health workers to assure the compliance needed to render the patient noninfectious to others and to reduce the pool of carriers of the disease. Current successes in reducing poliomyelitis, dracunculiasis, onchocerciasis, and other diseases to the point of eradication have raised hopes for similar success in other fields. But there are many infectious diseases of importance in developed and developing countries where existing technologies are not fully utilized. Oral rehydration therapy (ORT) is one of the most cost-effective methods of preventing excess mortality from ordinary diarrheal diseases, and yet it is not used on a sufficient scale. Biases in the financing and management of medical insurance programs can result in underutilization of available effective vaccines. Hospital-based infections cause large-scale increases in lengths of stay and expenditures, although the application of epidemiologic investigation and improved quality in hospital practices could reduce this burden. Control of the spread of AIDS using combined medical therapies is not financially or logistically possible in many countries, but education for "safe sex" is effective. Community health worker programs can greatly enhance tuberculosis, malaria, and STD control and, in AIDS care, promote prevention and appropriate treatment. In the industrialized and mid-level developing countries, epidemiologic and demographic shifts have created new challenges in infectious disease control. Prevention and early treatment of infectious disease among the chronically ill and the elderly is not only a medical issue, it is also an economic one. Patients with chronic obstructive pulmonary disease (COPD), chronic liver or kidney disease, or congestive heart failure are at high risk of developing an infectious disease followed by prolonged hospitalization.

SUMMARY

Public health has addressed, and will continue to stress, the issues of communicable disease as one of its key concerns in protecting individual and population health. Methods of intervention include classic public health measures of sanitation and immunization, and extend well beyond them into nutrition, education, case finding and treatment, and changing human behavior. The knowledge, attitudes, beliefs, and practices of policy makers, health care providers, and parents are as important to the success of communicable disease control as are the available technology and the methods of financing health systems. Together, these encompass the broad programmatic approach of the New Public Health to the control of communicable diseases. In a world of rapid international transport and contact between populations, systems are needed to monitor the potentially explosive spread of pathogens that may be transferred from their normal habitat.
The potential for the international spread of new or reinvigorated infectious diseases constitutes a threat to mankind akin to ecological and other man-made disasters. The eradication of smallpox paved the way for the eradication of poliomyelitis, and perhaps measles, in the foreseeable future. New vaccines are showing the capacity to reduce important morbidity from rubella syndrome, mumps, meningitis, and hepatitis. Other new vaccines on the horizon will continue the immunologic revolution into the twenty-first century. As the triumphs of control or elimination of the infectious diseases of children continue, the scourge of HIV infection persists, with distressingly slow progress toward an effective vaccine or cure for the disease it engenders. Partly as a result of HIV/AIDS, TB has staged a comeback in many countries where it was thought to be merely a residual problem. At the same time, an old/new method of intervention using directly observed short-term therapy has shown great success in controlling the TB epidemic. The resurgence of TB is the more dangerous in that multidrug-resistant TB (MDRTB) has become a widespread problem. This issue highlights the difficulty of keeping ahead of drug resistance in the search for new generations of antibiotics, posing a difficult challenge for the pharmaceutical industry and basic scientists, as well as for public health workers. The burden of infectious diseases has receded as the predominant public health problem in the developed countries but remains large in the developing countries. With increases in longevity and the increased importance of chronic disease in the health status of the industrial and mid-level developing nations, the effect of infectious disease on the care of the elderly and chronically ill is of great importance in the New Public Health. Long-term management of chronic disease needs to address the care of vulnerable groups, promoting the use of existing vaccines and antibiotics. Most important is the development of health systems that provide close monitoring of groups at special risk for infectious disease, especially patients with chronic diseases, the immunocompromised, and the elderly. The combination of traditional public health with the direct medical care needed for effective control and eradication of communicable diseases is an essential element of the New Public Health. The challenge is to apply a comprehensive approach and management of resources to define and reach achievable targets in communicable disease control.

ELECTRONIC MEDIA

Access to e-mail and the Internet is vital to the current practice of public health, and nowhere is this more important than in communicable diseases. There are many such information sites, and these will undoubtedly expand in the coming years. Several sites are given as examples. The Internet has great practical implications for keeping up to date with rapidly occurring events in this field.
Thrombus or Vegetation? Importance of Cardiac MRI as a Diagnostic Tool Based on a Case Report and Literature Review

Introduction: We report the case of a 35-year-old male patient admitted for pulmonary embolism in a febrile context. Transthoracic echocardiography showed a filamentous mass appended to the pulmonary valve, whose thrombotic origin was suggested by late gadolinium enhancement magnetic resonance imaging.

Case presentation: The patient had a history of deep vein thrombosis in the context of familial thrombophilia, with a Factor V Leiden gene mutation in two of his sisters, and an inhaled drug addiction to heroin. There was a biological inflammatory syndrome with negative blood cultures. Transthoracic echocardiography showed a very mobile, homogeneous, hyperechoic mass measuring 8 cm in the right ventricle, appended between the pulmonary valve and the lateral wall of the RV. On LGE-MRI, a marginal halo isointense to the myocardium and a central rim enhancement were identified, suggesting the diagnosis of thrombus rather than vegetation.

Conclusion: Despite the notion of drug addiction, the febrile context, and the localization of the mass, a diagnosis of RV thrombus rather than infective endocarditis was favored, relying on the familial thrombophilia, the personal history of DVT, and the LGE-MRI aspect. The patient was treated with curative heparin therapy and antibiotic therapy. Owing to the persistence of the mass after three weeks of treatment, and after heart-team discussion, the patient underwent surgical mass removal. The anatomopathological study confirmed a fibrino-cruoric thrombus.

Introduction

The diagnosis of infective endocarditis is made based on the modified Duke criteria, including clinical presentation and the results of blood tests, blood culture, and imaging, including echocardiography and PET scan [1]. Echocardiography is recommended as the first-line diagnostic imaging modality and has high diagnostic performance. However, its ability to distinguish between vegetations, thrombi, and cardiac tumors is limited [1]. Computed tomography and MRI technical developments have led to high-quality images and thorough information in this matter [2,3]. Multimodality imaging plays an increasing role in the differential diagnosis of these three types of cardiac masses. However, very few studies have reported results on the diagnostic value of cardiac MRI in discriminating thrombus from vegetation, and no characteristic MRI signs of vegetation have been reported. We report a case of a serpentine mass measuring 10 cm appended to the pulmonary valve, with a challenging differential diagnosis between thrombus and right heart endocarditis. We discuss the contribution of cardiac MRI, and especially of late gadolinium enhancement sequences, in this diagnostic approach. The work has been reported in line with the SCARE 2018 criteria [4].

Presentation of case

We report the case of a 35-year-old male patient, a chronic smoker, with a history of inhaled heroin addiction and of DVT treated with rivaroxaban for 6 months, and a family history of Factor V Leiden in two siblings. The clinical examination showed an intact venous network, with an absence of dermatological signs pointing towards intravenous drug injection. The patient presented to the emergency room with dyspnea at rest, associated with chest pain of the right hemithorax accentuated by deep inspiration.
Upon presentation, vital signs were as follows: oral temperature 38.7 °C, blood pressure 130/90 mmHg, heart rate 80 beats/min, and respiratory rate 22 breaths/min, with an oxygen saturation of 88% on room air. The patient was alert and oriented. The ECG showed sinus rhythm with a heart rate of 80 bpm. Thoracic CT angiography showed a pulmonary embolism of the right intermediate trunk with images of alveolar condensations, which could be related to pulmonary infarctions or septic localizations. TTE revealed a mobile, serpentine, hyperechoic mass appended between the lateral wall of the right ventricle and the pulmonary valve, measuring 8 cm (Fig. 1). In addition, the pulmonary valve was intact, without regurgitation or obstruction of the pulmonary outflow tract. The trunk of the pulmonary artery was free. The RV was of normal size and function. The left ventricle had good function. Laboratory tests revealed an elevated CRP of 148 mg/l and leukocytosis of 13,000/mm3. NT-proBNP and troponin were normal. The blood cultures were all negative. Cerebral and thoraco-abdominopelvic CT scans, performed as part of the extension assessment under the hypothesis of an IE, were normal. In this context of diagnostic doubt between IE and thrombus, the patient was treated with curative anticoagulation based on enoxaparin sodium, in association with intravenous antibiotic therapy: vancomycin and gentamycin for one week, then rifampicin and ciprofloxacin because of the persistence of fever and the inflammatory syndrome. Tests for the classic causes of blood-culture-negative endocarditis were negative. Close monitoring of the patient showed improvement, with regression of dyspnea, chest pain, fever, and the inflammatory syndrome after three weeks of antibiotic and anticoagulant therapy, but persistence of the mass with no decrease in size. A cardiac MRI was performed; it showed a serpentine mass of 10 cm, appended between the pulmonary valve and the RV apex, causing a partial obstruction of the right ventricular outflow tract. This mass had a signal intensity on SSFP cine MRI sequences identical to that of the myocardium (Fig. 2, a-d). On the LGE-MRI sequences, in particular on the PSIR sequences, there was an "etched appearance" with a hypointense border and a brighter central zone (Fig. 3), favoring the diagnosis of an organized thrombus. Given the unchanged dimensions of the mass despite antibiotic and anticoagulant treatment, the potential risk of its migration into the pulmonary trunk, and after heart-team discussion, it was decided to surgically remove the mass. The surgical procedure, performed by an experienced heart valve surgery team from Paris, allowed complete resection of the mass, which was 10 cm long, with preservation of the healthy native pulmonary valve. Anatomopathological analysis confirmed the purely thrombotic nature of the mass.

Discussion

The diagnosis of infective endocarditis is based on clinical and biological data, including blood culture, echocardiography, and 18F-FDG PET/CT. This diagnosis remains difficult due to atypical clinical manifestations and the low specificity of laboratory tests [1]. Right-sided infective endocarditis is less common than left-sided; it is more frequent in intravenous drug users and in patients with underlying congenital heart disease, corrected or not [2,3]. Echocardiography, either TTE or TEE, is the first imaging technique used for diagnostic purposes; it shows high sensitivity and specificity, especially when the two are performed in a complementary manner.
TTE/TEE have limited diagnostic performance in differentiating thrombi from vegetations or cardiac tumors, especially at an early stage, when the vegetation is too small (2-3 mm) to be detected, or when the mass is appended to a valve. In our case, TTE and TEE showed a very mobile serpentine mass measuring 8 cm in length appended to the pulmonary valve, but this echocardiographic aspect did not allow differentiation between thrombus and vegetation. The patient was febrile, but blood cultures and tests for the classic causes of blood-culture-negative endocarditis were negative. The role of MSCT and CMR imaging in the diagnosis of infective endocarditis has been poorly evaluated [1,5,6]. The cardiac MRI features of vegetations have not been clearly described in the literature. Differentiation between vegetation and thrombus can be difficult, because both can have a similar aspect on LGE-MRI [7]. In cases of infective endocarditis, some authors have reported marginal mass enhancement on LGE-MRI [8], which could facilitate the differentiation between vegetation and thrombus. Other results suggest that cardiac MRI may show endocardial LGE reflecting irreversible endocardial damage or fibrosis, and may also show perivalvular abscesses [9][10][11]. Dursun et al. have shown that, even in the absence of vegetation detected on cardiac MRI or echocardiography, the detection of late endocardial enhancement indicating endothelial inflammation can contribute to the diagnosis of infective endocarditis and guide therapeutic management [9]. Zatorska et al. have shown that cardiac MRI can be useful in diagnosing perivalvular complications in IE patients, although vegetation visualization is limited by the low spatial resolution of this method; in their study, LGE associated with inflammatory process extension and myocardial infiltration was reported in 40% of patients [12]. In the case of thrombus, the signal intensity varies with the age of the thrombus, and thrombi generally do not enhance on LGE-MRI sequences. Cardiac MRI late enhancement sequences can detect thrombi with higher sensitivity than echocardiography by depicting a high contrast between the dark thrombus and the adjacent structures. A mural thrombus does not enhance during first-pass perfusion (FPP) and is often depicted with a hypointense border and a brighter central zone ("etched appearance"), particularly on PSIR-type sequences, allowing high diagnostic specificity [13]. In one study, a pattern was found to be more common in thrombi (94%) than in tumors (2%): hyperintensity at short inversion time (TI), an "etched" appearance with hypointense border and brighter central zone at intermediate TI, and hypointensity at long TI. This pattern had the highest accuracy (95%) in differentiating thrombus from tumor [14]. The same study showed that the combination of negative FPP, negative LGE and a typical thrombus pattern in the TI scout sequence had a specificity of 98% for thrombus, whereas the combination of negative FPP and negative LGE had a specificity of 92% for the diagnosis of thrombus [14]. Few cases have been reported describing cardiac masses with T1 and T2 relaxation time measurement. Saba et al. reported that the mean T1 relaxation times of cardiac thrombus and myxoma were 1044.7 ms and 1681.6 ms, respectively [15]. ECG-synchronized cardiac computed tomography provides high-resolution anatomical information in cases of infective endocarditis, especially in the presence of a valve prosthesis. It can highlight vegetations, aneurysms or perivalvular abscesses.
18F-FDG PET/CT imaging may also be useful in the diagnosis of IE, particularly in patients with prosthetic valves, by showing abnormal 18F-FDG uptake around the prosthetic valve with high sensitivity [1,6]. Conclusion: The differential diagnosis between thrombus and vegetation is sometimes a real challenge for the clinician. Cardiac MRI, with its LGE-MRI sequences, can help characterize a thrombus, which shows hypointense edges and a brighter central zone known as the "etched appearance". Consent Written informed consent was obtained from the patient for publication of this case report and accompanying images. A copy of the written consent is available for review by the Editor-in-Chief of this journal on request. Funding This case report did not receive any specific grant from any funding agency in the public, commercial or not-for-profit sector. Author contribution Dr. IJ and Dr. CT analysed and performed the literature research, Dr. EJ performed the examination and performed the scientific validation of the manuscript. Dr. El ouazzani Jamal was the major contributor to the writing of the manuscript. All authors read and approved the manuscript. Provenance and peer review Not commissioned, externally peer-reviewed.
2020-12-10T09:04:36.003Z
2020-12-01T00:00:00.000
{ "year": 2020, "sha1": "81ef36b55d3bab43a00eda6c23cbf6932534abab", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1016/j.amsu.2020.12.007", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "47e3810410f9743d72267266b8bcc6e59af7e675", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
228960410
pes2o/s2orc
v3-fos-license
A Coordination Technique for Improving Scalability of Byzantine Fault-Tolerant Consensus: Among various consensus algorithms, Byzantine Fault Tolerance (BFT)-based consensus algorithms are broadly used for private blockchains. However, as BFT-based consensus algorithms are structured so that all participants take part in the consensus process, a scalability issue becomes noticeable. In our approach, we introduce a consensus coordinator that conditionally executes a BFT-based consensus algorithm after classifying transactions. Transactions are divided into equal and unequal transactions, and unequal transactions are divided again into common and trouble transactions. The consensus algorithm is then executed only for trouble transactions, so BFT-based consensus algorithms can achieve scalability. To evaluate our approach, we carried out three experiments in response to three research questions. By applying our approach to PBFT, we obtained 4.75 times better performance than using PBFT alone. In another experiment, we applied our approach to the IBFT of Hyperledger Besu, and the result shows a 61.81% performance improvement. In all experiments varying the number of blockchain nodes, we obtained better performance than the original BFT-based consensus algorithms; thus, we conclude that our approach improves the scalability of the original BFT-based consensus algorithms. We also show a correlation between performance and trouble transactions, associated with the transaction issue interval and the number of blockchain nodes. PBFT is a representative example. In a BFT-based consensus algorithm, the number of network communications among participants explodes with the increase in the number of participants. This is because all participants in a BFT-based consensus algorithm must be involved to complete the process for each transaction, and the process itself is composed of several steps. This characteristic of BFT-based consensus algorithms raises performance and scalability issues: the more participants are involved, the slower the consensus process completes [15]. There have been several previous studies on improving the scalability of BFT-based consensus algorithms (see [16][17][18][19][20][21][22][23]). Some works build sub-groups of blockchain nodes on a regular basis and reduce the number of network communications by executing the BFT consensus algorithm in two steps: consensus inside the sub-groups, then consensus among the representatives of each sub-group. While this approach increases the PBFT algorithm's scalability, it has the shortcoming that a new node cannot join any group until new groups have been formed. Other works optimize the number of communications by modifying the PBFT protocol, introducing a collector role or removing faulty nodes during consensus. Similarly, some works reduce the number of prime node elections to minimize the node election overhead. However, these approaches can hardly be expected to deliver outstanding scalability improvements. In addition, another approach deploys a new hardware-based BFT algorithm execution environment, but it has the apparent weakness that all nodes must prepare the specific hardware in advance. To address the above issues, we propose a coordination technique for scalable Byzantine fault-tolerant consensus algorithms.
The key idea is to introduce a Consensus Coordinator that controls the conditional execution of a BFT-based consensus algorithm after classifying the transactions of all nodes with respect to their equality. Our approach runs periodically, in step with the block generation interval, and consists of four steps. First, a prime node is elected among all blockchain nodes; it executes the BFT-based consensus algorithm and communicates with the consensus coordinator. Second, the consensus coordinator collects the transactions from the transaction pool of each node. Third, the coordinator checks the equality of the transactions and decides whether to execute a consensus algorithm. In the case that all transactions are equal, the coordinator lets the prime node execute block generation without running a consensus algorithm, which completes the synchronization of the blockchain network. When some transactions are not equal, the coordinator divides the transactions into common and trouble transactions and requests the prime node to execute a BFT-based consensus algorithm only for the trouble transactions. The prime node then reports the agreed transactions to the coordinator. Finally, the coordinator sorts all common and agreed transactions in time order and requests the prime node to generate a new block containing all processed transactions. To evaluate our approach, we conducted three experiments answering three research questions. We measured the performance of the PBFT algorithm with and without our approach. In one experiment, PBFT equipped with our approach obtained, on average, 4.75 times better performance than PBFT alone. In addition, we applied our approach to Hyperledger Besu, which uses the IBFT (Istanbul Byzantine Fault Tolerance) consensus algorithm, and showed a 61.81% performance improvement compared to using IBFT only. We also present the correlation between performance and trouble transactions associated with the transaction issue interval and the number of blockchain nodes. The contributions of our approach are summarized as follows: • We propose a novel coordination technique to improve the scalability and performance of consensus algorithms, which is applicable to diverse BFT-based consensus algorithms. • Our approach was implemented and applied to PBFT and the Hyperledger Besu consensus algorithm, and released as an open-source project for public access. • We performed three experiments in response to three research questions and showed the feasibility of our approach. The remainder of this paper is organized as follows. Section 2 presents BFT-based consensus algorithms as background and related work on improving the scalability of BFT-based consensus algorithms. Section 3 proposes our coordination technique and explains the four steps for achieving a scalable BFT-based consensus algorithm in detail. Section 4 presents the evaluation of our approach by responding to our three research questions. Section 5 concludes the paper and discusses future work. Background and Related Work This section presents BFT-based consensus algorithms and their characteristics as background, and introduces previous works on improving the scalability of BFT-based consensus algorithms. Background: BFT-Based Consensus Algorithms BFT (Byzantine Fault Tolerance)-based consensus algorithms denote a family of consensus algorithms for resolving the Byzantine generals problem, i.e., how to achieve consensus on data in an environment where normal and malicious nodes are mixed [24].
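As a quick illustration of the fault-tolerance arithmetic underlying these algorithms — a minimal Python sketch for intuition, not code from the paper — the tolerated number of Byzantine nodes f and the matching commit quorum follow directly from the network size N:

def max_faulty(n: int) -> int:
    """Largest f such that n >= 3f + 1, i.e., the Byzantine faults an n-node BFT network tolerates."""
    return (n - 1) // 3

def quorum(n: int) -> int:
    """Matching two-thirds commit quorum of 2f + 1 nodes."""
    return 2 * max_faulty(n) + 1

for n in (4, 7, 80):
    print(f"N={n}: tolerates f={max_faulty(n)}, quorum={quorum(n)}")

For N = 4 this yields f = 1 and a quorum of 3, which is why four nodes is the smallest meaningful BFT configuration used in the experiments later in this paper.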
The representative example is PBFT (Practical Byzantine Fault Tolerance) [13,14] in Hyperledger Fabric 0.6 (Hyperledger Fabric later replaced PBFT with Raft as of version 2.0 [25]), and diverse variations of PBFT, such as Tendermint in Cosmos [26], Hotstuff in Libra [27], and IBFT in Hyperledger Besu [28], are broadly used. A key characteristic of BFT-based consensus algorithms is the fast finality of a transaction: a transaction is immediately finalized once it is issued by a client and validated by N = 3f + 1 participants. In other consensus algorithms, such as PoW and PoS, a client must instead wait until the transaction is contained in a new block after it is issued. In Bitcoin, for example, it theoretically takes 1 h to finalize a transaction, and in the worst case it often takes even longer when a block is forked. Despite the fast finality of BFT-based consensus algorithms, a decrease in performance and scalability is inevitable when the number of nodes increases. This is because all participants must join the consensus process, and two thirds of the nodes must agree on the transactions by communicating with each other in four steps: pre-prepare, prepare, commit, and reply (Figure 1). Thus, this mechanism always causes a scalability issue depending on the number of nodes [15]. Related Work Many approaches have been suggested to improve the scalability of BFT-based consensus algorithms, most of which try to reduce the number of network communications. To control the number of nodes participating in the consensus protocol, Feng et al. suggested the SDMA (Scalable Dynamic Multi-Agent)-PBFT approach, which reduces the number of participants [16]. The approach builds sub-groups among the peers and elects an agent as the primary node in each sub-group. It then carries out the consensus process within the sub-groups first, and a second consensus process is performed only among the agents. While this approach increases the PBFT algorithm's scalability by reducing the communication paths in the established blockchain network, it has the shortcoming that a new node cannot join any previously established group until new groups have been formed. Similar to Feng et al.'s approach, Luu et al. proposed SCP (Scalable Byzantine Consensus Protocol), which executes a first consensus algorithm within sub-groups and a second consensus algorithm among the group leaders on the result of the first execution [17]. This approach builds sub-groups by generating a random group number based on each node's IP address, public key, and nonce, whereas Feng et al.'s approach builds sub-groups by constructing a spanning tree from a root node. Although this research contributed to reducing communication paths, it has a similar problem in that new nodes cannot easily join the blockchain network, as in the approach of Feng et al. As another direction, some research tried to optimize the number of communications by introducing a collector role or removing faulty nodes during consensus. Kotla et al. proposed a new BFT-based consensus protocol, named Zyzzyva, where the number of non-faulty nodes required for PBFT adaptively changes from N = 3f + 1 to N = 2f + 1 when a faulty node is detected during a consensus process [18]. Gueta et al. suggested SBFT (State-of-the-art Byzantine Fault Tolerant) [19], which reduces the number of communication paths among nodes by collecting the messages of a consensus process at two collector nodes and validating messages in a limited number of places.
Similar to Gueta et al.'s approach, Jiang et al. suggested HSBFT (High Performance and Scalable Byzantine Fault Tolerance), which makes the prime node play a collector role that collects and validates all messages [20]. HSBFT has a prime node election process based on a node stability table containing an identity number, state, IP, and public key. Based on this table, HSBFT excludes unstable nodes and optimizes communication paths. Although these three approaches reduce the number of participating nodes and communication paths, it is hard to expect an outstanding improvement in scalability or performance from them. In addition, Lei et al. proposed the RBFT (Reputation-based Byzantine Fault Tolerance) algorithm for reducing communication paths in a private blockchain [21]. Each blockchain node computes a reputation score based on an evaluation of its behavior (e.g., good behavior in generating a new block), and the number of votes permitted to each node differs depending on the reputation score. Votes are used to decide whether each PBFT step passes by checking whether the number of votes exceeds a specific threshold. Although this can reduce the number of communications, a limited set of nodes can gain an outsized influence on the voting process. Some research tries to minimize the number of prime node election processes to enhance scalability. Gao et al. proposed a trust eigen-based PBFT consensus algorithm, called T-PBFT [22]. In this approach, they tried to minimize the number of prime node elections based on each node's trust evaluation. Before starting the PBFT consensus process, the proposed eigen trust model evaluates all nodes' trust scores and forms a group called the primary group. The consensus process is then composed of two steps: (1) consensus within the primary group; and (2) consensus between the remaining nodes and the primary group. It can improve scalability by reducing the frequency of changes of the single primary node. However, its number of communications is the same as in PBFT, so it is hard to expect a distinct improvement in scalability. A new hardware-based execution environment has also been introduced for improving the performance of BFT-based consensus algorithms. Liu et al. proposed a hardware-based BFT algorithm execution environment named Fast BFT [23]. All nodes use a hardware chip (e.g., Intel SGX) to provide a TEE (Trusted Execution Environment) for executing the consensus algorithm, and the TEE supports public-key operations (e.g., multisignatures) during the consensus process. They also suggested a new Fast BFT algorithm that reduces verification steps by collaborating with the TEE. While they improved the BFT algorithm's scalability, the assumption that all nodes must run on a TEE is a clear limitation. A Coordination Technique for Scalable BFT Consensus This section presents a coordination technique for achieving scalability of BFT-based consensus algorithms. Our approach is composed of two parts: Our Coordination Technique and the BFT-based Consensus Algorithm (Figure 2). The BFT-based Consensus Algorithm part, located at the bottom of the figure, indicates a traditional BFT-based algorithm such as PBFT or IBFT. It is composed of one prime node that controls the consensus process and other general nodes, similar to general BFT-based blockchain platforms. Each node has a BFT-Module, which controls the consensus process and generates new blocks, and a Transaction Pool, which maintains unconfirmed transactions (a minimal sketch of these components follows below).
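The following sketch illustrates how such node-side components might look — an illustrative Python outline under our own naming assumptions (Transaction, Node, and the two placeholder methods are hypothetical), not the paper's implementation:

from dataclasses import dataclass, field

@dataclass
class Transaction:
    tx_id: str          # unique identifier, used for equality checks
    timestamp: float    # issue time, used for the final time-order sort
    payload: bytes

@dataclass
class Node:
    node_id: int
    tx_pool: list[Transaction] = field(default_factory=list)  # unconfirmed transactions

    def run_bft_consensus(self, txs: list[Transaction]) -> list[Transaction]:
        """BFT-Module placeholder: run PBFT/IBFT rounds and return the agreed transactions."""
        raise NotImplementedError

    def generate_block(self, txs: list[Transaction], prev_hash: str) -> None:
        """BFT-Module placeholder: build a new block from the agreed transactions."""
        raise NotImplementedError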
The BFT-Module accesses the transactions of the transaction pool regularly and executes a BFT-based consensus algorithm. Once all nodes have achieved consensus on the transactions, the BFT-Module produces a new block from the accumulated agreed transactions. The Our Coordination Technique part corresponds to the top of the figure, located above the BFT-based Consensus Algorithm part. In this technique, we newly introduce the Consensus Coordinator, which controls the conditional execution of each node's BFT-based consensus algorithm depending on the equality of transactions. Our consensus coordination technique consists of four steps. (1) The prime node is elected among all participating nodes. (2) The coordinator collects all transactions existing in the transaction pool of each node. (3) The coordinator checks the equality of the transactions and classifies them into common and trouble transactions. For trouble transactions, the coordinator requests the prime node to execute a consensus algorithm and obtains the agreed transactions. (4) The coordinator merges the common and agreed transactions and requests the controller of all nodes to execute block generation with the merged transactions. In the following subsections, the steps are described in more detail. Step 1. Electing a Prime Node The first step is to elect a prime node among all participant nodes. The elected prime node plays the role of interacting with the consensus coordinator. This election step runs at every regular time t (we assume that all nodes share the same time base through a logical or physical clock algorithm (e.g., [29][30][31])). Once all steps are completed, this prime node election step is carried out again. The algorithm of this step is shown in Algorithm 1. Algorithm 1 [All Nodes] Electing Prime Node 1: random = Random(seed) % N(Node) 2: if random equals Node_i then 3: sig_prime = Signature(Node_i, seed)_sk_prime 4: notify (sig_prime, pk_prime) to CC 5: end if Electing the prime node starts with a seed, which is the previous block's hash value; the random number generated from the seed is taken modulo the total number of nodes N(Node) to get a prime node number. The hash value of the previous block must be the same across all nodes, because it was already agreed upon in the previous round. In the case that the result random of the random algorithm equals a node's pre-assigned unique number Node_i, that node is elected as the prime node. After signing its node number Node_i and the seed with its private key sk_prime, it notifies the consensus coordinator CC with the signature sig_prime and its public key pk_prime. Step 2. Collecting Transactions from the Transaction Pool Once the coordinator receives the prime node election notification from the prime node, it collects all transactions from each node's transaction pool as the second step. This collection step is presented in Algorithm 2. The input parameters of this step are sig_prime and pk_prime from the prime node. The coordinator then checks the elected prime node's validity by regenerating the random number from the delivered seed and Node_i (see Algorithm 2, Lines 3-5). If the generated random equals the Node_i received from the prime node, the consensus coordinator requests all nodes to send all transactions accumulated in their transaction pools between the previous time time_p and the current time time_c. When random differs from Node_i, the coordinator terminates all coordination steps of this round and waits for the next idle state.
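The deterministic election, and the coordinator's verification of it, can be sketched in a few lines of Python — a minimal illustration in which SHA-256 stands in for the paper's Random(seed) % N(Node) and the signature exchange is omitted:

import hashlib

def elect_prime(prev_block_hash: str, n_nodes: int) -> int:
    """All nodes derive the same seed from the agreed previous block hash,
    so every node (and the coordinator) computes the same prime node index."""
    seed = int(hashlib.sha256(prev_block_hash.encode()).hexdigest(), 16)
    return seed % n_nodes

# A node claims to be prime; the coordinator re-runs the same computation to verify.
claimed = elect_prime("0xdeadbeef", 80)
assert claimed == elect_prime("0xdeadbeef", 80)  # verification succeeds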
Algorithm 2 [Coordinator] Collecting Transactions from TxPool 1: Inputs: sig_prime, pk_prime 2: Initialize: Txs ← {} 3: Node_i, seed = Signature(sig_prime)_pk_prime 4: random = Random(seed) % N(Node) 5: if random equals Node_i then 6: for i = 0; i ≤ N(Node); i++ do 7: Txs_i ← request Node_i to send the transactions issued between time_p and time_c Step 3.1 Handling Equal Transactions. If all transactions are the same, the coordinator requests the prime node Node_prime to confirm the transactions Txs_0, and the prime node responds with its confirmation: isConfirmed, sig_prime, pk_prime ← request Node_prime to confirm Txs_0; if isConfirmed == true then request All Nodes to generate a new block with Txs_0 and sig_prime. This confirmation step is necessary for mutual trust between the coordinator and the prime node regarding the integrity of the transactions held by the coordinator. Among the responses, sig_prime results from the prime node executing Signature(Txs_0)_sk_prime. When the prime node confirms that all transactions are equal, the coordinator requests all nodes of the blockchain network to generate a new block. The controller of each node receives the request and delegates it to the BFT-Module to generate a new block. All nodes' time_p is updated to the current time_c, designating the starting point of the next round (see Line 10). If the prime node does not confirm the transactions, the round is terminated and time_p remains at the previous time. When some of the transactions are different, the coordinator instead performs the handling of unequal transactions presented in the next subsection. Step 3.2 Handling Unequal Transactions. (1) Classifying Common and Trouble Transactions: The coordinator compares the collected transaction pools and separates the transactions that appear in every pool (common) from those that do not (trouble): for j = 0; j ≤ N(Txs_i); j++ do 5: isCommon ← true 6: for k = 0; k ≤ N(Node) & i ≠ k; k++ do 7: if Tx_j does not exist in Txs_k then 8: isCommon ← false 9: break 10: end if 11: end for 12: if isCommon == true then 13: List_comm ← Tx_j 14: else List_tr ← Tx_j (2) Executing a Consensus Algorithm: For the trouble transactions List_tr from the previous sub-step, the coordinator requests the prime node to execute a consensus algorithm and obtains a list of agreed transactions List_agg from the prime node (Line 19 of Algorithm 4). It should be noted that any BFT-based consensus algorithm can be applied in this step. Not all transactions in List_tr end up in List_agg, because some of them may not complete the consensus algorithm; transactions not agreed upon are removed according to the BFT-based consensus algorithm. (3) Sorting All Transactions: Based on the common transactions List_comm and the agreed transactions List_agg, the coordinator merges and sorts them in time order. Then, the coordinator requests the prime node to confirm the merged transactions, similar to the process for equal transactions (see Algorithm 5, Line 5). The sig_prime is produced by the prime node using Signature(SortedList_csn)_sk_prime. If the transactions are confirmed, the coordinator requests all nodes to generate a new block with the SortedList_csn, and all nodes' time_p is updated to time_c. Algorithm 5 [Coordinator] Sorting All Transactions 1: Inputs: List_comm, List_agg 2: Initialize: List_csn ← {}, SortedList_csn ← {} 3: List_csn ← (List_comm ∪ List_agg) 4: SortedList_csn ← sort(List_csn) 5: isConfirmed, sig_prime, pk_prime ← request Node_prime to confirm SortedList_csn 6: if isConfirmed == true then 7: request All Nodes to generate a new block with SortedList_csn and sig_prime Step 4. Generating a New Block The last step is to generate a new block, which relies on the underlying blockchain platform. The coordinator does not intervene in this final step.
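Putting the classification and merge logic of Step 3 into concrete form before block generation — a minimal Python sketch reusing the hypothetical Transaction structure above, not the authors' implementation:

def classify(pools: dict[int, list]) -> tuple[list, list]:
    """Split collected transactions into common (present in every node's pool)
    and trouble (absent from at least one pool) transactions."""
    id_sets = {nid: {tx.tx_id for tx in txs} for nid, txs in pools.items()}
    common, trouble, seen = [], [], set()
    for nid, txs in pools.items():
        for tx in txs:
            if tx.tx_id in seen:
                continue
            seen.add(tx.tx_id)
            in_all = all(tx.tx_id in ids for ids in id_sets.values())
            (common if in_all else trouble).append(tx)
    return common, trouble

def finalize(common: list, agreed: list) -> list:
    """Merge common and agreed transactions and sort them in time order;
    the sorted list is what all nodes put into the new block."""
    return sorted(common + agreed, key=lambda tx: tx.timestamp)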
All nodes thus create a new block with the transferred transactions and the previous block's hash. The controller, which requests the BFT-Module to generate a new block and accesses the transaction pool, is developed per blockchain platform, so our approach can be applied to diverse BFT-based consensus algorithms. Evaluation This section describes the results of the experiments designed to evaluate our approach. For this evaluation, we established the three research questions below and carried out three experiments in response to them. • RQ1: How much can the scalability of PBFT be increased through our proposed approach? • RQ2: What is the correlation between the trouble transactions and performance? • RQ3: How much can our approach improve the scalability of the IBFT of Hyperledger Besu? RQ1: How Much Can the Scalability of PBFT be Increased through our Proposed Approach? This first research question is intended to determine how well our suggested approach achieves our research aim, which is to improve the scalability of BFT-based consensus algorithms. We selected and implemented the PBFT consensus algorithm for this research question, as it is the most popular BFT-based consensus algorithm. We then measured the performance of PBFT with and without our approach, and increased the number of nodes to examine the scalability of our approach. Experimental setting for RQ1. To respond to RQ1, we built a PBFT network (our source code for implementing the PBFT network is available at https://github.com/jungwonrs/JwRalph_Seo/tree/master/lab/Agent_Consensus) based on Castro and Liskov's research [13,14]. Initially, we structured four nodes and issued transactions every 10 ms for 10,000,000 ms, so that we transmitted one million transactions over about 2 h 40 min. In addition, we set the block generation time to 10 s, which implies that each node generates a new block every 10 s with the transactions in its transaction pool (using a sensitivity analysis, we obtained 10 s as the block generation time giving the best performance). We then measured the total elapsed time until the consensus process of the transactions was complete and computed the average elapsed time. We prepared 81 physical computers and deployed the 80 PBFT nodes and the one consensus coordinator, one per computer. The hardware specification of each computer was an Intel i5-3570 3.4 GHz CPU with 4 GB RAM, with Windows 10 OS installed. Experimental Result for RQ1. Figure 4 shows the result of the experiment. In the initial experiment with four nodes, PBFT with the consensus coordinator, denoted CC + PBFT, achieved 0.0328 s per transaction on average, while PBFT without our approach, denoted PBFT, showed 0.1237 s. The gap between the elapsed times of the two approaches grew as the number of nodes increased. When the number of nodes reached 80, the elapsed times of PBFT and CC + PBFT became 6.2212 and 1.1191 s, respectively. Thus, PBFT equipped with our approach obtained 3.77 times (=0.1237/0.0328) higher performance than PBFT with the initial four nodes, and the performance gain increased so that PBFT with our approach achieved 5.56 times (=6.2212/1.1191) higher performance with 80 nodes. Across all experiments over the node changes, the performance of PBFT with our approach was, on average, 4.75 times that of PBFT alone. Therefore, we can conclude that our approach contributed to improving the performance of the PBFT consensus algorithm.
In addition, we observed that the rate of increase of the elapsed time of PBFT is larger than that of CC + PBFT. While the increase of PBFT from 4 to 80 nodes is 50.29 times (=6.2212/0.1237), that of CC + PBFT is 34.12 times (=1.1191/0.0328). This implies that our approach improves the PBFT consensus algorithm's scalability as nodes are added, compared to using PBFT alone. The elapsed time of PBFT increases because all nodes in the PBFT algorithm must participate in the consensus process, and the number of communications grows accordingly. Moreover, the consensus process must always be executed for all transactions (i.e., one million transactions). In contrast, our approach checks the equality of transactions and executes the consensus process only for trouble transactions. Thus, depending on the proportion of trouble transactions associated with the number of nodes, the elapsed time of CC + PBFT increases, but not as steeply as that of PBFT. RQ2: What is the Correlation between the Trouble Transactions and Performance? The second research question aims to find out how much our approach can contribute to the performance improvement of BFT-based consensus algorithms. In the real world, each blockchain node issues its own transactions, and it rarely happens that all transactions in the transaction pools are equal. According to Donet and Pérez-Solà's experiment [32], transaction propagation in a Bitcoin network composed of 344 nodes took 35 min on average, which means that the transactions in each node's transaction pool commonly differ. For this research question, we build an environment where each node has many trouble transactions, as in the real world, by controlling the transaction issue interval, and we compute the correlation between the elapsed time and the proportion of trouble transactions, denoted τ. Experimental setting for RQ2. To simulate this environment, we started from the experimental setting for RQ1, with the same hardware specification, and issued one million transactions while varying the transaction issue interval from 15 ms down to 1 ms. We then measured the total elapsed time and obtained the average elapsed time per transaction by dividing the total elapsed time by the number of transactions, as shown in Table 1. In addition, we observed the proportion of trouble transactions τ in the consensus coordinator to obtain the correlation between trouble transactions and the elapsed time. Experimental results for RQ2. Table 1 shows the result of the experiment. In the table, the columns 15 ms, 10 ms, 5 ms, 2 ms and 1 ms denote the transaction issue intervals; we selected only some representative intervals. We computed τ by averaging the proportion of trouble transactions in the transaction pools collected from each node over every time_p ∼ time_c period (i.e., 10 s). Based on τ, cells with similar τ values are highlighted with the same colors (see the Color Legend in the table). When the number of nodes is 16 and a transaction is issued every 15 ms, our approach obtained an average elapsed time of 0.099 s per transaction (see the italicized value in the table). As the transaction issue interval decreased, the average elapsed time and τ increased. Likewise, an increase in the number of nodes causes an increase in the average elapsed time and τ.
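The proportion τ measured by the coordinator for one window can be sketched as follows — a minimal Python illustration using the hypothetical Transaction structure above, not the measurement code used in the paper:

def trouble_ratio(pools: dict[int, list]) -> float:
    """tau for one time_p ~ time_c window: the share of unique transactions
    that are not present in every node's pool."""
    id_sets = [{tx.tx_id for tx in txs} for txs in pools.values()]
    all_ids = set().union(*id_sets)
    if not all_ids:
        return 0.0
    common_ids = set.intersection(*id_sets)
    return 1 - len(common_ids) / len(all_ids)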
From this result, we established the correlation between τ and the average elapsed time per transaction Avg.Elap.Time_tx, based on the transaction issue interval tit and the number of nodes N(Node), as Equation (1). The equation implies that the average elapsed time is proportional, with factor δ, to the proportion of trouble transactions τ. In addition, τ is inversely proportional to the transaction issue interval tit and has a logarithmic relation with the number of nodes N(Node). In the experiment, we obtained δ = 2.5, α = 2, and β = 0.1, which indicates that the proportion of trouble transactions strongly affects the average elapsed time per transaction. RQ3: How Much Can Our Approach Improve the Scalability of IBFT of Hyperledger Besu? We established the third research question regarding the applicability of our approach to a real-world open-source blockchain framework using another BFT-based consensus algorithm. We selected Hyperledger Besu, a popular implementation of an Ethereum client that supports public and private blockchains. It uses IBFT (Istanbul Byzantine Fault Tolerance), which enhances performance by decreasing the number of nodes required for transaction confirmation from 3f + 1 to 2f + 1 (IBFT: https://github.com/ethereum/EIPs/issues/650). In the experiment for RQ3, we modified the Hyperledger Besu source code to communicate with our Controller for requesting new block generation and accessing the transaction pool (our source code for this integration is available at https://github.com/jungwonrs/JwRalph_Seo/tree/master/lab/besu_backup). We then performed an experiment similar to that of RQ1 to determine how much our approach can improve the IBFT consensus protocol's performance. Experimental setting of RQ3. We modified Hyperledger Besu 1.5.1 (https://github.com/hyperledger/besu/tree/1.5.1) to communicate with our consensus coordinator. For the experiment, we transmitted random transactions to Hyperledger Besu on a regular basis during the designated period. Initially, the number of nodes was four, and we gradually increased it to 40. We then measured the number of transactions contained in the generated blocks to determine the throughput. We set the block generation time of Hyperledger Besu to 10 s, and our coordinator execution interval was also set to 10 s, because this setting showed the best performance. The experiment was carried out for 5 min (=300 s, i.e., 30 block generations) for each node configuration. The transaction transmission interval started from 10 ms, because a significant number of transactions was missed in Hyperledger Besu for intervals under 10 ms; we carried out the experiment with transmission intervals from 10 to 40 ms in steps of 5 ms. Our hardware for the blockchain nodes and the consensus coordinator was an Intel i7-8700 3.2 GHz CPU with 24 GB RAM and Windows 10 OS. All nodes and the consensus coordinator were executed on one computer; due to this hardware limitation, the maximum number of Hyperledger Besu nodes was set to 40. In addition, the gas limit of Hyperledger Besu was removed to generate transactions continuously (we followed the method shown on the official Besu website: https://besu.hyperledger.org/en/stable/HowTo/Configure/FreeGas/). Experimental results of RQ3. Figure 5 shows the result of the experiment where the transmission interval is 10 ms, with 4 to 40 blockchain nodes.
In the figure, the results of using IBFT alone and IBFT equipped with our approach are denoted IBFT and CC + IBFT, respectively. Figure 6 shows the result of the experiment where the interval is 25 ms. As the transaction interval is 25 ms, the total number of transactions that can be issued in 5 min is 12,000 (=300/0.025), which is the maximum number of transactions that can be contained in the generated blocks. With four blockchain nodes, the transaction counts of IBFT and CC + IBFT are 11,900 and 12,000, respectively, meaning that most of the issued transactions are contained in the generated blocks; the gap between the two counts is not large. However, for 20 nodes, CC + IBFT achieved a 53.25% (=(11,255 − 7344)/7344) performance improvement, and CC + IBFT obtained a 344.81% (=(7584 − 1705)/1705) performance improvement in the case of 40 blockchain nodes, compared to using IBFT alone. Across all node configurations, the combined use of IBFT and our approach gained a 61.81% (=(103,348 − 63,868)/63,868) performance improvement on average. Thus, we can conclude that our approach improves the performance for specific blockchain node configurations and improves the IBFT consensus algorithm's scalability, because its performance loss grows more slowly as the number of blockchain nodes increases. All datasets resulting from this experiment are presented in Appendix A. While carrying out this experiment, we also observed the proportion of equal and unequal transaction rounds in the consensus coordinator, as shown in Table 2. As the coordinator execution interval is 10 s, the maximum number of executions of our approach is 30 per 5 min. We counted the number of rounds processing equal transactions, that is, the case of Step 3.1 Handling Equal Transactions, and the number of rounds processing unequal transactions, Step 3.2 Handling Unequal Transactions; they are denoted Equal Txs and Unequal Txs in the table. With four nodes at 10- and 25-ms intervals, all transaction pools of all nodes are equal in 93.33% and 96.67% of the 30 coordinator executions, respectively. However, as the number of nodes increases, the proportion of equal-transaction rounds decreases and that of unequal-transaction rounds increases. As a result, the average proportions of equal and unequal rounds are 55.33% and 44.67% for the 10-ms transaction issue interval, and 72% and 28% for the 25-ms case, respectively. Thus, we recognized that the contribution of our approach to performance and scalability improvement is positively associated with the proportion of equal transactions, as indicated by Equation (1). Threats to Validity Construct Validity. The results of RQ1, RQ2, and RQ3 may be influenced by the hardware specification and by the version of Hyperledger Besu. As diverse factors relate to performance, the experimental results, such as elapsed times and numbers of transactions, can differ. However, we carried out our experiments on the same hardware specification for the control and experimental groups, so the relative comparison of the results remains sound. In addition, the result of the experiment for RQ3 may differ with the version of Hyperledger Besu. We selected version 1.5.1 of Hyperledger Besu, which was the most recent version at the time; however, version upgrades are frequent, so different versions may show different results.
Content Validity. In this paper, scalability is measured by the extent of the decrease in performance as the number of nodes increases. Based on this definition, we repeatedly measured the performance gap over node changes. We also defined the term transaction issue to mean that a client issues one transaction and the transaction is contained in a new block through block generation. However, the block generation interval was set to 10 s in the experiments for RQ1 and RQ3, so the elapsed time is measured at a granularity of 10 s, which is not exact. To address this issue, we ran the experiment for RQ1 for 2 h 40 min, which is long enough to make the 10-s granularity negligible when measuring the average elapsed time for processing a transaction. In the experiment for RQ3, we fixed the experiment time at 30 block generations (i.e., 5 min) to resolve the issue. Internal Validity. Experiment results may be affected by different settings of the execution interval of the consensus coordinator (i.e., time_p ∼ time_c) and of the block generation time for PBFT and for IBFT in Hyperledger Besu. To handle this issue, we performed a sensitivity analysis and observed that setting the execution interval of the coordinator and the block generation time of PBFT and IBFT to 10 s showed the best performance; however, performance may differ with other interval settings. External Validity. In this paper, we claim that our approach is efficient for BFT-based consensus algorithms, and we applied it to two consensus algorithms: PBFT and IBFT. It is hard to claim that our approach applies to all BFT-based consensus algorithms. However, we selected the most popular BFT-based consensus algorithm, PBFT, and many consensus algorithms such as IBFT, Zyzzyva [18], SBFT [19], Hotstuff [33,34], and Tendermint [26] are derived from PBFT. Although we selected IBFT as a representative derivation of PBFT, we argue that our approach can be applied to other BFT-based consensus algorithms. Conclusions This paper proposes a coordination technique for improving the scalability of BFT-based consensus algorithms. The technique is composed of four steps: (1) electing a prime node; (2) collecting transactions from transaction pools; (3) processing equal and unequal transactions; and (4) generating a new block. Our key idea is to control the conditional execution of the consensus algorithm by first dividing the transaction pools into equal and unequal transactions and then dividing the unequal transactions into common and trouble transactions. The consensus algorithm is then executed only for trouble transactions, and the results are merged and finalized by sharing the transactions across all blockchain nodes. Based on this approach, we carried out three experiments to respond to three research questions. As a result of the experiments, PBFT equipped with our approach showed 4.75 times the performance, on average, of PBFT alone. In addition, our approach improved performance by 61.81% on average compared to the single use of IBFT. We also showed the correlation between performance and trouble transactions, associated with the transaction issue interval and the number of blockchain nodes. Although our approach improves the scalability of BFT-based consensus algorithms, it has explicit limitations.
First, the consensus coordinator is centralized, which exposes it to a single point of failure. Second, our approach does not address recovery of the coordinator when the system has failed or restarted. Third, our approach should be tested in real-world environments where diverse synchronization issues exist, such as clock synchronization across distributed nodes. For future work, we plan to carry out further research on distributing the centralized consensus coordinator and on establishing a recovery strategy for system failures in real-world environments.
2020-10-29T09:02:26.244Z
2020-10-28T00:00:00.000
{ "year": 2020, "sha1": "3b54d389e731d984fcfa47858f77e70726cfaeb4", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2076-3417/10/21/7609/pdf", "oa_status": "GOLD", "pdf_src": "Adhoc", "pdf_hash": "098633b252fbfe1741660923282ac92cca9ba816", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science" ] }
104367777
pes2o/s2orc
v3-fos-license
A targeted biocompatible organic nanoprobe for photoacoustic and near-infrared-II fluorescence imaging in living mice Multimodal molecular imaging probes have attracted much attention, and they possess great potential for accurately diagnosing diseases due to the synergistic superiorities of multiple complementary imaging modalities. Herein, a targeted biocompatible organic nanoplatform (IR-PEG-FA) with strong optical absorption in the first near-infrared window (NIR-I) for photoacoustic imaging (PAI) and excellent second near-infrared (NIR-II) fluorescence properties for NIR-II imaging is fabricated. The dual-modal nanoprobe is composed of the small organic dye molecule IR-1061, water-soluble poly(ethylene glycol) (PEG), and folic acid (FA) as the targeting ligand. Combining high temporal resolution with preeminent spatial resolution, the targeted biocompatible dual-mode nanoprobe for PAI and NIR-II imaging can provide more detailed data on cancers and diseases, enabling their diagnosis in a specific and precise way. Introduction The field of bio-imaging and cancer diagnosis has witnessed great growth in the past decades, mainly owing to the rapid progress of numerous imaging modalities with eminent sensitivity and resolution. [1][2][3][4] These emerging imaging modalities hold tremendous promise for remarkably improving patient medical treatment while potentially reducing damage to normal tissue. [5][6][7] Over the past years, the most noteworthy evolution has been the introduction of multimodality molecular imaging, which is regarded as the combination of several imaging modalities for the sake of maximizing their respective advantages while overcoming the defects of each modality. [8][9][10][11][12] Thus far, plenty of multimodality imaging platforms have been successfully adopted to improve disease diagnostic accuracy and efficacy. [13][14][15][16][17][18][19][20] Photoacoustic imaging (PAI), which combines optical signal detection with ultrasonic information collection, possesses noticeable advantages for biological imaging, with relatively favorable tissue penetration and spatial resolution. [21][22][23][24][25][26][27][28] However, to realize even higher resolution, the tissue penetration of PAI in the first near-infrared region (NIR-I, 650-950 nm) is still restricted owing to optical attenuation. 29 To address this limitation, developing imaging agents with fluorescence emission in the second near-infrared (NIR-II) region (1000-1700 nm) offers incomparable advantages. 30 These NIR-II agents show deeper tissue penetration and higher spatial resolution, which is attributed to reduced autofluorescence and diminished optical scattering. [31][32][33][34][35][36] Recent successful paradigms in the NIR-II window for in vivo lymphatic, 30,35 microvascular, 32,37 and tumor imaging 38,39 have repeatedly demonstrated the superior temporal and spatial resolution of the technique. Particularly, NIR-II imaging can accurately detect tumor margins based on real-time images to improve diagnostic accuracy and further guarantee the success of tumor resection. 34 On account of its dominant superiority over conventional NIR-I imaging, NIR-II imaging offers great opportunities in preclinical applications and clinical translation. 40
Hence, in view of the feasibility and compatibility of PAI and NIR-II fluorescence imaging, we were prompted to integrate these two imaging modalities to obtain synergetic and complementary information and thereby enhance the efficacy of cancer diagnosis. Thus far, inorganic nanomaterials, including single-walled carbon nanotubes, 41,42 quantum dots, 43,44 and rare-earth nanoparticles, 18,45 have been employed for in vivo NIR-II imaging. Owing to misgivings about their undetermined long-term toxicity, exploring organic molecular dyes for in vivo imaging is desirable to promote clinical translation. 46 More recently, some organic nanomaterials (small molecules and conjugated polymers) 30,33,40,47 have been developed as attractive NIR-II agents. Particularly, Cheng et al. developed the first dual-mode probe composed of D-A-D-type chromophores, which exhibited excellent NIR-II imaging and PAI properties in living mice. 48 Despite the rapid progress in exploring NIR-II imaging agents, reports of dual-mode contrast agents based on PAI/NIR-II imaging remain rare. Therefore, developing a novel biocompatible PAI/NIR-II imaging agent will greatly promote the development of the field of molecular imaging and diagnosis. Herein, we developed a targeted biocompatible dual-mode contrast agent, IR-PEG-FA (Scheme 1), which is composed of IR-1061 (a commercial organic dye) and folic acid (FA), for in vivo PAI and NIR-II imaging (Scheme 2). The obtained IR-PEG-FA exhibited a broad NIR absorption region with a peak at ~810 nm and a maximum emission wavelength at 1100 nm. Owing to the introduction of a molecular imaging ligand, along with the enhanced permeation and retention (EPR) effect at pathological tumor sites, IR-PEG-FA possesses tumor-specific targeting performance for in vivo dual-mode imaging. We believe that this tailor-made biocompatible dual-mode contrast agent will provide a new strategy for dual-mode PAI and NIR-II imaging and light the way for clinical applications in cancer diagnosis. Chemicals The functionalized poly(ethylene glycol) (SH-PEG5000-NH2) was purchased from ToYongBio Tech. Inc. (Shanghai, China). All the other reagents were commercially acquired (Sigma Aldrich) and used directly without further purification. Material characterization NMR spectra were recorded using a 400 MHz spectrometer (Bruker Ultra Shield Plus). The absorption spectra were recorded using a UV-3600 spectrophotometer (Shimadzu). NIR fluorescence spectra were recorded using an Edinburgh FLSP920 fluorescence spectrophotometer in the 900-1500 nm region with 808 nm laser excitation. An ALV/CSG-3 laser scattering spectrometer was used for dynamic light scattering (DLS) investigation. An HT7700 transmission electron microscope (TEM) was used for recording TEM images at an acceleration voltage of 100 kV. A Nexus-128 photoacoustic imaging (PAI) instrument (680-950 nm) was used for PAI in vitro and in vivo. The methyl thiazolyl tetrazolium (MTT) study was performed using a BioTek spectrophotometer (PowerWave XS/XS2 microplate). The in vivo NIR-II fluorescence imaging was performed using a home-built imaging set-up (CCD: NIRvana TE 640). The excitation wavelength was fixed at 808 nm (laser power: ~100 mW cm⁻²). Synthesis of IR-PEG-FA SH-PEG5000-NH2 (200 mg) and IR-1061 (50 mg) were dissolved in 30 mL dimethyl sulfoxide and allowed to react at room temperature for 36 h. 47 After removing the solvent by distillation under reduced pressure, the mixture was redissolved in pure water.
To remove unreacted PEG, the acquired solution was further purified by dialysis against deionized water for 72 h using a dialysis bag (MW: 7000 Da). The obtained product was then dried via freeze-drying to acquire a brown compound (IR-PEG). Then, 100 mg IR-PEG, NHS (25 mg), EDC·HCl (40 mg), and 30 mg folic acid (FA) were dissolved in 30 mL of dry N,N-dimethylformamide (DMF) at room temperature. After 72 h of reaction, DMF was removed by reduced-pressure distillation, and the residual solid was dissolved in an appropriate amount of deionized water. To remove unreacted FA and other impurities, the obtained solution was further subjected to dialysis against deionized water for 72 h using a dialysis bag (MW: 7000 Da). The final product (IR-PEG-FA) was dried by freeze-drying to yield a brown product. NIR fluorescence spectroscopy NIR fluorescence emission spectra were recorded using a spectrophotometer (Edinburgh FLSP920, 900-1500 nm) under laser excitation at 808 nm. The excitation light at 808 nm was provided by a diode laser (Ti:sapphire laser system, 808 nm, 180 mW) and filtered through a 900 nm long-pass filter. The excitation laser beam was passed through the sample of IR-PEG-FA in PBS solution (pH = 7.4), and the emission signal was collected in the transmission geometry. In order to reject the 808 nm laser light, an additional 900 nm long-pass filter was used to reduce background interference. Preparation of IR-PEG-FA nanoparticles IR-PEG-FA nanoparticles were prepared by directly dissolving IR-PEG-FA in phosphate-buffered saline (PBS, pH = 7.4) with continuous sonication. The obtained NPs were then filtered through a Millipore filter (0.22 μm) to remove insoluble product and other impurities for the cell and biological experiments. Cell viability assay The cytotoxicity of IR-PEG-FA was studied by an MTT experiment using U87MG and NIH-3T3 cells. These cells (1 × 10⁴ cells per mL) were cultured in 96-well plates. After 1 day of incubation, the medium was removed, and a series of concentrations of IR-PEG-FA solution (80, 60, 40, 20, and 10 mg mL⁻¹) were added into the wells and cultured for another 1 day. After that, the cells were washed twice with saline, and MTT solution (100 μL, 0.5 mg mL⁻¹) was added into each well. After cultivation for 4 h, the MTT solution was removed, 200 μL of dimethylsulfoxide was added into each well, and the plate was left for another 20 min. A microplate reader was used for acquiring absorbance values at 490 nm. The untreated cells were used as the control group, and their cellular viability was set at 100%. Tumor mouse model All U87MG tumor-bearing mice were treated on the basis of the rules set by the Animal Centre of Southeast University (Nanjing, China), and the experiments were approved by the Animal Ethics Committee of the Model Animal Research Center of Nanjing University. To acquire a suitable U87MG tumor model for in vivo imaging, 6 week old mice were used, and 1.5 × 10⁶ U87MG cells were injected subcutaneously into the target location of the mice. The tumor-bearing mice were used for in vivo imaging once the tumor grew to approximately 8 mm in size. In vitro and in vivo photoacoustic imaging The in vitro PAI properties of IR-PEG-FA aqueous solutions (50, 100, 200, 400, and 800 mg mL⁻¹) were studied, and the PA signals were recorded using the Endra PA tomography system at an excitation wavelength of 808 nm.
The obtained images were further analyzed using the ImageJ software, and an identical region of interest was used for collecting the quantitative PA signal intensity. For in vivo PAI, the living mice were anesthetized using 1.5% isoflurane in flowing air, and they were administered IR-PEG-FA (150 μL, 1 mg mL⁻¹) through the tail vein. The mice were then transferred into a chamber where the temperature was set at 38 °C and were provided with drinking water. The Endra PA imaging system was utilized to collect the PA images at 808 nm. In vitro and in vivo NIR-II imaging A 500 μL polymerase chain reaction (PCR) tube with IR-PEG-FA aqueous solution was placed in the NIR-II imaging instrument, and the NIR-II fluorescence images were recorded. The final image was analyzed using the ImageJ software. To further examine in vivo NIR-II imaging, the U87MG tumor-bearing living mice were intravenously injected through the tail with 150 μL of IR-PEG-FA solution (1 mg mL⁻¹) for video-rate in vivo fluorescence imaging. The in vivo NIR-II fluorescence imaging was recorded using the home-built imaging set-up (CCD: NIRvana TE 640). The 808 nm laser excitation power was fixed at ~100 mW cm⁻² on the animal surface. Preparation and characterisation of IR-PEG-FA The synthetic route of IR-PEG-FA is illustrated in Scheme 1. Briefly, aminated IR-PEG was first synthesized by introducing a functionalized poly(ethylene glycol) (PEG, Mw = 5000) chain (SH-PEG-NH2) into the IR-1061 molecule. The obtained amino group of IR-PEG then provides modifiable sites for the introduction of folic acid (FA) by an amidation reaction, affording the end-product, IR-PEG-FA (Fig. 1A). The successful linkage of PEG and FA can be determined from the MALDI-TOF-MS spectra (Fig. S1†) and ¹H NMR spectra (Fig. S2 and S3†) of IR-PEG-FA. Particularly, the mass spectrum (Fig. S1†) exhibited two peaks with maximum molecular weights at 5806.621 Da and 6247.476 Da, which were attributed to incomplete conjugation of a fraction of the FA molecules. Owing to its hydrophobic-hydrophilic groups, IR-PEG-FA was directly dispersed in phosphate-buffered saline (PBS, pH = 7.4) and could spontaneously self-assemble into nanoparticles, which gave the solution a brown color (Fig. 1B). Dynamic light scattering (DLS) showed that IR-PEG-FA has an average diameter of ~77 nm (Fig. 1D). A spherical morphology was observed in the transmission electron microscopy (TEM) image, with a particle size of 62 ± 4.6 nm (Fig. 1C). The larger nanoparticle size of IR-PEG-FA from DLS can be attributed to the introduced PEG chains and their extension in the aqueous state. Besides, no significant insoluble substances or precipitation were observed in the IR-PEG-FA solution after storage in a refrigerator (4 °C) for 30 days (Fig. 2A), demonstrating its excellent stability in aqueous solution. The optical performance of IR-PEG-FA was studied in PBS (pH = 7.4) by probing its absorption and fluorescence (FL) spectra. As shown in Fig. 2B, IR-PEG-FA possesses a broad absorption band with two peaks at 810 nm and 1000 nm. The mass extinction coefficient of IR-PEG-FA at 808 nm was 58.65 mg⁻¹ cm⁻¹ mL (Fig. 2C), which is superior to many reported NIR-absorbing materials, such as carbon nanotubes (46.5 mg⁻¹ cm⁻¹ mL), gold nanorods (13.89 mg⁻¹ cm⁻¹ mL) and conjugated polymers (55 mg⁻¹ cm⁻¹ mL). 47
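Such mass extinction coefficients follow from the Beer-Lambert law, ε = A/(c·l). As a quick illustration — a minimal Python sketch with a made-up absorbance reading, not data from this work:

def mass_extinction(absorbance: float, conc_mg_per_ml: float, path_cm: float = 1.0) -> float:
    """Beer-Lambert law: epsilon = A / (c * l), in mg^-1 cm^-1 mL."""
    return absorbance / (conc_mg_per_ml * path_cm)

# Hypothetical reading: A = 0.587 at 808 nm for 0.01 mg/mL in a 1 cm cuvette
print(mass_extinction(0.587, 0.01))  # ~58.7, close to the reported 58.65 mg^-1 cm^-1 mL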
Besides, the fluorescence spectrum of IR-PEG-FA was investigated, and it showed an evident NIR-II emission property with a peak at around 1100 nm under an excitation wavelength of 808 nm (Fig. 2D). Herein, considering the maximum absorption of IR-PEG-FA and the penetration depth required for biological imaging in vivo, the 808 nm laser was used [32]. Thus, the strong NIR absorption and NIR-II emission of IR-PEG-FA could provide a theoretical basis for PAI and NIR-II imaging.

In vitro PAI and NIR-II imaging

To evaluate the PAI property, the PA signal intensities of IR-PEG-FA in aqueous solution were recorded at 808 nm. As shown in Fig. 3A, the strong and bright PA images indicated that IR-PEG-FA possesses an excellent PA property, providing an opportunity for in vivo PAI. Moreover, IR-PEG-FA exhibited a linear dependence of the PA signal on sample concentration (Fig. 3B). Next, to further study the feasibility of NIR-II imaging, the fluorescence image of IR-PEG-FA in the NIR-II region was acquired at an excitation wavelength of 808 nm. As shown in Fig. 3C, IR-PEG-FA displayed significant NIR-II emission efficacy and possessed a strong NIR-II emission signal. In addition, to explore the cytotoxicity of IR-PEG-FA, the viabilities of NIH-3T3 and U87MG cells after administration of IR-PEG-FA were evaluated; Fig. 3D shows that the agent possessed fine cytocompatibility. All of these results showed that IR-PEG-FA offers clear advantages for further in vivo imaging applications.
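The linear concentration dependence noted for Fig. 3B is what makes a quantitative PA readout possible. The sketch below shows such a calibration fit using made-up signal values rather than the measured ones; in practice, the signals would come from the ImageJ ROI analysis described in the methods.

import numpy as np

# Hypothetical mean PA amplitudes (arbitrary units) for the concentration
# series used above; illustrative values only.
conc_ug_ml = np.array([50, 100, 200, 400, 800], dtype=float)
pa_signal = np.array([0.11, 0.21, 0.44, 0.86, 1.71])

# Linear fit: PA = slope * c + intercept.
slope, intercept = np.polyfit(conc_ug_ml, pa_signal, deg=1)
pred = slope * conc_ug_ml + intercept
r2 = 1 - np.sum((pa_signal - pred)**2) / np.sum((pa_signal - pa_signal.mean())**2)
print(f"slope = {slope:.4e} a.u. per ug/mL, R^2 = {r2:.4f}")

# Invert the calibration to estimate an unknown concentration from its signal.
unknown_signal = 0.60
print(f"estimated concentration ~ {(unknown_signal - intercept) / slope:.0f} ug/mL")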
In vivo PAI and NIR-II imaging

The capability of IR-PEG-FA for targeted in vivo PAI and NIR-II imaging was further demonstrated using the U87MG tumor mouse model. As shown in Fig. 4A, before intravenous injection of IR-PEG-FA via the tail vein, the tumor site exhibited low PA signals at 808 nm on account of the intrinsic background signal [49]. Owing to the introduction of a molecular imaging ligand, along with the enhanced permeability and retention (EPR) effect of the pathological tumor sites, IR-PEG-FA possessed greater tumor-specific targeting performance for in vivo imaging. As expected, a distinct and bright increase of the PA signal in the tumor region could be detected after IR-PEG-FA injection. At 4 h post injection, the PA signal within the tumor reached its maximum value, indicating effective accumulation of IR-PEG-FA owing to both active and passive tumor targeting. After that, the PA signal around the tumor area decreased gradually because of the biological metabolism and biodegradability of the NPs in the physiological environment in vivo. In addition, at 24 h post injection of IR-PEG-FA, the mice were euthanized. The major organs, including the lungs, liver, heart, kidneys, spleen, and tumor, were excised and imaged via PAI (Fig. 5A). The liver and spleen tissues exhibited bright and strong PA intensity, whereas the heart, lung, kidney and tumor tissues showed relatively low signals (Fig. 5B). This indicates that the administered IR-PEG-FA was cleared mainly through the hepatobiliary system. To test the NIR-II imaging ability of IR-PEG-FA in vivo, the U87MG tumor-bearing mice were administered 150 μL of the NPs (1 mg mL⁻¹) via the tail vein. Interestingly, the blood vessels of the whole body could be distinctly resolved at 2 min post injection of IR-PEG-FA (Fig. 4C), with high resolution and excellent image quality. The fluorescence signals of the tumor sites then brightened gradually with the accumulation of IR-PEG-FA (Fig. 4D and E). At 4 h, the fluorescence signal of the tumor region reached its peak, consistent with the PAI results. In particular, the maximum fluorescence signal intensity at 4 h post injection was approximately 21-fold higher than that at 24 h post injection, further indicating the good tumor-specific targeting and excellent biodegradability of IR-PEG-FA (Fig. 4E). These results indicate that IR-PEG-FA, as a water-soluble and biocompatible NIR-II fluorescence agent, is well suited to biological imaging.

Toxicity evaluation

Furthermore, the toxicity of IR-PEG-FA was investigated by collecting and analyzing stained slices of the major organs from mice 30 days after administration of IR-PEG-FA. As in the control group (administered PBS), no distinct pathological tissue damage, cell apoptosis, or necrosis was seen in the spleen, kidney, liver, heart, or lung after administration of the agent, signifying the absence of distinct side effects (Fig. 6), similar to previously reported NIR-II fluorophores [46]. Altogether, these satisfying results demonstrate that IR-PEG-FA can be adopted for NIR-II fluorescence and NIR-triggered PAI in living mice.

Conclusions

In summary, we designed and synthesized a water-soluble, targeted and biocompatible dual-mode contrast agent that exhibits strong NIR absorption and a bright fluorescence emission signal in the NIR-II region for in vivo PAI and NIR-II imaging. We believe that the tailor-made IR-PEG-FA for PAI and NIR-II fluorescence dual-modal imaging will attract widespread attention for clinical applications of molecular imaging.

Conflicts of interest

There are no conflicts to declare.
EUV diagnostics of pulsed plasma systems

Extreme ultraviolet (EUV) diagnostics have been a subject of continuing interest for the last several decades in the field of space and plasma sciences. In recent years, EUV diagnostics have been widely employed in laboratories and industry to characterize EUV emission from electromagnetic radiation sources that have a strong impact on future technology. This paper describes some of the important EUV diagnostics, such as the EUV photon detector, EUV energy measurement system, EUV pinhole camera, grazing incidence spectrograph and transmission grating spectrograph, employed at Tokyo Institute of Technology to characterize EUV emission from low-current (<15 kA) and low-energy (~tens of joules) pulsed plasma systems, together with some typical results.

Introduction

Extreme ultraviolet (EUV) diagnostics have been a subject of continuing interest for the last several decades in the field of space and plasma sciences [1-5]. EUV has proved to be a valuable wavelength range for the study of particular groups of astronomical objects, including white dwarf stars and stellar coronae, as well as the interstellar medium. Using EUV telescopes, the probing of interstellar space and of extended atmospheres around stars has now become routine work. Moreover, the proper study of extragalactic objects would not be possible without the development of diagnostics at EUV wavelengths [1]. Similarly, in the past, tokamak researchers attempted to study the transport and other physical properties of plasma using EUV and soft X-ray lines emitted by highly ionized plasmas [2]. A renewed interest in EUV spectral measurements from tokamaks and other plasma systems has arisen because of the need to generate atomic physics data in support of future large fusion devices [4,5]. The prospect of employing EUV and soft X-ray sources for next-generation lithography (13.5 nm), microscopy in the "water window" (2.3 nm-4.4 nm), plasma diagnostics and EUV/soft X-ray laser research has led to considerable progress in the development of high spatial and temporal resolution EUV diagnostics in recent years [6,7]. Extreme ultraviolet (EUV) plasma sources will become the next-generation lithographic source only when significant obstacles such as in-band EUV energy, small EUV emission volume, spectral purity, and positional and energy stability are addressed [8-11]. Although the final goal is to produce a suitable 13.5 nm light source, solving the full range of issues requires a more thorough understanding of the plasma that emits the EUV optical emission. Because of the complex nature and extreme conditions produced in EUV plasma sources, several plasma diagnostics are needed to fully characterize these extreme ultraviolet lithography (EUVL) sources. This paper describes the development and employment of EUV diagnostics at Tokyo Institute of Technology for characterizing the EUV emission from pulsed plasma systems, namely a capillary discharge [12,13] and a gas jet Z-pinch [14,15]. The performance of these EUV-emitting sources has been investigated in terms of various technological aspects of EUV lithography. Some important results are presented here.

Methodology

The spectral as well as electrical performance of our EUV systems was evaluated by employing diagnostics such as an EUV photon detector, an EUV mini calorimeter (energy measurement system), an EUV pinhole camera, EUV spectrographs and some routine diagnostics such as voltage and current probes.
Since the motivation behind this work is to achieve a high flux of EUV photons at 13.5 nm, the EUV diagnostics were developed mainly to scan the wavelength range of 5 to 20 nm. The time evolution of EUV photons in the wavelength range of 5 to 18 nm was monitored using an EUV photodiode (IRD, AXUV-10) coupled with a Zr/C filter (thickness 200/50 nm). The choice of photodiode basically depends upon the quantum efficiency of the semiconductor material in the desired wavelength range and the capacitance of the diode. Besides, an EUV mini calorimeter, comprising a Zr filter, a Mo/Si multilayer mirror and a photodiode, was used for the absolute in-band power measurement. The Zr filter used in this detector is transparent in the wavelength range of approximately 12 to 18 nm. The multilayer mirror is coated with 50 bilayers of Mo/Si having maximum reflection at 13.5 nm, and the bandwidth of the stack is typically 0.5 nm. Since the reflectivity of the multilayer mirror is maximum at a 45° incidence angle, the mirror was positioned accordingly with respect to the incoming EUV light. The calorimeter was calibrated with a JENOPTIK energy monitor [12]. The EUV pinhole camera, consisting of a 50 µm diameter pinhole, a Zr filter and an X-ray CCD camera (Andor Technology Ltd. DO434), was used for measuring the dimensions of the EUV-emitting volume and for evaluating the positional stability of the EUV-emitting zone. The resolution of the pinhole camera, which is limited by the diameter of the pinhole and by diffraction, is found to be around 100 µm. For recording the wavelength of the EUV emission, a grazing incidence spectrograph and a transmission grating spectrograph were employed. A commercially available McPherson flat-field grating spectrometer (model 248/310G) was used for the investigation of photon emission in the 10 to 20 nm wavelength range. It mainly consists of an entrance slit, a 600 grooves/mm diffraction grating and a microchannel plate (MCP) intensifier. The resolution of the spectrometer is of the order of 0.036 nm. In addition to the grazing incidence spectrograph, a transmission grating spectrograph was designed in house and employed for recording the photon emission mainly in the 3 to 50 nm wavelength range. A transmission grating of 1000 lines/mm in normal incidence geometry, a micrometer-scale pinhole and an X-ray CCD camera were used to fabricate the transmission grating spectrometer. A spectral resolution of 0.45 nm was obtained by optimizing the aperture, the source-to-grating distance and the grating-to-detector distance. For routine checks of the electrical signals at different parts of the circuit, commercially available voltage and current probes were used. The pulsed plasma systems used in the present work, based on the well-known "pinch effect", generate a high-temperature and high-density transient plasma that emits copious amounts of EUV radiation at the expense of electrical power. Two compact pulse power drivers were designed and constructed to pump electrical power into the pulsed plasma systems. These comprise a high-voltage power supply, pulse forming network, spark gap switch, pulse transformer, magnetic switch, capacitor bank, etc. The details of the pulse power drivers are described elsewhere [12]. One pulse power driver, referred to as the slow pulse power driver, delivers a low dI/dt of around 20 A/ns at the load. The other, referred to as the fast pulse power driver, produces a high dI/dt of the order of 57 A/ns at the discharge part.
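As a rough illustration of how dI/dt figures like the 20 and 57 A/ns quoted above can be extracted from a current-probe trace, the sketch below numerically differentiates a synthetic damped-sinusoid current waveform. The waveform parameters are illustrative choices, not the actual driver waveforms.

import numpy as np

# Synthetic discharge current: damped sinusoid with a nominal 10 kA amplitude.
# Period and damping time are illustrative, not measured driver parameters.
t = np.linspace(0, 1.2e-6, 4001)              # time axis, s
i_amp, period, tau = 10e3, 1.1e-6, 800e-9     # A, s, s
current = i_amp * np.exp(-t / tau) * np.sin(2 * np.pi * t / period)

# Numerical derivative of the trace gives dI/dt; report the initial rise.
didt = np.gradient(current, t)                # A/s
print(f"initial dI/dt ~ {didt[0] / 1e9:.0f} A/ns")
print(f"maximum dI/dt ~ {didt.max() / 1e9:.0f} A/ns")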
Brief outlines of the different discharge heads employed in the present work are given hereafter. The capillary discharge head comprises mainly a narrow ceramic tube sandwiched between specially configured coaxial electrodes, as shown in figure 1(a). A 10 mm long alumina capillary with an inner diameter of 2 mm is rigidly fixed between two molybdenum electrodes. The gas jet Z-pinch discharge head has the uniqueness of a jet-type coaxial electrode configuration, as shown in figure 1(b). The electrode configuration of this source typically has a dual-orifice nozzle (cathode) and a corresponding diffuser (anode). The dual-orifice nozzle consists of two simplex nozzles arranged concentrically. The inner nozzle has a 2 mm mouth diameter, and the outer nozzle, with an annular cylindrical opening, has a 1 mm width. The annular separation between the inner and outer nozzles is 4 mm. The inner and outer diameters of the diffuser are 6 and 24 mm, respectively. The inner nozzle supplies the source gas (xenon) for the EUV light, and the outer nozzle supplies helium gas for the gas curtain. The typical design parameters of the nozzles and the pressure inside the detection chamber help to maintain a subsonic gas flow through the nozzles. The discharge head is mounted inside the detection chamber through one of its ports, as illustrated in figure 2. The detection chamber has another five ports that are used for diagnostic purposes as well as evacuation. The detection chamber is usually evacuated to a base pressure below the mTorr level with the aid of a turbo pump. Xe gas is fed into the discharge head with the help of a mass flow controller. The supply gas pressure and chamber pressure are monitored using Baratron pressure transducers. The pressure of the xenon gas near the discharge head is controlled up to several hundred mTorr, while the chamber pressure is maintained below several mTorr.

Results and discussion

Numerous experiments were conducted on each discharge head to determine the discharge dynamics and the optimum EUV emission by varying different experimental conditions. Both pulse power drivers were first employed to check the performance of the capillary discharge head, and it was found that the fast pulse power driver with high dI/dt produces more stable and intense EUV emission. The significant findings for each discharge head driven by the fast pulse power supply are mainly discussed in this paper. The electrical measurements indicated that our fast pulse power driver delivered around 10 kA discharge current and 25 kV discharge voltage across the capillary head to create a hot and dense plasma, which emits photons mainly in the EUV region. The time evolution of the EUV emission was compared with the discharge current pulses for various experimental conditions using the EUV photodiode. A typical set of traces of the discharge current and photodiode signals recorded at 5 Torr supply pressure and 9 kV charging voltage is shown in figure 3. The traces indicate that the EUV photons emerge 40 ns after the initiation of the discharge inside the capillary, and the photon emission reaches its peak value during the maximum of the discharge current. The pressure inside the capillary was varied from 2 to 8 Torr in order to find the optimum pressure condition for maximum EUV output. It is observed that the EUV emission is maximum at a pressure of around 5 Torr.

Figure 3. Traces of EUV output and current signals
The EUV mini calorimeter was used to evaluate the absolute in-band radiation at the best operating conditions. It is estimated to be a maximum of 3.3 mJ/sr/2%BW/pulse at the optimum operating condition [12]. The angular variation of the EUV measurement shows that the EUV emission is fairly symmetrical except at a 10° angle, as illustrated in figure 4. The EUV pinhole camera was employed on axis (0°) and off axis (45°) of the capillary plasma system to record images of the EUV-emitting zone under various experimental conditions, such as different supply pressures and charging voltages. Figure 5 shows typical on-axis and off-axis EUV images obtained for the best operating condition (9 kV charging voltage and 5 Torr supply pressure). The images obtained on axis and off axis are circular and ellipsoidal in nature, respectively. The images recorded on axis appear fairly symmetric with respect to the capillary axis irrespective of the gas pressure. The high-intensity zone is reasonably symmetric with respect to the total EUV-emitting zone. With increasing supply gas pressure, the EUV emission region appears to be ejected out of the capillary. The size and intensity of the EUV plasma strongly depend upon the gas pressure. A source dimension with a length of 0.143 mm and a diameter of 1.103 mm is estimated from the pinhole images at the optimum condition. The positional and pulse-to-pulse intensity stability were also evaluated from the pinhole imaging, and quite encouraging results were obtained.

Figure 5. Typical on axis and off axis pinhole images

Time-resolved EUV emission spectra from the capillary discharge head were recorded using the McPherson spectrometer. The slow power supply was used to power the discharge head, and observations were carried out over the entire time range corresponding to the slow current pulse. Two typical time-gated spectra recorded 450 and 700 ns after the initiation of the discharge are shown in figure 6. The spectrum obtained at 450 ns shows four broadband emission peaks centered at 11, 13.5, 15.2 and 17.2 nm, and these peaks are identified as transitions in Xe11+, Xe10+, Xe9+ and Xe8+ ions [11,12]. Apart from these broadband peaks, lines from oxygen impurities at 15.2 and 17.2 nm also appear. These lines of the O4+ ionization state are attributed to the emission of impurities from the wall of the capillary. In the spectrum recorded at 700 ns, the height of the broadband peaks is suppressed and the emission from the impurity lines increases significantly, as shown in figure 6(b). The spectra recorded at later times are dominated by the impurity lines, and these emissions do not contribute to the usable EUV output power for lithography. The EUV spectra were also recorded using the transmission grating spectrograph. Figure 7 shows the transmission grating spectra for different Xe gas supply pressures (3 to 6 Torr). The transmission grating spectra also show broadband peaks at 11, 13.5, 14.5 and 17.5 nm. From the spectra it is noticed that the intensity of each peak decreases with increasing supply gas pressure. On increasing the supply gas pressure beyond 6 Torr, only the 14.5 nm peak remains prominent in comparison with the other peaks.

Figure 7. EUV spectra obtained from capillary discharge using transmission grating spectrometer
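For a transmission grating spectrograph like the one above, the wavelength axis follows from the grating equation mλ = d·sinθ, with sinθ ≈ x/L for small diffraction angles. The sketch below converts detector position to wavelength under assumed geometry: the 1000 lines/mm pitch comes from the text, while the grating-to-detector distance and detector positions are hypothetical.

import numpy as np

# Grating equation (first order): m * wavelength = d * sin(theta),
# with sin(theta) ~ x / L for small angles in normal-incidence geometry.
d = 1e-3 / 1000          # grating period for 1000 lines/mm, in metres
L = 0.35                 # grating-to-detector distance in metres (assumed)
m = 1                    # diffraction order

# Hypothetical detector positions (metres from the zeroth order).
x = np.array([3.9e-3, 4.7e-3, 5.2e-3, 6.1e-3])
wavelength_nm = d * np.sin(np.arctan(x / L)) / m * 1e9
for xi, wl in zip(x, wavelength_nm):
    print(f"x = {xi*1e3:4.1f} mm  ->  lambda = {wl:5.2f} nm")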
EUV photon yield in the range of 6 to 15 nm from the gas jet Z-pinch source was examined by employing the EUV photodiode integrated with the Zr filter. The photodiode signals (with and without gas curtain) together with the current pulse are shown in figure 8. In both cases the EUV photons appear 70 ns after the initiation of the discharge current and reach a maximum nearly just before the maximum of the current pulse. Both the EUV intensity peak and the EUV photon flux improve by around 30% in the presence of the gas curtain. The EUV emission lasts roughly 110 ns in the absence of the gas curtain, whereas it is prolonged by another few tens of nanoseconds in the presence of the gas curtain. The rise time of the EUV signal is shorter than the fall time, which may be indicative of fast pinching of the plasma and slower expansion of the disrupted plasma. A second peak in the EUV signal always appeared in the presence of the gas curtain, but only at higher Xe gas pressures. The appearance of this second peak suggests another minor pinching of the expanding Xe plasma that occurs due to the confinement caused by the gas curtain. When an additional pumping system is employed at the diffuser, the maximum photon flux obtained is more than twice the photon signal obtained without the gas curtain.

Figure 8. Traces of EUV photodiode and current signals

The in-band energy measurement was performed with the help of the EUV mini calorimeter at 10 Torr Xe supply pressure and 4 mm electrode separation. The EUV output in 2% bandwidth at 13.5 nm is estimated to be 0.78 mJ/sr/pulse, which is a much lower value than that observed for the capillary discharge head. In addition, the calorimeter was utilized to examine the angular variation of the in-band radiation, and it is found to be exceptionally isotropic over the available observation angles. Time-integrated EUV pinhole images were recorded at a radial position with a time-integrated EUV pinhole camera for various Xe gas pressures (10 to 30 Torr) and electrode separations (4 to 16 mm). The influence of both experimental parameters on the EUV emission is distinctly observed. Figure 8 shows some of the EUV images recorded at different Xe gas pressures for a 12 mm electrode separation. It is observed that the maximum EUV intensity is obtained at 20 Torr gas pressure, which is roughly five times more than the intensity obtained at lower (10 Torr) and higher (30 Torr) gas pressures. Moreover, the EUV intensity is improved by one order of magnitude simply by increasing the electrode separation from 4 mm to 12 mm. The shot-to-shot fluctuation in the EUV intensity measurement is approximately 4%. A reasonably strong EUV intensity is obtained in the pressure range of 18 to 25 Torr for electrode separations greater than 6 mm, and the optimum condition for EUV intensity is a 12 mm electrode separation and 20 ± 2 Torr supply gas pressure. The dimensions of the EUV-emitting zone were estimated at the optimum condition (12 mm electrode separation and 20 Torr supply pressure), and the FWHM diameter and length are found to be 0.16 and 0.92 mm, respectively. However, the dimensions of the EUV-emitting zone increase many-fold while the EUV intensity reduces significantly at higher Xe supply pressures. The influence of the gas flow rate of the He gas curtain on the EUV intensity and dimensions was also studied. An increase of nearly 25 to 50% in EUV intensity is noted in the 20 to 25 Torr Xe gas pressure range because of the presence of the gas curtain. However, the He flow rate has no major role in controlling the maximum EUV intensity irrespective of the Xe supply pressure. The influence of the gas curtain on the EUV-emitting zone is observed only at higher Xe gas pressures (> 20 Torr), and the gas flow rate of the gas curtain has very little effect on the size of the EUV-emitting zone.
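FWHM source dimensions like those quoted above are typically extracted from intensity profiles cut through the pinhole images. A minimal sketch of such an estimate on a synthetic Gaussian profile follows; the profile values are illustrative only, chosen so the result matches the 0.16 mm diameter mentioned in the text.

import numpy as np

# Synthetic 1-D intensity profile across the pinch (Gaussian, illustrative only).
x_mm = np.linspace(-1.0, 1.0, 2001)
sigma = 0.16 / 2.3548            # choose sigma so that FWHM ~ 0.16 mm
profile = np.exp(-x_mm**2 / (2 * sigma**2))

# FWHM: distance between the two half-maximum crossings of the profile.
half_max = profile.max() / 2
above = x_mm[profile >= half_max]
fwhm = above.max() - above.min()
print(f"FWHM ~ {fwhm:.3f} mm")   # ~0.160 mm for this synthetic profile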
Conclusions

To conclude, EUV diagnostics such as a photon detector, mini calorimeter, pinhole camera, and grazing incidence and transmission grating spectrographs were developed and employed to characterize the EUV emission from pulsed plasma systems. These diagnostic tools have helped to assess the performance of the capillary discharge and gas jet Z-pinch sources in terms of different technological aspects of an EUV lithography source.
Multiple idiopathic external cervical root resorption in a patient treated continuously with denosumab: a case report

External root resorption is an irreversible loss of dental hard tissue as a result of odontoclastic action. Multiple external cervical root resorptions in permanent teeth are rare, and the exact cause of external cervical root resorption is unclear. It is currently well established that RANK/RANKL signaling is essential for osteoclastogenesis and osteoclast-mediated bone resorption. Denosumab is an anti-RANKL antibody used for the treatment of postmenopausal osteoporosis. Suppression of the RANK/RANKL pathway by denosumab would be expected to suppress the activity of the clastic cells responsible for hard tissue resorption, involving both osteoclasts and odontoclasts. This case report describes aggressive and generalized idiopathic external cervical root resorption that started and advanced during ongoing antiresorptive therapy with the human monoclonal RANKL-blocking antibody denosumab, without discontinuation of therapy, in a 74-year-old female patient treated for postmenopausal osteoporosis. The resorptive defects were too extensive and progressively led to fractures of the teeth. The number of teeth involved and the extent of destruction excluded conservative treatment, and the affected teeth had to be extracted for functional prosthetic reconstruction. This finding suggests that treatment with denosumab may be associated with severe and aggressive odontoclastic resorption of multiple dental roots despite an adequate inhibitory effect on osteoclasts in the treatment of osteoporosis. RANKL-independent pathways of clastic cell formation are likely to be involved in this pathological process.

Background

External cervical root resorption (ECR) is an irreversible loss of hard dental structures that may necessitate dental treatment or even extraction of the affected teeth. Multiple external resorptions in the permanent dentition are an uncommonly reported phenomenon. The etiology and pathogenesis of ECR are not precisely understood, but the process is believed to involve the action of osteoclast-like cells (odontoclasts) originating from hematopoietic stem cells in the bone marrow [1]. In addition, it has been suggested that the presence of inflamed fibrovascular connective tissue adjacent to a root surface lacking an intact periodontal ligament is a prerequisite for cervical root resorption [1]. Although several cases of ECR have been reported as idiopathic, various mechanical or chemical factors have been associated with ECR, particularly dental trauma and orthodontic treatment [2,3]. Most cases of ECR are asymptomatic and are typically discovered as an incidental finding on radiographic examination [4]. The pathologic resorptive process typically starts below the epithelial attachment at the mesial or distal cemento-enamel junction of the tooth and can progress to involve the entire cervical region [5]. ECR is a result of increased odontoclastic activity that develops as a consequence of local damage to, or deficiency of, the periodontal ligament or subepithelial cementum [3]. Receptor activator of nuclear factor kappa B (RANK) is the receptor for RANK ligand (RANKL) and part of the RANK/RANKL/osteoprotegerin (OPG) signaling pathway that regulates osteoclast differentiation and activation. RANK/RANKL/OPG plays an essential role in osteoclastogenesis and osteoclastic bone resorption [6].
Histological studies show that local trauma to the bone or periodontal ligament increases the concentrations of RANKL, macrophage colony-stimulating factor (M-CSF), tumor necrosis factor alpha (TNF-α), interleukin 1β (IL-1β) and other inflammatory cytokines that stimulate osteoclast and odontoclast differentiation [7-9]. Denosumab is a novel variant of antiresorptive therapy used to treat osteoporosis (e.g., Prolia, 60 mg every 6 months) or to treat or prevent skeletal complications of malignancies (e.g., Xgeva, 120 mg up to once every month) [10]. Denosumab acts as a human monoclonal antibody that prevents the RANK/RANKL interaction, thereby inhibiting osteoclast development and activation [11]. The long-term efficacy and safety of denosumab have been evaluated in the FREEDOM Extension Trial, with results published for up to 10 years of denosumab exposure demonstrating a continuing increase in bone mineral density, a sustained reduction in bone turnover markers, a low fracture incidence and a consistent safety profile [12]. Antiresorptive therapy with denosumab inhibits RANK/RANKL signaling and thereby osteoclast-mediated bone resorption, suggesting the same effect for odontoclast-mediated root resorption [13,14]. The prevalence of ECR in the general population varies from 0.02 to 0.08% [15]. Evidence on the prevalence of ECR in individuals affected by osteoporosis or treated with denosumab is lacking. In the present article, we describe the coincidence of long-term, continuous antiresorptive therapy with denosumab and aggressive multiple external cervical root resorption in a patient with postmenopausal osteoporosis.

Case presentation

A 74-year-old female patient presented in July 2020 with a chief complaint of occasional pain, thermal sensitivity and slightly increased mobility of her left mandibular second premolar (tooth 35). Her symptoms had started two months earlier. She also reported unexpected fractures of otherwise asymptomatic teeth in the preceding weeks. Four weeks prior to the first examination, the crown of the first left mandibular incisor (tooth 31) had suddenly broken. Two weeks later, in a similar manner, she lost the crown of the second maxillary right incisor (tooth 12) because of a fracture in the cervical area of the tooth caused by chewing. She was referred by her dentist to the Department of Stomatology and Maxillofacial Surgery at University Hospital Martin with numerous sites of external cervical root resorption found on her recent panoramic radiograph (Fig. 1). The roots of all 23 of her teeth were affected, and the crowns of the previously broken teeth 12 and 31 were missing. There were no signs of external cervical root resorption on her previous radiographic examination in 2017. However, the panoramic radiograph from 2017 showed signs of Generalized Stage 3 Periodontitis with moderate to severe bone loss around most maxillary and several mandibular teeth. Subgingival calculus was noted on the maxillary molars. Bone loss around these teeth was severe and consistent with the presence of calculus. Significant bone loss was present around the maxillary premolars, between the incisors, and between the maxillary right canine and lateral incisor (Fig. 2). There was no history of dental trauma, periodontal surgery, intracoronal bleaching or Paget disease of bone. Her medical history was positive for arterial hypertension. Her treatment for osteoporosis had started in 2008 with parathormone injections.
Because of an allergic reaction to parathormone, the therapy was changed to strontium ranelate for the next two years. Since 2011, she had been administered antiresorptive therapy with the human monoclonal antibody denosumab (60 mg) via subcutaneous injections every six months, without any discontinuation until her first presentation to our department. She received the last injection of denosumab in March 2020. At the time of her presentation to our clinic, she was taking the following medications: denosumab, calcium, vitamin D, nitrendipine, perindopril, bisoprolol, latanoprost, and timolol. No detectable abnormalities were identified on extraoral and intraoral examination. Intraoral examination revealed good oral hygiene and a dentition with multiple restorations and no active carious lesions. The crowns of the previously broken teeth 31 and 12 were missing, and the roots were covered with gingiva (Fig. 3). Tooth 35 showed slightly increased mobility; all other teeth were not discolored and did not exhibit increased mobility. Despite the radiographically noticeable extensive loss of root structure, all teeth tested vital, with the exception of the previously endodontically treated teeth 21 and 24 and molars 26 and 37 with massive metallic restorations. Her current panoramic radiograph showed aggressive external cervical root resorption in all 23 teeth. Most resorptive lesions started in the approximal cervical area of the root and spread toward the pulp or toward the apex of the root. The resorptive process extended beyond the coronal third of the root, particularly for teeth 17, 27, and 36. There was also a resorptive defect in the apical area of tooth 16. Attempted probing of the periodontal sulcus of the upper central incisors with a periodontal probe failed to penetrate through the gingivodental junction into the cervical defects. The gingiva around the upper central incisors was pink and firm on probing, without bleeding, with probing depths of 3 mm (Fig. 3). However, the other maxillary and most mandibular teeth showed clinical signs of periodontitis with gingival swelling and bleeding on probing. Overhanging fillings of the maxillary and left mandibular premolars and calculus on the maxillary molars were an apparent potential etiologic factor for the advanced generalized periodontitis (Fig. 2). After the examination in our clinic, with the diagnosis of multiple idiopathic ECR and Generalized Stage 3 Periodontitis, the patient consulted her endocrinologist, who ordered routine laboratory tests. The results (including the alkaline phosphatase level) were within the normal range, with the exception of an elevated cholesterol level of 5.89 mmol/L. In July 2020, circulating intact parathyroid hormone and serum vitamin D, calcium, magnesium and phosphorus levels were normal. Her osteoporosis treatment with denosumab, calcium and vitamin D was assessed as effective, with decreased bone turnover markers: the osteocalcin level was 5.06 ng/mL (normal range, 15.0-46.0 ng/mL), procollagen type 1 N-terminal propeptide was 9.27 ng/mL (normal range, 16-76 ng/mL), and collagen type 1 C-terminal telopeptide was 0.051 ng/mL (normal range, 0.104-1.008 ng/mL). Her densitometry in 2010 showed osteoporosis. During a decade of treatment, her bone mineral density increased from 0.528 g/cm² (T-score −3.4) in 2010 to 0.697 g/cm² (T-score −2.0) in 2020.
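The laboratory assessment above amounts to two simple checks: comparing each bone turnover marker against its reference interval and quantifying the BMD change. A small Python sketch of that arithmetic, using the values reported in this case, is shown below.

# Bone turnover markers reported in the case: value and (low, high) reference range.
markers = {
    "osteocalcin (ng/mL)": (5.06, (15.0, 46.0)),
    "P1NP (ng/mL)": (9.27, (16.0, 76.0)),
    "CTX (ng/mL)": (0.051, (0.104, 1.008)),
}
for name, (value, (low, high)) in markers.items():
    status = "below range" if value < low else "above range" if value > high else "in range"
    print(f"{name}: {value} -> {status}")  # all three fall below range (suppressed turnover)

# Bone mineral density change over a decade of treatment.
bmd_2010, bmd_2020 = 0.528, 0.697  # g/cm^2
print(f"BMD increase: {100 * (bmd_2020 - bmd_2010) / bmd_2010:.0f}%")  # ~32%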
Because of the pain and mobility of tooth 35, in August 2020 the tooth was extracted under local anesthesia with subsequent suturing of a mucoperiosteal flap to avoid medication-related osteonecrosis of the jaw (MRONJ). The extracted root with its cervical resorptive lesion was sent for histological examination (Fig. 4). Healing of the wound was uneventful, and the stitches were removed after 14 days. Histological examination of the cervical area of the fractured tooth revealed inflammation in the connective tissue with the presence of CD68+ (cluster of differentiation) and CD163+ histiocytoid cells as well as CD3+ T lymphocytes and CD20+ B lymphocytes. Sporadic MPO+ (myeloperoxidase-positive) leukocytes were present. In the surrounding alveolar bone, osteoblasts and sparse osteocytes were found. The connective tissue in the areas of resorption contained fibroblasts, fibrocytes and osteoclast-like giant cells (CD68+ and CD163+) at the border between the dentin and the invading resorptive soft tissue (Figs. 5, 6). Because the other teeth showed no symptoms or mobility, the decision was made to "watch and wait". The extent of the resorptive defects was too large, and the number of teeth involved excluded conservative treatment. In September 2020, the patient lost teeth 22 and 25 due to spontaneous fractures in the cervical area and decided on definitive treatment with extraction of the teeth and restoration with removable full dentures.

Discussion and conclusions

External cervical root resorption is an aggressive pathologic process that may lead to loss of teeth. Multiple ECR of permanent teeth is a rare condition. The first case of idiopathic external cervical root resorption was published in 1930 by Mueller and Rony; since then, several cases have been reported in the literature [16-21]. The etiology of ECR remains incompletely characterized. Potential predisposing factors have been identified, such as dental trauma, orthodontic treatment, internal bleaching, periodontal surgery, Paget's disease of bone, genetic predisposition, cystic lesions, impacted teeth, playing wind instruments, feline viruses transmitted to humans, and osteoclastic rebound effects after cessation of denosumab [3,20,22-24]. In the present case, we did not find any of the mentioned etiological factors for ECR, and we observed progression of ECR despite ongoing antiresorptive therapy with denosumab. Currently, evidence about a possible association between ECR and the use of denosumab is lacking. Since 2013, 31 cases of ECR in patients treated with denosumab for osteoporosis have been recorded (https://www.ehealthme.com/ds/prolia/tooth-resorption/). Some reports in the literature suggest that denosumab or bisphosphonates may act protectively against the aggressive progression of ECR during treatment [20,21,25]. The present patient had been treated with denosumab for 9 years without discontinuation of therapy. According to regular monitoring of the antiresorptive therapy, her treatment for osteoporosis was effective, exhibiting decreased levels of bone turnover markers and an increase in bone mineral density. Deeb et al. reported a similar case of generalized ECR in a patient previously treated with denosumab for postmenopausal osteoporosis, occurring after its discontinuation, suggesting a potential osteoclastic rebound effect as a reason for the ECR [20].
Denosumab is a fully human monoclonal antibody with high affinity and specificity for human receptor activator of nuclear factor kappa B ligand (RANKL); it neutralizes the activity of human RANKL, thereby inhibiting osteoclast formation, function and survival [26]. Discontinuation of denosumab therapy has been associated with a significant bone turnover rebound and a rapid loss of bone mass [27]. The rebound effect after cessation of denosumab may lead to a decrease in bone mineral density and an increase in bone turnover markers to above pretreatment baseline levels [11]. Possible explanations for the "rebound effect hypothesis" are that an increased pool of osteoclast precursors that were dormant during the treatment period becomes activated after discontinuation, and/or that a high RANKL/OPG ratio ensues after denosumab is cleared from the circulation, leading to a rapid rebound in remodeling rates [28]. Because the administration of denosumab in the present case was continuous, every six months, the rebound phenomenon was not a plausible cause of the progressive ECR. A recent publication by Alyahya et al. suggests that the use of denosumab could significantly predict the risk of developing ECR [29]. Yet the factors that activate osteoclasts/odontoclasts and recruit them to root surfaces rather than bone surfaces (as noted in periodontitis) remain unknown [24]. Histological examination of ECR lesions typically reveals the presence of multinucleated osteoclasts or resorptive (clastic) cells located within resorptive lacunae [30,31]. In the present case, active osteoclast-like cells were found in the resorptive process, and aggressive progression of root resorption proceeded over the last 3 years despite ongoing antiresorptive therapy with denosumab. A similar case of multiple idiopathic external root resorptions in a female patient with osteoporosis treated with bisphosphonates was published in 2005 [21]. In that case, ECR was diagnosed before the administration of antiresorptive therapy, no multinucleated osteoclasts were observed on the resorbed dentin, and the resorption did not advance over 6 years of treatment with bisphosphonates, suggesting that bisphosphonates may prevent the progression of root resorption [21]. Because denosumab also acts as an inhibitor of osteoclast formation and activation, a similar prognosis would have been expected in the present case. However, in the present case, ECR started and progressed after more than 6 years of antiresorptive therapy with denosumab. This finding suggests that a mechanism of inducing osteoclastogenesis or activating osteoclastic resorption other than RANK/RANKL signaling must be involved. Osteoclasts and odontoclasts are morphologically and functionally similar multinucleated cells of hematopoietic origin responsible for the resorption of bone or dental hard tissue [32]. RANK and its ligand RANKL have been localized in odontoblasts, pulp fibroblasts, periodontal ligament fibroblasts, and odontoclasts [33]. Osteoclastogenesis is modulated by osteoprotegerin (OPG), a member of the TNF receptor superfamily that inhibits osteoclastogenesis by preventing RANKL from binding to its receptor RANK at the osteoclast membrane. The RANKL/RANK/OPG system is a key mediator of osteoclastogenesis [34]. OPG, RANKL and RANK have also been identified in odontoclasts activated during the resorption of deciduous teeth [13,34].
Recent studies have shown that RANKL/RANK signaling also plays a role in various physiological processes within the immune system [6,34]. In this case report, we presented an example of multiple ECR in a patient with pre-existing advanced periodontitis. Infection may not be a prerequisite for the initiation of ECR [15]. However, it may lead to damage of the protective cementum layer in the area of the cemento-enamel junction, which is one of the known etiological factors for the development of ECR [24]. Periodontitis is known to be associated with elevated levels of pro-inflammatory cytokines such as IL-1, IL-6 and TNF-α. Increased levels of the same inflammatory cytokines have been detected in the gingival crevicular fluid during root resorption [35]. An inflammatory response is indispensable for osteoclastogenesis. IL-1β, IL-6 and TNF-α are recognized as key factors contributing to the upregulation of RANKL expression in studies on both root resorption and periodontitis. Given that the initiation of osteoclastogenesis in root resorption and in periodontitis shares a similar mechanism, it is reasonable to infer that the resorptive tissues in ECR might derive from periodontal tissues [15]. What is interesting in this case report is the coincidence of ongoing RANKL-blocking antiresorptive treatment and massive resorption of all tooth roots. Although RANKL-induced osteoclast formation is considered the major pathway, reports in the literature suggest that osteoclasts can also differentiate independently of RANKL [14,36-42]. In a sufficiently inflamed environment, other cytokines may compensate to form osteoclast-like cells independently of RANK. O'Brien et al. suggest that TNF/IL-6 can drive RANK-independent osteoclast formation in vivo and in vitro [38]. Feng et al. suggest that several other humoral factors and pro-inflammatory cytokines, such as IL-1, IL-6, TNF-α, TGF-β or lipopolysaccharide, can substitute for RANKL to induce osteoclast formation [42]. TNF-α has a fundamental role in osteoclastogenesis and may stimulate osteoclast differentiation in the presence of M-CSF independently of the RANK/RANKL system [40,41]. Macrophage colony-stimulating factor (M-CSF) is an essential factor involved in the proliferation and differentiation of osteoclasts from their progenitors, and histological studies show that local trauma to the bone or periodontal ligament increases the concentration of M-CSF [7-9,43]. This may explain the progressive external root resorption by activated odontoclasts under conditions of suppressed RANK/RANKL signaling during denosumab treatment and advanced periodontitis. Further research is needed to better understand the process of external root resorption and the factors influencing it under conditions of suppressed RANK/RANKL signaling. This case report demonstrates aggressive, multiple idiopathic external cervical root resorption that started and advanced during ongoing antiresorptive therapy with the human monoclonal RANKL-blocking antibody denosumab, without discontinuation of therapy. This finding suggests that treatment with denosumab may be associated with severe and aggressive odontoclastic resorption of multiple dental roots despite an adequate inhibitory effect on osteoclasts in the treatment of osteoporosis. RANKL-independent pathways of clastic cell formation are likely to be involved in this pathological process.

Acknowledgements

Not applicable.
Statement of clinical relevance

Treatment with denosumab may be associated with severe and aggressive odontoclastic resorption of multiple dental roots despite an adequate inhibitory effect on osteoclasts in the treatment of osteoporosis. RANKL-independent pathways of clastic cell formation are likely to be involved in this pathological process. This case report has not been published elsewhere in any format, and it is not under consideration for publication elsewhere. This case report is approved by all authors and, if accepted, will not be published elsewhere in the same form, in English or in any other language, including electronically, without the written consent of the copyright holder.

Author contributions

KM analyzed the patient data and performed the dentoalveolar surgery. PV analyzed and interpreted the patient data regarding the osteoporosis treatment. KA performed the histological examination of the specimens and contributed to writing the manuscript. DS was a consultant and contributor in writing the manuscript. MJ drafted and revised the manuscript and performed the translation into English. IM performed the surgery and contributed to writing the manuscript. TS was the major contributor in writing the manuscript. All authors read and approved the final manuscript.
How Has COVID-19 Affected Our Orthopedic Implant Industry Partners? Implications for the Surgeon-Industry Relationship in 2020 and Beyond

Background: The COVID-19 pandemic has had far-reaching societal and financial consequences. The purpose of this study was to evaluate how COVID-19 has affected AAHKS industry partners and the surgeon-industry relationship, emphasizing education, resource allocation, and strategic direction for the second half of 2020.

Methods: AAHKS industry partners were contacted to participate in a blinded survey and optional interview with the AAHKS Industry Relations Committee. Based on the results, a group of AAHKS member surgeons with disparate practice types were asked to postulate on how the COVID-19 pandemic has affected and will affect their practice and relationship with Industry.

Results: AAHKS industry partner responses indicated decreased resource allocation for regional, "other national," and AAHKS annual meetings (67%, 55%, and 30%, respectively). Web-based educational content was expected to increase in 2020 and will likely remain a point of emphasis in 2021 (100% and 70% of responders). For Q3/Q4 2020, significant emphasis was placed on site of service/outpatient TJA and COVID-19-related safety measures (70% and 90% of responders), as well as increased availability of instrumentation and implants (40% and 60%, respectively).

Conclusion: The COVID-19 pandemic has altered the orthopedic landscape for the foreseeable future. Survey responses by AAHKS industry partners demonstrate a continued commitment to surgeon education with an increasing shift to a web-based platform. Increased resource allocation for outpatient TJA and COVID-19-related safety measures was significant. Articulating optimal mechanisms to aid industry in supporting surgeons with different practice models to meet demand during the second half of fiscal year 2020 will be critical.

Introduction

The orthopaedic practice of AAHKS members, like virtually every other industry in the United States, has been significantly burdened by the negative economic effects of the worldwide COVID-19 pandemic. Elective cases have been essentially eliminated. Fracture cases have slowed, and clinic visits have dropped by as much as 40-90 percent due to social distancing, all of which have contributed to significant strain on orthopaedic practices. The cessation of elective total hip and knee arthroplasty has also had a dramatic effect on AAHKS Industry Partners. Severe dips in 1st and 2nd quarter implant sales revenue have forced companies to take proactive steps to conserve capital and maintain liquidity during these uncertain times. The significant financial resources AAHKS Industry Partners have historically committed to support national and regional orthopaedic meetings, surgeon education, clinical research, surgical/vendor OR support, and technological innovation have required redistribution to varying degrees to maintain fiscal solvency. Forecasts for when US elective procedures may resume suggest that inpatient procedures could potentially restart in some states by mid-May, and in most if not all states by the end of June 2020 [1]. Hospitals, orthopaedic groups, and industry must create strategies to address the anticipated increases in demand expected upon a return to 'normalcy'.
In the near term, it will be critical to meet volumetric demands, support the orthopaedic workforce, and create more efficient business relationships to maintain continuity in a rapidly changing milieu. The orthopaedic practice models of AAHKS members differ significantly and will be affected to varying extents moving forward; the ability to increase production during the 2nd half of the fiscal year for the private practice, academic, and hospital-employed surgeon must be considered both separately and collectively, and needs in terms of optimal industry support will be heterogeneous. Due to the rapidly changing environment and the risk of a resurgent COVID-19 virus, long-term strategies must also be articulated to safely prevent another catastrophic full-scale shutdown of elective cases.

Methods

Executive leadership of AAHKS Industry Partners with a significant footprint in elective hip and knee arthroplasty were contacted to participate in a brief blinded survey (Table 1) and an optional phone or Zoom interview with a member of the AAHKS Industry Relations Committee (IRC). The survey consisted of questions covering 5 topics to gauge changes in industry support, resource allocation and workforce, including: resources designated for orthopaedic meetings, resources designated for surgeon education, resources designated for strategic points of emphasis, effect on the industry workforce, and measures taken to meet increased 3rd and 4th quarter demand. Results were tabulated and distributed to a cohort of AAHKS Member Surgeons with disparate practice models, including private practice, 'academic' practice, and hospital-employed practice. Surgeons were asked to consider these results and postulate on how the COVID-19 pandemic has affected and will affect their specific practice type and relationship with Industry in 2020 and beyond.

Results

Executive leadership of AAHKS Industry Partners with a significant footprint in primary elective total hip and total knee arthroplasty were contacted to participate in the survey. Ten responses were received, while 2 declined to participate (83% response rate). Full survey results are listed in Figure 1.

Resources Designated for Orthopaedic Meeting Participation

Industry partners noted an expected shift in 2020 toward decreased resources designated for the AAHKS Annual Meeting (30%). Uncertainty with respect to orthopaedic meeting resource allocation persisted among a small but significant percentage of responders for 2021 (20-33%), while most responders anticipated a return to standard levels of support (50-78%). Of the responders, 30% anticipated increased resource allocation for the AAHKS Annual Meeting in 2021.

Resources Designated for Surgeon Education

Twenty percent of responding companies anticipated decreased budgeting for resident, fellow, and surgeon-specific educational activities in 2020; this was tempered by an across-the-board increase in web-based recorded and live video educational content (100%). Based on survey responses, industry support with educational resources in 2021 can be expected to rebound for resident (40%), fellow (40%) and surgeon (50%) education. Support for web-based live and recorded content was also expected to increase again in 2021 (70%).
Resources Designated for Strategic Points of Emphasis

The most significant trends identified for resources designated for strategic points of emphasis included a decrease in marketing allocation (60% in 2020) and increased allocation for site of service/outpatient joint replacement and COVID-19-related operative safety measures (70% and 90% of responders in 2020; 70% and 50% in 2021).

Measures Taken to Meet Increased 3rd and 4th Quarter Demand

In anticipation of a Q3/Q4 2020 surge in elective joint arthroplasty, 90% of responding Industry Partners anticipated no significant decrease in their sales workforce. Resources designated to increase available instrumentation and to support implant production were expected to increase (40% and 50%, respectively) or remain unchanged (60% and 40%, respectively). No Industry Partner anticipated decreases.

Discussion

Industry support for the AAHKS membership has played a critical role in driving the success of orthopaedics in general, and of elective hip and knee arthroplasty in particular. Resources directed toward the training of residents, fellows and surgeons have improved educational offerings. Industry vendor support in the operating room and through implant development has enhanced operative efficiency and improved patient outcomes. The surgeon-industry relationship has become symbiotic to the point where both sides contribute to the financial success and viability of a healthcare system that provides access to quality orthopedic care for the American public. An improved understanding of how the COVID-19 pandemic has affected AAHKS Industry Partners, together with enhanced communication, will help optimize our combined response as a profession to the uncharted waters ahead. The survey and interview responses provided by AAHKS Industry Partners varied considerably; however, strong common themes of continued commitment to education, as well as the expectation of elevated levels of vendor/surgical support in the 2nd half of 2020, emerged.

Indirect Education Support: National and Regional Meetings

Industry Partners expressed trepidation in their support of regional meetings in 2020 (67% decrease) and a level of uncertainty with regard to the 2020 AAHKS Annual Meeting (30% uncertain, 30% decreased resource allocation). Continued commitment to supporting the AAHKS Annual Meeting in 2021, however, was strong (50% no change, 30% increase). This critical continued resource allocation moving forward indirectly supports the educational mission of our largest arthroplasty society. The extent to which COVID-19 and the associated societal scars linger may very well affect the scope, scale, attendance, and importance of all in-person regional and national orthopaedic meetings for the foreseeable future.

Direct Education Support

In terms of direct support for surgeon education, a small downtrend of support at each level of training was identified for 2020; however, a unanimous increase in resources directed toward web-based educational platforms was seen, which can potentially benefit all learners. This included 100% of responders directing more resources toward web-based recorded and live video educational content in 2020.
As physicians are forced to embrace and become comfortable with telemedicine secondary to COVID-19-induced social distancing, remote engagement and education supported by our Industry Partners may become a mainstay for reaching large numbers of individuals, without geographic or temporal limitations, in a cost- and time-efficient manner.

Surgical Support During 3rd and 4th Quarter 2020

A common theme across survey responses and phone interviews was a significant commitment to enhanced surgeon support during the anticipated uptick in elective arthroplasty procedures in the second half of the fiscal year. While AAHKS Industry Partners nearly universally articulated an assurance to partner with surgeons to meet increased need, there is significant uncertainty about how to distribute limited resources and optimally support disparate surgeon practice models. Defining the needs of AAHKS members with different practice models and articulating an appropriate response will allow our Industry Partners to help smooth the transition back to the operating room for all arthroplasty surgeons. Industry Partners are working to prepare for the expected surge in demand and appear open to guidance. This is a potential action item for AAHKS and national leadership as we look to the future in 2020 and beyond. Of the Industry responders, 70% expected to allocate more resources to 'site of service' support, such as outpatient joint arthroplasty, in both 2020 and 2021. COVID-19-related operative safety measures were also highly emphasized (90% and 50% in 2020 and 2021, respectively). As this first wave of COVID-19 subsides, the expected surge of currently backlogged elective cases will likely maximally stress the elective capabilities of certain centers; the needs will be varied and inherently dependent on surgeon practice model. Both sides of the aisle of our profession are currently bracing for the possibility of extended weekday schedules and/or weekend elective schedules where none existed before. There will be increased emphasis on the efficiency of the non-surgical aspects of cases: pre-op preparation, anesthesia time, turnover times in the OR, etc. Surgical efficiency will also be stressed, yet will hopefully maintain its rightful place behind quality and safety. Our relationship with Industry and our implant vendors can play a critical role during this time. On the vendor side, there could be a significantly greater tug-of-war for implant reps to be present in multiple hospitals or surgery centers at once, even late into the evenings or on weekends. More instrument sets may be needed to do more of the same case per day, or on consecutive days. As such, central sterilization efforts will be heavily put to the test as well. Refills of implants will be demanded more quickly, and more individual units of the same size will be expected to be available at once to work through the backlog. We expect that the hip and knee implant manufacturers will do their best to provide the supply required, as their sales will have been reduced for 3 months or more.

Private Practice Perspective

Amid the current shutdown of elective arthroplasty, private orthopedic groups, like any independent service provider, have been put at significant risk.
That said, with the assistance of Industry, private practice groups supporting Orthopaedic Hospitals and physician-owned ambulatory surgery centers (ASCs) may have enhanced logistical mobility to ramp up elective case load at a greater rate than hospital-employed or academic orthopaedic surgeons. The ultimate sustainability of private orthopedic practices must come through a resumption of normal business practices. In the meantime, favorable financing structures for capital expenditures and volume-based economic incentives are potential opportunities for Industry Partners to help 'weather the storm'.

There is also risk moving forward that further interruptions in business could occur in the fourth quarter. Some epidemiologists and infectious disease experts have predicted that COVID-19 infections will drop precipitously over the summer, only to come roaring back in the late fall and winter [2]. Hospitals will inevitably shoulder the burden of treating infected patients, as they do currently. Sites of care including outpatient surgery centers and physician-owned hospitals will likely be the 'clean' hospitals moving forward. These sites, which will see less viral burden, can serve a public role without placing additional stress on hospital systems. In anticipation, Industry could play a large role in helping orthopaedic surgeons maintain care pathways that ultimately reduce patient exposure to the COVID-19 virus.

Academic Practice Perspective
The academic arthroplasty surgeon faces a different set of potential pitfalls. The 'red tape' commonly associated with large tertiary referral centers may preclude the ability to increase weekday efforts or add weekend shifts. A push for efficiency may also have an extended impact on teaching at academic institutions, where catching up on wait-lists and making up for lost revenue may take priority over technical instruction.

Vendor support may be limited due to more stringent access restrictions in tertiary hospitals, where COVID-19 patients are still presenting even after the initial surge has abated. Furthermore, at many institutions there is no contract with one or two manufacturers, but rather a capitation model with a multitude of vendors used for primary and revision joint surgery. It is possible that, due to the financial blow of COVID-19, the model currently in place will be urged to change in order to produce more savings on implant costs. Finally, confidence that there is no risk of coronavirus transmission will need to be instilled in patients proceeding to elective surgery, which may be more difficult at a tertiary care center. Whether this involves increased testing, antibody testing or advanced PPE usage is uncertain.

Hospital Employed Practice Perspective
Much like the academic surgeon, the hospital-employed orthopaedic surgeon may, to a degree, have had a buffer against the immediate economic ramifications of the COVID-19 fallout. This group, however, will see highly variable Hospital/Administrative responses in the coming months. Industry may be able to play a role in partnering with hospitals to strengthen a previously existing relationship or forge a new partnership.
While the hospital-employed practice is unlikely to be inundated with a surge of semi-urgent revision or infection cases compared to a tertiary referral center, control over when and how to increase production is variable and administration-dependent.

As surgical moratoriums are lifted, it is likely that hospital systems will place an emphasis on the rapid resumption of highly profitable elective cases, such as total joint arthroplasty, to offset fiscal losses imposed by the COVID-19 shutdown [3]. Favorable financing structures for capital expenditures and volume-based economic incentives are potential opportunities for Industry Partners to work with hospital systems to support the hospital-employed orthopaedic surgeon.

Conclusions:
In summary, the majority of AAHKS Industry Partner responses indicated a forward-thinking mindset in the face of COVID-19 induced uncertainty. In the short term, an emphasis on educational offerings has not been lost, but has transitioned to a less hands-on, more technologically driven, 'social-distancing' friendly medium. This may very well become a significant aspect of our profession's 'new normal'. Support for the 'hands-on' training that is a stalwart of total joint arthroplasty education will likely rebound, but may take time and patience.

It is apparent that arthroplasty surgeons and Industry will have to maximize their efforts to work together safely, cohesively, respectfully and efficiently more than ever before, to weather the tidal wave of surgeries that is sure to come, forge a stronger working relationship and fortify our profession. Early and open communication will be paramount to smoothly adapting to changes in volume and the disparate needs of surgeons and practices moving forward.

Orthopaedic surgeons, hospitals, and most importantly patients across the United States have benefited greatly from symbiotic partnerships with manufacturers of orthopaedic products and implants. Although the current pandemic threatens massive upheaval across the industry, it also provides an opportunity to strengthen the Surgeon-Industry partnership in 2020 and beyond, enhancing our ability to achieve our ultimate goal: the assurance of access to quality orthopedic care for the American public.
2020-04-29T05:06:23.876Z
2020-04-28T00:00:00.000
{ "year": 2020, "sha1": "5eebe3ce6b6eef5cf5095841a5e7999e0531d99e", "oa_license": null, "oa_url": "http://www.arthroplastyjournal.org/article/S0883540320304514/pdf", "oa_status": "BRONZE", "pdf_src": "ElsevierCorona", "pdf_hash": "a71f431f1a0d0b43e9b5e2036799f27a6136a752", "s2fieldsofstudy": [ "Medicine", "Business" ], "extfieldsofstudy": [ "Medicine" ] }
231812330
pes2o/s2orc
v3-fos-license
Policy-makers’ views on translating burden of disease estimates in health policies: bridging the gap through data visualization

Background: Knowledge Translation (KT) and data visualization play a vital role in the dissemination of data and information to improve healthcare systems. A better understanding of KT and its utility requires examining its processes, and how these interact with available data tools and platforms and various users. In this context, the aim of this paper is to understand how relevant users interact with data visualization tools, in particular Global Burden of Disease (GBD) visualizations, while also examining KT processes related to data visualization.

Methods: A qualitative case-study consisting of semi-structured interviews with eight policy officers. Interviewees were selected by the Institute for Health Metrics and Evaluation (IHME) from three countries: Canada, Kenya and New Zealand. Data were analyzed through framework coding, using qualitative analysis software.

Results: Policy officers’ responses indicated that data can prompt action by engaging users, and effective delivery and translation of data was enhanced by data visualization tools. GBD was considered valuable for use in policy (e.g., planning and prioritizing health policy; facilitating accountability; and tracking and monitoring progress and trends over time and between countries). Using GBD and data visualization tools, participants quickly and easily accessed key information and turned it into actionable messages; engaging visuals captured decision-makers’ attention while providing information in a digestible, time-saving manner. However, participants emphasized an overall disconnect between research and public health. Functionality and technical issues (e.g., absence of tool guides and tool complexity), as well as lacking buy-in and awareness of certain tools among those less familiar with GBD, were named as major barriers. In order to address this “know-do” gap, user-friendly knowledge translation tools were stated as crucial, as was the importance of collaboration and of leveraging different insights from data generators and users to inform health policy.

Conclusions: Policy officers aware of KT are willing to utilize data visualization tools, as they value them as an engaging and important mechanism to foster the use of GBD data in policy-making. To further facilitate policy officers’ efforts to effectively use GBD data in policy and practice, further research is required into how users perceive and interact with data visualization and other KT tools.

Supplementary Information: The online version contains supplementary material available at 10.1186/s13690-021-00537-z.

Background
Timely, accurate and high-quality health information is one of the key inputs for public health action, health systems strengthening and progress towards the achievement of important health goals, such as universal health coverage and the health-related Sustainable Development Goals [1]. Effective public health action and implementation of useful health policy is integral to achieving improved health. A range of tools and mechanisms are used to promote the utilization of health information in policy development and implementation. They can be summarized under the name of knowledge translation (KT) approaches [2]. The World Health Organization (WHO) defines KT as "the exchange, synthesis, and effective communication of reliable and relevant research results" [3].
The focus of KT is on promoting interaction among the producers and users of research, removing the barriers to research use, and tailoring information to different target audiences so that effective interventions are used more widely [4]. WHO identifies four key KT approaches that, either singularly or combined, illustrate the link between health information and action [5,6]: Push approaches, where information producers ensure research processes and findings are more digestible for decision-makers by effectively translating technical information and jargon into non-technical information; Pull approaches, where information users, such as policymakers, request health information based on their needs; Exchange approaches, emphasizing linkages between information producers and users, which can be facilitated by knowledge brokers (KBs), to work together at specific points or during the entire information generation cycle; and finally, Integrated approaches, where a KT platform is established in an organization or broader health system, allowing the promotion of early and sustained engagement between information producers and users and institutionally linking research to action through push, pull or exchange efforts [6]. These approaches have been used by information producers, policymakers and others in the health sector in countries across the world, engaging many different types of stakeholders in diverse areas of policy-making [5].

In the context of KT, health information includes the summary and analysis of health status and problems in populations over time, as well as quantification of associations between outcomes and risks; it also assesses the effectiveness of public health interventions [5]. Data consist of the raw facts of a study or work, in qualitative or quantitative form, while evidence is defined as 'findings from research and other knowledge that serves as useful basis for decision-making in public health and health care' [5]. Explicit knowledge consists of these three: health information, data and evidence [5]. Health data are usually presented in different formats and with different degrees of aggregation, which can make understanding them challenging; indeed, this was one of the issues participants identified as a barrier to use in decision-making processes.

Data visualization, understood as 'the representation and presentation of data to facilitate understanding', is considered an important mechanism for increasing the usefulness of health information [7,8]. Data visualization can be considered a 'push' approach to KT [9], as it seeks to present data in more accessible ways, such as graphs, charts, and maps with interactive features [10], but it also facilitates interaction between multisectoral stakeholders from different areas by connecting them in the production of such tools, resulting in mutually beneficial collaboration [11]. Given this, data visualization tools can play an important role in KT processes. The usefulness of data visualizations has been investigated and critically assessed in computer science for some time [12], but so far there is limited understanding of how well existing visualization tools help users to master the wealth of information represented by complex data. This is particularly relevant for health sector and health policy systems decision-making. Providing a base for further assessments, this paper explores how a number of professional experts in a few countries analyze and use Global Burden of Disease (GBD) data using data visualization platforms.
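For a sense of what such a 'push' visualization involves at its simplest, the sketch below renders a burden-of-disease style chart from a small table of estimates. It is purely illustrative: the cause names and DALY figures are hypothetical placeholders rather than actual GBD estimates, and GBD Compare itself is an interactive web platform, not a script.

```python
# A minimal sketch of a "push"-style burden-of-disease visualization.
# The cause names and DALY figures below are hypothetical placeholders,
# not actual GBD estimates.
import matplotlib.pyplot as plt

dalys_per_100k = {              # hypothetical DALYs per 100,000 population
    "Ischemic heart disease": 2400,
    "Lower respiratory infections": 1800,
    "Road injuries": 950,
    "Diabetes": 900,
    "Malaria": 700,
}

causes = list(dalys_per_100k)
values = [dalys_per_100k[c] for c in causes]

fig, ax = plt.subplots(figsize=(7, 3))
ax.barh(causes, values, color="#4C72B0")
ax.invert_yaxis()               # largest burden on top
ax.set_xlabel("DALYs per 100,000 (hypothetical)")
ax.set_title("Top causes of disease burden")
fig.tight_layout()
fig.savefig("burden_chart.png")  # a static export a policy officer could share
```

The point of such a chart, as the KT framing above suggests, is that the producer does the translation work: a ranked, labeled picture can be handed to a decision-maker without any need to parse the underlying tables.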
This study aims to contribute to a more robust understanding of the use of data visualization to support KT processes. It uses a case-study of policy officers' engagement with one data visualization platform, GBD Compare. GBD Compare is a tool developed as part of the GBD study, led by the Institute for Health Metrics and Evaluation (IHME) in collaboration with thousands of researchers worldwide. GBD quantifies global health loss from diseases, injuries, and risk factors across age groups, sexes, countries, regions, and time, with estimates currently produced for 195 countries, by age and sex, from 1990 to the present. GBD Compare is primarily a push approach aimed at making research processes and findings more digestible for decision-makers.

The GBD study provides a rich context for exploring processes of knowledge translation, due both to the vast scope of the data available and its explicit orientation towards helping policymakers understand their countries' health challenges. In order to support decision-making, GBD Compare has been developed as a KT mechanism for this audience, with the aim of "allow[ing] decision-makers to compare the effects of different diseases, such as malaria versus cancer, and then use that information at home" [13]. The GBD estimates are intended to be used for policy, and so KT is an integral part of this effort; if people are not able to access, interact with, and understand the estimates, it is difficult to incorporate them into decision-making. GBD Compare is intended to help users explore patterns and trends, make comparisons across and between different axes such as country, demographic groups, and diseases, and create and capture relevant graphics such as maps, graphs, and tables. Given its range and intended orientation towards supporting policymaking, GBD Compare provides a useful case study for exploring the following questions:

1. How do policy officers perceive, interpret and utilize web-based data visualization tools?
2. How can these tools and platforms be optimized as catalysts for KT processes?

Study design, setting and methods
This was a qualitative case-study using semi-structured interviews with policy officers (see Additional file 1 for the interview guide). A case-study design was selected due to the descriptive and exploratory nature of the inquiry (how knowledge translation tools are perceived, interpreted, and used by policy officers), and because understanding the specific context in which these processes occur is important to answering the primary research questions [14]. Given this study design, rather than seeking to draw general inference about larger populations or other contexts, this work seeks instead to examine and explicate processes related to the use of data visualization for knowledge translation, which may have relevance, explanatory value, or provide transferrable lessons for other settings. Semi-structured interviews allowed data collection around themes pertinent to the research topic while also giving respondents the space to discuss issues or areas that they deemed important [15]. The purposive selection was facilitated by IHME using a predetermined list of potential respondents with expertise and characteristics relevant to the study (policy officers who advise on public policy in government settings in a technical capacity; individuals who have prior familiarity with the GBD and its data visualization platform, GBD Compare; and English-speaking). The final sample size was determined by reaching saturation [16].
Study sample
As described above, the study sample consisted of respondents who were a) policy officers and b) familiar with the GBD tool. In terms of their involvement in policymaking, we define policy officers as people who influence policy processes in order to improve and shape implementation strategies. These advisors often act as KBs between the academic world and policymakers by providing data and evidence to governments. Working as developers and promoters of new government policy, these specialists have technical expertise in a specific field and advise colleagues (e.g., senior bureaucrats and government ministers) on policy options. Among other functions, they help evaluate and monitor policies, create dialogues, and strategize.

A total of eight persons, out of 10 invitees, agreed to participate in a semi-structured interview. Seven interviews were conducted over a period of 3 months (with one done as a joint interview with two participants). Of the seven interviews, five were Skype audio calls, one a Skype video call, and one a Zoom video call. Interviews ranged from 25 to 64 minutes, with an average duration of about 50 minutes. The study participants included an epidemiologist, a research scientist and research associate, a UHC program coordinator, a Monitoring and Evaluation specialist, an NCD Division Lead, a principal advisor in epidemiology, and a principal policy analyst in strategy and policy. Seven interviewees were female and one male, with a wide range of work experience in the field of health policy and/or knowledge translation (range 3 to 15 years). All participants were highly educated individuals holding senior positions, having either master's degrees (n=7) or a PhD (n=1) in public health and medical sciences. Three currently work in health policy and were previously clinicians before transitioning into policy-focused roles.

To explore potential contextual dynamics, respondents with these characteristics were also selected from different countries with varied data landscapes and policy contexts: in this case, one low- and middle-income country (LMIC) (Kenya) and two high-income countries (HICs) (Canada and New Zealand). Each participant was an individual with expertise in a particular discipline, employed by a government agency and working in different ways to contribute to implementing health policy. The interviewer was located in Stockholm, Sweden, while interviewees were located in their respective countries.

Data management and framework analysis
All interviews were recorded after receiving participants' consent. Interviews were then transcribed and uploaded to the qualitative analysis platform Dedoose for data analysis [17]. Data analysis began using open coding as a process to 'open up' the text in order to discern ideas and meanings [18]. This consisted of close readings of interview transcripts and consideration of the multiple meanings found within them [19], generating and applying codes, and then looking across codes to discern key themes [20]. These themes were then grouped by adapting Lavis et al.'s framework for assessing country-level efforts to link research to action [6]. Their framework consists of four elements: general climate for research use, production of relevant and synthesized research, mix of activities linking research to action, and the evaluation of efforts to link research to action [6].
This framework explicates the various directions, dynamics and inter-relationships of those involved in KT and data use. Of note, while Lavis et al.'s framework was used, another theme, data use, was added to encapsulate all results.

Themes
A total of 6 themes and 16 subthemes emerged from the data (see Table 1). Participants identified different advantages and weaknesses of data visualization tools for KT, as well as discussing various aspects of knowledge translation and data visualization use. Specifically, the primary themes were: production of research, exchange efforts, efforts to facilitate user pull, push efforts, general climate, and data use (see Fig. 1).

Table 1 (excerpt of themes and subthemes recoverable from the text):
- Exchange efforts: need for multisectoral collaboration and communication between data generators and users to produce data, foster data use and inform health policy; more interlinkages and exchange needed between GBD collaborators, which include policy-makers, analysts and other data users; researchers need to explain data effectively so a wider audience can understand, in order to facilitate exchange, otherwise the meaning of information will be lost; bringing people together (e.g., at conferences or trainings) valued as helpful and important; gaps seen between the academic and public health realms, which can be bridged by collaboration; collaborations between departments, such as the Ministry of Health and Ministry of Education, seen as crucial to making the best policy; barriers include lacking time and resources to facilitate these partnerships.
- Production of research (priority-setting processes), subtheme 11, transparency and consistency of data sources needs to be maintained: users highly value and appreciate transparency of data sources, as the data is more trustworthy; discrepancies noted between GBD and local data and within GBD data.
- Insufficient time, awareness and/or resources contribute to less effective KT; KT should be prioritized among the many factors and competing needs involved in decision-making, as it is more likely to result in action; funding contingent on the quality of researchers' KT plans incentivizes prioritization of KT; KT valued as an important process leading to informed decisions and stronger policy recommendations; KT important in facilitating evidence accessibility and understandability; KT should be considered throughout data generation to provide better information.
- Subtheme 13, overall lacking buy-in and awareness of GBD and data visualization tools from country leadership can lead to minimal utilization: GBD and/or data visualization not known, incorporated or used by policy-makers in some settings, as certain country leaders are resistant to using outside data and skeptical of GBD estimates, particularly in lower-income countries; leadership attitudes or unawareness negatively affect user interest.
- Data use, subtheme 14, GBD data used to forecast, plan and prioritize health policy: GBD data used to predict future needs and prioritize resources accordingly; GBD data perceived as an important part of informing decisions, policymakers and the public, and ultimately eliciting change; data used to monitor and track progress over time and justify resource allocation and funding.
- Facilitate and improve accountability; track and monitor progress and trends over time and between countries: comparison between countries illuminates gaps and thus triggers action; GBD results inform and discern patterns within and among countries.

Fig. 1: Themes.

Exchange efforts: build relationships among researchers and research users who have shared interests
Participants from both LMIC and HICs highlighted the need to collaborate and consolidate efforts between data generators and users to produce data, foster data use and inform health policy. Participants emphasized the importance of collaboration in utilizing and incorporating different insights, perspectives, and experiences. They described the interlinkages and exchange between all involved parties as crucial in informing health policy: participants highly valued conferences, trainings or other events that brought people together so they could meet, discuss and exchange ideas and information with one another. Given the explicit intent of GBD data to be used as a decision-making tool and to inform health policy, respondents noted the imperative of a collaborative approach to using tools, engaging in dialogue with colleagues rather than simply extracting and presenting data to them. As one respondent, a policy analyst in a high-income setting, explained about using GBD Compare with a colleague: "Both of us come from different perspectives … so I'm mainly using the visualization tools and also using the methodology behind it to create epidemiological insight … and then provide it and discuss it with [colleague], so that he can look at it from the policy perspective to see what has been done in the field before."

Other respondents noted the importance of constant communication between different loci of decision-making when engaging with health data like that from the GBD, which has implications for many sectors; for example, a participant from an LMIC highlighted the need to make connections between different areas, particularly between and amongst various Ministries, such as Transport and Finance, when thinking about audiences for the information provided in the tools. Similarly, a research scientist and an epidemiologist from HICs stressed how delivery of data needs to be done thoughtfully and, importantly, to the right people, with one explaining: "you can't just show them the evidence like here's the reality … it's really a process of change and so it's figuring out who your stakeholder, where your stakeholders or your decisionmakers are right now". Applying the tools therefore helps in the process of fostering data-driven decision-making, but only if relevant entities and actors are identified and brought in.

With reference to collaboration, participants noted a key challenge that remains in terms of a broader disconnect between research and public health. Respondents noted that without effective communication and collaboration between relevant stakeholders, health information can remain under- or unused.
An epidemiologist from an LMIC put it succinctly: "people don't talk to each other, academicians and then the ward and the public health professionals and so there is a gap there that's a kind of, the bridge, the connection is not there". While the GBD study seeks to use a collaborative process in the production of the data that ends up in GBD Compare, this finding suggests that participants remain acutely aware of the underlying challenges of collaboration and cross-stakeholder communication when it comes to facilitating effective KT processes.

Facilitating user pull: technical considerations
Participants from LMIC countries noted certain challenges in using the tool, GBD Compare. In particular, they noted the complexity and intricacies of the tool, its scope and interface requiring regular use in order to understand and maximize utilization, and the way data is presented in graphics sometimes proving difficult to interpret. As one participant noted, "if I'm seated at my desk and I have no training and I want this information, I may not be able to make sense of that information". A research scientist from a HIC observed that some did not utilize the tools if they did not feel confident using them or familiar with how to use them. More guidance and training were highlighted as necessary for increasing utilization of the tool, as without clear guidance, insufficient user understanding of GBD Compare can be quite limiting.
In addition, participants from all three countries suggested several improvements that could be made to tool functionality, ranging from technical issues, such as ensuring compatibility with certain browsers, to the way in which information is displayed.

Facilitating user pull: research production
While respondents noted challenges in using the data visualizations, they all identified deeper structural issues at the local and country level hindering GBD uptake by various policy officers and country leaders, such as lacking buy-in, or awareness, of GBD. Those from LMIC settings observed a need to increase engagement around GBD among policy and other audiences in country, suggesting that buy-in would increase once the benefits of GBD could be seen and the methodology was better understood. Participants experienced challenges garnering interest in using the tools, and commented on a certain skepticism and questioning of GBD data and its quality, particularly in the context of existing and competing information sources. One participant from an LMIC explained that "the country has invested a lot in our routine data system that anytime you could bring this issue and not and mention that the routine data has not been used, it could immediately face a lot of resistance." Another noted, "sometimes people feel that evidence that is not generated in-country is not good enough."

The same concerns arose from those in high-income settings. For instance, skepticism surrounding the estimates and how they were calculated raised some doubts. A research scientist in a HIC noted these concerns: "one barrier we have with using it internally is sometimes our more senior managers are concerned that because these are modeled estimates, they're not going to match what comes out of our national systems". Moreover, while participants highly valued and appreciated the transparency of data sources in the GBD Compare tool, and found that they themselves and their colleagues were more trusting of data if there was clarity about its source, several users discussed concerns about discrepancies found within GBD data, which is re-estimated each cycle due to new methods and underlying data sources: "the little issue that at times we have is the fact that each and every time every year when the new GBD results are released, you may find varying … indicators". Confusion regarding modeling methodology and estimates led to distrust among users, who were therefore less willing and likely to use the data. Despite these concerns, participants had hope that visualizations could help gain more trust once GBD methods were shared and explained. In one HIC, the presence of a GBD oversight team was found to be of immense help.
The oversight team consisted of deputy directors-general who provide technical and/or policy assistance to both policymakers and researchers on GBD Compare. This led to greater, more effective tool use, and its value was emphasized highly by the participants who worked with this team. However, more guidance from IHME was requested around explanations for global patterns and recommendations for potential policies that could be implemented. Participants hoped that examples of existing, successful policy interventions for certain disease trends, e.g., increasing obesity rates, could be shared by IHME.

Push efforts: identify actionable messages
Data visualization was used to better understand data by quickly, uniquely and clearly conveying information. Visualizations were noted as a valuable method to prompt action and support policy recommendations to stakeholders by grabbing their attention and providing illustrative data. As one policy analyst from a HIC shared: "decisions are not rational things, are they? They're about hearts and about minds and so visualizations are something that helps you to bridge the heart and the mind a little bit … images can stay with you in a way that words on a page won't, so I think there's something about the translation that's hard to encapsulate but it helps to tell the story."

Policy officers indicated that data can prompt action by engaging users, if compiled and presented in a succinct, colorful way with the help of data visualizations and related tools. Instead of handing users a bland 60-page report, policy officers could present the data in a more productive manner. One respondent from a HIC described the power of data visualizations, in particular the GBD tree map visualization: "that one is really useful for, you know, showing the overall burden and then making a comparison to resources allocated. Whenever I present that slide, I get silence in the room for 30 seconds and then discussion". Policy officers also noted that being able to present data on risk factors and their contribution to disease was useful for capturing audience attention, challenging preconceptions of the current situation and therefore spurring appropriate, effective action. Making the evidence more engaging was perceived as critical to incite and inspire change in policymaking, with one HIC policy analyst noting that "it's quite a powerful way of describing the interplay of risk factors and poor outcomes for health, and just yesterday we were using this analysis for some of our senior leaders in our section to look at health loss and the factors that contribute to it. So, I'd say the visualization tools are a very important part of bringing the data to life."

Additionally, user-friendliness emerged as an important aspect of data visualization and its powerful, unique ability to convey information. Participants from an LMIC discussed "scorecard" approaches to presenting information on GBD Compare as attention-drawing tools: "it's color coded so you have like red, orange and green depending on the progress that has been made … it can be printed, but most of the time, you can even get it on your mobile phone". In the words of another participant, at the click of a button you can have information and indicators for an entire country. Top management, or other audiences who need to know the important facts, do not need to spend a lot of time trying to access this kind of data: "they just need it at a glance, they are so busy".
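The color-coded scorecard idea described by participants can be sketched in a few lines. This is only an illustration: the indicator names, values and traffic-light thresholds below are hypothetical, not those of any actual GBD or national scorecard.

```python
# A minimal sketch of the color-coded "scorecard" approach described above:
# indicators are flagged red/orange/green by progress toward a target.
# Indicator names, values and thresholds are hypothetical placeholders.

def scorecard_color(progress: float) -> str:
    """Map progress toward a target (0.0-1.0) to a traffic-light color."""
    if progress >= 0.9:
        return "green"
    if progress >= 0.6:
        return "orange"
    return "red"

indicators = {  # hypothetical indicators: fraction of target met
    "Skilled birth attendance": 0.95,
    "Under-5 immunization": 0.72,
    "Hypertension control": 0.41,
}

for name, progress in indicators.items():
    print(f"{name:28s} {progress:5.0%}  [{scorecard_color(progress)}]")
```

The design choice participants describe is exactly this compression: a fixed, glanceable encoding that a busy decision-maker can read on a printout or a phone without interpreting the underlying data.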
While, as discussed earlier, existing information systems could be seen as a barrier to buy-in for the data found in GBD Compare, it is also the case that participants have been able to show the utility of GBD and its visualization tools as a complementary resource used in conjunction with existing local systems, and using the tools to identify and fill gaps in local information or data proved very valuable. For instance, a research scientist used GBD data in addition to their existing platforms, as they do not have a system that comprehensively covers both morbidity and mortality for all causes. In an LMIC, a participant described using GBD for subnational estimates: "for example life expectancy, or top five causes for mortality … we could easily sell that even to the leaders at the subnational level". The GBD estimates are therefore used as a way to compare against in-country data to give a better overall grasp of the situation, in tandem with other systems.

General climate: country support and approach(es) towards KT
According to participants, in order to optimize KT, sufficient resources, awareness and time all need to be devoted to the process of KT and policymaking, and a lack of awareness of and engagement with GBD and/or data visualization were identified as issues to be addressed. These issues were particularly seen in the LMIC setting, where country leaders were resistant to using outside data and more skeptical of GBD estimates. Leadership attitudes set the tone for the rest of the country and trickled down into departments or divisions. In general, respondents noted that fewer resources were devoted to KT and data collection and visualization in LMIC countries, highlighting the need for "more money, more funds, more capacity … especially in my field …. there's a lot of gaps in knowledge". In high-income contexts, however, KT was much more supported. For instance, using KT was even enforced as a part of the research and funding process in these contexts. A research scientist in a HIC explained how KT is an integral part of acquiring funding and how a KT plan is mandatory and evaluated as part of the funding application process.

Data use
GBD data was used to help identify needs and prioritize resources accordingly. Participants described how GBD helps inform the future, for instance by giving direction to the country's goals. One respondent explained that "it shows you how [things have] changed and you can change over the years, to see how both the risk and the causes of diseases are changing … you can see where you're making progress and where we are not making progress and how, if that's a neglected area, we should focus on it." Policy officers found that GBD results, as presented in GBD Compare, help to discern patterns, and could therefore be used not only for monitoring, but also for planning of time, funding and resource allocation: "We have the indicators which we use to monitor the progress of the project, so each and every time we have to keep looking at the data, see how the data is behaving, look at trends over time, maybe over the months, over the years, and see whether, make decisions based on that and see if we are on track you know if we need to do, need to change our tactic for better results."
A research scientist in a HIC further commented on their utility for planning, explaining how in "planning, developing our five-year plans, we have to actually look back and really bringing all that still, the charts and you know, the visualizations to show the current situation-it becomes like building your case," though they also emphasized that other aspects of decision-making need to be considered, as "there are many factors that influence what an organization is going to focus on and evidence is just one thing."

Another key aspect of GBD Compare highlighted by respondents was how the data facilitated comparison among countries, which illuminated gaps and may trigger processes of policy change. While GBD data in itself does not indicate what the policy responses should or would be, it helps to identify gaps and understand areas where disease burden is high or significantly increasing, particularly in this directly comparative context.

Data was also used to increase accountability and transparency. In this case, participants noted one other tool developed by IHME, Financing Global Health, a data visualization focused on tracking and monitoring government spending on health. One respondent noted that this tool was helpful in conjunction with GBD Compare: "You can say the US government spends this much on malaria … I think that's really good because many times even for us, being right now we are lower middle-income country but when we were low-income country, there was a lot of investment by partners, you know donors … you could have this notion that they spend this amount but that amount doesn't really get to the ground."

Discussion
Our findings indicate that while there were variations in the perception, interpretation and utilization of data visualization tools by policy officers, all study participants found these tools to be extremely useful. Data visualizations are instrumental in quickly and clearly conveying information and thus contribute to increasing the use of burden of disease data in policy. According to our findings, the process of KT requires sufficient resources, awareness, time and a sense of ownership. In some cases, participants perceived unwillingness to use GBD data among their colleagues and the wider stakeholder community. Knowledge producers, such as IHME, can make more concerted efforts to increase buy-in and familiarity with various data platforms, perhaps by advertising their objectives and their data more in the health information realm, and by providing more training for current and prospective users.

A substantial body of literature confirms that KT processes build important bridges between evidence and practice, inter alia through co-creation processes and the involvement of all relevant actors from differing backgrounds [21]. These results build on the existing literature on data visualization and KT, such as assertions that data visualization's interactive nature allows decision-makers to explore data from various perspectives [22]. Scholars have highlighted how visualizations can help users (e.g., policy officers presenting data and information) explore, analyze and communicate results from complex big data in health [22], and can be used to convey a larger story or narrative in a stimulating and more effective way than text alone [23]. However, there is a scarcity of research on using information technology as a means for knowledge dissemination in health policy-making [24]. A systematic review by Delnord et al.
reports that only one previous study proposed a framework for monitoring the impact of national health information systems in regards to KT [25]. During the study process, we observed that inaccessibility of the evidence hampered evidence-informed policy; evidence will be quickly discarded by policy-makers if they struggle to access it online or if the way in which it is presented is difficult to understand [26]. However, data visualization tools represent just one example of strategies for facilitating the use of explicit knowledge (health information, data, evidence) in policy. Others include synthesis tools, such as policy briefs, which summarize and present information in more accessible ways, as well as knowledge networks (e.g., regular meetings or advisory groups/committees) that bring stakeholders together to exchange ideas [9]. Actors' use of available interactive tools and the creation of exchange between data producers and users affect the extent to which health information is applied in the policy process [27]. Collaboration between multisectoral stakeholders, such as evidence producers and users, funders and actors in healthcare with varying perspectives, increases mutual understanding and trust, which facilitates KT [26,28]; this is echoed by participants' emphasis on the importance of collaboration and communication.

The three countries in this study provided contrasting contexts in regards to both GBD uptake and health information systems, in terms of resources, data quality and transparency, as well as awareness of and support for KT. This is particularly relevant for public health in LMICs, where local-level evidence might be ignored or inadequate hardware and data quality prevail [29]. In Kenya, a majority of health facilities are still operated by faith-based organizations and non-governmental organizations, data remains scarce and KT is not prioritized [30]. In contrast, policy-makers in HICs have a wealth of data and resources, and they integrate KT and GBD data into their policy-making processes [21,22]. While some participants attributed high importance to KT and the use of data visualization, they also encountered hindrances such as lacking resources, particularly in LMICs. Dedicated resources (e.g., staff, time and money) to support KT efforts are needed [31].

In order to take appropriate actions, health policymakers require various kinds of information about health system performance and public health problems and needs [24]. Health information and data help to identify problems, as well as aid monitoring and evaluation by measuring the magnitude of a disease and assessing progress in addressing these health problems [5]. However, data is only part of the evidence that is needed for effective evidence-informed policy-making. For the remainder of the policy cycle, other kinds of evidence and health information are needed, in particular primary and secondary research [5]. The Evidence-informed Policy Network (EVIPNet) of WHO is looking at comprehensively addressing the evidence needs of policy-makers at all stages of the policy cycle while synthesizing the best available evidence in a user-friendly manner [5].

Methodological considerations
This was a qualitative case study that generated an in-depth, multi-faceted understanding of data visualization in real-life contexts across three countries.
The case study provides valuable input from selected experts to help design further research, specifically on the usability and relevance of sophisticated visualization tools in the analysis of complex data and information like the GBD. However, such an approach comes with its own methodological limitations, including selecting study participants purposively (e.g., pre-selected by IHME based on participant use of GBD Compare); also, findings may not be generalizable to other contexts [32]. The research questions were exploratory in nature, seeking specifically to understand and draw out inferences from respondent perspectives and experiences. An inductive approach in this case is optimal, where data was gathered to build concepts, hypotheses, and theories, rather than deductively deriving hypotheses to be tested [33]. Semi-structured interviews allowed participants to diverge and proved useful, as relevant topics could be delved into and discussed in detail. Also, participants came from different backgrounds, professions, and countries. Interviewing policy officers from three countries with contrasting contexts regarding GBD uptake, and healthcare systems generally, yielded diverse responses and data while also giving the study a richer, more comparative perspective. Of note, interviews and data collection were not done in situ in each country where participants were doing their work; a better grasp of the local system and context would have provided more background and a better understanding of the participants' responses and the themes emerging from their insights. Additional limitations include self-reporting and potential biases, the main one being the low number of interviews, as well as the fact that interviews may take different courses, making it difficult to compare responses.

Conclusions
This study provides insight into the complexities of the utilization and interpretation of data visualization tools. For instance, policy officers familiar with the concept of KT are more willing to utilize data visualization tools, as they view them as engaging and important implements for KT and the policy-making process. While future research can build on these observations to examine how users perceive and interact with data visualization and related tools, and their implications for KT, a suggestion emerging from this work is for organizations such as IHME and WHO to consider how certain KT mechanisms can be strengthened to make data more digestible, usable, and informative for health policy-making. Focus should be placed on finding and developing solutions that facilitate efforts to bridge the gap between evidence and policy; specifically, understanding how health metrics are used and interpreted, and developing a KT framework incorporating these insights, can help identify and overcome challenges. The results of this study also demonstrate the importance of actors across the health information system coming together to work with one another in processes of knowledge translation. Thus, adopting a system-wide approach when translating health information into policy could be useful to strengthen and enhance collaborative research processes and encourage systematic, transparent use of GBD in the policy-making process.
2021-02-05T15:17:59.419Z
2021-02-04T00:00:00.000
{ "year": 2021, "sha1": "15c41cd22de45113f9d938b8ccdb343c914afa31", "oa_license": "CCBY", "oa_url": "https://archpublichealth.biomedcentral.com/track/pdf/10.1186/s13690-021-00537-z", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "15c41cd22de45113f9d938b8ccdb343c914afa31", "s2fieldsofstudy": [ "Medicine", "Political Science" ], "extfieldsofstudy": [ "Medicine" ] }
248780407
pes2o/s2orc
v3-fos-license
Transkimmer: Transformer Learns to Layer-wise Skim

The Transformer architecture has become the de-facto model for many machine learning tasks, from natural language processing to computer vision. As such, improving its computational efficiency becomes paramount. One of the major computational inefficiencies of Transformer-based models is that they spend an identical amount of computation throughout all layers. Prior works have proposed to augment the Transformer model with the capability of skimming tokens to improve its computational efficiency. However, they suffer from not having effective, end-to-end optimization of the discrete skimming predictor. To address the above limitations, we propose the Transkimmer architecture, which learns to identify hidden state tokens that are not required by each layer. The skimmed tokens are then forwarded directly to the final output, thus reducing the computation of the successive layers. The key idea in Transkimmer is to add a parameterized predictor before each layer that learns to make the skimming decision. We also propose to adopt the reparameterization trick and add a skim loss for the end-to-end training of Transkimmer. Transkimmer achieves a 10.97× average speedup on the GLUE benchmark compared with the vanilla BERT-base baseline, with less than 1% accuracy degradation.

Introduction
The Transformer model (Vaswani et al., 2017) has pushed the accuracy of various NLP applications to a new stage by introducing the multi-head attention (MHA) mechanism (Lin et al., 2017). Further, the BERT (Devlin et al., 2019) model advances its performance by introducing self-supervised pre-training, and has reached state-of-the-art accuracy on many NLP tasks. Compared to models of a recurrent fashion, e.g., RNN (Rumelhart et al., 1986) and LSTM (Hochreiter and Schmidhuber, 1997), the Transformer model leverages the above attention mechanism to process the entire input sequence. By doing so, extremely large-scale and long-span models are enabled, resulting in a huge performance leap in sequence processing tasks. However, the computation complexity of the attention mechanism is O(N^2) for an input length of N, which leads to the high computation demand of the Transformer model.

Figure 1: Overview of the Transkimmer dynamic token skimming method. Tokens are pruned during the processing of Transformer layers. Note that not all tokens are actually needed by the downstream classifier in this sequence classification example; the full-length output embedding sequence is shown to demonstrate the forwarding design of Transkimmer.

Some prior works (Goyal et al., 2020; Kim and Cho, 2021; Ye et al., 2021) explore the opportunity of dynamically reducing the input sequence length to improve the Transformer's computational efficiency. The intuition is similar to a human's reading comprehension capability: we do not read all words equally. Instead, some words are focused on with more interest while others are skimmed. For Transformer models, this means adopting a dynamic computation budget for different input tokens according to their contents. To extract efficiency gains from this insight, we propose to append a skim predictor module to the Transformer layer to conduct fine-grained dynamic token pruning, as shown in Fig. 1. When processed by the Transformer layers, the sequence of token hidden state embeddings is pruned at each layer with reference to its current state. Less relevant tokens are skimmed without further computation and forwarded directly to the final output. Only the significant tokens are passed on to successive layers for further processing. This improves the Transformer model's inference latency by reducing the input tensors along the sequence-length dimension.
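As a concrete illustration of this pruning, the following minimal PyTorch sketch (not the paper's actual implementation; the shapes and the hand-written mask are illustrative, whereas the real mask comes from a learned skim predictor) gathers kept tokens along the sequence-length dimension and sets skimmed tokens aside for forwarding. Because self-attention cost grows quadratically with sequence length, keeping 5 of 8 tokens cuts attention compute to roughly (5/8)^2 of the original.

```python
# A minimal sketch of pruning hidden states on the sequence-length dimension
# given a binary skim mask, as in Fig. 1. Shapes and the hand-written mask
# are illustrative; in Transkimmer the mask comes from a learned predictor.
import torch

batch, seq_len, hidden = 1, 8, 16
h = torch.randn(batch, seq_len, hidden)          # token hidden states
mask = torch.tensor([1, 1, 0, 1, 0, 0, 1, 1])    # 1 = keep, 0 = skim

keep_idx = mask.nonzero(as_tuple=True)[0]        # positions of kept tokens
skim_idx = (mask == 0).nonzero(as_tuple=True)[0]

h_kept = h[:, keep_idx, :]      # continues through later layers: [1, 5, 16]
h_skimmed = h[:, skim_idx, :]   # bypasses later layers, forwarded to output

print(h_kept.shape, h_skimmed.shape)  # torch.Size([1, 5, 16]) torch.Size([1, 3, 16])
```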
However, the optimization problem of such skim decision prediction is non-trivial. To conduct pruning of dynamic tensors, non-differentiable discrete skim decisions are applied. Prior works have proposed to use soft-masking approximation or reinforcement learning to resolve this, which leads to approximation mismatch or non-uniform optimization, respectively. Transkimmer proposes to adopt the reparameterization technique (Jang et al., 2017) to estimate the gradient for skim prediction. As such, we can achieve an end-to-end joint optimization objective and training paradigm. By jointly training the downstream task and the skim objective, the Transformer learns to selectively skim input contents. In our evaluation, we show that Transkimmer outperforms all prior input reduction works on inference speedup and model accuracy. Specifically, BERT-base is accelerated by 10.97× on the GLUE benchmark, and by 2.81× without counting the padding tokens. Moreover, we also demonstrate that the method proposed by Transkimmer is generally applicable to pre-trained language models and compression methods, using the RoBERTa, DistilBERT and ALBERT models.

This paper makes the following three contributions.
• We propose the Transkimmer model, which accelerates Transformer inference with dynamic token skimming.
• We further propose an end-to-end joint optimization method that trains the skim strategy together with the downstream objective.
• We evaluate the proposed method on various datasets and backbone models to demonstrate its generality.

Related Works
Recurrent Models with Skimming. The idea of skipping or skimming irrelevant sections or tokens of an input sequence has been studied in NLP models, especially recurrent neural networks (RNN) (Rumelhart et al., 1986) and long short-term memory networks (LSTM) (Hochreiter and Schmidhuber, 1997). When processing recurrently, skimming the computation of a token simply means jumping over the current step and keeping the hidden states unchanged. LSTM-Jump, Skim-RNN (Seo et al., 2018), Structural-Jump-LSTM (Hansen et al., 2019) and Skip-RNN (Campos et al., 2018) adopt this skimming design for acceleration in recurrent models.

Transformer with Input Reduction. Unlike the sequential processing of recurrent models, the Transformer model calculates all input sequence tokens in parallel. As such, skimming can be regarded as the reduction of the hidden states tensor along the sequence-length dimension. Universal Transformer (Dehghani et al., 2019) proposes a dynamic halting mechanism that determines the refinement steps for each token. DeFormer (Cao et al., 2020) proposes a dual-tower structure to process the question and context parts separately at shallow layers, specifically for QA tasks; the context branch is pre-processed off-line and pruned at shallow layers. Also dedicated to QA tasks, Block-Skim (Guan et al., 2021) proposes to predict and skim the irrelevant context blocks by analyzing the attention weight patterns. Progressive Growth (Gu et al., 2021) randomly drops a portion of input tokens during training to achieve better pre-training efficiency. Another track of research is to perform such input token selection dynamically during inference, which is the closest to our idea. POWER-BERT (Goyal et al., 2020) extracts the input sequence at the token level during processing. For the fine-tuning process on downstream tasks, Goyal et al.
propose a soft-extraction layer to train the model jointly. Length-Adaptive Transformer (Kim and Cho, 2021) improves on it by forwarding the pruned tokens to the final downstream classifier as a form of recovery. Learned Token Pruning improves POWER-BERT by making its pre-defined sparsity ratio a parameterized threshold. TR-BERT (Ye et al., 2021) adopts reinforcement learning to independently optimize a policy network that drops tokens. Comparisons to these works are discussed in detail in Sec. 3. Moreover, SpAttn complements the POWER-BERT design with a domain-specific hardware design for better acceleration, and proposes to make skimming decisions with attention values from all layers.

Early Exit. Early exit (Panda et al., 2016; Teerapittayanon et al., 2016) is another method for executing a neural network with input-dependent computational complexity. The idea is to halt execution during model processing at some early exit. Under the circumstance of processing sequential inputs, early exit can be viewed as a coarse-grained case of input skimming. With the hard constraint that all input tokens are skimmed at the same time, early exit methods lead to worse accuracy and performance results compared to input skimming methods. However, the early exit method is also generally applicable to other domains, such as convolutional neural networks (CNNs). DeeBERT (Xin et al., 2020), PABEE and FastBERT are some recent works adopting early exit in Transformer models. Magic Pyramid (He et al., 2021) proposes to combine the early exit and input skimming ideas: tokens are skimmed with fine granularity following the POWER-BERT design, and the whole input sequence is halted at some early exit.

Efficient Transformer. There are also many efforts towards designing efficient Transformers (Wu et al., 2020; Tay et al., 2020). For example, researchers have applied well-studied compression methods to Transformers, such as pruning (Guo et al.), quantization (Wang and Zhang, 2020; Guo et al., 2022), distillation (Sanh et al., 2019), and weight sharing. Other efforts focus on dedicated efficient attention mechanisms, considering attention's quadratic complexity in sequence length (Kitaev et al., 2020; Beltagy et al., 2020; Zaheer et al., 2020), or on efficient feed-forward network (FFN) designs, given the FFN's dominant complexity in the Transformer model (Dong et al., 2021). Transkimmer is orthogonal to these techniques, operating instead on input dimension reduction.

Input Skimming Search Space
In this section, we discuss the challenges of the dynamic input skimming idea in detail. Moreover, we compare techniques and design decisions from prior works, as summarized in Tbl. 1.

Optimization Method
The first challenge of input skimming is optimization with discrete skimming decisions. Specifically, the decision to prune the hidden state tensors (i.e., reduce their sequence length) is a binary prediction. As such, the skim prediction model is non-differentiable and unable to be directly optimized by gradient back-propagation. Prior works handle the discrete binary skimming decision by using a set of complicated training techniques, which we categorize in Tbl. 1.

Soft-Masking. Some works (Goyal et al., 2020; Kim and Cho, 2021) propose the soft-masking training trick, which uses a continuous value as the skimming prediction. During the training process, the predicted value is multiplied with the hidden state embedding vectors, so that no actual pruning happens.
In the inference phase, this continuous skimming prediction value is binarized by a threshold-based step function. The threshold value is pre-defined or determined through a hyper-parameter search process. Obviously, there exists a training-inference paradigm mismatch where the actual skimming only happens at inference time. Such a mismatch leads to significant accuracy degradation.

Reinforcement Learning. TR-BERT (Ye et al., 2021) proposes to use reinforcement learning (RL) to solve the discrete skimming decision problem. It uses a separate policy network as the skimming predictor, and the backbone Transformer model is considered the value network. The backbone Transformer is first fine-tuned separately; the skimming policy network is then updated using the RL algorithm. This multi-step training paradigm is tedious, and training the backbone Transformer and the skimming policy network separately is sub-optimal compared to a joint optimization paradigm. Moreover, the large search space of such an RL objective makes convergence difficult, especially on small downstream datasets.

Reparameterization. In this work, we propose to use the reparameterization technique to address the discrete skimming decision challenge. Its core idea is to sample the backward propagation gradient during training, whose details we describe in Sec. 4. The advantage of our method is that it enables the joint optimization of the skim predictor and the backbone Transformer model and therefore achieves the optimal solution. For example, we will later demonstrate in Fig. 4 that different tasks or datasets prefer different layer-wise skimming strategies, which are learned by our method. We further explain these results in Sec. 5.4.

Design Choices In our work, we also jointly consider other design choices regarding the skimming optimization, which include the choice of input to the skimming module and how to deal with the skimmed input. We first explain the choices made by prior works, and then explain the choice of our method.

Strategy. For the skimming optimization methods described above, there can be different strategies regarding the implementation details. Generally, the skimming strategy can be categorized into search-based or learning-based approaches, as described in Tbl. 1. However, when applied to various downstream NLP tasks and datasets, the dynamic skimming scheme prefers different layer-wise strategies, as we mentioned above. This layer-wise skimming characteristic makes the search-based approach neither scalable nor generally applicable. In contrast, our method enables the joint training of the skimming strategy and the downstream task, which leads to better skimming decisions with reference to both efficiency and accuracy. LTP is the only prior work adopting a learning-based method; however, it uses the soft-masking approach and suffers from the training-inference mismatch.

Input for Skimming. POWER-BERT, LAT and LTP treat the attention weight value as an importance score and use it as the criterion for making the skimming decision. In contrast to this value-based method (Guan et al., 2020), TR-BERT uses hidden state embeddings as the input feature. In our work, we use the hidden state embeddings because they enclose contextual information of the corresponding input token. Our work shows that, under joint training of the skimming module and the backbone Transformer model, the embeddings also learn to carry features for skimming prediction.
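The two candidate inputs can be sketched as follows; this is an illustrative toy, with shapes and the pooling of attention mass chosen by us rather than taken from any specific implementation.

```python
import torch

batch, heads, seq_len, hidden = 2, 12, 8, 64
attn = torch.softmax(torch.randn(batch, heads, seq_len, seq_len), dim=-1)
h = torch.randn(batch, seq_len, hidden)

# (a) Value-based criterion: attention mass each token receives, pooled
# over heads and query positions, used directly as an importance score.
importance = attn.sum(dim=2).mean(dim=1)       # (batch, seq_len)

# (b) Feature-based criterion (our choice): feed the contextual hidden
# state embeddings themselves to a learned skim predictor.
skim_logits = torch.nn.Linear(hidden, 2)(h)    # (batch, seq_len, 2)
```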
Skimming Tokens. For the tokens pruned dynamically by the skimming decision during processing, it is natural to remove them from all successive layers. However, LAT and TR-BERT propose to forward such tokens to the final output of the Transformer encoder, which keeps the dimension of the Transformer output unchanged. Our work adopts this forward-based design because it is more friendly to the Transformer decoder module on downstream tasks.

Transformer with Skim Predictor To predict which tokens should be pruned, we append an extra prediction module before each layer, as shown in Fig. 2. This prediction module outputs a skimming mask M, which is used to gather the hidden state embedding H along the sequence length dimension. The pruned embedding is then fed to the Transformer layer as its input. In the skim mask, output 1 denotes retained tokens and 0 denotes pruned tokens. The gathering operation selects the input tensor with the provided mask. By optimizing this stand-alone skim module, syntactically redundant and semantically irrelevant tokens are skimmed and pruned. The proposed skim predictor module is a multi-layer perceptron (MLP) network composed of 2 linear layers with a layer normalization operation (Ba et al., 2016) and GeLU activation (Hendrycks and Gimpel, 2016), followed by an activation function with discrete output that yields the skim decision: M = φ(MLP(H)), where MLP(H) = Linear(GeLU(LayerNorm(Linear(H)))). This skim predictor introduces extra model parameters and computation overhead. However, both are very small compared to the vanilla Transformer model, about 7.9% and 6.5% respectively. We demonstrate later that the computation overhead of the skim module is much smaller than the benefit brought by the reduction of the input tensor through skimming. For the tokens pruned by the skim module at each layer, we forward these pruned hidden state embeddings to the last Transformer layer. As such, the final output of the whole Transformer model is composed of the token embeddings skimmed at the various layers and the ones processed by all layers without being skimmed. This output is used by the classification layers of the various downstream tasks, which makes the skimming operation also compatible with token classification tasks such as extractive question answering (QA) and named entity recognition (NER). This also restores the once abandoned information for downstream tasks.

End-to-End Optimization In the above discussion, we have described that Transkimmer can be easily added to a backbone model without modification to its structure. Furthermore, Transkimmer is also capable of utilizing pre-trained model parameters and fine-tuning the Transkimmer-augmented Transformer-based models on downstream tasks. With an extra skim loss appended to the optimization objective, this fine-tuning process is also performed end-to-end without changing its original paradigm.

Skim Attention. In the training procedure, Transkimmer does not prune the hidden state tensors as it does at inference time, because gathering and pruning a portion of the tokens prevents the back-propagation of their gradients, and the absence of an error signal from negative samples interferes with the convergence of the Transkimmer model. Therefore, we propose skim-attention, which masks the reduced tokens in training instead of actually pruning them. The attention weights to the skimmed tokens are set to 0, making them unreachable by the other tokens. By doing so, the remaining tokens have computational values identical to those under actual pruning.
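A minimal PyTorch sketch of the two mechanisms just described, the per-layer skim predictor and the skim-attention mask, follows; it is our illustrative reconstruction under the stated MLP structure, not the released implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SkimPredictor(nn.Module):
    # MLP(H) = Linear(GeLU(LayerNorm(Linear(H)))), one predictor per layer
    def __init__(self, hidden=768):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(hidden, hidden),
            nn.LayerNorm(hidden),
            nn.GELU(),
            nn.Linear(hidden, 2),   # two logits: skim (0) vs. keep (1)
        )

    def forward(self, h):           # h: (batch, seq_len, hidden)
        return self.net(h)          # (batch, seq_len, 2)

def skim_attention(scores, keep_mask):
    # Training-time alternative to pruning: give skimmed tokens -inf scores
    # so their attention weights become 0 and other tokens cannot reach them,
    # while gradients still flow back to the skim predictor.
    neg_inf = torch.finfo(scores.dtype).min
    scores = scores.masked_fill(keep_mask[:, None, None, :] == 0, neg_inf)
    return F.softmax(scores, dim=-1)
```

At inference time the mask is instead used to gather the retained embeddings along the sequence dimension, with the pruned embeddings forwarded to the final output as described above.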
The gradient signal is then passed to the skim predictor module through the skim-attention multiplication.

Gumbel Softmax. Following the discussion in Sec. 3.1, the output decision mask of the skim predictor is discrete and non-differentiable. To overcome this obstacle to back-propagation, we use the reparameterization method (Jang et al., 2017) to sample the discrete skim prediction from the output probability distribution π_i of the MLP, and the gradient of the non-differentiable activation function is estimated from the Gumbel-Softmax distribution during back-propagation. Concretely, the forward pass takes the hard decision M_i = one_hot(argmax_j (log π_i^j + g_i^j)), while the backward pass uses the continuous relaxation softmax_j((log π_i^j + g_i^j) / τ). The g_i^j are independently and identically sampled from the Gumbel(0, 1) distribution, and τ is the temperature hyper-parameter controlling how closely the relaxation approaches a one-hot prediction distribution. We take τ = 0.1 for all experiments. To achieve a better token sparsification ratio, we further add a skim loss term to the overall optimization objective, Loss_skim = (1/L) Σ_{l=1}^{L} mean(M^l), where M^l is the skim mask at layer l and L is the number of layers. The skim loss is essentially the ratio of tokens retained in each layer and thus reflects the computation complexity speedup. By decreasing this objective, more tokens are forced to be pruned during processing. To balance it against the original downstream task loss, we use a harmony coefficient λ, so the total loss used for training is formulated as Loss_total = Loss_downstream + λ · Loss_skim. With these settings, the Transkimmer model is trained end-to-end without any change to its original training paradigm.
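The decision sampling and the skim objective can be sketched in a few lines of PyTorch; the helper names and the index-1-means-keep convention are our assumptions.

```python
import torch
import torch.nn.functional as F

def skim_decision(logits, tau=0.1):
    # Straight-through Gumbel-Softmax: a discrete {skim, keep} decision in
    # the forward pass, a continuous gradient estimate in the backward pass.
    # logits: (batch, seq_len, 2) output of the skim predictor MLP.
    return F.gumbel_softmax(logits, tau=tau, hard=True)[..., 1]  # 1 = keep

def skim_loss(masks):
    # masks: list of (batch, seq_len) keep-masks, one per layer; the loss is
    # the mean ratio of retained tokens across layers, so minimizing it
    # pushes the model to prune more tokens.
    return torch.stack([m.float().mean() for m in masks]).mean()

# total objective balanced by the harmony coefficient lambda:
# loss_total = loss_downstream + lam * skim_loss(masks)
```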
Unbalanced Initialization. Another obstacle is that skimming tokens during the training process makes it unstable and decreases accuracy. While the backbone starts from pre-trained language modeling parameters, the skim predictor module is randomly initialized and predicts random decisions. This induces a significant processing mismatch in the backbone Transformer model, which was pre-trained with all tokens accessible. Consequently, the randomly initialized skim predictor makes training unstable and prone to divergence. We propose an unbalanced initialization technique to solve this issue. The idea is to force positive predictions at first so that the model learns to skim gradually. Generally, parameters are initialized from a zero-mean distribution, ω ∼ N(0, σ). We instead initialize the bias vector of the last linear layer in the skim predictor MLP with an unbalanced bias, b_i ∼ N(µ_i, σ), where i indexes the bias for prediction 1 (keep) or 0 (skim) and the mean for the keep prediction is positive. Consequently, the skim predictor tends to retain tokens rather than skim them at the start of training. The mean value µ_0 of the unbalanced distribution is set to 5 for all experiments.

Setup Datasets. We evaluate the proposed Transkimmer method on various datasets. We use the GLUE (Wang et al., 2019) benchmark including 9 classification/regression datasets, the extractive question answering dataset SQuAD-v2.0, and the sequence classification datasets 20News (Lang, 1995), YELP (Zhang et al., 2015) and IMDB (Maas et al., 2011). These datasets are all publicly accessible, and a summary is shown in Tbl. 2. The diversity of tasks and text contexts demonstrates the general applicability of the proposed method.

Models. We follow the setting of the BERT model, using the Transformer encoder structure and a linear classification layer for all datasets. We evaluate the base setting with 12 heads and 12 layers, as in prior work (Devlin et al., 2019). We implement Transkimmer upon the BERT and RoBERTa pre-trained language models on downstream tasks.

Baselines. We compare our work to prior token reduction works including POWER-BERT (Goyal et al., 2020), Length-Adaptive Transformer (LA-Transformer) (Kim and Cho, 2021), Learned Token Pruning (LTP), DeFormer (Cao et al., 2020) and TR-BERT (Ye et al., 2021). We also compare our method with the model compression methods of knowledge distillation and weight sharing. Knowledge distillation uses a teacher model to transfer knowledge to a smaller student model. Here we adopt the DistilBERT (Sanh et al., 2019) setting to distill a 6-layer model from the BERT base model. By sharing weight parameters among layers, the number of weight parameters is reduced. Note that weight sharing does not impact the computation FLOPs (floating-point operations). We evaluate Transkimmer on ALBERT (Lan et al., 2020), which shares weight parameters among all layers. To show that the token reduction method is compatible with these model compression methods, we further implement the Transkimmer method on top of these works to demonstrate their cooperative effect. Besides, DeeBERT (Xin et al., 2020) is a Transformer early exit baseline, which can be regarded as coarse-grained input skimming.

Padding. While processing batched input samples, Transformer models perform a padding operation on the input sequences to align the input length. Sequences are appended with a special padding token [PAD] up to a predefined sequence length for the convenience of successive computing. This is a trivial setting for general evaluation but can lead to pseudo speedup for token reduction works, because the padded tokens can be pruned without prediction. Among prior works, there are three evaluation settings with respect to padding: padding to a fixed sequence length, padding to the mini-batch maximum length, and no padding (denoted as Sequence, Batch and No in Fig. 3 & 4). We indicate the padding methods of prior works and evaluate Transkimmer with different padding settings for a fair comparison (see the toy calculation at the end of this section). The speedup under the padding-to-mini-batch-maximum-length setting is related to the batch size and the processing order of input samples, so it is difficult to make a direct comparison under this setting. However, it can be estimated with padding to a fixed sequence length as an upper bound and no padding as a lower bound. The sequence length on different datasets is determined following prior works' settings (Goyal et al., 2020). We measure the inference FLOPs as a general measurement of model computational complexity on all platforms. We use the TorchProfile tool to calculate the FLOPs for each model.

Training Setting. We implement the proposed method based on the open-sourced library from Wolf et al. (2020). For each baseline model, we use the released pre-trained checkpoints. We follow the training settings used by Devlin et al. (2019) and Liu et al. (2019) to perform fine-tuning on the above datasets. We perform all reported experiments with random seed 42. We use four V100 GPUs for the training experiments. The harmony coefficient λ is determined by a hyper-parameter grid search on a development set built from 20% of the data randomly picked from the training set. The search space is from 0.1 to 1 with a step of 0.1.

Overall Results We show the overall results on several datasets and discuss our observations. Tbl. 3 shows the accuracy and speedup evaluated on the GLUE benchmark, and Tbl. 4 further shows the results on other datasets with longer inputs.
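Before turning to the comparisons, a toy calculation makes the padding caveat above concrete; all numbers below are invented for illustration.

```python
# A skimming model that prunes half of the real tokens looks far faster
# when the dropped [PAD] tokens are (incorrectly) credited as savings.
def apparent_speedup(tokens_before, tokens_after):
    return tokens_before / tokens_after

padded_len, real_len, kept = 128, 40, 20      # hypothetical sample
print(apparent_speedup(padded_len, kept))     # 6.4x with padding counted
print(apparent_speedup(real_len, kept))       # 2.0x with padding excluded
```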
Comparison to vanilla model baseline. Generally, Transkimmer achieves considerable speedup over the vanilla models with minor accuracy degradation, less than 1% in nearly all cases. The average speedup is 2.81× on the GLUE benchmark and over 2× on the other datasets. This demonstrates the inference efficiency improvement of the Transkimmer input reduction method. We also evaluate Transkimmer with the RoBERTa model as the backbone and reach a 3.24× average speedup on the GLUE benchmark. This result further demonstrates the general applicability of Transkimmer to different Transformer-based pre-trained language models. Among all the datasets we evaluated, Transkimmer tends to have a better acceleration ratio on the easier ones. For example, sequence classification tasks like QQP and STS-B are better accelerated than QA or NLI datasets. We suggest that the Transformer backbone is able to process the information at shallower layers and skim the redundant part earlier. This is also demonstrated in the post-hoc analysis in Sec. 5.4.

Comparison to input reduction prior works. As shown in Tbl. 3, Transkimmer outperforms all the input reduction methods by a margin on the GLUE benchmark. To make a fair comparison, we evaluate Transkimmer with two padding settings, padding to a fixed sequence length or no padding. For most cases, Transkimmer has better accuracy and a higher speedup ratio at the same time. When taking the special padding tokens into account, Transkimmer accelerates BERT base by 10.97× on average.

Comparison to model compression methods. The comparison to two model compression methods is shown in Tbl. 3. Transkimmer outperforms the knowledge distillation and weight sharing baselines by a margin. Besides, the dynamic skimming idea itself is orthogonal to these existing model compression methods. To elaborate, we further adopt the proposed Transkimmer method on the DistilBERT and ALBERT models. With the proposed end-to-end training objective, Transkimmer is easily added to these methods, and there is no need to change the original training process. The result shows that the Transkimmer method further accelerates the inference of compressed models with nearly no extra accuracy degradation.

Table 5: Post-hoc case study of the SST-2 sentiment analysis and SQuAD QA tasks from the Transkimmer model with the BERT base setting. The color indicated by the colorbar represents the Transformer layer index where the token is pruned. Specifically, the black tokens are fully processed without being skimmed.

Post-hoc Analysis Skim Strategy. Fig. 4 shows the number of tokens remaining for processing at each Transformer layer. The normalized area under each curve is a rough approximation of the speedup ratio with reference to the token numbers. Through end-to-end optimization, Transkimmer learns significantly different strategies on different tasks. On the WNLI dataset, over 90% of tokens are pruned within the first 3 layers, which guarantees a high acceleration gain. The steep cliff at layer 7 on CoLA demonstrates a large portion of skimming at this particular position. We suggest that this is because the processing of contextual information is sufficient for the skimming decision at this specific layer.

Post-Hoc Case Study. Moreover, several post-hoc case studies are shown in Tbl. 5. In the SST-2 sentiment analysis example, the definite articles and apostrophes are discarded at the beginning. All words are encoded into contextual hidden state embeddings and gradually discarded, except for a few significant key words.
Only the special token [CLS] is fully processed in this example for the final sentiment classification. However, in the token classification task example from the SQuAD dataset, all tokens are given to the downstream classifier to predict the answer position. The answer tokens are processed by all Transformer layers. Similarly, the question part is also kept, with its tokens containing enough information. Another detail worth mentioning is that we use subword tokenization for the SQuAD dataset. As such, subword tokens of the same word might be discarded at different layers. For instance, the word Francia is tokenized into the two subword tokens fran- and -cia, which are pruned at layers 4 and 6, respectively.

Conclusion Input skimming, or dynamic input reduction, is an emerging Transformer model acceleration method studied by many recent works. The idea utilizes the semantic structure of language and the syntactic information of the input context for inference acceleration. Compared to static model weight compression methods, input skimming explores the redundancy in the input and hidden state tensors. As such, owing to its dynamic nature, it is orthogonal to and compatible with those model compression algorithms. In this work, we propose an accurate and efficient Transformer inference acceleration method by teaching the model how to skim input contents. The proposed Transkimmer method is trained with an easy, end-to-end paradigm. Furthermore, Transkimmer is generally applicable to various Transformer-based model structures. It is even compatible with static model compression methods like knowledge distillation and weight sharing. We believe that these features make the Transkimmer method applicable to a wide range of production scenarios.
Elucidating the effects of precooked treatments on the quality attributes of red swamp crayfish (Procambarus clarkia): Insights from water boiling vs. microwaving

Highlights
• Water boiling and microwaving both ensured effective sterilization of crayfish.
• Microwave precooking of crayfish induced moderate lipid oxidation.
• The moderate lipid oxidation preferentially liberated taste and volatile compounds.
• Water boiling-triggered moderate protein oxidation improved the digestibility.

Introduction In the rapidly evolving landscape of modern living, marked by escalating personal standards and an unyielding tempo of professional commitments, a pronounced transformation in consumer dietary inclinations has become evident. This dietary transformation is intricately linked to the burgeoning convenience-centric economy and the pervasive embrace of a lifestyle centered on the home. Notably, there is a discernible inclination among consumers towards a diverse spectrum of convenient, nutritionally enriched prepared meals that prioritize health and efficiency (Yi and Xu, 2023; Xu et al., 2023). The realm of precooked food, which spans both fully processed and semi-finished products, underscores pivotal elements such as nutritional density, flavor profile, quality, convenience, and a diverse culinary experience. Paramount among these is the meticulous maintenance of flavor and nutritional integrity during the reheating process, a factor that substantially influences consumer acceptance. Precooked meals offer a time-saving alternative to traditional fast food and address the issue of culinary skill deficiency among consumers (Khalid et al., 2023; Ying et al., 2024). The transition from fresh ingredients to processed precooked foods involves alterations in sensory attributes, nutrition, and quality. The various heating stages in the processing of fresh materials can lead to meat shrinkage and oxidation, which in turn affect the final product's sensory and nutritional value. A profound comprehension of the mechanisms that govern quality alterations during food processing, in conjunction with precise control measures, is essential for attenuating the degradation of nutritional and flavor characteristics. Such an approach is instrumental in augmenting the intrinsic value of precooked food.

The red swamp crayfish (Procambarus clarkia), celebrated for its culinary allure and nutritional bounty, exemplifies a pivotal shift in gastronomic inclinations. This freshwater gem, with a commendable protein content of approximately 20 % and a rich profile of eight essential amino acids, has garnered extensive acclaim (Bai et al., 2022).
Its remarkable fatty acid profile, which includes nearly 50 % unsaturated fatty acids, features notable compounds such as eicosapentaenoic acid (EPA) and docosahexaenoic acid (DHA), establishing it as a nutritional powerhouse. Additionally, the crayfish provides a spectrum of essential minerals and vitamins crucial for human health, including selenium and retinol (Peng et al., 2021). The crayfish industry in China has experienced exponential growth since 2016, driven by a burgeoning market for crayfish consumption (Jiang et al., 2023). Currently, the crayfish processing sector is undergoing a renaissance, spurred by the rising demand for prepared dishes. However, a significant portion of the annual crayfish harvest remains underutilized due to regional or seasonal constraints, leading to considerable waste and economic loss (Bai et al., 2022). The strategic deployment of prepared dishes presents a viable solution to the ongoing challenge of year-round availability and holds the potential to enhance accessibility in remote regions (Li et al., 2023a). Interestingly, the scientific examination of the subtleties that govern the quality changes in precooked crayfish products during processing remains scarce. This underscores the pressing need for dedicated research to unlock the full potential of the crayfish industry, while minimizing waste, in accordance with evolving culinary preferences and sustainable food practices.

The meticulous processing of crayfish is a multifaceted endeavor, with precooking being a pivotal step that is orchestrated to enhance visual appeal, neutralize autolysins, and effectively eradicate microorganisms both externally and internally (Khalid et al., 2023). This strategic process serves a dual function: enhancing the permeability of crayfish tissue cells and priming the crustacean for subsequent flavor absorption during curing processes. Water boiling, executed at a precisely controlled temperature of 99 ± 1 °C, is a conventional method that stands as a cornerstone for pathogen inactivation in industrial crayfish production. This method is favored for its convenience, cost-effectiveness, scalability, and ability to preserve natural flavors (Li et al., 2023a). However, it is important to note that thermal processing induces alterations in protein conformation and physicochemical properties, which can impact the appearance, flavor, texture, and chemical composition of the crayfish (Khalid et al., 2023). Despite its efficacy, water boiling raises ecological concerns due to significant water consumption and the generation of wastewater, presenting challenges for sustainability and nutrient retention (Fan et al., 2021). In contrast, microwave technology, which spans the electromagnetic spectrum from 300 MHz to 300 GHz, offers versatility and potential ecological benefits compared to water boiling. However, research on its impact on crayfish quality remains limited (Wang et al., 2023a). Notably, microwave-cooked samples exhibit reduced muscle damage due to shorter cooking times, suggesting the potential for preserving texture and nutrients (Jiang et al., 2023). The narrative of crayfish processing unfolds against the backdrop of contrasting methodologies: the conventional reliability of water boiling juxtaposed with the emerging realm of microwave technology (Wang et al., 2023a). Striking a balance between efficacy, sustainability, and culinary excellence is paramount. Delving into the alterations in crayfish quality induced by microwave technology holds promise for both scientific
enlightenment and practical insights, potentially reshaping crayfish processing practices in the culinary domain.

This research embarks on a meticulous scientific odyssey, delving into the complexities of the red swamp crayfish (Procambarus clarkia). Through a comprehensive and multifaceted investigation, it scrutinizes the texture, nutrient composition, taste, volatile flavors, and digestion characteristics of this species. The study's primary objective is to elucidate the nuanced interplay between two predominant precooked modalities: water boiling and microwaving. At the crux of this inquiry lies the comparative analysis of the bactericidal efficacies of water boiling and microwaving, as well as the exploration of the intricate mechanistic underpinnings that drive the observed variations in quality. The meticulous examination of these attributes forms the bedrock of this analytical endeavor, which seeks to understand the multifaceted impacts of these precooked methods on the sensory and nutritional dimensions of the red swamp crayfish. By carefully dissecting these attributes, the research aims to reveal the subtle alterations induced by each precooked method. The study aspires to offer invaluable insights that extend beyond the confines of the laboratory, poised to guide industrial crayfish processing practices. By strengthening the scientific foundation underpinning crayfish processing, the study envisions enhancing the quality of the final product and optimizing the practical efficacy of industrial operations in this domain. The findings are expected to contribute to the body of knowledge surrounding the processing of aquatic resources, providing a robust framework for future research and innovation in the culinary and food processing industries.

Materials, reagents and chemicals Fresh red swamp crayfish, sourced from a local market, exhibited an average weight of 20.70 ± 3.50 g. Essential chemicals, including hydrochloric acid, sodium hydroxide, sulfuric acid, methanol, potassium bromide, petroleum ether, and phenol, were procured from Hunan Huihong Reagent Co., Ltd. (Hunan, China). Chromatography-grade n-hexane and acetonitrile were obtained from Shanghai Macklin Biochemical Co., Ltd. (Shanghai, China).

Sample preparations The crayfish underwent thorough cleaning, including scrubbing and rinsing with running water, before being weighed. The specimens were divided into two precooked groups: one group was subjected to boiling in water at 100 °C for 5 min (referred to as water boiling), while the other underwent microwave treatment at 210 W (microwave heating, 2450 MHz) for 5 min (referred to as microwaving).
Enumeration of aerobic plate count The aerobic plate count of crayfish was enumerated as per the stringent protocols outlined in the GB 4789.2-2016 standard. The methodology commenced with the homogenization of the samples, ensuring a uniform distribution of microbial content. This was followed by a systematic serial dilution process, executed in a 0.85 % NaCl solution, which served as a diluent conducive to microbial stability. The diluted samples were then applied onto AOAC 3M Petrifilm™ Aerobic Count Plates, sourced from Kesebai Medical Technology Co., Ltd., Shanghai, China. These plates are specifically designed for the isolation and enumeration of aerobic bacteria, providing an optimal environment for their growth. The inoculated plates were incubated for a period of 72 h at a constant temperature of 37 °C, a condition that promotes the proliferation of mesophilic bacteria. Upon completion of the incubation, the bacterial colonies were counted, yielding the aerobic plate count, which serves as a quantitative measure of the microbial load in the crayfish samples (a short sketch of this dilution arithmetic follows at the end of this section).

Analysis of texture profile To assess the texture parameters (hardness, elasticity, and chewiness) of crayfish tail meat, a TA.XT plus texture analyzer (Stable Micro Systems Ltd., Godalming, UK) was utilized at room temperature (25 °C). The testing parameters included a trigger force of 5 g, a pre-test speed of 1.0 mm/s, a test speed of 3 mm/s, a post-test speed of 5 mm/s, a compressed depth of 25 %, a time interval of 5 s, and a compression ratio set at 50 %. Each treatment was replicated twelve times to ensure robust data.

Fatty acids composition analysis Crayfish tail meat, after freeze-drying, underwent fatty acid methyl esterification according to the method of Ma et al. (2020). A gas chromatography-mass spectrometer (GC-MS, 7820A GC-5977E MSD, Agilent Ltd., Palo Alto, USA) utilizing an HP-5-MS capillary column (30 m × 0.25 mm × 0.25 μm) facilitated the analysis. The identification of fatty acid methyl esters relied on MS data from the National Institute of Standards and Technology (NIST14.L) library database.

Amino acids and free amino acids composition analysis For the analysis of amino acid and free amino acid composition, a 2 mg sample of freeze-dried crayfish tail meat was placed in a sealed 5 mL vial. Subsequently, 3 mL of either 6 mol/L hydrochloric acid or a 40 % sodium hydroxide solution was added, and the vial was securely sealed. The sample underwent hydrolysis at 110 °C for 24 h. Following the thorough hydrolysis of proteins into amino acid residues, 3 mL of the supernatant, obtained post-membrane filtration, was utilized for analysis employing the Elite-AAK amino acid analysis system (Dalian Elite Ltd., Liaoning, China). Amino acid content determination was conducted using high-performance liquid chromatography (HPLC) coupled with the Elite-AAK amino acid analysis column (250 mm × 4.6 mm, 5 μm). The column temperature was maintained at 27 °C, with an injection volume of 10 μL. The mobile phase flow rate was set at 1.2 mL/min, and detection occurred at a wavelength of 360 nm. For the analysis of free amino acids, the sample solution underwent precipitation with five times the volume of acetone, followed by centrifugation at 10,000 r/min for 15 min. Thereafter, 3 mL of the supernatant, post-membrane removal, was employed for derivatization and measurement using the Elite-AAK amino acid analysis system. The analytical procedure remained consistent with that utilized for amino acid composition.
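Relating back to the plate-count enumeration at the start of this section, the dilution arithmetic that converts colony counts to CFU/g reduces to a one-line calculation; the example numbers below are hypothetical.

```python
def cfu_per_gram(colonies, dilution_factor, plated_volume_ml=1.0):
    # colonies counted on the Petrifilm plate, scaled back through the
    # serial dilution in 0.85 % NaCl and the volume actually plated
    return colonies * dilution_factor / plated_volume_ml

print(cfu_per_gram(colonies=32, dilution_factor=10))  # 320.0 CFU/g (toy values)
```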
Flavor nucleotides analysis The quantification of the key flavor nucleotides, namely inosine monophosphate (IMP), guanosine monophosphate (GMP), adenosine monophosphate (AMP), and cytidine monophosphate (CMP), in crayfish meat was conducted in strict accordance with the national standard GB 5413.40-2016. The procedure involved a precise extraction method in which 5 g of crayfish meat was homogenized with 10 mL of deionized water. This mixture was then subjected to centrifugation at 5000 rpm for 15 min to facilitate the separation of the supernatant. A 2 mL aliquot of this supernatant was further diluted to a final volume of 10 mL to ensure optimal conditions for chromatographic analysis. The resulting solution was analyzed using ultra-performance liquid chromatography (ACQUITY UPLC, Waters, USA).

Volatile compounds analysis The analysis of volatile compounds in crayfish tail meat followed the methodology outlined by Ma et al. (2020), utilizing GC-MS coupled with an automated solid-phase microextraction (SPME) system. Commercial SPME fibers (50/30 μm DVB-CAR-PDMS, Shanghai Anpel Ltd., China) were employed for the extraction of volatiles from crayfish tail muscle. The identification of volatile compounds relied on MS data from the National Institute of Standards and Technology (NIST14.L) library database.

Calculation of equivalent umami concentration (EUC) and taste activity value (TAV) The equivalent umami concentration (EUC) is defined as a synergistic combination of AMP, GMP, IMP, Glu, and Asp that elicits an umami intensity equivalent to the given amount of monosodium glutamate (MSG). The calculation formula is expressed as follows: EUC = Σ a_i b_i + 1218 × (Σ a_i b_i) × (Σ a_j b_j), where EUC is represented in grams of MSG per 100 g of dried meat and 1218 is the synergy constant. The variable a_i is the content of umami amino acids (g/100 g dry meat), with b_i representing their relative umami strength compared to MSG (Asp, 0.077; Glu, 1). Additionally, a_j denotes the content of flavor nucleotides (g/100 g dry meat), and b_j signifies their relative strength compared to IMP (IMP, 1; GMP, 2.3; AMP, 0.18). The taste activity value (TAV) for each free amino acid and nucleotide is determined by calculating the ratio of its amount to the corresponding taste threshold. A TAV value greater than one indicates a significant impact on the sample's taste (Zhu et al., 2023).
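A small Python sketch of the EUC and TAV computations defined above follows; the concentration values are hypothetical placeholders, while the relative-strength coefficients are those given in the text.

```python
def euc(umami_aa, nucleotides):
    # umami_aa / nucleotides: dicts of {name: (content g/100 g, rel. strength)}
    aa_term = sum(c * b for c, b in umami_aa.values())
    nt_term = sum(c * b for c, b in nucleotides.values())
    return aa_term + 1218 * aa_term * nt_term   # g MSG equivalent / 100 g

def tav(content, threshold):
    # TAV > 1 implies a significant contribution to taste
    return content / threshold

umami_aa = {"Asp": (0.05, 0.077), "Glu": (0.10, 1.0)}          # assumed contents
nucleotides = {"IMP": (0.02, 1.0), "GMP": (0.001, 2.3), "AMP": (0.03, 0.18)}
print(euc(umami_aa, nucleotides))
print(tav(content=0.10, threshold=0.03))
```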
Analysis of microstructure Microstructure analysis of red swamp crayfish meat adhered to the methodology delineated by Li et al. (2022a). After fixation in a 10 % formaldehyde solution for 24 h, samples underwent paraffin sectioning and were subjected to hematoxylin and eosin staining. Subsequently, imaging slides were prepared for each sample, and their microstructure was observed in a bright field under an inverted microscope (ECLIPSE Ti-S, Nikon, Tokyo, Japan).

Determination of free radical content The endogenous free radical content within red swamp crayfish (Procambarus clarkia) meat was quantified employing an A200 electron spin resonance (ESR) spectrometer, a state-of-the-art device from Bruker Corporation, Karlsruhe, Germany. This methodology aligns with the established protocol detailed by Li et al. (2020), ensuring a precise and accurate measurement of free radicals, which are pivotal reactive species in oxidative processes.

Assessment of thiobarbituric acid-reactive substances (TBARS) The TBARS assay, a standard procedure for monitoring lipid peroxidation, was conducted on the crayfish meat samples. The TBARS values, indicative of oxidative degradation, were measured following the technique reported by Li et al. (2020) and are expressed in terms of malondialdehyde (MDA) equivalents per kilogram of sample.

Quantification of carbonyl content The carbonyl content, a biochemical marker for protein oxidation, was determined using a protein carbonyl assay kit procured from Jiancheng Technology Co., Nanjing, China. The assay was performed in triplicate, strictly adhering to the guidelines provided by the manufacturer. This rigorous experimental design ensures the reproducibility and validity of the carbonyl content measurements, offering insights into the extent of protein oxidation in the crayfish samples.

Measurement of sulfhydryl group (SH) contents The quantification of sulfhydryl groups adhered to the methodology outlined by Li et al. (2022a). In brief, 1.5 mL of crayfish protein solution (5 mg/mL) was initially blended with 10 mL of Tris-glycine buffer (composed of 4 mM EDTA, 8 mM urea, 90 mM glycine, 86 mM Tris, and 0.5 mL of DTNB, with a pH of 8.0) or Tris-glycine buffer without urea (utilized for measuring free SH). Subsequent to the mixing process, the resultant solution was allowed to stand at 25 °C for 1 h and then centrifuged at 4 °C for 10 min. The absorbance of the supernatant was recorded at 412 nm (A1), with the buffer serving as a blank (A2). The total sulfhydryl content was then computed from the absorbance difference (A1 − A2) using the molar extinction coefficient of DTNB at 412 nm (13,600 M⁻¹ cm⁻¹).

Assessment of surface hydrophobicity Surface hydrophobicity analysis of red swamp crayfish protein was conducted following the methodology outlined by Wang et al. (2019). The determination involved the examination of the surface hydrophobicity of a protein solution (2 mg/mL) using 1 mg/mL bromophenol blue (BPB). As a control, a phosphate buffer without protein was employed. Following agitation at room temperature for a specified duration, all samples and the control were centrifuged at 2000 g for 15 min at 4 °C. Subsequently, the supernatant was separated and diluted 10 times with PBS, and the absorbance at 595 nm was measured. The hydrophobicity index, representing the bound BPB, was calculated as bound BPB (μg) = BPB added (μg) × (Ac − As)/Ac, where Ac and As denote the absorbance of the control and samples, respectively.

Fourier transform infrared (FTIR) spectroscopy analysis Crayfish protein samples were blended with dried KBr powder and compressed into thin slices. Spectra were acquired using an FTIR spectrometer (Perkin Elmer, Salem, MA) within a scanning range of 400-4000 cm⁻¹. The region spanning 1600-1700 cm⁻¹ was specifically selected to analyze the secondary structures of proteins, employing PeakFit v4.12 (SeaSolve Software Inc., USA).

Intrinsic fluorescence emission analysis The intrinsic fluorescence emission spectroscopy of the crayfish protein solution (1 mg/mL) was analyzed using a fluorescence spectrophotometer (Hitachi Corp., Tokyo, Japan). The excitation and emission wavelengths were set at 280 nm and 290-450 nm, respectively.

Simulated gastrointestinal digestion procedure Red swamp crayfish (Procambarus clarkia) samples underwent an in vitro gastrointestinal digestion process, following the protocol established by Wang et al.
(2023b). The method involved homogenizing 100 mg of sample with 10 mL of simulated salivary fluid (CZ0281, LEAGENE, China) and incubating at 37 °C for 10 min. Subsequently, 10 mL of simulated gastric fluid (CZ0211, LEAGENE, China) was introduced, and the sample was further incubated at 37 °C for 2 h. Post-gastric simulation, the pH of the mixture was adjusted to neutral (pH 7.0), and simulated intestinal fluid (CZ0201, LEAGENE, China) was incorporated, with a final 2-h incubation at 37 °C. The digestion process concluded with the inhibition of enzymatic activity through boiling for 5 min, followed by centrifugation at 10,000 g for 10 min to collect the supernatant for subsequent analysis.

Confocal laser scanning microscopy (CLSM) The microstructural changes post-gastrointestinal digestion were examined using confocal laser scanning microscopy (CLSM, SP8, Leica, Germany). A staining procedure with rhodamine B (0.1 % w/v in ethanol) was applied to enhance visualization of the digested sample.

Assessment of hydrolysis degree and soluble amino groups content The degree of hydrolysis (DH) of the supernatant from the gastrointestinal digestion system was determined through the reaction of o-phthalaldehyde (OPA) with amino groups, utilizing a standard curve generated with L-serine. Additionally, the content of soluble amino groups was quantified using the same OPA-based assay, following the method described by Duque-Estrada et al. (2019).
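The serine standard-curve step assumed by the OPA assay amounts to a linear calibration; the following sketch uses invented absorbance readings purely for illustration.

```python
import numpy as np

# Calibration: absorbance vs. L-serine concentration (assumed toy values)
std_conc = np.array([0.0, 0.2, 0.4, 0.6, 0.8])      # mmol/L L-serine
std_abs = np.array([0.02, 0.15, 0.29, 0.42, 0.55])  # absorbance readings

slope, intercept = np.polyfit(std_conc, std_abs, 1)  # linear least squares

def amino_groups(sample_abs):
    # invert the calibration line to get serine-equivalent amino groups
    return (sample_abs - intercept) / slope

print(amino_groups(0.33))  # mmol/L serine equivalents for a sample reading
```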
Particle size distribution analysis The particle size distribution of the supernatant from the gastrointestinal digestion system was measured using a Zetasizer (3000HSA, Malvern, UK), providing insights into the changes in particle dimensions post-digestion.

Statistical analysis The statistical evaluation of our data adhered to rigorous standards. Results from three independent and repeated parallel experiments are reported as mean values ± standard deviation unless explicitly stated otherwise. Significance analysis, set at a 5 % level (p < 0.05), was conducted using ANOVA along with Student's t-test, employing SPSS 16.0 statistical software (SPSS Inc., Chicago, IL, USA). The analysis was further complemented by OriginPro (version 2023, OriginLab Corporation, Northampton, MA, USA) for a comprehensive examination of the statistical outcomes.

Proximate analysis of fundamental quality attributes in red swamp crayfish meat A comprehensive evaluation of red swamp crayfish (Procambarus clarkia) meat quality is presented, focusing on aerobic plate count values, texture profile analysis, and basic nutrient composition under two distinct precooking methods: water boiling and microwave treatment. Fig. 1(A) and (B) illustrate the aerobic plate count results for crayfish precooked by water boiling and microwaving, respectively. A clear impact of water boiling for 5 min on microbial reduction is evident. The aerobic plate count after 5 min of microwave treatment was a mere 80 CFU/g, well below the 10⁵ CFU/g limit set by the GB 10136-2015 National Standard for Food Safety in Animal Aquatic Products. Both water boiling and microwave treatments maintained aerobic plate counts within acceptable limits, showcasing their efficacy in reducing bacterial presence in crayfish. The mechanism of heat sterilization involves subjecting polymer materials to high temperatures, inducing denaturation and achieving effective sterilization (Barnett et al., 2020). Cooking crayfish at 70 °C successfully eliminated a significant portion of pathogenic bacteria (Li et al., 2023a). The total aerobic plate count after 5 min of boiling or microwaving was significantly below the 500 CFU/g threshold defined by national standards, confirming the robust bacterial reduction achieved by both sterilization methods.

In the analysis of textural properties, as detailed in Fig. 1(C), crayfish precooked by water boiling exhibited significantly higher hardness and chewiness compared to those precooked by microwaving. This can be attributed to the heat-induced denaturation and aggregation of myofibrillar proteins, which enhances muscle fiber density and contributes to a firmer texture (Yu et al., 2021a; Li et al., 2023b; Zhang et al., 2023b). Despite variations in hardness and chewiness, springiness remained statistically invariant between the two precooked methods.

Upon scrutinizing the basic nutrient composition in Fig. 1(D), distinct variations between crayfish precooked by water boiling and microwaving are observed. The water boiling method resulted in a significantly higher moisture content (p < 0.05), highlighting the necessity for accurate microwave heating to conserve the moisture crucial for the crayfish's edibility. Interestingly, the slight moisture loss during microwaving is associated with improved palatability (Li et al., 2023a). Moreover, the water boiling group showed a significant increase in crude fat and glycogen levels compared to the microwaving group (p < 0.05), likely due to the concentration effect caused by moisture depletion during heating (Jiang et al., 2023).
Evaluation of nutrition and flavor compounds Changes in fatty acid and amino acid profiles The MUFA content in the water boiling group was significantly lower than that in the microwaving group (p < 0.05), while the water boiling group exhibited a heightened PUFA content. The susceptibility of PUFAs to decline is attributed to the prevalence of conjugated double bonds and their high degree of unsaturation (Li et al., 2023a). As PUFAs undergo oxidation and degradation under elevated temperatures and aqueous conditions, they give rise to small flavor-enhancing molecules such as alcohols, aldehydes, and ketones (Zhu et al., 2023). This dual effect results in a reduction in the nutritional value of crayfish meat but a simultaneous enhancement of its flavor. Notably, essential fatty acids, particularly the n-6 series linoleic acid (LA) and arachidonic acid (ARA), as well as the n-3 series eicosapentaenoic acid (EPA), are abundantly present. These essential fatty acids, crucial for preventing coronary heart disease and reducing blood lipids (Mukhametov et al., 2022), exhibit no significant difference in EPA content between crayfish treated by microwaving and water boiling (Fig. 2A). In summary, no substantial variance was observed in n-3 PUFAs and n-6 PUFAs between water boiling and microwaving (Fig. 2B).

Turning to amino acid analysis, Fig. 2(C) and (D) illustrate the comparable compositions in crayfish meat under water boiling and microwave precooking. Fig. S2 further supports this analysis by presenting the chromatograms of the amino acid compositions, offering a detailed visual representation of the data. In both precooked samples, a total of 18 amino acids were identified, encompassing a comprehensive range of nutritionally significant compounds. The essential amino acid (EAA) and total amino acid (TAA) contents showed negligible differences between the water boiling and microwave precooking methods, with statistical analysis indicating no significant variation (p > 0.05). Amino acids play pivotal roles in maintaining normal human metabolism, improving blood circulation, enhancing oxygen supply, regulating brain nerve cells, promoting brain health, inducing calmness, improving sleep, and regulating blood pressure (Li et al., 2022b). Lysine, the amino acid with the highest content, contributes to immune system strengthening (Huang et al., 2022). According to the ideal model standard proposed by WHO/FAO, the EAA/TAA ratio for both water boiling and microwaving exceeded 40 %, indicating the high nutritional value of processed crayfish protein. This underscores its role as a valuable source of essential amino acids, improving protein utilization. Regardless of the precooking method, crayfish meat demonstrates a diverse array of amino acids, preserving rich nutritional content.

Alterations in free amino acid and nucleotide content Exploring the intricacy of free amino acid and nucleotide content in crayfish meat, this section sheds light on the influence of the water boiling and microwaving methods. The summarized findings are detailed in Table 1.
Free amino acids, pivotal precursors to flavor, play a critical role in defining the distinctive flavor profile of crayfish (Zhu et al., 2019). The results underscore the detection of 18 types of free amino acids in both the water boiling and microwaving groups. The total content of free amino acids amounted to 18.300 ± 1.455 g/100 g for the water boiling group and 26.949 ± 0.007 g/100 g for the microwaving group. Substantial differences (p < 0.05) were evident in total free amino acids, umami free amino acids (Asp and Glu), and sweet free amino acids (Thr, Ser, Gly, Ala, Arg, and Pro) between the water boiling and microwave methods. The dynamic variations in free amino acid content, influenced by heating time and potential participation in the Maillard reaction, led to a decline in TFAA content during the later stages of cooking (Li et al., 2023a). Notably, the levels of Arg and Lys increased in both groups, with Arg contributing a bitter but pleasantly sweet taste, and Lys offering a sweet but unpleasantly bitter taste. Additionally, Ser, Thr, Lys, Pro, Ala, and Gly were identified as the predominant sweet amino acids, with Asp and Glu serving as the primary umami amino acids (Li et al., 2023a). These dynamic shifts in the flavor-related free amino acid contents collectively contribute to the overall enhancement of crayfish meat flavor.

Table 1 Comparative assessment of free amino acid and nucleotide content, taste threshold, and taste activity value (TAV) in red swamp crayfish (Procambarus clarkia) subjected to water boiling and microwave precooking methods. The acronyms TFAAs, UFAAs, and SFAAs denote total free amino acids, umami free amino acids (aspartic acid and glutamic acid), and sweet free amino acids (threonine, serine, glycine, alanine, arginine, and proline), respectively. Data not available for certain entries are indicated by '/'. The symbols '(+)' and '(−)' correspond to the subjective assessment of taste quality, where '(+)' signifies a pleasant taste and '(−)' denotes an unpleasant taste. The term EUC, or equivalent umami concentration, represents a calculated value reflecting the quantitative analysis of free amino acids and nucleotides that contribute to the umami taste. Statistical significance among mean values is denoted by letters in the legend, ascertained through one-way ANOVA and t-test with a significance level of p < 0.05.

ATP-related compounds undergo a transformation into flavor nucleotides, such as IMP and AMP, through thermal treatment. These nucleotides play a significant role in imparting specific taste attributes, with IMP and AMP being closely associated with sweetness and umami taste in aquatic products (Zhu et al., 2023). The lower AMP content in the water boiling group compared to the microwaving group suggests a high solubility of AMP in hot water, leading to increased loss (Liu et al., 2021). Conversely, the lower IMP content in the microwaving group may be attributed to decomposition induced by electromagnetic waves, leaving the water boiling group with a higher IMP content. The microwaving group exhibited higher GMP levels than the water boiling group, consistent with previous reports on flavor nucleotide changes in mussels upon shucking (Liu et al., 2021). Notably, AMP stood out as the predominant taste-active nucleotide in all groups, characterized by TAV values exceeding 1.
Calculating the equivalent umami concentration (EUC) values for crayfish under water boiling and microwaving using the monosodium glutamate equivalent formula (Table 1) revealed a superior taste quality imparted by microwaving to crayfish meat. The EUC values for crayfish meat samples in the water boiling and microwaving groups were 0.038 g MSG/100 g and 0.232 g MSG/100 g, respectively. These findings align with the conclusions of Zhu et al. (2023), who reported that the microwave method enhances the taste quality of Mytilus coruscus meat compared to water boiling.

Shifts in volatile compounds The savory essence of cooked crayfish meat arises from intricate reactions involving a myriad of flavor precursors, intermediates, and their resultant interaction products (Sohail et al., 2022). During the precooking process of crayfish, the interplay between lipid oxidation degradation and the Maillard reaction gives rise to a spectrum of volatile flavor compounds, including aldehydes, alcohols, ketones, and others. These compounds collectively contribute to the creation of a distinctive and nuanced flavor experience for consumers (Sohail et al., 2022). Flavor characteristics serve as pivotal indicators for assessing crayfish meat quality and consumer acceptability (Li et al., 2023a). A thorough evaluation of the volatile compounds was undertaken, employing heatmap analysis, classification, and principal component analysis (PCA). This comprehensive study was conducted on crayfish subjected to the water boiling and microwaving methods, as depicted in Fig. 2(E-G). Fig. S3 complements the primary analysis by displaying the chromatograms of the volatile compounds. These chromatograms offer a detailed view of the individual peaks corresponding to the various volatile compounds detected.
The findings uncover notable distinctions in the compositions and proportions of volatile flavor compounds between the microwaving and water boiling groups. Specifically, 17 volatile flavor compounds were detected in the water boiling group, including 4 alkanes, 5 alcohols, 4 aldehydes, 1 ketone, 2 ethers, and 1 aromatic. In contrast, 9 volatile flavor compounds were detected in the crayfish meat of the microwaving group, consisting of 3 alkanes, 2 alcohols, 2 aldehydes, 1 ketone, and 1 aromatic. Alcohols, aldehydes, and ketones constituted a significantly higher proportion (69 %) than the other detected volatile flavor compounds in both the water boiling and microwaving groups. This observation suggests the oxidation and degradation of polyunsaturated fatty acids (PUFAs), enhancing flavor but diminishing nutritional value. Consumer sensory thresholds toward alkane compounds are typically high, suggesting minimal impact on crayfish meat flavor. Unsaturated alcohols, including 2-octen-1-ol, 2-nonen-1-ol, and 2-decyn-1-ol, detected in the water boiling group, contribute significantly to crayfish meat flavor. Furthermore, both the water boiling (6.67 %) and microwaving (17.66 %) groups exhibited elevated concentrations of 1-pentanol, a derivative of the oxidation of linoleic acid. Aldehydes, primarily stemming from the oxidative degradation of PUFAs and the Strecker reaction involving amino acids, play a pivotal role in elevating the flavor profile of crayfish meat, owing to their low threshold values (Liang et al., 2022). In the water boiling group, hexanal, nonanal, octanal, and benzaldehyde exhibited high relative contents. Nonanal and decanal were the aldehydes detected in the microwaving group. Decanal and nonanal contribute orange and fresh scents, respectively, mainly derived from the oxidation of oleic acid and linoleic acid (Li et al., 2023a). Ketone compounds, with relatively high threshold values, exert minimal influence on crayfish meat flavor. Additionally, an aromatic compound was detected in both the water boiling and microwaving groups. For a deeper analysis, principal component analysis (PCA) diagrams were generated, displaying two-dimensional scatter plots (Fig. 2G). PCA explained 74.7 % of the total variance, with PC1 contributing 42.4 % and PC2 contributing 32.3 %. This comprehensive coverage of the variable information indicates clear distinctions in the results. Zhang et al. (2023a) found that employing a cooking method with reduced water content and prolonged heating duration, such as microwaving, can effectively facilitate the production of desirable tilapia aroma.

While the nutritional differences between microwaved and water-boiled crayfish were minimal, the microwaving method displayed enhanced taste and volatile flavor absorption compared to water boiling. The impact of processing conditions on protein structure is central to the proteins' flavor adsorption capability, as highlighted by Jiang et al. (2022). The volatilization of flavor compounds from the food matrix is governed by the nature of the compounds and their mass transfer resistance, a point emphasized by Mao et al. (2015). The proteins' structural conformation and surface hydrophobicity, which are critical for flavor adsorption, undergo significant alterations due to the specific processing conditions, as observed by Lv et al. (2017).

Changes in crayfish meat with protein and lipid oxidation Morphological changes in crayfish meat microstructure The microstructural analysis of crayfish meat post-treatment is presented in Fig.
3(A-D), revealing the impact of the cooking methods on muscle fiber arrangement. In the context of aquatic muscle-based foods, such microstructural modifications are reflective of underlying protein structural changes (Li et al., 2020). The water boiling method resulted in a compact muscle structure with uniform fiber bundle spacing, while microwaving widened the spacing between fibers.

The distinct microstructures are directly associated with the meat's textural properties, as evidenced by the lower hardness and chewiness in the microwaving group compared to the water boiling group, as shown in Fig. 1(C). The temperature response within the microwave field is complex, influenced by the crayfish's composition and shape, which leads to heterogeneous temperature profiles (Fan et al., 2021). The rapid increase in hotspot temperatures during microwaving plateaus as water evaporates, followed by stabilization through heat conduction (Fan et al., 2020).

Oxidative degradation of proteins and lipids in crayfish meat Electron spin resonance (ESR) spectroscopy, a pivotal analytical technique for detecting free radicals, provides a profound insight into the incipient stages of oxidative reactions. Free radicals, as intermediaries in a plethora of chemical processes, are instrumental in instigating protein and lipid oxidation. This study measured the total free radical content; such radicals can assail the α-carbon hydrogens of the protein backbone and side chains, thereby precipitating protein oxidation (Wang et al., 2023b). As depicted in Fig. 3(E), the microwave precooking of crayfish meat induced a slight elevation in radical signal intensities when compared to the water-boiled samples. This observation underscores the subtle yet significant impact of the different precooked methods on the generation of reactive species in the meat. The variation in free radical content and ESR spectral characteristics is intricately linked to the progression and accumulation of oxidative reactions, notably lipid oxidation and protein oxidation. Jiang et al. (2023) reported a marginal disparity in free radical content during thermal processing, highlighting the nuanced differences in oxidative dynamics.

The TBARS assay is a cardinal indicator of lipid oxidation, primarily quantifying aldehydes generated during the secondary oxidation of polyunsaturated fatty acids (PUFAs) (Jiang et al., 2023). Furthermore, it is instrumental in tracking the formation of flavor precursors. The TBARS values for microwaved crayfish were significantly elevated in comparison to those boiled in water (Fig. 3F). This increase in TBARS values was found to be highly correlated with the crude fat content (Fig. 1D) and the alterations in PUFA profiles (Fig. 2B) of red swamp crayfish that underwent the different precooked treatments. The higher crude fat content in microwaved crayfish meat, resulting from moisture loss, along with the thermally induced lipid oxidation, contributed to a decrease in PUFA levels. Additionally, the microwaving technique was more effective in enhancing the flavor and volatile profile of crayfish compared to the water boiling method, offering a superior gustatory experience.

Carbonyl content, a well-established indicator of protein oxidation within meat systems (Shi et al., 2020), revealed a significant discrepancy between microwaved and water-boiled crayfish proteins. The microwaved crayfish proteins had a considerably lower carbonyl content compared to those boiled in water, which is in direct contrast to the TBARS values presented in Fig.
Comparative evaluation of tertiary and secondary protein structures

The intrinsic fluorescence spectrum serves as a valuable indicator of changes in tryptophan residues within the tertiary conformation of crayfish protein. As depicted in Fig. 3(G), the maximum fluorescence intensity (FI) for crayfish protein under both water boiling and microwaving treatments occurred around 337 nm at an excitation wavelength of 290 nm (tryptophan fluorescence). This spectral shift suggests an increased polarity of the environment surrounding tryptophan residues due to protein unfolding. Notably, the maximum fluorescence intensity of crayfish protein under microwaving showed a significant reduction compared to water boiling, implying crayfish protein unfolding and exposure of fluorophores to a more polar environment, leading to fluorescence quenching (Huang et al., 2022).

An effective method for estimating protein denaturation involves assessing surface hydrophobicity through the binding of bromophenol blue (BPB) molecules. The exposure of hydrophobic amino acid residues is a consequence of changes in the chemical and physical states of proteins induced by heat treatment, as elucidated by Yu et al. (2021b). The bound BPB values, as illustrated in Fig. 3(H), were significantly higher in water-boiled crayfish (114.50 μg) than in those subjected to microwaving (95.76 μg). This indicates a stronger surface hydrophobicity in the proteins of water-boiled crayfish, which expose a larger number of hydrophobic amino acids capable of binding to BPB, compared with their microwaved counterparts (Wang et al., 2019). The reduced hydrophobic interactions resulting from microwaving may be instrumental in the increased liberation of flavor compounds, as evidenced in Fig. 2 and Table 1.

Sulfhydryl groups (−SH), primarily located in the head of myosin, exhibit sensitivity to reactive hydroxyl radicals (•OH) and can undergo oxidation, leading to the formation of intramolecular and intermolecular disulfide bonds (Zhu et al., 2022; Huang et al., 2022). As illustrated in Fig. 3(H), the water-boiled crayfish tail meat demonstrated a higher content of reactive sulfhydryl groups (4.81 μmol/g protein) than the microwaved crayfish (4.56 μmol/g protein), suggesting that water boiling was more effective at exposing buried sulfhydryl groups within amino acid chains.

Employing FTIR, the study investigated alterations in the secondary structure of crayfish protein under both water boiling and microwaving conditions, as depicted in Fig. 3(I) and (J). The frequencies within the amide I band components (1700-1600 cm⁻¹), extracted from FTIR spectroscopy, were employed to calculate the secondary structural elements of crayfish proteins. Notably, no significant differences were discerned between water boiling and microwaving in terms of α-helix (1650-1660 cm⁻¹), β-sheet (1600-1640 cm⁻¹), and β-turn (1660-1690 cm⁻¹) content. Similarly, there were no noticeable changes in random coils (1640-1650 cm⁻¹) (Li et al., 2023b). The substantial modifications in surface hydrophobicity and disulfide bonds (tertiary structure), coupled with minimal changes in the secondary structure of crayfish meat under microwaving and water boiling, suggest that microwaving preserves the spatial structure of proteins with less unfolding (Fan et al., 2021). This preservation could be attributed to the faster heating rate of microwaving, which leaves less time for protein denaturation (Wang et al., 2019; Dong et al., 2021).
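As a rough illustration of the band-assignment step above, the sketch below integrates a synthetic, baseline-corrected amide I spectrum over the stated wavenumber windows and reports each secondary-structure fraction. This is a simplified stand-in: real analyses typically apply second-derivative or Gaussian deconvolution to the band first, and the spectrum here is fabricated for demonstration.

```python
# Simplified amide I band-area estimate of secondary-structure fractions.
# The spectrum is synthetic; real workflows deconvolve the band first.
import numpy as np

wavenumber = np.linspace(1600, 1700, 501)  # amide I region, cm^-1

def gauss(center, width, height):
    """Gaussian component used to build a synthetic absorbance curve."""
    return height * np.exp(-((wavenumber - center) ** 2) / (2 * width ** 2))

absorbance = gauss(1655, 8, 1.0) + gauss(1628, 10, 0.8) + gauss(1672, 9, 0.5)

# Wavenumber windows taken from the assignments stated in the text.
windows = {
    "beta_sheet":  (1600, 1640),
    "random_coil": (1640, 1650),
    "alpha_helix": (1650, 1660),
    "beta_turn":   (1660, 1690),
}

areas = {}
for name, (lo, hi) in windows.items():
    mask = (wavenumber >= lo) & (wavenumber < hi)
    areas[name] = np.trapz(absorbance[mask], wavenumber[mask])

total = sum(areas.values())
for name, area in areas.items():
    print(f"{name}: {100 * area / total:.1f} %")
```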
In vitro digestive properties of crayfish protein

Precooking is defined by the attainment of an internal sample temperature of 80 °C, ensuring that the product is ready for immediate consumption or subsequent reheating. This process is pivotal in preparing food items for optimal taste and texture. The use of confocal laser scanning microscopy (CLSM) provides a unique lens through which to examine the morphological changes in crayfish protein during the digestive process following water boiling and microwaving. The CLSM images depicted in Fig. 4(A) and (B) reveal a notable reduction in the particle size of water-boiled crayfish protein, accompanied by a more uniform distribution post-gastrointestinal digestion. Upon hydrolysis by pepsin and trypsin, crayfish protein is degraded, transforming high molecular weight proteins into smaller peptides or amino acids. This biochemical cascade culminates in a diminution of the particle size of the resultant digesta.

Proteins, as macromolecular nutrients, are broken down through enzymatic hydrolysis and digestion in the body, resulting in the release of amino acids. The degree of hydrolysis (DH%) is a critical parameter that quantifies the level of proteolysis, reflecting the efficiency with which proteins are converted into protein hydrolysates (Zhang et al., 2023b). As illustrated in Fig. 4(C), the DH% values for the gastrointestinal digestive solutions of microwaved and water-boiled crayfish were 46.12 % and 48.68 %, respectively. Additionally, the pattern of soluble amino group alteration mirrored this trend, with water-boiled crayfish exhibiting a higher proportion post-digestion than their microwaved counterparts. These outcomes are closely correlated with the observations in Fig. 3, suggesting that variations in precooking methods could be instrumental in this disparity, as they induce structural modifications in proteins, thereby unmasking additional cleavage sites. The augmentation of surface hydrophobicity, consequent to these structural changes, facilitates enhanced protein unfolding, culminating in improved digestibility (Wijethunga et al., 2024). Fig. 4(D) corroborates these findings, demonstrating that the particle size of water-boiled crayfish protein post-gastrointestinal digestion was significantly smaller than that of microwaved crayfish, a finding consistent with the CLSM observations. The study conducted by Lu et al. (2023) revealed that microwaving had little effect on the digestion stability of Chinese mitten crab tropomyosin compared with ultrasound and high temperature-pressure treatments.

Correlative analysis of protein digestibility and physicochemical properties

The susceptibility of protein digestibility to the influences of oxidative and structural modifications is well established (Dong et al., 2021; Li et al., 2021; Wang et al., 2023b). In this study, Pearson correlation coefficients were employed to dissect the interplay between the digestive attributes of proteins and a spectrum of physicochemical indicators, including texture, nutritional content, volatile compounds, and oxidative markers. Fig. 5 utilizes a color-coded system to denote positive (red) and negative (blue) correlations, offering a visual synopsis of the relationships observed. The findings elucidate a significant negative linear correlation between the digestive properties and certain textural attributes, specifically elasticity, as well as with indicators of lipid oxidation and taste-related substances such as free amino acids (FAAs) and nucleotides. Conversely, a positive linear correlation was observed between the digestive properties and parameters associated with protein oxidation, including carbonyl content, reactive sulfhydryl groups, and surface hydrophobicity. Additionally, a positive correlation was noted with textural attributes like hardness and chewiness, as well as nutritional components such as fatty acids and amino acids. These correlations underscore the pivotal role of oxidation-induced protein structural alterations, in conjunction with nutritional factors, in augmenting the digestive properties of crayfish meat. In contrast, lipid oxidation and taste substances appear to exert a detrimental effect on digestibility, potentially hindering the digestive process in red swamp crayfish meat subjected to diverse treatment methods.
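A correlation matrix of this kind is straightforward to compute. The sketch below is a minimal, hypothetical version; the column names mimic the measured indicators, and the values are randomly generated placeholders rather than the study's dataset:

```python
# Pearson correlations between digestibility and physicochemical indicators.
# Hypothetical placeholder data; column names are assumptions.
import numpy as np
import pandas as pd
from scipy import stats

rng = np.random.default_rng(0)
n = 12  # e.g., replicate measurements across both treatments
df = pd.DataFrame({
    "dh_percent":     rng.normal(47, 2, n),
    "hardness":       rng.normal(500, 50, n),
    "elasticity":     rng.normal(0.8, 0.05, n),
    "tbars":          rng.normal(0.5, 0.1, n),
    "carbonyl":       rng.normal(2.0, 0.3, n),
    "hydrophobicity": rng.normal(100, 10, n),
})

corr = df.corr(method="pearson")  # the matrix behind a heatmap like Fig. 5
print(corr.round(2))

# Per-pair significance, e.g., digestibility vs. carbonyl content:
r, p = stats.pearsonr(df["dh_percent"], df["carbonyl"])
print(f"r = {r:.2f}, p = {p:.3f}")
```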
Conclusions

The oxidative analysis of precooked red swamp crayfish (Procambarus clarkii) meat reveals a nuanced relationship between cooking methods and meat quality. Microwave precooking, while beneficial for flavor retention, leads to moderate lipid oxidation and minimal protein oxidation, subtly altering the meat's lipid profile and potentially its flavor and aroma. In contrast, water boiling is more effective in preserving the meat's digestive properties and protein integrity, which are crucial for nutritional value and digestibility. The study highlights that precooking treatments significantly influence the nutritional and sensory profiles of crayfish meat, offering valuable insights for food scientists and industry professionals in optimizing cooking techniques. Future research should delve into protein structural dynamics after secondary cooking to better understand how these changes affect the meat's texture, flavor release, and consumer appeal. This focused approach will aid in developing food processing strategies that enhance both the nutritional content and sensory experience of crayfish meat, driving innovation and consumer satisfaction in the food industry.

Declaration of competing interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
Fig. 1. Comparative analysis of aerobic plate count values for red swamp crayfish (Procambarus clarkii) subjected to water boiling (A) and microwave precooking (B) methods. Evaluation of texture (C) and fundamental nutrient composition (D) under the two culinary approaches. Statistical significance is denoted by asterisks, with * indicating p < 0.05 for a 95 % confidence interval and ** indicating p < 0.01 for a 99 % confidence interval. These markers identify significant mean differences ascertained via one-way ANOVA and t-test.

Fig. 3. Comparative analysis of protein and lipid oxidation in red swamp crayfish (Procambarus clarkii) prepared by different precooking methods: microstructure post water boiling (A back transverse, C abdomen transverse); microstructure post microwaving (B back transverse, D abdomen transverse); electron spin resonance (ESR) signal (E); oxidation status of protein and lipids (F); fluorescence spectra (G); surface hydrophobicity and molecular force analysis (H); FTIR spectra (I); secondary structure distribution (J). Asterisks in the legend indicate statistically significant mean differences at *p < 0.05 and **p < 0.01, as determined by one-way ANOVA and t-test.

Fig. 4. In vitro examination of protein digestive properties in red swamp crayfish (Procambarus clarkii) following various precooking methods: confocal laser scanning microscopy (CLSM) images post gastrointestinal digestion of water-boiled crayfish (A); CLSM images post gastrointestinal digestion of microwaved crayfish (B); degree of hydrolysis (DH) and soluble amino groups from gastrointestinal digestion (C); particle size distribution (D). Statistical significance among means is indicated by asterisks at *p < 0.05 and **p < 0.01, as determined by one-way ANOVA and t-test.

Fig. 5. Correlation analysis between protein digestive properties and physicochemical indicators, encompassing texture, nutrition, volatile compounds, and oxidation. Pearson's correlation coefficients were calculated, with darker shades indicating stronger correlations. Positive correlations are illustrated in red and negative in blue, with significance denoted by *p < 0.05 and **p < 0.01.
2024-07-26T15:03:59.510Z
2024-07-01T00:00:00.000
{ "year": 2024, "sha1": "be719006d22b2c1a4a4de0ae4a45b0566c8e9c88", "oa_license": "CCBYNCND", "oa_url": "https://doi.org/10.1016/j.fochx.2024.101692", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "91f718ec03534fe1555b8df207d3b3d56186de60", "s2fieldsofstudy": [ "Agricultural and Food Sciences" ], "extfieldsofstudy": [] }
246436058
pes2o/s2orc
v3-fos-license
Lower Amounts of Daily and Prolonged Sitting Do Not Lower Free-Living Continuously Monitored Glucose Concentrations in Overweight and Obese Adults: A Randomised Crossover Study

This study compared the short-term continuously monitored glucose responses between higher and lower amounts of prolonged sitting in overweight and obese adults under free-living conditions. In a randomised crossover design, 12 participants (age 48 ± 10 years, body mass index 33.3 ± 5.5 kg/m2) completed two four-day experimental regimens while wearing a continuous glucose monitor, as follows: (1) uninterrupted sitting (participants were instructed to sit for ≥10 h/day and accrue ≥7, 1 h sitting bouts each day), and (2) interrupted sitting (participants were instructed to interrupt sitting every 30 min during ten of their waking hours, with 6-10 min of activity accrued in each hour). Linear mixed models compared outcomes between regimens. None of the continuously monitored glucose variables differed between regimens, e.g., 24 h net incremental area under the glucose curve was 5.9 [95% CI: −1.4, 13.1] and 5.6 [95% CI: −1.7, 12.8] mmol/L∙24 h, respectively (p = 0.47). Daily sitting (−58 min/day, p = 0.001) and sitting bouts lasting ≥30 min (−99 min/day, p < 0.001) were significantly lower, and stepping time significantly higher (+40 min/day, p < 0.001), in the interrupted sitting than the uninterrupted sitting regimen. In conclusion, lower amounts of daily and prolonged sitting did not improve free-living continuously measured glucose among overweight and obese adults.

Introduction

Overweight and obesity contribute significantly to insulin resistance and glucose intolerance [1,2]. Thus, interventions to aid with the management of glucose metabolism in these individuals are important to reduce the risk of insulin resistance and subsequent cardiometabolic disease [1,3]. Indeed, repeated elevations in glucose concentrations can lead to a plethora of metabolic disturbances that increase the risk of type 2 diabetes mellitus (T2DM) and cardiovascular disease, including insulin resistance, pancreatic β-cell deficiency, oxidative stress and endothelial dysfunction [4,5]. This is important because it is estimated that 463 million adults worldwide have T2DM [6]. Cardiovascular disease is estimated to cause 17.9 million deaths per year, making it the leading global cause of mortality [7].

High volumes of sedentary behaviour are associated with increased risk of T2DM and cardiovascular disease, which may be independent of moderate-to-vigorous physical activity [8,9]. This increased risk may operate via the detrimental associations that higher sedentary time has with cardiometabolic disease risk markers, such as glucose tolerance and insulin resistance [10,11]. The manner in which sedentary time is accumulated also has implications for cardiometabolic health. A single day of prolonged sitting led to a 39% reduction in insulin action compared with an active day of standing and ~10,000 steps in healthy adults [12]. An increased frequency of interruptions to sitting is also associated with improved fasting glucose and glucose tolerance [11]. Interrupting sitting in controlled laboratory environments with 2 to 5 min of standing, light-intensity walking or moderate-intensity walking every 20 to 30 min attenuates postprandial glucose responses over a single day in individuals who are healthy, overweight/obese and/or have impaired glucose tolerance [13-16]. Dunstan et al.
[15] reported that overweight or obese individuals with normal fasting glucose had similar reductions (24-30%) in postprandial glucose in response to interrupting sitting for 2 min every 20 min with light or moderate-intensity walking. It appears that the effects of interrupting sitting on glucose metabolism may be more pronounced in participants with adverse cardiometabolic health profiles and in individuals who are overweight/obese with impaired or apparently normal glucose levels [13,15,17]. Reducing prolonged sitting in individuals who are overweight and obese may thus be important for optimal cardiometabolic health.

The ability to achieve improvements in glucose via reductions in prolonged sitting, outside of laboratory settings, is not well understood. Continuous glucose monitoring offers a unique opportunity to examine glucose responses in free-living settings that may be caused by reducing and breaking up sitting time. Replacing 4.7 h/day of sitting in an imposed sedentary regimen with 2.5 h and 2.2 h of standing and light-intensity walking in a "sit less" regimen over four days resulted in a significant 36% lower 24 h glucose concentration in participants with T2DM [18]. Another study in participants with T2DM did not see any significant reduction in 24 h glucose concentrations when sitting was interrupted with four bouts of walking within the hour immediately after breakfast, lunch and dinner, compared to a day of normal habitual activity [19]. Yet, differences in sitting and activity between the conditions may have been insufficient to detect significant effects on glucose; indeed, participants interrupted sitting only in the hour after each meal, and the control condition was habitual activity as opposed to an imposed increase in sitting as in the study by Duvivier et al. [18]. In overweight adults with normal fasting glucose concentrations, four days of sitting for 7.6 h, standing for 4.0 h and light-intensity walking for 4.3 h per day led to significant improvements in insulin sensitivity and reductions in insulin during an oral glucose tolerance test compared with four days of sitting for 13.5 h, standing for 1.4 h and light-intensity walking for 0.7 h per day [20]. The effects of interrupting sitting on free-living continuous glucose concentrations, however, have not been evaluated in overweight and obese participants. This gap should be addressed to establish whether reducing and breaking up sitting can be used as an intervention strategy for reducing the risk of T2DM and cardiovascular disease in this 'at risk' population [21]. The primary aim of this study, therefore, was to compare continuously monitored glucose responses between higher and lower amounts of daily and prolonged sitting under free-living conditions among overweight and obese individuals. The secondary aim was to evaluate sitting, standing and stepping responses to the free-living experimental protocols.

Study Design and Overview

A within-subjects randomised crossover design was employed and reported following Consolidated Standards of Reporting Trials guidance [22]; see checklist in Supplementary Material Table S1. Following preliminary tests, participants took part in two, 4-day long activity regimens in free-living conditions, as follows: (1) uninterrupted sitting and (2) interrupted sitting. Participants were randomised to regimen order by the research team using a simple randomisation method via an online tool (www.randomizer.org) (accessed on 1 November 2016). There was a 72 h washout period between regimens to avoid potential insulin sensitivity carryover effects that have been observed ≥48 h after a single exercise session [23]. The study was reviewed and approved by the University of Bedfordshire Institute for Sport and Physical Activity Research Ethics Committee (no. 2017ISPAR004) and conformed to the Declaration of Helsinki. Participants provided written informed consent prior to any testing procedures.

Participants

Participants were 23-63 years old with overweight or obesity, as defined by a body mass index (BMI) of 25 to 45 kg/m2 [24] and a waist circumference of ≥102 cm for men and ≥88 cm for women [25]. Habitual sitting time needed to be at least 7 h/day and moderate-to-vigorous physical activity less than 150 min/week to be included in the study; these were screened using a domain-specific sitting time questionnaire [26] and the International Physical Activity Questionnaire [27], respectively. These criteria were used in anticipation that metabolic responses to manipulations in sitting would be greatest in individuals who sat more and were physically inactive [20,28]. Exclusion criteria were working night shifts, current or recent smoker, contraindications to physical activity, fitted with an artificial pacemaker, diabetes, a known blood-borne disease, recreational drug use, alcohol addiction, pregnancy, or taking glucose or lipid-lowering medication. Participants were recruited at the University of Bedfordshire and from the local community using adverts and word-of-mouth.

Sample Size

The primary outcome for this study was 24 h glucose net incremental area under the curve. To detect an effect size of d = 1.27 between conditions, at least 10 participants were required to achieve 90% power based on a two-tailed alpha of 0.05 and a within-person correlation of 0.5. The effect size for the calculation was informed by previous experimental research [13,15,18].
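As a rough cross-check of this calculation, the sketch below converts the between-condition effect size to a paired-differences effect size using the stated correlation and solves for the sample size. The exact conversion and software the authors used are not stated, so this is an approximation rather than a reproduction:

```python
# Approximate power calculation for the crossover design (assumed conversion).
from math import sqrt

from statsmodels.stats.power import TTestPower

d = 1.27   # anticipated between-condition effect size (from the paper)
r = 0.5    # assumed within-person correlation (from the paper)
d_z = d / sqrt(2 * (1 - r))  # effect size expressed on paired differences

n = TTestPower().solve_power(effect_size=d_z, alpha=0.05, power=0.90,
                             alternative="two-sided")
# The paper reports at least 10 participants; small differences here
# reflect the approximation and rounding conventions.
print(f"required n ≈ {n:.1f}")
```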
Preliminary Visit

A researcher met with participants at their workplace, a community location, or at the University of Bedfordshire Sport and Exercise Laboratories for preliminary measures. This always occurred on a Monday, 24 h before the commencement of the first experimental regimen. Height was measured to the nearest 0.1 cm and body mass to the nearest 0.1 kg. Bioelectrical impedance analysis was used to provide a valid estimate of body fat % (Tanita BC-418 Segmental Body Composition Analyzer; Tanita Corp., Tokyo, Japan) [29]. The midpoint between the lowest rib and the iliac crest was located to take a measure of waist circumference to the nearest 0.1 cm. A FreeStyle Libre (Abbott Diabetes Care, Ltd., Witney, UK) flash glucose monitor (FGM) was inserted at the midline of the upper arm for each participant and an activPAL3 activity monitor (PAL Technologies, Glasgow, UK) was attached to the thigh. Following this, participants were informed of the order of their experimental regimens and provided verbal and written guidance on how to complete the protocols.

Experimental Design

Figure 1 shows the experimental protocol. The first experimental regimen took place Tuesday to Friday. There was then a 72 h washout period and the second experimental regimen took place the following Tuesday to Friday. An activity log was provided so participants could record the amount of sitting, standing or physical activity each hour throughout each of the experimental regimens. This was provided in an attempt to help participants visualise their behaviour and thus encourage compliance with the protocols.
The two, 4-day regimens were based on previous experimental research in which reducing and breaking up sitting led to acute improvements in cardiometabolic health under free-living conditions [20,30]. They were as follows:

Uninterrupted Sitting Regimen

During this regimen, participants were instructed to (1) increase sitting as much as possible, (2) sit for ≥10 h/day, and (3) engage in no more than a combined total of 1.5 h/day of standing and stepping.
Participants were asked to sit continuously without any breaks for 7 of the 10 h in which they were asked to remain seated (i.e., they were required to accumulate 7 bouts of uninterrupted sitting that were each ≥1 h in duration), except for visiting the toilet. They were free to select when in the day they engaged in these 7 h of uninterrupted sitting. This was to encourage the accumulation of sitting in prolonged bouts. During the other 3 h, participants were instructed to interrupt their sitting no more than once per hour for a maximum of 15 min at a time. This allowed time to engage in activities of daily living, such as getting dressed, bathing, cleaning and cooking. They were also asked to travel by car or public transport wherever possible to limit activity accumulated when travelling.

Interrupted Sitting Regimen

For the interrupted sitting regimen, participants were asked to interrupt sitting bouts at least once every 30 min with standing or physical activity for a duration of 3-5 min each time. They were asked to interrupt sitting in at least 10 of their waking hours (i.e., at least 20 activity breaks per day) and accumulate a total of 6-10 min of interruptions in sitting in each of these hours. This gave participants flexibility as to which hours they interrupted their sitting during the day (e.g., a mix of at work and during leisure time). The frequency and duration of interruptions to sitting was advised based on previous controlled laboratory and free-living studies that observed improvements in postprandial glucose with similar protocols [13,17,18]. Participants were also instructed to engage in ≥1.5 h of standing and/or physical activity across each day. A range of smartphone and computer apps were suggested for alerts/reminders to interrupt sitting at least every 30 min. Demonstrations and written suggestions for activities to engage in when interrupting sitting were provided. These included standing, walking, simple resistance activities (e.g., knee lifts, half squats, calf raises, lunges), stair climbing and repeated sit-to-stand transitions. Breaking up sitting with these activities has resulted in significantly attenuated postprandial glucose concentrations over a single day in laboratory conditions [13,14,17]. To improve compliance and ecological validity, participants could perform an activity that they felt was best suited to the situation or environment they were in at the time.

Standardisation of Dietary Intake and Physical Activity

Participants were asked to avoid engaging in moderate-to-vigorous physical activity and to refrain from consuming alcohol for 48 h before their first experimental regimen and throughout the rest of the experimental protocol. A standardised instant pasta meal (464.0 ± 2.0 kcal, carbohydrate 80.7 ± 2.8 g, protein 18.2 ± 1.2 g, and 7.0 ± 1.1 g fat) was provided for participants to consume at the same time in the evening before each 4-day regimen (Figure 1). Participants used electronic scales (Salter Disc Electronic Kitchen Scale, HoMedics Group Ltd., Tonbridge, UK) to weigh all food and drink intake throughout the first 4-day regimen they took part in. They recorded the time and volume of dietary intake in a food diary and were asked to replicate this dietary intake exactly during the second 4-day regimen. Participants were instructed to consume at least three meals containing ≥50% carbohydrate (examples of such meals were provided to each participant) on each experimental regimen day, as well as any snacks they wanted, to encourage multiple glucose excursions throughout the day that were amenable to change via the experimental regimens.

Measurements

Sitting, Standing and Stepping

Sitting, standing, stepping and postural transitions were measured throughout the 11-day experimental period using the activPAL3 activity monitor, which provides valid measures for these outcomes [31,32]. The activPAL3 was worn on the anterior of the right thigh and was attached to the skin using an adhesive dressing (Hypafix Hypoallergenic Tape; BSN Medical Limited, Hull, UK). The device was waterproofed with a nitrile sleeve and a Hypafix dressing to enable continuous wear. To aid with processing of data from the activPAL, participants used a diary to record the time they woke up and got out of bed, when they were at work, went to bed and to sleep, and times that the device was removed [33]. Data was processed using an automated algorithm [34] executed within Stata (StataCorp LLC, College Station, TX, USA) to derive the following outcomes: sitting time, number and duration of short sitting bouts (0-30 min), number and duration of prolonged sitting bouts (≥30 min and ≥60 min), standing time, number of sit-upright transitions, stepping time, and number of steps. A valid wear day was accepted when wear time was >10 h, there were >500 steps recorded, and no more than 95% of the recorded data was in one activity category (i.e., sitting, standing or stepping) [34]. For inclusion in the analysis, at least one valid day of wear was required.
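The valid-day criteria just described are easy to express as a filter. The sketch below is a minimal, hypothetical implementation over a per-day summary table; the column names and values are assumptions for illustration, not the activPAL export format:

```python
# Valid-day filter for processed activPAL summaries (hypothetical columns).
import pandas as pd

days = pd.DataFrame({
    "wear_hours":  [15.2, 9.5, 14.1],
    "steps":       [4200, 380, 6100],
    "sit_hours":   [11.0, 9.2, 9.8],
    "stand_hours": [2.2, 0.2, 2.3],
    "step_hours":  [2.0, 0.1, 2.0],
})

# Share of recorded time in the single largest activity category.
activity = days[["sit_hours", "stand_hours", "step_hours"]]
max_share = activity.div(activity.sum(axis=1), axis=0).max(axis=1)

valid = (
    (days["wear_hours"] > 10)
    & (days["steps"] > 500)
    & (max_share <= 0.95)   # no more than 95% of data in one category
)
print(days[valid])
```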
Flash Glucose Monitoring

The FreeStyle Libre was used to measure flash glucose concentrations on a continuous basis throughout the study. During the preliminary visit, a FreeStyle Libre sensor was inserted subcutaneously at the midline of the back of the upper arm in line with manufacturer guidelines. The FreeStyle Libre provides valid measures of interstitial glucose compared to reference capillary blood glucose values from the YSI analyzer (Yellow Springs Instrument, Yellow Springs, OH, USA) and is consistently accurate over 14 days [35]. Participants thus wore the FGM sensor continuously throughout the 11-day experimental period. The device samples and stores interstitial glucose concentrations every 15 min. The data was transferred to a FreeStyle Libre reader and exported into Microsoft Excel at the end of the experimental protocol by a researcher. Data was processed using a custom R script. Days in which the device recorded data for <70% of the 24 h period were classified as invalid wear. A minimum of one valid day in each experimental regimen was required for inclusion in the analysis. The following glucose metrics were calculated across 24 h periods for each of the experimental regimens, starting with each participant's wake time on the first day of monitoring: (a) mean glucose concentrations, (b) total area under the curve (AUC), calculated using the trapezoidal method, (c) net incremental area under the curve (iAUC), calculated by subtracting the waking baseline glucose concentration for each day from the total AUC, and (d) glycaemic variability, i.e., glucose coefficient of variation (CV). Mean glucose concentrations, glucose total AUC and glucose iAUC were also calculated for waking hours only.
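The study's processing used a custom R script; a minimal Python sketch of the same 24 h metrics, assuming one day of 15-min interstitial glucose samples, might look as follows (the series here is synthetic, and the baseline adjustment shown is one common convention for net iAUC):

```python
# 24 h glucose metrics from 15-min flash-monitor samples (synthetic data).
import numpy as np

samples_per_day = 24 * 4                   # one reading every 15 min
hours = np.arange(samples_per_day) * 0.25  # time in hours from waking
rng = np.random.default_rng(1)
glucose = 5.5 + 1.2 * np.sin(hours / 3) + rng.normal(0, 0.3, samples_per_day)

mean_glucose = glucose.mean()                   # mmol/L
total_auc = np.trapz(glucose, hours)            # trapezoidal AUC, mmol/L*24 h
baseline = glucose[0]                           # waking baseline value
net_iauc = total_auc - baseline * hours[-1]     # baseline-adjusted AUC
cv = 100 * glucose.std(ddof=1) / mean_glucose   # glycaemic variability, %

print(f"mean={mean_glucose:.2f} mmol/L, total AUC={total_auc:.1f}, "
      f"net iAUC={net_iauc:.1f}, CV={cv:.1f}%")
```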
Statistical Analysis

All statistical analysis was completed using SPSS version 22 (IBM, Armonk, NY, USA). Q-Q plots were visually inspected to assess normality of the data prior to analysis. All variables were deemed to be normally distributed. The main effects of experimental regimen (uninterrupted sitting versus interrupted sitting) and experimental regimen day, and the regimen x day interaction, for the study outcomes were analysed using linear mixed models. Fixed factors in the model were experimental regimen, day, and covariates. Participant ID was initially entered as a random factor in each model, but this term was subsequently removed as this covariance parameter was redundant. For glucose AUC models, baseline glucose concentration (for each day) was entered as a covariate, while waking wear time was entered as a covariate in the sitting, standing and stepping models. Unless stated otherwise, data is presented as mean (95% confidence interval [CI]). The alpha level for statistical significance was p ≤ 0.05. Effect sizes (Cohen's d) were calculated to indicate the magnitude of difference for significant outcomes, with d < 0.2, 0.2-0.49, 0.5-0.79 and ≥0.8 considered trivial, small, medium and large effects, respectively [36]. A sketch of an analogous model specification in code appears after the study sample description below.

Study Sample

Recruitment of participants took place between November 2016 and April 2017. Participant flow throughout the study is shown in Figure 2. Following screening, there were 13 participants enrolled into the study. One participant withdrew prior to commencing the experimental protocol. Analysis was thus conducted for the 12 participants who completed the study; their descriptive characteristics can be seen in Table 1.
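The sketch below shows an analogous specification in Python (statsmodels), assuming a hypothetical long-format table with one row per participant-day; since the random participant term was reported as redundant and dropped, an ordinary least squares fit with the same fixed effects is shown. Column names and values are placeholders:

```python
# Fixed-effects model analogous to the SPSS analysis described above.
# Hypothetical long-format data: one row per participant-day.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
rows = []
for pid in range(12):
    for regimen in ["uninterrupted", "interrupted"]:
        for day in range(1, 5):
            rows.append({
                "participant": pid,
                "regimen": regimen,
                "day": day,
                "baseline": rng.normal(5.5, 0.4),  # daily baseline covariate
                "iauc": rng.normal(5.8, 3.0),      # outcome, mmol/L*24 h
            })
data = pd.DataFrame(rows)

# Regimen, day, their interaction, and the baseline covariate as fixed effects.
model = smf.ols("iauc ~ C(regimen) * C(day) + baseline", data=data).fit()
print(model.summary().tables[1])
```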
Dietary Intake

The mean carbohydrate, fat and protein intake that participants self-reported in their food diary during the experimental regimens was 294 ± 109 g, 86 ± 38 g and 86 ± 25 g/day, respectively. Carbohydrate, fat and protein intake accounted for 63%, 19% and 18%, respectively, of total dietary intake. Total energy intake was 9460 ± 2832 kJ/day.

Sitting, Standing and Stepping

ActivPAL data was unavailable for one participant on days 3 and 4 of the interrupted sitting regimen. All other monitoring days were valid for all other participants across both regimens. As shown in Table 2, daily sitting time was significantly lower in the interrupted sitting regimen than in uninterrupted sitting (d = 0.65). Time spent in prolonged ≥30 min sitting bouts was also significantly lower in the interrupted sitting than the uninterrupted sitting regimen (d = 1.29), as was time spent in prolonged ≥60 min sitting bouts (d = 1.99). Participants accumulated significantly more time in short sitting bouts in the interrupted sitting regimen than in uninterrupted sitting (d = 0.82). Although the number of short sitting bouts and the number of ≥30 min prolonged sitting bouts did not differ significantly between regimens, the number of ≥60 min prolonged sitting bouts was significantly lower in the interrupted sitting than the uninterrupted sitting regimen (d = 2.38). The number of sit-upright transitions per day did not differ significantly between regimens. The main effects of day (all p > 0.17; data not shown) and the regimen x day interaction effects (all p > 0.37) were not significant for any of the sitting or sit-upright transition variables (see Supplementary Material Table S2). Standing time was not significantly different between the uninterrupted sitting and interrupted sitting regimens. In the interrupted sitting regimen, participants spent significantly more time stepping than during the uninterrupted sitting regimen (d = 2.49). Correspondingly, the number of steps was also significantly higher in the interrupted sitting than the uninterrupted sitting regimen (d = 3.06). The main effects of day (all p > 0.32; data not shown) and the regimen x day interaction effects (all p > 0.32) were not significant for standing time, stepping time, or number of steps (see Supplementary Material Table S2).

Flash Glucose Monitoring

The FGM was active for an average of 97.1% of the wear period during the study. There were three missing days of data for one participant and one missing day for three participants; data from all participants therefore met the criteria for inclusion in the analysis. There was no significant difference in 24 h mean glucose concentrations, glucose total AUC, glucose iAUC or CV between the uninterrupted sitting and interrupted sitting regimens (see Table 3). There was no main effect of day for 24 h mean glucose concentrations, total AUC or iAUC (p = 0.65, 0.59 and 0.94, respectively; data not shown). There was a main effect of day for glucose CV (p = 0.02), which was significantly lower by 3.0% on day 4 than day 3 (p = 0.04; data not shown). There were no significant experimental regimen x day interaction effects for 24 h mean glucose concentrations, total AUC, iAUC or CV (p = 0.47, 0.71, 0.73 and 0.84, respectively; see Supplementary Material Table S3). During waking hours, there was no main effect of experimental regimen for mean glucose concentrations, glucose total AUC, glucose iAUC or CV; see Table 3.
The main effect of day was not significant during waking hours for mean glucose concentrations, glucose total AUC, or iAUC (p = 0.56, 0.72 and 0.95; data not shown). There was a main effect of day for glucose CV (p = 0.01), which was significantly lower by 3.9% on day 4 than day 1 (p < 0.01; data not shown). The regimen x day interaction effect was not significant for waking hours mean glucose concentrations, total AUC, iAUC or CV (p = 0.95, 0.79, 0.84 and 0.98, respectively; see Supplementary Material Table S3).

Discussion

The main finding in this study was that, in overweight and obese individuals, lower amounts of daily sitting and prolonged sitting did not improve continuously monitored glucose concentrations over four days when compared with prolonged sitting, when self-implemented under free-living conditions. Based on this finding, public health recommendations to reduce and interrupt sitting may not be effective in the short term for improving continuous glucose. These findings are in contrast to a similar study in individuals with T2DM, who experienced a 36% reduction in 24 h glucose iAUC when 4.7 h of sitting was replaced with standing and light-intensity walking over four days [18]. The discrepancies in our findings when compared with this previous research may be because there were more pronounced differences in sitting and activity between the two regimens in the study by Duvivier et al. [18]. Specifically, in the present study, sitting was 58 min/day lower and stepping 40 min/day higher in the interrupted sitting than the uninterrupted sitting regimen, with no difference in standing. In the Duvivier et al. [18] study, sitting was 282 min/day lower and stepping and standing higher by 132 min/day and 150 min/day, respectively, in the "sit less" than the "sitting" regimen. In another study that compared a control condition of normal habitual activity to interrupting sitting with walking every 30 min for the two hours after breakfast, lunch and dinner, free-living 24 h glucose concentrations were unaffected in participants with T2DM [19]. This may have been because participants did not significantly reduce their daily sitting time, although stepping time was significantly higher by 25 min/day in the condition of breaking up sitting [19]. In the context of these findings, a greater reduction in sitting than the 58 min/day reported in the present study may be required to achieve improvements in continuously monitored 24 h glucose concentrations.

The present study found that lower amounts of daily sitting and prolonged sitting did not affect continuously monitored glucose levels during waking hours, when participants would have been predominantly in a postprandial state. In contrast to this, Blankenship et al. [19] reported significant reductions in total AUC for glucose and a trend for lower mean glucose during waking hours in response to interrupting sitting compared with normal habitual activity. This is despite larger reductions in daily sitting and a higher duration of stepping time in the current study. It is thus plausible that participants living with T2DM may be more sensitive than overweight and obese participants with normal glucose control in terms of beneficial postprandial (i.e., waking hours) glucose responses to reducing and interrupting sitting. The findings of the present study should also be considered in the context of the metabolically healthy obese phenomenon, which suggests that not all overweight and obese individuals are metabolically impaired [37].
Greater reductions in sitting and increases in physical activity may thus be needed in this type of participant for free-living glucose benefits. Participants in the present study significantly reduced the time they spent in prolonged ≥30 and ≥60 min sitting bouts by 99 min/day and 63 min/day, respectively. Prolonged sitting time in the study by Blankenship et al. [19] was not significantly reduced in their breaks condition, while it was not reported in the study by Duvivier et al. [18]. Based on these findings, it appears that reducing prolonged sitting through activity interruptions may not impart beneficial effects on continuously monitored glucose under free-living conditions. Glucose benefits may thus only be realised if the number of sitting interruptions is increased or, as suggested earlier, participants are at the lower end of the metabolic health spectrum. The present study, however, suggests that it was challenging for participants to change the number of interruptions (i.e., sit-upright transitions) in sitting, as these were similar between regimens. This could be due to competing tasks at work or home, for example, that may dictate their sitting and activity-related behaviours. It is not possible to make comparisons to the study by Duvivier et al. [18] in this regard, as the number of sitting interruptions was not reported. Similar findings were seen in the study by Blankenship et al. [19], in which the number of interruptions in sitting during the breaks condition did not differ compared with a habitual activity condition. As research in this field is in its infancy, further studies that explore participants' ability to manipulate their sitting behaviour in free-living conditions, and the effects that this has on glucose responses, are needed to appropriately inform public health and clinical care guidelines.

The reason that reductions in total and prolonged sitting time did not improve glucose responses is not clear, especially as reducing prolonged sitting in laboratory-based studies has attenuated postprandial glucose responses in overweight and obese individuals [13,15]. Thus, the issue of compliance may be important when attempting to translate findings from controlled laboratory settings into free-living settings. Additionally, the intensity of the physical activity breaks was not controlled in the present study, which may have heightened the variability in individual glucose responses to the regimens, limiting the ability to detect significant effects when compared with previous laboratory work. It could also be postulated that, as opposed to reducing time spent in prolonged sitting bouts, interrupting sitting more regularly and reducing the number of prolonged sitting bouts, which were not achieved in the present study, are required. This may help to maintain permeability of muscle cells to glucose via contraction-mediated pathways and upregulation of genes that are involved in carbohydrate metabolism and translocation of the glucose transporter protein GLUT-4 [38,39]. Furthermore, although continuous glucose monitoring provides the opportunity to evaluate glucose responses to interrupting sitting under free-living conditions, these devices do not provide any indication of the effects that an intervention might have on insulin sensitivity, which may be more amenable to changes in sitting time.
Indeed, substituting total sitting time and the number of prolonged sitting bouts with corresponding increases in standing and light-intensity walking over four days under free-living conditions significantly improved insulin sensitivity during an oral glucose tolerance test that took place the morning after the experimental regimen ended in overweight and obese participants [20]. The present investigation did not assess the glucose tolerance or insulin resistance status of the sample. The participants may have been metabolically 'healthy' in the context of these measures [37], which may mean there was limited potential for improving glucose in response to interrupted sitting. Nonetheless, this study did demonstrate that it was possible to favourably manipulate overweight and obese participants' sitting, prolonged sitting and stepping, which, if repeated, could improve metabolic health in the longer term [40]. Future research should therefore consider whether reductions in daily and prolonged sitting could affect insulin responses under free-living conditions and investigate the long-term effects of such interventions.

The main strengths of this study include the evaluation of continuously monitored glucose concentrations under free-living conditions in response to manipulations in daily and prolonged sitting using a randomised crossover design, which enhanced ecological validity when compared with controlled laboratory studies. Furthermore, sitting, standing and stepping were continuously monitored throughout the experimental protocol, and dietary intake was standardised between the regimens, with energy intake data suggesting minimal under-reporting. Despite differences in daily and prolonged sitting time and stepping time, the number of sitting interruptions did not differ between the experimental regimens. It is thus not possible to determine the effects that interrupting sitting, combined with reduced daily and prolonged sitting and increased stepping, may have on continuously monitored glucose in this study. Further, it may be more feasible to ask participants to interrupt their sitting more often relative to their normal habitual activity, as opposed to asking them to limit their sitting interruptions in an imposed uninterrupted sitting regimen. As the participants in this study were sedentary and physically inactive, another limitation is the ability to generalise the findings to individuals who may sit less and meet physical activity guidelines.

Conclusions

The findings of this study suggest that although it is possible to manipulate sitting and stepping under free-living conditions, lower volumes of daily and prolonged sitting and increases in stepping do not improve continuous glucose concentrations over the short term in overweight and obese, but otherwise healthy, individuals. Future interventions should explore the longer-term effects of reducing and interrupting sitting on glucose metabolism in this population group.
2022-02-01T16:11:54.670Z
2022-01-30T00:00:00.000
{ "year": 2022, "sha1": "2e57796f648d8c97d538b0b789f0abb7f3a3e21b", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2072-6643/14/3/605/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "7a536dcfa875e2b1b759af4718b9e738ce278d1d", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
245793121
pes2o/s2orc
v3-fos-license
Analysis of Omnichannel Consumer Behavior: Purchase Intention on Omnichannel Restaurants in Indonesia

The advancement of the internet and emerging technology has altered the world of industry across sectors. New channels have also emerged, altering buying preferences and purchasing behaviour. The omnichannel approach is an innovative marketing strategy in which advanced technologies are used to integrate the various shopping channels, providing customers with a unified and comprehensive shopping experience. There is a disconnect between customers' omnichannel attitudes and businesses' ability to execute an omnichannel approach. The aim of this paper is to define and understand the factors that affect omnichannel consumers' actions during the shopping phase, specifically their adoption and usage of technologies. An initial model was created to explain omnichannel consumer behavior using the variables from UTAUT2 and other previous research on omnichannel consumer behavior. Structural equation modelling (SEM) was used to test the model on a sample of 495 customers of omnichannel restaurants. The findings suggest that the following factors significantly affect purchase intention in an omnichannel context: habit, facilitating conditions, personal innovativeness, and hedonic motivation.

Introduction

Recent technological advancements have allowed the digitization of numerous areas of industry. Starting in the retail industry and spreading to many others, digitalization has presented new opportunities and challenges to businesses across industries. One of these challenges is the increased complexity it creates in selling to customers. Consumer behavior has shifted dramatically as a result of advancements in mobile platforms and social networking, as well as the convergence of these emerging channels into online and offline retailing. Consumers are increasingly active in using alternate channels to buy products or services, request information, and ask about usage or availability (Neslin et al., 2006). When customers switch between different platforms, a multichannel approach, under which silo channels are planned and handled independently of one another, is rife with inconsistencies (Saghiri et al., 2017). In an omnichannel setting, channels are used concurrently throughout the search and purchasing period, rendering it very difficult for retailers to retain control (Verhoef et al., 2015). Omnichannel commerce may be the third generation of e-commerce, in which several networks are utilized consistently (Juaneda-Ayensa et al., 2016). A study by EConsultancy (2019) found that most online browsing by Indonesian consumers is done on a smartphone, while the purchase decision is made in different channels such as desktop or even offline. This behavior is a defining characteristic of omnichannel consumers. The report also found that in Indonesia, even though most marketers (90%) are aware of the need for real-time marketing through well-integrated marketing technology and data, only half (54%) have integrated platforms in their respective companies. Most of the marketers that have integrated platforms also stated that their platform's capabilities are still unsatisfactory (45%). A significant number of marketers in Indonesia also do not have an integrated platform for their marketing needs (29%).
This statistic shows a gap between the practice of omnichannel strategy in Indonesia and the behavior of Indonesian consumers, who are already omnichannel focused. Thus, an omnichannel strategy, in which the integration and synergic management of different channels is the focus, is urgently needed to address the increasing complexity of consumer behavior in Indonesia. This paper aims to advance theoretical awareness of the technological and behavioral antecedents of omnichannel consumers' product and service buying processes. We carried out this research in the restaurant industry context for two reasons: first, very few papers address omnichannel strategy in the restaurant context; second, the restaurant context is especially relevant in Indonesia, where prominent online food delivery services such as Go-Food and Grab-Food are among the main reasons consumer behavior in the restaurant industry is changing (Mordor Intelligence, 2019). Additionally, this article proposes a modern technology paradigm based on UTAUT2 (Venkatesh et al., 2012), which is expanded to include new dimensions (personal innovativeness, perceived security risk, and perceived compatibility) and is tailored to the omnichannel context. Our study has significant analytical and managerial ramifications, as identifying the drivers of omnichannel consumer behavior enables businesses to develop unique omnichannel management techniques to improve customer loyalty through an optimized shopping experience (Verhoef et al., 2015; Juaneda-Ayensa et al., 2016; Kazancoglu & Aydin, 2018).

Literature Review

Omnichannel retailing context

The concept of omnichannel retailing is seen as a progression from multichannel retailing (Table 1). Although multichannel implies a division between channels, consumers in an omnichannel setting should be able to seamlessly migrate between networks, whether online or offline, within a single purchase. A customer can search for products or services in the morning through their smartphone, compare prices in the afternoon through their PC, and purchase the product after work in the physical store. Consumers who shop omnichannel are a global phenomenon (Schlager and Maas, 2013), and they expect a frictionless and optimized purchasing experience along their purchase route (Juaneda-Ayensa et al., 2016). Due to the holistic management of networks, consumers communicate with the business rather than the site (Lazaris et al., 2015). An omnichannel strategy's primary characteristic is that it is customer-centric and places a premium on the interaction between channels and consumers (Verhoef et al., 2015). Thus, the omnichannel strategy broadens the channel selection while still taking into consideration customer-brand-retail channel interactions (Neslin et al., 2014). This is because users, regardless of how they interact, expect a consistent, seamless, and interactive experience. Omnishoppers no longer enter the channel; rather, they are constantly present in it, or in several channels at the same time, owing to technological advancements and increased versatility (Juaneda-Ayensa et al., 2016). These consumers are always on the lookout for opportunities to use their smartphones to conduct searches, evaluate brands, and find better options to maximize the advantages provided by each channel (Yurova et al., 2017). It is critical to initiate research into omnichannel consumer behavior (Neslin et al., 2014; Verhoef et al., 2015).

Consumer behavior in an omnichannel context

Previous research has shed light on what makes an omnichannel buying experience and the impact it has on behavioral intentions. One of the key drivers of an omnichannel strategy is technology, as the growing adoption of new innovations in retail has shifted customer expectations and desires (Schlager & Maas, 2013). Omnichannel consumers want a cohesive and optimized experience regardless of the medium they use; they switch effortlessly between networks, whether offline or online, based on their tastes (Piotrowicz & Cuthberson, 2014). Technology is critical in online retailing. As a result, recent research on omnichannel user behavior has focused on technology. Channel convergence is a distinguishing characteristic of the omnichannel shopping experience (Hure et al., 2017; Kazancoglu & Aydin, 2018; Shi et al., 2020). The integration of channels is the primary explanation for the sophistication of retail patterns (Hure et al., 2017). Additionally, omnichannel buyers feel they have a greater understanding of the purchase they have made than the salespeople do. They believe they have a greater degree of leverage over the sales encounter and are therefore more demanding (Rippe et al., 2015). Increased complexity of shopping behaviors, interaction between channels as a brand experience, and an expected, consistent, and seamless shopping experience are the main characteristics of the omnichannel experience (Hure et al., 2017).

Table 1. Multichannel versus omnichannel strategy: perceived interaction with the brand.

Customers. Multichannel: no possibility of triggering interaction; use channels in parallel. Omnichannel: can trigger full interaction; use channels simultaneously.

Retailers. Multichannel: no possibility of controlling integration of all channels. Omnichannel: control full integration of all channels.

Sales people. Multichannel: do not adapt selling behavior. Omnichannel: adapt selling behavior tailored to customer needs and knowledge.
Consumer behavior in an omnichannel context

Previous research has shed light on what makes an omnichannel buying experience and the impact it has on behavioral intentions. One of the key drivers of an omnichannel strategy is technology, as the growing adoption of new innovations in retail has shifted customer expectations and desires (Schlager & Maas, 2013). Omnichannel consumers want a cohesive and optimized experience regardless of the medium they use; they switch effortlessly between channels, whether offline or online, based on their tastes (Piotrowicz & Cuthberson, 2014). Technology is critical in online retailing. As a result, recent research on omnichannel user behavior has focused on technology. Channel convergence is a distinguishing characteristic of the omnichannel shopping experience (Hure et al., 2017; Kazancoglu & Aydin, 2018; Shi et al., 2020). The integration of channels is the primary explanation for the sophistication of retail patterns (Hure et al., 2017). Additionally, omnichannel buyers feel they have a greater understanding of the product they purchase than the salespeople do. They believe they have a greater degree of leverage over the sales encounter and are therefore more demanding (Rippe et al., 2015). Increased complexity of shopping behaviors, interaction between channels as a brand experience, and a consistent and seamless shopping experience are the main characteristics of the omnichannel experience (Hure et al., 2017). Shi et al. (2020) have also found that connectivity between channels, flexibility to switch between them, and a consistent experience are what make an omnichannel shopping experience.

Table 1. Multichannel versus omnichannel strategy (perceived interaction with the brand)
  Customers: Multichannel - no possibility of triggering interaction; channels used in parallel. Omnichannel - can trigger full interaction; channels used simultaneously.
  Retailers: Multichannel - no possibility of controlling integration of all channels. Omnichannel - control full integration of all channels.
  Sales people: Multichannel - do not adapt selling behavior. Omnichannel - adapt selling behavior tailored to customer needs and knowledge.

Despite an increase in research on information and communication technology (ICT) in multichannel settings, it is important to continue exploring omnichannel consumer conduct, as previous research has shown mixed results (Neslin et al., 2014; Verhoef et al., 2015; Juaneda-Ayensa et al., 2016; Hure et al., 2017). It is important to determine how consumers' views about technology influence their buying choices in novel situations (Escobar-Rodriguez & Carvajal-Trujilo, 2014).

Unified theory of technology acceptance and use in an omnichannel environment: Model and hypotheses

The approach of our study is based on Venkatesh and colleagues' extension of the Unified Theory of Acceptance and Use of Technology, the UTAUT2 model (Venkatesh et al., 2012). This model is used to define the variables that affect technology acceptance and use across a customer's omnichannel buying journey. Following a study of the literature, we chose UTAUT2 as the primary framework because it explains the acceptance and use of technologies (Venkatesh et al., 2012). Additionally, this model has been used in previous research on the omnichannel environment (Juaneda-Ayensa et al., 2016; Susanto et al., 2019). Furthermore, Kazancoglu and Aydin (2018) found that UTAUT2 is the most powerful predictor of omnichannel purchasing intention.
This theory informs our understanding of omnichannel consumers' attitudes toward technology and the ways in which those attitudes influence their purchase intentions in the shopping context (Juaneda-Ayensa et al., 2016; Susanto et al., 2019). According to UTAUT2, seven factors influence a consumer's choice to use information and communication technology: performance expectancy, effort expectancy, social influence, facilitating conditions, hedonic motivation, price value, and habit. Previously published studies on omnichannel consumer behavior omitted facilitating conditions and price value, owing to the belief that omnichannel is freely available (Juaneda-Ayensa et al., 2016; Susanto et al., 2019). However, a recent report determined that these two variables are important in the context of omnichannel retailing, necessitating their inclusion in this study (Kazancoglu & Aydin, 2018). As Venkatesh et al. (2012) indicate, UTAUT2's applicability should be checked across multiple domains and with additional variables, especially in the context of customer behavior. Following previous studies, we therefore added the variables personal innovativeness (Juaneda-Ayensa et al., 2016), perceived security risk (Kazancoglu & Aydin, 2018; Shi et al., 2020), and perceived compatibility (Shi et al., 2020) to deepen the model's explanatory power.

Performance expectancy refers to the degree to which several channels may be used simultaneously and/or technology assists customers when buying goods, in this case food and beverages (Venkatesh et al., 2012). It has been shown that performance expectancy is a strong indicator of behavioral intention (Juaneda-Ayensa et al., 2016; Susanto et al., 2019). As a consequence, we proposed the following hypothesis:

H1: Performance expectancy positively affects omnichannel purchase intention.

Effort expectancy refers to the degree of convenience with which shoppers interact with multiple touchpoints and platforms during the shopping journey (Juaneda-Ayensa et al., 2016). This factor is significant in both voluntary and mandatory usage contexts (Venkatesh et al., 2012), and it positively affects purchase intention. Thus, the following hypothesis was proposed:

H2: Effort expectancy positively affects omnichannel purchase intention.

Social influence is the extent to which consumers perceive that people who are important to them (family, close friends) believe they should use omnichannel as a shopping tool. This aspect has been interpreted as a direct predictor of behavioral intention in previous customer behavior research using the theory of reasoned action (TRA) and the theory of planned behavior (TPB), where it appears as the subjective norm (Ajzen, 1980). The subjective norm, or social influence, is largely characterized as the way individuals believe others would see them as a consequence of their usage of a particular technology (Venkatesh et al., 2012), and it has a positive effect on purchase intention. As a result, the following hypothesis is proposed:

H3: Social influence positively affects omnichannel purchase intention.

The word "habit" refers to the degree to which individuals perform an activity without thinking about it (Venkatesh et al., 2012). This construct was added to UTAUT2 as a new addition to the earlier UTAUT (Venkatesh et al., 2003) and is an indicator of technology usage in some previous studies.
When a person repeatedly engages in a particular behavior, the activity is likely to become automatic (Jasperson et al., 2005; Limayem et al., 2007). As a result, the following hypothesis has been advanced:

H4: Habit positively affects omnichannel purchase intention.

Though utilitarian motivation was included as part of the performance expectancy component in the UTAUT model (Venkatesh et al., 2003), hedonic motivation was included as a separate construct in UTAUT2. The UTAUT2 model (Venkatesh et al., 2012) incorporates hedonic motivation, which has been found to play a significant role in assessing technological adoption and use (Brown & Venkatesh, 2005). Shopping motivation may be either hedonic or utilitarian. Hedonic motives are characterized by adjectives such as pleasurable and enjoyable, whereas utilitarian motives are logical and task-oriented (Babin et al., 1994). Numerous studies conducted in the restaurant industry have discovered that individuals order food, especially via online channels, for the enjoyment and fun associated with the practice (Alagoz & Hekimoglu, 2012; Yeo et al., 2017; Alalwan, 2020). Additionally, hedonic motivation has been identified as a significant predictor of the use of online food delivery applications in Indonesia (Prabowo & Nugroho, 2019). As a result, the following hypothesis was advanced:

H5: Hedonic motivation positively affects omnichannel purchase intention.

Facilitating conditions are described as the services and assistance available to customers to engage in a behavior (Venkatesh et al., 2003). In the omnichannel sense, they can be described as the degree to which a channel supports omnichannel shopping, thus improving the integration and seamlessness of the shopping journey (Kazancoglu & Aydin, 2018; Shi et al., 2020). As omnichannel customers desire an optimized and streamlined shopping experience that allows for fast channel switching, channel-related products and services can provide customers with versatility and choices in a variety of different areas (Shi et al., 2020). For instance, payment mechanism choices should be as diverse as possible to accommodate complexity (Shi et al., 2020), because if there is a flaw in the payment structure, it may be argued that it impedes the shopping journey (Kazancoglu & Aydin, 2018). Another example is product availability, where if there is an issue (whether a connection issue, a mistake, or an inconsistency), it may be said to impede the shopping trip (Kazancoglu & Aydin, 2018). As a result, the following hypothesis was advanced:

H6: Facilitating conditions positively affect omnichannel purchase intention.

Price value should be seen as a critical aspect, as it is the distinguishing difference between a consumer and an organizational use setting: consumers bear the monetary cost of using a technology themselves. Price value can be described as the cognitive trade-off between the application's perceived advantages and the monetary expense of utilizing it (Venkatesh et al., 2012). It can be interpreted in the omnichannel sense as whether shopping through omnichannel provides monetary gain and value to the customer (Yeo et al., 2017). As a result, the following hypothesis was advanced:

H7: Price value positively affects omnichannel purchase intention.

External variables used in the UTAUT2 extension

Consumers have the opportunity to embrace or utilize emerging technologies or innovations as they come into touch with them. Previous research has shown that multichannel users have the tendency to discover and use innovative platform alternatives (Rogers, 1995; Konus et al., 2008).
Personal innovativeness is a measure of a person's willingness to experiment with new or different products and to seek out new experiences (Midgley & Dowling, 1978). Personal innovativeness is a predictor of ICT acceptance and purchase intention in previous studies (Citrin et al., 2000), and has also been found to be a predictor in an omnichannel sense (Juaneda-Ayensa et al., 2016; Susanto et al., 2019). As a result, the following hypothesis was advanced:

H8: Personal innovativeness positively affects omnichannel purchase intention.

The term "perceived compatibility" refers to the extent to which an experience is perceived to be consistent with the user's existing principles, interests, behaviors, and current and previous experiences (Aljabri & Sohail, 2012). When a buyer switches from one platform to another, their perceived compatibility with the new channel is critical in deciding their purchase intention (Amaro & Duarte, 2015). In the omnichannel sense, it is essential to test the compatibility of consumers' prior familiarity with particular shopping platforms (Shi et al., 2020). As a result, the following hypothesis is advanced:

H9: Perceived compatibility positively affects omnichannel purchase intention.

Perceived risk has been identified as a highly significant element in the omnichannel buying experience (Kazancoglu & Aydin, 2018). Perceived risk can be characterized as the consumer perception that omnichannel shopping is risky in terms of security (Shi et al., 2020). Additionally, Kazancoglu and Aydin (2018) discovered that customers view omnichannel as risky because of both the possibility of the system failing and financial risks (price inconsistencies). As a result, our research hypothesizes that if shoppers perceive fewer of these risks, they will see omnichannel shopping as more beneficial than detrimental. As such, the following hypothesis was proposed:

H10: Perceived risk negatively affects omnichannel purchase intention.

Research strategy

This study is a quantitative and conclusive study that aims to analyze the antecedents of technology adoption on omnichannel purchase intention. We designed an online questionnaire focused on omnichannel restaurant consumers, administered online through Instagram, Twitter, and Facebook, among other platforms. An omnichannel customer is defined for this study as a shopper who has made at least one transaction with an omnichannel restaurant through at least two distinct channels or touchpoints. In all, 495 respondents fit the criteria set by our definition of omnichannel consumers, having purchased through at least two channels in the eight months prior to the collection of the data (October 2020). To carry out our study, we defined omnichannel restaurants as restaurants that have both an online and an offline presence, integrated into one system. Secondly, the restaurants must have an online delivery platform (in this case, Go-Food and Grab-Food), because these are two of the most popular food delivery channels in Indonesia. We selected restaurants under the management of the ISMAYA Group and the Mitra Adiperkasa (MAP) Group in Indonesia for several reasons. First and foremost, the restaurants under both groups are very well known in Indonesia, such as Starbucks Coffee, Burger King, Domino's Pizza, The People's Café, Sushi Groove, etc.
Second, both ISMAYA and MAP have websites and apps that show information about the restaurants under them, allowing them to communicate information online directly to consumers. Third, both have membership programs that allow consumers to gain benefits from shopping at their restaurants. Both also have integrating features, such as the MAP Voucher (the ability to pay using this voucher in any of MAP's restaurants) and ISMAYA's points and rewards, which can be used in their restaurants. The questionnaire was divided into two parts. The first section asked respondents about their shopping experience at the omnichannel restaurant they visit most often (Table 2, Appendix I). Respondents first read the definition and some examples of the omnichannel shopping experience, and were then asked to score their degree of agreement with each item on a five-point Likert scale ranging from 1 (strongly disagree) to 5 (strongly agree). The second part of the questionnaire asked for socio-demographic information, such as age, gender, education, monthly income, and frequency of shopping in a month (Table 3).

Monthly income (IDR): less than 1,000,000 - 20%; 1,000,000-5,000,000 - 40.2%; 5,001,000-10,000,000 - 23.8%; 10,001,000-20,000,000 - 9.7%; more than 20,000,000 - 6.3%.
Frequency of shopping (monthly): 1 or 2 times - 37%; 3 to 5 times - 41.8%; 6 to 9 times - 11.5%; more than 10 times - 9.7%.

Due to the uniqueness of the application area, the measurement scales were translated and adapted to the Indonesian language, followed by a wording test with ten participants to ensure that no misspellings or misunderstandings existed in the measurements. To evaluate the results, we used IBM SPSS Statistics 25 to conduct exploratory factor analysis (EFA) on 30 respondents as a pre-test, and then used LISREL 8.54 to conduct covariance-based structural equation modeling (CB-SEM) of the latent variables. We performed a confirmatory factor analysis (CFA) and evaluated the measurements in the measurement model prior to evaluating the hypotheses in the structural model using CB-SEM. A measurement model is used to determine the scales' validity and reliability in this analysis. As such, the composite reliability (CR) and Cronbach's alpha values reflect the reliability of the scales used in this study. A construct is said to be reliable if its CR value is greater than 0.7 and its Cronbach's alpha value is greater than 0.6. The loading factors of the measurement model and the KMO test, both of which had a minimum value of 0.5, also confirm the constructs' validity (Hair et al., 2006).

Measurement model

We conducted a confirmatory factor analysis (CFA) on the items and made some adjustments. All items were checked against a minimum standardized loading factor of 0.5. The item PI3 had a value lower than 0.5; we thus decided to exclude the item from the model to improve the model's validity (Hair et al., 2006). It was also verified that all constructs had a CR value > 0.7 and a Cronbach's alpha > 0.6, which means that the reliability of the constructs was confirmed. The validity of the constructs was also confirmed, with all the constructs having a KMO test value above 0.5. Furthermore, all items had a loading factor value greater than 0.5, which makes the measurements valid (Table 4, Appendix II).
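To make these reliability criteria concrete, the sketch below shows how Cronbach's alpha and composite reliability can be computed, together with the one-tailed t-value rule used in the next subsection. This is our own minimal illustration in Python, not the authors' code (the study used SPSS 25 and LISREL 8.54); the item scores and loadings are hypothetical.

```python
# Minimal sketch (illustrative only; the study itself used SPSS and LISREL).
# Implements the thresholds quoted in the text: alpha > 0.6 and CR > 0.7.
import numpy as np

def cronbach_alpha(items):
    """items: (n_respondents, n_items) array of Likert scores for one construct."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

def composite_reliability(loadings):
    """loadings: standardized CFA loadings of one construct's items."""
    lam = np.asarray(loadings, dtype=float)
    error_variances = 1 - lam**2
    return lam.sum() ** 2 / (lam.sum() ** 2 + error_variances.sum())

def significant_one_tailed(t, positive_hypothesis=True):
    """One-tailed 5% significance rule applied to H1-H10 path coefficients."""
    return t > 1.645 if positive_hypothesis else t < -1.645

# Hypothetical example: a 4-item construct measured on 495 respondents.
rng = np.random.default_rng(0)
scores = rng.integers(1, 6, size=(495, 4))   # random scores, so alpha will be low
print(f"alpha = {cronbach_alpha(scores):.3f} (threshold 0.6)")
print(f"CR    = {composite_reliability([0.70, 0.80, 0.75, 0.65]):.3f} (threshold 0.7)")
print(significant_one_tailed(2.1), significant_one_tailed(-0.9, False))
```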
Structural model

In the structural model, a hypothesized relationship is considered significant when its t-value exceeds 1.645 (or falls below -1.645 for a hypothesized negative effect), as the hypotheses are one-tailed. CB-SEM was performed using LISREL 8.54. The structural model explains intention to purchase in the omnichannel context well, with an R² value of 78% (Table 5, Appendix III). This result validated the proposed model's predictive potential (Hair et al., 2006). The significance, sign, and magnitude of the path coefficients are shown in Table 5. Of the ten hypotheses, six were supported, with significant t-values. The supported hypotheses were, in order of magnitude: habit, facilitating conditions, personal innovativeness, hedonic motivation, performance expectancy, and social influence. The other four hypotheses (effort expectancy, price value, perceived compatibility, and perceived risk) were not supported, as their relationships were not significant.

Discussion

The increasing complexity of consumer shopping behavior in the digital era has given birth to omnichannel retailing (Hure et al., 2017; Shi et al., 2020). Omnichannel consumers are a global phenomenon (Schlager & Maas, 2013), and they expect a seamless and integrated experience in their shopping journey. The omnichannel strategy can be defined as customer management in which, throughout the customer relationship, the shopper interacts with a brand through different devices and channels (such as a physical store, online channel, mobile channel, or social media), so that all touchpoints of the brand must be integrated to provide a seamless and complete shopping experience (Juaneda-Ayensa et al., 2016). It is therefore important to investigate omnichannel consumer behavior, as there are still mixed results in previous studies (Neslin et al., 2014; Verhoef et al., 2015; Juaneda-Ayensa et al., 2016; Hure et al., 2017).

The best indicator of omnichannel purchase intention was found to be habit. This indicates that omnichannel consumers in Indonesia are very used to switching between different channels and shopping with them. This is in line with previous studies in the omnichannel context (Kazancoglu and Aydin, 2018; Sun et al., 2020). Companies must take note that the omnichannel strategy is especially important, as omnichannel consumers are everywhere and have already formed a habit. This factor will become more important in the coming years as more retailers adopt true omnichannel strategies (Juaneda-Ayensa et al., 2016). This study found facilitating conditions to be an important factor for the adoption of technology in the omnichannel context, following Kazancoglu and Aydin (2018). This shows that the facilitating conditions when consumers are using the channel are very important, such as product availability and a good connection to the internet. Technical problems, such as an error in the payment system or a mismatch in product availability, should be avoided so that consumers have a seamless and integrated shopping experience (Kazancoglu & Aydin, 2018; Park & Kim, 2020; Shi et al., 2020). Personal innovativeness is a predictor of omnichannel purchase intention. This confirms and strengthens the results of previous studies (Juaneda-Ayensa et al., 2016; Susanto et al., 2019). This result implies that individuals who are more innovative regarding ICT will have a stronger intention to purchase using omnichannel.
Omnichannel consumers seek out new technology, and in turn new touchpoints and channels, to experiment with it and try it among their friends and families. As such, companies should try to introduce new features, or re-introduce existing features of their channels and touchpoints, to attract this kind of customer. Following previous research, performance expectancy was found to be a significant factor in omnichannel purchase intention, which has been confirmed in much of the previous literature (Venkatesh et al., 2012; Verhoef et al., 2015; Juaneda-Ayensa et al., 2016; Kazancoglu & Aydin, 2018; Jo & Lee, 2019; Susanto et al., 2019). This also suggests that consumers will continue to buy in the omnichannel environment when they perceive usefulness in doing so. Contrary to previous findings, effort expectancy was not found to affect purchase intention. This could be attributable to the advance of ICT that has stabilized app implementations, to the degree that consumers perceive little difficulty in using them. Social influence was also found to affect omnichannel purchase intention. This follows previous theories such as TRA and TPB, in which normative factors affect behavioral intentions (Ajzen, 1991). Previous studies in the same context have also found similar results (Alagoz and Hekimoglu, 2012; Lee et al., 2019; Susanto et al., 2019). This implies that when consumers buy products, especially in the food and beverage industries, they are strongly affected by their friends and families. Companies should use this information to formulate strategies where it is easier or cheaper to buy in bulk in certain channels and touchpoints. Hedonic motivation was also found to affect purchase intention. This is in accordance with previous studies (Alagoz & Hekimoglu, 2012; Venkatesh et al., 2012; Juaneda-Ayensa et al., 2016; Susanto et al., 2019). This also means that there is pleasure, fun, and enjoyment to be found in shopping using omnichannel. This could be attributed to the shopping experience of omnichannel consumers themselves, who expect a seamless and holistic experience (Juaneda-Ayensa et al., 2016). This means that hedonic as well as utilitarian factors are part of the journey (Melero et al., 2015). Price value, perceived compatibility, and perceived risk were found not to affect purchase intention. In the case of price value, consumers do not perceive a price benefit because there are no differences in material benefits between using different channels. Perceived compatibility was also found not to significantly affect purchase intention. This means that consumers shop using omnichannel regardless of their beliefs, values, and shopping preferences (Shi et al., 2020). Finally, perceived risk did not influence omnichannel purchase intention. This means that consumers buy in the omnichannel context regardless of the need for security or the risks involved. This could also mean that the benefits of buying in an omnichannel context offset the risks perceived by consumers. Companies could introduce touchpoint scenarios where consumers perceive the need for security, in which retailers can use new technologies to manage consumers directly in the physical stores (Juaneda-Ayensa et al., 2016).

Conclusion

The primary goal of this study was to ascertain the factors that affect technology acceptance and use in the omnichannel context, as well as their impact on purchase intention among omnichannel consumers in Indonesia, using Venkatesh et al.'s UTAUT2 model (2012).
The study's findings indicate that the UTAUT2 model by itself is insufficient for predicting omnichannel shopping behavior and should be expanded to include additional variables. Following Kazancoglu and Aydin (2018), this study reintroduced two variables, facilitating conditions and price value, which earlier omnichannel applications of UTAUT2 had omitted. According to Kazancoglu and Aydin (2018), facilitating conditions are a critical factor in the adoption of technologies in an omnichannel setting. While previous research (Juaneda-Ayensa et al., 2016; Susanto et al., 2019) omitted the element, this study demonstrates that facilitating conditions should be investigated further as a factor influencing technology adoption and usage in an omnichannel setting. Additionally, this research makes a significant contribution by examining omnichannel shopping activity in the restaurant setting, an area where very few studies have been conducted, and none in Indonesia. As such, the context provided in this study could serve as a foundation for future studies on omnichannel behavior.

The findings also have practical consequences for Indonesian omnichannel managers, since it has been shown that shoppers have already developed an affinity for omnichannel shopping, and retailers must adopt the approach as quickly as possible. Omnichannel is one of the most effective management and marketing techniques for enhancing a critical aspect of their business, namely their consumers' streamlined and holistic shopping experience. Managers must be cognizant of the process by which such strategies are developed, since they are responsible for thoroughly defining and analyzing the technology that suits their market approach and how the technology is adopted by their consumers (Juaneda-Ayensa et al., 2016). Restaurants in particular, where factors such as personal innovativeness and hedonic motivation have a beneficial effect on purchase intention, must be diligent in incorporating novel and imaginative contact points and pursuing word-of-mouth marketing.

This study has certain limitations. First, our data were gathered in Indonesia and are limited to ISMAYA and MAP restaurant patrons, which may restrict the generalizability of the results. Second, the survey included an overwhelming majority of women (73.1 percent) and of respondents between the ages of 17 and 25 (67.7 percent). This is because the study used online questionnaires, which are more common among younger people, while older people are less comfortable with online surveys. Additionally, the use of online questionnaires can result in selection bias. As a result, future studies could use a variety of data collection techniques to minimize these biases. Additionally, our study suggests other avenues for future research, such as examining the emerging presence of technology in particular channels, such as offline or physical shop channels. Sociodemographic factors are not included in this study; future studies could examine the importance of such variables in supplementing the current model. This analysis aims to investigate the current omnichannel customer behavior phenomenon, as technology is driving the transformation of retailing's future. As such, an integrated and holistic shopping experience is critical to riding the current wave of e-commerce successfully.
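For readers who want to reproduce the modeling pipeline, the sketch below rewrites the hypothesized model H1-H10 in lavaan-style syntax for the open-source Python package semopy. This is our own illustration, not the authors' code (the study used SPSS 25 for EFA and LISREL 8.54 for CB-SEM); the item names and the responses.csv file are hypothetical placeholders for the questionnaire data.

```python
# Hypothetical re-specification of the H1-H10 structural model for semopy.
# Item names (pe1, ..., int3) and "responses.csv" are placeholders.
import pandas as pd
import semopy

MODEL = """
PE  =~ pe1 + pe2 + pe3
EE  =~ ee1 + ee2 + ee3
SI  =~ si1 + si2 + si3
HAB =~ hab1 + hab2 + hab3
HM  =~ hm1 + hm2 + hm3
FC  =~ fc1 + fc2 + fc3
PV  =~ pv1 + pv2 + pv3
PIN =~ pin1 + pin2
PC  =~ pc1 + pc2 + pc3
PR  =~ pr1 + pr2 + pr3
INT =~ int1 + int2 + int3
INT ~ PE + EE + SI + HAB + HM + FC + PV + PIN + PC + PR
"""
# Latent constructs: performance expectancy (PE), effort expectancy (EE),
# social influence (SI), habit (HAB), hedonic motivation (HM), facilitating
# conditions (FC), price value (PV), personal innovativeness (PIN, with item
# PI3 dropped as in the text), perceived compatibility (PC), perceived risk
# (PR), and omnichannel purchase intention (INT).

data = pd.read_csv("responses.csv")   # 495 rows of Likert item scores
model = semopy.Model(MODEL)
model.fit(data)
print(model.inspect())                # path estimates with test statistics
```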
What can gauge-gravity duality teach us about condensed matter physics?

I discuss the impact of gauge-gravity duality on our understanding of two classes of systems: conformal quantum matter and compressible quantum matter. The first conformal class includes systems, such as the boson Hubbard model in two spatial dimensions, which display quantum critical points described by conformal field theories. Questions associated with non-zero temperature dynamics and transport are difficult to answer using conventional field theoretic methods. I argue that many of these can be addressed systematically using gauge-gravity duality, and discuss the prospects for reliable computation of low frequency correlations. Compressible quantum matter is characterized by the smooth dependence of the charge density, associated with a global U(1) symmetry, upon a chemical potential. Familiar examples are solids, superfluids, and Fermi liquids, but there are more exotic possibilities involving deconfined phases of gauge fields in the presence of Fermi surfaces. I survey the compressible systems studied using gauge-gravity duality, and discuss their relationship to the condensed matter classification of such states. The gravity methods offer hope of a deeper understanding of exotic and strongly-coupled compressible quantum states.

I. INTRODUCTION

One of the remarkable developments to emerge from research in string theory in the past decade is the idea of gauge-gravity duality [1]. This is an equivalence between a quantum field theory in D spacetime dimensions, and a quantum theory of gravity in D + 1 spacetime dimensions. The D-dimensional theory does not have a gravitational force, and is to be viewed as a 'hologram' of the (D + 1)-dimensional theory. This is a true equivalence between theories, and not a projection to a restricted portion of Hilbert space: the number of degrees of freedom of the D-dimensional theory equal those of the (D + 1)-dimensional theory with gravity. It is thus in the spirit of early ideas of Bekenstein and others [2-5] that in theories of quantum gravity, the entropy of a region of spacetime is proportional to its spatial surface area (and not its spatial volume). A powerful feature of the duality is that it is also a strong-weak coupling duality: it maps a strongly-coupled theory in D dimensions to a weakly-coupled theory in D + 1 dimensions, and vice versa. It has therefore, justifiably, raised hopes for understanding new classes of strongly coupled quantum field theories in D dimensions. This hope extends to researchers, like condensed matter physicists, who had no previous interest in theories of quantum gravity.

The initial examples [1] of gauge-gravity duality appeared in highly supersymmetric gauge theories, e.g. the SU(N) Yang-Mills gauge theory with maximal supersymmetry (N = 4) in D = 4. This is dual to a string theory on a 5-dimensional space with constant negative curvature: anti-de Sitter space, abbreviated AdS5. In the low energy limit, the string theory reduces to a gravity theory on AdS5, also with maximal supersymmetry. Clearly, neither of these models are of any specific interest to condensed matter physics. However, since then it has become clear that this is but the simplest example of a general class of dualities between conformally invariant field theories (CFTs) in D dimensions and theories of gravity in AdS_{D+1}. Strong evidence for such a duality has also appeared for CFTs without any supersymmetry [6,7].
As we will review in Section II A, CFTs appear at quantum phase transitions of generic models relevant for experiments in condensed matter and ultracold atomic gases, and some of these are directly in the classes for which gauge-gravity duality has been studied.

It is perhaps useful here to make an analogy with earlier developments in the solution of strongly interacting many body systems. Starting with the work of Bethe in 1931 [8], a wide class of solvable quantum many body systems were discovered in one spatial dimension (D = 2). These models are 'integrable' in that they have an infinite number of conservation laws, and explicit wavefunctions can be written down for all eigenstates using the Bethe ansatz. However, integrability invariably required fine-tuning of the structure of the Hamiltonian, and no one expected that integrable systems would be directly applicable to any experiments. Nevertheless, the structure of the integrable models and their excitations taught us a great deal about quantum many body physics in D = 2, and these insights were crucial in the development of the far more general effective field theory of the 'Tomonaga-Luttinger' liquid [9]. The latter theory applies to generic interacting systems in D = 2, and has found numerous experimental applications. The hope is that a similar success will eventually be achieved using gauge-gravity duality, for strongly interacting quantum systems in D > 2. We have the analog of the Bethe ansatz: a rapidly-increasing class of special models with known gravity duals. Much research [10-18] is now directed towards generalizing the insights obtained from these solvable models to a general effective field theory approach to strongly interacting quantum systems using holography.

I will review applications of gauge-gravity duality to two broad classes of condensed matter systems, which I denote "conformal quantum matter" and "compressible quantum matter". In the first "conformal" class I include models with quantum critical points described by CFTs. Here the relevance of the holographic approach is evident, as the equivalence to dual gravity models has been explicitly demonstrated. Much has been learnt about such CFTs using traditional field-theoretic methods (such as the expansion of Wilson and Fisher [19]). However, there are key questions about their long-time correlations at nonzero temperatures (T) which cannot be described in a controlled manner by these methods. Numerical studies also fail for such questions because of difficulties associated with the "sign" problem: quantum simulations require summing over intermediate states whose weights are not positive real numbers. Remarkably, holography does yield useful results for such long-time correlations, including the specific numerical values of transport co-efficients and damping rates: these results will be discussed in Section II.

The second class of "compressible quantum matter" will be defined more precisely in Section III. But as the name indicates, these are systems which have a non-zero and finite compressibility at T = 0. Familiar examples of compressible states are Fermi liquids, solids, and superfluids. However, much modern condensed matter research has focused on the search for new types of compressible states, motivated primarily by the "strange metal" behavior of numerous correlated electron systems [20].
Current studies using holography have produced many examples of compressible quantum matter, although their place in the traditional condensed matter classification has not yet been fully established. These issues are the focus of active research, as I will describe in Section III.

I close this section by noting that the above classification implicitly assumes D > 2. The case D = 2 is special: Tomonaga-Luttinger liquids are both conformal and compressible, and this is partly responsible for their simplicity. I will implicitly assume D > 2 in the remainder of the paper, where the two classes are distinct and far more complicated, and have properties quite different from Tomonaga-Luttinger liquids.

II. CONFORMAL QUANTUM MATTER

A. Condensed matter analysis

Let us begin by describing the simplest model which realizes a CFT, realizable in a laboratory [21,22]. Consider spinless bosons hopping on a lattice of sites, i, with short-range repulsive interactions. With a boson annihilation operator b_i, such physics is usefully captured by the Hubbard Hamiltonian

H_b = -w ∑_⟨ij⟩ ( b_i† b_j + b_j† b_i ) + (U/2) ∑_i n_i (n_i - 1),   (2.1)

where b_i is the canonical boson annihilation operator, n_i = b_i† b_i is the boson number operator, w is the hopping matrix element between nearest-neighbor sites, and U is the on-site repulsive energy between a pair of bosons. Let us assume that the average boson density is exactly 1 per site, and examine the ground state as a function of the dimensionless parameter U/w.

For large U/w, the boson repulsion dominates and so the bosons avoid each other by localizing on separate sites. This leads to the insulating state shown in Fig. 1: any motion of bosons requires placing at least two on a site, and this is suppressed by the repulsive energy. In the opposite limit of small U/w, we can treat the system as a nearly free Bose gas, and this undergoes Bose-Einstein condensation to the superfluid state also illustrated in Fig. 1. In this state, a snapshot of the wavefunction shows large number fluctuations on each lattice site, and so a supercurrent is able to flow without dissipation.

The distinct ground states in the limits of small and large U/w imply that they cannot be smoothly connected: they are separated by a quantum phase transition at an intermediate critical point. More generally we denote parameters like U/w by a generic coupling g, and the quantum critical point appears at g = g_c. The nature of the ground state near and at g = g_c is well understood, and a pedagogical description appears in the Supplementary Material. For our purposes, the most important property is that the ground state and its low energy excitations are efficiently described by a universal quantum field theory [23]. By 'universal' we mean that the same field theory applies for a large class of models of the superfluid-insulator transition, and not just the Hubbard model in Eq. (2.1). The field theory is expressed in terms of a complex field ψ(r, t), which is just the continuum limit of the boson annihilation operator b_i, with r the (D - 1)-dimensional spatial co-ordinate, and t the time. The action for the field theory is (see Supplementary Material)

S_b = ∫ d^{D-1}r dt [ |∂_t ψ|² - v² |∇_r ψ|² - (g - g_c) |ψ|² - (u/2) |ψ|⁴ ];   (2.2)

note that its couplings change as a function of the coupling g. This low energy quantum field theory already has an 'emergent' symmetry not shared by the Hubbard model: it is invariant under Lorentz transformations, with the velocity v playing the role of the velocity of light. Here v is a sound velocity, and its value is determined by w, U, and the lattice spacing.
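To make the U/w crossover concrete, here is a minimal numerical sketch (my own illustration, not from the paper): exact diagonalization of Eq. (2.1) on just two sites at unit filling. The on-site number fluctuations are large in the superfluid-like regime and are suppressed as U/w grows toward the insulator.

```python
# Minimal sketch: two-site Bose-Hubbard model at unit filling,
# H = -w (b1† b2 + b2† b1) + (U/2) Σ_i n_i(n_i - 1), showing the crossover
# from large to small on-site number fluctuations as U/w grows.
import numpy as np
from itertools import product

N_SITES, N_BOSONS = 2, 2
basis = [s for s in product(range(N_BOSONS + 1), repeat=N_SITES)
         if sum(s) == N_BOSONS]          # Fock states (n1, n2) with n1+n2 = 2
index = {s: i for i, s in enumerate(basis)}

def hamiltonian(w, U):
    dim = len(basis)
    H = np.zeros((dim, dim))
    for s in basis:
        i = index[s]
        H[i, i] = 0.5 * U * sum(n * (n - 1) for n in s)   # on-site repulsion
        for a, b in [(0, 1), (1, 0)]:                     # hopping b_a† b_b
            if s[b] > 0:
                t = list(s); t[b] -= 1; t[a] += 1
                H[index[tuple(t)], i] += -w * np.sqrt(s[b] * (s[a] + 1))
    return H

for U_over_w in [0.1, 1.0, 10.0, 100.0]:
    evals, evecs = np.linalg.eigh(hamiltonian(1.0, U_over_w))
    gs = evecs[:, 0]                                      # ground state
    n1 = np.array([s[0] for s in basis])
    var = gs @ (n1**2 * gs) - (gs @ (n1 * gs))**2         # site-1 fluctuations
    print(f"U/w = {U_over_w:6.1f}:  <dn^2> = {var:.3f}")  # ~0.5 -> ~0
```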
The symmetry of the theory becomes much larger at the quantum critical point at g = g_c. Here, as may be familiar to some readers from the theory of second order phase transitions, the theory is scale invariant. Specifically, the structure of the quantum correlations remains invariant under the rescaling transformation

r → r/b,  t → t/b,   (2.3)

where b is the rescaling factor. Formally, this scale invariance arises because the quantum critical point is realized as a fixed point of the renormalization group transformation.

Actually, we are not done with the set of emergent symmetries. Quantum field theories which are Lorentz invariant, and which obey certain 'hyperscaling' properties, are also invariant under conformal transformations of spacetime. These are transformations which preserve the Lorentzian metric of spacetime up to an overall factor, and so preserve all angles. Specifically, the Lorentzian metric rescales under conformal transformations as

ds² → ds²/b²,   (2.4)

where the rescaling factor b can now be a certain set of smooth functions of spacetime; the rescaling transformation (2.3) is a special case of a conformal transformation. The critical point of S_b, describing the superfluid-insulator quantum phase transition, is invariant under such conformal transformations, and is so a CFT. We will restrict further attention to the case of 2 spatial dimensions, with D = 3, when this CFT is strongly coupled. The CFTs for D ≥ 4 are essentially free field theories, and we don't need sophisticated methods to analyze them.

As we noted earlier, many properties of the strongly coupled CFT at T = 0 are well understood. The main tool is the renormalization group, and its realization in the context of various analytic and numerical expansions; these are well-established methods which I will not dwell on here. However, all of these methods fail for certain key questions on the dynamics at T > 0. To describe these questions, we need the phase diagram of the model at T > 0 [24], shown in Fig. 2. There is much interesting physics associated with the many distinct features of this phase diagram, but for now the reader is asked to focus on the distinction between the blue- and pink-shaded regions. In the blue-shaded regions, the physics can be described in terms of the familiar excitations of either the insulator or the superfluid, which are illustrated in Fig. 3. For the insulator, the excitations are particle or hole excitations above the background of the insulator with one particle per site. In contrast, for the superfluid, the excitations are point-like vortices in the background of the Bose condensate. A semiclassical theory of gases of such excitations provides an essentially complete description of the long-time correlations in the blue-shaded regions.

Let us now turn to the pink-shaded region of Fig. 2, separated from the blue-shaded regions by crossovers indicated by the dashed lines; the reader is referred to another review article [20] for a detailed discussion of the location and shape of these crossover lines. The defining characteristic of the pink-shaded 'quantum critical' region is that its dynamics is controlled by the CFT and its excitations. These excitations do not have a particle-like interpretation, they are not amenable to an effective classical description, and they interact strongly (and universally) with each other. We are now faced with the challenge of describing the long-time dynamics of this strongly interacting quantum critical regime.
The traditional renormalization group methods have been applied to the problem of quantum critical dynamics [25,26]. While a great deal of insight, and some analytic results, have been obtained, the results are not expected to be qualitatively accurate. We will shortly turn to a description of the new holographic methods, and describe their promise in eventually solving this challenging problem.

It is helpful to focus on a specific observable characterizing the quantum critical dynamics in the pink-shaded region of Fig. 2. Let us choose the frequency-dependent conductivity σ(ω): we endow the bosons with a (possibly fictitious) charge Q, and examine the current response in the presence of an 'electric' field coupling to the Q charge, which is oscillating at a frequency ω. General arguments based on the properties of the CFT imply that [26]

σ(ω) = (Q²/ℏ) Σ( ℏω / k_B T ),   (2.5)

where ℏ is Planck's constant/(2π), k_B is Boltzmann's constant, and Σ is an unknown, dimensionless universal function of the dimensionless ratio of frequency to temperature. This structure follows from the fact that in D = 3 the conductivity is a dimensionless number when expressed in units of Q²/ℏ, and from the ability of the strong and universal interactions between the excitations of the CFT to relax the electrical current. Each CFT in D = 3 is characterized by its own function Σ.

We are now faced with the basic challenge: accurately determine the function Σ for the quantum critical point of the theory S_b in Eq. (2.2) describing the superfluid-insulator transition. As we will see below, even simple questions on the overall shape of this function remain unresolved. The traditional field-theoretic methods (which are based on the renormalization group), as well as numerical studies, do work accurately in determining the high frequency limit ℏω ≫ k_B T. Here, the correlations are essentially unchanged from the ground state values, and so are amenable to a controlled analysis. The value of the number Σ∞ = Σ(ω → ∞) has been so determined [27-29], and there are no obstacles (in principle) to refining this work to even more precise values.

How about determining the opposing limit [30], the value of the number Σ0 = Σ(ω → 0)? This is a number characterizing the long-time response at a non-zero temperature. So we have to extrapolate from our knowledge of the short-time dynamics to the long-time limit. For systems with excitations which are weakly interacting particles, such extrapolations have traditionally been carried out by the venerable Boltzmann equation and its many descendants. Such methods are evidently not directly applicable to the non-particle-like excitations of a CFT, which also have strong interactions. Nevertheless, let us forge ahead and see what the Boltzmann approach teaches us about the qualitative shape of the function Σ.

Examining Fig. 3, we are immediately faced with a choice: do we approach the critical point starting from the insulator or the superfluid? As we will now argue, a qualitatively different answer obtains from the two approaches. Let us begin by extrapolating the excitations of the insulator to the critical point. The insulator has particle and hole excitations, and there is no difficulty in writing down a Boltzmann equation for these excitations [26]. In the simplest picture, thermally excited quasiparticles undergo Brownian motion with mutual collisions, while drifting in the applied 'electric' field E. The average velocity of these particles, v, therefore obeys an equation like

m dv/dt = QE - m v/τ_c,   (2.6)
The average velocity of these particles, v, therefore obeys an equation like where τ c is the mean time between collisions of the particles. If we extrapolate this picture to the critical point (without much justification), we expect that this time would be determined by the only energy scale characterizing the CFT, which is k B T , and hence τ c ∼ /(k B T ). Solving Eq. (2.6, we then predict a "Drude" form for the frequency-dependent conductivity at low frequencies, with Combining the real part of Eq. (2.7) with our earlier considerations in the large ω limit, we can surmise a frequency dependent conductivity of the CFT with the qualitative form shown in Fig. 4 [26]. We reiterate that this form implicitly builds in the structure of the particle-like excitations of the insulator. However, in principle, it should be equally valid to approach the critical point from the superfluid side, using its vortex-like excitations as the basic degree of freedom. In two spatial FIG. 4: Expected form for the frequency-dependent conductivity of the CFT of the superfluidinsulator transition, using the Boltzmann picture applied to the particle excitations of the insulator [26]. dimensions, vortices can be specified by the location of their point-like center, and so can also be viewed as excitations which are 'particles'. We can, therefore, similarly write down a Boltzmann equation for the thermal and quantum dynamics of these vortex excitations. We will refrain from doing so here, but note a basic property we will need: the physical conductivity of the underlying boson model is equal to the resistivity of the vortex-like particles entering this Boltzmann treatment [31]. This can be deduced from the fact that a flow of vortices in a superfluid induces a voltage in the transverse direction. From this we expect that the conductivity of the quantum critical point is roughly the inverse of the previous prediction in Fig. 4; this result for the vortex picture of transport is shown in Fig. 5. We now have two qualitatively distinct predictions for the frequency-dependent function Σ, shown in Fig. 4 and 5. It is a fair statement that for the CFT of the superfluid-insulator insulator transition described by the boson Hubbard model, we do not know today which of these results, if either, is the correct one. The conventional and vector large N expansions [26,32] both have a built-in bias towards the excitations of the insulator, and so predict a form qualitatively similar to that in Fig. 4. However, we are unable to say if these results should be believed for the physical case of interest. It is remarkable that this basic property of one of the simplest quantum critical point in two spatial dimensions remains unresolved. B. Holographic analysis Let us now change gears, and discuss what gauge-gravity duality can teach us about the questions at the end of the last section. I will only be able to present a very short summary of the principles of gauge-gravity duality here; the reader is referred to the many other reviews in the literature [10][11][12][13][14][15][16][17][18]. The safest route is to begin with the solvable model, and then use physical arguments to generalize to a wider class of CFTs, including perhaps those of physical interest to us. The canonical solvable model in D = 3 spacetime dimensions is a highly supersymmetric CFT now known as the ABJM model [33]. In a particular 'large N ' limit, the low energy limit of this CFT is known to map onto a supersymmetric gravity theory in D = 4 spacetime dimensions. 
We shall be interested here in the nature of correlations of currents of globally conserved charges (analogous to the electrical current of the boson model above) in this theory. Choosing a particular conserved current of the ABJM theory (it doesn't matter which one), its current correlations are described by an especially simple and familiar theory of gravity and 'electromagnetism' in D = 4: the Einstein-Maxwell (EM) theory with a negative cosmological constant, with action [15]

S_EM = ∫ d⁴x √(-g) [ (1/(2κ²)) ( R + 6/L² ) - (1/(4e²)) F_ab F^ab ].   (2.8)

Here x ≡ (t, u, r) is a 4-dimensional spacetime co-ordinate upon which the gravity theory is based; r labels the 2-dimensional space upon which our CFT lives, t is time, and u is the new 'emergent' co-ordinate of spacetime. The coupling κ is related to Newton's gravitational constant by κ² = 8πG. The gravity theory is expressed in terms of a metric g and its Riemann curvature scalar R, while the U(1) gauge theory has 'electromagnetic' flux F_ab; we use small Latin letters for the spacetime co-ordinates of the (D + 1)-dimensional spacetime. Here, and henceforth, we have set the velocity of 'light' to unity, v = 1.

The physical significance of the gravity and the U(1) gauge field can be described after considering the vacuum solution of the classical equations of motion associated with Eq. (2.8). This solution has F_ab = 0, and the metric

ds² = L² [ du²/u² + u² ( -dt² + dr² ) ].   (2.9)

This is the metric of the space of uniform negative curvature, known as AdS4, and L is the radius of curvature. It is evidently invariant under Lorentz transformations, and also under the scale transformations in (2.3) provided we choose

u → b u;   (2.10)

furthermore, it is also invariant under a suitable extension of the conformal transformation in (2.4). This invariance is the crucial connection relating AdS4 to conformal transformations, and to CFTs: the group of isometries of AdS_{D+1} is the same as the group of conformal transformations in D spacetime dimensions. The transformation in (2.10) also suggests a physical interpretation for the new co-ordinate u: it transforms like an energy/momentum scale, and so can be viewed as the running energy/momentum scale of the renormalization group [15]. Thus the physics as u → ∞ is the ultraviolet (UV) or short-distance/time physics, while the physics as u → 0 is the infrared (IR) or long-distance/time physics. The gravity theory on AdS_{D+1} maintains the complete 'history' of the renormalization group flow in the structure of the metric as a function of u. We also note that the AdS space in Eq. (2.9) has a boundary as u → ∞. This boundary is just the D-dimensional Lorentzian spacetime upon which our CFT of interest 'lives'.

We are now in a position to describe the role of the U(1) gauge field F_ab in Eq. (2.8). This is the field which is 'dual' to the conserved current of the CFT. Let us label the conserved current J^µ; we use small Greek letters to represent the components of the D-dimensional Lorentzian spacetime. The conductivity is related to the two-point correlator of J^µ. To enable computation of this two-point correlator, let us include a source term coupling to the conserved current of the CFT:

S_CFT → S_CFT + ∫ d²r dt K_µ(r, t) J^µ(r, t).   (2.11)

Then a fundamental property of the gauge-gravity duality is that this source, K_µ, is the limiting boundary value of the vector potential A_M associated with the U(1) flux F_ab [15]:

lim_{u→∞} A_µ(r, t, u) = K_µ(r, t).   (2.12)

The complete prescription for computing the conductivity of the CFT using the holographic approach is as follows. Solve the equations of motion of the gravitational theory, subject to the constraint in Eq. (2.12), for an arbitrary spacetime-dependent K_µ(r, t).
From these solutions, compute the functional dependence of the gravitational action on K_µ. This is precisely the functional dependence of the CFT action on K_µ, and so correlators of J^µ are easily obtained by taking functional derivatives of the action with respect to K_µ.

Before we describe the results of such a computation, we need to discuss 2 additional points. First, the discussion above has implicitly assumed that we were at T = 0. However, our most difficult questions about the CFT were at T > 0, so it is essential we extend to non-zero temperatures. The key to this extension is to examine black hole solutions of the equations of motion of Eq. (2.8). These are analogous to the Schwarzschild solution, and take the form [12,15]

ds² = L² [ u² ( -f(u) dt² + dr² ) + du²/(u² f(u)) ],   f(u) = 1 - R³/u³.   (2.13)

Here R is a parameter which labels the location of the black hole horizon. Also, strictly speaking, this is not a black hole, but a black brane: the horizon is spatially infinite, and extends across the flat two-dimensional r space. As shown by Hawking [3], the black brane horizon must have a temperature, T, and in the present duality we can identify this temperature with the temperature, T, of the CFT. This Hawking temperature is most simply computed by analytically continuing the metric in Eq. (2.13) to imaginary time, and demanding that the resulting space be periodic in the imaginary time direction with period ℏ/(k_B T); such a computation yields

k_B T = 3ℏvR/(4π),   (2.14)

where we have momentarily reinserted the factor of v; this can be viewed as an equation which fixes the value of R. This black brane is a powerful feature of the present holographic approach. It enables us to see how dissipative and relaxational processes of the CFT at nonzero temperature have a natural interpretation in the gravitational theory. As illustrated in Fig. 6, waves propagating in the (D + 1)-dimensional space get damped because they lose energy across the black-brane horizon: it is this damping which Eq. (2.12) eventually relates to the dissipative transport co-efficients of the CFT [34].

[FIG. 6: Schematic of the geometry of gauge-gravity duality for conformal quantum matter. The gravity theory is defined on the bulk 4-dimensional spacetime (the time co-ordinate, t, is not shown). The 2+1 dimensional CFT resides on the boundary at u → ∞. Waves falling across the black brane at u = R, at the temperature of the CFT, capture the dissipation of the CFT at non-zero temperature.]

Second, as we noted earlier, the action in Eq. (2.8) was obtained for the special CFT described by ABJM. It also describes a very wide class of related supersymmetric CFTs, but this does not include the CFT associated with the boson Hubbard model in Eq. (2.2). To extend to this wider class, we will use the spirit of effective field theory, applied to the holographic method as proposed recently in Ref. 35. We can also view Eq. (2.8) as the simplest effective action for the metric and the U(1) gauge field, invariant under the underlying symmetries, in which all fields are expanded to include 2 gradients of spacetime co-ordinates. Let us assume this is a reasonable starting point, and extend this action to include 4 spacetime gradients. As we are only interested in linear response to the source K_µ in Eq. (2.11), we can restrict this extension to include only 2 powers of A_M. With these conditions, it turns out only one additional term is allowed, modulo allowed co-ordinate transformations, and the extended action is [35]

S = ∫ d⁴x √(-g) [ (1/(2κ²)) ( R + 6/L² ) - (1/(4e²)) F_ab F^ab + (γ L²/e²) C_abcd F^ab F^cd ],   (2.16)

where C_abcd is the Weyl curvature tensor.
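As a small consistency check on the geometry quoted above (the AdS4 metric (2.9), its scaling (2.10), and the black-brane temperature (2.14)), the following sketch verifies the algebra symbolically. It is my own illustration, in units with L = v = ℏ = k_B = 1, and assumes the metric and f(u) as reconstructed above.

```python
# Sketch of two checks on the geometry above (my own illustration):
# (1) invariance of the AdS4 metric under t -> t/b, r -> r/b, u -> b*u;
# (2) the black-brane Hawking temperature T = 3R/(4*pi) in units L = v = 1.
import sympy as sp

u, b, R = sp.symbols('u b R', positive=True)

# (1) In ds^2 = du^2/u^2 + u^2(-dt^2 + dr^2), each coefficient must return
# to its original form after the rescaling (du -> b du, dt -> dt/b, etc.):
du2_coeff = (1 / (b * u) ** 2) * b**2        # du^2/u^2 piece
dt2_coeff = (b * u) ** 2 / b**2              # u^2 dt^2 piece
print(sp.simplify(du2_coeff - 1 / u**2))     # -> 0: invariant
print(sp.simplify(dt2_coeff - u**2))         # -> 0: invariant

# (2) With g_tt = -u^2 f(u), g_uu = 1/(u^2 f(u)), and f(u) = 1 - R^3/u^3,
# the standard surface-gravity formula gives T = u^2 f'(u)/(4*pi) at u = R:
f = 1 - R**3 / u**3
T = sp.simplify((u**2 * sp.diff(f, u) / (4 * sp.pi)).subs(u, R))
print(T)                                     # -> 3*R/(4*pi)
```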
Eq. (2.16) is our final theory for a very wide class of CFTs, which we hope applies also to the CFT of the boson Hubbard model with reasonable accuracy. This theory depends upon two dimensionless parameters: e and γ. However, both parameters can, in principle, be fixed by matching to the short-time, or ω → ∞, limit of the correlators of the CFT of interest; e is related to Σ∞, while γ connects to a 3-point correlator of the current J^µ and the stress-energy tensor [35].

Let us summarize our proposal for applying gauge-gravity duality to the CFT realized in the boson Hubbard model. We use effective field theory to motivate the effective action of the (D + 1)-dimensional gravity theory in Eq. (2.16). Conventional field-theoretic methods can be used to compute characteristics of the CFT at T = 0, and these can be matched to the gravitational theory to fix the values of parameters. Then we extrapolate to the long time limit at T > 0 using the solution of the gravity equations of motion in the black brane background, as described below Eq. (2.12). It is interesting that this procedure parallels that used for Boltzmann-like equations: use properties of the quasiparticles at T = 0 to compute their scattering cross-section, so determine the collision term, and then solve the Boltzmann equation to extrapolate to the long time limit at T > 0. Evidently, the (D + 1)-dimensional gravity theory has taken the place of the Boltzmann equation for CFTs without quasiparticle excitations.

We are now ready to describe the results of this method in the computation of σ(ω). The results depend upon the values of e and γ. However, the dependence on e can be scaled out by examining the ratio σ(ω)/σ∞. Furthermore, stability of the holographic theory requires that |γ| < 1/12 [35]. Explicit results for this range of γ are shown in Fig. 7.

[FIG. 7: Results [35] for the frequency-dependent conductivity σ(ω) of a CFT in 2+1 dimensions from the holographic gravity theory in Eq. (2.16). The results shown depend only upon the parameter γ, which is restricted by stability requirements to the range |γ| < 1/12. The γ = 0 case has a particle-vortex self-duality and applies to special supersymmetric models like the ABJM theory.]

We see from Fig. 7 that the change in σ(ω) from the γ = 0 result to the maximal allowed values of γ is quite limited: this stability is very encouraging, and is a partial a posteriori justification for the validity of the gradient expansion applied to the holographic theory. A remarkable feature of Fig. 7 is that the results correspond very neatly to the conductivities surmised from the Boltzmann equation in Figs. 4 and 5: the γ > 0 case is similar to the particle Boltzmann sketch in Fig. 4, while the γ < 0 case is similar to the vortex case in Fig. 5. Thus it is the sign of γ which determines whether a given CFT is more accurately described by the particle-like or vortex-like excitations. We do not yet know the value of γ for the CFT of the boson Hubbard model, but the route is now open to its determination by the strategy we have outlined above.

We close by noting a curious feature of Fig. 7. The γ = 0 case yields a frequency-independent σ(ω). Recall that γ = 0 corresponded to the supersymmetric ABJM-like theories. These special theories have a self-duality under the particle-vortex transformation which accounts for this novel feature [36].

III. COMPRESSIBLE QUANTUM MATTER

A. Condensed matter analysis

We begin by defining compressible states, in a form applicable to both condensed matter and holographic theories [37].
III. COMPRESSIBLE QUANTUM MATTER

A. Condensed matter analysis

We begin by defining compressible states, in a form applicable to both condensed matter and holographic theories [37].

• Consider a continuum, translationally-invariant quantum system with a globally conserved charge Q, i.e., Q commutes with the Hamiltonian H. Couple the Hamiltonian to a chemical potential, µ, which is conjugate to Q: so the Hamiltonian changes to H − µQ. The ground state of this modified Hamiltonian is compressible if ⟨Q⟩ changes smoothly as a function of µ, with d⟨Q⟩/dµ non-zero.

Note that our definition of compressibility refers to the ground state, and so we need T = 0. Also, as noted in Section I, we will restrict our attention to compressible states in spatial dimensions greater than unity (D > 2). Remarkably, it turns out there are only a few known states which satisfy these seemingly innocuous requirements. The states are:

1. Solids: Translational symmetry is broken, and the matter 'crystallizes' into a periodic arrangement. Changing µ changes the period of the lattice, allowing a continuous variation in ⟨Q⟩.

2. Superfluids: The global U(1) symmetry associated with the conservation of Q is spontaneously broken by a Bose condensate, and ⟨Q⟩ can vary smoothly with µ.

3. Fermi liquids: For free fermions, the single-particle states are occupied up to a Fermi surface which separates the occupied and empty states. This picture generalizes to interacting particles, to all orders in perturbation theory, as shown elegantly by Landau. In a Landau Fermi liquid, there is a sharp Fermi surface, and the only low energy excitations are sharp quasiparticles (renormalized from the bare fermions) in the vicinity of the Fermi surface. These quasiparticles carry opposite Q charges on either side of the Fermi surface. Changing µ changes the location of the Fermi surfaces, and the newly occupied states allow a smooth variation in ⟨Q⟩.

Apart from mild variations which combine features of such states, these are the only examples of compressible quantum states in traditional condensed matter physics. The past decades have seen an intensive search for other 'exotic' compressible states of condensed matter. As reviewed more completely in Ref. 20, this search has been motivated by the ubiquitous appearance of 'strange metal' behavior in a variety of correlated electron compounds, including the high temperature superconductors. From such studies a new class of compressible states appears to have emerged, which we refer to here generically as 'non-Fermi liquids'. Many recent studies [24,[38][39][40][41][42][43] have emphasized the strong-coupling nature of the theory of non-Fermi liquids in two spatial dimensions (D = 3), and noted that important questions remain unresolved.

Let us describe one of the simplest examples of a compressible non-Fermi liquid state. This was developed as a theory of a possible spin liquid (SL) state in frustrated quantum antiferromagnets [44][45][46][47][48], and a complete derivation appears in the Supplementary Material. The primary degrees of freedom are fermionic 'spinons' f_α, with α =↑, ↓ a spin label. These are strongly coupled to an 'emergent' U(1) gauge field B_µ. Note that this gauge field is not to be confused with the Maxwell gauge field associated with electromagnetism, whose fluctuations are ignored in all our analyses. It should also not be confused with the gauge field A_M in Section II B, which resides in the extended (D + 1)-dimensional space. Rather, B_µ is a degree of freedom which emerges from the dynamics of the antiferromagnet, and encodes the complex quantum entanglement between valence bonds in the spin liquid. The action for this non-Fermi liquid state is simple (see Supplementary Material):

L = f†_α (∂_τ − iB_τ − (1/(2m))(∇ − iB)² − ε_F) f_α + iN B_τ, (3.1)

where both f_α and B_µ are fluctuating functions of r and t, and the remaining parameters are constants.
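The role of ε_F in Eq. (3.1) can be previewed with free spinons, i.e., a zeroth-order sketch that ignores the B_µ fluctuations entirely (the numerical values and units below are illustrative only):

import numpy as np

# Free-spinon estimate: in d = 2, density per spin component = m*eF/(2*pi),
# so with two spin components <f'f> = m*eF/pi. This fixes eF once the
# background density N of Eq. (3.1) is specified, as explained next.
m = 1.0
def density(eF):
    return m * eF / np.pi if eF > 0 else 0.0

N = 0.5
eF = np.pi * N / m           # invert the relation so that <f'f> = N
print(density(eF), N)        # equal by construction
# d<f'f>/deF = m/pi is non-zero, but this charge is gauged, not globally conserved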
The Fermi energy ε_F controls the value of the fermion density ⟨f†_α f_α⟩, but we are not free to choose its value. This density is coupled to a fluctuating U(1) gauge field which mediates long-range 'Coulomb' interactions, and so just as in the jellium model of the electron gas, stability of the system requires net neutrality in the gauge charge. We have thus included a background neutralizing charge density N, and the value of ε_F must be chosen so that

⟨f†_α f_α⟩ = N. (3.2)

Thus the system is not compressible with respect to the charge f†_α f_α; moreover this charge is not associated with a global conservation law, but with a gauge invariance. Instead, the quantity realizing the globally conserved charge Q associated with compressibility is the spin density. The spin density has 3 components, and let us arbitrarily choose the z component, and so

Q = (1/2) f†_α σ^z_{αβ} f_β, (3.3)

where σ^z is a Pauli matrix. The "chemical potential" coupling to Q is the Zeeman coupling of an applied magnetic field. The compressibility of the SL state implies that its magnetization can be varied smoothly as a function of the applied magnetic field, and the magnetic susceptibility is non-zero at T = 0 [45].

Like Landau Fermi liquids, a non-Fermi liquid SL state also has Fermi surfaces, but the character of the fermionic excitations near the Fermi surface is quite different. Formally, for interacting electron systems, the Fermi surface is defined by the momentum k = k_F for which there is a zero in the inverse fermion Green's function:

G⁻¹(k = k_F, ω = 0) = 0. (3.4)

Landau's Fermi liquid theory also predicts a specific singularity in the fermion Green's function near the Fermi surface: this singularity reflects the fact that the inverse-lifetime of the quasiparticles vanishes as the square of their distance from the Fermi surface. The energy of the quasiparticles is parametrically larger: it is linear in the distance from the Fermi surface, and so the quasiparticles are well-defined excitations. In the known non-Fermi liquids, the quasiparticles do not remain well-defined excitations away from the Fermi surface: they are strongly scattered by the low energy modes of the U(1) gauge field. This is reflected in the singularity of the fermion Green's function near the Fermi surface, which has been argued to have the generic form [40]

G_f⁻¹ = q^{1−η} F(ω/q^z), (3.5)

where we are expanding in the vicinity of the Fermi surface point at (k_F, 0), with q_x = k_x − k_F, q_y = k_y, and q = q_x + q_y² (see Fig. 8). The universal scaling function F is complex-valued and determines the spectral density of the fermionic excitations. The parameters η and z are critical exponents whose values can be estimated in various expansion methods [40,41]. However, the theory of these fermionic and gauge modes is strongly coupled [39,40] (in D = 3), and so the reliability of these estimates is unknown.

We now make a few more remarks about the Fermi surfaces encountered so far: (i) The Fermi surface of non-Fermi liquids is sharply defined, even though the fermionic quasiparticles are not: the value of k_F is precisely defined by Eq. (3.4). (ii) Such non-Fermi liquids arise when Fermi surfaces are coupled to fluctuating gauge fields, or to order parameter fluctuations near a symmetry-breaking quantum phase transition. Unlike the familiar case at zero density, there does not appear to be any fundamental difference between Abelian and non-Abelian gauge fields.
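A toy evaluation makes the loss of the quasiparticle visible. The scaling function below, F(y) = 1 − iy, and the exponent values are purely illustrative stand-ins (the true F and exponents are known only through the expansion methods cited above):

import numpy as np

eta, z = 0.1, 1.5             # illustrative values only
def G_inv(qx, qy, omega):
    # toy version of Eq. (3.5) with F(y) = 1 - i*y assumed
    q = qx + qy**2            # distance from the Fermi surface in the scaling regime
    return q**(1 - eta) * (1.0 - 1j * omega / q**z)

q = 0.1
w = np.linspace(0.0, 0.2, 5)
A = -np.imag(1.0 / G_inv(q, 0.0, w)) / np.pi
print(A)   # spectral weight spread over a width ~ q^z: no sharp quasiparticle pole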
(iii) There is a crucial relation, known as the Luttinger relation, which constrains the total area enclosed by Fermi surfaces to the values of conserved charges. Specifically, let us consider a compressible state with Fermi surfaces labeled by ℓ: each Fermi surface is associated with the singularity (3.4) in the Green's functions of fermions carrying global charge q_ℓ under the symmetry associated with Q, and encloses area A_ℓ. Then the Luttinger relation is [49,50]

Σ_ℓ q_ℓ A_ℓ = 4π² ⟨Q⟩. (3.6)

The fermionic excitations near the Fermi surface can also carry charges associated with fluctuating gauge fields, as in Eq. (3.1). If none of the Fermi surfaces have gauge charges, we usually obtain a Fermi liquid. If we have Fermi surfaces with and without gauge charges coexisting, we obtain a compressible state labeled as a fractionalized Fermi liquid [51,52]. A Luttinger relation like Eq. (3.6) applies for each conserved U(1) charge Q associated with a compressible state. We also note [37] that for the case of an Abelian gauge field (as in Eq. (3.1)), there is an additional requirement for global neutrality in the gauge charge: in this case, there is an additional Luttinger relation for the Fermi surfaces carrying the gauge charges, equating the sum of their areas to the background charge (which is N in Eqs. (3.1) and (3.2)).

Our discussion of compressible states of condensed matter now suggests a conjecture [37]: all compressible, continuum quantum states which do not break the translational symmetry or the global U(1) symmetry associated with Q must have Fermi surfaces, whose areas obey the Luttinger relation in Eq. (3.6). The rationale behind this conjecture is that (in D − 1 spatial dimensions) we need a surface of dimension D − 2 of zero energy excitations to have adequate phase space to allow ⟨Q⟩ to vary smoothly as a function of the chemical potential. In D > 2, bosons can't generically have such a surface of zero energy excitations: there will be negative energy states on one side of this surface, rendering the system unstable to Bose condensation. In contrast, fermions are allowed to have such surfaces with a singularity as in Eq. (3.4), as is evident already from the free fermion theory: the negative energies now represent hole-like excitations of the fermions.

We are finally in a position to state the challenge to the holographic approach to condensed matter physics. Can gauge-gravity duality provide a classification of possible states of compressible quantum matter? Clearly, any such classification should include the familiar phases: solids, superfluids, and Fermi liquids. Are there additional states, and do they correspond to the non-Fermi liquid states with Fermi surfaces described above? If so, we can hope that they will provide a new perspective on their strong-coupling physics. Or is the conjecture incorrect, and are there entirely new types of compressible quantum states which have been overlooked in condensed matter studies?
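For a single Fermi surface of free spinless fermions (a single charge q = 1), the Luttinger relation (3.6) is just mode counting, as a quick numerical check confirms:

import numpy as np

# Free spinless fermions in d = 2: check q*A = 4*pi^2*<Q> for A = pi*k_F^2.
mu = 0.7
kF = np.sqrt(2 * mu)                       # dispersion epsilon(k) = k^2/2
A = np.pi * kF**2                          # Fermi-surface area

# density by brute-force mode counting on a momentum grid
k = np.linspace(-3, 3, 2001)
kx, ky = np.meshgrid(k, k)
occupied = (kx**2 + ky**2) / 2 < mu
density = occupied.sum() * (k[1] - k[0])**2 / (2 * np.pi)**2
print(A, 4 * np.pi**2 * density)           # the two numbers agree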
B. Holographic analysis

The study of compressible states using holography is the focus of much current research [10][11][12][13][14][15][16][17][18]. This research does not yet have definitive answers to the questions posed at the end of the previous section. However, rapid progress has been made recently, and here I will give my perspective on the current state of the theory. This research promises to eventually achieve a holographic classification of the possible states of compressible quantum matter. Such a classification will be useful for condensed matter applications, especially in two spatial dimensions, regardless of the artificial nature of the microscopic degrees of freedom used to realize the compressible phases.

The basic starting point is to extend the Einstein-Maxwell theory S_EM in Eq. (2.8), which we used in the conformal case, to a situation with non-zero density. Given the relationship in Eq. (2.12), we can turn on a finite density by including a chemical potential, in which case we now have the boundary condition

A_t(u → ∞) = µ, (3.7)

while the other components of the gauge field vanish in the corresponding limit near the boundary. We will now work to all orders in µ, rather than the linear response in the source term in Section II B. In the simplest (and frequently used) approach, we make no further change. We solve the equations of motion of the action in Eq. (2.8) subject to the boundary condition in Eq. (3.7) for the gauge field. Such a solution is of the Reissner-Nordström form [55]: the metric remains as in Eq. (2.13), but the function f(u) in Eq. (2.14) is replaced by

f(u) = 1 − (1 + µ̄²)(R³/u³) + µ̄²(R⁴/u⁴), (3.8)

where µ̄ is a dimensionless combination of µ and the horizon parameter R, with a coefficient fixed by the couplings in Eq. (2.8). Again R is the position of the horizon, as can be checked from f(R) = 0. In addition, we have a non-zero gauge field given by

A_t = µ(1 − R/u), (3.9)

which obeys Eq. (3.7), and vanishes on the horizon. As above Eq. (2.15), we can compute the temperature of the horizon, and now find that it equals

T = (3R/(4πL²))(1 − µ̄²/3); (3.10)

as in Eq. (2.15), this equation fixes R in terms of physical parameters. A sketch of this geometry, generalizing Fig. 6, is shown in Fig. 9. Note that the total 'electric' flux in the bulk is conserved between the horizon and the boundary at u = ∞: this follows from the potential in Eq. (3.9), which shows that the electric field ∼ 1/u², and the metric in Eq. (2.13), which shows that the surface area ∼ u².

This gravitational solution can now be applied to examine physical properties of the quantum system on the boundary as a function of the temperature, T, and the chemical potential, µ. For T ≫ µ this was examined in Ref. 56, and successfully used to develop a general theory of thermoelectric transport in compressible quantum systems near quantum critical points. We will not review this here, as our primary interest is in the limit T ≪ µ for non-superfluid states. I also note in passing the interesting recent work of Bhattacharya et al. [57] which has obtained new insights on the low temperature hydrodynamics of superfluids using the dual gravity approach.

A notable feature of the T → 0 limit of the above Reissner-Nordström solution is apparent from Eq. (3.10): the horizon at u = R survives, and this limit is reached at

µ̄² = 3. (3.11)

This is different from the conformal case of Section II B, where the horizon was absent at T = 0. An immediate consequence is that our quantum system on the boundary has a non-zero entropy density in its ground state, as can be verified by a computation of the free energy of the gravitational solution. The free energy computation [12] also shows that the compressibility of the ground state is finite, with a charge density which is a smooth function of µ:

⟨Q⟩ ∝ µ², (3.12)

as required by dimensional analysis for a 2+1 dimensional system. So we have succeeded in obtaining a phase of compressible quantum matter, but it has the unacceptable feature of a ground state degeneracy which increases exponentially with system size. However, we will not discard this solution right away: let us examine some of its properties more carefully and see if a physical interpretation emerges. This will also help point the way towards improving this holographic description. We examine the form of the metric more closely; using the relationship (3.11) in Eq. (3.8), we find

f(u) = 1 − 4(R³/u³) + 3(R⁴/u⁴). (3.13)

Note that f(u) has a double zero at the horizon u = R, unlike the conventional single zero in the conformal case in Eq. (2.14). Such a horizon is known as extremal, and this feature is linked to its anomalous properties.
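The double zero is easy to exhibit numerically from the form of f(u) in Eq. (3.8), with µ̄ treated as a free parameter in this sketch:

import numpy as np

def f(u, R, mbar2):
    # blackening function of Eq. (3.8); mbar2 = mu-bar squared
    return 1 - (1 + mbar2) * (R / u) ** 3 + mbar2 * (R / u) ** 4

def fprime(u, R, mbar2, eps=1e-7):
    return (f(u + eps, R, mbar2) - f(u - eps, R, mbar2)) / (2 * eps)

R = 1.0
for mbar2 in [0.0, 1.5, 3.0]:
    # f(R) = 0 always; f'(R), and hence T, vanishes at the extremal value mbar2 = 3
    print(mbar2, f(R, R, mbar2), fprime(R, R, mbar2))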
Defining a co-ordinate ũ = u − R, which is zero at the horizon, and expanding to the lowest non-vanishing order in ũ, the metric in Eq. (2.13) becomes

ds² = −(6ũ²/L²) dt² + (L²/(6ũ²)) dũ² + (R²/L²) dr². (3.14)

The notable feature of this metric is its separation into the first two terms, dependent only upon t and ũ, and the last term, dependent only upon the spatial co-ordinate. This means that spacetime has factorized into a spatial R², and a curved space in ũ and t. By comparison with Eq. (2.9), the reader will recognize that the latter space is AdS₂, and so we conclude that the near-horizon spacetime of a Reissner-Nordström black brane is AdS₂ × R².

This factorized form of the metric has strong consequences for all the low energy properties of the compressible quantum system on the boundary. Let us begin by interpreting the AdS₂ factor. What kind of quantum system does this describe? As I have reviewed in more detail elsewhere [16,17,58], it describes the physics of a single quantum impurity universally coupled to a CFT, analogous to the overscreened fixed points of multichannel Kondo problems [59], or a vacancy in a two-dimensional insulating antiferromagnet at a magnetic ordering critical point [60]. There is a conformal structure to the correlations near the impurity, and the ground state has a finite entropy in the limit T → 0 [59,61]. All these features are physical and robust, and potentially realizable in generic experimental systems. Supersymmetric generalizations of such quantum impurity systems can be solved [62] by gauge-gravity duality, and explicitly reveal an AdS₂ metric in the low energy physics.

We are now ready to interpret AdS₂ × R². The factorized form of the metric means that we can view the two-dimensional quantum system as consisting of an infinite number of quantum impurity problems, one at each position in space. Because there is an ultraviolet cutoff above which the AdS₂ factorization does not hold, there is a similar cutoff in spatial separation below which the quantum impurities remain coupled. Thus the low energy physics is controlled by a finite density of independent quantum impurities. Each quantum impurity is coupled to a gapless environment, which is presumably a mean-field description of the other quantum impurities. Many condensed matter theorists will immediately recognize the similarity of this picture to that of dynamical mean field theory (DMFT) [63]. Such large dimension/co-ordination number approximations have also been applied to the vicinity of magnetic ordering transitions, and descriptions of compressible non-Fermi liquid states have been obtained [61,[64][65][66][67][68]. Such states have a non-zero ground state entropy density [59,61], a consequence of adding up the entropy of the finite density of decoupled quantum impurity problems. It was argued [58] that these compressible critical states, obtained in the limit of large spatial dimension or lattice co-ordination number, provide a specific microscopic realization of the physics of AdS₂ × R². The holographic theory shares the feature of having a non-zero ground state entropy density, and we will now see that the structure of correlations of local operators also matches.
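The prefactors in Eq. (3.14) follow directly from the extremal form (3.13); the one-line expansion below is a check using only equations already quoted:

\[
f''(R)=\frac{12}{R^{2}}\;\Longrightarrow\; f(u)\approx\frac{6\,\tilde u^{2}}{R^{2}},\qquad
g_{tt}=-\frac{u^{2}}{L^{2}}f\approx-\frac{6\,\tilde u^{2}}{L^{2}},\qquad
g_{uu}=\frac{L^{2}}{u^{2}f}\approx\frac{L^{2}}{6\,\tilde u^{2}},
\]

so the (ũ, t) factor is AdS₂ with radius L₂ = L/√6, times an R² of metric (R²/L²) dr².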
Correlations of the boundary theory are determined by inserting probe fields in the 'bulk' gravity theory, and computing the limiting values of their Green's functions near the boundary. As representative examples of such correlations, we introduce the simplest example of 'matter' fields in the gravity theory: a complex scalar field φ, and a Dirac fermion, Ψ. The action of these fields respectively has the schematic form

S_b = ∫ d⁴x √(−g) [ |(∂_M − iA_M)φ|² + m²|φ|² ],
S_f = ∫ d⁴x √(−g) Ψ̄ [ Γ^M (∂_M + (1/4)ω_{ab,M}Γ^{ab} − iA_M) + m ] Ψ, (3.15)

where ω_{ab,M} is the spin connection of the curved geometry, and the Γ represent Dirac matrices. The parameter m determines the scaling dimension of the operator being used to probe the compressible quantum states. We are now faced with the problem of determining the inverses of the differential operators in Eq. (3.15) in the 4-dimensional space defined by Eqs. (2.13) and (3.13), and taking their u → ∞ limit. This is an intricate problem in solving differential equations on a curved space, addressed in recent work [69][70][71][72][73]. In the low frequency (ω) limit, the Green's functions of the boundary compressible quantum theory have the following form for both the boson and fermion cases [72]:

G⁻¹(ω, k) = C(k) + D(k) ω^{2ν_k}, (3.16)

where C(k), D(k), and ν_k are smooth functions of the wavevector k. This peculiar behavior, with a smooth dependence on k, and a singular dependence on ω, is a direct consequence of the AdS₂ × R² factorization of the geometry in Eq. (3.14): the independent quantum impurities have temporal correlations which decay with a power-law in time, but are spatially uncorrelated. The behavior in Eq. (3.16) does not correspond to any of the generic compressible condensed matter phases considered in Section III A. Instead, it is essentially the behavior observed in the compressible critical states in the limit of large spatial dimension or lattice co-ordination number [61,[64][65][66][67][68]. The correspondence between these large dimension models and the holographic solutions extends also to the extension of Eq. (3.16) to non-zero temperatures.

The computation of the inverses of the operators in Eq. (3.15) also yields stability conditions on the gravity theory. For the boson action, S_b, the condition is C(k) > 0. This is satisfied for m sufficiently large. However, with decreasing m there is an instability towards condensation of the scalar, leading to a new compressible phase with superfluidity [74][75][76][77][78][79]; we will discuss this phase further in Section III B 1. For the fermion case, C(k) is allowed to be either positive or negative. If C(k) changes sign as a function of k, we obtain a Fermi surface [72] at k = k_F, where C(k_F) = 0, by Eq. (3.4). The singularity of the Green's function in Eq. (3.16) at the Fermi surface is a special case of the generic form in Eq. (3.5); the latter is singular as a function of both ω and k − k_F, and reduces to the singular part of the present form obtained for the AdS₂ × R² theory only in the limit where the dynamic critical exponent z → ∞. Indeed, the value z = ∞ is another way of characterizing the peculiar phase of the holographic theory with a geometry with factorization between space and time.

Finally, we should compare the compressible state described by the Reissner-Nordström geometry to the conjecture towards the end of Section III A. We have obtained a compressible state which preserves both the translational and global U(1) symmetry associated with Q. We found Fermi surfaces only over a limited parameter regime, but even when present, they enclose areas which do not obey the Luttinger relation in Eq. (3.6). For the conjecture to be valid, the deficit in the Luttinger relation must be made up by 'hidden' Fermi surfaces associated with fermions which carry additional gauge charges, as in the non-Fermi liquid and fractionalized Fermi liquid states discussed in Section III A.
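How a Fermi surface emerges from Eq. (3.16) is easily illustrated; the functions C, D and the exponent ν below are arbitrary smooth stand-ins, not outputs of an actual bulk computation:

import numpy as np
from scipy.optimize import brentq

# Toy smooth functions in G^-1 = C(k) + D(k)*omega^(2*nu_k), forms assumed for illustration
def C(k):  return k - 0.9            # changes sign at k = 0.9
def D(k):  return 1.0 + 0.2 * k

kF = brentq(C, 0.0, 2.0)             # Fermi surface: C(kF) = 0, cf. Eq. (3.4)
print(kF)                            # 0.9

nu = 0.35                            # an illustrative exponent
w = 1e-3
G = 1.0 / (C(kF) + D(kF) * (w + 0j) ** (2 * nu))
print(abs(G))                        # G is singular as omega -> 0 exactly on the Fermi surface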
The correlators of fermions with gauge charges are not readily computable in the gauge-gravity duality, and are a worthy topic for future research.

1. Beyond AdS₂ × R²

Given the non-zero ground state entropy density of the z = ∞ compressible state described in Section III B, it seems clear we need to move beyond the Reissner-Nordström geometry obtained via the equations of motion of the Einstein-Maxwell action in Eq. (2.8). In particular, at the very least, we should include the back-reaction of the matter fields in Eq. (3.15) upon the geometry. This is an active topic of current research, and I summarize a few recent developments.

As an example of a back-reaction on the metric, we can move to the superfluid state where the bosons are condensed, and recompute [80,81] the metric g and gauge field A_t from the classical equations of motion of S_EM + S_b, where φ is treated as a c-number as in the Bose-Einstein theory at T = 0. In the non-superfluid state, some studies [82][83][84][85][86] have accounted for the back reaction of fermions in the Thomas-Fermi approximation, where the fermion density at a given u is a function only of the local chemical potential. In both of these cases, a qualitatively similar back-reaction is obtained, which is illustrated in Fig. 10. The electric field is screened by the bosonic or fermionic matter in the bulk, so that the electric field vanishes as u → 0. In the opposite limit u → ∞, the electric field remains pinned by the value of ⟨Q⟩, as is required by Gauss' Law on the boundary.

FIG. 10: There is no horizon, and the small u geometry has a 'Lifshitz' form, as in Eq. (3.17). The bulk charge density screens the electric field, so that the electric field vanishes as u → 0.

For the metric, the following qualitative structure is obtained as u → 0:

ds² = L²(du²/u²) + (u²/L²) dr² − α²(u^{2z}/L^{2z}) dt², (3.17)

where α and z are constants. It has become the practice in the string theory literature to call this the 'Lifshitz' metric [87] (for not particularly good reasons). A key property of this metric is that it is invariant under the following transformations, which generalize (2.3) and (2.10):

t → t/b^z, r → r/b, u → b u, (3.18)

and identify z as the dynamic critical exponent. The AdS₄ metric of Eq. (2.9) corresponds to z = 1. The AdS₂ × R² metric of Eq. (3.14) is obtained in the limit z → ∞ (as expected), after setting u = ũ^{1/z}.

The existence of this scale-invariant structure in the small u limit is puzzling, and is not well understood. It implies there are emergent low energy excitations which scale with the dynamic critical exponent z. In the traditional condensed matter superfluid state at T = 0, the only low energy excitations are the Goldstone modes, and there are no dynamic critical fluctuations. This indicates that the holographic superfluids [74,75] are not conventional superfluids at T = 0, and perhaps co-exist with a non-Fermi liquid sector. For the non-superfluid compressible state obtained using the Thomas-Fermi theory, it is possible that these critical excitations are associated with the gapless gauge excitations which led to Eq. (3.5) in the non-Fermi liquid. However, the latter excitations are associated with long wavelength, low energy fluctuations at momenta near the Fermi surface, and these are not contained in the Thomas-Fermi theory. Indications from recent work [88][89][90][91][92] are that these low-energy excitations arise from the presence of an infinite number of Fermi surfaces, with a Fermi wavevector which varies continuously with u.
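The scale invariance of the Lifshitz form (3.17) under Eq. (3.18) can be verified symbolically; the sketch below sets L = 1 and uses the line element as reconstructed above (itself an assumption in this presentation):

import sympy as sp

b, z, u, alpha = sp.symbols('b z u alpha', positive=True)
dt, dr, du = sp.symbols('dt dr du', positive=True)

def ds2(u_, dt_, dr_, du_):
    # Lifshitz-type line element of Eq. (3.17) with L = 1
    return du_**2 / u_**2 + u_**2 * dr_**2 - alpha**2 * u_**(2*z) * dt_**2

# scale transformation of Eq. (3.18); differentials scale the same way as the co-ordinates
scaled = ds2(b*u, dt/b**z, dr/b, b*du)
diff = sp.expand(scaled - ds2(u, dt, dr, du), power_base=True, force=True)
print(sp.simplify(diff))   # 0: the metric is invariant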
For condensed matter applications, we therefore need to move the theory of compressible states to a regime with a small number of Fermi surfaces. This was addressed recently in Ref. 92 by filling up fermionic states of S_f in a suitable metric. A key point made in Ref. 92 is that a Fermi liquid must be a confining state of the gauge theory defining the CFT, as we discuss in the Supplementary Material. In holographic studies of zero density theories of particle physics, it is known that a confining state corresponds to a geometry which terminates at finite u = u_m, rather than extending all the way to u = 0. In such a confining geometry, Ref. 92 filled up the fermionic states of S_f in Eq. (3.15), without making the Thomas-Fermi approximation. A conventional Fermi liquid state was found, without extraneous excitations. Moreover, the Luttinger relation in Eq. (3.6) was found to be exactly obeyed with a small number of Fermi surfaces, including the simplest case with a single Fermi surface; this Luttinger relation was a consequence of Gauss's Law for the electric flux in the u direction. Confining geometries were also considered for the superfluid state [93,94], and our present arguments indicate that these are traditional superfluids. Holographic realizations of the familiar compressible phases of condensed matter physics, Fermi liquids and superfluids, thus finally appear to have been found. Extensions of this understanding to non-Fermi liquids and fractionalized Fermi liquids are important topics for future research. The connection between the Luttinger relation and Gauss' law [92] indicates that the holographic realizations of these phases will have at least part of the electric flux 'leaking out' at u = 0, as in Fig. 9, and in a recent study [89].

IV. CONCLUSIONS

We divided our discussion into two classes of systems realizable in condensed matter physics: conformal and compressible.

Conformal systems were discussed in Section II. These concerned generic models which realize conformal field theories (CFTs) at a quantum phase transition, the simplest being the superfluid-insulator transition of bosons at integer filling in a periodic potential in two spatial dimensions. Many T = 0 properties of the CFT are accurately computable by traditional field theoretic perturbative expansions. However, this success does not extend to long-time correlations at T > 0, because the orders of limit of t → ∞ and T → 0 do not commute for all such methods. The Boltzmann equation, and its many descendants, are traditionally used to resum the perturbative methods, and access the long time limit; these are only expected to be reasonable when particle-like excitations are long-lived. The holographic method provides an alternative description of relaxational and transport properties at long time: it does not assume any picture of particle-like excitations and so is far removed from, and complementary to, the Boltzmann description. We described a gradient expansion method [35] for generating an effective holographic theory for generic CFTs. The coupling constants of this effective theory can be fixed by matching to various T = 0 correlators of the CFT. Then the effective theory generates useful results for the long-time limit, including the values of transport co-efficients. I believe this method offers hope for quantitative, testable predictions not only for the linear transport problems discussed in Section II B, but also non-linear and non-equilibrium [95] problems in the vicinity of quantum critical points.

Compressible systems were studied in Section III A.
The familiar condensed matter examples are superfluids, solids, and Fermi liquids (and their variants). An exotic class of compressible states has emerged in condensed matter studies motivated by the strange metal problem of correlated electron systems [20]. These are varieties of non-Fermi liquids, all of which have sharp Fermi surfaces, but the fermionic quasiparticles near some or all of the Fermi surfaces are not well-defined because of their strong coupling to deconfined gauge fields, or to gapless modes at a quantum phase transition. This led us to the conjecture [37], at the end of Section III A, that all compressible, continuum quantum systems which do not break translational or the global U(1) symmetry (associated with the conserved density) at T = 0, must have Fermi surfaces; further, the areas enclosed by these Fermi surfaces must obey the Luttinger relation in Eq. (3.6). The holographic method has the potential to provide us with an alternative classification, and a deeper understanding of the compressible states of quantum matter. Such a classification should surely include the familiar solids, superfluids, and Fermi liquids, and it is not yet settled if the exotic compressible states obtained via holography correspond to those obtained so far in the condensed matter literature. It remains possible that the Fermi surface conjecture is false, and holography leads us to qualitatively new types of compressible phases.

The Reissner-Nordström solution of the Einstein-Maxwell action provides the simplest theory of a compressible state. We reviewed the physical properties of this state: a non-zero ground state entropy, 'locally critical' correlations in time, a non-Fermi liquid Fermi surface with dynamic critical exponent z = ∞, and T > 0 correlations similar to those of critical quantum impurity problems. We argued that all these properties were in close correspondence with the 'fractionalized Fermi liquid' phase [51,52] of lattice quantum models solved in the limit of large spatial dimensionality or large lattice co-ordination number [58].

In Section III B 1 we discussed ongoing research moving beyond the Reissner-Nordström geometry. In many studies, back-reaction of the matter fields on the gravity theory was examined in Thomas-Fermi-like approximations, and led to a low energy 'Lifshitz' geometry in Eq. (3.17) with a finite z. However, the physical interpretation of the critical low energy excitations with such a geometry remains unclear: such excitations are absent from conventional superfluids and Fermi liquids, and it is unlikely they correspond to the modes of known non-Fermi liquids. They may well be artifacts of the presence of an infinite number of Fermi surfaces, with a continually varying Fermi wavevector, along the emergent holographic direction [92]. In other studies [92][93][94], confining geometries which terminate along the holographic u direction have been considered, and these do realize traditional superfluids and Fermi liquids, without extraneous excitations. These studies also indicate that the compressible non-Fermi liquid (or fractionalized Fermi liquid) phases will be realized in geometries with all (or part) of the electric flux remaining unscreened until the end of the holographic direction. It does seem that further refinements of the theory are needed before the links between the condensed matter and holographic theories of compressible quantum matter are completely established.
Given the pace of progress in the past few years, we can hope that many of these issues will be resolved in the not too distant future.

Note added: A new approach to holographic theories of compressible quantum matter has since appeared in Ref. 96.

Supplementary Material

I. CONFORMAL QUANTUM MATTER

The boson Hubbard model is

H_b = −w Σ_{⟨ij⟩} (b†_i b_j + b†_j b_i) + (U/2) Σ_i n_i(n_i − 1) − µ Σ_i n_i, (1.1)

where b_i is the canonical boson annihilation operator, n_i = b†_i b_i is the boson number operator, w is the hopping matrix element between nearest-neighbor sites, U is the on-site repulsive energy between a pair of bosons, and µ is the chemical potential. Let us assume that the average boson density is exactly n₀ per site, where n₀ is a positive integer. For U/w ≫ 1, the ground state is simply

|G⟩ = Π_i ((b†_i)^{n₀}/√(n₀!)) |0⟩, (1.2)

where |0⟩ is the empty state with no bosons. In the same limit, the lowest excited states are "particles" and "holes" with one extra or missing boson,

|p_i⟩ = b†_i |G⟩/√(n₀ + 1), |h_i⟩ = b_i |G⟩/√(n₀). (1.3)

For w/U = 0 strictly, the particle and hole energies (relative to the ground state) are

E_p = U n₀ − µ, E_h = µ − U(n₀ − 1). (1.4)

For 0 < w/U ≪ 1, these states will develop dispersion. By considering the first order splitting of the degenerate manifold of particle or hole states (degeneracy associated with the site of the particle or hole), one obtains (considering the square lattice for simplicity)

ε_p(k) = ∆_p + k²/(2m_p), ε_h(k) = ∆_h + k²/(2m_h), (1.5)

where we have Taylor expanded around the minimum at k = 0, giving m_p = 1/(2(n₀ + 1)w) and m_h = 1/(2n₀w). The excitation gaps, ∆_{p/h}, are

∆_p = U n₀ − µ − 4(n₀ + 1)w, ∆_h = µ − U(n₀ − 1) − 4n₀w, (1.6)

to this order in w/U. As long as both of these gaps are positive, our starting point of a Mott insulating state with an average of n₀ particles per site is stable. When one of the gaps vanishes, the Mott insulator is no longer stable, and we have a quantum transition to a superfluid state. Let us assume that it is ∆_p that vanishes first with increasing w. The transition then corresponds to a Bose-Einstein condensation of particles, with −∆_p acting as the effective chemical potential. At T = 0, an increasing chemical potential implies an increasing particle density, and so the superfluid state will have a density greater than that of the Mott insulator. Similarly, if the value of µ is such that ∆_h vanishes first, the superfluid state will have a density smaller than that of the Mott insulator. However, let us consider the special case when the density of both the Mott insulator and the superfluid are equal to n₀; this is often naturally the case under experimental conditions. Our reasoning makes it clear that this is only possible if µ is chosen so that ∆_p = ∆_h ≡ ∆. Then both gaps vanish simultaneously, and the insulator-superfluid transition corresponds to condensation of both particles and holes (which can be viewed as "anti-particles"). This symmetry between particles and anti-particles is responsible for the relativistic structure of the low energy theory.

Let us now proceed to derive the effective action for the low energy theory near the insulator-superfluid transition. While it is possible to derive a field theory of this condensation from H_b, we instead just write it down based on our simple physical picture. We model the particle and hole excitations by fields p(r, τ), h(r, τ) respectively, in the imaginary time (τ) path integral. The weight in the path integral is, as usual, the Euclidean action,

S_ph = ∫ d²r dτ [ p†(∂_τ + ∆ − ∇²/(2m_p))p + h†(∂_τ + ∆ − ∇²/(2m_h))h − Λ(p†h† + hp) ]. (1.8)

Here we have included a term Λ which creates and annihilates particles and holes together in pairs, which is expected since this conserves boson number. Microscopically this term arises from the action of the hopping w on the naive ground state, which creates particle-hole pairs on neighboring sites, so Λ ∼ O(w) (the spatial dependence is unimportant for the states near k = 0).
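Before cataloguing the terms omitted from Eq. (1.8), here is a quick numerical scan of the lobe structure implied by the gaps in Eq. (1.6); the square-lattice coordination number 4 used there is the only input:

import numpy as np

U, n0, zc = 1.0, 1, 4          # on-site repulsion, filling, coordination number

def gaps(w, mu):
    # first-order-in-w particle/hole gaps of Eq. (1.6)
    dp = U * n0 - mu - zc * (n0 + 1) * w
    dh = mu - U * (n0 - 1) - zc * n0 * w
    return dp, dh

# the Mott insulator survives while both gaps are positive; the mu-window shrinks with w
for w in [0.0, 0.02, 0.05]:
    mus = np.linspace(0, 1, 2001)
    stable = [mu for mu in mus if min(gaps(w, mu)) > 0]
    print(w, (min(stable), max(stable)) if stable else "lobe closed")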
We have neglected, for brevity of presentation, to write a number of higher order terms involving four or more boson fields, representing interactions between particles and/or holes, and other boson number-conserving two-body and higher-body collisional processes. Note that the dependence upon w/U in Eq. (1.8) arises primarily through the implicit dependence of ∆. Without loss of generality, we assume Λ > 0, and change variables to the linear combinations

ψ = (p + h†)/√2, ξ = (p − h†)/√2. (1.9)

Then the quadratic terms in the action are

S₂ = ∫ d²r dτ [ ψ†∂_τξ + ξ†∂_τψ + (∆ − Λ)ψ†ψ + (∆ + Λ)ξ†ξ + … ], (1.10)

where the ellipses denote the spatial gradient terms. Notice that the quadratic form for ψ becomes unstable before that of ξ. So let us integrate out ξ, expanding the resulting action in powers and gradients of ψ. In this manner we obtain the theory for the superfluid order parameter ψ [3]

S_ψ = ∫ d²r dτ [ |∂_τψ|² + v²|∇ψ|² + (∆² − Λ²)|ψ|² + u|ψ|⁴ ]. (1.11)

This is the promised relativistic field theory, written in Eq. (2.2) in the main paper. The energy gap for both particle and hole excitations is √(∆² − Λ²), and this vanishes at the quantum critical point. There are a number of additional higher-order terms, not displayed above, which are not relativistically invariant. However, all of these are formally irrelevant at the Wilson-Fisher fixed point which controls the critical theory. Finally, we note that the scaling properties and relativistic invariance of the critical point are sufficient to establish its invariance under conformal transformations.

A. Quantum critical transport

To illustrate the general issues, we begin by computing the transport properties of the free field theory of a complex scalar with mass m, written in a Lorentz invariant notation:

S_φ = ∫ d³x [ |∂_µφ|² + m²|φ|² ]. (1.12)

This theory can be obtained from Eq. (1.11) at u = 0, after appropriate rescalings of co-ordinates and fields. The conserved electrical current is

J_µ = i(φ* ∂_µφ − φ ∂_µφ*). (1.13)

Let us compute its two-point correlator, K_{µν}(k), at a spacetime momentum k_µ at T = 0. This is given by the one-loop diagram which evaluates to

K_{µν}(k) = ∫ (d³p/(8π³)) [ (2p + k)_µ(2p + k)_ν / ((p² + m²)((p + k)² + m²)) − 2δ_{µν}/(p² + m²) ]
          = (δ_{µν} − k_µk_ν/k²) K(k). (1.14)

The second term in the first equation arises from a 'tadpole' contribution which is omitted in Eq. (1.13). Note that the current correlation is purely transverse, and this follows from the requirement of current conservation

k_µ K_{µν}(k) = 0. (1.15)

Of particular interest to us is the K₀₀ component, after analytic continuation to Minkowski space where the spacetime momentum k_µ is replaced by (ω, k). The conductivity is obtained from this correlator via the Kubo formula

σ(ω) = lim_{k→0} (−iω/k²) K₀₀(ω, k). (1.16)

In the insulator, where m > 0, analysis of the integrand in Eq. (1.14) shows that the spectral weight of the density correlator has a gap of 2m at k = 0, and the conductivity in Eq. (1.16) vanishes. These properties are as expected in any insulator. At the critical point, where m = 0, the bosonic spectrum is gapless, and so is that of the charge correlator. The density correlator in Eq. (1.14) and the conductivity in Eq. (1.16) evaluate to the simple universal results

K₀₀(ω, k) = (1/16) k²/√(k² − ω²), σ(ω) = 1/16. (1.17)

Going beyond the free field theory in Eq. (1.12), the effect of interactions can be accounted for order-by-order in u. In the renormalization group approach, u takes the value specified by the Wilson-Fisher fixed point at the quantum critical point. Combined with the absence of divergences in the perturbative expansion (which is a consequence of Eq. (1.15)), this means the only effect of interactions is to change the pre-factor in Eq. (1.17) to a different universal numerical value. So we write

σ(ω) = σ∞, (1.18)

where σ∞ is a universal number dependent only upon the universality class of the quantum critical point, whose value can be computed by various expansion methods.
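The frequency-independence of σ in Eq. (1.17) can be checked numerically from K₀₀ and the Kubo formula (1.16); the small imaginary shift below is a device assumed here to pick out the retarded branch of the square root:

import numpy as np

def K00(w, k, eta=1e-9):
    # K00 of Eq. (1.17); +i*eta selects the retarded continuation
    return k**2 / (16 * np.sqrt(k**2 - (w + 1j * eta) ** 2 + 0j))

w = 0.7
for k in [1e-2, 1e-3, 1e-4]:
    sigma = -1j * w * K00(w, k) / k**2    # Kubo formula, Eq. (1.16)
    print(k, sigma)                        # -> 1/16 as k -> 0, independent of omega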
Non-zero temperatures

We begin by repeating the above computation for the free field theory at T > 0. This only requires replacing the integral over the loop frequency in Eq. (1.14) by a summation over the Matsubara frequencies, which are quantized by integer multiples of 2πT. Such a computation, via Eq. (1.16), leads to the conductivity

Re[σ(ω)] = P δ(ω) + (θ(|ω| − 2m)/16),

where the second term is the pair-creation continuum of Eq. (1.17) (its coefficient acquires thermal occupation corrections which we do not display), and P is the weight of a zero-frequency delta function contributed by thermally excited carriers. In an interacting theory, collisions broaden this delta function into a peak whose d.c. value takes the hydrodynamic form σ(0) = χD. Here χ is the charge susceptibility (here it is the compressibility), and D is the charge diffusion constant. By the universality of the Wilson-Fisher fixed point, we expect that these have universal values in the quantum critical region:

χ = C_χ T/v², D = C_D v²/T,

where again C_χ and C_D are universal numbers. For the conductivity, we expect a crossover from the collisionless critical dynamics at frequencies ω ≫ T, to a hydrodynamic collision-dominated form for ω ≪ T. This entire crossover is universal, and is described by a universal crossover function σ(ω) = Σ(ω/T). Its ω → 0 limit obeys σ(0) = Σ(0) = χD = C_χ C_D, which is a version of Einstein's relation for Brownian motion.

II. COMPRESSIBLE QUANTUM MATTER

Now we will consider the Hubbard model for fermionic particles with spin S = 1/2 (electrons) on the triangular lattice. For small U/w, the ground state of this model is a metal, rather than a superfluid. This is because the fermions cannot condense; instead they occupy all single particle states inside a 'Fermi surface' in momentum space, forming a Fermi liquid. For large U/w, and with a density of one electron per site, we do expect an insulating state to form, with a gap to both particle and hole excitations, just as was the case for bosons. However, the electron localized on each site of the lattice now has a spin degeneracy, and we also have to specify the spin wavefunction in the insulator. At the largest values of U/w, it is believed that the insulator has long-range antiferromagnetic order; we will not study this ordered state here. The nature of the insulator at smaller U/w, and in particular, in the vicinity of the insulator-metal transition, is still a question of some debate. In the following, we will assume that the insulating state proximate to the critical point is a particular "U(1) spin liquid", which we will describe more completely below. The Hubbard Hamiltonian is

H = −w Σ_{⟨ij⟩} (c†_{iα}c_{jα} + c†_{jα}c_{iα}) + U Σ_i n_{i↑}n_{i↓} − µ Σ_i c†_{iα}c_{iα}. (2.1)

Here c_{iα}, α =↑, ↓ are annihilation operators on the site i of a triangular lattice. The density of electrons is controlled by the chemical potential µ, which couples to the total electron density, with

N = Σ_α c†_α c_α. (2.2)

For completeness, we also note the algebra of the fermion operators:

{c_{iα}, c†_{jβ}} = δ_{ij}δ_{αβ}, {c_{iα}, c_{jβ}} = 0. (2.3)

Let us begin by considering the case U = 0. Then the ground state is a metal at all densities, with a Fermi surface separating occupied and empty states in momentum space. Landau's Fermi liquid (FL) theory describes how the free-electron model of a metal can be extended to non-zero U. For our purposes, we need only two basic facts: (i) the fermionic excitations near the Fermi surface are essentially non-interacting electrons, and (ii) the area enclosed by the Fermi surface is equal to the electron density: this is Luttinger's theorem, which we state more explicitly below. At U = 0, the Hamiltonian of the FL metal is

H₀ = Σ_k c†_α(k) [ −2t (cos(k · e₁) + cos(k · e₂) + cos(k · e₃)) − µ ] c_α(k), (2.4)

where the e_i are as shown in Fig. 1. The reciprocal lattice consists of the vectors G = n₁G₁ + n₂G₂, with G₁, G₂ the elementary reciprocal lattice vectors of the triangular lattice and n₁, n₂ integers. The electronic dispersion in Eq. (2.4) is plotted in Fig. 2: it only has simple parabolic minima at k = 0, and its periodic images at k = G, and there are no Dirac points.
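The dispersion of Eq. (2.4) and its filling are easy to explore numerically; the sketch below samples a simple square region of momentum space rather than the true hexagonal Brillouin zone, which is adequate for a qualitative picture:

import numpy as np

t, mu = 1.0, -1.0
e = [np.array([1.0, 0.0]),
     np.array([0.5, np.sqrt(3) / 2]),
     np.array([-0.5, np.sqrt(3) / 2])]   # the three triangular-lattice vectors e_i

def eps(kx, ky):
    # dispersion of Eq. (2.4), including the chemical potential
    k = np.stack([kx, ky], axis=-1)
    return -2 * t * sum(np.cos(k @ ei) for ei in e) - mu

k = np.linspace(-np.pi, np.pi, 400)
kx, ky = np.meshgrid(k, k)
print((eps(kx, ky) < 0).mean())   # fraction of sampled momenta with negative energy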
At any chemical potential, the negative energy states are occupied, leading to a Fermi surface bounding the set of occupied states, as shown in Fig. 3. Luttinger's theorem states that the total area of the occupied states, the shaded region of the first Brillouin zone in Fig. 3, is given by

A = 2π² ⟨N⟩, (2.5)

where N = Σ_α c†_α c_α is the total electron density. This relationship is obviously true for free electrons simply by counting occupied states, but it also remains true for interacting electrons.

Now we turn up the strength of the interactions, U, at a density of one electron per site. By the same argument as that for bosons, an insulator will appear for sufficiently large U. As stated above, we will focus on a particular route to the destruction of the small U Fermi liquid, one which reaches directly to an insulator which is a 'spin liquid' [5][6][7]. The spin liquid insulator is a phase in which the spin rotation symmetry is preserved, and there is a gap to all charged excitations. However, there are gapless spin excitations, and an emergent compact U(1) gauge field in a deconfined phase.

The key to the description of this metal-insulator transition is an exact rewriting of the Hubbard model in Eq. (2.1) as a compact U(1) lattice gauge theory. To derive this, let us decompose the electron into a neutral fermionic spinon f_{iα} and bosons p_i, h_i which keep track of doubly occupied and empty sites (Eqs. (2.7) and (2.8)), subject on every site to the constraint

f†_{iα}f_{iα} − p†_i p_i + h†_i h_i = 1, (2.9)

with all of these operators carrying charges under the U(1) gauge transformation in Eq. (2.10) associated with the redundancy of this description. The constraint in Eq. (2.9) will be the Gauss law of this gauge theory. These operator identities are related to those of the 'slave rotor' representation [8]. First, let us rewrite the Hubbard model in terms of these new bosonic and fermionic operators. The Hubbard Hamiltonian in Eq. (2.1) is now exactly equivalent to a Hamiltonian for the f_α, p, and h operators (Eq. (2.11)), with a partition function (Eq. (2.12)) in which the constraint in Eq. (2.9) is implemented using an auxiliary field λ_i(τ) which acts as a Lagrange multiplier. A key observation now is that the partition function in Eq. (2.12) is invariant under a site, i, and τ-dependent U(1) gauge transformation ζ_i(τ), where the fields transform as in Eq. (2.10), and λ transforms as

λ_i → λ_i + ∂_τζ_i. (2.13)

In other words, λ transforms like the temporal component of a U(1) gauge field. How do we obtain the spatial components of the gauge field? For this, we apply a "Hubbard-Stratonovich transformation" to the hopping term in Eq. (2.11). For this, we introduce another auxiliary complex field Q_ij(τ) which lives on the links of the triangular lattice, and replace the hopping term by terms in which Q_ij couples linearly and separately to the fermion hopping bilinear and the corresponding boson bilinear (Eq. (2.14)). We now see from Eq. (2.10) that Q_ij transforms under the gauge transformation as

Q_ij → Q_ij e^{i(ζ_i − ζ_j)}. (2.15)

In other words, arg(Q_ij) is the needed spatial component of the compact U(1) gauge field. So far, we have apparently only succeeded in making our analysis of the Hubbard model in Eq. (2.1) more complicated. Instead of the functional integral of the single complex fermion c_{iα}, we now have a functional integral over the complex fermions f_{iα}, the bosons p_i, h_i, and the auxiliary fields λ_i and Q_ij. How can this be helpful? The point, of course, is that the new variables help us access new phases and critical points which were inaccessible using the electron operators, and these phases have strong correlations which are far removed from those of weakly interacting electrons. The utility of the new representation is predicated on the assumption that the fluctuations in the auxiliary fields Q_ij and λ_i are small along certain directions in parameter space. So let us proceed with this assumption, and describe the structure of the phases so obtained. We parameterize

Q_ij = Q e^{iB_ij}, λ_i(τ) = λ + iB_{iτ}(τ), (2.16)

and ignore fluctuations in the magnitude Q, and the real number λ.
With these definitions, it is clear from Eqs. (2.13) and (2.15) that B_ij and B_τ form the spatial and temporal components of a U(1) gauge field, and so must enter into all physical quantities in a gauge invariant manner. The values of Q and λ are determined by a suitable saddle-point analysis of the partition function, and ensure that the constraint (2.9) is obeyed. With these assumptions, the partition function separates into separate fermionic and bosonic degrees of freedom interacting via their coupling to a common U(1) gauge field (B_{iτ}, B_ij). In the continuum limit, the gauge fields become a conventional U(1) gauge field B_µ = (B_τ, B). The partition function of the gauge theory is

Z = ∫ Df Dp Dh DB_µ exp(−∫dτ (L_f + L_b)), (2.17)

where L_f and L_b describe the f_α fermions and the p, h bosons respectively, each minimally coupled to the common U(1) gauge field. We begin by neglecting the gauge fields, and computing the separate phase diagrams of L_f and L_b. The fermions are free, and so occupy the negative energy states determined by the chemical potential λ. The phase diagram of L_b is more interesting: it involves strong interactions between the p and h bosons. It can be analyzed in a manner similar to that of the boson Hubbard model (see Chapter 9 of Ref. 4), leading to the familiar "Mott lobe" structure shown in Fig. 4. At large values of Q/U we have the analog of the superfluid states of the boson Hubbard model, in which there is a condensate of the same operator as that in Eq. (1.9):

⟨ψ⟩ ≠ 0, ψ = (p + h†)/√2. (2.18)

Note that ψ is the ladder operator for the number operator n = p†p − h†h used to characterize the insulating phases in Fig. 4. The ψ operator carries unit charge under the U(1) gauge field (from Eq. (2.10)), and so the superfluid phase, with ⟨ψ⟩ ≠ 0, does not break any global symmetries (unlike the boson model of Section I). Instead it is a "Higgs" phase. In the presence of the Higgs condensate, the operator relation in Eq. (2.8) implies that c_α ∼ f_α, and so the f_α fermions carry the same quantum numbers as the physical electron. Consequently, the f_α Fermi surface is simply an electron Fermi surface. Furthermore, the Higgs condensate quenches the B_µ fluctuations, and so there are no singular interactions between the Fermi surface excitations. This identifies the present phase as the familiar Fermi liquid, as noted in Fig. 4. We note that we can equally well identify this phase as a "confining" phase of the U(1) gauge theory, in which the ψ boson has formed a bound state with the f_α fermions, which is just the gauge-neutral c_α fermion. Indeed, as is well known, Higgs and confining phases are qualitatively the same when the Higgs condensate carries a fundamental gauge charge, as is the case here.

Having reproduced a previously known phase of the Hubbard model in the U(1) gauge theory, let us now examine the new phases within the 'Mott lobes' of Fig. 4. In these states, the boson excitations are gapped, and the number operator n = p†p − h†h has integer expectation values. The constraint in Eq. (2.9) implies that only n = −1, 0, 1 are acceptable values, and so only these values are shown. It is clear from the representation in Eq. (2.7) that any excitation involving a change in electron number must involve a p or h excitation, and so the gap to the latter excitations implies a gap in excitations carrying non-zero electron number. This identifies the present phases as insulators. Thus the phase boundary out of the lobes in Fig. 4 is a metal-insulator transition. The three insulators in Fig. 4 have very different physical characteristics.
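The charge assignments implicit above can be tabulated mechanically. In the sketch below the assignments q(f) = +1, q(p) = +1, q(h) = −1 are an assumption, chosen to be consistent with reading the constraint (2.9) as a Gauss law; the check then confirms that an electron of the form f(p† + h) is gauge neutral while ψ carries unit charge:

charge = {'f': +1, 'p': +1, 'h': -1}   # assumed U(1) gauge charges of the slave particles

def q(*ops):
    # total gauge charge of a product; ops are (field, daggered) pairs
    return sum(charge[f] * (-1 if dag else +1) for f, dag in ops)

# electron ~ f p^dag and f h: both terms are gauge neutral
print(q(('f', False), ('p', True)), q(('f', False), ('h', False)))    # 0 0
# Higgs field psi ~ p + h^dag: both terms carry the same unit charge
print(q(('p', False)), q(('h', True)))                                 # 1 1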
Using the constraint in Eq. (2.9), we see that the n = −1 insulator has no f_α fermions. Consequently this is just the trivial empty state of the Hubbard model, with no electrons. Similarly, we see that the n = 1 insulator has 2 f_α fermions on each site. This is just the fully-filled state of the Hubbard model, with all electronic states occupied. It is a band insulator. Finally, we turn to the most interesting insulator with n = 0. Now the electronic states are half-filled, with ⟨f†_α f_α⟩ = 1. Thus there is an unpaired fermion on each site, and its spin is free to fluctuate. There is a non-trivial wavefunction in the spin sector, realizing an insulator which is a 'spin liquid'. In our present mean field theory, the spin wavefunction is specified by the Fermi surface state of the f_α fermions. Going beyond mean-field theory, we have to consider the fluctuations of the B_µ gauge field, and determine if they destabilize the spin liquid. The f_α fermions carry the B_µ gauge charge, and these fermions form a Fermi surface. The gapless fermionic excitations at the Fermi surface prevent the proliferation of monopoles in the compact U(1) gauge field: the low energy fermions suppress the tunneling event associated with a global change in B_µ flux [9,10]. Thus the emergent U(1) gauge field remains in a deconfined phase, and this spin liquid state is stable. These gapless gauge excitations have strong interactions with the f_α fermions, and this leads to strong critical damping of the fermions at the Fermi surface, which is described by a strongly-coupled field theory [11][12][13]. The effect of the gauge fluctuations is also often expressed in terms of an improved trial wavefunction for the spin liquid [5]: we take the free fermion state of the f_α fermions, and apply a projection operator P which removes all components which violate the constraint in Eq. (2.9). This yields the 'Gutzwiller projected' state

|SL⟩ = P Π_k f†_{k↑} f†_{k↓} |0⟩,

where the product over k is over all points inside the Fermi surface.

Finally, we turn to an interesting quantum phase transition in Fig. 4. This is the transition between the spin liquid and the Fermi liquid at total electron density N = 1, which occurs at the tip of the n = 0 Mott lobe. From the boson sector, this looks like a Higgs transition, i.e., the condensation of a complex scalar ψ as in Section I, but in the presence of a fluctuating U(1) gauge field. However, the fermionic sector is crucial in determining the nature of this transition. Indeed, in the absence of the Fermi surface, this transition would not even exist beyond mean field theory: this is because the U(1) gauge field is compact, and the scalar carries unit charge, and so the confining and Higgs phases of this gauge theory are smoothly connected. So we have to combine the Higgs theory of a complex scalar with the gapless Fermi surface excitations. We can obtain the field theory for this metal-insulator transition by applying the methods of Section I to Eq. (2.17). The analog of the condition ∆_p = ∆_h needed to obtain a density of one electron per site is now λ + µ = 0. In this manner, we find the field theory [7,15]

L = |(∂_µ + iB_µ)ψ|² + s|ψ|² + u|ψ|⁴ + f†_α(∂_τ − iB_τ − (1/(2m))(∇ − iB)² − ε_F)f_α + iB_τ N,

where the energy ε_F is to be adjusted to yield total fermion density N = 1. The transition is accessed by tuning s, and we move from a spin liquid with ⟨ψ⟩ = 0 for s > s_c, to a Fermi liquid with ⟨ψ⟩ ≠ 0 for s < s_c. Within the spin liquid phase, we can safely integrate out the gapped ψ quanta, and so obtain the theory in Eq. (3.1) of the main text.
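On two sites at half filling, the Gutzwiller construction can be carried out by hand; the deliberately tiny sketch below (with the amplitudes of the bonding-orbital state tabulated explicitly, signs following from the operator ordering) shows the projection turning the free-spinon state into a bond singlet:

import numpy as np
from itertools import product

# Basis: occupations (n1up, n1dn, n2up, n2dn) with two fermions in total.
# Free ground state: both fermions in the bonding orbital b_s = (f_{1s} + f_{2s})/sqrt(2).
basis = [b for b in product([0, 1], repeat=4) if sum(b) == 2]

def amp(b):
    # amplitude of |b> in b_up^dag b_dn^dag |0>, expanded by hand
    table = {(1, 1, 0, 0): 0.5, (1, 0, 0, 1): 0.5,
             (0, 1, 1, 0): -0.5, (0, 0, 1, 1): 0.5}
    return table.get(b, 0.0)

psi = np.array([amp(b) for b in basis])
proj = np.array([1.0 if (b[0] + b[1] == 1 and b[2] + b[3] == 1) else 0.0 for b in basis])
g = proj * psi
print(g / np.linalg.norm(g))   # equal-weight, opposite-sign amplitudes: a spin singlet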
The critical properties of the theory at s = s_c have been studied [7,14], and an interesting result is obtained: the Fermi surface excitations damp the gauge bosons, so that they become ineffective in coupling to the critical ψ fluctuations. Consequently, the gauge bosons can be ignored in the theory of the ψ fluctuations, and the transition is in the universality class of the 2+1 dimensional XY model. In other words, quite unexpectedly, the critical theory is the same as that of the superfluid-insulator transition of Section I. There are additional gapless excitations associated with the gauge field and the Fermi surface, but these are irrelevant.
2012-04-11T12:04:42.000Z
2011-08-04T00:00:00.000
{ "year": 2011, "sha1": "af374e950804671fec70a2b1fe73e54825ae8956", "oa_license": null, "oa_url": "http://arxiv.org/pdf/1108.1197", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "d7ddf30f052ce12d9e68dd1e7c24ff4a4e138175", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
252972489
pes2o/s2orc
v3-fos-license
Psychological distress and post-traumatic growth in France during the COVID-19 pandemic: A mediation model of psychosocial safety climate as a determinant of work performance

The psychosocial safety climate (PSC) reflects workers' perceptions of senior management's concern for mental health. Because the COVID-19 pandemic has exacerbated organizational issues, PSC could be a target for interventions attempting to preserve both the psychological health of employees and the economic health of companies. This study examines the direct and indirect relationships between PSC and work performance through two indicators of psychological health, psychological distress and post-traumatic growth, during a health crisis, i.e., prior to the second confinement in France. To this end, 2,004 participants from the French workforce completed a survey in October 2020. The results of mediation analyses indicate that PSC has a direct and positive influence on post-traumatic growth (PTG) and performance, as well as a direct negative influence on psychological distress. PSC also has an indirect positive influence on performance via psychological distress. Organizations that wish to jointly address mental health and performance at work would benefit from optimizing PSC.

Introduction

On March 11, 2020, the World Health Organization (WHO) officially declared COVID-19 a pandemic (World Health Organization [WHO], 2020), and as we have all seen, the disease rapidly spread across the globe (Bontempi, 2022). The global population has experienced many health restrictions, e.g., lockdowns, curfews, and social distancing, which have required people to adopt new behaviors in all areas of their lives (Raile et al., 2020). In the workplace, the health crisis has led to new organizational practices, such as teleworking (Feng and Savani, 2020), which have greatly transformed employees' work experiences, e.g., work and home overload while telecommuting (Burk et al., 2021). Some authors underline the pressing need to act to preserve employees' psychological health during the pandemic (Chen et al., 2021). Long before the pandemic, the WHO already stressed the urgency of increasing investment in mental health because depression was already one of the leading causes of disability in the world (World Health Organization [WHO], 2017). Dzau et al. (2020) discuss the risks of a parallel pandemic specific to mental health if organizations do not react quickly to protect their staff. Longitudinal studies are consistent in showing that the COVID-19 pandemic exacerbated mental health problems (Daly et al., 2020; Pierce et al., 2020) and that these effects may even have been underestimated (Czeisler et al., 2021). This crisis context illustrates the extent to which organizations must strike a balance between productivity on the one hand and the health and wellbeing of their employees on the other hand. Psychosocial safety climate (PSC) theory highlights the implications of attaining a balance between productivity and mental health for organizations and their staff. Specifically, PSC refers to "shared perceptions regarding policies, practices, and procedures for the protection of worker psychological health and safety," and PSC represents "the causes of the causes of work stress" (Dollard and Bakker, 2010, p. 579).
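The mediation logic summarized in the abstract (PSC lowering distress, which in turn raises performance) can be sketched in a few lines of code. The variable names, simulated data, and effect sizes below are illustrative only; the study's actual estimation used its own survey measures:

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 2004                                            # sample size matching the study
psc = rng.normal(size=n)
distress = -0.4 * psc + rng.normal(size=n)          # path a: PSC lowers distress
perf = 0.2 * psc - 0.3 * distress + rng.normal(size=n)   # paths c' and b

a = sm.OLS(distress, sm.add_constant(psc)).fit().params[1]
m = sm.OLS(perf, sm.add_constant(np.column_stack([psc, distress]))).fit()
c_prime, b = m.params[1], m.params[2]
print(f"direct effect c' = {c_prime:.3f}, indirect effect a*b = {a*b:.3f}")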
Many studies have demonstrated the precursor role of PSC for work design and employee health, e.g., the reduction of emotional exhaustion (Idris et al., 2011, 2014; Mansour and Tremblay, 2019), but few researchers have used this theory to understand the role of PSC in work performance. Idris et al. (2011) showed that PSC was directly and positively related to perceived performance. In addition, these authors highlighted that PSC positively influences job resources, e.g., managerial support, thus increasing engagement at work, which, in turn, enhances performance. Conversely, a better PSC was associated with reduced work demands, which, in turn, reduced the risk of burnout. In their study, both burnout and engagement were predictors of job performance. Ipsen et al. (2020) argue that "good health is good for business" (p. 1) and that there is a need to address mental health and performance at work simultaneously in research and organizations because these two issues are intrinsically interrelated (Nowrouzi-Kia et al., 2021).

Moments of crisis, such as those triggered by the COVID-19 pandemic, cause an upsurge in mental health problems but also create transformational opportunities for individuals and organizations. One such opportunity is the phenomenon of post-traumatic growth (PTG), which is the set of positive changes following a traumatic event (Tedeschi and Calhoun, 1996, 2004b). Although little studied in the context of a crisis (Gori et al., 2021), this form of growth may have been experienced by some employees. The experience of contracting COVID-19 can be traumatic for some individuals, leading them to experience increased anxiety, distress, and depression (Masiero et al., 2020; Cohen and Nica, 2021). For others, however, the experience can also lead to lasting changes in the way they view the world, e.g., appreciating life more, changing their relationship to work, or altering their spiritual life (Nearchou and Douglas, 2021). These opportunities for PTG during a health crisis may, on the one hand, depend on individual characteristics such as resilience, hope, or beliefs (Nearchou and Douglas, 2021; Vazquez et al., 2021). On the other hand, they may also depend on organizational context because some studies suggest that PTG is more likely in a context in which mental health issues are prioritized and supported by top management (Wood et al., 2020).

The present study examines the mediating role of psychological distress and PTG in the relationship between PSC and job performance. More precisely, this research raises the following questions: (1) By putting in place the appropriate practices, policies, and procedures related to psychological health, especially during a health crisis, can organizations limit their employees' psychological distress while helping them achieve PTG? (2) To what extent does PSC influence employees' performance during a health crisis? (3) To what extent are psychological health indicators such as psychological distress and PTG explanatory mechanisms for the relationship between PSC and performance? To answer these questions, this study analyzes PSC, psychological distress, PTG, and perceived performance.

Psychosocial safety climate

A good PSC is characterized by freedom from psychological and social risk or harm at the highest levels of the organization (Dollard and Bakker, 2010).
Specifically, PSC includes four dimensions: (1) top management commitment, namely their support in the prevention of work-related ill-being through the implementation of useful and decisive actions; (2) the priority given to PSC by senior management, which is reflected in the importance placed on the psychological health and safety of employees vs. production; (3) communication, which refers to the organization's ability to listen, dialog, and take into account its members' contributions to psychological health and safety; and (4) organizational participation, which entails the consultation of employees and unions on issues related to psychological health and safety (Dollard and Bakker, 2010). As an organizational resource likely to influence both the constraints of a job (i.e., demands requiring compensatory physical and/or psychological efforts in order to cope with the situation while achieving the objectives set) and its resources (i.e., features reducing the intensity of the constraints and their deleterious effects on health while stimulating personal growth and development) (Hakanen et al., 2006), PSC can be considered an extension of the Job Demands-Resources model (JD-R; Demerouti et al., 2001; Schaufeli and Bakker, 2004; Bakker and Demerouti, 2007). The JD-R model is based on two distinct psychological processes: the health impairment process, which assumes that constraints lead to various health problems, e.g., depression (Hakanen et al., 2008), and the motivational process, which argues that resources have motivational potential because they promote employee learning and development (Bakker and Demerouti, 2007). Dollard et al. (2019) assert that PSC mitigates health problems indirectly by reducing constraints and their effects, and increases work commitment indirectly through resources. More concretely, in a weak PSC context, employees and their managers may have no internal mechanism, e.g., reporting procedures or a counseling unit, enabling them to report individual (e.g., chronic fatigue, stress, and risk of burnout) or collective (e.g., work overload and interpersonal conflicts) difficulties to management. A good PSC implies that the organization gives a high priority to the mental health of staff and managers and puts in place the necessary mechanisms to ensure managers have the resources needed to support their staff. A good PSC has been associated with better managerial practices because it promotes better mental health for managers (Biron et al., 2018; Parent-Lamarche and Biron, 2022). In the same vein, a multi-level study of healthcare workers during the pandemic showed that PSC promotes resilience through hope, as well as increasing the impact of supportive leadership on employee hope (Siami et al., 2022). In contrast, when PSC is low, the means available to employees to report their difficulties may be inadequate or non-existent. As a result, the work-related constraints to which they are exposed are more likely to persist over the long term, affecting their health and performance (Liu et al., 2020; Biron et al., 2021). Similar studies (Idris et al., 2015; Lee and Idris, 2017) have confirmed that PSC acts as an antecedent to job demands and resources. By strengthening employees' job resources, e.g., learning opportunities, PSC increases their interest in and enthusiasm for their work, i.e., work engagement, as well as their performance. Despite its theoretical soundness, few studies have considered the mechanisms through which PSC influences work performance during a health crisis.
Therefore, this study analyzes the effects of PSC on the psychological health (psychological distress and post-traumatic growth) and performance of employees during a health crisis.

Psychological distress

Psychological distress is generally used as an early indicator of mental disorder (Kessler et al., 2003b). It is associated with various symptoms, such as cognitive impairments, irritability, depression, and anxiety (Ching et al., 2021). Previous studies indicate that high psychological demands, low work support, and low recognition for work efforts are strong predictors of psychological distress (Duchaine et al., 2017). The consequences of psychological distress include decreases in work productivity due to absenteeism (Duchaine et al., 2020) and presenteeism (Biron et al., 2021). For example, a study by Mirza et al. (2019) conducted in a sample of oil and gas workers in Malaysia showed that psychological distress mediates the relationship between PSC and safety behaviors, such that PSC reduced psychological distress, which, in turn, improved safety behaviors. Like Mirza et al. (2019), we suggest that a psychologically safe climate will help reduce distress, which, in turn, will improve work performance.

Hypothesis 1. Psychosocial safety climate is negatively related to psychological distress.

Post-traumatic growth

The COVID-19 pandemic has engendered or reinforced work-related constraints, e.g., job uncertainty, such that the work environment may now pose new risks to workers' psychological health (Zahiriharsini et al., 2022). It has consequently become essential to identify the organizational variables, e.g., PSC, related to both positive and negative outcomes for employees, specifically in times of a health crisis. Introduced by Tedeschi and Calhoun (1996, 2004a,b), the concept of PTG corresponds to the set of positive changes following a traumatic event. More precisely, it describes the process by which individuals experience these changes in certain areas of their lives through the reevaluation of their worldview (Gori et al., 2021). Although PTG is considered a salutogenic concept (Hamama-Raz et al., 2020), Tedeschi and Calhoun (2004a) clarify that, while PTG occurs more frequently in the context of suffering and inner struggle, it can also emerge in the lives of individuals who have not experienced specific trauma (Tedeschi and Calhoun, 1996), particularly in occupational settings (Sattler et al., 2014). For example, Stanton et al. (2006), in their review of the literature on the subject, suggest that cancer patients can experience PTG by, among other things, seeking more social support or using positive and adapted coping strategies. Accordingly, the constraints associated with the pandemic situation, e.g., successive lockdowns, may have both traumatic and constructive consequences (for a narrative review of PTG in the workplace during COVID-19, see Finstad et al., 2021; Vazquez et al., 2021). Tedeschi and Calhoun (1996, 2004b) identified five areas that are central to the concept of PTG: personal strength, new possibilities in life, relationships with others, appreciation of life, and spiritual change. First, people who experience an increase in personal strength feel that they can better handle everyday tasks and events that had been perceived as insurmountable, e.g., hard-to-achieve goals or internal conflicts.
Second, PTG involves the identification of new possibilities for oneself and one's life, such as taking a different path than one had planned, e.g., career reorientation or a change in career development (Tedeschi and Calhoun, 2004b). Third, PTG is characterized by potentially more intimate interpersonal relationships. Individuals thus become more aware of the importance of their relationships and cherish them more. This change also results in increased compassion for others, e.g., during a restructuring or job loss (Tedeschi and Calhoun, 2004b). Fourth, a greater appreciation of life can also qualify as a PTG experience. For example, many aspects of daily life, however small, are associated with small joys that can take on special meaning. The sense of priorities is profoundly altered such that "little things" are more valued, e.g., time spent with loved ones (Tedeschi and Calhoun, 2004b). Finally, the PTG experience can include positive changes in spirituality. People who experience PTG, be they religious or not, often engage in spiritual and existential reflection, which helps them cope with painful emotions or loss (Tedeschi and Calhoun, 2004b). To summarize, the PTG experience allows individuals to engage in a cognitive process, e.g., positive reinterpretation, positive reframing, interpretive control, and the reconstruction of events, that imparts meaning to their experiences and future perspectives. It allows them to develop resources with which to cope with new and undesirable situations (Hobfoll, 2002; Sattler et al., 2014). Post-traumatic growth is increasingly being investigated in work settings (e.g., physicians, Taku, 2014; firefighters, Yang and Ha, 2019; paramedics, Ragger et al., 2019), but occupational factors are rarely considered. The literature has focused on the benefits of individual (e.g., emotional intelligence, Li et al., 2015; optimism, Yang and Ha, 2019; sense of coherence, Ragger et al., 2019) or personal (e.g., family support, Taku, 2014) characteristics in terms of PTG; scant research has explored organizational avenues of action. However, a few studies have noted the positive influence of the meaning of work (Hamama-Raz et al., 2020), recognition at work (Idås et al., 2019), and perceived social support in the workplace (Sattler et al., 2014) on PTG. Maitlis (2020) endorses various organizational practices that promote the development of PTG in employees, such as establishing a supportive organizational culture for employees coping with trauma, paying special attention to teams that are suffering, and creating organizational conditions that promote interpersonal trust and psychological safety.

Hypothesis 2. Psychosocial safety climate is positively related to post-traumatic growth.

Relationships between psychosocial safety climate, distress, growth, and performance

Organizational performance reflects a firm's results, ranging from productivity to profitability, while remaining dependent on employees' perceived performance (Ipsen et al., 2020). Depending on their efficiency levels, personnel may or may not achieve the objectives set by the employer. This is why many authors emphasize the fact that psychological health and performance are intrinsically linked (Peccei and Van De Voorde, 2019), such that employees with good psychological health report better performance than those with poor psychological health. Despite organizational and governmental policies that assume a lack of connection between health and performance (Hasle et al., 2019), Ipsen et al.
(2020) argue that these variables should be examined and integrated jointly into central managerial concerns and practices. The present study attempts to respond to this call by focusing on workers' psychological health and performance simultaneously. Performance indicators vary widely between studies and can include subjective measures, e.g., perceived performance, or objective measures, e.g., total sales volume (Shannahan et al., 2013). Performance can be self-reported or assessed by others, e.g., by the supervisor (Alessandri et al., 2017), and can also be protean with respect to the profession studied, e.g., safe behavior among oil and gas workers (Mirza et al., 2019). Kessler et al. (2003a) recommend examining performance as a subjective and global construct whereby employees evaluate their overall performance according to their own criteria. Although this operationalization does not allow one to distinguish among employees' skills, behaviors, and results, it does allow one to put these factors into perspective and determine whether employees have met the organization's requirements (Shimazu and Schaufeli, 2009; Shimazu et al., 2010). Moreover, this approach seems particularly well suited to a representative sample of a national population, as is the case in our study of the French working population. One of the main objectives of examining performance is to identify the variables that best predict it, particularly during a health crisis in which labor shortages are acute. Thus, employees' health is construed as a key determinant of performance (Ipsen et al., 2020), such that wellbeing and ill-being will have differentiated effects. For example, several studies have shown that sleep disorders (Giorgi et al., 2018), psychological ill-being (Huang and Simha, 2018), and perceived stress (Lindegård et al., 2014) lead to performance deterioration. In addition, a few studies find that psychological distress is negatively related to performance (Lim and Tai, 2014) because distress leads to decreased attention, motivation, and effort. Conversely, several studies demonstrate that engagement at work (Shimazu and Schaufeli, 2009) and subjective wellbeing (Salgado et al., 2019) increase performance. Similar results were also found during the COVID-19 pandemic. Nemteanu et al. (2021) showed that job satisfaction positively influenced performance while negatively affecting counterproductive behaviors. Similarly, Prodanova and Kocarev (2021) highlighted the negative indirect influence of information and communications technology (ICT) anxiety on work-from-home performance via job efficacy. Although no research to date has examined the influence of PTG on performance, it is likely that, as a salutogenic indicator, the resources with which PTG is associated, e.g., improved self-image and higher-quality interpersonal relationships, allow employees to experience more positive affect and events perceived as stimulating and, thus, to adopt the appropriate behaviors so as to achieve high levels of performance. Thus, we offer the following hypotheses (Figure 1).

Hypothesis 3. Psychological distress is negatively related to perceived performance, whereas PTG is positively related to perceived performance.

Hypothesis 4. The positive relationship between PSC and perceived performance is mediated by psychological distress and post-traumatic growth.

Participants and procedure

All participants in this study were recruited through a French opinion polling institute, OpinionWay, with which we collaborated in this work.
The participants completed an online questionnaire between October 19 and 28, 2020. An invitation was sent by email telling them how to access the questionnaire. The targeted sample was representative of the characteristics of the working population in France, e.g., the ratio of men to women, and aged 18 years or more. The representativeness of the sample was based on quota methods for gender, age, and profession, applied after stratification by region and town size. In addition, the participants were told that this research was anonymous and confidential, that there were no right or wrong answers, and that it was important to answer sincerely. The survey took no more than 20 min to complete. The socio-demographic and socio-professional characteristics of the participants are presented in Table 1.

Measures

Psychosocial safety climate

Participants reported their perceptions of their organization's PSC based on four items (α = 0.90 for this study), i.e., "Senior management shows support for stress prevention through involvement and commitment," "Senior management considers employee psychological health to be as important as productivity," "There is good communication here about psychological safety issues which affect me," and "In my organization, the prevention of stress involves all levels of the organization" (Dollard, 2019). The instructions they were given took into account the COVID-19 pandemic context (i.e., "The following statements relate to psychological health and safety within your organization. Considering your current employment status during this pandemic, please select the answer that best fits your situation"). Responses ranged from 1 (Strongly Disagree) to 5 (Strongly Agree).

Psychological distress

The six items of the Kessler Psychological Distress Scale (K6; Kessler et al., 2002, 2003b) were used to measure the frequency with which participants had exhibited symptoms of non-specific psychological distress the week prior, e.g., feeling nervous, depressed, agitated, or irritable (α = 0.90 for this study). The response choices ranged from 1 (Never) to 5 (All the time). This measure was used because it reflects the diagnostic criteria for psychological ill-being, specifically major depression and generalized anxiety disorder (Kessler et al., 2002). The K6 has been validated with adults in several studies; its psychometric properties are as good as those of the K10 (Kessler et al., 2002, 2003b; Furukawa et al., 2003). The scale can also be used with established thresholds to discriminate cases of serious mental problems from non-cases (Kessler et al., 2003b, 2010).

Post-traumatic growth

The post-traumatic growth inventory (PTGI), developed by Tedeschi and Calhoun (1996), measures perceived benefits following a traumatic event. Participants responded to a total of 21 items. Specifically, they were asked to report the extent to which events related to the health context, i.e., the declaration of the COVID-19 pandemic, confinement, and re-opening, caused lasting changes (α = 0.95 for this study; e.g., "I have new interests," "I feel closer to others," "I have a greater appreciation of the value of my life"). The responses ranged from 1 (Not at all) to 5 (Totally).

Perceived performance

Performance was measured by responses to the following question: "Over the past week, how would you rate your performance at work on a scale of 0-100%?" (Kessler et al., 2003a).
The responses ranged from 0% (the worst performance an employee could deliver) to 100% (the best performance an employee could achieve) in 5% increments. The main reason for the choice of this scale is that the nature of performance indicators varies significantly from one study to another, which makes it all the more difficult to examine work performance in a population-based sample such as the one used in this study. In this respect, some authors propose measuring performance through a one-item subjective scale (Shimazu and Schaufeli, 2009; Shimazu et al., 2010), which allows us to take into account disparate professional backgrounds.

Analyses

The data were analyzed using the Statistical Package for the Social Sciences (SPSS 23) software. To test the set of hypotheses, several steps were followed. First, descriptive and correlation analyses were conducted to explore the relationships between the variables, i.e., PSC, psychological distress, PTG, and perceived performance. Second, analyses were conducted to test the mediating effects of psychological distress and PTG on the relationship between PSC and perceived performance. To this end, the procedure defined by Hayes and Preacher (2014) was used. It involves estimating four parameters: alpha, which corresponds to the regression weight of PSC on each mediator, namely psychological distress and PTG; beta, which corresponds to the regression weight of each mediator on perceived performance; c, which corresponds to the total effect (i.e., direct and indirect) of PSC on perceived performance; and c′, which corresponds to the direct effect of PSC on perceived performance (indirect effect = c − c′). We can thus differentiate the direct and indirect effects of an independent variable on a dependent variable. Finally, the indirect effect is calculated as the product of the alpha × beta relationships for each mediator. Its 95% confidence interval is estimated from a resampling procedure that is repeated 5,000 times. This commonly used procedure produces a more reliable estimate of the confidence interval because it is robust to a non-normal distribution on the part of the indirect effect (Preacher and Hayes, 2008). These mediation analyses were conducted using the freely available macro PROCESS v3.5 (model 4) developed by Hayes (2022). Next, simple mediation analyses were performed to identify the potential mechanisms, i.e., psychological distress and PTG, via which PSC influences perceived performance. The results are presented in Table 3. Psychosocial safety climate is negatively associated with psychological distress (Table 3).

Discussion

This study examined the effects of PSC on psychological distress, PTG, and perceived performance among French employees during the COVID-19 pandemic, specifically prior to the second confinement in France (October 30 to December 15, 2020). First, as hypothesized, our results indicate that PSC is positively related to PTG but negatively related to psychological distress. These results support H1 and H2. Our results partially supported H3, showing that distress was negatively associated with performance, but the association between PTG and performance was not significant. As for H4, the association between PSC and performance was partially mediated by psychological distress: PSC indirectly fostered work performance by reducing psychological distress. The mediating effect of PTG was not significant.
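To make the alpha/beta/c/c′ logic of the Analyses section concrete, the following is a minimal sketch of a single-mediator percentile bootstrap in Python. It is not the PROCESS macro, and the variable names and simulated numbers are illustrative assumptions, not values from this study; PROCESS model 4 with two parallel mediators follows the same mechanics.

```python
import numpy as np

rng = np.random.default_rng(0)

def simple_mediation(x, m, y, n_boot=5000):
    """Single-mediator model: alpha (X -> M), beta (M -> Y given X),
    c (total X -> Y), c' (direct X -> Y given M).
    The indirect effect alpha * beta gets a percentile bootstrap CI."""
    def paths(x, m, y):
        a = np.polyfit(x, m, 1)[0]                        # X -> M slope
        X2 = np.column_stack([x, m, np.ones_like(x)])
        cp, b, _ = np.linalg.lstsq(X2, y, rcond=None)[0]  # X, M -> Y
        c = np.polyfit(x, y, 1)[0]                        # total effect
        return a, b, c, cp

    a, b, c, cp = paths(x, m, y)
    n = len(x)
    boots = np.empty(n_boot)
    for i in range(n_boot):
        idx = rng.integers(0, n, n)                       # resample participants
        ra, rb, _, _ = paths(x[idx], m[idx], y[idx])
        boots[i] = ra * rb                                # bootstrap indirect effect
    ci = np.percentile(boots, [2.5, 97.5])
    return dict(alpha=a, beta=b, c=c, c_prime=cp, indirect=a * b, ci95=ci)

# Illustration on simulated data where PSC lowers distress,
# and lower distress raises performance.
n = 300
psc = rng.normal(size=n)
distress = -0.5 * psc + rng.normal(size=n)
perf = -0.4 * distress + 0.2 * psc + rng.normal(size=n)
res = simple_mediation(psc, distress, perf)
print(res["indirect"], res["ci95"])  # a 95% CI excluding 0 indicates mediation
```

As in the procedure above, the indirect effect is the product alpha × beta, and its confidence interval comes from resampling participants 5,000 times rather than from a normality assumption.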
Theoretical contributions

Our results are consistent with previous work that found that PSC was associated with positive consequences for both psychological health and performance (Idris et al., 2015). More tangibly, PSC is an organizational resource that tends to mitigate constraints such as work overload, whilst promoting resources such as social support, autonomy, and skills development (Dollard and Bakker, 2010; Yulita et al., 2022). PSC implies that key stakeholders are enabled to respond promptly and proactively to the psychological health issues exacerbated by the pandemic. PSC has been associated in previous studies with many mental health outcomes, such as psychological distress (Platania et al., 2022; Yulita et al., 2022), and with the core components of the JD-R model, such as burnout and engagement (Dollard and Bakker, 2010; Idris et al., 2011). However, to the best of our knowledge, it had never been used in the context of a crisis as an antecedent to PTG; this study thus responds to the recent call to relate PSC to a broader range of outcomes (Dollard et al., 2019). Our findings imply that employees who evolve in a work climate in which they perceive that their wellbeing is considered and preserved by their organization report less psychological distress and tend to experience the COVID-19 crisis more positively. PSC thus helps maintain healthy working conditions, allowing employees to thrive professionally through good health and strong performance. Second, consistent with previous research (Lim and Tai, 2014), our results showed that psychological distress was negatively associated with perceived performance. Furthermore, we demonstrate that PSC positively influences perceived performance via psychological distress. In other words, psychological distress is an explanatory mechanism for the relationship between climate and performance such that, when PSC is perceived to be high, performance levels increase via a decrease in distress levels. These results are coherent with the Conservation of Resources theory (Hobfoll, 1989, 2002). Hobfoll and Shirom (2000) postulate that, when individuals have the necessary resources, e.g., a strong PSC, to cope with the constraints of their environment, they are also able to conserve and renew individual resources to preserve their wellbeing. Employees with sufficient resource reservoirs can undertake various projects at the workplace, intellectual challenges, or new career or training opportunities because they have the energy and motivation to achieve these goals. Conversely, if employees lack the necessary resources, e.g., a weak psychosocial safety climate, to perform their work despite certain constraints, e.g., a lack of autonomy or recognition, they risk developing higher levels of ill-being, e.g., psychological distress, and being unable to achieve their performance objectives. For employees, high levels of psychological distress are often associated with lower levels of concentration (Lim and Tai, 2014) and work engagement (Inoue et al., 2010). They thus become inattentive and put forth less effort when carrying out their tasks. Third, contrary to our expectations, our results suggest that PTG is not significantly correlated with perceived performance and that it does not mediate the relationship between climate and performance. Accordingly, although PSC promotes the development of PTG, which is beneficial to employees' psychological health, it does not enhance workers' performance.
This can be explained mainly by conceptual reasons linked to the very definition of PTG and its components. Tedeschi and Calhoun (1996, 2004b) identified five factors that are central to the concept of PTG: personal strength, new possibilities in life, relationships with others, appreciation of life, and spiritual change. While it is true that this growth allows individuals to develop new resources through pleasurable emotional, interpersonal, or spiritual experiences, it is also possible that the benefits of these experiences remain highly personal. In other words, the benefits experienced through PTG do not induce changes or improvements in performance but, rather, in individual wellbeing. For example, although PTG does not influence employee performance directly, it remains associated with significant reservoirs of resources that employees can draw on. Since PSC was found to be a positive determinant of PTG, future research could investigate the explanatory mechanisms behind this association. For example, PTG may depend not only on contextual factors such as PSC, but also on leadership behaviors specific to PTG, as suggested by Wood et al. (2020) in their study of a military sample. Lastly, as pointed out by Maitlis (2020), it is likely that certain aspects of growth are not enacted behaviorally.

Limitations and future research directions

Although this study deepens our understanding of the relationship between PSC and perceived performance during a health crisis through two indicators of psychological health (psychological distress and PTG), it has limitations that deserve mention. First, this work is based on a cross-sectional design, which does not allow us to demonstrate causal relationships between our constructs, e.g., PSC and PTG. Therefore, longitudinal and experimental studies should be conducted to confirm and generalize these results, both within a representative sample of the French population and with more specific professions, e.g., teachers, or hierarchical levels, e.g., local managers. Second, we examined the extent to which specific indicators of psychological health, i.e., psychological distress and PTG, influence perceived employee performance. However, we did not consider any objective performance indicators that could limit social desirability bias, i.e., the tendency to distort self-descriptions in a positive sense (McCrae and Costa, 1983), nor did we consider multisource measures, e.g., ratings by co-workers and supervisors, that could minimize common variance bias, i.e., variance in the dimensions studied attributable to the measurement method rather than to the constructs that the measures are assumed to represent (Podsakoff et al., 2003). Although we used only tools whose psychometric qualities had been confirmed repeatedly, future research could draw on multisource data, e.g., peer-perceived organizational citizenship behaviors, and other indicators of organizational health, e.g., absenteeism and turnover. Multi-item and multi-dimensional scales would also be welcome because, while the one-item performance measure has advantages, e.g., the ability to survey a sample with a variety of jobs, it does not allow for the examination of specific behaviors associated with performance, e.g., organizational citizenship behavior, nor the achievement of more concrete organizational objectives, e.g., the quality of brand and product presentation, including those relating to the COVID-19 pandemic, such as performance while teleworking.
The present study was conducted in the context of a health crisis, but it would be relevant to contrast these results with data collected in a less turbulent context. For example, Dollard and Bailey (2021) showed that, in times of crisis as well as in non-crisis times, PSC can be developed and sustained with leaders and teams through appropriate interventions. Placing mental health among the priorities of top management is all the more relevant given that the pandemic has generated and even exacerbated emerging risks, such as unethical culture, technological pressure, and the management of organizational change (Zahiriharsini et al., 2022). As suggested by Dollard and Bailey (2021), the pandemic has put mental health on the radar of policy makers. This has led to a multitude of interventions that are not always grounded in theory or empirical evidence. Our study corroborates previous ones highlighting the fact that PSC is a key target for both mental health and organizational performance (Idris et al., 2015; Biron et al., 2021; Dollard and Bailey, 2021; Parent-Lamarche and Biron, 2022).

Practical implications

Our results underline the benefits of PSC for employees' psychological health and performance in the context of a health crisis, particularly during periods of confinement. Therefore, it is essential for organizations to put in place policies, practices, and procedures explicitly intended to preserve workers' psychological health and safety (Dollard and Bakker, 2010; Dollard et al., 2019). These measures could include developing an internal process to encourage employees to share their problems during a health crisis, e.g., individual or group interviews on health and psychological safety, and proposing internal solutions to address them. For example, the health context has disrupted many work practices, e.g., through the deployment of telecommuting, and has compartmentalized departments and colleagues, leading to feelings of isolation. In cases in which difficulties regarding teleworking, e.g., work overload and an imbalance between life domains, reach top managers, it could be useful to train all staff in good practices related to telework in order to avoid an increase in working hours, i.e., starting earlier and finishing later, and the mental overload related to household tasks, e.g., looking after the children while attending a meeting via videoconference. Concurrently, drawing on Dollard and Bailey (2021), managers could be trained in practices that take such difficulties into account, on the one hand, by equipping them to recognize the signals of ill-being in their teams and, on the other hand, by enabling them to address the associated emotional load.

Conclusion

Overall, this research sheds light on the role of PSC in perceived performance via two distinct mental health pathways, namely psychological distress and PTG. This expands the scope of studies that have primarily considered the effects of PSC on mental health, thus attempting to answer the call of Ipsen et al. (2020) to consider mental health and performance simultaneously rather than separately, as is most often the case in research and practice. Given the deterioration of mental health in many workplaces as a result of the pandemic and the critical and pervasive labor shortages in several work sectors, it is crucial that leaders develop better practices, policies, and procedures to ensure that workers can work in psychologically safe environments.
Data availability statement

The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.

Ethics statement

Ethical review and approval was not required for the study on human participants in accordance with the local legislation and institutional requirements. The patients/participants provided their written informed consent to participate in this study.

Author contributions

ÉS, J-PB, and CN contributed to the conception and design of the study. ÉS and CN organized the database. ÉS and HI performed the statistical analysis. ÉS and J-PB wrote the first draft of the manuscript. CB and HI wrote sections of the manuscript. All authors contributed to manuscript revision, read, and approved the submitted version.

References

Dollard, M. F., and Bailey, T. (2021). Building psychosocial safety climate in turbulent times: The case of COVID-19. J. Appl. Psychol. 106, 951-964. doi: 10.1037/apl0000939

Dollard, M. F., and Bakker, A. B. (2010). Psychosocial safety climate as a precursor to conducive work environments, psychological health problems, and employee engagement. J. Occup. Organ. Psychol. 83, 579-599.
RELIGION AND CIVIL SOCIETY IN MODERN UZBEKISTAN

The article examines the issues of the politicization and radicalization of religious consciousness in Uzbekistan in the first years of independence, the emergence of political Islam, and the consequences of this development for civil society.

Introduction

It is well known that one of the most dramatic consequences of the collapse of the Soviet Union was the change in the place and role of religion in the spiritual, cultural, and socio-political life of the newly independent states. A great deal of research has been devoted to the specifics of this process. Most of it emphasizes that this happened as a result of the loss of spiritual and ideological guideposts and the formation of a kind of ideological vacuum, which was bound to be filled with a new ideology resting on a massive social base and a powerful integrative potential. In Uzbekistan, which is considered one of the influential centers of Islamic civilization and where Muslims make up almost 90 percent of the population, the Islamic religion laid claim to the role of such an ideology. This was perhaps one of the main threats to the formation of civil institutions based on the principles of democratic pluralism and free thinking [1-3].

Materials and methods

The dynamic revival of religious values also contributed to the enhancement of the role and significance of religion in public and political life. This revival was due to a number of specific factors. The most important of them was a fundamental change in the attitude of the state toward religion. As a result of the constitutional recognition of the right of religion to participate in public life (not formally, as was the case in Soviet times, but through the actual introduction of freedom of conscience into practice) and the adoption of legislative acts designed to regulate religious life in accordance with the rule of law, a legal field was formed in which religion began to function freely. Of course, the formation of this field did not always develop smoothly. Religious freedoms that appeared quite spontaneously were also exploited by destructive forces [4-8]. In the new situation, radical religious authorities, acting on behalf of the Islamic religion, became more active. Their practice clearly manifested a desire to expand the sphere of Islamic jurisdiction beyond the area of religious rituals and spiritual and moral education. The religious views of such figures fit organically into the idea, common to the foreign radical religious movements that had intensively penetrated Uzbekistan, that Islam is both a religion and a state. One of the consequences of this mutually reinforcing coincidence of interests, dangerous for national security, was the emergence of political Islam in Uzbekistan and the appearance, in the early 1990s, of the first Islamist organizations in the Fergana Valley. The most famous of them were "Adolat" ("Justice"), "Odamiylik va insoniylik" ("Humanity"), and "Islom lashkarlari" ("Warriors of Islam"). The ideological centers of these movements were the cities of Namangan, Andijan, and Kokand, whose populations are more religious than those of other regions of Uzbekistan. The illegal paramilitary detachments they created began to arbitrarily appropriate the functions of official law enforcement agencies in maintaining public order and the observance of morality.
It is known from recent history that the socio-political crisis reached its apogee on December 8, 1991, when the Islamists, led by the notorious Tahir Yuldashev, seized the building of the hakimiyat of Namangan province and publicly demanded that Uzbekistan be declared an Islamic state. For the new Uzbekistan, which had just embarked on the path of independent development, a truly fateful moment had come. In that situation, so dangerous for the future of the country, the decisive role was played by the personal qualities of the first president of Uzbekistan, Islam Karimov, who came out to the raging Islamists and engaged them directly. His political wisdom, patriotism, and personal fearlessness were fully manifested here. In fact, this was the first decisive blow dealt to the radical Islamist movements, which had openly announced their intention to create a theocratic state in the Fergana Valley called the "Kokand Khanate". The result of this evolution of the religious situation was the clash of the understanding of Islam traditional for the peoples of Uzbekistan, characterized by deep tolerance and the exaltation of enlightenment, with its extremist interpretations, actively promoted by radical local religious leaders and Islamist movements infiltrating from abroad. This clash became, without any exaggeration, a deadly threat to the possibility of building a legal democratic state and a pluralistic civil society in Uzbekistan. The evolution of the religious situation required the urgent improvement of legislation designed to regulate the religious life of society and ensure its compliance with the highest national interests. Moreover, the actions of Islamist groups, as well as of religious and sectarian movements associated with other religions that had penetrated the country, showed signs of violating the provisions of the Constitution of the Republic of Uzbekistan on guarantees of freedom of conscience for all citizens and the inadmissibility of the forced imposition of religious views (Article 31), the prohibition of the creation of political parties on a religious basis and the inadmissibility of the creation of paramilitary, secret societies and associations (Article 57), the principle of the separation of religious organizations and associations from the state (Article 61), and much more. In such conditions, measures were taken to improve the legislative framework for religious life, which played a positive role in changing the general situation in the country. However, religious and educational activities were of fundamental importance for the further evolution of the general situation. They put a reliable barrier in the path of the further radicalization of the religious consciousness of the Muslim part of the population, which was dangerous for national security. In Uzbekistan, this alarming reality was recognized in time. Accordingly, widespread religious and educational activities were launched throughout the country [7-10]. The spectrum of religious educational work and the components involved in it is very extensive and varied. It includes the popularization of the primary sources of Islam, the Qur'an and Sunna; the revival of the religious heritage of the Uzbek people; active work in translating important theological, legal, and historical works on Islam into the Uzbek language; the development of a network of educational institutions designed to train religious personnel; and more.
This work involved not only the purely religious wing, represented by the structures of the Muslim Board of Uzbekistan and enlightened religious figures, but also governmental, women's, and other public organizations, research institutions, NGOs, and the like. Religious elements with radical and extremist views also sought to influence this process. The religious-political party "Hizb al-Tahrir" showed particular activity in this respect. Leaflets and other printed materials actively disseminated by the party at the end of the 20th century and in the first years of the 21st were replete with open calls for extremism in public and political life and for terrorist actions directed against the established public order, including armed struggle against the legitimate constitutional order. The negative consequences of such activities were not limited to Uzbekistan alone but went beyond its borders, creating a real threat to the processes of liberalization and democratization of public and political life and the establishment of the principles of pluralism throughout the Central Asian region. The first fundamental steps of a religious and educational nature were initiated by influential local theologians. They considered it necessary to start, figuratively speaking, with the "legalization" in Uzbekistan of the written primary sources of Islam, the Qur'an and Sunna. It is noteworthy that most of the Uzbek ulema remained faithful to the traditions of a tolerant understanding of Islam in the spirit of the Hanafi school of fiqh, which has prevailed since the early stages of the spread of Islam in Central Asia. During the years of independence, the most famous of these scholar-theologians carried out three translations of the meanings of the Qur'an into the Uzbek language. These translations were published in mass circulation. This was a natural and logical step, for the increasingly active Islamist movements, when arguing their ideas, appealed primarily to these sources, arbitrarily interpreting their positions to suit their selfish aspirations. The Hizb al-Tahrir party distinguished itself with particular radicalism in this respect. It filled the consciousness of Muslims with a radical interpretation of the Qur'anic verses and hadiths and tried to incite them against representatives of non-Muslim confessions. The fact that some of the leaflets distributed by the party with similar content were in Russian and English clearly indicated that the target of Hizb al-Tahrir's activities was not only the local population but also foreign citizens. This was another dangerous clash between radical and traditional Islam in modern Uzbekistan. The power of its influence on national security was like a time bomb. Another fundamental project that played a special and, at the same time, controversial role in the popularization of the primary sources of Islam in Uzbekistan was the translation of the hadith collections into Uzbek. The choice of the specialists who carried out this difficult religious and educational project naturally fell, first of all, on those collections of hadiths that were compiled by the outstanding muhaddiths born and raised in Mavarannahr, al-Imam al-Bukhari and al-Imam al-Termizi. The full versions of the collections "al-Jami' al-Sahih" and "Sunan al-Termizi" were translated into the native language of their great compilers and published for the first time in the shortest time.
As experience has shown, along with the translated versions of the hadiths, the Muslim population of Uzbekistan received a large portion of religious ideas that did not correspond to its traditional understanding of Islam. This circumstance, too, was skillfully used by the supporters of radical and political Islam for their own purposes. The above-mentioned large-scale religious and educational projects, carried out in cooperation between religious, state, and, in part, non-governmental organizations, played a positive role in the evolution of the religious situation. Their main effect was to switch the attention of significant social strata from the religious-populist ideas of radical Islamist movements to a conscious and serious study of the primary sources of Islam. These projects, widely supported by the state and actively promoted by influential representatives of the local Muslim clergy, in addition to their religious and educational effect, put a reliable barrier in the path of the further intensification of radical interpretations of the ideas of Islam. An important segment of the religious and educational work carried out in modern Uzbekistan is the activity to revive the spiritual and religious heritage of the local people. Over the years of independence, with the broad support of the state, jubilee celebrations have been held for outstanding Islamic scholars born on the land of Uzbekistan. A special resonance was caused by the anniversary celebrations of the prominent muhaddiths al-Imam al-Bukhari and al-Imam at-Termizi; the Muslim theologian and founder of one of the two theological schools in Sunni Islam, the Maturidiyya, Abu Mansur al-Maturidi; the outstanding Muslim jurist and author of the famous work on Muslim law "al-Hidaya", Burhanuddin al-Marginani; and the Sufis Abdulkhalik Gijduvani, Bahauddin Naqshband, Najmuddin Kubro, Khoja Ahror Wali, and others. Such anniversaries, held with the active support of the state, have given a powerful impetus to religious and educational work, filling it with philosophical and humanistic content and a spirit of deep tolerance. On the other hand, they have noticeably intensified serious scientific study of the religious heritage of the Uzbek people. Scientific research of an Islamic-studies and educational character, carried out in the course of preparing and holding events dedicated to particular spiritual and religious figures of the past, serves to strengthen the scientific base of the religious enlightenment of the population and, through this, the propagation of enlightened Islam. A large number of religious and sacred places have been restored with the effective material and moral support of the state, including the mausoleums of al-Imam al-Bukhari in the Samarkand region; of Abu Mansur al-Maturidi and Shahi-Zinda in Samarkand; of Bahauddin Naqshband in Bukhara; of al-Imam at-Termizi in Termiz; of Najmuddin Kubro in Khorezm; and dozens of other places throughout Uzbekistan. These are becoming influential centers for educating the population in the spirit of enlightened Islam. This, as the experience of past years shows, serves as an important means of instilling in the population, especially the youth, a reliable spiritual and ideological immunity against various radical interpretations of Islam. It should be admitted that the religious and educational work of the first years of independence was carried out rather chaotically. Often this important activity was carried out by people who lacked elementary knowledge in the field of religion in general, and of Islam in particular.
The continuation of such a situation was fraught with great danger for the further evolution of the religious situation and threatened a complete seizure of the initiative in religious and educational activities by radicals. As the experience of the years after independence shows, political Islam, represented by such organizations as Hizb al-tahrir al-Islami, was able to take partial advantage of this circumstance. All this accelerated the formation and development of a network of specialized institutions engaged in promoting the Islamic religious values traditional for the peoples of Uzbekistan. This network, which has today become a powerful instrument of influence on the evolution of the religious situation in the country, includes a number of specific segments. They can be conditionally grouped as follows:

- The largest-scale segment consists of the more than two thousand mosques functioning today throughout Uzbekistan. Most of them are run by imam-khatibs with special religious education. Effective measures are being taken to constantly improve their professional knowledge.

- A special place in the network of religious and educational activities is occupied by the International Islamic Academy of Uzbekistan, founded on the initiative of the President of the Republic of Uzbekistan, Sh. Mirziyoyev. As a secular educational institution, the Academy seeks to synthesize purely theological and scientifically based knowledge about religion in general, and Islam in particular. It is a place for training religious scholars and improving their qualifications. The Academy is dynamically developing into an influential center for scientific research on Islam, as well as a center for the dissemination of scientifically based knowledge about religion in general.

- Another link in the chain of institutions that have been actively involved in religious and educational activities in recent years consists of non-governmental nonprofit organizations whose interests lie in the field of spiritual and religious values. They seek to contribute to the religious enlightenment of the population of Uzbekistan.

Conclusion

The foregoing makes it possible to draw two fundamental conclusions. First, religious and educational activity promoting enlightened Islam has become a determining factor in shaping a healthy religious situation in modern Uzbekistan. It acts as the main mechanism for countering radical and extremist religious movements. Second, in light of the fundamentally new attitude toward religion in the new Uzbekistan, the state provides all-round support for religious and educational activities and considers them a major factor in ensuring the religious security of society. The state seeks to provide all possible assistance in realizing the constructive potential of religion, which may be in demand for the formation and further development of civil society in modern Uzbekistan.
Within-participant statistics for cognitive science

Experimental studies in cognitive science typically focus on the population average effect. An alternative is to test each individual participant and then quantify the proportion of the population that would show the effect: the prevalence, or participant replication probability. We argue that this approach has conceptual and practical advantages.

The goal of a scientific experiment is to learn something about the world. In the cognitive sciences, experiments are typically performed on a sample of participants randomly selected from a population (see Glossary, [1]). Statistical methods are used to make a quantitative statement about the population from the results of the experiment. Many experimental questions pertain to the existence of an effect, for example, whether stimuli of a particular class activate a particular brain region. Typically, researchers address such questions from the perspective of the population mean, by applying null-hypothesis significance testing (NHST) to determine whether the mean effect is different from zero (statistically significant). An alternative is to evaluate whether each individual participant demonstrates the effect and then quantify the population prevalence: the proportion of the population that would show the effect if they were tested in this experiment [2]. This approach allows reliable scientific knowledge to be obtained through longer experiments with fewer participants, as in psychophysics [3]. However, without the formal generalization to the population provided by prevalence, such results are often dismissed as case studies.

Glossary

Case study: a descriptive analysis of an individual or group with no statistical generalization to a population. Without generalization, the results pertain only to the participants in the study.

Null hypothesis significance testing: starts from a null hypothesis, typically that the population mean is zero. The P value quantifies how surprising the observed experimental results would be if that null hypothesis were true. If this is less than a prespecified threshold (usually 0.05), we reject the null hypothesis of zero mean and declare the population mean result to be statistically significant.

Population: the larger group from which the participants in an experiment (the sample) were randomly selected. The goal of statistical analysis is to generalize from the sample to the population, which requires a statistical model of the population. Issues around defining the population considered in a study are beyond the scope of this piece.

Population mean: the typical approach in cognitive science is to model the population with a Gaussian distribution. The population mean is the true value of the mean parameter of the population Gaussian model.

Population prevalence: the population is modeled with a binomial distribution, accounting for the error rates of the within-participant statistical test, with individuals either showing an effect or not. The population prevalence is the binomial proportion parameter of this model. This is the probability of a true positive within-participant replication if the experiment was run on a new randomly sampled participant.

Within-participant statistics and population prevalence

Recent developments allow generalization of within-participant results to the population prevalence, using either frequentist [4] or Bayesian [2] methods (Box 1). Bayesian prevalence is straightforward to apply to any experiment. It requires only that we test the effect of interest separately in each participant, controlling the false positive rate of the within-participant test (e.g., by verifying modeling assumptions or using distribution-free methods). The within-participant test itself can be performed using any statistical or modeling approach (linear or nonlinear, parametric or nonparametric, inferential or predictive). Although our focus here is the human participant, Bayesian prevalence can be directly applied to other organisms (e.g., rodents), models (e.g., deep neural networks), or sampled units (e.g., neurons).

Box 1. Bayesian prevalence

Several approaches quantitatively summarize within-participant results. Grice et al. [11] propose reporting the sample proportion as a person-centered effect size, but this does not provide a formal generalization to the population. Frequentist NHST methods applied to a binomial model can test various hypotheses about the population prevalence (e.g., the global null, that the prevalence is 0, or the majority null, that the prevalence is <0.5; Figure IA) [4,12]. We recently proposed a Bayesian method to estimate the population within-participant replication probability, accounting for the false positive rate of the statistical test [2]. Bayesian prevalence returns a posterior distribution over the population prevalence, given the observed experimental data (Figure IB). From this, we can compute the maximum a posteriori (MAP) estimate: the best guess, or most likely value, of the population parameter (Figure IC). To quantify the uncertainty of this estimate, we compute Bayesian highest posterior density intervals (HPDIs) for various levels (such as 50% and 96%; Figure IB). These intervals provide the range within which the true population value lies with the specified probability. Bayesian prevalence can also quantify the posterior distribution for the difference in prevalence between different tests performed on the same participants, or between the same test applied to samples of participants from different populations. The posterior prevalence can be calculated for different effect size thresholds (not just p = 0.05) [2]. Open source code implementing Bayesian prevalence in Python, Matlab and R is available at https://github.com/robince/bayesian-prevalence. An online web application is available at https://estimate.prevalence.online/.

Within-participant statistics build in replication

The idea that there may be a problem with common statistical practice in experimental studies of cognition is receiving increased attention. Concerns, widely termed the replication crisis, have arisen because many results are not obtained again when the experiment is repeated. NHST of the population mean is usually the only analysis considered when discussing the issues underlying the replication crisis. We highlight two reasons why Bayesian prevalence may be less susceptible to these issues. First, when analyzed separately, each participant provides an independent replication of the experiment. Therefore, Bayesian prevalence has replication built in, and it directly quantifies the population-level, within-participant replication probability. Second, the output of Bayesian prevalence is a posterior distribution for the prevalence of the effect. This provides a graded estimate explicitly including uncertainty.
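To make the estimation described in Box 1 concrete, the following is a minimal grid-approximation sketch in Python. The function name and the example numbers are illustrative assumptions of ours, not taken from the authors' released code at the GitHub link in Box 1. It assumes a uniform prior and treats the within-participant test as having false positive rate α = 0.05 and perfect sensitivity, so the result is a conservative lower bound on the true prevalence.

```python
import numpy as np

def prevalence_posterior(k, n, alpha=0.05, grid=10_000, level=0.96):
    """Posterior over population prevalence gamma, given that k of n
    participants were significant at within-participant threshold alpha.

    Uniform prior, grid approximation. Sensitivity is assumed to be 1,
    so the estimate is a conservative lower bound on true prevalence.
    """
    gamma = np.linspace(0.0, 1.0, grid)
    # Probability a single participant tests positive: true positives
    # (gamma * 1) plus false positives ((1 - gamma) * alpha).
    theta = np.clip(gamma + (1.0 - gamma) * alpha, 1e-12, 1 - 1e-12)
    # Binomial log-likelihood of k positives out of n; the binomial
    # coefficient is constant in gamma, so it drops out on normalization.
    log_lik = k * np.log(theta) + (n - k) * np.log(1.0 - theta)
    post = np.exp(log_lik - log_lik.max())
    post /= post.sum()
    map_est = gamma[np.argmax(post)]  # maximum a posteriori estimate
    # HPDI: smallest set of grid points holding `level` posterior mass
    # (valid here because the posterior is unimodal).
    order = np.argsort(post)[::-1]
    keep = order[: np.searchsorted(np.cumsum(post[order]), level) + 1]
    return map_est, (gamma[keep].min(), gamma[keep].max())

# Example: 7 of 10 participants show the effect within-participant.
map_est, (lo, hi) = prevalence_posterior(k=7, n=10)
print(f"MAP prevalence = {map_est:.2f}, 96% HPDI = [{lo:.2f}, {hi:.2f}]")
```

Under these assumptions the MAP matches the analytic point estimate (k/n − α)/(1 − α), roughly 0.68 for the example numbers, reflecting the intuition that some of the 7 positives could be false positives; the HPDI widens as n shrinks, which is how prevalence estimates scale with the number of participants.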
Bayesian prevalence provides a clear quantitative statement about the population within-participant replication probability, which is explicitly linked to the experimental procedure considered. In contrast, NHST reduces an experiment to a binary result (significant or not) whose interpretation involves more challenging logic, often leading to misinterpretation [5] or overinterpretation [6]. Limitations of Bayesian prevalence There are several limitations to Bayesian prevalence. First, it cannot be applied to data from a single participant. In Figure I in Box 1, we show how population prevalence estimates scale with the number of participants. Second, within-participant statistics cannot pool information across individuals as hierarchical models do. Thus, sensitivity to some effects may be decreased. However, prevalence can detect effects that the population mean does not (Figure 1). Third, Bayesian prevalence is currently restricted to effects that are quantifiable within individuals (rather than between-participant research questions), although it can be compared between two populations [2]. Finally, for some effects (e.g., those requiring novelty, learning, or other one-shot interventions) it may be difficult to collect enough data to have sufficient within-participant sensitivity. Bayesian prevalence supports new research directions From cultural psychology to brain stimulation, many fields now recognize the challenge of addressing diversity in cognition, where a single population average cannot provide a full description [7]. For example, the proportion of participants who will respond to a particular brain stimulation protocol is critical to evaluating its practical potential but is not considered in population mean NHST analyses. This argument generalizes to other interventions or biomarkers: the higher bar of evidence set by requiring reliable effects within individuals is a prerequisite for many practical applications. In neuroimaging, there is renewed interest in the psychophysical approach of longer experiments with fewer subjects [8,9], often combining data over many experimental sessions. Hardware advances such as OPM-MEG and fNIRS allow more participant mobility and more comfortable acquisition of longer sessions. Relatedly, clinical studies of rare conditions often have small numbers of participants who show greater heterogeneity, both of which are problematic for population mean inference (Figure 1). Bayesian prevalence provides a population generalization that is currently missing for both types of small-N studies. The population mean approach requires alignment of effects across participants, which becomes more challenging as the spatial resolution of imaging techniques increases (e.g., laminar fMRI at 7T), or for invasive methods where electrode positions differ. If the within-participant inference is properly corrected for multiple comparisons, then Bayesian prevalence can be estimated for a broad region of interest without requiring precise overlap of the effect across participants (Figure 1B). These new recording modalities and approaches require reliable discovery-led exploratory research alongside confirmatory hypothesis testing. Typical NHST has well-documented shortcomings for such exploratory research, where a priori effect size estimates (required for power analyses) are difficult to obtain, and power analysis for common multivariate techniques (e.g., cluster methods) is not yet fully developed.
Replicating the effect across multiple participants provides a more robust approach and reduces the potential for false positives from researcher degrees of freedom (see Figure I in Box 1). The development of online experimental platforms has made studies with large numbers of participants more common. One drawback is that with large samples, population mean effects can be detected as statistically significant even when they may be too small to be practically meaningful. Prevalence does not suffer from this drawback. Large numbers of participants allow accurate prevalence estimates, but effects are detected within individual participants and grounded in the experiment considered (e.g., a 10-min experiment vs. a 1-h experiment). It is noteworthy that practical applications of neuroimaging or behavioral biomarkers have been difficult to obtain. One reason for this could be that individuals can differ categorically across many aspects of cognition, from behavioral strategy to neural anatomy [3,7]. Another is that the focus on the population mean may have led scientists to study effects with low between-participant variance [10]. However, more variable effects (Figure 1) might be more informative in terms of health and disease outcomes, even though they are less reliable from the population mean perspective. Concluding remarks We argue that an easy-to-adopt epistemological shift in statistical perspective can improve the robustness and interpretability of results in cognitive science and beyond. A focus on the population mean is ubiquitous in cognitive science and, for many, it is synonymous with population generalization. However, for many research questions, effects at the level of the individual participant may be more relevant. Bayesian prevalence explicitly quantifies the within-participant replicability of an experiment, providing a result that is less susceptible to the issues underlying the replication crisis. Prevalence can provide stronger population-level evidence from smaller numbers of participants and is more robust to heterogeneous effects (Figure 1). However, estimation of population prevalence and population mean are not mutually exclusive, and they can offer complementary perspectives. Researchers can report within-participant effect sizes and population prevalence, together with an estimate of the population mean, ideally including population variance. Experimental and statistical methods to better describe individual brains, rather than the average brain, might lead to new insights and practical applications.
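To illustrate the contrast drawn in Figure 1 with invented numbers, the short simulation below builds a hypothetical heterogeneous population in which half the participants have a positive effect and half a negative one: the population-mean test finds nothing, while nearly every within-participant test is significant. All quantities here are our own illustrative choices, not data from the studies cited above.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_subj, n_trials = 20, 100
# Hypothetical population: effects of +0.5 or -0.5 SD, so the mean is ~0
true_effect = np.where(np.arange(n_subj) % 2 == 0, 0.5, -0.5)
data = rng.normal(true_effect[:, None], 1.0, size=(n_subj, n_trials))

# Population-mean NHST on the participant means: non-significant
_, p_mean = stats.ttest_1samp(data.mean(axis=1), 0.0)
# Within-participant tests at alpha = 0.05: almost all significant
p_within = stats.ttest_1samp(data, 0.0, axis=1).pvalue
k = int(np.sum(p_within < 0.05))
print(f"population mean: p = {p_mean:.3f}; significant participants: {k}/{n_subj}")

Feeding the resulting k and n_subj into prevalence_posterior() from the earlier sketch would yield a prevalence estimate near 1 despite the null population-mean result.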
Functional Response of Eretmocerus delhiensis on Trialeurodes vaporariorum by Parasitism and Host Feeding The parasitoid wasp Eretmocerus delhiensis (Hymenoptera, Aphelinidae) is a thelytokous and syn-ovigenic parasitoid. To evaluate E. delhiensis as a biocontrol agent in greenhouses, the killing efficiency of this parasitoid through parasitism and host feeding was studied. Killing efficiency can be compared by estimating functional response parameters. Laboratory experiments were performed under controlled conditions to evaluate the functional response of E. delhiensis at eight densities (2, 4, 8, 16, 32, 64, 100, and 120 third-instar nymphs) of Trialeurodes vaporariorum (Hemiptera, Aleyrodidae) on two host plants: tomato and prickly lettuce. Maximum likelihood estimates from logistic regression analysis revealed a type II functional response on both host plants; the type of functional response was not affected by host plant. Rogers' model was used to fit the data. The attack rate (a) of E. delhiensis was 0.0286 and 0.0144 per hour on tomato and 0.0434 and 0.0170 per hour on prickly lettuce for parasitism and host feeding, respectively. Furthermore, the estimated handling times (Th) were 0.4911 and 1.4453 h on tomato and 0.5713 and 1.5001 h on prickly lettuce for parasitism and host feeding, respectively. Based on 95% confidence intervals, the functional response parameters differed significantly between the host plants only for parasitism. The results of this study open new insights into host-parasitoid interactions, although further investigation is needed before this parasitoid is used for the management and reduction of the greenhouse whitefly. The greenhouse whitefly Trialeurodes vaporariorum Westwood (Hem., Aleyrodidae) is a cosmopolitan, polyphagous key pest that attacks many crops and causes serious economic damage throughout tropical and subtropical areas and in greenhouses (Byrne 1990, Gerling 1990). The nymphs and adults of the greenhouse whitefly suck plant fluids, and surplus sugars are excreted as honeydew (Byrne 1990). Moreover, whiteflies are potential vectors of viruses (van der Linden and van der Staaij 2001). This pest has numerous wild and cultivated host species in the families Solanaceae and Asteraceae. For instance, prickly lettuce, Lactuca serriola (Asteraceae), is one of the most important wild hosts of the greenhouse whitefly (Roditakis 1990). Populations of T. vaporariorum can be attacked by several parasitoids of the family Aphelinidae. Among these parasitoid wasps, the genera Encarsia and Eretmocerus have received the most attention from entomologists (Urbaneja and Stansly 2004, Urbaneja et al. 2007, Liu et al. 2015). These genera are primary, solitary parasitoids of different nymphal stages of whiteflies (Zolnerowich and Rose 2008). The genus Eretmocerus includes 85 nominal species, which are very important in biological control and in integrated management of whiteflies (Noyes 2012). The potential of Eretmocerus sp. (Lopez and Botto 1997) and E. eremicus (Gamborena and van Lenteren 2004) as biological control agents has been studied on the greenhouse whitefly. Likewise, investigation of the reproductive biology of E. warrae, a thelytokous parasitoid of T. vaporariorum, showed that this parasitoid can potentially contribute to biological control of the greenhouse whitefly (Hanan 2012). Some Eretmocerus parasitoid wasps can suppress their hosts both by parasitizing and by feeding on them.
By feeding on host haemolymph, a female parasitoid can increase her longevity and fecundity (Liu et al. 2015). Host feeding by female parasitoids has been reported in many species of Eretmocerus and Encarsia (Zang and Liu 2008). Host feeding is the consumption of host haemolymph from the wound caused by the female's ovipositor (Jervis and Kidd 1986). Eretmocerus delhiensis Mani (1941) has recently been reported on the sugarcane whitefly, Neomaskellia andropogonis Corbett, from Iran (Khadempour et al. 2014b). The biology of this parasitoid has been studied on N. andropogonis and T. vaporariorum (Khadempour 2013, Ebrahimifar et al. 2016). Results showed that the intrinsic rate of natural increase of E. delhiensis was higher on N. andropogonis (0.24) than on T. vaporariorum (0.17). The development time from egg to adult and the adult longevity of E. delhiensis on the greenhouse whitefly were 15.5 and 5.7 d, respectively (Ebrahimifar et al. 2016). Although Viggiani (1985) reported both males and females of E. delhiensis, only females have been reported from Iran (Ebrahimifar et al. 2016); E. delhiensis therefore shows thelytokous reproduction in Iran. This parasitoid is a destructive syn-ovigenic parasitoid (Ebrahimifar et al. 2016); destructive syn-ovigenic parasitoids feed on their hosts, which leads to the death of the hosts (Jervis 2007). Differences in killing efficiency can be compared by estimating and comparing functional response parameters (Livdahl and Stiven 1983, Juliano 2001). A key factor for a predator is the assessment of its potential and predation rate at different host densities (Yu et al. 2013). The functional response is the behavioral reaction of a natural enemy to host density, that is, the number of hosts parasitized or eaten as a function of the initial number offered. There is much evidence that the type and parameters of a functional response are affected by different abiotic and biotic factors, including temperature, host species, natural enemy species, physical conditions in the laboratory, host plant, and the age of the parasitoid (Mohaghegh et al. 2001, Allahyari et al. 2004, Reay-Jones et al. 2006, Moezipour et al. 2008, Jamshidnia et al. 2010, Jamshidnia and Sadeghi 2014). Among the criteria used for evaluating the potential of natural enemies are the attack rate and handling time, which are measured from the functional response of natural enemies (parasitoids or predators) to increasing host density. Different factors may influence the type of functional response or its parameter values by changing the searching pattern (Holling 1959). The objectives of the current study were to determine the type and parameters of the functional response of the parasitoid wasp E. delhiensis on T. vaporariorum and to evaluate its host feeding and parasitism on two host plants, tomato and prickly lettuce, at different host densities. The results of this study will add to our knowledge of the interaction between parasitoid and host density and improve the use of this parasitoid in biological control programs. Rearing of T. vaporariorum and E. delhiensis The population of the parasitoid wasp E. delhiensis was collected from an original colony on the sugarcane whitefly, N. andropogonis (Hem., Aleyrodidae), in Khuzestan Province, Iran (31°20′N, 48°40′E). The greenhouse whiteflies were collected from tomato greenhouses in Tehran. The parasitoid population was reared on a T. vaporariorum colony on tomato plants (Solanum lycopersicum L., cultivar Super Chief) and its wild host, prickly lettuce, Lactuca serriola L.
(Asteraceae), in greenhouse conditions (25 ± 3 °C, 60 ± 10% RH) at the College of Aburaihan, University of Tehran, Iran. Two distinct colonies of hosts and parasitoids on the two host plants were reared for three generations and then used in the experiments. Functional Response Experiment Design In order to determine the functional response of E. delhiensis, individual parasitoids were exposed to eight densities (2, 4, 8, 16, 32, 64, 100, and 120) of third-instar nymphs of the greenhouse whitefly. This parasitoid prefers third-instar nymphs for parasitism (unpublished data). Leaves of tomato and prickly lettuce plants bearing whitefly nymphs were used. For densities of 2, 4, and 8, twelve replicates were used, while for the other densities ten replicates were used for each host plant. Each leaf was fixed on moist filter paper (to prevent desiccation) in a Petri dish (10 cm diameter). A few drops of water were added to the filter paper to keep it moist during the experiments. A ventilation hole (1 cm diameter) was created in the lid of each Petri dish and covered with net cloth. One female parasitoid (<24 h old) was introduced into the experimental arena. After 24 h, the parasitoid wasps were removed. Host mortality by parasitism and host feeding was determined 7-8 d later. In parasitized hosts, mycetome displacement was visible, while hosts killed by host feeding were flattened and desiccated (Yan and Wan 2011). All experiments were performed under controlled conditions at 25 ± 1 °C, 65 ± 5% RH, and a photoperiod of 16:8 h (L:D) in a growth chamber. Data Analysis Data analysis for the functional response comprises two steps. In the first step, the shape (type) of the functional response must be determined by testing whether the data fit a type II or type III functional response. Logistic regression of the proportion of parasitized hosts against the initial number of hosts is the most effective way of determining this (De Clercq et al. 2000, Juliano 2001, Allahyari et al. 2004, Jamshidnia et al. 2010). Accordingly, we fitted the polynomial logistic function (Juliano 2001): Na/N0 = exp(P0 + P1N0 + P2N0² + P3N0³) / [1 + exp(P0 + P1N0 + P2N0² + P3N0³)] (1), where P0, P1, P2, and P3 are the intercept, linear, quadratic, and cubic coefficients to be estimated, Na is the number of parasitized or attacked nymphs, and N0 is the initial host density. These parameters were estimated using the CATMOD procedure in SAS software (SAS Institute 2011). The signs of P1 and P2 can be used to distinguish the shape of the curves: a positive linear parameter (P1) indicates a type III functional response, while the functional response is type II when the linear parameter is negative (Juliano 2001). After determining the type of functional response, the handling time (Th) and attack rate (a) were estimated (Juliano 2001). We used nonlinear least squares regression (NLIN procedure with the DUD method in SAS) to estimate the parameters of the Rogers (1972) random parasite equation (2) and random predator equation (3): Na = N0[1 - exp(-aTt / (1 + aThN0))] (2) and Na = N0{1 - exp[-a(Tt - ThNa)]} (3), where Na = number of hosts attacked, N0 = initial host density, Tt = time of exposure to the parasitoid, a = instantaneous searching (attack) rate, and Th = handling time. (A minimal Python analogue of these two fitting steps is sketched at the end of this article.) Results and Discussion The functional response curves of the parasitoid wasp E. delhiensis are shown in Fig. 1. The average number of hosts fed upon or parasitized at first increased with host density and then approached a constant level. The results of the logistic regression analysis (Table 1) indicated that the linear coefficient (P1) was negative for both parasitism and host feeding on the two host plants.
These results indicate a type II functional response for E. delhiensis. Thus, the type of functional response was not affected by host plant for either parasitism or host feeding. A type II functional response was previously recorded in a number of Eretmocerus species (Sohani et al. 2008, Shao et al. 2010, Xu et al. 2014). Different studies on the functional response of insect parasitoids show that more than three-quarters exhibit a type II functional response, while less than one-fifth show a type III functional response (Fernández-Arhex and Corley 2003). The functional response of parasitoids can be affected by the density of the parasitoid and its host (Mills and Lacan 2004). A type II functional response of Eretmocerus mundus on Bemisia tabaci and of E. hayati on B. tabaci biotypes B and Q has been reported (Sohani et al. 2008, Shao et al. 2010). Furthermore, the functional responses of Encarsia formosa on T. vaporariorum and of Aphelinus thomsoni on the aphid Drepanosiphum platanoidis were type II (Collins et al. 1981, Fransen and Montfort 1987). A type II functional response of E. delhiensis on the sugarcane whitefly N. andropogonis was reported by Khadempour et al. (2014a). Based on our results and the findings of other researchers (Collins et al. 1981, Sohani et al. 2008, Shao et al. 2010, Xu et al. 2014), it seems that the type II functional response is common in aphelinid wasps. A type II functional response indicates an inverse density-dependent relationship between the proportion of parasitism (or host feeding) and host density (Holling 1959). Therefore, E. delhiensis can be more efficient in the control of the greenhouse whitefly at low host densities. The random parasite and random predator equations fitted the experimental data well for E. delhiensis for parasitism and host feeding, respectively. Rogers' random model is more suitable than Holling's disc equation for describing the functional response when host depletion occurs (Juliano 2001). The results of the NLIN regression indicated that the parameters a (searching rate) and Th (handling time) were both significant (Table 2). The estimated searching rate values (a) for E. delhiensis on tomato were 0.0286 and 0.0144 per hour, and on prickly lettuce 0.0434 and 0.0170 per hour, for parasitism and host feeding, respectively. The values of Th for this parasitoid on tomato were 0.4911 and 1.4453 h, and on prickly lettuce 0.5713 and 1.5001 h, for parasitism and host feeding, respectively. Based on the 95% confidence intervals, the differences in Th and a between the two host plants were statistically significant for parasitism (the intervals did not overlap) but not for host feeding. The tomato cultivar used in this study (Super Chief) is more pubescent, with more trichomes, than prickly lettuce; consequently, a values were lower on tomato. Presumably, the pubescence and trichomes of the host plant affected the parasitism and host feeding of the parasitoid. Furthermore, it seems that the hairy leaves of tomato may slow parasitoid movement more than the leaves of prickly lettuce. The values of a and Th for E. delhiensis on N. andropogonis on sugarcane were reported as 0.0594 per hour and 0.766 h (Khadempour et al. 2014a). The handling time of E. mundus on B. tabaci was reported as 0.343 h by Sohani et al. (2008). The differences between the parameter values of the current study and those of other studies may be due to differences in host and parasitoid species.
In addition, after parasitizing and/or feeding on their hosts, parasitoids may spend different amounts of time cleaning their bodies, which leads to changes in the estimated handling time and attack rate. The results of functional response studies can be used to preselect candidates for biological control (van Lenteren et al. 2016). The results of this study showed that the functional response of E. delhiensis was type II for both parasitism and host feeding. van Lenteren et al. (2016) suggested that natural enemies with a type II functional response could be used in inundative biological control. Clearly, this parasitoid kills its hosts not only by parasitism but also by host feeding. Hence, it can be a promising candidate for biological control of the greenhouse whitefly. Although the functional response is an important tool for evaluating natural enemies, the success or failure of a natural enemy in biological control cannot be attributed to this factor alone. Different factors, such as biotic and abiotic conditions and host traits, may influence the host-discovery and parasitism efficiency of natural enemies. Thus, further studies should be performed to evaluate the efficiency of E. delhiensis as a biological control agent of the greenhouse whitefly.
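As a worked illustration of the two analysis steps described above (the authors used the CATMOD and NLIN procedures in SAS), the sketch below reproduces them in Python. Only the host densities match the experiment; the attack counts are invented for illustration, and the Rogers parasite equation is coded in the form given in equation (2).

import numpy as np
import statsmodels.api as sm
from scipy.optimize import curve_fit

# Experimental host densities; attack counts here are hypothetical
N0 = np.array([2, 4, 8, 16, 32, 64, 100, 120], dtype=float)
Na = np.array([2, 4, 7, 13, 22, 31, 36, 38], dtype=float)

# Step 1: cubic logistic regression of the proportion attacked on
# density; a negative linear coefficient (P1) indicates type II.
X = sm.add_constant(np.column_stack([N0, N0**2, N0**3]))
glm = sm.GLM(np.column_stack([Na, N0 - Na]), X,
             family=sm.families.Binomial()).fit()
print("P1 =", glm.params[1], "(type II)" if glm.params[1] < 0 else "(type III)")

# Step 2: fit Rogers' (1972) random parasite equation, in which hosts
# remain exposed after attack, over the 24-h exposure period.
def rogers_parasite(N0, a, Th, Tt=24.0):
    return N0 * (1.0 - np.exp(-a * Tt / (1.0 + a * Th * N0)))

(a_hat, Th_hat), cov = curve_fit(rogers_parasite, N0, Na, p0=[0.03, 0.5])
se = np.sqrt(np.diag(cov))
print(f"a = {a_hat:.4f} per h (+/- {1.96 * se[0]:.4f}), "
      f"Th = {Th_hat:.4f} h (+/- {1.96 * se[1]:.4f})")

Non-overlap of such 95% intervals between host plants is the criterion used above to judge whether the parameters differ.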
Remote ischaemic conditioning and remodelling following myocardial infarction: current evidence and future perspectives Remote ischaemic conditioning (rIC) has demonstrated its effectiveness as a powerful cardioprotective tool in a number of preclinical and limited clinical settings. More recently, ischaemic postconditioning given after an ischaemic event such as a myocardial infarction (MI) has been shown not only to reduce infarct size but also to have beneficial effects on acute remodelling post-MI and to reduce the burden of heart failure and other detrimental outcomes. Building on this platform, repeated rIC over a number of days has the potential to augment the protective process even further. This review considers the current evidence base from which the concept of rIC in the setting of post-MI remodelling has grown. It also discusses the ongoing and planned clinical trials which are attempting to elucidate whether the protection imparted by rIC in the preclinical setting can be translated to the clinic and become a realistic weapon in the clinician's armoury to tackle acute remodelling and heart failure post-MI. Introduction Remote ischaemic conditioning (rIC) is a non-invasive therapeutic technique whereby intermittent interruption of blood flow to an organ or muscle confers protection against ischaemia/reperfusion (I/R) injury to a distant organ. RIC can be implemented prior to an expected ischaemic insult (preconditioning), during the evolution of an ischaemic insult (per-conditioning) or soon after the completion of an ischaemic insult (postconditioning). For the purposes of this review, the term rIC will encompass all of these techniques. The technique evolved from the phenomenon of local ischaemic conditioning of the heart and has been successfully used to reduce myocardial damage and improve cardiovascular outcomes in the context of primary percutaneous coronary intervention (PPCI) for acute myocardial infarction (MI) [1,2], elective coronary angioplasty [3-5], coronary artery bypass surgery [6], valve surgery [7] and paediatric cardiac surgery [8]. Beyond the well-established acute protective phase, early preclinical studies have hinted at an additional role for rIC, predominantly in positively influencing post-MI ventricular remodelling. In addition to directly affecting final infarct size, rIC may act to increase recruitment of stunned myocardium as well as modulating remodelling processes such as cell death with an increased emphasis on autophagy, cardiomyocyte hypertrophy, extracellular matrix (ECM) changes and the influx of proinflammatory cells to the damaged myocardium. This potential new role for rIC may have a profound effect in reducing the incidence and impact of post-MI heart failure. Remodelling following myocardial infarction Heart failure is a major cause of long-term mortality and morbidity after MI. Analysis of registries and of large clinical trials across the western world, conducted in the era of acute revascularisation, has reported incidence rates of post-MI heart failure ranging from 10 to 50%, depending on a number of factors including the degree and location of infarcted myocardium, how MI and heart failure were defined, whether there was pre-existing heart failure, the treatment modalities used and the characteristics of the populations analysed [9].
A retrospective analysis of Framingham Heart Study participants demonstrated an increase in the incidence of post-MI heart failure from the 1970s to the 1990s, closely linked to a decrease in mortality in acute MI, likely due to advances in myocardial salvage over this time period [10]. The development of chronic heart failure following MI most commonly results from adverse remodelling of the left ventricle, a process of structural reorganisation which occurs within the first few weeks to months after the acute event. Such remodelling is directly related to the extent of myocardial damage (due to initial necrosis and secondary apoptosis) and is most likely to occur following transmural infarction, as well as being heavily influenced by concomitant microvascular obstruction (MVO) and lethal reperfusion injury in the era of acute revascularisation [11,12]. The process of remodelling is triggered by the initial ischaemia/reperfusion insult, which sets in motion a number of events. In the initial stages, the changes in the left ventricle are predominantly due to the effects of infarct expansion causing cardiomyocyte necrosis and apoptosis, which ultimately leads to myocardial wall dilatation via a number of mechanisms including changes in excitation-contraction coupling and an increased expression of foetal genes leading to an alteration in the proteins produced. In the later stages, remodelling is largely fuelled by hypertrophy of surviving cardiomyocytes in response to pressure and volume changes and neurohumoral signalling, reorganisation of the ECM with deposition of scar tissue, and an inflammatory-driven process whereby substantial ECM turnover in border areas leads to cell slippage and further dilatation. From a whole-organ perspective, these changes impact on cardiac dimensions and function. These initial changes act to maintain an adequate cardiac output in the face of a loss of functioning myocardium; however, over time remodelling becomes maladaptive. Indeed, the extent, nature, and progression of remodelling (both compensatory and subsequently maladaptive) are powerful predictors of both heart failure and death following MI, as well as having prognostic implications for further MI, stroke and cardiac arrest [13,14]. Preventing or modifying some or all of the drivers for remodelling may go some way to reducing major adverse cardiovascular events in this setting. Local ischaemic conditioning In 1986, Murry et al. [15] first described an endogenous cardioprotective mechanism in a canine model of MI termed ischaemic preconditioning (IPC), whereby intermittent, non-lethal occlusion and reperfusion of the left anterior descending artery (LAD) immediately prior to a period of sustained occlusion significantly reduced the final infarct size. The first in vivo study in humans assessing the effect of preconditioning was performed by Deutsch et al. [16]. In a small group of patients undergoing elective PCI for an obstructed LAD, they showed a reduction in electrographic, metabolic and clinical markers of ischaemia during the second cycle of balloon inflation compared to the first. Yellon et al. [17] later utilised IPC prior to coronary artery bypass grafting (CABG) surgery, demonstrating preserved levels of myocardial adenosine triphosphate during cardiopulmonary bypass. Over the intervening years, evolution of this technique has seen it applied to situations of unpredictable cardiac ischaemia (as opposed to anticipated ischaemia from elective surgery). Zhao et al.
[18] in 2003 introduced the concept of ischaemic postconditioning (IPostC), whereby the conditioning stimulus is applied immediately or soon after the index ischaemic event by intermittent inflations and deflations of the intracoronary balloon to stagger reperfusion. Using a canine model, they demonstrated the effectiveness of IPostC in the context of acute MI, with comparable levels of infarct size reduction and tissue oedema as well as a variety of markers of cardiac damage when compared to IPC. The possible clinical applicability of IPostC in the setting of acute coronary events was quickly realised. By staggering reperfusion during PCI by repetitively inflating and deflating the angioplasty balloon in the culprit vessel for short periods of time, Laskey et al. [19] showed a reduction in final electrocardiographic ST-segment elevation size and an increase in distal myocardial perfusion. Staat et al. [20], using a similar technique at the time of PCI, showed a significant reduction in creatine kinase release and an increase in myocardial reperfusion in the conditioned group. Windows of protection and delayed conditioning Two distinct phases of cardioprotection resulting from ischaemic preconditioning have been shown to exist and are commonly termed 'windows of protection' [21]. The first window begins immediately following the conditioning stimulus and lasts up to 4 h. Protection within this time period is mainly induced through posttranslational modification of proteins. The second or delayed window of protection occurs 12-72 h after the conditioning event and confers protection mainly through gene transcriptional changes [22-24]. In the context of protection against the long-term effects of I/R and subsequent remodelling, the timing of the conditioning stimulus is paramount. Early studies suggested that to impart meaningful protection, conditioning must be implemented before, during or immediately after the clinical event, as reperfusion injury is thought to occur within the first 15 min after the event. Dispelling this belief somewhat, Roubille et al. [25] described the damage associated with reperfusion as a 'wave front' and showed that rIC after I/R can be effective up to 30 min post-MI. Basalay et al. [26] also found a similar but more modest phenomenon in a rat model of I/R where rIC was effective in reducing injury when started up to 10 min into reperfusion. The ability to impart protection, even a significant time after the acute event, may prove clinically useful in the context of protection against adverse remodelling post-MI in patients presenting late to hospital, as the remodelling process continues to evolve for several days after the initial insult. Proposed mechanisms of remote ischaemic conditioning RIC took the concept of IPC a step further, allowing the conditioning stimulus to be applied away from the heart in a distant tissue bed. Przyklenk et al. [27] were the first to demonstrate rIC in an animal model of ischaemia/reperfusion. By preconditioning the left circumflex coronary (LCx) artery in dogs, they were able to protect the remote myocardium supplied by the LAD following transient ligation to induce MI and reperfusion. Kerendi et al. [28] later demonstrated the cardioprotective effects of rIC in the post-MI setting. After 30 min of coronary artery occlusion in rat hearts, they remotely conditioned the kidneys, then reperfused the heart and showed a 50% decrease in infarct size compared to the control.
In humans, the most practical application of rIC is by sequentially inflating a blood pressure cuff on the arm or leg, commonly using 3-4 cycles of inflation and deflation. This non-invasive technique affords protection not only to the heart but also to a number of other organs, most notably the brain and kidneys (for review, see Ref. [29]). Although the exact mechanisms of signal transduction from the tissue/organ undergoing rIC to the target organ have yet to be elucidated, various authors have highlighted the importance of humoral and neural signalling pathways as well as modulation of the systemic inflammatory response, perhaps working in an interdependent manner [30,31]. The humoral signalling theory postulates that blood-borne factors are released locally by the tissue undergoing rIC and are then relayed in the blood to the target organ, where they bind to G-protein-coupled receptors, triggering a number of intracellular signalling pathways. A number of research groups have illustrated the importance of humoral signalling by isolating naïve animal hearts and treating them with superfusate from rIC-treated animals or human donors and demonstrating cardioprotection [32,33]. We have shown this in our laboratory using isolated adult rat cardiomyocytes [34]. Over the years, numerous humoral factors have been implicated including adenosine, bradykinin, nitrate/nitrites, opioid peptides, prostaglandins, natriuretic peptides, endocannabinoids, angiotensin I and calcitonin gene-related peptide. It is currently believed that the signalling factor(s) is between 3.5 and 15 kDa in size and is hydrophobic [35,36]. More recent candidates for the responsible humoral messenger include stromal cell-derived factor-1 (SDF-1α), which recruits stem cells and is activated by hypoxia [37], circulating extracellular vesicles [38] and a panel of anti-inflammatory proteins including haptoglobin and transthyretin [39]. The first evidence for the involvement of neural signalling in rIC was given by Gho et al. [40]. By administering intravenous hexamethonium (a ganglion blocker), they abolished the protection afforded by remote ischaemic preconditioning of the anterior mesenteric or renal artery against sustained MI. Subsequent experiments by Ding et al. [41] showed that by directly severing the renal nerve, one could abolish the cardioprotective effect of renal-ischaemia rIC in rabbits. Mastitskaya et al. [42] proposed that rIC involves transmission via vagal preganglionic neurones, whilst further studies have advocated C-fibres as the sensory neural mechanism responsible for rIC [43]. Indeed there is some suggestion that a combined humoral/neural signalling relay exists where adenosine (or other candidate factors) acts via modulation of an afferent neural pathway [44]. Jensen et al. [45] demonstrated that the dialysates from type 2 diabetic individuals with peripheral neuropathy did not afford protection against infarction in a rabbit model, whereas the dialysate from non-diabetics and diabetics without peripheral neuropathy did, implying a fundamental role for neuronal signalling in this process. Furthermore, Basalay et al. [26] suggested that rIC in the pre-, per- and immediate post-MI period is heavily dependent on sympathetic messaging, whereas delayed remote ischaemic postconditioning, i.e., >10 min after the event, appears not to rely so heavily on this neural signalling. This suggests a greater level of importance for humoral signalling in late postconditioning as well as potentially for repeated rIC.
A final hypothesised mechanism of rIC signalling involves modulation of the inflammatory response, important in initiating and controlling wound healing. Cheung et al. [8] demonstrated that patients given an rIC stimulus prior to undergoing open-heart surgery had a reduced systemic inflammatory response and reduced levels of cardiac damage. Li et al. [46] also highlighted the importance of inflammation by demonstrating a blunted cardioprotective response in mice deficient in NF-κB (a transcription factor involved in most inflammatory processes) subjected to rIC. The importance of NF-κB was underlined by Wei et al. [47] in a rat model of repeated rIC and MI, where they demonstrated a significant reduction in phosphorylation of the NF-κB subunit p65 and its inhibitory protein IκBα. In addition, this study showed a reduction in the infiltration of macrophages and neutrophils into the infarcted tissue in the rIC groups as well as a reduction in monocyte chemotactic protein 1 (MCP-1) in the border zone of infarcted tissue. More recently, Cai et al. [48] have shown up-regulation of expression of interleukin-10 (a potent anti-inflammatory cytokine) in a mouse model of rIC which leads to a reduction in myocardial infarct size and improved cardiac contractility. Although some mystery still exists as to the mechanisms of rIC signalling, once the signal reaches the intended organ, the downstream intracellular pathways of rIC are thought to share much in common with local ischaemic conditioning. A number of intracellular pathways have been implicated including the reperfusion injury salvage kinase (RISK) pathway, involving ERK 1/2, p38 MAPK, PI3K-AKT and GSK-3β, acting ultimately to prevent opening of the mitochondrial permeability transition pore (mPTP) at the time of reperfusion. Another important downstream pathway is the survivor activating factor enhancement (SAFE) pathway, involving activation of the JAK-STAT3/5 axis, a protective transcription factor in the context of acute ischaemia (for a detailed review, see Ref. [49]). The first window of protection is thought to depend heavily on the RISK pathway, nitric oxide synthase (NOS), PKCε, PKCγ and reactive oxygen species. The second window of protection is more dependent on the SAFE pathway and inducible nitric oxide synthase (iNOS), as well as retaining a significant overlap with some of the pathways implicated in the first window of protection [21,50]. For a detailed discussion of our current understanding of the mechanisms of rIC, see the proceedings from the most recent Biennial Hatter Cardiovascular Institute Workshop [51]. Remote ischaemic conditioning and acute myocardial infarction The simple and safe technique of inducing ischaemia by inflating a blood pressure cuff applied to the forearm to a level greater than the systolic blood pressure was first used in the setting of acute MI by Bøtker et al. [1] in the CONDI trial. In this landmark study, 4 × 5-min cycles of blood pressure cuff inflation/deflation were applied to the forearm of a cohort of ST-segment elevation MI (STEMI) patients in the ambulance en route to PPCI, and it showed that with large anterior MIs caused by total occlusion of the LAD, conditioned patients had a significantly better myocardial salvage index as assessed by gated single-photon emission CT (SPECT) than the control group. A smaller study by Rentoukas et al. [2] was undertaken in STEMI patients where rIC was applied just after PCI using 4 × 4-min cycles of forearm blood pressure cuff inflation/deflation in combination with morphine.
There was a significant reduction in troponin T levels in the conditioned group compared to the control group, as well as ST-segment deviation resolution. More recent work by White and colleagues further demonstrated the benefits of rIC, implemented in this setting just prior to PPCI in the context of STEMI. They showed a reduction in myocardial oedema and infarct size as measured by cardiac magnetic resonance imaging (cMRI) as well as reduced levels of troponins in the conditioned group [52]. The excitement generated by these trials must be tempered by the difficulty in interpreting individual studies with small sample sizes and significant population heterogeneity, which often assess non-clinical outcome measures. Reassuringly, a recent comprehensive systematic review and meta-analysis of the available trial data by Le Page et al. [53] showed significant reductions in the hard end points of MACCE and all-cause mortality in conditioned groups compared to controls in this setting. Remote ischaemic conditioning and remodelling post-myocardial infarction Thibault et al. first hinted at the prospect that the effects of local IPostC after an MI may have a positive influence on myocardial contractility [54]. They demonstrated a 7% greater left ventricular ejection fraction (LVEF) after 1 year compared with the control group (p = 0.04) [55]. Similarly, Munk et al. [54] in a sub-study of the CONDI trial showed that in MI patients with an area at risk (AAR) of over 35%, those who received rIC immediately prior to PPCI had a significant improvement in LVEF after 30 days compared to the control group (51 ± 11 vs. 46 ± 9%, p = 0.03). Furthermore, Hoole et al. [5], as well as demonstrating reduced levels of troponin T in patients undergoing elective PCI who received rIC compared to control, showed that at 6 months the major adverse cardiac and cerebral event rate (MACCE) was lower in the rIC group (4 vs. 13 events, p = 0.018). More recent data published by the CONDI investigators underlined some of the long-term benefits of rIC [56]. They followed 256 patients who had suffered a STEMI to a median of 3.8 years, split equally between those who had received rIC at the time of PPCI and those who had received PPCI only. MACCE occurred in 13.5% of the intervention group compared to 25.6% of the control group (HR 0.49, CI 0.27-0.89, p = 0.018). However, due to the small sample size, no solid inferences could be made about a number of secondary outcome measures, including the development of chronic heart failure. In all these studies, one-off rIC at or around the time of MI has pointed towards the potential for this technique to reduce the incidence of chronic heart failure. However, the degree to which the difference in LVEF and other markers of heart failure is due to remodelling, as opposed to attenuation of infarct size around the time of the acute event, is difficult to ascertain. Animal studies by Redington's group have hinted that the progression to heart failure can be strongly attenuated, in a 'dose-dependent manner', by serial bouts of rIC soon after an ischaemic event. In a rat model of acute MI, Wei et al. [47] demonstrated the greatest improvement in LV chamber size, LV function and haemodynamic changes post-MI in the group that received repeated remote conditioning every day for 28 days compared to a control group and two groups receiving one-off applications of rIC either before or during ischaemia.
The benefit appears to be in addition to the initial improvement seen due to reduction in infarct size and points towards a novel mechanism of cardioprotection acting directly on remodelling. The study highlighted a variety of ways in which repeated rIC may work in this context, including a reduction in oxidative stress, attenuation of the expression of genes associated with fibrosis and hypertrophy, and blunting of the inflammatory response with reduced levels of neutrophil and macrophage infiltration in the myocardium and reduced cytokine signalling. Previously, the same group had demonstrated that repetitive rIC significantly altered the behaviour of neutrophils after MI, with reduced levels of adhesion at days 1 and 10, as well as a reduction in phagocytosis at day 10, apoptosis at days 1 and 10, and an overall change in the profile of cytokine release [57]. More recent work from this group has suggested the existence of separate and very distinct mechanisms by which 'one-off' traditional rIC and repeated rIC confer protection. Whilst traditional rIC acts through the pathways described previously, repeated rIC was shown in this study to increase production of the autophagosome proteins LC3-II, cathepsin-L and Atg5 [58]. Yamaguchi et al. reinforced the power of repeated rIC post-MI and implicated exosomes as the mediators for signalling in rIC, possibly by their action of transferring anti-fibrotic microRNAs such as miR29a as well as IGF-1, which is known to be protective in the context of remodelling [59]. In addition, work by our laboratory showed that superfusate taken from ischaemic-conditioned Langendorff-perfused rat hearts, as well as serum taken from human volunteers immediately after undergoing rIC stimulation, both independently inhibited endothelin-1-induced hypertrophy in a cellular model of hypertrophy, alluding to a humoral mechanism of action [60]. Future perspectives Multiple studies are underway to assess the impact of one-off rIC protocols at the time of MI on various heart failure-related outcomes. Following on from the first CONDI study [56], CONDI-2 (Effect of RIC on Clinical Outcomes in STEMI Patients Undergoing PPCI) is well underway. This study aims to recruit 2300 participants over a 36-month period from a number of sites across Europe (http://www.clinicaltrials.gov/ct2/show/NCT01857414) with the primary outcome of assessing cardiovascular mortality and hospitalisation for heart failure at 1 year. Completion of the study is expected in late 2016. Running in collaboration with the CONDI-2 trial is the ERIC-PPCI (Effect of Remote Ischaemic Conditioning on clinical outcomes in ST-segment elevation myocardial infarction patients undergoing Primary Percutaneous Coronary Intervention) trial. This trial has recently started recruitment and aims to recruit 2000 participants in total across multiple sites to assess whether rIC at the time of PPCI for STEMI can reduce the combined primary outcome of cardiac death and hospitalisation for heart failure at 12 months (https://clinicaltrials.gov/ct2/show/NCT02342522). DANAMI-3 (DANish Study of Optimal Acute Treatment of Patients with ST-elevation Myocardial Infarction) aims to assess the effect of local ischaemic conditioning on heart failure rates up to 3 years following PPCI for STEMI (http://clinicaltrials.gov/show/NCT01435408). The study has completed recruitment of over 2000 participants, and preliminary results pertaining to acute outcomes have previously been presented [61].
RECOND (Reduction in Infarct Size by Remote Per-postconditioning in Patients With ST-elevation Myocardial Infarction), a Swedish-led study, aims to recruit 120 participants and apply remote per-conditioning during PPCI for STEMI. One of the aims of the study is to compare cMRI-assessed remodelling parameters after 180 days between the conditioned and sham groups (https://clinicaltrials.gov/ct2/show/NCT02021760). Finally, the RIC-STEMI trial (Remote Ischaemic Conditioning in ST-elevation Myocardial Infarction as Adjuvant to Primary Angioplasty) is a Portuguese-led study aiming to recruit 492 participants. Similarly, this study will recruit from patients suffering STEMI and undergoing PPCI, with a 1:1 randomisation to rIC approximately 10 min prior to first angiographic balloon inflation or sham conditioning. Rather than cMRI-based outcomes, the primary endpoint in this study will be death or hospitalisation from heart failure at a minimum of 1 year (https://clinicaltrials.gov/ct2/show/NCT02313961). Two phase II trials are underway with the hypothesis that chronic, repeated rIC use in the post-STEMI period can positively influence cardiac remodelling and reduce the incidence of and progression to heart failure: DREAM (Daily REmote Conditioning in Acute Myocardial Infarction) (http://clinicaltrials.gov/show/NCT01664611) and CRIC (Chronic Remote Ischaemic Conditioning to Modify Post-MI Remodelling) (http://clinicaltrials.gov/show/NCT01817114). The DREAM study is a UK-based, multi-centre randomised controlled trial recruiting individuals who have suffered a STEMI and have had successful PPCI. Inclusion criteria include a post-STEMI LVEF <45% on transthoracic echocardiography and no prior history of MI. The study aims to recruit 72 patients and is powered to detect a 5% increase in LVEF above natural recovery. Primary outcome data are obtained from baseline and 4-month cMRI to assess LVEF, left ventricular end-diastolic and end-systolic volumes, infarct size and oedema. An important facet of this trial is the intention to elucidate further our understanding of how much rIC in this context acts independently on remodelling when influences on the initial infarct size and MVO attenuation are reduced. This is done by beginning rIC 3 days after the acute event to avoid influencing the size of the infarct. RIC will continue for 4 weeks, performed daily by the participant. The study will randomise participants 50:50 to the intervention or the control group. The intervention group will receive a device that inflates to 200 mmHg in 4 × 5-min cycles of inflation and deflation. The control group will receive identical-looking devices that cycle as in the intervention group but only inflate to a maximum of 10 mmHg. As conditioning commences on day 3 post-MI, the greater focus is on the modulation of the remodelling process rather than the infarct-sparing properties of rIC. In a similar vein, the CRIC study is a multi-centre randomised controlled trial recruiting from a STEMI/PPCI population in Canada with a recruitment aim of 82. CRIC differs from DREAM in that the investigators will recruit left anterior descending (LAD) territory infarcts only and will exclude diabetic individuals. The reasons for focusing on non-diabetic patients who have suffered large anterior STEMIs in the CRIC study are based on prior work suggesting that this group is most likely to respond to rIC and hence to gain the greatest impact from the intervention [1,62,63].
Furthermore, rIC will start just prior to PPCI and continue for 4 weeks; therefore, rIC in this context will likely have an influence on infarct size and MVO as well as subsequent remodelling. The primary outcome will be obtained by comparing cMRI at baseline and 28 days, primarily to compare left ventricular end-diastolic volume (LVEDV). Both the DREAM and CRIC trials are nearing completion, and it is hoped that once these trials are completed we will be in a better position to assess the role of chronic rIC in remodelling and whether this technique merits investigation with larger phase III randomised controlled trials. Challenges of remote ischaemic conditioning The recent high-profile ERICCA trial, which showed no clinical outcome benefit at 1 year when using rIC compared to sham conditioning during elective on-pump CABG surgery, has tempered the enthusiasm in some quarters for rIC as a potential new cardioprotective therapy [64]. Pertaining to cardioprotection in the context of MI and remodelling, a number of key obstacles remain in effectively translating the protection afforded by rIC in animal and early clinical trials into larger clinical trials and ultimately into routine clinical practice. One major challenge is that of the timing of rIC. Patients with an MI who present late to centres that can administer rIC may have completed their infarct and as such will derive minimal benefit from the procedure with regard to limiting I/R injury, although they may derive benefits in terms of remodelling [65]. Similarly, patients presenting with small infarcts or those receiving PPCI or thrombolysis very early may derive little benefit from rIC, as the scope for additional cardioprotection in this setting is limited [66,67]. Another significant challenge is that of the substantial comorbidity and polypharmacy often encountered in the MI patient population. In particular, type 2 diabetes, hyperlipidaemia, obesity and hypertension have all been shown to increase the threshold required for effective rIC [68]. Conversely, a number of the medications used in the context of MI or commonly taken by this group of patients already provide a significant degree of cardioprotection, namely ACE inhibitors, statins, opioids, insulin and a number of oral hypoglycaemic agents including metformin [69]. There are also a few medications that can inhibit the effects of rIC, including sulfonylureas [70]. These issues muddy the waters and make trial design and subsequent clinical translation challenging. Finally, from a practical perspective, rIC involves the application of a device on the arm that requires a number of inflation and deflation cycles; even with the use of an automated device, this can pose logistical problems in the ambulance or the catheter laboratory during PPCI, where time is of the essence and gaining arterial and venous access with the cuff in situ may pose an issue. Furthermore, in scenarios where rIC must be administered on a regular basis by the patient to target remodelling post-MI, the authors foresee significant concordance issues which may limit the therapy in this context. The use of automated rIC devices that can be interrogated may go some way to overcoming this issue. Conclusions RIC is only now beginning to reach its translational potential with regard to protection from ischaemic/reperfusion injury. Long-term outcome data for one-off rIC at the time of MI are awaited from the CONDI-2, ERIC-PPCI, DANAMI-3, RECOND and RIC-STEMI trials to supplement promising results from smaller preliminary studies.
It is yet to be established whether early preclinical data suggesting a clinically useful role for chronic, repeated rIC use in the context of post-MI remodelling will be borne out in the trial data, but it is hoped that results from both the DREAM and CRIC trials will go some way to answering this question and potentially open the door for larger clinical trials to follow. Acknowledgments APV would like to acknowledge the generous funding received by the NIHR Leicester Cardiovascular Biomedical Research Unit. Compliance with ethical standards Conflict of interests The authors declare that they have no competing interests. In addition, the authors have no affiliations or financial involvement with any organisation or entity with a financial interest in or financial conflict with the subject matter or materials discussed in the manuscript. Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Redefining the Roles of Master and Apprentice: Crossing the Threshold Through the Co-Creation of a First-Year Seminar Students as Partners is an innovative approach to higher education that seeks to redefine both student and faculty roles and expectations on college campuses through the creation of equitable and inclusive partnerships in a variety of ways. This paper details our research in the co-creation of the curriculum for an undergraduate first-year seminar. It describes our journey from conceptualization to assessment of the course, including creating the class, administering it for first-year students in the fall of 2018, and evaluating how successful the course was based on both our own perspectives and student course feedback. Findings suggest that both partners had a transformative experience in which they crossed a threshold, creating new expectations surrounding roles and relationships for future student-faculty partnerships. Additionally, the students enrolled in the course provided feedback about the perceived success of the co-created curriculum and the overall course experience based on our collaborative efforts. Within higher education, the Students as Partners (SaP) pedagogical approach seeks to acknowledge student expertise and shift more power to students in a wide array of decision-making processes across campuses. This paper details our experience with the co-creation of the curriculum for an undergraduate first-year seminar (FYS) course. We are an assistant professor of sociology and a student partner majoring in computer science and political science. It describes our journey from conceptualization to assessment of the course, including creating the class, administering it for first-year students in the fall of 2018, and evaluating the success of the course based on both our own perspectives and student course feedback. We found that the marriage of institutional expectations for an FYS and the SaP pedagogy made perfect sense. FYS classes are typically designed to focus on student success and retention of first-year students. Based on existing research, partnering with a current student in the curriculum design could enhance and reinforce these goals by relying on their expertise as a student rather than faculty's assumptions about what incoming students need. We begin our discussion with a review of the SaP literature. We rely on Cook-Sather's (2014) conceptualization of the threshold concept to frame our own personal threshold moments during this project. We also highlight the SaP pedagogical fit for an FYS course. Next, we present three perspectives on the project: that of the faculty partner, that of the student partner, and that of the entire class, compiled from themes found in enrolled student feedback. Findings suggest that SaP allows for a unique type of innovation when a faculty member and student collaborate on curriculum design. This is especially true in the context of an FYS course. However, the student partner experienced some negative costs associated with his role in the course. Lastly, while both partners each had a transformative experience in which they crossed a threshold of new institutional roles and expectations for future student-faculty relationships, students in the course also reaped some of the benefits of this new type of partnership. In fact, it was the student partner's presence in the course that created a sense of belonging for first-year students struggling to find their sense of place on a college campus.
LITERATURE REVIEW

Cook-Sather, Bovill, and Felten (2014) focus on reciprocity, respect, and responsibility as the core principles that should guide student-faculty partnerships. While subject-based student-faculty partnerships have the longest history in higher education, SaP as a pedagogy suggests that these traditional relationships are not inherently equal when it comes to power in the partnership, as the faculty mentor typically takes the lead while the student acts as an apprentice. Yet these partnerships also provide evidence that students often thrive in their college experience when they develop these close working relationships with faculty. Additional research has focused on students as consultants in partnership with a faculty member (Cook-Sather, 2014; Cook-Sather & Motz-Storey, 2016; Gourlay & Korpan, 2018; Kandiko Howson & Weller, 2016) along with students as mentors to new faculty during the orientation process (Cook-Sather, 2016). However, the most overlooked dimension of SaP is in curriculum design (Loveland, Moys, Tollett, & Towriss, 2016; Moys, 2018; Moys, Collier, & Joyce, 2018). Much of the research in this area has also been limited to the context of the United Kingdom. This paper seeks to fill this gap in the literature with our U.S.-based curriculum-design application of SaP. Additionally, we are unaware of any literature that focuses on the FYS course context as a model for implementing SaP.

It is important to understand SaP as an ongoing process rather than a static project. Meyer and Land (2006) describe threshold concepts as gateways that lead to transformed understanding. In teaching and learning, one key threshold concept involves transforming the assumption that teachers hold the knowledge that students come to passively learn. As a pedagogical approach, SaP embraces this type of threshold moment as a transformative and long-lasting process that changes our way of thinking and redefines relationships across campus, all while encouraging diversity and inclusion of thought (Cook-Sather, 2014; Healey, Flint, & Harrington, 2014; Matthews, 2017). SaP is about changing relationships permanently rather than just for the duration of the project. The outcomes of this change in process are particularly difficult but highly important for minority students who have not traditionally held positions of power on college campuses (Cook-Sather & Luz, 2015). In fact, the SaP pedagogy may help students struggling to feel like they even belong on a college campus develop a sense of identity and place in higher education. Student partners take on an authoritative role as subject experts while students in the classroom can see themselves in the student instructor. Developing both a sense of self as a college student and feeling a sense of belonging early can also aid in overall student success and retention outcomes. The effects of crossing the type of faculty-student power threshold described above can also have affective outcomes on a personal level and disrupt the compartmentalization of institutional life (Cates, Madigan, & Reitenauer, 2018). Felten (2017) notes that these affective outcomes are due to the emotions that are inherent in interpersonal relationships.
Being open to the emotional connections made in a student-faculty partnership is what helps participants cross the threshold in a meaningful and long-lasting way. Most importantly, "the benefit of student involvement in the enhancement of teaching is dependent on the perceived authenticity of student voice within a circumscribed idea of student expertise" (Kandiko Howson & Weller, 2016, p. 10). SaP can only be successful if all participants are equally invested in the idea that students can be equal contributors to a project because of their own unique expertise. This may seem counterintuitive to many faculty members, who are trained to guard their knowledge, authority, and power carefully. Opening up to a student emotionally to build a relationship while acknowledging the agency and capacity of the student as more than just a passive learner is necessary for crossing the threshold.

Faculty are often the unchallenged authority on a college campus while students sit in the classroom as passive learners. The expertise students bring with them to the classroom often goes overlooked as professors seek to impart the knowledge of their disciplines, as it will appear on final exams. However, SaP seeks to redefine the relationships traditionally found in higher education by breaking down the assumption that knowledge is a one-way street (Cook-Sather & Luz, 2015; Cook-Sather et al., 2018; Mercer-Mapstone et al., 2017; Peters & Mathias, 2018). This is especially relevant in an FYS course in which first-year students are learning about the expectations and practices of their institution. Peer instruction, in which the student partner is a subject expert, is particularly meaningful in this context for both the student partner and the student peers they interact with.

METHODS

This research was conducted at a small liberal arts institution with a student body of approximately 1,700 students in the Mid-Atlantic region of the United States between June 2018 and January 2019. Our team consisted of an assistant professor of sociology in her fifth year of teaching at the institution and a computer science and political science double major transitioning from his freshman to sophomore year at the college. This partnership was brought together by a passion for Star Wars and an interest in both teaching and collaborative research. The project was made possible by an internal grant that traditionally facilitates SaP in subject-based faculty-student research partnerships. Between June and August of 2018, we worked to design an FYS from the ground up, guided by the SaP pedagogy, called "Star Wars: The Good, The Bad, and The Sociology." This course spent half the time teaching first-year students how to be successful in college and half the time practicing critical reading and writing skills using the Star Wars saga and supplemental readings pertaining to introductory sociological concepts. We worked together to select readings and video content, design assignments, create in-class activities, and even build quizzes and exams. Joe helped designate which institutional resources should be included on the syllabus and the order in which they should be addressed based on his own first-year experience.
Elizabeth made arrangements with these resources for key staff in each area to present their resource to the class. Lastly, we also wanted to include the students in their own SaP project, so we created a semester-long assignment that would culminate in a presentation at a campus-wide venue. The students took on the role of subject experts and presented course material to over 100 of their peers from all across campus. From August to December of 2018, the course had 16 students enrolled. It was a racially diverse group of first-year students and included 13 males and three females.

We created assessment mechanisms over the summer, which were administered to students and collected for analysis upon completion of the course. These tools included feedback surveys administered at both midterm and end of semester along with an extra-credit evaluative question on the final exam specifically asking, "In what ways do you think that being in a co-created course affected your experience in this class? How did your learning experience compare to other courses this fall?" Surveys were administered using Microsoft Forms, and students received extra credit for completion. All open-ended data were coded by both the student and faculty partner for interrater reliability. Following the work of Berg (2009), we practiced open coding, asking ourselves the question, "How did our partnership affect students in the course we designed?" Common themes were then used to develop coding frames, which included the visible presence of the student partner throughout the course, the relatability of having a student partner create the course, and students feeling prepared for their college education, as discussed below.

In addition to the feedback data collected from enrolled students, we also relied on our own personal experiences in assessing the usefulness of the SaP pedagogy. We find these subjective disclosures an important part of understanding both our findings and the research that was conducted (Berg, 2009). We each discussed how we came to be interested in the project, the role we played in the partnership when it came to curriculum design, and a reflection on how we each experienced a transformative threshold crossing during our partnership. These perspectives are critical in helping other students, faculty, and institutions understand the intricacies of implementing a new pedagogical approach, and they complement our project findings.

CROSSING THE THRESHOLD

The faculty partner

After agreeing to teach a first-year seminar for my institution, I quickly realized I didn't know anything about being a student in today's world. Therefore, I recruited a student partner. I simply asked for a student who had successfully completed their FYS and who loved Star Wars. I am a sociology professor, and he was a rising sophomore in the Computer Science department; this interdisciplinary twist added to the excitement of our partnership. We began our course-design project by looking for appropriate readings not only on the Star Wars saga and sociology but also on how to be a successful student in college, the importance of higher education, and the value of the liberal arts. Joe helped scour through stacks of books and articles that I had started to collect in the spring.
I would give him micro lessons in core sociological concepts including race and gender while also discussing the social institutions of politics, religion, and family, all themes found throughout the films that we hoped to include. He would then determine which readings he felt were accessible and interesting to first-year students. Together we would discuss the rigor and appropriateness of his suggestions. He helped determine how many class periods we might need to cover a certain subject while also preserving space for a breadth of topics. As a non-sociology major, he also made sure that I was providing sociological explanations that were basic enough for non-majors with no prior knowledge of the subject.

Joe was also integral in designing engaging activities. For example, we developed the idea of doing a scavenger hunt around campus. He helped identify the most important resources first-year students would need and should know how to locate. He then developed a passport for students to have stamped at each location. These passports also included a helpful fact Joe developed regarding each location. We then used this list to schedule additional field trips to each location for further instruction. It was invaluable to have my student partner's help in these projects because he had recent first-hand experience with these services and the order in which students would likely need to access them.

Other ways we collaborated included in-class activity design and writing assignments. Joe was able to articulate what made an assignment accessible and interesting to him during his first year of college. He provided a perspective unique from my own and helped me establish a reasonable level of rigor. One example was his reminders about the workload and expectations students would also be facing in their ENG101/102 course and how our class should complement those assignments. When it came to developing grading rubrics for our assignments, Joe helped me understand how to best make sure feedback would be received constructively rather than critically. He also helped design a scaffolded series of grades in which assignments started out low stakes and became increasingly larger portions of the course grade. He emphasized that this would help students feel less deflated and hopeless at the beginning of their college career and incentivize them to respond positively to feedback. Lastly, in order to ensure his presence in the course when it was offered in the fall, Joe filmed a series of short videos that were integrated into the class, explaining the course, the assignments, and "tips and tricks" for being successful throughout the class. These would prove to be extremely successful at demonstrating Joe's presence in the class, even when he was not physically present.

Initially, I found this project to be very unsettling. This partnership demanded that I essentially pre-test all my ideas with a student, something I don't think many faculty think about or do when we plan a new course. We run it the first time, often creating on the fly, and then we make adjustments based on how it went.
Giving up this kind of control to an untrained undergraduate student went against all my training, especially as a female academic needing to possess an air of authority for her students to take her as seriously as their male professors. But then it became freeing. I realized I was still a part of the partnership and could help make sure the course met a specific level of rigor and included all institutional requirements. I spent less time worrying about reaching the students in my class, and I put more effort into helping Joe turn his ideas into praxis. I taught him about sociological concepts, and he helped make them accessible to incoming first-year students, most of whom were not sociology majors. He taught me about computer software capabilities and ways of including technology in the classroom. His enthusiasm for Star Wars, especially the newer films and TV series, was infectious. I explained to him my fondness for the classic trilogy, and he discovered the importance of context when viewing a cultural artifact. Most importantly, he relied on his own FYS experience to shape the experiences of the next incoming class. And I have never felt more prepared to teach a class in my entire career.

I was then able to take this transformed sense of partnership into the classroom. I did it by showing Joe's videos in class as well as posting them on our institution's learning management system, Moodle. I continuously opened myself up to critique by students in the class both informally and through the assessment mechanisms designed with Joe. We also had the students create a student-led, campus-wide presentation, empowering them to teach the campus about their semester-long projects. This was another example of how Joe and I sought to teach the campus about SaP both inside and outside the classroom. Attendees were exposed to an explanation of my partnership with Joe while also seeing first-year students taking command of course material. Students enrolled in the course became empowered with their newfound expertise and agency, and many cited the event as their favorite part of the course in their feedback.

In The Empire Strikes Back (Kurtz & Kershner, 1980), Yoda tries to teach Luke a lesson after his X-wing sinks even further into the Dagobah swamps. Luke is convinced that he will never be able to get it out. "Always with you it cannot be done. . . . You must unlearn what you have learned." In embarking on a SaP endeavor, I too had to unlearn what I had learned about traditional student-faculty relationships, expertise, and institutional role expectations. Once I did, I crossed the threshold.

The student partner

As a student, I have always been interested in the idea of pursuing research in college. At my current institution, I was fortunate enough to receive internal funding to pursue this interest. I knew I wanted to do research during my undergraduate years; I just did not know how I would come across it or what I would be studying. In the fall semester of my first year of college, I came across an advertisement from Dr. Kiester detailing a possible student-faculty research opportunity. After seeing this advertisement recruiting students interested in Star Wars and FYS, I knew it was the perfect opportunity for me. After going through Dr.
Kiester's application process, I was fortunate to be selected as the student partner. We applied for the available internal funding and became research partners in the summer of 2018, ready to pursue SaP.

Over the summer, I spent a significant amount of time learning the processes and thought patterns of a professor in higher education designing a new course. Being only a year out of high school at the time, I had my own ideas as to how professors thought and how classes were developed, but nothing close to what I learned with this experience. This was a very interesting project because essentially, I was learning the "behind the scenes" of college. I had the opportunity to talk to a professor outside my own coursework, learning about everything that goes into curriculum development. This was all on top of being equally responsible for an entire FYS course for the upcoming fall semester. It was not just the pairing of a student and a professor that made this research partnership interesting, but also the fact that it was interdisciplinary, as we are a team made up of a sociology professor and a student studying computer science and political science.

Throughout the summer, I realized that it was not just me who was learning new things. I was teaching Dr. Kiester about what it is like to be a student at the institution. I was also able to show my own academic strengths while contributing to the course and our work together with my computer science background, providing technical skills she lacked and making our work more efficient and accessible to students in the class. During this double-sided learning experience, we came across an article detailing a particularly resonant concept in SaP work: the threshold concept. We found ourselves experiencing this exact concept, crossing the threshold between student and faculty roles through collaboration on this course-design project. I crossed the threshold through this research and, with the help of Dr. Kiester and the course we made, provided a similar opportunity for other students to also have a transformative experience.

Once the class was in session, it was a very abnormal experience to be a student who just happened to also be an "instructor" to a handful of my peers coming into the college. It was both a rewarding experience and an unsettling one. It was rewarding in that I was able to be a mentor to some incoming first-year students and assist them in their academic journey by giving simple tips to aid their studies. I was also able to provide inspiration to a few of the students participating in the course by sparking a desire to venture into their own SaP work in the future. However, it was not all good feelings that came from being the quasi-instructor that I was to these students. I felt as if there was a certain pressure to be a model student for them, as I was the one who helped design and implement assignments that they would be graded on. This affected my own day-to-day academics, as I held myself to a higher standard when in communication with the students taking the course. This was a positive motivator for me to better myself, but it also added an abnormality to my semester.
I also noticed changes in tone throughout the semester from various participating students towards me, because their viewpoints shifted from that of a peer to that of a stereotypical student and instructor. This was another thing that I had not thought to consider when working in a SaP mindset: the social implications that come from being a peer who was partially responsible for what they would be graded on. These social implications are often overlooked in the SaP literature, which is typically very scientific and lacks the emotional responses of the student participants.

In my experience, taking up the work of an instructor makes for a vastly different college experience. In being an instructor who is also a peer, there is no longer the barrier of instructor-to-student; all communications are student-to-student, which makes for a much more rewarding, and complicated, set of interactions. Some rewarding moments for me included students showing a desire to achieve their goals at college and being positively affected by participating in our SaP experience. These were some of the best feelings from the whole experience, because they were exactly what I had aimed for in this research project. I wanted to provide a successful entrance into higher education, in which I created a course that could teach the basics of college and inspire the students involved to act on their passions and get the most out of their time in college. When asked about their experience in the class, one student responded:

Since this class was co-created under a student-faculty partnership, it has definitely affected my experience in the class. Having a student only a year older help create a class made the class based on a student's perspective. Therefore, him being a student, he knew how much a student can handle and what wasn't important. It affected my experience because I realized if another student can get through this class then so can I. My learning experience in this class taught me how to write better papers and work harder towards my goals. It taught me to look at things from a different perspective than I am used to. This will not only help me in my other courses, but in real life.

Receiving responses like this allowed me to see the positive influence that a successfully co-created course could have on incoming first-year students. However, there were also negative interactions that included direct personal criticisms from students and a forced social separation. For example, when talking with some students before tests, they would sometimes ask for information I would not give them, so I would have to withdraw from the conversation. Other instances included hearing occasional negative comments about having a peer in the role of instructor. Students react differently in a course and in response to their professors. Based on my observations, these reactions appear to be amplified, both positively and negatively, when the course design and instruction come not only from a professor but also from a peer. Having spent some time unofficially working as a teaching assistant (TA) for another professor, I can also say that this experience differs from that more traditional student-as-instructor relationship. As a TA, my role had more to do with helping fellow students master subject material with an understanding that I was not privy to all course content.
However, in this instance, students were well aware that I had helped develop content, including questions for quizzes and exams. It was this knowledge that created a negative reaction from peers when I refused to simply give them access to the content and the answers.

Overall, I hope that this research helps break down the stereotypical barrier between students and professors and ends the social implications for students participating in this innovative work for higher education. I believe that if SaP becomes more prevalent in academia, then it will foster a more creative and collaborative approach to the college experience. I am happy to have had this opportunity to work as a student partner on such an innovative pedagogical approach, and I am thrilled to be a part of the SaP pilot program at my institution. I am going to continue working to break the barriers between myself as a student and my other professors in the future, and to take more control over my own education while encouraging the people around me to do the same. This is all due to crossing the threshold.

The class

Sixteen first-year students took this class during their first semester at the institution. The class was designed to help first-year students develop basic college skills including critical thinking, writing, and oral communication. Students were also tasked with participating in a SaP project of their own. They had to create presentations that were the culmination of a semester-long assignment and that required group work to complete. At the end of the semester, each group then made a campus-wide presentation, demonstrating the way in which they had become subject experts.

Throughout the class, there were various feedback mechanisms given to the students to gauge the success of the course. This included both a midterm and an end-of-semester survey evaluating satisfaction with the course along with whether or not the student partner's impact on the course felt noticeable (e.g., Do you think Joe's input in designing this course has been noticeable? How satisfied are you with the writing assignments in this course?). Additionally, we looked at institutionally administered course evaluations for any mention of level of satisfaction with the course or the student partner's impact. After compiling the data and going through the open coding process, we noticed some key themes in students' feedback. These themes included the visible presence of the student partner throughout the course, the relatability of having a student partner create the course, and students feeling prepared for their college education.

The first theme of the student feedback pertained to the visible presence of the student partner throughout the course. This was accomplished with pre-recorded video segments by Joe directly speaking to the class. We decided to use these videos to make sure that students were invested in SaP and knew that Joe was an integral part of the course design and implementation even though he was enrolled in his own courses at the time this course was offered. Students responded positively to his presence:

Being able to walk into class and seeing a video of Joe explaining the next assignment and his suggestion of when to start it. Also, being able to have a student to talk to when you have any questions was really awesome too.
My learning experience was completely different compared to my other courses. Having a student and a professor made me learn so much more and I was truly appreciative to have this opportunity.

I believe that the co-created class under a student-faculty partnership was more beneficial. This affected my experience by giving a better understanding to the class. This was done with videos from Joe at the beginning of every chapter. Also, by a clear explanation on what kind of effort was required to do well.

I feel that this class being co-created in partnership with a student definitely aided in the explanation of the sociological aspect of the class. The terms were clear and understandable for a first-year class that consisted of many different majors and levels of interest in sociology.

Students often smiled when Elizabeth would play a new video at the beginning of class, frequently saying "Hi Joe" in response to the video. While faculty often convey to their classes things such as course expectations, assignment details and deadlines, and substantive material, hearing the same information come from a peer was perceived to be more beneficial than hearing it from the professor.

Second, not only was Joe's presence noticeable and welcome, students reported that he made the class more personable. Using SaP in an FYS provides students with a more relatable pillar in their education. Students noted in their feedback that faculty have not been in the seat of an undergraduate student for a very long time, making them feel less approachable. However, under the SaP approach, students appreciated that they had a peer providing professorial insight alongside the faculty member, giving students someone more relatable in the course. When asked about their experience in the class, this was the most prevalent theme, as the following student comments demonstrate:

My experience in this class felt very familiar and welcoming meaning that Joe knows what being a student is like, so he co-created the course with that in mind. Making it simple. It honestly made the class better. Joe's input made it better for the students.

His input helped us a lot because he knows what it's like being a college student. He was able to help set things up so that we didn't become overwhelmed. This course was probably one of my favorites because I like Star Wars and I like Sociology. It was able to blend the two. This course was very interesting.

I think this affected me in a good way. Joe sits in my seat everyday and understands what it's like to be a student in today's time. It affected my experience by giving me something to relate to within the class. I knew that it wasn't going to just be 100% all the professor's input going into the course. My learning experience was much greater when compared to other courses because I had many different ways to learn.

I feel like it affected my experience because you got to feel the student part of the class and also because I know Joe. I got help from him and got to ask him questions about the class.
While we had anticipated that there would be benefits for both the student and faculty partners when utilizing SaP to cross the threshold, we did not anticipate the significant impact it appears to have had on the students in our co-created class. They too seem to have crossed a threshold when it comes to their assessment of having someone deemed more relatable conveying the same information. This increased their perceptions of success in the class and helped them develop a sense of belonging on campus.

The third theme that we noticed in the student feedback was the level of preparedness students felt they achieved in this course. Since student success is a primary goal of an FYS course, we designed a variety of assessments that would measure this goal including writing assignments, tests, group work, and public speaking presentations. Students suggested in their feedback that the course did make them feel adequately prepared to handle the rigor of future college courses:

This was a class that assigned a lot of work, between papers and other work, just like my English class. In my English class, not many people understood what was due and when. However, in this class the structure was made so everyone could understand.

I had to do more intense group work for this course than any other course that I am taking, which for me took away from how I completed assignments that were part of the entire group "effort."

My least favorite part of the Experience Event was the nerves I had before it. I get really nervous and I was super scared to present in-front of many people. My favorite part of it was me overcoming my nerves and being proud of myself after it.

These responses suggest that Joe's collaboration in the curriculum design resonated with his peers. His understanding of what skills students needed to build in their first semester of college and how to get them to actively engage with our course assignments was invaluable to our FYS. Constant feedback is important for any course creator, but even more so for ones seeking to actively give students an amplified voice in their own education. It gives an idea of what students like and dislike about a course. This feedback is truly irreplaceable in the assessment of the success of a course, particularly one grounded in SaP. We were grateful not only for the positive feedback but also for more critical assessments of assignments or course structure that could be used to improve the course in its next iteration, such as the following comments:

This course was one of my favorite courses this semester. . . . Knowing that this class was co-created with a student, and was also being taught for the first time, I came into the class expecting for there to be rough spots. Things like the essays at the beginning of the semester or the scheduling of out of class trips felt off. However, they were off by product of it being a new class, not a failure on behalf of the curriculum. My all-time favorite part about this class was the Experience Event we did, something none of my other classes would have even thought of doing. Projects like that showed genuine collaboration by both student and teacher, and I'd love to see more in the future.
I do not know how to incorporate it into a sociology class, but I would recommend talking about the music of Star Wars for a day. The soundtrack is just as memorable and well-known as the characters, along with the fact that John Williams did an amazing job in every movie. It would be interesting to talk about how themes are reused in different trilogies, and also how different songs are used in different contexts throughout a trilogy such as "Duel of Fates" in the prequels.

We loved this last comment so much that we decided to create a Sociology of Star Wars music module for the class in the fall of 2019. The students seemed to appreciate this course overall and provided a large amount of feedback and their personal assessments of the course throughout its duration. We feel this is imperative to both an FYS course and the SaP model, and our use of it in curriculum development made for a positive learning experience for the enrolled students as well as ourselves.

DISCUSSION AND CONCLUSION

Over the past six months, we have actively sought to change the student-faculty dynamic while increasing the success of an FYS course at our institution. We embarked on this project excited to create a new experience for ourselves and our students. In doing so, we discovered several implications of SaP pedagogy, particularly in the FYS course context.

First, we found that the FYS context greatly benefits from the innovation of the SaP pedagogy. Students felt more engaged and connected to the course because of the student partner's role in the course design. His presence in the class, even in the form of a pre-recorded video, gave them confidence and helped them feel as though they were more prepared to face the challenges of college life and that they in fact belonged on a college campus. This is particularly important for first-year students and especially meaningful for first-generation students who least know what to expect in college. These findings also have implications when it comes to the issue of stereotype threat that many contemporary students face when they arrive on a college campus. Steele (1997) suggests that stereotype threat is a sense of not belonging coupled with a fear of performing a negative stereotype associated with one's group identity characteristics. This is especially relevant for racial minorities and first-generation college students with no idea what to expect on campus or in the classroom. The findings from our racially diverse sample suggest that SaP may have a role in helping students overcome this threat by providing them a sense of belonging and giving them a boost in confidence that they will be successful in college. These results would also have implications for student retention, which is particularly important during a first-year student's fall-to-spring transition.

Secondly, our findings demonstrate how impactful a peer can be when given the chance to be a subject expert. Historically, we have seen this in traditional student leader roles on campuses including resident assistants, peer tutors, and student government. Our findings indicate that peers in curriculum design and their presence in the classroom should also be utilized to increase student success and connectedness, especially when it comes to incoming first-year students.
As our students noted from their own SaP experience, they were very proud of their participation in the campus-wide presentation event, which made not only them, but also students in the audience, imagine what their own student-faculty partnership might look like and in what other areas they might be viewed as subject experts. SaP is thus capable of increasing student investment in the college experience in general and in their campus in particular.

Finally, both partners had a transformative experience in which they crossed the threshold of traditional faculty-student relationships. We concur that "genuine partnership in learning and teaching is an act of resistance to the traditional, often implicit, but accepted, hierarchical structure where staff have power over students" (Matthews, 2017, p. 6). Both partners felt an equal opportunity to contribute to the course design, each having to learn to cope with a shift in authority. Elizabeth had to break free from authoritative expectations as a subject expert to allow Joe the freedom to teach her what students want and need. Joe had to overcome submissive expectations as a passive learner seeking knowledge to assert his expertise at being a successful student. While this originally made us both uncomfortable, the experience changed our outdated view of traditional relationships on a college campus and made us want to advocate for reimagining a larger role for student-faculty partnerships in classrooms across campus, not only for the student and faculty partners but for the students enrolled in their class. Through our own perspectives about crossing the threshold combined with student feedback on the course's success, it appears that relying on the expertise of the student partner was successful in changing the experience of each person touched by this course. Students enrolled in the class and those who attended the student-led campus event approached us for advice about initiating their own SaP curriculum-design projects at the institution. It is this enthusiasm and feedback that we hope to share across our own campus while also inspiring and encouraging our readers to seek SaP opportunities at their own institutions without waiting for a campus-wide initiative. By sharing our own subjective experiences as the researchers, we hope to strengthen the understanding of the transformative threshold moment and its significance in perpetuating SaP as a process rather than a static moment. This is important when considering the long-lasting implications for the student success and retention campuses seek.

This project clearly has institutional and geographic limitations that we note. Additionally, Bindra et al. (2018) note a significant problem with SaP pedagogy in that it can act as a mechanism that reproduces inequality at institutions that fail to acknowledge the importance of including minority students even if they do not meet certain GPA or academic standards. They note that SaP projects that do not adequately compensate student partners monetarily also privilege wealthy students who can afford to partake in the experience without pay. Disrupting these traditional power structures is a valuable component of SaP as a threshold concept and should be further investigated.
This research was successfully reviewed according to our institution's IRB committee guidelines.
Cosmic Vine: A z=3.44 large-scale structure hosting massive quiescent galaxies

We report the discovery of a large-scale structure at z=3.44 revealed by JWST data in the Extended Groth Strip (EGS) field. This structure, called the Cosmic Vine, consists of 20 galaxies with spectroscopic redshifts at 3.43<z<3.45 and six galaxy overdensities ($4-7\sigma$) with consistent photometric redshifts, making up a vine-like structure extending over a ~4x0.2 pMpc^2 area. The two most massive galaxies ($M_*\approx10^{10.9}~M_\odot$) of the Cosmic Vine are found to be quiescent with bulge-dominated morphologies ($B/T>70\%$). Comparisons with simulations suggest that the Cosmic Vine would form a cluster with halo mass $M_{\rm halo}>10^{14}M_\odot$ at z=0, and the two massive galaxies are likely forming the brightest cluster galaxies (BCGs). The results unambiguously reveal that massive quiescent galaxies can form in growing large-scale structures at z>3, thus disfavoring the environmental quenching mechanisms that require a virialized cluster core. Instead, as suggested by the interacting and bulge-dominated morphologies, the two galaxies are likely quenched by merger-triggered starburst or active galactic nucleus (AGN) feedback before falling into a cluster core. Moreover, we found that the observed specific star formation rates of massive quiescent galaxies in z>3 dense environments are one to two orders of magnitude lower than that of the BCGs in the TNG300 simulation. This discrepancy potentially poses a challenge to the models of massive cluster galaxy formation. Future studies comparing a large sample with dedicated cluster simulations are required to solve the problem.

Introduction

Galaxy clusters are the most massive gravitationally bound structures in the Universe. Brightest cluster galaxies (BCGs) are the most luminous and massive elliptical galaxies located at the centers of galaxy clusters. Studying the progenitors of galaxy clusters and their BCGs in the early Universe is fundamental for our understanding of galaxy formation and evolution. In the past decade, massive and dense structures of galaxies have been continuously discovered at high redshift, from z ∼ 2 out to the epoch of reionization (e.g., Capak et al. 2011; Walter et al. 2012; Mei et al. 2015; Wang et al. 2016; Mantz et al. 2018; Oteo et al. 2018; Miller et al. 2018; Zhou et al. 2023; Brinch et al. 2024; Morishita et al. 2023). These structures have large scales, some extending over tens to hundreds of comoving Mpc (e.g., Koyama et al. 2013; Cucciati et al. 2018; Forrest et al. 2023), and most host a high abundance of star-forming galaxies. In these structures the most massive members are usually rich in gas and dust, and show vigorous star formation and complex morphologies. Simulations suggest that some of them would collapse and form galaxy clusters at later cosmic times, and hence they are likely proto-clusters hosting proto-BCGs (e.g., Chiang et al. 2013, 2017; Rennehan et al. 2020; Ata et al. 2022; Montenegro-Taborda et al. 2023). However, when and how the proto-BCGs quenched their star formation and transformed their morphology remain open questions.

In the z < 1 Universe it is well established that environmental quenching (Peng et al. 2010) is the dominant channel ceasing star formation in cluster galaxies, where galaxies were quenched via gas stripping and strangulation after falling into a virialized cluster core (Gunn & Gott 1972; Larson et al. 1980; Moore et al. 1998; Laporte et al. 2013; Peng et al.
2015; Shimakawa et al. 2018; Boselli et al. 2022). This quenching process was often presumed for z > 2 proto-clusters. For example, Shimakawa et al. (2018) proposed a scenario where the first generation of massive quiescent cluster galaxies is formed in an already collapsed cluster core where the environmental quenching is taking place. Nonetheless, at z > 1 this picture has been debated by multiple studies (e.g., Gobat et al. 2013; van der Burg et al. 2013, 2020; Webb et al. 2020; Ahad et al. 2024) that argue that most massive cluster galaxies are quenched by self-driven processes (e.g., mass quenching, AGN feedback) before entering a cluster core. Therefore, detailed study of high-redshift quiescent galaxies and their environments is crucial to disentangling the quenching mechanisms.

Recently, quiescent members have been spectroscopically identified in galaxy overdensities at z ≳ 3 (Kubo et al. 2021, 2022; McConachie et al. 2022; Ito et al. 2023; Shi et al. 2023; Sandles et al. 2023), which are exquisite samples to test the environmental quenching models. However, the shallow depth of photometric surveys and the high incompleteness of spectroscopic observations have hampered our efforts to reveal their large-scale structures and assess the dynamical status of their local environment; it is unclear whether they are hosted by a virialized cluster core. The situation is currently changing with the successful operation of the James Webb Space Telescope (JWST). Its unprecedented sensitivity and long wavelength coverage allow us to efficiently select distant quiescent galaxies and reveal their large-scale environments.

On the other hand, massive quiescent galaxies have been identified at z > 3 (e.g., Glazebrook et al. 2017; Schreiber et al. 2018a; Forrest et al. 2020a,b; D'Eugenio et al. 2021; Valentino et al. 2023; Carnall et al. 2023a), but their large-scale environments are barely studied due to the lack of deep imaging and spectroscopic follow-ups on megaparsec scales. Accordingly, a megaparsec-scale structure at z > 3 hosting massive quiescent galaxies in a well-defined JWST survey field would be an ideal laboratory to study the quenching and formation of proto-BCGs. In this paper we report a large-scale structure, the Cosmic Vine, at z = 3.44 in the Extended Groth Strip (EGS) field covered by JWST surveys, and investigate two massive galaxies in the structure. We adopt a flat ΛCDM cosmology with H_0 = 70 km s^-1 Mpc^-1 and Ω_M = 0.3, as well as a Chabrier initial mass function (Chabrier 2003).

Data processing and measurements

This study used photometric data from JWST and the Hubble Space Telescope (HST), and spectroscopic data from JWST and the literature. The JWST+HST photometric data and catalogs are publicly available in the Dawn JWST Archive (DJA), and the reduced images and spectra have been visualized on the DJA Interactive Map Interface.

The JWST imaging data are from the Cosmic Evolution Early Release Science survey (CEERS, Finkelstein et al. 2023). The data reduction, calibration, and source extraction follow the same pipeline applied in multiple studies (e.g., Valentino et al. 2023; Jin et al. 2023; Giménez-Arteaga et al. 2023; Kokorev et al. 2023; Gillman et al.
2023). Briefly, we retrieved the pipeline-calibrated Stage 2 NIRCam products from the Mikulski Archive for Space Telescopes (MAST), then calibrated the data and processed them as mosaics using the Grizli package (Brammer & Matharu 2021). The calibrated images are aligned to stars from the Gaia DR3 catalog (Gaia Collaboration 2023). Sources were first extracted in the stacked map of long-wavelength (LW) images using source extraction and photometry (SEP, Barbary 2016), and photometry was measured within apertures of 0.3″, 0.5″, and 0.7″ in diameter at the positions from the extraction. The CEERS photometric catalog includes photometry in seven JWST bands (F115W, F150W, F200W, F277W, F356W, F410M, and F444W) and seven bands of HST (F105W, F125W, F140W, F160W, F435W, F606W, and F814W). We adopted the 0.5″ aperture photometry with aperture correction. The photometric redshifts were calculated using the EAzY code (Brammer et al. 2008), which fits the above photometry with a linear combination of 12 pre-selected flexible stellar population synthesis (FSPS) templates.

The JWST spectroscopic observations used in this work are data from the NIRSpec Prism grating (project ID: DD-2750, PI: P. Arrabal Haro), which were taken using the Micro Shutter Assembly (MSA) multi-object spectroscopy (MOS) mode with the "clear" filter. The data were reduced and calibrated using MsaExp, following the reduction process described in Heintz et al. (2023). In short, we processed the spectroscopic data set using the custom-made pipeline MsaExp v. 0.6.7 (Brammer 2023). This code utilizes the Stage 2 products from the MAST JWST archive and performs standard calibrations for wavelength, flat-field, and photometry on the individual NIRSpec exposure files. MsaExp then corrects for the noise and the bias levels in individual exposures. The 2D spectra are combined for individual exposures, and the 1D spectra are extracted using an inverse-weighted sum of the 2D spectra in the dispersion direction. The NIRSpec Prism observations have wavelength coverage from 0.7 µm to 5.3 µm, with a varying spectral resolution from R ∼ 50 at the blue end to R ∼ 400 at the red end. The spectral redshifts are measured by fitting the 1D spectra with emission lines and continuum using MsaExp. We then visually inspected the fitted spectra and ranked the robustness of each redshift with grades from 0 to 3: 0 = data quality problem; 1 = no features; 2 = with features but ambiguous redshift; 3 = robust. We adopt the redshifts with robust features, grade 3. Given the low resolution of the NIRSpec Prism spectrum (R ∼ 100 at 3 µm), in Table 1 we rounded the Prism redshifts to a precision of 0.001.

Selection

The structure, the Cosmic Vine, was initially selected by applying the overdensity mapping technique of Brinch et al. (2023) with photometric redshifts z_phot from the CEERS catalog and seven spectroscopic redshifts z_spec from the literature. The overdensity mapping technique is based on a weighted adaptive kernel technique developed by Darvish et al. (2015) and Brinch et al. (2023). In the overdensity mapping procedure, the photometric redshift uncertainties were accounted for in the weight of the chosen redshift bin, and spectroscopic redshifts z_spec were given the highest weights. We performed the overdensity mapping in the redshift range 2 < z < 5 with a redshift bin size of 5%(1 + z). The Cosmic Vine was selected as the most significant overdensity in the redshift bin 3.29 < z < 3.77 (Fig. A.1, top). As shown in Fig.
1, six peaks of galaxy overdensities are found with >4σ significance over the field level, and three of them are found with >6σ. The primary overdensity peak (i.e., Peak A, upper right of Fig. 1) is centered on RA 214.86605, Dec 52.88426. Subsequently, we also searched for extra spectroscopic redshifts in the latest DJA archive and the literature. As listed in Table 1, we found 20 spectroscopically confirmed members in total, whose redshifts were collected from the DJA and multiple surveys (Schreiber et al. 2018b; Kriek et al. 2015; Stawinski et al. 2024; Cooper et al. 2012). In Fig. 1 we show 18 galaxies with z_spec as green circles; the other two sources (ID=42414, 46256 in Table 1) are located farther north, and hence are not shown in the figure. The two most massive members are Galaxies A and E (Fig. 1, right), and there are ∼200 candidate members with 3.3 < z_phot < 3.6, including a quiescent candidate (Galaxy B) selected by Valentino et al. (2023) and a submillimeter galaxy (Galaxy D) identified by Gillman et al. (2023).

SED and spectral fitting

For the confirmed members, we fit the JWST+HST photometry and NIRSpec spectra using the Bagpipes code (Carnall et al. 2018) with fixed z_spec. Following the recipes in Carnall et al. (2023a), we assumed a double-power-law star formation history (SFH), the attenuation curve of Salim et al. (2018), and radiation fields in the range −4 < log U < −2. We used a metallicity grid from log(Z/Z☉) = −2.3 to 0.70, an A_V grid from 0 to 4, and an age grid from 1 Myr to 2 Gyr. For Galaxy A, we ran Bagpipes with its JWST+HST photometry at fixed z_spec = 3.434. As Galaxy E has NIRSpec Prism data and shows post-starburst features, we performed spectrophotometric fitting following a method similar to that in Strait et al. (2023). We first scaled the NIRSpec 1D spectrum to the JWST photometry using a wavelength-dependent polynomial scaling curve. We note that Galaxy E shows a broad emission line (FWHM ∼ 3700 km s^-1) at the wavelength of Hα, which might be from active galactic nucleus (AGN) activity or a blending of Hα and [NII], whereas it is not feasible to model broad Hα+[NII] with this low-resolution spectrum (R ∼ 100 at 3 µm). We thus fit a single Gaussian to the broad Hα line and subtracted the best fit (FWHM = 3696 km s^-1) from the spectrum. Following Carnall et al. (2023a), we masked out narrow emission lines and fit the broadline-subtracted and masked spectrum together with the JWST+HST photometry. As the continuum can be boosted by an AGN and by nebular emission, we included an AGN component and a nebular model to account for continuum emission from the AGN and star-forming regions. The best-fit results are presented in Fig. 2 and Table 1.

[Notes to Table 1: (*) sources on the edge of the NIRCam LW mosaics; (†) uncertain redshift due to inconsistent z_phot = 2.43 (+0.20/−0.14); (a) Keck/MOSFIRE (Schreiber et al. 2018b); (b) JWST/NIRSpec (this work); (c) MOSDEF (Kriek et al. 2015); (d) Keck/DEIMOS (Stawinski et al. 2024); (e) Keck/MOSFIRE (this work); (f) DEEP3 (Cooper et al. 2012); n: Sérsic index; B/T: bulge-to-total ratio in F277W; Type: QG (quiescent galaxy), SF (star-forming).]
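For concreteness, the sketch below (Python) shows how a Bagpipes run along these lines could be configured for a photometry-only fit at fixed z_spec, using Galaxy A as the example. The catalog format of load_phot, the filter-curve paths, and the exact prior shapes (including the dust-slope and bump priors) are illustrative assumptions; only the double-power-law SFH, the Salim et al. (2018) attenuation law, the −4 < log U < −2 range, and the metallicity, A_V, and age grids quoted above follow the setup described in the text.

```python
import numpy as np
import bagpipes as pipes

def load_phot(ID):
    """Return an (N_filters, 2) array of flux and flux error in uJy,
    row-matched to filt_list below (hypothetical catalog dump)."""
    return np.loadtxt(f"{ID}_phot.txt")

# Filter transmission curves for the 7 JWST + 7 HST bands (paths are placeholders)
filt_list = [f"filters/{b}.dat" for b in
             ["f115w", "f150w", "f200w", "f277w", "f356w", "f410m", "f444w",
              "f105w", "f125w", "f140w", "f160w", "f435w", "f606w", "f814w"]]

dblplaw = {                       # double-power-law SFH (Carnall et al. 2018)
    "tau": (0.1, 2.0),            # time of the SFH peak / Gyr (age grid up to 2 Gyr)
    "alpha": (0.01, 1000.0),      # falling slope
    "beta": (0.01, 1000.0),       # rising slope
    "alpha_prior": "log_10",
    "beta_prior": "log_10",
    "massformed": (8.0, 12.0),    # log10 of stellar mass formed / Msun
    "metallicity": (0.005, 5.0),  # Z/Zsun, i.e. log(Z/Zsun) = -2.3 to 0.7
    "metallicity_prior": "log_10",
}

fit_instructions = {
    "dblplaw": dblplaw,
    # Salim et al. (2018) curve; the delta/B ranges are assumed priors
    "dust": {"type": "Salim", "Av": (0.0, 4.0),
             "delta": (-0.3, 0.3), "B": (0.0, 5.0)},
    "nebular": {"logU": (-4.0, -2.0)},   # radiation field range from the text
    "redshift": 3.434,                   # fixed z_spec of Galaxy A
}

galaxy = pipes.galaxy("GalaxyA", load_phot, filt_list=filt_list,
                      spectrum_exists=False)   # photometry-only fit
fit = pipes.fit(galaxy, fit_instructions, run="cosmic_vine")
fit.fit(verbose=True)   # samples the posterior; stellar mass and SFR follow
```

For Galaxy E, the same fit_instructions would additionally carry the AGN and nebular continuum components and be run jointly on the scaled, broadline-subtracted spectrum and the photometry, as described above.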
For the other confirmed members, we ran SED fitting with the JWST+HST photometry using the same Bagpipes setups. As flagged by asterisks in Table 1, four sources are on the edge of or are out of the CEERS NIRCam mosaics. Two of them have NIRSpec spectroscopy, and we thus fit their spectra to derive stellar masses and star formation rates (SFRs). Another has no JWST data, and we thus adopted the measurements from the EGS-CANDELS catalog (Stefanon et al. 2017). The last source (ID=49474) is a Lyα emitter only found in the catalog of Stawinski et al. (2024), and thus no photometry is available for SED fitting.

Morphology analysis

In order to quantify the morphology, we used SourceXtractor++ (Bertin et al. 2020; Kümmel et al. 2020) to fit the light profiles in the JWST images over the whole CEERS survey field. To obtain meaningful results, the morphological fitting was only done for sources detected in F444W with S/N > 20. For each source we applied two models: a single-Sérsic model with index varying from n = 1 to 8, and a Bulge+Disk decomposition with fixed index n = 4 for the bulge and n = 1 for the disk. The single-Sérsic fitting was performed by simultaneously fitting all available JWST images, and the Bulge+Disk decomposition was done for the image of each JWST band. In Fig. A.2 we show an example of the Bulge+Disk decomposition in F200W. The results of the single-Sérsic index, effective radius, and bulge-to-total ratio (B/T) are listed in Table 1 for the confirmed members of the Cosmic Vine. The error bars of the morphological parameters were obtained from the covariance matrix of the model fit, which was computed by inverting the approximate Hessian matrix of the loss function at the best-fit values. These error bars are found to be considerably underestimated, by a factor of 2−3 (Euclid Collaboration 2022, 2023; Shuntov et al., in prep.), and should be considered only as lower limits.

Cosmic Vine: A large-scale structure at z = 3.44

In addition to the overdensity of photometric redshifts, 20 galaxies have been found with 3.434 < z_spec < 3.45 in the Cosmic Vine area. As listed in Table 1, their redshifts were collected from the DJA and multiple surveys, including a survey of Lyα emitters (Stawinski et al. 2024) and the DEEP3 survey (Cooper et al. 2012). The galaxies with z_spec are shown as green circles in Fig. 1, and they overlap well with the galaxy overdensities of photometric redshifts. The sources with z_spec in the Cosmic Vine also dominate the available z_spec at z ∼ 3.4 in the ∼100 arcmin^2 CEERS field (Fig. A.1, bottom); the overdensity of the z_spec sources in the Cosmic Vine is 8.8σ above the field level. This solidly confirms that the Cosmic Vine is a real structure at z ∼ 3.44. Remarkably, the shape of the Cosmic Vine is significantly elongated: it extends over a length of ∼4 pMpc with a narrow width of ∼0.2 pMpc on the sky, which is significantly larger than compact galaxy groups and proto-clusters at z > 3 (e.g., Oteo et al. 2018; Miller et al. 2018; Daddi et al. 2022; Sillassen et al. 2022; Zhou et al. 2023). In the literature there are two structures that are very similar to the Cosmic Vine. The first is the z ∼ 3.35 large-scale structure PCl J0959+0235 reported by Forrest et al. (2023), which is at a similar redshift and hosts multiple overdensity peaks on a similar scale as well as massive quiescent members (McConachie et al. 2022). The second is the z = 2.2 large-scale structure found by Spitler et al. (2012), which has a comparably long and vine-like shape.

We note that a "tail" made of five sources with z_spec is present on the bottom left of Fig.
Cosmic Vine: A large-scale structure at z = 3.44

In addition to the overdensity of photometric redshifts, 20 galaxies have been found with 3.434 < z_spec < 3.45 in the Cosmic Vine area. As listed in Table 1, these redshifts were collected from multiple programs, including a survey of Lyα emitters (Stawinski et al. 2024) and the DEEP3 survey (Cooper et al. 2012). The galaxies with z_spec are shown as green circles in Fig. 1, and they overlap well with the galaxy overdensities of photometric redshifts. The sources with z_spec in the Cosmic Vine also dominate the available z_spec at z ~ 3.4 in the ~100 arcmin² CEERS field (Fig. A.1, bottom); the overdensity of z_spec sources in the Cosmic Vine is 8.8σ above the field level. This solidly confirms that the Cosmic Vine is a real structure at z ~ 3.44. Remarkably, the shape of the Cosmic Vine is significantly elongated: it extends over a length of ~4 Mpc with a narrow width of ~0.2 pMpc on the sky, making it significantly larger than compact galaxy groups and proto-clusters at z > 3 (e.g., Oteo et al. 2018; Miller et al. 2018; Daddi et al. 2022; Sillassen et al. 2022; Zhou et al. 2023). In the literature there are two structures that are very similar to the Cosmic Vine. The first is the z ~ 3.35 large-scale structure PCl J0959+0235 reported by Forrest et al. (2023), which is at a similar redshift and hosts multiple overdensity peaks on a similar scale as well as massive quiescent members (McConachie et al. 2022). The second is the z = 2.2 large-scale structure found by Spitler et al. (2012), which has a comparably long and vine-like shape.

We note that a "tail" made of five sources with z_spec is present on the bottom left of Fig. 1, but no galaxy overdensity has been found on it, because the five galaxies are on the edge of, or outside, the CEERS NIRCam mosaics, and their photometric information is incomplete. Hence, the membership identification is limited by the area of the CEERS survey; the actual size of the Cosmic Vine would be larger if members exist outside of the JWST mosaics.

Massive quiescent galaxies

Remarkably, the two most massive galaxies in the Cosmic Vine, Galaxy A and Galaxy E (Fig. 1), are found to be quiescent. Galaxy A is located in the densest region of the Cosmic Vine, known as Peak A. Galaxy A has been classified as a quiescent galaxy in multiple studies (Schreiber et al. 2018b; Valentino et al. 2023; Carnall et al. 2023b), and was first reported at z_spec = 3.434 by Schreiber et al. (2018b) using Keck/MOSFIRE spectroscopy. Notably, this redshift has a confidence probability of 84% and was flagged as uncertain in Schreiber et al. (2018b). However, using the latest JWST and HST photometry, the photometric redshift of Galaxy A has been constrained to z_phot = 3.53 +0.08/−0.10 (16th, 84th percentiles) by EAzY SED fitting in the DJA catalog. As an independent measure, Carnall et al. (2023b) estimated z_phot = 3.44 +0.14/−0.08 using Bagpipes with a different version of the JWST+HST photometry. The two z_phot results agree well with z_spec = 3.434, and are consistent with the median redshift of the Cosmic Vine within the z_phot uncertainty. Furthermore, the z_phot uncertainty of Galaxy A (Δz ~ 0.1) is two times smaller than the median z_phot error of the other confirmed members. All these pieces of evidence support Galaxy A being a member of the Cosmic Vine.

With state-of-the-art JWST and HST photometry, as shown in Fig. 2, the Bagpipes SED fitting yields a stellar mass of log(M_*/M⊙) = 10.82 ± 0.02 and an upper limit on the SFR of < 0.5 M⊙ yr⁻¹ (95th quantile), confirming its massive and quiescent nature. The inferred SFH suggests a post-starburst picture, with a peak of star formation of 350 M⊙ yr⁻¹ occurring at z ~ 4.5 and quiescence reached by z = 4 (Fig. A.3). The peak SFR is comparable to that of submillimeter galaxies (SMGs) at z ~ 4 (e.g., Jin et al. 2022). Coincidentally, a tidal tail associated with Galaxy A is robustly detected in the NIRCam F200W and LW images (Fig. A.2), indicating a merger morphology. Our morphology analysis gives a Sérsic index of n ~ 2.7 and a bulge-to-total ratio of B/T > 0.7, revealing a bulge-dominated morphology. Moreover, the size of Galaxy A is extremely compact, with an effective radius of r_eff = 622 ± 3 pc. The size and the stellar mass surface density within r_eff (log(Σ_eff/(M⊙ kpc⁻²)) = 10.43 ± 0.03) are comparable to those of compact starburst galaxies (Puglisi et al. 2019; Gullberg et al. 2019; Diamond-Stanic et al. 2021), which again supports the major-merger and post-starburst nature.

Galaxy E was selected as a quiescent candidate by Merlin et al. (2019) in the Stefanon et al. (2017) catalog. Recently, it was re-selected by Carnall et al. (2023b) using its specific star formation rate (sSFR) derived from SED fitting with JWST photometry (z_phot = 3.53 ± 0.12), and was also selected by Valentino et al. (2023) using the (NUV−U)−(V−J) diagram of Gould et al. (2023). It is the most massive galaxy in the Cosmic Vine, with log(M_*/M⊙) = 10.95 ± 0.03, and is confirmed at z = 3.442 with JWST/NIRSpec Prism spectroscopy (Fig. 2).
Galaxy E is well detected, with Hα emission and a strong Balmer break. As no other lines are present in the spectrum, it appears to be a post-starburst galaxy (e.g., Chen et al. 2019; French 2021). The Bagpipes fitting of the NIRSpec spectrum shows negligible star formation, with an upper limit of SFR_SED < 0.4 M⊙ yr⁻¹ (95th quantile) and a moderate attenuation A_V = 0.45 ± 0.06. The inferred SFH is relatively uncertain, but suggests a quenching time at z ~ 4.

The Hα emission of Galaxy E appears dominated by a broad component (FWHM = 3696 ± 324 km s⁻¹), which suggests AGN activity or blending of [NII]+Hα. Given the low resolution of the Prism spectrum, the two cases cannot be distinguished with current data, and high-resolution spectroscopy is required to identify potential AGN activity. Nevertheless, we derived SFR_Hα upper limits for both cases. In the first case, assuming the broad component is from an AGN, the residual after subtracting the best-fit broad Gaussian is minimal (i.e., 1.39 × 10⁻¹⁸ erg s⁻¹ cm⁻²). By integrating the Hα absorption of the best-fit model, we obtained an upper limit on the narrow Hα flux of 3.45 × 10⁻¹⁸ erg s⁻¹ cm⁻². Accounting for the attenuation, this gives a constraint of SFR_Hα < 1.8 M⊙ yr⁻¹ according to the Hα–SFR correlation of Pflamm-Altenburg et al. (2007). This might suggest that Galaxy E is a quiescent galaxy hosting an active black hole, similar to the z = 4.7 galaxy GS-9209 (Carnall et al. 2023a). In the second case, assuming no AGN contribution to the Hα emission, the integrated Hα flux would be 2.59 × 10⁻¹⁷ erg s⁻¹ cm⁻². Adopting a ratio of [NII]/Hα = 0.3, which is typical for star-forming galaxies, we obtained an upper limit of SFR_Hα < 11.8 M⊙ yr⁻¹. We note that this is a conservative limit, because the [NII]/Hα ratio can be high in high-z quiescent galaxies (e.g., [NII]/Hα = 0.97 in Carnall et al. 2023a), and the Hα from star formation could be even fainter if there is any AGN activity. The two SFR limits give sSFR upper limits of log(sSFR/yr⁻¹) < −10.7 and log(sSFR/yr⁻¹) < −9.9, respectively. Both results are compatible with the sSFR from the SED fitting (Fig. A.3), and support the quiescent nature of Galaxy E. Here we adopt the more conservative limit of log(sSFR/yr⁻¹) < −9.9.

Our morphology analysis shows that Galaxy E has a Sérsic index of 3.81, close to that of local elliptical galaxies, and the bulge-disk decomposition gives B/T = 0.76, revealing a bulge-dominated morphology. In contrast to Galaxy A, Galaxy E is located in a relatively isolated environment, where the local overdensity is just above the field level at 2σ significance. No robust interaction features are found in Galaxy E, and its effective radius is about three times larger than that of Galaxy A. In comparison with z ~ 0.1 post-starburst galaxies, which have an average n = 1.7 (Sazonova et al. 2021), the Sérsic indices of the two galaxies are larger by factors of 1.6 and 2.2, respectively.
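The flux-to-SFR conversion above can be sketched numerically. The snippet below uses the fluxes, attenuation, and stellar mass quoted in the text, but the SFR coefficient is the classic Kennicutt (1998) calibration as a stand-in for the Pflamm-Altenburg et al. (2007) relation actually adopted, and the dust prescription is a generic Milky-Way-like assumption, so it only reproduces the quoted limits to within a factor of ~2.

```python
import numpy as np
from astropy.cosmology import Planck18
import astropy.units as u

def sfr_from_halpha(f_halpha_cgs, z, A_V=0.0):
    """SFR [Msun/yr] from an observed H-alpha flux [erg/s/cm^2].

    Assumes A(Halpha) ~ 0.82*A_V (Milky-Way-like curve) and the
    Kennicutt (1998) coefficient; both are stand-ins for the paper's
    actual choices.
    """
    d_L = Planck18.luminosity_distance(z).to(u.cm).value
    L = 4 * np.pi * d_L**2 * f_halpha_cgs          # erg/s, uncorrected
    L *= 10 ** (0.4 * 0.82 * A_V)                  # de-redden
    return 7.9e-42 * L

# Case 1 (AGN-dominated broad line): narrow-Halpha flux limit
sfr1 = sfr_from_halpha(3.45e-18, z=3.442, A_V=0.45)
# Case 2 (no AGN): total flux, removing a typical [NII]/Halpha = 0.3
sfr2 = sfr_from_halpha(2.59e-17 / 1.3, z=3.442, A_V=0.45)
# sSFR limits relative to log(M*/Msun) = 10.95
for sfr in (sfr1, sfr2):
    print(np.log10(sfr / 10**10.95))   # within ~0.3 dex of -10.7 / -9.9
```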
We note that the SED of Galaxy E is bluer than that of Galaxy A and other typical quiescent galaxies; this occurs because the blue part of our best-fit model (λ_obs < 1.5 µm) is dominated by the AGN. SED fitting without an AGN component would instead yield a high SFR = 495 ± 102 M⊙ yr⁻¹ with high attenuation A_V = 1.00 ± 0.04. With such a high SFR and attenuation, Galaxy E would be detected in the far-infrared (FIR) and (sub)millimeter. We checked ancillary FIR and millimeter data sets (MIPS, Herschel, and SCUBA2), and found that Galaxy E is not detected in any of the images. Furthermore, we made use of the super-deblended FIR+submm+radio catalog of the EGS field from Le Bail et al. (in prep.), in which the MIPS, Herschel, SCUBA2, and AzTEC images were deblended using the super-deblending technique (Jin et al. 2018; Liu et al. 2018). As shown in Fig. A.4, Galaxy E is not detected in any FIR or (sub)millimeter band, and is only tentatively detected at MIPS 24 µm and VLA 3 GHz with S/N ~ 3. We performed a panchromatic NIR-to-radio SED fitting and obtained an upper limit of SFR_IR < 210 M⊙ yr⁻¹. This FIR SFR is compatible with the SFR_Hα limit, but disagrees with the dusty SFR ≈ 500 M⊙ yr⁻¹ solution; the dusty star-forming scenario is therefore disfavored for Galaxy E.

Halo mass

As the Cosmic Vine is an extremely long and large structure (~4 pMpc), it is unlikely to be hosted by a single dark matter halo, although the densest region, Peak A, might already have collapsed. We thus estimated the dark matter halo mass of Peak A following the methods in Sillassen et al. (2022): (1) using the M_halo–M_* scaling relation from Behroozi et al. (2013) and the stellar mass of Galaxy A yields a halo mass of log(M_halo/M⊙) = 12.5 ± 0.4; (2) we obtained a total stellar mass of M_*,total = (2.6 ± 0.4) × 10¹¹ M⊙ by summing the stellar masses down to 10⁷ M⊙ of all confirmed and candidate members of Peak A within a radius of 15″ (111 pkpc); adopting the dynamical-mass-constrained M_halo–M_* scaling relation for z ~ 1 clusters with 0.6 × 10¹⁴ < M/M⊙ < 16 × 10¹⁴ (van der Burg et al. 2014) yields a halo mass of log(M_200/M⊙) = 12.8; (3) adopting the stellar-to-halo mass relation of Shuntov et al. (2022) and M_*,total = (2.6 ± 0.4) × 10¹¹ M⊙, we obtained a halo mass of log(M_halo/M⊙) = 12.7; (4) assuming a group velocity dispersion σ_V = 400 km s⁻¹, we found that the galaxy number density of Peak A (within a putative R_vir < 15″) is higher than the average field density by a factor of 97 at z ~ 3.4 in the CEERS catalog; applying a mean baryon and dark matter density of 7.41 × 10⁻²⁶ kg m⁻³ in comoving volume and a galaxy bias factor of 10−20 at z = 3.4 (Tinker et al. 2010), we obtained a halo mass of log(M_halo/M⊙) = 12.4−12.7. The four methods agree on an average log(M_halo/M⊙) = 12.66 with a scatter of 0.26 dex. We adopted a halo mass of log(M_halo/M⊙) = 12.7 with a conservative uncertainty of 0.4 dex, which is representative at these faint levels (e.g., Daddi et al. 2021; Sillassen et al. 2022).
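Method (4) above can be reproduced approximately with the numbers quoted in the text. The sketch below assumes a spherical comoving aperture, which is a guess at the geometry (the actual counting volume also depends on the assumed σ_V = 400 km s⁻¹); with that caveat it brackets the quoted 12.4−12.7 range to within rounding.

```python
import numpy as np

MSUN_KG = 1.989e30
MPC_M = 3.086e22

def halo_mass_from_overdensity(delta_gal, bias, r_pkpc, z,
                               rho_mean=7.41e-26):
    """Halo mass [Msun] from a galaxy overdensity (method 4).

    delta_gal : galaxy overdensity factor over the field (here 97)
    bias      : galaxy bias (10-20 at z ~ 3.4; Tinker et al. 2010)
    r_pkpc    : aperture radius in proper kpc (111 pkpc ~ 15 arcsec)
    rho_mean  : mean comoving baryon+DM density [kg m^-3]
    Assumes a spherical comoving aperture (an illustrative choice).
    """
    r_cmpc = r_pkpc / 1e3 * (1 + z)             # comoving radius [Mpc]
    vol = 4 / 3 * np.pi * (r_cmpc * MPC_M) ** 3
    delta_m = delta_gal / bias                   # matter overdensity
    return rho_mean * vol * (1 + delta_m) / MSUN_KG

for b in (10, 20):
    print(b, np.log10(halo_mass_from_overdensity(97, b, 111, 3.44)))
# -> roughly log(M_halo/Msun) = 12.5-12.8, close to the quoted 12.4-12.7
```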
Quenching mechanisms

The elongated shape, large size (~4 pMpc), and wide velocity range (~1100 km s⁻¹) suggest that the Cosmic Vine is not a virialized system. The abundance of star-forming galaxies (Table 1), a confirmed Type 1 AGN member (Table 1, ID = 46256), and a potential SMG cluster member (Fig. 1, source D; see also Gillman et al. 2023) indicate that the cluster is in its growing phase (Shimakawa et al. 2018). In comparison with the z = 2.16 Spiderweb proto-cluster (Koyama et al. 2013; Shimakawa et al. 2018; Jin et al. 2021), the Cosmic Vine has a comparable comoving size and velocity range. However, the Spiderweb is at least partially virialized, as is evident from its extended X-ray emission and the detection of the Sunyaev-Zeldovich effect (Tozzi et al. 2022a,b; Di Mascolo et al. 2023). In contrast, Peak A of the Cosmic Vine is approximately eight times less massive than the core of the Spiderweb, and its projected shape appears elongated, meaning that it is unlikely to be a virialized structure. Surprisingly, two massive quiescent galaxies formed in this large structure, of which Galaxy E is explicitly not in the core region but is already quenched. This indicates that a cluster core is not essential for quenching massive cluster galaxies, and quenching mechanisms that require a virialized cluster core are thus disfavored. We note that recent studies suggest that ram-pressure stripping (RPS) can occur in local clusters that are not fully virialized (e.g., Lourenço et al. 2023), where a hot intracluster medium (ICM) has formed in merging clusters with log(M_halo/M⊙) = 14−15. However, the core of the Cosmic Vine is less massive in M_halo by more than one order of magnitude, and a hot ICM is unlikely to have formed. Moreover, RPS is expected to suppress star formation more efficiently in low-mass galaxies than in massive galaxies, whereas in the Cosmic Vine only the most massive members are quenched. For example, Galaxy C is spectroscopically confirmed at z = 3.439 and located well within the core region. It is less massive than Galaxy A by a factor of six, but it is fairly star-forming, which is inconsistent with the picture of gas stripping.

The challenge is to determine what is quenching their star formation at such an early cosmic time. Thanks to the high sensitivity and long-wavelength coverage of JWST, the two quiescent galaxies are revealed with interesting features that allow us to assess their quenching mechanisms. As shown in Table 1, the two galaxies show bulge-dominated morphologies (B/T > 0.7). Galaxy A has an extremely compact bulge and a tidal tail, both of which point to a merger. Galaxy E shows potential AGN activity. On the other hand, the SFHs from SED fitting suggest that they were quenched at 4 < z < 6. Given that the post-merger timescale is ~1 Gyr (Lotz et al. 2008), a merger event could have happened before the starburst and quenching phases, as suggested by the SFHs. Therefore, it is likely that the two galaxies were quenched by merger-triggered starbursts in the past 500 Myr. Strong AGN feedback is also a possible quenching channel for star formation; however, this is difficult to verify, since the AGN activity could have taken place after the quenching of the galaxy (z < 4).

Comparison with simulations

We compare our halo mass estimate with the masses of proto-clusters in Chiang et al. (2013) and in the TNG300 simulation (Montenegro-Taborda et al. 2023). Chiang et al. (2013) used a semi-analytical galaxy formation model (Guo et al. 2011) run on the dark-matter-only N-body simulation Millennium (Springel et al. 2005), with which they tracked the evolution of dark matter and galaxies in about 3000 clusters from z = 7 to z = 0. Montenegro-Taborda et al. (2023) selected 280 systems with M_200 ≥ 10¹⁴ M⊙ at z = 0 in the TNG300 simulation (Pillepich et al. 2018) and traced their progenitors and proto-BCGs to high redshift. In Fig. 3 we compare the halo mass of Peak A with the results from the Chiang et al. (2013) model and TNG300.
As Galaxy A is likely a proto-BCG, we also compare its stellar mass with that of BCGs in TNG300 (Montenegro-Taborda et al. 2023). We found that the halo mass of Peak A is consistent with the progenitor of a Fornax-class cluster in the TNG300 simulation, even accounting for the halo mass uncertainty, and it also partially agrees with the theoretical prediction of Chiang et al. (2013). This suggests that Peak A would evolve into a cluster with M_halo > 10¹⁴ M⊙ at z = 0. Although the halo mass of Peak A might be lower if it is not virialized, merging with nearby overdensities at a later time would significantly increase the mass to above that of the cluster progenitors, and the final mass could be even higher if galaxies in the large-scale structure fall into Peak A (e.g., Ata et al. 2022). Meanwhile, the stellar mass of Galaxy A is also consistent with the BCG progenitors at z = 3.44 (Fig. 3, left), supporting the idea that Galaxy A is a proto-BCG. The consistency with simulations supports the picture that the Cosmic Vine is on the way to forming a cluster. Furthermore, the halo and stellar masses of other massive proto-clusters at 1.3 < z < 4.4 are also found to be consistent with the simulations (e.g., Rosati et al. 2009; Stanford et al. 2012; Gobat et al. 2013; Andreon et al. 2014; Mantz et al. 2018; Wang et al. 2016; Miller et al. 2018; Sillassen et al. 2022; Coogan et al. 2023; Shimakawa et al. 2024; Pérez-Martínez et al. 2023; Tozzi et al. 2022b; Di Mascolo et al. 2023), which again supports the picture of a forming cluster. We note that Galaxy E has a slightly higher stellar mass than Galaxy A, but is relatively isolated; it is possible that Galaxy E will become a BCG if it falls into the cluster core at a later cosmic time.

Moreover, we compared the sSFRs of z > 2.5 quiescent cluster galaxies from the literature (Kubo et al. 2021, 2022; McConachie et al. 2022; Ito et al. 2023; Shi et al. 2023) with those of the BCGs in TNG300 (Montenegro-Taborda et al. 2023). The sSFRs of BCGs in TNG300 were measured within radii of 2R_e and 50 kpc, respectively. We found that the observed sSFRs at z > 3 are all lower than the predictions from TNG300 by one to two orders of magnitude (Fig. 3, right). The discrepancy remains even when accounting for the uncertainty of the sSFRs measured within a <50 kpc radius of the proto-BCGs in TNG300. This stark discrepancy poses a challenge to models of massive cluster galaxy formation in TNG300. It is unclear why TNG300 fails to reproduce the quiescence of massive proto-cluster galaxies; this could be caused by a combination of several effects. First, as suggested by the potential quenching mechanisms of the two quiescent galaxies, TNG300 might lack sufficiently strong starburst and AGN feedback, and hence be inefficient at quenching star formation. This picture agrees with a recent study by Kimmig et al. (2023) for field galaxies. Second, the SFHs of BCGs might not decrease monotonically with cosmic time, as the star formation of quiescent galaxies can be rejuvenated (e.g., Remus & Kimmig 2023), and quiescence on a timescale shorter than the snapshot spacing of TNG300 would be missed in the simulation. Third, recent studies found that Illustris overpredicts the Madau & Dickinson (2014) SFR density by a factor of two at z ~ 3.5 (e.g., Shen et al. 2022), which could partially overestimate the SFRs of BCGs.
Furthermore, the discrepancy could also be due to differences in the methods with which the SFR and M_* are measured. We note that the number density of quiescent cluster galaxies is a more straightforward quantity for comparison with simulations; however, the sample size at z > 3 is too small to give a good constraint on the number density. Fortunately, identifying a large sample of quiescent cluster galaxies at high redshift will soon become possible with the Euclid telescope. Future work comparing a large sample with dedicated cluster simulations, for example Cluster-EAGLE (Barnes et al. 2017), Magneticum (Remus et al. 2023), FLAMINGO (Schaye et al. 2023), and TNG-Cluster (Nelson et al. 2023), will be essential to solving the problem.

Conclusions

Using JWST and ancillary data in the EGS field, we discovered a large-scale structure, the Cosmic Vine, at z = 3.44. Our conclusions are as follows:

1. The Cosmic Vine is confirmed by 20 galaxies with spectroscopic redshifts at 3.43 < z < 3.45. It hosts six galaxy overdensities of ~200 candidate members in a vine-like structure extending over ~4 × 0.2 pMpc².

2. The two most massive galaxies (M_* ≈ 10^10.9 M⊙) in the Cosmic Vine are found to be quiescent, with bulge-dominated morphologies. This unambiguously demonstrates that massive quiescent galaxies can form in growing large-scale structures at z > 3, disfavoring environmental quenching mechanisms that require a virialized cluster core.

3. We derived a halo mass of log(M_halo/M⊙) = 12.7 for the primary overdensity peak of the Cosmic Vine. Comparisons with simulations suggest that the Cosmic Vine will form a cluster with halo mass M_halo > 10¹⁴ M⊙ at z = 0, and that the two massive galaxies are likely forming its BCGs.

4. Comparing with the sSFRs of proto-BCGs in the TNG300 simulation, we found that the observed sSFRs of massive quiescent galaxies in z > 3 dense environments are significantly lower, by one to two orders of magnitude. This stark discrepancy poses a potential challenge to models of massive cluster galaxy formation.

A large sample of quiescent cluster galaxies at high redshift and comprehensive comparisons with dedicated cluster simulations will be essential to resolving the discrepancy between observations and simulations, and will thereby shed light on the detailed physics of cluster formation.

Fig. 1. JWST color-composite image of the Cosmic Vine (red: F356W+F410W+F444W; green: F200W+F277W; blue: F115W+F150W). Left: the large-scale structure. Sources with 3.435 < z_spec < 3.455 are marked with green circles. The white contours show the overdensity of 3.2 < z_phot < 3.7 sources at levels of 2, 4, and 6σ. Right: 10″ × 10″ images centered on the two massive galaxies in the Cosmic Vine. Galaxies with z_spec are highlighted with green arrows, and candidate members with 3.3 < z_phot < 3.6 are marked with cyan arrows.

Fig. 2. SEDs and spectra of Galaxies A and E. The blue curves show the best fit of the Bagpipes fitting. For Galaxy E, the NIRSpec Prism 2D spectrum is overplotted on the 1D spectrum. The 1D spectrum is shown in red, with the uncertainty marked by shading.

Fig. 3. Comparison with simulations. Left: mass vs. redshift for literature proto-clusters and the TNG300 simulation. The gray and blue shaded areas show the simulated halo mass evolution of proto-clusters in the Chiang et al. (2013) and TNG300 (Montenegro-Taborda et al. 2023) simulations, respectively.
The red shaded area marks the stellar mass of BCGs in TNG300 (Montenegro-Taborda et al. 2023). The halo mass of Peak A and the stellar mass of Galaxy A are consistent with the predictions of the models, suggesting a massive descendant with a halo mass of more than 10¹⁴ M⊙ at z = 0. Right: sSFR vs. redshift for BCGs. The sSFRs of BCGs in the TNG300 simulation (Montenegro-Taborda et al. 2023) are shown as blue and red curves, overlaid with the observed sSFRs of massive quiescent members of z > 2 proto-clusters. The blue shaded regions indicate the 16th to 84th percentile range of the r < 50 kpc sSFR measurements in TNG300.

Table 1. Confirmed members of the Cosmic Vine. Notes. (*) Sources on the edge of the NIRCam LW mosaics; (†) uncertain redshift due to an inconsistent z_phot = 2.43 +0.20/−0.14; (a) Keck/MOSFIRE (Schreiber et al. 2018b); (b) JWST/NIRSpec (this work); (c) MOSDEF (Kriek et al. 2015); (d) Keck/DEIMOS (Stawinski et al. 2024); (e) Keck/MOSFIRE (this work); (f) DEEP3 (Cooper et al. 2012). n: Sérsic index; B/T: bulge-to-total ratio in F277W; Type: QG (quiescent galaxy), SF (star-forming).
Pharmacokinetics and metabolism of 13-cis-retinoic acid (isotretinoin) in children with high-risk neuroblastoma – a study of the United Kingdom Children's Cancer Study Group

The administration of 13-cis-retinoic acid (13-cisRA) following myeloablative therapy improves 3-year event-free survival rates in children with high-risk neuroblastoma. This study aimed to determine the degree of inter-patient pharmacokinetic variation and the extent of metabolism in children treated with 13-cisRA. 13-cis-retinoic acid (80 mg m⁻² b.d.) was administered orally, and plasma concentrations of parent drug and metabolites were determined on days 1 and 14 of courses 2, 4 and 6 of treatment. Twenty-eight children were studied. The pharmacokinetics of 13-cisRA were best described by a modified one-compartment, zero-order absorption model combined with a lag time. Mean population pharmacokinetic parameters included an apparent clearance of 15.9 l h⁻¹, an apparent volume of distribution of 85 l and an absorption lag time of 40 min, with large inter-individual variability associated with all parameters (coefficients of variation greater than 50%). Day 1 peak 13-cisRA levels and exposure (AUC) were correlated with the method of administration (P<0.02), with 2.44- and 1.95-fold higher parameter values, respectively, when 13-cisRA capsules were swallowed as opposed to being opened and the contents mixed with food before administration. Extensive accumulation of 4-oxo-13-cisRA occurred during each course of treatment, with plasma concentrations (mean ± s.d. 4.67 ± 3.17 μM) higher than those of 13-cisRA (2.83 ± 1.44 μM) in 16 out of 23 patients on day 14 of course 2. Extensive metabolism to 4-oxo-13-cisRA may influence the pharmacological activity of 13-cisRA.

The retinol derivative 13-cis-retinoic acid (13-cisRA) is now an established component of the treatment of high-risk neuroblastoma, despite the fact that early phase II trials conducted with low-dose 13-cisRA showed limited clinical benefit in patients with recurrent disease (Reynolds et al, 1991; Finklestein et al, 1992; Kohler et al, 2000). When 13-cisRA was administered as a high-dose (160 mg m⁻² day⁻¹), intermittent regimen in a Children's Cancer Group phase III randomised trial, a significant improvement in 3-year event-free survival (EFS) was observed (Matthay et al, 1999). Factors that may explain the efficacy observed in the latter study include the higher dose administered and the use of an intermittent dosing regimen, incorporating 2 weeks of 13-cisRA followed by a 2-week rest period in each course of treatment (Matthay and Reynolds, 2000). Retinoids are susceptible to oxidative metabolism, and the extensive metabolism of all-trans-retinoic acid (ATRA) in acute promyelocytic leukaemia (APL) may influence relapse rates (Muindi et al, 1992). Analysis of patient samples from a phase I study of 13-cisRA previously suggested increasing levels of the metabolite 4-oxo-13-cisRA during a course of treatment, although actual concentrations were not quantified (Khan et al, 1996). An additional concern with the administration of 13-cisRA is that many neuroblastoma patients are very young. Owing to the large size and number of 13-cisRA capsules required to obtain the specified dose, younger children are physically unable to take the drug unless the capsules are opened and the contents mixed with food before administration.
This practice raises concerns regarding the actual dose of drug that these patients receive, in addition to the possibility that the drug may be unstable during this procedure. The current study was designed to determine the pharmacokinetics of 13-cisRA and the extent of oxidative metabolism when administered to high-risk neuroblastoma patients using an intermittent dosing regimen. In addition, the absorption of 13-cisRA when administered after opening the capsules and mixing the contents with food was investigated.

Patient eligibility and treatment

The study protocol was approved by the UK Trent Multicentre Research Ethics Committee and written informed consent was obtained from patients or parents as appropriate. Patients less than 18 years of age with a central venous catheter, who were receiving 13-cisRA as part of their standard clinical treatment, were eligible. 13-cis-retinoic acid (80 mg m⁻² b.d.) was administered orally as part of a protocol for high-risk neuroblastoma (Matthay et al, 1999), starting between 80 and 120 days post-myeloablative and radiation therapy in all cases. Drug was administered for 14 days, with a 14-day break before the next course. Toxicity was assessed by the National Cancer Institute Common Toxicity Criteria (CTC version 2.0) and patients were followed clinically on a 6-monthly basis. In patients who were unable to swallow the capsules, each capsule was snipped with a pair of scissors and the contents extruded into ice cream or yoghourt before ingestion. Patients were not fasted before administration. On each day of the study, the administration of the studied dose of 13-cisRA was performed in hospital, supervised by a trained research nurse, and fully documented. The most recent ALT, bilirubin and creatinine measurements before 13-cisRA treatment were obtained from the patients' notes (mean 26 days before administration, maximum 63 days prior).

Blood sampling and analysis

Blood samples for measurement of concentrations of individual retinoids were obtained from a central line before administration and at 1, 2, 4 and 6 h post-administration. Samples were obtained on days 1 and 14 (first and last day) of treatment courses 2, 4 and 6, following administration of the first dose of 13-cisRA on the particular study day. When sampling on these courses was not possible, samples were obtained on days 1 and 14 of an alternative treatment course. Course 1 was not studied in most cases, to allow the families and patients to become familiar with the administration process. Blood samples (5 ml) were collected in heparinised tubes and centrifuged at 1200 g for 10 min at 4°C. Plasma was separated and frozen at −20°C before analysis using a high-performance liquid chromatography assay with a limit of quantitation of 0.02 μg ml⁻¹ for all retinoids. This analytical assay allowed individual quantification of 13-cisRA, 9-cisRA and ATRA, in addition to the metabolite 4-oxo-13-cisRA, as previously described (Veal et al, 2002). All blood and plasma samples were wrapped in aluminium foil to protect them from light, and all sample handling was carried out in dim light. The assay was validated with regard to linearity, reproducibility and stability of the analytes according to standard practice (Shah et al, 1992).
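The sparse 0–6 h sampling scheme above supports a simple non-compartmental exposure estimate. The sketch below computes a trapezoidal AUC(0–6 h) from such a profile; the concentration values are invented purely for illustration and are not patient data from this study.

```python
import numpy as np

# Hypothetical plasma 13-cisRA concentrations (uM) at the protocol's
# sampling times (pre-dose and 1, 2, 4, 6 h post-dose).
t = np.array([0.0, 1.0, 2.0, 4.0, 6.0])        # h
c = np.array([0.1, 1.2, 2.4, 1.6, 0.9])        # uM (illustrative only)

auc_0_6 = np.trapz(c, t)                       # linear trapezoidal rule
print(f"AUC(0-6h) = {auc_0_6:.2f} uM*h")
```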
Pharmacokinetics/statistical analysis

A population pharmacokinetic analysis was carried out using 13-cisRA plasma concentrations obtained on study day 1 of the first available course of treatment for each of 28 patients (course 1 for five patients, course 2 for 20 patients, and courses 3, 4 and 6 for one patient each). Peak and trough 13-cisRA levels from days 1 and 14 were determined, together with peak levels of the metabolite 4-oxo-13-cisRA on days 1 and 14; these were compared between treatment courses 2, 4 and 6.

Using NONMEM (Beal and Sheiner), a series of population pharmacokinetic models were fitted to the 13-cisRA data from the first available course of treatment, with all patients included in the analysis. A one-compartment model with first-order absorption (ADVAN2) was fitted using the FOCE estimation method with η–ε interaction. A composite within-subject error model was used, together with an exponential between-subject error model for each of the population PK parameters (CL, V, K_a). Models were fitted using first a diagonal, and then a full block, structure for the between-subject covariance matrix. An absorption time lag was added, which was also allowed to vary across the population. A zero-order absorption model with an absorption lag was also fitted (ADVAN1); both the absorption lag time and the absorption duration were allowed to vary across the population. Two further models were fitted to more closely model the absorption phase. The first was a 'transit model', which assumes that the absorption delay is due to the passage of drug through a chain of transit compartments (Savic et al, 2004). The number of transit compartments and the rate at which the drug moves between the compartments were estimated from the data. The model was parameterised in terms of CL, V, K_a, N (the number of transit compartments) and MTT (the mean transit time to the absorption compartment). The second model was a modified zero-order absorption model with an absorption lag time. This model assumes that the appearance of the drug in a dose compartment is described by a zero-order process over a fixed duration (D). Absorption into a central compartment was described by a first-order process with rate parameter K_a. This model was parameterised in terms of CL, V, K_a, LAG and D. Both of these models allowed all pharmacokinetic parameters to vary between individuals according to an exponential error model; both were fitted using ADVAN2.

The relationships between pharmacokinetic parameters (apparent clearance, AUC and apparent volume of distribution), peak concentrations of 13-cisRA and 4-oxo-13-cisRA on days 1 and 14 of treatment, and covariates were examined graphically, and either t-tests or linear regression were applied as appropriate. The pharmacokinetic parameter estimates used were the empirical Bayes estimates obtained from the modified zero-order absorption model, except for the peak concentrations. Covariates investigated were age, weight, ALT, bilirubin, creatinine, GFR, sex, method of administration and CTC grade 3/4 toxicity. The logarithm of all pharmacokinetic parameters and continuous covariates was used in preference to the untransformed variables, owing to the skewed nature of the data. Empirical Bayes estimates of pharmacokinetic parameters for courses 4 and 6, where data were available, were obtained from the final population model.
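Because the analysis works on log-transformed parameters, group comparisons naturally back-transform into geometric-mean fold changes with confidence intervals, as reported in the results below. The sketch shows that arithmetic with scipy; the two-group Welch t-interval is a plausible reading of "t-tests ... applied as appropriate", not a reconstruction of the exact analysis.

```python
import numpy as np
from scipy import stats

def fold_change_ci(x_swallowed, x_opened, alpha=0.05):
    """Geometric-mean fold change and CI from log-transformed values."""
    lx, ly = np.log(x_swallowed), np.log(x_opened)
    diff = lx.mean() - ly.mean()
    v1, v2 = lx.var(ddof=1) / len(lx), ly.var(ddof=1) / len(ly)
    se = np.sqrt(v1 + v2)
    # Welch-Satterthwaite degrees of freedom
    df = se**4 / (v1**2 / (len(lx) - 1) + v2**2 / (len(ly) - 1))
    tcrit = stats.t.ppf(1 - alpha / 2, df)
    return (np.exp(diff),
            np.exp(diff - tcrit * se),
            np.exp(diff + tcrit * se))
```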
The Cox proportional hazards model was used to investigate any relationship between pharmacokinetic end points, including peak levels of 13-cisRA and 4-oxo-13-cisRA on days 1 and 14 of the first available course of treatment, and time to relapse/progression of disease, relative to the first 13-cisRA administration.

Patient characteristics and treatment

Twenty-nine children were entered into the study. One patient had to withdraw from the study before samples were taken, owing to removal of their central venous catheter. The study population had a median age of 3.2 years (range 1.1–18.7) and included 16 male and 12 female patients. Patient characteristics for the 28 evaluable patients are given in Table 1.

Pharmacokinetics

The pharmacokinetics of 13-cisRA were best described by a modified one-compartment, zero-order absorption model combined with an absorption lag time (LAG), allowing a full covariance matrix for the random effects. The differential equations for the final model are:

dX₁(t)/dt = R(t) − K_a X₁(t), with R(t) = X₀/D for LAG ≤ t ≤ LAG + D and R(t) = 0 otherwise

dX₂(t)/dt = K_a X₁(t) − (CL/V) X₂(t)

where X₀ is the dose of 13-cisRA administered at time t = 0 and assumed to enter the system at time t = LAG; X₁(t) is the amount of drug in the dose compartment at time t; X₂(t) is the amount of drug in the observation compartment at time t; D is the duration of the zero-order input; K_a is the rate constant of the first-order absorption; CL is apparent clearance; and V is apparent volume of distribution. However, both the transit model and the first-order absorption model with lag time gave very similar parameter estimates and individual fits, despite having slightly larger objective function values (Table 2). Mean population pharmacokinetic parameters were: apparent clearance 15.9 l h⁻¹, apparent volume of distribution 85 l and absorption lag time 40 min. There was large inter-individual variability associated with these parameters. Clearance had an inter-individual CV of 69%, while both volume of distribution and K_a had CVs in excess of 100%. Correction of clearance or volume of distribution for body surface area did not reduce the CV (Table 3).
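The behaviour of this model can be visualised by integrating the two equations above with the population means just quoted (CL = 15.9 l h⁻¹, V = 85 l, LAG = 40 min). This is only a sketch: K_a, D and the dose are illustrative placeholders, since the corresponding individual estimates are reported in the paper's tables rather than in this excerpt.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Population means from the text (apparent CL and V, i.e. CL/F, V/F).
CL, V = 15.9, 85.0              # l/h, l
KA, LAG, D = 1.0, 40 / 60, 1.0  # 1/h, h, h (KA and D are illustrative)
DOSE = 120.0                    # mg, e.g. 80 mg/m^2 for a 1.5 m^2 patient

def rhs(t, x):
    x1, x2 = x
    r_in = DOSE / D if LAG <= t <= LAG + D else 0.0  # zero-order input
    return [r_in - KA * x1,                  # dose compartment X1
            KA * x1 - (CL / V) * x2]         # central compartment X2

sol = solve_ivp(rhs, (0, 12), [0.0, 0.0], max_step=0.01,
                t_eval=np.linspace(0, 12, 241))
conc = sol.y[1] / V                           # plasma concentration, mg/l
print(f"Cmax ~ {conc.max():.2f} mg/l at t = {sol.t[conc.argmax()]:.1f} h")
```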
There was a large degree of variation between courses in day 1 13-cisRA levels (Figure 1; Tables 3 and 4). Whereas some patients showed little variation between courses (e.g. patients 15 and 20), others exhibited large differences (e.g. patients 2 and 7). In contrast, variation in 13-cisRA levels between study days (1 and 14) of the same course, in particular course 2, was relatively small (Figure 2A). Owing to difficulties in obtaining stable parameter estimates, the inclusion of covariates in a population pharmacokinetic model was not undertaken. Instead, the relationship between pharmacokinetic parameter estimates and covariates was examined via plots and either t-tests or linear regression as appropriate. 13-cisRA AUC and day 14 peak 4-oxo-13-cisRA levels were both found to be related to weight, age and whether or not the capsule had been opened before administration. Higher weight and age were associated with larger AUCs and peak 4-oxo-13-cisRA levels (P<0.02 in all cases). Day 1 peak 13-cisRA concentrations were found to be linked to the method of administration (P<0.02). When 13-cisRA capsules were swallowed, the AUC was on average 1.95-fold larger (95% confidence interval (CI) 1.16, 3.28) than when capsules were opened; day 1 peak 13-cisRA levels were 2.44-fold higher (CI 1.23, 4.83) and day 14 peak 4-oxo-13-cisRA levels were 2.65-fold higher (CI 1.41, 5.00) when capsules were unopened.

Figure 3 shows the AUC values for 13-cisRA observed on day 1 of course 2 of treatment in patients who swallowed 13-cisRA capsules vs patients for whom the capsule contents were mixed with food. Larger absolute doses of 13-cisRA were associated with higher day 14 peak 4-oxo-13-cisRA levels (P = 0.01). A larger 13-cisRA apparent volume of distribution was associated with higher creatinine levels (P = 0.02), but none of the covariates was found to be related to the apparent clearance of 13-cisRA, peak 13-cisRA levels on day 14 or peak 4-oxo-13-cisRA levels on day 1.

Oxidative metabolism

Extensive accumulation of 4-oxo-13-cisRA occurred in all patients during each treatment course, with plasma concentrations higher than those of 13-cisRA on day 14 of course 2 of treatment in 16 out of the 23 patients for whom data were available (Figure 4; Table 4). In course 2, peak concentrations of 4-oxo-13-cisRA increased from 0.2–5.9 μM on day 1 to 0.7–11.6 μM on day 14 of treatment, as compared with 13-cisRA plasma concentrations of 0.3–5.5 μM on day 14. Similar increases in the level of metabolite over the 14 days of treatment were observed in subsequent courses (Figure 4). No other retinoic acid metabolites were detected in the plasma samples of patients receiving 13-cisRA, and concentrations of ATRA accounted for less than 5% of total retinoids in all samples.

Clinical response and toxicity

Of the 28 evaluable patients, 13 (46%) were alive with no disease at follow-up. Time to follow-up ranged from 18 months to 4 years. Of the remaining patients, three (11%) were alive with disease progression and 12 (43%) had died following disease relapse. There was a greater likelihood of relapse for patients with higher day 14 peak 4-oxo-13-cisRA plasma concentrations (P = 0.014; Cox regression analysis). No other pharmacokinetic parameters were related to time to relapse or survival. Treatment was reasonably well tolerated, although several patients had persistent haematological toxicity following the previous myeloablative therapy. Nine patients experienced some form of mild skin toxicity (eight CTC grade 2, one grade 3), with only one report of mild cheilitis. Hypercalcaemia (grade 2 or 3) was reported in two patients. There was no evidence to suggest that any of the toxicities observed were linked to the pharmacokinetics of 13-cisRA or its metabolite.

DISCUSSION

The addition of 13-cisRA to the treatment of high-risk neuroblastoma has been shown to improve 3-year event-free survival rates. Nevertheless, despite the addition of 13-cisRA to high-dose myeloablative chemotherapy treatment protocols, more than half of patients still suffer relapse within 3 years. While in some patients relapse is largely determined by the biology of the tumour, the high degree of variability in the pharmacokinetics and metabolism of 13-cisRA observed in this study suggests that there is scope for further improvement based on individualisation of dosing or schedules.

A series of population pharmacokinetic models were fitted to 13-cisRA plasma concentration–time data from 28 children with high-risk neuroblastoma. Following detailed analysis of these data, the pharmacokinetics of 13-cisRA were best described by a modified one-compartment, zero-order absorption model combined with an absorption lag time. Some patients had almost no absorption lag time followed by very rapid absorption, whereas in other patients a prolonged lag time was observed, together with very slow absorption.
Conversely, a very delayed but rapid rise in plasma concentrations was seen in some patients. Thus, a relatively complex model with an absorption lag time was required for all of the standard models. However, the simple first- and zero-order models did not fit particularly well when this approach was taken. Use of the modified zero-order model or the transit model provided a much better fit to the data.

The dosing regimen of 80 mg m⁻² twice daily for 14 days was associated with significant inter-patient variation in 13-cisRA pharmacokinetics, comparable with that observed following ATRA administration for the treatment of APL (Lanvers et al, 2003). For 13-cisRA, the pharmacokinetics may be influenced by a number of factors, including both the method of drug administration and the extent of metabolism. Children diagnosed with high-risk neuroblastoma are commonly aged between 1 and 5 years, presenting a practical problem with regard to administering large numbers of 13-cisRA capsules. For example, an average child of 5 years with a surface area of 0.75 m² would require a daily dose of 120 mg, administered as six large 20 mg capsules or twenty-four 5 mg capsules. Younger children are therefore often unable to take 13-cisRA unless it is removed from the capsules before administration. In addition to the safety concerns that may arise from this practice, with regard to the handling of a potentially teratogenic substance, opening the capsule may affect the actual dose of drug received. This may reflect loss of drug during handling, in addition to the well-documented instability of 13-cisRA in the presence of light. Of the 28 patients recruited to the current study, 17 (aged 1.1–10.7 years) required the capsules to be opened and the contents mixed with food. Plasma concentrations of 13-cisRA in these patients were significantly lower than those observed in children who were able to swallow the capsules, although both groups exhibited a wide variation in plasma concentrations. No attempt was made to control food intake around the time of administration. A previous study reported higher plasma concentrations when 13-cisRA was administered within 1 h before or after a standard meal, compared with the fasted state (Colburn et al, 1983). Application of the data presented here to guide dosing of 13-cisRA would also need to be adapted to reflect differences in the administration of 13-cisRA in different countries, for example punching a hole in the capsule so that it can be chewed before swallowing. The younger children, for whom the capsules were most likely to have been opened, also received the lowest absolute doses in milligrams.

Following oral administration, 13-cisRA may be subject to first-pass metabolism, and subsequent plasma concentrations will depend on the rate and extent of metabolism to 4-oxo-13-cisRA, generally thought to represent a pathway of retinoid inactivation. Oxidative metabolism has previously been observed in studies with ATRA (Muindi et al, 1992; Rigas et al, 1993) and fenretinide (Villani et al, 2004), and auto-induction of oxidative metabolism has been associated with disease relapse following chronic administration of ATRA in APL (Muindi et al, 1992). This can be avoided by the use of intermittent ATRA dosing schedules (Adamson et al, 1995), an approach that has also been recommended to minimise the side effects of ATRA in children (Lanvers et al, 2003).
Our results with 13-cisRA show an accumulation of the 4-oxo metabolite between days 1 and 14 of treatment in all patients, with metabolite concentrations higher than those of the parent drug by day 14 in approximately 70% of patients. There was no corresponding decrease in parent drug concentrations, which would have indicated enzyme induction. Thus, repeated dosing of 13-cisRA is unlikely to be associated with cytochrome P450 (CYP) enzyme induction, as is seen with ATRA (Adamson et al, 1993).

With the limited size of the current study, it is difficult to obtain a clear indication of the impact of inter-patient pharmacokinetic variation and metabolism of 13-cisRA on clinical efficacy or toxicity. The degree of toxicity was similar to that reported in the phase I study of 13-cisRA at the equivalent dose level (Villablanca et al, 1995). A statistically significant relationship was observed between the incidence of disease relapse and plasma metabolite concentrations, with an increased likelihood of relapse for patients with higher day 14 peak 4-oxo-13-cisRA levels. A corresponding decrease in parent drug was not detectable. It should be noted that older patients, who may have a higher risk of relapse, would receive higher doses of unopened capsules. As lower doses of retinoic acid have been shown to be ineffective in the treatment of neuroblastoma (Matthay et al, 1999; Kohler et al, 2000), the extent of inter-patient pharmacokinetic variation following high-dose 13-cisRA therapy warrants further investigation. Individuals with plasma concentrations below those associated with antitumour activity may require an increased dose. One approach to overcoming inter-individual variation, and to ensuring that plasma concentrations are not compromised by the method of administration, would be to monitor plasma concentrations on day 1 of treatment. The data in Figure 2 indicate that plasma concentrations do not vary substantially between days of treatment; dose adjustment based on day 1 plasma concentrations is therefore feasible.

It is notable that the plasma concentrations achieved in the current study are up to four-fold lower than those reported in a phase I study of the same formulation (Khan et al, 1996). The reasons for this difference are not easily identified. The age range, and thus the doses administered, in the phase I study population were not different from those studied here. It may be that the method of administration differed in the previous study, although the difference in AUC is equally marked in our patients who swallowed the capsules whole. There are differences in the assays for drug and metabolite in plasma, but our method was stringently validated and rigorous efforts were employed to minimise loss of drug due to photodegradation. Data on ATRA and the 4-oxo metabolite are comparable between the two studies, although little detail is given in the phase I report (Khan et al, 1996). The conversion to 4-oxo-13-cisRA is the major route of 13-cisRA metabolism. Although there was no indication that this led to significantly lower parent retinoid levels during treatment, higher 13-cisRA levels may be achieved by limiting the extent of metabolism.

Figure 4. 4-oxo-13-cisRA data for course 2 for all patients, shown by study day; separate graphs are shown by method of administration. Patient 26 received 13-cisRA via an NG tube.
Although oxidation of 13-cisRA may be mediated by a number of enzymes (Sonneveld et al, 1998; Nadin and Murray, 1999; Chen et al, 2000), specific inhibitors of retinoid metabolism, such as R116010, may be used to inhibit such oxidation (Armstrong et al, 2005) and may thus increase 13-cisRA plasma concentrations. These data also indicate that lower systemic 13-cisRA exposures are associated with the practice of opening the capsules and mixing the contents with food before administration. This finding highlights the importance of obtaining appropriate pharmaceutical formulations of medicines for children. Although a simple increase in 13-cisRA dose is likely to overcome this problem, it may be more appropriate to design a study incorporating the monitoring of plasma concentrations, with a view to standardising drug exposure and developing more robust pharmacokinetic–pharmacodynamic relationships. The data presented here show that optimisation of dosing of 13-cisRA in high-risk neuroblastoma is possible, based on knowledge of the clinical pharmacology of this drug.
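As a final illustration of how such day-1 monitoring could translate into dose individualisation, the sketch below applies a simple proportional adjustment. It assumes roughly dose-proportional (linear) pharmacokinetics, and the exposure target is entirely hypothetical: no target AUC is defined in this study.

```python
def adjusted_dose(current_dose_mg_m2, observed_auc, target_auc):
    """Proportional dose adjustment assuming linear pharmacokinetics.

    target_auc is a hypothetical exposure target; none is defined in
    this study, so the function is purely illustrative.
    """
    return current_dose_mg_m2 * target_auc / observed_auc

# e.g. a child whose day-1 AUC is half a notional target exposure:
print(adjusted_dose(80.0, observed_auc=5.0, target_auc=10.0))  # -> 160.0
```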
Laser-Directed Energy Deposition of Dissimilar Maraging Steels with a Defect-Free Interface: Design for Improved Surface Hardness and Fracture Toughness

Maraging steels are a class of low-carbon ultra-high-strength martensitic steels. Due to their excellent weldability, these steels have been widely applied in laser-based additive manufacturing (AM). MAR-60HRC is a newly developed maraging grade for AM with a nominal chemical composition of 13.0Ni-15.0Co-10.0Mo-0.2Ti, Fe bal. (wt%), capable of achieving hardness levels of ~740 HV. Alternatively, 18Ni300 is a commercialized maraging steel with an excellent combination of strength and toughness at peak aged hardness (i.e., ~590 HV). This work aims to investigate the properties and microstructure of MAR-60HRC fabricated by laser directed energy deposition (L-DED). Further, the manufacturability of bimetallic parts comprising a tough 18Ni300 core and a hard MAR-60HRC surface layer was evaluated. After proper aging, the multi-layered material showed a surface hardness of ~720 HV1 and an apparent fracture toughness of 71 MPa m^1/2, higher than that of MAR-60HRC (i.e., 60 MPa m^1/2) and lower than that of 18Ni300 (i.e., 90 MPa m^1/2). The excellent combination of surface hardness and fracture toughness is discussed, considering crack arrest at the interface and the flaw-free interface between the two steels.

Introduction

Additive manufacturing (AM) technologies have received ever-increasing attention in recent years due to their unique capability to produce complex geometries quickly and sustainably for a variety of materials [1–3]. Laser-directed energy deposition (L-DED) is one of the most practiced AM technologies for manufacturing metallic components, owing to its distinctive capacity for developing functionally graded materials and cladding. Moreover, compared to traditional repair technologies such as arc welding, TIG welding, and plasma welding, L-DED shows several advantages, such as relatively low heat input, less distortion, a low dilution rate, and relatively high geometrical accuracy, which make it a suitable option for repair and restoration as well [4]. L-DED has found application mainly in the fields of wear protection, repair, and maintenance of high-value parts, such as tools, molds, and aerospace components made from Fe-, Ti-, and Ni-based alloys [5].

Maraging steels are a class of high-performance, low-carbon martensitic steels that combine ultra-high strength (UTS > ~1800 MPa) and high toughness [6]. The relatively soft and tough Fe-Ni martensite can be age-hardened by the precipitation of nano-sized intermetallic particles containing substitutional alloying elements (e.g., Mo, Ti, Al) through a simple heat treatment. The addition of Co in this alloy system raises the martensite start (M_s) temperature, enabling the addition of more substitutional alloying elements (e.g., Mo) without the risk of stabilizing retained austenite [7]. Moreover, Co increases the supersaturation of Mo in the matrix, providing an increased amount of this element taking part in precipitation. The combination of Co with a specific weight percentage of Mo, as in the Mo-containing quaternary alloys (Fe-18Ni-Co-Mo), increases the strength by up to 500 MPa [7, 8]. Ti is another alloying element contributing to age hardening at low weight percentages (i.e., 0.2 to ~2.0 wt%) [9]. Depending on the alloying elements, the main intermetallic strengthening compounds in these systems are orthorhombic Ni₃Mo, hexagonal Ni₃(Ti, Al),
rhombohedral Fe₇Mo₆ μ phase, hexagonal Fe₂Mo Laves phase, hexagonal ω phase, Ti₆Si₇Ni₁₆ G phase, and A₈B hexagonal "S phase" [9–13]. In recent years, several Fe-18Ni-Co-Mo alloys have been developed on industrial and research scales [14, 15], and Ni-Co-Mo compositions with lower Ni content, as well as Co-free classes, have been assessed [16–21]. Due to the high Ni content and the absence of C (less than 0.03 wt%), hardenability is not an issue in these alloys, and hence the cooling rate after solution annealing treatments is not critical [8]. In the solution annealed condition the alloys, depending on the type and content of alloying elements, show hardness of the order of 30 HRC. Parts can thus be readily machined. Additional hardening can be achieved by aging at the recommended temperatures and times [22]. Dimensional changes after aging treatments are generally negligible, enabling finishing operations to be carried out on the parts before age hardening. They also show good polishability [8, 14]. More importantly, their excellent toughness at high strength levels makes this steel class an ideal candidate for applications requiring high strength-to-weight ratios [14, 23]. Maraging steels are therefore considered viable materials for tooling applications such as injection molds, extrusion tools, die-casting dies, core pins, and cores, as well as for structural applications [14]. The fracture toughness versus hardness of some of the maraging steels available in the literature is presented in Fig. 1. As expected, a drop in fracture toughness with increasing hardness is evident.

Due to their excellent weldability, even without preheating, and negligible dimensional changes after age hardening, maraging steels have been exploited commercially in near-net-shape AM processes such as laser powder bed fusion (L-PBF) and directed energy deposition (DED) [28]. On an industrial scale, parts in 18Ni300 (18.0Ni-9.0Co-4.5Mo-0.7Ti-0.1Al, Fe bal. (wt%)) with complex geometries and enhanced mechanical properties have already been commercialized [29]. This steel shows a peak hardness of ~600 HV and a tensile strength of ~2000 MPa, with good ductility and toughness. However, in certain applications, a further increase in hardness aimed at increasing the wear resistance of 18Ni300 has been considered a crucial requirement. This led to the development of Osprey® MAR-60HRC (hereafter, MAR-60HRC), showing a maximum achievable hardness of ~61 HRC and an ultimate tensile strength of ~2600 MPa [30]. The higher Co and Mo content of this alloy gives rise to a higher volume fraction and enhanced precipitation kinetics of Ni₃Mo and the µ phase, leading to higher hardness after aging.
Other than alloy modification, the hardness can be increased by the addition of hard ceramic particles to fabricate AM 18Ni300 metal matrix composites. However, it is reported that substantial additions of reinforcing particles result in collision and agglomeration of un-melted/un-dissolved ceramic particles, with much higher melting points than that of the matrix, at the sides of the melt pools [31, 32]. The stabilization of retained austenite by carbon-containing ceramic particles such as TiC and WC is also documented, and interfacial reactions between the matrix and the reinforcement, leading to embrittlement of the interface, have been witnessed in some studies [31, 33, 34]. More importantly, mixing ceramic particles with steel powders introduces an additional step into the process chain, with the risk of powder contamination. In other studies, plasma nitriding has been applied to increase the surface hardness of 18Ni300 [35–37]. The process has been assessed for increasing the surface hardness of AM-18Ni300 both in the direct aging condition and after solution annealing and aging. The surface hardness is reported to reach up to 1000 HV0.01, declining to ~700 HV0.01 within 30 µm of the surface and gradually dropping to the value of the base material at a depth of ~100 µm. The presence of some connected pores and cracks in the compound layer of the nitrided zone and the formation of discrete TiN particles have been reported for the nitriding of AM-18Ni300 [37]. An alternative approach to realizing parts with different properties at the surface and in the core is to produce bimetals. To date, few studies have investigated the fabrication of bimetals of maraging steels [38, 39], and these studies are limited to selective laser melting (SLM) of maraging steels on a tool steel baseplate. The different heat treatment responses of these materials might be a limiting factor in some applications.

Nearly all engineering structural materials must meet both strength and toughness requirements; however, most materials cannot exhibit both of these qualities simultaneously [40]. A solution for promoting toughness in hard materials is to exploit extrinsic toughening mechanisms, e.g., crack deflection, zone shielding (transformation toughening, crack-tip dislocation shielding, residual stress fields), or contact shielding (bridging, wedging, sliding) [41]. The peculiar micro/mesostructure of additively manufactured parts can be exploited in this regard. Suryawanshi et al. [42] demonstrated that the heterogeneous microstructure of Al-12Si parts produced by SLM resulted in higher fracture toughness due to crack deflection (a tortuous crack path). Huang et al. [43] showed that in in-situ alloyed Ti41Nb, alternating Nb-poor regions (NPRs) at melt-pool boundaries resulted in a layered composite mesostructure that favored extrinsic toughening through crack deflection.

In their pioneering work, Kürnsteiner et al. utilized intrinsic heat treatment (IHT), tuning the thermal history through the application of proper interlayer dwell times, to obtain an alternating mesostructure of hard and soft layers in a Fe19Ni5Ti steel produced by L-DED. They demonstrated a marginal improvement in impact toughness for this steel at cryogenic and room temperatures at a higher tensile strength compared with the uniform microstructure, and a notable improvement at 200 °C [44].
In the current work, we studied the heat treatment and microstructure of MAR-60HRC fabricated by L-DED; further, we report the surface engineering of an L-DED 18Ni300 substrate by deposition of MAR-60HRC using L-DED. The fully compatible chemical compositions of these two alloys (i.e., both comprise Ni, Co, Mo, and Ti as alloying elements) should give the advantage of forming a clean interface with strong metallurgical bonding. The thickness of the harder MAR-60HRC can be freely increased to raise the load-bearing capability of the component. The proposed design should benefit both from the high hardness of MAR-60HRC at the surface and from the excellent toughness of the 18Ni300 core, leading to a damage-tolerant material with extremely high surface hardness, which can be particularly beneficial for tooling applications by improving wear resistance.

Materials and Methods

Gas-atomized Osprey® MAR-60HRC and 18Ni300 maraging steel powders with a particle size distribution of −105 µm/+50 µm were used for the depositions. The nominal compositions of the powders are presented in Table 1.

Deposition of samples was carried out using a LASERTEC 65 3D hybrid machine (DMG MORI AG) with a 2500 W diode laser (λ = 1020 nm) and a Coax 14 nozzle. The laser has a top-hat beam profile with a spot diameter of 3 mm at a focal length of 13 mm. After a series of process optimization tests, samples were deposited with the processing parameters listed in Table 2. A meander scanning strategy with 90° rotation between successive layers was applied. Argon was used as the carrier and shielding gas, with flow rates of 5 and 5.5 l/min, respectively. Samples with a relative density of 99.5%, measured using the Archimedes method, were obtained.

Aging heat treatments were performed using an L75 Platinum LINSEIS dilatometer in an argon atmosphere. Samples were age-hardened at different temperatures and times to obtain the aging curves. All aging treatments were carried out by direct aging of the as-built specimens.

Cuboid samples (45 mm × 10 mm × 35 mm) were deposited using the parameters described above (Table 2). Three single-edge notched bend (SENB) specimens (6 mm × 3 mm × 30 mm) were cut out of the cuboids for each material using wire electro-discharge machining (EDM) (Fig. 2).
Aging heat treatments were performed using an L75 Platinum LINSEIS dilatometer in an argon atmosphere. Samples were age-hardened at different temperatures and times to obtain the aging curves. All aging treatments were carried out by direct aging of the as-built specimens.

Cuboid samples (45 mm × 10 mm × 35 mm) were deposited using the parameters described above (Table 2). Three single-edge notched bend (SENB) specimens (6 mm × 3 mm × 30 mm) were cut out of the cuboids for each material using wire electro-discharge machining (EDM) (Fig. 2). Finally, a notch (a/W = 0.5, ρ = 90 μm) was introduced in the samples by EDM. Following the results obtained from the aging curves, samples were heat-treated directly from the as-built condition at 480 °C for 6 h prior to the experiments. Plane strain fracture toughness testing was performed using 3-point bending tests under stroke control at a rate of 0.5 mm·min−1, using a 1343 Instron machine equipped with a 5 kN load cell. In plane strain testing conditions, the stress concentration factor decreases with increasing root radius [45], and a higher applied stress is thus needed to reach the "critical stress intensity." This leads to a slightly higher apparent fracture toughness (K app) compared with the plane strain fracture toughness (K IC) obtained with a fatigue pre-cracked specimen (i.e., ρ → 0). It should be noted that the fracture toughness data obtained in this study should not be used directly in the design of structural elements; they are intended only for comparison between different specimens within this study, which have all undergone identical EDM processes. Despite the large notch radius, the K app values do not appear to vary significantly below a critical radius (ρ = 100 μm) [45]; therefore, the K app values are still useful for comparative analysis.

In the bimetallic samples, the notch was realized in the harder material (i.e., MAR-60HRC), as shown in Fig. 2. A distance (l) of ~800 μm between the interface and the notch root was considered. In this condition, the plastic zone radius (r y) [46] ahead of the notch, conservatively calculated through Eq. 1 considering the critical value of stress intensity (K app) for MAR-60HRC (i.e., 60 MPa m1/2, obtained using a crack tip radius of 90 µm) and the yield strength of the softer material (σ y,18Ni300 ~ 1950 MPa [47]), is 500 μm, thus smaller than l. It is important to remark that the K app numbers in this work cannot be used as a measure of the K IC of the material. However, because the apparent fracture toughness obtained with the current test method is larger than the standard K IC (i.e., 30 MPa m1/2) reported for wrought counterparts with almost similar chemical composition and aged hardness [19,48], the calculated plastic zone radius is overestimated by around four times. This overestimation plausibly guarantees that the plastic zone near the notch tip (~500 µm) is still within the MAR-60HRC region, far enough from the interface (i.e., 800 µm) [46]. As a result of reducing the complexity in the plastic zone by excluding the effect of 18Ni300, the fracture toughness of the bimetal, and in particular the direct influence of the interface and its interaction with the unstable crack propagation from MAR-60HRC towards 18Ni300, can be analyzed.
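Eq. 1 itself is not reproduced in the extracted text, but any Irwin-type estimate scales as r y ∝ (K/σ y)², so the factor-of-four overestimation quoted above, (60/30)² = 4, is independent of the prefactor. The sketch below uses the plane-stress Irwin prefactor 1/(2π) as an explicit assumption; note that this gives ~150 µm rather than the ~500 µm quoted, so the paper's Eq. 1 evidently uses a more conservative prefactor, while the scaling check is unaffected:

```python
import math

def plastic_zone_radius(K_MPa_sqrt_m: float, sigma_y_MPa: float,
                        prefactor: float = 1.0 / (2.0 * math.pi)) -> float:
    """First-order plastic zone radius in metres. The default prefactor is the
    plane-stress Irwin estimate and is an assumption (the paper's Eq. 1 may differ)."""
    return prefactor * (K_MPa_sqrt_m / sigma_y_MPa) ** 2  # (MPa*m^0.5 / MPa)^2 -> m

K_app, K_IC, sigma_y = 60.0, 30.0, 1950.0   # values quoted in the text
r_app = plastic_zone_radius(K_app, sigma_y)
r_ic = plastic_zone_radius(K_IC, sigma_y)
print(f"r_y(K_app) ~ {r_app * 1e6:.0f} um (prefactor-dependent)")
print(f"overestimation factor = {r_app / r_ic:.1f}  # (60/30)^2 = 4, as stated")
```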
Metallographic cross-sections were prepared by grinding up to 1200 grit and subsequent polishing with 3 μm and 1 μm diamond pastes, followed by a final oxide polishing. Microstructural characterizations were performed on metallographic cross-sections chemically etched with Vilella's reagent, using both a light optical microscope (LOM) (Zeiss Axiophot) and scanning electron microscopy (SEM) (Jeol JSM-IT300LV). SEM was also employed to characterize the fracture surfaces. Electron backscattered diffraction (EBSD) combined with energy dispersive X-ray spectroscopy (EDS) elemental mapping was carried out using a Symmetry EBSD detector on a field emission gun scanning electron microscope (FE-SEM, Zeiss Sigma, Germany) on oxide-polished cross-sections to investigate the phase constitution and for semi-quantitative elemental analysis, respectively. XRD analysis on age-hardened samples was carried out using an Italstructures (IPD3000/CPS120) instrument equipped with a Co Kα source of 2000 W (λ = 0.17889 nm). The angular step size and integration time were 0.03° and 4 s, respectively. XRD spectra were then analyzed using the MAUD software [49] to quantify the phases. Hardness (HV1) measurements were performed according to ASTM E92-17.

Microstructure and Hardness of L-DED MAR-60HRC
The microstructure of the deposited as-built samples comprises fine cellular/dendritic features (Fig. 3a). EBSD band contrast and phase maps revealed the presence of retained austenite (RA) located in the intercellular regions (yellow arrows in Fig. 3b and c). The corresponding EDS maps evidenced a higher concentration of Mo and Ni and a lower Fe content in the intercellular areas (Fig. 3d-f). The driving force for dendrite/cell nucleation and growth is provided by thermal undercooling. The microsegregation of alloying elements to the cell/dendrite boundaries occurs due to solute rejection during solidification; oxide inclusions, constrained by the primary austenite crystals, remain fine in size [51].

The aging curves (Fig. 4a) of as-built MAR-60HRC, with a starting hardness of 370 HV1, show that the steel (dilatometric samples) can be age-hardened within ~10 min to around 575 HV1 by aging at 480 °C. Holding for longer times (~480 min) leads to a maximum hardness of ~730 HV1. After the peak hardness is achieved (~480 min), a slight drop in hardness upon longer holding is evident. By aging at 525 °C, the hardness increases to 625 HV1 in merely 2 min. Continuing the aging for 30 min results in the maximum achievable hardness (680 HV1) at this aging temperature. Longer times cause softening due to over-aging. At 600 °C, the maximum hardness is obtained after 2 min of holding (i.e., 650 HV1), and longer holding times lead to over-aging. For comparison, the aging curves of 18Ni300 are reported (Fig. 4b). The as-built hardness is around 330 HV1, slightly lower than that of MAR-60HRC. The aging response is very similar to that of MAR-60HRC, while the achievable hardness is almost 180 HV1 lower than that of MAR-60HRC undergoing identical heat treatment schedules.

The yield strength (σ y) and hardness of maraging steel in the as-built condition can be modeled by Eq. 2 [52]:

σ y = σ 0 + σ martensite + σ ss (2)

where σ 0 is the lattice friction stress, σ martensite is the contribution of the martensite sub-structure (e.g., block and lath size and dislocation density), and σ ss represents the contribution of solid solution hardening (Eq. 3).
The critical resolved shear stress due to the presence of substitutional solute atoms, σ ss, is a function of the atomic fraction of each substitutional element (x i,α′) and its strengthening constant (B i), given the lattice and modulus misfit compared with Fe (Eq. 3). It is assumed that the softer intercellular RA does not negatively influence the overall strength of the as-built material.

In aged conditions, intermetallic particle strengthening (σ p) emerges, which can be modeled by Orowan strengthening (Eq. 4). This term can be added linearly to Eq. 2; in Eq. 4, μ is the shear modulus, b is the Burgers vector, f p is the volume fraction of each intermetallic precipitate, and r p is their average radius. It should be noted that after aging, the solid solution hardening effect of the elements contributing to the formation of the precipitates (i.e., Ni, Ti, and Mo) decreases with increasing volume fraction of the precipitates.

Given the above description, and especially the Orowan strengthening mechanism, up to the peak aging condition the strength increases with an increase in the number density of nano-sized precipitates, which is correlated with the vol% and size of the precipitates. Over-aging leads to a drop in hardness and strength because of precipitate coarsening (i.e., lowered number density) and the formation of reverted austenite. Higher aging temperatures and longer times contribute to enhanced coarsening and a larger volume fraction of reverted austenite due to the enhanced diffusivity. It has been clearly reported that an increase in Ni content from 11 to 23 wt% does not dramatically change the dislocation density and strength of Fe-Ni lath martensite [53]. Therefore, from the strength point of view, the key difference between the MAR-60HRC and 18Ni300 chemistries is their Co, Mo, and Ti content. Given the high supersaturation of alloying elements within the Fe-Ni martensite in AM-processed maraging steels, the higher as-built hardness of MAR-60HRC should be discussed in view of a larger substitutional solid solution hardening effect. According to Fleischer [54], the strengthening results from elastic interactions of solute atoms with screw dislocations, governed by the difference in effective modulus and size of the substituted atom. Ti and Mo show the highest solid solution strengthening effect [52], while Co (having a nearly similar atomic size to iron) has the lowest contribution. The substantial increase in Mo causes more extensive solid solution strengthening in MAR-60HRC, leading to the increased as-built hardness. Consequently, in the aged condition, the higher Mo wt%, in combination with the high Co content, gives rise to a higher vol% of Ni3Mo and possibly μ-phase precipitates, thus resulting in a higher maximum post-age hardness (61 HRC) compared with 18Ni300 (55 HRC), as discussed in previous works of the authors [30].
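Eq. 4 itself is not reproduced in the extracted text; the sketch below uses a common Orowan-Ashby form in which the strengthening increment grows with the precipitate volume fraction f p and falls with the mean radius r p. The shear modulus, Burgers vector, Poisson's ratio, prefactor, spacing expression, and the f p/r p values are all illustrative assumptions, not quantities taken from or fitted in this work:

```python
import math

MU = 71e9    # Pa, shear modulus of the martensitic matrix (assumed)
B = 0.25e-9  # m, Burgers vector (assumed)
NU = 0.3     # Poisson's ratio (assumed)

def orowan_stress(f_p: float, r_p: float) -> float:
    """Orowan-Ashby strengthening increment in Pa for one precipitate family.
    lam is an edge-to-edge inter-precipitate spacing estimated from f_p and r_p."""
    lam = 2.0 * r_p * (math.sqrt(math.pi / (4.0 * f_p)) - 1.0)
    return 0.4 * MU * B / (math.pi * lam) * math.log(2.0 * r_p / B) / math.sqrt(1.0 - NU)

# Illustrative comparison: peak-aged (fine) vs over-aged (coarsened) precipitates.
for label, f_p, r_p in [("peak-aged", 0.04, 5e-9), ("over-aged", 0.04, 15e-9)]:
    print(f"{label:9s}: sigma_p ~ {orowan_stress(f_p, r_p) / 1e6:.0f} MPa")
```

With these placeholder inputs the increment falls by roughly a factor of two when the mean radius triples at constant volume fraction, which is the coarsening-driven softening trend described above.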
Figure 5a and b present the microstructure of samples after aging for 5 h at 480 and 525 °C, respectively. The inset in Fig. 5a shows the uniform distribution of nano-sized intermetallic particles within the martensitic matrix, which are responsible for the increase in hardness and strength. It can be clearly observed that the thickness of the cell boundaries, which appear brighter in the backscattered electron (BED) micrographs, is increased in the sample aged at the higher temperature. This is an indication of an increased reverted austenite content upon aging at higher temperatures. In agreement with previous works [6,55], the RA at the cell boundaries can serve as a preferential site for austenite reversion during aging. This is because, from an energetic and crystallographic point of view, it is favorable for reverted austenite to form Ni-enriched shells around the existing retained austenite. In general, since austenite is an equilibrium phase in the aging temperature interval for most Fe-Ni maraging alloys, austenite reversion by diffusion during aging is expected. Austenite reversion is accelerated by increasing the aging temperature due to the enhanced diffusivity, leading to local enrichment of Ni within the martensitic matrix upon dissolution of the Ni-rich intermetallic precipitates and formation of Fe-Mo-rich precipitates [23]. The XRD results (Fig. 5c) for the as-built samples and those aged at 480 °C and 525 °C show ~5, 8, and 20 vol% austenite, respectively. The austenite reversion kinetics at the lower aging temperature for a holding time of 5 h were not significant, and the austenite content did not change drastically compared with the as-built material. Aging at the higher temperature gave rise to the reversion of a significant amount of austenite, in line with the microstructural analysis (Fig. 5a and b) and the hardness results (Fig. 4a). The austenite peaks in the XRD measurements, especially for the samples aged at 480 °C (see the (200) peak intensity) and 525 °C (see the (111) peak intensity), indicate the existence of a texture. The texture of the RA in laser-processed steels is a result of the epitaxial growth of the parent austenite at high temperature along the build direction during the deposition (see Fig. 3). The reverted austenite preferentially grows on the existing RA and therefore preserves the same texture [56].
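For orientation, the classical direct-comparison method estimates a two-phase fraction from one integrated peak per phase; a minimal sketch is given below. The intensities and R-factors are invented placeholders (R depends on structure factor, multiplicity, and geometry), and this is deliberately simpler than the MAUD Rietveld analysis actually used here:

```python
def austenite_fraction(I_gamma: float, I_alpha: float,
                       R_gamma: float, R_alpha: float) -> float:
    """Direct-comparison estimate of the austenite volume fraction from one
    austenite and one martensite/ferrite integrated peak intensity."""
    g = I_gamma / R_gamma
    a = I_alpha / R_alpha
    return g / (g + a)

# Hypothetical integrated intensities for gamma(111) and alpha'(110) peaks,
# with illustrative R-factors (material/geometry dependent; NOT from this work).
I_g111, I_a110 = 12.0, 155.0
R_g111, R_a110 = 45.0, 110.0
print(f"V_gamma ~ {100 * austenite_fraction(I_g111, I_a110, R_g111, R_a110):.0f} vol%")
```

A full Rietveld refinement additionally accounts for texture and overlapping peaks; since the text notes a strong RA texture, a two-peak estimate of this kind should be treated as indicative only.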
Interface Characteristics
Epitaxial dendrite growth can be observed in the first layer of MAR-60HRC deposited on the 18Ni300, implying a defect-free and compatible interface between the two materials due to their similar composition (Fig. 6a). The change in chemical composition, starting from the 18Ni300 core and passing through the interface towards MAR-60HRC at the top surface, is plotted using EDS analysis (see Fig. 6b and Table 3). The plot shows a drop in Ni and Ti to ~15.5 and 0.6 wt%, respectively, in the first deposited layer of MAR-60HRC at the interface, while Mo and Co increase significantly to around 6.4 and 12.0 wt%, respectively (zone A1, Fig. 6b and Table 3). Approaching the second layer of MAR-60HRC, the Ni content further decreases to ~14.0 wt% and Ti to ~0.5 wt%, while Mo and Co increase to 7.3 and 14.0 wt%, respectively (zone A2, Fig. 6b and Table 3). Within the third layer of the deposited MAR-60HRC, the concentration profiles become relatively flat, and the chemical composition detected by EDS approaches the nominal MAR-60HRC composition (zone A3, Fig. 6b and Table 3). For comparison, the theoretical chemical composition in the case of mixing the two powders at equal wt% (i.e., 50/50) is shown in Table 3. The results indicate that a mixture of 18Ni300 and MAR-60HRC characterizes the vicinity of the interface, showing an equivalent chemical composition in between the starting materials, which can be roughly said to comprise ~50 wt% 18Ni300, MAR-60HRC bal. With increasing distance from the interface, the 18Ni300 content is progressively diluted out and the composition approaches that of pure MAR-60HRC. This is achieved roughly from the top surface of the third deposited layer of MAR-60HRC onwards.

With reference to a previous work of the authors on the simulation of the thermal histories of the deposited layers using the current processing parameters [30], it is plausible that upon deposition of the first layer of MAR-60HRC, the last deposited layer of 18Ni300 partially remelts, and mixing of these materials takes place in the liquid state due to the Marangoni effect. This should lead to a composition roughly containing 50 wt% 18Ni300 and 50 wt% MAR-60HRC in the vicinity of the interface (i.e., mixed layer 1, corresponding to regions in the first deposited layer of MAR-60HRC). By depositing the second layer of MAR-60HRC, the solidified layer (mixed layer 1) is partially remelted and mixed with the MAR-60HRC, and the equivalent solidified composition may become roughly 25 wt% 18Ni300 and 75 wt% MAR-60HRC (i.e., mixed layer 2). The successive depositions probably result in a composition so highly diluted in 18Ni300 that it can be considered pure MAR-60HRC. Moreover, due to the heat transfer from the solidifying layers to the solidified layers, the concentration gradients might lead to interdiffusion at high temperatures in the solid state. This discussion is essential, as it will be used later in this manuscript to describe the hardness profile of the bimetal. Given the compatible chemical compositions of the two alloys, the smooth concentration profiles of the alloying elements near the interface, and the SEM micrographs, no parasitic reaction phases at the interface are expected.

The hardness profiles (Fig. 7a) before and after age hardening of the bimetal confirm that the as-built hardness of the 18Ni300 is around 325 HV. Approaching the interface, the hardness increases to about 340 HV and rises gradually to around 370 HV in the MAR-60HRC region, in line with the aging curves of the two base materials (Fig. 4). In the aged condition, a similar trend is observed. The hardness in 18Ni300 is around 560 HV and increases to 650 HV at the interface, followed by a gradual increase to ~720 HV. The hardness gradient at the interface covers around 2 mm, roughly representative of three deposited layers (layer thickness ~0.7 mm), which is in agreement with the chemical composition gradient near the interface, where mixing of the softer 18Ni300 with MAR-60HRC was evident. In order to further investigate the influence of dilution on the hardness, a compositionally graded (CG) sample was produced through an in-situ mixture of powders, increasing the amount of MAR-60HRC by 10 vol% in each successive layer. The experimental hardness profile and the hardness profile calculated based on the rule of mixtures (ROM) are depicted in Fig. 7b.
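Under the 50% remelting picture described above, the composition of successive MAR-60HRC layers follows a simple recursion, to which a rule-of-mixtures hardness can be attached. In the sketch below, the 50% dilution ratio and the two endpoint hardness values come from the text; the linear ROM interpolation is an illustrative assumption:

```python
H_18NI300, H_MAR = 560.0, 720.0   # HV, aged hardness of the two base materials (from the text)

def mixed_layers(n_layers: int, dilution: float = 0.5):
    """Weight fraction of MAR-60HRC in each successive deposited layer, assuming
    each pass remelts the layer below and mixes with it at ratio `dilution`."""
    w_mar = 0.0  # composition of the last 18Ni300 layer
    fractions = []
    for _ in range(n_layers):
        w_mar = dilution * w_mar + (1.0 - dilution) * 1.0
        fractions.append(w_mar)
    return fractions

for i, w in enumerate(mixed_layers(4), start=1):
    h_rom = H_18NI300 + w * (H_MAR - H_18NI300)  # rule-of-mixtures hardness
    print(f"layer {i}: {100 * w:.1f} wt% MAR-60HRC -> ROM hardness ~ {h_rom:.0f} HV")
```

With the 50% dilution assumed above, the first three layers come out at 50, 75, and 87.5 wt% MAR-60HRC, matching the ~50/50 and ~25/75 mixed layers and the near-nominal third layer described in the text, and the corresponding ROM hardness climbs from ~640 HV at the interface towards ~720 HV.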
The microhardness at point 1 in Fig. 8a is equal to the microhardness of the layer with 40 vol% of MAR-60HRC in Fig. 8b. In the second layer, covering points 2 and 3 in Fig. 8a, the microhardness of the bimetal lies between the hardness levels of the 60 and 70 vol% MAR-60HRC mixtures (see Fig. 7b). At the top of the third deposited layer (point 4 in Fig. 8a), the microhardness approaches that of 100% MAR-60HRC (see Fig. 7b). The microhardness does not change further from point 4 towards the top layers of the bimetal, therefore still corresponding to the hardness of pure MAR-60HRC. Together with the EDS analysis, these results appropriately explain the interface characteristics. The discrepancy between the ROM and the experimental results can be due to the change in precipitation kinetics with changing Mo, Ti, and Co content in the mixtures. In 18Ni300 with 0.6-0.8 wt% Ti, Ni3Ti is one of the main precipitates to form in the 400 °C to 500 °C aging interval; this is accompanied by the precipitation of Mo-rich intermetallic particles (e.g., Ni3Mo) [57]. By increasing Mo and decreasing Ti in the mixtures comprising a higher vol% of MAR-60HRC, the activity of Mo increases; moreover, the increase in Co wt% decreases the solubility of Mo in the matrix. The combination of these two factors enhances the precipitation of Ni3Mo, while Ni3Ti will still be present as a result of the higher Ti content in the mixtures compared with pure MAR-60HRC. The morphology and size of these two precipitates at the aging temperature differ from each other [12], leading to different contributions to strength according to the Orowan relation. This can lead to an increased experimental hardness compared with the theoretical ROM results. Interestingly, when the vol% of MAR-60HRC in the mixture is either low (i.e., up to 20%) or high (i.e., 90%), the experimental hardness is identical to that of the ROM; therefore, there exists a threshold above which the precipitation kinetics are affected by the introduction of the second material. This needs to be further analyzed in future work of the authors using thermal analysis.

Fracture Toughness
The K app of the 18Ni300 at 565 ± 6 HV is 90 ± 2 MPa m1/2, and this value drops to 60 ± 3 MPa m1/2 for MAR-60HRC at 725 ± 8 HV. The bimetal specimen shows a fracture toughness of 71 ± 2 MPa m1/2 at a surface hardness level similar to that of MAR-60HRC (Fig. 8a). It can be clearly observed that the K app values are systematically higher than the K IC of the wrought samples, which can be attributed both to the microstructure of the AM samples and to the effect of the blunt notch on the stress field at the notch tip for the samples in the current study. The higher toughness of the bimetal compared with MAR-60HRC can be attributed to the influence of the concentration and hardness gradient near the interface. The notch tip was located in an area showing a hardness of ~670 HV (Fig. 8b), which is equal to the hardness of a mixture of 50% MAR-60HRC, 18Ni300 bal. (see Figs. 6 and 7). The lower hardness below the notch leads to the higher K app of the bimetal. In order to confirm this hypothesis, fracture toughness tests were carried out on samples processed from the 50%-50% mixture of MAR-60HRC and 18Ni300. The fracture toughness of the 50%-50% samples, showing a hardness level of 660 HV, lies perfectly in between those of 18Ni300 and MAR-60HRC (Fig. 8a) and matches that of the bimetal if the hardness under the notch is taken into account for the latter (Fig. 8b).
Fig. 8 a Apparent fracture toughness values for the samples versus surface hardness (blue points), together with the K IC of wrought 18Ni300 [59,60] and 13Ni400 [19,48] (gray points); b K app versus hardness under the notch for the L-DED specimens
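For reference, stress-intensity values such as those quoted above follow from the maximum load of a three-point-bend SENB test via the standard ASTM E399 relation for a span-to-width ratio S/W = 4. The specimen dimensions below match the experimental section; the maximum load is a back-calculated illustrative value, not a reported measurement:

```python
import math

def f_senb(alpha: float) -> float:
    """ASTM E399 geometry function for a three-point-bend SENB specimen, S/W = 4."""
    num = 3.0 * math.sqrt(alpha) * (
        1.99 - alpha * (1.0 - alpha) * (2.15 - 3.93 * alpha + 2.7 * alpha**2)
    )
    den = 2.0 * (1.0 + 2.0 * alpha) * (1.0 - alpha) ** 1.5
    return num / den

# Specimen geometry from the experimental section (m); S/W = 4 is assumed.
B, W, a = 0.003, 0.006, 0.003
S = 4.0 * W
P_max = 1.96e3  # N, illustrative maximum load (back-calculated, not a reported datum)

K = P_max * S / (B * W**1.5) * f_senb(a / W) / 1e6  # MPa*m^0.5
print(f"K_app ~ {K:.0f} MPa*m^0.5")
```

With a/W = 0.5 the geometry factor is ~2.66, so the ~90 MPa m1/2 reported for 18Ni300 corresponds to a maximum load of just under 2 kN, consistent with the 5 kN load cell used.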
It is evident that the bimetal, with a very high surface hardness, shows excellent fracture toughness as well, shifting its position to the right side of the hardness-toughness linear trend. This is mainly due to the defect-free interface and the concentration and hardness gradients below the notch. The fracture toughness values obtained in this work for 18Ni300 and MAR-60HRC are systematically higher than those of conventionally manufactured maraging counterparts at the same hardness level (see Fig. 1). As discussed earlier in the experimental section, this is a consequence of the larger notch tip radius in the current study compared with a fatigue pre-cracked notch. This systematic difference was also documented in other works [58], where the fracture toughness of tool steels measured using a fatigue pre-cracked notch (i.e., ρ → 0) was lower than that measured on EDM-notched samples with a notch radius of 50 µm.

The fracture surface of 18Ni300 shows ductile transgranular fracture with the presence of dimples (Fig. 9a and b). This can be considered a completely transgranular fracture. The fracture surface of MAR-60HRC, on the other hand, shows a quasi-cleavage fracture behavior; preferential crack propagation along the columnar grains and the dendritic substructure can be observed, but it is hard to conclude that this is an intergranular fracture (Fig. 9c and d). The crack interaction with the interface in the bimetal sample will be discussed in the next section.
Fig. 9 SEM images of the fracture surfaces for a, b 18Ni300 and c, d MAR-60HRC

Crack Interaction with Interface in Bimetal
The load-displacement curves of 18Ni300 show that after reaching the maximum force (i.e., the onset of crack initiation), the load drops instantaneously due to unstable crack propagation until complete fracture (Fig. 10a). Signs of limited plastic deformation before reaching the maximum load were evident in the load-displacement records. However, drawing the 5% secant line following the ASTM E399 recommendation did not result in a significant change in the fracture toughness calculations. The record for MAR-60HRC also shows a similar sudden load drop after reaching the maximum load, and the load-displacement curve obeys perfectly linear elastic behavior (Fig. 10a).

The load-displacement record of the bimetal is different: it shows a maximum, followed by a sudden drop to around 0.4 kN, representing unstable crack propagation up to that point (Fig. 10a), without reaching zero, i.e., without complete fracture. At this point, to further understand the crack propagation behavior, the test was conducted for one sample with sustained loading until complete fracture (Fig. 10b). Looking at the load-displacement curve, the appearance of a pop-in is obvious (inset in Fig. 10b): the force slightly increases after the sudden drop and then starts to decrease gradually until complete fracture. This behavior can be related to a decline in the driving force for crack propagation and to crack arrest in the microstructure; thus, a larger driving force was needed for further crack propagation. Interestingly, the metallographic cross-section prepared after stopping the test before the occurrence of the pop-in shows that the propagating crack in the MAR-60HRC region is arrested at the interface due to the shielding effect of the tougher 18Ni300 (Fig. 11a), in agreement with the load-displacement curve.
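A pop-in of the kind described above can also be flagged programmatically from a load-displacement record as a sudden load drop followed by renewed load pick-up. A minimal sketch on a synthetic record (the values below are invented for illustration, not test data):

```python
# Detect pop-ins: large instantaneous load drops followed by renewed loading.
load_kN = [0.0, 0.8, 1.6, 2.3, 2.6, 0.4, 0.45, 0.42, 0.30, 0.15, 0.0]  # synthetic record

def find_pop_ins(load, drop_threshold=0.5):
    """Indices where the load falls by more than `drop_threshold` kN in one step
    and then rises again at the next step (crack arrest and re-loading)."""
    events = []
    for i in range(1, len(load) - 1):
        if load[i - 1] - load[i] > drop_threshold and load[i + 1] > load[i]:
            events.append(i)
    return events

print("pop-in at record indices:", find_pop_ins(load_kN))
```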
The crack propagation path in bi-material layered structures is governed by a competition between the direction of the "maximum" driving force and the "weakest" microstructural path [59]. This experiment shows that the original crack, "perpendicular" to the interface, does not deflect towards the interface when approaching it. Therefore, the interface was not the "weakest" path. This is another indication of a strong and defect-free interface, thanks to the very compatible chemical compositions of the two materials, which implies a good bonding between the hard surface and the tough core that can preclude delamination or spalling, specifically in tooling applications. Moreover, upon continued loading, the crack propagates in the 18Ni300, and the fracture surface at the interface shows a transition from quasi-cleavage fracture in MAR-60HRC to transgranular ductile fracture in 18Ni300, with no traces of decohesion or transversal cracking at the interface (Fig. 11b). This is also backed by the EDS analysis of Area 1 (the first deposited layer of MAR-60HRC) and Area 2 (18Ni300) in the vicinity of the interface (Table 4). Apart from the improved K app of the bimetal compared with MAR-60HRC, the crack arrest at the interface (i.e., an extrinsic toughening effect) can account for an increased damage tolerance in industrial applications. It is noteworthy that the scope of the current study was mainly to demonstrate the feasibility of improving fracture toughness through the fabrication of bimetal parts and to verify the possibility of stopping crack propagation at the interface.

Conclusion
A modified ultrahigh-strength maraging steel (MAR-60HRC) was fabricated using laser metal deposition, and subsequently, the possibility of producing bimetals with a hard surface (MAR-60HRC) and a tougher core (18Ni300) was evaluated. From the experimental work carried out in this study, the following conclusions can be drawn:
• The as-built microstructure of MAR-60HRC comprised martensite and a small vol% of intercellular retained austenite, owing to heavy micro-segregation of alloying elements to the cellular boundaries.
• Direct aging at 480 °C for 6 h resulted in a high hardness (~720 HV) for the MAR-60HRC, about 180 HV higher than the reference 18Ni300. Aging at higher temperatures resulted in an earlier hardness drop due to austenite reversion and precipitate coarsening.
• The fracture toughness (K app) of MAR-60HRC in the peak-aged condition was lower than that of 18Ni300 (i.e., 60 MPa m1/2 vs. 90 MPa m1/2).
• Bimetal samples composed of a high-toughness core of 18Ni300 and a hard surface of MAR-60HRC were deposited successfully. A smooth microstructural and hardness transition was observed, without evident interface defects. The fracture toughness (K app) of the bimetal (i.e., 71 MPa m1/2) was higher than that of MAR-60HRC and lower than that of 18Ni300, while the surface hardness of the bimetal was equal to that of MAR-60HRC (~720 HV).
• The fracture toughness results in the current work are systematically higher than the K IC reported in the literature for the wrought counterparts, which is mainly due to the effect of the blunt notch on the stress field at the notch tip for the samples in the current study.
• In addition to the enhanced fracture toughness, the bimetal with a hard surface and a softer core exhibited crack arrest at the interface. This extrinsic toughening effect can be of high interest when designing a highly wear-resistant and damage-tolerant material.
Acknowledgements
Two of the authors (SA and PB) acknowledge the financial support from the AMICO project (Grant number ARS01_00758) from the Italian Ministry of Education.

Funding
Open access funding provided by Università degli Studi di Trento within the CRUI-CARE Agreement.

Declarations
Conflict of interest: On behalf of all authors, the corresponding author states that there is no conflict of interest.
Faraz Deirmina and Sasan Amirabdollahian have equally contributed to this work.

Figure and table captions
Fig. 2 Schematic of the samples prepared for fracture toughness tests
Fig. 3 a LOM micrograph showing the overall microstructure of the L-DED MAR-60HRC (Vilella's reagent), b EBSD band contrast image, c corresponding phase map overlaid on the band contrast image, d-f corresponding EDS elemental maps for Mo, Ni, and Fe, g IPF maps of martensite, and h IPF maps of RA with respect to the build direction
Fig. 4 Aging curves of a MAR-60HRC and b 18Ni300 maraging steel
Fig. 5 SEM micrographs of L-DED MAR-60HRC aged for 5 h at a 480 °C (the inset highlights nano-sized precipitates within the martensitic matrix) and b 525 °C; note the increased thickness of the intercellular austenite, which appears brighter in the BED-SEM micrograph; c XRD results
Fig. 6 a LOM micrograph showing the interface in the as-deposited bimetal, b EDS analysis results for Ti, Co, Ni, and Mo along the height and the corresponding SEM micrograph of the interface between the two materials
Fig. 7 a Hardness profile across the height of the bimetal samples in the as-built and aged states, b hardness profiles of the CG sample: experimental and ROM
Fig. 10 a Load-displacement curves for the fracture toughness tests and b load pop-in for the bimetal
Fig. 11 a DF-LOM image highlighting the crack propagation path in the bimetal samples, and b SEM image of the fracture at the interface of the two materials, demonstrating the transition of fracture mode
Table 3 Composition of different layers near the interface of the bimetal, wt%
Table 4 Chemical composition of two areas at the fracture interface of the bimetal, wt%
Aptamer-Based In Vivo Therapeutic Targeting of Glioblastoma

Glioblastoma (GBM) is the most aggressive, infiltrative, and lethal brain tumor in humans. Despite the extensive advancement in the knowledge about tumor progression and treatment over the last few years, the prognosis of GBM is still very poor due to the difficulty of targeting drugs or anticancer molecules to GBM cells. The major challenge in improving GBM treatment implicates the development of a targeted drug delivery system, capable of crossing the blood-brain barrier (BBB) and specifically targeting GBM cells. Aptamers possess many characteristics that make them ideal novel therapeutic agents for the treatment of GBM. They are short single-stranded nucleic acids (RNA or ssDNA) able to bind to a molecular target with high affinity and specificity. Several GBM-targeting aptamers have been developed for imaging, tumor cell isolation from biopsies, and drug/anticancer molecule delivery to the tumor cells. Due to their properties (low immunogenicity and toxicity, and long stability), a large number of aptamers have been selected against GBM biomarkers and tested in GBM cell lines, while only a few of them have also been tested in in vivo models of GBM. Herein, we specifically focus on aptamers tested in GBM in vivo models that can be considered as new diagnostic and/or therapeutic tools for GBM patients' treatment.

Introduction
GBM or astrocytoma grade IV is the most aggressive, infiltrative, and lethal brain tumor in humans [1]. GBM is currently incurable, due to its resistance to conventional therapies and its invasive nature. Despite advances in therapeutic options, the prognosis remains very poor due to the lack of safe and effective carriers able to specifically target tumor cells and to penetrate into the tumor [2]. In the past decade, much attention has been focused on aptamers, which are emerging as safe delivery vehicles for targeted cancer therapeutics. Aptamers are short single-stranded DNA or RNA oligonucleotides able to bind targeted molecules with high affinity through a three-dimensional shape [3]. Aptamers against a specific target can be generated via an in vitro selection process called Systematic Evolution of Ligands by EXponential enrichment (SELEX). The conventional SELEX method mainly consists of three steps: selection, partitioning, and amplification. Before the selection step, a library of oligonucleotides (DNA or RNA) is synthesized, generally with up to 10^15 different unique sequences [4]. Each unique sequence contains random bases (20-50 nt) flanked by two conserved primer-binding sites, which are used for the PCR amplification step. During selection, the library is incubated with the target.
Figure 1. Schematic representation of a cell-SELEX (Cell-Systematic Evolution of Ligands by EXponential enrichment) method. Initially, a library of oligonucleotides is incubated with the target cells. The unbound sequences are removed by washing, while the bound sequences are collected. After an incubation with the negative cells, the bound sequences are discarded, while the unbound sequences are collected and amplified by PCR. The PCR products are utilized for the next round of selection. After several selection rounds, the enriched sequences are sequenced and characterized [47].
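As a purely schematic illustration of the enrichment logic in Figure 1, the toy simulation below tracks a pool in which a small fraction of sequences binds the target cells; each round applies positive selection, counter-selection against negative cells, and amplification. All probabilities and pool sizes are invented, and the model ignores real-world complications such as PCR bias and non-specific binding:

```python
import random

random.seed(1)

# Toy pool: each "sequence" is just a pair of binding probabilities
# (p_target, p_negative); numbers are invented for illustration.
pool = [(random.uniform(0.6, 0.95), random.uniform(0.0, 0.2)) if random.random() < 0.001
        else (random.uniform(0.0, 0.3), random.uniform(0.0, 0.3))
        for _ in range(100_000)]

def selex_round(pool, size):
    # Positive selection: keep sequences that bind the target cells...
    bound = [s for s in pool if random.random() < s[0]]
    # ...counter-selection: discard those that also bind the negative cells...
    survivors = [s for s in bound if random.random() >= s[1]]
    # ...amplification: PCR restores the pool size (sampling with replacement).
    return [random.choice(survivors) for _ in range(size)]

for rnd in range(1, 9):
    pool = selex_round(pool, 100_000)
    binders = sum(1 for s in pool if s[0] > 0.5) / len(pool)
    print(f"round {rnd}: high-affinity fraction = {binders:.3f}")
```

After a handful of rounds the high-affinity fraction approaches unity, mirroring the enrichment that sequencing detects after several selection rounds.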
Herein, we summarize the studies involving specific GBM aptamers whose therapeutic efficacy was tested and confirmed in GBM in vivo models (Figure 2 and Table 1).

Aptamers Showing In Vivo Therapeutic Effects
A large number of aptamers have been generated against GBM cell membrane proteins [47]. Some of these aptamers were developed as anti-GBM tools and/or tested as vehicles for antitumoral molecules blocking the activity of GBM receptors involved in gliomagenesis and cancer progression (Table 1).

Gint4.T and CL4 Aptamers
The platelet-derived growth factor receptor beta (PDGFRβ) is overexpressed in GBM cells and is involved in cell migration [42]. Camorani and co-authors [42], using U87MG GBM cells, selected a PDGFRβ-specific RNA aptamer (Gint4.T) after 14 differential whole-cell SELEX rounds. Gint4.T has been tested in U87MG cells, showing that it is able to reduce migration and proliferation. Specifically, Gint4.T induced an S-phase cell-cycle arrest and stimulated the differentiation of the U87MG cells [42].
The anti-EGFR pro-apoptotic CL4 is an aptamer raised against the epidermal growth factor receptor (EGFR) and found to have a strong cytotoxic effect in EGFR-positive cancer cells [48]. CL4 was also tested in vivo in U87MG-derived mouse xenografts. The aptamer was administered intravenously at days 0, 3, 5, and 7, leading to a significant reduction in tumor growth [42]. Interestingly, the anti-GBM properties of the combined Gint4.T/CL4 aptamers were compared with three commercial anticancer drugs: gefitinib and cetuximab (both directed against EGFR) and imatinib (against PDGFR) [49]. Dose- and time-dependent experiments showed that T98G and U87MG cells treated with gefitinib and cetuximab are extremely resistant to both anticancer therapeutic agents. In contrast, the Gint4.T + CL4 treatment reduced cell viability by 70%, similarly to treatment with temozolomide, which is at present the conventional GBM therapy [50,51]. In order to assess the in vivo targeting of Gint4.T, mice bearing xenografts from luciferase-expressing U87MG cells were treated with a single intravenous injection of the aptamer. A bioluminescence assay demonstrated that Gint4.T preserved its binding specificity in vivo, leading to tumor growth inhibition. Then, Gint4.T and CL4 were administered simultaneously intravenously in U87MG-derived mouse xenografts, causing inhibition of tumor growth with a greater reduction in the EGFR amount and PDGFR phosphorylation than the single independent treatments [32]. The antitumor activity of these aptamers in vivo was also confirmed by immunohistochemical staining for the pan-proliferative marker Ki-67, revealing a significant decrease of proliferating cells in Gint4.T-treated mice, which was strongly enhanced by the combined treatment (Gint4.T + CL4) [32]. Gint4.T was also used as a chimera conjugated with an siRNA against the signal transducer and activator of transcription 3 (STAT3), leading to the formation of the Gint4.T-STAT3 aptamer-siRNA chimera named AsiC [44]. STAT3 is a signal transducer responding to EGFR and IL-6 receptor activation [52]. After its phosphorylation, STAT3 translocates into the nucleus and regulates the expression of genes involved in cell cycle, survival, hypoxia, angiogenesis, invasion, and immune response [52]. This transcription factor is often deregulated in cancers, including GBM, representing an excellent potential therapeutic target [52]. Esposito and colleagues developed the AsiC chimera (Gint4.T-siRNA, assembled by a stick-based approach), demonstrating that both its binding ability to the cell target (mediated by Gint4.T) and its silencing potential (mediated by the siRNA) are preserved [53-56]. STAT3 acts as a key oncogenic factor regulating survival, proliferation, migration, and invasion of cancer cells [44,57]; accordingly, AsiC-treated cells showed a strong reduction in cell viability, activation of programmed cell death, and a lower migratory ability compared to negative controls [44]. The AsiC aptamer chimera was also tested on GBM growth in vivo. Specifically, U87MG cell-bearing mice were treated with AsiC or Gint4.T independently, by intraperitoneal administration, to compare their efficacy in vivo. The authors demonstrated a greater reduction in tumor growth in AsiC-treated mice compared to the Gint4.T treatment, as assessed by tumor volume and by histopathological and immunohistochemical analyses [44]. AsiC-treated xenografts showed a strongly reduced cellular density, and the Ki-67 positivity decreased to approximately 25%.
The level of STAT3 mRNA and of its target genes, such as cMYC, Bcl-2, and Bcl-XL, was reduced in the tumors of treated mice, as were the levels of pro-caspase 3, PARP, and Bcl-XL proteins and of programmed cell death ligand 1 (PDL1) [42].

AS1411 Aptamer
The AS1411 aptamer is the first oligodeoxynucleotide aptamer to enter phases I and II of several cancer clinical trials [57-59]. It has been demonstrated that AS1411 interferes with nucleolin (NCL), whose expression correlates with the cell proliferative level and which is overexpressed in GBM [60-62]. Specifically, Cheng and colleagues [43] demonstrated that the AS1411 aptamer, by binding NCL, decreased GBM cell proliferation, with p53 and cyclin A1 upregulation and Bcl-2, Akt1, and cyclin B1 downregulation. Additionally, this aptamer is also able to inhibit GBM cell migration and invasion through the downregulation of Akt1 [43]. The strong anticancer effects of AS1411 have been confirmed in vivo, in a xenograft model of human glioma established in severe combined immunodeficient (SCID) mice. In particular, AS1411 or a control aptamer sequence was subcutaneously injected in a single dose near the tumor area, every 5 days for 20 days. In vivo experiments showed a strong reduction in tumor volume and an increased survival time of the glioma-bearing mice [43].

Aptamers Used as Drug Vehicles
The current clinical treatment of glioma is represented by surgery followed by radiotherapy and chemotherapy. The latter is the most commonly used method for glioma treatment [63-65]. The main obstacles in the treatment of gliomas are the BBB and poor targeting, which severely compromise the delivery of drugs to the tumor site during chemotherapy and lead to dangerous off-target effects. In the last few years, nanoparticle (NP)-based drug delivery systems have demonstrated their ability to enhance cancer chemotherapy [66,67]. Although these delivery systems are very promising, they still have some limits, such as poor cell targeting, premature release of the drug, and lack of real-time monitoring. The challenge is represented by the design of nanoparticles able to cross the blood-brain/blood-tumor barrier that, thanks to the aptamers' targeting ability, exclusively reach glioma cells and release their cargo over an extended period of time to achieve an efficient therapeutic response [68,69] (Table 1).

AS1411 Aptamer
The AS1411 aptamer, mentioned above, was largely used to decorate different kinds of NPs loaded with chemotherapeutic drugs and to specifically deliver them to GBM cells [37]. Paclitaxel (PTX) is a chemotherapeutic agent widely used against various types of solid tumors, including gliomas [70-75]. Unfortunately, its clinical efficacy is limited by its poor aqueous solubility, non-tumor-specific cell killing, and the serious adverse side effects caused by its solvent [76]. Guo and co-authors [36] demonstrated the in vivo efficacy of the AS1411 aptamer when coupled with PTX-loaded NPs. The authors [36] used NPs derived from poly(D,L-lactic-co-glycolic acid) (PLGA). The PLGA NPs were then functionalized with polyethylene glycol (PEG), because PEGylated polymeric NPs show a significant reduction in systemic clearance compared with similar particles without PEG [77,78].
PTX-loaded PEG-PLGA NPs (PTX-NPs) were prepared, and AS1411 was conjugated to the PTX-NP surface, forming the Ap-PTX-NP complex and exploiting the AS1411-nucleolin interaction as a strategy to make PTX delivery to gliomas more specific and effective [36]. A xenograft nude mouse model with C6 glioma implanted in the armpit and an intracranial tumor model in Wistar rats were used for determining in vivo drug distribution and tumor growth delay. After intravenous administration, with doses set according to those usually used in antiglioma therapy [72,75,79], they found a faster and stronger reduction in tumor growth compared to controls. AS1411-nucleolin-mediated recognition and internalization significantly enhanced the cellular association of Ap-PTX-NP in C6 glioma cells, allowing long-term and precise in vivo tumor targeting and improving the antiglioma efficacy of PTX in both mice and rats bearing C6 glioma xenografts [36]. Luo and co-authors [40] exploited the dual-targeting potential of the AS1411 aptamer, conjugating it also to a different kind of NP made of poly(L-γ-glutamyl-glutamine) (PGG). These novel PGG-PTX NPs greatly improved the PTX aqueous solubility, prolonged the plasma half-life, and showed lower toxicity compared with PGA-PTX in mice [80-82], but they lack targeting specificity. For this reason, PGG-PTX NPs were conjugated to AS1411, which indeed enhanced the glioma treatment [36,37,83]. Moreover, the AS1411-PGG-PTX complex showed an increased ability (2.5-fold) to penetrate deeper into GBMs than the PGG-PTX NPs without the AS1411 aptamer [40]. Accordingly, in vivo studies demonstrated that the AS1411-PGG-PTX nanoconjugates displayed much greater drug accumulation and deeper penetration into GBM tissues than the PGG-PTX NPs and increased the median survival of intracranial U87MG GBM-bearing nude mice [40]. AS1411 was also used in an aptamer- and peptide-dual-functionalized nanoparticle system [37]. Gao and co-authors [37] conjugated poly(ethylene glycol)-poly(ε-caprolactone) (PEG-PCL) NPs with the AS1411 aptamer and a phage-displayed TGN peptide, which is a specific targeting ligand of the BBB [37]. The authors [37] created a cascade targeting strategy, named AsTNP, to achieve high and precise brain glioma targeting. To evaluate the anticancer effect of the AsTNP system, the authors used docetaxel (DTX), which was loaded into the PEG-PCL NPs. Indeed, DTX, an inhibitor of microtubule depolymerization [84], has been widely used in the treatment of several malignancies, including brain tumors [16]. In vitro cell uptake and three-dimensional tumor spheroid penetration studies demonstrated that AsTNP could not only target endothelial and tumor cells but also penetrate the endothelial monolayers and tumor cells to reach the core of tumor spheroids [37]. BALB/c nude mice bearing C6 orthotopic glioma, treated by tail vein injection, were used to test the in vivo potential of AsTNP to deliver DTX to tumor cells. The authors [37] demonstrated that TGN could facilitate the transport of particles from the blood to the brain and that the AS1411 aptamer could recognize glioma cells and enrich DTX-loaded particles in the tumor site. In this way, AsTNP decreased the toxicity caused by the incorrect distribution of DTX and increased the median survival of glioma-bearing mice, achieving an antiglioma effect with more precise drug delivery at a relatively low dose. The AS1411 aptamer was also involved in the development of another dual-targeting system [41].
In addition to the aptamer target (nucleolin), it has also been shown that the transferrin receptor (TfR) is overexpressed on the surface of brain capillary endothelial cells (a major component of the BBB) and of GBM cells [41]. Thus, several studies demonstrated that transferrin (Tf)-conjugated NPs can cross the BBB and target brain glial cells [85,86]. Based on this evidence, the combination of Tf and AS1411 aptamer nanoparticles has been proposed for the targeting of TfR- and nucleolin-expressing gliomas [41]. As a therapeutic strategy, Zhu and co-authors [41] combined AS1411 and TfR targeting with photodynamic therapy (PDT), which is based on the activity of a photosensitizer that generates reactive oxygen species (ROS) under laser irradiation at a specific wavelength, leading tumor cells to apoptosis. Moreover, it is well known that targeted chemotherapy combined with PDT can significantly improve cancer treatment [87]. The authors [41] selected the anticancer agent RBT [Ru(bpy)2(tip)]2+, which is a high-efficiency photosensitizer for photodynamic tumor therapy [88]. To refine the system, the authors used mesoporous ruthenium nanoparticles. Indeed, among the different inorganic metal nanoparticles used to treat and diagnose brain gliomas [89-93], ruthenium nanoparticles are particularly appealing due to their good biocompatibility [41]. The authors [41] optimized the ruthenium nanoparticles, obtaining mesoporous ruthenium nanoparticles (MRN) with an increased loading capacity for antitumor drugs (28.2%) [94]. In addition, since glutathione (GSH) levels in tumor cells are much higher than in normal cells, drug release triggered by endogenous GSH is considered to be a highly efficient strategy, and disulfide bonds are the most commonly used component of GSH-triggered systems [95-98]. On this basis, the authors [41] proposed the use of RBT-loaded MRN NPs covalently bound to both Tf and AS1411, with the addition of disulfide bonds, to obtain a dual-targeted nanomedicine delivery system for drug delivery to gliomas, which they called RBT@MRN-SSTf/Apt [41]. In vitro, the RBT@MRN-SSTf/Apt complex penetrates deeply and has a significant inhibitory effect on 3D tumor cell spheres. To assess the in vivo therapeutic potential, U87MG orthotopic tumor-bearing nude mice were treated with RBT@MRN-SSTf/Apt combined with laser irradiation [41]. In vivo data demonstrated that the median survival of the tumor-bearing nude mice was significantly prolonged, with minimal weight loss [41]. This and other studies provide new possibilities for the design of dual-targeted NP-mediated drug delivery systems combined with photodynamic therapy.

GMT8 Aptamer
Among the aptamers developed through whole-cell SELEX, the GMT8 aptamer has been found to have the highest binding affinity for GBM cells, even though its target is still unknown [8]. Therefore, it has been used as an efficient ligand for GBM-targeted therapy to improve drug delivery to GBM cells and enhance tumor penetration [38]. The GMT8 aptamer was likewise combined with non-cytotoxic polyethylene glycol-poly(ε-caprolactone) (PEG-PCL) NPs. Specifically, GMT8 was conjugated on the surface of PEG-PCL NPs loaded with DTX, forming the delivery system named ApNP [38]. GBM cells treated with DTX alone, DTX-loaded NPs, or ApNP were analyzed, and nuclear fragmentation (suggestive of apoptosis) was found to be higher in cells treated with ApNP compared to those treated with DTX alone or with DTX-loaded NPs [38].
The in vivo therapeutic potential of ApNP was tested in orthotopic U87MG GBM-bearing nude mice. Animals treated with fluorescent NPs or ApNP showed higher brain fluorescence in the ApNP-treated group, and the GMT8 aptamer increased mouse survival time compared to unconjugated NPs [38].

ATP Aptamer
An and co-authors [39] developed a targeted delivery system based on an ATP aptamer coupled with a modified substrate of the amino acid transporter LAT1 and a GSH-responsive molecule [39], acting as dual release-regulating factors for the efficient delivery of doxorubicin (DOX) into GBM cells. The ATP DNA aptamer is a 25-base single-stranded oligodeoxynucleotide selected from a random-sequence DNA pool and showing high affinity for ATP [39]. To regulate DOX delivery, the ATP aptamer was hybridized with its cDNA, forming a DNA scaffold as a DOX carrier. DOX can specifically intercalate into the GC pairs of the DNA scaffold, yielding a DOX/ATP aptamer complex without changing the duplex structure of the DNA scaffold and allowing DOX release depending on the ATP concentration. Indeed, in vitro, only the ATP level of tumor cells was able to trigger DOX release, while release was not induced by the low extracellular concentration, ensuring DOX delivery only to the tumor cells [39]. The system was then improved using another targeting molecule, named 3CDIT, a substrate derivative of LAT1, an amino acid transporter expressed by both the BBB and glioma cells [39]. Then, considering the great disparity in GSH levels between the extracellular and intracellular compartments, the 3CDIT substrate was decorated onto a GSH-responsive polymer (pOEI), yielding a 3CDIT-targeting pOEI complex that was subsequently condensed with the DOX/ATP complex, forming the 3CDIT-pOEI/DOX/ATP aptamer delivery system [39]. To test its therapeutic efficacy, the 3CDIT-pOEI/DOX/ATP complex was injected through the tail vein in a glioma nude mouse model with stable luciferase expression. In vivo, this new delivery system demonstrated outstanding glioma accumulation, DOX release, and antitumor therapeutic effect, without systemic toxicity, thereby opening a new scenario for safe and efficient glioma chemotherapy [39].

Aptamers Able to Enhance GBM Therapy Efficacy
As reported above, GBM is one of the most common and deadliest brain tumors. Surgical resection followed by radiotherapy (postoperative fractionated external-beam radiotherapy started within 6 weeks of surgery, with 60 Gy) [99] and chemotherapy (using temozolomide (TMZ)) is the standard therapy for this type of cancer. The effects of this traditional therapeutic strategy are very limited due to the abnormal migration and invasion ability of GBM cells as well as their resistance to chemo- and radiotherapy. Therefore, the discovery of a drug able to inhibit GBM cell proliferation, migration, and invasion and to decrease chemo- and radioresistance is a crucial step toward developing an efficient GBM treatment [100,101]. To date, a few aptamers have been shown to increase the efficacy of existing GBM therapies in vivo (Table 1).

U2 Aptamer
The first is the DNA aptamer named U2, obtained by cell-SELEX technology using GBM cells overexpressing epidermal growth factor receptor variant III (EGFRvIII) [102]. EGFRvIII is the most common gain-of-function mutation, observed in 50% of GBM patients. This mutant has reduced constitutive EGFR-mediated signaling compared to wild-type EGFR [32,103,104]. In addition, EGFRvIII expression has been correlated with chemo- and radioresistance in GBM patients [19,105,106].
By flow cytometry and immunofluorescence, the U2 aptamer was demonstrated to bind specifically to EGFRvIII-expressing GBM cells, and the aptamer-receptor complex was internalized into the cells through the endosome recycling pathway [102]. Since EGFRvIII suppression leads to a reduction in cancer cell proliferation, migration, and invasion [102,107], Zhang and co-authors [102] analyzed the effects of U2 on these typical cancer features. They demonstrated that the aptamer caused a time- and dose-dependent increase in the apoptosis rate and reduced the migration capability of cancerous cells by diminishing the phosphorylation level of EGFRvIII. Most importantly, the U2 aptamer enhances the radiosensitivity of EGFRvIII-expressing cells by decreasing the phosphorylation of DNA repair effectors, thereby hindering the DNA repair process [102]. In a nude mouse model bearing EGFRvIII-expressing GBM cells, the U2 aptamer dramatically reduced tumor volume and showed a significant antitumor effect compared to the control [102].

NOX-A12 Aptamer
Radiotherapy is an important component of GBM treatment, able to abrogate local angiogenesis and thereby induce the tumor mass to activate the neovasculogenesis pathway, which involves de novo growth of blood vessels [108]. Starting from this evidence, Brown and colleagues [109] speculated that tumor recurrence could be markedly reduced by inhibition of the circulating pro-angiogenic CD11b+ myelomonocytes, which express high levels of stromal cell-derived factor 1 (SDF-1), a factor playing a key role in angiogenesis by recruiting endothelial progenitor cells through a CXCR4/CXCR7-dependent mechanism. Therefore, the most effective strategy for preventing post-irradiation vasculogenesis in GBM would be to block the SDF-1 receptors (CXCR4 and CXCR7). A PEGylated L-oligoribonucleotide aptamer (a so-called Spiegelmer), NOX-A12 (olaptesed pegol), which binds SDF-1 with high affinity, was used for this purpose [109]. NOX-A12 consists of 45 L-enantiomeric RNA nucleotides and carries a 40-kDa polyethylene glycol (PEG) modification at its 5′-end to increase plasma residence time. The non-natural L-nucleotides confer biostability to the molecule, because L-oligonucleotides are not recognized by nucleases. Additionally, their mirror-image nature renders Spiegelmers immunologically passive, with a low risk of neutralizing antibodies and no Toll-like receptor activation. To test NOX-A12 in vivo, the authors used N-ethyl-N-nitrosourea (ENU)-induced brain tumors in the Sprague-Dawley rat, a model that has been proven to be extremely resistant to anticancer therapy [110]. The authors demonstrated that NOX-A12-mediated SDF-1 blockade was indeed effective in inhibiting or delaying GBM recurrence following irradiation in this model. Considering that NOX-A12 was delivered in vivo following brain irradiation at doses and for time periods that can be safe and well tolerated in humans, this aptamer entered a clinical trial in 2019. The purpose of the ongoing study (ClinicalTrials.gov identifier NCT04121455) is to obtain exploratory information on the safety and efficacy of NOX-A12, in combination with radiation therapy, in patients with newly diagnosed GBM either not amenable to resection (biopsy only) or after incomplete tumor resection.

AS1411 Aptamer
With the improvements in nanotechnology, different metal nanomaterials have also been developed to enhance the antitumor efficacy of radiotherapy.
In particular, silver nanoparticles (AgNPs) showed excellent radiosensitizing properties that have been confirmed on glioma cells in vitro and in vivo [46,111-114]. Thus, in order to enhance the radiosensitivity of tumor cells, Zhao and co-authors [46] conjugated AgNPs with polyethylene glycol (PEG) and the AS1411 aptamer (AsNPs) and demonstrated that the conjugated AsNPs showed lower cytotoxicity and enhanced AgNP endocytosis into C6 GBM cells. In addition, only the PEG-AS1411-conjugated nanoparticles reached the spheroid core, suggesting an improved penetration ability. Since previous studies indicated apoptosis induction as a potential mechanism of radiosensitization, the authors demonstrated that AsNPs carried out their radiosensitizing function by triggering the apoptotic response [46]. For the in vivo radiosensitization experiments, AsNPs were conjugated with the Cy5 fluorophore and systemically administered to nude mice bearing intracranial glioma. Cy5-AsNPs were able to accumulate in the tumor and, most importantly, the median survival time of mice treated with AsNPs plus irradiation was significantly prolonged. Notably, no systemic toxicity was observed in injected mice [46].

Immunomodulatory Aptamers
Another class of aptamers enhancing GBM therapy efficacy is represented by aptamers able to modulate the immune response. Immunotherapy is based on the use of monoclonal antibodies, immune adjuvants, and vaccines against oncogenic viruses [115]. Immunomodulatory aptamers showed a high targeted delivery capacity, conferring fewer off-target side effects and good plasticity. For these reasons, they are considered excellent potential therapeutic tools [116]. Usually, in order to reduce toxicity, this class of aptamers is targeted to the tumor cells by conjugation to a second aptamer that binds precisely to cancer surface receptors, ensuring activity only against cancer cells [117] (Table 1).

VEGF-4-1BB Bi-specific Aptamer
Among the immunomodulatory aptamers, 4-1BB was tested in an oncogene-induced GBM model in vivo. 4-1BB is a co-stimulatory receptor that stimulates activated CD8+ T-cell survival, expansion, and differentiation into memory cells [118], thereby enhancing tumor immunity and inhibiting cancer growth [119-121]. Different studies reported that systemic treatments with an anti-4-1BB antibody (Ab), synergized with vaccination and other immunotherapies, inhibited tumor growth in mice [116,120,122-126]. Unfortunately, anti-4-1BB Ab causes immune anomalies due to polyclonal activation of CD8+ T cells and consequent overproduction of IFNγ and TNF [45,122,124,127]. To bypass this limitation, Schrand and colleagues [45] tested the efficacy of a bi-specific aptamer-based approach. Specific targeting based on an aptamer developed against the broadly expressed stromal product vascular endothelial growth factor (VEGF) [122] was used to bind the GBM stroma, while the 4-1BB aptamer was used to bind the T-cell co-stimulatory receptor in order to induce immune response activation. The in vivo efficacy of the VEGF-4-1BB system has been tested in high-grade glioma murine models overexpressing PDGFRβ and STAT3 in cancer precursor cells of newborn mice. The treatment with this bi-specific aptamer (VEGF-4-1BB) led to enhanced mouse survival, confirming the conjugate's antitumoral efficacy and suggesting its ability to cross the BBB [45].
Discussion
Aptamers show many features that make them promising agents for cancer/GBM treatment. These short single-stranded DNA or RNA oligonucleotides fold into three-dimensional shapes that bind target molecules with high affinity, like antibodies or peptides, but with additional benefits. Monoclonal antibodies (mAbs) are immunogenic peptides/proteins that can bind to specific epitopes, inducing biochemical reactions. Multiple studies have reported the importance of mAbs in cancer; for example, an anti-mouse transferrin receptor antibody fused with GDNF (cTfRmAb-GDNF) can be delivered to the brain in vivo [128]. Although there are few examples of mAbs in clinical trials, the presence of antibodies in the brain has also been linked to neurological disorders such as psychosis [129]. Considering the above, aptamers can be an important alternative. Indeed, aptamers possess several advantages over antibodies. For example, aptamers do not produce an immune response, can be selected against a wider range of targets including highly toxic compounds, are generated in a relatively short time compared to antibodies, and are characterized by high cell and tissue permeability due to their small size (~12-30 kDa) compared to a typical IgG antibody (~150-170 kDa). Moreover, aptamers are chemically synthesized, so there is no risk of biological contamination, batch-to-batch consistency is ensured, and, finally, their production does not involve animals. Peptides, which are short linear chains of amino acids (aa), usually <50 aa in length, can bind, modulate, and inhibit specific proteins of interest; for example, they can inhibit a specific interaction between two proteins. In cancer treatment, peptides can be used in a variety of ways, including carrying cytotoxic drugs, hormones, radionuclides, and vaccines, similarly to aptamers [130]. Peptides, like aptamers, have several important advantages over antibodies, such as their small size and relative ease of synthesis compared with recombinant antibodies. However, they have several disadvantages compared to aptamers, such as low bioavailability, metabolic liability, poor cell permeability, immunogenicity, high degradability in vivo by proteases, strong delivery problems, and less versatility [131]. Considering the above, aptamers can be regarded as highly promising molecules for future clinical application; however, despite several advantageous features, some limitations must be considered. The most important one is their susceptibility to digestion by nucleases, especially in vivo. At present, several technical approaches have been utilized to improve aptamer nuclease resistance, the most promising being the substitution of the 2′-OH of the sugar backbone of RNAs with fluoro, amino, or methoxy functional groups or the use of locked nucleic acids (LNAs) [132]. During the last few years, a large number of aptamers have been generated against GBM cell membrane proteins; some of them are able to cross the BBB and have been proven to be efficient as therapeutic agents in GBM in vivo models.
Among them, we note the following: (i) aptamers developed as anti-GBM tools blocking cancer progression; (ii) aptamers used as drug vehicles for antitumoral molecules or for nanoparticles and their derivatives, representing novel drug delivery systems; (iii) aptamers used as adjuvants to increase the efficacy of existing GBM therapies in vivo; and (iv) aptamers able to modulate the immune response, enhancing tumor immunity and inhibiting cancer growth. However, despite the availability of several aptamers for GBM therapy, only the NOX-A12 aptamer is, at present, enrolled in a clinical trial, started in 2019.

Conclusions
The exponential increase in aptamer research and their growing applications in multiple fields of biological and clinical research clearly indicate them as strong rivals of antibodies and peptides as molecules with specific binding characteristics and therapeutic potential. In this scenario, future studies on aptamer-based therapy are necessary to open novel possibilities in the advanced clinical therapy of GBM.

Conflicts of Interest: The authors declare no conflict of interest.
Beads-on-String Structure of the Electrostatic Complex of DNA with a High-Generation PAMAM Dendrimer
The electrostatic complexes of polyanionic DNA with cationic dendrimers have been considered as a potential non-viral vector for gene delivery and a model system for understanding DNA-histone interaction. Although it is believed that the gene transfection efficiency may be influenced by the structure of the complex, the supramolecular structure of DNA-dendrimer complexes and its dependence on various system parameters such as dendrimer generation number, charge density, charge ratio, and ionic strength are not well resolved. In this study, we investigate the structure of the complex of DNA with polyamidoamine (PAMAM) dendrimer of generation nine (G9) by means of synchrotron small angle X-ray scattering (SAXS). It is found that DNA is always able to wrap around the dendrimer to yield the beads-on-string structure irrespective of the charge density of the dendrimer. The effect of charge density on the persistence length of the chromatin-like fiber thus formed and the pitch length of the DNA superhelix wrapping around the dendrimer are elucidated from the calculation of the SAXS profiles based on beads-on-string structure models.

Introduction
The electrostatic complexes of polyanionic DNA with various cationic agents, including lipids, macrocations, polyelectrolytes, and amphiphilic block copolymers, have received much attention in recent years due to the effort in developing non-viral vectors for gene therapy. [1] The complexation is driven mainly by the electrostatic attraction between DNA and the cationic species, coupled with the entropic gain from counterion release, and it usually results in significant aggregation of DNA chains, leading to the formation of submicrometer-sized particles. [2,3] Two levels of structure can hence be defined for the complexes: (1) the "colloidal level," characterized by the topological features (e.g., the shape and size) and the surface charge of the particles at the length scale of several hundred nm or above; (2) the "supramolecular level," characterized by the organization of DNA chains and the cationic agent within the particles at the characteristic length scale of several nm. It is believed that the gene transfection efficiency is influenced by the structure of the complex; [4,5] consequently, resolving the structures at both levels, as well as the strategies for tuning them by various parameters such as charge ratio, ionic strength, pH, and temperature, has been regarded as an important fundamental task for the realization of an effective non-viral gene vector. The present study concerns the supramolecular structure of the complexes of DNA with cationic dendrimer (called "dendriplexes") in pure water. Dendrimer is a class of hyperbranched macromolecule composed of layers of monomer units radiating from a central core. [6] Each complete grafting cycle is called a "generation" (denoted by Gn, with n being the generation number). The dendrimers possessing amine groups at the surface and/or in the interior can be protonated to a controlled level under an acidic aqueous environment. The macrocations thus formed have been considered as carriers for macromolecular drug and gene delivery. [7] Based on the above discussion, it is suggested that the complexes of DNA with high-generation dendrimers (e.g., G7~G9) may exhibit the beads-on-string structure, while those with low-generation dendrimers (e.g.
G2 and G3) may show the ordered columnar mesophase. The more complicated systems will hence be the complexes with dendrimers of intermediate generations (e.g., G4~G6). In this case, neither the DNA bending energy nor the electrostatic interaction may completely dominate the structure formation; therefore, the dendriplexes may exhibit interesting structure transformations governed by the interplay between these two factors, prescribed by the charge density of the dendrimer, the charge ratio, salt concentration, etc. Moreover, some structures intermediate between the columnar mesophase and the beads-on-string structure may be formed. Using synchrotron small angle X-ray scattering (SAXS), we have shown recently that, depending on the charge density of the dendrimer prescribed by its degree of protonation (dp), the complexes of DNA with PAMAM G4 dendrimer exhibited three distinct nanostructures characterized by different degrees of DNA bending. [17] At dp < 0.3 the dendriplex displayed a square columnar phase, while the beads-on-string structure, with DNA wrapping tightly around each dendrimer by ca. 1.4 turns, was formed at dp ≥ 0.6. An intermediate structure called "hexagonally-packed DNA superhelices" was identified at 0.3 ≤ dp ≤ 0.5, where the DNA chains organized in a hexagonal lattice twisted moderately to enhance the charge matching with the dendrimer. The observed structural transition with respect to the increase of dendrimer dp was in accord with the increasing weighting of electrostatic attraction over DNA bending energy. In this paper, we investigate the structure of the complex of DNA with PAMAM G9 dendrimer to examine whether DNA can always wrap around this high-generation dendrimer at various charge densities due to the lower DNA bending energy cost. Some previous attempts have been made to reveal the wrapping of DNA around dendrimers. For example, Ottaviani et al. used electron paramagnetic resonance (EPR) spectroscopy to study the interactions between DNA and nitroxide-labeled PAMAM dendrimers. [18,19] The DNA chain was found to wrap around the G7 dendrimer. Similarly, Chen et al. investigated the binding of dendrimers to ctDNA by fluorescence titration of ethidium bromide (EB) with a premixed solution of DNA and various amounts of dendrimers. [20] The DNA complexation with PAMAM G7 dendrimer was found to be analogous to the DNA-histone interaction, as DNA could wrap around the dendrimer. These spectroscopic studies could reveal the local interaction of DNA with dendrimer; however, knowledge of the "beads-on-string" structure at a larger length scale remains largely lacking. Previous AFM studies have observed the globular particles formed by DNA-dendrimer complexes, indicating that the DNA chain adsorbed on the surface of the dendrimer could be bent due to strong electrostatic interaction. [21] Although the detailed internal structure of these particles was not observed by AFM, Ritort et al. used force-extension curves (FECs) to show unfolding and refolding forces as well as compaction ratios similar to those of chromatin fibers. [22] In this study, synchrotron SAXS is employed to gain insight into structural information ranging from the internal structure of the DNA-wrapped dendrimer particles to the global organization of these "nucleosome-like" particles. We will demonstrate that the DNA chain is indeed able to wrap around the G9 dendrimer tightly even for dendrimers with extremely low charge density.
It will also be shown that the global organization of the resultant nucleosome-like particles, or the conformation of the "chromatin-like" fibers, is influenced by the charge density of the dendrimer.

The PAMAM G9 dendrimer was obtained from DNT as a methanol solution. After thorough drying, the solid was weighed and then redissolved in distilled water to produce a stock solution of 0.1% (w/w). The solutions were stored at 4 °C until use.

Complex Preparation. To complex with the polyanionic DNA, the amine groups in the PAMAM dendrimer were first protonated by adding a prescribed amount of 0.1 N HCl solution. The primary amine groups at the outer surface of the dendrimer tended to be protonated first because their basicity (pKa 9.0) is larger than that of the interior tertiary amines (pKa 5.8). [23] Therefore, PAMAM G9 dendrimers with different degrees of protonation (dp) were prepared by controlling the pH of the solution. The protonated dendrimer solution was then mixed with the aqueous solution containing a prescribed amount of DNA to obtain the complex. The concentration of the DNA aqueous solution was 2 mg/ml. The complexation was usually manifested by visually observable precipitation.

Small Angle X-ray Scattering (SAXS) Measurements. The supramolecular structures of the complexes in pure water were probed by SAXS at room temperature. The aqueous suspensions of the complexes were directly introduced into the sample cell comprising two Ultralene windows. The SAXS experiments were performed at the Endstation BL23A1 of the National Synchrotron Radiation Research Center (NSRRC), Taiwan. The energy of the X-ray source and the sample-to-detector distance were 14 keV and 2259 mm, respectively. The scattering signals were collected by a MarCCD detector of 512x512 pixel resolution. For the in-house experiment, the wavelength of the X-ray source and the sample-to-detector distance were 0.154 nm (Cu Kα) and 65 cm (for the low-q configuration) or 23 cm (for the high-q configuration), respectively. The scattering intensity profile was output as the plot of the scattering intensity (I) vs. the scattering vector, q = (4π/λ)sin(θ/2) (θ = scattering angle), after corrections for sample transmission, empty cell transmission, empty cell scattering, and the detector sensitivity.

Results and discussion
The charge density of the PAMAM G9 dendrimer is prescribed by its dp value, which stands for the number fraction of protonated amine groups in the dendrimer. The nominal N/P ratio of the dendriplex, prescribed by the feed molar ratio of the amine groups (irrespective of whether they are protonated) to the phosphate groups of DNA, is fixed at 6/1. Figure 1 displays the SAXS profiles of the dendriplexes with three dp values for the G9 dendrimer. The scattering patterns are very different from those associated with the ordered columnar phases exhibited by the dendriplexes with lower-generation dendrimers, [22] showing that DNA in the complexes does not organize to form columnar mesophases. For dp = 0.02 and 0.1, the SAXS pattern is characterized by a shoulder (marked by "qm") near 0.05 Å⁻¹, a small hump (marked by "i=1") at ca. 0.08 Å⁻¹, and a large broad shoulder/peak near 0.14~0.15 Å⁻¹. When the dp is increased to 0.5, the SAXS profile displays a sharper primary peak and a small shoulder (marked by "i=1") appearing like a form factor peak. An additional peak (marked by "qp") is identified at 0.24 Å⁻¹.
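For orientation, the scattering-vector definition quoted in the Measurements subsection converts these peak positions into beam angles and real-space distances. As a worked check using the 14 keV synchrotron beam (hc ≈ 12.398 keV·Å; the angle value below is our own arithmetic, not a number reported by the authors):

```latex
\lambda = \frac{hc}{E} \approx \frac{12.398\ \text{keV·Å}}{14\ \text{keV}} \approx 0.886\ \text{Å},
\qquad
\theta = 2\arcsin\!\left(\frac{q_m\lambda}{4\pi}\right)
       \approx 2\arcsin\!\left(\frac{0.05\times 0.886}{4\pi}\right) \approx 0.40^{\circ}
```

By the same token, the Bragg relation d = 2π/q translates the qm ≈ 0.05 Å⁻¹ shoulder into a spacing of about 126 Å ≈ 12.6 nm and the 0.24 Å⁻¹ peak into about 26 Å ≈ 2.6 nm, the two real-space distances analyzed below.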
Because the ordered columnar mesophase is not formed in DNA/G9 complexes, we resort to another possible structure, namely, the "beads-on-string" structure. A G9 dendrimer molecule (R = 59 Å) is much larger than a lower-generation dendrimer molecule, such as G2 (R = 12 Å), thereby rendering a much higher charge density in the protonated G9 dendrimer (cf. 2048 positive charges on G9 vs. 16 positive charges on G2 for dp = 0.5). The strong electrostatic attraction between DNA and G9 dendrimer may induce the DNA chain to wrap tightly around the dendrimer molecule for effective charge matching. The energy cost in bending the DNA chain to wrap around a G9 dendrimer is also lower, since the large molecular size reduces the surface curvature of the dendrimer. It is known that the subunit structure of chromatin was probed by small angle neutron scattering (SANS) by Baldwin et al. [24] According to the authors, the SANS pattern collected at 10% D2O approximates the SAXS pattern. In this case, the scattering profile shows a low-q peak and two additional high-q peaks. The low-q peak, with the equivalent Bragg spacing (d) of ca. 10.5 nm, was attributed to the interparticle distance between the nucleosome particles constituting the chromatin fiber. The two high-q peaks were considered to stem from the DNA component, where the peak corresponding to d = 5.5 nm was ascribed to the pitch of DNA wrapping around the histone protein. The observed SAXS patterns of the dp 0.02 and 0.1 dendriplexes studied here are indeed quite similar to that SANS profile of chromatin, which strongly suggests that the DNA chain wraps around the G9 dendrimer to yield the beads-on-string structure (as schematically illustrated in Figure 2) even at a dp as low as 0.02 (corresponding to 82 positive charges per dendrimer macrocation). Since the DNA used here has 9200 base pairs, with a fully extended length of ca. 3 μm, each DNA strand is able to wrap around a large number of dendrimer molecules. Although the scattering result implies the formation of the beads-on-string structure irrespective of dp, the fact that the SAXS profile depends on dp indicates that the internal structure of the chromatin-like fiber varies with dp. Following the work of Baldwin et al., [24] we consider the primary peak to stem from the spatial correlation of the nucleosome-like particles. The interparticle distance calculated from the peak position (qm) via d = 2π/qm is 12.6 nm. Considering that the diameters of DNA and dendrimer are 2.0 nm and 11.4 nm, respectively, the interparticle distance of closely packed nucleosome-like particles should be d = 11.4 + 2 × 2.0 = 15.4 nm. The observed d is however smaller than this value. We postulate that the primary scattering peaks in the SAXS profiles are instead associated with the correlation of the dendrimer molecules along the fiber contour (i.e., the z-axis; see Figure 2). The dendrimer molecules are closely spaced along the fiber axis, as the interparticle distance is only slightly larger than the molecular diameter.

Figure 2. Schematic illustration of the chromatin-like beads-on-string structure; P and d correspond to the pitch length and the axial interparticle distance of the nucleosome-like particles, respectively.

The chromatin-like fiber formed by the complex should possess a certain persistence length. The fact that the primary peak is broad and weak in intensity at dp < 0.5 signals that the axial correlation is limited and hence the fiber has a relatively small persistence length (i.e., it is more flexible).
The primary peak becomes sharper and more intense when dp is increased to 0.5, implying that the complex fiber becomes stiff with a large persistence length. In this case, the charge density of the dendrimer is very high (with 2048 charges on the surface), and a significant amount of positive charge remains unmatched even if DNA wraps around the dendrimer tightly. The complex is hence overcharged, and the strong electrostatic repulsion between the overcharged nucleosome-like particles stiffens the fiber significantly. Moreover, the long-range correlation of the pitches due to persistent wrapping of DNA along the fiber axis leads to a relatively sharp and clear pitch peak located at ca. 0.24 Å⁻¹. The pitch length calculated from the peak position according to P = 2π/qP is 2.6 nm. This value is slightly larger than the diameter of DNA, showing that the DNA segments are closely spaced along the fiber contour to effectively match the positive charges on the dendrimer at the cost of bending energy. The formation of the beads-on-string structure by DNA-G9 dendrimer complexes is formally verified by comparing the observed SAXS profiles with the calculated form factor of a chromatin-like fiber. We construct a chromatin-like rod formed by a DNA chain wrapping around a number of dendrimer macrocations placed along a fiber axis (i.e., the z axis) with the axial interparticle distance of d (Figure 2). Each dendrimer is approximated by a sphere with internal monomer density fluctuations, [25] and the DNA superhelix is approximated by a uniform helical cylinder with a prescribed pitch length P and pitch angle α, as shown in Figure 3. The radius of the superhelix (Rh), given by Rh = P/(2π tanα), is defined as the radial distance between the centerline and the central trace of the helix (Rh = 0 for completely straightened DNA; see Figure 3). Therefore, the helical trace of the cylinder can be calculated from the following equations: [26]

x(z) = Rh cos(2πz/P + φ), y(z) = Rh sin(2πz/P + φ),

where φ is the phase angle that prescribes the direction of the groove of the superhelix. The regular pitch of a helix can give rise to a scattering peak located at qp = 2π/lP = 2π/(P cosα), where lP is the projection of P onto the normal of the helical segment (see Figure 3). It can be shown that qp ≈ 2π/lP as long as 2πRh is significantly larger than P. It is noted that the wrapping of DNA around the dendrimer is assumed to be tight; therefore, the value of Rh may vary with z. After constructing the chromatin-like fiber with prescribed values of P and d, we divide the system into numerous volume elements (each with a size of 8 Å) and compute the partial structure factors via the Debye formula,

S_nm(q) = Σi Σj sin(q|ri − rj|)/(q|ri − rj|),

where n and m stand for either DNA (D) or dendrimer (d) and |ri − rj| is the distance between volume elements i and j. Sdd(q) is further corrected by adding an additional component arising from internal monomer density fluctuations. [25]

Figure 4. Comparison between the observed SAXS profile and the calculated SAXS pattern for the dp/0.5 dendriplex. The SAXS profile is calculated assuming a chromatin-like rod composed of 10 nucleosome-like particles placed in sequence with P = 2.6 nm and d = 14 nm. The partial structure factors associated with DNA-DNA and dendrimer-dendrimer correlations, i.e., SDD(q) and Sdd(q), are also displayed. The figure on the right shows the actual picture of the chromatin-like segment generated for calculating the SAXS curves.
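The volume-element computation just described is, at its core, a Debye sum over pairs of scattering elements. Below is a minimal sketch of that procedure; the continuous single superhelix (rather than per-dendrimer wrapping), the point-like volume elements, the omission of the internal-fluctuation correction, and the specific numbers are simplifying assumptions for illustration (dimensions loosely taken from the text: d = 14 nm, P = 2.6 nm, helix radius ≈ dendrimer radius plus DNA radius ≈ 6.7 nm). It is not the authors' implementation.

```python
# Minimal Debye-sum sketch for the partial structure factors of a
# beads-on-string model. Illustrative only -- not the authors' code.
import numpy as np

def debye_partial(q, r_n, r_m):
    """S_nm(q) = sum_i sum_j sin(q r_ij)/(q r_ij) over volume elements
    of components n and m (the coincident-point terms contribute 1)."""
    r_ij = np.linalg.norm(r_n[:, None, :] - r_m[None, :, :], axis=-1)
    x = q * r_ij
    return np.where(x > 1e-12, np.sin(x) / np.maximum(x, 1e-12), 1.0).sum()

# Toy geometry (Angstrom): 10 dendrimer centers on the fiber (z) axis,
# DNA approximated by a continuous superhelix around that axis.
d_axial, P, R_h = 140.0, 26.0, 67.0      # axial spacing, pitch, helix radius
dendrimers = np.array([[0.0, 0.0, k * d_axial] for k in range(10)])
z = np.linspace(0.0, 10 * d_axial, 1200)
dna = np.column_stack([R_h * np.cos(2 * np.pi * z / P),
                       R_h * np.sin(2 * np.pi * z / P), z])

q_values = np.linspace(0.01, 0.30, 60)   # Angstrom^-1
S_dd = [debye_partial(q, dendrimers, dendrimers) for q in q_values]
S_DD = [debye_partial(q, dna, dna) for q in q_values]
S_Dd = [debye_partial(q, dna, dendrimers) for q in q_values]
```

In such a toy calculation, the axial-correlation (primary) peak emerges from S_dd while the pitch peak near 0.24 Å⁻¹ emerges from S_DD, consistent with the decomposition shown in Figure 4.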
The SAXS intensity is finally calculated from the three partial structure factors by

I(q) = ΔρD² SDD(q) + 2ΔρDΔρd SDd(q) + Δρd² Sdd(q),

where ΔρD (= 5.7 × 10¹⁰ cm⁻²) and Δρd (= 2 × 10¹⁰ cm⁻²) are the scattering length density contrasts of DNA and PAMAM dendrimer relative to water, respectively. It is noted that at present we have not been able to obtain scattering profiles that quantitatively match the experimental results, as there are a large number of parameters that need to be considered, such as the distribution of SLD within the dendrimer molecule, the possible size distribution of the dendrimer arising from non-uniform protonation, the persistence length of the chromatin-like fiber, the wrapping mode of DNA, etc. Here we merely seek the structures that give rise to scattering profiles resembling the experimental results, from which we can identify the salient features of the beads-on-string structures formed at different degrees of protonation. Figure 4 compares the experimental SAXS profile of the dendriplex with dp = 0.5 with that calculated for a chromatin-like rod composed of 10 nucleosome-like particles placed in sequence with P = 2.6 nm and d = 14 nm. The partial structure factors associated with the DNA-DNA and dendrimer-dendrimer correlations, i.e., SDD(q) and Sdd(q), are also displayed in the figure. From the calculated SAXS profile, it can be seen that the scattering pattern at q < 0.12 Å⁻¹ is dominated by Sdd(q), where the primary peak is associated with the interparticle distance between the dendrimers (or the nucleosome-like particles) along the z axis and the small hump near 0.08 Å⁻¹ is the first form factor maximum of the dendrimer. However, SDD(q) dominates the scattering pattern at q > 0.12 Å⁻¹. In this case, a clear pitch peak (qP) becomes visible at 0.24 Å⁻¹. The intensity and breadth of this pitch peak depend on the number of nucleosome-like particles in the fiber assumed for the calculation. The peak drops in intensity and broadens with a decreasing number of nucleosome-like particles. Therefore, a chromatin-like fiber with a larger persistence length should display a clearer pitch peak along with a sharper primary peak in the SAXS profile. The close resemblance of the calculated SAXS profile to the experimentally observed scattering pattern verifies the formation of the beads-on-string structure by the dp/0.5 dendriplex. The calculation also confirms that the primary peak is associated with the axial correlation of the dendrimer (or nucleosome-like particle) in the chromatin-like fiber and that the peak marked by "i = 1" corresponds to the first form factor maximum of the dendrimer. The peak observed at 0.24 Å⁻¹ corresponds to the pitch peak found in the calculated profile; therefore, the clear pitch peak in the observed SAXS profile attests to a rather large persistence length of the chromatin-like fiber, and the pitch length of the DNA wrapping is 2.6 nm. Our present calculation is however not rigorous enough to provide an accurate estimate of the persistence length. For dendriplexes with lower dp, no clear pitch peak is identified, and the corresponding SAXS profiles in the high-q region display a broad shoulder.

Figure 5. Comparison between the experimentally observed SAXS profile and the calculated SAXS pattern for the dp/0.2 dendriplex. The SAXS profile is calculated assuming a chromatin-like rod composed of four nucleosome-like particles with a Gaussian distribution of the pitch length (mean value = 2.8 nm and variance = 0.3 nm).
Moreover, the primary scattering peak is also broad and relatively weak. These features attest that the persistence length of the chromatin-like fiber is short. We found that the assumption of a monodisperse pitch length in the nucleosome-like particles cannot produce a SAXS profile showing a broad high-q shoulder. On the other hand, the assumption of a polydisperse pitch length yields a SAXS pattern closely resembling the experimentally observed profiles. Figure 5 displays the experimental SAXS profile of the complex with dp = 0.2 and the SAXS profile calculated under the assumption of a Gaussian distribution of the pitch length with a mean value of 2.8 nm and a variance of 0.3 nm. In this case, the intensity at q > 0.05 Å⁻¹ is obtained by summing the form factor profiles of the nucleosome-like particles with different pitch lengths according to the weighting prescribed by the Gaussian distribution function. The low-q intensity (q < 0.05 Å⁻¹) is calculated for the chromatin-like rod composed of four nucleosome-like particles placed in sequence with an interparticle distance of 11.8 nm. It can be seen that the agreement is fairly good, thereby showing that the chromatin-like fiber formed by the dendriplex with lower dp is more flexible (compared to that associated with the dp/0.5 dendriplex) and that there exists a relatively broad distribution of the pitch length of the DNA superhelix wrapping around the dendrimer.

Conclusion
We have revealed that DNA can wrap around the PAMAM G9 dendrimer to yield the chromatin-like structure irrespective of the charge density of the dendrimer. The wrapping mode of the nucleosome-like particle and the global conformation of the chromatin-like fiber, however, depend on the dendrimer charge density. At low dp (< 0.5), DNA can still wrap around the dendrimer tightly, with a distribution of pitch lengths; the chromatin-like fiber thus formed has a smaller persistence length. At dp = 0.5, the DNA chain wraps around the dendrimer regularly and tightly with a pitch length of 2.6 nm. The resultant chromatin-like fiber is highly stiff due to the strong electrostatic repulsion between the nucleosome-like particles.

Acknowledgement
We acknowledge the financial support of the National Science Council of the Republic of China under grant No. NSC 97-2923-E-007-001.
Systemic immune-inflammation index as a novel predictor of atrial fibrillation after off-pump coronary artery bypass grafting
SUMMARY
OBJECTIVE: This study aims to examine the predictive role of the systemic immune-inflammation index on postoperative new-onset atrial fibrillation in patients undergoing off-pump coronary artery bypass grafting.
METHODS: A total of 722 patients undergoing elective off-pump coronary artery bypass grafting between January 2017 and September 2021 were included in this study and divided into two groups: the atrial fibrillation group (n=172) and the non-atrial fibrillation group (n=550). Both groups were compared in terms of patients' baseline clinical features, operative and postoperative variables, and preoperative hematological indices derived from the complete blood count analysis. Multivariate logistic regression and receiver-operating characteristic curve analyses were performed to detect the independent predictors of postoperative new-onset atrial fibrillation.
RESULTS: The median age and length of hospital stay in the atrial fibrillation group were significantly higher than those in the non-atrial fibrillation group. The median values of white blood cell, platelet, neutrophil, neutrophil/lymphocyte ratio, platelet/lymphocyte ratio, and systemic immune-inflammation index in the atrial fibrillation group were significantly greater than those in the non-atrial fibrillation group. Logistic regression analysis demonstrated that age, platelet, platelet/lymphocyte ratio, and systemic immune-inflammation index were independent predictors of postoperative new-onset atrial fibrillation. Receiver-operating characteristic curve analysis revealed that a systemic immune-inflammation index of 706.7×10³/mm³ constituted the cut-off value to predict the occurrence of new-onset atrial fibrillation with 86.6% sensitivity and 29.3% specificity.
CONCLUSION: Our study revealed for the first time that the systemic immune-inflammation index predicted new-onset atrial fibrillation after off-pump coronary artery bypass grafting.

INTRODUCTION
Atrial fibrillation (AF) is the most frequent dysrhythmia after coronary artery bypass grafting (CABG), and its incidence has been reported as 10-40% in the literature 1. New-onset AF following CABG has been demonstrated to be associated with serious morbidity, mortality, and an increased financial burden 2. To reduce this increased morbidity, mortality, and financial burden, it is important to identify the patients at higher risk of postoperative AF and thereby take the necessary precautions so that AF does not occur. Therefore, simply available, inexpensive, and reliable biomarkers that can be utilized in daily clinical practice are required for these purposes. Many factors, such as surgical trauma, intraoperative cardiopulmonary bypass usage and cardioplegia administration, perioperative discontinuation or inappropriate use of beta-blocker agents, hypoxia, and electrolyte imbalance, are held responsible for the pathogenesis of new-onset AF after CABG. Nevertheless, less information is available about the electropathophysiological molecular mechanisms and factors leading to postoperative AF. The inflammatory process is also known to be one of the contributing pathophysiological factors in AF occurrence 3,4. Various inflammatory biomarkers such as C-reactive protein, interleukins, and tumor necrosis factor-α have been extensively studied in relation to AF.
In addition to these markers, hematological indices obtained from the complete blood count (CBC) test, such as white blood cell (WBC) 5, red cell distribution width (RDW) 6, mean platelet volume (MPV) 6, neutrophil/lymphocyte ratio (NLR) 7, and platelet/lymphocyte ratio (PLR) 8, have been investigated as potential prognostic and predictive biomarkers for postoperative new-onset AF. The systemic immune-inflammation index (SII) is a novel biomarker based on the platelet count and NLR (SII = platelet count × NLR) that reflects patients' inflammatory and immune statuses simultaneously. It was shown that high SII levels were related to poor outcomes and that the index was an important prognostic marker in various types of cancers 9. SII was also reported to independently predict major adverse cardiovascular events, such as nonfatal myocardial infarction (MI), nonfatal stroke, heart failure, and cardiac death, in patients with coronary artery disease (CAD) undergoing percutaneous coronary intervention 10. Moreover, a recent study demonstrated that high SII levels were associated with poor outcomes after elective off-pump CABG 11. However, to the best of our knowledge, a relationship between SII and new-onset AF in patients undergoing off-pump CABG has never been examined. Thus, we designed this study to examine whether a potential relationship between SII and new-onset AF was present in patients undergoing elective isolated off-pump CABG. Additionally, we also investigated other possible predictive factors and perioperative outcomes of new-onset AF after off-pump CABG.

Study population and design
The study was started once approval was obtained from the local ethics committee. This was a retrospective observational cohort study conducted on patients undergoing elective isolated off-pump CABG at a tertiary referral center in Turkey between January 2017 and September 2021. The study population consisted of a total of 722 patients, and the patients were divided into two groups, namely, the AF group (n=172) and the non-AF group (n=550), according to the occurrence of new-onset AF during the postoperative period. Patients' preoperative baseline clinical characteristics, comorbid conditions, laboratory parameters obtained from the CBC analysis, intraoperative data, postoperative outcomes, and complications were screened through the computerized medical database of the hospital, recorded, and compared between the groups. Thus, the predictive risk factors and perioperative results of new-onset AF after off-pump CABG were determined. Patients with a history of previous AF or other cardiac dysrhythmias, those undergoing emergency, reoperative, or on-pump CABG, and those undergoing concomitant cardiac operations such as mitral valve surgery were excluded from the study. Over approximately the past 7 years, we have preferred and routinely performed the "off-pump technique" for patients undergoing CABG to avoid the detrimental effects of cardiopulmonary bypass. All patients undergoing off-pump CABG were informed about the operation and the perioperative process, and their verbal and written consents were obtained before the operation. They were operated on via a standard median sternotomy under general anesthesia. The internal thoracic artery and vena saphena magna were used as the primary bypass grafts in most of the patients.
Postoperative monitoring
During the first 48 postoperative hours in the ICU, the electrocardiogram (ECG), invasive central venous and arterial pressures, and oxygen saturation of the patients were continuously monitored, and arterial blood gas analyses were performed regularly every 2-4 h. The cardiac rhythms of the patients were assessed by obtaining standard 12-lead ECGs every day for the remaining days until discharge. In addition, heart rate and rhythm were assessed by palpation of the radial pulse at least once every 4 h. An additional 12-lead ECG was obtained and analyzed in case of tachycardia, palpitation, or suspicion of an irregular cardiac rhythm. New-onset AF was diagnosed by the existence of an irregular RR interval and the absence of P waves on the ECG.

Laboratory analysis
Blood samples were taken from a peripheral vein after a 6-8-h fasting period. The samples were placed into sterile tubes containing a standard amount of anticoagulant and were quickly delivered to the laboratory for analysis. To determine the preoperative values of the parameters of the CBC test, the samples were studied in an automatic CBC analysis device (Beckman Coulter Inc., CA, USA). The studied and derived CBC parameters for this study were hemoglobin (HGB), hematocrit (HCT), mean corpuscular volume (MCV), platelet (PLT), neutrophil (NEU), lymphocyte (LYM), platelet distribution width (PDW), plateletcrit (PCT), WBC, RDW, MPV, NLR, PLR, and SII. The SII was calculated using the formula "platelet count × neutrophil/lymphocyte ratio."

Statistical analysis
The Shapiro-Wilk test was used to evaluate the normality of variables. Continuous variables were presented as median (min-max) values, while categorical variables were expressed as numbers (percentages). Continuous variables were analyzed using the Mann-Whitney U-test, while categorical variables were analyzed using the chi-square test. Multiple explanatory variable logistic regression analysis was performed to determine the risk factors/covariates for AF. Receiver-operating characteristic (ROC) curve analysis was performed to determine the cut-off values of the variables selected via logistic regression for AF from the area under the curve (AUC). The ROC curve analysis was performed using the "OptimalCutpoints" library of the R software 12. The R software was used to perform the statistical analyses (R Core Team, 2021) 13. A p-value <0.05 was regarded as statistically significant for all analyses.

RESULTS
Patients with new-onset AF were significantly older than those without AF, and the median ages were 69 (38-85) and 62 (35-87) years for the AF group and the non-AF group, respectively (p=0.001). There were no significant differences with regard to the other baseline clinical characteristics and comorbidities between the groups. When the laboratory parameters were compared between the groups, it was detected that the median values of WBC, PLT, NEU, NLR, PLR, and SII in the AF group were significantly greater than those in the non-AF group (Table 1). When the operative and postoperative data were considered, we found that the median length of hospital stay in the AF group was significantly longer than that in the non-AF group (6 [5-24] days for the AF group vs. 5 days for the non-AF group; p=0.001). In terms of the other operative and postoperative variables, the groups were similar and no significant differences were detected (Table 2).
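For concreteness, the derived indices defined in the Laboratory analysis subsection are simple ratios of the CBC counts; a minimal sketch is shown below. The example values are hypothetical (counts in 10³/mm³), not patient data from this study.

```python
# Derived CBC indices used in this study (hypothetical example values).

def nlr(neu: float, lym: float) -> float:
    """Neutrophil/lymphocyte ratio."""
    return neu / lym

def plr(plt_count: float, lym: float) -> float:
    """Platelet/lymphocyte ratio."""
    return plt_count / lym

def sii(plt_count: float, neu: float, lym: float) -> float:
    """Systemic immune-inflammation index = platelet count x NLR."""
    return plt_count * nlr(neu, lym)

plt_count, neu, lym = 250.0, 6.0, 2.0     # hypothetical patient, x10^3/mm^3
print(f"NLR = {nlr(neu, lym):.1f}")        # 3.0
print(f"PLR = {plr(plt_count, lym):.1f}")  # 125.0
print(f"SII = {sii(plt_count, neu, lym):.1f} x10^3/mm^3")  # 750.0
```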
Following the determination of the potentially significant risk factors, a multivariate logistic regression analysis was performed to assess the relationship between the occurrence of new-onset AF and the independent predictors by adjusting for significant variables. According to the multivariate logistic regression analysis, age, PLT, PLR, and SII were detected to be independent predictors of new-onset AF. The ROC curve analysis demonstrated that an age of 66 years constituted the cut-off value for predicting the occurrence of new-onset AF with 57.5% sensitivity and 64.7% specificity, a PLT of 177×10³/mm³ constituted the cut-off value with 88.9% sensitivity and 21.1% specificity, a PLR of 14.52 constituted the cut-off value with 61.0% sensitivity and 51.1% specificity, and an SII of 706.7×10³/mm³ constituted the cut-off value with 86.6% sensitivity and 29.3% specificity (Figure 1).

DISCUSSION
Our study revealed that patients in the AF group were significantly older and had a longer length of hospital stay compared with those in the non-AF group. Although, among the hematological indices, the WBC, PLT, NEU, NLR, PLR, and SII levels in the AF group were significantly greater than those in the non-AF group, according to the multivariate analysis only PLT, PLR, and SII retained significance and were considered the predictive indices associated with postoperative new-onset AF. In our opinion, the most intriguing finding of the study was that SII independently predicted new-onset AF after off-pump CABG for the first time in the existing literature. Determining the predictive risk factors of new-onset AF following cardiac surgery is critical because it allows for the development of preventive measures and necessary prompt management. The use of several medications, such as β-blockers, statins, and steroids, for prophylaxis against postoperative AF should be considered in the preoperative period. Although numerous potential risk factors for postoperative new-onset AF have been identified in different studies, "advanced age" is the best-known predictive variable, having been identified in practically every study found in the literature 1-4. Our study confirmed that advanced age was a substantial and independent risk factor for postoperative new-onset AF, as previously reported in the literature. Studies investigating hematological indices obtained from a simple CBC test for the prediction of new-onset AF after cardiac surgery have increased recently, and these hematological indices have become the focus of attention on this topic. The CBC test is inexpensive, easily and quickly accessible in many centers, and includes many different reliable indices. Among these indices, WBC, RDW, PLT, and MPV, as well as NLR and PLR, are the most studied and identified predictive variables for new-onset AF 14. Although various indices derived from the CBC test have been reported to predict new-onset AF after CABG in different studies, the results are often inconclusive and inconsistent with each other. Nevertheless, a recent systematic review and meta-analysis including a total of 6,098 patients from 22 studies that fit the eligibility criteria showed that preoperative PLT, MPV, WBC, NLR, and RDW were hematological indices predictively associated with new-onset AF after cardiac surgery 15. In our study, we detected PLT as well as PLR as predictive hematological indices for new-onset AF following off-pump CABG.
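The cut-off values reported above come from ROC analysis. The sketch below illustrates this kind of cut-off selection on synthetic data using the Youden index; the study itself used the OptimalCutpoints package in R, whose optimality criterion may differ, so this is an illustration of the workflow rather than a reproduction of the analysis.

```python
# ROC-based cut-off selection on synthetic SII values (illustrative only;
# the study used the "OptimalCutpoints" package in R).
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(0)
sii_non_af = rng.lognormal(mean=6.4, sigma=0.5, size=550)  # non-AF group
sii_af = rng.lognormal(mean=6.7, sigma=0.5, size=172)      # AF group
y_true = np.r_[np.zeros(550), np.ones(172)]
scores = np.r_[sii_non_af, sii_af]

fpr, tpr, thresholds = roc_curve(y_true, scores)
best = np.argmax(tpr - fpr)   # Youden's J = sensitivity + specificity - 1
print(f"AUC = {roc_auc_score(y_true, scores):.3f}")
print(f"cut-off = {thresholds[best]:.1f}, "
      f"sensitivity = {tpr[best]:.2f}, specificity = {1 - fpr[best]:.2f}")
```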
SII is a novel hematological marker derived from the CBC test by bringing together three inflammatory peripheral cell counts (platelet, neutrophil, and lymphocyte), which reflects patients' inflammatory and immune statuses simultaneously. SII has been widely studied in patients with cancer, and it has emerged as a significant hematological prognostic indicator in many types of cancer 9. SII has also been examined in many different cardiovascular diseases, such as CAD 10, severe calcific aortic stenosis 16, infective endocarditis 17, and pulmonary embolism 18. Additionally, in a large-scale cohort study on 13,929 middle-aged and older Chinese adults who were free of cardiovascular disease and cancer, the relationship of SII with incident cardiovascular diseases, including stroke and CAD, was examined, and a high SII level was detected to be significantly associated with cardiovascular diseases 19. Moreover, Bağcı et al. 20 recently investigated the predictive capacity of SII for the detection of new-onset AF in patients with ST-elevation MI and showed that SII predicted new-onset AF following ST-elevation MI. Furthermore, the predictive role of SII on postoperative outcomes in patients undergoing cardiac surgery has also been recently studied. Yoon et al. 21 assessed the prognostic implications of preoperative SII in 213 patients undergoing isolated tricuspid valve surgery and demonstrated that high SII levels were independently associated with major early-term perioperative complications. Dey et al. 11 conducted a retrospective single-center risk-prediction study on 1,007 patients undergoing elective off-pump CABG and revealed that an SII cut-off value of 878.06×10³/mm³ predicted poor outcomes, such as major adverse cardiovascular events, renal failure, sepsis, and death, with 97.6% sensitivity and 91% specificity. In this study, we revealed that an SII cut-off value of 706.7×10³/mm³ predicted postoperative new-onset AF with 86.6% sensitivity and 29.3% specificity. Our study had several limitations. The major limitations were its single-center design and retrospective nature. Another important limitation was the lack of a correlation analysis with other inflammatory markers such as C-reactive protein and interleukins. Additionally, heart rhythm monitoring could not be performed on a continuous basis following the ICU stay. Although in the days after the ICU stay heart rhythm was routinely monitored with standard ECGs at least twice a day, and an additional ECG was obtained in all cases when any rhythm abnormality was suspected, there was a possibility of unnoticed short and transient attacks of asymptomatic AF.

CONCLUSION
Our study demonstrated that age, PLT, PLR, and SII were independent predictive risk factors of new-onset AF following off-pump CABG. Among these factors, SII was detected to predict postoperative new-onset AF for the first time in the literature, and to the best of our knowledge, our study is the first clinical research to examine the predictive role of SII on new-onset AF in patients undergoing off-pump CABG. Nonetheless, further well-designed studies with larger patient participation are needed to support the results of our study and to obtain more definite scientific information.
Early Season Monitoring of Tarnished Plant Bug, Lygus lineolaris, in Wild Hosts Using Pheromone Traps
Simple Summary
The tarnished plant bug, Lygus lineolaris (Hemiptera: Miridae), is a polyphagous pest and causes severe economic damage to cotton crops. Managing the weedy field edges is important in preventing early-season infestations of L. lineolaris in cotton to prevent damage to the squares and other fruiting structures. Scouting fields for L. lineolaris is time- and labor-intensive, and end-user variability associated with field sampling can lead to inaccuracies. Insect traps that combine visual cues and pheromones are more accurate, sustainable, and economically feasible than traditional insect detection methods. In this study, we investigated the application of red or white sticky cards baited with the female-produced sex pheromone to monitor the overwintering L. lineolaris populations in early spring. Field experiments demonstrated that red sticky cards baited with a pheromone blend containing hexyl butyrate, (E)-2-hexenyl butyrate, and (E)-4-oxo-2-hexenal in a 4:10:7 ratio are highly effective in trapping L. lineolaris adults in early spring before the row crops are planted, and in monitoring their movement into a cotton crop.
Abstract
The tarnished plant bug, Lygus lineolaris (Hemiptera: Miridae), has a wide host range of over 700 plant species, including 130 crops of economic importance. During early spring, managing the field edges with weeds and other wild hosts is important in preventing early-season infestations of L. lineolaris in cotton to prevent damage to the squares and other fruiting structures. Scouting fields for L. lineolaris is time- and labor-intensive, and end-user variability associated with field sampling can lead to inaccuracies. Insect traps that combine visual cues and pheromones are more accurate, sustainable, and economically feasible than traditional insect detection methods. In this study, we investigated the application of red or white sticky cards baited with the female-produced sex pheromone to monitor overwintering L. lineolaris populations in early spring. Field experiments demonstrated that red sticky cards baited with a pheromone blend containing hexyl butyrate, (E)-2-hexenyl butyrate, and (E)-4-oxo-2-hexenal in a 4:10:7 ratio are highly effective in trapping L. lineolaris adults in early spring before the row crops are planted, and in monitoring their movement into a cotton crop. The monitoring of L. lineolaris should help growers to make judicious decisions on insecticide applications to control early pest infestations, thereby reducing economic damage to cotton.
Introduction
Cotton, Gossypium hirsutum L., is one of the world's most widely grown agricultural crops, with a production of 118.5 million bales from 32.8 million hectares in 2022-2023 [1]. The United States (US) is ranked the third largest producer of cotton, with a production of 17.6 million bales, behind India (27 million) and China (27 million) [2]. Several insect pests cause economic damage to cotton across the developmental stages of the crop. The tarnished plant bug, Lygus lineolaris (Palisot de Beauvois) (Hemiptera: Miridae), is considered one of the most economically important pests of cotton, as its feeding can significantly reduce the crop yield due to the abscission of squares, boll malformation, and deformities of the plant terminus [3]. Eradication programs for the boll weevil (Anthonomus grandis Boheman) (Coleoptera: Curculionidae), coupled with the area-wide use of transgenic Bacillus thuringiensis (Bt) cotton against caterpillar pests, have caused L. lineolaris to emerge as a key pest in cotton-growing regions of the US. Management of this pest is heavily dependent on insecticides and requires multiple applications to minimize economic loss [4]. The pest status of this insect has increased significantly in the past two decades, resulting in a multi-fold surge in the number of insecticide applications [5,6]. Lygus lineolaris is a polyphagous pest and is reported to feed on more than 700 plant species in 55 families in North America [7]. The spatial distribution of economically important crops and weedy hosts in an agroecosystem significantly affects this pest's movement and population dynamics early in the season [8]. In early spring, weed hosts are a major refuge for L. lineolaris in the absence of agricultural crops. Once the weedy hosts senesce or are destroyed, the insects migrate into adjacent crop fields and cause damage to the cotton squares [7,9,10]. Decisions on insecticide application for managing L. lineolaris are based primarily on economic thresholds developed by scouting cotton fields during the period of peak cotton susceptibility using visual and beat cloth methods [4,11,12]. Sampling accuracy for L. lineolaris may be impacted by the dispersal distance from the weedy hosts toward the interior of the cotton field [13]. Insect counts can be inflated by migrating insects, and higher adult counts were observed in cotton field plots with larger weed stand borders [13], potentially leading to additional insecticide applications. Monitoring the migratory movement of L. lineolaris between the wild hosts and the cotton crop is complex and is not well studied under field conditions. Studies by Reisig [14] recommended that sampling with a sweep net be performed at locations not less than 15.3 m from the field edges to minimize the sample variation. Insecticide applications based on well-informed scouting practices are critical for the management of this key pest. The prohibitive costs associated with weekly scouting of the weedy field edges and adjacent cotton fields make it difficult for growers to make judicious decisions on insecticide applications. Monitoring tools based on visual cues and pheromones will be very useful in tracking Lygus populations in the wild hosts and weedy field edges early in the season to make informed pest management decisions regarding insecticides. Previous studies have shown that red-colored sticky cards are highly attractive to L.
lineolaris adults compared to blue, white, and yellow sticky cards. The red sticky cards baited with pheromone lures containing hexyl butyrate, (E)-2-hexenyl butyrate, and (E)-4-oxo-2-hexenal in a 4:10:7 ratio, respectively, trapped a significantly higher number of L. lineolaris than those baited with 10:4:2 or 7:10:4 blends or an unbaited control in cotton field experiments [15]. The current study investigated the use of pheromone traps for monitoring L. lineolaris populations in different weed or wild hosts near cotton fields during the early spring season. Red and white sticky cards with or without pheromone lures were used. In three field experiments, we compared the effectiveness of the different traps for trapping L. lineolaris within a field of wild host plants, for monitoring the distribution and outward movement of L. lineolaris from a field of wild host plants during early spring before the crops were planted, and for monitoring the movement of L. lineolaris from wild hosts to cotton. Research findings from this study have field applications in using pheromone traps for the early season monitoring of L. lineolaris in weedy field edges, which may contribute to effective weed and pest management strategies for row crops.

Study Locations and General Field Experiment Procedures
Field experiments were performed at two different field locations during the early spring and summer of 2022: the Alcorn State University demonstration plot in Merigold, MS, and the USDA research farm, Southern Insect Management Research Unit, located in Stoneville, MS. For recording the catches of L. lineolaris on traps, the field-collected sticky cards were stored inside Ziploc bags, and the number of L. lineolaris caught on the sticky cards was counted and recorded later in the laboratory. The male/female ratio of the insects was calculated for all the L. lineolaris trapped on three of the six replications of each sticky card color. The species and sex of the captured L. lineolaris were determined based on the morphological characteristics of the abdomen. Female L. lineolaris adults have a groove that begins at the bottom and rises to the middle of the abdomen, where the ovipositor lies almost hidden in the center. This groove is not present in males, which have a tapered abdomen [16-18]. The catches of L. lineolaris were analyzed via ANOVA followed by Tukey's HSD to test the significance of differences between means using the JMP statistical program (SAS, Cary, NC, USA); a sketch of this workflow is given after the first experiment description below.

Preparation of Lures and Traps
Pheromone lures used in the field experiments were prepared at the Natural Resources Institute, UK, according to George et al. [15]. Briefly, a blend of hexyl butyrate (HB), (E)-2-hexenyl butyrate (E2HB), and (E)-4-oxo-2-hexenal (E4OH) in a 40:100:70 ratio was formulated in sunflower oil with the major component at 10%. An antioxidant, 4-methyl-2,6-di-tert-butylphenol (BHT; 10% of the major component), and a UV screener, Waxoline Black (10% of the major component), were also added. The mixture (100 µL) was formulated on cigarette filters in pipette-tip dispensers (1 mL; Fisher Scientific, Loughborough, UK). The smaller end of the pipette tip was open, and the larger end was sealed with a Teflon septum and crimp seal before wrapping them with duct tape to exclude light. Aluminum foil bags were used to pack the lures, which were sent to the USDA, where they were stored in a refrigerator (4 °C) before use. Red (Pherocon SWD trap; Trécé Inc., Adair, OK, USA) and non-UV white (Great Lakes IPM, Inc., Vestaburg, MI, USA) double-sided sticky cards (25 cm × 11.25 cm) were used in the experiments. The color and the hot melt glue on these sticky cards are effective in the monitoring and trapping of different insect species. The sticky cards were tied to a 30 cm × 2 mm-thick vinyl-coated cable (Lowe's Inc., Mooresville, NC, USA), which was attached to a 104 cm steel-painted metal traditional shepherd's hook (LG Sourcing, Inc., North Wilkesboro, NC, USA) using Gorilla black duct tape (Gorilla Glue Company, Cincinnati, OH, USA). Hanging the sticky cards on the shepherd's hook helped keep them above the plant canopy and prevented contact with the plants. As these traps were deployed in the field for weeks under rainy and windy conditions, this setup allowed free movement and easy visibility and prevented any wind damage to the sticky cards. The pheromone lures were attached horizontally to the center of the card using a plastic wire, with the pipette tip pointing away from the card [15].

Comparison of Different Colored Sticky Cards with Pheromone Lures for Monitoring Lygus lineolaris in Wild Hosts
The first experiment was performed at the Alcorn State University demonstration plot in early spring (7-21 April 2022) to study the attraction of L. lineolaris surviving on different weed or wild hosts to the different colored double-sided sticky cards with or without pheromone lures. A 0.30-hectare field plot with mustard (Brassica juncea) and other weed hosts was used for this experiment. Plants were in the flowering stage, and initial visual sampling showed a good population of L. lineolaris. Treatments included (1) red sticky card; (2) red sticky card + pheromone; (3) white sticky card; and (4) white sticky card + pheromone. Treatments were arranged 12 m apart in the same row and 5 m apart between rows. Traps were arranged in a completely randomized design and were replicated six times. Traps were arranged in a zig-zag pattern between the rows to allow for maximum spacing between the traps. The L. lineolaris that were trapped on the sticky cards were counted 7 days and 14 days after the deployment of the traps.
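As referenced in the general procedures above, the trap-catch comparison (one-way ANOVA followed by Tukey's HSD) can be sketched as follows. The counts below are invented placeholders, and the actual analysis was run in JMP, so this only illustrates the workflow.

```python
# One-way ANOVA + Tukey HSD on trap catches (placeholder counts;
# the study itself used the JMP statistical program).
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

counts = {  # six replications per treatment (invented numbers)
    "red": [12, 9, 15, 11, 10, 13],
    "red+pher": [48, 55, 61, 44, 52, 58],
    "white": [5, 7, 4, 6, 8, 5],
    "white+pher": [11, 14, 9, 13, 12, 10],
}
f_stat, p_val = stats.f_oneway(*counts.values())
print(f"ANOVA: F = {f_stat:.2f}, p = {p_val:.4g}")

values = np.concatenate(list(counts.values()))
groups = np.repeat(list(counts.keys()), [len(v) for v in counts.values()])
print(pairwise_tukeyhsd(values, groups, alpha=0.05))  # pairwise comparisons
```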
Red (Pherocon SWD trap; Trécé Inc., Adair, OK, USA) and non-UV white (Great Lakes IPM, Inc., Vestaburg, MI, USA) double-sided sticky cards (25 cm × 11.25 cm) were used in the experiments. The color and the hot-melt glue on these sticky cards are effective for monitoring and trapping different insect species. Each sticky card was tied to a 30 cm long, 2 mm thick vinyl-coated cable (Lowe's Inc., Mooresville, NC, USA), which was attached to a 104 cm steel-painted traditional shepherd's hook (LG Sourcing, Inc., North Wilkesboro, NC, USA) using Gorilla black duct tape (Gorilla Glue Company, Cincinnati, OH, USA). Hanging the sticky cards from the shepherd's hooks kept them above the plant canopy and prevented contact with the plants. As these traps were deployed in the field for weeks under rainy and windy conditions, this setup allowed free movement and easy visibility and prevented wind damage to the sticky cards. The pheromone lures were attached horizontally to the center of each card using a plastic wire, with the pipette tip pointing away from the card [15].

Comparison of Different Colored Sticky Cards with Pheromone Lures for Monitoring Lygus lineolaris in Wild Hosts

The first experiment was performed at the Alcorn State University demonstration plot in early spring (7-21 April 2022) to study the attraction of L. lineolaris surviving on different weed or wild hosts to the different colored double-sided sticky cards with or without pheromone lures. A 0.30-hectare field plot with mustard (Brassica juncea) and other weed hosts was used for this experiment. The plants were in the flowering stage, and initial visual sampling showed a good population of L. lineolaris. Treatments included (1) red sticky card; (2) red sticky card + pheromone; (3) white sticky card; and (4) white sticky card + pheromone. Treatments were arranged 12 m apart within rows and 5 m apart between rows, in a completely randomized design with six replicates. Traps were arranged in a zig-zag pattern between the rows to allow maximum spacing between traps. The L. lineolaris trapped on the sticky cards were counted 7 and 14 days after trap deployment.

Perimeter Trapping of Lygus lineolaris Using Pheromone Traps

A second experiment was designed to study the dispersal and movement of L. lineolaris adults from wild hosts near a cotton and soybean field in early spring. The experimental field at the USDA research farm consisted of a 1-hectare rectangular plot with different weeds, wild grasses, and flowering plants. This plot was maintained as a pollinator plot throughout the year for the offseason dearth of other pollen and honey resources. During this experiment in early spring (20 April-18 May 2022), no other crops were planted next to this wildflower plot, and it provided wild hosts for feeding and survival early in the season. Treatments included (1) red sticky card; (2) red sticky card + pheromone; (3) white sticky card; and (4) white sticky card + pheromone. Treatments were set 15 m apart along the field borders, covering all four sides of the rectangular field, in a completely randomized design with six replicates. The numbers of L. lineolaris caught on the sticky cards were counted weekly for four weeks. The same pheromone lures were used throughout the experiment, and the sticky cards were replaced weekly.
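Both experiments assigned the four treatments to positions in a completely randomized design with six replicates. A minimal Python sketch of such a randomization follows; the 15 m perimeter spacing mirrors the second experiment, but the positions are illustrative placeholders, not the actual field layout:

# Minimal sketch: randomize 4 trap treatments x 6 replicates over 24 positions
# (completely randomized design). Positions and spacings are illustrative only.
import random

treatments = ["red", "red+pheromone", "white", "white+pheromone"]
labels = [t for t in treatments for _ in range(6)]  # six replicates of each
random.seed(42)                                     # reproducible layout
random.shuffle(labels)

for i, treatment in enumerate(labels):
    print(f"position {i * 15:>3} m along the border: {treatment}")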
Tracking Movement of Lygus lineolaris from Weed Hosts to Cotton Using Pheromone Traps

Multiple studies have reported that overwintering L. lineolaris adults survive on weeds or other wild hosts in early spring before moving on to cotton and other row crops in the summer season. In a third experiment, red and white sticky cards with or without pheromone lures were used to monitor the movement of L. lineolaris from wildflowers to cotton. The pheromone traps covered all four sides of the cotton field plot and were arranged to intercept adults moving from the wildflower plots to the cotton crop. The experiment was performed at the Alcorn State University demonstration farm from 14 July to 12 August 2022. Two patches of native wildflowers and other weeds (18.3 m wide, 0.41 hectare) were allowed to grow during the early spring season. These two wildflower beds were 30.5 m apart, and a plot of cotton (18.3 m wide, 0.41 hectare) was planted in early June in the space between them. A 6.1 m strip was kept free of weeds between the wildflowers and the cotton on both sides of the cotton. Traps were arranged 15.3 m apart lengthwise in the center of this 6.1 m wide barrier strip. The number of L. lineolaris caught on the sticky cards was counted weekly for four weeks. The same pheromone lures were used throughout the experiment, and the sticky cards were replaced weekly. Visual inspection of fruiting structures was performed randomly on forty plants (10 plants/row) in four rows of the experimental plots to check for the presence of Lygus adults and nymphs. Damage to the cotton squares was measured by counting the number of aborted squares on the plant. A total of 100 cotton squares was checked randomly, with four replications, and the percentage square retention was calculated.

Comparison of Different Colored Sticky Cards with Pheromone Lures for Monitoring Lygus lineolaris in Weed Hosts

The wild hosts contained a good number of L. lineolaris adults in early spring (Figure 1). The red sticky cards with pheromone lures caught significantly more adults than the white sticky cards with pheromone lures or the red and white sticky cards alone (F 3,23 = 30.47; p < 0.001, n = 6) (Figure 2). After two weeks, the total number of L. lineolaris caught on the red sticky cards with pheromone lures was almost five times that on the white sticky cards with the same pheromone lure. Most of the L. lineolaris collected on the sticky cards, irrespective of treatment, were males; the male/female percentage ratio was 97:3 across all the colored sticky cards. No differences were observed in the male/female ratio of L. lineolaris adults on the red or white sticky cards, with or without pheromones. The white sticky cards attracted many other species of dipteran insects, whereas the red sticky cards had less bycatch of other insect species.
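The one-way ANOVA and Tukey's HSD comparison reported above were run in JMP; for readers working in open-source tools, a minimal equivalent sketch in Python follows. The trap counts are invented placeholders, not the study's data:

# Minimal sketch of the ANOVA + Tukey HSD analysis of trap catches.
# The counts below are illustrative placeholders, not the study's data.
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols
from statsmodels.stats.multicomp import pairwise_tukeyhsd

data = pd.DataFrame({
    "treatment": ["red"] * 6 + ["red_pher"] * 6 + ["white"] * 6 + ["white_pher"] * 6,
    "catch": [3, 5, 4, 2, 6, 4, 21, 25, 19, 24, 22, 26,
              1, 2, 1, 3, 2, 1, 5, 4, 6, 5, 3, 6],
})

model = ols("catch ~ C(treatment)", data=data).fit()
print(sm.stats.anova_lm(model, typ=2))                       # F test across treatments
print(pairwise_tukeyhsd(data["catch"], data["treatment"]))   # pairwise mean separation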
Perimeter Trapping of Lygus lineolaris Using Pheromone Traps

Monitoring the movement of L. lineolaris adults from the wildflowers was very effective using the sticky card traps with pheromone lures. The red sticky cards baited with pheromone lures caught more L. lineolaris adults than the other sticky card treatments throughout the four weeks of the experiment (p < 0.005, n = 6) (Table 1).

Table 1. Mean (±SEM) cumulative catch of Lygus lineolaris on red and white sticky cards with or without pheromone lures placed around the borders of a 1-hectare plot with wild hosts for five weeks during early spring. The same pheromone lures were used throughout the experiment and the sticky cards were replaced every week (20 April-18 May 2022; n = 6); means followed by different letters within the same week are significantly different (p = 0.05).

The total number of L. lineolaris caught on the red sticky cards baited with pheromone was eight times higher than on the white sticky cards with the same pheromone lure. The addition of the pheromone lure significantly increased attraction to the red sticky cards, although not to the white sticky cards. The red and white sticky cards, with or without the pheromone lure, caught > 95% males (Figure 3). Few females were found on these traps irrespective of card color or the presence of pheromone.
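A binomial confidence interval makes the precision of the reported sex ratio explicit. A minimal sketch, assuming for illustration a 97% male share of the n = 152 adults in Figure 3:

# Minimal sketch: 95% confidence interval for the male proportion on the traps,
# assuming ~97% of the n = 152 trapped adults were male (as reported).
from statsmodels.stats.proportion import proportion_confint

n_total = 152
n_males = round(0.97 * n_total)  # ~147 males
low, high = proportion_confint(n_males, n_total, alpha=0.05, method="wilson")
print(f"male proportion: {n_males / n_total:.3f} (95% CI {low:.3f}-{high:.3f})")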
Tracking Lygus lineolaris Movement from Weed Hosts to Cotton Using Pheromone Traps

The red sticky cards caught significantly more L. lineolaris than the other trap treatments deployed in the experiment (F 3,23 = 24.48; p < 0.0001, n = 6) (Figure 4). Visual sampling of the cotton plants and wild hosts during the second week of the experiment showed the presence of L. lineolaris adults in the wild hosts (0.5 ± 0.2) and cotton (1.5 ± 0.6). At the end of the experiment, a square retention count was performed in cotton to assess the damage to the cotton squares. The cotton plants showed 90% square retention even under untreated conditions.

Discussion

Lygus lineolaris is a major hemipteran pest species in the family Miridae and causes extensive damage to agriculturally important crops. Like many other mirids, its polyphagous feeding behavior, wide host range, and development of resistance to major insecticide classes make L. lineolaris a very successful pest in cotton and other crops. The extensive application of malathion for boll weevil eradication has caused resistance development in L. lineolaris to organophosphates and pyrethroids in the mid-southern states [19,20]. Multiple studies have documented the presence of L. lineolaris on weed hosts near commercial crop field edges during the early spring season before the insects move on to commercial crop fields [7,10,13,21]. Scouting the susceptible stages of cotton using drop cloths, sweep nets, and visual observations of square retention and the number of dirty squares is important in making insecticide application decisions. However, these sampling methods have limitations depending on the insect and cotton growth stages [4,14,22]. The availability of alternate weed hosts in the field margins and the dispersal distance from these alternate hosts to the cotton interior may significantly affect these sampling techniques, which in turn may affect management practices [13].

The detection and monitoring of pest species using pheromone traps have gained momentum in recent decades, as the approach is ecofriendly and cost-effective and contributes to integrated pest management. Pheromone traps can be used for early season detection, season-long pest abundance monitoring, and mass trapping strategies. Traps that combine visual and pheromone cues may be very useful for monitoring L. lineolaris adults early in spring on the different weedy hosts bordering cotton fields.
Lygus lineolaris pheromone components, their different blends, and their activity toward Lygus have been reported earlier [23-27]. Previous studies have field-tested these lures using white sticky cards against different Lygus spp. and have reported varying results in attracting Lygus adults [28,29]. George et al. [15] reported the use of pheromone lures to increase the attraction of L. lineolaris adults to red-colored sticky cards in cotton under field conditions. Combining pheromone lures with red-colored sticky cards had a multiplicative effect on attracting and trapping L. lineolaris adults.

In the current study, a similar red sticky card and pheromone combination was used to monitor the early season population of L. lineolaris in wild hosts, and their distribution and movement within and away from these alternate hosts. Lygus populations that multiply and build up on weedy hosts need to be monitored and managed effectively before damage is sustained by agriculturally important crops. Three sets of experiments at two different locations investigated the efficacy of these pheromone traps in monitoring L. lineolaris movement in early spring. The first experiment showed that the red sticky cards baited with pheromone lures caught much higher numbers of L. lineolaris than the white sticky cards baited with pheromone or the sticky cards alone, and so are presumably more sensitive for detecting low populations. These pheromone traps were deployed in an area with different types of wild hosts, and the higher trap catches showed that these plants harbor a good number of L. lineolaris adults and act as breeding grounds before the agricultural crops are planted. The male/female ratio was 97:3, which is not surprising for an insect trap containing a female-produced sex pheromone. However, the sticky cards without pheromone lures also caught > 90% males, which suggests either that there is a higher proportion of males than females in the field, or that the males are more active fliers, resulting in an increased number of males on our sticky cards.

A second experiment in wildflowers was also performed during the early spring season, and our trapping results showed that there is a very active field population of L. lineolaris and that these adjacent weeds and wildflowers act as alternate hosts.
The randomized treatment traps along the borders of the whole 1-hectare plot trapped insects moving in and out of this field. There was a higher proportion of males than females in the traps, as in the previous experiment. If these insects are not managed before the cropping season, their movement into neighboring crops can cause yield loss. Recent studies have shown that an early season infestation of L. hesperus, even at low densities, can result in significant yield loss, whereas a mid-season infestation of the same pest had only marginal effects on cotton yield [30,31]. The distribution of weeds and alternate hosts in the field landscape can influence the movement of L. lineolaris [32,33]. Fleischer and Gaylor [21] previously reported the abundance of L. lineolaris on different types of weed hosts and how the weeds act as nurse crops for L. lineolaris early in the season. They also reported that diversifying agroecosystems may result in less damage to cotton, as multiple cultivated hosts will be available to feed on during the cropping season. Our experiments showed that the red sticky pheromone trap would be an effective tool for the early season monitoring of L. lineolaris.

The final experiment monitored the movement of L. lineolaris adults from wild hosts to the cotton crop during the summer season. Field sampling of wild hosts in the spring showed the presence of L. lineolaris on many of the wild hosts in the plots. Even though trapping L. lineolaris using sticky cards is not a preferred practice for pest management, the presence of these pheromone traps bordering the cotton plot may have contributed to a reduction in the movement of L. lineolaris from wild hosts to cotton. Very low populations of L. lineolaris were observed on the cotton plants, and a 90% square retention rate was observed even without insecticide application. Using these pheromone trap and lure combinations will enable landscape-level sampling, reducing end-user variability, which continues to be a major problem in the scouting and monitoring of cotton insect pests. Effective weed management is important in reducing early Lygus populations that damage the developing fruiting structures, including the squares that critically impact cotton yield.

D'Ambrosio et al. [13] described the development and seasonal movement of L. lineolaris from weeds into cotton fields. Proximity to weedy field edges can influence the dispersal of L. lineolaris late instar nymphs to cotton crops. Mark-recapture studies have shown that L. lineolaris late instar nymphs can walk up to 50 m when their weedy hosts are destroyed [34]. D'Ambrosio et al. [13] also reported that the flightless nymphs could reach cotton more readily from declining weed hosts. Field weed edges with a higher perimeter relative to their area may also facilitate the movement of L. lineolaris into cotton, as there is a larger weed-cotton interface. Flight mill studies have reported that adult L. lineolaris can travel 12 km in 12 h [35], and 37.5 m/day in flowering cotton [36].
The current sampling strategies using drop cloths and sweep nets have advantages and disadvantages and require more human input and resources, in addition to the variability associated with these methods. Monitoring pest populations using sticky cards and pheromone lures is cheap and effective and requires less human input than the traditional sampling methods. These traps can be deployed early in the season to monitor the overwintering pest populations in weeds and other wild hosts. There are also options for automated pest monitoring, allowing growers to monitor pests caught on pheromone traps and send real-time trap catch information to computers or cell phones. Such automatic traps are currently available and can be used for the continuous monitoring of pests, on the basis of which management decisions can be designed. The use of pheromone traps and automated trapping methods can help growers optimize the use of pesticides and reduce insecticide residues. Combining traditional sampling methods, pheromone traps, and automated trapping will help growers monitor and make judicious decisions on managing the field weed edges that host early season pests.

Author Contributions: Conceptualization, J.G., G.V.P.R. and D.R.H.; writing original draft, J.G. and D.R.H.; methodology, investigation, and validation, J.G., J.P.G. and C.J.; formal analysis and visualization, J.G.; resources, and review and editing, J.G., J.P.G., C.J., G.V.P.R. and D.R.H.; supervision and project administration, J.G. and G.V.P.R. All authors have read and agreed to the published version of the manuscript.

Figure 1. Lygus lineolaris adults on different types of wild hosts in early spring: (a) patch of daisy fleabane (Erigeron annuus) with L. lineolaris adults next to a cotton field in early spring; (b) Lygus lineolaris adults on daisy fleabane (Erigeron annuus) flowers and (c) mustard (Brassica juncea) flowers in the early spring season.

Figure 2. Mean (±SEM) cumulative catch of Lygus lineolaris caught on red and white sticky cards with or without pheromone lures in a mustard (Brassica juncea) plot during early spring. The same pheromone lures were used during the two-week experiment (7-21 April 2022), and sticky cards were changed every week (n = 6; means with different letters are significantly different via Tukey's HSD test after a significant ANOVA, p = 0.05).
Figure 3. Male/female ratio of Lygus lineolaris caught on sticky card traps in the perimeter of weed hosts. The sex ratio is presented as the percentage of males/females (n = 152) caught on the traps. Experiments were performed in early spring (20 April-18 May 2022).

Figure 4. Mean (±SEM) cumulative number of adult Lygus lineolaris caught during five weeks (14 July-12 August 2022) on red or white sticky cards with or without pheromone lures placed in barrier rows between wild hosts and cotton (n = 6; means labeled with different letters are significantly different via Tukey's HSD after a significant ANOVA; p = 0.05).

Funding: This research received no external funding. This work was supported by the U.S. Department of Agriculture, Agricultural Research Service, Research Project #6066-22000-090-00D, Insect Control and Resistance Management in Corn, Cotton, Sorghum, Soybean, and Sweet Potato, and Alternative Approaches to Tarnished Plant Bug Control in the Southern United States. The findings and conclusions in this publication are those of the author(s) and should not be construed to represent any official USDA or U.S. Government determination or policy. Any mention of trade names or commercial products in this publication is solely for the purpose of providing specific information. It does not imply a recommendation or endorsement by the U.S. Department of Agriculture.
Development of Teaching Materials by Using Thinking Maps in Embryology Learning

The purpose of this study is to develop learning materials using thinking maps within an integrative learning model. The research method is educational design research, which consists of three stages: preliminary research, prototyping, and assessment. The data obtained from the validation and the learning outcome tests were analyzed with descriptive statistics. The study participants were 69 students and 3 experts. The preliminary research shows that student characteristics in embryology learning are diverse. The teaching material prototype using thinking maps was designed and validated by the 3 experts, with an average rating of valid. The implementation test shows that students achieved good average learning outcomes in embryology. These results show that the designed prototype matches expectations and that the designed product has good internal relevance and consistency. We therefore conclude that learning material using thinking maps in the integrative learning model was obtained with good internal relevance and consistency.

Introduction

Thinking maps are consistent visual patterns linked directly to eight specific thought processes; they help students reach higher levels of critical and creative thinking because thinking maps are the language of cognitive process patterns [1]. Each thinking map corresponds to a single thinking process: the circle map helps define words or things in context and presents points of view; the bubble map describes emotional, sensory, and logical qualities; the double bubble map compares and contrasts qualities; the tree map shows the relationships between main ideas and supporting details; the flow map shows events as a sequence; the multi-flow map shows causes and effects and helps predict outcomes; the brace map shows physical structures and part-whole relationships; and the bridge map helps to transfer or form analogies and metaphors [2]. Thinking maps make an excellent addition to any classroom, because teachers can use them to teach students to think critically about the learning material and the connections within it [3]. Thinking maps can be used by students to broaden critical thinking skills and enhance their understanding of content; such graphic organizers [3], like brain-based learning and multiple intelligences, have been the focus of many workshops [4]. Another study showed that thinking maps are also effective in improving science process skills [5]; this supports teachers' adoption of the thinking map strategy as an effective means for learners and demonstrates the importance of using the strategy in teaching.

Previously, the concept map was a tool used in learning to support effective dialogue between teacher and student [6,7]. Concept mapping is an activity with numerous uses in the biology classroom; its value lies in planning, teaching, revision, and assessment, and in the attitudes of students and teachers toward its use. The concept map has been assessed as an instructional strategy for use by high school students in learning biology concepts [8]. It has been discussed as a tool for the visualization of knowledge structures that can be exploited within biological education [9], and it can be used to help students improve their learning achievement and interest [10].
The concept map, a meta-learning tool, is emerging as a potential pathway for promoting the acquisition of problem-solving skills [11]. Teachers have used concept mapping as an alternative assessment strategy in advanced-level biology classes, with effects on students' cognitive skills for selected biology concepts [12]. The use of concept maps in biological learning is known from several research results: concept maps can be used as skeleton concept maps [13]; content validity as well as application validity have been examined for concept maps, with arguments for the practical relevance of the proposed validation frameworks [14]; synergy can be created when concept-mapping techniques are used together with the construction of PowerPoint presentations to increase the richness of the learning experience [15]; and five methods of representing cognitive structures have been compared: free word association, controlled word association, tree construction, concept map, and flow map [11].

An integrative learning model has been designed [16,17,18]. In this learning model, teaching materials using thinking maps are part of the support system. There are five essential elements of a learning model: syntax, the social system, the principles of reaction, the support system, and the instructional and nurturing effects [19]. Integrative learning is part of modern instruction. Integrative instruction has been identified as part of modern instructional design [20], is used in learning design [21], and has improved skills in biology learning [22]. Other articles describe the integration of methods, strategies, and learning materials [23,24,25].

In the preliminary stage, complex problems were found in embryology learning at our institution: the teaching method was not matched to the learning material, mastery of learning was low, and the embryology course is a core, prerequisite subject in the Department of Biology Education, IAIN Batusangkar, West Sumatera, Indonesia. The preliminary findings also show that students have different academic abilities; the complete data are presented in the research findings section. We propose that providing a learning resource that incorporates thinking maps is one way to solve these problems. In this study, the research question is: what are the relevance and internal consistency of teaching materials using thinking maps in the integrative learning model? The objective of the study was to describe the relevance and internal consistency of teaching materials using thinking maps in the integrative learning model.

Method

This research is educational design research consisting of three phases: preliminary research, the prototyping phase, and the assessment phase [26,27]. In this study, embryology learning material [28] was used by 69 students who took the embryology course in the Department of Biology Education, IAIN Batusangkar, in the 2017/2018 academic year. The study also involved 3 experts in the fields of embryology, learning technology, and the Indonesian language. The product assessment instrument was validated by the 3 experts, and the validity and reliability of the learning outcome test instrument were measured using Cronbach's α. The instruments used were validation sheets and learning outcome tests. The validation results were rated valid, with a mean of 3.25. The validity and reliability coefficients of the learning outcome tests (Cronbach) were 81.52 and 75.54.
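For readers who want to reproduce the reliability analysis, Cronbach's α can be computed directly from a students-by-items score matrix. A minimal Python sketch with placeholder scores follows (the study's raw data are not given, and its coefficients appear to be reported on a 0-100 scale rather than the conventional 0-1 scale):

# Minimal sketch: Cronbach's alpha for a students x items score matrix.
# The scores below are illustrative placeholders, not the study's data.
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """scores: 2-D array, rows = students, columns = test items."""
    k = scores.shape[1]                         # number of items
    item_vars = scores.var(axis=0, ddof=1)      # variance of each item
    total_var = scores.sum(axis=1).var(ddof=1)  # variance of total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

rng = np.random.default_rng(0)
demo = rng.integers(0, 5, size=(69, 20))        # 69 students, 20 items (assumed)
print(f"Cronbach's alpha = {cronbach_alpha(demo):.3f}")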
Data on learning outcomes were analyzed with descriptive statistics [29]. The research procedure was as follows. In the preliminary stage, the researchers analyzed the characteristics of the students, based on their cumulative grade point average (GPA), school of origin and major, achievement in the anatomy course, and aptitude test results. In the prototyping stage, the researchers designed a prototype of embryology teaching materials using thinking maps. This prototype was assessed by experts [30], and revisions were made on the basis of the expert judgments. In the assessment stage, the validated product was tested in the classroom in embryology learning. The trial was conducted over 8 meetings, with the learning process following the syntax of the integrative learning model. At the end of the meetings, a written test was administered to the students using the previously developed instruments. Product quality was determined from the aspects of relevance and internal consistency [31].

Result and Discussion

The results of the study show that the product fulfills the internal relevance and consistency aspects. These aspects are based on the research findings on development characteristics, the validity test, and observation of the learning process. The design and development process was based on the findings of the preliminary research. At the preliminary stage, the largest share of students' cumulative GPAs fell in the 3.01-3.51 range, at 41%, and in the anatomy course the most common grade was B-, at 31.5%.

Internal relevance and consistency in the development of teaching materials using thinking maps can be seen from the research findings on the characteristics of product development. The prototype characteristics of teaching materials using thinking maps are consistent, flexible, developmental, integrative, and reflective [1]. These five characteristics were adapted to the educational design research process. The development matrix for teaching materials using thinking maps is given in Table 1, and the prototype design was carried out on the basis of this matrix. The design results are listed in Table 2, which shows that 47 pieces of material were visualized into the eight types of thinking maps. The design was then reviewed by 3 experts; the results of the validity test are given in Table 3. The validity test shows that the prototype obtained an average rating of valid (Table 3), and the validators gave suggestions for revision concerning the consistent use of terms in the thinking maps and the use of images in the product book. These results also show that there is internal relevance and consistency in the development of teaching materials using thinking maps in the integrative learning model. The next stage subjected the prototype to the assessment phase using summative evaluation techniques; this stage also describes the practicality of using the prototype. The summative evaluation was done by applying the prototype in the learning process using the syntax of the integrative learning model. The practicality of the prototype was determined from student learning outcomes.
The findings on the application of teaching materials using thinking maps in the integrative learning model are listed in Table 4. The test results show that 36 of the student participants obtained grades of A, A-, B+, or B. These results indicate that the practicality of the prototype is at the very practical and practical levels, and they again show relevance and consistency in the development of teaching materials using thinking maps in the integrative learning model.

In the development process, internal relevance and consistency are determined by an initial identification process [33] for the designed products and by the design, assessment, and revision process. Formative evaluation is carried out on prototypes to reflect the level of product resistance to revision [30] and to document the products that have been designed. These aspects were fulfilled in this study, which also indicates internal relevance and consistency in the development of teaching materials using thinking maps in the integrative learning model. In educational development research, internal relevance and consistency are also determined by how well expectations match the research findings [34]; this match is evidenced by a series of tests such as expert review and small- or large-group trials. The present study includes both an expert review and an application test (a small large-group trial), which again shows that internal relevance and consistency have been fulfilled.

The results show that the developed subject matter meets the five characteristics of the thinking map [1] and can be visualized in all types of thinking maps, which helped students absorb the course material well. Embryology learning materials can thus be visualized, supporting the lecturer's role in discovering and describing linear and nonlinear knowledge structures [32]. The embryological material was visualized into the eight forms of thinking maps. The findings also show that the flow map, multi-flow map, and bridge map are the types into which embryological material was most often visualized. This is presumably because embryology learning materials, which are characterized by theories, processes, facts, and concepts [35], are well served by these three maps [32]: the flow map visualizes information arranged in sequence, the multi-flow map visualizes the analysis of physiological feedback systems, and the bridge map visualizes the use of analogies and metaphors to understand concepts [32].

In terms of learning methods, this study used the syntax of the integrative learning model supported by the theory of integrative learning approaches [36]. Among the most important aspects of an integrative model or approach are interdisciplinarity, thinking maps, and problem solving. The use of the syntax of the integrative learning model in the learning process has a direct impact [19] in the form of uniformly good student learning outcomes.
These findings also show the fulfillment of the internal relevance and consistency aspects of the development of teaching materials using thinking maps in the integrative learning model.

Conclusion

This study concludes that teaching materials using thinking maps in the integrative learning model were obtained with good internal relevance and consistency. The aspects of internal relevance and consistency are based on characteristic analysis, validity testing, and student learning outcomes. More extensive tests (large-group trials) are needed to increase product resistance to revision. The use of thinking maps for content characterized by theories, concepts, facts, and processes should be more widely applied in integrative learning processes, because it helps students master the learning materials. The design process should also involve the students themselves, so that the process of mapping and transforming information into (metacognitive) knowledge is more beneficial for them.
A thick shell Casimir effect

We consider the Casimir energy of a thick dielectric-diamagnetic shell under a uniform velocity of light condition, as a function of the radii and the permeabilities. We show that there is a range of parameters in which the stress on the outer shell is inward, and a range where the stress on the outer shell is outward. We examine the possibility of obtaining an energetically stable configuration of a thick shell made of a material with a fixed volume.

Introduction

It is well known that the fluctuations of electromagnetic fields in vacuum or in material media depend on the boundary conditions imposed on the fields. This dependence gives rise to forces, known as Casimir forces, acting on the boundaries. The best known example of such forces is the attractive force experienced by parallel conducting plates in vacuum [1]. Casimir forces between similar, disjoint objects such as two conducting or dielectric bodies are known in most cases to be attractive [2] and are sometimes viewed as the macroscopic consequence of Van der Waals and Casimir-Polder attraction between molecules. However, Boyer [3] showed that the zero point electromagnetic pressure on a conducting shell is directed outward². In this paper we address the question: can there exist a compact ball for which the Casimir forces would not expand the ball to infinity? We look for such behavior in a simple model.

² Assuming that zero point forces are in general attractive, Casimir considered a semiclassical toy-model for the electron in which the Coulomb self-repulsion is balanced by a Casimir type attraction. One of the consequences of Boyer's calculation is that, since the pressure on a conducting sphere is outward, it cannot balance the Coulomb repulsion in Casimir's model.

In view of the dominance of the Casimir forces at the nanometer scale, where the attractive force could lead to restrictive limits on nanodevices, the study of repelling Casimir forces is of increasing interest. Indeed, Boyer, following Casimir's suggestion, studied the inter-plate Casimir force with one plate a perfect conductor while the other is infinitely permeable. He showed that in this case the plates repel [4]. This problem has since been reconsidered in [5,6].

In addition to the Casimir effect for a conducting spherical shell, various cases of material balls were considered in the literature. The case of a ball made of a dielectric material was considered by many authors and exhibits strong dependence on cutoff parameters [7,8,9,10,11,12]. The case of a dielectric-diamagnetic ball has also been extensively studied, especially under the condition εµ = 1, which will be referred to as the uvl (uniform velocity of light) condition, since its introduction by Brevik and Kolbenstvedt [13,14,15,16]. A medium with the uvl property has in many cases cutoff-independent values for the Casimir energy (see also [17,18,19] for the Casimir energy of a cylinder with uvl). A heuristic argument for this statement goes as follows: the zero point energy of the electromagnetic field in a uvl medium is a sum over the eigenfrequencies of the system, and behaves for high frequencies as Σ_I ω_I ~ Σ_I c k_I, where the factor c is common to all the media and the geometric information on the system enters via the allowed k's. This expression in the limit of high k's behaves exactly as the vacuum energy with εµ = c⁻², and the sum becomes regularizable by subtraction of the vacuum energy. In cases where this condition is not fulfilled there are problems of UV divergent terms which are proportional to the volume in which the velocity of light differs from the velocity of light in the background. We will use the uvl condition for our thick shell.
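A schematic rendering of this heuristic argument, not a rigorous statement, can be written as the explicit subtraction

  E_Casimir = (1/2) Σ_I ω_I − (1/2) Σ_I ω_I^(0) = (c/2) Σ_I k_I − (c/2) Σ_I k_I^(0),

where the ω_I^(0) = c k_I^(0) are the eigenfrequencies of the homogeneous comparison medium with εµ = c⁻² and the allowed k_I carry the geometric information. Because every mode in a uvl medium obeys ω_I = c k_I with the same c, the divergent large-k behavior cancels mode by mode in the difference.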
In all of the above mentioned cases, namely the conducting sphere, the dielectric ball, and the dielectric-diamagnetic ball with uvl, the resultant pressure was found to be repulsive. In order to obtain diverse behaviors we mix the case of disjoint bodies (where there is usually attraction) with the dielectric ball scenario as follows: we consider the case of a thick shell (fig. 1), with three permeability regimes (inner, middle and outer) µ_I, µ_II, µ_III, under a uvl condition. In this case there are two competing effects: the interaction between the inner and outer boundaries, and the repulsive pressure experienced by each boundary. Using a formula derived in [20], we show in sections 2 and 3 that for a dilute medium the energy of the thick shell takes the closed form given in eq. (19) below, where r and R are the inner and outer radii respectively and the parameters κ_r and κ_R are defined in section 3. From this expression one can easily obtain the energy of a single ball by taking the limit of R to infinity, and also the energy of two parallel infinite dielectrics by taking both of the radii to infinity while keeping a finite distance d = R − r. Next, in sections 4 and 5 we investigate this expression for the energy and show that there is a range of parameters such that the force on the outer shell is attractive.

Energy of the electromagnetic fluctuations

In this section we briefly review the Green's function method for the calculation of Casimir energies (see [21] and [20] for details). To calculate the Casimir energy of a medium under the uvl condition, we use the perturbative technique suggested in [20], where the Born series for the correlation function D_ik(ω; r, r′) of the electromagnetic fields in a material medium is presented. The correlation function D_ik is the retarded Green's function of the fields. Throughout we use the gauge A_0 = 0, so that the indices i, k range over 1, 2, 3 and D is a 3 × 3 matrix. The correlation function D is known [21] to be the Green's function for eq. (2) (in units where ħ = c = 1), where [a] stands for the matrix with elements [a]_ik = ε_ijk a_j and I is the identity operator, which in coordinate space is just the 3 × 3 identity matrix times a delta function. In eq. (2), ε is the permittivity and µ the permeability of the medium. In the following we assume that µ(r) and ε(r) are scalar functions (of course, in the general case both µ and ε are tensors); the equation can also be written in the equivalent form (3).

The correlation function of the electromagnetic fields in vacuum, where ε = µ ≡ 1, will be denoted D_0. It is the inverse of the operator ([∇]² − ω²) and is well known. We now wish to use the known D_0 to express D via a Born series. We define the operators Q, associated with the spatial variation of µ, and P, associated with the deviation of εµ from unity (their explicit forms follow from eq. (3) and are given in [20]). It follows from (3) that, as an operator, D satisfies an implicit equation whose iteration yields a formal Born series. We now impose the "uniform velocity of light" condition in the medium by setting εµ ≡ I. This eliminates the P terms and leaves the expansion (9) in powers of Q alone. The correlation function D can be used to calculate various properties of the electromagnetic field.
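A plausible form of the Born series just described — assuming the standard iteration of an implicit operator equation in Q and P, with the P terms dropped under the uvl condition (the exact expressions are those of [20]) — is

  D = D_0 + D_0 (Q + P) D,
  D = D_0 + D_0 (Q + P) D_0 + D_0 (Q + P) D_0 (Q + P) D_0 + …,

and, setting εµ ≡ I,

  D = D_0 + D_0 Q D_0 + D_0 Q D_0 Q D_0 + …

The last line is consistent with the term D_0 Q D_0 Q D_0 µ quoted below as the source of the first contribution to the Casimir energy density.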
We use it to calculate the energy density of the field. The correlation function of the fields A_i is related to the retarded Green's function D by the fluctuation-dissipation theorem [21], and thus, at zero temperature, the energy density of the electromagnetic field is given by eq. (11), where we have chosen to neglect the dependence of the permeability µ and the permittivity ε on the frequency ω. Inserting the expansion (9) in (11), one can obtain a series for the energy density of the fluctuations of the electromagnetic field. It was shown in [20] that the first contribution to the Casimir energy density (i.e., the energy of the electromagnetic field in the presence of external conditions minus the energy density of the electromagnetic field without them) comes from the term D_0 Q D_0 Q D_0 µ in (9). For a dilute medium this term gives the dominant contribution to the Casimir energy.

Casimir energy of a thick shell

The Casimir energy per dω is obtained via eq. (11) by integrating ρ(r, ω) − (contribution of D_0) over space. This yields a general formula, eq. (12), for the density of Casimir energy of a medium with a radially symmetric permeability µ [20], assuming the uvl property and diluteness. The integration domain T in (12) is such that u, v and s can form a triangle, and (12) involves an explicitly given function g_2.

We consider a shell of thickness R − r with inner radius r and outer radius R. The permeability of the shell is µ_II (Fig. 1); it is embedded in a medium with permeability µ_III and its core has permeability µ_I. We define κ_r = log(µ_I/µ_II) and κ_R = log(µ_II/µ_III). We now calculate the Casimir energy of the thick shell using eq. (12). In our case µ is a sum of two radial step functions, and µ′(s) is just a pair of delta functions at s = r and s = R. Thus the integration over s in (12) becomes immediate and yields the energy density

  ρ_T^(2)(ω) = −4πω² coth(βω/2) Im[κ_r I(r) + κ_R I(R)].   (15)

The I's in (15) can be explicitly calculated (eqs. (16) and (17)); inserting them gives the explicit density (18), in which we can identify the first two terms as the densities of Casimir energy per dω for balls of radii r and R. The total Casimir energy can now be obtained by integrating the density (18) over the frequencies ω using (26) and (27). The result is eq. (19), our final expression for the energy of a thick shell; the assumption of diluteness implies |κ_r|, |κ_R| ≪ 1.

Let us check that this result coincides with the known result for a single sphere when R goes to infinity. To do so we expand κ_r in terms of the diluteness parameter ξ = (µ_I − µ_II)/(µ_I + µ_II), which is commonly used in the literature. Substituting this expansion in (19) and taking the limit R → ∞ we immediately regain the usual result for the energy of a dielectric-diamagnetic ball [16,22], namely

  E_C(r, ∞) = 5ξ²/(32πr) + O(ξ⁴).

This energy yields an outward pressure on the ball. Note also that the Casimir energy of two parallel dielectrics per unit area can be obtained from (19) by taking the limit of large radii: we keep d = R − r finite while taking the limit r, R → ∞ and divide by the surface area, thus obtaining the Casimir energy per unit area of two parallel dielectric media.

To illustrate the broad range of behaviors that are possible from an expression for the energy such as (19), we study in more detail two cases:

1. κ_r = −κ_R. This happens when the inner and outer materials are the same; e.g., one can imagine a thick material shell in vacuum.

2. κ_r = κ_R. In this case the ratio of magnetic constants between the inner and middle materials is the same as the ratio between the middle and outer materials.
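As a quick consistency check of the sign statement above, differentiating the displayed O(ξ²) term of the single-ball energy gives the force on the boundary:

  F(r) = −d/dr [5ξ²/(32πr)] = 5ξ²/(32πr²) > 0,

so the Casimir stress on an isolated dilute uvl ball is directed outward, in agreement with the text.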
Stress

Now we use the expression for the energy (19) in order to evaluate the Casimir stress on the shells. The pressure on the outer shell and the pressure on the inner shell follow by differentiating (19) with respect to R and r, respectively. One may recognize the self Casimir force acting on the inner and outer shells in the terms independent of r and R, respectively. We now examine the two cases.

The case κ_r = −κ_R. The sign of the pressure on the outer shell depends only on the ratio c = r/R, and the pressure can be written as a function of c alone. For a constant outer radius R, the behavior of the pressure as a function of the ratio c is indicated in Fig. 2. One can see that at a radii ratio of approximately 0.19 the Casimir pressure suddenly changes sign and becomes an inward pressure. However, the pressure on the inner shell is always directed outward, so that the inner and outer regions attract. This attraction may be balanced by adding the compression resistance of the middle medium, or by adding the volume dependence of µ. In the case that r < 0.19R, the pressure on the inner shell is larger than the pressure on the outer shell, so one can imagine the two shells growing until they reach the 0.19 ratio, after which the outer shell starts contracting. This result is not too surprising: if one considers two conducting shells which are very close, there is attraction between the shells of the order of magnitude of the attraction between conducting plates. This attraction loses its dominance once the radii become far apart in magnitude, and then each shell experiences its own outward Casimir pressure. However, in order to know whether the outer radius stays finite or goes to infinity, it is necessary to introduce the law by which the smaller radius changes as we change the outer radius. An example of such a situation is given in the next section.

The case κ_r = κ_R. In this case the pressure on the outer shell is outward for all R. The inner shell is subject to an outward pressure as long as R > 3.46r; for R < 3.46r the pressure on the inner shell is inward. We return to this situation in the next section.

A thick shell with a fixed volume

In the previous section we calculated the pressure assuming that the volume of the inner ball is fixed. Another interesting model is to consider the volume of the thick shell itself as constant. Thus we can assume v = (4π/3)(R³ − r³) remains constant and look for a minimum of the energy under this condition. In this case, as the outer shell expands, the distance between the inner and outer boundaries decreases, so that if there is attraction between the shells, it is energetically favorable for the shell to expand to infinity, gaining from the energy of interaction between the boundaries as well as from the tendency of each separate boundary to grow. This is indeed what happens in the case κ_r = −κ_R. However, for the second possibility, namely κ_r = κ_R, there is repulsion between the boundaries. This protects the shell from growing to infinity, and a stable minimum of the potential can be found. A typical example is illustrated in fig. 3, where the minimum of the potential is at R ≃ 1.01, r ≃ 0.31, assuming the volume of the substance in the middle region is kept constant at 4π/3.
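The fixed-volume minimization just described reduces to a one-variable problem once the constraint is substituted. A minimal numerical sketch follows, assuming the closed-form energy E_C(r, R) of eq. (19) is available as a Python callable; the toy energy below has the right qualitative ingredients (outward self-stress of each boundary plus a boundary-boundary repulsion) but is not the paper's expression:

# Minimal sketch: minimize the Casimir energy at fixed shell volume
# v = (4*pi/3)*(R**3 - r**3). `casimir_energy` is an illustrative placeholder,
# NOT eq. (19); substituting the paper's closed form would reproduce fig. 3.
import numpy as np
from scipy.optimize import minimize_scalar

V = 4 * np.pi / 3        # fixed volume of the middle region, as in fig. 3
KAPPA = 0.1              # dilute case kappa_r = kappa_R (illustrative value)

def casimir_energy(r, R):
    d = R - r            # gap between the boundaries
    # outward self-stress (~1/radius) + repulsion that grows as the gap shrinks
    return KAPPA**2 * (1.0 / r + 1.0 / R + r * R / d**3)

def energy_at_fixed_volume(R):
    r = (R**3 - 3 * V / (4 * np.pi)) ** (1.0 / 3.0)   # solve the constraint for r
    return casimir_energy(r, R)

R_min = (3 * V / (4 * np.pi)) ** (1.0 / 3.0)          # R at which r shrinks to zero
res = minimize_scalar(energy_at_fixed_volume,
                      bounds=(1.0001 * R_min, 10.0), method="bounded")
print(f"energy minimum at R ~ {res.x:.2f} for this toy energy")

As R grows at fixed v the gap d closes and the repulsive term diverges, while as R approaches R_min the inner self-term diverges, so an interior minimum exists, mirroring the mechanism that stabilizes the κ_r = κ_R shell.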
Another possibility is to keep the distance between the shells constant (imagine the inner and outer boundaries attached by means of stiff rods of constant length). Qualitatively the results are then similar to the results discussed above for the cases κ_r = −κ_R and κ_r = κ_R.

Figure 3: The potential of a thick shell with a fixed volume as a function of R, for κ_r = κ_R. There is a clear minimum at R = 1.01.

Discussion

One of the most intriguing aspects of the Casimir force is its sign: this force has been shown to exhibit different behavior in different problems. It seems that the sign of the force depends on the balance of several effects, such as a tendency to minimize the average curvature of the boundaries and, on the other hand, the interaction between different patches of the boundary. In the calculations here, which are confined to the case of uniform velocity of light, we have shown how these effects can work with or against each other in an explicit way, resulting in a new range of behaviors. We obtained a general expression for the energy as a function of the radii and permeabilities in the limit of dilute media. In particular, for a fixed distance between the shells, in the limit of radii approaching infinity, we regain, in essence, the standard expression for the Casimir energy between parallel dielectric media. However, now the sign of the interaction energy in (19) is determined by the relative size of the permeabilities: for a shell enclosed between two vacua we get attraction between the boundaries, as it should be for the standard Casimir two-plate case. Repulsion is obtained when the permeability of the shell itself lies between the permeability of the inner substance and the permeability of the outer substance (i.e. µ_I < µ_II < µ_III or µ_I > µ_II > µ_III).

Appendix

Two integration formulae, eqs. (26) and (27), are needed to calculate the I's.
2014-10-01T00:00:00.000Z
2001-08-08T00:00:00.000
{ "year": 2001, "sha1": "99f14a5686e6ae4ab5f200a1c5d9a9caff14ac3f", "oa_license": null, "oa_url": "http://arxiv.org/pdf/hep-th/0108053", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "ca32da2b3677834e72ab967cc83c276307fccf05", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
256380314
pes2o/s2orc
v3-fos-license
Aromatic compounds-mediated synthesis of anatase-free hierarchical TS-1 zeolite: Exploring design strategies via machine learning and enhanced catalytic performance

Simultaneously constructing mesopores and eliminating anatase has been a long-term pursuit for enhancing the catalytic performance of TS-1. Here, we developed an aromatic compounds-mediated synthesis method to prepare anatase-free and hierarchical TS-1 for olefin epoxidation. A series of hierarchical TS-1 zeolites were prepared by introducing aromatic compounds containing different functional groups during the crystallization process. The formation of intercrystalline mesopores and the insertion of titanium into the framework were facilitated to different extents. The synergistic coordination of the carboxyl and hydroxyl groups in the aromatic compounds with Ti(OH)4 realizes a uniform distribution of titanium species and eliminates the generation of anatase. Noteworthily, eight machine learning models were trained to reveal the effect of additive functional groups and preparation conditions on anatase formation and microstructure optimization. The prediction accuracy of most models reaches more than 80%. Benefiting from the larger mesopore volume (0.37 cm3·g−1) and higher content of framework Ti species, the TS-DHBDC-48h samples exhibit a higher catalytic performance than the other zeolites, giving a 1-hexene conversion of 49.3% and a 1,2-epoxyhexane selectivity of 99.9%. The paper provides a facile aromatic compounds-mediated synthesis strategy and promotes the application of machine learning toward the design and optimization of new zeolites.

INTRODUCTION

Epoxides are among the most useful synthetic intermediates for producing high added-value chemicals in the chemical industry, [1][2][3] so great efforts have been made to improve the catalytic efficiency of olefin epoxidation in the last century. As an important branch of MFI topological zeolites, titanium silicalite-1 (TS-1) has been widely applied in the catalytic oxidation of various olefins since its discovery, owing to its active titanium species, [4,5] unique micropore structure, [6,7] and high hydrothermal stability. [8][9][10][11][12][13] Olefin oxidation catalyzed by TS-1 using H2O2 as the oxidant can achieve high efficiency under mild conditions, and water is the only by-product, which attracts increasing attention as an environmentally friendly and pollution-free process. [14][15][16][17][18] The coordination state of the titanium species in TS-1, which exist in various forms, is significant for the catalytic performance. [19] The highly dispersed tetrahedrally coordinated Ti species (TiO4) in the framework, [20] and sometimes highly coordinated Ti species (TiO5 and TiO6 species), are considered the catalytically active sites. [4] However, anatase TiO2, in the form of octahedral coordination, is harmful because it can decompose H2O2 and thus reduce the effective utilization of the oxidant. [21][22][23] Unfortunately, Ti species are likely to aggregate and generate anatase owing to the mismatched hydrolysis rates of the Ti source and the Si source. [24] Furthermore, the crossed micropore channels of TS-1, less than 2 nm wide, limit the accessibility of bulky molecules to the active sites, [2,10,25] resulting in low conversion of substrates. Enhancing the content of framework Ti, eliminating the formation of anatase and constructing hierarchical porous structures have long been key research goals for TS-1 zeolites.
Plenty of approaches have been developed to prepare anatase-free and hierarchical TS-1 zeolites. Hard templates, such as chitosan and precursor nanoparticles, were used to synthesize hierarchical TS-1 by hydrothermal or steam-assisted crystallization methods. [22,26,27] However, the interaction between most hard templates and Ti species is weak, so it is difficult to regulate the coordination state of the Ti species. [28] Macromolecular compounds, such as polyacrylamide (PAM), 2-(2-[4-(1,1,3,3-tetramethylbutyl)phenoxy]ethoxy)ethanol (Triton X-100), polyvinyl butyral (PVB), polydiallyldimethylammonium chloride (PDADMAC) and polyvinyl alcohol (PVA), were used as soft templates to synthesize mesoporous or anatase-free TS-1 with different morphologies. [2,[29][30][31][32] Recently, introducing crystallization modifiers has been proven to be a feasible strategy to modulate the crystallization process of TS-1. L-Lysine acts as a crystallization inhibitor to promote the oriented aggregation of nanoparticles in a noncompact manner, and interconnected mesopores are obtained by a two-step crystallization strategy. [33] The amino acid L-carnitine can interact with Ti precursors to realize a uniform distribution of framework Ti species and control the morphology of TS-1 zeolites. [28] However, constructing abundant mesopores and eliminating the formation of anatase simultaneously during the crystallization of TS-1 is still very challenging. It is therefore highly desirable to develop a facile and effective method for fabricating hierarchical and anatase-free TS-1. In recent years, machine learning methods have shown broad prospects in materials structural design and performance prediction, [34][35][36] providing a new way to accelerate the structural design optimization and performance prediction of zeolites and other materials. [37][38][39][40] In this work, a novel one-step aromatic compounds-mediated synthesis method was developed for fabricating anatase-free hierarchical TS-1 zeolites. Aromatic compounds with good structural stability containing various functional groups were chosen as multifunctional mediators to guide the formation of intercrystalline mesopores, regulate the distribution of Ti species and eliminate the generation of anatase during the crystallization process. A machine-learning strategy was introduced to investigate and discuss the key influencing factors of the anatase-free hierarchical TS-1: eight machine learning models were used to study the effects of 12 features and show an accuracy of more than 80% in predicting the formation of anatase. The catalytic performance of the obtained TS-DHBDC-48h samples is improved by 29.4% in the epoxidation of 1-hexene and by 53.3% in that of cyclohexene compared with conventional TS-1 samples. This paper provides a feasible strategy to regulate the coordination state of Ti and induce the formation of mesopores in TS-1 simultaneously, and promotes the development of machine-learning-assisted fabrication of new zeolites.

Synthesis of TS-1

Nano-sized TS-1 was synthesized by a hydrothermal method with the molar composition of SiO2: The samples prepared using BDC/DABDC/DHBDC crystallized for 48 h were named TS-BDC-48h, TS-DABDC-48h and TS-DHBDC-48h, respectively. The sample without aromatic compounds was named TS-WAC-48h for reference. Furthermore, the DHBDC-mediated TS-1 zeolites with crystallization times of 0.5 h / 1 h / 1.5 h / 2 h / 4 h / 24 h were also synthesized and named TS-DHBDC-xh, where xh represents the crystallization time.
Characterization

The crystallinity and phase purity were measured by powder X-ray diffraction (XRD) via a Bruker D8 ADVANCE X-ray diffractometer with Cu Kα1 radiation (λ = 1.5406 Å) operated at 40 kV and 40 mA. The relative crystallinity (RC) is the ratio of the total characteristic peak (7.9°, 8.9°, 23.1°, 23.9° and 24.4°) areas of the aromatic compounds-mediated zeolites to that of the reference sample. A SHIMADZU UV-2550 absorption spectrometer was used to record the ultraviolet-visible diffuse reflectance spectra (UV-Vis) in the range of 200-450 nm, and a nanoscale BaSO4 plate was used as a reflectance reference. UV-Raman spectra with an excitation wavelength of 325 nm were acquired on a Horiba Scientific LabRAM HR Evolution UV-Raman spectrograph. Fourier transform infrared (FT-IR) spectra were recorded on a Nicolet 6700 spectrometer in the range from 4000 cm−1 to 400 cm−1. X-ray photoelectron spectroscopy (XPS) was used to probe the state of the Ti elements in the composites with a Thermo Escalab 250XI (America). Micromorphology and grain size were examined by scanning electron microscopy (SEM) using the SU8000 electron microscope. Transmission electron microscopy (TEM) was performed on a Hitachi 7700 to detect the microstructure of all samples. N2 adsorption-desorption isotherms were recorded on a Micromeritics ASAP 2420 instrument at 77 K. The specific surface area and pore size distribution were analyzed by the density functional theory method. Inductively coupled plasma optical emission spectrometry (ICP-OES) was employed to determine the contents of Ti and Si in the samples on an Agilent ICP-OES 730 (America) instrument. The MAS NMR spectra of 13C and 29Si were measured with a 500 MHz Avance III 600WB spectrometer (Bruker, Germany) to characterize the chemical environment of the C and Si atoms. The chemical shifts of 13C and 29Si were determined using tetramethylsilane as a reference with a magic angle spinning rate of 5 kHz. Thermogravimetric (TGA) curves were obtained using a Thermo Gravimetric Analyzer (449F3, Germany), with the temperature ranging from room temperature to 873 K at a heating rate of 10 K⋅min−1.

Machine learning models

Random Forest (RF), Support Vector Machine (SVC), Decision Tree (DT), Stochastic Gradient Descent (SGD), Logistic Regression (LR), eXtreme Gradient Boosting (XGBoost), k-Nearest Neighbors (KNN) and Naive Bayes (NB) models were trained to predict the existence of anatase using 12 features, including the type and number of functional groups in the additives and the preparation parameters. The additives include those reported in the literature and those used during our experiments (see attachment). "1" represents that anatase exists in the prepared TS-1, and "0" is the opposite. We first processed the data with the Synthetic Minority Over-sampling Technique (SMOTE) algorithm to eliminate the scale imbalance caused by the inconsistent numbers of 0 and 1 labels. Then the collected data were divided into two sets, 70% of which formed the training set and 30% the test set. We implemented the 10-fold cross-validation method to verify whether the above models show an overfitting phenomenon and reported the prediction accuracy in the evaluation metrics. All the algorithms were obtained from the Scikit-Learn package and programmed using Python.
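As an illustration of the workflow just described, here is a minimal, hypothetical sketch (not the authors' actual script): it assumes the 68 records sit in a CSV file with the 12 feature columns plus a 0/1 anatase label, and it uses the SMOTE implementation from the imbalanced-learn package alongside Scikit-Learn and XGBoost.

```python
# Minimal sketch of the anatase-prediction workflow; the file name and column
# names are illustrative assumptions, not the authors' actual data layout.
import pandas as pd
from imblearn.over_sampling import SMOTE
from sklearn.model_selection import cross_val_score, train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LogisticRegression, SGDClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB
from xgboost import XGBClassifier

df = pd.read_csv("ts1_synthesis_records.csv")        # hypothetical file
X, y = df.drop(columns=["anatase"]), df["anatase"]   # y: 1 = anatase present

# Balance the 0/1 classes by oversampling the minority class (SMOTE).
X_bal, y_bal = SMOTE(random_state=0).fit_resample(X, y)

# 70/30 train/test split, as in the paper.
X_tr, X_te, y_tr, y_te = train_test_split(
    X_bal, y_bal, test_size=0.3, random_state=0)

models = {
    "RF": RandomForestClassifier(), "SVC": SVC(), "DT": DecisionTreeClassifier(),
    "SGD": SGDClassifier(), "LR": LogisticRegression(max_iter=1000),
    "XGBoost": XGBClassifier(), "KNN": KNeighborsClassifier(), "NB": GaussianNB(),
}
for name, model in models.items():
    cv = cross_val_score(model, X_bal, y_bal, cv=10)   # 10-fold cross-validation
    acc = model.fit(X_tr, y_tr).score(X_te, y_te)      # held-out accuracy
    print(f"{name}: CV = {cv.mean():.2f}, test = {acc:.2f}")
```

In a fuller version, the hyperparameter tuning step mentioned later (Bayesian optimization) would wrap each model before scoring.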
Catalytic tests

The epoxidation of 1-hexene with H2O2 was carried out in a 25 mL three-necked round-bottom flask with a reflux condenser. Ten milliliters of methanol, 10 mmol of 1-hexene, 10 mmol of H2O2 (30 wt%) and 50 mg of catalyst were added to the flask in order. The reaction was then started at 333 K with vigorous magnetic stirring for 2 h. Afterwards 0.5 mL of the reaction solution was mixed with 2 mL of methanol, and the mixture was centrifuged at 10,000 r/min for 3 min. The supernatant was analyzed by a gas chromatograph-mass spectrometer (Agilent 7890A). The conversion of 1-hexene and the selectivity of the products were calculated via the area normalization method. The recycled TS-1 samples were washed with methanol three times and dried at 353 K under vacuum overnight. The recycled zeolites were then calcined at 823 K in air for 3 h.
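For concreteness, the area normalization method reduces, under the simplifying assumption of equal response factors, to simple peak-area ratios; the peak areas in the following sketch are invented purely to illustrate the arithmetic.

```python
# Hedged sketch of area normalization for GC-MS quantification.
# Assumes equal response factors for all species (the usual simplification).
def conversion_and_selectivity(substrate_area, product_areas):
    """substrate_area: residual 1-hexene peak area;
    product_areas: dict mapping product name -> peak area."""
    products_total = sum(product_areas.values())
    conversion = 100.0 * products_total / (substrate_area + products_total)
    selectivity = {name: 100.0 * area / products_total
                   for name, area in product_areas.items()}
    return conversion, selectivity

# Invented peak areas chosen so the output mirrors the reported values.
conv, sel = conversion_and_selectivity(
    5070.0, {"1,2-epoxyhexane": 4925.0, "pentanal": 3.0, "1,2-hexanediol": 2.0})
print(f"1-hexene conversion: {conv:.1f}%")                            # ~49.3%
print(f"1,2-epoxyhexane selectivity: {sel['1,2-epoxyhexane']:.1f}%")  # ~99.9%
```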
Characterization of TS-1 zeolites

The preparation process of the aromatic compounds-mediated synthesis method is shown in Scheme 1. Firstly, the mixture of TPAOH/H2O and TEOS/TBOT was hydrolyzed under stirring at room temperature, and the produced alcohols were removed at 353 K. Then the aromatic compounds and the hydrolysate were added into a Teflon-lined stainless autoclave, and the crystallization process was carried out. Finally, the products were obtained after washing, drying and calcination. These samples were named by the kind of aromatic compound and the crystallization time. The DHBDC-mediated zeolites with different Ti/Si molar ratios (0.045, 0.050 and 0.055) were also synthesized following the same procedure. TS-DHBDC-48h (TTIP) and TS-DHBDC-48h (TEOT) were prepared following the same synthesis process as TS-DHBDC-48h, with TBOT replaced by TTIP and TEOT. The XRD patterns of the prepared TS-1 samples are shown in Figure 1A,B. The characteristic peaks at 7.9°, 8.9°, 23.1°, 23.9° and 24.4° match the standard diffraction peaks (JCPDS No. 43-0055) and prove that high-purity TS-1 zeolites were successfully synthesized. [10,20] Because the ionic radius of Ti4+ is larger than that of Si4+, the increased cell volume is strong evidence for the insertion of Ti atoms into the framework and the formation of tetrahedrally coordinated Ti species. [6,7] Lattice constants and cell volumes are shown in Table S1. The cell volume of the aromatic compounds-mediated zeolites is larger than that of the reference sample, indicating that the aromatic compounds are helpful for the coordination of Ti species with Si species to form TiO4 species. The relative crystallinity of the aromatic compounds-mediated zeolites is higher than that of TS-WAC-48h, which proves that the aromatic compounds contribute to enhancing the crystallinity of TS-1. The XRD patterns of the TS-DHBDC samples prepared at different crystallization times are shown in Figure 1B. The DHBDC-mediated zeolites show a growing trend in cell volume and RC with increasing crystallization time. [41,42] The characteristic peaks of TS-DHBDC-1h at 23.9° and 24.4° had not yet formed, and the crystallinity is much lower. The characteristic peaks of the MFI topology have formed, and the crystallinity is significantly improved, when the crystallization time is extended to 2 h. All these results suggest that introducing aromatic compounds can effectively accelerate the crystallization process and enhance the crystallinity of TS-1 zeolites. The Ti coordination states of the as-synthesized TS-1 zeolites were investigated by UV-Vis spectroscopy in Figure 1C. All the samples show a strong absorption band at λ = 220 nm, which is assigned to charge transfer between tetrahedrally coordinated Ti and O in the framework, [43,44] suggesting that Ti species have been successfully inserted into the framework. Compared with TS-DHBDC-48h, the curves of TS-BDC-48h, TS-DABDC-48h and TS-WAC-48h exhibit an absorption band at λ = 300-330 nm, proving the existence of anatase TiO2. [41,45] 2,5-Dihydroxyterephthalic acid differs from terephthalic acid only in its two extra hydroxyl groups, yet anatase TiO2 is clearly detected in TS-BDC-48h but not in TS-DHBDC-48h. To obtain more detailed information about the crystallization process, the UV-Vis spectra of the DHBDC-mediated TS-1 zeolites with different crystallization times from 1 to 24 h were analyzed in Figure S1A. During the entire crystallization, no peaks attributable to anatase TiO2 can be found. To further study the inhibition effect of DHBDC on the formation of anatase TiO2, TS-1 samples with different Ti/Si molar ratios (0.045, 0.050 and 0.055) were synthesized using the same synthesis method as TS-DHBDC-48h. It can be seen in Figure S1B that even when the Ti/Si molar ratio is significantly increased, no anatase can be detected. Therefore, it can be concluded that DHBDC plays a vital role in avoiding the formation of anatase. Furthermore, the hydrolysis rates of TEOT and TTIP are faster than that of TBOT, easily leading to the aggregation of Ti species and the formation of anatase TiO2. In order to qualitatively analyze the coordination state of the Ti species, UV-Raman spectra of the synthesized zeolites were recorded and are shown in Figure 1D. TS-WAC-48h, TS-BDC-48h and TS-DABDC-48h show similar peaks at 390, 516 and 637 cm−1, indicating the existence of anatase TiO2 in these samples. [47,48] Consistent with the UV-Vis results, TS-DHBDC-48h exhibits no characteristic peaks of anatase TiO2. The peak at 380 cm−1 is characteristic of silicalite-1 zeolites, and the peaks at 960 and 1125 cm−1 are ascribed to the Ti-O-Si groups in the framework. [47] The intensity of the characteristic peak at 960 cm−1 in TS-WAC-48h was significantly weaker than that of the aromatic compounds-mediated zeolites. Therefore, it can be qualitatively determined that more extra-framework Ti species may exist in TS-WAC-48h, and the aromatic compounds-mediated synthesis method is considered an effective strategy to enhance the Ti content in the framework and prevent the generation of anatase TiO2. FT-IR spectroscopy was used to discuss the coordination environment of the Ti species in the prepared zeolites. The absorption bands at 1230, 1100, 960, 550 and 455 cm−1 belong to the MFI topology, [49] indicating the successful synthesis of TS-1 samples, in agreement with the XRD analysis. The peak at 960 cm−1 is ascribed to the vibration of the Ti-O-Si group in the framework, proving that Ti species incorporate into the framework successfully. [7] The intensity ratio of the absorption bands at 960 and 800 cm−1 (I960/I800) was calculated to evaluate the Ti content in the framework of the different samples. [24] The I960/I800 of the aromatic compounds-mediated zeolites is 2.20, 2.14 and 2.15, higher than that of TS-WAC-48h. This result demonstrates that aromatic compounds are beneficial for facilitating the coordination of Ti species with Si species and for increasing the Ti content in the framework. Furthermore, the characteristic peak at 960 cm−1 formed at 1.5 h of crystallization, and the I960/I800 of TS-DHBDC-4h already exceeds that of TS-WAC-48h in Figure S2A.
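The I960/I800 indicator above is a simple intensity ratio; the sketch below shows one way it could be read off a baseline-corrected FT-IR spectrum (the synthetic spectrum here is only for demonstration, not measured data).

```python
# Sketch: framework-Ti indicator as the ratio of band maxima near 960 and
# 800 cm^-1 in a baseline-corrected FT-IR spectrum (synthetic data below).
import numpy as np

def band_ratio(wavenumbers, absorbance, band_a=960.0, band_b=800.0, window=10.0):
    def peak(center):
        mask = np.abs(wavenumbers - center) <= window
        return absorbance[mask].max()
    return peak(band_a) / peak(band_b)

wn = np.linspace(400, 1400, 2001)
ab = (2.15 * np.exp(-((wn - 960) / 15.0) ** 2)     # Ti-O-Si band
      + 1.00 * np.exp(-((wn - 800) / 15.0) ** 2))  # reference band
print(f"I960/I800 = {band_ratio(wn, ab):.2f}")     # ~2.15, cf. TS-DHBDC-48h
```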
In order to determine the chemical interaction between DHBDC and the Ti species, the FT-IR spectra of the supernatant at the early crystallization stage of the DHBDC-mediated zeolites are shown in Figure S2B. The Ti-O-Si bonds at 960 cm−1 and Ti-O-C bonds at 1125 cm−1 formed gradually as the crystallization time increased. [50] The Ti-O-C bonds prove that the Ti(OH)4 species have a suitable chemical interaction with the C-OH bonds in DHBDC and provide evidence for coordination interactions between DHBDC and the Ti species. Yu et al. believed that the amino/carboxyl groups of amino acids strongly interact with specific crystal surfaces of zeolites to regulate the Ti coordination states. [42] Wang et al. proposed that Ti nodes coordinate with MOF ligands (2-aminoterephthalic acid) to avoid the generation of anatase when Ti MOFs are used as Ti-containing precursors. [51] Similarly, the chemical interaction between DHBDC and the Ti species can be speculated to be a coordination interaction of the functional groups. The XPS analysis in Figure 2B shows the chemical composition and electronic structure of the Ti species. All TS-1 samples exhibit signals at binding energies of 460.1 and 465.4 eV, while the peak at a binding energy of 458.4 eV exists only in TS-WAC-48h, TS-BDC-48h and TS-DABDC-48h. The signals at binding energies of 460.1 and 458.4 eV are assigned to Ti 2p3/2, corresponding to tetrahedrally coordinated Ti(IV) species in the framework and octahedrally coordinated Ti species (TiO6) in the extra-framework, respectively. [41] According to the elemental analysis by XPS, the Ti species in TS-DHBDC-48h are in the form of tetrahedrally coordinated Ti4+ without extra-framework Ti species. These phenomena indicate that DHBDC promotes the insertion of Ti species into the framework and restricts the formation of anatase, in accord with the UV-Vis and UV-Raman results. The SEM and TEM images of the conventional sample and the hierarchical samples are shown in Figure 3. All samples exhibit an irregular, ellipsoid-like morphology with a smooth surface. [42] The grain size of these samples ranges from 130 to 150 nm. Comparing Figures S3A and S4A, it can be seen that TS-DHBDC-1h has a larger grain size than TS-WAC-1h. The pH of the hydrolysate before and after adding the aromatic compounds is shown in Table S2. The hydrolysate with added DHBDC has a relatively low pH, which significantly reduces the nucleation rate and increases the growth rate. Therefore, the particle size of TS-DHBDC-1h is larger at the initial crystallization stage. [33] Mesopores can be found in the DHBDC-mediated zeolites. According to Figure S4F and G, the TEM images of TS-DHBDC-1h and TS-DHBDC-1.5h prove that the intergranular mesopores were formed during the aggregation of nanozeolite particles. [33] As shown in the TEM images of Figure 3E-H, TS-1 zeolites synthesized via the aromatic compounds-mediated method exhibit noticeable intercrystalline mesopores, while no obvious intergranular mesopores can be found in TS-WAC-1h and TS-WAC-48h. It can be inferred that aromatic compounds have significant effects on guiding the formation of intergranular mesopores. ICP-OES analysis was performed to determine the mass fraction of total Ti species and the Si/Ti molar ratio; the mass fraction of Ti given in Table 1 therefore includes the mass of both tetrahedrally coordinated Ti species and highly coordinated Ti species. Compared with the other zeolites, TS-DHBDC-48h possesses a much lower Ti content (1.53 wt%) and a higher Si/Ti molar ratio (48.6). Based on the UV-Vis, UV-Raman and XPS analyses, it is clear that the titanium in TS-DHBDC-48h exists in tetrahedral coordination, so the 1.53 wt% of Ti species can be taken as the mass fraction of framework Ti species.
For TS-WAC-48h, TS-BDC-48h and TS-DABDC-48h, prepared without DHBDC as mediator, the existence of anatase may be responsible for the high mass fractions of Ti. The N2 adsorption-desorption isotherms and pore width distribution curves of the prepared samples are shown in Figure 4. All samples except TS-WAC-48h exhibit typical type-IV N2 adsorption-desorption isotherms in Figure 4A. The N2 adsorbed amounts of the TS-1 samples synthesized via the aromatic compounds-mediated method increase rapidly at P/P0 = 0.9-1.0, indicating that mesopores were successfully introduced. The emergence of the H3 hysteresis loop may be ascribed to slit mesopores formed during the aggregation and accumulation of the zeolite nanoparticles. [10] The DHBDC-mediated zeolites prepared at different crystallization times display the same isotherms and hysteresis loop in Figure S5A, which proves the existence of intercrystalline mesopores during the entire crystallization process. On the contrary, the N2 adsorption-desorption isotherm of TS-WAC-48h is type-I and shows the features of microporous materials. [52] As shown in Figure 4B, the micropores distributed at less than 1 nm in all samples are characteristic of MFI topological channels. [52] In Figure S5B, the main mesopore size gradually increases with prolonged crystallization time. The mesopore width of TS-DHBDC-1h ranges from 20 to 30 nm, while the pore size increases to 30-40 nm for TS-DHBDC-4h. Finally, the mesopore width of TS-DHBDC-48h is centrally distributed at 50 nm, larger than that of the other samples. It can be speculated that small intergranular mesopores become interconnected and grow into larger ones during the aromatic compounds-mediated crystallization process. [33] The textural structure of the synthesized TS-1 zeolites is illustrated in Table 1. Solid-state nuclear magnetic resonance was carried out to identify the effect of 2,5-dihydroxyterephthalic acid and the chemical environment of the Si/Ti species. The MAS NMR spectra of TS-DHBDC-48h, 13C before calcination and 29Si after calcination, are shown in Figure 5. TS-DHBDC-48h before calcination displays strong characteristic signals at δ = 9.7, 15.4 and 62.8 ppm in Figure 5A, which are assigned to the three carbon species of the template TPA+ trapped in the zeolite channels. [53] In addition, the signals of the carboxyl group and of the C atoms in the benzene ring of 2,5-dihydroxyterephthalic acid are not detected, indicating that the aromatic compounds are removed from TS-1 with the supernatant during centrifugation. This indicates that DHBDC exerts its inhibitory effect on the formation of anatase TiO2 during crystallization without being incorporated into the framework of TS-1. In the 29Si MAS NMR spectra, the resonance signal at δ = −113.0 ppm belongs to Si(OSi)4 (Q4) species, and a shoulder appears at −115.9 ppm. [54,55] The two resonance peaks at −113.0 and −115.9 ppm appear simultaneously, indicating that the crystal system of the zeolite changes from monoclinic to orthorhombic, which is compelling evidence of Ti atoms entering the zeolite lattice. [42] The resonance signal at −102.4 ppm is ascribed to Si(OSi)3OH (Q3) species, and its peak intensity can directly reflect the quantity of defect sites in zeolites. [56] These results prove that the DHBDC-mediated TS-DHBDC-48h possesses excellent crystallinity and few defects.
The formation and evolution mechanism of mesopores in TS-DHBDC-48h

In order to better investigate the effects of the aromatic compounds on the formation of intergranular mesopores, TGA curves of TS-DHBDC-48h and TS-WAC-48h before calcination were obtained (Figure S6). The weight losses of TS-DHBDC-48h and TS-WAC-48h are 12.17% and 12.57%, respectively. The small difference in weight loss between TS-DHBDC-48h and TS-WAC-48h indicates that the aromatic compound is almost completely removed by the subsequent centrifugation. Thus, the aromatic compound only guided the formation of intercrystalline mesopores during crystallization and was not incorporated into the zeolite frameworks. It is certain that charge repulsion exists between the negatively charged aromatic carboxylates and the silica nanoparticles in the basic environment during crystallization, but there is also a certain binding force between them. Yu et al. demonstrated that the complexation between L-lysine and silicon species leads to the aggregation of protozeolite nanoparticles in a noncompact manner. [33] It can be determined that aromatic compounds play an active role in the aggregation and maturation of the zeolite nanoparticles. Hydroxyl groups are strongly polar, so the association phenomenon is very significant. As shown in Figure S2B, the association peaks of intermolecular hydrogen bonds formed between hydroxyl groups appear in the FT-IR spectrum at 3550-3200 cm−1. [57] The intermolecular hydrogen bonds may involve not only the chemical interaction between silicon hydroxyls and titanium hydroxyls, but also that between the nano silica particles and the aromatic compounds. Hydrogen-bonded complexes are an important embodiment of the complexation effect. Therefore, the interaction that makes the carboxyl groups of the aromatic compounds attach to the surface of the silica nanoparticles can be speculated to be hydrogen bonding. The crystallization pathway of aromatic compounds-mediated TS-1 is a nonclassical crystallization pathway. [58] Firstly, the aggregation of nanoparticles is dominant within the first 0.5 h of crystallization. Zeolite nanoparticles begin to undergo oriented attachment under the guidance of the aromatic compounds, as shown in the selected area of Figure S7A. The size of the formed TS-DHBDC-0.5h nanoparticles is about 10 nm in Figure S7B. The stacking of the protozeolite particles follows an oriented-attachment manner. [33] Secondly, intraparticle ripening occurs preferentially over nanoparticle oriented attachment after 1 h of crystallization. The size of the TS-DHBDC samples prepared from 1 to 48 h remains almost the same (Figure S4 and Figure 3D). The intraparticle ripening process involves larger zeolite particles swallowing up smaller ones to minimize the Gibbs free energy of the whole system. [59,60] Intergranular mesopores inside the particles are derived from the ripening process and then grow into larger ones. This ripening phenomenon can be observed in Figure S4F-J. Finally, the continuous phagocytosis between zeolite nanoparticles eventually leads to a relatively smooth surface and interconnected intercrystalline mesopores. Scheme 2 illustrates the grain growth process under the mediation of the aromatic compounds and the intergranular mesopore formation mechanism of the hierarchical zeolites. The TEM images of TS-DHBDC-0.5h to TS-DHBDC-48h in Scheme 2 also prove the evolution process of the intercrystalline mesopores, in accordance with the evolution of the above-mentioned mesopore widths from the BET analysis.
In addition, the mechanism is applicable to the preparation of mesoporous TS-1 mediated by other aromatic compounds containing carboxyl groups.

The elimination mechanism of anatase

In order to identify whether the hydroxyl and carboxyl groups in 2,5-dihydroxyterephthalic acid act alone or synergistically to regulate the crystallization process, the UV-Vis spectrum of a hydroquinone-mediated zeolite (TS-HQ-48h) was also tested. The spectrum shows an obvious absorption peak at 300-330 nm (Figure S8), indicating that hydroxyl groups alone cannot prevent anatase and that the hydroxyl and carboxyl groups act synergistically to inhibit its formation. [22] Since titanium hydroxyls and silicon hydroxyls partially form polymers during hydrolysis, the hydroxyl and carboxyl groups coordinate with the titanium hydroxyls to achieve a uniform distribution of Ti species during the depolymerization process. The detailed depolymerization, coordination and reorganization processes are shown in Scheme 3. However, due to the lack of coordinating hydroxyl groups in terephthalic acid, uncoordinated Ti-OH may combine with other Ti species, resulting in the aggregation of Ti species and the generation of extra-framework anatase. In addition, the binding effect of the amino group in DABDC with Ti-OH is weaker than that of the hydroxyl group, [61] which also leads to an uneven distribution of Ti species and the formation of anatase. The synergistic coordination of the hydroxyl and carboxyl groups plays a key role in eliminating anatase. Thus, adding appropriate additives during the crystallization of zeolites is of great significance for improving the coordination of Ti species.

Scheme 3: The elimination mechanism of anatase in TS-DHBDC-48h during the crystallization process. The hydrogen atoms have been omitted for viewing purposes.

MACHINE LEARNING-ASSISTED INVESTIGATION OF AROMATIC COMPOUNDS-MEDIATED SYNTHESIS PROCESS

In recent years, machine learning has attracted great attention in materials screening and performance prediction owing to its advantages in recognizing hidden laws in materials data. [35] In the literature related to zeolites, the principles of machine learning have been successfully applied to synthesize zeolites with different topological structures. [62,63] However, few reports are related to the structural optimization of zeolites by machine learning models. This section discusses the effect of the synthesis conditions on anatase and broadens the application of crystallization modifiers in preparing high-performance TS-1. Based on 68 groups of experimental data from our previous experiments and from reported articles, machine learning models were trained to study the effects of the preparation conditions and of the different functional groups in the additives on the formation of anatase. The additives include BDC, DABDC, DHBDC, p-phenylenediamine, 2-methylimidazole, p-aminobenzoic acid, 2-aminoterephthalic acid, 2,5-dimethylterephthalic acid, hydroquinone, Tween20, L-carnitine, Triton X-100, PVB, PDADMAC, 1,3,5-benzenetricarboxylic acid and PEG-1000. [22,[28][29][30][31]64,65] We visualized the types and quantities of the functional groups in the additives, as well as the preparation parameters (Ti/Si molar ratio, TPAOH/Si molar ratio, H2O/Si molar ratio, crystallization temperature and time). The parallel coordinate plot is shown in Figure S9. The ordinate of each functional group is calculated by multiplying the molar number of the additive by the number of functional groups it contains.
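In code, that ordinate calculation amounts to one multiplication per group; the additive compositions in this sketch are written from the molecular structures and are meant only as an illustration of the feature construction, not as the authors' actual encoding.

```python
# Sketch of the functional-group feature construction: feature value =
# (moles of additive) x (number of that group per additive molecule).
import pandas as pd

GROUPS_PER_MOLECULE = {          # illustrative additive compositions
    "DHBDC": {"carboxyl": 2, "hydroxyl": 2},
    "BDC": {"carboxyl": 2},
    "DABDC": {"carboxyl": 2, "amino": 2},
    "hydroquinone": {"hydroxyl": 2},
}

def group_features(additive, moles, groups=("carboxyl", "hydroxyl", "amino")):
    counts = GROUPS_PER_MOLECULE.get(additive, {})
    return {g: moles * counts.get(g, 0) for g in groups}

records = [("DHBDC", 0.01), ("BDC", 0.01), ("hydroquinone", 0.01)]
print(pd.DataFrame([group_features(a, n) for a, n in records],
                   index=[a for a, _ in records]))
```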
Blue lines (1) represent conditions under which anatase exists in TS-1, and red lines (0) represent the opposite results. When the additives contain amino, imino or carbon-carbon double bond functional groups, the probability of anatase in the TS-1 samples is high. This indicates that additives containing the above functional groups may have a positive effect on the formation of anatase. However, the influence of the preparation parameters is complex, so machine learning models were used to analyze the data more deeply. We processed the data with the SMOTE algorithm to solve the problem of unbalanced data categories. Eight machine learning models, including RF, SVC, DT, SGD, LR, XGBoost, KNN and NB, were trained to predict the existence of anatase. In the standard 10-fold cross-validation process, the collected data were divided into 10 subsets. Then, the eight machine learning models were trained on nine subsets, and the remaining subset was used as the test set. [35] After adjusting the parameters by the Bayesian optimization algorithm, the cross-validation score of each model is shown in Figure 6A. The XGBoost, RF, NB, LR and DT models perform well, with 10-fold cross-validation accuracies above 0.8. In addition, we took 70% of the collected data set as a training set and 30% as a test set to train the above eight machine learning models, and realized high-speed and low-cost performance prediction for TS-1. The results are shown in Figure 6B.

Figure 6: Comparison of machine learning model scores: (A) accuracy of 10-fold cross-validation, (B) accuracy on the training set and test set.

It can be seen that the SVC and SGD models show good accuracy on the training set but poor performance on the test set. There is an evident overfitting phenomenon for the SVC and SGD models, indicating that these two methods have large generalization errors and poor generalization ability for this data set. The RF, DT, LR, XGBoost and NB models show excellent prediction accuracy on both the training and test sets, which corresponds to the results of the 10-fold cross-validation. The eight machine learning models were further compared by receiver operating characteristic (ROC) curves in Figure S10. The ROC curve shows the relationship between the true positive rate (recall) and the false positive rate. These evaluation metrics are used to evaluate the results of the TS-1 classification and anatase detection. The dotted line in Figure S10 represents the ROC curve of a pure random classifier. A classifier with excellent performance should deviate far from the dotted line and be located in the upper left corner, indicating that the machine learning model has higher prediction accuracy. [35] It is unrealistic to distinguish several classifiers by directly comparing ROC curves, so the areas under the curves (AUC) were measured to make a more intuitive judgment in Figure S10. The AUC of a perfect classifier should be 1, while the AUC of a pure random classifier should be 0.5. [35] Compared with the other models, the NB and LR models show good prediction accuracy, with AUC values of 0.90 and 0.88, respectively.
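The ROC/AUC comparison just described can be sketched as follows, reusing the models and the train/test split from the earlier pipeline sketch (again an illustration, not the authors' script).

```python
# Sketch of the ROC/AUC comparison on the held-out test set.
import matplotlib.pyplot as plt
from sklearn.metrics import RocCurveDisplay, roc_auc_score

fig, ax = plt.subplots()
for name, model in models.items():
    model.fit(X_tr, y_tr)
    if hasattr(model, "predict_proba"):
        scores = model.predict_proba(X_te)[:, 1]
    else:                                # e.g. SVC/SGD without probabilities
        scores = model.decision_function(X_te)
    RocCurveDisplay.from_predictions(y_te, scores, name=name, ax=ax)
    print(f"{name}: AUC = {roc_auc_score(y_te, scores):.2f}")
ax.plot([0, 1], [0, 1], "k--", label="pure random classifier")  # the dotted line
ax.legend()
plt.show()
```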
Most machine learning algorithms are "black boxes", whose prediction results are difficult to interpret. [35] For the four models with high prediction accuracy (NB, LR, RF, DT), the feature importance can rank the features, [36] which is helpful for studying the reaction mechanism and accelerating the development of anatase-free TS-1. Figure S11 shows the importance of the 12 features, and the four models all reveal that the hydroxyl group has a critical impact on the formation of anatase. [36,66] Combined with the experimental part of this paper, it can be inferred that the hydroxyl group can effectively inhibit the aggregation of Ti species and the formation of anatase. In addition, the DT model also shows that the carboxyl group and the crystallization temperature have a significant influence on the results. We therefore speculate that the synergistic effect of the hydroxyl and carboxyl groups inhibits the formation of anatase. A higher crystallization temperature may accelerate the aggregation of Ti species and promote the formation of anatase; the Ti/Si molar ratio and the H2O/Si molar ratio also directly or indirectly affect the existence state of the Ti species in TS-1. The LR model was selected for further data analysis owing to its high prediction accuracy compared with the other models. The partial dependence plot shows the marginal effect of features on the model prediction results. In this part, we evaluated the relevance of the functional groups and preparation conditions to the generation of anatase. Figure 7 displays the change of the predicted value with the feature value. The partial dependence shows a gradual downward trend with increasing usage of hydroxyl, carboxyl, ether bond and ester groups, indicating that these four functional groups play a positive role in inhibiting the formation of anatase. This is consistent with the results for TS-DHBDC-48h and for some TS-1 samples prepared using Triton X-100 and Tween20 as soft templates. However, the imino group, amino group, carbon-carbon double bond and Ti/Si molar ratio can promote the formation of anatase, which is consistent with the results for the samples prepared with 2-methylimidazole and p-phenylenediamine. It is worth noting that, among the above models, the well-performing DT model (AUC = 0.79) is the only white-box model with excellent interpretability. [35] Therefore, the effect of the functional group types and preparation parameters on the formation of anatase in TS-1 can be revealed through the decision process. The obtained flowchart is shown in Figure 8. It shows the decision process of how the reaction results are classified according to the experimental input conditions. The samples attribute of each node indicates the number of instances, and the value attribute of every node describes the number of training instances of each category, representing the number of samples without anatase (left) and with anatase (right). The Gini attribute of a node measures its impurity, and class represents the category of the leaf node. When the number of samples with anatase exceeds the number of samples without anatase, the node is classified as 1 and colored blue; otherwise it is orange. [67] In this case, the most likely path to synthesize TS-1 without anatase was obtained, which is indicated by the red arrow. Starting from the root node, the flowchart is first divided into two categories according to the number of hydroxyl groups in the additives. If the addition amount is >0.014, the decision moves to the right child node. The node then evaluates whether the addition amount of carboxyl groups is ≤0.02 and enters the next depth, where the judgment condition becomes whether the crystallization time is ≤42 h. Because the node is impure and has not reached the maximum depth, it is split here.
The judgment condition is again the number of hydroxyl groups, but its threshold value becomes 0.108. The decision then enters the left leaf node, which is pure and has the deepest orange color. All its instances are classified as the absence of anatase, and the decision tree stops here. In addition, when the added amount of hydroxyl groups is >0.014 and that of carboxyl groups is >0.02, all examples are also classified as the absence of anatase, which corresponds to the above experimental results. Similarly, the instances on the route marked by the blue arrow are classified as the presence of anatase, and the node color is the deepest blue. Based on more experimental data, these machine learning models are expected to achieve higher accuracy. In the future, the developed machine learning models can be combined with high-throughput experiments, which will greatly accelerate the development of high-performance zeolites. [36]
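Before moving to the catalytic results, the interpretability steps above (feature ranking and the decision rules behind Figure 8) can be sketched compactly; the feature names, tree depth and expected branch shape below are assumptions mirroring the description, not the fitted model itself.

```python
# Sketch: rank features with a random forest, then print the decision-tree
# rules; X_bal/y_bal follow from the earlier pipeline sketch.
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

rf = RandomForestClassifier(random_state=0).fit(X_bal, y_bal)
for feat, imp in sorted(zip(X_bal.columns, rf.feature_importances_),
                        key=lambda t: -t[1]):
    print(f"{feat}: {imp:.3f}")        # hydroxyl expected near the top

dt = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X_bal, y_bal)
print(export_text(dt, feature_names=list(X_bal.columns)))
# Expected shape of the anatase-free branch, per the flowchart description:
#   hydroxyl > 0.014
#   |-- carboxyl <= 0.02
#   |   |-- crystallization_time <= 42  ->  ... -> class 0 (no anatase)
#   |-- carboxyl > 0.02                 ->  class 0 (no anatase)
```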
CATALYTIC PERFORMANCE

The catalytic performance of the as-synthesized TS-1 was investigated for the epoxidation of 1-hexene using H2O2 as the oxidant at 333 K for 2 h (Figure 9). TS-WAC-48h shows a poor conversion (38.1%) of 1-hexene and a selectivity of 99.2% for 1,2-epoxyhexane, which can be attributed to the negative effect of anatase and the limitation of its lower mesopore volume. [22,68] The main product is 1,2-epoxyhexane, and the by-products include pentanal and the ring-opening product 1,2-hexanediol. The 1-hexene conversion was boosted when using the aromatic compounds-mediated zeolites as catalysts. TS-BDC-48h and TS-DABDC-48h display improved 1-hexene conversions (40.9% and 48.1%) owing to their larger mesopore volumes and higher framework Ti contents than TS-WAC-48h, but the selectivity decreases to 98.8% and 96.1%, respectively. TS-DHBDC-48h exhibits a much higher catalytic activity, giving a 1-hexene conversion of 49.3% and a 1,2-epoxyhexane selectivity of 99.9%. The conversion of 1-hexene using DHBDC-mediated TS-DHBDC-48h as the catalyst is improved by 29.4% over conventional TS-WAC-48h. Furthermore, the conversion of 1-hexene catalyzed by DHBDC-mediated zeolites with different crystallization times is shown in Figure 9B and is 34.7%, 34.4%, 38.7% and 41.0%, respectively, with a 1,2-epoxyhexane selectivity of 99.9%. The increased conversion may be related to the crystallinity of these samples. When the crystallization time is above 4 h, the zeolites show higher catalytic activity than TS-WAC-48h. Although TS-DHBDC-1h exhibits the largest mesoporous volume and specific surface area, its lowest conversion of 1-hexene may be caused by the relatively low framework titanium content at the early crystallization stage. The reaction pathway of olefin epoxidation in the TS-1/H2O2/CH3OH system is shown in Figure S12. It is generally believed that the active sites of TS-1 are the tetrahedrally coordinated Ti species. [4,5] The pathway can be summarized as follows: formation of the active intermediate Ti-OOH, stabilization of the intermediate and transfer of the active O atoms. [54] Owing to their coordination unsaturation and electron-accepting ability, the tetrahedrally coordinated Ti species preferentially adsorb and activate H2O2 and subsequently form five-membered-ring Ti-Oα-Oβ-H active intermediates coordinated with methanol. The active O atoms on the intermediates are then transferred to the adsorbed olefin molecules to complete the epoxidation reaction, while the active sites return to their initial state. We compared the catalytic performance of the aromatic compounds-mediated zeolites with that of reported TS-1 samples (Figure 9C) in terms of 1-hexene conversion and 1,2-epoxyhexane selectivity. [69] The performance of the DHBDC-mediated TS-DHBDC-48h was much better than that of most reported TS-1 zeolites. The 1-hexene conversion is at a fairly high level, and the 1,2-epoxyhexane selectivity reaches the highest value reported, even exceeding the properties reported with more oxidant or longer reaction times. The excellent catalytic performance of TS-DHBDC-48h is mainly attributed to the following points. (i) Larger mesopore size and volume: the interconnected intercrystalline mesopores (about 50 nm) contribute to the formation of the five-membered Ti-Oα-Oβ-H intermediates and promote the diffusion of olefins to the intermediates and of the epoxy products away from them. (ii) Higher framework Ti content: more tetrahedrally coordinated Ti species provide more active adsorption sites for H2O2 and substrates. (iii) Efficient utilization of the oxidant: the absence of anatase avoids the decomposition of H2O2 and improves its utilization. A recycling experiment was carried out using TS-DHBDC-48h for the epoxidation of 1-hexene (Figure 9D). The conversion of 1-hexene is maintained at about 50%, and the selectivity for 1,2-epoxyhexane remains 99.9% throughout the recycling process. Moreover, the MFI topology is well kept, although the relative crystallinity decreases to 102.8% (Figure S13). The decreased crystallinity may be caused by the loss of active Ti sites and the formation of defects. The UV-Vis spectrum of the recycled TS-DHBDC-48h suggests that no anatase exists in the catalyst after reaction and calcination (Figure S14). The I960/I800 decreases from 2.15 to 1.94 (Figure S15). The morphology of the reused TS-DHBDC-48h remains ellipsoid-like in Figure S16. All these results indicate that TS-DHBDC-48h has excellent stability and recycling performance for the epoxidation of 1-hexene. The conversion of 1-hexene when using TS-DHBDC-48h (TTIP) and TS-DHBDC-48h (TEOT) as catalysts is 47.9% and 46.3%, respectively (Figure S17). Except for 1,2-epoxyhexane, there are no other by-products in the epoxidation reaction. This experiment confirms the general applicability of DHBDC as a multifunctional crystallization mediator in the field of Ti-containing zeolites. To confirm the catalytic performance of the aromatic compounds-mediated TS-1 for other olefins, cyclohexene epoxidation was also tested. The cyclohexene conversions of the samples in Figure S18A are 22.55%, 26.12%, 28.22% and 34.57%, respectively. Compared with TS-WAC-48h, the conversion of cyclohexene using TS-DHBDC-48h as the catalyst increased by 53.3%, but the cyclohexene oxide selectivity of TS-DHBDC-48h is only 22.0%. Furthermore, the mesopores and large surface area of TS-DHBDC-1h make it an excellent catalyst: its cyclohexene conversion (26.43%) and cyclohexene oxide selectivity (34.8%) are superior to those of TS-WAC-48h. The conversion of cyclohexene catalyzed by TS-DHBDC-2h, TS-DHBDC-4h and TS-DHBDC-24h is 23.20%, 24.97% and 28.49%, respectively. In conclusion, the state of the Ti species and the pore width are the main factors affecting the catalytic performance of TS-1 zeolites. It is evident that using aromatic compounds, especially DHBDC, to regulate the crystallization of TS-1 is an effective strategy to improve catalytic performance.
CONCLUSIONS

This paper not only develops a facile aromatic compounds-mediated synthesis method for fabricating hierarchical anatase-free TS-1 zeolites but also provides a new path for the microstructure optimization of zeolites assisted by a machine learning strategy. High-performance TS-1 zeolites were synthesized by introducing aromatic compounds containing different functional groups during the crystallization process. The aromatic compounds lead to the oriented attachment of zeolite nanoparticles and contribute to the evolution of intercrystalline mesopores in the ripening process. The synergistic coordination of the carboxyl and hydroxyl groups in DHBDC with Ti(OH)4 helps to realize the uniform distribution of titanium species and eliminate the generation of anatase. In addition, eight machine learning models were used to study the effects of the types and quantities of the additive functional groups and of the preparation parameters on the formation of anatase, by combining machine learning analysis with the experimental data. The DT model provides preparation paths toward anatase-free TS-1 guided by the crystallization time and the addition amounts of hydroxyl and carboxyl groups. The larger mesopore volume and proper Ti chemical environment make TS-DHBDC-48h a versatile catalyst for the epoxidation of 1-hexene and cyclohexene. This study provides a novel aromatic compounds-mediated synthesis strategy and demonstrates the potential of the developed machine learning method for accelerating the microstructural optimization of zeolites.

CONFLICT OF INTEREST

The authors declare no competing financial interest.

DATA AVAILABILITY STATEMENT

The data that support the findings of this study are available from the corresponding author upon reasonable request.
2023-01-30T16:11:51.458Z
2023-01-28T00:00:00.000
{ "year": 2023, "sha1": "f75c405bf63ca0c4d490be12290583ed2a81e258", "oa_license": "CCBY", "oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1002/agt2.318", "oa_status": "GOLD", "pdf_src": "Wiley", "pdf_hash": "2075c093bc4ad464a64aa114b35642ad38e3469b", "s2fieldsofstudy": [ "Chemistry" ], "extfieldsofstudy": [] }
221788446
pes2o/s2orc
v3-fos-license
Advanced diagnostic imaging and intervention in tendon diseases

Degenerative tendon pathology represents one of the most frequent and disabling musculoskeletal disorders. Diagnostic radiology plays a fundamental role in the clinical evaluation of tendon pathologies. Moreover, several minimally invasive treatments can be performed under imaging guidance to treat tendon disorders, maximizing efficacy and reducing procedural complications. In this review article we describe the most relevant diagnostic features of conventional and advanced US and MRI imaging in tendon disorders, along with the main options for image-guided intervention. (www.actabiomedica.it)

This article aims to review the relevant diagnostic features of conventional and advanced imaging in tendon disorders - namely the degenerative pathology - and the main options for image-guided intervention.

Histology of tendon diseases and imaging correlates

Tendons are made up of dense fibrous connective tissue, mainly represented by multiple subunits of collagen fibers. This three-dimensional structure allows adequate transmission of mechanical force, preventing damage and disruption of the fibers under stress. Tendons can be affected by different types of diseases that differ both macroscopically and microscopically (23). Tendinosis, the main degenerative tendon alteration, is characterized histologically by collagen disorientation, fiber disorganization, and separation due to increased mucoid ground substance, increased cellularity and vascular spaces with or without neovascularization, focal necrosis, and calcifications (24). In more advanced stages, partial tendon rupture may show the superimposed presence of tears, including fibroblastic and myofibroblastic proliferation, hemorrhage, and organizing granulation tissue. Together with degenerative changes, reactive inflammatory changes in the tendon sheaths and paratenon (tendinitis/paratenonitis) are often associated (25). MRI and ultrasound are potent tools for the assessment of tendon anatomy. The imaging appearance correlates with the histological tendon structure and the changes that occur during pathologic processes (26). Thanks to its availability (27)(28)(29), low cost (30,31), absence of ionizing radiation (32)(33)(34)(35), and ability to study superficial anatomical structures (36)(37)(38)(39), ultrasound is often the first imaging method of approach for the evaluation of tendons. The ultrasound appearance of tendinopathy generally shows tendon thickening with loss of the normal fibrillar structure and increased spacing of the hyperechoic fibrillar lines with reduced tendon echogenicity; calcifications may also be detected, usually near the tendon insertion. Doppler study may also reveal the proliferation of neovessels and increased vascularization in case of inflammation. In severe tendinosis/partial rupture, anechoic fluid may be visible inside the tear in the acute/subacute stages; in more chronic phases, the echogenicity may increase, making the differentiation from the tendon less evident. Dynamic imaging during muscle contraction or passive movement is often useful to unveil small tendon gaps. Doppler imaging may also be useful to distinguish small intrasubstance tears from vessels that have developed in the tendinopathic tendon.
Regarding localization, degenerative changes are more frequent in the midportion and in areas of critical vascularization, while tendon anomalies in spondyloarthritis tend to be more closely related to the fibrocartilaginous enthesis (40)(41)(42). Being more panoramic (43,44) and multiparametric (45)(46)(47)(48)(49)(50), magnetic resonance imaging (MRI) is the other imaging modality of choice for the evaluation of tendon diseases, including tendinopathies. The tendon structure plays a critical role in determining its appearance in magnetic resonance imaging. The water and collagen in the tendon are highly aligned, reducing the T2 and T1 relaxation times. Along with tendon thickening, an increase in signal intensity on T2-weighted imaging sequences is often the first sign of tendon anomaly at MRI. Although the appearance of a complete or partial tear is variable, the main finding is the T2-hyperintense fluid signal at the tear site and in the surrounding tissues; however, in late stages, intermediate-signal scar tissue can obscure this finding. The fibroadipose involution of the myotendinous junction is another typical finding of chronic lesions (26,34).

T2 mapping

As mentioned above, conventional MRI sequences can detect the morphologic changes occurring in tendinopathy. However, the sensitivity and specificity of MRI for determining disease grading have proven to be variable when signal intensity alone is evaluated (35). Furthermore, the diagnosis and monitoring of post-surgical tendon healing remain primarily subjective. In recent years, advanced MRI sequences capable of identifying and showing structural and biochemical tissue changes have been implemented. Among them, T2 mapping was developed to exploit the sensitivity of magnetic resonance imaging to the biophysical properties of numerous tissues. In musculoskeletal applications, T2 mapping has been demonstrated to be sensitive to the structural and biochemical composition of the cartilage matrix (2). In particular, T2 relaxation time mapping correlates with the changes in collagen matrix integrity and cartilage water content that occur during the pathophysiology of degenerative osteoarthritis. Paralleling cartilage histology, biochemical and degenerative changes in tendons are related to proteoglycan loss and disorganization of the collagen matrix, which becomes less elastic, allowing increased mobility of water and consequently an increased mobile water proton content; this leads to an increase of T2 relaxation values compared to normal levels. Both T2 and T2* mapping sequences can detect the biochemical changes that occur in the early stage of tendinopathy, namely the change of collagen orientation and of the collagen and water content (35). Although the experiences in the literature are limited, focused mainly on the Achilles and supraspinatus tendons, early evidence shows that T2 mapping values are significantly higher in tendon tears compared to tendinosis and healthy tendons (36). Some works also confirm the validity of this sequence in identifying and monitoring tendon healing after surgery (37). From our unpublished experience, T2 mapping sequences may also be valid for identifying changes in tendon structure after percutaneous minimally invasive tendon treatments, such as tendon needling and platelet-rich plasma injections (fig. 1).
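To make the quantitative side of T2 mapping concrete, the sketch below fits the standard mono-exponential decay S(TE) = S0·exp(−TE/T2) to synthetic multi-echo signals; the echo times and T2 values are illustrative assumptions, not measurements from this article.

```python
# Sketch: log-linear least-squares estimate of T2 from multi-echo signals,
# assuming mono-exponential decay S(TE) = S0 * exp(-TE / T2).
import numpy as np

def fit_t2(echo_times_ms, signals):
    slope, _ = np.polyfit(np.asarray(echo_times_ms, dtype=float),
                          np.log(np.asarray(signals, dtype=float)), 1)
    return -1.0 / slope    # T2 in ms

te = np.array([10.0, 20.0, 30.0, 40.0, 50.0])      # echo times, ms
healthy = 1000.0 * np.exp(-te / 25.0)              # ordered collagen: shorter T2
tendinopathic = 1000.0 * np.exp(-te / 40.0)        # more free water: longer T2
print(f"healthy T2 ~ {fit_t2(te, healthy):.0f} ms")              # ~25 ms
print(f"tendinopathic T2 ~ {fit_t2(te, tendinopathic):.0f} ms")  # ~40 ms
```

In a real pixel-wise map, this fit would be repeated for every voxel of the multi-echo acquisition, and the resulting T2 values would be displayed as a color-coded overlay.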
Sonoelastography

Elastography is a US method for the quantitative imaging of the distribution of strain and elasticity in biological tissues, which has been evaluated in various settings in clinical radiology as a reliable and useful complementary modality to conventional ultrasound in the evaluation of lesions in the liver, spleen, breast, thyroid, and prostate (38). Over the past years, the number of studies on elastography in tendon pathologies has risen considerably, and several studies in vitro and in vivo have tried to provide answers regarding the normal and pathologic biomechanical and structural properties of tendons (39). However, they have also aimed to assess the reliability of elastography and the prospects offered by this technique in daily clinical practice. There are several techniques of elastosonography, the most used being:

• Quasi-static elastography (QSE). The operator exerts, through handheld compression of the US probe, slow mechanical stress on the tissues, inducing tissue displacement (strain). Comparing the position of the structures at rest and under compression, the displacement generated by the stress can be estimated. The degree of stiffness is represented in a color scale that identifies the "softest" and "hardest" tissue areas (38,40).

Figure 1. Axial images of standard T2 (a, c) and T2 mapping (b, d) sequences in a patellar tendon, before (a, b) and six months after (c, d) percutaneous US-guided intratendinous PRP injection treatment. Note the reduction of the mean T2 relaxation values of circular ROIs within the most tendinopathic area (visible as a hyperintense intrasubstance alteration on standard imaging) after treatment, consistent with signs of tendon healing.

• Shear wave elastography (SWE). By generating several near-simultaneous pushes moving through the medium at supersonic speed, the device creates high-intensity shear waves that move through the medium transversely relative to the initial radiation force. Shear waves cause particle displacements recorded with high-frequency imaging (5000 to 30,000 Hz), from which the system calculates color elastograms in real time (quantitative analysis) (41). In a locally homogeneous, isotropic and incompressible medium, the tissue Young's modulus can be estimated from the measured shear wave speed cs as E ≈ 3ρcs², with ρ the tissue density; in strongly anisotropic tissues such as tendons this remains an approximation.

The literature on the application of elastosonography to the study of tendons has focused mainly on the Achilles tendon (fig. 2). In QSE, the strain ratio (tendon/Kager's fat) was found to be higher, and the tendon softer, in case of tendinopathy when compared to healthy subjects. QSE could also be a reliable tool for monitoring the Achilles tendon after surgical repair: one study revealed that after percutaneous tenorrhaphy, tendons tend to stiffen progressively for at least a year, reflecting the abnormal collagen composition during scar formation. QSE was also used to detect small partial tears of the supraspinatus tendon (fig. 3) and to evaluate structural and biomechanical changes of the Achilles tendon in metabolic diseases such as diabetes, acromegaly, and severe renal insufficiency (38,42,43).

Interventional radiology procedures in tendon diseases

Several interventions have been introduced for the conservative treatment of tendinosis, including rest, oral and topical analgesics, and physical therapy (cryotherapy, stretching, eccentric strengthening, taping, bracing, extracorporeal shock-wave therapy) (44). Few of these treatments have been shown to be effective, and corticosteroid injections, once a mainstay of tendinopathy treatment, have been found to be harmful to tendons.
Interventional radiology procedures in tendon diseases Several interventions have been introduced for the conservative treatment of tendinosis, including rest, oral and topical analgesics, and physical therapy (cryotherapy, stretching, eccentric strengthening, taping, bracing, extracorporeal shock-wave therapy) (44). Few of these treatments have been shown to be effective, and corticosteroid injections, once a mainstay of tendinopathy treatment, have been found to be harmful to tendons. Consequently, clinicians have sought safer and more effective interventions for the treatment of this condition. In recent years, the indications for interventional radiology procedures have expanded thanks to the use of all diagnostic methods (45)(46)(47)(48)(49)(50). With US, the specific location of a tendon pathology can be accurately delineated, and this also allows many percutaneous minimally invasive treatments to be targeted (51,52). Among them, the most popular include percutaneous needle tenotomy (PNT), high-volume injection (HVI), and orthobiologic interventions, such as intratendinous injection of platelet-rich plasma (PRP) (53). Percutaneous needle tenotomy (PNT), also termed needling, consists of repeatedly passing a needle (16-22 gauge) through the tendon to disrupt the chronic degenerative process (including scar tissue) and induce localized bleeding and fibroblast proliferation, which can lead to growth factor release, collagen formation, and ultimately healing. The number of needle passes (usually 20 to 40) may vary based on numerous factors, such as patient characteristics, the severity and size of the tendinopathic area, the presence or absence of tears, operator experience, and comfort level. This technique can be used alone or in combination with the injection of orthobiologic products. There is considerable variability in postprocedural care, such as the use of anti-inflammatory medications, the use of bracing, rehabilitation protocols, and return-to-activity guidelines. Most publications in the literature are limited series on the gluteal tendons, hamstring tendons, tensor fasciae latae tendon, common extensor tendon of the elbow, rotator cuff tendons, and the Achilles and patellar tendons. Almost all studies showed improvement in clinical symptoms and reported rare complications (54)(55)(56). High-volume injection (HVI) treatment focuses on using high volumes of injectate (about 10 mL of 0.5% bupivacaine, 25 mg of hydrocortisone, and 12-40 mL of normal saline) to disrupt neovessels and neonerves, which may reduce neurogenic inflammation and pain and promote healing. This technique has mainly been studied in patellar and Achilles tendinopathy, where most neovessels and neonerves are sonographically observed between the affected tendon and an adjacent fat pad. US-guided HVI for the treatment of patellar and midportion Achilles tendinopathy has shown promise in improving both pain and functional scores, with positive effects persisting for up to 15 months. No significant complications have been reported. Another potential advantage of this procedure is that, because the tendon is not mechanically debrided, rehabilitation and return to activity may progress at a faster rate than with PNT (57,58). In the last decade, the use of biological therapies for the treatment of musculoskeletal diseases has increased significantly. Among them, platelet-rich plasma (PRP) is one of the most used compounds in regenerative musculoskeletal medicine, in particular for tendinopathies and degenerative joint pathologies, namely osteoarthritis. PRP is the blood-derived plasma fraction containing high concentrations of platelets, which release several growth factors with a critical role in tissue healing, such as platelet-derived growth factor, transforming growth factor-beta, and vascular endothelial growth factor. Many studies report enhanced healing of tendinopathies treated with PRP.
Several trials showed that PRP is an effective treatment for lateral epicondylitis, with short-term and long-term (up to 24 months) efficacy compared to steroids. The best available evidence suggests that LR-PRP (leucocyte-rich PRP) should be the treatment of choice. Only two high-quality trials, albeit with a limited follow-up of 6 months, support the use of PRP in chronic refractory patellar tendinopathy; both recommended LR-PRP over LP-PRP (leucocyte-poor PRP). Concerning Achilles tendinopathy, one clinical trial reported efficacy of 4 LP-PRP injections, but also found similar results for high-volume injections of anesthetic, corticosteroid, and saline, perhaps suggesting that the benefit may be due to mechanical volume effects (59,60). Although there remains a paucity of evidence to routinely recommend PRP injections for rotator cuff tendinopathy, PRP may be a safe and effective alternative to corticosteroid injections in the conservative treatment of rotator cuff tendinopathy. PRP injections are an effective treatment for improving pain and function in chronic plantar fasciitis and may be superior to corticosteroids, especially considering the complications of multiple corticosteroid injections, which are not associated with PRP. Nevertheless, there is substantial evidence that corticosteroids can be useful in chronic tendinopathy for relieving pain, reducing swelling, and improving function in the short term (61). Recently, the anti-inflammatory and lubricating properties of hyaluronic acid have drawn the scientific community's interest for the treatment of tendinopathies. Some in vitro and animal in vivo studies demonstrate encouraging results in terms of the ability of hyaluronic acid to facilitate tendon gliding and to promote better tendon structural organization. Moreover, the injection of hyaluronic acid may limit the proinflammatory effect by restoring viscoelasticity and by stimulating the endogenous synthesis of hyaluronic acid (62,63). Conclusions In conclusion, diagnostic and interventional radiology are now established across clinical settings (64)(65)(66)(67)(68)(69)(70). Tendon evaluation has improved with advances in ultrasound technology and MRI sequences. Beyond the anatomical changes detected with conventional imaging, MRI T2 mapping sequences and ultrasound elastography can provide information about the biochemical changes in tendon tissue and the consequent mechanical alterations induced by tendinopathy, giving reliable quantitative data for diagnosis, staging, and follow-up. The significant advancements in the use of musculoskeletal US for interventional purposes have played a critical role in the development of minimally invasive interventions to treat tendon diseases. Conflict of interest: The authors declare that they have no commercial associations (e.g. consultancies, stock ownership, equity interest, patent/licensing arrangements etc.) that might pose a conflict of interest in connection with the submitted article.
Deducing the multidimensional Szemerédi Theorem from an infinitary removal lemma We offer a new proof of the Furstenberg-Katznelson multiple recurrence theorem for several commuting probability-preserving transformations $T_1, T_2, \ldots, T_d: \mathbb{Z} \curvearrowright (X, \Sigma, \mu)$, and so, via the Furstenberg correspondence principle, a new proof of the multidimensional Szemerédi Theorem. We bypass the careful manipulation of certain towers of factors of a probability-preserving system that underlies the Furstenberg-Katznelson analysis, instead modifying an approach recently developed for the study of convergence of nonconventional ergodic averages to pass to a large extension of our original system in which this analysis greatly simplifies. The proof is then completed using an adaptation of arguments developed by Tao for his study of an infinitary analog of the hypergraph removal lemma. In a sense, this addresses the difficulty, highlighted by Tao, of establishing a direct connection between his infinitary, probabilistic approach to the hypergraph removal lemma and the infinitary, ergodic-theoretic approach to Szemerédi's Theorem set in motion by Furstenberg. Introduction We give a new ergodic-theoretic proof of the multidimensional multiple recurrence theorem of Furstenberg and Katznelson [6], which their correspondence principle shows to be equivalent to the multidimensional Szemerédi Theorem. Our proof of Theorem 1.1 will call on some rather different ergodic-theoretic machinery from Furstenberg and Katznelson's. Our main technical ingredients are the notions of 'pleasant' and 'isotropized' extensions of a system. Pleasant extensions were first used in [1] to give a new proof of the (rather easier) result that the 'nonconventional ergodic averages' associated to $f_1, f_2, \ldots, f_d \in L^\infty(\mu)$ always converge in $L^2(\mu)$ as $N \to \infty$. (This was first shown by Tao in [14], although various special cases had previously been established by other methods [2,3,15,10,11,16,4].) Much of the present paper is motivated by the results used in [1] to give a new proof of this convergence. Isotropized extensions are a new tool developed for the present paper, but their analysis is closely analogous to that of pleasant extensions. After passing to a pleasant and isotropized extension of our original system, the limit of (1) takes a special form, and in this paper it is by analyzing this expression that we shall prove positivity. It turns out that this special form enables us to make contact with the machinery developed by Tao in [13] for his infinitary proof of the hypergraph removal lemma. Since the hypergraph removal lemma offers a known route to proving the multidimensional Szemerédi Theorem (as shown, subject to some important technical differences, by Nagle, Rödl and Schacht [12] and by Gowers [8]), and this in turn is equivalent to multidimensional multiple recurrence, Tao's work already offers a proof of multiple recurrence using his infinitary removal lemma. In a sense, our present contribution is to short-circuit the above chain of implications and give a near-direct proof of multiple recurrence using Tao's ideas. Unfortunately, we have not been able to make a reduction to a simple black-box appeal to Tao's result; rather, we formulate (Proposition 6.1) a closely-related result adapted to our ergodic-theoretic setting, which then admits a very similar proof.
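The displayed statement of Theorem 1.1 and the averages labeled (1) do not survive in this transcription; standard formulations, consistent with the surrounding discussion, are the following. Theorem 1.1 asserts that for commuting probability-preserving actions $T_1, \ldots, T_d$ and any $A \in \Sigma$ with $\mu(A) > 0$,

$$\liminf_{N \to \infty} \frac{1}{N} \sum_{n=1}^{N} \mu\bigl(T_1^{-n}A \cap T_2^{-n}A \cap \cdots \cap T_d^{-n}A\bigr) > 0,$$

while the nonconventional ergodic averages referred to as (1) are

$$\frac{1}{N} \sum_{n=1}^{N} \prod_{i=1}^{d} \bigl(f_i \circ T_i^{n}\bigr).$$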
With this caveat, our work addresses the question of relating infinitary proofs of multiple recurrence and hypergraph removal explicitly raised by Tao at the beginning of Section 5 of [13]: it turns out that his ideas are not directly applicable to an arbitrary probability-preserving $\mathbb{Z}^d$-system, but become so only when we enlarge the system to lie in the special class of systems that are pleasant and isotropized. Acknowledgements My thanks go to Vitaly Bergelson and David Fremlin for helpful comments on an earlier version of this paper, and to the Mathematical Sciences Research Institute where a significant re-write was undertaken after a serious flaw was discovered in an earlier version. Basic notation and preliminaries Throughout this paper $(X, \Sigma)$ will denote a measurable space. Since our main results pertain only to the joint distribution of countably many bounded real-valued functions on this space and their shifts under some measurable transformation, by passing to the image measure on a suitable shift space we may always assume that $(X, \Sigma)$ is standard Borel, and this will prove convenient for some of our later constructions. In addition, $\mu$ will always denote a probability measure on $\Sigma$. We shall write $(X^e, \Sigma^{\otimes e})$ for the usual product measurable structure indexed by a set $e$, and $\mu^{\otimes e}$ for the product measure and $\mu^{\Delta e}$ for the diagonal measure on this structure respectively. We also write $\pi_i: X^e \to X$ for the $i$-th coordinate projection whenever $i \in e$. Given a measurable map $\phi: (X, \Sigma) \to (Y, \Phi)$ to another measurable space, we shall write $\phi_\#\mu$ for the resulting pushforward probability measure on $(Y, \Phi)$. If $T: \Gamma \curvearrowright (X, \Sigma, \mu)$ is a probability-preserving action of a countable group $\Gamma$, then by a factor of the quadruple $(X, \Sigma, \mu, T)$ we understand a globally $T$-invariant sub-$\sigma$-algebra $\Phi \leq \Sigma$. The isotropy factor is the sub-$\sigma$-algebra of those subsets $A \in \Sigma$ such that $\mu(A \triangle T^\gamma(A)) = 0$ for all $\gamma \in \Gamma$, and we shall denote it by $\Sigma^T$. If $T_1, T_2: \Gamma \curvearrowright (X, \mu)$ are two commuting actions of the same Abelian group, then we can define another action $T_1^{-1}T_2$ by $(T_1^{-1}T_2)^\gamma := T_1^{-\gamma}T_2^\gamma$, and then we write $\Sigma^{T_1 = T_2}$ for $\Sigma^{T_1^{-1}T_2}$, and similarly if we are given a larger number of actions of the same group. The most important kind of morphism from one $\Gamma$-system $T: \Gamma \curvearrowright (X, \Sigma, \mu)$ to another $S: \Gamma \curvearrowright (Y, \Phi, \nu)$ is given by a measurable map $\phi: X \to Y$ such that $\nu = \phi_\#\mu$ and $S \circ \phi = \phi \circ T$: we call such a $\phi$ a factor map. In this case we shall write $\phi: (X, \Sigma, \mu, T) \to (Y, \Phi, \nu, S)$. To a factor map $\phi$ we can associate the factor $\{\phi^{-1}(A): A \in \Phi\}$. Our specific interest is in $d$-tuples of commuting $\mathbb{Z}$-actions $T_i$, $i = 1, 2, \ldots, d$. Clearly these can be interpreted as the $\mathbb{Z}$-subactions of a single $\mathbb{Z}^d$-action corresponding to the $d$ coordinate directions $\mathbb{Z} \cdot \mathbf{e}_i \leq \mathbb{Z}^d$. Given these actions, we shall make repeated reference to certain factors assembled from the isotropy factors among the $T_i$. These will be indexed by subsets of $[d]^{\geq 2} := \{u \subseteq [d]: |u| \geq 2\}$; given $e \subseteq [d]$ we write $\langle e \rangle := \{u \in [d]^{\geq 2}: u \supseteq e\}$ (note the non-standard feature of our notation that $e \in \langle e \rangle$ if and only if $|e| \geq 2$): up-sets of this form are principal. We will abbreviate $\langle \{i\} \rangle$ to $\langle i \rangle$. It will also be helpful to define the depth of a non-empty up-set $I$ to be $\min\{|e|: e \in I\}$. The corresponding factors are obtained for $e = \{i_1 < i_2 < \ldots < i_k\}$ by setting $\Phi_e := \Sigma^{T_{i_1} = T_{i_2} = \ldots = T_{i_k}}$, and given an up-set $I \subseteq [d]^{\geq 2}$ by defining $\Phi_I := \bigvee_{e \in I} \Phi_e$. From the ordering among the factors $\Phi_e$ it is clear that $\Phi_I = \Phi_{e_1} \vee \ldots \vee \Phi_{e_r}$ whenever $\{e_1, \ldots, e_r\} \subseteq [d]^{\geq 2}$ is a family that generates $I$ as an up-set, and in particular that $\Phi_{\langle e \rangle} = \Phi_e$.
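As a concrete illustration of this indexing (an added example, assuming the reading of the notation reconstructed above): for $d = 3$,

$$[3]^{\geq 2} = \bigl\{\{1,2\},\{1,3\},\{2,3\},\{1,2,3\}\bigr\}, \qquad \langle \{1,2\} \rangle = \bigl\{\{1,2\},\{1,2,3\}\bigr\}, \qquad \langle 1 \rangle = \bigl\{\{1,2\},\{1,3\},\{1,2,3\}\bigr\},$$

so that $\Phi_{\{1,2\}} = \Sigma^{T_1 = T_2}$ and $\Phi_{\langle \{1,2\} \rangle} = \Phi_{\{1,2\}} \vee \Phi_{\{1,2,3\}} = \Phi_{\{1,2\}}$ (since $\Sigma^{T_1 = T_2 = T_3} \subseteq \Sigma^{T_1 = T_2}$); the up-set $\langle \{1,2\} \rangle$ has depth 2, while the up-set $\{\{1,2,3\}\}$ has depth 3.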
An inverse system is a family of probability-preserving systems $T^{(m)}: \Gamma \curvearrowright (X^{(m)}, \Sigma^{(m)}, \mu^{(m)})$ together with connecting factor maps; from this one can construct the inverse limit $\lim_{m \leftarrow} (X^{(m)}, \Sigma^{(m)}, \mu^{(m)}, T^{(m)})$ as described, for example, in Section 6.3 of Glasner [7]. Finally, the following distributional condition for families of factors will play a central rôle throughout this paper: sub-$\sigma$-algebras $\Phi_1, \Phi_2 \leq \Sigma$ are relatively independent over a third sub-$\sigma$-algebra $\Psi$ if $\mathbb{E}_\mu(f \mid \Phi_2 \vee \Psi) = \mathbb{E}_\mu(f \mid \Psi)$ for every bounded $\Phi_1$-measurable $f$. The Furstenberg self-joining It turns out that a particular $d$-fold self-joining of $\mu$ both controls the convergence of the nonconventional averages (1) and then serves to express their limiting value. Given our commuting actions and any $e = \{i_1 < i_2 < \ldots < i_k\} \subseteq [d]$, define $\mu^F_e(A_1 \times A_2 \times \cdots \times A_k) := \lim_{N \to \infty} \frac{1}{N} \sum_{n=1}^{N} \mu\bigl(T_{i_1}^{-n}(A_1) \cap T_{i_2}^{-n}(A_2) \cap \cdots \cap T_{i_k}^{-n}(A_k)\bigr)$ for $A_1, A_2, \ldots, A_k \in \Sigma$. That these limits always exist (and so this definition is possible) follows from the convergence of the nonconventional averages (1), although approaches to convergence that use this self-joining (as in [1], or for various special cases in [15] and [16]) actually handle both kinds of limits alternately in a combined proof of their existence by induction on $k$. Given the existence of the limits (1) and the assumption that $(X, \Sigma)$ is standard Borel, it is easy to check that $\mu^F_e$ extends to a $k$-fold self-joining of $\mu$ on $\Sigma^{\otimes e}$. This is the Furstenberg self-joining of $\mu$ associated to $T_{i_1}, T_{i_2}, \ldots, T_{i_k}$. It is now clear from our definition that the assertion of Theorem 1.1 can be re-stated as being that if $\mu(A) > 0$ then also $\mu^F_{[d]}(A^d) > 0$. It is in this form that we shall prove it. The following elementary properties of the Furstenberg self-joining will be important later. LEMMA 3.1. If $e \subseteq e'$ then $\mu^F_{e'}$ restricts to $\mu^F_e$ on the coordinates indexed by $e$. Proof This is immediate from the definition: if $A_j \in \Sigma$ for each $j \in e$ then $\mu^F_{e'}\bigl(\prod_{j \in e'} B_j\bigr)$, where $B_j := A_j$ if $j \in e$ and $B_j := X$ otherwise, is a limit of averages whose summands simplify directly, summand-by-summand, to those defining $\mu^F_e\bigl(\prod_{j \in e} A_j\bigr)$, as required. LEMMA 3.2. If $A_j \in \Phi_e$ for each $j \leq k$ then $\mu^F_e(A_1 \times A_2 \times \cdots \times A_k) = \mu(A_1 \cap A_2 \cap \cdots \cap A_k)$; in particular $\mu^F_e(\pi_i^{-1}(C) \triangle \pi_j^{-1}(C)) = 0$ whenever $C \in \Phi_e$ and $i, j \in e$. Proof If $e = \{i_1 < i_2 < \ldots < i_k\}$ and $A_j \in \Phi_e$ for each $j \leq k$ then by definition $T_{i_j}^{-n}(A_j) = T_{i_1}^{-n}(A_j)$ up to $\mu$-negligible sets, so each summand in the defining average equals $\mu\bigl(T_{i_1}^{-n}(A_1 \cap \cdots \cap A_k)\bigr) = \mu(A_1 \cap \cdots \cap A_k)$, as required. It follows from the last lemma that whenever $e \subseteq e'$ the factors $\pi_i^{-1}(\Phi_e) \leq \Sigma^{\otimes e'}$ for $i \in e$ are all equal up to $\mu^F_{e'}$-negligible sets. It will prove helpful later to have a dedicated notation for these factors. DEFINITION (Oblique copy). We refer to the $\mu^F_{[d]}$-completion of the sub-$\sigma$-algebra $\pi_i^{-1}(\Phi_e)$, $i \in e$, as the oblique copy of $\Phi_e$, and denote it by $\Phi^F_e$. More generally we shall refer to factors formed by repeatedly applying $\cap$ and $\vee$ to such oblique copies as oblique factors. It will be important to know that Furstenberg self-joinings behave well under inverse limits. The following is another immediate consequence of the definition, and we omit the proof. LEMMA 3.4. If $(X^{(m)}, \Sigma^{(m)}, \mu^{(m)}, T^{(m)})$ is an inverse system with inverse limit $(\hat{X}, \hat{\Sigma}, \hat{\mu}, \hat{T})$, then the systems $\bigl((X^{(m)})^d, (\Sigma^{(m)})^{\otimes d}, (\mu^{(m)})^F_{[d]}, (T^{(m)})^{\times d}\bigr)$ with factor maps $\phi_m^{\times d}$ also form an inverse system, with inverse limit $\bigl(\hat{X}^d, \hat{\Sigma}^{\otimes d}, \hat{\mu}^F_{[d]}, \hat{T}^{\times d}\bigr)$. Pleasant and isotropized extensions We now introduce the main technical definitions of this paper: that of 'pleasant systems', closely following [1], and alongside them the related notion of 'isotropized systems'. Recall that to a commuting tuple of actions $T_1, T_2, \ldots, T_d: \mathbb{Z} \curvearrowright (X, \Sigma, \mu)$ we have associated the factors $\Phi_e$ for $e \in [d]^{\geq 2}$. DEFINITION 4.1 (Pleasant system). The tuple is $(e, i)$-pleasant if the $i$-th coordinate projection $\pi_i$ is relatively independent from the other $\pi_j$, $j \in e$, over the factor $\pi_i^{-1}\bigl(\bigvee_{j \in e \setminus \{i\}} \Phi_{\{i,j\}}\bigr)$ under $\mu^F_e$. It is fully pleasant if it is $(e, i)$-pleasant for every pair $i \in e$. DEFINITION 4.2 (Isotropized system). A commuting tuple of actions is $(e, i)$-isotropized if the intersection $\Phi_{\langle e \rangle} \cap \bigvee_{j \in [d] \setminus e} \Phi_{\{i,j\}}$ is contained in $\bigvee_{j \in [d] \setminus e} \Phi_{\langle e \cup \{j\} \rangle}$ up to $\mu$-negligible sets; it is fully isotropized if this holds for every such pair. Intuitively, both pleasantness and isotropizedness (say when $e = [d]$) assert that the factors $\Phi_{\langle i \rangle}$ are 'large enough': in the first case, large enough to account for all of the possible correlations between the coordinate projections under the Furstenberg self-joining, and in the second to account for all of the possible intersection between $\Phi_{\langle e \rangle}$ and the combination $\bigvee_{j \in [d] \setminus e} \Phi_{\{i,j\}}$ up to negligible sets.
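Definition 4.1 can be spelled out in conditional-expectation form (a paraphrase, not the paper's own display): the tuple is $(e, i)$-pleasant precisely when, for every $f \in L^\infty(\mu)$,

$$\mathbb{E}_{\mu^F_e}\Bigl(f \circ \pi_i \,\Bigm|\, \sigma\bigl(\pi_j : j \in e \setminus \{i\}\bigr)\Bigr) = \mathbb{E}_{\mu^F_e}\Bigl(f \circ \pi_i \,\Bigm|\, \pi_i^{-1}\Bigl(\bigvee_{j \in e \setminus \{i\}} \Phi_{\{i,j\}}\Bigr)\Bigr),$$

so that all correlations between the coordinate $\pi_i$ and the remaining coordinates are mediated by the partially invariant factors $\Phi_{\{i,j\}}$.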
This notion of pleasantness is very similar to Definition 4.2 in [1], where 'pleasant systems' were first introduced as those in which the larger factors $\Sigma^{T_i} \vee \Phi_{\langle i \rangle}$ were 'characteristic' for the asymptotic behaviour of the nonconventional averages (1) in $L^2(\mu)$. Here our emphasis is rather different, since we are concerned only with the integrals of these ergodic averages, rather than the functions themselves. For these integrals it turns out that we can discard the factors $\Sigma^{T_i}$ from consideration. This lightens some of the notation that follows, but otherwise makes very little difference to the work we must go through. Notice that the subset $e \subseteq [d]$ is allowed to vary in both of the above definitions: this nuance is important, since the pleasantness property relating a proper subfamily of actions $T_i$, $i \in e$, is in general not a consequence of the pleasantness of the whole family, and similarly for isotropizedness. The main goal of this section is the following proposition. PROPOSITION 4.3. Every commuting tuple of actions admits an extension that is both fully pleasant and fully isotropized. This will rely on a number of simpler steps, many closely following the arguments of [1]. We first show that any tuple of actions admits an $(e, i)$-pleasant extension and, separately, an $(e, i)$-isotropized extension. The first of these results is proved exactly as was Proposition 4.6 in [1], and so we shall only sketch the proof here. The idea behind the proof is to construct a tower of extensions, each accounting for the shortfall from pleasantness of its predecessor, and then to pass to the inverse limit. Proof We form $(\hat{X}, \hat{\Sigma}, \hat{\mu}, \hat{T})$ as the inverse limit of a tower of smaller extensions, each constructed from the Furstenberg self-joining of its predecessor. Let $(X^{(1)}, \Sigma^{(1)}, \mu^{(1)})$ be the Furstenberg self-joining $(X^e, \Sigma^{\otimes e}, \mu^F_e)$, define on it the transformations $T^{(1)}_i$, and interpret it as an extension of $(X, \Sigma, \mu, T)$ with the coordinate projection $\pi_i$ as factor map. We now see that if $f_j \in L^\infty(\mu)$ for each $j \in e$ then the integral $\int_{X^e} \prod_{j \in e} (f_j \circ \pi_j) \, d\mu^{(1)}$ reproduces the limit of the averages (1), and from the above definition that the factor of $X^e = X^{(1)}$ generated by $(\pi_j)_{j \in e \setminus \{i\}}$ is contained in $\bigvee_{j \in e \setminus \{i\}} \Phi^{(1)}_{\{i,j\}}$. If we now iterate this construction to form $(X^{(2)}, \Sigma^{(2)}, \mu^{(2)}, T^{(2)})$ from $(X^{(1)}, \Sigma^{(1)}, \mu^{(1)}, T^{(1)})$, and so on, then the approximation argument given for Proposition 4.6 of [1] shows that the inverse limit is $(e, i)$-pleasant. Remark Since the appearance of [1], Bernard Host has given in [9] a method for constructing a pleasant extension of a system without recourse to an inverse limit. However, we will make further use of inverse limits momentarily to construct an extension that is fully pleasant, rather than just $(e, i)$-pleasant for some fixed $(e, i)$, and at present we do not know of any quicker construction guaranteeing this stronger condition. ⊳ A similar argument gives the existence of $(e, i)$-isotropized extensions. Proof Once again we build this as an inverse limit. First form the relatively independent self-product $(X^{(1)}, \mu^{(1)}) := (X^2, \Sigma \otimes_{\Phi_e} \Sigma, \mu \otimes_{\Phi_e} \mu)$ with coordinate projections $\pi_1, \pi_2$ back onto $(X, \Sigma, \mu)$, and interpret it as an extension of $(X, \Sigma, \mu)$ through the first of these. Choose arbitrarily some $i \in e$, and now define the extended actions $T^{(1)}_j$; these all preserve $\mu^{(1)}$, even in the latter case, because our product is relatively independent over the factor left invariant by each $T_j^{-1}T_i$ for $j \in e$. We now extend $(X^{(1)}, \Sigma^{(1)}, \mu^{(1)}, T^{(1)})$ to $(X^{(2)}, \Sigma^{(2)}, \mu^{(2)}, T^{(2)})$ by repeating the same construction, and so on, to form an inverse sequence with inverse limit $(\hat{X}, \hat{\Sigma}, \hat{\mu}, \hat{T})$.
We will finish the proof of Proposition 4.3 using the following properties of stability under forming further inverse limits. LEMMA 4.6. Both $(e, i)$-pleasantness and $(e, i)$-isotropizedness are retained by the inverse limit of an inverse system in which they hold infinitely often. Proof We give the proof for the retention of $(e, i)$-pleasantness, the case of $(e, i)$-isotropizedness being exactly analogous. Since any 1-bounded member of $L^\infty(\hat{\mu})$ may be approximated arbitrarily well in $L^1(\hat{\mu})$ by 1-bounded members of $L^\infty(\mu^{(m)})$, by a simple approximation argument it will suffice to prove the required conditional-independence identity given $m \geq 1$ and $f_j \in L^\infty(\mu^{(m)})$ for each $j \in e$. However, by definition and Lemma 3.4 we know that, after choosing any $m_1 \geq m$ for which $(X^{(m_1)}, \Sigma^{(m_1)}, \mu^{(m_1)}, T^{(m_1)})$ is $(e, i)$-pleasant, the identity is obtained with $\bigvee_{j \in e \setminus \{i\}} \Phi^{(m_1)}_{\{i,j\}}$ in place of $\bigvee_{j \in e \setminus \{i\}} \hat{\Phi}_{\{i,j\}}$, and now letting $m_1 \to \infty$ and appealing to the bounded martingale convergence theorem gives the result. It now remains only to collect our different properties together using more inverse limits, whose organization is now rather arbitrary. Proof of Proposition 4.3 List the pairs $(e, i)$ with $i \in e$, each recurring infinitely often if necessary, and build a tower of extensions whose $m$-th member is $(e_{(m+1)/2}, i_{(m+1)/2})$-pleasant when $m$ is odd and $(e_{m/2}, i_{m/2})$-isotropized when $m$ is even. The two parts of Lemma 4.6 now show that the resulting inverse limit extension has all the desired properties. Furstenberg self-joinings of pleasant and isotropized systems Having established that all systems have fully pleasant and isotropized extensions, it remains to explain the usefulness of such extensions for the proof of Theorem 1.1. This derives from the implications of these conditions for the structure of the Furstenberg self-joining. LEMMA 5.1. Let $I \subseteq [d]^{\geq 2}$ be an up-set, let $e$ be a maximal member of $\mathcal{P}[d] \setminus I$, and let $F_1 \in L^\infty(\mu)$ be $\Phi_{\langle e \rangle}$-measurable; then for $i \in e$ the conditional expectation of $F_1 \circ \pi_i$ onto $\Phi^F_I$ under $\mu^F_{[d]}$ coincides with its conditional expectation onto $\Phi^F_{I \cap \langle e \rangle}$. Proof Pick $i \in e$. By Lemma 3.2 it suffices to test the conditional expectation against $\Phi^F_I$-measurable functions $F_2$, $\mu^F_{[d]}$-almost surely. Let $\{a_1, a_2, \ldots, a_k\}$ be the antichain of minimal elements in $I$; this clearly generates $I$ as an up-set. Since $e \notin I$ we must have $a_j \setminus e \neq \emptyset$ for each $j \leq k$. Pick $i_j \in a_j \setminus e$ arbitrarily for each $j \leq k$, so that, again by Lemma 3.2, we may approximate such $F_2$ in $L^2(\mu^F_{[d]})$ by sums of products of the form $\sum_p \prod_{j \leq k} \phi_{j,p} \circ \pi_{i_j}$ with $\phi_{j,p} \in L^\infty(\mu\restriction_{\Phi_{a_j}})$, and so by continuity and linearity it suffices to assume that $F_2$ is an individual such product term. This represents $F_2$ as a function of coordinates in $X^d$ indexed only by members of $[d] \setminus e$, and now we appeal to Lemma 3.1 and the pleasantness of the system. However, now the property that $(X, \Sigma, \mu, T)$ is $(e, i)$-isotropized and the fact that $F_1$ is already $\Phi_{\langle e \rangle}$-measurable imply that the relevant conditional expectation simplifies, and since each $e \cup \{j\} \in I$ (by the maximality of $e$ in $\mathcal{P}[d] \setminus I$), under $\pi_i$ this conditional expectation must be identified with $\mathbb{E}_\mu(F_1 \mid \Phi^F_{I \cap \langle e \rangle})$, as required. COROLLARY 5.2. If $I, I' \subseteq [d]^{\geq 2}$ are two up-sets then the oblique factors $\Phi^F_I$ and $\Phi^F_{I'}$ are relatively independent over $\Phi^F_{I \cap I'}$ under $\mu^F_{[d]}$. Proof This is proved for fixed $I$ by induction on $I'$. If $I' \subseteq I$ then the result is clear, so now let $e$ be a minimal member of $I' \setminus I$ of maximal size, and let $I'' := I' \setminus \{e\}$; it suffices to verify the relative independence for $\Phi^F_{I'}$-measurable $F$, and furthermore, by approximation, to do so only for $F$ that are products of a $\Phi^F_{I''}$-measurable function with a function of the form $\phi \circ \pi_i$ for $\phi \in L^\infty(\mu\restriction_{\Phi_e})$ and $i \in e$. However, for these we can write the conditional expectation onto $\Phi^F_I$ using Lemma 5.1. On the other hand $(I \cup I'') \cap \langle e \rangle \subseteq I''$ (because $I''$ contains every subset of $[d]$ that strictly includes $e$, since $I'$ is an up-set), and so Lemma 5.1 promises similarly that the corresponding conditional expectation is $\Phi^F_{I''}$-measurable. Therefore the desired identity follows by the inductive hypothesis applied to $I''$ and $I$, as required. Completion of the proof We have now set the stage for our analog of Tao's infinitary hypergraph removal machinery. Observe first that the conclusion of Theorem 1.1 clearly holds for the commuting tuple $T_1, T_2, \ldots, T_d: \mathbb{Z} \curvearrowright (X, \Sigma, \mu)$ if it holds for any extension of that tuple. Therefore by Proposition 4.3 we may assume our commuting tuple is fully pleasant and fully isotropized, and so need only prove for such $\mu$ that if $\mu(A) > 0$ then $\mu^F_{[d]}(A^d) > 0$.
It seems quite likely that in some cases the factors $\Phi_I$ of the original system can still exhibit a very complicated joint distribution, even after passing to a fully pleasant and isotropized extension. However, the understanding of the oblique copies is already enough to complete the proof of multiple recurrence using a relative of Tao's 'infinitary removal lemma' in [13]. One of his chief innovations was an infinitary analog of the property of hypergraph removability for a collection of factors of a probability space (Theorem 4.2 of [13]). Here we shall actually make do with a more modest conclusion than his 'removability', but our argument will follow essentially the same steps. We shall derive Theorem 1.1 as the top case of the following inductive claim, tailored to our present needs. PROPOSITION 6.1. Suppose that $I_{i,j}$ for $i = 1, 2, \ldots, d$ and $j = 1, 2, \ldots, k_i$ are collections of up-sets in $[d]^{\geq 2}$ such that $[d] \in I_{i,j} \subseteq \langle i \rangle$ for each $i, j$, and suppose further that the sets $A_{i,j} \in \Phi_{I_{i,j}}$ are such that $\mu^F_{[d]}\bigl(\prod_{i=1}^d \bigcap_{j=1}^{k_i} A_{i,j}\bigr) = 0$. Then we must also have $\mu\bigl(\bigcap_{i=1}^d \bigcap_{j=1}^{k_i} A_{i,j}\bigr) = 0$. The following terminology will be convenient during the proof: we say that a family of sets $(A_{i,j})_{i,j}$ with $A_{i,j} \in \Phi_{I_{i,j}}$ satisfying the above hypothesis is associated to the collection of up-sets $(I_{i,j})_{i,j}$, and that the collection has property P if every associated family also satisfies the conclusion. The conclusion of multiple recurrence follows from Proposition 6.1 at once: Proof of Theorem 1.1 from Proposition 6.1 Suppose that $A \in \Sigma$ is such that $\mu^F_{[d]}(A^d) = 0$. Then by the pleasantness of the whole system we have $\mu^F_{[d]}(B_1 \times B_2 \times \cdots \times B_d) = 0$ for the sets $B_i := \{\mathbb{E}_\mu(1_A \mid \Phi_{\langle i \rangle}) > 0\} \in \Phi_{\langle i \rangle}$, and since $A \subseteq B_i$ up to a $\mu$-negligible set for each $i$, Proposition 6.1 (applied with $k_i = 1$ and $I_{i,1} = \langle i \rangle$) gives $\mu(A) \leq \mu(B_1 \cap B_2 \cap \cdots \cap B_d) = 0$, as required. It remains to prove Proposition 6.1. This will be done by induction on a suitable ordering of the possible collections of up-sets $(I_{i,j})_{i,j}$, appealing to a handful of different possible cases at different steps of the induction. At the outermost level, this induction will be organized according to the depth of our up-sets (defined in Section 2). Let us first illustrate how the above reduction to Proposition 6.1 and then the inductive proof of that proposition combine to give a proof of Theorem 1.1 in the simple case $d = 3$. Example Suppose that $T_1, T_2, T_3: \mathbb{Z} \curvearrowright (X, \Sigma, \mu)$ is a fully pleasant and fully isotropized triple of actions and that $A \in \Sigma$ has $\mu^F_{[3]}(A^3) = 0$. We will show that $\mu(A) = 0$. As in the above argument, we know that $A$ is contained, up to $\mu$-negligible sets, in the sets $B_i := \{\mathbb{E}_\mu(1_A \mid \Phi_{\langle i \rangle}) > 0\}$, and so we must actually have $\mu^F_{[3]}(B_1 \times B_2 \times B_3) = 0$. Clearly $A$ is contained in $B_1 \cap B_2 \cap B_3$ up to a $\mu$-negligible set, so it will suffice to show that this intersection is $\mu$-negligible. The factors $\Phi_{\langle 1 \rangle}$, $\Phi_{\langle 2 \rangle}$ and $\Phi_{\langle 3 \rangle}$ can be generated using intersections of members from countable generating sets in each $\Phi_{\{i,j\}}$; let $\mathcal{B}_{\{i,j\},n}$ be finite subalgebras increasing to such generating sets, set $\Xi^{(n)}_i := \sigma\bigl(\mathcal{B}_{\{i,j\},n}: j \neq i\bigr)$, and for a small $\delta > 0$ let $C^{(n)}_i := \{\mathbb{E}_\mu(1_{B_i} \mid \Xi^{(n)}_i) > 1 - \delta\}$, so for large $n$ this set should be a $\mu$-approximation to $B_i$, and observe in addition that $\mu(B_i \triangle C^{(n)}_i) \to 0$ as $n \to \infty$. It is easy to check from Corollary 5.2 that $\Phi^F_{\langle i \rangle}$ must be relatively independent from $\Phi^F_{\langle j \rangle} \vee \Phi^F_{\langle k \rangle}$ over $\Phi^F_{\langle i \rangle \cap (\langle j \rangle \cup \langle k \rangle)}$ under $\mu^F_{[3]}$ when $\{i, j, k\} = \{1, 2, 3\}$, and from this we compute that, since $\delta < 1/3$, we must have $\mu^F_{[3]}(C^{(n)}_1 \times C^{(n)}_2 \times C^{(n)}_3) = 0$ for all $n$. The importance of this is that for large $n$ we have now approximated the sets $B_i$ by sets $C^{(n)}_i$ that lie in the simpler $\sigma$-algebras $\Xi^{(n)}_i$ but nevertheless still enjoy the property that the measure $\mu^F_{[3]}(C^{(n)}_1 \times C^{(n)}_2 \times C^{(n)}_3)$ is strictly zero. Since each $\mathcal{B}_{\{i,j\},n}$ is finite, for any given $n$ we may write each $C^{(n)}_i$ as a finite union of subsets of the form $C_{i,p} := D_{\{i,j\},p} \cap D_{\{i,k\},p}$ with $D_{\{i,j\},p} \in \mathcal{B}_{\{i,j\},n}$ for every $p$, and these must now also enjoy the property that $\mu^F_{[3]}(C_{1,p_1} \times C_{2,p_2} \times C_{3,p_3}) = 0$ for all possible indices $p_1, p_2, p_3$. Next the fact that $\mu^F_{[3]}(\pi_i^{-1}(C) \triangle \pi_j^{-1}(C)) = 0$ whenever $C \in \Phi_{\{i,j\}}$ (Lemma 3.2) comes into play, allowing us, for example, to move the factor of $C_{2,p_2}$ lying in $\Phi_{\{1,2\}}$ under the first coordinate rather than the second in the above equation, and similarly. In this way we can re-arrange the above equation into a form involving only sets $D_{\{i,j\}} \in \Phi_{\{i,j\}}$, each lifted through a single convenient coordinate.
Now, Corollary 5.2 tells us that the three oblique copies $\Phi^F_{\{i,j\}}$ are relatively independent over $\Phi^F_{\{1,2,3\}}$ under $\mu^F_{[3]}$, and so we deduce from the above equation an identity between the integrals of the corresponding conditional expectations, where the first and second lines are equal by Lemma 3.2 since all the functions involved are $\Sigma^{T_1 = T_2 = T_3}$-measurable. Taking the union of these equations over triples of indices $p_1, p_2, p_3$ gives $\mu(C^{(n)}_1 \cap C^{(n)}_2 \cap C^{(n)}_3) = 0$ for any $n$, and so, since the sets $C^{(n)}_i$ approximate the $B_i$, we deduce that $\mu(B_1 \cap B_2 \cap B_3) = 0$, as required. We now turn to the full induction that generalizes the above argument, broken into a number of steps. LEMMA 6.3 (removal of principal up-sets of minimal depth). Proof Let $I_{i_1,j_1} = \langle e_1 \rangle, I_{i_2,j_2} = \langle e_2 \rangle, \ldots, I_{i_\ell,j_\ell} = \langle e_\ell \rangle$ be an enumeration of all the (principal) up-sets of depth $k$ in our collection. We will treat two separate cases. First suppose that two of the generating sets agree; by re-ordering if necessary we may assume that $e_1 = e_2$. Clearly we can assume that there are no duplicates among the coordinate-collections $(I_{i,j})_{j=1}^{k_i}$ for each $i$ separately, so we must have $i_1 \neq i_2$. However, if we now suppose that $A_{i,j} \in \Phi_{I_{i,j}}$ for each $i, j$ are such that the associated product is $\mu^F_{[d]}$-negligible, then the same equality holds if we simply replace $A_{i_1,j_1} \in \Phi_{\langle e_1 \rangle}$ with $A'_{i_1,j_1} := A_{i_1,j_1} \cap A_{i_2,j_2}$ and $A_{i_2,j_2}$ with $A'_{i_2,j_2} := X$. Now this last set can simply be ignored to leave an instance of a $\mu^F_{[d]}$-negligible product for the same collection of up-sets omitting $I_{i_2,j_2}$, and so property P of this reduced collection completes the proof. On the other hand, if all the $e_i$ are distinct, we shall simplify the last of the principal up-sets $I_{i_\ell,j_\ell}$ by exploiting the relative independence among the associated oblique copies of our factors. Assume for notational simplicity that $(i_\ell, j_\ell) = (1, 1)$; clearly this will not affect the proof. We will reduce to an instance of property P associated to the collection $(I'_{i,j})$ defined by $I'_{1,1} := \langle e_\ell \rangle \setminus \{e_\ell\}$ and $I'_{i,j} := I_{i,j}$ otherwise, which has one fewer up-set of depth $k$ and so falls under the inductive assumption. Setting $A'_{1,1} := \{\mathbb{E}_\mu(1_{A_{1,1}} \mid \Phi_{\langle e_\ell \rangle \setminus \{e_\ell\}}) > 0\} \in \Phi_{\langle e_\ell \rangle \setminus \{e_\ell\}}$ and $A'_{i,j} := A_{i,j}$ for $(i, j) \neq (1, 1)$, we have that $\mu(A_{1,1} \setminus A'_{1,1}) = 0$, and it follows from the above equality that also $\mu^F_{[d]}\bigl(\prod_i \bigcap_j A'_{i,j}\bigr) = 0$, so an appeal to property P for the reduced collection of up-sets completes the proof. Remark The first very simple case treated by the above proof is the only step in the whole of the present section that is essentially absent from Tao's arguments in Sections 6 and 7 of [13]. Nevertheless, it seems to be essential for the correct organization of the present argument, since we need to allow for which of our sets are lifted under which coordinate projections in the hypothesis that the product be $\mu^F_{[d]}$-negligible. LEMMA 6.4 (reduction of the number of non-principal up-sets of minimal depth). Proof Let $I_{i_1,j_1}, I_{i_2,j_2}, \ldots, I_{i_\ell,j_\ell}$ be the non-principal up-sets of depth $k$, and now in addition let $e_1, e_2, \ldots, e_r$ be all the members of $I_{i_\ell,j_\ell}$ of size $k$ (so, of course, $r \leq \binom{d}{k}$). Once again we will assume for simplicity that $(i_\ell, j_\ell) = (1, 1)$. We break our work into two further steps. Step 1 First consider the case of a collection $(A_{i,j})_{i,j}$ such that, for the set $A_{1,1}$, we can actually find finite subalgebras $\mathcal{B}_s \subseteq \Phi_{e_s}$ for $s = 1, 2, \ldots, r$ such that $A_{1,1} \in \mathcal{B}_1 \vee \mathcal{B}_2 \vee \cdots \vee \mathcal{B}_r \vee \Phi_{I_{1,1} \cap [d]^{\geq k+1}}$ (so $A_{1,1}$ lies in the factor indexed by one of our non-principal up-sets of depth $k$, but it fails to lie in a factor indexed by an up-set of depth $k + 1$ only 'up to' finitely many additional generating sets). Choose $M \geq \max_{s \leq r} |\mathcal{B}_s|$, so that we can certainly express $A_{1,1}$ as a union of at most $M^r$ sets, each the intersection of one member from each $\mathcal{B}_s$ with a member of $\Phi_{I_{1,1} \cap [d]^{\geq k+1}}$. Inserting this expression into the equation now gives that each of the $M^r$ individual product sets is $\mu^F_{[d]}$-negligible. Now consider the family of up-sets comprising the original $I_{i,j}$ for $i = 2, 3, \ldots, d$
and the collection $\langle e_1 \rangle, \langle e_2 \rangle, \ldots, \langle e_r \rangle, I_{1,1} \cap [d]^{\geq k+1}, I_{1,2}, I_{1,3}, \ldots, I_{1,k_1}$ corresponding to $i = 1$. We have broken the depth-$k$ non-principal up-set $I_{1,1}$ into the higher-depth up-set $I_{1,1} \cap [d]^{\geq k+1}$ and the principal up-sets $\langle e_s \rangle$, and so there are only $\ell - 1$ minimal-depth non-principal up-sets in this new family. It is clear that for each $m \leq M^r$ the above product set is associated to this family of up-sets, and so an inductive appeal to property P for this family tells us that the corresponding intersection is also $\mu$-negligible for every $m \leq M^r$. Since the union of these sets is just $\bigcap_{i=1}^d \bigcap_{j=1}^{k_i} A_{i,j}$, this gives the desired negligibility in this case. Step 2 Now we return to the general case, which will follow by a suitable limiting argument applied to the conclusion of Step 1. Since any $\Phi_e$ is countably separated, for each $e$ with $|e| = k$ we can find an increasing sequence of finite subalgebras $\mathcal{B}_{e,1} \subseteq \mathcal{B}_{e,2} \subseteq \ldots$ that generates $\Phi_e$ up to $\mu$-negligible sets. In terms of these define the approximating sub-$\sigma$-algebras $\Xi^{(n)}_{i,j} := \Phi_{I_{i,j} \cap [d]^{\geq k+1}} \vee \bigvee \{\mathcal{B}_{e,n}: e \in I_{i,j}, |e| = k\}$; for each $I_{i,j}$ these form an increasing family of $\sigma$-algebras that generates $\Phi_{I_{i,j}}$ up to $\mu$-negligible sets (indeed, if $I_{i,j}$ does not contain any sets of the minimal depth $k$ then we simply have $\Xi^{(n)}_{i,j} = \Phi_{I_{i,j}}$ for all $n$). Observe that by Corollary 5.2, for each $n$ we have that $\Phi^F_{I_{1,1}}$ and $\bigvee_{(i,j) \neq (1,1)} \pi_i^{-1}(\Xi^{(n)}_{i,j})$ are relatively independent over $\pi_1^{-1}(\Xi^{(n)}_{1,1})$. Given now a family of sets $(A_{i,j})_{i,j}$ associated to $(I_{i,j})_{i,j}$, for each $(i, j)$ the conditional expectations $\mathbb{E}_\mu(1_{A_{i,j}} \mid \Xi^{(n)}_{i,j})$ form an almost surely uniformly bounded martingale converging to $1_{A_{i,j}}$ in $L^2(\mu)$. Letting $B^{(n)}_{i,j} := \{\mathbb{E}_\mu(1_{A_{i,j}} \mid \Xi^{(n)}_{i,j}) > 1 - \delta\}$ for some small $\delta > 0$ (to be specified momentarily), it is clear that we also have $\mu(A_{i,j} \triangle B^{(n)}_{i,j}) \to 0$ as $n \to \infty$. Let also $F := \prod_{i=1}^d \bigcap_{j=1}^{k_i} B^{(n)}_{i,j}$. We now compute, using the above-mentioned relative independence, an integral inequality for each pair $(i, j)$. However, from the definition of $B^{(n)}_{i,j}$ we must have $\mathbb{E}_\mu(1_{A_{i,j}} \mid \Xi^{(n)}_{i,j}) > 1 - \delta$ almost surely on $B^{(n)}_{i,j}$, and therefore the above integral inequality, together with the hypothesis that the product of the $A_{i,j}$ is $\mu^F_{[d]}$-negligible, yields an estimate; and so provided we chose $\delta < \bigl(\sum_{i=1}^d k_i\bigr)^{-1}$ we must in fact have $\mu^F_{[d]}(F) = 0$. We have now obtained sets $(B^{(n)}_{i,j})_{i,j}$ that are associated to the family $(I_{i,j})_{i,j}$ and have the property of lying in finitely-generated extensions of the relevant factors corresponding to the members of the $I_{i,j}$ of minimal size, and so we can apply the result of Step 1 to deduce that $\mu\bigl(\bigcap_{i,j} B^{(n)}_{i,j}\bigr) = 0$ for every $n$, and hence $\mu\bigl(\bigcap_{i,j} A_{i,j}\bigr) = 0$ as $n \to \infty$, as required. Proof of Proposition 6.1 We first take as our base case $k_i = 1$ and $I_{i,1} = \{[d]\}$ for each $i = 1, 2, \ldots, d$. In this case we know that for any $A \in \Phi_{[d]}$ the pre-images $\pi_i^{-1}(A)$ are all equal up to negligible sets, and so given $A_1, A_2, \ldots, A_d \in \Phi_{[d]}$ with $\mu^F_{[d]}(A_1 \times A_2 \times \cdots \times A_d) = 0$ we deduce at once that $\mu(A_1 \cap A_2 \cap \cdots \cap A_d) = 0$. The remainder of the proof now just requires putting the preceding lemmas into order to form an induction with three layers: if our collection has any non-principal up-sets of minimal depth, then Lemma 6.4 allows us to reduce their number at the expense only of introducing new principal up-sets of the same depth; and having removed all the non-principal minimal-depth up-sets, Lemma 6.3 enables us to remove also the principal ones until we are left only with up-sets of increased minimal depth. This completes the proof.
Radiation-induced eosinophil increase ratio predicts patient outcomes in non-small cell lung cancer Background and purpose Radiotherapy (RT) is a double-edged sword in regulating immune responses. This study aimed to investigate the impact of thoracic RT on circulating eosinophils and its association with patient outcomes in non-small cell lung cancer (NSCLC). Materials and methods This retrospective study included 240 patients with advanced NSCLC treated with definitive thoracic RT from January 2012 to January 2020. Statistical analyses included Kaplan-Meier analysis of overall survival (OS) and progression-free survival (PFS), multivariate Cox analyses to identify significant variables, and Spearman's correlation to quantify the relationship between dose-volume histogram (DVH) parameters and the eosinophil increase ratio (EIR). Results Absolute eosinophil counts (AECs) showed an increasing trend during RT and an obvious peak in the 1st month after RT. The threshold of the eosinophil increase ratio (EIR) at the 1st month after RT for both OS and PFS was 1.43. Patients with an EIR above 1.43 experienced particularly favorable clinical outcomes (five-year OS: 21% versus 10%, P<0.0001; five-year PFS: 10% versus 8%, P=0.014), but may not derive a PFS benefit from the addition of chemotherapy to RT. The higher a patient's EIR, the larger the potential benefit in the absence of chemotherapy. DVH parameters including heart mean dose and heart V10 were negatively associated with EIR. None of these DVH parameters was correlated with the clinical outcomes. Conclusion EIR may serve as a potential biomarker to predict OS and PFS in NSCLC patients treated with RT. These findings require prospective studies to evaluate the role of such a prognostic marker in identifying at-risk patients and tailoring interventions. Introduction Radiotherapy (RT) is the most available treatment for patients with NSCLC who are not suitable for surgery and for the large proportion of patients with limited-stage small cell lung cancer (SCLC). The poor survival of localized lung cancer patients who receive RT is due to the limited treatment delivery to tumors (1). Constrained by the radiation toxicities of adjacent organs such as the uninvolved lung, heart, spinal cord, and esophagus, attempts to escalate the radiation dose have failed to translate into improvements in outcome (2). RT has long been known to induce immune system activation by producing local inflammatory responses, increasing tumor-infiltrating immunostimulatory cells, and promoting the release of tumor antigens (3)(4)(5)(6)(7)(8). An effective immune response contributes to improved patient outcomes. Eosinophils, a type of circulating immune cell, are cardinal infiltrators of multiple tumors (9) and correlate with cancer patient outcomes across distinct histological tumor types (10,11). High levels of eosinophils in colorectal tumors (12), nasopharyngeal carcinomas (13), and melanomas (14) correlated with better outcomes. Post-treatment absolute eosinophil counts (AECs) may be a prognostic biomarker in NSCLC, and some published findings have verified their correlation with progression-free survival (PFS) (10). In addition to its immune-stimulating effects, radiation is also known to induce immunosuppression (15). Since the pulmonary circulation receives the entire cardiac output, a great number of circulating immune cells are directly destroyed during thoracic radiation (16)(17)(18).
Larger RT fields may expose a larger lung volume to radiation and, as a result, induce more eosinophil destruction. Since a large volume of blood circulates through the heart during each thoracic RT fraction, the heart dose is a plausible determinant of eosinophil destruction. We hypothesized that patients whose AECs were less impaired by radiation would exhibit better outcomes, achievable by restricting heart dose-volume histogram (DVH) parameters. In addition, we investigated whether eosinophil preservation predicted benefit from the addition of chemotherapy in this homogeneous NSCLC cohort. Material and methods Patients A retrospective analysis was carried out for advanced NSCLC patients who were treated with RT at a single academic cancer center between January 2012 and January 2020. Inclusion criteria: pathologic confirmation of NSCLC, stage III (eighth edition of the lung cancer stage classification) or stage IV with oligometastatic disease, receipt of radical thoracic RT to the primary disease, radiation dose ≥46 Gy, and full blood counts recorded 1 week prior to initiation of RT and at least once after RT. Patients diagnosed with acute or chronic infections, any type of immunodeficiency, hematological disorders, or anti-inflammatory treatment before RT, any of which would affect AECs in the peripheral blood, were excluded. The Ethics Committee of Chongqing University Cancer Hospital approved this retrospective study. Data collection Patients' demographic data, clinicopathological characteristics, and treatment details were collected manually from the hospital electronic medical record database. Variables included gender, age, ECOG PS, smoking history, TNM stage, use of corticosteroids, and so on. Sequential chemoradiotherapy (s-CRT) was defined as chemotherapy delivered within 1 month before and/or after RT. Chemotherapy delivered more than 1 month before or after RT was classified as RT alone. The complete blood cell count closest to the start of RT was taken as the baseline blood count. The AECs at baseline, during RT (weeks 1-4), and after RT (months 1-3) were recorded. RT modalities included intensity-modulated radiotherapy (IMRT), three-dimensional conformal radiotherapy (3D-CRT), and helical tomotherapy (TOMO). Patients included in this study were treated with standard fractionation regimens. The RT dose and planning target volume (PTV) were collected directly from the treatment plans. To enhance comparability, the radiation dose was converted to an equivalent dose in 2-Gy fractions (EQD2) assuming alpha/beta = 10 for tumor. DVH parameters were obtained for the body (V2, V5, V10), lung (mean dose, V2, V5, V10), and heart (mean dose, V2, V5, V10). Efficacy was assessed according to the Response Evaluation Criteria in Solid Tumors (RECIST) 1.1 (19). Overall survival (OS) was defined as the time from radiotherapy administration to death from any cause or the time of the last follow-up (14 July 2021). PFS was defined as the time from radiotherapy administration to the first recorded instance of disease progression, death, or the last follow-up visit, whichever came first.
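The EQD2 conversion used in the data collection above follows the standard linear-quadratic formula EQD2 = D * (d + alpha/beta) / (2 + alpha/beta); a minimal sketch, assuming a total dose D delivered in uniform fractions of size d and the stated alpha/beta = 10 Gy for tumor:

def eqd2(total_dose_gy: float, dose_per_fraction_gy: float,
         alpha_beta_gy: float = 10.0) -> float:
    """Equivalent dose in 2-Gy fractions under the linear-quadratic model."""
    return total_dose_gy * (dose_per_fraction_gy + alpha_beta_gy) / (2.0 + alpha_beta_gy)

# Example: 66 Gy given in 3-Gy fractions is biologically ~71.5 Gy in 2-Gy fractions
print(eqd2(66.0, 3.0))  # 66 * (3 + 10) / (2 + 10) = 71.5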
Statistics Descriptive statistics were used to examine whether the data followed a normal distribution. Continuous data were presented as median and interquartile range (IQR) for non-normal distributions and mean ± standard deviation (SD) for normal distributions. Categorical data were compared using the χ2 test. Student's t-test was applied to compare group means for normally distributed continuous data, while the Mann-Whitney U-test and Wilcoxon signed-rank test were applied for non-normal data. Spearman's correlation coefficients were used for non-normally distributed data to determine and quantify the relationship between DVH parameters and eosinophil counts. Because the continuous predictor was linear and showed no natural threshold for patient stratification, restricted cubic splines (implemented in R packages) were used to derive a cutoff value and transform the predictor from a continuous into a categorical variable. Kaplan-Meier analyses for PFS and OS were graphed with the data separated by the threshold of the eosinophil increase ratio (EIR). Univariate and multivariable Cox regressions were applied to assess the effects of patient-, tumor-, and treatment-related factors on the clinical outcomes and to estimate hazard ratios (HRs) with 95% CIs. Variables with P<0.2 in the univariate analysis were included in the multivariable analysis. The interaction between EIR and chemotherapy was assessed via the likelihood ratio test. Two-tailed P-values of <0.05 were considered statistically significant. All analyses were performed using SPSS 26.0 (IBM, Armonk, NY, USA) and R 3.6.3 (R core team, Vienna, Austria). Patient characteristics The flowchart of the study cohort is presented in Supplementary Figure 1. Of 325 patients with NSCLC treated with RT between 2012 and 2020, 85 were excluded because of insufficient treatment data (n=35), a received dose of less than 46 Gy (n=24), or missing full blood count data (n=26). A total of 240 patients with advanced NSCLC were included in the analysis. The median follow-up was 21 months, with 149 events (62%) at the last follow-up. The clinicopathological and DVH parameters are listed in Table 1. The mean age of the population was 62 years (range 55 to 67 years), 88% of the participants were male, and 25% were non-smokers. Tumors were of adenocarcinoma (32%) or squamous (58%) histology, most often T4 (52%), with N3 (48%) nodal status. The most frequent RT technique was intensity-modulated RT (86%). 124 (52%) patients received RT alone, s-CRT was used in 106 (44%), and only 10 (4%) were treated with concurrent CRT. The median prescribed radiation dose was 60 Gy (IQR 56 to 66 Gy) in 2-Gy fractions over a median 42-day treatment course. Dynamic changes of AECs To visualize peripheral blood eosinophil trends in the cohort, AECs were graphed with respect to time (from the start of RT until 3 months after RT). As shown in Figure 1, AECs overall showed an increasing trend during RT, with a characteristic double peak in the 1st week during RT and the 1st month after RT for patients who received RT alone, and a single peak in the 1st month after RT for patients treated with s-CRT. Given these findings, we named the ratio of the eosinophil count in the 1st month after RT to that at baseline the Eosinophil Increase Ratio (EIR), which reflects the efficiency and kinetics of radiation-induced eosinophilia. Association between EIR and clinical outcomes Kaplan-Meier (log-rank) and univariate analyses revealed significantly higher median OS and PFS in the higher-EIR (EIR>1.43) group than in the lower-EIR (EIR ≤ 1.43) group (five-year OS: 21% versus 10%, P<0.0001; five-year PFS: 10% versus 8%, P=0.014; Figures 2-4).
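A minimal sketch of the EIR computation and threshold-based stratification just described, using the open-source lifelines package; the column names and patient values here are hypothetical, and the 1.43 cutoff is the one derived above:

import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

df = pd.DataFrame({
    "aec_baseline": [0.12, 0.20, 0.08, 0.15],  # hypothetical AECs (10^9/L)
    "aec_month1":   [0.30, 0.22, 0.20, 0.10],  # AEC in the 1st month after RT
    "os_months":    [36, 27, 40, 12],
    "death":        [1, 1, 0, 1],
})
df["eir"] = df["aec_month1"] / df["aec_baseline"]  # EIR = month-1 AEC / baseline
high, low = df[df["eir"] > 1.43], df[df["eir"] <= 1.43]

kmf = KaplanMeierFitter()
kmf.fit(high["os_months"], high["death"], label="EIR > 1.43")  # KM curve, one arm
result = logrank_test(high["os_months"], low["os_months"],
                      high["death"], low["death"])
print(result.p_value)  # log-rank comparison of the two EIR strata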
Among the 240 patients, 109 (45%) had an EIR>1.43, and the distribution of the two groups did not differ according to clinical factors (Table 1). Furthermore, in the multivariate Cox analysis, a higher EIR was an independent protective factor for OS (HR 0.541, 95% CI 0.382-0.765, P=0.0001) and PFS (HR 0.685, 95% CI 0.511-0.916, P=0.012; Tables 2, 3). Altogether, the results suggested that a higher EIR was associated with a good prognosis for patients who received RT. To evaluate whether the addition of chemotherapy to RT influenced the predictive function of EIR, the cohort was grouped based on sequential chemotherapy administration. The distribution of the RT alone and s-CRT groups did not differ according to clinical factors (Supplementary Table 1). In the cohort that received RT alone, EIR predicted both OS (Figure 2C) and PFS (median PFS of EIR >1.43 and ≤1.43: 14 months versus 11 months, P=0.0074, Figure 2E). However, in the cohort that received s-CRT, EIR predicted OS rather than PFS (median OS of EIR >1.43 and ≤1.43: 36 months versus 27 months, P=0.022, Figure 2D; median PFS of EIR >1.43 and ≤1.43: 17 months versus 11 months, P=0.33, Figure 2F). In addition, compared with the RT alone cohort, the correlation between EIR and OS was attenuated by chemotherapy administration (Figures 2C, D). The interaction between EIR and chemotherapy showed that the higher a patient's EIR, the larger the potential benefit in the absence of sequential chemotherapy (Supplementary Figure 2). Confirming these findings, multivariate Cox analyses demonstrated that EIR was significantly associated with OS in the RT alone cohort (HR 0.334, 95% CI 0.196-0.572, P<0.0001), but not in the s-CRT cohort (Supplementary Table 2). Association between EIR and dosimetry Lastly, predictors of EIR determined by Spearman's correlation analyses are shown in Supplementary Tables 3 and 4. None of the clinicopathological factors was associated with EIR (Supplementary Table 3). To provide insight into the association between DVH parameters and EIR, analyses were performed in the RT alone cohort. The results revealed that a higher heart mean dose (r=-0.192, P=0.033) and heart V10 (r=-0.189, P=0.035) were significantly associated with a lower EIR (Supplementary Table 4 and Figure 5). Of note, neither the heart mean dose nor heart V10 was associated with PFS or OS (Figures 3, 4).
Figure 1. Dynamic changes of AECs before and after RT, plotted according to whether patients received s-CRT.
Discussion To the best of our knowledge, this is the first report to use EIR to predict the PFS and OS benefit of RT in patients with advanced NSCLC. A high EIR is a beneficial prognostic factor for patients receiving RT alone, but this effect is attenuated by s-CRT. This study also reports that the heart mean dose and heart V10 were significantly associated with the decline of peripheral blood eosinophils. However, these DVH parameters were not independently associated with PFS or OS. Therefore, restricting heart DVH parameters could indirectly affect patients' clinical outcomes by retarding eosinophil decline in patients receiving RT. A growing body of literature has shown that high eosinophil counts can positively affect the efficacy of immunotherapy in head and neck squamous cell carcinoma (20), NSCLC (21), and melanoma (22), owing to their potentially antitumorigenic functions and contribution to the infiltration and activation of other immune cells in tumors (7,9).
However, utilizing pre- or post-treatment AECs to predict clinical benefit cannot reflect whether the dynamic change of AECs caused by treatment affects patient outcomes. In addition, in a recent study of 234 NSCLC cases managed with definitive RT, patients with higher eosinophil counts after radiation had a longer PFS (HR 0.73, P=0.0294) (10). However, the main limitation of that study was that the median intervals from baseline to peak eosinophil counts differed between its two NSCLC cohorts (15 days in one and 37 days in the other), suggesting obvious heterogeneity between the two groups; moreover, the eosinophil peak time point of each individual was inconsistent and even extended to 5 months after RT in some cases, which would confound the effect of RT.
Figure 2. Kaplan-Meier curves showing overall survival (A) and progression-free survival (B) in the NSCLC cohort, overall survival in patients who received RT alone (C) or s-CRT (D), and progression-free survival in patients who received RT alone (E) or s-CRT (F).
The strength of our study is that we used EIR, the ratio of the eosinophil count in the 1st month after RT to that at baseline, to reflect the dynamic change before and after RT. There was an early peak of eosinophil count in the 1st week during RT for patients who received RT alone, but not s-CRT, and the predictive role that EIR played regarding OS was attenuated by chemotherapy administration. All baseline characteristics did not differ between the RT alone group and the s-CRT group, meaning that the predictive power of EIR was affected by chemotherapy rather than by pre-existing differences.
Figure 5. Spearman's correlation coefficient between DVH parameters and EIR at varying percentages of body, lung, and heart doses for patients who received RT alone.
The interaction between EIR and chemotherapy showed that the higher a patient's EIR, the larger the potential benefit in the absence of sequential chemotherapy. We can infer that a sufficient time interval is needed after RT for eosinophil recovery, until EIR exceeds 1.43, before subsequent chemotherapy administration. Moreover, an adequate time interval between induction chemotherapy and RT, and fewer chemotherapy cycles before RT, may facilitate better survival outcomes by retarding eosinophil decline. Since a large volume of blood flows through the heart, thoracic radiation can impair circulating immune cells. Several studies have examined severe lymphopenia in association with DVH, treatment, and clinical factors (23)(24)(25). A meta-analysis pooling 10 quantitative studies suggested that gross tumor volume, lung V5, and heart V5 were predictive of lymphopenia (26). Consistently, our study is the first to investigate the interaction between DVH parameters and the change in eosinophils. The results showed that heart mean dose and heart V10 were significantly negatively associated with EIR. Thus, heart DVH parameters should be minimized as much as possible to optimize RT treatment planning. This study has limitations, including those inherent to retrospective reviews. The eosinophils in our study were not isolated for further characterization of their phenotypes, so the heterogeneity of protumorigenic and antitumorigenic eosinophil phenotypes could not be evaluated. In addition, the association between low EIR and survival warrants further investigation. It is possible that a low EIR results in a poor immune status or induces radiation-related toxicity profiles.
Furthermore, all patients in our study were Chinese individuals treated with RT alone or s-CRT. Our findings may not be generalizable to other populations with different treatment modalities and racial backgrounds. In conclusion, our findings suggest that EIR is an independent prognostic factor for survival outcomes among patients with NSCLC undergoing RT. Further studies are warranted to tailor treatments based on risk stratification by EIR. Data availability statement The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation. Ethics statement Written informed consent was obtained from the individual(s) for the publication of any potentially identifiable images or data included in this article. Author contributions Equal contribution and first authorship: N-HW and XZ contributed equally to this work and share first authorship.
The effects of differing anticoagulant regimes on blood quality after cell salvage in coronary artery bypass grafting (CABG): a pilot study Background Cell salvage reduces allogenic blood transfusion requirements in surgery. We present a pilot study exploring the impact of anticoagulant choice, citrate or heparin, on the quality of cell-salvaged blood in adults undergoing coronary artery bypass grafting (CABG). Materials and methods Elective on-pump CABG patients were randomly allocated to citrate or heparin anticoagulation. We measured red blood cell characteristics and inflammation in both the blood collection reservoir and the washed red blood cell concentrate. Postoperatively, the levels of biomarkers and the coagulation profile in the peripheral blood, as well as the transfusion requirements for allogenic blood products, were studied. Results Thirty-eight patients were included, 19 in the citrate group and 19 in the heparin group. Baseline characteristics were similar. In the washed red blood cell concentrate, mean Hb (g/dl) and Ht (%) were lower in the citrate group [Hb: 18.1 g/dl (SD 1.3) vs. 21.1 (1.6), p < 0.001; Ht: 59.9% (54.7-60.9) vs. 63.7% (62.3-64.8); p < 0.001]; mean corpuscular volume (MCV, μm³) was higher [99.1 fL (9.4) vs. 88 (4.2), p < 0.001] and mean corpuscular hemoglobin concentration (MCHC, g/dl) lower in the citrate group [31.9 g/dl (29.6-32.4) vs. 33.6 (33.1-34.0), p < 0.001]. The thrombocyte count (1000/μl) was higher in the citrate group [31.0 (26.0-77.0) vs. 13.0 (10.0-39.0); p = 0.006]. There were no differences in the requirement for allogenic blood product transfusion (intraoperatively and postoperatively) or in the coagulation parameters after washed red blood cell concentrate infusion. Higher IL-10 was found in the citrate group in the blood collection reservoir, and higher neutrophil-derived myeloperoxidase (MPO) in the heparin group after washed red blood cell concentrate infusion. Conclusion Though red blood cells in the washed red blood cell concentrate were more swollen and diluted in the citrate group, with more residual thrombocytes, published quality guidelines were met in both groups. Our pilot study suggests that differences in inflammatory markers in the blood collection reservoir and after infusion of washed red blood cell concentrate indicate a possible pro-inflammatory effect of heparin compared to citrate. A larger study is warranted to confirm these results and their possible clinical consequences. Trial registration ClinicalTrials.gov: NCT02674906. Registered 5 February 2016. Supplementary Information The online version contains supplementary material available at 10.1186/s13019-023-02246-w. Background Allogenic blood transfusion, a necessary tool in managing perioperative blood loss, has several adverse effects. Perioperative allogenic blood transfusion, even when limited in volume, is associated with postoperative morbidity, increased length of stay and thus cost, and mortality in both general and cardiac surgery [1,2]. Cell salvage (CS) can reduce perioperative allogenic blood transfusion needs during cardiac surgery [3,4]. Anticoagulation for CS can be achieved with either citrate or heparinized saline [5,6]. Because of their properties and differing mechanisms of action, these anticoagulants differ in their impact on acid-base balance, cellular energy supply, calcium homeostasis, inflammation, and oxidative stress [7,8].
Cytosolic calcium (Ca²⁺) has an important signaling function, mediated by the release of various signaling mediators from intracellular granules of activated blood platelets, polymorphonuclear cells, monocytes and macrophages. By modifying Ca²⁺ availability, citrate can exert a modulating effect on inflammatory signaling. Glucose and citrate, key components of citrate solutions, can also serve as energy sources for red blood cells [7,8]. Heparin has an elaborate binding profile and can have both pro- and anti-inflammatory effects depending on its binding site, concentration and environment [7, 9–11]. Blood collected in the cell saver blood collection reservoir (BCR) can be used to study the effects of heparin vs. citrate directly on inflammatory parameters in red blood cells collected from the surgical field. To our knowledge, only one clinical study, by Mortelmans et al., compared the effects of these anticoagulants in CS [12]. In this pilot study we aimed to study the effects of citrate and heparin anticoagulant regimens on the quality of washed red blood cell concentrate (WRBC), and at various time points on inflammatory parameters and variables of hemolysis and coagulation, in adult patients undergoing on-pump coronary artery bypass graft surgery (CABG). Methods This single-center pilot study (NCT02674906) was performed in the operating room and the mixed medical-surgical ICU of Ziekenhuis Oost Limburg (ZOL), Genk, Belgium, an 805-bed non-university teaching hospital. The study was approved by the local ethics committee and written informed consent was obtained prior to surgery from the patient or legal representative. After consent, patients were randomized preoperatively either to a citrate anticoagulant group or a heparin anticoagulant group for cell salvage. During CABG, the perfusionist and anaesthesiologist were unblinded to the anticoagulation regimen used. Blinding was complete for the patient, as well as for personnel in the intensive care unit (ICU) and ward. Adult patients undergoing elective on-pump CABG were considered eligible for inclusion. Elective surgery was defined as surgery planned at least 24 h in advance. Off-pump CABG, urgent procedures, use of vasoactive medication prior to surgery, infection treated with antimicrobial therapy, chronic inflammatory disease, immunosuppressive drug treatment, active neoplasia, renal replacement therapy, or use of extracorporeal membrane oxygenation (ECMO) were exclusion criteria. In addition, we excluded patients who had massive intraoperative bleeding that could not be safely managed while collecting study data. Patients could be excluded at any time if the inclusion criteria were no longer met or when exclusion criteria appeared. Patients were randomized using randomly generated treatment allocations within sealed opaque envelopes. CS is a procedure to collect blood lost in the surgical field for reuse. Blood is collected in the BCR, where it is mixed with an anticoagulant, either citrate or heparinized saline. The collected blood-anticoagulant mixture is processed by filtering, after which it is drawn into a centrifuge. Isotonic saline solution is added to the centrifuge bowl as washing fluid. The centrifugal procedure separates out red blood cells, which are denser and are propelled against the outer wall of the bowl, while less dense plasma moves towards the centre of the bowl, where it is deposited in a waste bag.
Waste products, including white blood cells, platelets, plasma, anticoagulant, fat, clotting factors, and free plasma haemoglobin, are collected in the waste fluid. The washed red blood cell concentrate (WRBC) is collected in a separate bag. Anaesthesia, cardiopulmonary bypass and cell salvage procedures were all executed according to the standard protocol used in our institution (see Additional file 1: Supplement 1). Fixed administration rates of anticoagulant to the cell saver circuit could potentially lead to clot formation in case of more than moderate blood loss; the BCR and its filter were therefore primed with a larger volume of anticoagulant (300 ml, compared to 150-200 ml). During the cell salvage procedure, measures were taken to keep the anticoagulant-to-blood ratio (ATBR) as constant as possible. Cell salvage volumes (total fluid volume collected in the BCR, incubation time, waste fluid and washing fluid used) and intraoperative diuresis were noted. Both the types and volumes of fluids administered intraoperatively by the anaesthetist and perfusionist were tracked, as well as the volume of red blood cells that were washed. WRBC was transfused intraoperatively or immediately postoperatively. If there was ongoing blood loss, the standard hospital protocol was applied to manage it; if allogenic blood transfusion was necessary, it was administered irrespective of the type of anticoagulant used (the ICU ward and the intensivist were blinded to the anticoagulant group). The standard protocol left transfusion at the discretion of the board-certified anesthesiologist and/or ICU staff member; in our protocol, a Hb < 8 g/dl in the peripheral blood is considered the threshold for allogenic blood transfusion in this type of patient. We collected baseline characteristics of patients before CABG. We measured the effects of the specific anticoagulation protocol on the quality of WRBC, and at various time points on inflammatory parameters and variables of hemolysis and coagulation. First, the quality of WRBC was analysed by measuring hemoglobin (Hb), hematocrit (Ht), mean corpuscular volume (MCV), mean corpuscular hemoglobin (MCH), mean corpuscular hemoglobin concentration (MCHC), red cell distribution width (RDW), free hemoglobin (fHb), iron, thrombocytes, and white blood cell (WBC) count and differentiation in WRBC, compared to Hb and Ht in the BCR. Inflammatory parameters were analysed by measuring levels of interleukin (IL)-6, IL-8, IL-10 and myeloperoxidase (MPO) at baseline, in the BCR (in the blood-anticoagulant mixture before processing), and in peripheral blood of the patient (taken from the arterial catheter in place) at two time points: first, when the patient had been successfully weaned from cardiopulmonary bypass and protamine had been given, but the WRBC had not yet been transfused; and second, in the ICU, 2-3 h post transfusion of WRBC, but before extubation or allogenic blood transfusion. The impact after transfusion of WRBC was evaluated, apart from measuring inflammatory parameters, by measuring fHb, iron, transferrin, ferritin, haptoglobin, hepcidin, prothrombin time (PT), activated partial thromboplastin time (aPTT), international normalized ratio (INR), and rotational thromboelastometry (ROTEM) in peripheral blood of the patient (taken from the arterial catheter) at the same two time points before and after transfusion of WRBC.
Furthermore, blood loss in thoracic drains and transfusion requirements over the first 24 postoperative hours were compared between groups. Statistics were performed using SPSS® Statistics version 28 (IBM). Normally distributed values were reported as the mean (SD), and non-normally distributed values as the median (25th-75th percentile). Because of the limited sample size, normality was determined by the Shapiro-Wilk test. The independent-samples t-test or the Mann-Whitney U test was used as appropriate. The paired-samples t-test or the Wilcoxon signed-rank test was used as appropriate to analyse values per patient. The chi-square test was used for comparison of categorical values. A two-sided p value of less than 0.05 was considered statistically significant. A sample size of 34 achieves 80% power to detect a difference of 2 g/dl in Hb, a primary outcome parameter, using a two-tailed two-sample t-test at a 0.05 level of significance; equal-sized groups and a standard deviation of 2 g/dl were assumed. Results Over a 3-month study period, a total of 38 patients were included, 19 patients in each group. There were no significant differences in baseline characteristics, including fasting lipid profiles, between the groups, apart from a higher lymphocyte count in the citrate group (Table 1, Panel A) (for laboratory values see Additional file 1: Supplement 2). Furthermore, only 2/19 (11%) patients in both groups were taking aspirin on the day of surgery (p = 1.000). In all other patients, antiplatelet therapy had been ceased. There were no differences in ICU length of stay (citrate group: median (25-75%) 3 (2-4) days; heparin group 3 (2-5) days, p = 0.300), hospital length of stay (citrate group: median (25-75%) 9 (8-11) days; heparin group 8 (7-10.5) days, p = 0.257) or duration of mechanical ventilation (citrate group: median (25-75%) 8 (7-11) hours; heparin group 8 (7-13) hours, p = 0.665). There was no in-hospital mortality in the citrate group; one of 19 patients in the heparin group died in the ICU, due to a massive middle cerebral artery stroke (5.3%, p = 0.311). The quality of WRBC WRBC from patients in the citrate group had lower Hb, Ht and RBC count compared to heparin-exposed patients (Table 1, Panel C). In addition, they had a higher mean corpuscular volume (MCV) and red blood cell distribution width (RDW), with a lower mean corpuscular hemoglobin concentration. Thrombocyte counts were higher in the citrate group. There were no differences in fHb and iron in the washed RBC concentrate. WBC counts did not differ between groups in WRBC. WBC differentiation showed a higher count and percentage for lymphocytes, monocytes and eosinophils in the citrate group (see Table 1, Panel C) and a higher percentage (but not absolute count) of neutrophils in the heparin group. Inflammatory parameters In the BCR, despite no difference at baseline, the anti-inflammatory marker IL-10 was significantly higher in the citrate group [203.70 (108.98-411.42) vs. 108.10 (60.00-176.71), p = 0.013]. There were no significant differences in the collecting bowl for the other inflammatory markers, Hb and Ht (see Table 1, Panel B). Before WRBC transfusion, there were no significant differences in inflammatory parameters between the groups (see Additional file 1: Supplement 2). After WRBC transfusion (see Table 2), MPO was higher in the heparin group than in the citrate group [citrate 92.5 (83.5-117.3) vs. heparin 129.9 (112.5-187.9), p = 0.019]. There were no differences in other inflammatory parameters.
The impact after transfusion of WRBC Free Hb, iron, ferritin, haptoglobin and hepcidin did not differ between the groups, both before and after WRBC transfusion. There was no difference in aPTT, PT, INR and ROTEM analysis before and after WRBC transfusion between groups. Both intraoperatively and postoperatively there was no difference between the groups in the need for transfusion of allogenic blood products (packed cells, thrombocytes, fresh frozen plasma), and the total blood volume lost during the first 24 postoperative hours was comparable (see Table 3). Cell salvage volumes There were no significant differences between study groups in total fluid volume collected in the BCR, ATBR, incubation time, waste fluid or washing fluid. Although the volume of WRBC was higher in the citrate group, this difference was not significant. The significant difference in Ht in WRBC between groups (see Table 1, Panel C) was thus offset by the higher volume of WRBC in the citrate group, resulting in no significant difference in the product of Ht and the volume of WRBC. The red blood cell volumes washed twice did not differ between groups [citrate 45 (61) ml, heparin 67 (53) ml, p = 0.172]. No significant differences were noted in fluid volume or type administered before, during and after cardiopulmonary bypass. Intraoperative diuresis was not significantly different between the groups. The total heparin dose during cardiopulmonary bypass was not significantly different between the groups (citrate group: mean heparin dose 32,361 (11,776) IU; heparin group: 36,711 (13,087) IU; p = 0.296). There were no early surgical revisions in either study group, apart from one sternal revision in the heparin group 4 days after surgery. Discussion In this trial comparing the effects of differing anticoagulant regimens (citrate or heparin) on cell salvage in adult patients undergoing elective on-pump CABG, small but relevant differences were observed between the two groups. WRBC had a significantly lower hematocrit and higher MCV and a larger residual thrombocyte count in the citrate group. There were also differences in WBC differentiation between groups. In the collecting bowl (before the cell salvage washing procedure but after treatment with anticoagulant), IL-10 was significantly higher in the citrate group. After WRBC transfusion, there were no differences in interleukin blood levels between the groups, but MPO was significantly higher in patients in the heparin group. Both intraoperatively and postoperatively, this did not result in differences in transfusion of allogenic blood products or blood loss. Washed red blood cell concentrates in both groups met the quality guidelines published by the American Association of Blood Banks [14]. We believe the differences in MCV and MCHC can be attributed to a number of factors. ACD-A is hypotonic (400-440 mOsm/kg, sodium content 214-234 mEq/l) and acidic (pH 4.5-5.5). Both properties have been implicated in increased MCV, the latter especially when ACD-A concentrations are relatively high [15]. The increased MCV and RDW values in the citrate group reflect the findings of Mortelmans et al. [12]. The MCHC differences can be attributed to the changes in MCV, as MCH values did not differ significantly. In contrast to the Mortelmans study [12], in which significantly higher fHb values were found in patients after WRBC transfusion at the end of surgery in the citrate group, we found no differences in fHb after transfusion of WRBC.
The amount of fHb measured by Mortelmans, however, is higher than that expected after the passive infusion of the fHb present in WRBC alone, and implies ongoing hemolysis in the citrate group after transfusion of WRBC. Both in our study and in the Mortelmans study, no differences were found in fHb in WRBC. However, Mortelmans did find higher fHb in waste fluids in the citrate group compared to heparin, something which was not part of our protocol. It seems likely that the washing cycle may have masked significant differences in the fHb in the WRBC between study groups. Our findings after transfusion of WRBC gave no indication of ongoing differences in hemolysis between groups. Differences in findings could, of course, be due to the differing surgical procedures studied. Since ACD-A is a hypotonic solution, the osmotic effect on red blood cells would increase with increasing ATBR. However, the ATBR and the incubation time of blood in the BCR were comparable in both groups. Consecutive washing cycles could possibly have an impact on WRBC quality, but the volume of red blood cells washed twice and the proportion of patients per study group whose red blood cells were washed twice were again comparable between groups. During the washing cycle, the centrifuged volume of red blood cells at which the cell saver sensor detects the buffy coat will have been comparable between the groups. In the citrate group, because of the larger MCV, this volume therefore contains fewer red blood cells. This can explain the lower Hb, but not the lower Ht, in the citrate group (since the MCH was comparable between the groups). The same cell saver device was used in all patients but can be expected to deliver a comparable Ht of WRBC in both groups only when MCV is similar. A larger MCV, with a comparable MCH, may have caused a less efficient centrifugal movement of RBC in the citrate group during washing. This may have led to a more dilute WRBC, which would also explain why the volume of WRBC in the citrate group was higher (though not significantly), although the total volume of fluid in the BCR was comparable between the groups. Since the product of Hb and the volume of WRBC was comparable between the study groups, the total amount of autologous Hb transfused in both groups was comparable, which argues against significant differences in hemolysis between groups. Fortunately, the quality guidelines for perioperatively salvaged RBCs, published by the American Association of Blood Banks (AABB), were met in both groups (Hct > 50%, Hb > 15 g/dl, fHb < 1.0 g/dl) [14]. There was a significantly lower residual thrombocyte count in the WRBC in the heparin group. This could be an effect of more pronounced thrombocyte activation in the heparin group. Both in vitro [8] and in vivo [16], platelet degranulation, measured by release of platelet factor 4 (PF4), is increased by heparin compared to citrate, and is therefore most likely a Ca-dependent process, which is likely downregulated by the chelating effect of citrate on free calcium. It seems plausible that platelet activation by heparin and subsequent aggregation may cause a higher proportion of platelets to be washed out during the washing cycle, causing lower thrombocyte counts in the primary infusion bag of washed erythrocytes. A more dilute red blood cell concentrate in the citrate group may allow thrombocytes to infiltrate between red blood cells, leading to a higher thrombocyte count in WRBC.
The smaller thrombocyte volumes may explain why only the thrombocyte count, and not the total WBC count, was higher in the citrate group. At present, the clinical implications of this finding remain unclear. However, low platelet counts have routinely been described after centrifugal cell salvage techniques, and recent cell salvage methods utilizing ultrafiltration have been advocated because they demonstrated improved thrombocyte values in salvaged blood [17]. The inflammatory response elicited by cardiac surgery is characterized by a significant release of both pro- and anti-inflammatory cytokines [18-20], and cell salvage with washing has been demonstrated to ameliorate the inflammatory profile of cardiotomy suction blood [19,20]. Based on studies mainly performed in the setting of extracorporeal circuits in hemodialysis or hemofiltration, it has been hypothesized that citrate, by causing a decrease in extracellular calcium concentration, affects cytosolic calcium concentrations, which act as an intracellular messenger [7], activating neutrophils and platelets with subsequent mediator release. Bohler et al. [21], comparing citrate to heparin dialysis, demonstrated (through reduced neutropenia, C3a levels, and lactoferrin release) that depletion of ionized calcium reduced neutrophil degranulation in the extracorporeal circuit. Two other studies [16,22] in a similar population found that citrate diminished the release of MPO from (activated) neutrophils. Finally, in a study of ICU patients undergoing CVVH, Tiranathanagul et al. [23] found higher post-filter serum MPO levels in the heparin group compared to citrate. Citrate significantly decreased systemic pre-filter serum MPO and IL-8 levels. To date, no effects of citrate versus heparin on IL-10 levels have been documented, either in extracorporeal circuits in renal replacement therapy or in cell salvage. Despite the fact that some studies have found no differences when comparing heparin and citrate as anticoagulants in renal replacement therapy [24,25], there is overwhelming evidence that citrate seems to downregulate inflammation, while heparin seems to be pro-inflammatory at higher concentrations [7], a conclusion reinforced by our findings. Interpretation of the differences found in WBC differentiation in the washed RBC concentrate is difficult because of the minimal baseline differences in WBC differentiation, though the effects of citrate versus heparin as described above, especially the differing effects on polymorphonuclear degranulation, could play a role. Undoubtedly this pilot study has a number of limitations; primarily, it is a small, single-center study, although its findings are statistically robust. The larger anticoagulant priming volume could have caused early hemolysis when the ATBR was highest, but this would have been masked by the washing process. A blood smear and quantification of osmotic fragility might have added reinforcing data. Heparin was given systemically in both study groups. Despite equivalent doses in both groups, a supra-additive effect of systemic heparin with heparinized saline in the heparin group cannot be totally excluded. Differences in blood loss during the first 24 h may have been nullified by the standard algorithm for the treatment of postoperative bleeding, and the effect of differing techniques and outcomes between surgeons could be a possible confounding factor. Furthermore, blinding was not complete until the patient left the operating room.
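As a reproducibility note for the sample-size statement in the statistics section above (80% power to detect a 2 g/dl Hb difference, SD 2 g/dl, two-tailed two-sample t-test, α = 0.05), the following minimal Python sketch reproduces the calculation with statsmodels; it is our illustration, not part of the original analysis, and the variable names are ours.

```python
# Minimal sketch (not from the paper): reproduce the reported sample-size
# calculation for the primary outcome (Hb difference in WRBC).
from statsmodels.stats.power import TTestIndPower

effect_size = 2.0 / 2.0  # 2 g/dl difference over an assumed SD of 2 g/dl (Cohen's d = 1.0)
analysis = TTestIndPower()
n_per_group = analysis.solve_power(effect_size=effect_size,
                                   alpha=0.05,
                                   power=0.80,
                                   ratio=1.0,              # equal-sized groups
                                   alternative='two-sided')
print(round(n_per_group))  # ~17 per group, i.e. ~34 patients in total
```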
Restriction-Spectrum Imaging of Bevacizumab-Related Necrosis in a Patient with GBM Importance: With the increasing use of antiangiogenic agents in the treatment of high-grade gliomas, we are becoming increasingly aware of distinctive imaging findings seen in a subset of patients treated with these agents. Of particular interest is the development of regions of marked and persistent restricted diffusion. We describe a case with histopathologic validation, confirming that this region of restricted diffusion represents necrosis and not viable tumor. Observations: We present a case report of a 52-year-old man with GBM treated with temozolomide, radiation, and concurrent bevacizumab following gross total resection. The patient underwent sequential MRIs which included restriction-spectrum imaging (RSI), an advanced diffusion-weighted imaging (DWI) technique, and MR perfusion. Following surgery, the patient developed an area of restricted diffusion on RSI which became larger and more confluent over the next several months. Marked signal intensity on RSI and very low cerebral blood volume (CBV) on MR perfusion led us to favor bevacizumab-related necrosis over recurrent tumor. Subsequent histopathologic evaluation confirmed coagulative necrosis. Conclusion and Relevance: Our report increases the number of pathologically proven cases of bevacizumab-related necrosis in the literature from three to four. Furthermore, our case demonstrates this phenomenon on RSI, which has been shown to have good sensitivity to restricted diffusion. INTRODUCTION We present a case of a 52-year-old man who first presented with a generalized tonic-clonic seizure. Subsequent MRI revealed a ring-enhancing mass in the left frontal lobe. The patient underwent gross total resection, and pathology was consistent with GBM. Approximately 5 weeks later, the patient was started on standard combined chemotherapy and radiation (temozolomide at 75 mg/m²/day and involved-field radiation at a total dose of 60 Gy in 2.0 Gy fractions over a 6-week period). Concurrently, the patient was also started on bevacizumab as part of a clinical trial to assess the efficacy of bevacizumab for newly diagnosed GBM. Thereafter, the patient continued to receive adjuvant temozolomide and bevacizumab for 9 months. Sequential MRIs were obtained during this time using an advanced diffusion-weighted technique called restriction-spectrum imaging (RSI), which has been shown to provide improved conspicuity and delineation of high-grade primary and metastatic brain tumors compared with standard diffusion-weighted imaging (DWI) (1). Approximately 6 months following surgery, a small focal area of restricted diffusion appeared on the RSI sequence adjacent to the resection cavity. Over the next 18 months, this area of restricted diffusion became significantly larger and more confluent, eventually crossing the corpus callosum and extending into the contralateral frontal lobe (Figure 1). During this time, the patient also had progressive cognitive decline including increasing confusion, forgetfulness, and abulia. Therefore, the imaging findings were initially interpreted as recurrent tumor and biopsy was considered. However, on further inspection, it became clear that the degree and homogeneity of the restricted diffusion seen in this case was much greater than what is typically seen in high-grade glioma (Figure 2A), with GBMs usually demonstrating less intense and more heterogeneous restricted diffusion.
Furthermore, review of the patient's dynamic susceptibility contrast (DSC) MR perfusion revealed that the cerebral blood volume (CBV) in the region of the restricted diffusion lesion was remarkably low, lower than that of the contralateral normal-appearing white matter (NAWM). [Figure 1: Progression of findings on RSI over a 12-month period. Three coronal RSI images spanning a 12-month period following surgery and chemoradiation depict an enlarging area of restricted diffusion which eventually crosses the corpus callosum and extends into the contralateral frontal lobe.] In contradistinction, high-grade gliomas typically demonstrate markedly elevated CBV (2, 3) (Figure 2B). Given the findings of marked signal intensity on the RSI sequence and very low CBV on DSC MR perfusion, it was concluded that this large region of restricted diffusion likely represented bevacizumab-related necrosis, despite the fact that it had enlarged over time and crossed the corpus callosum. Therefore, biopsy was not pursued, and the patient continued to be followed with sequential MRIs. Approximately 2 years after initial diagnosis, the patient expired and underwent autopsy. Gross pathologic evaluation demonstrated yellow, caseous material in the left frontal lobe compatible with necrosis (Figure 3). Histopathologic evaluation of this area revealed coagulative necrosis, gliosis, hyalinized blood vessels, and scattered atypical gemistocytes (Figure 4A). Although it is unclear whether these gemistocytes represent tumor gemistocytes or reactive gemistocytic astrocytes, the Ki-67 stain was completely negative (Figure 4B), indicating that these are non-proliferating cells. In short, there was no evidence of proliferating, recurrent tumor at the time of autopsy. BACKGROUND Previous reports have described the development of regions of marked and persistent restricted diffusion in a subset of patients with malignant glioma during treatment with bevacizumab (4-6). Histopathologic data from these regions reveal necrosis and fibrotic, hyalinized blood vessels rather than viable tumor. Furthermore, these lesions have been correlated with improved outcomes (5). However, based on standard MR imaging, differentiating these lesions from areas of viable tumor, which are also associated with restricted diffusion (7, 8), remains challenging. DISCUSSION Although the development of regions of marked and persistent restricted diffusion has been described in a subset of malignant glioma patients treated with bevacizumab (4-6), this is still a fairly new phenomenon and has been pathologically proven in only three patients, with the present case increasing this number to four. Furthermore, these prior reports utilized standard DWI. In the present case, we describe this same phenomenon using a recently described advanced diffusion-weighted technique called RSI (9). In previous studies, RSI has shown greater sensitivity to restricted diffusion compared to standard DWI because it utilizes multiple b-values and diffusion times to separate out the spherically restricted water compartment from the hindered water compartment (9). As demonstrated in Figure 1, the degree and homogeneity of the RSI signal in this case is striking, facilitating the differentiation of this area of bevacizumab-related necrosis from recurrent tumor, which demonstrates less intense and more heterogeneous restricted diffusion (Figure 2A).
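To make the multi-compartment idea concrete, here is a deliberately simplified sketch of a two-compartment (restricted plus hindered) diffusion signal fit. It is not the actual RSI estimation framework of reference (9), which uses a richer spectrum of scales and multiple diffusion times; all parameter values below are invented for illustration.

```python
# Illustrative two-compartment diffusion model (restricted + hindered).
# NOT the actual RSI algorithm; a simplified sketch with made-up parameters.
import numpy as np
from scipy.optimize import curve_fit

def two_compartment(b, f_r, D_r, D_h):
    """Signal fraction f_r in a slowly diffusing ('restricted') pool and
    (1 - f_r) in a faster 'hindered' pool, each mono-exponential in b."""
    return f_r * np.exp(-b * D_r) + (1 - f_r) * np.exp(-b * D_h)

b_values = np.array([0.0, 500.0, 1500.0, 4000.0])    # s/mm^2, multiple b-values
true = dict(f_r=0.4, D_r=0.0002, D_h=0.0012)         # mm^2/s, invented values
signal = two_compartment(b_values, **true)
signal += np.random.default_rng(0).normal(0, 0.005, signal.size)  # noise

(f_r, D_r, D_h), _ = curve_fit(two_compartment, b_values, signal,
                               p0=[0.3, 0.0003, 0.001],
                               bounds=([0, 0, 0], [1, 0.001, 0.003]))
print(f"restricted fraction ~ {f_r:.2f}, D_r ~ {D_r:.4f}, D_h ~ {D_h:.4f}")
```

A lesion whose voxels show a large restricted fraction with very slow D_r would, in this toy picture, correspond to the intense and homogeneous RSI signal described above.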
Furthermore, we demonstrate that by utilizing DSC MR perfusion, we can further increase our diagnostic certainty in discriminating areas of bevacizumab-related necrosis from recurrent tumor. It is well known that MR perfusion of high-grade glioma yields elevated CBV (2, 3). In the present case, the area of bevacizumab-related necrosis demonstrated essentially no CBV, corroborating similar findings in prior reports (5, 6) and further differentiating this lesion from recurrent tumor. The histopathologic findings in our case were very similar to those described in the prior reports, including coagulative necrosis, gliosis, and hyalinized blood vessels (Figure 4A). However, our case also showed scattered atypical gemistocytes. Regardless of whether these gemistocytes represented tumor gemistocytes or reactive gemistocytic astrocytes, the Ki-67 stain was completely negative (Figure 4B), indicating that these cells were inactive non-proliferating cells and that no viable tumor was present. Previous reports have proposed different explanations for the etiology of these areas of marked and persistent restricted diffusion. While one report has suggested that these lesions represent exacerbation of radiation necrosis by bevacizumab (4), others have suggested that these lesions result from bevacizumab-induced chronic hypoxia (6). We favor the latter explanation, as it is well established that typical radiation necrosis actually demonstrates decreased signal on DWI, due to facilitation of diffusion related to liquefaction within the area of radiation necrosis (10). Further investigation is needed with larger patient cohorts and more histopathologic data, including immunohistochemistry, in order to determine the actual pathophysiology of these lesions. CONCLUDING REMARKS In summary, we present a case of bevacizumab-related necrosis in a patient with GBM, emphasizing the distinctive appearance of this entity on RSI which, together with perfusion imaging, allowed us to differentiate this entity from recurrent tumor. AUTHOR CONTRIBUTIONS Nikdokht Farid: contributed to conception, acquisition, and interpretation of data, and participated in drafting the article. Daniela B. Almeida-Freitas: contributed to conception, acquisition, and interpretation of data, and participated in drafting the article. Nathan S. White: contributed to analysis and interpretation of data and participated in revising the article critically for important intellectual content. Carrie R. McDonald: contributed to conception and interpretation of data, and participated in drafting the article. Karra A. Muller: contributed to acquisition of data and participated in revising the article critically for important intellectual content. Scott R. VandenBerg: contributed to acquisition of data and participated in revising the article critically for important intellectual content. Santosh Kesari: contributed to acquisition of data and participated in revising the article critically for important intellectual content. Anders M. Dale: contributed to conception and interpretation of data, and participated in revising the article critically for important intellectual content.
Hodge numbers of moduli spaces of stable bundles on K3 surfaces We show that the Hodge numbers of the moduli space of stable rank two sheaves with primitive determinant on a K3 surface coincide with the Hodge numbers of an appropriate Hilbert scheme of points on the K3 surface. The precise result is: Theorem: Let $X$ be a K3 surface, $L$ a primitive big and nef line bundle and $H$ a generic polarization. If the moduli space of rank two $H$-semi-stable torsion-free sheaves with determinant $L$ and second Chern class $c_2$ has dimension at least 10, then its Hodge numbers coincide with those of the Hilbert scheme of $l:=2c_2-\frac{L^2}{2}-3$ points on $X$. By a generic polarization we mean a polarization which does not lie on any wall, i.e. any $H$-semi-stable sheaf is stable. Note that by the smoothness criterion [Mu1] the moduli space $M_H(L, c_2)$ is smooth and projective of dimension $4c_2 - L^2 - 6$ if it is non-empty. Moreover, by dimension counting (cf. [T], lemma 3.1, [O'G], prop. 7.2.1) the moduli space of $\mu$-stable vector bundles is dense in $M_H(L, c_2)$. We would like to mention that the Hodge numbers of the Hilbert scheme of points on a surface can be expressed in terms of the Hodge numbers of the surface [GS, Ch]. In fact, even the Hodge structure is known. We would like to compare the Hodge structure of the moduli spaces and of the Hilbert schemes, but even in the special case dealt with in section 1 we cannot prove that they coincide. Our work was motivated by a talk of J. Le Potier in Lambrecht in May 1994. In this talk he explained how to use moduli spaces of coherent systems (or framed modules) to relate moduli spaces of sheaves to Hilbert schemes of points. A special case In this section we prove the theorem in the case that $\mathrm{Pic}(X) = \mathbb{Z} \cdot L$ and $c_2 = L^2/2 + 3$. 1.1 The birational correspondence to the Hilbert scheme Throughout this section we will assume that the Picard group is generated by an ample line bundle $L$, i.e. $\mathrm{Pic}(X) = \mathbb{Z} \cdot L$. Under this assumption a torsion-free sheaf with determinant $L$ is $\mu$-stable if and only if it is $\mu$-semi-stable. For the convenience of the reader we recall the stability condition for framed modules ([HL]): Let $\delta = \delta_1 n + \delta_0$, $D \in \mathrm{Pic}(X)$ and $E$ be a torsion-free rank two sheaf. A framed module $(E, \alpha)$ consists of $E$ and a non-trivial homomorphism $\alpha : E \to D$. It is (semi-)stable if $P_{\mathrm{Ker}(\alpha)} (\leq)\, P_E/2 - \delta/2$ and for all rank one subsheaves $M \subset E$ the inequality $P_M (\leq)\, P_E/2 + \delta/2$ holds. In fact, any semi-stable framed module is torsion-free. In [HL] it was shown that there exists a coarse projective moduli space of semi-stable framed modules. Lemma 1.1 Let $D = L$ and $0 < \delta_1 < L^2$. Then a framed module $(E, \alpha)$ is $\mu$-stable if and only if $E$ is $\mu$-stable. The moduli space $M_H(L, c_2, D, \delta)$ is independent of the specific $\delta$ in this range. Proof: It suffices to show that for a $\mu$-stable vector bundle $E$ there is always a non-trivial homomorphism $E \to L$. Since $\mathrm{Hom}(E, L) = H^0(X, E)$ and $H^2(X, E) \cong H^0(X, E^*) = 0$ by the stability of $E$, the Riemann-Roch-Hirzebruch formula $\chi(E) = L^2/2 - c_2 + 4$ shows that $h^0(X, E) > 0$ under the assumption $c_2 \leq L^2/2 + 3$. ✷ Lemma 1.3 Let $N(L, c_2, L, \delta)$ be the set of all $(E, \alpha) \in M(L, c_2, L, \delta)$ such that $\mathrm{Ker}(\alpha) \cong \mathcal{O}_X$. It is a closed subset, which contains all stable pairs $(E, \alpha)$ with $E$ locally free. Proof: If $E$ is locally free, then $\mathrm{Ker}(\alpha)$ has to be locally free. By stability it is thus isomorphic to $\mathcal{O}_X$. $N(L, c_2, L, \delta)$ is closed, since $\mathrm{Ker}(\alpha) \cong \mathcal{O}_X$ if and only if $\mathrm{length}(\mathrm{Coker}(\alpha)) = c_2$, i.e. if the length is maximal; this is a closed condition.
Any $(E, \alpha) \in N(L, c_2, L, \delta)$ therefore sits in an exact sequence $0 \to \mathcal{O}_X \to E \xrightarrow{\alpha} I_Z \otimes L \to 0$, where $I_Z$ is the ideal sheaf of a codimension two cycle of length $c_2$. Thus we can define a morphism $\psi : N(L, c_2, L, \delta) \to \mathrm{Hilb}^{c_2}(X)$ by mapping $(E, \alpha)$ to $[\mathrm{Coker}(\alpha)]$. The morphism $\psi$ is surjective. Proof: It is enough to show that $\mathrm{Ext}^1(I_Z \otimes L, \mathcal{O}_X) \neq 0$ for all $Z \in \mathrm{Hilb}^{c_2}(X)$. By Serre duality $\mathrm{Ext}^1(I_Z \otimes L, \mathcal{O}_X) \cong H^1(X, I_Z \otimes L)^*$, and since $\chi(I_Z \otimes L) = -1$ one has $h^1(X, I_Z \otimes L) > 0$. We have seen that any $(E, \alpha) \in N(L, c, L, \delta)$ induces an exact sequence $0 \to \mathcal{O}_X \xrightarrow{s} E \xrightarrow{\alpha} I_Z \otimes L \to 0$. Conversely, any section $s \in H^0(X, E)$ of $E \in M(L, c)$ gives a homomorphism $\alpha : E \to L$; the set of framed modules $(E, \alpha)$ with fixed $E$ is isomorphic to $\mathbb{P}(H^0(X, E))$. In fact, $N(L, c, L, \delta)$ can be identified with Le Potier's moduli space of coherent systems of rank one [LP]. The picture we get in the case $c_2 = c := L^2/2 + 3$ is described by the following diagram: $M(L, c) \xleftarrow{\varphi} N(L, c, L, \delta) \xrightarrow{\psi} \mathrm{Hilb}^c(X)$. Both morphisms $\varphi$ and $\psi$ are birational. This is due to the fact that for the generic $[Z] \in \mathrm{Hilb}^c(X)$ the restriction map $H^0(X, L) \to H^0(X, L_Z)$ is injective and hence $h^1(X, I_Z \otimes L) = 1$. This shows that $\psi$ is generically an isomorphism. Since the fibres of $\varphi$ are connected and both spaces are of the same dimension, also $\varphi$ is birational. Note that in particular the moduli space $M(L, c)$ is irreducible. Results about birationality of certain moduli spaces and corresponding Hilbert schemes have been known for some time, e.g. Zuo has shown that $M_H(\mathcal{O}_X, n^2H^2 + 3)$ is birational to $\mathrm{Hilb}^{2n^2H^2+3}(X)$ (cf. [Z]). The moduli spaces of framed modules make this relation more explicit. They are used in the next section to show that the Hodge numbers of the moduli space and the Hilbert scheme coincide. Comparison of the Hodge numbers First, we recall the notion of virtual Hodge polynomials [D], [Ch]. For any quasi-projective variety $X$ there exists a polynomial $e(X, x, y)$ with the following properties: i) If $X$ is smooth and projective then $e(X, x, y) = \sum_{p,q} (-1)^{p+q} h^{p,q}(X)\, x^p y^q$. ii) If $Y \subset X$ is Zariski closed and $U$ its complement then $e(X, x, y) = e(Y, x, y) + e(U, x, y)$. iii) If $X \to Y$ is a Zariski locally trivial fibre bundle with fibre $F$ then $e(X, x, y) = e(Y, x, y) \cdot e(F, x, y)$. iv) If $X \to Y$ is a bijective morphism then $e(X, x, y) = e(Y, x, y)$. If $X \leftarrow Z \rightarrow Y$ is a diagram of quasi-projective varieties, where $Z \to X$ and $Z \to Y$ admit a bijective morphism to a $\mathbb{P}^n$-bundle over $X$, resp. $Y$, then $e(X, x, y) \cdot e(\mathbb{P}^n, x, y) = e(Z, x, y) = e(Y, x, y) \cdot e(\mathbb{P}^n, x, y)$. Hence $e(X, x, y) = e(Y, x, y)$. The idea to prove that $M(L, c)$ and $\mathrm{Hilb}^c(X)$, with $c := L^2/2 + 3$, have the same virtual Hodge polynomial is to stratify both by locally closed subsets $M(L, c)_k := \{E \mid h^0(X, E) = k\}$ and $\mathrm{Hilb}^c(X)_k$ such that the birational correspondence given by the moduli space of framed modules respects these strata. Using the universal subscheme $\mathcal{Z} \subset X \times \mathrm{Hilb}^c(X)$ with the two projections $p$ and $q$ to $X$ and $\mathrm{Hilb}^c(X)$, resp., and the semi-continuity applied to the sheaf $I_{\mathcal{Z}} \otimes p^*(L)$ and the projection $q$ it is easy to see that this defines a stratification into locally closed subschemes. All strata are given the reduced induced structure. We want to show that both morphisms admit a bijective morphism to a $\mathbb{P}^{k-1}$-bundle over the base. In fact, they are $\mathbb{P}^{k-1}$-bundles, but by property iv) of the virtual Hodge polynomials we only need the bijectivity. Definition 1.6 Let $A_k := \pi_*(\mathcal{E}_k)$, where $\pi : M(L, c)_k \times X \to M(L, c)_k$ denotes the projection and $\mathcal{E}_k$ is the restriction of the universal sheaf $\mathcal{E}$, and let $B_k := \mathcal{E}xt^1_q(I_{\mathcal{Z}} \otimes p^*(L), \mathcal{O})$ restricted to $\mathrm{Hilb}^c(X)_k$. Lemma 1.7 $A_k$ and $B_k$ are locally free sheaves on $M(L, c)_k$ and $\mathrm{Hilb}^c(X)_k$, resp., and compatible with base change. Proof: By definition and using Serre-duality we see that $\mathrm{Hilb}^c(X)_k = \{Z \mid \dim \mathrm{Ext}^1(I_Z \otimes L, \mathcal{O}_X) = k\}$ and that it is reduced.
Thus the claim for $B_k$ follows immediately from the base change theorem for global Ext-groups [BPS]. In order to prove the assertion for $A_k$ one argues analogously. The kernel of the universal framed module on $N(L, c, L, \delta) \times X$ restricted to $N(L, c, L, \delta)_k$ induces a morphism to $\mathbb{P}(A_k)$ which is obviously bijective. Analogously, by the universality of $\mathbb{P}(B_k)$ (cf. [La]) the universal framed module over $N(L, c, L, \delta) \times X$, completed to an exact sequence and restricted to the stratum, induces a bijective morphism of $N(L, c, L, \delta)_k$ to $\mathbb{P}(B_k)$. We summarize: $e(M(L, c), x, y) = e(\mathrm{Hilb}^c(X), x, y)$; in particular, the Hodge numbers of $M(L, c)$ and $\mathrm{Hilb}^c(X)$ coincide. Both manifolds $M(L, c_2)$ and $\mathrm{Hilb}^{c_2}(X)$ are symplectic. One might conjecture that in general two birational symplectic manifolds have the same Hodge numbers or even isomorphic Hodge structures, but we don't know how to prove this. The general case By deforming the underlying K3 surface the proof of the theorem is reduced to the case considered in section 1. Deformation of K3 surfaces The following statements about the existence of certain deformations of a given K3 surface will be needed. 2.1.1 Let $X$ be a K3 surface, $L \in \mathrm{Pic}(X)$ a primitive nef and big line bundle. Then there exists a smooth connected family $\mathcal{X} \to S$ of K3 surfaces and a line bundle $\mathcal{L}$ on $\mathcal{X}$ such that $(X_0, L_0) \cong (X, L)$ and $\mathrm{Pic}(X_t) = \mathbb{Z} \cdot L_t$ for all $t \neq 0$ ($L_t$ is automatically ample). Proof: The moduli space of primitive pseudo-polarized K3 surfaces is irreducible ([B2]). Since any even lattice of index $(1, \rho - 1)$ with $\rho \leq 10$ can be realized as a Picard group of a K3 surface ([Ni], [Mor]) the generic pseudo-polarized K3 surface has Picard group $\mathbb{Z}$. ✷ 2.1.2 Let $X$ be a K3 surface whose Picard group is generated by an ample line bundle $L$, i.e. $\mathrm{Pic}(X) = \mathbb{Z} \cdot L$. Furthermore, let $d \geq 5$ be an integer. Then there exists a smooth connected family $\mathcal{X} \to S$ of K3 surfaces and a line bundle $\mathcal{L}$ on $\mathcal{X}$ such that the fibre over some $t \neq 0$ is isomorphic to $(X, L)$ and $\mathrm{Pic}(X_0) = \mathbb{Z} \cdot L_0 \oplus \mathbb{Z} \cdot D$, where $D$ is represented by a smooth rational curve, both line bundles $L_0$ and $L_0(2D)$ are ample and primitive and the intersection matrix is $\begin{pmatrix} L^2 & d \\ d & -2 \end{pmatrix}$. Proof: Again we use the irreducibility of the moduli space of primitive polarized K3 surfaces. The existence of a triple $(X_0, L_0, D)$ with ample $L_0$, smooth rational $D$ and the given intersection form was shown by Oguiso [Og]. It remains to show that $L_0(2D)$ is ample. Obviously, $L_0(2D)$ is big and for any irreducible curve $C \neq D$ the strict inequality $(L_0(2D)).C > 0$ holds. The assumption on $d$ implies $(L_0(2D)).D > 0$. Note that the extra assumption $L^2 \geq 4$ in [Og] is only needed for the very ampleness of $L_0$ which we will not use. ✷ 2.1.3(a) Let $X$ be a K3 surface whose Picard group is generated by an ample line bundle $L$, i.e. $\mathrm{Pic}(X) = \mathbb{Z} \cdot L$. If $L^2 > 2$ there exists a smooth connected family $\mathcal{X} \to S$ of K3 surfaces and a line bundle $\mathcal{L}$ on $\mathcal{X}$ such that the fibre over some $t \neq 0$ is isomorphic to $(X, L)$ and $\mathrm{Pic}(X_0) = \mathbb{Z} \cdot L_0 \oplus \mathbb{Z} \cdot D$, where both line bundles $L_0$ and $L_0(2D)$ are ample and primitive and the intersection matrix is $\begin{pmatrix} L^2 & 1 \\ 1 & 0 \end{pmatrix}$. 2.1.3(b) If we assume that $L^2 > 6$ we have the same result as in (a) with "$L_0(2D)$ is ample" replaced by "$L_0(-2D)$ is ample". Proof: For both parts we need to prove the existence of a triple $(X_0, H, D)$ with ample and primitive $H$ and $H(2D)$, such that $D^2 = 0$, $H.D = 1$ and $H^2 = 2n > 2$ for given $n$. By the results of Nikulin we can find a K3 surface with this intersection form. It remains to show that $H$ and $H(2D)$ can be chosen ample. We can assume that $H \in C^+$, i.e. $H$ is in the positive component of the positive cone (if necessary change $(H, D)$ to $(-H, -D)$). We check that $H$ is not orthogonal to any $(-2)$ class, i.e. for any $\delta := aH + bD$ ($a, b \in \mathbb{Z}$) with $\delta^2 = a^2H^2 + 2ab = -2$ we have $H.\delta \neq 0$.
If $H$ were orthogonal to $\delta$ this would imply that $aH^2 + b = 0$. Hence $\delta^2 = -a^2H^2 = -2$, which contradicts $H^2 > 2$. Thus $H$ is contained in a chamber. Since the Weyl group $W_{X_0}$, which is generated by the reflections on the walls, acts transitively on the set of chambers, we find $\sigma \in W_{X_0}$ such that $\sigma(H)$ is contained in the chamber $\{w \in C^+ \mid w.\delta > 0$ for all effective $(-2)$ classes $\delta\}$. Applying $\sigma$ to $(H, D)$ we can in fact assume that $H$ is contained in this chamber. On a K3 surface the effective divisors are generated by the effective $(-2)$ classes and $C^+ \setminus \{0\}$. On both sets $H$ is positive. Thus $H$ is ample. In order to prove that also $H(2D)$ is ample we show that $D$ is effective and irreducible. This follows from the Riemann-Roch-Hirzebruch formula $\chi(\mathcal{O}(D)) = 2$, which implies $D$ or $-D$ effective, and $H.D = 1$. Thus $C.D \geq 0$ for any curve $C$, and hence $H(2D).C > 0$. Since $H(2D)$ is big we conclude that $H(2D)$ is ample. To prove (a) we choose $H^2 := L^2$ and use the irreducibility of the moduli space to show that $(X, L)$ degenerates to $(X_0, H)$. Defining $L_0 := H$ this proves (a). In order to prove (b) we fix $(H(2D))^2 := L^2$ and let $(X, L)$ degenerate to $(X_0, H(2D))$. The assumption on $H$ translates to $L^2 > 6$. With $L_0 := H(2D)$ we obtain (b). ✷ Deformation of the moduli space We start out with the following Lemma 2.1 Let $E$ be a simple vector bundle on a K3 surface such that $L := \det(E)$ is big. The joint deformations of $E$ and $X$ are unobstructed, i.e. $\mathrm{Def}(E, X)$ is smooth. Moreover, $\mathrm{Def}(E, X) \to \mathrm{Def}(X)$ and $\mathrm{Def}(L, X) \to \mathrm{Def}(X)$ have the same image. Proof: The infinitesimal deformations of a bundle $E$ together with its underlying manifold $X$ are parametrized by $H^1(X, \mathcal{D}^1_0(E))$, where $\mathcal{D}^1_0(E)$ is the sheaf of differential operators of order $\leq 1$ with scalar symbol. The obstructions are elements in the second cohomology of this sheaf. Using the symbol map we have a short exact sequence $0 \to \mathcal{E}nd(E) \to \mathcal{D}^1_0(E) \to T_X \to 0$. Its long exact cohomology sequence compares the deformations of $E$, $X$, and $(E, X)$. In particular, if $E$ is simple the trace homomorphism $H^2(X, \mathcal{E}nd(E)) \to H^2(X, \mathcal{O}_X)$ is bijective and the composition with the boundary map $H^1(X, T_X) \to H^2(X, \mathcal{E}nd(E))$ is the cup-product with $c_1(E)$. Since there is exactly one direction in which a big and nef line bundle $L$ cannot be deformed with $X$, the cup-product with $c_1(L) = c_1(E)$ is surjective. Thus $H^1(X, \mathcal{D}^1_0(E)) \to H^1(X, T_X)$ is onto the algebraic deformations of $X$ and $H^2(X, \mathcal{D}^1_0(E))$ vanishes. ✷ The following lemma will be needed for the next proposition. Its proof is quite similar to what we will use to prove the theorem. Lemma 2.2 Let $L$ be an ample line bundle on a K3 surface and assume $4c_2 - L^2 - 6 \geq 10$. Then $M_H(L, c_2)$ is irreducible. Proof: 1st step: First, we show that $M_H(L, L^2/2 + 3)$ is irreducible whenever $L$ is an ample line bundle on a K3 surface. By a result of [Q] the moduli spaces $M_H(L, L^2/2 + 3)$ and $M_L(L, L^2/2 + 3)$ are birational. In particular, the number of irreducible components is the same. We consider a deformation as in 2.1.1. The corresponding family of moduli spaces $M_{L_t}(L_t, L^2/2 + 3)$ is proper and by lemma 2.1 every stable bundle on $X$ can be deformed to a stable bundle on any nearby fibre. This shows that $M_{L_0}(L_0, L^2/2 + 3)$ has as many irreducible components as $M_{L_t}(L_t, L^2/2 + 3)$ for $t \neq 0$, which is irreducible by section 1. 2nd step: Assume $e := c_2 - L^2/2 - 3 > 0$ and $L^2 > 2$. We apply 2.1.3(a). By the same arguments as above we obtain that the number of irreducible components of $M(L, c_2)$ is at most the number of irreducible components of $M_{L_0}(L_0, c_2)$. Again using [Q] we know that $M_{L_0}(L_0, c_2)$ is birational to $M_{L_0(2D)}(L_0, c_2)$.
The $\mu$-stable part of the latter is isomorphic to the $\mu$-stable part of $M_{L_0(2D)}(L_0(2D), c_2 + 1)$. We have $(L_0(2D))^2 = L^2 + 4 > 2$ and $c_2 + 1 - (L_0(2D))^2/2 - 3 = e - 1$. Therefore we obtain by induction over $e$ and step 1 that $M_{L_0(2D)}(L_0(2D), c_2 + 1)$ is irreducible. Since the locally free $\mu$-stable sheaves are dense in the moduli spaces, this completes the proof in this case. 3rd step: Here we assume that $e := c_2 - L^2/2 - 3 < 0$. By assumption $4c_2 - L^2 - 6 \geq 10$. Hence $c_2 \geq 6$ and $L^2 > 6$. Now we apply 2.1.3(b). The same arguments as in the previous step show that the number of irreducible components of $M(L, c_2)$ is at most that of $M_{L_0(-2D)}(L_0(-2D), c_2 - 1)$. Since $c_2 - 1 - (L_0(-2D))^2/2 - 3 = e + 1$, we can use induction over $-e$ and step 1 to show the irreducibility in this case. 4th step: It remains to consider the case $L^2 = 2$. Here we apply 2.1.2 with $d = 5$. As above we conclude that the number of irreducible components of $M(L, c_2)$ is at most that of $M_{L_0(2D)}(L_0(2D), c_2 + 3)$. Since $(L_0(2D))^2 = L^2 + 20 - 8 = 14$ we can conclude by step 2 or 3. ✷ Mukai seems to know that all moduli spaces of rank two bundles on a K3 surface are irreducible ([Mu2], p. 157). Since we could not find a proof of this in the literature we decided to include the above lemma. Let $X$ be a K3 surface and $L$ a line bundle on $X$. For any $c_2$ there exists a coarse moduli space $M^s(L, c_2)$ of simple sheaves of rank two with determinant $L$ and second Chern class $c_2$. $M^s(L, c_2)$ can be realized as a non-separated algebraic space ([AK], [KO]). For any polarization $H$ such that $H$-semi-stability implies $H$-stability the projective manifold $M_H(L, c_2)$ is an open subset of $M^s(L, c_2)$. Note that in the case that $\mathrm{Pic}(X) = \mathbb{Z} \cdot L$ and $H = L$ any simple vector bundle is in fact slope stable. For sheaves the situation is more complicated. Now let $(\mathcal{X}, \mathcal{L}) \to S$ be a family of K3 surfaces with a line bundle $\mathcal{L}$ on $\mathcal{X}$ over a smooth curve $S$. By [AK], [KO] there exists a relative moduli space of simple sheaves, i.e. there exists an algebraic space $M^s(\mathcal{L}, c_2)$ and a morphism from it to $S$ such that the fibre over a point $t \in S$ is isomorphic to $M^s(L_t, c_2)$. By a result of Mukai the fibres are smooth [Mu1]. Lemma 2.1 shows that for a family $(\mathcal{X}, \mathcal{L}) \to S$ both $M^s(L_t, c_2)$ and $M^s(\mathcal{L}, c_2) \to S$ are smooth (at least over the locally free sheaves). For the following we want to assume that $\mathrm{Pic}(X_t) \cong \mathbb{Z} \cdot L_t$ for $t \neq 0$ and $L_t^2 > 0$. To shorten notation we denote by $Z^* \to S^*$ the restriction of a family $Z \to S$ to $S^* := S \setminus \{0\}$. Proposition 2.3 Assume that $M_{L_t}(L_t, c_2)$ is irreducible for $t \neq 0$. Then for any generic ample $H \in \mathrm{Pic}(X_0)$ there exists a smooth proper family $Z \to S$ of projective manifolds such that $Z^* \to S^*$ has fibres $M_{L_t}(L_t, c_2)$ and the fibre over $0$ is isomorphic to $M_H(L_0, c_2)$. ("The moduli spaces for different $H$ cannot be separated.") Proof: By $M(\mathcal{L}, c_2)^* \to S^*$ we denote the family of the moduli spaces $M(L_t, c_2)$. It is proper over $S^*$ and the fibres are smooth and irreducible. Claim: If $[E] \in M^s(L_0, c_2)$ is a point in the closure $T_0$ of $M^s(\mathcal{L}, c_2)^* \setminus M(\mathcal{L}, c_2)^*$ in $M^s(\mathcal{L}, c_2)$, then $E$ is not semi-stable with respect to any polarization $H$: Semicontinuity shows that a point $E$ in the closure has a subsheaf of rank one with determinant $L_0^{\otimes n}$ with $n > 0$. Hence it is not semi-stable with respect to any polarization. The set $T_1$ of simple sheaves $[E] \in M^s(L_0, c_2)$ which are not stable with respect to $H$ is a closed subset of $M^s(\mathcal{L}, c_2)$.
We define $Z$ to be the complement of the union of $T_0$ and $T_1$ in $M^s(\mathcal{L}, c_2)$. It is an open subset of $M^s(\mathcal{L}, c_2)$. The fibres meet the requirements of the assertion. Claim: $Z$ is separated: Any simple sheaf on any of the fibres $X_t$ can also be regarded as a simple coherent sheaf on the complex space $\mathcal{X}$. Thus $Z$ is a subspace of the space of all simple sheaves on $\mathcal{X}$. In order to show that two points are separated in $Z$ it suffices to separate them in the bigger space. Now we apply the criterion of [KO] which says that if two simple coherent sheaves are not separated then there exists a non-trivial homomorphism between them. Since any two sheaves parametrized by $Z$ are either supported on different fibres or stable with respect to the same polarization, this is excluded. Thus $Z$ is separated with compact irreducible fibres over $S^*$. Take a locally free $E \in M_H(L_0, c_2)$ and consider a neighbourhood of it in $M^s(\mathcal{L}, c_2)$. By the arguments above this neighbourhood contains locally free simple sheaves on all the nearby fibres. Hence we can assume that all these sheaves on $X_{t \neq 0}$ are stable, since $\mathrm{Pic}(X_t) = \mathbb{Z} \cdot L_t$ for $t \neq 0$. This implies the connectedness of $Z$. Thus $Z \to S$ is proper and smooth. ✷ Proof of the theorem: i) We first show that the result of section 1 generalizes to the case where we drop the assumption that $L$ generates $\mathrm{Pic}(X)$. This is done as follows. By applying 2.3 to a deformation of the type 2.1.1 one sees that $M_{L_t}(L_t, L_t^2/2 + 3)$ is a deformation of $M_H(L, L^2/2 + 3)$ for generic $H$. Since Hodge numbers are invariant under deformations, both spaces have the same Hodge numbers. Those of $M_{L_t}(L_t, L_t^2/2 + 3)$ were compared in section 1 with the Hodge numbers of the appropriate Hilbert scheme. By the same trick we can always reduce to the case where the Picard group is generated by $L$, in particular we can assume that $L$ is ample. For similar results compare [G].
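As an aside for readers wishing to experiment, the following small script (our own illustration, not part of the paper) checks the additivity and multiplicativity properties of virtual Hodge polynomials used in section 1.2 on projective spaces, via the decomposition $\mathbb{P}^n = \mathbb{A}^n \sqcup \mathbb{P}^{n-1}$.

```python
# Small sanity check (illustrative only) of the virtual Hodge polynomial
# properties from section 1.2, using e(A^n) = (xy)^n and the
# decomposition P^n = A^n ⊔ P^(n-1).
from sympy import symbols, expand, simplify

x, y = symbols('x y')

def e_affine(n):
    # e(A^n, x, y) = (xy)^n
    return (x * y) ** n

def e_proj(n):
    # e(P^n, x, y) = 1 + xy + ... + (xy)^n
    return sum((x * y) ** i for i in range(n + 1))

# additivity (property ii): e(P^n) = e(A^n) + e(P^(n-1))
n = 4
assert simplify(e_proj(n) - (e_affine(n) + e_proj(n - 1))) == 0

# multiplicativity (property iii) for a Zariski locally trivial
# P^1-bundle over P^1, e.g. a Hirzebruch surface:
e_F = expand(e_proj(1) * e_proj(1))  # (1 + xy)^2 = 1 + 2xy + (xy)^2
print(e_F)  # coefficients give h^{0,0}=1, h^{1,1}=2, h^{2,2}=1
```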
Modeling the Impacts of Manure on Phosphorus Loss in Surface Runoff and Subsurface Drainage Simulation of phosphorus (P) transfer from manured agricultural lands to water bodies via surface runoff and subsurface drainage is potentially of great help in evaluating the risks and effects of eutrophication under a range of best management practice scenarios. However, it remains a challenge, since few models are capable of providing a reasonably accurate prediction of P losses under manure treatment. The Environmental Policy Integrated Climate (EPIC) model was applied to simulate the impacts on dissolved reactive P (DRP) losses through surface runoff and subsurface drainage from a solid cattle manure-amended corn (Zea mays L.)-soybean [Glycine max (L.) Merr.] rotation on a clay loam soil (Vertisol) located in the Lake Erie region. Simulations of DRP loss in surface runoff and tile drainage were satisfactory; however, EPIC did not consider DRP loss directly from manure, weakening its accuracy in the prediction of DRP loss in surface runoff. In previous work, EPIC-predicted surface runoff had been used to initiate SurPhos (Surface Phosphorus and Runoff Model) predictions of DRP losses strictly in surface runoff, but no comparison had been made of differences in manure application impacts on EPIC- or SurPhos-predicted DRP losses; accordingly, this was assessed here. SurPhos improved the estimation of DRP loss in surface runoff (Nash-Sutcliffe coefficient, 0.53), especially when large rain events occurred immediately after or within 6 wk of manure application. Generally, EPIC can capture the impacts of manure application on DRP loss in surface runoff and subsurface drainage; however, coupling of the EPIC and SurPhos models increased the accuracy of simulation of runoff DRP losses. Long-term nonpoint losses of dissolved reactive P (DRP) from agricultural lands, which accelerate eutrophication in receiving water bodies, remain a serious ongoing water quality concern (Sharpley et al., 2015). For example, the worst cyanobacterial bloom occurred in Lake Erie in 2011 (Daloğlu et al., 2012; Michalak et al., 2013), and another serious bloom in 2014 left over 400,000 people without potable water (Smith et al., 2015b). Manure application has been identified as one of the predominant agronomic practices leading to P loading from farmlands to Lake Erie (Smith et al., 2015b). In 2010, 48.4% of Canadian farms applied manure to their lands, of which the greatest proportion of such applications occurred in Ontario (28.2%) (Erik and Martin, 2011). Water inputs to Lake Erie include 81% from the Detroit River, 10% from precipitation over the lake, and the remaining 9% from runoff (both US and Canada) (Canada-Ontario Agreement Partners, 2017). Between 2003 and 2013, Ontario's largely agricultural Lake Erie basin watersheds contributed approximately 18% of DRP reaching the lake, of which 71% was from nonpoint sources (Canada-Ontario Agreement Partners, 2017).
Long based on crop N demand alone, recommended manure application rates have led to over-application and buildup of P in soils (Leytem et al., 2006), since P/N ratios in manures (1:4 to 1:2) largely exceed the ratio of P/N taken up by crops, for example, 1:8 for major grains (Sharpley, 2016). To meet crop N demand through manure application, each year up to four times more P was applied than was required by the crop, leading to a buildup of "legacy soil P" (Sharpley et al., 2013) and a high potential for excessive P loading of surface waters after manure application (Zhang et al., 2015b). Many studies have shown that both surface runoff and subsurface drainage are important pathways for P discharge from agricultural lands (Smith et al., 2015a; Tan and Zhang, 2011; Zhang et al., 2015a). The Vertisols typical of Ontario's Lake Erie drainage basin (e.g., Brookston clay loam) are prone to preferential flow through macropores, such as shrinkage cracks in the dry season, as well as earthworm burrows and root channels. These macropores funnel water from the soil surface to tile drainage, especially after a heavy rainfall (Tan and Zhang, 2011; Zhang et al., 2015a, 2015b). With such soils being predominantly dedicated to annual crops, there is a high risk of nutrient-rich subsurface runoff reaching the Lake Erie basin. Accordingly, temporal changes in soil crack volume and infiltration due to changes in soil moisture must be quantified to accurately predict surface runoff, subsurface drainage, and the associated nutrient losses in regions dominated by vertic soils (Neitsch et al., 2011). Phosphorus in crop residues, soil, and applied amendments (i.e., manure and fertilizer) represents a consistent source of nonpoint pollution in surface runoff from agricultural lands (Bennett et al., 2001; Collick et al., 2016). The latter P source can outweigh all others when a large precipitation event follows closely after amendment application and water-soluble forms of P are lost directly from the applied amendments (Withers et al., 2001). In such a case, surface runoff may contain concentrations of water-soluble forms of P orders of magnitude greater than those that would originate from the soil alone (Kleinman et al., 2002). As P losses from recently applied amendments can contribute the majority of annual dissolved P losses (Owens and Shipitalo, 2006), it is important to evaluate how such manure applications affect soil P content, the forms of P in the soil, and the movement of P from the field. Having made such an evaluation, if P applications exceed agronomic needs, it is critical to apply an alternative agronomic management scenario to reduce soil P losses and prevent soil P from rising to environmentally hazardous concentrations. All these issues point to a need for a complete understanding of P dynamics in manure, soil, and water fluxes. As field experiments are time-consuming and expensive, computer models are commonly used to predict P dynamics for scientific, management, and policy purposes (Zhang et al., 2018). It is therefore important to be able to reliably predict P losses in both surface runoff and tile drainage. By identifying critical source areas, targeting best management practices, and evaluating impacts of climate change, modeling, despite its inherent uncertainty, will become increasingly crucial in catchment management and policy decision making over the next decade (Sharpley et al., 2015). Based on the original P model developed by Jones et al.
(1984), the Environmental Policy Integrated Climate (EPIC) model's P subroutine has been widely evaluated for its ability to predict P losses in surface runoff and subsurface drainage (Peruta et al., 2014; Vadas et al., 2006; Wang et al., 2018a). The model's current version simplifies drainage volume by modifying the lateral subsurface flow, where the depth of the tile drainage system and the time required for the tile drainage system to reduce plant stress are used for adjustment. Given the absence of a preferential flow or macropore flow routine in EPIC, Wang et al. (2018a) used a crack flow coefficient of 0.4 to appropriately partition inflow into cracks. For a site receiving no P fertilization, this resulted in a reasonably accurate prediction of tile drainage and of DRP loss in surface runoff and tile drainage. Similarly, Baffaut et al. (2015) set the crack flow coefficient to 0.5 to balance the surface runoff and groundwater for their Soil and Water Assessment Tool (SWAT) simulation of the Goodwater Creek Experimental Watershed and presented satisfactory calibration and validation of DRP loss.

Since EPIC's P subroutine assumes manure to be well mixed into the soil, but fails to consider P losses directly from surface-applied manure, it can be expected to provide reliable predictions when manure is well mixed into the soil but poor predictions of the relatively greater quantity of dissolved P lost in surface runoff following the first significant rainfall event after surface application of amendments. Vadas et al. (2007) developed SurPhos (Surface Phosphorus and Runoff Model) to address the direct transfer of P from manure to runoff during a rainfall event, with the aim that it be incorporated into a more complete, process-based model (e.g., EPIC or SWAT). SurPhos can estimate the dynamic fate of applied amendment P, that is, quantify the different sources of P leached into surface runoff and soil by a rainfall event.

The SurPhos model has been incorporated into the Integrated Farming Systems Model (IFSM) (Sedorovich et al., 2007) and the SWAT model (Collick et al., 2016) to better describe P loss from manure in surface runoff. Sen et al. (2012) compared the results of the SurPhos model with those of the 2008 version of SWAT and suggested using SurPhos to replace the P subroutine in SWAT for better determination of the effectiveness of P management practices. The Annual P Loss Estimator (APLE), derived from the daily-time-step SurPhos model (Vadas et al., 2009), has been tested for annual P loss prediction in surface runoff based on the ratio of annual runoff to precipitation (Vadas et al., 2012, 2015a, 2015b). Incorporating the APLE P loss subroutine into the Chesapeake Bay watershed model, Mulkey et al. (2017) showed better P loss estimation. Fiorellino et al. (2017) used predicted P loss data from APLE to assess the Maryland P Site Index and update its agreement in magnitude and direction. All these applications of the SurPhos model support its superiority over original P models in evaluating the impacts of manure on runoff P loss.
In previous work (Wang et al., 2018b), where unavailable surface runoff data under snowmelt conditions were substituted with EPIC-predicted daily surface runoff, SurPhos satisfactorily quantified DRP loss in surface runoff from soils amended with solid cattle manure; however, the SurPhos model's limitations (i.e., not considering tile drains) prevented simulation of the equivalent DRP losses in subsurface drainage. Moreover, no comparison was made of differences in the SurPhos and EPIC models' ability to predict the impacts of manure on DRP loss in surface runoff (Wang et al., 2018b). Since EPIC had not been tested for a Brookston clay loam soil receiving solid cattle manure, the aims of the present study were (i) to evaluate EPIC for prediction of crop yields, surface runoff, tile drainage, and relevant DRP losses under a corn (Zea mays L.)-soybean [Glycine max (L.) Merr.] rotation on a clay loam soil (Vertisol) receiving a solid cattle manure application and (ii) to compare the impacts of manure on DRP loss in surface runoff predicted by EPIC with those predicted by SurPhos.

Field Experiments

Field experiments were conducted at Agriculture and Agri-Food Canada's Hon. Eugene F. Whelan Experimental Farm at Woodslee, ON, Canada. The size of each plot was approximately 0.1 ha, 67.1 m long by 15.2 m wide. The soil was Brookston clay loam, with 25.4% clay, 26.4% silt, and 48.2% sand, which is defined as Orthic Humic Gleysol in the Canadian soil classification system (Evans and Cameron, 1983) and classified as Typic Haplaquolls using the USDA soil taxonomic description (Soil Survey Staff, 1975). The soil's permanent wilting point and field capacity were 12.7 and 30.4%, respectively, and the bulk density was 1.27 Mg m-3.

The cropping system was a corn-soybean rotation. Corn was planted at a density of 79,800 and 79,700 seeds ha-1 in 2008 and 2010, respectively. Soybean was planted at 486,700 seeds ha-1 in both 2009 and 2011. Details of agronomic practices can be found in Table 1. The corn crops were fertilized with 100 kg K ha-1 and 200 kg available N ha-1 prior to planting. A P application rate of 50 kg P ha-1 was achieved through two even applications of solid cattle manure in 2008 (2 and 3 June) and one application in 2010 (11 June). The total solid cattle manure application per plot was 53.00 Mg ha-1 (0.094% total P, 25% dry matter) in 2008 and 28.12 Mg ha-1 (0.178% total P, 26.9% dry matter) in 2010. An S-tine cultivator was used for incorporation after manure application. Chisel plow tillage was also conducted after crop harvest. Roundup (Monsanto) [N-(phosphonomethyl) glycine] (1.4 kg ha-1) was applied before planting of both corn and soybean. Dual II was applied after corn planting. Dual II (1.4 kg ha-1) and Sencor (Bayer CropScience) [4-amino-6-tert-butyl-4,5-dihydro-3-methylthio-1,2,4-triazin-5-one] (0.5 kg ha-1) were applied after soybean planting.
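As a quick check of the reported application rates, the total P delivered can be computed directly from the manure masses and P contents given above; a minimal sketch in Python, assuming the stated P percentages apply to the fresh-weight manure mass:

```python
# Sanity check of the manure P application rates reported above.
# Values are taken from the text; this is arithmetic only, not model code.
applications = {
    2008: {"manure_mg_ha": 53.00, "total_p_frac": 0.00094},  # 0.094% total P
    2010: {"manure_mg_ha": 28.12, "total_p_frac": 0.00178},  # 0.178% total P
}

for year, a in applications.items():
    kg_manure_per_ha = a["manure_mg_ha"] * 1000.0  # Mg ha-1 -> kg ha-1
    p_rate = kg_manure_per_ha * a["total_p_frac"]  # kg P ha-1
    print(f"{year}: {p_rate:.1f} kg P ha-1")
# 2008: 49.8 kg P ha-1; 2010: 50.1 kg P ha-1 -- both close to the stated 50 kg P ha-1
```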
Surface runoff and tile drainage flow volumes were recorded by water meters as well as from analog and digital pulse signals. A multi-channel data logger utilized the analog and digital signals to monitor, measure, and store water volumes on a continuous basis. Samples of surface runoff and tile drainage were collected automatically, with each auto-sampler containing 24 1-L bottles (ISCO Model 2900). Sample collection was based on collection volumes varying with the time of year and expected volumes. During the growing season, a 1-L sample of surface runoff or tile drainage water was collected from each plot after 1000 L (surface runoff) and 2000 L (tile drainage) of flow volume. During the non-growing season, a 1-L sample of surface runoff or tile drainage water was collected from each plot after 3000 L (surface runoff) and 5000 L (tile drainage) of flow volume. As such, water samples for each of the 17 periods over the 4 yr were collected based on agronomic practices and forecasted precipitation to reflect the reality of P loss. Thereafter, water samples were transported to the laboratory, and a 60-mL aliquot of each sample was filtered through a 0.45-μm Millipore filter. Filtered samples were refrigerated at 2 to 4°C for DRP analysis. Additional details regarding the field plot layout and water sampling can be found in Wang et al. (2018a).

EPIC Simulation

Since the EPIC0810 model was calibrated and validated at the same site with no P fertilizer (Wang et al., 2018a), we only validated the model under solid cattle manure application. Validation was based on crop yields, periodic surface runoff and subsurface drainage, and relevant DRP losses from 2008 to 2011. Simulations were driven by weather data (maximum and minimum temperatures, precipitation, wind speed, relative humidity, and solar radiation) from a Whelan weather station less than 0.5 km from the experimental field. Annual average potential evapotranspiration in Harrow, ON, was obtained from Fallow et al. (2003). The crack flow coefficient, PARM(17), was set to 0.4 to simulate the partitioning of inflow into cracks. Since this soil is fine textured with an impervious or semi-impervious soil layer restricting downward percolation, one soil layer below the tile drains was set to a saturated hydraulic conductivity of 0.01 mm h-1 to force all the lateral subsurface water above the tile drain layer to exit as drainage. The DRP losses in surface runoff were estimated using the exponent-modified GLEAMS (Groundwater Loading Effects of Agricultural Management Systems) method (Sabbagh et al., 1993). The Penman-Monteith equation and a variable daily curve number with a soil moisture index were used for simulating evapotranspiration and direct runoff, respectively. The observed DRP loss from surface runoff and tile drainage was compared with the "labile P" estimated by the EPIC and SurPhos models.
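The model evaluation statistics used throughout the Results (NSE, PBIAS, RSR) follow their standard definitions; a minimal sketch in Python, with hypothetical observed and simulated series:

```python
import numpy as np

def gof_metrics(obs, sim):
    """Standard goodness-of-fit statistics for hydrologic model evaluation."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    nse = 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)
    pbias = 100.0 * np.sum(obs - sim) / np.sum(obs)  # negative = overestimation
    rsr = np.sqrt(np.mean((obs - sim) ** 2)) / obs.std(ddof=0)  # RMSE / SD(obs)
    return nse, pbias, rsr

# Hypothetical periodic flow volumes (mm), for illustration only
obs = [12.0, 3.5, 40.2, 8.8, 15.1]
sim = [11.0, 5.0, 43.0, 7.5, 16.0]
print("NSE=%.2f PBIAS=%.2f%% RSR=%.2f" % gof_metrics(obs, sim))
```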
SurPhos Simulation

Annual P uptake by crops was set to 25 kg ha-1 (Hao et al., 2015). An S-tine cultivator was selected to model manure incorporation, as it closely matches the soil mixing efficiency (0.1) and depth of mixing (0.075 m) of the cultivator used on the experimental field. The tillage implement used after harvest was a chisel plow with a soil mixing efficiency of 0.05 and a depth of 0.25 m. Each spring, Olsen P was measured at five soil depths (0-0.15, 0.15-0.30, 0.30-0.50, 0.50-0.70, 0.70-0.90 m) and was used to set labile P at three soil depths (0.02, 0.15, 0.75 m) in SurPhos. We assumed labile P in the top soil layer (0.02 m) to be 50% greater than in the second soil layer (Baker et al., 2017). Soil clay, organic matter, and bulk density were set according to the field experimental results. More details can be found in Wang et al. (2018b).

Simulated Crop Yields

Simulated mean annual potential evapotranspiration fell within the range of the annual mean (732 ± 83 mm, Table 2) estimated by Fallow et al. (2003). Simulated mean crop yield, across corn and soybean, was 6.33 Mg ha-1 (Table 2), some 4.6% greater than the observed yield (6.05 Mg ha-1). Statistical analyses showed these predictions to be reasonably accurate, with an NSE of 0.97, a PBIAS of -4.63%, and an RSR of 0.19 (Table 2). Simulated mean corn yield was 8.76 Mg ha-1, some 5.4% greater than the mean observed yield (8.31 Mg ha-1). Simulated mean soybean yield was 3.91 Mg ha-1, some 2.9% greater than the mean observed yield (3.80 Mg ha-1). Since there was no nutritional deficiency (Table 3), water was the most important factor influencing crop yield. The higher or lower simulated (vs. actual) crop yields could be attributed to the under- or overestimated average number of water stress days for crops in each year (Table 3). For example, the simulated soybean yield in 2011 was higher than observed, with no simulated water stress, which could be an underestimate. Overestimated crop available water (670.7 mm, Table 3) during the growing season, a consequence of the underestimated surface runoff for Period 15 (23 June 2011 to 7 Sept. 2011), led to the underestimated water stress. Simulated growing season evapotranspiration and water use efficiency (water use efficiency = crop yield/growing season evapotranspiration) showed negligible differences for corn and soybean (Table 3). A significantly greater simulated temperature stress (80.2 d, Table 3) occurred for soybean during 2011 due to a late harvest (December). The minimum temperatures for plant growth for corn and soybean are 8 and 10°C, respectively, which were set as default values in the EPIC model. The temperature stress was calculated on the basis of the temperature before harvest but should be modified to better evaluate the impact of temperatures before crop maturity.

Simulated Surface Runoff and Tile Drainage

Simulated surface runoff was reasonably well modeled (NSE = 0.70, PBIAS = -3.50%, RSR = 0.55; Fig. 1A), although several overestimations and underestimations occurred. For some overestimated periods (Periods 3, 4, 6, and 17; Fig. 1A), overestimated surface runoff was tied to underestimated subsurface drainage, largely a consequence of a lower crack flow coefficient. Conversely, underestimated surface runoff was tied to overestimated tile drainage for Periods 2, 15, and 16. Similarly, a higher crack flow coefficient led to this phenomenon. For Period 4, from 23 Oct. 2008 to 11 Feb. 2009, simulated surface runoff was 11.0 mm following 1.8 mm of precipitation on 10 Feb.
2009. The minimum and maximum temperatures were 6.3 and 12°C, respectively. One possible explanation is that the model assumed snowmelt to have occurred that day, which led to the overestimation of surface runoff. On the next day, simulated surface runoff was 32.4 mm with precipitation of 40 mm. The overestimation of surface runoff could result from the soil having been simulated as saturated from the previous day due to the snowmelt simulation. For Period 12, from 6 Aug. 2010 to 21 Dec. 2010 (Fig. 1A), the simulated surface runoff was 15.2 mm while the observed was marginal (0.3 mm). Similarly, the simulated tile drainage (103.4 mm) was much greater than the observed (13.2 mm). Three precipitation events of approximately 30 mm in total occurred in September and early October 2010, when corn had reached or was close to its maximum leaf area index (4.9). One possible reason for the discrepancy is that crop interception of rainfall was not adequately simulated in EPIC (Williams et al., 2012), leading to an overestimation of surface runoff and subsurface drainage. The overestimated tile drainage could also be due to a higher set crack flow coefficient.

The accuracy of simulated subsurface drainage was reasonable (NSE = 0.51, PBIAS = -8.05%, RSR = 0.70; Fig. 1B), although several periods were either overestimated or underestimated. Generally, all over- and underestimation could be linked to constant crack flow coefficients that were higher or lower than the actual representative crack flow coefficients, leading to a greater or lesser downward water flux to subsurface drains. Another reason for overestimation may be the simplification of tile drainage that, in this study, forces all lateral subsurface flow above the tile drains to exit as tile drainage. The crack flow coefficient, which performs a function akin to that of simulating macropores, can redistribute the total quantity of runoff between surface and subsurface (Neitsch et al., 2011) and thereby contribute to better statistical accuracy of periodic surface runoff and tile drainage. However, this coefficient was set to a constant value of 0.4, whereas in reality crack volume can change depending on weather conditions, contributing to a less accurate simulation of some periods (Wang et al., 2018a).

Simulated Impacts of Manure Application on Dissolved Reactive P Loss Using EPIC

Consistent with overpredicted surface runoff, DRP loss in surface runoff was overestimated (0.14 kg ha-1) for Period 6, from 28 Mar. 2009 to 26 May 2009. Conversely, underestimation of surface runoff was the major reason for underestimated DRP loss in surface runoff for Periods 5, 11, and 15 (Fig. 2A). A further possible explanation for the underestimation of DRP loss in Period 11 (12 June 2010 to 5 Aug. 2010) is that EPIC did not simulate P loss directly from manure. Since solid cattle manure was applied on 11 June 2010 and a large rainfall event (73.8 mm) occurred on 24 July, heavy losses from the manure itself could have occurred. This is addressed below by comparing DRP losses in surface runoff simulated with EPIC or SurPhos. For Period 16, from 8 Sept. 2011 to 9 Nov. 2011 (Fig.
2A), simulated DRP loss in surface runoff was zero while the observed loss was 0.49 kg ha-1. Similarly, for Period 17, the prediction of P loss in surface runoff was 0.05 kg ha-1, much lower than the observed loss (0.24 kg ha-1). One possible reason for these discrepancies is inaccurately simulated surface runoff flow volumes in some periods, which were directly related to the calculation of DRP loss. Another possible reason is that EPIC did not simulate leaf fall before harvest for annual crops, which could contribute an additional organic source of P to the top soil, leading to greater DRP loss in surface runoff. A third possible explanation is that a higher set crack flow coefficient led most of the DRP downward to lateral flow or even deep percolation (Williams et al., 2015). This finding was similar to previous results (Wang et al., 2018a), where these two periods had negligible predicted P loss compared with the observed data. Without considering Period 16, simulated P loss in surface runoff was acceptable (NSE = 0.55, data not shown).

Simulated DRP loss in subsurface drainage water was satisfactory (NSE = 0.67, PBIAS = -5.39%, RSR = 0.57; Fig. 2B). Consistent with overestimated tile drainage, DRP loss in tile drainage was overestimated for Periods 2, 7, 12, and 15, possibly as a result of an overly high crack flow coefficient resulting in an overestimation of tile drainage and therefore of DRP loss. The obvious underestimation of P loss in tile drainage that occurred in Period 4 (23 Oct. 2008 to 11 Feb. 2009) was consistent with underestimated tile drainage. As mentioned in the surface runoff section, a lower set crack flow coefficient led to this underestimation.

Model estimates of DRP loss in both surface runoff (<0.8 kg ha-1) (NSE of 0.55) and subsurface tile drainage (<1.3 kg ha-1) (NSE of 0.67; Fig. 2) under manure treatment were more accurate than those for plots receiving no P fertilization at the same site reported previously (Wang et al., 2018a): DRP losses in surface runoff (<0.4 kg ha-1) (NSE of 0.54) and subsurface tile drainage (<0.6 kg ha-1) (NSE of 0.58). This is consistent with the finding that a narrow range of DRP loss with relatively low variability would generally result in lower model accuracy (NSE) than observed for predictions with manure application and a wider range of DRP losses (Vadas et al., 2017). Indeed, manure application increased the range of DRP loss when followed closely by a large rainfall event, thereby typically leading to better accuracy statistics. Generally, EPIC can adequately simulate the impacts of manure on DRP losses in surface runoff and subsurface drainage.

The concentration of labile P in the soil is the main factor influencing DRP loss. The P incorporation routine developed for EPIC by Jones et al.
(1984) predicts changes in DRP with time. The DRP loss in surface runoff is based on the concept of partitioning pesticides into solution and sediment phases. The DRP loss in tile drainage is partitioned among tile drainage, lateral flow, and percolation according to their relative flux ratios. Four parameters were used to adjust P loss: PARM(8), the P partition between runoff and sediment; PARM(43), upward movement by evaporation; PARM(77), the coefficient regulating P flux between the labile and active pools; and PARM(78), the coefficient regulating P flux between the active and stable pools. The upper limit of PARM(8) is 20 for EPIC and 100 for APEX. While in our previous study (Wang et al., 2018a) PARM(8) was set to 100 to strongly link P loss and sediment levels, in this study PARM(8) was set to 40 in EPIC. Thus, the linkage between P loss, both in surface runoff and drainage, and sediment was increased. PARM(77) and PARM(78) were set to 0.1 to increase P desorption, as in Wang et al. (2018a). Without changing the P flux among the labile, active, and stable P pools, PARM(43) can only partition P upward or downward. We set PARM(43) to 1 to drive most P to tile drainage. Irrespective of weathering, the P sorption coefficient (PSC) of the plots' calcareous soil (11% calcium carbonate content) was set at 0.51. While this showed more accurate results than other PSC equilibriums (Wang et al., 2018a), having a constant PSC value limited the P sorption-desorption process (Vadas et al., 2006). Thereafter, to improve the simulation of P loss in surface runoff under solid cattle manure treatment, we used the predicted daily surface runoff as an input to SurPhos.

Comparing EPIC- and SurPhos-Predicted Impacts of Manure Addition on Runoff Dissolved Reactive P Loss

Because of the sensitivity of the SurPhos model to surface runoff (Vadas et al., 2008), predicted daily surface runoff drawn from EPIC was used in SurPhos to eliminate the impact of snowmelt in winter and early spring (Wang et al., 2018b). Statistical assessment of SurPhos's prediction accuracy for DRP loss in surface runoff (NSE = 0.53, PBIAS = -17.86%, RSR = 0.69) showed it to be greater than that of EPIC (NSE = 0.31), which performed particularly poorly in Period 16, from 8 Sept. 2011 to 9 Nov. 2011 (Fig. 2A). The overestimation by SurPhos occurring in Periods 4, 6, 12, and 17 was consistent with the overestimation of surface runoff predicted by EPIC. The underestimation of DRP losses during Period 11, from 12 June 2010 to 5 Aug. 2010 (0.28 kg ha-1 and 0.01 kg ha-1 for SurPhos and EPIC, respectively, compared with the measured value of 0.51 kg ha-1), was consistent with an underestimation of surface runoff. For Period 11, the lower accuracy of the EPIC model's DRP loss simulation compared with that of the SurPhos model may be tied to EPIC only considering manure P as part of soil P after manure application, and therefore ignoring manure decomposition and P loss directly from manure. This assumption is valid when manure is well mixed into the soil and no large precipitation event happens shortly after manure application; however, when a large rainfall event occurred, EPIC simulations underestimated P loss. Following the manure application of 11 June 2010, a large loss of DRP in surface runoff occurred as the result of the heavy rainfall of 24 July 2010 (73.8 mm; Fig. 3) in Period 11. This interpretation is consistent with the work of Vadas et al.
(2011), who found that P losses 30 d after manure application could equal or exceed those occurring when precipitation closely followed application. While in this case (Period 11) EPIC did not reflect the increase of P loss directly from manure, EPIC (and SurPhos) did reflect the two increases of DRP loss in surface runoff after manure was applied on 2 and 3 June, when rainfall events occurred on 21 June (45 mm) and 28 June (61 mm) 2008 (Fig. 3). Why EPIC could reflect P loss after one large rainfall event but not another may be because manure P loss in surface runoff was strongly influenced by a precipitation event's rainfall and runoff characteristics (e.g., quantity and intensity of precipitation, runoff-to-rain ratio) (Vadas et al., 2011). The SurPhos model's obvious overestimations of DRP loss in surface runoff from 27 Mar. to 26 May 2009 (Period 6) and 22 Mar. to 22 June 2011 (Period 14) were due to overestimated surface runoff (Fig. 1).

The first advantage of SurPhos is its simulation of direct DRP loss from manure. In ignoring this portion of DRP losses, the EPIC model was unable to quantify DRP loss from dairy manure applications when a significant rainfall event occurred immediately (Collick et al., 2016) or 30 d after manure application (Vadas et al., 2011). Collick et al. (2016) further indicated that the SurPhos P routine captured over 50% of the variation in DRP losses, whereas a routine that ignored direct P loss from manure (EPIC) captured less than 20% of the variation under different scenarios. Using manure P assimilation into soil (SurPhos) rather than direct adsorption of manure P to soil (EPIC) also improved the estimation of DRP loss in surface runoff (Vadas et al., 2011).

A further reason why SurPhos showed better simulation of DRP loss in surface runoff is rooted in its use of a dynamic PSC (Vadas et al., 2012), allowing labile P to be included in the calculation. In contrast, the PSC value used in EPIC is either based on soil properties or user defined (Williams et al., 2015). With the dynamic PSC in SurPhos, labile P increases when P is added into the soil system, resulting in an increase in PSC. This results in a greater quantity of P remaining as labile P and PSC increasing even more. This feedback loop guarantees a relatively rapid increase in PSC and labile P under amendments. Thus, a dynamic PSC was necessary to maintain the equilibrium between different P pools (Collick et al., 2016).

SurPhos also uses dynamic P sorption and desorption rate factors related to the PSC rather than the constant factor value (0.1) used in EPIC. Vadas et al. (2006) showed that the dynamic rate factors implemented in SurPhos provided greater accuracy in predicting P sorption and desorption than EPIC, resulting in differences in predicted P losses in the short and long term. Similarly, our prediction based on SurPhos showed greater accuracy for DRP loss in surface runoff than did EPIC, indicating that SurPhos is better at modeling the impacts of manure on runoff DRP.
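A toy numerical illustration of this feedback, not the actual SurPhos equations, in which a constant PSC is contrasted with a PSC that is assumed to grow with labile P:

```python
def add_p(labile_p, added_p, psc):
    """One conceptual update step: a fraction PSC of added P stays labile.
    Toy dynamics only; the real SurPhos relations differ (Vadas et al., 2012)."""
    return labile_p + psc * added_p

labile_const, labile_dyn, psc_dyn = 20.0, 20.0, 0.51
PSC_CONST = 0.51  # constant coefficient, as used in the EPIC runs above
for _ in range(3):  # three hypothetical P additions of 10 kg P ha-1
    labile_const = add_p(labile_const, 10.0, PSC_CONST)
    labile_dyn = add_p(labile_dyn, 10.0, psc_dyn)
    psc_dyn = min(0.9, psc_dyn + 0.005 * labile_dyn)  # assumed feedback on PSC

print(labile_const, labile_dyn)  # the dynamic PSC retains more P as labile
```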
Conclusions

The EPIC model was reliable in predicting crop yield, surface runoff, subsurface drainage, and relevant DRP losses from a Brookston clay loam soil (Vertisol) of the Lake Erie region under a corn-soybean rotation receiving a solid cattle manure amendment. Although EPIC reliably simulated DRP loss in tile drainage, it did not consider DRP loss directly from manure, thereby decreasing its accuracy in the prediction of DRP loss in surface runoff following a large rainfall event immediately or up to 6 wk after manure application. SurPhos has the capability to offset this limitation. The improved PSC, P sorption-desorption rate factors, and manure assimilation into soil all contributed to the reliable simulation of the impacts of manure on DRP loss in surface runoff by SurPhos. A full comparison of the impacts of manure on DRP loss in surface runoff from soil amended with cattle manure supports the hypothesis that coupling SurPhos and EPIC would increase the overall accuracy in estimating the impacts of manure on DRP loss (Wang et al., 2018b).

Simulated crop yields were higher than those observed because of underestimated water stress, which resulted partly from underestimated surface runoff and, in turn, overestimated crop available water. However, this result needs to be addressed in further studies. Misestimation of high temperature stress implies that the model should be modified to reflect the impact of such stresses on the crop prior to maturity rather than before harvest. The lower accuracy of surface runoff and subsurface drainage estimates was mainly due to the use of a constant crack flow coefficient. The lack of simulation of crop canopy rainfall interception also contributed to the overestimation of surface runoff. Another limitation of EPIC is that it does not simulate leaf drop before harvest for annual crops, thereby potentially omitting organic P sources that contribute to P loss after the crop has reached maturity. These limitations will be the focus of continued efforts to improve P loss modeling.

Fig. 1. Periodic natural precipitation, and observed and simulated periodic (A) surface runoff and (B) tile drainage flow volumes using EPIC.

Fig. 3. Daily precipitation and observed and predicted accumulated DRP loss in surface runoff using EPIC and SurPhos.
Excess Heritability Contribution of Alcohol Consumption Variants in the "Missing Heritability" of Type 2 Diabetes Mellitus

We aim to compare the relative heritability contributed by variants of behavior-related environmental phenotypes and to elucidate the role of these factors in the conundrum of the "missing heritability" of type 2 diabetes. Methods: We used the Linkage-Disequilibrium Adjusted Kinships (LDAK) and LDAK-Thin models to calculate the relative heritability of each variant and to compare the relative heritability for each phenotype. Biological analysis was carried out for the phenotype whose variants made a significant contribution. Potential hub genes were prioritized based on topological parameters of the protein-protein interaction network. We included 16 behavior-related phenotypes and 2607 valid variants. In the LDAK model, the variants of alcohol consumption and caffeine intake were identified as contributing higher relative heritability than random variants. The relative expected heritability contributed by the variants associated with these two phenotypes was higher than that contributed by the variants associated with type 2 diabetes. In the LDAK-Thin model, the relative heritability of the variants of 11 phenotypes was statistically higher than that of random variants. Biological function analysis showed the same distributions for type 2 diabetes and alcohol consumption. We eventually screened out 31 intensively interacting hub genes, four of which were validated and showed an upregulated expression pattern in blood samples from type 2 diabetes cases. Conclusion: We found that alcohol consumption contributed higher relative heritability. Hub genes may influence the onset of type 2 diabetes through a mediating effect or a pleiotropic effect. Our results provide new insight into the role of behavior-related factors in the conundrum of the "missing heritability" of type 2 diabetes.

Introduction

Type 2 diabetes mellitus (type 2 diabetes) is a complex disease induced by a combination of environmental and genetic factors. Previous studies have shown that overweight, smoking, a sedentary lifestyle and education are common risk factors for type 2 diabetes [1-4]. Meanwhile, genome-wide association studies (GWAS) have identified more than 500 susceptibility loci that demonstrate a robust association with type 2 diabetes [5]. In contrast to the tremendous strides in GWAS research, work on the conundrum of "missing heritability" in type 2 diabetes has progressed slowly and arduously. Genome-wide chip heritability analysis explained 19% of type 2 diabetes risk on a liability scale, which is much smaller than the heritability estimates expected from the observed trait concordance within families [6,7]. Although there are several hypotheses regarding rare variants, structural variants and gene-environment interactions for the missing heritability [8-10], the limited incremental value in heritability estimated by GWAS so far suggests that the genetic prediction of complex diseases on a population basis will be challenging. There is still a long way to go to fully understand the etiology of type 2 diabetes before getting it under control.

An important controversial assumption about heritability is the idea that the genetic influence on trait development can be separated from the environmental context [10]. In addition to the direct effect of genetics, part of the effect of genetic factors is mediated by environmental factors. Baud et al.
found that social genetic effects (SGE; effects of an individual's genotypes on others' phenotypes, also called indirect genetic effects) can explain up to 29% of phenotypic variance, and for several traits their contribution exceeded that of direct genetic effects (effects of an individual's genotypes on its own phenotype) [11]. Undoubtedly, ignoring SGE can severely bias estimates of direct genetic effects (heritability) [11]. Xia et al. used a linear mixed model to estimate the indirect heritability between partners, and found evidence of indirect genetic effects between partners in about 50% of phenotypes [12]. The genetic nurturing effect proposed by Kong et al. is a manifestation of the social genetic effect within the family. Using results from a meta-analysis of educational attainment, they found that the polygenic score computed for the non-transmitted alleles of 21,637 probands with at least one parent genotyped had an estimated effect on the educational attainment of the proband that is 29.9% (p = 1.6 × 10−14) of that of the transmitted polygenic score [13]. The evidence above suggests that genetic factors can affect individual phenotypes through their contributions to the environment.

Another controversy about missing heritability concerns the best model for how heritability varies across the genome. It has been shown that the LDAK model leads to estimates of common single-nucleotide polymorphism (SNP) heritability that are, on average, 43% (s.d. 3%) higher than those obtained from the widely used software Genome-wide Complex Trait Analysis (GCTA) and 25% (s.d. 2%) higher than those from the recently proposed extension, LD and minor allele frequency (MAF) stratified multi-component GCTA (GCTA-LDMS), across 19 traits [14]. In terms of the rationality of the hypothesis, it is more realistic to employ the LDAK model, where expected heritability varies with both linkage disequilibrium (LD) and MAF [15,16]. In addition, considering the computational burden, the simplified LDAK-Thin model is an alternative: it is a one-parameter model and can be incorporated in any existing method simply by changing which predictors are included in the regression and how these are standardized [15].

In this study, we compared the heritability contribution of environmental phenotypes, especially behavior-related environmental phenotypes that have a genetic basis, with that of type 2 diabetes by using heritability estimation models to estimate the relative expected heritability tagged by each variant. The susceptibility variants of candidate environmental phenotypes were further characterized by functional annotation and protein-protein interaction (PPI) analysis to identify potential key genes of type 2 diabetes. Our work is a new attempt to provide information and evidence to elucidate the genetic mechanisms underlying the missing heritability of type 2 diabetes and to promote the development of comprehensive prevention for type 2 diabetes.

Overview of Behavior-Related Phenotypes

Based on the results of the literature review and the results of Yuan et al., we eventually included 16 behavior-related phenotypes: educational attainment, lifetime smoking index, alcohol consumption, coffee intake, caffeine intake, breakfast skipping, morningness, insomnia, sleep duration, short sleep, daytime napping, restless leg syndrome, moderate to vigorous physical activity, strenuous sports, vigorous physical activity and accelerometer-measured physical activity.
The union of the variants for type 2 diabetes and those for each phenotype, restricted to variants present in the tagging file, was defined as the valid variant set for the subsequent analysis. A total of 2607 valid variants were included in the analysis. The mean minor allele frequency (MAF) was 0.28 (s.d. 0.14), and 149 variants were rare variants (MAF < 0.05). The results of traditional epidemiological studies on behavior-related phenotypes of type 2 diabetes and the information on susceptibility variants for each phenotype included in the analysis are shown in Tables 1 and 2 and Figure 1.

Estimation of Relative Expected Heritability by LDAK

The relative expected heritability of all 2607 variants estimated by SumHer under the LDAK model assumption was 19.5, which was not higher than that of simulated sampling. All variants of behavior-related phenotypes accounted for 83.39% of the total phenotypic heritability. Educational attainment contributed the most, at 76.43% of the total phenotypic heritability. The heritability contributed by the susceptibility variants was significantly correlated with the number of variants (correlation coefficient = 0.90, p < 0.001), as seen in Table 3. The results of simulation sampling showed that the relative heritability of the susceptibility variants of caffeine intake and alcohol consumption was significantly higher than that of random variants. For caffeine intake, the average heritability of the total variants was 0.01 and the average heritability of the phenotypic variants was 0.04, while the attribution heritability of the phenotypic variants was 2.43% and the relative heritability of the phenotypic variants was 4.51 times. The corresponding parameters for alcohol consumption were 0.01, 0.02, 37.45% and 2.24 times, respectively. The relative heritability of the phenotypic variants of skipping breakfast, coffee consumption and strenuous sports was also more than 2 times that of the type 2 diabetes variants, although this was not statistically significant compared with simulation sampling.
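The comparison against simulated sampling can be sketched as follows; the per-variant heritabilities and the type 2 diabetes variant list are hypothetical inputs standing in for values read from the SumHer tagging file:

```python
import random

def null_relative_heritability(h2_per_variant, t2d_variants, k, n_draws=100):
    """Build the null distribution used for significance testing: draw random
    variant sets of the same size as G2 (the T2D variants plus k random variants)
    and record the summed expected heritability of each set.

    h2_per_variant: dict mapping SNP id -> expected heritability tagged
    t2d_variants: iterable of T2D-associated SNP ids (set G1)
    k: number of extra variants, matching the phenotype's variant count
    """
    t2d = set(t2d_variants)
    pool = [s for s in h2_per_variant if s not in t2d]
    null = []
    for _ in range(n_draws):
        g3 = t2d | set(random.sample(pool, k))
        null.append(sum(h2_per_variant[s] for s in g3))
    return null

# A phenotype's set G2 is significant at alpha = 0.05 if its summed
# heritability exceeds the 95th percentile of the 100 null draws.
```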
Estimation of Relative Expected Heritability by LDAK-Thin

The relative expected heritability of all 2607 variants estimated by SumHer under the LDAK-Thin model assumption was 671.3, which was significantly higher than that of simulated sampling. All variants of behavior-related phenotypes accounted for 86.88% of the total phenotypic heritability. Educational attainment contributed the most, at 79.48% of the total phenotypic heritability. The heritability contributed by the susceptibility variants was significantly correlated with the number of variants (correlation coefficient = 0.91, p < 0.001), as seen in Table 4. Compared with the simulation sampling, the relative heritability of the variants of 11 phenotypes, including insomnia, educational attainment, lifetime smoking index, alcohol consumption, coffee consumption, daytime napping, sleep duration, short sleep, morningness, moderate to vigorous physical activity and vigorous physical activity, was statistically higher than that of the random variants. Among the phenotypes with significant differences, the relative heritability of the phenotypic variants was higher than that of type 2 diabetes for all phenotypes except alcohol consumption. The relative heritability of the phenotypic variants was highest for short sleep, at 1.36 times that of type 2 diabetes, accounting for 8.87% of the total phenotypic heritability.

Biological Function Analysis

As the variants of alcohol consumption were identified as contributing higher relative heritability than random variants in both heritability models, we then performed functional annotation, enrichment analysis and protein interaction network analysis for this phenotype (Tables S1 and S2). Of the 98 susceptibility variants associated with alcohol consumption, 7 (7.14%) were missense mutations, 1 (1.02%) was a synonymous mutation (rs17029090), 14 (14.29%) were in untranslated regions (UTRs) and 66 (67.35%) were mutations in intronic regions. The regulatory element functional annotation revealed six transcription factor binding sites and two CpG sites. Fourteen variants had Combined Annotation-Dependent Depletion (CADD) scores greater than 12.37, suggesting that they might be deleterious mutations (Table S3). The Chi-squared test showed no significant difference in the distributions of functional category and RegulomeDB ranking between type 2 diabetes and alcohol consumption (Tables S4 and S5). KEGG pathway enrichment analysis was performed on the 55 genes annotated by the susceptibility variants of alcohol consumption (Table S6). At the 0.05 false discovery rate level, the genes were significantly enriched in glycolysis/gluconeogenesis (hsa00010), tyrosine metabolism (hsa00350), fatty acid degradation (hsa00071), retinol metabolism (hsa00830), metabolism of xenobiotics by cytochrome P450 (hsa00980), drug metabolism-cytochrome P450 (hsa00982), chemical carcinogenesis (hsa05204) and propanoate metabolism (hsa00640).

Screening of Hub Genes

Based on the closeness, edge percolated component (EPC), maximum neighborhood component (MNC) and stress of the protein interaction network formed by the genes annotated from variants associated with alcohol consumption and type 2 diabetes, the genes in the protein interaction network were ranked (Table S7).
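A minimal sketch of the screening rule applied here (hub genes as the shared top 25% of each CytoHubba ranking, per the Methods); the gene lists below are hypothetical:

```python
def hub_genes(rankings, top_frac=0.25):
    """Hub genes = genes appearing in the top fraction of every ranking
    (closeness, EPC, MNC, stress), i.e., the intersection of the top lists.

    rankings: dict mapping method name -> list of genes sorted best-first.
    """
    tops = []
    for genes in rankings.values():
        k = max(1, int(len(genes) * top_frac))
        tops.append(set(genes[:k]))
    return set.intersection(*tops)

# Hypothetical example with three genes and two ranking methods:
example = {"closeness": ["GCKR", "TCF4", "RPTOR"],
           "stress": ["GCKR", "RPTOR", "TCF4"]}
print(hub_genes(example, top_frac=0.67))  # {'GCKR'}: shared across top lists
```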
We eventually screened out 31 intensively interacting hub genes (p < 0.001), as seen in Figure 2, of which 2 genes (GCKR and TCF4) were identified simultaneously by the susceptibility variants of alcohol consumption and type 2 diabetes (Table S8). GCKR is highly expressed in the liver. CADM2 and RPTOR were identified by the susceptibility variants of alcohol consumption only, of which CADM2 is highly expressed in brain-related tissues (Figure S1).

Significant Upregulation of RPTOR

Among the hub genes screened, four genes (NEUROG3, TCF7L2, MAP2K5 and RPTOR) were validated as showing an upregulated expression pattern in blood samples from type 2 diabetes cases (Figure 3).
Discussion

This study provides new insight into the association between type 2 diabetes and alcohol consumption. We found that the variants of alcohol consumption contributed higher relative heritability than random variants in both heritability models. In the LDAK model, the relative expected heritability contributed by the alcohol consumption variants was about twice that contributed by the variants associated with type 2 diabetes, while in the LDAK-Thin model it was less than that contributed by the type 2 diabetes variants. Such inconsistencies in the relative expected heritability of each variant may be due to differences in the weights assigned to the variants under the two models' assumptions. Boyle et al. argued that the heritability of a typical complex phenotype is driven by a large number of variants in regulatory element regions [17]. Liu et al. found that the distribution of heritability across variants shows tissue specificity, with genes with related functions (e.g., neuronal function in schizophrenia and immune function in Crohn's disease) contributing slightly more to heritability than random genes, while genes not expressed in relevant cell types did not contribute to heritability [18]. Which model's specific assumptions are more reasonable still needs to be explored at the level of biological mechanisms.

Biological function analysis showed the same distributions of functional category and RegulomeDB ranking for type 2 diabetes and alcohol consumption. Based on the topological parameters of the protein interaction network, we eventually prioritized 31 intensively interacting hub genes, of which two (CADM2 and RPTOR) were annotated by the variants of alcohol consumption only. Our study provides a comprehensive approach to delineate the potential causal genes and biological processes involved in type 2 diabetes pathogenesis and proposes new insight into the role of behavior-related environmental factors in the conundrum of the "missing heritability" of type 2 diabetes.

Systematic reviews have found a U-shaped association between alcohol consumption and type 2 diabetes [19,20]. Moderate alcohol consumption also has a protective effect on blood glucose management. Initiating moderate wine intake, especially red wine, among well-controlled diabetics as part of a healthy diet is apparently safe and modestly decreases cardiometabolic risk. In particular, only carriers of the alcohol dehydrogenase allele [ADH1B*1] significantly benefited from the effect of both wines on glycemic control compared with persons homozygous for ADH1B*2 [21]. We found that the ADH1B gene is annotated by the alcohol consumption-associated missense variant rs1229984, which implies that it may be a key gene in the biological mechanism linking alcohol consumption and type 2 diabetes. However, this gene was not tagged as a hub gene in our study, possibly because the number of genes annotated by variants of type 2 diabetes exceeded that of alcohol consumption, so it may have been diluted by type 2 diabetes-related genes.
Among the hub genes identified, we particularly highlight those annotated by alcohol consumption variants, because these genes may influence the onset of type 2 diabetes through a mediating effect or a pleiotropic effect, which is significant for the comprehensive prevention of type 2 diabetes. GCKR, a hub gene identified simultaneously by the susceptibility variants of alcohol consumption and type 2 diabetes, interacts densely with type 2 diabetes-related genes such as FTO and SLC2A2. GCKR is a candidate susceptibility gene for maturity-onset diabetes of the young (MODY); its protein product binds non-covalently to glucokinase to form an inactive complex that regulates the enzyme in liver and pancreatic islet cells. Previous studies have found that polymorphisms in GCKR (rs780094) are associated with non-alcoholic fatty liver disease in multiple populations [22-24]. Evidence of an association between this variant and type 2 diabetes or metabolic risk has also been detected [25,26]. An exome-chip association analysis of circulating FGF21 levels in Chinese individuals found that the common missense variant of GCKR, rs1260326 (p.Pro446Leu), may influence FGF21 expression via its ability to increase glucokinase (GCK) activity [27]. This can lead to enhanced FGF21 expression via elevated fatty acid synthesis; FGF21 is recognized as an important metabolic regulator of glucose homeostasis [27,28].

CADM2 and RPTOR were annotated specifically by alcohol consumption variants. CADM2 variants influence a wide range of both psychological and metabolic traits, suggesting common biological mechanisms across phenotypes via the regulation of CADM2 expression levels in adipose tissue [29]. RPTOR encodes a component of a signaling pathway that regulates cell growth in response to nutrient and insulin levels. Its encoded protein forms a stoichiometric complex with the mTOR kinase, the dysregulation of whose signaling is implicated in pathologies that include diabetes, cancer and neurodegeneration [30].

Regarding the indirect effect of genetic factors, our study calculated the heritability contribution of each phenotype and explored the biological function of the potential mechanism. This new approach identified genes related to the onset of type 2 diabetes, and the function of these pleiotropic genes needs to be verified in subsequent analyses using primary individual-level data or experimental evidence.

There are some limitations to this study. Firstly, owing to the limitation of computational resources, only two simple heritability models were considered, and models weighted by functional annotation were ignored. Since the heritability estimated in this study is the relative expected heritability rather than the absolute heritability, the results were not fully comparable between models. Although we applied the relative heritability of phenotypic variants, the results for some phenotypes were not consistent. The question of which model's assumptions are more reasonable still needs to be explored, and in particular, whether this phenomenon persists in more complex heritability models needs follow-up. In addition, the extrapolation of the conclusions to populations of non-European ancestry needs to be verified, as there are systematic differences among populations not only in allele frequencies but also in behavior and lifestyle, such as drinking culture. Further studies on a larger scale are needed to verify the reliability of the conclusions in other populations.
Previous studies identified hub genes of type 2 diabetes based on direct genetic effects, while recent studies have found that the majority of phenotypic variance is driven by genes that are not directly related to the phenotypes [18]. Therefore, indirect effects of genetic factors, especially those mediated by modifiable phenotypes such as behavior-related phenotypes, should be considered in etiological studies and intervention strategies for chronic diseases such as type 2 diabetes.

Identification of Candidate Environmental Phenotypes Associated with Type 2 Diabetes

Behavior-related environmental phenotypes found to be potentially causally associated with type 2 diabetes were identified as candidate phenotypes based on previous traditional epidemiological literature reports and Mendelian randomization studies. The literature was searched in the PubMed database using predefined search strategies.

The Data Source

Genetic variant information for type 2 diabetes was acquired from Mahajan et al.'s work [32]. In that study, GWAS results from 32 studies of 898,130 individuals (74,124 T2D cases and 824,006 controls) of European ancestry were aggregated. Imputation was implemented using the Haplotype Reference Consortium reference panel. Association summary statistics from sex-combined analyses for each variant across all studies were aggregated using fixed-effect meta-analyses with inverse-variance weighting of log-ORs and corrected for residual inflation by means of genomic control. In total, 403 independent association signals were detected by conditional analyses at the genome-wide-significant risk loci for type 2 diabetes (except at the major histocompatibility complex (MHC) region). Summary-level data are available at the DIAGRAM consortium (http://diagram-consortium.org/, accessed on 13 November 2020) and Accelerating Medicines Partnership type 2 diabetes (http://www.type2diabetesgenetics.org/, accessed on 13 November 2020). Information on the susceptibility variants of candidate phenotypes is shown in Table 1. Detailed definitions of each phenotype are given in the Supplementary Table.

LDAK Model

The LDAK model [14] is an improved model that overcomes the equal-weighting limitation of GCTA by weighting variants according to the relationships between the expected heritability of an SNP and its minor allele frequency (MAF), its level of linkage disequilibrium (LD) with other SNPs, and genotype certainty. When estimating heritability, the LDAK model assumes

E[h2_j] ∝ [f_j(1 − f_j)]^(1+α) × w_j × r_j,

where E[h2_j] is the expected heritability contribution of SNP j and f_j is its (observed) MAF. The parameter α determines the assumed relationship between heritability and MAF. In human genetics, it is commonly assumed that heritability does not depend on MAF, which is achieved by setting α = −1; however, we consider alternative relationships. The SNP weights w_1, ..., w_m are computed based on local levels of LD; w_j tends to be higher for SNPs in regions of low LD, and thus the LDAK model assumes that these SNPs contribute more than those in high-LD regions. Finally, r_j ∈ [0, 1] is an information score measuring genotype certainty; the LDAK model expects higher-quality SNPs to contribute more than lower-quality ones.

LDAK-Thin Model

The LDAK-Thin model [15] is a simplification of the LDAK model. It assumes that w_j is either 0 or 1, that is, that not all variants contribute to the heritability under the LDAK model.
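A minimal sketch of the per-SNP expected-heritability weighting just described; the proportionality constant is omitted, and the example MAFs, LD weights and information scores are hypothetical:

```python
import numpy as np

def ldak_expected_h2(maf, ld_weights, info, alpha=-0.25):
    """Relative expected heritability per SNP under the LDAK model:
    E[h2_j] ~ [f_j (1 - f_j)]^(1 + alpha) * w_j * r_j.
    Under LDAK-Thin, ld_weights are instead 0/1 indicators of whether the
    SNP survives LD thinning."""
    maf = np.asarray(maf, float)
    return (maf * (1.0 - maf)) ** (1.0 + alpha) * ld_weights * info

# Hypothetical SNPs: a low-LD common SNP vs. a high-LD rarer SNP
maf = np.array([0.30, 0.05])
w = np.array([1.0, 0.2])   # LDAK weights: higher in low-LD regions
r = np.array([1.0, 0.95])  # genotype information scores
print(ldak_expected_h2(maf, w, r))
```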
Model Implementation

We applied SumHer (http://dougspeed.com/sumher/, accessed on 13 January 2021) [33] to estimate each variant's expected heritability contribution. The reference panel used to calculate the tagging file was derived from the genotypes of 404 non-Finnish Europeans provided by the 1000 Genomes Project. Considering the small sample size, only autosomal variants with MAF ≥ 0.01 were considered. Data preprocessing was completed with PLINK1.9 (https://www.cog-genomics.org/plink/1.9/, accessed on 13 January 2021) [34]. SumHer analyses were completed using the default parameters; detailed code can be found at http://dougspeed.com/reference-panel/, accessed on 13 January 2021.

Estimation and Comparison of Expected Heritability

To estimate and compare the relative expected heritability, we defined three variant sets in the tagging file: G1 was generated as the set of significant susceptibility variants for type 2 diabetes, and G2 was generated as the union of the type 2 diabetes variants and the susceptibility variants of each behavior-related phenotype. Simulation sampling was conducted because all estimates calculated from the tagging file were point estimates without confidence intervals. We hoped to build a null distribution of the heritability of random variants, which would allow us to distinguish the contribution of phenotypic variants from the null distribution of random variants at the significance level of α = 0.05. Therefore, the random variant set G3 was generated, defined as the union of the type 2 diabetes variants and randomly sampled variants. The set size of G3 was equal to that of G2, which controlled for spurious inflation caused by increasing the number of variants. The calculation procedure for G3 was exactly the same as that for G2. The sums of expected relative heritability contributed by the variants in G1 and G2 were calculated, respectively. We repeated the random sampling process to generate 100 G3 sets to test the significance of G2 at the level of α = 0.05. For each phenotype, we also calculated the following indexes, where h2_Gi and n_Gi (i = 1, 2, 3) denote the expected relative heritability and set size of G1, G2 and G3: the average heritability of total variants, the average heritability of phenotypic variants, the attribution heritability of phenotypic variants, and the relative heritability of phenotypic variants.

Functional Annotation

Susceptibility variants were annotated with SNPNexus [35]. SNPNexus is a web-based annotation tool developed by Claude Chelala et al.; the latest version was updated in December 2019. CADD scores greater than 12.37 were considered to indicate a high probability of a deleterious mutation [36]. Potential regulatory functions were annotated by RegulomeDB [37]. We also explored the expression of hub genes in the Genotype-Tissue Expression (GTEx) dataset [38] using FUMA [39].

KEGG Pathway Enrichment Analysis

To clarify the biological mechanisms behind the potential pathogenic genes of the behavior-related phenotypes of type 2 diabetes, we conducted pathway enrichment analysis, in the Kyoto Encyclopedia of Genes and Genomes (KEGG) dataset [40], on the susceptibility variants of the behavior-related phenotypes of type 2 diabetes annotated against GRCh37/hg19. An over-representation analysis was used to test whether the potential pathogenic genes of the behavior-related phenotypes of type 2 diabetes were significantly enriched in the above pathways. The data targeted by over-representation analysis are a group of genes of interest. The statistical principle is the hypergeometric distribution test, and the p-value is calculated by Fisher's exact probability method [41].
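A numerical sketch of this over-representation test using scipy's hypergeometric distribution; the gene counts in the example are hypothetical:

```python
from scipy.stats import hypergeom

def ora_pvalue(N, n, M, m):
    """Over-representation p-value for a pathway: the probability of observing
    at least m pathway genes among the n genes of interest, when the pathway
    contains M genes out of N total (one-sided hypergeometric / Fisher test)."""
    return hypergeom.sf(m - 1, N, M, n)

# Hypothetical counts: 20,000 genes studied, 55 genes of interest,
# a pathway of 100 genes, 4 of which fall in the gene set
print(ora_pvalue(N=20000, n=55, M=100, m=4))
```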
Functional annotation of the susceptibility variants was performed with SNPNexus [35], a web-based annotation tool developed by Claude Chelala et al.; the latest version was updated in December 2019. CADD scores greater than 12.37 were considered to indicate a high probability of a harmful mutation [36]. Potential regulatory functions were annotated by RegulomeDB [37]. We also explored the expression of hub genes in the Genotype-Tissue Expression (GTEx) dataset [38] using FUMA [39].

KEGG Pathway Enrichment Analysis

To clarify the biological mechanism behind the potential pathogenic genes of the behavior-related phenotypes of type 2 diabetes, we conducted pathway enrichment analysis on the susceptibility variants of the type 2 diabetes behavior-related phenotypes, annotated against GRCh37/hg19, using the Kyoto Encyclopedia of Genes and Genomes (KEGG) dataset [40]. An over-representation analysis was used to test whether potential pathogenic genes of behavior-related phenotypes of type 2 diabetes were significantly enriched in the above pathways. The data targeted by over-representation analysis are a group of genes of interest. The statistical principle is the hypergeometric distribution test, and the p-value is calculated by Fisher's exact probability method [41]. The p-value for the target pathway K_i is calculated as follows:

p = 1 − Σ (j = 0 to m − 1) [C(M, j) × C(N − M, n − j)] / C(N, n),

where N is the total number of genes studied, n is the total number of potential pathogenic genes for behavior-related phenotypes of type 2 diabetes, M is the total number of genes in pathway K_i, and m is the number of potential pathogenic genes for behavior-related phenotypes of type 2 diabetes in pathway K_i. Subsequently, the Benjamini and Hochberg method was used to correct for multiple testing, and the significance level of the pathway analysis was defined as a false discovery rate (FDR) < 0.05.
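A hedged Python sketch of this over-representation test and the Benjamini–Hochberg correction, using SciPy's hypergeometric survival function; the gene counts in the example are hypothetical.

```python
import numpy as np
from scipy.stats import hypergeom

def ora_pvalue(N, n, M, m):
    """P(X >= m) for X ~ Hypergeometric(N genes total, M in pathway, n drawn)."""
    return hypergeom.sf(m - 1, N, M, n)

def benjamini_hochberg(pvals):
    """BH-adjusted p-values (false discovery rate)."""
    p = np.asarray(pvals, dtype=float)
    order = np.argsort(p)
    scaled = p[order] * p.size / (np.arange(p.size) + 1)
    fdr = np.minimum.accumulate(scaled[::-1])[::-1]   # enforce monotonicity
    out = np.empty_like(p)
    out[order] = np.clip(fdr, 0.0, 1.0)
    return out

# Hypothetical pathway: 20,000 genes studied, 300 genes of interest,
# a pathway of 150 genes containing 12 genes of interest
p_raw = [ora_pvalue(20000, 300, 150, 12), 0.004, 0.03, 0.2]
print(benjamini_hochberg(p_raw))
```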
Protein Interaction Network Analysis

Based on the "guilt-by-association" principle, protein-protein interaction (PPI) analysis identifies a set of genes whose downstream products (proteins) are associated with each other. These identified genes combine to influence disease. In this study, protein interaction network analysis was completed with STRING (https://string-db.org/, accessed on 8 February 2021) [42]. STRING is a database of known and predicted protein-protein interactions designed to collect, score and integrate all publicly available sources of protein-protein interaction information, and to supplement this information with computational predictions. The visualization of the network was completed with Cytoscape 3.7.0 [43].

Screening of Hub Genes

The acquisition of hub proteins and subnetworks in a complex differential protein interaction network is particularly important for probing the mechanisms of life activities. Therefore, in this study, the cytoHubba module [44] was used to rank the genes in the network and screen the hub genes. cytoHubba can predict and explore key nodes and subnetworks in a given network through several topological algorithms. We used four global topology analysis methods, including the Edge Percolated Component (EPC), the Maximum Neighborhood Component (MNC), and two centralities based on shortest paths, closeness and stress, to prioritize genes in the network. Hub genes were defined as the shared top 25% of genes sorted by each method.

Expression Analysis of Hub Genes in Blood Samples

To evaluate whether the identified hub genes are differentially expressed, we used the publicly available expression dataset GSE184050 from the Gene Expression Omnibus (https://www.ncbi.nlm.nih.gov/geo/, accessed on 29 September 2021) database. GSE184050 compared changes in gene expression using two longitudinally collected blood samples from subjects who transitioned to type 2 diabetes between the time points against samples from those who did not, with a novel analytical network approach. A total of 116 individual samples (50 from type 2 diabetes cases and 66 from healthy controls) were submitted to the analysis. RNA was extracted, amplified, reverse transcribed, labelled and sequenced with an Illumina HiSeq 2000 (Illumina, Inc., San Diego, CA, USA).

Conclusions

We found that alcohol consumption contributed higher relative heritability and eventually screened out 31 candidate hub genes for the development of type 2 diabetes. Hub genes may influence the onset of type 2 diabetes by a mediating effect or a pleiotropic effect. Our results provide new insight into the role of behavior-related factors in the conundrum of the "missing heritability" of type 2 diabetes.

Institutional Review Board Statement: The present study did not follow a prespecified analysis plan or protocol. Ethics approval was not required for this study because the data are publicly available, deidentified, summary-level data. The patients/participants provided their written informed consent to participate in the original studies.

Data Availability Statement: The datasets analyzed during the current study were derived from the following public domain resources: summary statistics of the GWAS are available from the DIAGRAM consortium (http://diagram-consortium.org/, accessed on 13 November 2020). We applied SumHer (http://dougspeed.com/sumher/, accessed on 13 January 2021) to estimate each variant's expected heritability contribution. The reference panel used to calculate the tagging file was derived from the genotypes of 404 non-Finnish Europeans provided by the 1000 Genomes Project. Data preprocessing was completed with PLINK1.9 (https://www.cog-genomics.org/plink/1.9/, accessed on 13 January 2021).
2021-11-17T16:32:53.333Z
2021-11-01T00:00:00.000
{ "year": 2021, "sha1": "2aa1e188c73388114be0706eb7e58460c2a51fe3", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/1422-0067/22/22/12318/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "06e58864309876c3a55c3dd2706a340d3611cbc1", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
253076986
pes2o/s2orc
v3-fos-license
Pulse Radiolysis and Transient Absorption of Aqueous Cr(VI) Solutions up to 325 °C

Pulse radiolysis with a custom multichannel detection system has been used to measure the kinetics of the radiation chemistry reactions of aqueous solutions of chromium(VI) to 325 °C for the first time. Kinetic traces were measured simultaneously over a range of wavelengths and fit to obtain the associated high-temperature rate coefficients and Arrhenius parameters for the reactions of Cr(VI) + eaq–, Cr(VI) + H•, and Cr(V) + •OH. These kinetic parameters can be used to predict the behavior of toxic Cr(VI) in models of aqueous systems for applications in nuclear technology, industrial wastewater treatment, and chemical dosimetry.

INTRODUCTION

Understanding the ionizing radiation-induced speciation of chromium ions in aqueous solution over a range of temperatures is useful for several applications. First, chromium is found in the coolant systems of industrial processes as it is a corrosion product and primary leachate from various metal alloys, including stainless steels. 1 Aqueous chromates found in industrial wastewaters are highly mobile, and the Cr(VI) oxidation state is very toxic, making it a significant environmental concern. 1 In particular, in nuclear technologies, circulating coolant systems experience intense, multicomponent radiation fields and temperatures up to 315 °C. 2 Understanding the speciation of chromates under extreme conditions is essential for operating these systems and safely disposing of their waste products. Ionizing radiation has also been proposed as an efficient method to reduce toxic Cr(VI) to less harmful and less soluble Cr(III) prior to its release into the environment, 3,4 but more data and accurate models of these radiation chemistry reactions are required before this procedure can be efficiently implemented. In addition, potassium dichromate systems have been considered for use as chemical dosimeters because changes in absorbed dose, dose rate, and the amount of oxygen during irradiation have little effect on the observed reduction yield, and thus the dosimeter has a stable response. 5−8 In order to use potassium dichromate as a benchmark system, however, the radiation-induced behavior of aqueous chromium ions must be well understood and studied over a variety of experimental conditions.

When a dilute aqueous solution interacts with ionizing radiation, the radiolysis of the water solvent produces a suite of highly reactive oxidizing and reducing species:

H2O → e_aq− + H• + •OH + H2 + H2O2 + H_aq+ (1)

where each species is produced with its room-temperature escape yield (G value), in units of molecules/100 eV. 9 The reactive species from water radiolysis can undergo subsequent reactions with many of the aqueous solution matrix chemicals present to generate secondary radiolysis products. 10 For the solutions studied here, the relevant Cr(VI) equilibria are the successive ionizations of chromic acid,

H2CrO4 ⇌ H+ + HCrO4− (2)
HCrO4− ⇌ H+ + CrO4 2− (3)

where the ionization constants and their temperature and ionic strength dependencies can be derived from fits to thermodynamic data. 11,12,14−18 The polynuclear dimeric dichromate ion, Cr2O7 2−, also exists at higher concentrations than those studied in this work, and it becomes the dominant form of Cr(VI) below pH 5 at [Cr(VI)] > 7 mM. 11,14 Some reactions between the radiolysis products of water and these chromium ions have been studied in the past, 5−7,19−26 but the pH and exact Cr speciation in these solutions are rarely reported, and the temperature dependence of the reported rate coefficients is largely unknown. In addition, most of the literature does not specify the uncertainties or any ionic strength corrections that may have been employed, making comparison of these values difficult.
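To make the pH dependence of eqs 2 and 3 concrete, the following Python sketch computes the equilibrium fractions of the three monomeric Cr(VI) species. The pKa values are illustrative room-temperature estimates, not the fitted thermodynamic values used in the paper, and the dichromate dimer is neglected, as appropriate for the dilute solutions studied.

```python
def cr6_fractions(pH, pKa1=0.74, pKa2=6.49):
    """Equilibrium fractions of (H2CrO4, HCrO4-, CrO4^2-) from eqs 2 and 3.
    pKa values are assumed illustrative estimates for 25 C."""
    h = 10.0 ** (-pH)
    Ka1, Ka2 = 10.0 ** (-pKa1), 10.0 ** (-pKa2)
    denom = h * h + h * Ka1 + Ka1 * Ka2
    return h * h / denom, h * Ka1 / denom, Ka1 * Ka2 / denom

# At pH 4 the solution is essentially all bichromate; at pH 9.8 all chromate
for pH in (1.1, 4.0, 5.5, 9.8):
    print(pH, [round(x, 3) for x in cr6_fractions(pH)])
```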
In the present study, we use pulse electron radiolysis up to 325 °C to measure the rate coefficients and Arrhenius parameters for major reactions essential for understanding and modeling the behavior of aqueous Cr(VI) under irradiation.

Time-resolved electron pulse radiolysis experiments were performed at the Notre Dame Radiation Laboratory (NDRL) using nanosecond electron pulses from an 8 MeV linear accelerator (LINAC). The transient absorptions of different species were followed for the various chemical reactions studied using a multichannel detection system. 27 A 1000 W xenon lamp probe light is dispersed via an Acton SP2300 f/3.9 30 cm imaging spectrograph onto an array of twenty-four 50 cm length UV-transmitting fiber optic bundles. The bundles each terminate in a photodiode directly coupled to a two-stage operational amplifier assembly connected to digital oscilloscopes for data collection. This bespoke setup allows for simultaneous measurement of a full spectrum with resolutions of approximately 6, 12, or 24 nm per channel depending on the grating selected. This work used the grating with 12 nm per channel resolution over the wavelength range of ∼250−550 nm.

The transient absorption measurements were made using two different optical cells. The experiments were performed up to 325 °C using a custom-built high-temperature high-pressure titanium cell with sapphire windows and an effective optical pathlength of 2.0 cm, similar to the Hastelloy cell described previously. 28 The decay of the e_aq− at 25 °C in pure, deaerated water at 720 nm was used as a dosimeter for this cell. Two high-pressure ISCO 260D syringe pumps (Teledyne Isco Inc.) were used to set the experimental pressure to 19.5 ± 0.3 MPa. One pump was filled with a Cr stock solution and the other with water at an equivalent pH such that the solutions could be mixed in an appropriate ratio at a tee connection and supplied to the cell. The mixed solutions were flowed through a preheater coil, into the heated titanium cell located at the end of the electron beamline, and out to waste via a backpressure regulator. The kinetic study of Cr(VI) + H• at pH 1.1 was performed up to 70 °C at atmospheric pressure in a fused silica cell with an optical pathlength of 1 cm. Solutions entering this cell were heated by flowing through a preheater which consisted of a glass coil containing a circulating heated mixture of glycol and water from a temperature bath. The thiocyanate dosimeter was used to determine the absorbed dose in the glass cell (λmax = 475 nm; Gε = 5.2 × 10−4 m2 J−1). 7

The following solution conditions were used to isolate specific radical species for study:
• Hydrated electron (e_aq−). Direct transient decay kinetics of e_aq− were observed at 532 nm using N2- or Ar-saturated aqueous solutions with the pH adjusted by concentrated HClO4 or NaOH.
• Hydrogen atom (H•). The bleach kinetics of Cr(VI) were monitored at 348 nm in N2-saturated aqueous solutions with the pH adjusted to 1.1, 4.0, and 5.5 using HClO4.
• Hydroxyl radical (•OH). For Cr(V), the post-bleach recovery of Cr(VI) from its reactions with e_aq− and H• in N2- or Ar-saturated aqueous solutions was observed at 348 nm.

Fitted kinetic traces were an average of 2−4 identical experiments, each of which consisted of six individual measurements.
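For the thiocyanate dosimeter, the absorbed dose follows from the measured change in optical density via the standard relation ΔOD = Gε·D·ρ·l. A minimal sketch follows; the water density value is an assumption here (the paper accounts for its temperature dependence).

```python
def absorbed_dose_gy(delta_od, g_epsilon=5.2e-4, path_m=0.01, density_kg_m3=997.0):
    """Dose (Gy) from delta_OD = G*epsilon * D * rho * l, with G*epsilon in
    m^2/J, the optical path in meters, and the water density in kg/m^3."""
    return delta_od / (g_epsilon * density_kg_m3 * path_m)

# Example: a 0.027 change in optical density corresponds to ~5.2 Gy
print(absorbed_dose_gy(0.027))
```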
At each temperature, the transient absorption was recorded at a minimum of five Cr concentrations and two doses to check for second-order kinetic effects. Depending on the reaction studied, second-order rate coefficients (k) were either derived from pseudo-first-order exponential fits (k′) to raw kinetic data by plotting k′ against the reactant concentration weighted by the inverse squares of the experimental uncertainties, or they were calculated directly by fitting the observed optical absorbance with coupled ordinary differential equations representing the suite of chemical reactions relevant to the system. Quoted errors for the presented second-order reaction rate coefficients are a combination of measurement precision, sample concentration errors, and uncertainties in the fitted parameters. The IGOR Pro software package from Wavemetrics Inc. was used to fit the data. The change in absorbed radiation due to the decrease in the density of water with increasing temperature was accounted for in fitting the experimental data.

The observed rates for the reactions between the aqueous chromate species and e_aq− are influenced by ionic strength. These rates have all been corrected for this effect by the Debye−Brønsted equation:

log10(k_I) = log10(k_0) + 2·A·Z1·Z2·√I / (1 + √I)

where k_I and k_0 are the rate constants at ionic strength I and zero, respectively, Z1 and Z2 are the charges of reactants 1 and 2, respectively, I is the ionic strength in mol L−1, and A is the temperature-dependent Debye−Hückel constant calculated using

A = [e^2 / (8π·ln(10)·ε0·ε·kB·T)] × [2·e^2·NA·1000 / (ε0·ε·kB·T)]^(1/2)

where e is the electron charge, NA is Avogadro's number, ε0 is the vacuum permittivity, ε is the solvent dielectric constant, kB is the Boltzmann constant, T is the temperature in K, and a factor of 1000 is used to convert between the moles per liter and moles per cubic meter concentration basis.

Multichannel Detection System. The multichannel detection system used in this work proved to be very powerful, as the rate coefficients for Cr(VI) + e_aq−, Cr(VI) + H•, and Cr(V) + •OH could all be obtained from a single experiment at an appropriate pH by choosing a suitable wavelength for fitting the respective rate coefficients. Figure 1 shows an example of such multichannel traces, in which the slow recovery of the Cr(VI) absorbance at later times reflects the back reaction of Cr(V) + •OH. The other notable feature of these curves is the e_aq− decay, which can be seen as a peak in the first few microseconds after the pulse. As the measurement wavelength increases, so does the contribution from the e_aq− decay as it approaches its maximum absorbance at 720 nm.

3.2. Cr(VI) + e_aq−. The reaction between the hydrated electron (e_aq−) and chromate solutions was studied at pH 4.0, 5.5, and 9.8 at temperatures up to 325 °C. An example decay of the e_aq− absorption signal is shown for solutions with different Cr concentrations at 532 nm at both 25 and 250 °C in Figure 2. Each e_aq− decay trace is fit with a double exponential decay function, where one decay represents the dose-dependent contribution of the sapphire windows to the overall observed optical absorption, and the other, much larger, exponential represents the pseudo-first-order rate coefficient for the e_aq− decay. The long-term sapphire window absorbance always contributed less than 1 mOD to the overall observed change in optical density. From the timescale of these plots, it can be seen that the rate is greatly increased with temperature. The pseudo-first-order rate coefficients are plotted as a function of the Cr(VI) concentration to obtain the second-order rate coefficients, as shown in Supporting Information (SI) Figure S1 for pH 4.0 solutions with 11.6 Gy and 34.7 Gy electron pulses at 75.3 °C.
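The ionic strength correction can be applied numerically. The sketch below reproduces A ≈ 0.51 for water at 25 °C and extrapolates an observed rate coefficient to zero ionic strength; the dielectric constant must be supplied at the temperature of interest, and the example numbers are illustrative.

```python
import numpy as np

E = 1.602176634e-19        # elementary charge, C
NA = 6.02214076e23         # Avogadro's number, mol^-1
EPS0 = 8.8541878128e-12    # vacuum permittivity, F/m
KB = 1.380649e-23          # Boltzmann constant, J/K

def debye_huckel_A(T=298.15, eps_r=78.4):
    """Debye-Huckel constant (base-10 log convention); ~0.51 for water at 25 C."""
    kappa = np.sqrt(2.0 * E**2 * NA * 1000.0 / (EPS0 * eps_r * KB * T))
    return E**2 * kappa / (8.0 * np.pi * np.log(10.0) * EPS0 * eps_r * KB * T)

def k_zero_ionic_strength(k_I, z1, z2, I, T=298.15, eps_r=78.4):
    """Invert log10(k_I) = log10(k_0) + 2*A*z1*z2*sqrt(I)/(1 + sqrt(I))."""
    A = debye_huckel_A(T, eps_r)
    return k_I / 10.0 ** (2.0 * A * z1 * z2 * np.sqrt(I) / (1.0 + np.sqrt(I)))

# Example: HCrO4- (z = -1) reacting with e_aq- (z = -1) at I = 0.001 mol/L
print(debye_huckel_A())                            # ~0.51 at 25 C
print(k_zero_ionic_strength(2.0e10, -1, -1, 1e-3)) # corrected toward k_0
```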
The rate coefficients given in the paper are shown extrapolated to zero ionic strength. At high chromium ion concentrations and high temperatures, the ionic strength correction becomes statistically significant. Figures S2 and S3 show the effect of this correction on pH 5.5 solutions with 27.7 Gy electron pulses at 25 and 249.3 °C, respectively. The change in Cr(VI) speciation as a function of pH and temperature according to the chromic acid ionizations in eqs 2 and 3 is shown in Figures S4−S6. The speciation between HCrO4− and CrO4 2− in these solutions was determined, and the individual second-order rate coefficients for each species reacting with e_aq− were isolated by assuming no interconversion between chromate and bichromate over the timescale of their reactions with the hydrated electron.

The Arrhenius plots of the natural logarithm of the fitted second-order rate coefficients vs 1000/T, where T is the temperature in Kelvin, are given in Figure 3. Both the reactions with chromate (CrO4 2−) and bichromate (HCrO4−) show linear Arrhenius behavior until at least 225 °C, regardless of the solution pH. The room-temperature rate coefficients and Arrhenius parameters for all the measured reactions are tabulated in Table 1. The activation energies are within the combined experimental uncertainties for the two chromium(VI) species. Based on their rate coefficients and calculated reaction radii, these reactions with the hydrated electron are likely to be diffusion controlled and occur via long-range electron transfer processes. 29

The rate coefficients for HCrO4− and CrO4 2− reacting with e_aq− have been reported previously using electron pulse radiolysis at 25 °C; however, it is difficult to compare directly with these results as most studies do not report the uncertainties, the exact solution conditions, and/or whether or not they corrected for ionic strength. Early measurements gave k(CrO4 2− + e_aq−) = 1.8 × 10 10 M−1 s−1, which is slightly faster than the value measured in this work. 19,20 More recently, Lai and Freeman measured the Arrhenius behavior of CrO4 2− + e_aq− up to 80 °C and found that k(CrO4 2− + e_aq−) = 1.7 × 10 10 M−1 s−1 at 25 °C, with an activation energy of EA = 16 kJ/mol and a pre-exponential factor of A = 10 13 M−1 s−1. 22 Again, the reported rate is faster than that found in this work; however, the Li2CrO4 solutions employed by Lai and Freeman would have been ∼pH 8, affording ∼6% speciation to HCrO4−, which would increase their measured rate relative to a solution of pure CrO4 2−. In addition, applying ionic strength corrections would lower their reported rate. Thomas, Gordon, and Hart report k = 3.3 × 10 10 M−1 s−1 at pH ∼7 and k = 5.4 × 10 10 M−1 s−1 at pH ∼13 with 1 mM methanol. 23 According to Cr(VI) speciation calculations, the pH 7 results would correspond to a solution of 95% HCrO4−, which makes this value identical to within the uncertainty with the value obtained in this work. The pH 13 result, corresponding to a solution of 100% CrO4 2−, gives a rate faster than that found in this and other studies and is likely in error. 19
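Extracting the activation energy and pre-exponential factor from such data amounts to a linear fit of ln k against 1/T. A minimal sketch with illustrative values (not the paper's data) that recovers the Lai-and-Freeman-style parameters quoted above:

```python
import numpy as np

def arrhenius_fit(T_K, k):
    """Fit ln(k) = ln(A) - Ea/(R*T); returns (Ea in kJ/mol, A in M^-1 s^-1)."""
    R = 8.314462618  # J mol^-1 K^-1
    slope, intercept = np.polyfit(1.0 / np.asarray(T_K, float),
                                  np.log(np.asarray(k, float)), 1)
    return -slope * R / 1000.0, np.exp(intercept)

# Illustrative input: rate coefficients rising exponentially with temperature
T = np.array([298.15, 348.15, 398.15, 448.15, 498.15])
k = 1e13 * np.exp(-16e3 / (8.314462618 * T))
print(arrhenius_fit(T, k))  # recovers Ea = 16 kJ/mol, A = 1e13
```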
3.3. Cr(VI) + H•. The reaction between Cr(VI) and the hydrogen atom was studied at temperatures up to 250 °C. The decay of HCrO4− from its baseline optical absorption maximum was studied at 348 nm. An example of the HCrO4− absorption signal bleach and recovery at 348 nm is shown for solutions with different chromium concentrations at both 25 and 250 °C in Figure 4. After a rapid bleach, as some of the HCrO4− reacts with H• and e_aq−, the HCrO4− solution recovered partially toward its baseline value due to the back reaction of Cr(V) + •OH. This mechanism was confirmed by adding methanol to the solutions to scavenge •OH, in which case the recovery toward the initial HCrO4− absorption was not seen, as shown in Figure S7.

Each optical density trace is fit by first assuming the temperature-dependent yields of the water radiolysis products (eq 11), 30 where t is the temperature in °C and the yields (G values) of the species are in units of molecules/100 eV; the absorbed doses were determined by dosimetry measurements. The measured change in optical density is fit with the system of chemical reactions for water radiolysis assembled by Elliot and Bartels, 31 augmented with the chromium reactions

Cr(VI) + e_aq− → Cr(V) (12)
Cr(VI) + H• → Cr(V) (13)
Cr(V) + •OH → Cr(VI) (14)

where the rate constant k12 was determined in Section 3.2, and k13 and k14 are fitted parameters along with the temperature-dependent extinction coefficients for the Cr(VI) species. The change in optical density is given by the sum of contributions from the chromium and water radiolysis species. The extinction coefficients for the water radiolysis products e_aq− and •OH were also fitted for a given temperature with initial guesses based on literature values. 33,34

Figure 5 shows the Arrhenius plots of the natural logarithm of the fitted second-order rate coefficients vs 1/T. The room-temperature rate coefficient, k13 = (1.19 ± 0.17) × 10 10 M−1 s−1, and its associated Arrhenius parameters are listed in Table 1. Cr(VI) + H• was also measured in solutions at pH 1.1 up to 80 °C, but since the water radiolysis G-values increase at pH < 4 and these values are not known as a function of temperature, 35 the fitting function could not be used for these solutions. Instead, a single exponential function was fit to the initial HCrO4− decay kinetics, which resulted in the rate coefficients plotted in red in Figure 5. These values do not completely agree with those determined at pH 4.0 and 5.5 because of the added uncertainty from fitting just the HCrO4− decay and because at pH 1.1 there will be some speciation to H2CrO4. Using a van't Hoff fit of the pKa values determined at each temperature, the enthalpy and entropy of the chromic acid dissociation reaction were found to be ΔrH = −16.5 ± 3.1 kJ/mol and ΔrS = −57.8 ± 9.2 J/(mol K).

The rate coefficient for HCrO4− + H• has been reported previously at 25 °C. 5,24,25 Hayon and Moreau used steady-state irradiations and competition kinetics with ethanol at neutral pH to obtain rate coefficients of k = 0.6−2.6 × 10 10 M−1 s−1 for this reaction. 24 Sharpe and Sehested measured an approximate value of k(HCrO4− + H•) = 1.5 × 10 10 M−1 s−1, which may be within the experimental uncertainty of the value obtained in this work. 25 Al-Sheikhly and McLaughlin reported k = 2.3 × 10 10 M−1 s−1 at pH 0.4; 5 however, at this pH, a significant portion of Cr(VI) may be present as H2CrO4.

3.4. Cr(V) + •OH. For the pH 4.0 and 5.5 solutions, the rate coefficients for Cr(V) + •OH were also obtained from the fits used for Cr(VI) + H• as described in the previous section. Figure 6 shows the Arrhenius plot resulting from fitting the Cr(VI) optical density recovery at 348 nm for this reaction. The room-temperature rate coefficient, k14(Cr(V) + •OH) = (4.80 ± 0.52) × 10 9 M−1 s−1, and the corresponding Arrhenius parameters are given in Table 1.
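A reduced sketch of the coupled-ODE approach used for these fits: only reactions 12−14 are integrated (the full Elliot−Bartels water radiolysis set and the window contribution are omitted), and the initial radical concentrations and extinction coefficients are illustrative, not the fitted values.

```python
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, y, k12, k13, k14):
    cr6, cr5, e_aq, h_atom, oh = y
    r12 = k12 * cr6 * e_aq    # Cr(VI) + e_aq-  -> Cr(V)   (12)
    r13 = k13 * cr6 * h_atom  # Cr(VI) + H.     -> Cr(V)   (13)
    r14 = k14 * cr5 * oh      # Cr(V)  + .OH    -> Cr(VI)  (14)
    return [-r12 - r13 + r14, r12 + r13 - r14, -r12, -r13, -r14]

y0 = [55e-6, 0.0, 2.0e-6, 0.4e-6, 2.0e-6]   # mol/L, illustrative pulse yields
sol = solve_ivp(rhs, (0.0, 1e-4), y0, args=(2.0e10, 1.2e10, 4.8e9),
                method="LSODA", dense_output=True)

# Modeled 348 nm optical density: sum of eps * c * l over absorbing species
eps = np.array([1.5e3, 1.0e3, 0.0, 0.0, 0.0])   # M^-1 cm^-1, illustrative
od_348 = (eps[:, None] * sol.sol(np.linspace(0, 1e-4, 200))).sum(axis=0) * 2.0
```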
The pre-exponential factor for this reaction is significantly lower than that seen for the reactions between Cr(VI) and the primary radiolysis products of water. The rate coefficient for Cr(V) + •OH has been reported previously in pH 1 solution at 25 °C by Sharpe and Sehested as k = (1.5 ± 0.5) × 10 9 M−1 s−1. 25 This reaction has also been described at room temperature in basic solution, where Baxendale et al. found that k(Cr(V) + •OH) = 5 × 10 10 M−1 s−1, based on the relative absorptions of CrO4 2− and the Cr(V) transient at 365 nm. 19 The result from this work at intermediate pH falls between those previously reported at high and low pH. This rate coefficient determines how much of the Cr(V) is reoxidized before the •OH radicals combine with one another, and thus it affects the amount of HCrO4− recovered after the pulse. At pH 1, only 20% of the Cr(V) was seen to react, 25 whereas in basic solutions, CrO4 2− was entirely recovered. 19 The large variation in this rate coefficient with pH between the different studies is not fully understood. Sharpe and Sehested speculated that this effect may be due to spectral changes associated with rapid protonation reactions. 25 The change in the absorbance spectrum for a 55.5 μM K2Cr2O7 solution at pH 5.5 from 0 to 10 μs after a 29.2 Gy electron pulse at 25 °C is shown in Figure S8. By subtracting the absorbance due to the concentration of HCrO4− lost, assuming that G(H• + e_aq−) = 3.059 molecules/100 eV, one can see that the Cr(V) product has an absorbance maximum at ∼320 nm. The hypochromate ion, CrO4 3−, is predicted to be the primary Cr(V) product in alkaline solutions and is known to absorb around 355 nm. 36 The species formed in acidic solutions is not currently known but may be a protonated form of the basic product.

CONCLUSIONS

The rate coefficients and Arrhenius parameters for the major reactions of aqueous Cr(VI) ions under irradiation have been measured for the first time to high temperatures. All measured rate coefficients increased exponentially with temperature, and the activation energies for the reactions of HCrO4− and CrO4 2− agreed within the experimental uncertainties. The reduction reactions of Cr(VI) to Cr(V) by e_aq− or the H• atom are significantly reversed by the back reaction of the product Cr(V) with the •OH radical, and the extent of this oxidation reaction has a strong dependence on pH. Overall, the new kinetic data measured in this work give a much better understanding of the speciation of chromium in aqueous solution under irradiation and at high temperatures. More studies on the radiation chemistry reactions of the other oxidation states of Cr to high temperatures are required to build a complete model of its behavior and thus be able to predict and control Cr speciation in aqueous solution under irradiation for applications such as corrosion product transport in nuclear reactor coolant. The results from this work do support the idea that Cr(VI) can be reduced under irradiation for the proposed radiolytic reduction of Cr(VI) to Cr(III) for environmental release; however, the back reaction between Cr(V) and the •OH radical also has important implications for this application. This reaction suggests that a scavenger for the •OH radical would need to be included in the system to efficiently achieve a complete conversion. For the same reasons, this observation also complicates the proposed use of aqueous Cr(VI) solution as a chemical dosimeter, as the loss of Cr(VI) would not simply be linear with the absorbed dose.
■ ASSOCIATED CONTENT
Supporting Information (PDF): linear fits of pseudo-first-order rate coefficients used to obtain second-order rate coefficients; concentrations of the Cr(VI) species as a function of temperature and pH; change in the HCrO4− optical absorbance with 10 mM MeOH as an •OH scavenger; optical absorption of the full spectrum; and the subtracted Cr(V) transient species formed in an irradiated K2Cr2O7 solution.
■ AUTHOR INFORMATION
2022-10-23T15:04:53.770Z
2022-10-21T00:00:00.000
{ "year": 2022, "sha1": "17c9ba2d6f4c99f06c756a28d386d354a85cc6ed", "oa_license": "CCBYNCND", "oa_url": "https://pubs.acs.org/doi/pdf/10.1021/acsomega.2c04807", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "74cf756e59315c5e89bf0220bd4c7be897bcd4c0", "s2fieldsofstudy": [ "Chemistry", "Environmental Science" ], "extfieldsofstudy": [ "Medicine" ] }
235453249
pes2o/s2orc
v3-fos-license
Helicobacter pylori-Induced Heparanase Promotes H. pylori Colonization and Gastritis

Chronic gastritis caused by Helicobacter pylori (H. pylori) infection has been widely recognized as the most important risk factor for gastric cancer. Analysis of the interaction between the key participants in gastric mucosal immunity and H. pylori infection is expected to provide important insights for the treatment of chronic gastritis and the prevention of gastric cancer. Heparanase is an endoglycosidase that degrades heparan sulfate, resulting in remodeling of the extracellular matrix, thereby facilitating the extravasation and migration of immune cells towards sites of inflammation. Heparanase also releases heparan sulfate-bound cytokines and chemokines that further promote directed motility and recruitment of immune cells. Heparanase is highly expressed in a variety of inflammatory conditions and diseases, but its role in chronic gastritis has not been sufficiently explored. In this study, we report that H. pylori infection promotes up-regulation of heparanase in gastritis, which in turn facilitates the colonization of H. pylori in the gastric mucosa, thereby aggravating gastritis. By sustaining continuous activation, polarization and recruitment of macrophages that supply pro-inflammatory and pro-tumorigenic cytokines (e.g., IL-1, IL-6, IL-1β, TNF-α, MIP-2, iNOS), heparanase participates in the generation of a vicious circle, driven by enhanced NF-κB and p38-MAPK signaling, that supports the development and progression of gastric cancer. These results suggest that inhibition of heparanase may block this self-sustaining cycle, and thereby reduce the risk of gastritis and gastric cancer.

INTRODUCTION

Helicobacter pylori (H. pylori) is a spiral, microaerobic gram-negative bacterium (1). Epidemiologic studies reveal that about 50% of people worldwide are infected with H. pylori (2). H. pylori infection induces a significant inflammatory response in the gastric mucosa, accompanied by infiltration of immune cells, resulting in chronic infection and sustained damage to gastric mucosal tissues (3,4). Hence, chronic gastritis caused by H. pylori has been widely recognized as the most important risk factor for gastric cancer (5). H. pylori infection usually requires antibiotic therapy, but it is increasingly difficult to achieve eradication in some patients, for whom antibiotics by themselves are not sufficient to cure the disease (6). Although the mechanisms of sustained colonization and chronic infection by H. pylori in the gastric mucosa are not clear, existing studies suggest that the interaction between gastric epithelial cells and gastric mucosal immune cells is a key feature in the pathogenesis of H. pylori infection (7). Therefore, identifying key participants in gastric mucosal immunity and H. pylori infection is likely to provide insights that will lead to new treatment modalities for chronic gastritis and the prevention of gastric cancer.

By sequestering cytokines and chemokines, mediating the interaction between leukocytes and endothelium in the extracellular matrix (ECM), and facilitating receptor-ligand binding on the surface of immunocytes, heparan sulfate (HS) plays critical roles in multiple inflammatory processes (8). Through degradation and remodeling of the HS polysaccharide chains in the ECM and on cell surfaces, heparanase facilitates the extravasation and migration of immune cells towards sites of inflammation (9).
Also, heparanase releases HS-bound cytokines and chemokines which further establish concentration gradients that facilitate the bioavailability, activation and directed motility of immune cells (9)(10)(11). It has been previously reported that heparanase is highly expressed in a variety of inflammatory diseases (12,13), including ulcerative colitis, acute pancreatitis (14,15), acute vasculitis (16), acute glomerulonephritis (17), hypersensitivity pneumonia (18) and sepsis (19). The role of heparanase in chronic gastritis caused by H. pylori has not yet been explored. In this study, we report, for the first time, that H. pylori infection promotes the expression of heparanase in gastritis. Heparanase, in turn, further promotes the colonization of H. pylori and aggravates gastritis, forming a vicious circle driven by enhanced NF-κB and p38-MAPK signaling and the generation of pro-inflammatory and pro-tumorigenic cytokines. These results suggest that inhibition of heparanase will block this self-sustaining cycle, and thereby facilitate the eradication of H. pylori and reduce the risk of gastritis and gastric cancer.

Patients and Specimens

Gastric biopsy specimens were collected from patients who underwent electronic gastroscopy for gastritis-related symptoms (i.e., ventosity, rhythmic epigastric pain, dyspepsia) at the endoscopy center of the department of gastroenterology, Xinqiao Hospital, The Army Medical University. Gastritis mucosa tissues (two to four nearby spots) were collected by endoscopic biopsy, and urease tests were performed immediately by using urease test paper (Kedi Tech, Zhuhai, China). The tissue samples were then transferred to labeled cryotubes and stored in liquid nitrogen. These samples were used for DNA and RNA extraction, H&E staining, H. pylori testing and immunofluorescence. Individuals with atrophic gastritis, hypochlorhydria, antibiotic treatment, autoimmune disease, infectious diseases, and cancer were excluded. The study was approved by the Ethics Committee of Xinqiao Hospital, The Army Medical University. Written informed consent was obtained from each individual. Demographic characteristics of the patients are presented in Table 1.

Mice

The heparanase knockout (Hpa-KO) mice have been described previously (20). C57BL/6 mice were purchased from TengXin Bio, Chongqing. All mice used in the experiments were free of pathogenic murine viruses, bacteria and parasites and were fed with sterile food and water in a specific pathogen-free environment. Breeding and animal experiments were performed at the experimental animal center of Xinqiao Hospital and were approved by the Ethics Committee of Xinqiao Hospital, The Army Medical University. Mice were euthanized at the 8th week after H. pylori infection. The stomach was isolated and the forestomach removed. Stomach contents were washed and cleaned with sterile phosphate-buffered saline (PBS). After flattening on a sterile foam plastic board, half of the tissue was cut into four parts for RNA isolation, DNA isolation, protein extraction and tissue fixation for further immunohistochemistry and immunofluorescence analyses.

H. pylori Infection Test

H. pylori infection was determined by rapid urease test using biopsy specimens taken from the antrum. Infection was subsequently confirmed by RT-PCR and agarose gel electrophoresis using H. pylori-specific 16S ribosomal DNA (rDNA) primers. The 'gold standard' culture test was applied in the case of two patients, as described in Supplementary Figure 3.
Flow Cytometry

Half of the mouse gastric tissue was used for isolation of single cells, as described (22). Fresh tissues were washed twice with Hanks' solution containing 1% FBS, cut into small pieces (1 mm³), and dissociated into single-cell suspensions.

Immunohistochemistry

Formaldehyde-fixed and paraffin-embedded sections of gastritis tissue samples were heated in a 60 °C oven for 2 h, immersed in xylene for 30 min for dewaxing and immersed in increasing concentrations of alcohol for rehydration. Tissue sections were then subjected to antigen retrieval with citrate under medium heat in a microwave oven for 10 min, cooled for 2 min and incubated with 0.5% Triton X-100 for permeabilization. Hydrogen peroxide and 1% BSA were added in sequence for catalase and antigen blocking, respectively. Tissue sections were then incubated (1 h, 37 °C) with primary antibodies (i.e., anti-heparanase polyclonal antibody #733, diluted 1:200) (19,23), washed three times and incubated (30 min) with anti-mouse/rabbit secondary antibodies. Next, DAB chromogenic reagent was added for color development, and hematoxylin was used for counterstaining.

Immunofluorescence

Fresh frozen sections of gastritis tissue samples were incubated in 100% methanol (chilled at −20 °C) at room temperature for 5 min, and then in 4% paraformaldehyde in PBS (pH 7.4) for 10 min at room temperature. Tissue sections were blocked with 1% BSA and glycine in PBST (PBS + 0.1% Tween 20) for 30 min, followed by incubation with the diluted primary antibodies overnight at 4 °C. Sections were then incubated with secondary antibodies for 1 h at room temperature, mounted and sealed with coverslips. Photographs were taken with a laser scanning confocal microscope.

Western Blot Analysis

Western blotting was performed as previously described (23,24).

Statistical Analysis

Data were analyzed using R (R Core Team, Vienna, Austria). The Mann-Whitney U-test was generally used to analyze the differences between two groups of continuous values. The R built-in function wilcox.test() was used to perform the Mann-Whitney U-test. All data were analyzed using two-tailed tests. The Chi-square test was used to compare classified disordered data groups. The association between H. pylori infection and the expression of heparanase was analyzed by the Spearman correlation coefficient. p < 0.05 was considered statistically significant.
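The paper's comparisons were run in R via wilcox.test(); for readers working in Python, an equivalent sketch using SciPy is shown below. The array names and counts are hypothetical placeholders, not the study's data.

```python
import numpy as np
from scipy.stats import mannwhitneyu, chi2_contingency

# Hypothetical cytokine expression values (relative units) in WT vs. Hpa-KO mice
wt = np.array([2.1, 3.4, 2.8, 4.0, 3.1])
ko = np.array([1.2, 1.0, 1.6, 0.9, 1.4])

# Two-tailed Mann-Whitney U-test, the Python counterpart of R's wilcox.test()
u_stat, p_value = mannwhitneyu(wt, ko, alternative="two-sided")

# Chi-square test for classified (categorical) data, e.g. infection status by group
table = np.array([[18, 6], [9, 15]])            # illustrative 2x2 counts
chi2, p_chi, dof, expected = chi2_contingency(table)
print(p_value, p_chi)
```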
Heparanase Expression Is Increased in H. pylori-Induced Chronic Gastritis

To verify the expression of heparanase in chronic gastritis, we obtained gastroscopy biopsy specimens of fresh gastric mucosal tissues from patients who underwent electronic gastroscopy for gastritis-related symptoms, including H. pylori-positive chronic gastritis mucosal tissues, H. pylori-positive chronic atrophic gastritis mucosal tissues, and normal mucosal tissues. Fresh frozen sections were prepared, and the expression of heparanase was examined by immunofluorescent staining using anti-heparanase antibodies. Compared to normal gastric tissue, heparanase staining was readily detected in chronic gastritis mucosa and chronic atrophic gastric mucosa (Figure 1A and Supplementary Figure 1A). To study the role of heparanase in gastritis, H. pylori strain PMSS1 was used to infect C57BL/6 mice via oral gavage, once every 2 days, 5 × 10 9 CFU each time, for three consecutive times. After 8 weeks, mice were sacrificed, the gastric tissues were collected, and colonization of H. pylori in the gastric mucosa was confirmed by H&E and Warthin-Starry silver staining (24) (Figure 1B). Notably, heparanase staining intensity was increased prominently in H. pylori-infected gastric mucosa vs. control, un-infected gastric tissue. This was evident by immunofluorescent staining (Figure 1C and Supplementary Figure 1B) and immunohistochemistry (Figure 1D and Supplementary Figure 1C).

Heparanase Promotes Gastritis Inflammation and H. pylori Colonization

To investigate the role of heparanase in chronic gastritis, we infected Hpa-KO and wild-type (WT) C57BL/6 mice with H. pylori, and the degree of inflammation in the gastric mucosa was examined after 8 weeks. As shown in Figure 2A, infiltration of immunocytes into the gastric mucosa of Hpa-KO mice and the degree of inflammation were significantly decreased as compared to control WT mice infected with H. pylori, consistent with the inflammatory score of the tissues (Figure 2B). Moreover, the expression of pro-inflammatory cytokines and chemokines including IL-1β, IL-6, IL-22, TNF-α, MIP-2, MCP-1, IL-23, IL-27, IL-12, and IFN-γ was significantly decreased in Hpa-KO vs. control WT mice, whereas the expression of IL-10, one of the main anti-inflammatory cytokines, was significantly increased in the Hpa-KO mice (Figure 2C and Supplementary Figure 2). To evaluate the impact of heparanase on the colonization of H. pylori, total DNA was extracted from the gastric mucosa, and the relative expression levels of H. pylori 16S rDNA were quantified. As shown in Figure 2D, H. pylori colonization was markedly decreased in gastric tissue of H. pylori-infected Hpa-KO vs. WT mice, indicating that heparanase enhances not only gastric inflammation but also colonization of the bacteria. To study the effect of heparanase on the colonization of H. pylori in human chronic gastritis, patients with H. pylori infection were recruited and chronic gastritis tissues were collected during the gastroscopic examination. The collected gastric mucosa was divided into two parts and used to extract DNA and total RNA. The expression of human heparanase was quantified (RT-qPCR) relative to the 16S rRNA of H. pylori and the human beta-globin gene. Importantly, a positive correlation between expression of heparanase and colonization of H. pylori was found in human chronic gastritis (Figure 2E), further confirming the results of our mouse model (Figure 2D).

Heparanase Facilitates Infiltration of Macrophages in H. pylori-Infected Chronic Gastritis

To evaluate the impact of heparanase on the infiltration of immunocytes into the lamina propria of H. pylori-infected chronic gastritis mucosa, we evaluated (RT-qPCR) the expression of markers specific for the different immune cell populations, including NK1.1 and GranB (NK cells), Langerin (dendritic cells), F4/80 (macrophages), and Ly6g (neutrophils). We found that the recruitment of NK cells, dendritic cells, and neutrophils to the inflamed gastric tissue was comparable in WT and Hpa-KO mice (Figure 3A, left four panels). In striking contrast, recruitment of macrophages (evident by expression of F4/80) to the inflamed Hpa-KO gastric tissue was significantly and markedly decreased (Figure 3A, right panel). To further confirm the difference in macrophage infiltration, flow cytometry was applied using fresh mucosa tissue of H. pylori-infected chronic gastritis derived from WT and Hpa-KO mice. Notably, the percentage of F4/80+CD11b+ macrophages in the gastritis mucosa of WT mice was significantly higher than that of Hpa-KO mice.
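A hedged sketch of the relative quantification and the correlation shown in Figure 2E: the 2^−ΔCt form is an assumed standard approach (the paper only states that levels were quantified relative to the human beta-globin gene), and the Ct arrays are hypothetical.

```python
import numpy as np
from scipy.stats import spearmanr

def relative_level(ct_target, ct_reference):
    """2^-(Ct_target - Ct_reference) relative quantification (assumed method)."""
    return 2.0 ** -(np.asarray(ct_target, float) - np.asarray(ct_reference, float))

# Hypothetical paired measurements from human gastritis biopsies
hp_16s = relative_level([24.1, 26.3, 22.8, 25.0], [18.2, 18.5, 18.0, 18.3])
hpse   = relative_level([27.5, 29.0, 26.1, 28.2], [18.2, 18.5, 18.0, 18.3])

rho, p = spearmanr(hpse, hp_16s)   # Spearman correlation, as used in the paper
print(rho, p)
```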
H. pylori Induces Expression of Heparanase in Gastric Epithelial Cells and Macrophages

Heparanase is highly expressed in chronic gastritis with H. pylori infection, but the source of heparanase in gastritis tissue is still unclear. To investigate whether H. pylori induces the expression of heparanase in epithelial cells of the gastric mucosa, human and mouse gastritis tissues were double-stained with anti-pancytokeratin, a marker of epithelial cells, and anti-heparanase antibodies. As shown in Figure 4A, heparanase and cytokeratins are co-expressed in mouse and human gastritis tissues, consistent with the pattern of heparanase expression in other inflamed tissues. Moreover, the addition of H. pylori to normal human gastric epithelial cells (GES-1) resulted in increased heparanase expression, and this increase appeared dose-dependent (Figure 4B). Notably, increased heparanase expression (over 5-fold) was evident already 6 h after the addition of the bacteria (Figure 4C) and was further confirmed by immunoblotting (Figures 4D, E). Since heparanase facilitated the infiltration of macrophages in H. pylori-infected chronic gastritis (Figure 3), we stained H. pylori-infected human gastritis tissue for CD68 and heparanase. As shown in Figure 4F, heparanase staining co-localized with CD68+ macrophage staining, suggesting that heparanase in human gastritis originates also from macrophages.

Heparanase Promotes Polarization of Macrophages to the M1 Phenotype

Macrophages, like other immune effector cells, have multiple subtypes and various phenotypes depending on the microenvironment. Specifically, macrophages can differentiate into distinct entities: classically activated or inflammatory (M1) macrophages, and alternatively activated or anti-inflammatory (M2) macrophages (25). The process of macrophage transformation from one phenotype to another is referred to as macrophage polarization (26). Importantly, the M1 phenotype can be induced by IFN-γ and bacterial lipopolysaccharide (LPS) (25). To evaluate the impact of heparanase on macrophage polarization, peritoneal macrophages were isolated from WT and Hpa-KO mice, and the macrophages were incubated with H. pylori at a multiplicity of infection (MOI) of 200 for 6 h. RNA was then extracted, and the expression of M1-related cytokines was determined by qPCR. Expression of IL-1β, IL-6, TNF-α, CXCL-1, CXCL-10 and iNOS in WT macrophages was higher in response to H. pylori, whereas a lower induction was quantified in Hpa-KO macrophages incubated with H. pylori (Figure 5A and Supplementary Figure 5A). This was also noted by immunoblotting for inducible nitric oxide synthase (iNOS), a marker of M1 polarization (Figure 5B). We next performed a similar experiment except that macrophages were treated with LPS and IFN-γ instead of H. pylori (Figure 5C). The relative expression of IL-1β, IL-6, TNF-α, IL-10 and iNOS in macrophages from WT mice was significantly higher than in macrophages from Hpa-KO mice (Figure 5C and Supplementary Figure 5B), and this was further confirmed by immunoblotting for iNOS (Figure 5D). Moreover, whereas heparanase alone had no effect on iNOS expression, treatment of WT macrophages with heparanase and H. pylori resulted in a marked increase of iNOS expression, demonstrating a combined effect of H. pylori and heparanase on M1 polarization of macrophages.

Heparanase-Stimulated M1 Polarization of Macrophages Involves p38 MAPK and NF-κB

Previous studies reported that heparanase is a key mediator of macrophage activation and function (23, 27−30).
It was shown that heparanase activates Erk, p38, and JNK signaling in macrophages, leading to increased c-Fos levels and induction of cytokine expression (23). To explore whether the activation and M1 polarization of macrophages in H. pylori-infected gastritis utilize a similar mechanism, peritoneal macrophages were isolated from WT and Hpa-KO mice and inoculated with H. pylori at an MOI of 200 for 6 h. Phosphorylation of Erk, p38 MAPK, NF-κB and JNK was next examined by immunoblotting. As shown in Figures 6A, B, p38 MAPK and NF-κB phosphorylation levels were increased by H. pylori in WT peritoneal macrophages, whereas lower phosphorylation levels were detected in Hpa-KO macrophages. Interestingly, the increased heparanase expression in GES-1 cells induced by H. pylori (Figure 6C) also involved p38 phosphorylation. This was concluded because heparanase induction by H. pylori was decreased markedly in GES-1 cells that were pre-treated with an inhibitor of p38 (SB203580; Figure 6C). This confirms previous reports that connect the p38 MAPK pathway with heparanase expression in gastric cancer cells (31).

DISCUSSION

In most tumor types, heparanase was reported to be overexpressed and correlated with increased tumor size, angiogenesis, metastasis and poor prognosis (8, 32−37). In line with the notion that heparanase is a drug target, three inhibitors of the heparanase enzyme (Roneparstat, Necuparanib, Pixatimod) have been evaluated in early-stage clinical trials, showing signs of efficacy (38−41). In addition, heparanase is capable of regulating multiple aspects of the inflammatory process (8,11). Heparanase was reported to facilitate inflammation through the release of cytokines/chemokines anchored in the ECM, activation of innate immune cells and stimulation of cell motility and extravasation (8, 42−44). For example, heparanase was shown to promote ulcerative colitis (29), acute pancreatitis (14,15), acute vasculitis (16), acute glomerulonephritis (17), sepsis (19), and hypersensitivity pneumonia (18). It has been previously reported that heparanase induced by H. pylori infection facilitates the proliferation, invasion and metastasis of gastric cancer cells (31,45). However, the role of heparanase in gastritis and the early stages of gastric cancer initiation is still obscure.

Here, we examined the involvement of heparanase in H. pylori-induced gastritis. To this end, we tested gastritis tissues and established a mouse model of H. pylori-infected chronic gastritis. Expression and tissue localization of heparanase were visualized by immunofluorescence and immunohistochemistry. Compared to normal gastric tissue, heparanase was overexpressed in H. pylori-infected chronic gastritis and intestinal metaplasia, indicating that high expression of heparanase is involved in bacteria-induced inflammation. The regulation of heparanase in inflammation is complex and versatile (8). To investigate the specific function of heparanase in H. pylori-infected gastritis, we employed heparanase knockout (Hpa-KO) C57BL/6 mice (20). Compared to wild-type (WT) C57BL/6 mice, deficiency of heparanase reduced the infiltration of immunocytes into the lamina propria of the gastric mucosa. The inflammation score of H. pylori-infected gastric mucosa derived from WT mice was significantly higher than that observed in heparanase knockout mice. Likewise, expression of pro-inflammatory cytokines (IL-1β, IL-6, TNF-α, IL-23, MIP-2, MCP-1, IL-27, IL-12, IFN-γ) was significantly higher in WT vs. Hpa-KO H. pylori-infected mice.
These results indicate that heparanase plays an important role in H. pylori-induced chronic gastritis, corroborating previous studies on the involvement of MMP10 in gastritis and colonization of H. pylori (7). In contrast, MMP7 was reported to restrain H. pylori-induced gastric inflammation and premalignant lesions in the stomach by altering macrophage polarization (46). To explore the impact of heparanase on the colonization of H. pylori, Hpa-KO and WT mice were subjected to infection with H. pylori. Compared with WT mice, expression of the H. pylori 16S rRNA gene was significantly decreased in the gastric tissue of Hpa-KO mice. Likewise, a positive correlation was found between heparanase expression and H. pylori colonization in human gastric tissues. These results substantiate the notion that H. pylori promotes high expression of heparanase, which further facilitates the colonization of H. pylori in the gastric tissue. Notably, it was recently reported that H. pylori induces high expression of Rev-erbα, which fosters the colonization of H. pylori by impairing host innate and adaptive defense (47), forming a positive feedback loop that aggravates H. pylori-induced gastritis. It appears that, among other effects, H. pylori promotes gastritis and gastric tumorigenesis via upregulated expression of heparanase (31), MMP10 (7) and Rev-erbα (47). Yet, the mechanism by which heparanase promotes colonization of H. pylori is still unknown and needs further investigation.

[Figure 3: (A) Mice were infected with H. pylori as described in Figure 1 (WT mice n = 12, KO mice n = 12). Gastric tissues were collected after 8 weeks, total RNA was extracted and subjected to qPCR applying primer sets specific for NK1.1 and GranB (NK cells), Langerin (dendritic cells), F4/80 (macrophages), and Ly6g (neutrophils). Note the reduced infiltration of macrophages to the inflamed gastric tissue of Hpa-KO mice. (B-E) Flow cytometry. Fresh mucosa tissue (WT mice n = 4, KO mice n = 4), derived from H. pylori-infected chronic gastritis in WT and Hpa-KO mice, was dissociated into a single-cell suspension as described under Materials and Methods. Cells were then subjected to flow cytometry applying antibodies directed against CD45, F4/80, CD103, CD11b, and CD11c. The numbers of F4/80+CD11b+ macrophages and CD103+CD11c+ dendritic cells in WT vs. Hpa-KO mice were compared. *p < 0.05; **p < 0.01; ***p ≤ 0.001; ns, no significant difference.]

While the participation of heparanase in immunocyte chemotaxis, recruitment, extravasation, migration and accumulation in target inflammatory sites is well documented (8, 10-13, 17, 19, 42, 44, 48, 49), its impact on immunocytes in H. pylori-induced chronic gastritis has not been analyzed. Our results indicate that heparanase regulates primarily the recruitment and accumulation of macrophages in H. pylori-induced chronic gastritis tissue, in agreement with previous studies on the impact of heparanase on macrophage recruitment and activation in cancer and inflammation (23, 27−30). Given that dendritic cells (DC) and IL-23 take part in the pathogenesis of H. pylori-induced gastritis (50), we examined the accumulation of DC in H. pylori-induced chronic gastritis. While a decreased amount of DC was noted in H. pylori-infected Hpa-KO vs. WT mice, this decrease was not statistically significant, further emphasizing the preferential involvement of macrophages in the observed heparanase-H. pylori axis. Notably, Zhuang et al. proposed a multistep model of inflammation during H. pylori infection involving interactions between H. pylori, Th22 cells, DCs, gastric epithelial cells and myeloid-derived suppressor cells within the gastric mucosa (50).
In other inflammatory settings, such as colitis (29), delayed-type hypersensitivity (51), inflammatory bowel disease (52) and lung injury caused by sepsis (53), overexpression of heparanase was primarily noted in epithelial and endothelial cells (54). Likewise, our results indicate that the main cellular source of heparanase in H. pylori-induced chronic gastritis is the epithelial cells of the gastric mucosa, yet overexpression of heparanase in macrophages was also observed. Our results indicate that heparanase promotes M1 polarization of macrophages driven by H. pylori or LPS + IFN-γ, resulting in increased expression of pro-inflammatory cytokines such as IL-1β, IL-6 and TNF-α, further aggravating the severity of gastritis. While the underlying signaling mechanism needs to be elucidated, we have noted that heparanase facilitates M1 polarization in response to H. pylori mainly via activation of the p38 MAPK and NF-κB signaling pathways. Combining the current and previous results (29), it appears that the molecular mechanism underlying the activation and polarization of WT, but not Hpa-KO, macrophages involves a linear cascade by which heparanase activates Erk, p38, and JNK signaling in macrophages, leading to increased c-Fos and NF-κB transcriptional activity and induction of cytokine expression (23). We also found that the COMPASS complex (54,55) is impaired in the absence of heparanase, resulting in decreased levels of WDR5 and H3K4 methylation in Hpa-KO vs. WT macrophages (27). It remains to be elucidated whether the currently observed H. pylori-heparanase axis involves WDR5 induction and H3K4 methylation, given their important epigenetic roles in the progression of various cancers (56) including gastric cancer (57).

It is well documented that chronic inflammatory conditions contribute to cancer progression (46, 47, 58−62) through, among other mechanisms, mobilization of tumor-supporting immunocytes (e.g., tumor-associated macrophages, neutrophils) which supply bioactive molecules that foster cell survival, angiogenesis, invasion and metastasis (42,47,48). Moreover, in several anatomic sites chronic inflammation is crucially implicated in tumor initiation, producing a mutagenic environment through the release of reactive oxygen/nitrogen species from infiltrating immune cells, generating cytokines, chemokines, growth factors, and anti-apoptotic proteins, and activating tumor-stimulating signaling pathways (e.g., NF-κB, p38 MAPK, STAT3) (42, 47−49). Progression of Barrett's esophagus to adenocarcinoma (63), chronic gastritis to intestinal-type gastric carcinoma (64,65), chronic hepatitis C to hepatocellular carcinoma (64), pancreatitis to pancreatic adenocarcinoma (66) and colitis to colorectal cancer (67) are well-known examples of inflammation-driven tumorigenesis. Remarkably, induction of heparanase before the appearance of malignancy was reported in essentially all of the above-mentioned inflammatory conditions, i.e., Barrett's esophagus (68), hepatitis C infection (69), chronic pancreatitis (70), Crohn's disease and ulcerative colitis (29,50). Similarly, H. pylori-induced gastric inflammation and the associated up-regulation of heparanase, observed in the current study, likely promote the development of gastric cancer.
Given the causal role of heparanase in tumor progression in tissues in which cancer-related inflammation typically occurs (i.e., gastrointestinal tract, pancreas, liver), it is conceivable that inflammation-induced heparanase is involved in coupling inflammation and cancer. This notion is supported by a study utilizing a mouse model of colitis-associated colon carcinoma (29), showing that heparanase promotes polarization of innate immunocytes toward a pro-inflammatory and/or pro-tumorigenic phenotype. The same self-sustaining crosstalk between the gastric epithelium and immunocytes appears to be driven by H. pylori infection. By sustaining continuous activation and polarization of macrophages that supply cancer-promoting cytokines (e.g., IL-1, IL-6, IL-1β, TNF-α, MIP-2, IL-10, iNOS), heparanase (produced primarily by the inflamed epithelium) may participate in creating a pro-tumorigenic microenvironment, characterized by enhanced NF-κB and p38-MAPK signaling (71), that supports the development and progression of gastric cancer.

In conclusion, our results indicate that H. pylori infection promotes overexpression of heparanase in gastritis, which in turn facilitates the colonization of H. pylori and hence worsens gastritis. Inhibition of heparanase by heparin/HS-mimicking compounds, neutralizing antibodies and/or small molecules may block this self-sustaining pro-inflammatory cycle and thereby reduce the risk of gastritis and the associated gastric cancer.

DATA AVAILABILITY STATEMENT

The original contributions presented in the study are included in the article/Supplementary Material. Further inquiries can be directed to the corresponding authors.

ETHICS STATEMENT

The study was approved by the Ethics Committee of Xinqiao Hospital, The Army Medical University. Written informed consent was obtained from each individual. All the breeding processes and animal experiments were reviewed and approved by the Ethics Committee of Xinqiao Hospital, The Army Medical University.

[Supplementary Figure 1, partial legend: (A) Semi-quantitative scoring of the immunofluorescence presented in Figure 1A (n = 4 for each group). (B) Semi-quantitative scoring of the immunofluorescence presented in Figure 1C (n = 4 for each group). (C) Semi-quantitative scoring of the immunohistochemical study presented in Figure 1D (n = 4 for each group).]

Supplementary Figure 3 | Culture and Gram staining of two H. pylori-positive samples. Gastric mucosa tissues were ground into tissue homogenates and resuspended in sterile PBS. After centrifugation, the supernatant was added onto a blood culture plate containing antibiotics and cultured in a 5% O2, 10% CO2 and 85% N2 incubator at 37 °C for 48 hours. Left: gray dots on the blood culture plate are colonies of H. pylori. Right: Gram staining of H. pylori.
Differential Expression of Ki-67 and P27 in Cholesteatoma Compared to Skin Tissue Predicts the Prognosis of Adult Acquired Cholesteatoma Background: The aim of this study was to compare the differential Ki-67 and p27 staining properties of acquired cholesteatoma in adult patients for prognostic analysis. Methods: Forty-two adult patients with acquired cholesteatoma were enrolled. The cholesteatoma and matched meatal skin tissues of the patients were immunostained with Ki-67 and p27 antibodies. Canal wall down mastoidectomy was performed in all patients. The differential staining properties, i.e., positive staining in the cholesteatoma and negative staining in the skin tissue (C+S-) or negative staining in the cholesteatoma and positive staining in the skin tissue (C-S+), were compared for bone erosion scores (BES), stage, and recurrence rates. Results: Isolated findings in the cholesteatoma tissues, without matching with the skin tissues, demonstrated that stage and recurrence rates were not related to findings in the cholesteatoma tissues (P > .05). However, C+S- for Ki-67 and C-S+ for p27 are risk factors for worse prognosis, including advanced stage (P < .001 for Ki-67 and P = .008 for p27), BES values (P < .001 for Ki-67 and P = .001 for p27), and recurrence rates (P < .001 for Ki-67 and P = .037 for p27). Conclusion: This is the first paper assessing cholesteatoma prognosis according to the differential Ki-67 and p27 staining properties of cholesteatoma and healthy skin tissues. The cellular proliferation rate in the cholesteatoma is important but insufficient by itself for predicting the prognosis of cholesteatoma patients. Patients having lower basal levels of cellular proliferation rate and higher cellular activity in the cholesteatoma tissue are prone to worse prognosis with increased stage, recurrence rates, and degree of bone erosion. INTRODUCTION Cholesteatoma is a progressive hyperplastic keratinized squamous epithelium of the temporal bone characterized by osteoclastic activity and bone resorption. It is composed of 3 compartments: a cystic part, a matrix, and a perimatrix. The central cystic part contains dead keratinocytes. It is surrounded by the matrix, and the most external compartment is the perimatrix. The active part of the cholesteatoma is the matrix, which harbors continuously proliferating undifferentiated keratinocytes. The perimatrix consists of fibroblasts and granulation tissue. As the central cystic part expands with continuous desquamation of the dead keratinocytes from the matrix, osteoclastic cascade reactions result in bone erosion. The bone erosion capability of the cholesteatoma can trigger extensive expansion and complications such as hearing loss, vestibular involvement, facial nerve paralysis, and even brain abscess and death. 1,2 To date, the only definitive treatment of the cholesteatoma is surgery. Despite developing technology and the widespread use of endoscopes, recurrence and recidivism after surgery still exist as a major problem, and various studies report rates of 0-70%. 3 The underlying pathogenesis and molecular mechanisms of cholesteatoma have not been fully understood. Inflammatory cytokines including interleukin (IL)-1, IL-6, and TNF-α, matrix metalloproteinases, an imbalance between keratinocyte proliferation and apoptosis, the Rho kinase pathway, genetic susceptibility, angiogenetic growth factors, platelet-derived growth factor, and chronic proceeding infections have all been reported to have a role in cholesteatoma development.
1,2,4,5 Nuclear antigen Ki-67 is a protein that is expressed in proliferating cells; thus, it is commonly used as a mitotic index for tumor grading. 3 Increased Ki-67 expression in the basal and spinous layers of the cholesteatoma was also reported as showing the high proliferation property of keratinocytes in the cholesteatoma. 6 A few studies 3,7-15 have also investigated the Ki-67 labeling index for predicting the prognostic features of cholesteatoma. However, there is a wide range of Ki-67 labeling indexes among the healthy skin tissues of cholesteatoma patients, ranging from 0.9% to 24%. 10,16 Moreover, a stronger expression of Ki-67 was also reported in the healthy skin tissue compared to the cholesteatoma tissue. 15 Nonetheless, all these studies 3,7-15 compared the Ki-67 labeling index of cholesteatoma tissues obtained from different patients without matching the results with the healthy skin tissue (control group) of the same patient when seeking to predict the role of Ki-67 in cholesteatoma prognosis. Cyclin-dependent kinases are also involved in the cell cycle and activate cellular proliferation, similar to Ki-67. p27 is a novel cyclin-dependent kinase inhibitor that acts as a tumor suppressor gene, arrests the cell cycle in phase G1, and stops cellular proliferation. 17,18 A limited number of studies have focused on the effect of p27 on cholesteatoma pathogenesis, with conflicting results. 16-18 Only one study 18 investigated the role of p27 in recurrence of the cholesteatoma, without matching the results with the skin tissue of the same patient. Additionally, to the best of the authors' knowledge, the role of p27 in the extensiveness and bone erosion degree of the cholesteatoma has not been evaluated yet. In this study, we investigated the effect of Ki-67 and p27 on the extensiveness (stage), recurrence rate, and bone erosion score of adult acquired cholesteatoma by immunohistochemically matching the staining status of the markers in the cholesteatoma and healthy skin tissues of the same patients. MATERIALS AND METHODS Local ethical committee approval was acquired for this prospective study. The power analysis of the study was performed according to previously published articles investigating the role of Ki-67 in cholesteatoma pathogenesis. The result of the power analysis was 12 individuals in each group. Forty-two adult (>18 years old) patients with acquired cholesteatoma were enrolled. The diagnosis of cholesteatoma was made by intraoperative and histopathological findings. All patients were operated on under general anesthesia with a canal wall down mastoidectomy technique. Patients having recurrent disease at first admission and patients who had been operated on with a canal wall-up technique were excluded. The cholesteatoma tissue and healthy meatal skin tissue (3 × 3 mm) away from the cholesteatoma were obtained from all patients during the surgery. Healthy skin tissue was obtained as the control group for each patient. Staging According to the intraoperative and computerized tomography (CT) findings, all patients were staged according to the 2017 EAONO/JOS cholesteatoma staging system. 19 Bone Erosion Score (BES) The erosion status of the middle ear ossicles, scutum, facial nerve canal, tegmen tympani, and otic capsule was noted. Each patient was scored with a bone erosion score ranging from 0 to 12. 7 Recurrence Rate All patients were followed up for a minimum of 5 years.
Patients were examined at the third-month, sixth-month, and first-year visits. Afterward, all patients were routinely examined at 6-month intervals for mastoid cavity control. Patients having perforation of the grafts, unexplained decrement in hearing, persistent otorrhea despite appropriate antibiotic usage, or diffusion restriction on non-echo-planar diffusion-weighted magnetic resonance imaging were re-operated. According to the intraoperative findings and post-surgical histopathological evaluation results, the recurrence status of the patients was noted. Immunohistochemistry Paraffin-embedded cholesteatoma and meatal skin tissues were fixed with 10% formalin. Slices of 5 µm thickness were prepared for immunohistochemical analysis. Deparaffinization of the slices was performed by xylene and alcohol solution washouts after overnight incubation at 37°C. Afterward, the slices were immunostained using a Ki-67 monoclonal antibody and a p27 antibody (Abcam® Ki-67 antibody, ab15580; Abcam® p27 antibody, ab193379). Comparison of the Staining Properties of the Tissues Firstly, the numbers of patients having positive and negative staining with Ki-67 and p27 in the cholesteatoma and skin tissues were compared among all patients. Secondly, only the cholesteatoma tissues of the patients were compared regarding the labeling status according to the subgroup of the patients (stage categories, recurrence rate status, and bone erosion scores). Thirdly, different from the previous studies, the differential staining properties of the patients were also considered by matching the cholesteatoma results with the results of the healthy skin tissues of the same patients. Statistical Analysis Statistical analysis was performed using SPSS Version 24.0 (IBM SPSS, New York, USA, 2016). Data were shown as mean ± standard deviation for continuous variables, and the number of cases was used for categorical variables. Data were controlled for normal distribution using the Shapiro-Wilk test. The chi-square test was employed for the comparison of the numbers of positive- and negative-staining tissues between patient groups. The z-test was used to compare the proportions of the numbers of patients according to the differential staining properties. In addition, the Mann-Whitney U-test was used to compare the mean BES of positive and negative staining in the patients' cholesteatoma tissues. A one-way analysis of variance test was used to compare the mean BES of the patients according to the differential staining status. A P value of < .05 was regarded as statistically significant. RESULTS Ki-67 Thirty-nine (92.8%) of the patients had Ki-67-positive staining in the cholesteatoma tissue, and 37 (88.1%) of the patients had Ki-67-positive staining in the meatal skin tissue. There was no statistically significant difference between cholesteatoma and skin tissues regarding the number of patients having Ki-67-positive staining (P = .323) (Figure 3). There was no statistically significant difference between recurrent and non-recurrent patients regarding the number of patients having Ki-67-positive staining in the cholesteatoma tissue. However, when the differential staining properties of the patients' tissues were further analyzed, there was a statistically significant difference regarding the number of patients according to the differential staining properties for Ki-67 (P < .001) (Table 1). The proportion of C(+)S(−) patients was significantly higher in the recurrent group (4/8) compared to the non-recurrent group (0/34) (z = 4.3347, P < .001).
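The two-proportion comparison above can be reproduced numerically from the reported counts (4 of 8 recurrent vs. 0 of 34 non-recurrent C(+)S(−) patients); a minimal sketch, assuming Python with statsmodels rather than the SPSS workflow the authors describe:

```python
from statsmodels.stats.proportion import proportions_ztest

# Differential Ki-67 staining, C(+)S(-): 4/8 recurrent vs. 0/34 non-recurrent
count = [4, 0]   # number of C(+)S(-) patients per group
nobs = [8, 34]   # group sizes (recurrent, non-recurrent)
z, p = proportions_ztest(count, nobs)
print(f"z = {z:.4f}, p = {p:.2e}")  # z ≈ 4.3347, matching the reported value
```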
There was no statistically significant difference among the stage groups regarding the number of patients having Ki-67-positive staining in the cholesteatoma tissue (P = .19). However, when the differential staining properties of the patients' tissues were additionally analyzed, there was a statistically significant difference regarding the number of patients according to the differential staining properties for Ki-67 (P < .001) (Table 2). p27 It was observed that there was no statistically significant difference between recurrent and non-recurrent patients regarding the number of patients with positive p27 staining in the cholesteatoma tissue (P = .657). However, when the differential staining properties of the patients' tissues were further analyzed, there was a statistically significant difference regarding the number of patients according to the differential staining properties for p27 (P < .001) (Table 3). There was no statistically significant difference among the stage groups regarding the number of patients having p27-positive staining in the cholesteatoma tissue (P = .108). However, when the differential staining properties of the tissues were analyzed, there was a statistically significant difference regarding the number of patients according to the differential staining properties for p27 (P < .001) (Table 4). DISCUSSION In this study, we found that the number of patients having negative p27 staining in the cholesteatoma tissue was higher than for the staining in the skin tissue; however, there was no significant difference with regard to the Ki-67 staining. The recurrence rates, stage, and BES values of the patients were not related to the Ki-67 staining status in the cholesteatoma tissues of the patients. However, differential expression of Ki-67 in the cholesteatoma tissue compared to the healthy skin tissue was related to a worse prognosis with increased recurrence rate, stage, and BES values. It was also observed that negative p27 staining in the cholesteatoma tissue was only related to elevated BES values; the stage and recurrence rates of the cholesteatoma patients were not related to the p27 staining status in the cholesteatoma tissue. On the other hand, differential non-expression of p27 in the cholesteatoma tissue compared to the skin tissue was related to a worse prognosis for cholesteatoma patients, such as increased recurrence rate, stage, and BES values. Proliferating undifferentiated keratinocytes in the matrix and activated fibroblasts in the perimatrix are the main cells in the cholesteatoma tissue that enhance progression. Uncontrolled proliferation of the keratinocytes initiates the vicious cycle: proliferating keratinocytes release cytokines that activate the perimatrix fibroblasts. Activated fibroblasts secrete epidermal growth factor and keratinocyte growth factor, which in turn activate the matrix keratinocytes in a vicious cycle. Osteoclastic molecules secreted from the proliferating keratinocytes and activated fibroblasts trigger the bone resorption and, in turn, the development of complications. 1,21 Thus, the cellular proliferation rate in the cholesteatoma tissue is an important factor for progression of the cholesteatoma. The Ki-67 nuclear antigen was demonstrated to be expressed in proliferating cells in the late G1, S, G2, and M phases of the cellular cycle. This protein has been widely used as a proliferation marker for tumors and proliferative diseases, including cholesteatoma.
8 Most studies 8-11,13,16,20,22 reported higher expression levels of the Ki-67 protein in the cholesteatoma tissue compared to the healthy skin tissue. However, Kuczkowski et al. 6 found an insignificant increase in Ki-67 expression in the cholesteatoma, and Kim et al. 15 reported a stronger Ki-67 expression in the healthy skin tissue than in the cholesteatoma tissue. The cyclin-dependent kinase inhibitor p27 blocks cyclin D-, E-, A-, and B-dependent kinases. Thus, a decrease in the levels of p27 is related to increased cyclin-dependent kinase levels and enables ongoing cellular proliferation. 16 Lower expression levels of p27 in the cholesteatoma tissue compared to the healthy skin tissue were reported as evidence of the higher cellular activity of the cholesteatoma. 17,18 On the other hand, Chae et al. 16 demonstrated an increased expression level of p27 in the cholesteatoma tissue. Regarding the prognostic value of p27 expression in the cholesteatoma tissue, Kuczkowski et al. 18 reported a lower expression of p27 in recurrent cases of cholesteatoma than in primary acquired ones, without matching the cholesteatoma-skin values. Additionally, the effect of p27-dependent cellular proliferation on other prognostic factors of cholesteatoma, such as extensiveness (stage) and bone erosion levels, has not been evaluated yet. According to our study results, the number of patients having negative p27 staining was greater in the cholesteatoma tissue compared to the skin tissue, demonstrating the increased cellular proliferation of the cholesteatoma. Moreover, differential non-expression of p27 in the cholesteatoma tissue compared to the skin tissue (C(−)S(+)) was a prognostic factor for cholesteatoma with increased stage, recurrence rate, and BES values. Combining the results of p27 with Ki-67, we can say that every cholesteatoma patient has a basal cellular proliferation activity rate in the meatal skin. The cellular proliferation rate in the cholesteatoma is important but not sufficient by itself for predicting the prognosis of cholesteatoma patients. Patients having lower basal levels of cellular proliferation rate and higher cellular activity in the cholesteatoma tissue are prone to worse prognosis with increased stage, recurrence rates, and bone erosion degrees. The limitation of this study was the limited patient number. Future studies with a higher number of patients should also investigate the role of Ki-67 and p27 in cholesteatoma prognosis by comparing each patient's cholesteatoma results with the healthy skin tissue results. However, we think that our results are meaningful and can guide future studies, given our long follow-up period (minimum 60 months) for the detection of cholesteatoma recurrence. In conclusion, the differential expression of Ki-67 and non-expression of p27 in the cholesteatoma tissue compared to the healthy skin tissue were associated with worse prognosis, including increased cholesteatoma stage, recurrence rate, and bone erosion degree for the cholesteatoma patients. With upcoming studies, Ki-67- and p27-targeted topical treatment options may be developed to prevent the growth and expansion of the cholesteatoma. For future studies that will evaluate the effect of cellular proliferation on cholesteatoma progression, it would be more logical to analyze the basal levels of cellular proliferation rates for each patient and to match the results of cholesteatoma tissues with the healthy skin tissues for predicting the prognosis. Ethical Committee Approval: Local ethical committee approval was acquired for the current study.
Informed Consent: Written informed consent was obtained from all patients who participated in the study.
Blood Metabolomics Analysis Identifies Differential Serum Metabolites in Elite and Sub-elite Swimmers Objective: Metabolites in body fluids, such as lactate, glucose, and creatinine, have been measured by conventional methods to evaluate physical function and performance or athletic status. The objectives of the current study were to explore novel metabolite biomarkers in professional swimmers with different competition levels using nuclear magnetic resonance (NMR) metabolomics, and to try to establish a model to identify athletic status or predict competitive potential. Methods: Serum samples were collected from 103 elite and 84 sub-elite level Chinese professional swimmers, and were profiled by NMR analysis. Results: Out of the thirty-six serum metabolites profiled, ten were associated with the athletic status of swimmers (with p < 0.05). When compared with sub-elite swimmers, elite swimmers had higher levels of high-density lipoprotein (HDL), unsaturated fatty acid, lactic acid, and methanol. Elite swimmers had lower levels of isoleucine, 3-hydroxybutyric acid, acetoacetate, glutamine, glycine, and α-glucose. A model with four metabolites, including HDL, glutamine, methanol, and α-glucose, was established to predict athletic status by adjusting with different covariates. The area under the curve (AUC) of the best model was 0.904 (95% CI: 0.862-0.947), with a sensitivity and specificity of 75.5 and 90.2%, respectively. Conclusion: We have identified ten metabolite biomarkers with differentially expressed levels between elite and sub-elite swimmers; the differences could result from genetics or sports level between the two cohorts. A model with four metabolites has successfully differentiated professional swimmers with different competitive levels. INTRODUCTION For competitive sports, identification of competition status or predicting the development trend of competition level has been a topic discussed in the fields of sports training, monitoring, and talent identification (Johnston et al., 2018). In some studies, certain metabolites in blood and urine, such as lactate, glucose, and creatinine, have been measured by conventional methods to evaluate physical function and performance or athletic status (Papassotiriou and Nifli, 2018). Other metabolites have not been detected due to their low concentration in the blood or urine, although they could still be important in the biological reactions of exercise processes (Alosco et al., 2020). More sensitive methods are needed to detect those lowly expressed metabolites, so that their potentially important information will not be omitted. Metabolomics is an important part of systems biology, apart from genomics, transcriptomics, and proteomics (Nicholson and Lindon, 2008; Baharum and Azizan, 2018). Nuclear magnetic resonance (NMR) is a commonly used analytical method in metabolomics (Markley et al., 2017; Ragavan and Merritt, 2019). The NMR method has a few advantages, such as simple sample pretreatment and the capacity to analyze several biological fluids including blood, urine, and saliva (Emwas, 2015; Bingol, 2018). Because of these advantages, NMR has been widely used in sports and exercise science (Sun et al., 2017; Heaney et al., 2019; Pitti et al., 2019). Metabolomics analyses have been employed to monitor the metabolic profile of elite athletes.
A pilot metabolomics analysis compared the metabolic profile between high- and moderate-endurance and power elite athletes, and reported that high-endurance and high-power athletes present a different metabolic profile, which includes metabolites related to energy production, fatty acid metabolism, oxidative stress, and steroid biosynthesis (Al-Khelaifi et al., 2018). Another study investigated the metabolic fluctuations in saliva samples of professional basketball players during a game, and showed that quarters 1 and 3 had similar saliva metabolic profiles, while quarters 2 and 4 also demonstrated similar saliva metabolic profiles, but metabolic profiles after quarters 1 and 3 were different from those after quarters 2 and 4 (Khoramipour et al., 2021). The metabolic data also suggested that the first and third quarters relied more on anaerobic energy contribution, whereas the second and fourth quarters utilized more aerobic energy (Khoramipour et al., 2021). These studies suggested that metabolic profiles can be altered after both acute exercise and chronic training. Therefore, metabolomics analyses can be utilized to identify the athletic ability, training level, and state of athletes in a certain event. Swimming is a sporting event requiring great physical fitness (Burkett et al., 2018). The growth cycle of elite swimmers is very long; therefore, even small differences at a certain stage could affect athletes' capacity, which might result in them becoming either elite athletes or sub-elite athletes (Haugen et al., 2019). Metabolites in body fluids have been used to monitor training effects in swimmers. A study measured the urine metabolites before and after a swimming training session of elite swimmers with metabolomics analyses and reported peaks of ketone bodies, creatine, phosphate, and nitrogenous compounds after a 150 min training session (Khoramipour et al., 2021). Moreover, the metabolites of elite swimmers prior to the training session were different from those of controls (Khoramipour et al., 2021). The authors suggested that the peaks of metabolites in urine can be used to evaluate and adjust the physical training of elite swimmers (Moreira et al., 2018). However, it is still not clear whether there is any difference in blood metabolomics characteristics between swimmers of different athletic statuses or levels, and, if present, whether we can establish a prediction model for athletic status or level based on the differential metabolites. In this current study, we recruited swimmers of different athletic statuses as the research participants and investigated the characteristics of their blood metabolomics with the NMR method. With this cross-sectional study, we performed an untargeted metabolic analysis to determine the serum metabolites associated with athletic status in swimmers. Furthermore, we aimed to explore specific metabolites that could serve as biomarkers to identify athletic status and evaluate athletes' potential to achieve an elite level. By establishing a model with serum metabolites, coaches and researchers might be able to better assess the athletic status and competitive level of swimmers. Ethics Approval This study was conducted according to the Declaration of Helsinki and approved by the Ethics Committee at the School of Life Sciences, Fudan University, China. Written informed consent was obtained from all participants. Study Design and Participants All participants (swimmers) were in their post-competition recovery period.
Two weeks before the blood sample collection, all swimmers adopted a training program with similar exercise volume and intensity (30 min of land exercises before swimming, including 15 min of stretching exercises and 15 min of relaxation exercises; a swimming session lasting about 80-90 min, with 4000 m of swimming at around 60% of the maximum intensity; and 15 min of relaxation exercises after the swimming session). The daily diet followed a unified recipe (a unified diet menu for athletes from Monday to Sunday at the training base), supervised by the coach in charge. Athletes who took medicine during the two weeks, did not follow the training program, or did not follow the diet were excluded from the study. After two weeks, all qualified swimmers (n = 187) were categorized into two groups (elite group and sub-elite group) according to their officially certified level of sports competition. A total of 103 international- and national-level swimmers from the Shanghai and Zhejiang professional swimming teams formed the elite group. Athletes in the elite group have participated in international or national swimming competitions. There were 53 male athletes (height = 184.7 ± 5.2 cm, body mass = 78.7 ± 9.3 kg, age range: 18-29 years, training years: more than 10 years) and 50 female athletes (height = 171.8 ± 5.0 cm, body mass = 62.2 ± 6.3 kg, age range: 16-27 years, training years: more than 8 years). Eighty-four first- and second-grade swimmers from the Shanghai professional swimming team, Shanghai University of Sport, Shanghai Jiao Tong University, and Tongji University formed the sub-elite group. Athletes in the sub-elite group have participated in provincial or university swimming competitions. There were 52 male athletes (height = 180.1 ± 6.3 cm, body mass = 77.1 ± 10.1 kg, age range: 17-23 years, training years: more than 9 years) and 32 female athletes (height = 168.7 ± 4.9 cm, body mass = 59.8 ± 8.6 kg, age range: 16-22 years, training years: more than 8 years). Blood Sample Collection and Metabolomics Analysis Blood samples (5 ml) were collected in tubes without anticoagulant from swimmers in the morning after overnight fasting. The samples were kept at room temperature for 30 min and then centrifuged at 4°C at 4000 rpm for 15 min (Centrifuge 5702R, Eppendorf AG, Hamburg, Germany). The serum samples were aliquoted into freezing tubes (Corning 430659, 2 ml, United States), frozen immediately in liquid nitrogen, and stored at -80°C for around 1 week without a second freeze-thaw cycle before testing. On the day of NMR analysis, a 170 μL serum sample was mixed with 340 μL of PBS (phosphate-buffered saline) in a 5 mm NMR tube and used directly for 1H NMR detection. All NMR spectra were acquired at 298 K on a Bruker AVIII 600 MHz NMR spectrometer (600.13 MHz proton frequency), equipped with a cryogenic probe (Bruker Biospin, Germany). For the analysis of serum samples, we used the Carr-Purcell-Meiboom-Gill (CPMG) pulse sequence (RD-90°-(τ-180°-τ)n-ACQ), where τ = 350 μs and n = 100. A total of 32 transients for all samples were collected into 32K data points over a spectral width of 20 ppm, with the 90° pulse length adjusted to 11.3 μs. The free induction decays were multiplied by an exponential window function with a line-broadening factor of 1 Hz prior to Fourier transformation. Each spectrum was corrected for phase and baseline distortion manually using TopSpin 2.1 (Bruker Biospin), with chemical shifts referenced to α-glucose at δ 5.237.
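To illustrate the spectral processing just described (an exponential window with a 1 Hz line-broadening factor applied to the free induction decay, followed by Fourier transformation), here is a minimal NumPy sketch; the FID below is a synthetic placeholder, not study data:

```python
import numpy as np

SF = 600.13                # spectrometer frequency, MHz
sw_hz = 20 * SF            # 20 ppm spectral width expressed in Hz
n = 32 * 1024              # 32K data points
t = np.arange(n) / sw_hz   # acquisition time axis, s

rng = np.random.default_rng(0)
fid = rng.standard_normal(n) + 1j * rng.standard_normal(n)  # placeholder FID

lb = 1.0                                  # line-broadening factor, Hz
apodized = fid * np.exp(-np.pi * lb * t)  # exponential window function
spectrum = np.fft.fftshift(np.fft.fft(apodized))
```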
The spectral region (δ 0.4-8.6) was integrated into bins with a width of 0.002 ppm using the AMIX package (v3.9.2, Bruker Biospin). Some noise signals, such as the water signal (δ 4.200-5.152), were removed. The areas of all bins were then normalized to the volume. The normalized data were used for multivariate analysis, and the model was constructed using orthogonal projection to latent structures-discriminant analysis (O-PLS-DA) with unit variance (UV) scaling and validated with a 7-fold cross-validation method using the soft independent modeling of class analogy software SIMCA-P+ (v12.0, Umetrics, Sweden). The parameter R²Y is an indication of the Y variables being explained by the model, and Q² represents the predictability of the model. The significance of the model was also validated by CV-ANOVA (p < 0.05). To assist the biological interpretation of the loadings generated from the model, the loadings were first back-transformed and then plotted with color-coded O-PLS-DA coefficients in MATLAB 7.1. The color code corresponds to the absolute value of the O-PLS-DA correlation coefficients |r|, which indicates the contribution of the corresponding variable to the group separation. Assessment of Covariates The gender, birth date, and years of professional training of all participants were obtained with a questionnaire. Body mass index (BMI) was calculated as the weight in kilograms divided by the square of the height in meters. Body fat percentage was measured with an InBody720 (InBody Co., Ltd., Seoul, South Korea). Physical performance and function covariates were measured with standardized test methods (Yang and Shen, 2019), including grip, back strength, standing long jump (SLJ), standing vertical jump (SVJ), abdominal curl, vital capacity, sit-and-reach, acoustic reaction time, and resting heart rate, which were closely related to the physical performance of swimmers. Health covariates were tested with an automatic biochemical instrument (Access2, Beckman Inc., United States), including hemoglobin, erythropoietin (EPO), and myoglobin (MYO). Statistical Analysis Continuous variables of baseline characteristics and physical performance were presented as mean (standard deviation, SD) or median (with interquartile range, IQR), and categorical variables were expressed as frequencies (%) when appropriate. The Student's t-test or Mann-Whitney U test was used to compare the continuous variables, and Pearson's χ² test was used for comparisons of categorical variables. The Student's t-test was used to compare the serum metabolites between elite and sub-elite swimmers, and to determine the significantly different metabolites. Using the method of LASSO regression, the obtained significantly different metabolites were screened to be further reduced. Based on the results of the LASSO regression, logistic regression was performed using the R package "glmnet" for dimensionality reduction to select metabolomic markers (Friedman et al., 2010). Correlation analysis was performed between the significant metabolites and the baseline characteristics. Based on the selected metabolites and different covariates, three models were established using multivariate logistic regression: model 1 was unadjusted by any covariate, model 2 was adjusted by baseline covariates, and model 3 was adjusted by baseline and physical performance covariates. The receiver-operating characteristic (ROC) curve was analyzed for every model, with the area under the curve (AUC) calculated to evaluate its effect for identification and prediction of athletic status.
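A minimal sketch of the LASSO-style screening step described above, assuming Python with scikit-learn in place of the R glmnet workflow the authors used; the arrays X and y are random placeholders with the study's dimensions (187 swimmers, ten candidate metabolites), so the retained indices are illustrative only:

```python
import numpy as np
from sklearn.linear_model import LogisticRegressionCV

rng = np.random.default_rng(0)
X = rng.standard_normal((187, 10))  # placeholder metabolite matrix
y = rng.integers(0, 2, 187)         # placeholder status: 1 = elite, 0 = sub-elite

# L1-penalized (LASSO-type) logistic regression with cross-validated
# regularization strength, approximating the glmnet screening step
lasso = LogisticRegressionCV(Cs=20, cv=10, penalty="l1",
                             solver="liblinear", scoring="roc_auc")
lasso.fit(X, y)
selected = np.flatnonzero(lasso.coef_.ravel() != 0)
print("retained metabolite indices:", selected)
```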
To avoid biased estimation, average values of AUC were generated from 10-fold cross-validation (the dataset was randomly divided into ten parts, using nine of them in turn as the training set and one as the test set) in the ROC analysis (Cui et al., 2020). The optimal combination of specificity and sensitivity was determined by the Youden index method (Youden, 1950). All the above analyses were performed with IBM SPSS 24.0 for Windows and RStudio (R core 3.5.3), and differences were considered statistically significant when p < 0.05. Baseline Characteristics and Covariates of Swimmers The baseline characteristics, physical performance, and health indicators of all participants by athletic status are summarized in Table 1. Their average age (SD) was 19.3 (2.7) years, and 56.1% of them were men. Interestingly, sub-elite level swimmers had more training years than elite level swimmers (p < 0.001). Elite level swimmers had higher values in the physical performance covariates of abdominal curl/min and sit-and-reach (p < 0.001), and lower values in BMI, body fat percentage, vital capacity, and resting heart rate (with p < 0.05, p < 0.01, and p < 0.001, respectively) than sub-elite level swimmers. Other baseline characteristics, physical performance, and health covariates were not significantly different between the two groups (p > 0.05). Analysis of NMR Spectra and Significant Metabolites Typical 1H NMR spectra of serum samples were obtained from elite level and sub-elite level swimmers (Supplementary Figure S1). Resonance peaks were assigned to specific metabolites based on published data and 2D NMR spectra, with further confirmation using the public databases HMDB (human metabolome database) and BMRB (biological magnetic resonance bank) (Jiang et al., 2012; Song et al., 2015). Thirty-six metabolites were assigned, involving multiple metabolic pathways such as carbohydrates, amino acids, and nucleotides (Supplementary Table S1). Metabolite Selection and Establishment of Discrimination Models Among the 14 metabolites, ten were differentially expressed between the elite and sub-elite level swimmers. The elite-level swimmers showed significantly higher levels of high-density lipoprotein (HDL), lactate (Lac), methanol, and UFA, but lower concentrations of isoleucine (Ileu), 3-hydroxybutyric acid (3-HB), acetoacetate, glutamine (Gln), glycine (Gly), and α-glucose (α-Glc) (p < 0.05) (Table 2; Figure 2). These ten significantly different metabolites by athletic status were then analyzed by LASSO regression to screen and select candidate metabolomic biomarkers, which would be used to identify and predict the athletic status of swimmers. After the LASSO regression analysis, eight metabolites, including HDL, Lac, acetone, Gln, methanol, Gly, α-Glc, and UFA, were selected for subsequent modeling analysis (Supplementary Figure S4). After log2 transformation, the eight selected metabolites were taken into a multivariate logistic regression to establish models. Out of the eight metabolites, four showed significance after the multivariate logistic regression analysis. Correlation analysis was conducted between the four significant metabolites (HDL, Gln, methanol, and α-Glc) and baseline characteristics. The four significant metabolites were significantly correlated with a small number of baseline characteristics (p < 0.05), but the correlation was low (Table 3). Three models were generated, including the four metabolites unadjusted or adjusted for different covariates.
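The cross-validated ROC evaluation and Youden-index cut-off selection described above can be sketched as follows, again in Python with scikit-learn standing in for the authors' SPSS/R pipeline; the data arrays are placeholders (187 swimmers, the four model metabolites):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_predict
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(0)
X = rng.standard_normal((187, 4))  # placeholder: the four model metabolites
y = rng.integers(0, 2, 187)        # placeholder status labels

# Out-of-fold predicted probabilities from 10-fold cross-validation
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
proba = cross_val_predict(LogisticRegression(max_iter=1000), X, y,
                          cv=cv, method="predict_proba")[:, 1]

auc = roc_auc_score(y, proba)
fpr, tpr, thresholds = roc_curve(y, proba)
best = np.argmax(tpr - fpr)  # Youden's J = sensitivity + specificity - 1
print(f"AUC = {auc:.3f}, sensitivity = {tpr[best]:.3f}, "
      f"specificity = {1 - fpr[best]:.3f}")
```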
Model 1 included the four metabolites without any covariates; model 2 was adjusted for baseline characteristics (gender, age, years of professional training, BMI, and body fat percentage) based on model 1; and model 3 was further adjusted for physical performance (SLJ, abdominal curl, vital capacity, and sit-and-reach) based on model 2. In these models, the four metabolites were all independent influencing factors on athletic status in swimmers (p < 0.05) (Table 4). The unadjusted model 1 identified or predicted the athletic status of swimmers reasonably well, holding an AUC of 0.835 (95% CI: 0.776-0.894) with internal validation. The ROC curve analysis showed that the AUC increased significantly to 0.882 (95% CI: 0.835-0.929) and 0.904 (95% CI: 0.862-0.947) when baseline characteristics (age and years of professional training) were included and when baseline characteristics plus physical performance (age, years of professional training, abdominal curl, and sit-and-reach) were included, respectively (Figure 3). According to the AUC values of the ROC curve analysis, model 3 had the best identification or prediction ability of the three models (AUC > 0.9), with optimal sensitivity and specificity of 75.5 and 90.2%, respectively. Of note, when only age and years of training were included in the ROC analysis, the AUC was 0.769; when all the covariates were included, the AUC was 0.815. Both AUC values were less than those with metabolites, suggesting the models with metabolites were better. DISCUSSION In this current study, using the high-throughput 1H NMR method, we conducted a broad search for serum biomarkers of professional swimmers' athletic status. We detected 36 serum metabolites with the NMR platform, most of them being amino acids. Ten of the metabolites were significantly different between elite and sub-elite swimmers, with four higher and the other six lower in the elite swimmers. After the LASSO and logistic regression analysis, four serum metabolites were identified as significantly associated with the athletic status of elite swimmers. Furthermore, our study showed that the model of four metabolites adjusted for baseline characteristics and physical performance indicators could identify or predict the athletic status of swimmers reasonably well. Metabolomics is currently widely used in many disciplines, due to its systematic, comprehensive, and high-throughput advantages (Youden, 1950; Jiang et al., 2012; Song et al., 2015; Sket et al., 2020). For example, metabolomics can be used to find trace changes in biological samples such as blood and urine, which are difficult to achieve with traditional detection and analysis techniques (Kingsbury et al., 1998). In the field of sports science, metabolomics has been demonstrated to be a very promising research and analysis tool, which can not only obtain comprehensive information on athletes' metabolites at baseline or after training (Heaney et al., 2019), but also systematically monitor the physiological state of athletes (Lavoie et al., 1983; Richard et al., 2008; Brites et al., 2017). In recent years, sports researchers have been using the methods of metabolomics to study the blood or urine metabolome characteristics of swimmers in various physiological states (Knab et al., 2013; Couto et al., 2017; Al-Khelaifi et al., 2018; Pla et al., 2021).
Among these studies, some scholars have studied the changes in the metabolic profile of elite athletes (including swimmers) in different events (Al-Khelaifi et al., 2018), some have studied the metabolic response of high-level swimmers under specific intensity training programs (Pla et al., 2021), and others have studied the effects of supplementing different fresh fruit juices on chronic resting and post-exercise inflammation, oxidative stress, immune function, and metabolic characteristics (Knab et al., 2013). These various studies show that metabolomics techniques and methods have been applied to a wide range of scenarios involving swimmers. In this study, we found different levels of metabolites between swimmers of different athletic statuses, which illustrated that there were differences in blood metabolomics characteristics between athletic statuses. Three of the six metabolites lower in the elite swimmers were amino acids, namely isoleucine, glutamine, and glycine. It has been previously reported that there were contrasting plasma free amino acid patterns in elite athletes, depending on the training and fatigue status (Kingsbury et al., 1998). The higher levels of HDL and unsaturated fatty acids but lower levels of α-glucose suggested that elite swimmers probably had different substrate utilization when compared with sub-elite swimmers. An early study reported an increase in lipid utilization in elite swimmers during a training session (Lavoie et al., 1983). Notably, both HDL and unsaturated fatty acids have been implicated in antioxidative effects (Richard et al., 2008; Brites et al., 2017). It has been previously reported that HDL was associated with physical activity and athletic sports (Kraus et al., 2002; Valimaki et al., 2016; Fikenzer et al., 2018). It is generally believed that the HDL concentration of athletes with a certain training level is significantly higher than that of the general population. For instance, Lee et al. (2009) found that the blood HDL concentration of athletes with a certain training level was higher than that of the general population, independent of the type of sport they were engaged in. Other researchers reported that athletes engaged in different sport disciplines showed differences in blood HDL concentration, possibly due to the different metabolic characteristics associated with aerobic and anaerobic exercise (Chou et al., 2005; Lee et al., 2009). The above reports indicated that the concentration of HDL might be related to whether or not exercise was performed, as well as exercise style, duration, and intensity. The HDL result of our study is consistent with previous research results. Glutamine makes up a large share of the free amino acids in muscle, accounting for about 60% of the total free amino acids in the human body. Glutamine can be synthesized from glutamic acid, valine, and isoleucine. Sports practice suggests that the level of glutamine in the body can drop sharply after high-intensity strength training. If glutamine is not restored from the diet, the body will decompose muscle protein to meet its demand for glutamine. This phenomenon will not only affect muscle volume, but also lead to a reduction of the body's immunity (Armstrong et al., 2014). The difference in blood glutamine levels between the two athletic statuses may be related to the long-term training effects on isoleucine and valine, as we detected lower levels of those in elite swimmers as well.

FIGURE 3 | ROC analyses for identification or prediction of athletic status. The blue curve represents the ROC curve with the four metabolites HDL, Gln, methanol, and α-Glc; the green curve represents the ROC curve including the four metabolites and baseline covariates (age and years of professional training); the red curve represents the ROC curve including the four metabolites, baseline covariates (age and years of professional training), and physical performance indicators (abdominal curl and sit-and-reach); the purple curve represents the ROC curve including only baseline covariates (age and years of professional training) without metabolites; the yellow curve represents the ROC curve including only baseline covariates (age and years of professional training) and physical performance indicators (abdominal curl and sit-and-reach) without metabolites.

Additionally, an interesting but important finding was that the level of blood methanol in elite swimmers was significantly higher than that in sub-elite swimmers. Methanol is a colorless, transparent, flammable, and volatile toxic substance. Acute methanol intoxication can damage brain function and the optic nerve by inducing neuroinflammation (Zakharov et al., 2017; Zakharov et al., 2018). Yet methanol occurs naturally in normal healthy individuals; it can come from the diet, such as alcoholic beverages, fruits, and vegetables, or from fermentation by gut bacteria and metabolic processes of S-adenosyl methionine (Dorokhov et al., 2015). Since the diet of the athletes was standardized in the current study, we speculate that the differential level of methanol is likely due to differences in metabolically produced methanol. However, there is still no report on the effect of physiological concentrations of methanol on sports ability. Some studies have shown that methanol extracts obtained from some special plants, such as the leaves of Eugenia species, shade-dried plants, and Syzygium calophyllifolium bark, have antioxidant, anti-inflammatory, anti-hypertensive, and anti-lipidemic roles, reduce blood glucose levels (Chandran et al., 2016; Aluko et al., 2019; Goldoni et al., 2019), and can even boost androgen levels (Kamran et al., 2018). Based on the aforementioned evidence, we speculate that the higher concentration of blood methanol may help swimmers improve their athletic performance. In the future, it would be interesting to explore the mechanism behind the higher blood content of methanol and its effects on sports capacity. We also found that the level of α-glucose was lower in elite swimmers than in sub-elite swimmers. α-Glucose is an isomer of D-glucose, which acts as a diuretic and detoxifier. It is generally believed that the baseline blood glucose concentration of professional athletes is lower than that of the general population (Lippi et al., 2008), which could be due to long-term high-level professional training. In addition to the four serum metabolites finally included in the model, some other metabolites differed between the elite and the sub-elite level swimmers, such as the branched-chain amino acids (BCAAs) isoleucine and valine, which is also an interesting phenomenon. In our study, it was found that the concentrations of isoleucine and valine (only in male swimmers) were significantly lower in elite-level swimmers. A similar result was reported in a recent study on elite cyclists.
In cyclists with higher exercise ability, the blood concentration of isoleucine after a graded exercise test to exhaustion was significantly lower than that of athletes with lower exercise ability, although there was no significant difference in the baseline test (San-Millán et al., 2020). This result may be due to the accelerated decomposition of branched-chain amino acids in the blood caused by acute exercise, as they are converted into acetyl-CoA and enter the TCA cycle to participate in energy supply. The difference in exercise ability thus also leads to changes in the metabolic profiles of branched-chain amino acids. This was different from the results of our study: the swimmers in our study were in the basal state rather than the state after acute exercise. The reason why the baseline of the cyclists had not changed may be that they were all high-level athletes, and the differences in ability were not obvious. In our study, there were still great differences in the sports ability and level between elite and sub-elite swimmers. Conversely, a change in BCAA metabolic profiles can also cause a change in exercise training adaptability, which has been verified in animal experiments. Xu et al. (2017) reported that after knocking out the gene of the enzyme that inhibits the decomposition of BCAA, mice showed higher adaptability to endurance training, and the BCAA concentration after training was also lower than that of normal mice, indicating that BCAA participated in the adaptation to endurance training. Other studies have also verified that BCAA supplementation can improve the adaptability of endurance training and have positive benefits for endurance training (Kim et al., 2013; Gervasi et al., 2020). Another interesting phenomenon in our study was that the differences in sports level and serum metabolome also showed gender characteristics. In some studies on sports ability and differential metabolites, the participants involved were generally of one gender (Kim et al., 2013; San-Millán et al., 2020; Margolis et al., 2021) or a mix of men and women (Gervasi et al., 2020); however, few studies focused on gender characteristics. There was a study on the changes in fat-free mass and plasma amino acids of male and female recruits after military training (Margolis et al., 2021). The study found that BCAA increased in women but did not change in men. The authors concluded that this result may be related to differences in dietary intake, fat-free mass ratio, and energy balance between men and women. The background of this study was still different from our work, and further research can be carried out in follow-up studies. The underlying mechanisms for the different levels of metabolites among athletes at different competition levels remain largely unknown; we speculate that they could be due to differences in genetics and training regime. It is well known that genetic differences contribute to athletic capacity (Pitsiladis et al., 2016; Voisin et al., 2016; Yang et al., 2017). Of the genetic variants, ACTN3 R577X and ACE I/D are two well-studied polymorphisms (Levinger et al., 2017; Yan et al., 2018). Notably, genetic variants have been reported to influence the metabolic traits of elite athletes (Banting et al., 2015; Al-Khelaifi et al., 2019).
More specifically, a recent genome-wide association study (GWAS) of 490 elite athletes, combined with high-resolution metabolomics profiling, reported 145 significant single nucleotide polymorphism (SNP)-metabolite associations (Al-Khelaifi et al., 2019). Moreover, four significant associations between SNPs and metabolites were only identified in elite endurance athletes (Al-Khelaifi et al., 2019). On the other hand, the training regime is known to be different among athletes at different competition levels, and has been shown to influence the levels of metabolites (Yan et al., 2009). Of note, the metabolomics results of this study were obtained via the NMR method, which has a lower sensitivity compared to mass spectrometry methods (Emwas et al., 2019). It would be beneficial to validate our results with the use of additional methods, such as LC-MS and GC-MS. With the development of high-throughput detection and bio-omics technology, there are many efficient and accurate detection methods for the evaluation of athletes' physical function and sports state, which enriches the original evaluation system (Gomes et al., 2020; Nieman, 2021; Sellami et al., 2022), such as the evaluation of physical performance with SNPs (Yang et al., 2021a). With the popularization of detection instruments and the development of detection technology, metabolomics has greatly reduced costs and improved accuracy. It can detect more trace metabolites and find significantly changed trace metabolites whose changes are difficult to detect by traditional methods (Heaney et al., 2019; Schranner et al., 2020). In this study, the AUC value of the evaluation model established using the changes in trace metabolites of human serum and athletes' physical characteristics was greatly improved, reaching more than 0.9, which has obvious advantages compared with the 0.7 level of evaluation models based on other methods (Yang et al., 2021a, 2021b). Besides, there are still some limitations in this study. Due to the scarcity of elite athletes, the sample size is relatively small, so men and women were not analyzed separately. Regarding the validation of the model, due to the sample size, only internal cross-validation could be carried out, not external data set validation, which is a limitation on reliability and applicability. Later, we can try to solve these problems by accumulating and sharing samples of elite swimmers. Another limitation of this study is the single, static sampling time point. In the future, it would be interesting to sample before and after a swimming exercise session, or across a two-week training program. We may conduct further research on the existing basis. In conclusion, our study highlighted the potential of serum metabolomics to discover metabolite biomarkers for the athletic status of professional swimmers. Ten serum metabolites were associated with athletic status in Chinese professional swimmers. The different levels of metabolites among athletes at different competition levels could be due to differences in genetics and training regime. A four-metabolite model, after being adjusted by covariates, could identify or predict swimmers' athletic status reasonably well. Using this model with metabolite biomarkers, coaches and researchers could evaluate the competitive level at present and predict the potential of swimmers to develop to the elite level. DATA AVAILABILITY STATEMENT The original contributions presented in the study are included in the article/Supplementary Materials; further inquiries can be directed to the corresponding authors.
ETHICS STATEMENT The studies involving human participants were reviewed and approved by the Ethics Committee at the School of Life Sciences, Fudan University, China. Written informed consent to participate in this study was provided by the participants' legal guardian/next of kin. AUTHOR CONTRIBUTIONS MC, CW, and CJ: writing the manuscript, data analysis and interpretation; XS: data analysis and interpretation, reviewing a draft of the manuscript; MH, LW, QG, and YY: manuscript revision; XY and RY: research concept and study design, reviewing a draft of the manuscript.
Sponges and Sponge-Like Materials in Sample Preparation: A Journey from Past to Present and into the Future Even though instrumental advancements are constantly being made in analytical chemistry, sample preparation is still considered the bottleneck of analytical methods. To this end, researchers are developing new sorbent materials to improve and replace existing ones, with the ultimate goal to improve current methods and make them more efficient and effective. A few years ago, an alternative trend was started toward sample preparation: the use of sponge or sponge-like materials. These materials possess favorable characteristics, such as negligible weight, open-hole structure, high surface area, and variable surface chemistry. Although their use seemed promising, this trend soon reversed, due to either the increasing use of nanomaterials in sample preparation or the limited scope of the first materials. Currently, with the development of new materials, such as melamine sponges, along with the advancement in nanotechnology, this topic was revived, and various functionalizations were carried out on such materials. The new materials are used as sorbents in sample preparation in analytical chemistry. This review explores the development of such materials, from the past to the present and into the future, as well as their use in analytical chemistry. Introduction Many scientists consider sample preparation an integral part of the analytical process, which enhances the quality of the obtained results. Others consider sample preparation the bottleneck of analytical chemistry, since labor-intensive steps are required, thus limiting productivity [1,2]. Nevertheless, more and more emphasis is placed on sample preparation, trying to face the challenges of various matrices in analysis. Since the basic principles of sample preparation remain the same, researchers are developing new sorbent materials to improve and replace existing ones, with the ultimate goal to improve current methods and make them more efficient and effective [3]. To this end, the use of sponge or sponge-like materials (e.g., foams) in sample preparation began back in the 1990s [4-6]. These materials possess favorable characteristics, such as negligible weight, open-hole structure, high surface area, and variable surface chemistry [7]. As a result, a new trend was started and many reports were published, with the peak reached in the years 1997-2003 [7]. The key merit of these materials that increased their popularity and started the trend was their ability to be compressed into mini-columns so that they could be used under the solid-phase extraction (SPE) principle. Although their use seemed promising, this trend was soon reversed and the number of reports declined significantly. The reasons behind the reversal of the trend are not clear. One possible explanation is the fact that the use of nanomaterials in sample preparation began around the 2000s and, since then, it has skyrocketed, as can be seen in Figure 1. The trend, as detailed below, started with the use of polyurethane foams (PUFs). By definition, PUFs are plastic materials in which gas (in the form of numerous small bubbles (cells)) replaces a proportion of the solid phase [8]. Depending on the percentage of the volume the bubbles occupy, the geometrical shape of the bubbles varies from spherical to quasi-spherical polyhedral, thus changing the properties (such as elasticity) of the PUFs.
Another important parameter that affects their properties is the synthesis method. PUFs can be fabricated via the reaction either of isocyanates with hydroxyl compounds (resulting in polyester or polyether PUFs) or of isocyanates with water [7]. Either way, the chemical composition of PUFs differs in terms of polar and non-polar groups, thus making them suitable for the sorption of compounds with different properties [9]. While PUFs are the most widely produced foam materials, advances are currently being made and materials with similar structures and physical properties are being fabricated. Melamine sponges (MeS) and carbon foams are two typical examples. Melamine sponges are three-dimensional copolymers (composed of formaldehyde, melamine, and sodium bisulfite) with low density, high porosity, and an open-hole structure [10,11]. Their negligible cost and the presence of functional groups, which make them amenable to functionalization, have increased their popularity [12]. Likewise, carbon foams are materials with open-cell structures; they have a high surface area and tend to be highly hydrophobic [13]. Currently, with the development of new sponge or foam materials, along with the advancement of nanotechnology, the abovementioned topic has been revived, and various uses of these materials have been proposed. Furthermore, functionalizations have been carried out to broaden their applicability, so that they can be used as sorbents in sample preparation in analytical chemistry. Reports in this field are still sparse, but the use of sponge and foam materials in analytical chemistry is once again an up-and-coming trend. This review explores the development and use of such materials, from past to present, and it highlights future perspectives on their use in analytical chemistry.

The Beginning of the Trend

Currently, PUFs are mostly known in analytical chemistry owing to their widespread use as passive samplers for the collection of volatile compounds [14-19]. Their low cost and ease of handling, as well as the fact that they can accumulate particulate matter, make their use favorable. However, PUFs were not always used in this way. The first reported method that employed PUFs for absorption was published 50 years ago by Bowen, but it was not until the 1990s that the use of PUFs in sample preparation procedures started to become more popular [20]. From that time onward, and especially in the two decades from 1990 to 2010, PUFs were used for the preconcentration of common metal species, such as iron [21,22], copper [23], zinc [6], and nickel [24], and of rare metals, such as germanium [4], thorium [25], thallium [26], and uranium [27], from aqueous matrices. In most cases, the PUFs were used bare, without any functionalization, necessitating the addition of organic or inorganic ligands to the sample solution to form metal complexes that could be sorbed onto the PUFs. In the case of aqueous samples, no sample pretreatment was carried out, while, for other matrices (such as metal granules, alloys, dried shrimp, fruits, etc.), the samples were digested with hot, concentrated nitric acid prior to analysis. A typical example of this concept was the addition of sodium molybdate to a germanium-containing solution so that molybdogermanate could be formed, which was then extracted onto PUFs [4]. The PUFs were left in the sample solution for 1 h (to reach equilibrium).
Similar to the above case is the use of polyether-type PUFs for the extraction of gallium (in the form of gallium chloride) from alumina, aluminum alloys, and residues from the aluminum industry [5]. Another example is the use of salicylate for complexation with U5+ (in an acidic environment) so that the resulting complex could be extracted onto PUFs [27]. In all three cases, no elution step was needed: the PUFs were analyzed directly using X-ray fluorescence, which is an asset for the overall time spent on the analysis of a sample. In another study, a method for the determination of molybdenum in iron-based matrices was developed [28]. The method was based on the use of ascorbic acid to reduce Mo5+ to Mo4+ and Fe3+ to Fe2+, and then on the employment of thiocyanates to form metal complexes, which were extracted into PUFs. The reducing step was important to avoid interference from iron, since Fe2+ does not form thiocyanate complexes. The molybdenum thiocyanate complexes were efficiently extracted, even in the presence of other metal species (such as copper, cobalt, and zinc) at ten times the molybdenum concentration, as evidenced by the high recoveries achieved when analyzing pure iron and steel samples. Later on, another study was published in which the experimental parameters were optimized so that the above-described principle could be used for the determination of molybdenum in water and plant leaf samples (digested with the addition of boiling concentrated nitric acid and hydrogen peroxide) [29]. Sorption kinetics were found to be fast, allowing a high flow rate (up to 10 mL per min), which was an advantage for the analysis of large sample volumes in a short time. Moreover, it was suggested that the elution should not be carried out with a nitric acid solution more concentrated than 3 mol·L−1, since the PUF structure would otherwise be altered and the PUFs could not be reused. The recoveries were satisfactory, and good accuracy was demonstrated by analyzing certified reference materials. In another study, the use of thiocyanates was proposed for the formation of complexes with zinc, so that they could be extracted onto PUFs [6]. In this method, many metal species, such as calcium, aluminum, and nickel, as well as anions such as chloride, sulfate, nitrate, etc., do not affect the extraction. However, Fe3+, Cu2+, Co2+, Hg2+, Ga3+, and Pb2+ are co-extracted. To avoid their presence, the authors proposed the reduction of Fe3+ to Fe2+ with ascorbic acid and the use of citrate to mask the copper and cobalt species. The method was developed to extract zinc from cadmium-rich matrices; however, cadmium can also form complexes with thiocyanates. To avoid this, the pH of the solution was adjusted to 3. For the three remaining metal species, the authors proposed an elution clean-up step with water, which does not elute the mercury, gallium, and lead complexes. The use of water as an elution solvent was also an added advantage, since organic solvents were avoided. In a similar context, Abbas proposed the use of molybdate for the formation of the respective complexes with phosphates and arsenates, which were then reduced to molybdenum blue (using antimony as a catalyst and ascorbic acid as a reducing agent) and adsorbed onto PUFs [30]. The sorbed complexes were then eluted, and the absorbance of the eluent was measured photometrically.
However, since both species form molybdenum blue, it was difficult to determine their concentrations in the same sample. This is a common problem in the detection of arsenate, which exists at lower concentrations in water samples than phosphate. The author claimed that, by conducting extractions at two pH values (i.e., 0.9 and 1.2, adjusted with sulfuric acid), the formation of molybdenum blue by arsenates was totally inhibited in the solution at pH 0.9, so the recorded absorbance was due only to phosphomolybdenum blue. Following simple calculations using the two recorded absorbances, the concentrations of both phosphates and arsenates were obtained. Based on the well-established formation of Fe3+-thiocyanate complexes, it was also proposed that these complexes could be extracted onto PUFs [22]. The authors strongly emphasized the importance of adding hydrochloric acid to the sample solution (until the pH was close to 1.3) to avoid the formation of Fe3+-OH− complexes. They proposed that the adsorption of the Fe3+-thiocyanate complexes is completed in three steps. First, the solute approaches the boundary-layer film at the adsorbent surface. The second step is film diffusion, i.e., the diffusion of the complex through the boundary film. The third step is intraparticle diffusion, i.e., the diffusion of the complex into the porous PUFs. To complete these steps and achieve reproducible results, a 90-min extraction time was proposed. Capitalizing on the same complex-formation principle, Casella developed an on-line solid-phase extraction system using PUFs for Fe3+ determination in acidic water samples [21]. Following spectrophotometric measurements, highly satisfactory relative standard deviations (between 1.2% and 1.5%) and low detection limits (0.45 or 0.75 µg·L−1, depending on the preconcentration time) were achieved. PUFs combined with thiocyanates were also proposed for the on-line detection of nickel [31]. Here, thiocyanates were used to form complexes with interfering ions, so that these complexes could be removed by adsorption onto the PUFs, while nickel, which does not form a complex with thiocyanates, could pass through the PUF mini-column. After reacting with 4-(2-pyridylazo)-resorcinol, nickel was determined spectrophotometrically. The two studies above highlighted the potential of PUF mini-columns for the development of low-cost, on-line systems. All of the abovementioned studies made use of bare, non-functionalized PUFs. However, there were certain cases where PUFs were loaded with selective reagents for the determination of various ions. This was done to counterbalance two main disadvantages of PUFs: lack of selectivity and low sorption capacity [32]. To make the solid-phase extraction and determination of Ru3+ feasible, the use of PUFs functionalized with 3-hydroxy-2-methyl-1,4-naphthoquinone-4-oxime was proposed [33]. The developed PUFs were highly selective and made feasible the extraction of 1 µg of Ru3+, even in the presence of a high excess of other ions, such as Ba2+, Zn2+, Cr3+, etc. The only metals that could interfere with Ru3+ extraction were easily masked with common reagents (e.g., Ni2+ was masked by a 1% KCN solution; Fe3+ and V5+ were masked by the addition of one crystal of potassium fluoride and sodium fluoride, respectively). The recoveries were above 98%, highlighting the prospects of functionalized PUFs.
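The "simple calculations" behind the two-pH phosphate/arsenate scheme can be written out explicitly. The following is a reconstruction under the stated assumptions that the two blue species absorb additively and that arsenomolybdenum blue formation is fully suppressed at pH 0.9 (b is the optical path length, ε the molar absorptivities, c the concentrations; all symbols are ours):

$$A_{0.9} = \varepsilon_{P}\,b\,c_{P}, \qquad A_{1.2} = \varepsilon_{P}\,b\,c_{P} + \varepsilon_{As}\,b\,c_{As}$$

$$\Rightarrow\quad c_{P} = \frac{A_{0.9}}{\varepsilon_{P}\,b}, \qquad c_{As} = \frac{A_{1.2} - A_{0.9}}{\varepsilon_{As}\,b},$$

provided that ε_P is effectively the same at both pH values; in practice, both absorptivities would be obtained from calibration. Similarly, the three-step film/intraparticle picture invoked for the Fe3+-thiocyanate system is commonly quantified (though the source does not give the equation) with the Weber-Morris intraparticle-diffusion form

$$q_t = k_{id}\,t^{1/2} + C,$$

where q_t is the sorbed amount at time t, k_id the intraparticle-diffusion rate constant, and C an intercept reflecting the boundary-layer contribution; a linear q_t versus t^{1/2} plot over the 90-min window would indicate intraparticle diffusion as the rate-limiting step.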
Similarly, the functionalization of PUFs with 9,10-phenanthraquinone monoethylthiosemicarbazone was proposed, so that the functionalization reagent could form a highly stable, colored complex with Ti3+ [34]. By using these functionalized PUFs and spectrophotometric detection, recoveries between 99.2% and 100.2% were achieved for zinc granulate and lead foil samples, highlighting the great potential of the method. Similarly, 2-ethylhexylphosphonic acid was proposed as a reagent to functionalize PUFs, to produce PUFs selective for thorium [25]. Thorium ions were extracted based on a cation-exchange mechanism; therefore, a highly acidic medium was avoided, since hydrogen ions were competitively extracted. Finally, a report was published on the functionalization of PUFs with ammonium hexamethylenedithiocarbamate [35]. The functionalized PUFs were used as sorbents for arsenic, bismuth, mercury, antimony, selenium, and tin. Since As5+, Sb5+, and Se6+ do not form complexes with ammonium hexamethylenedithiocarbamate, a reduction step was necessary prior to extraction. The cases reported so far concerned the extraction of metal ions onto (functionalized) PUFs. However, two published reports pertain to the extraction of polycyclic aromatic hydrocarbons (PAHs) onto PUFs. In the first study, the authors developed a single-pass flow-through extraction method and examined the suitability of PUFs for the extraction of 18 PAHs from diesel exhaust samples [36]. In the second study, PUFs were suggested for the extraction of four PAHs from water samples [37]. The authors examined the mechanism via which PAHs and PUFs interact. Firstly, they found that PAH sorption did not depend on the chemical structure of the PUFs: whether the polyester, the polyether, or their co-polymer was used, sorption remained unchanged. However, it was evident that the hydrophobicity of the PUFs greatly influenced their sorption potential: a more hydrophobic PUF presented a greater sorption potential. Both of these studies highlight the effectiveness of bare PUFs for PAH extraction, without laborious functionalization steps. A summary of the analytical methods developed between 1990 and 2005 based on PUFs is given in Table 1.

Table 1. Summary of the analytical methods developed, between 1990 and 2005, based on polyurethane foams (PUFs). SP: spectrophotometry; ETAAS: electrothermal atomic absorption spectrometry; XRF: X-ray fluorescence spectrometry; AAS: atomic absorption spectrometry; ICP-AES: inductively coupled plasma atomic emission spectrometry; PAH: polycyclic aromatic hydrocarbon.

Recent Uses of Polyurethane Foams

After a marked decline in the number of publications on sponge and sponge-like materials in sample preparation, the trend was revived after 2010, and more and more studies were published. As with the reports of the previous years, some PUF-based sorbents were developed for metal species, albeit to a lower extent [9,32,57,58]. For instance, the use of Eriochrome Black T as a complexing agent for Cu2+ was recently proposed [9]. Eriochrome Black T was selected since, under acidic conditions, the metal-Eriochrome Black T complex-formation constants for copper and iron are higher than those for other metals (such as cobalt, zinc, etc.). The formed complexes were sorbed on PUFs and, based on this, an SPE procedure was developed for Cu2+ extraction from water samples.
In acidic media, the copper-Eriochrome Black T complex was in its neutral form and was extracted more efficiently. This was rationalized by proposing a solvent-like mechanism, in which the PUFs acted as a polymeric solvent able to retain neutral substances or substances with very low charge density. Although Eriochrome Black T forms more stable complexes with iron than with copper, the iron complexes could not be sorbed efficiently on the PUFs because, in acidic media, the nitrogen atoms of the PUFs are protonated and repel the cationic iron-Eriochrome Black T complex. Capitalizing on the affinity of thiazolidin-4-ones for heavy metal ions, a spirothiazolidine analogue (3-sulfonamoyl-phenyl-spiro[4-oxo-thiazolidin-2,2′-steroid]) was synthesized and used to functionalize PUFs, rendering them selective and able to preconcentrate Cd2+ from water samples containing iodide ions [57]. Iodide ions were added to the sample solution so that the anionic complex [CdI4]2− could be formed, which in turn formed a ternary ion associate that could be sorbed onto the PUFs. In another study, an SPE procedure for the separation of Au3+ traces from geological samples was developed [32]. To make this feasible, acid hydrolysis of the PUFs was carried out, and then, using glutaraldehyde as a linking arm, cytosine was added onto the PUFs. Cytosine was selected due to its low cost and its nitrogen and oxygen atoms, which render it a good ligand for metal ions. To carry out the extraction, the pH of the sample solution was adjusted to 1 so that protonation of the binding sites of the chelators could take place, while avoiding metal precipitation as hydroxides. The use of hydrochloric acid ensured the formation of the chloro-anionic species (AuCl4−), which are readily adsorbed onto the imine groups of the cytosine-modified PUFs via electrostatic interactions (formation of ion-pair complexes). The functionalized PUFs exhibited high selectivity, sensitivity, and adsorption capacity for Au3+. PUFs were also functionalized with β-naphthol and used as sorbents for Fe3+, Cu2+, Cr3+, Co2+, and Mg2+ [58]. The developed PUFs efficiently adsorbed the metal species, following a simple SPE procedure. When methylene blue was used to functionalize PUFs, penicillins could be extracted from pharmaceuticals and milk samples (previously deproteinated with the addition of acetic acid), following a simple SPE procedure [59]. The developed procedure was sensitive to pH changes, since they affected the formation of ion pairs between the antibiotics and the methylene blue-functionalized PUFs. At pH values lower than 8, the antibiotics are primarily in their neutral form; hence, they cannot form ion pairs. At pH values above 9.5, competition between the antibiotics and hydroxyl ions for the positively charged centers of the sorbent was recorded. The developed method exhibited adequate accuracy and good precision. More importantly, the reusability of the prepared PUFs was examined, and it was found that they lasted six months while performing 15 sorption-desorption cycles each day, which is a great asset of the developed material. Finally, PUFs were functionalized with graphene oxide (GO) to combine the extractive properties of both [60]. The preparation process is very easy (stirring PUFs in a GO solution and then drying at room temperature), but it takes more than 24 h to complete.
The epoxy groups of the GO link with the carboxyl and amino groups of the PUFs, while the formation of hydrogen bonds further stabilizes the coating. The prepared sponges were used to extract sulfonamides from milk samples (after protein precipitation with acetonitrile). By analyzing the Fourier-transform infrared (FT-IR) spectra of the sorbent before and after the extraction, the authors found that amides were formed between the amine groups of the sulfonamides and the carboxylate groups of the sorbent. Moreover, the formation of hydrogen bonds was confirmed and contributed to the overall sorption of sulfonamides. A summary of the analytical methods developed from 2005 to today based on PUFs is given in Table 2.

Development of Carbon-Based Foams

All the above studies highlight the potential of PUFs in sample preparation. Currently, the trend seems to have shifted toward other sponge-like materials. One such material is carbon-based foam. In one study, melamine-formaldehyde polymer foams were annealed at 800 °C, under a nitrogen atmosphere, to produce carbon foams [61]. The obtained carbon foams retained the three-dimensional (3D) interconnected network of the initial foams; they were composed of nitrogen and carbon atoms, while exhibiting moderate hydrophobicity. If the temperature during synthesis were increased above 1000 °C, more hydrophobic carbon foams would be obtained. The carbon foams were used for the extraction of phenolic endocrine-disrupting compounds (i.e., bisphenol A, 4-tert-octylphenol, and 4-n-nonylphenol) from water samples, since they have a good affinity for moderately polar phenols, owing to their hydrophobicity. The recoveries were found to be above 90% in experiments conducted using well water, leachates, and wastewater. The use of the synthesized carbon foams resulted in enhanced preconcentration factors (24-38), compared to PUFs (preconcentration factors: 11-15) and MeS (preconcentration factors: 7-12). In two other studies, GO/polypyrrole foams were developed and used in pipette-tip SPE procedures [62,63]. In the first case, the polypyrrole was first prepared (by polymerization of pyrrole in the presence of FeCl3) and then mixed with a GO solution for 24 h [62]. Although the final material was obtained in the form of a powder, scanning electron microscopy images revealed a loose, three-dimensional foam morphology. The GO/polypyrrole foams were used to extract three auxins from papaya juice. In the second case, the polymerization of the pyrrole was conducted in the presence of GO and, thus, the time needed for the preparation of the GO/polypyrrole foams was significantly reduced (from the 48 h needed in the previous case to 12 h) [63]. The latter GO/polypyrrole foams were used to extract seven sulfonamides from milk and honey samples. Acetonitrile and hexane were successively added to the samples to remove proteins and fat, respectively. The proposed method consumes a very small amount of adsorbent (3 mg) and the extraction step is completed in 3 min, which are significant assets of the developed method. The relative standard deviations (RSDs) achieved with the proposed method were low (<1.1% for intra-day analyses and <1.9% for inter-day analyses) when analyzing water samples.
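For orientation, the preconcentration (or enrichment) factors quoted throughout this review follow the usual definition; the formula below is the standard one, not taken from the cited papers:

$$EF = \frac{c_{\text{eluate}}}{c_{\text{sample}}} \approx R\,\frac{V_{\text{sample}}}{V_{\text{eluate}}},$$

where R is the fractional recovery. A preconcentration factor of 30 thus corresponds, for example, to near-quantitative extraction from 30 mL of sample followed by elution into 1 mL of solvent.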
However, the recoveries were less satisfactory for real samples (62.3-109.0% and 66.6-106.9% for honey and milk samples, respectively) and the RSDs were significantly higher (<11.2% and <10.8% for honey and milk samples, respectively). Despite the significant advantages of the method, further improvements are needed to enhance its performance. In another case, a GO dispersion was freeze-dried to obtain a GO sponge, which was then reduced with hydrazine to form a graphene sponge [64]. The SEM images revealed that the GO sponge had a more compact structure than the graphene sponge, which was attributed to the interactions of the oxygen-containing groups. In both cases, a smooth and porous three-dimensional open-hole structure was observed. Using the graphene sponges under the solid-phase extraction principle, the authors were able to extract six organic ultraviolet (UV) filters (i.e., 2-(2′-hydroxy-5′-methylphenyl)benzotriazole, 2-(2H-benzotriazol-2-yl)-4,6-bis(1-methyl-1-phenylethyl)phenol, 2-tert-butyl-6-(5-chloro-2H-benzotriazol-2-yl)-4-methylphenol, 2-(2′-hydroxy-3′,5′-di-tert-butylphenyl)-5-chlorobenzotriazole, 2-(2′-hydroxy-3′,5′-dipentylphenyl)benzotriazole, and 2-(2H-benzotriazol-2-yl)-4-(1,1,3,3-tetramethylbutyl)phenol) from water and personal care products (e.g., skin cream, sunscreen, etc.). A wide linear range (20.0 to 1000 µg·L−1) was recorded for the developed method. The synthesized graphene sponges could be reused more than 60 times, which counterbalances their lengthy preparation (five days are needed to prepare 4.5 g of graphene sponge, starting from pristine graphite to synthesize the GO and then the sponges). In another study, the authors heated a mixture of zinc nitrate and sucrose in a crucible at 120 °C for 2 min, then at 180 °C for 5 min, and then at 1100 °C for 3 h, combining synthesis and calcination in a single step [65]. During the heating step, various gases are released (such as CO2, N2, and H2O vapors) that "blow" the heated mixture and form the foam-like structure. The SEM images revealed that the foam material contains many "bubbles", some intact and others broken, which facilitate the interaction of the foam material with analytes, not only on the surface but also in the internal holes, cavities, and channels (Figure 2). The carbon foam developed was used in a stir-bar-supported micro-solid-phase extraction procedure for the extraction of five polyaromatic hydrocarbons from wastewater samples. The performance of the carbon foam was similar to that of multiwalled carbon nanotubes and graphene; however, since its synthesis is faster and cheaper, it is a good alternative to the other two nanomaterials. A summary of sample preparation procedures developed based on various carbon-based foams is given in Table 3.

Combinations of Carbon-Based Foams with Metals

Instead of using solely carbon-based foams, in some cases carbon-based foams functionalized with metals were developed. One good example is the study published by Sajid et al., who prepared a carbon foam with zinc oxide nanoparticles incorporated in the network [13]. To do so, they heated a mixture of sucrose and zinc nitrate at 110 °C. The concept is similar to that discussed above; however, in this case, an annealing step was not employed. The resulting product had a foamy structure, as evidenced by scanning electron microscopy images, while zinc oxide nanoparticles were visible all over the surface (Figure 3).
The presence of zinc oxide nanoparticles was further confirmed by X-ray diffraction (XRD) spectra. The authors also calcined the produced zinc oxide nanoparticle-incorporated carbon foam by heating at 900 °C. Although the calcined product exhibited a better crystalline structure, its sorption performance was worse than that of the as-synthesized foam. The developed product was used as a sorbent for the extraction of 15 organochlorine pesticides from milk samples (without sample pretreatment). In another study, the authors combined chitosan and metal-organic frameworks to prepare foams [66]. The foams were prepared by an ice-templating procedure, in which proper amounts of the metal-organic framework, chitosan, and glutaraldehyde were mixed, and then the mixture was placed into a mold. After freezing at −20 °C, the material was freeze-dried to form the porous foams. The authors prepared six such foams, using different metal-organic frameworks (i.e., MIL-53(Al)/chitosan, MIL-53(Fe)/chitosan, MIL-101(Cr)/chitosan, MIL-101(Fe)/chitosan, UiO-66(Zr)/chitosan, and MIL-100(Fe)/chitosan), and they examined their sorptive performance for a mixture of five parabens in water. The results conclusively showed that the best sorbent was the MIL-53(Al)/chitosan foam. Metal-organic frameworks exhibit drawbacks in aqueous-phase adsorption due to their low stability in water (coordination bonds are likely to collapse). For this reason, the use of a zeolitic imidazolate framework-8 (ZIF-8) in combination with a GO sponge was proposed [67]. An ice-templating procedure was employed to obtain a product, similar to the previous case; however, to obtain a functional material, a calcination step at 800 °C was necessary.
Without the calcination step, the material had a mono-dispersed rhombic dodecahedral structure, similar to that of ZIF-8. When the material was calcined at 800 °C, a rich open-hole structure could be observed, which could not be achieved at lower temperatures. The ZIF-8/GO sponge was used for the extraction of five sex hormones from defatted and deproteinated milk and milk products. The developed method exhibited wide linear ranges (10.0-3000 mg·L−1) with remarkable linear correlation coefficients (R2 > 0.9998). Excellent repeatability (intra-day RSDs < 0.39% and inter-day RSDs < 3.86%) and good recoveries (83.8-108.4%) were the two most significant advantages of the developed procedure. Finally, in another study, the authors employed a somewhat "reversed" concept: instead of functionalizing a carbon-based foam with a metal, they functionalized nickel foams with polydopamine [68]. This was achieved by placing the nickel foam in a dopamine solution prior to its self-polymerization. Owing to the presence of catechol and quinone groups on the surface of the prepared foam, a good affinity for Sudan dyes was expected. As a proof of concept, the developed foam was used in a solid-phase microextraction procedure for four Sudan dyes from diluted tomato sauce and hotpot seasoning samples. A summary of the analytical methods developed based on carbon-based foams functionalized with metals is given in Table 4.

Development of Functionalized Melamine Sponges

Until recently, MeS was an unexplored material in sample preparation. In our laboratory, we modified MeS with graphene (GMeS) in a straightforward way and used it, for the first time, in sample preparation [10]. Previous studies had attempted to modify MeS with graphene using multiple steps and sophisticated equipment, resulting in time-consuming methods. We achieved the modification by dipping MeS cubes into a GO solution containing hydrazine, irradiating the cubes with microwaves for 2 min, and then drying them. The as-prepared GMeS contained graphene sheets throughout their structure (Figure 4) and were rendered hydrophobic (Figure 5). The prepared GMeS were used for the extraction and preconcentration of sulfonamides from deproteinated milk and eggs, as well as from lake water samples (based on π-π and hydrophobic interactions), and a method validated according to the SANCO/12571/2013 guideline was developed. Low limits of quantification (between 0.31 µg·kg−1 and 1.3 µg·kg−1 for the food samples and between 0.10 µg·L−1 and 0.29 µg·L−1 for lake water samples) and high enrichment factors for milk and lake water samples (94-100) were some of the figures of merit of the developed procedure. Following this study, we proposed the decoration of MeS with copper sheets, so as to prepare a sorbent selective and suitable for sulfonamide extraction, based on the affinity of copper for sulfonamides [11].
The synthesis was based on the addition of hydrazine to a heated solution of copper acetate, in which the MeS was placed, and stirring the mixture for 30 min (Figure 6). After washing, the copper-decorated MeS could be used directly, without the need for a drying step. With the copper-decorated MeS, sulfonamides could be extracted following a radically different mechanism: their sorption was based on the fact that sulfonamides acted as bridges between two copper ions, via their aromatic amine nitrogen and the nitrogen atom of the heterocycle. This mechanism renders the sorbent selective and efficient for sulfonamides. Using the prepared sponges, we developed a method for sulfonamide determination in deproteinated milk and water samples, validated according to Commission Decision 657/2002/EC. The method exhibited a wide linear range (0.05-150 µg·L−1) and high enrichment factors (234-463 for water samples), which render it suitable for the routine analysis of sulfonamides. In another study, we functionalized MeS with urea-formaldehyde co-oligomers [12]. Instead of adopting an acid-catalyzed polymerization step for the preparation of the urea-formaldehyde oligomers, we employed a base-catalyzed step. This resulted in the formation of a more hydrophobic product, which does not exhibit the typical resin structure of the acid-catalyzed polymer and consists mainly of oligomers. The prepared sponges were found to be suitable for the extraction of six different classes of compounds (i.e., non-steroidal anti-inflammatory drugs, benzophenones, parabens, phenols, pesticides, and musks). The developed method had low limits of quantification (between 0.03 and 1.0 µg·L−1), wide linear ranges, and excellent recoveries. In another study, graphene-modified MeS were functionalized with β-cyclodextrin [69]. The synthesis procedure consisted of multiple steps: first, MeS was dipped into a GO solution and dried; then, the sponges were placed into a solution of β-cyclodextrin (previously modified with aminopropyltriethoxysilane), removed after 2 h, and left to dry overnight. The modification procedure was repeated once more, resulting in the final material. The sponges were used for the extraction of flavonoids. The developed material is new and has some advantages, but the synthesis is time-consuming. MeS functionalized with β-cyclodextrin and graphene was also proposed [70].
An MeS cube was added into a β-cyclodextrin and graphene dispersion, and then ammonia solution was added to adjust the pH to 10. After adding hydrazine and heating, the modified MeS was dried. The developed sponges could be used for the extraction of malachite green. The presence of β-cyclodextrin was found to significantly affect the adsorption, since sponges prepared with lower amounts of β-cyclodextrin had lower sorption efficiency. The sponges were employed to extract malachite green from fresh crayfish and squid extracts (the samples were homogenized, and acetonitrile was added to extract the malachite green). Another reported type of MeS functionalization used carboxylated multi-walled carbon nanotubes and the metal-organic framework MIL-101(Cr) [71]. The synthesis was based on mixing carboxylated multi-walled carbon nanotubes, MIL-101(Cr), and polyvinylidene difluoride, and then immersing the MeS into the final solution. The modified sponges were used in an SPE procedure for the extraction of six triazines from corn extracts (the corn was crushed to a fine powder, and hexane was used to extract the compounds). A new procedure was also developed to coat MeS with polyaniline [72]. Since polyaniline is a polymer with four different states, it can serve well as a sorbent in sample preparation. The synthesis of the sponges was very simple: after dipping the MeS into an aniline solution and cooling them at 4 °C for 30 min, a chilled solution of ammonium persulfate was added, and the mixture was stirred for 30 s. The mixture was then left at 4 °C for 4.5 h and, after rinsing and drying, the sponges were ready to be used. Prolonged stirring was avoided during synthesis so that polyaniline agglomerates would not form in the MeS. The modified MeS was found to be suitable for the extraction of perfluorooctanoic acid and perfluorooctane sulfonate from deproteinated (with acetonitrile) human urine and serum. A similarly simple procedure was followed in another study, where silanization of MeS with trichloromethylsilane was completed in 10 min [73]. The sponges were rendered hydrophobic by the silanization, which made them suitable for the extraction of benzene, toluene, ethylbenzene, m-xylene, and o-xylene. The adsorption was based mainly on hydrophobic interactions, since benzene was adsorbed less efficiently than m-xylene (the Ko/w of m-xylene is 10 times higher than that of benzene). Following a needle-trap extraction method, an analytical procedure was developed that exhibited low limits of detection (0.005-0.0010 µg·L−1). In a more complex study, layered double hydroxides were developed on the surface of MeS [74]. To do so, Co(II)/2-methylimidazole porous coordination polymers were first immobilized on the MeS and served as a source of Co, so that Ni-Co layered double hydroxides could be synthesized on the MeS. Three phenolic acids (gallic, p-hydroxybenzoic, and caffeic acid) were used to examine the suitability of the developed sponges in sample preparation. It is noteworthy that the layered double hydroxides were dissolved during the elution step to release the analytes. Owing to the good analytical figures of merit, a simple and effective analytical method was developed. Compared to extraction with bare layered double hydroxides, the composite material had superior performance, due to its increased surface area.
This is probably one of the reasons that the use of sponges and sponge-like materials is again a trend: the surface area of existing sorbents can be increased by "depositing" them onto sponges. Finally, Liu et al. formed silica monoliths on the surface of MeS [75]. The sponges were dipped into a hydrolyzed mixture of tetramethoxysilane and vinyltrimethoxysilane containing polyethylene glycol, urea, and acetic acid, and then heated. To render the sponges suitable for the extraction of dipeptides, they were modified with 3-mercapto-1-propanesulfonic acid, so that the sulfonate groups could interact with the free amine groups of the peptides. The silica-monolith-functionalized MeS can also easily be adapted to other applications by altering the synthesis mixture. For instance, the addition of β-cyclodextrin during synthesis makes the sponges effective for the sorption of 4,4′-sulfonyldiphenol. Analytical methods developed based on modified melamine sponges are summarized in Table 5.

Use of Natural Sponges

The use of natural sponges has many benefits, including renewability, low cost, and environmental friendliness, and is therefore highly promising. It is known that sea sponges are used as bioindicators to monitor the contamination of water with heavy metals [76]; they are therefore suitable for the preconcentration of metals. Capitalizing on this principle, a sea sponge was used to fill a column, which was then used to extract copper, iron, lead, and nickel [76]. Due to the complex composition of the sponge and its abundant functional groups, no complexing agents were needed to sorb the metals, unlike the case of PUFs. Moreover, the developed method is environmentally friendly, as it uses a natural sea sponge. Later, the same group published another study, in which they proposed the use of sea sponges for the adsorption of Ponceau 4R and Sudan Orange G dyes [77]. The sea sponges served as an excellent sorbent: very good analytical figures of merit were recorded and, more importantly, very good recoveries were obtained when analyzing real samples without any pretreatment (i.e., peach-flavored drink powder, fruit-flavored mint candy, flavored rock candy, rosehip-flavored drink powder, fruit-flavored soft drinks, tomato paste, and pepper). No interference was recorded from ions commonly present in food products (such as iron, copper, potassium, calcium, etc.) or from other food dyes (chocolate brown, tartrazine, sunset yellow, brilliant blue, patent blue V), even though, in their previous study, the authors had found that metals could readily be adsorbed by the sea sponges. These two studies highlight the potential of sea sponges in sample preparation. Luffa sponge is another natural sponge, obtained from the ripened fruits of Luffa cylindrica [78]. It is composed of lignin, cellulose, and hemicellulose, with smaller quantities of pectin and proteins. Owing to its many functional groups, it too can serve as an excellent sorbent material, as evidenced by three recent reports. In the first report, Luffa sponges were used as a sorbent for phosphopeptides from protein digests [78]. The sponges exhibited exceptional selectivity for phosphopeptides over non-phosphopeptides, making their detection feasible even when their concentration was 100 times lower than that of the non-phosphopeptides. In the second report, the authors used Luffa sponges for the selective extraction of chromium(III) [79]. Selectivity over chromium(VI) was ensured by adjusting the pH of the solution to 4.0.
The analytical method developed exhibited high accuracy, as evidenced by the analysis of certified reference materials (certified value: 300.00 ± 0.5; found value: 299.57 ± 0.006). In the final report, the ionic liquid 1-hexadecyl-3-methylimidazolium bromide was deposited on Luffa sponges by physisorption [80]. The modification was carried out to render the Luffa sponges suitable for the determination of four benzoylurea insecticides in water and tea beverage samples. Before the addition of the ionic liquid, an alkali treatment of the Luffa sponges was carried out to remove hemicellulose and lignin and to make the sponges more hydrophilic.

Conclusions

Herein, we discussed the use and selected applications of sponges and sponge-like materials in analytical chemistry. This trend started with the use of PUFs for the sorption of metal ions. Although quite a few articles were published on this topic, the trend soon declined, due to the limitations of PUF use, the narrow applicability (only metal ions), and the advancement of nanotechnology. In the last decade, however, this trend has not only been reversed, but more and more researchers are aiming to develop new sponge-based sorbent materials. Currently, many different sponge materials exist, such as PUFs, MeS, carbon foams, sea sponges, Luffa sponges, and others. This makes it easier than in the past to develop more advanced or selective sorbents. The next step in this field was the combination of nanomaterials with sponges. This resulted in the development of sorbents with more advanced characteristics and wider sorption capabilities, rendering sponges suitable for the sorption of small organic molecules, such as antibiotics and pesticides. In the future, large-scale synthesis of these materials should be examined, so as to arrive at commercially available and reliable sorbent materials. Moreover, the use of other nanomaterials should also be examined, in order to widen the gamut of potential sorbents even further and to render them more suitable for specific applications. Whether this trend will come into its own in sample preparation remains to be seen in the near future. Until then, the development of sponge-based sorbents will continue, resulting in exceptional materials that could significantly alter existing methods.

Author Contributions: Conceptualization, T.G.C. and C.D.S.; writing-review and editing, T.G.C. and C.D.S. All authors have read and agreed to the published version of the manuscript. Funding: This research received no external funding. Conflicts of Interest: The authors declare no conflict of interest.
Shape control of size-selected naked platinum nanocrystals

Controlled growth of far-from-equilibrium-shaped nanoparticles with size selection is essential for the exploration of their unique physical and chemical properties. Shape control by wet-chemistry preparation methods produces surfactant-covered surfaces and offers limited mechanistic understanding, due to the complexity of the processes involved. Here, we report the controlled production of size-selected platinum nanocrystals with clean surfaces in an inert-gas environment, and the transformation of octahedra into tetrahedra. Molecular dynamics simulations of the growth reveal the key symmetry-breaking atomic mechanism for this autocatalytic shape transformation, confirming the experimental conditions required. In-situ heating experiments demonstrate the relative stability of both octahedral and tetrahedral Pt nanocrystals at least up to 700 °C and show that the extended surface diffusion at higher temperatures transforms the nanocrystals into the equilibrium shape.

Tetrahedral nanocrystals are out-of-equilibrium structures whose growth mechanism is a long-standing open problem. Here, the authors show that pure Pt tetrahedral nanocrystals grow in the gas phase and single out the defect-mediated mechanism leading to the symmetry breaking required for tetrahedral growth.

The properties of nanoparticles are known to depend strongly on both size and shape [1,2]. Therefore, producing nanoparticles with controlled size and shape is extremely important for applications. In this respect, a mechanistic understanding of shape control is essential for turning shape-directed nanoparticle synthesis from an art into a science. Depending on the degree of non-equilibrium of the growth, shape-directed nanocrystal synthesis has so far been rationalized on the basis of either the Equilibrium Wulff Construction (EWC) or the Kinetic Wulff Construction (KWC) [3-5]. In the EWC, the nanocrystal shape is determined by the equilibrium surface free energies of facets with different orientations. In the KWC, the nanocrystal shape derives from the different growth velocities of these different facets. Clearly, the competition between octahedral and tetrahedral growth can be explained neither by the EWC nor by the KWC. Both octahedra and tetrahedra are non-equilibrium shapes (see the calculations of Supplementary Fig. 1 and Supplementary Table 1), with quite unfavourable surface-to-volume ratios, so that their growth cannot be predicted by the EWC. On the other hand, both structures expose only the same type of (111) facets, so that the simplistic orientation-directed growth model of the KWC does not apply. In fact, in order to form a tetrahedron, only four symmetrically placed (111) orientations out of the eight equivalent (111) orientations must grow. In an isotropic environment, with atoms landing on the nanocrystals randomly from all directions, there is no a priori reason to select this reduced set of growing orientations. The explanation of this long-standing open problem [6-9] thus hints at a subtler growth mechanism, which we present here. A first important step in understanding this issue was made by Wang et al. [8], who showed that the growth of tetrahedra takes place under stronger non-equilibrium preparation conditions than that of octahedra. They also noted that a larger tetrahedron can grow from a smaller octahedron. However, they did not unravel the atomic-level mechanisms causing this type of growth, which, as we show below, requires a peculiar type of symmetry breaking.
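For reference, the two constructions invoked above can be stated compactly; these are the standard textbook forms, not equations taken from this paper. In the EWC, the distance h_i from the crystal centre to facet i is proportional to that facet's surface free energy γ_i, while in the KWC it is proportional to the facet's normal growth velocity v_i:

$$\text{EWC:}\quad \frac{\gamma_i}{h_i} = \text{const}, \qquad\qquad \text{KWC:}\quad \frac{v_i}{h_i} = \text{const}.$$

Since every facet of both an octahedron and a tetrahedron is of the same {111} type, γ_i (and, naively, v_i) is identical for all of them, which is precisely why neither criterion can distinguish between the two shapes.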
In the experiments of Wang et al. [8], the situation was indeed difficult to analyse at the atomic level, due to the complexity of the processes occurring in their wet-chemistry synthesis. Pt is an excellent catalyst for many industrially important chemical reactions. Although far-from-equilibrium tetrahedra have not been extensively studied, owing to their rarity, there is already experimental evidence that they are particularly useful for certain catalytic reactions [2,6]. Recent advances in the preparation of realistic quantities of catalytic nanoparticles with size-selection by gas-phase deposition [10,11] have opened a way for us to study the shape-induced effects of nanocatalysts in a systematic way. Tetrahedral Pt nanoparticles would also exhibit unique plasmon responses [12] that can be explored in bio-sensing and in the spectroscopy of trace elements. The availability of size-tunable tetrahedral nanoparticles is therefore not only of fundamental interest, but also of enormous practical value. In this study, we tackle this problem by taking advantage of the clean and relatively simple environment of gas-phase growth using the magnetron-sputtering technique [13,14]. We report the controlled gas-phase growth of octahedral and tetrahedral Pt nanocrystals with mass selection in terms of number of atoms up to 20,000 (with corresponding diameters around 9 nm). Even though tetrahedra are the most out-of-equilibrium among the platonic-shaped nanocrystals, we show that they can be optimized to be the dominant product, as confirmed by our atomic-scale scanning transmission electron microscopy in high-angle annular dark-field mode (HAADF-STEM). This is achieved without the use of precursors or surfactants. We therefore show that the growth of Pt tetrahedra and octahedra in the gas phase can arise from driving forces inherent to the kinetics of pure-metal condensation, which can be controlled experimentally. This is supported by our atomic-scale simulations, which not only reproduce the main features of the experiment but also unravel the underlying atomic-scale mechanism. The analysis shows the key role of atomic mobility between facets and of the generation of specific metastable defects during growth, and it supports the optimal experimental parameters. It is also confirmed by an in situ heating experiment in which surface diffusion can be manipulated.

Results

Growth experiments: octahedral vs tetrahedral structures

Figure 1 shows atomic-scale HAADF-STEM imaging of the two most abundant nanoparticle shapes, namely the tetrahedron (Fig. 1a-d) and the octahedron (Fig. 1e-h), with a nominal mass of 5000 Pt atoms, found in a cluster beam generated by a magnetron-sputtering inert-gas aggregation source. The details of the source are described later, as well as in the Methods section. The experimental images in Fig. 1b, f are taken with the electron beam travelling down an [011] zone axis of the tetrahedral and octahedral nanoparticles, as shown in Fig. 1a, e, respectively. Two simulated STEM images are displayed in Fig. 1c, g, where the atomic-column intensities can be used as a guide to the local thickness variation [15,16]. For shape identification, our experience has shown that there is no need to take into consideration the small structural relaxation that may be present [17]. Close matches of both the outlines of the particles and the overall atomic-column intensity variations between the experimental and simulated images demonstrate that the nanoparticles have the expected shapes.
However, for example in Fig. 1b, the top-right corner of the experimental image of the tetrahedron-shaped nanoparticle shows that atoms are missing from the edge when viewed edge-on, and the lower-left image of the topologically equivalent edge shows that the missing atoms decorate the edge unevenly lengthwise, particularly at vertices. These phenomena indicate that atoms at edges and vertices are highly unstable, but they do not lead to smooth, regular truncated facets. The result is also confirmed quantitatively by line profiles drawn across the images of the nanoparticles concerned, as shown in Fig. 1d, h, respectively. The experimental and simulated line profiles mostly match each other, apart from the parts at the beginning and at the end, due to the 'missing atoms' at the edges and vertices. This is consistent with the total-energy estimates for tetrahedral nanoparticles with different degrees of vertex truncation (see Supplementary Fig. 1 and Supplementary Table 1 for the excess energy of differently shaped nanoparticles): the total energy of an ideal tetrahedron is much higher than those of the octahedral particles, which can be regarded as vertex-truncated tetrahedra in which the four {111} surfaces created by vertex truncation are equal in size to the original {111} facets. As truncated-octahedral nanocrystals are the most stable at thermal equilibrium (see Supplementary Discussion SD-1, Excess energy of different structural motifs), the growth of octahedral and tetrahedral nanocrystals must be due to kinetic processes, which opens the way to experimental manipulation of the different shapes. Figure 2a shows a schematic of the cluster source used in this study. Here, the nanoparticles are generated by magnetron plasma sputtering of a Pt target in an Ar/He mixture and grow as they travel towards the exit of the condensation chamber. They are then extracted through a nozzle, and positively charged particles are focused and accelerated through the ion optics and mass-selected by a lateral time-of-flight (ToF) mass spectrometer. In this set-up, we can select the size of the nanoparticles easily. For example, for Pt5000 and Pt20000 nanoparticles, we obtain 5000 ± 150 and 20,000 ± 600 atoms, respectively, when the mass (M) resolution of the ToF is set at M/ΔM = 16. However, the shape of the nanoparticles depends on many factors that determine how the particles are formed. The operational parameters that can be varied here are the sputtering power, the argon (Ar) and helium (He) gas composition, and their flow rates, while keeping the condensation length the same. Figure 2b shows the experimental result of a simple kinetic study by mass spectrometry, in which the only kinetic variable is the He flow rate in the Ar/He inert-gas mixture introduced into the condensation chamber. In this case, the sputtering power is kept at a relatively large value of 135 W and the Ar flow rate is kept constant at 90 standard cubic centimetres per minute (sccm). Here, the main result of increasing the He flow rate is a decrease in the mean nanoparticle size as well as in the number of particles detected (as represented by the area under the main peak of the mass spectrum). To a first approximation, the inverse of the He flow rate can be interpreted as proportional to the dwell time of the nanoparticles while traversing the condensation chamber.
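Two of the numbers just quoted can be made explicit (our arithmetic and notation, not taken from the paper). The selected mass windows follow directly from the resolution setting,

$$\Delta M = \frac{M}{(M/\Delta M)} = \frac{5000}{16} \approx 312\ \text{atoms}, \qquad \frac{20{,}000}{16} = 1250\ \text{atoms},$$

i.e., full widths corresponding to roughly ±150 and ±600 atoms, consistent with the quoted values; and the dwell-time statement is a simple proportionality in the He flow rate Φ_He,

$$t_{\text{dwell}} \propto \frac{1}{\Phi_{\text{He}}}.$$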
In Supplementary Discussion SD-2 (Influence of He flow rate on the formation of Pt nanoparticles), we show that, initially, the number of nanoparticles detected grows quickly with the transit time and then stabilizes at a roughly constant value, which indicates that the He flow rates we applied are sufficiently small to have little effect on the number of nanoparticles in the beam. Interestingly, the mean particle size increases throughout the full range of nanoparticle transit times, and the evolution can be fitted by a single exponential. This phenomenon strongly suggests that the increase in particle size is due to adatom condensation on the existing nanoparticle nuclei, enhanced by the longer effective transit time [18]. We also see that the mass spectra are largely symmetrically distributed, closer to a Gaussian than to a log-normal distribution, suggesting that the possibility of nanoparticle growth through He-enhanced two-body collisions of Pt nanoparticles is also small [19]. Particle-particle aggregation would often lead to either elongated or fractal-shaped clusters [20], which are not observed here. We can thus study the growth pathway of the shaped nanoparticles in the adatom-adsorption regime by examining the shape distribution of the size-selected particles at different He flow rates. This was done for nanoparticle sizes of 3900, 5000, 10,000 and 20,000 atoms at He flow rates of 115 sccm, 70 sccm, 40 sccm and 5 sccm, respectively (see examples in Fig. 2c, d and Supplementary Discussion SD-3, Procedures of capturing images for shape identification and their uncertainty). The shape distributions at 115/70 sccm are more representative of the particles soon after nucleation near the plasma zone, as they are quickly swept through the condensation chamber, so the opportunity for further growth is limited. The shape distribution at 5 sccm reflects the further growth of these nuclei as they travel through the condensation chamber at a more leisurely pace. The details of how images are captured for shape identification, and their uncertainty, are described in Supplementary Discussion SD-3. The shapes of the particles are inferred by comparing the experimental HAADF-STEM images of the nanoparticles with a library (see Supplementary Discussion SD-4, Tetrahedron and Octahedron Image Libraries) of simulated HAADF-STEM images of ideal tetrahedral and octahedral fcc nanocrystals of comparable sizes at all possible orientations, as demonstrated in Fig. 1. The most prominent finding, shown in Fig. 2e, is that the dominant shape of the Pt5000 nanoparticles is octahedral (73% of 616 nanoparticles counted), while the dominant shape of the Pt20000 nanoparticles is tetrahedral (52% of 753 nanoparticles counted) (for details, see Supplementary Table 2). The results for Pt10000 nanoparticles lie in between. It seems that, as the size increases, a higher percentage of the particles adopts the tetrahedron shape. In other words, at least some of the tetrahedral Pt particles grow from the smaller octahedral ones. However, this growth cannot be easily understood by the classical KWC mechanism, which depends on the differing growth rates of different facets [21,22]. Both tetrahedral and octahedral nanoparticles are surrounded by identical (111) facets.
Furthermore, tetrahedral nanoparticles are further out of equilibrium than octahedral nanoparticles (see Supplementary Discussion SD-1, Excess Energy of Different Structural Motifs); therefore, the observed shape transformation also cannot be driven by energetics. A similar conundrum exists for the growth of tetrahedral nanoparticles in solution and has remained unresolved [22]. Before we discuss the resolution of this puzzle for our gas-phase growth below, we should also point out that the nanoparticles classified as "others" in Fig. 2 consist of singly or multiply twinned Pt particles, hexagonal-shaped particles, or particles with a long axis (see Supplementary Fig. 6). The absolute numbers of "other" nanoparticles are small and comparable in all measured samples, so they are unlikely to play a significant role in the growth pathway from octahedra to tetrahedra. This does not exclude the possibility that a few tetrahedra may grow without passing through the octahedral structure, but our data indicate that the octahedral route is dominant.

Growth simulations: defect-mediated symmetry breaking

We take advantage of the relatively clean environment of gas-phase growth to study the atomic processes behind the octahedral-to-tetrahedral shape transformation using molecular dynamics. Our results show that there is a kinetic growth pathway in which the nanoparticles progressively move further and further away from equilibrium while growing in the context of high-symmetry structural motifs. This pathway comprises two steps: (a) truncated octahedron → octahedron; (b) octahedron → tetrahedron. These steps are shown schematically in Fig. 3a. Small nuclei formed in the high-temperature atomic vapour at the sputtering target are mostly truncated octahedra because of fast equilibration [23,24]. They can become octahedra after growth on all six (100) facets to complete their vertices, through adatom adsorption [25]. The further transformation into a tetrahedron can then only be obtained by growing tips on four of the eight (111) facets of the octahedron, in such a way that the octahedral symmetry is broken in a very specific pattern. Detailed analysis of the atomistic steps involved shows how that is achieved. We believe that the initial nucleation of nanoparticles in the presence of the energetic plasma results in nearly equilibrium shapes, which are truncated octahedra [23,26] presenting both (111) and (100) facets. Adsorption on (100) facets is more energetically favourable than on (111) facets, by about 0.5 eV in our model. It is well known that atoms deposited on (111) facets can move to the (100) facets, where they get trapped, thus growing the octahedral tips [25]. This growth step follows the classical KWC model, with the out-of-equilibrium growth driven by the different growth rates of different facets: (100) facets grow faster, so they tend to disappear. This explanation is consistent with the dominance of octahedra in the Pt3900 and Pt5000 samples we found experimentally when the flow rate was high, such that Pt nanoparticles newly nucleated at the plasma sputtering target were quickly swept through the condensation chamber (see Fig. 2e). Once an octahedron is formed, how can its Oh symmetry be broken so that a tetrahedron (Td symmetry, a subgroup of Oh) grows steadily on top of it? As all facets in both shaped nanocrystals are of the equivalent (111) type, we cannot in principle assume different growth rates.
Some more subtle mechanisms must come into play in this further growth step, which we explain with the aid of representative snapshots from a molecular dynamics simulation of the growth of a Pt231 octahedral nanocrystal (Fig. 3b-g; see the Methods section for the simulation method used). We first analyse the growth simulations of such small nanocrystals because they allow us to describe the growth mechanisms more easily by following the process atom by atom. Simulations of the growth of larger nanocrystals, of sizes in the range of the experiments, are discussed later here and in Supplementary Discussion SD-6 (Molecular Dynamics Simulations for Different Sizes and Temperatures). The simulation starts from a truncated-octahedral structure, which becomes the octahedron by completing its six vertices (Fig. 3b). A crucial ingredient for the subsequent tetrahedral growth is the mobility of atoms between nearby facets: in our simulations we find that it occurs by exchange and is activated already at relatively low temperatures. When growth starts on a given facet of the octahedron, steps are created on that facet. These steps act as traps for diffusing adatoms. Since mobility from nearby facets is activated, atoms deposited there are likely to diffuse to the growing facet and stick there. We believe that this depletion effect renders the nucleation of islands on nearby facets less likely and leads to configurations of the type shown in Fig. 3c, in which initially growing (G) facets are separated by initially non-growing (NG) facets; the latter may indeed grow, but at a lower rate than G facets, for the reason given below. The growth of a layer on a G facet initiates the formation of a tetrahedral edge on its nearby NG facets, as shown in Fig. 3c. These sharp tetrahedral edges present favourable sites for adatom adsorption on the NG facet. In general, adsorption of atoms on facets is more favourable in the vicinity of edges, as known from experimental and computational results on Pt(111) surfaces [27,28].

Fig. 3 Growth pathway from truncated octahedral to octahedral and to tetrahedral shaped nanoparticles. a Schematic representation of the two main steps of the growth sequence. A truncated octahedron (white atoms) becomes an octahedron after growth on all (100) facets to complete six vertices (blue atoms). The tetrahedron is then obtained by growing equivalent tips on only four of the eight (111) facets of the octahedron, in such a way that the octahedral symmetry is broken with a specific pattern. These tips are coloured in yellow, green, cyan and orange. b-i Snapshots taken from a growth simulation at 400 K and a deposition rate of 0.1 atom/ns. (b-d) and (e-g) show the same growing nanoparticle from different perspectives. The simulation is started from a truncated octahedron of 201 atoms (white atoms). b Octahedral vertices are completed. c Non-neighbouring growing (G, in yellow and green) facets are formed, separated by non-growing (NG) facets. Small tetrahedral edges are formed (enclosed in black rectangles). d An island in stacking fault (F, in dark grey) starts to grow at the corner between G and NG facets, creating (100)-like fourfold sites (enclosed in the pink rectangle) on which (e) new atoms adsorb and facilitate the growth of the tetrahedral tips by a self-replicating process (f), which finally leads to the tetrahedral nanoparticle (g), which continues to grow as a tetrahedron (h, i).
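A minimal, illustrative version of such a growth run can be sketched with the Atomic Simulation Environment (the package the authors use for model building; see Methods). The sketch below is our own construction: it substitutes ASE's built-in EMT potential for the Gupta potential used in the paper, compresses the time between depositions enormously, and should be read as a qualitative demonstration of the deposit-then-relax loop rather than a reproduction of the published simulations.

# Sketch of a deposit-then-relax growth loop in ASE (illustrative only).
# Assumptions: EMT as a stand-in for the Gupta potential; a vastly
# compressed interval between depositions; 400 K Andersen thermostat and
# a 5 fs time step, as quoted in the Methods.
import numpy as np
from ase import Atoms, units
from ase.cluster import Octahedron
from ase.calculators.emt import EMT
from ase.md.andersen import Andersen
from ase.md.velocitydistribution import MaxwellBoltzmannDistribution

rng = np.random.default_rng(0)

# 201-atom truncated octahedron, the paper's starting structure.
cluster = Octahedron('Pt', length=7, cutoff=2)
cluster.calc = EMT()
MaxwellBoltzmannDistribution(cluster, temperature_K=400)

def deposit_one_atom(atoms):
    """Place one Pt atom just outside the cluster, from a random direction."""
    d = rng.normal(size=3)
    d /= np.linalg.norm(d)
    com = atoms.get_center_of_mass()
    r_max = np.linalg.norm(atoms.positions - com, axis=1).max()
    atoms.extend(Atoms('Pt', positions=[com + d * (r_max + 3.0)]))

for _ in range(30):                      # grow by 30 atoms (toy scale)
    deposit_one_atom(cluster)
    # Re-create the integrator so it picks up the new atom count.
    dyn = Andersen(cluster, timestep=5 * units.fs,
                   temperature_K=400, andersen_prob=0.02)
    dyn.run(2000)                        # 10 ps of MD between depositions

print(f"final cluster size: {len(cluster)} atoms")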
Some small islands are therefore likely to nucleate on NG facets, with preferential placement at the corners with G facets, where they can keep a rather compact shape while placing several atoms on edge sites (see Fig. 3d). An experimental example of such a compact island forming over an edge in octahedral nanocrystals can be seen at the top-left of the octahedral nanocrystal shown in Fig. 1f. However, the formation of such an edge-bound island alone cannot account for the shape transformation into a tetrahedron. The key symmetry-breaking step starts if the island at the corner of the NG facet is in a stacking-fault position, which creates fourfold adsorption sites on the G facet (Fig. 3d). These sites act as traps for new incoming adatoms (Fig. 3e), which contributes to locking the island in the fault position. These adatoms create further new fourfold adsorption sites, causing an autocatalytic, self-replicating process that can lead to the fast growth of a tetrahedral tip (Fig. 3f, g). By comparison, the growth of an unfaulted island layer is slow and self-limiting (see Supplementary Figs. 7 and 9). The fast kinetics of tip growth over layer-by-layer growth drives the shape transformation from an octahedron to a tetrahedron, because the symmetric placement of the corners between G and NG facets naturally leads to the growth of four of the eight facets, no two of which are neighbours, finally producing the tetrahedral shape of Fig. 3h, i. Growth sequences from further simulations at temperatures of 600 and 800 K, as well as the final results of our 70 independent simulations for the same sizes, are reported in Supplementary Table 3 and Supplementary Figs. 7-9. We note that the metastable islands in the fault position can revert back to fcc stacking during the growth process (see Supplementary Figs. 7 and 8). When this happens, the growth of the nearby tetrahedral tips slows down significantly (see Supplementary Fig. 7), so that the tips remain truncated. The fact that faulted islands easily revert back to the natural stacking may also explain why we did not observe them in the final tetrahedral nanocrystals in our electron microscopy observations. On the other hand, stacking-fault islands are indeed visible in Pd tetrahedra grown in the liquid phase (see the image in Fig. 3a of ref. 8), where those islands may have been stabilized by ligands after growth completion. This also indicates that the proposed growth mechanism is possible in liquid-phase synthesis. Further simulations were done to reach larger growth sizes, up to 3000 atoms (Supplementary Fig. 10) and to more than 14,000 atoms (Supplementary Fig. 11), which is in the same size range as those studied in the experiments. These large-size simulations confirm that the transitions from truncated octahedra to octahedra and then to tetrahedra take place by the same growth mechanisms shown in Fig. 3. We note that the simulations of Supplementary Fig. 11 were run at a slightly higher temperature to allow octahedral growth up to large sizes within the limited simulation time scale. The overall picture arising from our simulations is that nanocrystals initially grow close to their equilibrium shape until they reach a critical size at which they are no longer able to equilibrate; the larger the nanoparticle, the slower the equilibration of its shape. Beyond that critical size, kinetic trapping begins to dominate, so that truncated octahedra grow into octahedra and then, at even larger sizes, the octahedra grow into tetrahedra.
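The kinetic contrast at the heart of this argument, an autocatalytic tip whose capture rate grows with the number of trap sites it has already created versus an unfaulted layer whose growth is roughly linear and stalls on completion, can be caricatured in a few lines. This is purely our illustrative toy model, not the authors' simulation:

# Toy caricature (ours, not the paper's model) of autocatalytic tip growth
# versus self-limiting layer-by-layer growth.
dt, steps = 0.01, 600
k = 0.8                      # common rate constant, arbitrary units
tip = 1.0                    # faulted tip: capture rate grows with size
layer = 1.0                  # unfaulted layer: constant rate, capped
layer_cap = 20.0             # layer stops growing once complete

for _ in range(steps):
    tip += dt * k * tip                      # rate proportional to trap count
    if layer < layer_cap:
        layer += dt * k * layer_cap / 4.0    # roughly constant rate

print(f"tip: {tip:.0f} atoms, layer: {layer:.0f} atoms (capped at {layer_cap:.0f})")
# The tip channel runs away exponentially while the layer saturates,
# mirroring the symmetry-breaking argument above.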
This critical size depends on both the growth time scale and the temperature: it is small for short growth times and low temperatures, and it can be increased by growing on longer time scales or at higher temperatures.

Annealing experiments and simulations

An implicit condition for the proposed growth mode is that atomic mobility between nearby facets is activated, yet full mobility of atoms around the whole nanoparticle has not yet set in. The latter would bring the growth closer to equilibrium, resulting in a round or truncated-octahedral shape. This has been verified both by annealing experiments and by annealing simulations, which find that truncated-octahedral structures are indeed produced. Figure 4 displays HAADF-STEM images of Pt2057 nanoparticles after in situ heating from room temperature to 900°C. The electron beam is incident along the [011] zone axis, as in Fig. 1. The samples were heated to the elevated temperatures indicated in the images and held there for 1 min, followed by quenching to room temperature, at which point the STEM images were acquired. The images show clearly that the tetrahedron is stable at least up to 700°C, even though some rearrangement of the atoms is observed. Interestingly, not only does the well-formed tetrahedral tip at the top of the image become rounded at 500°C, but the less-well-formed tetrahedral tip on the right of the image also grows slightly, indicating that the surface diffusion activated by modest heating promotes the autocatalytic growth. Upon further heating to 800 and 900°C, the nanoparticle adopts a rounded shape following more substantial mass movement. The observation that mass transport initiates at different parts of the nanocrystal surface at different temperatures suggests heterogeneity of the surface diffusion barriers. This is the window of opportunity we explored experimentally to exercise shape control. At the same temperature of 900°C, the octahedral Pt particle is likewise transformed into the rounded shape, as shown in Fig. 4b. This result is nicely confirmed by annealing simulations (Fig. 4c), which are able to equilibrate the octahedron to a truncated-octahedral shape at 900°C even within the shorter time scale of the simulations. The results again show that both tetrahedra and octahedra are not thermodynamic equilibrium states: they are kinetically trapped shapes that can thermally relax to a stable structure only above 700°C on our annealing time scale. The high trapping temperature is characteristic of Pt [29,30], which is good for the practical application of these shaped nanocrystals.

Discussion

Compared with the complex and somewhat confused picture of the mechanisms for the growth of tetrahedra by wet chemistry [7,31,32], the clean surface of nanoparticles generated in a vacuum system offers a simpler platform for our simulation study of the atomic steps involved in tetrahedral growth. In particular, the nucleation of 2D islands at the edges of the nanoparticles, with hexagonal stacking on the underlying (111) facets, seems to be the dominant process involved, because of its efficiency compared with the random nucleation of fcc-stacked islands in the middle of facets (see the growth-rate simulation shown in Supplementary Discussion SD-7, Growth Simulation at 600 K and flux of 0.1 atoms/ns). A similar adatom growth mechanism should prevail for nanocrystals of other fcc noble metal systems; our simulations show that this is indeed the case for palladium (Pd).
This suggests that the controlled preparation of alloyed or core-shell bimetallic tetrahedral nanocrystals is possible using the clean gas-phase plasma synthesis method, further increasing the tunability of Pt-based nanoparticles [26,33]. The key to the controlled gas-phase growth of tetrahedral fcc nanocrystals is a growth environment that favours the predominance of adatom growth over particle-particle aggregation, together with a finite adatom diffusion length on the nanoparticle. Although the growth conditions in a magnetron sputtering inert-gas condensation chamber are a complex function of many factors, the understanding of the detailed atomistic mechanism gives us physical guidelines for their optimization. For example, particle-particle coalescence and secondary nucleation by inert gases can be suppressed by working at a low He vapour pressure, as we did. The finite diffusion length can be controlled via the transit time through the condensation chamber, which can be adjusted through a suitable condensation length or, as we have done in this work, by varying the helium gas flow rate. The high-yield gas-phase synthesis route [11,14] has the added advantage of producing clusters with bare surfaces, allowing the surface reactivities to be probed directly [10] without the potentially shape-transforming steps of surfactant removal [34] required for nanoparticles produced by the wet-chemistry approach. We expect that the demonstrated ability to manipulate the operation parameters to select not only the size but also the shape and the composition can play an important role in understanding the physical mechanisms behind their often remarkable physical and chemical properties, paving the way for the rational design and controlled growth of bespoke nanoparticles for catalysts or optical devices.

Methods

Nanoparticle preparation. Pt nanoparticles are fabricated using DC magnetron sputtering inert-gas condensation [14]. A mixture of argon and helium gases with individually controlled flow rates is introduced into the chamber for plasma sputtering of Pt targets (99.99%) as well as for nanoparticle growth in the gas phase. The growth is terminated once the particles leave the condensation chamber. The positively charged nanoparticles are focused and accelerated before size selection through a lateral time-of-flight (ToF) mass filter. In the present study, the mass resolution (M/ΔM) is set to 16. The Pt nanoparticle deposition is performed at a kinetic energy of 1200 eV.

Characterisation and structural identification. Pt nanoparticles are imaged by a 200 kV JEM-2100F transmission electron microscope (JEOL, Japan), which is equipped with a probe spherical-aberration corrector for STEM (CEOS, Germany). The probe convergence angle is 19 mrad and the collection angle range of the high-angle annular dark field (HAADF) detector is set from 62 to 164 mrad. HAADF-STEM images are captured with an electron probe size of 8C and a pixel dwell time of 38 μs over a 512 × 512 pixel scanning area. The Atomic Simulation Environment (ASE) [35] is used to generate the atomic models of Pt nanoparticles used in the kinetic simulation of the HAADF-STEM images of Pt nanoparticles [36]. For shape identification, the structural models used in Fig. 1 are idealized tetrahedra and octahedra.

In situ heating experiments. Pt nanoparticles are deposited onto Wildfire nano twist chips covered with amorphous silicon nitride (SiN) support films (DENSsolutions, Netherlands), which can withstand heating up to 1300°C.
During the in situ experiment, the chips are heated to the required temperatures at a pre-set heating rate of 1000°C/min. After holding at the annealing temperature for 1 min, the chips are quenched to room temperature at a cooling rate of 2000°C/min. The Pt nanoparticles are examined by HAADF-STEM imaging, with a beam current density of 15 pA/cm², a pixel dwell time of 10 μs and an image size of 512 × 512 pixels. The resultant electron dose rate is controlled at 0.7-1 × 10⁴ electrons/Å² per frame.

Fig. 4c (caption): Simulation of the annealing of an octahedral Pt2030 structure, heating from room temperature (RT) to 700 and 900°C and cooling back to RT, with rates of 1 K/ns (heating) and -1 K/ns (cooling). The simulation is indicative of the trend of the structural evolution, which agrees with the experiment. The arrows indicate the truncated tips. 2 nm scale bars are used for all the experimental and simulation images.

Simulation methods. Molecular dynamics (MD) growth simulations are performed using the same type of procedure adopted in refs. 26 and 37. Simulations start from a truncated-octahedral structure of 201 atoms. Atoms are deposited one by one, isotropically from random directions, every 1 or 10 ns, corresponding to deposition rates of 1 and 0.1 atoms/ns. Simulations are stopped after the deposition of about 800 atoms, i.e., at a nanoparticle size of about 1000 atoms. The equations of motion are solved by the velocity Verlet algorithm with a time step of 5 fs. The temperature is kept constant by an Andersen thermostat. Seven different temperatures from 300 to 900 K are considered. For each temperature and deposition rate, five independent simulations are made. Annealing and quenching MD simulations are performed by heating the nanoparticles in steps of 1 K every ns. Some further simulations for larger sizes (up to more than 14,000 atoms) were also done, at a deposition rate of 1 atom/ns and for temperatures between 700 and 900 K. Pt-Pt interactions are modelled by the Gupta potential [38]. The form and parameters of the potential are given in ref. 30, where it has been demonstrated that the potential is able to reproduce the experimental growth structures of PtPd alloy nanoparticles in both the Pt-rich and the Pd-rich cases. In order to evaluate the energetic stability of the structural motifs, we have calculated the excess energy E_exc, defined as [24]

E_exc = (E − Nε) / N^(2/3),

where ε is the cohesive energy per atom in bulk Pt, and E and N are the binding energy and the number of atoms of the nanoparticle.

Data availability
All experimental and simulation data in the main text and the supplementary materials are available upon request to the authors.

Code availability
Molecular dynamics codes used in the simulation are available upon request to ferrando@fisica.unige.it.
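As a small worked illustration of the excess-energy definition above, the helper below compares two hypothetical motifs; the binding energies are invented and the bulk cohesive energy is an approximate literature value for Pt (the paper obtains E from the Gupta potential), so only the structure of the calculation is meaningful.

# Illustration of the excess-energy definition above. The binding energies
# are invented and eps_bulk is an approximate literature value for Pt.
def excess_energy(E_binding, n_atoms, eps_bulk=-5.84):
    """E_exc = (E - N*eps) / N**(2/3): size-normalized penalty vs. bulk."""
    return (E_binding - n_atoms * eps_bulk) / n_atoms ** (2.0 / 3.0)

n = 1000
for label, E in (("motif A (more stable)", -5700.0),
                 ("motif B (less stable)", -5680.0)):
    print(f"{label}: E_exc = {excess_energy(E, n):.3f} eV")
# Lower E_exc means a smaller energy penalty relative to bulk Pt,
# i.e., a more stable motif at that size.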
Australian National Enterovirus Reference Laboratory annual report, 2015

Australia conducts surveillance for cases of acute flaccid paralysis (AFP) in children less than 15 years of age, as recommended by the World Health Organization (WHO), as the main method to monitor its polio-free status. Cases of AFP in children are notified to the Australian Paediatric Surveillance Unit or the Paediatric Active Enhanced Disease Surveillance System, and faecal specimens are referred for virological investigation to the National Enterovirus Reference Laboratory. In 2015, no cases of poliomyelitis were reported from clinical surveillance and Australia reported 1.2 non-polio AFP cases per 100,000 children, meeting the WHO performance criterion for a sensitive surveillance system. Two non-polio enteroviruses, enterovirus A71 and coxsackievirus B3, were identified from clinical specimens collected from AFP cases. Australia complements the clinical surveillance program with enterovirus and environmental surveillance for poliovirus. Two Sabin-like polioviruses were isolated from sewage collected in Melbourne in 2015; these would have been imported from a country that uses the oral polio vaccine. The global eradication of wild poliovirus type 2 was certified in 2015 and Sabin poliovirus type 2 will be withdrawn from the oral polio vaccine in April 2016. Laboratory containment of all remaining wild and vaccine strains of poliovirus type 2 will occur in 2016, and the National Enterovirus Reference Laboratory has been designated as a polio essential facility. Globally, in 2015, 74 cases of polio were reported, only in the two remaining countries endemic for wild poliovirus: Afghanistan and Pakistan. This is the lowest number reported since the global polio eradication program was initiated.

Introduction

Australia has established clinical and virological surveillance schemes to monitor its polio-free status. The clinical surveillance follows the World Health Organization (WHO) recommendation of investigating cases of acute flaccid paralysis (AFP) in children less than 15 years of age, an age group at high risk of poliovirus infection. AFP cases are ascertained either by clinicians notifying the Australian Paediatric Surveillance Unit (APSU) via a monthly report card or through the Paediatric Active Enhanced Disease Surveillance System (PAEDS) at five sentinel tertiary paediatric hospitals [1,2,3]. The WHO recommends that two faecal specimens be collected for virological investigation, at least 24 hours apart and within 14 days of the onset of paralysis, from cases of AFP to exclude poliovirus as the causative agent. It is a requirement of the WHO polio eradication program that the specimens be tested in a WHO-accredited laboratory, which for Australia is the National Enterovirus Reference Laboratory (NERL) at the Victorian Infectious Diseases Reference Laboratory (VIDRL). The clinical and laboratory data from AFP cases in children are reviewed by the Polio Expert Panel (PEP) and reported to the WHO as evidence of Australia's continued polio-free status. Enterovirus and environmental surveillance programs were established as virological surveillance for poliovirus, to complement the clinical surveillance program focussed on AFP cases in children. Enteroviruses other than poliovirus have been associated with AFP, and poliovirus infection may manifest clinically without paralysis.
The Enterovirus Reference Laboratory Network of Australia (ERLNA) involves public diagnostic virology laboratories reporting enterovirus typing results from clinical specimens, to exclude poliovirus involvement and to establish the epidemiology of non-polio enteroviruses (NPEVs) in Australia. Most poliovirus infections are asymptomatic, with the virus shed for weeks in the faeces of infected persons. WHO supports the testing of environmental, or raw sewage, samples as a means of detecting the presence of wild poliovirus in polio-free countries. The testing of environmental samples commenced at a sentinel site in metropolitan Melbourne in late 2014. The number of wild polio cases worldwide decreased from 359 in 2014 to 74 in 2015 [4]. Cases were reported only from the two remaining polio-endemic countries: Afghanistan and Pakistan. Nigeria was declared polio-free in September 2015, after more than 12 months with no detection of wild poliovirus [5]. Only wild poliovirus serotype 1 was detected in 2015, with the last report of wild poliovirus type 3 in Nigeria in November 2012 [6]. The global eradication of wild poliovirus type 2 was certified in September 2015, with the last detection reported in India in 1999 [7]. This achievement has led to the planned, globally synchronised withdrawal of Sabin 2 poliovirus from the oral polio vaccine (OPV), along with laboratory containment of this serotype from 2016, which will involve restricted access at a limited number of facilities worldwide [8]. The three poliovirus serotypes will remain in the inactivated polio vaccine, and all countries will incorporate at least one dose of inactivated polio vaccine in the routine immunisation schedule to maintain immunity to poliovirus type 2 ahead of the switch from trivalent OPV to bivalent OPV in April 2016. In May 2014, the WHO Director-General declared the international spread of wild poliovirus during the northern hemisphere low season to be a Public Health Emergency of International Concern. The situation has been assessed every three months since then and the declaration has remained in place, with countries known to be exporting wild poliovirus required to ensure that all residents and long-term visitors are vaccinated between four weeks and 12 months prior to international travel [9]. At the seventh meeting of the Emergency Committee, in November 2015, outbreaks of circulating vaccine-derived poliovirus (cVDPV) were added to the declaration. These cVDPV outbreaks are indicative of gaps in routine immunisation, and the type 2 cVDPV outbreaks in Guinea, Myanmar, Nigeria and Pakistan during 2015 are cause for public health concern in the lead-up to the switch to bivalent OPV [10]. This report summarises the polio surveillance program in Australia for 2015, encompassing clinical surveillance for AFP cases in children and virological surveillance for poliovirus.

Acute flaccid paralysis surveillance

Paediatricians reviewing a patient less than 15 years of age presenting with AFP, or clinicians reviewing a patient of any age with suspected poliomyelitis, are requested to notify the NERL. Cases of suspected poliomyelitis are notifiable under the National Notifiable Diseases Surveillance System [11]. Paediatricians notify AFP cases to the APSU via a monthly report card. Upon receipt of the notification, the AFP National Surveillance Co-ordinator, based within the NERL, forwards a clinical questionnaire for the clinician to complete.
Alternatively, AFP cases are ascertained by PAEDS nursing staff from medical records and are enrolled in the surveillance program with parental or guardian consent. According to the WHO surveillance criterion, to be classified as adequate, two faecal specimens must be collected more than 24 hours apart (because of intermittent virus shedding) and within 14 days of the onset of paralysis, while the virus titre remains high. The faecal specimens are tested free of charge by the NERL. The PEP, a subcommittee of the Communicable Diseases Network Australia, reviews the clinical and laboratory data for all notified cases of AFP, irrespective of whether they are an eligible or an ineligible case. An eligible case is an Australian child less than 15 years of age with AFP (including Guillain-Barré syndrome and transverse myelitis) or an Australian of any age with suspected polio.

Virus culture

Upon receipt at the NERL, faecal specimens are treated with minimum essential medium containing Earle's salts, chloroform (9.1% v/v) and phosphate-buffered saline. The suspension is clarified and the supernatant inoculated onto the two mammalian cell lines recommended by WHO for the isolation of poliovirus: L20B (a transgenic mouse epithelial cell line expressing the human poliovirus receptor, CD155) and RD-A (human rhabdomyosarcoma) [13,14]. Two WHO real-time reverse transcription polymerase chain reaction (RT-PCR) tests are used to determine whether a poliovirus is a wild strain, an oral polio vaccine strain (Sabin-like) or a vaccine-derived poliovirus (VDPV), in a process known as intratypic differentiation (ITD) [15]. The NERL sequences the complete poliovirus viral protein 1 (VP1) genomic region, which contains a major neutralising antibody binding site. The VP1 genomic sequence provides valuable biological information, including the number of mutations within a significant region of the OPV virus strain, and it enables phylogenetic analysis of wild poliovirus to rapidly determine the likely source of the virus, as utilised in the 2007 wild poliovirus importation [16].

Enterovirus surveillance

The ERLNA was established primarily as a means of detecting imported poliovirus amongst untyped enteroviruses from clinical specimens. The NERL screens clinical specimens for enterovirus using a semi-nested RT-PCR directed at a highly conserved sequence in the 5' non-translated region (NTR) [17]. Enterovirus typing is primarily performed by amplifying a fragment of the VP1 genomic region according to a published method [18], although the complete nucleotide sequence of VP1 is required to type some enteroviruses. The enterovirus typing RT-PCR is directed at a region of sequence divergence that allows differentiation between enterovirus genomes. As a consequence, the enterovirus sequence-based typing assay is not as sensitive as the pan-enterovirus detection assay. This can result in an enterovirus being detected by the pan-enterovirus RT-PCR in a clinical specimen without subsequent identification by the VP1 enterovirus typing assay.

Environmental surveillance

Environmental samples are processed by the NERL according to the two-phase separation procedure published by WHO [19]. In brief, 800 ml of sewage is collected as a grab sample prior to any biological or chemical treatment and referred to the NERL within 24 hours. At the laboratory, 500 ml of the sample is vigorously shaken at 4°C with dextran, polyethylene glycol and sodium chloride.
The mixture is incubated overnight at 4°C in a separating funnel, and the lower organic phase is collected the next day and clarified with chloroform. The sample extract is then inoculated onto the L20B and RD-A cell lines and observed microscopically for cytopathic effect, as for faecal specimens. All enterovirus isolates from cell culture are typed by nucleic acid sequencing as described in the Methods section for enterovirus surveillance.

Classification of AFP cases

A total of 73 notifications of AFP cases involving children less than 15 years of age were received in 2015 (Table 1). The PEP classified 53 cases as non-polio AFP, a rate of 1.2 cases per 100,000 children less than 15 years of age, which exceeds the WHO AFP surveillance performance criterion for a polio-free country of one case of non-polio AFP per 100,000 children (Table 2, Figure 1). Seventeen cases were notified by more than one source, whether by two or more clinicians or by a clinician and the PAEDS system. Three notifications were deemed ineligible because the patient's age was greater than 14 years or the clinical presentation was subsequently determined not to be AFP. In 2015, an Australian adult was hospitalised with fever, weakness and significant back pain upon returning from Pakistan. High signal in the anterior horn cell region of the patient's spinal cord on magnetic resonance imaging, and the detection of enterovirus in a faecal specimen by the local laboratory, led to the case being investigated as suspected poliomyelitis. Salmonella Paratyphi was isolated from blood culture and the final diagnosis was post-infectious inflammation presenting as acute disseminated encephalomyelitis. The NERL identified the enterovirus as type A76, one of the newly described enteroviruses, which the laboratory had not detected in Australia before.

Notification of AFP cases by state and territory

In 2015, AFP cases were notified from all jurisdictions in Australia except the Australian Capital Territory (Table 1). This result may not be surprising, since the Australian Capital Territory is expected to report only one case every one to two years based on its population less than 15 years of age; however, no AFP cases have been notified from this jurisdiction since 2009. The non-polio AFP rates for eligible cases by jurisdiction exceeded the WHO AFP surveillance performance indicator of one case per 100,000 children in all other states and territories except New South Wales; this was the second year in a row that Australia's most populous state did not meet this surveillance criterion.

Faecal collection from AFP cases

A total of 73 faecal specimens from 42 of the 53 eligible cases were tested at the NERL in 2015. Fifteen AFP cases met the WHO criterion for specimen testing, with two faecal specimens collected within 14 days of the onset of paralysis (Figure 2, Tables 2 and 3). (Footnote a to Figure 1: the WHO AFP surveillance performance indicator for a polio non-endemic country is one case per 100,000 children <15 years of age, highlighted by the red line.) The proportion of cases with at least one specimen collected within 14 days of the onset of paralysis was 75%, while 83% of cases had a specimen collected at any time after the onset of paralysis. No poliovirus was detected in any of the specimens. Enterovirus A71 was isolated from one AFP case each originating from Queensland, South Australia and Western Australia, while coxsackievirus B3 was isolated from one AFP case in Western Australia.
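The reported case count and rate together imply the size of the surveillance denominator; a quick check (our arithmetic, with the population figure derived rather than taken from the report) is below.

# Quick check of the non-polio AFP rate reported above. Only the 53 cases
# and the 1.2-per-100,000 rate come from the report; the population figure
# is back-calculated, not quoted.
cases_non_polio_afp = 53
rate_per_100_000 = 1.2

implied_children_under_15 = cases_non_polio_afp / rate_per_100_000 * 100_000
print(f"Implied population <15 years: ~{implied_children_under_15:,.0f}")
# -> ~4,416,667, i.e., a denominator of roughly 4.4 million children.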
Enterovirus surveillance

Putative poliovirus samples in long-term storage were referred by a laboratory and were subsequently identified as Sabin poliovirus type 1. A total of 353 NPEVs were typed by members of the Enterovirus Reference Laboratory Network of Australia from clinical specimens (Tables 3 and 4). The most common genotypes identified, in order of decreasing frequency, were coxsackievirus B5, coxsackievirus A6, echovirus 6 and echovirus 18, collectively accounting for more than half of the total, while only sporadic detections of enterovirus A71 were reported.

Environmental surveillance

Twenty-nine sewage samples were collected at a sentinel site in metropolitan Melbourne from January to August 2015. Two Sabin-like polioviruses were isolated in this period: type 3 in February and type 2 in March. The type 3 Sabin-like poliovirus had one nucleotide difference from the Sabin prototype sequence, indicative of a recent vaccination event, but the type 2 poliovirus had four mutations compared with the prototype sequence, suggestive of five months of replication. NPEVs act as an indicator organism for the collection, transport and test procedures, and were identified from 26 samples. Enterovirus RNA was detected in one other sample but in an amount insufficient for typing, while rhinovirus was identified in another sample.

[Table 2: Australia's surveillance for cases of acute flaccid paralysis, 2015, compared with the main World Health Organization performance indicators.]

Polio regional reference laboratory activities

As part of its role as a Polio Regional Reference Laboratory, in 2015 the NERL received specimens from AFP cases referred from Brunei Darussalam (4 cases), Pacific Island countries (17 cases) and Papua New Guinea (26 cases). Sabin-like poliovirus type 3 was isolated from one AFP case from Fiji and Sabin-like poliovirus type 1 from one case from Papua New Guinea. NPEVs were reported from one AFP case from Brunei Darussalam, two cases from the Pacific Islands and 10 AFP cases from Papua New Guinea.

Quality assurance programs

In 2015, the NERL was accredited as a WHO Polio Regional Reference Laboratory through participation in the annual WHO poliovirus isolation quality assurance panel. The laboratory was accredited for quality and competence as a medical laboratory by the National Association of Testing Authorities, and also successfully participated in the Royal College of Pathologists of Australasia quality assurance panel for enterovirus detection by RT-PCR.

Discussion

Australia has met the WHO non-polio AFP surveillance target for the eighth year in a row, reporting 1.2 cases per 100,000 children less than 15 years of age. The notification of AFP cases via the APSU monthly report card and the PAEDS system has routinely met the international standard that assesses whether an imported case of polio in children less than 15 years of age would be detected, although gaps in AFP surveillance were noted at the sub-national level in the Australian Capital Territory and New South Wales [1,2,3]. Australia has never met the strict WHO surveillance target for adequate stool collection from 80% of the non-polio AFP cases; however, 75% of the cases had at least one specimen collected within 14 days of the onset of paralysis. Enterovirus and environmental surveillance for poliovirus supplement the AFP surveillance program and together provide a comprehensive surveillance system monitoring Australia's polio-free status.
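The replication-time estimates attached to the sewage isolates above (one mutation indicating a recent vaccination event, four mutations suggesting roughly five months of replication) are consistent with the commonly cited poliovirus molecular clock; the sketch below uses an assumed rate of about 1.1 × 10⁻² substitutions per VP1 site per year, which is not stated in the report itself.

# Back-of-envelope molecular-clock check (assumed rate, not the NERL's
# stated method): poliovirus VP1 (~906 nt) accumulates substitutions at
# roughly 1.1e-2 per site per year, i.e., about 10 nt per year.
vp1_length_nt = 906
rate_per_site_per_year = 1.1e-2
subs_per_year = vp1_length_nt * rate_per_site_per_year

for n_mutations in (1, 4):
    months = 12 * n_mutations / subs_per_year
    print(f"{n_mutations} VP1 mutation(s) ~ {months:.1f} months of replication")
# -> 1 mutation ~ 1.2 months (a recent vaccination event)
# -> 4 mutations ~ 4.8 months (consistent with the ~5-month figure above)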
Four key points in 2015 highlight the significant progress made in the WHO polio eradication program:

1. The reporting of 74 polio cases worldwide, all caused by wild poliovirus type 1, is the lowest number recorded since the goal of global polio eradication was declared in 1988.

(Footnote a to Table 2: the main WHO criterion for adequate specimen collection is two faecal specimens collected more than 24 hours apart and within 14 days of the onset of paralysis from 80% of the cases classified as non-polio AFP.)

The withdrawal of poliovirus type 2 from the oral polio vaccine means that the last remaining stocks of the virus will be held by vaccine production facilities and by research and diagnostic laboratories. WHO has requested all facilities to destroy any unwanted material containing type 2 poliovirus or to transfer it to a polio essential facility that complies with the strict laboratory containment regulations stipulated in the 3rd edition of the WHO Global Action Plan to minimize poliovirus facility-associated risk after type-specific eradication of wild polioviruses and sequential cessation of OPV use (GAPIII) [21]. Wild poliovirus type 2 material must have been destroyed or contained by 31 December 2015, and Sabin poliovirus type 2 material must be destroyed or contained by 31 July 2016. At the end of 2015, the Australian government nominated the NERL to WHO as a polio essential facility for wild and OPV/Sabin poliovirus strains, which will enable the laboratory to continue to fully characterise all polioviruses detected in Australia. The detection of two Sabin poliovirus strains, including a type 2 strain estimated to have been replicating in one or more persons for more than five months, from sewage collected in Melbourne during 2015, is a stark reminder that poliovirus importations have continued to occur since Australia ceased using the oral polio vaccine in 2005. Laboratories in Australia are recommended to refer any putative poliovirus from any source immediately to the NERL for full characterisation. The identification of Sabin poliovirus type 1 from archived laboratory samples demonstrates that not all stocks of poliovirus have been accounted for in Australia, supporting the recommendation made by the National Certification Commission for Poliomyelitis Eradication to review the laboratory containment of poliovirus [22].
Effects of Repeated in-vitro Exposure to Saudi Honey on Bacterial Resistance to Antibiotics and Biofilm Formation

Introduction: Although Sumra and Sidr Saudi honey is widely used in traditional medicine owing to its potent activity, it is unknown whether prolonged usage has an impact on bacterial virulence or leads to reduced antibiotic sensitivity. Thus, this study aims to investigate the effect of prolonged (repeated) in-vitro exposure to Saudi honey on the antibiotic susceptibility profiles and biofilm formation of pathogenic bacteria.

Methods: Several bacteria, including Staphylococcus aureus, Escherichia coli, and Acinetobacter baumannii, were exposed in vitro ten times [passaged (P10)] to Sumra and Sidr honey individually to generate adapted bacteria (P10). The antibiotic susceptibility profiles of untreated (P0) and adapted (P10) bacteria were assessed using disc diffusion and microdilution assays. The tendency toward biofilm formation following in-vitro exposure to honey (P10) was assessed using the crystal violet staining method.

Results: Bacteria adapted (P10) to both Sumra and Sidr honey showed increased sensitivity to gentamicin, ceftazidime, ampicillin, amoxycillin/clavulanic acid, and ceftriaxone compared with the parent strains (P0). In addition, A. baumannii (P10) adapted to Sidr honey displayed a 4-fold increase in the minimal inhibitory concentration of the same honey following in-vitro exposure. A 3-fold reduction in the tendency toward biofilm formation was observed for the Sumra-adapted (P10) methicillin-resistant S. aureus strain, although there was a lower rate of reduction (1.5-fold) in biofilm formation by both the Sumra- and Sidr-adapted A. baumannii (P10) strains.

Conclusion: The data highlight the positive impact of prolonged in-vitro exposure to Saudi honey (Sumra and Sidr) on wound-associated bacteria, since the bacteria displayed a significant increase in their sensitivity to the tested antibiotics and a reduction in their ability to form biofilm. The increased bacterial sensitivity to antibiotics and the limited tendency toward biofilm formation suggest great potential for the therapeutic use of this Saudi honey (Sumra and Sidr) to treat wound infections.

Introduction

Antimicrobial resistance (AMR) is a well-defined, continuously emerging phenomenon that poses serious threats and challenges to human health worldwide [1,2]. This is mainly due to the emergence of bacterial strains that cannot be treated using the currently available drugs; these are known as multidrug-resistant (MDR) bacteria or "superbugs" [1]. Multiple factors, including globalization and poor hygiene, might trigger or enhance the emergence of AMR, although antibiotic misuse is most often claimed to be the main predisposing factor in the establishment and development of AMR [1]. The misuse of antibiotics includes the under- and over-use of these drugs by different means, such as the incorrect duration or dosage of prescribed antibacterial agents, in different sectors (eg, veterinary medicine, agriculture and aquaculture), including human health [3]. In addition, over-the-counter sales, or unguided empirical prescriptions of antibiotics by doctors or specialists where there is no real need for their use (eg, for patients with a viral infection), are commonly observed examples of unprofessional practice globally across different sectors [4].
AMR has led to a worldwide health and economic burden, because infections caused by MDR bacteria are associated with the highest rates of mortality and economic loss [1,5,6]. In 2006, it was reported that about 50,000 deaths were attributed to hospital-acquired infections (HAIs) caused by drug-resistant strains in the United States of America (USA) alone [7]. It has also been estimated that, globally, about 10 million deaths annually will be caused by AMR by 2050, unless targeted actions are implemented before that date. Furthermore, AMR increases the annual cost of healthcare by 20 billion dollars in the USA alone [8]. Although multiple actions are being taken to combat AMR, scientists have focused on antibiotic-alternative approaches and therapies, owing to the high probability of their effectiveness and the limited development of resistance compared with conventional antibiotics [1]. These approaches include the use of plant-based products (eg, honey) as antibacterial agents to treat drug-resistant bacteria; an example is Manuka, or Leptospermum scoparium, honey, which is named after a species of tea tree native to New Zealand [9]. Manuka honey has displayed activity toward multiple Gram-positive and Gram-negative drug-resistant pathogenic bacteria, including Escherichia coli, methicillin-resistant S. aureus (MRSA), antibiotic-resistant beta-haemolytic streptococci and vancomycin-resistant enterococci (VRE) [9,10]. Despite the considerable tolerance of biofilms to antibiotics and the great challenge they pose to medicine, an in-vitro study showed the successful eradication of P. aeruginosa and S. aureus biofilm growth using Manuka honey [11,12]. Interestingly, the sensitivity of Manuka-treated pathogenic bacteria to conventional antibiotics increased compared with untreated ones [13]. Saudi Arabia produces multiple types of honey that are classified according to the tree, nectar and/or location of the hives, including Sider, Talah and Sumra honey [14]. Sider and Sumra honey are commonly used in traditional medicine to treat various infections as well as gastric ulcers, probably owing to their antioxidant, anti-inflammatory and antibacterial properties [15]. Sider honey at an 80% w/v concentration displayed potent activity against S. aureus, S. epidermidis and Shigella flexneri, while Dharm honey at the same concentration showed a superior inhibitory effect toward E. coli, Proteus mirabilis, S. epidermidis and Shigella flexneri [16]. In addition, promising sensitivity of Gram-positive and Gram-negative bacteria, including B. cereus, S. aureus, E. coli and Salmonella enterica, as well as the fungus Trichophyton mentagrophytes, to Sider honey has also been reported. Moreover, Saudi Sumra honey has been proven effective against multidrug-resistant S. aureus, E. coli, and Candida albicans in single and/or polymicrobial cultures [17]. Although the above-mentioned studies highlight the promising activity of Saudi honey and its historical usage in traditional medicine to treat infectious diseases, there is a clear research gap concerning the effect of prolonged, repeated in-vitro exposure to Saudi honey on bacterial resistance to conventional antibiotics, as well as on the ability of bacteria to form biofilm. Such an in-vitro study is essential for any potential therapeutic use of Saudi honey to treat infections caused by MDR and biofilm-forming pathogenic bacteria.
Accordingly, the current study aims to investigate the antibiotic susceptibility and biofilm formation of honey-treated (adapted) pathogenic bacteria following repeated in-vitro exposure to various types of Saudi honey, in comparison with the same parent (untreated) bacterial strains.

Collection of Bacterial Isolates, Honey Samples and Antibiotic Discs

Several Gram-positive (n = 7) and Gram-negative (n = 6) clinical and reference bacterial isolates associated with wound infections were used in this study, with Staphylococcus aureus and Staphylococcus epidermidis as the Gram-positive indicators, and Acinetobacter baumannii, Escherichia coli and Pseudomonas aeruginosa as the Gram-negative strains. Clinical isolates were collected from King Khalid Hospital in Hail, Saudi Arabia, as part of the routine diagnostic procedure. To generate data with good reliability and reproducibility for practical comparison, reference Gram-positive and Gram-negative bacterial strains were purchased from the American Type Culture Collection (ATCC; Manassas, United States) and the National Collection of Type Cultures (NCTC; Public Health England, Salisbury, United Kingdom). All of the isolates were recovered, cultured and/or maintained on Mueller-Hinton agar (MHA) and/or in Mueller-Hinton broth (MHB) (Thermo-Fisher Scientific, Massachusetts, United States). Sumra and Sider honey was obtained from an authenticated honey seller in Al-Baha city, Saudi Arabia (Al Amari, Al-Baha, Saudi Arabia) [17]. In addition, medical-grade Manuka honey (Methylglyoxal 400), which was utilized as a positive control, was purchased from ManukaGuard (Monterey, California, USA) [9]. All of the obtained honey was inoculated onto MHA plates and incubated at 37°C for 24 hours to ensure the absence of any contamination (a purity check), then prepared as a 90% stock solution using sterile distilled water (dH2O) before any experimental work was conducted. Antibiotic discs, including gentamicin (120 μg), ceftazidime (30 μg), ampicillin (10 μg), amoxycillin/clavulanic acid (30 μg), ceftriaxone (30 μg) and imipenem (10 μg), were obtained from Condalab (Madrid, Spain) and stored at -20°C in a dry place.

Long-Term Exposure of Bacterial Isolates to Honey Samples (Adaptation Process)

To evaluate the effect of long-term in-vitro exposure of bacterial isolates to the various honey samples, each individual bacterium was repeatedly exposed (ten times) to each tested honey sample (Sumra, Sider and Manuka) using an agar-based diffusion method [13]. In brief, wells of approximately 6 millimetres (mm) were formed at the center of Mueller-Hinton agar plates, after which 200 μL of a sub-lethal concentration of each tested honey (30 mg/mL dissolved in sterile dH2O) was added aseptically into the central well(s). Each parent (unexposed) bacterial isolate [passage zero (P0)] was distributed radially in triplicate around the central well and incubated at 37°C for 24 hours. Following incubation, the bacteria that had grown in close proximity to the well were aseptically re-inoculated onto a plate containing the same or a higher concentration of the tested honey. This procedure was repeated for a total of ten passages to generate the adapted bacteria (P10) for each type of honey tested.
The parent strains (P0) were also sub-cultured ten times on MHA (without any honey) as a negative control [(P0) negative control], to ensure that any potential changes in the antibiotic susceptibility of the adapted bacteria (P10) could be attributed to the honey exposure alone (the treatment) rather than to the sub-culturing process itself.

Antibiotic Susceptibility Testing

To assess the effect of honey on bacterial sensitivity to antibiotics, the antibiotic susceptibility profiles of the isolates prior to their exposure to honey [(P0) negative control] as well as those of the isolates that had been treated ten times with honey (P10) were determined [18]. This was performed according to the standardized British Society for Antimicrobial Chemotherapy (BSAC) disc diffusion method for antimicrobial susceptibility testing using MHA plates [18]. The susceptibility profiles of all of the Gram-positive isolates were determined against five different antibiotics (gentamicin, ceftazidime, ampicillin, amoxycillin/clavulanic acid and ceftriaxone), whereas imipenem, in addition to these five antibiotics, was used for the evaluation of the Gram-negative isolates. The plates were incubated for 24 hours at 37°C, after which the zones of inhibition were measured in millimetres (mm). The testing was carried out in triplicate for each bacterium to provide more reliable and reproducible data, and the averages of the three readings and the standard deviations were calculated.

Antibacterial Activity of Various Types of Honey Against Adapted (Honey-Treated) and Parent (Untreated) Bacterial Strains

The minimum inhibitory concentrations (MICs) and minimum bactericidal concentrations (MBCs) of Sider and Sumra honey against the adapted clinical isolates (P10) were assessed using a broth microdilution method [18]. Overnight bacterial cultures of clinical isolates with zero exposure (P0), as well as isolates with long exposure (P10), were adjusted to a specific optical density (OD600 of 0.8), known as 0.5 McFarland, followed by a 1:100 dilution in MHB. Next, after the bacterial suspension (100 µL) had been transferred to a 96-well microtiter plate, multiple concentrations (0-1000 mg/L) of the tested honey were added. The inoculated plates were incubated aerobically at 37°C for 24 hours. The MIC values were then determined visually as the lowest concentration of the tested honey at which bacterial growth was inhibited. The MBC was then assessed by transferring 5 μL aliquots from wells exhibiting no visible turbidity onto MHA plates, followed by overnight incubation at 37°C. The MBC values were determined as the lowest concentration of the tested honey that led to the complete absence of bacterial growth on the MHA plate.

Biofilm Formation Assay

A crystal violet assay was used to assess biofilm formation by the clinical bacterial strains that had been treated with each type of honey (P10) and by the untreated isolates [(P0) negative control] [19]. The overnight bacterial cultures were diluted (1:100) in MHB, transferred to a sterile 96-well microtiter plate and incubated at 37°C for 48 hours. Next, the bacterial suspension was discarded and the wells were washed twice in sterile phosphate-buffered saline (PBS) (Thermo-Fisher Scientific, Massachusetts, USA) before staining with 250 μL of 1% (w/v) crystal violet solution (Sigma-Aldrich, Missouri, USA) at room temperature until dry. Ethanol was then added to each well and the absorbance (OD600) was measured using a plate reader.
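For the broth-microdilution step described above, the visual MIC call can be mimicked programmatically with a simple OD threshold; the sketch below is illustrative only (the OD values and the 0.1 cut-off are invented, and the authors scored growth by eye).

# Illustrative MIC read-out for one isolate from the microdilution format
# described above. The OD600 values and the 0.1 growth cut-off are invented;
# only the concentration range (0-1000 mg/L) follows the text.
import numpy as np

conc_mg_per_l = np.array([1000, 500, 250, 125, 62.5, 31.25, 0.0])
od600 = np.array([0.05, 0.06, 0.05, 0.31, 0.62, 0.88, 0.95])

growth = od600 > 0.1                 # assumed threshold for visible growth
inhibited = conc_mg_per_l[~growth]

mic = inhibited.min() if inhibited.size else None
print(f"MIC = {mic} mg/L")           # lowest concentration with no growth -> 250.0
# Wells at or above the MIC (no visible turbidity) would then be subcultured
# onto MHA to determine the MBC, as described above.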
The experiments were performed in triplicate and the data for the isolates with zero exposure (P0) were compared with the data for the treated strains (P10).

Statistical Analysis

The statistical analysis was conducted using GraphPad Prism (version 9). The analysis was performed on the biofilm formation assay readings to determine any potential significant changes between the parent (untreated, P0) and adapted (honey-treated, P10) bacteria using the t-test.

Effect of Honey Exposure on Bacterial Sensitivity to Antibiotics

In this study, certain honey-treated bacterial strains, which had previously been exposed in vitro ten times to a single type of honey, showed a significant increase in the size (mm) of the zones of inhibition (ZOI) produced by the tested antibiotics; this varied from one type of honey to another. Most of the significant changes were observed in the tested reference strains rather than in the clinical isolates (Figures 1 and 2). The sensitivity data for each of the tested antibiotics and types of honey are presented separately in the following sections.

Alteration in the Susceptibility to Gentamicin

The S. epidermidis (ATCC 12228) that was treated with Sumra and Sider honey showed an increased sensitivity to gentamicin. The largest inhibitory zones for gentamicin were observed in the adapted S. aureus (ATCC 25923) that had been exposed ten times to Manuka and Sider honey individually, while S. pyogenes (ATCC 19615) showed an increased susceptibility to gentamicin following separate exposure to each of the types of honey tested. The Gram-negative strains that had been treated with Manuka honey exhibited a noticeable reduction in their resistance to gentamicin. The adapted (P10) E. coli (ATCC 14169) and E. coli (NCTC 12923) strains displayed larger zones of inhibition following long exposure (P10) to Manuka and Sider honey, respectively.

Alteration in the Susceptibility to Ceftazidime

Treatment with Sider honey resulted in greater sensitivity to ceftazidime in S. aureus (217), S. epidermidis (ATCC 12228) and S. aureus (ATCC 25923). S. aureus (ATCC 25923) treated (P10) with Manuka honey and S. pyogenes (ATCC 19615) treated with Sumra honey showed a similar increase in their sensitivity to ceftazidime. All of the remaining adapted (P10) bacteria showed inhibitory zones for ceftazidime of a similar size to those of the parent strains (P0). In addition, all of the adapted (P10) reference strains of E. coli had larger ZOIs for ceftazidime (from around 8 mm to over 20 mm) following exposure to each of the three types of honey individually (Sumra, Manuka and Sider). Moreover, the treated strains of P. aeruginosa (NCTC 12903) became more susceptible to ceftazidime after being exposed to Manuka honey ten times.

Alteration in Susceptibility to Ampicillin

… treatment with either Sumra or Manuka honey. Noticeably, S. aureus (ATCC 25923) that was exposed (P10) to Sumra and Sider honey separately became sensitive to ampicillin, with inhibitory zones of identical size (26 mm), compared with full resistance (ZOI of 0 mm) prior to exposure. In addition, S. pyogenes (ATCC 19615) became more sensitive to ampicillin following exposure to Manuka honey. The susceptibility of all of the tested Gram-negative bacteria remained unchanged, except for that of P. aeruginosa (NCTC 12903), which had previously been completely resistant to ampicillin but became sensitive to it, with a large ZOI, following in-vitro exposure (P10) to Sumra and Sider honey individually.
Alteration in Susceptibility to Amoxycillin/Clavulanic

Methicillin-resistant S. aureus (150) that was treated (P10) with Manuka honey showed a slight decrease in its ZOI. All of the tested reference isolates of S. aureus displayed significant changes in their sensitivity profiles to this antibiotic following exposure to the tested types of honey. S. aureus (ATCC 25923) and S. epidermidis (ATCC 12228), having been adapted (P10) to all three types of honey, exhibited a significant increase in the size of their inhibitory zones caused by amoxycillin/clavulanic, although larger zones caused by the same antibiotic were observed for S. pyogenes (ATCC 19615) following long exposure (P10) to Manuka honey. The most noticeable change in susceptibility was observed for the isolates that had been treated with Sider honey, including E. coli (NCTC 10418), E. coli (ATCC 14169), E. coli (NCTC 12923) and P. aeruginosa (NCTC 12903), as they became sensitive to amoxycillin/clavulanic, compared to their complete resistance to it prior to their exposure to Sider honey. In addition, the adapted E. coli (NCTC 12923) and P. aeruginosa (NCTC 12903) showed greater sensitivity to this antibiotic following exposure to Manuka honey.

Alteration in Susceptibility to Ceftriaxone

S. aureus (211), S. aureus (ATCC 25923), S. epidermidis (ATCC 12228) and S. pyogenes (ATCC 19615) that had been exposed to Sumra honey presented with a high susceptibility to ceftriaxone. Additionally, elevated activity by ceftriaxone was observed against S. aureus (ATCC 25923) and S. epidermidis (ATCC 12228) that had been treated ten times with Manuka and Sider honey, individually. Moreover, S. pyogenes (ATCC 19615) displayed a larger inhibitory zone following in-vitro exposure to Manuka honey alone. All of the adapted reference isolates of E. coli (NCTC 10418), E. coli (ATCC 14169), E. coli (NCTC 12923) and P. aeruginosa (NCTC 12903) exhibited an increased sensitivity to ceftriaxone following exposure to all of the types of honey tested.

Alteration in Susceptibility to Imipenem

The activity of imipenem was tested against the Gram-negative clinical and reference isolates, which revealed complete resistance and promising inhibition, respectively. There were no significant changes in the sensitivity profiles towards imipenem for any of the bacteria adapted (P10) to the three types of honey tested, except for P. aeruginosa (NCTC 12903), which displayed a larger inhibitory zone caused by imipenem following in-vitro exposure to Sider and Sumra honey separately.

Changes in the Honey's Antibacterial Activity Profiles Following Exposure to Sider and Sumra Honey

The MIC and MBC values of the Gram-negative bacteria were slightly increased in the adapted bacteria (P10) compared to the parent strains (P0) against both Sider and Sumra honeys. A. baumannii (20) that had been exposed (P10) to Sider honey displayed a 4-fold increase in its MIC value as well as slightly higher MBCs in comparison with the MIC and MBC values of the parent strain against the same honey. The Gram-positive bacteria showed no changes in their MIC values to either type of honey, while the values for the MBCs were somewhat elevated.

Effect of Bacterial Adaptation to Honey on Biofilm Formation

The Sumra and Sider honey-adapted bacterial strains were tested for biofilm formation by comparing their adherence to the used surface with that of an untreated (P0) bacterial strain (Figure 3). The following bacterial strains were tested: S. aureus …
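For the biofilm comparison in Figure 3, the crystal violet absorbance readings translate directly into a fold change between parent and adapted strains. A minimal sketch of that calculation is below; the OD600 values and the blank-subtraction step are illustrative assumptions, not the study's data:

```python
# Minimal sketch: fold change in biofilm formation from crystal violet OD600
# readings (triplicate wells). All values are hypothetical placeholders.
from statistics import mean

blank = 0.05                       # assumed medium-only background well
od_p0 = [1.20, 1.15, 1.25]         # untreated parent strain (P0)
od_p10 = [0.42, 0.38, 0.40]        # honey-adapted strain (P10)

biofilm_p0 = mean(od_p0) - blank
biofilm_p10 = mean(od_p10) - blank

fold_reduction = biofilm_p0 / biofilm_p10
print(f"P0 biofilm: {biofilm_p0:.2f}, P10 biofilm: {biofilm_p10:.2f}")
print(f"Fold reduction after adaptation: {fold_reduction:.1f}x")
```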
Discussion

Honey is well-valued and considered one of nature's finest gifts. Honey has been used for medicinal and dietary purposes for a long time in different nations all over the world. 20 In conventional medicine, honey is used to treat/control various non-infectious illnesses (eg, cancer and asthma) and numerous infectious diseases, including throat and wound infections, tuberculosis, and hepatitis. 21 Honey contains hundreds of compounds, such as amino acids, phenols, sugars, minerals, flavonoids, vitamins, and antioxidants. 17 Sumra and Sider are famous types of honey in Arab countries, and their antimicrobial activity has been documented against a wide range of pathogens, including drug-resistant bacteria and fungi. 17 Honey's antimicrobial activity has been attributed to several factors, such as the disruption of the bacterial cell membrane. 22 However, no information about potential bacterial resistance to honey has ever been reported. 23 Although bacterial adaptation to Manuka honey has been well described 13 and studied, there are no published data regarding the effect of adaptation to Saudi honey (the ≥ ten times in-vitro exposure of bacterial strains individually to the tested types of Saudi honey) on antibiotic sensitivity profiles and the ability to form biofilm. Therefore, the current project aimed to assess any potential changes in antibiotic sensitivity and biofilm formation for several clinical isolates following long-term in-vitro exposure to Sider and Sumra honey, individually. Several clinical isolates were repeatedly exposed (ten times) to Sumra and Sider honey separately to generate the adapted bacteria (P10), after which their antibiotic susceptibility and biofilm formation were evaluated. The majority of the adapted bacterial strains (P10) showed an increased sensitivity, based on their zone size, to five different antibiotics following repeated exposure to honey, although these changes were more commonly observed in the adapted reference strains than in the clinical isolates (Figures 1 and 2). This could be because reference isolates have historically been less exposed to antimicrobial compounds, including honey (less selection pressure for resistance), compared with clinical isolates that have been treated with various doses of numerous antibiotic drugs. 17 Thus, the effect of in-vitro exposure to honey on the reference strains in this study is more obvious and significant than that on the clinical strains. This suggests that further or longer exposure/treatment of the tested clinical isolates to these types of Saudi honey could have a more positive effect upon their sensitivity. The obtained data show that the majority of the bacteria that adapted to Saudi honey became more sensitive to the cell wall inhibitors, including ceftazidime, ampicillin, amoxycillin/clavulanic, ceftriaxone, and imipenem (Figures 1 and 2). In line with a previous report, 24 the increased sensitivity of the adapted bacteria to these antibiotics might be due to possible alterations in their cell wall permeability 25 and/or efflux systems, 26 following treatment with the tested honey. It has been claimed that the in-vitro exposure of bacterial isolates to an antimicrobial agent (eg, honey) can affect cell membrane integrity, which has been observed in bacteria that have been adapted to antimicrobials. 18,26 For instance, exposure to the antibiotic gentamicin has led to changes in cell wall permeability. 27
Another possible cause of the increased sensitivity of the adapted bacteria to antibiotics could be that the repeated in-vitro exposure to the honey used (known as the adaptation process) leads to a reduction in bacterial growth and fitness, 28 which in turn causes the adapted bacteria to become more sensitive to antibiotics. The MIC and MBC values of the Gram-negative and Gram-positive adapted bacteria against both types of honey tested (Sumra and Sider) were identical to those of the parent strains, except for A. baumannii (20) that was adapted (P10) to Sider honey, since a 4-fold increase in its MIC value toward this honey was observed when compared to the parent strain (P0) (Table 1). This observation is in line with a previous study that described how bacterial strains that had adapted to Manuka honey showed MIC values against Manuka honey similar to those of the parent strains. 13 However, the biofilm growth of P. aeruginosa adapted to Manuka honey showed a higher MIC value for Manuka honey as well as an alteration in its sensitivity toward certain antibiotic drugs. 29 Honey contains multiple compounds that engage in antibacterial activity through various modes of action, so low/limited resistance development is to be expected. 24,30 Nevertheless, it would be worth conducting protein expression and whole genome analyses of bacteria that had been treated with Saudi honey to reveal its bacterial targets as well as any possible genomic and proteomic changes. Biofilm formation is a well-known bacterial virulence factor that hampers wound healing and antimicrobial activity, leading to a severe form of infection, serious complications, treatment failure and death. 31 Former studies have reported the promising antibiofilm activity of multiple types of tested Saudi honey (eg, Sumra honey) for treating infections caused by biofilm-forming bacteria. 15,17 The current findings show a 3-fold reduction in biofilm formation for Sumra-adapted MRSA as well as an approximately 1.5-fold reduction in biofilm formation for both the Sumra- and Sider-adapted A. baumannii (20) strains (Figure 3). A reduction in biofilm formation of S. aureus has been reported following treatment with triclosan, which has been attributed to a reduced growth rate following exposure. 19 This is in line with previous work that reported a reduction in biofilm formation among Gram-positive bacteria that had adapted to Manuka honey, including S. aureus. 32 In contrast, a higher tendency toward biofilm formation was observed among all of the tested Gram-negative bacteria that had adapted to Manuka honey, although A. baumannii was not tested. 13 A. baumannii tends to be more resistant to antibiotics and is one of the most common bacteria associated with hospital-acquired infections, especially wound infections. A wound is a complex environment, in which A. baumannii usually forms biofilm, leading to a delayed healing process and/or treatment failure. 33 The tested A. baumannii exhibited a reduction in biofilm formation and low MICs for both types of Saudi honey. Although more strains of A. baumannii need to be tested for further validation and confirmation of these results, this finding is promising and supports the potential therapeutic use of the tested Saudi honey as an antimicrobial agent for treating wound infections caused by this bacterium.
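The MIC comparisons discussed here come down to reading the lowest honey concentration with no visible growth off a dilution series, and expressing the P0-to-P10 shift as a fold change. A minimal sketch, with hypothetical growth calls rather than study data:

```python
# Minimal sketch: read the MIC off a broth-microdilution series (lowest
# concentration with no visible growth) and compute the P0 -> P10 fold shift.
# Growth calls below are hypothetical placeholders, not study data.

def mic(series):
    """series: list of (concentration mg/L, visible growth?) sorted ascending."""
    for conc, growth in series:
        if not growth:
            return conc  # first concentration that inhibits growth
    return None          # no inhibition within the tested range

p0_series = [(62.5, True), (125, True), (250, False), (500, False), (1000, False)]
p10_series = [(62.5, True), (125, True), (250, True), (500, True), (1000, False)]

mic_p0, mic_p10 = mic(p0_series), mic(p10_series)
print(f"MIC P0 = {mic_p0} mg/L, MIC P10 = {mic_p10} mg/L")
print(f"Fold increase: {mic_p10 / mic_p0:.0f}x")  # 4-fold, as seen for A. baumannii (20)
```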
Although the current study successfully assessed the changes in the antibiotic sensitivity and biofilm formation of the tested bacteria following repeated in-vitro exposure to multiple types of Saudi honey, a few factors can be regarded as limitations of this study. The bacterial virulence could be evaluated using an in-vivo model (eg, larvae) to gain more knowledge about the pathogenesis of the bacteria following exposure to honey. In addition, an exploration of the growth rate and a genomic analysis would facilitate the identification of any potential variants or mutations emerging at the genomic level.

Conclusion

In summary, the repeated in-vitro exposure of clinical bacteria to Saudi Sumra and Sider honey led to variable changes in both their antimicrobial sensitivity profiles and their tendency toward biofilm formation, while very limited resistance to the honey used was observed compared to the parent strains. The increased antibiotic sensitivity of the adapted pathogenic bacteria (eg, S. aureus and A. baumannii) following exposure to Saudi honey (Sumra and Sider) and their reduced tendency toward biofilm formation are the main findings of this study. Although the exact mechanisms behind the observed increase in antibiotic sensitivity and reduction in biofilm formation remain undefined and further investigation is required, this positive effect on the sensitivity profiles and biofilm formation of the tested bacteria that adapted to Saudi honey highlights the great potential therapeutic use of these tested types of Saudi honey as antimicrobial agents for treating bacterial infections, especially those associated with chronic wounds, since these must be dressed for long periods of time.
Decitabine salvage for TP53-mutated, relapsed/refractory acute myeloid leukemia after cytotoxic induction therapy

TP53-mutated acute myeloid leukemia (AML) represents a therapeutic challenge due to its chemotherapy-refractoriness, and to an uncertain role for hematopoietic cell transplant (HCT) intensification. 1 We initiated a trial to determine whether decitabine might salvage TP53-mutated AML after failure of cytarabine-based induction. Seventeen patients were enrolled before the trial was closed due to slow accrual. Decitabine was well tolerated in this pretreated population and allowed transition to HCT in seven of 17 patients (41%). The 1-year overall survival (OS) was 29% (median 244 days; 95% confidence interval [CI]: 116-390). Survival was longer in patients receiving HCT (median 354 days), and two long-term survivors were transplanted in molecular remission. Detection of a TP53 clonal response by bone marrow (BM) immunohistochemistry (IHC) or peripheral blood (PB) exome sequencing was associated with improved survival, suggesting the utility of these secondary endpoints in future clinical trials. This single-arm, open-label, prospective clinical trial (clinicaltrials.gov identifier: NCT03063203) was approved by the Institutional Review Board at Washington University in St. Louis. The study enrolled 17 patients between October 2017 and September 2020 before closing due to slow accrual during the SARS-CoV-2 pandemic, and to shifting treatment practices towards the use of venetoclax combinations. 2 Eligible patients had TP53-mutated relapsed/refractory AML following cytarabine-based induction chemotherapy, and at least one of the following: BM blasts >5%, flow-based measurable residual disease (MRD) >0.5%, a persistent cytogenetic abnormality by fluorescence in situ hybridization (FISH) or karyotyping, or a persistent TP53 mutation at ≥5% variant allele frequency (VAF) by exome sequencing. Decitabine was administered at 20 mg/m2/day on days 1-10 of 28-day cycles and could be reduced to days 1-5 once BM aspirate blasts were <5%. Granulocyte colony-stimulating factor (G-CSF) use was allowed during the treatment of sepsis and neutropenic fevers, but not to support neutrophil recovery. The primary objective was to determine the 1-year OS in patients with TP53-mutated AML compared to historic controls (1-year OS 25%). 1,3,4 OS was defined from the time of enrollment to death from any cause. Secondary endpoints included determination of: i) the proportion of morphologic responses, as defined by the European LeukemiaNet 2017 (ELN) criteria 5 ; ii) the time to transplant and the number of patients able to undergo HCT; iii) the 2-year event-free survival after transplant compared to historical controls (18-22%); 6 and iv) the average number of hospital days during cycles 1-2, as a surrogate of toxicity.
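The 1-year OS endpoint follows from a standard product-limit (Kaplan-Meier) calculation over the enrollment-to-death times. A minimal sketch is below; the survival times are hypothetical placeholders (the letter reports only summary statistics), and with no censoring before day 365 the estimate reduces to the simple fraction alive at one year (5/17 ≈ 29%):

```python
# Minimal sketch: Kaplan-Meier (product-limit) estimate of overall survival.
# Times are days from enrollment; event=1 for death, 0 for censored.
# All values are hypothetical placeholders, not the trial's patient data.

def km_survival(times, events, horizon):
    """Return S(horizon) from the product-limit estimator."""
    data = sorted(zip(times, events))
    n_at_risk, surv = len(data), 1.0
    for t, event in data:
        if t > horizon:
            break
        if event:                      # death at time t
            surv *= 1 - 1 / n_at_risk
        n_at_risk -= 1                 # death or censoring leaves the risk set
    return surv

times = [30, 60, 90, 116, 150, 165, 200, 244, 250, 300, 330, 354,
         400, 450, 540, 780, 800]      # 17 patients
events = [1]*12 + [1, 1, 1, 0, 0]      # 12 deaths within 1 year; 2 alive

print(f"1-year OS ~ {km_survival(times, events, 365):.0%}")  # ~29%
```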
The average time from the initiation of induction chemotherapy to trial enrollment was 42 days (median 25 days), consistent with primary refractory disease, persistent MRD, or rapid relapses after induction chemotherapy (Table 1). Performance status was 0 or 1 in 16 of 17 patients, reflecting a population that had been fit for cytotoxic chemotherapy (Table 1). Sixteen of the 17 patients had complex cytogenetics and nine of 17 had cytogenetic loss of chromosome 17p (Online Supplementary Table S1). As expected in relapsed/refractory AML and TP53-mutated AML, BM aspirates were commonly hypoplastic, 7 with a mean blast count of 18% at trial enrollment and of 37% at diagnosis. The mean number of hospital days during combined cycles 1 and 2 was 21 (median 14 days; Online Supplementary Table S1), including the inpatient decitabine administration days. The observed grade 3-4 serious adverse events (SAE) reflected typical complications associated with decitabine therapy, including anemia (1), febrile neutropenia (6), heart failure (1), gastrointestinal pain (1), infections (2), abnormal liver function tests (LFT) (2), troponin elevation (1), lymphopenia (6), neutropenia (6), thrombocytopenia (4), acidosis (1), hyperglycemia (2), electrolyte imbalance (2), acute kidney injury (2), dyspnea (1), respiratory failure (3), hypertension (1), and hypotension (1). The median survival was 244 days (95% CI: 116-390; Table 1; Figure 1A). Twelve patients died within 1 year of presentation, giving a 29% 1-year OS. An interim analysis noted 27% predictive power to reject the null hypothesis if the study were to include 60 patients 8 (calculated using PASS v.15.0.5). All non-transplanted patients eventually relapsed and died of disease progression. Seven patients underwent HCT, at a mean of 106 days (median 117 days; Online Supplementary Table S1). Three patients died in remission from complications of transplantation, and two died after relapse/progression (Online Supplementary Table S1). Two patients remain alive at 26 and 18 months; both were transplanted in molecular remission. Overall, HCT was associated with longer survival (median 354 days; Figure 1B). No patient achieved a complete morphologic response by ELN criteria (Table 1). Robust neutrophil recovery was not noted, though G-CSF was not used to treat asymptomatic neutropenia. Five patients displayed normalization of their platelet counts (Figure 1C). BM and PB samples were collected at enrollment (day 0) and at the end of cycles 1, 2, and 3. For 16 patients, samples at AML diagnosis (pre-induction) were also available for correlative studies. TP53 IHC (antibody clone DO7) was performed on 4 μm BM sections using the Benchmark XT automated stainer (Ventana Medical Systems, Tucson, AZ, USA). Quantitative scoring was performed on nuclear staining in 500 hematopoietic cells. Based on published cutoffs, 9 an IHC response was defined as a reduction of TP53+ cells to <10% of total cells on a core biopsy sample. Many TP53 missense mutations are associated with IHC-detectable TP53, via protein stabilization. 10,11 TP53 IHC showed staining in all cases with TP53 missense mutations; however, three of four cases with nonsense mutations did not have detectable TP53 by IHC (R213*, Y107*, F54Sfs*69; Online Supplementary Figure S1A to C). Unexpectedly, the C-terminal nonsense mutation (R342*) led to elevated TP53 protein at three separate time points (Online Supplementary Figure S1D). TP53 IHC protein levels correlated between pre-induction and day 0 (R² = 0.56).
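The IHC response call described above is a simple threshold on the fraction of TP53-positive nuclei among the 500 scored cells. A minimal sketch, with placeholder counts rather than trial data:

```python
# Minimal sketch: classify an IHC response from nuclear TP53 scoring.
# Response = TP53+ cells fall below 10% of the 500 scored hematopoietic
# cells on the follow-up core biopsy. Counts are hypothetical placeholders.

SCORED_CELLS = 500
RESPONSE_CUTOFF = 0.10  # published cutoff used by the trial

def pct_positive(tp53_positive_cells):
    return tp53_positive_cells / SCORED_CELLS

baseline = pct_positive(310)   # day 0: 62% TP53+ nuclei
follow_up = pct_positive(35)   # end of cycle 2: 7% TP53+ nuclei

responded = follow_up < RESPONSE_CUTOFF
print(f"Day 0: {baseline:.0%} TP53+; follow-up: {follow_up:.0%} TP53+")
print("IHC response" if responded else "No IHC response")
```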
Only two cases (WUDAC015 and WUDAC021) were associated with a significant reduction of TP53 IHC staining after cytotoxic induction therapy, confirming the limited efficacy of standard induction in this cohort (Online Supplementary Figure S1E and F). Serial assessment of response by IHC identified eight patients with responses (Figure 2A) and five patients without responses (Figure 2B). Four patients could not be evaluated: three had nonsense variants (WUDAC001, WUDAC002, and WUDAC029; Online Supplementary Figure S1A to C), and one (WUDAC014) had del17p, all resulting in absent TP53 staining by IHC. Exome sequencing was performed in parallel using PB samples to circumvent sampling variation due to hemodilute collections in this hypocellular disease (Online Supplementary Table S1). Exome capture utilized an IDT exome reagent and was resolved on an Illumina NovaSeq S4 300XP to a median depth of 200x for pre-induction and day 0 samples, and 148x (range, 76-200x) for the other time points. This provided >100x coverage for >90% of targets in 61 of 67 samples. A molecular response was defined as a reduction in the copy number adjusted TP53 VAF to <0.05. 12 The computational pipeline is available at https://github.com/genome/analysis-workflows/blob/968d7d80c3cec865c7fa58b4dc24561a4dbfd9ad/definitions/pipelines/somatic_exome.cwl. Mutation burden assessments at day 0 and at pre-induction were correlated (R² = 0.8). Eight patients achieved a molecular response, and seven patients displayed persistence of TP53 mutations after therapy (Figure 2C and D). For one patient (WUDAC016), no follow-up PB samples were available (Online Supplementary Figure S1G). The absolute TP53 tumor burden quantified by BM IHC, either at pre-induction or at day 0, did not consistently correlate with the PB exome results (R² = 0.15 and 0.27, respectively). However, the qualitative response trends were concordant in 11 of 16 evaluable patients (Online Supplementary Figures S1D to F and S1H to O). One case (WUDAC005; Online Supplementary Figure S1P) showed stable disease by TP53 IHC, but progressive disease by PB exome sequencing, suggesting peripheralization of AML cells during therapy. The exome analyses revealed that the global molecular and clinical response was dictated by the TP53 clonal response trend (Online Supplementary Figure S2). Discordance between the TP53 clone and an alternate clone was only observed in WUDAC001, who progressed with a different clone during the TP53 clonal response. Survival was longer in patients with molecular responses identified by TP53 IHC (median OS 345 days vs. 116 days, P<0.002; Figure 2E) or by exome sequencing (median OS 390 days vs. 165 days, P<0.001; Figure 2F). These results are consistent with data from other studies, 13 and suggest that IHC and exome sequencing could be useful adjunctive strategies for quantifying responses in future clinical trials. However, each approach has limitations: IHC is applicable only to cases with mutations that stabilize the TP53 protein (typically missense variants) and lacks specificity below a tumor burden of 10%, due to background staining that occurs in a small number of non-malignant cells. Sequencing of PB samples qualitatively reflected the measurement of TP53 levels in the BM in this study; however, this approach is affected by the proportion of circulating malignant cells.
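The molecular response call is likewise a threshold rule over serial, copy-number-adjusted TP53 VAFs from the PB exomes. A minimal sketch, taking the adjusted VAFs as given (the trajectories are hypothetical placeholders; the trial's pipeline, linked above, performs the actual variant calling and adjustment):

```python
# Minimal sketch: classify a molecular response from serial copy-number-
# adjusted TP53 VAFs (enrollment, then end of cycles 1-3). A response is a
# drop below 0.05. The VAF trajectories are hypothetical placeholders.

RESPONSE_THRESHOLD = 0.05

def molecular_response(adjusted_vafs):
    """adjusted_vafs: CN-adjusted TP53 VAFs in chronological order."""
    return min(adjusted_vafs) < RESPONSE_THRESHOLD

patients = {
    "responder": [0.42, 0.18, 0.06, 0.02],      # clears threshold by cycle 3
    "non_responder": [0.38, 0.35, 0.41, 0.44],  # persistent TP53 clone
}

for name, vafs in patients.items():
    call = "molecular response" if molecular_response(vafs) else "persistent TP53"
    print(f"{name}: {vafs} -> {call}")
```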
TP53-mutated AML has dismal outcomes and is commonly associated with chemotherapy resistance. Although we found that decitabine is tolerated after intensive chemotherapy, and that molecular responses are achievable in a subset of relapsed/refractory TP53-mutated patients, long-term survival remained poor. These results are consistent with prior studies reporting lower responses to decitabine in relapsed/refractory disease versus untreated cases. 14,15 Therefore, novel therapies, and upfront combination and consolidation strategies, should be considered. The hypoplastic BM in many patients makes accurate response determination challenging, due to hemodilute aspirate collections. The integration of molecular endpoints into clinical trials may improve response quantification and increase the ability to identify significant differences between treatment arms.

Figure 1. Summary of clinical responses. (A) Kaplan-Meier curve describing the overall survival of the 17 patients enrolled in the trial (median survival 244 days, 95% confidence interval: 116-390 days). (B) Kaplan-Meier curve describing the overall survival of the seven transplanted patients (median survival 354 days). (C) Line plots showing the platelet (Plt) count trends at different time points for the 17 patients enrolled in the study. On the right side of the plot, five patients (003, 016, 019, 024 and 029) with platelet recovery are displayed separately. SD: stable disease; NA: not possible to evaluate; CRi: complete remission with incomplete count recovery.

Figure 2. Molecular response trends and their association with overall survival. (A and B) Line plots showing the % TP53-positive bone marrow cells by immunohistochemistry (IHC) at different time points of response assessment for responder cases (IHC TP53 reduced to <10% during therapy) vs. non-responder cases (IHC TP53 >10% during therapy). (C and D) Line plots of TP53 variant allele frequency (VAF) during therapy for responder cases (reduction of copy number (CN) adjusted TP53 VAF below 0.05) vs. non-responder cases (VAF stable or progressing over time). (E) Overall survival curves of the 17 cases stratified by responses assessed with IHC (median OS 345 days vs. 116 days, P<0.002). (F) Overall survival stratified by responses assessed with exome sequencing (median OS 390 days vs. 165 days; P<0.001).

Table 1. Clinical characteristics and treatment responses.
Catalytic Oxidation of Tartrazine in Aqueous Solution Using a Pillared Clay with Aluminum and Iron

In this work, pillared bentonite with Al−Fe (Al−Fe−PILC) was synthesized and used as a heterogeneous Fenton-like catalyst in the oxidation of the tartrazine azo-dye in aqueous solution. The modification of bentonite with the Al−Fe mixed system in a concentrated medium, with ultrasound-assisted intercalation, was carried out, and the obtained catalyst was characterized by XRF, XRD, and N2 adsorption at 77 K. The oxidation of tartrazine with Al−Fe−PILC, using different amounts of H2O2, expressed as a multiple (1, 3, 6, and 9) of the stoichiometric amount required to completely oxidize the dye, was evaluated. The catalytic wet peroxide oxidation (CWPO) of the dye with 400 mg of Al−Fe−PILC and 6 times the stoichiometric amount of H2O2 at 25 °C reached 98.2±1.8% decolorization, 51.9±1.9% TOC removal and 71.5±1.8% TN removal. The results of this study show that the oxidation of tartrazine increased with the amount of H2O2 up to a certain limit. This oxidation process can be considered as an alternative for treating wastewater containing azo-dyes because the reaction takes place under mild experimental conditions (room temperature and atmospheric pressure).

Introduction

The different sources of water pollution contain hazardous compounds which transfer adverse effects to the environment [1]. Some of the organic contaminants present in water are non-biodegradable, carcinogenic and mutagenic for aquatic systems and human health [7,8]. Tartrazine (known as E102 or FD&C Yellow 5) is a synthetic yellow azo-dye used primarily as a food coloring and also found in some pharmaceutical products and cosmetics [9,10]. This dye is found in many foodstuffs such as desserts, ice cream, soft drinks, confectionery, instant puddings, gelatin, sauces, etc. Food processing facilities require water for activities that include washing, processing, and clean-in-place operations. As a result, they generate a great amount of wastewater, which often contains very high levels of total suspended solids, chemical oxygen demand, fats, oils and dyes [11,12]. Dyes are visible to the human eye even at a low concentration (< 1 mg/L) [13,14] and their inadequate disposal in aqueous ecosystems leads to a reduction of sunlight penetration which in turn diminishes photosynthetic activity, resulting in acute toxic effects on the aquatic flora/fauna and on the dissolved oxygen concentration [15]. Different physical, chemical and biological treatment methods for dye removal have been adopted. These methods include adsorption, coagulation/flocculation, chemical oxidation (classical chemical treatments and advanced oxidation processes), electrochemical treatment and microbial or enzymatic degradation [6]. Advanced Oxidation Processes (AOPs) have been reported to be some of the most effective methods for the degradation of azo dyes, due to their complex chemical structure and recalcitrant (non-biodegradable) nature [16]. AOPs have been defined as water treatment processes which involve the generation of hydroxyl radicals (HO•) by chemical, photochemical and/or radiolytic methods.
The hydroxyl radicals are very reactive species that attack most organic molecules, and not only offer complete decolorization of aqueous solutions but also promise a considerable degree of mineralization and detoxification of the dyes and their oxidation/hydrolysis by-products [17,18]. Among the AOPs, the Fenton process (H2O2/Fe2+) is one of the most effective methods of organic pollutant oxidation. Disadvantages in using the Fenton process include the production of a substantial amount of Fe(OH)3 sludge (precipitate) and additional water pollution caused by the homogeneous catalyst that is added as an iron salt. To solve this drawback, a water-insoluble solid catalyst can be used. This heterogeneous Fenton process is called catalytic wet peroxide oxidation (CWPO) [19-21]. CWPO has mainly been used to oxidize phenols and their derivatives [20,22]. Its application in the degradation of other organic pollutants such as dyes is not often reported [8,23-26]. The development of heterogeneous Fenton and Fenton-like catalysts has been extensively explored, using materials based on zeolites, clays and activated carbon loaded with iron and copper [24,27-30]. The pillaring procedure involves the formation, intercalation and subsequent fixation of polynuclear cations among the clay layers. Thus, the lamellar spacing and specific area increase, making these materials attractive catalysts for various reactions [31]. Pillared clays with Al−Fe (Al−Fe−PILC) are promising catalysts for CWPO because they combine a porous support with active sites capable of degrading organic compounds [32]. In the process of catalytic oxidation, the initial step is the adsorption of the dye upon the active support, which is the fundamental degradation mechanism [33]. The second step is the reaction between the adsorbed dye and HO• radicals on the support surface. Since Al−Fe−PILC offers a combination of properties such as adsorption and catalytic activity, it is highly efficient for the mineralization of pollutants [8,33]. In general, Al−Fe−PILCs have been shown to be efficient catalysts for phenol removal under mild conditions (atmospheric pressure and room temperature) without considerable leaching of metals [34,35]. Regarding the use of Al−Fe−PILC as a catalyst for the CWPO of dyes (heterogeneous photo-Fenton oxidation not included), there are reports only for acid chrome dark-blue (C16H9O9Na2ClS2N2) between 25 and 75 °C [26], tartrazine (C16H9N4Na3O9S2) between 25 and 75 °C [23], orange acid II (C16H11N2NaO4S) at 60 °C [25], methyl orange (C14H14N3NaO3S) at 18±2.0 °C [24] and Congo red (C32H22N6Na2O6S2) at 25 °C [8]. In this work, pillared bentonite with Al−Fe in a concentrated medium was synthesized (assisting the intercalation with ultrasound) [36], characterized and evaluated in the CWPO of tartrazine at 25 °C and atmospheric pressure (78 kPa). In addition to the degree of decolorization of tartrazine, the total organic carbon (TOC) and total nitrogen (TN) removals were also quantified, in relation to the amount of H2O2 used in the oxidation reaction.

Materials

Tartrazine (azo food dye, molecular formula: C16H9N4Na3O9S2, 534.3 g/mol, CAS registry number: 1934-21-0) was a product of non-purified industrial quality (purity 62%; 38% NaCl and Na2SO4 combined), purchased from Retema S.A.S. (Colombia). The chemical structure of this dye is shown in Figure 1. A stock solution (100 mg/L) was made up by accurately dissolving a weighed quantity of the dye in double-distilled water.
Experimental dye solutions of different concentrations were prepared by diluting the stock solution with a suitable volume of double-distilled water.

Synthesis and Characterization of Materials

The starting material was a natural bentonite mined by Bentocol S.A. from Valle del Cauca (Colombia). This clay has been previously characterized, and dioctahedral smectite (or montmorillonite) was found to be the main component of the natural material [24,37]. The bulk clay was ground in a ball mill, and the clay powder passed through a 100 mesh sieve. The particle size separation of the clay fraction (< 2 µm) was performed by gravitational sedimentation based on Stokes' law [38]. To achieve this, 100 g of the powder sample were suspended in 10 L of distilled water. The suspension was magnetically stirred for 20 min and transferred to a graduated cylinder for gravitational sedimentation. To obtain a fraction smaller than 2 µm, the suspension was allowed to rest for 16 h, and the first 20 cm of the suspension were removed. Subsequently, the suspension was centrifuged at 5000 rpm for 10 min to recover the clay fraction, which was dried at 60 °C, ground and sieved through a 100 mesh [39]. The clay fraction was homoionized with 0.5 M CaCl2, washed with distilled water until the leachate showed a negative test for chloride ions, dried at 60 °C and, finally, ground and sifted through an ASTM 100 mesh. The purified Al−Fe pillaring agent (Al13 + Fe nitrate) was synthesized using an OH/metal hydrolysis molar ratio of 2.4, following a previously published procedure [36]. The amount of Fe used in the synthesis of the pillaring agent was 5% molar because this amount favors the formation of larger and better-distributed pillars, which determines the good catalytic performance of these solids in CWPO. For the modification, the homoionized clay fraction and the solid pillaring agent were mixed (20 meq of metal/g clay) and placed into a dialysis membrane, which was then immersed in distilled water and agitated for 3 h. After that time, the mixture underwent an ultrasound bath (50 kHz) for 30 min. The modified material was washed and centrifuged until reaching a conductivity close to that of distilled water, dried at 60 °C, ground and sifted through an ASTM 100 mesh, and calcined for 2 h at 400 °C [36]. Both the clay fraction homoionized with calcium (labeled Ca-Bent) and the pillared clay with Al−Fe (labeled Al−Fe−PILC) were characterized by X-ray fluorescence (XRF), X-ray diffraction (XRD) and adsorption-desorption of N2 at 77 K. XRF was performed using a Magix Pro Philips PW2440 instrument with samples prepared as glass pearls. For the X-ray diffraction, a Shimadzu LabX XRD-6000, operating with Cu-Kα radiation (λ = 1.5406 Å, steps of 0.02 °2θ and 2 s/step), was used. Nitrogen adsorption-desorption isotherms were determined in a Micromeritics ASAP 2020 instrument at 77 K after outgassing the samples for 2 h at 90 °C followed by 3 h at 400 °C. The specific surface area (SBET) was measured by means of the BET equation and the total pore volume was evaluated from the nitrogen uptake at a relative pressure of 0.99. The microporous specific surface area and micropore volume were calculated using the t-method with the Harkins-Jura equation [40].

Catalytic Oxidation Tests

The reaction conditions were selected from preliminary assays carried out, and were also established in accordance with literature data. The initial concentration of 25 mg/L is in the range of azo-dye concentrations usually found in industrial waste streams (between 10 and 50 mg/L) [41,42].
In Fenton-like reactions of phenol and dye oxidation with Al−Fe−PILC, the pH is kept between 3.5 and 4.0 [24,43], and the optimal value reported in the literature has been 3.7 [24,44,45]. The average catalyst load reported for the chemical oxidation of azo dyes varies between 2-5 g/L [24,26,41], so the lowest dose was used for this study. A stirring speed of 200 rpm guarantees complete mixing of the solid (adsorbent or catalyst) in the solution [24]. The oxidation reaction with the powdered catalyst was performed in a batch glass reactor, open to the atmosphere, thermostated at 25 °C, under constant magnetic stirring at 200 rpm. For each test, the reactor was loaded with 200 mL of aqueous solution at 25 mg/L and 400 mg of the catalyst. In this study, the pH of the medium was adjusted to 3.6 by using diluted 0.1 M H2SO4 or 0.1 M NaOH. Once the adsorption equilibrium time was reached, 8 mL of an H2O2 solution (2.0 mL/h) were added to the reactor. Although the addition of the total H2O2 dose at the beginning of the reaction is frequent, gradual peroxide dosing has had positive effects on the increase of TOC removal [46]. The time of reaction was 5 h (not including the time needed for the dye adsorption-desorption equilibrium). The amount of H2O2 was varied in multiples (1, 3, 6, and 9) of the stoichiometric amount that is theoretically required to completely oxidize one mole of tartrazine into CO2, H2O, and mineral acids (Equation (1)).

Measurement of Catalytic Activity

Dye decolorization (dye removal efficiency) of tartrazine was measured by monitoring the absorbance of the dye in the aqueous medium at its maximum absorption wavelength (λmax = 429 nm) using a UV-Vis spectrophotometer (Mapada V-1200, China). The dye concentration was determined from aliquots (1 mL of sample filtered through 0.45 µm Millipore paper) measured at time intervals, using a calibration curve relating absorbance vs. sample concentration. The concentration interval went from 0.5 to 20 mg/L, with a correlation coefficient (R²) of 0.9919. The detection limit (DL) and quantification limit (QL) were 0.21 mg/L and 0.53 mg/L, respectively. The dye decolorization was calculated from Equation (2):

Decolorization (%) = 100 × (C0 − Ct)/C0 (2)

where C0 is the dye concentration at time zero (t = 0). This concentration corresponds to the solution at the adsorption-desorption equilibrium, and Ct is the dye concentration at time t. Kinetic studies were performed by monitoring the change in the tartrazine concentration (Ct/C0) as a function of time (t). Apparent first-order rate constants (kapp) for the different amounts of H2O2 were determined from the plot of −ln(Ct/C0) versus time (t); kapp was calculated from the slope of the line obtained [47]. The contents of total organic carbon (TOC) and total nitrogen (TN) at the beginning and end of the reaction were determined in filtered aliquots of the reaction mixture using a TOC/TN analyzer (Multi N/C 3100, Analytik Jena AG, Germany). The Fe ions leached from the catalyst at the end of the reaction were measured using an atomic absorption spectrophotometer (Thermo Scientific iCE 3000 Series). All of the oxidation tests were performed in triplicate.

Characterization of Materials

Table 1 shows the chemical analysis of the starting bentonite and Al−Fe−PILC. A recent study proposes the use of a practical chart to identify the predominant clay mineral based on the oxide composition of clay soils [48].
In accordance with the chemical composition of SiO2 (56.79 wt%) and Al2O3 + Fe2O3, the predominant clay mineral corresponds to montmorillonite. The pillaring increased the content of Al2O3, and thus a decrease in the Si/Al ratio with respect to Ca-Bent was observed. Although the content of Fe2O3 decreased in Al−Fe−PILC with respect to Ca-Bent, the Si/Fe ratio increased slightly. This last result can be associated with the uncertainty of the measurement made by XRF. Given the conditions of the synthesis of the solid pillaring agent and the low incorporation of Fe [36], this result appears reasonable. The above happened because the introduction of aluminum into the pillared clay is very high compared to that of iron, making the proportion of Fe2O3 less important. The powder X-ray diffraction patterns of the calcium bentonite and the pillared clay with Al−Fe are shown in Figure 2. The shift in the d001 reflection from a basal spacing of 13.5 Å (2θ = 6.52°) for Ca-Bent to higher values in Al−Fe−PILC (18.5 Å, 2θ = 4.77°) confirms the effective introduction of the metal polyhydroxocations and the subsequent formation of pillars in the interlaminar spacing. The dimensions of an Al13 Keggin ion have been estimated to be 1.09 nm × 0.98 nm × 0.97 nm [49,50]. After heating in air at 500 °C, the Keggin ions lose their water ligands, forming shorter Al13 blocks with a height of 0.84 nm, which become the structure-supporting pillars [50]. The basal spacing of 18.5 Å for Al−Fe−PILC corresponds to the thickness of a montmorillonite-type clay sheet (~10 Å) plus the aluminum pillar (8.4 Å). The amount of iron incorporated in the solid pillaring agent was very small (0.17 wt%) and did not contribute to the formation of Fe2O3 pillars. The iron incorporated in the pillaring agent decorated the alumina pillars rather than forming iron oxide clusters on the clay sheets [36]. The adsorption-desorption isotherms of Ca-Bent and Al−Fe−PILC are shown in Figure 3. There is an increase in the N2 adsorption capacity due to the modification of the clay by pillaring. According to the IUPAC classification [51], Ca-Bent presented a type IVa isotherm while Al−Fe−PILC presented a combination of types Ia and IVa, both with an H3 hysteresis loop. Type IVa isotherms are characteristic of mesoporous adsorbents. The H3 hysteresis loop is the result of non-rigid aggregates of plate-like particles. The adsorption isotherm of Al−Fe−PILC between 0.05 and 0.2 relative pressure has the shape of a type Ia isotherm, characteristic of microporous materials. Thus, the combination of isotherm types Ia and IVa with an H3 hysteresis loop indicates that both slit-like mesopores and micropores are formed. The specific surface area (SSA) was 41.8 m²/g and 150.3 m²/g for Ca-Bent and Al−Fe−PILC, respectively. The pore volume at p/p° = 0.99 was 0.0605 cm³/g for Ca-Bent and 0.1093 cm³/g for Al−Fe−PILC. The t-plot method was used to determine the specific area and the volume developed by the micropores of the samples. These values are 2.2 m²/g and 0.0008 cm³/g for Ca-Bent, and 107.0 m²/g and 0.0414 cm³/g for Al−Fe−PILC, respectively. The specific surface area of Ca-Bent is fundamentally external surface (approx. 95% of the SSA), characteristic of a structure of closed sheets, attributed to the heterogeneous arrangement of the aluminosilicate sheets. For Al−Fe−PILC, the specific surface area is basically microporous (approx. 71% of the SSA), which is a typical feature of pillared clays [8,24].
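The basal spacings quoted above follow from Bragg's law, d = λ / (2 sin θ), applied to the d001 reflections. A short check with the reported Cu-Kα wavelength and 2θ positions reproduces the 13.5 Å and 18.5 Å values:

```python
# Bragg's law check of the d001 basal spacings reported above.
# d = lambda / (2 * sin(theta)), with theta = (2-theta) / 2.
from math import sin, radians

WAVELENGTH = 1.5406  # Cu-K-alpha, in angstroms

def d_spacing(two_theta_deg):
    return WAVELENGTH / (2 * sin(radians(two_theta_deg / 2)))

print(f"Ca-Bent:     d001 = {d_spacing(6.52):.1f} A")   # ~13.5 A
print(f"Al-Fe-PILC:  d001 = {d_spacing(4.77):.1f} A")   # ~18.5 A
```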
The above results (XRF, XRD, and N2 sorptometry) are similar to those found in the literature for the same system (pillared bentonite with Al−Fe in a concentrated medium, with ultrasound-assisted intercalation), indicating that the synthesis methodology of the pillared bentonite was reproducible [36].

Catalytic Activity of Al−Fe−PILC

Prior to the CWPO tests, adsorption tests of tartrazine on the materials were carried out. The maximum adsorption occurred during the first 30 minutes and stabilized after 1 h, reaching values of 4.1±0.2 and 12.5±0.5% for Ca-Bent and Al−Fe−PILC, respectively. The low adsorption of the dye on Ca-Bent was due to the fact that tartrazine is an anionic dye while bentonite is a 2:1 layered silicate with a negative charge due to ionic substitution in its structure [52,53]. This charge in Ca-Bent is balanced by the Ca2+ cations present in the interlayer space, which cannot be exchanged for the anionic species of the colorant in solution (C16H9N4O9S2 3−). Pillared clays are materials which have a lower hydrophilicity than their parent clays [8,54]. It has been established that pillaring improves the efficiency of anionic dye adsorption processes [55], hence the greater adsorption capacity for tartrazine in an aqueous medium. For all the oxidation tests, an adsorption time of 1 h was established, guaranteeing conditions of adsorption-desorption equilibrium. The catalytic activity of the calcium bentonite and the pillared bentonite with Al−Fe in the oxidation of tartrazine is shown in Figure 4. For the reaction blank (6H2O2 only), Ca-Bent + 6H2O2 and Al−Fe−PILC + 1H2O2, the decolorization shows a slight increase as a function of the reaction time. The color removal for Al−Fe−PILC + 3H2O2 increased slightly during the first 2 h of reaction, and then at a higher rate (between 2 and 5 h). Al−Fe−PILC (with 6 and 9H2O2) showed a decolorization curve with two zones: one of increasing color removal (during the first 3 h of reaction) and then a stabilization zone (between 3 and 5 h). The decolorization achieved with H2O2 only (blank, equivalent to 6 times the stoichiometric amount, without catalyst) was 5.81±0.79%. The unmodified clay (Ca-Bent) with 6 times the stoichiometric amount of H2O2 showed a low activity, reaching a color elimination close to 19.4±1.7%. This catalytic activity can be related to the iron content of the raw clay mineral (6.85 wt% Fe2O3, Table 1). The catalytic activity of the pillared bentonite with Al−Fe was increased with respect to the starting clay and varied with the dose of H2O2 used in the reaction. Although the amount of iron incorporated into Al−Fe−PILC was very low compared to the iron content of Ca-Bent, the results indicate that the Fe incorporated by pillaring is more active than that originally present in the clay, similar to results reported in the literature [24,35]. The oxidation reaction of tartrazine carried out with an H2O2 dose of 1 and 3 times the stoichiometric amount reached a decolorization of 29.4±1.6 and 78.3±2.3%, respectively. When the stoichiometric amount of H2O2 was increased from 6 to 9 times, the decolorization obtained after 5 h of reaction decreased from 98.2±1.8 to 94.7±2.1%.
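The H2O2 doses above are multiples of the stoichiometric demand for complete mineralization of the dye. As a rough sketch of how such doses can be computed for the 200 mL runs: assuming mineralization of tartrazine (534.3 g/mol) to CO2, HNO3, sulfate and water, a simple atom balance gives about 45 mol of H2O2 per mol of dye; that coefficient is an illustrative assumption, not a value taken from the paper's Equation (1):

```python
# Rough sketch: H2O2 demand for the CWPO runs. The stoichiometric coefficient
# (45 mol H2O2 per mol tartrazine) is an illustrative atom-balance assumption
# for complete mineralization, not a value taken from the paper's Equation (1).

MW_TARTRAZINE = 534.3   # g/mol
MW_H2O2 = 34.01         # g/mol
STOICH_COEFF = 45       # assumed mol H2O2 per mol dye (complete mineralization)

volume_L = 0.200        # reactor load: 200 mL
dye_mg_per_L = 25.0     # initial tartrazine concentration

mol_dye = dye_mg_per_L * volume_L / 1000 / MW_TARTRAZINE
stoich_h2o2_mg = mol_dye * STOICH_COEFF * MW_H2O2 * 1000

for multiple in (1, 3, 6, 9):
    print(f"{multiple}x stoichiometric: {multiple * stoich_h2o2_mg:.1f} mg H2O2")
```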
An excess of H2O2 in the reaction medium (9 times the stoichiometric amount in this investigation) increased the chance of recombination of the radicals (i.e., a scavenging effect) and the production of less active radicals such as HO2• in the solution, generating inactive species such as H2O and O2, as expressed by the reactions described by Equations (3) and (4) [20,46,56]:

HO• + H2O2 → HO2• + H2O (3)
HO• + HO2• → H2O + O2 (4)

The data for the change in the tartrazine concentration (Ct/C0) as a function of time (t) were fitted to first-order kinetics and kapp values were obtained for each amount of H2O2. For the four quantities of H2O2 used in the CWPO of tartrazine, the coefficient of determination (R²) was higher than 0.965, confirming that the data fit the first-order model. Figure 5 shows the effect of H2O2 on the rate constant of decolorization (kapp) of tartrazine in aqueous solution. When the amount of H2O2 was increased from 1 to 3 times the stoichiometric value, the kapp value increased 4.9 times. The addition of 6 times the stoichiometric amount of H2O2 increased kapp 14 times compared to the value obtained with 1 time the stoichiometric H2O2. Finally, when the amount of H2O2 was 9 times the stoichiometric value, kapp decreased by 23.8% in relation to the value obtained with 6 times the stoichiometric H2O2. A similar behavior, that is, the existence of an optimum hydrogen peroxide dose, has been found during the photo-Fenton degradation of reactive brilliant orange over iron-pillared montmorillonite. When the amount of H2O2 was increased from 2.9 to 9.8 mmol/L, the degradation efficiency of the dye went up from 65.4 to 95.6% at 90 min. However, when the H2O2 dosage was 19.6 mmol/L, the removal of the dye decreased [57]. In general, an excess of H2O2 is used with respect to the theoretical [H2O2]/[organic compound] molar ratio to reach the maximum degradation of the organic compound [46,58]. UV-Vis spectra of the tartrazine oxidation using Al−Fe−PILC and 6 times the stoichiometric amount of H2O2 are shown in Figure 6. At 2 h of reaction, a considerable decrease in the absorbance at 429 nm is observed (close to 67% decolorization), and after 3 h of reaction the color removal is greater than 92%, so the yellow tone of the solution is no longer noticeable to the naked eye. There is no general criterion for determining the amount of hydrogen peroxide necessary for the oxidation of an azo dye, since it depends on the structure of the compound and the reaction temperature. An excess of H2O2 is usually used with respect to the stoichiometric amount. For example, for the oxidation of tartrazine (50 mg/L) with an Al,Fe-pillared clay (0.5 g of solid catalyst in 100 mL of aqueous dye solution), a dose of H2O2 equal to 9.3 times the stoichiometric amount was used. The tartrazine removal at 4 h of reaction at 75 °C reached about 97.5% [23]. For the oxidation of methyl orange (100 mg/L) with Al/Fe-PILCs, the dose of H2O2 was 0.9 times the stoichiometric amount. Removal of the dye was close to 80% in only 1 h of reaction at 18±2.0 °C [24]. However, in these two previous investigations, the TOC removal was not quantified. Methyl orange is an excellent model molecule for the catalytic assessment of active solids in Fenton-like reactions such as CWPO [24]. Nevertheless, it is an azo dye of simple structure and low molecular weight (327.33 g/mol, topological polar surface area of 93.5 Å²), in comparison with tartrazine (534.36 g/mol, topological polar surface area of 229 Å²).
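The kapp values are the slopes of linear fits of −ln(Ct/C0) against time. A minimal sketch of that fit (the concentration series below is a hypothetical first-order decay, not the measured data):

```python
# Minimal sketch: apparent first-order rate constant k_app from the slope of
# -ln(Ct/C0) vs t. The Ct/C0 series is a hypothetical decay, not measured data.
import numpy as np

t_h = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])        # reaction time, h
ct_over_c0 = np.array([1.0, 0.45, 0.20, 0.09, 0.04, 0.018])

y = -np.log(ct_over_c0)
k_app, intercept = np.polyfit(t_h, y, 1)              # slope = k_app (1/h)

r2 = np.corrcoef(t_h, y)[0, 1] ** 2
print(f"k_app = {k_app:.2f} 1/h, R^2 = {r2:.3f}")     # R^2 > 0.965 -> first order
```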
These differences between the two monoazo dyes, as well as in the reaction by-products, mean that the results for similar catalysts are not comparable. The TOC and TN removal efficiencies in the oxidation of tartrazine using Ca-Bent and Al−Fe−PILC are shown in Figure 7. H2O2 alone (6 times the stoichiometric amount, without catalyst) achieves a very low removal of TOC and TN. When the Al−Fe−PILC catalyst is used and the H2O2 dose is increased from 1 to 6 times the stoichiometric amount, the removal of TOC and TN increases considerably. On increasing the dose of H2O2 further (9 times the stoichiometric amount), the removal of TOC and TN decreases with respect to the reaction with less H2O2 (6 times the stoichiometric amount). In all the tartrazine oxidation tests, it was found that the TN removal was greater than the TOC removal, reaching a TN conversion of up to 71.5±1.8% (with 6 times the stoichiometric amount of H2O2). Similar results were obtained for the oxidation of Congo red with Al−Fe−PILC. At 4 h of treatment, the NO3− concentration (measured by ion chromatography) was practically zero, demonstrating that the catalyst gives a major loss of the initial nitrogen as volatile N-compounds, probably NxOy and N2, or as NH4+ [8]. The influence of the H2O2 dose on the TOC or TN removal (Figure 7) was similar to that obtained for decolorization (Figure 4), with an optimum oxidant concentration. For an H2O2 dose above that value, the final TOC or TN removals decreased slightly, an effect similar to those reported by other researchers [59-61]. The reaction path for the generation of reactive species and the mineralization of the dye using Al−Fe−PILC starts with the reduction of Fe3+ on the surface of Al−Fe−PILC to Fe2+. The Fe2+ formed then accelerates the decomposition of H2O2 in solution (Fenton reagent), generating hydroxyl (HO•) and hydroperoxyl (HO2•) radicals. These radicals attack the dye molecule, leading to reaction intermediates, and, finally, the reaction intermediates are mineralized into CO2, H2O, NO3− and SO42−. The simplified reactions that schematize this process are shown in Equations (5) and (6):

Fe3+ + H2O2 → Fe2+ + HO2• + H+ (5)
Fe2+ + H2O2 → Fe3+ + HO• + OH− (6)

When the HO• and HO2• radicals formed during the reactions attack the azo group (−N=N−) and break it, there is a color decay of the solution, which is used to measure the efficiency of the catalytic oxidation of the azo dye [24]. In this study, the oxidation by-products of tartrazine (other than CO2 and NOx) were not quantified by chromatographic techniques. In solutions of tartrazine pre-treated by electrocoagulation (EC) and, subsequently, by advanced oxidation by photoelectro-Fenton (PEF), 18 compounds have been identified by GC-MS. The hydroxylated by-products (in the form of phenols and quinones) are the intermediates formed by the action of HO• radicals, as are the aliphatic carboxylic acids (which cannot be easily mineralized) [62]. In the CWPO process, the concentration of the oxidant (H2O2) is a vital factor that considerably affects the removal of organic pollutants. The amount of hydroxyl radicals generated is directly related to the concentration of H2O2 [20]. According to the results of the catalytic tests carried out for the oxidation of the tartrazine azo-dye with Al−Fe−PILC, the most efficient dose of H2O2 corresponds to 6 times the stoichiometric amount. With this dose, a decolorization greater than 98% and a TOC removal close to 52% are achieved. It is important to note that these results correspond to tests performed at 25 °C.
It is well known that the mineralization of organic pollutants usually increases on raising the temperature from 25 °C to around 80 °C. However, running the CWPO process at elevated temperatures (> 40 °C) increases the total cost of the treatment and the thermal decomposition of H2O2 [20,63]. To assess the stability of the catalyst, the leached iron concentration was measured at the end of the oxidation tests. For all oxidation tests with Ca-Bent and Al−Fe−PILC, the leached iron concentration was less than 0.2 mg/L. Thus, it can be concluded that only the heterogeneous Fenton reaction contributed to the degradation of tartrazine. The low iron leaching indicates that the active phase of these catalysts is strongly fixed to the clay support and pillars, and that it is highly stable under the oxidizing conditions of the reaction [34,64].

Conclusions

Pillared bentonite with Al−Fe (Al−Fe−PILC) was effectively synthesized in a concentrated medium and used as a heterogeneous Fenton-like catalyst in the oxidation of the tartrazine azo-dye in aqueous solution. Although the amount of iron incorporated into Al−Fe−PILC was very low compared to the iron content of Ca-Bent, its catalytic performance indicates that the Fe incorporated by pillaring is more active than the iron originally present in the clay. The effect of the amount of H2O2 on the catalytic oxidation of tartrazine was evaluated. Increasing the dose of H2O2 (1, 3, and 6 times the stoichiometric amount required to completely oxidize the dye) increases the decolorization and the removal of TOC and TN. A high dose of H2O2 (9 times the required stoichiometric amount) does not improve the performance of the reaction. The catalytic oxidation of tartrazine carried out with Al−Fe−PILC and 6 times the stoichiometric amount of H2O2 at 25 °C reached 98.2±1.8% decolorization, 51.9±1.9% TOC removal and 71.5±1.8% TN conversion. Additionally, the leached iron concentration in all CWPO tests was less than 0.2 mg/L, which guarantees the stability of the catalyst and confirms that the tartrazine oxidation occurred via a heterogeneous Fenton-like reaction.
Antiviral activity of interferon against transmissible gastroenteritis virus in cell culture and ligated intestinal segments in neonatal pigs

Segments of jejunum in 5 to 6 day old piglets were surgically ligated and inoculated with transmissible gastroenteritis virus (TGEV), and 18 hours later the segments were fixed for histology or suspensions were prepared for plaque assay in swine testis (ST) cell cultures to determine the yield of virus. When the virulent Purdue strain of TGEV was used, villous atrophy was seen and TGEV antigen was demonstrated immunohistochemically in the villous enterocytes. The Miller M6 strain of virus produced less extensive lesions in the segments, but since it was titratable by plaque assay it was used in the subsequent yield reduction assays to determine the antiviral activity of interferon. When intestinal segments were inoculated simultaneously with TGEV and either 3200 units of natural porcine interferon-α or up to 100000 units of recombinant human interferon-α2a, there were no reductions in virus yield, although the same cytokines exerted an antiviral effect in ST cells treated in a similar way. However, virus yields were significantly reduced in intestinal segments in piglets treated parenterally with the synthetic interferon inducer polyinosinic:polycytidylic acid 6 hours before challenge of the segments with TGEV. There was also a trend for the antiviral effects of interferon induction before challenge to be augmented by the inclusion of interferon with the virus inoculum. It was concluded that interferon would be ineffective as a therapeutic for TGEV, although it might be useful prophylactically.

INTRODUCTION

Transmissible gastroenteritis virus (TGEV) is an enteric coronavirus associated with high mortality in preweaning pigs. Following oral exposure, viral replication occurs primarily in the mature villous epithelium of the jejunum and ileum (Hooper and Haelterman, 1969; Saif and Heckert, 1990). Lysis of infected cells leads to villous atrophy and crypt hyperplasia. Diarrhoea results from maldigestion, malabsorption, and loss of electrolytes and proteins from the damaged epithelium, and leads to dehydration, nutritional deficiency, and electrolyte imbalances (Butler et al., 1974). Antiviral activity associated with TGEV infection was first described in intestinal washings in neonatal pigs (Pensaert et al., 1970). The activity was later shown to be due to alpha interferon (IFN-α) (La Bonnardière and Laude, 1981). The source of the interferon was unknown, but was thought to be the gut associated lymphoid tissue. More recently it has been shown that IFN-α induction is dependent on the TGEV transmembrane protein E1 (Charley and Laude, 1988; Laude et al., 1992), and that IFN-α is produced by lamina propria lymphocytes exposed to TGEV (Naidoo and Derbyshire, 1992). The role of the endogenous IFN produced early in the course of infection with TGEV is not currently understood. By virtue of its antiviral activity, it may restrict further cycles of viral replication in the intestinal epithelium, and the major objective of the present study was to investigate this possibility in a ligated intestinal segment model in which exogenous IFN-α could be injected into the intestinal segments concurrently with the infecting virus. It has been clearly established that IFN-α can exert an antiviral role against TGEV when cell cultures or intestinal explants are pretreated with IFN before challenge with the virus (Derbyshire, 1989; Weingartl and Derbyshire, 1991).
In the present study, cell cultures were treated simultaneously with IFN-α and TGEV, and since an antiviral effect was demonstrated, similar experiments were conducted in the ligated intestinal segment model. Since it was not possible to produce porcine (Po) IFN-α in vitro in concentrations comparable to those detectable in vivo (La Bonnardière and Laude, 1981), high concentrations of human recombinant IFN-α were used in some of these experiments. Treatment of newborn piglets with a synthetic IFN inducer, polyinosinic:polycytidylic acid (poly I:C) complexed with poly-L-lysine and carboxymethylcellulose (poly ICLC), before oral exposure to virulent TGEV resulted in a delay in the onset of clinical signs (Loewen and Derbyshire, 1988a). Because of the timing of the poly ICLC administration in relation to exposure to virus, it is likely that the transient protective effect was due to the activation of natural killer cell activity in the treated piglets (Lesnick and Derbyshire, 1988). In the present study we utilized the ligated intestinal segment model for further observations on the antiviral effect of poly ICLC in the newborn piglet. The initial experiments described in this paper relate to the development of the ligated intestinal segment model for TGEV, particularly to the selection of a suitable strain of the virus and an appropriate challenge dose to give consistent virus yields. This technique was used in preference to whole animals since it allowed the comparison of different treatments within one individual, and greatly reduced the number of animals required for the studies. The method has been widely used for the study of enteric bacterial (Gyles and Barnum, 1967) and viral (Carpio et al., 1981; Kirsten et al., 1985; Deregt et al., 1989) infections, but not for the study of TGEV. Experimental animals Sows from a specific-pathogen free herd of commercial swine were farrowed in isolation. The herd was determined to be TGEV and porcine respiratory coronavirus negative by virus neutralization serology. Piglets were given 100 mg of iron dextran (Ironol 100, Sanofi Animal Health, Victoriaville, Quebec) at 2 days of age. At 5-6 days of age piglets were removed from the sow in preparation for experimental surgery. All procedures were carried out under Canadian Council on Animal Care guidelines. Virus strains and cell cultures The virulent Purdue strain of TGEV was propagated and collected from infected pigs by the method of Ristic et al. (1965). The Miller M6 strain of TGEV, obtained from Dr. L. Saif, Ohio State University, was propagated and assayed using a continuous swine testis (ST) cell line (McClurkin and Norman, 1966). The virus was plaque purified three times and passed seven times on ST cells (Welch and Saif, 1988). It was assayed by plaque formation on ST cells. Monolayer cultures grown in 24-well plates were inoculated with 0.2 ml of virus serially diluted in Eagle's minimum essential medium (EMEM). After adsorption for one hour, the inoculum was removed and the wells overlaid with 1 ml 0.6% agarose in EMEM with 5% neonatal calf serum (NNCS). Plates were fixed with 10% phosphate buffered formalin and stained with crystal violet after 24 hours. Interferons and interferon assays Natural porcine leucocyte IFN (Po IFN-α) was produced as described by Weingartl and Derbyshire (1990). Venous blood from weaned pigs was collected into sodium heparin and mixed with equal volumes of Hanks' balanced salt solution (HBSS).
Leucocytes were separated by centrifugation through Ficoll-Paque (Pharmacia LKB Biotechnology Inc., Piscataway, New Jersey). After washing in HBSS, 1 × 10⁷ cells/ml were suspended in RPMI-1640 medium containing 20% foetal bovine serum and Hepes buffer and incubated with shaking at 125 rpm. After 18 hours the leucocyte culture was inoculated with 4000 haemagglutinating units of Newcastle disease virus (La Sota strain). Twenty-four hours post-infection the leucocytes were pelleted by centrifugation and the supernatant treated with 0.1 ml of 1 mM dithiothreitol (Sigma Chemical, St. Louis, Missouri) per millilitre. Virus was removed by ultracentrifugation at 60000 g for 60 min. The supernatant was stored at −20 °C. Human recombinant interferon α-2a (Hu rec IFN-α2a) was obtained from a commercial source (Roferon, Hoffmann-La Roche). Antiviral activity was determined as described below; in our system, 10000 international units were equivalent to 6400 laboratory units. Interferon was assayed by plaque reduction. Samples to be assayed were serially diluted in EMEM containing 2% NNCS and 0.4 ml applied to each well of a 24-well plate containing a monolayer of Madin-Darby bovine kidney cells. After 18 hours the sample was removed and the plates treated with 40-60 plaque forming units (pfu) of vesicular stomatitis virus (Indiana strain). Following 60 min adsorption the inoculum was removed and wells overlaid with 1 ml 0.9% gum tragacanth (Sigma Chemical) in EMEM and 5% NNCS. Forty-eight hours post infection the cells were fixed with phosphate-buffered formalin and stained with crystal violet. IFN titres were determined as the reciprocal of the highest dilution which resulted in a 50% reduction in the number of plaques. Polyinosinic:polycytidylic acid complexed with poly-L-lysine and carboxymethylcellulose (poly ICLC) was prepared as described by Levy et al. (1975) as modified by Loewen and Derbyshire (1988b). Polyinosinic:polycytidylic acid (poly I:C, Sigma Chemical) was reannealed by heating at 71 °C for 60 min, then added to a mixture containing equal volumes of poly-L-lysine (3 mg/ml, Sigma Chemical) in normal saline and carboxymethylcellulose (1%, Sigma Chemical) to yield a final concentration of 1 mg/ml poly I:C. Piglets were inoculated intravenously with 0.5 mg/kg of poly I:C. In vitro yield reduction assays The protocol has been previously described (Derbyshire, 1989). Four-day-old monolayer cultures of ST cells in 24-well plates were treated for 18 hours with Hu rec IFN-α2a diluted in EMEM containing 5% NNCS. After washing with phosphate buffered saline (PBS), the cells were treated with 20 or 100 pfu/well of Miller M6 TGEV, representing multiplicities of infection of 5 × 10⁻⁵ and 2.5 × 10⁻⁴. Following adsorption for 1 hour, the inoculum was removed and replaced with EMEM with 5% NNCS with or without IFN. Supernatants were collected for virus assay 12, 18, or 24 hours later. Surgical protocol Piglets were removed from the sow and deprived of food and water 6 hours prior to surgery. Anaesthesia was induced and maintained with halothane (Fluothane, Ayerst Laboratories, Montreal) using a Bain circuit. Midline laparotomies were performed and up to fourteen 5 cm long segments of the midjejunum were isolated by ligation, with 2-3 cm intervening segments.
The 5 cm segments were injected intralumenally with a total 1 ml volume of virus (one injection) and in some experiments IFN (a second injection), both diluted in PBS containing 500 Kallikrein units of aprotinin (Sigma Chemical), a protease inhibitor, and penicillin (400 units/ml), streptomycin (0.4 mg/ml) and gentamicin (0.05 mg/ml). Control inoculations consisted of this PBS diluent, except in the experiment involving natural Po IFN-α, in which case a mock-induced leucocyte supernatant was used as a control inoculum. The incision was then closed in 2 layers and the animal allowed to recover from anaesthesia. Flunixin meglumine (1 mg/kg, Banamine, Schering-Plough) was administered to maintain analgesia post-operatively and water was provided. The piglets remained alert and active, with no clinical evidence of dehydration. Sample collection and processing Eighteen hours after infection the animals were again anaesthetised. Intestinal samples collected for histological examination and immunostaining were placed immediately in 10% phosphate buffered formalin. For virus assay, the ligated segments were removed and placed in 5 ml EMEM containing antibiotics and frozen at −70 °C. After thawing, the intestinal contents and mucosae were scraped into the media, vortexed and refrozen. The samples were then thawed, mixed and clarified by centrifugation at 1000 g at 4 °C for 20 min and stored at −70 °C until assayed. The piglets were euthanized by barbiturate overdose (Euthansol, Schering-Plough) after the above samples were collected. Immunohistochemistry Intestinal sections were fixed in 10% phosphate buffered formalin for 24 hours, then transferred to 70% ethanol prior to embedding in paraffin. Slides were stained using a streptavidin immunoperoxidase technique (Dimension Laboratories, Mississauga, Ontario). Briefly, dewaxed and rehydrated sections were treated with protease at 37 °C for 10 min. Endogenous peroxidase activity was blocked with 3% hydrogen peroxide for 10 min. Normal rabbit blocking serum (20%) was applied for 20 min, and without rinsing, mouse anti-TGEV monoclonal antibodies or non-specific mouse monoclonal antibodies as control were applied for 120 min. After rinsing, biotinylated rabbit anti-mouse serum was applied for 10 min. Binding was detected using a streptavidin/peroxidase conjugate for 5 min, followed by chromogen solution containing aminoethylcarbazole and hydrogen peroxide. Slides were then counterstained briefly with haematoxylin. Histology and immunochemical staining for TGEV of intestinal segments A single piglet was used in a preliminary experiment in which ligated segments were inoculated with dilutions of the virulent pig-passaged Purdue strain of TGEV ranging from 10⁰ to 10⁻⁵, or with the PBS control inoculum. The histological changes were consistent with naturally occurring TGEV infection, including cytoplasmic vacuolation, enterocyte enlargement and necrosis, and villous atrophy. With the 10⁻² dilution, the length of the villi was reduced by approximately 30% in comparison to the control segments, in which no histological abnormalities were seen. The extent of the changes was dose dependent, and somewhat variable along the length of an individual segment. Immunohistochemistry revealed intense specific staining of TGEV antigen in villous enterocytes and also in lumenal debris (Fig. 1). Non-specific staining occurred primarily in the muscularis mucosae, around blood vessels, and in erythrocytes.
Such non-specific staining was readily distinguished from viral antigen on the basis of its location and its colour intensity. Virus was not isolated from the infected or control segments. The Miller M6 strain of TGEV was inoculated at concentrations of 10³, 10⁴, and 10⁵ pfu into two ligated segments in each of three piglets. One segment in each piglet was used for histology and one for virus isolation. There were three control-inoculated segments in each piglet. The virus produced less marked histological changes than the Purdue strain, but specific immunostaining of TGEV antigen was seen in the villous enterocytes (Fig. 2). No changes were seen in the control segments, from which no virus was isolated. The yields of the virus obtained from the infected segments are shown in Table 1. Hu rec IFN-α2a treatment of ST cells The data in Table 2 indicate that in ST cell culture Miller M6 TGEV was sensitive to treatment with Hu rec IFN-α2a. Virus yield reduction was evident with all doses of IFN used, all treatment regimens and in harvests at 12, 18 and 24 hours. Simultaneous treatment of intestinal segments with Miller M6 TGEV and interferon In this experiment (Table 3), 3200 units of Po IFN-α were inoculated into four ligated segments in each of three piglets and the same number of loops received the control inoculum. For each piglet, the inoculum for two of the IFN and control segments contained 10³ pfu of Miller M6 TGEV, and 10⁴ pfu for the other two segments. There was an additional control segment that was not inoculated with virus. No virus was recovered from any of the uninfected control segments. As shown in Table 3, no significant reduction in virus yield was evident by Student's t-test when intestinal segments were treated simultaneously with Po IFN-α and TGEV, or with higher concentrations of Hu rec IFN-α2a and virus. Interferon induction with poly ICLC Two piglets were inoculated with poly ICLC and two were inoculated with a control solution containing poly-L-lysine and carboxymethylcellulose, but lacking poly I:C. Poly ICLC treatment resulted in the appearance of circulating IFN 6 hours post-treatment (Table 4). This level was greatly decreased by 25 hours post-treatment. The control solution did not induce circulating IFN at 6 hours. Poly ICLC treatment was also associated with a transient leucopaenia involving both segmented neutrophils and lymphocytes at 6 hours and persisting in segmented neutrophils at 25 hours. There was no evidence of a systemic IFN response to the inoculation of intestinal segments with TGEV. This may be due to the localized nature of this infection, the relative avirulence of the Miller M6 strain, or may indicate that this strain is a poor inducer of IFN. Infection of intestinal segments with TGEV in piglets treated with poly ICLC In each of the two poly ICLC treated piglets and the two control piglets, four ligated segments were inoculated with 100000 units of Hu IFN together with the TGEV challenge, and four received the virus alone. As shown in Table 5, there was a significant reduction in virus yield by Student's t-test (P < 0.05) from the intestinal segments in the piglets which had been treated with poly ICLC. There was also a trend for the effects of poly ICLC treatment to be augmented by subsequent intralumenal IFN treatment, though this was not significant (P < 0.1). As in the earlier experiment, there was no significant reduction in virus yield from the IFN treated segments in the control piglets.
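The significance testing reported above is a plain Student's t-test on virus yields. A minimal sketch of that comparison follows; the yield values are invented for illustration, and the log10 transformation is an assumption of the sketch rather than a detail stated in the paper.

import numpy as np
from scipy import stats

# Hypothetical pfu yields per segment: control piglets vs. poly ICLC-treated.
control_yields = np.array([2.0e5, 3.5e5, 1.8e5, 2.7e5])
treated_yields = np.array([4.0e4, 2.5e4, 6.0e4, 3.0e4])

# Two-sample t-test on log-transformed titres.
t, p = stats.ttest_ind(np.log10(control_yields), np.log10(treated_yields))
print(f"t = {t:.2f}, P = {p:.4f}")  # significant at P < 0.05 for these numbers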
DISCUSSION Previous experimenters have described the use of isolated intestinal loops for the study of TGEV pathogenesis, in which relatively large lengths of the small bowel were exteriorized (Pensaert et al., 1970). While this technique is ideal for the study of the sequential events occurring during TGEV infection, it was not thought suitable for the purposes here, where the objective was to compare the effects of various treatments. The length of the small bowel in a 5 day old conventional piglet is approximately two metres; in our studies up to fourteen 5 cm segments were used, although 8-10 segments were found to be a more satisfactory maximum in an animal of this age. The Miller M6 strain of TGEV was selected for use in our yield reduction assays because it could be readily titrated in cell culture, in contrast to the virulent Purdue virus. While less virulent than the Purdue strain, the Miller M6 strain was capable of replicating in the intestinal segments, in contrast to vaccine strains of TGEV which were evaluated in preliminary experiments (results not shown). Antigenic and nucleotide sequencing data have established the close relationship between porcine and human alpha interferons (La Bonnardière et al., 1986; Lefèvre and La Bonnardière, 1986; Lefèvre et al., 1990; Weingartl, 1989). Human interferon has been shown to have antiviral activity in non-human cells (Gresser et al., 1974). Human interferon has also been shown to have activity against feline infectious peritonitis virus, a coronavirus closely related to TGEV (Weiss and Oostrom-Ram, 1989). Oral treatment with human IFN has been reported to decrease rotavirus shedding in pigs (Lecce et al., 1992). Previous reports have described the reduction in TGEV yield following treatment of both cell and intestinal explant cultures with porcine and bovine IFN (Maclachlan and Anderson, 1986; Derbyshire, 1989; Weingartl and Derbyshire, 1990). Of particular interest in the present study was the demonstration that treatment of ST cell cultures after inoculation with a low multiplicity of virus would result in some decrease in subsequent virus yield. In the intestinal segment experiments, however, the simultaneous treatment of enterocytes with Po IFN-α or Hu rec IFN-α2a and TGEV did not cause a decrease in virus yield. It is possible that the kinetics of viral replication and/or the kinetics of the IFN system differ in enterocytes relative to ST cells in culture, such that viral replication is well underway by the time an effective antiviral state is induced. It is also possible that the IFN inoculum was being degraded by intestinal proteases, despite the inclusion of a protease inhibitor in the inoculum. The effects of poly ICLC on circulating IFN levels and leucocyte values are consistent with those previously described (Derbyshire, 1986; Loewen and Derbyshire, 1988b). The present study demonstrated that, in addition to the activation of natural killer cells previously described by Lesnick and Derbyshire (1988), poly ICLC treatment will result in a decreased TGEV yield in enterocytes. This may be a factor in the delay in onset of clinical signs associated with poly ICLC treatment (Loewen and Derbyshire, 1988a). It is not certain whether the reduced virus yield in the intestinal segments of the poly ICLC treated piglets resulted from a direct antiviral effect of the induced IFN on the enterocytes, or from lysis of enterocytes early in infection by NK cells activated by the induced IFN.
The latter mechanism seems less likely, since the intestinal segments were infected with TGEV well before NK activity would have peaked (Lesnick and Derbyshire, 1988). A more direct effect, through the induction of an antiviral state in the enterocytes, seems more likely. It has been shown that 2'-5' oligoadenylate synthetase is induced during TGEV infection, and is also produced in IFN treated porcine cell cultures (Bosworth et al., 1989; Bosworth and Maclachlan, 1990). Whether this system is involved in the antiviral state against TGEV remains to be determined. The fact that simultaneous IFN treatment did not decrease virus yields in the intestinal segments may preclude the use of IFN as treatment for TGEV infection. However, the reduction in virus yield following the administration of an IFN inducer suggests that IFN may be useful prophylactically.
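As an internal consistency note on the in vitro assay described earlier: 20 or 100 pfu/well at multiplicities of infection of 5 × 10⁻⁵ and 2.5 × 10⁻⁴ implies roughly 4 × 10⁵ ST cells per well. The cells-per-well figure is inferred here, not stated in the paper; the sketch below only checks that arithmetic.

def moi(pfu, cells):
    # Multiplicity of infection: infectious units per cell.
    return pfu / cells

cells_per_well = 20 / 5e-5   # 4.0e5 cells, inferred from the stated MOI
assert abs(moi(100, cells_per_well) - 2.5e-4) < 1e-12
print(f"implied cells/well: {cells_per_well:.1e}")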
2018-04-03T04:30:20.099Z
1994-01-01T00:00:00.000
{ "year": 1994, "sha1": "ff008b01ea1e3a177e32c67df5bd2df86d955bb0", "oa_license": null, "oa_url": "https://doi.org/10.1016/0378-1135(94)90007-8", "oa_status": "BRONZE", "pdf_src": "PubMedCentral", "pdf_hash": "58697ca56c80b38243836aab285070415fd651e2", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
919389
pes2o/s2orc
v3-fos-license
Intercultural communication through the eyes of patients: experiences and preferences Objectives To explore patients' preferences and experiences regarding intercultural communication which could influence the development of intercultural patient-centred communication training. Methods This qualitative study is based on interviews with non-native patients. Thirty non-native patients were interviewed between September and December 2015 about their preferences and experiences regarding communication with a native Dutch doctor. Fourteen interviews were conducted with an interpreter. The semi-structured interviews took place in Amsterdam. They were focused on generic and intercultural communication skills of doctors. Relevant fragments were coded by two researchers and analysed by the research team by means of thematic network analysis. Informed consent and ethical approval were obtained beforehand. Results All patients preferred a doctor with a professional patient-centred attitude regardless of the doctor's background. Patients mentioned mainly generic communication aspects, such as listening, as important skills and seemed to be aware of their own responsibility in participating in a consultation. Being treated as a unique person and not as a disease was also frequently mentioned. Unfamiliarity with the Dutch healthcare system influenced the experienced communication negatively. However, a language barrier was considered the most important problem, which would become less pressing once a doctor-patient relation was established. Conclusions Remarkably, patients in this study had no preference regarding the ethnic background of the doctor. Generic communication was experienced as important as specific intercultural communication, which underlines the marginal distinction between these two. A close link between intercultural communication and patient-centred communication was reflected in the expressed preference 'to be treated as a person'. A key concept in research on doctor-patient communication is patient-centred care, a paradigm defined as care focused on the patient as a whole person with individual preferences situated within a social context. 13 One of the key elements defining patient-centred doctor-patient communication is that doctors adapt their communication style to each patient's preferences. 14 The intercultural communication style of doctors could be seen as a combination of generic patient-centred communication skills and specific intercultural communication skills. 6,15 In addition, a recent literature review by Degrie et al. mentioned the joint responsibility of the patient and the caregiver for intercultural communication, in which non-verbal communication, the social dimension and cultural sensitivity of communication play a role. 9 Despite extensive research on patient satisfaction, 8 there is a lack of insight into patients' preferences and experiences regarding intercultural communication. 7,9,12,16 The latest review on minority patients' experiences concluded that a broader perspective towards culturally sensitive care for all kinds of patients is desirable. 9 Since shared decision making and patient-centred communication are becoming more important in healthcare, patients' preferences are becoming more important as well. Therefore, it is imperative to know more about non-native patients' preferences regarding intercultural doctor-patient communication. 8 Additionally, it is expected that better intercultural communication enhances patient involvement, satisfaction and health outcomes. 10
The purpose of this study is to provide insight into patients' preferences and experiences regarding their doctors' communication in more detail. This could direct the development of intercultural communication training for doctors, which is not always structurally implemented in medical education. 3,17 Therefore, we focused on two main research questions: which kinds of communication behaviour do non-native patients prefer in intercultural communication with their native doctors, and how do they experience this communication? Study design This qualitative semi-structured interview study was performed following the consolidated criteria for reporting qualitative research (COREQ criteria). 19 Non-native patients were interviewed after visiting a native Dutch doctor in the Netherlands. Study participants Non-native patients who visited a native Dutch medical specialist were asked to participate. Non-native patients were defined as 'patients who were not born in the Netherlands or patients with at least one parent born outside the Netherlands'. If the patient did not speak Dutch, the interview questions and answers were translated by an interpreter. This interpreter could be a family member, another healthcare worker or a professional interpreter. If the patient was accompanied by family or other people, they were also involved in the interview. All participants were informed about the aim and the procedure of the study beforehand. All participants signed informed consent. The study was performed in line with Dutch privacy legislation. Approval of the Dutch medical education ethics board was obtained (NVMO-ERB 557). We confirm that all patient identifiers have been removed or anonymised so the patients described cannot be identified through the details of the story. Sample size Of a total of 57 invited participants, 30 agreed to participate in the study. The most frequently mentioned reason to decline participation was lack of time. The interviews lasted between 5 and 30 minutes, depending on the participant's available time and on the level of elaboration that could be achieved in the interview. Seven patients were available for a short interview, and in seven other interviews attempts to reflect on the questions in a deeper way were unsuccessful, resulting in interviews that were shorter than 10 minutes. In interviews where reflection about preferences regarding intercultural communication in general was unsuccessful, patients were asked to focus on their experiences of the last conversation with a Dutch doctor. In total, 14 participants were accompanied by an informal interpreter. The other 16 participants did not need an interpreter. The ethnic backgrounds of the participants were Surinamese, Turkish, Moroccan, Portuguese, Indonesian, Iraqi, Irish, American and Chinese. Sampling procedure The interviews, conducted in Dutch, were held between September 2015 and December 2015. Patients who met the inclusion criteria were asked to participate when they arrived at the outpatient clinic. To provide a heterogeneous sample of medical specialties, the patients were selected at the outpatient clinics of 4 departments: gynaecology, internal medicine, urology and orthopaedic surgery. Patients were approached in the waiting room by the interviewer and were given sufficient time to decide before signing the informed consent form. After they had consulted the medical specialist, an interview took place in a separate room.
Setting Semi-structured interviews were conducted in a teaching hospital in Amsterdam, the Netherlands. This hospital was regarded as 'migrant friendly' 18 and around 70% of the patients in this hospital were non-native, as defined in section 2.2. Therefore, the doctors in this hospital were used to communicating in an intercultural context. Data collection procedure The interviews were semi-structured and contained at least the following themes: preferences regarding the doctor's behaviour, preferences regarding the doctor's ethnic background, experiences regarding the influence of language and cultural differences on communication, general experiences regarding communication with doctors and, if this was difficult, their specific experience of the last consultation. The interviews were audiotaped and transcribed verbatim. After transcription, the audiotape was erased and the transcripts were anonymised. Data analysis The transcripts were coded by attaching keywords ('codes') to all text fragments that were considered relevant to one of the research questions. To allow new insights, the coding of the interview transcripts was open and without a previously conceived coding schedule, using the program MAXQDA. The codes were structured by means of thematic network analysis. 18 Of the 30 transcripts, 9 were analysed independently by two members of the research team. To check reliability, differences in coding and selection of fragments were discussed in an iterative process until consensus about the content of the codes was reached. In this case, consensus was reached after discussing 5 transcripts. After coding 11 transcripts no new codes were derived. The developed coding scheme was discussed in depth among all authors. Results are structured by identified themes. Per theme, patients' preferences are presented first, followed by their experiences. In the analysis, we focused on intercultural communication in general and did not differentiate per ethnic group. The characteristics of the doctor All participants claimed that a doctor's ethnic background was not important as long as the doctor was a professional. Some of the patients preferred a Dutch doctor instead of a doctor from their country of origin. The main reason for this claim was that many of the patients had already lived in the Netherlands for a long time. The respondents described that they felt more Dutch than the ethnicity of their country of origin. Many patients mentioned that they experienced a difference between the healthcare system in the Netherlands and that of their country of origin. "He needs to be a professional. Then I don't have a preference regarding his background". (Female, obstetrics department, interview 6) Some participants had a clear preference for a doctor of a particular gender. Male as well as female participants said they had experienced feelings of shame when the doctor was of the opposite gender. On the other hand, other participants mentioned that if the doctor was a professional, the doctor's gender was not an issue. Some patients expressed preferences regarding the age of a doctor. Some participants preferred older doctors, as they considered them to be more trustworthy. The doctor's communication behaviour Many participants mentioned that they felt comfortable when the doctor talked in an accessible way, such as: speaking slowly, using short sentences, explaining topics in various ways and avoiding medical jargon.
Furthermore, participants considered it important that a doctor explains the diagnosis clearly, listens to patients, takes sufficient time, comforts the patient, gives advice and information to the patient and prepares the consultation beforehand. Participants also preferred an open and friendly doctor, who focuses his attention on the patient and not on the computer. Participants regarded a doctor who is honest about the diagnosis as an example of open behaviour. An unfriendly doctor was described as someone who does not shake hands when greeting and who has a cold non-verbal attitude, such as leaning back in the chair. Doctors were also experienced as friendly when, for example, the doctor asked patients to take a seat before the real consultation started. "A friendly smile or something really simple can help to create a good atmosphere between the patient and the doctor". (Female, obstetrics department, interview 6) Participants said that being treated as a unique person and not as a disease contributed to feeling satisfied with the medical consultation. They believed that communication was facilitated by acknowledgements, such as the feeling that the doctor understands the problem, and by a feeling of being important to the doctor. Patients expressed that when doctors asked more questions they felt respected and understood. Professional attitude and knowledge Participants repeatedly mentioned a doctor's medical expertise, having enough time, and taking the problem of the patient seriously as important. This was linked to the doctor's professional behaviour, indicating that participants found their doctor to be a professional if he or she was medically up-to-date and well informed about possible treatment options. It was frequently reported that doctors sometimes ask about their patient's cultural habits and background. Many of the participants claimed to have no problem with this, especially when it was necessary for the doctor to know more about the background of the patient to be able to help them. However, a few participants mentioned feelings of discomfort in those situations because they were afraid the doctor would make assumptions about them. The doctor-patient relation All participants mentioned that language differences were a challenge. Some participants said that communication problems were solved by the presence of an interpreter. Many patients preferred an informal interpreter. Many patients mentioned that it was the responsibility of the patient to speak Dutch more fluently. "For me, a doctor is a doctor. The problem is the language". (Male, internal medicine department, interview 24) In intercultural communication, a good doctor-patient relation was mentioned by the participants as a facilitator for satisfactory communication. A good doctor-patient relation was, for example, established when the doctor and the patient had known each other for a longer period. Some participants said that many language differences seemed to have been solved once the doctor-patient relation was established. This was based on the experience that communication was easier if the participant and the doctor knew each other, because fewer words were needed to understand each other than during the first visit. All participants experienced positive feelings about the intercultural communication with their doctors and found it hard to come up with points of improvement for the doctor's style of communication.
Patient characteristics and participation skills Some participants spontaneously reported that patient-doctor communication was also influenced by their own behaviour. Some participants were aware that their expectations may not always be clear to doctors, which could result in miscommunication. Also, participants considered it the patient's responsibility to ask questions if they did not understand the doctor's information about a diagnosis or treatment option. Participants stated that the communication could be influenced by patient characteristics, such as their educational level, religious beliefs and age. Knowledge of the healthcare organisation The participants talked about the clarity of healthcare organisational aspects in the Netherlands. For example, some participants said they had initially been unaware that they needed a letter of referral from the general practitioner to see a medical specialist in the hospital. Also, a few participants were unfamiliar with the irregular availability of their doctor or the concept of a teaching hospital employing residents. "I did not just have one gynaecologist or midwife. Instead, there was a different doctor every time". (Female, obstetrics department, interview 13) Discussion The aim of this interview study was to explore non-native patients' preferences and experiences regarding the intercultural communication with their native doctor. We found that the doctor's ethnic background was considered not important by this sample of non-native patients. However, a professional attitude of the doctor was very important for the patients. Furthermore, the patients preferred the doctor to focus on them as unique persons rather than only on the disease. Overall, the patients had positive experiences of the communication with their Dutch doctor, though a language barrier was mentioned as a major problem in an intercultural conversation. The patients stated that being acquainted with the doctor made language problems less prominent. Some results of this study are well known in the literature, such as the language barrier as a problem in intercultural communication and the importance of a professional attitude. However, a remarkable result of this study was that patients had no preference regarding the doctor's ethnic background. We had expected that a doctor's ethnic background would be important to patients. Concerning the effect of concordance in ethnic or racial background between the doctor and the patient, various effects have been found in the literature. On the one hand, it has been concluded that race concordance was not important for the communication, 23 which is confirmed by the patients in this study. On the other hand, positive effects of race or ethnic concordance between the doctor and the patient have been found, such as understanding the feelings of the patients when the doctor is of the same ethnicity. 24 The fact that this was not the case in this study could serve as an argument against the proclaimed need for categorical care, where for example Turkish doctors care for Turkish patients. 25 Many studies report positive effects of language concordance between the doctor and the patient. 21,22 Since patients in our study mentioned language as the biggest barrier in a conversation with the doctor, we could imagine the positive effects of language concordance. However, the patients did not explicitly mention this.
Paternotte et al. Patients' communication preferences In our study the importance of generic communication skills was showed. This is in line with the results of Mazzi et al. on the preferences of native patients, who identified relevant communication skills for doctors, such as listening attentively, treating the patient as a person and granting enough time. 8 Although they did not investigate patientdoctor communication in an intercultural context, the similarity of the relevant communication skills could confirm that patient-centred communication is important in every context. In particular, the preference that 'patients should be treated as a person' was mentioned several times in our study. This is closely linked to the theory of patientcentred communication, which stipulates that every patient should be approached as a whole person. 13,26 These results are also closely linked to the views expressed by the participants in our study. Considering that patient-centred communication seems to be relevant in an intercultural context, the relation between these two concepts of communication is of interest. 26 The question whether patient-centred communication alone is sufficient enough for successful intercultural communication should be investigated in more depth. 26,28 Patient-centred communication is not only an approach to guide doctors, it also asks something of patients' participation, such explaining the reason of encounter. 28,29 In our study the non-native patients seemed to be aware of this by mentioning the need of their own participation in a conversation. In addition, the patients mentioned unfamiliarity with the healthcare system as an issue. 26 So, in intercultural communication it is important to account for the unfamiliarity of non-native patients regarding the healthcare system, which needs explicit attention in intercultural communication. 26 During the interviews, the non-native patients in our study seemed to have difficulties in reflecting on their doctor's communication behaviour. For example, participants mentioned that communication of doctors was most of the time good and they could sparsely formulate points of improvement. Based on this, we interpreted that these participants found it difficult to mention their preferences regarding the communication style of the doctor. Reflections on previous communication experiences were used to reflect at a deeper level. Still, the participants expressed mainly positive experiences. It could be, of course, that their doctors are already skilled intercultural communicators, since they all work in a 'migrant friendly' hospital, 18 although there is always room for improvement. Other studies showed that patients were mainly positive about the communication with their doctors. 30 The question remains whether patients, and especially non-native patients, have the capacity to reflect on their preferences or experiences regarding communication with their doctors at a deeper level and to formulate improvements. As a consequence, the results of the analysed data might be superficial. At the same time, insurmountable problems regarding intercultural communication probably would have been identified during the interviews, whereas more subtle intercultural communication issues need more profound reflection to be identified. Gaining more understanding on this issue is particularly important since patients are seen as important stakeholders in the evaluation of healthcare communication and patient's views could guide training for doctors. 
The strengths of this interview study lie in the fact that we interviewed non-native patients, since patients are the ones who need to be satisfied with the doctor's communication in order to experience good healthcare. Despite the effort to include non-native patients and to reach a deeper level of interviewing with them, the sample size was probably not big enough to generate more varied and deeper insights. Additionally, the various professional backgrounds of the researchers made it possible to reflect on the data from multiple perspectives. However, the interviews were performed by a Dutch interviewer, which may have influenced the responses. Further research should focus on the effect of the interviewer's cultural background, in order to find out whether a deeper level of understanding could be reached more easily between a patient and an interviewer who share the same cultural background. Another option to facilitate reflection is the use of films or observation of conversations. The results of this study show an overlap of patient-centred communication and intercultural communication. Therefore, further research could focus on the distinction between these two and their overlap, which could facilitate further development of intercultural communication education for medical curricula. To approach and learn every aspect of each culture that could influence the medical encounter is impractical, if not impossible, and would reinforce stereotyping. 3,28,31,33 We therefore chose to focus on the non-native patients as a group, instead of analysing the results according to their ethnic cultural background. However, a limitation is that the interviews were performed in one hospital in one country. Conclusions Overall, non-native patients reported positive experiences regarding the communication with native Dutch doctors, and they did not prefer a doctor of a specific ethnic background. According to them, a language barrier constituted the most important problem, which would become less pressing once a doctor-patient relation is established. Generic communication of doctors was considered as important as specific intercultural communication, which could indicate the marginal distinction between intercultural communication and patient-centred communication. An additional conclusion is that reflecting on the communication skills of the doctor is difficult for patients.
2017-09-26T12:44:21.406Z
2017-05-16T00:00:00.000
{ "year": 2017, "sha1": "4460a22e4ec600492db4e916283f7bc6565177a5", "oa_license": "CCBY", "oa_url": "https://www.ijme.net/archive/8/patients-communication-preferences.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "4460a22e4ec600492db4e916283f7bc6565177a5", "s2fieldsofstudy": [ "Medicine", "Political Science" ], "extfieldsofstudy": [ "Psychology", "Medicine" ] }
11798518
pes2o/s2orc
v3-fos-license
An old friend revisited: chloramphenicol optic neuropathy With increasing antibiotic resistance, oral chloramphenicol may be utilized more frequently; below we highlight the risk of toxic optic neuropathy. Case report A 66-year-old woman presented in October 2011 with a 10-day history of painless bilateral visual loss of subacute onset (over 48 h). She also described paraesthesiae of the limbs for a similar duration. She has rheumatoid arthritis, treated with methotrexate (since 2002) and etanercept (2003-2009), and she required bilateral knee replacements in 2009. Her right knee replacement was revised and underwent six washouts for persistent infection. Previous antibiotics used for her knee infection included ceftriaxone, fusidic acid, amoxicillin, vancomycin and daptomycin. Chloramphenicol 4 g daily was started 14 weeks before the occurrence of visual loss. Chloramphenicol had been discontinued five days before presentation. Her other medications included methotrexate, folic acid, tramadol, paracetamol, aspirin and omeprazole. She smokes 10 cigarettes daily and does not drink alcohol. She has a varied balanced diet. There is no relevant family history. On examination at 10 days after onset of visual loss, she had a best corrected visual acuity of 3/24 (Snellen) bilaterally and centrocaecal scotomas on examination by confrontation with a red hat pin. None of the Ishihara colour plates could be read, not even the control plate. Both her optic discs were hyperaemic (see Figure 1). The rest of her cranial nerves and neurological examination of the limbs were normal, except for bilateral L5 dermatomal loss to pinprick and abnormal proprioception at the great toe. Both temporal arteries were normal to palpation. Magnetic resonance imaging (MRI) of the brain and orbits was normal, and MRI of the spine showed mild degenerative changes at the cervical spine only. Cerebrospinal fluid examination was normal. A visual evoked potential was performed after recovery and this was normal, as were nerve conduction studies. A diagnosis of toxic optic neuropathy secondary to chloramphenicol was suspected. As stated above, the chloramphenicol had been stopped five days before presentation to us, i.e. on day 6 of visual symptoms. Four weeks after stopping chloramphenicol, vision improved to acuities of 6/5 (right) and 6/6 (left) and normal visual fields, but reading of the Ishihara colour plates remained impaired at 8/13 bilaterally. The patient elected to have an above-knee amputation of her right leg, and is now mobilizing with a prosthetic limb and rehabilitation. At the onset of visual symptoms she developed paraesthesiae of the limbs. Although the paraesthesiae improved, she is still symptomatic with this and the sensory loss in the left foot is impairing her prosthesis use. Discussion Chloramphenicol oral therapy was commonly used until the late 1980s, but with the publicized idiosyncratic reaction of bone marrow suppression 1 and the availability of newer antibiotics, its use was slowly phased out, at least in the 'developed' world. 2 However, with the increasing emergence of antibiotic resistance, older generations of antibiotics may potentially be used more frequently, as was the case in our patient. The previously reported adverse reactions to these older generation antibiotics may therefore be less familiar to the current generation of physicians.
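For readers who prefer the logMAR scale, the acuities in the case above convert with the standard formula logMAR = log10(denominator/numerator) for a Snellen fraction; the short sketch below applies it to the values reported in this case.

import math

def snellen_to_logmar(numerator, denominator):
    # Standard conversion for a Snellen fraction such as 3/24 or 6/5.
    return math.log10(denominator / numerator)

print(f"3/24 at presentation: logMAR {snellen_to_logmar(3, 24):.2f}")  # ~0.90
print(f"6/5 after recovery:   logMAR {snellen_to_logmar(6, 5):+.2f}")  # ~-0.08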
Toxic optic neuropathy secondary to chloramphenicol was first described in 1950. Approximately 40 cases of chloramphenicol optic neuropathy were reported from 1950 to 1988, but in the 23 years since then only two cases have been reported, 3-7 a trend that may reflect the reduction in chloramphenicol use. The bone marrow suppression caused by chloramphenicol has long been recognized by physicians, 1 leading to a severe idiosyncratic adverse reaction even with brief courses of treatment. By contrast, chloramphenicol toxic optic neuropathy is associated with prolonged use (more than 6 weeks) and a high cumulative dose (>100 g). 8 Knowledge of these risk factors can potentially avoid this adverse reaction, by avoiding prolonged courses or high doses. The visual loss in chloramphenicol toxic optic neuropathy may be sudden or subacute, and painful or painless. There may often be limb paraesthesiae preceding the visual symptoms. 4 The typical ocular signs are bilateral optic disc swelling, retinal vessel tortuosity and retinal haemorrhages; our patient's optic discs were certainly hyperaemic, and possibly slightly swollen (see Figure 1). However, the fundi may also be normal. A centrocaecal scotoma is typically seen, and is pathognomonic for toxic/nutritional optic neuropathies. The differential diagnoses to be considered were nutritional optic neuropathy and Leber hereditary optic neuropathy, both of which were excluded in our patient. Suggested hypotheses include inhibition of mitochondria and of vitamin B metabolism by chloramphenicol. 4,9 The treatment is to stop chloramphenicol, which often leads to good, or even full, recovery of vision. 3,4,6 There is anecdotally reported use of high-dose vitamin B (pyridoxine and cyanocobalamin), but the evidence for this is not robust. 8,10 Our patient recovered simply with stopping the offending drug. Figure 1. Bilateral hyperaemic optic discs seen in our patient with chloramphenicol-associated toxic optic neuropathy. Conclusion We describe a case of toxic optic neuropathy secondary to chloramphenicol. Chloramphenicol optic neuropathy causes subacute visual loss with centrocaecal scotoma. The risk factors are prolonged treatment (>6 weeks) and a cumulative dose of >100 g. The management is to stop the drug. This case highlights the potential pitfalls resulting from the use of older generation antibiotics and lack of familiarity with their adverse effects. The problem is likely to become increasingly pertinent as antibiotic resistance increases.
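A quick arithmetic check shows how far this case exceeded the risk thresholds just stated; the 98-day figure assumes the full 14 weeks were at 4 g daily, which the report implies but does not state explicitly.

daily_dose_g = 4
weeks_on_drug = 14
cumulative_g = daily_dose_g * weeks_on_drug * 7   # 392 g

print(f"duration: {weeks_on_drug} weeks (threshold > 6 weeks)")
print(f"cumulative dose: {cumulative_g} g (threshold > 100 g)")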
2017-08-30T11:51:46.407Z
2013-03-01T00:00:00.000
{ "year": 2013, "sha1": "46dde1297366d475faa760216a6af76ffabd999e", "oa_license": "CCBYNC", "oa_url": "http://journals.sagepub.com/doi/pdf/10.1177/2042533313476692", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "46dde1297366d475faa760216a6af76ffabd999e", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
373234
pes2o/s2orc
v3-fos-license
Tacorin, an extract from Ananas comosus stem, stimulates wound healing by modulating the expression of tumor necrosis factor α, transforming growth factor β and matrix metalloproteinase 2 Wound healing is a complex biological process that involves integration of hemostasis, inflammation, proliferation and tissue remodeling. An extract of pineapple (Ananas comosus) stem demonstrates several therapeutic properties, including acceleration of wound healing. Tacorin is a crude water extract derived from the stem of A. comosus with high protein content. The effect of tacorin on wound healing in vivo was examined using rats with an induced injury. Wound closure was faster with tacorin treatment than in the untreated group. An in vitro study was conducted on mammalian cells (3T3-L1) to observe the effect of tacorin on cell proliferation. Tacorin was first heated to inactivate its proteolytic activity. It increased the viability of 3T3-L1 cells in a dose-dependent manner. Excessive inflammation was suppressed by tacorin, as shown by decreased tumor necrosis factor α expression. Treatment with tacorin increased the expression of transforming growth factor β, a major player in tissue remodeling. Moreover, tacorin also reduced the expression of MMP-2 to accelerate the recovery of the wound. Taken together, tacorin is able to accelerate the wound-healing process by increasing cell proliferation, suppressing inflammation and accelerating tissue remodeling. A complex series of interactions between different cell types, cytokine mediators and the extracellular matrix is required in the wound healing process. The normal course of wound healing involves hemostasis, inflammation, proliferation and remodeling. Each phase is distinct, continuous and overlapping [1]. The initial responses in tissue injury are wound clearing and vasoconstriction, followed by vasodilatation and platelet aggregation. The inflammatory phase is marked by erythema, swelling, warmth and pain [2]. In the late inflammation phase, macrophages, neutrophils and lymphocytes migrate to the wound area and release cytokines such as tumor necrosis factor (TNF), transforming growth factor (TGF) and interleukin (IL).
These cytokines stimulate cell migration and proliferation and formation of the tissue matrix [3]. The proliferative phase is characterized by the formation of granulation tissue and epithelialization. The final phase in wound healing, tissue remodeling, consists of reorganization of new collagen fibers [2]. There are many types of wounds that can damage skin. Deeper wounds require medical attention to prevent infection and loss of function. However, most wounds are superficial and can be cared for at home. This usually requires cleaning, application of antibacterial ointment and covering with an adhesive bandage. The purpose of medical care for wounds is to prevent infection and loss of function. In several cases, cosmetic treatment is used, but it is not the primary consideration for wound care. Crude pineapple stem preparations have previously been shown to contain two cysteine proteases: bromelain and ananain [4]. Ananain was identified as a minor cysteine protease that possesses distinct substrate and inhibitor binding properties [4]. It preferentially hydrolyzes the Bz-Phe-Val-Arg-p-nitroanilide (pNa) substrate. On the other hand, bromelain preferentially hydrolyzes Bz-Arg-Arg-pNa [5]. One important pharmaceutical application of cysteine proteases is enzymatic debridement of necrotic tissue from ulcers and burn wounds [6]. Tacorin is a crude extract from the stem of Ananas comosus, the part of this medicinal plant with higher proteolytic activity than other parts such as the fruit and fruit core. Tacorin has been developed by Dexa Laboratories of Biomolecular Science (DLBS), one of the biggest pharmaceutical companies in Indonesia, which is exploring various natural compounds for medicinal applications [7][8][9][10][11][12][13]. In addition to proteolytic enzymes, tacorin also contains the amino acids glycine, proline, glutamine and arginine. These amino acids are important in wound healing. To observe the effect of tacorin on wound healing, in vivo and in vitro studies were conducted. The in vitro study used 3T3-L1 cells, while the in vivo study was conducted with rats with an induced injury. The expression of several cytokines and growth factors was quantified with enzyme linked immunosorbent assay (ELISA) using specific antibodies. Tacorin extraction Plant parts of A. comosus (crown, fruit, fruit stalk/core, butt, leaf and stem) were collected and ground using a blender (Philips, Guangdong, China). The protein fraction from each plant part was extracted using a customized press machine (PT Raja Mesin, Jakarta, Indonesia), followed by separation at 8930 g at 4 °C for 15 min using a Kubota 7780 centrifuge (Fukuoka, Japan), and filtration through 0.1 µm and 5 kDa membranes with the QuixStand benchtop system (GE Healthcare, Uppsala, Sweden). The filtrate was subsequently dried using a Mini-Lab fluid bed dryer (Diosna Dierks & Söhne, Osnabrück, Germany). Tacorin was obtained from the plant part of A. comosus with the highest protein content and protease activity. The protease in tacorin must be inactivated for cell treatment; inactivation was carried out by heat exposure at various temperatures (60 and 80 °C) and incubation times (30, 40 and 50 min). Assay of tacorin Characterization of tacorin was carried out by analyzing the protease activity, protein and amino acid contents. The protein profile of tacorin was also obtained by tricine SDS/PAGE. Proteolytic activity assay The proteolytic activity assay was performed according to Rowan et al. [14] with small modifications.
A volume of 1.25 mL of the reaction mixture containing 0.65% casein (Sigma-Aldrich, St Louis, MO, USA) in 20 mM potassium phosphate buffer (pH 7.5) was added to a 0.25 mL sample. The mixture was incubated for 10 min at 37 °C. The reaction was stopped by adding 250 µL of 110 mM trichloroacetic acid (Sigma-Aldrich). A blank was prepared by adding trichloroacetic acid to the crude enzyme, followed by the substrate. After vortexing for 5 s, 2 mL of reaction mixture was placed in an Eppendorf tube and centrifuged at 9200 g for 10 min. A half-volume of supernatant was added to 1.25 mL Na₂CO₃ (Sigma-Aldrich) and 0.25 mL Folin reagent (Merck, Darmstadt, Germany). The absorbance was measured by UV/Vis spectroscopy at 660 nm. One unit of tacorin is the amount of enzyme that hydrolyzes casein to produce color equivalent to 0.5 nmol of tyrosine per minute at 37 °C and pH 7.5 (colored with Folin-Ciocalteau reagent). Protein content assay Protein concentration was quantified using the Bradford method [15]. One hundred microliters of sample was added to 2 mL of Bradford reagent [100 mg Coomassie G250 (Sigma-Aldrich), 50 mL of 95% ethanol (Sigma-Aldrich), 100 mL of 85% phosphoric acid (Sigma-Aldrich), and water to 1 L]. The mixture was incubated for 10 min at room temperature. The absorbance was read at 595 nm. Bovine serum albumin fraction V (Merck) was used as a reference standard. Protein profile assay The protein profile of tacorin was obtained by tricine SDS/PAGE at 10% and 16% gel concentrations using ultra-low molecular mass markers (1.02-26.6 kDa). Proteins were visualized with Coomassie Brilliant Blue R-250 [16]. Amino acid profile assay The amino acid profile was analyzed using high performance liquid chromatography (HPLC). Tacorin was injected into a Waters AccQ-Tag amino acid analysis column (3.9 × 150 mm; Waters Corp., Milford, MA, USA) and eluted with a mixture of AccQ-Tag eluent A, acetonitrile and HPLC-grade water, based on the manufacturer's instructions. Animals A total of 14 female Wistar rats (Rattus norvegicus), weighing 170-220 g, were used in this study: seven rats for the negative control group and the rest for the tacorin group. Rats were caged individually in polysulfone cages and housed under standard conditions (18-25 °C, relative humidity < 70%, 12 h light/dark cycle). All procedures in this study were reviewed and approved by the Dexa Laboratories of Biomolecular Sciences Animal Care and Use Committee under protocol number DOC-DLBS-PROC-APC-027 and carried out in accordance with the Association for Assessment and Accreditation of Laboratory Animal Care International (AAALAC International). Animal treatment After 1 week of acclimatization, the rats were anesthetized with ketamine (75 mg·kg⁻¹ bw) and xylazine (10 mg·kg⁻¹ bw) intraperitoneally prior to wound induction. The abdomen was opened by making a vertical incision. A 2.5 cm midline abdominal incision was made from the midpoint of the abdomen to the anterior of the urethra, and then surgery was performed for 2 cm on each side, right and left of the uterine area. The wound was then closed by the one-layer closure technique with continuous lock stitches of 4.0 chromic catgut sutures. Effect of tacorin on wound healing Wounded rats were divided into an untreated group (negative control group, purified water 1 mL·kg⁻¹ bw) and a treated group (tacorin 80 mg·kg⁻¹ bw, corresponding to the human dose). Purified water and tacorin were administered daily by the oral route.
Then a 1.7% (v/w) blood sample from each rat was taken daily for up to 5 days (D0-D5). Blood samples were further analyzed for the expression of TNF-α and TGF-β. Tissue was collected from the left and right of the rats' uterine area at days 0, 3, 5 and 7 and used for analysis of the expression of matrix metalloproteinase 2 (MMP-2). Table 1 shows the design of the animal study.

Cell line and treatment
The 3T3-L1 cell line (American Type Culture Collection CL-173, Rockville, MD, USA) was maintained in DMEM supplemented with 10% BSA and 1% penicillin/streptomycin (Life Technologies, Carlsbad, CA, USA). Cells were seeded at a density of 4 × 10^3 cells per well in 96-well plates and incubated in complete medium for 24 h before use. The cells were then incubated in serum-free medium for another 24 h to completely deplete serum from the medium. Treatment was conducted with heat-inactivated tacorin and with bromelain (Sigma-Aldrich) as a comparator. Cells were treated with various concentrations of both proteins for 24 h. Cell viability was quantified with a 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide (MTT; Promega, Madison, WI, USA) assay.

Enzyme linked immunosorbent assay
The expression levels of TNF-α, IL-6 and TGF-β were measured by ELISA using specific antibodies (Santa Cruz Biotechnology, Dallas, TX, USA; Abcam, Cambridge, MA, USA). One hundred microliters of diluted antigen (rat serum, 2 µg/µL in phosphate-buffered saline; PBS) was immobilized in a 96-well microtiter plate (Nunc, Roskilde, Denmark) overnight at 4°C. After incubation, the plates were washed three times in washing buffer (twice with PBS, once with Tris-buffered saline and Polysorbate 20). One hundred microliters of blocking solution [5% skim milk (Sigma-Aldrich) in PBS] was added to each well. Afterwards, 100 µL of primary antibody (1:1000; Santa Cruz Biotechnology, Abcam) in blocking solution was added, followed by incubation for 2 h at room temperature on a shaker. Plates were washed as before. One hundred microliters of horseradish peroxidase-tagged secondary antibody (1:1000; Santa Cruz Biotechnology) in blocking solution was added to each well. The plates were incubated for 1 h at room temperature. Finally, the plates were washed four times and 50 µL of tetramethylbenzidine (Sigma-Aldrich) was added. The color intensity of the solution was read in a microtiter reader at 650 nm.

Zymogram analysis
The expression of MMP-2 was detected using zymogram analysis. Approximately 25 mg of uterine tissue was extracted with 1 mL of isolation buffer (1% Triton X-100, 0.5 M Tris/HCl pH 7.6, 0.2 M NaCl and 10 mM CaCl2) for 30 min. The suspension was frozen and thawed twice in a deep freezer, and then centrifuged at 2500 g for 30 min. The supernatant was collected and dialyzed against ice-cold dialysis buffer (50 mM Tris/HCl pH 7.6, 0.2 M NaCl and 5 mM CaCl2) for 48 h. The extracted protein was then diluted in 10 µL of zymogram buffer (0.25 g SDS, 0.3125 mL of Tris base pH 6.8, 0.5 mL of 1% bromophenol blue, 0.5 mL of glycerol and 1.18 mL of purified water) and analyzed by electrophoresis (SDS/PAGE) using 10% acrylamide. After electrophoresis, the gel was incubated in 2.5% (v/v) Tween 20 for 30 min and in 50 mM potassium phosphate buffer (pH 7.0) for 3 h, then stained with Coomassie Brilliant Blue R-250 for 30 min and destained with destaining solution (purified water, ethanol and glacial acetic acid, 8:1:1 v/v/v).
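Before moving to the results, here is a minimal sketch of how raw MTT absorbances from the 3T3-L1 viability assay above could be converted to viability relative to untreated control wells. The plate layout, dose levels and readings below are invented for illustration and are not data from the study.

```python
import numpy as np

# Hypothetical blank-corrected MTT absorbances (A570), triplicate wells per dose.
# Rows are tacorin concentrations; all values are illustrative only.
doses_ug_per_ml = [0, 25, 50, 100, 200]
a570 = np.array([
    [0.41, 0.39, 0.40],   # untreated control
    [0.45, 0.47, 0.44],
    [0.52, 0.50, 0.53],
    [0.61, 0.63, 0.60],
    [0.78, 0.80, 0.77],   # roughly double the control, cf. the reported result
])

control_mean = a570[0].mean()
viability_pct = a570.mean(axis=1) / control_mean * 100.0
sem_pct = a570.std(axis=1, ddof=1) / np.sqrt(a570.shape[1]) / control_mean * 100.0

for dose, v, s in zip(doses_ug_per_ml, viability_pct, sem_pct):
    print(f"{dose:>4} µg/mL: {v:6.1f} % ± {s:.1f} (SEM)")
```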
Characterization of tacorin
Ananas comosus was obtained from a supplier in Subang, West Java, Indonesia and identified by the Research Center for Biology, Indonesian Institute of Sciences, Bogor, Indonesia (Fig. 1). Water extract of pineapple contains a number of proteolytic enzymes [17,18]. Protein concentrations of extracts from various parts of the pineapple were quantified, and the highest protein concentration was obtained from the stem (Table 2). For further experiments we used the extract from pineapple stem, referred to as tacorin. The protein profile of tacorin was analyzed using SDS/PAGE, and two types of protein were found, with molecular masses around 15 and 25 kDa (Fig. 2). These results are similar to those for ananain from A. comosus reported by Rowan et al. [4]. Another report, US Patent 7833963, described various proteins with masses of 15.07, 25.85 and 27.45 kDa as being present in bromelain derived from A. comosus.

It is well understood that sufficient protein is required in the wound healing process owing to the increased need for protein for tissue regeneration and repair. Specific amino acids such as arginine and glutamine have been identified as important constituents in wound healing. Arginine is a non-essential amino acid that is important in protein synthesis [18,19]. Adequate arginine in tissue appears to be an essential parameter for efficient wound repair. Glutamine is used by inflammatory cells within the wound for proliferation and as a source of energy. Fibroblast cells use glutamine for similar purposes, as well as for protein and nucleic acid synthesis [20,21]. An analysis of amino acids was conducted using HPLC, with the profile shown in Fig. 3. Tacorin contains 6.07% total amino acids. Figure 3 and Table 3 show that glycine, proline, glutamine, alanine and arginine are the major amino acids in tacorin. Since arginine and glutamine are important in the wound healing process, this might explain the mechanism of tacorin as a wound healing accelerator.

The activity of tacorin
An in vivo study revealed that the recovery of a wound in a group of rats treated with tacorin was faster than in an untreated group, as indicated by the decrease of wound area and the degree of uterine wound healing. These data were reported in our previous study by Nailufar et al. [22]. They demonstrated that after 3 days of treatment, the degree of wound healing in the tacorin group increased significantly, and the degree of uterine wound healing at day 7 reached up to 90% compared with the untreated group. The present study describes the mechanism of action of tacorin in the wound healing process. The mechanism of action of bromelain, by comparison, has been reviewed. Bromelain has been reported to have anti-inflammatory properties, which are mediated through the following factors: increased serum fibrinolytic activity, reduced plasma fibrinogen levels, decreased bradykinin levels, and decreased prostaglandin E2 and thromboxane A2 levels. Another in vitro study reported that bromelain treatment activates natural killer cells and increases the production of TNF-α, interferon-γ, IL-1, IL-2 and IL-6. Bromelain was also demonstrated to induce cytokine production in human peripheral blood mononuclear cells, leading to the production of TNF-α, IL-1β and IL-6 in a time- and dose-dependent manner [23][24][25][26][27][28][29][30]. Our study demonstrated that TNF-α expression decreased when the animals were treated with tacorin (Fig. 4).
A pronounced decrease was observed on day 5 (Fig. 4). This result suggests that tacorin limits the progression of inflammation via suppression of the TNF-α level. TNF-α is a pleiotropic inflammatory cytokine produced by several types of cells, especially macrophages. TNF-α is an acute phase protein that initiates a cascade of cytokines and increases vascular permeability, thereby recruiting macrophages and neutrophils to a site of infection [31]. In contrast to the reduction of the TNF-α level, treatment with tacorin enhanced TGF-β expression (Fig. 5). TGF-β is a multifunctional protein that controls proliferation, differentiation and other functions in many cell types. This protein interacts with a conserved family of cell-surface serine/threonine-specific protein kinase receptors and generates intracellular signals [32].

In addition to TNF-α and TGF-β, we also quantified the expression of MMP-2 protein in the treated and untreated groups. On the third day after treatment, high expression of MMP-2 was detected in untreated rats, in both the left and right uterine areas; the expression of MMP-2 was then reduced at days 5 and 7. In rats treated with tacorin, however, the level of MMP-2 was not significantly different at days 3, 5 and 7, which indicates that the recovery process of the treated group was faster than that of the untreated group (Fig. 6). The expression of pro-MMP-2 remained constant. Matrix metalloproteinases (MMPs) are a cell-derived proteolytic enzyme family with 26 identified members [33]. MMPs function in the breakdown of extracellular matrix in normal physiological processes, such as embryonic development, reproduction and tissue remodeling, as well as in disease processes, such as arthritis and metastasis [6]. Most MMPs are secreted as inactive pro-proteins that are activated when cleaved by extracellular proteinases. The protein cleavage activity of MMPs is balanced in time and space by cell-secreted inhibitors called tissue inhibitors of metalloproteinases [34].

(Table 3. Amino acid composition of tacorin by HPLC. n.d., not detected; limit of detection of lysine, 455.81 p.p.m.; limit of detection of cysteine, 938.38 p.p.m.)

Tacorin is a water extract with proteolytic activity, even toward 3T3 cell surface proteins that contribute to cell adhesion. Protease-treated cells had reduced contact with neighboring cells and the plate surface, caused by digestion of the cell surface proteins involved in cell adhesion. Proteolytic digestion of membrane proteins by protease has also been shown to completely inhibit fibroblast adhesion on fibronectin-coated plastic dishes. Therefore, the proteolytic enzyme in tacorin must be inactivated prior to use to avoid false-negative results. This inactivation was conducted by a physical (heat) method. The proteolytic activity assay of tacorin was conducted using casein [35]. The influence of temperature and incubation time on bromelain activity has been reported in several studies [3,35-37]. Commercial bromelain from pineapple stems was reported to be completely inactivated by heating for 30 min at 60°C. Bromelain from frozen pineapple fruit of Bromelia balansae Mez had no activity when exposed at 37°C for 120 min; complete activity loss of B. balansae Mez bromelain is observed when incubated at 75°C. The activity of bromelain from A. comosus decreased by 20% when incubated at 50°C and by 100% when incubation was continued at 80°C for 8 min. We examined the effect of temperature on tacorin proteolytic activity.
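Before reporting those measurements, here is a minimal sketch of how residual protease activity after heat exposure can be expressed as a percentage of the untreated activity, using the casein-assay unit definition given earlier. The absolute activity values below are invented for illustration; only the relative reductions mirror the figures reported in the text.

```python
# Residual proteolytic activity after heat exposure, relative to untreated enzyme.
# All activity values (units/mL from the casein assay) are hypothetical.
untreated_activity = 120.0  # units/mL before heating (assumed)

heat_treated = {
    ("tacorin", 60, 30): 48.0,    # 60 °C, 30 min -> ~60% reduction (cf. text)
    ("tacorin", 60, 50): 30.0,    # longer exposure -> ~75% reduction
    ("bromelain", 60, 50): 18.0,  # control protein retains ~15%
    ("tacorin", 80, 30): 0.0,     # condition chosen for inactivation
}

for (protein, temp_c, minutes), activity in heat_treated.items():
    residual = activity / untreated_activity * 100.0
    print(f"{protein:>9} @ {temp_c} °C / {minutes} min: "
          f"{residual:5.1f}% residual ({100 - residual:.1f}% reduction)")
```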
Bromelain was also studied as a control. Upon exposure at 60°C for 30 min, the activity of tacorin was reduced by 60% (Fig. 7); with longer exposure times, the protease activity of tacorin was reduced by 75%. The activity of the control protein remained at a level of 15% under the same conditions. Silver staining showed bands (Fig. 8, bands 2 and 3) that remained even after exposure at 60°C for 50 min, while bands 1 and 2 decreased after exposure at 80°C. A new band, suggested to be a protein degradation product, appeared after both samples were exposed at 80°C (Fig. 8, band 4). Inactivation of tacorin and the control protein was therefore performed by heating at 80°C for 30 min.

Several parameters relevant to the wound healing process concern cell regeneration, including proliferation, cell growth and maturation. To observe the effect of tacorin on the proliferation phase, we measured the viability of 3T3 cells in the presence and absence of tacorin and the control protein. Tacorin treatment increased cell viability in a dose-dependent manner (Fig. 9). At the highest concentration of tacorin, the number of cells was nearly double that of the control. Similarly, when the cells were treated with the control protein, the number of cells was 1.5 times higher than the control. The wound healing ability of tacorin is suggested to act through its capacity to promote fibroblast cell growth. Fibroblast proliferation is important in the formation of the granulation tissue needed for wound closure. Fibroblast growth is related to neo-angiogenesis, to the secretion of the extracellular matrix needed for cell ingrowth and tissue development, and to the production of some cytokines and growth factors [38][39][40][41].

Conclusion
Tacorin is a crude protein extract from A. comosus stem. The bioactive protein fraction in tacorin contains glycine, proline, glutamine and alanine, amino acids that are important in the wound healing process. The effects of protein and amino acids on wound healing proceed through remodeling of the expression of cytokines and growth factors. In addition, these proteins and amino acids are involved in inflammation and tissue regeneration. Thus, tacorin is a promising wound healing therapeutic agent.
Detecting and Correcting Learner Korean Particle Omission Errors

We detect errors in Korean post-positional particle usage, focusing on optimizing omission detection, as omissions are the single biggest factor in particle errors for learners of Korean. We also develop a system for predicting the correct choice of a particle. For omission detection, we model the task largely on English grammatical error detection, but employ Korean-specific features and filters; likewise, output analysis and the omission correction system illustrate how unique properties of Korean, such as the distinct types of particles used, need to be accounted for in adapting the system, thereby moving the field one step closer to robust multi-lingual methods.

Introduction
Grammatical error detection is useful to produce an improved final document for writing assistance, provide feedback to language learners, provide features for automatic essay scoring, and post-edit machine translation output (see references in Chodorow et al., 2012, sec. 2). Within this growing field, most of the work has focused on English, but there has been a small community of researchers working on other languages. We continue this trend by advancing the state of the art in detecting errors in Korean particle usage. Expanding to other languages and language families obviously presents new challenges, such as being able to handle word segmentation and greater morphological complexity (e.g., Basque (de Ilarraza et al., 2008), Korean (Lee et al., 2012), Hungarian (Dickinson and Ledbetter, 2012), Japanese (Mizumoto et al., 2011)); greater varieties of word order (Czech (Hana et al., 2010), German (Boyd, 2012)); case ending errors (Czech, German, Hungarian); differing definitions of function words (Korean, Japanese, Basque); and so forth. Investing in methods which apply across languages will make techniques more robust and applicable for even more languages. An additional challenge for many of these languages is the lack of resources. Much previous work on detecting errors in Korean, for example, focused less on techniques and more on acquiring training data and evaluation data (Lee et al., 2012). We thus desire techniques that work using smaller and/or unannotated data sets that may be less reliable than some of the corpora for better-resourced languages.

We focus on detecting errors in the presence or absence of Korean postpositional particles. Korean is a Less Commonly Taught Language (LCTL) needing proficient speakers and more pedagogical research (see Dickinson et al., 2008, sec. 2), making computational tools for Korean language learning important. Particles are used to mark properties akin to prepositions and also to case markers, as discussed in section 2. This makes our task applicable to similar languages like Japanese and more generally to agglutinative languages like Basque, Hungarian, and Turkish, as discussed in section 3 on related work. Particles are our focus because of the high prevalence of particle errors in learner data, accounting for 20-30% of learner errors (section 2). One of the most frequent errors relating to particles is not using them when required (section 4); thus, simply detecting whether a particle is necessary can pinpoint nearly half the particle errors language learners make.
In the interest of extending methods to new languages, we develop an omission error detection system rooted in work on English preposition error detection (section 5), accounting for Korean-specific properties in the features and filtering of results (section 6). We then see the impact of such error detection on predicting the specific omitted particles (section 7). We make the following contributions in this paper: 1) We present a functional Korean particle omission error detection system, adapted from previous English preposition work but tailored towards Korean in its morpheme-based approach and its novel features. 2) We outline system mistakes, highlighting unique properties of Korean, and point towards how to fix them. 3) We provide an error correction system, incorporating new discourse-based features and optimized separately from the first-stage classifier, which corrects a high percentage of omission errors. In so doing, we also discover that accounting for distinctions in types of Korean particles opens the door to further improvements. The overall lesson is that work from English can be adapted, but only if incorporating the nuances of the new language.

Korean Particles
Korean postpositional particles are units that appear after a nominal to indicate different linguistic functions, including grammatical functions, e.g., subject and object; semantic roles; and discourse functions. In (1), for instance, 가 (ka) marks the subject (function) and agent (semantic role). Similar to English prepositions, particles can also have modifier functions, adding meanings of time, location, instrument, possession, and so forth. For further discussion of Korean particles, see, e.g., chapter 3 of Yeon and Brown (2011).

Learner Errors
Particle errors are very frequent for Korean language learners, accounting for 28% of beginner errors in one corpus study (Ko et al., 2004). In (2a), for instance, a learner omitted a subject particle after the word 것 (kes, 'thing'). The error has been corrected in (2b).

Related Work
While there is much related work on detecting preposition and article errors in English, e.g., the 2012 Helping Our Own (HOO) shared task (Dale et al., 2012), we will focus here on work on detecting errors in functional items in agglutinative languages (Korean, Japanese, Basque), as we most directly build from this. Roughly, agglutinative languages here are ones which "glue" syntactic categories, in the form of affixes, onto a word. For Korean particle error detection, Dickinson and Lee (2009) train two parser models, one with particles included and one without, to compare mismatches. Their main purpose is to adapt treebank annotation to be more particle-aware, and they did not evaluate on real learner data. We build more directly on prior work that constructs web corpora of Korean in order to train machine learning models for particle prediction, obtaining 81.6% accuracy for particle presence. While that work is similar, comparing the current work to its results is problematic for a number of reasons. First, the work was very preliminary, focusing on acquiring training data, and did not examine different levels of learners. Also, it used a different learner corpus with different annotation guidelines (see comparison in Lee et al., 2012), along with training data that was specifically tailored for the domains in the test corpus. Finally, for particle presence, it focused on overall system accuracy, rather than error detection, making direct comparison of results difficult.
There has been more work on the comparable language of Japanese, which we review briefly. To begin with, Oyama (2010) uses a basic SVM model trained on well-formed Japanese to detect particle errors, focusing on eight different case particles and finding that the particle frequency distribution in the training corpus affects accuracy, ultimately evaluating on 200 learner particle instances of a single particle (wo). Mizumoto et al. (2011) use statistical machine translation (SMT) techniques to detect and correct all errors within Japanese, using a "parallel" corpus of ill-formed and correctly-formed Japanese, based on correction logs from a collaborative language learning website. Our paradigm is much different, basing our method only on a correct model of the target language, given a relative lack of corrected data available in Korean and other lesser-resourced languages. We are, however, able to use some correction logs for building confusion sets (section 7.1). Imamura et al. (2012) correct Japanese particle errors using an approach similar to SMT ones, relying on a corpus of generated errors to learn a model of alignment to correct forms. We could explore generated errors in the future, but rely only on a model of correct Korean here. Suzuki and Toutanova (2006) predict case markers in Japanese for an MT system, basing their techniques on semantic role labeling. They predict 18 case particles, a subset of all Japanese particles. They use a two-stage classifier, first identifying whether case is needed and then assigning the particular case ending, training the second classifier only on instances where a case marker was required. This breakdown and parts of their feature sets are similar to ours, but: a) they use (gold standard) parse features and treat the problem as one of predicting markers for phrases; and b) they correct machine errors, while we correct learner errors, allowing us to investigate methods such as using learner-based filters. Turning to Basque, de Ilarraza et al. (2008) detect errors in five complex postpositions, where the postposition itself has a suffix, by developing 30 constraint grammar rules which use morphological, syntactic, and semantic information. While the rule-based system can work well, we pursue a strategy which incorporates different types of linguistic information through contextual features.

Training Data: Collecting Web Data
In order to control the data for domain specificity, we follow the recommendations laid out in Dickinson et al. (2010) and extended in subsequent work. Namely, we use data collected from the web using search terms based on topics likely to be discussed in a learner corpus, in order to find semantically-relevant instances. This data is passed through an encoding filter to ensure that at least 90% of any document retrieved is written using Hangul (the Korean writing system). The resultant corpus is over 23 million words.

Testing Data: A Learner Korean Corpus
For testing data, we use a corpus of learner Korean (Lee et al., 2012, 2013) featuring 100 error-annotated essays from learners evenly split into four different categories: beginning (B) vs. intermediate (I) learners, and foreign (F) vs. heritage (H) learners, where heritage refers to learners who had Korean spoken at home. We split the corpus into development and test sets by taking ≈20% of each subcorpus for development, and using the rest as testing. Table 1 gives the numbers of sentences, tokens, nouns, particles, total errors, and omission errors in the development and testing sets.
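Returning briefly to the web-corpus construction above: a minimal sketch of the 90%-Hangul encoding filter might look as follows. The exact character ranges and whitespace handling used in the original pipeline are not specified in the paper, so this is one plausible realization rather than the actual implementation.

```python
def hangul_ratio(text: str) -> float:
    """Fraction of non-whitespace characters that are Hangul syllables or jamo."""
    chars = [c for c in text if not c.isspace()]
    if not chars:
        return 0.0
    def is_hangul(c: str) -> bool:
        cp = ord(c)
        return (0xAC00 <= cp <= 0xD7A3      # precomposed syllables
                or 0x1100 <= cp <= 0x11FF   # jamo
                or 0x3130 <= cp <= 0x318F)  # compatibility jamo
    return sum(is_hangul(c) for c in chars) / len(chars)

def keep_document(text: str, threshold: float = 0.9) -> bool:
    """Retain a retrieved web document only if it is at least 90% Hangul."""
    return hangul_ratio(text) >= threshold

print(keep_document("한국어 문서입니다."))          # True
print(keep_document("mostly English text 한국어"))  # False
```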
Particle errors are marked as omissions, insertions (commissions), or substitutions, in a multi-layered framework. Spacing and spelling errors are corrected before the target form and correct segmentation are marked, segmentation being necessary since nouns and particles are written as a single orthographic unit. For our experiments, we use the correctly-spelled layer, mitigating the effect of spelling errors for testing an error detection system, as done for English (e.g., Tetreault and Chodorow, 2008; Chodorow et al., 2007). All particles (erroneous or correct) are labeled as to their function (e.g., locative), allowing us to group particles into categories, to see how classifier performance differs. Figure 1 provides the four groups we consider (cf. tables 5 and 6). Additionally, some nominals require multiple particles in sequence (Seq.), and some of the annotations allow for particles from more than one category as a correct answer, i.e., a set of correct answers (Set).

We examined the corpus to get a sense of the types of errors that learners of Korean make in essays. In this corpus, omission errors, i.e., instances where the learner has mistakenly omitted a particle, make up the biggest proportion of the errors (47.6%). The next most common are replacement errors, where the learner has used the wrong particle (44.6%). Commission errors (using a particle where none is necessary) make up the remainder of the errors (7.8%).

Approach
Particles have a range of functions, including case marking and preposition-like functions, but, since they are a closed class of functional elements, we can adapt techniques from English for other closed-class functional items, namely prepositions and articles, to detect errors in usage. We view the task of detecting and correcting errors as two steps (cf. Gamon et al., 2008). The first step is a binary choice that only involves determining whether or not a particle is required, a so-called presence (yes/no) classifier. The second classifier, the particle choice classifier, attempts to guess the best particle, once it has been established that a particle is needed. We actually treat the first step as a particle omission detection system because the expected rate of errors of commission is so low, and thus we specify that the classifier cannot reject a particle that is already present. Commission errors may require their own system. We utilize the omission classifier as it nicely performs two functions. First, because it posits instances requiring particles, it also filters out instances that do not need a particle to be grammatical. Thus, the particle choice classifier does not need to include NULL as a possible class, cutting down on training size and complexity. Secondly, many errors can be found at this stage, as a lot of errors stem from learners omitting necessary particles (see section 4.3). Nearly half of the learner errors could be detected with an accurate omission particle detection system at this step. Thus, this classifier can provide useful feedback to learners, especially higher-level ones who may know the correct particle once its omission is highlighted.

Particle Omission Error Detection
We describe the particle presence classifier here, treating it as a task of particle omission detection. Any particle a learner uses is passed on, while we posit where a particle should have been used.
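Before turning to the classifier itself, the two-stage design just described might be summarized in code as follows. The two model calls are placeholders standing in for the CRF omission detector and the memory-based particle selector developed below, so every name here is schematic rather than part of the actual system.

```python
from typing import Callable

def correct_particles(tokens: list[dict],
                      detect_omission: Callable[[list[dict], int], float],
                      select_particle: Callable[[list[dict], int], str],
                      threshold: float = 0.85) -> list[dict]:
    """Two-stage pipeline sketch: flag nominals missing a particle, then pick one.

    Each token is a dict like {"form": ..., "pos": ..., "particle": str or None}.
    detect_omission returns the classifier's confidence that a particle is needed;
    select_particle returns the best particle for a flagged position. Both are
    stand-ins for the trained models described in the following sections.
    """
    out = []
    for i, tok in enumerate(tokens):
        tok = dict(tok)
        # Particles the learner already used are passed through untouched:
        # the first stage never rejects an existing particle.
        if tok["pos"].startswith("N") and tok.get("particle") is None:
            if detect_omission(tokens, i) >= threshold:
                tok["particle"] = select_particle(tokens, i)
                tok["flagged"] = True
        out.append(tok)
    return out
```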
CRF Classifier
Conditional Random Fields (CRFs) have been utilized in a variety of NLP tasks in the last few years, and have been used recently for learner error detection tasks, especially those which can be seen as sequence labeling tasks (e.g., Tajiri et al., 2012; Imamura et al., 2012). We use prior comma error detection work as a basis, and employ CRF++ to set up a binary classifier at this step based on 1.5 million instances from our web corpus. Here we consider all nominals, as annotated in the corpus, as possible candidates for particle insertion. When we derive features based on POS tags (section 6.2), however, we rely on an automatic POS tagger.

Features
The feature set for particle omission detection is mainly composed of words and POS tags in the surrounding context, where tags are derived from a POS tagger (Han and Palmer, 2004). We use a five-word sliding window, processing each token in the document, although only nominals are possible candidates for particle insertion. The five-word window includes the target word and two words on either side for context; the feature set, with examples, is given in table 2. We break all words into their root and a string of affixes, each with its own POS tag (or tags, for multiple affixes) to better handle the morphological complexity of Korean and avoid sparsity issues. Particles are removed when extracting affixes, so as not to include what we are trying to guess. For the text and POS of the root, we use unigram, bigram, and trigram features, as shown in the table; for the affixes, we use only unigrams. We also have a feature (combo) for each root that combines the text and POS into a single string. In addition to these adjacency-based features, we also encode the previous and following nouns and predicates, to approximate syntactic parent features. The predicates can be verbs, adjectives that function like verbs in Korean, and auxiliary verbs. Finally, we use two features to encode the number of nouns that have already occurred in the sentence, as well as how many still remain. The usage of topic particles, for instance, relies in part on knowing where in the sentence a noun occurs with respect to other nouns.

Filtering
Because learners are more often correct than erroneous in their usage of particles, we want to ensure that the classifier does not predict errors in too many instances. To this end, we have built a filter into the classifier. For these errors of omission, we check how confident the classifier is in its answer and only posit omission errors if the classifier's confidence is above a certain threshold. Tuning on the development corpus (section 6.4), we tried a variety of thresholds in a hill-climbing approach and found 85% to be the best.

Results
For all results in this paper, we follow the recommendations from Chodorow et al. (2012). We evaluate by comparing the writer's, annotator's, and system's answer for each instance; true positives (TP), for example, are cases where the annotator (gold standard) and system agree, but the writer (learner) disagrees. In our case, positives are cases where the system posits a particle while the learner did not. We count only instances of nominals without particles in the writer's data, as these are the only ones which could have omission errors. Along with precision (P), recall (R), and an F-score (F0.5), we provide the number of errors (n), true positives (TP), true negatives (TN), false positives (FP), and false negatives (FN), for the sake of clarity and future comparison.
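Under the evaluation scheme just described, each instance is a (writer, annotator, system) triple. A minimal sketch of computing precision, recall, and F0.5 from such triples, with toy data, is given below; the toy triples are invented for illustration.

```python
def prf(triples, beta=0.5):
    """Precision/recall/F_beta from (writer, annotator, system) booleans.

    Each boolean says whether that party posits a particle at the instance.
    Only particle-less nominals in the writer's data are evaluated; a true
    positive is a system-posited particle the annotator also requires.
    """
    tp = fp = fn = tn = 0
    for writer, annotator, system in triples:
        if writer:          # learner already used a particle: not evaluated
            continue
        if system and annotator:
            tp += 1
        elif system and not annotator:
            fp += 1
        elif not system and annotator:
            fn += 1
        else:
            tn += 1
    p = tp / (tp + fp) if tp + fp else 0.0
    r = tp / (tp + fn) if tp + fn else 0.0
    b2 = beta * beta
    f = (1 + b2) * p * r / (b2 * p + r) if p + r else 0.0
    return p, r, f

# Toy triples: (writer used a particle?, annotator requires one?, system posits one?)
toy = [(False, True, True), (False, True, False),
       (False, False, False), (False, False, True), (True, True, True)]
print(prf(toy))  # -> (0.5, 0.5, 0.5)
```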
As a baseline, we use the majority class, i.e., guessing a particle for every nominal in the corpus. Table 3 provides the results for particle omission detection on our development corpus. Here we present the baseline, the results based only on the classifier's decision (no filter), and the results for the best filter. We use precision-weighted F0.5 rather than the traditional F1 because precision is more important than recall for most error detection applications. As the 85% threshold results in the best F0.5, we use this system on the test data. Table 4 provides the results for particle omission detection broken down by subcorpus. The FB (foreign beginner) subcorpus has the worst performance, most likely because its language is the most distant from the well-formed Korean of the training corpus, as well as the most distant from the development set. Overall, however, the system has a solid 84.9% precision on all test subcorpora.

Analysis
In looking over some FPs, i.e., cases where the system predicted a particle not in the gold standard, we discovered that some of these cases involved the optionality of particles. For example, in (3), glossed 'In particular, it is thus for the eyes of foreign people', the system posits a particle after 사람들 (salamtul, 'people'). This is a case of a nominal being used in a genitive fashion, and so a genitive particle could be used here, but it is not required. In some sense, the system rightly points to particle usage being licensed in this setting. However, the corpus annotation only marks particles that are necessary for grammaticality (Lee et al., 2013). Fully teasing apart particle licensing from particle requirement requires more thorough discussion of when particle dropping is permitted. Other cases do not license particles, but the nominals still have particle-like functions. In (4), for instance, the nominal phrase 이 때 (i ttay, 'this time') carries a temporal meaning, much like that conveyed by the temporal particle 에 (ey), but no particle is allowed here, because the function is more like an adverb (cf. today in English).

Regarding false negatives, i.e., cases where we do not posit a particle when we should, one major problem we observe involves noun-verb and noun-noun sequences. If a learner views a noun and a following word like a compound, this conceals the fact that the noun requires a particle. For instance, in (5), glossed 'When a child who has good personality is born, if the environment is bad ...' (learner-omitted particles in curly brackets), the word 성격 (sengkyek, 'personality') needs a subject particle, but it forms a compound with 좋 (choh, 'good'), obscuring the noun's role. Another complication is the variability of particle requirements due to minor changes in the amount of information presented, for example, the addition of one prepositional phrase changing whether a particle is necessary or not. Combined with misclassifications resulting from segmentation errors from the POS tagger, it seems that the false negative set can be reduced with better linguistic preprocessing fed into the system.

Particle Choice for Omission Errors
Once we have established that there is a missing particle, the next step is to select the best particle to be placed in the given context. Thus, we send all instances classified as missing a particle to a second classifier that makes this selection.

Confusion Set for Particle Omission
The scope of the training data selected, i.e.
what particles should be allowed to be guessed by the classifier, is a significant decision at this stage. There are hundreds of particles in the Korean language, but many of these are not used often; e.g., 9 particles cover 70% of particle use in a data set of thesis abstracts and 32 cover 95% in a study by Kang (2002). Thus, the training data should only include particles which can reasonably be expected to appear when the learner has omitted one. Utilizing methodology similar to Mizumoto et al. (2011), we build a confusion set from data collected from the language learning and social networking website Lang-8 (http://lang-8.com). To build the set, we searched the user-edited versions of the essays for any word corrected by appending text resembling a particle. Due to the somewhat ambiguous nature of particles with respect to other morphemes and root endings, we cannot be certain that all of these edits are in fact particles, but can be confident that a majority are. After compiling all possible insertion candidates, we prune the list by requiring a particle's frequency to be at least 10% of that of the most frequent particle. For example, if 가 appears 100 times as the most frequently inserted particle, any particle appearing fewer than 10 times would be removed.

TiMBL
For this task, we use memory-based learning, namely TiMBL (Daelemans and van den Bosch, 2005). The nearest neighbor algorithm is desirable as training data is sparse, and there are a variety of possible classes to choose from. After filtering the web corpus to only include instances based on the confusion set extracted from the Lang-8 data, we have 5.7 million instances for training.

Features
For the particle selection system, we build upon the particle omission detection features (cf. section 6): we use unigrams, bigrams, and trigrams of the words and POS tags, a combination word+POS unigram, the previous and following verbs and nouns, and the count of nouns passed and remaining in the sentence. We only use nominals as targets for instances, using a five-word window for context. Some of the n-gram features with high numbers of possible values are less helpful, and we remove them, namely the unigram features for the two words farthest from the target, as well as the bigrams that do not include the target. We then extend this information by adding features, some of which provide discourse information. 1) Knowing that there is already a subject, object, or topic particle in the sentence often means that there should not be another of the same type used; thus, we add binary features encoding whether any of these has occurred yet. 2) We also add binary features relating to the usage of the target word in the previous sentence, encoding whether the target was marked as the topic, subject, or object, or whether it was in the previous sentence at all. 3) A numeric feature is used that tracks how far along we are in the sentence, based on the idea that certain particles, e.g. subjects, are more likely to occur earlier in the sentence, whereas others, e.g. objects, occur later. 4) Finally, we include the previous particle used by the learner, again because some particles are not likely to be reused in a sentence.

Results and Analysis
Here we present the results for the selection classifier in terms of the accuracy of the classifier in choosing the best particle for an instance already defined as erroneous. By the definition of this task (selecting the correct particle for an error), there are no FNs or TNs.
Thus, recall is rather meaningless, and accuracy and precision reduce to the same metric (TP/(TP+FP)). Additionally, as mentioned in section 4.2, the particles in the test corpus can be grouped into different categories, and we provide results broken down by category and subcorpus. Instances that require a sequence of multiple particles to be correct (Seq.) are not currently handled, but we leave them in the results for clarity and completeness. FPs from the error detection step are also included, although the system clearly cannot select a correct particle for them. Table 5 shows the performance of the selection classifier on the instances identified as omission errors by the binary classifier (i.e., TPs and FPs identified by the pipeline). Overall, this classifier selects the correct particle 52.9% (63/119) of the time in the test data when presented with instances from the previous classifier.

Table 5: Results for particle selection on instances from the binary omission classifier (pipeline).

Table 6 provides the results for testing on all instances with omission errors (based on the gold standard), i.e., including the FN instances from the binary omission classifier mistakenly marked as correct, but not FPs. For all corpora combined, the classifier selects the best particle 58.4% (136/233) of the time in the test data. The overall accuracy gleaned from Tables 5 and 6 is encouraging as we move forward, as it means that the classifier performs reasonably well on cases where it has a chance of selecting the best particle in both the pipeline and gold experimental environments. A classifier trained only on structural case particles actually performs better than the confusion-set classifier in the pipeline experiment setting (cf. Table 5, 56% > 52%), though it shows a slight drop in performance compared to the confusion-set classifier in the gold experiments (cf. Table 6, 57% < 58%). In both cases, however, there are significant gains when only examining structural particles: this classifier correctly identifies the best particle over 80% of the time in both the pipeline and gold test settings. These results show the potential of handling specific linguistic types of particles in Korean differently.

Conclusion and Outlook
We have presented a system for detecting and correcting learner Korean particle omission errors. We used a two-stage pipeline utilizing CRFs to make a binary decision as to whether or not a nominal without a particle should be followed by a particle, followed by a memory-based learner to select the best particle in the case of an omission. The binary classifier performs with 85% precision and 44% recall on the testing data, for an F0.5 score of 71%; these results could lead to a useful error detection tool for learners and/or teachers. The selection classifier is also fairly accurate, choosing the best particle close to 60% of the time to correct omission errors. These results compare favorably with English preposition and determiner error correction work (cf. Dale et al., 2012), though those results involve all error types, not just omissions. Our experiments for the selection task using specific particle types indicate that constraining the set of particles for a given context helps greatly. We saw improvement in choice accuracy by using only structural case particles to train a classifier for selecting structural case. This encouraging result can help direct research moving forward.
One could build a classifier to identify what category of particle is most likely for a given context after determining a particle is missing and before sending it to a final selection classifier. Finally, as we improve the omission detection/correction pipeline, the next logical step for building a tool for more robust grammatical error detection is to take on errors of substitution and commission. The lessons learned here from particle choice, using a feature set that incorporates dialog-based features and constraining the set of particles that can be selected for a given context, should prove particularly useful for the substitution task. Just as we have seen that structural case particles are the most likely to be dropped, we may be able to find patterns for what types of particles can be substituted or over-used by learners. Confusion sets for the types of errors made by learners (cf., e.g., Rozovskaya and Roth, 2010) should be even more useful for substitution errors.
China-Russia Partnership and Competition Impact on Strategic Security in Asia

Russia and China are celebrating their "strategic partnership" and have been vastly expanding their cooperation since 2014. Their close alignment is based on economic and geopolitical considerations. While it is mutually beneficial, it also has its limitations. China and Russia mostly agree with each other on structural issues such as redefining the parameters of international governance. However, they also continue to encounter disagreements and conflicts over conjunctural issues such as energy and the arms trade. We conclude that in the Asia-Pacific, China and Russia will likely develop a more comprehensive relationship that may have greater practical implications for the region's security and stability. China and Russia continue to seek to align their strategies more comprehensively against the backdrop of the evolving geopolitical environment by working to overcome existing disagreements and exploring new areas of cooperation. Russia has found the partnership critical in implementing its Asian policy to face the complicated situation in the Asia-Pacific region (APR). China now relies on Russia to supply key military technology and materials, so China's military modernization also promotes their bilateral trade. China is now Russia's biggest trade partner, and economic ties allow Putin to improve their relations (國家政策研究基金會, 2014).

The Council for National Security Policy Studies, which belongs to the China Association of Policy, held the 11th Chinese National Security Forum in Beijing on June 25, 2012. The Chinese-Russian strategic partnership was defined by pro-government experts as a new relationship for defeating US strategic containment, to be observed not only through bilateral relations but also through global strategy. The Chinese participants regarded the US shift of its global policy toward Asia as the most serious challenge to China's national security and development. They held that the China-Russia strategic partnership is based on geopolitical interests (Xinhuanet, 2012). The Chinese expert on Russia Feng Yujun told the Xinhua News Agency that China and Russia share similar conceptions of international security and international strategy. China and Russia are permanent members of the UN Security Council. They need to strengthen military and security collaboration to face global threats from the overthrow of governments, international terrorism, and criminal organizations. He said that Xi's visit to Russia was also a substantial and clever piece of diplomacy for developing Chinese soft power (Xinhuanet, 2013). The Chinese Minister of Foreign Affairs Wang Yi said that Xi's first visit and his high-profile public diplomacy marked the start of the new central collective leadership's diplomatic itinerary. The first visit, to Russia, focused on the surrounding environment and on deepening the strategic relations between these big powers; attending the Brazil, Russia, India, and China (BRIC) summit served to deepen the new relations among the emerging powers; and visiting three African countries focused on consolidating China's standing among developing countries. China and Russia will shift their trade model from pure purchase and sale to joint manufacturing of high-tech and aerospace products (Xinhua News Agency, 2013).
As I argued in an article published in Hong Kong's Ta Kung Pao, Russia's entry into the WTO will challenge China, because China's investment focuses on short-term benefits and on developing its own consumer market. Jointly manufacturing high-tech and energy products takes too much time to show benefits. Russia's Far East development focuses on renewing infrastructure and attracting long-term investment. If Russia can balance regional relationships in Asia, Russia will become the most important regional actor. The different investment directions of China and Russia increase the degree of mutual cooperation (Takungpao.com, 2012).

China is facing a dilemma in the Ukrainian crisis. China proclaims a neutral position and appeals for a political resolution in order not to offend any party in the crisis. This has made China recede from view, even though China has economic influence in these countries. What China cares about is its stance on national sovereignty, based on not intervening in others' internal affairs and maintaining the dignity of national sovereignty. China can only wait and see. China's role in the US-Russia conflict over the Ukraine crisis is unclear: Crimea was in the process of passing from Ukraine to Russia, and it is difficult for China to face a referendum on independence given its own separatism problem. Therefore, the Sino-Russian strategic partnership did not manifest itself in the Ukraine crisis. The ideological differences between Russia and China remain a barrier to a full alliance.

Russia-China Relations: Past and Perspective
There are still doubting voices about the weakness of the Russian government, which has made too many compromises to China on the boundaries. Besides, Russia does not have superiority over the Chinese modernized army. However, like the majority of Russian sinologists, Alexander Lukin assumes that the China-Russia strategic partnership is beneficial for both countries, giving them more power in the international arena and in economic development. The Russian people first met the Chinese people about 400 years ago, in the course of Russia's Far Eastern expansion. Reconciling the boundaries between them demanded a lot of time and effort. By the beginning of the 20th century, the two sides had signed several agreements on the border, the longest land border in the world. There have never been large-scale wars in the four centuries of relations between Russia and China. At the same time, however, the relationship was from time to time complicated and sharpened, which in 1969 even resulted in a bloody armed conflict over the occupation of Damansky Island (Gladilin, 2012). In 1966, during the reign of Mao Zedong, a slogan appeared on the walls in Beijing: "USSR: our enemy!". In the 1980s, in the period of Deng Xiaoping, there was a popular slogan: "Bring back our mountains and rivers!". In the 1990s, Beijing continued to demand a unilateral weakening of the Russian armed forces in areas adjacent to the Russian-Chinese border. In China, the rising generation is taught that Russia is an "aggressor" and a "national debtor of China" (Gladilin, 2012). On May 16, 1991, in Moscow, representatives of the USSR and China signed an agreement on the border, which was confirmed in the 1996 agreement between Russia and China. Damansky Island was confirmed as Chinese territory.
The island of Tarabarov (Yinlong), located at the confluence of the Amur and Ussuri rivers, covering 174 square kilometers and in the past allotted to the combat aircraft flight path of the 11th Russian Air Force Army on Large Ussuri Island, was confirmed as Chinese territory in 2005 (Gladilin, 2012). Another opinion concerns a possible military conflict between Russia and China. An article published online argued: obviously, without gaining access to Russia's resource regions, China will be unable to implement its expansion into the rest of the world. Russia has no notion of Chinese aggression and supplies China with Siberian and Far Eastern resources. Therefore, we believe that Russia's relations with China will irreversibly deteriorate within 10 years and may become intolerable. After 10-15 years, China's military power will be comparable with that of the United States and will exceed the capacity of the Russian armed forces.

The Russian newspaper Rossiyskaya Gazeta published an article, "China-Russia: Why do they need each other?" (Lukin, 2013). The Russian sinologist Alexander Lukin argued that Russia and China are very interested in cooperation in the international arena. China shares Russia's view of the future structure of the world, expressed by the notion of "multipolarity". Realistically, this means that both countries would like to see a world that is not dominated by one power, but by several centers, on the basis of international law and the UN Charter. Russia-China cooperation is necessary for the development of Siberia and the Far East. China is an important partner of Russia within the framework of the Shanghai Cooperation Organization (SCO), where the member countries work together to solve the problems of boundary demarcation and to fight regional extremism and terrorism. The SCO is now searching for broader cooperation in both military and economic spheres. Lukin also held that the Russian-Chinese partnership can be regarded as a reaction to the global economic crisis and to the two countries' desire to reform the governance of the international financial system, including the International Monetary Fund (IMF) and the World Bank. These two countries are now seeking an alternative to the existing world economic order. One of the most important vehicles of cooperation is the dynamic BRICS grouping. Beijing is interested in a stable Russia as a certain counterbalance to the United States and Western Europe, which is beneficial for China in implementing its "independent and autonomous" foreign policy. A stable situation on the border with Russia, as well as with other neighbors, is important for China's economic development, the main goal of the country's current leadership.

Discussion
The China-Russia strategic partnership has an impact on the escalating island conflicts in the APR. The competition and cooperation among China, Russia, and the United States will be reflected in several areas:
1. The function of the SCO will be strengthened in both the economic and the military area. The SCO plays an increasingly important role in regional integration and regional security.
2. With the growing importance of the APR, both Russia and the United States are turning their diplomacy toward the region, and China's rise is challenged. China cannot afford to face Russia and the United States cooperating against its rise. With the escalating tension between the US and Russia, Russia and China now have a consensus on facing the US rebalancing policy in the APR.
3.
Russia's Far East development needs China's cooperation, and this leads the two countries to increase their influence in the Asia-Pacific Economic Cooperation (APEC) forum in competition with the United States. The economic and military competition among China, Russia and the US is explicit.
4. The barrier to China and Russia allying fully is the ideological uncertainty between the two countries. Their different approaches to solving problems will become evident on disputed issues and will test the depth of their mutual confidence in cooperation.
5. Russia and China will cooperate in geopolitical expansion against North Atlantic Treaty Organization (NATO) expansion. Russia is building a Eurasian union to restore its traditional political influence in Central Asia and to broaden its relations with Asian countries. NATO expansion will push the two to stand together more closely.

Conclusion
It appears that a number of centripetal factors lead to China-Russia policy convergence and cooperation. These factors mainly involve geopolitics: broad lines of cooperation emerge from intense major-power competition as China and Russia engage the United States in Eurasia and the Asia-Pacific. Geopolitically, the compatibility, acceptability, and availability of the Chinese-Russian relationship facilitate deeper bilateral interaction. The relationship's central position in global activities within and across national borders, in both political and economic spheres, supports the two countries' cooperation, which renders it a valuable policy instrument. In the context of China-Russia relations, catalyzed by viable conjunctures, the strategic importance of geopolitics as a policy tool has grown considerably over the years, helping create a virtuous cycle of greater cooperation in response to internal needs and external challenges. The geopolitical nexus, in this case, reflects the centrality of resource diplomacy in the triangulation of domestic, regional and international politics. China and Russia will have an increasingly broader and deeper cooperation, and diplomacy will maintain its centrality within the framework of the SCO, BRICS and APEC as a basis for adding material substance to the relationship.
McKean-Vlasov type stochastic differential equations arising from the random vortex method

We study a class of McKean-Vlasov type stochastic differential equations (SDEs) which arise from the random vortex dynamics and other physics models. By introducing a new approach we resolve the existence and uniqueness of both the weak and strong solutions for the McKean-Vlasov stochastic differential equations whose coefficients are defined in terms of singular integral kernels such as the Biot-Savart kernel. These SDEs, which involve the distributions of solutions, are in general not Lipschitz continuous with respect to the usual distances on the space of distributions such as the Wasserstein distance. Therefore there is an obstacle in adapting the ordinary SDE method to the study of this class of SDEs, and the conventional methods seem not appropriate for dealing with such distributional SDEs, which appear in applications such as fluid mechanics.

Introduction
In this paper, we study the following McKean-Vlasov type stochastic differential equations

dX^i(x,t) = ( ∫_{R^d} E[K^i_j(y − X(z,t))] ω_0^j(z) dz )|_{y = X(x,t)} dt + √(2ν) dB^i(t), X(x,0) = x, (1.1)

where i = 1, . . . , d, ν > 0 is a constant (which has its origin in fluid mechanics, namely the kinetic viscosity), B = (B^1, . . . , B^d) is a standard Brownian motion on some probability space, and ω_0 = (ω_0^1, . . . , ω_0^d) is the initial data of the corresponding non-linear (and non-local) partial differential equations (PDEs), see (6.2) below. The structure kernel function which defines SDE (1.1), K = (K^i_j), is a d × d matrix-valued Borel measurable function on R^d, which is continuous except at several singularities. The study of SDE (1.1) is inspired by the random vortex method in fluid mechanics, in which the integral kernel K is singular at 0. We will explain the random vortex model and formulate problem (1.1) more precisely in the next section. (1.1) is a system of SDEs which involves the distributions of its solutions. This type of SDE, and SDEs which share the same nature, may arise from physics models and from applied mathematics, and they have been studied intensively over the past decades. There is a large amount of literature devoted to various aspects of McKean-Vlasov equations, initiated by McKean in his seminal paper [21] (see for example [8,15,32] for some recent progress, [6,7,5,22,27,36,33] and the literature therein). The study of McKean-Vlasov SDEs, and the renewed interest in SDEs involving solution distributions in recent years, is largely influenced by their connections with some non-local and non-linear PDEs arising from physics models. In this respect, McKean-Vlasov type SDEs provide the theoretical foundation for numerical methods such as the particle method for simulating the solutions of this kind of PDEs. For example, the propagation of chaos (law of large numbers) for solving the PDE corresponding to (1.1) may be formulated by replacing the expectation by the empirical measure, to obtain the following system

dX^{n,k} = (1/N) Σ_{m=1}^{N} Σ_{j ∈ Z^d} ε^d K(X^{n,k} − X^{m,j}) ω_j dt + √(2ν) dB^n(t), X^{n,k}_0 = εk, for k ∈ Z^d, (1.2)

where the B^n are independent copies of a d-dimensional Brownian motion, and ε > 0 is the lattice size. The previous random system is the essential ingredient in the random vortex method, see for example [20,22,24]. Other numerical approximations have also been employed to look for large deviation results, see for example [4,11,18,19] for detailed discussions.
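For concreteness, a minimal Euler-Maruyama simulation of the interacting particle system (1.2) in two dimensions might look like the sketch below. Here the Biot-Savart kernel is mollified by a small cutoff δ, since the singular kernel cannot be evaluated at coinciding particles, and all numerical parameters (lattice size, initial vorticity, step size) are illustrative assumptions, not choices made in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
nu, dt, n_steps, delta = 0.01, 1e-3, 200, 0.05

# Initial vortex lattice: positions eps*k carrying weights omega_0(eps*k)*eps^d,
# mimicking (1.2) with one sample path per lattice site (purely illustrative).
eps = 0.2
grid = np.array([(i, j) for i in range(-5, 6) for j in range(-5, 6)], float) * eps
gamma = np.exp(-np.sum(grid**2, axis=1)) * eps**2   # hypothetical omega_0
X = grid.copy()

def drift(X):
    """Sum of mollified 2-d Biot-Savart interactions, K(x) = (1/2pi)(-x2, x1)/|x|^2."""
    dx = X[:, None, :] - X[None, :, :]              # pairwise differences
    r2 = np.sum(dx**2, axis=-1) + delta**2          # delta-cutoff removes the singularity
    K = np.stack([-dx[..., 1], dx[..., 0]], axis=-1) / (2 * np.pi * r2[..., None])
    return np.einsum("ijk,j->ik", K, gamma)         # sum_j K(X_i - X_j) * gamma_j

for _ in range(n_steps):                            # Euler-Maruyama step
    X = X + drift(X) * dt + np.sqrt(2 * nu * dt) * rng.standard_normal(X.shape)

print("mean position:", X.mean(axis=0))
```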
The difficulty, however, in particular in the case that the dimension $d=3$ and $K$ is the Biot-Savart kernel, comes from the fact that the kernel $K$ is too singular at $0$; hence the Lipschitz continuity of the coefficients appearing in (1.1), which is essential (see for example [8,32]), cannot be expected. SDE (1.1) is of course of independent interest in its own right, besides its significance in fluid dynamics. The research on this type of SDEs has been dominated, to the best knowledge of the present authors, by the use of Itô's SDE theory in one way or another, which requires the Lipschitz continuity of $K$ with respect to the variational distance or the Wasserstein distance when one seeks strong solutions, or by means of the martingale problem for weak solutions. Unfortunately, these approaches are not appropriate for the study of (1.1) when $K$ is singular, such as the Biot-Savart kernel $-\frac{1}{4\pi}\frac{x}{|x|^{3}}$ (where $d=3$), which explodes near zero like $1/|x|^{2}$. In the present paper, we overcome these difficulties by devising a new and powerful approach which allows us to establish the existence and uniqueness of strong and weak solutions of (1.1) under very weak conditions on the singular integral kernel $K$. In particular, our results apply to the Biot-Savart kernel in any dimension, and also apply to the Green kernels (such as $\ln|x|$ in dimension 2, $1/|x|^{d-2}$ for $d>2$), the Riesz kernels $1/|x|^{\gamma}$ where $\gamma\in[0,d)$ on $\mathbb{R}^{d}$ and many other singular integral kernels.

Our novel approach is based on the following simple observation. If $K$ is singular, then the mapping $\mu\mapsto\mathbb{E}\left[K(x-\xi)\right]$, where $\xi$ has a distribution $\mu$, is unlikely to be Lipschitz continuous with respect to the variational or the Wasserstein metric on the space of distributions. However, we recognise that the distributions of possible solutions to (1.1), even when $K$ is singular, have much higher regularity, so that

$$b^{i}(x,t)=\int_{\mathbb{R}^{d}}\mathbb{E}\left[K^{i}_{\ j}\big(x-X(y,t)\big)\right]\omega_{0}^{j}(y)\,\mathrm{d}y \qquad (1.3)$$

defines a vector field (although the vector field $b(x,t)$ is defined via the solution of the SDE), and $X(x,t)$ must be a weak solution to the diffusion process defined by the ordinary SDE $\mathrm{d}X=b(X,t)\,\mathrm{d}t+\sqrt{2\nu}\,\mathrm{d}B$. Therefore the distribution of $X(x,t)$ can be represented by the Cameron-Martin formula in terms of the Wiener measure, and many results from diffusion processes can thus be brought in to the study of SDE (1.1). In this paper the major technical tool is the sharp heat kernel estimates obtained in [26,25].

The paper is organised as follows. In Section 2, we recall the random vortex problem, derive SDE (1.1) from the vorticity equation, and formulate SDE (1.1) in a form which will be appropriate in the framework of the present paper. In Section 3, we collect a few facts about diffusion processes whose infinitesimal generators are elliptic operators of second order, and we prove several technical estimates which will be used to prove our main results. In Section 4, we define a non-linear mapping associated with SDE (1.1) and prove it is a contraction mapping, and then we show that (1.1) admits a unique weak solution. In Section 5 we show that a strong solution can be constructed, and show that the drift vector field (1.3) is Hölder continuous. Section 6 recovers solutions to the corresponding non-linear PDEs by using the solutions to (1.1), which can be considered as a probabilistic representation for this class of non-local and non-linear PDEs.

Convention on Notations. The following conventions are employed throughout the paper. Firstly, Einstein's convention of summation over repeated indices through their ranges is assumed, unless otherwise specified.
If $A$ is a vector or a vector field (in the space of dimension $d$) dependent on some parameters, then its components are labelled with superscript indices, so that $A=(A^{i})=(A^{1},\dots,A^{d})$. The same convention applies to coordinates too, so that $x=(x^{i})=(x^{1},\dots,x^{d})$. If $u$ is a vector field on $\mathbb{R}^{3}$ then $\nabla\wedge u$ denotes the curl of $u$, which is again a vector field on $\mathbb{R}^{3}$ with components $(\nabla\wedge u)^{i}=\varepsilon^{ijk}\partial_{j}u^{k}$. $\|f\|_{L^{p}}$ or, if no confusion is possible, $\|f\|_{p}$ denotes the $L^{p}$-norm with respect to the Lebesgue measure on the product space $\mathbb{R}^{d}\times[0,T]$.

Random vortex method - from PDE to SDE

Particle formulations for fluid flows have been studied as a tool for understanding the fluid dynamics of turbulence. The underlying idea is simple, originally due to Taylor [34]. Instead of considering the velocity vector field $u(x,t)$ of the flow, one may study the dynamics of the trajectories $X(x,t)$ of the fluid particles emitting from $x$ at the moment $0$, i.e. the dynamical equation

$$\frac{\mathrm{d}}{\mathrm{d}t}X(x,t)=u\big(X(x,t),t\big),\qquad X(x,0)=x,$$

and reformulate the equation of motion of the vorticity $\omega=\nabla\wedge u$ into an evolution equation for $X(x,t)$. This approach works well for certain inviscid fluids. For viscous incompressible fluid with constant viscosity $\nu>0$, a natural idea is to consider Brownian particles instead, i.e. $X(x,t)$ is modelled by the Taylor diffusion

$$\mathrm{d}X(x,t)=u\big(X(x,t),t\big)\,\mathrm{d}t+\sqrt{2\nu}\,\mathrm{d}B_{t},$$

where $B$ is a standard Brownian motion, and to rewrite the equation of vorticity motion in terms of the distribution of the Taylor diffusion. This approach is called the random vortex method; see for example [9,10,12,19,28,35] etc. for a comprehensive account including the recent exciting progress.

For incompressible fluid flows, $u(x,t)$ satisfies the Navier-Stokes equations

$$\frac{\partial u}{\partial t}+(u\cdot\nabla)u=\nu\Delta u-\nabla p, \qquad (2.2)$$
$$\nabla\cdot u=0, \qquad (2.3)$$

where $p(x,t)$ is a scalar function representing the pressure at $(x,t)$. If the fluid is constrained in a finite region, then certain boundary conditions must be identified, but for simplicity we consider the case where the evolution of the fluid can take place without physical boundary and also without external force. In this case, the implicit boundary condition at infinity is applied: both $u(x,t)$ and $p(x,t)$ tend to zero sufficiently fast as $|x|\to\infty$. This is the model used in homogeneous turbulence, for example. The incompressible condition (2.3) allows us to reformulate the first equation (2.2) in terms of the fluid vorticity $\omega=\nabla\wedge u$, and the equation of vorticity motion is the following vorticity equation

$$\frac{\partial\omega}{\partial t}+(u\cdot\nabla)\omega-(\omega\cdot\nabla)u=\nu\Delta\omega,$$

where the velocity field $u$ can be recovered from the Laplace equation

$$\Delta u=-\nabla\wedge\omega.$$

Taylor's diffusions

In our approach, Taylor's diffusions will play a crucial rôle, so the goal of this part is not only the purpose of describing the vortex dynamics, but also to establish a few notions and notations which will be used frequently throughout the paper. The vorticity equation may be written as

$$\left(\frac{\partial}{\partial t}-L_{u}\right)\omega=(\omega\cdot\nabla)u,$$

where we have introduced the following notation: if $b(x,t)$ is a time-dependent vector field (here $t$ is the time variable), then

$$L_{b}=\nu\Delta+b\cdot\nabla,$$

which is a differential operator of second order and is time-inhomogeneous in general. This convention will be applied to any time-dependent vector field $b(x,t)$ on $\mathbb{R}^{d}$, where $d$ is not necessarily 3. If no confusion may arise, the argument $(x,t)$ will be suppressed. $L_{b}$ is the infinitesimal generator of the Taylor diffusion describing the motion of Brownian particles $(X_{t})_{t\ge0}$, which can be defined by Itô's stochastic differential equation

$$\mathrm{d}X_{t}=b(X_{t},t)\,\mathrm{d}t+\sqrt{2\nu}\,\mathrm{d}B_{t}.$$

The formal adjoint operator of $L_{b}$ is given by

$$L_{b}^{\star}f=\nu\Delta f-\nabla\cdot(bf),$$

which is again a diffusion operator if and only if the vector field $b$ is divergence-free.
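Since the claim that $L_b^{\star}$ is a diffusion operator exactly when $b$ is divergence-free drives much of what follows, a one-line check may be useful; the computation below is a standard integration-by-parts argument, not an excerpt from the paper.

```latex
% Adjointness of L_b with respect to the Lebesgue measure:
% for smooth compactly supported f and g,
\int_{\mathbb{R}^d} (L_b f)\, g \,\mathrm{d}x
  = \int_{\mathbb{R}^d} (\nu\Delta f + b\cdot\nabla f)\, g \,\mathrm{d}x
  = \int_{\mathbb{R}^d} f \left( \nu\Delta g - \nabla\cdot(b g) \right) \mathrm{d}x ,
% so that
L_b^{\star} g = \nu\Delta g - \nabla\cdot(b g)
             = \nu\Delta g - b\cdot\nabla g - (\nabla\cdot b)\, g ,
% which has no zeroth-order term (and hence generates a diffusion)
% precisely when \nabla\cdot b = 0.
```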
The following lemma (Lemma 2.1) contains the facts about the elliptic operator $L_{b}$ which will be used throughout the paper; in particular, its part 2) states that, for given $\tau\ge0$, $\varphi$ and $g$, the function built from the transition density solves the initial value problem of the corresponding parabolic equation. The results in the previous lemma in fact hold under much weaker conditions on $b$, and can be generalised to a large class of elliptic operators; see [1,3,14,30] and other standard literature on parabolic equations for details.

An archetypical example

Suppose the vorticity $\omega(x,t)$ of an incompressible fluid flow with velocity $u(x,t)$, without applying external force, always lies in the kernel of the rate-of-strain tensor, so that $\omega\cdot\nabla u=0$ identically; then the vorticity equation becomes

$$\left(\frac{\partial}{\partial t}-L_{u}\right)\omega=0 \qquad (2.14)$$

with the initial data $\omega(\cdot,0)=\omega_{0}$. Then, according to Lemma 2.1,

$$\omega(x,t)=\int_{\mathbb{R}^{3}}p_{u}(0,y,t,x)\,\omega_{0}(y)\,\mathrm{d}y.$$

On the other hand, since $\Delta u=-\nabla\wedge\omega$, according to the Biot-Savart law,

$$u(x,t)=\int_{\mathbb{R}^{3}}G(x-y)\wedge\omega(y,t)\,\mathrm{d}y, \qquad (2.16)$$

where $G(x)=-\frac{1}{4\pi}\frac{x}{|x|^{3}}$ is the vector-valued singular kernel in $\mathbb{R}^{3}$. Since $\omega$ is a solution to (2.14), we are therefore able to rewrite the velocity field (2.16) in terms of the fundamental solution $p_{u}$, to obtain

$$u(x,t)=\int_{\mathbb{R}^{3}}\mathbb{E}\left[G\big(x-X(y,t)\big)\right]\wedge\omega_{0}(y)\,\mathrm{d}y, \qquad (2.17)$$

where $X(y,t)$ is the Taylor diffusion process with infinitesimal generator $L_{u}$ started at $y$ at $t=0$, that is, the solution to the stochastic differential equation

$$\mathrm{d}X(y,t)=u\big(X(y,t),t\big)\,\mathrm{d}t+\sqrt{2\nu}\,\mathrm{d}B_{t}, \qquad (2.18)$$

where $(B_{t})_{t\ge0}$ is a standard Brownian motion on a probability space $(\Omega,\mathcal{F},\mathbb{P})$. Substituting (2.17) into (2.18), we may rewrite the previous stochastic differential equation as

$$\mathrm{d}X(x,t)=\left(\int_{\mathbb{R}^{3}}\mathbb{E}\left[G\big(z-X(y,t)\big)\right]\Big|_{z=X(x,t)}\wedge\omega_{0}(y)\,\mathrm{d}y\right)\mathrm{d}t+\sqrt{2\nu}\,\mathrm{d}B_{t}, \qquad (2.19)$$

where $x$ runs through the state space $\mathbb{R}^{3}$. This is the archetypical example of the SDEs we are going to study in the present paper.

Formulation of the problem

Although our main examples come from the study of fluid dynamics, it will be beneficial to formulate the problem in a more general setting. Still, we restrict our study to vector fields on the Euclidean space $\mathbb{R}^{d}$, though the methods and the results can be generalised to tensor fields with certain modifications. Let $K=(K^{i}_{\ j})$ be a $d\times d$ matrix-valued function, where the $K^{i}_{\ j}$ are Borel measurable and locally integrable. We are interested in the following stochastic differential equation

$$\mathrm{d}X^{i}(x,t)=\left(\int_{\mathbb{R}^{d}}\mathbb{E}\left[K^{i}_{\ j}\big(z-X(y,t)\big)\right]\Big|_{z=X(x,t)}\,\omega_{0}^{j}(y)\,\mathrm{d}y\right)\mathrm{d}t+\sqrt{2\nu}\,\mathrm{d}B^{i}(t), \qquad (2.20)$$

where $\omega_{0}=(\omega_{0}^{j})$ is the initial data, and $B=(B^{i})$ is a $d$-dimensional standard Brownian motion on some probability space. Before we carry out a study of this class of SDEs, let us reformulate (2.20) in a different form to facilitate our approach. If $\mu$ is a measure on $(\mathbb{R}^{d},\mathcal{B}(\mathbb{R}^{d}))$, then $K\star\mu=(K^{i}_{\ j}\star\mu)$ denotes the convolution of $K$ and the measure $\mu$, where

$$K^{i}_{\ j}\star\mu(x)=\int_{\mathbb{R}^{d}}K^{i}_{\ j}(x-y)\,\mu(\mathrm{d}y)$$

for $i,j=1,\dots,d$, as long as the right-hand side is well defined. If $U$ is an $\mathbb{R}^{d}$-valued random variable on some probability space $(\Omega,\mathcal{F},\mathbb{P})$, then its distribution is denoted by $\mathcal{L}(U)$. By definition, $K\star\mathcal{L}(U)(x)=\mathbb{E}\left[K(x-U)\right]$. If, in addition, the law of $U$ has a pdf $p(x)$, then $K\star\mathcal{L}(U)=K\star p$, where the right-hand side is the convolution of $K$ and the function $p$. After having introduced the basic data $K$ and $\omega_{0}$ and the notations, we are now in a position to reformulate the SDE we are going to study:

$$\mathrm{d}X^{i}(x,t)=\left(\int_{\mathbb{R}^{d}}\big(K^{i}_{\ j}\star\mathcal{L}(X(y,t))\big)\big(X(x,t)\big)\,\omega_{0}^{j}(y)\,\mathrm{d}y\right)\mathrm{d}t+\sqrt{2\nu}\,\mathrm{d}B^{i}(t). \qquad (2.24)$$

The concepts of strong and weak solutions to (2.24) may be defined accordingly. It will be convenient to introduce the following notations. If $Z=(Z(x,t))_{t\ge0}$ is a family of continuous processes on some probability space which is jointly continuous in $(x,t)$, then we may define a vector field, denoted by $b_{Z}$, whose components are given by

$$b_{Z}^{i}(x,t)=\int_{\mathbb{R}^{d}}\big(K^{i}_{\ j}\star\mathcal{L}(Z(y,t))\big)(x)\,\omega_{0}^{j}(y)\,\mathrm{d}y.$$

Notice that, by definition, $b_{Z}$ depends only on the one-dimensional marginal distributions of the process $(Z(y,t))_{t\ge0}$.
Suppose $b(x,t)$ is a time-dependent vector field on $\mathbb{R}^{d}$; we may define another $t$-dependent vector field on $\mathbb{R}^{d}$, denoted by $K\diamond b(x,t)$, such that its $i$-th component is given by

$$(K\diamond b)^{i}(x,t)=\int_{\mathbb{R}^{d}}\mathbb{E}\left[K^{i}_{\ j}\big(x-Z(y,t)\big)\right]\omega_{0}^{j}(y)\,\mathrm{d}y,$$

where $Z=(Z(y,t))_{t\ge0}$ is the $L_{b}$-diffusion started at $y$ at the moment $t=0$, so that $K\diamond b=b_{Z}$. Since $Z(y,t)$ has a transition probability density $p_{b}(0,y,t,z)$, we can write

$$(K\diamond b)^{i}(x,t)=\int_{\mathbb{R}^{d}}\int_{\mathbb{R}^{d}}K^{i}_{\ j}(x-z)\,p_{b}(0,y,t,z)\,\mathrm{d}z\,\omega_{0}^{j}(y)\,\mathrm{d}y.$$

We therefore define the mapping $K\diamond$ which sends a vector field $b(x,t)$ to the vector field $K\diamond b(x,t)$. The non-linear mapping $b\to K\diamond b$ will play a crucial rôle in our study. Under the above notations, we may rewrite SDE (2.24) as

$$\mathrm{d}X^{i}(x,t)=b^{i}\big(X(x,t),t\big)\,\mathrm{d}t+\sqrt{2\nu}\,\mathrm{d}B^{i}(t),\qquad\text{with }K\diamond b=b, \qquad (2.29)$$

where $B$ is a standard Brownian motion on a probability space and $i=1,\dots,d$. Then

$$\mathrm{d}X^{i}(x,t)=(K\diamond b)^{i}\big(X(x,t),t\big)\,\mathrm{d}t+\sqrt{2\nu}\,\mathrm{d}B^{i}(t), \qquad (2.30)$$

where $i=1,\dots,d$; that is, $(X,B)$ is a weak solution to (2.24). This lemma (Lemma 2.2) follows by definition: $K\diamond b=b_{X}$, and therefore (2.30) follows from (2.29) immediately.

Example 2.3. If $d=3$ and $K=(K^{1},K^{2},K^{3})$, then we set $K^{i}_{\ j}=\varepsilon^{ikj}K^{k}$, and SDE (2.30) becomes

$$\mathrm{d}X(x,t)=\left(\int_{\mathbb{R}^{3}}\mathbb{E}\left[K\big(z-X(y,t)\big)\right]\Big|_{z=X(x,t)}\wedge\omega_{0}(y)\,\mathrm{d}y\right)\mathrm{d}t+\sqrt{2\nu}\,\mathrm{d}B_{t},$$

which is the random vortex dynamical model, where $\omega_{0}$ represents the initial vorticity.

Several facts about diffusions with bounded drifts

In this section, we collect a few facts on diffusion processes and prove several technical estimates which will be used in the next section. Let $b(x,t)$ be a Borel measurable vector field on the Euclidean space $\mathbb{R}^{d}$, dependent on the time parameter $t\ge0$. It is assumed that $b(x,t)$ is bounded: $|b(x,t)|\le A$ for every $x$ and $t$, where $A$ is a non-negative constant. Then the unique $L_{b}$-diffusion (in the sense of weak solutions) may be constructed by using the Cameron-Martin formula (see [31, Theorem 6.4.2, page 154]). Let $B=(B_{t})_{t\ge0}$ be a $d$-dimensional standard Brownian motion on $(\Omega,\mathcal{F},\mathbb{P})$. Let $\mathcal{F}_{t}=\sigma\{B_{s}:s\le t\}$ and $\mathcal{F}_{\infty}=\sigma\{B_{s}:s\ge0\}$ be the natural filtration generated by this Brownian motion. Given $x$ and $\tau\ge0$, define the exponential martingale, called the Cameron-Martin density,

$$R^{\tau,x}(t)=e^{N^{\tau,x}(t)},\qquad N^{\tau,x}(t)=\frac{1}{\sqrt{2\nu}}\int_{\tau}^{t}b(X_{s},s)\cdot\mathrm{d}B_{s}-\frac{1}{4\nu}\int_{\tau}^{t}\big|b(X_{s},s)\big|^{2}\,\mathrm{d}s,$$

where, for simplicity, we have written $X_{s}=x+\sqrt{2\nu}\,(B_{s}-B_{\tau})$ for $t\ge\tau$. If $\tau=0$, then the symbol $\tau$ will be suppressed from the notations. Next, construct the probability $\mathbb{P}^{\tau,x}$ on $(\Omega,\mathcal{F}_{\infty})$ such that $\mathrm{d}\mathbb{P}^{\tau,x}=R^{\tau,x}(t)\,\mathrm{d}\mathbb{P}$ on $\mathcal{F}_{t}$; under $\mathbb{P}^{\tau,x}$ the process $(X_{t})_{t\ge\tau}$ is a diffusion family with generator $L_{b}$ (see for example [17,31]). In particular, for any Borel function $f$,

$$\mathbb{E}^{\mathbb{P}^{\tau,x}}\left[f(X_{t})\right]=\int_{\mathbb{R}^{d}}f(y)\,p_{b}(\tau,x,t,y)\,\mathrm{d}y,$$

as long as one of the integrals in the equation makes sense, where $p_{b}(\tau,x,t,y)$ is the transition probability density function of the $L_{b}$-diffusion. It is known (see for example [30]) that $p_{b}(\tau,x,t,y)$ is positive and continuous for any $t>\tau\ge0$ and $x,y\in\mathbb{R}^{d}$. Moreover, for every $T>0$, there is a constant $M$ depending on $A$, $d$ and $T$ only, such that the two-sided Gaussian bounds

$$\frac{1}{M}\,t^{-d/2}\,e^{-\frac{M|x-y|^{2}}{t}}\le p_{b}(\tau,x,\tau+t,y)\le M\,t^{-d/2}\,e^{-\frac{|x-y|^{2}}{Mt}}$$

hold for all $\tau\ge0$ and $T\ge t>0$. This is the so-called Aronson estimate (see [2,29,30] for example). In our study, we need more precise information about the constant $M$, which was obtained in [26,25]: there is a positive universal constant $\kappa$, depending only on the dimension $d$ and on $1<q<\frac{d}{d-1}$, such that the sharp upper bound (3.6) holds for all $x,y\in\mathbb{R}^{d}$, $\tau\ge0$ and $t>0$. As a consequence, we establish the following estimate, which will play a crucial rôle in the proof of our main theorem.

Lemma 3.2. Suppose $f$ satisfies the stated integrability bound for any $x\in\mathbb{R}^{d}$ and $t>0$. Then there exists a universal positive constant $\kappa_{1}$, depending only on $d$, such that the corresponding estimate holds for all $x$ and $t>0$.

Proof. Without loss of generality, we may assume that $f\ge0$. Using the sharp estimate (3.6), we obtain the claimed bound, and the proof is complete.

We also need the following estimate, which is completely elementary.

Lemma 3.3. Suppose $f$ satisfies the stated bound for all $x\in\mathbb{R}^{d}$ and $t>0$. Then the corresponding estimate holds for all $x$ and $t>0$, $\rho>0$ and $\gamma\ge0$.

Proof. Since $\gamma\ge0$, the integrand can be estimated directly, and the proof is complete.
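Before turning to the analysis, it may help to see the map $b\mapsto K\diamond b$ in computational terms: by the transition-density representation above, $K\diamond b$ can be approximated by running the $L_b$-diffusion and replacing the expectation with a sample average, and a fixed point $b=K\diamond b$ can then be sought by iteration, as Theorem 4.3 below justifies for small times. The Python fragment below is a hypothetical one-dimensional illustration only; the kernel, the grid, the sample sizes, and the freezing of the drift at a single time slice are all simplifying assumptions, not part of the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
nu, dt, n_steps, n_paths = 0.05, 1e-2, 20, 2000
ys = np.linspace(-2.0, 2.0, 41)        # grid of starting points y
dy = ys[1] - ys[0]
w0 = np.exp(-ys**2)                    # assumed initial data omega_0

def kernel(r):
    # assumed singular kernel, bounded at infinity (illustrative choice)
    return np.sign(r) / np.sqrt(np.abs(r) + 1e-6)

def K_diamond(b):
    """Approximate (K <> b)(x, T) on the grid: simulate the L_b-diffusion
    from every y by Euler-Maruyama, then replace E[K(x - Z(y, T))] by a
    sample mean and integrate against omega_0."""
    Z = np.repeat(ys, n_paths)
    for _ in range(n_steps):
        drift = np.interp(Z, ys, b)    # piecewise-linear interpolation of b
        Z = Z + drift * dt + np.sqrt(2.0 * nu * dt) * rng.standard_normal(Z.shape)
    Z = Z.reshape(len(ys), n_paths)
    out = np.empty_like(ys)
    for i, x in enumerate(ys):
        Ek = kernel(x - Z).mean(axis=1)     # Monte Carlo E[K(x - Z(y, T))]
        out[i] = np.sum(Ek * w0) * dy       # quadrature against omega_0(y) dy
    return out

b = np.zeros_like(ys)                  # Picard iteration b_{k+1} = K <> b_k
for k in range(5):
    b_new = K_diamond(b)
    print(k, np.max(np.abs(b_new - b)))     # sup-norm gap between iterates
    b = b_new
```

If the contraction estimate of Theorem 4.3 holds on the time interval considered, the printed sup-norm gaps should shrink geometrically, up to Monte Carlo noise.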
Weak solutions

In this section we prove, under certain conditions, that there is a unique weak solution to (2.24). To this end we make several assumptions on $\omega_{0}$ and $K$ which will be in force throughout the remainder of the paper. Let $C_{0}$, $C_{1}$ and $C_{\infty}$ be three non-negative constants. It is assumed that $K=(K^{i}_{\ j})$ satisfies the following growth condition: there are two constants $\gamma_{1}\in[0,d)$ and $\gamma_{2}\ge0$ such that

$$|K(x)|\le\frac{C_{0}}{|x|^{\gamma_{1}}}\quad\text{for all }x\neq0\text{ and }|x|<1, \qquad (4.1)$$

and

$$|K(x)|\le\frac{C_{0}}{|x|^{\gamma_{2}}}\quad\text{for all }|x|\ge1. \qquad (4.2)$$

In addition we assume that the initial vorticity $\omega_{0}$ is bounded and integrable, with $\|\omega_{0}\|_{L^{1}}\le C_{1}$ and $\|\omega_{0}\|_{\infty}\le C_{\infty}$. Choose and fix a number $q\in(1,\frac{d}{d-1})$, and define the constants $C_{K}$ and $T_{K}$ accordingly. The crucial fact about $C_{K}$ and $T_{K}$ is that they depend on $C_{0}$, $C_{1}$, $C_{\infty}$ and $\gamma_{1}$ only.

Lemma 4.1. If $b(x,t)$ is a time-dependent vector field such that $|b(x,t)|\le C_{K}$ for all $x\in\mathbb{R}^{d}$ and $t\le T_{K}$, then $K\diamond b$ is also bounded with the same bound; that is, $|K\diamond b(x,t)|\le C_{K}$ for all $x\in\mathbb{R}^{d}$ and $t\le T_{K}$.

Proof. Let $B=\{z\in\mathbb{R}^{d}:|z|<1\}$, and split the defining integral into the part $I_{1}$ over $B$ and the part $I_{2}$ over its complement. The estimate for $I_{1}$ follows directly from Lemma 3.2 with $A=C_{K}$. By Lemma 3.3 we deduce the estimate for $I_{2}$. Putting the estimates for $I_{1}$ and $I_{2}$ together, we may conclude a bound valid for all $x\in\mathbb{R}^{d}$ and $t\ge0$. Since $A\sqrt{t}\le1$ for any $t\le T_{K}$, we therefore obtain the claimed bound for any $x$ and $t\le T_{K}$. The conclusion then follows immediately from the definition of $C_{K}$ and $T_{K}$.

Next we are going to establish another key estimate for the mapping $b\to K\diamond b$, where the $b(x,t)$ are vector fields such that $|b(x,t)|\le C_{K}$ for any $t\le T_{K}$.

Lemma 4.2. There exists a positive constant $C_{L}$, depending only on $C_{0}$, $C_{1}$, $C_{\infty}$, such that for any $b(x,t)$ and $\tilde{b}(x,t)$ satisfying $|b(x,t)|\le C_{K}$ and $|\tilde{b}(x,t)|\le C_{K}$ for all $x$ and $t\le T_{K}$, the estimate (4.5) holds for all $x$ and $t\le T_{K}$.

Proof. We prove this by using the Cameron-Martin formula [31, Theorem 6.4.2, page 154]. Let $B$ be a $d$-dimensional standard Brownian motion on some probability space $(\Omega,\mathcal{F},\mathbb{P})$, and let $R_{c}(x,t)=e^{N_{c}(x,t)}$ be the Cameron-Martin density (see (3.1) and (3.2)) with respect to a vector field $c$, starting at $x$ at the moment $0$; its quadratic variation process can be written out explicitly. It is clear that the resulting difference is controlled, and therefore (4.8) follows. Substituting the resulting equality into (4.9), we obtain the required identity. Now we are in a position to study the non-linear mapping $c\to K\diamond c$. According to the Cameron-Martin formula, and then by using the previous formula for $K\diamond c$, the difference can be decomposed into the terms $J_{1}$ and $J_{2}$. Substituting (4.10) into $J_{1}$, we may write it out and bound it, where the second inequality comes from (4.8); this yields (4.11). Thus, by applying Hölder's inequality to $J_{1,1}$, we deduce a bound valid for all $t\le T_{K}$, where the second inequality follows from (4.11), the third inequality follows from Lemma 3.2, and the last inequality follows from the fact that $C_{K}t\le1$ for all $t\le T_{K}$. To deal with $J_{1,2}$, we apply Lemma 3.3 and obtain the analogous bound for any $t\le T_{K}$, where the first inequality follows from the estimate in Lemma 3.3. Now we treat $J_{2}$. Since $|K_{2}(z)|\le C_{0}$, by (4.8) we obtain the corresponding estimate, where the last inequality comes from (4.11). Putting these estimates for $J_{1}$ and $J_{2}$ together, we deduce (4.5) with a positive constant $C_{L}$ which depends only on the structure constants $C_{0}$, $C_{1}$, $C_{\infty}$, $\gamma_{1}$ and $d$ (as $\alpha$ and $q$ are constants depending only on $d$ and $\gamma_{1}$); an explicit choice of $C_{L}$ is recorded in (4.13).

We are now in a position to prove the main result about weak solutions to (2.24).

Theorem 4.3. There exist two positive constants $T_{L}$ and $C_{K}$, depending on $C_{0}$, $C_{1}$, $C_{\infty}$, $\gamma_{1}\in[0,d)$ and $d$ only, such that the following hold:

1) The (non-linear) mapping $b\to K\diamond b$ is contractive on the space of bounded time-dependent vector fields; more precisely, the contraction estimate (4.14) holds for any vector fields $b$ and $\tilde{b}$ with $|b|\le C_{K}$ and $|\tilde{b}|\le C_{K}$. Hence, there is a unique $b$ such that $K\diamond b=b$.
2) There is a unique weak solution $(X,B)$, on some probability space, to the SDE (2.24) up to time $T_{L}$, where $B$ is a Brownian motion and $X$ satisfies (2.24), and the drift components $b_{X}^{i}$ are bounded for $i=1,\dots,d$.

Proof. Choose $T_{L}=\frac{1}{4C_{L}}\wedge1$, where $C_{L}$ is given by (4.13). Then (4.14) follows immediately. The second part then follows from Lemma 2.2.

We finish this section with a comment on global solutions of (2.24). As long as $K$ is a singular integral kernel, bounded at infinity, we have shown that there is a unique weak solution to (2.24) up to time $T_{L}$, where $T_{L}$ depends only on the structure constants $C_{i}$ ($i=0,1,\infty$) and $\gamma_{1}\in[0,d)$. However, we are unable to conclude that the weak solution exists for all time $t$. The reason is that the SDE (2.24) does not define a dynamical system, as it is not posed as an initial value problem. Finally, we should point out that we do not claim that there are no other fixed vector fields $c$, in the sense that $K\diamond c=c$, with $c(x,t)$ unbounded on some time interval $[0,T]$ when $\omega_{0}$ and $K$ are regular enough, although we strongly believe this is not the case.

Strong solutions

With the same assumptions on $K$ and $\omega_{0}$ as in Section 4, we show that there is a strong solution to (2.24) by using the result in [37] for multi-dimensional diffusion processes with bounded drifts. Moreover, under a growth condition on $K$, we are able to show the Hölder continuity of the vector field $K\diamond b$.

Theorem 5.1. Let $B=(B_{t})_{t\ge0}$ be a $d$-dimensional standard Brownian motion on a probability space $(\Omega,\mathcal{F},\mathbb{P})$. There is a unique family of stochastic processes $X(x,t)$, jointly continuous in $(x,t)$ almost surely, which satisfies the stochastic integral equations

$$X^{i}(x,t)=x^{i}+\int_{0}^{t}b^{i}\big(X(x,s),s\big)\,\mathrm{d}s+\sqrt{2\nu}\,B^{i}_{t},\qquad i=1,\dots,d,$$

where the drift components

$$b^{i}(x,t)=\int_{\mathbb{R}^{d}}\mathbb{E}\left[K^{i}_{\ j}\big(x-X(y,t)\big)\right]\omega_{0}^{j}(y)\,\mathrm{d}y \qquad (5.1)$$

are bounded.

Proof. According to Theorem 4.3, for any $t\le T_{L}$ (extending $b$ to be zero for $t>T_{L}$), there is a unique bounded vector field $b(x,t)$ satisfying $K\diamond b=b$. Since $b$ is bounded and Borel measurable, there is a unique strong solution $X(x,t)$ to the ordinary stochastic differential equation

$$\mathrm{d}X(x,t)=b\big(X(x,t),t\big)\,\mathrm{d}t+\sqrt{2\nu}\,\mathrm{d}B_{t}.$$

We are going to show that the vector field (5.1) is in fact Hölder continuous. To this end, we need the following Hölder continuity result for the transition probability density function, proved originally by Nash [23] and later by Aronson [2] and Fabes and Stroock [13]. We take this from [29, Theorem II.2.12, page 340].

Lemma 5.2. Under the same assumption as in Lemma 3.1, there are constants $C_{H}>0$ and $\alpha\in(0,1)$, depending only on $A$ and $d$, such that the Hölder estimate for the transition density holds for all $s\ge0$, $\delta^{2}\le t-s\le\frac{1}{\delta^{2}}$ and $|x-\tilde{x}|\le\delta$, for any $\delta>0$.

Lemma 5.3. Under the same assumptions on $K$ and $\omega_{0}$ as in the previous section, we further assume $\gamma_{1}=\gamma_{2}\equiv\gamma$, which belongs to $[0,d)$. Suppose $|b(x,t)|\le C_{K}$ for all $x$ and $t$. Then $K\diamond b(x,t)$ is Hölder continuous on any compact subset of $\mathbb{R}^{d}\times(0,T_{K}]$, where the Hölder exponent and Hölder norm depend only on $C_{K}$.

Proof. By using Lemma 5.2, if $T_{K}\ge t,\tilde{t}>\delta^{2}$ and $|x-\tilde{x}|<\delta$ (for $\delta>0$ small enough), then, introducing a shorthand for simplicity, we obtain for any $\rho>0$ an estimate whose last inequality follows from Lemma 3.2 and Lemma 3.3. Choosing $\rho>0$ appropriately yields the claim.

Corollary 5.4. Under the same conditions on $\omega_{0}$ as in the previous section, suppose the kernel $K=(K^{i}_{\ j})$ satisfies the following condition:

$$|K(x)|\le\frac{C_{0}}{|x|^{\gamma}}\quad\text{for all }x\neq0,$$

where $0\le\gamma<d$ and $C_{0}>0$. Then there is a unique strong solution $X(x,t)$ to (2.24) for any $t\le T_{L}$, such that the drift vector field (5.1) is Hölder continuous on compact subsets of $\mathbb{R}^{d}\times(0,T_{L}]$.

From SDE to PDE

In this section we recover the PDE from the SDE (2.24).

Theorem 6.1. Let $K$ and $\omega_{0}$ satisfy the assumptions in Section 4.
Let $(X(x,t),B_{t})$ (where $x\in\mathbb{R}^{d}$ and $t\ge0$) be the unique weak solution of SDE (2.24) on a probability space $(\Omega,\mathcal{F},\mathbb{P})$ for $t\in[0,T_{L}]$. Then, for any $y\in\mathbb{R}^{d}$ and $t>0$, the distribution of $X(y,t)$ has a positive and continuous density, denoted by $p(0,y,t,\cdot)$. Let $b(x,t)$ be defined by

$$b^{i}(x,t)=\int_{\mathbb{R}^{d}}\big(K^{i}_{\ j}\star\mathcal{L}(X(y,t))\big)(x)\,\omega_{0}^{j}(y)\,\mathrm{d}y\quad\text{for }i=1,\dots,d,$$

and

$$\omega^{i}(x,t)=\int_{\mathbb{R}^{d}}p(0,y,t,x)\,\omega_{0}^{i}(y)\,\mathrm{d}y \qquad (6.1)$$

for any $x$ and $t\in[0,T_{L}]$. Then the pair $(b,\omega)$ is the solution to the following non-local partial differential equations

$$\left(\frac{\partial}{\partial t}-L_{b}^{\star}\right)\omega^{i}=0,\qquad b^{i}=K^{i}_{\ j}\star\omega^{j},\qquad\omega(\cdot,0)=\omega_{0}, \qquad (6.2)$$

for any $x$ and $t\in[0,T_{L}]$, where $i=1,\dots,d$.

Proof. According to our construction, $b(x,t)$ is the unique bounded vector field such that $K\diamond b=b$, and $X(x,t)$ is the unique weak solution of the SDE

$$\mathrm{d}X(x,t)=b\big(X(x,t),t\big)\,\mathrm{d}t+\sqrt{2\nu}\,\mathrm{d}B_{t},\qquad X(x,0)=x.$$

Thus $p(0,y,t,x)=p_{b}(0,y,t,x)$ is the transition probability density for the diffusion with generator $L_{b}$; hence, considering $p_{b}(0,y,t,x)$ as a function of $(t,x)$, $p_{b}$ is the fundamental solution to the forward adjoint equation. Hence, according to Lemma 2.1, $\omega(x,t)$ given by (6.1) is the solution to

$$\left(\frac{\partial}{\partial t}-L_{b}^{\star}\right)\omega=0,\qquad\omega(\cdot,0)=\omega_{0},$$

which completes the proof.

As an example, we may apply this representation theorem to the Biot-Savart kernel $G(x)=-\frac{x}{|x|^{3}}$ on $\mathbb{R}^{3}$, so that $K^{i}_{\ j}=\varepsilon^{ikj}G^{k}$. Then there is a unique weak solution to the following SDE

$$\mathrm{d}X(x,t)=\left(\int_{\mathbb{R}^{3}}\mathbb{E}\left[G\big(z-X(y,t)\big)\right]\Big|_{z=X(x,t)}\wedge\omega_{0}(y)\,\mathrm{d}y\right)\mathrm{d}t+\sqrt{2\nu}\,\mathrm{d}B_{t}, \qquad (6.4)$$

where $\omega_{0}\in L^{1}(\mathbb{R}^{3})\cap L^{\infty}(\mathbb{R}^{3})$ is the initial vorticity. In this case we define

$$\omega(x,t)=\int_{\mathbb{R}^{3}}p(0,y,t,x)\,\omega_{0}(y)\,\mathrm{d}y,$$

where $p(0,y,t,\cdot)$ is the probability density function of the law of the solution $X(y,t)$ to (6.4), and define

$$u^{i}(x,t)=\int_{\mathbb{R}^{3}}\varepsilon^{ikj}\,\frac{x^{k}-z^{k}}{|x-z|^{3}}\,\omega^{j}(z,t)\,\mathrm{d}z.$$

Moreover, one can verify easily that $\nabla\cdot u(x,t)=0$ and $\Delta u(x,t)=-\nabla\wedge\omega(x,t)$, where the second equation follows from the Green formula; the latter relation can also be written as $\nabla\wedge u=\omega+\nabla f$ for some scalar function $f$. If one also imposes the constraint $\nabla\cdot\omega_{0}=0$, then $\omega=\nabla\wedge u$. Hence $(X(x,t),B_{t})$ is the probabilistic representation of the solution to the above vorticity equation.
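To see why the pair $(b,\omega)$ of Theorem 6.1 couples through a plain convolution, one can expand the definition of $b$ using the density $p(0,y,t,\cdot)$; the short computation below is a standard manipulation consistent with the notation above, not an excerpt from the paper.

```latex
b^{i}(x,t)
  = \int_{\mathbb{R}^{d}} \big(K^{i}_{\ j}\star\mathcal{L}(X(y,t))\big)(x)\,\omega_{0}^{j}(y)\,\mathrm{d}y
  = \int_{\mathbb{R}^{d}} \int_{\mathbb{R}^{d}} K^{i}_{\ j}(x-z)\, p(0,y,t,z)\,\mathrm{d}z\;\omega_{0}^{j}(y)\,\mathrm{d}y
% exchange the order of integration (Fubini) and use (6.1):
  = \int_{\mathbb{R}^{d}} K^{i}_{\ j}(x-z)
      \underbrace{\int_{\mathbb{R}^{d}} p(0,y,t,z)\,\omega_{0}^{j}(y)\,\mathrm{d}y}_{=\;\omega^{j}(z,t)}
      \,\mathrm{d}z
  = \big(K^{i}_{\ j}\star\omega^{j}\big)(x,t).
```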
An optimized histological proceeding to study the female gametophyte development in grapevine

Reproductive success in seed plants depends on a healthy fruit and seed set. Normal seed development in the angiosperms requires the production of functional female gametophytes. This is particularly evident in seedless cultivars, where defects during the megagametophyte's developmental processes have been observed through cytohistological analysis. Several protocols for embryo sac histological analyses in grapevine are reported in the literature, mainly based on resin- or paraffin-embedding approaches. However, their description is not always fully exhaustive, and sometimes they consist of long and laborious steps. The use of different stains is also documented, some of them, such as hematoxylin, requiring long oxidation periods of the dye solution before use (from 2 to 6 months) and/or involving a differentiation step that is not easy to handle. Paraffin embedding associated with examination under the light microscope is the simplest methodology, with fewer requirements in terms of expertise and costs, and achieves a satisfactory resolution for basic histological observations. Safranin O and fast green FCF is an easy staining combination that has been applied in embryological studies of several plant species. Here we describe in detail a paraffin-embedding method for the examination of grapevine ovules at different phenological stages. The histological sample preparation process takes a day and a half. Sections of 5 µm thickness can be obtained, and good contrast is achieved with the safranin O and fast green FCF staining combination. The method allows the observation of megasporogenesis and megagametogenesis events at the different phenological stages examined. The histological sample preparation process proposed here can be used as a routine procedure to obtain embedded ovaries or microscope slides that would require further steps for examination. We suggest the tested staining combination as a simple and viable technique for basic screening of the presence in grapevine of a normally and fully developed, and therefore potentially functional, ovule with embryo sac cells.

There is a growing interest in plant microtechniques combined with microscopic observations [6]. A good histological study based on anatomical and histochemical alterations provides insight into cellular processes and gives clues to propose hypotheses for further experimentation. In fact, more and more researchers incorporate cytohistological analyses into their investigations because they allow one to "see" changes that take place in the target experimental system [7-15]. Plant tissue culture methods are often applied to fundamental studies of plant morphology and development. Such studies demand familiarity with histological techniques for light microscopy. Observation and, especially, interpretation of the histological sections usually require a high degree of expertise and a steep learning curve, which can be eased by the use of specific plant anatomy atlases [16]. Nevertheless, the technical procedure to obtain the histological sample can be easily reproduced if tissue-specific, well-described protocols are available.
There is a vast literature on plant microtechniques and microscopy proposing a broad collection of protocols with general schedules that have to be optimized depending on the species, the nature of the tissue and the aim of the experiment; each schedule therefore has to be adjusted for every histological experiment to be performed, so that a high-quality histological sample can be obtained in the shortest possible time. The availability of ad-hoc schedules for a certain tissue of a determined species represents a considerable saving of time. It also becomes of great value especially for researchers with limited knowledge and experience in this field, but whose research topic requires the application of histological procedures to keep moving forward in their investigation. Three main parts can be identified in the histological sample preparation. Part 1: sample preparation for sectioning, resulting in the tissue embedded in a solid matrix ready to be sectioned; it usually includes the following steps: fixation, dehydration, clearing, infiltration and embedding. Part 2: sectioning and affixing to microscope slides to obtain good quality sections in terms of thickness (the thinner the section, the better the resolution). Part 3: staining of the obtained sections for visualization at the light microscope (see [1-4,6] for a detailed description of each step). Detailed knowledge of the reproductive biology of cultivated species is important, not only to assess the adaptive significance and homology of descriptive characters used in plant systematics, but also to comprehend the requirements for fruit and seed production that allow the development of effective management strategies and a sustainable use [17]. Reproductive success in seed plants depends on a healthy seed set [18]. Normal seed development in the angiosperms requires the production of functional male (pollen grain or microgametophyte) and female (embryo sac or megagametophyte) gametophytes [19]. This is particularly evident in seedless cultivars, where defects during the megagametophyte's developmental processes have been observed through cytohistological analyses [14,20-22]. Unlike microgametophyte phenotyping (which can be subjected to high-throughput phenotyping for its functional validation [23]), there is a technical difficulty in obtaining phenotypic information from the megagametophyte at the cellular or sub-cellular level, as it is a more complex system and is located within the ovule, wrapped by the integuments, which act as physical barriers [24]. A complete functional embryo sac is critical to many steps of the reproductive process, so a deep comprehension of female sporogenesis (megasporogenesis) and gametogenesis (megagametogenesis) is needed to further understand other processes of reproductive biology in which the megagametophyte is involved, such as pollen tube guidance, fertilization, induction of seed development upon fertilization, and maternal control of seed development after fertilization [25,26]. In grapevine, the female reproductive organ (gynoecium or pistil) consists of a superior ovary with two locules (each containing two anatropous ovules with an embryo sac), a single short style and a single stigma. The ovule, which develops as a placental outgrowth, is constituted by a massive nucellus and two integuments (inner and outer). The nucellus evolves from periclinal divisions of the subepidermal cells of the ovule primordium.
The inner integument originates at the base of the ovule primordium from periclinal divisions of the nucellar epidermal cells and consists of two to three cell layers. The outer integument, instead, develops at the base of the inner integument from periclinal divisions of the subepidermal cells when the ovule is partly anatropous and, in this case, consists of two to nine cell layers in different parts of the ovule [27]. Embryo sac development starts with the differentiation of one hypodermal cell in the center of the nucellus into the archeosporial cell (2n), which divides transversely and forms an outer primary parietal cell (from which the layers of parietal cells and the nucellar calotte generate) and an inner primary megasporogenous cell [27,28]. During megasporogenesis, the inner primary megasporogenous cell differentiates into the megaspore mother cell (MMC), which undergoes two consecutive meiotic divisions producing a linear tetrad of four megaspores (n). Only the one in the chalazal direction behaves as a functional megaspore and develops into the female gametophyte, whilst the remaining meiotic products degenerate by a form of programmed cell death. During embryo sac development the ovule becomes anatropous (completely inverted) due to the development of the integuments. Then, during megagametogenesis, the surviving megaspore undergoes three consecutive mitoses resulting in an immature embryo sac that consists of an eight-nucleate cell (four nuclei toward the micropyle and another four toward the chalazal end). Finally, a last differentiation process gives rise to a mature monosporic Polygonum-type embryo sac containing seven cells (three antipodals at the chalazal end, the egg apparatus consisting of two synergids and one egg cell at the micropylar end, all of them haploid, and one homodiploid central cell near the egg apparatus) [25,27,29,30]. Lebon et al. [31] studied the reproductive organ development in two grapevine cultivars (Gewürztraminer and Pinot Noir) applying a resin-embedding approach. These authors observed that, in both cultivars, female reproductive cells consisted of sporogenous tissue between stages E-L 12 and E-L 15 [32], reaching the MMC stage at E-L 15 + 2 days. The time course of female development differed thereafter. The residual megaspore generating the embryo sac was formed at E-L 15 + 8 days in Pinot Noir and at E-L 17 in Gewürztraminer. However, at the onset of anthesis the embryo sac was fully developed in both cultivars. In the literature there are several works where cytohistological approaches have been applied for the study of the embryo sac in grapevine, as well as for the investigation of flower and berry development [14,21,22,33-39]. The various methodologies are mainly based on resin- or paraffin-embedding techniques associated with electron (EM) or light microscopy (LM), respectively. In resin-embedding techniques, ultrathin sections (1 nm) can be obtained from very hard material (deep freezing or resins), providing high-resolution images at the EM; however, these kinds of techniques require a high level of expertise and special microtomes designed for obtaining very thin sections [34]. Unlike resin-based techniques, paraffin-embedding approaches contain the costs and, despite the fact that the best resolution achieved is 3-5 µm, they can be an efficient tool if there is no need to observe ultrastructural features of small regions or even of a particular cell type [21,22,36,39].
In addition, it is easier to orient and embed specimens in paraffin wax than in polyester wax or glycol methacrylate. In some cases, a combination of resin embedding with LM, instead of EM, has also been applied in order to get semi-thin sections (1 µm), an intermediate approach in terms of expertise, costs and image resolution [14,31]. However, in the published protocols the description of the cytohistological methodology used is not always exhaustive. Different staining procedures have been performed depending on the target structures or compounds to be observed. There is a huge variety of available stains in botanical microtechnique, but hematoxylin still remains the standard nuclear stain for histological studies. However, hematoxylin by itself is a very weak dye and is of no value in microtechnique if it is not used in conjunction with a mordant that causes it to act as a very strong basic dye. The safranin O and fast green FCF staining combination, easier to apply than hematoxylin, is also considered one of the most valuable stains for nuclear as well as anatomical and embryological studies, and it has even replaced the hematoxylin stain for routine work in some cases [40]. This staining combination has been applied in embryological studies of several plant species [9,41,42] and, in grapevine, as far as we know, in studies of carpel morphogenesis, flower formation, fruit set and early stages of fruit development [37,38,43]. Here we present a detailed paraffin-embedding method for the examination of grapevine ovules at different phenological stages. The incubation times at each step (fixation, dehydration, clearing, infiltration, and staining) have been optimized in order to reduce the duration of the entire process. The main objectives were to develop optimized schedules for: (i) preparing good quality histological samples of grapevine ovules for further processing (parts 1 and 2 of the histological sample preparation process, see above) and (ii) testing the safranin O and fast green FCF staining combination as a valid method for routine screening of functional embryo sacs in grapevine.

Results

The histological procedure proposed here (consisting of dehydration, clearing, infiltration and embedding steps) takes a day and a half, allowing the preparation of 12 samples at the same time (more samples can be processed simultaneously if the volume of the solvents used at each step is increased). Cooling the paraffin block with ice for 1-2 min prior to sectioning (i.e. holding one ice cube against the paraffin block so that the melting ice water drops down over its front surface) is useful to obtain thin sections. In any case, despite the technical feasibility of obtaining good sections of 3 µm thickness with the rotary microtome, it was difficult to place them into the paraffin section flotation bath, as they tended to fold. Therefore, sections of 5 µm thickness were preferentially produced.

Analysis of the megagametophyte development

Sangiovese ovules at three phenological stages were examined in both longitudinal and transversal orientations (Fig. 1, Table 1).

Stage E-L 15. Longitudinal sections of Sangiovese ovules were obtained at this stage. In all the investigated ovules the beginning of the nucellus was observed, as well as protuberances indicating the initiation of the inner integument. The archeospore was evident in most of them. Cells undergoing periclinal divisions at the base of the inner integument were visualized, which likely points to an initial formation of the outer integument.
The cytoplasmic component of the cells forming the nucellus and the observed protuberances was stained with cyan/green hues, while nuclei and nucleoli were clearly stained in reddish/magenta (Fig. 2).

Stage E-L 17. Longitudinal ovules at this stage were already anatropous, with the nucellus and both integuments fully developed (Fig. 3A, B). Transversal sections offered an overview of the ovary, which was mainly divided into two locules with two ovules each, accounting for a total of four ovules per ovary. However, in some cases, ovaries had three locules with two ovules each, thus containing a total of six ovules with a megaspore each (Fig. 4). Inner and outer integuments completely wrapped the nucellus in all the ovules examined. The inner integument consisted of three cell layers and the outer integument of four layers of cells, reaching seven in the chalazal part of the ovule. Nucellus and inner integument cells appeared cyan/green with reddish/magenta nuclei, while the outer integument cells presented a more intense safranin O staining (not only in the nuclei and nucleoli), especially those in the outermost layer of the outer integument. A space between the inner and outer integument was observed in the majority of ovules, and sometimes between the nucellus and the inner integument, as well as exfoliated integuments (Fig. 4A-F, H). The nucellar calotte or nucellar cap, which develops at the micropylar end of the nucellus, consisted at this stage of around five layers of cells (Fig. 3B, E). Different processes that take place during megagametogenesis were observed at this stage (Figs. 3 and 4B-E, Table 1), with megaspores during or after the first or second mitosis.

Stage E-L 26. Longitudinal ovules examined at this stage were anatropous with fully developed integuments (Fig. 5). In transversal sections, as in stage E-L 17, some ovaries with three locules containing two ovules each were observed; in this case most of them presented a mature embryo sac (Fig. 6). In any case, the examined ovaries consisted mainly of two locules. A small space could be appreciated between the inner and outer integuments (Figs. 5B, D, 6A-C, E, G-L) and, less frequently, between the nucellus and the inner integument (Figs. 5B, 6C). At this stage the cell layers of both inner and outer integuments stained with a reddish/magenta color, although, as in stage E-L 17, cells of the outermost layer of the outer integument presented a much more intense staining compared to the other layers. The nucellar cap consisted of up to ten layers of cells. Embryo sac elements (egg cell, synergids and central cell or its contours) could be seen in the majority of the examined ovules in both longitudinal (Fig. 5) and transversal sections (Fig. 6). In addition, in some ovules the zygote, the endosperm and/or the sperm cell were visible, which indicates that the fertilization process had already taken place (Fig. 6E-G, I-K). Nucellar tissue was not observed in two ovules from the same flower in any of the longitudinal sequential sections inspected (Fig. 5A).

Technical remarks

Sectioning and affixing the sections on the microscope glass slides is, in our opinion, the real bottleneck of the whole process. Dehydration, clearing and infiltration steps can be performed for several samples simultaneously, but each block of embedded material has to be sectioned one at a time.
In addition, the study of the development of the embryo sac requires the examination of a set of adjacent sections, because the different target structures could be in different cutting planes. Therefore, all sections from the beginning to the end of the ovule have to be kept, which makes the process even longer. On the other hand, in order to avoid mechanical damage of the tissue, sectioning at the microtome requires competence, precision and time, and these skills can only be obtained and improved with practice. Alternate sections were selected for staining. In this way, if other types of analysis (e.g. in situ hybridization, specific staining of certain structures or compounds such as lipids, carbohydrates, polysaccharides, etc.) have to be performed at a later time, specific procedures and dyes can be applied to the slides adjacent to those already screened containing the structures of interest.

Histological sample preparation and staining

Fixation times of grapevine flower buds reported in the literature range from 12 to 72 h; usually this process has been carried out at 4 °C, and sometimes vacuum has been applied to remove air and promote penetration of the fixative [14,20,22,36]. According to our experience, flower buds are completely fixed after 12 h, and vacuum is only needed for closed flowers collected before anthesis. That is because the calyptra creates an air chamber around the pistil and, if this air is not removed, it can interfere not only with fixation but also with the successive steps of the process [4]. This step could be skipped if pre-flowering flower buds are decapped before fixation. FAA universal fixative, a mixture of formalin-acetic acid-alcohol, was chosen here because it is the most widely used [3,4,6]. In addition, we wanted to avoid the use of other more dangerous and toxic compounds such as chromic or picric acid. Finally, FAA fixative does not need to be washed off before the dehydration step, because its ingredients are soluble in the dehydrating agents and are removed before infiltration is begun. Instead of ethanol, an ethanol-based alcohol mixture (histoalcohol), optimized for cytohistological analysis, has been used for tissue dehydration. In the consulted literature referring to grapevine, only Cardoso et al. [36] specified the ethanol series employed. They performed six steps of increasing ethanol concentrations, from 10 to 90%, and a last overnight dehydration step in 100% ethanol. The full length of the dehydration step in the protocol proposed in the present work is much shorter: 3 h and 15 min (4 h including "clearing"). The general rule concerning the initial concentration of alcohol to be used for dehydration is to begin with approximately the same percentage of alcohol as the fixative or storage fluid. In this protocol dehydration steps were saved because, as the FAA fixative is already 50% ethanol, dehydration starts at 50% [6]. Like dehydration, tissue infiltration from the clearing agent to the support matrix should occur gradually. In fact, in the basic protocols described in plant microtechnique manuals this step is laborious and long; it can even last several days [1-4,6]. For grapevine flower buds, a three-step infiltration consisting of 100% xylene, xylene:paraffin 1:1 and 100% paraffin was reported [36].

Figure: Transversal section at 100× of the ovary of two Sangiovese flowers presenting two (A) or rarely three locules (B) with two ovules each. C-F Sections of ovules containing the megaspore during starting mitosis at 400×. G-I Megaspore in each section of ovules in the central part of the nucellus at 1000×. Nu: nucellus, l: locule, ii: inner integument, oi: outer integument, ms: megaspore.
Due to the small size of the samples analyzed in the present paper, they were placed in biopsy bags to prevent them from coming out of the biopsy cassette. These filter bags absorb and retain the solvents used, such as xylene, so infiltration was performed by directly placing the samples from the clearing solution (xylene) into liquid paraffin, with one change of the liquid matrix and skipping the xylene:paraffin 1:1 step. A vast range of dyes and stain combinations is available. Different stains can be applied to the same tissue depending on the structures to be observed. In the study of grapevine ovules, different dyes (e.g. Heidenhain's iron alum hematoxylin, DAPI, aniline blue, toluidine blue O, cresyl violet, vanillin-HCl) and double stain combinations (e.g. Mayer's hematoxylin and eosin (H&E), safranin O and orange G) have been used for different purposes [14,20-22,34-36,39]. In the present work, good contrast and a differential staining allowing the identification of embryo sac structures were achieved using the safranin O-fast green FCF combination. Therefore, we propose safranin O and fast green FCF as a suitable staining for this purpose.

Grapevine female gametophyte development

Stages E-L 15, E-L 17 and E-L 26 were selected to follow the female gametophyte development in Sangiovese according to the key steps of sexual organ formation observed at different phenological stages by Lebon et al. [31]. In Sangiovese at stage E-L 15, the initiation of the inner integument was already evident, as well as a nucellus with, in the middle, a structure that may correspond to the first division of the archeospore or to the megaspore mother cell [27]. At stage E-L 17 ovules were already anatropous, and megagametogenesis processes could be observed (Fig. 3D). The acquisition by the inner and outer integuments of a reddish/magenta color is likely due to the fact that these cell layers consist of tannin-bearing cells, especially the outermost layer of the outer integument, as described by [44]. The spaces observed between inner and outer integuments and between nucellus and inner integument likely represent the first sign of ovule degeneration with exfoliated integuments [14]. The embryo sac of the Sangiovese cv., as previously reported also for Gewürztraminer and Pinot noir [31], was fully developed at the onset of anthesis. Egg cell, central cell and synergids were observed in most of the samples, while antipodals (located at the chalazal end of the embryo sac) were not seen, likely because they have a short life span and soon disintegrate [45].

Figure: A Sangiovese ovary at 100× with two locules and four ovules with exfoliated integuments (they can partly be the result of microtome damage). B Rare ovary presenting three locules with two ovules each at 50×. C Exfoliated integuments partly due to microtome damage; embryo sac with central cell at 400×. D Egg cell and two synergids at 400×. E, F Embryo sac with initiation of the endosperm at 400× (E) and endosperm nuclei in detail at 1000× (F). G Embryo sac with an already visible zygote at 400×. H One of the first transversal sections of a Sangiovese ovule at 400× with an unusual shape of the inner integument. I Section of the middle part of the same ovule shown in H where the fertilization process is occurring: zygote just after fertilization by one sperm cell, two dark synergids together and a large central cell with two nuclei, including the nucleus of the central cell (2n) and the nucleus of the second sperm cell (n), 400×. J Ovule at 400× with an already visible zygote and synergids (zoom in the upper right corner). K Embryo sac after fertilization where the initiation of the nuclear endosperm with several micronuclei (black dotted area, zoom in the upper left corner) and two dying synergids (red dotted area, zoom in the bottom left corner) are visible, 400×. The upper synergid is at the beginning of the cell death process, while the one at the bottom is already dead. L Embryo sac with dead cells in the center (*), 400×. cc: central cell, ccn: central cell nucleus, ec: egg cell, en: endosperm, ii: inner integument, Nu: nucellus, oi: outer integument, s: synergids, scn: sperm cell nucleus, z: zygote.
Most of the Sangiovese ovaries inspected in this study presented the typical conformation described for the genus Vitis: bicarpelar, syncarpous (carpels fused into a unified compound gynoecium), and divided into two locules with two ovules each [27,46]. However, we also observed some ovaries of Sangiovese presenting three locules with two ovules each, which evidences that some flowers of Sangiovese have a tricarpelar, syncarpous ovary. This phenomenon had been previously observed in some ovaries of cultivated grapes [43,45]; also, in mature berries of the wild species Vitis labrusca, the reported number of hard, well-developed seeds amounted to six [47]. Two types of syncarpous ovary ontogeny have been described based on the timing of the carpel fusion event involved: congenital (carpels fused from the earliest emergence of their primordia) and post-congenital (fusion takes place during development). Syncarpy is congenital in 80% of angiosperms, and it has also been reported as the main pathway for tricarpelar ontogeny in a Chinese V. vinifera cultivar, Xiangfei, with a high occurrence of tricarpelar flowers [43]. In our study, all locules of the three-locule ovaries seemed alike in size and morphology and contained two potentially functional ovules each, so it is likely that, also in the case of the infrequent tricarpelar ovaries of Sangiovese, a congenital type is involved, although further studies should be performed for confirmation.

Conclusions

The histological sample preparation process we propose here can be used as a default procedure to obtain embedded ovaries or microscope slides that would require further procedures for examination (e.g. staining, in situ hybridization…). The safranin O-fast green FCF staining combination is an easy procedure and a valid staining for basic screenings of the presence of a normally and fully developed ovule and embryo sac in grapevine, which is therefore potentially functional. In addition, the proposed methodology could potentially be applied as a useful tool for studying the megagametophyte during flower development, as well as for comparative phenotyping of the embryo sac of seeded/seedless cultivars to get a clue about the underlying mechanisms of seedlessness, such as defects during female gametophyte development.

Materials

Covering medium for microscope slides and mounting medium for cover slips (Biomount BMT-100, Histo-line Laboratories).
Plant material

Flower buds were collected from young plants of the Sangiovese cultivar located in the grapevine germplasm collection of the Fondazione Edmund Mach (ITA362). Plant material was sampled from the same inflorescence at three phenological stages, E-L 15, E-L 17 and E-L 26 [32], in three different plants (Fig. 1).

Sample collection and fixation (timing: 12 h fixation)

Groups of six to ten flowers were collected at each phenological stage, keeping the pedicel. Immediately after sampling in the field, flowers were placed into a 50 mL falcon tube filled with FAA. In the laboratory, penetration of the fixative was facilitated by vacuum infiltration for 30 min, or until the flowers sank to the bottom of the tube. An oil vacuum pump was connected to a bell jar vented into a fume hood, and a vacuum pressure between −0.7 and −0.85 bar was applied. The FAA volume should be at least 50× the volume of the plant material to be fixed. Then, the plant material was fixed in FAA for at least 12 h at 4 °C. If the material was not processed immediately, it could be conserved for a long period in fixative until dissection and subsequent sample processing.

Dissection

Within 1 month of fixation, two flowers per inflorescence at each phenological stage were selected for dissection: one for longitudinal and another for transversal sectioning. The gynoecium of each flower was isolated by removing the calyptra and stamens (using a stereomicroscope and dissection forceps) and placed into a biopsy bag, which was then set into a biopsy cassette (previously labelled with the sample code). Cassettes, in groups of at most 12, were then immersed in a 250 mL container filled with FAA for storage and transported to the histology laboratory. The pedicel was conserved together with the gynoecium because it facilitated the orientation of the tissue sample when embedding it in paraffin.

Sample dehydration and clearing (timing: 3 h 15 min dehydration + 45 min clearing)

The biopsy cassettes containing the samples were transferred to a graded series of alcoholic solutions. Sample dehydration was carried out at room temperature using an increasing graded histoalcohol series, from 50% to absolute histoalcohol, followed by a clearing step consisting of two changes of xylene (Table 2). Each dehydration and clearing step was performed in 500 mL of solvent, a volume that could contain 12 biopsy cassettes at a time.

Sample infiltration and embedding (timing: (overnight + 5-6 h) infiltration + 30 min solidification)

Excess xylene from the previous clearing step was removed by squeezing the cassettes on absorbent paper. Then, they were immersed in a glass staining dish filled with liquid paraffin and kept at 64 °C in a lab oven for tissue infiltration. One change of paraffin was done after overnight incubation. Incubation in the new paraffin continued for another 5-6 h. An embedding station was used for embedding samples in 15 × 15 or 24 × 24 mm disposable molds, which were then set on a cryo console for fast paraffin solidification for at least 30 min. Two pistils per inflorescence per stage were separately embedded, one in longitudinal orientation and another in transversal orientation. Each sample was embedded in a paraffin block, yielding as many paraffin blocks as samples processed.

Sectioning with a rotary microtome

The paraffin block containing the sample was cooled on the cryo console before setting it on the rotary microtome. Sections or serial sections (ribbons) of 3-5 µm thickness were obtained.
Each section or section ribbon was placed on the water surface of the paraffin flotation thermostatic bath (set at 37 °C), to which one drop of glycerinated albumen had previously been added. In this way the section expands and flattens quickly. Subsequently, the section was affixed to a 26 × 76 mm microscope glass slide, which was then left on a slide drying bench (set about 5-10 °C below the melting point of the paraffin used) until completely dried.

Staining of the slides and mounting (timing: 1 h 30 min deparaffination + 3 h 10 min staining + 25 min mounting per rack of 24 slides)

A safranin O-fast green FCF staining combination was used. Alternate slides for each sectioned pistil were stained. Before staining, slides were deparaffinized by incubating them for 30 min at 64 °C in a lab oven; they were then placed immediately in xylene for 35 min (with one change of the solvent after 30 min), and the slides subsequently underwent a series of ethanol solutions of decreasing concentration down to 50% ethanol for tissue hydration (Table 3). Immediately after deparaffinization, the sections were stained. The basic schedule proposed by Jensen [4] for safranin-fast green staining was adopted with some modifications.

Observations at the light microscope and imaging processing

All the stained slides were observed through bright field microscopy. A digital camera (AxioCam ERc 5 s, ZEISS) was attached to the optical microscope and simultaneously connected to a computer. AxioVision Rel. 4.8 software (ZEISS) was used to examine the samples and observe them in "live" mode, to acquire digital images and to annotate them. ImageJ software [48] was employed for image post-processing, which consisted of enhancing image contrast and sharpness and assembling customized images through the "custom montage" plug-in.
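The ImageJ post-processing described above (contrast and sharpness enhancement) can be mirrored in a few lines of scripting. The snippet below is an illustrative scikit-image equivalent in Python, not the authors' actual workflow; the file name and the parameter values are assumptions.

```python
import numpy as np
from skimage import io, exposure, filters, util

# Load a micrograph exported from the camera software (hypothetical file name).
img = io.imread("ovule_EL17_section_012.tif")

# Contrast enhancement: stretch intensities between the 2nd and 98th percentile,
# roughly comparable to ImageJ's "Enhance Contrast" with saturated pixels.
p2, p98 = np.percentile(img, (2, 98))
img_contrast = exposure.rescale_intensity(img, in_range=(p2, p98))

# Sharpening via unsharp masking; radius and amount are illustrative choices.
img_sharp = filters.unsharp_mask(img_contrast, radius=2, amount=1.0)

# unsharp_mask returns a float image in [0, 1]; convert back to 8-bit for saving.
io.imsave("ovule_EL17_section_012_processed.tif", util.img_as_ubyte(img_sharp))
```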
Impact of the SARS-Cov2 Pandemic on Orthodontic Therapies: An Italian Experience of Teleorthodontics

Objective: To assess the possibility of checking patients at a distance according to the principles of teleorthodontics, to understand its possible usefulness in future routine activity, and to evaluate the impact the pandemic may have had on different types of orthodontic treatments. Material and Methods: One hundred orthodontic patients (57 F, 43 M, age 7-46) were checked during quarantine through video calls and photos sent by the patients following proper instructions. Three groups were distinguished based on the type of therapy: A, fixed appliances; B, removable appliances; C, clear aligners. Relevant events concerning dental and gingival health, the integrity of the appliances, symptoms related to the orthodontic therapy, and the overall progress of the treatments were recorded. Results: Groups A and B reported higher percentages of gingival inflammation (27 and 22%), dental plaque (16 and 13%) and deciduous tooth loss (8 and 16%). Bracket and attachment detachment were the most frequent events in groups A and C (22 and 23%). Pain and discomfort were reported in groups A and B (35 and 32%). Therapies continued to progress better in group C (51% improved dental alignment) and group B (31% improved malocclusions). Conclusion: Orthodontic treatment is safe and, during emergencies, allows checks to be postponed. Everyday mobile technology is useful in managing orthodontic patients unable to attend an in-person check. When their effectiveness equals that of other systems, treatments with clear aligners without attachments should be preferred in patients unavailable for regular checks.

Introduction

Telemedicine is the complex of technologies and tools that cover medical services, ranging from the formation of an opinion during consultation to diagnosis, prescription, treatment and monitoring of the patient, all carried out remotely via an Internet connection. Telemedicine makes it possible to break down the distances that exist between doctor and patient, also in the pediatric age [1,2]. This is a very important aspect in dentistry as well. Teledentistry is, in fact, a specialist extension of telemedicine that was first developed at the military level and subsequently found application mainly in the management of visits for patients living in rural areas of countries where dental practices can be many kilometers away, or in the management of patients whose movements should be limited to what is strictly necessary because of greater fragility, for example cancer patients [3]. In the context of teledentistry, its application to orthodontics, called teleorthodontics, can be considered a very useful tool to assess the patient remotely when they want to report a problem related to a fixed or removable appliance before the periodic in-person check, or when they have doubts about how to use removable orthodontic devices or some components of the fixed one (e.g., elastics) [4]. The SARS-CoV-2 pandemic required dentists to limit their activity to so-called 'non-deferrable' emergencies, to avoid the possible spread of the infection given the high risk of contagion in healthcare environments where procedures involve aerosol production [5].
Orthodontists, like all dentists, had no guidelines in the first few months and had to suspend their visits and regular checks on their patients, as well as the management of orthodontic emergencies that would normally lead to an early visit (among the most common: oral cavity lesions, detachment of brackets and bands, breakage of the orthodontic archwire, breakage of a removable appliance, and detachment of attachments from aligners or breakage of the aligners themselves). The lack of professional platforms developed specifically for teledentistry, and therefore also for teleorthodontics, meant that, to make up for the impossibility of in-person checks, orthodontists mainly used mobile messaging services, video calls and applications to carry out this new activity [6]. This study aims to assess the reliability of remote virtual orthodontic checks according to the principles of teleorthodontics, to understand their possible usefulness in future routine activity, and to evaluate the impact that the pandemic may have had on different types of orthodontic treatments. Sample One hundred patients (57 females and 43 males, age 7 to 46) under orthodontic therapy in our dental offices underwent their monthly orthodontic check (after an average period of 40 days since the last check-up in the dental office) using the video-call service of the WhatsApp mobile application (Facebook Inc., Menlo Park, USA) from their own mobile phone or their parents' (and in their presence) if under the age of 18. Clinical Instructions Patients were given the following instructions to make the orthodontic check easier, and explanatory guide photos (taken by one of the authors on himself) were sent before the virtual check (Figures 1 to 5): 1) Sit in a well-lit place in the house with the phone or computer (using the WhatsApp web service) placed frontally, so that the front camera of the phone (selfie mode) or PC frames the patient's mouth well (Figure 6). Help from a parent in the case of children, or from other family members in the case of adult subjects, is recommended. 2) If the current therapy uses removable orthodontic appliances, take a series of photographs of them and send them to the orthodontist before starting the video call. 3) Keep removable appliances or auxiliary tools of fixed orthodontic therapy (e.g., rubber bands) near the phone during the video call. 4) At the beginning of the video call, report any type of disturbance related to the teeth or oral mucosa that appeared in the previous days or weeks, and any problems related to the appliance, as in a traditional appointment. 5) During the video call, when asked by the orthodontist, pull the lips or use the fingers of both hands to retract the cheeks as much as possible, in the manner of the professional cheek retractors that patients know from orthodontic photos, to show the teeth. 6) Bite down so the back teeth touch. Directly facing the camera, smile wide, trying to show as many teeth as possible. Pull the lips and cheeks away from the teeth with the index and middle fingers to show more teeth. While keeping the back teeth in contact, smile and retract the cheek and lips with the fingers on the right side of the mouth to expose more of the teeth; a slight rotation of the head toward the side opposite the retracted cheek helps expose more teeth (Figures 1 to 3). The problems and issues found in each group during this virtually assisted orthodontic check service are collected in the Results with a descriptive statistical analysis.
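In practice, such a descriptive analysis amounts to tallying, for each therapy group, how many of the checked patients showed each event and expressing the tallies as percentages of the group size. A minimal sketch in Python (our illustration, not the authors' code: the example records and the sizes of Groups B and C are hypothetical, while Group A's size follows the Results) could look like this:

```python
# Illustrative tally of virtual-check findings per therapy group, reported
# as percentages of patients, as in a descriptive statistical analysis.
from collections import Counter

# Hypothetical records: (group, observed event) noted during a video call
records = [
    ("A", "gingival inflammation"), ("A", "bracket detachment"),
    ("B", "gingival inflammation"), ("C", "attachment detachment"),
    ("A", "dental plaque"), ("B", "deciduous tooth loss"),
]

# Patients checked per group; A = 37 per the Results, B and C are assumed
group_sizes = {"A": 37, "B": 32, "C": 31}

def event_percentages(records, group_sizes):
    """Return {(group, event): percentage of that group's patients}."""
    counts = Counter(records)
    return {
        (group, event): 100.0 * n / group_sizes[group]
        for (group, event), n in counts.items()
    }

for (group, event), pct in sorted(event_percentages(records, group_sizes).items()):
    print(f"Group {group}: {event} in {pct:.0f}% of patients")
```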
Based on the observations made during the virtual orthodontic checks, the findings for the three groups of patients were grouped into the following categories and subcategories (indicated in parentheses): • The health status of the teeth and gums (visible gum inflammation, dental plaque and suspected carious lesions, tooth fracture, loss of deciduous teeth, if applicable); • The integrity of the appliance (breakage of one or more parts; debonding of brackets, bands or attachments); • Symptoms related to orthodontic therapy (pain in the mucosa under pressure from the appliances, irritation from the orthodontic archwire or from protruding metallic ligatures, dental pain, joint and functional pain); • Overall progress of orthodontic therapy (compared with the previous orthodontic checks, better aligned teeth and/or reduced malocclusion; replacement of invisible aligners carried out without problems). Ethical Considerations Patients (or their parents) gave informed consent to the sharing of therapy data (including images and videos) for this study, knowing that the data would be treated and disclosed anonymously. The current regulation of the Ethics Committee of the Italian National Institute of Health (Istituto Superiore di Sanità) establishes that the ethical aspects requiring evaluation, approval and monitoring of trial protocols concern epidemiological, evaluative and medical-social projects in which personal data are not anonymized. In accordance with this regulation, the personal data of patients in this study were anonymized. Results The group of patients with fixed orthodontic appliances (Group A) comprised 37 subjects, of which 29 with fixed multibracket therapies (Tables 2 and 3). Descriptive statistics were applied to the counts in each subcategory for the three groups (Figures 7 to 10). The patients who overall had the best health condition of teeth and gums were those in Group C (81%). Cases of gum inflammation were more frequent in Groups A (27%) and B (22%) than in C (13%). In Groups A and B, patients in the mixed dentition reported cases of deciduous tooth loss (8% in A, 16% in B). The incidence of dental plaque was higher in Groups A and B (16% and 13% versus 6% in Group C) (Figure 7). The most frequent events leading to a loss of appliance integrity were bracket and attachment detachment (22 and 23%), loss of ligatures (14%) and hook breakage (9%) (Figure 8). Patients in Group C did not report any discomfort or pain associated with the ongoing orthodontic therapy, while in the other groups cases of pain and discomfort were noted (35% in A, 32% in B) (Figure 9). Group C was the group in which the dental situation improved most (51% versus 27% in A); Group B was the group with the greatest improvement in malocclusions (31%) compared with Group A (5%) and also where there was no worsening in dental alignment (worsening observed in 8% of Group A and 9% of Group B) (Figure 10). Figures 11 to 13 show the remote checking of patients. Discussion The advent of the Coronavirus pandemic shocked the entire world community, which had to change its daily life rapidly while the scientific community and the medical profession faced the emergency from a health point of view [7][8][9][10].
Dentists, even though aware of the characteristics of the virus and of the prevention measures necessary to contain the infection, had to close their dental offices in the absence of adequate personal protective equipment and of guidelines capable of ensuring safety for themselves, their coworkers and their patients, remaining available only for non-deferrable emergencies [11,12]. Orthodontics thus fell within the non-urgent dental branches, and monthly checks on patients were cancelled [13,14]. Telemedicine, from which teledentistry and by extension teleorthodontics originate, is not a particularly widespread diagnostic and monitoring strategy and is usually reserved for certain categories of patients and situations; in this scenario, however, it can be a winning tool for not losing sight of the patient under orthodontic therapy [15]. Patients enthusiastically embraced the possibility of being checked remotely, and although the quality of their photos (Figures 11 to 13) was not always excellent despite repeated attempts, the video-call checks were very useful for assessing oral health, scheduling future interventions, reassuring patients that no major harm had occurred and motivating them to continue orthodontic therapy. One of the most frequently encountered situations during orthodontic treatment is gingival inflammation linked to increased plaque retention from inadequate oral hygiene [16]. The latest scientific literature shows that invisible aligners are better tolerated from a periodontal point of view; thus, their use should be recommended in patients with recurrent gingivitis [17]. In this study, the presence of dental plaque and gum inflammation was recorded in several cases, but most of the subjects in all three groups showed acceptable oral conditions and none apparently had visible unknown carious lesions. Group A showed higher percentages of gum inflammation (27%) and dental plaque (16%) compared with the other two groups. This study confirms the better response of gingival tissues to clear aligners, which show lower percentages of gum inflammation (13%) and dental plaque (6%). Information on the oral hygiene techniques needed for proper oral health maintenance [18] was reiterated during the video check, and critical areas were highlighted for some patients. Orthodontic therapies using fixed devices are inevitably subject to inconveniences related to the breakage of their components, which often cause a discontinuity in therapy and discomfort for the patient [19]. Cases with fixed therapies (Group A) were quite stable (65%); there was no great advance in therapy (59% of patients remained stable), but no major deterioration either, although the detachment of brackets (22%) and loss of ligatures (14%) contributed to the onset of some discomfort and pain. Detachment of brackets and loss of ligatures are very frequent occurrences in orthodontic practice and are factors that make orthodontic therapy unstable and slow its progress, lengthening the overall duration of treatment [20]. Orthodontic therapies based on removable devices are also not exempt from accidents and complications related to breakage of the device itself [21]. Removable devices (Group B) are certainly very safe for children, given the insignificant number of accidents that compromised their use and the type of discomfort reported by patients.
Some removable devices are accepted with difficulty by younger patients, especially at the beginning of therapy, so checks with the orthodontist and the parents' motivational support are very important [22]. Pain sometimes accompanies the initial stages of orthodontic treatment with removable appliances, creating doubts and anxiety [23]. Pain can be the manifestation of oral injuries caused by the rubbing of the orthodontic device or its pressure on the oral mucosa [24], but it can also be the manifestation of more complex symptomatic situations that emerge with orthodontic treatment and would require further diagnostic work-up [25]. Assessing these aspects with remote checks is complicated. Pain during orthodontic treatment that arises at specific stages of device activation, and is thus somewhat predictable in advance, can be treated pharmacologically safely and effectively [26]. In our study, some patients from Groups A and B reported symptoms related to orthodontic therapy that are very common in practice, with overall low rates: 35% in Group A and 32% in Group B. In accordance with what other authors have already reported [27,28], in our study orthodontic therapies with invisible aligners (Group C) were much more comfortable than those with other removable or fixed devices, as there were no reports of discomfort or pain. Current literature [29,30] comparing invisible aligners with traditional orthodontic devices proposes a less enthusiastic view of clear aligners: although their effectiveness even in moderately complex cases is recognized, they do not appear superior in terms of long-term stability of the result and are less efficient in correcting antero-posterior and vertical malocclusions [31]. In our study, in accordance with what is stated in the literature [32], treatments with clear aligners progressed faster. Altogether, the progress of orthodontic therapy was better (51% improved dental alignment compared with 27% in Group A), as it was possible, in cases without intermediate stripping or addition/removal of attachments, to send the aligners to patients, who replaced the old aligners with new ones themselves. Cases using a system without attachments and with divots (Sorridi Aligners) allowed therapy to progress regularly without even the risk of attachment detachment; this event occurred in a not insignificant percentage (23%) of cases with aligner systems that use bonded attachments. From our experience in this research, we agree that the presence of attachments poses many problems in fitting the invisible aligner to the teeth [33] and, given the evolution of attachment-free aligner systems, their presence, decided during set-up, should be limited to what is necessary, since other solutions, such as divots, show great effectiveness in guiding orthodontic movement and make the treatment easier and smoother [34]. The data from this study allow us to define orthodontics as a safe specialist branch in which, despite a period of almost two months without an in-person check, no serious consequences were reported for patients with any type of orthodontic therapy in progress. Patients who reported problems such as ligature loss, detachment of brackets and other incidents during this period were told that these emergencies would be treated as soon as we could resume orthodontic activity.
Patients who reported ailments such as pain or discomfort were asked to suspend therapy for a few days if the device was removable, or were provided with orthodontic wax in cases of fixed therapy. It is difficult for teleorthodontic checks to replace the evaluations carried out through live objective clinical examinations. Furthermore, remotely viewing radiographs such as orthopantomograms and lateral cephalograms, or more detailed examinations such as CT DentaScan or CBCT (cone-beam computed tomography), makes initial assessments of the need for complex treatments such as extractions, or evaluations of the importance of certain radiographic findings, difficult [35,36]. Teleorthodontics definitely applies best to patients who have already started therapy and when limited to a reasonable period of time; once this global emergency is over, the creation of organized teleorthodontic systems would be a precious clinical advance. The standardization of protocols for its use would make this method very useful in the monitoring of patients unable to attend regular checks (disabled or sick patients, frequent travelers, those residing in rural areas far from the dental office). Conclusion The Coronavirus pandemic has highlighted the importance of having technologies to carry out visits and checks on orthodontic patients at a distance, and of low-impact emergency therapies. The possibility of dialoguing with the patient and viewing the intra-oral situation through mobile technology is a strategy that can be adopted safely and effectively in emergencies and in routine practice, for intermediate checks between in-office visits or when the patient is forced, for various reasons, to skip a monthly appointment. Even though the orthodontist cannot be replaced by do-it-yourself orthodontic therapy, where the clinical case allows and the situation requires it, treatments with invisible aligners requiring few or no chairside checks (no attachments, divot mechanics), combined with remote teleorthodontic control, can be considered the future evolution of post-pandemic orthodontics: zero aerosol emission, reduced social contact, near-null contagion risk, and agendas with fewer appointments. Financial Support None.
A Dual Protective Effect of Intestinal Remote Ischemic Conditioning in a Rat Model of Total Hepatic Ischemia The present study aimed to investigate the effects of intestinal remote ischemic preconditioning (iRIC) on ischemia-reperfusion injury (IRI) and gut barrier integrity in a rat model of total hepatic ischemia (THI). Male Wistar rats (n = 50; 250–300 g) were randomly allocated into two experimental groups: RIC/Control. Thirty minutes of THI was induced by clamping the hepatoduodenal ligament. iRIC was applied as 4-min of ischemia followed by 11-min of reperfusion by clamping the superior mesenteric artery. Animals were sacrificed at 1, 2, 6, 24 h post-reperfusion (n = 5/group/timepoint). RIC of the gut significantly improved microcirculation of the ileum and the liver. Tissue ATP levels were higher following iRIC (Liver: 1.34 ± 0.12 vs. 0.97 ± 0.20 μmol/g, p = 0.04) and hepatocellular injury was reduced significantly (ALT: 2409 ± 447 vs. 6613 ± 1117 IU/L, p = 0.003). Systemic and portal venous IL-6 and TNF-alpha levels were markedly lower following iRIC, demonstrating a reduced inflammatory response. iRIC led to a structural and functional preservation of the intestinal barrier. These results suggest that iRIC might confer potent protection against the detrimental effects of THI in rats by reducing IRI and systemic inflammatory responses and, at the same time, by mitigating the dramatic consequences of severe intestinal congestion and bacterial translocation. Introduction Ischemia-reperfusion injury (IRI) is inevitably encountered in various clinical scenarios in liver transplantation and oncological liver surgery, representing an important risk factor for inferior outcomes with increased morbidity and mortality, prolonged intensive care/in-hospital stay, and a significant increase of costs [1,2]. Following a landmark observation by Toledo-Pereyra et al., who demonstrated the presence of ischemia-reperfusion injury in transplanted livers of dogs in 1975, several methods have been introduced to reduce hepatic IRI in experimental and clinical settings [1,[3][4][5][6][7]. Remote ischemic conditioning (RIC) was introduced by Przyklenk et al. in 1993, showing for the first time that brief ischemia-reperfusion attacks, applied to a remote organ or tissue (e.g., limbs or intestine), can protect certain target organs against the deleterious effects of IRI by triggering various protective pathways [8,9]. Although the RIC technique may be a powerful tool against the effects of IRI in different experimental models and clinical scenarios, the exact underlying mechanisms and the definitive explanation of the phenomenon still remain unclear [10]. Although our group and others have intensively investigated the effects of RIC applied to the skeletal muscle in partial hepatic ischemia and liver transplantation [10][11][12][13][14][15][16], only very limited data are available on the effects of intestinal RIC (iRIC) in total hepatic ischemia (THI) [17,18]. Longer periods of THI without a porto-systemic shunt result not only in pronounced hepatocellular damage but also lead to severe intestinal congestion and injury of the small bowel mucosa, with a consequent loss of barrier function and bacterial translocation. Therefore, prolonged periods of THI of over 30-min are considered lethal in rats, leading to high mortality rates without intervention [17,19,20].
Due to this above-described dual injury (liver IRI and splanchnic congestion) induced by THI, we hypothesized that iRIC might confer protection via local conditioning effects on the small intestine, protecting against the dramatic consequences of the loss of barrier function and bacterial translocation, as well as by mitigating hepatic IRI, targeting the liver as a remote organ. This study was designed to investigate the effects of iRIC on hepatic and intestinal injury in a rat model of THI. Various parameters, known to be relevant in IRI and RIC, were used to assess intestinal and hepatic injury, systemic inflammation, and protective responses following THI and iRIC treatment. Animals All experiments were performed in accordance with institutional guidelines and German federal law regarding the protection of animals. The ethical proposal of the study was approved by the responsible authorities (Bezirksregierung Köln, Cologne, Germany, ID: 50.203.2BN45). All animals received humane care according to the principles of the "Guide for the Care and Use of Laboratory Animals" (8th Edition, NIH Publication, 2011, USA). The present study was designed, performed and reported according to the principles of the ARRIVE (Animal Research: Reporting of In Vivo Experiments) guidelines [21]. Male Wistar rats (RjHan:WI; Janvier Labs, Le Genest Saint Isle, France) were used (Σn = 50; body weight range: 250-300 g). The animals were housed under specific pathogen-free conditions according to the guidelines of the "Federation for Laboratory Animal Science Associations" (FELASA; www.felasa.eu), with a 12-h light/dark cycle in a temperature- and humidity-controlled barrier environment. Water and standard pellets for laboratory rats (Sniff GmbH, Soest, Germany) were provided ad libitum. Surgical Technique To avoid confounding effects of the circadian rhythm, all experiments were performed at the same time of day, following an acclimatization period of one week. Volatile anesthesia was performed using 2 vol% isoflurane (Forane; Abbott GmbH, Wiesbaden, Germany) during all surgical interventions. All surgical procedures were performed by the same surgeon. After sufficient anesthesia and analgesia (buprenorphine 0.03 mg/kg/24 h; Temgesic; EssexPharma, Haar, Germany), laparotomy was performed through a midline incision, the liver was mobilized by cutting its ligaments, and the superior mesenteric artery (SMA) was exposed. Remote ischemic conditioning treatment was applied as 2 cycles of 4-min of ischemia and 11-min of reperfusion (30-min in total) by clamping the SMA with an atraumatic microvascular clamp (Aesculap Yasargil FT260T; B.Braun), as described by our group previously on different occasions (Figure 1) [11,13,14,17]. Animals of the Control group underwent the exact same procedure without iRIC. Afterwards, THI was achieved by clamping the bilio-vascular pedicle of the whole liver with an atraumatic microvascular clip (FT260T), ensuring that both the main portal vein and the hepatic artery were included. After 30-min of THI, the clamp was removed to allow free reperfusion of the liver. No porto-systemic shunt was applied. At the end of the surgical procedure, the laparotomy (in the 1, 2, 6 and 24 h reperfusion groups) was closed in two layers using 4-0 continuous sutures (Vicryl 4-0; Ethicon). Postoperatively, the animals were placed in an intensive care unit cage (Vetario; Brinsea Products Ltd., North Somerset, UK) for a recovery period of one hour, providing warmed air (30-35 °C) and an oxygen supply.
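For clarity, the conditioning and ischemia timing described above can be laid out as a simple schedule. The sketch below (Python; our illustration, with names of our choosing) reproduces the 2 × (4-min ischemia + 11-min reperfusion) iRIC window, verifies that it totals 30 minutes, and appends the 30-min THI phase:

```python
# Illustrative timeline of the iRIC protocol described in the Methods:
# 2 cycles of 4-min SMA occlusion followed by 11-min reperfusion (30 min
# in total), then 30-min of total hepatic ischemia (THI).
CYCLES = 2
ISCHEMIA_MIN = 4
REPERFUSION_MIN = 11
THI_MIN = 30

def conditioning_timeline():
    """Yield (start_min, end_min, phase) tuples for the full protocol."""
    t = 0
    for _ in range(CYCLES):
        yield (t, t + ISCHEMIA_MIN, "SMA clamped (ischemia)")
        t += ISCHEMIA_MIN
        yield (t, t + REPERFUSION_MIN, "SMA released (reperfusion)")
        t += REPERFUSION_MIN
    yield (t, t + THI_MIN, "hepatoduodenal ligament clamped (THI)")

events = list(conditioning_timeline())
# Sanity check: the conditioning window sums to the stated 30 minutes
assert events[-1][0] == CYCLES * (ISCHEMIA_MIN + REPERFUSION_MIN) == 30
for start, end, phase in events:
    print(f"{start:3d}-{end:3d} min: {phase}")
```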
After surgery, antibiotic treatment and analgesia were achieved by subcutaneous injections of cefuroxime sodium (16 mg/kg/24 h) (Cefuroxim Fresenius; Fresenius Kabi Deutschland GmbH, Bad Homburg, Germany) and buprenorphine (0.03 mg/kg/24 h). During the first 4-h postoperatively, animals were observed continuously and then transferred back to their cages and normal environment. Following the observation periods defined by the protocol, samples were collected and the animals were subsequently sacrificed under deep isoflurane anesthesia (2-4 vol%) and buprenorphine (0.03 mg/kg) analgesia. Figure 1. Study flowchart of the surgical protocol. Animals were randomized into two experimental groups (Control, RIC). Following laparotomy and dissection of the superior mesenteric artery (SMA), intestinal remote ischemic conditioning (iRIC) was applied as cycles of 4-min of ischemia and 11-min of reperfusion via clamping of the SMA. Total hepatic ischemia was induced by clamping the hepatoduodenal ligament, including both the portal vein and the hepatic artery. Animals were sacrificed after 1, 2, 6, 24 h of reperfusion for sample collection and further analysis (n = 5/group/time point). Modified from Emontzpohl, Czigany et al. Shock. 2018 [15]. Abbreviations used: iRIC-Intestinal remote ischemic conditioning; SMA-Superior mesenteric artery.
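To make the allocation summarized in Figure 1 concrete: 50 animals were split into two groups of 25, with each group covering the four sacrifice timepoints at five animals each, plus five animals per group for in vivo imaging. A minimal randomization sketch in Python (our illustration, not the authors' procedure; the fixed seed is an arbitrary choice for reproducibility):

```python
# Illustrative randomized allocation matching the design: 2 groups x 25 rats,
# 5 rats per group per sacrifice timepoint, plus 5 per group for imaging.
import random

random.seed(1)
animals = list(range(1, 51))          # animal IDs 1..50
random.shuffle(animals)

groups = {"Control": animals[:25], "RIC": animals[25:]}
arms = ["1 h", "2 h", "6 h", "24 h", "imaging"]  # 5 animals per arm

allocation = {}
for group, ids in groups.items():
    random.shuffle(ids)
    for i, arm in enumerate(arms):
        allocation[(group, arm)] = ids[5 * i: 5 * (i + 1)]

for (group, arm), ids in allocation.items():
    assert len(ids) == 5              # 5 animals per group per arm
    print(f"{group:7s} {arm:8s}: animals {sorted(ids)}")
```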
Experimental Design For the present study, 50 surgical procedures were performed based on an a priori sample size estimation. Animals were randomly allocated into two experimental groups (n = 25 cases/group) (Figure 1). Control: after dissection of the SMA and a corresponding sham waiting period of 30-min, no remote conditioning was applied and THI was induced as described above. RIC: the remote ischemic conditioning protocol was applied as described above before THI. After 1, 2, 6, and 24-h of portal reperfusion, liver and ileum microcirculation were measured under anesthesia (n = 5 cases/group/time point). Systemic and portal venous blood from the vena cava and the portal vein, as well as tissue samples from the liver (right median lobe) and from the ileum (2 cm proximal to the ileocecal valve), were collected for analysis before the animals were sacrificed via exsanguination in deep anesthesia. Five animals per group were used for the in vivo imaging experiments. Figure 1 depicts a flowchart of the experimental protocol. During the survival period, all animals were visited at least every 12-h by an experienced veterinary technician blinded to the experimental design, and their clinical condition was evaluated using a humane-endpoints score sheet. The score sheet was based on the previous work of Morton and Griffiths and on the recommendations of our group for experimental studies in the field of liver research [21,22]. Liver and Ileum Perfusion We evaluated hepatic and ileal microcirculatory perfusion at multiple timepoints at sacrifice, before collecting blood and tissue samples. As a reference control (baseline), we measured hepatic and ileal circulation in 10 rats just after laparotomy. The hepatic microcirculation (flow) and tissue oxygen saturation (StO2) were evaluated using an O2C device and a corresponding surface probe (O2C-oxygen to see device, LF1 surface probe; LEA Medizintechnik GmbH, Giessen, Germany), as described by our team previously [11,15]. The output signal was transferred to an integrated computer equipped with software to yield a real-time display of the data and to record and analyze the blood flow pattern and values (LEA Medizintechnik GmbH, Giessen, Germany). Biochemical Analysis and Serum Cytokines Blood samples, collected from the inferior vena cava and from the portal vein by direct puncture with a 20-gauge needle at sacrifice, were centrifuged (room temperature, 10-min, 2500 rpm), and serum levels of alanine aminotransferase (ALT), aspartate aminotransferase (AST) and lactate dehydrogenase (LDH) were then measured using an automated analyzer and standard photometric procedures (Vitros 250; Johnson and Johnson, Neuss, Germany). Serum samples, stored at −80 °C, were used for interleukin-6 (IL-6) and tumor necrosis factor alpha (TNF-α) assessments using commercial rat enzyme-linked immunosorbent assay (ELISA) kits (R and D Systems, Minneapolis, MN, USA) according to the manufacturer's guidelines. Tissue Adenosine Triphosphate Concentration The apical part of the left lateral lobe was snap-frozen before sacrifice with metal tongs pre-cooled in liquid nitrogen. Subsequently, the intestinal specimens were harvested from an identical anatomical location without mesenteric tissue (a 1 cm segment of the ileum 2 cm proximal to the ileocecal valve) and were immediately snap-frozen in liquid nitrogen.
Liver and intestinal tissue samples were stored at −80 °C until the assessment of adenosine triphosphate (ATP) concentrations, as described in detail elsewhere [11,[23][24][25][26]. Briefly, the specimens were transferred into a vacuum freezer (Christ 2-16, Osterode, Germany) at −40 °C and a pressure below 0.001 atm for at least 2 weeks of freeze-drying. After lyophilization, the samples were homogenized and deproteinized, and tissue ATP concentrations were determined by standard enzymatic tests [24][25][26]. The results were calculated and expressed as micromoles per gram of dry weight. Transmission Electron Microscopy Following THI and 6-h of reperfusion, tissue samples of the ileum were immersed in a 2% glutaraldehyde and paraformaldehyde solution in phosphate-buffered saline. Following further sample preparation, as described before [27], the specimens were examined by electron microscopy (EM400 T/ST, Philips, Amsterdam, The Netherlands). Bioluminescent Assessment of Bacterial Translocation Before surgery, animals (n = 5/group) received a standard dose of bioluminescent Escherichia coli (E. coli), modified to contain the lux operon from P. luminescens, dissolved in phosphate-buffered saline (6 × 10¹¹ in 0.5 mL i.g.; orogastric administration using feeding needles). The lux genes code for both the bacterial luciferase and the substrate biosynthesis enzymes, which enable the strain to produce luciferase and its substrate simultaneously, so that no exogenous luciferin substrate was required [28]. Following surgery and 6 h of reperfusion, the animals were re-anaesthetised and in vivo imaging was performed using the IVIS 100 System (Caliper Life Sciences Inc., Hopkinton, MA, USA), as described before [29][30][31]. Images were captured using the corresponding software provided by the manufacturer (Living Image Software 2.0, Caliper Life Sciences Inc.). The imaging system consists of a cooled charge-coupled-device camera mounted on a light-tight chamber, a camera controller and a cryogenic refrigeration unit, connected to a computer system. Following in vivo imaging, the animals were sacrificed in deep anesthesia; the lungs, spleen, liver and mesenteric tissue including lymph nodes were removed, and the presence of bioluminescence was directly analyzed to assess the translocation of labelled E. coli to distant organs following IRI. Statistical Analysis Results are expressed as mean ± standard error of the mean (s.e.m.) for each group. Two-way analysis of variance (ANOVA) with the Bonferroni post-hoc test was performed to analyze changes in time-dependent parameters and between-group differences at each time point. The Mann-Whitney U test was applied to test differences between two groups. Differences were considered significant when p < 0.05. Data plotting and analysis were performed using the GraphPad Prism 8 (GraphPad Software Inc., San Diego, CA, USA) software package. Liver and Ileum Microcirculation The preischemic baseline hepatic and intestinal microcirculatory flow did not differ markedly between the two experimental groups (Figure 2). During the period of iRIC, the flow values of all animals of the RIC group showed significant fluctuations, which were more pronounced in the ileal flow (flow 10-140%). During the ischemic period, no significant differences were detected between the groups. After liver exclusion and the induction of THI, the flow values of the liver and ileum dropped dramatically. A relatively rapid recovery of the flow was observed after reperfusion in the RIC group.
Over the early period of liver reperfusion, the flow values of both the liver and the ileum were significantly higher compared to the Control group (Figure 2). Meanwhile, the iRIC-treated animals reached flow values comparable with baseline as early as after 2-h. The ileal and hepatic flow of the Control group recovered only after 24-h, leading to the loss of significance between the two groups at this time point (Figure 2). The most prominent between-group differences were registered after 6 h of reperfusion (Ileum: RIC vs. Control at 6 h, 132 ± 14 vs. 52 ± 14%, p = 0.001; Liver: RIC vs. Control at 6 h, 102 ± 5 vs. 47 ± 13%, p = 0.001; Figure 2). Tissue StO2 of the liver and ileum showed similar characteristic features; however, StO2 values did not follow the positive alterations of flow observed over the course of the reperfusion period in the RIC group (Figure 2). Accordingly, no significant differences were found between the two experimental groups in terms of StO2 (Figure 2). Figure 2. Liver and ileum microcirculation. (A) Liver perfusion measured by the O2C device remained higher in the RIC group compared to Control throughout the reperfusion period. (C) Ileum perfusion remained higher in the RIC group compared to Control throughout the reperfusion period; however, the significant difference between the two experimental groups disappeared after 24-h of reperfusion. (B,D) Partially similar characteristic features were observed in terms of the tissue oxygen saturation of the liver and ileum over the course of the observation period; however, StO2 values did not directly follow the positive alterations of the flow observed in the RIC group. Accordingly, no significant between-group differences were detected. (mean ± s.e.m, * p < 0.05, ** p < 0.01, *** p < 0.001, RIC vs. Control, two-way ANOVA and Bonferroni post-hoc test, n = 5/group/time point). Baseline was determined as the mean flow measured in 10 healthy animals right after laparotomy. Abbreviations used: iRIC-Intestinal remote ischemic conditioning; THI-Total hepatic ischemia; StO2-Oxygen saturation. Biochemical Analysis and Serum Cytokines Significant cellular injury was characterized by markedly increased serum transaminase and LDH levels, showing a peak after 2-h of reperfusion (Figure 3). As is characteristic following 30-min of THI in rats, AST, ALT and LDH increased dramatically in the Control group, while the application of iRIC led to significantly reduced cellular injury (AST: RIC vs. Control at 2 h, 3217 ± 559 vs. 6145 ± 1025 IU/L, p = 0.004; ALT: RIC vs. Control at 2 h, 2409 ± 447 vs. 6613 ± 1117 IU/L, p = 0.003; LDH: RIC vs. Control at 2 h, 32,716 ± 1340 vs. 50,578 ± 10,877 IU/L, p = 0.03; Figure 3). After the maximal damage following 2-h of reperfusion, a reduction of serum enzyme levels was observed in both groups during the later phase of reperfusion (Figure 3). To be able to differentiate between total serum cytokine concentrations and gut-related cytokine release, serum levels of TNFα and IL-6 were measured in both portal and systemic samples. No major differences were observed between the cytokine levels in these separate samples (Figure 4, A vs. C and B vs. D). During the early phase following THI and liver reperfusion, serum levels of TNFα and IL-6 increased markedly in the animals of the Control group.
In contrast to the Control, iRIC resulted in a reduction of TNFα and IL-6 in both portal and systemic blood, leading to a strongly significant difference between the treated and non-treated groups after 1 h of reperfusion (Systemic TNFα: RIC vs. Control at 1 h, 43.7 ± 3.4 vs. 78.7 ± 8.3 pg/mL, p = 0.001; Systemic IL-6: RIC vs. Control at 1 h, 177.6 ± 20.9 vs. 748.7 ± 333.5 pg/mL, p = 0.03; Figure 4). However, as was observed for the serum transaminases, the between-group differences disappeared after 24-h of reperfusion (Figure 4). Tissue Adenosine Triphosphate Concentration Similar characteristic features were observed for the liver and intestinal tissue ATP levels in the RIC and Control groups (Figure 5). After a reduction of ATP levels following 1-h of reperfusion, a substantial recovery of the tissue energy reserves was observed after 3-h. However, the RIC group showed better preserved ATP levels throughout the experiments. There was a significant difference between the RIC and Control groups after 1-h of reperfusion (Liver: RIC vs. Control at 1 h, 1.34 ± 0.12 vs. 0.97 ± 0.20 µmol/g dry weight, p = 0.04; Ileum: RIC vs. Control at 1 h, 1.97 ± 0.10 vs. 0.92 ± 0.23 µmol/g dry weight, p = 0.02; Figure 5). Despite some graphical differences, no significant disparity was found between the RIC and Control groups after THI and 3-h of reperfusion (Figure 5).
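These between-group comparisons follow the statistical plan described in the Methods: two-way ANOVA with Bonferroni post-hoc across time points, and the Mann-Whitney U test for pairwise comparisons, with significance at p < 0.05. As a minimal sketch of the within-time-point test (Python with SciPy; the five values per group are made-up placeholders, not the study's measurements):

```python
# Minimal sketch of a within-timepoint RIC vs. Control comparison using the
# Mann-Whitney U test, as named in the Methods. Values are hypothetical.
from scipy.stats import mannwhitneyu

ric_atp_1h = [1.2, 1.3, 1.4, 1.5, 1.3]       # hypothetical µmol/g dry weight
control_atp_1h = [0.8, 1.0, 0.9, 1.1, 1.0]   # hypothetical µmol/g dry weight

stat, p = mannwhitneyu(ric_atp_1h, control_atp_1h, alternative="two-sided")
print(f"Mann-Whitney U = {stat:.1f}, p = {p:.3f}",
      "(significant at p < 0.05)" if p < 0.05 else "(not significant)")
```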
Intestinal Barrier and Bacterial Translocation Ultrastructural analysis of the epithelial layer of the ileum mucosa in the animals of the Control group showed disrupted microvilli, partially showing signs of vacuolization, as well as disruption of the terminal web and swollen mitochondria with partial disintegration of their cristae and membrane (Figure 6). Following the application of iRIC, a much better preserved cellular ultrastructure, with almost regular microvilli and subcellular organelles, was observed in the samples of the RIC group (Figure 6). Correlating well with the ultrastructural changes assessed by electron microscopy, intestinal barrier function was better preserved following iRIC and THI. Animals of the Control group receiving bioluminescent E. coli before IRI had markedly increased extra-intestinal luciferase activity after 6-h of reperfusion, especially in the liver and the lungs as well as in the mesenteric lymph nodes (Figure 6). No relevant extra-intestinal activity was observed in the animals of the RIC group (Figure 6), suggesting no or only minor IRI-related bacterial translocation.
Figure 6. Ultrastructural and functional assessment of intestinal barrier integrity. Following 30-min of total hepatic ischemia and 6-h of reperfusion, electron microscopy of the ileum showed disruption of the microvilli, which partially showed signs of vacuolization (arrows), as well as disintegration of the terminal web (tw) and swollen mitochondria with partial disintegration of their membrane and cristae (asterisk) in the animals of the Control group (A). Following iRIC, a better preserved cellular ultrastructure with almost regular microvilli and subcellular structures was observed in the samples of the RIC group (B). To assess the functional integrity of the intestinal barrier, animals were treated with luciferase-labelled Escherichia coli before THI (C). Following 30-min of ischemia and 6-h of reperfusion, bacterial translocation was assessed using an in vivo imaging system; after retrieval of the organs, these were assessed for luciferase intensity. Intestinal barrier function was better preserved following iRIC and THI. In the Control group, animals receiving luciferase-labelled E. coli before IRI had markedly increased extra-intestinal luciferase activity after 6-h of reperfusion, especially in the liver and the lungs as well as in the mesenteric lymph nodes. No relevant extra-intestinal activity was observed in the animals of the RIC group. Abbreviations used: OM-original magnification; RIC-remote ischemic conditioning; THI-total hepatic ischemia; IVIS-in vivo imaging system; MLN-mesenteric lymph node. Discussion The present study is one of the first and most comprehensive reports showing a dual protective response triggered by iRIC in a rat model of THI. Our results demonstrate not only the prominent effects of iRIC in mitigating remote hepatocellular damage but also a reduction of the local damage of the intestinal barrier induced by severe congestion and functional ischemia in a well-established rodent model of THI. Ultimately, this complex dual protective response triggered by iRIC was manifested in reduced systemic inflammation. The protective effects of RIC have been reported in IRI scenarios for a diversity of tissues and organs in various experimental and clinical studies over the years [10,11,[14][15][16][32][33][34][35]. However, only scarce evidence is available from well-designed and comprehensive experimental studies demonstrating the effects of intestinal RIC in total or partial hepatic ischemia (Table 1).
A previous study of our group showed that iRIC can exert potent protection and reduce hepatocellular and intestinal damage following THI via an HO-1-mediated pathway in the second window of protection (using a 48-h recovery period between iRIC and THI) [17]. Here, we aimed to assess the acute or so-called "first window" effects of iRIC on hepatocellular injury and intestinal barrier integrity in THI. A literature search (PubMed; search date: 10 September 2019; search terms: ischemic preconditioning OR ischemic conditioning AND intestine AND liver) yielded two relevant studies on iRIC in liver ischemia, showing the very limited evidence available on the effects of iRIC in THI. To the best of our knowledge, the present study is one of the most comprehensive experimental works so far on iRIC and its effects in liver IRI. Abbreviations used in Table 1: THI-Total hepatic ischemia; RIC-Remote ischemic conditioning; IRI-Ischemia reperfusion injury; SMA-Superior mesenteric artery. As the hepatic vascular bed is located just downstream of the small intestine, collecting the portal blood, complete inflow occlusion of the liver without a porto-systemic shunt results not only in ischemic liver injury but also in severe congestion of the splanchnic organs, resulting in functional ischemia of the gut [17]. In this model, the combination of hepatic IRI and severe intestinal congestion, with structural and functional damage of the intestinal barrier and consequent bacterial translocation, leads to systemic proinflammatory activation [17,36]. We hypothesize that the benefit of iRIC in the scenario of THI, compared to the more widely used and reported RIC of the limbs, lies in the combination of a "local conditioning" of the intestine, which may protect against the detrimental intestinal congestion, and a "remote conditioning" effect targeting the liver. Impairment of tissue microcirculation is one of the key elements in IRI [10,11,37]. A combination of different mechanisms contributes to post-ischemic microcirculatory failure, such as endothelial neutrophil stasis, cell swelling, sludge formation and micro-thromboses in small capillaries and liver sinusoids [11,37]. In the present study, we registered hepatic and intestinal microcirculation using the laser-Doppler-based O2C system. Remote conditioning resulted in better preserved microcirculation of the liver and the gut, especially during the early phase of liver reperfusion; however, the significant difference at the level of the microcirculation disappeared after 24-h of reperfusion. Microcirculatory failure is not only a consequence of IRI but also actively contributes to the paradoxical damage documented as reperfusion injury by maintaining impaired perfusion at the level of the small capillaries and sinusoids and aggravating tissue injury [37]. Positive effects of RIC on target organ circulation have been confirmed in various IRI models [11,[13][14][15]38,39]. In previous reports, we demonstrated that RIC applied on the infrarenal aorta can potently improve graft macro- and micro-circulation, including post-reperfusion microcirculatory flow and portal venous flow, in a rat model of orthotopic liver transplantation as well as in 70% partial liver ischemia and liver resection [11,[13][14][15]. A sublethal period of 30-min of THI in rats without a porto-systemic shunt leads to severe hepatocellular damage, characterized by a prominent elevation of serum transaminases [40].
In our study, the peak of hepatocellular injury was observed 2-h after liver reperfusion, with strongly increased AST, ALT and LDH levels. The potent ability of iRIC to mitigate hepatocellular injury was characterized by significantly reduced transaminases and LDH in the RIC group compared to the non-treated animals. These findings are in line with our previous reports on limb RIC with partial liver ischemia [13,14] or liver transplantation [11,15], and correlate with our findings on iRIC and THI showing similar effects in the second window of protection after 48-h [17]. A rapid drop of tissue ATP content during THI results in disturbed active ion transport mechanisms, contributing to cellular swelling, microcirculatory failure and cell death [11,37,41]. A "reconditioning" effect, leading to better preserved or increased ATP production, is associated with improved mitochondrial and cellular integrity following RIC and ischemia (via the prevention of mitochondrial permeability transition pore opening, less mitochondrial oxidative stress and a reduction of calcium overload) [42,43]. Therefore, increased levels of tissue ATP may be interpreted as a global manifestation of better preserved mitochondrial and cellular functions. Previous reports have shown the beneficial effects of different ischemic conditioning approaches and pharmacological agents on tissue energetic status and ATP levels [11,44,45]. Our data show favorable alterations in hepatic and intestinal tissue ATP levels during reperfusion with iRIC: after 1-h of reperfusion, significantly higher ATP levels were found in the RIC group vs. Control. There is a plethora of experimental and clinical evidence showing that an imbalance in systemic pro- and anti-inflammatory processes likewise belongs to the main events in the pathophysiology of liver IRI [11,46]. An inflammatory cytokine release associated with intestinal congestion and ischemia, as well as bacterial translocation, further aggravates systemic and remote organ damage, leading to inferior outcomes [47,48]. In previous reports, we showed that RIC of the limbs results in the up- and downregulation of anti- and pro-inflammatory cytokines (including interleukin-10, monocyte chemoattractant protein-1, macrophage migration inhibitory factor and TNFα) in models of orthotopic liver transplantation or partial liver ischemia [11,14,15]. In line with these previous findings, our present results show significantly reduced levels of TNFα and IL-6 in portal and systemic blood, suggesting greatly reduced systemic inflammation following the application of iRIC in this severe model of THI. The comparable levels of both cytokines observed in the portal and systemic circulation indicate that the gut is the major source of these cytokines [36]. However, a significant elevation in the serum levels of inflammatory cytokines is not only a sign of increased tissue damage and systemic inflammation; these cytokines also actively participate in aggravating the local tissue injury induced by THI [36]. The increased translocation of bacterial products to the gut-associated lymphatic tissue following hypoxia/ischemia of the intestinal wall triggers an orchestrated release of pro- and anti-inflammatory cytokines [36,49,50].
Among these inflammatory mediators, TNFα has received intense scientific interest because of its role in increasing tight-junction (TJ) permeability in the intestine, not only through a decreased expression of TJ proteins but also via the activation of myosin light chain kinases, leading to a disruption of barrier function [36,49]. Therefore, enhanced TNFα levels appear to play a central role in promoting pathological bacterial translocation [36]. Other cytokines, such as IL-6 and interferon-gamma, have been shown to increase intestinal epithelial permeability and to induce translocation of E. coli across epithelial cells [36,51]. Accordingly, in our present model we observed relevant ultrastructural damage of the epithelial cells of the ileum following 30-min of THI and 6-h of reperfusion, including disruption of the microvilli, disintegration of the terminal web structure and swollen mitochondria with partial disintegration. Following iRIC, these pathological alterations were largely mitigated, with an overall better preserved cellular ultrastructure. These structural observations correlated well with the observed damage of the functional barrier. Following the administration of bioluminescent E. coli, significant extra-intestinal accumulation was observed in the animals of the Control group, while no relevant luciferase activity was detected by the IVIS system in the animals of the RIC group, suggesting an active reduction of bacterial translocation by iRIC. Only very limited data are available to date on the effects of iRIC on barrier integrity and bacterial translocation following liver IRI (Table 1). In our previous report by Kageyama et al., we showed reduced mucosal damage in the second window of protection following iRIC, assessed by the Park score on conventional histological samples [17]. This was associated with an increased expression of heme oxygenase-1 (HO-1) in the intestinal mucosa and in the liver, suggesting a potential mechanistic role of HO-1 behind the effects of late iRIC [17]. As the present study did not aim to compare different iRIC protocols concerning the length and number of cycles, it is not possible to draw a conclusion as to whether there are alternative iRIC protocols potentially triggering an even superior protective response. Despite the many reports attempting to find the optimal RIC protocol, a widely accepted guideline is still not within reach and the selection of protocols is mostly based on empirical choices [10,52,53]. Due to the lack of an optimal RIC protocol, some authors have also expressed concern about a potential "hyperconditioning" phenomenon, in which an excessive or repetitive stimulus may lead to a deleterious effect [52,54]. While the skeletal muscle of the extremities (where RIC is conventionally applied) has a high ischemic tolerance, the small bowel mucosa is notoriously sensitive to hypoxia and ischemia [10,11,14,15]. Therefore, in the present study we aimed to use a less intense protocol with shorter total ischemia times (8 min in total) to minimize the ischemic insult to the small bowel while maximizing the potential benefits. The findings of our study have to be interpreted in the light of certain limitations. This study is one of the first reports on iRIC in THI (Table 1). Therefore, in this preliminary setting we were not able to perform a deep exploration of the subcellular mechanisms behind the observed protective effects of iRIC; thus, our findings remain partially descriptive.
The further inclusion of an "iRIC only" group (an experimental group receiving iRIC treatment without THI) and a sham laparotomy group may have strengthened the statistical analysis and the conclusions of the study, resulting in a more solid overall message. Notwithstanding the aforementioned limitations, the present study shows some novel findings on the dual way of protection conferred by iRIC in a well-established model of THI without porto-systemic shunt. iRIC appeared to be a feasible technique which could potently reduce hepatocellular injury, improve intestinal and hepatic microcirculation, positively influence inflammatory cytokine and tissue ATP levels, mitigate the consequential disintegration of the intestinal barrier, and prevent bacterial translocation (Figure 7). A more detailed exploration of the mechanistic steps behind these observations and identification of the connecting pathways between the gut and the liver following iRIC and THI would be of interest for future basic and translational research. Figure 7. Summary of the mechanism and effects of iRIC observed in the setting of THI in rats. The flowchart depicts the observed and possible protective effects and mechanisms of action of iRIC following THI in rats. Briefly, iRIC applied as short periods of ischemia-reperfusion before THI at a remote organ (intestine) results in the transfer of protective signals via different humoral and/or neural and partially unknown connective mechanisms to the target organ (liver); however, it seemingly also confers a local protection against the detrimental effects of intestinal congestion and functional ischemia of the gut, by preserving the integrity of the intestinal tissue and barrier function and resulting in less dramatic systemic effects of THI. Adapted from Emontzpohl, Czigany et al. Shock. 2018 [15]. Abbreviations used: iRIC, Intestinal remote ischemic conditioning; THI, Total hepatic ischemia. Funding: The authors declare funding in part from the START program of the Faculty of Medicine, RWTH Aachen University (#23/19 to Z.C.) and from the B. Braun Foundation, Melsungen, Germany (BBST-S-17-00240 to Z.C.), without involvement of the funders in study design, data collection, data analysis, manuscript preparation or the decision to publish.
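As a side note on the conditioning protocol discussed above, the sketch below builds an iRIC-style timing schedule. The paper states only that total intestinal ischemia was 8 min; the particular split into 4 cycles of 2 min occlusion plus 2 min reperfusion is a hypothetical example for illustration, not the authors' actual protocol.

```python
# Minimal sketch of an iRIC-style conditioning schedule, for illustration only.
from dataclasses import dataclass

@dataclass
class RICCycle:
    ischemia_min: float     # clamp applied (e.g., superior mesenteric artery occlusion)
    reperfusion_min: float  # clamp released

def build_schedule(n_cycles: int, ischemia_min: float, reperfusion_min: float):
    """Return the cycle list and the cumulative ischemia time in minutes."""
    cycles = [RICCycle(ischemia_min, reperfusion_min) for _ in range(n_cycles)]
    total_ischemia = sum(c.ischemia_min for c in cycles)
    return cycles, total_ischemia

# Hypothetical split: 4 x (2 min ischemia + 2 min reperfusion).
cycles, total = build_schedule(n_cycles=4, ischemia_min=2.0, reperfusion_min=2.0)
assert total == 8.0  # matches the 8 min total ischemia reported in the text
for i, c in enumerate(cycles, 1):
    print(f"cycle {i}: occlude {c.ischemia_min} min, reperfuse {c.reperfusion_min} min")
```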
Hyperon forward spin polarizability gamma0
We present the results of a systematic leading-order calculation of hyperon Compton scattering and extract the forward spin polarizability gamma0 of hyperons within the framework of SU(3) heavy baryon chiral perturbation theory (HBChPT). The results obtained for gamma0 in the case of nucleons agree with the known results of SU(2) HBChPT when kaon loops are not considered.
I. INTRODUCTION
Compton scattering is a source of valuable information about baryons since it offers access to some of the more subtle aspects of baryon structure such as polarizabilities [1]-[5], which parameterize the response of the target to an external quasi-static electromagnetic field. For the case of unpolarized nucleons the spin-independent (SI) Compton amplitude is given by

Amp = ε⃗′* · ε⃗ ( −Q_N² e²/m_N + 4π α_N ω ω′ ) + 4π β_N (ε⃗′* × q⃗′) · (ε⃗ × q⃗),   (1)

where N = p, n; Q_N, m_N represent the nucleon charge and mass, while ε_µ = (0, ε⃗), ε*_µ = (0, ε⃗*) and q_µ = (ω, q⃗), q′_µ = (ω′, q⃗′) specify the polarization vectors and four-momenta of the initial and final photons, respectively. At this order the Compton amplitude is defined in terms of two polarizabilities, electric (α_N) and magnetic (β_N), which measure the response of the nucleon to applied quasi-static electric and magnetic fields. By measurement of the differential cross section one can extract α_N and β_N, provided the energy is large enough that the second and third terms in Eq. (1) contribute significantly with respect to the leading Thomson contribution, but not so large that higher-order effects become significant. This extraction has been achieved in the energy range 50 MeV < ω < 100 MeV; for a recent review see, e.g., Refs. [6-8]. According to the Particle Data Group [9], the current experimental numbers for α_N and β_N are

α_p = (12.0 ± 0.6) × 10⁻⁴ fm³,  β_p = (1.9 ± 0.5) × 10⁻⁴ fm³,
α_n = (11.6 ± 1.5) × 10⁻⁴ fm³,  β_n = (3.7 ± 2.0) × 10⁻⁴ fm³.   (2)

The nucleon polarizabilities have been studied via a number of theoretical approaches based on dispersion relations [3,10-15], phenomenological Lagrangians [16-20], constituent quark models [21-23], chiral-soliton-type models [24-28], and lattice QCD using the external electromagnetic field method in the quenched [29,30] and unquenched approximation [31]. Additional insights into the polarizabilities have come from chiral perturbation theory (ChPT), an effective theory of the low-energy strong interaction [32,33], specifically from heavy baryon chiral perturbation theory (HBChPT), which is an extension of ChPT that includes the nucleon [34,35]. The first such calculations of nucleon polarizabilities within ChPT were carried out in [36,37]. However, HBChPT has an important deficiency in that the chiral perturbative series fails to converge in part of the low-energy region. The problem is generated by a set of higher-order graphs involving insertions in nucleon lines. It has been shown that the infrared singularities of the various one-loop graphs occurring in the chiral perturbation series can be extracted in a relativistically invariant fashion. This procedure is known as infrared dimensional regularization (IDR) [38]. IDR respects the constraints of chiral symmetry as expressed through the chiral Ward identities.
The manifestly Lorentz-invariant form of baryon chiral perturbation theory (BChPT) with the IDR prescription has been successfully applied to calculate α_N and β_N, and the results for these polarizabilities differ substantially from the corresponding HBChPT numbers [39,40]. In addition, HBChPT has been employed to analyze virtual Compton scattering processes since, as an effective field theory, it satisfies the structures of gauge invariance, Lorentz invariance and crossing symmetry [41]. New predictions for generalized polarizabilities have been made using HBChPT at O(p⁴) (NLO) [42-44] and, using ChPT, Compton scattering from the deuteron has been computed to order O(p⁴) [45]. However, the situation with regard to scattering from polarized targets is less satisfactory, in part because few direct measurements of polarized Compton scattering have been attempted. The spin-dependent (SD) piece of the forward scattering amplitude for real photons of energy ω and momentum q⃗ is [4,46-49]

T_SD(ω) = i ω f₂(ω) σ⃗ · (ε⃗′* × ε⃗).

From the theoretical perspective there is particular interest in the low-energy limit of the amplitude,

f₂(ω) = f₂(0) + γ₀ ω² + O(ω⁴),

where γ₀ is the forward spin polarizability, which is related to the photo-absorption cross sections for parallel (σ₊) and antiparallel (σ₋) photon and target helicities via

γ₀ = (1/4π²) ∫_W^∞ [σ₋(ω) − σ₊(ω)] / ω³ dω,

where W = M_π + M_π²/(2m_N) is the threshold energy for an associated neutral pion in the intermediate state. The Low-Gell-Mann-Goldberger low-energy theorem states that

f₂(0) = −α κ_N² / (2 m_N²),

where α = e²/(4π) = 1/137.036 is the fine-structure constant and κ_N is the nucleon anomalous magnetic moment [50]. The forward spin polarizability γ₀^N has been calculated to O(p³) (LO) [51] in the framework of HBChPT, yielding, at lowest order in the chiral expansion,

γ₀ = α g_A² / (24 π² F_π² M_π²) ≈ 4.5,

both for protons and neutrons, where the entire contribution comes from πN loops. (Hereafter we shall use units of 10⁻⁴ fm⁴ for the spin polarizability.) This LO calculation of the spin polarizability is a prediction, since any low-energy constants associated with the polarizability enter only at next-to-leading order (NLO). At LO the polarizability is given entirely by the loop contribution in terms of well-known parameters such as the nucleon and pion masses and the pion-nucleon coupling constant (g_πNN). The effect of including the ∆(1236) enters in counterterms at fifth order in standard HBChPT, and has been estimated to be so large as to change the sign. The forward nucleon spin polarizability γ₀ has been computed in an extension of HBChPT with an explicit ∆ in [47]. This calculation has also been carried out to NLO in the framework of HBChPT [52-55]. The contribution to γ₀^N up to and including NLO contributions is found to be γ₀^{p/n} = 4.5 − (6.9 ± 1.5); the NLO contributions are large. The corresponding relativistic chiral one-loop calculation of the forward spin polarizability was carried out by Bernard et al. [51], and the computed value of γ₀^N was found to be smaller than the LO result of HBChPT. The generalized γ₀^N has been calculated in the Lorentz-invariant formulation of BChPT to NLO, which demonstrates a large NNLO contribution [56,57]. In [56] the quoted values are γ₀^p = 4.64 and γ₀^n = 1.82; hence the chiral expansion does not seem to converge, which is attributed to the Born terms. Also, as has been shown in Ref. [56], inclusion of the Born terms up to fourth order is not sufficient to obtain convergence, and thus a complete fifth-order calculation seems mandatory.
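As a quick numerical cross-check of the LO value γ₀ ≈ 4.5 quoted above, the following sketch evaluates the closed-form LO SU(2) expression γ₀ = α g_A²/(24π² F_π² M_π²); the exact normalization of this formula is taken from the standard literature and should be treated as an assumption here, but with the usual input values it reproduces the quoted number.

```python
import math

# Inputs in MeV (natural units, hbar = c = 1); hbar*c = 197.327 MeV*fm.
alpha = 1 / 137.036   # fine-structure constant
g_A   = 1.26          # nucleon axial charge (D + F in the text)
F_pi  = 92.0          # pion decay constant, MeV
M_pi  = 139.57        # charged pion mass, MeV
hbarc = 197.327       # MeV*fm

# LO SU(2) HBChPT forward spin polarizability (pi-N loops only), in MeV^-4.
gamma0_mev = alpha * g_A**2 / (24 * math.pi**2 * F_pi**2 * M_pi**2)

# Convert MeV^-4 -> fm^4 and express in the paper's units of 1e-4 fm^4.
gamma0_fm4 = gamma0_mev * hbarc**4
print(f"gamma0 = {gamma0_fm4 / 1e-4:.2f} x 10^-4 fm^4")  # ~4.5, as quoted
```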
However, when only the first two terms of the chiral expansion are considered (O(µ⁻¹)), the results reproduce the NLO HBChPT results. Electroproduction data have been used to extract γ₀^N using the sum rule given above, in particular in Ref. [58]. The most recent results are γ₀^p = −0.90 ± 0.08 ± 0.11 [62]. Other results based on different photomeson analyses are γ₀^p = −0.67 (HDT), −0.65 (MAID), −0.86 (SAID) and −0.76 (DMT). Hence it is safe to say that, although considerable progress has been made in understanding γ₀ for the nucleon, the results obtained from BChPT/HBChPT are far from the numerical results obtained from the electroproduction data. While a rather large amount of work has been devoted, both theoretically and experimentally, to the study of the nucleon polarizabilities, very little is known about hyperon polarizabilities. However, with the advent of hyperon beams at FNAL and CERN, the experimental situation is likely to change, and this possibility has triggered a number of theoretical investigations. Already, predictions for electric and magnetic polarizabilities have been made for the low-lying octet baryons in the framework of LO HBChPT [63], and in the context of several other models, yielding a broad spectrum of predictions [64-69]. At present, no experimental data are available for the forward spin polarizability of the hyperons, and no theoretical calculations have been published. Motivated by this situation, in the present work we extend the analysis of SU(2) HBChPT to the SU(3) version in order to compute γ₀ for hyperons. This could serve as a test of the low-energy structure of QCD in the three-flavor sector. However, there is also a need to compute the spin polarizabilities in the framework of BChPT with the IDR prescription. The paper is organized as follows. Section II contains an overview of the SU(3) version of HBChPT relevant to the calculation of the hyperon forward spin polarizabilities γ₀. The relevant Feynman rules for the case of the Σ⁺ polarizability are listed in Appendix A (see Fig. 1), and the required loop integrals are listed in Appendix B. The explicit expressions for the Σ⁺π⁺(K⁺) loops in terms of loop integrals are listed in Appendix C. In Section III we give the explicit results for the hyperon spin polarizabilities γ₀ and discuss the corresponding numerical results. Brief conclusions are given in Section IV.
II. SU(3) HBChPT
The lowest-order SU(3) HBChPT Lagrangian involving the octet of pseudoscalar mesons and the baryon octet B consists of two basic pieces: the lowest-order chiral effective meson Lagrangian L_φ^(2) and the lowest-order meson-baryon Lagrangian L_φB^(1) [4,34,35], where the superscript (i) attached to the Lagrangians denotes their low-energy dimension and the symbols ⟨ ⟩, [ ], { } denote the trace over flavor matrices, the commutator and the anticommutator, respectively. We use the following notation: U = u² = exp(iφ/F₀), where F₀ is the octet decay constant (in our calculations we use F₀ = F_π = 92 MeV); u_µ = i{u†, ∇_µ u}; and ∇_µ and D_µ are the covariant derivatives acting on the chiral and baryon fields, respectively, including external vector (v_µ) and axial (a_µ) fields, with Γ_µ being the chiral connection. The covariant spin operator is S_µ = (i/2) γ₅ σ_µν v^ν, obeying the standard relations in d dimensions [4]. Finally, χ_± = u†χu† ± uχ†u with χ = 2BM + ...,
where B = |⟨0|q̄q|0⟩|/F² is the quark vacuum condensate parameter and M = diag(m̂, m̂, m_s) is the mass matrix of the current quarks. (We work in the isospin symmetry limit with m̂_u = m̂_d = m̂ = 7 MeV; the mass of the strange quark m_s is related to the nonstrange one via m_s ≃ 25 m̂.) The parameters D and F are fixed from hyperon semileptonic decays to be D = 0.80 and F = 0.46, with D + F = g_A = 1.26 being the nucleon axial charge. In the above equations, m denotes the average baryon mass in the chiral limit.
III. FORWARD SPIN POLARIZABILITY γ₀
In order to calculate the forward spin polarizabilities, we work in the Breit frame, wherein the sum of the incoming and outgoing baryon three-momenta vanishes. We utilize the Weyl (temporal) gauge A⁰ = 0, which, in the language of HBChPT, means v·ε = 0, where v_µ = (1, 0, 0, 0) is the baryon four-velocity. At O(p³) only the loop diagrams contribute to γ₀: to one loop, the hyperon polarizabilities are pure loop effects. At LO these loop diagrams have insertions only from L_φB^(1). Fig. 2 shows all the possible loop diagrams which contribute to γ₀ for Σ⁺. Similarly, for the other octet baryons the diagrams in Fig. 2 are the only ones which contribute to γ₀ (except that the incoming and outgoing particles are different). There do exist contact-term graphs stemming from two insertions of the Lagrangian, but these do not contribute to γ₀, and consequently we have not shown these diagrams in our manuscript. Appendix A (see Fig. 1) lists the relevant Feynman rules for the computation of the loop diagrams, while Appendix B contains the relevant loop integrals required for their evaluation. Appendix C gives the analytic results for the Σ⁺π⁺(K⁺) loops contributing to the forward Compton scattering amplitude γΣ⁺ → γΣ⁺. Note that both pion and kaon loops yield finite contributions to γ₀ for all octet baryons. The values of γ₀ are found from the calculation of W^(1)(ω) via [47], and we have obtained the corresponding expressions for γ₀ for all the low-lying octet baryons. We note that in the nucleon case, when we neglect the kaon loop contributions, we reproduce the well-known result of SU(2) HBChPT [51]. The other results for the spin polarizabilities are new predictions. In Table I, the second and third columns give the contribution to γ₀ from π and π + K loops, respectively.
IV. CONCLUSIONS
We have presented the LO contribution to spin-dependent Compton scattering in the framework of HBChPT. In LO HBChPT, these contributions are all meson-loop effects, with no counterterm or resonance-exchange contribution, and hence are a test of the chiral sector of three-flavor QCD. There exists a small but finite contribution from kaon loops to γ₀ for the low-lying octet baryons except the Ξ⁻ and Ξ⁰ states. Our result for γ₀ in the case of the proton and neutron reproduces the results of the LO calculation of SU(2) HBChPT when kaon loops are not considered, and it remains to be seen how the predictions for the other baryons will compare with future experiments. On the theoretical side, one needs to perform O(p⁴) calculations to improve the predictions of the polarizabilities and to test the convergence of the chiral expansion. Additional calculations are also needed to compute γ₀ in the framework of BChPT with the IDR prescription in order to test the LO and NLO HBChPT results. Work in this direction is in progress.
Appendix A: Feynman rules (see Fig. 1), including the πΣΛ coupling, the photon-meson-baryon couplings, and the γπΣΣ coupling.
Appendix B: Loop integrals. Here we define all the loop functions which occur in our calculation and give these functions in closed analytical form as far as possible. In the following, all propagators are understood to have an infinitesimal imaginary part, and the results of the integrals are for real photons. The complete list of integrals can be found in [4]; the basic one-loop function has a pole at d = 4. Here P = π or K, γ_E = 0.577215 is the Euler-Mascheroni constant, and λ is the scale of the dimensional regularization scheme used in the evaluation of the integrals.
Appendix C: Σ⁺π⁺(K⁺) loops in forward Compton scattering. Using the loop integrals defined in Appendix B, the Σ⁺ + π⁺(K⁺) loop diagrams of Fig. 2 can be written as

Amp_{b+c+b′+c′}^{Σ⁺π⁺} = C² [S·ε*, S·ε] (∂/∂M_π²) ∫₀¹ [J₂^π(ωz) − J₂^π(−ωz)] dz.   (C2)
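To make the γ₀ sum rule quoted in the introduction concrete, the sketch below numerically integrates the helicity-difference photo-absorption cross sections. The Lorentzian "resonance" used as input and the specific threshold value are illustrative assumptions, not data from this paper; only the integration structure follows the sum rule in the text.

```python
import numpy as np

# gamma0 = 1/(4*pi^2) * Integral_W^inf [sigma_-(w) - sigma_+(w)] / w^3 dw
# sigma_+ / sigma_-: photo-absorption for parallel / antiparallel helicities.

HBARC = 197.327  # MeV*fm, used to convert energies to inverse femtometers

def toy_sigma_diff(w_mev):
    """Hypothetical sigma_-(w) - sigma_+(w) in fm^2: one toy resonance."""
    peak, width, strength = 300.0, 100.0, -5.0e-2  # MeV, MeV, fm^2 (assumed)
    return strength / (1.0 + ((w_mev - peak) / width) ** 2)

W = 150.0                            # threshold ~ M_pi + M_pi^2 / (2 m_N), MeV
w = np.linspace(W, 2000.0, 20000)    # photon energies, MeV
w_inv_fm = w / HBARC                 # same grid in fm^-1
integrand = toy_sigma_diff(w) / w_inv_fm**3          # fm^2 * fm^3 = fm^5
gamma0 = np.trapz(integrand, w_inv_fm) / (4 * np.pi**2)  # fm^4
print(f"toy gamma0 = {gamma0 / 1e-4:.2f} x 10^-4 fm^4")
```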
Ultrasound and Sonogenetics: A New Perspective for Controlling Cells with Sound
An important challenge in neurobiology is to stimulate a single neuron, especially in deep areas of the brain. Optogenetic methods need a surgical operation to convey light sources to targeted cells. Nowadays, non-invasive tools such as sonogenetics, with the ability to modulate and visualize cellular and molecular processes, have attracted much attention. The study of the biological functions of living organisms always requires tools for dynamic monitoring and imaging. Current sonogenetic approaches use ultrasound as a non-invasive tool to precisely control cellular function. In general, sonogenetics includes the development of mechano-sensitive proteins, approaches for introducing their genes into specific cells, targeted stimulation, and, finally, reading out the outcome. Hence, in this short review of the emerging technology of sonogenetics, we summarize the properties of sound waves, the mechano-sensitive proteins commonly used in sonogenetics, and the potential therapeutic applications of sonogenetics for biological research and medicine. This short review should benefit the translation of sonogenetics from present in-vitro and in-vivo investigations to clinical therapies.
Introduction
Today, sound and light are known as fast transfer tools in non-invasive therapeutic methods such as sonogenetics and optogenetics. Therefore, studies in this field can be useful for imaging and modulating biological systems. Sound waves are among the strongest mechanical waves and are divided into three types. (1) Audible sound, with a frequency of 20 Hz to 20 kHz, is perceived by the auditory organ and the nervous system: it enters the human ear as a mechanical vibration, is converted to a biomechanical signal by inner-ear hair cells, and is transmitted to the brain as a neuronal signal. (2) Infrasound, with a frequency of less than 20 Hz, is inaudible to the human ear. These waves were first detected in 1883 at Krakatoa, Indonesia, and because of their low frequency they can travel over thousands of kilometers. Infrasound is not used in therapy but is very important in biological phenomena and geology (1). (3) The third type of sound wave is ultrasound, with a frequency above 20 kHz, which has been very important in diagnosis. The first use of ultrasound as a diagnostic tool was for brain tumors, by Theodore and Frederick Dussik in the 1930s (2). Ultrasound itself was first described in 1876 by Francis Galton; radar-like echo-ranging devices were later used on World War I submarines (3). Nowadays, the use of ultrasound is one of the most common clinical applications in diagnosis and treatment, especially in the field of neuroscience. Ultrasound is a highly efficient tool in physiotherapy, surgical instruments, chemotherapy, drug delivery, sonography, and the high-intensity focused ultrasound (HIFU) field (4). Notably, the ultrasound waves used in sonography, in the frequency range of 2 to 18 MHz, are a hundred times greater in frequency than the threshold of human hearing. Among ultrasound waves, higher frequencies have smaller wavelengths and higher energy, and penetrate soft tissues less deeply. The penetration depth of sound waves is the factor that most distinguishes sonogenetics from optogenetics: although optogenetics provides helpful tools for studying biological systems, it may be less efficient in deep tissues (5). Ultrasound, like any wave, has a finite propagation speed, nearly 1540 m/s in soft tissue.
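A quick calculation makes these scales concrete: with the ~1540 m/s soft-tissue sound speed just quoted, the 2-18 MHz sonography band corresponds to sub-millimeter wavelengths. The frequencies below are taken from the text; the snippet itself is purely illustrative.

```python
C_SOFT_TISSUE = 1540.0  # m/s, approximate speed of sound in soft tissue

def wavelength_mm(freq_hz: float) -> float:
    """Acoustic wavelength lambda = c / f, returned in millimeters."""
    return C_SOFT_TISSUE / freq_hz * 1e3

for f in (20e3, 2e6, 18e6):  # hearing limit, and the 2-18 MHz sonography band
    print(f"{f/1e6:6.2f} MHz -> {wavelength_mm(f):7.3f} mm")
# 20 kHz -> 77 mm; 2 MHz -> 0.77 mm; 18 MHz -> ~0.086 mm:
# higher frequency means a shorter wavelength (finer resolution, less depth).
```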
The acoustic impedance of ultrasound becomes noticeable at the boundary between different media. Note that there is no significant difference between the acoustic impedances of soft tissues, which is why the sound wave penetrates homogeneously in soft tissue and provides simple images without distortion (6). Gradually, a new therapeutic modality has emerged for treating medical disorders by using ultrasonic energy to target regions in the body. Focused ultrasound (FUS) permits sonic energy to be targeted in deep tissue, noninvasively and precisely (Figure 1) (7,8). Although the mechanism by which ultrasound affects cellular excitability is poorly understood, several modes of energy delivery are being considered. Firstly, heat produced by ongoing ultrasound waves can be used for thermal bioswitches. Secondly, cavitation occurs via the interaction of ultrasound with microbubbles, which can cause cell damage or vascular infarction. Ultrasound also deposits momentum as it travels through the medium, producing mechanical forces, called acoustic radiation forces, that are able to stimulate sound-activated molecules (9). The currently available tool for monitoring and controlling cellular events is optogenetics. Since no technique is perfect, and optogenetics is no exception to this rule, light scattering has made it difficult to use this technique in biological systems. In contrast, ultrasound has a long history in biomedical imaging and therapeutics, but its application in manipulating and monitoring cellular events has been limited. Recent progress has begun to remove this restriction through the discovery of ultrasound-responsive elements that allow ultrasound to be linked to cell activities directly, in a new technology called "sonogenetics". Sonogenetics, which utilizes ultrasound to noninvasively manipulate and control cells genetically engineered with ultrasound-responsive proteins, can be widely applied to manipulate cellular functions. This review aims to introduce this emerging technology and provide a basis for further studies. Additionally, it centers on the latest developments in ultrasound, and especially sonogenetics technology, in medicine. Four basic steps are required for sonogenetics, which are briefly discussed below.
Part 1: Sound-sensitive proteins
One of the main challenges in the development of non-invasive technologies such as sonogenetics is to find appropriate sound-sensitive proteins as toolboxes. Since sonogenetics uses mechanical ultrasound waves, genes for mechano-sensitive ion channels have been identified and expressed in the target cells. Different types of mechanosensitive channel genes act in response to different mechanical stimuli, including shear stress, osmotic pressure, stretch, and pressure. Mechanical forces applied to mechanosensitive channels lead to bilayer deformation and movement of the channel helices and, finally, to opening of the pore and flux of ions and small molecules. In addition to the above, another type of sound-sensitive protein, found in the mouse brain and called Prestin, can be stimulated by ultrasound (16). The effects of ultrasound on targeted tissues act through mechanical and thermal mechanisms. Transient receptor potential vanilloid 1 (TRPV-1) is one of the thermo-sensitive ion channels used in sonogenetics; it is activated at 42 °C (17). TRPV-1 channels remain closed at the physiological temperature of the body to keep cells safe.
These channels are among the primary proteins used in sonogenetics and are expressed in non-neural cells and nerve fibers, such as muscle cells and vascular endothelial cells (18,19). Protein kinase C and phospholipase C, as intracellular signaling pathway elements, have a regulatory effect on TRPV-1 activity. Phosphatidylinositol bisphosphate (PIP2) also has an inhibitory effect on this channel (20). The activation of this channel results from factors such as heat (above 42 °C), an acidic environment, and chemical stimuli, including capsaicin (which is abundant in red pepper) (21). The transient receptor potential Ankyrin 1 (TRPA-1) channel, like TRPV-1, is a subgroup of the TRP channels and is permeable to calcium. An agonist of TRPA-1 is mustard oil, which activates the channel through its allyl isothiocyanate component. Other compounds, such as toxic scorpion peptides (WaTx) and electrophilic irritants, can also activate TRPA-1 channels. The structure of TRPA-1 includes six transmembrane domains spanning the plasma membrane, 16 ankyrin repeats, and intracellular N-terminal and C-terminal domains; this region of TRPA-1 plays an important role in the channel's mechano-sensitive features (22). Notably, TRPA-1 channels are activated at cold temperatures (<17 °C) (23). One type of TRPA-1 found in humans (hsTRPA-1) is used as a sonogenetic toolbox: researchers found that following ultrasound stimulation of this channel, the intracellular calcium level and membrane potential are increased (22). Other mechano-sensitive proteins, found in bacteria, are the MS channels. These channels, found in bacteria such as E. coli, are divided into different subgroups: MscL (mechano-sensitive channel large), MscS (mechano-sensitive channel small), and MscM (mechano-sensitive channel mini) (24). The MscS family members are found widely among bacteria and archaea; moreover, they are found in all plant genomes and in the genomes of fungi and other eukaryotes. The MscL protein consists of a polypeptide with 136 amino acid residues and a signal recognition particle (SRP) for targeting to the plasma membrane (25), with two alpha-helices on either side of the plasma membrane (26). Some factors, such as light and pH, can affect MscL (27,28). Ultrasound is an important factor for opening MscL and controlling target cell activities; the frequency of the ultrasound waves can modulate target cell activity through rapid chemical interactions. The K2P family channels are recently described potassium channels with two pore domains, four transmembrane domains, and extracellular caps. TREK-1, TREK-2, and TRAAK are mechano-sensitive channels of the K2P family that are activated by mechanical force. The genes of these channels are expressed in the mammalian nervous system and have important effects on vital functions. The activation of these channels is strongly dependent on the extracellular K⁺ level (29). Prestin, one of the transmembrane proteins of the cochlea, is a voltage-to-force transducer motor. Changes and movements in the outer hair cells (OHCs) of the cochlea create a membrane potential in these cells, which may be the origin of auditory messages in mammals. Prestin has a high sensitivity to the frequency of sound in the hearing organs of mammals (30) and can detect sound frequencies in the range below 20 kHz. Prestin function depends on the electromechanical signals sent from the OHCs of the cochlea to the brain (30). It is noteworthy that Prestin is not an ion channel and does not have any role in ion exchange across the membrane (15).
Part 2: Gene delivery and expression
Genes coding for ultrasound-sensitive proteins can be delivered to the target cells via gene delivery. This is done with viral-vector and non-viral-vector methods, as well as by the creation of transgenic lines.
Part 3: Ultrasound exposure
As mentioned above, ultrasound is an acoustic wave with a frequency of more than 20,000 Hz. High-intensity focused ultrasound is a non-invasive surgical technique that uses focused ultrasound for the thermal ablation of tissues; focused ultrasound exerts its effect with the most appropriate energy and minimum exposure time (31). To obtain a better effect and higher efficiency from ultrasound radiation, parameters such as intensity, fundamental frequency, duration, duty cycle, and pulse repetition frequency are checked. Intensity: intensity is the amount of acoustic energy produced by ultrasound, described as the spatial-peak pulse-average intensity (Isppa) or the spatial-peak temporal-average intensity (Ispta), which can be used to assess the radiation safety of ultrasound in brain stimulation. Focused ultrasound waves are divided into two types based on intensity: high-intensity focused ultrasound (HIFU) and low-intensity focused ultrasound (LIFU). The intensity of HIFU ranges from approximately 100 W/cm² to 10 kW/cm², and because of its high intensity it is widely used in treatment and surgery (32). Sometimes high intensity can increase the temperature and cause irreversible destructive effects. LIFU, with an intensity of less than 3 W/cm², can modulate local tissue at a controlled temperature. Electrical activity can be recorded during ultrasound stimulation at intensities up to 100 mW/cm²; beyond 100 mW/cm², it is impossible to analyze brain activity (33). Fundamental frequency: the number of oscillation cycles per unit time is called the fundamental frequency, which is widely used in ultrasound applications. Ultrasound has a wide frequency range, but only limited frequency bands are effective in diagnosis and treatment. For example, high-frequency ultrasound (1-20 MHz) is used in diagnosis, medium-frequency ultrasound (0.7-3 MHz) is used in therapy, and low-frequency ultrasound is used in industry. Higher-frequency waves have a smaller penetration depth; therefore, for a high penetration depth, low frequencies are essential (34). Duration: the interval over which the pulse is transmitted. Researchers have found that long-term (>10 s) use of LIFU can inhibit neural activities, while short-term use can stimulate them (35). Pulse repetition frequency (PRF): PRF is the number of ultrasound pulses over a specified period of time, typically measured in hertz (Hz), or cycles per second. Recent studies have shown that PRF levels are associated with neuronal modulation; ultrasound with a PRF above 500 Hz stimulates neural activity with evoked EEG responses (36). Duty cycle (DC): the fraction of each pulse period during which ultrasound is on is called the duty cycle (DC). The ultrasound can be delivered continuously or in discrete pulses; DC = 100% if the ultrasound is continuous and without any interruption. Notably, in most studies, pulsed ultrasound stimulation with DC < 100% is more efficient for neural activation applications (37).
Part 4: Readout
The results induced by stimulating the ultrasound-sensitive proteins need to be evaluated in cells, tissues, or organisms. Electrodes and arrays can be used to record the effect of changes in membrane voltage, for example by evaluating calcium signals after ultrasound exposure.
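The exposure parameters defined in Part 3 above are linked by simple relations. The sketch below computes the duty cycle and the time-averaged intensity from pulse settings; the relation Ispta = Isppa x DC is a standard approximation assumed here, and the example numbers are illustrative rather than taken from any cited protocol.

```python
def duty_cycle(pulse_duration_s: float, prf_hz: float) -> float:
    """DC = pulse on-time per period = pulse duration * PRF (1.0 means continuous)."""
    dc = pulse_duration_s * prf_hz
    if dc > 1.0:
        raise ValueError("pulse duration exceeds the pulse period")
    return dc

def ispta(isppa_w_cm2: float, dc: float) -> float:
    """Time-averaged intensity, assuming Ispta = Isppa * duty cycle."""
    return isppa_w_cm2 * dc

# Hypothetical LIFU setting: 0.5 ms pulses at 1 kHz PRF -> DC = 50 %.
dc = duty_cycle(pulse_duration_s=0.5e-3, prf_hz=1000.0)
print(f"DC = {dc:.0%}, Ispta = {ispta(2.0, dc):.2f} W/cm^2")  # Isppa = 2 W/cm^2 assumed
```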
Many biosensors, such as dyes and genetically encoded indicators, can be used to evaluate different cellular readouts. Ultimately, investigating cell behavior can be used to evaluate the effect of modulating cellular activity in-vivo.
Application of ultrasound
As mentioned above, ultrasound is a type of mechanical wave and can be focused at high frequency. Sound waves travel through tissues at 1540 m/s and scatter from the interfaces of tissues with different acoustic impedances, where impedance is a function of density and compressibility. Ultrasound imaging is a diagnostic modality in the clinic, and there are numerous ultrasound imaging modes, such as B-mode imaging; Doppler imaging, which detects the motion of red blood cells; contrast imaging, which relies on the administration of contrast agents like microbubbles; ultrafast imaging; functional ultrasound imaging; and ultrasound localization microscopy. Furthermore, ultrasound can interact with biomolecules to enhance their transport through cellular and tissue barriers, which relies on the cavitation behavior of microbubbles and leads to cellular sonoporation, vascular barrier opening, acoustic trapping, and the manipulation of cells and molecules. It is also worth mentioning that ultrasound can be combined with other forms of energy, such as light and magnetic fields, to enable the imaging or actuation of biomolecules, as in photoacoustic imaging, acoustically modulated light focusing, and acoustically modulated magnetic resonance (6). The physics of ultrasound makes it a favorable option for neuromodulation because it can be focused at millimeter resolution through the skull bone to deep-brain regions. Ultrasound waves associated with heat generation can be effective in therapy; in fact, they can be used to generate internal heating in local tissue without adverse effects. In recent studies, researchers have found a way to use focused ultrasound to treat specific types of cancer: the focused ultrasound can raise the temperature of the tumor area without any destructive effect on the surrounding tissue. Most medical diagnostic imaging procedures are performed with X-rays. X-ray photons have high energy and are highly ionizing; because of this, X-rays can break molecular bonds in tissues, and this damage to molecular bonds can lead to changes in function or the destruction of tissues. Unlike X-rays, there is no ionizing radiation exposure associated with ultrasound, so the use of FUS is recommended in sensitive situations where X-rays could be dangerous. Besides, ultrasound can discriminate contrast between different types of soft tissue. Ultrasound participates in several biomedical applications, and the use of ultrasound in the treatment of cancer is a novel field for researchers. Surgery is the common method of treating solid tumors, but this method is not applicable to some tumors in sensitive areas. Findings have shown that the thermal ablation of the HIFU approach can also be used in treating cancers such as prostate, breast, liver, and kidney cancer, which are discussed below (38). Prostate cancer: prostate cancer is the most common cancer among men, and HIFU is one of the appropriate choices for its treatment. Since surgery can damage the function of the urinary and sexual tracts, the use of this technique may be useful.
The prostate is located in a deep area of the pelvis, and ultrasound can target the prostate with high accuracy and minimal damage to adjacent tissue (39). Breast cancer: breast cancer is the second most common cancer in females worldwide. This cancer is caused by inherited and environmental factors such as age and obesity; moreover, alcohol consumption increases the risk of breast cancer in women. The common methods of treating breast cancer are surgery, chemotherapy, radiotherapy, and hormone therapy. FUS, as a non-invasive tool, is an appropriate alternative to mastectomy: in this method, HIFU is directed into the tumor tissue and destroys the tumor by increasing the temperature, without injuring surrounding tissue. Using HIFU to treat breast cancer can prevent the proliferation, invasion, and metastasis of breast cancer (40). Liver cancer: surgery coupled with liver transplantation is the most promising cure (39). The liver has many blood vessels, and for that reason chemotherapy and surgery are the main treatments for this cancer. Studies have shown that HIFU can completely destroy tumors in a selected area in liver cancer with the least pain for patients, and doxorubicin administration combined with HIFU treatment increases the patient's chance of survival (41). Kidney cancer: surgery is the main method of treating kidney cancer, but since most kidney tumors are small, a non-invasive method is the best treatment choice for this cancer, and for this purpose the use of HIFU is recommended.
Sonogenetics and its applications
Currently, LIFU is a non-surgical approach used for the neuromodulation of the peripheral and central nervous systems. Although significant advances have been made in neuromodulation, it still faces limitations such as a lack of spatial selectivity. Sonogenetics has appeared as a novel strategy to target individual cells with high spatial resolution. Understanding how the neural system works and how it triggers particular behaviors requires identifying the participating neurons and their activities. Some strategies have been advanced for manipulating neural circuits using small molecules (pharmacogenetics) (42) or light (optogenetics) (43,44). While these approaches have unveiled some of the complexity of neural circuits, they have limitations, including problems delivering the stimulus to specific neurons in deep areas of the brain. To solve this, a novel approach has been elaborated that genetically sensitizes selected neurons to ultrasound. The integration of the deep penetration and spatial targeting of ultrasound has given rise to sonogenetics technology, which has potential clinical applications in epilepsy, depression, and Parkinson's disease (45). Besides, it has been shown that HIFU can change neuronal activity in frog and turtle neuromuscular systems, but heating tissues with HIFU carries risks of irreversible damage; thus, recent studies have focused on the use of low-intensity focused ultrasound (LIFU). Chalasani and his colleagues used ultrasound to stimulate specific neurons in the nematode Caenorhabditis elegans and observed behavioral responses to single ultrasound pulses, depending on the pressure of the ultrasound. However, this occurred only when tiny bubbles were added around the worm's body to amplify the ultrasound and provide a mechanical stimulus, because wild-type animals are insensitive to low-pressure ultrasound and require gas-filled microbubbles to transduce the ultrasound waves.
They also found that the mechanosensory channel formed by TRP-4 sensitizes neurons to the ultrasound stimulus and mediates the responses in mutant animals as well (22). This new technique became the basis of several new applications. Using sonogenetics, the activation of neurons does not require direct contact of TRP-4-expressing neurons with microbubbles, so internally localized neurons can be manipulated by this method; and since ultrasound can penetrate the skull, it is very interesting to ask whether sonogenetics can be used in other organisms (45). It has also been shown that LIFU at a frequency of 1.1 MHz and intensities of 14-93 W/cm² can activate peripheral neuronal structures in the human hand; in other studies, ultrasound was used to manipulate deep neural structures of the human hand to decrease chronic pain (45). These findings open doors to different applications of the effects of ultrasound on an existing or an engineered ultrasound-sensitive ion channel that could be over-expressed in a particular region in a cell carrying specific genetic markers (9). In 2016, scientists found that focused ultrasound modulates the K⁺ currents of K2P channels and also the Na⁺ current of NaV1.5, which are expressed in neurons, retinal cells, and cardiac cells; this may lead to important medical applications. Many ion channels of the brain and the heart might respond to the mechanical or temperature-related effects of ultrasound application (9). In addition to neuromodulation, sonogenetics could be applied to manipulate and control a variety of cell and tissue types, from pacemaker cardiomyocytes in the heart to insulin-secreting cells in the pancreas. However, additional investigations are needed to explore the therapeutic potential of this novel technology. Sonogenetics also has other applications, for example in fetal neurology, where the idea of treating fetuses first will play an important role in the molecular genetics era. As we know, bacterial and eukaryotic cells may sense physiologically relevant changes in membrane tension using MscL and MscS homologs, convert that tension into solute flow across the membrane, and finally turn these fluxes into useful actions, such as osmotic shock protection in some types of bacteria; in others, such as B. subtilis, MS channels play a crucial role in the process of exiting the stationary phase and re-entering the growth cycle. Moreover, Arabidopsis thaliana encodes ten MscS-like (MSL) proteins in the plasma and vacuolar membranes that are regulated by salt and other osmotic stresses (46). In addition, in one study researchers expressed MscL genes in rat hippocampal neurons in primary culture and activated them with low-pressure ultrasound pulses, because the gain-of-function mutation I92L had sensitized MscL to low-pressure ultrasound, which can penetrate the skull and brain tissue with very little impedance or tissue damage, triggering action potentials at a given peak negative pressure. MscL can be activated in any membrane independently of other proteins or ligands, and it has a single small gene that can easily be targeted to specific neurons in-vivo. In fact, different or additional mutations can generate new sensors suited to different needs.
For example, the molecular engineering of MscL variants with fast gating kinetics, combined with strong currents and rapid inactivation, could improve the frequency of ultrasound-evoked spikes, and more accurate manipulation could be achieved by modifying MscL with designed ion selectivity and pore size. It is noteworthy that ultrasound should be able to deliver drugs to cells through MscL installed in the membrane, in what might be called "sonotherapy," since scientists have used transgenic MscL to deliver phalloidin into mammalian cells. MscS-like proteins are also implicated in cellular signal transduction pathways. Sonogenetics, through the targeted use of different types of channels in particular microorganisms, tissues, or cells, can exploit their features to perform actions that were not possible before, or alter a channel's activity in order to evaluate its function: for example, examining the biochemical and biophysical consequences of osmotic shock, or evaluating the role played by MS channels in cellular processes that change membrane tension other than osmotic stress, such as the rehydration of bacteria and pollen spores, membrane remodeling during cell or organelle fission, changes in cell morphology or size, altered membrane synthesis, and neural circuit activity (6). Moreover, sonogenetics can play a crucial role in actuating cellular signaling, thanks to features such as the ability of ultrasound to apply mechanical forces to tissues with a controlled temperature increase, leading to the mechanical actuation of receptors, and ultrasound neuromodulation that provides sonogenetic control of cellular function. Specific examples are the proliferation of microbes in the gut, the release of cells expressing a therapeutic payload, and the excitability of specific neurons (6). A more recent study by He et al. designed a sonogenetic nanosystem by expressing MscL, a mechanosensitive ion channel, in tumor cells. Following ultrasound stimulation, MscL-expressing cells are overloaded with Ca²⁺ influx, subsequently triggering apoptosis, so it is feasible to precisely control cell apoptosis without affecting the rest of the cells (47).
Conclusion
Currently, optogenetics is known as a significant and useful tool for modulating and monitoring various cells and for understanding different disorders, and it has been identified as an alternative to the conventional electrical stimulation methods used in clinical research. Light penetration is a major factor limiting this valuable technology in deep structures. The sonogenetics approach is an alternative tool that provides a new strategy for deep-tissue, high-resolution, and non-invasive therapy. Sonogenetics might be the ideal manipulation technique for neurostimulation, for instance in the heart and brain, compared with optogenetics. Furthermore, sonogenetics has promising applications not only in neurobiology but also in cancer immunotherapy: using this technology, it is possible to manipulate the various cellular signaling pathways involved in programmed cell death for cancer treatment. Like all emerging technologies, this technique has some limitations for translation to the clinic. As with any gene therapy method, it involves delivering the genes of the sound-sensitive proteins to targeted cell subpopulations. While adeno-associated virus (AAV) vectors show good potential in the clinical setting, obstacles such as the host immune response, gene transfer efficiency, and liver clearance still remain.
However, many further investigations are warranted to confirm the safety and efficacy of sonogenetics and to adjust its factors and parameters before clinical practice. Ethical approval: this article does not contain any studies with human participants or animals performed by the author.
Diversion’s Application in the Juvenile Justice System to Realize Restorative Justice in Relation to the Principle of Deprivation of Liberty as a Last Resort
Diversion efforts can only be carried out in cases of children in conflict with the law whose offenses carry a criminal threat of under 7 (seven) years' imprisonment and do not constitute a repeat of a criminal act. In contrast, the juvenile justice system requires that deprivation of liberty and punishment be the last resort. This research uses the normative legal research method, with a statutory approach, a case approach, and a comparative approach. From this research, we know that diversion in the juvenile justice system cannot be carried out in every child's case; it can only be carried out in the cases of children who meet the requirements, namely a criminal threat of under 7 (seven) years and no repetition of the offense. Not all cases of children go through a process of diversion: children in conflict with the law whose cases fall outside these requirements are directly threatened with criminal punishment, even where there has been reconciliation between the perpetrators and the victims, so that the principle of deprivation of liberty and punishment as a last resort is not realized, because diversion cannot be used to protect those children.
INTRODUCTION
Legal protection for children in conflict with the law can be pursued through diversion, namely the transfer of the judicial process into an alternative method of solving criminal cases, through deliberation for recovery, or mediation. The transfer step is made to protect the child from further legal action. Besides that, the transfer aims to avoid the negative influence of further legal action, which can cause stigmatization of the child in the community. Diversion is currently considered a process recognized internationally as the best and most effective way of resolving cases of children in conflict with the law. This thought initially arose because children are influenced by several factors outside themselves, such as relationships, education, family, playmates, etc. Diversion aims to achieve peace between victims and children, resolve cases of children outside the judicial process, prevent children from being deprived of liberty, encourage people to participate, and instill a sense of responsibility in children. Diversion must be carried out at every stage, from the investigation and prosecution to the examination at the District Court. Diversion is said to be successful if there is an agreement and the case can be stopped; restorative justice is then achieved, whereas if the diversion is not successful, the case is continued until the child is convicted. Children are not to be punished but must be given guidance and coaching to grow and develop as normal children who are completely healthy and intelligent. Children are a gift from Allah Almighty, as candidates for the next generation of the nation, who are still in physical and mental development. Sometimes children experience difficult situations that make them commit illegal acts. However, children who break the law do not deserve to be punished, let alone put in prison. In the imposition of punishment, although the sentence imposed on children can be a warning or a conditional sentence, the stigma of being a child who has served a sentence attaches to the child in conflict with the law.
For the sake of legal protection for children in conflict with the law, and with due observance of the principles of the Juvenile Criminal Justice System, all cases of children without exception should be eligible for diversion, so that deprivation of liberty and punishment is the last resort.
PROBLEM FORMULATION
Does the application of diversion in the Juvenile Criminal Justice System to achieve restorative justice reflect the principle of deprivation of liberty and punishment as a last resort?
RESEARCH METHOD
This research is normative legal research with a statutory approach, focusing on the law governing the juvenile courts. The most basic substance of this law is its strict regulation of restorative justice and diversion, which is intended to keep children away from the judicial process in order to prevent the stigmatization of children in conflict with the law, in the hope that the child can reasonably return to the social environment. The child is in a difficult situation that makes him commit illegal acts; however, children who break the law do not deserve to be punished, let alone be put in prison. This represents a paradigm shift from an emphasis on retributive justice, through an emphasis on restitutive justice, to an emphasis on restorative justice. The focus on restorative justice must be supported by the roles and duties of the community, the government, and other state institutions, which are obliged and responsible for improving the welfare of children and providing superior protection for children in conflict with the law. The Indonesian criminal law system has thus entered a new chapter in its development. Restorative justice is realized if the diversion is successful and the agreement has been fully implemented, so that the child's case can be stopped; termination of children's cases can be done at any level. With the success of diversion, children who face the law will avoid stigmatization and can naturally return to the social environment. The factors for the success of diversion are the willingness of the perpetrator and the victim to reach an agreement and the implementation of that agreement, while the main factor in diversion's failure is the victim: the victim or the victim's family may not agree to settle the case through diversion because they still think that punishment is retribution for the wrong that has been done. One form of reform in Indonesian criminal law, diversion, is a crucial matter regulated by the SPPA Law because it aims to achieve peace between victims and children, resolve child cases outside the judicial process, prevent children from being deprived of liberty, encourage the community to participate, and instill a sense of responsibility in children. In the enforcement process, however, diversion is not available for serious offenses such as murder, rape, drug trafficking, and terrorism, which are punishable by imprisonment of over 7 (seven) years. Considering that the diversion effort itself does not necessarily reach an agreement between the parties, diversion can succeed or fail, depending on the course of the parties' deliberations. If the diversion process is successful, then the settlement of the case outside the criminal court has realized restorative justice; but when the diversion process fails, the settlement of the juvenile case is ultimately continued through formal criminal justice. The researchers assess that diversion is the right of every child, so it does not need to be limited.
Children caught up in narcotics, terrorism, rape, and other serious crimes also have the right to access diversion. At the very least, all children in conflict with the law should be allowed to improve themselves and take responsibility for their actions, so that the restrictions on the requirements for implementing diversion in Article 7 Paragraph (2) should be lifted and diversion applied to every child's case. Deprivation of liberty and punishment would then truly be the last resort.
Recommendation
It is recommended that diversion be applied in every child's case, with no prioritization among cases and without regard to whether the criminal threat or the repetition of the child's criminal act is high or low. The diversion that is carried out must still consider the victims' interests, because there is no success without the consent of, or an agreement with, the victims. Suppose the child's act is committed without a victim (such as a drug case); in that case, the government can form an integrated team to consider the child's actions and whether a restorative justice approach can be used. The child who is a criminal offender is also a victim of his or her environment. Restorative justice using diversion should be applied to every child, so that the principle of deprivation of liberty and punishment as a last resort is truly a last resort, because every case will have gone through diversion. If the diversion effort is never attempted, then deprivation of liberty and punishment is not the last resort, because restrictions on diversion remain.
Erbium-doped-fiber-based broad visible range frequency comb with a 30 GHz mode spacing for astronomical applications
Optical frequency combs have the potential to improve the precision of the radial velocity measurement of celestial bodies, leading to breakthroughs in such fields as exoplanet exploration. For these purposes, the comb must have broad spectral coverage in the visible wavelength region, a wide mode spacing that can be resolved with a high-dispersion spectrograph, and sufficient robustness to operate for long periods even in remote locations. We have realized a comb system with a 30 GHz mode spacing, 62 % available wavelength coverage in the visible region, and 40 dB spectral contrast by combining a robust erbium-doped-fiber-based femtosecond laser, mode filtering with newly designed optical cavities, and broadband visible-range comb generation using a chirped periodically-poled LiNbO3 ridge waveguide. The system durability and reliability are also promising because of the stable spectrum, which is due to the use of almost all polarization-maintaining fiber optics, moderate optical power, and the good frequency repeatability obtained with a wavelength-stabilized laser.
Introduction
Optical frequency combs with a wide mode spacing have a high power per mode, and each mode can be resolved with diffraction gratings or optical filters, which is essential for applications such as mode-resolved direct frequency comb spectroscopy [1,2] and line-by-line arbitrary optical waveform synthesis [3]. In particular, the wavelength calibration of astronomical spectrographs is expected to lead to breakthroughs in exoplanet exploration and cosmological research by improving the precision of radial velocity (RV) measurement [4,5]. RV measurement using the Doppler shift of the stellar spectrum is known as the "Doppler method" [6] and was used in the discovery of the first exoplanet [7]. A precision of a few cm/s is needed to find earth-like exoplanets with the Doppler method, and this is difficult to achieve with conventional wavelength standards such as Th-Ar lamps and iodine cells. To cope with this situation, an optical frequency comb, or "astro-comb," has been proposed and developed as the wavelength standard for RV measurement. Astro-combs require a mode spacing several times wider than the resolution of a high-dispersion spectrograph (>10 GHz) and broad spectral coverage that depends on the celestial body being observed. A frequency comb with a wide spacing needs a high average power to obtain the pulse energy required for spectral broadening due to nonlinear optical effects. In addition, the development of astro-combs is made more difficult because they must be robust and durable for long-term remote operation at observatories. The scheme frequently employed for astro-combs involves increasing the sub-GHz mode spacing of a mode-locked laser to more than 10 GHz using mode-filtering cavities [9-12]. The advantages of this scheme are that it is relatively easy to achieve self-referencing and to obtain 100 fs-level optical pulses. The difficulty is that unnecessary modes attenuated by the optical cavities are revived through the spectral broadening process [13,14]. On the other hand, there have been reports of astro-combs generated by modulating a CW laser with electro-optic modulators [15-18] and by the Kerr effect in a micro-cavity [19,20] as ways of directly realizing combs with a mode spacing exceeding 10 GHz but without mode-filtering cavities.
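To see why cm/s-level RV precision is so demanding, the non-relativistic Doppler relation Δν/ν = v/c translates a 1 cm/s velocity change into a shift of only ~17 kHz on a ~500 THz optical line. The 600 nm wavelength below is chosen purely for illustration.

```python
C = 299_792_458.0  # speed of light, m/s

def doppler_shift_hz(v_m_s: float, wavelength_m: float) -> float:
    """Non-relativistic Doppler: delta_nu = (v/c) * nu, with nu = c/lambda."""
    nu = C / wavelength_m
    return v_m_s / C * nu

# 1 cm/s radial-velocity change observed on a 600 nm spectral line:
print(f"{doppler_shift_hz(0.01, 600e-9):.0f} Hz")  # ~16.7 kHz out of ~500 THz
```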
The spectral range of the astro-comb is also important. The wavelength region most often used in the RV measurement of celestial bodies is the visible region, where there are abundant atomic absorption lines. Ytterbium (Yb)-doped-fiber-laser-based [21,22] and titanium-sapphire-laser-based [23][24][25] combs have been reported as schemes for obtaining visible broadband astro-combs because of their short wavelength and high output power. In particular, Yb-fiber-based astro-combs have produced actual results, and combs have been reported with a mode spacing of 18 GHz or 25 GHz and a wavelength coverage of 455 nm-691 nm [26]. This coverage reaches 48 % of the visible region, and RV measurement precision at the 1 cm/s level has been reported. Note that we define the visible wavelength region as 360 nm-830 nm in this paper. In this paper, we describe an astro-comb scheme that combines a robust erbium (Er) comb, mode-filtering cavities, spectral broadening with highly nonlinear fiber (HNLF) and multi-order harmonic generation with a chirped periodically-poled lithium-niobate waveguide (cPPLN-WG), and a wavelength-stabilized laser. We have achieved unprecedented spectral coverage while ensuring a sufficient mode-spacing frequency and unnecessary-mode suppression ratio (UMSR). We also discuss the possibility of spectral extension to all visible wavelengths.

Figure 1a shows an overview of the broadband, visible, and wide-mode-spacing comb system. We employ an Er-doped-fiber-based mode-locked laser as the comb source. Its carrier-envelope offset frequency (fCEO) and repetition frequency (frep) were phase-locked to reference frequencies from an atomic clock. Using three Fabry-Perot cavities, the comb mode spacing was increased to 30 GHz by matching one of every 130 modes of the comb to the cavity transmission mode frequency and suppressing the power of the other, unnecessary comb modes. Here, an acetylene-stabilized laser [29] was used as a reference for cavity-length stabilization so that the comb and cavity-transmission mode frequencies are easily reproduced. The comb was amplified with two polarization-maintaining Er-doped fiber amplifiers (EDFAs) inserted between the cavities, and then the comb spectrum was broadened in the infrared region with a polarization-maintaining HNLF [30]. We then input the broadband comb into a cPPLN-WG; the second- to fourth-order harmonic generation processes converted the comb in the infrared region into a broadband comb in the visible region. This is an evolution of the previous high-order harmonic generation of frequency combs [31][32][33]. For details of each part, see Methods. The wavelength-stabilized laser was frequency-stabilized to a 13C2H2 absorption line at a wavelength of 1542 nm (ν1 + ν3 P(16)), without referring to the comb. The output of this laser was divided into two parts, one of which was used for beat detection with the comb. The other was frequency-shifted with an acousto-optic modulator (AOM) and used for the length stabilization of the mode-filtering cavities. Here, fCEO and frep were set so that the beat frequency (fbeat) between the wavelength-stabilized laser and the nearest comb mode was approximately 40 MHz. The laser frequency output from the AOM was feed-forward controlled [34] to match one of the comb mode frequencies by applying the fbeat signal to the AOM. This frequency-controlled laser was used as a reference laser for the three mode-filtering cavities. The cavity lengths were controlled to allow the reference laser to transmit.
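As a numerical sketch of this frequency bookkeeping, the following reproduces the arithmetic quoted later in Methods (frep = 230.875909 MHz, fCEO = +30 MHz, mode order 841 879, and the acetylene P(16) line at 194 369 569 384 kHz); only the packaging into a script is ours.

```python
# Check that comb mode n lies ~40 MHz above the acetylene-stabilized laser,
# using nu(n) = n * frep + fCEO with the values quoted in Methods.
frep = 230.875909e6          # repetition frequency, Hz
fceo = +30e6                 # carrier-envelope offset frequency, Hz
n = 841_879                  # comb-mode order quoted in Methods

nu_n = n * frep + fceo       # frequency of the n-th comb mode, Hz
nu_ref = 194_369_569_384e3   # 13C2H2 nu1+nu3 P(16) line, Hz [40]

print(f"nu({n}) = {nu_n/1e3:,.0f} kHz")            # ~194,369,609,393 kHz
print(f"fbeat = {(nu_n - nu_ref)/1e6:.3f} MHz")    # ~40 MHz
print(f"filtered spacing 130*frep = {130*frep/1e9:.4f} GHz")  # ~30.01 GHz
```

Because the acetylene line, frep and fCEO are all reproducible, the same comb-mode order is recovered after every restart, which is what makes the comb and cavity-transmission frequencies easy to reproduce.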
We set the free spectral ranges (FSRs) of the three cavities at approximately 2.00 GHz (130frep/15), 1.77 GHz (130frep/17), and 2.14 GHz (130frep/14), respectively. The combination of these FSRs resulted in a high UMSR after the comb had passed through the three cavities connected in series. The resultant transmitted comb mode spacing was approximately 30 GHz (130frep), and the calculated UMSR was more than 60 dB at a finesse of 100 (see Fig. 3). During the cavity locking procedure, we scanned the optical cavity length and observed the transmitted reference laser power and the total transmitted comb power simultaneously; we locked the cavity mode with the maximum total transmitted comb power to the reference laser wavelength. Thus, the FSR of each optical cavity was closest to the designed rational multiple of frep, and a high transmittance over a broad spectral region was obtained for the extracted comb modes.

Spectral range

We employed chirped-pulse amplification [35] to obtain optical pulses with sufficient peak power for spectral broadening using an HNLF. As shown in Fig. 1a (4), 70 % of the output from the mode-locked laser was first highly chirped using a 15-m-long normal-dispersion fiber (NDF), and then passed through three cavities and two EDFAs in the order shown in the figure. The temporal pulse width stretched with the NDF was gradually compressed with the anomalous-dispersion fibers used in the mode-filtering and amplification parts. Then, the temporal width of the pulses was compressed so that the pulses were chirp-free by using an anomalous-dispersion polarization-maintaining single-mode fiber (PM-SMF) after the third cavity. The temporal width of the pulse measured with frequency-resolved optical gating (FROG) was 180 fs. From the average power of 880 mW, the peak power was estimated to be 0.13 kW. The compressed 30 GHz-repetitive optical pulse train was incident into the HNLF to broaden the spectrum in the near-infrared region. The measured and calculated spectra were in good agreement (Fig. 2a). See Methods for details of the simulation. Figure 2b shows comb-resolved spectra at 1550 nm observed with a high-resolution optical spectrum analyzer (OSA) at the HNLF input (black line) and output (red line). The contrast was approximately 55 dB at the input and 40 dB at the output; the spectral contrast at the HNLF output was thus lower than that at the input. Spectra were also recorded in the 1350 nm-1400 nm, 1525 nm-1625 nm, and 1650 nm-1700 nm ranges, where similarly well-resolved comb modes were observed. The spectral contrast is discussed in detail in the next subsection. The broadband comb in the near-infrared region broadened with the HNLF was incident in a 10 cm-long cPPLN-WG to generate the second- to fourth-order harmonics; the infrared comb was converted into broadband combs in the visible range. The cPPLN-WG had a poling period that varied linearly from 12.8 μm to 19.4 μm and was designed to satisfy the quasi-phase-matching condition of second harmonic generation from the wavelength range 1350 nm-1600 nm to 675 nm-800 nm. The average power incident into the cPPLN-WG was 700 mW; the power per mode of the incident comb exceeded 10 μW in the design wavelength range of the cPPLN-WG. Figure 2c shows the spectrum of the broadband comb output from the cPPLN-WG observed with an OSA (solid green line) through a multi-mode fiber (core diameter: ~50 μm). Figure 2d shows the spectrum at a wavelength of 800 nm observed in the same way with high resolution, and we obtained well-resolved comb modes. The CCD image sensor used in the high-dispersion spectrograph in which the comb system will be installed begins to saturate when the number of photons detected per pixel reaches 10^5 [36]. Considering the pixel area, the imaging area of the comb, the quantum efficiency of the sensor and the optical coupling, the signal begins to saturate when the photon number of a comb mode reaches 10^8. We assume that the spectrograph can use a comb mode as a wavelength reference if the mode has 1/10 of the saturation photon number at an exposure time of 1 s. In other words, we defined the available wavelength range as that in which we can obtain 10^7 photons/(s·mode). Then, the available harmonic component ranges are 664 nm-873 nm, 453 nm-543 nm, and 350 nm-408 nm, as shown in Fig. 2c. The available wavelength coverage of the obtained comb reaches approximately 62 % of the visible wavelength region in the frequency domain when the visible region is defined as 360 nm-830 nm. This is the best coverage for a visible-range comb with a mode spacing in the 30 GHz class.
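The 10^7 photons/(s·mode) criterion corresponds to only picowatts of optical power per comb mode, as the sketch below makes explicit (our illustration; it uses nothing beyond the photon-energy relation E = hc/λ).

```python
# Optical power corresponding to the availability criterion of
# 1e7 photons/(s*mode); pure bookkeeping with P = N * h * c / lambda.
h, c = 6.62607015e-34, 299_792_458.0  # Planck constant (J*s), speed of light (m/s)

def power_per_mode(photon_rate: float, wavelength_m: float) -> float:
    """Optical power (W) of a comb mode delivering photon_rate photons per second."""
    return photon_rate * h * c / wavelength_m

for lam in (400e-9, 500e-9, 800e-9):  # representative visible wavelengths
    print(f"{lam*1e9:4.0f} nm: {power_per_mode(1e7, lam)*1e12:5.2f} pW per mode")
```

Only picowatt-level powers per mode therefore need to reach the spectrograph, even though watt-level average powers are required earlier in the chain to drive the nonlinear spectral broadening.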
Spectral contrast

For precise RV measurement with a high-dispersion spectrograph, the imaged comb spectrum must have a high contrast. The spectral contrast is primarily determined by the quantity of amplified spontaneous emission (ASE) and the UMSR; the effect of ASE is not negligible for spectral observation with a spectrograph because of its wide resolution bandwidth. In this study, we assume that the quantity of ASE is sufficiently suppressed in the visible region because the final optical cavity is placed after the final EDFA and because the wavelength conversion of the ultrashort optical pulses by nonlinear optical effects filters the ASE in the frequency and time domains, respectively. Therefore, we considered the UMSR to be the dominant factor determining the contrast. We measured the UMSR of the comb at the HNLF input, the HNLF output, and the cPPLN-WG output using a CW laser as a probe. For details of the measurement procedures, see Methods. Figure 3 shows the UMSR of the comb at the HNLF input (1542 nm, open blue circles), the HNLF output (1542 nm, filled red circles), and the cPPLN-WG output (514 nm, green diamonds). The light blue line shows the UMSR of the comb output from the three mode-filtering cavities. This is almost equivalent to the comb at the HNLF input and can be calculated from the ratios of the cavity FSRs to the comb frep and the finesse of the cavities (~100). When the order of a certain transmitted mode is taken to be zero, the comb modes with orders at integer multiples of 130 are transmitted modes, and the others are unnecessary modes. In each wavelength range, the UMSR can be determined by measuring the suppression ratio from mode order 0 to 65, owing to the symmetry of the transmittance of a Fabry-Perot cavity. The measured minimum UMSR of the comb at the HNLF input was ~65 dB at a wavelength of 1542 nm, which agreed well with the calculated value. The minimum UMSR of the comb at the HNLF output at 1542 nm was degraded to ~40 dB. The minimum UMSR at 1350 nm was also ~40 dB. It is known that self-phase modulation induced in the HNLF reduces the UMSR [13,14]; a degradation of 20 dB-25 dB was observed here. This is consistent with the degradation in the suppression ratio observed in the spectra of the comb at the HNLF input and output shown in Fig. 2b.
For the comb output from the cPPLN-WG, only the five signals with low UMSRs could be measured, since the signal-to-noise ratios (SNRs) of the beat signals at 514 nm were low. The minimum UMSR was ~40 dB. We did not observe any significant degradation of the UMSR in the harmonics. Therefore, we believe that a UMSR level (~40 dB) similar to that of the fundamental comb is obtained at other visible wavelengths. When comb spectra are imaged using a spectrograph, asymmetry between the unnecessary modes on the short- and long-wavelength sides of the transmitted modes causes shifts in the spectral positions of the optical comb modes, resulting in RV measurement errors. In particular, the asymmetry causes a significant error when the suppression ratio is low. Here, we calculated the transmitted-mode spectral center-of-gravity shift [13] for a ±9th-order unnecessary-mode pair with the lowest suppression ratio (40 dB), assuming that the power difference of the mode pair was 0.5 dB (about 10 %). As a result, the estimated frequency shift was 24 kHz. This corresponds to an RV shift of 1.2 cm/s for the Doppler method in the 500 nm wavelength region, which is sufficiently small.

Spectral stability and device durability

First, we investigated the long-term spectral stability of the broadband 30 GHz-spacing comb in the visible range. Figure 4 shows spectra obtained at 4 h intervals over 36 h; there was no significant change in the comb spectrum. Even after more than a year of intermittent use, there was no noticeable change in the output spectrum. In this system, much of the optical path is composed of polarization-maintaining fibers, which suppress temporal fluctuations in the spectrum due to changes in the polarization state caused by environmental changes such as variations in temperature and atmospheric pressure. Next, we discuss the durability of the optical devices. Compared with frequency combs based on solid-state lasers such as titanium-sapphire lasers, fiber-laser-based combs are robust, almost maintenance-free, and have excellent long-term functionality. In particular, Er-doped-fiber-laser-based optical combs have been widely studied [37][38][39]. It is also important to remember that the durability of the optical devices depends on the broadband comb generation scheme in the visible region. In this study, the HNLF and the ridge-type cPPLN-WG, which are known for their high durability, are responsible for the spectral broadening and wavelength conversion of the comb, respectively. As a result, the generated visible comb power is as low as 22.4 mW over all wavelengths (360 nm-830 nm); there is a low risk of green-induced infrared absorption and other phenomena that can cause damage to nonlinear optical crystals. In fact, despite more than a year of continuous operation, the HNLF and cPPLN-WG have not needed to be replaced, and no power degradation or spectral change has been observed. We can expect the system to operate for several years without their replacement.

Discussion

In this section, we discuss the possibility of generating a 30 GHz-spacing comb over almost the entire visible wavelength region with minor modifications to the parameters used in the abovementioned scheme. Specifically, we assume the following changes: (1) increase the output power of EDFA#3 to broaden the output spectrum from the HNLF, and (2) extend the chirp range of the poling period of the cPPLN-WG to match the infrared spectrum of the comb output from the HNLF.
Here, we estimate how broad a spectrum can be achieved by the above improvements. Considering the available wavelength range of the third harmonic, which has the lowest power among the harmonics generated in this study (Fig. 2c), the power at the short-wavelength end (453 nm) appears to be limited by the fundamental comb power (40 μW per comb mode at 1359 nm). Thus, we assumed this power per comb mode as a requirement for high harmonic generation. On the other hand, the long-wavelength end (543 nm) in this work seems to be limited by the design of the cPPLN-WG, which can be improved through optimization. Figure 5 shows simulation results revealing how the spectrum of the comb output from the HNLF changes when the average power of the optical pulses input to the HNLF is increased without changing the HNLF or the spacing frequency (30 GHz) of the comb used in this study. The HNLF length was adjusted to the value at which the calculated spectrum was broadest for each power. In the simulation, when the average power input to the HNLF was 3 W, the wavelength range in which the power per mode of the output comb exceeded 40 μW increased to 1279 nm-1761 nm. By designing the cPPLN-WG to satisfy the phase-matching condition in this wavelength range, the available spectral ranges of the second, third, and fourth harmonics were estimated to be 640 nm-881 nm, 426 nm-587 nm, and 320 nm-440 nm, respectively, which corresponds to 91 % of the frequency range in the visible wavelength region.
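The coverage percentages used here can be reproduced with a few lines of interval arithmetic; the sketch below (ours) clips the quoted harmonic bands to the 360 nm-830 nm visible window, merges overlaps, and compares frequency widths. It returns the 91 % figure for the extended bands and about 63 % for the realized bands, i.e., the quoted 62 % to within a percentage point, the small difference presumably reflecting rounding of the band edges.

```python
# Fraction of the visible band (360-830 nm) covered by a set of wavelength
# intervals, evaluated as frequency width of the union / frequency width of band.
c = 299_792_458.0

def coverage(bands_nm, visible=(360.0, 830.0)) -> float:
    lo, hi = visible
    # Clip each band to the visible window, then merge overlapping intervals.
    clipped = sorted((max(a, lo), min(b, hi)) for a, b in bands_nm if b > lo and a < hi)
    merged = []
    for a, b in clipped:
        if merged and a <= merged[-1][1]:
            merged[-1][1] = max(merged[-1][1], b)
        else:
            merged.append([a, b])
    width = lambda a, b: c / (a * 1e-9) - c / (b * 1e-9)  # frequency width, Hz
    return sum(width(a, b) for a, b in merged) / width(lo, hi)

realized = [(664, 873), (453, 543), (350, 408)]  # bands obtained in this work
extended = [(640, 881), (426, 587), (320, 440)]  # bands estimated for 3 W input
print(f"realized bands: {coverage(realized):.0%}")  # ~63 % (quoted: 62 %)
print(f"extended bands: {coverage(extended):.0%}")  # 91 %
```

Note that the extended third- and fourth-harmonic bands overlap (426 nm-440 nm), so taking the union rather than the sum of the bands is what yields 91 % rather than an overestimate.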
We realized a broadband frequency comb in the visible range based on an Er-doped fiber laser for the wavelength calibration of a high-dispersion spectrograph for astronomical observations. The mode spacing, available spectral coverage, and spectral contrast of the realized comb reached 30 GHz, 62 % of the visible wavelength region, and 40 dB, respectively. The results also showed excellent potential as a practical astro-comb for high-precision RV measurements, with a long-term stable spectrum, durable nonlinear optical devices, and easy frequency reproducibility using a wavelength-stabilized laser. Furthermore, simulations showed that if the comb power input into the HNLF were increased to 3 W, the spectrum of the output comb would cover 91 % of the visible wavelength region. This high-performance, easy-to-use, broadband, visible, and wide-mode-spacing comb will be a powerful tool that will encourage the widespread use of astro-combs and take astronomical research in such fields as exoplanet exploration and the accelerated expansion of the universe to the next stage. Such combs will open the door to applications where the heterodyne-beat method is inapplicable, as well as provide existing applications with both inspiration and benefit.

Methods

Fully phase-stabilized Er-doped-fiber-based frequency comb

The comb source was an Er-doped-fiber-based mode-locked laser with an frep of approximately 230 MHz. The output of the comb source was divided into three branches, one of which was used for visible-range comb generation as the main branch. The second branch was used to detect an fCEO signal with an f-2f interferometer along with the frep signal. The frep and fCEO were phase-locked to reference frequencies based on an atomic clock by controlling the laser cavity length with an intra-cavity piezoelectric transducer and the pump power of the mode-locked laser, respectively. The third branch was used to detect the beat frequency (fbeat) between the acetylene-stabilized laser and the nearest comb mode. The configuration of the comb, except for the main branch, was essentially similar to that described in our previous work [38].

Reference laser for locking mode-filtering cavities

In this study, we used a reference laser whose frequency matched one of the comb modes to stabilize the lengths of the mode-filtering cavities described below, so that the cavities transmitted the comb. To obtain such a laser, we employed a 1542 nm laser frequency-stabilized to an absorption line of acetylene (13C2H2, ν1 + ν3 P(16)) [29] and feed-forward control with an in-line AOM [34]. The output of the acetylene-stabilized laser was divided into two branches, one of which was used to detect the fbeat signal with the nearest comb mode. The output from the other branch was frequency-shifted by fbeat in the AOM so that its frequency matched that of the nearest comb mode. We used it as a reference laser to stabilize the optical cavity lengths so that the comb modes were transmitted through the cavities. The advantage of using an acetylene-stabilized laser is that the same order of the comb mode can always be used when detecting the fbeat signal between the acetylene-stabilized laser and the comb mode. Here, we set frep and fCEO at 230.875 909 MHz and +30 MHz, respectively. Then, from the equation ν(n) = n frep + fCEO, where n and ν are the comb-mode order and comb-mode frequency, respectively, the frequency of the 841 879-th comb mode is approximately 194 369 609 393 kHz; a beat frequency fbeat of approximately 40 MHz is obtained between this comb mode and the laser stabilized on the P(16) line of acetylene (194 369 569 384(5) kHz [40]).

Mode-filtering cavities

Three Fabry-Perot cavities were used to extract one of every 130 modes of the comb source and obtain a comb with a wide mode spacing (30 GHz) and a high UMSR. To realize this, the frequencies of the cavity transmission modes and the comb modes must be matched as precisely as possible every 30 GHz. In this study, we employed optical cavities for which an integer multiple of the FSR matched 30 GHz. The advantage of this design is that it allows a high suppression ratio for the comb mode adjacent to the transmitted comb mode at a relatively low cavity finesse. Furthermore, by appropriately selecting different FSRs for the three cavities, the overall UMSR could be increased. We set the FSR of each cavity at 2.00 GHz (= 130frep/15), 1.77 GHz (= 130frep/17), and 2.14 GHz (= 130frep/14) in this study. In this design, 15, 17, and 14 times the respective FSRs match the mode spacing of the transmitted comb, 30 GHz. We used the reference laser, whose frequency was matched with the nearest comb mode as described above, to stabilize the cavity FSR by controlling the cavity length so that the reference laser passed through the cavity. When stabilizing each cavity FSR, we coarsely adjusted the cavity length and, by scanning the cavity length, selected the cavity mode at which the observed total transmitted comb power was maximized. As a result, the FSR of each cavity was closest to the designed value (a rational multiple of frep), and a high transmittance over a broad spectral region was obtained for the extracted comb modes. The reflectance of the cavity mirrors was 97 %, corresponding to a cavity finesse of 100. The group delay dispersion of the mirrors was designed to be less than 0.15 fs^2 in the 1520 nm-1600 nm wavelength range, and the wavelength dependence of the FSR caused by the group delay dispersion was negligible in this range.
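As a cross-check of this cavity design, the sketch below (our reconstruction; the paper states only the finesse and the FSR ratios) cascades the standard Airy transmission of a lossless Fabry-Perot cavity, T(δ) = 1/(1 + (2F/π)² sin²δ), over all unnecessary mode orders.

```python
# Minimum unnecessary-mode suppression ratio (UMSR) of three cascaded
# Fabry-Perot cavities with FSRs of 130*frep/15, /17 and /14 and finesse 100.
import math

F = 100                          # cavity finesse (mirror reflectance 97 %)
coeff = (2 * F / math.pi) ** 2   # coefficient of the Airy denominator
divisors = (15, 17, 14)          # FSR_i = 130 * frep / divisor_i

def suppression_db(k: int) -> float:
    """Total attenuation (dB) of the comb mode k steps away from a transmitted mode."""
    total = 1.0
    for d in divisors:
        # For FSR = 130*frep/d, the phase advance per comb-mode step is pi*d/130.
        total *= 1.0 / (1.0 + coeff * math.sin(math.pi * k * d / 130) ** 2)
    return -10.0 * math.log10(total)

# By symmetry only offsets 1..65 need checking (orders 0 and 130 are transmitted).
worst = min(range(1, 66), key=suppression_db)
print(f"weakest suppression: k = +/-{worst}, UMSR = {suppression_db(worst):.0f} dB")
```

The script reports the weakest suppression at the ±9th-order modes, at about 64 dB, consistent with the ±9th-order pair singled out in the contrast analysis above.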
The minimum UMSR calculated from the three cavities was 64 dB, as shown by the light blue line in Fig. 3. To filter the comb mode frequencies with three optical cavities, it is necessary to obtain the desired FSRs as precisely as possible. Therefore, we newly developed an optical cavity whose beam alignment for resonance is not broken even if the cavity length is changed by a few centimeters. Two cavity mirrors are held on highly stable kinematic mounts for optical-axis adjustment and installed facing each other. One mount is directly fixed to the aluminum baseplate, and the other is fixed to the baseplate via a cross-roller stage. The distance between the two mounts is set by a Super Invar rod with a low thermal-expansion coefficient, and a micrometer head and a piezoelectric transducer are inserted for coarse and fine tuning of the cavity length. The high linearity of the cross-roller stage allows the optical axis of the cavity to be maintained for resonance even when the stage is moved to change the cavity length by a few centimeters. We employed a Pound-Drever-Hall (PDH) locking scheme [41] to stabilize the cavity FSR. To obtain the error signal for stabilization, the reference laser was phase-modulated with a 30 MHz sinusoidal voltage applied to an in-line EOM placed in front of the in-line AOM. The modulated reference laser was divided into three parts, each of which was incident on one of the cavities. The light reflected from each cavity was extracted by an optical circulator and incident on a photodetector, and the cavity FSR was then feedback-controlled using the error signal obtained by demodulating the detected signal. To avoid the comb light mixing into the optics used for the PDH locking, the comb and the reference laser beams entered the cavity from opposite directions and with orthogonal polarizations. The comb and reference laser were coupled to the PM fiber at the first cavity input and at the in-line AOM output, respectively, through a half-wave plate, a quarter-wave plate, and a polarizer. The limited extinction ratio of a polarizing beam splitter (PBS) means that some of the comb light amplified in EDFA#2 and EDFA#3 leaks from the PBS and is incident on the detector used for PDH locking, making the PDH locking unstable. To avoid this, a narrow optical bandpass filter installed before the detector attenuated most of the comb spectrum. Furthermore, to suppress the leaked comb power relative to the reference laser power, the reference laser was amplified using EDFA#1 before the leaked comb was mixed in, and it was attenuated to an adequate power in front of the detector.

Chirped-pulse amplification and spectral broadening

The 30 GHz mode-spacing comb in the 1550 nm wavelength region was amplified with EDFAs and spectrally broadened with an HNLF. To obtain optical pulses with the peak power required for spectral broadening, we employed chirped-pulse amplification [35], as shown in Fig. 1a. First, the pulses were chirped with an NDF. Two EDFAs were employed, one between each pair of the three cavities, to amplify the optical power to 1.6 W after EDFA#3. Although the pulse chirp induced by the NDF was gradually compensated for by the PM-SMF and PM-EDF used in this stage, the chirp remained large until the pulse passed through Cavity#3, and no significant nonlinear effect was observed in the amplification process. After Cavity#3, the pulse chirp was compensated to make the pulses chirp-free using a PM-SMF with a length of about 3 m. The temporal width of the compressed pulse measured by FROG was 180 fs. From the average power of 880 mW, the pulse peak power was estimated to be 0.13 kW.
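These pulse parameters can be cross-checked with the usual pulse-train relation P_peak ≈ s·P_avg/(frep·τ), where the shape factor s ≈ 0.88 for a sech² pulse is our assumption, since the deconvolved pulse shape is not stated.

```python
# Order-of-magnitude check of the quoted peak power from the average power,
# repetition rate and FWHM pulse width (sech^2 shape factor assumed).
p_avg = 0.880    # average power, W
frep = 30e9      # pulse repetition rate after mode filtering, Hz
tau = 180e-15    # FWHM pulse width from FROG, s
shape = 0.88     # peak-power shape factor for a sech^2 pulse (assumption)

energy = p_avg / frep          # pulse energy, ~29 pJ
p_peak = shape * energy / tau  # ~0.14 kW
print(f"pulse energy {energy*1e12:.1f} pJ, peak power {p_peak:.0f} W")
```

The result (~0.14 kW) agrees with the quoted 0.13 kW to within the assumed shape factor.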
The compressed pulse was incident into the HNLF, with a length of 205 cm, to broaden the 30 GHz-spacing comb spectrum in the near-infrared region.

Wavelength conversion to visible region

The comb spectrally broadened with the HNLF was incident in the ridge-type cPPLN-WG. The cPPLN-WG had a poling period that varied linearly from 12.8 μm to 19.4 μm, corresponding to the quasi-phase-matching condition of second harmonic generation from the wavelength range 1350 nm-1600 nm to 675 nm-800 nm. The cross-section of the waveguide was a rectangle 7.8 μm high and 9.5 μm wide, and the length along the optical axis was 10 mm. Using cascaded higher harmonic generation in the PPLN waveguide [31][32][33], we expected the wavelength of the comb spectrum to be converted to 675 nm-800 nm (second harmonic), 450 nm-533 nm (third harmonic), and 338 nm-400 nm (fourth harmonic).

Spectral broadening simulation

We simulated the spectral broadening with the split-step Fourier method, which is widely used to calculate optical pulse evolution through optical fibers [42]. The pulse waveform input into the HNLF, measured by FROG, was used as the initial condition for the calculation. In Fig. 2a, we set the repetition frequency and average power at 30 GHz and the measured value of 880 mW, respectively. In Fig. 5, we used average powers of 2 W and 3 W. The group velocity dispersion and nonlinear coefficient of the HNLF were −0.0022 ps^2/m and 20 /(W·km) at a wavelength of 1550 nm. The wavelength dependence of the group velocity dispersion and nonlinear coefficient was taken into account in the calculation, and the calculated spectrum was in good agreement with the experimental result, as shown in Fig. 2a. Therefore, if the average power can be increased to 2 W or 3 W without significantly changing the pulse shape, we expect a broader near-infrared comb spectrum to be obtained, as shown in Fig. 5.

Unnecessary-mode suppression ratio measurement

It is difficult to measure the UMSR even when using a high-resolution OSA. In fact, only the strong unnecessary modes were observed between the comb modes with a 30 GHz mode spacing in the 1550 nm region, as shown in Fig. 2b. The UMSRs near the transmitted comb modes cannot be obtained correctly from this spectrum because the modes of a comb source with a 230 MHz spacing cannot be resolved with the OSA. In addition, no unnecessary modes were observed even when Fig. 2d was presented on a logarithmic scale (not shown here), because of the lower frequency resolution of the OSA in the 800 nm region. In this study, we observed heterodyne beats for the UMSR measurement. We first measured the SNR of the beat between a CW laser and a transmitted comb mode as the reference SNR, and then measured the SNRs of the beats between the CW laser and the unnecessary comb modes. The ratio between them is the UMSR. If the CW laser power is sufficiently larger than the total comb power, including the mode power P, and than the noise-equivalent power of the photodetector, the SNR of the heterodyne beat is expressed as P/(2hνΔf) and does not depend on the CW laser power, where h is the Planck constant, ν is the optical frequency, and Δf is the resolution bandwidth. Therefore, even if the CW laser power changes when the beat signal with each comb mode is measured by changing the CW laser frequency, it has little effect on the UMSR measurement results.
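To see why the heterodyne method can resolve suppression ratios far beyond an OSA's dynamic range, the sketch below evaluates the quoted shot-noise expression SNR = P/(2hνΔf). The ~10 μW mode power is of the order quoted for the transmitted modes at the cPPLN-WG input; the 1 kHz resolution bandwidth is an illustrative assumption, not a value from the paper.

```python
# Shot-noise-limited SNR of the heterodyne beat between a strong CW laser and
# a comb mode of power P, using SNR = P / (2 h nu df) as given in the text.
import math

h, c = 6.62607015e-34, 299_792_458.0
nu = c / 1542e-9      # optical frequency at 1542 nm, Hz
df = 1e3              # resolution bandwidth, Hz (illustrative assumption)

def snr_db(p_mode: float) -> float:
    return 10.0 * math.log10(p_mode / (2.0 * h * nu * df))

p_transmitted = 10e-6  # ~10 uW per transmitted mode (order of magnitude from text)
print(f"transmitted mode: {snr_db(p_transmitted):.0f} dB")             # ~106 dB
print(f"mode 65 dB down:  {snr_db(p_transmitted * 10**-6.5):.0f} dB")  # ~41 dB
```

Even a mode suppressed by 65 dB thus beats tens of dB above the shot-noise floor, and the UMSR follows directly as the ratio of the two SNRs.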
Using this approach, we measured the UMSRs of the comb at the HNLF input, at the HNLF output, and at the cPPLN-WG output. For the SNR measurement at the HNLF input and output, we offset-locked a 1542 nm CW laser to each mode of the 230 MHz-spacing comb source and measured the SNRs of the beat signals between the CW laser and the transmitted and suppressed modes of the 30 GHz-spacing comb at 1542 nm. For the evaluation at the cPPLN-WG output, the 1542 nm CW laser, which was offset-locked to the comb source before passing through the cavities, was wavelength-converted to 514 nm by third harmonic generation with a dual-pitch PPLN waveguide [43], and beat signals with the 30 GHz-spacing comb at a wavelength of 514 nm were observed. We measured the SNRs of the beat signals by offset-locking the CW laser to the transmitted and suppressed modes of the comb source, as in the evaluation at 1542 nm, and determined the UMSRs by calculating their ratio. The offset locking was performed with a control bandwidth of several hundred kHz so that the linewidth of the coherent peak of the in-loop beat spectrum was sufficiently narrow. This enabled us to observe the beat signal at a low resolution bandwidth with a high SNR, and thus made it possible to measure a high UMSR. On the other hand, for a beat signal with a high SNR, it was difficult to distinguish the noise floor from the sideband components around the beat frequency, which was an uncertainty factor in the SNR measurement and thus also in the UMSR measurement. In addition, we considered the frequency response and saturation characteristics of the photodetector, the power fluctuations of the comb, and the measurement uncertainty of the RF spectrum analyzer as uncertainty factors, and we concluded that the measurement uncertainty of the UMSR was several dB.

Figure 1 | Experimental schematic of an Er-fiber-based broad visible-range frequency comb with a 30 GHz mode spacing. a, Schematic of the comb system. (1) frep and fCEO are stabilized to reference frequencies by controlling an intra-cavity piezoelectric transducer and the pump power of the mode-locked laser, respectively. (2) The beat note between the acetylene-stabilized laser and the comb is fed forward to an acousto-optic modulator (AOM) to match the frequency-shifted acetylene-stabilized laser to the comb-mode frequency. We call this frequency-shifted light the reference laser. (3) The acetylene-stabilized laser is phase-modulated with an electro-optic modulator (EOM).
Management of Severe Hypomagnesemia as the Primary Electrolyte Abnormality with a Delayed Onset of Clinical Signs as a Result of Refeeding Syndrome in a Cat

Abstract

This case report describes severe hypomagnesemia in a cat attributed to refeeding syndrome, with an onset of clinical signs from the magnesium deficiency apparent on the twelfth day following initiation of feeding. The patient initially presented in a state of cachexia from apparent malnutrition after being missing from the owner's care for five months. The patient was initially discharged five days after the initiation of feeding, with only a mild hypokalemia apparent and requiring supplementation, and returned for outpatient management. The patient presented through the emergency department on the twelfth day following the onset of feeding with the clinical signs of acute lethargy, vomiting, generalized tremors and a seizure episode, and had a severe total hypomagnesemia on diagnostic bloodwork. The patient's clinical signs resolved following emergency treatment with parenteral magnesium sulfate as a continuous rate infusion, and she was later managed with oral magnesium hydroxide for a prolonged period of time. Electrolyte abnormalities and associated clinical signs typically occur between two and five days after initiation of feeding, and up to ten days after starting food intake in humans with anorexia nervosa. This case report highlights that hypomagnesemia, while not the most common electrolyte disturbance to occur with refeeding syndrome, can occur without other significant electrolyte changes and can cause clinical signs greater than ten days following refeeding of a starving patient. This magnesium deficiency required prolonged treatment, but the patient made a complete recovery.

Introduction

Refeeding syndrome is characterized by metabolic and physiologic abnormalities during refeeding after starvation in both people and animals. Refeeding syndrome has been documented to include relative deficiencies in phosphorus, potassium, magnesium and vitamins, as well as glucose and fluid intolerance, that occur after initiating feeding following a state of starvation or severe malnutrition. 3 The classic electrolyte abnormality associated with refeeding syndrome is hypophosphatemia, which is responsible for most of the clinical consequences reported in human and veterinary patients. 4 Refeeding a starved patient will increase utilization of phosphorus, potassium and magnesium to drive metabolic pathways and act as cofactors for adenosine triphosphate (ATP) synthesis. Increased cellular need, in conjunction with co-transport of potassium and magnesium into the cell with insulin-driven glucose uptake, results in further depletion of these electrolytes. 3 Risk factors for refeeding syndrome reported in veterinary patients include chronic malnourishment conditions that result in malabsorption of nutrients, such as severe intestinal disease or pancreatic insufficiency, anorexia lasting more than 7 days, and rapid weight loss in obese patients. 17 Refeeding syndrome has rarely been reported in veterinary patients. Hypophosphatemia was reported to occur between 12 and 72 hours after enteral feeding was initiated in 9 chronically malnourished cats. 12
Refeeding syndrome was also reported in 2 cats resuscitated and fed after being trapped without access to food for 7 to 12 weeks, and in another cat with hepatic lipidosis fed through an esophageal feeding tube following a 4-week history of decreased appetite and weight loss. 1,3,8,14 Prior to 2017, refeeding syndrome had only been reported in cats in the veterinary literature. A case report in 2019 described management of a dog with prolonged starvation and presumptive refeeding syndrome. 14 The dog in that study developed hypophosphatemia and hypomagnesemia on day 1 shortly after refeeding, but these resolved with supplementation; the dog never developed clinical signs from those abnormalities. Patients at risk of developing refeeding syndrome require significant attention, as overlooking these patients can cause life-threatening consequences such as hemolytic anemia, cardiac failure, neurological dysfunction and respiratory failure. When these patients are identified, a comprehensive nutritional plan, in addition to a treatment plan for other co-morbidities, should be formulated. Currently, there are no evidence-based studies demonstrating the ideal refeeding strategy; 8 however, protocols have been developed in people from which veterinary medicine can extrapolate. There should be careful assessment of patient risk for refeeding syndrome, restoration of fluid balance without overloading the cardiovascular system, initiation of empirical supplementation of phosphate, potassium and magnesium (unless serum concentrations of these electrolytes are increased), and initiation of thiamine and other B vitamins and trace minerals apart from iron. 10 Currently, it is recommended that no greater than 20% of the resting energy requirement (RER) be provided on the first day and that nutritional support be increased gradually over 4-10 days. 4 The following case report describes a cat that presented cachectic, in a state of apparent starvation, after being missing from its owner's care for five months. The cat developed severe hypomagnesemia with clinical signs 12 days after refeeding was initiated. This report highlights the importance of a hypervigilant refeeding strategy, serial electrolyte monitoring and treatment considerations in the medical management of refeeding severely malnourished cats.

Case Report

History

A 2-year-old female spayed domestic short-haired cat was initially presented to our emergency service for poor appetite and severe weight loss upon returning to her owner's care after reportedly being missing for the previous 5 months. Prior to her disappearance, the patient was reported to be healthy and had an appropriate body condition score (BCS). On initial admission, the patient was quiet, alert and responsive, ~8% dehydrated, cachectic and had a large flea burden. She was hypotensive on indirect blood pressure measurement at 78 mmHg (reference range 120-170 mmHg) 5 with an initial body weight of 1.49 kg. Her BCS was assessed as 1-2/9 and her muscle condition score (MCS) as 1/3. Initial bloodwork included a venous blood gas, complete blood count (CBC), serum chemistry panel, and in-house FeLV/FIV test (Siemens, IDEXX Procyte Dx, IDEXX Catalyst One, IDEXX Snap FIV/FeLV Combo).
These diagnostics revealed a mild hyperlactatemia (2.34 mmol/L, reference range 0.5-2 mmol/L), mild hyperkalemia (5.24 mEq/L, reference range 3.5-4.8 mEq/L), mild ionized hypocalcemia (0.95 mmol/L, reference range 1.23-1.4 mmol/L), mild hypochloremia (108 mEq/L, reference range 116-126 mEq/L), and a normocytic, normochromic, non-regenerative anemia (Hct 19.7%, reference range 31.7-48.0%) (Table 1). The patient was viral-negative on the in-house infectious screen. A total magnesium level was not measured at initial presentation, and an ionized magnesium level was not available on the in-house blood gas analyzer (Siemens Rapid 500). Initial treatments for stabilization included an intravenous (IV) bolus of 13 mL/kg of Normosol-R, with a recheck blood pressure measurement post-bolus of 110 mmHg. Once stable, the cat was placed on crystalloid fluids (Normosol-R) at 3 mL/kg/hr IV and was administered oral nitenpyram (Capstar, Elanco). The patient was initially fed a commercial maintenance diet and ate with a large appetite before transfer to the Intensive Care Unit (ICU) at the beginning of the second day of hospitalization.

In the ICU, the patient received additional diagnostics and treatments starting on the second day of hospitalization. Further diagnostic testing included a recheck of her venous blood gas and a total magnesium measurement, along with abdominal and thoracic imaging. On recheck venous blood gas, she had a persistent hyperlactatemia with a normalized potassium level (Table 1). Her total magnesium measurement was normal at 1.96 mg/dL (reference range 1.5-3.0 mg/dL). Thoracic radiographs were unremarkable. An abdominal ultrasound was performed that showed hyperechoic hepatic parenchyma, mild jejunal lymphadenopathy, and gall bladder sludge. Diagnostic tests were also submitted to the commercial laboratory (IDEXX Reference Laboratories), including further infectious screening for additional underlying causes of her anemia. Fecal testing, including giardia antigen testing, and a urinalysis with culture were also submitted to the reference laboratory. Ultimately, results did not reveal any underlying infectious cause for her anemia and no intestinal parasites. Her urinalysis showed rare bacteriuria, but the urine culture was negative for growth. Additional therapeutics added during the second day of hospitalization included the administration of B complex vitamins (2 mL/L, VetOne), thiamine (25 mg SQ every 24 hours, VetOne), Cerenia (1 mg/kg IV every 24 hours, Zoetis), pantoprazole (1 mg/kg IV every 24 hours, Pfizer), Unasyn (40 mg/kg IV every 8 hours, Pfizer), and praziquantel/pyrantel pamoate (Drontal, Bayer) per os once, with the dose repeated in three weeks. The patient was fed a large amount of a commercial diet once at intake through the emergency service (referenced as day 1 of feeding). Further feedings were initially withheld until a nutrition plan was formulated, and planned nutritional intake was started on the second day of hospitalization at 25% of the patient's calculated RER. A diet of Emeraid Intensive Care HDN (EmerAidVet) was offered for refeeding every 6 hours, and the patient ate with a ravenous appetite. Nutrition was increased daily by 25% until reaching 100% of calculated RER on the fifth day of hospitalization/refeeding (a worked example of this ramp follows below). The patient was hospitalized for a total of 5 days. While hospitalized, serial electrolyte levels were monitored every 6 to 12 hours with daily chemistry bloodwork.
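For readers unfamiliar with the arithmetic behind this ramp, a brief sketch follows. The widely used feline resting-energy formula RER ≈ 70 × BW(kg)^0.75 is our assumption, as the report does not state which RER equation the clinicians used.

```python
# Refeeding ramp as described in the report: 25 % of RER on the first day of
# the structured plan, increased by 25 % daily to 100 % of RER.
def rer_kcal_per_day(weight_kg: float) -> float:
    """Resting energy requirement via the common allometric formula (assumed)."""
    return 70.0 * weight_kg ** 0.75

weight = 1.49                     # admission body weight, kg
rer = rer_kcal_per_day(weight)    # ~94 kcal/day for this patient
print(f"RER ~ {rer:.0f} kcal/day")
for day, frac in enumerate((0.25, 0.50, 0.75, 1.00), start=1):
    print(f"ramp day {day}: {frac:.0%} RER = {rer*frac:5.1f} kcal/day "
          f"({rer*frac/weight:4.1f} kcal/kg/day)")
```

Even the first ramp step (~16 kcal/kg/day) exceeds the 5 kcal/kg/day maximum that, as noted in the Discussion below, the NICE guidelines recommend for severely malnourished human patients, which is the basis for the authors' later comment that a more conservative plan could have been implemented.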
Overnight during the second day of hospitalization, the patient developed a mild hypokalemia, and potassium chloride supplementation (Hospira) at 20 mEq/L was added to her crystalloid fluids. This progressed through day 4 of hospitalization, warranting an increase in parenteral potassium supplementation to 60 mEq/L of crystalloid fluids IV. She was started on oral potassium gluconate supplementation (RenaPlus, VetOne) on the fourth day of hospitalization at 1.6 mEq PO every 8 hours, and her parenteral supplementation was weaned. She was progressively anemic on the fifth day of hospitalization, warranting a packed red blood cell transfusion (pRBC, blood type A) at a dose of 15 mL/kg IV. The owner elected discharge from the ICU on the fifth day of hospitalization, given that the patient was stable following transfusion. The patient was discharged from the hospital with amoxicillin-clavulanate (Zoetis) and potassium gluconate at 1.6 mEq PO every 12 hours (RenaPlus, VetOne). A nutrition plan was formulated for the owner to continue at no more than 100% of calculated RER with a Hill's m/d diet. At the time of discharge, the patient's weight was 1.5 kg. The patient presented for a recheck on day 9 following initial presentation/refeeding and was reported to be doing well at home, with the owner adhering to the feeding and medication plan. The patient was stable on physical examination, and her weight was 1.65 kg. Recheck labwork was performed and showed a stable potassium and hematocrit with a continued normal phosphorus level (Table 1). A total magnesium level was not performed due to lack of sample and patient temperament. Given the patient's clinical improvement and stable bloodwork 9 days following refeeding, the owner was instructed to increase feedings to 1.2 times the calculated RER. Oral potassium gluconate supplementation was continued as prescribed, and antibiotics were discontinued given the negative urine culture results. The owner was instructed to recheck in one week, or sooner with concerns.

Clinical Findings

Twelve days following refeeding and initial hospitalization, the patient presented through the emergency service for acute anorexia, vomiting, panting, generalized tremors, and a tonic-clonic seizure. In-house venous blood gas, CBC and chemistry (Siemens Rapid 500, Procyte Dx, Catalyst One) revealed a markedly low total magnesium level (<0.5 mg/dL, reference range 1.9-2.6 mg/dL) as well as a low-normal potassium (3.5 mEq/L, reference range 3.5-4.8 mEq/L) (Table 2). Her weight at presentation was 1.6 kg. Emergent therapeutics included administration of midazolam 0.5 mg/kg IV (Almaject) pending labwork, a magnesium sulfate (Fresenius USA) continuous rate infusion (CRI) at 1 mEq/kg/day IV, and one oral dose of magnesium hydroxide (320 mg PO, Phillips Milk of Magnesia) once the patient's mentation could support oral medication administration. Continuous telemetry monitoring was started, and the patient maintained a normal sinus rhythm throughout hospitalization. The clinical signs of tremoring resolved with supplementation of magnesium as described above. The magnesium sulfate CRI led to normalization of total magnesium levels within 9 hours following initiation of treatment (Table 2). Additional treatments for the patient included potassium supplementation intravenously at 30 mEq/L of crystalloid fluids (Normosol-R).
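As a dosing sketch (ours, for illustration): the 1 mEq/kg/day magnesium sulfate CRI can be translated into milligrams and infusion volume assuming the common 50 % (500 mg/mL) magnesium sulfate injection, which contains about 4.06 mEq of Mg2+ per mL; the formulation strength is our assumption, as it is not stated in the report.

```python
# Convert the magnesium sulfate CRI (1 mEq/kg/day) into mg/day and mL/h,
# assuming 50 % (500 mg/mL) magnesium sulfate heptahydrate injection.
MW_MGSO4_7H2O = 246.47            # g/mol, magnesium sulfate heptahydrate
MEQ_PER_MG = 2.0 / MW_MGSO4_7H2O  # mEq of Mg2+ per mg of the heptahydrate salt

dose_meq_kg_day = 1.0             # prescribed CRI dose
weight_kg = 1.6                   # body weight at re-presentation

meq_day = dose_meq_kg_day * weight_kg  # 1.6 mEq/day
mg_day = meq_day / MEQ_PER_MG          # ~197 mg of the salt per day
ml_h = (mg_day / 500.0) / 24.0         # ~0.016 mL/h of the 50 % solution
print(f"{meq_day:.1f} mEq/day = {mg_day:.0f} mg/day = {ml_h*1000:.0f} uL/h of 50% soln")
```

The microliter-per-hour rate explains why such CRIs are typically delivered diluted into the maintenance crystalloid fluids rather than administered neat.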
On the thirteenth day following initial refeeding, the patient was found to be severely anemic again and received a second pRBC transfusion (blood type A, 15 mL/kg IV administered over 4 hours). Given her stable magnesium levels, she was restarted on enteral nutrition at 25% of calculated RER. The magnesium sulfate CRI was tapered and discontinued on day 14 following initial refeeding, given normal total magnesium levels (Table 2). On day 15 following initial refeeding, she was stable, but her total magnesium levels had decreased, so she was started on oral supplementation with magnesium hydroxide (Phillips Milk of Magnesia, Bayer; 240 mg PO every 12 hours), which stabilized and then increased her total magnesium levels, leading to the discontinuation of oral therapy by day 16 (Table 2). Her nutritional supplementation was increased from 25% to 100% of RER by day 15, and she was again eating with a ravenous appetite. She was discharged from the hospital on day 16 following initial refeeding with a plan for outpatient monitoring and treatments. Her body weight was 1.62 kg at the time of discharge, and the only medication that she required was oral potassium gluconate (RenaPlus, VetOne) supplementation at a dose of 0.8 mEq PO every 12 hours. The owner was instructed to continue to feed only the Hill's m/d diet at 100% of calculated RER per day.

The patient presented for a recheck examination 18 days following initial refeeding and hospitalization. She was doing well, with no concerns expressed by the owner with the exception of a single episode of apparent melena. The owner was adhering to the strict nutritional guidelines established and noted that the patient was eating with a ravenous appetite. Recheck labwork showed a progressive total hypomagnesemia (0.94 mg/dL, reference range 1.9-2.6 mg/dL) without any of the previously noted clinical signs (Table 2). She was treated as an outpatient with sucralfate (Par Pharmaceuticals; 100 mg PO every 8 hours) for the apparent melena, as well as magnesium hydroxide (Phillips Milk of Magnesia; 3 mL (240 mg) PO every 12 hours). On subsequent rechecks, the patient's total magnesium levels were stable on oral therapy, and the melena had improved to resolution according to the owner. Her nutrition plan was adjusted to allow her 1.2 times calculated RER. Her oral magnesium hydroxide dose was slowly tapered and was discontinued 30 days following initial refeeding, as was her potassium supplementation. Her weight at that recheck was 2.03 kg (day 30), and she was reported to be doing well by the owner but continued to act ravenous while eating at home. Her feedings were increased to 1.8 times RER at that time. On recheck evaluation 32 days following initial refeeding, the owner reported a new-onset diarrhea and recurrent melena. Recheck labwork at that time showed a progressive hypokalemia (2.7 mEq/L, reference range 3.5-4.8 mEq/L) and mild total hypomagnesemia (1.43 mg/dL, reference range 1.9-2.6 mg/dL) (Table 2). The patient was restarted on potassium supplementation (RenaPlus, VetOne; 0.8 mEq PO every 12 hours), magnesium hydroxide (Phillips Milk of Magnesia, Bayer; 80 mg PO every 12 hours), metronidazole (MixLab compounding pharmacy) and sucralfate (Par Pharmaceuticals). Given the persistence of her apparent hypomagnesemia and gastrointestinal signs, she had a gastrointestinal malabsorption blood panel (Texas Agricultural and Mechanical University) to measure serum B12, folate, TLI and PLI, as well as a repeat abdominal ultrasound.
She was found to be mildly hyperfolatemic (30.6 mcg/L, reference range 9.7-21.6 mcg/L), and on repeat ultrasound she had a new multifocal small intestinal functional ileus with resolution of the hyperechoic hepatic parenchyma and jejunal lymphadenopathy. On subsequent recheck examinations, the patient was discontinued from oral potassium gluconate supplementation 51 days following initial refeeding and hospitalization. The last recheck performed on the patient was 55 days post-initial refeeding, at which time her magnesium levels had remained stable on supplementation (Table 2). Her weight at that recheck was 2.68 kg. Further dose adjustments were not pursued at that time because the owner was moving out of state. Seventy-five days following initial hospitalization, a recheck through a new veterinary provider revealed a normal magnesium level, so her oral magnesium supplementation was tapered again. Further communication with the owner revealed that the patient was successfully tapered off of oral magnesium supplementation and is clinically healthy. The patient received best-practice veterinary care, and all diagnostics and treatments performed were consented to by the owners.

Discussion

To our knowledge, this is the first case report describing hypomagnesemia as the primary electrolyte disturbance attributed to refeeding syndrome in a cat following prolonged starvation, as well as an apparent delay in the onset of clinical signs compatible with hypomagnesemia. The electrolyte derangements characteristic of refeeding syndrome are usually clinically apparent within two to five days following initiation of feeding, with clinical consequences not suspected greater than 10 days following feeding. 1,4,10 This patient developed a mild total hypomagnesemia on the fifth day following refeeding without clinical signs, and only a mild hypokalemia, which stabilized on oral potassium supplementation. She developed anorexia, vomiting, generalized tremors and seizure activity on the twelfth day following initiation of refeeding and intake to our hospital. These clinical signs were attributed to total body magnesium depletion, as indicated by the markedly low total magnesium levels on in-house bloodwork. The clinical signs of her hypomagnesemia improved with both parenteral and enteral supplementation, leading to the resolution of her clinical signs. This patient was serially monitored and required long-term electrolyte supplementation, with no other potential cause of her hypomagnesemia identified. This patient responded well to treatment, has recovered completely, and has not required permanent supplementation of magnesium. The hypomagnesemia in this patient is consistent with refeeding a starved or malnourished patient. The pathophysiology of starvation can be divided into an acute response (occurring within the first 2 weeks of starvation) and a delayed response (occurring 10 days after the onset of starvation). In the acute phase of starvation, there are metabolic alterations such as decreases in insulin and triiodothyronine (T3) and increases in glucagon, growth hormone, catecholamines and plasma cortisol. The end result of these hormonal alterations is enhanced hepatic glycogenolysis, gluconeogenesis and skeletal muscle proteolysis, thereby facilitating lipolysis. The brain survives on glucose generated from protein catabolism and gluconeogenesis in the liver. During the delayed response to starvation, there is a major shift from using carbohydrate to using fat as the main energy source.
Gluconeogenesis is reduced during this period, and protein catabolism is minimized. Ketone bodies from hepatic oxidation of fatty acids are used by most of the tissues for energy. At this stage, the brain is reliant on ketone bodies as an energy substrate. 17 During starvation, depletion of electrolytes occurs from lack of dietary intake, with additional electrolyte losses from the catabolism of fat and muscle. 1 During refeeding, intake of carbohydrate stimulates insulin release, resulting in conversion from a catabolic to an anabolic state, which increases cellular demand for phosphorus, potassium and water. Newly synthesized cells require potassium for maintenance of electrical gradients and translocate serum potassium intracellularly. Glycolysis and protein synthesis resume following refeeding, which requires the cellular uptake of phosphorus and magnesium. The insulin released during refeeding increases cellular activities, thus increasing the cellular requirement for magnesium. 17 Hypophosphatemia is the primary electrolyte abnormality characteristic of refeeding syndrome. 4,14,16 Hypophosphatemia has been a prominent feature of refeeding syndrome in all previously reported feline cases and was often associated with hemolysis. 1,3,8,14 Our patient did not develop hypophosphatemia on bloodwork throughout any of her evaluations; rather, her primary electrolyte abnormality was a total hypomagnesemia. Hypomagnesemia is a variable finding in patients with refeeding syndrome. 3,14 In a previous study, low serum total magnesium was only detected in one of the cats reported with refeeding syndrome, but it was not measured in every case. 1,3,8,14 In that cat, hypomagnesemia developed on day 3 and improved to a low-normal value with supplementation. The mechanism for hypomagnesemia in refeeding syndrome is not clear and is likely multifactorial, resulting from intracellular movement of magnesium ions into cells with carbohydrate feeding and from poor dietary intake of magnesium. 7 Upregulation of carbohydrate metabolism may also explain the increased demand for magnesium and thiamine, which then leads to neurological and neuromuscular complications. 5 Many cases of hypomagnesemia do not appear clinically significant, but severe hypomagnesemia can result in clinical complications, some of which were noted in this patient. 7 Severe hypomagnesemia can result in cardiac dysrhythmias, gastrointestinal ileus/abdominal discomfort, anorexia and neuromuscular features such as tremors, paresthesia, tetany, seizures, irritability, confusion, weakness and ataxia. 7 Our patient presented with vomiting, weakness, generalized tremors and a tonic-clonic seizure on day 12 following initial refeeding. Total body magnesium concentration is affected by dietary intake, gastrointestinal function, hormonal balance, redistribution of the magnesium cation, and excretion into a third body space or urine. 12 A large amount of magnesium is absorbed in the small intestine, and gastrointestinal disease (inflammatory bowel diseases, malabsorptive syndromes) can lead to hypomagnesemia. 11 This patient was screened for an underlying enteropathy through serial ultrasound examinations and through a malabsorption panel (completed through Texas Agricultural and Mechanical University) without convincing evidence for an enteropathy, and she was prophylactically treated for intestinal parasites and was fecal-negative. Her renal function remained adequate throughout hospitalization, making renal losses of magnesium less likely.
An underlying cause of hypomagnesemia was not identified in this patient; therefore, her hypomagnesemia was attributed solely to refeeding syndrome. The patient in this case report had persistent, mild hypokalemia and ionized hypocalcemia. All reports of cats with refeeding syndrome have documented hypokalemia. 14 Magnesium is an important mediator of both hypocalcemia and refractory hypokalemia. 1 Low magnesium impairs potassium reuptake in the nephron, resulting in excess losses, and may also impair the cellular transport of potassium through its impact on magnesium-dependent enzymes such as the Na-K-ATPase. 2,15 The hypokalemia secondary to hypomagnesemia may be refractory to parenteral potassium supplementation but is generally responsive once magnesium is corrected. 9 While our patient was initially hypokalemic starting the second day following refeeding, she stabilized with potassium supplementation alone and remained stable on oral potassium supplementation at a recheck nine days following refeeding. Magnesium deficiency may also lead to refractory hypocalcemia. Approximately one-third of human patients with low serum magnesium may concurrently have low serum calcium. 11 Factors contributing to this include impaired release of parathyroid hormone, diminished parathyroid hormone synthesis, and skeletal resistance to the action of parathyroid hormone, all resulting from impaired magnesium-dependent adenylate cyclase function. 9 The cat in this report had a mild ionized hypocalcemia which never required supplementation. The patient in this study developed diarrhea 12 days after refeeding, during her emergency presentation for severe hypomagnesemia. Diarrhea can develop solely from low magnesium; however, this occurred shortly after treatment with parenteral magnesium sulfate and continued intermittently when magnesium levels had improved. While diarrhea is a known adverse effect of oral magnesium hydroxide, 16 it has not been noted as an effect of parenteral magnesium sulfate. The patient's diarrhea may have initially developed as a result of hypovolemic shock or as a result of enterocyte damage from a functional ileus due to her hypomagnesemia. 9,11 On recheck abdominal ultrasound, a functional ileus was noted in this patient. Hypomagnesemia has been reported in horses after colic surgery, suggesting a potential causal relationship between hypomagnesemia and strangulating lesions and ileus of the bowel. 11 Starvation is known to lead to a reduction in enterocyte formation and nutrient absorption. Gut atrophy with decreased crypt cell proliferation, reduced villous height, intestinal mass reduction, thickening and coarsening of the intestinal mucosal folds, reduced gastric acidity, and reduced gastric and intestinal motility are also noted in patients that have been starved. 17 Diarrhea occurs in these starved patients due to impaired absorptive ability, bacterial overgrowth, the presence of unconjugated bile salts, hypoalbuminemia and gut edema. 3,17 While starvation is a potential mechanism, given the timing of the onset of diarrhea (12 days following refeeding), this is not suspected for our patient. In addition to diarrhea, this patient also developed melena, consistent with gastrointestinal bleeding. Her melena improved with gastrointestinal protectants (sucralfate). Part of this patient's anemia was attributed to melena later in her care; initially, her anemia was attributed to her heavy flea burden and to the serial phlebotomy needed to monitor her electrolytes.
She required two pRBC transfusions throughout the duration of her care.

This patient was fed unrestricted once at initial presentation (day 1) and ate with a large appetite. Further feedings were withheld once the need for nutritional restriction was recognized, given her clinical picture. Her nutritional management for refeeding syndrome was structured starting on the second day of her initial hospitalization at 25% of calculated RER and gradually increased to 100% RER over the course of four days. Refeeding syndrome has been reported in a cat started at 6 kcal/kg/day. 1,4 In humans, the NICE (National Institute for Health and Care Excellence) guidelines recommend that refeeding commence at a maximum of 5 kcal/kg/day in severely malnourished patients. 6 In our case, a more conservative plan for the patient's initial nutritional supplementation could have been implemented, with a slower increase in supplemented nutrition. The NICE guidelines identify risk factors and recommendations for refeeding in malnourished patients. In these guidelines, initial refeeding should not exceed 20 kcal/kg/day, or no more than 20% of RER, on the first day. These guidelines also recommend that <50% of total RER should be fed during the first 3 days. Nutritional support should be increased gradually over 4 to 10 days. However, despite these recommendations, there is no universal recommendation on how quickly to advance the nutritional regimen, particularly in veterinary medicine. This study highlights that a cautious approach to nutrition should be practiced in feline patients.
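To make the rate arithmetic above concrete, here is a minimal Python sketch of a stepwise refeeding plan. It assumes the widely used feline resting energy requirement formula RER = 70 × BW(kg)^0.75; the 25%-to-100% ramp follows the plan described for this patient, while the intermediate step fractions, the example body weight, and the day-1 cap (borrowed from the NICE-style 20 kcal/kg/day recommendation) are illustrative assumptions, not a prescription.

# Hedged sketch of a conservative refeeding ramp for a starved cat.
# Assumes the common allometric formula RER = 70 * BW^0.75 kcal/day; the
# intermediate step fractions and the 3.0 kg example weight are illustrative.

def rer_kcal_per_day(body_weight_kg: float) -> float:
    """Resting energy requirement in kcal/day (allometric formula)."""
    return 70.0 * body_weight_kg ** 0.75

def refeeding_plan(body_weight_kg: float, steps=(0.25, 0.50, 0.75, 1.00)):
    """Ramp daily calories from 25% to 100% of RER over four days, capping
    day 1 at 20 kcal/kg/day per the NICE-style recommendation in the text."""
    rer = rer_kcal_per_day(body_weight_kg)
    plan = []
    for day, fraction in enumerate(steps, start=1):
        kcal = fraction * rer
        if day == 1:
            kcal = min(kcal, 20.0 * body_weight_kg)  # initial-day cap
        plan.append((day, fraction, round(kcal, 1)))
    return plan

for day, fraction, kcal in refeeding_plan(3.0):
    print(f"Day {day}: {fraction:.0%} RER -> {kcal} kcal")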
Nutritional strategies in starved patients should provide a low level of digestible (soluble) carbohydrate, a high fat and protein content, and adequate electrolytes. This patient was fed Emeraid Intensive Care HDN (EmerAidVet). This formula is appropriate as it is high in crude protein (8.61 grams/100 kcals) and crude fat (6.05 grams/100 kcals) and contains adequate electrolytes, vitamins and minerals that exceed the National Research Council guidelines. It is recommended to provide most calories as fat and protein because carbohydrates stimulate insulin release and may result in more severe metabolic derangement. 8 Consequently, when a diet used for refeeding is composed of a high proportion of carbohydrate, the cessation of natriuresis is abrupt and can lead to the development of peripheral edema and fluid overload. Refeeding with fat or protein alone will allow natriuresis to continue and may prevent fluid overload or edema formation from occurring in these patients. 17 In veterinary medicine, if refeeding syndrome is suspected, then it is recommended to stop refeeding immediately and aggressively treat electrolyte abnormalities. 17 The refeeding nutritional formula can also be adjusted to contain a lower amount of carbohydrate. There should be no attempt to achieve weight gain during the first week of treatment, and any weight gain that does occur should be considered to be due to fluid retention rather than addition of lean body mass. 17

Not all starved patients who are refed develop refeeding syndrome. It is important to be aware of the condition and anticipate problems to help minimize its occurrence. It is important to closely monitor at-risk patients, in particular their vital functions, fluid balance and electrolytes. 7 Prior to refeeding, the patient should have complete bloodwork, and any electrolyte abnormalities should be corrected prior to initiation of nutritional support. Electrolyte trends should be evaluated several times per day depending on the patient. If electrolyte values are not increasing, it may be necessary to slow or stop nutrition completely until they improve. High-risk patients should be empirically supplemented for the first 24 hours of therapy. Before initiating feeding, thiamine should also be administered, followed by daily injections through day 3 of treatment. Additionally, the patient's body weight and urine output should be monitored for fluid overload. A PCV/TS should also be performed to evaluate for the presence of hemolysis. The patient's cardiovascular and respiratory function should be monitored; this can be done using telemetry and by placing the patient on a respiratory watch. Serial neurological exams should also be performed, as electrolyte and thiamine deficiencies can cause neurological signs in these patients.

A major limitation in this case report is the measurement of serum magnesium, which has been reported as total serum magnesium here rather than an ionized magnesium level. Magnesium measurements in veterinary patients are often limited by the availability of point-of-care monitoring for ionized magnesium levels. Ionized magnesium is the preferred measurement over total magnesium as it is the physiologically active form in serum. 2,11 Severe total body magnesium depletion may exist in the face of a normal serum magnesium concentration, 13 and this may have been the case early in our patient's care. Measurement of low total serum magnesium levels suggests that the intracellular magnesium balance has been disturbed and implies a moderate to severe total body magnesium deficiency. 13 The total hypomagnesemia measured in our patient likely represents total body depletion given her clinical signs, but these levels may have been detected sooner with the availability of ionized magnesium measurement. The true incidence of hypomagnesemia in refeeding syndrome in veterinary medicine is unknown; however, it is likely higher than suggested by the intermittent case reports and small case series. 1,3,12

In conclusion, this case demonstrates a delayed, severe, and persistent life-threatening hypomagnesemia with clinical signs, attributed to refeeding syndrome, in a patient in the absence of other severe electrolyte abnormalities. This patient demonstrated a delay in the severity of her electrolyte abnormalities, with persistent deficiencies warranting chronic supplementation. This case illustrates the necessity of magnesium measurement regardless of other electrolyte alterations, most notably in the absence of hypophosphatemia, and outlines the need for magnesium supplementation in the acute and chronic setting. This case report also provides a description of the compatible clinical signs observed in a feline patient with hypomagnesemia.

Disclosure

The authors report no conflicts of interest in this work.
Integrative Monitoring of Marine and Freshwater Harmful Algae in Washington State for Public Health Protection

The more frequent occurrence of both marine and freshwater toxic algal blooms and recent problems with new toxic events have increased the risk for illness and negatively impacted sustainable public access to safe shellfish and recreational waters in Washington State. Marine toxins that affect safe shellfish harvest in the state are the saxitoxins that cause paralytic shellfish poisoning (PSP), domoic acid that causes amnesic shellfish poisoning (ASP) and, the cause of the first-ever US closure in 2011, the diarrhetic shellfish toxins that cause diarrhetic shellfish poisoning (DSP). Likewise, the freshwater toxins microcystins, anatoxin-a, cylindrospermopsins, and saxitoxins have been measured in state lakes, although cylindrospermopsins have not yet been measured above state regulatory guidance levels. This increased incidence of harmful algal blooms (HABs) has necessitated the partnering of state regulatory programs with citizen and user-fee sponsored monitoring efforts such as SoundToxins, the Olympic Region Harmful Algal Bloom (ORHAB) partnership and the state's freshwater harmful algal bloom passive (opportunistic) surveillance program that allow citizens to share their observations with scientists. Through such integrated programs that provide an effective interface between formalized state and federal programs and observations by the general public, county staff and trained citizen volunteers, the best possible early warning systems can be instituted for surveillance of known HABs, as well as for the reporting and diagnosis of unusual events that may impact the future health of oceans, lakes, wildlife, and humans.

Introduction

Both marine and freshwater toxic algal blooms are believed to be occurring more frequently in lakes, estuaries and oceans of the U.S. Recent problems with new toxic events have increased the risk for illness and negatively impacted sustainable public access to safe shellfish and recreational waters in Washington State. To address these increasing threats to public health, monitoring programs have been strengthened through collaborations that include observations and analyses performed by local, state, and federal scientists, as well as volunteer groups. Washington State produces the highest amount of commercially harvested mussels, clams and oysters in the nation, with an estimated annual production of 39 thousand metric tons that generates over $77 million in sales [2]. This commercial harvest, together with recreational shellfish harvest on Washington's public beaches by approximately 300,000 people, necessitates an effective and comprehensive monitoring program for biotoxins that can affect shellfish safety. If harmful algae producing natural toxins are present, toxins can collect in shellfish tissue and cause illness or even death in marine wildlife or people. The expansion of Washington's shellfish growing areas is evidenced by the net gain of 27,811 acres approved for commercial shellfish production from 1991 to 2010 and an increase in beaches open for recreational harvesting from 78 in 2005 to 201 in 2010 [3,4]. However, marine toxins continue to pose a severe threat to shellfish safety. Closures due to paralytic shellfish toxins are annual occurrences.
For example, in 2012, 453 shellfish tissue samples had concentrations of PSP toxins above the regulatory level of 80 μg/100 g and 50 samples had concentrations above 1000 μg/100 g, resulting in closures of numerous commercial shellfishing areas to harvesting [4]. New toxic events are also entering the scene. In 2011, Washington had a confirmed case of diarrhetic shellfish poisoning (DSP), the first known illness from this marine biotoxin-related syndrome in the United States [5].

Toxic cyanobacteria have been observed in over 132 lakes in Washington State, resulting in animal and human illnesses and animal deaths in some lakes [6][7][8]. Toxic cyanobacteria and blooms occur in natural lakes, manmade reservoirs, and ponds, especially those that are influenced by watershed development and pollution. Lakes that produce toxic blooms often provide citizens with vital recreational opportunities in addition to supplying drinking water. Closures due to toxic blooms have had economic impacts in all regions of the state, shutting recreational areas and restricting fishing. With the potential for cyanotoxins to bioaccumulate in fish, public health officials are concerned about exposure through consumption [9], and freshwater toxins from lake blooms have been observed downstream in marine shellfish [10]. Furthermore, in 2014, a lake in the Puget Sound lowlands that provides drinking water to over 500 households had its first toxic bloom, provoking intense scrutiny and public concern [8]. Regional or short-term monitoring programs and opportunistic surveillance indicate that toxic blooms are becoming more frequent in the state, potentially impacting public health, regional economies, and lifestyles of citizens who use the lakes.

Table 1. Regulated marine and freshwater toxins in Washington State. (The table body is not reproduced here; its footnotes follow.) a Relative abundance values are used by the SoundToxins and ORHAB partnerships to provide rapid, early warning of shellfish toxicity in the marine environment; b cell count action level is >50,000 cells/L (large Pseudo-nitzschia) and >1,000,000 cells/L (small Pseudo-nitzschia); c relative abundance values; for Dinophysis, the cell count action level is >20,000 cells/L ("common") or an increase from "present" to "common". ELISA = enzyme-linked immunosorbent assay; LC/MS-MS = liquid chromatography tandem mass spectrometry; HPLC = high performance liquid chromatography; n/a = not applicable; nd = not done.

Here we provide an overview of marine and freshwater toxins in the region that endanger human health (Table 1) and describe the emergence of integrated, interagency monitoring programs for marine and freshwater toxins in Washington State, necessitated by increases in toxin threats to our valued shellfish and freshwater resources. First, we describe marine toxins, their algal hosts, biochemical activity, and historical trends in shellfish toxicity and illness events, followed by similar summary sections for the freshwater toxins. Finally, we provide recommendations for the future and suggestions for tools that can be used for integrative monitoring of biotoxins, marine and fresh water alike.

Shellfish Monitoring for Marine Toxins

Monitoring of shellfish safety is a critical function of the Office of Shellfish and Water Protection of the Washington State Department of Health (DOH). The shellfish toxicity surveillance program was initiated by the DOH in the early 1930s as a collaboration between DOH and the George Williams Hooper Foundation for Medical Research in San Francisco [14].
Initial monitoring by DOH focused on commercial shellfish and included recreational shellfish for the first time in the early 1990s. Since the 1930s, the DOH has measured biotoxins in shellfish from hundreds of locations in western Washington waterways in order to protect consumers from shellfish poisoning. When harmful levels of biotoxins are measured, alerts are issued by DOH to shellfish growers and harvesters, local health agencies, and tribes by newspaper, television, the DOH Biotoxin Hotline (1.800.562.5632), and the internet [15].

A highly structured Sentinel Monitoring Program was established in 1990 [16] to provide early warning of the onset of biotoxin concentrations in shellfish. Through this Sentinel Monitoring Program, caged mussels are sampled at about 40 locations in Washington's marine waters every 2 weeks throughout the year. Generally, the blue mussel, Mytilus edulis, is sampled; however, M. galloprovincialis and M. californianus are collected at a few Puget Sound sites. Wire mesh cages are stocked with mussels and suspended from floats and docks. Caged mussels sit for at least 1 week before they are sampled and are replenished as needed. At a few sites, natural-set mussels are harvested. Seventy to 100 mussels provide the 100 grams of tissue needed for analysis. Mussels are sealed into plastic bags, chilled with frozen gel packs, and shipped to the DOH laboratory in Seattle for analysis by mouse bioassay [17] (Table 1). When toxins are detected above the regulatory level in shellfish, the harvest area is closed. Reopening a closed area to harvest requires two shellfish samples of the same species, from the same area, collected 7-10 days apart, with acceptable levels of toxin. When the closure is in a commercial harvest area, all licensed shellfish companies in that area are notified to stop harvesting immediately. Commercial product that came from a closed area may also be recalled from the market.

Activity and Source of Saxitoxins

Saxitoxins are among the most potent natural toxins known [18]; they act by blocking sodium channels of nerves, impairing normal signal transmission [19,20]. More than 30 different saxitoxin analogues have been identified, including pure saxitoxin (STX), neosaxitoxin (neoSTX), the gonyautoxins (GTX) and decarbamoylsaxitoxin (dc-STX), of which STX, neoSTX, GTX1 and dc-STX are the most toxic isomers. The term saxitoxin often refers to the entire suite of related neurotoxins produced by cyanobacteria and marine algae. This suite of closely related tetrahydropurines (saxitoxins-STX) is also described as a group of carbamate alkaloid toxins that are either nonsulfated (STXs), singly sulfated (gonyautoxins, GTX), or doubly sulfated (C-toxins) [21]. Chemically, saxitoxin is stable and readily soluble in water, although it can be inactivated by treatment with a strong alkali. The half-lives for breakdown of a range of different saxitoxins in natural water have been shown to vary from 9 to 28 days, and gonyautoxins may persist in the environment for more than 3 months [22]. The toxicological database for STX-group toxins is limited and is comprised primarily of studies on acute toxicity following intraperitoneal (i.p.) administration. For monitoring purposes, toxicity equivalency factors (TEFs) have been applied to express the detected analogues (using high performance liquid chromatography, HPLC, in freshwater systems and the mouse bioassay for shellfish in marine systems) as STX equivalents (STX-equiv.).
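As a worked illustration of the TEF approach just described, the short Python sketch below expresses a mix of detected analogues as STX equivalents and compares the total against the 80 μg/100 g regulatory level cited earlier; the TEF values and the sample concentrations are illustrative placeholders, not adopted regulatory figures.

# Hedged sketch of the toxicity-equivalency calculation:
# total STX-equiv. = sum over analogues of (concentration * TEF).
# The TEFs and the sample below are illustrative placeholders only.

ILLUSTRATIVE_TEFS = {
    "STX": 1.0,     # reference compound
    "neoSTX": 1.0,  # placeholder
    "GTX1": 1.0,    # placeholder
    "dc-STX": 0.5,  # placeholder
}

def stx_equivalents(sample_ug_per_100g: dict) -> float:
    """Micrograms of STX equivalents per 100 g shellfish tissue."""
    return sum(conc * ILLUSTRATIVE_TEFS.get(analogue, 0.0)
               for analogue, conc in sample_ug_per_100g.items())

sample = {"STX": 30.0, "neoSTX": 20.0, "dc-STX": 40.0}  # hypothetical HPLC result
total = stx_equivalents(sample)
status = "above" if total > 80.0 else "below"
print(f"{total:.0f} ug STX-equiv./100 g ({status} the 80 ug/100 g regulatory level)")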
The dinoflagellate Alexandrium catenella (Balech), previously described as belonging to the genus Gonyaulax (Whedon and Kofoid) or Protogonyaulax (Taylor), has been identified as the primary causative species of paralytic shellfish poisoning on the west coast of North America [23]. However, the name A. fundyense [24,25] has recently been proposed to replace all Group I strains of the A. tamarense species complex, which includes the Washington Alexandrium isolates.

Illness and Symptoms

Saxitoxins are toxic by ingestion and by inhalation, with inhalation leading to rapid respiratory collapse and death. Intoxication with saxitoxin can be a severe, life-threatening illness requiring immediate medical care. Most information on saxitoxin symptoms comes from exposure through consumption of shellfish. Within minutes of eating toxic shellfish, a person initially develops tingling of the lips and tongue, although this can take up to an hour or two depending on the dose and individual tolerance. Tingling is followed by numbness and weakness, with loss of control of the arms and legs, progressing to difficulty in breathing. Some people feel nauseated or experience a sense of floating after saxitoxin exposure. If a person consumes enough saxitoxin, muscles of the chest and abdomen become paralyzed, including muscles used for breathing, and the victim can suffocate. Terminal stages of saxitoxin poisoning can occur 2-12 h after exposure, and death from PSP has occurred in less than 30 min [26]. Diagnosis of saxitoxin poisoning is confirmed by detection of toxin in the food, water, stomach contents, or environmental samples. Artificial respiration is used to support breathing; when such support is applied within 12 h of exposure, recovery usually is complete with no lasting side effects [27][28][29]. Stomach evacuation can be conducted if exposure is through ingestion. No antidote against saxitoxin exposure has been developed for human use.

Washington Occurrences

Closures of recreational shellfish harvesting due to paralytic shellfish toxins (PSTs) have been imposed in Washington State since 1942, when three Native American fatalities occurred in the town of Sekiu on the Strait of Juan de Fuca [13]. At that time, the Washington Department of Fisheries imposed annual closures for all shellfish harvest except razor clams from 1 April to 31 October in the area west of Dungeness Spit (near Port Angeles, WA, Figure 1), including the Pacific coast to the Columbia River [30]. The shellfish surveillance program for PSTs was temporarily stopped in 1946, when it was believed that the seasonal blanket closure was adequately protecting public health. However, an outbreak of PSP on eastern Vancouver Island in 1957 [31] resulted in a mandatory monitoring program for PSTs in all commercial shellfish in Washington. Illnesses due to PSP were not reported in Puget Sound prior to 1978, but widespread toxicity occurred that year throughout much of the central basin [30]. High numbers of illnesses include 14 in 1978, nine in 2000, and seven in 2012, all in Puget Sound [32][33][34] (Table 2). Toxins causing PSP are now found in most areas of Puget Sound after a massive event in 1978 that caused spreading into the main basin, then further migration into the southernmost reaches of the Sound in the 1980s and 1990s. Multiple closures due to PSTs occur annually at many locations throughout Puget Sound.
Activity and Source of Domoic Acid

Several species of pennate, chain-forming diatoms in the genus Pseudo-nitzschia are known to produce domoic acid (DA), a toxin that bioaccumulates through the food chain to shellfish and planktivorous fish, then to vertebrates such as birds, marine mammals, and humans. DA acts at the same nerve receptor as glutamate, the major excitatory neurotransmitter in the mammalian central nervous system that is responsible for many of the functions within the brain, including learning and memory. Several comprehensive recent reviews are available for more information on Pseudo-nitzschia and DA [35][36][37][38][39].

Illness and Symptoms

Domoic acid poisoning is formally known as amnesic shellfish poisoning in humans. Gastrointestinal symptoms can appear within 24 h after ingestion of shellfish containing DA and may include vomiting, nausea, diarrhea, abdominal cramps and bleeding in the gastrointestinal system. Neurological symptoms in more severe cases can take hours to three days to appear and include headaches, hallucinations, confusion and impairment of short-term memory, unstable blood pressure, cardiac arrhythmia and coma [40]. People poisoned with very high doses of the toxin, or those who display risk factors such as old age or renal failure, can die after exposure.

Washington Occurrences

The razor clam and Dungeness crab fisheries on the outer coast of Washington have been plagued by DA closures since 1991 [41][42][43]. Commercial, recreational and subsistence razor clam fisheries suffered total coastwide closures in 1991, 1998 and 2002. However, due to enhanced information about specific locations of Pseudo-nitzschia species attributable to the monitoring efforts of the ORHAB partnership, formed in 2000, selective closures were possible in 2001 and 2003-2005. Because razor clams can retain DA for periods of up to a year due to the presence of a high-affinity glutamate binding protein [44], closures on the outer coast lasting for up to a year caused serious economic hardship to the tribal communities which rely on this subsistence fishery. DA closures occurred in Puget Sound in 2003 and 2005, causing great concern to shellfish managers. To date, concentrations of DA below the regulatory level of 20 ppm have been detected in Puget Sound blue mussel (Mytilus edulis), littleneck clam (Protothaca staminea), geoduck clam (Panopea abrupta), manila clam (Tapes philippinarum), Pacific oyster (Crassostrea gigas), and Dungeness crab (Cancer magister) [45]. If future DA concentrations are found at levels in excess of the regulatory level in more areas of Puget Sound, the resulting economic losses could be severe.

Activity of Diarrhetic Shellfish Toxins

These lipophilic toxins, often found in combination in shellfish, can be divided into four groups with different chemical structures and relative toxicities in humans: okadaic acid (OA) and its derivatives, the DTXs; the pectenotoxins (PTXs); the yessotoxins (YTXs); and the azaspiracids (AZAs). Both OA and the DTXs are lipid polyethers with inhibitory effects on protein phosphatases [46,47] and are the only toxins of the DSP group that can cause diarrhea in mammals [48]. The PTXs and YTXs are toxic in animal studies [49] but have not yet been associated with human poisonings [50]. The AZAs were first described after several people became ill after consuming contaminated mussels in Ireland [51] and have recently been measured in shellfish from Washington State at low concentrations [5].
Illness and Symptoms

Diarrhetic shellfish poisoning (DSP) is a human syndrome caused by consumption of shellfish contaminated by toxins produced by Dinophysis and benthic species of Prorocentrum [52][53][54]. However, no DSP outbreaks associated with Prorocentrum have been described in Washington. DSP symptoms are gastrointestinal and include diarrhea, nausea, vomiting, and abdominal distress starting a few minutes to hours after ingestion of the toxic shellfish. Recovery occurs within three days [55].

Washington Occurrences

The first clinical report of DSP in the Pacific Northwest and in the U.S. with coincident high concentrations of diarrhetic shellfish toxins was due to the consumption of toxin-laced mussels collected from a pier at Sequim Bay State Park in northwest Washington in June 2011. Nine mussel samples, collected immediately after the illnesses were reported, contained toxins at 2-10 times the regulatory level. Coincidentally, about 60 DSP illnesses associated with the ingestion of mussels occurred on Salt Spring Island, British Columbia, the first reports of DSP in western Canada [56], resulting in the recall of almost 14,000 kg of shellfish. Sites with shellfish testing positive for diarrhetic shellfish toxins above the regulatory level of 16 μg/100 g in 2014 are shown in Figure 1.

Integrative Monitoring of Marine Toxins in Washington

In most coastal regions of the world, shellfish harvesting closures based on monitoring for toxins are primarily reactionary. These systems have succeeded in protecting human health but often have led to conservative, blanket closures of shellfish harvesting operations, thereby negatively impacting the economy of the shellfish industry. The recent appearance of new toxins in Washington challenges the capacity and effectiveness of monitoring programs that are based solely on assessment of shellfish toxicity. The National Shellfish Sanitation Program (NSSP) recommends phytoplankton monitoring as an early warning for the control of shellfish safety, to provide assurance that states are taking adequate measures to prevent harvesting, shipping, and consumption of toxic shellfish. The plan encourages communication with other states, researchers and other environmental professionals [57][58][59]. Washington is one of the US states that has successfully integrated phytoplankton and shellfish monitoring through collaboration of DOH with two phytoplankton monitoring programs, the ORHAB partnership on the Pacific coast of Washington [12] and the SoundToxins program in Puget Sound [60]. The ORHAB partnership was established in 1999 and uses a combination of analytical techniques, including weekly quantification of total numbers of harmful algae using microscopes and determination of DA concentration in seawater and razor clams, to give an effective early warning of shellfish toxin events (see Figure 1 for primary ORHAB sites, denoted with X). Because razor clams are the main recreationally harvested shellfish on the outer coast of Washington and accumulate and retain more DA than any other shellfish [44], the ORHAB early warning system is focused solely on DA testing in these shellfish using enzyme-linked immunosorbent assay (ELISA). The efficacy and accuracy of ELISA for diarrhetic shellfish toxin screening are currently being tested for eventual use by the State's phytoplankton monitoring programs [61].
Using ORHAB as a model, SoundToxins was established in 2006 and has grown from four partner sites in 2006 to >30 monitoring locations today (Figure 1). Seawater samples are collected weekly by the participants at ORHAB sites on the outer coast and SoundToxins sites throughout Puget Sound and are analyzed for salinity, temperature, nutrients, chlorophyll, and particulate toxins, including paralytic shellfish toxins, DA, and diarrhetic shellfish toxins. Phytoplankton relative abundance monitoring focuses on four target taxa: Pseudo-nitzschia, Alexandrium, and Dinophysis species, and Heterosigma akashiwo. In addition, SoundToxins participants recently have assisted with the identification of Azadinium species in Puget Sound. Through its weekly monitoring of phytoplankton at sites around Puget Sound, the SoundToxins partnership has allowed the state to target monitoring for diarrhetic shellfish toxins to those sites that have the greatest risk of toxicity due to increases in relative abundance of Dinophysis spp. from present to common or greater (Figure 1; see definitions in Table 1). SoundToxins participants, including environmental learning centers, Native Tribes, shellfish growers, state and federal researchers, and private citizens, enter weekly phytoplankton relative abundances into a web-based system [60], allowing rapid visualization of data and decision making by DOH officials. Future improvements will include closer pairing of SoundToxins phytoplankton monitoring sites with shellfish harvesting areas (Figure 1) and rapid toxin testing at the sites of shellfish harvest by volunteers to provide a swift assessment of toxin risk for managers.
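A minimal Python sketch of the Dinophysis early-warning rule described above follows; the trigger itself (relative abundance reaching "common" or greater) comes from the text and Table 1, while the full ordered category scale and the function name are assumptions for illustration.

# Hedged sketch of the SoundToxins-style Dinophysis trigger: flag a site for
# targeted diarrhetic shellfish toxin testing when relative abundance reaches
# "common" or rises from "present" to "common" or greater. The full category
# scale below is an assumption for illustration.

RANK = {"absent": 0, "rare": 1, "present": 2, "common": 3, "abundant": 4}

def dinophysis_alert(current_abundance: str) -> bool:
    """True when abundance is 'common' or greater, the action level at which
    the state targets that site for diarrhetic shellfish toxin testing."""
    return RANK[current_abundance] >= RANK["common"]

print(dinophysis_alert("common"))   # True: target this site for toxin testing
print(dinophysis_alert("present"))  # False: continue routine monitoring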
Monitoring for Freshwater Cyanobacteria and Their Toxins

Cyanobacteria blooms are common in numerous Washington lakes. Cyanobacteria (also known as blue-green algae) can produce toxins collectively called cyanotoxins. A documented public health concern, cyanotoxins include the liver toxins microcystins and cylindrospermopsins and the nerve toxins anatoxin-a and saxitoxins. Historically, many animals have become ill or have died after exposure to cyanotoxins in state lakes. To address this issue, the DOH and Washington State Department of Ecology (Ecology) have conducted surveillance of blooms and of human and animal illnesses related to cyanotoxin exposure for several years. Freshwater algae and cyanobacteria produce blooms that may be non-toxic one day but may become toxic the next day or later in the growing season. The only way to know whether a cyanobacterial bloom is toxic is to test for the presence of toxins. Due in part to citizens' mounting concerns over potential health impacts from exposure to rapidly appearing freshwater cyanotoxins, the state legislature created and funded a Freshwater Algae Control Program in 2005. This Ecology program provides funds for toxicity testing by the King County Environmental Laboratory (KCEL) on samples collected from lakes with blooms by local health jurisdictions, lake managers, other agencies or lake residents. Originally, samples collected under the passive surveillance program were analyzed only for microcystins, but KCEL later developed the capacity to test for anatoxin-a, cylindrospermopsins, and saxitoxins. During initial development of the Freshwater Algae Control Program, stakeholders requested that state guidelines be developed to help with interpretation of toxicity results.

In the absence of recreational guidance (based on actual toxicity levels and not cell concentrations) from the United States or the World Health Organization (WHO) for microcystins and anatoxin-a, DOH developed provisional guidance values (health-based recommendations that are not formal regulatory values) for both cyanotoxins based on a review of the toxicology literature and standard risk assessment methods [62]. Later, DOH developed provisional recreational guidance for saxitoxins and cylindrospermopsins [63]. As part of the effort to provide assistance to local health jurisdictions (LHJs) and lake managers, DOH also developed a lake protocol that incorporated these guidance values as a reference for use by managers, agencies, and LHJs (Table 1). While the most likely exposure pathways to freshwater cyanotoxins are through recreational contact or contaminated drinking water, long-term chronic ingestion via drinking water and exposure through consumption of fish and shellfish were not considered in development of recreational guidance. Recreational exposure includes activities such as swimming, wind surfing, jet skiing, and water skiing. The calculations used to determine these provisional recreational guidance values are described below.

DOH incorporated the approach used by Oregon and Vermont in its initial derivation of recreational guidance for microcystins [64]. Oregon has recently updated its guidance values to include anatoxin-a, cylindrospermopsin, and saxitoxin and to address acute or short-term exposures for human drinking water exposure, human recreational exposure, and dog-specific exposures [65]. DOH calculations assume a default child's body weight (BW) of 15 kg and an ingestion rate (IR) of 0.1 L, based on an exposure lasting two hours per day for a swimmer or other lake user [62]. Using the WHO tolerable daily intake (TDI) of 0.04 μg/kg-day [66] and the other assumptions above, DOH recommends a provisional recreational guidance value of 6 μg/L for microcystins, calculated as follows:

Guidance value = (TDI × BW) / IR = (0.04 μg/kg-day × 15 kg) / (0.1 L/day) = 6 μg/L

DOH recommends a provisional recreational guidance value of 4.5 µg/L for cylindrospermopsin, assuming a subchronic reference dose (RfD) of 0.03 µg/kg-day (EPA 2006), calculated using the EPA assumptions as above (with the RfD in place of the TDI in the above equation). Similarly, for saxitoxins, DOH recommends a provisional recreational guidance value of 75 µg/L saxitoxin, calculated using an acute RfD developed by the European Food Safety Association [67] based on acute toxicity of STX-equivalent intoxications in humans (>500 individuals) [63]. For anatoxin-a, DOH recommends a provisional recreational guidance value of 1 µg/L based on a systemic toxicity study in mice exposed to anatoxin-a for 28 days [62,68]. When an acute RfD or estimate of daily oral exposure becomes available for anatoxin-a, DOH will reassess this interim anatoxin-a guidance value. All recommended recreational guidance values are considered "provisional" and will be reassessed when national or international guidance values become available.

[Figure 2 caption (map of sampled lakes; data in [8]): Site names mentioned in the text are numbered: 1. Anderson Lake, 2. Waughop Lake, 3. American Lake, 4. Clear Lake, 5. Rufus Woods Lake, 6. Potholes Reservoir. Due to the size of the figure, individual sampled lakes are not visible.]
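The Python sketch below reproduces the guidance-value arithmetic above for all four toxins, assuming the same 15 kg body weight and 0.1 L ingestion defaults throughout; the microcystin and cylindrospermopsin doses are stated in the text, while the saxitoxin and anatoxin-a doses are back-calculated from the stated guidance values and should be treated as illustrative.

# Hedged sketch of the DOH provisional recreational guidance calculation:
# guidance (ug/L) = reference dose (ug/kg-day) * body weight (kg) / ingestion (L/day).
# The saxitoxin and anatoxin-a doses below are back-calculated from the stated
# guidance values (75 and 1 ug/L) and are illustrative, not official inputs.

BODY_WEIGHT_KG = 15.0      # default child's body weight (from the text)
INGESTION_L_PER_DAY = 0.1  # ingestion during a 2-hour recreational exposure

REFERENCE_DOSES_UG_PER_KG_DAY = {
    "microcystins": 0.04,        # WHO tolerable daily intake (from the text)
    "cylindrospermopsin": 0.03,  # subchronic RfD (from the text)
    "saxitoxin": 0.5,            # back-calculated from the 75 ug/L guidance
    "anatoxin-a": 1.0 * INGESTION_L_PER_DAY / BODY_WEIGHT_KG,  # from 1 ug/L
}

def guidance_ug_per_liter(dose_ug_per_kg_day: float) -> float:
    return dose_ug_per_kg_day * BODY_WEIGHT_KG / INGESTION_L_PER_DAY

for toxin, dose in REFERENCE_DOSES_UG_PER_KG_DAY.items():
    print(f"{toxin}: {guidance_ug_per_liter(dose):g} ug/L")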
Summary of Freshwater Toxins Affecting Public Health in Washington

Early in the program, DOH identified a list of cyanobacteria genera and species of concern for lakes in Washington. Toxicity testing is recommended when lake samples contain the following genera: Microcystis, Anabaena, Aphanizomenon, Gloeotrichia, Oscillatoria/Planktothrix, Cylindrospermopsis, Lyngbya, and/or Nostoc. In a summary report for the 2008-2009 state legislature, the top three toxic cyanobacteria genera in Washington lakes were identified as Anabaena, Aphanizomenon, and Microcystis [69]. Gloeotrichia was also included because a recent study confirmed microcystin-LR production by Gloeotrichia echinulata [70], and exposure to this genus has led to reports of human health impacts in Washington lakes.

Cyanotoxins are a diverse group of natural toxins that fall into three broad chemical structure groups [66,71]: cyclic peptides (microcystins and nodularin), alkaloids (anatoxins, saxitoxins, cylindrospermopsin, aplysiatoxins, and lyngbyatoxin), and lipopolysaccharides (irritants). Anatoxin-a(s) is a naturally occurring organophosphate. Some genera, especially Anabaena, can produce both neuro- and hepatotoxins. If a toxic algal bloom contains both types of toxins, signs of neurotoxicity are usually observed first. Neurotoxic effects occur within minutes, whereas effects due to liver toxins take one to a few hours to appear. Below we describe the freshwater toxins microcystins, anatoxin-a, cylindrospermopsins and saxitoxins, which currently are monitored in Washington lakes.

Microcystins

Microcystins are the most thoroughly investigated cyanobacterial toxins [72]. At least 90 structural variants have been identified, and microcystin-LR is the variant most commonly found in cyanobacteria [73][74][75]. Microcystins have been identified in Anabaena, Microcystis, Oscillatoria (Planktothrix), Nostoc and Anabaenopsis species and from the terrestrial genus Hapalosiphon [66]. More than one microcystin may be found in a particular cyanobacterial strain. Microcystins are cyclic heptapeptides that primarily affect the liver in animals. A lethal dose of microcystins in vertebrates causes death by liver necrosis within hours or up to a few days. Microcystins block protein phosphatases 1 and 2A (important molecular switches in all eukaryotic cells) through an irreversible covalent bond ([76], cited in [77]). Liver injury is likely to go unnoticed and results in externally noticeable symptoms only when it is severe [77]. Other studies have shown that microcystin toxicity is cumulative [78]. Researchers suspect microcystins are liver carcinogens, which could increase cancer risk to humans following continuous, low-level exposure.

Illness and Symptoms

Symptoms of microcystin poisoning may take 30 min to 24 h to appear, depending upon the size of the animal affected and the amount of toxic bloom consumed. Gross and histopathologic lesions caused by microcystins are quite similar among species, although species sensitivity and signs of poisoning can vary depending on the type of exposure. One of the earliest effects (15-30 min) of microcystin poisoning is increased serum concentration of bile acids, alkaline phosphatase, γ-glutamyltransferase, and aspartate aminotransferase. Microcystin symptoms in mammals and other animals may include jaundice, shock, abdominal pain and distention, weakness, nausea and vomiting, severe thirst, rapid and weak pulse, and death. It is likely that the number of incidents with low-level symptoms such as nausea, vomiting and diarrhea associated with recreational exposure to cyanobacterial toxins is underreported.
Death may occur following exposure to very high concentrations within a few hours (usually within 4-24 h) or up to a few days. Death is due to intrahepatic hemorrhage and hypovolemic shock. In animals that survive more than a few hours, hyperkalemia or hypoglycemia, or both, may lead to death from liver failure within a few days [79]. Surviving animals have a good chance of recovery because the toxins have a steep dose-response curve. Activated charcoal oral slurry is likely to benefit exposed animals, even though therapies for cyanobacterial poisonings have not been investigated in detail.

Activity and Source of Anatoxin-a

Anatoxin-a is one of three neurotoxic alkaloids that have been isolated from cyanobacteria [72]. It is produced by various species of cyanobacteria including Anabaena, Planktothrix (Oscillatoria), Aphanizomenon, Cylindrospermum and Microcystis spp. Anatoxin-a was first detected in Canada in the 1960s [80]. Between 1961 and 1975, cattle and dog poisonings associated with Anabaena flos-aquae blooms occurred in six locations in Canada. Most anatoxin-a detections have been in Europe, followed by North America [74]. Anatoxin-a is a bicyclic secondary amine. It binds to the nicotinic acetylcholine receptor at the axon terminal at the neuromuscular interface [73,74]. Binding of anatoxin-a is irreversible, causing the channel to be locked in an open position and resulting in symptoms in humans including overstimulation, fatigue, and eventual paralysis. In the respiratory system, anatoxin-a exposure results in a lack of oxygen to the brain, subsequent convulsions and death by suffocation. Anatoxin-a is about 20 times more potent than acetylcholine, a compound involved in transmission of nerve impulses [74]. Alkaloid toxins are more likely than the cyclic peptide toxins microcystins and nodularin to be present in free (non-cellular) form in water [77]. While microcystins appear to be more common than freshwater neurotoxins, the latter have caused severe animal poisonings in North America, Europe and Australia [77]. Anatoxin-a degrades readily to nontoxic products upon exposure to sunlight and at high pH [74]. In natural blooms in eutrophic lakes, the anatoxin-a half-life is typically less than 24 h, while its half-life in the laboratory is about five days [66]. This rapid degradation of anatoxin-a presents problems for determining accurate toxin concentrations associated with exposures. According to Botana [74], samples should be protected from light and acidified prior to storage at −20 °C in order to limit anatoxin-a degradation.
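To illustrate why sampling delay matters for this rapidly degrading toxin, the Python sketch below applies simple first-order (half-life) decay using the half-lives quoted above; treating the degradation as exponential, and the 48-hour delay, are illustrative assumptions.

# Hedged sketch: first-order decay of anatoxin-a using the half-lives quoted
# above (< ~24 h in natural blooms, ~5 days for a protected laboratory sample).
# The exponential model and the 48 h sampling delay are assumptions.

def remaining_fraction(hours_elapsed: float, half_life_hours: float) -> float:
    return 0.5 ** (hours_elapsed / half_life_hours)

delay_h = 48.0  # illustrative delay between exposure and analysis
for label, half_life_h in [("natural bloom (24 h half-life)", 24.0),
                           ("protected lab sample (120 h half-life)", 120.0)]:
    pct = 100.0 * remaining_fraction(delay_h, half_life_h)
    print(f"{label}: {pct:.0f}% of the original anatoxin-a remains after {delay_h:.0f} h")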
Illness and Symptoms

Neurotoxins are notoriously rapid-acting poisons; anatoxin-a was originally called very fast death factor (VFDF) due to its potency [74]. Animal illness and death may occur within a few minutes to a few hours after exposure, depending on the size of the animal and the amount of toxic bloom consumed. An animal with anatoxin-a toxicosis may exhibit staggering, paralysis, muscle twitching, gasping, convulsions, backward arching of the neck (in birds), and death. Livestock that drink large amounts of contaminated water and pets that lick scum off their fur are at highest risk from anatoxin-a exposure. While anatoxin-a is largely retained within cells when conditions for growth are favorable, toxins will be liberated in the gastrointestinal tract if water containing toxic cells is consumed [66,74]. However, ingestion of a sublethal dose of these neurotoxins leaves no chronic effects, and recovery appears to be complete with no ongoing injury [77]. Exposure leaves no sign of organ damage, and residual toxin is rapidly degraded [74].

The first report of an animal illness in Washington due to a freshwater toxic bloom occurred in 1976 in Spokane County [7]. Four dogs died after drinking water during a toxic Anabaena bloom, and an additional seven dogs, one horse, and one cow were reportedly sickened [81]. In the 1980s, another two hunting dogs died in eastern Washington, and five cats died during a toxic Anabaena bloom in American Lake, Pierce County. More recently, two dogs died after exposure to a toxic Anabaena bloom in Anderson Lake (2006), and two hunting dogs died in the Potholes Reservoir after exposure to a toxic bloom (2007). Each year roughly four to five reports of animal illness (including cats, dogs, cows, elk, and horses) are investigated, with approximately two probable or confirmed cases per year. Outreach and education efforts, such as posting signs at lakes with confirmed toxicity, began in 2009 and are thought to have decreased pet exposures in lakes with blooms [82].

Washington Occurrences

Three state waterbodies have long-term recurring anatoxin-a blooms with unique seasonal patterns [8]. For example, Clear Lake, Pierce County, exhibited blooms three years in a row that became toxic in late fall and continued through the winter (maximum 1170 µg/L anatoxin-a). Testing of Anderson Lake, Jefferson County, from 2009 to 2014 showed recurring blooms that began in April, May, or June in most years and continued through August, September, or October (maximum 1090 µg/L anatoxin-a, June 2011; Figures 2 and 5). Rufus Woods Lake, a reservoir behind Chief Joseph Dam on the Columbia River (for locations, see Figure 2), also has recurring blooms producing anatoxin-a with a unique seasonal pattern: July and August 2011; July, August, and September 2012; May through September in 2013; and May through July in 2014 (maximum 110 µg/L anatoxin-a, July 2012). The seasonal distribution of anatoxin-a concentrations above 1.0 µg/L was determined for 11 other state lakes and reservoirs. Levels above the state recreational guidance value were observed during each month of the year at various sites around the state. Ten lakes produced only one to three samples with anatoxin-a above 1.0 µg/L (maximum 592 µg/L). Most short-term blooms occurred in September, October, November or December.

Activity and Source of Cylindrospermopsin

Cylindrospermopsin is composed of a tricyclic guanidine moiety combined with a hydroxymethyl uracil. Production of the toxin is strain-specific, not species-specific [83]. Cylindrospermopsin exhibits a completely different mechanism of toxicity from that of the liver toxin microcystin [84][85][86]. Damage to cells is caused by blockage of key protein and enzyme functions, thereby inhibiting protein synthesis. Cylindrospermopsin targets the liver and kidneys but can also injure the lung, spleen, thymus, and heart, as demonstrated in mouse studies [66,72,87,88]. Animal toxicity studies also suggest that cylindrospermopsin may be carcinogenic [72,89] and may produce genotoxicity in a human lymphoblastoid cell line [90].
Laboratory studies have shown that some of the compounds produced by Cylindrospermopsis may be carcinogenic and genotoxic [83][90][91][92][93]. Cylindrospermopsin is found in certain strains of five genera: Cylindrospermopsis raciborskii (Australia, Hungary, and the U.S.), Umezakia natans (Japan), Anabaena bergii and Raphidiopsis curvata [94], and Aphanizomenon ovalisporum (Australia, Israel) [95]. It is most commonly observed in tropical and subtropical waters of Australia [83]. The first report of animal poisonings attributed to cylindrospermopsin involved drinking water in a farm pond in Queensland, Australia, where the toxin was responsible for cattle deaths [96]. Further, Cylindrospermopsis raciborskii was implicated in one of the most significant cases of human poisoning from exposure to a cyanobacterial toxin, in 1979 on Palm Island, northern Queensland, Australia. Generally, toxins are retained in cyanobacterial cells when conditions are favorable; however, studies have shown that it is not uncommon for 70%-98% of the total cylindrospermopsin produced by cells to be dissolved in the water [83,97].

Illness and Symptoms

Symptoms of exposure to cylindrospermopsin include nausea, vomiting, diarrhea, abdominal tenderness, pain, and acute liver failure. Clinical symptoms after exposure to cylindrospermopsin may not appear immediately but may occur several days later. Thus, it is often difficult to determine a cause-effect relationship between cylindrospermopsin exposure and symptoms. The degree of impact for cylindrospermopsin and other cyanotoxins is influenced by animal size, species sensitivity, and individual sensitivity. According to the Merck Veterinary Manual, animals may need to ingest only a few ounces, or up to several gallons, to experience acute or lethal toxicity, depending on bloom densities and toxin content [79]. After removal from the contaminated water supply, affected animals should be placed in a protected area out of direct sunlight. The animal should have access to an unrestricted supply of clean water and good-quality feed. Surviving animals have a good chance of recovery because both hepatotoxins and neurotoxins have a steep dose-response curve. Although no therapeutic antagonist has been found to be effective against cylindrospermopsin, activated charcoal oral slurry is likely to benefit exposed animals. An ion-exchange resin such as cholestyramine has proved useful to absorb the toxins from the gastrointestinal tract [79].

Washington Occurrences

The state's passive surveillance effort and monitoring results from the CDC 30-lake study show that cylindrospermopsins have been found in only six Washington lakes, at very low concentrations. No results were above the state recreational guidance value of 4.5 µg/L cylindrospermopsins; concentrations above the minimum detection level (MDL; 0.10 µg/L) ranged from 0.11 to 1.12 µg/L.

Washington Occurrences of Saxitoxins

Since 2009, saxitoxins have been detected in ten state lakes and one pond. Waughop Lake was the only waterbody with multiple samples higher than the MDL (0.020 µg/L), one of which was above the state recreational guidance value of 75 µg/L (193 µg/L, August 2009). Saxitoxin concentrations in the other lakes and pond ranged from 0.021 to 71.0 µg/L. With the exception of Waughop Lake (Figure 2), saxitoxins do not occur at levels of human health concern.
Washington Lakes: Three-Tiered Approach to Managing Lakes with Cyanobacterial Blooms

DOH recommends a three-tiered approach for managing toxic or potentially toxic cyanobacterial blooms. The approach applies recreational guidance values derived by DOH for managing Washington lakes (Table 1). Observers look for developing blooms and surface accumulations, which can occur in any nutrient-rich water such as lakes, ponds, or river embayments. Upon notification of a potential bloom, the LHJ or other agency staff (or a lake resident) will: (1) obtain a sample number from the state Freshwater Algae website [8], (2) sample the water body experiencing the bloom, and then (3) send the sample to the laboratory for toxicity tests. Sampling and shipping directions are available at the website [8] or from Ecology's Freshwater Algae Control Program [101]. At present, the KCEL is under contract with Ecology to test for microcystins, anatoxin-a, cylindrospermopsin, and saxitoxin. Results of toxicity analyses are incorporated into the Freshwater Algae website as they are received from the laboratory. In Washington, local jurisdictions have the authority to post advisories on water bodies within their districts (RCW 70.05.070), and actions taken, such as posting or closing a lake based on toxicity results, are published on the website and on Ecology's list serve.

Tier I

A sample of a visible cyanobacteria bloom or scum is sent for phytoplankton examination and toxicity testing. If the sample is dominated by potentially toxic cyanobacteria, the LHJ should post a CAUTION sign (Figures 6 and 7). Given the tremendous spatial and temporal variability in toxin concentrations, LHJs are encouraged to factor in the spatial extent of the bloom when deciding if a warning-level or closed-level advisory is warranted.

Tier II

When recreational guidance values for microcystin, anatoxin-a, cylindrospermopsin and/or saxitoxin are exceeded (Table 1), the LHJ posts a WARNING sign (Figures 6 and 7). The lake is then sampled weekly, because toxin levels may be variable: they may be at their highest during bloom die-offs even though the water looks "normal", or they may be significantly lower after temporary changes in weather, such as heavy wind and/or intense rainfall, which can redistribute cyanobacteria throughout the lake and water column with little change in the total number of cyanobacteria cells. This makes assessment of bloom density quite difficult. Therefore, DOH recommends that LHJs not lift advisories unless they check the lake under weather conditions that are conducive to biomass accumulation (relatively calm or a light steady wind, and little or no rainfall). Additional steps can be taken to communicate risk (e.g., a press release or notification of veterinarians and fish and wildlife officials) depending on the severity of the bloom, the time of year, and the historical use of the lake (e.g., a highly used access point such as a dog park might warrant greater outreach efforts compared with a lake not known for any recreational activity). In certain situations, some LHJs have mailed notifications to local lakefront residents after confirmation of cyanobacterial toxicity. Other possible measures that have been used to reach lakefront residents include radio messages or the internet via a list serve or "blast" email.

Tier III

Under certain circumstances, an LHJ may close a lake with unusually high microcystin, anatoxin-a, cylindrospermopsin, or saxitoxin concentrations.
At the discretion of the LHJ, a water body can be posted as DANGER-Closed (Figures 6 and 7). Examples include:

- Very dense blooms covering an entire lake
- Confirmed pet illnesses or death
- Reported human illness

The LHJ will post a press release to notify the general public of a lake closure. LHJs also follow whatever additional outreach methods, including those listed under Tier II, best inform public beach users and lakefront residents of the risks from cyanotoxins and how to avoid them. Retraction of lake closures is also at the discretion of the LHJ. DOH recommends posting a WARNING sign and following Tier II recommendations after retracting a lake closure, until microcystin levels are less than the recreational guidance levels (Figure 2).

Human Illnesses Associated with Freshwater HABs

Human illness reports following HAB exposure are investigated; however, definitions of suspected, probable, or confirmed human illnesses have changed over time, making quantitative reporting problematic. Symptoms following exposure are similar, but criteria for illness reporting have changed. At present, the CDC is working on case definitions for national consistency in reporting human illnesses following HAB exposure, with the realization that underreporting is likely an issue. Acknowledging these shortcomings, DOH reported 2-4 human illness investigations per year for 2010-2013, with a high of 122 human investigations in 2009, a year with unusually high temperatures during late July-early August. The risk of illness due to exposure to toxins in freshwater will be reduced through more extensive communication and outreach. To that end, Washington has a database of freshwater toxicity data available for the public to access via the web ([8]; Supplementary Figure 1). Toxicity results can be searched and retrieved by lake, county, water resource inventory area, and toxin, with defined concentrations and dates (e.g., Figure 2).

Future Threats, Needs and Recommendations

Although our understanding of toxic blooms in marine waters and state lakes is improving each year, many questions remain. Below is a list of topics recommended for future work on freshwater and marine HABs in Washington.

- Lake and reservoir HABs in Washington pose a potential new public health threat from exposure via drinking water. In 2014, a 500-household community used untreated drinking water from a lake during a period when anatoxin-a concentrations were low but still above state recreational guidelines; no illnesses were reported. In another case, the drinking water source for Friday Harbor, an island town, had a toxic bloom that resulted in the need to import water for the community. Future efforts will be needed to improve testing in lakes used as drinking water sources and to coordinate with drinking water managers of surface water systems that may develop toxic blooms.

- The additive toxicity of co-occurring blooms in lakes and marine waters must be studied. Further, as microcystin variants become easier to identify and quantify, toxicologists will need to determine actual toxicities to improve upon the current assumption, used for public health guidance, that all toxin variants are equally potent. In the future, our state will adopt national recreational values for freshwater cyanotoxins following EPA guideline development.
- CDC and the states are collaborating on an enhanced National Outbreak Reporting System that will fill the current gap at the state level for tracking animal and human illness events.

- The impact of climate change on marine HABs and cyanobacteria also needs to be addressed. Cyanobacteria and some marine HABs favor warm temperatures and other environmental conditions, such as increased nutrient inputs from land, that will be associated with climate change. If long-term climate projections for the Pacific Northwest are correct, rain events will increase, which may influence nutrient runoff from impervious surfaces, particularly as land is developed and regional populations increase.

- Washington has an effective Freshwater Algae Control Program based on passive surveillance, legislatively funded toxicity tests, and established cooperation between state agencies and local health jurisdictions. The state's 39 counties (35 local health jurisdictions) have a range of staff and resources available for water surveillance and sampling. Therefore, this program, together with the marine biotoxin monitoring program, will require continued and repeated outreach efforts to local health jurisdictions regarding blooms, toxicity testing, and toxicity postings. Thus, periodic seminars and webinars will be needed to ensure all areas of the state are aware of the program and knowledgeable about state-level technical support.

- Outreach efforts on marine and freshwater HABs have met some needs, but other educational needs remain unmet. Outreach to veterinary clinics regarding differential diagnoses and distribution of posters for pet owner education has been effective in the state. Annual outreach to the public and to hunters owning dogs will need to continue. More recently, DOH has included outreach to drinking water operators about available toxicity tests, bloom identification, and options for treatment when blooms occur. However, a major outreach and education gap in the state is for physicians who treat those exposed to toxic marine and freshwater blooms.

- Standardized and consistent posting at lakes and shellfish harvesting beaches experiencing toxic blooms is essential for public health protection. Some local health jurisdictions have raised concerns about over-posting, which can lead to the public ignoring CAUTION and WARNING signs, and under-posting, which may not be protective of public health. Since blooms in lakes and marine waters are notoriously patchy, some areas of a lake may be below recreational standards while high-toxicity scums in smaller areas remain a health threat. We recommend that managers, local health professionals, and state staff work together to refine outreach and offer additional posting options to reflect more complicated local conditions.

- Recommendations for future work include ongoing collaborative work investigating the link between freshwater toxins and the bioaccumulation of those toxins in marine bivalves. Further investigation of HAB genetics may help explain why some blooms are toxic and others are not. Another recommended effort is to investigate whether satellite imagery using smaller pixels can identify lakes with dominant cyanobacteria that are not under current surveillance.
Overall Summary and Conclusions

The integration of phytoplankton monitoring into regulatory programs to ensure shellfish safety has been promoted by European countries for many years and should be encouraged throughout the U.S., in particular in those states, such as Alaska, where regulatory monitoring of vast coastlines is challenging [102]. In some regions of the U.S., including Florida, cell counts of harmful algae are used together with satellite imagery and automated environmental observations to provide early warning of the development and movement of Karenia brevis blooms in the Gulf of Mexico [103]. Currently, each European state monitors marine HAB species in addition to toxins in shellfish along the Atlantic coastline. These data are interpreted and incorporated by each national monitoring program into national forecast bulletins that were developed during the Applied Simulations and Integrated Modeling for the Understanding of Toxic and Harmful Algal Blooms (ASIMUTH) project as a demonstration of a downstream service [104]. For freshwater HABs, each state has developed a unique approach for monitoring and regulating toxic blooms. The most extensive effort is in Florida, where the Florida Department of Health (FDOH) has developed the Harmful Algal Bloom Online Tracking Module, which allows public health professionals and environmental scientists/managers to collaborate on cyanobacteria bloom reporting through a secure web-based data management system, hosted in Caspio. There are currently 86 users from 18 different organizations utilizing the system. Other examples of states integrating monitoring into regulatory programs include Oregon's collaboration with the State Drinking Water Program and an effective emphasis on education and outreach; Massachusetts's collection of bloom data to serve as guidance for local health officials; New York's collaboration with the Citizen Statewide Lake Assessment Program; Wisconsin's interactive website for incident reporting by citizens and local health departments and strong partnerships with the WI Department of Natural Resources and poison control centers; Maryland, Virginia, and South Carolina's integration of freshwater and marine HAB monitoring and tracking of blooms; and Iowa's effective partnership between health and natural resource departments. Such programs provide an effective interface between formalized state and federal programs, while observations by trained citizen volunteers offer the best possible early warning systems for surveillance of known HABs as well as for the reporting and diagnosis of unusual events that may impact the future health of oceans, lakes and humans. The vision for the future includes interfacing current monitoring and management programs with efforts in basic research and model development to develop forecasting systems for marine and freshwater HABs. Early warning networks will monitor changes in the abundance and location of toxic blooms using an integrated suite of sensors on satellites and stationary sensor platforms that together can measure ocean water properties including temperature, current speed and direction, chlorophyll, cell species and abundance, and toxins. Data will be telemetered and incorporated with real-time shore-based monitoring.
An example of a remote sensing technology is the automated molecular detection and quantification of cells and toxins using the Environmental Sample Processor (ESP; e.g., [105-108]), which recently has been funded for deployment in the Pacific Northwest as part of the Integrated Ocean Observing System [109]. Rapidly accessed data from remote platforms will be used to calibrate and fine-tune physical and biological models and HAB forecasts. One such forecasting bulletin for the Washington State coast is in its pilot stage [110]. These models and forecasts will allow shellfish managers and early warning programs to take preventive actions (such as increasing monitoring efforts, closing targeted shellfish beds, and warning at-risk communities) to safeguard public health, local economies and fisheries. In addition, some proactive management will be facilitated, e.g., early opening of the shellfish harvesting seasons or early posting of toxin threats to recreational users of lakes. A combination of technologies, from volunteer-based phytoplankton monitoring programs, to state and federal regulatory analysis of toxins in shellfish and drinking water, to the newest remote sensing technologies, will provide the most comprehensive system for the protection of public health from documented marine and freshwater HABs as well as new and emerging biotoxins in Washington State.
Polarization and Phase Textures in Lattice Plasmon Condensates

Polarization textures of light may reflect fundamental phenomena, such as topological defects, and can be utilized in engineering light beams. They have been observed, for instance, in photonic crystal lasers and semiconductor polariton condensates. Here we demonstrate domain wall polarization textures in a plasmonic lattice Bose-Einstein condensate. A key ingredient of the textures is found to be a condensate phase that varies spatially in a nontrivial manner. The phase of the Bose-Einstein condensate is reconstructed from the real- and Fourier-space images using a phase retrieval algorithm. We introduce a simple theoretical model that captures the results and can be used for the design of the polarization patterns, and we demonstrate that the textures can be optically switched. The results open new prospects for fundamental studies of non-equilibrium condensation and for sources of polarization-structured beams.

Phase transitions and spontaneous symmetry breaking are associated with topological defects, for example vortices with windings of the phase of a superfluid or superconductor. A vector field with a (pseudo)spin or polarization degree of freedom allows an even richer set of topological defects such as skyrmions, merons, half-vortices, nodal lines and magnetic monopoles. Three main approaches are typically used when creating polarization textures of light: 1) spontaneous appearance in a phase transition or a quench (6, 8), 2) imposing the texture via an excitation or pump beam (9), and 3) advanced structural engineering of the medium supporting the optical modes (11, 14, 15). Polariton condensates in semiconductor systems are amenable to the first two; however, polarization textures have been observed only at cryogenic temperatures, and complex structural engineering (e.g., (15-17)) is technically demanding. The latter is well developed in traditional photonic crystals and metamaterials, which, when combined with a gain medium, show lasing. However, strong-coupling condensation phenomena with related interactions have not been reported in those systems. Here, we introduce a novel way of creating polarization textures that combines all three approaches in a platform that avoids the limitations of previously utilized systems. It is based on the design of optical dipoles in a plasmonic nanoparticle array, combined with a non-trivial phase structure of a Bose-Einstein condensate hosted by the lattice. Different polarization textures are switchable by the polarization of the beam pumping the condensate. We experimentally demonstrate this new paradigm in a simple geometry and show that it leads to domain wall formation. For the first time for any kind of condensate, we reconstruct the BEC phase by a phase retrieval algorithm, avoiding interference measurements. Our system provides extremely easy and versatile engineering of the geometry and unit cell of the lattice (18-21), along with room temperature strong coupling condensation leading to effective interactions (22, 23), and ultrafast sub-picosecond operation (23, 24). The demonstrated new approach to polarization texture creation combines these assets in an unprecedented way, and is expected to be fruitful both in fundamental studies of non-equilibrium condensation phenomena (25) and in beam polarization engineering (11-13), particularly when compact and ultrafast components are desired. An illustration of the system and the experiments is shown in Fig. 1 (A).
A square array of gold nanoparticles fabricated on a glass substrate is immersed in a fluid and sealed with a cover glass. The fluid is either an index-matching oil for studies of the bare array, or a fluorescent dye solution serving as a gain medium for the lasing and condensation measurements. The bare arrays support collective plasmonic modes called surface lattice resonances (SLRs), which are hybrid modes consisting of the localized surface plasmon resonances of individual nanoparticles and the diffracted orders of the periodic lattice (19, 20). Excitations in the SLR modes are bosonic quasiparticles that have a mainly photonic nature, but also consist of collective electron oscillations in the nanoparticles; for their dispersion, see Fig. 1 (B). The Γ-point of the dispersion provides a band edge that may host lasing or condensation. When the nanoparticle arrays are combined with emitters (e.g., dye molecules), the SLR modes remain intact (weak coupling) for low emitter concentrations, while for high concentrations the strong coupling regime is reached and the excitations transform into polaritons, that is, hybrids of the SLR modes and emitter excitations (26). The emitters may be pumped externally and serve as a gain medium. Three modalities of coherent emission have been observed so far: 1) lasing in the weak coupling regime (18, 27), 2) polariton lasing/condensation (22), and 3) Bose-Einstein condensation (BEC) both at weak (24) and strong coupling (23). The BEC requires thermalization consisting of multiple molecule-light absorption and emission processes associated with loss of energy to molecular vibrational degrees of freedom (28). The luminescence spectrum from the condensate shows a Bose-Einstein distribution, although the phenomenon is different from equilibrium BEC, as it occurs on the sub-picosecond scale. While output polarizations have been measured for previous nanoparticle array lasers and condensates, polarization textures have not been observed. Here, we work with the strong coupling BEC as introduced in (23). To reveal the fundamental properties of the condensate polarization structure, we use a highly symmetric geometry: the size and period of the lattice are the same in the x- and y-directions, and the particles are cylinders fully symmetric in the sample plane. We combine the array with a fluorescent solution of IR-792 at a concentration of 80 mM, which leads to strong coupling. The sample is pumped at 800 nm using left circularly polarized laser pulses generated with an ultrafast Ti:sapphire laser. The light radiated from the sample inherits the properties of the plasmonic excitations; therefore the condensate can be characterized via real-space, spectral, k-space and polarization-selective imaging. All the real-space data shown here are luminescence collected after a single pump pulse, that is, they correspond to single-shot realizations of the condensate (29). Fig. 1 (C) shows a distinct double-threshold behaviour in the total measured luminescence as a function of the pump fluence. The first threshold corresponds to polariton lasing, and the second one to strong coupling BEC (23); we focus here on the latter regime. The real-space intensity profile of the condensate including all polarizations is non-uniform and x−y symmetric, see Fig. 1 (D). With a linear polarizer in the x-direction, we obtain Fig. 1 (E).
When thermalizing towards the ground state (k = 0 band edge), the SLR excitations propagate due to their finite momentum k. This, together with the finite size of the array, leads to a non-uniform condensate profile, which in (23) was observed only in one direction (similar to Fig. 1 (E)) due to the use of a linearly polarized pump, which triggered the propagation only along one direction. Here, the circularly polarized pump leads to propagation and non-uniform condensate density in both x- and y-directions.

Figs. 2 (A-F) show the real-space intensity images of the pumped sample filtered using different polarizers. Figs. 2 (A-B) and (E-F) reflect the underlying x−y symmetry of the system. Remarkably, the right and left circularly polarized images (Figs. 2 (C-D)) show complementary intensity patterns, with right circularly polarized light being emitted from the centre and corners of the array, while luminescence close to the sides is mostly left circularly polarized. This means that the patterns can be switched optically by femtosecond-scale pulses. Figs. 2 (G-L) are discussed later, when we provide a theoretical model to explain the experimental findings. Note that right and left circular polarizations are superpositions of horizontal (here x) and vertical (y) linear polarizations, (|↕⟩ + ie^(iϕ_R/L)|↔⟩)/√2, with the phase differing by π: ϕ_R = π, ϕ_L = 0. This, together with the observed change from right to left circular polarization over the array, would hint towards a phase shift of the condensate by π. We have also used a diagonally polarized pump beam, which led to different patterns (see Fig. S2).

In addition to the real-space data, we capture k-space (k_x, k_y) images of the luminescence (Figs. 3 (A-F)) which display striking similarities to their real-space counterparts: Figs. 3 (A-B) and (E-F) reflect the x−y symmetry, while the left and right circular polarization components (Figs. 3 (C-D)) have momentum distributions distinct from each other, i.e., they cannot be made the same by a rotation. Together, the real-space and Fourier domain data sets (Figs. 2 (A-F) and Figs. 3 (A-F), respectively) allow us to reconstruct the phase profiles of the differently polarized emission patterns by utilizing the Gerchberg-Saxton phase retrieval algorithm (29). Figs. 3 (G-L) show the reconstructed real-space phase distributions. Remarkably, the images display non-uniform phase profiles: emission from the centre of the plasmonic array has obtained an opposite phase compared to the emission from the edges. Particularly interesting is the left circularly polarized case, which shows the opposite phase appearing between adjacent edges. Starting from right circular polarization (|↕⟩ − i|↔⟩)/√2 in the middle (Fig. 3 (I)), moving towards the edges in the x-direction and adding a π phase shift to the |↔⟩ component yields left circular polarization, (|↕⟩ + i|↔⟩)/√2, consistent with the complementary patterns of Figs. 2 (C-D). The agreement between the measured and reconstructed images provides immediate proof of the robustness of the phase retrieval method.

Based on the non-uniform phase distributions obtained from the phase retrieval algorithm, we construct a simple yet highly effective model of the polarization states of the lattice plasmons. The array is modeled as a 2D grid of Jones vectors depicting the polarization of the emitted light. As a basis, we use the amplitudes A_H and A_V, and phases ϕ_H and ϕ_V, of the horizontally (x) and vertically (y) polarized electric field components. In the collective SLR lattice modes, the magnitude of a nanoparticle dipole oscillating in the y-direction (x-direction) decreases when moving from the centre of the array towards the edges in the x-direction (y-direction). This is because the nanoparticles closer to the edges receive no radiation from outside the array. Fig. 4 (A) shows a schematic of our model. Based on the simple argument of nanoparticles receiving no radiation beyond the array boundaries, the decrease is linear and of a factor of two. Alternatively, one can utilize the measured real-space intensity data in the model; this is discussed in the supplementary materials (see Fig. S3). Motivated by the results of the phase reconstruction, we apply a linear approximation for the phase components: ϕ_H is varied from 0 to π and back as a function of x (purple line), and ϕ_V from π/2 to 3π/2 and back as a function of y (red line). The relative phase difference of π/2 between ϕ_H and ϕ_V is chosen to correspond to right circular polarization at the centre of the array, as observed experimentally and anticipated from the pump polarization. In order to compare the theoretical model to the measured real-space intensities, Jones calculus is applied to the 2D grid of Jones vectors, and the resulting electric field intensities of linear and circular polarization states are plotted in Figs. 2 (G-L).
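To make the Jones-vector grid concrete, the sketch below implements the model just described in Python. It is our own illustrative reconstruction, not the authors' code; the grid resolution, the normalization of the array coordinates to [−1, 1], and the circular-polarization sign conventions are assumptions.

    import numpy as np

    N = 201                                   # grid resolution (assumption)
    x = np.linspace(-1.0, 1.0, N)             # array coordinates, edges at |x| = 1
    X, Y = np.meshgrid(x, x, indexing="xy")

    # Dipole magnitudes drop linearly by a factor of two from centre to edge:
    # the y-oriented (vertical) dipoles weaken along x, and vice versa.
    A_V = 1.0 - 0.5 * np.abs(X)
    A_H = 1.0 - 0.5 * np.abs(Y)

    # Linearly approximated phases: phi_H runs 0 -> pi -> 0 along x,
    # phi_V runs pi/2 -> 3*pi/2 -> pi/2 along y (pi/2 offset at the centre).
    phi_H = np.pi * (1.0 - np.abs(X))
    phi_V = 0.5 * np.pi + np.pi * (1.0 - np.abs(Y))

    E_H = A_H * np.exp(1j * phi_H)            # Jones vector field (E_H, E_V)
    E_V = A_V * np.exp(1j * phi_V)

    def project(p_h, p_v):
        """Intensity after an analyser with Jones vector (p_h, p_v)."""
        return np.abs(np.conj(p_h) * E_H + np.conj(p_v) * E_V) ** 2

    s = 1 / np.sqrt(2)
    I_H, I_V = project(1, 0), project(0, 1)
    I_D, I_A = project(s, s), project(s, -s)
    # Signs chosen so the grid centre comes out right-circular, matching the
    # convention in the text; absolute handedness conventions vary.
    I_R, I_L = project(s, 1j * s), project(s, -1j * s)

Plotting I_R and I_L from this sketch should qualitatively reproduce the complementary centre/edge patterns, since the π/2 offset between ϕ_H and ϕ_V at the centre changes sign wherever one of the phase ramps has accumulated an extra π.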
Remarkably, the combination of non-uniform amplitude and phase distributions allows the model to qualitatively reproduce the real-space polarization patterns observed in the plasmonic condensate. Moreover, removing the π/2 phase difference between ϕ_H and ϕ_V, which corresponds to diagonal polarization in the centre of the array, indeed leads to images that are consistent with the experimental intensity patterns observed with a diagonally polarized pump beam (see Fig. S2). This demonstrates that the observed phase distributions can be switched by the polarization state of the pump.

So far, we have investigated projections of the condensate emission on different polarization components. Now we also calculate the Stokes vectors S = (S1, S2, S3), characterizing the pseudospin nature of polarization and how it evolves at different points on the array. The vector components are given by

S1 = (I_H − I_V)/(I_H + I_V), S2 = (I_D − I_A)/(I_D + I_A), S3 = (I_R − I_L)/(I_R + I_L),

where I_σ is the measured luminescence intensity and σ corresponds to the six different polarization states (horizontal (H), vertical (V), right circular (R), left circular (L), diagonal (D), and antidiagonal (A)). The pseudospin textures given by the experimental data, as well as those predicted by the theoretical model with uniform and non-uniform phase profiles, are shown in Figs. 4 (B-D). The spin texture plotted with a uniform phase distribution (Fig. 4 (C)) does not reproduce the complex spin textures calculated from the experimental results (Fig. 4 (B)). In contrast, there is a striking resemblance between these experimentally observed pseudospin patterns and the theoretical model with the non-uniform phase distribution (Fig. 4 (D)), which demonstrates the importance of the discovered phase profiles in the system. The observed pseudospin texture (Fig. 4 (B)) and the corresponding theory prediction (Fig. 4 (D)) show clear domain walls separating four regions with mostly left circularly polarized emission. Fig. 4 (E) shows the Stokes vector orientations following the red arrow across one of the domain walls in Fig. 4 (D), and windings of 2π and ∼1.6π are observed along the R-D and R-H planes, respectively. We do not observe a full rotation along the R-H plane, as the amplitudes of the horizontally and vertically polarized components are different at the edges. These windings are reversed in the adjacent domain wall (blue arrow) shown in Fig. 4 (F), and, following a closed loop around the centre of the sample, the total winding number becomes zero. Here, in a system with simple square lattice geometry, the pseudospin texture is of non-topological nature. Given the broad tunability of the plasmonic nanoparticle array and the dependence of the phase profiles on the pump polarization, the creation of topologically non-trivial textures is a feasible goal.

In summary, we have observed polarization textures arising from an interplay between a structured optical medium and a non-uniform Bose-Einstein condensate phase. One ingredient of the textures is the finite size of the periodic array, which causes the nanoparticle dipoles to weaken towards the edges. This alone, however, would lead to nothing but unremarkable effects on the polarization properties (c.f. Fig. 4 (C)). For the observed prominent domain wall structures (Figs. 4 (E-F)), an additional element is crucial: the non-uniform phase of the condensate. We revealed a zero-to-π phase change between the central and edge parts of the array with a Gerchberg-Saxton algorithm. In addition to being essential for explaining the textures, this constitutes the first experimental determination of a condensate phase by computational imaging, proposed earlier by theory (30, 31). This achievement puts forward an attractive alternative to measurements of phase by interference, replacing complex experiments with a robust computational approach.
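As an aside, the Stokes analysis above maps onto a few lines of code. This is a minimal sketch, not the authors' pipeline; the per-pair normalization mirrors the expressions given above, and the image arrays and function names are hypothetical.

    import numpy as np

    def stokes_from_images(I_H, I_V, I_D, I_A, I_R, I_L, eps=1e-12):
        """Pixel-wise normalized Stokes components from six
        polarization-filtered intensity images of equal shape."""
        S1 = (I_H - I_V) / (I_H + I_V + eps)
        S2 = (I_D - I_A) / (I_D + I_A + eps)
        S3 = (I_R - I_L) / (I_R + I_L + eps)
        return np.stack([S1, S2, S3])          # shape (3, ny, nx)

    def winding_angle(S, path, i=1, j=2):
        """Unwrapped angle of the Stokes vector in the (S_i, S_j) plane,
        e.g. the D-R plane for (i, j) = (1, 2), along a list of (row, col)
        pixels crossing a domain wall."""
        ang = [np.arctan2(S[j, r, c], S[i, r, c]) for r, c in path]
        return np.unwrap(ang)

Summing the unwrapped angle difference along a path across a domain wall then gives the winding (e.g., 2π in the D-R plane), in the spirit of Figs. 4 (E-F).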
For our proof-of-concept demonstration of polarization textures, we used a C2 symmetric (x−y symmetric) system. Future design possibilities include lattices with different geometry, size, and structure of the unit cell to realize new combinations of broken or competing symmetries, artificial gauge fields, and pseudospin-orbit coupling; different material choices such as dielectrics are also feasible (32-34). Importantly, the simple theoretical framework introduced here allows fast and intuitive planning of the desired textures. The expected qualitative behaviour of the field intensities for different polarizations can be determined from the geometry of the lattice, and straightforwardly generalized to higher-order multipolar nanoparticle modes and more complex unit cells. Such an approach allows one to explore and plan the polarization textures that various lattice configurations, together with different phase profiles, can produce. To exploit the condensate phase as a design degree of freedom, further studies are needed to understand photonic strong coupling and related non-linearities (35) in the apparently soliton-like phase and intensity profile of the condensate. Our results open new prospects for fundamental studies of vectorial (pseudospin) non-equilibrium condensates (25) and topological photonics (36), as well as for tailoring bright coherent beams with complex polarization properties. The room temperature operation, straightforward sample fabrication, and ultrafast switching by the pump polarization are important assets. On-chip pumping would complete the list; combining plasmonic nanoparticle arrays with organic materials amenable to electrically induced gain (37) is obviously a worthwhile future research direction.

Sample fabrication

Square arrays of Au nanoparticles are fabricated on borosilicate glass slides using electron beam lithography. A polymethyl methacrylate (PMMA) layer, which is spin-coated and baked solid on the glass substrates, is covered with 10 nm of evaporated aluminum and patterned using an electron beam. The aluminum layer is then removed using 50% AZ 351B developer, and the PMMA layer is developed by immersion in 1:3 methyl isobutyl ketone:isopropanol solution. A thin (2 nm) titanium adhesion layer and a 50 nm gold layer are evaporated on the patterned slide, and excess PMMA and metal are removed by acetone lift-off. The array size is 100 × 100 µm², and the nominal periodicity is 568 nm, which sets the Γ-point energy at ∼1.44 eV. The height and diameter of the cylindrical nanoparticles forming the array are 50 nm and 105 nm, respectively. The samples are prepared for measurements by sealing the nanoparticles between the substrate and a cover glass slide using a circular silicone isolator, whose thickness is 0.8 mm. For transmission measurements, the isolator is filled with index-matching oil; for condensation measurements, it is filled with an 80 mM solution of IR-792 perchlorate dissolved in a 1:2 dimethyl sulfoxide:benzyl alcohol mixture. The solvent has a refractive index matching that of the glass slides (n = 1.52).

Experimental setup

A detailed schematic of the setup is shown in Fig. S1. The measurement setup can be used to measure both angle-resolved k-space spectra and real-space spectra with minor modifications. Light exiting the sample is collected using an infinity-corrected objective (10x, 0.3 NA) together with a compatible tube lens. An optional polarizer may be placed after the tube lens to limit the measurement to a single polarization state. Here, the polarization is defined from the point of view of the source. A long pass filter (cutoff wavelength at 850 nm) is used in the detection path to filter out pump reflections. Light from the sample is spatially restricted to the nanoparticle array using an adjustable iris in front of the cameras.
In k-space measurements, the back focal plane of the objective is focused onto the entrance slit of a spectrometer such that each point on the slit corresponds to a specific emission angle θ_y. The angle is related to momentum as k_y = k_0 sin θ_y = (2π/λ_0) sin θ_y, where k_0 and λ_0 are the free-space wavenumber and wavelength, respectively. This allows the 2D charge-coupled device (CCD) array inside the spectrometer to measure a spectrum at different values of k_y simultaneously. In addition, two fast complementary metal-oxide-semiconductor (CMOS) cameras are used to take direct real- and momentum-space images of the sample. The long pass filter limits the emission collected by the CMOS cameras to energies below 1.46 eV. In real-space measurements, an additional lens is placed after the tube lens, which causes the real-space image of the sample to be formed at the entrance slit of the spectrometer. In this case, each point on the slit corresponds to a specific y-coordinate of the sample. In the condensation experiment, the sample is pumped optically using ultrafast laser pulses (50 fs pulse duration, 800 nm centre wavelength (1.55 eV energy)) that are left circularly polarized. However, as the pulses reflect off the sample, they are observed as right circularly polarized in the detection path. Since the repetition rate of our pump pulses is 1 kHz, setting the integration time of our CMOS cameras to 1 ms allows us to capture luminescence from a single realization of the condensate. An iris is used to spatially crop the pump beam, which is then focused on the nanoparticle array through the objective with the help of an additional pump lens. The polarization state of the pulses is controlled using motorized quarter- and half-wave plates on the pump path. Pump fluence is varied using a neutral density wheel. In transmission measurements, the array is illuminated using a broadband halogen light source.

Phase retrieval

Prior to phase retrieval, the real- and k-space images shown in Figs. 2 (A-F) and Figs. 3 (A-F) were pre-processed using standard procedures (38). Each dataset was centred in the computational domain. Fourier data were centred by finding a local maximum or a local minimum in the vicinity of the physical centre of the k-space intensity distribution. The centre of the real-space data was found by applying a watershed segmentation algorithm to the real-space intensity image and computing the centre of mass of the segmented region. Real-space data were resampled to fulfill the relationship between the pixel sizes in the object and Fourier domains as set by the digital Fourier transformation. The background noise (average 3100 counts in the object domain and 2200 counts in the k-space) was subtracted from each pixel; 3300 counts were subtracted from Fig. 2 (C) and 2150 counts were subtracted from Figs. 3 (B-C), as this led to better convergence. The phase reconstruction was performed with the Gerchberg-Saxton phase retrieval algorithm (39). The object-domain constraint was the square root of the processed real-space intensity distribution (Figs. 2 (A-F)), and the Fourier constraint was the square root of the processed k-space intensity distribution (Figs. 3 (A-F)). The linear oversampling ratio was approximately 11 and thus fulfilled the oversampling condition (40). In total, we performed 1000 independent reconstruction rounds with different initial random phase distributions in the k-space. The random phases were generated from a uniform distribution between −π and π. Each reconstruction round comprised 100 iterations, as this number of iterations was sufficient for the algorithm to converge. Only the 5% of reconstructed phase distributions with the lowest error metric in the k-space (41) were selected out of the 1000 reconstructions and averaged following a standard protocol (42). For averaging purposes, the phase value in the centre of the computational domain was used as a reference, which accounted for an arbitrary global phase shift in the reconstructed images. The resulting reconstructed real-space phase distributions shown in Figs. 3 (G-L) were weighted with the corresponding amplitude values (square root of the processed Figs. 2 (A-F)) for illustration purposes.
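For orientation, here is a minimal sketch of the Gerchberg-Saxton loop described above. It is our own illustrative reconstruction, not the authors' code; the FFT shift conventions, the specific error metric, and all variable names are assumptions.

    import numpy as np

    def gerchberg_saxton(real_amp, k_amp, n_iter=100, seed=0):
        """Recover a real-space phase consistent with the measured real-space
        and k-space amplitudes (square roots of the processed intensities)."""
        rng = np.random.default_rng(seed)
        # Start from a random k-space phase, as in the 1000 independent rounds.
        G = k_amp * np.exp(1j * rng.uniform(-np.pi, np.pi, k_amp.shape))
        for _ in range(n_iter):
            g = np.fft.ifft2(np.fft.ifftshift(G))
            g = real_amp * np.exp(1j * np.angle(g))      # object-domain constraint
            G_est = np.fft.fftshift(np.fft.fft2(g))
            err = np.linalg.norm(np.abs(G_est) - k_amp) / np.linalg.norm(k_amp)
            G = k_amp * np.exp(1j * np.angle(G_est))     # Fourier-domain constraint
        return np.angle(g), err

    # Usage: run many restarts and keep the best few percent, as in the text;
    # the averaging step would additionally need a common phase reference.
    # runs = [gerchberg_saxton(np.sqrt(I_real), np.sqrt(I_k), 100, s) for s in range(1000)]
    # best = sorted(runs, key=lambda pe: pe[1])[:50]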
Diagonal pumping

In addition to the circularly polarized pumping scheme described in the main text, we also investigate sample luminescence under diagonally polarized pumping. The resulting real-space images are presented in Fig. S2, where the black arrows illustrate the polarization state of the luminescence. The overall measured intensity is lower compared to circularly polarized pumping, while the vertically and horizontally polarized components do not show a clear accumulation of SLR excitations at the centre of the sample. A comparison between Figs. 2 and S2 shows that, although both pump configurations show the same assortment of patterns, the textures associated with circular and diagonal polarization states are switched when we change the pump polarization: if patterns (D) and (C) are swapped with (E) and (F) in Fig. 2, we reach the arrangement of textures in Fig. S2. The phase delay between the vertical and horizontal components of the pump is inherited by the plasmonic excitations and affects the observed polarization textures. A plausible explanation for this is the start of the lasing/condensation process by stimulated rather than spontaneous emission: the pump beam is not resonant with the plasmonic modes and thus mainly excites the molecules; however, it also drives off-resonant weak excitations in the nanoparticles. These excitations stimulate the initial molecular emission, and their polarization properties thus influence the lasing or condensate state. Initiation of the condensation process by stimulated rather than spontaneous emission is in accordance with the ultrafast timescales and the stimulated nature of the thermalization observed in (23).

Figure 1: Spatially non-uniform condensation of lattice plasmon excitations. (A) Illustration of the sample structure and the experimental configuration. Both the pumping of the molecules (IR-792) on the sample and collection of the sample photoluminescence are done via the same objective (29); see Fig. S1 for further details. (B) Extinction of a bare nanoparticle array immersed in index-matching oil, measured as (1 − T), where T is the transmission. Here k_y corresponds to the wave vector component parallel to the sample surface, and is related to the emission angle θ_y and wavelength λ_0 as k_y = (2π/λ_0) sin θ_y. The dispersion of the modes shows the linear TE and parabolic TM surface lattice resonance modes crossing at the Γ-point. The inset is a scanning electron micrograph of the gold nanoparticles on a glass substrate; the scale bar is 1 µm. (C) Total measured luminescence intensity of the sample as a function of pump fluence. (D) Unpolarized and (E) horizontally polarized real-space images of the sample at 1.8 mJ/cm² pump fluence.

Figure 2: Real-space polarization patterns in a plasmonic condensate. (A-F) Sample emission intensities under left circularly polarized pumping with the fluorescence imaged through (A) horizontal, (B) vertical, (C) right circular, (D) left circular, (E) diagonal and (F) antidiagonal polarizers. These polarizers are illustrated with black arrows. (G-L) Electric field intensities obtained by a theoretical model with polarizations corresponding to those in panels (A-F).

Figure 3: Reconstruction of phase in a plasmonic nanoparticle array. (A-F) Two-dimensional k-space images of the spatially non-uniform emission patterns shown in Figs. 2 (A-F). Here k_{x,y} = (2π/λ_0) sin θ_{x,y}. (G-L) Real-space phase distributions reconstructed from the real- and k-space data using the Gerchberg-Saxton algorithm. The images display the phase differences in the sample with an arbitrary overall phase.

Figure 4: Jones vector grid and Stokes vector comparison. (A) Schematic of the array model of Jones vectors depicting the polarization states of the plasmonic condensate. The arrows at different points of the array depict the amplitudes (arrow length) and phases (arrow direction) of the horizontal (blue arrows) and vertical (red arrows) polarization components. The amplitude profile used in both components (black and blue straight lines) increases towards the sample centre. The phase profiles are approximated linearly with a π-phase increase from the edges towards the array centre (purple and red lines). (B-D) Stokes vectors illustrating the polarization state at different points on the array, calculated from (B) the experimental real-space polarization patterns, and from theoretical dipole maps with (C) constant and (D) non-uniform phases. The relation between the polarization states and Stokes vectors is given in the arrow axes on the right. The length of each Stokes vector along the circular polarization axis is illustrated with a color scale, where −1 corresponds to left circular and +1 to right circular polarization. (E-F) Stokes vector winding along the (E) red and (F) blue arrows shown in (D). The vectors are viewed parallel to the array surface towards negative x-values.

Figure S1: Schematic of the experimental setup used in the angle- and energy-resolved intensity measurements. The spectrometer and two cameras allow the setup to simultaneously measure the spectral information of the emitted light and capture Fourier and real-space images of the sample. Optional components are marked with dashed rectangles. Here, BS stands for beamsplitter and ND for neutral density, and LP850 refers to a longpass filter with a cutoff wavelength of 850 nm.

Figure S3: Jones vector model with fitted amplitude profiles. (A) Modified version of the theoretical model shown in Fig. 4 (A). The linearly approximated amplitude profiles are replaced with averaged amplitude values from Fig. 2 (B). (B-G) Electric field intensities obtained by the Jones vector model presented in (A) with (B) horizontal, (C) vertical, (D) right circular, (E) left circular, (F) diagonal and (G) antidiagonal polarizers.
Nuclear factor I as a potential regulator during postembryonic organ development.

Nuclear factor I (NFI) family members are transcription factors that are believed to also participate in DNA replication. We have cloned two Xenopus laevis NFIs that are up-regulated by thyroid hormone. They are 84-95% identical to their counterparts in birds and mammals. In contrast, the two Xenopus NFIs are much less homologous to each other, sharing only 58% homology, which largely resides in the DNA binding domain at the amino terminus. However, both NFIs can bind to a consensus NFI binding site and activate the transcription of a promoter bearing the site. Northern blots reveal that both NFI genes are regulated in tissue- and developmental stage-dependent manners. They are first activated, independently of thyroid hormone, to low levels at stages 23/24, around the onset of larval organogenesis. After stage 54, their mRNA levels are dramatically up-regulated by endogenous thyroid hormone, and high levels of their expression correlate with organ-specific metamorphosis. Furthermore, gel mobility shift assays indicate that the NFI proteins are present in different organs and that their levels are regulated similarly to the mRNA levels. These results strongly suggest that NFIs play important roles during postembryonic organ development, in contrast to the general belief that NFIs are ubiquitous factors.

The proteins of the nuclear factor I (NFI) family are transcription factors encoded by multiple genes in birds and mammals (Gil et al., 1988; Santoro et al., 1988; Meisterernst et al., 1988; Paonessa et al., 1988; Inoue et al., 1990; Rupp et al., 1990). In addition, different forms of these factors can be generated by multiple alternative splicing of individual NFI genes (Santoro et al., 1988; Inoue et al., 1990; Apt et al., 1994), although the functional difference among these various forms is still unclear. NFIs are sequence-specific DNA binding proteins that recognize a consensus NFI binding site made of TGGCA(N)3TGCCA (Nowock et al., 1985; Gronostajski, 1986; Nilsson et al., 1989). Upon binding to NFI binding sites, these NFIs can activate the transcription of the corresponding promoters (Jones et al., 1987; Cereghini et al., 1987; Santoro et al., 1988). While the mechanism of this transcriptional activation is still unknown, NFI binding sites have been found in a wide variety of genes (Raymondjean et al., 1988; Zorbas et al., 1992; Inoue et al., 1990), and the NFI genes are expressed in many different tissues (Cereghini et al., 1987; Paonessa et al., 1988; Apt et al., 1994), suggesting that NFIs are crucial for cell function in many organs. In addition, NFIs have also been found to be required for the initiation of adenovirus replication both in vitro and in vivo (Nagata et al., 1982; Leegwater et al., 1985; Hay, 1985; Wang and Pearson, 1985; Bernstein et al., 1986; Gronostajski et al., 1988). This raises the possibility that NFIs may also participate in cellular DNA replication. However, it remains to be seen whether NFIs play specific roles during development. We have identified two NFI genes that are up-regulated during the metamorphic transition in Xenopus laevis. Amphibian metamorphosis is an ideal model system to study postembryonic development (Tata, 1993).
It systematically transforms every single organ/tissue of a tadpole, for example the total resorption of the tail, the de novo development of the limb, and the remodeling of the simple tubular tadpole intestine into a complex, multiply folded adult organ (Dodd and Dodd, 1976; Gilbert and Frieden, 1986; Yoshizato, 1989). While different tissues undergo drastically different changes at distinct developmental stages, all are under the control of thyroid hormone (T3) (Dodd and Dodd, 1976; Galton, 1983; Kikuyama et al., 1993). T3 is believed to affect amphibian metamorphosis by regulating the transcription of specific target genes in different tissues through its nuclear receptors (Tata, 1993; Shi, 1994). The two NFI genes were isolated as two such T3-regulated genes during intestinal remodeling, a process that involves both apoptosis of the larval epithelial cells and proliferation and differentiation of the adult epithelial cells (McAvoy and Dixon, 1977; Shimozawa, 1987, 1992). We demonstrate here that the two Xenopus NFIs bind DNA specifically and activate transcription in an oocyte transcription system. More importantly, we show that the expression of the NFI mRNAs, as well as the NFI or closely related proteins, is up-regulated in the intestine during metamorphosis as the larval organ degenerates and the adult intestine develops. Furthermore, the mRNA and protein levels are also high during both limb morphogenesis and tail resorption, while very low in premetamorphic tadpoles or embryos. These results strongly implicate the participation of NFIs in frog organogenesis.

MATERIALS AND METHODS

Isolation and Sequencing of Full-length cDNAs for X. laevis NFI Genes-To clone the full-length cDNAs for T3 (3,5,3′-L-triiodothyronine)-induced genes in the metamorphosing intestine, the PCR cDNA fragments isolated from a subtractive differential screen (Shi and Brown, 1993) were used to screen an intestinal cDNA library (Patterton et al., 1995). Sequence analysis showed that two of the genes encoded proteins homologous to NFIs previously cloned in other species. These two genes, IU16 and IU33 (Shi and Brown, 1993), were renamed Xenopus NFI-B1 and NFI-C1, respectively. The missing 3′-end of the NFI-C1 gene and the 5′-ends of the NFI-B1 and NFI-C1 genes were cloned using the anchor PCR method of Frohman et al. (1988) and total RNA from stage 62 tadpoles.

Northern Blot Analysis-Tadpoles of the indicated stages (Nieuwkoop and Faber, 1956) were treated with or without 5 nM T3, and RNA was isolated and analyzed as described (Stolow and Shi, 1995).

Overproduction of Xenopus NFIs and Antibody Generation-The cDNA inserts from the original NFI cDNA clones and anchor PCR clones were used to construct overproduction vectors for the DNA binding domains and the full-length NFI-B1 and NFI-C1 in pET 15b and pET 28a vectors, respectively (Novagen). The clones were transformed into E. coli BL21 cells. The bacteria were induced with 1 mM isopropyl-1-thio-β-D-galactopyranoside for 4 h at 30°C. Under these conditions, the proteins produced were insoluble. To generate antibodies against the NFIs, the full-length NFI-B1 and the DNA binding domain of NFI-C1 were isolated from the insoluble fraction and further purified on polyacrylamide gels with Chromaphor stain (Promega).
Gel slices containing the NFI proteins were excised and used to immunize rabbits.

Cloning of the Full-length Open Reading Frames of NFI-B1 and NFI-C1 into the pSP64pA Vector and in Vitro Transcription-DNA encoding the entire NFI-B1 together with 52 bp of the 3′-untranslated region was cloned into the BamHI and SacI cloning sites of the pSP64pA vector (Promega), and the DNA coding region of NFI-C1 together with 384 bp of the 3′-untranslated region was cloned into the BamHI cloning site of the vector. One µg of each construct was linearized and transcribed with SP6 polymerase (Ambion). The purified mRNA was analyzed on a 1.2% agarose/formaldehyde gel to check the quality and by spectrophotometry to determine the quantity.

Overproduction of NFI in X. laevis Oocytes and DNA Binding Assay-Stage 6 oocytes were injected with 25 ng of NFI mRNA/oocyte and incubated at 18°C overnight. The oocytes were homogenized in 70 mM KCl, 20 mM HEPES, pH 7.6, 1 mM dithiothreitol, 5% glycerol, 1 mM MgCl2, and 2 mM phenylmethylsulfonyl fluoride (10 µl/oocyte), and the protein extracts were prepared as described (Wong and Shi, 1995). Specific DNA binding by NFI was analyzed by the gel mobility shift assay. Fifteen µl of the buffer containing 20 mM HEPES, pH 7.5, 5 mM MgCl2, 100 mM NaCl, 5 mM dithiothreitol, 10% glycerol, 0.1% Triton X-100, and proteinase inhibitors (5 µg/ml aprotinin, 5 µg/ml pepstatin A, 5 µg/ml leupeptin, and 5 mM phenylmethylsulfonyl fluoride) was mixed with 2.5 µl of poly(dI-dC) (500 ng) and 32P-labeled double-stranded oligonucleotide (5 ng) containing the consensus palindromic binding site for NFI (ds-NFI, see below). The binding reaction was initiated by adding the above protein extracts to the mixture (2.5 µl of control or NFI-B1 extract, or 0.5 µl of NFI-C1 extract supplemented with 2.0 µl of control extract; less NFI-C1 extract was used due to the more efficient translation of NFI-C1 mRNA (see "Results")). Samples were incubated for 20 min at room temperature and analyzed on a 6%, 0.5× TBE native polyacrylamide gel. As a nonspecific competitor in DNA binding, a double-stranded oligonucleotide (ds-NS) was used, which contained a binding site for thyroid hormone receptors (Ranjan et al. (1994), where it was named xTRE). For antibody supershift experiments, the oocyte extract and poly(dI-dC) were first incubated with 1 µl of either preimmune or anti-NFI serum for 45 min at room temperature. The labeled ds-NFI was then added, and the mixture was incubated for another 20 min. Alternatively, the labeled ds-NFI was added 20 min before the addition of the serum, and the incubation was continued for 45 min after the serum addition. The resulting complexes were analyzed as above.

Analysis of the NFI Binding Activity in Tadpole and Frog Tissues during Development-Individual organs were dissected, washed once with the homogenization buffer (10 mM HEPES, pH 7.5, 26% glycerol, 2 mM MgCl2, 0.5 mM EDTA, 0.5 mM dithiothreitol, supplemented with 5 mM phenylmethylsulfonyl fluoride, 5 µg/ml aprotinin, 5 µg/ml pepstatin A, and 5 µg/ml leupeptin), and frozen on dry ice. The frozen tissues were ground to a fine powder and homogenized in the buffer with a Teflon pestle in Eppendorf tubes. NaCl was added to a final concentration of 300 mM. The samples were incubated on ice for 15 min and centrifuged at 40,000 × g at 4°C to remove the insoluble materials.
Twenty µg of protein extract per sample (determined using a Bio-Rad protein assay kit) were mixed with a 15-µl sample of the DNA binding buffer (above), spun down at 40,000 × g for 15 min at 4°C to remove any remaining insoluble material, and then aliquoted for individual DNA binding reactions. The binding and antibody supershift (with 3 µl of serum per sample) were carried out as above.

Transcriptional Activation by NFIs in Xenopus Oocytes-Two synthetic oligonucleotides, 5′-GATCTGCCTTGGCACAGTGCCAACATGA-3′ and 5′-GATCTCATGTTGGCACTGTGCCAAGGCA-3′, were annealed to produce ds-NFI, which contained a consensus NFI binding site (underlined; Nilsson et al., 1989) and BglII overhangs. The oligonucleotide was ligated into the BglII-digested CAT promoter vector (CAT, Promega), which contained the SV40 early promoter. A recombinant plasmid (CAT/NFI) containing two consecutive ds-NFIs just upstream of the SV40 promoter was thus produced and used to study the transcriptional properties of the Xenopus NFIs. Twenty-five ng of NFI mRNA/oocyte were injected into the cytoplasm of stage 6 oocytes. After 6 h of incubation at 18°C, 5 ng/oocyte of the CAT promoter vector with or without the ds-NFI insert were injected into the nucleus. After incubation at 18°C for 16-18 h, RNA and the plasmid DNA were isolated from the oocytes as described (Wong and Shi, 1995). The RNA was analyzed by primer extension using an antisense CAT primer (Wong and Shi, 1995), and the recovered plasmid DNA was analyzed by Southern hybridization (Ranjan et al., 1994).

RESULTS

Cloning and Sequence Analysis of Xenopus NFI Genes-To study genes that are involved in the remodeling of the intestine from the larval to the adult form, we previously isolated over 20 T3-up-regulated genes in the intestine of X. laevis by using a PCR-based subtractive differential screening method (Shi and Brown, 1993). The small PCR fragments of two such genes were used to screen a cDNA library made from intestinal mRNA of premetamorphic tadpoles treated with T3 for 18 h. Sequence analysis of the longest cDNA clones showed that both genes encoded members of the NFI gene family, and they were named NFI-B1 and NFI-C1 based on sequence homology to known NFIs (Fig. 1). However, the cDNA clones contained only part of the coding regions, as the NFI-B1 cDNA did not have an in-frame initiation codon and the NFI-C1 cDNA did not contain an in-frame stop codon at the 3′-end. In addition, no in-frame stop codon was present upstream of the first methionine codon of the NFI-C1 cDNA, suggesting the possible existence of an additional coding region upstream of this methionine codon. To clone the missing coding regions, the anchor PCR method of Frohman et al. (1988) was used. The anchor PCR clones of NFI-B fell into three groups, NFI-B1, NFI-B2, and NFI-B3 (Fig. 2). The NFI-B1 group of clones had completely identical DNA sequences in the region overlapping the original cDNA clone, and their initiation codon lay 180 bp upstream of the 5′-end of the original cDNA. The NFI-B2 class of clones contained a deletion of 135 bp near the amino terminus and a few nucleotide sequence changes, resulting in 2 or 3 amino acid substitutions in the region overlapping the original clone. The last class, NFI-B3, had a deletion of 162 bp immediately after the initiation methionine of NFI-B1 and nucleotide sequence changes that produced two amino acid substitutions.
Anchor PCR cloning of the 5′-end of NFI-C1 identified an in-frame stop codon upstream of the first methionine codon of the original cDNA, indicating that this cDNA clone contained the entire amino-terminal coding region. In addition, another clone (NFI-C2) was isolated that had a 27-bp insertion immediately after the methionine codon of the original NFI-C1 clone (Fig. 2). The anchor PCR cloning of the 3′-end of NFI-C1 resulted in four clones, one completely identical to the original cDNA clone and three other clones containing nucleotide sequence changes that resulted in only a single amino acid substitution in the region overlapping the original cDNA clone (Fig. 2B and data not shown). All anchor PCR clones were otherwise identical and encoded the carboxyl terminus of NFI-C1. The anchor PCR cloning, therefore, revealed the existence of a family of NFI proteins in X. laevis, which can be divided into two subfamilies based on sequence homology (Fig. 1). The strong homology among subfamily members suggests that the three members of the NFI-B subfamily are most likely derived from alternative splicing of a single gene, and the subtle differences in their sequences are probably due to polymorphism. Similarly, NFI-C1 and NFI-C2 are most likely encoded by a single gene that is alternatively spliced. Such a prediction is supported by previous reports of different forms of avian and mammalian NFI proteins (Santoro et al., 1988; Rupp et al., 1990), where the sites of sequence divergence match exactly with what we have found for the Xenopus NFI proteins (Fig. 1). Sequence comparison among Xenopus, chicken, and human NFI proteins showed a strong homology among the various NFIs (Fig. 1). In particular, the predicted Xenopus NFI-B1 protein is over 94% identical to the chicken NFI-B subfamily members, with the DNA binding domain being essentially identical (Rupp et al., 1990). Similarly, the Xenopus NFI-C1 is most homologous to the chicken NFI-C subfamily members and human NFI/CTF (Santoro et al., 1988; Rupp et al., 1990). Overall, about 84% identity exists among the different NFI-C proteins, and again the DNA binding domain is the most conserved region. In contrast to the extremely high degree of sequence conservation among the members of a given subfamily, members of different subfamilies are more divergent. Thus, Xenopus NFI-B1 shares only 58% identity with Xenopus NFI-C1 (Fig. 1, boldface letters). While the carboxyl terminus has only a low level of homology between NFI-B1 and NFI-C1 (42%), the DNA binding domains share over 86% identity.

Specific DNA Binding and Transcriptional Activation by Xenopus NFIs-The strong homology among the NFIs from different species suggests that they are likely to recognize at least some common binding sites. To study the function of the Xenopus NFIs, we chose a consensus binding site derived from studies in birds and mammals (Nilsson et al., 1989) and used the Xenopus oocyte system to overproduce functional proteins.

FIG. 1 (legend, partial). Comparison of the Xenopus NFI proteins with human NFI/CTF (Santoro et al., 1988) and the chicken NFI-C subfamily (Rupp et al., 1990). The DNA binding domains are bracketed. The sites of sequence divergence among different NFIs are putative alternative splicing sites and are indicated by arrows. Dots represent amino acid deletions, and dashes indicate identical amino acids. The boldface italic letters are amino acids that are conserved between Xenopus NFI-B1 and Xenopus NFI-C1 (58%), which concentrate in the DNA binding domain (86%).
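To make the identity percentages quoted above concrete, here is a small sketch of how percent identity over a pairwise alignment is typically computed. It is illustrative only; the gap character, the choice to skip gap columns, and the two example sequences are our assumptions, not the actual NFI sequences or the authors' method.

    def percent_identity(aligned_a: str, aligned_b: str, gap: str = "-") -> float:
        """Percent identity over the aligned columns of two equal-length
        sequences, skipping columns where either sequence has a gap."""
        if len(aligned_a) != len(aligned_b):
            raise ValueError("aligned sequences must have equal length")
        pairs = [(a, b) for a, b in zip(aligned_a, aligned_b)
                 if a != gap and b != gap]
        if not pairs:
            return 0.0
        matches = sum(a == b for a, b in pairs)
        return 100.0 * matches / len(pairs)

    # Hypothetical aligned fragments, not the actual NFI sequences:
    print(percent_identity("MATTPFGQ-LK", "MATSPFGQALK"))  # 90.0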
NFI-B1 and NFI-C1 mRNAs were prepared by in vitro transcription and microinjected into mature oocytes. As shown in Fig. 3A, when [35S]methionine was injected into oocytes, many proteins were labeled, demonstrating active translation of endogenous mRNA. When NFI-B1 or NFI-C1 mRNA was coinjected with [35S]methionine, a new labeled protein band appeared on the SDS-protein gel. Although the same amounts of mRNAs were injected, NFI-C1 mRNA was translated a few times more efficiently than NFI-B1 mRNA. The sizes of the new protein bands matched the expected sizes for NFI-B1 and NFI-C1. Furthermore, a polyclonal antibody raised against the full-length NFI-B1 recognized this same band, present only in oocytes injected with the NFI-B1 mRNA but not in control or NFI-C1 mRNA-injected oocytes (Fig. 3B). Conversely, a polyclonal antibody against the DNA binding domain of NFI-C1 recognized the polypeptide produced by the injection of NFI-C1 mRNA. The anti-NFI-C1 polyclonal antibody also recognized two endogenous oocyte proteins of very similar sizes (Fig. 3B). These proteins are not likely to be members of the NFI family. This is because, as shown below, oocytes lack any detectable binding activity for the consensus NFI binding site that is recognized by both NFI-B1 and NFI-C1. In addition, the antibody was raised against the highly conserved DNA binding domain of NFI-C1 but failed to recognize NFI-B1. Thus, if these endogenous polypeptides were members of the NFI family, they would be even more homologous to NFI-C1 than NFI-B1 and would be able to bind to the consensus site. To study the DNA binding activity of the NFIs, a double-stranded oligonucleotide containing a consensus NFI binding site (ds-NFI) for avian and mammalian NFIs (Nilsson et al., 1989) was end-labeled and mixed with extracts isolated from uninjected or mRNA-injected oocytes. The resulting complex was analyzed by the gel mobility shift assay. While the uninjected oocyte extract gave no detectable complex (Fig. 4, lanes 1-4), extracts from the oocytes preinjected with NFI-C1 or NFI-B1 mRNA formed a strong complex with ds-NFI (Fig. 4, lanes 5 and 12). The complex formed with NFI-C1 migrated faster than that with NFI-B1 (Fig. 4, compare lanes 5-11 with lanes 12-18), consistent with the smaller size of NFI-C1 (Fig. 1). The complexes could be efficiently competed out by the unlabeled ds-NFI itself. In contrast, even a 50-fold excess of a nonspecific double-stranded oligonucleotide (ds-NS) had no effect on the binding by either NFI-B1 or NFI-C1, demonstrating the specificity of the binding. We next investigated whether the Xenopus NFIs were able to activate transcription from a promoter bearing the NFI binding site. For this purpose, we inserted two copies of ds-NFI about 140 bp upstream of the major transcription start site of the SV40 early promoter in the CAT promoter vector. The original (CAT) or modified (CAT/NFI) vector was injected into Xenopus oocytes that had or had not been preinjected with the NFI mRNA. After overnight incubation, the transcribed RNA was analyzed by the primer extension assay. No signal was detected in the absence of the injected promoter vector (Fig. 5, lanes 1, 4, and 7), demonstrating the specificity of the primer extension. Injection of both the CAT and CAT/NFI vectors gave low levels of transcription in oocytes uninjected with any NFI mRNA (Fig. 5, lanes 2 and 3). The levels of transcription from both vectors were comparable, and both vectors used the same expected major transcription start site.
When the CAT and CAT/NFI vectors were injected into oocytes that had been preinjected with either NFI-B1 (Fig. 5, lanes 5 and 6) or NFI-C1 (lanes 8 and 9) mRNA, they produced very different levels of transcription. The preinjection of NFI mRNAs did not alter the transcriptional activity of the CAT vector (compare lanes 5 and 8 with lane 2) but activated the transcription of the CAT/NFI vector by about 10-fold (compare lanes 6 and 9 to lane 3). As controls for the injection of the mRNA, our DNA binding, [35S]methionine-labeling, and Western blot analyses had consistently shown that both NFI-B1 and NFI-C1 were efficiently translated when their mRNAs were injected into the oocyte cytoplasm. However, we consistently observed more efficient translation of NFI-C1 mRNA, which might explain the slightly higher level of transcriptional activation by NFI-C1. In addition, when the injected promoter DNA was recovered after overnight incubation from the same oocytes used to assay the transcriptional activity and analyzed by hybridization (Fig. 5, lower panel), the results clearly demonstrated that equal amounts of promoter DNA were present in the nuclei of the different samples. Thus, like their homologs in other vertebrates, both NFI-B1 and NFI-C1 can activate a promoter containing the consensus NFI binding site.

FIG. 3. Expression of NFI-B1 and NFI-C1 proteins in X. laevis oocytes. A, [35S]methionine was coinjected with water (−) or NFI mRNAs into oocytes. Protein extracts were analyzed on a 10% gel. Dots indicate the positions of the overexpressed proteins. B and C, Western blot analysis of the same protein extracts electrophoresed on 7.5% gels with anti-NFI-B1 (B) or anti-NFI-C1 (C) antibody. Note that both antibodies were specific to their antigens. The two bands of very similar sizes detected by the anti-NFI-C1 antibody that were also present in the water-injected oocytes (−) are probably non-NFI peptides. Dashes on the left indicate the positions of the size markers: 30, 46, 66, 97, and 220 kDa, respectively.

FIG. 4. Specific DNA binding by X. laevis NFIs. Extracts from control (−), NFI-B1, or NFI-C1 mRNA-injected oocytes were used in the gel mobility shift assay with 5 ng of 32P-labeled ds-NFI and the indicated amount of unlabeled ds-NFI or a nonspecific DNA (ds-NS) competitor.

Organ-specific Developmental Regulation of Xenopus NFI Genes during Metamorphosis-The Xenopus NFI genes were initially isolated as genes that were activated by T3 in the tadpole intestine and thus might participate in tissue remodeling during metamorphosis. To investigate this possibility further, the cDNAs derived from the original cDNA clones were used to probe Northern blots made of total RNA from different tissues during development. Under the hybridization conditions, no cross-hybridization was detected between the NFI-B and NFI-C genes (Fig. 6), although the individual members of each subfamily could not be differentiated. The NFI-B and NFI-C probes detected full-length mRNAs of 10 and 8 kilobases, respectively, in different tissues. In the intestine, little NFI-B or NFI-C mRNA was present in premetamorphic tadpoles (stages 54 and 56, Fig. 6). The mRNA levels for both genes were highly up-regulated during metamorphosis and remained high in the intestines of postmetamorphic frogs (stage 66). Similarly, in the tail, both NFI genes were highly expressed during tail resorption (stages 60-64), while they were repressed in premetamorphic tadpoles (stages 54 and 56). In contrast, high levels of the NFI mRNAs were present in the hind limb at stages 56-60, at the time of and immediately after limb morphogenesis. Subsequently, their expression was reduced to lower levels. These results strongly suggest that both NFI-B and NFI-C are involved in tissue remodeling during metamorphosis.

Thyroid Hormone Regulation of NFI Genes during Metamorphosis-Thyroid hormone is known to be the controlling agent of metamorphosis. It has been well established that a simple addition of T3 to the rearing water can induce precocious metamorphosis in premetamorphic tadpoles. Thus, if the NFI genes participated in metamorphosis, we would expect them to be expressed during T3-induced metamorphosis. Therefore, we treated premetamorphic tadpoles at stage 56 with 5 nM T3, a concentration close to the peak plasma T3 levels during natural Xenopus metamorphosis (Leloup and Buscaglia, 1977), and isolated RNA from the intestine and tail at various time points during treatment (limb was not used due to its small size). Northern blot analysis of the RNA showed that the mRNA levels for both NFI-B and NFI-C genes were up-regulated within 1 day of treatment and continued to increase, reaching the highest levels after 3-5 days in both the intestine and tail, similar to that observed during normal development (Fig. 7).

NFI Genes Are Activated during Larval Development-The above results suggest that both NFI-B and NFI-C are involved in the development of adult organs. As larval organogenesis occurs during embryogenesis, we asked whether the NFI genes were also expressed during this early developmental period. Thus, total RNA was isolated from oocytes and from whole embryos and tadpoles at different stages up to the end of metamorphosis (stage 66) and analyzed for the expression of the NFI-B and NFI-C genes. The mRNAs for both genes were found to be absent in oocytes and early embryos. The genes were first activated around the early tail bud stage (stage 23/24) (Fig. 8) and were expressed at relatively low levels around stages 33-45.

FIG. 5. Transcriptional activation by X. laevis NFIs in a reconstituted oocyte system. Control oocytes (−) or oocytes preinjected with the mRNA for NFI-B1 or NFI-C1 were injected with either one of two promoter vectors. The first vector (CAT) was a vector containing the SV40 promoter upstream of the CAT gene, and the second one (CAT/NFI) had two copies of the NFI binding site inserted into the CAT vector. Half of the oocyte homogenate was used for RNA analysis by primer extension (upper panel). The other half was used to quantify the injected DNA by slot blot analysis (lower panel). The relative promoter activity was determined by normalizing the primer extension signal with the DNA signal.

FIG. 6. Northern blot analysis showing differential regulation of X. laevis NFI genes in the intestine, tail, and hind limb during metamorphosis. Ten µg of RNA were used per lane except for the tail at stage 64 and the hind limb at stage 56, which had only 5 µg of RNA. Duplicate blots were probed with the coding regions of NFI-B1 and NFI-C1. After boiling off the probes, the filters were hybridized with rpL8 as a control for loading (Shi and Liang, 1994). The blots containing limb RNA were exposed for a longer period. The positions of 28S and 18S rRNA are indicated. Note that both genes had similar expression profiles.
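As a trivial illustration of the normalization described in the Fig. 5 legend, the sketch below computes relative promoter activity and fold activation. The numbers are hypothetical placeholders, not the authors' data or quantification code.

    def relative_promoter_activity(primer_ext_signal: float, dna_signal: float) -> float:
        """Primer-extension signal normalized by the recovered plasmid DNA signal."""
        return primer_ext_signal / dna_signal

    # Hypothetical counts illustrating roughly 10-fold activation of CAT/NFI over CAT:
    cat = relative_promoter_activity(120.0, 100.0)
    cat_nfi = relative_promoter_activity(1250.0, 104.0)
    print(cat_nfi / cat)  # fold activation by NFI, ~10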
Organ-specific Developmental Regulation of Xenopus NFI Genes during Metamorphosis-The Xenopus NFI genes were initially isolated as genes that are activated by T3 in the tadpole intestine and thus might participate in tissue remodeling during metamorphosis. To investigate this possibility further, the cDNAs derived from the original cDNA clones were used to probe Northern blots of total RNA from different tissues during development. Under the hybridization conditions used, no cross-hybridization was detected between the NFI-B and NFI-C genes (Fig. 6), although the individual members of each subfamily could not be differentiated. The NFI-B and NFI-C probes detected full-length mRNAs of 10 and 8 kilobases, respectively, in different tissues. In the intestine, little NFI-B or NFI-C mRNA was present in premetamorphic tadpoles (stages 54 and 56, Fig. 6). The mRNA levels for both genes were highly up-regulated during metamorphosis and remained high in the intestines of postmetamorphic frogs (stage 66). Similarly, in the tail, both NFI genes were highly expressed during tail resorption (stages 60-64), while they were repressed in premetamorphic tadpoles (stages 54 and 56). In contrast, high levels of the NFI mRNAs were present in the hind limb at stages 56-60, at the time of and immediately after limb morphogenesis. Subsequently, their expression was reduced to lower levels. These results strongly suggest that both NFI-B and NFI-C are involved in tissue remodeling during metamorphosis.

Thyroid Hormone Regulation of NFI Genes during Metamorphosis-Thyroid hormone is known to be the controlling agent of metamorphosis. It has been well established that the simple addition of T3 to the rearing water can induce precocious metamorphosis in premetamorphic tadpoles. Thus, if the NFI genes participated in metamorphosis, we would expect them to be expressed during T3-induced metamorphosis. Therefore, we treated premetamorphic tadpoles at stage 56 with 5 nM T3, a concentration close to the peak plasma T3 levels during natural Xenopus metamorphosis (Leloup and Buscaglia, 1977), and isolated RNA from the intestine and tail at various time points during treatment (limb was not used due to its small size). Northern blot analysis of the RNA showed that the mRNA levels for both the NFI-B and NFI-C genes were up-regulated within 1 day of treatment and continued to increase, reaching the highest levels after 3-5 days in both the intestine and tail, similar to that observed during normal development (Fig. 7).

NFI Genes Are Activated during Larval Development-The above results suggest that both NFI-B and NFI-C are involved in the development of adult organs. As larval organogenesis occurs during embryogenesis, we asked whether the NFI genes were also expressed during this early developmental period. Thus, total RNA was isolated from oocytes and from whole embryos and tadpoles at different stages up to the end of metamorphosis (stage 66) and analyzed for the expression of the NFI-B and NFI-C genes. The mRNAs for both genes were found to be absent in oocytes and early embryos. The genes were first activated around the early tail bud stage (stage 23/24) (Fig. 8) and were expressed at relatively low levels around stages 33-45.

FIG. 6. Northern blot analysis showing differential regulation of X. laevis NFI genes in the intestine, tail, and hind limb during metamorphosis. Ten μg of RNA were used per lane except for the tail at stage 64 and the hind limb at stage 56, which had only 5 μg of RNA. Duplicate blots were probed with the coding regions of NFI-B1 and NFI-C1. After boiling off the probes, the filters were hybridized with rpL8 as a control for loading (Shi and Liang, 1994). The blots containing limb RNA were exposed for a longer period. The positions of 28 and 18 S rRNA are indicated. Note that both genes had similar expression profiles.
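A minimal sketch, with invented band intensities, of the loading-control normalization described in the Fig. 6 legend above: each NFI hybridization signal is divided by the rpL8 signal of the same lane, which also compensates for the lanes loaded with only 5 μg of RNA.

```python
# Illustrative sketch: rpL8-normalized Northern signals (invented intensities).
# Dividing by the rpL8 signal of the same lane corrects for unequal loading,
# including lanes that received only half as much RNA.

lanes = [
    # (tissue, stage, NFI signal, rpL8 signal)
    ("intestine", 54, 0.2, 1.00),
    ("intestine", 60, 3.1, 0.95),
    ("intestine", 66, 2.8, 1.05),
    ("tail",      64, 1.9, 0.50),   # half as much RNA loaded in this lane
]

for tissue, stage, nfi, rpl8 in lanes:
    print(f"{tissue} st{stage}: normalized NFI signal = {nfi / rpl8:.2f}")
```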
High levels of their mRNAs were present in the intestine during remodeling (stages 60-66), in the tail during resorption (stages 62-64), and in the hind limb during and immediately after limb morphogenesis (stages 56-60; note that only half as much RNA was used for stage 56). The smeary signals for both genes were most likely due to partial degradation of the mRNAs because of their large sizes, about 10 and 8 kilobases for the NFI-B and NFI-C mRNAs, respectively. In addition, some size heterogeneity might be due to alternative splicing. This period of NFI expression corresponds to the period of larval organogenesis (the tadpole hatches around stage 35/36 and begins to feed around stage 45; Nieuwkoop and Faber (1956)). Subsequently, these low levels of NFI expression persisted until after stage 54, when they were drastically up-regulated during metamorphosis, following the rise in the concentration of endogenous T3 (Leloup and Buscaglia, 1977).

NFI Binding Activity Is Also Regulated in a Tissue-specific Manner during Metamorphosis-To investigate the regulation of NFI proteins during development, we initially performed standard Western blot analysis using the specific antibodies described above. However, possibly due to the low abundance of these transcription factors, we failed to quantify the NFI proteins. Therefore, the gel mobility shift assay was employed together with the antibody supershift assay to determine the relative levels of NFI binding activity during development. To test the effect of the antibodies on the NFI-DNA complexes, anti-NFI antibodies were added before or after the addition of the labeled ds-NFI to extracts from oocytes that had been injected with NFI mRNAs (Fig. 9). Independent of the order of addition, the anti-NFI-B1 antibody efficiently supershifted the complex formed by NFI-B1 (Fig. 9, lanes 7 and 8, arrowhead) and, to a much smaller extent, the complex formed by NFI-C1 (lanes 12 and 13, star). On the other hand, the anti-NFI-C1 antibody had little effect on the complex formation by either NFI-B1 or NFI-C1 (lanes 9, 10, 14, and 15). Thus, the anti-NFI-B1 antibody was chosen for the studies on tissue extracts below.

Tissue extracts from the intestine, limb, and tail of tadpoles at different stages were prepared and subjected to DNA binding analysis. The binding activity for ds-NFI was found to be regulated identically to the NFI mRNA levels in all three organs during metamorphosis (Fig. 10 and data not shown). Thus, in both the intestine (Fig. 10A) and tail (Fig. 10C), the NFI binding activity was low in tadpoles before stage 58 and was up-regulated during metamorphosis (stages 62 and 64). On the other hand, the ds-NFI binding activity was high in the limb at stage 56, when morphogenesis took place. Subsequently, the activity decreased as the hind limb underwent growth with little morphological change (Nieuwkoop and Faber, 1956; Fig. 10B). The specificity of the DNA binding by the extracts was confirmed by the ability of the unlabeled ds-NFI itself (Fig. 10, lanes 9-12) to compete efficiently for the complex formation and the inability of a nonspecific DNA (ds-NS, lanes 5-8) to do so. Furthermore, the anti-NFI-B1 antibody could supershift most of the complexes formed (Fig. 10, lanes 13-16). Based on the mobilities of the supershifted complexes (bands labeled by arrowheads and stars; compare them with those in Fig. 9), it appeared that both NFI-B and NFI-C were present in these tissue extracts and regulated similarly.
Thus, while the exact identities of the NFI proteins are unknown, these results strongly suggest that NFI-B and NFI-C or closely related proteins account for the tissue-specific regulation of NFI binding activity during metamorphosis.

FIG. 7. T3 activation of NFI genes in premetamorphic tadpoles. Ten μg of total RNA from the intestine and tail of stage 56 tadpoles treated with 5 nM T3 for the indicated number of days were electrophoresed on 1% agarose/formaldehyde gels. Duplicate blots were probed with the coding regions of the NFI-B1 and NFI-C1 cDNAs. After boiling off the probes, the same filters were probed with rpL8 as a control for loading. The positions of 28 and 18 S rRNA are indicated.

FIG. 8. Xenopus NFI genes are activated during late embryogenesis and further up-regulated during metamorphosis. Ten μg of total RNA from ovary, whole embryos, or tadpoles up to stage 66 (the end of metamorphosis) were analyzed by Northern blot hybridization. The hybridization signals were quantified using a PhosphorImager. Note that both NFI-B and NFI-C genes were activated around stage 23/24 (the early tail bud stages). Relatively low levels of their expression were present throughout late embryogenesis (stages 23-45; the tadpole hatches around stage 35/36 and feeding begins around stage 45). The mRNA levels were then up-regulated after stage 54, when endogenous T3 levels began to increase (Leloup and Buscaglia, 1977).

NFIs Are Present in Many Adult Organs-NFIs are known to be expressed in many tissues and cell types in mammals (Cereghini et al., 1987; Paonessa et al., 1988). Our Northern blot analysis above also showed that NFI mRNAs were present in frog intestine and limb (stage 66, Fig. 6). To determine whether the proteins are present in adult tissues, selected organs from postmetamorphic frogs were dissected to prepare whole cell protein extracts. Gel mobility shift assays clearly demonstrated the presence of NFI binding activity in all regions of the gastrointestinal tract as well as in the limb and liver (Fig. 11). The binding activity was present at lower levels in the limb than in the intestine, just like the respective NFI mRNA levels at stage 66 (Fig. 6), immediately after metamorphosis. It is interesting to note that, as in the intestine and limb, the levels of NFI binding activity in the liver were very different in the frog compared with the tadpole (Fig. 11, compare lanes 13, 14-16, and 17). Premetamorphic tadpole liver had only low levels of NFI binding activity, while much higher levels were present in the frog liver. It is unclear why the complexes formed with the frog liver extract migrated faster. It is likely that partial degradation of NFIs occurred during extract preparation and/or DNA binding, even though Coomassie Blue staining of the protein extract on an SDS gel did not reveal noticeable protein degradation.
Alternatively, different NFI isoforms might be present in the tadpole and frog livers. In any case, the complex formation with all extracts was specific, as judged from competition experiments and the ability of the anti-NFI-B1 antibody to supershift most of the complexes formed (not shown). Thus, NFIs are present in a wide variety of frog tissues.

FIG. 10. NFI binding activity is regulated similarly to the NFI mRNAs during development. Whole cell extracts were isolated from the intestine (A), hind limb (B), and tail (C) of tadpoles at different developmental stages and analyzed for binding to labeled ds-NFI. Specific complexes were formed in the absence (lanes 1-4) or presence of a 20-fold excess of a nonspecific competitor (lanes 5-8) but not in the presence of a 20-fold excess of the unlabeled ds-NFI (lanes 9-12). The addition of the anti-NFI-B1 antibody could supershift most of the complexes formed (lanes 13-16). The arrowheads and asterisks indicate complexes of similar mobilities to the supershifted NFI-B1-DNA and NFI-C1-DNA complexes, respectively, shown in Fig. 9. Note that longer exposure was necessary for the tail samples (C) due to weaker binding activity and that more smear was present in the stage 62 and 64 samples. This smear was likely due to protein degradation even though proteinase inhibitors were present in the samples, probably because proteinases were more abundant in the tail at these stages as the tail resorbs (Nieuwkoop and Faber, 1956). The protein degradation might also be responsible for the inefficient antibody supershifting.

DISCUSSION

We have identified at least two genes of the NFI transcription factor family that are regulated by thyroid hormone during amphibian metamorphosis. Sequence analysis, DNA binding assays, and transcription activation experiments demonstrate a strong conservation of sequence and function among the NFIs from Xenopus, chicken, and human. More importantly, the regulation of the expression of these genes by T3 during metamorphosis provides strong evidence that these transcription factors are important for postembryonic organ development.

Xenopus NFIs Are Encoded by Multiple Genes That Are Alternatively Spliced-NFI was first identified as a component of the HeLa cell nuclear extract that can enhance the initiation of adenovirus DNA replication (Nagata et al., 1982). Since then, the corresponding gene and several highly homologous genes have been cloned in birds and mammals (Santoro et al., 1988; Paonessa et al., 1988; Gil et al., 1988; Meisterernst et al., 1988; Inoue et al., 1990; Rupp et al., 1990). Sequence analysis of the two Xenopus NFI genes reported here demonstrates that they are the homologs of the chicken NFI-B and NFI-C genes, respectively. Our anchor PCR cloning has revealed the existence, at least at the amino end, of multiple isoforms for both the NFI-B and NFI-C subfamilies in Xenopus. These isoforms differ from each other by sequence insertions or deletions. While it cannot be ruled out that they are encoded by different genes without cloning the full-length cDNAs, they are likely produced by alternative splicing. First, the sequences of the different isoforms are essentially identical except for the insertions or deletions. Moreover, the points of sequence divergence are conserved across species and have been implicated or proven to be sites of alternative splicing in other species (Santoro et al., 1988; Rupp et al., 1990). It should be pointed out that while this paper was under review, Roulet et al. (1995) reported the cloning of Xenopus NFI-C1. Their NFI-C1 sequence differs slightly from ours, probably because X. laevis is a pseudotetraploid organism with many of its genes duplicated during evolution (Kobel and Du Pasquier, 1986).

Both NFI-B and NFI-C Can Activate Transcription through a Consensus Binding Site-Both Xenopus NFI-B1 and NFI-C1 can bind to a consensus NFI binding site identified in birds and mammals (Nowock et al., 1985; Leegwater et al., 1985; Gronostajski, 1986; Jones et al., 1987; Nilsson et al., 1989).
Although Xenopus NFI-B1 and NFI-C1 share only a low degree of overall homology with each other compared with their homologs in other species, their DNA binding domains are over 86% identical. Thus, it is not surprising that both can recognize the same NFI binding site. Furthermore, when either NFI is introduced into Xenopus oocytes, it can activate transcription from a promoter containing the NFI binding sites. Currently, it is unclear how the transcription activation takes place. It is known that NFIs can bind DNA as homo- and heterodimers (Mermod et al., 1989; Gounari et al., 1990; Kruse and Sippel, 1994). Furthermore, it has been shown that the amino-terminal half, including the DNA binding domain, is sufficient for dimerization, site-specific DNA recognition, and adenovirus DNA replication (Mermod et al., 1989). In contrast, both the carboxyl half of the protein and the DNA binding domain are required to activate transcription (Mermod et al., 1989; Altmann et al., 1994; Xiao et al., 1994). It is interesting to note that, despite the sequence divergence between Xenopus NFI-B1 and NFI-C1 in the putative activation domain, which is only 42% conserved, both can activate transcription to a similar extent in the oocyte transcription system. It is known that the oocyte stores large quantities of different factors important for embryogenesis, especially for the period prior to the onset of zygotic transcription. Thus, it is very likely that Xenopus NFI-B1 and NFI-C1 interact with different factors in the transcriptional machinery to activate the promoter. It would be interesting to know the identities of such NFI-interacting factors.

Correlation of NFI Expression with Natural and T3-induced Metamorphosis-Both NFI-B and NFI-C genes are first activated relatively late, around the tail bud stages, during embryonic development. This activation occurs before the development of the thyroid gland and is thus independent of thyroid hormone. Subsequently, the genes maintain low levels of expression throughout the rest of the embryonic period and early tadpole stages. After stage 54, their expression is drastically up-regulated in the tadpole by the rising levels of endogenous thyroid hormone. We have shown previously that this regulation by T3 occurs at the transcriptional level, based on its resistance to protein synthesis inhibition (Shi and Brown, 1993). Furthermore, when premetamorphic tadpoles are treated with T3 for an extended period, which can induce precocious metamorphosis such as intestinal length reduction and epithelial folding (Shi and Hayes, 1994), the expression of the NFI genes is induced similarly as during natural development.

During the premetamorphic stages (before stage 56), the NFI genes are expressed at very low levels in the intestine and tail. They are then drastically activated in the intestine from stage 58 to 66, when the larval epithelium undergoes cell death and the adult (secondary) epithelial cells as well as the connective tissue and muscle cells proliferate and differentiate (McAvoy and Dixon, 1977; Ishizuya-Oka and Shimozawa, 1987).

FIG. 11. NFI binding activity is present in adult organs. Whole cell extracts were made from different regions of the gastrointestinal tract, hind limb, and liver of young frogs and analyzed for ds-NFI binding activity. The binding activity was present in all tissues, and the binding could be competed out by a 20-fold excess of the unlabeled ds-NFI itself but not by a 20-fold excess of the nonspecific DNA (ds-NS). The adult liver complexes migrated faster, likely due to partial degradation of the NFI proteins. For comparison, the stage 56 liver extract contained much less NFI binding activity than the frog liver but produced complexes of similar mobilities to those formed by the intestinal or limb extracts.
In the tail, the NFI expression begins to be up-regulated around stage 62. While this appears to be later than in the intestine, it corresponds exactly to the period when massive tail resorption occurs (Nieuwkoop and Faber, 1956). Finally, the highest levels of NFI mRNAs in the hind limb are present between stages 56 and 60, right at or shortly after limb morphogenesis. The correlation of NFI-B and NFI-C expression with tissue-specific metamorphosis as described for the mRNAs is also supported by our analysis of the NFI binding activity during development. Although the exact identities of the proteins responsible for the binding to the NFI oligonucleotide remain to be determined, DNA competition shows that the binding is specific. Furthermore, antibody supershift experiments indicate that both NFI-B and NFI-C types of complexes are formed and that most, if not all, of the binding activity can be accounted for by NFI-B and NFI-C or closely related transcription factors. Thus, while it is unknown how the NFI genes are regulated so differently in different organs, the close correlation of their expression with tissue remodeling during metamorphosis argues for a role of these transcription factors in organogenesis.

NFIs as Regulators during Organ Development-NFIs can not only regulate the expression of a wide variety of genes through their binding sites located in the promoter regulatory regions of these genes, but they also affect DNA replication (Inoue et al., 1990; Cereghini et al., 1987; Santoro et al., 1988; Hay, 1985; Zorbas et al., 1992; Rosenfeld and Kelly, 1986). Furthermore, NFI genes are known to be expressed in most, if not all, tissues in adult animals. Thus, it has been generally assumed that NFIs are ubiquitous factors that are important for both DNA replication and transcription. However, to our knowledge, their involvement during development has not previously been reported.

The biphasic development of amphibians, i.e. embryogenesis and subsequent metamorphosis, serves as a unique model to study gene function during different stages of animal development. Our DNA binding and transcriptional activation experiments as well as Northern blot analysis of NFI-B and NFI-C expression failed to detect NFI activities in oocytes and early embryos. Thus, if NFIs are required for transcription and/or replication during early embryogenesis, either very low levels of NFI-B and/or NFI-C that evaded our detection or other NFIs, such as the recently cloned Xenopus NFI-X subfamily (Roulet et al., 1995), are sufficient for this early period of development. On the other hand, both NFI-B and NFI-C genes are activated in embryos at the early tail bud stages (stage 23/24). Their expression during the larval period (up to stage 45, i.e. the feeding stage or the end of larval development), although relatively low, implicates a role of NFIs in larval organogenesis. More importantly, the expression of the NFI mRNAs and the corresponding DNA binding activities correlate with metamorphosis. Two major events occur during this postembryonic process, i.e. cell death, and cell proliferation followed by differentiation.
The drastic up-regulation of the NFI genes during tail resorption and intestinal remodeling, both of which involve extensive cell death (Dodd and Dodd, 1976; Gilbert and Frieden, 1981; Yoshizato, 1989), suggests that the NFIs may be involved in the up-regulation of genes that control cell fate and/or encode degradative enzymes required for the removal of degenerating tissues, such as proteases, nucleases, and extracellular matrix degradation enzymes. In contrast, when Xenopus NFIs are highly expressed in the hind limb at stages 56-60, there is little cell death in this organ except in the interdigital region. In addition, in the intestine, cell death is completed after stage 63 (McAvoy and Dixon, 1977), when NFI mRNA levels remain high. In these two cases, the predominant events are extensive proliferation and differentiation of adult cell types (Dodd and Dodd, 1976; McAvoy and Dixon, 1977; Ishizuya-Oka and Shimozawa, 1987). Thus, the Xenopus NFIs are also involved in the regulation of genes that are crucial for cell growth and/or cell differentiation. Such a function is also consistent with the strong NFI gene expression and the presence of NFI binding activities in many organs of postmetamorphic frogs.

Both NFI-B and NFI-C genes are direct T3 response genes and are therefore among the earliest genes activated by T3 in the gene regulation cascade that controls tissue remodeling during metamorphosis. As transcription factors, they are expected to directly regulate the expression of downstream genes during metamorphosis. While their target genes are still unknown, the presence of their binding sites in a wide variety of promoters in other animal species suggests that the Xenopus NFIs will likely influence the expression of many genes during metamorphosis. In this regard, it is interesting to note that several NFI binding sites have been identified in the Xenopus vitellogenin gene (Cardinaux et al., 1994). The vitellogenin gene is liver-specific and depends upon estrogen for its expression (Chang and Shapiro, 1990; Corthésy et al., 1990). The gene becomes competent to respond to estrogen activation during metamorphosis (Rabelo et al., 1994). Although T3 treatment of tadpoles does not regulate the vitellogenin gene directly, it can enhance its activation by estrogen (Rabelo et al., 1994). This enhancement has been attributed to the up-regulation of the estrogen receptor gene by T3 (Rabelo et al., 1994). Our results here suggest another possibility. While we have not analyzed in detail the NFI expression in the liver during metamorphosis, our DNA binding experiments show much higher levels of NFI binding activity in postmetamorphic frog liver than in premetamorphic tadpoles. Thus, as direct T3 response genes, the up-regulated NFI genes in the liver during metamorphosis may enable the vitellogenin gene to respond to estrogen. While the vitellogenin gene is a likely target gene regulated by the NFIs in the liver, it will be important to identify other target genes, especially in other tissues. Furthermore, it is still unclear whether NFI-B and NFI-C regulate different target genes. As they can recognize at least some common binding sites, any functional difference is likely to reside in the less conserved carboxyl termini of the proteins. Through cooperative or antagonistic interactions with other transcription factors important for the expression of different promoters, NFI-B and NFI-C could differentially regulate the transcription of different genes.
Clearly, the answer to this question awaits the identification of NFI target genes and the characterization of their promoters.
2018-04-03T05:09:22.555Z
1996-03-15T00:00:00.000
{ "year": 1996, "sha1": "1a91661e8a391522fc086164976f4d58f6755934", "oa_license": "CCBY", "oa_url": "http://www.jbc.org/content/271/11/6273.full.pdf", "oa_status": "HYBRID", "pdf_src": "Adhoc", "pdf_hash": "941438c8784cd6df702da4d9cfb7d1e0aca7a8e3", "s2fieldsofstudy": [ "Biology", "Medicine" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
212850701
pes2o/s2orc
v3-fos-license
A Single Session with a Mother Who Reframed Her Daughter's Dating Relationship

In this case, IP had been absent from high school for a few days after trouble with her boyfriend. Her mother then came to see the school counselor (SC). The mother said, "I want my daughter to break up with her boyfriend." However, whenever the mother told IP to do so, IP argued that her mother did not understand her at all and stopped listening to her. SC formulated the vicious circle described above. SC then emphasized that IP is kind and caring and that her mother should appreciate this. SC told the mother how excellent she was and intervened: "Please send your daughter the message that you really understand her feelings." After the session, the mother changed how she approached her daughter. Her daughter's reaction also changed, and they talked to each other peacefully. In the end, IP started to attend school again without difficulty. The mother had regarded the relationship between IP and her boyfriend as complementary communication, meaning that the boyfriend was in the one-up position, controlling IP, whose position was one-down. After the session, however, the relationship as the mother construed it changed into meta-complementary communication, in which the positions of IP and her boyfriend appear to be one-up and one-down, respectively. In the end, the mother could leave the choice of whether or not to break up to them. As she changed the way she interacted with her daughter, IP started to go to school again.

Introduction

In this case, IP had been absent from high school for a few days after trouble with her boyfriend. Her mother then came to see the school counselor (SC). IP and her mother were regarded as a "problem". Whenever the mother told IP to break up with her boyfriend, IP argued that her mother did not understand her at all and stopped listening to her. SC formulated the vicious circle described above: the more her mother demanded that she break up with him, the more she moved away from her mother.

Exception

The mother told stories that seemed to be "exceptions". For example, when IP decided not to go to school, she spent her time calmly at home and said, "I want to do my best in my studies and my club." Her mother thought that her daughter was a great girl. In addition, when IP was feeling fine, she sometimes said, "I wonder why I love him." Her mother had also heard that IP's boyfriend was not well cared for at his home. After the session, the mother changed how she approached her daughter: "Then her reaction also changed, and we talked to each other peacefully." In the end, IP started to attend school again without difficulty.

Discussions

In this report, we discuss two points. First, how did SC process the mother's resistance and arrive at the intervention? Second, how did the mother reframe the relationship between IP and her boyfriend?

Processing resistance

Normalization is considered useful as one way of processing resistance. This mother wanted the couple to break up as soon as possible. This feeling was a sign of her care for IP, but telling IP to break up made IP feel that her mother did not understand anything. Telling IP to break up was therefore considered a false solution. SC expressed understanding of the mother's feelings and normalized her state of mind. This built a working relationship between the mother and SC and processed her resistance. Moreover, the mother's way of thinking had been considered codependent on IP and troublesome by the teachers of the school. SC, however, dared to utilize this codependence and complimented the mother on having comprehended her daughter's feeling that she could not break up with her boyfriend because of her kindness. Additionally, SC directed the mother to tell IP that she understands her daughter.
Utilization is a basic principle for solving problems proposed by Milton Erickson (Watzlawick, Weakland, & Fish, 1974). He often utilized not only the presenting problem and symptom but also obstinate beliefs, delusions, and behaviors (William, H. O., 1987). In this case, SC highlighted the mother's words that IP is "kind and great" and the view that the mother appreciates IP. It is therefore considered that the mother's resistance could be processed and the intervention accepted.

Reframed relationship

The mother thought that IP's boyfriend wanted to control and monopolize her. But the mother came to think that the reason they did not break up was not only his possessive feelings but also IP's kindness. In the end, the mother could leave the choice of whether or not to break up to them. In this case, although the mother and daughter's codependency was regarded as the problem, SC dared to utilize the relationship, which enabled IP to attend school again. The frame of IP's kindness could serve as a therapeutic double bind, because whatever IP does within this frame, her mother will regard her behavior as kindness.
2020-02-20T09:06:14.514Z
2019-01-01T00:00:00.000
{ "year": 2019, "sha1": "f71b9becc6bfac69ee822d7d0c2aab677b131966", "oa_license": null, "oa_url": "https://www.jstage.jst.go.jp/article/ijbf/9/1/9_27/_pdf", "oa_status": "GOLD", "pdf_src": "Adhoc", "pdf_hash": "eba2065fcfe81b0fa8e05c49c989430db1fcebd4", "s2fieldsofstudy": [], "extfieldsofstudy": [ "Psychology" ] }
2895036
pes2o/s2orc
v3-fos-license
Inelastic Leptoproduction of J/Psi Mesons at HERA

The leptoproduction of J/psi mesons is studied in inelastic reactions for four momentum transfers 2 < Q^2 < 100 GeV^2. The data were taken with the H1 detector at the electron proton collider HERA and correspond to an integrated luminosity of 77 pb^-1. Single differential and double differential cross sections are measured with increased precision compared with previous analyses. New leading order calculations within the non-relativistic QCD factorisation approach including colour octet and colour singlet contributions are compared with the data and are found to give a reasonable description of most distributions. An exception is the shape of the distribution in the J/psi fractional energy, z, which deviates significantly from that of the data. Comparisons with photoproduction are made and the polarisation of the produced J/psi meson is analysed.

Introduction

Inelastic leptoproduction of J/ψ mesons at HERA, e + p → e + J/ψ + X, is dominated by boson gluon fusion, γ*g → cc. The aim of current experimental and theoretical efforts is a detailed understanding of this production process. Before HERA started operation, the limited amount of lepto- and photoproduction data ([1] and references therein) was found to be described by the Colour Singlet Model (CSM) [2]. In the CSM the cc pair is produced in the hard γ*g interaction in the quantum state of the J/ψ meson, i.e. in a colour singlet state with spin 1 and no orbital angular momentum. This is possible due to the emission of an additional hard gluon (see Fig. 1b). The process was advocated as a means of determining the gluon density in the proton, since it is calculable in perturbative Quantum Chromodynamics (pQCD) using e.g. potential models for the formation of the J/ψ meson.

In recent years the interest in inelastic J/ψ production has shifted considerably, since the CSM fails to reproduce the production rate of J/ψ and ψ(2S) mesons in pp collisions by more than an order of magnitude [3]. Nowadays, one of the main aims is the investigation of the rôle of "colour octet" contributions, which have been invoked to describe the pp data. Colour octet contributions arise naturally in the theoretical description of quarkonium production based on non-relativistic QCD and factorisation (NRQCD) [4]. NRQCD is an effective field theory in which the J/ψ production process factorises into terms for the short distance transition (e.g. γ*g → cc(g)) and long distance matrix elements (LDMEs) for the transition of the cc pair into an observable meson. The cc pairs can now be in many different angular momentum states, in colour singlet and also in colour octet states, in which case the transition to the J/ψ meson is thought to proceed via soft gluon emission. The short distance coefficients are calculable in pQCD, and a double expansion in the strong coupling parameter α_s and v, the relative velocity of the quark and antiquark, is obtained. Many contributions are possible (examples are shown in Fig. 1) and only the most important contributions are kept in a specific calculation [5,6]. The leading term in the velocity expansion is the colour singlet term, so if it is assumed that all other terms do not contribute, the CSM is recovered. Although the octet LDMEs are at present not calculable, they are assumed to be universal. They have been extracted from the measurement of J/ψ production in pp collisions by fitting the leading order (LO) theoretical calculation to the data (see e.g. [7,8] and references therein) and are then used in predictions for electroproduction.
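This factorisation can be written schematically as follows. The expression below is the generic form of the NRQCD expansion, given here for orientation; the precise short distance coefficients and the set of states kept are those of the calculations [5,6] discussed in this paper.

```latex
% Schematic NRQCD factorisation: perturbative short distance cross sections
% for producing a cc pair in Fock state n, weighted by the non-perturbative
% long distance matrix elements (LDMEs) <O[n]>, which scale with powers of v.
\begin{equation*}
  \sigma\bigl(\gamma^{(*)} g \to J/\psi + X\bigr)
  \;=\; \sum_{n}
  \hat{\sigma}\bigl(\gamma^{(*)} g \to c\bar{c}\,[n] + X\bigr)\,
  \bigl\langle \mathcal{O}^{J/\psi}[n] \bigr\rangle ,
\end{equation*}
```

where n runs over the colour singlet and colour octet angular momentum states of the cc pair; truncating the sum to the leading colour singlet state recovers the CSM.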
First attempts to establish the relative importance of colour octet contributions in lepton proton interactions were made in the photoproduction limit, Q² → 0 [9,10], where Q² is the negative squared four momentum transfer. The predicted large contributions at high values of the J/ψ fractional energy, z, were not observed. Here, z denotes the J/ψ energy relative to the photon energy in the proton rest system. In the analysis of data at high Q² the dependence of the cross section on Q² may give additional insight into the production process [5]. Analysing leptoproduction at finite Q² has experimental and theoretical advantages compared with photoproduction. At high Q² theoretical uncertainties in the models decrease and resolved photon processes are expected to be negligible. Furthermore, background from diffractive production of charmonia is expected to decrease faster with Q² than the inelastic process. The distinct signature of the scattered lepton makes the process easier to detect. A first comparison between data and NRQCD calculations in the range 2 < Q² < 80 GeV² and 40 < W < 180 GeV was presented in [11], W being the mass of the hadronic final state or, equivalently, the centre of mass energy of the photon proton system.

Figure 1: Generic diagrams for charmonium production mechanisms: a) photon gluon fusion via a "2 → 1" process; b-d) "2 → 2" processes. a-d) contribute via colour octet mechanisms, while b) can also contribute in colour singlet processes. Additional soft gluons emitted during the hadronisation process are not shown.

The NRQCD calculations compared with the data in [11] were performed taking into account only "2 → 1" diagrams [5] (compare Fig. 1a), and disagreement between data and theory was observed both in the absolute values of the cross sections and in their shapes as functions of the variables studied. In this paper, an analysis of e + p → e + J/ψ + X is presented in the kinematic region 2 < Q² < 100 GeV² and 50 < W < 225 GeV with increased statistics compared to our previous publication [11]. Differential cross sections are measured for the whole Q² range and for a subset with Q² > 12 GeV². The data are compared with theoretical predictions [6] in the NRQCD framework taking into account colour octet (CO) and colour singlet (CS) contributions. In contrast to the previous NRQCD calculation, diagrams of the type "2 → 2" are taken into account (e.g. diagrams 1b, c and d). The J/ψ polarisation is measured by analysing the decay angular distribution, and its Q² dependence is investigated. The polarisation measurements are compared with the prediction of a calculation [12] within a "k_t factorisation" approach, i.e. allowing transverse momentum ("k_t") for the incoming gluon, using unintegrated parton density functions and off-shell matrix elements including colour octet and colour singlet contributions.

Detector, Kinematics and Simulations

The data presented were collected in the years 1997-2000 and correspond to a total integrated luminosity of 77.0 ± 1.2 pb^-1. HERA was operated for most of this time with 27.5 GeV positrons. Roughly 12% of the data were taken with electrons of the same energy. In 1997 the proton energy was 820 GeV. It was increased to 920 GeV thereafter (a sample of ~63 pb^-1).

Detector

A detailed description of the H1 detector can be found elsewhere [15]. Here we give an overview of the most important components for the present analysis.
The central tracking detector (CTD) of H1 consists mainly of two coaxial cylindrical drift chambers for the measurement of charged particles and their momenta transverse to the beam direction and two polygonal drift chambers for the measurement of the z coordinates (footnote 1). The CTD is situated inside the solenoidal magnet, which generates a field of 1.15 T. The tracking system is complemented in the forward direction by a set of drift chambers with wires perpendicular to the beam direction which allow particle detection for polar angles θ ≳ 7°. Multiwire proportional chambers are used for triggering purposes. In the Q² range studied here, the scattered lepton is identified through its energy deposition in the backward electromagnetic calorimeter SpaCal [16]. The SpaCal signal is also used to trigger the events, in conjunction with signals from the tracking chambers. A drift chamber (BDC) in front of the SpaCal is used in combination with the interaction vertex to reconstruct the polar angle θ_e of the scattered lepton. The liquid argon (LAr) calorimeter surrounds the CTD and is segmented into electromagnetic and hadronic sections. It covers the polar angular range 4° < θ < 154° with full azimuthal coverage. The detector is surrounded by an instrumented iron return yoke that is used for muon identification (central muon detector CMD, 4° < θ < 171°). The J/ψ decay electrons are identified via their energy deposition in the electromagnetic part of the calorimeter and via their specific energy loss in the gas of the central drift chambers. Muons are identified as minimum ionising particles in the LAr calorimeter or through track segments reconstructed in the CMD.

Kinematics

The kinematics for charmonium production are described with the standard variables used for deep inelastic interactions, namely the square of the ep centre of mass energy, s = (p + k)², the squared four momentum transfer Q² = −q², and the mass of the hadronic final state, W, with W² = (p + q)². Here k, p and q are the four-momenta of the incident lepton, proton and virtual photon, respectively. In addition, the scaled energy transfer y = p·q / p·k (the energy fraction transferred from the lepton to the hadronic final state in the proton rest frame) and the J/ψ fractional energy z = p_ψ·p / q·p are used, where p_ψ denotes the J/ψ four-momentum. The event kinematics are reconstructed using a method which combines the measurement of the scattered lepton and the hadronic final state to obtain good resolution in the entire kinematic range. The variable Q² = 4 E E' cos²(θ_e/2) is reconstructed from the energy E' and angle θ_e of the scattered lepton (E is the energy of the incoming lepton). For the calculation of y and z the hadronic final state is used in addition:

  y = Σ_had(E − p_z) / Σ(E − p_z),   z = (E − p_z)_ψ / Σ_had(E − p_z),   (1)

where Σ(E − p_z) runs over all the final state particles including the scattered lepton, and in Σ_had(E − p_z) only the final state hadrons are summed. The J/ψ momentum is reconstructed from the momenta of the decay leptons. For the calculation of the sums in equations (1) a combination of tracks reconstructed in the CTD and energy depositions in the LAr and SpaCal calorimeters is used. W is reconstructed using the relation W² = ys − Q². Differential cross sections are measured as functions of the following variables: Q², W, z, the transverse momentum squared of the J/ψ with respect to the beam axis, p²_t,ψ, and the rapidity (footnote 2) of the J/ψ in the laboratory frame, Y_lab. Differential cross sections are also given for p*²_t,ψ and Y*, which are computed in the γ*p centre of mass frame. The resolution, as determined from the detector simulation, is 2-5% for the variables Q², p²_t,ψ, Y_lab and Y*. For z the resolution is ~8% at high z ~ 1, degrading to 15% at low z values. For W the resolution is ~7% for W < 145 GeV and ~4% above. The resolution of p*²_t,ψ is somewhat worse (~30% of the chosen bin widths).

Footnote 2: The rapidity Y = (1/2) ln[(E + p_z)/(E − p_z)] of the J/ψ is calculated with respect to the proton direction in the laboratory frame and with respect to the photon direction in the photon-proton centre of mass frame.
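As a cross-check of these definitions, a minimal sketch of the reconstruction, assuming the formulas as written in equation (1); the toy event below uses invented four-vector values (energies in GeV, angles in radians).

```python
import math

E_E, E_P = 27.5, 920.0            # lepton and proton beam energies [GeV]
S = 4.0 * E_E * E_P               # squared ep centre of mass energy

def reconstruct(E_prime, theta_e, hadrons, psi):
    """Q2 from the scattered lepton; y, z, W from the hadronic final state.
    hadrons: list of (E, p_z) of all hadronic final state particles,
    including the J/psi decay leptons; psi: (E, p_z) of the J/psi."""
    q2 = 4.0 * E_E * E_prime * math.cos(theta_e / 2.0) ** 2
    sum_had = sum(E - pz for E, pz in hadrons)
    sum_all = sum_had + E_prime * (1.0 - math.cos(theta_e))  # add lepton term
    y = sum_had / sum_all
    z = (psi[0] - psi[1]) / sum_had
    W = math.sqrt(y * S - q2)
    return q2, y, z, W

# toy event: scattered lepton at 170 degrees plus three hadronic objects
psi = (30.0, 25.0)
hadrons = [psi, (15.0, 12.0), (8.0, -2.0)]
print(reconstruct(20.0, math.radians(170.0), hadrons, psi))
```

If no particles escape detection, Σ(E − p_z) equals twice the incident lepton energy, which is the basis of the (E − p_z) > 40 GeV requirement applied in the event selection below.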
Monte Carlo Simulations

Corrections for detector effects are applied to the data using a Monte Carlo simulation in which the H1 detector response is simulated in detail. The simulated events are passed through the same reconstruction and analysis chain as the data. The correct description of the data by the simulation is checked by independent measurements. Residual differences between data and simulation, e.g. in the efficiencies of the lepton identification or of the trigger, are included in the systematic uncertainties (Table 1). The Monte Carlo generator used for inelastic J/ψ production is EPJPSI [17], which generates events according to the Colour Singlet Model in leading order. In contrast to the standard version used previously [11], the full dependence of the matrix element on Q² has been implemented [18]. In order to achieve a good description of the data, the events are reweighted in Q² using a parametrisation of the measured Q² distribution. A systematic uncertainty of ±5% is estimated for this procedure by repeating the analysis without this reweighting. Diffractive production of J/ψ and ψ(2S) mesons is simulated using DIFFVM [19] with parameters which have been tuned to HERA measurements. Contributions from the production of bb quark pairs with subsequent formation and decay of b-flavoured hadrons, b → J/ψ + X, are simulated by the AROMA Monte Carlo program [20]. The total AROMA cross section is normalised to the measured value of 16.2 nb [21].

Radiative Corrections

The measured cross sections are given in the QED Born approximation. The effects of higher order processes, mainly initial state radiation, are estimated using the HECTOR program [22]. With the requirement that (E − p_z) > 40 GeV (see below) the radiative corrections amount to −(4 to 5)% and depend only weakly on Q² and W. A correction of −(5 ± 4)% is applied.

Event Selection

Events with Q² > 2 GeV² are selected by requiring a scattered lepton with a minimum energy deposition of 12 GeV in the electromagnetic calorimeter and a lepton scattering angle larger than 3°. The z coordinate of the vertex position is determined for each event and required to lie in the beam interaction region. In order to minimise the effects of QED radiation in the initial state, the difference between the total energy and the total longitudinal momentum, (E − p_z), reconstructed in the event is required to be larger than 40 GeV. If no particle, in particular no radiated photon, has escaped detection in the backward direction, the value of (E − p_z) is expected to be twice the incident lepton energy, i.e. 55 GeV. The J/ψ decay leptons are reconstructed as two oppositely charged particles with transverse momenta p_t > 0.8 GeV in the CTD. Both tracks have to be identified as muons with polar angles in the range 20° < θ < 160° or as electrons in the range 30° < θ < 150°.
There is a considerable non-resonant background, mainly due to misidentified leptons (compare Fig. 2), in particular at low values of z. Therefore the number of J/ψ candidate events in a given analysis interval is extracted by fitting the mass distribution with a superposition of a Gaussian of fixed width and position (determined by a fit to all data) to describe the signal and a power law component to describe the background. The number of signal events is then obtained by counting the number of lepton pairs in the interval 2.85 < M_ll < 3.35 GeV and subtracting the fitted amount of background in this interval. This method was found to give stable and reliable results in most regions of phase space. The statistical error on the number of signal events is estimated from the statistical error on the number of events (signal plus background) in the mass interval. This method leads to a loss of events for the decay of the J/ψ to electrons due to radiation of the decay electrons in the material of the detector and due to decays J/ψ → e+e−γ. A correction of ~10% is applied. A systematic uncertainty of 3-7%, depending on the kinematic variables, is estimated for the determination of the signal event numbers by changing the functional form for the background.
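A sketch of this extraction on a synthetic mass spectrum; the Gaussian width, counting window and power-law background form below are assumptions for illustration, not the fitted H1 values.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.integrate import quad

# Fit Gaussian (fixed peak position and width) + power-law background, then
# count candidates in the window and subtract the integrated background.
M_PSI, SIGMA = 3.097, 0.07      # fixed peak position and width (assumed)
LO, HI = 2.85, 3.35             # counting window [GeV]

def model(m, n_sig, a, k):
    gauss = n_sig * np.exp(-0.5 * ((m - M_PSI) / SIGMA) ** 2)
    return gauss + a * m ** (-k)            # signal + power-law background

# synthetic binned mass spectrum
edges = np.linspace(2.0, 4.0, 41)
centers = 0.5 * (edges[:-1] + edges[1:])
rng = np.random.default_rng(1)
counts = rng.poisson(model(centers, 30.0, 50.0, 2.0)).astype(float)

(n_sig, a, k), _ = curve_fit(model, centers, counts, p0=[20.0, 40.0, 2.0])

in_window = (centers > LO) & (centers < HI)
bin_width = edges[1] - edges[0]
observed = counts[in_window].sum()
bkg = quad(lambda m: a * m ** (-k), LO, HI)[0] / bin_width
print(f"signal = {observed - bkg:.1f} events")
```

Counting in the window and subtracting only the fitted background, rather than integrating the fitted Gaussian, keeps the result insensitive to imperfections in the signal shape.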
After the cuts described above, the main background is due to the diffractive production of J/ψ mesons, which is concentrated at high z values. Diffractive J/ψ contributions can be suppressed experimentally in several ways. Previously, inelastic events were selected by requiring the hadronic system X, which is produced together with the J/ψ meson, to have a high mass [11]. In the present analysis, a selection cut z < 0.9 is used to suppress diffractive elastic and proton dissociative events. This range corresponds to the region of validity of the theoretical calculations which are used for comparison. A further cut is applied, p*²_t,ψ > 1 GeV², where p*_t,ψ is the transverse momentum of the J/ψ in the photon proton centre of mass frame. After this requirement, the background from diffractive J/ψ meson production is estimated to be less than 2% and is neglected.

Contributions from b and ψ(2S) Decays

After the cuts described in the previous section, the J/ψ sample is dominated by 'direct' inelastic J/ψ production, in which the J/ψ is directly produced from the cc pair in the process γ*g → cc(g). However, there remain contributions from both the diffractive and inelastic production of ψ(2S) mesons and from the production of b-flavoured hadrons with subsequent decays to states involving J/ψ mesons. Diffractive ψ(2S) events are expected mainly at high z values, while contributions from b → J/ψ + X are expected at low z values. With the cut p*²_t,ψ > 1 GeV², the remaining background from ψ(2S) is mainly due to diffractive events in which the proton dissociates and is estimated to be between 6% and 20% in the highest z bin, 0.75 < z < 0.9, corresponding to 2-10% in the total sample. The lower estimate (6%) is based on a Monte Carlo simulation of diffractive ψ(2S) production (the simulated contribution is shown in Figs. 3c and e). Since diffractive ψ(2S) production has not been measured in the present kinematic range, we consider this to be a crude estimate. An analysis of the present data, where events with less than five particles are selected as candidates for ψ(2S) → J/ψ π+π−, yields an estimate of 20% in the highest z bin. No correction is applied to the data, since the dependence on the kinematic variables, in particular on the transverse momentum of the J/ψ meson, is poorly known. The contribution from b → J/ψ + X, which is expected at low values of z, is estimated from a Monte Carlo simulation of bb production [20] using the measured b cross section [21]. It is estimated to be 17% in the lowest z bin (0.3 < z < 0.45, compare Fig. 3c), corresponding to 5% in the total sample. Again, no correction is applied to the data due to the poorly known dependences on the kinematic parameters. Inelastic production of ψ(2S) mesons with subsequent decays ψ(2S) → J/ψ + X gives a further contribution which at present cannot be distinguished experimentally. It is expected to contribute over the whole z range, and its dependence on the kinematic variables is likely to be similar to that of direct J/ψ production. It can thus be considered a normalisation uncertainty. In the photoproduction limit this contribution is estimated to be ~15% [4]. Summarising, the measured cross sections contain, in addition to direct inelastic J/ψ mesons, contributions from diffractive ψ(2S) events and b decays which may amount to as much as 17% in total. The distributions of these contributions in the variables studied have not been measured, but they are expected to be quite different for the two processes and different from those of the direct inelastic J/ψ events. No correction or systematic error is applied. Inelastic ψ(2S) events, on the other hand, are expected to have similar distributions to the inelastic J/ψ events themselves; their contribution may be of the order of 15% and can be regarded as a normalisation uncertainty.

Results

Differential cross sections are determined in the kinematic region 2 < Q² < 100 GeV² (⟨Q²⟩ = 10.6 GeV²), 50 < W < 225 GeV, p*²_t,ψ > 1 GeV² and 0.3 < z < 0.9. A second set of differential cross sections is determined for a subset with Q² > 12 GeV² and, in order to match the Q² range, with p²_t,ψ > 6.4 GeV². The average value of Q² in this sample is ⟨Q²⟩ = 30.9 GeV². The distribution of the invariant mass of the two leptons after all selection cuts is shown in Fig. 2. The total number of signal events is 458 ± 30, of which 70 ± 11 are at Q² > 12 GeV² and p²_t,ψ > 6.4 GeV². Comparisons between the data and the Monte Carlo simulation (EPJPSI), which is used to correct for detector effects, are shown in Fig. 3. The simulations take into account the two lepton proton centre of mass energies according to the luminosity. The simulation is normalised to the data in the interval 0.3 < z < 0.9 after reweighting the events in Q² and then describes all other distributions well. Monte Carlo estimates of contributions from b → J/ψ + X and diffractive ψ(2S) production are indicated in the z and p*²_t,ψ distributions (Fig. 3c and e). The systematic uncertainties in this analysis are typically 15-17% and amount to 21% in single bins at low z and W. For the double differential cross sections the corresponding error estimate is 21%. The systematic errors are dominated by uncertainties in estimating the number of events in regions of high non-resonant background, by the uncertainty in the Monte Carlo calculation used for acceptance and efficiency corrections, and by uncertainties in the efficiencies for lepton identification and triggering. A list is given in Table 1.

Table 1: Summary of systematic errors for the single differential J/ψ production cross sections. The error on the number of events depends on z and p*²_t,ψ.
The total error is the sum of the contributions added in quadrature.

Differential Cross Sections

The differential cross sections for inelastic J/ψ production are displayed in Fig. 4 as functions of Q² and p*²_t,ψ. In Fig. 5 normalised differential cross sections are shown as functions of the variables W, z, p²_t,ψ, p*²_t,ψ, Y_lab and Y*. The data points are plotted at the mean value of the data in each interval. The differential cross sections are also listed in Tables 2 and 3. The results of the calculations by Kniehl and Zwirner [6], who applied the NRQCD approach to electroproduction of J/ψ mesons, are shown for comparison. These calculations only include 2 → 2 contributions, which is appropriate for z < 0.9. For easier comparison of shapes, the data and the calculation in Fig. 5 have been normalised to the integrated cross sections in the measured range for each distribution. The NRQCD calculations shown in the figures include the contributions from the colour octet states ³S₁, ³P_J=0,1,2 and ¹S₀ as well as from the colour singlet state ³S₁ (labelled "CO+CS"). The contribution of the colour singlet state is also shown separately (labelled "CS"). The calculations depend on a number of parameters. The values used for the non-perturbative long range transition matrix elements (LDMEs) were determined from the distribution of transverse momenta of J/ψ mesons produced in pp collisions [7] (footnote 4). The bands in Figs. 4, 5 and 6 indicate the uncertainty in the theoretical calculation [25]. They cover a charm quark mass of m_c = 1.5 ± 0.1 GeV, variation of the renormalisation and factorisation scales by factors 0.5 and 2, the errors of the LDMEs, as well as the case that either of the two parts of M^{J/ψ}_r (see footnote 4) does not contribute. Furthermore, the effect of using the CTEQ5M [24] set of parton density functions instead of MRST98LO [23] is included.

The colour octet contribution dominates the predicted cross section for all values of Q² and p*²_t,ψ (Fig. 4a and c). In order to facilitate the comparison with the data, the ratio data/theory is shown on a linear scale in Fig. 4b and d, together with a band indicating the uncertainty in the NRQCD calculation with CO+CS contributions. The NRQCD calculation overshoots the data by about a factor of 2 at low Q², which is at the limit of the theoretical and experimental error. The agreement between the data and the theory improves towards higher Q², where the theoretical uncertainties diminish. For p*²_t,ψ similar agreement between data and NRQCD calculation is observed. Compared with the colour singlet contribution alone, the data exceed the calculations by a factor ~2.7, approximately independent of Q², while for p*²_t,ψ the ratio increases towards higher values of p*²_t,ψ. In Fig. 5 the measured and the theoretical differential cross sections are normalised to the integrated cross sections in the measured range for each distribution. The W and Y_lab distributions (Fig. 5b and f) are reasonably well described in shape by the full NRQCD calculation and also by the colour singlet contribution alone, whereas neither fully describes the Y* distribution. The z distribution is very poorly described by the full calculation including colour octet contributions, while the colour singlet contribution alone reproduces the shape of the data rather well. A similar discrepancy between data and NRQCD calculations was observed at high z values in the photoproduction limit [9,10,26].
It may be due to phase space limitations at high z for the emission of soft gluons in the transition from the colour octet cc pair to the J/ψ meson, which are not taken into account in the calculation. In photoproduction, the rapid rise of the colour octet contributions towards high z values was shown to be damped after resummation of the NRQCD expansion [26,27]. The shapes of the p²_t,ψ and p*²_t,ψ distributions (Fig. 5c and e) are rather well described when CO+CS contributions are included, while the CS contribution alone decreases too rapidly towards high values of p²_t,ψ or p*²_t,ψ. Note, however, that higher orders are expected to contribute significantly at high values of p_t,ψ, as observed in next-to-leading order CSM calculations in the photoproduction limit [8]. At higher Q² values the theoretical uncertainties of the NRQCD calculation decrease (see Fig. 4b). It is therefore interesting to compare data and theory at higher Q². The results for Q² > 12 GeV² (with p²_t,ψ > 6.4 GeV²) are given in Fig. 6 and Table 4. The requirement p*²_t,ψ > 1 GeV² is retained. The average ⟨Q²⟩ = 30.9 GeV² is larger than the squared mass of the J/ψ meson, m²_J/ψ. The statistical precision of these data is limited, and no substantial change in the comparison of data and theory is seen compared to Fig. 5.

Footnote 4: The extracted values for the LDMEs depend on the parton density distributions. For the set MRST98LO [23] the values are, in the notation of [6], O^{J/ψ}[³S

Double Differential Cross Sections

In the calculations the relative contributions of the colour octet states to the cross sections vary with z, Q² and p*²_t,ψ (compare Figs. 4, 5a and c). Therefore, differential cross sections dσ/dp*²_t,ψ and dσ/dQ² are determined in three intervals of z and compared with those for the whole z range in Fig. 7. The dependence on Q² and p*²_t,ψ is seen to be similar in the three z regions. In order to make a quantitative comparison, the differential cross sections for the whole z range are fitted with functions ∝ (Q² + m²_J/ψ)^−n or ∝ (p*²_t,ψ + m²_J/ψ)^−m, yielding n = 3.36 ± 0.53 and m = 4.15 ± 0.50, respectively, where total experimental errors are given. The results of these same fits are then compared with the data in the three z intervals after normalising the curves at low Q² or p*²_t,ψ. In Figs. 7b and d the ratio of the data over the scaled fit is shown. The data in the three z bins are reasonably described by the same functional form, although there is an indication of a faster fall with Q² at high z than in the total z range. In view of the contributions at high z from diffractive ψ(2S) production, which are expected to have a different dependence on Q², firm conclusions cannot be drawn. The observed dependence on p*²_t,ψ is within errors the same as that observed in photoproduction (m ≈ 4.6 ± 0.1) [26].

γ*p Cross Sections and Comparison to Photoproduction

For comparison with results in the photoproduction limit, the cross section for γ*p → J/ψ X as a function of W is calculated by dividing the ep cross section by the photon flux integrated over the analysis intervals [28]. The total cross section σ(γ*p → J/ψ X) is shown as a function of W in Fig. 8 and listed in Table 6 for the present data (⟨Q²⟩ = 10.6 GeV²). It is compared with the cross section in the photoproduction limit (⟨Q²⟩ ~ 0.05 GeV²) in an otherwise similar kinematic range [26]. Parametrising the cross section as (W/W₀)^δ yields δ = 0.65 ± 0.25 for the present data, where the total experimental error was used in the fit. The value is consistent with that obtained in photoproduction (0.49 ± 0.16 [26]). The W dependences are expected to be similar since they reflect the x dependence of the gluon distribution with a scale ~ Q² + m²_J/ψ.
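The shape fits quoted above can be reproduced with standard least squares. The sketch below uses invented data points and the (Q² + m²_J/ψ)^−n form given in the text; the same machinery applies to the p*²_t,ψ and (W/W₀)^δ parametrisations.

```python
import numpy as np
from scipy.optimize import curve_fit

M2 = 3.097 ** 2  # squared J/psi mass [GeV^2]

def dsigma(q2, norm, n):
    """Power-law shape dsigma/dQ2 = norm * (Q2 + m_psi^2)^-n."""
    return norm * (q2 + M2) ** (-n)

# hypothetical measured points: Q^2 [GeV^2], dsigma/dQ^2, total error
q2  = np.array([3.0, 6.0, 12.0, 25.0, 60.0])
val = np.array([55.0, 25.0, 8.0, 1.8, 0.25])
err = np.array([9.0, 4.0, 1.3, 0.35, 0.06])

popt, pcov = curve_fit(dsigma, q2, val, sigma=err, absolute_sigma=True,
                       p0=[1e5, 3.0])
print(f"n = {popt[1]:.2f} +- {np.sqrt(pcov[1, 1]):.2f}")
```

Using the total (statistical plus systematic) errors as the fit weights mirrors the procedure described for the quoted exponents.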
Parametrising the cross section as $(W/W_0)^\delta$ yields $\delta = 0.65 \pm 0.25$ for the present data, where the total experimental error was used in the fit. The value is consistent with that obtained in photoproduction ($0.49 \pm 0.16$ [26]). The $W$ dependences are expected to be similar since they reflect the $x$ dependence of the gluon distribution with a scale $\sim Q^2 + m_{J/\psi}^2$.

Decay Angular Distributions

Measuring the polarisation of the produced $J/\psi$ meson has been proposed as a means of distinguishing the various CO and CS contributions to $J/\psi$ production [5,8]. The polar ($\theta^*$) decay angular distributions are measured in the rest frame of the $J/\psi$, using the $J/\psi$ direction in the $\gamma^* p$ system as reference axis (helicity frame). They are shown in Fig. 9 (and listed in Table 7) for the whole $Q^2$ range and separately for regions of low and high $Q^2$. The $\cos\theta^*$ distribution is expected to have the form

$\frac{d\sigma}{d\cos\theta^*} \propto 1 + \alpha\,\cos^2\theta^*. \qquad (2)$

A value of $|\alpha| \lesssim 0.5$ is expected, where $\alpha$ can be negative, zero or positive depending on which intermediate $c\bar c$ state dominates the production [5]. Fitting the data distributions with a function of the form (2) yields a value of $\alpha = -0.1^{+0.4}_{-0.3}$ in the whole $Q^2$ range (Fig. 9a). For $2 < Q^2 < 6.5$ GeV$^2$ (Fig. 9b), $\alpha = -0.4^{+0.5}_{-0.4}$ is found, and for $6.5 < Q^2 < 100$ GeV$^2$ (Fig. 9c) $\alpha = 0.7^{+0.9}_{-0.6}$. The total experimental errors were used in the fits. Although the central values suggest a change from a negative to a positive value of $\alpha$ as $Q^2$ increases, this tendency is not significant. Predictions using the $k_t$ factorisation approach [12], shown in Fig. 9, are compatible with the measurements.

Summary and Conclusions

A new analysis of inelastic electroproduction of $J/\psi$ mesons has been presented. Due to the increased statistics, the kinematic range has been extended to $50 < W < 225$ GeV and reaches average values of $Q^2$ larger than the squared mass of the $J/\psi$ meson. The cross sections are measured in the range $0.3 < z < 0.9$ and $p_{t,\psi}^{*2} > 1$ GeV$^2$, where direct inelastic $J/\psi$ production dominates. Differential cross sections at average values $\langle Q^2 \rangle = 10.6$ and $30.9$ GeV$^2$ are presented as functions of $Q^2$, $W$, $z$, $p_{t,\psi}^{2}$, $p_{t,\psi}^{*2}$, $Y_{\mathrm{lab}}$ and $Y^*$. Recent theoretical calculations by Kniehl and Zwirner [6] in the framework of the non-relativistic QCD (NRQCD) approach, including colour octet and colour singlet contributions ("2 → 2" diagrams), are compared with the data. At both average $Q^2$ values, reasonable agreement is observed in the shapes of most distributions except that of $z$, which is described much better by the colour singlet contribution alone (in a recent resummation of soft gluon processes, a similar observation in the photoproduction limit could be explained through a damping of the rapid rise of the colour octet contributions towards high $z$ values). The absolute value of the full NRQCD cross section is a factor $\sim 2$ above the data at low $Q^2$ but approaches the data at higher $Q^2$ to within 15%, which is well within experimental and theoretical uncertainties. The colour singlet contribution alone is roughly a factor 2.7 lower than the data. The differential cross sections in $p_{t,\psi}^{2}$ and $p_{t,\psi}^{*2}$ are described better when CO contributions are included. In the photoproduction limit a successful description of the $p_{t,\psi}^{2}$ spectrum has been achieved within the Colour Singlet Model by including NLO corrections. These corrections are, however, not yet available for the electroproduction case under consideration here.
The dependence of the $\gamma^* p$ cross section on $W$ is the same, within errors, as in the photoproduction case. Further distributions are studied in an attempt to assess the relative importance of the different CO and CS terms. Since their contributions are expected to vary with $z$, differential cross sections $d\sigma/dQ^2$ and $d\sigma/dp_{t,\psi}^{*2}$ are measured in intervals of $z$. The shapes of the $p_{t,\psi}^{*2}$ and $Q^2$ spectra are found to be similar to those over the whole $z$ range, although there is an indication of a steeper $Q^2$ dependence at high $z$. A fit to the distribution of the polar decay angle in the helicity frame covering the whole $Q^2$ range yields $\alpha = -0.1^{+0.4}_{-0.3}$ for a parametrisation $1 + \alpha\cos^2\theta^*$. Altogether, the measurements presented here provide significant new information which will aid the further development of a quantitative understanding of $J/\psi$ meson production within pQCD.

Figure 4: Differential cross sections a) $d\sigma/dQ^2$ and c) $d\sigma/dp_{t,\psi}^{*2}$ for the inelastic process $ep \to e\,J/\psi\,X$ in the region $50 < W < 225$ GeV, $Q^2 > 2$ GeV$^2$, $p_{t,\psi}^{*2} > 1$ GeV$^2$ and $0.3 < z < 0.9$. The NRQCD calculation is shown for comparison (CO+CS, light band) and the colour singlet contribution separately (CS, dark band). In b) and d) the ratio data/theory is shown for the two cases. The theoretical uncertainty in the full calculation (CO+CS) is shown as a band around 1. The inner error bars of the data are statistical, the outer error bars contain statistical and systematic uncertainties added in quadrature.

Figure 5: Normalised differential cross sections for the inelastic process $ep \to e\,J/\psi\,X$ in the kinematic region $2 < Q^2 < 100$ GeV$^2$, $50 < W < 225$ GeV, $p_{t,\psi}^{*2} > 1$ GeV$^2$ and $0.3 < z < 0.9$: a) $1/\sigma\,d\sigma/dz$, b) $1/\sigma\,d\sigma/dW$, c) $1/\sigma\,d\sigma/dp_{t,\psi}^{2}$, d) $1/\sigma\,d\sigma/dY^*$, e) $1/\sigma\,d\sigma/dp_{t,\psi}^{*2}$ and f) $1/\sigma\,d\sigma/dY_{\mathrm{lab}}$. The inner error bars are statistical, the outer error bars contain statistical and systematic uncertainties added in quadrature. The histograms show calculations for inelastic $J/\psi$ production within the NRQCD factorisation approach [6] which have been normalised to the integrated cross section. The light band represents the sum of CS and CO contributions and the dark band the CS contribution alone (both are separately normalised). The error bands reflect the theoretical uncertainties (see text).

Figure 6: Normalised differential cross sections for the inelastic process $ep \to e\,J/\psi\,X$ in the kinematic region $12 < Q^2 < 100$ GeV$^2$, $50 < W < 225$ GeV, $p_{t,\psi}^{2} > 6.4$ GeV$^2$, $p_{t,\psi}^{*2} > 1$ GeV$^2$ and $0.3 < z < 0.9$: a) $1/\sigma\,d\sigma/dz$, b) $1/\sigma\,d\sigma/dW$, c) $1/\sigma\,d\sigma/dp_{t,\psi}^{2}$, d) $1/\sigma\,d\sigma/dY^*$, e) $1/\sigma\,d\sigma/dp_{t,\psi}^{*2}$ and f) $1/\sigma\,d\sigma/dQ^2$. The inner error bars of the data points are statistical, the outer error bars contain statistical and systematic uncertainties added in quadrature. The histograms show calculations for inelastic $J/\psi$ production within the NRQCD factorisation approach [6]. The light band represents the sum of CS and CO contributions and the dark band the CS contribution alone (both are separately normalised). The error bands reflect the theoretical uncertainties (see text).

Figure 7: Differential cross sections for $ep \to e\,J/\psi\,X$ in three $z$ intervals and in the full $z$ range: a) $d\sigma/dp_{t,\psi}^{*2}$ and c) $d\sigma/dQ^2$ for low ($0.3 < z < 0.6$, open points), medium ($0.6 < z < 0.75$, triangles) and high ($0.75 < z < 0.9$, squares) values of $z$ in comparison with the results for the full $z$ region (full points).
The inner error bars indicate the statistical uncertainty, while the outer error bars show the statistical and systematic uncertainties added in quadrature. For clarity, the data have been scaled by the factors indicated. The data in the complete $z$ range are parametrised by fits of the form $(Q^2 + m_{J/\psi}^2)^{-n}$ and $(p_{t,\psi}^{*2} + m_{J/\psi}^2)^{-m}$. The same parametrisations are also shown for the data in the three $z$ bins after normalising them to the data at low $Q^2$ or $p_{t,\psi}^{*2}$. In b) and d) the ratio of the data to these parametrisations is shown on a linear scale using the same symbols as in a) and c). Note that for clarity the points have been shifted in $Q^2$ and $p_{t,\psi}^{*2}$.

Figure 9: Differential cross sections $1/\sigma\,d\sigma/d\cos\theta^*$ in $ep \to e\,J/\psi\,X$ in the kinematic region $50 < W < 225$ GeV, $p_{t,\psi}^{*2} > 1$ GeV$^2$ and $0.3 < z < 0.9$, normalised for $|\cos\theta^*| < 0.9$: a) $2 < Q^2 < 100$ GeV$^2$, b) $2 < Q^2 < 6.5$ GeV$^2$, c) $6.5 < Q^2 < 100$ GeV$^2$. The inner error bars indicate the statistical uncertainty, while the outer error bars include the statistical and systematic uncertainties added in quadrature. The shaded regions show the result of fits of the form $\sim 1 + \alpha\cos^2\theta^*$ and correspond to a variation of the fit parameter $\alpha$ by $\pm 1$ standard deviation. The dashed lines are the result of a prediction using the $k_t$ factorisation approach [12].

Table 4: Differential cross sections with statistical and systematic errors in the range $12 < Q^2 < 100$ GeV$^2$, $50 < W < 225$ GeV, $p_{t,\psi}^{2} > 6.4$ GeV$^2$, $0.3 < z < 0.9$ and $p_{t,\psi}^{*2} > 1$ GeV$^2$.
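As a closing illustration of the decay-angle analysis, the following minimal sketch fits the normalised form $1 + \alpha\cos^2\theta^*$ of Eq. (2) to binned values; the numbers are synthetic placeholders, not the measured distributions of Table 7.

```python
# Illustrative chi^2 fit of the shape (1 + alpha*cos^2(theta*)) to toy data.
import numpy as np
from scipy.optimize import curve_fit

def shape(cos_theta, norm, alpha):
    """Normalised angular shape: norm * (1 + alpha * cos^2(theta*))."""
    return norm * (1.0 + alpha * cos_theta**2)

cos_t = np.linspace(-0.8, 0.8, 9)                   # hypothetical bin centres, |cos| < 0.9
y = shape(cos_t, 1.0, -0.1) + 0.05 * np.cos(cos_t)  # toy data with a small distortion
dy = np.full_like(y, 0.08)                          # toy total errors

popt, pcov = curve_fit(shape, cos_t, y, p0=[1.0, 0.0], sigma=dy, absolute_sigma=True)
print(f"alpha = {popt[1]:.2f} +- {np.sqrt(pcov[1, 1]):.2f}")
```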
Thermal stress around a smooth cavity in a plate subjected to uniform heat flux

The two-dimensional thermoelastic problem of an adiabatic cavity in an infinite isotropic homogeneous medium subjected to uniform heat flux is studied, where the shape of the cavity is characterized by a Laurent polynomial. By virtue of a novel tactic, the obtained K-M potentials can be explicitly worked out to satisfy the boundary conditions precisely, and the possible translation of the cavity is also available. The new and explicit analytical solutions are compared with those reported in the literature, and some serious problems are found and corrected. Finally, some discussions on the thermal stress concentration around the tips of three typical cavities are provided.

Introduction

Plates under environments of changing temperature are extensively used in the design of new steam and gas turbines, high speed flight vehicles, jet and rocket engines, nuclear reactors, and various machine structures, as well as in the fields of nuclear and chemical engineering. Studies on stress concentration due to the presence of a cavity in an infinite plate have become a topic of considerable research (Savin, 1961; Kattis, 1991; Zou and He, 2018), as have the corresponding studies cited therein. Under the action of uniform heat flux, if there is no constraint in the plate and the material deforms freely, there will be no stress. However, the continuous deformation of the material must be interrupted by the appearance of a cavity, which may be necessary at the beginning of design or required as an opening by the design. This is called the thermal stress around the cavity. When the stress level is higher than the ultimate strength of the material, it will lead to structural failure. Determining the thermal stress around the cavity can effectively predict and evaluate the properties of materials.

The K-M potentials established by Muskhelishvili (1953) provide a powerful tool for the plane problem of isotropic elasticity. Two analytic complex functions, as the K-M potentials, are introduced to express the displacement and the stress so as to naturally satisfy the constitutive relations and the equilibrium equations, and the original problem is then transformed into a boundary value problem of analytic functions. Due to the convenience and universality of the K-M potential method, it is also applied to the thermoelastic problem involved in this paper.

The problem of determining the thermal stress around a non-elliptical cavity with an adiabatic boundary in an isotropic plate under uniform heat flux has been considered for a long time. Florence and Goodier (1960) studied the thermal stress around an ovaloid hole and analysed the two specified cases of the elliptical hole and the slot. Deresiewicz (1961) extended the study to holes of general shape described by Laurent polynomials, and derived explicit solutions by adding counter terms with positive powers. Years later, Yoshikawa and Hasebe (1999) studied the thermal elasticity of arbitrarily shaped holes in infinite media when a point heat source is at any position in the plane. Bhullar and Wegner (2009) analyzed the thermal stress of a hyperelliptical hole under uniform heat flow by using the complex variable method under isothermal conditions. They found that the stress concentration at the tip of the hole is very serious. Jafari et al. (2016a) analyzed the thermal stress distribution of hypocycloidal holes. Jafari et al.
(2016b) studied the thermal stress of triangular holes with different shape parameters and heat flux directions. Chao et al. (2018) presented a series solution of the thermoelastic problem of triangular holes with coating. Yu et al. (2019) studied the elastic problems of a thermoelectric material containing an arbitrarily-shaped hole under a uniform remote electric current and a uniform energy flux. Tseng et al. (2020) studied the case of a square hole with coating. It is found that (1) the non-elliptical cavity shapes the researchers studied, with few exceptions, are those characterized by Laurent polynomials, and (2) there has been no significant progress in the analytic solution since Deresiewicz (1961); most of the research works are about applications and extensions.

In this paper, based on the tactics proposed in our previous work (Zou and He, 2018), a new solution for the thermal stress problem of uniform heat flux applied to an isotropic plate with a non-elliptical cavity under the adiabatic boundary condition is obtained. This solution is more operable and effective than that given by Deresiewicz (1961). The rest of this paper is arranged as follows. In Section 2, the thermal stress problem is briefly formulated: the temperature distribution under remote uniform heat flux is derived in a compact process, and the thermal stress in the context of the K-M potential theory is divided into two parts, one balancing the thermal dislocation and accounting for the relative rigid-body translation of the cavity with respect to the matrix, the other being the perturbance that reduces to zero at infinity and guarantees the satisfaction of the traction-free condition on the boundary of the cavity. The basic potentials describing the thermal dislocation under an isothermal state, but without regard to the boundary loading, were first proposed by Florence and Goodier (1960). In Section 3, general explicit potentials, never reported before, are presented when the shape of the cavity is characterized by a Laurent polynomial, while the detailed derivations are postponed to Appendix A; the effectiveness of the new solution is discussed by comparing it with previous results. Analyses of the stress distribution are expanded in Section 4: three typical shapes, triangle, square and pentagram star, are considered; the hydrostatic pressure, the maximal shear stress, and the toroidal normal stress along the contour of the cavity are illustrated; the relation between the stress concentration and the curvature, the effect of the heat flux direction, and the decay of stress around the tips are discussed. Some concluding remarks are drawn in Section 5.

Basic equations

2.1 Description of the problem

Consider an infinite body Ω in two-dimensional space consisting of a homogeneous and isotropic medium whose thermal conduction behaviour is governed by Fourier's law and whose elastic behaviour is governed by Hooke's law. We are concerned with the perturbance effect due to a free cavity with a traction-free, thermally insulated boundary while the matrix is subjected to uniform heat flux at infinity, as shown in Fig. 1. By the Riemann mapping theorem in complex analysis, there is a unique function in the form of a Laurent series (see, e.g., Zou et al. 2010),

$z = \omega(\zeta) = h + R\left(\zeta + \sum_{k=1}^{\infty} m_k\,\zeta^{-k}\right), \quad |\zeta| \ge 1, \qquad (1)$

mapping the exterior of the cavity onto the exterior of the unit circle with the origin as its center. In the above expression, $h$ is a point inside the cavity, $R$ is a positive real parameter indicating the size of the cavity, and $m_k$ are the complex parameters representing the shape.
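A minimal numerical sketch of evaluating such a mapping on the unit circle may help fix ideas; the coefficient $m_3 = 1/9$ below is an illustrative placeholder (giving a smooth, four-fold symmetric cavity), not one of the shape parameters used later in the paper.

```python
# Sketch: evaluate z = h + R*(zeta + sum_k m_k * zeta^(-k)) on |zeta| = 1.
import numpy as np

def boundary_points(R=1.0, h=0.0, m={3: 1 / 9}, n_pts=400):
    """Return points t = omega(sigma) on the cavity boundary, |sigma| = 1."""
    sigma = np.exp(1j * np.linspace(0.0, 2.0 * np.pi, n_pts, endpoint=False))
    return h + R * (sigma + sum(mk * sigma ** (-k) for k, mk in m.items()))

t = boundary_points()  # m3 = 1/9: a smooth four-fold symmetric cavity
print(t[:3])
```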
It can be seen that a point $t$ on the boundary of any simply connected shape can be accurately described by $t = \omega(\sigma)$, $|\sigma| = 1$. In general, we only need to take a limited number of terms $N$ to meet the accuracy requirement; to further improve the accuracy, we only need to increase the number of terms. So one might as well use

$t = h + \omega(\sigma) = h + R\left(\sigma + \sum_{k=1}^{N} m_k\,\sigma^{-k}\right), \quad |\sigma| = 1, \qquad (2)$

to describe the cavity, and for the problem of a sole cavity, $h = 0$ is usually taken. The uniform heat flux at infinity is described by $\mathbf{q}^\infty = q\,\mathbf{d}$, where $\mathbf{d}$ is the unit vector indicating the direction of the heat flux, as shown in Fig. 1, and can be denoted by a complex variable $d = e^{i\delta}$, with $\delta$ being the included angle between the heat flux direction and the x-axis. It is assumed that the deformation under the action of thermal stress is always infinitesimal, within the linear elastic range. In this paper the Cartesian coordinate system is adopted, with $(x_1, x_2)$ indicating an arbitrary point and $z = x_1 + i x_2$, $i = \sqrt{-1}$, the corresponding complex variable.

Temperature

The stationary temperature field $T$ of a linear and isotropic thermal material is a harmonic function satisfying the Laplace equation $\nabla^2 T = 0$. Denoting the negative temperature gradient by $\mathbf{g} = -\nabla T$, we can write the heat flux as $\mathbf{q} = k\,\mathbf{g}$, where $k$ is the thermal conductivity coefficient of the matrix. Besides the remote condition (3), the adiabatic condition across the boundary of the cavity is $\partial T/\partial n = 0$, where $n$ denotes the outer normal direction of the boundary. Now introduce the complex representations, say the direction vector $n = n_1 + i n_2$. The complex temperature function $\theta(z) = T + i T^*$ is constructed by using the temperature field $T$ and its conjugate harmonic function $T^*$; it is an analytic function outside the cavity and satisfies the Cauchy-Riemann relations $\partial_1 T = \partial_2 T^*$ and $\partial_2 T = -\partial_1 T^*$, where $\partial_\alpha$ represents the partial derivative with respect to the coordinate $x_\alpha$. Substitution of (7) into (4) and (5) yields (8), where $\overline{(\cdot)}$ denotes the conjugate of a complex variable $(\cdot)$. Thus, from (9), where $\mathrm{Re}[\cdot]$ indicates the real part of a complex variable, the constraint conditions (3) and (6) can be rewritten in complex form. Following Hasebe and Tamai (1986), the complex temperature can be broken down into two parts, $\theta(z) = \theta_1(z) + \theta_2(z)$, where the basic part $\theta_2(z)$ is determined by the reference temperature $T_0$ of the material without thermal stress and by the remote heat flux, while $\theta_1(z)$ is the complementary, holomorphic part due to the cavity, satisfying

$\theta_1(t) - \overline{\theta_1(t)} = \frac{q}{k}\left(t\,e^{-i\delta} - \bar{t}\,e^{i\delta}\right), \quad \text{on } t = \omega(\sigma),\ |\sigma| = 1.$

Considering the holomorphic property of $\theta_1(z)$ at infinity in the image plane and the mapping function (2), we obtain the solution in terms of the complex variable in the image plane. Finally, substitution of (16) and (19) into (15) yields the complex temperature field (20). This result was first reported by Florence and Goodier (1960) for the problem of an insulated ovaloid cavity, and was applied directly to the general shapes (1) by Deresiewicz (1961) without any explanation, but Jafari et al. (2016a, 2016b) presented a wrong expression for the complex temperature.

Thermal stress

According to the theory of planar elasticity established by Muskhelishvili (1963), two analytic functions $\varphi(z)$ and $\psi(z)$, called the Kolosov-Muskhelishvili (K-M) potentials, can be introduced to express the stress and the displacement of point 2 relative to point 1 as (21) and (22), where the effect of thermal expansion is taken into account, $\mu$ is the shear modulus of the material, and $\kappa$ and $\beta$ are parameters associated with Poisson's ratio $\nu$ and the linear expansion coefficient $\alpha$, respectively, taking their usual values: $\kappa = (3-\nu)/(1+\nu)$ for plane stress and $\kappa = 3-4\nu$ for plane strain, and $\beta = \alpha$ for plane stress and $\beta = (1+\nu)\alpha$ for plane strain.
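For orientation, the stress representation (21) reduces, in the isothermal case, to the classical K-M combinations $\sigma_{11}+\sigma_{22} = 4\,\mathrm{Re}\,\varphi'(z)$ and $\sigma_{22}-\sigma_{11}+2i\sigma_{12} = 2[\bar z\,\varphi''(z) + \psi'(z)]$. The following is a minimal sketch of evaluating these combinations for given potentials; the potentials below are illustrative, not the solution of this paper.

```python
# Sketch: stresses from the classical (isothermal) K-M combinations.
import numpy as np

def stresses(z, dphi, ddphi, dpsi):
    """Return (s11, s22, s12) at points z from phi', phi'' and psi'."""
    a = 4.0 * np.real(dphi(z))                   # s11 + s22
    b = 2.0 * (np.conj(z) * ddphi(z) + dpsi(z))  # s22 - s11 + 2i*s12
    s11 = 0.5 * (a - np.real(b))
    s22 = 0.5 * (a + np.real(b))
    s12 = 0.5 * np.imag(b)
    return s11, s22, s12

# Example: a dislocation-type pair phi(z) = A*log(z), psi(z) = B*log(z)
A, B = 1.0 + 0.5j, -0.3 + 0.2j
z = np.array([2.0 + 1.0j, -1.5 + 2.0j])
print(stresses(z, lambda z: A / z, lambda z: -A / z**2, lambda z: B / z))
```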
The stress in the above formulae naturally satisfies the compatibility relations and the equilibrium equations without body force. The traction $f = f_1 + i f_2$ on a surface with normal $n$ and arc length coordinate $s$ is described by (24). The thermal dislocation (Florence and Goodier, 1960) arising from the temperature field (20) requires basic K-M potentials of the form

$\varphi_0(\omega(\zeta)) = A\,\ln\zeta, \qquad \psi_0(\omega(\zeta)) = 2\mu\,\bar{u}_0 + B\,\ln\zeta, \qquad (25)$

where $A$ and $B$ are two undetermined parameters with the dimension of linear force, and $u_0$ indicates the induced rigid-body translation of the cavity relative to the matrix (Zou and He, 2018). The remaining perturbation potentials admit the representation (26),

$\varphi_1(\zeta) = \sum_{k=1}^{\infty} a_k\,\zeta^{-k}, \qquad \psi_1(\zeta) = \sum_{k=1}^{\infty} b_k\,\zeta^{-k},$

using the complex variable $\zeta$ in the image plane, where $a_k$ and $b_k$ are two sets of coefficients with the dimension of traction. The sum of (25) and (26) gives the total K-M potentials (27). Taking account of the property $\bar\sigma = 1/\sigma$ on the boundary and of the position of the origin inside the cavity, the total dislocation around the cavity, taken anticlockwise, can be calculated from (22) and should vanish, which gives (28); substitution of (25)-(27) then yields (29). In addition, the resultant force on the boundary of the cavity should remain in balance, which, from (24), means (30). Combination of (29) and (30) yields the solution (31). It can be found that $A$ is a parameter associated with the material properties, the heat flux and the shape characteristics of the cavity. It is remarkable that only two effective shape parameters, $R$ and $m_1$, enter the formula for $A$, and in particular the thermal dislocation disappears when the cavity becomes a slot parallel to the direction of the heat flux, namely $m_1 = e^{2i\delta}$. It is found that the formula for $A$ given by Jafari et al. (2016a, 2016b) is also wrong. Since the potentials have the dimension of linear force, with no loss of generality we define in this paper the characteristic value of stress $\sigma_0$ by (32), and so $A$ takes the scaled form (33). From (24), the traction-free boundary condition can be expressed by (34); substitution of (25), (27) and (31) yields the constraint condition (35) on the perturbance potentials on the boundary, which will be used to solve for $\varphi_1(\zeta)$ and $\psi_1(\zeta)$.

General explicit solution and its effectiveness

Following the method proposed by Zou and He (2018), we obtain the following results: (1) the perturbance potentials have finite expressions, namely $\varphi_1(\zeta)$ has maximal negative power $N-1$ while $\psi_1(\zeta)$, multiplied by the mapping derivative, has maximal negative power $N+1$; (2) the total K-M potentials for the problem of an insulated cavity characterized by (2) can be worked out in the form (36)-(38), where $A$ is given by (33), the coefficients $a_k$ are solved from the linear equations (A.9), the rigid-body translation $u_0$ is obtained from (A.10), the auxiliary shape parameters are directly calculated by (A.3), and the remaining coefficients are calculated accordingly. The detailed derivation is presented in Appendix A.

The problem of thermal stress involved in this paper is classical and important. For the cases of a cavity with shape characterized by the Laurent polynomial (2), Florence and Goodier (1960) pioneered the study of the thermal stress distribution caused by the thermal dislocation and dealt with the first non-elliptical cavity, of ovaloid form. Deresiewicz (1961) extended the analysis to cavities whose boundary can be described by Laurent polynomials, and presented a general solution for arbitrary shapes containing terms with positive powers. Since then, there has been no significant progress. For instance, the solutions given by Jafari et al. (2016a, b) completely followed the method of Deresiewicz (1961). The solutions (36)-(38) present a new form that contains no terms with positive powers.
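As a numerical illustration of working with such finite negative-power expansions, the sketch below evaluates a potential of the dislocation-plus-series form on the unit circle, the kind of evaluation underlying the boundary-residual tests reported next; all coefficients are placeholders.

```python
# Sketch: evaluate phi(sigma) = A*log(sigma) + sum_k a_k * sigma^(-k), |sigma| = 1.
import numpy as np

def phi_on_circle(A, a, n_pts=256):
    """A: dislocation parameter; a: list of coefficients a_1, a_2, ..."""
    sigma = np.exp(1j * np.linspace(0.0, 2.0 * np.pi, n_pts, endpoint=False))
    series = sum(ak * sigma ** (-(k + 1)) for k, ak in enumerate(a))
    return A * np.log(sigma) + series

vals = phi_on_circle(A=0.1 + 0.2j, a=[0.5, 0.0, -0.05])
print(np.max(np.abs(vals)))  # e.g. a crude scale of the boundary values
```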
In the following, for comparison, we calculate the resultant force on the boundary of the cavity according to the solutions of the previous studies and to our formulae, that is, we test whether the residual of the traction boundary condition (TBC) equals zero everywhere. Two examples are listed for comparison.

(1) For a cavity characterized by $\omega(\sigma) = R(\sigma + m_1\sigma^{-1} + m_3\sigma^{-3})$, $|\sigma| = 1$, the reported results (Florence and Goodier, 1960) are given by (40), where a typo is corrected by changing the sign before $\bar{A}$ in the perturbance potential $\psi_1(\zeta)$. Our solutions can be expressed with $A$, $m_1$ and $m_3$ as (41). Setting $m_1$ and $m_3$ to the corresponding values, we can check that the formulae (40) and (41) agree.

(2) For the second example, according to our formulae the potentials can be obtained explicitly, and the residuals of the TBC, scaled by $\sigma_0$, are tested and shown in Fig. 2(b).

The above comparison makes clear that Florence and Goodier's (FG's) and Deresiewicz's solutions and ours are all correct near the boundary, but the constant term in the second potential lies outside their initial constructions, and we point out the physical meaning of this term as the possible rigid-body translation between the matrix and the cavity. In addition, Deresiewicz's use of terms with positive powers inevitably leads to the operational problem of big numbers when the shapes need terms of high degree to be characterized. As an example, for a cavity of regular dodecagon shape, the hydrostatic pressure and the maximal shear stress (MSS) can be formulated, and the results from the present theory and from Deresiewicz's are illustrated in Fig. 3. It is easy to see that the maximal shear stress field of Deresiewicz's solution is heavily contaminated for points away from the boundary, while the hydrostatic pressure fields of the present theory and Deresiewicz's are the same. This is because the calculation of the maximal shear stress needs the second potential, which involves some terms of positive powers in Deresiewicz's theory.

Results and analyses of stress distribution

Under the action of remote uniform heat flux, stress concentration appears near the cavity, and the stress concentration at the tip is more obvious and serious; this is the position most prone to material failure. This property has an important impact on industrial design and material performance. Therefore, we focus on the stress distribution at the tips in this section. Since the distribution of thermal stress at the tip of a cavity varies with the shape and the direction of heat flux, especially when the cavity has multiple tips, the stress distribution at each tip is also different. Based on the above understanding, we calculate the toroidal normal stress around the boundary of the cavity, and the hydrostatic pressure and maximum shear stress near the tips of the different cavities, in order to discuss the influence of the heat flux direction on the stress around each tip. For convenience, the size parameter of the different shapes is chosen to guarantee that the cavities in comparison have the same area. All lengths are scaled by $R$, and all stresses by $\sigma_0$. We choose the triangle, the square and the pentagram star as three representative shapes, whose truncated mapping functions of the form (2) are adopted with $|\sigma| = 1$ (more terms can be added according to the accuracy requirements). These mapping functions for the cavity shapes, with the tips indexed, are drawn in Fig. 4.

Maximal shear stress (MSS) around the cavities

As shear failure is the main failure mode of metal materials, the distribution of the maximum shear stress (MSS) near the cavity is an important topic in strength analysis.
Fig. 5(a)-(c) and (d)-(f) show the fields of MSS near the different cavities when the heat flux has direction $\delta = 0$ and $\delta = \pi/2$, respectively. It can be found that the large stress is mainly distributed around the cavities, that there are different degrees of stress concentration at the tips, and that the MSS at each tip is not independently distributed; rather, the fields permeate each other. Regardless of whether $\delta = 0$ or $\delta = \pi/2$, the most obvious position of stress concentration (the position most unfavourable to industrial design and material performance) is located near the tip(s) whose symmetry axis direction has the smallest angle with the heat flux direction, and the distribution of MSS is found to be symmetrical. In order to compare the severity of stress concentration for the different cavity shapes, the tips with the most obvious stress concentration, say Tip1 when $\delta = 0$ and Tip2 when $\delta = \pi/2$, are selected for close-up examination, as shown in Fig. 6. Due to the smoothness of the conformal mapping, the tip of the pentagram star consists of a platform, namely it has two points with the maximal curvature. It can be clearly seen that the severity of stress concentration ranks as pentagram star > triangle > square; all the maximal MSSs appear at the points of maximal curvature, except in the case of Tip2 of the triangular cavity when $\delta = \pi/2$. The case of the point with the maximum MSS deviating from the maximum curvature point will be further explained later.

Change of toroidal normal stress (TNS) with the curvature of the cavity contour

From Zou and He (2018), the curvature of the cavity contour can be calculated from the mapping function. Considering that the boundary of the cavity is traction-free when the matrix is under the action of uniform heat flux at infinity, there is only the toroidal normal stress (TNS) $\sigma_{\theta\theta}$ on the boundary. Since $\sigma_{nn} = 0$ and $\sigma_{\theta\theta} + \sigma_{nn} = \sigma_{11} + \sigma_{22}$, the relation (21)$_1$ yields the formula of the toroidal normal stress, Eq. (51). It can be seen from Fig. 7 that in general the TNS varies sharply with the curvature of the cavity contour around the tips, but the TNSs around different tips have different variation characteristics, and these characteristics may be affected by the direction of the heat flux; even a zero value of the TNS appears at some tips. A tip with zero TNS, which depends on the heat flux direction, can serve as the most favourable tip position for industrial design and material performance. It is worth mentioning that the curvature at the tips of the pentagram star presents bimodal characteristics, and the TNS is also different at the two bimodal points (this phenomenon is caused by the platform at the tip). With an increasing number of terms in the Laurent polynomial of the pentagram star, the size of the platform at the tip will gradually decrease, becoming infinitesimal when the pentagram star is characterized by the full Laurent series, and then the TNSs at the bimodal points will gradually approach the same value. For the case of a bimodal tip, when the failure stress is reached, the crack may expand in both directions. According to Fig. 7, for the three different cavity shapes the maximum TNSs are attained at the points of maximum curvature, and their values depend on the values of the maximum curvature, so the maximum TNSs rank as pentagram star ($-99.073\sigma_0$) > triangle ($-55.75331\sigma_0$) > square ($-34.43897\sigma_0$).

Effect of heat flux direction on the maximal TNS around the tip

The direction of the heat flux has a great influence on the stress distribution around the cavity.
Here we are more concerned with the contour position where the stress reaches its extreme, and with the heat flux direction when the stress reaches its extreme. Therefore, we only discuss the TNS at the maximum curvature point and at the points with the maximal TNSs, with respect to changes of the heat flux direction. Since the stress concentration always happens around the tips, and the tips of regular polygons have the same configuration but different orientations, we choose Tip1 as a representative for discussion. It is natural to use the external normal direction at the maximum curvature point of Tip1 as the reference direction. The slope of the normal at a point on the boundary is given by an explicit formula depending on its coordinate in the image plane. Due to the existence of the tip platform in the case of the pentagram star, the external normal of the platform seems more suitable as the reference direction there. The angle Θ between the direction of the heat flux and the reference direction of Tip1 is taken from 0 to 2π anticlockwise in Fig. 8 and Fig. 9, while the TNS values are calculated from Eq. (51). In Fig. 8, the TNS at the maximum curvature point and the maximal TNS on the contour are plotted for the cavities of the three shapes. In Fig. 9, the distance of the points having the maximal TNSs relative to the maximum curvature point is given. From these figures and the calculated data, we can list the following conclusions:

• For every point around the tip, there is a reference direction such that the TNS at this point reaches its tension maximum when the heat flux direction coincides with this direction, and its compression maximum when the heat flux direction is opposite to it. The magnitude of the tension maximum is the same as that of the compression maximum. This reference direction is the external normal of the contour if the point is the symmetry point of the tip.

• For the cases of the triangle and the square, the maximum curvature point is also the symmetry point of the tip, and it has the global maximal TNS when the heat flux direction is along its external normal or opposite to it, as shown in Fig. 8(a). But when the heat flux direction is not parallel to its external normal, the point with the maximal TNS is no longer the maximum curvature point, as shown in Fig. 8(a), but deviates from it, the distance becoming largest when the heat flux direction is perpendicular to the normal, as shown in Fig. 9. For the special case of the heat flux direction tangent to the contour, the TNS at the maximum curvature point becomes zero, and two points, with the maximum compression and the maximum tension, respectively, have the maximal distance to the maximum curvature point, though the distance is smaller than two percent of R.

• For the pentagram star, the maximum curvature point is not the symmetry point of the tip and does not have the global maximal TNS. The special feature is that the external normal of the platform is not a good reference direction, as shown in Fig. 8(a). From Fig. 8(b), we find that the direction of the heat flux when the TNS reaches its global maximum is more suitable as a reference direction. Fig. 9 also shows that the deviation of the points with maximal TNS is smaller than 0.1% of R (the length of the platform of the tip is 4.5677 × 10⁻³). Therefore, as mentioned in Section 4.1, the point with the maximum stress value may not be the maximum curvature point, as the heat flux direction changes.
Careful investigation shows that the maximal TNS does not occur at the maximum curvature point when the tip has no symmetric configuration.

Figure 8: TNS (scaled by $\sigma_0$, Eq. (32)) versus the angle Θ between the heat flux direction and the reference direction: (a) the reference direction is taken to be the normal of the maximum curvature point for the triangle and the square, but the normal of the platform for the pentagram star; (b) all for the pentagram star: curves 1-3 are the TNS of the maximum curvature point using different reference directions, namely the external normal of the maximum curvature point for curve 1, the external normal of the platform for curve 2, and the direction of the heat flux when the TNS reaches its global maximum for curve 3; curve 4 is the TNS of the point having the global maximum TNS, with the same reference direction as curve 3.

Decay of hydrostatic pressure along lines starting from the tips

Along the direction of the symmetry axis of the cavity at the tip, take a straight-line segment of length 2 (in units of the radius of the circumscribed circle of the cavity shape), starting from the maximum curvature point of the tip; the hydrostatic pressure (47) on the line segments near the Tip1s of the three cavities is shown in Fig. 10(a)-(c). It can be seen that, except when the tip normal is perpendicular to the heat flux direction, in which case the hydrostatic stress on the line is always 0 (the pentagram star shows an instantaneous change due to its non-symmetric tip), the hydrostatic pressure on the line decreases rapidly, with the attenuation rate gradually slowing down as the distance increases. The effect of the heat flux direction on the hydrostatic pressure does not change with the distance, and the stress decay rate at the tip is also positively correlated with the curvature for the different shapes: pentagram star (from $99.81\sigma_0$ to $1.69\sigma_0$) > triangle (from $55.78\sigma_0$ to $2.11\sigma_0$) > square (from $34.44\sigma_0$ to $2.21\sigma_0$), when the segments are parallel to the heat flux direction.

Conclusion

In this paper, the two-dimensional thermoelastic problem of an infinite medium with a cavity subjected to uniform heat flux is studied by using the plane complex variable theory. In solving the problem, we obtain the explicit analytic solutions for the K-M potentials and the relative rigid-body translation by an equivalence-analysis tactic in series expansion. By comparison, we find that the previous solutions containing positive power terms suffer from the operational problem of big numbers when the field points are away from the boundary of the cavity. The maximum stress plays a key role in engineering design and material properties. Using the new solution for a cavity of arbitrary shape, with the triangle, the square and the pentagram star as representative shapes, we study the thermal stress distribution by investigating the effects of different cavity shapes, different tips and different heat flux directions. The major conclusions can be listed as follows:

• The thermal stress exists around the cavity and the stress concentration is obvious at the tips. For a regular polygon, the stress distribution exhibits symmetry consistent with the symmetry of the cavity shape. The maximum stress is positively correlated with the maximal curvature. For different cavity shapes scaled to the same area, the stress concentration becomes serious in the order pentagram star > triangle > square.
• When the heat flux direction is parallel to the external normal direction of the tip, the stress at this point reaches its maximum; when the heat flux direction is perpendicular to the normal direction of the tip, the stress at this point reaches its minimum; when there is an angle between the heat flux direction and the external normal direction of the tip, the maximum stress point also deviates slightly from the maximum curvature point.

• With increasing distance from the tip, the hydrostatic pressure decreases rapidly, and its decay rate is also proportional to the curvature.

Disclosure statement

No potential conflict of interest was reported by the authors.

Appendix A. Detailed derivation of the explicit analytical solution

According to the analysis of Zou and He (2018), all terms in (35) can be expanded in powers of σ, and the Cauchy integral formula guarantees that the part with non-positive powers must be in balance everywhere. Thus, the terms of positive powers can be omitted and an equivalence relation operator "∼" can be introduced to work out the potentials.
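For completeness, the contour curvature used in Section 4 can also be obtained numerically by finite differences along the mapped boundary, without the closed-form expression of Zou and He (2018); the sketch below uses an illustrative three-fold symmetric shape ($m_2 = 1/6$), not the mapping coefficients of the paper.

```python
# Sketch: numerical curvature of the mapped cavity contour.
import numpy as np

def contour_curvature(m={2: 1 / 6}, R=1.0, n_pts=2000):
    """Curvature kappa along t = R*(sigma + sum m_k sigma^-k), |sigma| = 1."""
    th = np.linspace(0.0, 2.0 * np.pi, n_pts, endpoint=False)
    sigma = np.exp(1j * th)
    t = R * (sigma + sum(mk * sigma ** (-k) for k, mk in m.items()))
    x, y = t.real, t.imag
    dx, dy = np.gradient(x, th), np.gradient(y, th)
    ddx, ddy = np.gradient(dx, th), np.gradient(dy, th)
    return (dx * ddy - dy * ddx) / (dx**2 + dy**2) ** 1.5

kappa = contour_curvature()  # m2 = 1/6 gives a smooth triangle-like cavity
print(kappa.max(), kappa.min())
```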
Eyelid Retraction in Isolated Unilateral Congenital Blepharoptosis

Isolated unilateral congenital ptosis is encountered relatively infrequently in clinical practice. It typically consists of a unilateral droopy eyelid, weak levator palpebrae superioris muscle function, lid lag, and an absent upper lid crease, with no other abnormalities on examination. We present a four-and-a-half-year-old girl with isolated and mild unilateral congenital ptosis who unexpectedly demonstrated a static upper eyelid on downgaze in conjunction with a well-formed upper lid skin crease. We attribute this uncommon sign in congenital ptosis to stiffness and presumed fibrosis of the levator muscle. Examining the function of the eyelids in all directions of gaze is important in patients with abnormalities of lid position, since additional useful information can be gleaned about the status of the levator muscle, including aberrant regeneration or fibrosis.

Keywords: isolated, congenital, blepharoptosis, levator palpebrae superioris, lid retraction in downgaze, static eyelid

Introduction

Upper eyelid blepharoptosis, commonly referred to as ptosis, is seen relatively commonly by ophthalmologists and neurologists. It is defined as an inferior malposition of the upper eyelid margin with respect to the superior corneo-scleral limbus in the absence of another cause, such as a hypotropia or enophthalmos. Isolated congenital ptosis, on the other hand, is encountered less frequently. It is present at birth but often goes unnoticed in the first few months of life (1). There are other causes of ptosis in a newborn, including a third cranial nerve palsy and the Marcus Gunn jaw-winking syndrome, but these are distinct from isolated congenital ptosis, which is the subject of this paper. Isolated congenital ptosis typically consists of a unilateral ptosis with weak levator palpebrae superioris muscle function (assessed when looking up with the frontalis muscle neutralized by exerting pressure on the ipsilateral eyebrow), lid lag on downgaze, and often an absent upper lid crease (2). The rest of the neuro-ophthalmological and neurological examination is normal. It should be mentioned that ptosis of the lower eyelid is a recognized entity; however, this paper refers only to ptosis of the upper eyelid. Isolated congenital ptosis is usually sporadic but may be familial, with no well-defined pattern of inheritance. The cause of isolated congenital ptosis was once attributed to a presumed myopathy of the levator muscle, but this view has been challenged (3), and recent evidence suggests a developmental genetic abnormality in levator muscle innervation (4,5).
The histological changes are now thought to be neurogenic in origin and the direct consequence of defective innervation. The affected muscle is usually dystrophic, with fibrous and fatty tissue replacing the normal muscle fibers, resulting in an abnormal muscle that contracts and relaxes poorly. Surgery to correct ptosis is undertaken in patients who are at risk of amblyopia, i.e., when the ptosis is interfering with visual development, or for cosmetic reasons. Surgery can only address the lid height (i.e., the palpebral aperture) and the lid contour, and create a lid crease if needed. It cannot restore normal contraction and relaxation of a dystrophic levator muscle. We present a child with an apparently mild isolated unilateral congenital ptosis whose upper eyelid appeared static on downgaze.

Case Report

The patient was first brought to the attention of pediatric ophthalmology at 10 months of age. She was referred by her family physician because her parents had noticed that her right upper eyelid did not depress on downgaze and failed to close completely during sleep. This had been noted for several months prior to the initial visit and was first seen soon after birth. The eyelids closed fully when she cried, but less tightly on the right. The parents had no other concerns about her health or vision. The pregnancy had been normal with the exception of pregnancy-induced hypertension at 35 weeks, for which the mother was treated with labetalol. The mother did not smoke, use any prescription or recreational drugs, or drink alcohol. The patient was born at 37 weeks' gestation following spontaneous vaginal delivery with vacuum assistance. Birth weight was 2.33 kg. There were no complications in the postpartum period. The parents were non-consanguineous and were both 33 years old. The father is of Mennonite origin and the mother is Irish/Welsh. The family ocular history was significant only for a second cousin whose lid also did not close fully, but this individual has not been examined by the authors and no more details are available. Our patient was otherwise well and was developing normally. On initial examination, the patient was orthophoric and her visual acuity was central, steady, and maintained bilaterally. Pupils were equal and reactive to light, and there was no afferent pupillary defect. There was no limitation of her ocular motility. There was a 0.5 mm ptosis of the right upper eyelid in primary gaze. Her right upper lid crease was noted to be normally formed and symmetrical with the left upper lid crease. Her fundi were normal and eyelid movements were full, but with poor right levator function noted in upgaze and no aberrant movements. There was no evidence of exposure keratopathy. Cycloplegic refraction was +2.25/+2.25 × 090 OD and +1.50/+0.25 × 090 OS. There were no signs of involvement of the facial nerves. The patient was prescribed glasses for the mild anisometropia. Blood work for T3, T4, and TSH was normal. Old photographs from early infancy showed right lagophthalmos measuring approximately 2 mm. On follow-up at age 14 months, the right upper lid was no longer ptotic but now appeared to be retracted by 0.5-1 mm. There was lid lag of the right upper lid in downgaze, with associated scleral show and little movement of the right upper lid with reflex blinking. Additionally, manual traction on the lashes of the right upper eyelid identified a restriction in the levator muscle.
There was a suggestion of a right hypotropia and exotropia at times, although the eyes appeared orthotropic in the primary position for near and distance targets. The remainder of the examination was unchanged. Her thyroid function tests were repeated and remained normal. The patient was wearing her glasses as prescribed, and there was no suggestion of amblyopia. Her parents opted to defer neuroimaging due to sedation concerns, and the patient was referred to pediatric neurology for a second opinion. She was seen by pediatric neurology at age 16 months. There were no neurocutaneous stigmata or dysmorphic features. Her visual behavior appeared normal. Extraocular movements were full and pupils were equal and reactive to light. The pupils remained equal in size in all directions of gaze. The right upper eyelid moved minimally on downgaze, with almost no movement on upgaze. At this time, she was noted to have a smaller palpebral aperture on the right, suggestive of a mild ptosis, but it was not possible to measure it due to poor cooperation. Corneal reflexes elicited through direct and indirect corneal stimulation were intact but produced incomplete (partial) blinks on the right. She was noted to blink spontaneously bilaterally but asymmetrically, with partial blinks on the right. She displayed normal facial movements. She could produce tears with crying, and saliva was present in her mouth. Her tone, strength, coordination, and reflexes were normal. An MRI of the brain and orbits, with thin cuts of 0.8 mm, showed no abnormalities. The patient was followed over the next 3 years, during which time the right upper lid became gradually more ptotic, by approximately 3 mm. Her lid crease remained well-formed and almost symmetrical, measuring 3 mm on the right and 2 mm on the left. Her right levator function measured 5 mm, compared to 11 mm on the left. No synkinesis of the right levator was evident, and the right upper lid was static on downgaze, with no relaxation of the levator, resulting in scleral show (Figure 1) and lagophthalmos of approximately 1 mm (Figure 2). Her right eye started to show evidence of amblyopia at 33 months of age despite good cooperation with the use of her glasses. This responded well to patching of the left eye, and her acuities were 6/9+2 OD and 6/7.5−1 OS at 54 months of age, with 80 arc seconds of stereopsis, orthophoria in all directions of gaze, and normal ocular motility. In the primary position, the midpalpebral aperture measured 7 mm on the right and 10 mm on the left at the last clinic follow-up.

Discussion

Over the last two decades, congenital ptosis has been reclassified and is currently considered to be one of the congenital cranial dysinnervation disorders. The roles of several genes have been elucidated and found to be important or essential in the development of the brainstem, specific cranial nerve nuclei, or their axonal connections to their targets, usually muscles (6). We refer the interested reader to several excellent reviews on the topic (6-8). Ptosis in the first year of life may be caused by trauma (e.g., birth injury) or may have other etiologies that are myogenic (e.g., congenital myopathy or myasthenia gravis), syndromic, metabolic, mechanical (e.g., from a tumor), or neurogenic (e.g., congenital Horner syndrome or congenital oculomotor nerve palsy) (1). In this paper, we present a child with isolated, unilateral, nonsyndromic, congenital ptosis with no other ocular or neurological features.
The upper eyelid did not cover the pupil, i.e., her ptosis was mild, to the extent that it was not apparent until after the first year of life. The right levator muscle was stiff (since the upper eyelid moved minimally on downgaze despite her mild ptosis) and fibrosed (evidenced by the palpable restriction when pulling on the upper lid). The effects of the resultant restriction were apparent soon after birth and well before her ptosis became apparent. There was almost no movement of the right eyelid on upgaze, consistent with weak levator function, and minimal movement of the upper lid on downgaze. No evidence of synkinesis was seen on careful examination. We specifically looked for evidence of levator and inferior rectus synkinesis and found none. The right upper eyelid moved only minimally downwards on downgaze. On careful clinical examination, aided by frame-by-frame analysis of a video recording of the patient's eye movements, the right upper eyelid did not move upwards at any time when the child was looking down. The corneal reflex response and blinks were asymmetrical, with a partial response on the right due to stiffness of the levator. The action of the orbicularis oculi, which was intact in our patient, overcame the levator stiffness during blinks and reflex blinking caused by corneal stimulation, albeit incompletely. During sleep, the mother reported either complete or incomplete closure (~70-80%) of the right eyelids, i.e., the patient had lagophthalmos (9), which can be attributed to the stiffness of the right levator. Our patient is unusual in that she has only mild ptosis, with poor levator function, yet a well-formed skin crease. She also demonstrates a distinctly stiff upper eyelid, which moved sluggishly and minimally on downgaze. We undertook a review of the medical literature on congenital ptosis and scrutinized photographs of children and adults with congenital ptosis in the primary position and in downgaze, where available. It was very uncommon to see the sclera superior to the limbus of the affected eye in downgaze, i.e., lid retraction or a static lid in downgaze, also referred to as a "hang-up in downgaze" (10). Hang-up in downgaze is commonly seen after surgery for congenital ptosis. It has also been described in acquired ptosis associated with orbital malignancy or trauma (10), but it is an uncommon finding in unoperated congenital ptosis. On the other hand, lid lag and a palpebral aperture that remains unchanged or increases on downgaze have been described in congenital ptosis (11,12). In such patients, the ptotic eyelid usually partially covers the superior limbus in downgaze, in contrast to our patient. We found different definitions of lid lag, with variations in the usage of the term. In one study, the authors defined lid lag as a dynamic phenomenon, seen during eye movement testing, consisting of a phase lag of the ptotic eyelid seen only during eye movement from upgaze to downgaze; the upper eyelid margin would be seen to catch up soon after the cessation of the eye movement (10). Other authors have labeled the aforementioned sign the von Graefe sign (9), which they defined as retarded eyelid descent during a downgaze movement. In the same study, lid lag was defined as a static phenomenon, in which the upper eyelid assumes a position higher than normal while the eyes are in downgaze (9).
To avoid the confusion associated with the variations in the definitions of these ocular signs, we simply describe our findings as follows: in our patient, the upper eyelid margin remained retracted after its initial slow and minimal downward movement on downgaze. It is also notable that her skin crease was well-formed bilaterally despite her very poor levator function. A poorly formed skin crease is usually an indicator of poor levator function, but that was clearly not the case with our patient, whose levator function was poor while her skin crease was very prominent. Appreciating the fibrotic state of our patient's levator has important management implications, as it can be expected that any ptosis surgery will have a high probability of causing lagophthalmos due to the rigid nature of the muscle. The etiology of the unilateral isolated congenital ptosis in our patient is unknown. There was no history of birth trauma and no evidence of dysmorphic features, of other ocular signs, or of a myopathic disorder. We therefore assume that it is neurogenic in origin, caused by abnormal innervation of the right levator palpebrae superioris muscle. Of interest, two genetic loci (PTOS1 and PTOS2) have been described in patients with congenital ptosis. Inheritance is autosomal dominant and X-linked dominant, respectively. The phenotype consists of unilateral or bilateral ptosis of variable degree in the former, and of severe bilateral ptosis, frontalis muscle overactivity, and a chin-up head posture in the latter (13). We suggest that examining the function of the eyelids in all directions of gaze is important in patients with ptosis, even when the ptosis is mild, since important information can be gleaned about the status of the levator muscle, namely aberrant regeneration or the presence and severity of fibrosis. Increasing stiffness of the levator may herald the onset of ptosis, so clinical follow-up is important.

Ethics Statement

The mother gave verbal and written permission to publish the manuscript and figures.

Author Contributions

MS examined the patient and wrote the first draft. He edited subsequent versions of the manuscript. IC examined the patient and contributed to the case report and, intellectually, to the discussion. He obtained both figures. Both approved the final version.
Factors affecting anxiety and depression during the first wave of the COVID-19 pandemic: a cross-sectional study of three different populations

Background: This paper presents the first study comparing levels of anxiety and depression, and assessing the affecting factors, among the general population, frontline healthcare workers, and COVID-19 inpatients in Turkey during the first wave of the COVID-19 pandemic. We collected data from the general population (n = 162), frontline healthcare workers (n = 131), and COVID-19 inpatients (n = 86) using an Individual Characteristics Form, the Generalised Anxiety Disorder Scale (GAD-7), and the Beck Depression Inventory (BDI) in this cross-sectional study.

Results: An increased prevalence of depression and anxiety was found predominantly in frontline healthcare workers (p < 0.001). COVID-19 inpatients and frontline healthcare workers were more likely to demonstrate anxiety (p < 0.001) than the general population. In the regression analysis, while fear of infecting relatives was a significant predictor of anxiety and depression in the general population, gender and experiencing important life events were associated with anxiety. Fear of infecting relatives and a lack of personal protective equipment while providing care were predictors of anxiety and depression in healthcare workers (p < 0.001). Furthermore, the fear of being re-hospitalised due to re-infection was a predictor of the depression and anxiety levels of the COVID-19 inpatients.

Conclusion: Policymakers and mental health providers are advised to continuously monitor psychological outcomes and provide the necessary health support during this pandemic.

Introduction

The novel coronavirus (2019-nCoV) was first detected in Wuhan, China, at the end of 2019. It spread rapidly to other countries all over the world. On March 11, 2020, the World Health Organization (WHO) declared the COVID-19 outbreak a pandemic and a public health emergency of international concern [35]. Since March 2020, strict preventive measures have been taken by governments worldwide. At the time of writing this article, there were over 200 million confirmed COVID-19 cases and 4.32 million deaths as a result of the disease globally. The number of confirmed cases in Turkey was reported at 12,051,852 in February 2022. Turkey's fourth wave of COVID-19 infections was reported in February 2022 [21]. The number of cases has decreased with the development of the vaccine. Still, problems persist in some countries due to difficulties with vaccine supply, virus mutations, and the relaxation of restrictions following decreases in the number of infections. The pandemic has physical, psychological, and social effects on individuals. While physical problems are at the forefront in the initial stages of spread, psychological and social problems continue to significantly impact individuals in the later stages of the pandemic. These problems can occur even in individuals who are not at high risk of getting sick [22,28]. Traumatic events can reduce people's sense of security, increase levels of existential dread, and adversely affect their psychological well-being. Uncertainty surrounding the duration of the pandemic, constant streams of pandemic information, reduced social contact, and government-imposed lockdowns negatively affect the mental health of individuals. Symptoms such as anxiety, depression, fear, stress and sleep deprivation have been common during the COVID-19 pandemic [32].
Although not as pervasive as COVID-19, mental health problems emerged in healthcare professionals, the general population, and victims of severe acute respiratory syndrome (SARS) or Middle East respiratory syndrome (MERS) during the SARS and MERS epidemics [7,11,17]. Post-traumatic stress disorder (PTSD) and depressive disorders have been identified as the most common long-term mental health problems in individuals affected by SARS. Similar results were reported in a study related to the MERS outbreak [17]. These results suggest that the COVID-19 pandemic could have psychological and social impacts on patients infected with COVID-19, the general population, and healthcare workers [10]. The psychological consequences of the COVID-19 pandemic are already evident in the stresses associated with risk of infection, quarantine, self-isolation and traumatic experiences in families and communities. The pandemic and subsequent social distancing measures may beget feelings of loneliness, hopelessness, and existential dread, all independent predictors of suicide. The COVID-19 pandemic can be stressful for individuals and communities. Fear and anxiety about an illness can be overwhelming and generate strong emotions in adults and children [18,33]. Individuals tend to feel anxious and insecure when the environment changes. When an infectious disease's cause, progression, and consequences are unclear, rumours propagate and closed-minded attitudes emerge. Fear has been a known and common response to contagious epidemics for centuries, e.g. the plague. People respond to such threats in many individualised ways. Fear of the unknown increases anxiety in healthy individuals and in those with pre-existing mental health problems. The spread of disease and its impact on people, health, hospitals and economies is one such unknown. Pandemics cause individuals, families, and communities to experience feelings of hopelessness, despair, grief and a profound loss of meaning [18]. Isolation strategies to prevent the spread of the virus have caused psychological and social problems by closing schools and workplaces, decreasing autonomy, and causing financial and safety concerns [28,36]. These strategies have led to loneliness, anxiety, and depressive symptoms by restricting access to social support systems such as family or friends [28]. Social isolation, quarantine, and the social and economic changes caused by the pandemic have triggered emotions that mediate psychological problems such as sadness, anxiety, fear, stress, disappointment, guilt, helplessness, loneliness and anger. These feelings are typical features of mental health problems experienced during or after a crisis [1,3,19]. Consistent exposure to pandemic-related information on social media during this crisis has also led to mental health problems [19,38]. Patients infected by COVID-19 are the most affected group. These individuals experience additional stressors such as fear of infecting family, social stigma, and coping with difficult treatment processes alone [28]. These stressors can have long-term effects on individuals diagnosed with COVID-19 who require treatment, in addition to the financial burden of managing the disease [24]. The rise in the number of people hospitalised with COVID-19 has increased the workloads of healthcare workers, worsening working conditions. Lai et al.
[16] state that the exponential increase in the number of cases, workload, personal protective equipment (PPE) limitations, sensationalist media, lack of medication, and insufficient support can have a physical and psychological impact on healthcare workers [16]. Recent publications on COVID-19 show that researchers focus on epidemiology, clinical features, radiology findings, and treatment; very few studies have focused on the mental health of those affected by the disease [13,28]. Studies on the psychological effects of the pandemic have largely been restricted to healthcare workers [8,16] and the general population [34]. The most important psychological effects of the pandemic are anxiety and depressive symptoms in the short term. It follows that the general population, infected individuals, and healthcare workers on the frontline of the pandemic experience similar psychosocial problems. Increased psychological distress has been reported predominantly in the general population, frontline healthcare workers, and individuals recovering from COVID-19. There are few studies on the mental health of COVID-19 inpatients, while many studies have been conducted on the mental health of frontline healthcare workers and general populations affected by the pandemic [12,26]. There may be a difference in anxiety and depressive symptoms between these populations. Moreover, there is limited research on the psychological distress (anxiety, depression, etc.) of patients with COVID-19. Therefore, this study aimed to determine the levels of anxiety and depressive symptoms of patients hospitalised for COVID-19, frontline healthcare workers, and the general population during the first wave of the COVID-19 pandemic in Istanbul. This study also examined the effect of factors potentially affecting these variables, such as age, gender, marital status, and physical or psychiatric illness.

Study design and sample
The present study used a descriptive, cross-sectional survey design. The data were collected from June 31 to July 15, 2020, during the first pandemic wave in Istanbul, Turkey. The sample of the study consisted of the general population (n = 162), frontline healthcare workers (n = 131) and COVID-19 inpatients (n = 86). Data were collected through online surveys from frontline healthcare workers and the general population via social media using convenience sampling. The target population for the electronic survey was frontline healthcare workers and members of the general population over 18 years old living in İstanbul. Individuals agreeing to participate were asked to complete the questionnaire through social media (WhatsApp, Twitter and Facebook). The convenience sampling method was used to obtain data from patients with COVID-19 treated in a training and research hospital in Istanbul. Data were collected from patients hospitalised with COVID-19 who agreed to participate in the study and were able to fill out the health-status data-collection form. Informed consent was obtained before data collection.

Data collection tools
The data were collected using the Individual Characteristics Form, the GAD-7 and the BDI. The Individual Characteristics Form consists of common questions about participants' age, gender, marital status, employment status, whether they have a chronic physical or mental illness, and whether they have experienced a significant life event in the past year. Questions unique to the different sample groups were also prepared.
For the general population, participants were asked: whether they were in quarantine, whether their relatives had been diagnosed with COVID-19, their levels of fear or anxiety of being infected with COVID-19, and their fear of transmitting it to people they are close to. COVID-19 inpatient participants were asked about: how many days they had been in hospital, their fear of re-hospitalisation with COVID-19, and their fear of infecting people they are in contact with. Frontline healthcare worker participants were asked: whether their PPE was sufficient, whether there was a change in their accommodation, whether they or their relatives had been diagnosed with COVID-19, their fear or anxiety levels of being infected with COVID-19, and their fear of infecting the people they are in contact with. The Generalised Anxiety Disorder Scale (GAD-7) was developed by Spitzer et al. [30] and translated into Turkish by Konkan et al. [14]. It consists of 4-point Likert-type questions (0 - not at all, to 3 - almost every day) for seven items and evaluates generalised anxiety symptoms. Scale scores of 5, 10 and 15 are the cut-off points for mild, moderate and severe anxiety, respectively. The GAD-7 is a valid and reliable scale. Cronbach's alpha in the current study was 0.89. The Beck Depression Inventory (BDI) was developed by Beck (1961) for evaluating depression symptoms in four areas: emotional, cognitive, vegetative and motivational. It was translated into Turkish by Hisli (1989). This scale consists of 4-point Likert-type questions for 21 items. The obtainable scores are between 0 and 63. The cut-off point for the Turkish sample was 17. For the BDI: 0-9 points, minimal depressive symptoms; 10-16 points, mild depressive symptoms; 17-24 points, moderate depressive symptoms; 25 points and above, severe depressive symptoms. The BDI is a valid and reliable scale. Cronbach's alpha in the current study was 0.89.

Statistical analysis
Data analyses were run via the Statistical Package for the Social Sciences, version 20.0. Means and standard deviations were calculated for continuous variables, and frequencies and percentages for categorical variables. The normality of the data distribution was evaluated using the Kolmogorov-Smirnov test and skewness-kurtosis values; for samples larger than 30, skewness and kurtosis values within ±2 indicate that the data are normally distributed [31]. The total BDI and GAD-7 scores were used for the mean-difference statistics. The one-way ANOVA test (F table value) was used to compare the means of three or more independent groups. Levene's test statistic was evaluated for homogeneity of variance between the groups. The Bonferroni correction was used, according to the homogeneity of variance, for pairwise comparison of statistically significant variables across three or more groups. The Chi-square (χ²) test was used to compare categorical variables. Pearson correlation and multiple regression analysis were used to analyse the relationships between the means.
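To make the cut-off logic of the two scales concrete, the following minimal sketch bands raw totals into the severity categories described above. The function names and the example call are illustrative, not taken from the study:

```python
# Minimal sketch of the GAD-7 and BDI severity banding described above.
# Function names and example values are illustrative, not the authors' code.

def gad7_severity(score: int) -> str:
    """Band a GAD-7 total (0-21) using the 5/10/15 cut-off points."""
    if score >= 15:
        return "severe"
    if score >= 10:
        return "moderate"
    if score >= 5:
        return "mild"
    return "minimal"

def bdi_severity(score: int) -> str:
    """Band a BDI total (0-63); 17 was the cut-off for the Turkish sample."""
    if score >= 25:
        return "severe"
    if score >= 17:
        return "moderate"
    if score >= 10:
        return "mild"
    return "minimal"

print(gad7_severity(8), bdi_severity(18))  # -> mild moderate
```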
Results
Frontline healthcare workers were the youngest group and COVID-19 inpatients the oldest. Frontline healthcare workers' fear of infecting their relatives was significantly higher than that of the general population (p < 0.001). There was no significant difference between healthcare workers and the general population regarding their fear of infection. Table 2 indicates participants' mean Beck Depression Inventory and Generalised Anxiety Disorder Scale scores. Frontline healthcare workers had the highest mean scores on the Beck Depression Inventory (15.64 ± 9.95) and the Generalised Anxiety Disorder Scale (8.52 ± 5.01). An increased prevalence of depression (70.2%) and anxiety (76.3%) was also found, predominantly in healthcare workers (p < 0.001). Moreover, COVID-19 inpatients and frontline healthcare workers were more likely to exhibit anxiety (p < 0.001) compared to the general population (Table 2).

GAD-7 and BDI findings of participants
Independent-samples t-tests were conducted to examine whether the depression and anxiety levels of the participants differed according to the individual characteristics shown in Table 3. There was no significant difference between participants' depression and anxiety levels according to age or physical illness. In the general population, the anxiety levels of female participants were higher than those of male participants (t: 2.803; p < 0.01). There was no significant association between anxiety and depression levels and gender in the other groups. Moreover, the depression levels of single healthcare workers were higher than those of married workers (t: −2.152; p < 0.01). In the general population, the anxiety (t: 2.671; p < 0.01) and depression (t: 2.663; p < 0.01) levels of participants who had experienced a significant life event in the past year were significantly higher than those of participants who had not. COVID-19 inpatients in the first days of hospitalisation, and those with a high perceived possibility of re-hospitalisation, had higher anxiety levels. While the anxiety levels of frontline healthcare workers and the general population were highly correlated with the fear of infecting others (p < 0.001), no statistically significant relationship was found between COVID-19 inpatients' anxiety and their fear of infecting relatives. Moreover, there was a significant positive relationship between participants' depression levels and fear of infecting their relatives in all groups (p < 0.001). Furthermore, COVID-19 inpatients with a high fear of re-hospitalisation had higher BDI scores.

The comparison and correlation of BDI and GAD-7 with individual characteristics of participants
Multiple linear regression analysis was performed to determine the individual characteristics affecting the participants' depression and anxiety levels (Table 4). The multiple regression analysis results were significant (p < 0.001). When the beta values in the table are examined, with all independent variables included in the regression model, gender (β = 0.170, p = 0.019), significant life events (β = 0.151, p = 0.038), and fear of infecting relatives (β = 0.355, p < 0.001) contributed significantly to anxiety in the general population. This result explains 20% of the variance in anxiety level. In the general population, fear of infecting relatives contributed significantly to the level of depression (p < 0.001). This result explains 12% of the variance in depression levels. According to the multiple regression analyses, lack of PPE and the fear of infecting relatives explained 17% of the variance in anxiety and 16% of the variance in depression among frontline healthcare workers. COVID-19 inpatients' fear of being hospitalised again was a significant predictor of anxiety and depression (p < 0.005).
This result explained 10% of the variance in the anxiety level and 12% of the variance in the depression level.

Discussion
The levels of depression and anxiety, and related factors, in the three groups (general population, frontline healthcare workers, COVID-19 inpatients) during the first wave of the pandemic in Istanbul, Turkey, were investigated in this study. Studies on this subject have increased but remain limited, particularly in Turkey. This study showed that the COVID-19 pandemic has negatively affected individuals' mental health, with frontline healthcare workers needing particular attention due to psychological distress.

(Table 4: Results of multiple linear regression analysis on factors significantly associated with depression and anxiety.)

The prevalence of anxiety in the general population, frontline healthcare workers, and COVID-19 inpatients was 63.6%, 76.3%, and 34.9%, respectively. The prevalence of depression was approximately 46.3%, 70.2%, and 33.7%, respectively. These results were higher than those found in other countries [2,10,23]. A study in China of affected and unaffected people found that 8.3% of participants had anxiety [16]; the depression prevalence in that study was also lower than our results. Severe and extremely severe levels of anxiety and depression in a Spanish sample were likewise lower than in this study [23]. Another study was conducted in Malaysia to determine depression and anxiety levels during the third wave of the pandemic; the prevalence of depression was 87.7%, and the prevalence of anxiety was 43.6% [20]. In a review of 13 studies examining the symptoms of anxiety and depression in healthcare workers during the pandemic, anxiety was assessed with a pooled prevalence of 23.2%; depression was assessed in 10 studies, with a prevalence rate of 22.8% [25]. It is noteworthy that the present study was carried out just before the normalisation phase of the outbreak in Turkey. Possible reasons for these differences are as follows. Firstly, the study was carried out in Istanbul, the city with the highest number of cases and a prolonged outbreak. COVID-19 had spread globally, and the restriction measures implemented by governments may have affected these results. Secondly, knowledge of infectious diseases is a factor: the level of knowledge affects reactions to a crisis, particularly in a pandemic [15]. Turkish people did not know how to cope with a crisis of this scale. The use of different measurement instruments, different phases of the pandemic, different study designs, and cultural backgrounds could also account for these variable results. Our study shows high levels of anxiety and depression during the COVID-19 outbreak, particularly in frontline healthcare workers and the Turkish public. When we compare the average values across the three groups, healthcare workers have greater levels of anxiety and depression than the other groups. This is contrary to the results of the large-sample study in China [10]. Our results suggest that the anxiety and depression levels of frontline healthcare workers increase when a major infectious disease pandemic occurs. In a study conducted in Turkey before the COVID-19 pandemic, the frequency of depression was 29% among doctors employed in emergency units [5]. In another study conducted during the COVID-19 pandemic, 13.7% of the participants showed symptoms of depression, and 26.7% exhibited symptoms of generalised anxiety [37].
In contrast, another study comparing the depression and stress levels of healthcare and non-healthcare workers in Turkey found no difference between the participants' stress and depression levels [4]. This difference in results is thought to be because our study was conducted in the first wave of the pandemic; it is believed that conditions have improved since then, with the availability of vaccines, the decrease in the number of COVID-19 inpatients, and healthcare professionals' growing experience in managing the pandemic. Similar to the psychological consequences of previous epidemics such as SARS [29], we found that approximately three-quarters of the frontline healthcare workers exhibited symptoms of anxiety and depression. This study showed no difference between male and female frontline healthcare workers' depression and anxiety levels. This differs from previous research indicating that women were more likely to suffer from depression and anxiety than men [6,9]. However, this study also found that the anxiety levels of female participants in the general population were higher than those of men. Contrary to previous research conducted in other countries, there was no relationship between age and anxiety or depression levels [10,23,29]. To slow the spread of COVID-19, the Turkish government imposed stringent restrictions on individuals under 20 and over 65 years of age, which could affect this result. There were high levels of depression among single (romantically uninvolved) frontline healthcare workers in our study. Similarly, Marzo et al. [20] found that being young, single, and female was a predictor of depression and anxiety. This may be related to the lack of social support systems, living away from home during the pandemic, and not communicating due to fear of transmitting the disease to relatives. There was a positive correlation between fear of infecting relatives and anxiety and depression in the general population and frontline healthcare worker groups. There was also a positive correlation between fear of becoming infected and levels of anxiety and depression in these subgroups. Working with suspected positive patients, contact with confirmed infection cases, and a lack of PPE increased the risk of contracting COVID-19 for frontline healthcare workers. Additionally, healthcare workers worried more about infecting family members, relatives and friends because they worked with infected patients. These emotional challenges caused anxiety and depression among the healthcare workers [10,27]. The physical health implications of contracting COVID-19, or of transmitting it to someone else, lead to anxiety and depression in the general population. There was a high level of anxiety during the first days of hospitalisation in patients infected with COVID-19. There was also a relationship between the fear of re-infection and levels of anxiety and depression. Possible reasons for these mental health problems among infected patients include confronting an unknown disease, treatment in isolation, and the physical complications of COVID-19 [28,38]. There were some limitations to this study. The first was that cross-sectional designs and self-reported data do not allow for confident causal conclusions. The second was that the sampling method and online surveys could introduce bias that cannot be measured or controlled for. Therefore, the results cannot be generalised throughout Turkey.
The strength of this study lies in its comparison of mental health outcomes among three subgroups during the COVID-19 surge, and of the factors related to their levels of anxiety and depression.

Conclusions
It is important to diagnose and treat the psychiatric conditions that individuals may develop in the future. Consideration must be given to the pandemic's negative impact on mental health in order to reduce the mental burden of the disease. Future research should investigate a larger sample across different ages, genders, and job roles, particularly other frontline workers such as teachers, pharmacists, and retail workers. Intervention studies that seek to improve the mental health of individuals are also recommended. Multidisciplinary teams, consisting of psychiatrists, psychiatric nurses, clinical psychologists, and other mental health professionals, should be formed by government and health authorities to meet the psychological support needs of individuals.
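For readers who want to reproduce the style of analysis reported in Table 4, the sketch below shows one plausible way to fit such a multiple linear regression in Python with statsmodels. The file name and column names are hypothetical placeholders, not the authors' data; note also that the paper reports standardised betas, which would additionally require z-scoring the variables:

```python
# Illustrative sketch of a multiple linear regression like the one in Table 4.
# "general_population.csv" and the column names are hypothetical placeholders.
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("general_population.csv")
predictors = ["gender", "significant_life_event", "fear_of_infecting_relatives"]
X = sm.add_constant(df[predictors])        # add an intercept term
model = sm.OLS(df["gad7_total"], X).fit()  # anxiety total as the outcome

print(model.rsquared)  # proportion of variance explained (~0.20 in the paper)
print(model.params)    # unstandardised coefficients for each predictor
```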
Effect of La2O3 on resistance to high-temperature oxidation and corrosion of aluminized and aluminum-chrome coatings

In this paper, the effect of La2O3 on the resistance to high-temperature oxidation and corrosion of aluminized and aluminum-chrome coatings on the nickel-based superalloy GH625 was investigated. The results show that the addition of La2O3 can dramatically reduce the oxidation weight gain of the coating, promote the formation of a protective oxide film and improve the adhesion of the oxide film, all of which enhance the high-temperature oxidation resistance of the coating. In addition, La2O3 can also markedly improve the high-temperature corrosion resistance of the coating. After 100 h of high-temperature corrosion, the corrosion weight gain of the Al+La2O3 coating and the Al-Cr+La2O3 coating was 6.01 mg cm−2 and 4.49 mg cm−2, respectively, a decrease of 50% and 33% compared with the La2O3-free coatings. The addition of La2O3 can refine the grain size, reduce the alkaline dissolution of Al2O3, and inhibit the diffusion of corrosive elements, which increases the high-temperature corrosion resistance of the coating.

Introduction
The nickel-base superalloy GH625 is one of the materials commonly used for turbine engine blades in high-temperature environments, and its composition is shown in table 1. This alloy can produce a protective Cr2O3 film, which is easily decomposed and volatilized in a long-term high-temperature environment (1470°C) [1]. Hence, its resistance to high-temperature oxidation and corrosion is not very good [2-4]. In order to improve the high-temperature performance of this alloy, some scholars have studied the effect of high-temperature protective coatings on the nickel-base superalloy. The results show that an Al coating can significantly improve the high-temperature oxidation resistance of the alloy; however, the high-temperature corrosion resistance remains rather poor [5]. This is because the Al2O3 formed on the surface of the Al coating can prevent the entry of oxygen atoms, but not that of corrosive ions. Rare-earth elements and rare-earth oxides have low electronegativity, high activity, and strong reducibility, and are often used as catalysts for oxidation reactions. The addition of a small amount of a rare earth or rare-earth oxide to a coating can significantly increase the adhesion and peeling resistance of the oxide film and improve the high-temperature performance of the coating [6-8]. Lanthanum (La) in La2O3 is the most active element among the 17 rare-earth elements (La, Ce, Pr, Nd, Pm, Sm, Eu, Gd, Tb, Dy, Ho, Er, Tm, Yb, Lu, Y, Sc), but there are not many studies on the effect of La2O3 on the performance of coatings, and its mechanism of action in high-temperature oxidation and corrosion is also not clear. Therefore, it is of great research significance and value to study the effect of La2O3 on the high-temperature oxidation and corrosion resistance of Al and Al-Cr coatings. In this paper, the effect of La2O3 on the high-temperature oxidation and corrosion resistance of Al and Al-Cr coatings on GH625 alloy was investigated in order to improve its high-temperature performance. The mechanism by which La2O3 acts, mainly as a catalyst and grain refiner, was also studied in order to provide a reference for the optimization of the high-temperature oxidation and corrosion resistance of Al and Al-Cr coatings.
Experimental
In this work, the Al and Al-Cr coatings containing La2O3 were prepared on the surface of the nickel-base superalloy GH625 by the thermal diffusion (TD) neutral salt bath method. The size of the samples was 10 mm × 10 mm × 3 mm. All surfaces of the specimens were ground with 400#, 800# and 1500# silicon carbide papers in turn. Subsequently, a suitable amount of diamond polishing agent was sprayed on the polishing cloth, and the samples were polished on a polishing machine (model MP-2A) at 300 r min−1. The osmotic agents consisted of NaCl, BaCl2, NaF, Cr2O3 and Al; the detailed compositions of the four different salt baths are shown in table 2. The reagents were weighed on an electronic balance with an accuracy of 0.1 mg, mixed together, stirred uniformly, placed in a crucible, and kept in an electric drying oven at 100°C for 2 h in order to eliminate residual moisture and prevent the samples from being oxidized and corroded during coating deposition. The osmotic agents were then placed in a box-type resistance furnace and heated to 800°C for 2 h before adjusting the temperature to 970°C. Finally, the furnace temperature was adjusted to 950°C, and the samples were put into the osmotic agents and kept there for 4 h. The Al and Al-Cr samples were tempered before the oxidation and corrosion experiments; the tempering process was 200°C × 2 h. The purpose of the heat treatment was to homogenize the microstructure of the coatings and reduce the residual stress of the substrate. The high-temperature oxidation experiment was conducted according to the HB5258-2000 standard 'Measurement method for the determination of oxidation resistance of steel and high-temperature alloys'. The high-temperature corrosion experiment used a salt corrosion method, and the test was carried out at 1000°C for 100 h. A scanning electron microscope (SEM) equipped with an energy dispersive spectrometer (EDS) was used to observe the surface morphologies and identify the chemical composition of the samples, respectively. The oxidation kinetics curves, corrosion kinetics curves, phase composition, and surface and cross-sectional morphologies of the samples were analyzed in order to study the high-temperature oxidation and corrosion mechanisms and to explore the effect of La2O3 on the high-temperature oxidation and corrosion resistance.

XRD phase analysis and surface morphologies
The XRD patterns of the four kinds of coatings are shown in figure 1. The Al coating is mainly composed of the β-NiAl (ICDD PDF 65-3199) and Ni2Al3 (ICDD PDF 65-3454) phases and a small amount of the AlNi3 (ICDD PDF 65-0144) phase. The surface of the Al-Cr coating also mainly consists of the β-NiAl and Ni2Al3 phases, while the α-Cr (ICDD PDF 06-0694) phase replaces the AlNi3 phase found in the Al coating. In addition, a small amount of the Cr7C3 (ICDD PDF 11-0550) phase was detected. The Al+La2O3 coating is mainly composed of β-NiAl and Ni2Al3. The Al-richness of β-NiAl is very important for the performance of the coating because it can generate a dense oxide film under high-temperature conditions and can prevent corrosive elements from entering the matrix material in a corrosive environment [10,11]. The main constituent phase of the Al-Cr+La2O3 coating is also β-NiAl; in addition, it contains a small amount of the Ni3Al (ICDD PDF 65-0144) phase and aluminum-chromium compounds.
The chromium in the Al-Cr+La2O3 coating is distributed mainly in the inner layer, with a small amount in the outer layer in the form of a solid solution. The main reason is that the diffusion rate of the Cr element is relatively slow [12]. Chromium readily forms a protective Cr2O3 film in a high-temperature environment, so the Al-Cr+La2O3 coating has better resistance to high temperature. The surface morphologies of the Al+La2O3 coating and the Al-Cr+La2O3 coating are shown in figure 2. The surface of the Al+La2O3 coating is relatively dense and is formed by a continuous, mushroom-cloud-like material. The surface of the coating is scattered with some spinel-like white-bright particles, the main components of which are Al, La and O; they are likely La2O3 and Al2O3. The adsorption of La2O3 can enhance the service performance of the infiltrated layer; at the same time, the shedding of La2O3 in some areas forms mushroom-shaped features on the surface of the infiltrated layer. The surface morphology of the Al-Cr+La2O3 coating is similar to that of the Al+La2O3 coating. The difference is that the mushroom-cloud-like material in the Al-Cr+La2O3 coating does not have a clear boundary line, so the penetrating layer is more continuous and dense. The composition of the white-bright particles is also Al, La and O, and they form on the surface in the same way as in the Al+La2O3 coating; however, the particles are more numerous and finer. The contents of the main elements on the surface of the two coatings are shown in table 3. The Al content is higher, and the content of Cr (which diffuses from the substrate into the coating) is lower, in the Al+La2O3 coating than in the Al-Cr+La2O3 coating. In addition, the lanthanum content on the surface of the Al-Cr+La2O3 coating is about twice that of the Al+La2O3 coating.

Resistance to high-temperature oxidation
Oxidation kinetic curves
The oxidation kinetic curves of the different coatings oxidized at 1000°C for 100 h are shown in figure 3. The oxidation kinetic curves show the same trend, but the overall oxidation weight gain of the Al-Cr coating is less than that of the Al coating, owing to the chromium in the Al-Cr coating. Chromium can inhibit the inward diffusion of oxygen, which slows down the oxidation rate [13]. In the initial stage of oxidation, the weight gain of all the coatings was very fast because no protective oxide film had yet formed on the surface. After 20 h of oxidation, all the curves show a pronounced turning point and the oxidation rate quickly slowed down. The oxidation weight gain of the coatings then shows a slowly increasing trend: after the initial oxidation, a continuous and dense oxide film had formed on the surface, making it difficult for oxygen to diffuse into the material during subsequent oxidation. According to figure 4, the oxide layer is mainly a mixture of Cr2O3 and Al2O3, and the change of the turning point of the curve is similar to that of the YSZ/YSZ-10% La2O3 system studied by Hossein et al [14].
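The shape just described, rapid early gain followed by a flattening curve, is commonly modelled with a parabolic rate law, Δm² = kp·t, for diffusion-controlled oxide growth. The sketch below estimates kp from weight-gain data by a least-squares fit through the origin; the data points are invented for illustration, not read from figure 3:

```python
# Sketch of fitting a parabolic rate law (dm^2 = kp * t) to oxidation
# weight-gain data of the kind plotted in figure 3; the values are invented.
import numpy as np

t = np.array([20.0, 40.0, 60.0, 80.0, 100.0])  # oxidation time, h
dm = np.array([0.17, 0.22, 0.26, 0.28, 0.30])  # weight gain, mg cm^-2

# Least-squares slope of dm^2 versus t through the origin gives kp.
kp = np.sum(dm**2 * t) / np.sum(t**2)
print(f"kp ~ {kp:.2e} mg^2 cm^-4 h^-1")
```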
After the coatings were oxidized for 100 h, the oxidation weight gain of the Al+La2O3 coating and the Al-Cr+La2O3 coating was 0.38 mg cm−2 and 0.30 mg cm−2 respectively, reduced by 30.6% and 26.7% compared with the La2O3-free coatings, and by 59.57% and 68.09% compared with the substrate, indicating that the Al-Cr+La2O3 coating has better resistance to high-temperature oxidation than the Al+La2O3 coating. Comparison with the La2O3-free coatings and the substrate shows that the addition of La2O3 significantly reduced the oxidation weight gain of the two coatings after oxidation for 100 h. It can therefore be seen that La2O3 can significantly improve the coatings' resistance to high-temperature oxidation. Compared with the Al coating, the phase constituents of the Al-Cr coating were different, being mainly composed of the Cr2O3, NiAl, α-Al2O3 and θ-Al2O3 phases. As shown in figure 4(b), the highest-intensity diffraction peak in the phase diagram was the NiAl phase, indicating that the NiAl content of the two coatings was still sufficient and that they still had a strong resistance to high-temperature oxidation. However, the Ni3Al phase also formed during the oxidation process, which can result in cracking or shedding of the oxide film on the surface and weaken the coating's resistance to high-temperature oxidation [15]. The oxide films in the two coatings are mainly composed of α-Al2O3 and θ-Al2O3, but the diffraction peak intensity of α-Al2O3 is stronger than that of θ-Al2O3, indicating a high degree of transformation of θ-Al2O3 to α-Al2O3. It is noteworthy that the content of θ-Al2O3 in the Al-Cr+La2O3 coating is much less than in the Al+La2O3 coating, which indicates that the transformation of θ-Al2O3 to α-Al2O3 is faster in the Al-Cr+La2O3 coating than in the Al+La2O3 coating. Figure 5 shows the surface morphologies of the different coatings oxidized at 1000°C for 100 h. As can be seen from figure 5, the surfaces consist of clusters of fine and uniform particles. According to the XRD analysis, the fine particles are mainly α-Al2O3, with a small amount of θ-Al2O3 and NiCr2O4. In addition, some tiny holes appeared on the surface of the Al-Cr coating; this may be due to the formation of Cr2O3 inside the infiltration layer during the oxidation process. The α-Al2O3 particles of the coatings with added La2O3 are finer, and the surface is more uniform and dense compared to the La2O3-free coatings, which can prevent oxygen from diffusing into the material and improve the high-temperature oxidation resistance of these coatings. The cross-sectional morphologies of the five samples oxidized at 1000°C for 100 h are shown in figure 6, with the phases of the oxide film identified by EDS. It can be seen from figure 6(a) that the surface of the substrate was severely oxidized. Compared with the substrate, the Al coating was less oxidized, and most of its surface is covered by Al2O3, as in figure 6(b); the diffusion zone is mainly composed of some milky-white substances. As can be seen from figure 6(c), the Al-Cr coating has a distinct cross-sectional structure, and the outermost surface is covered by a continuous dense Ni-rich oxide scale and dark Cr-rich alumina.
It can be seen that the surface oxide films of the Al+La2O3 and Al-Cr+La2O3 coatings both appear irregular in structure; this is because, after 100 h of high-temperature oxidation, part of the oxide film in the surface layer fell off. The surface oxide film is composed of Al2O3 and is very thick, indicating that both of the La2O3-containing coatings have good resistance to high-temperature oxidation compared with the substrate and the Al and Al-Cr coatings. In addition, the Cr-rich layer in the Al-Cr+La2O3 coating is very dense and uniform, which can improve the resistance of the coating to high-temperature oxidation.

Corrosion kinetics curves
Compared with the coatings, the high-temperature corrosion resistance of the substrate is worse. When the experiment had run for about 50 h, a large area of the substrate surface had peeled off. Therefore, the experimental results of the four coated samples corroded at 1000°C for 100 h were analyzed. The corrosion kinetics curves of the different coatings corroded in a mixed salt of 75 wt% Na2SO4 + 25 wt% NaCl at 1000°C for 100 h are shown in figure 7. It can be seen that all the coatings showed rapid corrosion weight gain at the initial stage. However, the weight gain of the coatings with La2O3 was much smaller than that of the La2O3-free coatings, indicating that La2O3 can inhibit corrosion weight gain. After 20 h of high-temperature corrosion, the corrosion weight gain rate of all the coatings decreased due to the formation of the surface oxide film, which can inhibit the progress of corrosion. The subsequent corrosion weight gain of the coatings shows a stable trend, and the corrosion weight gain of the Al-Cr coating was less than that of the Al coating. In particular, the weight gain of the Al-Cr+La2O3 coating remained lower than that of the other samples throughout, and remained basically unchanged after 80 h of corrosion. This can be attributed to the high chromium content of the Al-Cr+La2O3 coating, which gives better resistance to high-temperature corrosion and can inhibit corrosion weight gain [16]. After 100 h of corrosion, the corrosion weight gain of the Al+La2O3 coating and the Al-Cr+La2O3 coating was 6.01 mg cm−2 and 4.49 mg cm−2, respectively, a reduction of 50% and 33% compared with the La2O3-free coatings. It can be seen that La2O3 can significantly reduce the corrosion weight gain and improve the high-temperature corrosion resistance of the coating. Figure 8 shows the XRD patterns of the four samples corroded in the mixed salt of 75 wt% Na2SO4 + 25 wt% NaCl at 1000°C for 100 h. The XRD patterns in figure 8(a) show that the surface of the Al coating is mainly composed of β-NiAl, NiAl2O4, NiO and Ni3Al, with small amounts of Al2O3 and Cr2O3, while the main phases of the Al-Cr coating are β-NiAl, NiAl2O4, NiO, Ni3Al and a large amount of Cr2O3. As can be seen from figure 8(b), the main constituent of the Al+La2O3 coating was α-Al2O3, which indicates that the transformation from θ-Al2O3 to α-Al2O3 was almost complete. In addition, the consumption of aluminum in the infiltrated layer was greater, and the β-NiAl phase in the infiltrated layer was also significantly reduced. With the detachment of the oxide film, the inner oxide layer continued to oxidize to form θ-Al2O3; hence, a small amount of θ-Al2O3 was detected by XRD.
The main components of the Al-Cr+La2O3 coating were the α-Al2O3 and β-NiAl phases, indicating that the Al-Cr+La2O3 coating had not obviously degraded and still had good resistance to high-temperature corrosion. Figure 9 shows the surface morphologies of the different coatings corroded in the mixed salt of 75 wt% Na2SO4 + 25 wt% NaCl at 1000°C for 100 h. It can be clearly seen from figure 9 that the surface of the Al coating without added La2O3 has a uniform network structure and contains many holes. The holes allow the corrosive material to enter the coating easily, thereby accelerating corrosion failure. The surface of the Al+La2O3 coating consists of a large number of granular particles and traces of thin strips; from the XRD analysis, the particles are α-Al2O3 and the thin strips are θ-Al2O3. The surface of the Al-Cr coating is denser, with a grain size smaller than that of the Al coating. In addition, no cracks appeared on the surface of the coating, and the oxide film had not exfoliated. Combined with the corrosion kinetics curves, it can be seen that the Al-Cr coating has better high-temperature corrosion resistance than the Al coating. By comparison, the structure of the coatings with added La2O3 is denser and the grain size smaller than those of the La2O3-free coatings, which is why the addition of La2O3 can improve the coating's high-temperature corrosion resistance.

Characterization of coatings added with La2O3 corroded at 1000°C for 100 h
The cross-sectional morphologies of the four samples corroded at 1000°C for 100 h are shown in figure 10. It can be seen from figure 10(a) that the Al coating is still covered by a layer of Al2O3, but the outermost layer is almost exhausted, and severe peeling has produced many pits. As shown in figure 10(b), the outermost layer of the Al-Cr coating contains a large amount of dark NiAl and Cr precipitate phases and still retains a certain thickness. The diffusion layer in the Al+La2O3 coating is very loose, which illustrates that element diffusion during the corrosion process was very severe, resulting in the consumption of the diffusion layer and weakening the high-temperature corrosion resistance of the coating. Although the Al-Cr+La2O3 coating is thinner than the Al-Cr coating without added La2O3, its surface is very smooth and dense, which can prevent corrosive elements from entering the interior of the material and inhibit the progress of corrosion. In general, the resistance to high-temperature corrosion of the Al-Cr+La2O3 coating is better than that of the other coatings.

Analysis and discussion
The addition of La2O3 to the Al coating and the Al-Cr coating can significantly improve the resistance of the coating to high-temperature oxidation and corrosion. A proper amount of La2O3 can improve the high-temperature properties of the coating because of its low electronegativity and strong chemical affinity. The rare-earth oxide La2O3 can accelerate the decomposition of the osmotic agent in the salt bath and increase the potential energy of aluminum and chromium, so that more active aluminum and chromium atoms are adsorbed on the surface of the sample and continuously penetrate into the matrix to form a dense coating [17,18].
In addition, the large radius of the lanthanum atom can easily cause lattice distortion at the surface of the substrate, thereby promoting the diffusion of aluminum and chromium atoms into the matrix to form a dense protective oxide film. Compared with the La2O3-free coatings, the surface layer of the coatings with added La2O3 is denser, and the distribution of cross-sectional elements is more uniform. In general, the addition of La2O3 can refine the structure of the coating and improve its high-temperature performance. Under high-temperature oxidation and corrosion conditions, the surface of the coating with added La2O3 is mainly composed of Al-rich β-NiAl. The oxidation process involved the formation of metastable θ-Al2O3 in the early stage; the unstable θ-Al2O3 then gradually transformed into stable α-Al2O3 as oxidation proceeded. At the beginning of oxidation, Al2O3 and a small amount of Cr2O3 formed on the surface of the coating, and the oxygen activity between the infiltrated layer and the oxide film became lower. The failure of the coating at the end of oxidation is mainly due to the gradual depletion of aluminum in the penetrating layer, which can result in cracking and exfoliation of the protective oxide film. The addition of La2O3 can promote the formation of a stable and dense α-Al2O3 oxide film, thus reducing the cracking and exfoliation rate of the coating and improving its high-temperature oxidation resistance [19]. The effect of La2O3 on resistance to high-temperature corrosion is similar to its effect on high-temperature oxidation. The addition of La2O3 can reduce the alkaline dissolution of Al2O3, promote the formation of a highly stable Cr-containing phase, improve the denseness of the coating, prevent corrosive elements from spreading into the material and reduce the formation of holes in the coating, thus increasing the high-temperature corrosion resistance. In addition, the Al-Cr coating has better resistance to high-temperature corrosion than the Al coating, mainly because La2O3 can enhance the diffusion of chromium in the Al-Cr coating and slow down the corrosion rate of the coating.

Conclusions
This paper studied the effect of La2O3 on the resistance to high-temperature oxidation and corrosion of the Al coating and Al-Cr coating on GH625 alloy. The results show that:
(1) The Al+La2O3 coating is mainly composed of the β-NiAl and Ni2Al3 phases, and the main phase of the Al-Cr+La2O3 coating is β-NiAl. The β-NiAl phase is very important for the high-temperature performance of the coating: it can generate a dense oxide film to protect the matrix under high-temperature conditions and prevent corrosive elements from infiltrating into the matrix material in a corrosive environment. Therefore, the addition of La2O3 can improve the high-temperature performance of the coating.
(2) After 100 h of high-temperature oxidation, the oxidation weight gain of the Al+La2O3 coating and the Al-Cr+La2O3 coating was 0.38 mg cm−2 and 0.30 mg cm−2 respectively, a reduction of 30.6% and 26.7% compared with the La2O3-free coatings. La2O3 can promote the transformation of θ-Al2O3 to stable α-Al2O3 on the osmotic layer, reduce the diffusion rate of aluminum during oxidation, and has a certain refining effect on the grain size.
It can promote the formation of a protective oxide film and improve the adhesion and anti-stripping properties of the film, thus improving the high-temperature oxidation resistance of the coating.
(3) After 100 h of high-temperature corrosion, the corrosion weight gain of the Al+La2O3 coating and the Al-Cr+La2O3 coating was 6.01 mg cm−2 and 4.49 mg cm−2, respectively, a reduction of 50% and 33% compared with the La2O3-free coatings. The addition of La2O3 can promote the formation of a continuous and dense protective oxide film, reduce the alkaline dissolution of Al2O3, and inhibit the diffusion of corrosive elements, thereby improving the high-temperature corrosion resistance of the coating.
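As a quick arithmetic check on the reductions quoted in conclusions (2) and (3), the weight gains implied for the La2O3-free coatings can be recovered from the stated percentages. The helper function below is a hypothetical convenience for this check, not part of the paper:

```python
# Recover the implied La2O3-free weight gain from a measured gain and the
# stated percentage reduction; the helper function is illustrative only.

def implied_baseline(gain_with_la2o3: float, reduction_pct: float) -> float:
    """gain_free such that (gain_free - gain_with) / gain_free = reduction."""
    return gain_with_la2o3 / (1 - reduction_pct / 100)

print(implied_baseline(6.01, 50))  # ~12.02 mg cm^-2, La2O3-free Al coating
print(implied_baseline(4.49, 33))  # ~6.70 mg cm^-2, La2O3-free Al-Cr coating
```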
Nurses' Perception of Causes of 2015 Strikes at Federal Medical Centre Owerri: Implication for Preventive Strategies

'Get the nurses to go back to work' was the directive, after two years of intermittent labour strikes and the consequent shutdown of the Federal Medical Centre, Owerri, South Eastern Nigeria. It was assumed that, since nurses constituted the largest percentage of health workers, their resumption would frustrate and end the strike. However, studies have shown that the use of force rarely works. This study examined the nurses' perception of the causes of the strikes and the government interventions. The WHO healthy workplace framework was adapted in recommending strategies to prevent reoccurrence. An exploratory research design with a mixed-method sequential exploratory data collection strategy was utilized. Findings from focus group discussions in the first phase were used to develop a Likert-scale self-administered questionnaire for the second, quantitative phase. One hundred and thirty-nine and 461 nurses participated in the qualitative and quantitative phases respectively. The Epi Info statistical package was used for data entry and analysis of the quantitative data. Frequencies and percentages were calculated for all the items, and Chi-square was calculated between the senior and junior nurses' responses. The responses of the senior and junior nurses were similar on the items. All sixteen causes of the strike identified by participants were within the Psychosocial Work Environment of the WHO framework. Disparity in salary ranked highest (443; 96.1%), followed by high-handedness of the chief executive (436; 94.58%). Participants opined that the insincerity of the investigation panel (369; 80%) and the seriousness of the crisis led to the shutdown (341; 73.97%) of the facility. The fifteen suggested preventive strategies against strikes covered two of the WHO's workplace environments. They included, in the psychosocial environment: effective communication (450; 97.61%), promotion of nurses as and when due (447; 96.96%), harmonization of salaries (445; 96.53%), and change of chief executive (442; 95.87%); and in the physical environment: provision of materials to work with in the hospital (406; 88%). Accurate reports by panels of enquiry (448; 97.18%), appropriate prompt attention to the causes (447; 96.96%), and avoidance of sentiments (446; 96.75%) could prevent a repeat shutdown of the facility. Chi-square showed no significant difference in the responses of the senior and junior nurses. According to the WHO healthy workplace intervention model, elimination, substitution and modification of contents and processes in the workplace may be required. Stakeholders should avoid factors that hinder appropriate interventions and uphold values that protect workers and the benefitting communities.

Background
Conflicts are inevitable as long as people coexist [1], and they are unavoidable in workplaces [2]. However, industrial actions in Nigeria's health sector are unacceptably on the increase, making prompt government intervention inevitable [3]. Strike is a form of industrial action. Workers are said to be on strike when they refuse to work completely, or work below efficiency level, in order to compel their employers to attend to their demands [4,5]. Strike is the most employed form of industrial action [6]. Not only do the workers get involved in the ensuing crisis; other stakeholders are involved in one way or another [7,8].
Usually, during industrial crises, health care services are not completely paralyzed in any health institution, since there are different professionals and they belong to their respective unions [9]. However, with the coming together of a number of unions to form the Joint Health Sector Unions (JOHESU) in Nigeria, industrial actions in the health sector by JOHESU have been widespread and formidable. The code of ethics of the nursing profession requires nurses to act in the best interest of their clients/patients [10,11]; hence, in spite of their right to industrial action, the public often expects them to consider the ethics of the profession above their rights. Understandably, this is because nurses constitute the largest single group of professionals in the health sector [12,13], and in FMC Owerri, they are about 28% of the entire workforce. Nurses have been described as the backbone [14] and workhorses of health facilities [12]. Nurses provide twenty-four-hour care for patients in the hospital, and almost all aspects of patients' care in the health facility require a measure of nursing care [13]. Nurses in Nigeria belong to the National Association of Nigeria Nurses and Midwives (NANNM), an affiliate of the Joint Health Sector Unions (JOHESU) and the Nigeria Labour Congress (NLC) [15]. Unions provide an avenue for workers to express their dissatisfactions and be heard [16]. Industrial action guidelines differ from one country to another [5]. As critical as the health sector is, the workers too have rights to industrial action, which should not be denied them [4,5]. Services whose withdrawal poses a threat to the safety, health and life of the citizens are termed essential [5].

Causes of Industrial Crisis
Oleribe et al [18] observed that less than half of health workers favoured industrial actions. The participation of health professionals in industrial actions depends on the influencers [19]. Causes of industrial crisis include: failure of government to implement agreements [20] from previous negotiations [21]; poor conditions of service [2,20], including threats to workers' health [22]; management's misinterpretation of contract agreements between government and the workers [23]; poor and discriminatory remuneration [2,23]; absence of teamwork, and corruption [23]; economic recession making compromise difficult during negotiations; lack of promotions; delay in salaries of newly employed staff; poor management-worker relationships; inadequate information on labour laws; non-enforcement of the no-work-no-pay law; ineffective communication systems; lack of trust [24]; and maltreatment of workers [2] by the management. In the past, salary-related strikes accounted for 50% [22] of industrial actions, and in recent times, the figure could be as high as 82% [18]. In the Nigerian health sector, poor health care leadership and management has been documented as the commonest and most important cause of industrial action [18].

Consequences of Industrial Actions, Especially in the Health Sector
Industrial actions negatively affect the entire health system, leading to increasing medical tourism [25]. The major consequence of industrial actions is, '… tragic but avoidable loss of human life, value of which cannot be accurately computed' [26]. Others include: a bad image for the particular health institution and the whole country, loss of internally generated revenue [20], and public lack of access to quality health care [18].
As undesirable as strikes are, they are capable of providing opportunities [7] for understanding the underlying issues and establishing quality employer-employee relationships [24].

Prevention and Control of Industrial Action
Although conflicts are inevitable and the rights of workers in the health sector to industrial action cannot be denied, efforts should be made to avoid industrial action and promote alternative means of resolving crises [4]. Employers and managers should realize the imminence of crisis [7] and be ready at all times to prevent an industrial action that would cripple operations, by having implementable policies [23] and mechanisms in place [8]. There is the need for mutual respect among stakeholders, and respect for the extant regulations as well as the guiding principles put in place to promote industrial harmony and mitigate possible conflicts [27]. Appropriate application of relevant labour laws has been recommended [24:7]. It is important to note, however, that harsh directives or interventions by governments are usually not effective and often worsen the crisis [9]. It has been observed that temporary involvement of a third party in industrial conflict resolution could be positive [9]. Efficient government is critical for sustainable industrial harmony in Nigeria [25]. Government should adopt a fair wage policy acceptable to professionals in the health sector, and promote workplace equity and harmony [23]. Government should not agree to what they cannot implement and should honour agreements reached [16], in appreciation of the public's right to quality health care services. Enabling working conditions and the health of workers are of paramount importance [2,29], and these must be ensured. The employees also have a significant role to play. In line with the visions of most unions to protect the public [11,30], workers in the health sector should understand that their right to industrial action should be expressed within the dictates of their professional ethics and the general ethical principle of 'do no harm' [4]. They should find a way to provide basic and essential services to patients during strike action [18]. Skeletal services could be provided by some of the workers or by alternative sources [4] through worked-out agreements with other providers. Workers should be educated on the implications of frequent strike actions [20], labour laws and related skills [2]. There is also the need to appreciate the fact that rights to industrial action are only protected when exercised within the provisions of relevant laws [31].

The 2015 Strikes at Federal Medical Centre, Owerri
There was a general JOHESU nationwide total strike from November 2014 to February 2015. Thereafter, the Federal Medical Centre, Owerri JOHESU strike took place from 15th May to 29th July 2015, and from 18th November to 4th December 2015. The nurses, like the other JOHESU members in the hospital, were actively involved in the industrial crisis. Efforts by the supervising Federal Ministry of Health, Abuja, the State Governor and traditional institutions to mediate in the crisis failed. The Centre was therefore shut down by the supervising ministry on 5th December 2015. That was the first and only time a tertiary health facility had been shut down in the country under such circumstances. Crisis management in the health sector aims, among other things, at ensuring that the health of the citizens is not jeopardized [28].
Hence, in recognition of the grave consequences of continued lack of access to quality health care by the public, the Federal Government, through the Federal Ministry of Health, set up an Interim Management Committee (IMC) on 21st December 2015 to take over the management of the Centre and to restore peace and quality service in the institution [32]. The Committee had eleven Terms of Reference (TOR), including "… review of the events leading up to the industrial unrest…" [32:ii]. "Get the nurses to go back to work" was the specific directive to BOA (the nurse on the Committee). This was in recognition of the importance of this cadre of health workers to the restoration and sustenance of care services in the hospital. It was assumed that, since nurses constituted the largest percentage of health workers, their resumption would frustrate and end the strike. To BOA, the directive to "Get the nurses to go back to work" did not presuppose the use of coercion; rather, the adoption of a process that could lead to attainment of the desired result, in a form that outlives the tenure of the Interim Management Committee (IMC). Hence the need to employ the research process in exploring the perception of the nurses about the strikes, and to work together with the nurses on the way forward to ensure an enduring solution to the protracted crisis. No such research had been carried out in the hospital or by the supervising Federal Ministry of Health among nurses in Nigeria.

Objectives

The study was undertaken to examine the perception of the nurses on the causes of the 2015 strikes and stakeholders' roles, with a view to recommending strategies to prevent reoccurrence of the industrial action and its consequences.

The WHO Healthy Workplace Framework

The WHO Healthy Workplace Framework by Burton [33] was adapted for the study. According to Burton [33], many employers of labour do not appreciate the importance of a healthy workplace and lack the capacity to enhance it. In line with the focus of the study, the framework deals with both the workplace contents that influence its health, and the processes to improve and maintain a healthy workplace. The framework describes four interrelated core contents of the workplace, referred to as "avenues of influence". All four must be attended to in order to have a healthy workplace. They are: the physical work environment, the psychosocial work environment, personal health resources in the workplace, and enterprise community involvement. When and how to manage the challenges identified in the four avenues of influence vary from one organization to another, based on the results of the assessment. Suggested management of undesirables in the four avenues follows "… a hierarchy of controls that seeks to eliminate the hazard if possible or modify it at the source; lessen the impact on the worker; or help the worker protect him or herself from its effects." [33]. In the discussion section, Burton's [33] hierarchy of controls formed the framework for organizing the strategies to prevent reoccurrence of the strikes and consequent shutdown of the facility.

Method

The method used was applied research focused on an existing problem requiring immediate attention [34]. The project was undertaken from a social science perspective, with an understanding that the variables of interest were complex and could not be subjected to a positivist experimental research approach.
The WHO healthy workplace model recommends the observation method for assessment of the physical work environment, while a survey or interview is suggested for the remaining three avenues of influence [33]. Hence, a survey approach was adopted for the study. An exploratory research design was adopted, with a mixed-method sequential exploratory strategy for the data collection. It began with a qualitative approach for explorative purposes, followed up with a quantitative survey [35]. Pre-determined and emerging methods of data collection were applied, using both open- and closed-ended questions [35].

Study Setting

The Federal Medical Centre, Owerri (FMCO) is one of the federal tertiary health care facilities, located in the south-eastern part of Nigeria [32]. The motto expected to guide activities in the facility is: "Dedicated to Care and Service" [32:12].

Study Population

All nurses in the employment of the Federal Medical Centre, Owerri constituted the study population. The study was limited to professional nurses because of the specific charge to "Get the nurses to go back to work." The nurses were under the Nursing Services Department (NSD) of the hospital. The department had forty-two operating units spread across the various clinical specialty areas, including two outreach centres outside the main hospital. As at June 2016, there were 715 nurses serving in the forty-two nursing units. Table 1 shows the distribution of the nurses according to their ranks.

Qualitative Phase

One hundred and thirty-nine (139) nurses participated in the qualitative phase of the study. Sampling was purposeful [36], flexible, and realistic in methodology, considering individual and contextual characteristics that had implications for the study [37]. Variations and similarities among the nurses were considered, to increase representativeness and confidence [38]. Polit and Beck [34] suggested a Focus Group Discussion (FGD) group size of 6-12 discussants and sufficient data collection to saturation level. Therefore, the 139 nurses consisted of 39 older/senior nurses in four groups of 10 members each, except one with 9 members; and 100 younger/junior nurses in 10 groups of 10 members each.

Instrument for Qualitative Data Collection

A semi-structured discussion guide was developed [39,40] with two parts. The first part contained the purpose, a guide on how to conduct the discussion, confidentiality, and the contact details and signature of BOA. The second part had ten main questions and some prompts, in line with the objectives of the study. The questions were on: causes of the industrial action, stakeholder involvement, and prevention of a reoccurrence. Participants were also requested to provide any other information for the attention of, and to assist, the Interim Management Committee.

Data Collection

The qualitative data were collected through modified Focus Group Discussions (FGD) on 29th December 2015, at the first meeting of BOA with the nurses in the hospital. The FGDs were modified because of the volatility of the issue at the time, the need to build trust, and the need to reduce bias as much as possible. There was no audio recording of the discussions. The fourteen group discussions were held simultaneously. Group members appointed their own moderator, secretary, and observer, who were adequately briefed by BOA. The secretary's and observer's reports were submitted in longhand. Furthermore, implied consent was obtained through volunteering to participate [41].
Members of each group were volunteers from each rank, and the groups were homogeneous according to the nurses' ranks. One hundred and thirty-nine out of the 505 nurses present at the meeting volunteered in their respective ranks and participated in the discussion. Confidentiality was assured.

Data Analysis

The discussions lasted for between 15 and 52 minutes, with a mean duration of 36.7 minutes. There was no need for transcription. The secretary and observer reports for each group were read for an overall initial understanding of the content [42,43]. Simple content analysis was done [34]. Themes were derived from the objectives and the FGD guide questions. Data matrices [39] were developed manually. Highlighted words, phrases, and sentences from each group were retrieved and arranged within the matrices for cross-analysis, group by group. By this, the similarities and differences between focus groups were easy to identify and explain [43].

Trustworthiness (Validity and Reliability of Qualitative Data)

A detailed description [44] of the process was presented to enhance transferability. There was no inconsistency, as data were collected in all the groups simultaneously, on the same day, under similar conditions. The consistency promoted dependability [34,45]. For credibility and authenticity, expressions of the participants as derived from the matrices were used to develop the questionnaire for the quantitative data collection [34,46].

Quantitative Phase

Because of the risk of attrition due to annual leave, maternity leave, study leave, and days and nights off, all the nurses were involved in the second phase of data collection. All interested nurses on all three shifts, in a randomly selected week of the data collection month, were allowed to participate in the study.

Instruments

Findings from analysis of the qualitative data were used to develop a Likert-item instrument on which participants indicated how much they agreed or disagreed with each item, from Strongly Disagree, Disagree, Neither Agree nor Disagree, Agree, to Strongly Agree. The 75 Likert items were grouped into 7 categories of between 5 and 16 Likert items each [47]. There was a section for the respondents' suggestions/comments. The instrument also had a preamble conveying the salutation, the purpose of the study, implied consent by completing the questionnaire, confidentiality, and the researchers' information. There was a section for the questionnaire number, the participant's service unit, age at last birthday in years, and length of service in the hospital. Gender was omitted because the hospital had only 21 male nurses and their responses could have been linked to their units.

Validity and Reliability

Because the items were customized [35] to the hospital, the instrument was pre-tested with 10 senior and junior nurses in one of the units of the hospital. Repetitions and typographical errors were corrected. The reliability of the questionnaire was tested using the Cronbach's alpha coefficient [47], with an acceptable score of >0.9.

Data Collection

The instrument was ready in June 2016; hence the decision to collect the data in July. The week of Monday 18th to Friday 22nd July 2016 was randomly selected for data collection. The instrument was self-administered. Four hundred and seventy-nine nurses participated in this phase of the study, from all the units except the unit that was involved in pre-testing of the instrument.

Data Analysis

Epi Info™ 7 (version 3, 3/21/2016), a statistical package for public health professionals, was used for data entry and analysis.
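Two of the statistics used in this phase can be sketched for illustration: the Cronbach's alpha used to accept the instrument (described above) and the kind of Chi-square comparison between senior and junior nurses reported below. The Python sketch that follows is an aside, not the study's actual Epi Info workflow, and all scores and counts in it are invented.

```python
import numpy as np
from scipy.stats import chi2_contingency

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents x n_items) matrix of Likert scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                              # number of Likert items
    item_vars = items.var(axis=0, ddof=1)           # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)       # variance of respondents' totals
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical pre-test: 10 respondents x 5 Likert items scored 1-5. The study
# accepted its instrument only because alpha exceeded 0.9.
rng = np.random.default_rng(0)
tendency = rng.integers(1, 6, size=(10, 1))         # each respondent's general leaning
scores = np.clip(tendency + rng.integers(-1, 2, size=(10, 5)), 1, 5)
print(f"Cronbach's alpha = {cronbach_alpha(scores):.2f}")

# Chi-square between senior and junior nurses on one Likert item. The counts in
# this 2 x 5 table are invented purely to show the mechanics of the test.
table = np.array([[10,  8,  6,  55,  45],           # senior: SD, D, N, A, SA
                  [25, 20, 17, 150, 125]])          # junior
chi2, p, dof, _ = chi2_contingency(table)
print(f"chi-square = {chi2:.2f}, dof = {dof}, p = {p:.3f}")
```

For the 2 x 5 table, the degrees of freedom are (rows - 1) x (columns - 1) = 4, and the test asks whether the senior and junior response distributions differ beyond chance.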
Frequencies, percentages, and the mode were calculated for all the items [48], and the Chi-square was calculated between the senior and junior nurses' responses.

Ethical Consideration

Ethical approval to conduct the study was granted by the Ethical Committee of the Federal Medical Centre Owerri vide FMC/OW/HREC/123. All the participants were provided with adequate information on the purpose of the study. Personal contact details of BOA and open access were provided to all nurses in the hospital. Confidentiality was ensured throughout the study and afterwards.

Description of the Respondents

Out of the 479 instruments retrieved from the participants, 461 (96.2%) were found duly completed and usable. The distribution of the nurses by rank and length of service is presented in Table 2. The oldest nurse was aged 59 years and the youngest 25 years; the mean age was 39.22 years and the mode, 40 years. Table 3 shows the opinion of the nurses on the causes of the industrial action, arranged from the cause with the highest frequency to the lowest. Review of the qualitative data on Disparity in salary of nurses in the hospital and nurses in similar hospitals, the first cause of the crisis as shown in Table 3, revealed that the nurses compared themselves with their colleagues in similar institutions in terms of placement on salary grade level at appointment and after promotion, as well as payment of some allowances, like teaching and call duty allowances. Highhandedness, the second highest cause of the crisis, was reportedly related to incidents of intimidation and victimization by the Medical Director in charge of the hospital. The unpaid Arrears included promotion, relativity, and skipping arrears, while the unpaid Allowances indicated by the nurses included rural, teaching, uniform, and course allowances. Concerning the Denial of opportunity for training, nurses remarked that for more than five years they had not been allowed to go for further studies and were not supported for professional development activities. Nurses who ventured to do part-time programmes were fished out and directed to terminate such programmes or face disciplinary actions. Stagnation of CNOs and other nurses was one of the first ten causes of the crisis. According to the qualitative reports, the last promotion of nurses to the Assistant Director (AD) level was in 2010, in spite of the fact that there were qualified and "promotion matured" nurses. With the exit of the last AD by statutory retirement, the most senior nurse in the hospital was a Chief Nursing Officer, whereas other clinical departments were headed by Directors, Deputy Directors, or at least an Assistant Director or their equivalents. Nurses therefore constituted the only core health professional group in the hospital without a head in the directorate cadre. There were at least eighteen Assistant Director (AD) positions which needed to be filled as at the time of the industrial action in 2015. There were complaints from some nurses on the Irregular payment of salaries and emoluments (errors on IPPIS). Some had 1-6 months' salaries unpaid (skipped for no reason) in 2014 and 2015, despite repeated complaints to the Accounts Department of the hospital and promises that the errors would be corrected. Some felt victimized. The nurses indicated that Due Process was not followed for the PPP (Public Private Partnership) programme proposed by the Medical Director, and similar ventures in the past had nothing positive to show for their existence.
The Lack of incentive/motivation referred to the denial of call food to nurses on call and the lack of call rooms for deserving nurses, while other perioperative workers were adequately catered for. Nurses reported instances when it was announced in the suite that they were not entitled to such privileges. This was demoralizing and provoking. The Chi-square suggested a difference in the responses of the senior and junior nurses only on Financial Misappropriation by Management (senior nurses 93 (75%) and junior nurses 264 (78.34%); p-value 0.025).

Why the Hospital Was Shut Down

The nurses opined that the hospital was shut down because the investigation panel did not present the facts found (369; 80.05%), because the crisis was serious (341; 73.97%), and because the workers preferred the outright removal of the MD to the Minister of Health's option of bringing back the MD who had been asked to step aside (334; 72.45%). Less than half of the nurses felt the shutdown was to re-organize the hospital (206; 44.69%) or to redefine the terms of engagement of staff (157; 33.97%).

Parties Involved in the 2015 Protracted Hospital Crisis

The people or groups identified as being involved in the crisis were the staff (…).

Discussion

Causes of Industrial Actions

All the sixteen causes of the 2015 strikes identified by the nurses (Table 3) fell within the psychosocial environment of the WHO healthy workplace framework. The psychosocial work environment includes the organization of work and the organizational culture; the attitudes, values, beliefs, and practices that are demonstrated on a daily basis in the enterprise/organization, and which affect the mental and physical well-being of employees [33]. The causes reported by the nurses were in line with findings in previous studies presented in the introduction. Similarly, resident doctors in the hospital, whose perspective of the strike was examined at the same time this study was carried out, indicated the causes of industrial crisis in the hospital as administrative lapses, poor welfare packages, lack of sponsorship for training, and delayed payment of salaries [3]. However, the lack of job security reported by the nurses was not related to mergers, acquisitions, reorganizations, or the labour market/economy, as opined by Burton [33]; rather, it was related to highhandedness and victimization.

Why the Hospital Was Shut Down

The eventual shutdown of the hospital was largely attributed to the inability of the investigation panel to present an unbiased report. Expectedly, the panel was appropriately constituted, with representatives of both the government and workers' unions as members. However, because corruption is a cause of industrial crisis [23], it is essential for such government panels to be both responsible and truthful [25]. The shutdown was also considered to be due to the seriousness of the crisis; more so when the position of the Minister was not acceptable to the workers, and none of the mediating traditional and government representatives could prevail on the workers to suspend their strike. Interventions should be based on an unbiased assessment of the situation that led to the industrial action, and arbiters must not be seen or suspected to be biased.

The Involved Parties

In identifying the parties involved in the industrial crisis, the nurses listed the federal government, her agencies, and their representatives (panel of enquiry and the Minister) separately.
Similarly, in the hospital, the Management of the hospital and the Medical Director were identified separately, as well as the members of staff. This suggests recognition of the difference between the office of those in positions of authority as government representatives and their personal actions. Rules and regulations guide operations in offices; however, human beings and workplace circumstances are not always the same, hence the importance of individual skills and attributes in management.

What Involved Parties Should Not Have Done

Delayed intervention is not advisable during industrial crises, particularly health sector crises. Crisis managers should be appropriately trained to be proactive [28]. Furthermore, the failure of government to keep her promises, which the nurses disapproved of, was reported by Gyamfi [20] as one of the causes of industrial crisis. The name-calling and blame game reported were indicated as a post-crisis feature in Smith's model [49], and carry the risk of worsening the situation [9]. Although the Boards of health parastatals represent the Minister (government), their power over the chief executives is limited [50]; little surprise, therefore, that the intervention of the Board in the crisis was ineffective. Prompt and definite intervention by the supervising ministry and government is therefore critical. The reaction of the workers to the alleged false report by the investigation panel worsened the crisis and was identified as being responsible for the shutdown. An untrue report has the potential to misdirect intervention towards an ineffective outcome. As discussed earlier, the character of individual leaders varies and plays out in times of crisis. This could have accounted for the Acting Medical Director's response under pressure. Survival is a fundamental need and would naturally be attended to first, before higher-order needs; hence the acting chief executive's choice of handing over rather than holding on to the position when threatened. Contrary to the opinion of government that the black uniform was occultic, the majority of the nurses saw nothing wrong with the workers wearing black uniforms during the protest. Considering Smith's model, making an issue out of the protesters' attire is part of the crisis of legitimization phase, during which efforts are made by parties to restore external confidence. Issues critical to the resolution of crisis should be the focus of intervention, and not distracting elements like the attire worn by protesters; such distractions could engender prejudice. Only one-fifth of the nurses opined that staff should not have refused to dialogue with some mediators. Workers are a part of the community in which they work and identify with the sociocultural climate of the environment. Respect for socio-political leaders, including traditional rulers, is part of Nigerian culture. Aggrieved workers' refusal of intervention by such people could be capitalized upon by government to push her crisis legitimization [49] agenda and pitch the workers against the public, who no longer enjoyed access to health services. Dialogue remains a key strategy in crisis intervention. Oleribe et al. [18] documented poor health care leadership and management as the commonest and most important cause of industrial action in the Nigerian health sector. Therefore, every mediator, no matter how highly placed, or whatever relationship he/she has with leaders of the organization, must be objective about the facts of the crisis and should not take sides.
A workers' union, though made up of people, is considered powerful and greater than its individual members. It is subject to labour laws and does not respond to socio-political and traditional sentiments, unless as a strategy to pave the way for peaceful resolution, and not out of prejudice.

What Involved Parties Should Have Done

What the involved parties should have done could be classified into two categories, relating to the pre- and intra-crisis culture and disposition of the organization and the workers [49]. Pre-crisis, employers should be proactive, knowing that crisis is possible even when there is no obvious threat [7], and put workable policies in place to avoid strike actions and their consequences [23]. Findings from the study show that it is important that a healthy relationship is created and promoted between the management and the members of staff. Collaboration between the management and staff is a feature of a healthy workplace [33]. Management plans and decisions should be discussed with the workers, and not heard first by workers outside the organization or through the social media. In view of the fact that salary-related issues feature in many industrial crises, government should agree to what is doable and keep to the terms of agreement. Where implementation is problematic, workers should be appropriately informed [3], and more importantly, they must be part of every arrangement for a speedy resolution of the problem. Anarchy is usually the result of noncompliance with extant regulations. The Public Service Rules (PSR) should be applied in all government establishments to prevent victimization, nepotism, and abuse of "power". Poverty increases vulnerability [51]. The high rate of unemployment and poverty in the country increases the desperation of workers to keep whatever means of livelihood is available to them, even when administrators disregard the provisions of the PSR and enforce personal rules, causing fear and a sense of job insecurity among workers. Noncompliance with the provisions of the PSR promotes highhandedness, victimization, and nepotism, as indicated in this study, leading to industrial disharmony. If the Boards represent the parent ministry at the facility level, they should be adequately empowered to take important decisions binding on members of staff at the highest management level in the facility, particularly during crises. Otherwise, in view of the usual bureaucratic processes, the government, Boards, and management must devise strategies for quick intervention to ensure peace and order. Although the nurses did not include factors in the physical work environment as causes of the industrial action, these were identified as things that stakeholders should have attended to prior to the industrial crisis, such as the provision of adequate instruments and equipment, as well as a structurally good working environment.

How to Prevent Reoccurrence of the Strike Action and Shutdown of the Hospital: Application of the WHO Healthy Workplace Model

Considering the grave consequences suffered by the public in 2015 and 2016, a critical part of this project was suggesting strategies to prevent reoccurrence of the industrial action and shutdown of the hospital. The hierarchy of control of workplace hazards described by Burton [33] is to eliminate the hazard if possible or modify the hazard at the source; lessen the impact of the hazard on the worker; or help the worker protect him or herself from the effects of the hazard.
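To make this organizing scheme concrete before it is applied in the subsections that follow, the short, illustrative Python sketch below (not part of the study) restates how the hazards identified by the nurses line up with the three levels of Burton's hierarchy; the groupings mirror the paper's own discussion.

```python
from enum import Enum

class Control(Enum):
    """Burton's hierarchy of controls, as applied in this paper's discussion."""
    ELIMINATE_OR_MODIFY_AT_SOURCE = 1
    LESSEN_IMPACT_ON_WORKER = 2
    HELP_WORKER_PROTECT_SELF = 3

# Hazards named by the nurses, mapped to the level at which the discussion
# addresses them (the mapping follows the subsections below):
strategy = {
    "highhandedness / victimization": Control.ELIMINATE_OR_MODIFY_AT_SOURCE,
    "non-application of Public Service Rules": Control.ELIMINATE_OR_MODIFY_AT_SOURCE,
    "poor infrastructure and equipment": Control.LESSEN_IMPACT_ON_WORKER,
    "unfilled vacancies / short staffing": Control.LESSEN_IMPACT_ON_WORKER,
    "inadequate information on promotion criteria": Control.HELP_WORKER_PROTECT_SELF,
    "poor knowledge of labour laws": Control.HELP_WORKER_PROTECT_SELF,
}

for hazard, level in strategy.items():
    print(f"{level.name:32s} <- {hazard}")
```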
The identified causes of the strike action and shutdown of the facility could be considered the hazards. Applying Burton's [33] hierarchy implies that some of the causes must be eliminated, some modified, and the workers would need to be prepared to adapt to those which cannot be eliminated or modified.

Elimination of the Hazard or Modifying It at Source

This strategy, according to Burton [33], could involve removing the manager, or retraining the managers/supervisors in communication and leadership skills; enforcing zero tolerance for harassment, bullying, or discrimination in the workplace; and applying all legal standards and laws regarding workplace conditions, or putting policies in place to supplement the laws [33]. These activities are in line with some of the suggestions of the workers, such as removal of the MD, not having another nurse leader with the unacceptable character of the former, and avoidance of highhandedness. Oleribe et al. [18] reported poor health care leadership and management as the commonest and most important cause of industrial action in the Nigerian health sector; yet, the government is usually reluctant to replace such managers. In the facility under study, the reinstatement of the MD after an initial stay-away order led to further crisis and an uneasy calm in the facility until an acting MD, and eventually a substantive MD, was appointed. Government needs to be more objective, fearless, and trustworthy [25]. Strictly applying all legal standards and laws regarding workplace conditions [33], as contained in the Public Service Rules, financial regulations, and government circulars, could "modify the hazard at source" and prevent reoccurrence of the industrial crisis. Some issues identified by the nurses for attention in order to prevent reoccurrence had been adequately provided for in government documents. These relate to promotions, placement on salary at appointment and after upward movement, allowances and other entitlements, and support for maximum professional development activities. These are implementable government policies guiding activities in the public service [16,23]. Provisions of the regulations are binding, and problems arising therefrom must be promptly discussed and resolved to avert crisis [3]. Following the evidence-based reports of the Interim Management Committee, which confirmed some of the "hazards" reported by the nurses, promotion exercises were conducted and workers were released for seminars and in-service training programmes. Percentages of outstanding allowances that could not be fully paid were agreed upon by management and workers, and were settled accordingly. This could have been done ab initio and the crisis prevented.

Lessen the Impact of the Hazard on the Worker

Provision of a structurally good working environment and adequate equipment was one of the expectations of the nurses before the crisis. The weak state of the nation's health system, with poor infrastructure, grossly inadequate human and material resources, and poor management of human resources, among others, is acknowledged by the government [52]. The system is generally not enabling; as such, it does not encourage efficient performance by the workers [2,29]. This obvious impact of the poor health system on the workers' performance requires that the government do everything possible to improve the workplace environment steadily, even if slowly.
Government should ensure that equipment purchased is of good quality and usable, and not just locked up while the supposed end users suffer. Every stakeholder must be part of the developmental activities. The rate of exit of nurses from the facility for various reasons was quite high; however, the created vacancies were not adequately filled, leading to a gross shortage of nurses. Poor staffing has been associated with nurses' dissatisfaction, burnout, and poor quality of nursing care [53]. Replacement of exited staff does not increase government spending; hence the need for a standing personnel replacement policy to reduce personnel dissatisfaction, which is one of the promoters of industrial crisis.

Help the Worker Protect Him or Herself from Effects of the Hazard

According to Burton [33], this healthy workplace strategy involves raising the awareness of workers and training them on issues such as stress management. Apart from the fact that some nurses who were matured/qualified for promotion were stagnated, there were others who were waiting for promotion, not knowing they could not be promoted to the directorate level because the nature of the first degree they had was not acceptable. They lacked adequate information about the deficiency, and about the provisions by the regulatory agency and tertiary institutions in the country on how to resolve the problem. Knowledge strengthens and lightens hope, and is thus protective against the mental and emotional stress that could aggravate crisis. Interestingly, as soon as the necessary information was provided and due resource linking effected, the affected nurses seized the opportunity to do the needful. Part of the protection strategy is helping nurses to balance the demands of professional responsibility to consumers and the exercise of their rights to industrial action. Although the "no work, no pay" rule is usually not enforced during strike actions, studies have shown that many workers do not want to participate; not all workers support strike actions [18], because of the rationale [19] and the obvious consequences, including irredeemable loss of lives [26]. As spiritual and emotional beings, some nurses want to boycott strikes because of their grave consequences, but they are usually coerced into participating. There is a need for training on promoting industrial harmony [33], and on bearing the consequences of illegal strike actions. Such consequences may include loss of pay, applied by employers, though the organized labour unions may choose to pay for the days workers are pulled out of work [5,6,17].

Implication for Nursing Services at FMC Owerri

In view of the charge to "Get the nurses to go back to work", after the first phase of data collection there was active engagement of the nurse managers in activities to lessen the impact of the strike and protect the nurses. Burton [33] emphasized the importance of knowledge transfer and of components of action research in the process of creating a healthy workplace; hence the utilization of the preliminary findings of the qualitative phase of the study in starting off some of the intervention measures reported in this study and their outcomes. It is expected that, having participated in the positive process of change, the nurse managers would see themselves as stakeholders and would be able to confidently and effectively participate in the management of the facility without fear of victimization.
Conclusion

Industrial actions in the health sector deny the public their right to quality health care and protection from avoidable death. Expecting that aggrieved workers can be coerced to suspend a strike and go back to work, without effective intervention from stakeholders, is a fantasy. The WHO model of the healthy workplace is recommended for all parties involved in workplace crisis resolution, as a guide to situation analysis and appropriate intervention. All hands must be on deck to prevent strikes by promoting healthy workplaces in the health sector.
Research on Tibetan Traditional Ecological Culture and Its Significance in the New Era

The Tibetan people of our country have lived for generations on the Qinghai-Tibet Plateau, where the ecological environment is extremely special and fragile. The relationship between man and nature has always been an unavoidable topic for them. Over the long years, they have formed an ecological and cultural outlook with both national and contemporary characteristics to guide their production and life, and for thousands of years they have basically realized the harmonious development of man and nature. Traditional Tibetan ecological culture can be roughly divided into a material level and a spiritual level, and these customary codes of conduct have important practical significance for the sustainable development and ecological civilization construction of Tibetan areas in the new era.

Introduction

The Seventh Central Committee Conference on Tibet Work clearly pointed out that "we must adhere to the viewpoint of ecological protection first," and that "protecting the ecology of the Qinghai-Tibet Plateau is the greatest contribution to the survival and development of the Chinese nation." Today, when the party and government attach great importance to the construction of ecological civilization, Tibet is striving to build a national ecological security barrier, and it is particularly important to build a national ecological civilization highland. Traditional Tibetan ecological culture is embodied in all aspects of production methods, lifestyles, religious beliefs, primitive worship, myths and epics, and festival customs. The Tibetan people treat and protect nature with awe and have integrated with the land that nurtures them, protecting the ecological environment of the Qinghai-Tibet Plateau. The unique ecological culture formed in practice by the Tibetan people from ancient times to the present thus shows an enduring and tenacious vitality. Inheriting and carrying forward traditional Tibetan ecological culture and giving it significance for our time is an inevitable requirement of the construction of ecological civilization in the new era.

Mode of production

The interrelationship between people and nature formed by the Tibetans in the production process is their mode of production. Observing objective laws while fully considering people's needs is the optimal path to harmonious coexistence between man and nature. For human beings to survive and develop, and to carry out other social activities, they must first have the material means of living necessary to solve basic problems such as clothing, food, housing, and transportation. These material means of living can only be acquired through production and labor, and human production and labor must be carried out in a certain way. Without a certain mode of production, human beings cannot obtain the material means of life needed for their survival, prosperity, and development, and survival and development would be out of the question. The choice of each mode of production is inseparable from the nation's level of productivity development and its geographical environment. On the Qinghai-Tibet Plateau, with its alpine climate and complex geographical environment, production conditions are relatively weak and people have limited choices of production methods. They are therefore highly dependent on the environment, and nomadism became their best choice.
The vast pastures of the Qinghai-Tibet Plateau provide the prerequisites for the development of Tibetan nomadism. Nomadism is the Tibetan people's response, under specific natural ecological conditions, to the natural environment and the state of their existence; it is an adjustment of their ethnic culture to suit the unique natural and social environment, from which an ecological mode of production consonant with the nation's ecological and cultural thought was derived [1]. The nomadic mode of production has formed a benign ecological cycle among man, nature, and livestock, with livestock as the intermediary adjusting the relationship between man and nature. The three are interdependent, which promotes the development of the mode of production and, at the same time, a virtuous ecological circle. It is not difficult to find that the nomadic production method of the Tibetan people contains a simple and rich ecological concept. Following objective laws and conforming to the times, they have multiplied and thrived in a harsh living environment, formed their own ecological culture through the accumulation of history, and realized the harmonious coexistence of man and nature.

Life-style

This article discusses the ecological culture of the Tibetan people in the narrow sense of lifestyle, covering clothing, food, housing, and transportation. The special ecological environment determines the nomadic mode of production, and such a mode of production in turn determines the lifestyle of the Tibetan people. Mobile living is their most important arrangement when nomadic; so why not settle down directly rather than adopt the seemingly more troublesome mobile living? Such a form of mobile grazing is conducive to herders' grazing and allows livestock to eat better natural feed. The deeper reason is to maintain the ecological balance of the grassland, reduce its overuse, prevent its degradation, and ensure that stock always remains greater than consumption. A lifestyle bred by a special ecological environment is bound to depend closely on the various conditions given by that environment; the clothing and diet of the Tibetan people show this clearly. On the Qinghai-Tibet Plateau, three to four kilometers above sea level, where the temperature is low, people ask nature for animal fur to keep out the cold. For another example, nomadic life inevitably produces a lot of cow dung, so people use it, on the one hand, as fertilizer to return to the pasture and, on the other, as fuel for cooking and heating. The dietary habits of the Tibetan people formed a lifestyle of "relying on the mountains to live off the mountains and on the water to live off the water" in an era of relatively low productivity. It is precisely because of this lifestyle that Tibetans have a clear understanding of most of the materials they can use; these materials are derived from nature. To maintain this lifestyle, one must obtain resources in a controlled manner and, more importantly, protect the ecological environment. All of this reflects the simple ecological culture of the Tibetan people. A large number of their daily necessities are derived from nature, and very few items are obtained from outside the environment.
Perhaps these lifestyles are themselves the link between humans and nature; the protection of the ecological environment has long been rooted in the people's hearts and is reflected in all aspects of their way of life. In summary, the Tibetan people's lifestyle takes the harmonious coexistence of man and nature as its starting point: they have chosen a frugal and simple way of living, practice their ecological concepts through practical actions, and believe in maintaining a pure heart in daily life, having fewer desires and more peace, and loving nature while accepting its gifts.

Religious influence

In Tibetan areas, the most influential and far-reaching faiths are undoubtedly the Bon religion and Tibetan Buddhism, which run through the spiritual beliefs of the entire Tibetan nation. Religious teachings not only contain a rich ecological culture but also permeate every bit of Tibetan life and shape the people's spiritual concepts. More importantly, after the precipitation of time, these primitive ecological cultures have become deeply rooted in the hearts of Tibetans and turned into conscious action. The Bon religion is a primitive religion native to Tibet. It had a history of development of nearly a thousand years before Buddhism was introduced, was long in a dominant position, and still affects the Tibetan people. The Bon belief in animism, and the worship of sacred mountains and lakes derived from it, have played an extremely important role in the formation of Tibetan ecological culture and Tibetan psychology [2]. In terms of sacrifices, the Bon religion pays particular attention to making people worship the gods of all quarters with a heart of awe, working like rain that "slips into the night with the wind, moistening all things silently," so that people consciously revere and protect nature. With the introduction of Buddhism, the Bon religion combined with it to form a distinctive Tibetan Buddhism, whose ecological culture bears a strong religious imprint: the equality of all beings and the interdependent existence of all things are the main ecological concepts it reflects. Tibetan Buddhism is deeply influenced by the concept of karmic reincarnation. Its adherents hold the principle of dependent origination, "because this exists, that exists; because this arises, that arises; because this does not exist, that does not exist; because this ceases, that ceases," to be the condition for the existence of all things in the world. All things are in the cycle of karmic rebirth of one living system; they influence each other and achieve each other, and it is people's responsibility and obligation to protect the natural ecology and live in harmony with it. Taking the life view of non-killing as an example, the Tibetans believe that the concepts of "saving all living beings" and "saving a life is better than building a seven-storey pagoda" express not only the public's attitude towards life and nature but also a kind of connection between people and the Buddha. The Buddha spoke of all the worlds of the ten directions, the six destinies, the four modes of birth, and all their kinds. The doctrine of the "four births" explains that there are four modes of birth in reincarnation: womb-born, egg-born, moisture-born, and transformation-born. This view of equality suffices to show that everything in the natural world has its own equal right to life, just like human beings; all living beings have their own lives and their own feelings about the world.
In the Buddha's great compassion, all things are equal; no side is favored on account of the difference between humans, animals, or plants. These ideas have had a subtle impact on the Tibetan people's concept of harmonious coexistence with nature, giving them an ecological vision ahead of its time in dealing with the relationship between man and nature.

Nature worship

The traditional ecological culture of the Tibetans is also a kind of green culture, embodying the concept of natural harmony carried in traditional Tibetan ecological thought. It involves not only the maintenance of natural balance and the rational use of resources, but also the nature worship unique to Tibetan culture. In endowing nature with sacredness, the Tibetans formed a sense of reverence for it. This concept of revering, depending on, and protecting nature objectively and effectively protects the fragile plateau ecological environment, and it also built the concept of natural ecology in traditional Tibetan culture. Tibetan nature worship personifies and worships natural phenomena, natural objects, and natural forces. In the primitive period, people could not reasonably explain the various natural phenomena around them, so they turned to the gods, believing that all things in nature, such as the sun, moon, and stars, wind and rain, thunder and lightning, mountains, rivers, lakes, and seas, are perceptive and spiritual. They worship nature with absolute awe, love and respect it in the practice of production and life, and resolutely prohibit any desecration of it. The worship of cattle is a prominent manifestation of nature worship, and cattle heads inscribed with scriptures can be seen everywhere on the grasslands in Tibetan areas. Tibetans believe that cattle are the species they depend on for survival and the medium through which they communicate with heaven and earth, so they thank the cattle for their contributions by deifying them. They enshrine yak remains in many temples, hoping to ward off evil and exorcise evil spirits, and pray for the harmonious coexistence of man and nature. For another example, the worship of water is a pure form of nature worship. The lakes dotting the Qinghai-Tibet Plateau hold abundant water resources; water is closely related to the lives of Tibetans, and their worship of it is self-evident. Those who pollute or waste water resources are regarded as blaspheming the god of water, will be punished by the god of water, and will suffer from "dragon disease" (of the four hundred and twenty-four diseases in the world). Therefore, Tibetans generally do not eat aquatic animals, do not throw dirt into the water, and always keep water sources and their surroundings clean and hygienic [3]. Nature worship is a relatively primitive belief model. It is a sincere and simple response to a fragile ecological environment and precious resources, and it plays an inestimable role in coordinating the relationship between man and nature and protecting the ecology.

Mythical epic

Mythology, as stories expressing mankind's worship of, struggle with, and pursuit of nature, originally appeared as the collective oral creation of the people.
As we mentioned above, people were unable to explain many natural phenomena in ancient times, and myths became one of the ways for people to understand the world. Influenced by myths and stories, the ecological consciousness of the Tibetan people came to center on the theory of "natural generation." Out of people's hazy understanding of nature grew the unique ecological and cultural mentality of the Tibetan people: nature is the foundation of all things, and one must always maintain a sense of awe toward it [4]. The "Sparta Niu Song" (a song of the slaughtering of the ox) describes how various parts of the ox's body were "transformed" into natural features such as mountains, rivers, heaven, and earth. Many such myths are also widely recorded in historical works; the "Ming Jian of the Lineage of Tibetan Kings" records that Tibet was originally a vast ocean, and that the god of Gongqu Mountain led the sea water into the cave of "Gongjiqula," whereupon the sea turned into mulberry fields. These myths all reflect the primitive cultural mentality and ecological consciousness of the Tibetan people. As one of the three great heroic epics of China, "Gesar" celebrates the heroic achievements of the Tibetan people in fighting against various evil forces and invaders. As an "encyclopedia" of ancient Tibetan society, its content reflects the close relationship between man and nature from all aspects, including the production methods and lifestyles, religious beliefs, and nature worship mentioned above. Whether myth or epic, they convey the ecological and cultural concepts of the Tibetan people through vivid storylines and narrative description, showing the image of a nation that loves its homeland and cares for the ecology. In the eyes of the Tibetan people, nature has given us life and the means of living; therefore, human life, death, reward, and punishment must follow the laws of nature. Imagining and describing everything around them through myths was determined by the Tibetan people's low level of productivity at the time. Although idealistic, it has played a major role in the protection of the plateau's ecological environment.

The Time Significance of Tibetan Traditional Ecological Culture

During his inspection of Tibet, General Secretary Xi Jinping emphasized the need to firmly establish the concept that green waters and green mountains are gold and silver mountains, and that ice and snow, too, are gold and silver mountains; to maintain strategic determination; to improve the level of ecological environment management; and to promote the protection of biodiversity on the Qinghai-Tibet Plateau. He called for taking the path of ecological priority and green development, striving to build a modernization in which man and nature coexist harmoniously, and earnestly protecting the Earth's "third pole" ecology. The core content of the ecological concept of the new era is the sustainable development strategy guided by Xi Jinping Thought on Socialism with Chinese Characteristics for a New Era.
This is not only a requirement of economic growth and social progress, but also a rational reflection on the progress of human civilization. The thinking in traditional Tibetan ecological culture about how humans can live in harmony with nature, and its dialectical weighing of long-term interests, coincide with the ecological thinking of the new era. Traditional Tibetan ecological culture contains a great deal of the Tibetans' life wisdom, which has important reference significance for the construction of ecological civilization in the new era. On the one hand, the enrichment and development of traditional Tibetan ecological culture is of great significance not only for expanding the breadth and depth of ecological disciplines and providing multiple perspectives for studying the ecology of the Qinghai-Tibet Plateau, but also for providing useful ideas for current ecological and cultural education practice, organically combining theory and practice to shape the ecological spiritual home of contemporary people. On the other hand, on the basis of a deep understanding of the existing shortcomings in our country's construction of ecological civilization, digging deeper into traditional Tibetan ecological culture and exploring its beneficial ingredients will help build an ecological civilization system for the new era that is grounded in the present and looks to the future, providing a beneficial reference for ecological civilization construction in Tibetan areas and even the whole country.
The Impact of the COVID-19 Pandemic on the Care of Women Experiencing Abortion in a University Hospital in Brazil

Objective: To evaluate the impact of the coronavirus disease 2019 (COVID-19) pandemic on the care of patients with miscarriage and legal termination of pregnancy in a university hospital in Brazil.

Methods: A cross-sectional study of women admitted for abortion due to any cause at Hospital da Mulher Prof. Dr. J. A. Pinotti of Universidade Estadual de Campinas (UNICAMP), Brazil, between July 2017 and September 2021. Dependent variables were abortion-related complications and legal interruption of pregnancy. Independent variables were the prepandemic period (until February 2020) and the pandemic period (from March 2020). The Cochran-Armitage test, Chi-squared test, Mann-Whitney test, and multiple logistic regression were used for statistical analysis.

Results: Five hundred sixty-one women were included, 376 during the prepandemic period and 185 in the pandemic period. Most patients during the pandemic were single, without comorbidities, had unplanned pregnancies, and chose to initiate a contraceptive method after hospital discharge. There was no significant tendency toward changes in the number of legal interruptions or complications. Complications were associated with failure of the contraceptive method (odds ratio [OR] 2.44; 95% confidence interval [CI] 1.23–4.84), gestational age (OR 1.126; 95% CI 1.039–1.219), and lack of preparation of the uterine cervix with misoprostol (OR 1.99; 95% CI 1.01–3.96).

Conclusion: There were no significant differences in duration of symptoms or transportation to the hospital, nor a significant tendency toward a reduced number of legal abortions or increased complications. The patients' profile probably reflects the impact of the pandemic on family planning.

Introduction

The coronavirus disease 2019 (COVID-19) pandemic has affected health services around the world, requiring prioritization by professional teams and hospitals to meet the growing demand for cases complicated by the infection. Thus, the health system was structured to care for infected patients, limiting surgical care to emergency procedures, leaving surgical centers available to be transformed into intensive care units, and saving personal protective equipment [1]. Likewise, the general population was affected, losing track of its chronic diseases and of measures to promote health and prevent illness and injury. Mandatory home isolation was instituted in several countries, and people were advised to seek medical attention only in urgent and emergency cases. These changes affected when patients seek and how they receive their medical care [2]. Women in abortion situations had greater difficulty in accessing health services: the basic health networks restricted their care; public transport reduced its fleet; fear and the need for home isolation left the population without adequate screening, or isolated until the appearance of alarm signs, delaying care, diagnosis, and proper management; and hospitals reduced the number of available beds, reserving hospitalization only for more critical cases, among other socioeconomic and political factors that have impacted our current global health condition [3].
With this in mind, in March 2020 the American College of Surgeons (ACS) recommended delaying all nonessential invasive procedures, reinforcing, however, the importance of not delaying gynecologic emergency procedures, including those for ectopic pregnancy and miscarriage [4]. Likewise, the American College of Obstetricians and Gynecologists (ACOG) stated that care for women in abortion situations should be guaranteed by community- and hospital-based clinicians, as a delay of days or weeks may sometimes increase the risks or potentially make care completely inaccessible [5]. The COVID-19 pandemic triggered several changes in the flow of care for women experiencing an abortion. In the context outside the institution, we supposed that the reduction in the number of consultations available in basic health units and the reduced availability of public transport would delay the care, diagnosis, and assistance of these women, resulting in a longer time experiencing symptoms and getting to the hospital. In the internal context of the institution, we can mention a reduction in the number of spaces available for hospitalization due to the need for distance between beds, and the reduced availability of intensive care unit (ICU) beds, which could limit the access of these patients to tertiary assistance. This scenario raises the following research question: What influence did the changes in routine resulting from the COVID-19 pandemic have on the quality of care for women experiencing abortion in a university hospital? The aim of this study was to evaluate the impact of the COVID-19 pandemic on the care of patients with miscarriage and legal termination of pregnancy in a university hospital in Brazil.

Methods

The multicentric network MUSA (Women in Abortion Situations) was created by the Latin American Center for Perinatology (CLAP, in the Portuguese acronym) to improve care for women undergoing any kind of pregnancy loss during the first half of pregnancy (spontaneous or induced) in Latin America and the Caribbean [6]. It includes several hospitals, called sentinel centers, which periodically send their data regarding the pregnancy cycle for registration in the Perinatal Computerized System (SIP, in the Portuguese acronym), a software developed by CLAP that helps health facilities register data related to pregnancy and epidemiologic monitoring. Our institution, the University of Campinas Women's Hospital (UNICAMP), is a tertiary referral hospital for cases of complications related to pregnancy in municipalities in the region, with an average of 250 births and 20 cases of first-trimester pregnancy loss per month. Our hospital has been a sentinel institution of the MUSA network since July 2017, prospectively collecting data which have already been used in other cross-sectional studies. However, this is the first work performed during the COVID-19 pandemic. The hospital follows the laws of Brazil regarding the legal termination of pregnancy, under which abortion is allowed only in cases of risk of maternal death, sexual violence, and fetal anencephaly [7–19]. The sentinel centers of the MUSA network regularly provide information on maternal morbidity in early pregnancy loss, termination methods for uterine evacuation, the incidence of complications related to pregnancy termination, the incidence of preoperative antibiotic use, and the prescription of contraception before hospital discharge.
Through SIP, it is possible to carry out epidemiological monitoring and comparisons between different sentinel centers over time. Representatives from each sentinel center also hold regular online meetings to discuss the data collected, conduct scientific discussions on the topic of women's health in abortion situations, and encourage good clinical practices for safe abortion. This cross-sectional study with epidemiological surveillance data was conducted between July 2017 and September 2021. All cases from the SIP-abortion database from July 1st, 2017, to September 30th, 2021, were included. The inclusion criteria were women admitted to our hospital for spontaneous pregnancy loss (inevitable miscarriage, or complete, incomplete, or missed abortion) or legal interruption of pregnancy, due to any cause and of any age group. The exclusion criteria were women with bleeding during pregnancy who did not have a confirmed abortion and women with ectopic or molar pregnancies. The research ethics committee of our institute approved this study (approval number CAAE: 56933116.0.1001.5404). The dependent variables evaluated were: abortion-related complications (infection, excessive bleeding, and intraoperative complications, such as post-spinal anesthesia headache, disseminated intravascular coagulation, reapproach, and allergic reaction) and legal interruption of pregnancy. The control variables were the patients' clinical and sociodemographic characteristics, such as age, education, marital status, living status, health records, number of pregnancies, number of births, number of abortions, body mass index (BMI), active smoking, illegal drug use, alcohol use, planned pregnancy, pregnancy resulting from contraceptive failure, date of admission to the hospital, whether it was a medically induced abortion for legal reasons, gestational age, presence of any complications, and admission data, as well as duration of transportation and of symptoms. Initially, a descriptive analysis of the data was performed. For continuous variables, the mean, standard deviation, median, minimum, maximum, and quartiles were calculated. For categorical variables, the relative frequencies were calculated. To assess whether there was a change in the trend in the occurrence of the outcome variables, the Cochran-Armitage trend test was performed. To evaluate the association between abortion-related complications and the independent variables, the Chi-squared or Fisher exact tests were performed for categorical variables, and the Mann-Whitney or Kruskal-Wallis tests for continuous variables. To evaluate the factors independently associated with abortion-related complications, a multiple logistic regression was performed, with "stepwise" selection criteria for variables. The significance level assumed was 5%. The software used for the analyses was the SAS System for Windows, version 9.2 (SAS Institute Inc., Cary, NC, USA). Results During the study period, 561 women in a situation of abortion were included: 376 during the prepandemic period (PrP) and 185 during the pandemic period (PP). From the PrP, 50 women had abortion induced for legal reasons and 326 had other types of abortion. During the PP, 20 patients had legal abortions, and 165 had other types. During the PrP, the mean maternal age was 30.13 years. In the PrP, 60.93% of the patients were married or living in a stable relationship, while 51.65% in the PP were single (p = 0.005). In the PrP, 91.96% of patients did not have comorbidities, compared with 96.74% in the PP (p = 0.031). 
In the PrP, 12.1% of patients declared drinking alcohol, while only 5.41% did in the PP (p = 0.013). A total of 32.71% of pregnancies were planned in the PrP, whereas 24.32% were planned in the PP (p = 0.042). In the PrP, most patients (42.36%) were accompanied by their partners, while in the PP, most patients (45.36%) came alone (p = 0.012). In the PrP, most patients (62.73%) chose not to initiate a contraceptive method at hospital discharge, while in the PP, 53.01% chose to initiate one (p < 0.001). In the PrP, most uterine-emptying procedures involved medication plus uterine curettage (41.49%), while in the PP, 40.44% underwent manual intrauterine aspiration (p < 0.001) (►Table 2). Since the beginning of the evaluation period, 70 women (12.47%) had undergone legal interruption. We did not observe a significant tendency toward an increase or decrease in the number of legal interruptions (Cochran-Armitage test: Z = -0.28; p = 0.783) (►Fig. 1). Since the beginning of the evaluation period, 31 women (5.53%) had abortion-related complications. Among the complications, the most frequent were: infection, with 13 cases (2.32%), including 8 cases of sepsis (1.43%); excessive bleeding, with 9 cases (1.60%), including 2 cases of hypovolemic shock (0.36%); and other complications, with 6 cases (1.07%), which include post-spinal anesthesia headache, disseminated intravascular coagulation, reapproach, and allergic reaction. We did not observe a significant tendency toward an increase or decrease in the number of complications (Cochran-Armitage test). After analyzing the factors associated with a higher prevalence of complications, considering the PP and PrP as independent variables, we observed that the pandemic period was not associated with a higher occurrence of complications. The factors associated with the occurrence of complications were: failure of the contraceptive method (p = 0.002); no cervical preparation with misoprostol (p = 0.006); type of procedure performed for uterine evacuation, with curettage being the method with the highest number of complications (p = 0.009); maternal age, with a higher number of complications among younger patients (p = 0.031); gestational age, with more complications in more advanced pregnancies (p = 0.010); and duration of symptoms, with more complications associated with longer duration (p = 0.045) (►Tables 3 and 4). In the multiple logistic regression model, the variables significantly related to complications were: failure of the contraceptive method, with a 2.4 times greater risk (odds ratio [OR] 2.44; 95% confidence interval [CI] 1.23-4.84); gestational age, with an increase of 12.6% in risk for every additional week (OR 1.126; 95% CI 1.039-1.219); and lack of uterine cervix preparation with misoprostol, which raised the risk of complications 2.0-fold (OR 1.99; 95% CI 1.01-3.96) (►Table 5).
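As a reproducibility aid, the two key statistical procedures named in the Methods can be sketched in Python rather than the SAS 9.2 system actually used. The sketch below is illustrative only: the file layout and all column names (period, legal_interruption, complication, contraceptive_failure, gestational_age, no_misoprostol) are hypothetical stand-ins for the SIP data, and the stepwise variable selection applied in the original analysis is omitted.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from scipy.stats import norm

def cochran_armitage(events, totals, scores=None):
    # Two-sided Cochran-Armitage trend test for a 2 x k table of counts.
    events = np.asarray(events, dtype=float)
    totals = np.asarray(totals, dtype=float)
    t = (np.arange(len(totals), dtype=float)
         if scores is None else np.asarray(scores, dtype=float))
    p_bar = events.sum() / totals.sum()            # pooled event proportion
    num = np.sum(t * (events - totals * p_bar))    # score-weighted deviations
    var = p_bar * (1.0 - p_bar) * (np.sum(totals * t ** 2)
                                   - np.sum(totals * t) ** 2 / totals.sum())
    z = num / np.sqrt(var)
    return z, 2.0 * norm.sf(abs(z))                # statistic, two-sided p

df = pd.read_csv("abortion_cases.csv")             # hypothetical SIP export

# Trend in legal interruptions across ordered time periods (e.g., months).
counts = df.groupby("period").agg(
    events=("legal_interruption", "sum"),
    totals=("legal_interruption", "size"),
)
z, p = cochran_armitage(counts["events"], counts["totals"])

# Adjusted odds ratios for complications, in the spirit of Table 5.
fit = smf.logit(
    "complication ~ contraceptive_failure + gestational_age + no_misoprostol",
    data=df,
).fit()
ors = pd.DataFrame({
    "OR": np.exp(fit.params),
    "CI_low": np.exp(fit.conf_int()[0]),
    "CI_high": np.exp(fit.conf_int()[1]),
})
print(z, p, ors, sep="\n")

Exponentiating the fitted coefficients yields the adjusted odds ratios: for instance, ln(1.126) ≈ 0.119, so a fitted coefficient of about 0.119 for gestational age corresponds to the reported OR of 1.126, i.e., a 12.6% increase in the odds of complications per additional week of gestation.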
Discussion The COVID-19 pandemic has affected health services around the world and has changed how the general population experienced their diseases and sought health assistance. In relation to women in abortion situations, we supposed that changes outside the institution could hinder the assistance of these women, and that changes inside the institution could limit their access to the hospital. These facts led us to evaluate the impact of the COVID-19 pandemic on the care of patients with miscarriage and legal termination of pregnancy in a university hospital in Brazil. Although we experienced these external changes, we did not observe great differences in the duration of symptoms or in the time of transportation to our hospital. We imagine that, as most primary care services were closed or turned to the care of patients suspected of having COVID-19, the patients probably sought our emergency room as a first form of care; we also assume that access to public transport was maintained in our city and coverage region. However, this is not what we expected, since pandemic-related changes in transportation elsewhere contributed to increased health disparities, hindering access to healthcare for low-income families. 8 Regarding demographic aspects, we found that, during the pandemic, most patients were single, without comorbidities, experiencing abortion as a result of an unplanned pregnancy, and chose to start a contraceptive method at hospital discharge. These findings make us reflect on how the pandemic may have impaired family planning and access to contraceptive methods. We have learned from previous public health emergencies, such as the Ebola outbreak, that the impact of an epidemic on sexual and reproductive health is not a direct consequence of the infection, but an indirect result of strained health care systems, disruptions in care, and redirected resources. 9 Riley et al. 9 estimated that a decline of 10% in the use of short- and long-acting reversible contraceptive methods in low- and middle-income countries due to reduced access would result in 49 million women without family planning support and 15 million unintended pregnancies over a year. We observed an increase in the number of women hospitalized without companions, which might have been influenced by the internal restructuring of our service, as it restricted the number of companions and hospital visits during the pandemic period. However, our hospital guaranteed and prioritized the presence of companions for adolescents, victims of sexual violence, and women with important physical and emotional needs. A new tendency was also observed in our hospital. Most uterine evacuation procedures performed during the pandemic were manual vacuum aspirations (MVAs), compared with the previous tendency of using medication for cervix preparation plus curettage. Since the implementation of the MUSA network in our hospital, it has been possible to generate data to assess trends in our clinical practice patterns; these data were necessary to analyse the safety of abortion practices and to propose improvements in the quality of patient care and overall health outcomes. In 2020, we began an intense process to insert MVA into our care practice, training the technical team, modifying the institutional protocol, and making MVA the priority method of uterine evacuation for abortions up to 12 weeks of gestational age, following the recommendations of the World Health Organization (WHO) and the International Federation of Gynecology and Obstetrics (FIGO). 10 Adherence to the implementation was probably facilitated by the pandemic period, since MVA is a quick and easy procedure with a lower risk of complications; it requires less complex anesthesia and has a rapid recovery, allowing early hospital discharge. 10,11 We feared that external changes in the organization of health services and in people's behavior during the pandemic could restrict women's access to our hospital, resulting in a decrease in the number of legal terminations of pregnancy and an increase in abortion-related complications. 
However, we did not observe this tendency, corroborating the previously mentioned hypothesis that patients sought emergency attendance as a first form of care and that access to our hospital was maintained during the pandemic period. This result differs from national data, which showed that only 55% of the 76 hospitals in our country that provide legal abortions were operating in 2019. 12 Our multivariate analysis showed that the variables significantly related to complications were failure of the contraceptive method, higher gestational age, and no preparation of the uterine cervix with misoprostol. It is known that women using contraceptive methods may not recognize symptoms of pregnancy, resulting in late diagnosis and delay in seeking medical attention, increasing the risk of infection and/or hemorrhage. 13 Also, because of the unplanned pregnancy, women need to face unexpected issues, such as finding transportation or a companion and justifying absence from work. Fear, embarrassment, or stigma are also barriers to seeking care. 16,17 Besides, failure of contraceptive methods might reflect the contraceptive use pattern of our country, in which oral contraceptives and condoms are predominant compared with long-acting reversible contraceptives, such as intrauterine devices. 13 Our study showed that each week of gestational age increased the risk of complications by 12.6%, while another study showed an increase in the number of complications of up to 20% per week. 18 Pregnancies with higher gestational ages mean larger uterine volume, greater amounts of retained products of conception, and possible chorioamnionitis. 13 The main complications include uterine perforation, cervical laceration, hemorrhage, uterine rupture, and infection. 14 Preparing the cervix prior to the procedure reduces this risk to less than 1% of cases. 15 Compared with manual dilation alone, it improves cervical dilation, shortens procedure times, and decreases the risk of intraoperative complications, such as cervical laceration and uterine perforation. 14 This study had some limitations. First, it was a cross-sectional study; thus, a cause-effect relationship could not be established. Furthermore, it was not possible to differentiate provoked abortion from spontaneous abortion, except in cases of legal induction. However, our study was important to evaluate the impact of the COVID-19 pandemic, which caused changes all over the world and could negatively impact women experiencing abortion. We were pleased to find that our patients apparently did not have difficulties accessing our health service, mostly because we maintained and prioritized care for women in abortion situations despite all the reorganization and limitations we experienced internally in the context of the pandemic. Conclusion Our service did not reduce its volume of abortion attendance during the COVID-19 pandemic. Significant differences in the duration of symptoms and transportation to the hospital were not observed, nor was there a tendency toward a reduced number of legal abortions or increased complications. Despite the reorganization of hospital functions due to this public health emergency, we were among the 55% of services still providing legal abortions in our country. Our patients' profiles reflect the impact of the pandemic on sexual and reproductive health. 
This outbreak situation showed us that, in our institution, the infection might not have directly affected how women experienced abortion; rather, the reorganization of the health system impacted family planning.
2023-07-29T05:52:46.661Z
2022-06-04T00:00:00.000
{ "year": 2023, "sha1": "24ab4ae12aebab1b20f327a9f109d518e16897c2", "oa_license": "CCBY", "oa_url": "http://www.thieme-connect.de/products/ejournals/pdf/10.1055/s-0042-1759749.pdf", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "a19f494e458696b0662098f1dff40d9c0b752f0b", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [] }
252071950
pes2o/s2orc
v3-fos-license
An integrated metabolome and transcriptome analysis of the Hibiscus syriacus L. petals reveal the molecular mechanisms of anthocyanin accumulation Hibiscus syriacus L. var. Shigyoku is a new double-flowered bluish-purple variety in China that changes color during flower development from bluish-purple to light purple. There is limited information on the anthocyanin accumulation patterns and associated transcriptome signatures in Shigyoku from D1 (bud) to open flower (D3). Here, we employed a combined transcriptome and metabolome approach to understand the mechanism of this color change. Our results demonstrate that cyanidins, pelargonidins, delphinidins, petunidins, peonidins, and malvidins were differentially accumulated in Shigyoku petals. Anthocyanin biosynthesis started in D1, was significantly upregulated in D2 (semi-open flower), and was reduced in D3. Malvidins, pelargonidins, and peonidins could be associated with the bluish-purple coloration on D2, and their reduced accumulation in D3 imparted the light purple coloration to Shigyoku petals on D3. Significant contributions to the color change could be associated with expression changes in anthocyanin biosynthesis genes, i.e., LARs, ANSs, DFRs, UGT79B1, C3'Hs, 3ATs, and BZ1s. The UFGTs were associated with the higher accumulation of glycosylated anthocyanins in D2 and D3. Furthermore, the changes in the expressions of the MYB and bHLH transcription factors were consistent with the anthocyanin accumulation. Finally, we discuss the possible roles of jasmonic acid, auxin, and gibberellic acid signaling in regulating the MBW complex. Taken together, we conclude that H. syriacus petal coloration is associated with anthocyanin biosynthesis genes, the MBW complex, and phytohormone signaling. Introduction Hibiscus syriacus L., a member of the Malvaceae family, is commonly known as the rose of Sharon. It is an ornamental flowering plant, and more than 350 varieties are grown worldwide (Magdalita and San Pascual, 2020). Its flowers are used in salads and are known to possess biological functions against a wide range of human diseases (Kim et al., 2022). It is widely used as a Chinese medicinal plant because of its pharmacological activities, i.e., antifungal, antifertility, antihypertensive, anti-inflammatory, and antibacterial activities (Punasiya et al., 2014). H. syriacus varieties are categorized based on two factors, i.e., the number of petals and petal color. It has a long flowering period that extends from May to October, but the blooming period of a single flower is usually 1 day. The flower color changes throughout flower development, i.e., from bud to full bloom. This color-changing characteristic is particularly prominent in a new double-flowered cultivar, "H. syriacus Shigyoku" (hereafter Shigyoku). Its flowers show bluish-purple petals when semi-open and turn purple when completely open (full bloom). There is a need to explore the petal color transition mechanism of H. syriacus for breeding flowers of different color intensities to target different consumers. Additionally, petal color is an important biological factor that attracts pollinators (Whitney and Glover, 2007). Hibiscus syriacus flowers contain nearly 40 different anthocyanin components. Flowers from different varieties in this species produce different amounts of cyanidin, delphinidin, procyanidin, peonidin, pelargonidin, petunidin, and malvidin, producing different colorations (Zhang et al., 2022). 
Different shades of color, such as lavender, blue, red, and purple, are produced by anthocyanin pigments. The three main types of anthocyanins are distinguished by the number of hydroxyl groups on their B-ring: anthocyanins generated from pelargonidin (orange or red; one hydroxyl group), from cyanidin (magenta; two hydroxyl groups), and from delphinidin (blue or purple, including malvidin; three hydroxyl groups) (Ng et al., 2018). The functional groups present on the anthocyanin skeletons modify the color of the compounds as well as of the plant tissue where they are synthesized. For example, the red and purple petals of H. syriacus flowers contained more anthocyanins that were substantially hydroxylated and partially hydroxyl-methoxylated, resulting in a deeper hue of the petal cell vacuole (Alappat and Alappat, 2020). The reduction/loss of anthocyanins also contributes to color transitions in flowering plants, e.g., Ipomoea purpurea (Zufall and Rausher, 2004), Linanthus parryae (Schemske and Bierzychudek, 2001), and Primula vulgaris (Li et al., 2020). Moreover, mutations in the coding sequences of the anthocyanin biosynthesis pathway (ABP) genes have also been implicated in evolutionary transitions in floral color (Streisfeld et al., 2011). Anthocyanin biosynthesis in plants takes place through the ABP, which is a component of the flavonoid biosynthesis pathway (Liu et al., 2018). Upstream of the flavonoid pathway, the phenylpropanoid pathway contains the early ABP genes, such as phenylalanine ammonia-lyase (PAL). Together with PAL, the ABP genes in dicots are divided into two groups, i.e., early biosynthetic genes (EBGs) and late biosynthetic genes (LBGs) (Weiss, 2000). The EBGs include chalcone synthase (CHS), chalcone isomerase (CHI), and flavanone 3-hydroxylase (F3H), which are the common flavonoid biosynthesis pathway genes and affect downstream flavonoids. The LBGs include flavonoid 3′-hydroxylase (F3′H), flavonoid 3′,5′-hydroxylase (F3′5′H), dihydroflavonol 4-reductase (DFR), leucoanthocyanidin dioxygenase (ANS), and UFGTs (Tanaka et al., 2008). Several internal factors, e.g., copigments, cell shape, pH, and phytohormones (Griesbach, 2010), play an important role in flower coloration. The transcriptional control of the ABP is associated with the MBW complex (MYB, bHLH, and WD40 TFs). Two components of this complex, i.e., MYB (Docimo et al., 2016) and bHLH (Xie et al., 2012), are positive regulators of ABP genes, and in most cases their expression is specific to the pigmented tissues. On the other hand, WD40 plays similar roles in both pigmented and nonpigmented tissues; nevertheless, WD40 stabilizes the MBW complex (Li, 2014). Studies have shown that the MBW complex is influenced by plant hormone signals. In particular, its activation/deactivation by jasmonic acid (JA), auxin, and gibberellic acid (GA) is relatively well established (LaFountain and Yuan, 2021). Although these mechanisms of anthocyanin biosynthesis, transcriptional control, and regulatory signals are well explored in a range of flowering plants, how these pathways control anthocyanin accumulation and petal coloration in H. syriacus varieties in general, and in Shigyoku in particular, is yet to be explored. Developments in transcriptomics and metabolomics have accelerated the exploration of the complex pathways that regulate such traits, e.g., the ABP. We adopted a combined metabolome and transcriptome analysis to explore the ABP in Shigyoku petals. 
We specifically looked into the ABP and the flavonoid biosynthesis pathway. We found that flavonoid and anthocyanin accumulation was highest at the D2 (semi-open flower) stage and that cyanidins were the most highly accumulated anthocyanidins. Other anthocyanidins were also present, and the dark purple color could be due to malvidins, peonidins, and petunidins. The differential expressions of the major flavonoid and anthocyanin biosynthesis genes, comprising early biosynthesis genes (EBGs, including PAL, CHS, 5-O-(4-coumaroyl)-D-quinate 3′-monooxygenase (C3′H), FLS, F3H, and HCT) and late biosynthetic genes (LBGs, including ANS, leucoanthocyanidin reductase (LAR), anthocyanidin reductase (ANR), DFR, anthocyanidin 3-O-glucoside 2′′′-O-xylosyltransferase (UGT79B1), anthocyanidin 3-O-glucoside 6′′-O-acyltransferase (3AT), anthocyanidin 3-O-glucosyltransferase (BZ1), and anthocyanidin 5,3-O-glucosyltransferase (GT1)), control the differential anthocyanidin accumulation. We also discuss the potential roles of the genes enriched in the plant hormone signal transduction pathway in changing anthocyanidin accumulation. This combined omics approach elucidates the ABP-related changes leading to the variations in anthocyanin accumulation in H. syriacus petals, thus providing an important resource for breeding Shigyoku or other varieties of this species in different shades. Materials and methods Plant material and growth conditions Hibiscus syriacus "Shigyoku" is the most popular flower in China. The research material was received from the Hunan Forest Botanical Garden. Plants were grown under normal conditions at the Chinese Academy of Agricultural Sciences (CAAS) in 2021-22. The average temperature during the studied growing period was 29°C. Three distinct stages of flower development were selected: D1 (bud stage), D2 (early flowering stage, also known as the semi-open flower stage), and D3 (full bloom). For each stage, petals were collected in triplicate (single plants). The collected petals were frozen in liquid nitrogen, stored at −80°C, and used for metabolomic and transcriptomic analyses. Transcriptome analyses RNA extraction, library preparation for transcriptome sequencing Total RNA was isolated from nine samples using the TRIzol Reagent as per the manufacturer's protocol (Life Technologies, CA, United States). RNA degradation and contamination were evaluated on 1% agarose gels. RNA purity was analyzed using a NanoPhotometer® spectrophotometer (IMPLEN, CA, United States). An RNA Assay Kit (Qubit® 2.0 Fluorometer) was used to measure the RNA concentration. RNA integrity was determined using a Bioanalyzer 2100 system (Nano 6000 Assay Kit, Agilent Technologies, CA, United States). mRNA was purified from total RNA using poly-T oligo-attached magnetic beads. 1 µg of RNA was taken for each sample, and the libraries were prepared using the NEBNext® Ultra™ RNA Library Prep Kit for Illumina® (NEB, United States). cDNA fragments of 250-300 base pairs were selected. Purification of the library fragments was done using the AMPure XP system (Beckman Coulter, Beverly, United States). Random hexamer primers and M-MuLV Reverse Transcriptase (RNase H-) were used to synthesize the first strand of cDNA. Subsequently, the second strand was synthesized with the help of RNase H and DNA Polymerase I. 
Afterwards, the adaptor-ligated cDNAs of the selected length were treated with 3 μl of USER Enzyme (NEB, United States) for 15 min at 37°C and then for 5 min at 95°C. PCR was then carried out with DNA polymerase and universal primers. Finally, the PCR products were purified (AMPure XP system). The Agilent Bioanalyzer 2100 system was used to assess library quality. The cBot Cluster Generation System (TruSeq PE Cluster Kit v3-cBot-HS, Illumina) was used for clustering of the index-coded samples as per the manufacturer's protocol. Afterwards, the libraries were sequenced on an Illumina HiSeq platform. De novo transcriptome assembly and annotation The Illumina Casava 1.8 pipeline was used to filter the raw paired-end reads. Low-quality reads and adapter reads were filtered, and de novo assembly of the filtered reads was performed with the Trinity program (v2.3.0) with a cut-off length of >300 bp. fastp v0.19.3 (Ollion et al., 2013) was used to filter the original data; paired-end reads with adapters and reads with >10% N content were eliminated. Low-quality paired-end reads (Q20 > 50%) were also removed, and the resulting clean reads were used for downstream analyses. Trinity (v2.11.0) software (Grabherr et al., 2011) was used for transcriptome assembly (Kim et al., 2017). The transcripts after Trinity assembly and de-redundancy were used as the reference sequence, and the clean reads of each sample were aligned to this reference. For this, we used RSEM software (Li and Dewey, 2011), with Bowtie2 (Langmead and Salzberg, 2012) as the aligner within RSEM. Finally, the appropriate transcripts were regrouped into gene clusters (unigenes) using Corset v1.0.7 (Davidson and Oshlack, 2014). Gene annotation, identification of DEGs The DIAMOND BLASTX program (Kanehisa et al., 2007) was used to compare the unigene sequences with the KEGG, NR, Swiss-Prot, GO, COG/KOG, and TrEMBL databases. The unigenes' amino acid sequences were then predicted, and the HMMER software was used to compare them with the Pfam database to obtain the annotation data of the unigenes. To estimate and normalize transcript expression levels, the RSEM tool (Li and Dewey, 2011) was adopted, and fragments per kilobase of transcript per million mapped fragments (FPKM) values were calculated. DESeq2 v1.22.2 (Love et al., 2014; Varet et al., 2016) was used for DEG analysis. The p-values were adjusted with the Benjamini and Hochberg method for multiple hypothesis testing to obtain the false discovery rate (FDR). To identify the differential genes, log2 fold change ≥ 1 and FDR < 0.05 were used as selection parameters. qRT-PCR analysis of anthocyanin-related genes We further studied the expression of 15 anthocyanin-related genes in the petals at the three stages, i.e., D1, D2, and D3. The genes were selected based on their RNA-sequencing profiles and the relevance of their roles in the pathway. The primers for the selected genes were designed using Primer3Plus software (Untergasser et al., 2007). A first-strand cDNA synthesis kit (Thermo Fisher Scientific, United States) was used. The reaction mixture preparation and PCR reactions were performed as reported earlier (Niu et al., 2017). The Actin6 gene was used as a reference. The correlation was computed between the RNA-seq data and the qRT-PCR data of the respective genes (Everaert et al., 2017). 
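As a concrete illustration of the DEG selection step just described, the following minimal pandas sketch applies the stated thresholds to DESeq2-style result tables. File and column names are hypothetical, and the absolute value of the log2 fold change is used, following the usual convention for two-sided selection.

import pandas as pd

def select_degs(path, lfc=1.0, fdr=0.05):
    # Keep unigenes with |log2 fold change| >= 1 and FDR (padj) < 0.05.
    res = pd.read_csv(path)
    keep = (res["log2FoldChange"].abs() >= lfc) & (res["padj"] < fdr)
    return res.loc[keep]

d1_d2 = select_degs("D1_vs_D2_deseq2.csv")
d1_d3 = select_degs("D1_vs_D3_deseq2.csv")
d2_d3 = select_degs("D2_vs_D3_deseq2.csv")

# Unigenes differentially expressed in all three pairwise comparisons
# (cf. the 7,457 common DEGs reported in the Results below).
common = set(d1_d2["gene"]) & set(d1_d3["gene"]) & set(d2_d3["gene"])
print(len(d1_d2), len(d1_d3), len(d2_d3), len(common))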
Metabolome analyses Sample preparation for metabolomics The petal samples were vacuum freeze-dried in a lyophilizer (Scientz-100F) and ground to powder with a grinder (MM 400, Retsch) at 30 Hz for 1.5 min. Afterwards, 100 mg of the powdered sample was mixed with 1.2 ml of 70% methanol extraction solution. The solution was vortexed six times and placed in a refrigerator at 4°C overnight. The next morning, the samples were centrifuged at 12,000 rpm for 10 min and the supernatant was collected. The samples were filtered through a microporous membrane (0.22 μm) and transferred to injection vials for UPLC-MS/MS analysis. Conditions for chromatographic and mass spectrometric analysis Ultra-high-performance liquid chromatography (UHPLC; Shimadzu Nexera X2) coupled with tandem mass spectrometry (MS/MS; Applied Biosystems 4500 QTRAP) was the basic system used for metabolite detection. The conditions for analysis are given in Table 1. The effluent was coupled to an ESI-triple quadrupole-linear ion trap (QTRAP)-MS. Table 2 shows the key mass spectrometry conditions. Each ion pair was scanned and identified using the optimal declustering potential (DP) and collision energy in the triple quadrupole (QQQ) (Ma et al., 2021). The qualitative evaluation of the secondary spectrum data was done using the self-built database MWDB (Metware Biotechnology Co., Ltd., Wuhan, China). Isotopic signals and duplicate signals of adduct ions such as NH4+, K+, and Na+, as well as fragments of large molecular weight, were eliminated from the analysis. The quantitative examination of metabolites was carried out using multiple reaction monitoring (MRM) analysis of QQQ-MS. Following the collection of metabolite data from the samples, the peak areas of all metabolite mass spectra were integrated, and the mass spectra of the same metabolites in different samples were combined and normalized. Differential metabolites were annotated and visualized by means of the KEGG database. Statistical analysis R software was used for cluster analysis as well as PCA, following previously described methods (Ma et al., 2021). The differentially accumulated metabolites (DAMs) were selected based on the variable importance in projection (VIP) and the fold change (FC). Results Comparative metabolome profile of H. syriacus "purple jade" petals Overview of metabolome analysis The HPLC-MS/MS-based metabolome profiling of the fresh H. syriacus purple jade petals (Figure 1A) revealed the differential accumulation of 189, 172, and 52 metabolites in the three pairwise comparisons; in total, we detected 301 metabolites (Figure 1B). The DAMs were those whose FC was ≥ 2 or ≤ 0.5 between the comparative groups. The principal component analysis (PCA) grouped the samples into three separate clusters, indicating the reliability of the sampling (Supplementary Figure S1). The DAMs were enriched in metabolic pathways, flavonoid biosynthesis (and related pathways), biosynthesis of secondary metabolites, and anthocyanin biosynthesis (Supplementary Figure S2). 
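A minimal sketch of this DAM selection rule is given below. The exact VIP threshold is truncated in the source text, so the conventional cut-off of VIP ≥ 1 is assumed here, and the file and column names are hypothetical.

import pandas as pd

mets = pd.read_csv("metabolite_quant.csv")  # hypothetical normalized table

fc = mets["fold_change"]
is_dam = ((fc >= 2) | (fc <= 0.5)) & (mets["VIP"] >= 1)  # assumed VIP cut-off
dams = mets.loc[is_dam]
print(f"{len(dams)} differentially accumulated metabolites")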
Transcriptome profiling of H. syriacus petals on different days Overview of transcriptome sequencing Global gene expression profiling of the H. syriacus petals was done by transcriptome sequencing. The nine cDNA libraries produced 58.97 Gb of clean reads (an average of 43.68 million clean reads per library). Of the clean reads, 73.81% could be mapped onto the reference sequence (the transcripts after Trinity assembly and de-redundancy were used as the reference). The error rate and GC content were 0.03% and 44.5%, respectively (Supplementary Table S2). The sequencing produced 313,323 transcripts and 303,832 unigenes; all the unigenes could be annotated (Supplementary Figure S3). Overall FPKM values on D1 and D2 were higher than on D3 (Figure 2A). The PCA analysis grouped the respective replicates of each flowering stage together, suggesting the reliability of the sampling (Figure 2B). There were 29,921, 59,258, and 49,401 DEGs in D1 vs. D2, D1 vs. D3, and D2 vs. D3, respectively (Figure 2C); 7,457 of these were common to the three comparisons (Figure 2D). These observations indicate that, from D1 to D2, processes such as ubiquitination, cell wall changes, and sugar transport are involved. These processes could be relevant to developmental mechanisms and may also be related to the ABP. On the contrary, the upregulation (or exclusive expression) of MYB, Ga2ox, SAUR, and LFS-like indicates that hormone signaling (GA signaling in particular), anthocyanin transport, and stability are highly regulated from D2 to D3 (Supplementary Table S3). KEGG pathway enrichment showed that the DEGs were significantly enriched in metabolic pathways, biosynthesis of secondary metabolites, flavonoid biosynthesis, starch and sucrose metabolism, and signaling-related pathways (plant hormone signal transduction and MAPK signaling pathway) (Supplementary Figure S4). Expression changes in anthocyanin biosynthesis genes are consistent with the respective metabolite accumulation in H. syriacus petals The transcriptome analyses showed that 178 DEGs were significantly enriched in the flavonoid biosynthesis pathway (Figure 3 and Supplementary Table S4). Among the LBGs, two ANSs (Cluster-15126.124322 and Cluster-15126.124323) were slightly upregulated in D2 as compared to D1, but their expression was significantly reduced in D3. The LAR transcripts had higher expressions on D1, which then decreased on D2; most LARs were not expressed in D3. The ANRs showed downregulation in D2 as compared to D1 and were not expressed in D3. These expression changes indicate that leucoanthocyanidins are reduced to (+)-afzelechin, (+)-catechin, and (+)-gallocatechin in D1, whereas in D2 the ANSs allow the biosynthesis of anthocyanins/anthocyanidins. Furthermore, one DFR (Cluster-15126.107363) was exclusively expressed in D2, while eight others were highly expressed in D1. Additionally, three DFRs were downregulated in D3. These expression patterns indicate that the DFRs are activated on D1 and their expression continues to decrease with time. Furthermore, we noted that C3′H, UGT79B1, 3AT, BZ1, and GT1 were also differentially regulated between the three stages. The C3′Hs were expressed only in D2 and D3, with higher expression on D2. This is consistent with the higher dihydroquercetin biosynthesis in D2 as compared to D3. Two UGT79B1s (Cluster-15126.141627 and Cluster-15126.130638) showed strong expression on D3, whereas their expression on D1 and D2 was fractional. These expression patterns support the finding that D3 had reduced contents of 3-O-glucosides as compared to D2. This is further supported by the important observation that two BZ1s (Cluster-15126.190510 and Cluster-15126.98419) were exclusively expressed in D2 and a third (Cluster-15126.189442) was highly upregulated in D2 as compared to D1. 
Most 3ATs' expressions increased from D1 to D2 and then significantly decreased on D3, which is consistent with the respective metabolite accumulation. The decreasing 3-O-glucoside contents from D2 to D3 are also consistent with the 3AT expression trends. The expression of two GT1s (Cluster-15126.125467 and Cluster-15126.97722, particularly the second transcript) is highly correlated with cyanidin-3,5-O-diglucoside. The GT1 expressions were variable, i.e., some transcripts had a higher expression on D1 as compared to D2 and D3, while others showed contrasting expressions. This is because this enzyme regulates two successive steps, i.e., the conversion of cyanidin to cyanidin-5-O-glucoside and its further conversion to cyanidin-3,5-O-diglucoside. Since we noted the higher accumulation of glycosylated anthocyanins in D2 as compared to D1, we explored whether any flavonol 3-O-glucosyltransferases (UFGTs) were differentially expressed. Interestingly, five UFGTs showed upregulation in D2 as compared to D1, whereas one showed the contrasting expression, i.e., higher in D1 than in D2. On the other hand, only two of the six were upregulated in D3 as compared to D1. These expression changes are consistent with the metabolite results (Figure 3 and Supplementary Table S4). Seemingly, the expression trends of LARs, ANSs, DFRs, UGT79B1, C3′Hs, 3ATs, and BZ1s are highly consistent with those of the respective metabolites. Additionally, the expressions of the UFGTs can be related to the higher anthocyanin accumulation in D2. More specifically, the UFGTs are responsible for the higher levels of glycosylated anthocyanins in D2 and D3 as compared to D1. Expression changes in plant hormone signal transduction pathway genes Hormone signaling has been strongly associated with the regulation of the expression of ABP-related genes. Furthermore, hormone signaling also affects the expression of ABP activators/repressors. Considering these important roles and the fact that DEGs were significantly enriched in the plant hormone signal transduction pathway, we specifically looked into this pathway. We noted the differential expression of 1,184, 1,587, and 1,585 genes in D1 vs. D2, D1 vs. D3, and D2 vs. D3, respectively; overall, 2,469 DEGs were enriched in this pathway. The observation that a large number of DEGs were enriched in this pathway is interesting and implies large-scale changes in the hormone signaling pathway, which in turn suggests important roles in the ABP as well as in the growth and development of the H. syriacus flower. Since we noticed that anthocyanin biosynthesis was highly increased on D2 and then reduced on D3, we specifically looked for plant hormone signaling-related DEGs that show corresponding expression trends (Supplementary Table S4). The attachment of JAZ (jasmonate ZIM domain-containing protein) to COI1 (coronatine-insensitive protein 1) allows it to detach from the MBW complex, which is then transcriptionally activated (Garrido-Bigotes et al., 2020). Of 20 differentially expressed JAZ transcripts, 75% (15) showed reduced expression in D2 as compared to D1, which indicates that JAZs are being degraded in D2. Of these 20, only nine were differentially expressed between D2 and D3, whereas five were upregulated in D3. Only two COI1 transcripts were differentially expressed in D1 vs. D2; Cluster-15126.193655 was highly expressed in D2 as compared to D1. Interestingly, 12 COI1s were differentially expressed between D2 and D3, all of which were downregulated in D3. 
These expression trends indicate that JAZs are degraded in D2, which leads to higher anthocyanin biosynthesis, whereas the contrasting expression leads to lower anthocyanin accumulation in D3. From this expression pattern, we can understand that JAZ degradation leads to the transcriptional activation of the MBW complex in the H. syriacus flower, resulting in color changes (Supplementary Table S4). Regarding auxin signaling, we noted that 36 of the 46 TIR1s (transport inhibitor response 1) showed decreased expression in D2 and D3 as compared to D1. This expression trend contrasts with the ABP genes' expressions. The upregulation of 108 ARF (auxin response factor) transcripts in D1 could be a signal to repress ABP genes in D1 but not in D2. This correlates with the lower expression of 50 IAA (auxin-responsive protein IAA) transcripts in D1 as compared to D2. Thus, the higher expression of IAAs in D2 suggests that they bind with ARFs to remove their repressive action on ABP genes (Supplementary Table S4). A total of 44 DELLAs were upregulated while 36 were downregulated in D2 as compared to D1. The higher expression of a larger number of GID1 and GID2 transcripts in D1 and their reduced expression in D2 indicate that GID1 degrades a larger number of DELLAs in D1 but not in D2. The degradation of DELLAs in D1 could negatively regulate the anthocyanin biosynthetic genes. Similarly, 66 DELLAs were downregulated in D3 as compared to D2. The higher expression of the other DELLA transcripts in D1 and the contrasting expression of the GIDs are understandable, since we also detected anthocyanin accumulation in D1 (Supplementary Table S4). Changes in the expression of transcription factors and MBW complex-related DEGs The transcription factor (TF) annotation and classification indicated the differential expression of 82 TF families (2,648 DEGs), 87 families (4,164 DEGs), and 85 families (3,280 DEGs) in D1 vs. D2, D1 vs. D3, and D2 vs. D3, respectively; 646 TFs were commonly regulated between the three stages. The top upregulated TF families in D2 as compared to D1 included AP2/ERF, bHLH, MYB-related, WRKY, PHD, MYB, and MADS-MIKC. On the contrary, the top downregulated TF families in D3 as compared to D2 were MYB, bHLH, OFP, AP2/ERF-ERF, and C2C2-GATA. The differential expression of the MYB/MYB-related and bHLH TFs is interesting, since both of these are components of the MBW complex (Supplementary Table S5). Three hundred and seventeen bHLH TFs were differentially expressed between the stages; 67 and 101 were up- and downregulated in D2 as compared to D1, respectively. The top bHLH TFs in D2 included bHLH092, bHLH072, and bHLHAs, whereas in D3 we observed the upregulation of bHLH9, bHLH120, and bHLH72. Four of these (bHLH072, Cluster-15126.161369; bHLH072, Cluster-15126.158188; bHLHA, Cluster-15126.142146; and Cluster-15126.281970) were expressed in D2 and D3. Since the anthocyanin content was highest on D2 and then decreased on D3, we specifically looked for the bHLH TFs that showed similar expression trends; 46 bHLH TFs showed higher expression on D2 as compared to D1, which then decreased on D3 (Supplementary Table S5). Overall, these expression analyses indicate that, in H. syriacus, a large number of MBW complex-related transcripts (MYB and bHLH) take part in anthocyanin biosynthesis. qRT-PCR analysis validates transcriptome and metabolome analyses The qRT-PCR analysis of selected genes related to anthocyanin biosynthesis revealed expression patterns similar to the RNA-seq data (Figure 4A). 
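The agreement between the two platforms can be summarised by a Pearson correlation over the matched gene-by-stage expression values; a minimal sketch, assuming a hypothetical paired table of mean FPKM and relative qRT-PCR expression for the 15 selected genes at D1, D2, and D3, is:

import pandas as pd
from scipy.stats import pearsonr

# columns: gene, stage, fpkm (RNA-seq), qpcr (relative qRT-PCR expression)
pairs = pd.read_csv("qpcr_vs_rnaseq.csv")
r, p = pearsonr(pairs["fpkm"], pairs["qpcr"])
print(f"Pearson r = {r:.4f} (p = {p:.2e})")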
The correlation between the FPKM values and the qRT-PCR expression was 0.8168, signifying that the qRT-PCR results are consistent with the RNA-seq results (Figure 4B). These observations reaffirm the transcriptome and metabolome analysis results that the anthocyanin content varies over the three stages of H. syriacus petals and is responsible for the change in color. Co-joint analyses of RNA-sequencing and metabolite profiling confirm the accumulation patterns of anthocyanins The relationship between the differentially expressed transcripts and the differentially accumulated metabolites between the studied stages of H. syriacus petals was further explored through a co-joint analysis. Most importantly, we first looked for the KEGG pathways in which the DEGs and DAMs were jointly enriched. Six KEGG pathways, i.e., flavonoid biosynthesis, anthocyanin biosynthesis, isoflavonoid biosynthesis, flavone and flavonol biosynthesis, metabolic pathways, and biosynthesis of secondary metabolites, were significantly enriched (Supplementary Figure S5). A detailed look into the flavonoid biosynthesis pathway and the ABP confirmed the individual results of the RNA sequencing and metabolite profiling. More specifically, we observed that the upregulation of BZ1 transcripts led to the increased accumulation of pelargonidin 3-glucoside, pelargonidin 3-malonyl-glucoside, cyanidin 3-malonyl-glucoside, peonidin 3-glucoside, cyanidin 3-glucoside, cyanidin 3,5-diglucoside, petunidin 3-glucoside, delphinidin 3-rutinoside, and delphinidin 3,5-diglucoside (Supplementary Figure S6). Discussion Petal color transition is attributed to the changes in the accumulation of the major anthocyanins We adopted a combined metabolome and transcriptome approach and explored the metabolic and transcriptomic changes in petals from D1 to D3 (Figure 1A). Recent research on H. syriacus flowers has revealed the presence of as many as 40 different anthocyanin components; in particular, cyanidin, delphinidin, procyanidin, peonidin, pelargonidin, petunidin, and malvidin have been detected (Zhang et al., 2022). Our results, showing that all the major anthocyanins were present in H. syriacus petals, are consistent with these previous findings (Figure 1). The detection of cyanidins, petunidins, and delphinidins on D1 indicates that the ABP was already active before flower opening, which is a common phenomenon in most flowering plants (Justesen et al., 1997). The purple coloration of the H. syriacus petals in our study could be associated with the increased accumulation of these anthocyanins. However, the significantly higher accumulation of the glycosylated forms of cyanidin, delphinidin, malvidin, petunidin, pelargonidin, and peonidin seems to be the major color contributor in D2 (Figure 1D). The glycosylated forms are produced when sugar moieties are attached to the unstable anthocyanidin aglycones (Cheng et al., 2014). This glycosylation stabilizes the anthocyanins and also serves as a signal for their transport to vacuoles, which allows them to function as pigments (Mathews et al., 2003). The EBGs and LBGs control the major steps in anthocyanin biosynthesis (Liu et al., 2018). This means that the flavonoid biosynthesis pathway leads to the production of anthocyanins (LaFountain and Yuan, 2021). As a first step, phenylalanine is converted into cinnamate by the action of PAL. 
The expression changes in PAL transcripts in D2 (as compared to D1) indicate that the differential anthocyanin biosynthesis starts far upstream in the pathway; this is consistent with the relationship of PAL with the accumulation of different anthocyanins in tea (Camellia sinensis L.). Further down in the pathway, the reduced expression of C3′Hs in D3 as compared to D2 indicates that the reduced biosynthesis of the active dihydroflavonol intermediate (dihydroquercetin) is one of the causes of the reduced anthocyanin accumulation in D3 (Gang et al., 2002). On the other hand, the varied expressions of the EBGs, i.e., C4H, CHS, CHI, and F3H, confirm our statement above that anthocyanin biosynthesis had already started in D1; however, they are not the major cause of the anthocyanin accumulation pattern. Nevertheless, the upregulation of PAL, CHS, C3′H, FLS, F3H, and HCT transcripts, consistent with the anthocyanin accumulation patterns, suggests their important roles, as reported in other flowering plants, i.e., Paeonia lactiflora, Hibiscus cannabinus L. (Lyu et al., 2020), and other ornamental plants [reviewed in (Zhao and Tao, 2015)]. Furthermore, the decreasing expressions of LARs in D2 indicate that the pathway is not moving in the direction of the biosynthesis of flavan-3-ols, i.e., (+)-afzelechin, (+)-catechin, and (+)-gallocatechin (Tanner et al., 2003). Instead, the ANS upregulation in D2 causes higher anthocyanin biosynthesis, and the ANS downregulation in D3 is responsible for the opposite, i.e., reduced anthocyanin biosynthesis (Figure 5). The succeeding expression changes in DFR further strengthen our statement that anthocyanin biosynthesis was already active in D1. Also, the downregulation of three DFRs in D3 can be associated with the reduced anthocyanin accumulation (Figure 1C) (Li et al., 2017), because a purple sweet potato DFR has been characterized for a similar role in anthocyanin biosynthesis (Wang et al., 2013). The increased biosynthesis of the 3-O-glucosides of pelargonidin, cyanidin, and delphinidin in D2 and their subsequent decrease in D3 can be explained by the changes in BZ1 expressions. BZ1 catalyzes the transfer of the glucosyl moiety from UDP-glucose to the 3-hydroxyl group of the anthocyanidins, as reported in Iris hollandica (Yoshihara et al., 2005). Furthermore, the decreasing 3-O-glucosides and the increasing accumulation of cyanidin-3-O-(6″-O-caffeoyl)-glucoside in D3 as compared to D2 are consistent with the 3AT transcripts' expression (Yonekura-Sakakibara et al., 2000). Another explanation of the higher accumulation of the glycosylated anthocyanins in D2 and their reduction in D3 is the up- and downregulation of the UFGTs in the respective stages. These results are consistent with UFGT functions in most flowering plants, e.g., Freesia hybrida (Sun et al., 2016) (Figure 5). Hence, our combined metabolome and transcriptome analyses indicate that the ABP was active in D1, as evident from the expression of the EBGs and LBGs. The higher anthocyanidin accumulation in D2 as compared to D1 and the reduced accumulation in D3 as compared to D2 are due to the changes in the expressions of the EBGs and LBGs. Specifically, C3′Hs, LARs, ANSs, DFRs, UGT79B1s, BZ1s, and 3ATs seem to control the major steps in the anthocyanin accumulation differences from D2 to D3 (Figure 5). The expressions of the MBW complex's components are consistent with the changes in anthocyanin accumulation 
Since H. syriacus is a dicot and the transcriptome sequencing results showed that both EBGs and LBGs take part in the differential anthocyanin accumulation and the resultant petal coloration (as explained in the sections above), it is necessary to understand the changes in the expressions of the MBW complex components. The upregulation of a very large number of MYB TFs in D2 is an indication of an active MBW complex. Similarly, the differential expressions of the MYB TFs in D1 further justify our proposition that the ABP was already active in D1. Likewise, the observations that MYB/MYB-related and bHLH TFs were among the top-10 upregulated DEGs in D2 and highly downregulated in D3 correspond to the anthocyanin profiles at these stages. These results imply that MBW complex components are active in D1; however, their higher expressions in D2 possibly increasingly regulate the ABP genes (EBGs and/or LBGs), leading to the higher anthocyanin accumulation (Liu et al., 2018; LaFountain and Yuan, 2021; Yan et al., 2021). Contrastingly, the downregulation of MYBs and bHLHs in D3 can be linked with the reduced anthocyanins (Liu et al., 2018; LaFountain and Yuan, 2021; Yan et al., 2021). Thus, the metabolome data, supported by the transcript expressions of the ABP genes and the MBW complex components, indicate the mechanism of H. syriacus petal coloration (Yan et al., 2021). The most interesting observation was the exclusively expressed MYBs in D2. In particular, MYB114 activation can be associated with the expression of many LBGs and the resultant higher anthocyanin biosynthesis in D2. In apple and Arabidopsis, MYB114 has been characterized for its role in the regulation of anthocyanins (Gonzalez et al., 2008; Jiang et al., 2021). MYB114 expression has been shown to be bHLH-dependent. Thus, it is possible that it interacted with the four exclusively expressed bHLHs (Cluster-15126.158188, Cluster-15126.142146, Cluster-15126.161369, and Cluster-15126.281970). The specific interaction of these TFs with anthocyanins needs further characterization experiments. FIGURE 5 | Differential regulation of anthocyanin biosynthesis in H. syriacus petals. The genes and metabolites that were differentially expressed and accumulated, respectively, are given in red text. The dotted arrows/lines indicate that the genes are not found or not known. The pathway was reconstructed following the layout and nomenclature available at https://www.genome.jp/kegg/kegg2.html. The gene name abbreviations correspond to the full names given in Supplementary Table S4. Plant hormone signaling pathways' role in H. syriacus petal coloration Hormone signaling pathways in plants have been studied in multiple species to understand the role of various hormone signals in the transcriptional activation/repression of the MBW complex and/or ABP genes (LaFountain and Yuan, 2021). Among all hormones, the effect of JA is the most studied, and it has been shown that the application of MeJA activates anthocyanin biosynthesis. This is because the JAZ proteins bind directly to the MYB and bHLH TFs (of the MBW complex) (Qi et al., 2011). When COI1 is expressed, it attaches to JAZ (through the SCF complex) and degrades it (Pauwels and Goossens, 2011). Thus, if JAZs are highly expressed in a tissue, it is reasonable that anthocyanin biosynthesis would be low because the MBW complex is occupied. 
The result that 75% of the JAZ transcripts had higher expression in D1, which was then reduced in D2, indicates that JAZs are being degraded in D2, while they are upregulated again in D3. These results are further supported by the expression trends of COI1. Thus, the significant upregulation of anthocyanins at the D2 stage of H. syriacus petals clearly correlates with this JAZ degradation. This mechanism has been reported in Fragaria vesca L. x F. ananassa (Garrido-Bigotes et al., 2020). Other than JA signals, our results suggest that auxin signals can be another reason for the anthocyanin biosynthesis and the resulting petal color. When IAA121 and ARF13 are bound, the MBW complex is released from ARF13, which transcriptionally activates anthocyanin biosynthesis. When IAA121 is bound by TIR1, ARF13 destabilizes the MBW complex and represses the anthocyanin biosynthesis genes, as studied in apple (Wang et al., 2018). The decreased expression of TIR1 transcripts can be related to the increased expression of ABP genes in D2 as compared to D1. Similarly, the higher expression of a large number of ARFs (108 transcripts) in D1 suggests their repressive action on ABP genes (Wang et al., 2020). This is further supported by the reduced expressions of 50 IAA transcripts in D1 as compared to D2. Therefore, we can propose that, similar to apple, the H. syriacus IAAs bind with ARFs and remove their repressive action on ABP genes. Finally, the expression patterns of the DELLAs in the studied stages of H. syriacus petals indicate that GA signaling is also at play in changing the anthocyanin biosynthesis and the resulting color (Qi et al., 2014; Xie et al., 2016). Nevertheless, these expression patterns of the JA, auxin, and GA signaling genes put forward a basic understanding of the hormonal control of petal color formation. They give us clues and provide a list of hormone-signaling-associated genes and TFs that should be characterized and manipulated for desired color formation in H. syriacus. Conclusion The Hibiscus syriacus L. petal color transition from D1 to D3 was studied through metabolome profiling, transcriptome sequencing, and qRT-PCR analysis. The bluish-purple coloration of the petals on D2 appeared to be associated with the significantly higher accumulation of anthocyanidins, i.e., cyanidins, delphinidins, malvidins, pelargonidins, petunidins, and peonidins. The purple (lavender) petal color on D3 was due to the reduced contents of these anthocyanidins. The transcriptome analysis showed that both EBGs and LBGs were differentially expressed. Based on the transcriptome sequencing results, the major genes that contributed to the changes in petal color were C3′Hs, LARs, ANSs, DFRs, UGT79B1s, BZ1s, and 3ATs. The transcriptomic signatures associated with the expression of these genes included the MBW complex components, particularly the MYB and bHLH TFs. Furthermore, the JA, auxin, and GA signaling-related genes showed expressions that suggest roles of the respective hormones in affecting the MBW complex, the ABP genes, and the resulting anthocyanin accumulation. Data availability statement The original contributions presented in the study are included in the article/Supplementary Material; further inquiries can be directed to the corresponding authors. The datasets for this study were deposited at NCBI under BioProject accession ID "PRJNA851843". 
Author contributions Conceptualization, XW, YW, and LL; Data curation, LL and CL; Formal analysis, CL and MZ; Investigation, CL; Methodology, XW, YW, and MZ; Project administration, XW; Resources, YW; Software, CL; Supervision, LL; Validation, YW; Writing - original draft, XW and YW; Writing - review and editing, XW. All authors have read and approved the final manuscript.
2022-09-05T13:42:20.514Z
2022-09-05T00:00:00.000
{ "year": 2022, "sha1": "c8b1a3fa29d19014b17b540519ddaf6ad09237d7", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Frontier", "pdf_hash": "c8b1a3fa29d19014b17b540519ddaf6ad09237d7", "s2fieldsofstudy": [ "Agricultural And Food Sciences" ], "extfieldsofstudy": [ "Medicine" ] }
5825187
pes2o/s2orc
v3-fos-license
Quantification of Interactions between Dynamic Cellular Network Functionalities by Cascaded Layering Large, naturally evolved biomolecular networks typically fulfil multiple functions. When modelling or redesigning such systems, functional subsystems are often analysed independently first, before subsequent integration into larger-scale computational models. In the design and analysis process, it is therefore important to quantitatively analyse and predict the dynamics of the interactions between integrated subsystems; in particular, how the incremental effect of integrating a subsystem into a network depends on the existing dynamics of that network. In this paper we present a framework for simulating the contribution of any given functional subsystem when integrated together with one or more other subsystems. This is achieved through a cascaded layering of a network into functional subsystems, where each layer is defined by an appropriate subset of the reactions. We exploit symmetries in our formulation to exhaustively quantify each subsystem’s incremental effects with minimal computational effort. When combining subsystems, their isolated behaviour may be amplified, attenuated, or be subject to more complicated effects. We propose the concept of mutual dynamics to quantify such nonlinear phenomena, thereby defining the incompatibility and cooperativity between all pairs of subsystems when integrated into any larger network. We exemplify our theoretical framework by analysing diverse behaviours in three dynamic models of signalling and metabolic pathways: the effect of crosstalk mechanisms on the dynamics of parallel signal transduction pathways; reciprocal side-effects between several integral feedback mechanisms and the subsystems they stabilise; and consequences of nonlinear interactions between elementary flux modes in glycolysis for metabolic engineering strategies. Our analysis shows that it is not sufficient to just specify subsystems and analyse their pairwise interactions; the environment in which the interaction takes place must also be explicitly defined. Our framework provides a natural representation of nonlinear interaction phenomena, and will therefore be an important tool for modelling large-scale evolved or synthetic biomolecular networks. Introduction Complex biochemical reaction networks serve a broad variety of different tasks within the cell. Systems Biology researchers apply a range of systems analysis techniques to these networks to identify and model functional subsystems and their interaction structure. In the context of biomolecular networks, the subsystems that can be identified often have biological interpretations: for example, the heat shock response and the chemotaxis pathways represent two functional subsystems within a model describing the complete biomolecular reaction network of Escherichia coli; the synthesis pathways of individual products represent distinct functional subsystems within a metabolic network; and so on. Other functional subsystems may also have system-theoretic interpretations: for example, interacting, distributed feedback control mechanisms; or subsystems which can sense, compute, or actuate on the cell and its environment. This tangle of different objectives within the same network leads to trade-off situations: evolutionary or synthetic changes to one functional subsystem can lead to declining performance or unexpected side effects with respect to another. 
A fundamental challenge of Systems Biology is to not only establish the behaviour of each functional subsystem in isolation, but also to understand how they dynamically influence one another. This problem is particularly acute when applying the modelling and analysis tools of Systems Biology to adapt and redesign modular biomolecular networks in Synthetic Biology [1-3]. The dynamics of many functional subsystems, whether evolved biochemical networks or synthetic devices, often do not proceed as modelled when integrated into a cell, due to their interactions with one another and with the environment of their cellular host. Possible sources of nonlinear interactions between pairs of functional subsystems and the cellular environment include retroactivity in genetic [4-6] and signalling [7] networks, crosstalk between parallel signalling pathways [8,9], and the coupling of multiple transcription or translation rates through competition for shared resources [10-12]. In each of these settings, the change in input-output behaviour of a given subsystem upon integration with its context is examined.

There are two complementary goals of this paper. First, we investigate the behaviour of each subsystem between the two extremes of 'isolated' and 'integrated', when it is integrated with any subset of the other subsystems. The second goal, which is achieved as a consequence of the first, is to then systematically quantify each of the pairwise interactions between the network's subsystems.

The approach we will take in this work is to define a functionality as a group of reactions which corresponds to an identified functional subsystem of a biomolecular network. The reactions that determine each functionality can be selected either through biological insight, or by applying existing computational approaches such as elementary flux mode (EFM) analysis [13-15] (see also Results). We characterise the behaviour, or effect, of a functionality as the solution of an ordinary differential equation (ODE) model determined by the particular group of reactions. This approach exploits the recently-introduced decomposition technique known as layering [16,17]. As depicted in Fig 1, such an approach is distinct from established modular approaches to network decomposition, which are characterised by identifying sets of species with a high connectivity inside the module, and significantly lower connectivity to species in other modules [18-26]. While many species and reactions in a given network are often implicated in multiple network functions, these modular approaches generally do not allow for such a high degree of overlap between modules. For example, if a network of two pathways responds to two external signals with a single output species, a modular decomposition of this network requires the common output species to be assigned to a module representing exactly one of the pathways, or potentially to an additional separate module. Either way, the input-output behaviour of both pathways cannot be easily defined. However, in the layered framework, the common output is associated with both layers, and hence the output of each layer can be defined in terms of its biological function. Thus, in some cases, layers are preferable to modules for defining the functional subsystems of the network, since the layered framework explicitly allows for overlap in species and reaction subsets [16,17], as will be illustrated further in 'Mapping Functionalities to Layers' below.
Most importantly, we make it explicit that the behaviour of any functionality also depends on the other functionalities with which it is integrated, to which we refer as the context of the functionality. This contextual dependence is formalised by developing a notational framework that will unambiguously define a functionality's behaviour in a particular context. The resulting concept of conditional dynamics will be key to our understanding of each functionality as being defined only in the context of others, allowing us to systematically investigate the interdependence of an entire network's functionalities.

The subsequent aim of this framework is to characterise all of the interactions between each pair of functionalities, each of which is also context-dependent. Our approach is a formalisation and extension of previous investigations into additive (i.e. independent), synergistic, or antagonistic subsystem interactions. Examples of these phenomena include the cytokine secretion by macrophages in response to stimulation with different sets of ligands [27], the response of bacteria to different combinations of drugs [28], or calcium signalling responses to different stimuli [29]. Importantly, we demonstrate how the strength and the type of interactions between functionalities depend on mediated indirect interactions with the other functionalities comprising their context. The relationship of our approach to the concepts in [27-29] is further discussed in the section 'Calculating with Layers'.

In addition to the previous literature on context-dependent dynamics, our theoretical framework is also related to steady-state methods. For instance, the third of our examples will exploit EFMs, a technique designed to analyse the steady-state flux distribution in metabolic networks [13-15]. Furthermore, modular and hierarchical control/response analysis is concerned with the different behaviour of subsystems both in isolation and integrated in larger systems, with particular reference to the steady-state responses of biochemical networks to parameter perturbations [6,30-34]. The key distinction between our method and these is that we analyse the dynamics of kinetic models [35,36] represented by sets of ODEs, rather than steady states. Moreover, our method is not based on linearisation, allowing us to adequately capture nonlinear interactions between functionalities.

Quantifying the dynamic interactions of each pair of functionalities in all possible contexts requires multiple ODE simulations; for its practical applicability, it is important to minimise the computational effort involved.

Fig 1. Modularization, non-cascaded and cascaded layering of a simplified model of the glycolytic pathway (see [46], p. 88 ff.). Nodes represent metabolites, coloured arrows reactions, and grey arrows information transfer. Intermediate metabolites and cofactors are omitted, and several enzymatic reactions are merged. Modules (A) or layers (B and C) are delineated by blue and green boxes. In each panel the blue subnetwork corresponds to the single functional subsystem Glc→Glyc of glycolysis, where the conversion from GAP into Pyr has been removed. All panels consider the integration of the Glc→Glyc pathway (blue arrows) with the missing (merged) reaction GAP→Pyr. A) Modularization of the pathway [18-26].
Modules are typically defined by non-overlapping sets of species, and can be interconnected by mass flow (red arrow, representing the load of the new species) and information transfer. Here, there is mass flow when the blue module is integrated with the green module, leading to retroactive effects [4]. The combined dynamics will be in feedback, and hence both modules must be considered simultaneously. B) Non-cascaded layering [16,17]. Layers are defined by non-overlapping sets of reactions, while species may take part in multiple layers; species' affiliation to layers is indicated in superscript. The dynamics of species in multiple layers are summed (e.g. GAP(t) = GAP^1(t) + GAP^2(t) + GAP(t = 0)). Layers are interconnected only by information transfer (grey arrows) and their models are not modified when combined. However, as in the modular case, the layers are in feedback and must be considered simultaneously. C) Cascaded layering of the pathway, as introduced in this article. By allowing the layers to overlap in 'altered reactions' (green broken arrows) as well as species, the layers become cascaded, as information (grey arrows) is only transferred in one direction. The states of the green layer directly capture the incremental effect of extending the isolated Glc→Glyc pathway with the additional (merged) reaction GAP→Pyr. Since the information transfer between the layers is cascaded, they can be numerically integrated and analysed sequentially.

This paper is structured as follows: in the Methods section we show how to use a layered decomposition to identify the incremental effect of a functionality, making its context-dependence explicit. We continue by defining the interdependence, or mutual dynamics, between any two functionalities. We summarise this interdependence by the incompatibility and the cooperativity between functionalities. In the final part of the Methods section we describe how to analyse all functionalities and their dependencies with minimal computation. We demonstrate our method on three familiar biomolecular networks in the Results section. The first example is of two signalling pathways with two crosstalk mechanisms, in which we use our approach to quantify the nonlinear interactions between crosstalk mechanisms. In the second example we analyse an unstable pathway stabilised by two integral feedback loops, finding the interactions between each controller and the pathway, and also between the controllers. Finally, we consider the glycolytic pathway in Saccharomyces cerevisiae, with functionalities defined by an EFM analysis. We apply our approach to compare how different knock-out strategies in metabolic engineering influence the yield of ethanol, industrially relevant in biofuel production.

Network Representation and Layering

Consider a biochemical reaction network with N_X species X_i of time-varying concentrations x_i(t), taking part in N_R reactions R_1, ..., R_{N_R}, each of which proceeds at a concentration-dependent rate v_j(x(t)). The network dynamics are

\dot{x}(t) = S \, v(x(t)),    (1)

where the stoichiometric matrix S maps reaction rates to the rate of change of concentrations. The layered decomposition strategy [16,17] defines N_L new stoichiometric matrices S^1, ..., S^{N_L} such that S = S^1 + ... + S^{N_L}, and defines N_L associated state variables x^l taking values x^l(t) in R^{N_X} (see Fig 1B). Each layer's state x^l has dynamics

\dot{x}^l(t) = S^l \, v\Big(x_0 + \sum_{k=1}^{N_L} x^k(t)\Big),    (2)

from initial conditions x^l(0) = 0, for l = 1, ..., N_L. The original state's dynamics are recovered by summing the layers' states, x(t) = x_0 + \sum_{l=1}^{N_L} x^l(t). Denote by r = rank(S) the dimension of the original system, which defines the dimension of the manifold in R^{N_X} in which x(t) evolves. It follows that r^l = rank(S^l) defines the dimension of the state space of each layer. Hence, even though the state space of each layer is also embedded in R^{N_X}, each layer is a lower-dimensional system than the original system if r^l < r.
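For concreteness, the following minimal Python sketch implements the layered decomposition (1)-(2) for a toy two-species chain and verifies that summing the layers' states recovers the original trajectory. The network, rate constants, and solver tolerances are illustrative assumptions of ours, not any of the models analysed in this paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Minimal sketch of the (non-cascaded) layering in eqs. (1)-(2) on a toy
# chain X1 -> X2 -> 0 with mass-action rates. The network, rate constants
# and initial conditions are illustrative assumptions.
S = np.array([[-1.0,  0.0],
              [ 1.0, -1.0]])                  # N_X x N_R stoichiometry

def v(x):
    # concentration-dependent reaction rates v(x)
    return np.array([2.0 * x[0], 1.0 * x[1]])

x0 = np.array([5.0, 0.0])                     # network initial condition

# One layer per reaction: S = S1 + S2, each layer keeps one column of S.
S1 = S * np.array([1.0, 0.0])                 # column 1 only
S2 = S * np.array([0.0, 1.0])                 # column 2 only

def layered_rhs(t, z):
    x1, x2 = z[:2], z[2:]                     # layer states, x^l(0) = 0
    rates = v(x0 + x1 + x2)                   # rates at the summed state
    return np.concatenate([S1 @ rates, S2 @ rates])   # eq. (2)

lay = solve_ivp(layered_rhs, (0, 5), np.zeros(4),
                dense_output=True, rtol=1e-9, atol=1e-12)

# Reference: the unlayered network dynamics, eq. (1).
ref = solve_ivp(lambda t, x: S @ v(x), (0, 5), x0,
                dense_output=True, rtol=1e-9, atol=1e-12)

# Summing the layers' states recovers the original trajectory.
t = np.linspace(0, 5, 50)
assert np.allclose(x0[:, None] + lay.sol(t)[:2] + lay.sol(t)[2:],
                   ref.sol(t), atol=1e-6)
```

Note that in this non-cascaded form the two layers are coupled through the summed state, so they must be integrated together, which is exactly the limitation addressed by the cascaded extension below.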
In our previous work, we have applied this decomposition strategy by choosing the matrices S^l to reflect timescale separation [17], and to reflect the propagation of steady-state responses to parametric perturbations [16]. A feature of both of these approaches was that, in the form (2), each layer's dynamics depend on all other layers' states (as in Fig 1B, for example). Consequently, all layers had to be numerically integrated together, and the effect of one specific layer on all others could not be easily determined. Also, the approach was constrained to define layers by strict partitions of the reaction set, somewhat limiting its flexibility to capture the widest possible range of functional subsystems. In this article, we significantly extend the layering framework in two ways. First, we introduce the concept of functionalities, which are possibly overlapping sets of reactions working together for a common purpose. Second, to enforce a cascade structure between the functionalities, we adapt the layered dynamics corresponding to each functionality depending on its position in the cascade. The following section will use this cascade structure to define the incremental dynamic effect of each functionality.

Mapping Functionalities to Layers

Let a functionality F^i of a network, for i = 1, ..., N_L, be a subset of N_R^i reactions, F^i \subseteq {R_1, ..., R_{N_R}}, necessary to fulfil a given task of the network, where superscript integers index functionalities and their properties. It is assumed for the remainder of this section that these subsets are given, and that all reactions take part in at least one functionality. The question of how to choose each subset F^i remains out of the scope of this work. Nevertheless, there are numerous non-modular decomposition strategies taken in recent related research that we can use to justify this definition of a functionality. For example, Oishi and Klavins [37] identify control blocks as specific groups of reactions, connected by shared species. Kurata et al. [38] identified reaction groups forming 'flux modules' in the Escherichia coli heat shock response system. Similarly, the decomposition of signalling networks into component pathways exhibiting crosstalk [8] also identifies functionalities as groups of reactions. Finally, we can also consider elementary flux modes (EFMs) of metabolic networks [13] as being sets of reactions with the specific 'task' of converting one or more substrates into given products. Several of these examples are explored further in the Results section of this paper.
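In code, a functionality is naturally represented simply as a set of reaction indices. The short sketch below, with an illustrative stoichiometric matrix and reaction subsets of our own choosing, shows how the stoichiometric matrix of a functionality (or of a union of functionalities) is assembled by zeroing the columns outside the subset, as used throughout the remainder of this section.

```python
import numpy as np

def functionality_matrix(S, reactions):
    """Copy the columns of S belonging to the functionality; zero the rest."""
    Sf = np.zeros_like(S)
    idx = sorted(reactions)
    Sf[:, idx] = S[:, idx]
    return Sf

# Illustrative 3-species, 3-reaction network (assumption, for demonstration).
S = np.array([[-1.0,  0.0,  0.0],
              [ 1.0, -1.0, -1.0],
              [ 0.0,  1.0,  0.0]])

F1 = {0, 1}          # one task of the network
F2 = {1, 2}          # a second task; reaction 1 is shared, so F1 and F2 overlap

S_F1 = functionality_matrix(S, F1)
S_F12 = functionality_matrix(S, F1 | F2)   # union: reactions in F1 or F2
```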
In this section, we will assume that the functionalities are ordered by their index F^1, ..., F^{N_L}. We first identify the dynamics of the isolated functionality F^1 as the dynamics of a biomolecular network consisting of only the reactions associated with F^1. We then identify the conditional dynamics of the next functionality in the cascade as the effect of extending the pre-existing network with the reactions in the new functionality.

First consider, without loss of generality, the network defined by only the subset of reactions making up functionality F^1 \subseteq {R_1, ..., R_{N_R}}, taken in isolation from the other reactions. For given initial conditions x_0, we now identify the isolated dynamics of this functionality as the solution to the layer

\dot{x}^1(t) = S^1 \, v(x_0 + x^1(t)).    (3)

Here, the stoichiometric matrix S^1 is defined by copying the columns of the original stoichiometric matrix S in (1) corresponding to the reactions in F^1 and setting the other columns to zero. We will denote this trajectory x^1 = L(F^1), where the notation L represents a map from the functionality F^1 to the solution x^1 of the dynamics (3) from initial conditions x^1(0) = 0. Note that L(F^1) depends on the specific initial condition x_0 of the network (3), which is in general distinct from the initial condition x^1(0) = 0 of the layer's state. To make this dependence explicit, it is sometimes helpful (see Examples 2 and 3) to define a 'zero layer' F^0 with constant trajectory L(F^0) = x_0. We can then make clear that L(F^1) is dependent on the initial conditions by writing it as L(F^1 | F^0). The layered framework also implies that the absolute concentrations in this network are modelled by the translated trajectory x_0 + L(F^1 | F^0).

We next consider extending the functionality F^1 by combining it with the reactions in F^2. The extended network can be simulated through a similar process to the original network above, as follows. Define S^{1,2} as considering only the reactions in at least one of F^1 or F^2. Using S^{1,2} we can then simulate the layer corresponding to the extended network

\dot{x}^{1,2}(t) = S^{1,2} \, v(x_0 + x^{1,2}(t)),    (4)

the solution of which can be written L(F^1, F^2 | F^0) = x^{1,2}. This denotes the trajectory of the combined functionalities F^1 and F^2. The fact that each of (3) and (4) are layers with states in R^{N_X} implies that we can calculate the difference between each of the trajectories. This difference is clearly interpreted as the incremental effect of extending a network made up of the initial conditions F^0 and the isolated functionality F^1, by also including F^2. We thus define

L(F^2 | F^1, F^0) := L(F^1, F^2 | F^0) - L(F^1 | F^0) = x^{1,2} - x^1    (5)

as the conditional dynamics of F^2, given the specified context of F^1 and the initial condition layer F^0.

However, rather than simulating the layer (4) representing the combined functionalities, we may further exploit the layered framework described above to directly simulate L(F^2 | F^1, F^0). Suppose we already have x^1 = L(F^1 | F^0), found as the solution to the dynamics (3). We now define the layer

\dot{x}^2(t) = S^{1,2} \, v(x_0 + x^1(t) + x^2(t)) - S^1 \, v(x_0 + x^1(t)),    (6)

with S^1 and S^{1,2} given above. Note that this layer is downstream of (3), since it depends on the state x^1. It is clear from summing the vector fields in (3) and (6) that the sum (x^1 + x^2) of the layers' states follows exactly the same dynamics as the combined network's state x^{1,2} in (4). Thus, since x^2 = x^{1,2} - x^1, it follows that the dynamics (6) directly simulate L(F^2 | F^1, F^0), with the input L(F^1 | F^0) simulated by (3). We can rewrite the dynamics (6) corresponding to the simulation of L(F^2 | F^1, F^0) as

\dot{x}^2(t) = S^2 \, v(x_0 + x^1(t) + x^2(t)) + S^1 \, v_{alt}(t),    (7a)

where S^2 = S^{1,2} - S^1 corresponds to the reactions in F^2 \ F^1, and

v_{alt}(t) = v(x_0 + x^1(t) + x^2(t)) - v(x_0 + x^1(t))    (7b)

are the rates of the 'altered reactions': the rates of reactions in F^1 which are modified by the presence of F^2 (shown as broken green arrows in Fig 1C). This description allows us to see the degree to which F^2 is 'downstream' of F^1. For example, if v_{alt} = 0, then we can say that the reactions in F^1 are independent of those in F^2 and that F^2 is strictly downstream of F^1.
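The cascade (3)-(6) can be sketched as follows, again on the illustrative toy chain from the snippet above: the isolated layer L(F^1 | F^0) is simulated first, its interpolated trajectory is then fed as a time-dependent input to the downstream layer (6), and the sum of the two layers is checked against the combined layer (4). All network details here are assumptions for illustration only.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Sketch of the cascaded layering in eqs. (3)-(6) on the same illustrative
# toy chain X1 -> X2 -> 0 as above (all names and rates are assumptions).
S = np.array([[-1.0, 0.0], [1.0, -1.0]])
S1 = S * np.array([1.0, 0.0])       # F1 = {R1}
S12 = S                             # F1 and F2 = {R2} together cover S
v = lambda x: np.array([2.0 * x[0], 1.0 * x[1]])
x0 = np.array([5.0, 0.0])

# Isolated dynamics L(F1 | F0): eq. (3).
sol1 = solve_ivp(lambda t, x1: S1 @ v(x0 + x1), (0, 5), np.zeros(2),
                 dense_output=True, rtol=1e-9, atol=1e-12)

# Conditional dynamics L(F2 | F1, F0): eq. (6), with x^1(t) as an input.
def layer2_rhs(t, x2):
    x1 = sol1.sol(t)
    return S12 @ v(x0 + x1 + x2) - S1 @ v(x0 + x1)

sol2 = solve_ivp(layer2_rhs, (0, 5), np.zeros(2),
                 dense_output=True, rtol=1e-9, atol=1e-12)

# Check against the combined layer L(F1, F2 | F0): eq. (4).
sol12 = solve_ivp(lambda t, x: S12 @ v(x0 + x), (0, 5), np.zeros(2),
                  dense_output=True, rtol=1e-9, atol=1e-12)
t = np.linspace(0, 5, 50)
assert np.allclose(sol1.sol(t) + sol2.sol(t), sol12.sol(t), atol=1e-6)
```

Because the information transfer is one-directional, the two solve_ivp calls can be run strictly in sequence, which is the computational benefit of the cascade.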
Note that, especially for larger networks, many of the altered reaction rates in v_{alt} are zero and can be omitted (see Example 3), simplifying simulation. Given that the trajectory of x^1 is already determined from simulating (3), we can simulate either (6) or (7a), using x^1(t) as a time-dependent input, to obtain the conditional dynamics L(F^2 | F^1). The latter approach is taken in our examples (see Results section).

The definitions above easily extend to larger combinations of functionalities. In full generality, we can consider the network defined by the combination of n_1 functionalities F^1, ..., F^{n_1}, and its extension through the additional n_2 functionalities F^{n_1+1}, ..., F^{n_1+n_2}. By grouping the functionalities in this way, the definitions above apply directly to the simulation of L(F^{n_1+1}, ..., F^{n_1+n_2} | F^0, F^1, ..., F^{n_1}). Here we have defined a notation for the trajectory L(F^1, ..., F^{n_1} | F^0) of the biochemical network made up of the reactions which comprise an arbitrary combination of functionalities F^1, ..., F^{n_1}. We have also defined the change in trajectory L(F^{n_1+1}, ..., F^{n_1+n_2} | F^0, F^1, ..., F^{n_1}) incurred by extending that network with the additional reactions in F^{n_1+1}, ..., F^{n_1+n_2}. Finally, we have shown how to identify the dynamical systems that can simulate these trajectories.

Consider the two layers L(F^2 | F^1) and L(F^2) that both describe the effect of the functionality F^2. This effect is different depending on the presence or absence of F^1. The difference between these two trajectories defines how the presence of F^1 changes the behaviour of F^2; that is, the dependence of F^2 on F^1. We will now demonstrate how our layered analysis allows us to define the interdependence between two functionalities, thereby capturing the nonlinear effects arising from modelling a biomolecular network as being constructed from a combination of functional subsystems.

Calculating with Layers

In order to quantify the interactions between functionalities, we can exploit the layered formulation above. For simplicity, from this point on we suppress the F^0 notation, with the acknowledgement that all of the trajectories depend on the system's initial conditions L(F^0) = x_0. The definition of conditional dynamics in (5) implies that L(F^1, F^2) = L(F^2 | F^1) + L(F^1). This represents a layered cascade, where the dynamics of an integrated network are the linear combination of the conditional dynamics of its functionalities. There are two natural questions associated with this approach. First, how is the contribution of functionality F^2, considered in isolation, different from the conditional dynamics of F^2 when integrated with F^1? Secondly, how is the behaviour of the integrated F^1, F^2 network different from the linear combination of the isolated functionalities? That is, how different is L(F^1, F^2) from L(F^1) + L(F^2)? The answers to these two questions are the same. We denote the error incurred by approximating the integrated system as the linear combination of the isolated dynamics by the quantity M(F^1; F^2), defined as

M(F^1; F^2) := L(F^1) + L(F^2) - L(F^1, F^2),    (8)

which we call mutual dynamics. This can be interpreted as the nonlinearity that arises from integrating the two functionalities together. Note that, since M is defined symmetrically, we can use (5) to rewrite M as

M(F^1; F^2) = L(F^2) - L(F^2 | F^1) = L(F^1) - L(F^1 | F^2).    (9)

Therefore M measures how the function of F^2 (or F^1) is changed when considered in the context of F^1 (or F^2). Thus M(F^1; F^2) is a symmetric measure of the interdependence between the two functionalities.
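A minimal sketch of computing M(F^1; F^2), assuming the same illustrative toy network as in the previous snippets, is given below; it evaluates the definition (8) and cross-checks it against the equivalent form (9).

```python
import numpy as np
from scipy.integrate import solve_ivp

# Sketch: mutual dynamics M(F1; F2), eqs. (8)-(9), on the illustrative toy
# chain used above; all network details are assumptions, not from the paper.
S = np.array([[-1.0, 0.0], [1.0, -1.0]])
S1, S2 = S * np.array([1.0, 0.0]), S * np.array([0.0, 1.0])
v = lambda x: np.array([2.0 * x[0], 1.0 * x[1]])
x0 = np.array([5.0, 0.0])

def simulate(rhs):
    return solve_ivp(rhs, (0, 5), np.zeros(2), dense_output=True,
                     rtol=1e-9, atol=1e-12).sol

L1 = simulate(lambda t, x: S1 @ v(x0 + x))          # L(F1), eq. (3)
L2 = simulate(lambda t, x: S2 @ v(x0 + x))          # L(F2)
L12 = simulate(lambda t, x: S @ v(x0 + x))          # L(F1, F2), eq. (4)
# L(F2 | F1) via eq. (6), using L(F1) as a time-dependent input:
L2g1 = simulate(lambda t, x: S @ v(x0 + L1(t) + x) - S1 @ v(x0 + L1(t)))

t = np.linspace(0, 5, 200)
M_def = L1(t) + L2(t) - L12(t)        # eq. (8)
M_alt = L2(t) - L2g1(t)               # eq. (9): equal by construction
assert np.allclose(M_def, M_alt, atol=1e-6)
```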
We use (9) to calculate the mutual dynamics between two functionalities, which requires us to first obtain either both trajectories L(F^1) and L(F^1 | F^2), or alternatively both trajectories L(F^2) and L(F^2 | F^1). These trajectories can be simulated, as described in the previous section, or calculated by the methods described in 'Reducing Computational Burden' below.

We have been careful to make explicit, through our Bayesian-style notation, that the dynamics of all functionalities are context-dependent. It is also the case that the interdependence between any two functionalities is context-dependent. Therefore, we need to extend the definition of mutual dynamics to consider how the interdependence between F^1 and F^2 depends on the wider context of the network, which we denote by another functionality, F^3. Similarly to the definition above, we can define the conditional mutual dynamics between F^1 and F^2, given F^3, with the formula

M(F^1; F^2 | F^3) := L(F^1 | F^3) + L(F^2 | F^3) - L(F^1, F^2 | F^3),    (10)

to quantify the difference between the dynamics of the integrated and isolated functionalities F^1 and F^2, in the context of F^3. As before, this can also be expressed in terms of layered dynamics as

M(F^1; F^2 | F^3) = L(F^2 | F^3) - L(F^2 | F^1, F^3) = L(F^1 | F^3) - L(F^1 | F^2, F^3),

to quantify how the effect of F^2 on its context changes with the presence of F^1, and vice versa. The geometric intuition underlying the conditional mutual dynamics can be seen in Fig 2.

Fig 2. The cooperativity C(F^1; F^2) is the cosine of the angle between -M and the sum of the isolated dynamics. In the example shown, the cooperativity is negative, indicating that the isolated behaviour L(F^1) + L(F^2) is attenuated when F^1 and F^2 are integrated together, although also with some orthogonal effects. The strength of their interaction is proportional to the incompatibility I(F^1; F^2), defined as the ratio between the lengths of M(F^1; F^2) and L(F^1) + L(F^2).

The key interpretation of M is that it captures the nonlinearities that arise from combining F^1 and F^2 into a single network, conditioned on F^3 if necessary. The conditional mutual dynamics M(F^1; F^2 | F^3) is a time-varying, vector trajectory. We can base on M the following time-varying scalar, which we call the incompatibility and denote I(F^1; F^2 | F^3):

I(F^1; F^2 | F^3) := \frac{\| M(F^1; F^2 | F^3) \|}{\| L(F^1 | F^3) + L(F^2 | F^3) \|}.

Here, \| . \| represents the Euclidean norm. In this paper we use the unweighted Euclidean norm, but in certain cases it might be appropriate to introduce a weight, for example if the concentrations of the species in a network are at different orders of magnitude. One might also decide to set the weight of certain intermediate species of limited interest to zero (see below). This incompatibility measures the relative size of the error made by approximating the integration of two functionalities as the sum of their individual behaviour. To gain some intuition about this number, we can consider a number of special cases. If I(F^1; F^2 | F^3) = 0, then the trajectory of the integrated functionalities is simply the sum of the isolated functionalities' trajectories. If I is small, then

L(F^1, F^2 | F^3) \approx L(F^1 | F^3) + L(F^2 | F^3)

is a reasonable approximation, since the incurred error is relatively small. However, if I is of significant size, then the dynamics of the integrated functionalities can be expected to differ significantly from their individual behaviours. Besides I, which measures the relative size of the mutual dynamics M, the direction of M is also important, since this determines whether the integration of two functionalities together enhances or attenuates their individual dynamics, or causes other effects.
We define the cooperativity C as the cosine of the angle between -M and the sum of the isolated layers:

C(F^1; F^2 | F^3) := \frac{ -M(F^1; F^2 | F^3) \cdot \big( L(F^1 | F^3) + L(F^2 | F^3) \big) }{ \| M(F^1; F^2 | F^3) \| \, \| L(F^1 | F^3) + L(F^2 | F^3) \| },

with \cdot denoting the scalar product. See Fig 2 for a geometric representation of C. Note that when using a weighted norm, the scalar product should be weighted accordingly. Again, we consider a number of special cases. Suppose that C(F^1; F^2 | F^3) equals or is close to minus one, so that M is approximately parallel to, and pointing in the same direction as, L(F^1 | F^3) + L(F^2 | F^3). In this case the integrated dynamics can be approximated as an attenuation of the isolated behaviour,

L(F^1, F^2 | F^3) \approx (1 - I) \big( L(F^1 | F^3) + L(F^2 | F^3) \big).

Conversely, if the cooperativity equals or is close to one, the two functionalities enhance each other, in the sense that we can approximate the integrated behaviour as an amplification of the isolated behaviours,

L(F^1, F^2 | F^3) \approx (1 + I) \big( L(F^1 | F^3) + L(F^2 | F^3) \big).

However, once the cooperativity C equals or is close to zero, the mutual dynamics M are orthogonal to L(F^1 | F^3) + L(F^2 | F^3). This means that when the functionalities are integrated, both isolated functionalities are maintained, but there are also additional interactions (with an effect of strength I) in directions orthogonal to the summed isolated dynamics.

For example, suppose that the functionalities F^1 and F^2 correspond to sets of reactions responsible for mediating the cellular responses to two different input signals. By setting the influence of all but the common output (measured) species to zero, an incompatibility I(F^1; F^2) close to zero corresponds to an additive interaction of the input signals. If I(F^1; F^2) is larger, it may correspond to either a synergetic (for C(F^1; F^2) = 1) or antagonistic (for C(F^1; F^2) = -1) interaction (see e.g. [28]). Note that for a scalar output, orthogonal dynamics are not possible.

For systems composed of many different functionalities, one might also take the time averages <I> and <C> of the incompatibility and the cooperativity, respectively, over the simulation time \Delta T to obtain single measures quantifying the interactions between layers:

\langle I \rangle = \frac{1}{\Delta T} \int_0^{\Delta T} I(t) \, dt, \qquad \langle C \rangle = \frac{1}{\Delta T} \int_0^{\Delta T} C(t) \, dt.

Although they are useful for obtaining a first impression of how functionalities interact, time averages should be carefully applied. They can hide transient interactions between functionalities, including potential sign changes of the state-dependent cooperativities (see Example 1).
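Given sampled layer trajectories, the incompatibility, cooperativity, and their time averages can be evaluated pointwise. The following sketch assumes the trajectories are available as arrays on a common time grid (for example, from the snippets above) and uses the unweighted Euclidean norm; the helper names are our own.

```python
import numpy as np

# Sketch: pointwise incompatibility I(t) and cooperativity C(t), and their
# time averages. L1 and L2 are the (conditioned) layer trajectories and M
# the mutual dynamics, all arrays of shape (N_X, len(t)); these inputs are
# assumed to be given, e.g. sampled from the simulations above.

def incompatibility(M, L1, L2):
    num = np.linalg.norm(M, axis=0)
    den = np.linalg.norm(L1 + L2, axis=0)
    return num / den                        # undefined where den == 0

def cooperativity(M, L1, L2):
    Ls = L1 + L2
    num = np.einsum('it,it->t', -M, Ls)     # scalar product at each time
    den = np.linalg.norm(M, axis=0) * np.linalg.norm(Ls, axis=0)
    return num / den                        # undefined where den == 0

def time_average(f, t):
    # <I> or <C> over the simulation time Delta T = t[-1] - t[0]
    return np.trapz(f, t) / (t[-1] - t[0])
```

A weighted norm, as discussed above, would amount to scaling the rows of M, L1, and L2 before applying these functions.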
We have now identified how to measure the interdependence between two functionalities, which we have defined as the change in the dynamics of one functionality when the other is present. We have made explicit how this interdependence is itself dependent on the context of the rest of the network. In the remainder of this section we will describe how to minimise the computational burden incurred when calculating all possible interactions between functionalities.

Reducing Computational Burden

The notation L(F^1) and L(F^2 | F^1), describing the map from a functionality (or set of functionalities) to the resulting trajectory, simplifies the calculations we may wish to carry out to understand how the functionalities combine. Using the key definition of 'conditional dynamics' given by (5), we can prove a number of rules for combining layers which appear analogous to those known from Information Theory [39]. For example, for two random variables X and Y, the well-known quantities of joint entropy H(X, Y) = H(X) + H(Y|X) and mutual information I(X; Y) = H(X) - H(X|Y) are each definitions of the same form as those given above of conditional dynamics (5) and mutual dynamics (8) respectively. However, it is important to note that this similarity is only superficial, and any intuition gained by seeking analogies between our work and information-theoretic concepts should be applied carefully. This caveat applies in particular to the two results below.

Two lemmas allowing the quick combination of layer dynamics can be easily proved directly from the definitions of L(F^1) and L(F^2 | F^1), their extensions to larger combinations of functionalities, and Eq (5). The first is an analogue of Bayes' Rule, given by

L(F^1 | F^2) = L(F^2 | F^1) + L(F^1) - L(F^2).    (11)

We will demonstrate how this rule can be used for quickly deducing the incremental effects of layers when combined in a different order. This is fundamental, since a natural ordering of the layers is generally not given. A second rule, which is analogous to Bayes' Factor, is given by

L(F^3 | F^1) - L(F^2 | F^1) = \big[ L(F^3) - L(F^2) \big] + \big[ L(F^1 | F^3) - L(F^1 | F^2) \big].    (12)

This rule applies when we have a choice between integrating two functionalities into the F^1-only network. The difference in their effects is decomposed into the difference L(F^3) - L(F^2) between their isolated behaviours, summed with the difference L(F^1 | F^3) - L(F^1 | F^2) in the incremental effect of F^1 on each.
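The practical value of (11) is that a reordered layer can be deduced by trajectory arithmetic alone. The following sketch, assuming the illustrative toy chain from the snippets above, deduces L(F^1 | F^2) from already-simulated trajectories and confirms the result against a direct simulation.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Sketch: deducing a reordered layer via the Bayes'-rule analogue (11)
# instead of re-simulating it, on the illustrative toy chain from above.
S = np.array([[-1.0, 0.0], [1.0, -1.0]])
S1, S2 = S * np.array([1.0, 0.0]), S * np.array([0.0, 1.0])
v = lambda x: np.array([2.0 * x[0], 1.0 * x[1]])
x0 = np.array([5.0, 0.0])

def simulate(rhs):
    return solve_ivp(rhs, (0, 5), np.zeros(2), dense_output=True,
                     rtol=1e-9, atol=1e-12).sol

L1 = simulate(lambda t, x: S1 @ v(x0 + x))                         # L(F1)
L2 = simulate(lambda t, x: S2 @ v(x0 + x))                         # L(F2)
L2g1 = simulate(lambda t, x: S @ v(x0 + L1(t) + x)
                             - S1 @ v(x0 + L1(t)))                 # L(F2|F1)

t = np.linspace(0, 5, 200)
# eq. (11): L(F1 | F2) = L(F2 | F1) + L(F1) - L(F2), no new simulation.
L1g2_deduced = L2g1(t) + L1(t) - L2(t)

# Direct simulation of L(F1 | F2), eq. (6) with the roles swapped.
L1g2 = simulate(lambda t_, x: S @ v(x0 + L2(t_) + x)
                              - S2 @ v(x0 + L2(t_)))
assert np.allclose(L1g2_deduced, L1g2(t), atol=1e-6)
```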
We can use these rules to compute all possible functionality combinations with a minimal amount of simulation. In order to answer particular biological questions, we may be interested in the incremental effect of a given functionality on a specific 'base' network, such as those described in Examples 1 and 2 in the Results section. In other cases we may be interested in all possible interactions between the functionalities, such as the situation in Example 3. In a biochemical network whose reactions are decomposed into N_L functionalities, the latter case suggests that we must simulate all N_L layers for each of the N_L! different orderings of the functionalities, resulting in (N_L + 1)! - N_L! layers to be numerically solved. This burden can be significantly reduced using the calculation rules deduced above.

The cascaded layers representing all possible orderings of functionalities can be arranged in an acyclic directed graph, shown in Fig 3. Each node represents the trajectory arising from the incremental addition of a new functionality, given those already present. The graph is organised into levels, corresponding to the position of the new layer in the sequence. The root of the layering graph (referred to as Level 0) represents the given initial conditions x_0. Each node in Level l represents the dynamics of a functionality F^i conditioned on a subset of size l - 1 of the remaining functionalities F^j, j ≠ i. A directed edge from a node in Level l to a node in Level l + 1 exists if the node in Level l + 1 is conditioned on all functionalities taking part in the node in Level l. Each directed path from Level 0 to Level N_L (i.e. from the root to a leaf) represents one of the N_L! possible orderings of functionalities. By adding up the layer dynamics corresponding to the nodes in each path, the trajectory of the complete network is obtained. Each node in Level 1 represents the dynamics of an isolated functionality. The dynamics represented by the nodes at the leaves of the layering graph are also of specific interest: multiplying the layer dynamics in a given node in Level N_L (corresponding to a particular functionality F^i) by -1 gives the change in the dynamics that results from removing F^i from the system while keeping all other functionalities intact (see Example 3).

To obtain the dynamics of all layers for all orderings of functionalities, we can use rules (11) and (12) to exploit certain symmetries in the layering graph and reduce the number of numerical integrations of layer ODE systems. If all of the dynamics in Level l - 1 are already known, it is only necessary to numerically solve \binom{N_L}{l} layer dynamics in Level l. The rest of the level can then be deduced using Bayes' rule (11). For example, suppose we have the trajectories of all nodes in Level 1, and have simulated L(F^1 | F^0, F^2) in Level 2. Using (11) we can deduce L(F^2 | F^0, F^1) without having to simulate again. In fact, when considering a network with N_L functionalities, only 2^{N_L} - 1 layers have to be numerically integrated (see Example 1), compared to the (N_L + 1)! - N_L! integrations necessary when analysing all orderings separately. Although the number of integrations still grows exponentially with N_L, it nevertheless becomes possible to analyse systems with up to ten or more distinct functionalities in reasonable time. For N_L = 10, integrating every layer would require more than 3.5 · 10^4 times as much computational time as is required by exploiting (11).

Furthermore, the symmetry of the layering graph allows a certain degree of freedom in choosing which layers to simulate, and which to deduce from (11). In particular, one should always choose to simulate the simplest layers, with respect to the number of states or reactions in the respective ODE system. Fig 3 shows the situation where N_L = 3: we need to simulate 2^3 - 1 = 7 layers. We have decided to, wherever possible, simulate F^3 ahead of F^2, ahead of F^1. After simulating the isolated behaviour L(F^1 | F^0) of the functionality F^1, we can calculate its behaviour in any other context without having to simulate it again. For example, L(F^1 | F^0, F^2) can be calculated, using (11), as L(F^2 | F^0, F^1) + L(F^1 | F^0) - L(F^2 | F^0). Similarly, substituting the resulting trajectory into (11) once more means that we can calculate L(F^1 | F^0, F^2, F^3) as the linear combination L(F^3 | F^0, F^1, F^2) + L(F^1 | F^0, F^2) - L(F^3 | F^0, F^2). As can be seen by the indicated nodes in Fig 3, only one layer corresponding to F^1 had to be numerically simulated, whereas we simulated layers corresponding to F^2 twice, and layers corresponding to F^3 four times. Thus we can reduce computation time even further by avoiding the repeated simulation of high-dimensional layers.
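The bookkeeping behind these counts can be sketched as follows; the enumeration of (functionality, context) nodes per level and the comparison of the naive and reduced simulation counts are straightforward arithmetic, and the specific helper names are our own.

```python
from itertools import combinations
from math import comb, factorial

# Sketch: bookkeeping for the layering graph. With N_L functionalities,
# Level l holds the conditional dynamics of each F^i given every subset of
# size l-1 of the others. Exploiting rule (11), only comb(N_L, l) new
# simulations are needed per level, i.e. 2^{N_L} - 1 in total, versus
# (N_L + 1)! - N_L! when simulating every ordering separately.

def nodes_in_level(functionalities, level):
    """All (new functionality, context) pairs making up one level."""
    nodes = []
    for context in combinations(functionalities, level - 1):
        for f in functionalities:
            if f not in context:
                nodes.append((f, context))
    return nodes

def simulations_needed(n_l):
    naive = factorial(n_l + 1) - factorial(n_l)
    reduced = sum(comb(n_l, l) for l in range(1, n_l + 1))  # = 2**n_l - 1
    return naive, reduced

print(simulations_needed(4))    # -> (96, 15), matching Example 1 below
print(simulations_needed(10))   # ratio of the two exceeds 3.5e4
```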
Results

We now apply our layered approach to three examples of multi-functional biomolecular systems. In each case, we show how the effect of each functionality in a network depends on the others. We exploit the formulation of mutual dynamics to quantify the interdependence between different functionalities.

Example 1: Crosstalk

The high osmolarity glycerol and the pheromone response mitogen-activated protein (MAP) kinase pathways in S. cerevisiae share the common species Ste11 [40]. Such a common species can constitute a mutually excitatory crosstalk mechanism by cross-activating one pathway upon activation of the other. However, McClean et al. [8] observed that only one pathway responds, and deduced that a second, mutually inhibitory crosstalk mechanism exists. They constructed a model, available as Model 115 in the BioModels Database [41], that includes both crosstalk mechanisms. As an initial example of our approach we implemented a layered decomposition of this model, shown in Fig 4. The mathematical description of the complete model and of all layers can be found in the Supporting Information.

We applied our framework to systematically investigate the function of each of the crosstalk mechanisms. We first considered the effects of each crosstalk mechanism on the signalling pathways. This gives some indication of the function of each crosstalk mechanism, but our framework takes this analysis further. The effect of excitatory crosstalk on the network is altered by the presence of inhibitory crosstalk, and vice versa. The conditional mutual dynamics between the two crosstalk mechanisms, integrated with the crosstalk-free network, can be used to quantify their interdependence.

In the model, we set the strength of the mutually excitatory crosstalk to k_a = 0.1 and the strength of the mutually inhibitory crosstalk to k_d = 1, corresponding to a monostable network (see Fig. 1 in [8]). We assume zero initial conditions. The three reactions R_1, R_2, and R_3 represent the first signal transduction pathway, which affects species concentrations X_1-X_3, so that we define the functionality F^1 = {R_1, R_2, R_3}. Similarly, we choose F^2 = {R_4, R_5, R_6} to represent the second signal transduction pathway, affecting species Y_1-Y_3. The mutually inhibiting crosstalk between the two pathways comprises F^3 = {R_7, R_8}. Finally, the mutually excitatory crosstalk is given by F^4 = {R_9, R_10}.

Fig 4. Signalling pathways with crosstalk. This figure shows two signalling pathways, with two crosstalk mechanisms. Functionality F^1 = {R_1, R_2, R_3} is the X pathway; F^2 = {R_4, R_5, R_6} is the Y pathway; the mutually inhibiting crosstalk is F^3 = {R_7, R_8}; and the mutually excitatory crosstalk is F^4 = {R_9, R_10}. Model adopted from [8].

We first calculated all of the possible incremental effects of each functionality F^i integrated with each possible subset {F^j | j ≠ i} of the others. Since we have N_L = 4 functionalities, we only need to simulate 2^{N_L} - 1 = 15 layers to be able to calculate all N_L · N_L! = 96 layer dynamics for all orderings of the functionalities. Fig 5 shows the trajectories corresponding to several of these nodes. The plots were generated for time-varying, piecewise-constant inputs S_1 and S_2, switching between 0 ('off') and 5 ('on') at times t = 20, 40, 80, and 100 so as to step through the different input combinations. The switching times between the input combinations were chosen such that, at the end of each period, the species of the network have converged to their corresponding steady-state concentrations.

The top-left plot in Fig 5 shows the response L(F^1, F^2, F^3, F^4) of the entire network to this input pattern. Each of the other plots depicts the incremental effects of integrating a certain functionality with a group of others. Fig 6 depicts the trajectories of the mutual dynamics between several pairs of functionalities, also calculated from the 15 simulations of layered ODE systems. Below these trajectories we plot their corresponding incompatibilities I and cooperativities C.

We consider the basic network comprised only of F^1 and F^2, where neither crosstalk mechanism is active. From L(F^1) and L(F^2) in Fig 5 we see that after t = 20 (respectively, t = 40) the concentrations of species X_1-X_3 (respectively, Y_1-Y_3) quickly saturate. It is easily shown that the mutual dynamics between the two isolated pathways M(F^1; F^2) = 0. This means that there is no interdependency between the pathways, and their integrated dynamics equal the sum of their isolated dynamics.
Of course, this conclusion is intuitively clear, because we are considering a network with neither crosstalk mechanism active. We are now in a position to investigate what each crosstalk mechanism does to this basic network, by identifying the effects of each of F^3 and F^4 in turn.

We first identify the effect of the mutual inhibitory crosstalk F^3 on the basic network. This is depicted in Fig 5 by L(F^3 | F^1, F^2). The input signals S_1 = S_2 = 5 are both 'on' during two time intervals, t ∈ [40, 80] and t ≥ 100. In each of these time intervals, we can observe in L(F^3 | F^1, F^2) two different steady states of X_1-X_3 and Y_1-Y_3. Thus one effect of mutual inhibition is that the resulting network is bistable. We can also observe that the X_1-X_3 values of L(F^3 | F^1, F^2), which show the effect of F^3 on the X_i concentrations, become sufficiently negative during t ∈ [80, 100] that they cancel out the positive values of X_1-X_3 in L(F^1, F^2). Hence, we can conclude that, by integrating mutual inhibition with the basic network, the removal of S_1 at t = 80 now causes the corresponding pathway to deactivate.

Fig 5. If the mutually inhibitory crosstalk L(F^3 | F^1, F^2) is added first, the network becomes bistable, then monostable again by then including the mutually excitatory crosstalk L(F^4 | F^1, F^2, F^3). Adding first the excitatory crosstalk L(F^4 | F^1, F^2) leads to strong cross-activation of the pathways, which is significantly weakened by the mutual inhibitory crosstalk L(F^3 | F^1, F^2, F^4).

We can also analyse the interaction between mutual inhibition and the basic network in terms of the mutual dynamics M(F^3; F^1, F^2), as depicted in Fig 6. For t ≥ 40, the cooperativity C(F^3; F^1, F^2) is always negative, and the incompatibility I(F^3; F^1, F^2) is high. Thus, the effect of integrating F^3 with (F^1, F^2) is to strongly attenuate the levels of all species from their saturated state.

We next consider the effect on the basic network of mutually excitatory crosstalk F^4, depicted by L(F^4 | F^1, F^2) in Fig 5. We will focus in particular on the time interval t ∈ [20, 40], where S_1 is first activated. For this time interval, the concentrations of Y_2 and Y_3 increase, while the concentrations of X_1-X_3 are transiently reduced. Thus the effect of including mutual excitation is that a non-zero input S_1 is sufficient to activate the second pathway, as well as the first. As in the previous case, we can also analyse the interaction between mutual excitation and the basic network in terms of the mutual dynamics M(F^4; F^1, F^2), as depicted in Fig 6. In this time interval, the cooperativity C(F^4; F^1, F^2) is briefly negative before returning to zero, while the incompatibility I(F^4; F^1, F^2) increases monotonically and converges to around 0.81. Negative cooperativity implies that mutual excitation attenuates the isolated dynamics, although only marginally, since I is small. This attenuation is also only transient; C approaches zero and I approaches approximately 0.81 as t → 40. Thus, by the end of this time interval, integrating mutual excitation creates a significant additional effect in a direction orthogonal to the isolated dynamics. This is intuitively clear, since the incremental effect of mutual excitation is the additive excitation of Y_2 and Y_3, orthogonal to the saturation of X_1-X_3 during t ∈ [20, 40]. We have identified the effect of each crosstalk mechanism on the basic network.
However, we can see from the plots of L(F^3 | F^1, F^2, F^4) and L(F^4 | F^1, F^2, F^3) that the function of each crosstalk mechanism (when defined as its incremental effect on a network) is very different when the other crosstalk mechanism is present. We can see from the first of these trajectories that one effect of integrating inhibition into the cross-activated network is to effectively insulate the Y_1-Y_3 pathway, since the excitation shown by L(F^4 | F^1, F^2) during t ∈ [20, 40] is cancelled out by the values of L(F^3 | F^1, F^2, F^4). Furthermore, on comparing L(F^3 | F^1, F^2, F^4) with L(F^3 | F^1, F^2) we can see that mutual inhibition has remarkably different incremental effects depending on whether or not mutual excitation is present. This can be quantified by observing the trajectory of M(F^3; F^4 | F^1, F^2) and the associated C and I values in the right-hand plots of Fig 6. For t ∈ [20, 40], the value of I(F^3; F^4 | F^1, F^2) is large, and C(F^3; F^4 | F^1, F^2) is close to -1. This confirms the assertion above that, in this time interval, the interdependence between crosstalk mechanisms causes them to approximately cancel each other out, so that the behaviour of the entire system is much like that of the isolated pathways. We can observe that, in this time interval, the effect of mutual inhibition on the basic network L(F^3 | F^1, F^2) is zero. However, non-zero mutual dynamics M(F^3; F^4 | F^1, F^2) in this time interval mean that the presence of mutual excitation causes the inhibition crosstalk to have a non-zero function.

To summarise, we have demonstrated how to use the concepts of conditional dynamics L(F^i | F^1, F^2) and mutual dynamics M(F^i; F^1, F^2) to analyse the effect of each crosstalk mechanism on a pair of isolated pathways. We extended this analysis by using M(F^3; F^4 | F^1, F^2) to quantify how, by integrating the crosstalk mechanisms together, they influence one another's isolated functions.

Example 2: Controller

This example illustrates the application of our layered framework to a toy network, comprised of a biomolecular cascade containing two integral feedback motifs (Fig 7). This toy network was adapted from a motif identified in [42] as sufficient to exhibit perfect adaptation to changing external input concentrations, and as underlying the ability of the chemotaxis pathway in E. coli to show near-perfect adaptation to extracellular chemoattractant concentrations [43-45]. The network consists of two intermediates, Y_1 and Y_2. The species Y_1 is produced at a time-varying rate k(t) depending on the concentrations of one or more external species. Intermediate Y_1 is irreversibly converted to Y_2 at rate V_1(y_1), and Y_2 is consumed at rate V_2(y_2), each following Michaelis-Menten kinetics (see Fig 7). For certain production rates of Y_1, V_1 is close to saturation, so that the concentration of Y_1 eventually becomes unstable for high rates k(t). To stabilise the network and achieve perfect adaptation of the concentrations of Y_1 and Y_2 to slowly changing production rates k(t), the network needs to be controlled. Such controllers can be implemented through additional Michaelis-Menten reactions V_a(y_1) and V_b(y_2) forming species A and B from Y_1 and Y_2, respectively. A and B are consumed with Michaelis-Menten rates V_{-a}(a) and V_{-b}(b) respectively, both of which are assumed to be close to saturation.
The two control loops are closed by non-competitive inhibition of the production of Y_1 and of the conversion of Y_1 to Y_2, modelled by scaling the respective production rates with the factors \phi_a(a) and \phi_b(b), as shown in Fig 7. The dynamics of the system can then be modelled by ODEs of the form

\dot{y}_1 = k(t)\,\phi_a(a) - V_1(y_1)\,\phi_b(b) - V_a(y_1), \qquad \dot{y}_2 = V_1(y_1)\,\phi_b(b) - V_2(y_2) - V_b(y_2),
\dot{a} = V_a(y_1) - V_{-a}(a), \qquad \dot{b} = V_b(y_2) - V_{-b}(b),

with the rates V following Michaelis-Menten kinetics and \phi_a, \phi_b the non-competitive inhibition factors. Reactions V_a and V_b have two distinct effects on the pathway: first, they decrease the concentration of Y_i independently of any interaction of the controller; and second, they increase the concentrations of A and B and are thus the means by which the controller observes the state of the network. To distinguish these two effects mathematically, in the dynamics above we have distributed each of V_a and V_b into two reactions with the same rates, one only affecting the controller, and the other only the pathway.

To analyse the system, we define three functionalities, corresponding to the uncontrolled pathway F^1 (columns 1-4 and 7 in S) and the two controllers F^2 (columns 5 and 6) and F^3 (columns 8 and 9). The initial condition x_0 was set to represent the steady-state concentrations of the species for k = 3. Note that, in order to make explicit the dependency of the layered dynamics on this initial condition, we created a 'zero layer' F^0 with constant trajectory L(F^0) = x_0, as described in the Methods section. The ODEs describing the layers' dynamics for each ordering of the functionalities, the parameter values, as well as the precise initial conditions for this example can be found in the Supporting Information.

We then determined the dynamics of the layers when applying a step change of the Y_1 production rate from k = 3 to k = 10 at t = 50. Fig 8 displays the dynamics of the pathway, the conditional dynamics of the controllers given the pathway for both orderings of the controllers, and the complete dynamics of the system.

Similarly to the previous example, we will consider a basic network consisting of only the uncontrolled pathway, F^1. We are then able to identify the effect of each controller as the conditional dynamics of F^2 and F^3 given the basic network. However, the effect on the basic network of both controllers together will be different to the sum of each controller's individual effect. We will thus compute the mutual dynamics between the two controllers in order to analyse how the controllers interact with one another when both are integrated in the context of the basic network.

We first consider the isolated dynamics L(F^1 | F^0) of the uncontrolled pathway which makes up the basic network, depicted in Fig 8. After the step increase at t = 50, the two reactions decreasing Y_1 quickly saturate. The isolated system is therefore unstable, and the Y_1 component of L(F^1 | F^0) increases to infinity at a positive constant slope. We will now use this basic network to define the function of each of the controllers.

Integrating the controller F^2 with the pathway stabilises the network. This is indicated by the trajectory of L(F^2 | F^0, F^1) in Fig 8. In particular, the Y_1 component of this trajectory decreases to minus infinity with approximately -1 times the rate of increase of Y_1 in the isolated pathway. Another way to observe this stabilising effect is to consider the mutual dynamics M(F^1; F^2) between the pathway and the controller, shown in Fig 9. The incompatibility I(F^1; F^2 | F^0) between pathway and controller approaches one, while the cooperativity C(F^1; F^2 | F^0) approaches minus one.
We interpret these values as the two functionalities cancelling one another out; in other words, F^2 stabilising F^1.

We next consider the effect of the second controller F^3, depicted in Fig 8 by L(F^3 | F^0, F^1). The Y_1 component of L(F^3 | F^0, F^1) also diverges to positive infinity, albeit at a much slower rate than that of the basic network. Intuitively, the second controller cannot stabilise the basic network, since it has no means to observe the unstable Y_1 component. Instead, its interventions further destabilise the pathway. We can quantify this through the mutual dynamics between the controller and pathway. Indeed, since C(F^1; F^3 | F^0) ≈ 1, the second controller amplifies the unstable dynamics. However, since I(F^1; F^3 | F^0) ≪ 1, this effect is only minimal.

We now consider the interaction between the two controllers when combined with the uncontrolled pathway. Interestingly, the effect of the second controller when combined with the first is only transient, as shown by L(F^3 | F^0, F^1, F^2) in Fig 8. However, the mutual dynamics M(F^2; F^3 | F^0, F^1) between the two controllers have a Y_1 component increasing slowly to infinity, and the cooperativity between the controllers C(F^2; F^3 | F^0, F^1) approaches 1. This positive cooperativity indicates that the first controller increases its interventions to stabilise the network when the second controller is present.

That the two controllers, given the pathway, act cooperatively might initially appear surprising, since the first controller stabilises the network while the second destabilises it. This indicates the utility of our notation in allowing a rigorous quantification of how functionalities interact, given their environment. Consider the following three cases for how the two controllers F^2 and F^3 interact. First, consider the interaction M(F^2; F^3 | F^0, F^1) between the two controllers when integrated with the pathway. Since the presence of F^3 increases the control action by F^2 necessary to stabilise the pathway, the cooperativity given the pathway, C(F^2; F^3 | F^0, F^1), approaches 1, as described above. Next, consider the interaction M(F^2; F^3 | F^0) between the two controllers without the pathway. They cannot influence each other, since the respective subnetworks are not connected. Consequently they do not interact, and hence M(F^2; F^3 | F^0) = C(F^2; F^3 | F^0) = I(F^2; F^3 | F^0) = 0. Finally, consider the interaction M(F^2; (F^1, F^3) | F^0) between the first controller F^2 and the unstable network comprised of the pathway F^1 extended with F^3. The first controller stabilises the extended pathway, which is represented by the cooperativity C(F^2; (F^1, F^3) | F^0) approaching minus one, as F^2 acts to attenuate the instability of F^1, F^3. Thus, we have demonstrated that the rigorous definition of interdependence using our cascaded layering framework allows us to identify which components of a network amplify or attenuate each other, by defining the components which interact and the context in which they do so.

Example 3: Metabolic Fluxes in Glycolysis

Glycolysis is a central ten-step process in most organisms, responsible for the production of energy in the form of ATP and NADH by catabolism of glucose and other sugars (see [46], p. 88 ff.). Besides being one of the best-studied metabolic pathways, it is also an important starting point for biotechnological processes [47]. For non-growing S. cerevisiae cells, a kinetic model of the glycolytic pathway was created by Hynne et al.
[48], including fermentation, glycerol production, lactonitrile and glycogen formation, and cellular import and export processes for glucose and other metabolites. We will use this model (available at the BioModels Database [41], model 61) to exemplify how to apply our layering approach in the context of metabolic engineering.

In this section we use elementary flux modes (EFMs) [13]: minimal functional pathways which can carry non-zero fluxes at steady state and which fulfil positivity constraints for irreversible reactions. From the unique set of EFMs of a network, all steady-state flux distributions can be obtained by non-negative linear combinations of EFMs. Due to their simplicity, EFMs can often be associated with certain elementary 'tasks' of a network, like the production of one or more final products from various available extracellular substrates. Thus EFMs are natural candidates to represent functionalities.

The model in [48] includes the dynamics of the cofactors NAD+, NADH, AMP, ADP, and ATP, with the consumption of ATP by the rest of the cell modelled by first-order mass action kinetics. The cofactor concentrations can be interpreted as control inputs to the junctions of the glycolytic pathway, dynamically channelling the metabolite flux into the different branches depending on cellular requirements, e.g. during hyperosmotic conditions [49]. The feedback loop is closed by an integral controller 'observing' the consumption and production of the cofactors by the glycolytic pathway and other cellular processes.

We established a mixed layer structure for the glycolysis model (Fig 10). This mixed layer structure consists of the EFMs (not taking into account mass balance of the cofactors) as functionalities in a cascaded layer structure. The dynamics of the cofactors, together with ATP consumption (ATP→ADP, Reaction 23 in [48]) and the AK reaction (ATP + AMP ⇌ 2 ADP, Reaction 24), form a control layer that communicates with all layers in the cascade without being part of the cascade.

Using the software efmtool [50], we identified eight EFMs for the modified network where the cofactor dynamics were removed by setting the appropriate rows in the stoichiometric matrix to zero. Three of the identified EFMs were non-negative linear combinations of other EFMs, a consequence of treating each reversible reaction as two separate irreversible reactions. Based on the concept of simplicity (condition C3 in [13]), we removed the linearly-dependent EFMs with the lowest number of zero entries. The remaining five EFMs can be interpreted as representing elementary 'tasks' of the network, among them glycogen buildup (production and storage of glucose-6-phosphate, G6P; functionality F^1), production and excretion of glycerol (F^2), and fermentation (production and excretion of ethanol; F^3); the remaining two EFMs (F^4 and F^5) represent the branches further downstream, with F^5 consuming acetaldehyde together with extracellular cyanide.

Since the cofactors participate in many of the reactions, our approach of not taking them into account when calculating the EFMs is conceptually similar to classifying metabolites taking part in more than a threshold number of reactions as 'external', as proposed in [51]. Both approaches result in the same significantly simpler and biologically interpretable set of EFMs. However, unlike the subnetworks identified in [48], the EFMs are not necessarily redox neutral. We will briefly discuss the consequences of this at the end of this section.

In Fig 10 we represent the common reactions of the five EFMs by a graph. Surprisingly, the graph is a binary tree, indicating that, while there are junctions in the reaction pathway of the metabolites, there are no joins.
Furthermore, there exists one main branch, consisting of Reactions 1-11 and Reaction 18, and each EFM can be represented by a unique junction from this main branch: EFM F^1 is the junction after Reaction 3, F^2 after Reactions 6 and 7, F^3 after Reaction 11, and F^4 and F^5 after Reaction 18. This property provides a natural order for the EFMs, with F^1 at the top of the cascade and F^5 at the bottom. The ODEs describing the layers' dynamics for this ordering of the functionalities, the dynamics of the control layer, the parameter values, and the initial conditions can be found in the Supporting Information.

The layers and their interconnection can be represented as a species-reaction graph (SR graph, see [18]) depicting the reactions, the non-zero altered reactions, the species with non-zero dynamics in each layer, and the inter-layer communication (Fig 10). Recall from (7b) that the vector of altered reactions for layer i is defined by

v_{alt}^i(t) = v\Big(x_0 + \sum_{k=1}^{i} x^k(t)\Big) - v\Big(x_0 + \sum_{k=1}^{i-1} x^k(t)\Big).

In this example we identified reactions j where v_j(x_0 + x^1 + ... + x^i) = v_j(x_0 + x^1 + ... + x^{i-1}), meaning the rate of the reaction is not altered by the interconnection of functionality F^i. A sufficient condition [18] to conclude that altered reaction j does not change the dynamics of Layer i, and thus can be omitted, is that the jth element of a structural reachability expression, built from exp(·) applied to the indicator patterns of the layers' stoichiometric matrices, is zero. Here, exp(·) represents the matrix exponent, S^k the stoichiometric matrix of layer k, I(·) the element-wise indicator function, being one if the respective element is non-zero and zero otherwise, and 1 the N_R × 1 vector with all elements equal to one. Similarly, the integration of a species can be omitted in a layer if it is not affected by any of that layer's reactions.

Fig 10. The binary tree structure of this graph is a special property of the glycolytic model [48]. C) Species-reaction graph (compare [18]) representing the models of the cascaded layers as well as the inter-layer information transfer. Rounded rectangles represent metabolite concentrations, diamonds reactions. An arrow is drawn from a metabolite to a reaction if the reaction rate depends on the metabolite, and from a reaction to a metabolite if firing of the reaction changes the metabolite's concentration. Colours and superscripts indicate the layer to which species and reactions belong. Reactions with only one border correspond to altered reactions. Black arrows are dependencies inside the model of the respective layer; red arrows indicate information transfer between layers. Grey arrows represent links to cofactors in the control layer; reactions R23 and R24 are part of the control layer. Graphs (B) and (C) drawn with Graphviz [57].

Fig 10 shows that each layer is comparatively small and includes few altered reactions. Furthermore, since reactions R_13 and R_14 are linear (in the metabolites), information about the metabolite concentrations involved in these reactions is not required to be transmitted between the layers (for example, EtOH^3 and EtOHX^3 are not transmitted to Layers 4 and 5). This demonstrates that, for many biological networks, layers and the interfaces between them are comparatively simple.

We equilibrated the model for a mixed-flow glucose concentration of 10 mM and initialised the dynamics of the initial condition layer L(F^0) = x_0 to these steady-state values. This corresponds to low glucose conditions in the range of the K_m values of high-affinity glucose transporters [52], and slightly below the mixed-flow glucose concentration for which the parameters of the model were identified (18.5 mM), to prevent glycolytic oscillations (see [48]). All other parameters and medium conditions were kept unchanged as compared to the version of the glycolysis model [48] available at the BioModels Database. We then calculated the layer dynamics for each ordering of the EFMs into all possible cascade structures by simulating the layering graph for 1000 min, as described in Methods. This requires the simulation of 2^5 - 1 = 31 layers to populate the 5 · 5! = 600 nodes of the graph.
Based on the layering graph we then analysed all pairwise interactions between F_i and F_j, given all possible combinations of other functionalities. In Fig 11, we display the steady-state incompatibilities and cooperativities between the EFMs after convergence at the end of the simulation. Note that although the incompatibilities and cooperativities converge, this is not necessarily true for the corresponding layer and mutual dynamics, which diverge for some EFMs (compare Fig 9).

Several pairs have an incompatibility close to one and a cooperativity close to minus one. This combination is typical if one of the functionalities (given its environment) is unstable but is stabilised by the other. For example, in the isolated layer L(F3 | F0), pyruvate increases to infinity since the PDC reaction (Reaction 11 in [48]) is saturated and becomes rate limiting. Integrating either F1 or F2 with F3 reduces the production rate of pyruvate and thus stabilises the network, since both L(F1, F3 | F0) and L(F2, F3 | F0) are stable. On the other hand, in the isolated layer L(F5 | F0) both pyruvate and acetaldehyde (ACA) concentrations are unstable, the latter due to limited cyanide (CNX) availability. Integrating F1 or F2 with F5 only stabilises the pyruvate concentration, but not acetaldehyde.

The prediction of infinite growth of species concentrations is unlikely to be observed experimentally. Instead, the instability might indicate that the conversion of pyruvate to acetaldehyde catalysed by pyruvate decarboxylase (pdc) becomes a rate-limiting step when synthetically increasing the flow through the respective EFM. Indeed, it was experimentally validated [53] that under certain conditions over-expression of pdc after reducing glycerol synthesis can lead to increased growth rates and ethanol yield.

The incompatibilities between two EFMs, given the three other EFMs, are of specific interest (rightmost ten interactions in Fig 11). They indicate that F1 and F3 are significantly more incompatible than F2 and F3. Recall that F3 corresponds to the fermentation capability of the network. To increase biofuel production one would intuitively expect that the highest yields could be obtained by knocking out or down the EFMs which are most incompatible with F3, thereby removing EFMs which act to attenuate its function. Recall that the effect of knocking out a certain functionality F_i on the overall dynamics of the network is equal to −L(F_i | F_j, j ≠ i). Thus, the effect of knocking out GPD (Reaction 15) and, thus, glycerol production F2 is −L(F2 | F0, F1, F3, F4, F5). The effect of a double knock-out of glycerol production together with glycogen build-up F1 corresponds to −L(F1, F2 | F0, F3, F4, F5). The theoretical effect on fermentation efficiency of all possible combinations of knock-outs (Fig 12) can thus be directly derived, given the layering graph.
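One plausible way to compute such pairwise measures from simulated trajectories is sketched below. The mutual dynamics follow the formulation M(F1, F2 | C) = L(F1, F2 | C) − L(F1 | C) − L(F2 | C) referred to in this paper, while the specific normalisations used here for incompatibility and cooperativity are illustrative assumptions and may differ from the exact definitions in Methods.

import numpy as np

# Trajectories of the conditional layer dynamics are assumed available as
# arrays of shape (timepoints, species).
def mutual_dynamics(L12, L1, L2):
    """M(F1, F2 | context) = L(F1,F2|C) - L(F1|C) - L(F2|C)."""
    return L12 - L1 - L2

def incompatibility(L12, L1, L2, eps=1e-12):
    """Strength of the interaction: size of M relative to L1 + L2, in [0, 1)."""
    M = mutual_dynamics(L12, L1, L2)
    return np.linalg.norm(M) / (np.linalg.norm(L1 + L2) + np.linalg.norm(M) + eps)

def cooperativity(L12, L1, L2, eps=1e-12):
    """Direction of the interaction: cosine between M and L1 + L2, in [-1, 1]."""
    M = mutual_dynamics(L12, L1, L2)
    S = L1 + L2
    return float(np.sum(M * S) / (np.linalg.norm(M) * np.linalg.norm(S) + eps))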
Indeed, this knock-out analysis confirms that fermentation F3 is significantly more incompatible with glycogen build-up F1 than with glycerol production F2 for the given experimental setup. The effect of knocking out more than one functionality is non-linear, and a double knock-out of the two functionalities yielding the highest gains when knocked out separately is not necessarily optimal, as shown in Fig 12. Thus, the layering graph provides an effective tool to analyse such potential interactions.

Fig 11. Compatibility of EFMs in the glycolytic pathway. The background shade of the matrix elements depicts the steady-state incompatibility between the respective functionalities on the y-axis, given the functionalities on the x-axis, at a mixed-flow glucose concentration of 10 mM (for notational convenience, dependencies on F0 were assumed). For example, the shade of the element on the second row and twelfth column depicts I(F1; F3 | F2, F4). The direction of the arrows depicts the steady-state cooperativity between the respective elements, with straight upwards corresponding to cooperativity equal to one, straight downwards equal to minus one, and horizontal equal to zero. Finally, the colour of the arrows provides further information about the interaction: (black) the sum of both conditioned functionalities alone as well as the mutual dynamics are stable, and the overall network is stable; (grey) same as black, only the overall network is unstable, so the instability is in a layer the two functionalities are conditioned on; (green) the sum and the mutual dynamics are unstable and the overall network stable, i.e. the interaction stabilises the network; (yellow) same as green, but the overall network is unstable, typical when the interaction only stabilises some of the species, or if the speed of divergence is smaller; (blue) the sum is unstable and the mutual dynamics and the overall network are stable, typical if the instability was in a layer the functionalities are conditioned on, and one of the functionalities stabilises the network.

It is out of the scope of this article to take side effects of the proposed genetic modifications into account, such as possible viability issues when knocking out some of the pathways. Our intention was to provide a proof of principle of how to apply our layering framework in the context of metabolic engineering. Notably, our analysis is based on a model of a non-industrial strain of S. cerevisiae cells grown under glucose starvation, which was created mainly to explain glycolytic oscillations at low glucose conditions [48] rather than being optimised for metabolic engineering purposes. Furthermore, we essentially avoided the challenge of balancing the redox state (NAD+/NADH ratio) by our mixed layering approach (Fig 10), in which the control layer was always receiving the cofactor consumption and production rates of all cascaded layers while simulating the layering graph.

Nevertheless, our analysis indicates that ethanol yield can be increased by approximately 6% by knocking out glycerol formation (F2). This prediction is in good agreement with a reported [54] experimental increase of about 7% after inhibition of glycerol formation from dihydroxyacetone phosphate (DHAP) by knock-down of glycerol-3-phosphate dehydrogenase (GPD). In [54] the redox state was balanced by synthetically engineering an alternative route in the endogenous Embden-Meyerhof-Parnas pathway based on NADP+ and NADPH instead of NAD+ and NADH.
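The resulting knock-out screen can be organised as a simple lookup over subsets of functionalities, as in the following sketch; the ethanol-yield numbers are invented placeholders standing in for values derived from the layering graph.

from itertools import combinations

# The effect of removing a set K of EFMs is -L(K | all others), so given
# the layering graph every combination can be scored without new simulations.
knockout_effect_on_ethanol = {
    frozenset({"F1"}): +0.04,            # glycogen build-up knocked out
    frozenset({"F2"}): +0.06,            # glycerol production knocked out
    frozenset({"F1", "F2"}): +0.07,      # double knock-out: not additive
    frozenset({"F4"}): -0.01,
    frozenset({"F1", "F4"}): +0.02,
}

candidates = ["F1", "F2", "F4"]
best = max(
    (frozenset(c) for r in range(1, len(candidates) + 1)
     for c in combinations(candidates, r)
     if frozenset(c) in knockout_effect_on_ethanol),
    key=lambda k: knockout_effect_on_ethanol[k],
)
print(sorted(best), knockout_effect_on_ethanol[best])  # ['F1', 'F2'] 0.07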
Discussion

We have described a cascaded layering approach which can be used to systematically identify the interactions of functionalities (i.e. functional subsystems) in a decomposed network. Functionalities are defined as sets of reactions which together accomplish a given purpose. We have identified the dynamics of an isolated functionality as the dynamics of the subnetwork made up of only those reactions. However, the dynamics of a functionality are different when it is in the context of others. We have formulated the conditional dynamics of a functionality as the incremental change to the dynamics caused by integrating it with its context. We have also demonstrated that the computational burden associated with this layered framework can be minimised by exploiting symmetries within the definitions of interdependence.

We have used the conditional dynamics to define the mutual dynamics, which describe the context-dependent interaction between any pair of functionalities. We can thus identify whether the interaction between them is strong or weak (incompatibility), and to what extent they amplify or attenuate each other (cooperativity). This interdependence is also context-dependent, so that two functionalities may interact in vastly different ways depending on the wider system in which they are integrated. Our framework allows the unambiguous quantification of nonlinear interactions between functionalities.

Finally, we illustrated our layering framework with three examples. The first considered signalling pathways interconnected by two distinct crosstalk mechanisms, a mutually inhibiting and a mutually excitatory crosstalk. We not only quantified how each mechanism separately influences the dynamics, but also how the crosstalk mechanisms interact with each other. Our second example concerned a pathway stabilised by two integral feedback mechanisms. For certain input signals, the pathway alone becomes unstable. It is stabilised by the first integral feedback, but the second feedback further destabilises the system, as displayed by the negative and positive cooperativity respectively. Thus each feedback, considered in isolation, has an effect on the pathway opposite to that of the other. However, when they are integrated together they interact cooperatively, due to the first controller increasing its strength to stabilise the network when the second controller is present. Third, we showed how to apply our layering framework in the context of metabolic engineering to analyse the dynamic interactions between elementary flux modes in glycolysis. Interestingly, the effect of abolishing an EFM by knocking out its enzymes can be easily described as the negative of the conditional dynamics of that EFM, given the rest of the network. The incompatibility between two EFMs, on the other hand, gives information about the expected increase in the flux through one EFM if the other one is knocked out. These interpretations allowed us to efficiently calculate the expected increase of ethanol yield in biofuel production when knocking out arbitrary combinations of EFMs. Interestingly, this yield is not additive, and a double knock-out of the two EFMs giving the highest increases in ethanol yield alone is not necessarily optimal. Instead, our method to calculate the integrated effect of all combinations of knock-outs allowed us to assess the best metabolic engineering strategy with the minimum number of simulations required.
As discussed in the Introduction, the layering framework [16,17] differs from modularization approaches [18][19][20][21][22][23][24][25][26]. Each of these frameworks allows different network architectures to be considered, reflecting a distinction between vertical and horizontal network decomposition [55]. Layered networks represent the overlay of multiple (possibly competing) functional subnetworks, while the modular framework interprets biological networks in terms of engineered interconnections of input-output systems. Although it is a modular phenomenon, previous work on quantifying retroactivity [4,5] can be related to our layered formulation. A goal of both techniques is to measure the difference in a given subsystem's isolated and integrated behaviours. In particular, an upper bound on the Euclidean norm of the difference in a module's isolated and integrated trajectories has been derived in terms of the system parameters in [5]. In the Supporting Information to this paper, we show in detail how this norm relates to our formulation of the incremental dynamics L(F2 | F1) and the mutual dynamics M(F1, F2). Although our layering framework and the concept of retroactivity were developed to quantify the interactions between different kinds of subsystems, this comparison suggests that our additional measures of incompatibility and cooperativity, which define an interaction strength and direction, may also be applicable to further understand retroactivity in multi-dimensional modular systems.

The computational method employed to quantify the interaction strength between a given functionality pair is clearly dependent on the kinetic constants and other parameter values in the model, including the initial conditions. A repetition of this computation across a range of such values may give a more informative picture of how uncertain biological functionalities interact, but will be difficult to visualise and expensive to evaluate. Possible directions for future work may include the derivation of analytical estimates, in terms of the system's parameter values, for the mutual dynamics, incompatibility, or cooperativity, of the type produced for layered steady-state perturbations in [16] or retroactive perturbations in [5]. It would also be interesting to consider whether any estimates of our measures can be computationally derived to hold for all parameters, similarly to the structural results on the direction of steady-state responses to parameter perturbations given in [34]. Alternatively, semi-definite programming may be used to calculate worst-case estimates of the difference between the trajectories L(F1) and L(F1 | F2) without resorting to simulation, similarly to previous work on model reduction error estimation [56].

An important assumption made in this paper was that the network functionalities F_i are given. The examples in this paper provide two different strategies for how the functionalities necessary for our layering framework can be defined. In the first two examples this was done by intuition and prior knowledge, while in the third example we applied an already available method to group functionally related reactions in metabolic pathways, namely EFMs. Clearly, this is not a comprehensive list of how functionalities can be defined, and (depending on the precise application) other strategies might be more promising.
One such example may arise in the context of Synthetic Biology, from the specification of synthetic biomolecular devices such as toggle switches, oscillators, and so on, the dynamics of which are significantly affected when combined into large-scale systems [2]. Such an application of our framework to Synthetic Biology would require these biomolecular devices, typically designed and modelled as modules, to be mathematically expressed as layers and functionalities, respectively. Whether this is possible without requiring modifications to the biomolecular implementation remains a question for future research.

Our layering approach provides a general and concise framework for the quantification of the nonlinear interactions between functionalities of signalling and metabolic pathways. It offers great flexibility, since it only requires that each reaction be part of at least one functionality, while allowing reactions to be part of more than one, or even all, functionalities. Besides the mathematical definition of the model, the only input data needed is the problem-specific definition of the sets of reactions which make up a functionality. For many signalling pathways these definitions are typically given by biological insight, and for metabolic networks implementations of efficient algorithms to determine the sets of EFMs are available (e.g. [50]). To allow a quick assessment of our approach, we have implemented a MATLAB (The MathWorks, Natick, MA) toolbox, available under the GNU General Public License from sysos.eng.ox.ac.uk/control/sysos/index.php/User:Prescott/Code. This toolbox provides algorithms to automatically construct the models of the layers from SBML files, to derive the layering graph with minimal numerical simulation, and to easily assess the mutual dynamics, the incompatibility and the cooperativity between functionalities. We hope that our implementation can serve as a starting point to integrate our layering framework into standard software solutions for the analysis of biomolecular networks.

Supporting Information

S1 Text. Mathematical details for the example networks, and relationship to retroactivity. (PDF)
Intercellular adhesion promotes clonal mixing in growing bacterial populations

Dense bacterial communities, known as biofilms, can have functional spatial organization driven by self-organizing chemical and physical interactions between cells and their environment. In this work, we investigated intercellular adhesion, a pervasive property of bacteria in biofilms, to identify its effects on the internal structure of bacterial colonies. We expressed the self-recognizing ag43 adhesin protein in Escherichia coli to generate adhesion between cells, which caused aggregation in liquid culture and altered microcolony morphology on solid media. We combined the adhesive phenotype with an artificial colony patterning system based on plasmid segregation, which marked clonal lineage domains in colonies grown from single cells. Engineered E. coli were grown into colonies containing domains with varying adhesive properties, and investigated with microscopy, image processing and computational modelling techniques. We found that intercellular adhesion elongated the fractal-like boundary between cell lineages only when both domains within the colony were adhesive, by increasing the rotational motion during colony growth. Our work demonstrates that adhesive intercellular interactions can have significant effects on the spatial organization of bacterial populations, which can be exploited for biofilm engineering. Furthermore, our approach provides a robust platform to study the influence of intercellular interactions on spatial structure in bacterial populations.

Supplemental methods

Plasmid construction

All plasmids were built from fragments amplified by PCR with Phusion High-Fidelity DNA Polymerase (NEB), using custom-synthesized oligonucleotides (Sigma Aldrich). Amplified fragments were extracted from an agarose (Sigma Aldrich) gel. Linear DNA fragments were purified using the MinElute PCR purification kit (QIAGEN) and assembled into plasmids with Gibson Assembly [1]. E. coli TOP10 (Invitrogen) was used throughout cloning and further experiments. Plasmids were purified from overnight cultures using the QIAprep spin miniprep kit (QIAGEN). Plasmids were verified by Sanger sequencing (Source Bioscience). All plasmids are shown in more detail in section SI1.

Time-lapse sample preparation

A small agar pad was made by pouring molten LB (Sigma Aldrich) and 1.5% agar (Bactoagar, BD Biosciences), supplemented with chloramphenicol (12 µg/mL) and kanamycin (50 µg/mL), into a chamber made of a single GeneFrame (1.5cm×1.6cm, ThermoScientific) stuck onto a microscope slide, which was then covered by a glass cover slip and allowed to set. The coverslip was then removed carefully, and the agar pad was inoculated with bacteria. The pad was then cut into a square of ∼1cm×1cm and placed bacteria-side down onto a clean cover slip, and stuck onto a stack of 3 GeneFrames affixed to a microscope slide, such that the agar pad was on the cover slip, inside the chamber of GeneFrames and glass, and surrounded on the remaining 3 sides by air.

To prepare the bacterial sample, E. coli TOP10 cells, containing either plasmid pSEG4s or pSEG4ag and the accessory plasmid pL31N, were grown to exponential phase in liquid LB media with chloramphenicol (50 µg/mL) and kanamycin (50 µg/mL) at 37°C in a shaking incubator. During mid-exponential phase (OD = 0.1-0.4) the cells were diluted such that 1 µL would contain ∼5-50 cfu, and 1 µL was applied onto the LB-agar pad and allowed to dry for several minutes at 37°C.
Imaging

Macro photography was performed using a Canon EOS 550D SLR. Large two-domain colony images were taken on a Leica SP5 confocal microscope after 24 hours of colony growth. A 10x air objective was used (HCX PL APO CS 10.0x0.40 DRY); sfGFP was excited at 488nm with emission detected at 490-510nm, and mCherry2 at 561nm with emission detected at 605-653nm. Colonies were dome shaped, around 1.5mm in diameter and on average around 110µm tall at the centre, falling to single-cell thickness at the edges. The colonies were thus imaged as a stack of ∼40 horizontal confocal slices, which were combined into a single image for analysis by taking a maximum intensity projection.

Time-lapse movies were taken on a custom Nikon Eclipse Ti inverted microscope with a Yokogawa CSU-X1 spinning disk module, hardware autofocus with Nikon Perfect Focus, and an enclosed incubator (Okolab) set to 37°C. Images were captured with a Photometrics Evolve (512x512 EMCCD) camera, with image acquisition performed using the Metamorph software (Molecular Devices). Fluorescence images of the cell layer closest to the coverslip were taken with a 60x objective (CFI Plan Apo VC 60X Oil) at multiple positions to track many colonies, every 10 minutes for 16 hours. sfGFP was excited at 488nm, and emission detected at 525nm (ET525/50m filter).

Plasmid characterization

The plasmid segregation mechanism relied on the segregation plasmids being able to increase their copy number in response to arabinose. This was characterized for plasmids pSEG4s and pSEG5s using plate fluorometry and plasmid DNA quantification, as shown in figure S2a and b. Both plasmids showed a large increase in DNA yield and fluorescence intensity at arabinose levels above ∼0.1 mM, demonstrating the required increase in copy number.

Plasmids pSEG4s and pSEG5s were augmented with a transcriptional unit containing ag43 driven by the LacI-repressed pLlac promoter [2] to create plasmids pSEG4ag and pSEG5ag. This promoter was characterized using plasmids pSEG4pL and pSEG5pL, which contained the pLlac promoter driving a spectrally distinct fluorescent protein. Fluorescence was measured in cultures containing pSEG4pL or pSEG5pL, together with the LacI-bearing accessory plasmid pL31N.

Materials and methods

Plate Fluorometry

Samples were prepared for plate fluorometry by diluting overnight cultures 1:1000 into fresh media containing the appropriate antibiotics, with LB media used throughout. The resulting inoculated media was loaded into a black 96-well, flat clear-bottom plate (Greiner). If an inducer was used, 5 µL of the inducer was added to each well at the appropriate concentration with 195 µL of sample; otherwise 200 µL of inoculated media was used. As a control, each plate also contained at least 3 wells containing LB media and E. coli TOP10 cells without plasmids. The plates were then loaded into a BMG CLARIOstar plate reader, where they were grown in LB with the appropriate antibiotics at 37°C. Fluorescence and absorbance readings were taken every 10 minutes for 16 hours, with shaking between readings. The following settings were used on the CLARIOstar machine: sfGFP: excitation 470/15 nm, emission 515/20 nm, gain 800; mCherry: excitation 570/15 nm, emission 620/20 nm, gain 1100; OD: 600 nm. Data was subsequently analysed in MATLAB (MathWorks). Background autofluorescence in each channel was removed by subtracting the fluorescence intensity of the control wells (with blank E. coli cells) at the appropriate OD. Fluorescent protein expression rates were defined as the fluorescence intensity gradient per OD [(dF/dt)/OD], averaged across an exponential-phase window (OD = 0.2-1.0).
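A minimal sketch of this expression-rate calculation, assuming arrays t (minutes), od and fluo from the background-corrected time course of a single well, might look as follows; the synthetic example data are for illustration only.

import numpy as np

def expression_rate(t, od, fluo, window=(0.2, 1.0)):
    dF_dt = np.gradient(fluo, t)          # fluorescence gradient dF/dt
    rate = dF_dt / od                     # per-OD expression rate
    mask = (od >= window[0]) & (od <= window[1])
    return rate[mask].mean()              # average over the exponential window

# Example with synthetic data:
t = np.arange(0, 960, 10.0)               # readings every 10 min for 16 h
od = 0.05 * np.exp(t / 150.0)
od = np.minimum(od, 2.0)                  # crude saturation at stationary phase
fluo = 500.0 * od                         # fluorescence tracking biomass
print(expression_rate(t, od, fluo))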
DNA quantification

Plasmid DNA quantification was performed by purifying plasmid DNA from overnight cultures, incubated at 37°C in LB media with chloramphenicol (10 µg/mL) and kanamycin (50 µg/mL) and varying levels of arabinose. Minipreparations were made using the QIAprep spin miniprep kit (QIAGEN), and DNA concentration was measured using a NanoVue (GE Healthcare Life Sciences).

2 Autoaggregation in liquid culture

Figure S3: (a) E. coli TOP10 liquid cultures grown overnight in LB in a shaking incubator at 37°C, containing the pL31N accessory plasmid and one of the following plasmids: pSEG4s, pSEG5s, pSEG4ag, pSEG5ag (indicated in the table below). Each plasmid combination was grown with either 100mM IPTG (+) or 0 IPTG (-). (b) The same cultures as in panel (a) after 5 hours standing at room temperature, showing that IPTG-induced expression of ag43 led to cellular autoaggregation to the bottom of the flask.

Adhesive colony morphotypes

Figure S4: Confocal micrographs of E. coli TOP10 cells with the pPBag43 plasmid, containing the pBAD promoter driving the ag43 adhesin. Colonies were grown at 37°C on M9 agar pads at 0 and 100mM arabinose and imaged at 5 and 21 hours. Unusual morphology can be seen in colonies imaged at 5 hours; however, this morphology was not found in colonies imaged after 21 hours. Scale is 10 µm.

Materials and methods

Microcolonies in figure S4 were grown on 1.5% agar (Bactoagar, BD Biosciences) pads and M9 (Amresco) media supplemented with 0.4% (w/v) L-glucose (Riedel-de Haen) and 0.2% (w/v) casamino acids (Fisher Scientific). Bacterial cultures were grown to exponential phase, diluted and placed onto the M9 agar pad, covered by a glass coverslip and enclosed in a chamber made of several 1.5cm×1.6cm GeneFrames (Thermo Fisher Scientific) and a glass microscope slide. Slides were incubated in a static 37°C incubator and imaged in a Leica SP5 confocal microscope using a 40x objective (HCX PL APO CS 40.0x1.45 OIL UV) with the following fluorescence settings: excitation 514 nm, emission 521-555 nm.

CellModeller basics

A full derivation of the mathematics underpinning CellModeller can be found in [3,4], as well as in the CellModeller documentation at https://github.com/HaseloffLab/CellModeller/blob/master/Doc/Maths/math.pdf. However, a brief introduction is presented here as background to adhesive interactions.

A cell in CellModeller is modelled by a capsule with 7 coordinates x̄ = (x θ l)^T, with x representing the location of the cell centre in 3-dimensional space, θ the 3-dimensional orientation of the cell, and l the 1-dimensional cell length. Forces in the simulation are provided by cell growth, and cells feel a viscous drag force proportional to their velocity. To understand a cell's motion under a generalized force, we can study the change in a cell's coordinates given a general impulse ∆p̄, represented by (∆p ∆L ∆g)^T, with linear, angular and growth components. The change in coordinates of a cell of initial length l_0 is then given by

    ∆x̄ = M ∆p̄

The matrix M therefore determines the change in motion resulting from the impulse, with a term for viscous drag on translational motion, rotational motion, and growth. µ represents the drag coefficient with respect to motion, whereas γ represents the drag coefficient with respect to growth.
The matrix P is the projection onto the plane perpendicular to the cell axis. A full derivation can be found in the references above. For any surface point q on a cell with centre q_a, direction â and length l_0, the displacement of q generated by ∆x̄ along any axis ĥ can be calculated from these coordinates (equation 3).

Adhesion is modelled as a simple linear elastic spring between the contact points, where the restoring force is proportional to the displacement between the cell contact points (fig. S5a-b). The bacterial biophysics in CellModeller calculates the impulses on cells such that the distance between any two touching cells is 0 in the axis normal to the contact. Therefore, to model adhesion between cells in contact, we only need to consider displacement in the plane orthogonal to the normal. The energy stored in an elastic spring is given by E = κx², where κ is the elastic constant of the spring and represents the adhesion strength of the contact. This linear spring therefore produces a parabolic potential well, with the energy proportional to the square of the displacement (fig. S5c). Minimizing this energy at each timestep constrains the cells to move with adhesion between contact points.

Consider a system of two cells A and B which have a contact, indexed as i, at points p_a and p_b on the two cells. Impulses on the two cells generate a displacement between the two contact points. The displacement ∆d_f̂, generated along an axis f̂ tangential to the contact, can be found in terms of the impulses using equation 3. Therefore, the energy stored in adhesion at contact i is given by

    E_i = κ (∆d_f̂)²

The simulation solves all impulses such that cell overlaps at contacts, defined by d, are minimized. This is done by solving A∆p = −d, where A is a block matrix of size n_contacts × n_cells representing the system of cellular contacts and the associated axes of each contact with relation to a cell. This acts to project each impulse for a given cell onto the appropriate axis, thus defining how an impulse affects each contact overlap. Typically, A is poorly conditioned, meaning that there are many solutions. This therefore requires regularization, which is performed with Tikhonov regularization [5], selecting the solution which minimizes the energy. The adhesive energy can be added into the Tikhonov regularization term, which constrains the solution towards those solutions that minimize the work done against drag and the adhesive energies. To minimize the resulting expression, the solver takes its derivative with respect to ∆p and sets it to zero.

This expression minimizes the adhesive energy with respect to the tangential displacement in one axis. In the 2D case it is therefore required only once, as the tangential axis f̂ is always perpendicular to the contact normal and to the ẑ axis (assuming the cells are constrained in the x̂-ŷ plane). In the more general 3-dimensional case, two axes in the plane tangential to the contact are defined on the fly, and the adhesion energies in each axis are calculated, added and minimized.

Implementation

The implementation of this model of intercellular adhesion can be found at http://haselofflab.github.io/CellModeller/, within the 'Adhesion' branch of the GitHub repository. Within the biophysical model, the term κ defines the strength of the adhesive interaction for a given contact. For implementation, the adhesion strength between each cell type can be set in the initialization of each cell.
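As a conceptual stand-in for the solve described above (CellModeller's actual implementation is an iterative GPU solver), the following dense Python sketch finds impulses that reduce contact overlaps while penalising the impulse magnitude, a simplification of the drag and adhesion energy terms.

import numpy as np

def tikhonov_solve(A, d, lam=1e-3):
    # Minimize ||A dp + d||^2 + lam * ||dp||^2 via the normal equations.
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), -A.T @ d)

A = np.array([[1.0, -1.0, 0.0],
              [0.0,  1.0, -1.0]])   # two contacts, three impulse components
d = np.array([0.1, -0.05])          # current overlaps at the contacts
dp = tikhonov_solve(A, d)
print(dp, A @ dp + d)               # residual overlaps shrink toward zero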
The logic of adhesion can also be defined by the user in the model via the adhLogicCL function, which is parsed into the OpenCL code. This function sets how different cell types with differing adhesion strengths interact. For example, the following code:

def adhLogicCL():
    """
    return min(adh_str1, adh_str2);
    """

defines the adhesion strength of a contact as the minimal adhesion strength of the cells in contact. However, this can be altered in a variety of ways to model a wide range of adhesive behaviour. A usage guide for the adhesion module can be found in the documentation on the CellModeller website.

Usage notes

At intermediate adhesion the spatial packing of cells was less efficient than in experiments. This was likely due to the interaction between the agar pad and the colony, which serves to compress the cells in the colony together [6] and which was not explicitly accounted for in the model. However, these forces are only present when the bacteria are between an agar pad and a glass cover slip, and are not present in colonies growing on an agar surface.

Optical flow optimization

The Farneback optical flow algorithm [7], used on the time-lapse microscopy data, was first parameterized and validated with simulation data. To do this, CellModeller simulations required the appropriate length and time scales, both to demonstrate that the optical flow finds the appropriate motion and to find parameters optimized for the motion in the time-lapse microscopy data. Therefore, colonies were simulated growing from a single cell, saving a snapshot of the simulation with a frequency such that the area growth of the simulation matched experimental data (fig. S6c). Furthermore, the resolution of the snapshots was set to match experimental data (512x512 pixels), and the width of the field of view also matched the data (136.5 µm). The image sequence of the simulated colonies (fig. S6b) therefore appeared similar to experimental data (fig. S6a). Furthermore, the velocity field between each frame of the simulation was found and exported to a file.

The optical flow algorithm, implemented in Python using the OpenCV package [8], requires several parameters, which are listed below along with the descriptions from the Python OpenCV documentation (http://docs.opencv.org/2.4/modules/video/doc/motion_analysis_and_object_tracking.html):

• pyr_scale: parameter specifying the image scale (< 1) used to build pyramids for each image; pyr_scale=0.5 means a classical pyramid, where each next layer is twice smaller than the previous one.

• levels: number of pyramid layers including the initial image; levels=1 means that no extra layers are created and only the original images are used.

• winsize: averaging window size; larger values increase the algorithm's robustness to image noise and give more chances for fast motion detection, but yield a more blurred motion field.

• iterations: number of iterations the algorithm does at each pyramid level.

• poly_n: size of the pixel neighborhood used to find the polynomial expansion in each pixel; larger values mean that the image will be approximated with smoother surfaces, yielding a more robust algorithm and a more blurred motion field; typically poly_n = 5 or 7.

• poly_sigma: standard deviation of the Gaussian that is used to smooth derivatives used as a basis for the polynomial expansion; for poly_n=5, you can set poly_sigma=1.1; for poly_n=7, a good value would be poly_sigma=1.5.
• flags: operation flags that can be a combination of the following: OPTFLOW_USE_INITIAL_FLOW uses the input flow as an initial flow approximation; OPTFLOW_FARNEBACK_GAUSSIAN uses a Gaussian winsize × winsize filter instead of a box filter of the same size for optical flow estimation; usually, this option gives a more accurate flow than a box filter, at the cost of lower speed; normally, winsize for a Gaussian window should be set to a larger value to achieve the same level of robustness.

In order to find a parameter set that would allow the optical flow algorithm to accurately find the velocity fields in experimental data, an exhaustive parameter search was performed using a simulated dataset with known velocity fields. The simulations were designed to be of equivalent image pixelation (512x512), length scale (cell width = 0.75 µm, cell length = 1-4 µm) and time scale between frames (figure S6c). A total of three such simulations were performed and used for parameter optimization.

To calculate the error of the optical flow output, the optical-flow-derived velocity field F_opt was subtracted from the exact velocity field provided by the CellModeller simulation, F_sim. The magnitude of this difference vector was then found for each pixel and summed in quadrature with the values from the other pixels containing cells. To express the error per pixel as a percentage, the total error was normalized by the magnitude of the real velocity fields, giving

    error (%) = 100 × sqrt( Σ_cells |F_opt − F_sim|² / Σ_cells |F_sim|² )

where the sums run over the pixels containing cells.

Iteration over all parameters was performed one parameter at a time. Once a minimum was found for the first parameter, the value was saved and iteration proceeded over the next parameter, and so on. This was performed several times to find a global minimum. Iterations about the final parameter set used are shown in figure S7a, yielding a final parameter set with an error rate per pixel of around 2%. The final parameter set accurately recapitulated the model velocity field, as shown in figure S7c and d.

Figure S7: (a) Exhaustive parameter iteration across all 6 parameters of the optical flow algorithm for 3 separate simulation datasets, shown in red, green and blue. The y-axis in each case shows the percentage error per pixel. The vertical bold line in each plot represents the final parameter value used. When the optical flow algorithm uses the final parameter set on simulation data video, it accurately recapitulates the velocity field of the simulation. Panel (b) shows a snapshot of simulation data at the 40th frame, and (c) shows the velocity field as a quiver plot for the frame calculated by the simulation software, almost indistinguishable from the optical-flow-derived field in (d). Arrow lengths represent the distance travelled every 10 minutes. Scale bars are 50 µm.

The final parameter set chosen was the following:

• pyr_scale = 0.25
• levels = 3
• winsize = 25
• poly_n = 5 (note that this was a slightly suboptimal choice; however, the documentation recommended that this be kept at either 5 or 7)
• poly_sigma = 0.7
• flags = 256 (OPTFLOW_USE_INITIAL_FLOW; this flag was used throughout, since the velocity field at any given point was similar to that of the previous time point)

Therefore, the code calling the optical flow algorithm is as follows:

    cv2.calcOpticalFlowFarneback(prev, img, None, 0.5, 3, 25, 28, 5, 0.7, 256)

where prev refers to the initial or previous image of the image sequence, and img the subsequent image.
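As a worked illustration of the validation procedure, the following hypothetical helper computes the Farneback flow between two 8-bit frames and scores it against a known simulated field using the per-pixel percentage error defined above. The flags argument is set to 0 here rather than OPTFLOW_USE_INITIAL_FLOW, since no initial flow estimate is passed; all array shapes are assumptions.

import numpy as np
import cv2

def flow_error(prev, img, F_sim, cell_mask):
    # prev, img: (H, W) uint8 frames; F_sim: (H, W, 2) true velocity field;
    # cell_mask: (H, W) boolean mask of pixels containing cells.
    F_opt = cv2.calcOpticalFlowFarneback(
        prev, img, None, 0.5, 3, 25, 28, 5, 0.7, 0)
    diff_sq = ((F_opt - F_sim) ** 2).sum(axis=-1)[cell_mask]
    true_sq = (F_sim ** 2).sum(axis=-1)[cell_mask]
    return 100.0 * np.sqrt(diff_sq.sum() / true_sq.sum())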
Emerging Themes in Epidemiology

Methodological Issues in Estimating Survival in Patients with Multiple Primary Cancers: an Application to Women with Breast Cancer as a First Tumour

Background: Comparing the survival of patients with a single tumour and patients with multiple primaries poses several methodological problems. In population-based studies, where we cannot rely on detailed clinical information, the issue is disentangling the share of survival probability attributable to the first and the second cancer, and their compounded effect. We examined three hypotheses: A) the survival probability since the first tumour does not change with the occurrence of a second tumour; B) the probability of surviving a tumour does not change with the presence of a previous primary; C) the probabilities of surviving two subsequent primary tumours are independent (additivity hypothesis on mortality rates).

Introduction

The improvement of patient survival for the vast majority of neoplasms has led to a substantial increase in the probability of developing subsequent primary tumours. However, the study of multiple primary tumours on a population basis poses many additional problems. There is, indeed, a problem of differential diagnosis when it comes to distinguishing between local and distant metastases, recurrences and the onset of a truly new lesion. Classifications may also vary, leading to substantial differences in rates. For example, Surveillance, Epidemiology and End Results (SEER) rules [1] differ substantially from those adopted by the International Agency for Research on Cancer (IARC) [2]. Furthermore, the survival of patients with multiple tumours has been neglected in population-based analyses, where they are usually list-wise deleted, or analysed for the first tumour occurrence only [3,4]. Only recently have two studies [5,6] reconsidered this exclusion policy. On the contrary, in clinical series the survival of patients with multiple tumours is usually defined clinically, and the specific cause of death is assessed accordingly. However, in population studies and in series from cancer registries, clinical information on patient follow-up is often unavailable and the assessment of cause of death is based only on death certificates, which are often liable to gross misclassification.

Heinävaara et al. [7] proposed to estimate the differential amount due to the first or second tumour with a statistical parametric model. Their application dealt with patients with two primary breast cancers, where the question of disentangling the cancer-specific survival due to the first or the second tumour is more difficult, also from a clinical point of view. In the case of a subsequent primary cancer of a different origin the question is apparently simpler, although not yet investigated on a population basis. The following questions can be raised: whether the overall survival of patients has decreased because of the interaction between the two cancers, or whether it has been left substantially unchanged in comparison to those with one cancer only, or has even increased. For example, active surveillance and care due to the first cancer can lead to earlier diagnosis of subsequent cancers and therefore to a longer survival (or a longer lead time). However, before studying the possible effect of surveillance and other prognostic factors (which was not the aim of this study), we should focus on the correct measurement of survival, which is our research objective.
To achieve this, we had to face several complex methodological challenges: first, we had to fix the zero reference time (the time from which we started the follow-up); second, a person can die only once, thus the background death rate is confounded in the follow-up information after the diagnosis of the second primary, so it is crucial to use models able to suitably describe a situation of competing risks; third, in order to make inferences, for each model we had to define the correct expected survival based on the appropriate comparison group.

We focused our attention on the following questions:

A. Does the survival probability of a patient with a second primary tumour differ from that of patients with only the first type of tumour?

B. Does the survival probability of a patient with a second primary tumour differ from that of patients with only the second type of tumour?

If a difference in survival is found in some of the previous situations, a third, more fundamental question arises.

C. Are the probabilities of surviving two subsequent primary tumours independent?

Studying survival probabilities in terms of the underlying hazard of death, the question can be rephrased as follows: is the mortality rate after a second tumour simply the sum of the two intensities (additivity hypothesis), or does the way the mortality rates act follow a different functional law?

This paper aims at answering these questions for women with breast cancer and a subsequent primary tumour, paying particular attention to the conditional survival probability due to the time elapsed between the two malignancies.

Statistical analysis

To correctly define the probability of surviving conditional on being alive up to the occurrence of a second tumour, we started by writing questions A, B and C as hypotheses in terms of mortality hazard. We defined:

λ_A(t): mortality rate for the population with two tumours at time t from the occurrence of the first tumour;
λ_B(t): mortality rate for the population with two tumours at time t from the occurrence of the second tumour;
λ_{C,α}(t): mortality rate at time t from the occurrence of the second tumour for the population with a second tumour, given that they already survived a time interval α.

We can break these down as

    λ_A(t) = λ_0(t) + λ_{1|0}(t) + λ_{1|2}(t)
    λ_B(t) = λ_0(t) + λ_{2|0}(t) + λ_{2|1}(t)

where, for i = 1, 2, λ_{i|0} is the specific mortality rate at time t from the occurrence of tumour i for the population with only that tumour, and λ_0 is the general mortality. We assumed that λ_{1|0}, λ_{2|0} and λ_0 were known from previous studies on mortality and survival in the population with the first type of tumour only, with the second type of tumour only, and in the general population, respectively. We observed that λ_{1|2}(t) was the possible difference in mortality rate in patients with a tumour of type 1 followed by a tumour of type 2 with respect to that of patients with a tumour of type 1 only, measured from the occurrence of tumour 1; λ_{2|1}(t) was the possible difference in mortality rate in patients with a tumour of type 1 followed by a tumour of type 2 with respect to that of patients with a tumour of type 2 only, measured from the occurrence of tumour 2.

Questions A, B, and C can then be written as the hypotheses

    A: λ_{1|2}(t) = 0
    B: λ_{2|1}(t) = 0
    C: λ_{C,α}(t) = λ_0(t) + λ_{1|0}(t + α) + λ_{2|0}(t)

Occurrence probabilities conditional on different events (occurrence of a second cancer, death) in each time interval can be estimated with the Aalen-Johansen [8] (AJ) method in the framework of a Markov process, as described later.
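To make the additivity hypothesis C concrete, the following sketch accumulates piecewise-constant yearly hazards under the additive assumption; all hazard values are invented for illustration, and the simplification ignores the age-, sex- and calendar-period-specific structure used in the actual calculations.

import numpy as np

lam0   = np.full(10, 0.01)   # general mortality hazard, years 1..10
lam1_0 = np.full(10, 0.03)   # excess hazard, tumour of type 1 only
lam2_0 = np.full(10, 0.05)   # excess hazard, tumour of type 2 only
alpha = 2                    # years between the two diagnoses

# Hazard experienced from the second diagnosis onwards under hypothesis C:
# lambda_C(t) = lam0(t) + lam1_0(t + alpha) + lam2_0(t)
t_max = 8
lam_C = lam0[:t_max] + lam1_0[alpha:alpha + t_max] + lam2_0[:t_max]
surv_C = np.exp(-np.cumsum(lam_C))   # survival since the second tumour
print(surv_C)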
Once we obtained these conditional probabilities, we calculated the number of expected deaths by sex, age, calendar period and follow-up time, under the different hypotheses A, B and C. From a practical point of view, we calculated the expected deaths in a similar way to that used to calculate the denominator of relative survival [9]. For example, in the case of a woman diagnosed with breast cancer at 62 who developed a rectal cancer after two years and survived for an additional period of five years, we associated an expected probability of dying with a breast cancer, occurring at the same age, for the two years elapsed with that cancer only. Subsequently, we associated an expected probability of dying with breast and/or with rectal cancer for the following years, taking into consideration the ageing of the patient (i.e. using the annual probability of dying according to the age of the patient, from age 64 to age 69).

The way the expected number of deaths (or the expected probability of dying) for the conjoint period when both tumours are present is calculated depends on which of the three hypotheses we are testing. If we consider hypothesis A, we do not add the probability of dying with a colon-rectum cancer. If we test hypothesis B, we do not add the probability of dying associated with a breast cancer for the first period. Finally, if we test hypothesis C (additivity hypothesis), we sum the two underlying mortality hazards during the second period. Expected probabilities were derived from analyses of the cohort of patients with only one incident cancer included in the cancer registry's data.

For the interested reader, we now explain in detail how we calculated the expected probabilities. Since different states are concerned, we resorted to the theory of Markov models [8]. In a Markov process individuals can belong to a finite set of states and move from one state to some others with a probability, possibly depending on time. The main hypothesis is that the probability of moving from state i to state j at time t depends on i, j and t only, and not on the previous history of the individual. We constructed a simple model with three states:

1. first tumour
2. second tumour
3. death after a first (but not a second) tumour

where 2 and 3 are absorbing states and the possible moves are 1 → 2 and 1 → 3. Since our data showed right censoring, transition probabilities P_ij(s, t) from state i to state j in the time interval (s, t) were calculated using Aalen-Johansen (AJ) estimators [8]. The procedure we adopted included age standardisation, and precisely:

• For each age class k we calculated the AJ estimator P_ijk(s, t). We let N_k be the number of subjects in class k at time 0 and we set a weight w_k = N_k / N, where N equals the sum of the N_k's.

• We defined the standardised estimator as

    P_ij^stand(s, t) = Σ_k w_k P_ijk(s, t)

• It is reasonable to assume that the weights are deterministic (fixed) variables; under this assumption we have

    Var(P_12^stand(s, t)) = Σ_k w_k² Var(P_12k(s, t))

Then, from the probabilities previously calculated with AJ estimators, it was possible to compare observed mortality with the mortality expected under the hypothesis of no interaction between the two tumours, that is, the mortality intensities that would apply if the two tumours were independent. As a consequence, the number of expected deaths is the sum of the deaths due to the mortality of both tumours acting separately.
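A minimal numerical sketch of this age-standardised combination, with invented inputs, is as follows.

import numpy as np

P_12k = np.array([0.030, 0.045, 0.060])   # AJ estimates per age class (made up)
var_12k = np.array([1e-5, 2e-5, 3e-5])    # their estimated variances (made up)
N_k = np.array([2000, 4000, 3000])        # class sizes at time 0

w = N_k / N_k.sum()                       # w_k = N_k / N
P_12_stand = (w * P_12k).sum()            # standardised estimator
var_stand = (w ** 2 * var_12k).sum()      # variance under fixed weights
print(P_12_stand, np.sqrt(var_stand))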
We calculated the number of expected deaths considering, for each patient j, the time of occurrence of the first primary malignancy T_1j, the time of occurrence of the second primary malignancy T_2j, and, most important, the time interval between the occurrence of the two tumours α_j = T_2j − T_1j. Each patient, after a time interval t_2 since the inception of the second tumour, has a probability p_2j(t_2) of dying from the second tumour or general mortality equal to that of the general population of patients with only that type of tumour, according to her/his age, sex, calendar period of diagnosis and follow-up time. In addition, that patient has a probability p_1j(t_2 + α_j) of dying at the (t_2 + α_j) time interval from the first tumour or general mortality, again equal to that of the general population of patients with that type of tumour only, according to her/his age, sex, calendar period of diagnosis and follow-up time.

We set p*_2j(t_2) = p_2j(t_2) · (1 − p_0j), where p_0j is the general mortality of subject j according to her/his age, sex and calendar period of diagnosis, taken from the life tables of the general population. Thus, we can say that p*_2j(t_2) is the specific mortality for the second tumour. Therefore, within the cohort of patients with two malignancies, at the t_2 time interval since the second tumour, under the hypothesis of no interaction between the mortality forces of the two tumours, we expect the following number of deaths:

    D(t_2) = Σ_j [ p_1j(t_2 + α_j) + p*_2j(t_2) · (1 − p_1j(t_2 + α_j)) ]

where the probability of dying from the second tumour, p*_2j(t_2), is corrected by the probability of surviving the first tumour and general mortality, 1 − p_1j(t_2 + α_j). Since the output of these calculations was the number of expected deaths, we consequently compared it with the observed number in a ratio similar to the well-known Standardised Mortality Ratio:

    SMR_AJ = observed deaths / expected deaths

We used the term SMR_AJ because it was quite similar to the standard term "SMR" in the sense that it was the ratio of observed to expected deaths; the expected deaths were calculated as a sum over age groups; and, finally, it was similar to the indirect method of age standardisation since, as standard, we applied the mortality rates of the cohort of patients with only one tumour.

Results

We identified 9233 women with breast cancer in Turin from 1985 to 1998; 563 cases were excluded as they were identified from Death Certificate Only (DCO), or were synchronous primary tumours (same day of diagnosis for the two tumours), or were patients with more than two tumours, leaving 8670 cases for analysis. From this cohort, 436 second (metachronous) primary tumours developed during the prolonged follow-up period (1985-2002). The main results are displayed in Figure 1, where cumulative numbers of deaths are shown over years of follow-up. It can be seen that in all graphs hypothesis C (additivity) tends to overestimate the actual observed trend, hypothesis A strongly underestimates it, while hypothesis B is the closest to the observed data, especially during the first years of follow-up.

Discussion

The dramatic improvement of cancer survival during the last decades in Western countries brought with it a new health threat: the development of second primary cancers in survivors. An editorial by David Alberts in CEBP clearly stated that 'Second cancers are killing us!' [11].
However, in spite of the fact that several studies on multiple primary cancer risk have been undertaken [12], the rate at which first and second, or higher-order, cancers are killing us remains neglected. In clinical studies, when reliable information is available it is often possible to understand whether the pathological conditions linked to a specific cancer affected the patient's survival, and to what extent. However, at a population-based level this is often not feasible due to the lack of clinical information or cause of death. Even when cancer-specific causes of death are available, they are subject to various degrees of misclassification, hindering the possibility of a reliable estimate of cancer-specific survival.

In the main population-based statistics on cancer survival available worldwide (Eurocare [3] and SEER [4]), subsequent cancers were excluded: only the first occurring cancer was analysed, or all the subjects with multiple cancers were deleted from the analysis. Although this strategy has recently undergone a rethinking [5,6], it was supposed to allow for more comparable results across registries with different back-up information, and therefore with a different ability to identify those cancers that occurred in prevalent cases. However, we believe that the problem deserves more attention, also for its implications in the management and care of such patients. Indeed, a wider availability of effective cancer treatments has prolonged patient survival, so increasing the possibility of developing another cancer. From studies of the occurrence of multiple tumours and their association, it emerged that a higher susceptibility to subsequent malignancies can possibly be due to unfavourable genetic patterns or common exogenous risk factors [13,14].

Multiple cancer survival is also a stimulating topic of study, but it has received less attention. Recently, an analysis of the SEER data on multiple tumours following breast cancer [15] showed that women 20-29 years old at the time of breast cancer diagnosis had a worse 10-year survival, compared with women with breast cancer only, while there were no differences in the 5-year survival. However, in that analysis the time elapsed until the second cancer occurrence was not taken into account. Before investigating the reasons influencing survival for patients with multiple tumours, we believe it is essential to have a correct measurement of survival that takes into account the effect of the conditional probabilities of surviving given the different timing of primary cancer occurrences.

We proposed a method that assigns the correct number of expected events according to the different components of mortality due to each type of cancer. The proposed method is useful only in correctly stating the prediction of mortality probabilities, while it cannot explain the causes of the different mortality probabilities. The expected number of deaths was calculated taking into account the exact time spent at risk of dying from one or another cancer, by age class and calendar period, using conditional probabilities estimated by the AJ estimator from a simple one-way Markov process with two absorbing states.

Figure 1. Cumulative number of deaths following different hypotheses for women with a second cancer after breast cancer: (i) all tumours, (ii) colorectal cancer, (iii) corpus uteri cancer.
Such an approach was recommended since it allowed better control of the probabilities of events arising from different states. In the model referring to hypothesis A, we calculated the expected number of deaths due to the first occurring cancer starting from its time of occurrence. This model is similar to model 2 proposed by Heinävaara and colleagues [7] in the absence of a cancer-specific cause of death. We wrote the model's parameters in terms of excess risk (hazard rate), rather than estimating the specific mortality rates. While the survival of patients with a second primary tumour was comparable to or higher than that of patients with breast cancer only during the first years, it declined rapidly, at a higher rate than the reference group, after five years of follow-up. This effect was explained by the fact that those patients had survived an extra amount of time (a median of five years) before developing the subsequent cancer. Indeed, results from hypothesis A showed an increased cumulative mortality only at ten years for women with two cancers when compared to those with breast cancer only, as found in the study of Raymond and Hogue [15].

The second model (hypothesis B) was built with the same structure as model A, calculating the expected number of deaths due to the second occurring cancer starting from its time of occurrence. However, the change in the baseline population and the shift in the time zero reference made the hazard rates not comparable. Indeed, for a proper comparison with patients with the second type of cancer only, we set the starting time at the diagnosis of the second cancer. In this case, the survival at 1 and 5 years of follow-up was comparable to that of patients with one type of cancer only, while it was slightly shorter at 10 years. In summary, results from hypothesis B showed no extra mortality compared to patients with only one cancer of the same type, and the observed and expected numbers of deaths agreed closely during the years of observation.

We then addressed the question of evaluating the eventual extra mortality due to the combination of the effects of the two primary neoplasms, testing the hypothesis that the mortality of women with two cancers was the sum of the baseline mortality rates of breast and other cancers (additivity hypothesis C). It clearly emerged that the observed cumulative mortality was lower than expected under the additivity assumption, with a statistically significant difference in the case of all cancers and colon-rectum after 10 years of follow-up.

The agreement of a specific model with observed data was therefore useful for gaining further hints about the underlying mechanisms. In our study, the lower-than-expected results can be explained by the fact that the second cancer can have a less advanced stage and therefore a better prognosis, since a subsequent cancer is usually diagnosed because of the deeper clinical surveillance due to the first cancer. It is clear that women with breast cancer and a subsequent cancer survive less than women with breast cancer only, but their survival is not always decreased simply as it would be if the forces of mortality worked together in an additive way.

The study has some possible limitations. First of all, the method of correction is based on observed rates (mortality rates measured in the cohort of patients with only one tumour) that, when based on small numbers, can be unstable.
Second, this method, being inherently non-parametric, gives no information on the underlying competing incidence/mortality laws. In calculating the expected number of deaths, a possible bias could have been introduced, depending on the number of patients who emigrated outside the Cancer Registry's area. In such cases, information on vital status was still available and collected, but we did not know whether the patient had developed a subsequent cancer while resident in another area. During the study period, we observed about 8% of women who emigrated among those classified as having breast cancer only. Their median time of emigration was 6.5 years after the breast cancer diagnosis. As a consequence, considering that the median time for developing a second primary cancer was about 5 years, the detection bias should be very limited. Finally, the method has so far been tested only on a limited set of data: patients with breast cancer as a first primary tumour. As few studies are available on this topic, more research is needed, with larger samples and including clinical data (e.g. stage at presentation, hormone receptor status), therapies (e.g. tamoxifen), information on follow-up circumstances, and modality of diagnosis. In conclusion, we showed that the presented approach for calculating conditional probabilities is correct when dealing with situations, as with multiple tumours, where competing causes of death can bias estimates of survival probabilities. We also pointed out how shifted reference times can be handled to compare survival correctly. In addition, departures from the expected additive model can give hints about which directions to investigate further.

Authors' contributions

SR conceived the idea for the study. SR, FR and LT planned and designed the research. FR developed the statistical models. LT revised the mathematical foundations of the proposed model. FR and LT analysed the data. SR and RZ wrote the first draft of the manuscript. RZ coordinated this project. All authors edited and approved the final version of the manuscript. The corresponding author has final responsibility for the decision to submit for publication. Preliminary results were presented by Fulvio Ricceri at the GRELL meeting 2006 in Palma de Mallorca, where they were awarded the "Enrico Anglesio Prize" offered by the "Anglesio/Moroni Foundation" of Turin, Italy. We thank the researchers and professors of the Me.Ri.Ma. group of the University of Turin (Department of Mathematics) who shared their ideas with us and gave us their time and comments.
The Dynamics of Evangelism in a Multicultural Context: Challenges and Opportunities for Contextualization

Evangelism in a multicultural context has become a primary concern in the mission of the modern church. In an effort to comprehend the dynamics involved in evangelism within multicultural environments, this research aims to identify the primary challenges faced by evangelists while also detailing the emerging opportunities in the endeavor to contextualize the message of the Gospel. Through an in-depth literature review in qualitative research, this study reveals the complexity of the multicultural context within the scope of evangelism. The main challenges encompass differences in religious beliefs and language disparities.

INTRODUCTION

The term contextualization has gained popularity in the world of theological education in the late decades of the 20th century. Various definitions of contextualization are as follows. Contextualization is an important concept for enveloping the Gospel within the context of the native culture (Rantosari Siahaan, 2002). Contextualization is any activity undertaken to make the Gospel more easily understood and relevant to the culture, including its customs, language, and traditions (David Racey, 2012). Contextualization involves delivering the Good News and the Word of God in cultural forms that are meaningful, relatable, and understandable to the audience, in both external forms (liturgy) and internal thought (content), thus addressing their real issues and needs within their societal framework (Jong Kuk Kim, 2000). Contextualization means translating the unchanging core of the Gospel into words that hold significance for various cultural groups in their unique contexts (Bruce Nicholls, 2007). Contextualization involves connecting the Gospel to a specific culture, encompassing all of its dimensions (John A. Gration, 1980). Contextual theology seriously considers the historical and cultural context in which a person lives and works. It not only interprets traditional theological answers differently but also poses different questions in each culture. Contextual theology has a dual task of interpreting and constructing (Daniel J. Adams, 1993). Contextualization theology is theology itself. This means that theology can only be called theology when it is truly contextual. Essentially, theology is an effort to dialectically and creatively reconcile the text with the context, the universal kerygma with the contextual reality of life. In simpler terms, theology can be described as an effort to formulate the Christian faith experience within specific contexts, spaces, and times (Eka Darmaputra, 1991). Christian contextualization can be understood as an attempt to proclaim the news of the person, work, Word, and will of God in a manner faithful to Divine revelation, especially as expressed in the Bible, and also meaningful to audiences within their respective cultures. Contextualization can be carried out in spoken and written forms and relates, among other things, to theological efforts, Bible translation/interpretation and application, a lifestyle following the example of Christ, evangelism, Christian religious education, church planting, growth, organization, and worship practices. In short, contextualization pertains to all the activities necessary for fulfilling the Great Commission (David J. Hesselgrave & Edward Rommen, 2000). To the extent that contextualization is not applied, Christianity will be perceived as a foreign religion.
There are groups that use and advocate the term "contextualization," but others use different terms, such as local theology, enculturation theology, and some choose the terms indigenous or native theology. The explanations are as follows (Y. Tomatala, 2001):

1. Local Theology is a term used by Roman Catholic theologians and some Protestant theologians. The term local theology reflects the use of the English language, emphasizing the surrounding context of theological reflection and having a number of ecclesiastical characteristics through its association with the local church. The reason for using this term is its compatibility with the term local church or location as the place for theological reflection, and it better represents sensitivity to the context.

2. Enculturation Theology is derived from the English term "inculturation," which means "the cultural learning process of the individual, the process by which a person is inserted into his or her culture" (Ayward Shorter, 1988). The term inculturation is deliberately used in the theological context to distinguish it from the use of the word in sociology. It was first used by Fr. Joseph Mason, SJ, a professor at the Gregorian University in Rome, in 1962.

3. Indigenous (Native) Theology emphasizes the fact that theology is done by and for a specific geographic region by local residents for their own region rather than by outsiders. The concept of indigenization explains the process of bringing the Gospel so as eventually to grow an indigenous church. Charles Kraft referred to this approach as the ethnotheological approach (1979) in delivering the Gospel to a specific society and culture. This term is inherently related to ethnotheology, which is explained below.

4. Ethnotheology is a term used by evangelical theologians. The use of this term is closely related to the word "ethne" found in the Bible. The term ethnotheology helps us focus on the unique characteristics of theology for a specific cultural region. The emphasis given is on the validity of this approach, as demonstrated by its transcultural applicability. The term ethnotheology is used with the understanding that ethnology is used for a generalizing approach to analyzing data across cultures, not ethnography, which is used to analyze data from a single culture.

RESEARCH METHOD

This journal research is based on an in-depth literature review with the aim of investigating and developing a theory of evangelism facilitated by cultural contextualization. In this context, cultural contextualization refers to the effort to convey the message of the Gospel by understanding and adapting it to the local cultural context, making the Gospel message more relevant and understandable to diverse audiences within their respective cultures. The research involves a theoretical review of various reliable sources that have developed theories of contextualization in evangelism. This includes an understanding of the fundamental concepts of contextualization, implementation strategies, and their impact in different cultural contexts. Furthermore, the research also explores the interplay between cultural contextualization and evangelism within the realms of theology and practice.
References used in this research include works by theologians and researchers competent in the field of evangelism and contextualization. Relevant reference sources encompass books, journal articles, theses, and related research reports. The research aims to deepen our understanding of the importance of contextualization in the context of evangelism and to provide a strong theoretical foundation for successful evangelism practices in various cultural settings. The appropriate references vary depending on the subject and focus of the research; the authors refer to relevant and influential works within this domain, as determined through the literature review.

Multicultural Context

The term multiculturalism has shaped an ideology that recognizes and celebrates differences in equality. This definition is then simplified into an ideology that accommodates cultural diversity in terms of religion, ethnicity, race, language, geography, and culture. It serves as a bridge to embrace diversity. The concept of multiculturalism is an idea for managing diversity on the fundamental principle of acknowledging the importance of diversity. This perspective relates to the rules of social relations, or relations between ethnic groups. Meanwhile, Suparlan explains that "multiculturalism is an ideology that approves of differences in equality, both individually and in cultural forms." Therefore, ethnic and cultural diversity becomes a characteristic of diversity because multiculturalism emphasizes equality. Suryana says that "this diversity is like a double-edged sword." On the positive side, it is evident in the richness and cultural diversity of the Indonesian people. On the negative side, it indicates that this diversity is susceptible to conflicts among community groups, which can affect the stability of security, social, political, and economic life (Suryana and Rusdiana). In essence, efforts are needed to understand diversity so that it does not become an issue in the life of a nation and among neighbors. From a theological perspective, in his book "Multicultural Theology," G. Sudarmanto (2014) emphasizes that multicultural theology "is a formulation of biblical principles that show God's perspective on relationships among human beings." It can also be said to express what God desires in terms of what people should understand and do toward each other in their diversity (religiously and ethnically). Basically, God has shown how to live with care and build relationships with His creation (Pardede et al., 2022).

Contextual Theology

One of the key trends in the concept of mission in Asia is contextualization, specifically in the area of theology, or the development of mission theology from social, cultural, political, and economic contexts. In other words, it involves theologizing in response to the social, political, economic, cultural, and religious issues directly faced by the church. For this reason, the effort to contextualize theology is called contextual theology (Richard A.D. Siwu, 1996). The term "contextualization" was first introduced by Shoki Coe, a theologian from Taiwan, during a lecture at the KKA General Assembly in Singapore in 1973. At the Asian mission conference in Indonesia in 1989, the term "context" was explicitly stated through the conference theme, "Participation in God's mission in the context of the suffering and struggle of the people of Asia" (Richard A.D.
Siwu, 1996). The essence of a living theology should arise from the church's engagement with its surrounding world. Contextualization is open to insights from both past and future cultural perspectives. Contextual theology should be understood as the insight to contextualize theology (contextualizing theology), rather than as contextual theology or a theology that has been contextualized (contextualized theology) (Richard A.D. Siwu, 1996). The task of contextual theology lies in two areas: first, to examine the relationship of the Christian faith or the church with society, politics, and the state; second, to examine the relationship of the Christian faith or the church with culture, religions, and ideologies. Ultimately, the task of contextual theology is related to the work of theology within a specific framework and from a specific concrete context (Richard A.D. Siwu, 1996).

Contextualization Models

Contextualization models provide a general overview of theological efforts within specific contexts while helping us evaluate the extent to which a biblically-based approach can be implemented (Y. Tomatala, 2001).

1. Accommodation Model (Acts 17:28): Accommodation is an attitude of appreciation and openness to the native culture, manifested in both the practical behavior and the theological aspects of missionary work. Accommodation involves a comprehensive engagement with the cultural aspects of a nation, encompassing the physical, social, and ideal dimensions.

2. Adaptation Model: The key distinction between adaptation and accommodation lies in their approach. The adaptation model does not assimilate cultural elements to express the Gospel but rather employs familiar cultural forms and ideas. For instance, John used the concept of "logos" to explain the truth of the incarnation (John 1). The purpose of adaptation is to express and translate the Gospel into local terms to make it relevant in the cultural context.

3. Procession Model: The procession model is a negative response to culture. Groups following this model view culture as something corrupted by sin and do not see any inherent goodness within it.

4. Transformation Model: This model posits that God is above culture and, through culture, employs cultural elements to interact with humanity.

5. Dialectical Model: This model involves dynamic interaction between text and context, and it rests on a strong presumption that change is inevitable within culture. As change occurs over time, the church must use its prophetic role to analyze, interpret, and assess each situation.

Application of Contextualization

The explanation provided by Pastor Noor Anggraito in Religions of the Eastern World regarding the theory of the relationship between Christianity and the world's religions is outlined as follows (Noor Anggraito, 2001):
1. Radical Displacement Theory: According to this theory, Christian missions should uproot and entirely replace other religions because they are solely of human origin. Essentially, according to this theory, Christianity is unique and true, while other religions are false, and salvation lies outside of them. This approach to Gospel proclamation is polemical in the sense that it not only evaluates or judges other religions but also articulates Christ's demands explicitly in love. According to this theory, during the indigenization process all practices in a culture must be scrutinized in the light of God's Word as special revelation. General revelation has been distorted, so all cultural practices or religions that do not pass the test of God's Word should be discarded, disregarded, and removed. Adherents of this theory belong to the contemporary fundamentalist groups.

2. Fulfillment Theory: According to this theory, Christianity represents the fulfillment of the quest of religions worldwide. Within those religions there exists a deep longing and thirst for God. The clear analogy underlying this theory is that between the Old Testament and other religions: just as Christ fulfilled the expectations of the Jews in the Old Testament, He also fulfills the hopes and longings of other religions. All religions possess some elements of truth; Christianity is superior and fulfills or completes the deficiencies of all religions. In evangelism, a gradual, constructive approach is taken. Evangelists seek a point of truth and then build from there step by step. Their effort is to persuade others of the superiority of Christianity so that they may turn to the Gospel. During the indigenization process, care must be taken to ensure that Christian truths are not compromised; there is a tendency toward syncretism. Advocates of this theory include J.N. Farguhar and N. Macnichol.

3. Faith Theory: This theory posits that there is potential for faith in every religion, but it distinguishes between non-Christian faith and Christian faith. The primary emphasis in evangelism is conversation. Someone who has faith but is not a Christian should be heard and recognized by another Christian. In initiating a conversation, the goal is not just to understand that faith but ultimately to acknowledge it as true faith. The Gospel of Christ should be addressed to that faith. A.G. Hogg is an advocate of this theory, which he presented at the Madras Conference.

4. Triple Relationship Theory: This theory asserts that, in Jesus Christ, all religions, including Christianity, are evaluated. Froytag named it the Triple Relationship Theory, and Hendrik Kraemer is considered a proponent. He argues that the relationships between religions should be seen as a threefold relationship: Christian - Gospel - foreign religions. Here, the Gospel represents God's revelation in Jesus Christ and is distinct from the Bible. The uniqueness of God's Word is entirely present in Jesus Christ. Even the Christian religion, including the New Testament, is an imperfect human response, so the New Testament must be evaluated in light of Christian events. The distinction of the Bible lies in its testimony about the Word, which is the Gospel. In evangelism, total confrontation and complete replacement of cultural elements are invited; this theory is often dubbed the Discontinuity Theory. Kraemer proposed dialogue among religions to facilitate Gospel communication. In the indigenization process, careful adaptation is recommended, but assimilation is rejected. When adopting existing terms and practices, the synthesis of tension with biblical truth is cautiously avoided.
5. Reconception Theory: The objective of this theory is to present Jesus Christ in such a way that He Himself becomes the point of reconception for their beliefs. Evangelists should not aim to win converts from other religions; instead, they hope for a merger (unity) between Christianity and other religions. Albert Schweitzer and Arnold Toynbee are two figures who espouse this perspective. The premise is that there are many ways to seek the truth, so there is truth in every religion, but none is perfect. In evangelism, the term "worldwide evangelism" is unknown; what exists is cooperation. Christians impart the truth of Christianity to non-Christians but must also be willing to accept new truths found in other religions. The process of indigenization aims for mutual understanding. This approach leans toward syncretism.

Challenges and Opportunities for Contextualization

"Challenges and Opportunities for Contextualization" is an important and multifaceted aspect of various fields such as theology, cross-cultural communication, education, and more. Contextualization refers to the process of adapting or tailoring a particular idea, message, or practice to fit a specific cultural, social, or contextual setting. It involves understanding and respecting the nuances, values, beliefs, and customs of a particular group or community in order to make information or ideas more relevant and accessible to them. Here are some perspectives on this concept. Cross-Cultural Communication: In the context of communication, contextualization is crucial for effective cross-cultural interaction. The challenge lies in navigating the differences between cultures and languages to convey a message accurately. The opportunity is to bridge gaps and build stronger relationships by understanding and respecting diverse worldviews. Theology and Religion: In theology and religion, contextualization is about presenting religious messages and teachings in a way that resonates with the beliefs and practices of a specific culture or community. The challenge is to avoid diluting the core message while adapting it to the local context. The opportunity is to make religion more accessible and relatable to a wider audience. Education: In education, contextualization involves tailoring teaching methods and materials to match the cultural and educational background of students. The challenge is to create a curriculum that accommodates diverse learning styles and needs. The opportunity is to enhance the learning experience and outcomes for students from various backgrounds. Business and Marketing: In the business world, contextualization is essential for global marketing and product development. The challenge is to understand local consumer preferences and adapt products or services accordingly. The opportunity is to tap into new markets and increase the relevance of offerings. Social and Political Issues: Contextualization is also significant in addressing social and political issues. It involves understanding the unique challenges and opportunities within different communities, which can help in developing more effective policies and solutions. In summary, challenges and opportunities for contextualization are essential in today's interconnected world. The ability to adapt to and respect diverse cultural contexts can lead to more effective communication, education, religious outreach, business success, and problem-solving in various fields. It is an ongoing process that requires sensitivity, research, and a commitment to bridging cultural gaps while preserving the core of the message or idea.

CONCLUSION

In a multicultural environment, the practice of evangelism encounters a range of challenges that involve cultural, linguistic, and religious differences. Nevertheless, these challenges also present opportunities to develop wiser and more adaptable approaches in disseminating religious messages and fostering a deeper understanding of cultural diversity. Moreover, they offer the potential for collaboration with diverse community groups. The concept of contextualization plays a pivotal role in addressing these challenges and harnessing the opportunities within multicultural evangelism.
In a multicultural setting, the dynamics of evangelism are marked by the need to navigate the complexities of diverse cultures, languages, and belief systems. This calls for a level of sensitivity and cultural competence to effectively convey religious messages while respecting local customs and traditions. Although these challenges may seem daunting, they also serve as a catalyst for developing more nuanced and context-specific approaches to evangelism.

The advantages of engaging in evangelism within a multicultural framework are manifold. It provides a unique opportunity to bridge cultural divides, foster intercultural understanding, and build bridges of communication across diverse groups. These interactions offer the potential to create connections, promote empathy, and develop a richer tapestry of spiritual experiences within a multicultural society.

Contextualization, a fundamental concept in this context, is the key to overcoming these challenges and maximizing the opportunities. It involves the skillful adaptation of religious teachings and messages to align with the cultural and social context of the target audience. By contextualizing religious messages, evangelists can ensure that the message is not only comprehensible but also relevant and meaningful to those from different cultural backgrounds.

In summary, within a multicultural setting, the practice of evangelism presents challenges that are intricately linked to cultural, linguistic, and religious differences. However, these challenges, when approached with cultural sensitivity and the concept of contextualization, open doors to a world of opportunities for deeper intercultural understanding, collaboration, and the dissemination of religious messages that resonate with diverse audiences.
Widespread misregulation of inter-species hybrid transcriptomes due to sex-specific and sex-chromosome regulatory evolution

When gene regulatory networks diverge between species, their dysfunctional expression in inter-species hybrid individuals can create genetic incompatibilities that generate the developmental defects responsible for intrinsic post-zygotic reproductive isolation. Both cis- and trans-acting regulatory divergence can be hastened by directional selection through adaptation, sexual selection, and inter-sexual conflict, in addition to cryptic evolution under stabilizing selection. Dysfunctional sex-biased gene expression, in particular, may provide an important source of sexually-dimorphic genetic incompatibilities. Here, we characterize and compare male and female/hermaphrodite transcriptome profiles for sibling nematode species Caenorhabditis briggsae and C. nigoni, along with allele-specific expression in their F1 hybrids, to deconvolve features of expression divergence and regulatory dysfunction. Despite evidence of widespread stabilizing selection on gene expression, misexpression of sex-biased genes pervades F1 hybrids of both sexes. This finding implicates greater fragility of male genetic networks to produce dysfunctional organismal phenotypes. Spermatogenesis genes are especially prone to high divergence in both expression and coding sequences, consistent with a "faster male" model for Haldane's rule and elevated sterility of hybrid males. Moreover, underdominant expression pervades male-biased genes compared to female-biased and sex-neutral genes and an excess of cis-trans compensatory regulatory divergence for X-linked genes underscores a "large-X effect" for hybrid male expression dysfunction. Extensive regulatory divergence in sex determination pathway genes likely contributes to demasculinization of XX hybrids. The evolution of genetic incompatibilities due to regulatory versus coding sequence divergence, however, are expected to arise in an uncorrelated fashion. This study identifies important differences between the sexes in how regulatory networks diverge to contribute to sex-biases in how genetic incompatibilities manifest during the speciation process.

Introduction

Many kinds of reproductive barriers can contribute to speciation [1,2], with genetically intrinsic post-zygotic barriers being a kind that makes speciation irreversible. Such intrinsic barriers result from disrupted developmental programs due to divergence in the regulatory controls of, and functional activity within, genetic networks. Consequently, research for decades has aimed to decipher the identity and general features of genetic changes that accumulate by selection and genetic drift to lead to Dobzhansky-Muller (DM) incompatibilities in hybrids of diverging populations, due to non-additive, negatively-epistatic interactions among genes [1,3]. It is therefore crucial to determine how genes and gene expression evolve in order to understand how gene regulation influences post-zygotic reproductive isolation through misregulated gene interactions in hybrids [3][4][5]. Evolution of the regulatory controls over gene expression influences much phenotypic evolution [5,6], despite stabilizing selection being a prevailing force acting to preserve expression profiles [7][8][9][10][11]. Expression differences between species accrue in predictable ways.
Regulatory differences between species disproportionately involve the evolutionary accumulation of mutations to cis-regulatory elements, facilitated by such changes being predisposed to additivity and having low pleiotropic effects on traits and fitness [12,13]. In contrast, larger, more pleiotropic effects can result from trans-regulatory changes that occur at distant genomic positions, such as to transcription factors, chromatin regulators, and small RNA genes. Consequently, theory predicts trans-regulatory mutations to fix less readily and to contribute fewer differences between species, despite their large mutational target size and disproportionate contribution to genetic variation within a species [12][13][14][15]. Studies nevertheless commonly find both cis- and trans-regulatory differences between species [16][17][18][19]. Indeed, the coevolution of changes to both cis- and trans-acting factors represents one plausible outcome of stabilizing selection on expression level. The compensatory effects of such coevolved cis- and trans-regulatory changes yield an overall conserved expression profile [10,14,20,21], but this multiplicity of changes is predisposed to generating misexpression in F1 hybrids due to dysfunctional cis-by-trans regulatory interactions [5,22].

... contributing to hybrid male sterility [31,59]. The full extent of hybrid misexpression throughout the genome of each sex, and its root causes in regulatory divergence, however, remain to be characterized. Given the extensive phenotypic stasis within the genus [60] and previous work showing prevalent cis and trans compensatory evolution, even between more distantly related species [20], we expect to find substantial developmental systems drift within this system, where sex-biased genetic networks will be disproportionately prone to misregulation and misexpression. In particular, we expect greater dysfunctional expression in hybrid males, which are sterile, compared to hybrid females, which are fertile. These effects are also expected to associate most strongly with genetic networks that most depend on X-linked genes. To test these ideas, we analyze mRNA transcriptome expression for each sex of C. briggsae, C. nigoni, and their F1 hybrids. Using ASE profiles, we then characterize and quantify cis- and trans-acting regulatory causes of expression divergence, linking genomic change to sex-biased expression, chromosomal features, and hybrid dysfunction.

Extensive expression divergence between species and between the sexes involves the X-chromosome

Each species and sex show distinctive overall transcriptome profiles that are further distinguishable from each sex of F1 inter-species hybrids (S1A Fig). C. briggsae hermaphrodites resemble females phenotypically except that they are able to produce sperm in addition to eggs, and therefore show masculinized expression of some genes (S1A, S2 and S3 Figs). To more appropriately contrast "female" expression profiles between hermaphrodites and females, we identified 1,238 orthologous genes that are male-biased and up-regulated in hermaphrodites relative to sperm-less C. briggsae "pseudo-female" mutants (she-1(v4); [54]) (9% of the 13,636 one-to-one orthologs analyzed). These 1,238 genes were then analyzed separately from most sex-based analyses, which for simplicity we refer to hereafter as "female" expression contrasts. The transcriptomes of pure C. briggsae and C.
nigoni differed in expression for more than half of their genes, with slightly more genes differentially expressed for "females" than for males. Females had a total of 61% (7,598) of genes differentially expressed between species, compared to 54% (7,385) for males (Fig 1A). The X-chromosome shows the most extreme differences in the number of differentially expressed genes between species, with both males and "females" showing significantly higher ratios of upregulated X-linked genes in C. nigoni than in C. briggsae (Fig 1B). Autosomes, by contrast, showed a greater abundance of genes with higher expression in C. briggsae, albeit only significantly for Chromosome I in males (Fig 1B; Fisher's exact test, P < 0.05). Similarly, we found more genes with significant ASE on autosomes among female hybrids (6,070 or 61%; n = 9,932) than among hybrid males (5,402 or 53%; n = 10,155). The enrichment of genes for which a given allele is upregulated in hybrids was largely consistent across autosomes of hybrid males and females (Fig 1B), albeit the pattern was slightly more extreme for females (higher expression of the C. briggsae allele: 3,215 or 32%; of the C. nigoni allele: 2,855 or 29%) than for males (higher expression of the C. briggsae allele: 2,793 or 27%; of the C. nigoni allele: 2,609 or 26%). The slightly greater tendency in females for C. briggsae alleles to be upregulated could suggest more regulatory changes fixed in the C. briggsae lineage leading to higher expression, or some degree of mapping bias in C. briggsae-derived reads compared to those from C. nigoni (simulations suggest that mapping biases are an unlikely cause; see Material and Methods). Within each species, approximately 60% of genes showed significant sex biases in expression (Fig 1C). Male-biased and female-biased genes occurred with similar incidence in both species: 30% C. nigoni male-biased, 29% C. briggsae male-biased, 29% C. nigoni female-biased, 27% C. briggsae "female"-biased. In stark contrast, over 80% of genes in F1 hybrids were significantly sex-biased, with a slightly higher incidence of male-biased genes (43% vs 40%). In C. briggsae and F1 hybrid transcriptomes, Chromosomes I and III were enriched for "female"-biased genes, whereas Chromosomes V and X were enriched for male-biased and sex-neutral genes (Figs 1D and S2). None of the C. nigoni chromosomes exhibited strong enrichment of sex-biased genes, and genes with "female"-biased expression in both species and in hybrids showed strong depletions from the X-chromosome (Figs 1D and S2).

[Fig 1 caption: (A) Expression divergence between species and allele-specific expression (ASE) in hybrids (C. nigoni/C. briggsae) for female (top row) and male (bottom row) transcriptomes; colored bars indicate significantly differentially expressed genes, grey bars genes with non-significant differences. (B) Enrichment of differentially expressed genes between species and alleles for females and males (log2 odds ratio, i.e., observed/expected); asterisks mark significant enrichments (positive values) or depletions (negative values) on chromosomes (Fisher's exact test, P < 0.05 and |log2 odds ratio| > 0.5); "no DE" denotes genes that are not significantly differentially expressed. (C) The number and percentage of genes showing significant sex-bias is greater in F1 hybrids than in parent species; in C. briggsae, male-biased genes also expressed in hermaphrodites (dark green) are shown separately from male-biased only (brown) and from female-biased genes expressed in hermaphrodites (light green, "female") (see Fig 6). (D) Enrichment of genes with significant sex-bias and sex-neutrality in parent species and F1 hybrids for each chromosome.]

Expression dominance in F1 hybrids differs between males and females

We contrasted expression profiles of F1 hybrids with their parent species to infer the expression inheritance of genes, i.e., to identify genes that exhibited additive, dominant (C. briggsae- and C. nigoni-like expression), or transgressive (overdominant and underdominant) expression patterns for each sex (Fig 2A). Among gene sets with distinct expression inheritance profiles, the sexes differed most strikingly in their total number of transgressive genes: 26% (3,494) of genes show transgressive profiles in males (1,788 overdominant and 1,706 underdominant) versus 55% (6,881) in females (4,121 overdominant and 2,760 underdominant) (Fig 2A and 2C). Transgressive genes are thought to be associated with hybrid dysfunction, as such underdominant and overdominant expression profiles represent misexpression phenotypes that manifest values beyond the range of both parents [30,61]. Given the pronounced sterility of hybrid males, we were surprised that hybrid male transcriptomes showed only half the incidence of misexpression of hybrid females. This finding suggests that the genetic networks that control fitness are more robust to expression perturbation in female hybrids than in males. In addition, genes classified as underdominant, overdominant, and additive had significantly higher Euclidean expression distances from the centroid of expression space than genes with no change in expression or with simple dominance (Gamma-distributed multiple generalized linear least squares regression [GLM], P < 0.001) (Fig 2A and 2B). This observation suggests that genes with these expression inheritance profiles are more prone to deviant expression phenotypes. However, except for transgressively expressed genes, "females" had on average a significantly larger expression distance from the centroid of expression space than males (GLM, all P < 0.001 except transgressive P > 0.1) (Fig 2A and 2B). These results are consistent with our multidimensional scaling analysis, which showed shorter expression distances of F1 males to parental males (S1A Fig), in contrast to the more dissimilar expression profiles of F1 females relative to parental females.

[Fig 2 caption fragment: (D) Per-chromosome enrichment (log2 odds ratio, i.e., observed/expected) of genes in a given expression inheritance group; asterisks mark significant enrichments (positive values) or depletions (negative values) on chromosomes (Fisher's exact test, P < 0.05 and |log2 odds ratio| > 0.5). (E) Biplot of expression divergence between species (x-axis) versus allele-specific expression (ASE) in hybrids, indicating the magnitude of cis-acting expression differences between alleles (y-axis). (F) Box- and density plots of the magnitude of absolute expression divergence between species for each type of cis and trans regulatory change. (G) and (H) as for C and D, but indicating different types of cis and trans regulatory-change profiles. Colors indicate groups of genes with different expression inheritance (legend for A-D) and cis and trans regulatory changes (legend for E-H). Top panels in A-H correspond to female transcriptomes, bottom panels to male transcriptomes. https://doi.org/10.1371/journal.pgen.1009409.g002]
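As an illustration of how the inheritance categories above can be assigned, here is a minimal sketch in which hybrid and parental expression levels, together with pre-computed differential-expression calls, determine the class of a single gene. The function and its inputs are hypothetical simplifications of the count-based statistical tests applied to the real RNA-seq data.

```python
# Sketch of the additive/dominant/transgressive classification (Fig 2-style).
# p1 = C. briggsae, p2 = C. nigoni; sig_* are booleans from DE tests, which
# here stand in for the actual count-based tests on the real data.

def inheritance_class(f1, p1, p2, sig_f1_p1, sig_f1_p2, sig_p1_p2):
    """Classify one gene's expression inheritance in the F1 hybrid."""
    if not (sig_f1_p1 or sig_f1_p2 or sig_p1_p2):
        return "no change"                 # hybrid and parents all alike
    if sig_f1_p1 and sig_f1_p2:
        if f1 > max(p1, p2):
            return "overdominant"          # transgressive, above both parents
        if f1 < min(p1, p2):
            return "underdominant"         # transgressive, below both parents
        return "additive"                  # intermediate between the parents
    if sig_f1_p2 and not sig_f1_p1:
        return "C. briggsae dominant"      # hybrid matches C. briggsae
    if sig_f1_p1 and not sig_f1_p2:
        return "C. nigoni dominant"        # hybrid matches C. nigoni
    return "ambiguous"

# Hypothetical gene whose hybrid expression falls below both parents:
print(inheritance_class(2.0, 6.0, 9.0, True, True, True))  # -> underdominant
```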
Genes with additive expression of alleles from both parental species were relatively rare in F1 hybrids of both sexes (8% or 691 genes in females; 8% or 946 genes in males) (Fig 2C). Genes showing simple dominance were up to four times more common, with approximately 20-30% of genes expressed by each sex matching either C. briggsae or C. nigoni expression (23% or 2,869 genes in females; 30% or 3,698 genes in males). Expression dominance matching C. briggsae, however, was consistently more frequent in hybrids of both sexes (15% or 1,876 genes in females and 18% or 2,228 genes in males) than expression dominance matching C. nigoni (Fig 2C; 8% or 993 genes in females and 12% or 1,470 in males), and this asymmetry was more extreme in female hybrids across autosomes (mean ratio of C. briggsae- vs. C. nigoni-dominant expression = 1.75, vs. 1.14 in males) (Fig 2C and 2D). The disproportionate dominance of the C. briggsae copy in hybrid females was even more pronounced for the X-chromosome (F1 female X ratio = 2.66 vs. F1 male X ratio = 1.13).

Expression dominance on the X-chromosome is distinct in hybrids of both sexes

Genes and traits with dysfunctional expression are often associated with the X-chromosome, and we expect differences between the sexes due to Haldane's rule [26,29,40,41]. Consistent with these expectations, we found inheritance profiles associated with misexpression to differ between F1 males and females across the genome, and to differ most conspicuously for the X-chromosome. In particular, the X-chromosome was enriched for underdominant genes in both F1 males and females (Fig 2C and 2D), but only significantly so in females (Fisher's exact test, P < 0.05). The X-chromosome was also significantly enriched for overdominant genes in F1 males, whereas females had significant depletions of such genes on the X (autosomal enrichments: V in males, and I and III in females) (Fig 2C and 2D; Fisher's exact test, P < 0.05). These data show clear differences in expression inheritance across chromosomes between the sexes and reflect distinct hybrid expression dynamics of autosomal versus X-linked genes. Given the distinct misexpression of genes linked to the X-chromosome, we evaluated whether disrupted dosage compensation in hybrids might have systematically perturbed X-linked expression profiles. X-chromosome dosage compensation in Caenorhabditis acts to halve expression of the two copies in females and hermaphrodites [62]. The magnitude of expression that we observed for X-linked genes in hybrid females, however, does not differ on average from either parental species (S1B Fig). Consistent with the conserved mechanistic features of the dosage compensation complex and its regulation across multiple Caenorhabditis species, including C. briggsae and C. nigoni [63], this suggests that dysfunctional dosage compensation is not a key driver of X-linked underdominant expression in hybrid females.

Cis and trans regulatory divergence modulates differences between sexes

Identifying the spectrum of changes to cis- and trans-acting regulators is important for understanding how selection influences the evolution of gene expression and its effects on hybrid phenotypes. Correspondingly, we classified the types of regulatory changes and examined how they perturbed gene expression in F1 hybrids for each sex.
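The classification itself can be summarized with the standard decision rule relating interspecies divergence (P, the log2 species ratio), hybrid allelic imbalance (H, the same ratio between alleles within the F1), and their difference (T = P - H, the trans component). The sketch below is a hypothetical simplification: in practice each quantity is tested statistically on read counts rather than flagged with booleans.

```python
# Sketch of the cis/trans classification from species divergence and hybrid
# allele-specific expression (ASE). P = log2(C. nigoni / C. briggsae) between
# species, H = the same ratio between alleles in F1 hybrids, T = P - H.
# The significance flags stand in for the tests run on the actual counts.

def regulatory_class(P, H, sig_P, sig_H, sig_T):
    """Classify one gene's regulatory divergence.
    sig_P: species differ; sig_H: alleles differ in the hybrid (cis signal);
    sig_T: species divergence differs from the allelic ratio (trans signal)."""
    if sig_H and not sig_T:
        return "cis-only"                    # allelic ratio explains divergence
    if sig_T and not sig_H:
        return "trans-only"                  # divergence with balanced alleles
    if sig_H and sig_T:
        if not sig_P:
            return "compensatory cis-trans"  # opposing changes, conserved level
        # cis (H) and trans (P - H) acting in the same direction
        return "enhancing cis-trans" if H * (P - H) > 0 else "cis x trans opposing"
    if not (sig_P or sig_H or sig_T):
        return "conserved"
    return "ambiguous"

# Hypothetical gene: no net divergence (P ~ 0) but strong allelic imbalance
# in the hybrid, i.e. coevolved cis and trans changes that cancel out.
print(regulatory_class(P=0.1, H=1.5, sig_P=False, sig_H=True, sig_T=True))
```

The compensatory case, in which a strong allelic imbalance underlies a conserved expression level, is the signature of the cis-trans coevolution discussed below.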
Consistent with ASE studies in flies [16,25], mice [26], plants [17,18] and yeast [19,27], we found substantial expression divergence due to cis-only or trans-only regulatory changes in addition to joint effects of cis and trans changes, with changes in trans and cis + trans conferring larger magnitudes of expression divergence (Fig 2E and 2F). Genes with cis-only and trans-only effects were not significantly enriched on any autosome for either sex, although Chromosome V exhibited a trend toward enrichment of genes with trans-only effects for males (Fig 2H). Comparing regulatory divergence between the sexes, we found nearly 60% more genes involving trans-only changes in females (18% or 2,195 genes) than in males (12% or 1,391 genes) (Fig 2G). For genes with cis-only divergence, however, the sexes showed a reciprocal pattern: cis-only divergence was more prevalent in male than in female transcriptomes (17% or 1,944 genes in males; 14% or 1,647 genes in females). Thus, the expectation that cis-regulatory divergence will be more prevalent than trans-regulatory divergence holds true only when expression is measured in males, and this points to fundamental differences between the sexes in trends of regulatory evolution. Despite the limitations posed by the hemizygous condition of the X-chromosome in males, we devised a strategy to confidently assign one of three categories of regulatory divergence to X-linked genes in males based on their expression inheritance. In short, we inferred genes to have i) cis-only* divergence (inferred from expression dominance with respect to C. nigoni, which may include genes with X-linked trans regulators, with this distinction denoted by *), ii) trans-only* divergence with recessive C. nigoni trans-acting factors (expression dominance with respect to C. briggsae, which may exclude X-linked genes with codominant trans regulators, denoted by *), providing a lower-bound estimate of trans-acting regulatory divergence affecting the X, and iii) cis-trans compensatory changes (no difference in expression between parent species, with hybrids showing over- or underdominance). We did not find enrichments of cis-only* divergent genes on the X, as might be expected under a "large-X" effect model for regulatory contributions to hybrid dysfunction. Similarly, we found only a non-significant trend in both sexes toward enrichment of trans-only regulatory divergence affecting X-linked genes (Fig 3; Fisher's exact test, P > 0.05). Instead, cis-trans compensatory changes were 1.9-fold enriched on the X-chromosome for males and 1.7-fold underrepresented on the X for females (Fig 3; Fisher's exact test, P < 0.05). These results for regulatory divergence "regroupings" are consistent with the full set of categories that could be defined for females (Fig 2H). This finding implicates stabilizing selection on X-linked male-specific networks, whose coevolved regulation makes them especially prone to disruption in F1 hybrids, and contrasts with the 1.6-fold enrichment of conserved regulatory controls for X-linked genes in females (Fig 3; Fisher's exact test, P < 0.05). Because of codominant effects of trans-acting factors controlling the expression of both alleles, trans-only regulatory divergence is often associated with deviations from additivity [64].
Consistent with this idea, we found that trans-regulatory changes were more often associated with genes having dominant expression patterns, and that additive expression was more often associated with significant cis-effects (cis-only and cis + trans) (Fig 4). However, genes with cis-only effects often showed dominant patterns of expression. Genes that show cis-only divergence on autosomes for male expression often have C. briggsae- or C. nigoni-dominant expression in hybrids whereas, in females, they are more typically overdominantly expressed (Fig 4). We can think of four potential mechanisms that might contribute to this sex difference and to the cis-dominant effect in general: (1) more extensive post-transcriptional regulation in females [65] might limit transcript abundance through epistatic interactions; (2) greater degradation or turnover of cis-acting binding sites for male-expressed genes [55,56] could reduce the affinity of transcription factor binding; (3) species-specific trans regulators originating from lineage-specific gene duplication and/or loss could result in allele-specific trans-regulation; and (4) some form of transvection could cause single-allele expression in hybrids and in one of the parent species, as has been identified as a source of dimorphic trait expression in Drosophila [66]. Furthermore, we categorized genes into 13 groups based on distinct combinations of species differences and sex differences in expression, including their interactions, and examined the proportion of genes with different cis- and trans-effects (Fig 5A and 5B). Our results are consistent with the idea that cis and trans changes each play distinct roles in sex-biased expression and sexual dimorphism [67]. We find, on one hand, that trans-only changes in females are more often associated with genes that have male-biased expression (37% trans-only in male-biased genes vs. 15% in female-biased genes) and, on the other hand, that female-biased genes show more conserved regulation when expressed in males (32% conserved in female-biased genes vs. 9% in male-biased genes) (Figs 5D and S2). This pattern suggests that distinct sex-specific regulatory controls may be asymmetric in this system and may have evolved to repress expression of genes biased toward the opposite sex.

[Fig 3 caption: Briefly, X-linked compensatory cis-trans changes in males represent genes that are not differentially expressed between species but show over- or underdominance in hybrids; cis-only* represents genes with expression divergence between species and hybrid expression matching C. nigoni; trans-only* denotes genes with expression divergence and hybrid expression matching C. briggsae; the "conserved" category refers to genes that are not differentially expressed between species or between species and F1 males. The * differentiates these categories from the cis-only and trans-only categories inferred from ASE on autosomes and in XX individuals.]

Together, these observations highlight how the evolution of sex-specific regulatory controls can underlie sexually dimorphic expression phenotypes, with implications for sex biases in the speciation process.

Hybrid misexpression commonly involves genes with joint cis-trans regulatory changes

Genes exhibiting transgressive expression profiles in F1 hybrids are often associated with dysfunctional traits, owing to their radically different expression from that of the parent species.
Cis and trans changes with opposing effects can interact epistatically in hybrids to induce transgressive expression and allelic imbalance [22]. Such regulatory evolution can arise through coevolutionary fine-tuning even when overall expression level is subject to stabilizing selection [6,11]. Consistent with this idea, we found that genes with transgressive effects in hybrids are often associated with cis-trans regulatory changes (33% or 877 genes in males and 42% or 2,872 genes in females; Fig 4). In both sexes, F1 hybrids revealed a higher fraction of compensatory changes (27% in males and 31% in females) compared to enhancing cis-trans changes (6% in males and 11% in females). These results highlight how stabilizing selection can act independently on each sex to maintain sex-specific regulation, leading to opposing cis and trans effects that induce dysfunctional expression in F1 hybrids. In addition, we also found substantial cis-trans compensatory changes among genes that show no change in expression between species (S3 Fig) and among those that do not show sex-biased expression (i.e., the C-N-N group in Fig 5D), implicating extensive developmental systems drift in regulatory processes despite conserved profiles of expression. Genes with additive expression inheritance may also generate hybrid dysfunction by producing intermediate expression profiles in F1 hybrids, and they are thought to commonly reflect cis-only regulatory divergence [23,40,68]. In line with this idea, we found that genes classified as additive were more often associated with significant cis-acting divergence in both sexes (cis-only: 43% in males, 26% in females; enhancing cis-trans effects: 30% in males, 42% in females; Fig 4). However, expression additivity is not abundant in our analysis (Fig 2C), suggesting that it is not a major source of phenotypic dysfunction in hybrids of this system. To further assess the role that different regulatory controls play in the origin and maintenance of divergent sex-biased expression, we contrasted expression inheritance and patterns of cis- and trans-regulatory divergence for male-biased and female-biased genes (Fig 5C and 5D). We found that male-biased genes expressed in F1 hybrids of both sexes frequently show underdominant transgressive misexpression compared to sex-neutral and female-biased genes (Fig 5A-5C). Examining male-biased genes when expressed in females, we find that they often show expression dominance matching the species with lower expression (Fig 5A-5C; N-M-I). Additionally, genes that show conserved expression between species but with significant sex-bias (C-M-N for male-biased and C-F-N for female-biased genes) also often are either misexpressed (over- or underdominant) or show no significant change in expression in hybrids (Fig 5C). These groups also have the highest proportion of genes with cis-trans compensatory changes (Fig 5D), suggesting that many "conserved" sex-biased networks have undergone substantial developmental systems drift. Male gene networks may experience greater "fragility" if they are more prone to perturbation by dysfunctional gene interactions. Fragility could arise through higher rates of molecular evolution among male-biased genes or through stronger downstream effects of male-specific regulators. We found no overall significant difference between male-biased and female-biased genes in sequence divergence of upstream regions (GLM, 1-P cons male-biased vs. female-biased, t = -0.48, P = 0.62; arms vs. centers, t = -27.66, P < 0.0001).
We therefore conclude that upstream regulatory changes exert disproportionately strong effects on male-biased genes, implying that male genetic networks are more fragile. Complementing this idea, genes expressed in F1 males are more commonly underdominant when they correspond to male-biased genes than to female-biased genes (957 genes vs. 431 genes). Moreover, male-biased genes have a higher proportion of genes with enhancing or compensatory cis-trans changes (1,086 or 23% of male-biased genes vs. 702 or 18% of female-biased genes). Male-biased genes also define more distinct expression-profile modules than do female-biased genes (6 versus 3 co-expression modules), including one with male-specific underdominance (M15 in S2 Fig). By contrast, female-biased genes expressed in F1 females were predominantly overdominant and are more often associated with cis-only and enhancing cis-trans changes (Figs 5C, 5D and S2). These observations suggest that female gene regulatory networks may be more resilient to regulatory divergence, with male networks being more fragile, potentially translating into similar resilience and fragility of organismal traits such as fertility [49].

Faster regulatory and molecular evolution of male-biased and spermatogenesis genes

Sexual selection and sexual conflict are predicted to drive faster rates of molecular evolution and expression divergence [38,69,70]. Consistent with these predictions, we found that male-biased genes have higher average expression divergence (|log2-fold-change|, GLM, t = -23.639, P < 0.001) and faster rates of molecular evolution in the two groups of male-biased genes that are rare on the X-chromosome (Ka/Ks for B-M-I: GLM, t = -8.947, P < 0.001; B-M-N: GLM, t = -8.764, P < 0.001) (Fig 5E and 5F). Male-biased genes show elevated expression divergence compared to sex-neutral and female-biased genes as a whole, though the signal for faster sequence evolution was weak (Ka/Ks male-biased vs. sex-neutral: GLM, t = -0.77, P = 0.44; male-biased vs. female-biased: GLM, t = -0.78, P = 0.43; 1-P cons male-biased vs. female-biased: t = -0.48, P = 0.62; arms vs. centers: t = -27.66, P < 0.0001). Rapid sequence evolution for male-biased genes was not associated with enrichment in chromosomal arms (Fig 5G; Fisher's exact test, P > 0.05), regions known to show higher divergence [48]. The groups of male-biased genes with enrichments on the X-chromosome, however, have either conserved expression between C. briggsae and C. nigoni or higher expression in C. nigoni males (Fig 5G; C-M-N, N-M-I, N-M-N; S2 Fig; Fisher's exact test, P < 0.05), indicating that the X-chromosome is home to a subset of genes that reflect ancestral male-biased gene networks. We observed the highest expression divergence, as well as high rates of molecular evolution, in the distinctive set of genes that combine male-biased expression, higher expression in C. briggsae than in C. nigoni, and a species-by-sex interaction (B-M-I; Fig 5E and 5F). The species-by-sex interaction in this B-M-I group indicates a masculinized expression profile for C. briggsae hermaphrodites, implicating a role for these genes in sperm production (S4 Fig and M5 in S2 Fig). To test this idea, we looked at C. elegans genes previously identified as spermatogenesis genes [45] and found overlapping orthologs in C. briggsae and C.
to be 11-fold enriched in the B-M-I group (Fig 6B and 6C; Fisher's exact test, P < 0.05) and depleted from the X-chromosome (Fig 5G), consistent with previous observations for sperm genes in Caenorhabditis [36,43,44]. Further consistent with sperm-related function, male-biased genes that show upregulated expression in C. briggsae hermaphrodites relative to C. briggsae pseudo-females (Fig 6A–6C) were highly enriched (46-fold) and overlapped extensively with the B-M-I group (Fig 6B and 6C; Fisher's exact test, P < 0.05). Thus, the high divergence in both expression and sequence for these genes suggests distinctive selection pressure on them, potentially reflecting the outcome of sexual selection and sexual conflict on sperm. The collection of male-biased genes that also show elevated expression in C. briggsae hermaphrodites, putatively linked to spermatogenesis, showed expression phenotypes in hybrids often resembling C. nigoni: extensive expression dominance for the C. nigoni copy and underdominant expression in F1 hybrids, as well as high expression divergence (Figs 6D and 6E and S5). They also show an abundance of trans-only regulatory divergence (Fig 6F and 6G).

Regulatory and expression divergence within the sex determination cascade

Sex determination of somatic and germline development involves a negative regulatory cascade [71], including the secreted protein HER-1 that inhibits the transcription factor TRA-2; further downstream, TRA-1 represses genes such as fog-3 in controlling spermatogenesis [72,73]. Consistent with what is known about this pathway [71], our data show male-biased expression of her-1 in both parental species and indicate that it has evolved enhancing cis + trans regulatory changes, implicating lineage-specific regulatory changes that promote its expression in males of both species (S6 Fig). Without HER-1, TRA-3 cleaves the intracellular domain of TRA-2 in XX individuals, which then interacts with FEM proteins, preventing FEM-TRA-1 interactions [73]. We found the ortholog of tra-3 to be expressed and regulated (cis-only) similarly between sexes of pure species and hybrids. However, fem genes and tra-2 seem to be female-biased and overdominantly expressed (S6 Fig), suggesting sex-specific co-regulation; at least two (fem-2, fem-3) have potentially evolved cis and trans regulatory changes. In addition, cis x trans regulatory effects on tra-1 seem to have evolved in females, suggesting that opposing regulatory changes have evolved on an important transcription factor controlling germline and somatic sexual differentiation. However, the story for tra-1 regulatory divergence in males differs from females and is also less clear: our analysis categorizes it as "ambiguous" (significant ASE, but non-significant regulatory divergence and non-significant trans effects). In contrast, orthologs of fog-3, which is involved in spermatogenesis, are upregulated in C. briggsae hermaphrodites and in males of both species (B-M-I), as expected. In F1 males, however, fog-3 is misexpressed (underdominant), having a non-significant trans effect (P = 0.28) and a marginally significant cis ASE effect (P = 0.04). Its expression in F1 females shows C. nigoni dominant expression that is due to trans-only regulatory divergence between the species (S6 Fig). The regulatory divergence between C. briggsae and C. nigoni
for genes involved with sex differentiation and development points to plausible mechanisms for suppression of spermatogenesis in XX hybrids to produce a "female" rather than a "hermaphrodite" phenotype, as well as yielding hybrid male sterility.

Genome architecture only modestly affects regulatory divergence

Given that protein-coding sequence evolution and gene composition vary non-randomly along chromosomes in many Caenorhabditis species in association with the chromosomal recombination landscape, we asked whether distinct chromosomal domains would also associate with the degree of cis-regulatory divergence. We find higher molecular divergence between the genomes of C. briggsae and C. nigoni in chromosomal arms compared to centers in non-coding sequences upstream of protein-coding genes (S7A Fig), in addition to protein-coding sequence divergence (S8 Fig; also see [48]). These observations are consistent with the idea of stronger purifying selection on mutations to genes and their cis-regulatory regions when linked to chromosome centers, or more effective positive selection when linked to chromosome arms. Despite the elevated molecular divergence in arm regions, we only found modest elevation of ASE divergence for genes on arms (S7B Fig). Notably, we observed only a weak positive correlation between ASE divergence and rates of molecular evolution (Fig 7A and 7B; linear regression for Ka/Ks': adjusted R² = 0.035, slope m = 0.019, P < 0.0001; 1 − P_cons: adjusted R² = 0.01, m = 0.017, P < 0.0001). Overall, these patterns indicate that rates of divergence for gene expression and their cis-regulatory controls are largely decoupled from protein-coding sequence evolution.

Discussion

Regulatory control over gene expression is an important component of phenotypic evolution [12]. As species diverge and accumulate mutations, selection will permit regulatory changes that maintain transcript levels as well as changes that allow exploration of new phenotypic space when they confer a fitness advantage. Sexual selection and sexual conflict can further promote such genomic divergence, both in terms of molecular evolution (e.g., rapid coding or regulatory sequence evolution for male-biased genes) and in terms of gene expression (e.g., divergence in sex-biased gene expression levels) [38,69,70]. In interspecies hybrids, sexually driven sources of genomic divergence can disrupt gene networks to create negative epistatic interactions that manifest as sex-biased hybrid sterility or inviability and generate reproductive isolation [5]. Here, we document extensive regulatory divergence in the face of both conserved and divergent gene expression, with prominent influences of sex-biases and genomic location on the potential to induce misexpression in interspecies hybrids.

Compensatory regulatory evolution implicates pervasive developmental system drift

C. briggsae and C. nigoni have acquired substantial divergence at the DNA level since they diverged from their common ancestor ~35 million generations ago (3.5 Mya assuming 10 generations per year), including ~20% sequence divergence for synonymous sites, changes to genome size, and disproportionate loss of short male-biased genes in C. briggsae
since its transition in reproductive mode to androdioecy [48,56]. Despite this genomic divergence, hybridization between these species yields viable and fertile F1 hybrid females, as well as viable hybrid males that suffer complete sterility [36,49]. Although we observe substantial expression divergence, we nevertheless find that 39% of genes expressed in "females" (4,783 genes) and 46% in males (6,236 genes) show no differential expression between species. Conserved expression between species may result from stabilizing selection, recognized as a common force acting on transcript abundance [7–9,11]. Mechanistically, conservation of the expression phenotype can occur, despite sequence evolution, with co-evolutionary changes to both cis- and trans-regulatory elements. For example, if a trans-acting mutation fixes due to a pleiotropic benefit on other loci, then selection would favor fixation of any subsequent compensatory mutation in cis that returns expression to optimal levels at the focal locus [5,20]. We find evidence of widespread compensatory cis-trans divergence in gene regulation between C. briggsae and C. nigoni. Such coevolution represents just one mechanism leading to "developmental system drift," in which the molecular controls over developmental pathways can diverge while resulting in little or no change to their phenotypic outputs [14,21,74]. In Caenorhabditis nematodes, developmental system drift and stabilizing selection have been invoked to explain the high degree of phenotypic stasis and morphological constraint among species [60,75–78]. Gene network conservation despite cis-regulatory divergence has been demonstrated by inter-species promoter swaps in Caenorhabditis, showing both robustness in regulatory networks and neo-functionalization in specific cell types [20,78,79], as well as concerted action of cis and trans compensatory regulation between more distantly related Caenorhabditis species [20]. Our results reinforce this view of pervasive developmental system drift: we show a high incidence of transgressively expressed genes (overdominant and underdominant in hybrids) and cis-trans compensatory changes in the sexually dimorphic regulatory evolution and expression inheritance of each sex (Fig 4), in addition to an abundance of transgressive hybrid expression and cis-trans compensatory changes among genes with conserved sex-neutral expression profiles (C-N-N) (Fig 5A–5D). Sequence divergence and developmental system drift in regulatory pathways can lead hybrids to experience misregulation due to the conflicting regulatory signals from the divergent genomes, yielding misexpression in hybrid transcriptomes [28]. This situation could present a Dobzhansky-Muller incompatibility if the misexpression leads to reduced fitness; genetic interactions like those experienced by hybrids have been untested by natural selection and will likely be detrimental [1]. The most striking signal of misexpression in our hybrids is the sharp contrast in the fraction of sex-biased genes: ~83% in hybrids vs ~60% in each parental species (Fig 1C). The degree of sex-biased gene expression is more extreme for the X chromosome in F1 hybrids due in part to transgressive underdominance effects (Figs 1D, 2C and 2D), though the fraction of male-biased genes likely involved with spermatogenesis is, in fact, depleted on the X (Figs 5G, 6C, 6E and 6G), consistent with previous findings [43,44].
The X-chromosome in hybrid males is enriched, however, for genes showing overdominant expression, which can be linked directly to cis-trans compensatory changes (Figs 2D, 3 and S2). Overall, these results implicate extensive sex-limited developmental systems drift that generates widespread misexpression of genetic networks distinctly in each sex for hybrids of C. briggsae and C. nigoni. While expression differences between species are often biased towards one species in systems such as fruit flies and plants [16,18,25], we do not observe much asymmetry toward one species (Fig 1A; binomial test: males, ratio = 0.5, P = 0.26; females, ratio = 0.47, P < 0.001). This symmetry in expression suggests that demographic effects that exacerbate genetic drift are not likely to bias regulatory changes toward either increased or decreased expression disproportionately for one species, as could occur if regulatory changes fix more rapidly in species like C. briggsae with lower effective population sizes due to selfing.

Sex differences in regulatory divergence expose sexual dimorphism of genetic networks

Transgressive expression in F1 hybrids beyond the bounds of parental expression levels is a signature of misexpression, which we observe in abundance. Given that C. briggsae × C. nigoni hybrids obey Haldane's rule [49–51], we expected more misexpression in hybrid males. Our analyses revealed, however, that it is hybrid females that experience more extensive transgressive gene misexpression (Fig 2A–2C). Studies in mice have shown that hybrid misexpression of X-linked genes confers male sterility [5,29], and Drosophila demonstrates an important influence of X-autosome incompatibilities for Haldane's rule and hybrid male sterility [37,39], though with a more tenuous link to misexpression of X-linked genes [30,80]. We found enrichment of misexpressed loci on the X-chromosome for both sexes (Fig 2D) and no indication of compromised dosage compensation. Most female-biased transgressive genes show overdominant misexpression, however, compared to more underdominant misexpression in male-biased networks among hybrid males (Fig 5A–5C). This observation, in addition to higher divergence in expression and coding sequences (Fig 5E and 5F), suggests that male-biased networks are more "fragile" or prone to regulatory perturbations. Moreover, we hypothesize that cis- and/or trans-regulatory changes acquired after speciation experienced selection to sustain upregulated expression of female-biased genes, with those changes behaving in an overdominant manner in hybrids. Indeed, overdominant genes with cis-trans divergence in females have disproportionately evolved "enhancing" regulatory changes compared to males (Figs 2G, 3 and 5D). In spite of the extensive overdominance in female-biased expression (Figs 2B, 2C and 5C), the fact that hybrid females are fertile suggests that overdominant expression does not impact fitness as negatively as does regulatory divergence that leads to underdominance in hybrids. Consequently, this sexual dimorphism in gene regulatory networks may contribute to the greater sensitivity of males to manifesting hybrid dysfunction. Interestingly, we find that regulatory controls over male-biased genes when they are expressed in F1 females largely reveal trans-only regulatory changes (Figs 5D, 5E, 6F and 6G). These trans-acting regulatory changes are more strongly associated with C. nigoni
dominant expression in females among autosomes, which contrasts with these same genes when expressed in males, where they reveal extensive cis-regulatory changes and C. briggsae dominant expression (Figs 4, 5C and 5D). Previous studies have shown that trans-acting regulation often manifests expression dominantly in hybrids, possibly as a result of masking effects between dominant and recessive trans alleles, whereas cis-acting regulation often generates additive contributions to expression [64]. However, cis-dominant expression patterns are not uncommon in ASE studies [18] and may arise, potentially, from cases where cis changes in one species decrease transcription factor binding affinity or where significant post-transcriptional regulation in pure species is compromised in hybrids (see Results section). Merritt et al. [65] showed that C. elegans oogenic germline gene networks depend strongly on 3'UTR post-transcriptional regulation, but that genes showing spermatogenic expression rely primarily on upstream transcriptional regulation. Curiously, disruption of endogenous 22G small RNA regulation of spermatogenesis in hybrids of C. briggsae x C. nigoni also is implicated in male sterility [31]. It may be that sex differences in the dominant mode of regulation (transcriptional versus post-transcriptional) contribute to the sex differences in the cis-dominance that we report. These results also align with observations of downregulation of spermatogenesis genes, such as fog-1 and fog-3, by specific transcription factors (i.e., tra-1), and sperm-specific expression depending more on upstream promoter regions than 3'UTRs in C. elegans [65,72,81].

We observed distinct signatures of regulatory divergence for genes with sex-biased expression when expressed in the opposite sex, raising the possibility that such divergence may reflect a history of sexual selection and sexual conflict [52,57]. One possible explanation invokes Rice's hypothesis [82]: the fixation of regulatory changes that create sex-biased expression serves as an evolutionary solution to inter-sexual genomic conflict. To avoid traits that are detrimental to females but improve male performance, genomic conflict resolution by means of sex-biased expression may be attained faster through trans-regulatory changes, which are more pleiotropic, downregulating male-biased genes in females. This logic aligns with the hypothesis that sex-biased expression is partly driven by selection acting to resolve sexual conflict by means of modifier alleles or regulators [67,82]. However, the fact that trans-only regulatory changes do not predominate in the control of female-biased genes in males (Fig 4E) suggests that regulatory mechanisms to resolve genomic sexual conflict act in different ways for the two sexes. Direct manipulative experiments would prove valuable in testing these ideas further. The fact that the egg-bearing sex in the C. briggsae parent is actually a hermaphrodite may help explain the presence of underdominant genes in hybrid females. Many of the genes in hybrid females that show underdominant effects would otherwise show male-biased expression (Figs 5C and 6E), suggesting that they may compromise spermatogenesis to effectively convert F1 hermaphrodites into females. This perspective provides a complementary view to the idea that hermaphroditism is 'recessive' to femaleness in a simple Mendelian manner [49].
Our expression data characterize tra-1, a master regulator of sexual development programs through its repression of genes such as fog-3 that would promote spermatogenesis and male development [72], as a conserved sex-neutral gene that is overdominant in both male and female F1 hybrids (C-N-N; S6 Fig). Furthermore, tra-2 and fem genes, which interact to facilitate TRA-1 activity as a repressor, are all female-biased and overdominantly expressed in hybrids, which suggests an additional mechanism for hybrid XX "demasculinization". Consequently, repression of male-biased genes involved with the tra-1 pathway is likely to cause both the "female" phenotype in hybrid females (i.e., absence of the male function normally seen in hermaphrodites) and reduced fertility in hybrid males.

Gene expression in hybrid males predominantly shows either simple dominance or no change (Fig 2C). While it is tempting to speculate that this pattern might be a byproduct of males of different species sharing the same reproductive role, the combined observations of reduced sexual selection in C. briggsae males [53,57], genomic divergence [55,56], and clear sex differences in hybrid fertility [49,50] suggest otherwise. If most transgressive expression occurs in the gonad, then the small and defective gonad development of F1 males may have led to their observed paucity of transgressive expression. Distinct relative sizes of tissues between species, sexes, and hybrids could potentially influence differences in expression and their interpretation [83]. Several aspects of these Caenorhabditis, however, minimize the potential influence of allometric effects between species. The morphology of Caenorhabditis species is highly conserved even between very distantly related species, with body size being very similar for C. briggsae and C. nigoni in particular [84]; body size differences within and between species reflect differences in cell size, not cell numbers [59]. Our data show symmetry in upregulated genes between species (Fig 1A), consistent with a limited potential role of allometric bias between species. While hybrid females experience delayed germline development [49], we analyzed adult animals with complete development. However, the gonad of hybrid males is anatomically deformed, potentially contributing to differences in expression due to tissue allometry. If disproportionately small gonad size of hybrid males were to skew expression patterns, then we would expect a large signal of genes showing low expression specifically in hybrid males. We observe two modules of co-expression with such a pattern (M3 and M15 in S2 Fig), but M15 comprises just 437 (8.7%) of male-biased genes and M3 contains genes with female-biased expression overall (672 genes, 15% of female-biased genes). Moreover, we observe a high number of genes with conserved expression across males, including hybrids (Fig 2C), counter to what would be expected if tissue allometry introduced substantial bias.

Implications for Haldane's rule and the large-X effect

Male-biased genes are expected to evolve fast because of sexual selection and sexual conflict, resulting in higher rates of protein-coding and gene expression divergence [38,69,70]. Faster evolution of male-biased genes is the premise behind the "faster male" hypothesis to explain the high incidence of hybrid male sterility in Haldane's rule [33].
We find faster evolving coding sequences for the subset of male-biased genes that also show exceptionally high expression divergence (B-M-I and B-M-N; Fig 5E and 5F), many of which are implicated in spermatogenesis (Figs 6A–6C; S4 and S5). This result is consistent with previous reports of faster molecular evolution of spermatogenesis and male germline genes of C. elegans [57,85,86], and suggests that "faster male" evolution may contribute to Haldane's rule in Caenorhabditis. To the extent that rapid evolution predisposes genes to forming genetic incompatibilities, however, the fact that we rarely observe such genes on the X-chromosome suggests that the "faster X" model does not provide a compelling explanation for Haldane's rule for hybrid male sterility [34] (Fig 5G).

The "large-X" effect, where hybrid male sterility results from a disproportionate count of X-linked loci involved in genetic incompatibilities [87], is evinced by deletion screen experiments in Drosophila [37,39] and introgression experiments in C. briggsae x C. nigoni [58,59]. An analogous effect can result from X-linked regulatory divergence that causes misexpression and hybrid dysfunction [26,29,88]. Consistent with a large-X effect due to regulatory evolution in Caenorhabditis, our analyses show that the X chromosome contains an excess of genes that have undergone cis-trans compensatory changes causing misexpression in hybrid males (Fig 3). This pattern implicates a disproportionate role for regulatory divergence of the X-chromosome in mediating misexpression in Caenorhabditis hybrids. Two non-mutually exclusive ways in which hybrid male dysfunction (i.e., sterility) can arise are: 1) through misexpression and misregulation of X-linked genes involved with male function, and 2) through negative epistatic interactions (i.e., incompatibilities) between X-linked and autosomal genes involved in male-specific pathways. Our results suggest that both cases are plausible. First, the paucity of X-linked sex-biased genes in parental genotypes of Caenorhabditis species suggests that any misregulation and misexpression on the X might exert little downstream impact (Fig 1D; [43,44]). However, misexpression of X-linked genes in hybrids is relatively common compared to autosomes (with the exception of Chromosome V) (Fig 2D), with hybrid males having a higher relative incidence of effectively misregulated genes (trans-only and compensatory cis-trans changes) compared to female hybrids (Fig 3). Although enrichments on the X were non-significant, we find other signs of trans-acting factors contributing to misexpression in both sexes (Figs 3 and 4). In hybrid females, trans-acting factors largely drive the expression of X-linked genes with C. briggsae dominant and underdominant expression, unlike autosomes (except Chromosome V) (Fig 4). In hybrid males, our inference that all X-linked genes with C. briggsae dominant expression arise from trans-only* effects implicates recessive C. nigoni trans regulators (S13 Fig); this should be, at minimum, an underestimate of all trans-acting regulatory divergence affecting the X-chromosome, and suggests that similar trans-acting misregulation occurs. These findings are consistent with previous observations, particularly in Drosophila, of trans-acting sex-specific changes causing misregulation of X-linked genes [40,89,90]. Second, the extensive expression dominance in F1 males that disproportionately matches the C. briggsae
expression level due to cis effects has the potential to disrupt gene networks, as these genes may interact negatively with C. nigoni X-linked genes in hybrid males (Figs 4 and S9). Autosomal spermatogenesis genes, by contrast, tend to show C. nigoni-dominant expression in F1 hybrid males (S5B and S5C Fig), consistent with previous work showing recessive effects of the C. briggsae autosomal portions of genetic incompatibilities [58]. In addition, this prior work also showed that sterility in C. nigoni x C. briggsae hybrid males may not require many X-autosome incompatibilities [59]. Despite their low abundance on the X-chromosome, X-linked spermatogenesis genes are often enriched for both misexpression and misregulation (Figs 3, S5B and S5C), potentially enhancing their role in hybrid dysfunction. Our genome-wide transcriptome analysis of cis- and trans-regulatory divergence therefore sheds new light on Haldane's rule, reinforcing some previous key inferences about hybrid dysfunction associated with males, spermatogenesis, and the X-chromosome.

Decoupled coding vs regulatory divergence and the evolution of hybrid dysfunction

Genes that evolve fast may be predisposed to creating genetic incompatibilities, which could result either from dysfunctional structural activities of the encoded proteins or from controls over the timing or location of gene expression. Studies to date indicate that rates of evolution of coding sequences and regulatory regions do not correlate strongly [91,92]. Supporting the idea that regulatory divergence due to cis-acting elements is largely decoupled from rates of molecular evolution, we found only weak positive correlations between rates of molecular evolution for protein structure and gene regulation (Fig 7). cis-regulatory divergence also showed only a weak elevation in chromosome arms compared to centers (S7B Fig), genomic regions with marked differences in recombination rates, gene density, and sequence conservation within and between species (S7A and S8 Figs) [48,93] that influence the rate at which mutations, especially weakly-selected regulatory mutations, can get fixed as a result of direct selection and linked selection [94]. The decoupled rates of evolution for protein structure and gene regulation imply that genetic incompatibilities mediated through protein activity versus gene regulation may follow different rules in the evolution of reproductive isolation.

Conclusion

We contrasted sex-specific transcriptomic profiles between C. briggsae and C. nigoni and their hybrids to understand how the evolution of cis- and trans-regulatory elements can contribute to F1 hybrid dysfunction. Regulatory evolution underlies divergent expression as well as conserved expression subject to compensatory effects of changes to multiple elements. The sharp contrast of extensive misexpression in F1 hybrids with the morphological stasis and expression conservation between Caenorhabditis species indicates substantial developmental system drift of regulatory networks that destabilize in hybrids to enforce reproductive isolation between species. Despite more extensive transgressive expression in hybrid females, they are fertile but unable to produce self-sperm, and hybrid males are entirely sterile, suggesting that 1) genetic networks controlling "male" developmental pathways are more fragile in the face of genetic perturbation and 2) hybrid females may represent "demasculinized" hermaphrodites through the disruption of sperm-specific regulatory networks.
Despite the rarity of sex-biased genes on the X-chromosome, the X is home to disproportionate misexpression in both sexes, with misregulation in hybrid males largely occurring through cis-trans compensatory changes, but also by trans-acting factors to some extent. X-autosome incompatibilities in hybrid males likely result from the propensity for divergent cis-acting factors to drive C. briggsae-dominant autosomal expression yielding allele-specific expression biases, which then interact negatively with C. nigoni X-linked genes. Moreover, C. nigoni-dominant trans-acting factors may act to downregulate male-biased genes in both males and females, through the misregulation of master controllers of sexual development such as tra-1. Finally, we find only weak correlations of cis-regulatory divergence with chromosome architecture and protein-coding and non-coding sequence divergence, indicating that regulatory and protein evolution are largely decoupled. Consequently, Dobzhansky-Muller incompatibilities involving regulatory and coding sequences may accumulate independently of one another, and in distinct ways in the regulatory networks of each sex, building up reproductive isolation that leads to Haldane's rule and speciation.

Samples, RNA isolation, and sequencing

We cultured triplicate populations of isofemale C. briggsae (AF16) and C. nigoni (JU1421) on NGM-agar plates with Escherichia coli OP50 at 25˚C, isolating total RNA via Trizol-chloroform-ethanol extraction from groups of approximately 500 individual age-synchronized young adult males or females (hermaphrodites) for each replicate sample. C. briggsae hermaphrodites are treated as the female sex for the purposes of this study (see below and the Results section), as their soma is phenotypically female despite the gonad producing a small number of sperm in addition to abundant oocytes. We also crossed in triplicate virgin C. nigoni females to male C. briggsae (isolated as L4 larvae) to produce F1 hybrid progeny, with RNA isolated from male and female F1 hybrids as for the parental pure-species genotypes. The triplicate mRNA samples for each sex and genetic group (C. briggsae, C. nigoni, F1 hybrid) underwent 100 bp read length, single-ended Illumina HiSeq sequencing at Genome-Quebec according to their standard TruSeq3 protocol. A total of ~250 million reads from these 18 barcoded samples spread across 4 lanes were cleaned for quality control using TRIMMOMATIC.

To obtain allele-specific read counts in F1 hybrids, we applied a competitive read-mapping approach by developing a Python implementation that we call COMPMAP (https://github.com/santiagosnchez/CompMap). This approach uses the PYSAM library (https://github.com/pysam-developers/pysam) internally for reading and indexing BAM alignments by read name. Using BED files with transcript-level coordinates consistent between references, read overlaps were then counted for each feature in both alignments. At each feature, the alignment score (AS) and number of mismatches (nM) of each read to both references were compared, assigning the best-aligned reads to reference-specific counts. Ambiguous reads (i.e., those having equally good alignments) were also counted and redistributed proportionally to the number of reads assigned to each reference.
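The competitive assignment step described above can be sketched as follows. This is an illustrative reimplementation of the logic, not COMPMAP itself; it assumes both BAM files are name-sorted identically so reads pair up, and that the aligner writes AS and NM tags (STAR, for example, writes a lowercase nM mismatch tag instead).

```python
# Sketch of competitive read assignment between two parental references:
# higher alignment score wins, fewer mismatches breaks near-ties, exact
# ties are "ambiguous" and later redistributed proportionally.
import pysam
from collections import Counter

def competitive_counts(bam_cbr_path, bam_cni_path):
    counts = Counter()
    ambiguous = 0
    with pysam.AlignmentFile(bam_cbr_path) as bam_cbr, \
         pysam.AlignmentFile(bam_cni_path) as bam_cni:
        for r_cbr, r_cni in zip(bam_cbr, bam_cni):
            # rank by alignment score, then by (negated) mismatch count
            key_cbr = (r_cbr.get_tag("AS"), -r_cbr.get_tag("NM"))
            key_cni = (r_cni.get_tag("AS"), -r_cni.get_tag("NM"))
            if key_cbr > key_cni:
                counts["C.briggsae"] += 1
            elif key_cni > key_cbr:
                counts["C.nigoni"] += 1
            else:
                ambiguous += 1
    # redistribute ambiguous reads proportionally to assigned counts
    total = counts["C.briggsae"] + counts["C.nigoni"]
    if total:
        for ref in ("C.briggsae", "C.nigoni"):
            counts[ref] += ambiguous * counts[ref] / total
    return counts
```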
We validated our method with simulated RNA-seq data using the R package POLYESTER [97], finding high correlation between true ASE and COMPMAP counts, as well as low type 1 and type 2 error rates among cis and trans regulatory divergence categories (<5%; S10 and S11 Figs). We expected to have high power to detect ASE, given ~20% neutral sequence divergence between C. briggsae and C. nigoni [48] conferring on average ~5 nucleotide differences for every 100 bp of coding sequence (0.2 divergence × 0.25 fraction of synonymous sites × 100 bp). Additionally, we did not expect significant mapping bias [98,99] given that our data were strain-specific (i.e., our C. briggsae RNA-seq data come from strain AF16, which is the same as the reference genome, and our C. nigoni data come from strain JU1421, which derives from the same wild isolate as the C. nigoni reference genome; see supplementary data in [56]). Scripts, programs, and commands used for bioinformatic analyses can be found online (https://github.com/santiagosnchez/competitive_mapping_workflow).

Ortholog identification and read abundance quantification

The chromosomes in the C. briggsae and C. nigoni genomes are largely colinear, with only a few small inversions and translocations reported [100]. Therefore, we quantified gene expression abundance for a set of 13,636 genes that we inferred to be one-to-one reciprocal orthologs between C. briggsae and C. nigoni, for which upstream regulatory regions should also be syntenic. To identify orthologs, we applied a phylogenetic approach using ORTHOFINDER v2.2.6 [101,102], based on longest-isoform peptide sequence translations for gene annotations of 28 Caenorhabditis species [103] (http://caenorhabditis.org/). BLASTp all-by-all searches were done separately on SciNet's Niagara supercomputer cluster. ORTHOFINDER was run with default options, which included: -M dendroblast (gene tree reconstruction) and -I 1.5 (MCL inflation point). To arrive at the final set of 13,636 orthologs, we excluded from a preliminary set of 15,461 orthologs those genes of C. briggsae and C. nigoni that could not be assigned to any of their six chromosomes (688 genes), that were associated with inter-chromosomal translocations (370 genes), for which we could not estimate Ka/Ks reliably (275 genes), or that exhibited low mRNA-seq read abundance (492 genes; see below). A list of all the orthologs can be found online (https://github.com/santiagosnchez/competitive_mapping_workflow/blob/master/orthologs.txt). We quantified gene expression in parent species for each ortholog with FEATURECOUNTS v2.0.1 [104], as we found its read-counting method to be compatible with COMPMAP. Raw read counts were combined into a single table and imported into R together with ASE counts [105] for normalization and statistical analyses. All raw data counts can be found online (https://github.com/santiagosnchez/competitive_mapping_workflow/tree/master/counts).

Identification of male-biased genes in C. briggsae hermaphrodites

To make biologically realistic contrasts between C. briggsae hermaphrodites, C. nigoni females, and female hybrids, we used an in silico approach to identify male-biased genes that are also upregulated in hermaphrodites, through comparisons using previously published RNA-seq data. We used RNA-seq read data derived from C. briggsae pseudo-females (AF16-derived she-1(v47) mutant strain) [54], which are unable to produce self-sperm.
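As a hypothetical post-processing sketch for the ortholog step above, one could reduce ORTHOFINDER's orthogroup table to strict one-to-one orthologs as follows; the file path and species column names are assumptions based on the tab-separated Orthogroups output (one column per input proteome), not the authors' script.

```python
# Keep only orthogroups with exactly one gene per species (one-to-one).
import pandas as pd

og = pd.read_csv("Orthogroups.tsv", sep="\t")

def is_single(cell):
    # exactly one gene in this species' column: a non-empty string
    # containing no comma-separated list of genes
    return isinstance(cell, str) and ("," not in cell)

one_to_one = og[og["C_briggsae"].apply(is_single) &
                og["C_nigoni"].apply(is_single)]
one_to_one[["Orthogroup", "C_briggsae", "C_nigoni"]].to_csv(
    "one_to_one_orthologs.tsv", sep="\t", index=False)
```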
We conducted differential expression analyses using a similar pipeline as described below to identify genes with significant sex-biased expression in our C. briggsae RNA-seq data and the data from pseudo-females in Thomas et al. [54], in addition to being differentially expressed between datasets. These genes were then excluded from analyses comparing hermaphrodites and females together and were analyzed separately. Data for these analyses can be found online (https://github.com/santiagosnchez/competitive_mapping_workflow/tree/master/analyses/tables/Thomas_et_al_data).

Differential expression analyses: Contrasts between species, hybrids, and sexes

We used the R Bioconductor package DESEQ2 [106] to assess differential expression. Before statistically assessing differential expression, we summed the allele-specific counts from F1 hybrids to yield a single count of transcripts per gene. After calculating library size factors, we filtered out genes that did not meet the criterion of having at least 3 samples with more than 10 library-size-scaled counts. We visualized the overall expression distance between samples using a non-metric multi-dimensional scaling plot, which showed all three biological replicates to cluster consistently within their corresponding treatment (S1A Fig). We inferred sex-biased gene expression by comparing differential expression profiles between males and females (or hermaphrodites) in each genetic group (C. briggsae, C. nigoni, F1 hybrids). We also quantified differential expression between the genetic groups in a pairwise manner (C. briggsae vs F1, C. briggsae vs C. nigoni, C. nigoni vs F1) for each sex separately. We then contrasted expression patterns between species (C. briggsae and C. nigoni) by looking at sex differences (sex-biased expression) and their interaction (expression ~ species × sex). Within hybrid males and females, we compared allele-specific counts to measure ASE. For all of these contrasts, we used negative binomial generalized linear model fitting and Wald statistics to determine differentially expressed genes, as implemented in DESEQ2. We used FDR-adjusted P-values at the 5% level to assess significance [107]. DESeq2 results can be found online (https://github.com/santiagosnchez/competitive_mapping_workflow/tree/master/analyses/tables/DESeq2).

Co-expression clustering

To identify groups of genes with shared co-expression trends, we averaged log2-transformed normalized read counts with the rlog function from DESEQ2 for each sample. Then we standardized gene-wise expression data by calculating Z-score values and used K-means to cluster co-expression groups. We chose a sensible k value (k = 15) approaching the asymptote by plotting the within-group sum of squares for a range of k values (from 2 to 100). We then calculated centroid expression levels by estimating the mean relative expression across samples within each group. Co-expression modules were designated as M1–M15. Our co-expression clustering results can be found online (https://github.com/santiagosnchez/competitive_mapping_workflow/tree/master/analyses/tables/clustering).

Mode of expression inheritance in F1 hybrids

Based on patterns of expression in F1 hybrids relative to parent species, we classified genes into those having additive (intermediate), dominant (matching either of the species), overdominant (higher than both parents), and underdominant (lower than both parents) profiles, following the logic in McManus et al. [16].
Genes with no significant differences in expression between F1s and their parent species were deemed to have conserved regulatory controls resulting in no change in expression in F1s. Genes with additive effects had intermediate expression in F1s compared to both parental species, meaning that there were significant differences in expression between all groups in a manner where expression levels in F1s fall in between both species. Genes with dominant effects were those with expression levels in F1s matching either one of the parent species (i.e., no significant differential expression), but with significant differential expression between species. Finally, genes with significant differential expression from both parents, but that were either significantly underexpressed (underdominant) or overexpressed (overdominant) compared to both species, were regarded as transgressive. Genes falling outside any of these specific categories were considered ambiguous. A per-gene summary table with all expression inheritance classifications can be found online (https://github.com/santiagosnchez/competitive_mapping_workflow/tree/master/analyses/tables/expression_inheritance). We also measured absolute Euclidean distances in expression relative to the centroid or origin in expression space of F1 hybrids relative to both parent species. For example, for every gene we took the expression difference between F1s and C. briggsae and between F1s and C. nigoni as an xy coordinate system. Then, we measured the Euclidean distance from that point in expression space to the origin (0,0), reflecting no change in expression:

d = √(Δ_F1/Cbr² + Δ_F1/Cni²),

where Δ_F1/Cbr and Δ_F1/Cni are coefficients of differential expression between F1 hybrids and each parent species. This metric allowed us to visualize the magnitude of expression distance from a "conserved" expression profile.

cis- and trans-regulatory divergence

We used ASE in F1s to quantify the extent and type of cis- and trans-regulatory differences between species. Expression divergence between parent species results from both cis- and trans-regulatory changes, whereas significant differential expression between alleles in F1s results from cis-regulatory divergence only [16]. To quantify the extent of trans effects, we applied a linear model to test for differences in gene expression between parent species (P) and between alleles in F1 hybrids (ASE) using the following model: expression ~ species/group, where "group" represented categorical variables pointing to data from P and ASE. The "/" operator in the model formula measures expression ratios independently for each category in "group". We then used a post-hoc Wald-type test (linearHypothesis from the CAR package) to test for significant differences between both coefficients (P[C. nigoni/C. briggsae] = ASE[C. nigoni/C. briggsae]). P-values were considered significant at a 5% FDR [107]. We inferred the influence of cis- and trans-regulatory divergence on genes linked to autosomes, as well as to the X-chromosome in females, following the criteria in McManus et al. [16]. This procedure allowed us to designate genes as having undergone significant regulatory divergence due to cis-only, trans-only, and cis-trans effects. Genes with significant cis and trans effects were split into those having synergistic effects, or cis + trans, and those having compensatory effects: cis x trans (compensatory) and cis-trans (compensatory) (S12 Fig).
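The inheritance classification and distance metric above can be summarized in a short sketch. This is our reading of the McManus-style rules, not the authors' code; inputs per gene are log2 fold-changes and significance flags from the three pairwise contrasts (F1 vs. each parent, and parent vs. parent).

```python
# Sketch: classify expression inheritance in F1 hybrids and compute the
# Euclidean distance from a fully conserved expression profile.
import math

def classify_inheritance(lfc_f1_cbr, sig_f1_cbr,
                         lfc_f1_cni, sig_f1_cni,
                         sig_cbr_cni):
    if not (sig_f1_cbr or sig_f1_cni or sig_cbr_cni):
        return "no change"
    if sig_f1_cbr and sig_f1_cni:
        if lfc_f1_cbr > 0 and lfc_f1_cni > 0:
            return "overdominant"        # F1 above both parents
        if lfc_f1_cbr < 0 and lfc_f1_cni < 0:
            return "underdominant"       # F1 below both parents
        if sig_cbr_cni:
            return "additive"            # F1 intermediate between parents
    if sig_cbr_cni and sig_f1_cni and not sig_f1_cbr:
        return "C. briggsae dominant"    # F1 matches the C. briggsae level
    if sig_cbr_cni and sig_f1_cbr and not sig_f1_cni:
        return "C. nigoni dominant"      # F1 matches the C. nigoni level
    return "ambiguous"

def expression_distance(lfc_f1_cbr, lfc_f1_cni):
    # distance from (0, 0) in the (delta_F1/Cbr, delta_F1/Cni) plane
    return math.hypot(lfc_f1_cbr, lfc_f1_cni)
```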
Genes with no significant differences in expression between parents, ASE, or trans effects were deemed conserved, and those that did not strictly fit into any of the previous groups were considered ambiguous. A table with all gene-wise classifications for expression inheritance in all chromosomes for females and for autosomes in males can be found online (https://github.com/santiagosnchez/competitive_mapping_workflow/tree/master/analyses/tables/cis_trans). Given the hemizygous condition of the X-chromosome in males, we cannot use F1 ASE of X-linked genes to assess cis and trans regulatory divergence. However, we devised a scheme to assign different types of regulatory divergence to X-linked genes, with some limitations, based on the differences in expression between male F1 hybrids and parent species (S13 Fig). F1 males in our study carry their maternal C. nigoni X-chromosome. Therefore, assuming that the bulk of the trans environment derives from autosomes, we would expect that X-linked genes that differ in expression between parental species but display C. nigoni dominant expression owe their regulatory divergence mostly to cis-regulatory (cis-only*) changes, because any significant deviation from C. nigoni expression levels would indicate trans-regulatory changes. Potentially confounding situations would involve (1) the action of "local" trans regulators found on the X, which would nevertheless remain linked to their targets, and (2) autosomal trans regulators that are dominantly expressed by the C. nigoni allele. Given that the confounding effects of the dominant expression of trans regulators would also potentially apply to autosomal genes, we decided to keep the cis-only* category for X-linked genes. Moreover, genes with significant regulatory divergence between species where hybrids display C. briggsae dominant expression (trans-only*) would indicate no significant changes due to cis regulation, with the condition that autosomal C. nigoni trans regulators affecting those genes are recessive, and therefore not expressed. We consider these a subset of total trans regulation, likely an underestimate compared to autosomes and to the X-chromosome in females. The last category that we can assign confidently is compensatory cis-trans changes. Genes with no differential expression between species, but with significant up- or down-regulation in F1 males (i.e., overdominant or underdominant), were considered as having cis-trans compensatory changes. For the purpose of comparing regulatory divergence between autosomes and the X in males and females, we lumped all other genes with significant expression divergence into "other", as they would be difficult to disambiguate without additional data. These include genes with trans-only (i.e., with codominant trans regulation), cis + trans (enhancing), cis x trans (compensatory), and ambiguous classifications. Genes with no expression change were kept as "conserved" (S11 Fig).

Molecular evolution in coding sequences

Orthologs in the genomes of both C. briggsae and C. nigoni were first aligned as protein-coding sequences using MAFFT v7.407 [108]. These alignments were then back-translated to coding sequence (CDS) alignments using the Python program CODONALIGN (https://github.com/santiagosnchez/CodonAlign). We estimated rates of synonymous-site (Ks) and non-synonymous-site (Ka) divergence between the two aligned sequences using a custom Python script (https://github.com/santiagosnchez/DistKnKs) applying the Yang and Nielsen (2000) model implemented in BIOPYTHON.
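For the autosomal (and female X) case, the decision rules described above reduce to a few comparisons. The sketch below is our reading of the McManus-style logic, not the published implementation; inputs per gene are log2 ratios and significance flags for parental divergence (P), allele-specific expression in F1s (ASE), and the trans test (P vs. ASE).

```python
# Sketch: assign cis/trans regulatory divergence categories per gene.
def classify_regulatory(p_lfc, sig_p, ase_lfc, sig_ase, sig_trans):
    if not (sig_p or sig_ase or sig_trans):
        return "conserved"
    if sig_ase and sig_p and not sig_trans:
        return "cis-only"              # parental divergence explained by ASE
    if sig_p and sig_trans and not sig_ase:
        return "trans-only"
    if sig_ase and sig_trans:
        trans_lfc = p_lfc - ase_lfc    # trans component as residual divergence
        if ase_lfc * trans_lfc > 0:
            return "cis + trans (enhancing)"   # same direction
        if sig_p:
            return "cis x trans (compensatory)"
        return "cis-trans (compensatory)"      # opposing changes, conserved P
    return "ambiguous"
```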
We also corrected Ks values for selection on codon usage using the effective number of codons (ENC; [109,110]) as a predictor in a linear model. In short, we fitted a linear regression model (Ks ~ ENC), which we used to predict Ks at the maximum value of ENC (= 60). Then, we corrected the bias in Ks by adding the residuals of the linear model to that idealized value of Ks at ENC = 60. We refer to this corrected set of Ks estimates as Ks'. A table with these estimates of molecular evolution for each gene can be found online (https://github.com/santiagosnchez/competitive_mapping_workflow/tree/master/analyses/tables/molecular_evolution).

Upstream non-coding sequence conservation

Chromosome-level FASTA sequences for C. briggsae and C. nigoni were aligned using LASTZ [111], outputting alignment files for each chromosome in MAF format. We used the flank function of BEDTOOLS v2.27 [112] to generate 500 bp intervals of the 5' upstream flanking regions of each orthologous gene. We then used maf_parse from PHAST [113] to extract overlapping alignment blocks at least 500 bp long. We quantified sequence conservation as the average number of identical 5 bp non-overlapping windows between aligned DNA in both sequences.

Spermatogenesis genes

To infer genes involved with spermatogenesis, we downloaded a list of C. elegans genes previously identified as spermatogenesis-related based on tissue-specific transcript abundance [45] (Additional File 4). We then used the BioMart tool of the WormBase Parasite website [114] to retrieve C. briggsae orthologs from the list of C. elegans genes. We cross-referenced C. briggsae orthologs to our own set of orthologs between C. briggsae and C. nigoni and annotated the 1,089 gene matches with a spermatogenesis tag. The data obtained for C. elegans spermatogenesis orthologs can be found online (https://github.com/santiagosnchez/competitive_mapping_workflow/tree/master/analyses/tables/spermatogenesis).
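The ENC-based Ks correction described above amounts to a simple residual adjustment; here is a minimal numpy sketch (ours, not the authors' script), assuming per-gene arrays of Ks and ENC values.

```python
# Sketch: correct Ks for codon-usage bias via the Ks ~ ENC linear model.
import numpy as np

def correct_ks_for_enc(ks, enc):
    ks, enc = np.asarray(ks, float), np.asarray(enc, float)
    slope, intercept = np.polyfit(enc, ks, 1)    # fit Ks ~ ENC
    residuals = ks - (slope * enc + intercept)
    ks_at_enc60 = slope * 60.0 + intercept       # idealized Ks at ENC = 60
    return ks_at_enc60 + residuals               # corrected estimates, Ks'
```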
Constraints on Lorentz Invariance Violations from Gravitational Wave Observations

Using a deformed dispersion relation for gravitational waves, Advanced LIGO and Advanced Virgo have been able to place constraints on violations of local Lorentz invariance as well as the mass of the graviton. We summarise the method used to obtain the current bounds from the 10 significant binary black hole detections made during the first and second observing runs of the above detectors.

Introduction

The year 2015 saw the advent of gravitational wave (GW) astronomy with GW150914 [1], the first directly detected GW signal from a binary black hole (BBH) merger. Ref. 2 performed tests of strong-field gravity in the highly dynamical regime of general relativity (GR), finding no statistically significant violations of GR. Since then, 10 significant BBH signals have been detected, in addition to a binary neutron star (BNS) signal [4]. The first constraints on local Lorentz invariance violation (LIV) using real GW data were reported in Ref. 3. These bounds have been revised recently and reported in Ref. 7. These bounds, however, rely on propagation effects and therefore do not directly probe the dynamical regime of gravity. In this proceedings, we give a brief overview of the method to constrain LIV in Sec. 2 and summarise the results with some concluding remarks in Sec. 3.

Method

GWs propagating in GR are non-dispersive and travel with the speed of light. Following Refs. 5, 6, we adopt the generic dispersion relation

E² = p²c² + A_α p^α c^α. (1)

This is a Lorentz-violating dispersion relation for α > 0; the LIV parameter is characterised by A_α. α = 0 is a special case where we may parameterise the additional term in Eqn. 1 as A₀ = m_g²c⁴, m_g being the mass of the graviton. Examples of Lorentz-violating theories for specific forms of Eqn. 1 include Doubly Special Relativity [9] for α = 3 and Hořava-Lifshitz theory [10] for α = 4.

In the presence of dispersion, the low (high)-frequency components of a GW signal travel slower (faster) and result in an overall offset in arrival times at the detector, leading to a frequency-dependent shift in the phasing. In the frequency domain (FD), the total phase is then given by Ψ(f) = Ψ_GR(f) + Ψ_α(f). Ψ_GR(f) is the phasing obtained from GR predictions and Ψ_α(f) denotes the phase shift following from the dispersion. The waveform model in FD used in our analyses is constructed as

h̃(f) = h̃_GR(f) e^{iΨ_α(f)}. (2)

We associate a length-scale λ_A = hc|A_α|^{1/(α−2)} with the LIV parameter, where h is Planck's constant and c is the speed of light. λ_A may be thought of as a screening length corresponding to an effective gravitational potential. In terms of λ_A, the phasing relations are given by

Ψ_α(f) = sign(A_α) [π/(α − 1)] (D_α/λ_A^{2−α}) [(1 + Z)f/c]^{α−1}, for α ≠ 1,
Ψ_1(f) = sign(A_1) (πD_1/λ_1) ln(πGMf/c³), for α = 1. (3)

In the above equations, M is the detector-frame chirp mass of the binary system, a combination of the component masses given by M = (m₁m₂)^{3/5}/(m₁ + m₂)^{1/5}, m₁ and m₂ being the component masses. f is the frequency component and Z denotes the redshift to the source. D_α is a cosmological distance; see Refs. 5, 7 for more details.

The analyses carried out in the following section are based on a Bayesian framework which incorporates Bayes' theorem, p(θ|d) = p(d|θ)p(θ)/p(d), where θ refers to a parameter set, d refers to the data, and p(θ|d) refers to the posterior probability density obtained on θ from the likelihood calculated from the data, p(d|θ), and the a priori probability density, p(θ). p(d) is a normalisation constant.
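For illustration, a small Python sketch (ours) of the dephasing for α ≠ 1 using the relations above; the unit bookkeeping for A_α is schematic and all numerical values are placeholders, not results from the analysis.

```python
# Sketch: length scale lambda_A and LIV dephasing Psi_alpha(f) for alpha != 1.
import numpy as np

h = 6.626e-34   # Planck constant [J s]
c = 2.998e8     # speed of light [m/s]

def lambda_A(A_alpha, alpha):
    # lambda_A = h*c*|A_alpha|^(1/(alpha-2)); units of A_alpha are schematic
    return h * c * np.abs(A_alpha) ** (1.0 / (alpha - 2.0))

def dephasing(f, A_alpha, alpha, D_alpha, z):
    """Psi_alpha(f) for alpha != 1, per the phasing relation above."""
    lam = lambda_A(A_alpha, alpha)
    return (np.sign(A_alpha) * np.pi / (alpha - 1.0)
            * D_alpha / lam ** (2.0 - alpha)
            * ((1.0 + z) * f / c) ** (alpha - 1.0))

# example: alpha = 0 (massive graviton), LIGO-band frequencies
f = np.linspace(20.0, 1024.0, 500)
psi = dephasing(f, A_alpha=-1e-44, alpha=0.0, D_alpha=1e25, z=0.1)
```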
The information learnt from the data is folded into the likelihood, which takes the following form:

p(d|θ) ∝ exp[−(1/2)⟨d − h(θ)|d − h(θ)⟩],

where ⟨a|b⟩ = 4 Re ∫ ã(f)b̃*(f)/S_n(f) df is the noise-weighted inner product and S_n(f) is the one-sided noise power spectral density of the detector. In the presence of a GW signal, the data output from the detector is d = h(t) + n(t), where h(t) is the GW signal and n(t) is the noise. For our analyses, the likelihood integral is computed in FD by including the LIV-deformed phase in the model waveform. For a value of α, this enables us to obtain a posterior probability density function on the parameter A_α, leading to a constraint on LIV.

Results

Being a propagation effect, the strongest constraints come from events located at larger luminosity distances. The bounds obtained from the catalogue of 10 sources are presented in Fig. 1. From combining these sources, the mass of the graviton has been constrained to m_g ≤ 5.0 × 10⁻²³ eV/c² at 90% confidence.

Acknowledgments

The authors acknowledge the support of the Ministry of Science and Technology (MOST), Taiwan, and the Kavli Foundation. The authors gratefully acknowledge the support of the NSF, STFC, MPS, INFN, CNRS and the State of Niedersachsen/Germany for provision of computational resources.
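Discretized, the frequency-domain likelihood above reduces to a few lines; the sketch below is a toy illustration with placeholder waveform and PSD values.

```python
# Sketch: noise-weighted inner product and Gaussian log-likelihood,
# <a|b> = 4 Re sum conj(a)*b / Sn * df (discretized integral).
import numpy as np

def inner_product(a, b, psd, df):
    return 4.0 * df * np.real(np.sum(np.conj(a) * b / psd))

def log_likelihood(data_fd, model_fd, psd, df):
    resid = data_fd - model_fd
    return -0.5 * inner_product(resid, resid, psd, df)

# toy example
df = 0.25
f = np.arange(20.0, 512.0, df)
psd = np.full_like(f, 1e-46)             # flat placeholder PSD
model = 1e-23 * np.exp(2j * np.pi * f)   # placeholder waveform values
data = model.copy()                      # zero-noise toy data
print(log_likelihood(data, model, psd, df))  # ~0 for a perfect match
```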
Investigating Nickel Ferrite (NiFe2O4) Nanoparticles for Magnetic Hyperthermia Applications

Introduction

Cancer is the leading cause of death worldwide. Surgery, radiation therapy, and chemotherapy are the most commonly used procedures to treat cancer. However, the efficacy of these procedures is limited, and each one has its side effects (Hussain et al., 2021). Hippocrates proposed hyperthermia, or thermotherapy, stating that skin tumors on the outer surface of the body may be treated with hot iron (Tomitaka & Takemura, 2019). Hyperthermia was first used clinically in 1898, when the Swedish gynecologist Westermark treated cervical cancer by running hot water through an intracavitary spiral tube (Westermark, 1898). Hyperthermia is acknowledged as a distinct therapy used alone or as an adjuvant to chemotherapy or radiotherapy (Bañobre-López, Teijeiro, & Rivas, 2013; Kumar & Mohammad, 2011). The biochemical processes occurring in the cell, such as metastasis, are vulnerable to alterations in temperature. A rise of a few degrees in body temperature, from 37 °C to 42–48 °C, is required to kill cancer cells (Dennis et al., 2008). Furthermore, the temperature range of 42–48 °C can either be used as an adjuvant therapy or can directly kill the cancer cells by a process called thermoablation, which is a function of time and temperature (Hildebrandt et al., 2002). Tumor cells have poor heat dissipation and constrained blood flow due to their abnormal growth and disorganized structure. Therefore, cancerous cells are more sensitive to temperature than healthy cells (Cihoric et al., 2015; Kakehi et al., 1990; Kuwano et al., 1995). In combination with radiotherapy and chemotherapy, hyperthermia has shown complete response rates of 39% to 85% (Kuwano et al., 1995). However, hyperthermia has drawbacks, such as poor temperature control within the tumor and non-localized heating that damages healthy cells. Moreover, secondary harmful effects on normal tissues have been found when hyperthermia was combined with other treatment modalities (Bañobre-López et al., 2013). To overcome the problems associated with conventional hyperthermia, an alternative approach, known as magnetic hyperthermia, was used to treat cancer. Magnetic hyperthermia has been found to be a promising technique for treating cancer using magnetic nanoparticles (Singh, 1990; Szasz, Szigeti, Szasz, & Benyo, 2018). When magnetic nanoparticles are subjected to RF fields, heat is generated through various loss mechanisms, such as hysteresis and relaxational losses. Hysteresis losses occur in multidomain particles, whereas relaxation losses occur in single-domain superparamagnetic nanoparticles (SPMNPs). Ideally, the nanoparticles used for magnetic hyperthermia should be superparamagnetic. When SPMNPs are exposed to radiofrequency fields, their magnetic moments rotate in the direction of the field and then relax back to their original orientation, a process called Néel relaxation. Brownian relaxation is due to the rotation of the particle within a viscous fluid, where heat is produced by friction at the particle surface (Kalambur, Han, Hammer, Shield, & Bischof, 2005). Superparamagnetic or ferromagnetic nanoparticles, i.e., Fe3O4/γ-Fe2O3, have been extensively investigated in magnetic hyperthermia to treat cancer. Recent challenges in magnetic hyperthermia are achieving high heating efficiency and controlled in vivo temperatures within the therapeutic limit of 42–48 °C (Pradhan, Giri, Banerjee, Bellare, & Bahadur, 2007).
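For orientation, the standard textbook relaxation times behind the Néel and Brownian mechanisms mentioned above can be computed as follows; this is a sketch with assumed parameter values, not results from this work.

```python
# Sketch: Neel and Brownian relaxation times for a single-domain particle,
# tau_N = tau0*exp(K*V/(kB*T)) and tau_B = 3*eta*V_h/(kB*T); the two
# mechanisms act in parallel. All parameter values below are assumptions.
import math

kB = 1.381e-23      # Boltzmann constant [J/K]

def neel_time(K, d, T, tau0=1e-9):
    V = math.pi * d**3 / 6.0          # magnetic core volume [m^3]
    return tau0 * math.exp(K * V / (kB * T))

def brown_time(eta, d_h, T):
    Vh = math.pi * d_h**3 / 6.0       # hydrodynamic volume [m^3]
    return 3.0 * eta * Vh / (kB * T)

def effective_time(tau_n, tau_b):
    return tau_n * tau_b / (tau_n + tau_b)

# e.g., a 15 nm particle (K ~ 1e4 J/m^3), water viscosity, body temperature
tn = neel_time(K=1e4, d=15e-9, T=310.0)
tb = brown_time(eta=1e-3, d_h=20e-9, T=310.0)
print(tn, tb, effective_time(tn, tb))
```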
The heating efficiency of the probed nanoparticles is determined by measuring the specific absorption rate (SAR), given by equation (1):

SAR = (C/x)(ΔT/Δt), (1)

where C is the specific heat capacity of the sample, x = m_mag/m is the mass fraction of the magnetic component, m is the mass of the sample, and ΔT/Δt is the initial rate of temperature rise. The value of SAR determines the dose of the nanoparticles: the higher the SAR value, the lower the dose required to treat cancer, thereby reducing side effects. Nickel ferrite (NiFe2O4) is a soft spinel ferrite (Šepelák et al., 2007). Nickel ferrite (NiFe2O4) has been found to be an alternative to Fe3O4/γ-Fe2O3 due to its biocompatibility and heating efficiency (Bae, Lee, & Takemura, 2006; Menelaou, Georgoula, Simeonidis, & Dendrinou-Samara, 2014; Stefanou et al., 2014). In this work, we synthesized nickel ferrite (NiFe2O4) from nickel and iron nitrate salts using the chemical co-precipitation route. The prepared samples were further calcined/annealed at different temperatures to vary the particle size and crystallinity. Structural, magnetic, and hyperthermia measurements were carried out to assess the suitability of nickel ferrite (NiFe2O4) for magnetic hyperthermia applications.

Experimental Details

The NiFe2O4 sample was prepared using a chemical co-precipitation route. The details of the synthesis procedure follow the work of Tazikeh et al. on tin oxide nanoparticles (Tazikeh, Akbari, Talebi, & Talebi, 2014). The chemical precursors used for the synthesis of NiFe2O4 were nickel nitrate hexahydrate (Ni(NO3)2·6H2O) and iron nitrate nonahydrate (Fe(NO3)3·9H2O). The aqueous solution was prepared by mixing iron nitrate and nickel nitrate in de-ionized water. Then sodium hydroxide (NaOH) was added slowly, with continuous stirring, to set the pH value at 10–11. The solution was heated at 100 ºC for one hour and then washed with ethanol to remove excess sodium and nitrate traces. The precipitates formed were left to dry at 50 ºC overnight. The obtained sample was labeled S1. The resultant sample was ground and calcined at various temperatures, i.e., 600 ºC, 800 ºC, 900 ºC, and 1000 ºC, each for ten hours, and the products were labeled S2, S3, S4, and S5, respectively. XRD was used to obtain structural parameters such as crystallite size and lattice parameters. A scanning electron microscope (SEM) with a magnification of up to 1,000,000× and a resolving power of 10 Å was used to study the surface morphology of the samples. Magnetic properties in the temperature range of 100 K to 300 K were measured with a vibrating sample magnetometer (VSM). The heating efficiency was investigated using an RF-induction unit. Finally, a simple model based on a heat diffusion equation was implemented in MATLAB to examine the temperature rise as a function of treatment time and tumor radius.

XRD Analysis

The phase identification of the product was carried out by powder X-ray diffraction (XRD, Bruker D8-advance, Germany). The XRD patterns were collected in steps of 0.02° over the 20–80° range with a constant counting time of 0.6 s per step at room temperature. The phase formation and purity were examined by powder X-ray diffraction using Cu-Kα (λ = 1.5425 Å) radiation. The XRD data reveal the features of the synthesized NiFe2O4 nanoparticles, especially the effect of annealing on phase contribution and the increase in crystallite size. The XRD pattern of the prepared NiFe2O4 nanoparticles is shown in Fig. 1. A similar data trend to that reported in the literature can be seen in these figures.
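Returning to equation (1) above, SAR is typically extracted from the initial slope of a heating curve; here is a small sketch (ours, with made-up data values) of that calculation.

```python
# Sketch: SAR from the initial slope of a measured heating curve, per Eq. (1).
import numpy as np

def sar_from_heating_curve(t, T, C, x, t_max=30.0):
    """C: specific heat [J/(g K)]; x: mass fraction of magnetic component."""
    mask = t <= t_max                            # initial, roughly linear regime
    slope = np.polyfit(t[mask], T[mask], 1)[0]   # dT/dt [K/s]
    return (C / x) * slope                       # SAR [W/g of magnetic material]

# toy heating curve that approaches a plateau
t = np.linspace(0.0, 300.0, 301)
T = 25.0 + 23.0 * (1.0 - np.exp(-t / 120.0))
print(sar_from_heating_curve(t, T, C=4.18, x=0.05))
```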
Experimental Details
The NiFe2O4 sample was prepared using a chemical co-precipitation route. The details of the synthesis procedure are given in the work carried out to synthesize tin oxide nanoparticles by Tazikeh et al. (Tazikeh, Akbari, Talebi, & Talebi, 2014). The chemical precursors used for the synthesis of NiFe2O4 were nickel nitrate hexahydrate (Ni(NO3)2·6H2O) and iron nitrate nonahydrate (Fe(NO3)3·9H2O). The aqueous solution was prepared by mixing iron nitrate and nickel nitrate in de-ionized water. Then sodium hydroxide (NaOH) was added slowly under continuous stirring to set the pH value at 10-11. The solution was heated at 100 °C for one hour and then washed with ethanol to remove excess sodium and nitrate traces. The precipitates formed were left to dry at 50 °C overnight. The obtained sample was labeled S1. The resultant sample was ground and calcined at various temperatures, i.e., 600 °C, 800 °C, 900 °C, and 1000 °C, each for ten hours, and the products were labeled S2, S3, S4, and S5, respectively. XRD was used to obtain structural parameters such as crystallite size and lattice parameters. A scanning electron microscope (SEM) with a magnification power of up to 1,000,000x and a resolving power of 10 Å was used to study the surface morphology of the samples. Magnetic measurements in the temperature range of 100 K to 300 K were carried out with a Vibrating Sample Magnetometer (VSM). The heating efficiency was investigated using an RF-induction unit. Finally, a simple model based on the heat diffusion equation was implemented (in MATLAB) to examine the temperature rise as a function of treatment time and tumor radius.

XRD Analysis
The phase identification of the product was carried out by powder X-ray diffraction (XRD, Bruker D8-Advance, Germany). The XRD patterns were collected in steps of 0.02° over the 2θ range of 20-80° with a constant counting time of 0.6 s per step at room temperature. The phase formation and purity were examined using Cu Kα radiation (λ = 1.5425 Å). The XRD data reveal the features of the synthesized NiFe2O4 nanoparticles, especially the effect of annealing on the phase composition and the change/increase in crystallite size. The XRD patterns of the prepared NiFe2O4 nanoparticles are shown in Fig. 1; a data trend similar to that reported in the literature can be seen in these figures. Seven major peaks can be observed, corresponding to the crystal planes (220), (311), (222), (400), (422), (511), and (440) of the face-centered cubic (FCC) spinel structure of NiFe2O4, with Fe cations at tetrahedral sites and nickel cations at octahedral sites. The Bragg reflections are sharp and intense and have been indexed, confirming the formation of a single-phase cubic spinel structure. The average crystallite size is determined by the Debye-Scherrer equation:

t = Kλ / (β cos θ)

where λ is the X-ray wavelength (Cu Kα radiation, equal to 0.154 nm), θ is the Bragg diffraction angle (in radians), β is the FWHM (full width at half maximum), or integral breadth, of the XRD peak appearing at the diffraction angle θ, t is the thickness of the crystallite, and K is a constant that depends on the crystallite shape. The crystallite size, determined from the maximum-intensity (311) peak, is found to be 15, 30, 40, and 55 nm for the S2, S3, S4, and S5 samples annealed at different temperatures, respectively. The annealed samples have more intense peaks than the as-prepared one, indicating greater crystallinity of the nickel ferrite. The effect of increasing temperature on the improvement of the crystallinity of nickel ferrite, and the conversion of some nickel and iron oxides into nickel ferrite crystallites, can be seen in the same figure. The crystallization of nickel ferrite improves with increasing annealing temperature. All the reflections are indexed based on the standard index system. The broad peaks of the XRD patterns indicate that the particles of the synthesized samples are in the nanometer range. The presence of hematite diffraction peaks contributes as an impurity and reveals the formation of a multi-phase product. S5 does not show the presence of a hematite phase; still, the Diffuse Reflectance Spectra (DRS) show an extra band gap energy line (as shown below in Fig. 6). This may be due to fluorescence effects, and the phase contribution of hematite is thus negligible. The calculated cubic lattice parameters of all four samples are the same (8.34 Å) as that of standard NiFe2O4 (8.34 Å). Structural parameters are given in Table 1.
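A minimal sketch of the Scherrer estimate applied to the (311) reflection is given below; the peak position and FWHM are illustrative assumed values, not the fitted values from Fig. 1.

```python
import numpy as np

# Sketch of the Debye-Scherrer estimate t = K*lambda / (beta * cos(theta)).
K    = 0.9            # shape factor (commonly assumed for roughly spherical grains)
lam  = 0.15406        # Cu K-alpha wavelength, nm
two_theta_deg = 35.7  # (311) peak position, degrees (typical for NiFe2O4, assumed)
fwhm_deg      = 0.55  # peak FWHM, degrees (assumed)

theta = np.radians(two_theta_deg / 2.0)
beta  = np.radians(fwhm_deg)          # FWHM must be converted to radians
t = K * lam / (beta * np.cos(theta))  # crystallite size, nm
print(f"crystallite size ~ {t:.1f} nm")
```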
SEM Analysis
The scanning electron microscope (SEM) images of samples S3, S4, and S5 are shown in Fig. 2. The images depict mass agglomeration of tiny particles resulting in large particles. The images show dense aggregation because of high surface energies and a tendency to grow into larger aggregates.

Magnetic Measurements
The magnetic behavior of the NiFe2O4 nanoparticles was investigated using Vibrating Sample Magnetometry (VSM) (Lakeshore VSM 7410) with an applied field of 10 kOe (1 Tesla). The magnetic measurements, i.e., magnetization as a function of field for the two samples S4 and S5 taken at temperatures in the range of 150 K to 300 K, are shown in Fig. 3. The hysteresis loops of the samples S4 and S5 at 300 K are compared in Fig. 3(a). The magnetic properties of materials, such as magnetization and coercivity, depend strongly on particle size, shape, crystallinity, etc. The samples show ferromagnetic behavior. The temperature and the size of the MNPs significantly influence the magnetic properties, as depicted in Fig. 3 and Table 2. It can be observed from Fig. 3(a) that S5 has a larger magnetization than S4, because magnetization increases with increasing particle size. The samples' saturation magnetization (Ms) as a function of temperature is plotted in Fig. 4; Ms decreases with rising temperature for both samples. This can be attributed to thermal effects dominating at high temperatures. However, sample S5, due to its larger particle size, has relatively large magnetization at all temperatures compared to sample S4. The magnetic parameters determined from these plots are listed in Table 2. The inset of Fig. 3(a) shows the coercivity of the samples. The coercivity of S4 is larger than that of S5, likely due to the transition from single-domain to multidomain behavior as the particle size increases. However, the product of coercivity and magnetization (Zeeman energy) for S4 is comparable to that of S5, suggesting that sample S4 might have the same heating ability as sample S5, because the heating ability of a sample in an RF field depends on both the magnetization and the coercivity (anisotropy). The coercivities of the samples S4 and S5, obtained from their M(H) loops taken at various temperatures, are plotted in Fig. 5. A decrease in coercivity can be observed with increasing temperature, which may be due to thermal fluctuations of the blocked moment across the anisotropy barrier. Coercivity is strongly dependent on particle size as well: it decreases as the size of the particles increases. Further, the coercivity of S4 is greater than that of S3 and S5, owing to the smaller mass (30 mg) of S4 compared to that of S3 and S5 (50 mg). Coercivity values as a function of temperature are given in Table 3 below.

Hyperthermia Measurements
Magnetically induced heating measurements were carried out on 50 mg of powder for the samples S4 and S5 under alternating field strengths and frequencies of 230 Oe at 543 kHz and 200 Oe at 172 kHz, respectively. The results are shown in Fig. 5. It can be seen that heating is produced in both samples at the applied fields and frequencies. The heating produced by magnetic nanoparticles depends on parameters such as particle size, saturation magnetization, effective anisotropy, applied field strength, and frequency. The effect of field strength and frequency can be observed in Fig. 5: a more rapid temperature increase can be seen with the increase in frequency (Fig. 5(a) compared to Fig. 5(b)). Sample S5 has a larger heating potential than sample S4 due to its relatively large saturation magnetization. The heating produced is usually quantified by measuring the specific absorption rate (SAR), the power released as heat per unit mass of MNPs. Maximizing the SAR is an essential objective in magnetothermal therapy in order to reduce the dosage of the magnetic nanoparticles. The SAR values are summarized in Table 4; the values obtained are 500 W/g and 450 W/g at 543 kHz for the samples S5 and S4, respectively. In addition, it can also be seen that sample S4 produces heat in the therapeutic range of 42-48 °C at 543 kHz, a range to which cancer cells are more sensitive than healthy cells. Hence, sample S4 is found to be the most suitable at the given frequency of 543 kHz for magnetic hyperthermia application.

DRS Analysis
The Diffuse Reflectance Spectroscopy (DRS) technique was used to determine band gaps from the light absorption spectrum. The band gap of the NiFe2O4 nanoparticles was determined using the Tauc, Davis, and Mott relation, given by:

(αhν)^(1/n) = A(hν − Eg)

where α is the absorption coefficient, ν is the light frequency, A is a proportionality constant, and Eg is the band gap energy.
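The following sketch illustrates the band-gap extraction that the Tauc relation enables, using a synthetic spectrum and n = 2 for an indirect allowed transition; the absorption data are placeholders, not the measured DRS spectra of this work.

```python
import numpy as np

# Sketch of band-gap extraction from a Tauc plot: fit the linear region of
# (alpha*h*nu)^(1/n) versus photon energy and extrapolate to the energy axis.
n  = 2.0                                            # indirect allowed transition
E  = np.linspace(1.0, 2.5, 200)                     # photon energy, eV
Eg_true = 1.53                                      # assumed band gap, eV
y  = np.where(E > Eg_true, (E - Eg_true), 0.0)      # idealized (alpha*h*nu)^(1/n)

# Fit the straight portion above the absorption edge and extrapolate
mask = (E > Eg_true + 0.1) & (E < Eg_true + 0.6)
m, b = np.polyfit(E[mask], y[mask], 1)
Eg_est = -b / m                                     # x-axis intercept
print(f"estimated indirect band gap: {Eg_est:.2f} eV")
```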
The band gap energy can be determined from a plot of (αhν)^(1/n) as a function of photon energy (hν), as shown in Fig. 6, by extrapolating the straight-line portion to the energy-axis intercept. Two values of the energy gap are evident from Fig. 6 for S2-S5; the second band gap is due to the presence of hematite, already confirmed by the XRD data. Sample S6 was not annealed separately from the other samples: it was collected after the TGA/DTA analysis, which was performed at 1200 °C. In this sample, no second band gap energy line can be seen, as only NiFe2O4 is present and the hematite phase is absent. The value of the indirect band gap energy for NiFe2O4 is found to be around 1.53 eV, and for hematite it is in the range of 1.82-1.96 eV.

Figure 6: Extraction of the energy gap value for NiFe2O4 from the Tauc plot for different samples (S1-S6) annealed at 150 °C (S1), 600 °C (S2), 800 °C (S3), 900 °C (S4), 1000 °C (S5), and 1200 °C (S6), each for ten hours.

Thermal Analysis
Thermal studies of nanoparticles include the determination of their decomposition and crystallization temperatures. For this purpose, thermogravimetric and differential thermal analysis (TG-DTA) of the dried NiFe2O4 precursors obtained via the co-precipitation method was performed using a thermal analyzer (Mettler Toledo Star system). The TG-DTA curve in Fig. 7 shows that the major weight loss occurs from 270 °C to 420 °C. The plateau from 450 °C to 1200 °C indicates the formation of NiFe2O4 crystallites, already confirmed by XRD.

Modeling Approach and Mathematical Modelling
An analytical approach was adopted to treat the heat propagation problem produced by the magnetic nanoparticles (NiFe2O4) within the tumor. The thermal properties of human lung tissue obtained from the literature were used, and the value of the heat produced by the MNPs was taken from our experimental data.

Assumptions
Some assumptions are made in our model, such as a spherical tumor and a linear decrease in particle concentration from the center to the periphery of the tumor; this linear concentration profile is also applied to the power density. According to the assumed initial conditions, the temperature of the tumor and healthy tissues, as well as that of the boundary/interface between healthy and tumorous tissue (r = R), is taken to be the same, nearly equal to body temperature (To = 37 °C). The heat diffusion equation is used to model the distribution of heat generated by the MNPs under the applied alternating magnetic field and is given by:

ρc ∂T/∂t = k∇²T + P(r)

where T(r, t) is the temperature at a point in the tissue, which varies continuously with time, P(r) is the power density, k is the thermal conductivity, c is the specific heat capacity, and ρ is the density of the lung tumor. The boundary conditions, consistent with the stated assumptions, are that the temperature is initially equal to body temperature, T(r, 0) = To, and remains at To far from the tumor; concerning r, symmetry requires ∂T/∂r = 0 at r = 0. By applying the steady-state boundary conditions stated above, one can obtain the values of the constants c1 and c2 and substitute them into eq. (9). The same boundary conditions are then applied to the homogeneous equation, obtained from eq. (8) by neglecting the source term, and we arrive at our final solution. As time passes, the temperature of the whole tumor approaches the steady state, at which we can measure the final temperature of the tumor, i.e., the steady-state temperature. The effect of heat diffusion within the tumor and into the surrounding healthy tissues can also be investigated by building more realistic assumptions into the model.
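As an illustration of the heat-diffusion problem formulated above, the sketch below integrates the radial equation with an explicit finite-difference scheme (the study itself used an analytical solution evaluated in MATLAB). The tissue properties and peak power density are assumed values for illustration only.

```python
import numpy as np

# Explicit finite-difference sketch of the radial heat-diffusion problem:
# rho*c*dT/dt = k*(1/r^2)*d/dr(r^2*dT/dr) + P(r), with a power density that
# falls linearly from the tumor centre to its boundary (r = R) and the far
# field held at body temperature. All parameter values are assumptions.
k, rho, c = 0.50, 1050.0, 3600.0      # lung-tissue-like values (assumed)
R, R_out  = 0.01, 0.04                # tumor radius and domain edge, m
P0, T0    = 2.0e5, 37.0               # peak power density W/m^3 (assumed), body temp C

N  = 200
r  = np.linspace(0.0, R_out, N)
dr = r[1] - r[0]
D  = k / (rho * c)                    # thermal diffusivity
dt = 0.2 * dr**2 / D                  # conservative explicit time step

P = np.where(r < R, P0 * (1.0 - r / R), 0.0)   # linear concentration model
T = np.full(N, T0)

t, t_end = 0.0, 600.0                 # simulate 10 minutes of heating
rp = 0.5 * (r[1:-1] + r[2:])          # r at i+1/2
rm = 0.5 * (r[1:-1] + r[:-2])         # r at i-1/2
while t < t_end:
    lap = (rp**2 * (T[2:] - T[1:-1]) - rm**2 * (T[1:-1] - T[:-2])) \
          / (r[1:-1]**2 * dr**2)
    T[1:-1] += dt * (D * lap + P[1:-1] / (rho * c))
    T[0], T[-1] = T[1], T0            # symmetry at centre, fixed far field
    t += dt
print(f"centre temperature after {t_end/60:.0f} min: {T[0]:.1f} C")
```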
At this point, we can measure the final temperature of the tumor (denoted by Ts). From that steady-state temperature and the original temperature (To) of the tumor, we can determine the thermal conductivity of the tumor using the relationship deduced from the solution (18), as given below in eq. (23). This can be further used to simulate more realistic heat diffusion from the tumor to the nearby surrounding tissues in an advanced model. When the temperature of the tumor reaches the steady state, i.e., at r = 0, eq. (22) applies, where the parameters, such as the thermal conductivity (k), specific heat capacity (c), density (ρ), and thermal diffusivity (D) of healthy and tumorous lung tissue, are taken from the literature (Giering et al., 1995). From Fig. 8, one can see the dependence of the temperature on the radius of the human lung tumor. This figure shows that the temperature is nearly constant over almost 1 cm of the radius; it then starts decreasing and drops off to human body temperature at the periphery, i.e., at 4 cm. This is because of the conditions imposed on the problem. It is pertinent to mention that the temperature obtained is in the therapeutic range (42-48 °C) for the treatment of lung tumors. Without imposing such conditions on the problem, heat would have propagated out of the tumor, so this figure does not represent a fully realistic situation. To address the realistic situation, one can tackle this problem with advanced modeling techniques, for instance, an approach using the finite difference method. Figure 9 shows the temperature produced by the prepared nanoparticles, i.e., NiFe2O4, as a function of the radius of the human lung tissue and of time, to obtain the maximum temperature in the therapeutic range. Here, we see that the temperature increases as time passes, reaches its maximum value (52.5 °C), and then decreases from the center to the periphery of the lung tumor with the assumed radius of 4 cm.

Conclusions
NiFe2O4 MNPs with crystallite sizes in the 15-55 nm range were prepared via the co-precipitation route. The samples showed ferromagnetic behavior, and a decrease in saturation magnetization with rising temperature was observed for samples S4 and S5. However, sample S5 was found to have a larger saturation magnetization at all temperatures compared to sample S4. The coercivity of sample S4 was larger than that of S5; however, the products of coercivity and magnetization (Zeeman energy) for both samples are nearly the same, suggesting almost the same heating capabilities for these samples. From the hyperthermia measurements, sample S4 was found to be the most suitable candidate for hyperthermia applications at 543 kHz because of its ability to produce heat in the therapeutic range of 42-48 °C and its high SAR value. We found from the heat diffusion equation modeling that the temperature reaches a maximum constant value of 52.5 °C within 2 minutes and then drops when moving toward the periphery of the lung tumor. Although the modeling results in the current study are not fully realistic, they can be made more realistic in a more advanced and suitable model based on the same heat diffusion equation, from the perspective of hyperthermia treatment of cancer.
2023-07-28T15:37:46.948Z
2023-06-29T00:00:00.000
{ "year": 2023, "sha1": "50321ee3f4232ea69fd518a88f33f32b6ba8a5e9", "oa_license": "CCBYNC", "oa_url": "https://www.internationalrasd.org/journals/index.php/jmps/article/download/1517/987", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "4e3d9fa4a942f5511e6c1a52d62ca7aadf2a184e", "s2fieldsofstudy": [ "Materials Science" ], "extfieldsofstudy": [] }
245360247
pes2o/s2orc
v3-fos-license
Effect of Real-Time Online High-Intensity Interval Training on Physiological and Physical Parameters for Abdominally Obese Women: A Randomized Pilot Study

Purpose: This study aimed to investigate the effects of online high-intensity interval training (HIIT) in abdominally obese women experiencing health complications due to COVID-19. Methods: Sixteen participants were enrolled and divided into the HIIT group (n = 8) and the moderate-intensity continuous training (MICT, n = 8) group. The HIIT group underwent 20 min of exercise consisting of 20 s of high-intensity (85-90% HRmax) exercise followed by 30 s of exercise at 60% HRmax, using only body weight. The main exercise program for the MICT group included 40 min of stationary bike pedaling at 65-70% HRmax. Exercise was performed three days a week for eight weeks using a smart device and an application that enables bidirectional communication. Results: The HIIT group showed reduced body fat (p = 0.036), BMI (p = 0.021), and visceral fat (p = 0.003) compared to the MICT group. Further, the HIIT group also had reduced insulin (p = 0.021) and LDL levels (p = 0.024), increased grip strength (left: p = 0.012, right: p = 0.002), and a substantial drop in total stress index (p = 0.004) compared to the MICT group. Conclusions: Thus, online HIIT is a useful means to reduce abdominal fat, improve blood lipid profile and muscle strength, and relieve stress caused by COVID-19.

Introduction
The outbreak of the coronavirus disease (COVID-19) the world over in 2020 triggered unprecedented political, financial, and social crises and cultural transformations, and the prolonged pandemic, ironically, has accelerated the cutting-edge technology-based fourth industrial revolution [1,2]. Going beyond the scope of online communication and consumption, the realization of work, education, and travel in virtual spaces, such as augmented reality (AR) and the metaverse, has made smart devices an indispensable part of people's lives in modern society [3]. However, it has also resulted in musculoskeletal disorders and obesity in people leading sedentary lifestyles for prolonged periods of time [4,5]. Particularly, abdominal obesity among women has markedly increased in relation to childbirth, postural imbalance, and diminished physical activity [6,7]. As of 2020 in South Korea, 40% of the adult population is obese, and the average daily sedentary time among women is 7.8 h, an increase from that before the COVID-19 pandemic [8]. According to recent studies, despite a diet plan and exercise, a sedentary lifestyle itself induces abdominal obesity because of pelvic imbalance and reduced core and gluteal muscle strength [9]. Further, a BMI of 30 kg/m² or higher increases COVID-19-related mortality by reducing immune function [10,11]. However, with professional manpower and administrative focus directed toward measures for disinfection amid the global crisis caused by COVID-19, the gravity of obesity and secondary metabolic disorders is currently underestimated, and measures for their prevention and treatment have been put on hold [4,12]. Abdominal obesity causes various metabolic diseases because of the large number of cytokines released from fat cells. These cytokines cause inflammatory diseases, which have a positive correlation with abdominal obesity [13-17]. Therefore, despite the COVID-19 environment, efforts should be made to increase the amount of physical activity.
In order to manage weight gained in the pandemic environment, the American College of Sports Medicine (ACSM) recommends 150 min of aerobic exercise per week [16]. In addition, high-intensity exercise for more than 75 min per week has been reported to help reduce fat, and non-face-to-face participation in exercise can be an alternative [18]. The effects of HIIT are already known from previous studies [19-21]; however, there are very few cases of online HIIT in the COVID-19 situation. High-intensity interval training (HIIT) involves the repetition of short, vigorous exercise followed by a short break to maximize energy consumption in a short period of time and is thus known as a time-efficient exercise for metabolizing fat [10]. It has gained popularity since it ranked second in the 2020 ACSM fitness trends [19]. Recently, studies have proposed that HIIT is more helpful than moderate-intensity continuous training (MICT) in preventing cardiovascular diseases, rapidly improving glucose regulation in patients with diabetes mellitus [19,20], and effectively reducing abdominal fat and blood cytokines in postmenopausal women [13]. Additionally, Reljic et al. (2020) reported that HIIT is less boring than MICT, while relieving stress [21], increasing physical fitness, and improving vascular function [10]. Despite the greater health benefits and time efficiency compared to MICT, beginners and obese individuals are reluctant to actively utilize HIIT due to the risk of joint damage or injury [22,23]. However, with professional guidance and feasible options that ensure safety and engagement, HIIT can be useful in promoting physical activity and preventing obesity in inactive, sedentary individuals. A "new normal" has been established since the outbreak of COVID-19, whereby people have begun enjoying online challenges and sports in virtual reality using the Global Positioning System (GPS) and various smart device-based platforms [24]. However, research on the role of body mechanics, exercise physiology, and the psychological effects of online exercise programs in obesity is scarce. Additionally, appropriate guidelines for high-intensity exercise are lacking, calling for detailed age-specific and disease-specific surveys. Further, the new environment we live in today has elevated the risk of obesity compared to the pre-COVID-19 era, but solutions remain elusive. Thus, this study aims to investigate the effects of real-time, non-face-to-face HIIT on body composition, abdominal obesity, muscle strength, blood lipid profile, and stress index in women with abdominal obesity.

Study Design
A randomized block design was used. Following published instructions, subjects were divided into two groups by computer: group 1, moderate-intensity continuous training (MICT), and group 2, high-intensity interval training (HIIT) [25]. This is a pilot study to verify the effectiveness of non-face-to-face HIIT, conducted to analyze its feasibility and sustainability according to the results.

Subjects
Women under the age of 40 who work at company S in Gyeonggi Province, Korea, each with a body mass index (BMI) of 25 kg/m² or higher and an abdominal circumference of 85 cm or more, were recruited, and those who signed the informed consent forms were enrolled. This study was approved by the institutional review board at Korea National Sports University (1263-202106-BR-011-02) and conformed to the recommendations of the Declaration of Helsinki.
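A minimal sketch of the computer-generated block allocation described above is given below; the block size of four is an assumption, as the paper states only that the assignment was computer-generated.

```python
import random

# Sketch of block randomization: each block holds two HIIT and two MICT
# assignments in random order, keeping group sizes balanced (8 vs 8).
random.seed(2021)                       # fixed seed for reproducibility
block = ["HIIT", "HIIT", "MICT", "MICT"]
allocation = []
for _ in range(4):                      # 4 blocks x 4 subjects = 16 subjects
    b = block[:]
    random.shuffle(b)
    allocation.extend(b)
print(allocation, allocation.count("HIIT"), allocation.count("MICT"))
```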
Table 1 shows the physical characteristics of the participants.

Intervention
Over a total of 8 weeks, each group followed a different type of intervention using a real-time video web program. Following the recommendations of the ACSM, the program involved bidirectional communication, with instructor feedback on accurate postures, precautions, and participant condition [19]. Both programs were performed three times a week for eight weeks. The MICT group underwent a 50-min session and the HIIT group a 30-min session. The HIIT program comprised five minutes of warm-up, 20 min of the main exercise, and five minutes of cool-down. The MICT program comprised five minutes of warm-up, 40 min of the main exercise, and five minutes of cool-down. Exercise intensity for the HIIT group was gradually increased every two weeks depending on the participants' fitness. The total calorie consumption was equal for both groups. Total calories and exercise intensity were measured using a Samsung Galaxy Watch Active2. Subjects did not participate in other exercises that could affect the data, and dietary control was applied equally to both groups according to the prescription of a professional nutritionist. The nutritionist checked the subjects' food diaries once every Sunday at 11 a.m.

(1) HIIT
The HIIT program used in this study is a modified version of the protocols used by Gholizadeh (2018) and Karin (2020), which consisted of 20 s of high-intensity exercise at 85-90% of HRmax followed by 30 s of exercise at 60% of HRmax, with 2 sets of 12 reps [26,27]. The HIIT program consisted of movements using only the participant's body weight. Exercise intensity was measured based on HRmax using the Samsung Galaxy Watch Active2. The participants wore the smartwatch on their wrist and checked their heart rate during exercise.

(2) MICT
The MICT program used in this study was a modified version of the protocol proposed by Karin (2020); the participants pedaled on a stationary bike at 60-70% of HRmax for 40 min [27]. Exercise intensity was measured based on HRmax using the Samsung Galaxy Watch Active2, and the participants maintained an appropriate intensity by checking the smartwatch on their wrist during exercise. The examiner frequently checked participants' exercise intensity, monitored their condition on the screen, and encouraged the participants during exercise. Participants in both groups stopped for a break immediately upon feeling pain or physical discomfort during exercise. Table 2 shows both exercise programs.
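As a sketch of the intensity prescriptions above, the snippet below computes the target heart-rate zones assuming the common age-based estimate HRmax = 220 - age; the paper measured heart rate with a smartwatch but does not state how HRmax was derived, so this formula is an assumption.

```python
# Target heart-rate zones for the two protocols, assuming HRmax = 220 - age
# (a common estimate; the derivation of HRmax is not stated in the paper).
def hr_zone(age, lo_frac, hi_frac):
    hr_max = 220 - age
    return round(lo_frac * hr_max), round(hi_frac * hr_max)

age = 35  # illustrative: participants were women under 40
print("HIIT work interval, 85-90% HRmax:", hr_zone(age, 0.85, 0.90))
print("HIIT recovery, 60% HRmax:", hr_zone(age, 0.60, 0.60))
print("MICT main exercise, 60-70% HRmax:", hr_zone(age, 0.60, 0.70))
```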
(1) Body composition test
To measure body composition, the participants fasted for 2 h before the test. Height was measured using an automatic height scale (DS-103M, Dong Sahn Jenix Co., Seoul, Korea), and after removing all metal accessories from the body, weight (kg), body fat (kg), skeletal muscle mass (kg), BMI (kg/m²), visceral fat level, and percent body fat (%) were measured using a body composition analyzer (InBody 770, Biospace Co., Seoul, Korea).

(2) Abdominal obesity
To test for abdominal obesity, abdominal subcutaneous fat thickness and abdominal and hip circumference were measured. Subcutaneous fat thickness was measured using a skinfold caliper (Harpenden HSK-BI, Skinfold Caliper, British Indicators, UK). With the participant standing upright with both arms resting comfortably at the sides, the examiner pulled the skin 3 cm lateral to the navel to measure the thickness (mm) of subcutaneous fat. Abdominal and hip circumference were measured twice using a tape measure around the widest part around the navel and buttocks, and the average value was used.

(3) Blood test
The participants fasted from 9 p.m. the day before until 10 a.m. on the day of the blood glucose and lipid test. At 10 a.m., a 5 mL blood sample was taken from the brachial vein. After 30 min of incubation at room temperature, the blood sample was centrifuged (3000 rpm, 10 min) to separate the serum layer and was immediately taken to Green Cross Laboratories, Inc. for insulin, total cholesterol (TC), triglyceride (TG), low-density lipoprotein cholesterol (LDL), and high-density lipoprotein cholesterol (HDL) analysis.

(4) Muscle strength test
Muscle strength was measured using an isometric dynamometer (TKK-5401, Takei, Japan). The participants stood with their feet shoulder-width apart and both arms resting naturally at the sides, held the digital dynamometer with four fingers (excluding the thumb) perpendicular to the handle, and squeezed it as hard as they could. Grip strength was measured twice for each hand, and the best record was used for analysis.

(5) Stress index test
The stress index was measured using the Korean version of the Perceived Stress Scale (PSS) modified by Lee (2016) based on the original scale developed by Cohen (1988) [28,29]. This 10-item tool asks about perceived stress in the past month using a 5-point scale (0 = never, 1 = rarely, 2 = sometimes, 3 = frequently, 4 = very often). Negatively worded items 4, 5, 7, and 8 were reverse scored, and the total possible score is 40, where a higher score indicates more severe stress. The reliability (Cronbach's α) of the PSS used in this study was 0.83, and it was processed using SPSS 22.0 software.

Data Processing
All study data were processed using SPSS 22.0 software. The differences in body composition, abdominal fat, blood lipids, muscle strength, and stress between the HIIT and MICT groups were analyzed. Because the assumption of normality was not met due to the small sample size, nonparametric methods were used for all analyses. Differences in the average changes (post-exercise average minus baseline average) between the two groups were analyzed using the Mann-Whitney U test, and changes over time within each group were analyzed using the Wilcoxon signed-rank test. A comparison of the reference values between groups was performed by obtaining Cohen's d values. Statistical values are presented as mean and standard deviation, and statistical significance was set at α = 0.05.
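A minimal sketch of the nonparametric analysis described above is shown below in Python (the study used SPSS 22.0); all data arrays are made-up placeholders, not the study's measurements.

```python
import numpy as np
from scipy import stats

# Placeholder change scores (post minus baseline), n = 8 per group.
hiit_change = np.array([-2.1, -1.8, -2.5, -1.2, -2.9, -1.6, -2.2, -1.9])
mict_change = np.array([-0.8, -0.5, -1.1, -0.2, -0.9, -0.6, -1.0, -0.4])

# Between-group difference in average change: Mann-Whitney U test
u, p_between = stats.mannwhitneyu(hiit_change, mict_change,
                                  alternative="two-sided")

# Within-group change over time: Wilcoxon signed-rank test on pre/post
pre  = np.array([34.1, 33.2, 35.6, 32.8, 36.0, 33.9, 34.7, 33.1])
post = pre + hiit_change
w, p_within = stats.wilcoxon(pre, post)

# Effect size between groups: Cohen's d with a pooled standard deviation
pooled_sd = np.sqrt((hiit_change.var(ddof=1) + mict_change.var(ddof=1)) / 2)
d = (hiit_change.mean() - mict_change.mean()) / pooled_sd
print(f"Mann-Whitney p = {p_between:.3f}, Wilcoxon p = {p_within:.3f}, d = {d:.2f}")
```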
Changes in Abdominal Fat According to Exercise Intensity
The changes in abdominal fat after eight weeks of exercise were compared between the MICT and HIIT groups. There were significant differences in abdominal fat thickness, abdominal circumference, and hip circumference (abdominal fat thickness: z = −2.004, p = 0.050; abdominal circumference: z = −2.223, p = 0.026; hip circumference: z = −2.111, p = 0.035) (Table 4, Figure 1).

Changes in Blood Glucose and Lipid Profile According to Exercise Intensity
The changes in blood glucose and lipid profile after eight weeks of exercise were compared between the MICT and HIIT groups, and there were significant differences in insulin and LDL (insulin: z = −2.310, p = 0.021; LDL: z = −2.260, p = 0.024) (Table 5). Note: Values are presented as mean ± SD (n = 8 per group). † p < 0.05 from pre to post. * p < 0.05 between groups. HIIT: high-intensity interval training, MICT: moderate-intensity continuous training, TC: total cholesterol, TG: triglyceride, LDL: low-density lipoprotein, HDL: high-density lipoprotein.

Changes in Muscle Strength According to Exercise Intensity
The changes in grip strength after eight weeks of exercise were compared between the MICT and HIIT groups, and there were significant differences in grip strength at the post-exercise assessment (left: z = −2.521, p = 0.012; right: z = −3.046, p = 0.002) (Table 6). Within the groups, the MICT group showed significant changes in right grip strength (right: z = −2.527, p = 0.012), and the HIIT group showed significant changes in both right and left grip strength (left: z = −2.251, p = 0.012; right: z = −2.521, p = 0.012) (Table 6, Figure 3). Note: Values are presented as mean ± SD (n = 8 per group). * p < 0.05, † p < 0.01 between groups. ‡ p < 0.05 from pre to post. HIIT: high-intensity interval training, MICT: moderate-intensity continuous training, HGS: hand grip strength.

Changes in Stress According to Exercise Intensity
The changes in stress after eight weeks of exercise were compared between the MICT and HIIT groups, and there were significant differences in the stress index at the post-exercise assessment (z = −2.852, p = 0.004) (Table 7). Within the groups, there were significant changes in both groups (MICT: z = −2.271, p = 0.023; HIIT: z = −2.536, p = 0.011) (Table 7, Figure 4). Note: Values are presented as mean ± SD (n = 8 per group). * p < 0.05 between groups. † p < 0.05 from pre to post. HIIT: high-intensity interval training, MICT: moderate-intensity continuous training, PSS: perceived stress scale.
Discussion
As the number of sedentary people has increased due to COVID-19, the problem of obesity in women has become serious. This study investigated the effects of online high-intensity interval training (HIIT) on women's body composition, abdominal obesity, muscle strength, blood lipid profile, and stress, and confirmed that eight weeks of online HIIT had greater physical and psychological effects compared to MICT.

Effects of Online HIIT on Body Composition and Abdominal Obesity
There were significant differences in body fat, BMI, and visceral fat levels between participants of the two groups. This is consistent with previous findings showing that high-intensity exercise at 90% HRmax reduces body fat mass and visceral fat mass more effectively than moderate-intensity continuous cycling in obese men and women [26]; additionally, HIIT is more effective for protein synthesis and lipid metabolism than MICT [28]. Borrega et al. (2021) reported that HIIT is a highly effective and time-efficient weight control strategy for busy people in modern society [30]. Moreover, a combined aerobic and anaerobic online training program was reported to alleviate metabolic disorders, including diabetes mellitus, and participation in HIIT for 75 min a week to maintain an appropriate BMI was reported to lower COVID-19-related mortality [31].
Within groups, participants of both the HIIT and MICT groups showed reductions in body weight, body fat mass, percent body fat, BMI, and visceral fat level after exercising. This supports the recommendations of the ACSM and CDC, which state that 150 min of moderate-intensity exercise every week prevents obesity and diabetes mellitus [3] and that regular exercise boosts immune function against viruses [32]. There were significant differences in the changes in levels of abdominal fat between participants of the two groups, particularly in abdominal subcutaneous fat thickness, abdominal circumference, and hip circumference. This is in line with previous findings showing that HIIT reduces body fat mass and WHR in overweight and obese individuals and is substantially more effective in improving visceral fat than MICT [33-35]. Therefore, an online HIIT program would be effective in preventing abdominal obesity and improving cardiopulmonary and immune function in sedentary women.

Effects of Online HIIT on Blood Glucose and Lipid Profile
There were significant differences in the changes of blood glucose and lipid levels between participants of the two groups, particularly in insulin and LDL levels. This is consistent with findings showing that combined resistance exercise and HIIT reduce fasting glucose and insulin more effectively than moderate-intensity training in older adults with chronic diseases [36]. Further, in terms of within-group changes, the HIIT group showed a reduction in insulin, TC, TG, and LDL, which is consistent with past reports that HIIT increases insulin sensitivity and improves diabetes-related lipid concentrations [37]; it also supports study findings that HIIT suppresses inflammatory cytokine expression and prevents nonalcoholic fatty liver disease [17]. Both groups showed an increase in HDL after exercise, which is consistent with previous reports that HIIT at 95% HRmax for five days a week markedly increased HDL concentration in mice [38]; both protocols helped increase HDL, although the effect may have varied. Thus, either protocol can be prescribed to encourage exercise in the COVID-19 era, though HIIT will be particularly effective in preventing both obesity and chronic disease. Exercise combining resistance training and HIIT in the elderly improved LDL-C, insulin, and HOMA-IR profiles compared to moderate-intensity exercise [39], and these findings are the same as in this study [39].

Effects of Online HIIT on Muscle Strength
In this study, there was a significant group-by-time interaction effect on grip strength. Such differences between groups are consistent with previous findings, which show that HIIT, including total-body resistance exercise (TRX), increases grip strength in older adults [40] and substantially improves muscle strength and balance [41]. The findings also suggest that the resistance exercise included in the HIIT program is effective in increasing skeletal muscle mass and function, with high-intensity modes of exercise having a positive effect on the protein synthesis mechanism by regulating metabolic reactions in obese individuals [42]. This is in line with previous findings that a 12-week HIIT program including resistance training improves cardiovascular indices, skeletal muscle strength, and grip strength in overweight adults [43].
García emphasized that physical activities and HIIT using one's own body weight are helpful at-home methods of training for muscle enlargement in the COVID-19 era, and, particularly, this form of resistance exercise can be performed by untrained adults or trainers [42,43]. Therefore, resistance training should be included in HIIT programs, and subsequent studies should establish and validate various muscle-strengthening protocols for various groups of people.

Effects of Online HIIT on Stress
There was a significant group-by-time interaction effect on stress. This is consistent with past findings that HIIT was more effective than MICT in reducing the anxiety, depression, and stress that intensified during the pandemic, and it presents evidence supporting HIIT as an effective strategy for coping with the physical stress resulting from the social isolation caused by COVID-19 [43]. Both groups showed a reduction in the stress index after the exercise program. This is in line with past reports showing that regular exercise is a useful tool for reducing anxiety and suicide rates and improving self-esteem. Further, HIIT and MICT interventions at home can have positive effects on psychological health and processes of recovery [43,44], confirming that the non-face-to-face HIIT program used in this study is effective in promoting psychological stability in obese women. Taken together, participating in a smart device-based HIIT program seems to have prevented abdominal obesity in women, who spend a smaller amount of time outdoors compared to men [44]; additionally, a media platform that enables bidirectional communication seems to have increased participants' motivation and bonding with others participating in the program and helped instill confidence that they can easily exercise at home and enjoy themselves. Karin reported that there were no cases of acute injury among women who participated in HIIT and that participants can stay engaged in the program by adjusting the exercises and rest intervals as they like [45]. Thus, subsequent studies should further validate HIIT to expand the scope of its benefits and should continue to examine high-quality online exercise programs focused on specific groups of people, such as those with musculoskeletal disorders, geriatric diseases, and disabilities, so that HIIT can be utilized as a means of maintaining public health in the post-COVID-19 era. Although it is difficult to generalize the results due to the small sample size of this study, it is a meaningful attempt to identify the effectiveness of alternative exercise in the COVID-19 environment. Therefore, further studies will require more subjects and verification of effectiveness across various population strata.

Conclusions
This study showed that eight weeks of online HIIT led to significant changes in body fat mass, BMI, and visceral fat level and markedly reduced abdominal subcutaneous fat thickness, abdominal circumference, and hip circumference compared to MICT. In the blood test, participants in the HIIT group showed reduced insulin and LDL levels and increased grip strength compared to those in the MICT group. Finally, participants in the HIIT group showed a marked reduction in the stress index, confirming that HIIT has greater psychological benefits than MICT. Therefore, HIIT using a smart device is a useful tool for improving abdominal obesity and blood lipid profiles, increasing muscle strength, and alleviating stress in sedentary women in the COVID-19 era.
Future studies that consider age- and disease-specific characteristics will be required.
2021-12-22T17:17:53.141Z
2021-12-20T00:00:00.000
{ "year": 2021, "sha1": "bf0cc9ba88b7aded0813ea1fe91d6905131bc285", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2076-3417/11/24/12129/pdf?version=1639991260", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "b0a60800e49b8d828b85781e547f84be26726309", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
252891348
pes2o/s2orc
v3-fos-license
Diagnostic value of combination of biomarkers for malignant pleural mesothelioma: a systematic review and meta-analysis

Introduction: Early-stage accurate diagnosis of malignant pleural mesothelioma (MPM) has always been a formidable challenge. DNA and protein biomarkers for the diagnosis of MPM have received considerable attention, and yet the outcomes are inconsistent. Methods: In this study, a systematic search of PubMed, EMBASE, and the Cochrane Library was conducted to identify relevant studies from database inception to October 2021. Moreover, we adopted QUADAS-2 to evaluate the quality of eligible studies and the Stata 15.0 and Review Manager 5.4 software programs to perform the meta-analysis. Additionally, bioinformatics analysis was performed with GEPIA for the purpose of exploring the relationship between the related genes and the survival time of MPM patients. Results: We included 15 studies at the DNA level and 31 studies at the protein level in this meta-analysis. All results demonstrated that the diagnostic accuracy of the combination of MTAP + Fibulin-3 was the highest, with a SEN of 0.81 (95% CI: 0.67, 0.89) and a SPE of 0.95 (95% CI: 0.90, 0.97). The bioinformatics analysis indicated that higher MTAP gene expression was beneficial for the survival time of MPM patients. Discussion: Nonetheless, as a result of the limitations of the included samples, additional research may be necessary before drawing conclusions. Systematic review registration: https://inplasy.com/inplasy-2022-10-0043/, identifier INPLASY2022100043.

Introduction
Malignant pleural mesothelioma (MPM) is an aggressive malignancy that arises from the serosa of the body cavity. Mesothelioma originates mainly in the pleura and peritoneum, with 85-90 percent arising in the pleura (1). The standard diagnostic methods for MPM include chest X-ray, computed tomography (CT) of the chest and upper abdomen, examination of the pleural effusion obtained by thoracentesis, and histopathological examination with thoracoscopy. However, chest X-ray and chest CT lack sufficient sensitivity for diagnosis, and substantial volumes of pleural effusion can mask pleural/chest lesions, while a small amount of malignant pleural effusion can remain undetectable. The gold standard for the diagnosis of MPM is pathological examination. Nonetheless, as a result of the inconspicuous clinical features and the long incubation period of MPM, patients are frequently diagnosed at an advanced stage, which is very unfavorable for treatment and prognosis. In this instance, additional techniques are required to demonstrate the lesions' malignant biological potential (2). Currently, due to their non-invasive nature and relatively low cost, tumor biomarkers for disease diagnosis have become increasingly desirable. In addition to the many protein markers in MPM [mesothelin (MSLN), soluble mesothelin-related peptide (SMRP), osteopontin, fibulin-3, High Mobility Group Box 1 (HMGB1) protein, etc.], DNA is released and expressed in cells (3-7). In this context, epigenetic markers such as DNA are emerging as promising biomarkers for multiple cancer types, including MPM (8,9). Many scholars have published studies on DNA biomarkers for the early diagnosis of MPM, and DNA has proved effective in recognizing the malignant transformation of tumors and in predicting prognosis in many cancers.
It is worth mentioning that DNA found in biological samples is stable and quantifiable, which offers a significant benefit for laboratory detection. All of the aforementioned factors support the need to identify appropriate biomarkers that can be easily measured in easily accessible tissues for early detection and improved prognosis. To date, no biomarker has become clinically available to diagnose MPM alone, and poor sensitivity (SEN) or specificity (SPE) is frequent. Consequently, we presume that seeking a combination of biomarkers with high diagnostic accuracy is a reasonable choice. In our previous study, diagnostic accuracy analysis was performed at the protein level. The results demonstrated that Fibulin-3, with a pooled SEN of 0.90 (95% CI: 0.74, 0.97) and SPE of 0.91 (95% CI: 0.84, 0.95), might be a more appropriate indicator for the early diagnosis of MPM compared with the other two biomarkers (MSLN and SMRP) (10). Encouragingly, genes such as BRCA1-associated protein 1 (BAP1), methylthioadenosine phosphorylase (MTAP), and CDKN2A have been clinically applied as biomarkers for the early diagnosis of MPM (11). In this study, hence, we centered on BAP1, MTAP, and BAP1 + MTAP and, in addition, combined them with MSLN, SMRP, and Fibulin-3 separately to analyze the diagnostic accuracy for MPM. Subsequently, we obtained satisfactory results after the unified integration of protein markers and DNA for diagnostic accuracy evaluation, which implies that, for the early diagnosis of MPM, more precise biomarker combinations can be used. On top of this foundation, we conducted a bioinformatics study employing Gene Expression Profiling Interactive Analysis to assess the relationship between BAP1 and MTAP expression and prognostic survival in MPM.

Materials and methods
This meta-analysis followed the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines and was registered on INPLASY (registration number: INPLASY2022100043). The registration information can be viewed in its entirety at inplasy.com (https://inplasy.com/inplasy-2022-10-0043/; accessed on 12 October 2022).

Search strategy and study selection
We conducted systematic searches in the PubMed, Embase, and Cochrane libraries up to October 2021. The details of the literature search strategy are listed in Figure 1 and Table 1. Furthermore, we sought references in relevant systematic reviews/meta-analyses to identify other potential studies. Protein biomarker information can be found in the corresponding citation (10).

Inclusion and exclusion criteria
Studies meeting the following inclusion criteria were considered eligible for selection: (a) study type: studies evaluating the diagnostic accuracy of MPM biomarkers prospectively or retrospectively. Exclusion criteria: (a) papers in languages other than English and Chinese; (b) animal experiments; (c) reviews, meta-analyses, conference abstracts, case reports, letters, expert opinions, and duplicate or multiple publications; (d) insufficient data to calculate SEN and SPE.

Selection of studies
Two authors independently screened the titles and abstracts of each study following the completion of the search, and we obtained the full texts of all articles deemed appropriate by either reviewer for further evaluation. The same two authors evaluated the potential full texts, selected studies for inclusion on the basis of the inclusion/exclusion criteria, and reached agreement through discussion and consensus to resolve disagreements.
A third reviewer was consulted if agreement could not be reached.

Data characteristics
Two independent reviewers were tasked with conducting the literature search and assessing the eligibility of each study based on the inclusion criteria, while a third researcher resolved any conflicts. Following a review of the full texts of the included articles, we compiled the following information: (a) study information: authors, year of publication, language of publication, journal, and type of study; (b) sample size: number of MPM patients and non-MPM patients; (c) index test: detection methods and types of biomarkers; (d) baseline data: age, gender, and diagnosis; (e) outcome counts: true positives (TP), false positives (FP), true negatives (TN), and false negatives (FN) for each study.

Risk of bias and quality assessment of evidence
We adopted the revised Quality Assessment of Diagnostic Accuracy Studies 2 tool (QUADAS-2) (HTA Program 2011 (www.hta.ac.uk)) with the aim of assessing the quality of each study. This tool covers four principal domains: patient selection, index test, reference standard, and flow and timing. The risk of bias was assessed from the results of the four domains, with each question answered as "yes," "no," or "unclear." Applicability concerns were assessed from the results of the first three domains, with each question rated as "low," "high," or "unclear" (25). This evaluation was done independently by two reviewers.

Assessment of publication bias
We utilized Deeks' funnel plot to detect publication bias. Data with severe publication bias (P < 0.05) were excluded.

Survival analyses and RNA-seq data acquisition
We performed an overall prognostic analysis of the BAP1 and MTAP genes in mesothelioma on GEPIA (GEPIA: a web server for cancer and normal gene expression profiling and interactive analyses. Nucleic Acids Res, 10.1093/nar/gkx247). In this database, the RNA-seq data of 86 tumor tissues and the clinical information of patients, including age, gender, tumor stage, and histologic subtype (epithelioid, sarcomatous, biphasic), were collected. We divided the patients into a high-expression group and a low-expression group according to the median expression level of the MTAP gene to evaluate the prognosis of the MPM patients. Kaplan-Meier (KM) survival analyses were conducted using the R packages survminer (v.0.4.9) and survival (v.3.2.10) (https://CRAN.R-project.org/package=survminer) (http://cran.r-project.org/package=survival).

Figure 1: Flowchart of literature search.

Statistical analysis
We adopted the Stata 15.0 (Stata Corporation, College Station, TX, USA) and Review Manager 5.4 statistical software programs for the purpose of processing the data and detecting heterogeneity among the studies in this meta-analysis. Subsequently, we extracted the TP, FP, TN, and FN data from each study and obtained a 2×2 contingency table. The SEN, SPE, positive likelihood ratio (PLR), negative likelihood ratio (NLR), and DOR for each study were calculated, and an SROC curve was generated with the Stata software. The resulting regression coefficients were used to fit the ROC curves, yielding the AUC, SEN, SPE, and likelihood ratios (LRs). The RNA-seq datasets used by GEPIA are based on the UCSC Xena project (http://xena.ucsc.edu) and are computed by a standard pipeline. The statistical calculations of data from TCGA were processed using R software (v.3.6.3).
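As a sketch of the per-study indices computed from each 2×2 contingency table (the pooling itself was performed in Stata 15.0), the snippet below derives SEN, SPE, PLR, NLR, and DOR from hypothetical counts.

```python
# Per-study diagnostic indices from a 2x2 table. The counts below are
# placeholders, not data from the included studies; 0.5 is added to every
# cell (a common continuity correction) to avoid division by zero.
def diagnostic_indices(tp, fp, fn, tn):
    tp, fp, fn, tn = (x + 0.5 for x in (tp, fp, fn, tn))
    sen = tp / (tp + fn)               # sensitivity
    spe = tn / (tn + fp)               # specificity
    plr = sen / (1 - spe)              # positive likelihood ratio
    nlr = (1 - sen) / spe              # negative likelihood ratio
    dor = plr / nlr                    # diagnostic odds ratio
    return sen, spe, plr, nlr, dor

sen, spe, plr, nlr, dor = diagnostic_indices(tp=42, fp=3, fn=10, tn=55)
print(f"SEN={sen:.2f} SPE={spe:.2f} PLR={plr:.1f} NLR={nlr:.2f} DOR={dor:.0f}")
```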
The characteristics of the included studies
The included studies were published between 2008 and 2020. We collected data on 1054 MPM patients and 810 non-MPM patients. After eliminating duplicate articles and reviewing titles and abstracts, we conducted a full-text screening of the remaining 45 studies and eventually determined to include 15 studies. The details of the literature search are illustrated in the flowchart in Figure 1, and the characteristics of the included studies are summarized in Table 1. All MPM patients were diagnosed by pathological examination, and the results concern the biomarkers MTAP or BAP1.

Quality assessment
QUADAS-2 was used to evaluate the methodological quality of the studies. Supplementary Figure 1 in the Appendix summarizes the quality of the included studies, whereas Supplementary Figure 2 provides details on the risk of bias and applicability concerns for each included study.

Diagnostic accuracy
Fifteen studies (9, 12-24) assessed the diagnostic value of the combination of biomarkers for the diagnosis of MPM, with a total of 1864 patients included. Forest plots from the meta-analysis illustrated that the pooled sensitivity of the combination of MTAP + Fibulin-3 for MPM diagnosis was 0.81 (95% CI, 0.67-0.89) and the pooled specificity was 0.95 (95% CI, 0.90-0.97). The AUC was 0.96 (95% CI: 0.94-0.97). The data for the diagnosis of MPM by DNA and by combinations of DNA and protein are detailed in Figures 2 and 3. The SROC curves are given in Figure 4, and the full data results of our analysis are indicated in Table 2.

Publication bias
The Deeks' funnel plot asymmetry test was applied to evaluate studies for potential publication bias (Supplementary Figure 3). These results indicate that the research articles included in this meta-analysis have no publication bias.

Prognostic analysis of the BAP1 and MTAP genes in mesothelioma
The overall prognosis associated with the BAP1 and MTAP genes in mesothelioma was analyzed on GEPIA using the log-rank test.

Subgroup analysis
The results of the subgroup analysis demonstrated that, in male MPM patients, higher MTAP gene expression was associated with longer survival time (P < 0.001) (Figure 6A). Higher MTAP expression corresponded to longer survival time at all ages (Figure 6B). Stage I, Stage II, Stage III, Stage I & Stage II, and Stage II & Stage IV all revealed the same trend: higher MTAP expression is associated with longer survival time in MPM patients (P < 0.05) (Figure 6C). In the T1 & T2, T3 & T4, and N0 & N1 subgroups, the higher the MTAP expression, the longer the survival time (P < 0.05) (Figure 6D). In the pathologic stage (Stage I) and N stage (N2 & N3) subgroups, MTAP expression and patient survival time did not show a correlation (Figure 6E). As shown in Figure 7, there was no significant difference in MTAP expression among the subgroups of MPM patients.
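A minimal sketch of the Kaplan-Meier/log-rank comparison between high- and low-MTAP groups is given below, written in Python with the lifelines package for illustration (the study used the R packages survival and survminer); the survival times are synthetic placeholders.

```python
import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

# Synthetic survival times (months) for high- vs low-expression groups.
rng = np.random.default_rng(0)
t_high = rng.exponential(30, 40)   # high MTAP expression: longer survival assumed
t_low  = rng.exponential(15, 40)   # low MTAP expression
e_high = np.ones_like(t_high)      # 1 = death observed (no censoring here)
e_low  = np.ones_like(t_low)

kmf = KaplanMeierFitter()
kmf.fit(t_high, e_high, label="MTAP high")
print("median survival (high):", kmf.median_survival_time_)

res = logrank_test(t_high, t_low, event_observed_A=e_high, event_observed_B=e_low)
print(f"log-rank p = {res.p_value:.4f}")
```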
Discussion
Malignant pleural mesothelioma is the most common type of primary pleural tumor. Although histologic diagnosis of MPM is currently the most widely used in clinical practice, patients have a better prognosis with an early diagnosis. Currently, there are no reliable indicators for the longitudinal surveillance and associated risk assessment of asbestos-exposed populations (26). It is widely known that the development of non-invasive diagnostic methods is a major challenge in modern oncology, and the analysis of samples of plasma, serum, urine, cerebrospinal fluid, and pleural fluid is a suitable way to identify markers associated with cancer progression, as these samples are easier to collect and less invasive for the patient. Biomarkers for cancer detection should ideally be readily available and inexpensive to measure, allowing for early disease detection and an improved prognosis (27). Meanwhile, many studies have identified changes in DNA expression in tumor tissues and body fluids across various tumor pathological processes, suggesting DNA as a potential diagnostic marker. Accordingly, our research focuses on whether DNA, or combinations of DNA and other biomarkers, can be the most appropriate solution for the early diagnosis of MPM. Moreover, the gold standard for the clinical diagnosis of MPM remains essential in the final confirmation of diagnosis (28-30). The purpose of this study was to evaluate the diagnostic value of DNA as well as multiple marker combinations for MPM using a meta-analysis approach. The studies we included were free of publication bias, indicating that the results of this study are reliable. Comparing the results across the 13 groups revealed that MTAP + Fibulin-3 had better specificity as well as AUC, but its sensitivity was not as outstanding, with a sensitivity and specificity of 0.81 (95% CI: 0.67, 0.89) and 0.95 (95% CI: 0.90, 0.97), respectively. The bioinformatic analysis demonstrated that MPM patients with higher MTAP gene expression had a longer period of survival. All of the aforementioned findings indicate that we can attempt to increase the survival time of MPM patients by regulating the expression of MTAP. Research on both early diagnosis and improved prognosis will benefit the whole process of MPM treatment. In addition, our findings make some contribution to combining markers at different levels (DNA and protein) for diagnosis; yet, due to the low sensitivity of MTAP + Fibulin-3, which may be caused by variables including the amount of data included in the study, the diagnostic performance of BAP1, MTAP, and the other combinations is not outstanding. We therefore believe that in practical clinical applications Fibulin-3 can be preferentially recommended as a biomarker for the early diagnosis of MPM, since, as a protein, it is more readily available and can be preserved until the assay is completed; our team has published the diagnostic accuracy of Fibulin-3 in a previous study (10). In our previous study, Fibulin-3 could be detected in plasma and pleural effusion, so it can be used as a biomarker for early diagnosis. However, due to its limited specificity, for MPM with an unclear diagnosis, MTAP can be further detected by immunohistochemistry to improve the specificity of diagnosis. At the same time, according to the results of the bioinformatics analysis, MTAP can affect the prognosis of patients with MPM, which can help to guide the prognosis and treatment of MPM patients. What is more, we also determined that it is not easy to obtain pleural effusion and tissue specimens from suspected MPM patients at an early stage, which may increase the difficulty of early diagnosis. More research is required to determine whether MTAP can be extracted from plasma.
The results of this study may aid in the early clinical diagnosis of MPM: if Fibulin-3 can be detected, it is the preferentially recommended biomarker, whereas markers at the DNA level are not recommended as a first choice. Nonetheless, considering the excellent specificity of MTAP + Fibulin-3, we suggest that future clinical studies investigate MTAP in more depth, to compensate for the low sensitivity of MTAP + Fibulin-3 in the diagnosis of MPM caused by factors such as the amount of available data or differences in laboratory techniques; this may be a new breakthrough point for the early diagnosis of MPM in the future. Meanwhile, this study has some limitations. Although 15 studies were included, the small sample sizes of the included studies might compromise the reliability of the pooled estimates of the meta-analysis. In addition, the majority of studies did not report the time interval between diagnostic testing and reference standard testing. Most of the included studies were cross-sectional studies involving patients with advanced disease, and there may be inconsistent laboratory methods and technical irregularities, which largely limit the diagnostic accuracy of MPM biomarkers.

Conclusions

In conclusion, multiple normalizers may be more appropriate than a single reference DNA for obtaining reliable data. Increasing the expression of the MTAP gene may enhance the prognosis and prolong the survival time of MPM patients. Such exploration can help the early diagnosis of MPM and improve prognosis, moving faster towards precision medicine.

Data availability statement

The datasets presented in this study can be found in online repositories. The names of the repository/repositories and accession number(s) can be found in the article/Supplementary Material.

Author contributions

MZ and ZZ provided concept, design, and manuscript preparation. ZL developed the search strategies and conducted the literature and study selection. HG and MZ contributed to the development of the selection criteria and the risk of bias assessment strategy. XG and DW extracted data from the included studies, assessed the risk of bias, and summarized the evidence. MZ and ZL read, provided feedback on, and approved the final manuscript. ZZ contributed to review and funding acquisition. All authors contributed to the article and approved the submitted version.

Funding

This research was funded by the Cuiying Scientific and Technological Innovation Program of Lanzhou University Second Hospital: CYXZ2020-31.
High frequency of pathogenic ACAN variants including an intragenic deletion in selected individuals with short stature

Context Defining the underlying etiology of idiopathic short stature (ISS) improves the overall management of an individual. Objective To assess the frequency of pathogenic ACAN variants in selected individuals. Design The single-center cohort study was conducted at a tertiary university children's hospital. From 51 unrelated patients with ISS, the 16 probands aged between 3 and 18 years (12 females) with advanced bone age and/or an autosomal dominant inheritance pattern of short stature were selected for the study. Fifteen family members of ACAN-positive probands were included. Exome sequencing was performed in all probands, and additional copy number variation (CNV) detection was applied in selected probands with a distinct ACAN-associated phenotype. Results Systematic phenotyping of the study cohort yielded 37.5% (6/16) ACAN-positive probands, all with novel pathogenic variants, including a 6.082 kb intragenic deletion detected by array comparative genomic hybridization (array CGH) and exome data analysis. All variants co-segregated with the short stature phenotype, except in one family member with the intragenic deletion who had an unexpected growth pattern within the normal range (−0.5 SDS). One patient presented with otosclerosis, a sign not previously associated with aggrecanopathy. Conclusions Pathogenic ACAN variants were a common cause of familial ISS. The selection criteria used in our study are suggested for a personalized approach to genetic testing of the ACAN gene in clinical practice. Our results expand the number of known pathogenic ACAN variants, including the first intragenic deletion, and suggest that CNV evaluation in patients with typical clinical features of aggrecanopathy is reasonable. Intra-familial phenotypic variability in growth patterns should be considered.

Introduction

Short stature is a common pediatric disorder affecting 3% of the population (1) and represents one of the most frequent referrals to pediatric endocrinologists. Despite standard clinical and laboratory evaluation, the etiology of short stature is not determined in 50-90% of cases (i.e. idiopathic short stature, ISS) (2). As height is one of the most heritable human traits (3), different genetic causes of growth failure have been investigated; among them, the gene encoding aggrecan (ACAN, OMIM *155760) has been associated with growth failure (1,5). Aggrecan, a large chondroitin sulfate proteoglycan, is a major structural component of the extracellular matrix of cartilage, including growth plate, articular, and intervertebral disc cartilage. The core protein comprises three globular domains (G1, G2, and G3), an interglobular domain (IGD), and a centrally located glycosaminoglycan attachment region (GAG) (6) (Fig. 1). The G1 domain forms interactions with hyaluronan, whereas the G3 domain binds to different extracellular proteoglycans (i.e. tenascin and fibulin) via its C-type lectin repeat (CLD). The GAG region serves as a chondroitin and keratan sulfate attachment site, creating a highly negatively charged molecule that enables hydration of the cartilage tissue. Consequently, it allows the cartilage to withstand the high mechanical load in the skeletal joint (7). The autosomal dominant inheritance pattern and the presence of advanced bone age (BA) have been consistent features of aggrecanopathy, serving as diagnostic indicators.
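For context on the headline figure, the 37.5% (6/16) diagnostic yield carries a wide confidence interval at this sample size, as a quick Wilson-interval calculation shows; the sketch below is illustrative and not part of the original analysis.

```python
import math

def wilson_ci(k, n, z=1.96):
    """Wilson score interval for a binomial proportion k/n."""
    p = k / n
    denom = 1.0 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return centre - half, centre + half

lo, hi = wilson_ci(6, 16)
print(f"yield = {6/16:.1%}, approximate 95% CI {lo:.1%}-{hi:.1%}")
```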
Recently, heterozygous ACAN mutations were reported in approximately 40 families worldwide (5,10,12,13,14,15,16), including a few individuals with a decelerated BA (10,16). In light of the recently reported clinical ACAN-associated features, we aimed to assess the yield of pathogenic ACAN variants in a study cohort selected not only for idiopathic short stature but also for additional criteria. In our cohort study, the first-tier genetic analysis was NGS single nucleotide variant (SNV) analysis, whereas in patients with a typical ACAN presentation additional copy number detection was performed.

Participants

From 51 unrelated children and adolescents (31 females; mean age of 11 years, range 4-20 years) with ISS (height below −2 s.d. score (SDS)), the 16 probands meeting the selection criteria were enrolled. All selected patients agreed to participate in the study. All participants or their legal guardians provided informed consent for genetic analysis. The protocol was approved by the Slovene Medical Ethics Committee (0120-36/2019/4).

Methods

Clinical data of the probands were obtained from their electronic medical records, whereas clinical data of family members were partially available from medical history and partially from their medical documentation. Arginine and L-Dopa GH stimulation tests were performed according to previously published test procedures (18). Serum GH levels were determined by immunoassay using Immulite 2000 (Siemens). Bone age was evaluated with the Greulich and Pyle (GP) atlas, 2nd edition, or determined with the BoneXpert program (19). Z-scores for height were calculated using the LMS method and the British 1990 reference growth data (17). Whole-blood EDTA samples were collected for isolation of genomic DNA according to established laboratory protocols with the FlexiGene DNA isolation kit (Qiagen) (20). In all probands, next-generation sequencing (TruSight One or whole-exome sequencing (WES)) was performed. The NGS libraries for clinical exome sequencing were prepared using the TruSight One sequencing panel (Illumina, San Diego, CA, USA) according to the manufacturer's instructions and sequenced on the MiSeq desktop sequencer with the MiSeq Reagent kit v3 (Illumina). WES library preparation and sequencing were outsourced to Novogene Co. Ltd. DNA sequencing libraries were prepared using the Agilent SureSelect Human All Exon V6 kit (Agilent Technologies) following the manufacturer's recommendations. DNA samples were fragmented to generate 180-280 bp fragments. Fragment overhangs were converted into blunt ends, and 3′-end adenylation enabled ligation of adapter oligonucleotides in the next step. DNA fragments with ligated adapter molecules on both ends were amplified with index sequences in a PCR reaction. Exome regions were enriched with WES probes, and the enriched target regions were amplified by PCR. After purification and quantification of the WES libraries, the libraries were sequenced on an Illumina NovaSeq 6000 (Illumina). The collected data were analyzed with the bcbio-nextgen toolkit (https://github.com/bcbio/bcbio-nextgen) using BWA-MEM (21) to align reads to the human reference genome (GRCh37) and the GATK HaplotypeCaller (22), Freebayes (23), Strelka2 (24), and VarDict (25) variant callers. A final dataset of variants was assembled from variants detected by at least two different variant callers. Standard hard filtering parameters and variant quality score recalibration were applied according to the GATK Best Practices recommendations (26,27).
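The LMS conversion mentioned above follows Cole's standard formula; the minimal sketch below shows the computation, using invented L, M, and S values rather than the actual British 1990 reference entries.

```python
import math

def lms_zscore(x, L, M, S):
    """Cole's LMS method: z = ((x/M)**L - 1) / (L*S) for L != 0,
    and z = ln(x/M) / S in the limit L -> 0."""
    if abs(L) < 1e-12:
        return math.log(x / M) / S
    return ((x / M) ** L - 1.0) / (L * S)

# Hypothetical reference values (NOT the real British 1990 entries):
# L = 1.0, M = 118.0 cm, S = 0.045 for a given age and sex.
print(f"height SDS = {lms_zscore(105.0, 1.0, 118.0, 0.045):.2f}")  # about -2.45
```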
FastQC was used for QC metrics and MultiQC for reporting. Copy number variations in the region of interest (ROI) were inferred with CNVkit, a Python library and command-line software toolkit (28). With the CNVkit algorithm, segmentation analysis and subsequent targeted analysis using the moving average of the calculated copy ratio signals (smoothed trendline) within the ACAN gene were applied (29). Identified genetic variants with coverage >10× were annotated and filtered with the VarAFT software (30). The minor allele frequency threshold for known variants was set at 1%, and all variants exceeding this value were excluded from further analysis. Candidate variants were subsequently confirmed with targeted Sanger sequencing, as was the family segregation analysis. In addition, in probands in whom exome sequencing SNV analysis did not reveal a causal variant in ACAN and who exhibited a distinct ACAN-associated phenotype with both selected inclusion criteria, that is, accelerated BA and a dominant pattern of inheritance (probands no. P1, P3, P4, and P8), additional CNV detection was performed using array CGH and the NGS CNVkit detection algorithm. Array CGH analysis was performed using a commercial oligonucleotide array (Agilent 180K Baylor Oligo, Agilent Technologies) and a sex-matched human reference DNA sample (Agilent Technologies). Data were analyzed with the Cytogenomics 3.0 software (Agilent Technologies). To report the exact nucleotide-resolution coordinates of the detected CNV, we additionally performed long-range PCR (LR-PCR) of the selected region (exons 2-9). The LR-PCR primer set was designed against the human genomic ACAN sequence (GRCh37/hg19) using Primer3web version 4.1.0 (31), targeting the region indicated as deleted by array CGH and CNVkit. The selected ACAN region (a 12,729 bp wild-type sequence including exons 2-9) was amplified using the forward primer TTGACCTCACCATGCCTTCA and the reverse primer TTCAGTAGGAGAGCAGGCAC with LongAmp Taq 2X Master Mix (thermocycling conditions: initial denaturation 30 s at 94°C; 30 cycles of 20 s at 94°C, 20 s at 60°C, and 11 min at 65°C; final extension 10 min at 65°C). PCR products were purified using AMPure XP Beads (Beckman Coulter Inc., Brea, CA, USA), and NGS sequencing libraries were prepared using the Nextera DNA Flex Library Prep Kit (Illumina) according to the manufacturer's protocol and sequenced on a MiSeq sequencer (Illumina). The LR-PCR NGS sequencing data were aligned using the BWA-MEM aligner, and the ACAN deletion was characterized using the Integrative Genomics Viewer (IGV) visualization tool.

Results

The prevalence of pathogenic ACAN variants in our selected study cohort was 37.5% (6/16). Six novel ACAN mutations (two nonsense, two frameshifts leading to a premature stop codon, one missense, and one intragenic multi-exon deletion) were identified in six different pedigrees (P1, P6, P10, P11, P15, and P16) (Fig. 1 and Table 1). None of the variants had been previously reported in the ClinVar, HGMD, and LOVD databases or in the large population databases (gnomAD and ExAC). All variants were predicted to be damaging by different prediction algorithms (CADD, SIFT, PolyPhen, and Mutation Taster). The only missense variant (c.7069A>T, p.Ser2357Cys), located in the G3 functional domain within the CLD (Fig. 1), gave the following in silico results: CADD Phred 28, SIFT damaging, Mutation Taster disease-causing, EIGEN pathogenic, PROVEAN damaging, and a GERP conservation score of 5.59.
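The moving-average approach described above can be mimicked in a few lines: smooth the per-target log2 copy ratios and look for a sustained drop toward the single-copy level. The values below are invented for illustration and do not come from the study data.

```python
import numpy as np

# Hypothetical per-target log2 copy ratios across consecutive ACAN targets;
# a heterozygous deletion shows up as a run of values near -1.
log2_ratio = np.array([0.05, -0.02, -0.90, -1.10, -0.95, -1.05,
                       0.02, -0.04, 0.06, -0.01])

def moving_average(x, w=3):
    """Smoothed trendline of the copy-ratio signal (simple boxcar)."""
    return np.convolve(x, np.ones(w) / w, mode="same")

trend = moving_average(log2_ratio)
# Flag targets whose smoothed ratio drops below -0.5, roughly halfway
# to the ideal heterozygous-deletion level of -1.
candidates = np.where(trend < -0.5)[0]
print("smoothed trend:", np.round(trend, 2))
print("targets consistent with a heterozygous deletion:", candidates)
```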
The missense variant was identified in proband no. 15 (P15) and segregated with short stature in the family. More precisely, it was also confirmed in her mother (P15M), who presented with short stature (−3 SDS), but not in the proband's brother, who had a normal height (−0.62 SDS at age 5 years). The maternal grandfather was also short, but segregation analysis was not feasible because he died before the study began. The total number of ACAN-positive individuals was 19. In the probands who exhibited the ACAN-associated phenotype (P1, P3, P4, and P8) and had negative NGS results, additional CNV detection focused on the ACAN gene was performed using array CGH and the NGS CNVkit algorithm (28). In P1, segmentation analysis by the CNVkit algorithm did not call a specific copy number within the ACAN gene, but targeted analysis using the moving average of the calculated copy ratio signals within the ACAN gene identified a visible drop in the copy ratio signal (Fig. 2A). The horizontal coverage of the TruSight One panel was 170×. Array CGH revealed a heterozygous 5.3 ± 1.5 kb deletion encompassing exons 3-5 of ACAN (Fig. 2B). Additional LR-PCR of the selected region (encompassing exons 2-9 of ACAN) and subsequent NGS analysis determined the exact nucleotide positions of the deletion (NG_012794.1:g.39409_45491del), which was 6.082 kb in size and encompassed exons 3-6 (Fig. 2C). No similar deletions, or large deletions elsewhere within the ACAN gene, were described in either the genomic databases (ClinVar, Decipher, ISCA, LOVD, and HGMD) or the large population database of healthy individuals (Database of Genomic Variants). In P3, P4, and P8, additional CNV detection analysis did not reveal any CNVs.

Phenotypic characteristics of patients with heterozygous ACAN mutations

In total, 19 ACAN-positive individuals from six non-related families were identified (Fig. 3A). Their ages ranged from 3 to 73 years (Table 1). At birth, the affected individuals tended to have a lower birth weight SDS and birth length SDS (average birth weight (n = 11): −0.8 SDS (range: −3 to 0.9); average birth length (n = 11): −1.7 SDS (range: −2.6 to −0.3); Table 1). Height SDS ranged from low-normal to significant short stature (average height (n = 19): −2.7 SDS (range: −4.1 to −0.5); Table 1). The growth charts of ACAN-positive children and adolescents are presented in Supplementary Fig. 1 (see section on supplementary materials given at the end of this article). The reported variants co-segregated with the short stature phenotype in the probands' respective families, except in one family member (P1S), who had a normal height (−0.5 SDS, 30th percentile) at the age of 5 years without being on growth hormone therapy (Fig. 3B). Five probands (P1, P6, P10, P11, and P16) and one family member (P11S) were treated with GH (mean dose: 34.8 µg/kg/day, for an average of 23 months (range: 3-51 months)). The peak GH values, height gain, and age at and duration of GH therapy are shown in Table 2 and Supplementary Fig. 1. Four out of seven children had an advanced BA, whereas in the other three BA was delayed (Table 1 and Supplementary Fig. 1). One adult patient (P1F) presented with otosclerosis with mild non-progressive low-frequency hearing loss of 17% since his childhood years, a sign not previously associated with aggrecanopathy.
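The breakpoints reported above translate directly into the deleted length and the exons they remove; the sketch below illustrates the interval arithmetic with placeholder exon coordinates, since the real NG_012794.1 annotation is not reproduced here (the inclusive base count differs by one from the 6.082 kb figure, depending on the coordinate convention).

```python
# Breakpoints reported above: NG_012794.1:g.39409_45491del
del_start, del_end = 39409, 45491
print("deleted span:", del_end - del_start + 1, "bp (inclusive count)")

# HYPOTHETICAL exon coordinates, used only to illustrate the overlap test;
# they are not the real NG_012794.1 annotation.
exons = {2: (38000, 38200), 3: (39500, 39650), 4: (41000, 41120),
         5: (42500, 42640), 6: (45000, 45200), 7: (46100, 46240)}

deleted = [e for e, (s, t) in exons.items()
           if s <= del_end and t >= del_start]   # interval-overlap test
print("exons overlapping the deletion:", deleted)  # -> [3, 4, 5, 6]
```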
Although in the majority of patients articular problems started in early adulthood or late adolescence, proband no. 10 (P10) presented with frequent patellar luxations from the age of 10 years, and subject P1F had degenerative arthritis of the spine from the age of 11 years. Not all study individuals had obvious orthopedic problems. In some families, mild facial dysmorphism was reported (mild midface hypoplasia, flat nasal bridge, and frontal bossing). Significant brachydactyly was seen in two individuals. The phenotypic characteristics of the ACAN-positive group are summarized in Tables 1 and 3.

Discussion

Recently, heterozygous ACAN mutations were identified as a cause of ISS with a prevalence of 1.4-6% (1,5,33). As the autosomal dominant inheritance pattern and the presence of advanced BA have been reported as possible diagnostic indicators of aggrecanopathy (5,12,14,15,16), our study cohort was selected according to these inclusion criteria from a larger cohort of 51 patients with ISS. The total yield of pathogenic ACAN variants in our study group was 37.5% (6/16). With this kind of selection, we may have missed probands with de novo mutations, which are assumed to be rare. Our data suggest that pathogenic variants in ACAN are a common cause of familial short stature. All our ACAN-positive families, except families no. 1 and no. 15, had a nonsense or frameshift mutation introducing a premature stop codon (PTC), leading to a loss of the CLD domain, which is needed to link the aggrecan molecule to other components of the extracellular matrix. Thus, these variants are likely to significantly perturb protein function. Nevertheless, to determine whether these particular mutations result in nonsense-mediated decay (NMD) of mRNA or allow translation of a truncated protein as the main mechanism, additional functional studies are needed. To date, only a few in vivo animal models of aggrecan mutations leading to a PTC have been studied, and each of them showed a different disease mechanism (mouse cmd/cmd-bc, chick nanomelia, Dexter cattle) (34,35,36,37,38). In family no. 1, the deletion of exons 3-6 caused the lack of the whole G1 domain, which is crucial for interactions with hyaluronan and link proteins; thus, the function of the protein is likely significantly perturbed. The proposed effect of aggrecan missense mutations located in the G3 domain is the secretion of a mutant aggrecan that disrupts cartilage structure and function (a dominant-negative effect) (8,11,39). We herein report the first intragenic deletion in the ACAN gene. In one out of four patients with a typical clinical presentation of aggrecanopathy (i.e. advanced bone age and familial short stature with/without early-onset articular findings) and a negative exome sequencing SNV analysis result, a heterozygous deletion in ACAN was identified by array CGH and also detected by the CNVkit detection algorithm using the moving average approach, but not by the default segmentation analysis. The exact coordinates of the revealed deletion were determined with additional LR-PCR and subsequent NGS analysis. To date, intragenic pathogenic ACAN deletions have not yet been reported. Ristolainen et al. incidentally revealed a 57 bp in-frame deletion within exon 12 of the ACAN gene in one patient with lymphoma, without functional characterization or accompanying clinical data (41). The principal clinical feature of aggrecanopathy is short stature.
Interestingly, our subject P1S, who carries the heterozygous deletion, did not present with short stature at the time of genetic evaluation. Moreover, her growth during early childhood accelerated without any therapy, from the 0th percentile at birth to the 30th percentile at the age of three. Her growth pattern differed from that of her younger sister (P1) and father (P1F), who carry the same ACAN deletion (Fig. 3B), suggesting that additional genetic and environmental factors affected her growth. To date, height within the low-normal range has been reported only in patients with heterozygous missense ACAN variants, but not in untreated patients with null variants (14). Though the growth pattern of P1S was unusual for aggrecanopathy at her prepubertal age, her BA was typically advanced, which could predict a shorter final height. Concerning other clinical features of aggrecanopathy, not all our probands presented with advanced BA. As described in previous reports, advanced BA is an indicator of the presence of pathogenic ACAN variants, but it is not by itself a reliable selection criterion (10,16). Although joint problems are presumed to commonly start in late adolescence or even later (14), our P10 presented with frequent patellar luxations already at the age of 10 years. In contrast, the affected mother of P10 (P10M) had not shown any skeletal or articular features by the time of publication (age of 34 years), indicating wide phenotypic variability within the same family. Recently, it was proposed that heterozygous null variants in the upstream half of the gene have a primary effect on growth plate cartilage, whereas those in the downstream half of the gene affect both articular and growth plate cartilage (5,12). As joint disease also occurred in our patients carrying pathogenic null variants in the upstream half of the gene (P1F, P6, P6M, P6U, and P6GM), the genotype-phenotype correlation appears more complicated, as has already been described in some other studies (1,14). One adult patient (P1F) also presented with otosclerosis, with consequent mild conductive hearing loss since his childhood years. The hereditary nature of otosclerosis has been acknowledged for over a century, but without a precise genetic basis being ascertained (42). To date, otosclerosis has not been reported in patients with pathogenic ACAN variants. Nevertheless, several reports and family linkage analyses have identified an association between the 15q26.1-qter locus and otosclerosis (OTSC1), with ACAN being one of the candidate genes (43,44). Experiments on mice have also shown that Acan is expressed in mouse auditory tissue (44). Therefore, otosclerosis may be a part of aggrecanopathy in patients with pathogenic variants. However, additional analysis of larger cohorts is required to determine the frequency of otosclerosis and hearing loss in patients with aggrecanopathy. Of the four ACAN-positive study individuals treated with GH for more than 1 year, two (P1 and P6) had a significant improvement in height SDS. Subjects P10, P11, and P11S started GH therapy during puberty, after 10 years of age, which was relatively late; furthermore, they were receiving combination therapy with a GnRH analogue and had not yet reached final height by the time of publication. On the other hand, probands P1 and P16, who started GH therapy before 5 years of age, had too short a follow-up at the time of publication to assess efficacy. In the largest clinical study of ACAN patients, Gkourgianni et al.
reported that some patients tend to lose height continuously with age, whereas others can maintain their height percentile during childhood, with obvious growth cessation subsequently seen in puberty, not before (14). Therefore, the response to growth hormone therapy may be difficult to judge in individual ACAN patients because of the different growth patterns during the childhood, prepubertal, and pubertal periods. In conclusion, our findings corroborate the postulation that pathogenic ACAN variants are a common cause of familial short stature. The high yield of pathogenic variants identified in our study cohort was likely related to the use of specific selection criteria, which could be suggested for a personalized approach to genetic testing of the ACAN gene in clinical practice. Our results expand the number of known pathogenic ACAN variants; furthermore, we reported the first intragenic deletion, detected by array CGH as well as by analysis of the NGS data at the exome level. Our clinical evaluation indicated that heterozygous pathogenic variants in ACAN most often present as evident familial short stature with or without advanced BA. However, we described a pediatric patient with a growth pattern atypical for aggrecanopathy, indicating a complex genotype-phenotype correlation and quite prominent phenotypic variability among patients with identical ACAN genetic variants.

Supplementary materials

This is linked to the online version of the paper at https://doi.org/10.1530/EJE-19-0771.

Declaration of interest

The authors declare that there is no conflict of interest that could be perceived as prejudicing the impartiality of this study.

Funding

The study was supported by financial support from the Slovenian Research Agency (research core funding No. P3-0343) and a University Medical Center Ljubljana Research Project (research core funding No. 20170122).

Author contribution statement

S L, A S M, and B T contributed to the study concept and design. K J, T T, D M, and S L performed the molecular genetic analysis and the data analysis with interpretation. L L performed the array CGH analysis and interpretation. S L, K P, B S, D K, and A S M selected the patients for the study cohort and collected their clinical data. S L drafted the paper, and A S M, along with all authors, contributed to finalizing the manuscript. A S M is the guarantor of this work and, as such, had full access to all the data in the study and takes responsibility for the integrity of the data and the accuracy of the data analysis.
Seizing the Future: Predicting Epilepsy After TBI

ECoG Spiking Activity and Signal Dimension Are Early Predictive Measures of Epileptogenesis in a Translational Mouse Model of Traumatic Brain Injury

Di Sapia R, Rizzi M, Moro F, Lisi I, Caccamo A, Ravizza T, Vezzani A, Zanier ER. Neurobiol Dis. 2023;185:106251. PMID: 37536383. doi:10.1016/j.nbd.2023.106251

The latency between traumatic brain injury (TBI) and the onset of epilepsy (PTE) represents an opportunity for counteracting epileptogenesis. Antiepileptogenesis trials are hampered by the lack of sensitive biomarkers that would allow enrichment of the patient population at risk for PTE. We aimed to assess whether specific ECoG signals predict PTE in a clinically relevant mouse model with ∼60% epilepsy incidence. TBI was provoked in adult CD1 male mice by controlled cortical impact on the left parieto-temporal cortex; the mice were then implanted with two perilesional cortical screw electrodes and two similar electrodes in the hemisphere contralateral to the lesion site. Acute seizures and spikes/sharp waves were ECoG-recorded during 1 week post-TBI. These early ECoG events were analyzed according to PTE incidence, as assessed by measuring spontaneous recurrent seizures (SRS) at 5 months post-TBI. We found that the incidence, number, and duration of acute seizures during 3 days post-TBI were similar in PTE mice and mice not developing epilepsy (No SRS mice). Control mice with cortical electrodes (naïve, n = 5) or with electrodes and craniotomy (sham, n = 5) exhibited acute seizures but did not develop epilepsy. The daily number of spikes/sharp waves at the perilesional electrodes was increased similarly in PTE (n = 15) and No SRS (n = 8) mice vs controls (p < 0.05, n = 10) from day 2 post-injury. Differently, the daily number of spikes/sharp waves at both contralateral electrodes showed a progressive increase in PTE mice vs No SRS and control mice. In particular, spike number was higher in PTE vs No SRS mice (p < 0.05) at 6 and 7 days post-TBI, and this measure predicted epilepsy development with high accuracy (AUC = 0.77, p = 0.03; CI 0.5830-0.9670). The cut-off value was validated in an independent cohort of TBI mice (n = 12). The daily spike number at the contralateral electrodes showed a circadian distribution in PTE mice that was not observed in No SRS mice. Analysis of non-linear dynamics at each electrode site showed changes in dimensionality during 4 days post-TBI. This measure yielded the best discrimination between PTE and No SRS mice (p < 0.01) at the cortical electrodes contralateral to injury. The data show that epileptiform activity contralateral to the lesion site has the highest predictive value for PTE in this model, reinforcing the hypothesis that the hemisphere contralateral to the lesion core may drive epileptogenic networks after TBI.
Commentary

Traumatic brain injury (TBI) affects 69 million people worldwide yearly.1 One of the severe long-term health risks of TBI is post-traumatic epilepsy (PTE), defined as the development of epileptic seizures months to years after TBI, which is different from the acute seizures that commonly occur shortly after a TBI.2 People with PTE have shorter life expectancies than people without.3 The latent period between TBI and PTE (months to years) presents a promising opportunity for therapeutic intervention4 if we can identify patients at risk of PTE early enough. In a recent study, Rosella Di Sapia and colleagues5 identified cortical electrical signals that predict PTE after a severe TBI in mice and may one day translate to markers of PTE risk in humans.
The authors used a mouse model whereby a controlled impact to the cortex induces a severe TBI. After implanting two electrocorticogram (ECoG) electrodes (one anterior, one posterior) on each brain hemisphere immediately after TBI, they monitored the mice with 24/7 video and ECoG recordings for 1 week, and then again for 3 weeks, 1.5 months, and 5 months after injury. At the 5-month time point, 65% of the TBI mice displayed evidence of PTE, defined by at least one spontaneous seizure lasting more than 10 seconds. Post-traumatic epilepsy seizures were characterized by generalized motor convulsions and electrographic paroxysms in both hemispheres. The authors then analyzed video and ECoG recordings taken within the first week after TBI to look for patterns correlated with PTE severity, including the number of epileptiform spikes, signal dimension, and acute seizures in each ECoG channel.

Surprisingly, acute seizures after TBI were not predictive of PTE: their incidence, number, and duration were similar in mice that did and did not develop PTE. This finding is contrary to the current belief that acute seizures may be predictive of PTE. However, the acute seizures reported in the current study seem to arise in part from surgery, as they also occurred in naïve mice that received the electrode implants but not the TBI. These control mice did not develop PTE, confirming that TBI, not just surgery, induces PTE in this model. Nevertheless, a different surgical procedure (eg, with no electrode implants) will be needed to test rigorously the relationship between early seizures and eventual PTE.

The most remarkable finding was that cortical activity contralateral to the contusion site had better predictive value in this mouse model than ipsilateral cortical activity. Specifically, the number of epileptiform spikes recorded at the contralateral posterior electrode 7 days post-TBI positively correlated with both the number and the cumulative duration of PTE seizures. Similarly, ECoG signal dimensionality, a measure of nonlinear dynamics estimated by analysis of recurrence, in the contralateral hemisphere was highly predictive of PTE. The ECoG signal dimension of the contralateral anterior electrode provided the best discrimination between mice with or without PTE, with high sensitivity (92%), specificity (87%), and accuracy (90%). Since signal dimension excludes certain aspects of the ECoG, such as high-frequency oscillations,6 the list of contralateral predictors of PTE may yet grow to include additional markers. This limitation of the ECoG signal dimension measure may also hide potential markers in the ipsilateral region.
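To make the AUC and cut-off logic concrete, the sketch below computes an AUC from two groups of day-7 spike counts via the Mann-Whitney relation and picks the threshold that maximizes Youden's J. The counts are invented for illustration; they are not the study's data.

```python
import numpy as np

# Hypothetical day-7 spike counts at the contralateral electrode.
pte = np.array([310, 275, 420, 198, 350, 290, 380, 260])  # future PTE mice
no_srs = np.array([150, 220, 180, 120, 240, 90])          # No SRS mice

# AUC as P(spike_PTE > spike_NoSRS), counting ties as 1/2 (Mann-Whitney).
diff = pte[:, None] - no_srs[None, :]
auc = (np.sum(diff > 0) + 0.5 * np.sum(diff == 0)) / diff.size
print(f"AUC = {auc:.2f}")

# Cut-off maximizing Youden's J = sensitivity + specificity - 1.
thresholds = np.concatenate([pte, no_srs])
best = max(thresholds,
           key=lambda c: np.mean(pte >= c) + np.mean(no_srs < c) - 1.0)
print("cut-off:", best,
      "sensitivity %.2f, specificity %.2f"
      % (np.mean(pte >= best), np.mean(no_srs < best)))
```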
The drivers of injury-induced epilepsy, like PTE, are usually thought to be related to circuit hyperexcitability near the injury site. So, why were the ipsilateral ECoG signals less instructive? One possibility is that the ipsilateral region is severely damaged in this mouse model of severe TBI. A common caveat of rodent models of TBI is that they need a more severe lesion than humans to develop PTE, because rodents are markedly more resilient to injury than humans. The TBI induced in this study is relatively severe; however, histological analysis is absent, precluding precise quantification of the lost brain tissue on the injured side. If most of the hemisphere was dead, or if the lesion size varied between mice, this could partly explain why the ipsilateral ECoG signatures are not reliable predictive markers. In other words, while the study strongly suggests that the contralateral region harbors valid markers of future PTE, it does not eliminate the possibility that the ipsilateral region does as well.

Whether the findings reported in this mouse model will translate to humans remains to be seen. It will require careful monitoring of patients' cortical signals, ideally 24/7, during the acute post-TBI phase. It is, however, unclear whether the predictive features found in mice can be picked up on a scalp EEG, as opposed to a brain ECoG, which is an intracranial EEG, and during the deep sedation typically provided to patients with severe TBI in the intensive care unit. But if so, this finding could be transformative, as it would help stratify the patient population for clinical trials of anti-epileptogenesis therapies, which would otherwise be too costly and inconclusive because of the heterogeneous PTE outcomes. Whether the early predictors uncovered in the present study have any value in females remains unknown, since the study only included male mice. If the predictors are sex-dependent, sex will need to be included in patient stratification strategies.

What might these predictive signatures of PTE tell us about the origins of epileptogenesis? A tantalizing clue comes from the observation that, in TBI patients, dysfunctional thalamocortical connectivity is associated with decreased EEG complexity.7,8 The prominent and persistent decrease in ECoG signal dimensionality in mice developing PTE may reflect functional alterations of thalamic projections to the cerebral cortex. The thalamic output to the cortex is indeed critical for seizure activity in human and experimental epilepsy after acute cortical injury.9 Interestingly, the mice that eventually developed PTE showed fluctuations in daily circadian ECoG spikes in the first week post-TBI, with more spikes during the active (dark) phase than during the inactive (daytime) phase. The finding that circadian spike distribution is another PTE predictor also highlights the need for 24/7 monitoring and the importance of considering thalamocortical networks in this process, given that thalamocortical neuromodulation is critical in circadian rhythms.
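Signal dimensionality of the kind discussed here is often estimated from recurrences of a time-delay embedding. The sketch below uses a basic Grassberger-Procaccia correlation-dimension estimate as an illustrative stand-in for the paper's recurrence analysis; the embedding parameters and test signals are arbitrary choices, and the estimate is sensitive to them.

```python
import numpy as np

def corr_dimension(x, dim=5, tau=4, radii=np.logspace(-1.0, 0.3, 8)):
    """Toy Grassberger-Procaccia estimate: delay-embed the signal,
    count close pairs at several radii, and fit the slope of
    log C(r) versus log r."""
    n = len(x) - (dim - 1) * tau
    emb = np.stack([x[i * tau:i * tau + n] for i in range(dim)], axis=1)
    d = np.linalg.norm(emb[:, None, :] - emb[None, :, :], axis=-1)
    pairs = d[np.triu_indices(n, k=1)]
    c = np.array([np.mean(pairs < r) for r in radii])
    keep = c > 0
    slope = np.polyfit(np.log(radii[keep]), np.log(c[keep]), 1)[0]
    return slope

t = np.linspace(0, 40 * np.pi, 800)
sine = np.sin(t)                                       # low-dimensional
noise = np.random.default_rng(0).standard_normal(800)  # high-dimensional
print("D2(sine)  ~ %.2f" % corr_dimension(sine))       # roughly 1
print("D2(noise) ~ %.2f" % corr_dimension(noise))      # much larger
```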
In conclusion, the findings support the significance of casting a wide network10 when looking for predictors of PTE: predictors such as changes in spike frequencies or ECoG signal dimension suggest that epileptogenesis entails the early recruitment of large-scale circuits, presumably including interhemispheric and thalamocortical circuits, far from the initial lesion. Ultimately, we may be able to prevent PTE by targeting these large-scale circuits early, taking advantage of the large therapeutic window afforded by PTE latency. In the meantime, the PTE biomarkers found in this TBI animal model will be useful in the lab to stratify mice for the discovery and testing of antiepileptogenic therapies. It is also important to note that some aspects of the early changes in ECoG dynamics may be adaptive and important for recovery of the injured brain, warranting preclinical studies to decipher both the adaptive and maladaptive aspects of the early network changes after TBI, which will be crucial for developing antiepileptogenesis therapies.
Dispersive deformations of the Hamiltonian structure of Euler's equations

Euler's equations for a two-dimensional system can be written in Hamiltonian form, where the Poisson bracket is the Lie-Poisson bracket associated to the Lie algebra of divergence-free vector fields. We show how to derive the Poisson brackets of 2d hydrodynamics of ideal fluids as a reduction from the one associated to the full algebra of vector fields. Motivated by some recent results about the deformations of Lie-Poisson brackets of vector fields, we study the dispersive deformations of the Poisson brackets of Euler's equation and show that, up to the second order, they are trivial.

Introduction

The idea that the dynamics of fluids can be understood using the language of infinite dimensional Lie groups is due to Arnol'd and dates back to the 1960s [1]. In that seminal paper, Arnol'd proved the correspondence between the motion of an ideal fluid and the geodesics of the right-invariant metric corresponding to the kinetic energy of the fluid itself. The main idea is that the configuration space of the system is the Lie group of the volume-preserving diffeomorphisms of the domain occupied by the fluid; the diffeomorphisms must be volume-preserving because an ideal fluid is incompressible. In this paper, we will study the Euler's equations for a two-dimensional fluid. They can be put in Hamiltonian form, which essentially corresponds to Helmholtz's equation for the vorticity [13]. We will write

$$\omega_t = \{\omega, H\}, \tag{1}$$

where ω is the scalar vorticity of the fluid, H is the Hamiltonian function corresponding to the kinetic energy, and the Poisson bracket is the Lie-Poisson bracket associated to the algebra of the divergence-free vector fields. Such an algebra is exactly the Lie algebra of the group of the volume-preserving diffeomorphisms introduced by Arnol'd. We will show how to derive the bracket from the more general Lie-Poisson bracket of vector fields, introduced by Novikov [12] and corresponding to the Hamiltonian structure of the so-called EPDiff equation [9]. Motivated by our study of the dispersive deformations of the Poisson brackets of hydrodynamic type [5], we study the same kind of deformations for the Poisson structure of Euler's equation. In doing so, we rely on the language of multidimensional Poisson Vertex Algebras (mPVA), which constitute a very efficient framework for the needed computations and, in general, for the study of Hamiltonian PDEs [2].

The well-known Euler's equations for a 2-dimensional ideal fluid are

$$\mathbf{u}_t + (\mathbf{u}\cdot\nabla)\mathbf{u} = -\nabla p, \qquad \nabla\cdot\mathbf{u} = 0, \tag{2}$$

where $\mathbf{u} = (u, v)$ is the velocity field, p is the pressure, and $\nabla = \sum_{j=1}^{2} \mathbf{e}_j\,\partial/\partial x^j$. By Helmholtz's decomposition theorem for vector fields, the two-dimensional ideal fluid is characterized by its vorticity alone,

$$\omega = \partial_x v - \partial_y u. \tag{3}$$

Taking the curl of Euler's equation we derive the so-called Helmholtz equation [10]

$$\omega_t + u\,\partial_x \omega + v\,\partial_y \omega = 0, \tag{4}$$

which, as observed by Olver, can be written in Hamiltonian form (1). The Poisson bracket has the form (5), where we denote $\mathbf{r} = (x, y)$ and $\mathbf{r}' = (x', y')$. To derive Equation (4) from Equation (1), we recall that for two-dimensional fluids the velocity field can be uniquely determined by introducing the stream function ψ, the scalar analogue of the vector potential for a solenoidal field (or for the solenoidal component, according to Helmholtz's decomposition). By definition, we have

$$u = \partial_y \psi, \qquad v = -\partial_x \psi. \tag{6}$$

The Hamiltonian H can hence be written as

$$H = \frac{1}{2}\int |\mathbf{u}|^2 \,\mathrm{d}^2 r = \frac{1}{2}\int \omega\,\psi \,\mathrm{d}^2 r, \tag{7}$$

where we integrated by parts to get the final result.
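As a quick numerical sanity check of relations (3), (4), and (6) (not part of the original paper), the sketch below builds a sample stream function on a periodic grid, recovers the velocity field spectrally, and verifies that the field is divergence-free with vorticity ω = −∇²ψ.

```python
import numpy as np

n = 128
x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
X, Y = np.meshgrid(x, x, indexing="ij")
psi = np.sin(X) * np.cos(Y)          # sample stream function (arbitrary)

k = 2.0 * np.pi * np.fft.fftfreq(n, d=x[1] - x[0])  # integer wavenumbers
def d(f, axis):
    """Spectral derivative along one axis of the periodic grid."""
    K = k.reshape(-1, 1) if axis == 0 else k.reshape(1, -1)
    return np.real(np.fft.ifft(1j * K * np.fft.fft(f, axis=axis), axis=axis))

u, v = d(psi, 1), -d(psi, 0)         # Eq. (6): u = psi_y, v = -psi_x
div = d(u, 0) + d(v, 1)              # incompressibility: should vanish
omega = d(v, 0) - d(u, 1)            # Eq. (3): scalar vorticity
laplacian = d(d(psi, 0), 0) + d(d(psi, 1), 1)
print("max |div u|          :", np.abs(div).max())
print("max |omega + lap psi|:", np.abs(omega + laplacian).max())
```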
By inserting into the expression for H the solution of the Poisson equation $\nabla^2 \psi = -\omega$, whose validity can easily be checked, we can write H in terms of ω alone and hence, finally, find the variational derivative of H with respect to ω:

$$\frac{\delta H}{\delta \omega(\mathbf{s})} = \psi(\mathbf{s}).$$

The computation of the Poisson bracket on the RHS of (1) is done according to the usual general formula, which for F = H and G = ω(r′) gives $-u\,\omega_x - v\,\omega_y$.

The Lie-Poisson bracket

Let $\mathfrak{g}$ be a Lie algebra, and consider its dual space $\mathfrak{g}^*$. We can define a linear Poisson bracket between functions $F, G \in C^\infty(\mathfrak{g}^*)$, which is called the Lie-Poisson bracket. The De Rham differential of functions on $\mathfrak{g}^*$, which we denote in general $\mathrm{d}F(\alpha)$ ($\alpha \in \mathfrak{g}^*$), is a linear map $T_\alpha \mathfrak{g}^* \to \mathbb{R}$; since the tangent bundle of a vector space coincides with the underlying space, we can regard the differential as a linear map $\mathfrak{g}^* \to \mathbb{R}$, hence as an element of $\mathfrak{g}^{**}$, hence as an element of $\mathfrak{g}$. We have

$$\{F, G\}(\alpha) = \langle \alpha, [\mathrm{d}F, \mathrm{d}G] \rangle, \tag{11}$$

where $[\mathrm{d}F, \mathrm{d}G]$ is the Lie bracket of $\mathfrak{g}$ and $\langle \cdot, \cdot \rangle$ is the pairing between the algebra and its dual space. To get an expression in coordinates of the Lie-Poisson bracket (11), we can choose a basis $\{x^i\}_{i=1}^{\dim \mathfrak{g}}$ of $\mathfrak{g}$. In this coordinate system, we define the structure constants as $c^{ij}_k x^k := [x^i, x^j]$. We can identify the basis of $\mathfrak{g}$ with a coordinate system on its dual space $\mathfrak{g}^*$, and hence we find the well-known formula (12). The concrete form of the bracket depends on the Lie algebra we are considering, and in particular on whether it is finite or infinite dimensional.

Let us consider the Lie algebra of vector fields on a two-dimensional domain D. To avoid technical difficulties, we assume that $D \equiv \mathbb{R}^2$ and that the fields are rapidly decaying functions of r. In some coordinates $\{x^i\}_{i=1}^2$, a vector field on D can be written as $X = v^i(\mathbf{r})\,\partial/\partial x^i$; the pairing with an element of the dual space then behaves as a scalar under a change of variables. Here, the $v^i(\mathbf{r})$ are the components of a vector field, which means that the $p_i(\mathbf{r})$ are densities of 1-forms. The Lie-Poisson bracket is linear in the coordinates and is defined by the structure functions as in (14); the Poisson bracket is then defined by the corresponding density. This construction is due to Novikov [12]. The Poisson bracket (14) is the Hamiltonian structure of the EPDiff equation, namely the Euler-Poincaré equation associated with the diffeomorphism group of an n-dimensional manifold. It has important applications in fluid dynamics (it generalizes the Camassa-Holm equation) and even in computer vision and imaging [9]. In components, the EPDiff equation takes the form (15), where the $u^i$ are the components of the velocity field and we choose the $m_i$ as the conjugate momenta with respect to the Hamiltonian $H = \frac{1}{2}\int \eta^{ij} m_i m_j \,\mathrm{d}^2 r$; here $\eta^{ij}$ denotes the inverse of the metric on D. It is straightforward to check that the resulting Hamiltonian flow is equivalent to (15), using the Poisson structure (14) after relabelling the $p_i$ as $m_i$.

Let us perform the reduction of the Poisson structure from $X^*$ to $X'^*$, where by $X'$ we denote the Lie algebra of the divergence-free vector fields. We will reduce the structure functions of X and use them to define the reduced Lie-Poisson bracket. In dimension 2, as already discussed, the components of such vector fields can be written in terms of the scalar stream function: a more compact version of (6) is $X^i = \epsilon^{ij}\partial_j \psi$, with ε the two-dimensional Levi-Civita symbol.
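The reduction step that follows hinges on the fact that the commutator of two divergence-free fields is again divergence-free, with a stream function built from φ and ψ. The sympy sketch below verifies this symbolically; the sign convention for the stream function χ is the one consistent with (6).

```python
import sympy as sp

x, y = sp.symbols("x y")
phi = sp.Function("phi")(x, y)   # stream function of X
psi = sp.Function("psi")(x, y)   # stream function of Y

def field(f):
    """Divergence-free field from a stream function: (f_y, -f_x)."""
    return sp.Matrix([sp.diff(f, y), -sp.diff(f, x)])

def bracket(A, B):
    """Commutator of vector fields: [A,B]^k = A^j d_j B^k - B^j d_j A^k."""
    c = (x, y)
    return sp.Matrix([sum(A[j] * sp.diff(B[k], c[j])
                          - B[j] * sp.diff(A[k], c[j]) for j in range(2))
                      for k in range(2)])

C = bracket(field(phi), field(psi))
div = sp.simplify(sp.diff(C[0], x) + sp.diff(C[1], y))
chi = sp.diff(phi, y) * sp.diff(psi, x) - sp.diff(phi, x) * sp.diff(psi, y)
residual = (C - field(chi)).applyfunc(sp.simplify)
print("div [X,Y]      =", div)       # 0: the commutator is divergence-free
print("C - field(chi) =", residual)  # zero matrix: chi is a stream function
```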
In the infinite-dimensional setting, the commutator of two vector fields has the standard form $[X, Y]^k = X^j \partial_j Y^k - Y^j \partial_j X^k$. Denoting by φ the stream function of the vector field X and by ψ the stream function of Y, we rewrite the corresponding integral accordingly. The commutator of two divergence-free vector fields is a divergence-free vector field, hence we must have $[X, Y]^k = \epsilon^{kn}\partial_n \chi$ for a new stream function χ. After some manipulation we find the form of the structure function C. To define the Lie-Poisson bracket, we have to find the conjugate momentum of the stream function. From the Hamiltonian (7) we observe that the vorticity field ω plays that exact role. We can conclude that the Lie-Poisson bracket on $X'^*$ is as in (5).

Poisson Vertex Algebras and dispersive deformations of scalar Poisson brackets

The language of Poisson Vertex Algebras is regarded as a very effective framework for the study of evolutionary Hamiltonian PDEs [2]. In particular, it provides a fully algebraic formalism in which one can investigate Hamiltonian structures, their symmetries and integrability, and so on. In this Section we will first briefly introduce the notion of multidimensional Poisson Vertex Algebra, of which the one associated to (5) is an example; then we will discuss the dispersive deformations of the bracket by direct computation.

Multidimensional Poisson Vertex Algebras

Let us consider the algebra A of the differential polynomials generated by ω and its derivatives. We will denote $\partial_{x^1}\omega = \omega_1$, $\partial_{x^2}\omega = \omega_2$, and similarly for higher order derivatives. Moreover, we assign a degree to the differential polynomials by counting the order of the jet variables; for $f, g \in A$, we have $\deg(fg) = \deg f + \deg g$. A multidimensional Poisson Vertex Algebra is a differential algebra A endowed with D commuting derivations and with a bilinear operation $\{\cdot_\lambda \cdot\} : A \otimes A \to \mathbb{R}[\lambda_1, \ldots, \lambda_D] \otimes A$, called the λ-bracket, satisfying a standard set of properties. We use the multi-index notation $\lambda^I = \lambda_1^{i_1}\lambda_2^{i_2}\cdots\lambda_D^{i_D}$ for $I = (i_1, i_2, \ldots, i_D)$. The grading on A is extended to $\mathbb{R}[\lambda] \otimes A$ by imposing $\deg \lambda^I = |I|$. The set of axioms of a PVA translates into a practical formula that gives the bracket between two elements of A in terms of the bracket between the generators $u^i$. The relation between the notion of Poisson Vertex Algebra and the formal variational calculus is given by an isomorphism between Poisson Vertex Algebras and Poisson brackets on the space of local functionals [5]: given the λ-bracket of a PVA we obtain a Poisson bracket on local functionals, and, given a Poisson bracket on the space of the local densities A, we can define a λ-bracket, as in (22). Moreover, an evolutionary Hamiltonian PDE of the form $u_t = \{\textstyle\int h, u\}$ is mapped to $u_t = \{h_\lambda u\}|_{\lambda=0}$.

In our case, D = 2 and the differential algebra A is generated by the vorticity. The Poisson bracket (5) is translated by (22) to the λ-bracket

$$\{\omega_\lambda \omega\} = \omega_1 \lambda_2 - \omega_2 \lambda_1. \tag{23}$$

One of the main advantages of the formalism of PVAs with respect to the standard technique is that the straightforward formula (20) can easily be implemented to perform the explicit computations. In the next paragraph we will study the deformations of the bracket (23) up to the second order.

Deformations of the bracket

Let us introduce the Miura-type transformations on the space A,

$$\omega \mapsto \sum_{k \geq 0} \epsilon^k F_k, \tag{24}$$

where $F_k$ is a homogeneous differential polynomial of degree k and $\partial F_0(\omega)/\partial \omega \neq 0$. The transformations (24) form a group, which is called the Miura group [7]. It can be regarded as the group of local diffeomorphisms on the space A. The transformation of the degree-0 coordinate ω is then lifted to the higher degree jet variables $\omega_I$.
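The degree bookkeeping used throughout this computation is easy to mechanize. The toy function below (not from the paper) encodes a λ-bracket monomial by its λ exponents and jet orders, and it reproduces the counts quoted in the text: the undeformed bracket (23) is homogeneous of degree 2, and every term of the first-order deformation has degree 3.

```python
def deg(lambda_orders, jet_orders):
    """Degree of a lambda-bracket monomial: |I| for lambda^I plus the
    total order of the jet variables omega_I it contains."""
    return sum(lambda_orders) + sum(jet_orders)

# Undeformed bracket (23): omega_1*lambda_2 - omega_2*lambda_1 -> degree 2.
print(deg([1], [1]))        # 2
# Sample first-order deformation terms, all of degree 3:
print(deg([1, 1, 1], []))   # lambda_a lambda_b lambda_c
print(deg([1, 1], [1]))     # lambda_a lambda_b  d_c(omega)
print(deg([1], [2]))        # lambda_a  d_b d_c(omega)
print(deg([1], [1, 1]))     # lambda_a  d_b(omega) d_c(omega)
```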
An important subclass of Miura transformations, which plays a central role in the theory of deformations, is that of the so-called Miura transformations of the second kind [11], for which $F_0 = \omega$.

Definition 2. Given a λ-bracket $\{\cdot_\lambda \cdot\}_0$, an n-th order infinitesimal compatible deformation of it is a bracket $\{\cdot_\lambda \cdot\} = \{\cdot_\lambda \cdot\}_0 + \sum_{k=1}^{n} \epsilon^k \{\cdot_\lambda \cdot\}_k$ such that the properties (5) and (6) of the PVAs are satisfied up to order $\epsilon^n$. Moreover, the deformation is constituted by homogeneous terms, in such a way that the bracket $\{\cdot_\lambda \cdot\}$ is homogeneous after the assignment of degree −1 to the formal parameter ε. In the case of (23), since the degree of the undeformed bracket is 2, we have that $\deg \{\cdot_\lambda \cdot\}_k$ is k + 2.

Definition 3. A deformation of the λ-bracket (23) is said to be trivial if there exists an element φ of the Miura group that maps it back to (23). From Definition 2, this implies that φ must be of the second kind.

Theorem 1. The Poisson bracket of the two-dimensional Euler's equation (5) does not admit nontrivial infinitesimal deformations up to the second order.

The main result of this paper is obtained by direct computation of the deformations of the related λ-bracket (23). The first-order deformation of (23) is a homogeneous λ-bracket of degree 3. Its general form, given in (27), depends on 36 parameters. The commas in the indices are just a bookkeeping device to keep track of the different symmetry properties, which are the ones corresponding to the symmetry of the expression. For instance, from the total symmetry in (a, b, c) of $\lambda_a \lambda_b \lambda_c$ it follows that $A^{abc}$ must be totally symmetric in its upper indices. Imposing the skew-symmetry property on the bracket (27), we find that the parameters A's, C's, and D's are left unconstrained, while the remaining ones must satisfy certain linear conditions. Imposing the fulfillment of the PVA-Jacobi identity then generates a system of linear algebraic and differential equations for the remaining 16 parameters. The system is obtained by taking the first order in ε of Property (6). The result is a 5th-order differential polynomial in λ, µ, and the jet variables $\omega_I$. The coefficients of each term of the polynomial give an overdetermined system of equations. In this case, however, it is enough to set equal to 0 the coefficients of the terms $\lambda^4 \mu$, $\lambda^3 \mu\,\partial\omega$, and $\lambda^2 \mu\,(\partial\omega)^2$ to conclude that the A's, C's, and D's must all vanish. This means that no first-order deformations of (23) exist. This is always the case for a scalar bracket.

A second-order deformation of the bracket has degree 4. In principle, it depends on 92 parameters, but after imposing the skew-symmetry property we are left with 36 free coefficients. A general fact occurring with scalar brackets is that the relevant coefficients are only the ones that multiply an odd number of λ's. This means that for the bracket only the coefficients A's, D's, E's, and F's are independent, while the remaining ones can always be expressed in terms of them. Moreover, it is worth noticing that there are no terms in $\lambda^4$, since we cannot have a skew-symmetric scalar differential operator whose leading order is even. The huge set of equations we get after imposing the compatibility condition (28) comprises both algebraic and differential equations. In particular, it is possible to solve them algebraically for most of the 36 free parameters, leaving six free parameters. After a change of coordinates, these are $A^{112,1}$, $A^{122,1}$, $A^{222,1}$, $E^{1,112}$, $E^{1,122}$, and $E^{1,222}$. Then, we consider the trivial deformations of (23).
We select the ones for which the Miura transformation is of degree two, in such a way that the resulting deformed bracket is of degree 4, like the one we are dealing with. Such a deformation depends on six parameters, the f's and the g's. We then compare the form of the six free parameters of a generic compatible deformation with the ones we get for a general trivial one, which in principle also depends on six parameters. We can regard this comparison as an inhomogeneous system of algebraic and differential equations for the f's and g's; solving it shows that every compatible second-order deformation is trivial.

Concluding remarks

In this paper we focused on two aspects of the Hamiltonian structure of Euler's and Helmholtz's equations for a two-dimensional fluid. We have explicitly obtained the Poisson bracket as the reduction, to the divergence-free vector fields that characterize an ideal fluid, of the Lie-Poisson bracket associated to the full algebra of vector fields. Moreover, we have established the triviality of the first- and second-order deformations of the bracket. It is a well-known fact, proved by several authors [8,6,7], that all the deformations of the one-dimensional Poisson brackets of hydrodynamic type are trivial. In the language of PVAs, this means that the result holds true for brackets homogeneous of degree 1 on a differential algebra endowed with one derivation ∂, namely for fields depending on one space variable. For the two-dimensional case, it has been proved that the deformations at the first order are trivial [5], while this is not the case for higher order deformations [4]. Very recently it has been proved that the Poisson cohomology of multidimensional scalar brackets is extremely large [3]; hence there are plenty of deformations of such structures. The bracket (23) is at the crossroads of the aforementioned cases. It is a two-dimensional scalar bracket, namely it is defined for a scalar field of two variables, but it is not of hydrodynamic type, since its degree is 2. Nevertheless, it arises from the reduction of the Lie-Poisson bracket of hydrodynamic type for the Lie algebra X, which is two-dimensional and two-component: the generators of A are the two fields $(p_1(x, y), p_2(x, y))$. In [4] we proved that the first-order deformations of the Poisson bracket (14) are trivial, while the second-order ones span a two-dimensional space. Looking for higher order deformations of the bracket (23) requires much more computational effort. Preliminary results for the third order suggest that the odd-order deformations may always be trivial. A more general method to compute the dimension of the cohomology groups of the bracket, such as the one presented in [3], may be sufficient to prove the triviality of all the deformations and deserves further investigation.

This work was partially supported by the project "Geometric and analytic theory of Hamiltonian systems in finite and infinite dimensions" of the Italian Ministry of Universities and Research. The author is grateful to the organizers of PMNP2015 for the opportunity to present his work and for the wonderful scientific environment they have established in Gallipoli.
OpenVVC: a Lightweight Software Decoder for the Versatile Video Coding Standard

In recent years, users' requirements for higher resolutions, coupled with the emergence of new multimedia applications, have created the need for a new video coding standard. The new generation video coding standard, called Versatile Video Coding (VVC), has been developed by the Joint Video Experts Team, and offers coding capability beyond the previous generation High Efficiency Video Coding (HEVC) standard. Due to the incorporation of more advanced and complex tools, the decoding complexity of the VVC standard has approximately doubled compared to HEVC. This complexity increase raises new research challenges to achieve live software decoding. In this context, we developed OpenVVC, an open-source software decoder that supports a broad range of VVC functionalities. This paper presents the OpenVVC software architecture, its parallelism strategy, as well as a detailed set of experimental results. By combining extensive data level parallelism with frame level parallelism, OpenVVC achieves real-time decoding of UHD video content. Moreover, the memory required by OpenVVC is remarkably low, which presents a great advantage for its integration on embedded platforms with low memory resources. The code of the OpenVVC decoder is publicly available at https://github.com/OpenVVC/OpenVVC

I. INTRODUCTION

During the last decade, the extensive use of on-line platforms and the democratization of higher resolutions (4K, 8K) have led to a significant increase in the volume of exchanged video content [1]. The multimedia services have also diversified with the emergence of video applications that offer an immersive and more realistic viewing experience, such as Virtual Reality (VR, 360°). This increasing demand for video content brings new challenges to compression, mostly to enhance the video coding efficiency and reduce the carbon footprint induced by video storage, transmission and processing.

Finalized in July 2020, VVC [2], [3] is the State-of-the-Art (SOTA) video coding standard. VVC has reached the ultimate goal of up to 50% bit-rate saving compared to HEVC for similar subjective video quality [4], [5]. The bit-rate savings brought by the VVC standard over HEVC are achieved at the expense of more complex coding tools at both encoder and decoder sides. The computational complexity of the VVC reference encoder has increased by a factor of 8 and 27 compared to the HEVC reference encoder in inter and intra coding configurations, respectively [6]. At the decoder side, which is the focus of this paper, the computational complexity of the VVC standard has doubled (2x) compared to HEVC in both inter and intra coding configurations [7]. This decoding complexity increase raises new research challenges for VVC deployment, especially on embedded platforms or for live applications that require real-time decoding capability.

Usually, hardware decoders [8] are preferred to software decoders for embedded platforms with low memory and energy supplies. However, hardware decoders will only be commercialized several years after the standard finalization. The design of efficient software solutions is therefore mandatory during the next couple of years to support real-time decoding of the emerging video applications. For these applications, the flexibility of software decoders is crucial to support the minor evolutions of a standard
(i.e. new extended profiles), as well as for their deployment on previous generation devices, which do not embed a VVC hardware decoder.

Currently, only a few software decoders compliant with the VVC standard have been implemented. The VVC reference software VVC Test Model (VTM) [9], for instance, is compatible with the complete set of new coding tools but requires high memory usage and achieves poor decoding frame-rate performance [7]. From the source code of the VTM, the Fraunhofer Heinrich Hertz Institute has developed a VVC decoder named VVdeC [10]. The latter offers high decoding speed, at the cost of high memory consumption.

In this paper, we present an open-source VVC software decoder named OpenVVC. The software architecture, parallelism strategy, as well as detailed experimental results, are described in this paper. OpenVVC is developed in the C programming language and compiled as a cross-platform library. It provides real-time decoding capability under various operating systems including MAC OS, Windows, Linux and Android, targeting sustainable real-time decoding of Ultra High Definition (UHD) content on high performance General Purpose Processor (GPP) and low performance GPP platforms. OpenVVC is compatible with popular video players such as FFplay [11], VLC and GPAC. The current version of the OpenVVC decoder supports the decoding of a wide set of conformance videos in addition to the four principal coding configurations defined by the Joint Video Experts Team (JVET): All Intra (AI), Random Access (RA), Low-Delay P (LDP) and Low-Delay B (LDB).

The OpenVVC decoder has been designed to achieve high decoding speed with the lowest possible memory usage. The decoder relies on Single Instruction on Multiple Data (SIMD) optimizations [12] to reduce the decoding time of the most computationally complex operations. The architectures of multi-core processors are exploited through frame level parallelism, where several frames are processed simultaneously by the decoder. By combining these two levels of parallelism on a multi-core x86 platform, the OpenVVC decoding speed reaches over 290 Frames Per Second (fps) and 90 fps for Full High Definition (FHD) and UHD resolutions, respectively. Moreover, this high decoding speed is achieved at a very low memory usage. The sequential decoding of FHD content requires less than 25 MB and 75 MB in AI and RA configurations, respectively.

The rest of this paper is organized as follows. Section II presents the general block diagram of a VVC decoder and the main decoding stages. The Section also provides an overview of the SOTA parallelism techniques for the decoding process and presents the VVC software decoders currently available. The proposed OpenVVC decoder architecture and optimizations are described in detail in Section III. Section IV presents the experimental setup, as well as the performance in terms of memory consumption and decoding speed of the OpenVVC decoder in AI and RA coding configurations. In order to highlight the most time consuming tasks of the decoding process, the complexity of OpenVVC is also provided by group of tools. Finally, Section V concludes the paper.

II. BACKGROUND AND RELATED WORK

In this section we first give the main normative tools integrated in the VVC standard, and then describe optimizations and parallel processing techniques used to speed up the decoder, along with existing software VVC decoders.
A. Overview of VVC decoder

Fig. 1 presents the general block diagram of a VVC decoder, where each block corresponds to one of the main decoding stages. The decoder converts an input bitstream composed of binary symbols (bits) into decoded pictures, based on the conventional hybrid coding that takes advantage of both intra/inter prediction and transform coding. The operating principles of the decoding stages in Fig. 1 are introduced in this section, without going into exhaustive detail. For a detailed description of the VVC tools, the reader may refer to the following papers [2], [6].

1) Entropy decoding: The first decoding stage is the entropy decoding of the bitstream. The Context Adaptive Binary Arithmetic Coding (CABAC) [13], first introduced in the Advanced Video Coding (AVC) standard, is the entropy engine used in VVC. At the encoder side, the CABAC has compacted into the bitstream all the syntax elements generated by the coding tools. These syntax elements include, among others, block partitioning information, intra and inter prediction information, quantized coefficients and in-loop filtering parameters. At the decoder side, the entropy decoding stage parses the binary symbols in the bitstream and converts them into non-binary symbols. These non-binary symbols are provided as input data for all the other decoding stages.

2) Predicted block: At the encoder side, the block partitioning scheme divides the picture into appropriate block sizes according to the local activity of the samples. The block partitioning scheme recursively divides a large block of typical dimension 128×128, called Coding Tree Unit (CTU), into smaller blocks, called Coding Units (CUs), with sizes ranging from 128 × 64 and 64 × 128 down to 4 × 4 samples. At the decoder side, the CU dimensions are retrieved by the entropy decoder and each CU is reconstructed by summing a residual block with a predicted block. The predicted block is an approximation of the original block, computed using intra prediction, inter prediction or a combination of both intra and inter predictions. Intra prediction exploits spatial redundancy within the same frame, whereas inter prediction exploits temporal redundancy among adjacent frames. In VVC, the novel Combined Intra/Inter Prediction (CIIP) tool combines inter and intra predicted blocks as a weighted sum in order to generate the final predicted block.

When the CU is intra predicted, a prediction mode, as well as the previously reconstructed samples of the adjacent left and above CUs, are required. These neighboring samples are either propagated along a given angle (angular and Wide Angular Intra Prediction (WAIP) modes), averaged (DC mode), interpolated (Planar mode) or used as input for the alternative Matrix-based Intra Prediction (MIP) mode [14]. The Multiple Reference Line (MRL) mode [15] introduced in VVC enables the encoder to choose among three reference lines and explicitly signal the one achieving the lowest rate-distortion cost.

For inter prediction, the samples of the current CU are approximated based on the reference samples stored in the Decoded Picture Buffer (DPB). This process is called Motion Compensation (MC). The decoder first derives one or several Motion Vectors (MVs) (whether the CU is uni- or bi-predicted) from CABAC information. Each MV is composed of a vertical and a horizontal component, representing the translation of the underlying samples from the reference picture to the current picture. For bi-predicted blocks, a blending process is applied to aggregate the two motion compensated blocks.
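To make the blending step concrete, the following minimal C sketch averages two motion compensated blocks with rounding. It is only an illustration of the default equal-weight case, not OpenVVC's actual kernel: the standard works at a higher intermediate precision with a dedicated offset and shift, weighted variants exist, and all names below are hypothetical.

```c
#include <stdint.h>

/* Equal-weight bi-prediction blending sketch for 10-bit content:
 * average the two motion compensated blocks with rounding and
 * clip the result to the valid sample range. */
static void bipred_blend(uint16_t *dst, int dst_stride,
                         const int16_t *mc0, const int16_t *mc1,
                         int w, int h)
{
    for (int y = 0; y < h; y++) {
        for (int x = 0; x < w; x++) {
            int v = (mc0[y * w + x] + mc1[y * w + x] + 1) >> 1;
            /* clip to [0, 1023] for 10-bit samples */
            dst[y * dst_stride + x] = (uint16_t)(v < 0 ? 0 : v > 1023 ? 1023 : v);
        }
    }
}
```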
In VVC, the Luma Mapping with Chroma Scaling (LMCS) has been introduced [16], operating in three distinct parts of the decoding process: residual chroma scaling, forward luma mapping for inter prediction and inverse luma mapping of the reconstructed block. For inter prediction, the LMCS modifies the luma predicted samples with a forward luma mapping. This process maps (i.e. redistributes) the inter predicted luma samples to another range of values. Since the residual block is also distributed over the entire possible range of values, this operation is mandatory to sum the inter predicted block with the residual block.

3) Residual block: The inverse quantization and inverse transform are crucial decoding stages in conventional hybrid video codecs. The inverse quantization retrieves the values of the transformed residual coefficients, taking as input the quantized coefficients transmitted in the bitstream. The transformed residual coefficients are further converted into a non-scaled residual block by the inverse transform. The chroma scaling part of the LMCS is finally applied on the non-scaled residual block. The scaling factor applied to the chroma samples is computed based on the luma samples of the reconstructed block. The inverse transform module consists of the inverse Low Frequency Non-Separable Transform (LFNST) and the inverse Multiple Transform Selection (MTS).

4) In-loop filters: Four in-loop filters are performed on the reconstructed samples in order to reduce the visual artifacts of the previous coding tools. They include the inverse mapping of the LMCS, the Deblocking Filter (DBF), the Sample Adaptive Offset (SAO) and the Adaptive Loop Filter (ALF). First, the inverse mapping of the LMCS redistributes the reconstructed luma samples from the entire possible value range to a smaller range of values. Samples in this smaller range of values are used by all the following in-loop filters and are stored in the DPB. The DBF [17] is applied on block boundaries, reducing the blocking artifacts introduced, among others, by quantization. In the decoding process, the horizontal filtering of vertical edges is performed first, followed by the vertical filtering of horizontal edges. The SAO [18] then classifies the reconstructed samples into different categories to alleviate the remaining artefacts sample by sample. For each category, an offset value, retrieved by the entropy decoder, is added during the SAO process to each sample of the category. The SAO is particularly efficient at filtering ringing artefacts and enhancing the perceptual video quality. The last in-loop filters are the ALF [19] and Cross Component ALF (CC-ALF) [20]. The ALF performs Wiener filtering to minimize the Mean Squared Error (MSE) between original and reconstructed samples. It is responsible for an important share of the VVC decoding complexity [21], mostly due to the classification of every 4×4 block of samples and to the application of diamond shape filters on both luma and chroma samples. Applied in parallel with the ALF, the CC-ALF relies on the luma samples to adjust the chroma sample values. Once the in-loop filters are performed on the entire picture, the decoded picture is stored in the DPB.
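As an illustration of the per-category offset addition performed by the SAO, the C sketch below implements the band offset variant for 10-bit samples. It is a simplification with hypothetical names, not OpenVVC's actual filter: samples are classified into 32 bands by their most significant bits, and the decoded offset of the band is added with clipping (in the real standard, only four consecutive bands carry non-zero offsets, and an edge offset variant also exists).

```c
#include <stdint.h>

static inline uint16_t clip10(int v)
{
    return (uint16_t)(v < 0 ? 0 : v > 1023 ? 1023 : v);
}

/* SAO band offset sketch: classify each 10-bit sample into one of
 * 32 bands (1024 / 32 = 32 values per band) and add the per-band
 * offset decoded by the CABAC stage. */
static void sao_band_offset(uint16_t *pix, int stride, int w, int h,
                            const int8_t band_offset[32])
{
    for (int y = 0; y < h; y++) {
        for (int x = 0; x < w; x++) {
            int band = pix[y * stride + x] >> 5;   /* sample / 32 */
            pix[y * stride + x] = clip10(pix[y * stride + x] + band_offset[band]);
        }
    }
}
```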
B. State-of-the-art of VVC decoding

The total video decoding workload is determined at the encoder side, since it mainly depends on the set of enabled coding tools and on the final bitstream size [22]. The challenges for a VVC decoder are speed and compliance with the standard, in order to support real-time decoding of a wide range of bitstreams produced by different encoders, rates and resolutions. The speed challenge is addressed by parallel processing techniques, which aim to distribute the decoding workload optimally over several computing units. In this Section, we first describe the SOTA parallel decoding techniques. Since only a few works on VVC decoders are currently available, many of the presented related works were designed for the HEVC decoding process. It is nonetheless possible to adapt these techniques to a VVC decoder, considering that the decoding processes of the HEVC and VVC standards are quite similar. The second part of this section presents the VVC software decoders currently available.

The parallel processing of a decoder essentially operates at three levels of parallelism: data level, high level and frame level. The data level parallelism techniques are applied on elementary operations. They include, among others, techniques relying on SIMD instructions [12]. In high level parallelism techniques, several threads operate on contiguous regions of the same frame. These techniques are either normative, i.e. defined in the standard and requiring additional information in the bitstream, or non-normative. With frame level parallelism techniques, several frames are processed in parallel, under the restriction that the MC dependencies are satisfied [23].

1) Data level parallelism: In the video decoding field, data level parallelism is widely explored through techniques relying on SIMD optimizations [23]-[25]. With a single SIMD instruction, an operation is applied simultaneously on a vector of data, producing a vector of results. For x86 processors, various SIMD instruction sets are available (mainly Streaming SIMD Extensions (SSE) [26] and Advanced Vector eXtensions (AVX) [27]). For instance, SSE2 [28] supports instructions on 128-bit registers, which are 2 and 4 times narrower than the 256-bit and 512-bit registers used by AVX2 and AVX-512, respectively. The computations that benefit from SIMD architectures are typically those including elementary operations on vectors and matrices. In the decoding process, these computations include, among others, the application of the diamond shape filters of the ALF, the MC interpolation filter, the derivation of reconstructed samples in intra prediction and the inverse transform applied on the residual transform coefficients. On the other hand, the entropy decoding stage does not include significant data level parallelism, which makes the use of SIMD instructions unnecessary.

Related works widely rely on SIMD architectures to speed up the decoding process. Yan et al. [25] rely on intensive SIMD optimizations (SSE instructions on 128-bit registers) to reduce the HEVC decoding time. The Neon instruction set [29], available on low performance GPP processors, is exploited by Raffin et al. in their work [30]. In the particular case of the scalable extension of the HEVC standard, named Scalable High Efficiency Video Coding (SHVC), Hamidouche et al. [23] optimize, among others, the upsampling of the base layer picture with SIMD instructions. A complete summary of the possible SIMD optimizations for the HEVC decoding process is provided by Chi et al. [24]. The authors discuss the challenges of the SIMD implementation for many of the most complex decoding computations, and provide experimental results on 14 different platforms.
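To give a flavor of this kind of optimization, the sketch below reconstructs eight 10-bit samples per iteration with SSE2 intrinsics by adding the residual to the prediction and clipping, replacing eight scalar additions by one vector addition. It is an illustrative kernel, not taken from any of the cited decoders; the function name and data layout are assumptions.

```c
#include <emmintrin.h>  /* SSE2 intrinsics */
#include <stdint.h>

/* Reconstruct one row of samples: dst = clip(pred + resi), 8 samples
 * per SSE2 iteration (w is assumed to be a multiple of 8). */
static void recon_row_sse2(uint16_t *dst, const uint16_t *pred,
                           const int16_t *resi, int w)
{
    const __m128i zero  = _mm_setzero_si128();
    const __m128i max10 = _mm_set1_epi16(1023);   /* 10-bit upper bound */

    for (int x = 0; x < w; x += 8) {
        __m128i p = _mm_loadu_si128((const __m128i *)(pred + x));
        __m128i r = _mm_loadu_si128((const __m128i *)(resi + x));
        __m128i s = _mm_adds_epi16(p, r);          /* saturating add    */
        s = _mm_min_epi16(_mm_max_epi16(s, zero), max10);
        _mm_storeu_si128((__m128i *)(dst + x), s);
    }
}
```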
2) Normative high level parallelism: The normative high level parallelism techniques are defined in the standard and require conveying additional information in the bitstream. This subsection presents two of the most widely used normative techniques, namely tiles [3] and Wavefront Parallel Processing (WPP) [31]. They have been standardized in both HEVC and VVC in order to facilitate the use of parallel processing architectures for encoding and decoding. The proposed OpenVVC decoder, for instance, is compliant with bitstreams including both tiles and WPP.

Tiles. In both the VVC and HEVC standards, tiles are rectangular regions of the picture containing entire CTUs [3]. In Fig. 2, the tile partitioning forms a 2×2 grid; the tiles are labeled from 0 to 3 and delimited by the thicker black lines. Prediction dependencies across tile boundaries are broken and the entropy encoding state is reinitialized for each tile. These restrictions ensure that tiles are independently decodable, allowing several threads to decode the same picture simultaneously. The in-loop filtering stages across tile boundaries must however be performed when the reconstructed samples of all tiles are available. In [22], the tile partitioning is adapted at the encoder side in order to minimize the decoding time. The decoding load imbalance among tiles is reduced based on the relation between the decoding time and the number of coded bits of a given CTU. In the context of computing systems with asymmetric processors, Yoo et al. [32] take advantage of the tile partitioning flexibility in order to optimize the HEVC decoding time on these specific platforms. An asymmetric tile workload is delivered at the encoder side by varying tile sizes. At the decoder side, the large and small tiles are then allocated to fast and slow cores, respectively.

Wavefront Parallel Processing. The WPP tool, enabled at the encoder side, divides the picture into CTU rows [31]. The CABAC context is reinitialized at the start of each CTU row with the context state saved after the first CTUs of the preceding row. In VVC, WPP removes the dependency on the top-right CTU, thus reducing the CTU offset required between adjacent lines. The decoding of a row may therefore begin once the first CTU in the preceding row is reconstructed, since this ensures that the decisions needed for prediction and CABAC reinitialization are available. These constraints allow several processing threads to decode the picture in parallel, with a delay of one CTU between adjacent rows. The propagation of these delays across the picture rows limits the parallelism speedup, especially for a high number of threads. For this reason, many works including [33] and [34] combine the WPP tool with frame level parallelism in their solutions. Zhang et al. [34] show that a decoder combining frame level parallelism and WPP enables a much higher speedup compared to a decoder using WPP alone, as soon as the number of threads exceeds 5 for high resolution video.
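The wavefront dependency rule can be captured by a tiny readiness test. The sketch below is illustrative (hypothetical names, not OpenVVC's scheduler): progress[r] counts fully decoded CTUs in row r, and a CTU may start once the CTU directly above it is done, which yields the one-CTU delay between adjacent rows described above (under the HEVC rule, the top-right CTU is also needed, hence a two-CTU delay).

```c
#include <stdbool.h>

/* Can the CTU at column x of CTU row `row` be decoded under VVC WPP?
 * progress[r] = number of fully decoded CTUs in row r, from the left. */
static bool wpp_ctu_ready(const int *progress, int row, int x)
{
    if (row == 0)
        return true;                    /* first row: no WPP dependency  */
    return progress[row - 1] >= x + 1;  /* above CTU done (x + 2 in HEVC) */
}
```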
3) Non-normative high level parallelism: In contrast to the decoding parallelism techniques that require specific processing at the encoder side, the parallelism techniques presented in this section have not been standardized and are suitable for decoding any input bitstream. For instance, task level parallelism techniques refer to techniques in which several threads simultaneously process one or several decoding tasks, exploiting their specific parallel opportunities. A detailed description of the main task level parallelism opportunities is provided in [35].

The entropy decoding process is sequential and thus it is the most difficult stage to process in parallel. In order to process the CABAC in parallel, CABAC reinitializations must be included in the bitstream at the encoder side, as in the tiles or WPP high level parallelism techniques. Habermann et al. [36] propose three solutions to improve the CABAC processing in WPP for low-delay applications. Once the CABAC output data is retrieved, the other decoding tasks may be performed in parallel. Two main approaches exist in related works to retrieve the CABAC output data. The first approach performs the CABAC stage on a picture basis, as a pre-processing of the picture reconstruction. This approach, adopted in the VTM and in recent works [35], [37], is optimal for task level parallelism. However, it requires additional memory to store the CABAC output data of the whole picture. Another approach consists in retrieving the CABAC information on the fly [23]. With this approach, task level parallelism is disabled but the memory consumption to store the CABAC output is negligible. In this work, the second approach is adopted to lower the memory footprint. The task level parallelism opportunities are therefore not exploited.

For in-loop filtering, a classical approach consists in processing the in-loop filters in a separate pass once the entire picture is reconstructed. A slight synchronization overhead is introduced in this case, since the in-loop filters are applied one by one on the entire picture. Kotra et al. [38] provide three parallel implementations of the DBF on the entire reconstructed picture for HEVC decoding. The limit of this approach is that the final samples needed as reference for MC are available only after the last in-loop filter is processed on the entire picture. However, when the in-loop filters are performed at a CTU level, the final samples are available with a lower delay. For instance in [39], the final samples of a CTU are available with a delay of 2 CTUs. In the proposed solution, the in-loop filtering is applied at a CTU line level. This approach improves the frame level parallelism in inter configuration compared to processing the in-loop filters on the entire picture.

4) Frame level parallelism: With frame level parallelism, the decoder processes several frames in parallel, under the restriction that the MC dependencies are satisfied. Frame level parallelism is particularly efficient in the AI coding configuration since there are no MC dependencies. In inter coding configurations, its efficiency highly depends on the motion activity in the sequence and on the ranges of the MVs used for MC. Based on this observation, Chi et al. [33] restrict the downward motion at the encoder side to 1/4 of the image height. This restriction greatly reduces the MC dependencies for the decoding process, without significantly impacting the coding efficiency. Frame level parallelism requires a large memory overhead compared to sequential decoding, since the decoder must store an additional picture buffer per decoding thread. For systems with strong memory constraints such as mobile devices, this memory overhead is a serious limitation. For instance, in the context of mobile devices, the authors in [40] rely on high level parallelism techniques rather than frame level parallelism alone to accelerate the HEVC decoding process.
5) VVC software decoders: Currently, only a few open-source software VVC decoders have been implemented. The first to be highlighted is the VTM [9], the reference software in which the new tools have been validated during the standardization process. Its main advantage is its compatibility with the complete set of new VVC standard tools. It has been extensively used during the OpenVVC development to validate the proper implementation of the new coding tools. However, it requires high memory usage and achieves decoding performance far from real time [7]. The Fraunhofer Heinrich Hertz Institute has developed an open-source VVC decoder named VVdeC [10], using the source code of the VTM as a starting point. Based on the VVdeC software, the work of Wieckowski et al. [37] provides one of the first experimental results on a real-time VVC decoder. The authors propose intensive SIMD optimizations through the SSE4.2 (128-bit registers) and AVX2 (256-bit registers) instruction sets. These optimizations are coupled with task level parallelism, which does not require normative parallel techniques. A recent alternative decoder, O266dec, has been proposed by the Tencent Media Lab [41]. This decoder is only available as a binary for testing purposes. Zhu et al. [42] describe the operating principles and experimental results of the O266dec decoder in RA configuration. The authors combine SIMD optimization (the 256-bit register instruction set AVX2) with techniques exploiting various levels of parallelism: task level, CTU level, sub-CTU level and frame level. The decoding speed for FHD and UHD video content is very promising. However, the task level and CTU level parallelism require the storage of the CABAC information on a picture basis, which may increase the memory consumption.

Since 70% of the world population will have mobile connectivity by 2023 according to Cisco [43], the optimization of the decoding process on low performance GPP platforms is a crucial issue. To tackle this concern, Saha et al. [21] optimize the VVdeC decoder for a heterogeneous system-on-chip platform composed of a low performance GPP and a Graphical Processing Unit (GPU). The SSE and AVX instructions included in VVdeC are converted to the Neon instruction set available on low performance GPPs. The presented results do not yet exploit the GPU, which may decrease the processing time of computationally complex decoding tasks. A similar effort was accomplished by Li et al. [44] to optimize the O266dec decoder for various low performance GPP platforms. The reader can refer to [45] for more details.

III. OPENVVC DECODER

OpenVVC builds on the experience of an earlier open-source decoder compliant with the HEVC standard, used in widespread players such as VLC and GPAC [47]. As previously mentioned, the decoder is implemented in the C programming language and is integrated as a dynamic library with the FFmpeg player. The project aims to implement a conforming VVC decoder and supports the Common Test Conditions (CTC) [48] in AI, RA and Low-Delay (LD) configurations for 8- and 10-bit input content. This section describes the OpenVVC general architecture, its buffer characteristics and the implemented parallelism strategies.

A. Decoder architecture

The decoding parameters required at the sequence, picture, slice or tile level are first retrieved by parsing global parameter sets such as the Sequence Parameter Set (SPS), Picture Parameter Set (PPS), picture header or slice header. The general block diagram for the decoding of a frame in OpenVVC is presented in Fig. 3.
The main tasks of the decoding process, previously detailed in Section II, are applied at various levels:

• The CU level reconstruction includes the intra/inter prediction, inverse quantization, inverse transform and LMCS (included in the prediction unit method).
• The DBF is applied at the CTU level, right after all the CUs of the CTU are reconstructed. This choice avoids storing the Quantization Parameter (QP) map and the CU dimensions, required by the DBF, on a larger scale.
• The SAO filter is applied at a CTU line level, followed by the ALF/CC-ALF. This approach improves the frame level parallelism in inter configuration compared to processing the in-loop filters after the reconstruction of the entire frame.

The decoding process of OpenVVC is performed in four successive steps (reconstruction, DBF, SAO, ALF), as illustrated in Fig. 4. The upper-left sub-figure shows the progress of the reconstruction stages: prediction, inverse quantization and transform. In green, the first two CTU lines are completely reconstructed. As mentioned previously, the DBF (upper-right sub-figure) is applied right after the reconstruction on almost all the samples of the current CTU. A margin of 8 samples is left unprocessed at the bottom and at the right of the current CTU, and will be processed once the required reconstructed samples are available. For the SAO filter, a delay of 1 CTU line is introduced. The first CTU line is entirely processed, as well as a margin of 3 pixel rows in the second CTU line. This margin is mandatory to apply the ALF (bottom-right sub-figure) on the entire first CTU row. The final samples needed as reference for the MC of other frames are therefore available with a delay of only 1 CTU line.

B. Frame level buffers management

The decoding of a picture in 4:2:0 chroma format requires a picture buffer of dimension 1.5 × W_frame × H_frame, where W_frame and H_frame are the picture width and height. The picture buffers are reference counted, and a picture pool system manages the unused resources in order to avoid buffer re-allocation (which is time and memory consuming). On the other hand, a DPB structure stores the decoded frames that are required for display or used as reference for MC. Fig. 5 illustrates the DPB management and picture pool in OpenVVC. When the decoding of a picture starts, the decoder requests a picture buffer from the picture pool. A new picture buffer is allocated only when the picture pool is empty. Otherwise, the decoder uses a picture buffer popped from the picture pool. Then, the decoder increments the counters of the frames in the DPB required as reference for the MC of the current frame. The counters of the reference frames are decremented once the current picture is decoded. When a picture counter falls to 0, the picture is currently required neither for display nor as reference for MC. An important aspect for limiting memory consumption is to remove the unused picture buffers from the DPB as soon as possible. In the bitstream, an integer dpb_max_nb_pic is transmitted, signaling the maximum number of frames needed in the DPB for the decoding of a sequence. When the current number of frames in the DPB exceeds dpb_max_nb_pic, the picture with a counter equal to 0 and with minimal Picture Order Count (POC) is removed from the DPB and stored back in the picture pool; a minimal sketch of this recycling mechanism is given below.
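The following C sketch illustrates the recycling mechanism just described, with hypothetical names and a simplified single-threaded free list (OpenVVC's actual structures are more involved): a buffer is allocated only when the pool is empty, and released buffers are pushed back for reuse.

```c
#include <stdint.h>
#include <stdlib.h>

typedef struct PicBuf {
    uint16_t *data;        /* 1.5 * W_frame * H_frame samples (4:2:0) */
    int ref_count;         /* display and MC reference uses           */
    struct PicBuf *next;   /* free-list link inside the pool          */
} PicBuf;

typedef struct {
    PicBuf *free_list;     /* idle, previously allocated buffers */
    size_t nb_samples;     /* samples per picture buffer         */
} PicPool;

/* Request a buffer: reuse an idle one, allocate only if the pool is empty. */
static PicBuf *pic_pool_get(PicPool *pool)
{
    if (pool->free_list) {
        PicBuf *p = pool->free_list;
        pool->free_list = p->next;
        p->ref_count = 0;
        return p;
    }
    PicBuf *p = calloc(1, sizeof(*p));
    if (p)
        p->data = malloc(pool->nb_samples * sizeof(uint16_t));
    return p;
}

/* Return a buffer once its counter fell to 0 and the DPB evicted it. */
static void pic_pool_put(PicPool *pool, PicBuf *p)
{
    p->next = pool->free_list;
    pool->free_list = p;
}
```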
The MV buffers required for the application of the novel Temporal Motion Vector Prediction (TMVP) tool are also stored at the frame level. These MV buffers contain one MV for every 8 × 8 pixel block of the reference frames. An MV buffer pool, with an operating principle similar to that of the picture buffer pool, is implemented in order to avoid buffer re-allocation.

C. Local-level buffers management

The local structure and buffer dimensions have been designed to operate at the CTU level. As mentioned in Section II-B3, the CABAC output data required to decode the CTU is retrieved on the fly and not stored in the local structure. This approach substantially reduces the memory consumption compared to storing the CABAC output on a frame basis. The local structure includes the CTU transform residual and MV information. Moreover, several decoding stages require intermediate samples belonging to neighboring CTUs, including intra prediction, SAO and ALF. The intermediate pixel values are stored in local buffers, and a considerable effort has been made to minimize their dimensions. The dimensions of the local buffers are shown in Fig. 6, and their usage is further described in this section.

Fig. 7 illustrates the use of the local buffers for the processing of a given CTU. The processes considered in this paragraph are either intra prediction, SAO or ALF. As shown in Fig. 7a, the EC buffer is first filled with unprocessed samples, i.e. samples on which the process has not been applied. The CTU area, of dimension W_CTU × W_CTU, as well as the bottom and right margins, are filled directly with the content of the picture buffer. The left margin is filled with the unprocessed samples of the left CTU, previously stored in the RC buffer. The upper margin is filled with the unprocessed samples of the upper CTU, previously stored in the BR buffer. The second step is shown in Fig. 7b. The RC and BR buffers are updated with samples of the current CTU before processing. They will be used during the processing of the following CTUs. Finally, the process is applied on the CTU area of the EC buffer, which is then copied into the decoded picture.

TABLE I summarizes the memory consumption of the OpenVVC buffers and structures, with chroma format 4:2:0, input bit depth 10 and 128 × 128 CTUs. The global context structure contains decoding parameters required at the sequence, picture, slice or tile level. Since part of the global context parameters are stored on a CTU basis, the buffer memory consumption varies slightly from FHD to UHD sequences. The local context structure contains the local information required to decode a CTU, as well as the local buffers described in Section III-C. The local buffers include the BR buffer of dimension M × W_frame, which is larger for UHD than for FHD resolution. For this reason, the memory consumption is 741 KB for FHD and 750 KB for UHD. The picture buffer has the dimensions of the picture and is therefore 4 times larger for UHD than for FHD resolution. The same observation applies to the MV buffer used for TMVP, described in Section III-B. TABLE I shows that the largest share of the memory is consumed by the picture and local buffers.

D. Parallelism strategies

The OpenVVC decoder currently supports data level parallelism, frame level parallelism, as well as the normative slice level and tile level parallelism (both dynamic and static) defined in the VVC standard. In this work, we only present the results generated with the combination of data level parallelism and frame level parallelism.

1) SIMD optimization: For data level parallelism, OpenVVC relies on the SSE4.2 SIMD instruction set [26] operating on 128-bit registers.
Several computationally expensive methods are optimized with SIMD instructions, which are summarized in TABLE II. They are mainly distributed into 4 modules: transform, motion compensation, intra prediction and in-loop filters. Many tools in TABLE II have been introduced in the VVC standard, including the Inter-Component Transform (ICT), Low-Frequency Non-Separable Transform (LFNST), Cross Component Linear Model (CCLM), MIP and ALF. These tools carry out computations that benefit from SIMD architectures, since they apply elementary operations at the sample level. In particular, the SIMD optimization divides the time consumption of the ALF diamond shape filters by 4. In the future, OpenVVC will also rely on SIMD instruction sets with larger registers such as AVX2 (256-bit) or AVX-512 (512-bit), which will further improve the speed-up offered by data level parallelism.

2) Frame level parallelism: With frame level parallelism, the decoder processes several frames in parallel, under the restriction that the MC dependencies are satisfied. Regarding the memory usage, each thread requires a separate picture buffer (see Section III-B) and local buffers (see Section III-C). Fig. 8 shows an example of a decoding time-line in RA configuration with a main thread and 2 decoding threads. The main thread is responsible for the parsing of global parameter sets such as the SPS, PPS, picture header or slice header. It also manages the scheduling of the decoding threads through a thread pool. Since only frame level parallelism is enabled in this work, the scheduling management of the decoding threads is straightforward. Once the global parameter sets of a picture are parsed, the main thread selects the first available decoding thread in the pool and updates its internal structures with the data required to decode the frame. The main thread then signals the decoding thread (green arrows in Fig. 8), which starts the decoding of the frame. When the decoding thread finishes the picture processing, it signals the main thread (red arrow in Fig. 8) and becomes available again in the thread pool.

The MC synchronization between decoding threads is also a crucial issue for frame level parallelism in inter coding configurations. When a decoding thread requires samples not yet available for MC, it waits until these samples are reported as available by the thread decoding the reference frame. In the example of Fig. 8, the thread T2 decodes CTU line L1 in picture POC16 and requires the reference data of CTU line L3 in picture POC0. The thread T2 waits until thread T1 reports the samples of CTU line L3 of picture POC0 as available (blue arrows in Fig. 8), and then continues the decoding of picture POC16. As explained in Section III-A, the decoding and in-loop filtering processes are performed on a CTU line basis in OpenVVC. For this reason, the samples are reported as available for MC reference on a CTU line basis, once the last in-loop filter is applied.
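A minimal sketch of this CTU-line synchronization, using POSIX threads, is given below. The names and the use of a single condition variable per frame are assumptions for illustration, not OpenVVC's actual implementation.

```c
#include <pthread.h>

/* Per-frame progress: number of CTU lines on which all in-loop
 * filters have completed, counted from the top of the picture. */
typedef struct {
    pthread_mutex_t lock;
    pthread_cond_t  cond;
    int lines_ready;
} FrameProgress;

/* Called by the thread decoding the reference frame after the last
 * in-loop filter finishes a CTU line. */
static void report_ctu_line(FrameProgress *fp)
{
    pthread_mutex_lock(&fp->lock);
    fp->lines_ready++;
    pthread_cond_broadcast(&fp->cond);  /* wake every waiting thread */
    pthread_mutex_unlock(&fp->lock);
}

/* Called by a thread that needs CTU line `line` of the reference
 * frame for motion compensation. */
static void wait_ctu_line(FrameProgress *fp, int line)
{
    pthread_mutex_lock(&fp->lock);
    while (fp->lines_ready <= line)
        pthread_cond_wait(&fp->cond, &fp->lock);
    pthread_mutex_unlock(&fp->lock);
}
```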
IV. EXPERIMENTAL RESULTS

This section presents the experimental setup, as well as the performance in terms of memory usage and frame-rate of the proposed OpenVVC decoder in both AI and RA coding configurations. The data level and frame level parallelism performance is discussed and compared to two open-source SOTA VVC decoders: VTM-16.2 and VVdeC-1.5. In order to highlight the most time consuming tasks of the decoding process, the complexity repartition of OpenVVC is also provided in the form of pie charts.

A. Experimental setup

The following experiments are conducted with the proposed OpenVVC decoder on the platform described in TABLE III. Each core has 80 KB of L1 cache and 1.25 MB of L2 cache, with 25 MB of shared L3 cache. Moreover, the decoding process is forced to be executed on the P-cores, and a single set of SIMD instructions is enabled during these experiments (SSE4.2, 128-bit registers), in order to provide a fair comparison between the software decoders.

The complexity increase of the VVC decoding process raises a critical issue mainly for high resolution video sequences. For this reason, the test sequences selected in this work include 5 High Definition (HD) (classes E and F, 1280×720 samples), 6 FHD (classes B and F, 1920×1080 samples) and 6 UHD (class A, 3840×2160 samples) video sequences included in the CTC [48]. The current version of OpenVVC is compatible with the encoding tools enabled in the CTC, in both AI and RA configurations. The performance of the OpenVVC decoder is assessed at various bit-rates, obtained with QP values of {22, 27, 32, 37} following the CTC. The memory consumption of the software and the output frame-rate are used as performance metrics. The maximum memory consumption is measured with the Linux time command. It is a crucial piece of information to assess the portability of the OpenVVC decoder on platforms with strong memory constraints. The decoding time is evaluated through the number of decoded frames per second (fps).

B. Comparison with SOTA under AI configuration

Fig. 9 presents the performance in AI coding configuration of the proposed OpenVVC decoder (green points), compared to the VTM-16.2 (blue points) and VVdeC (yellow points) decoders, on FHD and UHD test sequences. The two sub-figures correspond to the performance in terms of frame-rate (Fig. 9a) and memory consumption (Fig. 9b). The results are averaged across all the test sequences with similar resolution and across the 4 QP values studied in this work. The dashed and continuous lines correspond to FHD and UHD resolutions, respectively. The experiments in this section have been carried out with 7 different parallelism configurations, each corresponding to a different abscissa coordinate. These parallel configurations include mono-thread with SIMD optimizations disabled, as well as mono-thread, 2, 3, 4, 6 and 8 threads with SIMD optimizations enabled. The number of threads does not exceed 8 since software decoders are mainly used on personal computers or smartphones, which rarely exploit architectures with more than 8 cores. The VTM-16.2 reference software does not support parallel decoding; its performance is therefore assessed through the mono-thread setting with SIMD optimizations disabled and enabled.

1) Decoding performance: Fig. 9a presents the results in AI coding configuration in terms of decoding frame-rate, expressed in fps. First, we focus on the mono-thread results. The points on the left correspond to mono-thread results without SIMD optimizations, and show an almost equivalent decoding speed for VVdeC and OpenVVC. Fig. 9a shows that the SIMD optimizations are slightly more efficient in OpenVVC compared to the two other software decoders. This is due to the specific effort dedicated to the SIMD optimizations of the intra prediction tools, as presented in Section III-D1. The performance is however still far from real time, since OpenVVC achieves average frame-rates of 37 fps and 12 fps for FHD and UHD resolutions, respectively. In order to explain the multi-thread results, it is important to recall the approaches taken by the different software decoders.
In OpenVVC, frame level parallelism is enabled as presented in Section III-B. Since there are no MC dependencies between frames in AI configuration, each decoding thread is totally independent. For this reason, the frame-rate obtained by OpenVVC increases linearly with the number of threads in Fig. 9a, reaching on average 249 fps and 75 fps for FHD and UHD resolutions, respectively. On the other hand, VVdeC relies on task level parallelism, introduced in Section II-B3. Data dependencies between decoding tasks exist in AI configuration, adding synchronization overhead between decoding threads. For this reason, the frame-rates obtained with more than one thread with the VVdeC software are lower compared to OpenVVC. In order to achieve 50 fps decoding of UHD resolution, VVdeC requires 6 threads on average, while OpenVVC only requires 4 threads.

2) Memory consumption: Fig. 9b presents the results in AI coding configuration in terms of memory consumption, expressed in MB. With less than 115 MB on average, OpenVVC is able to decode simultaneously up to 8 frames in FHD resolution and up to 2 frames in UHD resolution. By contrast, the VVdeC and VTM-16.2 decoders consume 510 MB and 420 MB, respectively, for the decoding of UHD resolution whatever the parallel setting (i.e. the number of threads). These numbers are 1.2 and 1.5 times higher compared to OpenVVC with 8 threads. As mentioned in Section II-B3, the memory consumption of these decoders is not optimized. This comparison nonetheless gives an order of magnitude of the very low memory consumption required by the proposed decoding approach in OpenVVC. This low memory usage is essentially due to the design of the local structure (see Section III-C) and to the optimized management of the picture buffer pool described in Section III-B. Second, Fig. 9b shows the linear increase of the memory consumption with the number of threads in OpenVVC. As mentioned in Section III-B, the integer dpb_max_nb_pic signals the maximum number of frames required in the DPB for the decoding of a sequence. In AI configuration, dpb_max_nb_pic is equal to 1. For frame level parallelism, an additional picture buffer and local buffers must be stored per decoding thread, explaining the linear increase.

C. Comparison with SOTA under RA configuration

The experiments described in the previous Section IV-B are also conducted in RA coding configuration. Fig. 10 presents the performance in RA coding configuration of the proposed OpenVVC decoder, compared to the VTM-16.2 and VVdeC decoders, on FHD and UHD test sequences. As for Fig. 9, the results are averaged across the 4 QP values studied in this work. The dashed and continuous lines correspond to FHD and UHD resolutions, respectively.

1) Decoding performance: Fig. 10a presents the results in RA coding configuration in terms of frame-rate, expressed in fps. Fig. 10a shows that the mono-thread results with SIMD optimizations obtained with OpenVVC are higher by almost a factor of 2 compared to VTM-16.2 for both FHD and UHD content. On the other hand, VVdeC achieves slightly better mono-thread results with SIMD compared to OpenVVC: 12% and 10% higher on average for FHD and UHD resolutions, respectively. In VVdeC, a significant effort has been invested in data level parallelism, where a larger share of the inter tools is optimized with SIMD instructions compared to OpenVVC. This small gap will be filled in the future by extending the SIMD optimizations to a larger share of the inter prediction tools, adding among others the Geometric Partitioning Mode (GPM) and CIIP.
Fig. 10a also shows that the OpenVVC green curves are not completely linear in the number of threads. Indeed, frame level parallelism in RA configuration is less efficient compared to AI configuration, due to the MC synchronization overhead between frames. OpenVVC will achieve higher decoding speed by enabling tile level or task level parallelism in addition to frame level parallelism. The results obtained by OpenVVC in RA configuration are nonetheless very promising, since our decoder achieves live decoding of FHD sequences beyond 60 fps with on average 2 decoding threads. For UHD content, a picture rate of 50 fps is reached with on average 4 decoding threads.

2) Memory consumption: Fig. 10b presents the results in RA coding configuration in terms of memory consumption, expressed in MB. In RA configuration, the maximum number of frames dpb_max_nb_pic required for the decoding of a sequence is on average equal to 7. TABLE I has shown that the picture buffer size is 8.3 MB and 33.2 MB for FHD and UHD resolutions, respectively. This explains the average mono-thread memory consumption in OpenVVC of 60 MB (≈ 7 × 8.3 MB) and 250 MB (≈ 7 × 33.2 MB) for these resolutions. For frame level parallelism, an additional picture buffer and local buffers must be stored per decoding thread, explaining the affine increase of the memory consumption with the number of decoding threads. Fig. 10b also highlights the very low memory consumption of OpenVVC in RA configuration compared to the VTM-16.2 and VVdeC software decoders. For mono-thread decoding, the OpenVVC memory consumption is around 30% of the VVdeC memory on average. Even with 8 decoding threads, the memory consumption of our solution represents 65% of the VVdeC memory for UHD resolution.

D. Per-sequence performance

TABLE IV presents the decoding speed in AI configuration with 1 and 4 threads, according to the video sequence and QP value. The speed-up obtained with 4 threads is also displayed. For all the sequences, the decoding frame-rate increases with the QP value. Indeed, for QP37 the bitstream size is on average divided by 2.5 compared to QP27. This leads to a considerable decrease in the decoding complexity due to the lower amount of symbols to process. The relation between the bitstream size and the decoding computational complexity has been studied in detail in [22]. At the sequence level, TABLE IV shows that, for a given QP value, resolution and number of threads, a high variance in the decoding speed exists according to the sequence characteristics. The clearest example is provided by the HD sequences, where the frame-rates of the Johnny and KristenAndSara sequences are considerably higher compared to the SlideEditing video. Indeed, SlideEditing displays screen content with complex spatial textures. It requires a higher number of symbols to be coded, compared to Johnny and KristenAndSara which display television talk shows with a uniform background. The decoding speed disparities are also observable among FHD sequences, especially between RitualDance and ArenaOfValor, and among UHD sequences, especially between Tango2 and ParkRunning3.

TABLE V presents the decoding speed obtained with OpenVVC in RA configuration according to the sequence and QP value. Many of the observations made about the decoding speed disparities in AI configuration also apply in RA configuration. Indeed, the decoding frame-rate increases with the QP value, and the previously mentioned sequences with more complex spatial textures lead to lower decoding speeds in RA configuration as well.
TABLE V points out the direct link between the speed-up obtained with 4 threads and the sequence resolution. Indeed, the speed-up variance is low among sequences with similar resolution, since it is contained in the short intervals [2.2, 2.8], [2.8, 3.3] and [3.5, 3.7] for HD, FHD and UHD resolutions, respectively. These numbers show that the speed-up is considerably impacted by the resolution of the sequence. As explained in Section III-D2, the MC synchronization between decoding threads has been designed on a CTU line basis. For 128×128 sample CTUs, a CTU line represents a 6th of an HD picture against a 17th of a UHD frame. Therefore, at least a 6th of the HD reference picture must be fully decoded before its data is used for MC. The interactions among decoding threads for MC synchronization are therefore more frequent, resulting in a lower speed-up compared to FHD and UHD resolutions. Results are also reported for both AI and RA configurations at high bitrate (QP = 22). In this specific configuration, the decoding frame-rate of FHD resolution with 4 threads is on average higher than 80 and 130 fps for AI and RA configurations, respectively.

E. Complexity distribution in OpenVVC

The decoding complexity distribution is obtained by running OpenVVC with Callgrind (http://valgrind.org/docs/manual/cl-manual.html). Callgrind is the Valgrind profiling tool that records the call history of program functions as a call graph. By default, the collected data includes the number of instructions executed, the caller/callee relations among functions and the number of calls. In contrast to the execution time, which depends among others on memory accesses or CPU frequency, the insight into the complexity distribution given by Callgrind is nearly constant regardless of the execution platform.

F. AI coding configuration

Fig. 11 shows the decoding complexity distribution of OpenVVC in AI coding configuration, for the UHD test sequence CatRobot1 encoded at two QP values (Fig. 11a for QP27 and Fig. 11b for QP37). The results are shown in the form of pie charts, in % of the total number of decoding instructions. The main decoding tasks displayed in Fig. 11 have been presented in Section II. The CABAC stage extracts from the bitstream the input data for all the other decoding stages. As mentioned in Section II-B1, the CABAC does not include significant data level parallelism and therefore is not accelerated with SIMD optimizations. This explains its relatively high share of the decoding complexity at QP27 (13.2%). For QP37, the bitstream size is on average divided by 2.5 compared to QP27. It results in a considerable decrease in the CABAC complexity (6%) due to the lower amount of symbols to process. The intra prediction stage includes, among others, the application of the angular, DC and Planar modes, as well as the alternative WAIP, MRL and MIP modes. The intra prediction stage represents around 12% of the total complexity regardless of the QP value. The transform stage in Fig. 11 computes the residual block through inverse quantization and inverse transform. It also includes the aggregation of the predicted block with the residual block. The transform stage is responsible for 12.8% and 9.2% of the total complexity at QP27 and QP37, respectively. This difference is due to the transform skip mode, which is more often selected by the encoder at high QPs. Four in-loop filters are performed on the reconstructed samples. They are displayed in shades of orange in Fig. 11. The ALF provides a significant improvement in encoding efficiency [3].
However, as a counterpart to the aforementioned benefits, the ALF represents a considerable burden for the decoding process (29% and 39% according to the QP value). The DBF comes second with a share of 17.3-19.9% of the total decoding complexity. In total, the in-loop filters are responsible for over 50% of the decoding complexity in AI configuration. Finally, the operations on the OpenVVC structures and buffers, presented in Sections III-B and III-C, represent 3.7-4.3% of the OpenVVC complexity.

G. RA coding configuration

Fig. 12 shows the decoding complexity distribution of OpenVVC in RA coding configuration, for the UHD test sequence CatRobot1 and according to the QP value (Fig. 12a for QP27 and Fig. 12b for QP37). It is important to note the lower share of the complexity required by the ALF in RA configuration (21.8% and 9.7%) compared to AI configuration. Indeed, the ALF is disabled on a large number of CTUs in RA configuration, which is not the case in AI configuration. In total, the sum of the in-loop filters, CABAC, transform, intra prediction and internal buffer management represents 51% and 41% of the decoding complexity at QP27 and QP37, respectively. The remaining complexity is caused by the inter prediction tools, illustrated in shades of yellow in the pie charts. The predominance of inter predicted frames in RA configuration explains this number, and also explains the very low portion of intra prediction in the pie charts. In the VVC standard, the inter prediction stage enables various coding tools. As shown in Fig. 12, the most complex tools are the MC interpolation filters, followed by Decoder-side Motion Vector Refinement (DMVR), Bi-Directional Optical Flow (BDOF) and Prediction Refinement with Optical Flow (PROF). These three last tools require the application of the MC interpolation filters on the predicted block. For this reason, the complexity share of the MC interpolation filters is higher than 24% for both QP values.

To summarize, this section has identified two decoding stages as complexity bottlenecks for OpenVVC. The in-loop filtering stage is responsible for over 50% of the decoding complexity in AI configuration, and the inter prediction stage is responsible for over 60% of the decoding complexity in RA configuration. In future works, the efforts on data and high level parallelism will therefore focus on these two decoding stages.

V. CONCLUSION

In this paper, we presented OpenVVC, an open-source software VVC decoder that supports a broad range of VVC tools. By combining extensive data level parallelism with frame level parallelism, OpenVVC achieves real-time decoding for UHD content. Considerable effort has been devoted to minimizing both local and global buffer dimensions. As a consequence, the memory required by OpenVVC is remarkably low, which is a great advantage for its integration on embedded platforms with low memory resources. Compared to other SOTA open-source VVC decoders, OpenVVC achieves higher decoding speed than VVdeC and the reference software VTM in AI configuration. In RA configuration, the small gap with VVdeC may be filled by implementing additional SIMD optimizations and by combining frame level parallelism with other high level parallelism techniques, such as tiles or wavefront parallel processing.
Social Vulnerability Assessment to Flood in Medina Gounass Dakar

This paper addresses the assessment of social vulnerability (SV) as a critical component of comprehensive disaster risk assessment. This study was conducted in Medina Gounass, Dakar, Senegal, to bring out evidence that flooding is a threat to human security. The aim of the present study is to assess the social vulnerability to flood in Medina Gounass. A survey was carried out using structured questionnaires administered to one hundred randomly selected households. For the vulnerability assessment, the Methods for the Improvement of Vulnerability Assessment in Europe (MOVE) framework and ArcGIS are used to characterize vulnerability through three key factors, namely 1) exposure, 2) susceptibility, and 3) lack of resilience. As a result, Medina Gounass inhabitants have a particular relationship with the place where they have been living for decades. Although facing diseases and many challenges in their everyday life, people actually resist the government's relocation projects because of their symbolic relationship with the area.

Introduction and Context

For centuries, human progress has depended on access to water in sufficient quality and quantity to make life on earth possible. This water has a great number of sources; the most common, known by almost everyone, is rainfall. Rainfall depends on the climate, and it is known that the most brilliant civilisations that planet earth has ever known became prosperous in periods of favourable climate. Nowadays, this planet, which produces the ecosystem services for human well-being, is experiencing disturbances at a pace that threatens the future of humankind. Thus, the earth is facing challenges such as a rising population, increasing desertification and, of course, climate change. Climate change, in IPCC (Intergovernmental Panel on Climate Change) usage, refers to a change in the state of the climate that can be identified (e.g., using statistical tests) by changes in the mean and/or the variability of its properties, and that persists for an extended period, typically decades or longer [1]. It refers to any change in climate over time, whether due to natural variability or as a result of human activities.

This usage differs from that in the United Nations Framework Convention on Climate Change (UNFCCC), where climate change refers to a change of climate that is attributed directly or indirectly to human activities that alter the composition of the global atmosphere and that is, in addition to natural climate variability, observed over comparable time periods [2]. This panel has a long history and periodically publishes key findings on the state of the climate. Its predictions anticipate more extreme, frequent and violent events, like droughts and floods. The occurrence of disasters, linking damaging physical events and social losses, has a direct relationship with human existence. However, their frequency and damaging impacts have drastically increased during the last decades.

An increasing number of people, as well as growing population density, infrastructure and production, are located in hazardous areas and in conditions of such vulnerability that they are more susceptible to excessive damage and loss and face considerable difficulties in coping [3]. Furthermore, the severity of the impacts of extreme and non-extreme weather and climate events depends strongly on the level of vulnerability and exposure to these events [4].
Generally, flooding occurs in periods of heavy rainfall. However, floods are not always caused by precipitation. In recent decades, many scholars from various disciplines have become interested in the field of flood disasters. In Africa, for instance, hydrological disasters represented 68.8% of disaster frequency [5].

For instance, an assessment of flood vulnerability in the city of Abeokuta in Nigeria during the 2007 flood event used a questionnaire survey to reach its goals. In that study, flood vulnerability was assessed by examining exposure, susceptibility and coping indicators in the study area. One of its key findings was that most Abeokuta inhabitants did not anticipate a flood event of such magnitude, despite the city's location on a flood plain, and were therefore unprepared for the hazard [6]. The specificity of our study area is that flooding has become more acute over the years, with flood water obstructing human activities and many structures situated within a flood plain. In this regard, Medina Gounass is to a large extent an informal, illegal settlement that does not follow any planning. Medina Gounass is already at risk from periodic floods due to heavy rainfall events. Additionally, the population mostly uses latrines with septic tanks [7]. The combination of these elements puts the area at risk. Social processes and their interaction with the environment drive its vulnerability to floods.

The concept of vulnerability has a multitude of definitions. This concept is fundamental to human-environment research. The word "vulnerability" is derived from the Latin vulnerare, meaning "to wound". At a very basic level, vulnerability can be defined as "the capacity to be wounded" or the "potential for loss". Social vulnerability therefore encompasses many aspects. It is not limited to social weaknesses in withstanding a natural or man-made hazard, but includes social discrepancies in terms of food security, health security and all the components of human security at large in a flooding situation. Social vulnerability is also directly linked to the environment in which people live [8]. Additionally, social vulnerability refers to the socioeconomic and demographic factors that affect the resilience of communities [9].

Social vulnerability can take multiple forms: it can be the state of the system before the event and the likelihood of outcomes in terms of economic losses and lives lost, and it can also be the lack of capacities, or the weaknesses, in facing a disaster and recovering quickly when it strikes. The latter concerns the resilience of a system or community in responding and recovering with its internal means from the adverse impacts of a disaster. Studies on the social production of vulnerability, a central theme of research on the human dimensions of environmental change, hold that vulnerability to environmental disasters is largely a product of the way humans occupy and use the natural environment [10]. Moreover, four different levels have been identified when it comes to social vulnerability to natural hazard impacts [11]. Consequently, the assumption of the social construction of risk becomes clearer. For that, the socio-natural co-production of hazard, the social qualities of vulnerability, and the ways in which different stakeholders perceive hazard, vulnerability and risk also need to be considered to understand the social construction of risk [12].
The occurrence of floods in the Dakar suburbs is a new phenomenon. The Senegalese capital is characterised by an out-of-control urbanisation process. Among the many impacts noted, flooding has recently appeared as a major threat to the poor population living in the suburbs of Dakar. The combination of population growth, lack of urban planning and climatic conditions has led to unprecedented flood disasters in different urban areas of Sénégal [13]. It is in this context that this paper examines the realities of floods in Medina Gounass. This is done first by analysing rainfall indices at the local level. Then, selected exposure, susceptibility and lack-of-resilience indicators of the sampled population, obtained through the MOVE framework, are mapped using a Geographic Information System (GIS). Lastly, the analysis of these indicators provides the social vulnerability characteristics of Medina Gounass inhabitants.

Study Area

Medina Gounass district is located at latitude 14.769°N and longitude 17.387°W in the Guédiawaye Department. It is bounded in the North by the districts of Sam Notaire and Ndiarème Limamoulaye, in the East by the district of Wakhinane Nimzath, and in the South and West by the district of Djiddah Thiaroye Kao. Formerly, it belonged to the Pikine Department (Figure 1). The Guédiawaye Department was created in 1990 following the decentralization law. Currently, it has five boroughs, including Medina Gounass, which is located South-East of the city of Guédiawaye. It is by far the most densely populated district: Medina Gounass has the smallest area (1.1 km²) but the highest density across the Guédiawaye Department, with 31,086 inhabitants per square kilometre. The number of people living in the study area is not definitively established: official sources mention 44,000 inhabitants, while the former Deputy Mayor of Medina Gounass reports 43,000 [7]. Now, with population movement, it is estimated at around 40,000 inhabitants. After the 2005 flood events, the former government built a city not far from Keur Massar to relocate people, a project called "Plan Jaxay". The present government has done the same in another location in Tivaouane Peul, called Plan "Tawfekh", which is about 22 km North-East of Dakar, the country's capital, and near the Keur Massar administrative district.

Hydrologically, the watersheds of the district of Gounass are small. Degradation of the river system resulted in the formation of a chain of lakes or ponds. Flows are endorheic (having no outlet), as offshore bars prevent their escape to the Atlantic Ocean. Strong soil sealing in urban areas has changed the nature of runoff (decreased flow losses, rapid movement of water). Over the past thirty years, this region has experienced very rapid urbanisation, linked to the rural exodus that climatic deterioration and degradation of living conditions in rural areas have caused throughout the Sahelian region [14].

Moreover, the groundwater corresponds to outcropping geological formations consisting of sand dunes dating from the Quaternary or the Continental Terminal. These sands are underlain by sedimentary geological formations [15].

Methods and Data

Social vulnerability includes climatic conditions, socioeconomic status, household composition and disability.
Key factors of the MOVE framework relate to the exposure of a society or system to a hazard or stressor, the susceptibility of the exposed system or community, and its resilience and adaptive capacity (Figure 2). This approach also underlines the necessity of considering multiple thematic dimensions when assessing vulnerability in the context of natural and socio-natural hazards [12]. This is the case in the study area, because flooding is caused not only by extreme events like heavy rainfall but by a combination of rapid urbanisation and decades of anarchic settlement that trigger natural hazard-associated risk threatening human security globally. This theoretical framework shows the linkage between different concepts of disaster risk management and climate change adaptation and appears to be a useful tool for communicating complexity; it stresses the need for societal change in order to reduce risk and promote adaptation. The MOVE framework therefore makes a clear differentiation between risk and vulnerability and also deals with the integration of the concept of adaptation into vulnerability assessments for natural hazards [12]. Exposure describes the extent to which a unit of assessment falls within the geographical range of a hazard event. Susceptibility (or fragility) describes the predisposition of elements at risk (social and ecological) to suffer harm. Lack of resilience, or societal response capacity, is determined by limitations in terms of access to, and mobilisation of, the resources of a community or social-ecological system in responding to an identified hazard, whereas the adaptation component deals with the ability of a community or system to learn from past disasters and to change existing practices in response to potential future changes in hazards as well as vulnerability contexts.

The MOVE framework characterizes vulnerability through three key factors, namely: 1) exposure (E), reflecting the extent to which a unit of assessment falls within the geographical range of a hazard event; 2) susceptibility (SUS), which describes the predisposition of elements at risk to suffer harm; and 3) lack of resilience (LoR), which is determined by limitations in terms of access to, and mobilization of, the resources of a community or social-ecological system in responding to a particular hazard. Based on data availability, previous research and personal judgement, the indicators listed in Table 1 were selected under each vulnerability component.

Table 1 gives the functional relationship between the indicators and the vulnerability. All datasets were standardized using linear min-max normalization (Equations (1) and (2)), following Iyengar and Sudarshan [16] [17]. When an indicator is related positively to vulnerability, its normalized value is computed as

$$X_{ij} = \frac{x_{ij} - \min(x_{ij})}{\max(x_{ij}) - \min(x_{ij})} \quad (1)$$

When an indicator is related negatively to vulnerability, its normalized value is computed as

$$X_{ij} = \frac{\max(x_{ij}) - x_{ij}}{\max(x_{ij}) - \min(x_{ij})} \quad (2)$$

where $X_{ij}$ is the normalized value of indicator i of component j; $x_{ij}$ is the raw value of indicator i; and $\max(x_{ij})$ and $\min(x_{ij})$ are respectively the maximum and minimum values of indicator i of component j.
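To make the normalization step concrete, the following is a minimal sketch in Python; this is an assumption, since the paper reports using Excel and SPSS rather than code, and the function name and toy indicator values are hypothetical.

```python
import numpy as np

def normalize(x, positive=True):
    """Linear min-max normalization of one indicator across households.

    positive=True  -> Equation (1): higher raw values mean higher vulnerability.
    positive=False -> Equation (2): higher raw values mean lower vulnerability.
    Returns values in [0, 1].
    """
    x = np.asarray(x, dtype=float)
    rng = x.max() - x.min()
    if rng == 0:                       # constant indicator: no variation to scale
        return np.zeros_like(x)
    if positive:
        return (x - x.min()) / rng
    return (x.max() - x) / rng

# Hypothetical indicators for five households
household_size = [3, 6, 9, 4, 12]      # positively related to vulnerability
income = [50, 120, 80, 200, 60]        # negatively related to vulnerability

print(normalize(household_size, positive=True))
print(normalize(income, positive=False))
```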
It is assumed that there are M regions or districts, K indicators of vulnerability, and normalized scores $x_{ij}$. The level of development of the i-th zone, $y_i$, is assumed to be a linear sum of the $x_{ij}$:

$$y_i = \sum_{j=1}^{K} w_j x_{ij}$$

where the $w_j$ are weights with $0 < w_j < 1$ and $\sum_{j=1}^{K} w_j = 1$ [16]. In Iyengar and Sudarshan's method, the weights are assumed to vary inversely with the variance of the respective vulnerability indicators over the regions. That is, the weight $w_j$ is determined by

$$w_j = \frac{c}{\sqrt{\operatorname{Var}_i(x_{ij})}}$$

where c is a normalizing constant such that

$$c = \left[ \sum_{j=1}^{K} \frac{1}{\sqrt{\operatorname{Var}_i(x_{ij})}} \right]^{-1}$$

The choice of the weights in this manner ensures that large variation in any one of the indicators does not unduly dominate the contribution of the rest of the indicators and distort inter-regional comparisons.

The aggregation of the three components (i.e., E, SUS and LoR) into the final composite indicator of socioeconomic vulnerability was then performed with

$$VU = \frac{\sum_{j=1}^{m} w_j x_j}{m}$$

where VU refers to the vulnerability index for a given neighbourhood, m is the number of components, $w_j$ represents the weight of domain j, and $x_j$ is the index of component j (i.e., E, SUS, LoR). In this study, the three components have the same number of indicators, so the weight $w_j$ is equal to 1 for each of them and m is 3. The vulnerability index so computed lies between 0 and 1, with 1 indicating maximum vulnerability and 0 indicating no vulnerability at all.

Climate data involved in this study were monthly values of minimum and maximum temperatures and rainfall amounts, sorted by decades (ten-day periods) for the whole time series. These data are from the Dakar Yoff station, and the dataset covers the period 1947 to 2012. For the analysis, descriptive statistics for both monthly mean temperature and monthly total rainfall were first extracted. A diagram of average monthly mean temperature and rainfall was produced in Excel.

For the determination of dry and wet years, Lamb's rainfall analysis method, named the rainfall anomaly index, is used. This index is calculated as

$$I_j = \frac{1}{N_j} \sum_{i=1}^{N_j} \frac{r_{ij} - m_i}{\sigma_i}$$

where $r_{ij}$ is the rainfall measured in year j at station i, $m_i$ and $\sigma_i$ are respectively the average and standard deviation of the rainfall recorded at station i, and $N_j$ is the number of stations that recorded rainfall in year j. Since the study area has one station (Dakar Yoff), the formula becomes

$$X_i = \frac{r_i - m}{\sigma}$$

where $X_i$ is the rainfall anomaly index for year i, $r_i$ is the total annual rainfall of year i, and m and σ are respectively the average and standard deviation of the annual rainfall recorded during the period chosen for this study [18]. A quantitative method is used for the socio-economic analysis. The advantage of quantitative research is that the findings from the sample under study more accurately reflect the overall population from which the sample was drawn [19]. For that, a sampling method with a simplified formula is used to calculate the sample size. A 95% confidence level and a precision level P of 5%, 7% or 10%, depending on population size, are assumed in the equation below [20]:
$$n = \frac{N}{1 + N e^2}$$

where n is the sample size, N is the population size and e is the level of precision. Since 44,000 people live in Medina Gounass, applying e = P = 10% in the formula above gives a sample of 100 households. This research equally adopts an exploratory approach, using predominantly qualitative methods. Qualitative research provides a richer and more in-depth understanding of the population under study. Techniques such as interviews and focus groups allow research participants to give very detailed and specific answers [19].

EpiData, Excel and the Statistical Package for the Social Sciences (SPSS) are used for data entry and statistical analysis. For EpiData, questions were coded in quantitative form so that they could be easily analysed. This software is very useful because it allows different variables from the field survey to be converted to an Excel file for the statistical analysis in SPSS. Pie charts and bar graphs were then drawn.

Results of Meteorological Data Analysis

Precipitation Analysis

Generally speaking, precipitation in Sénégal is closely related to that prevailing in the Sahel. It occurs with the advent of the African Monsoon. During the second half of May and June, the ITCZ (Intertropical Convergence Zone) is stable around 5 degrees North; this is the first rainy season in the Gulf of Guinea. By July, the ITCZ has moved rapidly to the north, reaching its second equilibrium position at 10 degrees North, where it remains until mid-August. This is the wet season in the Sahelian region [21]. Rainfall is by far the most crucial variable influencing the climate and people's lives. It is determinant to the changing environment in this region. Thus, it is the most suitable parameter for characterizing and analysing climatic changes in the Sahel. Even though Sahelian livelihoods depend on rainfall, extremes like floods are destructive for crops in agricultural activities and for human settlements in cities.

Figure 3, which combines the monthly mean rainfall and monthly mean temperature from 1947 to 2012, shows how rainfall is distributed in Dakar. It confirms that July, August and September are the wettest months in Sénégal. They are also the periods in which flood events mostly occur. As can also be observed, the rainy months are the hottest in Dakar. The monthly mean rainfall is around 70 mm in July, 180 mm in August and 145 mm in September; August is the most pronounced in terms of rainfall. As an example of an extreme, 105 millimetres (mm) of rainfall recorded in 24 hours triggered flooding in cities such as Dakar on 22 July 2000 and Saint-Louis on 1 August 2000 [18].
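As a hedged illustration of the single-station Lamb index defined in the methods section, the sketch below standardizes an annual rainfall series; the rainfall values and years are invented for demonstration only, not taken from the Dakar Yoff record.

```python
import numpy as np

def lamb_index(annual_rainfall):
    """Single-station rainfall anomaly index: X_i = (r_i - m) / sigma,
    where m and sigma are the mean and standard deviation of the series.
    X_i < -0.5 flags a dry year, X_i > 0.5 a wet year."""
    r = np.asarray(annual_rainfall, dtype=float)
    return (r - r.mean()) / r.std()

# Invented annual totals (mm) for illustration only
rain = [420, 310, 510, 280, 590, 350]
for year, x in zip(range(2000, 2006), lamb_index(rain)):
    label = "wet" if x > 0.5 else "dry" if x < -0.5 else "normal"
    print(year, round(float(x), 2), label)
```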
In the same vein, 2005 was characterised by frequent heavy rainfall within a short period. With an annual total of 590 mm, 270 mm (46%) fell within seven days in mid-August, and 360 mm (61%) within fourteen days by the end of August and the beginning of September. Comparing this information with our data, which are sorted per ten-day period, the Dakar Yoff station recorded 188.9 mm and 145 mm for the second and third ten-day periods of August 2005, respectively, for a monthly total of 336 mm; in the first ten days of September, 106.8 mm were recorded [22]. Furthermore, in July 2000, the Dakar Yoff station recorded 154.3 mm in the last ten days of the month. Flooding in Medina Gounass is thus caused by heavy rainfall over a short time. Consequently, August and September are the months in which local inhabitants suffer the most. This information confirms that these three months are the wettest, with an important amount of rainfall recorded in Dakar, the most pronounced month being August.

The annual rainfall index variation has a long history in climatology for the determination of dry and wet years. The Sahel drought, for instance, was a recurrent concern for populations, hydrologists and ecologists. West Africa as a whole experienced a widespread drought in the 1970s and 1980s [23]. The 1990s then saw a gradual return to more humid conditions, although the rainfall deficit over the Sahel continued until 2002. For this index, negative values below −0.5 are considered dry years and positive values above 0.5 wet years; the Lamb index was used in [18] to perform a rainfall analysis based on daily precipitation data collected at ten synoptic stations of Sénégal from 1921 to 2000. The choice of this rainfall series reflects a concern for a relevant analysis that takes into account a long period before and after the onset of the drought. The analysis of the rainfall-flood relationship is based only on the review of daily precipitation, regardless of antecedent rainfall.

Furthermore, a survey was conducted using the same station data, ranging from 1970 to 2009. Climate variability is obvious in the rainfall totals throughout the Sahel. The normalized standard deviation for the Dakar Yoff station can vary considerably from one year to another. This variability is due to the types of disturbances: squall lines and cyclonic disturbances in the atmosphere bring most of the rainfall in Dakar. Additionally, the rainfall trend increased from 1970 to 2009 and shows positive anomalies during the 2000s [24]. Thus, with the extension of buildings in areas under high pressure, floods are becoming more frequent. The 2005 flood, however, affected districts that had not been touched by floods until then. Houses in Hann-Marist, a well-planned area, are particularly vulnerable to these increases in rainfall; Medina Gounass, where people live haphazardly, was highly affected by these events.
Looking at Figure 4, the time series is not evenly distributed, as shown by the moving average drawn in red. Even though the index can be used to determine both flood and drought years, this study puts the emphasis on floods. Justifying the relevance of this index, the field survey revealed that 1989 can be considered a baseline for the history of floods in the area. It has been reported that, in August 2005, 450 mm of cumulative rainfall recorded in 24 hours caused a damaging flood in Dakar and its suburbs [14]. Recording such a quantity of rainfall in a single day in Dakar seems excessive, and our data, although from the same Dakar Yoff meteorological station, do not say the same thing: the reference overestimates this figure, since in our data 336 mm of rainfall were recorded over the whole of August 2005.

One might be tempted to say that we are at the end of the cycle of drought and at the beginning of a new wet phase in Sénégal, but it is too early to argue that wet years have returned to Sénégal. As an example, in an interview during the data collection, the 2014 rainy season was forecast by ANACIM (Agence Nationale de l'Aviation Civile et de la Météorologie) to be normal to deficient, and until the beginning of September many areas in the country had not had much rainfall for agricultural activities.

A close observation of the Lamb index graph suggests that there were wet years during the period 1947-1970, while 1970 to 1989 corresponds to a long period of dryness. This period corresponds to the long droughts that hit the Sahel in general. The year 1989 was wet, after which drier years returned until the early 2000s, when wet years seem to have become more frequent. In this last decade, rainfall has not been without negative consequences for urban inhabitants. The Dakar suburbs, particularly Medina Gounass, constantly stand in rain water mixed with sewage and drainage water, which obstructs people's activities. It has been emphasised that, in August and September 2005, nearly 200,000 people in the poor suburbs of Dakar had their feet in the water and were later displaced and resettled in precarious sanitary conditions [14]. As a result, flooding has become a threat to human security in general.

Characteristics of the Sample

The sample of the survey consists of 100 households within the study area. The questionnaires covered a number of social, economic, educational, environmental and existential issues in the area where the respondents live.

The level of education often indicates the degree to which a community is able to withstand a disaster and recover quickly when it strikes. Figure 5 illustrates the literacy rate of Medina Gounass inhabitants. In the same vein, the informal sector employs the youngest, least educated workers and more females in Medina Gounass. This is also the area with the lowest incomes, where social benefits are the lowest and social welfare is almost null [14].

The literacy rate sometimes determines the level of income. Medina Gounass residents are not highly paid; consequently, their level of income does not allow them to afford housing in the well-planned urban areas where viable amenities already exist.
Figure 6 shows that those who have a salary paid in cash represent 39%, family support 18%, job wages 25%, other sources 5%, and no answer 13%. Linked to the literacy rate, these statistics confirm that Medina Gounass inhabitants generally have limited economic means. Additionally, the survey reveals that the washrooms in the study area are mostly not connected to the sewage system: only 8% are connected, while 92% use septic tanks. This is a real problem in the area, because it is subject to recurrent flood events and the water table is not deep. When flooding occurs, septic tank water and rain water flow together and affect the population.

Figure 7 clearly shows that the percentage of people owning their houses in Medina Gounass is far greater than that of tenants, with 82% against 18%. As a brief historical note, the site is a former cropland area (Niayes) belonging to the Lebou ethnic group, who sold the first plots of land for residential use at prices ranging between 3,000 and 5,000 CFA francs (less than 10 euros). People settled in without regard to urbanisation standards [7]. Furthermore, the economic situation of Sénégal is characterised by an unequal distribution of wealth and an imbalance in development level between the western regions close to the capital city of Dakar and those in the East and the South. This situation was exacerbated by the droughts of 1972-1983, which generated inter-regional migration; the main migration flows were directed towards the Dakar region (49% of flows in 1976) [25].

These situations are the driving forces behind the high population density in many suburbs, including Medina Gounass. The socio-economic situation, governance issues and climatic conditions encouraged people to settle haphazardly in these areas, and no one at the time could imagine what would happen if a return period of rainfall occurred. Additionally, population growth on an unplanned site aggravated people's suffering during flood events. Land ownership was so cheap that people with moderate incomes preferred to have their houses in this risky area. These factors put the Medina Gounass population in a permanent situation of human insecurity.

Thus, the main driving forces generate social inequities, which are exacerbated by governance leniency, lack of preventive measures, bad behaviours and risk unawareness. "The inability to sustain stresses is produced by on-the-ground social inequality, unequal access to resources, poverty, poor infrastructure, lack of representation, and inadequate systems of social security, early warning, and planning. These are the factors that translate climate vagaries into suffering and loss" [26]. This statement summarises what happens in the study area. As a matter of fact, the inhabitants have a strong symbolic relationship with the land they have struggled to obtain. It has become difficult for them to leave for an unknown land with many uncertainties. Hypothetically, flood-affected people settled within the area because they are mostly low-income people who could not afford housing on the urban-planned sites dotted with viable amenities. The land ownership situation is important, but it is not the only determinant in assessing social vulnerability in Medina Gounass.
Internal mobility to access critical infrastructure is important too. Figure 8 shows the average time needed to reach a certain number of critical infrastructures. For instance, those who do not have a tap at home spend on average five minutes to get water. Access to the nearest market is also important, and the average time to reach it is about twenty minutes.

Vulnerability Analysis

Exposure is the first element analysed, and it is determined by household size. The exposure map (Figure 9) shows that the population most exposed to the adverse impacts of floods, in terms of family size, is spread out all over Medina Gounass. Exposure is ranked from very low to very high: green bullets represent very low and red ones very high vulnerability to flood. However, most of the highly exposed households are grouped in the South, the Centre and the North-Eastern part of the area, materialised by red bullets indicating a very high degree of household exposure to floods. The spatial distribution of exposure to the adverse consequences of floods in this case depends on the number of persons in the household. As a result, large families become a heavy burden when a disaster strikes in this area.

The susceptibility index is composed of the number of children under 4 years of age in a given household. These households are predisposed to suffer the most harm due to flood events during the rainy season. Susceptibility, as highlighted in Figure 10, is sparsely distributed in the South, the South-East, the Centre and the North-East, and is also ranked from very low to very high. Some households are very highly susceptible because they have a large number of children under four, who are known to be fragile in the face of the recurrent flood events in Medina Gounass. Humidity, water-borne diseases such as cholera, and water-related diseases like malaria are very dangerous for this age group. In addition, these are low-income people, so they can hardly take care of their children's health. This spatial distribution is due to the fact that Medina Gounass is densely populated, and families are sometimes crowded on a small piece of land, with a great number of people in a small house.

The lack of resilience is determined by the distance from the nearest health centre within Medina Gounass. The nearer a household is to the health centre, the higher its resilience; conversely, the farther a household is from the health centre, the lower its resilience and, by extension, the higher its vulnerability in terms of health. The households located in the North and North-East are highly vulnerable, and those in the East and South experience medium vulnerability. These households, living permanently in flood conditions during the rainy seasons, encounter many health challenges (Figure 11). This main aspect calls into question their capacity to anticipate, cope with and recover from the adverse effects of recurrent flood events. Medina Gounass inhabitants are not resilient.
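The maps in Figures 9-11 rank each normalized component from very low to very high, but the paper does not state the exact class boundaries; the sketch below therefore assumes equal-interval classes over [0, 1], purely for illustration.

```python
def rank(value):
    """Assign a normalized index in [0, 1] to one of five assumed
    equal-interval classes (the paper does not give its boundaries)."""
    classes = ["very low", "low", "medium", "high", "very high"]
    # min(..., 4) keeps value == 1.0 inside the top class
    return classes[min(int(value * 5), 4)]

for v in (0.05, 0.35, 0.5, 0.78, 1.0):
    print(v, "->", rank(v))
```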
The survey and the focus group interviews showed that people use bags of sand and power-driven pumps to fight against floods. These measures are not sustainable, because they offer no perspective for the future. The power-driven pumps are provided by the government, which spends money on fuelling them and monitoring flood events instead of putting in place sustainable adaptation measures. Additionally, an elderly person noted during our interview that, in managing flooding events, those who have the means pile a great quantity of sand near their houses, generating conflicts in the neighbourhood: these actions block the waterways, and water enters the houses of those who do not have the means to do the same. Sometimes the conflicts are so violent that the police have to intervene. As a result, flooding generates conflicts between friends and relatives. In such situations, individualism prevails over the community.

The results obtained through the spatial analysis for all single and composite indicators were then aggregated for the calculation of the final vulnerability map. The combination of the three composite indicators shows the vulnerability of people living in Medina Gounass to floods. The interaction of these three major aspects of the MOVE framework is a crucial part of the vulnerability assessment. On the final map, a concentration of the biggest bullets can be observed in the North-Eastern part of Medina Gounass. This result is not surprising, because this part is low-lying compared with the rest of Medina Gounass. Additionally, these households are the farthest from the health centre, and the interview conducted with the former Deputy Mayor details a new project for the construction of a basin at that place in order to collect the running waters of the area. This project could considerably reduce the social vulnerability of Medina Gounass to floods. Furthermore, Medina Gounass is among the precarious neighbourhoods that suffer from the lack of a drainage network for wastewater and storm water. The streets are narrow and winding and do not allow the fast and safe movement of people and goods. Flooding is thus the most obvious risk in these illegal settlements and a latent scourge behind much of the vulnerability, especially in the rainy season. Based on rainfall, topographic, hydrogeological, environmental and hygiene criteria, Medina Gounass is one of the most affected areas. Although Medina Gounass belongs to the city of Guédiawaye, which is located on a dune site where soils are more permeable and infiltration of runoff is therefore easier, 75% of its area is flooded; in 2005, 911 houses were flooded [14].

As shown in Figure 12, the social vulnerability index to floods is limited to the set of indicators assessed. The reason for this restriction is the absence or lack of accurate data needed to build a more composite index. However, household size for exposure, children under four years of age for susceptibility, and the distance from the nearest health centre for lack of resilience appear to be relevant in assessing the social vulnerability of the community to floods.
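Putting the methods section together, a minimal sketch of the Iyengar-Sudarshan weighting and the final aggregation is given below; the indicator matrices are invented, and the equal component weights (w = 1, m = 3) follow the paper's stated choice.

```python
import numpy as np

def iyengar_sudarshan_index(X):
    """Composite index for one component.
    X: (n_units, K) matrix of normalized indicators in [0, 1].
    Weights vary inversely with the standard deviation of each
    indicator and sum to one (Iyengar and Sudarshan)."""
    w = 1.0 / X.std(axis=0)
    w = w / w.sum()          # normalizing constant c makes the weights sum to 1
    return X @ w             # y_i = sum_j w_j * x_ij

# Invented normalized indicators for four households (one matrix per component)
E   = iyengar_sudarshan_index(np.array([[0.2, 0.4], [0.9, 0.8], [0.5, 0.1], [1.0, 0.6]]))
SUS = iyengar_sudarshan_index(np.array([[0.1, 0.3], [0.7, 0.9], [0.4, 0.2], [0.8, 0.5]]))
LoR = iyengar_sudarshan_index(np.array([[0.3, 0.2], [0.6, 1.0], [0.2, 0.4], [0.9, 0.7]]))

# Final aggregation with equal component weights (w_j = 1, m = 3)
VU = (E + SUS + LoR) / 3
print(np.round(VU, 2))       # 0 = no vulnerability, 1 = maximum vulnerability
```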
Conclusions

Medina Gounass is truly vulnerable to floods. This vulnerability is not solely related to climatic conditions but results from a combination of factors. The analysis of climatic data highlights a rise in temperature from May to October, when the peak is reached; consequently, in Dakar, the hottest month is October. The temperature parameter alone cannot characterize the changing climate, so rainfall data were used jointly, the reason being that precipitation in Sénégal is related to that of the Sahel. Rainfall is by far the most crucial variable for the climate and for people's lives.

Extreme rainfall is one of the manifestations of climate change that causes flood events. Hence, the Dakar suburbs, particularly Medina Gounass, constantly stand in rain water mixed with sewage and drainage water, which obstructs people's activities and becomes a threat to human security in general.

Moreover, the survey highlights that those who have a salary paid in cash represent 39%, family support 18%, job wages 25%, other sources 5%, and no answer 13%; the inhabitants are therefore not highly paid. These statistics, linked to the literacy rate, confirm that Medina Gounass inhabitants generally have limited economic means to buy houses elsewhere, where amenities already exist. Additionally, Medina Gounass lacks an amenity plan for a district that is said to be on its own. Finally, the social vulnerability index to floods is limited to a small number of indicators. The reason for this restriction is the absence or lack of accurate data needed to build a more composite index. However, household size for exposure, children under four years of age for susceptibility, and the distance from the nearest health centre for lack of resilience appear to be relevant in assessing the vulnerability of the community to floods. As a result, this study shows that the inhabitants of Medina Gounass are in a tricky situation and that flooding is a real threat to human security.

Figure 7. House ownership and tenancy in Medina Gounass.
Figure 8. Average time to reach some critical infrastructures such as the tarred road, the nearest market, and the nearest public tap.
Figure 9. Exposure map of Medina Gounass, determined by household size.
Figure 11. Lack of resilience map of Medina Gounass.
Table 1. Recapitulation of components, number of indicators and the functional relationship.
Future Agricultural and Forestry Extension by Applying Information and Communication Technology

In the past, extension was seen as a transfer of technology from researchers to farmers. Now, the role of extension is seen more as a process of helping farmers make their own decisions by adding choices for them and helping them develop insight into the consequences of each of these choices. The role of extension workers is clear in bridging and communicating government policies to farmers and business actors directly involved in activities to realize food security and independence, utilizing technology and information so that it is easier to get information from various sources that can be accessed using the latest technology and communication equipment. Farmers who tend to master technological aspects are predominantly of productive age, while the ability to perform good analysis and problem solving is generally seen in those who are highly educated. In terms of extension activities using information technology media, all farmer groups tend not to play a real role for their members. Almost all farmers have long experience in farming activities; even though they have considerable knowledge and experience, they still need up-to-date information. While farmers still need additional knowledge and up-to-date information through information technology media, quite a lot of farmers in Simalungun Regency do not master information technology. The majority of farmers stated that extension workers are rarely or only sometimes in the field, and that extension workers never carry out face-to-face counseling activities in the field. Activities following a remote extension model for farmers in Simalungun Regency require information technology media, which are very important to support such activities. Farmers who tend to master information technology on average own smartphones, while farmers who lack technological skills on average only have ordinary mobile phones, which cannot be used to access many functions. Farmers' lack of understanding of the counseling material presented shows that information technology-based extension models are less effective when the material is presented in the form of visual texts.

INTRODUCTION

Indonesia, as an agricultural country with the potential of great natural resource wealth, certainly relies heavily on the agricultural sector to improve the welfare of its plural population, which is spread across several regions and islands. This is based on the consideration that, directly or indirectly, food needs affect social life and national security. The issue of food security is a pillar of the existence and sovereignty of a country. For this reason, all components of the nation, both the government and the community, must jointly build national food security. This is as mandated by Law Number 8 of 2012 concerning food, which states that the government and the community are responsible for realizing food security. Food security development is planned, designed and driven by humans to improve their quality of life. It is human beings who determine the direction, goals, processes, intensity, time, costs and resources that development requires. Human beings also organize all the resources necessary so that the development process runs productively, effectively and efficiently, and achieves the goals that have been set.
Realizing the strategic position and role of human resources in development, efforts are needed to improve their quality. Improving the quality of human resources must be carried out in a planned, sustainable and far-sighted manner, taking into account changes in the strategic environment. One of the efforts to improve the quality of farmers and their families is through agricultural and forestry extension activities (Damanik, et al, 2022).

At this time, many strategic environmental changes affect agricultural extension services for the future. However, three strategic environmental changes significantly and broadly influence agricultural extension: globalization, the enactment of Regional Autonomy, and the enactment of Law Number 16 of 2006 concerning Agricultural, Fisheries and Forestry Extension Systems. The changes occurring in the era of globalization are the availability of greater economic opportunities, the rapid development of information and technology, and the rapid progress in telecommunications, which make it easier to obtain information from various sources that can be accessed using the latest information and communication technology and equipment. On this basis, our paradigm in the face of globalization is that globalization is not a threat, but an economic opportunity that we can and should make the best use of. The economic opportunities offered by globalization include market opportunities, business opportunities, job opportunities and cooperation opportunities that are increasingly widespread regionally, nationally and globally. All these economic opportunities are offered in a highly competitive climate, which must be answered by increasing competitiveness: the competitiveness of farmers, of officers and of stakeholders (Damanik, Triastuti, 2022). Most importantly, for competitiveness to work well, efficiency in the development of food security must be pursued in order to support the achievement of success in agriculture, namely increasing sustainable self-sufficiency, increasing food diversification, increasing added value, competitiveness and exports, and improving the welfare of farmers.

Extension workers play a role in conveying information about agricultural innovations so that farmers know about, want, and are able to run their agricultural businesses properly and correctly. This includes providing assistance to business actors and main actors in increasing the production and productivity of agricultural products that are competitive and have high economic value (Rusmono, 2021). The information and communication technology-based extension model is a method that can be run remotely and does not require face-to-face contact with farmers. Timely and relevant information and communication technology in the agricultural sector provides farmers with appropriate information for decision-making in farming, thereby effectively increasing productivity, production and profits. In addition to the socialization of various policies and developments in agricultural activities, education and training for agricultural and forestry extension workers support the activities carried out by agricultural assistants at the field level.
Therefore, through education and training, as well as their own efforts to develop themselves by continuing to learn about the development of agricultural science and technology, extension workers are able to provide great support to farmers. Moreover, agriculture has been designated as one of the priority development programs to improve the economy, the standard of living and the welfare of the community. Thus, the role of extension workers is very strategic in agricultural development that utilizes continuously developing technology (Damanik, et al, 2022). Given the problems that occur, information is needed on the solutions that agricultural and forestry extension workers will adopt to realize future agricultural and forestry extension by applying information and communication technology. This study aims to analyze the effectiveness of the implementation of information technology-based agricultural extension for farmers.

RESEARCH METHODS

This research was carried out in Raya District, Simalungun Regency, North Sumatra Province, chosen as the research location because the information technology-based extension model has never been implemented there; such extension therefore needs to be implemented in Simalungun Regency. This research uses an experimental approach, conducting information technology-based counseling trials with farmers. The population of this study consists of the farmers belonging to farmer groups: 102 people in total, members of 5 farmer groups. Farmer sampling was carried out using a purposive sampling technique with the following criteria: 1) farmers have a smartphone or computer that can be connected to the internet; 2) farmers are able to operate a smartphone or computer; 3) farmers have been active in a farmer group for at least 1 year; 4) farmers have participated in counseling activities before. The number of samples was then determined following Arikunto (2002), according to whom, if the population is homogeneous, a sample of 10% to 25% can be taken. The sample in this study was thus 25% of the population, or 26 farmers in total.

Data Analysis Techniques

The effectiveness of the information technology-based counseling conducted in Simalungun Regency is measured using the EPIC model method, a tool for measuring media effectiveness with a communication approach. It covers four critical dimensions, namely Empathy, Persuasion, Impact and Communication (EPIC). The effectiveness rating of the extension media under the EPIC model is determined by the Likert Summated Rating Scale (LSRS). The EPIC model calculates the scale range with the formula

$$R_s = \frac{R_{\max} - R_{\min}}{M}$$

where $R_{\max}$ and $R_{\min}$ are the highest and lowest scale values and M is the number of categories. The scale used in this study ranges from 1 to 5, so the scale range obtained is $R_s = (5 - 1)/5 = 0.8$. On this continuum, five categories are made: 1.00-1.80 (very ineffective), 1.81-2.60 (ineffective), 2.61-3.40 (less effective), 3.41-4.20 (effective) and 4.21-5.00 (very effective).
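A minimal sketch of the EPIC computation is given below; the dimension scores are invented, and the class labels for the lower bands are assumptions, since the paper only explicitly confirms the "less effective" band (2.61-3.40) and the 3.40 threshold.

```python
def epic_effectiveness(scores):
    """Average the four EPIC dimension scores (Empathy, Persuasion,
    Impact, Communication), each on a 1-5 Likert scale, and classify
    the result using a scale range Rs = (5 - 1) / 5 = 0.8."""
    mean = sum(scores.values()) / len(scores)
    bands = [(1.80, "very ineffective"), (2.60, "ineffective"),
             (3.40, "less effective"), (4.20, "effective"),
             (5.00, "very effective")]
    for upper, label in bands:
        if mean <= upper:
            return mean, label

# Invented dimension means for illustration only
scores = {"empathy": 2.9, "persuasion": 2.7, "impact": 2.8, "communication": 3.0}
print(epic_effectiveness(scores))   # (2.85, 'less effective')
```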
Respondent's Demographic Profile

a. Farmer's Age

The age factor has always been the main benchmark for categorizing a person as productive or unproductive. Farmers in Simalungun Regency are dominated by farmers in the productive age category; around 25.7% of respondents are of unproductive age. Farmers who tend to master technological aspects are predominantly of productive age: of all farmers who master information technology, 87.3% are of productive age.

b. Level of Education of Farmers

The level of education is closely related to a person's knowledge and social status. The ability to perform good analysis and problem solving is generally seen in those who are highly educated. Farmers educated up to the undergraduate level can still be found, though at a relatively small rate of 2.9%. Interestingly, attainment of secondary education by farmers in Simalungun Regency does not absolutely determine their mastery of information technology: 52.8% of farmers with a high school education turn out to be quite familiar with the use of smartphones for internet access.

c. Farmer Group Membership and Main Commodities

The important role of farmer groups as a liaison for access to seed and fertilizer assistance for corn crops also contributes positively to farmers. In terms of extension activities using information technology media, however, all farmer groups tend not to play a real role for their members. The cessation of direct field extension activities by extension workers during the Covid-19 pandemic cut off access to the information and knowledge needed by farmers. Some farmers in Simalungun Regency who are relatively proficient in information technology can obtain additional information through independent internet access.

d. Farming Experience

Experience can refer to how long a person has been involved in a certain work environment or field, and thus indicates a person's level of knowledge or skill in that field. Knowledge formed on the basis of experience can be called empirical knowledge; when someone is said to be an expert, one criterion is how much experience he or she has. Almost all farmers have long experience in farming activities. Most have an average farming experience of more than 5 years, and even more than 10 years. Some farmers (8.57%) turn out to have even longer experience, averaging more than 20 years. Those who tend to have less experience than other farmers have less than 5 years, on average between 2 and 4 years. The factor of considerable farming experience explains how strongly their knowledge and skills have been formed. Despite having considerable knowledge and experience, farmers still need up-to-date information. While farmers still need additional knowledge and up-to-date information through information technology media, quite a lot of farmers in Simalungun Regency do not master information technology.

Agricultural Extension in Simalungun Regency

Agricultural extension workers have an important role in agricultural development because, as agents of change, extension workers are the spearheads who are directly in contact with farmers. In this role, extension workers empower farmers to become independent in carrying out their agricultural business: independent in thinking, acting, and exercising control. As a government officer, an extension officer holds a functional position with duties and roles in accordance with the job description that has been set.
The Simalungun Regency Government continues to show appreciation to agricultural extension officers for assisting farmers, because farmers in Simalungun Regency succeeded in harvesting during the pandemic for several commodities such as rice, beans, vegetables and chilies. As a result of the interviews with farmers, the majority stated that extension workers are rarely or only sometimes in the field, and more than 55% of farmers stated that extension workers have never carried out face-to-face counseling activities in the field. Activities following a remote extension model for farmers in Simalungun Regency therefore rely on information technology media, which are very important to support such activities.

The use of information technology media to support extension activities has a very good prospect, considering that many areas in Simalungun Regency are already connected to telecommunications networks. The main obstacle to implementing an extension model that utilizes telecommunications network media still lies with the farmers. The results of this study found that almost all farmers did not agree with remote (online) counseling using telecommunications media. The main reasons for the farmers' rejection are the expensive costs, the difficulty of direct practice, and a tendency to find it difficult to understand the material and information presented. Other reasons stated by farmers are the difficulty of accessing the internet network, not knowing how the online counseling mechanism works, and the fact that online counseling is not accompanied by simulations or direct practice.

The information and communication technology-based extension model is a method that can be run remotely and does not require face-to-face contact with farmers. Timely and relevant information and communication technology in the agricultural sector provides farmers with appropriate information for decision-making in farming, thereby effectively increasing productivity, production and profits. Internet technology is currently developing very quickly and reaches almost all levels of society, even in the areas farthest from the city center. Through an internet connection, electronic equipment such as computers and mobile phones can serve as communication media that allow extension workers and farmers to interact in real time, as if they were face to face. The internet has several advantages for extension activities: material can be delivered anywhere and anytime, it is relatively easy to update, the number of interactions between participants and speakers can grow, and contact can take place in real time. Information technology can thus deliver extension services for various agricultural sectors and plays an important role in rural development, producing various changes. Information technology in the form of the internet offers the potential for more decentralized and more democratic communication compared to the mass media that came before it.

Mastery of Information Technology by Farmers in Simalungun Regency

Efforts to implement an information and communication technology-based agricultural extension program in Simalungun Regency need to consider farmers' mastery of information technology. Communication devices such as smartphones have become a basic necessity as a flexible communication tool and are widely used by all levels of society.
A smartphone does not only function as a communication tool; it has many other functions, such as accessing information, teaching and learning activities, and shopping and transaction purposes. Table 2 shows that mastery of information technology by farmers in Simalungun Regency is visible in 84.6% of farmers. All farmers who tend to master information technology on average own smartphones, while farmers who lack technological skills on average only have ordinary mobile phones, which cannot be used to access many functions. The main reason farmers do not own a smartphone is not that they cannot afford one, but that they do not understand how to operate the device.

Effectiveness of Information Technology-Based Counseling in the Form of Visual Texts

Measured with the EPIC model, the effectiveness of counseling in the visual text category is generally categorized as less effective. The overall score of each variable falls in the less effective category, with an average of less than 3.40. With a total average score of 2.84, the effectiveness of counseling using visual text is categorized as less effective. The farmers' lack of understanding of the counseling material presented is enough to show that information technology-based extension models are less effective when the material is presented in the form of visual text. The ineffectiveness of the extension model/media in the form of text published on a website is caused by several factors: the counseling materials are still considered inappropriate or incomplete with respect to the farmers' information needs; farmers cannot freely provide responses, reactions or even questions directly when there is information they want to explore in detail and depth; understanding the information sentence by sentence is considered quite troublesome for farmers; the majority of respondent farmers only have a high school education and find it difficult to grasp the meaning of certain very unfamiliar terms; very minimal reading habits make farmers relatively reluctant to read the material or information presented, especially if the text is long enough; and almost all farmers do not like the information technology-based extension model.

1. Information technology-based agricultural and forestry counseling in Simalungun Regency has never been implemented optimally. The information technology-based (online) counseling conducted in this study with farmers in Simalungun Regency found that the online extension model in general falls in the less effective category for forming farmers' knowledge and understanding.

2. Some of the obstacles to information technology-based agricultural counseling include the lack of mastery of information technology aspects and obstacles to ownership of smartphone
The flank eruption history of Etna (1610-2006) as a constraint on lava flow hazard

Data on the flank eruptions of Etna over the last 400 years were extracted from the new geological map for the lava flow extensions and vent positions, and from the catalogs of historical eruptions for the eruption durations and lava volumes. The partially or widely hidden lava fields on the new geological map were retrieved from older geological maps. The distributions of the eruption durations and lava volumes were analyzed, with the definition of six eruptive classes for use in numerical simulations. The threshold values for the eruption durations and lava volumes were set at 45 days and at 35 × 10⁶ m³ and 100 × 10⁶ m³, respectively. A global analysis was performed on the whole volcano to evaluate the recurrence of the classes, and to estimate for each class the ranges, means and standard deviations of the durations, volumes and elevations of the main vent. The same analysis was repeated by subdividing the volcano into three sectors, defined on the basis of the distribution of the eruptive fissures over the last 15 ka. The classes have different recurrences across these sectors, and different distributions of volumes, durations and elevations of the main vent. Finally, a lava flow resurfacing map counting the number of lava flows on each given area of the volcano over the last 400 years was compiled and then normalized.

Introduction

The volcanic threat is the combination of the hazards (the dangerous or destructive natural phenomena produced by a volcano) and the exposure (the people and property at risk from the volcanic phenomena). Volcanic hazard is the probability that given areas will be affected by potentially destructive volcanic processes [Fournier 1979]. Evaluation of volcanic hazard is generally based on past eruptive behavior, on the assumption that previous activity is representative of that in the future [Guest and Murray 1979, Behncke et al. 2005]. Hazard assessment can be supported by the use of computer simulation codes, which are becoming valuable tools for better understanding and forecasting of lava flow emplacement [Vicari et al. 2006].

Mount Etna is a basaltic composite stratovolcano that formed along the Ionian coast of eastern Sicily as a consequence of a complex eruptive history over the last 500 ka [Branca et al. 2008]. As Etna volcano is located in a densely inhabited area, assessing the hazard posed by its eruptions is particularly important, and this can be based on the long record of its activity. The key information for analyzing its flank activity comes from recent revisions of the catalog of its eruptive history [Branca and Del Carlo 2004, 2005], which provide the durations and volumes of the recorded eruptions. Moreover, geometrical and geographical information on the lava flow fields can be found in eight different datasets:

- Three geological maps: the New Geological Map (NGM) of Etna volcano, updated to 2007 [Branca et al. 2011], and the previous geological maps compiled by Waltershausen [1845-59] and Romano et al. [CNR, 1979];
- Orthophotographs extracted from aerial surveys in 1994, 2001 [Coltelli et al. 2007], 2004 [Baldi et al. 2006] and 2005 [Gwinner et al. 2006];
- Lava flow maps reported in Azzaro and Neri [1992].
Eruptions of Etna consist of quasi-continuous activity at summit craters, and quite frequent events along fissures on its flanks [Branca and Del Carlo 2005]. Strombolian activity and periodic lava fountaining episodes, which are often associated with lava flows, frequently occur at the summit craters [Guest 1982, Coltelli et al. 1998, Coltelli et al. 2000, Calvari et al. 2002, Alparone et al. 2003]. In contrast, the flank eruptions take place at intervals of years, and they produce lava effusion that is commonly associated with weak explosive activity [Branca and Del Carlo 2005]. About three millennia of Etna eruptions are documented in historical sources, which represent a unique record for an active volcano [Tanguy 1981, Branca and Del Carlo 2004]. However, the catalog of flank eruptions is complete and accurate only after the mid-17th century [Branca and Del Carlo 2005], although a few events (e.g. 1702, 1755) are poorly documented, so that the lava flows and eruptive systems that occurred are today irretrievably lost for a valuable study [Tanguy et al. 2007]. The catalog of summit activity is reliable only since the late 19th century [Salvi et al. 2006], although the summit activity poses relatively minor problems in terms of hazard to inhabited areas when compared to the flank eruptions [Duncan et al. 1981, Wadge et al. 1994, Behncke et al. 2005]. Furthermore, the high frequency of summit eruptions implies that most of their products are soon covered by successive volcanics, thus making it difficult to identify and delimit each flow field. Fifty-one summit effusive eruptions over the last 400 years have been retrieved from the historical catalogs [Branca and Del Carlo 2004, 2005], although only 30 lava flow fields can be identified on the three geological maps, and no more than four of these are totally outcropping. As summit eruptions are generally not dangerous for inhabited areas, the evaluation of the hazard at Etna can be limited to flank eruptions, and can be based on the running of several numerical simulations of the lava flow paths, provided that the past eruptive behavior has been analyzed. This study was devoted to the collection and analysis of all of the available data regarding flank eruptions of Etna that have occurred over the last 400 years. It has allowed the characterization of the distributions of duration and volume, as well as the spatial distributions of these past flank eruptions. These analyses are necessary to define the input of the numerical simulations to be used to compile an Etna hazard map. Etna flank eruptions over the past 400 years This study of the eruptive activity that occurred along the Etna flanks from 1610 to 2006 (Figure 1) was firstly based on an analysis of the historical catalogs [Romano and Sturiale 1982, Branca and Del Carlo 2004, Andronico and Lodato 2005, Behncke et al. 2005, Branca and Del Carlo 2005, Tanguy et al. 2007]. These catalogs furnished information on the eruption durations and on the lava volumes of 67 effusive events, with the volumes of the 2001 and 2006 lava flows retrieved from two studies in the literature [Coltelli et al. 2007, Behncke et al. 2009]. Then, a geodatabase, i.e.
a database designed to store, query, and manipulate geographic information and spatial data, was implemented with the ArcGIS software (www.esri.com), to store all of the data available for an analysis of the geometry and the location of the lava flows during the 67 eruptions (an example of a geodatabase implemented for monitoring Etna eruptions can be found in De Beni and Proietti [2010]). In particular, the two geological maps of Waltershausen [1845-59] and Romano [CNR, 1979], as well as the maps reported in Azzaro and Neri [1992], were digitized and compared with the NGM, to reduce distortions that might arise from the map georeferencing and orthorectifying. Three layers, as polygonal, punctual and linear, were also created inside the geodatabase, for collecting the lava flow fields, their main vents, and the eruptive fissures, respectively. The analysis of the NGM allowed the identification of the limits and locations of the lava flows during the 67 selected eruptions. Nevertheless, it should be noted that a single eruptive event can produce several lava flows that can be fed by different vents along the same fissure, e.g. the 2002-2003 eruption on the north-east flank of Etna [Andronico et al. 2005]. In such cases, only the main flows, which are those having the longest duration and that covered the greatest distances from the vent and/or the widest areas, were generally considered. Minor lava flows were excluded because they pose little concern for risk reduction purposes with respect to the main lava flow, and thus they are not specifically relevant when evaluating the spatial distribution of the lava flow fields. In a few cases, the vents that fed lava flows that covered small areas were included, if they threatened inhabited areas and infrastructure, as during the 2001 eruption [Scifoni et al. 2010]. Instead, multiple lava flows were considered when they were on different flanks of the volcano, e.g. during the 1879, 2001 and 2002-03 eruptions [Branca and Del Carlo 2004, Coltelli et al. 2007, Andronico et al. 2005]. The polygons that define the geometry of the selected lava flow fields were collected in the geodatabase, and the covered area was measured for each polygon and associated to the corresponding record, together with its outcropping area. This parameter qualitatively classifies the present outcrop with respect to the area that each flow field would have originally covered. Indeed, older lava flows might have been covered by more recent eruptions, and thus their limits appear incomplete or are missing on the NGM although they have a greater outcrop in previous maps. The flow fields can therefore be distinguished as: totally or almost totally outcropping, and partially, widely or totally hidden. Auxiliary information (e.g. eruption year and flow name, eruption style, duration and area, geological map from which the limits were traced, outcropping area, volume, outcropping fracture length, main vent location) is also stored in the attribute table of the polygonal layer of the geodatabase that contains the main lava flows.
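For readers who want a concrete picture of the three-layer design, the following minimal sketch models the polygonal, punctual and linear layers as plain Python records. The field names are illustrative only; the actual work was done in ArcGIS, not Python:

```python
from dataclasses import dataclass
from typing import List, Optional, Tuple

Coord = Tuple[float, float]  # (easting, northing) in metres

@dataclass
class LavaFlowField:          # polygonal layer
    eruption_year: str        # e.g. "1811" or "2002-03"
    outline: List[Coord]      # polygon vertices traced from a map
    source_map: str           # e.g. "NGM" or "Waltershausen 1845-59"
    outcrop_class: str        # "TO", "ATO", "PH", "WH" or "TH"
    area_m2: float
    volume_m3: Optional[float] = None
    duration_days: Optional[float] = None

@dataclass
class MainVent:               # punctual layer
    eruption_year: str
    position: Coord           # cartographic coordinates of the main vent
    elevation_m: float

@dataclass
class EruptiveFissure:        # linear layer
    eruption_year: str
    trace: List[Coord]
    length_m: float
    outcrop_class: str        # present outcrop, as for the flow fields
```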
Different layers of the geodatabase were also defined for collecting the main vents associated to each lava flow, as well as the eruptive fissures. The attributes associated to the last layer are the fissure length and considerations on their present outcrop, likewise for the lava flow fields. A single vent (called the main vent) was associated to the main lava flows identified on the NGM, and its cartographic coordinates and elevation were measured. The main vent was located along the eruptive fissure, generally where most of the lava was emitted, or at the fissure center, when the lava was uniformly distributed along it. The spatial distribution of the vents relative to the Etna flank eruptions over the last 400 years was necessary for computing the Etna hazard map, as reported in Cappello et al. [2011]. Data analysis As the evaluation of volcanic hazard is generally based on the past eruptive behavior, the data available for these 67 Etna flank eruptions over the last 400 years were analyzed. In particular, the distributions of durations and volumes, and the positions of the main vent, were investigated, which allowed the definition of a set of eruptive classes that can be used to carry out a number of simulations for the implementation of the Etna hazard map [Cappello et al. 2011]. To increase the information on the spatial distributions of the lava flows as much as possible, all of the available maps were analyzed together with the NGM. In particular, the Waltershausen [1845-59] and Romano [CNR, 1979] geological maps were used to retrieve the lava fields during the 19th century and the 19th and 20th centuries, respectively. The maps reported in Azzaro and Neri [1992] show the lava flow fields between 1971 and 1991. Finally, the orthophotographs allowed the complete mapping of the 1991-1993 and 2004-2005 lava flows in the Valle del Bove, as well as the 2001 lava flows. Ten effusive events among the 67 were excluded because their records are not complete, for different reasons. In particular, five lava fields cannot be located: four (1643, 1702, 1755 and 1918) do not outcrop in any of the maps, and one (1975) cannot be distinguished from another flow field (1975-77) that occurred soon after in the same area. Moreover, three events (1956, 1964 and 1975-77) have unknown volumes, and two (1682 and 1689) have unknown durations and volumes. Nine eruptions were also excluded because they represent exceptional events, due to their extraordinarily brief (1869, 1908 and 1942) or long (1651-54 and 1614-24) durations, as well as their extremely low (1883, 1968, and December 1985) or high (1651-54 and 1669) volumes. Duncan et al. [1981] had already indicated that the 1669 eruption represented a peak of the activity over the last 400 years and that its inclusion in the calculations might distort any general predictive model based solely on more normal flank activity. Forty-eight flank eruptions were therefore available for the analysis.
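The exclusion logic just described amounts to a small data-cleaning step. A sketch with invented event records follows (the field names and values are illustrative, not the study's actual tables):

```python
# Illustrative filtering of the eruption catalog (hypothetical records).
events = [
    {"year": "1811", "duration_days": 70.0, "volume_m3": 25e6, "locatable": True,  "exceptional": False},
    {"year": "1669", "duration_days": 122.0, "volume_m3": 600e6, "locatable": True, "exceptional": True},
    {"year": "1682", "duration_days": None,  "volume_m3": None,  "locatable": True, "exceptional": False},
    {"year": "1918", "duration_days": 1.0,   "volume_m3": 1e6,   "locatable": False, "exceptional": False},
]

def usable(e):
    # Keep an event only if it can be located on a map, has a known
    # duration and volume, and is not an exceptional (outlier) eruption.
    return (e["locatable"]
            and e["duration_days"] is not None
            and e["volume_m3"] is not None
            and not e["exceptional"])

selected = [e for e in events if usable(e)]
print(len(selected))  # -> 1 in this toy example; 48 of 67 in the study
```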
To analyze the spatial distributions of these 48 lava fields, the geometry of those that are partially, widely or totally hidden on the NGM was updated. In particular, their limits were modified on the basis of the additional data sources, starting from what was still outcropping on the NGM. For example, the 1811 and 1819 lava flow fields (Figure 2) in the Valle del Bove were buried by several eruptions, and so their original geometries were retrieved from the Waltershausen [1845-59] map. Their outcropping areas measured on the NGM are about 0.26 × 10^6 m^2 and 0.13 × 10^6 m^2, and correspond to 5.5% and 2.2% of the values measured in the Waltershausen [1845-59] map (4.77 × 10^6 m^2 and 5.98 × 10^6 m^2, respectively). After analysis of all the available data sources, the parameters defining the flow outcrop were updated (Figure 3), showing that two hidden flows were retrieved, and that the definition of the planar expansion of the lava flows was improved. Indeed, the percentage of partially, widely or totally hidden flows decreased from 52% to 17%. The recurrence of each of the six classes can be seen in the pie diagram in Figure 5b, which shows that class 1 is the most recurrent (39.6%), followed by class 5 (20.8%), class 6 (16.7%) and class 2 (14.6%); class 4 contains the lowest number of events (8.3%), while class 3 is empty. The ranges, means and standard deviations of the durations, volumes and elevations of the main vent were also evaluated, for the whole volcano and for the events in each class (Table 1). The same analysis was performed by dividing the volcano into three radial sectors that are centered on the summit craters (Figure 6a). The zeros and minima of the distribution of fissure orientations, relative to the last 15 ka [Azzaro et al., in press], were considered as the delimiting angles of each sector (355°-115° for sector 1; 115°-225° for sector 2; 225°-355° for sector 3), according to the Duncan et al. [1981] analysis. These authors observed that the sectorial distribution of all of the adventive cones over the last few thousand years is very similar to that of the flank eruptions between 1537 and 1974, thus strongly implying that the distribution remains fairly constant over a time scale of hundreds to thousands of years. The sectorial analysis presented here shows that most of the flank eruptions over the last 400 years occurred in sectors 2 and 1 (Figure 6b), which include the south and north-east rifts, respectively. The frequency of the six classes in the three sectors was analyzed and is shown in Figure 7, which allows the identification of the most recurrent class, which was different for the various sectors. Class 1, i.e. that containing brief eruptions that emitted low lava volumes, was confirmed as having the highest number of events in sectors 1 and 3, where it represents 50.0% and 55.6% of the events, respectively (Figure 7b, d). Class 5, i.e.
long eruptions that emitted medium lava volumes, was the most recurrent (33.4%) in sector 2 (Figure 7c). It should also be noted that in sector 2, which coincides with the most densely inhabited slope of Mount Etna, class 6 events (long eruptions that emitted large lava volumes) have a recurrence of 23.8%, as does class 1. The ranges, means and standard deviations of the durations, volumes, and main vent elevations were then evaluated for each sector, for all of the events, and for the most recurrent classes (Table 2). This analysis shows that class 1 has different ranges and means in sectors 1 and 3. The same behavior can be seen when looking at the results of all of the events in each sector. Resurfacing map of the flank lava flows A resurfacing map was also constructed that shows how many times each point of the surface of Etna (in practice, each pixel of the DEM adopted for the simulations) has been covered by a lava flow field over the last 400 years. To retrieve as much information as possible, the smaller lava flows that occurred during events that produced multiple flows, and that were excluded when defining the vent positions, were now taken into account, giving a total of 97 flow field limits. The resurfacing map (Figure 8a) shows that the more frequently covered areas are the Valle del Bove, as well as the south flank, and that the north-east rift was relatively active. Then the map was normalized by dividing it by 97, the total number of flow fields identified. The normalized resurfacing map (Figure 8b) counts how many times a pixel was covered with respect to the total number of lava flows, and it represents the only quantitative information available for the description of the spatiotemporal evolution of these Etna flank eruptions. It should therefore be considered in the validation processes of the lava flow hazard maps that are implemented during the LAVA project. Discussion and conclusions This study analyses the Etna flank eruptions over the last 400 years on the basis of data collected from historical catalogs and geological maps. A geodatabase was implemented for the collection of all of the information available for each eruption, e.g. the georeferenced flow limits (geographic records) and the geological maps on which they were drawn, the definition of the present flow outcrop, the flow areas, and the information collected from historical catalogs, i.e. eruption durations and lava volumes. The connection of the information derived from different data sources has facilitated the carrying out of the main tasks of this study, namely the definition of the eruptive classes and the evaluation of the Etna resurfacing map. A classification based on the durations and volumes of the main lava flows over the last 400 years is presented, with 45 days, and 35 × 10^6 m^3 and 100 × 10^6 m^3, set as the thresholds of the durations and volumes, respectively. Nevertheless, the analysis shows that five classes are sufficient to include all of the eruptions considered, because class 3 (short duration and high volume eruptions) is empty, while class 4 (long duration and low volume eruptions) is poorly represented.
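The two duration bins and three volume bins defined by these thresholds combine into the six classes. A minimal Python sketch of the classification follows; note that the numbering of class 2 (short duration, medium volume) is inferred from the class descriptions quoted above rather than stated explicitly, so the mapping should be treated as an assumption:

```python
def eruptive_class(duration_days: float, volume_m3: float) -> int:
    """Classify a flank eruption by the thresholds used in the text:
    45 days for duration; 35e6 and 100e6 m^3 for volume.
    Class numbering follows the descriptions in the paper
    (class 1 = short/low, class 5 = long/medium, class 6 = long/high);
    class 2 (short/medium) is inferred."""
    short = duration_days < 45
    if volume_m3 < 35e6:
        v = 0          # low volume
    elif volume_m3 < 100e6:
        v = 1          # medium volume
    else:
        v = 2          # high volume
    return (1 + v) if short else (4 + v)

print(eruptive_class(200, 150e6))  # -> 6 (long duration, high volume)
print(eruptive_class(30, 10e6))    # -> 1 (short duration, low volume)
```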
To explain the results obtained for classes 3 and 4, it should be taken into account that two effects control the rates and durations of magma discharge: the balance between the driving and the lithostatic pressures, and the thermal evolution of the magma and the host rock. As an eruption continues, the driving pressure gradually decreases, and it can reduce the flow rate [Bruce and Huppert 1990]. The effect of the driving pressure, and in particular of the de-pressurization of the magma source, is mainly due to relaxation of the reservoir (elastic contraction of the magma body), which was first studied by Wadge [1981]. He analyzed effusion rate trends for various volcanoes, among which was Mount Etna, and he identified a two-phase trend: "waxing", characterized by a rapidly increasing flow rate, followed by "waning", a longer period in which the flow rate decreases exponentially. He also observed that the maximum effusion rate is directly proportional to the magma overpressure. When looking at the effusive events analyzed here, a long waning phase might have been responsible for the high volume eruptions (V > 100 × 10^6 m^3), and indeed, these generally had durations longer than 170 days, which correspond to mean effusion rates lower than 6 m^3/s. This quite low value suggests that most of the eruption could have been characterized by a stable and low discharge related to a long equilibrium phase between the internal pressure of the dike and the lithostatic pressure which, together with the lava cooling, tends to close the dike. Otherwise, the early end of brief eruptions (< 45 days) was probably due to an internal dike pressure that was not sufficient to keep the dike open. Moreover, an eruption shorter than 45 days would have a lava volume higher than 100 × 10^6 m^3 only if its mean effusion rate was higher than 26 m^3/s. Such a high mean effusion rate (total volume divided by total duration) was observed only for eruptions with durations of less than 31 days, which are thus characterized by relatively short waning phases. Finally, as the peak effusion rate is proportional to the initial driving pressure of the reservoir [Wadge 1981], and as the effusion rate decreases rapidly during the waning phase, a lava volume as high as 100 × 10^6 m^3 cannot be emitted during brief eruptions (D < 45 days). For these reasons, over the last 400 years Etna did not produce eruptive events that belong to class 3 (short duration and high volume eruptions). Class 4 (long duration and low volume eruptions) is poorly represented for a similar reason: to have a long duration, the internal lava pressure should be high enough to keep the dike open, and this more commonly resulted in medium-high lava volumes, and thus in eruptive events that belong to classes 5 or 6. After defining the eruptive classes, two different analyses were performed: a global analysis of the whole volcano, and a sectorial one that considered three radial sectors centered on the summit craters. The recurrence of each class was analyzed through pie diagrams (Figure 7), which show that for the whole volcano, most of the eruptions considered (39.6%) belong to class 1 (short durations and low volumes). This was also confirmed when looking at the sectorial analysis in sectors 1 and 3.
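The 26 m^3/s figure in the pressure-balance argument above is simple arithmetic on the class thresholds; a quick check:

```python
SECONDS_PER_DAY = 86_400.0

def mean_effusion_rate(volume_m3: float, duration_days: float) -> float:
    """Mean effusion rate = total volume / total duration, in m^3/s."""
    return volume_m3 / (duration_days * SECONDS_PER_DAY)

# A 45-day eruption must average roughly 26 m^3/s to emit 100e6 m^3,
# which is why class 3 (short duration, high volume) is empty at Etna.
print(round(mean_effusion_rate(100e6, 45), 1))   # -> 25.7
```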
Sector 2, coinciding with the southern Etna flank, mainly contains (33.4%) class 5 events (long durations and medium volumes). This sector also has a higher frequency (23.8%) of class 6 events (long durations and high volumes) with respect to the whole volcano (16.7%). Moreover, class 6 has the same frequency as class 1. Taking into account that sector 2 coincides with the most inhabited area of the volcano, the corresponding hazard for this sector should be assessed more carefully. The results obtained also suggest that the south-east rift is the area that has most frequently been affected by magma intrusions since 1610. The ranges, means and standard deviations of the volumes, durations and elevations of the main vent were then evaluated for each class for the whole volcano (Table 1), as well as for the most recurrent classes in the three sectors (Table 2). The means can be considered as the representative values to be assigned to each class, while the standard deviations show the data dispersion. By assuming that the future behavior of Etna will be the same as in the recent past, this analysis furnishes the input parameters (duration and volume) to be used in simulations for compiling an Etna hazard map. Different representative values should be considered in the three sectors, because each class has different ranges, means and standard deviations of the volumes, durations and elevations of the main vent. Finally, the resurfacing maps (Figure 8) allow the evaluation that the areas that have been most invaded by lava flows are the Valle del Bove and the south flank. As a probabilistic representation of Etna flank eruptions, the normalized resurfacing map can be compared with the hazard maps produced during the LAVA project, as support in their validation.
Figure 1. Etna flank lava flows over the last 400 years mapped from the NGM (orange) and Etna volcanics (beige). Inset: location of Mount Etna (red dot) with respect to Sicily and Italy.
Figure 2. Example of delimitation on the Waltershausen [1845-59] geological map of lava flow fields that are widely hidden on the NGM. The limits of the 1811 and 1819 flow fields defined on the Waltershausen [1845-59] map are drawn as blue and red lines, respectively; the red and blue filled areas represent their portions outcropping on the NGM. Green lines and filled area define the 1811 scoria cone on the Waltershausen [1845-59] map and on the NGM, respectively.
Figure 3. Classification of the lava flow fields over the last 400 years as: totally or almost totally outcropping (TO or ATO), and partially, widely or totally hidden (PH, WH or TH). Green bars refer to the flows defined only on the NGM, while gray bars refer to all of the datasets. The percentages associated with each group are given above the corresponding bar.
Figure 4. Distributions of the durations (a) and volumes (b) of the 48 selected flank eruptions over the last 400 years. The bar widths are 5 days and 5 × 10^6 m^3, respectively.
Figure 6. a) Subdivision of the Etna volcano edifice into three sectors, and classification of the main vents on the basis of the six eruptive classes. b) Pie diagram showing the percentages of the vents located in each sector.
Figure 7. a) Distributions of the six classes in the three sectors into which Etna was divided. The legend refers to all of the panels. b-d) Pie diagrams showing the distributions of the six classes in: sector 1 (b), sector 2 (c) and sector 3 (d).
Figure 8. a) Resurfacing map of all of the minor and main lateral lava flows that occurred between 1610 and 2006. b) Normalized resurfacing map.
Table 1. Number of events, ranges, means and standard deviations (σ) of the durations (D), volumes (V) and elevations of the main vent (H) for each class.
Table 2. Number of events, ranges, means and standard deviations (σ) of the durations (D), volumes (V) and elevations of the main vent (H) for all of the events and the most recurrent classes in each sector.
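The resurfacing count behind Figure 8 is, computationally, a per-pixel sum of rasterized flow outlines followed by division by the total flow count. A minimal numpy sketch on a toy grid (the masks below are invented, not derived from the DEM used in the study):

```python
import numpy as np

# Each mask marks the DEM pixels covered by one flow field (toy data).
flow_masks = [
    np.array([[1, 1], [0, 0]], dtype=bool),
    np.array([[1, 0], [1, 0]], dtype=bool),
    np.array([[1, 0], [0, 0]], dtype=bool),
]

# Resurfacing map: how many flows covered each pixel (Figure 8a analogue).
resurfacing = np.sum(flow_masks, axis=0)

# Normalized map: fraction of the total flow count (the study divides by 97).
normalized = resurfacing / len(flow_masks)

print(resurfacing)  # [[3 1] [1 0]]
print(normalized)   # [[1.0 0.33...] [0.33... 0.0]]
```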
2018-12-07T16:07:35.996Z
2011-12-16T00:00:00.000
{ "year": 2011, "sha1": "263b969f449dc1197d6a93eadb8e2c42dad91729", "oa_license": "CCBY", "oa_url": "https://www.annalsofgeophysics.eu/index.php/annals/article/download/5333/5466", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "263b969f449dc1197d6a93eadb8e2c42dad91729", "s2fieldsofstudy": [ "Geology" ], "extfieldsofstudy": [ "Geology" ] }
250563182
pes2o/s2orc
v3-fos-license
ST-T segment changes in prehospital emergency physicians in the field: a prospective observational trial Aims Due to time-critical decision-making, physical strain and the uncontrolled environment, prehospital emergency management is frequently associated with high levels of stress in medical personnel. Stress has been known to cause ischemia-like changes in electrocardiograms (ECGs), including arrhythmias and deviations in ST-T segments. There is a lack of knowledge regarding the occurrence of changes in ST-T segments in prehospital emergency physicians. We hypothesized that ST-T segment deviations occur in prehospital emergency physicians in the field. Methods In this prospective observational trial, ST-T segments of emergency physicians were recorded using 12-lead Holter ECGs. The primary outcome parameter was defined as the incidence of ST-T segment changes greater than 0.1 mV in two corresponding leads for more than 30 s per 100 rescue missions. The secondary outcomes included T-wave inversions and ST-segment changes shorter than 30 s or smaller than 0.1 mV. Surrogate parameters of stress were measured using the NASA Task Load Index and cognitive appraisal, and their correlation with ST-T segment changes was also assessed. Results Data from 20 physicians in 36 shifts (18 days, 18 nights) including 208 missions were analysed. Seventy percent of previously healthy emergency physicians had at least one ECG abnormality; the mean duration of these changes was 30 s. Significantly more missions with ECG changes were found during night than day shifts (39 vs. 17%, p < 0.001). Forty-nine ECG changes occurred between missions. No ST-T segment changes > 30 s and > 0.1 mV were found. Two ST-T segment changes < 30 s or < 0.1 mV (each during missions) and 122 episodes of T-wave inversion (74 during missions) were identified. ECG changes were found to be associated with alarms while asleep and with the NASA Task Load Index. Conclusion ECG changes are frequent and occur in most healthy prehospital emergency physicians. Even when occurring for less than 30 s, such changes are important signs of high levels of stress. The long-term impact of these changes needs further investigation. Trial registration The trial was registered at ClinicalTrials.gov (NCT04003883) on 1.7.2019: https://clinicaltrials.gov/ct2/show/NCT04003883?term=emergency+physician&rank=2 Background Emergency physicians are exposed to high levels of physical and psychological stress [1][2][3]. This is due to high-stakes medical decision-making, time constraints and working out of hours. Furthermore, physically challenging access to the patients and constantly changing teams can contribute to stress [4]. Electrocardiographic (ECG) changes can be a sign of stress, with a broad spectrum of effects including an elevated risk for cardiac events, burnout and fatigue [5,6]. It has been shown that physical and mental stress leads to changes in the ST-T segment. Changes in the ST-T segment are generally seen as a sign of coronary ischaemia [7,8]. Among airline pilots with previously known significant coronary disease, 25% developed ST segment changes during aviation mental stress tests [9,10]. Among marathon runners, 7.5% of participants without known cardiac disease showed ST-T segment changes, and 8% of participants in a ski marathon showed ST trace depressions of variable duration.
Although an increase in troponin was observed after these events in marathon participants, the five-year follow-up did not reveal a higher rate of cardiac events such as myocardial infarction, arrhythmias, or death [11,12]. ST-T segment changes have not been reported in medical personnel until now. Prehospital emergency care is a high-stakes domain with exposure to increased levels of stress. In this trial, we hypothesized that ST-T segment changes occur in healthy emergency physicians during prehospital emergency care. Study design A prospective single-blinded observational trial was conducted. It was registered at ClinicalTrials.gov (NCT04003883). Outcomes, including the definition of corresponding leads, were defined following the European Society of Cardiology's (ESC) guidelines on acute myocardial infarction [7,8]. The primary outcome of this trial was the incidence of ST-T segment changes > 0.1 mV for more than 30 s in two corresponding leads per 100 missions. As secondary outcomes, potential indicators of ischaemia were used: the incidences of (a) changes in ST-T segments (< 0.1 mV or < 30 s) per 100 missions, (b) new-onset T-wave inversions for more than 30 s per 100 missions, and (c) T-wave inversions for less than 30 s per 100 missions. Furthermore, we assessed the correlation of influencing parameters with ECG changes: (1) the relation of the different phases of missions to the ECG changes listed above, (2) the correlation of the ten most stressful alarm codes with ECG changes, and (3) the relation of special events logged (intubation, pediatric emergency, …) to ECG changes. To assess the psychological stress during missions, the National Aeronautics and Space Administration Task Load Index (NASA-TLX) and cognitive appraisal were used [16][17][18]. The correlation of the 10 most stressful and less stressful mission codes with surrogate parameters of stress measured using the NASA-TLX and with cognitive appraisal was assessed. Population The included emergency physicians were anaesthesiologists, emergency medicine consultants and senior anaesthesia or emergency medicine residents with prehospital emergency medicine credentials and no previously known underlying cardiovascular diseases. The exclusion criteria were as follows: known pregnancy, pre-existing cardiac diseases (valvular heart disease > I°), any form of cardiomyopathy or channelopathy diagnosable with ECG, echo or ergometry, history of coronary artery disease, history of myocarditis, known high-degree (> 1% of all beats within 24 h) premature atrial or ventricular beats or atrial fibrillation or conduction disturbance, any antiarrhythmic therapy, any implanted cardiac device and manifest hyperthyroidism. Written informed consent was obtained from every participant prior to data collection. To ensure that no pre-existing cardiologic pathologies were present, every participant was tested, including a medical history, a 12-lead resting ECG, a transthoracic echocardiography and a 24-h ECG during a day off, as well as an ergometry. Participants with abnormal test results indicating relevant cardiac pathologies were excluded from the trial and referred to the cardiology department. Data obtainment The advanced life support unit at the Medical University of Vienna is manned by an emergency physician, mainly from the anaesthesia department or the emergency medicine department, and a paramedic from the Medical Emergency Service Vienna. Shifts lasted approximately 8-16 h. ECGs were recorded for one day and one night shift for each participant.
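For clarity, the primary-outcome criterion defined above can be expressed as a small detection routine. This is illustrative only: the actual analysis used the Holter manufacturer's software, and a 1 Hz ST-deviation series per lead is assumed here:

```python
import numpy as np

def st_episode(deviation_mv: np.ndarray, threshold_mv=0.1, min_s=30) -> bool:
    """True if |ST deviation| exceeds threshold_mv for at least min_s
    consecutive samples (one sample per second assumed)."""
    above = np.abs(deviation_mv) > threshold_mv
    run = 0
    for flag in above:
        run = run + 1 if flag else 0
        if run >= min_s:
            return True
    return False

def primary_outcome(lead_a: np.ndarray, lead_b: np.ndarray) -> bool:
    """The change must appear in two corresponding leads at the same
    time; the elementwise minimum enforces 'both leads exceed'."""
    joint = np.minimum(np.abs(lead_a), np.abs(lead_b))
    return st_episode(joint)

# Toy example: 40 s of 0.15 mV depression in both leads -> positive.
sig = np.concatenate([np.zeros(60), -0.15 * np.ones(40), np.zeros(60)])
print(primary_outcome(sig, sig))  # True
```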
After arrival at the ambulance station, the participants were asked to take 10 min to relax, after which the ECG was attached. Electrodes were placed in a standard 12-lead formation. During the shift, participants were asked to keep a log of the missions. This log contained information about the diagnoses of the patients treated, patient age (< or ≥ 18 years), special events, and procedures (intubation, i.v. medication, cardiopulmonary resuscitation, and other invasive procedures). Additionally, chest pain experienced by the participant was recorded. Participants were asked to mark whether alarms were received during sleep or while awake. All participants were instructed to use the pager alarm system, as is local standard practice for alerts during the night, to create standardized conditions. Surrogate parameters of stress Surrogate parameters of stress were obtained using cognitive appraisal and the TLX. Cognitive appraisal was measured en route to the patient and after handover of the patient using the method described by Tomaka et al.: the cognitive appraisal index was calculated from the ratio of the expected (primary) appraisal and the real (secondary) appraisal, each rated on a 10-point Likert-like scale. An index < 1 indicates that resources did not meet demands, and the task is appraised as a "threat", while an index > 1, where resources were greater than demands, indicates a "challenge" [17,18]. The NASA-TLX is widely used in health care and was developed to assess the workload of a task across six subscales: Mental Demand, Physical Demand, Temporal Demand, Performance, Effort and Frustration [19][20][21][22][23]. The NASA-TLX was measured after the mission to assess the participant's individual workload. Data analysis After the shifts, the ECGs and the participants' logs were collected and saved for analysis. ECG analysis was conducted by MM, supervised by a senior cardiologist (TP). Investigators analysing the ECGs were blinded to the participants' names and the details of the missions, including the logs. ECGs were analysed after recording using the software supplied by the manufacturer (medilog DarwinV2 2.*-Schiller AG, Switzerland, 2017) with the aim of identifying ST-T segment changes, T-wave inversions and other ECG abnormalities. Missions were divided into four phases: alarm (two minutes before the alarm until confirmation of the alarm), en route (while en route to the patient), patient care (arrival at the patient until departure from the scene), and transport to hospital (if done). The ten most stressful alarm codes were identified preliminarily via a modified Delphi process [24,25].
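Two pieces of per-mission bookkeeping described in this section, the cognitive appraisal index and the phase assignment, can be sketched as follows. The direction of the appraisal ratio is ambiguous in the text, so the resources/demands orientation below is an assumption, chosen so that values below 1 read as a threat; the phase boundaries are simplified to start times:

```python
from bisect import bisect_right
from typing import Optional

def appraisal_index(primary_demands: float, secondary_resources: float) -> float:
    """Cognitive appraisal index per Tomaka et al.: ratio of the two
    10-point ratings. Assumed orientation: resources / demands, so that
    < 1 reads as 'threat' and > 1 as 'challenge'."""
    return secondary_resources / primary_demands

def mission_phase(t: float, alarm_t: float, enroute_t: float,
                  care_t: float, transport_t: Optional[float]) -> str:
    """Assign a timestamp (seconds since shift start) to one of the four
    mission phases. Simplification: each boundary is the phase start;
    the 'alarm' phase begins two minutes before the alarm itself."""
    bounds = [alarm_t - 120, enroute_t, care_t]
    names = ["alarm", "en route", "patient care"]
    if transport_t is not None:
        bounds.append(transport_t)
        names.append("transport to hospital")
    i = bisect_right(bounds, t) - 1
    return names[i] if i >= 0 else "before mission"

print(appraisal_index(8, 4))                    # 0.5 -> threat
print(mission_phase(130, 120, 400, 900, None))  # 'alarm'
```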
Statistical analyses According to previous data and unpublished local observations during a pilot phase of the project, the incidence of significant ECG changes, defined as ST-T segment changes > 0.1 mV and > 30 s, was expected to occur in 10% of all prehospital emergency response missions in physicians [11,12]. As the workload during shifts is heterogeneous, we used a convenience sample of 25 physicians in each shift type (day, night), resulting in 50 shifts (300 expected missions, range: 150-600). For all primary and secondary analyses, prehospital emergency response missions were considered the unit of observation to which all measurements were standardized. Descriptive statistics such as the mean and standard deviation were computed for all metric variables. Absolute and relative frequencies were calculated for categorical variables. Descriptive statistics were computed for the overall data and for each grouping variable. A two-sided Student's t-test and chi-square test were used as appropriate to assess relations between primary and secondary, as well as within secondary, outcome parameters. Secondary outcome parameters were used to generate hypotheses. Therefore, no correction for multiple testing was performed, and p values < 0.05 were considered statistically significant. All analyses were performed using Python 3.8, mainly the pandas and numpy packages [26,27]. Ethical approval from the Medical University of Vienna's Institutional Review Board (EK 1648/19), the Workers' Council and the Data Protection Commission was obtained prior to inclusion of the first proband. Results The study population consisted of 25 emergency physicians. After the cardiac tests, one physician had to be excluded due to a pre-existing ventricular septal defect resulting in ECG abnormalities. All others had a normal resting ECG, ergometry, echocardiography and 24-h Holter ECG during off-duty time. During the study period, four more physicians had to be excluded because they stopped doing their shifts in preclinical emergency medicine due to paternity leave, illness or a change of workplace, resulting in 20 emergency physicians included in the study (Fig. 1). Except for 4 physicians who only did one shift, all others did one night and one day shift, resulting in 36 recorded (18 day and 18 night) shifts with a total of 208 missions between 2019-11-15 and 2021-03-27. Details can be found in Table 1, and the ECG findings are summarized in Table 2. ECG changes were not distributed equally between the four phases of a mission: alarm, drive to the patient (en route), patient care, and transport to hospital (p < 0.001). The majority of ECG findings occurred during the alarm phase (41.5%), followed by the patient care (30.8%), en route (20.0%) and patient transport (4.6%) phases. A total of 3.1% of changes occurred within 5 min after a mission and therefore were not assigned to one of the predefined phases such as patient care (Fig. 2). ECG findings In the 24-h baseline Holter ECGs, one ECG change (T-wave inversion) was found. In the log, the participant noted that he was woken by the washing machine's alarm in the middle of the night. Overall, the signal quality of the Holter used was excellent, with only one episode of missing registration. Surrogate parameters of stress Data on cognitive load were available for 167 missions (80%), and the NASA-TLX was available for 192 missions (92%). The mean cognitive load was 0.6 (SD: 0.67), indicating that the missions were mainly perceived as a challenge, not as a threat. The mean NASA-TLX score was low, at 25.6 (SD: 20.5). No significant correlation between missions with ECG changes and cognitive load or TLX was found (CL: t-test, p = 0.4; TLX: t-test, p = 0.4) when including all missions. When considering only the emergency physicians who had ECG changes during patient care, scores on the NASA-TLX were significantly higher when ECG abnormalities were present (t-test, p = 0.03, Fig. 3), as well as in missions classified as stressful in the preliminary Delphi process (t-test, p < 0.01, Fig. 3). The ten alarm codes perceived to be most stressful by emergency physicians are presented in Table 3. Of the 208 studied missions, only two had stressful codes. During those two missions, no ECG changes occurred. The events marked by the physicians (paediatric emergency, i.v. medication, intubation and polytrauma) had no significant association with ECG changes (chi-square, p > 0.05).
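The two tests used throughout the Results are standard calls in the Python toolchain the authors report (Python 3.8, pandas, numpy); a minimal sketch with made-up numbers, with scipy assumed available although the paper names only pandas and numpy:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Made-up NASA-TLX scores for missions with / without ECG changes.
tlx_with_ecg = rng.normal(35, 20, size=40).clip(0, 100)
tlx_without_ecg = rng.normal(25, 20, size=150).clip(0, 100)
t, p = stats.ttest_ind(tlx_with_ecg, tlx_without_ecg)
print(f"t-test: t={t:.2f}, p={p:.3f}")

# Made-up 2x2 table: alarm during sleep (rows) vs ECG change (columns).
table = np.array([[30, 20],    # asleep: ECG change / no change
                  [40, 118]])  # awake:  ECG change / no change
chi2, p, dof, _ = stats.chi2_contingency(table)
print(f"chi-square: chi2={chi2:.2f}, p={p:.3f}")
```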
When the alarm occurred during a sleep phase, significantly more participants had ECG changes (chi-square, p = 0.001). Discussion Prehospital emergency medicine is both physically and psychologically challenging, leading to relevant ECG changes. The incidence of ST-T segment changes remained unclear; the aim of this trial was therefore to determine the incidence of ST-T deviations and other ECG changes, such as T-wave inversions, in emergency physicians. By recording ECGs of emergency physicians during their shifts, we aimed to close this knowledge gap. In contrast to the primary hypothesis, we found no significant ST-T deviation that fulfilled the primary outcome criteria as defined by the ESC. Nevertheless, minor ST-T deviations and a considerable number of T-wave inversions were detected frequently, especially during missions at night. A high number of T-wave inversions was seen between missions. However, these changes did not correlate with the predefined stressful codes. To our knowledge, this is the first trial investigating ST-T deviations in healthy prehospital emergency physicians while on shift. In contrast to the previously published trial by Doorey, which focused on pilots with known coronary artery disease showing signs of ischemia during stress, this trial concentrated on participating physicians without a pathological cardiac history. Even in this population, ST-T deviations and T-wave inversions were frequent. It is commonly accepted that ST-T segment deviations are typically caused by ischemia [10]. While the follow-up of the marathon trial observing ST deviations during running showed no increased incidence of cardiac events after one year, silent ST-T changes in exercise testing were linked to an increased risk of cardiac death [11,28,29]. Another factor known to cause changes in the ST-T segment is stress [9-12, 30, 31]. However, the effect of stress-induced ST-T segment changes is not fully understood. It seems likely that the reported ECG changes are partly attributable to stress. This is supported by the fact that most changes were seen during the alarm and patient care phases, when there was a combination of psychological and physical stress. Whether this stress results from the loud noise of the pager, sudden awakening, the rapid change into an upright position or the psychological stress induced by patient care remains unclear. In a tilt-table test, it was shown that a rapid change in position can cause ST-T depression and T-wave inversion even in patients with no known cardiac disease [31]. As the participants had these changes not only when sitting up, not all changes can be explained by this mechanism. A rather large portion of the ECG changes occurred while treating a patient. During this time interval, many of the stressors present during the alarm are absent. Therefore, it seems likely that the ECG changes reflect stress. A further possible explanation for stress-induced ST-T segment changes is an autonomic conflict between sympathetic and parasympathetic responses, which can be shown especially during abrupt awakening from deep sleep. Shattock et al. studied this by immersing participants in cold water [32]. ECG changes in our trial that were observed when the alarm occurred during sleep might be attributed to such an effect. Certain limitations of this study must be acknowledged. This trial was a single-blinded, single-centre observational study. ECGs were analysed by hand with the support of software.
This absence of a consistent four-eye principle may have led to some ECG changes being missed or overinterpreted. To reduce the possibility of bias, a senior cardiologist re-evaluated borderline ECG changes and reviewed a sample of the ECGs in a routine way. Another possible limitation is the participants' coronary risk. We tried to minimize this bias by performing extensive testing (ECG, 24-h ECG, echocardiography, blood samples, ergometry). Indeed, the authors had to exclude one participant due to abnormalities in the 24-h baseline ECG. Due to the setting in preclinical emergency medicine, the conditions were not standardized. Stress is a very individual parameter, which represents an important limitation. By using the NASA-TLX and cognitive appraisal, we aimed to quantify these different stress levels. Furthermore, this trial examined a single observation of each participant's cardiovascular response to stress. As more invasive measurements such as serial troponins were not possible in the trial's setting, these data were not collected and need to be assessed in future trials. Our results are therefore hypothesis-generating by nature. Emergency physicians at our centre are very highly trained and are able to work from a position of good health and similar socioeconomic status; this makes the population a very homogeneous group, which is another limitation of this study. The study had to be paused at the beginning of the COVID pandemic due to roster changes and the hygiene authority's concerns about handing the 12-lead ECG over from physician to physician (16/3/2020 - 1/6/2020). This and the duty roster of the participants led to a rather long study period of 17 months in total. During that time, the length of shifts changed from 12 h + 12 h (day/night) to 8 h + 16 h (day/night). As in the general Austrian emergency physician population, 60% of the study population was male. A gender difference is known regarding risk factors, symptoms and diagnosis of ischemic heart disease [13]. This also includes differences in ECG changes, especially the prevalence and type of signs of ischemia [14,15]. The origin of the ECG changes cannot be definitively identified in this trial; therefore, further trials will be necessary to determine the origin of these changes. Important questions remain regarding the long-term impact of these ECG changes. To quantify whether these changes in the ST-T segment can help identify individuals at risk of adverse cardiac events, a long-term analysis is needed. An analysis including troponin to detect ischemic damage to the heart muscle will be needed to resolve the definite impact of these ECG changes at a cellular level. Using our data, it will be possible to conduct interventional trials to understand the reasons for occupational stress and to evaluate methods to reduce it. Stress levels are obviously very high in preclinical emergency medicine. Our trial provides data showing that ST-T segment changes, especially T-wave inversions, are common in medical professionals working in high-stakes environments. Conclusion In our group of healthy emergency physicians, ECG abnormalities with a possible ischemic origin were frequently seen, mostly T-wave inversions. At least one ECG abnormality was found in 70% of the included emergency physicians. Alarms during sleep were significantly associated with ECG changes. There was a significant association between the NASA Task Load Index and changes in the ECG, showing the impact of stress on such changes.
2022-07-16T13:32:42.886Z
2022-07-15T00:00:00.000
{ "year": 2022, "sha1": "abb2e38d194444dd3d7c94d3957ea716421e9af5", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Springer", "pdf_hash": "abb2e38d194444dd3d7c94d3957ea716421e9af5", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
201616670
pes2o/s2orc
v3-fos-license
Patients’ out-of-pocket expenses analysis of presurgical teledermatology Background This study undertakes an economic analysis of presurgical teledermatology from a patient perspective, comparing it with a conventional referral system. Store-and-forward teledermatology allows surgical planning, saving both time and the number of visits involving travel, thereby reducing patients’ out-of-pocket expenses, i.e. the costs that patients incur when traveling to and from health providers for treatment, visit fees, and the opportunity cost of time spent in visits. The study quantifies the opportunity costs and direct costs of visits for adults waiting for dermatology surgery. Method This study uses a retrospective assessment of 123 patients. Patients’ out-of-pocket expenses of presurgical teledermatology were analyzed in the setting of a public hospital over 2 years. The teledermatology network covering the area served by the Hospital Garcia da Horta, Portugal, linked the primary care centers of 24 health districts with the hospital’s dermatology department. The patients’ opportunity cost of visits and direct costs of visits (transport costs and visit fees) of each presurgical modality (teledermatology and conventional referral) were simulated from the initial primary care visit until the surgical intervention. Two groups of patients, those with Squamous Cell Carcinoma and those with Basal Cell Carcinoma, were distinguished in order to compare the patients’ out-of-pocket expenses according to the dermatoses. Results From a patient perspective, the conventional system was 2.12 times more expensive than presurgical teledermatology. Teledermatology allowed a saving of €0.74 per patient and per day of delay avoided. This saving was greater in patients with Squamous Cell Carcinoma than in patients with Basal Cell Carcinoma. Although the probabilistic sensitivity analysis corroborates the results of the base case scenario, only a prospective study can substantiate these results. Conclusion In the Portuguese public healthcare system and under specific cost hypotheses, from a patient economic perspective, teledermatology used for presurgical planning and preparation is the dominant strategy in terms of out-of-pocket expenses, outperforming the conventional referral system, especially for patients with severe dermatoses. Background Time spent seeking healthcare represents a burden to patients, lost productivity to employers and society, and a potential inefficiency within healthcare systems. Opportunity costs, which value patient time based on the value of forgone activities, are one method of estimating patient time costs of visits [1,2]. Opportunity costs are increasingly relevant given the increasing emphasis on patient-centered care [3], and the recognition that telemedicine options in healthcare delivery may reduce patients' burden regarding time and expenses. One of the specialties in telemedicine, teledermatology (TD), appears as a way to deliver dermatological healthcare to underserved areas and populations. Teledermatology may be achieved by videoconference or store-and-forward. In the former, videoconference equipment is used to connect a patient with a remote consultant. In store-and-forward, specialists assess a transmitted still image. Using teledermatology, patients do not have to visit the dermatologists physically. By avoiding the need for clinic-based visits, teledermatology also saves on societal costs that are associated with patients' travel and workplace absenteeism [4,5].
Teledermatology not only decreases appointment waiting times and the amount of time needed for a consultation, but also reduces transportation costs and loss of productivity [6]. While most literature reports fewer in-person appointments, teledermatology can increase the overall appointment burden for some patients. This depends on the type of teledermatology (videoconference or store-and-forward) and on the health system in which it is implemented [7,8]. Poor health outcomes can result if at least one of the following conditions is met: the waiting time for dermatology treatment increases; the costs of visits (opportunity costs and direct costs) increase; the patient's health deteriorates [9][10][11][12]; or the patient tends to withdraw from treatment because (s)he cannot afford the cost of visits [13][14][15][16]. Teledermatology reduces the negative effect(s) associated with these risks. Especially for chronic patients, patient-assisted follow-up care at home avoids traveling to a physician and long appointments during work time. Teledermatology has been shown to be more effective in the management of circumscribed and tumoral lesions than in patients with generalized dermatoses [17]. In patients with skin cancer, store-and-forward teledermatology has been shown to be an effective triage tool that reduces the time to an initial intervention in the specialized dermatology service [18][19][20][21][22][23]. Presurgical teledermatology using a store-and-forward system may establish a correct diagnosis and even obtain sufficient information to plan a surgical intervention [24,25]. Consequently, in the field of surgical dermatology, teledermatology offers added value as a complementary tool for the assessment and presurgical management of patients. In the context of the regional hospital setting and from the patients' perspective, this study seeks to compare out-of-pocket expenses between patients whose routine care was carried out using a store-and-forward teledermatology system and a conventional referral system. It identifies and simulates out-of-pocket expenses borne by presurgical dermatology patients. Out-of-pocket expenses refer to the direct payment of money for seeking healthcare. This comprises the direct costs of visits, such as transport costs and visit fees, and the opportunity cost of visits, including the time away from paid work devoted instead to visiting the primary care provider (PCP) or hospital. This study also quantifies the out-of-pocket expenses per day of wait time for presurgical teledermatology when compared to the conventional referral system. Methods The teledermatology network covering the area served by the Hospital Garcia da Horta in Almada, Portugal, links the primary care centers (PCP) of 24 health districts with the hospital's dermatology department via the corporate intranet of the Portuguese healthcare system. Store-and-forward teledermatology is currently being used as a complementary tool for the triage of patients and the management of patient referral from the primary care center to the hospital in the Portuguese National Health System. Following the first visit to a general practitioner (GP) at the PCP, digital pictures of patients who agreed to store-and-forward teledermatology were taken and transmitted to the hospital via the intranet. Alternatively, some patients were referred to a dermatology visit at the hospital. In total, 153 patients were treated, but 30 did not require surgical intervention.
This study therefore uses a sample of 123 presurgical cases (falling between February 2016 and January 2018). A retrospective assessment was made of the clinical course of 53 patients who were managed with hospital dermatologist visits and 70 patients who were managed with store-and-forward teledermatology, from the initial primary care consultation until the surgical intervention. Activity map for Surgical Intervention The first step was to map all of the activities involved in the process until surgical intervention (see Fig. 1). This study focuses on the patients' out-of-pocket expenses; neither the direct health care expenditures (health care procedures and interventions) nor the indirect health care costs (telecommunications, information technology, and digital photography equipment) were included in the analysis. All patients visited their GP at the PCP. As shown in Fig. 1, patients who were managed with store-and-forward teledermatology needed to visit only the PCP before surgical intervention, while those who were managed with conventional care had to visit both the PCP and the hospital before surgical intervention. On the basis of this activity map, a specific cost was assigned to each visit involved in the process. Patients' out-of-pocket expenses of a visit included visit fees, the cost of travel, and the opportunity cost associated with wages lost during the visit. There were 15 patients using teledermatology who were called for an extra visit to the hospital before surgical intervention, and four patients using conventional care who were called for an extra visit. This is illustrated in Fig. 1 by the dashed arrow. Since one of the stated objectives of the teledermatology system is to reduce time to surgery, two subgroups were also analyzed. The first of these included the patients with lesions suspected of being malignant (Squamous Cell Carcinoma and Melanoma), and the second comprised those who presented the most common lesion among the patients under study (Basal Cell Carcinoma). This comparison allows us to analyze the importance of the reduction of time to surgery according to skin lesions. Table 1 shows the input data collected from the retrospective assessment of the clinical course of the 123 patients. The number of visits corresponds to the total visits the patient made between the initial primary care consultation and the surgical intervention. For both consultation types, wait time was defined as the number of calendar days that elapsed between the initial primary care consultation and the surgical intervention. The distance in kilometers (km), based on patients' zip codes, was used to calculate in Google Maps the travel distance from home to the PCP and to the hospital. Patients' ages ranged from 22 to 94 years. This study therefore refers to working and retired adults. Key assumptions The exact means of transportation that patients used for traveling to visits was not available; as a result, it was proxied by taxi, since anyone could use a taxi to travel 1 km or 20 km, from the city center or from the suburbs, allowing a comparison of transport costs among patients. The travel cost was based on the official published fares for Portuguese taxi transportation [26], by both kilometer and day fare, which were then multiplied by the travel distance in kilometers. (Fig. 1 legend: General practitioners (GP); Primary care provider (PCP); Extra visit.) The estimation of the opportunity costs for adults was based on the mean wage and the time spent on visits.
Total time per visit comprised travel time and visit time to the PCP or to the hospital. Employers in Portugal do not pay for time not worked and spent in PCP or hospital visits. Retirement-age patients (≥ 70) were not assigned an opportunity cost since they do not work. Neither the specific wage nor the employment situation was available for the individual patients. As a result, the calculation of the opportunity cost of visits, i.e. loss of pay during the visits, was based on the Portuguese average wage [27]. This study used an average of 121 min for total time per visit (with 37 min of travel time and 84 min of clinic time, including both waiting and face-to-face time), as estimated for ambulatory medical care [28]. All patients in the study are in the public national health system (NHS) and are assumed to pay the regulated basic visit fees (as per Directive no. 64-C/2016, Diário da República no. 63/2016) [29]. The basic fee for visiting a general practitioner (GP) at the PCP is €4.5 and the fee for a visit to a specialist at the hospital is €7.0. For the teledermatology diagnosis, this study used the basic fee of a hospital visit without the patient (€2.5). Taking into account the mean wait times for surgery in both presurgical TD and the conventional referral system reported in Table 1, the out-of-pocket expenses per day of wait time saved were calculated as the difference in out-of-pocket expenses relative to the difference in wait time between presurgical teledermatology and conventional referral. Results The results of the identification of out-of-pocket expenses are shown in Table 2. The patients' mean cost of travel was €23.78 overall. Presurgical TD patients paid €14.31 and presurgical conventional referral (CR) patients paid €36.29 in travel costs. The difference in the cost of travel between presurgical TD and the conventional process was large and significant (P < 0.001). The mean opportunity cost of visits was €10.74 for all patients analyzed: €15.39 for patients in the conventional referral system and €7.21 for those in the presurgical TD system. Significant differences were found between the opportunity costs of the two presurgical modalities (P < 0.001). The mean visit fee was €10.02. There was a difference of €4 between the modalities' fees, which was found to be significant (P < 0.001). Table 3 shows the detailed out-of-pocket expenses of patients managed by either presurgical TD or the conventional referral system until the surgical intervention. The first column shows the results for all patients, and the results for the two subgroups analyzed, patients with Squamous Cell Carcinoma and with Basal Cell Carcinoma, are shown in the second and third columns, respectively. The ratio between the two modalities shows that for all patients, conventional care was 2.12 times more costly to the patients than presurgical TD. In the group of patients who had Squamous Cell Carcinoma, conventional care was 2.75 times more costly, while in the group of those with Basal Cell Carcinoma, conventional care was 1.75 times more expensive than presurgical TD.
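For concreteness, the per-visit cost model of the Key assumptions section reduces to a few lines. The fare and wage constants below are invented placeholders, since the exact 2016-2018 figures are not restated here; only the regulated fees (€4.5 / €7.0 / €2.5) and the 121-minute visit time come from the text:

```python
# Hypothetical unit costs; the study used official taxi fares and the
# Portuguese average wage, whose exact values are not reproduced here.
TAXI_EUR_PER_KM = 0.50
DAY_FARE_EUR = 3.25
WAGE_EUR_PER_MIN = 0.10
VISIT_MIN = 121            # mean travel + clinic time per visit (from [28])

FEES = {"gp": 4.5, "specialist": 7.0, "teledermatology": 2.5}

def visit_cost(distance_km: float, fee: float, working: bool) -> float:
    travel = DAY_FARE_EUR + 2 * distance_km * TAXI_EUR_PER_KM  # round trip
    opportunity = WAGE_EUR_PER_MIN * VISIT_MIN if working else 0.0  # none if retired
    return travel + opportunity + fee

# Presurgical TD: one GP visit plus the TD fee; conventional referral:
# a GP visit plus a hospital dermatology visit (distances are examples).
td = visit_cost(5, FEES["gp"], True) + FEES["teledermatology"]
cr = visit_cost(5, FEES["gp"], True) + visit_cost(20, FEES["specialist"], True)
print(f"TD: {td:.2f} EUR, CR: {cr:.2f} EUR")

# Saving per day of wait avoided (mean waits: 86.09 vs 126.11 days).
print(f"{(cr - td) / (126.11 - 86.09):.2f} EUR per day")
```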
Sensitivity analysis

To test the robustness of our cost analysis, a sensitivity analysis was carried out around the assumed transport costs, since taxis are often considerably more expensive than public transport or self-driving. Sensitivity analysis was also carried out on the opportunity cost, as wage rates and employment situations were not available, and on visit fees, since patients exempt from paying basic fees, such as pregnant women and the unemployed, were not considered. The ranges used for the probabilistic sensitivity analysis are reported in Table 4. Parameters were assigned a distribution according to the methodology suggested by Briggs et al. [30], who recommend the Gamma distribution for costs, where parameters are non-negative. The results of the base-case scenario were confirmed (see Fig. 2) after 10,000 simulation draws of the mean out-of-pocket expenses.

Table 5 shows that presurgical TD was a better strategy than the conventional consultation process, with a saving of €0.74 per patient per day of wait time avoided. This saving was much greater for patients with Squamous Cell Carcinoma than for patients with Basal Cell Carcinoma (€4.35 compared to €0.38).
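A minimal sketch of this probabilistic sensitivity analysis is given below, parameterizing the Gamma distributions by the method of moments; the means and standard errors are placeholders, not the values reported in Table 4.

```python
# Probabilistic sensitivity analysis sketch: cost components drawn from
# Gamma distributions (non-negative, following Briggs et al. [30]) and the
# mean difference between modalities resampled 10,000 times. The (mean, SE)
# pairs are placeholders, not the Table 4 values.

import numpy as np

rng = np.random.default_rng(seed=42)
N_DRAWS = 10_000

def gamma_draws(mean: float, se: float) -> np.ndarray:
    """Gamma via method of moments: shape = (mean/se)^2, scale = se^2/mean."""
    return rng.gamma((mean / se) ** 2, se ** 2 / mean, size=N_DRAWS)

oop_td = gamma_draws(mean=30.0, se=4.0)  # presurgical TD (placeholder)
oop_cr = gamma_draws(mean=60.0, se=8.0)  # conventional referral (placeholder)

diff = oop_cr - oop_td
print(f"mean difference: {diff.mean():.2f} EUR")
print(f"probability TD is cheaper: {(diff > 0).mean():.3f}")
```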
Discussion

The economic analysis provides information from a series of patients whose routine care was carried out using a store-and-forward teledermatology system or the conventional referral system for presurgical assessment, in a Portuguese public healthcare setting equipped with an intranet. In the context of the regional hospital setting and the patients' perspective adopted in the analysis, this study shows store-and-forward teledermatology to be an economically advantageous method for the patients involved in presurgical assessment and management.

Considerable differences were found between the out-of-pocket expenses under presurgical TD and under the conventional process. Overall, presurgical TD was 2.12 times less costly than the conventional referral system for the patients studied. As Table 1 shows, the observed differences between the sample means are not large enough to suggest that average age or gender differ between presurgical TD and conventional referral patients. Patients managed with teledermatology made on average one visit before surgical intervention, while presurgical CR patients made on average two. The mean time to surgical intervention was 86.09 days for patients managed with presurgical TD and 126.11 days for the group managed with the conventional process. Fifteen patients using teledermatology and four using conventional care were called for an extra visit to the hospital before surgical intervention.

Systematic reviews show good diagnostic agreement between teledermatology diagnoses and the in-person clinical diagnosis or histopathology of traditional face-to-face consultations [31]. However, several factors may directly affect the reliability of teledermatology, including proper imaging, a comprehensive relevant history, and the skills of the teledermatologists and referring physicians [32]. The difference in extra visits between the two systems could reflect the fact that a lack of sufficient information to plan the surgical intervention is more frequent in teledermatology than in the conventional method.

The economic results of the use of teledermatology have been analyzed in earlier studies [2, 4-6, 25, 33], but from the point of view of out-of-pocket expenses there has been no prior analysis of the use of teledermatology in presurgical assessment and management. The economic analysis of presurgical teledermatology in patients with nonmelanoma skin cancer by Ferrándiz et al. [25] found the conventional system to be 1.78 times more expensive than presurgical teledermatology. However, direct comparisons with their results should not be made, because their travel costs took into account the type of transportation used (public, private, or medical transport), their loss-of-wages cost was based on the minimum wage, and their study included direct and indirect healthcare costs.

The expenses relative to the wait time difference suggested a €0.74 saving per patient per day of wait time avoided for patients using presurgical TD. This saving was substantially greater for patients with Squamous Cell Carcinoma than for patients with Basal Cell Carcinoma (€4.35 and €0.38 per patient per day of wait time avoided, respectively). The saving was greater among patients with Squamous Cell Carcinoma because the calculation divides the cost difference by the reduction in wait time: these patients generally required close medical follow-up, which narrowed the wait-time difference between the two modalities, and a smaller denominator yields a larger saving per day. Furthermore, two patients with Squamous Cell Carcinoma under the presurgical TD system required an extra visit to the hospital, which made that modality more expensive and thus closer to conventional care.

The study has several limitations, the most important of which concerns the quality of the data entered into the model. It was assumed that all patients traveled by taxi, whereas some may have traveled by other means, namely bus or private transport; since taxis are expensive compared with public or private transport, this may overestimate the transport cost. Patients who had difficulties traveling to the hospital (bedridden patients and those in other incapacitating situations) and who required home treatment and medical transport were not taken into account, which may underestimate the travel cost. Companions of patients were not included in the calculation of travel expenses and lost wages, which may underestimate the out-of-pocket expenses. Unemployed and/or chronically ill patients were not taken into account when calculating the opportunity cost of visits, which may overestimate the opportunity cost. Real patient salaries and the exact time spent traveling to, from, and during visits were not available, so the opportunity cost was calculated from average wages and average time spent on visits, which weakens the validity of the results. Finally, the allocation of patients to the subgroups was not random, so the analysis between subgroups is open to potential bias. Despite these drawbacks, which are typical of most model-based economic evaluations, our study helps to clarify the often contradictory research in the field of economic evaluation of teledermatology.

Future research

This study would benefit from the greater data accuracy that a prospective design would afford. In particular, a prospective study would make it possible to quantify the value of time, especially for unemployed and retired patients. It would also allow the impact of teledermatology on quality of life, and the quality of care attached to teledermatology as perceived by patients, to be studied.
Conclusion

The results of this study suggest that, in the Portuguese public healthcare system and under the specific cost hypotheses adopted, store-and-forward teledermatology applied to presurgical assessment and planning is the dominant strategy in terms of out-of-pocket expenses, outperforming the conventional face-to-face process.