Five Solar Cycles of Solar Corona Investigations

These are the memoirs of fifty years of research in solar physics, closely tied to the history of three of the major solar space missions, from the Solar Maximum Mission (SMM) to Solar Orbiter, currently navigating toward vantage points ever closer to the Sun. My interest in solar physics was stimulated by studies on cosmic rays at the University of Turin, and my research in this field began at Stanford University, as a postdoctoral fellow in the team of John Wilcox, with studies on the large-scale corona and its rotation. Thanks to Alan Gabriel, during my first space mission, SMM, I was involved in the operations and scientific data analysis of the Soft X-ray Polychromator. Together with Giancarlo Noci and Giuseppe Tondello, I participated in the realization of the UltraViolet Coronagraph Spectrometer (NASA/ASI), flown on board SOHO. After this experience came the opportunity to participate in formulating the proposal for the Solar Orbiter mission, and to guide the team that developed the Metis coronagraph for this mission, up to the delivery of the instrument to the European Space Agency in 2017.

Introduction

The Sun has been the object of my scientific interest for five full solar cycles, including the first years of research on galactic cosmic rays, when I realized that the magnetism and the activity of the Sun were affecting the flux of these particles even at relatively high energies. The second half of the twentieth century was marked by extraordinary breakthroughs in our knowledge of the Sun, and of the environment that the Sun creates, thanks to the access to space observations; when I started my research activity in solar physics, in the 1970s, the solar emission at the short-wavelength wing of the electromagnetic spectrum was at last almost fully accessible. I had the opportunity to be involved in three of the major space missions dedicated to the study of the Sun: the Solar Maximum Mission (SMM), launched in February 1980, just seven years after the Apollo Telescope Mount (ATM) on Skylab (1973 - 1974), which was the first large observatory in space; the Solar and Heliospheric Observatory (SOHO), initially dedicated to the observation of the Sun at solar minimum, launched in December 1995 at the end of solar Cycle 22; and finally Solar Orbiter, launched in February 2020 and, at the time of this writing, on its way to reach perihelia closer and closer to the Sun. With regard to the last two missions, I was involved not only in the data analysis and interpretation but also in the development of two innovative instruments for the observation of the solar corona. The first of these was the UltraViolet Coronagraph Spectrometer (UVCS) on SOHO, a NASA coronagraph built with a significant contribution from the Italian Space Agency (Agenzia Spaziale Italiana, ASI). For the second instrument, ASI took the lead in providing the ultraviolet and visible-light coronal imager Metis for the Solar Orbiter mission. In my studies and professional life, my choices were determined mainly by scientific curiosity and interests that matured progressively, as well as by the influence of outstanding scientists I had the honor to meet and, in some cases, to work with.

My Early Years

I was born in Boves, a town in the province of Cuneo (Piedmont), in the early morning of March 10, 1945, during the wartime curfew.
My first days of life were certainly not easy ones for my parents; bombings and conflicts were a daily reality. On the other hand, there was a widespread feeling that the end of the Second World War was imminent. It was indeed just a matter of weeks away, and Northern Italy would be liberated soon after, on April 25. I grew up in a loving and caring family. In the years immediately following the end of the war, still trying to overcome the moral and physical ruins of the war and to rebuild a country that had been heavily hit, people were aware that a hopefully brighter new era was beginning.

My father, Egidio, was born in Arpino, the ancient Roman town of Arpinum, birthplace of Marcus Tullius Cicero and Gaius Marius. When I was a child, people in Arpino were still tied to the glorious Roman past. The cyclopean walls of the acropolis, enclosing an almost unique ogive arch, however, testify to an important Pelasgian-Mycenean influence preceding the Roman era and place the origin of the town in more remote epochs. As a young man my father was serving in the Royal Guard in Rome when an army division, the Frontier Guard, was formed, and toward the end of 1934 he was sent as part of this new army force to the French border, to a small town of the Alps, Briga Marittima (formerly part of Italy, but ceded to France in 1947, and now named La Brigue). On Christmas Eve, just after he arrived at Briga, he met my mother, Elisabetta, who was back home after some years spent in Germany, the homeland of my grandmother, who had married a young Italian man from a family in Briga. This is why my grandparents used to send their children to spend a few years with their relatives in South Germany. My parents married soon after they met and my brother, Remo, was born in Briga before they moved to Cuneo and then to Boves.

During the last two years of the war, Boves was the site of many tragic events. My mother was a courageous woman who saved my father from being seized during a vindictive attack by the German army in retreat in the very last days of the war. She was perfectly at ease in her role of housewife and mother, although, given a chance, she would have liked to be a medical doctor. My father rejoined the army, which was reconstituted after wartime. He always encouraged me to study and follow my inclinations. When I was a child, summer holidays were often spent visiting Arpino and Sorrento, La Brigue, and Nice, and biking and hiking in the valleys close to Boves during the weekends (Figure 1).

Of the times of the primary school in Boves in the 1950s (Figure 2), I remember long walks to go to school in cold and snowy winters. Fresh snow was always a lot of fun and, as I walked to school, I had in front of me the rising sun and the Bisalta, a beautiful mighty mountain rising just at the edge of the inhabited area. I remember a class of little girls in black uniforms, older than me, since I started to attend school a year early, and the pleasure of learning from my severe but caring teacher, Dina Lanteri. One of my classmates was a war orphan, a symbol of the heavy price that Boves paid for the war; as a consequence of the foolish and disastrous campaign in Russia alone, eighty young Alpini, the army's specialist mountain infantry, never came back home; they had been sent to die in the snow. I remained in touch with my teacher for the rest of her long life; she passed away, perfectly clear-minded, when she was 103 years old.
Arpino, with the same number of inhabitants as Boves, had a long cultural tradition, and there the students could complete their education up through the Classical Lyceum. 1 Boves instead was a rural town hosting only the primary school. Thus, when I was 10 years old, the only one of my classmates who was continuing her studies, I had to commute each day by train to Cuneo, 10 km away. In my long walks from the Cuneo train station to the school I sometimes dreamt about my future, but my imagination did not go beyond the end of the century, which seemed so far away; sometimes I could see myself working in a vaguely defined scientific laboratory. In the days before the beginning of the school year, there was the ritual of carefully covering my school books, which I enjoyed as the start of a new little cycle. Even now in the digital era, I much prefer to read the pages of books on paper that I can hold in my hands.

My family assured me a happy childhood and always supported and encouraged my choices, so that when I was 13 years old, as my brother did before me, I chose to attend the Scientific Lyceum named after Giuseppe Peano, the mathematician born in Cuneo who was one of the founders of mathematical logic. By that time, we were living in Cuneo, just two blocks from the Lyceum. I was fortunate once more in having the opportunity to profit from the lectures of excellent teachers, especially those teaching humanistic subjects. My family has always been important to me and now that they are no longer with me, I fully realize how much I owe them: a positive attitude to life and friendship, the love for nature and the animal world, and the energy that I could spend in my personal and professional existence.

University Years and Research on Cosmic Rays

During the last months of high school in 1963, when the time to enter the university was approaching, it was quite difficult for me to decide the line of my future studies. Physics was not the only field I was interested in; I also loved architecture, philosophy, and mathematics. However, there were aspects of physics that were quite intriguing, such as the analogy between the planetary system around the Sun - the only one known at that time - and that of the electrons orbiting around the nucleus of an atom. When it was time to enroll at the University of Turin, the ultimate decision was to choose experimental physics. In the first years of university, I remember the perfect lectures held at the Mathematics Institute by Francesco Tricomi, well known for his studies on partial differential equations, as well as those of Gleb Wataghin, long-time director of the Physics Institute, who pioneered the research on cosmic rays in Turin. Later, the lectures of Carlo Castagnoli, who often referred to the most important experimental projects being carried on in particle physics, were a decisive factor in the choice of the subject of my thesis for the Laurea. 2

The Team of Carlo Castagnoli

Before taking over the chair in Turin in 1961, Castagnoli worked in Rome with Edoardo Amaldi, who was one of the 'via Panisperna boys' together with Enrico Fermi and Emilio Segrè, both Nobel laureates and both of whom emigrated to the USA when the racial laws were enforced in Italy in the Fascist period. Amaldi was a key person in establishing the Conseil Européen pour la Recherche Nucléaire (CERN) in Geneva in the years 1952 - 1954, the institution dedicated to the study of elementary particles.
He was also a key person in the Physics Institute (the first computer of the Institute, preceding the IBM one, was an Elea computer built by Olivetti in Ivrea). After a while, depending on how many others were before us, we picked up the results of our analysis. Most plots and graphs were, however, still drawn by hand, and this gave us quite a lot of time to think about the data; some of the effects could indeed be spotted in this way before being confirmed by computer analyses.

The work with the Castagnoli team lasted about six years, from 1966 to 1972. Those years were extremely exciting for young students, since all universities in Europe were going through an extraordinary epoch of changes and novelties, debate and protest. The so-called 'wave of 68' in Turin began as early as 1967, when the first student protests, assemblies, and occupations in some of the university departments had already started. Just two years later, in 1969, the prolonged protests and strikes of the metal-workers of the large FIAT factories in Turin heavily affected the whole city. In the end, the new statute of rights, granting more benefits to the industry workers, was signed. The Physics Institute was also involved in this new wave of endless discussions on politics, dealing with the need for justice and equality of rights all over the world, but, unlike other departments, studies and exams were not forgotten. Although I understood the students' cause, I was quite disappointed when, at my first conference of the Italian Physics Society in Rome in 1968, where I was giving my very first talk, the participants were met with protests by the students of Rome's La Sapienza University.

In May 1969, the majority of the Castagnoli team attended the Ettore Majorana Summer School on space physics and astrophysics in Erice, Sicily. The school was exceptionally interesting for me, offering, for instance, the opportunity to attend the lectures of Bruno Rossi, a pioneer in space plasma research and in X-ray astronomy, who had left Italy before the Second World War as a consequence of the racial laws, and of William A. Fowler, who a decade later won the Nobel Prize. Several new results in solar heliospheric physics, cosmic rays, and astrophysics were presented at this school, such as the recent detection of solar neutrinos and of the X-ray radiation from the Sun, and the measurements of the infrared cosmic background reported by Kandiah Shivanandan of the Naval Research Laboratory (NRL).

Galactic Cosmic Rays Detected Underground

Great impetus was given to the search for cosmic rays with energy higher than those captured by the neutron monitors at ground level (that is, below 10 GeV) by the stimulating atmosphere generated by the International Geophysical Year 1957 - 1958, the first large postwar international initiative by some of the scientists involved in geophysics and solar activity, held at the peak of solar Cycle 19. These were also the years when the first Soviet and, four months later, the first American artificial satellites were launched into space (October 4, 1957 and February 1, 1958, respectively). The world was moving again and, this time, more peacefully. Five underground stations were set up and became operative between 1958 and 1961 at depths between 30 and 60 m.w.e. (meters of water equivalent). The Cappuccini station, set up in 1966, was the deepest one at 70 m.w.e. We were accessing primary particles with energy above 100 GeV.
At these energies, the cosmic-ray intensity modulation was thought to be of sidereal origin. However, the results obtained in Turin showed that solar activity was also affecting the flux of these particles, and in a different way from that for the lower-energy particles detected at ground level. In the years of maximum activity during solar Cycle 20, two rather well-defined long-lived active longitudes, almost 180° apart and with a strong north-south asymmetry, produced 80% of flares. The interpretation was that these active longitudes formed corotating streams, due to the injection into interplanetary space of magnetic inhomogeneities by individual flares, thus reducing the access of the high-energy galactic cosmic rays and inducing a long-lived modulation of the interplanetary medium observable in correspondence to their central meridian passage 6 (Antonucci, Cini Castagnoli, and Dodero, 1971a). The recurrent decreases in particle flux were also associated with enhancements observed in the intensity of the interplanetary magnetic field measured with the instruments of Pioneer VIII, at the time at 26° East of Earth. The most interesting effect of the corotating structures screening the cosmic radiation was a twenty-seven-day periodicity not detected at lower energies (Antonucci, Cini Castagnoli, and Dodero, 1971b).

We also found an effect that nowadays would be considered relevant for space-weather forecasting. The high-energy cosmic-ray diurnal variation was strongly enhanced just a few hours after the occurrence of the flares giving rise to the Forbush decreases observed at much later times with the neutron monitors on the ground. This was interpreted as being ascribable to the larger particle gyroradius (≥10⁷ km for particles with energies ≥100 GeV), allowing a sampling of the interplanetary space conditions much closer to the Sun, and of the magnetic cloud ejected by the flare once this reached dimensions comparable with the particle gyroradius itself (Cini Castagnoli and Dodero, 1969; Cini Castagnoli, Dodero, and Antonucci, 1970). It was noted that the analysis of solar activity based only on flare occurrence was insufficient: 'other phenomena not yet identifiable could contribute to the modulation' (Antonucci, Cini Castagnoli, and Dodero, 1971c). Indeed, there was already an understanding of phenomena such as the association of flares with a cloud of plasma and magnetic inhomogeneities injected into the interplanetary space, the distortion of the magnetic field with the increase of the azimuthal component, and the formation of a shock wave in the solar wind, the latter giving rise to the Forbush decreases observed in the lower-energy particle flux detected at ground level. However, coronal mass ejections, at first named coronal transients, had not yet been observed. Furthermore, it was not yet clear that the high-speed streams with a persistent character in certain phases of the cycle, which were ascribed either to Bartels' unknown M regions or sometimes to recurrent flares, had their origin in coronal holes. Indeed, the first well-observed coronal transient was imaged during the Skylab era on June 10, 1973, and coronal holes were identified in rocket flights starting in 1970 and interpreted as the source of recurrent high-velocity solar-wind streams in 1973 (Krieger, Timothy, and Roelof, 1973).
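The gyroradius figure quoted above is easy to check with a back-of-the-envelope estimate; the 5 nT field strength used here is a typical near-Earth interplanetary value assumed for illustration, not a number taken from the original papers. For a 100 GeV proton,

\[
r_g = \frac{p}{qB} \simeq \frac{100\ \mathrm{GeV}/c}{e \times 5\ \mathrm{nT}} \approx \frac{5.3\times10^{-17}\ \mathrm{kg\,m\,s^{-1}}}{(1.6\times10^{-19}\ \mathrm{C})(5\times10^{-9}\ \mathrm{T})} \approx 6.7\times10^{10}\ \mathrm{m} \approx 7\times10^{7}\ \mathrm{km},
\]

that is, nearly half an astronomical unit, comfortably above the ≥10⁷ km threshold mentioned in the text.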
Solar Physics at Stanford University

The evidence of the Sun as regulator of the high-energy cosmic-ray flux and the exciting discoveries from space of the physical conditions of the interplanetary medium aroused my interest in solar and heliospheric physics. Once the cycle of the university studies was over, we were advised and encouraged to broaden our experience by spending some years as visitors in scientific institutes abroad. Thus, I got in touch with John M. Wilcox to explore the possibility of joining his group as a postdoctoral fellow. I was particularly interested in the results he had obtained with Norman Ness on the interplanetary magnetic-field sector structure, evidence for persistent regular magnetic-field patterns in the heliosphere (Wilcox and Ness, 1965). John Wilcox welcomed my proposal and in November 1972 I left for the US.

My first travel to the US was an adventure: I arrived two days later than scheduled because of an airline strike in Italy on the day of my departure and an airplane technical problem in the US on the way to San Francisco. Fortunately, upon my arrival at the San Francisco airport John Wilcox rescued me, and during my first days in Palo Alto I was a guest of his family. When I started my life on campus, I realized I was in one of the most important and challenging scientific environments and one of the most beautiful university sites. On the other hand, I also discovered to my great surprise that I was quite an exception, since there were no women in the Stanford physics departments. Indeed, graduate students, faculty, and research staff in the four large physics institutes - Plasma Physics, Applied Physics, Geophysics, and Radioastronomy 7 - were all men. The minorities were represented by myself in the Plasma Physics department and a black graduate student in the Applied Physics department. Secretaries and librarians were almost exclusively of my gender, and they were delighted to support and help me in all possible ways. The magazine Newsweek contacted me for an interview, but I refused, not being sufficiently fluent in English to deal with such a delicate matter as minority issues. In the end, it turned out to be not difficult, although not immediate, to adjust to this situation, which was certainly unusual, not only for me but also for my colleagues. After my visit, Maria Grazia Borrini from Florence visited the Wilcox group for a while (Figure 3). A few years later, in 1978, the first American woman in space, Sally Ride, graduated in physics from Stanford University.

Life on campus was fantastic and certainly quite different from my previous life in an old university such as the University of Turin, dating back to the fifteenth century, whose departments were scattered all over the town center and not gathered in a campus as at Stanford. The Stanford International Center provided a great opportunity to meet new friends coming from everywhere and to attend interesting lectures and discussions, opening a new universe for me. A novelty was also the always positive attitude pervading almost everyone, compared with the thread of skepticism that sometimes can be sensed in people coming from the old European countries. There is no need to describe the atmosphere of the early 1970s in San Francisco and the universities. Not only the Stanford campus but the whole of California was enchanting, with the lively and beautiful San Francisco, the ocean with its very cold waters and wildlife, the deserts, and the forests so unexpected for a Mediterranean person.
It was a state still preserving many lands almost untouched by man, whilst in Europe every single square centimeter, except the mountains, bears the mark of centuries of history. It was a country much greener and with bluer skies than nowadays, when the almond and apricot trees were still shaping the landscape near Cupertino.

On my desk, on the first day of work at Stanford, I had all that was needed for my research: a paper notebook, a pen, and an absolutely necessary blue and red pencil to draw the polarity of the interplanetary magnetic field on 27-day Bartels charts. A new PDP/11 computer was located in front of the office I occupied in my first weeks, an office that then became the Wilcox office. This was a new way of interacting more easily with a computer; punched cards were no longer needed, but we still had to access the computer one at a time. Of that time, only the beautiful old oak that stood in front of my office survives and is still there. All the buildings have been replaced by new ones, and the last time I visited the Stanford campus I had some difficulty in recognizing where 'my' mighty oak was.

In addition to Leif Svalgaard, visiting from Denmark, and myself, the Wilcox group included three graduate students: Phil Scherrer, involved in the project of the new solar observatory, Tom Duvall, and Phil Dittmer. Twice a day during coffee breaks in the Wilcox office we exchanged opinions, ideas, and the results we were achieving. One day a telex informing the scientific institutions about the presence of a huge, unexpected perturbation with its origin in the Sun, propagating in space and potentially hitting the Earth, monopolized our attention. This was 1973, when Skylab was unambiguously observing the first coronal transients (MacQueen et al., 1974). We also attended a weekly science meeting in the office of Peter Sturrock, with Vahé Petrosian and their students, discussing the more recent solar-physics papers and conference contributions.

Soon after my arrival, I attended the fall meeting of the American Geophysical Union (AGU) in San Francisco. The first day of the meeting, a beautiful annular rainbow greeted us on our way to San Francisco along the highway on the hills. I have never again seen such a phenomenon. This was the first of a few great conferences and meetings I attended from 1972 to 1974, with the presence of astronauts just coming back from their journey to the Moon or from Skylab. Every result presented at these conferences was entirely new and exciting. At that time the largest meetings, such as those of the AGU, were attended by a few hundred participants and it was much easier to follow presentations even outside our own field of research, a way of broadening our views.

The results achieved by John Wilcox in the analysis of the Interplanetary Monitoring Platform (IMP), Mariner, and Pioneer data, obtained in the years 1963 - 1966, were paving the way for further studies on how the solar wind was organized within the sector structure in terms of magnetic-field direction and magnitude, plasma velocity, and proton density. This pattern, with alternating sectors of positive and negative magnetic-field polarity lasting several days, was found to corotate with the Sun, and its influence on geomagnetic indexes and on the low-energy cosmic-ray flux detected by neutron monitors was clear (see review by Wilcox, 1968).
The other puzzling question, in order to unambiguously identify the source of the interplanetary sector structure, was how this persistent wind pattern related to the large-scale photospheric magnetic field observed with the solar magnetograph at the Mount Wilson Observatory in the period 1959 - 1967. A quite intriguing finding was that the solar sector structure had little or no differential rotation in the low-latitude zones, that is, in the range 20° N - 20° S. Following a new approach - that is, studying the magnetic field of the Sun seen as a star with the Crimean tower telescope - Severny et al. (1970) established that the observed mean solar-field behavior allowed a prediction of the interplanetary field measured at Earth four or five days later. This was the cultural frame that allowed me to begin my research in solar physics.

The Fe XIV Green-Line Corona and Solar Rotation

John Wilcox was expecting to obtain the first data of the new solar observatory under construction at Stanford during the time of my visit. This would have been my first opportunity to observe the Sun from the ground. Such an opportunity occurred once more in 1975, when Otto Kiepenheuer invited me to observe during the summer at the Anacapri Observatory of the Fraunhofer Institute, 8 but his sudden death just before the summer of that year, a great loss for the European solar-physics community, ended my chances of an experience as a ground-based observer. There were delays in the completion of the observatory at Stanford, and the first low-resolution maps of the Sun's magnetic field date back to May 1976. Thus, even if a long set of interplanetary magnetic-field polarity data - inferred by Leif Svalgaard (1972) from the perturbations of the geomagnetic field near the pole registered at Godhavn since 1926 - was available, there were no new solar data to play with. Thus, whilst each day I was learning a lot in that lively scientific environment, I realized I had to find some solar data to work with. Considering that no study on how the solar corona was organized in relation to the interplanetary sector structure had yet been made, I sent a letter to J. Sýkora, whom I did not have the pleasure of knowing personally, asking for the data on the intensity of the Fe XIV green corona emission that he had compiled, transforming all green-line observations available from 1947 to 1968 into a homogeneous data set (Sýkora, 1971).

In the meantime, an intriguing peculiarity was noticed: during solar minimum the polarity of the inferred interplanetary magnetic field was found to be predominantly positive (away from the Sun); in other words, the sector structure was disappearing for a few solar rotations (references relative to the present understanding of this effect are reported in the review by Antonucci et al., 2020a). This magnetic configuration, associated with the position of the Earth along its orbit, could explain the anomalous cosmic-ray anisotropy observed with neutron monitors during the summer of 1954, by considering its dependence on the northern branch of the cosmic-ray intensity gradient across the equatorial plane. The green-line data, kindly provided by Sýkora, sounded coronal heights from 40 to 60 arcsec from the limb and covered almost two full solar cycles. Thus, I presumed they were suitable for studying possible regular patterns of the corona within the magnetic sector structure.
Indeed, we found that sector boundaries separating leading and following polarities of the interplanetary field were preferentially related to green corona enhancements, implying that enhanced coronal features present in each hemisphere were displaced by 90° or 180° in helio-longitude relative to the opposite hemisphere, depending on the presence of a four- or two-sector structure in the solar wind, respectively. In addition, the pattern reversed from one solar cycle to the next, according to the Hale law on the emergence of sunspot polarities (Antonucci and Duvall, 1974; Antonucci and Svalgaard, 1974a). The present understanding of these boundaries, later named Hale boundaries, is reported in Antonucci et al. (2020a).

The set of green-line data was also suitable for a detailed investigation of the rotation of the corona through the solar cycle. It was not possible to apply to limb observations the typical method of tracing the motion of a solar feature due to solar rotation, thus I performed this study by applying autocorrelation techniques to the temporal series of the coronal data (a schematic sketch of the idea closes this section). The results were quite interesting: during the years before sunspot minimum, the degree of differential rotation of the corona decreased, approaching rigid-rotation conditions. Moreover, differential and rigid rotation coexisted. Shorter-lived coronal emission was found to show the same differential rotation as the short-lived photospheric magnetic fields, whilst long-lived coronal enhancements rotated nearly rigidly (Antonucci and Svalgaard, 1974b), as did the coronal holes observed with Skylab in 1973 - 1974 (Wagner, 1975). Active longitudes and rotational characteristics of the corona were also discussed by Sýkora (1971) on the basis of the same data set. The green corona emission observed in the Skylab period, not yet included in these first studies, was analyzed with Maria Adele Dodero when I returned to Turin, confirming that in the declining phase of the solar cycle coronal structures persisting more than one synodic period rotate rigidly (Antonucci and Dodero, 1977; Antonucci and Dodero, 1979).

Intrigued by the coffee-break discussions about the new observational results on the coronal configuration, mentioned in the paper by Svalgaard, Wilcox, and Duvall (1974), I attempted to describe the evolution of the coronal pattern through the solar cycle as due to the rotation of the solar magnetic dipole. This hypothesis was, in my opinion, phenomenologically consistent with the general observational characteristics of the photospheric magnetic field and of the coronal patterns during the twenty-two-year magnetic cycle. Unfortunately, the referee thought that this was an awkward idea and the paper entitled 'Solar Rotating Magnetic Dipole?' was rejected (Antonucci, 1974c).
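Returning to the autocorrelation analysis of the green-line series mentioned above: the sketch below is only a schematic illustration of the idea, with a hypothetical daily intensity series and invented function names, not the original analysis. For long-lived coronal structures, the first strong peak of the autocorrelation function falls near the synodic rotation period of about 27 days.

```python
import numpy as np

def rotation_period_from_autocorr(intensity, max_lag=60):
    """Estimate a rotation period (in days) from a daily coronal
    intensity series by locating the strongest autocorrelation peak.

    intensity : 1-D array of daily green-line intensities at a fixed
                position angle (hypothetical input, one sample per day).
    """
    x = np.asarray(intensity, dtype=float)
    x = x - x.mean()                         # remove the mean level
    acf = np.correlate(x, x, mode="full")    # full autocorrelation
    acf = acf[acf.size // 2:]                # keep non-negative lags
    acf = acf / acf[0]                       # normalize so acf[0] = 1
    # Search lags 2..max_lag for local maxima; for long-lived coronal
    # structures the dominant peak sits near the ~27-day synodic period.
    lags = range(2, min(max_lag, acf.size - 1))
    peaks = [k for k in lags if acf[k - 1] <= acf[k] >= acf[k + 1]]
    return max(peaks, key=lambda k: acf[k]) if peaks else None

# Toy usage: a noisy ~27-day periodic signal recovers its period.
rng = np.random.default_rng(0)
days = np.arange(365)
series = 1.0 + 0.5 * np.sin(2 * np.pi * days / 27.0)
series += 0.2 * rng.standard_normal(days.size)
print(rotation_period_from_autocorr(series))  # -> about 27
```

Applied at different latitude zones, the position of this peak distinguishes differential from rigid rotation: a latitude-dependent peak lag signals differential rotation, a latitude-independent one near 27 days signals rigid rotation.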
Back in Turin in the Seventies

During the second year of the ESRO fellowship, in 1974, I had to make a difficult choice. I was feeling completely 'at home' in Palo Alto and John Wilcox offered me the opportunity to continue to work in his group at Stanford in the future as well. However, even if still today I consider California my second home, I chose to return to Italy, since there was the chance to upgrade my permanent position at the University of Turin. 9 A further extension of my visit in California would have paved the way to remaining there forever. Back in Turin, I taught Laboratory Physics at the Institute of Physics and then Fundamental Physics at the Institute of Agriculture.

These years were interesting but at the same time troublesome because of the new wave of protests that spread beyond the universities. This time the protests were more violent. In 1978, during the worst phase of terrorism, the former prime minister Aldo Moro was murdered, shot in Rome by members of the Red Brigades. When I was back in the US in 1980, reading the Italian newspapers I realized that some of the students who attended my lectures (I should say, disturbed my lectures) at the Institute of Agriculture later became leaders of one of the worst terrorist nuclei, called 'Prima Linea' (Front Line). Fortunately, they were a minority and, although they were very aggressive, nothing happened to me, even though at the Institute of Agriculture I was one of the few professors who never stopped giving their lectures. I was trying to make the point that the paralysis of the university was detrimental first of all for those students who could have a better chance to progress in their life only by pursuing their studies.

Chromospheric Rotation with the Ca II K3 Filtergrams Detected at the Anacapri Observatory

Carlo Castagnoli was opening new lines of research, such as geophysics, and the Monte dei Cappuccini cosmic-ray station was no longer one of his priorities. In the years 1974 - 1975 the station ceased to be operative. Thus, it was time to find a way to continue solar-physics research, which nobody else was pursuing in Turin, and to look for financial resources to support my own research. First, I needed data to continue for a while both solar and cosmic-ray research. Solar rotation was still an interesting topic to be explored by studying other layers of the solar atmosphere. For instance, chromospheric rotation at that time was inferred by tracing the motion of individual features. Hence, it would have been interesting to apply techniques of analysis analogous to those used for the coronal data to determine the rotation as a function of lifetime and size. The daily observations performed at the Anacapri Observatory were certainly suitable for this goal. Thanks to Kurt Sitte, often a visiting professor in Turin, I was introduced to the director of the Fraunhofer Institute, Karl Otto Kiepenheuer, and I was invited to visit Freiburg in order to present to him my project to analyze the Anacapri data. The digitization of the images of the Ca II K3 filtergrams registered daily would have allowed a frequency analysis of the chromospheric emission. Our first conversation took place in 1974 in Kiepenheuer's office, where I was invited to sit at the desk that in the past had belonged to Joseph von Fraunhofer.

Otto Kiepenheuer was a charismatic person indeed (Figure 4). Only years later did I become fully aware of the role that he played in safeguarding solar observatories and protecting solar physicists all over the European countries occupied by the German troops during the Second World War, as well as in developing solar physics in the postwar years. It is interesting to note that in wartime he pursued the innovative idea of observing the Sun from a vantage point outside the atmosphere in order to explore the solar ultraviolet radiation. This would have been possible in collaboration with Werner von Braun, who was developing the first powerful rockets capable of reaching altitudes well above the ozone layer, which absorbs the short-wavelength ultraviolet radiation below 2900 Å.
At the same time, he had the clear view that the corona was the key to the understanding of all solar-terrestrial relationships, which were the object of intense study in the Luftwaffe research centers, for military reasons of course. In Europe, space-weather studies, strongly supported by the Luftwaffe leaders, had started around 1939 under the coordination of Hans Plendl, who was setting up an efficient network of ionospheric stations to predict solar disturbances in radio transmission during the raids of the air force. Historical studies on the role played by Otto Kiepenheuer in the war years have been carried out by Michael P. Seiler ('Solar Research in the Third Reich', 2006). Unfortunately, Kiepenheuer did not see any of the results obtained on chromospheric rotation using the Anacapri data, as he suddenly died in 1975.

Digitization of astronomical images was not easily performed in the early 1970s. Fortunately, the scientists of the Institute for Elaboration of Information (IEI) of Italy's National Research Council (CNR) in Pisa, contacted thanks to Roberto Falciani of the Arcetri Observatory, became quite interested in the project of digitizing the filtergrams provided by the Freiburg Institute. Two years of daily chromospheric filtergrams, 1972 - 1973, were processed using a flying-spot photometer controlled by a PDP/8/I computer. It was quite complex work in those years. The 1024 × 1024 optical-density arrays were recorded on large magnetic tapes stored in the computer room. When this laborious work was over, during the restoration of the Renaissance building that hosted the CNR's Institute on its ground floor, the roof of the computer room partially collapsed, damaging most of the tapes, which were no longer recoverable. This is why the first analysis of the data, completed in 1976, used the full two-year temporal sequences, whilst in the following papers the study was limited to the period May 8 - August 14, 1972.

The first analysis, based on the power spectra of the temporal sequences of the full set of digitized data, showed, as expected, that during the years before sunspot minimum the chromospheric features with a lifetime of the order of, or exceeding, one solar rotation were also almost unaffected by differential rotation. These results were consistent with the rotation behavior of the green corona and the Skylab coronal holes. In the following two papers, the shorter data set was then used to study short-lived and small- to middle-scale chromospheric features, tracing their average daily displacement by computing average cross-correlations of brightness features (Antonucci et al., 1979a,b). These features were found to rotate differentially, as do the Ca faculae and bright mottles.

Eventually in 1985, after almost one solar cycle of observations obtained at the Wilcox Solar Observatory at Stanford, it was time to propose to Phil Scherrer and Todd Hoeksema a study of the rotation of the large-scale persistent patterns of the photospheric magnetic field to corroborate the chromospheric and coronal results. It was a pleasure to work again, although for a quite limited period, with the Stanford group, which later became fully involved in the detection of the solar magnetic field from space. During solar Cycle 21, in the interval from 1976 to 1986, the photospheric magnetic-field pattern persisting more than one solar rotation showed, as expected, broad latitude zones rotating rigidly.
Furthermore, a strong north-south asymmetry was found in the properties of rotation, which was more rapid in the northern hemisphere than in the southern one. These results suggested an association of the northern structure with the four-sector structure, with the southern field likely contributing to a two-sector structure in the interplanetary magnetic field (Antonucci, Hoeksema, and Scherrer, 1990).

Modulations of Cosmic Rays Detected on the Ground

On my return to Turin the interest in cosmic rays was still alive, but now the focus was on the modulation of cosmic rays of lower energy, at or below 10 GeV, detected on the ground, since these data allowed studies over one full solar magnetic cycle, from 1954 to 1973. 10 The results were quite interesting. The annual and semiannual modulation of the cosmic-ray intensity, due to the existence of perpendicular gradients across the heliospheric equatorial plane, showed an unexpected phase reversal at the reversal of the solar and interplanetary magnetic dipoles at the solar-activity maximum (Antonucci, Marocchi, and Perona, 1978a 11 and references therein). This twenty-two-year cycle in the modulation of cosmic rays pointed out the significance of particle drifts due to gradient and curvature effects in the spiral interplanetary magnetic field, which are influenced by the field polarity (Perona and Antonucci, 1976). In conclusion, at the Earth's orbit the contribution of transverse diffusion to the cosmic-ray transport in interplanetary space was found to be not negligible with respect to the convection of the solar wind and to the radial diffusion due to magnetic inhomogeneities. Additional contributions to the yearly modulations could be ascribed to perpendicular gradients induced by a hemispheric asymmetry existing in the sunspot-area data collected at the Astrophysical Observatory of Catania from 1954 to 1976 (Antonucci, Marocchi, and Perona, 1978b). 12

However, our colleagues in the Bologna cosmic-ray group were not so convinced about the validity of these results. Thus, some years later, in order to settle this question, we decided to work together on a somewhat more extended set of data, from 1953 to 1979, employing a different analysis technique. This collaboration eventually confirmed the scenario deduced from the previous studies. In addition, a residual yearly variation, with maximum at the time corresponding to the local galactic magnetic-field direction, was present and proposed to be of sidereal origin (Antonucci et al., 1985a).

A breakthrough in the understanding of solar rotation and cosmic-ray modulation occurred a few years later, when helioseismology discoveries made it possible to measure the rotation of the interior of the Sun and when, in its long journey in space, the Ulysses spacecraft, launched in 1990, explored the out-of-ecliptic heliosphere. These two achievements opened new scenarios in both fields. With regard to helioseismology, 1974 was a crucial year. At the Anacapri Observatory, Franz Deubner obtained a set of photospheric observations that enabled him to resolve a few stable eigenmode ridges of the 5-minute oscillations in the famous frequency versus horizontal wavenumber diagram. His findings were consistent with the predicted existence of fundamental modes of standing acoustic waves at the subphotospheric level (Deubner, 1975).

The Erice Summer School in 1976

The third Ettore Majorana Summer School I attended in Erice, organized by Bruno Caccin in August 1976, was of crucial importance to me.
Some of the lectures were dedicated to the ground-based observations of solar oscillations and the models put forward to interpret them. With regard to the solar atmosphere, Alan H. Gabriel (1976) presented his new magnetic model of the solar transition region derived on the basis of space observations. The encounter with Alan Gabriel started a new chapter in my scientific interests and a long-lasting collaboration and friendship.

A short time after having attended this summer school, I received an invitation to apply for a position at the Appleton Laboratory in Culham, Abingdon, UK, in view of the launch of NASA's Solar Maximum Mission (SMM). SMM hosted the Soft X-Ray Polychromator (XRP), an instrument developed jointly by the Appleton Laboratory, the Lockheed Palo Alto Research Laboratory, and the Mullard Space Science Laboratory. The XRP team was guided by three Principal Investigators, one for each of these institutions: Alan H. Gabriel, Loren W. Acton, and J. Leonard Culhane, respectively. Since I was not interested in a permanent position at the Appleton Laboratory, in agreement with Alan Gabriel I applied for an ESA research fellowship, which allowed me to participate in the XRP experiment and in this way to gain access to the interesting field of the X-ray spectroscopy of flaring plasmas, a field totally new to me. It was a fantastic opportunity and quite a challenge to be involved for the first time in a space mission. It took a long time to prepare and submit the paper (Antonucci, Gabriel, and Patchett, 1984), since in the meantime many exciting results were achieved thanks to the first XRP observations, which completely absorbed our time. There were also the numerous documents relative to SMM and XRP to study, acronyms to memorize, and so on, in order to be prepared to observe with XRP. In these years the Culham scientific area near Abingdon was chosen as the site of the Joint European Torus, the European project devoted to experiments on nuclear fusion that began in the early 1980s. Hence, in 1979 the Appleton Laboratory was incorporated in the huge area of the Rutherford Laboratory at Chilton, thereafter named Rutherford Appleton Laboratory (RAL).

The Solar Maximum Mission

The year 1980 was a milestone in my scientific career. In January I moved to Abingdon to collaborate with Alan Gabriel, thanks to the ESA fellowship that was granted to me. As the SMM launch was imminent, at the end of the month I moved to Greenbelt, Maryland, USA, to join as Co-Investigator the XRP team at the Experiment Operation Facility (EOF) of the Goddard Space Flight Center (GSFC-NASA) (Figure 7). I arrived just two weeks before the SMM launch. My arrival was probably wisely delayed until that moment, so that there was not much time to realize how hectic the activity of the XRP team at the EOF was in preparation for the start of the mission, or to realize that most of our offices in building 7, where we were spending very long days, were depressingly illuminated only by artificial light. Moreover, all the team members were much more experienced than myself. In the first days, a quick adjustment to the mixture of the various British and American accents and to the continuous use of acronyms was also necessary. On February 14, whilst part of the team was at the Kennedy Space Center to be present at the launch of the spacecraft, most of us were watching the event at GSFC, and when just after the launch the screen turned black for a second, I thought that my visit at NASA was ending at that moment.
Instead, SMM was successfully launched and entered its nearly circular orbit around the Earth at an altitude of about 574 km with an inclination of 28.6° (Bohlin et al., 1980). 13 This first successful step of the mission was celebrated the same day at Loren Acton's house with all the XRP team members, since everybody immediately came back from the launch site. Only six years had elapsed since the completion of the Apollo Telescope Mount (ATM) solar program on Skylab, 14 the first observatory in space, when SMM, the second large solar space observatory, was launched. The comparison with the many years needed nowadays to develop a new mission of the same complexity is amazing, also considering the quite limited means available in the 1970s and early 1980s in terms of computer capabilities and rapidity of communications. 15

During the first SMM year I had the chance to meet, collaborate with, and enjoy the company of new colleagues of the experiment teams, short-term visitors, and scientists supporting SMM operations coming from all over the world. Space solar physics has become since that time a truly global experience, a positive early example of the involvement of scientists of all countries in pursuing the same research objectives. Some of the outstanding people I met during SMM times became lifelong friends.

SMM was the first solar satellite to be fully devoted to a specific scientific problem: the investigation of solar flares. Coordinated observations with five complementary instruments, covering the energetic part of the electromagnetic emission from UV to gamma rays up to 160 MeV, and with the visible-light coronagraph for the study of the ejecta during solar activity, were envisioned to address the ensemble of the complex phenomena occurring during such events. In order to obtain spatially coordinated flare monitoring, the instruments and module were designed to ensure an accurate and stable pointing at a selected active region and a good coalignment of the instruments with small fields of view. Imaging in hard X-rays with energies up to 30 keV and spectroscopically resolved soft-X-ray emission down to 1.7 Å could be achieved. With the support of optical and radio ground-based observatories, simultaneous coverage of the entire electromagnetic spectrum was ensured. An additional experiment, monitoring the total solar irradiance, turned out to be crucial in showing beyond any doubt the solar-cycle dependence of this quantity. SMM had an important peculiarity: it was the first spacecraft designed to be retrieved by the Space Shuttle, which was going to be launched one year later.

13 The satellite circled the Earth about every 96 minutes, with an eclipse period of approximately 30 minutes during each orbit (see 'Solar Maximum Mission Flare Workshop Proceedings', Editors: Kundu and Woodgate, 1986).

14 ATM was the first manned space observatory and carried eight instruments, each of which was the most advanced instrument of its type ever flown, covering the wavelength range from 7000 to 2 Å. The hot corona was imaged in the range from 2 to 60 Å with the X-ray telescope, one of the most successful Skylab instruments. The ATM scientific experiment payload weighed 900 kg.

15 Documents were typed by the indispensable and always very supportive secretaries, and the fastest means to transmit papers and documents was the postal service; short messages were sent by telex.
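As an aside, the orbital figures quoted in footnote 13 are mutually consistent: a quick Keplerian estimate, using standard values for the Earth's radius and gravitational parameter (my own consistency check, not from the original text), gives

\[
T = 2\pi\sqrt{\frac{(R_\oplus + h)^3}{GM_\oplus}} = 2\pi\sqrt{\frac{(6.945\times10^{6}\ \mathrm{m})^{3}}{3.986\times10^{14}\ \mathrm{m^{3}\,s^{-2}}}} \approx 5.8\times10^{3}\ \mathrm{s} \approx 96\ \mathrm{min}
\]

for h = 574 km, in agreement with the quoted ~96-minute orbit.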
The many fundamental SMM results are summarized in two books: Energetic Phenomena on the Sun (eds. Kundu and Woodgate, 1986) and The Many Faces of the Sun (eds. Strong et al., 1999).

Operations at the Experiment Operation Facility

The SMM observation program, highly structured in order to reach its goals, set an example for future solar space missions. The program was operated on a twenty-four-hour cycle and a detailed day-by-day planning of observations was carried out starting on February 17, 1980. I had the privilege of representing XRP at the first two SMM daily planning meetings, supported by experienced XRP people who were sitting not far from the planning table. XRP scientific operations started on February 19, as soon as the spacecraft was properly oriented. The analysis of the acquired scientific data was performed in as near to real time as possible, to provide valuable feedback for the daily planning cycle. Evaluation of the scientific content and quality of the data was achieved within eight to twenty-four hours of its collection. The in-depth analysis was also immediately initiated at the EOF, taking advantage of coordinated efforts with the other SMM experiments observing flare emission in different and complementary spectral ranges. The roles of the XRP planner and evaluator were performed on a rotational basis.

During the daily planning at the SMM level, the most difficult decision concerned the active region to be selected for the pointing of the spacecraft. 16 The choice of the active region was based on the daily forecast provided by the National Oceanic and Atmospheric Administration (NOAA) (Figure 8). 17 A large network of observatories, also part of the Solar Maximum Year effort, was involved in support of the NOAA forecast as well as in the coordinated observations program. Immediately following the daily solar-weather report, an orbit-by-orbit plan with the precise timeline of observing modes was formulated for the following day at the instrument level and presented by the instrument planner at the SMM planning meeting (Figure 9). The plan was formulated according to the identified Specific Scientific Objectives (SSO), addressed by ad hoc Joint Observing Sequences (JOS). During the SMM daily meeting the pointing decision sometimes turned out to be quite a challenge and gave rise to lively discussions. Each day we attended several meetings at the SMM and XRP levels before the scientific plans could be converted into command loads to be uplinked to the spacecraft overnight. Thus, the meaning of the mission acronym slowly evolved to become 'So Many Meetings'. Punctuality and efficiency were a must, and meetings were short and exhaustive: qualities that were an absolute requirement at the time.

Although we were at the peak of Cycle 21, the first days were not very successful in our search for flaring regions. At last, on February 26, SMM pointed at the right active region and at 3:20 UT the first flare was observed. However, no major flare was observed until March 29. Then on April 7 the first class-M flare (M8) was detected, and on May 21 the first class-X flare (X1). 18 Finally, the first white-light flare detection occurred on July 1, 1980. With time, flares of increasing importance were observed and the challenge became to prepare and improve codes in order to quickly interpret the data, try to understand the observations, and present the preliminary results to the solar community as soon as possible.
Unfortunately, on November 19, 1980, following two previous gyro failures, the third of the four gyros of the attitude-control subsystem also failed, and this ended the first phase of the mission, at least for the instruments with limited fields of view, such as XRP. The evening this problem was discovered, I, in the role of the SMM chief planner, received a dramatic call from GSFC. This was an awful moment, since there was nothing we could do. I thought this was the end of XRP. The satellite could no longer be pointed with accuracy, thus after nine months of operations only the nonimaging instruments were able to continue their observations, as the Sun was still observable in their field of view. In these moments, the only consolation was that XRP had already collected a wealth of flare spectra in its fifteen channels, thus the experiment's success was ensured in any case. A picture of the XRP team that was present at the EOF in the first nine months of operations is shown in Figure 10.

The days following this dramatic failure were days of great stress. The number of meetings increased up to seven in one day, and, as one of the three XRP deputy principal investigators, 19 I had to attend most of them (Figure 11). I was convinced that there was nothing to be done any longer, at least as far as our instrument was concerned. Instead, much to my surprise, since SMM was the first application of the Multi-Mission Spacecraft designed to be serviced by the Shuttle, very soon the idea of repairing the spacecraft in orbit was put forward by the US colleagues, even if the Space Shuttle was not yet ready to fly. After three years of preparation - part of the astronaut preparation took place right in building 7 - the repair mission indeed took place successfully. It was an incredible adventure for the time. SMM was repaired in orbit by the crew of the Space Shuttle Challenger on mission 41-C. The pilot of this mission, F.R. Scobee, was the commander of Challenger's tragic last flight. For this reason, the book The Many Faces of the Sun (eds. Strong et al., 1999) is dedicated to his memory. The spacecraft-retrieval process turned out to be highly dramatic but, in the end, successful. The spacecraft was repaired onboard the Shuttle and released in orbit on April 10, 1984. The second phase of the mission ended five years later, in November 1989, with the spacecraft debris dispersed into the Indian Ocean. It was a success story, although with breath-taking moments, also considering that the spacecraft was designed for a minimum of one year of operational life. Keith Strong and Joan Schmelz describe in detail the dramatic phases of the repair enterprise in the introductory chapter of The Many Faces of the Sun.

Diagnostic-Tools Development

XRP was a complex experiment designed to measure the light emitted in the spectral range from 1 to 22 Å, where a large part of the energy output of flares is observed. The instrument included two subsystems, the Bent Crystal Spectrometer (BCS) and the Flat Crystal Spectrometer (FCS), for a total of fifteen spectroscopic channels (Acton et al., 1980). The BCS consisted of eight curved crystals simultaneously diffracting and dispersing photons into position-sensitive detectors. The very clever and innovative concept of using bent crystals made possible at once the detection at high resolution of the entire spectra of highly ionized heavy ions, each wavelength being reflected by a different part of the crystal surface.
Thus, it was possible to trace the temporal evolution of the spectral emission with an unprecedented resolution. 21 This combination of wavelength and time resolution was not achievable with traditional flat crystals, which could scan a given wavelength range only by changing the Bragg angle with time. This led to a breakthrough in the study of the rapidly changing plasma conditions characterizing the initial phase of flares, crucial to understanding the flare process. The 6 × 6 arcmin field of view allowed one active region to be studied at a time. The BCS was an extremely successful instrument, whereas the use of the FCS was somewhat limited from the beginning of March 1980 onwards, due to a problem concerning the crystal drive.

XRP science benefited from the parallel observations and results of two almost simultaneous space missions: the US Air Force P78-1 and the Japanese Hinotori solar missions, launched in 1979 and 1981, respectively. The SOLFLEX instrument onboard the P78-1 satellite was designed to explore soft X-rays down to 1.8 Å, although with a somewhat reduced temporal resolution due to the use of traditional Bragg crystals (Doschek, 1983). The Hinotori instruments detected the wavelength region from 1.72 to 1.95 Å, scanned by utilizing the spin of the spacecraft itself (Tanaka et al., 1982 and references therein), and further extended the energy range of the hard-X-ray imaging up to 40 keV.

At the beginning of the XRP operations, there was a demand to contribute further to the development of the tools needed to interpret the data. Hence, I started to develop computer codes enabling us to interpret the spectra of the He-like Ca XIX and Fe XXV, 22 and the H-like Fe XXVI highly charged heavy ions detected in three distinct BCS channels. These spectra, in addition to the resonance, intercombination, and forbidden transition lines, are densely populated by satellite lines formed in the process of dielectronic recombination or inner-shell excitation, 23 situated on the longer-wavelength side of the resonance line (Gabriel, 1972). These lines represent a crucial diagnostic tool for measuring the electron temperature and the ionization conditions of the flare plasma. The simplest approach to analyzing the BCS data, the one I adopted, was to calculate a synthesized theoretical spectrum derived from a set of trial plasma parameters and to adjust the initial parameters (electron temperature, Doppler temperature, and relative population of adjacent ion stages) in order to best fit the observed spectra; a schematic sketch of this forward-fitting idea is given below. This method made it possible to properly take into account the dependence of the line intensity on the presence of merging nearby lines (that is, the blending of all lines, as well as their width, determined by thermal and dynamic plasma conditions and by the Lorentzian instrumental profile characteristic of the crystal response). During his visits to the EOF, Alan Gabriel guided me in the use of the atomic data relative to the ion transitions observed with BCS to be included in the computation of the synthetic spectra. By mid-March the codes were ready in a preliminary form, taking into account the most important satellites, sufficient for a first interpretation of the observed spectra. During the summer several visitors from Europe supported us.
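The forward-fitting approach described above can be illustrated with a minimal sketch. Everything here is schematic: the wavelengths, the temperature scalings, and the pure Gaussian profiles are placeholder assumptions (the real analysis used the full satellite-line tables and the Lorentzian crystal response), but the structure, synthesizing a trial spectrum and then adjusting the plasma parameters for a best fit, is the same.

```python
import numpy as np
from scipy.optimize import least_squares

# Placeholder line list (wavelengths in Angstrom). The real BCS codes used
# complete satellite-line tables (Bely-Dubau et al., 1982a,b); the three
# lines and the simple intensity scalings below are illustrative only.
WL_RES = 3.177    # He-like resonance line "w"
WL_DIEL = 3.190   # a dielectronic satellite: intensity ~ 1/T_e vs. "w"
WL_INNER = 3.200  # an inner-shell satellite: ~ adjacent-ion population

def synthetic_spectrum(wl, t_e, t_dopp, pop_ratio, amp):
    """Toy synthetic spectrum: three Gaussians sharing a Doppler width."""
    sigma = WL_RES * 1.3e-4 * np.sqrt(t_dopp / 1e7)  # thermal width (toy scale)
    def line(center, strength):
        return strength * np.exp(-0.5 * ((wl - center) / sigma) ** 2)
    return (line(WL_RES, amp)
            + line(WL_DIEL, amp * 0.5 * (1e7 / t_e))   # dielectronic scaling
            + line(WL_INNER, amp * pop_ratio))         # ion-stage scaling

def fit_spectrum(wl, observed):
    """Adjust the trial parameters (T_e, T_Doppler, population ratio,
    amplitude) so that the synthetic spectrum best fits the data."""
    residuals = lambda p: synthetic_spectrum(wl, *p) - observed
    p0 = [1.5e7, 1.5e7, 0.2, observed.max()]   # trial plasma parameters
    return least_squares(residuals, p0, bounds=(0.0, np.inf)).x

# Toy usage: recover the parameters of a noisy synthetic "observation".
wl = np.linspace(3.16, 3.22, 600)
obs = synthetic_spectrum(wl, 2.0e7, 2.5e7, 0.3, 100.0)
obs = obs + np.random.default_rng(1).normal(0.0, 1.0, wl.size)
print(fit_spectrum(wl, obs))   # -> about [2e7, 2.5e7, 0.3, 100]
```

In the actual codes the satellite-to-resonance ratios came from computed atomic rates rather than the crude 1/T_e factor used here; the key design choice, shared with this sketch, is that blending is handled automatically because all lines are synthesized together before the comparison with the data.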
Visiting the EOF for an extended period, Jacques Dubau helped to improve the BCS codes to include the complete set of satellite lines, which became available thanks to the calculations of all relevant atomic transitions performed by the French-Belgian atomic physicists collaborating with Alan Gabriel (Bely-Dubau et al., 1982a,b).
First BCS Results
The first intense flare, detected on March 29, was immediately analyzed, and a component of reduced intensity, blue-shifted relative to the principal component and moving at about 300 km s⁻¹, was identified in the Ca XIX spectra during the flare impulsive phase. Moreover, the blue-shifted component was present throughout the duration of the hard-X-ray burst. This turned out to be a characteristic feature of the impulsive-phase Ca XIX and Fe XXV spectra, which also showed pronounced nonthermal widths. In other words, these spectra were markedly different from those observed during the gradual phase, characterized by thermally broadened lines and the absence of blue-shifts. Further, no blue-shifts were found in flares close to the limb. These results were interpreted as evidence for plasma rising in the solar atmosphere, as predicted in the case of chromospheric evaporation. According to the theory, whenever the energy release in flares is so sudden that the chromosphere is unable to radiate it at a sufficiently high rate, the chromospheric plasma heats up to coronal temperatures and, as a consequence, the large pressure difference established between the dense chromosphere and the tenuous corona drives high-velocity upflows of large amounts of plasma (e.g., Antiochos and Sturrock, 1978). In this way, most of the flare energy is transformed into energy of the thermal plasma at temperatures above 10 million K and is radiated away predominantly in soft X-rays. In order to explain three soft-X-ray flares observed with OSO-III in 1967, Neupert (1968) first suggested as a likely interpretation of the data that 'a portion of the chromosphere is heated to sufficiently high temperatures (as high as 20 - 40 × 10⁶ K) to account for the existence of the Fe XX - Fe XXV and is ejected into the lower corona'. In the hypothesis that the blue-shifted emission was due to plasma rising in the solar atmosphere and accumulating in a magnetically confined coronal region, it was possible to verify that the observed mass and energy flows were sufficient to create the soft-X-ray source and to account for all radiation and heat-conduction losses. Hence, plasma upflows were beyond doubt interpreted as the manifestation of chromospheric evaporation (Antonucci et al., 1982a). 24 It was also clear that the energy transported by the upflows was far more important than that appearing in the form of nonthermal turbulent motions. These results were presented in preliminary form, together with the first well-resolved Fe XXVI spectrum emitted by a plasma region at 29 × 10⁶ K, at the Solar Maximum Year workshop in Crimea in March 1981 (Antonucci et al., 1982b). The two resonance transitions and dielectronic satellites of the Fe XXVI spectrum were first detected during the white-light flare of July 1, 1980, 25 and fitted with a synthesized spectrum calculated according to the theory developed by Dubau et al. (1981). The in-depth discussion of the Fe XXVI spectra analysis is found in Parmar et al. (1981). Subsequently, in several flares Tanaka et al. (1983) measured temperatures of the thermal plasma above 30 × 10⁶ K with the Hinotori spectrometer.
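The consistency check mentioned above, namely that the observed mass and energy flows were sufficient to create the soft-X-ray source, can be sketched in back-of-the-envelope form. The numbers below are assumed round values for illustration, not the figures of the published analysis:

```python
# Back-of-the-envelope version of the evaporation consistency check,
# with assumed round-number inputs (not the published values).
C_KMS = 3.0e5              # speed of light [km/s]
K_B = 1.38e-16             # Boltzmann constant [erg/K]
M_P = 1.67e-24             # proton mass [g]

# Upflow speed from the blue-shift: v = c * |delta_lambda| / lambda0.
lambda0 = 3.177            # Å, Ca XIX resonance line
delta_lambda = 3.2e-3      # Å, assumed shift of the blue-shifted component
v_up = C_KMS * delta_lambda / lambda0
print(f"upflow speed ~ {v_up:.0f} km/s")            # ~300 km/s

# Enthalpy plus kinetic-energy flux carried into the corona by the upflow,
# for assumed evaporation parameters:
n_e = 1.0e10               # cm^-3, density of the upflowing plasma
T = 1.5e7                  # K, temperature of the evaporated plasma
v = v_up * 1.0e5           # cm/s
flux = (2.5 * n_e * K_B * T + 0.5 * n_e * M_P * v**2) * v  # erg cm^-2 s^-1
area = 1.0e18              # cm^2, assumed cross-section of the flaring loop
print(f"energy input ~ {flux * area:.1e} erg/s")    # ~2e27 erg/s here
```

Sustained over an impulsive phase lasting a few minutes, an input of this order accumulates the 10²⁹ - 10³⁰ erg range typical of the thermal content of a large soft-X-ray flare source, which is the sense in which the upflows could account for the observed losses.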
The flare scenario that emerged during the first year of analysis was substantially confirmed by analyzing the large set of X and M flares detected in 1980 (Antonucci, Gabriel, and Dennis, 1984; Antonucci et al., 1985b). The complementary information - on the volume of the soft-X-ray source (3.5 - 8 keV) in the corona inferred from the Hard X-ray Imaging Spectrometer (HXIS) (Van Beek et al., 1980), on the energy released above 25 keV deduced from the Hard X-Ray Burst Spectrometer (HXRBS) (Orwig, Frost, and Dennis, 1980), and, in a few cases, on the loop footpoint geometry identified in the images of the hard-X-ray channel of HXIS, 16 - 30 keV (Hoyng et al., 1981; Antonucci, Marocchi, and Simnett, 1984) - made it possible to compute the total energy input by nonthermal electrons in the chromosphere under the assumption of thick-target interaction. The value obtained was fairly consistent with the soft-X-ray energy present at the coronal level as deduced from the XRP data. In other words, the energy input to the chromosphere in the form of fast electrons during the impulsive phase, which drives the evaporation process, was found to be of the same order as the thermal energy content of the soft-X-ray emitting plasma, which in turn was of the same order as the conduction and radiation losses observed during the flare-decay phase. The mass transfer into the corona was also consistent with the plasma density derived on the basis of considerations on the emission measure and on the volume into which the plasma had propagated. These results on evaporation, including those for the X13 flare of April 24, 1984, observed in the second phase of the SMM mission, and the issues relative to energy deposition and transport processes were summarized in Antonucci (1989) and references therein. Enhanced emission in the blue wing of the Ca XIX and Fe XXV lines was also observed with SOLFLEX (Feldman et al., 1980), and red-shifts in the Fe XXV spectra were observed by Korneev et al. (1980) with the Intercosmos-4 instruments, but the presence of persistent blue-shifts throughout the impulsive phase was not so unambiguously established and systematically detected, likely because of the somewhat limited temporal resolution of the classic Bragg spectrometers. Moreover, for the first time all satellite lines were properly taken into account in the BCS data-analysis codes. There was a heated debate about these results. Models of chromospheric evaporation predicted the blue-shift of the principal component of the resonance line itself, and not only of a component of reduced intensity. However, only in a few cases was there evidence of an initial dominant blue-shifted component, moving upward at low velocities and present for a very short time. Another issue was that, in most cases, previous studies did not identify the blue-shifted component, probably because it can easily be masked by the blending of satellite lines with the red wing of the resonance line, possibly resulting in the spurious detection of an additional broadening of the resonance line. A further perplexity concerned the crucial question of whether the blue-shifted component was indeed capable of injecting sufficient hot plasma into the corona to justify the observed soft-X-ray source at the peak of the flare. A final important issue was whether the chromosphere was heated by nonthermal electrons accelerated during the primary energy release or by thermal conduction from a hot coronal source where the energy is released.
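The energy-budget comparison at the heart of this debate can also be sketched in order-of-magnitude form. Again, the inputs below are assumed values standing in for the measured emission measure, the HXIS source volume, and the HXRBS burst parameters:

```python
# Order-of-magnitude version of the energy-budget comparison described
# above, with assumed inputs (not values from the cited papers).
import math

K_B = 1.38e-16                    # Boltzmann constant [erg/K]

# Thermal side: density from the emission measure and the imaged volume,
# then thermal energy content of the soft-X-ray source.
em = 1.0e49                       # cm^-3, emission measure from BCS spectra
vol = 1.0e27                      # cm^3, source volume from HXIS images
t_e = 1.5e7                       # K, electron temperature from line ratios
n_e = math.sqrt(em / vol)         # cm^-3, since EM = n_e^2 * V
e_thermal = 3 * n_e * K_B * t_e * vol
print(f"n_e ~ {n_e:.1e} cm^-3, E_thermal ~ {e_thermal:.1e} erg")

# Nonthermal side: energy in electrons above 25 keV from the hard-X-ray
# burst, under the thick-target assumption (all beam energy deposited in
# the chromosphere); here simply a constant power over the burst duration.
p_beam = 5.0e27                   # erg/s, assumed power in >25 keV electrons
duration = 120                    # s, assumed duration of the burst
e_beam = p_beam * duration
print(f"E_beam ~ {e_beam:.1e} erg, ratio ~ {e_beam / e_thermal:.1f}")
```

With these assumed numbers the two sides come out within a factor of order unity of each other, which is the sense of the 'same order' statement in the text.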
In the three sessions of the Solar Maximum Mission Flare Workshop organized at the GSFC and held between January 1983 and February 1984, one of the workshop teams addressed the topic 'Chromospheric Explosions'. No other title could have been more appropriate. The team members were quickly drawn into lively discussions on the dynamics of the flare impulsive phase, and George A. Doschek, the team leader, very cleverly organized the teamwork, as well as the conclusive paper, in the form of a debate on three main controversial issues, with subteam coordinators arguing pro and contra a given interpretation (Doschek et al., 1986). The first issue concerned chromospheric evaporation, with George Doschek and myself as subteam coordinators. I learned a lot in these 'explosive' debates on different ideas and interpretations, although I remained convinced that the XRP observations identified the true signatures of chromospheric evaporation and that this was the main process involved in the formation of the soft-X-ray flare.
Return to Europe
Due to the failure of the SMM pointing system, there was no reason for me to remain any longer at the EOF. Thus, I left Greenbelt at the end of January 1981 and continued the collaboration with Alan Gabriel in the UK. I rented an old cottage in a country village some 10 miles from RAL and from Oxford, on the way to the enchanting Cotswolds region. In my village the microclimate was much milder and less windy than in Chilton. The winter of 1981 was quite unusual: extremely cold and with a lot of snow, which caused a great deal of trouble in a country not used to such harsh winters and to driving on snow. The following year, at the end of February, I was back in Italy, where I resumed teaching my courses in experimental physics at the university, with a promotion to Associate Professor in 1983. Thus, my 'space' adventure as part of the XRP team at the EOF ended, but not the work on the BCS data, the collaboration with Alan Gabriel, and the frequent visits to the GSFC in the second phase of the mission. In these years there were many workshops and conferences dedicated to solar-maximum physics, entailing a lot of traveling within European countries and in the US. After the Crimea meeting in March 1981, I was invited for the second time to the USSR for the International Workshop on Solar Maximum Analysis in Irkutsk, Siberia, in 1985. This was quite interesting, since the USSR pioneered the soft-X-ray spectroscopy of the highly ionized Fe ions (Grineva et al., 1973) and there was great interest in the BCS results. There was a lot of interest in the SMM results in the Asian countries as well. I attended my first meeting in Tokyo in October 1982 (Figure 12). At that meeting the scientists taking part in the discussion of the presentations had to state their name when they spoke, since the comments were recorded for publication purposes. Worried because during my first comment I had forgotten to say my name, a colleague reassured me that I was certainly going to be recognized, as the only woman at that meeting. It was certainly not the last time that this situation occurred. Japan was starting a very successful story of space missions dedicated to solar physics with the launch in 1981 of Hinotori, leading to important contributions to soft-X-ray spectroscopy of the active corona thanks to Katsuo Tanaka. Thereafter, the Yohkoh mission was launched in 1991 with instruments suitable for continuing soft-X-ray flare spectroscopy, and the Hinode mission followed in 2006.
In the last letter that I received from Katsuo Tanaka, aware of his hopeless health condition, he expressed his wish to live at least until the launch of Yohkoh, but his wish was not fulfilled. Very sadly, this letter reached me the very day I learned of his death. I was also invited to present the BCS results at the IAU General Assembly in New Delhi in November 1985 and, one year later, in November 1986, at the International Symposium on Space Physics in Beijing. The meetings in India and China were an immersion in completely new dimensions. Visiting China, I was aware that this was a country with significant potential, considering its great culture with roots dating back a few thousand years. However, it was difficult to imagine the rapid and astonishing development of space science and technology that would occur in the next few decades.
A Sad Epoch
After Tanaka's death, in the span of just a few years, from 1995 to 1997, we sadly lost a number of dear colleagues and friends who had greatly contributed to solar science from space and passed away while still too young: Chung-Chieh Cheng of NRL, Bruce Patchett of RAL, and Brunella Monsignori Fossi of the Arcetri Astrophysical Observatory. They are commemorated in: 'Remembering Brunella Monsignori-Fossi' (Landini, 1997); 'Dedication to Bruce Patchett' (Gabriel, 1997); 'Dr. Chung-Chieh Cheng's Contributions to Coronal Physics' (Wu, 1997).
Further Science with the XRP Data
The studies of soft-X-ray flares with XRP data continued, in parallel with the interest in future space missions, until 1995, the year of the launch of the second mission I was involved in: the ESA-NASA SOHO mission. The main lines of research addressed in flare physics were still concerned with the interpretation of the two unambiguous systematic signatures of the impulsive phase: the nonthermal line broadening, much enhanced with respect to that observed in quiescent active regions, and the significant blue-shifted emission, evidence for the dominant mode of mass and energy supply to the corona during flares. These signatures revealed that the soft-X-ray emission in the 1 - 22 Å range was not only a manifestation typical of the gradual phase, as had been considered before the SMM era, but a powerful means to investigate the impulsive phase of flares.
Soft-X-Ray Line Broadenings During Flares
The soft-X-ray line broadening observed with BCS throughout the impulsive phase was addressed on the basis of two models, both based on the process of magnetic reconnection. In both cases, we attempted to explain the source of the excess widths in the preflare and flare regimes as associated with the primary energy release. In collaboration with Robert Rosner and Kanaris Tsinganos of the Center for Astrophysics (CFA) in Cambridge, Massachusetts, the line broadening was interpreted in terms of their model of local heating resulting from reconnection due to magnetic-field-line stochasticity. In this scenario, the turbulent broadening of the spectral lines emitted from the high-temperature plasma could be entirely explained as due to the outflowing motions from many reconnection sites where particles were accelerated and scattered throughout the flaring loop. This suggestion was dictated by the symmetry of the line broadening (isotropic turbulence), which likely occurs in preexisting coronal material, filling about 5% of the total volume seen in soft-X-ray emission (Antonucci, Rosner, and Tsinganos, 1986).
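The nonthermal velocity invoked in these studies is obtained by subtracting, in quadrature, the thermal width expected at the measured electron temperature from the observed line width. A minimal sketch of this decomposition, with assumed input numbers, is:

```python
# Decompose an observed line width into thermal and nonthermal parts:
# (c * delta_lambda / lambda0)^2 = 2*k*T/m_ion + xi^2, solved for xi.
# The input values below are assumed, for illustration only.
import math

C_KMS = 3.0e5                         # speed of light [km/s]
K_B = 1.38e-23                        # Boltzmann constant [J/K]
M_FE = 56 * 1.66e-27                  # iron ion mass [kg] (Fe XXV)

lambda0 = 1.850                       # Å, Fe XXV resonance line
fwhm_obs = 2.6e-3                     # Å, assumed observed FWHM
t_e = 2.0e7                           # K, temperature from satellite lines

# Convert FWHM to 1/e half-width, then to an equivalent velocity.
w_1e = fwhm_obs / (2 * math.sqrt(math.log(2)))
v_obs = w_1e / lambda0 * C_KMS        # km/s

v_th = math.sqrt(2 * K_B * t_e / M_FE) * 1e-3     # km/s, thermal speed
xi = math.sqrt(max(v_obs**2 - v_th**2, 0.0))      # km/s, nonthermal part
print(f"v_obs = {v_obs:.0f}, v_th = {v_th:.0f}, xi = {xi:.0f} km/s")
```

For a heavy ion such as Fe XXIV/XXV the thermal speed is small even at flare temperatures, so almost all of a large observed width is attributed to the nonthermal term; this is what makes these lines sensitive probes of turbulence at the energy-release site.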
Another scenario based on the same process was then proposed with Boris Somov of Moscow State University. Line broadenings could be interpreted as a signature of magnetic reconnection in current sheets forming in active regions, which provide the energy necessary for generating solar flares, as well as the kinetic energy of fast hydrodynamic flows and jets due to plasma ejected in opposite directions from the reconnecting current sheet with a velocity comparable to the Alfvén velocity (see the book Physical Processes in Solar Flares by Boris Somov, 1992). If many reconnecting current sheets operate at the same time, the effect of the jets would be observed as a symmetrical line broadening significantly larger than the thermal line broadening. The largest broadenings are expected at the rupture of the current sheet, in connection with the transition from the preflare to the flare regime (Antonucci, Dodero, and Somov, 1993). A comparison of the nonthermal broadenings of the Fe XXV emission observed at flare onset with the predictions of the high-temperature turbulent current-sheet model (Somov, 1992) suggested the presence of either several small-scale or a few large-scale reconnecting current sheets with internal temperature ≤ 80 × 10⁶ K in the flare region. In this model, the velocities of the plasma emerging from the reconnection site, inferred to be ≤ 1100 km s⁻¹, depend on the temperature inside the current sheet. The number and/or geometrical complexity of the reconnecting current sheets ensures the substantial isotropy of the velocities in the flare region, such that the observed nonthermal velocities are independent of flare longitude (Antonucci, Benna, and Somov, 1996).
Comparison of Flaring Loops with Simulations
During the SMM mission a large effort was dedicated to performing simulations of the hydrodynamics and magnetohydrodynamics of flaring loops and to comparing the spectral emission resulting from these simulations with the soft-X-ray spectral observations, in order to discriminate between possible physical flare models, in particular between thermal and nonthermal models. The different approaches are summarized in Chapter 10 of The Many Faces of the Sun (Strong et al., 1999). 26 Simulations of the profiles of the individual spectral lines were first performed by Doschek et al. (1983). Following the SMM workshop, in collaboration with the Palermo group, led first by Giuseppe Vaiana and then, after his death in 1991, by Salvatore Serio, we analyzed how the plasma in a coronal loop responded to the deposition of energy. The hydrodynamic simulations were performed on the basis of the Palermo-Harvard numerical hydrodynamic model for a magnetically confined plasma (Peres et al., 1982). Comparing simulations with observations, we concluded that in the case of thermal heating, evaporation represents the most important process in generating the flare soft-X-ray source (Antonucci et al., 1987b), whilst the electron-beam model did not fully reproduce the flare soft-X-ray line profiles. Electron beams with soft spectra and a low-energy cutoff, most closely resembling thermal heating, were the ones more consistent with the observations (Antonucci et al., 1993, and references therein). In addition to the simulations of flaring loops, we continued to pursue phenomenological studies of the evaporative plasma flows.
In a number of very energetic flares, such as the class X13 flare of April 24, 1984, impulsive-phase upflows characterized by a broad velocity distribution with a tail at very high velocity (up to 1000 km s⁻¹) were detected in the Fe XXV spectra (Antonucci, Dodero, and Martin, 1990a). In the case of this extremely energetic flare, it was also possible to find successive injections of very hot plasma at 30 - 40 × 10⁶ K. In any case, the emission measure of the high-velocity tail turned out to be relatively unimportant in the energy budget of the flare. Several other interesting results were achieved. Examples worth mentioning are: the velocity-temperature distribution in the evaporating plasma (Antonucci, Dodero, and Martin, 1990b); the law relating the nonthermal velocities derived from the soft-X-ray lines and the plasma temperature; the presence of a superhot coronal condensation at about 40 × 10⁶ K identified in very impulsive events, both on the basis of the Fe XXVI spectra (Tanaka et al., 1982) and of a differential-emission-measure analysis (Martin, Antonucci, and Somov, 1993); and the Fe and Ca abundance values derived from the XRP data, indicative of the existence of Ca-rich and Fe-rich flare events. Using XRP and SOX-Hinotori data, we also proposed a revised ionization balance for iron, based on the relative abundances of Be-like and Li-like ions to He-like ions (Antonucci et al., 1987a), as well as for calcium, based on the relative abundances of Li-like ions to He-like ions. In addition, we derived the argon/calcium abundance ratio on the basis of XRP and P78-1 data (Antonucci et al., 1987c). Even at the time of this writing, no other instrument has provided soft-X-ray spectroscopy of the flaring plasma as complete as that of the XRP, and in this sense some of the XRP results remain unique. The main results on flare dynamics obtained by the high-resolution soft-X-ray spectrometers operating during Cycle 21, including XRP, are summarized in Chapter 10, 'Flare Dynamics', of The Many Faces of the Sun (Strong et al., 1999). 27
From the SOHO Proposal to the Spacecraft Launch
Chatting at RAL with Bruce Patchett, I learned that he was working on a proposal entitled Solar High-resolution Observatory (SOHO), to be submitted in response to an ESA call for ideas for new missions in November 1982 (proposers: M. Malinovsky-Arduini, H.F. van Beek, J.-P. Delaboudinière, M.C.E. Huber, P. Lemaire, and B. Patchett). The intention to respond to this call was at first discussed at a meeting in Paris by Roger Bonnet, Monique Arduini, Martin Huber, Bruce Patchett, and Alan Gabriel. The aim of the mission was the study of the physical processes involved in heating the solar corona and accelerating the solar wind, basic mechanisms that are still not fully understood. SOHO also included the objectives of the Grazing Incidence Solar Telescope (GRIST) mission studied by ESA at phase-A level. As the great opportunities that a European solar space mission could offer became clear to me, I became interested and then involved in the long-term SOHO venture. During the ESA assessment study, from February to August 1983, the original scientific goals of SOHO were merged with those addressed during the phase-A study of the mission named Dual Spectral Irradiance and Solar Constant Orbiter (DISCO), proposed to investigate the interior of the Sun via helioseismology.
The purpose then became to study the Sun from its deep core to the outer corona, as well as the solar wind in the corona and in the heliosphere, with a payload including in-situ instruments in addition to the remote-sensing ones. A continuous view of the Sun and a low relative velocity between Sun and spacecraft, which could be achieved by choosing a halo orbit around the first Lagrangian point L1 (at a distance of 1.5 × 10⁶ km from Earth), were considered essential in order to best address this broader spectrum of scientific objectives. The mission was then renamed Solar and Heliospheric Observatory. The phase-A study was conducted from November 1983 to December 1985, with the final presentation of the results in January 1986. The spacecraft industrial study 28 was carried out under the assumption that the mission could be developed in collaboration with NASA. At a certain point, ESA also decided to conduct a rider study to the phase-A study, aimed at considering as an alternative a solely ESA mission with a reduced payload, in case the International Solar-Terrestrial Physics (ISTP) program were not approved. 29 At this point, I was part of the science team of the rider study, contributing to the definition of the model payload of the reduced SOHO version. When in 1983 Roger Bonnet was appointed Director of the scientific program of ESA, 30 he acted to strengthen ESA science by setting up a grand long-term plan of scientific investigations from space that could attract sufficient financial resources to be fully implemented. In a short time, his effort led to the definition of 'Space Science - Horizon 2000', a well-structured and well-balanced program aimed to promote a vigorous growth of space research in Europe. In this scenario, in 1984 SOHO was identified as part of the first cornerstone of the Horizon 2000 program, together with Cluster, a four-spacecraft mission designed to study plasma structures in three dimensions. This was a great achievement for the European solar community. The final decision on the first cornerstone was taken in February 1986, when it was decided that SOHO was to be implemented in collaboration with NASA, responsible for the launch and the operations, and to be included in the ISTP program of ESA, NASA, and ISAS, the Institute of Space and Astronautical Science, the Japanese space agency. 31 Following the announcement of opportunity issued in March 1987, the payloads were selected in March 1988. 32 The industrial phase B started in October 1989. From January 1986 to 1988 I was able to witness the mission advancements as a member of the ESA Solar System Working Group. 33 The eventful history of the SOHO mission is reported in great detail in Huber et al. (1996). SOHO became the third large solar observatory in space, after Skylab and SMM. Its payload turned out to be the most comprehensive set of solar and heliospheric instruments ever placed on the same platform, for a total weight of 640 kg.
28 The industrial part of the phase-A study was conducted from July 1984 to October 1985, with final presentation in November 1985. The rider study to the phase-A study was performed between July and October 1985.
29 SOHO Solar and Heliospheric Observatory - Report on the Phase A Study, ESA-SCI (85)7, December 1985.
30 Roger Bonnet stepped down from this position in 2001.
Figure 13 Waiting for the SOHO launch at Cape Kennedy, November 1995. From the right: Giancarlo Noci, Giuseppe Tondello, and Ester Antonucci.
The spacecraft was launched on December 2, 1995 from Cape Kennedy (Figure 13), only six years after the end of the SMM mission. The spacecraft entered a halo orbit at L1 on February 14, 1996, exactly on the sixteenth anniversary of the SMM launch. The SOHO payload is described in the volume of Solar Physics dedicated to the SOHO mission (Editors: Domingo, Fleck, and Poland, 1995). Like SMM, SOHO was affected by an unfortunate event. The spacecraft was lost after about two and a half years of operations, at the end of June 1998. The failure was due to operational problems. After various efforts to reestablish contact with SOHO, a scientist coming from the SMM experience, Alan Kiplinger, suggested the successful idea of trying to locate the spacecraft with the support of the Arecibo radar. On July 28, 1998 the Arecibo-Goldstone bistatic radar configuration confirmed that SOHO was still in its expected position and was not rotating excessively fast. 34 The first signal from the spacecraft was received on August 3: SOHO was still alive. The spacecraft was reoriented and the Sun-pointing attitude was fully recovered in September 1998. At the end of December of the same year the last gyroscope was lost, but the engineers were able to overcome the problem and SOHO became the first three-axis-stabilized spacecraft operated without gyros. With the exception of the 1998 summer of distress, SOHO continued its incredible success story for more than two and a half solar cycles under the guidance of the ESA project scientists: initially Vicente Domingo and later Bernhard Fleck. Designed for a two-year lifetime, eight of the twelve SOHO instruments are still operational at the time of this writing. In the first twenty-five years, more than 6000 papers based on SOHO data have been published in the refereed literature and more than 5000 scientists have been involved in data analysis. On October 21, 2020, the PROSWIFT (Promoting Research and Observations of Space Weather to Improve the Forecasting of Tomorrow) Act, signed into law by the President of the United States after it had passed both chambers of Congress, declared SOHO an infrastructure of critical importance to the nation's space-weather architecture. 35
31 President Ronald Reagan declared support of the international Solar Terrestrial Program on February 10, 1986.
32 See 'The SOHO Mission - Scientific and Technical Aspects of the Instruments', ESA SP-1104, Scientific Coordinator V. Domingo, November 1988.
33 Soon after the payload selection, I became part of the working group set up by ESA to study the ground segment and the operation scenario of SOHO. ESA was evaluating the possibility of developing a remote operation center in Europe, the European Science Data and Operations Centre (ESDOC); several countries, in particular France, Italy, and Spain, declared their interest and submitted a proposal to develop such a center. In the end only the Experiment Operation Facility, with the purpose of coordinating the observations and operating the payload instruments, was implemented at the GSFC, although scientific planning for limited periods was also carried on at MEDOC, at the Institut d'Astrophysique Spatiale (IAS) in Paris. In 1995 ESA granted to IAS, the University of Turin, and RAL the right to host a SOHO data archive, the principal one being located at GSFC.
The UltraViolet Coronagraph Spectrometer
The idea of participating in the payload development started to come to my mind when SOHO entered the assessment study.
It would have been quite natural for me to join the science team of the Coronal Diagnostic Spectrometer (CDS), led by Bruce Patchett at RAL, or that of the Solar Ultraviolet Measurements of Emitted Radiation (SUMER) instrument, accepting a few years later an invitation by Ian W. Axford, Director at the Max Planck Institute. However, I thought it was perhaps the right time to undertake some action in view of the design and realization of the SOHO payload with other interested Italian colleagues, rather than being exclusively interested in the SOHO science. 36 The payload designed during the SOHO phase-A study included a UV coronagraph, in order to detect the source regions of the solar wind in the atmosphere of the Sun with a new spectroscopic technique. Martin C.E. Huber of the Technische Hochschule, Zurich, chair of the ESA Solar System Working Group at the time of the SOHO assessment and phase-A study, brought to the attention of the study teams the concept of a novel EUV coronagraph based on Doppler dimming. The idea of determining the outward expansion of the hot corona by measuring the Doppler dimming of the resonantly scattered UV lines - first the dominant H I Lyα line at 1216 Å emitted by neutral hydrogen atoms - was put forward in the 1970s by Giancarlo Noci of the University of Florence (as quoted by Withbroe et al., 1982), who later also suggested the diagnostic method based on the O VI 1032 - 1037 doublet, which turned out to be crucial for extending the range of the measurable velocities of coronal outflows (Noci, Kohl, and Withbroe, 1987). John Kohl, who had started developing at CFA a UV coronagraph-spectrometer based on Noci's idea, was one of the US scientists supporting the SOHO phase-A study team, which also included Giuseppe Tondello of the University of Padua, an expert in laboratory UV spectroscopy. In July 1983, John Kohl visited Florence to discuss with us the opportunities offered by SOHO to pursue the new approach to ultraviolet coronagraphy, which exploited the Doppler-dimming diagnostics. In the opinion of both Tondello and myself, the best way to get Noci fully involved, not only from the scientific point of view but also from that of the hardware development in the SOHO project, was to join forces and collaborate with Kohl and Huber in proposing a UV coronagraph with spectroscopic capabilities, never flown before on a long-term space mission.
34 The detection of the location, orbital velocity, and spin rate of the SOHO spacecraft after its loss of control in 1998 was achieved with a bistatic radar configuration that utilized the Arecibo antenna as the transmitter and the 34-meter NASA Deep Space Network antenna at Goldstone (California) as the receiver.
35 'Promoting Research and Observations of Space Weather to Improve the Forecasting of Tomorrow' Act: 'In order to sustain current space-based observational capabilities, NASA shall: in cooperation with the European Space Agency, maintain operations of the Solar and Heliospheric Observatory/Large Angle and Spectrometric Coronagraph (SOHO/LASCO) for as long as it continues to deliver quality observations, prioritize the reception of LASCO data.' https://www.congress.gov/bill/116th-congress/senate-bill/881/all-info.
36 At that time Giuseppe Vaiana, who had returned to Palermo after a long and successful career in the US as the leading scientist of the Soft X-ray telescope S-054 on Skylab, was the most experienced Italian solar scientist in space instrumentation.
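The principle of Doppler dimming introduced above can be illustrated with a minimal numerical sketch. The resonantly scattered intensity is proportional to the overlap integral between the exciting profile coming from the solar disk and the absorption profile of the coronal atoms, which is Doppler-shifted as those atoms flow outward; the overlap, and hence the scattered radiance, therefore decreases with increasing wind speed. The Gaussian shapes and round-number widths below are assumed purely for illustration:

```python
# Minimal sketch of Doppler dimming for a resonantly scattered line:
# the scattered intensity is proportional to the overlap between the
# incident (disk) profile and the coronal absorption profile shifted
# by the outflow speed. Profile widths are assumed round numbers.
import numpy as np

C_KMS = 3.0e5                        # speed of light [km/s]
LAM0 = 1215.67                       # Å, H I Lyman-alpha

def gaussian(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

def dimming_factor(v_wind, w_disk=0.3, w_corona=0.5):
    """Overlap integral, normalized to the static (v = 0) case.

    v_wind   : outflow speed [km/s]
    w_disk   : width of the exciting disk profile [Å] (assumed)
    w_corona : width of the coronal absorption profile [Å] (assumed)
    """
    lam = np.linspace(LAM0 - 5, LAM0 + 5, 4001)
    shift = LAM0 * v_wind / C_KMS            # Doppler shift of the absorbers
    overlap = np.trapz(gaussian(lam, LAM0, w_disk)
                       * gaussian(lam, LAM0 + shift, w_corona), lam)
    static = np.trapz(gaussian(lam, LAM0, w_disk)
                      * gaussian(lam, LAM0, w_corona), lam)
    return overlap / static

for v in (0, 100, 200, 300):
    print(f"v = {v:3d} km/s -> relative intensity {dimming_factor(v):.2f}")
```

With these assumed widths the scattered intensity drops to a few tens of percent of the static value already at a few hundred km s⁻¹, which is why the dimming of the line is such a sensitive tracer of the outflow speed in the wind acceleration region.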
Our intent became to contribute to the coronagraph with the study and development of the spectrometer subsystem. Thus, in 1983, the trio formed by Noci, Tondello, and myself, quite complementary in terms of competence and personality, initiated a challenging and, I would say, successful experience. Starting in 1984 we held numerous meetings with leading figures in Italian industry - at first a bit hesitant to be involved in the development of scientific hardware - with the Italian Space Office, to ensure the needed endorsement and financial support, and with Kohl's team, to define the partnership and to contribute to the instrument design. The description of the main features of the UV coronagraph for inclusion in the model-payload dossier was ready by January 1986. During the Initiation Meeting with Kohl, held at the CFA in March 1987, the baseline design and the definition of the main tasks to be carried out in Italy by Aeritalia 37 in Turin and Officine Galileo 38 in Florence were agreed upon, in view of the submission of the proposal of the Ultraviolet Coronagraph Spectrometer (UVCS). The best and final UVCS proposal, based on the revision of costs, complexity, and weight requested by NASA, was submitted in January 1988. A decisive meeting concerning the contribution of the spectrometer subsystem was held at the Space Office of the Ministry for Scientific Research in Rome, in the presence of the three of us - Noci, Tondello, and myself - on December 30, 1987, in order to be able to submit in time the revised proposal with the full approval of the Italian part. The meeting turned out to be quite positive, with a more than substantial increase in terms of financial support (unfortunately I had to forget my 'white' week of cross-country skiing in the Alps!). In the best and final proposal, the Italian Space Agency (Agenzia Spaziale Italiana, ASI) 39 was responsible for providing the spectrometer of the UVCS coronagraph (Figure 14), with Kohl and Noci in the roles of UVCS Principal and Co-Principal Investigator, respectively. To become a US instrument supported by NASA with ASI and Swiss participation, the first step in the UVCS selection procedure was to undergo a preliminary NASA evaluation, which turned out to be negative: UVCS was not recommended, for funding-allocation reasons. Notwithstanding this initial very serious problem, the proposal was later approved during the ESA selection process, with the justification that UVCS was crucial to meeting SOHO's scientific objectives and that the Italian scientists and resources were important to the ESA Scientific Program. 40 In the end, as many as four coronagraphs were onboard SOHO: the three coronagraphs of the LASCO suite, built by the Naval Research Laboratory, the University of Birmingham, the Max Planck Institute for Aeronomy, and the Laboratoire d'Astronomie Spatiale (Marseille), observing for the first time the corona from 1.1 to 30 solar radii (Brueckner et al., 1995), in addition to UVCS for the UV spectroscopy of the solar-wind acceleration region from 1.5 to 10 solar radii (Kohl et al., 1989; Kohl et al., 1995a, 1995b).
37 Aeritalia, subsequently Alenia Space in 1990, and today Thales-Alenia Space - Italia.
38 Today Leonardo s.p.a.
39 The ASI was established in 1988 in place of the Space Office of CNR.
40 The selection letter arrived at CFA on March 11, 1988.
Figure 14 The spectrometer subsystem, Italian contribution to the UVCS, on the Alenia Space premises in Turin.
In Italy, an industrial team comprising Alenia Space in Turin and Officine Galileo in Florence was involved in the realization of the UVCS. At the end of March, Riccardo Giacconi, Nobel Prize winner in 2002 for his work in X-ray astronomy, was invited to the Alenia premises to attend a presentation of the scientific space projects under industrial development in Turin: the satellite SAX 41 and the UVCS spectrometer. On this occasion he demonstrated great interest in UVCS, and this was very encouraging at the start of this new undertaking. Soon after, during the kickoff meeting that took place in April 1988 at the Physics Institute in Turin, we duly celebrated the UVCS selection with the US and Swiss colleagues (Figure 15). The first SOHO workshop was held in Annapolis in August 1992, where a brief description of UVCS was presented (Kohl and Noci, 1992; see also Noci et al., 1994). The progress of the UVCS in the development phase was not quite straightforward, since UVCS was a complex and heavy instrument, including a series of mechanisms, such as the one needed to roll the heavy structure around the axis of the instrument in order to observe different sectors of the corona, so that an image of the full corona could be acquired with subsequent rolls. The guidance of NASA, ESA, and ASI, and the endurance of the Italian trio, together with Martin Huber, in supporting the Principal Investigator turned out to be crucial to overcoming the various technical and managerial difficulties and, in the end, to delivering an instrument only slightly descoped with respect to the initial design. UVCS successfully operated for more than one solar cycle, from 1996, when SOHO reached L1, until 2013. The wealth of spectroscopic data collected with UVCS has not yet been fully exploited and still contains a lot of information to be mined.
X-Ray UltraViolet Imager for the Orbiting Solar Laboratory
In the year of the UVCS approval, NASA invited ASI to take primary responsibility for the X-ray UV Imager (XUVI) for the Orbiting Solar Laboratory (OSL) mission. The invitation letter by L.A. Fisk, Associate Administrator for Space Science and Applications, NASA, was sent in October 1988. One year later OSL was declared a candidate for a new start in the fiscal year 1992, with launch foreseen within the time frame 1998 - 1999. Thus, thanks to the experience gained with UVCS, it was time for me to take responsibility for this new, interesting undertaking. OSL was designed as a satellite in polar orbit around the Earth with one main scientific element: a meter-class telescope working in the visible and near-UV range, a project led by Alan M. Title of the Lockheed Research Laboratory. The other two complementary instruments foreseen onboard were the high-resolution UV spectrograph (1200 - 1700 Å), led by Guenter E. Brueckner of the NRL, and the X-ray UV Imager (40 - 400 Å), led by myself. The other two Principal Investigators could count on their long experience as space scientists, and I could count on their support, and on the precious advice of US coinvestigators such as Marilyn E. Bruner of the Lockheed Research Laboratory, Leon Golub of the CFA, Roger J. Thomas of the GSFC, and Donald F. Neidig of the Phillips Laboratory of the US Air Force. On the Italian side, I had the full collaboration of Marco Malvezzi from the University of Pavia and Luigi Ciminiera of the Politecnico di Torino, with part of the UVCS team.
The XUVI, a normal-incidence telescope with mirrors coated with multilayers, was an ambitious instrument, and working on its definition was quite an instructive effort. The multilayer-mirror technology was new and had been tested in rocket-flight experiments. The instrument consisted of two complementary units: the high-resolution imager, resolving 0.25 arcsec (pixel size smaller than 200 km) on a limited field of view, and the full-disk imager, resolving 2.3 arcsec, each acquiring high-resolution images in UV spectral bands capable of covering a wide plasma-temperature range; in other words, the instrument was designed to allow the simultaneous observation of all layers of the solar atmosphere, from the cool chromosphere and transition region to the hot corona, detecting the XUV radiation from plasma emitting at the various temperature regimes from 10⁵ K to a few 10⁷ K. One of the main goals was to get closer to resolving the individual magnetic flux tubes present in the solar atmosphere. The phase-A/B study of the instrument was approved by NASA during a successful review that took place in Turin. The participation in OSL was a positive though brief experience, with its most pleasant moment when Daniel S. Spicer and I organized the OSL workshop on the island of Capri, in May 1991, addressing the mission's scientific objectives. 42 Unfortunately, everything ended a few months later, at the time of the Iguazu solar-physics meeting in Argentina at the beginning of July. The news of the selection of the High Energy Solar Spectroscopic Imager (HESSI) mission instead of OSL as the new entry in the NASA scientific program was announced to the audience of the meeting during an afternoon session, just a few seconds before my talk on the XUVI project. It was shocking, but I decided in any case to present the work done in defining our innovative instrument, and before dinner I did not miss the opportunity to toast the good fortune of HESSI with Brian Dennis, one of my old SMM friends. After all, the selected mission, later renamed Reuven Ramaty High Energy Solar Spectroscopic Imager (RHESSI), was continuing the SMM tradition, dear to me, and it turned out to be extremely successful. In any case, the OSL effort was not a total loss, since other missions have profited from the OSL concepts and studies. While OSL was on hold, the XUVI was also discussed in the frame of a possible scientific utilization of the Space Station and presented in a simplified configuration as complementary EUV instrumentation (Antonucci, 1992) for SIMURIS, the Solar, Solar System and Stellar Interferometric Mission for Ultrahigh Resolution Imaging and Spectroscopy, 43 a mission concept proposed by Luc Damé.
MGS - the Multilayer Grating Spectrometer
In February 1991 ASI organized a workshop in Rome with the intent of stimulating and coordinating the interests of the scientific community in the opportunities offered by the flight of the European Retrievable Carrier (EURECA-3) Precursor Flights program. Being three-axis stabilized and pointing at the Sun, the EURECA platform in my opinion offered interesting opportunities for testing new technologies for solar telescopes. Thus, with the colleagues already involved in XUVI, in March 1991 we submitted to ESA a proposal (for which I was responsible) to fly a new kind of instrumentation, the Multilayer Grating Spectrometer (MGS). The proposal was selected by ESA and the accommodation study on EURECA-3 was carried out in 1991.
The instrument-interface issues were discussed in detail in a meeting at ESTEC, in view of a phase-B completion foreseen at the end of 1992 and of a 1996 flight. The scientific goal of the MGS was to acquire monochromatic images of the full Sun at high spatial and spectral resolution in each of the intense lines emitted in the spectral region 170 - 230 Å, corresponding to the temperature regime between 10⁵ and 10⁷ K, in order to derive density, temperature, and velocity maps of the full solar disk with a spatial resolution of 10 arcsec. The novelty consisted in covering a standard grating with a suitable multilayer coating to enhance the normal-incidence reflectivity, so that an entire high-resolution spectrum in the selected wavelength window could be acquired for each spatial pixel. A full solar map could be acquired in less than 60 minutes. MGS was based on a technological innovation devised by Thomas et al. (1991), who manufactured at the GSFC a multilayer grating for an XUV normal-incidence telescope that could achieve very high spectral resolution. In addition to the benefits in terms of expected solar-physics results, the participation in the EURECA project offered the opportunity to test the long-term performance of this new technology for optical components in the soft-X-ray/UV domain, since the instrument could be retrieved and analyzed in the laboratory after flight. However, the EURECA flights following the first one, provisionally scheduled for 1992 - 1993, were canceled. In order to take advantage of this experience, in the following years we also pursued a study for accommodating this instrument on an Express Pallet of the Space Station. Although the Space Station did not ensure the best operational conditions for such an instrument, the proposal was formulated and submitted to ESA in 1997, but it was not selected. In the end the MGS was never built and never flown. In a sense, neither the XUVI-OSL nor the MGS-EURECA experiences were a failure. They were instead excellent training opportunities that prepared me for the role of Principal Investigator of the UV coronagraph of Solar Orbiter, my last space mission.
An Unexpected Surprise
In 1994, the year before the UVCS launch, I was very honored to be nominated a corresponding member of the International Academy of Astronautics during the Academy meeting in Jerusalem. The list of the members of the Basic Sciences section included only one Italian name at that time, Giuseppe Occhialini, who, however, had passed away in 1993. 44 This was the first recognition I received from an international academy. It was also very important for me because it gave me the opportunity to meet some of the pioneers of the early phase of the space adventure that had begun in the twentieth century. I have always been convinced that the generation of scientists and engineers preceding my generation was really an exceptional one, and that we owe them infinite gratitude and admiration.
UVCS - SOHO Scientific Planning at the EOF
The first year of observations with UVCS was again a busy period of traveling back and forth to the US. I quite frequently visited the SOHO Experiment Operation Facility in building 1 at the GSFC. At the EOF the experiment teams were planning the observations, sending the commands, and receiving the data from the instruments in near real time.
A good share of the UVCS team at the EOF was formed by young Italian graduate students and postdoc researchers from Florence, Turin, Padua, and Palermo, sent to collaborate with Kohl's staff. On January 29, 1996, two weeks before the insertion of SOHO into the halo orbit at L1, the detectors of the UVCS were switched on. Although preliminary scientific data had been acquired even earlier, regular scientific operations began in April. In the first year of observations the outer corona showed a quite regular dipolar configuration, a perfect example of a solar-minimum corona, especially suitable for studying the transition from the fast to the slow wind in the quiet solar atmosphere. In the first week of science planning, from April 1 to April 9, I had the privilege of playing the role of lead observer. We verified and optimized observing sequences, starting with the observation of the plasma dynamics in polar coronal holes, and tested the first full synoptic sequence, prepared by John Raymond of the CFA, aimed at obtaining daily images of the global corona and the solar wind right in its acceleration region, via successive scans in altitude at a given latitude and rolls of the coronagraph to observe the entire corona. After several weeks of almost continuous presence at the EOF the team needed a rest, at least at Easter; thus I suggested that we could prepare and upload a 72-hour-long polar-observation sequence. The Easter break was rewarded with an unprecedented observation of polar plumes in the outer corona, extending out to about 2 R⊙ (Antonucci et al., 1997b; Giordano et al., 1997). By June 1996 several Joint Observing Programs developed in collaboration with the other SOHO instruments were performed, also taking into account quiescent streamers and a few active ones. During my weeks as lead observer two interesting events occurred. At the beginning of June 1996 UVCS caught the first CME in ultraviolet light in the outer corona. During the event, peculiar mass motions consistent with untwisting magnetic fields around an erupting flux tube were observed, confirming a previous Skylab observation (Antonucci et al., 1997a). On May 1, 1997 the LASCO team, working in the office next door, spotted a sungrazing comet in the large field of view of their coronagraphs. This was an unexpected opportunity, and I proposed that the team try to observe the comet along its path across the solar corona. Guessing the speed of the comet, we predicted its successive positions, and by moving the UVCS slit accordingly we did indeed succeed in detecting the comet several times as it approached the limb of the solar disk in the plane of the sky. The comet was again spotted and traced along its path when it reemerged at the opposite solar limb. To be able to perform this observation in ultraviolet light was quite a rewarding experience, although the team involved in the comet chase had to stay at the EOF for many long hours. 45 Upon preliminary analysis the H I Lyα data looked very interesting; however, I did not pursue their study any further. Taking advantage of the quiet-Sun conditions, I proposed a sequence of supersynoptic, global-Sun observations lasting two weeks, making it possible to observe the full outer corona at very high spatial and spectral resolution. The supersynoptic observation was first performed in August 1996, on the occasion of the Whole Sun Month campaign, which ran from August 10 to September 8, 1996.
The same sequence was repeated several times in the following months, continuing as long as the large-scale configuration of the corona remained relatively stable. 46 This program allowed a detailed study of the typical solar-minimum configuration and dynamics of the wind in the solar corona.
What We Learned with the UVCS Spectroscopic Observations of the Corona
As in the case of the XRP observations, the innovativeness of the instrument made it possible to observe new phenomena and, as a consequence, obtain unexpected results. When the first UVCS observations were performed, the observed profiles of the O VI lines were quite surprising. The widths of such lines, if simply interpreted as thermal broadenings, yielded temperatures up to two orders of magnitude in excess of the expected coronal temperature of the order of 1 million K. The H I Lyα 1216 Å lines emitted by neutral hydrogen were found to be broadened to a lesser extent. A first quick analysis of the wind-velocity regimes in the corona - using the diagnostic technique based on the O VI 1032, 1037 Å doublet ratio to mark the boundary of the regions where the wind speed exceeded 100 km s⁻¹ - showed that the spectral lines were wider in the regions where the solar wind flows, and the largest line widths were observed in the core of coronal holes (see, for example, Antonucci et al., 1997c). Further analyses confirmed that the observations unquestionably showed the existence of two phenomena: extreme kinetic temperatures, deduced from the broadenings of the line profiles across the magnetic field, which were correlated with the wind speed; and a high degree of anisotropy in the kinetic temperatures - that is, quite different ion/atom velocity distributions across and along the magnetic field - suggesting preferential deposition of energy across the magnetic field. These early findings implied two immediate corrections. On the one hand, when preparing an observation, there was the need to modify the masks that were applied to the spectra in order to isolate the spectral lines to be observed, by taking into account their real, unexpectedly large, width. On the other hand, there was the need to upgrade and adapt the Doppler-dimming codes to correctly interpret the data, since, in the case of an anisotropic velocity distribution of the atoms/ions, during the resonant-scattering process the incident photons encounter in the corona absorbing profiles with different widths depending on the direction of incidence. Furthermore, while visiting the Institut d'Astrophysique Spatiale (IAS) in Paris, directed in those years by Alan Gabriel, Renato Martin, the first PhD student in solar physics at the Physics Institute in Turin, noticed that, in addition to the C II line considered by Noci, Kohl, and Withbroe (1987), a second C II line present in the blue wing of the O VI 1037 line should be considered when computing the Doppler dimming of the O VI emission. This was an important point, since the presence of this line allowed us to extend the diagnostic capability of the O VI doublet to much higher velocity values, exceeding 400 km s⁻¹. Giancarlo Noci ensured that this point was taken into proper account in the codes developed in Dodero et al. (1998) and Li et al. (1998).
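To make the 'two orders of magnitude' statement concrete: attributing the entire 1/e width of a profile to thermal motion gives a kinetic temperature T_k = m (c Δλ/λ0)² / (2k). A minimal sketch, with an assumed O VI line width of the magnitude seen in coronal holes, shows where such an interpretation lands:

```python
# Kinetic temperature implied by a line width, if the whole broadening
# is read as thermal motion: T_k = m * v_1e^2 / (2 * k_B), with
# v_1e = c * delta_lambda / lambda0. The width below is assumed.
K_B = 1.38e-23                # Boltzmann constant [J/K]
C_MS = 3.0e8                  # speed of light [m/s]

m_oxygen = 16 * 1.66e-27      # kg
lambda0 = 1031.91             # Å, O VI line
dlambda_1e = 1.4              # Å, assumed 1/e half-width in a coronal hole

v_1e = C_MS * dlambda_1e / lambda0             # m/s
t_kin = m_oxygen * v_1e**2 / (2 * K_B)         # K
print(f"v_1e ~ {v_1e/1e3:.0f} km/s, T_k ~ {t_kin:.1e} K")
# With these numbers T_k comes out near 1.6e8 K, roughly two orders of
# magnitude above the ~1e6 K electron temperature of the corona.
```

It was precisely this mismatch, together with the different widths measured along and across the magnetic field, that pointed to nonthermal, anisotropic velocity distributions rather than to genuinely thermal plasma at such temperatures.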
Following in-depth analyses of the wind speed with the updated diagnostic codes, the velocity distribution of neutral hydrogen atoms and O VI ions was indeed confirmed to be anisotropic to the highest degree in the core of coronal holes, where the fast wind originates (Antonucci, 1998; Antonucci, Dodero, and Giordano, 2000). Thanks to the UVCS data, it was now possible to observe the flow of the fast wind in the corona out to several solar radii during its acceleration process. In the core of coronal holes neutral hydrogen and protons were traced approximately out to 4 solar radii and the oxygen component out to 5 solar radii, where its flow speed approaches that of the heliospheric fast wind. The remarkable acceleration of the oxygen component was interpreted as the signature of energy dissipated by ion-cyclotron resonant Alfvén waves in the outer corona (Cranmer, Field, and Kohl, 1999), and energy was found to be dissipated at the maximum rate across the magnetic field at about 2.9 solar radii (Telloni, Antonucci, and Dodero, 2007), that is, beyond the sonic point, as expected on the basis of solar-wind theory. The dynamics of the corona was studied in detail from the fast-wind regime observed in the core of the solar-minimum polar holes to the slower wind close to the interface with the equatorial streamers. It was also possible to relate the wind speed to the flux-tube areal expansion. In particular, the slow wind emerging from the open magnetic-field lines characterized by nonmonotonic expansion - separating the substreamers present within the solar-minimum equatorial streamers - was found to be related to variations in the abundance of oxygen, observed as a streamer-core dimming in O VI. A wealth of results was achieved thanks to the UVCS observations of solar wind, streamers, coronal mass ejections, and comets throughout Cycle 23. These results, presented at several international conferences (Figure 16), are in part reported in various review papers (e.g., Abbo et al., 2016; Cranmer, Gibson, and Riley, 2017; Antonucci et al., 2020a; and references therein). Many more results remain to be extracted from the UVCS data collected over one full solar cycle.
The Origin of the Solar Orbiter Idea
In 1993, when space solar physicists were still engaged in the payload integration and verification activities in view of an imminent SOHO launch, Per Maltby organized in Oslo a workshop sponsored by ESA on the 'Scientific Requirements for Future Solar-Physics Space Missions' 47 to explore new ideas for the next European solar observatory in space. Although it turned out to be an extremely long-lived mission, SOHO was designed for operations lasting two years, to be extended to six years in case of success; hence it was time for the community to start thinking of the future of European solar science from space. The next step in solar exploration was going to be a mission characterized by a close approach to the Sun, to investigate in detail its atmosphere from the layers below the photosphere, accessible via helioseismology, to the outer corona at all latitudes, including the solar poles, and to navigate the unexplored circumsolar regions. However, the difficult path to achieving the approval of the next solar mission in the ESA science program lasted more than one solar cycle, and almost another whole solar cycle was needed before the new mission was finally flown.
The idea of placing telescopes aboard a solar orbiter emerged from a rich scenario of hypotheses and preliminary studies circulating in the 1990s among scientists and space agencies. 48 The crucial event in the genesis of the next European observatory in space, Solar Orbiter, took place in March 1998, when ideas for future solar missions were discussed at the ESA conference 'A Crossroads for European Solar and Heliospheric Physics'. 49 The outcome of the Tenerife conference was a consensus to recommend that ESA fly, as the next solar-heliospheric mission, an observatory in orbit around the Sun. In the session dedicated to future missions, Eckart Marsch presented the concept of InterHelios (Marsch et al., 1998). The profile of this mission was similar to the one later adopted for Solar Orbiter, except for its orbit, which lay in the ecliptic plane. The key idea was to exploit the heliosynchronous segments of the orbit at heliocentric distances near 0.3 AU to study the characteristics of the near-Sun solar wind and identify its source regions with a combination of in-situ and remote-sensing instruments. In the same conference session, on behalf of a European team exploring a possible collaboration with NASA for the development of a Solar-Terrestrial Relations Observatory (STEREO) multispacecraft fleet, Bothmer et al. (1998) presented, as one of the possible mission configurations, a scenario including an out-of-ecliptic spacecraft. The inclusion of a polar element was inspired by the feasibility study of a Solar Polar Sail Mission carried out by Marcia Neugebauer and collaborators in 1998 at the Jet Propulsion Laboratory (JPL). 51 The polar element was presented at the meeting as a plausible ESA contribution to the STEREO mission. More than two decades earlier, in September 1974, I had attended the presentation of the European space programs during a conference at the European Space Research Institute (ESRIN) in Frascati. 52 One of the most stimulating talks concerned the out-of-ecliptic mission jointly studied by Europe 53 and NASA. The mission, approved two years later, consisted of two spacecraft envisioned as flying in formation out to Jupiter and then heading for the two opposite solar poles. Eventually, in 1990, only the ESA spacecraft, Ulysses, was flown. In the same 1974 meeting the idea of imaging the solar polar regions was brought forth. To me, the perspective of exploring the poles of the Sun was quite fascinating and, after many years, the Tenerife forum turned out to be an excellent opportunity to explore this possibility by merging the study of the circumsolar regions and the observation of the poles, put forward in the InterHelios and STEREO presentations, respectively.
47 Workshop proceedings published in P. Maltby and B. Battrick (eds.), ESA SP-1157. Some years later, in 1997, the Oslo meeting was followed by the ESLAB symposium on 'Correlated Phenomena at the Sun, in the Heliosphere and in Geospace' at ESTEC, ESA SP-415 (scientific coordinator B. Fleck), aimed at discussing the progress desirable in solar and heliospheric physics, solar variability, climate, and space weather.
48 Space instrumentation and missions studied for solar physics in the mid-1990s are briefly summarized in Antonucci and Simnett (1996).
49 The proceedings of the conference are published in ESA SP-417, 1998.
50 'Horizon 2000 Plus - European Space Science in the 21st Century', ESA SP-1180.
Thus, both during the discussions at the meeting and in private conversations, I tried to make the point that an InterHelios-type mission with an orbit sufficiently inclined with respect to the ecliptic plane to image the poles would have greatly enhanced the science objectives of the next European solar mission. During the session on future missions, in the heat of the argument I almost sat right down on the floor since I forgot that the movable seats in the congress room had to be pulled down horizontally before one could sit. This event was quite funny and had the 'honor' of being mentioned by Eric Priest in his speech during the closing dinner as the 'trick of the disappearing lady'. The concept of an ESA Solar Orbiter mission as a further development of InterHelios was better substantiated and formulated in a preassessment study conducted by ESA in 1999 in order to evaluate the potential risks and technology issues related to the mission profile. The outcome of this study was the basis for the proposal we submitted on January 27, 2000 in response to the ESA call for the flexi-missions F2 and F3.

The Solar Orbiter Odyssey

I spent the first decade of the new century being fully involved in the long odyssey of Solar Orbiter, participating in the proposal of the mission and in the related studies promoted by ESA, and coordinating, with the full support of ASI, the Italian interests in this mission until the final approval. There was a great synergy within the European group of scientists, growing in number over time, who worked for Solar Orbiter's positive outcome, instrumental in overcoming the various obstacles. The proposal 'Solar Orbiter - High-Resolution Mission to the Sun and Inner Heliosphere', 54 coordinated by Eckart Marsch and Rainer Schwenn and submitted to ESA in 2000, envisioned a mission that had characteristics close to those of the mission actually flown in 2020. Our idea was to develop a near-Sun, out-of-ecliptic mission with perihelion below 0.21 AU and a maximum orbit inclination of 38°, consisting of a three-axis stabilized spacecraft constantly pointing at the Sun, with pseudosynchronous viewing periods ten days long. The strawman payload included a UV and visible-light (VL) coronagraph 55 based on the UVCS diagnostic techniques. The proposed mission was selected a few months later by the ESA Science Program Committee; thus the future observation of the polar regions was ensured. The main remaining problem concerned its budget, which exceeded the typical budget for a flexible mission. As a mitigation, Solar Orbiter could take advantage of some of the technological developments required for the cornerstone Mercury mission, later renamed BepiColombo in honor of Giuseppe Colombo of the University of Padua. Within the same timeframe, NASA's Living with a Star initiative was included in the fiscal year 2001 presidential budget released on February 7, 2000, with program elements being a new space-weather network including a Solar Dynamics Observatory. The Solar Orbiter selection was immediately followed by a 'delta' assessment study. 56 Yves Langevin of the IAS kindly assisted the Solar Orbiter study team on the aspects concerning the mission design and analysis.

51 A Solar Polar Orbiter was already present in the NASA fund requirement assessment formulated in 1985 (EOS Transactions, AGU, 66, 50, 1985).
52 In 1974
55 In the Pre-Assessment Report, a LASCO-C2 type coronagraph was foreseen.
In 2004 the ESA Science Program Committee confirmed the selection of Solar Orbiter within the Horizon 2000+ Program, but this was not yet the definitive word; we had no idea what a long path we still had to face before the final approval. Solar Orbiter was integrated into the new Cosmic Vision 2015 - 2025 ESA scientific program in 2008, with the consequence that it was placed back into competition with other missions. Following a second assessment study, 57 the mission was finally selected as a 'candidate' for the first medium-class mission, M1, of the Cosmic Vision Program in 2010, and after a further definition study, 58 Solar Orbiter was definitively selected as the first element of Cosmic Vision 2015 - 2025 in 2011. NASA's contribution, including the launcher and science instruments, was crucial for mitigating the budgetary constraints. The fact that the mission was actually approved two years after the selection of the scientific payload implied that the design studies of the instruments and the spacecraft were not perfectly phased in the first years. The formal selection of the Solar Orbiter instruments was announced in March 2009 when, as mentioned before, the mission was not yet fully blessed by ESA. This was of concern to the instrument principal investigators of the selected projects, and after a few telephone calls we decided to organize as soon as possible the third Solar Orbiter workshop in order to keep the attention of the agencies on our mission. I offered to organize the workshop at the end of May in Sorrento. Notwithstanding the fact that a few other international heliospheric meetings were held around that period and that the community was informed of the workshop at short notice, there was an unexpectedly large attendance, indicating once more that a wide community was interested in Solar Orbiter. At the meeting, there was ample time to present the mission and its payload and to discuss future plans and collaborations. Even the colleagues interested in participating in the ESA technological mission Proba 3 59 attended the workshop.

Future Solar Missions in the ESA Cosmic Vision Program

I had the opportunity to follow the vicissitudes of Solar Orbiter in its transition phase from Horizon 2000 to the next ESA scientific program in the role of one of the members of the ESA Space Science Advisory Committee (SSAC) in the years from 2004 to 2006. This was the period when the Committee was involved in the formulation of the program Cosmic Vision - Space Science for Europe 2015 - 2025. 60 The new decade-long program envisioned a clear strategy for the theme 'How does the Solar System Work?', which included as a fundamental element for solar research the need to 'chart the 3D magnetic field at the Sun's visible surface using a Solar Polar Orbiter'. In the ESA plans this implied the development of a reliable solar-sailing system, and a preliminary study of the spacecraft for the polar mission based on such a technology was already underway at the European Space Research and Technology Centre (ESTEC) of ESA. At last, solar physics could take advantage of a mission primarily dedicated to the study of the polar regions of the Sun, foreseen to be observed at the minimum distance of 0.25 AU.

59 The preliminary idea of ASPICS, a formation-flyer solar coronagraph, was formulated in response to a CNES call for ideas back in 2004 (e.g., Vivès et al., 2005), with design studies also on the part of Frassetto et al. (2005). The final proposal for developing the ASPIICS (the acronym changed) coronagraph on Proba-3 was submitted in 2009.
60 'Cosmic Vision - Space Science for Europe 2015-2025', ESA-BR-247, October 2005.
Hence, one of the scientific objectives most dear to me was finding its optimal place in the ESA planning, thanks to the fact that Peter Cargill, also in the SSAC, and I shared the same view and the ESTEC engineers were interested in developing the solar-sail technology. Later, the Cosmic Vision Program was superseded, and the present Voyage 2050 scientific plan identifies solar polar science as one of the possible themes for medium missions, albeit with much less emphasis. My wish is that in the future the pre-eminent role assumed by ESA in space solar physics thanks to SOHO and Solar Orbiter will be maintained.

Metis - the Multiwavelength Coronagraph for Solar Orbiter

When we were preparing the Solar Orbiter proposal, I was coordinating the Italian team working on the new concept of the Ultraviolet and Visible-light coronagraph envisioned for the mission. In addition to the historic nucleus comprising Giancarlo Noci and Giuseppe Tondello, this effort involved all our younger collaborators, who perfected their skills during the development, implementation, and operations of UVCS and, in turn, their students. The emphasis this time was on designing an instrument capable of simultaneously imaging the full solar corona in the resonantly scattered UV emission lines and in the polarized visible light, with the aim of measuring the plasma outflow speed with the same diagnostic techniques adopted for the UVCS data. The simultaneous imaging of the global corona at different wavelengths - obtained of course at the expense of the great spectroscopic capabilities that characterized UVCS - was preferred in order to be able to trace the propagation and evolution of the solar wind in the corona at high spatial and temporal resolution. The strawman payload described in the Solar Orbiter proposal included the design of a coronagraph imaging the EUV/UV emission of the He II Lyα 304 Å and H I Lyα 1216 Å lines and measuring the polarized brightness of the visible K corona by using mirrors coated with ad hoc multilayers. The goal was to derive, on the basis of these data, global maps of the flow velocity of the two major components of the solar wind, the H and He components. Throughout the coronagraph development, the instrument design effort was mainly carried out by Silvano Fineschi, Giampiero Naletto, and Marco Romoli, whilst Daniele Spadaro was more involved in the definition of the scientific objectives and Vincenzo Andretta in the operation concept. Gianalfredo Nicolini joined the team at the time of the instrument proposal in 2007, and he was fully devoted to this project; his contribution and support were indispensable from then on.

SCORE - the Solar Orbiter Coronagraph Prototype

The instrument envisioned for Solar Orbiter had never been tested before, and therefore in a meeting in Florence I proposed to fly a prototype on a rocket. Whilst exploring the suborbital flight opportunities, Russ Howard of the NRL suggested that I get in touch with Daniel Moses, who would shortly be attending the first Solar Orbiter workshop, 'Solar Encounter', held in Tenerife in May 2001. 61
This event was a great opportunity to sort out how to proceed in order to fly the Solar Orbiter UV-VL coronagraph prototype within the NASA suborbital flight program. This was the first step of a long-term collaboration with Dan Moses, crucial for realizing the rocket project, which continued throughout the phase of design and development of the Solar Orbiter coronagraph. The Helium Resonance Scattering in the Corona and Heliosphere sounding rocket (HERSCHEL), 62 built to establish proof of concept for the Solar Orbiter coronagraph, was successfully launched on September 14, 2009 from the White Sands Missile Range in New Mexico. The instrument package was composed of the multiwavelength Sounding-rocket Coronagraph Experiment (SCORE), 63 the coronagraph prototype developed by the Solar Orbiter coronagraph team; the Helium Coronagraph (HECOR), developed by Frederic Auchère and collaborators of the IAS; and the HERSCHEL Extreme-ultraviolet Imaging Telescope (HEIT), an instrument developed by Dan Moses and Jeff Newmark of the NRL. SCORE was designed to obtain simultaneous narrowband H I 1216 Å, He II 304 Å, and visible-light K-corona images from 1.5 R⊙ to 3.5 R⊙ (Fineschi et al., 2003; Romoli et al., 2007). HEIT observed the helium emission on the solar disk. It was much easier to develop the program and launch the instruments on the rocket than to write the HERSCHEL paper! The first results on the abundance of helium in the outer corona were obtained in a relatively short time, but because of the many commitments of the partners involved in this program, a full solar cycle elapsed from the rocket launch to the publication of the results in Nature Astronomy (Moses et al., 2020). The measurement of helium in the corona was obtained during a period of anomalously quiet conditions at the solar minimum of Cycle 23, when the solar-wind speed in the heliosphere reached its lowest values.

A Brief Participation in the Solar Dynamics Observatory with the Transition Region Spectroheliograph Project

Whilst the HERSCHEL proposal was taking shape, we briefly participated in the Solar Dynamics Observatory (SDO) mission studies. In August 2002 the Solar and Heliospheric Activity Research and Prediction Program (SHARPP) was selected by NASA for the phase-A study of the SDO mission. SHARPP was led by Russ Howard of the NRL, in collaboration with Pierre Rochus of the Centre Spatial de Liege, Belgium, and the Italian consortium led by the Observatory of Turin. SHARPP included a suite of EUV coronal imagers 64 and the visible-light coronagraph KCOR. The Italian contribution to the EUV component of SHARPP consisted of the Spectroheliograph for the Transition Region (SPECTRE). 65 The scientific objective of the SPECTRE experiment was to study the plasma in the transition region by obtaining high-resolution full-disk images of the solar atmosphere (1.2 arcsec) in the O V 629.7 Å line, emitted at about 2.5 × 10⁵ K. The requirements driving the optical design were satisfied by an innovative solution consisting of two spectrographs with opposite dispersion placed in tandem. In September 2003, during the phase-A study, quite unexpectedly NASA decided not to proceed with the realization of the SHARPP suite of instruments, which was substituted with the EUV payload developed by the Lockheed Martin Solar and Astrophysics Laboratory (LMSAL). The work already done in the definition of SPECTRE's optical design was reported in two papers by Naletto et al. (2004).
64 Initially AIA (Atmospheric Imaging Assembly) was the term used for the baseline set of EUV coronal imagers in the SDO announcement of opportunity; it was then also adopted for the assembly of EUV instruments built by LMSAL, flown on SDO in place of the EUV set of instruments designed in Belgium during phase A, named MAGRITTE (Rochus et al., 2004). SPECTRE complemented the diagnostic capability of MAGRITTE.
65 Principal Investigator: Ester Antonucci; Deputy Principal Investigator: Silvano Fineschi.

The Metis Coronagraph

Several years elapsed between the preliminary studies of the UV coronagraph performed in view of the mission proposal, submitted at the beginning of 2000, and the solicitation of proposals for the Solar Orbiter instrumentation issued by ESA in September 2007. In the meantime, the coronagraph studies were progressing thanks to the Solar Orbiter Payload Accommodation - Heat Shield Study promoted by ESA. A letter of intent (LOI) to propose the Coronagraph for the Solar Orbiter Mission was submitted in September 2006 in response to the ESA Call for Submission of LOI, with a detailed description of the science objectives, the instrument design, and the management approach. In this phase the project was named Coronal Imaging Advanced Observations (CIAO). 66 The aim of the ESA study, carried out from November 2006 to September 2007, was to address, among other matters, the definition of the interface between the remote-sensing instruments and the thick heat shield required to protect the platform of the instruments from the high flux of solar energy and thus mitigate the harsh thermal conditions expected when facing the Sun at perihelion. Never before had solar telescopes been designed to face the Sun at such a short distance. In particular, this was a study of crucial importance for the coronagraph, the only remote-sensing instrument protruding outside the feed-through of the shield protecting the spacecraft. By the time of the final proposal, submitted on January 15, 2008, the design of our instrument had evolved and become more and more ambitious. We indeed proposed a complex instrument composed of two elements, an EUV-UV coronal and disk spectrometer and a multiband visible/EUV/UV coronagraph, named 'Multi Element Telescope for Imaging and Spectroscopy' (METIS, the acronym coinciding with the name of the oceanid nymph, mother of wisdom and deep thought). 67 However, the recommendation of the payload review committee, which we received in July 2008, was to drastically descope METIS to a coronagraph-only experiment. Although the spectrometer was not approved, we were in any case pleased that the METIS coronagraph was going to be onboard such a challenging mission. The UV spectrometer for Solar Orbiter was later included in the payload as an ESA-provided instrument. At this point METIS was redesigned with the aim of imaging the H, He, and polarized VL coronal emission, while still maintaining some EUV spectroscopic capabilities, though limited to a coronal sector 35° wide. The payload selection was formally announced in March 2009; at the same time, I was appointed by ESA to be the Principal Investigator of the METIS project. In April 2009 the first Solar Orbiter Science Working Team meeting was held at ESTEC (Figure 18). The need to solve instrument-accommodation problems and minimize the total energy transmitted into the instrument by minimizing its entrance aperture led the team to invent a new type of coronagraph.
The new design, the result of a collaborative effort, was based on an inverted-occultation concept proposed by Dan Moses and turned into reality by the METIS team, which developed this idea and designed a brand-new coronagraph - first by Silvano Fineschi, who was thinking of a similar approach but starting from the UVCS configuration.

To make a long story short, at the end of phase B in 2012, the European and Italian space agencies asked us to consider the fact that the METIS repointing mechanism was adding complexity at both the instrument and the spacecraft level. In addition, they asked us to reduce the instrument mass and the cost for developing the coronagraph. Difficult decisions had to be made; the only solution was a drastic descoping of the instrument. The contribution of the UV-detector assembly on the part of Sami Solanki of the Max Planck Institute, and a letter of support of the METIS advisory board, including prominent international colleagues, written in January 2013, certainly helped in the negotiations with ASI, which in the end approved a simplified version of the coronagraph. In the descoping process, METIS lost the He-imaging capability and the sector dedicated to spectroscopy, as well as the repointing mechanism. The phase-B structural design had included this mechanism to ensure that the coronagraph would constantly point at the Sun's center, even when other remote-sensing instruments would be required to point at a selected target off-center and at the limb of the Sun. The loss of this mechanism meant that the entrance door of the instrument had to be closed each time Solar Orbiter pointed off-center when observing near perihelion, that is, when the highest spatial resolution in coronal imaging can be obtained. This was the price to pay. Thus, METIS became simply Metis; that is, the instrument's name could no longer be an acronym, having lost the spectroscopy element. Notwithstanding the painful descoping, the coronal imaging in both the ultraviolet H I Lyman-alpha and the polarized visible-light channels was maintained. Hence, the main goal of obtaining for the first time instantaneous global maps of the flow of the principal component, that is, the hydrogen component, of the solar wind in the corona was preserved. The descoping phase ended with the signature of the new industrial contract with a consortium formed by OHB Italia in Milan and Thales Alenia Space Italia in Turin, at the time of the kickoff of phase C/D. The simplified design and the new industrial organization were presented at the third Metis scientific meeting, organized by Vincenzo Andretta at the Astronomical Observatory of Naples in October 2013. The following years were extremely demanding, but we were able to recover the time lost in the process of descoping and assigning the new industrial contract and to deliver the instrument on time. Many crucial decisions had to be made to ensure the optimal future performance of the instrument while respecting the time schedule, and many difficulties had to be overcome.
All this was done in close contact with the industries involved in the fabrication of the instrument and with our partners manufacturing the mirrors, provided by Petr Heinzel of the Astronomical Institute of the Czech Academy of Sciences (Figure 19), and the detectors, provided by the Max Planck Institute, with the full support of Filippo.

The Space-Weather KuaFu Mission

A white-light and H I Lyα 1216 Å coronagraph to obtain double-band images of the corona was also proposed for the Chinese KuaFu mission, envisioned to explore the space-weather phenomena by means of one spacecraft at L1 for Sun monitoring and two spacecraft orbiting the Earth to detect the magnetospheric effects of solar activity. In 2006, the Chinese National Space Administration (CNSA) authorized the prestudy of the mission, led by Chuanyi Tu of Peking University. Considering this project to be a great opportunity for space-weather studies, I proposed to provide the Ly-alpha Coronagraph (LyCo), to be developed in collaboration with Weiqun Gan of the Purple Mountain Observatory. 70 Between 2006 and 2010 I attended the meetings of the KuaFu project, 69 which provided a great opportunity to discuss the solar and heliospheric science as well as to visit beautiful sites, such as Sanya on Hainan Island and Kunming in Yunnan, but it never became a reality.

69 A colleague of the Politecnico di Torino suggested that I submit a technological proposal to the Piedmont Regional Government; it was selected and financed in 2005. The idea was to develop a facility to integrate and calibrate solar instrumentation (UV and VL), and Silvano Fineschi was responsible for the facility design and implementation.
70 The contribution to the Science Requirement Document during the prestudy phase of the KuaFu project is reported in 'A white light and HI Ly alpha Coronagraph for the Kuafu Mission', by Ester Antonucci, Marco Romoli, Fabio Frassetto, Giampiero Naletto, and Daniele Telloni, based on a first optical design by Frassetto and Naletto.

At the Astronomical Observatory of Turin

By the time of the Metis proposal, I had been appointed director of the Astronomical Observatory of Turin, now named the Astrophysical Observatory of Turin (OATo). Applying for a position at the Observatory as Senior Astronomer 72 meant I could finally access, in 1995, the highest level of the academic/scientific profession. Whilst at the University it was extremely difficult to open new positions for young researchers, at the Observatory at last I had the possibility to gradually form a team dedicated in principle to solar-physics science and space projects. The projects dedicated to space pursued at the Observatory were the participation in the ESA mission Gaia, coordinated by Mario Lattanzi, and the participation in Solar Orbiter. In the role of astronomer, I continued to advise graduate students in their research work, and for a few years I taught solar physics at the university. My last student was Daniele Telloni, who graduated in 2008. From the beginning he devoted his research to the solar wind, both in the corona and in the heliosphere, and in 2021 he published the first paper on the Parker Solar Probe - Metis Solar Orbiter joint science (Telloni et al., 2021).

72 Astronomo Ordinario.

The International Heliophysical Year

During my five years as director of the Observatory, from 2005 to 2010, many exciting events and celebrations took place. The year 2007 was dedicated to the International Heliophysical Year (IHY) program sponsored by the United Nations.
Our most important initiative that year was the organization in Turin of the Second European General Assembly of the IHY, June 18 - 22. It was a pleasure to host in Turin colleagues coming from countries not often represented at solar-physics meetings. Among many other initiatives in the frame of IHY, OATo also organized a program for high schools in North-West Italy, aimed at setting up a temporary space-weather network thanks to the distribution of a number of instruments for establishing small stations for space-weather forecasting run by students. In the same year OATo and the Archivio di Stato di Torino 73 organized an exhibition, 'Nel Fuoco del Sole', with the aim of displaying all the numerous documents relative to the history of the Astronomical Observatory. This occurred from September 25 to October 14, 2007.

250 Years of Astronomy in Turin

2009 was another special year marked by two important events: the celebrations of 250 years of astronomy in Turin and participation in the International Year of Astronomy, with a dense program of events and exhibitions. According to tradition, astronomical studies in Turin date back to the first measurements of the Gradus Taurinensis, the degree of the meridian arc in Piedmont, by Giovanni Battista Beccaria in 1759. The scientific and cultural environment in eighteenth-century Turin was very lively, enlightened by scholars such as Jean-Louis Lagrange, the famous mathematician and one of the founders of Turin's Accademia delle Scienze, who spent the last part of his life in Paris. Lagrange attended the lectures of experimental physics held by Beccaria, who dedicated most of his studies to electricity. Beccaria's interest in astronomy was initiated, at the invitation of Charles Emmanuel III, Duke of Savoy and King of Sardinia, with the measurement of the meridian arc and the construction of a telescope, positioned in the royal gardens. The construction of the telescope was stimulated by the interest in astronomy aroused by the passage of Halley's Comet. Although they never met, Beccaria was in touch with Benjamin Franklin for 30 years. Exchanging novel ideas on electricity and inventions, they corresponded first in Latin and then each one in his own language. Thanks to Franklin, one of the books on electricity written by Beccaria was translated into English and published in London. I suddenly became aware that the history of the Observatory was quite intriguing; thus, to honor my predecessors, I decided to restore the historic instruments, most of them still present in the Observatory, and to involve the astronomers interested in history in narrating episodes from the 250 years of astronomy in Turin. Their contributions were published in a book, Osservar le stelle - 250 anni di Astronomia a Torino, 74 which also included the catalog of the restored instruments. Many enjoyable stories came to light during this search for the roots of the Observatory. The historic instruments were displayed from October 1 to November 14, 2009 at Palazzo Bricherasio and Palazzo Lascaris 75 in an exhibition with the same title as the book about the history of the Observatory. Laurent Levi-Strauss, representing UNESCO, was our guest during the inauguration of the exhibition.

73 The Archivio di Stato (State Archive) di Torino is home to almost one thousand years of historical documents.
74 The book was also published in an English translation and will soon be available on the Observatory's web site.
75 Home of the Piedmont Regional Council. Both Palazzo Bricherasio and Palazzo Lascaris were built in the seventeenth century.

In 2009 Palazzo Bricherasio also hosted an exhibition
dedicated to Akhenaton, 76 the pharaoh who worshiped the Sun, with a contribution on the part of the Observatory.

The International Year of Astronomy

During the International Year of Astronomy, declared to celebrate the 400th anniversary of the first recorded astronomical observation by Galileo Galilei, Turin was the site of several other events that involved OATo. One of these was the 'Torino Cosmology Colloquium: Latest News from the Universe' of the Ecole Internationale Daniel Chalonge, 77 held in October with the participation of the 2006 Physics Nobel Prize laureate George Smoot. Another was the day organized at the University by the Centro Unesco Torino also to honor the work of women astronomers, from Hypatia to Caroline Herschel. The rich program of events of 2009 took shape during conversations among friends and represented a nice parenthesis indeed between my daily duties as director of the Observatory and my involvement in Solar Orbiter. Lucia Abbo was of great help in the organization of the 2007 and 2009 conferences and events, managing to do this while not compromising her dedication to science. Her support was also crucial in the organization of the Third Solar Orbiter workshop in Sorrento. I would also like to mention that during the International Year of Astronomy it was a great honor for me to be one of the astronomers invited to be received by Pope Ratzinger, Benedict XVI. We were guided by the Vatican astronomers on a very special visit of the Vatican City, the Accademia Pontificia, and Castel Gandolfo, the beautiful home of their headquarters close to Rome. 400 years after his first astronomical observations, Galileo was 'definitely' exonerated. 78 On the occasion of my 'formal' retirement in 2010 (Figure 24), when I was 65 years old, I received a very welcome and unexpected present. The planetologists of the Observatory proposed to assign my name to the minor planet 'Esterantonucci' - 1998 TB34 (22744). 79

76 The exhibition Akhenaton, il Faraone del Sole, was held from February 27 to June 14, 2009.
77 The main local organizer was my colleague Alba Zanini of the National Institute of Nuclear Physics, INFN.
78 Galileo Galilei was rehabilitated by Pope John Paul II in 1992.
79 One of the motivations was: '. . . director of the Osservatorio Astronomico di Torino - the first woman to hold this position'.

Solar Orbiter on Its Way Toward the Sun

After I formally retired in 2010, when Solar Orbiter was not yet definitively approved and Metis still faced a long and sometimes difficult path, my time was almost exclusively devoted to the Metis project, until the delivery of the instrument to ESA. From 2013 to 2019, I also served on the European Space Sciences Committee, advising on solar physics and space-weather initiatives. After a while, my committee colleagues were erroneously convinced that I was a space-weather scientist, since I was insisting that Europe should have a space-weather program as robust as, for instance, the Sentinels program for the observations of the Earth. Once the Metis instrument was finally delivered in 2017, I resigned as principal investigator and Marco Romoli was appointed to lead the Metis activities.
Although ASI kindly invited me to remain in that role as long as possible, I deemed this to be the right moment to pass the responsibility to a younger person, so that my successor would have sufficient time to get acquainted with his new role before the Orbiter launch. Moreover, he could rely on my advice, if needed, during the phase preceding the launch of Metis for the various tests and integration issues that were strictly related to my earlier activities and decisions. The Metis instrument, as delivered to ESA in mid-2017, is described in detail in the paper by Antonucci et al. (2020b) included in the special issue of Astronomy and Astrophysics dedicated to the Solar Orbiter mission (Müller et al., 2020). Solar Orbiter was launched on February 10, 2020, just when the world was hit by the COVID-19 pandemic. This was the first mission - moreover, one of great complexity - to be operated almost from the beginning in the unforeseen anomalous conditions dictated by the recurrent lockdowns, which made it impossible to reach the usual work locations. I was absolutely impressed by the ability and the commitment of all scientists and engineers involved in the effort of operating the spacecraft and the payload in this totally new way, efforts that assured a fully successful commissioning and transition to the nominal mission. Notwithstanding the difficulties during commissioning due to the COVID-19 pandemic, Solar Orbiter has already provided quite a rich harvest of scientific results, although the most important phases of the mission have yet to be reached. On May 15, 2020 Metis observed the solar corona for the first time, and I was pleased to take part in the analysis and interpretation of Metis's first light (Romoli et al., 2021). The first map of the solar wind in the corona was finally obtained.

Conclusions

I have attempted to present my scientific contributions to solar physics over the course of half a century in the context of the development of space missions for solar physics and of the knowledge of the Sun and the heliosphere that we had at a given time, and within the frame of the new information provided by the major solar space observatories flown from 1980 to the time of this writing. I have dealt in greater detail with the period preceding the SOHO launch, with the intent to illustrate the history and evolution through the years of our understanding of some of the physical processes and phenomena occurring in the solar atmosphere. With regard to SOHO and Solar Orbiter, I have tried to outline the history of these missions, as seen from my point of view and on the basis of my experience.
Co-expression based cancer staging and application

A novel method is developed for predicting the stage of a cancer tissue based on the consistency level between the co-expression patterns in the given sample and samples in a specific stage. The basis for the prediction method is that cancer samples of the same stage share common functionalities as reflected by the co-expression patterns, which are distinct from samples in the other stages. Test results reveal that our prediction results are as good as or potentially better than manually annotated stages by cancer pathologists. This new co-expression-based capability enables us to study how functionalities of cancer samples change as they evolve from early to the advanced stage. New and exciting results are discovered through such functional analyses, which offer new insights about what functions tend to be lost at what stage compared to the control tissues and, similarly, what new functions emerge as a cancer advances. To the best of our knowledge, this new capability represents the first computational method for accurately staging a cancer sample. The R source code used in this study is available at GitHub (https://github.com/yxchspring/CECS).

We present a computational approach to accurately stage cancer tissues based on their RNA-seq data. The stage of a cancer is a key parameter for clinically characterizing the cancer. As a cancer advances, the disease generally evolves from a localized issue to a whole-body problem [1][2][3], not just in terms of whether a cancer has metastasized or not, as cancer tends to persistently release certain molecules such as protons, cytokines and polyamines [4][5][6] as well as "consume" certain molecules like sodium and iron, leading to substantial alterations of their blood concentrations over time. For some molecular species, such changes will trigger highly damaging responses by different organs throughout the body. Cachexia, i.e., loss of muscle cells throughout the body, is one consequence of such responses towards the advanced stage of a cancer [7][8][9][10]. Intracellularly, considerable changes take place in metabolisms as a cancer evolves, giving rise to gradual and extensive metabolic reprogramming in cancer [11][12][13][14]. Hence, cancers detected at different stages require distinct treatment plans. Therefore, accurate staging of a cancer is vitally important to the cancer patient and his/her physician. Somewhat surprisingly, the clinical practice of cancer staging has not changed much in the past 40 years [15][16][17], as it is still done predominantly based on the morphology and the size of a cancer tissue, examined manually by cancer pathologists under the microscope, assisted by limited protein biomarkers. One would intuitively expect that cancer staging nowadays should be done in a more objective manner based on molecular data, knowing that cancer tissue omic data, particularly gene-expression data, are easily obtainable in a financially viable manner. However, the reality is: while gene-expression data represent the easiest to get and the most informative omic data for studying cancer tissues, they have not been widely used for cancer staging outside of laboratory studies [18][19][20]. Published work is mostly on transcriptomic biomarkers for cancer prognostic prediction 21-27 rather than cancer staging. A key challenge in achieving this goal comes from the reality that scientists have yet to identify genes whose (differential) expression patterns in cancer vs.
controls are specifically associated with individual stages of a cancer type, and hence can be used for cancer-stage prediction. Our own analyses have discovered that co-expression patterns are considerably more informative than differential expressions of individual genes for cancer staging. Here we present a co-expression based cancer staging method. To the best of our knowledge, there are no published studies that predict cancer stages using co-expression patterns of cancer tissues.

A technical challenge in applying co-expression data for cancer staging is how to derive co-expression information of genes in individual tissue samples, since it generally requires multiple samples to infer such information, while cancer staging needs to be done on individual tissues. Fortunately, Chen and co-workers have recently published a statistical method for inference of co-expressed genes in a single sample through comparing the co-expressed genes in a set of reference samples and those in the reference set plus the current sample 28. Specifically, the approach assesses if the co-expression patterns among the reference samples are enhanced or weakened by including the sample into the reference set, namely an expanded set. A pair of genes in the new sample is considered as having the same co-expression pattern in the reference set and the expanded set if its co-expression level in the latter is not statistically lower than in the former. Hence, when applied to all gene pairs, a set of co-expressed genes can be derived for the given sample with respect to the reference set. This method has been applied to solving a variety of co-expression analysis problems and found to be highly effective 28. We have adapted and applied this approach to cancer tissue staging. Specifically, we assume that some samples for each stage of a cancer type are available, along with their genome-scale transcriptomic data, from which co-expression patterns can be derived reliably for each stage of the cancer type. Then a new sample is assigned to a stage if the sample's co-expression pattern is most consistent with the co-expression patterns of the stage of the reference samples within a specified level of difference. We have applied this staging approach to eight cancer types in the TCGA database for stage prediction, representing all the cancer types that have at least ten cancer samples in each of the four stages. The consistency levels range from 71 to 95% across the eight cancer types we studied. The reason we have applied our method only to the TCGA data is that the data are collected from cancer tissue samples, rather than cell lines 29,30, and have the highest data quality compared to other databases. An important application of this methodology is to elucidate the functional differences between cancer samples at different stages, hence providing important and useful information regarding cancer evolution from early to the advanced stage. To do this, we have developed a new method for assessing the statistical significance of pathways enriched by a set of gene pairs rather than a set of genes as commonly done. By applying this method, we have characterized the functions enriched at each stage of each cancer type.

Results

Gene expression data of eight cancer types, namely BRCA, COAD, HNSC, KIRC, KIRP, LUAD, STAD and THCA, are extracted from the TCGA database. Our cancer-stage prediction is conducted and assessed on these samples. The detailed information about these cancer data is given in the Methods section.

Identification of co-expressed genes.
For each cancer type, the edgeR package in R is used to identify the differentially expressed genes (DEGs), using |log(FC)| > 2.5 and p value < 0.05 as the cutoffs. The Pearson correlation coefficient (PCC) is used to calculate the co-expression level between two genes. A pair of genes (x, y) is deemed to be co-expressed (CEGs) if |PCC(x, y)| exceeds the threshold of 0.7 with p value < 0.05 (see Methods). Table 1 summarizes the numbers of DEGs and CEGs for each cancer type at each stage.

An algorithm for representing cancer samples as co-expression networks. We have developed an algorithm for representing the gene-expression data of cancer tissue samples of a given cancer type as four stage-specific co-expression networks, one for each stage, and their perturbed networks when a new sample is added to the sample set of each stage. The level of perturbation due to inclusion of the new sample into each of the four co-expression networks, in general, will be significantly different between the network where the new sample intrinsically belongs and the three other networks. This serves as the basis of our cancer staging algorithm. A co-expression network is built over samples in each stage of a given cancer type, consisting of only gene pairs that are highly co-expressed, where each gene pair is represented as an edge connecting two nodes denoting the two genes. When a new sample is added to the sample set of each stage, the co-expression levels of some gene pairs may change. Chen and co-authors have made the following observation 28: if two genes are co-expressed over a sample set, then adding a new sample to the set should not change their co-expression level significantly if their expression levels in the new sample are linearly consistent with those in the sample set; otherwise the co-expression level will decrease or remain at a low level. In addition, we have noted that cancer samples in the same stage tend to have a large collection of stage-specific co-expressed genes, used to execute the biological functions specific to the stage. By integrating these two insights, we have the following key observation: for a given co-expression network of a specific stage, adding a new sample that "intrinsically" belongs to the stage should not alter significantly the structure of the co-expression network; in contrast, when a sample is added to the sample set of a different stage, it will affect the co-expression levels of some gene pairs, hence altering the structure of the co-expression network. Our algorithm follows.

Step 1: Identification of DEGs for co-expression analyses. To ensure that the numbers of DEGs are approximately the same across different stages, to avoid sample-size related bias, we have selected n DEGs with the largest variance for each stage, where n is the smallest number of DEGs in a stage across the four stages for the given cancer type.

Step 2: Construction of co-expression networks. Samples of each stage are divided into three groups: 30% as the reference, 40% for training, and 30% for testing. A co-expression network is constructed over the reference set for each of the four stages: each DEG is defined as a node, and a pair of co-expressed DEGs above a PCC-based threshold (see Methods) as an edge linking the two genes.

Step 3: Construction of a perturbed network over each sample set plus a new sample. For each co-expression network N built at Step 2 and a new sample s, calculate the PCC value for each co-expressed gene pair in N over the expanded sample set.
If the relationship between the new PCC and the threshold is reversed compared to the original PCC, the edge status is updated: the edge is removed from N if its new PCC no longer exceeds the threshold, and otherwise the edge is added to N.

Step 4: Data preparation for cancer-stage classifier training. For each new sample considered for cancer staging, represent each of its four perturbed networks as a one-dimensional vector: each pair of co-expressed genes in a co-expression network is given a fixed location in the vector, containing the PCC value or a 0.0 if the gene pair is removed in the perturbed network, hence allowing direct comparisons among such PCC-based vectors. The detailed process of our algorithm is shown in Fig. 1.

A 4-way classifier for cancer staging. A machine learning-based classifier is trained to predict the stage, 1 through 4, for a given cancer sample based on the PCC vectors defined in Sect. 2.1. Intuitively, if a new sample belongs to a specific stage, its perturbed network should be largely the same as the corresponding co-expression network; otherwise, the perturbed network may lose most of the stage-specific co-expressed genes, i.e., edges, from the original co-expression network. We have used the following six machine-learning methods to train the classifier: Naive Bayes, treebag, C5.0, random forests (RF), random ferns (RFerns), and weighted subspace random forests (WSRF).

Cancer stage prediction. Using the above cancer-staging algorithm, we have predicted stages for all the test samples of the eight cancer types. For each cancer type, we have randomly selected 30% of the samples from each stage and used them to derive the co-expression patterns; 40% for classification model training; and the remaining 30% for testing. Three-fold cross-validation with 100 repeats is used when training a classifier for each of the six machine learning methods. This process is iterated 10 times, and the average of the staging accuracy is used as the final evaluation result. Table 2 summarizes the prediction results by C5.0, and prediction results by the other methods are summarized in Supplementary Tables S1(1-5). We note that most of the machine learning methods give comparable results except for Naive Bayes and random ferns, whose performances are poorer than the others, as detailed in Table S1. To understand what might be the reasons for the inconsistent predictions by our method compared to the annotated stages in TCGA by pathologists, we have examined the prediction results for HNSC and STAD, the two cancer types with the worst overall prediction performance (Table 2). Tables 3 and 4 list, for each stage, the numbers of samples correctly predicted and of those predicted to earlier or later stages of HNSC and STAD, respectively. Since there is no ground truth for the actual stages of the cancer samples under consideration that can be used to assess the quality of the two staging methods, we have compared the distributions of the number of DEGs across samples at different "stages" by the two methods, as shown in Fig. 2. We see from the boxplots that our predicted stages give rise to boxplots with a somewhat higher level of regularity compared to that of the annotated stages, hence providing one piece of evidence that our predicted stages, which are based on molecular information, might be more intuitively meaningful. Figure 3 shows the corresponding information for STAD. Analysis results on other cancer types are given in Supplementary Tables S2(1-6) and Figures S1(1-6).
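To make Steps 2 - 4 above concrete, the following R sketch shows one way a stage-specific co-expression network could be built from a reference expression matrix and how the perturbed-network vector for a new sample could then be derived. This is an illustrative reading of the algorithm rather than the released CECS code: the function names (build_network, perturb_vector) and the threshold value of 0.7 are ours, and the handling of edges whose status flips follows our interpretation of Step 3.

# Illustrative sketch of Steps 2-4 (not the released CECS code).
# ref_expr: numeric matrix of DEG expression values (rows = DEGs, columns = reference samples).
# new_expr: named numeric vector with the same DEGs, for one new sample.
build_network <- function(ref_expr, pcc_threshold = 0.7) {
  pcc <- cor(t(ref_expr), method = "pearson")              # DEG-by-DEG PCC matrix
  idx <- which(upper.tri(pcc) & abs(pcc) > pcc_threshold, arr.ind = TRUE)
  data.frame(gene1 = rownames(pcc)[idx[, 1]],              # edges of the stage network
             gene2 = rownames(pcc)[idx[, 2]],
             pcc = pcc[idx],
             stringsAsFactors = FALSE)
}

perturb_vector <- function(network, ref_expr, new_expr, pcc_threshold = 0.7) {
  expanded <- cbind(ref_expr, new_expr[rownames(ref_expr)])  # reference set plus the new sample
  vapply(seq_len(nrow(network)), function(i) {
    g1 <- network$gene1[i]
    g2 <- network$gene2[i]
    new_pcc <- cor(expanded[g1, ], expanded[g2, ])
    # Steps 3-4: keep the recomputed PCC if the edge survives, otherwise encode 0.0
    if (abs(new_pcc) > pcc_threshold) new_pcc else 0
  }, numeric(1))
}

Under this reading, the four per-stage vectors returned by perturb_vector would be concatenated into one feature vector per sample and passed to the classifiers compared in Table 2 (for instance, via the randomForest or C50 packages).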
Overall, we consider that our predicted stages are probably as scientifically justified as the manually annotated stages by cancer pathologists, or better.

Pathways enriched by co-expressed genes. We have conducted pathway enrichment analyses over co-expressed genes in each stage of each of the eight cancers against the GO Biological Processes using our new scoring scheme (see "Methods"). Table 5 summarizes the numbers of pathways enriched by co-expressed genes, with the pathway names given in Supplementary Tables S3-1 (controls), S3-2 (up-regulated), and S3-3 (down-regulated), and information about pathways, hence functions, that disappear at each stage as well as new pathways that emerge at each stage in cancer versus controls, hence providing footprint information of cancer evolution. We have also calculated the numbers of pathways enriched by co-expressed genes in controls which (I) remain enriched throughout all stages of the cancer samples of each type; and (II) disappear by each stage of the cancer samples and do not appear again in a later stage, as well as the total for each cancer type. We have also calculated (III) the number of new pathways that are not present in controls but present in earlier stages (1 and 2) or advanced stages (3 and 4). All these are shown in Table 6 (I), (II) and (III), and the detailed pathways in cancer are listed in Supplementary Tables S4-1, S4-2 and S4-3. From these tables, we conclude: (i) it is somewhat surprising to see from Table S4-1 that different sets of functions remain unchanged throughout the development of a cancer type across the eight cancer types. For example, for BRCA, it is cell cycle and cell division activities that represent the predominant class of functions that remain unchanged throughout stages 1-4, and this is the only type of cancer with this or a similar property. For COAD, it is three classes of functions, namely cellular stress, immune responses and tissue repair, that remain unchanged throughout the evolution of the cancer. For HNSC, it is the combination of two functional classes, tissue repair and cellular stress, that remains unchanged throughout its evolution. For KIRC, no functional activities remain unchanged throughout its evolution. For KIRP, it is some developmental activities that remain unchanged. For LUAD, it is a few cell division activities that remain unchanged. And for STAD, it is predominantly immune responses that remain unchanged. (ii) From Table S4-2, we see the following: (1) pathway disappearance in cancer predominantly takes place at stage 1 for six cancer types or stage 4 for two cancer types; and (2) most of the lost pathways tend to be cancer specific or at most shared by 2-3 cancer types, except for a few, namely neutral lipid metabolic process (shared by 6 cancer types), triglyceride metabolic process (shared by 5), acylglycerol metabolic process (by 5), response to drug (by 4), regulation of lipid localization (by 4), regulation of hormone levels (by 4), and organic anion transport (by 4), indicating that they may have negative effects on cancer development, hence selected for removal. The detailed list of the pathways lost by multiple cancer types is given in Table S5. (iii) From Table S4-3, we note that different cancer types tend to have different sets of emerging functions in cancer tissues vs.
controls, which generally fall into the following classes: development and proliferation, immune related, stress related, migration related, metabolisms, tissue repair, and neural functions. For BRCA, two classes of new functions account for the majority of the new functions, hence considered predominant: development and proliferation and metabolisms, in both early (stages 1 and 2) and advanced (stages 3-4) cancers. For COAD, the two predominant functional classes are development and proliferation and stress related, in both early and advanced cancers. For HNSC, the new functions in early-stage cancer tissues are development and proliferation and immune related; for the advanced tissues, only the former remains predominant. For KIRC, no single class of functions stands out in the early stage, while immune related and development and proliferation stand out in the advanced stage. For KIRP, development and proliferation and metabolisms stand out in both the early and advanced cancer tissues. For LUAD, development and proliferation and stress related functions stand out in the early stage, and the latter changes to neural activities in the advanced stage. For STAD, tissue repair and immune related functions stand out in both early and advanced stages; in addition, development and proliferation joins these two as one of three standout functional classes in the advanced-stage cancer tissues. For THCA, immune and tissue repair stand out in the early stage, and the former changes to development and proliferation in the advanced stage. Among these functions, development and proliferation related functions become increasingly predominant as a cancer advances from early to the advanced stage for virtually all cancer types. Similarly, the percentages of the following functions also increase as a cancer advances: stress, immune, and migration related.

Discussion

Our preliminary analyses strongly indicate that differential expressions of individual genes do not have adequate information for accurate cancer staging, whereas conserved co-expression patterns across cancer samples of the same stage do, as we have demonstrated here. This represents a key technical contribution to the research of cancer biology. We anticipate that a similar technique could be used for various similar problems such as cancer grading and classification of primary cancers that have metastasized vs. those that have not. Our prediction results are generally consistent with those assigned manually by cancer pathologists. In cases where our predictions are inconsistent with the manual annotation, further studies are needed, as there is no clear indication of which "predictions" are more accurate between the two, although from one specific angle our predictions seem to be biologically more meaningful. This should not be surprising, since our prediction is based on functional commonalities shared by most of the cancer tissue samples of a specific stage. We anticipate that systematic applications of this new tool could lead to improved and biologically more meaningful staging schemes for different cancer types. For example, by studying how the overall functionality of cancer samples changes as a cancer advances, one could possibly identify key "jumps" in the total functionality, which can be used to distinguish distinct phases of the evolution of specific cancer types, compared to the current staging schemes, which are largely based on the sizes and morphology of tumors.
Cancer staging based on such molecular functions could lead to improved treatment plans that can target key functional hubs or the weakest points in cancer metabolic networks at distinct phases. Otto Warburg speculated some fifty years ago about cancer evolution: "the highly differentiated cells are now transformed into fermenting anaerobes, which have lost all their body functions and retain only the now useless function of growth" [31][32][33]. Since then, very little has been established regarding what specific functions are lost as a cancer evolves. We consider that a scientific contribution made by this study is that we have provided some information along this direction, although our study is clearly primitive. A further study is planned to elucidate detailed functionalities of cancer at individual stages and of different types. Both the functionalities shared by all or most of the cancer types and those specific to individual cancer types are of great interest. Our co-expression based functional identification will prove to be a highly effective tool for conducting such studies. Regarding the predominant new functions in cancer vs. controls as revealed by our analyses, it is understandable why development and proliferation represents a predominant one across a majority of the cancer types under study, as cancer proliferation, unlike normal developmental processes, may require segments from multiple developmental programs, which might be activated possibly by different signals for different reasons such as the need for tissue repair, to have the cell-cycle genes activated and form a somewhat coordinated cell cycle process in support of continuous cell proliferation. Other emerging functions, such as immune, tissue repair, metabolisms and/or neural activities, tend to be less conserved across different cancer types. Hence it is natural to ask: are the new functions in each cancer type relevant to, or do they even dictate, the clinical behaviors of different cancer types, such as more vs. less malignant cancers? Clearly, further and more in-depth analyses are needed to address this question. Our analyses have also revealed the pathways newly enriched in cancer vs. controls as well as pathways that disappear gradually throughout the evolution of individual cancer types. We anticipate that the co-expression based analyses will prove to be an important direction for functional studies in cancer research.

Data and methods

Data. 14 cancer types were initially selected, since this set of cancers has been used in our previous studies [34][35][36][37] as they each have a sufficiently large number of samples in TCGA, namely: BLCA, BRCA, COAD, ESCA, HNSC, KICH, KIRC, KIRP, LIHC, LUAD, LUSC, PRAD, STAD, and THCA. Here, we further require that each cancer type have at least ten samples for each stage, which leaves only eight cancer types: BRCA, COAD, HNSC, KIRC, KIRP, LUAD, STAD, THCA. Table 7 gives the detailed information for each of the eight cancer types.

Calculation of co-expressed genes. For a given set of cancer tissues and their transcriptomic data, we calculate the Pearson correlation coefficient (ρ) between each pair of expressed genes across the samples as follows:

ρ(X, Y) = (E[XY] − E[X] E[Y]) / ( √(E[X²] − E[X]²) · √(E[Y²] − E[Y]²) ),

where E(X) is the expected value of the expression levels of gene x across all samples. A pair of genes is deemed to be co-expressed if |ρ(X, Y)| > 0.7 with p value < 0.05, where the p value is calculated from the statistic

t = ρ √( (n − 2) / (1 − ρ²) ),

which follows a Student's t distribution with n − 2 degrees of freedom, with n being the number of samples.
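As a quick illustration (not taken from the CECS repository), this co-expression criterion can be checked for one gene pair with R's cor.test, which implements exactly this t-based test; the function name below is ours:

# Check the co-expression criterion (|rho| > 0.7 and p < 0.05) for one gene pair.
# x, y: expression levels of two genes across the samples of one stage.
is_coexpressed <- function(x, y, rho_cut = 0.7, p_cut = 0.05) {
  ct <- cor.test(x, y, method = "pearson")   # Pearson rho and its t-based p-value
  abs(unname(ct$estimate)) > rho_cut && ct$p.value < p_cut
}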
We have developed a new scoring scheme to assess the statistical significance of a pathway enriched by a set of co-expressed DEGs at a specific stage of a cancer type. For a pathway containing n gene pairs, of which k are co-expressed over a given set of cancer samples, the following hypergeometric distribution 38 is used to calculate the statistical significance of the enrichment, where N gene pairs are differentially expressed in cancer vs. controls, of which K pairs of genes are co-expressed:

p = Σ_{i=k}^{min(n,K)} [C(K, i) · C(N − K, n − i)] / C(N, n),

where C(·,·) denotes the binomial coefficient; a smaller p indicates a stronger enrichment of the pathway by co-expressed gene pairs.
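As a concrete illustration of the co-expression criterion and the enrichment score described above, the following minimal sketch (not the authors' original pipeline) uses SciPy's pearsonr for the correlation test and hypergeom for the pathway p-value; the expression matrix, thresholds, and pathway counts are illustrative placeholders.

```python
# Illustrative sketch of the co-expression test and hypergeometric pathway
# enrichment described above; not the authors' original implementation.
import numpy as np
from scipy.stats import pearsonr, hypergeom

def coexpressed(x, y, rho_cut=0.7, p_cut=0.05):
    """Return True if two expression vectors are co-expressed
    (|Pearson rho| > 0.7 with p-value < 0.05, as stated in the text)."""
    rho, p = pearsonr(x, y)
    return abs(rho) > rho_cut and p < p_cut

def pathway_enrichment_p(n, k, N, K):
    """Hypergeometric enrichment p-value for a pathway with n gene pairs,
    k of them co-expressed, given N differentially expressed gene pairs
    overall, of which K are co-expressed."""
    # P(X >= k) for X ~ Hypergeometric(N, K, n)
    return hypergeom.sf(k - 1, N, K, n)

# Toy usage with a random (genes x samples) expression matrix.
rng = np.random.default_rng(0)
expr = rng.normal(size=(50, 30))          # 50 genes, 30 tumour samples
pairs = [(i, j) for i in range(50) for j in range(i + 1, 50)]
coexpr_pairs = [(i, j) for i, j in pairs if coexpressed(expr[i], expr[j])]
print(len(coexpr_pairs), "co-expressed pairs out of", len(pairs))
print("example enrichment p:", pathway_enrichment_p(n=40, k=12, N=5000, K=800))
```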
5,493.6
2020-06-30T00:00:00.000
[ "Medicine", "Computer Science" ]
Multiscale Supervised Classification of Point Clouds with Urban and Forest Applications We analyze the utility of multiscale supervised classification algorithms for object detection and extraction from laser scanning or photogrammetric point clouds. Only the geometric information (the point coordinates) was considered, thus making the method independent of the systems used to collect the data. A maximum of five features (input variables) was used, four of them related to the eigenvalues obtained from a principal component analysis (PCA). PCA was carried out at six scales, defined by the diameter of a sphere around each observation. Four multiclass supervised classification models were tested (linear discriminant analysis, logistic regression, support vector machines, and random forest) in two different scenarios, urban and forest, formed by artificial and natural objects, respectively. The results obtained were accurate (overall accuracy over 80% for the urban dataset, and over 93% for the forest dataset), in the range of the best results found in the literature, regardless of the classification method. For both datasets, the random forest algorithm provided the best solution/results when discrimination capacity, computing time, and the ability to estimate the relative importance of each variable are considered together. Introduction Over the last three decades, there has been a proliferation of techniques for data collection, including geospatial data. At the same time, new mathematical methods to process those data have been developed, such as data mining, machine learning, big data, and, lately, deep learning. 3D point clouds, a type of geospatial data very widespread nowadays, are essentially massive files of point coordinates (X,Y,Z) that provide a discrete representation of the measured object in a Cartesian coordinate system. A significant problem with 3D point clouds is that they are unstructured collections of data, and this makes it necessary to process them in order to extract the information of interest. There are different methods for feature detection and extraction from 3D point clouds, although they can be divided into two main groups: direct (also called rules-driven) and indirect methods (data-driven methods). Direct methods are explicitly developed to solve a particular problem; for instance, in an urban environment, to extract roads [1][2][3][4], buildings [5][6][7], pole-like objects such as lampposts or trees [8,9], or traffic signs [10,11], but normally only one type of object per algorithm. They consist of a set of rules that take into account the geometric characteristics of the objects to be extracted that distinguish them from other objects in the scene. One of the advantages of these methods is that they require very little intervention by the users, beyond having to tune some parameters that affect the solution (e.g., voxel size, threshold distance between points for object segmentation, etc.). However, they may be difficult to implement in an algorithm and, as mentioned, they are normally devoted to extract a particular category of objects from the point cloud, not a group of them. Indirect methods are those based on the application of machine learning techniques (specifically classification methods) to a set of features or explanatory variables that have been constructed from the point coordinates. 
They are usually easier to implement than direct methods since the effort is not put into devising the algorithm but into defining the features (explanatory variables) and selecting an adequate model among the different possibilities. They can be used to extract one or more types of objects at once, but a disadvantage is that they require a training procedure. In addition, they can be less accurate than direct methods in some cases. Among indirect classification methods, there is also a subdivision into two categories: supervised and not supervised methods [12]. Supervised classification methods require user intervention to train the model, but this is not normally very time consuming for point cloud with the help of suitable software and, in return, they usually provide better solutions than unsupervised classification methods (see [13][14][15][16] as examples of the application of supervised classifications methods to 3D point clouds). Overfitting is a drawback associated to supervised classification, as it provides an accuracy in the training sample that is far above the accuracy that can be obtained in independent test samples. Deep learning for 3D point cloud classification and segmentation is an active research area at this moment that can displace conventional methods in a short time. With few exceptions [17], most of the deep learning approaches require transforming the point cloud into images or voxel meshes before feature learning using 2D (3D) convolutional neural networks [18]. However, there is still few literatures concerning deep learning for point segmentation and classification using aerial or terrestrial (static or mobile) laser scanning [19,20], and the results reported for now do not improve the best results obtained using rule-based methods or feature extraction combined with machine learning algorithms [21]. As shown in [22], a combination of convolutional neural networks with 3D point cloud features can improve the results obtained independently. Unsupervised do not provide the solution in a training set to estimate the parameters of the mathematical model, so they need to look for similarities between the values of the different features in order to construct the clusters. Some algorithms require the user to preset the number of clusters in the data in advance, while others can make an estimation of the optimal number of clusters. In this work, we are concerned with supervised classification methods, as we want to accurately extract some specific categories of objects from point clouds of different scenarios using the same algorithm, even at the expense of having to create training sets from the point clouds. In particular, we deal with a multiscale classification algorithm that assumes that the features defined for any observation (point) provide different information depending on the size of the neighborhood (the scale) around that observation. A reference work of multiscale supervised classification in geosciences is [23], whose algorithm is implemented in Cloud Compare, a free license software for point cloud processing and visualization that is widespread among users. The algorithm works directly with the point coordinates (neither the intensity nor the color) and allows some degree of variability in the class characteristics. The idea is that, depending on the scale, the cloud around a point can look like a line (1D object), a plane surface (2D object), or a volume (3D object). 
While this algorithm was initially developed to discriminate between two classes, in this work, we extend the algorithm to the multiclass problem. In addition, different features are used to train the models. In summary, the most relevant aspects of this work are: the comparison of several multiclass machine learning algorithms for point cloud segmentation, the analysis of their performance for two kind of problems (urban and forest inventory), and the analysis of the influence of the scales and the extracted features in the results. The paper is structured as follows: Section 2.1 focused on the definition of the features at different scales to be introduced in the models. Section 2.2 makes a brief explanation of the four classification techniques used in the case studies. In Section 3, we apply those classification methods to extract five classes of objects from a point cloud in an urban environment and three classes in a forestry area. The discussion of the results is given in Section 4. Finally, our conclusions are summarized in Section 5. Feature Extraction A key step in the supervised classification analysis is to establish the features to be used in the model. In a multiscale method, those features are extracted at different scales, which tends to improve the results when only a single scale is used, as it has been stated in several works [24][25][26][27]. This is because a region of the cloud around an observation can look like a 1D, 2D, or 3D object depending on the size of that region [23]. Moreover, the multiscale strategy allows some degree of variability and heterogeneity in the characteristics of the different objects. For object detection and extraction from 3D point clouds, it is quite common to use the eigenvalues of a principal component analysis or some algebraic expressions relating them. These eigenvalue-based features have proven to be useful to describe the local geometry of the points. Examples of the application of this technique can be found in [15,27]. An analysis of the accuracy and robustness of the geometric features extracted from 3D point clouds was performed in [28]. Given any point p i = (X i , Y i , Z i ) of the point cloud, eigenvalues and eigenvectors are obtained through an eigendecomposition of the covariance matrix Σ: where λ1 > λ2 > λ3 are the eigenvalues, and V a matrix which columns are the corresponding eigenvectors. The relationship between the values of the eigenvalues λ1, λ2, λ3 at a point is related to the local geometry at that point [28]: In this study, we used five features at different scales as input variables (Table 1). Table 1. Input variables for classification. Name Formula Linearity Z range for each point is calculated using the points in a vertical column of a specific section (scale) around that point. In order to avoid the negative effect of outliers, instead of using the Z coordinate ranges, we used the range between the 5th and 95th percentiles. As will be seen below, these five variables allow to obtain an accurate classification of the point clouds in different categories, both for natural and artificial objects. Initially, artificial objects are expected to produce better results than natural objects in 3D point classification, as they are more homogeneous. The scales are determined by the diameters of spheres around each point used to perform the PCA according to Equation (1) (so N in Equation (1) would be the number of points in that sphere). 
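As a hedged illustration of this multiscale feature extraction, the sketch below gathers the neighbours of a point within a sphere of a given diameter, performs the eigendecomposition of their covariance matrix, and derives a handful of eigenvalue-based descriptors. The exact five variables of Table 1 are not fully legible here, so the chosen features (normalised eigenvalues, linearity, horizontality, and the 5th to 95th percentile Z range) are an assumption consistent with the text rather than the authors' definitive set.

```python
# Sketch of multiscale, eigenvalue-based point features (assumed feature set;
# the paper's Table 1 defines the actual five variables).
import numpy as np
from scipy.spatial import cKDTree

def point_features(cloud, idx, diameter):
    """Features for point `idx` using neighbours within a sphere of the
    given diameter (the scale)."""
    tree = cKDTree(cloud)
    neigh = cloud[tree.query_ball_point(cloud[idx], diameter / 2.0)]
    if len(neigh) < 3:
        return None
    cov = np.cov(neigh.T)                       # 3x3 covariance matrix
    w, v = np.linalg.eigh(cov)                  # eigenvalues in ascending order
    l3, l2, l1 = w                              # so that l1 >= l2 >= l3
    s = l1 + l2 + l3
    e1, e2 = l1 / s, l2 / s                     # normalised eigenvalues
    linearity = (l1 - l2) / l1
    normal = v[:, 0]                            # eigenvector of the smallest eigenvalue
    horizontality = abs(normal[2])              # close to 1 for horizontal surfaces
    z = neigh[:, 2]
    z_range = np.percentile(z, 95) - np.percentile(z, 5)  # robust Z range
    return np.array([e1, e2, linearity, horizontality, z_range])

# Usage: stack the features computed at several scales into one input vector.
rng = np.random.default_rng(1)
cloud = rng.uniform(0, 10, size=(20000, 3))     # toy stand-in for a point cloud
scales = [2.0, 4.0, 6.0]                        # sphere diameters in metres
x = np.concatenate([point_features(cloud, 0, d) for d in scales])
print(x.shape)   # 15 input variables for this point (5 features x 3 scales)
```

Concatenating the vectors obtained at the different diameters yields the multiscale input used to train the classifiers.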
The five features calculated for the points in each sphere were assigned to the point in the center. Classification Methods Many classification methods have been described in the literature, but we choose to use four, according to their simplicity and proven discrimination capacity: linear discriminant analysis (LDA), multiclass logistic regression (LR), multiclass support vector machines (SVM), and random forest (RF). Although they are well-known methods that have been described and applied many times before, in the following sections we provide a brief summary of each of them, in order to highlight their fundamentals and assumptions. Linear Discriminant Analysis LDA is, as its name indicates, a linear transformation that computes the directions of the axis that maximize the separation between multiple classes [29,30]. The data points are projected onto these directions. For a set of observations X = (x 1 , x 2 , ..., x n ); x i ∈ R p (in this study, p = number of scales and n = number of points), this can be accomplished by maximizing the ratio between the within-class (S w ) and between-class (S b ) scatter matrices, µ j is the mean value for class j, n i is the number of elements in class i, and µ is the mean value for all the classes. By maximizing R, the algorithm tries to assign elements to each class so that these are as homogeneous as possible while their means are as far apart as possible. If S w is nonsingular, the solution is given by the eigenvectors of S −1 w S b corresponding to the largest C − 1 eigenvalues. These eigenvectors represent the directions of maximum separation between classes. LDA assumes that the features (input or explanatory variables) are continuous and normally distributed, while the dependent variable is categorical. Logistic Regression Multinomial logistic regression [31] is in some way similar to LDA, since it also establishes a linear transformation between the output variable and input variables. However, the linear transformation is not between the output and the input variables, but between the input variables and the odds of the output categorical variable. In addition, input variables do not need to be continuous and normally distributed. They do not even have to be independent. The result is not the class for a category but the probability of an observation belonging to it. The mathematical model for the multiclass logistic regression can be expressed as follows: where Y represents the output variable (a categorical variable representing the class), α i and β j are the coefficients of the model, and x 1 , x 2 , . . . , x p are the covariates. This means that a multiclass logistic regression model for C classes is equivalent to C-1 binary models considering one of the classes a reference (class C in Equation (3)). The rest of the classes are separately regressed against the reference. From (3) it follows that Finally, each observation is assigned to the class of maximum probability. Support Vector Machines Essentially, SVM algorithm [32] maps the original finite-dimensional space into a much higher-dimensional space where the boundaries between classes are hyperplanes of the form wX + b = 0, where w represents the normal vector to the hyperplane and b represents the offset. SVM search for the maximum-margin hyperplane, that is, the hyperplane for which the distance to the closest observations of each class is maximized. 
In fact, the maximum-margin hyperplane lies halfway between two parallel hyperplanes (margin hyperplanes) that separate the two classes. The distance between these two planes, that must be maximum, is called margin. For a hypothetical perfectly separable case, no observation may lie between the margin hyperplanes. In a nonperfectly separable situation, the margin is "soft", which means that classification errors ξ are admitted and have to be minimized. SVM is formulated as the following minimization problem with restrictions min w,b The solution to this optimization problem is a linear combination of a subset of the original observations located near the margin hyperplanes, w = ∑ n i=1 α i x i , named support vectors, that completely determine them (hence their name). C is a parameter that controls the generalization ability of an SVM. Although they may seem very different, SVM resembles LR as both solve the same optimization problem, although with different loss functions. However, SVM is not a probabilistic method, so it directly assigns a class to an observation, not a probability. Multiclass support vector machines is an extension of the original binary classifier to problems with more than two classes. Several strategies have been proposed [33], although a typical solution consists of solving several binary problems, such as with the "one-against-all" or "one-against-one" approaches. The first one constructs C binary classifiers, while the second constructs C(C − 1)/2 classifiers, C being the number of classes. Random Forest Random forests are assemblies of classification orregression trees (CART) [34,35] that use the bootstrap aggregated ensemble method to combine them and reduce the variance. This method consists of building multiple decision trees by resampling with replacement of the data and averaging the prediction. CART are built through a recursive binary partitioning, that involves iteratively splitting the data into subsets according to some rules. Then, the resulting subsets are split into two new datasets and so on until no more splits can be obtained according to some criterion. In each partition, the objective is to obtain a pair of homogeneous subsets. If this procedure is represented in a graphic from the top (initial node or dataset) to the bottom joining each subset (son node) by means lines (branches), then it resembles a tree. One positive characteristic of this machine learning method, compared with other methods such as LDA, LR, or SVM, is that it is easy to understand and to interpret, as it is close to human thinking. There are different partition metrics or cost functions to evaluate splits. Specifically, CART use the Gini index of impurity, a measure of how mixed the classes are in the groups after a split. The minimum value is 0.0 (for a perfect separation) and the maximum value gets close to 1. For a set of items with C classes, the Gini index is where p i is the fraction of items labeled with class i in the set, and ∑ k =i p k = 1 − p i is the fraction of items labeled with a class different to i. From all the possible splits dictated by the data, the one with the smallest aggregated Gini index is selected. One way to measure the relative importance of each input variable is to calculate the mean of the Gini index decrease throughout the tree. For each variable, the G is calculated and accumulated each time that variable is chosen to split a node. Finally, the sum is divided by the number of tress in the forest to calculate the average value. 
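A compact way to reproduce this four-model comparison is with scikit-learn, as sketched below on synthetic data standing in for the multiscale features; the hyperparameters follow those reported later for the experiments (a 50-tree forest and an RBF SVM with gamma = 0.01 and C = 10), and the snippet also prints the overall accuracy and Kappa coefficient discussed in the next section together with the Gini-based variable importances of the random forest. It is an illustrative sketch, not the code used in the study.

```python
# Illustrative comparison of the four classifiers on synthetic data.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, cohen_kappa_score

# Stand-in for the multiscale features (e.g. 5 features x 6 scales = 30 inputs).
X, y = make_classification(n_samples=4000, n_features=30, n_informative=12,
                           n_classes=5, n_clusters_per_class=1, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.4, random_state=0)

models = {
    "LDA": LinearDiscriminantAnalysis(),
    "LR": LogisticRegression(max_iter=1000),
    "SVM": SVC(kernel="rbf", gamma=0.01, C=10),      # values reported in the paper
    "RF": RandomForestClassifier(n_estimators=50, random_state=0),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    pred = model.predict(X_te)
    print(f"{name}: accuracy={accuracy_score(y_te, pred):.3f} "
          f"kappa={cohen_kappa_score(y_te, pred):.3f}")

# Mean decrease in Gini impurity, used to rank variable importance.
importances = models["RF"].feature_importances_
print("most important variables:", np.argsort(importances)[::-1][:5])
```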
Evaluation of the Results To evaluate the performance of the different models, five metrics have been used: precision, recall, and F1 score for each class; and overall accuracy and Kappa coefficient for all classes. Precision measures the proportion of points classified as positives. Recall measures the proportion of positives that are real positive. F1 score is a metric that combines precision and recall. The strategy to calculate these metrics is one-versus-all, this means that any one of the metrics for each class is calculated against all the samples for the rest of the classes. Overall accuracy measures the proportion of points correctly classified, independently of the classes, and Kappa coefficient tells us how good the classification is compared to random assignment. The four supervised classification methods were trained with the same dataset and applied to the same test sample. The training dataset was balanced, so approximately the same number of points were stored for each class. Urban Point Cloud In order to validate the proposed methodology, we applied it to two point clouds, one corresponding to an urban area and the other to a forestry area. The first one has mainly artificial objects and was obtained using a Mobile Laser Scanner (MLS), that consists of a two-dimensional laser scanner, an Inertial Measurement Unit (IMU), and a Global Navigation Satellite system (GNSS), all of them calibrated and mounted on road vehicle. The measurement rate is 1000 points per second, and the maximum measurement range is 60 m. The vehicle drove at up to 20 km/h along 1.5 km in the CMU Campus in Oakland, USA, gathering 1.6 million points [36][37][38]. As will be shown, although the quality of the point cloud is not high, the results obtained were good, despite it being collected with a low cost system. As can be imagined, it is essential to process these point clouds in order to select and extract the different kinds of objects. Five classes were considered: poles, ground, vegetation, buildings, and vehicles. Figure 1 shows a small piece of the point cloud, where different colors have been assigned to each class. Forest Point Cloud The second point cloud tested was a forest plot, where objects to be extracted are natural. In particular, we were interested in three classes that are important in forestry: ground, trunks, and branches. Unlike the previous point cloud, the system used to perform the study was a wearable laser scanner (WLS), the ZEB-REVO (GeoSLAM) [39] mobile laser scanner. It integrates a rotating 2D scanning device, an IMU, and data storage and processing units. The system acquires information from the scanning head that is transformed into a three-dimensional point cloud by applying 3D-SLAM algorithms, instead of the integration of GNSS and IMU data performed by MLS systems. The data acquisition is performed within the default range 0.60-15 m outdoors, with a scanning rate of 40,000 points per second. The dataset contains 1.3 million points from a Sitka Spruce forest plot in Aberfoyle, Scotland (UK) that contains 10 trees. The data is owned by Forest Research (UK) and provided to the authors for the purpose of this study. Figure 3 shows a small part of the forest point cloud. Points for each of the three classes extracted to train the mathematical models are depicted on the right. Figure 4 shows the values of the first two normalized eigenvalues and the horizontality at five scales. 
As in the previous example, it is almost possible to visually distinguish the three classes from the values of these features at some scales. Results and Discussion The proposed methodology was applied to the data corresponding to both the urban and the forest scenarios. To construct RF and SVM models, it is necessary to fix some parameters. The same parameters were used for both datasets. For the RF model, a forest with 50 trees and a minimum and maximum of 1 and ∞ terminal nodes were fixed. For the SVM model, a radial basis function with parameter γ = 0.01 and C = 10 were chosen. Table 2 shows the results for the urban dataset. Specifically, the training sample contained 20,869 observations, while the test sample had 15,553 observations. Training and test samples were independent, as they correspond to different areas without adjacent points. The metrics improve those obtained with each individual scale. As Table 2 shows, the four models provide similar results. Given their simplicity and time of calculation, linear discriminant or logistic regression could be considered the most appropriate, but they have worse predictive performance than the other two. SVM and RF models have better behavior for almost all classes, similar to those reported in previous works, even when, mostly, the algorithms were explicitly designed to extract a specific type of object [4,9]. A review of the literature provides the following ranges of recall values: poles (0.67-0.82); vegetation (0.85-0.98); buildings (0.41-0.87); cars (0.13-0.95). However, in addition to the low quality of the point cloud, it should be taken into account that our approach does not include any preprocessing operation to remove noise and artifacts, nor color or intensity as input variables. For those users interested not only in prediction, but also in determining the most significant or important features, RF has methods to obtain an ordered list of them. Figure 5 (left)-provides the mean decrease in Gini coefficient for each variable, which is a measure of the variable importance. A higher mean decrease in Gini indicates higher variable importance. In this sense, the range of the Z coordinate and the horizontality are within the most important variables for the urban dataset. Regarding the forest point cloud, the results are registered in Table 3. The size of the training and test samples were 16,603 and 7508, respectively. All models provided similar results in terms of predictive capacity, although SVM is much slower than the other three models. The three classes have high values for the five metrics, with a global accuracy for each of them above 93%. These values are really good, above those reported by other authors (see [40] for a review of forest inventory with TLS). Again, for people interested, not only in the predictive capacity of the models but in the analysis of the features and variable selection, the RF model has the advantage of giving an ordered list of features according to its importance to discriminate between the different object categories. As in the urban scenario, according to the RF model, horizontality is among the most important variables to discriminate between the three classes ( Figure 5 (right)). Conclusions In this work, we analyze the utility of multiscale supervised classification methods for laser scanning or photogrammetry point cloud segmentation, for both artificial and natural objects. 
The method relies only on the coordinates of the points (not the color or the intensity), that is, on the geometric information, which makes it independent of the system used to capture the point cloud. Only a few easily calculated variables (obtained at different scales from the coordinates) were used to solve the problem. Four different supervised classification algorithms were tested, with behavior differing depending on the characteristics of the data. The best results were obtained for the forest dataset, reaching an overall accuracy of 96% and a Kappa coefficient of 0.93. Good metrics, although significantly lower (overall accuracy of 85% and a Kappa coefficient of 0.81), were also obtained for the urban dataset, despite the limited quality of the data. These metrics are similar to, and in some cases better than, those reported in previous works using different techniques. Although similar results were obtained for the four classification methods tested, the random forest algorithm provided the best solution when accuracy, calculation time, and the ability to rank the importance of the different input variables are considered at the same time. Regarding the scales and the extracted features used to feed the machine learning algorithms, the analysis of relative importance shows that almost all the scales contribute to the solution, and that horizontality is among the most important variables at different scales. In addition, we verified that the multiscale strategy surpassed a single-scale strategy. Funding: This research received no external funding. Conflicts of Interest: The authors declare no conflict of interest.
5,790.4
2019-10-01T00:00:00.000
[ "Environmental Science", "Engineering", "Computer Science" ]
Spatiotemporal Variations of NO y Species in the Northern Latitudes Stratosphere Measured with the Balloon-borne MIPAS Instrument This paper presents the spatiotemporal distribution of NO y species at altitudes between 14 and 31 km as measured with the MIPAS-B instrument on the morning of 21 March 2003 in northern Scandinavia. At lower altitudes, temperature variations and the distribution of ClONO 2 and the tracer N 2 O reveal the dynamics along the cross section through the edge of the late arctic polar vortex. At higher altitudes, continuous measurement before, during, and after sunrise provides information about photochemistry, illustrating the evolution of the photochemically active gases NO 2 and N 2 O 5 around sunrise. The measured temporal evolution of NO 2 and N 2 O 5 is compared to box modelling that is run along backward calculated trajectories. With regard to NO 2 , there is good agreement between the model and the observations in terms of quantity, but the photochemistry in the model is slightly too slow. The comparison of measured and modelled N 2 O 5 , however, reveals significant differences in the absolute quantities, also pointing at a too slow photochemistry in the model. Introduction Odd reactive nitrogen (NO y ) can be divided into the reactive radicals NO x and the less reactive reservoir species (Brasseur et al., 1999). For the following discussion, NO x denotes the sum of NO, NO 2 , and NO 3 , and NO y comprises NO x together with the reservoir species considered here (principally N 2 O 5 , counted twice, HNO 3 , and ClONO 2 ). Nitrogen constituents which are of minor importance for the NO y budget, such as HNO 2 and BrONO 2 , are neglected here. The most important NO y reactions are shown in Fig. 1, with hν denoting photolytic dissociation reactions. The time constants for the photolytic reactions are quite different for the various species. Photolytic reactions of NO 2 and NO 3 are very fast (on the order of minutes), while photolysis of N 2 O 5 is slower (on the order of several hours), and photolysis of HNO 3 and ClONO 2 is almost negligible in the lower stratosphere (Wayne, 2000) at high latitudes in winter. Thus, the partitioning within NO x is dominated by fast photochemistry. NO 2 is most prominent during nighttime, while a considerable amount of NO x is converted to NO during daytime. The mixing ratios of NO and NO 3 differ between sunlit and dark conditions by a few orders of magnitude, depending on altitude and latitude. In the middle and lower stratosphere, the reformation of NO 2 after sunset is about as fast as its photolytic dissociation after sunrise. N 2 O 5 is photolysed into NO 2 and NO 3 during the daytime. This reaction is much slower than the photolysis of NO 2 into NO, thus N 2 O 5 decreases slowly during the whole daytime period. At nighttime, in the absence of sunlight, N 2 O 5 is built up through the reaction of NO 2 with NO 3 , leading to a slow increase of N 2 O 5 during the night with its maximum occurring around sunrise.
The NO y family plays an important role in ozone chemistry, where the NO y species have the ability to act in two ways.One is the ability to buffer reactive halogen species by forming reservoir gases and the other is its capability of catalytic ozone destruction (Brasseur et al., 1999).While the overall reactive nitrogen content NO y is invariant at short timescales, the partitioning changes rapidly around sunset and sunrise and slowly during day-and nighttime.Furthermore, the amount of NO y at a certain altitude differs between inside and outside the vortex. Numerous measurements of individual species of the NO y family have been reported over the last decades whereas observations of the complete partitioning and budget can be found less frequently in the literature.The latter have been mainly based on airborne and spaceborne remote sensing during day-and nighttime in emission or at sunrise and sunset in occultation (e.g.Toon, 1987;Abbas et al., 1991;Sen et al., 1998;Danilin et al., 1999;Osterman et al., 1999;Küll et al., 2002;Stowasser et al., 2002Stowasser et al., , 2003;;Wetzel et al., 2002;Mengistu Tsidu et al., 2005). From these various measurement techniques, diurnal variations of NO y species can be best addressed by balloonborne emission measurements.Emission instruments are capable of measuring in any azimuth direction at any time of the day, and a balloon platform allows sampling of the same air masses over several hours.Stowasser et al. (2003) have studied the variation of short-lived NO y species around sunrise, but this was based on only three limb sequences, that have been performed one hour before, during, and three hours after sunrise. For the study presented here, vertical profiles of the most important NO y species have been measured quasicontinuously with high temporal resolution during several hours around sunrise.The measurements cover the altitude range from 14 to about 31 km and a spatial range of a few hundred km.This has allowed the study of both the temporal evolution of short-lived species and the distribution of longer-lived constituents across the vortex edge. Measurement and sampling The Michelson Interferometer for Passive Atmospheric Sounding, balloon-borne version, MIPAS-B, is a cryogenic Fourier transform spectrometer which measures the thermal emission of the atmosphere using the limb sounding geometry (Fischer and Oelhaf, 1996).Details about its layout, measurement technique, and data processing are reported by Friedl-Vallon et al. (2004) and references cited therein.The instrument covers the wavenumber region of 750 cm −1 up to 2460 cm −1 (equivalent to 4.06 µm up to 13.3 µm) where the most important NO y species show prominent rotationalvibrational transitions. The passive measurement of the thermal emission of atmospheric constituents allows MIPAS-B to measure at any time of the day and to point the line-of-sight (LOS) in elevation and azimuth according to the scientific needs.The remote sensing technique of MIPAS-B is suited for covering a range of altitudes in a short time interval and the azimuth angle can be adjusted relative to the position of the sun.These capabilities are essential for studying temporal evolutions at consistent illumination conditions. 
Measurement technique and data analysis The raw data measured by MIPAS-B are interferograms with a maximum optical path difference of 14.5 cm leading to a spectral resolution of about 0.07 cm −1 (apodised).The sampling of one interferogram lasts about 10 s.The pointing system (Maucher, 1999) offers an accuracy of better than 150 m (3σ ) with respect to tangent point altitudes. The processing of the measured interferograms to calibrated spectra includes mathematical filtering, non-linearity correction, phase correction, and complex Fourier transformation (Kleinert, 2006;Kleinert and Trieschmann, 2007).The two point calibration that leads to radiance units is done by means of "deep space" (+20 • elevation angle) and black body spectra.As a measure of the instrument sensitivity approximate Noise Equivalent Spectral Radiance (NESR) values for each spectral channel are compiled in Table 1. For the temperature and trace gas retrievals the Karlsruhe Optimized and Precise Radiative transfer Algorithm KO-PRA (Stiller et al., 2002) and the adapted inversion tool KO-PRAFIT have been used.A Tikhonov-Phillips regularisation approach was applied which was constrained with respect to the form of an a priori profile.Spectroscopic data was taken from the spectroscopic database HITRAN01 (Rothman et al., 2003).Absorption cross-sections of ClONO 2 originate from Wagner and Birk (2003).The microwindows used for the retrieval are basically the ones described by Wetzel et al. (2002). All retrievals have been performed with the same a priori information independent of the time of day.Error calculations include noise and LOS errors as well as spectroscopic errors for all retrievals.For the temperature retrieval, Error contributions due to horizontal gradients along the line of sight can be regarded as minor for the reported observations because of the carefully defined measurement scenario and the well defined averaging kernels with narrow peaks of the contribution function around the tangent altitudes.For the retrieval of ClONO 2 , for example, horizontal gradients of 1 ppbv per 100 km generate only small variations of up to 1% in the retrieved volume mixing ratios (vmrs). The retrieval of NO profiles under consideration of nonlocal thermodynamic equilibrium (NLTE) effects has turned out to be extremely difficult for the balloon measurement geometry at this season and latitude, since both the stratospheric and the thermospheric contributions of NO were mainly located above the balloon altitude of about 31 km.Therefore, those two contributions could not be separated properly, leading to large errors in the retrieved profiles.Qualitatively, however, the raise of NO with sunrise is quite obvious.Because of the large uncertainties in the retrieved NO profiles, these results are not further discussed, and the NO y budget calculations will be restricted to the nighttime measurements. Sampling approach In order to sample the temporal evolution of the NO y species around sunrise, a dedicated measurement scenario has been performed covering the time from two hours before to three hours after sunrise.This scenario had to fulfil two requirements: First, the limb scans had to be comparably fast in order to get a high temporal resolution, and second, the direction of the measurement beam had to be chosen perpendicular to the azimuth direction of the sun in order to ensure symmetric illumination conditions along the LOS before and beyond the tangent point. 
The first requirement was reached by averaging only two interferograms per tangent altitude (two have been chosen for the sake of redundancy).The tangent altitude sampling was 14 km, 17.5 km, 19.5 km, followed by equidistant steps of 1.5 km up to 30 km, 31 km, and completed by constant ele-Fig.2. Measurement scheme with its tangent points above northern Scandinavia.The colour code denotes the altitudes of the tangent points.For each azimuth direction, start and stop times of the measurements are indicated in UTC.The sunrise was between 03:40 UTC at the highest and 04:00 UTC at the lowest tangent points, during the measurements in the third azimuth direction from the west.vations at −0.3 • , 2 • , and 20 • .With this sampling scheme, the measurement duration of one limb profile was about 5 min.In total 58 limb sounding sequences have been recorded.In order to fulfil the second requirement the azimuth direction of the LOS was changed about every 30 min.Thus, individual limb sequences were grouped in 7 different azimuth angles. The sampling of the measurements with the coordinates and altitudes of the tangent points is shown in Fig. 2. Very calm conditions at float altitude led to nearly no movement of the gondola during the recordings.It should be noted that this measurement pattern not only covers the temporal evolution of the NO y species, but, by looking into different azimuth directions, also the spatial distribution within the covered range.While the temporal evolution is the dominant effect for short-lived species in the higher altitude range, variations of long-lived trace gases in the course of the measurements rather reflect the spatial distribution of the trace gases than their temporal evolution.shows the distribution at the potential temperature level 475 K (∼19.5 km), the lower one at 550 K (∼22.5 km).The tangent points of the respective altitudes are given as black dots.The dashed line shows the vortex edge according to Nash et al. (1996). Meteorology The early winter 2002/2003 was governed by low temperatures (Naujokat and Grunow, 2003) that were below the threshold temperature for the formation of polar stratospheric cloud (PSC) particles (Hanson and Mauersberger, 1988).After a major warming in mid-January followed by a reformation of the vortex, temperatures sank again below the threshold temperature for few days in early February.During those periods PSCs were observed (Spang et al., 2005) and denitrification had been measured by the MkIV instrument (Grooß et al., 2005).Grooß et al. (2005) have modelled the PSC formation as well as denitrification and give a more detailed view about the meteorological conditions during the winter 2002/2003. The MIPAS-B measurements have been performed during a balloon flight above Kiruna (Sweden) at 20th/21st of March 2003.At this time of the year, a weak arctic vortex still existed and its center was shifted to Scandinavia and northern Russia. The vortex axis was tilted in the vertical (see Fig. 3), therefore the different tangent altitudes (Fig. 2) were partly inside and partly outside of the vortex.The fields of potential vorticity (PV), as well as an analysis of the edge of the polar vortex calculated according to Nash et al. (1996), have shown that the westerly tangent points at 17 km and 19 km were situated outside the vortex, while the easterly tangent points were inside.All tangent points above 21 km were clearly inside the vortex.At the lowest level (14 km) the vortex was not well defined anymore. 
Regarding the time of the year an exceptionally high tropopause was found in the westernmost measurement region above the North Sea and Norway in the ECMWF temperature profiles and the distributions of the PV.The temperature profile of a radiosonde launched from Kiruna the same day shows a first strong inversion, indicating the tropopause, as high as almost 13 km. In summary, the measurements taken at altitudes between 17 km and 21 km covered the edge of the polar vortex with strong horizontal gradients while weaker gradients could be expected above these altitudes, where the scanned air masses were situated inside of the vortex.Therefore, both features can be explored, the edge of polar vortex with its strong horizontal gradients in the lowermost stratosphere as well as the diurnal evolution of the shorter lived species in the middle stratosphere. Observations The measurements reflect different air masses across the vortex edge (mainly at lower altitudes) as well as diurnal variations (mainly at higher altitudes).Therefore, the discussion focuses first on temperature and longer lived species to characterise the dynamical and thermal state of the observed air volumes.Thereafter, the diurnal evolution of photolytically active species such as NO 2 and N 2 O 5 will be discussed. The following plots show the retrieved profiles in colour code.The observations in the different azimuth directions are separated by black bars.The time of measurement is displayed on the X-axis.This axis is not perfectly linear because of calibration measurements during the change of azimuth angles.The approximate longitudes of the tangent points, valid for an altitude of 21 km, are denoted as white numbers.They serve as indicator for the position relative to the vortex edge.Furthermore, Figs. 10 and 11 include the sunrise for different altitudes as a white solid line. Temperature The retrieved temperature profiles are shown in Fig. 4. The temperature does not show mentionable diurnal variations, therefore the longitude, which serves as indicator for the position relative to the vortex edge, is more relevant than the time axis.Temperature errors of the individual profiles are in the order of 1 K (1 σ , random and systematic error) and differences to ECMWF temperatures are in the range of 1 to 2 K (not shown). The transection across the vortex edge shows differences between the air masses situated outside or inside the polar vortex.The more easterly the measurement is situated, the more the air masses are influenced by processes inside the vortex.The altitude of the minimum temperature decreases from about 22 km at the westernmost profiles to about 19 km at the easternmost profiles.In the altitude range of about 17 to 19 km, where the measurements cross the vortex edge, horizontal gradients are in the order of 1.5 K 100 km with a maximum value of 1.8 K 100 km at 18 km (see also Fig. 9). N 2 O The retrieved N 2 O profiles of all 58 measured sequences are collected in Fig. 5.The horizontal gradient of N 2 O across the vortex edge in 18 km is shown in Fig. 
9.Because of the longevity of N 2 O, again the situation of the sampled air masses relative to the vortex is more important for the interpretation of the measurement than the time of day.The dynamic tracer N 2 O indicates the subsidence of air masses across the vortex edge very well.The subsidence is clearly visible below 21 km and is more pronounced at lower altitudes.From the 75 ppbv N 2 O contour (cyan), a subsidence of about 1 km can be derived (from 20.8 km to 19.8 km).Values of 225 ppbv (yellow) range from 18.5 km to 16 km, revealing a subsidence of about 2.5 km in this altitude range.It should be noted, however, that the actual subsidence may be larger than the values derived from these measurements because the measurements do not cover the whole range from well outside to well inside the vortex at all altitudes and mixing across the vortex edge may also have had an influence on the N 2 O concentration (Müller et al., 2007).Above the cyan area, the relative subsidence is less obvious, because at higher altitudes the tangent points of different azimuth directions are relatively close together (see Fig. 2) and all situated inside the vortex.Furthermore, vertical gradients are very weak, which makes the quantification of any subsidence in that altitude region uncertain. Very low mixing ratios of N 2 O are measured at altitudes between about 22 and 24 km.Mixing ratios below 15 ppbv are marked in dark blue colours and the black area at this altitude in the two easternmost azimuth directions indicates extremely low N 2 O concentrations close to zero.These mixing ratios can be attributed to air masses originating from the upper stratosphere as suggested by Müller et al. (2007).For a more detailed discussion of the dynamical situation of the late winter 2003 vortex, see also Engel et al. (2006). The very high tropopause level mentioned in Section 3 is reflected by the high mixing ratios of N 2 O at the lowest altitude of 14 km of the westernmost sequences.The maximum values of about 323 ppbv are in good agreement with the tropospheric mean value of 2003 (about 318 ppbv, WMO, 2006).individual errors.HNO 3 dominates the budget up to 28 km altitude while higher up, the budget is dominated by NO 2 . The nighttime contribution of NO to the total NO y is negligible in the measurement range.Thus, the exclusion of NO is acceptable in the nighttime budget, while at daytime the NO contribution to total NO y exceeds that of NO 2 at higher altitudes (not shown). Several NO y species measured with MIPAS-B during the same balloon flight but at different times and azimuth directions have been compared to coinciding satellite measurements (e.g.Höpfner et al., 2007;Wang et al., 2007;Wetzel et al., 2007Wetzel et al., , 2008)), showing the overall high quality of data measured by MIPAS-B. HNO 3 The retrieved mixing ratios of HNO 3 across the vortex edge are displayed in Fig. 7.In contrast to N 2 O, HNO 3 cannot be regarded as good dynamical tracer as it undergoes photochemistry over weeks and may also be affected by denitrification, sedimentation, and renitrification. From looking at the HNO 3 profiles, the subsidence of the air masses inside the vortex seems also visible but is much less obvious as compared to N 2 O (see Fig. 5).Furthermore, the inside-outside contrast could underpin subsidence only for altitudes below about 19 km as it exhibits a different vertical behaviour than N 2 O. 
Inside the vortex the HNO 3 peak mixing ratios are lower than outside by up to about 1 ppbv whereas, below the vmr peak, the HNO 3 abundance is increased.This pattern suggests some residual redistribution of HNO 3 after events of denitrification that were reported for periods earlier that winter (Grooß et al. (2005), see also Sect.3).However, the vertical redistribution of HNO 3 does not show up very clearly anymore at this time of the year since after the major denitrification events that have taken place in January, dynamics and photochemistry might have washed out any more pronounced structures. ClONO 2 As described in Sect. 3 the MIPAS-B measurements cover the vortex edge at lower altitudes.Figure 8 shows the distribution of ClONO 2 across the vortex edge.After periods of strong chlorine activation, a chlorine nitrate ring is formed near the vortex edge in spring due to the recombination of ClO with NO 2 .An intersection through this ring is clearly visible in Figs. 8 and 9. Maximum ClONO 2 mixing ratios of up to 2.5 ppbv appear inside of the vortex close to the edge pointing back at previously strong chlorine activation. The retrieved ClONO 2 profiles show significant differences between adjacent azimuth directions (separated by thin vertical lines), revealing the very strong horizontal gradients of ClONO 2 at the edge of the polar vortex.The different subsidence at adjacent longitudes is clearly visible in the lower altitude range. NO 2 Figure 10 shows the evolution of the NO 2 mixing ratios before, during, and after sunrise.The plot is performed like the figures shown before, but here the time of measurement has to be noted and the white solid line in the third azimuth direction displays the time of sunrise for the different altitudes. The plot shows mixing ratios of NO 2 up to 6.5 ppbv during nighttime at the highest altitudes.At daytime, the mixing ratios at these altitudes are reduced to about 2.5 ppbv.The reduction during sunrise is quite fast and shows the fast photolysis rates of NO 2 .For example, at 31 km, the decrease from 5.4 ppbv at local sunrise to 2.3 ppbv takes place within 65 min (see also Fig. 12).The diurnal variations are visible down to about 22 km. N 2 O 5 For the interpretation of the measured mixing ratios of N 2 O 5 shown in Fig. 11 again the measurement time is more important than the longitude.This species is expected to reveal minimum mixing ratios around sunset and maximum mixing ratios around sunrise as described in Sect. 1. Although the results of N 2 O 5 are a little noisier than those of e.g.NO 2 , the tendencies are obvious.As expected, the maximum of the mixing ratios is found around sunrise.The peak mixing ratio of 1.15 ppbv is measured at an altitude of 30 km few minutes after local sunrise (see also Fig. 12). The mixing ratios at altitudes below 25 km do not show a significant response to changing sunlit conditions.The low volume mixing ratios of N 2 O 5 at about 23 km in the easterly profiles can again be explained with the reduced NO y caused by the already mentioned downward transport from the upper stratosphere. Modelling A box model has been used to compare the MIPAS-B measurements with state-of-the-art chemistry modelling.For the model calculations, backward trajectories originating from the tangent points and times of the MIPAS-B measurements have been calculated. 
Trajectories The backward trajectory calculations (Langematz et al., 1987;Reimer and Kaupp, 1997) are based on the ECMWF analysis given every 6 h with a resolution of 2.5 • in latitude and longitude.The trajectories are calculated on isentropic surfaces taking into account radiative heating and cooling with climatological heating rates.9. Temperature and vmrs of N 2 O, HNO 3 in 18 km, and ClONO 2 in 20 km altitude as a function of longitude.This plot reveals the gradients across the vortex edge in the respective altitudes.For each azimuth direction, the mean value of all measurement sequences is shown.The error bars denote the standard deviation (1 σ ) of the results within one azimuth direction.10.Same as Fig. 4, but for NO 2 .Here, the added white solid line denotes the time of the local sunrise, which is altitude dependent. At each altitude, the box model was run for a duration of 3 days along the calculated backward trajectories, ending at the tangent points of the MIPAS-B measurements.In order to compare the model results to the measurements, the trajectories ending at the times and locations of measurements for the first (westernmost) azimuth direction have been extended along so-called synthetic trajectories which are defined by the times and locations of the following MIPAS-B measurements of the same altitude.These synthetic trajectories do not represent the transport of the air parcels anymore; they give only the model results along the MIPAS-B measurements.Furthermore, for each tangent altitude and azimuth direction of the measurements, individual trajectories have been determined and model calculations along these individual trajectories have been performed.The model results at the measurement points calculated with the individual trajectories do not differ significantly from the results calculated along the synthetic trajectories.Therefore, solely the results of the box modelling along the backward trajectories of the first azimuth direction in combination with the synthetic trajectories along the points of measurements will be discussed further. Box model The box model is a zero dimensional chemistry model described by Ruhnke and Röth (1995) and includes rate coefficients taken from Sander et al. (2003).The photolysis rates are precalculated for the different altitude levels of the trajec- tories with the radiation transfer model ART (Röth, 2002).Therein, the solar zenith angle dependence of the photolysis rates of each substance at a distinct altitude is given by the following parametrisation: (3) Therein f 0 is the photolysis rate at overhead sun conditions and the coefficients b and c describe the solar zenith angle dependency of the photolysis rate. As the photolysis rates depend especially on the ozone profile, a climatological ozone profile, normalised to the mean ozone column measured by MIPAS-B, has been used to calculate the photolysis rates as realistically as possible.For the albedo, a constant value of 0.7 has been used.The box model takes into account 48 different gases, combined in nine families, and includes 167 reactions, of which 39 are photolytic.During the three days of backward trajectory calculations prior to the start of the measurements, time steps of the model are 10 min and the output is obtained every hour.During the time period of the measurements, each time step and output of the model is similar to the temporal resolution of the corresponding measurements (i.e. 
about 5 min).The zero dimensional modelling offers the possibility to simulate the chemistry of the measured air parcels for several days, because photochemistry is modelled according to the sunlit conditions along the Pressure and temperature are taken from the trajectory calculation.This way of modelling is justified as long as horizontal and vertical mixing is negligible.For initialisation of the box model the mixing ratios of the gases used for the modelling are taken from a long-term simulation of the 3-D chemistry transport model KASIMA (Kouker et al., 1999), except for the constraints described below.A multi-annual KASIMA run with a grid of 5.6 • in latitude and longitude is used with an interpolation to the starting coordinates of the trajectories.The initial NO y was constrained to the total NO y content measured by MIPAS-B.Therefore, the various NO y species NO y,i had to be normalised by the factor NO y , such that the NO y partitioning given by KASIMA is preserved, but the total amount is constrained to the budget of the MIPAS-B measurements.The normalisation factor is in the order of 0.8 to 1.5.However, sensitivity studies performed with different initialisations (not shown) do not exhibit any non-linearity.With this normalisation the mixing ratios for the box modelling NO y,i,BOX becomes: NO y, i, BOX = NOy • NO y, i, KASIMA These calculations have been done in terms of the definition of NO y as given in Eq. ( 2).The mixing ratios of the tracers N 2 O, CH 4 , as well as of ozone are also taken from the MIPAS-B measurement to avoid any biases in the initialisation fields modelled by KASIMA and to meet reality as good as possible. Results of box modelling Figures 13 and 14 present the time-resolved box modelling results together with the measurements for the photochemically active species NO 2 and N 2 O 5 , respectively.The time axis is extended to 3 days before the measurements to include the box model results for the simulated period.While this first part of the axis is linear, the time during measurement is stretched and not perfectly linear (see Sect. 4).The coloured circles overlaying the measurements denote the mixing ratios for the different altitudes in the same colour code as the measurements.Differences between model results and measurements thus appear as colour contrast between fore-and background.For clarity, the results of the box modelling during the 3 days before the measurement are displayed on fixed altitudes, although the air parcels experience some altitude excursion. The thin black lines denote the local solar zenith angle in the respective altitudes.In this context, the fixed altitude denotes the horizon.Thus, when the black line is above the circles, the air parcel is sunlit, while during nighttime, the black line is below the circles.Depending on altitude and taking terrestrial refraction into account, sunrise and sunset occur at solar zenith angles of 93 • to 95 • . NO 2 Model results The modelling of NO 2 (Fig. 13) during the 3 days before the measurements shows the altitude-dependent behaviour of NO 2 with changing sunlit conditions very well.The mixing ratios are changing rapidly between night-and daytime according to the variation of the solar zenith angle.This is visible down to 19.5 km.Only few hours are necessary for the model to tune and no mentionable accumulation or consumption of NO x due to an erroneous initialisation is visible during the modelled 3 days. 
The comparison between the model and the measurement at the right side of the figure and in the upper panel of Fig. 15 shows that the daytime equilibrium mixing ratios are represented very well by the model.However, the nighttime mixing ratios above 25 km are too high in the model and some differences are evident during sunrise.In the model, the reduction of NO 2 starts earlier (before sunrise), such that the NO 2 vmr in the model even drops below the measured values (see Fig. 15).However, later on, in the fourth azimuth direction, the model provides significantly slower reductions of NO 2 than the measurement, such that the NO 2 vmr in the model again exceeds the NO 2 vmr in the measurement, and the equilibrium mixing ratio is obtained more than one hour later, pointing to slower photolysis in the model.This bias in time is less pronounced at lower altitudes. N 2 O 5 Model results The diurnal variations of N 2 O 5 (Fig. 14) during the first days of modelling show minor variations with time compared to those of NO 2 and well defined maxima at sunrise and minima at sunset above 21 km. At the highest altitude, it appears that the N 2 O 5 mixing ratio obtained by KASIMA, which is used for the model initialisation, is too low.Despite the weak accumulation of N 2 O 5 during three days at the model run prior to the measurements, the comparison between the model and the measurements shows too small modelled volume mixing ratios at the highest altitude. The most significant differences between the box model results and the observations occur at altitudes between 25.5 km and 28.5 km where the simulated N 2 O 5 volume mixing ratios are larger than the measured ones (see also Fig. 15,lower panel).Beside initialisation problems due to an inappropriate partitioning of the individual NO y species, differences in the temperatures along the trajectory could result in a stronger simulated build up of N 2 O 5 in particular during night time (Kircher et al., 1984).However, this explanation is not likely in our case, since the differences between retrieved MIPAS-B temperatures and ECMWF temperatures are mostly below 2 K.Such a difference would alter the N 2 O 5 volume mixing ratio by only about 5%.As the main differences are obvious during night time, the assumed surface area density of sulphuric acid aerosols in the box model runs, which is kept constant during the 3 day simulation, is expected to be an important source of uncertainty for the modelled N 2 O 5 evolution (Dufour et al., 2005). Photolysis rates of NO 2 In order to get a better understanding of the discrepancies in the mixing ratios in the model and in the measurement around sunrise, photolysis rates for NO 2 (J (NO 2 )) have been deduced from the measurements. This requires some assumptions which are described in the following: 6.1 The basic relationship between NO and NO 2 is given by the following reactions: The change in [NO 2 ] can be calculated from Equation 9 can be solved for J (NO 2 ): In the steady state approximation d[NO 2 ]/dt is set to zero, but at sunrise (and also sunset) the changes in solar flux are too rapid for this assumption (Gao et al., 2001). In addition, NO 2 and NO 3 and thus NO x (=NO+NO 2 +NO 3 ) is in thermal equilibrium with N 2 O 5 : Therefore, it can be assumed that an extended NO x , which is defined as NO x,ext =NO x +2×N 2 O 5 , is constant during sunrise. 
6.2 Derivation of J(NO2) from MIPAS-B data

For the derivation of J(NO2) from MIPAS-B data, only the measurements in the third and the fourth azimuth direction (03:30 h-04:40 h) are considered, as the largest differences in NO2 occur shortly after sunrise (see Sect. 5.3.1). As temperature, pressure, time, zenith angle, as well as the vmr of NO2, N2O5, and O3 at each tangent point are known from the MIPAS-B data, only NO has to be estimated to derive J(NO2) from MIPAS-B data. The first four limb sequences of the third azimuth direction (called S1 to S4) are performed during night time, when [NO] is negligible, and [NOx]_night is set to the mean of [NO2] of these four sequences:

[NOx]_night = 1/4 Σ_{i=1..4} [NO2]_Si.

In addition, it is assumed that during the time period considered, [NOx,ext] is constant. Thus, [NOx,ext] can be determined similarly to [NOx]_night:

[NOx,ext] = [NOx]_night + 2 × [N2O5]_night,

with [N2O5]_night the corresponding mean over S1 to S4. With these assumptions, [NO] at a given time t can be calculated as

[NO](t) = [NOx,ext] − [NO2](t) − 2 × [N2O5](t).

When rewriting Eq. 10 with the time derivative approximated by the finite difference of [NO2] between successive limb sequences, we can calculate J(NO2) from MIPAS-B data:

J(NO2) = (k_NO+O3 [NO][O3] − Δ[NO2]/Δt) / [NO2].    (18)

Results and discussion

The J values for NO2 calculated according to Eq. 18 and the values used in the model (using Eq. 3) are displayed in Fig. 16 for an altitude of 31 km. While the photolysis rate in the model increases rather linearly with decreasing solar zenith angle (SZA), the photolysis rate deduced from the measurement is close to zero for SZAs above 93.5° and increases linearly below. However, the slope is much steeper in the measured data, such that the J values at 91° SZA exceed the values in the model by more than 50%. The lower J values used in the model explain the slower decrease of NO2. The discrepancy in the J values for NO2 between the measurements and those used in the box model runs can be explained by the parametrisation of the J values in the model. The values of the parametrisation are based on a fit of the originally calculated J values in the SZA range from 0° to 80° (E.-P. Röth, personal communication, 2008). Thus, the SZA range during sunrise which is analysed for J(NO2) in this paper is outside the fit interval of the parametrisation. This is obvious from Fig. 16 when comparing the J(NO2) values of the parametrisation to the originally calculated J values. These original J values are in much better agreement with the J values deduced from the MIPAS-B measurements. This leads to the conclusion that the parametrisation (Eq. 3) has its advantages mainly for global mean conditions with SZAs below 80°, but for winter or spring Arctic conditions at sunrise or sunset the fit procedure needs to be adjusted. An adjustment of the parametrisation for these conditions is currently in preparation (E.-P. Röth, personal communication, 2008). The influence of the ozone column and the albedo on the model data has been investigated with additional sensitivity calculations using ART. The ozone column has been increased by about 43 DU up to 296 DU, which is at the upper range of the TOMS data around Kiruna on 21 March 2003, but these variations in the ozone column have only a marginal effect on the J values, whereas they are rather sensitive to the albedo assumed. An albedo of 0.31 instead of 0.7, which has been applied for the box model runs, increases the J values significantly, bringing the model values closer to the measurement (see Fig. 16).
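Before turning to the conclusions, the sequence of steps in Sect. 6.2 can be summarised in a brief sketch. This is not the authors' code: the rate expression, the neglect of [NO3], and the use of a simple finite difference for the NO2 tendency are assumptions, and the input structure is invented for illustration.

```python
import math

K_NO_O3 = lambda T: 3.0e-12 * math.exp(-1500.0 / T)  # assumed rate expression

def j_no2_from_sequences(seq, T):
    """Sketch of the Sect. 6.2 steps for one altitude. `seq` is a list of limb
    sequences ordered in time; each entry is a dict with keys "t" (s) and
    "NO2", "N2O5", "O3" (molecules cm^-3). The first four entries are assumed
    to be the nighttime sequences S1-S4; [NO3] is neglected. Returns a list of
    (time, J(NO2)) pairs for the later sequences."""
    night = seq[:4]
    nox_night = sum(s["NO2"] for s in night) / 4.0
    nox_ext = nox_night + 2.0 * sum(s["N2O5"] for s in night) / 4.0
    result = []
    for prev, cur in zip(seq[3:], seq[4:]):
        no = max(nox_ext - cur["NO2"] - 2.0 * cur["N2O5"], 0.0)   # [NO](t)
        dno2_dt = (cur["NO2"] - prev["NO2"]) / (cur["t"] - prev["t"])
        j = (K_NO_O3(T) * no * cur["O3"] - dno2_dt) / cur["NO2"]  # Eq. (18)
        result.append((cur["t"], j))
    return result
```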
Conclusions

The results presented here show the ability of MIPAS-B to measure the diurnal variations of photochemically active NOy species with high temporal resolution. Furthermore, the capability of measuring the spatial distribution of various trace gases across the vortex edge has been demonstrated. The measurements in March 2003 yield a cross section through the edge of the polar vortex as well as the temporal evolution of photochemically active NOy species around sunrise. Spatial and temporal effects can be separated, because the measurements across the vortex edge occurred only at lower altitudes (as can be seen from the PV distributions, Fig. 3), where photochemistry is less important. The measurements at the higher altitudes are situated well inside the vortex, so that the chemistry can be investigated in isolation. The edge of the polar vortex and the relative subsidence of polar vortex air are well seen from the distributions of temperature and N2O at a good spatial resolution. This spatial resolution makes it possible to resolve the ring of enhanced ClONO2 close to the vortex edge with its very strong horizontal gradients. The measurements of species that are affected by photolysis clearly show the diurnal variation as expected for both NO2 and N2O5. The comparison with box modelling along backward calculated trajectories reveals differences in particular shortly after sunrise. The modelled decrease of NO2 after sunrise is about three times slower than the decrease in the measurement. However, for SZAs above 93.5° the model already shows some photolysis which is not seen in the measurement (see Fig. 16), leading to a negative difference in the left panel of Fig. 15 for high altitudes shortly after sunrise. One reason for the discrepancy between model and measurement is found in the parametrisation of the photolysis rates used in the box model. Especially at the high altitudes, the photolysis rates used in the model are significantly smaller than the photolysis rates deduced from the measurement. These differences are mainly caused by the J value parametrisation in the model, which is based on a fit of the originally calculated J values in the SZA range from 0° to 80°. This parametrisation is clearly not appropriate for SZAs around 90° as they occur in our measurements. Furthermore, it has been shown that the assumptions on the albedo have a rather strong influence on the photolysis rate, while errors in the overhead ozone profile have only a minor effect. The initialisation of the box model, in particular with respect to the assumed chlorine activation, is a further reason for the discrepancies between modelled and measured results.

Fig. 1. Simplified reaction scheme of NOy. All mentioned NOy species except NO and NO3 have been retrieved from MIPAS-B measurements for this study.

Fig. 3. Distribution of PV above north-eastern Europe on the morning of 21 March 2003. The upper panel shows the distribution at the potential temperature level 475 K (∼19.5 km), the lower one at 550 K (∼22.5 km). The tangent points of the respective altitudes are given as black dots. The dashed line shows the vortex edge according to Nash et al. (1996).

Fig. 4. Temperature across the edge of the polar vortex. The seven azimuth directions of the measurements are separated by black bars. The different blocks are labelled with the mean longitude of the tangent points at 21 km. The x-axis shows the time of the measurement.
NOy partitioning

Vertical profiles of nighttime NOy partitioning are shown in Fig. 6. The error bars include noise, temperature and LOS errors as well as spectroscopic errors (see also Sect. 2.1). The NOy error bars are calculated as the root of the sum of squares of the individual error contributions.

Fig. 6. Nighttime partitioning of NOy except NO of the first measured sequence.

Fig. 13. Comparison between measured and modelled NO2 mixing ratios. Model results are shown as coloured circles, while the measurement results are plotted at the right hand side in the background. Again, the white line denotes the time of the local sunrise.

Fig. 16. Photolysis rates as a function of the solar zenith angle. Black squares: values deduced from the measurements. Red line: values from the model using the parametrisation as given in Eq. 3. Red squares: exact values from the model. Furthermore, the model values for different O3 columns and albedos are shown for the parametrisation (param) as well as for the exact values from the model.

Table 1. Spectral channels of MIPAS-B during the flight in March 2003 along with NESR values and prominent gases in these spectral regions.
9,553.6
2008-01-01T00:00:00.000
[ "Environmental Science", "Physics" ]
Closing the loop – the role of pathologists in digital and computational pathology research Abstract An increasing number of manuscripts related to digital and computational pathology are being submitted to The Journal of Pathology: Clinical Research as part of the continuous evolution from digital imaging and algorithm‐based digital pathology to computational pathology and artificial intelligence. However, despite these technological advances, tissue analysis still relies heavily on pathologists' annotations. There are three crucial elements to the pathologist's role during annotation tasks: granularity, time constraints, and responsibility for the interpretation of computational results. Granularity involves detailed annotations, including case level, regional, and cellular features; and integration of attributions from different sources. Time constraints due to pathologist shortages have led to the development of techniques to expedite annotation tasks from cell‐level attributions up to so‐called unsupervised learning. The impact of pathologists may seem diminished, but their role is crucial in providing ground truth and connecting pathological knowledge generation with computational advancements. Measures to display results back to pathologists and reflections about correctly applied diagnostic criteria are mandatory to maintain fidelity during human–machine interactions. Collaboration and iterative processes, such as human‐in‐the‐loop machine learning are key for continuous improvement, ensuring the pathologist's involvement in evaluating computational results and closing the loop for clinical applicability. The journal is interested particularly in the clinical diagnostic application of computational pathology and invites submissions that address the issues raised in this editorial. 
Digital and computational pathology-based manuscripts are increasingly being submitted to The Journal of Pathology: Clinical Research [1-5]. This underpins the impact of this technology, from basic science to clinically relevant diagnostic application [6-8], which lies within the scope of our journal. Recent advances in image digitization have simplified collaboration between pathologists for research purposes, for example interobserver studies and consensus in diagnostically challenging cases. Picture archiving and communication systems have been shown to be equivalent to conventional microscopy techniques, with only minimal limitations, including micrometer z-stacks, polarization and focus points, among others, that require additional research. After this prerequisite for a digital workflow, the arrival of morphometry-based algorithms from various companies and open-source systems has advanced more precise evaluation of biomarkers. In particular, immunohistochemistry can be analyzed more precisely as counts per area or cell numbers compared with traditional cumbersome and time-consuming enumerations or quick eyeball estimates. However, known interpretive elements like threshold setting, hotspot selection, and segregation of tissue classes have remained or even become more important with these quantitative approaches. Emerging novel histogram-based representations have helped to overcome debated scoring systems like the immunoreactive score or H-score. Lastly, computational and artificial intelligence models have been developed within an interdisciplinary framework of contributing pathologists, data scientists, engineers, and informaticians [6]. The system 'learns deeply' how to obtain the best results for a given training set based on dynamically and iteratively optimized multifactorial zero-one decisions that are obscured to the trainer. Deep learning is now an integral part of the scientific repertoire to study tissues, but is strongly dependent on one factor: the trainer.

The input of pathologists in terms of annotations frequently serves as a starting point for intensified data analysis. In reflecting on this important but frequently underestimated task during the evaluation of submitted articles, three important elements can be highlighted: granularity, time issues, and responsibility for the interpretation of computational results.
Granularity refers to the level of detail at which annotations are made by a pathologist. In a frequently cited review article in this journal from the PathLake Group, four layers of annotation are mentioned [9], namely case level, regional annotations, cellular compositions, and attributions from elsewhere (e.g. synoptic report elements, molecular pathology, etc.). Frequently underappreciated, whole slide image (WSI) selection already represents a pathologist annotation, as not all information about a case is present on a single slide. Additionally, subcellular information such as nucleoli, mucin depletion, mitoses, nuclear grooves, and pleomorphism is of interest. These layers of granularity parallel the general pathology knowledge of searching for features at certain magnifications under the microscope. It is important to bring this knowledge of diagnostic criteria together with the applied computational technique; as a rule of thumb, the tile size should match the scale of the diagnostic criteria: architectural criteria might not be resolved at high magnification (small tile size) and vice versa (a simple calculation is sketched at the end of this section). Of note, attribution of diagnostic classes to noninformative tiles, such as ductal carcinoma in situ attribution without technically visible myoepithelium, should prompt our scientific curiosity. Technical solutions like random tile cropping, tile shifting, and analysis at different scanning resolutions in parallel could optimize the correct tile size creation.

Looking at time as a factor, time constraints due to the shortage of pathologists come to the fore. Several techniques have been developed to reduce input time from pathologists during annotation tasks [10]. Representative tissue classes determined from a few cells can be upscaled with locally run algorithms to whole slides based on morphological similarities. In computational methods, pathologists are asked to sort tile stocks into one class and decide upon resulting borderline cases within a few iterations. The aggregation of such tile stocks could be provided by a convolutional network agnostic to any attribution. This so-called unsupervised method allows calculations to run on the images hypothesis-free [11]. However, these methods frequently take advantage of former hidden annotations by pathologists, including slide or tissue microarray core selection, molecular data from selected tissue areas, or the prior histopathology report containing prognostic information related to the image. Given these hidden annotations, the term unsupervised could even be questioned; the term weakly supervised should be considered to acknowledge the human input. Finally, in telling the items apart, the expert pathologist performs diagnostics in its original Greek sense. Hence, the role of the pathologist might be dual: providing a gold standard with annotations, but also interpreting the output of the computational methods. Full annotations at the cellular level are the most valuable reference but are hard to obtain. The dedicated time for slide annotations seems to correlate inversely with the experience of the annotator in research practice; this task may be performed by residents or students rather than by field experts in pathology or board-certified pathologists.
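To make the tile-size rule of thumb mentioned above concrete, the physical field of view of a tile follows directly from its edge length in pixels and the scanning resolution. The micron-per-pixel values and tile sizes below are illustrative assumptions, not recommendations.

```python
# Field of view of a square tile at a given scanning resolution.
# The micron-per-pixel values and tile sizes below are illustrative only.

def tile_field_of_view_um(tile_px: int, microns_per_pixel: float) -> float:
    """Edge length of the tissue area covered by one tile, in micrometres."""
    return tile_px * microns_per_pixel

for mpp in (0.25, 0.5, 1.0):            # roughly 40x, 20x, 10x equivalent scans
    for tile_px in (256, 512, 1024):
        fov = tile_field_of_view_um(tile_px, mpp)
        print(f"{tile_px:>4} px tile at {mpp} um/px -> {fov:7.1f} um per edge")
# A nucleolus (a few um) calls for high resolution, whereas an architectural
# pattern spanning hundreds of um needs a correspondingly large field of view.
```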
Regarding responsibility for the interpretation of computational results, the impact of pathologists seems to be reduced in the literature, as technological aspects are emphasized more strongly. A histopathological baseline is postulated, which is, as pathologists know, always preliminary, and observer- and task-dependent. Marking the tumor is performed differently to obtain molecular pathology data, quantify epithelial biomarkers, or study stromal elements. The so-called ground truth can be set with descending reproducibility and stability from normal human anatomy to pathology, disease prognostication, and ultimately therapeutic prediction. Of note, conversely, oncological interest increases with these steps, but is more likely to vary over time. Therefore, conventional pathology research is still needed to improve diagnostic criteria, test robustness with interobserver reproducibility studies, and seek consensus. The rapid evolution of pathological knowledge is evident from the rhythm of World Health Organisation (WHO) classifications, which currently have a turnover time of approximately 6 years [12]. Emerging entities, novel ancillary tools, molecular definitions, and shifts in biomarker thresholds are only some elements that continuously refine the pathological gold standard, leading to so-called diagnostic shifts in healthcare. Access to unprocessed image data according to FAIR (findability, accessibility, interoperability and reusability) principles [13,14] is key and will ensure future adaptations of algorithms in terms of data monitoring. This demand is strongly supported by the SPIRIT-Path guidelines, which outline the contribution of pathology to clinical trials [15-17].

Strengthening the connection between pathological knowledge generation and the computational field is crucial. In the literature, the lack of communication between pathologists and other scientists can be seen in various examples, including relevant diagnostic classes being missed in the computational training setup (known as hidden stratification), the assembly of large case series that fail to represent emerging entity subtypes and thereby reinforce outdated practices, and computational methods not being tested for relevant mimickers and differential diagnoses, to mention a few [18,19]. Pathologists should commit themselves to the AI-driven transition and regain their central role as tissue experts and evaluators of digital and computational methods. Conversely, pathologists should accept that their descriptive and sometimes vague language forms the basis for future machine reading and will undergo embedding in ontologies and structured reproducible elements, such as the International Collaboration on Cancer Reporting [20].

In their current form, computational methods are reproductive in nature, though at an intended higher precision. This applies to well-known contests such as the CAMELYON trial for lymph node metastasis in breast cancer [21] or the PANDA challenge for Gleason grading of prostate cancer [22]. However, progress beyond the pathological status quo will only be achieved if pathological and computational research seeks to surpass the boundaries of current knowledge. The most suitable approach for pathology is human-in-the-loop machine learning, which is the opposite of streamlining applications away from pathologists once annotations are made.
Closing the loop to clinical applicability brings in the pathologist once again during the evaluation of computational methods. The terms explainability, causability, and interoperability were recently discussed in this journal in an invited review [23]. There are many innovative ways to display computational results, for example UMAP (uniform manifold approximation and projection), vector graphics, and cell distribution heatmaps, which should be discussed with pathologists and interpreted with their input. Additionally, interactions with the computational output could initiate an additional iteration to compare deep features with corrected handcrafted features in an effort to test their reliability and robustness. Furthermore, the interdisciplinary research team may generate mathematical representations of histopathological features (e.g. morphometric parameters) that can be compared quantitatively with the computational result, unlocking the black box of AI and allowing conventional criteria to be weighed and sorted in a better way. We argue that loops and iterations are key for continuous improvement practices and strongly believe that ensuring that the pathologist remains in the loop makes the difference.

Call for papers

The involvement of pathologists is central to the evaluation of the clinical utility of digital and computational pathology methods. The Journal of Pathology: Clinical Research invites the submission of original articles using digital and computational approaches for relevant clinical applications. We are interested in papers that focus on clinical application in any field of pathology, but would give priority to submissions that:

1. Precisely describe the testing and validation cohorts used according to REMARK guidelines [24].
2. Critically evaluate and describe annotations made by pathologists, from case and block selection to cell-level annotations.
3. Match WHO entity-specific essential and desirable criteria with the applied computational approach.
4. Delineate the limitations of the computational algorithm with the inclusion of known differential diagnoses, mimickers and pitfalls as a separate challenging case series.
5. Close the loop with a final evaluation of computational versus conventional pathological features.

We look forward to receiving your manuscript through the journal submission system: https://mc.manuscriptcentral.com/jpathclinres.
2,486.4
2024-03-01T00:00:00.000
[ "Computer Science", "Medicine" ]
Effects of salinity on the treatment of synthetic petroleum-industry wastewater in pilot vertical flow constructed wetlands under simulated hot arid climatic conditions Petroleum-industry wastewater (PI-WW) is a potential source of water that can be reused in areas suffering from water stress. This water contains various fractions that need to be removed before reuse, such as light hydrocarbons, heavy metals and conditioning chemicals. Constructed wetlands (CWs) can remove these fractions, but the range of PI-WW salinities that can be treated in CWs and the influence of an increasing salinity on the CW removal efficiency for abovementioned fractions is unknown. Therefore, the impact of an increasing salinity on the removal of conditioning chemicals benzotriazole, aromatic hydrocarbon benzoic acid, and heavy metal zinc in lab-scale unplanted and Phragmites australis and Typha latifolia planted vertical-flow CWs was tested in the present study. P. australis was less sensitive than T. latifolia to increasing salinities and survived with a NaCl concentration of 12 g/L. The decay of T. latifolia was accompanied by a decrease in the removal efficiency for benzotriazole and benzoic acid, indicating that living vegetation enhanced the removal of these chemicals. Increased salinities resulted in the leaching of zinc from the planted CWs, probably as a result of active plant defence mechanisms against salt shocks that solubilized zinc. Plant growth also resulted in substantial evapotranspiration, leading to an increased salinity of the CW treated effluent. A too high salinity limits the reuse of the CW treated water. Therefore, CW treatment should be followed by desalination technologies to obtain salinities suitable for reuse. In this technology train, CWs enhance the efficiency of physicochemical desalination technologies by removing organics that induce membrane fouling. Hence, P. australis planted CWs are a suitable option for the treatment of water with a salinity below 12 g/L before further treatment or direct reuse in water scarce areas worldwide, where CWs may also boost the local biodiversity. Graphical abstract Electronic supplementary material The online version of this article (10.1007/s11356-020-10584-8) contains supplementary material, which is available to authorized users. Introduction Oil extraction generally results in the production of extensive volumes of petroleum industry wastewater (PI-WW) that contains heavy oil fractions and light hydrocarbons (Fakhru'l-Razi et al. 2009). Heavy oil fractions are commonly separated from PI-WW aboveground in oil/water separators (Murray-Gulde et al. 2003). The remaining water contains light hydrocarbons, heavy metals, salts, and conditioning chemicals, such as corrosion inhibitors and antiscalants (Fakhru'l-Razi et al. 2009;Rehman et al. 2018;Afzal et al. 2019), and its specific composition depends on the characteristics of the oil reservoir and the conditioning chemicals that were used in the oil extraction process (Ozgun et al. 2013). After the oil/water separation, there are different options for further management of PI-WW: injection back into the subsurface formation where the oil was extracted from, discharge aboveground, or reuse in the oil extraction process (Fakhru'l-Razi et al. 2009). However, since oil extraction often occurs in water-scarce locations, such as the Arabian peninsula, it is beneficial to treat and reuse the PI-WW instead of disposing of it to enhance the local water supply. 
In areas with sufficient water supply, the treatment and reuse, instead of disposal, of PI-WW protects the environmental quality of surface water and prevents the contamination of groundwater that is potentially used as a drinking water source. Various physical, chemical, and biological treatment technologies for PI-WW that could allow its reuse have been studied in recent years (Fakhru'l-Razi et al. 2009; Sosa-Fernandez et al. 2018, 2019; Sudmalis et al. 2018). Amongst these, nature-based treatment in the form of a constructed wetland (CW) is an attractive treatment option, because it has low energy and maintenance requirements, it is regarded as aesthetically enriching the landscape, and it can function as an oasis in desert landscapes where it can significantly improve the local biodiversity (Stefanakis 2018, 2019, 2020a). CWs are man-made wetland systems in which various natural processes remove contaminants from a water stream, such as adsorption, biodegradation, photodegradation, and plant uptake (Garcia et al. 2010; Wagner et al. 2018; Hashmat et al. 2018). These simultaneously working removal mechanisms allow the treatment of water streams with a wide variety of contaminants. In fact, one of the largest CWs in the world was built in the desert of Oman for the treatment of PI-WW. The treatment of PI-WW in CWs results in the removal of heavy oil fractions and grease compounds (Abed et al. 2014; Alley et al. 2013; Horner et al. 2012; Ji et al. 2002, 2007; Stefanakis 2018a, b), light hydrocarbons (Stefanakis 2020b), heavy metals (Alley et al. 2013; Horner et al. 2012; Ji et al. 2002, 2007; Kanagy et al. 2008), organics (Ji et al. 2002, 2007; Stefanakis 2018, 2019; Rehman et al. 2019) and total nitrogen (Ji et al. 2002, 2007; Stefanakis 2018, 2019) from PI-WW with a low salinity. The removal of conditioning chemicals from PI-WW in CWs has not been assessed yet. In addition, it is not known what the effect of increasing salinities of PI-WW on the treatment efficiency of the CW is. The salinity of PI-WWs ranges worldwide from 1 to 300 g/L and is the result of both subsurface geochemical conditions and the injection of oil recovery fluids (Rosenblum et al. 2017; Al-Ghouti et al. 2019). Although the PI-WW does show temporal changes in composition and salinity (Rosenblum et al. 2017), the dominant ions in PI-WW are generally sodium and chloride (Fakhru'l-Razi et al. 2009). The salinity determines the viability of plants in the CW (Klomjek and Nitisoravut 2005) and influences removal mechanisms, such as biodegradation and adsorption (Wagner et al. 2018). Furthermore, the salinity of the CW effluent will determine the options for reuse of the treated PI-WW. In the present study, synthetic PI-WW with light hydrocarbons and increasing salinities was treated in lab-scale vertical-flow CWs planted with Phragmites australis or Typha latifolia under hot arid conditions to assess the effect of an increasing salinity on the viability of P. australis and T. latifolia. In addition, we determined the impact of an increasing salinity on the removal of a heavy metal (zinc), a conditioning chemical (benzotriazole) and an aromatic hydrocarbon (benzoic acid) from synthetic PI-WW, and the contribution of the plant species to the removal of these pollutants.

Lab-scale constructed wetlands

Six vertical-flow CWs based on the CWs described by Saha et al. (2020) were built in glass containers of 25 × 25 × 25 cm with dark sides to prevent algae growth.
The CW substrate consisted of 3 layers. The bottom layer contained 4 cm of 8-16 mm gravel (GAMMA, Wageningen, The Netherlands) for optimal water drainage. This was covered with a 16-cm layer of a mixture of 75% sand with a diameter of > 2 mm (GAMMA, Wageningen, The Netherlands) mixed with 25% sediment from a fully operational P. australis planted surface-flow CW to inoculate the substrate. This layer was covered with 2 cm of the 8-16 mm gravel for optimal water distribution. Two CWs were planted with P. australis, two CWs were planted with T. latifolia and two CWs remained unplanted to determine the effect of vegetation on the CW treatment efficiency. The CWs were placed in a Heraeus Vötsch MC 785-KLIMA climate chamber (Weiss Technik, Stuttgart, Germany) that simulated the climatic circumstances at the Arabic peninsula with a temperature of 30°C, relative humidity of 57.2% and a light exposure time of 9.5 h/day. The CWs were fed 4 times per day, 45 min and 0.85 L per feeding cycle, by a Watson Marlow 205U pump (Rotterdam, The Netherlands), which resulted in a theoretical hydraulic retention time of approximately 1 day. The effluent was collected in 500-mL Schott bottles with a continuous overflow system to the sewer. Influent composition The CWs were fed with a mix of synthetic PI-WW with light hydrocarbons and synthetic domestic wastewater (Table 1). A mixture of synthetic PI-WW with synthetic domestic wastewater was used to simulate mixing of these two water streams at oil extraction facilities, which could supply the plants in the CW with necessary nutrients. The composition of the synthetic municipal wastewater was based on the guidelines for synthetic wastewater preparation of the OECD test 303A (OECD 2001). In the synthetic PI-WW, benzoic acid was used as a representative aromatic hydrocarbon. Benzotriazole was added as representative conditioning chemical because of its use as corrosion inhibitor in the oil extraction process. Zinc is one of the more common heavy metals in PI-WW and originates partly from the galvanized steel structures used in oil extraction (Lee and Neff 2011). Sodium chloride was used to adjust the salinity of the synthetic wastewater mixture, since these ions are the most common ions in PI-WW (Fakhru'l-Razi et al. 2009;Jimenez et al. 2018;Al-Ghouti et al. 2019). Experimental timeline During the first 10 days, the CWs were fed with only synthetic domestic wastewater to start up the microbial activity in the CWs. After 10 days, the systems were fed with the mixture of synthetic PI-WW and synthetic municipal wastewater with a salinity of 2 g/L. Sampling started 24 days later. The salinity of the synthetic mix was increased from 2 to 4, 8 and 12 g/L every 21 days. Sampling and sample preparation Liquid samples were taken twice per week from the influent and effluent of the CWs. Samples for benzotriazole analysis were filtered over a 0.2-μm polyethersulfone (PES) filter, diluted 100× with ultrapure water and stored in 1.5-mL glass LC vials at − 20°C until analysis. Samples for benzoic acid analysis were filtered over a 0.2-μm PES filter, acidified with 1% (v/v) acetic acid (99%) and stored in 1.5-mL glass LC vials at − 20°C until analysis. Samples for zinc analysis were spiked with 1% (v/v) of the yttrium internal standard solution ("Chemicals" section). Analyses The temperature, pH and electrical conductivity (EC) were measured with a Hach Lange HQ440D multimeter (Tiel, The Netherlands). Benzotriazole was analysed by LC-MS/ MS as described by Wagner et al. (2020a). 
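As an aside on the system design described above, the theoretical hydraulic retention time of approximately 1 day can be checked roughly from the container geometry and the feeding scheme. The effective porosity of the gravel/sand bed is not reported here and is purely an assumption in the sketch below.

```python
# Back-of-the-envelope check of the theoretical hydraulic retention time (HRT)
# from the reported geometry and feeding scheme. The effective bed porosity is
# an assumption, not a measured value.

AREA_M2 = 0.25 * 0.25              # container footprint, 25 x 25 cm
BED_DEPTH_M = 0.04 + 0.16 + 0.02   # gravel + sand/sediment + gravel layers
FLOW_L_PER_DAY = 4 * 0.85          # 4 feeding cycles of 0.85 L per day

for porosity in (0.25, 0.30, 0.35):
    pore_volume_l = AREA_M2 * BED_DEPTH_M * porosity * 1000.0
    hrt_days = pore_volume_l / FLOW_L_PER_DAY
    print(f"porosity {porosity:.2f}: pore volume {pore_volume_l:.1f} L, "
          f"theoretical HRT {hrt_days:.1f} d")
```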
Benzoic acid was determined by HPLC-DAD as described by Wagner et al. (2020b). Zinc was measured based on a certified ICP method of the Belgian federal agency of food chain security (2017/I-MET-209/LAB/FLVVG) using a PerkinElmer Avio 500 ICP-OES spectrometer (Groningen, The Netherlands) in axial and radial plasma view mode. The high-energy (f/6.7) echelle-based ICP-OES utilized two SCD detectors covering the spectral range from 163 to 782 nm. The injector tube was made of alumina and had an internal diameter of 2 mm. Argon (99.998%, Linde Gas, The Netherlands) was used as the plasma (10 L/min), auxiliary (0.2 L/min) and nebuliser gas (0.6 L/min). Nitrogen (99.996%, Linde Gas, The Netherlands) was used as purging gas (2 L/min) in the optical system of the spectrometer. The ICP-OES was operated with a plasma power of 1300 W, a signal integration time of 5 s, and peak processing in peak area mode, using three points/peak for integration. The analytical wavelength of Zn(II) was 206.200 nm. The emission line of Y(II) at 371.027 nm was used as an internal standard.

Results and discussion

The influence of salinity on plant growth

Typha latifolia and Phragmites australis both grew well at a synthetic PI-WW salinity of 2 g/L and a temperature of 30°C (Fig. S1), demonstrating that these plant species are suitable choices for CWs treating mildly saline water in hot, arid climates. A picture of the planted and unplanted CWs during the complete experimental period is provided in the supplementary info (SI) (Fig. S1). P. australis survived better with increasing salinities than T. latifolia, as shown by visual observations of the plant colour, plant height and plant leaf density. At a salinity of 4 g/L, the T. latifolia started to discolour, while no visual effect of the increased salinity on the P. australis was observed (Fig. S1). At a salinity of 8 g/L, the T. latifolia completely discoloured and died, while the growth of the P. australis seemed to be impeded and its leaves became a lighter tone of green (Fig. S1). At 12 g/L, P. australis leaves discoloured and the plant height decreased (Fig. S1). The impact of the salinity on P. australis has already been studied extensively, and P. australis is known to tolerate salinity levels below 15 g/L, with a higher tolerance for rhizome-grown plants, as used in the present study, than for seed-grown plants (Lissner and Schierup 1997). P. australis is relatively tolerant to NaCl stress because it restricts the transport of Na+ and Cl− ions to the leaves, it intrinsically enhances the water use efficiency and it produces proline to adjust the intracellular osmotic pressure (Pagter et al. 2009). Too high a salt concentration reduces the biomass of P. australis as a result of osmotic and ion-specific effects (Pagter et al. 2009), as was observed at a NaCl concentration of 12 g/L in the present study. The impact of the salinity on plant growth is less extensively studied for T. latifolia compared to P. australis. T. latifolia is reported to grow well in CWs treating shrimp farming wastewater with a salinity of 3-8 g/L, where the T. latifolia overgrew 9 other planted species in the second year of growth (Tilley et al. 2002). In addition, T. latifolia grew well in CWs treating fish farming wastewater with a salinity of 24 g/L (Jesus et al. 2014), which contradicts the complete withering of the T. latifolia at a NaCl concentration of 8 g/L in the present study. The difference in salinity tolerance of T. latifolia in the present study and the study of Jesus et al.
(2014) can be the result of a difference in origin of the plants, a difference in the climatic circumstances in which the plants were grown, or a different effect of the type of wastewater. Hootsman and Wiegman (1998) observed that T. latifolia died as result of treatment with 18 g/L sea salt that mainly contains NaCl, whereas P. australis was able to survive. For hot climatic conditions, when living vegetation is required in the CW, P. australis proves to be a better choice as vegetation in CWs treating saline water. The influence of constructed wetland treatment on the electrical conductivity The presence of P. australis and T. latifolia substantially impacted the electrical conductivity (EC) of the effluent of the CWs, as long as the plants were alive (Fig. 1). A NaCl influent concentration of 2 g/L resulted in an EC of 3 mS/cm, while a NaCl influent concentration of 12 g/L resulted in an EC of~17 mS/cm (Fig. 1), suggesting that NaCl is the main contributing factor to the EC. The EC of the effluent of the unplanted CWs was slightly higher than the EC of the influent (Fig. 1), most likely due to evaporation of the water as a result of the high temperature. The presence of P. australis and T. latifolia substantially increased the EC of the treated effluent as a result of plant transpiration (Fig. 1). Combined evaporation and plant transpiration, evapotranspiration, resulted in an effluent EC that was substantially higher than the influent EC during low salt concentrations of 2 g/L and 4 g/L for both plant species (Fig. 1). The EC of the effluent increased because plants protect their plant tissue against toxic concentrations of mainly Na + and Cl − ions by excluding these when they take up water with their root system (Munns and Tester 2008), leaving these ions in the water that exits the CW as effluent. The visual observations of the plant vitality ("The influence of salinity on plant growth" section) are corroborated by the trends of the EC of the effluent of the planted CWs. With a NaCl concentration of 2 g/L, the EC of the effluent of the CWs planted with P. australis and T. latifolia is similar to each other (Fig. 1), revealing a similar evapotranspiration rate. During the addition of 4 g/L NaCl, the EC of the P. australis planted CW effluent was higher than the EC of the T. latifolia planted CW effluent. The difference in EC between the two vegetation types further increased when the salt concentrations were increased to 8 g/L and 12 g/L NaCl, demonstrating a higher evapotranspiration in the P. australis planted CWs. During the increase of the NaCl concentration from 4 to 12 g/L, the EC of the effluent of the T. latifolia planted CWs approached the EC of the effluent of the unplanted CWs, illustrating that plant transpiration stopped. This is in agreement with visual observations of the T. latifolia death ("The influence of salinity on plant growth" section). The higher EC of the P. australis effluent compared to the unplanted CWs at 12 g/L NaCl implies that the plant still actively transpires water and is thus functional, despite the (partial) leaf decolouration ("The influence of salinity on plant growth" section). The impact of an increasing salinity on the removal of benzoic acid Aromatic hydrocarbon benzoic acid was completely removed in planted CWs at a NaCl concentration of 2 g/L, as demonstrated by the low residual concentrations in the effluent of the CWs compared to the influent concentration (Fig. 2). 
Complete benzoic acid removal in planted CWs was observed previously (Zachritz et al. 1996;Wagner et al. 2020c) and was the result of biodegradation (Wagner et al. 2020b). Previous studies also showed that biodegradation was the main removal mechanism in subsurface-flow CWs for other aromatic hydrocarbons, such as phenol (Stefanakis et al. 2016). Benzoic acid biodegradation occurs both in aerobic and anaerobic conditions, and many microorganisms, such as Pseudomonas, can degrade benzoic acid (Ziegler et al. 1987;Parsons et al. 1988;Hall et al. 1999). Pseudomonas was also identified by Hashmat et al. (2018) in their surface-flow CWs treating water contaminated with aromatic hydrocarbons. Benzoic acid did not adsorb to CW substrate (Wagner et al. 2020b). Plant uptake of benzoic acid cannot be ruled out in the present study, but the benzoic acid removal efficiency of the unplanted CWs varied between 70 and 100% at a NaCl concentration of 2 g/L (Fig. 2), which was solely the result of biodegradation. Hence, biodegradation is the main removal mechanism for benzoic acid in the present study and complete benzoic acid removal in the planted CWs is probably because the presence of vegetation results in a higher microbial density and microbial activity (Gagnon et al. 2007;Zhang et al. 2010;Nguyen et al. 2019), since the plant rhizosphere provides nutrients and favourable redox conditions (Gagnon et al. 2012). In the unplanted CWs, the absence of the vegetation probably results in less favourable conditions for the microorganisms, resulting in a decreased benzoic acid removal efficiency after the first 5 days. With an increasing NaCl concentration increasing to 8 g/L, benzoic acid remains almost completely removed in the planted CWs (Fig. 2). Surprisingly, the removal efficiency in the unplanted CW increased over time (Fig. 2). This could be an indication that the microbial biomass in the newly started CWs increased despite the increasing NaCl concentrations, resulting in a higher benzoic acid removal efficiency. An increasing biodegradation efficiency with increased salinities is in contrast with Lin et al. (2008) and Villasenor Camacho et al. (2017), who observed decreasing biodegradation efficiencies in CWs fed with influent with increasing salinities. At a NaCl concentration of 12 g/L, the residual benzoic acid concentration in the effluent of the unplanted CWs stabilized at around 5 mg/L. The benzoic acid is still completely removed in the P. australis planted CWs (Fig. 2). In contrast, at this NaCl concentration, the benzoic acid concentration in the effluent of the T. latifolia planted CWs became similar to the effluent concentration of the unplanted CWs (Fig. 2). We hypothesize that the microbial community in this CW was less well able to cope with the increased NaCl concentration, as a result of the loss of its favourable environment due to the death of the T. latifolia ("The influence of salinity on plant growth" section). In the present study, a cutoff point for benzoic acid biodegradation was not observed for NaCl concentrations up to 12 g/L. Castillo-Carvajal et al. (2014) showed that aromatic hydrocarbons can be biodegraded by halophilic bacteria up to a NaCL concentration of 200 g/L. The impact of an increasing salinity on the removal of benzotriazole The removal efficiency of the unplanted, P. australis planted and T. latifolia planted CWs for conditioning chemical benzotriazole differed substantially at a NaCl concentration of 2 g/L (Fig. 3). 
The lowest benzotriazole removal efficiency at a NaCl concentration of 2 g/L was observed in the unplanted CWs (Fig. 3). The removal of benzotriazole in CWs is the result of simultaneous adsorption to the substrate and aerobic biodegradation, as demonstrated before (Wagner et al. 2020a). The presence of plants in the present study enhanced the benzotriazole removal efficiency, and the CWs planted with P. australis had a higher removal efficiency than the T. latifolia planted CWs (Fig. 3). The uptake and transformation of benzotriazole in the plants was not studied; hence, it is not possible to distinguish between direct enhancement as a result of plant uptake and degradation in the plant tissue and an indirect enhancement as a result of a more favourable environment by microorganisms in the plant rhizosphere. However, the uptake and degradation of benzotriazole by plants in CWs has been described for other plant species than those used in the present study, such as Arabidopsis thaliana and Carex praegracilis (LeFevre et al. 2015;Pritchard et al. 2018). The benzotriazole removal efficiency at a mild salinity of 2 g/L under hot arid circumstances in the P. australis planted verticalflow CWs (Fig. 3) was comparable with the benzotriazole removal efficiency ranging from 63 to 90% in vertical-flow CWs treating domestic wastewater or cooling tower water in less hot climates as observed by others Kahl et al. 2017;Brunsch et al. 2018;Nivala et al. 2019;Brunsch et al. 2020;Wagner et al. 2020a). Increasing NaCl concentrations from 2 to 12 g/L lowered the benzotriazole removal efficiency in the planted and unplanted CWs (Fig. 3). However, despite the high evapotranspiration and corresponding concentration increase of both NaCl and fractions that are not taken up by the plants, there is still a net removal of benzotriazole over the whole experimental period for the planted and unplanted CWs (Fig. 3). The increasing difference between the P. australis planted CWs and T. latifolia planted CWs indicates that the death of the T. latifolia ("The influence of salinity on plant growth" section) is responsible for the large decrease in benzotriazole removal efficiency in the T. latifolia planted CWs. At a salinity of 12 g/L, the removal efficiency of the unplanted and T. latifolia planted CW was similar, indicating the absence of a positive effect of the vegetation on the benzotriazole removal in these CWs. In the P. australis planted CWs, the benzotriazole removal efficiency at a NaCl concentration of 12 g/L was slightly lower than at 2 g/L, but still showed more than 50% removal (Fig. 3). Given the benzotriazole removal efficiency difference between the P. australis and T. latifolia planted CWs, this is the result of biological removal with P. australis, and P. australis is still capable of stimulating the biological benzotriazole removal at a NaCl concentration of 12 g/L, despite the first signs of plant discolouration ("The influence of salinity on plant growth" section). The impact of an increasing salinity on the removal of zinc The removal of zinc is substantially impacted by an increase in the NaCl concentration in the T. latifolia and P. australis planted CWs (Fig. 4). In the unplanted CWs, the effluent concentrations of zinc are not affected by increasing NaCl concentrations, and approximately 80% of the zinc is removed (Fig. 4). At a NaCl concentration of 2 g/L, the P. australis and T. 
latifolia planted CWs have similar effluent zinc concentrations that are higher than those of the unplanted CW (Fig. 4), with substantial deviations in the removal efficiencies of the experimental duplicates of mainly the P. australis planted CWs. Evapotranspiration resulted in a higher concentration of zinc in the effluent of the planted CWs, but the evapotranspiration rate cannot fully explain the difference in zinc removal between the planted and unplanted CWs. In addition, the deviation in effluent zinc concentration of the P. australis duplicates is not the result of differences in evapotranspiration, since the deviation of the effluent EC in the planted CWs is smaller than the deviation in effluent zinc concentrations (Figs. 1 and 4). The differences in removal efficiency of the planted CWs could be the result of differences in root system development. The removal of zinc in CWs is the result of precipitation in the rhizosphere and on the root surface of wetland plants (Otte et al. 1995) and uptake by the plants (Sheoran and Sheoran 2006). In addition, zinc adsorbs to the CW substrate as a result of cation exchange or chemisorption, and to clay and organic matter by electrostatic interaction (Sheoran and Sheoran 2006). The lower removal in the planted CWs is probably governed by a rhizospheric decrease of the pH and higher redox conditions caused by oxygen influx through the plants in the planted CWs. The differences in development of the rhizosphere might explain the differences between experimental duplicates. The rhizosphere is able to decrease the pH within 1 cm of the root system by the release of acidic root exudates, and this decreased pH causes changes in metal speciation and solubility (Weis and Weis 2004), and competition for adsorption on the CW matrix between protons and metal cations. Moreover, oxygen influx through the plant root system might mitigate the establishment of sulphate-reducing conditions in planted systems, which prevents zinc-sulphide precipitation and keeps the Zn ions in solution. Wright and Otte (1999) showed that a decreased pH in the rhizosphere increased the concentrations of dissolved zinc near the roots of T. latifolia, and the transport of this dissolved zinc to the effluent would result in a lower zinc removal efficiency in the planted CWs. This suggests that more detailed probing of pH, redox and dissolved Zn gradients in the root zones of the CW could shed more light on the actual mechanisms controlling zinc adsorption and desorption in CWs, which the experimental setup of the present study did not allow for. An increase in the NaCl concentration from 2 to 4 g/L resulted in a sudden increase in the effluent zinc concentration of the T. latifolia and P. australis planted CWs to a concentration above the influent concentration of approximately 5 mg/L (Fig. 4). After this increase, the effluent concentration stabilized around 4 mg/L and never reached the effluent concentration observed at a NaCl concentration of 2 g/L. A similar increase in effluent zinc concentration was observed with an increase of the NaCl concentration from 4 to 8 g/L. Such increased effluent zinc concentrations were not observed in the unplanted CWs, indicating that plant-mediated processes are responsible for the observed increased zinc concentrations.
The release of zinc from the planted CWs after an increase in the salinity could be the result of the complexation of zinc with root exudates that are released to protect the plant against the increased NaCl concentrations and that increase the zinc mobility (Lutts and Lefèvre 2015). The small increase in the effluent zinc concentration of the T. latifolia planted CWs with the increase in NaCl concentration from 8 to 12 g/L, during which the T. latifolia died ("The influence of salinity on plant growth" section), provides additional proof that the release of Zn upon salinity increases at lower salinities is the result of an active plant response mechanism to increased salinities.

Reuse of petroleum-industry wastewater in hot arid climates

It can be concluded from the present study that the treatment of synthetic mildly saline PI-WW with a NaCl concentration up to 12 g/L in CWs under simulated hot arid climatic conditions results in the removal of the conditioning chemical benzotriazole and the aromatic hydrocarbon benzoic acid from the synthetic PI-WW. However, the present study also shows that increasing salt concentrations of the CW influent negatively affect the biodegradation of benzotriazole and benzoic acid. Increasing salt concentrations also affect the removal of zinc from the synthetic PI-WW. Zinc is steadily removed from the synthetic saline PI-WW as long as the salinity of the influent is stable, but fluctuating salt concentrations result in spikes of the effluent zinc concentration as a result of active plant defence mechanisms. Moreover, evaporation and evapotranspiration in the hot and arid conditions applied in the present study result in an increased EC of the CW treated water. The increased EC of the CW effluent limits its further reuse options. Al-Ghouti et al. (2019) identified various reuse options for PI-WW: livestock watering, habitat and wildlife watering, fire control, dust control, reuse in the oilfield, cooling water during power generation and irrigation in agriculture. A distinction is made between irrigation of food crops and irrigation of non-food crops that can be used, for instance, as bio-based fuels or construction material. For non-food crops, the legislative water quality requirements are generally lower (Al-Ghouti et al. 2019). Plant growth can be substantially hindered by too high salt concentrations in the treated PI-WW (Pica et al. 2017). The use of treated saline PI-WW can not only negatively affect the growth of plants directly but can also result in an increased soil salinity and soil sodicity (Echchelh et al. 2018). These processes can irreversibly change the soil structure and significantly lower the soil fertility (Echchelh et al. 2018). Therefore, various countries have guidelines in place with threshold values for the EC, sodium adsorption ratio (SAR) and heavy metal concentrations with which irrigation water has to comply to prevent damage to soil and crops (do Monte 2007; Jeong et al. 2016). An increased salinity as a result of CW treatment not only limits the applicability of PI-WW for irrigation in many countries, but also its use in industrial processes, such as its use as cooling tower make-up water (Wagner et al. 2018). To make the treated PI-WW suitable for either irrigation or use as make-up water, CW treatment can be followed by desalination technologies to obtain appropriate salinities for reuse. Desalination of PI-WW has been studied extensively (Al-Ghouti et al. 2019; Fakhru'l-Razi et al. 2009; Shaffer et al.
2013), and various technologies are capable to lower the salinity of PI-WW to a level suitable for use as irrigation water or in cooling towers. For instance, Mallants et al. (2017) showed that reverse osmosis of PI-WW results in a water source that meets the EC requirements for use as irrigation water. CWs can be applied as a pre-treatment before membrane-based desalination of PI-WW to remove fractions that shorten the lifespan of desalination membranes, such as suspended solids and conditioning chemicals (Wagner et al. 2020c). In conclusion, P. australis planted CWs are a suitable option for the treatment of water with a salinity below 12 g/L, and the CW effluent can be reused directly or further desalinated to increase its reuse options. CW treatment and water reuse would be beneficial not only in water-scarce areas but also in areas with stringent PI-WW discharge regulations. Such CWs will not only protect the environment but can also boost the local biodiversity. Practical implementation of CWs for the treatment of high-volume industrial wastewater, such as PI-WW, is described by Stefanakis (2019). Economic Affairs, and co-financed by the Netherlands Ministry of Infrastructure and Environment and partners of the Dutch Water Nexus consortium (Project nr. STW 14302 Water Nexus 3). Compliance with ethical standards Competing interests The authors declare that they have no competing interests. Ethics approval and consent to participate Not applicable. Consent for publication Not applicable Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
7,508.2
2020-09-01T00:00:00.000
[ "Environmental Science", "Engineering" ]
57Fe Mössbauer spectra from fluorinated phases of Fe0.50M0.50(M = Co, Mg)Sb2O4

Fluorinated phases formed by reaction of Fe0.5Co0.5Sb2O4 and Fe0.5Mg0.5Sb2O4 with gaseous fluorine have been examined by 57Fe Mössbauer spectroscopy between 298 and 5 K. The degree of oxidation of Fe2+ to Fe3+ has been used to quantify the amount of fluorine incorporated within the channels of the schafarzikite-related structure and enable the evaluation of the compositions as Fe0.5Co0.5Sb2O4F0.41 and Fe0.5Mg0.5Sb2O4F0.31. The multiplicity of components observed in the spectra recorded in the paramagnetic regime can be related to the number of near neighbour fluoride ions which lie in the channels at the same value of the crystal z-coordinate as the iron ions. Comparison of the magnetically ordered spectra recorded at lower temperatures from Co0.5Fe0.5Sb2O4F0.41 with those recorded previously from FeSb2O4 indicates that the insertion of fluoride ions into the channels of the structure does not affect the angle between the EFG and the magnetic hyperfine field.

Introduction

The insertion of fluorine into two-dimensional structures induces structural changes and electronic properties resulting from the oxidation of metal ions in the host material. The conversion of semiconducting La2CuO4 to superconducting La2CuO4Fx [1] illustrates this type of reaction, and other examples are documented in a recent review [2]. As a part of our ongoing interest in the fluorination of inorganic structures we have recently reported on the accommodation of fluorine within the narrow one-dimensional channels of phases related to the mineral schafarzikite of composition FeSb2O4 [3]. The compound FeSb2O4 is isostructural with the tetragonal form of Pb3O4 [4,5] (Fig. 1) and consists of rutile-related chains of FeO6 octahedra along the c-axis linked by trigonal pyramidal Sb3+ cations which, being bound to three oxygen anions, possess a lone pair of electrons which can be regarded as a fourth ligand. The Fe-Fe distance within the chains (2.96 Å) is shorter than the nearest Fe-Fe distance within the layers (6.07 Å) and is consistent with the one-dimensional character of FeSb2O4. The material undergoes an antiferromagnetic transition around 45 K [6,7]. The 57Fe Mössbauer spectrum of FeSb2O4 at ca. 4.2 K is unusual, being the result of combined magnetic hyperfine and electric quadrupole interactions of comparable strength [8-10] and, together with data recorded above TN [11], has been used to determine the order of the t2g orbital energy levels. The literature also documents some structurally related compounds of composition MSb2O4 (M = Mn2+, Co2+, Ni2+, Cu2+, Zn2+, Mg2+) [12-17]. We have also prepared new phases involving substitution on the M site. The magnetic properties deduced from neutron diffraction, magnetic susceptibility and Mössbauer spectroscopy measurements in the series of composition Fe1−xCoxSb2O4 suggested the existence of some short range correlations within the chains [18,19], whilst in materials of formulation Fe1−xMgxSb2O4 the decrease in magnetic ordering temperature with increasing concentrations of magnesium was associated with non-magnetic Mg2+ ions weakening the magnetic interactions between Fe2+ ions [19,20].

Fig. 1 The structure of FeSb2O4: FeO6 octahedra are shaded, with Fe2+ ions located within the octahedra; O2− ions are shown as black spheres and Sb3+ ions are shown as white spheres.
We have also demonstrated the capacity of these materials to accommodate additional oxygen within their structures [21]. Our previous work [3] reported on the reaction of Mg 0.50 Fe 0.50 Sb 2 O 4 and Co 0.50 Fe 0.50 Sb 2 O 4 with fluorine gas at low temperatures to give topotactic insertion of fluorine into the channels which are an inherent feature of the structure. Neutron powder diffraction and solid state NMR studies showed that the interstitial fluoride ions were bound to antimony within the channel walls to form Sb − F − Sb bridges. 57 Fe Mössbauer spectra recorded at 300 K showed that oxidation of Fe 2+ to Fe 3+ was primarily responsible for balancing the increased negative charge associated with the presence of the fluoride ions within the channels. In this work we report on the 57 Fe Mössbauer spectra recorded at low temperatures from new samples of the fluorinated phases of Mg 0.50 Fe 0.50 Sb 2 O 4 and Co 0.50 Fe 0.50 Sb 2 O 4 with the schafarzikite structure and their interpretation in terms of the number of near F − ions which lie in the channels at the same value of the crystal z-coordinate as the iron ions and the magnetic properties of these materials. Experimental Materials of composition Mg 0.50 Fe 0.50 Sb 2 O 4 and Co 0.50 Fe 0.50 Sb 2 O 4 were prepared by heating appropriate quantities of the dried metal oxides in evacuated sealed quartz tubes as previously described [18,20]. Fluorinated samples were obtained by heating the parent oxides in flowing fluorine gas (10% in nitrogen) at 230C and purging with gaseous nitrogen or argon [3]. The structural characterisation and analysis of the samples of composition Fe 0.5 Co 0.5 Sb 2 O 4 F 0.49 and Fe 0.5 Mg 0.5 Sb 2 O 4 F 0.31 has been reported previously [3]. In this work new samples were examined. 57 Fe Mössbauer spectra were recorded in a helium gas-flow cryostat in constant acceleration mode using a ca. 25 mCi 57 Co source. All spectra were computer fitted and all chemical isomer shift data are quoted relative to metallic iron at room temperature. Results and discussion The 57 Fe Mössbauer spectra recorded at 298 K, 80 K, 45 K, 20 K and 5 K are shown in Fig. 2 and the fitting parameters are listed in Table 1. The spectra are different from those reported previously [3] which is not unexpected given that we have already noted [3] that differences in fluorine content probably relates to differences in particle size of starting materials and the fluorination conditions not being identical. Identification of components in spectra recorded at 298 K and 80 K The spectrum recorded at 298 K showed a component with δ = 1.06 mms −1 which can readily be assigned to Fe 2+ and components with δ = 0.41 mms −1 and δ = 0.37 mms −1 characteristic of Fe 3+ (the partial superposition of the two components precludes their observation in Fig. 2 The fitting parameters listed in Table 1 show that in the spectrum recorded at 298 K the two Fe 3+ components have slightly different values of quadrupole splitting. In the spectrum recorded at 80 K there are two Fe 2+ components with slightly different values of quadrupole splitting. We have investigated whether this behaviour could be linked to different near neighbour fluoride configurations around the iron ions. In the ab -planes containing the iron-and fluoride-ions the structure [3] shows that there are four possible sites for an inserted fluoride ion around each iron ion. 
A simulation incorporating random occupation by iron and cobalt of the cation sites and random occupation of surrounding fluoride sites (subject to their relative abundance, x) indicated that the relative percentages of an iron ion having 0, 1, 2 or 3 near-neighbour fluoride ions were 9%, 42%, 42% and 7%, respectively. This provides some indication that the observed difference in splitting in the doublets observed in spectra recorded in the paramagnetic regime corresponds to the main probabilities of having 1 or 2 fluoride neighbours. Spectra recorded at 45 K, 20 K and 5 K The spectra recorded below 80 K all show the superposition of magnetic sextet components. This is consistent with a magnetic ordering temperature T N = 75 K obtained from magnetisation results recorded previously from the compound Fe 0.5 Co 0.5 Sb 2 O 4 F 0.49 [3]. Despite their complexity, each spectrum is clearly dominated by a sextet of large spectral area with isomer shift characteristic of Fe 3+ . We associate these sextet components with the low-temperature versions of the large-area Fe 3+ doublet components observed in the spectra recorded at 298 K and 80 K. In these sextet spectra the quadrupole shift Δ is evaluated from 2Δ = (splitting of lines 5 and 6) − (splitting of lines 1 and 2) of the sextet. The angle Θ between the principal axis of the electric field gradient (EFG) and the direction of the magnetic hyperfine field B hf can be evaluated from the values of Δ and the quadrupole splitting e²qQ/2 observed in the 80 K spectrum using the relation Δ = (e²qQ/2) · ½(3cos²Θ − 1). The values of Θ from the spectra at 45 K, 20 K and 5 K are evaluated as 69°, 67° and 64°, respectively. These values are effectively identical to those measured in the parent material FeSb 2 O 4 [10] and, although there are differences (in FeSb 2 O 4 the iron is Fe 2+ and the magnetic structure is A-type instead of the present C-type), it seems that the insertion of fluoride ions into the channels of the structure does not affect the angle between the EFG and the magnetic hyperfine field. Given the complexity of the Fe 2+ spectral components we have not analysed these to a degree which justifies inclusion of parameters in Table 1. Mg 0.5 Fe 0.5 Sb 2 O 4 F x The 57 Fe Mössbauer spectra recorded at 298 K, 80 K, 45 K, 20 K and 5 K are shown in Fig. 3 and the fitting parameters are listed in Table 2. Identification of components The spectra recorded at 80 K and 45 K are well fitted with a single Fe 3+ quadrupole split component and a single Fe 2+ quadrupole split component. The spectrum recorded at 298 K shows, in addition to the Fe 3+ and Fe 2+ quadrupole split components, a component with isomer shift δ = 0.71 mms −1 which, as in Co 0.5 Fe 0.5 Sb 2 O 4 F 0.41 , can be associated with electron mobility between Fe 2+ and Fe 3+ ions. At lower temperatures this component is not observed as electrons become localised on the Fe 2+ and Fe 3+ ions. A simulation similar to that described in 3.1.2 above was performed with x = 0.31. This indicated that the percentages of iron ions having 0, 1 and 2 near-neighbour fluoride ions are 25%, 50% and 25%. This distribution appears not to cause observable differences in quadrupole splitting for the Fe 3+ components in the 298 K, 80 K and 45 K spectra or for the Fe 2+ components in the 80 K and 45 K spectra. Spectra at 20 K and 5 K The spectra at 20 K and 5 K both show components representing magnetically ordered and non-ordered states of Fe 2+ and Fe 3+ ions.
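A short computational aside on the angle evaluation used above for the Co-containing sample, before continuing with the Mg-containing phase: assuming the first-order relation Δ = (e²qQ/2) · ½(3cos²Θ − 1) quoted above, Θ follows directly from the fitted quadrupole shift Δ of a sextet and the quadrupole splitting e²qQ/2 taken from the 80 K spectrum. The sketch below is illustrative only; the numerical inputs are not the fitted values of Table 1.

import math

def efg_field_angle(delta, quad_splitting):
    # Angle (degrees) between the EFG principal axis and B_hf, inverted from
    # delta = (e2qQ/2) * 0.5 * (3*cos(theta)**2 - 1).
    cos2 = (2.0 * delta / quad_splitting + 1.0) / 3.0
    if not 0.0 <= cos2 <= 1.0:
        raise ValueError("delta and quadrupole splitting are inconsistent")
    return math.degrees(math.acos(math.sqrt(cos2)))

# Illustrative: a shift-to-splitting ratio of about -0.31 gives an angle near 69 degrees.
print(round(efg_field_angle(-0.31, 1.0), 1))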
However, these components are not sufficiently well defined to enable the evaluation of the angle Θ or further analysis. In particular, the complexity of the Fe 2+ spectral components precluded confident evaluation of parameters for inclusion in Table 2. Conclusion The 57 Fe Mössbauer spectra recorded from fluorinated Mg 0.50 Fe 0.50 Sb 2 O 4 and Co 0.50 Fe 0.50 Sb 2 O 4 show that a major effect of fluoride ion insertion is the oxidation of Fe 2+ ions to Fe 3+ ions in the structure. The 57 Fe Mössbauer spectra enable the degree of fluorination to be evaluated and the compositions to be derived. We suggest that the multiplicity of components in the spectra recorded in the paramagnetic regime may be related to the number of near-neighbour fluoride ions which lie in the channels at the same value of the crystal z-coordinate as the iron ions. Comparison of the magnetically ordered spectra recorded at lower temperatures from Co 0.5 Fe 0.5 Sb 2 O 4 F 0.41 with those recorded previously from FeSb 2 O 4 indicates that the insertion of fluoride into the channels of the structure does not affect the angle between the EFG and the magnetic hyperfine field.
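As an illustration of how the compositions quoted in the conclusion can be derived from the spectra (a hedged sketch, not the authors' fitting procedure): if the extra negative charge of the interstitial fluoride is balanced solely by oxidation of Fe 2+ to Fe 3+ , then for Fe 0.5 M 0.5 Sb 2 O 4 F x the fluoride content is x = 0.5 × (fraction of the iron present as Fe 3+ ). The snippet assumes the Fe 3+ fraction is read off the relative spectral areas and that the recoil-free fractions of Fe 2+ and Fe 3+ are equal; both are simplifications.

def fluoride_content(area_fe3, area_fe2):
    # area_fe3, area_fe2: fitted 57Fe Mossbauer spectral areas of the Fe3+ and Fe2+ components.
    # Assumes charge balance by Fe2+ -> Fe3+ oxidation only and equal recoil-free fractions.
    fe3_fraction = area_fe3 / (area_fe3 + area_fe2)
    return 0.5 * fe3_fraction  # 0.5 Fe per formula unit

# Example: an Fe3+ area fraction of about 0.82 corresponds to x of about 0.41,
# as quoted for the Co-containing sample.
print(round(fluoride_content(82.0, 18.0), 2))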
2,847.6
2019-06-27T00:00:00.000
[ "Materials Science" ]
Coverage, Formulary Restrictions, and Out-of-Pocket Costs for Sodium-Glucose Cotransporter 2 Inhibitors and Glucagon-Like Peptide 1 Receptor Agonists in the Medicare Part D Program The Centers for Medicare & Medicaid Services recently announced a voluntary plan to cap out-ofpocket costs associated with insulin products in participating enhanced Part D plans.1 However, this model will not apply to other high-cost glucose-lowering medications such as sodium-glucose cotransporter 2 (SGLT2) inhibitors and glucagon-like peptide 1 (GLP-1) receptor agonists. These classes are increasingly used as second-line agents for patients with type 2 diabetes despite only a modest effect on glycemic control (approximately 0.8% to 1%) because of mounting evidence of cardiovascular benefits. We sought to examine contemporary coverage and out-of-pocket costs for beneficiaries filling either an SGLT2 inhibitor or GLP-1 receptor agonist prescription in Medicare Part D. Introduction The Centers for Medicare & Medicaid Services recently announced a voluntary plan to cap out-ofpocket costs associated with insulin products in participating enhanced Part D plans. 1 However, this model will not apply to other high-cost glucose-lowering medications such as sodium-glucose cotransporter 2 (SGLT2) inhibitors and glucagon-like peptide 1 (GLP-1) receptor agonists. These classes are increasingly used as second-line agents for patients with type 2 diabetes despite only a modest effect on glycemic control (approximately 0.8% to 1%) because of mounting evidence of cardiovascular benefits. We sought to examine contemporary coverage and out-of-pocket costs for beneficiaries filling either an SGLT2 inhibitor or GLP-1 receptor agonist prescription in Medicare Part D. Methods This cross-sectional study used the 2019 quarter 1 Prescription Drug Plan Formulary, Pharmacy Network, and Pricing Information Files 2-5 to assess drug coverage, formulary restrictions, median retail prices, and annual out-of-pocket costs associated with commonly used SGLT2 inhibitors and GLP-1 receptor agonists (Table) across Part D plans (both stand-alone and Medicare Advantage). Because each drug is available in several different formulations and package sizes, we report estimates for only the most commonly dispensed National Drug Code according to Medicaid State Drug Utilization Data for 2019 and that represented the drug label's recommended maintenance dose. We excluded combination products because they are infrequently used. We calculated the percentage of Part D plans that covered each drug without formulary restrictions, defined as having no prior authorization and no step therapy requirements. We also calculated the median retail price Author affiliations and article information are listed at the end of this article. and interquartile range (IQR) for a 30-day supply for each drug. We reported the percentage of plans that covered each drug at tiers 1 to 3 (ie, as a preferred brand-name drug or better). All estimates were weighted by average plan enrollment during quarter 1 of 2019. Plans with enrollment less than 10 were excluded. 
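A minimal sketch of the enrollment weighting described above, assuming a plan-level table with hypothetical column names (enrollment, covered_unrestricted, retail_price_30day); the real analysis links the separate CMS formulary, pharmacy network, and pricing files, which is not reproduced here.

import numpy as np
import pandas as pd

def weighted_coverage_and_price(plans: pd.DataFrame):
    # Enrollment-weighted percent of plans covering the drug with no prior authorization
    # and no step therapy, plus the enrollment-weighted median 30-day retail price.
    plans = plans[plans["enrollment"] >= 10]  # exclude plans with enrollment below 10
    w = plans["enrollment"].to_numpy(dtype=float)
    pct_unrestricted = 100.0 * np.average(plans["covered_unrestricted"].astype(float), weights=w)
    order = plans["retail_price_30day"].to_numpy().argsort()
    prices, weights = plans["retail_price_30day"].to_numpy()[order], w[order]
    median_price = prices[np.searchsorted(weights.cumsum(), weights.sum() / 2.0)]
    return pct_unrestricted, median_price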
We estimated the median annual out-of-pocket costs for Part D beneficiaries not eligible for low-income subsidies for each drug by using 2 approaches: using the 2019 standard Part D benefit design (ie, 25% of the brand-name drug costs during the initial coverage and coverage gap phases and 5% during catastrophic coverage after a deductible of $415), and using plan-specific benefit information from the formulary files to calculate plan estimated out-of-pocket costs (ie, deductible and co-pay/coinsurance were drawn from observed formulary data). All analyses were performed with SAS version 9.4 and R version 3.6.1. This study follows the Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) reporting guideline for cross-sectional studies and was determined to be exempt by the University of Pittsburgh institutional review board. Coverage and retail prices for SGLT2 inhibitors and GLP-1 receptor agonists were variable for 3992 Part D plans during quarter 1 2019 (Table). Excluding ertugliflozin and lixisenatide (which were covered by only 6% and 3% of plans, respectively), coverage without prior authorization and without step therapy requirements ranged from 53.2% (95% CI, 49.1%-57.4%) for canagliflozin to 95.4% In the approach using a standard Medicare Part D benefit design ("standard benefit design"), out-ofpocket costs were calculated with the 2019 benefits structure, in which beneficiaries (not eligible for low-income subsidies) pay 25% of brand-name drug costs (obtained from the pricing file) during both the initial coverage and the coverage gap phases after a deductible of $415. During catastrophic coverage, beneficiaries pay 5% of the brand-name drug cost (or a minimum of $8.50, whichever is greater). In the approach based on plan-specific benefit data, deductible, co-pay, and coinsurance amounts were drawn from plan-specific observations in the formulary files (excluding plans in which exact co-pay or coinsurance amounts could not be directly calculated). Both algorithms assume that a beneficiary fills only 1 drug prescription per month during 12 months. Data are plotted as median (interquartile range). (Figure). Discussion Coverage for SGLT2 inhibitors and GLP-1 receptor agonists was generally high in 2019 Part D plans, although variable across specific drugs. However, Medicare beneficiaries not eligible for low-income subsidies or Medicaid potentially face very high out-of-pocket costs for SGLT2 inhibitors and GLP-1 receptor agonists. With the exception of less commonly prescribed drugs such as lixisenatide and ertugliflozin, the average beneficiary covered by a Part D plan could spend at least $1000 annually for 1 SGLT2 inhibitor and greater than $1500 for 1 GLP-1 receptor agonist. Although these products are used less frequently than insulin, these annual out-of-pocket costs are on par with-and in some cases exceed-those associated with insulin, 6 and may be unaffordable for the hundreds of thousands of older adults with diabetes and elevated cardiovascular risk who may receive clinical benefit from one of these newer agents. Our analysis has 2 main limitations. First, although we weighted our analyses by plan enrollment, we could not account for diabetes-specific enrollment. Second, out-of-pocket cost calculations assumed that beneficiaries only use 1 drug at a time, when in fact older adults using SGLT2 inhibitors and GLP-1 receptor agonists commonly fill other prescriptions, including metformin and insulin. 
Therefore, our results may overestimate out-of-pocket costs for these drugs among patients who would have reached catastrophic coverage because of simultaneous use of multiple higher-cost drugs. Some beneficiaries may also be insulated from high out-of-pocket costs through patient assistance programs. Although median retail prices do not include proprietary rebates, they do influence the amounts patients pay as coinsurance.
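Returning to the standard benefit design calculation described in the Methods, a minimal sketch is given below. The $415 deductible, the 25% coinsurance in the initial coverage and coverage gap phases, and the 5% (minimum $8.50) catastrophic coinsurance come from the text; the catastrophic threshold (assumed here to be the 2019 true out-of-pocket threshold of $5,100) and the handling of the manufacturer coverage-gap discount are simplifications, so this approximates rather than reproduces the authors' algorithm.

def standard_part_d_oop(monthly_price, months=12, deductible=415.0, coinsurance=0.25,
                        catastrophic_rate=0.05, catastrophic_min=8.50, troop_threshold=5100.0):
    # Approximate annual out-of-pocket cost for one brand-name drug under the 2019
    # standard Part D benefit, assuming one 30-day fill per month (simplified phases).
    oop = 0.0
    for _ in range(months):
        cost = monthly_price
        if oop < deductible:                      # deductible phase: beneficiary pays full cost
            pay = min(cost, deductible - oop)
            oop += pay
            cost -= pay
        if cost <= 0:
            continue
        if oop < troop_threshold:                 # initial coverage / coverage gap (simplified)
            oop += coinsurance * cost
        else:                                     # catastrophic coverage
            oop += max(catastrophic_rate * cost, catastrophic_min)
    return round(oop, 2)

# Illustrative: a retail price near $550 for a 30-day supply gives roughly $1,900-$2,000 per year.
print(standard_part_d_oop(550.0))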
1,428
2020-10-01T00:00:00.000
[ "Medicine", "Economics" ]
Fractional-Order Thermoelastic Wave Assessment in a Two-Dimensional Fiber-Reinforced Anisotropic Material : The present work is aimed at studying the e ff ect of fractional order and thermal relaxation time on an unbounded fiber-reinforced medium. In the context of generalized thermoelasticity theory, the fractional time derivative and the thermal relaxation times are employed to study the thermophysical quantities. The techniques of Fourier and Laplace transformations are used to present the problem exact solutions in the transformed domain by the eigenvalue approach. The inversions of the Fourier-Laplace transforms hold analytical and numerically. The numerical outcomes for the fiber-reinforced material are presented and graphically depicted. A comparison of the results for di ff erent theories under the fractional time derivative is presented. The properties of the fiber-reinforced material with the fractional derivative act to reduce the magnitudes of the variables considered, which can be significant in some practical applications and can be easily considered and accurately evaluated. Introduction Fiber-reinforced composites are used widely in structural engineering. Continuous models are used to illustrate the mechanical properties of these materials. Fibers are supposed to have inherent material properties, rather than some form of inclusions in the model as in Spencer [1]. A fiber-reinforced thermoelastic material is a composite material that exhibits strongly anisotropic elastic behaviors such that elastic parameters have extensions in the fiber directions that are on the order of 50 or more times greater than their parameter in the transverse directions. This composite material is lightweight and has high strength and rigidity at high temperatures. Due to practical and theoretical importance, several problems with wave and vibration in fiber-reinforced mediums have been studied. The idea of introducing a continuous self-reinforced for every point of an elastic solid was presented by Belfied et al. [2]. The models were then applied to the rotations of a tube by Verma and Rana [3]. Verma [4] also studied the magneto-elastic shear wave in self-reinforcing media. Sengupta and Nath [5] studied the problem of surface waves in fiber-reinforced anisotropic elastic materials. Hashin and Rosen [6] discussed the elastic modulus for fiber-reinforced mediums. Singh and Singh [7] discussed the problem of reflections of a plane wave on the free surfaces of a fiber-reinforced elastic plane. Chattopadhyay and Choudhury [8] investigated the problem of propagations, reflections, and transmissions of magnetoelastic shear waves in self-reinforcing media. Mathematical Model The basic equations in the context of generalized fractional thermoelastic theory with one relaxation time for an anisotropic fiber-reinforced medium in the absence of a body force and heat source are given by: σ ij = λe kk δ ij + 2µ T e ij + α a k a m e km δ ij + a i a j e kk + 2(µ L − µ T ) a i a k e k j + a j a k e ki + βa k a m e km a i a j − γ ij ( By taking into consideration the above definition, it can be expressed by where I υ is the Riemann-Liouville integral fraction introduced as a natural generalization of the well-known integral I υ g(r, t) that can be written as a convolution type, where g(r, t) is Lebesgue's integral function and Γ(υ) is the Gamma function. 
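For reference, the Riemann-Liouville fractional integral invoked above has the standard convolution form; the expressions below use the usual textbook definition and may differ in notation from the paper's numbered equations.

\[
  I^{\upsilon} g(r,t) \;=\; \frac{1}{\Gamma(\upsilon)} \int_{0}^{t} (t-\tau)^{\upsilon-1}\, g(r,\tau)\, d\tau ,
  \qquad \upsilon > 0, \qquad I^{0} g(r,t) = g(r,t),
\]
and, when \( g(r,\cdot) \) is absolutely continuous, the fractional time derivative of order \( \upsilon \in (0,1] \) can be written as
\[
  \frac{\partial^{\upsilon} g(r,t)}{\partial t^{\upsilon}} \;=\; I^{1-\upsilon}\,\frac{\partial g(r,t)}{\partial t}
  \;=\; \frac{1}{\Gamma(1-\upsilon)} \int_{0}^{t} (t-\tau)^{-\upsilon}\, \frac{\partial g(r,\tau)}{\partial \tau}\, d\tau .
\]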
In the case where g(r, t) is definitely continuous, it is possible to write We consider plane waves in the xy-plane; therefore, in the two-dimensional fiber-reinforced medium, we have written The fiber direction is chosen such that a = (1, 0, 0) so that the preferred direction is the x-axis and Equations (1)-(3) can be expressed by with where 11 , γ 22 = (2λ + α)α 11 + (λ + 2µ T )α 22 , and α 11 , α 22 are the linear thermal expansion coefficients. Initial and Boundary Conditions The initial conditions of the problem are given as while the problem adequate boundary conditions are expressed as For suitability, the nondimensionality of the physical quantities can be taken as where η = ρc e k and c = c 11 ρ . In these nondimensional terms of parameters in Equation (16), the basic Equations (8)- (15) can be written as (after ignoring the superscript ' for appropriateness) with σ yy = f 4 ∂v ∂y where ρc e , and f 8 = Method of Solution Now, we can apply Laplace transforms, which defined by where the Laplace transforms parameter is s, whereas the Fourier transforms for any functions h(x, y, s) can be expressed as Thus, the governing equations with the boundary conditions under the initial conditions are presented to obtain the ordinary differential equations as follows: − with Now, we obtain the general solutions of Equations (26)-(28) by the eigenvalues method proposed [36,[43][44][45][46]. From Equations (26)-(28), the vectors matrix can be expressed by where The characteristic equation of matrix A takes the form where R 1 = a 41 a 53 a 62 − a 63 a 52 a 41 , R 2 = a 52 a 41 − a 62 a 53 − a 62 a 54 a 46 + a 63 a 41 + a 63 a 52 + a 63 a 54 a 45 + a 64 a 52 a 46 − a 64 a 53 a 45 , The roots of the characteristic Equation (34), which are also the eigenvalues of matrix A. In the cases where ξ 1 , −ξ 1 , ξ 2 , −ξ 2 , ξ 3 , and −ξ 3 are the eigenvalues, the conforming eigenvectors of eigenvalues ξ can be calculated as The solutions of Equations (33) can be given by where the terms containing exponentials of growing nature in the space variable Z have been discarded due to the regularity condition of the solution at infinity, and A 1 , A 2 , and A 3 are constants to be determined from the boundary condition of the problem. Now, for any function h * (x, q, s), the transforms of the Fourier inversion are given by Finally, to obtain the general solutions of the variations in temperature, the components of displacement, and the components of stresses with respect to the distances x, y for any time t, the Stehfest [35] numerical inversion method is used. In this method, the inverse of Laplace transforms for h(x, y, s) can be expressed as where where N is the number of terms. Numerical Result and Discussion In order to illustrate the theoretical outcomes obtained in the preceding sections, we give some numerical values for the physical parameters [47]: The field quantities, the increment in temperature T, the displacement components u, v, and the components of stresses σ xx , σ xy depend not only on space x, y and time t, but also on the fractional order of the time derivative ν. The numerical computations for all the nondimensional field quantities of the plate are demonstrated in Figures 1-13. Figures 1-3 show the temperature contours on the plate for different values of the fractional-order parameter ν when t = 0.6. We can find that the temperature-change zone is restricted in a finite area and the temperature does not change out of this area. 
The white color in these figures refers to a temperature variation of zero in this region. We can observe that the region with changes in temperature become larger with the generalized thermoelastic theory without fractional time derivative ν = 1, while out of this region, the temperature maintains the original values. Figure 4 displays the variation in temperature with respect to the distance x, and it indicates that the temperature field has maximum values at the boundary and, after that, decreases to zero. Figure 5 shows the variations in horizontal displacement along the distance x. It is apparent that when the surface of the half-space is taken to be traction-free, and the heat flux is applied on the surface, the displacement for various values of fractional order parameter ν shows a negative value at the boundary of the half space. In addition, it attains stationary maximum values after some distances and then decreases to zero. Figure 6 shows the variations in vertical displacement with respect to x for various values of fractional order parameter ν, where we observed that a significant difference in the value of displacement is noticed for the different values of ν. Figures 7 and 8 display the distributions of stress components σ xx , σ xy with respect to the distance x for different values of fractional order parameter ν. It is observed that the components of stress σ xx , σ xy always start from the zero value and terminates at the zero value to obey the problem boundary conditions. Figure 9 shows the temperature variations T along the distance y and indicates that the variations in temperature have maximum values at the length of the heating surface y ≤ 0.5 , and start to reduce completely close to the edges y ≤ 0.5 where they reduce smoothly and, in the end, reach zero. Figures 10 and 12 display the horizontal displacement variations u and the stress component σ xx with respect to the distance y. They indicate that they have maximum values at the length of the thermal surface y ≤ 0.5 , and they begin to decrease completely close to the edges (y = ± 0.5) and, after that, reduce to zero. Figures 11 and 13 display the vertical displacement variations v and the stress component σ xy with respect to the distance y. It is noticed that they begin increase and reach maximum values just near the edges (y = ± 0.5), and then decreases close to zero after that. As expected, it can be found that the fractional parameter has great effects on the values of all the physical quantities. According to the numerical results, this new fractional parameter of the generalized thermoelastic model offers finite speed of the thermal wave and mechanical wave propagation. Conclusions In this article, we have studied the solutions of a two-dimensional problem for an infinite fiberreinforced thermoelastic material. Based on the eigenvalue scheme, Laplace transforms, and exponential Fourier transforms, the analytical solutions have been obtained. The properties of the fiber-reinforced medium with the fractional derivative act to reduce the magnitudes of the variables considered, which can be significant in some practical applications, and can be easily considered and accurately evaluated. Conclusions In this article, we have studied the solutions of a two-dimensional problem for an infinite fiber-reinforced thermoelastic material. Based on the eigenvalue scheme, Laplace transforms, and exponential Fourier transforms, the analytical solutions have been obtained. 
The properties of the fiber-reinforced medium with the fractional derivative act to reduce the magnitudes of the variables considered, which can be significant in some practical applications, and can be easily considered and accurately evaluated. Author Contributions: The two authors conceived the framework and structured the whole manuscript, checked the results, and completed the revision of the paper. The authors have contributed equally to the elaboration of this manuscript. All authors have read and agreed to the published version of the manuscript. Acknowledgments: The authors acknowledge with thanks the University's technical and financial support. Conflicts of Interest: The authors declare no conflict of interest. Nomenclature: u i , components of displacement; T, increment in temperature; ρ, medium density; T o , reference temperature; λ, µ T , elastic constants; σ ij , components of stress; c e , specific heat at constant strain; α, β, (µ L − µ T ), reinforced anisotropic elastic parameters; a i , components of the vector a, with a1² + a2² + a3² = 1; K jj , thermal conductivity; τ o , thermal relaxation time; υ, fractional parameter, where 0 < υ ≤ 1 covers two types of conductivity (υ = 1 for normal conductivity and 0 < υ < 1 for low conductivity); q o , a constant; H, the Heaviside unit function; t p , the pulse heat flux characteristic time; i, j, k = 1, 2, 3, the indices of the components.
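As a brief computational appendix to the Method of Solution above: the Stehfest numerical Laplace inversion is a standard algorithm, sketched below for even N (double precision limits useful N to roughly 10-16). The problem-specific transformed solution is represented by a placeholder; the check uses a known transform pair instead.

import math

def stehfest_coefficients(N):
    # Gaver-Stehfest weights V_k for even N.
    assert N % 2 == 0
    V = []
    for k in range(1, N + 1):
        s = 0.0
        for j in range((k + 1) // 2, min(k, N // 2) + 1):
            s += (j ** (N // 2) * math.factorial(2 * j)
                  / (math.factorial(N // 2 - j) * math.factorial(j)
                     * math.factorial(j - 1) * math.factorial(k - j)
                     * math.factorial(2 * j - k)))
        V.append((-1) ** (k + N // 2) * s)
    return V

def stehfest_invert(F, t, N=12):
    # Approximate f(t) from its Laplace transform F(s).
    a = math.log(2.0) / t
    V = stehfest_coefficients(N)
    return a * sum(V[k - 1] * F(k * a) for k in range(1, N + 1))

# Check against a known pair: F(s) = 1/(s + 1) has inverse exp(-t); exp(-1) = 0.3679...
print(round(stehfest_invert(lambda s: 1.0 / (s + 1.0), 1.0), 4))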
2,842
2020-09-01T00:00:00.000
[ "Physics" ]
Logarithmic Differential Operators and Prequantization of Logsymplectic Poisson Structure In this paper, we study the prequantization condition of logsymplectic structure using integrality of such structure on the complement of associated divisor D. Introduction The first problem of geometric quantization concerns the kinematic relationship between the classical and quantum domains.At the quantum level, the states of physical system are represented by the rays in Hilbert space H, and the observable by a collection O of symmetric operators on H, while in the limiting classical description, the state space is a symplectic manifold or more generally Poisson manifold (X, ω) and the observable are algebra C ∞ (X) of smooth functions on X.The kinematic problem is: given X and ω, is it possible to reconstruct H and O? According to Dirac's general principles, the canonical transformations of X generated by the classical observable should correspond to the unitary transformations of H generated by the quantum observable, and Poisson brackets of classical observable should correspond to commutators of quantum observable.Each classical observable f : X → R should correspond to an operator φ( f ) ∈ O such that 1. the map f → φ( f ) is linear over R 2. if f is constant, then φ( f ) is the corresponding multiplication operator Therefore, the Poisson bracket is the classical analogue of the quantum commutator.Linear map φ satisfying (1 → 2) is called prequantization formula .When the prequantization formula exists, C ∞ (X) acts faithfully on the Hilbert space H.In the case where classical phase space if represent by symplectic manifold, J. Soureau and B. Kostant; independently presented necessary and sufficient conditions on which symplectic manifold (X, ω) shall be prequantizable.Their proved that (X, ω) is prequantizable if and only if the De Rham cohomolgy class of ω is element of the image of homology map induced by the inclusion of Z in R.More generally, I. Vaisman introduced the notion of contravariant derivative, Poisson Chern class and used it to generalize the Kostant-Soureau integral theorem.They prove that the obstruction to prequantization of symplectic manifold is measured by De Rham cohomology while Poisson cohomolgy measures the obstruction to the prequantization of Poisson manifold. Between symplectic manifold and Poisson manifold, there exist the logarithmic manifold.The notion of logsymplectic structure taking their origin from particular meromorphic forms having at most simple poles along certain divisor D of a giving complex manifold X.Such forms are amply studied in (Saito, K. (1980)).Using the notion Lie-Rinehart algebra, we give the algebraic generalization of such notion. The aim of this work is to study integrality of logsymplectic structure.Of course, we first use the notion of Lie-Rinehart to introduce the algebraic versus of logsymplectic structure.Secondly, we use the notion of logarithmic differential operator to define Dirac principle of prequantization of such structures.Since each 2n dimensional logsymplectic manifold has 2n − 2 dimensional leaves, we study the impact of the integrality of such leaves on the all manifold. The main result of this paper is as follow: Theorem 2.4 If the divisor D is free and irreducible hypersurface of X and the sheaf of logarithmic 1-form is generated by closed forms, then for all logarithmic 2-form ω, the following conditions are equivalent The structure of the paper is as follows: 1. 
[Section 1] It is devoted to algebraic formalism of the notion of logsymplectic structure.Using the notion of Lie-Rinehart algebra, we define the notion of logsymplectic structures and we prove in Proposition 2.11 that such structures induce two Lie-structure on the base algebra.we also prove that Poisson structure associated to logsymplectic structure are logarithmic Poisson structure.We give two examples of logsymplectic structures and one example of non logsymplectic structure. [Section 2] In this section, we recall the notion of logsymplectic manifold and we prove the main theorem of the note. 3. [Section 3] This section is devoted to the interpretation of Dirac principle for logsymplectic manifold.Of cause, we introduce the notion of logarithmic differential operator and we prove in Proposition 4.14 that the module of logarithmic differential operator is a Lie-Rinehart algebra.We also prove that the latter is central extension of logarithmic vector fields along the sheaf of holomorphic functions of complex manifold with free divisor. [Section 4] We use extension of sheaf to study integrality condition of logsymplectic algebra. [Section 5] In this section, we introduce Lie-algebroide formalism to study integrality of logsymplectic structures. Lie-Rinehart-logsymplectic Algebras and Associated Lie Brackets Throughout this section, k denotes a field of characteristic 0 and A a commutative k-algebra with unity 1. Let Der A be the A-module of k-derivations of A and Ω A the A-module of formal differentials of A. It is proved in (Huebschmann, J. (2013)) that Ω A is generated by {da, a ∈ A} together with the relations where d is the canonical derivation associated to Ω A .An element D of Der A is said to be logarithmic along an ideal I of A if D(I) ⊂ I.We denote by Der A (log I) the set of derivations of A, logarithmic along I.By definition, Der A (log I) is a sub-Lie algebra of Der A .Let S = {u 1 , ..., u p } be a subset of p nonzero and nonunit elements of A. An element D of Der A is said to be logarithmic principal along the ideal generated by S if for all u i ∈ S , D(u i ) is an element of the ideal u i A generated by u i .We denote by Der A (log I) the set of derivations of A which are logarithmic principals along the ideal I of A generated by S .Der A (log I) is a sub-Lie algebra of Der A (log I).We denote Ω A (log I) the A-module generated by We have the following: Lemma 2.1.Let I be the ideal A generated by S = {u 1 , ...; u p , u i ∈ A; 1 ≤ i ≤ p}.The A-module Der A (log I) of derivations of A logarithmic principal along I is the dual over A of Ω A (log I). We recall that a Lie-Rinehart algebra over A is a pair (L, ρ); where L is Lie algebra which is also an A-module and ρ : L → Der k (A) is Lie algebras homomorphism which satisfies the following equality. Remark 2.2.The notion of Lie-Rinehart algebra is particular case of a more general notion called P-Lie-Rinehart algebra; where P is an A-algebra.More explicitly, a P-Lie-Rinehart algebra is a pair (L, ρ) formed by a P-module L and A-linear map ρ : L −→ Diff 1 (P) which is also a Lie-algebras homomorphism satisfies (1) for all a ∈ P and l 1 , l 2 ∈ L. Where Diff 1 (P) denote the A-module of first differentials operators of P. In the next section, we will recall the definition and give its logarithmic counterpart. 
It follows from this definition that Der A endowed with identity is Lie-Rinehart algebra.On the other hand, we can easily prove that the inclusion map of Der A (log I) in Der A is a structure of Lie-Rinehart algebra on Der A (log I). Let (L, ρ) be a Lie-Rinehart algebra over A. By definition, the associated structure ρ is also a representation of the Amodule L; by derivations of A. For all q ∈ N, a q-linear alternating mapping of L into A is called a q-dimensional cochain. Definition 2.4.The cohomology of the complex (Lalt q (L, A), The following is easy to prove.Definition 2.7.see (Huebschmann, J. ( 2013) such that for all x 1 , ..., x q−1 ∈ L; and for all f ∈ Lalt q (L, A) we have Remark 2.8.If X is a smooth manifold then (i) Every Poisson structure on X is a Lie-Rinehart-Poisson structure on Ω X with Lie-Rinehart structure the Hamiltonian map. (ii) Every symplectic structure on X is a Lie-Rinehart-Poisson-symplectic on the O X -module of smooth vector fields on X It follows from the definition that Der A (log I) is a Lie-Rinehart algebra; with inclusion as structure. In particular, when A is the algebra of holomorphic functions on a 2n dimensional complex manifold X, a logsymplectic structure on A is algebraic analogous logsymplectic form on X (see (Goto, R. (2002))).According to above the definition, logsymplectic structure are 2-forms on Der A (log I) which are closed under the De Rham differential.Since Der A (log I) is sub-module of Der A its algebraic dual over A is bigger than Der A one's; which is not well defined since, in general the bi-dual of Ω A is not Ω A .In (Saito, K. (1980)), the author proves that the sheaf of logarithmic forms and the sheaf of logarithmic vector field on final dimensional complex manifold are dual each other. Let µ be a logsymplectic structure.Since µ is non degenerated, its contraction by logarithmic derivation induce an isomorphism of A-modules between Der k (log I) and its dual Der k (log I) * which is the module of logarithmic forms (see (Saito, K. (1980))).Therefore, for all a ∈ A, there is an unique δ a ∈ Der k (log I) such that: a is called logarithmic Hamiltonian element and δ a is called logarithmic Hamiltonian field.For all a, b ∈ A, consider: We have the following proposition. Proposition 2.10.(Dongho, J.(2012) Let µ be a logsymplectic structure on A. The following bracket is a well defined logarithmic Poisson structure on A In the sequel, we suppose that I is generated by S = {u 1 , ..., u q } and µ, is a logsymplectic structure.Then for all u i ∈ S there exist a unique δu i ∈ Der k (log I) such that But since du i ∈ Ω A ⊂ Ω A (log I), there exist δ u i such that i δ u i µ = du i .It is easy to prove that δ u i = u i δu i .We can then consider the following bracket: By direct computation, we have the following proposition. Proposition 2.11.A Logsymplectic structure µ on A induces two Lie structures {−, −} and {−, −} sing such that for all u, v ∈ I − 0, Example 2.12.(Dongho, J.( 2012) y] the algebra of two variables polynomials and I = xA, J = yA, K = x 2 A the ideal generated respectively by x, y and x 2 .We have: and Where < U > B denotes the B-module generated by U for all U ⊂ L and B ⊂ A. The following 2-forms ) and its determinant is x 2 which is not an inversible element of A. Therefore, ω 3 is a Lie-Rinehart-Poisson structure which is not a logsymplectic structure. 
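Several displayed relations in the paragraph above on logarithmic Hamiltonian elements are elided in this extraction; the standard forms consistent with the surrounding text (a reconstruction, not a verbatim quote of the paper's equations) are

\[
  i_{\delta_a}\,\mu = da , \qquad \{a,b\} := \mu(\delta_a,\delta_b), \qquad a,b \in A,
\]
and, for a generator \( u_i \) of \( I \),
\[
  i_{\delta u_i}\,\mu = \frac{du_i}{u_i} , \qquad i_{\delta_{u_i}}\,\mu = du_i , \qquad \delta_{u_i} = u_i\,\delta u_i .
\]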
Logsymplectic Manifold In this section we propose a geometric application of the concepts introduced in the above section.As we can see, the geometric analogous of Lie-Rinehart-logsymplectic structure correspond to logsymplectic structure.Throughout this section, X a a final dimensional compact complex manifold with reduced divisor D. To define what is logsymplectic manifold, we need the notion of logarithmic forms which are extensively studied in algebraic geometry; see (Saito, K. (1980)).In addition to the assumptions made in (Saito, K. (1980)), we assume that D is square free. Let ω be a meromorphic q-form on X with poles only in D. We suppose that D := {z ∈ X, h(z) = 0} where h is some holomorphic map. ω is said to be logarithmic along D if hω and dh∧ω are holomorphic forms.As in (Saito, K. ( 1980)), we denote Ω q X (log D) the sheaf of logarithmic q-forms on X. dx ∧ dy x 2 .Therefore, according to Theorem 1.8 of (Saito, K. (1980)), the system { dx x 2 , dy} shall be a bases of Ω X (log D).But on the other hand, { dx x , dy} is free bases of Ω X (log D).Then there exist two holomorphic functions a and b such that dx x 2 = a dx x + bdy; but this implies that ax = 1 and b = 0. Then a = 1 x .Which is contradictory to our assumptions.So dx x 2 is not logarithmic 1-form. It follows from this remark that the assumption that D shall be square free is necessary. One important notion related to logarithmic forms is associated residue form.According to Theorem 1.1 of (Saito, K. (1980)), if its is a n−dimensional complex manifold and ω is a logarithmic q-form, then there exist an holomorphic function g such that Where ψ and η are holomorphic form. Thus, the logarithmic forms may have poles outside of D. One form in such a way that residues of all q-form is holomorphic on X if h is irreducible and the residues of all element of Ω X (log D) is holomorphic. In such cases, global sections of Ω 2 X (log D) are in the form dh h ∧ ψ + η where ψ and η are holomorphic forms. Let us denote H * (X, C) the De Rham cohomology of X.According to De Rham Theorem, it is isomorphic to the cohomology of the complex of holomorphic forms of X.Since the latter is sub-complex of the complex of logarithmic forms, there is an homomorphism p : is the cohomology group of the logarithmic De Rham complex.So we have the following sequences With those tools, we can prove the following result. Theorem 3.4.If the divisor D is free and irreducible hypersurface of X and the sheaf of logarithmic 1-form is generated by closed forms, then for all logarithmic 2-form ω, the following conditions are equivalent conversely, if ω 0 + dλ = η and ψ = dβ with ω 0 integral, then Let us denote by D sing the singular part of D and D red the smooth part.The proof of the following is essentially the same as the proof of Darboux Theorem in symplectic geometry. Lemma 3.5.(Goto, R. ( 2002)) (Log Darboux Theorem).Let (X, D) be a log symplectic manifold with a logarithmic symplectic form ω, where D is reduced divisor.There exist holomorphic coordinates (z 0 , z 1 , ..., z 2n−1 ) of a neighborhood of each smooth point of D red such that ω is given by where {z 0 = 0} = D. we refer to these coordinates as log Darboux coordinates. It follow from this result that the residue 1-form of a logarithmic 2-form ω if dz 1 in the log Darboux coordinates.We have then an integrable distribution by setting {δ ∈ T D red , Re(ω)(δ) = 0}.Therefore, we have 2n − 2-dimensional leaves on D red . More explicitly, we have the following result. 
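The normal form elided from Lemma 3.5 (the Log Darboux Theorem) above is, in its standard statement (reconstructed here; the index convention may differ slightly from the paper's),

\[
  \omega \;=\; \frac{dz_0}{z_0}\wedge dz_1 \;+\; \sum_{i=1}^{n-1} dz_{2i}\wedge dz_{2i+1},
  \qquad D = \{ z_0 = 0 \},
\]
so that the residue 1-form of \( \omega \) is \( dz_1 \), matching the remark immediately following the lemma.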
It follows from this lemma that in a complement of 2-dimensional sub-manifold of X, logsymplectic forms are symplectic. Logarithmic Lie-Rinehart Differential Operators In this paragraph, (X, ω, D) is logsymplectic manifold and E a locally free O X -module of rank 1 and D = {h = 0} a divisor of X. Logarithmic Connection The notion of logarithmic connection is original in the work of P. Deligne when he formulated and proved the theorem establishing a Riemann-Hilbert correspondence between monodromy groups and Fuchsian systems of integrable partial equations or flat connections on complex manifolds.He also gives a treatment of the theorem of Griffiths which states that the Gauss-Manin or Picard-Fuchs systems of of differential equations are regular and singular. where d is the exterior derivative over O X . This is equivalent to a linear map △ : Der X (log D) → End(E) satisfy the following If ∇ is logarithmic connection K ∇ will denote its curvature and the pair (M, ∇) will refer to logarithmic connection on a locally free O X -module of rank 1 M. for a giving nowhere vanish section s Let p be a point of D and (z i λ ) a logarithmic coordinate system along D at p. where a i ∈ H 0 (X, O X ).Therefore,we deduce that: Lemma 4.3.Let D be a normal crossing divisor and α ∈ H 0 (X, Ω X (log D)).If dα = 0 then the residue of α is constant on any component of singular locus of D. Any such form with at least one nonzero residue admits representation Definition 4.6.Let (M, ∇) be a connection on X * = X − D. A meromorphic prolongation of (M, ∇) is a meromorphic connection ( M, ∇) on X such that the restriction is an isomorphism. Module of Logarithmic Differential Operator Let A be a commutative ring.For any pair of A-modules M, N we define module Di f f k A (M, N) inductively by putting M).Replacing M by E, the above definition becomes; Definition 4.7.An r-order differential operator on E is a C-linear map φ : E → E such that s → φ( f s) − f φ(s) is an (r-1)-order differential operator on E); for all f ∈ O X In the previous paragraph, we see that each logarithmic connection induces a morphism △ on Der X (log D) such that for all f ∈ O X and X ∈ Der X (log D), we have ] is zero order operator.This motivate the following definition. Definition 4.8.An r-order differential operator φ is logarithmic along D if s → [φ(hs) − hφ(s)]h −1 is an (r-1)-order differential operator on E. Notation 4.9.We denote Di f f r (E) the set of r-order differential operators and Di f f r log (E) is the subset of r-order differential operators logarithmic along D According to what precedes, △ X ∈ Di f f 1 log (E); for all X ∈ Der X (log D).Lemma 4.10.Let φ be a first order differential operator logarithmic along D, for all sections f of O X , There exists unique Proof.For all s ∈ E, φ(hs) − hφ(s) = hs and there exist g ∈ O X such that φ(hs) − hφ(s) = hgs.Therefore, ( h − hg)s = 0 for all s. For all f ∈ O X , s ∈ E, we have: From above results, we deduce the following exact sequence of Lie-Rinehart algebras The morphism φ is called quantization formula and it satisfies: a ∈ A, ∇ is a section of g; and v(a) = {a, −}. In general, ∇ is only an A-module homomorphism.Obstruction to become Lie-morphism is measured by cohomology class of an 2-cocycle K ∇ ; it is usually called curvature of ∇ on H. When (H, φ) exists and φ satisfies (13) the triplet (H, ∇, K ∇ ) is called prequantum representation of A. 
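The quantization formula (13) referred to above is elided in this extraction; in the symplectic case it is the familiar Kostant-Souriau operator, reproduced below for orientation (sign and normalization conventions such as the factor 2πi vary between references, and the logarithmic setting of the paper replaces the connection by a logarithmic one),

\[
  \varphi(a) \;=\; \nabla_{\delta_a} \;+\; 2\pi i\, a , \qquad a \in A,
\]
which satisfies \( [\varphi(a),\varphi(b)] = \varphi(\{a,b\}) \) precisely when the curvature obeys \( K_\nabla = -2\pi i\,\omega \), consistent with the constant \( \alpha = -2\pi i \) appearing later in the text.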
The following paragraph is devoted to the construction of H when µ := ω is a logsymplectic structure on a even dimensional complex manifold X with reduced divisor D. Suppose that the logsymplectic manifold (X, ω, D) admit a prequantum representation (Di Changing the role of f and g, we obtain: It follows that φ is prequantum map of (X, ω, D) if and only if Since ω ∈ H 0 (X, Ω 2 X (log D)), relation ( 14) implies that K ∇ and then ∇ are logarithmic forms.Definition 5.1.We refer to prequantum sheaf on (X, ω, D) a rank 1 connection (M, ∇) satisfy ( 14) Extension of Prequantum Sheaf Our main objective being to determine the existence condition of prequantum sheaf (M, ∇) on (X, D, ω) satisfy ( 14), we intend in a first time to determine in which case integral condition of ω on X − D could be extended to entire X.Of cause we shall know how and when it is possible to extend connection on X − D to logarithmic connection on D. First about, we recall the following proposition. Proposition 5.2.(Iitaka, S. (1981)) Let F be a closed subset of a nonsingular variety X with F X If ω 1 and ω 2 are rational q−forms such that ω The existence of τ follow from lemma 4.3. Lemma 5.5.Let (N, ∇ 0 , K ∇ 0 ) be a sheaf of locally free O X * -module of rank 1.If (M, ∇, K ∇ ) is a sheaf of locally free O X -module of rank 1 such that Proof.The prequantum map (14) becomes And we have by simple calculation Definition 6.18. 2. The divisor D ⊂ X is locally quasi-homogeneous if for all x ∈ D there are local coordinates on X, centered at x, with respect to which D has a weighted homogeneous defining equation. Proposition 6.19. ( F., Narváez-Macarro, L., & Mond, D. (1996)) Let D be a strongly quasihomogeneous free divisor in the complex manifold X, let U be the complement of D in X, and let j : U → X be inclusion.Then the natural morphism from the complex Ω * X (log D) of differential forms with logarithmic poles along D to R j * C is quasiisomorphism. It follows from 6.19 and Grothendieck's Comparison Theorem that, Cohomology of X−D compute the one of the complex (Ω * X (log D), d).We denote H * dR log (X) the cohomology of the complex (Ω * X (log D), d).Let X be a complex analytic space, F a coherent sheaf on X. Denote by G k (F ) := {m ∈ X; pro f m F ≤ k}. We saying that the sheaf F satisfies the condition (s k ) if dim G k (F ) ≤ k − 2 Theorem 6.20.If D is zero dimensional locally homogeneous free divisor of X and if the De Rham cohomology class of ω on X − D live in i * (H 2 (X − D), Z) then (X, ω) have prequantum bundle if the associated prequantum bundle of X − D satisfies the condition (s 2 ) Proof.Since the De Rham cohomology class of ω on X − D live in i * (H 2 (X − D), it follow from B. Kostant in (Kostant, B. (1970)) that there exist a rank one locally free O X−D -module F such that the curvature satisfies the equation 24 with α = −2πi.If F satisfies the condition (s 2 ) and D is zero dimensional analytic divisor of X, then according to Trautmann Theorem, there exist an unique analytic coherent sheaf F on X extending F .Since the curvature of F coincides on X − D with curvature of F it follows from Proposition 5.2 and to the Logarithmic Comparison Theorem that F is prequantum sheaf of X Proposition 2.5.If Der A and Der A (log I) are respectively the Lie-Rinehart algebra of derivations of A and of logarithmic derivations of A, then (i) The Lie-Rinehart cohomology of Der A is the De Rham cohomology.(ii) The Lie-Rinehart cohomology of Der A (log I) is the logarithmic De Rham cohomology Follows J. 
Huebschmann in (Huebschmann, J. (2013)), we can now introduce the following notion.Definition 2.6.A structure of Lie-Rinehart-Poisson on a Lie-Rinehart algebra (L, ρ) is an alternating 2-form µ µ : L × L → A such that d ρ µ = 0.If µ is a structure of Lie-Rinehart-Poisson on (L, ρ),then (L, ρ, µ) is called a Lie-Rinehart-Poisson algebra.The notion of Lie-Rinehart-Poisson algebra is a part of more general notion defined by I. Krasil'shchik in (Krasil'shchik, I. (1988)) He called it canonical algebra.We denote by L * the algebraic dual over A of a Lie-Rinehart algebra L. Remark 3.1.(Dongho, J.(2012)) Let X = C 2 and D = {0} × C. In canonical coordinate system (x, y) of X, the divisor D is the set of zero of the ideal I generated by x.But the only useful element of I which represent completely D is h = x.Other element of ideal I are square free.It follows from our definition of logarithmic forms that dx x holomorphic 1-form.But according to K. Saito definition (See Definition 1.2 of(Saito, K. (1980)), dx x 2 and dy are logarithmic forms.In addition, dx x 2 ∧ dy = 1. 10) Proof.Let p be a point of D and U λ an open coordinate neighborhood of p.We have: α = Res(α) dh p h p + α reg where Res(α) is the residue of α.But dα = 0 imply d(Resα) = O; since Res commute with d.However, from Theorem 2.9 in (K.Saito, 1980), Res(Ω 1 X (log D)) = O X , therefore dRes(α) = 0 imply Resα ∈ C. Proposition 4.4.K ∇ = 0 if and only if σ = with a i ∈ C. Proof.K ∇ = dσ; where σ is the connection one form of ∇.The result is then a consequence of Lemma 4.3 Definition 4.5.Let (M, ∇) and (N, δ) be two connections.An homomorphism from (M, ∇) to (N, δ) is a sheaf homomorphism φ : M → N such that the following diagram commute. dy Are Lie-Rinehart-Poisson structure on Der A (log I), Der A (log J) and Der A (log K) respectively; since dω 0 = dω 1 = dω 2 = 0. We recall that in this case, the Lie-Rinehart structure is just their inclusion in Der A .The matrix of ω 1 relative to the bases (x∂ x , ∂ y ) of Der A (log I) and ( )It is also the matrix of ω 2 relative to the bases (∂ x , y∂ y ) of Der A (log J) and (dx, dy y ) of Ω A (log J).Since de determinant of M ω is inversible element of the ring A, we conclude that ω 1 and ω 2 are logsymplectic structures.On the other hand, the matrix of ω 3 relative to the bases (x∂ x , ∂ y ) of Der A (log I) and ( dx x , dy) of Ω A (log I) is
6,013
2017-07-25T00:00:00.000
[ "Mathematics" ]
A Robust Context-Aware Proposal Refinement Method for Weakly Supervised Object Detection Supervised object detection models require fully annotated data for training the network. However, labeling large datasets is a very time-consuming task, therefore, weakly supervised object detection (WSOD) is a substitute approach to fully supervised learning for the object detection task. Many methods have been proposed for WSOD to date, their performance is still lower than supervised approaches since WSOD is a very challenging task. The major problem with existing WSOD methods is partial object detection and false detection in an objects cluster with the same category. The majority of the methods on WSOD follow multiple instance learning approaches, which does not guarantee the completeness of detected objects. To address these issues, we propose a three-fold refinement strategy to proposals to learn complete instances. We generate class-specific localization maps by fused class activation maps obtained from fused complementary classification networks. These localization maps are used to amend the detected proposals from the instance classification branch (detection network). Deep reinforcement learning networks are proposed to learn decisive-agent and rectifying-agent based on policy gradient algorithm to further refine the proposals. The refined bounding boxes are then fed to instance classification network. The refinement operations result in learning complete objects and greatly improve detection performance. Experimental results show better detection performance by the proposed WSOD method compared to the state-of-the-art methods on PASCAL VOC2007 and VOC2012 benchmarks. I. INTRODUCTION Weakly supervised object detection (WSOD) has acquired enormous attention in the literature due to its great ease of demanding only image-level annotated data for training object detector. This has been made possible by the development of convolutional neural networks (CNNs) [1] and largescale datasets [2] with at least image-level annotations. In this paper, we aim to effectively learn and infer whole object detections trained with coarse image-level labels indicating the particular categories of objects present in an image for the WSOD problem. Earlier methods [3]- [5] follow conventional multiple instance learning (MIL) paradigm for WSOD task. In MIL framework, images are treated as a set of positive and negative The associate editor coordinating the review of this manuscript and approving it for publication was Gustavo Callico . instances bags, and classifier utilizes these instances set as labels. High confidence proposals can be extracted by applying MIL, which is also a suitable solution to localize objects with image-level labels. However, MIL learns the most discriminative part of the target object instead of the complete object [6]. Moreover, MIL has certain constraints, such as positive bags contains at least one positive instance and the negative bag contains all negative instances. Another major drawback of MIL is the most likely positives are predicted using existing classifier which can result in faulty learning in case of false-positive predictions, as classifier explicitly cannot deduct true positives in the given image [7]. Notable progress has been made in WSOD with the advancement of CNNs, since many methods combine MIL and CNNs [8], [9]. 
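As a sketch of the class-specific localization maps mentioned in the abstract above (class activation maps obtained from a classification network): the snippet below shows the basic CAM computation for a global-average-pooling classifier, upsampled to the input resolution and min-max normalized. It is a minimal illustration; the fused complementary network (FuCN) and its fusion step are not reproduced, and the tensor layout is an assumption.

import torch
import torch.nn.functional as F

def class_activation_maps(conv_features, fc_weights, image_size):
    # conv_features: (N, K, h, w) final convolutional feature maps
    # fc_weights:    (C, K) weights of the global-average-pooling classifier head
    # image_size:    (H, W) spatial size of the input image
    # returns:       (N, C, H, W) localization maps, min-max normalized per map
    cams = torch.einsum("nkhw,ck->nchw", conv_features, fc_weights)
    cams = F.interpolate(cams, size=image_size, mode="bilinear", align_corners=False)
    lo = cams.amin(dim=(2, 3), keepdim=True)
    hi = cams.amax(dim=(2, 3), keepdim=True)
    return (cams - lo) / (hi - lo + 1e-6)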
Recent studies have revealed that even better results for WSOD can be achieved by training MIL network in the standard end-to-end manner [4], [10], or a variant of end-to-end training [11], [12]. We adopt the same training approach motivated by [11], [13] a variant of end-toend training. The approaches which integrate the CNN with MIL for WSOD have shown performance improvement by CNN as feature extractor compared to conventional handcrafted features [13]. Although a considerable advancement has been accomplished in WSOD literature and these methods have delivered significant results, the existing approaches [6], [13], [14] are less efficient in detecting tight boxes to cover the entire object. In this paper, we propose a framework for WSOD to tackle the main problems with existing WSOD methods, specifically, 1) partial object detection and 2) occluded object detection. This paper is an advanced version based on our previous work [7]. In particular, a three-fold refinement strategy is presented to amend the detected proposals by leveraging the localization maps. The proposed deep network for WSOD consists of three main branches, classification branch for extracting localization information, multiple instance classifier (MIC) (detection branch), and a refinement branch. We present a robust proposal refinement module (PRM) to rectify the proposals to be learned by object detector network by retraining through the instance-level supervisions generated by PRM. This study aims to detect complete and tight bounding boxes around an object instance. We utilized the class activation maps (CAMs) to obtain localization maps from the fused complementary network (FuCN) [7], which is a classification network. A CAM signifies the distinctive image areas which CNN utilizes to classify an object as an instance of a particular class. Therefore, to cope with the problem of partial object detection and loose detection, we leverage localization maps from FuCN to refine the object detection outputs as shown in Figure 1. For each image localization maps are generated corresponding to each object category. The generated localization maps have the same spatial resolution as of the input image. In PRM, proposals are first refined based on activated regions within the bounding box with respect to the corresponding localization map. The coordinates of the bounding box are adjusted according to the activated region (pixels). This step compacts the loose proposals, and then further finetuning of bounding box is performed by the surrounding region to expand the too-tight proposals which do not cover the complete object, thus attains a complete and tight bounding box for each instance using within the bounding box and the surrounding region. Afterward, the activations for each refined proposal are erased from localization map iteratively. In the second step, missing detections are inferred if there exists any closed region (regions) of activations in the localization map after refining all proposals. These refinement operations result in learning complete objects and improves detection performance. Concisely, the proposed PRM first refines the detections and then investigates the detection completeness for all instances in the entire image by phase I and phase II, respectively. As summary, our contributions are listed as follows: 1) We present a three-fold refinement procedure after the MIC branch to learn complete instances in the PRM branch. 
We leverage the localization maps as contextual information for proposal refinement. 2) In the proposed PRM, we first perform refinement within bounding boxes using the activated pixels in the localization region; additional fine-tuning is then achieved by investigating a fixed neighborhood region of the bounding boxes. Lastly, we leverage the connected localization regions to infer missing detections. 3) In phase I of the PRM, we determine the precise association between bounding boxes and localization regions through position-aware, class-specific localization maps corresponding to the bounding boxes, and resolve the conflicting regions encompassed by multiple boxes. 4) We propose a cascade of efficient, light-weight reinforcement learning (RL) networks based on the policy gradient method to further learn complete proposals by training a decisive-agent and a rectifying-agent concurrently. Lastly, the object detector is retrained using these refined proposals as instance-level supervision. 5) We evaluate our method on the PASCAL VOC2007 and VOC2012 datasets in terms of average precision (AP) and correct localization (CorLoc) to measure accuracy, and inference time in frames per second (FPS). The rest of the paper is organized as follows. Section II is devoted to related work. In Section III we present our proposed method. Section IV presents the experimental results and discussion. Finally, in Section V we conclude the paper. II. RELATED WORK The WSOD problem has been investigated for over a decade, yet the performance of WSOD methods is still not close to that of fully supervised object detection methods. The majority of existing works on WSOD adopt MIL as their baseline framework [3], [11]. However, MIL is a non-convex optimization problem [11] and also suffers from an erroneous learning process due to predicted false positives. Several studies have proposed to regularize the optimization by reforming the MIL initialization strategy. Cinbis et al. [8] studied a multi-fold MIL strategy, which was effective in averting performance collapse in object localization. Recent approaches combine MIL with CNNs [4], [10], [11], [13]. Bilen et al. [11] proposed an end-to-end framework, the weakly supervised deep detection network (WSDDN), for WSOD. Their method is also an adaptation of the MIL approach. The authors used a set of pre-computed proposals [15] to obtain candidate boxes that may contain objects, performed feature extraction on these proposals, and classified each proposal. A spatial regularizer is employed to improve performance. Sangineto et al. [16] proposed a self-paced learning approach trained with Fast RCNN [17]. During network training, the same network at different progression stages is used to predict the object localization of positive samples. At each stage, a subset of images is selected whose pseudo-ground-truth is the most reliable. Online instance classifier refinement (OICR) [4] was proposed to refine the instances in a fully supervised manner on results from WSDDN [11]. Some methods [18], [19] aimed at a proposal-free framework by utilizing deep features. Tang et al. [18] proposed a two-stage region proposal network for WSOD. The authors focused on proposal generation within an end-to-end framework by exploiting the deep feature maps. The hidden object location information generated by early CNN layers is utilized to generate the proposals, which are then refined by a region-based CNN classifier. Shen et al.
[19] proposed an end-to-end WSOD network with a generative adversarial learning approach, using the single shot multibox detector (SSD) [20] as the baseline detector. Cheng et al. [21] merged selective search [15] and a gradient-weighted CAM based technique to produce proposals that better enclose whole objects. An adversarial erasing (AE) approach was proposed by Wei et al. [22] to localize integral object regions. The authors trained the accompanying classification networks with partially erased discriminative object regions from the input images. Zhang et al. [23] employed an adversarial learning approach motivated by [22]; they trained two classifiers to learn distinct features. This strategy boosts object localization performance. We adapt [23] to learn integral object localization cues intended for proposal refinement. Jie et al. [24] proposed a self-taught learning approach for learning object location evidence to train the detector. The detector progressively learns to localize positive samples. Tang et al. [4] presented a multiple instance classifiers approach to learn the features of the whole object iteratively. They performed instance refinement through multiple supervised stages. However, in their refinement procedure, the neighbours of the highest-scoring proposal (based on intersection-over-union, IoU) are grouped together, or graphs of top-ranking proposals are used to form clusters that also contain partial object proposals. This approach can result in falsely learning objects from their discriminative parts and is ineffective at learning whole-object detection; hence this method does not guarantee prediction of the whole object. Wei et al. [6] proposed a tight box mining strategy with segmentation context for WSOD. They adopted OICR [4] as their detection branch. This method exploits the segmentation context, evaluating proposals with the aid of per-pixel scores from segmentation maps to suppress boxes covering only object parts and to obtain high-quality boxes for learning the instance classifier. Their network has an additional segmentation branch. Different from [6], in our WSOD framework we directly refine the proposals from localization maps to reduce the computations. In [6], the segmentation branch is trained with pseudo supervision of object localization acquired from the classification network. However, a single classification network learns the discriminative features of the object category and cannot provide good localization cues for supervising the segmentation network. It can be trapped in local minima; consequently, the partial object detection problem persists in this method as well. Shen et al. [25] studied multi-task learning for WSOD. They treated object detection together with semantic segmentation as a joint learning problem, to overcome the failure patterns of segmentation and object detection that are typically encountered in other MIL based self-enforcement methods [4], [8], [26] trained with single-task learning. Li et al. [27] studied WSOD jointly with a segmentation task in a collaborative loop. They trained each task with the supervision of the accompanying task. Lately, a spatial likelihood voting method was proposed by Chen et al. [28] to converge proposal localization for object detection with image-level supervision using multi-task learning. Recently, Zhang et al. [14] proposed a region-searching paradigm for WSOD with a reinforcement learning approach under weak supervision. They extracted region correspondence maps to use as pseudo-target regions for training the agent.
A teacher-student learning approach through multiple instance self-training was used by Ren et al. [29]. They trained the student network with pseudo-labels from the teacher network. They also used a dropblock to zero out the discriminative parts of objects, an idea similar to [7]. These methods [27]-[29] have achieved reasonable performance; however, they still suffer from missing detections in the case of occluded objects and from wrong detections caused by object clusters. Since the training image is decomposed into thousands of proposals, each approximately correct training instance is flooded with many incorrect training instances. Such weak supervision results in inaccurate predictions, as it inevitably involves a great deal of uncertainty due to noisy training instances. Therefore, to address the aforementioned limitations of WSOD and to improve its robustness, we rectify the proposals predicted by the MIC through class-specific localization maps, and retrain the object detection network by optimizing an instance-level objective function to learn the refined instances. III. PROPOSED METHOD We add a proposal refinement module to the WSOD network in [7] to efficiently tackle the problems of partial object detection, loose detections, and object cluster detection. In particular, our method not only generates a complete and tight box by revising each predicted bounding box, but also infers missing instances (if any) from the corresponding class-specific localization maps. The same procedure is applied to all boxes predicted by the MIC branch. The overall architecture of the proposed approach is shown in Figure 2. VGG [30] is used as a backbone, which feeds three branches, i.e., FuCN, MIC (object detection), and PRM. We utilize the high-level features of classification networks learned in a complementary manner to make the WSOD network learn whole-object representations. We fuse the CAMs from two complementary classifier networks, computed using Score-CAM [31], to generate a localization map covering the whole object activation regions. These localization cues from the complementary classification network are used in the PRM. The spatial resolution of the localization maps is made identical to the input image by upsampling (interpolation). In the PRM, we determine the pairs of bounding boxes and corresponding activation regions in the class-specific localization map from the coordinates of the bounding boxes. First, we perform refinement of the inner region of the bounding box with respect to the activated pixels in the associated localization region. Then we again exploit the localization maps for contextual information within a fixed region surrounding the box, obtained by scaling it with a factor as in [6]. For further refinement, we propose policy networks to distinguish complete from incomplete object detections and to discover undetected regions in the case of incomplete detections. These doubly refined proposals are then learned by the detector. Figure 3 shows the generated CAMs overlaid on the detection output from FuCN, and the bounding boxes refined by the PRM using class activation values greater than δ_a (a pre-defined normalized threshold). In the subsequent sections, the three main branches of the proposed WSOD method are described in detail. Table 1 shows the notation used in this paper. A. FUSED COMPLEMENTARY NETWORK (FuCN) FuCN aims to learn whole-object features as well as localization through a discriminative but complementary feature learning approach inspired by [22] and [23].
FuCN includes two complementary classifiers trained with distinct inputs. In particular, image features are extracted by a pretrained VGG [30]. Then, we add four additional convolutional layers (convs) followed by a fully connected (FC) layer in both classifiers. Feature erasing is performed by thresholding the discriminative features of network A on its heatmaps; these regions are then removed from the entire feature maps of the input image, with the removed regions replaced by zeros. Network B is thereby encouraged to learn complementary features of the target object from these erased feature maps. The CAMs from both classifiers are extracted and fused to obtain complete localization maps of objects for each class separately. We upsample these localization maps to the same resolution as the image by interpolation. This network is optimized during training with the binary cross entropy (BCE) loss function. B. DETECTION NETWORK (MIC) Instead of focusing only on discriminative features, which results in partial object detection, in the object detection network the features are concatenated with complementary features from FuCN [7] to learn whole-object features. The object detection network has two further branches, for classification and detection, as in [11]. A set of about 2000 region proposals from selective search [15] is employed in both the training and test phases. Single-level spatial pyramid pooling (SPP) [32] is then applied to produce fixed-size feature maps for these differently sized proposals. The discriminative feature learning approach used in [7] drives the network to learn whole objects and results in improved detections. For image-level classification, we use class-specific max-pooling of the region scores from the classification branch only. C. PROPOSAL REFINEMENT MODULE (PRM) The PRM is proposed as a rectification scheme aimed at learning complete and tight object detections. Although whole objects can be learned by the object detector as in [7] through features concatenated from the FuCN branch, in some cases this can still result in loose bounding boxes. Therefore, further refinement is required to obtain tight bounding boxes. We use spatial information within the input image to obtain refined, tight, and complete proposals with only image-level supervision. The PRM consists of two refinement phases: 1) in phase I, pixel activations from the localization maps are leveraged to rectify the detected bounding boxes; 2) phase II comprises two policy networks trained with reinforcement learning for proposal refinement. The overall flow of the PRM with phase I and phase II rectification is demonstrated in Figure 4. In the subsequent subsections, each module is described in detail. 1) PHASE I: PROPOSAL INWARD AND OUTWARD CAM BASED REFINEMENT The spatial resolution of the localization maps obtained from FuCN is made identical to the input image by upsampling (interpolation). We select the pixels of the class-specific localization maps with values greater than or equal to δ_a as foreground objects. Inferring the precise association between bounding boxes and localization regions is critical, especially in the case of multiple instances of the same object in an image. The association between bounding boxes and the activated localization maps is based on the position of the bounding box coordinates and the corresponding positions in the localization map. Bounding box coordinates are transformed according to the localization map activations within the box and also within a fixed proportion of its surrounding context (s_c).
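A minimal NumPy sketch of this inward/outward adjustment is given below; the array and function names, the threshold delta_a, and the context scale s_c are illustrative assumptions rather than the exact implementation.

```python
import numpy as np

def refine_box(box, loc_map, delta_a=0.5, s_c=1.2):
    """Tighten a box to the activated pixels inside it (inward step),
    then expand it within a fixed surrounding context (outward step).

    box     : (x1, y1, x2, y2) in pixel coordinates
    loc_map : 2-D class-specific localization map, normalized to [0, 1]
    delta_a : activation threshold selecting foreground pixels
    s_c     : scale factor defining the surrounding context region
    """
    h, w = loc_map.shape
    x1, y1, x2, y2 = box

    # Inward refinement: shrink the box to the activated pixels it contains.
    ys, xs = np.where(loc_map[y1:y2, x1:x2] >= delta_a)
    if len(xs) > 0:
        x1, x2 = x1 + xs.min(), x1 + xs.max() + 1
        y1, y2 = y1 + ys.min(), y1 + ys.max() + 1

    # Outward refinement: look for activations in the scaled context region
    # and expand the box to include them (but never beyond that region).
    cx, cy = (x1 + x2) / 2.0, (y1 + y2) / 2.0
    bw, bh = (x2 - x1) * s_c / 2.0, (y2 - y1) * s_c / 2.0
    X1, Y1 = int(max(0, cx - bw)), int(max(0, cy - bh))
    X2, Y2 = int(min(w, cx + bw)), int(min(h, cy + bh))
    ys, xs = np.where(loc_map[Y1:Y2, X1:X2] >= delta_a)
    if len(xs) > 0:
        x1, x2 = min(x1, X1 + xs.min()), max(x2, X1 + xs.max() + 1)
        y1, y2 = min(y1, Y1 + ys.min()), max(y2, Y1 + ys.max() + 1)
    return x1, y1, x2, y2
```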
Algorithm 1 and Figure 5 show the proposal rectification procedure. As a first step, we check whether the activated pixels (a_c) in the localization map (L_c) each belong to a single bounding box or to multiple bounding boxes. If a_c exist in multiple bounding boxes, we call these pixels 'conflicting pixels'. Afterward, we separate the conflicting pixels from the non-conflicting pixels. We calculate the area of the bounding boxes containing conflicting pixels, and then assign the conflicting pixels to the smallest bounding box to ensure compactness. When a pixel is assigned to a particular bounding box, its activation is set to zero in the localization map for the other proposals during inward refinement. Consequently, there remains no ambiguity when associating the activated pixels in the map with bounding boxes. Inward refinement adjusts the region proposal according to the closed set of activated pixels in the corresponding localization map, using image coordinates. Lastly, outward refinement is performed over the fixed surrounding context region (s_c) to expand too-tight boxes toward nearby activated pixels. 2) PHASE II: REINFORCEMENT LEARNING NETWORKS TOWARDS COMPLETE REGIONS We design a cascade of policy networks to learn whole objects under the weakly supervised detection setting. Since there are no ground-truth bounding boxes in WSOD, bounding-box regression cannot be applied. We therefore train RL agents to further rectify the proposals in the weakly supervised setting. To the best of our knowledge, this is the first effort to apply RL based policy algorithms to learn the actions for complete or incomplete proposals and then further rectify proposals with contextual connections for WSOD. The overall procedure of phase II rectification is summarized in Algorithm 2. In the subsequent subsections, we describe the decisive and rectifying agents for WSOD. a: DECISIVE NETWORK The objective of the decisive-agent for WSOD is to decide, for each input image, whether the object detections are complete or incomplete. We mask out all detected bounding box regions with zeros in the corresponding locations of the input image feature maps. The decisive-agent is designed to discover undetected object regions in the masked-out image, in order to further improve the detections of phase I. If the decision of the decisive-agent for an image is 'incomplete', then the rectifying network decides whether the discovered region is a new bounding box (i.e., a detection missed by the preceding detection procedure) or part of an existing proposal. A policy gradient reinforcement strategy is adopted to learn a policy function that maximizes the expected reward. As input to the decisive network, we mask out all detected bounding box regions obtained from phase I with zeros in the input image feature maps. The policy network predicts the likelihoods of the 'complete' and 'incomplete' actions for the masked-out image (I_erase); these actions correspond to the detections being complete or to some objects or parts still being undetected. The state is the input image features with the position-aware masked-out feature maps of the regions detected in phase I. The action in the current state is the binary decision {d ∈ (incomplete, complete)}. The decisive-agent inspects whether the masked-out image still contains any undetected object(s), regarded as incomplete detections, or whether the detections are complete, such that no remaining activated regions are found in the masked-out image. The policy gradient method directly models and optimizes the policy (π_θ) with respect to the parameters (θ), π_θ(a|s) (where a is the action and s is the state), to learn actions for the states.
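The following sketch illustrates one way such a policy could be parameterized and updated with a Monte-Carlo (REINFORCE-style) policy gradient; the use of PyTorch, the layer sizes, and the episode loop are illustrative assumptions, not the authors' exact implementation.

```python
import torch
import torch.nn as nn

class DecisivePolicy(nn.Module):
    """Small policy network mapping a state vector to a binary action
    distribution {incomplete, complete}."""
    def __init__(self, state_dim, hidden_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, 2),
        )

    def forward(self, state):
        return torch.softmax(self.net(state), dim=-1)

policy = DecisivePolicy(state_dim=256)
optimizer = torch.optim.RMSprop(policy.parameters(), lr=1e-3)

def run_episode(states, reward_fn, gamma=1.0):
    """One REINFORCE episode: sample actions, collect rewards,
    and ascend the policy gradient on the discounted returns."""
    log_probs, rewards = [], []
    for s in states:                                # states: list of 1-D tensors
        dist = torch.distributions.Categorical(policy(s))
        a = dist.sample()
        log_probs.append(dist.log_prob(a))
        rewards.append(reward_fn(s, a.item()))      # reward in {-1, +1}

    # Monte-Carlo returns G_t, computed backwards over the episode.
    returns, G = [], 0.0
    for r in reversed(rewards):
        G = r + gamma * G
        returns.insert(0, G)
    returns = torch.tensor(returns)

    loss = -(torch.stack(log_probs) * returns).sum()  # gradient ascent on J(theta)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```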
The proposed decisive network consists of 3 FC layers followed by a softmax layer, as shown in Figure 2. We use 256+1 (the +1 is an additional input for the action history) and 128 sized vectors for the first FC and hidden FC layers, respectively. To avoid catastrophic forgetting, we input the agent's experience replay [(s, a, r), ..., (s_t, a_t, r_t)] to the last FC layer. REWARD FUNCTION: In RL, designing an appropriate reward function for the specific decision problem is of great importance. The reward function {r ∈ (−1, +1)} for the decisive-agent is defined based on the confidence score (s_erase) of the masked-out image, taken as the maximum over both classifiers (classifiers A and B); the agent learns the correct decision through the rewards defined in (1) and (2). b: RECTIFYING NETWORK We propose a context-aware approach by training an RL agent to amend the region proposals (B_c from phase I) based on the contextual connections of the bounding boxes with the CAMs (L_c). The proposals neighboring the CAM regions (a_c1, a_c2, ..., a_cn) in the masked-out image are investigated, and each bounding box (b_c) is modified according to the decision of the rectifying-agent. The architectural design of the rectifying network is the same as that of the decisive network (Figure 2). Closed activated regions (a_c) of the localization map (L_c) are computed as values above the threshold (δ_a) with 8-connected neighbors. We define criteria to decide about the activated regions (a_c) discovered by the decisive network. This mainly involves two considerations: 1) is a_c a missing detection or part of an already detected proposal (a phase I detected proposal), and 2) explore the neighboring context of each a_c (a_c ∈ L_c) to determine the action for it. We construct a temporary bounding box (b_t) around each a_c as a closed foreground region based on the activated (thresholded) locations. Then we perform a distance measure between each b_t and b_c. As a first step, we calculate the IoU between b_c and b_ct (b_ct is the temporary, hypothetical bounding box generated by extending b_c to b_t). If the IoU between b_c and b_ct is greater than or equal to δ_IoU, a further pixel-based distance comparison is performed. Additional spatial information is fed to the network by computing spatial relations within the image, exploring the neighboring context of the activated areas in L_c. We calculate the connectivity of each a_c with the activated regions in the detected proposals (B_c) as a distance measure between the two activated regions. The pixel distance is computed between the activated regions within the bounding boxes b_t and b_c using the most upper-left, upper-right, bottom-left, and bottom-right pixel locations of the activated regions. The distance of each outermost pixel in a_c is computed to all four outermost pixels of the neighboring region to estimate the nearest region. The neighboring bounding box with the least distance to b_t is selected and modified accordingly. However, if the IoU between b_c and all b_ct is less than δ_IoU, the particular temporary box is considered a separate new bounding box corresponding to a missed detection. For training the rectifying-agent, these two-level contextual relationship details between the bounding boxes b_t and b_c are fed to the network. The color smoothness information between each b_t and its neighboring proposals is also fed to the rectifying network. Color smoothness matching further guides the agent about the degree of similarity between two regions.
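A small NumPy sketch of the two geometric context measures used here, the box IoU and a corner-to-corner pixel distance between activated regions, is given below; function and variable names are illustrative assumptions.

```python
import numpy as np

def iou(box_a, box_b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    iw = max(0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union > 0 else 0.0

def corner_distance(region_a, region_b):
    """Minimum distance between the four extreme (outermost) pixels of two
    activated regions, each given as an (N, 2) array of (row, col) pixels."""
    def corners(r):
        rows, cols = r[:, 0], r[:, 1]
        return np.array([
            [rows.min(), cols.min()], [rows.min(), cols.max()],
            [rows.max(), cols.min()], [rows.max(), cols.max()],
        ])
    ca, cb = corners(region_a), corners(region_b)
    # Pairwise Euclidean distances over the 4x4 corner combinations.
    d = np.linalg.norm(ca[:, None, :] - cb[None, :, :], axis=-1)
    return d.min()
```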
To compute smoothness over the color image regions, the Laplacian operator (the second derivative along the pixels) is applied to each channel and the result is averaged over the three channels. For each region proposal, this smoothness information is fed to the network. We feed the rectifying network with the following state information: 1) convolutional features, 2) fused CAMs (L_c) from both classifiers, 3) rectified proposals (B_c) with scores, and 4) contextual details. The action history, together with the α (tolerance coefficient) history for the entire image, including all bounding box rectification actions, is also fed to the network. The transformations of the bounding box(es) are applied according to the action of the rectifying-agent with respect to L_c. Figure 6 presents the procedure of the rectification network. REWARD FUNCTION: We define the reward function for the rectifying-agent by computing an overall score (h) for the amended boxes, giving a higher weight to the entropy score than to the category confidence score. The score h(b_c) of a bounding box amended in phase II is compared to the score of the same bounding box from phase I. If a large gain in the score is observed, the decision of the rectifying-agent is considered wrong and a negative reward is given, and vice versa; a small gain in h(b_c) is considered tolerable. The rewards for the rectifying-agent are defined in (3) and (4), where α is the tolerance coefficient learned by the rectifying-agent. To optimize the network performance, the α coefficient is learned by the rectifying-agent: its value is updated according to the learning rate, based on the reward for the current action given a particular state, the current α value, and the α history together with the action history. The rectifying-agent is therefore trained with a reinforcement learning as well as a self-learning approach at the same time. D. TRAINING THE PROPOSED WSOD FRAMEWORK Overall training of the proposed WSOD method is performed epoch-wise. Once the network has been trained for all epochs, including all episodes of the policy networks, the whole network is trained again with continuous rewards for each image, regardless of object category group. After completing the network training for each epoch, the MIC network is retrained to learn whole instances. 1) TRAINING THE DETECTION NETWORK (MIC) After feature extraction with the pretrained VGG [30], the network branches into three further networks, A, B, and C. Networks A and B extract discriminative features, which are then concatenated as input to network C (the MIC network). For N multi-label images in the trainval set, the label vector for the i-th image is y_i = [y_i1, y_i2, ..., y_iC], where y_ij = 1 (j = 1, ..., C) if an object of the j-th class is present in the image and y_ij = 0 otherwise, with C the total number of categories. For the detection network, image-level classification scores are computed by class-specific max-pooling over all regions R = (r_1, r_2, ..., r_T), where T is the total number of proposals. For the i-th image, the j-th class score is the maximum of the j-th class softmax probabilities over all regions, p_ij = max(p_j^{r_1}, p_j^{r_2}, ..., p_j^{r_T}). Thus, the prediction vector for the i-th image is p_i = [p_i1, p_i2, ..., p_iC]. We use the BCE loss function with stochastic gradient descent, momentum 0.9, and weight decay 5×10^−4 for optimization.
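A brief PyTorch sketch of this image-level score computation and loss is given below; tensor shapes and names are assumptions made for illustration.

```python
import torch
import torch.nn.functional as F

def image_level_scores(region_probs):
    """Class-specific max-pooling over region proposals.

    region_probs : (T, C) tensor of per-proposal class softmax probabilities
    returns      : (C,) tensor of image-level class scores p_i
    """
    return region_probs.max(dim=0).values

def image_level_bce(region_probs, labels):
    """BCE between max-pooled image-level scores and image-level labels y_i."""
    p = image_level_scores(region_probs)        # (C,)
    return F.binary_cross_entropy(p, labels)    # labels: (C,) in {0, 1}

# Example with T = 2000 proposals and C = 20 classes.
probs = torch.rand(2000, 20)
labels = torch.zeros(20)
labels[3] = 1.0                                 # image contains class 3 only
loss = image_level_bce(probs, labels)
```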
The overall loss function for the network is defined in (5), where L is the loss function for the WSOD network [7], L_A and L_B are the loss functions of networks A and B, respectively, and L_C is the loss function of the MIC network. In (7), s is the individual state in the probability distribution (s ∈ S), where S is the number of discrete states. 2) TRAINING THE POLICY NETWORKS The training settings for the decisive and rectifying networks are the same except for the reward functions. For training the network on the PASCAL VOC2007 dataset, we set the episode length to the number of images in the entire subset of class-specific images (e = I_c), whereas for PASCAL VOC2012 the episode length is half of each class-specific image subset (e = I_c/2). For the rectifying-agent, the episode length is dynamic, depending on the number of discovered instances in each image. For training the policy networks, random images of the same category are used within an episode. Softmax in action preferences is used for policy parameterization, so that the actions with the highest preferences in each state are given the highest probabilities of being selected [35]. Furthermore, parameterizing policies according to the softmax in action preferences with a discrete action space enables optimizing the approximate policy toward a deterministic policy, which is imperative for the WSOD scenario. We use gradient ascent with the RMSprop [36] optimizer for policy optimization, with the same learning rate as in the detection network. The objective function J(θ) for each episode, with parameter set θ, current state s_t and action a_t at time t, and discount factor γ = 1.0, is defined in (8) using Monte-Carlo policy differentiation with the reinforce method. We use a dropout of 0.4 for the last hidden layer during training as regularization to avoid overfitting. For each epoch, 20 separate episodes are executed, one corresponding to each PASCAL VOC category. A. BENCHMARK DATA The proposed method is evaluated on the PASCAL VOC2007 and VOC2012 datasets with 20 object categories, which are widely used as benchmarks for object detection. We use the trainval sets, with 5011 images for VOC2007 and 11540 images for VOC2012, for training. Only image-level labels are used for training. The test sets, with 4952 images for VOC2007 and 10991 images for VOC2012, are used for evaluation of the proposed WSOD framework. B. PERFORMANCE EVALUATION METRICS We use two performance measures for detection. The first metric is AP with 0.5 IoU between detected boxes and ground truths, and the mean of AP (mAP). Furthermore, we use CorLoc to evaluate the localization accuracy. Both the AP and CorLoc metrics measure the quantitative performance of the object detector based on the PASCAL criterion of IoU ≥ 0.5. C. IMPLEMENTATION DETAILS VGG16 [30] trained on ImageNet [2] is used as the backbone network. In our experiments, FuCN is trained with two complementary networks using the same implementation settings as in [7]. We employ the pretrained FuCN to train the proposed network. Score-CAM [31] is used for computing class activations. The learning rate is set to 10^−3 for the first 30 epochs and then to 10^−4 for the remaining 40 epochs. The number of episodes for training the policy networks is the same as the number of epochs. We initialize α with a value of 1.3 for optimizing the rectifying-agent. It is always crucial to select an optimal threshold. Nevertheless, depending on the objective of a specific task at a particular stage, we use the threshold values in our experiments that yield the maximum performance for that specific task.
In our experiments, δ_a is 0.5 for phase I refinement and 0.7 for phase II refinement. We use a stricter activation threshold for phase II since we aim to extract highly confident but mistakenly unexploited regions; using low activation values in phase II refinement can cause ambiguity and may result in incorrect detections. The surrounding context (s_c) is set to 1.2 [6] during the refinement of proposals in phase I. The threshold δ_IoU is set to 0.75 in the rectifying network. A loose δ_IoU can even harm already correctly detected regions instead of improving the detections, while a too-tight δ_IoU may not help rectify a region even if its true part lies in an activated neighboring region. The overall score h is computed with a weight of 0.7 on the entropy score and 0.3 on the category confidence score. All experiments are conducted on 4 parallel NVIDIA GeForce TITAN XP GPUs. D. COMPARISON WITH STATE-OF-THE-ART METHODS We compare the proposed WSOD method with state-of-the-art methods on the PASCAL VOC2007 and VOC2012 datasets. We report experimental results for both PRM phases (PI and PII) separately, as well as for the retrained MIC. Table 2 and Table 3 show the comparison of results on the PASCAL VOC2007 dataset in terms of AP and mean AP for the test set, and CorLoc for the trainval set, respectively. It can be observed that our method, with each module, outperforms all compared methods on both the test and trainval sets. In Table 2, among all compared methods, [29] has the highest mAP (54.9%); our method (MIC+PI+PII) improves on [29] by 6.7% mAP. From Table 3, Chen et al. [28] have the highest mean CorLoc, 71.0%, among all other compared methods. In comparison to [28], the proposed method achieves a significantly higher score, with a 6.0% gain in mean CorLoc. Table 4 and Table 5 show the results of the proposed and compared methods on the PASCAL VOC2012 test and trainval sets in terms of mAP and CorLoc, respectively. A performance pattern similar to PASCAL VOC2007 is observed for PASCAL VOC2012 on both the test and trainval sets. All modules of the proposed method surpass all compared methods with a substantial performance boost. Our method has several advantages compared to the previous state-of-the-art methods. Unlike [28], we investigate the outside neighboring region of the detected box, but at a fixed scale, which prevents our method from erroneously including activated regions of the same category that belong to other instances. The SLV approach [28] wrongly detects a whole cluster of same-category objects as a single detection, leaving individual instances undetected. The partial object detection problem also still occurs in [29] and [28]. Furthermore, most WSOD methods show degraded performance on classes with relatively small objects, particularly the ''bottle'' class. It is also observed that the ''chair'' class has lower AP for all compared methods. The results in Table 2 and Table 3 show that our method achieves significantly improved AP for these categories compared to the state-of-the-art methods. The qualitative results (Section IV-F) further verify that small object detection is significantly improved by phase II refinement, whereas such objects are mostly missed by many other WSOD methods [21], [28], [29]. Overall, the proposed PRM and MIC retraining result in enhanced WSOD performance. The CAM based dual-step rectification and the complete-or-incomplete detection investigation supervise the WSOD network to learn high-quality, refined, tight, and complete regions.
Furthermore, the lightweight RL policy networks make an important contribution to enhancing the detection performance by further discovering missed instances. E. INFERENCE TIME We report the inference time of our method incrementally for each module, with the VGG16 [30] backbone network, in Table 6. The inference time of our method (MIC) is the same as [7], with much improved detections. Since the PRM involves a series of refinement stages, the inference time for phase I including the MIC branch is 0.70 FPS, and the inference time for phase II including MIC and phase I is 0.67 FPS on PASCAL VOC2007. Phase I with MIC runs at 0.82 FPS, and phase II with MIC and phase I takes 0.79 FPS on the PASCAL VOC2012 dataset. Moreover, it is practical to use MIC alone during inference, as it is trained with supervision from the PRM and attains high accuracy with reasonable inference time (0.72 FPS on PASCAL VOC2007 and 0.84 FPS on VOC2012). F. QUALITATIVE RESULTS Figure 7 shows the final qualitative detection and localization results of the proposed method. Our proposed method effectively overcomes the problems of 1) partial object detection, 2) same-category object cluster detection, 3) loose detections, and 4) missing detections. However, there are certain failure cases for some images, as shown in Figure 8. These failure cases are mainly due to images with less effective initial proposals, because of high similarity in texture or color between neighboring instances, or due to occlusions. Duplicate detections of the same object are effectively handled by our method, since the phase I refinement procedure avoids double detections by assigning a particular activated region to only a single bounding box. Very few false detections are observed in the form of object clusters or partial object detection. Such failure cases could be overcome by employing a more sophisticated proposal generator in the first place. Moreover, the qualitative results in Figure 4 and Figure 6 illustrate the intermediate outputs of the refinement procedure. V. CONCLUSION This paper proposed an effective framework for WSOD that performs exhaustive refinement after instance-level classification. Two main rectification stages are proposed: phase I rectifies the detected regions to obtain close-fitting, whole-object bounding boxes, while phase II aims to detect missed instances, which are mostly small objects. Since the majority of WSOD methods suffer from partial object detection, object cluster detection, and loose detections, this study has effectively accomplished the aim of tight and complete object detection in the weakly supervised setting. Qualitative results demonstrate high-quality detection output by the proposed method on the PASCAL VOC2007 and VOC2012 benchmarks. Correspondingly, the quantitative results show enhanced performance of the proposed WSOD method compared to the state-of-the-art WSOD methods.
9,017.4
2020-01-01T00:00:00.000
[ "Computer Science" ]
The impact of dike construction on the flow and sediment siltation in the project of Meishan Container terminal phase II In order to avoid the adverse impact of the Qinglongshan deep groove on the flow pattern at the Meishan Container terminal phase II and to ensure the safety of berthing, a submarine dike is proposed along the land front, toward Qinglongshan Island. The influence of different dike layout schemes on the water flow in the surrounding sea area was studied with a mathematical model. In addition, the effects of different dike heights and gap widths on the surrounding water flow and siltation were analysed. The results show that the crossflow in front of the terminal under the two submerged dike heights and gap widths is basically the same, and is smaller than without the dike. An appropriate reduction of the dike height improves the flow in the channel between Qinglongshan and Meishan Island. After the dike construction, the groove will maintain a certain depth and will not become blocked on the top side of the dike or in the rear channel. Introduction The Ningbo-Meishan port area has good water and land conditions for constructing large container berths. In the southeast of Meishan Island, container terminal phase I has been built and is already in use [1]. To the north of container terminal phase II there is a deep groove between Qinglongshan and Meishan Island (figure 1). The tidal current of the deep groove obliquely crosses the water area in front of the wharf, making the flow on the southeast side of the wharf disordered, and the large local cross-flow can affect the use of the dock [2]. In order to ensure the safety of ship berthing and loading-unloading operations, the construction of an embankment along the land front line in the direction of Qinglongshan Island was proposed. The project will smooth the tidal current around the terminal area, resulting in a small crossflow in front of the wharf. However, the impact on the topography of the area behind Qinglongshan Island is not known. Will the construction of the dike damage the island landscape? This paper analyses this problem from the perspective of sediment movement and landform evolution. The local hydrodynamic conditions will be adjusted after the construction of the dike between Qinglongshan Island and the terminal, which can significantly reduce the velocity in the water area behind Qinglongshan Island. The slow-velocity water area will lead to rapid deposition of sediment. In this study, the sediment siltation after the project is modelled using a mathematical model, and the influence of different dike heights and gap widths on the surrounding water flow and siltation is discussed. Model verification The numerical model is DHI MIKE21 [3]. In order to simulate the tidal dynamics of the sea near the wharf, a model nesting method was adopted to provide the open-boundary tidal conditions [4]. The water depth distribution of the model is shown in figure 2, and the computational mesh of the mathematical model is shown in figure 3. Triangular meshes are used to fit the irregular coastlines. The grid resolution is about 20 m near the coast, the number of grid cells is 34488, and the number of nodes is 18556. The mathematical model was verified by comparing the computed water levels and flow velocities to the measured data.
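As a hedged illustration of this kind of verification step (independent of the MIKE21 toolchain itself), the comparison between simulated and measured water levels at a station can be summarized by simple error statistics such as the bias and root-mean-square error; the array names below are hypothetical.

```python
import numpy as np

def verification_stats(simulated, observed):
    """Bias and RMSE between simulated and observed water levels (m),
    sampled at the same times at a given tidal station."""
    simulated = np.asarray(simulated, dtype=float)
    observed = np.asarray(observed, dtype=float)
    err = simulated - observed
    return {"bias_m": err.mean(), "rmse_m": np.sqrt((err ** 2).mean())}

# Hypothetical hourly tide levels at one station over one day.
obs = 1.5 * np.sin(np.linspace(0, 4 * np.pi, 24))
sim = obs + np.random.normal(0, 0.05, size=24)
print(verification_stats(sim, obs))   # RMSE well below a 0.1 m criterion
```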
The locations of the tidal level and tidal current measuring points are shown in figure 2; 10 tidal current and 4 tidal elevation measurement points were arranged. Figure 4 shows the tide verification results at the four tidal level sites: the simulation error is less than 0.1 m, and the tidal wave phase coincides well with the measured results. Figure 5 shows the verification of the flow velocity and direction during spring tide. Due to the large number of islands and the complicated topography, the flow pattern in this area is complicated. From the simulated and measured results, the flow at points 1#, 2#, 5#, 6# and 7# shows a rotary pattern, while the flow at the other points shows a reciprocating pattern. Points 1#, 2#, and 3# differ in flow direction; at point 3# the flow is reciprocating due to the boundary constraint, and the flood velocity there is about twice the ebb velocity. In the vicinity of the project area (point 9#), the tidal current is strong and the flood-ebb velocity is about 1.50 m/s during spring tide, with the flow direction parallel to the Meishan terminal phase I shoreline. The verification results show that the model simulates the tidal currents very well compared to the observed data, which indicates that the boundary conditions provided by the mathematical model and the values of the model parameters are reasonable and correct. Analysis of water flow characteristics between Qinglongshan and Meishan Island under different cases Under the land constraints of the phase II terminal, the flow at the forefront is parallel to the front of the pier. In the tidal groove between Qinglongshan and Meishan Island, the flow velocity is relatively large. After the phase II construction, the flow in the channel between Qinglongshan and Meishan Island is weakened, and the scope of influence can reach the north of Meishan Island [5]. The local flow velocity near the channel between Qinglongshan and Meishan Island is slightly different from that before the project. The analysis shows that, although the embankment dike is constructed in cases 2 to 4, the flood and ebb flow passes mainly outside Qinglongshan, and the flow velocity inside the channel is relatively small. In order to further understand the influence of the embankment project on the surrounding water flow, the flow patterns at the moments of maximum northeast and maximum southwest flow velocity in the channel are also considered. Take the comparison of cases 1 and 2 as an example (figure 7). From the comparison at the moment of maximum northeast flow velocity in the channel, it can be seen that, when the embankment is not built, the flow velocity in the channel is large and the flow direction is parallel to the channel axis. After the completion of the embankment dike with a crest height of 2.2 m, the water flow is smooth in front of the terminal. When the maximum southwest flow occurs in the channel (figure 7b), the construction of the embankment project enhances the smoothness of the water flow at the front of the land area, and the flow velocity in the channel is greatly reduced. The embankment dike construction has little effect on the flood current (figure 7c). In case 1, the water flow is strong at the northeast corner of the newly reclaimed land area and forms a recirculation area at the foreland of the land area (figure 7d).
After the closure of the embankment dike, the flow becomes smooth and the flow velocity increases, which is conducive to reducing siltation behind the terminal. Comparing the instantaneous flow patterns of case 2 and case 3, it is found that there is little influence on the flow pattern except for the local water flow near the gap. In case 4, when the height of the embankment dike is lowered (to an elevation of 0 m in the 1985 national height datum), the flow velocity in the channel is somewhat smaller than without the dike, but the dike still smooths the water flow at the front of the terminal, resulting in a small cross-flow velocity. The maximum northeastern and southwestern flow velocities in the channel occur during the high tide period and are not synchronous with the flood and ebb tide times of the outer channel of Meishan Island. The construction of the embankment dike increases the smoothness of the water flow at the foreland of the terminal land area. A decrease of the dike height helps to increase the water flow in the channel between Qinglongshan and Meishan Island and to reduce the siltation. Sedimentation for the 4 cases The annual siltation distribution is calculated by the mathematical model [6], and figure 8 shows the annual siltation for cases 1 to 4, respectively. After the implementation of case 1, the local maximum sedimentation is about 3 m in the tidal channel between Qinglongshan and Meishan Island, due to the relatively high sediment concentration before the project. After the implementation of case 2, the construction of the dike makes the flow velocity in the channel between Qinglongshan and Meishan Island decrease obviously, and the siltation in the channel is strengthened. The distribution of siltation for case 3 is basically the same as that of case 2, and the degree of siltation is reduced only near the gap inside the embankment. After the implementation of case 4, the dike height is decreased from 2.2 m to 0 m, and the flow velocity in the tidal channel between Qinglongshan and Meishan Island increases significantly compared with cases 2 and 3, so the siltation in the channel is significantly weakened. In general, with the increase of siltation, the local cross-section area is reduced, so the flow velocity increases and a new cross-section depth is maintained. The siltation on the two sides of the dike will gradually reach a balance. Owing to the maintained tidal and wave dynamics, and with a certain tidal volume and overflow above the dike, the channel between Meishan and Qinglongshan Island will not silt up completely.
The influence of different gap widths on the distribution of back siltation is limited to the area near the gap. 3) With the increase of siltation, the local cross-section flow velocity will increase, thus maintaining a new cross-section depth. Under the combined effect of flow and wave dynamics, the groove will not disappear after the dike construction.
2,531.2
2017-03-01T00:00:00.000
[ "Engineering", "Environmental Science" ]
Mapping GDP and PPPs at Sub-national Level through Earth Observation in Eastern Europe and CIS Countries Following the line of research originated by Henderson et al. (2012), this paper focuses on how 'observations from above', in the form of night-lights satellite data, might contribute to mapping, at a very fine geographical level (ideally, one square km), two core macroeconomic indicators used extensively in the Sustainable Development Goals monitoring and reporting framework: Gross Domestic Product, GDP, and Purchasing Power Parities, PPPs. The analyses are carried out on a panel of 17 Eastern Europe and CIS countries for the period 2000-2013 and use indicators constructed from satellite images in the form of night lights, as processed by the US Department of Defense and its Defense Meteorological Satellite Program's Operational Linescan System. Estimations of GDP in current US dollars and in PPP terms are carried out at both national and sub-national level, testing for the existence of a modifiable areal unit problem and comparing results with the official information available. Maps of GDP and PPP at the sub-national level are obtained as a final product of the research. Introduction The adoption of the Sustainable Development Goals in September 2015 by the United Nations General Assembly is calling National Statistics Offices (NSOs) worldwide to underpin a data revolution, as they are asked to extend both the scope and the disaggregation of the data traditionally produced, and to measure new economic, social and environmental phenomena, leaving no one behind. There is a growing consensus in the digital era that Big Data, particularly satellite images captured from above, might strengthen the capacity of traditional data sources and official statistics to help in monitoring sustainable well-being, thus meeting the increasing request for more spatially disaggregated data. Following the line of research originated by the paper of Henderson et al. (2012), this paper focuses on how 'observations from above', in the form of night-lights satellite data, might contribute to mapping at a very fine geographical level (ideally, one square km) two core macroeconomic indicators used extensively in the SDG monitoring and reporting framework: GDP and PPPs. Nowadays, the use of night lights as a proxy of GDP has become a standard in empirical economics (see, e.g., Donaldson and Storeygard (2016)). The obvious advantages of using night lights are that they generally show a good correlation with GDP, they are available for free and for a long time span, and they are objectively measured. This research makes extensive use of the set of information coming from satellite images, as processed by the US Department of Defense and its Defense Meteorological Satellite Program's Operational Linescan System (DMSP-OLS). Scientists at the National Geophysical Data Center (NGDC) process these raw data and distribute the final set to the public, thus making freely available 34 annual products from six satellites spanning 22 years, from 1992 to 2013. However, given the proximity of the first available satellite data to the dissolution of the Soviet Union and the length of the transition period in the economies of the region, the sample analysed in this paper goes from 2000 to 2013.
The stable night lights are those used in this research to proxy GDP in nominal and PPP terms for 17 CIS and Eastern Europe countries: Azerbaijan, Armenia, Belarus, Bulgaria, Czechia, Hungary, Kazakhstan, Kyrgyzstan, Poland, Republic of Moldova, Romania, Russian Federation, Slovakia, Tajikistan, Turkmenistan, Ukraine and Uzbekistan. Henderson et al. (2012) were the first to use night lights in a complete statistical and econometric framework to estimate real economic growth in a panel of world time series. Following their example, the relation between lights and GDP at sub-national administrative levels has been deeply investigated for North Korea, Kenya, Rwanda, Sweden, Nigeria, India and China. More recently, while some papers have confirmed the ideas underlying the lights-to-GDP hypothesis at the country level (see, e.g., Elvidge et al. (2014)), the approach used by Henderson et al. (2012) has been criticized because of the implicit assumption of stable elasticity made in obtaining sub- and/or supra-national estimates (Bickenbach et al. (2016), Addison and Stewart (2015)), which is hardly met under common situations where a modifiable areal unit problem (MAUP) exists. In particular, it has been stressed that the elasticity of GDP-to-lights should be statistically significant and positive, as well as temporally and spatially stable. For CIS and Eastern Europe countries, the literature on lights and GDP is practically non-existent, the only indirect reference being a global exercise carried out by Elvidge et al. (2014) on the correlation (in levels) between GDP, night lights and population at the national level during 1992-2012. Furthermore, to the best of our knowledge, no study has so far been carried out on the direct or indirect relation between lights and PPPs. Our paper innovates with respect to the preceding literature in at least three respects. First, it analyses in a systematic way the relationship between DMSP-OLS night lights and GDP in CIS and Eastern Europe countries at the finest extent possible, looking at the conditions under which lights can be used to obtain estimates of GDP and PPPs at a detailed geographical level. Second, the research uses both a time and a spatial approach in the analysis, particularly through the use of balanced panel regression models, and tests the conditions of spatial and temporal stability of the GDP-to-lights elasticity. Third, use is made of the available national and sub-national data produced by the NSOs of the region. After testing for the existence of a temporally and spatially stable elasticity of GDP, both in real and PPP terms, with respect to lights, the estimated coefficients are used to map economic activity and parities at a very fine geographical level, thus offering two sets of information that are much needed for SDG monitoring and reporting. We are fully aware that the estimations provided in this paper cannot replace the primary statistics produced by the NSOs of the region. However, we hope these estimates will be of some use to policy makers and researchers for their policy interventions, analyses and discussions, and will contribute to partially answering the increasing demand for more spatially disaggregated macroeconomic data to further advance the sustainable development agenda. The scheme of the paper is as follows. The next section describes the main characteristics of the DMSP system and the satellite information obtained in terms of night lights.
Section 3 details the indices considered in the empirical analyses, the transformations carried out on the night-lights information, and the population data used. Section 4 follows with the results of the empirical applications. The last section of the paper summarizes the main results and concludes. Night-lights Data from Satellite Images Earth observation has been used in many respects to shed light on specific aspects of human development, such as economic output, population, urbanization, land, water and natural resources use, weather conditions and climate change, and pollution monitoring. In parallel, there has been a growing use of night lights, one of the most important by-products of satellite remote sensing, as a proxy for measuring economic, social and environmental phenomena. This paper makes extensive use of the set of information coming from satellite images, as processed by the US Department of Defense and its Defense Meteorological Satellite Program's Operational Linescan System (DMSP-OLS); see Croft (1979) and Doll (2008) on technical aspects of the programme and, for a survey on the use of such images, Huang et al. (2014). A characteristic of DMSP-OLS data that has attracted most attention from researchers in recent years is their availability at a very fine geographical level (1 square km), thus making it possible to estimate from them a number of statistics at sub-national level, particularly those related to the level and growth of economic activity, and providing an answer to the chronic lack of official statistics at the level of disaggregation requested within the framework of the sustainable development agenda. The Defense Meteorological Satellite Program (DMSP) is a Department of Defense program of the US Air Force Space and Missile Systems Center, which started to capture imagery in the early 1970s through the Operational Linescan System (OLS) sensor. One of the primary objectives of the OLS sensors was to collect worldwide cloud cover observations twice per day. In 1992 the National Oceanic and Atmospheric Administration (NOAA) began to process and archive the DMSP nighttime light satellite imagery, which it has done for 22 years. The DMSP programme has been repeatedly upgraded over time, with the latest series, in its Version 4, spanning the years 1992-2013 and currently publicly available from the NOAA website (http://www.ngdc.noaa.gov/dmsp/downloadV4composites.html). Satellites from DMSP-OLS measure light emissions in the evening hours, between 8:30 and 10:00 pm local time, around the globe every day. The OLS sensor has two broadband sensors, in the visible/near-infrared (VNIR, 0.4-1.1 μm) and thermal infrared (10.5-12.6 μm) wavebands. The OLS is an oscillating scan radiometer with a broad field of view (~3,000 km swath) and captures images at a nominal resolution of 0.56 km, which is smoothed on-board into 5x5 pixel blocks of 2.8 km. Scientists at the National Oceanic and Atmospheric Administration's (NOAA) National Geophysical Data Center (NGDC) process these raw data and distribute the final data to the public, an undertaking of monumental difficulty. The original data come from the centre half of the 3000 km wide OLS swaths: lights in the centre half have better geo-location, are smaller, and have more consistent radiometry. In processing the raw data, a number of filters are applied before releasing the final results. Sunlit data are excluded based on the solar elevation angle. Glare is also excluded based on the solar elevation angle.
Moonlit data are omitted based on a calculation of lunar luminance (Croft 1979). The recorded daily data are pre-processed by removing observations from cloudy days and sources of light which are not man-made, such as auroral lights or forest fires. Data from all orbits of a given satellite in a given year are then averaged over all valid nights to produce a satellite-year dataset. These are the datasets that are distributed to the public. As a result, each satellite-year dataset reports annual light intensities for every pixel around the globe at a resolution of 30 by 30 arc seconds (approximately 0.86 square km at the equator) between 65 degrees S and 75 degrees N latitude. Data are released in three different versions: raw, stable lights, and calibrated. The stable lights version removes ephemeral events such as fires and background noise. The calibrated version is currently available only for 2006 and has the advantage of not being saturated (top-coded). Our analyses are based on the stable lights version. Data made available to the public by NOAA have been geo-referenced at national and regional levels by digital number (DN), using the administrative areas and boundaries (levels 0 and 1, respectively) provided in the form of shape-files by GADM, Version 3.6, available at https://www.gadm.org. A geolocation algorithm was used to map the data onto the 1 km grid developed for the NASA-USGS Global 1km AVHRR project (Eidenshink and Faundeen, 1994), which limits the geolocation error in the projection process. The light intensity values of the stable lights product are recorded in a fixed range of digital numbers (DN), from 0 (missing or completely dark) to 63 (bright). Sensor saturation implies that the satellites are not able to capture a light intensity higher than 63 DN. A small fraction of pixels, generally in rich and dense city areas, have DN values equal to 63. The saturation and blooming issues in DMSP/OLS NTL images are the main factors limiting their use. Imagery from the DMSP-OLS satellites tends to overestimate the lit area, an effect generally referred to as "blooming" in the literature. Blooming occurs when cells producing NTL cause lit pixels to extend beyond the source's true illuminated area. This phenomenon can be acute in OLS imagery and is more pervasive over water and snow areas, as these reflect nearby lights more than dark ground. Blooming should be of particular concern when examining coastal metropolises, since changes in brightness tend to be bigger in area than the associated land cover changes (Small and Elvidge, 2013). Typically, blooming is proportional to the sum of lights (SOL) emitted by a light source, such as an urban area. Sensor settings vary over time, across satellites, and with the age of a satellite, so that comparisons of raw DN across years can be problematic. This explains why satellites, in their very last years, are replaced by new ones, which accompany them for their last few years of life. That happened for all satellites but F16, which was substituted by the last orbiting satellite, F18, without an overlapping period. A map of night lights for Europe, including Eastern Europe and CIS countries, is shown in Figure 1 below. Figure 1 Plot of night lights of Europe (including Eastern Europe and CIS countries) There are several studies aimed at radiance calibration of DN over time across satellites (e.g., Li and Zhou 2017, and the literature cited therein).
Their goal is to make data comparable across time, creating a consistent time series of satellite observations that eliminates abrupt jumps in the series when passing from observations of one satellite to another. DMSP light data collected in different years (and satellites) may have variations in gain settings, sensor degradation, and changes in atmospheric conditions. We did not perform such calibration on the original data, but we control for such issues, whenever appropriate, by using panel regression estimations with fixed effects for time and satellites. Such estimations are able to take into consideration the differences in the capacity of satellites to detect light intensity due to obsolescence. For years with two satellite observations, the arithmetic average of the two outcomes is considered in the empirical applications. The DN is not exactly proportional to the physical amount of light received (called true radiance) for several reasons. The first is sensor saturation, which is analogous to top-coding. Further, the scaling factor ("gain") applied when converting the sensor signal into a digital number varies for reasons that are not explained, possibly to allow Air Force analysts to get clearer information on cloud cover. Unfortunately, the level of gain applied to the sensor is not recorded in the data. The DMSP night-time lights provide the longest continuous time series of global urban remote sensing products, now spanning 22 years. The flagship product is the stable lights, an annual cloud-free composite of the average digital brightness value for the detected lights, filtered to remove ephemeral lights and background noise. The follow-on to DMSP for global low-light imaging of the earth at night is the Visible Infrared Imaging Radiometer Suite (VIIRS) Day/Night Band (DNB), flown on the joint NASA-NOAA Suomi National Polar-orbiting Partnership satellite launched in 2011. These data are available for a shorter time period (data are indeed available on a monthly basis only from 2012 onwards, and annually only for 2015-2016), but they are of greater precision than DMSP images and are made available to the public in a very timely way, a few days after the end of each month. They offer substantial improvements in spatial resolution, radiometric calibration and usable dynamic range when compared to the DMSP low-light imaging data. VIIRS DNB key improvements over DMSP-OLS data include a vast reduction in the pixel footprint (15 arc seconds, about 500 m), uniform ground instantaneous field of view from nadir to edge of scan, lower detection limits, wider dynamic range, finer quantization, in-flight calibration and no saturation. Prior to averaging, the DNB data are filtered to exclude data impacted by stray light, lightning, lunar illumination, and cloud cover. Cloud cover is determined using the VIIRS Cloud Mask product. GDP, PPPs and Explanatory Variables The data on GDP are obtained from the World Bank, World Development Indicators database, which contains data by country on different measures of national accounts. Those used in this paper include current local currency unit data, current US dollar data, and data in PPPs (current international US dollars). PPP time series can be obtained implicitly by dividing current data in local currency by the corresponding data expressed in PPP, in current international US dollars. An alternative indicator often used as a proxy for GDP is electricity consumption.
We consider here an electric power consumption (kWh per capita) indicator obtained from the World Development Indicators database. Night lights data have been used to derive a number of indicators for our empirical analyses, as follows. Let us indicate with $x_k$ the light value, ranging from 0.5 to 63 (half-integer values arise when two satellite-year composites are averaged), and with $n_k$ the number of pixels with a value equal to $x_k$. The sum ($SOL$), mean ($\mu$), and standard deviation ($\sigma$) of lights are defined as $SOL = \sum_{k=1}^{K} n_k x_k$, $\mu = SOL / N$, and $\sigma = \sqrt{\tfrac{1}{N}\sum_{k=1}^{K} n_k (x_k - \mu)^2}$, with $N = \sum_{k=1}^{K} n_k$, where K may range from 63 (one satellite) to 126 (two satellites). Among the indicators constructed using night lights information, we considered a Gini night-light index. The index measures the extent to which the distribution of light intensities (in terms of DN) among pixels (the Lorenz curve of the traditional Gini index) deviates from a perfectly equal distribution. The Gini index measures the area between the Lorenz curve and this hypothetical line of absolute equality, expressed as a percentage of the maximum area under the line. Thus, a Gini index of 0 represents perfect equality, while an index of 1 implies perfect inequality. The data set used in the analyses also includes population data, which are extracted from the World Bank national and sub-national population total estimates of the de facto mid-year population at the national and first-level administrative division. We construct the Gini coefficient using only information from night lights as per the formula below, where the $x_k$'s represent values of lights and the $n_k$'s the numbers of pixels pertaining to those values: $G = \frac{\sum_{j=1}^{K}\sum_{k=1}^{K} n_j n_k |x_j - x_k|}{2 N^2 \mu}$. As an alternative to the Gini, we also consider as a concentration measure the Bonferroni inequality index (Bonferroni 1930), which is based on the comparison of the partial means and the general mean of the light distribution: $B = \frac{1}{N-1}\sum_{i=1}^{N-1}\frac{\mu - \mu_i}{\mu}$, where $\mu_i$ denotes the partial mean computed over the $i$ darkest pixels. Compared to the Gini, the Bonferroni index has a number of advantages, see e.g. Tarsitano (1989); most notably, it is more sensitive at the lower tail of the income (light) distribution, where indeed night lights are concentrated in our sample: this is a common feature of most countries around the world, see Henderson et al. (2012). Other less used measures of concentration considered in this paper are the Mean Log Deviation ($MLD$), and the first, second and third quartiles of the lights distribution, as well as the inter-quartile difference ($IQD$), which have straightforward definitions. We also follow other authors in considering, as possible explanatory variables, indices aimed at measuring the extent of urbanization in the countries analysed. In this respect, it is quite common to use a threshold of DN = 7 or DN = 10, e.g. Imhoff et al. (1997), as the values to discriminate between urban and non-urban areas. The first index, the Urban Light Index ($ULI$), has been proposed by Yi et al. (2014); it combines the brightness of lit pixels, which reflects the light intensity of each area, with the count of lit pixels above the chosen threshold, which reflects its weight. Finally, as an alternative indicator of urbanization, we consider in our analyses the Night Light Index ($NLI$) and its two subcomponents, the Mean Light Intensity Index ($MLII$), characterizing light intensity, and the Light Area Index ($LAI$), characterizing the spatial distribution of lights in each area. This is an index originally proposed by Yang et al. (2009), and it is intended to accommodate three main factors affecting the degree of urbanization: urban population, industrial structure, and built-up area distribution.
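As an illustration of the definitions above, the following is a minimal NumPy sketch (our own, not the authors' code) that computes the sum, mean, standard deviation, Gini and Bonferroni indices of lights from a 0-63 DN histogram such as the counts array built in the earlier sketch; the restriction to lit pixels and the grouped-data form of the Bonferroni index are our assumptions.

    import numpy as np

    def light_indicators(counts):
        dn = np.arange(len(counts), dtype=float)
        lit = dn > 0                                  # assumption: indicators computed on lit pixels only
        x, n = dn[lit], counts[lit].astype(float)
        N = n.sum()
        sol = (n * x).sum()                           # sum of lights (SOL)
        mu = sol / N                                  # mean of lights
        sd = np.sqrt((n * (x - mu) ** 2).sum() / N)   # standard deviation of lights
        # Gini index for grouped data: mean absolute difference normalized by 2*mu
        mad = (n * (np.abs(x[:, None] - x[None, :]) @ n)).sum()
        gini = mad / (2 * N ** 2 * mu)
        # Bonferroni index: average relative shortfall of the partial means of the darkest pixels
        cn, cx = np.cumsum(n), np.cumsum(n * x)       # DN values are already sorted in ascending order
        partial_means = cx / cn
        bonferroni = np.mean(1.0 - partial_means[:-1] / mu)   # grouped-data approximation
        return {"sol": sol, "mean": mu, "sd": sd, "gini": gini, "bonferroni": bonferroni}

Calling light_indicators(counts) on the histogram of one country-year returns the five statistics used as candidate regressors in the empirical part.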
The index and the components are defined as follows: the Mean Light Intensity Index is the ratio of the actual sum of lights to its maximum value (the value obtainable if all pixels were saturated), $MLII = SOL / (63 \cdot N_{tot})$, and represents a measure of light intensity; the Light Area Index is the percentage of lit pixels over the total area (lit and unlit) of the country, $LAI = N_{lit} / N_{tot}$. In computing the index, the sum of pixels at the denominator also includes those with DN = 0. A sense of our data-set is provided by the statistics for GDP and some derived night-lights indicators for the 17 Eastern Europe and CIS countries considered, as reported in Table 1. In 'Stan' countries, a high fraction of pixels, generally above 90%, is unlit. This is a characteristic in common with Russia, where the unlit area is 92.5% of the entire territory. By contrast, the lit area is predominant in most Eastern European countries, where the unlit fraction is much lower: Czechia (5.0%), Poland (17.3%), Slovakia (28.5%), Hungary (36.8%), Bulgaria (46.0%) and Romania (47.0%). Czechia, Poland and Slovakia show a relatively high degree of urbanization and, in general, percentages of lit pixels with DN above 10 larger than 35%. Top-coded areas are virtually non-existent in Kazakhstan, Moldova, Kyrgyzstan and Tajikistan. With the exception of Moldova, these countries show the lowest levels of population density in the whole region. Overall, higher values of mean lights tend to be associated with higher variability among frequencies. Higher mean values of lights are found in richer economies with top GDP per capita values in terms of PPPs (Czechia, Poland, Slovakia, Hungary and Russia), showing a clearly positive correlation between lights and GDP, which seems at odds with Martinez's (2019) finding of a negative relation between GDP and night-time lights. In this respect, electricity consumption growth rates seem less correlated with GDP changes than satellite information on average light growth, and give misleading indications over the whole period, e.g. in Azerbaijan, Moldova, Tajikistan and Uzbekistan. Among the poorest and relatively sparsely populated countries of the CIS, like Kyrgyzstan and Tajikistan, a large percentage of pixels is unlit, the average intensity of lights is low (below 8.0), the degree of urbanization shows the minimum values in the region, and top-coded areas are practically absent. While richer countries tend to have higher average digital numbers, geography and population density also play strong roles. The mean of lights reaches its peak in the richer economies, notably the Eastern European countries, which show the highest levels of the GDP indicators among the countries in the sample. For these countries, the indicators in Table 1 display a quite similar pattern: low percentages of unlit area, relatively strong urbanization levels and higher values of light concentration, average percentages of top-coded areas, and relatively high population density. TABLE 1 HERE Cross-section and panel comparisons usually perform better among countries with similar culture in terms of use of lights (e.g. energy-saving policies), geographical characteristics, population density, and top-coding magnitude. As clearly evidenced by the descriptive analyses above and the indications emerging from Table 1, this is not completely the case in our sample of countries, which nevertheless show, within their distinct trajectories and patterns, some sub-regional commonalities and trends, especially among the Eastern European and CIS countries.
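The shares summarized in Table 1 and the two NLI components can be obtained from the same histogram; the sketch below (again our own illustration) assumes DN = 0 marks unlit pixels, DN = 63 top-coded pixels, and DN > 10 the urban threshold.

    import numpy as np

    def area_shares(counts, threshold=10):
        counts = counts.astype(float)                 # expects 64 bins, DN = 0..63
        total = counts.sum()                          # all pixels, lit and unlit
        lit = counts[1:].sum()
        return {
            "unlit_share": counts[0] / total,
            "topcoded_share": counts[63] / total,
            "lit_over_threshold_share": counts[threshold + 1:].sum() / lit,
            "LAI": lit / total,                                        # lit pixels over total area
            "MLII": (np.arange(64) * counts).sum() / (63.0 * total),   # actual vs maximum possible lights
        }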
In the empirical part of this work, we will also explore whether changes in dispersion measures (like the Gini and the Bonferroni indices, the inter-quartile difference as well as the standard deviation of lights), the degree of urbanization, and the fractions of unlit and top-coded area contribute additionally to modelling and forecasting GDP growth and PPP measures and to mapping them at sub-national levels. Model and Empirical Results The analytical approach used here is similar to the one proposed in Henderson et al. (2012), who in their pioneering work used a panel model with country and year effects to predict GDP at the international level through night lights, where country effects controlled for factors like lighting technology and investment in outdoor lighting, whereas year effects monitored differences in light sensitivity across the satellites and changes in global external conditions, like technology and economic conditions. In our applications, we estimate a panel model where the dependent variable, $y_{i,t}$, represents GDP, and the $x_{i,t}$ are the explanatory variables, defined through different night-lights metrics, population and energy consumption data. The measures of GDP considered are those that allow us, based on model-fitted data, to estimate PPP measures, which are not assumed to be directly related to measures of night lights. These are GDP in current US dollars and in PPP. Concretely, the various steps followed in the analyses are as follows: (a) Identification of the best performing series in each group of night-light-based indicators (standard measures, dispersion indices, measures of urbanization, other series, including population and energy consumption) using pooled regressions; (b) Estimation of panel data models for both GDP in current US dollars and PPP (current international US dollars) with national data for the 17 countries; (c) Conversion of the estimated values of the model for GDP in current US dollars to local currency; (d) Derivation of implicit PPP estimates from the two models and comparison with World Bank PPP time series estimates; (e) Application of the coefficients obtained with the estimation of the national model for GDP in current US dollars to sub-national night-lights indicators available at NUTS 1 level; (f) Comparison of the data estimated in step (e) with the official regional data published by countries to verify the existence of a modifiable areal unit problem (MAUP); and (g) Use of the estimated coefficients to obtain further spatial disaggregation of the series of interest, namely GDP and PPP. Let us analyse, step by step, how the procedure above was carried out for our data-set. Preliminary analyses of our two GDP series suggest that, based on Pesaran (2007) CIPS tests, the panel should be estimated in first differences. The preliminary analyses made using pooled regressions of the rate of growth against standard measures of lights (sum and mean in log terms, and the corresponding per-capita values), dispersion measures (the Gini and the Bonferroni indices, the mean log deviation, the inter-quartile difference as well as the standard deviation of lights), different measures of urbanization (the night light index, NLI, and its two components, and the urban light index, ULI, with lower threshold at DN = 7 or 10), population density, as well as energy consumption, show that the series performing better are the sum of lights per capita, the ratio of standard deviation to mean of lights, the Gini concentration index, and the urban light index with threshold DN = 10.
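Step (a) and the panel estimations of step (b) can be sketched as follows; the variable names, the synthetic data and the use of statsmodels' formula interface with country and year dummies are illustrative assumptions, not the authors' code, and the random-effects counterpart used for the Hausman comparison would require a dedicated panel package (e.g., linearmodels).

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    # Synthetic long-format panel standing in for the 17 countries x 14 years of real data
    rng = np.random.default_rng(0)
    country = np.repeat(["A", "B", "C"], 4)
    year = np.tile([2001, 2002, 2003, 2004], 3)
    dlog_sol_pc = rng.normal(0.03, 0.02, 12)          # growth of the sum of lights per capita
    gini = rng.uniform(0.5, 0.7, 12)                  # night-light Gini index
    dlog_gdp = 0.01 + 0.8 * dlog_sol_pc + rng.normal(0, 0.005, 12)
    df = pd.DataFrame({"country": country, "year": year, "dlog_gdp": dlog_gdp,
                       "dlog_sol_pc": dlog_sol_pc, "gini": gini})

    pooled = smf.ols("dlog_gdp ~ dlog_sol_pc + gini", data=df).fit()                          # step (a)
    two_way = smf.ols("dlog_gdp ~ dlog_sol_pc + gini + C(country) + C(year)", data=df).fit()  # step (b)
    print(pooled.params, two_way.params, sep="\n")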
Given the upward trend characterizing lights and GDP data, both series are expressed in log-differences in our panels, while other series, given their bounded nature, are considered in level form. A flavour of our data-set, composed of 221 observations when expressed in growth rates, is provided in the conditioning plots reported in Figures 2 and 3. The bars at the top indicate the corresponding panels (by year), from left to right starting from the bottom. Further insights on the heterogeneity across years and countries are provided in Figure 4, where a certain degree of country and time heterogeneity emerges over the reference period. The results obtained from the estimation of pooled linear regression, random and fixed effects models, reported in Table 2, show a strong significance of the exogenous variables identified above for both GDP series. The Hausman test statistics of 6.827 and 7.160, respectively for GDP in PPP (current international US dollars) and GDP at current US dollars, with their associated p-values of 0.145 and 0.128, lead to not rejecting the null hypothesis that the random effects model is appropriate against the fixed effects alternative, while Breusch-Pagan Lagrange Multiplier tests for random effects uniformly reject the null that variances across entities are zero in all models considered. TABLE 2 HERE After conversion of the estimated values of the model for GDP in current US dollars to local currency using the official exchange rates available in the World Bank database, implicit PPP estimates are obtained from the two models using the coefficients reported in Table 2, and comparisons are made with World Bank PPP time series estimates, after converting the estimated growth rates back to level variables for the two measures of GDP. The correlation of the two series with official PPPs is strong, equal to 0.981 over the whole 2000-2013 sample and across all data obtained for the 17 countries. Similar results are obtained for the GDP measures used in the analyses. When the estimated coefficients are applied to sub-national lights indicators at the level-1 administrative official boundaries, and the results are compared with the available data on GDP at current US dollars, the correlation between the official and estimated level data shows values close to those obtained for PPPs (step (f) of our procedure), thus making it feasible to proceed to step (g). Maps of GDP and PPP data are not reported here, but will be included, in the form of summary choropleth maps, in an extended and completed version of this paper. Conclusions Spatially disaggregated maps of GDP and PPP, especially if updated on an annual basis or at higher frequency, would be extremely beneficial for tracking the effectiveness of policy efforts in specific areas or, for example, for evaluating the consequences of natural disasters, conflicts, or for other general policy purposes. Satellite images in the form of night lights could help in better understanding those economic phenomena and their spatio-temporal dynamics. The sub-national analyses carried out in this paper had a twofold objective: first, to examine the feasibility of applying the country-level approach to the sub-national level on a country-by-country basis, and second, to explore the opportunity of using global/regional models for countries where sub-national data on GDP are either missing or deemed to be unreliable.
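Steps (c) and (d) amount to simple arithmetic once the fitted values are available; a minimal sketch with hypothetical numbers for a single country follows.

    import numpy as np

    # Hypothetical fitted values for one country over three years (illustration only)
    gdp_usd_hat = np.array([50.0, 55.0, 60.0])      # fitted GDP, billions of current US$
    gdp_ppp_hat = np.array([120.0, 130.0, 142.0])   # fitted GDP, billions of PPP international $
    lcu_per_usd = np.array([3.0, 3.1, 3.2])         # official exchange rate, LCU per US$

    gdp_lcu_hat = gdp_usd_hat * lcu_per_usd         # step (c): convert fitted US$ GDP to local currency
    ppp_hat = gdp_lcu_hat / gdp_ppp_hat             # step (d): implicit PPP factor, LCU per international $
    print(ppp_hat)

The implicit series obtained this way is the one compared with the World Bank PPP time series.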
Furthermore, attempts to estimate sub-national PPP data, although important for identifying spatial price dynamics and for measuring, amongst other things, poverty lines at the sub-national level, have so far provided quite unsatisfactory results, whilst obtaining such information through traditional approaches is practically unfeasible for cost reasons. The analyses and outcomes of this research rest on the assumption that the coefficients describing GDP at the national level continue to be of use at the finer disaggregated geographical level. MAUP is a well-known problem in geography and spatial analysis. However, there is scarce research on MAUP's impact in studies that make extensive use of satellite images, particularly those obtained from DMSP images, see e.g. Chen and Nordhaus (2019). Indeed, the majority of the literature on socio-economic spatial disaggregation through night lights rests on the assumption of negligible MAUP. This is indeed a line for future research on the GDP-PPP-nighttime images relation, possibly with the use of sensitivity analysis. While the OLS is remarkable for its detection of dim lighting over a long time span, the quality of its mapping products could be improved in a number of ways. The main shortcomings of the OLS data include the following, in part resolved by the introduction of the new VIIRS products: (a) coarse spatial resolution; (b) lack of on-board calibration; (c) limited dynamic range; (d) signal saturation in densely populated urban centres; (e) limited data recording and download capabilities; and (f) lack of multiple spectral bands for discriminating lighting types. The use of VIIRS data could clearly improve on the results presented in this paper, permitting estimation and updating of maps at higher frequencies, but longer time series of data would be necessary to obtain sufficient information for use in a panel framework. The research could also be expanded by analysing images captured by other, non-US satellites. European earth observation data are another invaluable source of statistical information, with Copernicus being perhaps the most ambitious earth observation programme to date. This initiative, headed by the European Commission in partnership with the European Space Agency, is currently providing accurate, timely and easily accessible information. Its use for Sustainable Development Goals monitoring and reporting is still in a preliminary phase, but an enormous amount of information awaits investigation to help shape the future of our planet for the benefit of all, leaving no one behind.
7,334.6
2019-01-01T00:00:00.000
[ "Economics", "Environmental Science", "Geography" ]
Transparency-Aware Segmentation of Glass Objects to Train RGB-Based Pose Estimators Robotic manipulation requires object pose knowledge for the objects of interest. In order to perform typical household chores, a robot needs to be able to estimate 6D poses for objects such as water glasses or salad bowls. This is especially difficult for glass objects, as for these, depth data are mostly disturbed, and in RGB images, occluded objects are still visible. Thus, in this paper, we propose to redefine the ground-truth for training RGB-based pose estimators in two ways: (a) we apply a transparency-aware multisegmentation, in which an image pixel can belong to more than one object, and (b) we use transparency-aware bounding boxes, which always enclose whole objects, even if parts of an object are formally occluded by another object. The latter approach ensures that the size and scale of an object remain more consistent across different images. We train our pose estimator, which was originally designed for opaque objects, with three different ground-truth types on the ClearPose dataset. Just by changing the training data to our transparency-aware segmentation, with no additional glass-specific feature changes in the estimator, the ADD-S AUC value increases by 4.3%. Such a multisegmentation can be created for every dataset that provides a 3D model of the object and its ground-truth pose. Introduction Household robotics is a challenging topic as it includes a lot of different tasks in a dynamic environment. For example, robots should be able to set the table, fetch objects, and place them in different places. To be able to manipulate any object, the robot needs to know that the object exists and where it is located in the world. This can be achieved, for example, with cameras perceiving color and depth information of the environment. Many works have been published about 6D pose estimation of opaque objects in different scenarios [1][2][3][4][5][6][7][8][9]. However, humans tend to use drinking devices, salad bowls, or decanters made out of glass; thus, the robot needs to be able to cope with transparent objects as well as opaque objects. There are two big differences between those objects: depth data are often unreliable, and clutter has a different effect. The problem with the depth data is that normal depth cameras rely on infrared light and laser range finders use high-frequency light. This may work fine with opaque objects where the light can be reflected but not really with transparent objects, as light just passes through and is either lost or reflected by objects or walls somewhere behind the actual object. This can distort the depth data quite a lot. There are different common approaches to tackle this problem: trying to reconstruct the depth data, as performed for example in [10][11][12], simply trying to solve the 6D pose estimation with stereo RGB images [13], or by just using single RGB images [14,15]. The latter is what our work focuses on.
Another difference between opaque and transparent objects is that clutter might not have the same visual effect. If an opaque object stands in front of another object, it covers the other object to a certain degree, and the object behind is only partly visible. With transparent objects, however, this is not always the case. A big glass object in front of another usually only covers small parts on its edges where the light reflects differently. But because its main part is just transparent, it is possible to see the object behind it quite well. Figure 1 highlights that the big salad bowl on the table (Figure 1a) stands in front of four other objects, but those are still visible, as a closer look at the wine glass in Figure 1c shows. Nevertheless, the ground-truth segmentation and the resulting bounding box (Figure 1b) are, by definition, only on the directly visible part of the object. This adds another difficulty for two-stage approaches. Usually, they first detect the object within a bounding box and scale the bounding box to a fixed size to then run a second network on that image. Now, if an object is largely occluded, the remaining part is scaled (Figure 1b), and consequently, the object has a much different scale than if not occluded. It might be quite helpful to prevent the wrong scaling and extend the bounding box to the whole object (Figure 1c) and teach the network that the hidden but visible part also belongs to the object. Thus, we propose to rethink the bounding box and segmentation for transparent clutter. We call this transparency-aware segmentation (TAS) and transparency-aware bounding box (TAB). The contributions of this paper are to
• modify ground-truth segmentation and bounding boxes to cover whole objects in cluttered transparent scenes,
• compare training with transparency-aware and original segmentation and its influence on the pose,
• apply our approach to a cluttered challenging glass scene using only RGB data without modifications specific to the glass dataset.
Related Work As knowing the 6D pose of an object is probably essential for any robotic manipulation, there exist various methods to estimate it. They can be differentiated between direct methods that directly output a 6D pose, such as PoseNet [17] and GDR-Net (geometry-guided direct regression) [18], or indirect methods that produce intermediate outputs and use RANSAC or PnP solvers to estimate the pose [1,15]. Another conceptual difference is the number of input images. Zebrapose [1] is a single-image approach that involves splitting the object into multiple regions with several sub-regions, interpreting every region as a certain class and as a multi-class classification problem before solving the resulting correspondences via a PnP solver. A multiple-view approach is, e.g., CosyPose [6], which uses a single image for initial object candidates, matches those candidates across other views, and globally refines object and camera poses. Another approach improving most results is to include depth data. FFB6D (full-flow bidirectional fusion network for 6D pose estimation) continuously fuses information from RGB and depth data within its encoder and decoder structured networks, followed by 3D key-point detection, instance segmentation and least-squares fitting [3].
All those methods, as well as most other work in this area, focus on texture-less, cluttered, and opaque objects; however, only a few works have been published specifically on transparent objects. As pointed out before, transparent objects face some further difficulties. There are some approaches using depth data, with some work on depth completion, as in [10], which uses affordance maps to reconstruct the depth data. Another approach is made by Xu et al. [11]. They try to reconstruct the depth image via surface normal recovery from the RGB image and use this to calculate an extended point cloud. Using the RGB image and the constructed point cloud, another network estimates the final pose. Because of this depth problem, there are also direct methods that use only RGB images, like [14]. They adapt the idea of GDR-Net to use geometric features of objects and fuse those with edge features and surface fragments specific to transparent objects. From these fused features, a dense coordinate and confidence map is obtained and processed by GDR's patch-PnP. An indirect approach is [15] by Byambaa et al. They also take only a single RGB image, but apply a two-stage approach where a neural network extracts 2D key points, followed by a PnP algorithm. These works have in common that they adapt their networks somehow to the specific characteristics of transparent materials, making them more specific and less suitable for general-purpose object pose estimation. We want to keep our network and training structure identical to previous work and rather change the ground-truth definition of the training data itself in a glass-dependent way. The idea is to enhance pose estimation on transparent objects without reducing the generality of the estimator. In previous work, our group has proposed an indirect general approach for RGB and RGB-D images to estimate 6D poses [4] with a specific focus on symmetrical objects [19]. It splits the process into two stages, based on the idea that neural networks have been proven to predict well which object is where in the image and that mathematical algorithms can solve 2D-3D correspondence problems. In the first stage, a neural network is trained to predict a segmentation, an object image, and an uncertainty map of the predicted object image. This object image represents in each pixel the corresponding 3D point of the object in object coordinates. In the second stage, these learned correspondences of the 2D pixels with the 3D object points within the object image are solved by a generalized PnP algorithm, taking the uncertainty into account. This approach works well on symmetrical and opaque objects. In contrast to Xu et al., we do not change the structural training network and extend it with specific features for glass but rather reuse our general approach and simply adjust the training data to achieve better results on transparent objects.
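To make the two-stage idea concrete, the following is a minimal sketch of the second stage using OpenCV's standard RANSAC PnP solver as a stand-in for the uncertainty-weighted gPnP described in [4,19]; the inputs (segmentation mask, object image, camera intrinsics) are hypothetical arrays, and the uncertainty map is ignored in this simplified version.

    import cv2
    import numpy as np

    def pose_from_object_image(seg, obj_img, K):
        """Recover a 6D pose from the per-pixel 2D-3D correspondences of an object image.

        seg     : HxW boolean mask of pixels assigned to the object
        obj_img : HxWx3 array of 3D points in object coordinates, one per pixel
        K       : 3x3 camera intrinsics matrix
        """
        ys, xs = np.nonzero(seg)
        pts_3d = obj_img[ys, xs].astype(np.float64)
        pts_2d = np.stack([xs, ys], axis=1).astype(np.float64)
        ok, rvec, tvec, inliers = cv2.solvePnPRansac(
            pts_3d, pts_2d, K.astype(np.float64), None, reprojectionError=3.0)
        R, _ = cv2.Rodrigues(rvec)                # rotation matrix; tvec is the translation
        return ok, R, tvec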
Definitions for bounding boxes and segmentation of objects, which are necessary for most training processes in the area of 6D pose estimation, are hardly mentioned and discussed. Sometimes, they are implicitly displayed in figures [14] or encoded in the choice of the object detector [1], but rarely explicitly declared or discussed. In the case of bounding boxes, most works estimate axis-aligned bounding boxes, as in [20][21][22], and only few estimate object-oriented bounding boxes [23,24], but to our knowledge, there is no discussion about segmentation and the fact that one pixel can belong to more than one object, e.g., in the case of transparent objects or scenes seen through a window. Approach We want to investigate whether changing the training data considering the specialities of transparent objects enhances the pose estimation without having to adapt the network with special glass features. To achieve this, we use a multisegmentation. Conceptually speaking, we extend the normal segmentation mask, where each pixel is assigned to exactly one object or background, by assigning multiple objects to pixels in cases where clutter normally hides object pixels. The idea is visualized in Figure 2b, where the fully colored pixels represent the original ground-truth segmentation, and the object border indicates that the objects continue. The pixels within the borders belong to all overlapping objects, not just to the object in front. Training In the training processes, sketched in Figure 3, we consider three different variants in the segmentation and bounding box process. We want to investigate and assess the effect of a consistent size and scale of the object (transparency-aware bounding box), as well as of the transparency-aware segmentation definition. Figure 4 visualizes the three differences. In the first variant (Figure 4a), the original ground-truth mask and the resulting bounding box are used. The second variant (Figure 4b) also uses the ground-truth mask, but the transparency-aware bounding box (TAB) encloses the whole object plus an additional five pixels to every side. This ensures a consistent size and scale of the object and adds some more context around the object. The third training set is built by using the transparency-aware segmentation (TAS) mask and bounding box (Figure 4c), including even the formally occluded parts of the object. As glass is transparent, even though there might be some glass in front of the object of interest, the object of interest is still partly visible. Only the edges of the occluding objects really disturb the image. Having a fixed quadratic input size of the networks requires using the bounding boxes and calculating a region of interest that fits the networks' needs. As in our previous work [4], we use the longest side of the bounding box to fit a square around the object (uniform scaling). The resulting ROI is used to crop the segmentation, RGB image, and our ground-truth object image. The images are either upscaled or downscaled to meet the second stage input size. Then, the two networks are trained independently of each other. In the training phase, a multisegmentation is used to create our novel segmentation masks and bounding boxes, which is our main contribution (blue box). The dimensions of the bounding box are used to derive a squared region of interest. This region is used to crop the RGB input image as well as the ground-truth object image and segmentation. The two networks are trained independently, the segmentation network is trained to predict a segmentation
map based on the defined ground truth, and the object image network is trained on the ground-truth object image. As its loss is on a pixel level, the ground-truth segmentation mask is used to only account for the loss for the pixels that are actually part of the object. Prediction At prediction time, an external set of bounding boxes of specific objects of the complete image is needed. To obtain these bounding boxes, any object detection algorithm can be used. This bounding box detection is a slightly different problem and out of the scope of this paper. Many methods were developed to find and classify specific objects within an image and produce bounding boxes [20][21][22][25,26]. These boxes can then be used to find more attributes like the color, materials, a text description of the object, and the 6D pose of the object. We focus on 6D pose estimation. Thus, for the prediction phase in our approach, we use the ground-truth boxes and the transparency-aware boxes, respectively, for the network training. For a real-life application, any object detector could be connected in front of our workflow, which is visualized in Figure 5. The object image network estimates the 3D points in object coordinates and an uncertainty map in pixel space; these two are used pixel-wise to solve the 2D-3D correspondence with the gPnP, while only the pixels that the segmentation network assigns to the object are taken into account. This resolves into a 6D pose for this object. Dataset We chose the ClearPose dataset [16] to test our approach because it contains several real image scenes, in total over 350,000 annotated images, with many transparent, symmetric objects within cluttered scenes and annotated ground-truth segmentation maps and ground-truth poses. Other known available real-world datasets are ClearGrasp [27] and TransCG [28], which use plastic glasses instead of realistic real glass and include almost no glass clutter, making them uninteresting for our comparison. The complete ClearPose dataset includes several sets of scenes of different types, for example, normal objects placed jumbled on a table but also objects with filled liquors or with an additional translucent cover. We focus on set number 4. Set 4 includes about 43,000 RGB-D images with segmentation masks and poses. It consists of 6 scenes, all of them including the same 12 different objects: drinking glasses, bottles, bowls, and a knife and a fork, either made of glass or plastic, placed on a table. The difference between the scenes is their colorful tablecloth and the arrangement of the objects. This arrangement is random but physically correct, i.e., glasses are standing upright or upside down or lying flat on the table but do not balance on edges or lean on each other. The lighting seems to be realistic indoor room light, produced by fluorescent tubes and spotlights. To obtain the different images, a camera was moved around the table at different heights. This results in different light reflections and blurriness in the images. Sometimes, the unstructured environment of the lab where the images were taken is visible, depending on the camera position.
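A minimal NumPy sketch of the transparency-aware bounding box and the squared region of interest described above follows; it assumes full_mask is the rendered, unoccluded binary mask of one object and omits clipping at the image borders for brevity.

    import numpy as np

    def tab_and_square_roi(full_mask, margin=5):
        ys, xs = np.nonzero(full_mask)
        y0, y1 = ys.min() - margin, ys.max() + margin     # transparency-aware bounding box (TAB)
        x0, x1 = xs.min() - margin, xs.max() + margin     # with a 5-pixel margin on every side
        side = max(y1 - y0, x1 - x0)                      # uniform scaling: square from the longest side
        cy, cx = (y0 + y1) // 2, (x0 + x1) // 2
        half = side // 2
        return cy - half, cx - half, cy - half + side, cx - half + side   # top, left, bottom, right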
The ground-truth data were generated by a digital twin scene of the real scene and an estimated camera movement using ORB-SLAM3. With this information, the ground-truth segmentation and the true-depth image were rendered. This makes total sense for opaque objects but not so much for transparent objects. As visible in Figure 1, some pixels show different objects at the same time, e.g., where the wine glass is behind the bowl. The ground-truth segmentation is a gray image, with only one channel, encoding for each pixel whether it is background or belongs to a specific object, but it does not represent multiple objects at the same pixel. Thus, for our experiment, we extended the ground-truth segmentation mask by using the provided 6D pose of the object and the provided 3D model and rendered a transparency-aware segmentation for each object in each image. For training, we used the included ground-truth and our own multisegmentation. Figure 2 shows one example image of the test set and our rendered multisegmentation. Experiments To evaluate our idea, we built expert networks for each object in Set 4, meaning we have one network for each object. In theory, this enables us to be unlimited in the number of objects we can process. However, in practice, this is limited by the memory needed. As we use these networks to evaluate Set 4 of the ClearPose dataset, we have 12 networks, one for each object, which we evaluate sequentially. Network Architecture Our network is not new, but for the sake of completeness, we would like to present the used network architecture. In contrast to our approach in [19], we separate the original network into two networks for the segmentation and for the object image and train both of them independently. Although both have the same structural encoder (Table 1) with three layers of a pre-trained MobileNet and additional convolution layers, the decoder (Table 2) is different. This is schematically shown in Figure 6. The segmentation network predicts the segmentation, and the object image network predicts the object image, as well as an uncertainty map representing the uncertainty of each pixel in image space. The object image network from Figure 3 is in detail divided into one encoder and three heads as sketched in Figure 6. Table 2 shows the used structure of layers for each head, where they only differ in their output layer. Training We trained our networks on three NVIDIA A40 and two NVIDIA TITAN V GPUs using TensorFlow 2.11 and Python 3.8. To train the segmentation networks, we used a batch size of 64 and Adam as the optimizer with a fixed learning rate of 0.0001 and trained for 300 epochs. The object image network was trained with a batch size of 128 and used Adam as the optimizer with a fixed learning rate of 0.001 for 100 epochs on the object image and for another 200 epochs on the uncertainty map. The differences between the learning parameters resulted from the different computational capacities of the machines. For each training set, we trained both networks for all objects, keeping all structural parameters the same, with no extra modification for any object or training set. The networks have an input size of 224 × 224 and produce a segmentation mask, an object image, and an uncertainty map, each of size 112 × 112. Training and evaluation are performed on these three different training sets for each individual object to investigate the differences in 6D pose estimation. For a real-life application, an additional bounding box detector would be necessary.
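How such a multisegmentation can be stored is sketched below; render_mask is a hypothetical helper that rasterizes the full silhouette of a 3D model under its ground-truth pose (any off-screen renderer could play this role), and the result is simply a boolean stack in which one pixel may belong to several transparent objects at once.

    import numpy as np

    def build_multisegmentation(models, poses, K, image_shape):
        # One full, unoccluded mask per object, rendered from its 3D model and 6D pose
        layers = [render_mask(model, pose, K, image_shape) for model, pose in zip(models, poses)]
        multiseg = np.stack(layers, axis=-1)   # H x W x N_objects, True where the object covers the pixel
        return multiseg                        # the TAS mask of object i is multiseg[..., i]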
As a metric, we use a symmetry-aware average point distance error (ADD-S) because almost all objects are somehow symmetrical. Only the knife and fork do not have a rotation symmetry axis. This is based on the pairwise average distance (ADD) [29] but includes the fact that points are ambiguous. Pure ADD would require the network to learn to guess the annotated rotation, which is arbitrarily set by the annotation. ADD-S is suitable to handle symmetrical and asymmetrical objects. We calculate the area under the curve (AUC) for thresholds from 0.0 m to 0.1 m, as in [16]. Results We trained both networks separately. The segmentation network can not only give insights into its individual performance but also into whether the network structure is suitable to identify the object at all. As stated before, the encoder structure of both networks is identical, and only the decoder differs. Segmentation The segmentation networks were trained for each object individually, each on the three different training sets: original, transparency-aware bounding box (TAB), and transparency-aware segmentation including the bounding box (TAS). Table 4 displays the recall and precision values on a pixel level averaged over all objects. The highest values were achieved by our transparency-aware segmentation. However, it is interesting that the transparency-aware bounding boxes have the lowest values. This might be because the additional context, without TAS, adds many more background pixels, producing a slight bias towards background: the precision is similar, but the recall values are worse. Overall, this also means that our network structure is, in general, suitable to detect object pixels within the image. Otherwise, the segmentation would not be as good. 6D Pose Estimation For the pose estimation, we use the prediction stage as displayed in Figure 5. The bounding boxes are either based on the original segmentation or the transparency-aware segmentation, depending on what data the respective network was trained on. The overall results on Set 4 are displayed in Table 5. Changing the ground-truth definition from original to TAS increases the AUC by 4.33% over all objects. In order to compare our results with a reference value on this glass data set and to put them into context, we compared them to Xu et al. [11] and FFB6D [3]. However, we do not claim to compete against their full approaches, as we use ground-truth bounding boxes without using any object detector. Even though [14,15] are methodologically more similar to our approach, we cannot compare ourselves to them as they use different data sets with little to no occlusion. Xu et al.
developed a network with specific glass features, and FFB6D was a state-of-the-art network for pose estimation at the time when the dataset was published. Both have better results; however, both use RGB-D data, whereas our network only uses RGB data. In previous work, including depth data achieved an increase of up to 24% [4]. When FFB6D was trained and tested with ground-truth depth data, it achieved an improvement of almost 15%. To highlight the difficulty of the dataset, we show two example images and the resulting pose in Figure 7. The networks used were trained with TAS. We evaluated our networks sequentially, and a 6D estimation prediction step took 0.175 s on average. This includes the segmentation and object image networks as well as the complete gPnP stage for all objects in the image. However, this excludes the time needed to load the networks, which would be necessary if the RAM is not big enough to keep all needed networks in memory. In our experiments, we needed around 42 GB of RAM to have all 12 networks running at the same time. It is worth noting that an additional object detector would also need some resources in a real-world application. To distinguish and have a more fine-grained result, in Figure 8 we calculated the ADD-S accuracy-threshold curve for all objects up to a threshold of 0.1 m, which is the same as the threshold for the ADD-S AUC. Interestingly, the results are quite different for different objects. It seems to work best on tall objects with a continuous symmetry axis. Not surprising is the fact that the knife and fork do not perform well. The special properties of these two objects are discussed in detail in Section 6.4. As our network could not deal with these two objects, we omit them in the next evaluation step. Dependence on Level of Occlusion To investigate the effect of the different training data, we performed an ablation study comparing the original bounding box and segmentation approach to our TAB and TAS. To obtain a more fine-grained result in relation to covered glass, we split the test data by their visibility using the ratio of the number of originally assigned ground-truth pixels to the number of TAS-rendered segmentation pixels (Figure 9). It is noticeable that our transparency-aware segmentation increases the AUC by 20% in the range of visibility up to 40% compared to the original. The strong effect levels out as more of the object becomes visible, which makes sense because the training data become similar. It is also noteworthy that the TAB also increases the AUC by up to 14% in the range of 20-40% visibility and also performs better on all other images compared to the original. It seems like having the right object size and scale helps obtain a good object image, which encodes the geometric form, because even though the segmentation is worse than in the original approach, the 6D pose is significantly better, up to a visibility of 80%. This effect vanishes in the last scenario with 80-100% visibility. The difference in the aspect ratios of the two versions is not that big anymore as the original also encloses the whole object. The only real difference is the additional context. This was created by adding five pixels in each direction. The results indicate that this has only little influence, as original and TAB are almost equivalent. An interesting effect occurs in the area 40-80%. The TAB achieves higher values than our TAS training data. Manual insights into test data in this range (see four examples in Figure 10) indicate that in these images, there
is a lot of overlapping, disturbing the view of the object behind, as the edges reflect the light differently. This can result in actual hiding, as in Figure 10a, where the right part is just not visible anymore, or in deforming the object, as in Figure 10b, where the bottom of the bowl visually bends as seen through the glass in front of it. In these cases, it seems logical that the TAS might be too big, covering parts that do not belong to the object visually. Knife and Fork There are two very challenging objects within the dataset: a transparent knife and fork (see examples in Figure 11). Both objects were hardly recognized by our network and did not profit from the different training settings. These two objects probably profit the most from the depth data, as they always lie flat on the table. Even if the depth camera does not recognize the object, it will detect the table underneath it. The fork has a maximum height of 15 mm and the knife of about 2 mm, meaning that the table depth is still a good approximation of the depth data for these objects. But without depth data, in RGB-only networks like ours, these objects could be anywhere in the room, making them really hard to predict. If we exclude these two objects, we obtain an AUC value of 43.26% with the original input and 48.36% for our transparency-aware approach. Conclusions We introduced a new way of defining segmentation masks and bounding boxes for transparent objects, being aware of the characteristics of glass and other transparent materials, to improve RGB-based pose estimation without having to adapt the network itself. With only a little effort, we render the multisegmentation mask, which can be used to create transparency-aware segmentation and bounding boxes. This can be achieved independently of the dataset if 3D object models and ground-truth poses are available. Using these instead of the original training data in the training process benefits the 6D pose estimation of a given RGB estimator if the objects can be recognized by the network at all. This is particularly true when there is strong occlusion and objects are only partially visible according to the original definition. One downside of our approach is that a 3D object model and a ground-truth pose have to exist to generate the transparency-aware segmentation mask. This is not as easily applicable if the objects of interest are parameterizable or articulated without a fixed model. However, most relevant use cases still involve fixed and rigid objects; thus, it seems like a good and easy option to increase the accuracy of 6D pose estimation for transparent objects just by adapting the training set. The success of TAB suggests the hypothesis that presenting objects at a consistent scale, even when they are occluded in the training data (as they are with the original segmentation), might also help 6D pose estimation on opaque objects. Figure 1. Example crops from a scene in the ClearPose [16] dataset with typical glass clutter: a wide overview of the scene (a), the original bounding box (b) and an object transparency-aware bounding box (c), as well as their resulting regions of interest for the second stage.
The implementation is performed as one segmentation image for each object individually within the image. Modifying and testing different training data with our indirectly working pose estimator requires different processes for training and prediction. Figure 2. Example RGB image of the dataset (a) and the corresponding multisegmentation mask (b), where the fully colored pixels represent the original ground-truth segmentation, and the object border indicates that the objects continue. The pixels within the borders belong to all overlapping objects, not just to the object in front. Figure 3. In the training phase, a multisegmentation is used to create our novel segmentation masks and bounding boxes, which is our main contribution (blue box). The dimensions of the bounding box are used to derive a squared region of interest. This region is used to crop the RGB input image as well as the ground-truth object image and segmentation. The two networks are trained independently, the segmentation network is trained to predict a segmentation map based on the defined ground truth, and the object image network is trained on the ground-truth object image. As its loss is on a pixel level, the ground-truth segmentation mask is used to only account for the loss for the pixels that are actually part of the object. Figure 4. Different mask and bounding box sizes for the RGB image crops of Figure 1. (a) Displays the original mask and resulting bounding box, and (b) also uses the original mask but the transparency-aware bounding box. The right image (c) displays the transparency-aware segmentation and its resulting bounding box. Figure 5. In the prediction phase, a bounding box for the object of interest is needed. It can come from any object detector. The dimensions of the bounding box are used to create a square around the object and use this as a region of interest for further processing. The incoming RGB image is cropped and scaled by this ROI and fed into a pre-trained segmentation and object image network. The object image network estimates the 3D points in object coordinates and an uncertainty map in pixel space. These two are used pixel-wise to solve the 2D-3D correspondence with the gPnP, while only the pixels are taken into account that belong to the object estimated by the segmentation network. This resolves into a 6D pose for this object. Figure 6. A more detailed sketch of our used network structure. The object image network has three heads. p_O* and p_O represent two different aspects of the object image points p_O, which is described in more detail in [19]. As this is just a technical detail, we will refer to p_O* and p_O together as an object image in the following. The pixel weight w_px estimates the uncertainty in pixel space. In the first training stage, only the object images (p_O* and p_O) are trained. In the second stage, the weights of the encoder are set and only the pixel weight head (w_px) is trained. The segmentation network has a simple common encoder-decoder with a shortcut structure. The layers of the encoder and decoder are listed in Tables 1 and 2. Figure 7.
Two examples of difficult images where the estimated pose is rendered as a green contour on top of the RGB image. In (a), the wine glass is almost completely covered by another wine glass with reflections from the fluorescent tubes and has another glass in the back, which also causes some edge distractions. The salad bowl (b) has multiple objects in the front and much background clutter. Figure 9. Results of the different training data distributed over the visibility of the objects, e.g., the bar of 0-20% visibility includes all images where the object is, based on the ground truth, only up to 20% visible. Figure 11. Example images of the fork (a,b) and the knife (c,d). Even if only the object is present, with no distractions (a), it can be really hard to see the fork (it is centered and points upwards). In (b,c) other glasses are within the bounding box and it is almost impossible to see the objects of interest. The right image highlights that even if the knife is visible, it has a completely different appearance because of reflections along the whole object. Notes to Tables 1 and 2: Enc.-a layer from the encoder, ConvX-a 2D convolution operation, ConcatX-a concatenation layer, UpX-a 2D upscaling layer, DenseX-a custom dense block as described in Table 3, MN X-the used layer of the MobileNet v2 version of Keras. Table 4. Results of the test set segmentation on a pixel level. Best results in bold. Table 5. Results of the test set. The results from Xu et al. and FFB6D are taken from [16]. It still achieves better results than the original training data.
7,670.6
2024-01-01T00:00:00.000
[ "Computer Science", "Engineering" ]
Towards reduction in bias in epidemic curves due to outcome misclassification through Bayesian analysis of time-series of laboratory test results: case study of COVID-19 in Alberta, Canada and Philadelphia, USA Background Despite widespread use, the accuracy of the diagnostic test for SARS-CoV-2 infection is poorly understood. The aim of our work was to better quantify misclassification errors in identification of true cases of COVID-19 and to study the impact of these errors on epidemic curves using publicly available surveillance data from Alberta, Canada and Philadelphia, USA. Methods We examined time-series data of laboratory tests for SARS-CoV-2 viral infection, the causal agent for COVID-19, to try to explore, using a Bayesian approach, the sensitivity and specificity of the diagnostic test. Results Our analysis revealed that the data were compatible with near-perfect specificity, but it was challenging to gain information about sensitivity. We applied these insights to uncertainty/bias analysis of epidemic curves under the assumptions of both improving and degrading sensitivity. If the sensitivity improved from 60 to 95%, the adjusted epidemic curves likely fall within the 95% confidence intervals of the observed counts. However, bias in the shape and peak of the epidemic curves can be pronounced if sensitivity either degrades or remains poor in the 60–70% range. In the extreme scenario, hundreds of undiagnosed cases, even among the tested, are possible, potentially leading to further unchecked contagion should these cases not self-isolate. Conclusion The best way to better understand bias in the epidemic curves of COVID-19 due to errors in testing is to empirically evaluate misclassification of diagnosis in clinical settings and apply this knowledge to adjustment of epidemic curves. Introduction It is well known that outcome misclassification can bias epidemiologic results, yet it is infrequently quantified and adjusted for in results. In the context of infectious disease outbreaks, such as during the COVID-19 pandemic of 2019-20, false positive diagnoses may lead to a waste of limited resources, such as testing kits, hospital beds, and absence of the healthcare workforce. On the other hand, false negative diagnoses contribute to uncontrolled spread of contagion, should these cases not self-isolate. In an ongoing epidemic, where test sensitivity (Sn) and specificity (Sp) of case ascertainment are fixed, prevalence of the outcome (infection) determines whether false positives or negatives dominate. For COVID-19, Goldstein & Burstyn show that suboptimal test Sn despite excellent Sp results in an overestimation of cases in the early stages of an outbreak, and substantial underestimation of cases as prevalence increases to levels seen at the time of writing [1]. However, understanding the true scope of the pandemic depends on precise insights into the accuracy of laboratory tests used for case confirmation. Undiagnosed cases are of particular concern; they arise from untested persons who may or may not have symptoms (under-ascertainment) and from errors in testing among those selected for the test. We focus only on patients misclassified due to errors in tests that were performed as part of applying the World Health Organization's case definition [2]. Presently, the accuracy of testing for SARS-CoV-2 viral infection, the causal agent for COVID-19, is unknown in Canada and the USA, but globally it is reported that Sp exceeds Sn [3][4][5].
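A small worked example (our own illustration, not the authors' code) makes the false-positive/false-negative trade-off explicit: among tested individuals, the expected fraction of positive tests is p*Sn + (1 - p)*(1 - Sp), so with low prevalence the imperfect specificity inflates the count, while with high prevalence the imperfect sensitivity deflates it.

    def apparent_positive_fraction(p, sn, sp):
        # Expected share of positive tests given true prevalence p, sensitivity sn, specificity sp
        return p * sn + (1.0 - p) * (1.0 - sp)

    print(apparent_positive_fraction(0.01, 0.60, 0.99))   # ~0.016: overestimates a true prevalence of 1%
    print(apparent_positive_fraction(0.30, 0.60, 0.99))   # ~0.187: underestimates a true prevalence of 30%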
In a typical scenario, clinical and laboratory validation studies are needed to fully quantify the performance of a diagnostic assay (measured through Sn and Sp). However, during a pandemic, limited resources are likely to be allocated to testing and managing patients, rather than to performing the validation work. After all, imperfect testing can still shed a crude light on the scope of the public health emergency. Indeed, counts of observed positive and negative tests can be informative about Sn and Sp, because certain combinations of these parameters are far more likely to be compatible with the data and with reasonable assertions about true positive tests. In general, more severe cases of disease are expected at the onset of an outbreak (and reflected in tested samples, as the strong clinical suspicion required for testing implies a higher likelihood of having the disease), but the overall prevalence in the population would remain low. Then, as the outbreak progresses with more public awareness and consequently both symptomatic and asymptomatic people being tested, the overall prevalence of disease is expected to rapidly increase while the severity of the disease at a population level is tempered. It is reasonable to expect, as was indeed reported anecdotally early in the COVID-19 outbreak, laboratory tests to be inaccurate, because the virus itself and its unique identifying features exploited in the test are themselves uncertain, and laboratory procedures can contain errors ahead of standardization and regulatory approval. Again, anecdotally, Sn was supposed to be worse than Sp, which is congruent with reports of early diagnostic tests from China [4,5], with both Sn and Sp improving as laboratories around the world rushed to perfect testing [6][7][8] to approach the performance seen in tests for similar viruses [9,10]. Using publicly available time-series data of laboratory testing results for SARS-CoV-2 and our prior knowledge of infectious disease outbreaks, we may be able to gain insights into the true accuracy of the diagnostic assay. Thus, we pursued two specific aims: (a) to develop a Bayesian method to attempt to learn about Sn and Sp of the laboratory tests from publicly available time-series of COVID-19 testing, and (b) to conduct a Monte Carlo (probabilistic) sensitivity analysis of the impact of the plausible extent of this misclassification on bias in epidemic curves. Sharing Data and methods can be accessed at https://github.com/paulgstf/misclass-covid-19-testing; data are also displayed in Appendix A in the Supplemental Material. Data We digitized data released by the Canadian province of Alberta on 3/28/2020 from their "Fig. 6: People tested for COVID-19 in Alberta by day" under the "Laboratory testing" tab [11]. Samples (e.g., nasopharyngeal (NP) swab; bronchial wash) undergo nucleic acid testing (NAT) that uses primers/probes targeting the E (envelope protein) (Corman et al. 2020) and RdRp (RNA-dependent RNA polymerase) (qualitative detection method developed at ProvLab of Alberta) genes of the COVID-19 virus. The data were digitized as shown in Table A1 of Appendix A in the Supplemental Material. The relevant data notes are reproduced in full here: "Data sources: The Provincial Surveillance Information system (PSI) is a laboratory surveillance system which receives positive results for all Notifiable Diseases and diseases under laboratory surveillance from Alberta Precision Labs (APL). The system also receives negative results for a subset of organisms such as COVID-19.
… Disclaimer: The content and format of this report are subject to change. Cases are under investigation and numbers may fluctuate as cases are resolved. Data included in the interactive data application are up-to-date as of midday of the date of posting." Data from the city of Philadelphia were obtained on 03/31/2020 [12]. It was indicated that "test results might take several days to process." Most testing is PCR-based with samples collected from an NP swab, performed at one of three labs (State Public Health, Labcorps, Quest). In addition, some hospitals perform this test using 'in-house' PCR methods. There is a perception (but no empirical data available to us) that Sn is around 0.7, and there are reports of false negatives based on clinical features of patients consistent with COVID-19 disease. Issues arise from problems with specimen collection and timing of the collection, in addition to test performance characteristics. The data were digitized as shown in Table A2 in Appendix A in the Supplemental Material. Bayesian method to infer test sensitivity and specificity A brief description of the modelling strategy follows here, with full details of both the model and its implementation given in Appendix B in the Supplemental Material. Both daily prevalence of infection in the testing pool and daily test sensitivity are modelled as piecewise-linear on a small number of adjacent time intervals (four intervals of equal width, in both examples), with the interval endpoints referred to as "knots" (hence there are five knots, in both examples). The prior distribution for prevalence is constructed by specifying lower and upper bounds for prevalence at each knot, with a uniform distribution in between these bounds. The prior distribution of sensitivity is constructed similarly, but with a modification to encourage more smoothness in the variation over time (see Appendix B in the Supplemental Material for full details). The test specificity is considered constant over time, with a uniform prior distribution between specified lower and upper bounds. With the above specification, a posterior distribution ensues for all the unknown parameters and latent variables given the observed data, i.e., given the daily counts of negative and positive test results. This distribution describes knowledge of prevalence, sensitivity, specificity, and the time-series of the latent Y_t, the number of truly positive among those tested on the t-th day. Thus, we learn the posterior distribution of the Y_t time-series, giving an adjusted series for the number of true positives in the testing pool, along with an indication of uncertainty. As discussed at more length in Appendix B in the Supplemental Material, this model formulation neither rules in, nor rules out, learning about test sensitivity and specificity from the reported data. Particularly in a high-specificity regime, the problem of separating out infection prevalence and test sensitivity is mathematically challenging. The data directly inform only the product of prevalence and sensitivity. Trying to separate the two can be regarded statistically as an "unidentified" problem (while mathematicians might speak of an ill-posed inverse problem, and engineers might refer to a "blind source separation" challenge). However, some circumstances might be more amenable to some degree of separation.
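To make the structure of the observation model concrete, the following is a minimal illustrative sketch, not the authors' implementation (which is in the linked GitHub repository and Appendix B). It uses hypothetical counts, treats prevalence and sensitivity as constant in time rather than piecewise-linear, and approximates the posterior by crude prior sampling with likelihood weights. The prior bounds on Sn and Sp follow the ranges stated in the text; the prevalence bounds are assumed for illustration only.

```python
import numpy as np
from scipy.stats import binom

rng = np.random.default_rng(1)

# Hypothetical daily testing volumes and positive counts (not the Alberta/Philadelphia data)
n_t = np.array([200, 350, 500, 800, 1200])   # tests per day
y_t = np.array([  6,  14,  25,  48,   90])   # observed positives per day

def log_lik(prev, sn, sp):
    """Binomial log-likelihood of the observed positives given prevalence, Sn and Sp.
    P(test positive) = prev*Sn + (1 - prev)*(1 - Sp)."""
    p_pos = prev * sn + (1.0 - prev) * (1.0 - sp)
    return binom.logpmf(y_t, n_t, p_pos).sum()

# Crude prior-sampling (importance-weighting) approximation to the posterior
S = 20_000
prev = rng.uniform(0.01, 0.30, S)   # assumed illustrative prior bounds on prevalence
sn   = rng.uniform(0.60, 0.90, S)   # prior bounds on sensitivity, as in the text
sp   = rng.uniform(0.95, 1.00, S)   # prior bounds on specificity, as in the text

logw = np.array([log_lik(p, a, b) for p, a, b in zip(prev, sn, sp)])
w = np.exp(logw - logw.max())
w /= w.sum()

# The data mainly constrain combinations that give a compatible P(test positive);
# prevalence and Sn individually remain poorly separated (close to their priors).
print("E[prev*Sn | data] =", np.sum(w * prev * sn))
print("E[Sn | data]      =", np.sum(w * sn))
print("E[Sp | data]      =", np.sum(w * sp))
```

Run on such data, the weighted average of prev*Sn is far more sharply determined than Sn alone, which is the identifiability issue described above when Sp is close to 1.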
In particular, with piecewise-linear structure for sensitivity and prevalence, strong quadratic patterns in the observed data, if present, could be particularly helpful in guiding separation. On the other hand, if little or no separation can be achieved, the analysis will naturally revert to a sensitivity analysis, with the a priori uncertainty about test sensitivity and infection prevalence being acknowledged. Some of the more reliable PCR-based assays can achieve near-perfect Sp and Sn of around 0.95 [3][6][7][8]. We expected Sp to be high and selected a time-invariant uniform prior bounded by 0.95 and 1. However, early in the COVID-19 outbreak, problems with the sensitivity of the diagnostic test, owing to specimen collection and reagent preparation, were widely reported but not quantified. Based on these reports, we posited a lower bound on prior Sn of 0.6 and an upper bound of 0.9. We cannot justify a higher lower bound on Sn, since obtaining a sample is challenging and the virus may not be detectable in the cultured area, depending on the timing of infection, despite replicating in other parts of the respiratory tract. Also, known variation in testing strategy over time could drive variation in Sn over time. Consequently, we adopted a flexible data-driven approach by allowing sensitivity to change over time, within the specified range (see Appendix B in the Supplemental Material). The prevalence of the truly infected among those tested likely changed over time as well, for example due to prioritization of testing based on age, occupation, and morbidity [13], but this is difficult to quantify, as it differs from the population prevalence of the infected that would be "seen" by a random sample (governed by known population dynamics models). Thus, our model also allows this prevalence to vary over time across a broad range. Monte Carlo (probabilistic) uncertainty/bias analysis of epidemic curves We next examined how much more we could have learnt from epidemic curves if we knew the sensitivity of laboratory testing. To do so, we applied insights into the plausible extent of sensitivity and specificity to recalculate epidemic curves for COVID-19 in Alberta, Canada. Data on observed counts versus presumed incident dates ("date reported to Alberta Health") were obtained on 3/28/2020 from their "Fig. 3: COVID-19 cases in Alberta by day" under the "Case counts" tab [11]. The counts of cases are shown in Table A1 as C_t* and are matched to dates t (the same as the dates of laboratory tests). We also repeated these calculations with data available for the City of Philadelphia, under the strong assumption that the date of the test is the same as the date of onset, i.e., Y_t* = C_t*. We removed the March 30-31, 2020, counts because of a reported delay of several days in laboratory tests. For each observed count of incident cases C_t*, we estimated true counts C_t = C_t*/f_Sn under the assumption that specificity is indistinguishable from perfect. Here f_Sn is the sensitivity assumed for the purpose of uncertainty analysis, not to be confused with the posterior distribution of Sn derived in the Bayesian modelling. We considered a situation of no time trend, in line with the above findings, as well as sensitivity either improving (realistic best case) or degrading (pessimistic worst case). We simulated various values of f_Sn using Beta distributions with means ranging from 0.60 to 0.95 and a fixed standard deviation of 0.05 (parameters set using https://www.desmos.com/calculator/kx83qio7yl).
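The adjustment step itself is simple enough to sketch. The snippet below is a simplified stand-in for the authors' R code in Appendix C (the counts are hypothetical, and the Beta-parameter conversion is the standard mean/standard-deviation parameterization rather than anything taken from the paper): it draws f_Sn from a Beta distribution with a given mean and a standard deviation of 0.05, then divides the observed incident counts by it, assuming perfect specificity.

```python
import numpy as np

rng = np.random.default_rng(2)

def beta_params(mean, sd):
    """Convert a (mean, sd) pair into Beta(alpha, beta) shape parameters."""
    nu = mean * (1.0 - mean) / sd**2 - 1.0   # pseudo sample size; must be > 0
    return mean * nu, (1.0 - mean) * nu

# Hypothetical observed incident counts C_t* over a short stretch of days
c_obs = np.array([2, 5, 9, 16, 30, 45, 62])

def adjusted_curves(mean_sn, sd_sn=0.05, n_sim=10_000):
    """Simulate adjusted epidemic curves C_t = C_t* / f_Sn, assuming Sp = 1."""
    a, b = beta_params(mean_sn, sd_sn)
    f_sn = rng.beta(a, b, size=(n_sim, 1))   # one sensitivity draw per simulation
    return c_obs / f_sn                       # broadcast over days

for m in (0.60, 0.75, 0.95):
    sims = adjusted_curves(m)
    lo, hi = np.percentile(sims[:, -1], [2.5, 97.5])
    print(f"mean Sn = {m:.2f}: last-day adjusted count ~ {sims[:, -1].mean():.0f} "
          f"(95% interval {lo:.0f}-{hi:.0f}) vs {c_obs[-1]} observed")
```

The lower the assumed mean sensitivity, the larger the gap between observed and adjusted counts, and the gap grows in absolute terms exactly where the observed counts are largest, which is the qualitative behaviour reported for the Alberta and Philadelphia curves below.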
It is apparent that epidemic curves generated in this manner will have higher counts than the observed curves, and our main interest is to illustrate how much the underestimation can bias the depiction. Our uncertainty/bias analysis reflects only systematic errors, for illustrative purposes and under the common assumption (and experience) that they dwarf random errors. Computing code in R (R Foundation for Statistical Computing, Vienna, Austria) for the uncertainty analysis is in Appendix C in the Supplemental Material. Inference about sensitivity and specificity In both jurisdictions, there is evidence of non-linearity in the observed proportion of positive tests (Fig. 1), justifying our flexible approach to variation of sensitivity and prevalence, which can exhibit a quadratic pattern in observed prevalence between knots. The data in both jurisdictions are consistent with the hypothesis that the number of truly infected is being under-estimated, even though observed counts tend to fall within the 95% credible intervals of the posterior distribution of the counts of true positive tests (Fig. 2). The under-diagnosis is more pronounced when there are both more positive cases and a higher prevalence of positive tests, i.e., in Philadelphia relative to Alberta. In Philadelphia, the posterior of prevalence was between 5 and 24% (hundreds of positive tests a day in late March), but in Alberta, the median of the posterior of prevalence was under 3% (30 to 50 positive tests a day in late March). This is not surprising because the number of false negatives is proportional to observed cases for the same sensitivity. The specificity appears to be high enough for the observed prevalence to produce negligible numbers of false positives, with false negatives dominating. There was clear evidence of a shift in the posterior distribution of specificity from uniform to favouring values > 0.98 (Fig. 3). In Alberta, the posterior distribution of Sp was centered on 0.997 (95% credible interval (CrI): 0.993, 0.99995), and in Philadelphia it had a posterior median of 0.984 (95% CrI: 0.954, 0.999). Our analysis indicates that under our models there is little evidence in the time-series of laboratory tests about either the time trend or the magnitude of sensitivity of laboratory tests in either jurisdiction (Fig. 4). Posterior distributions are indistinguishable from the priors, such that we are still left with an impression that the sensitivity of COVID-19 tests can be anywhere between 0.6 and 0.9, centered around 0.75. One can speculate on the departure of the posterior distribution from the uniform prior, given that the posterior appears somewhat concentrated around the prior mean of 0.75 (more lines in Fig. 4 near the mean than near the dotted edges that bound the prior). However, any such signal is weak and there is no evidence of a time trend favoured by the model. (Fig. 1: Proportion of observed positive tests in time with 95% confidence intervals; knots between which sensitivity and true prevalence were presumed to follow linear trends are indicated by red triangles.) Uncertainty in epidemic curves due to imperfect testing: Alberta, Canada Figure 5 presents the impact on the epidemic curve of degrading sensitivity that is constant in time. As expected, when misclassification errors increase, uncertainty about epidemic curves also increases. There is an under-estimation of incident cases that is more apparent later in the epidemic when the numbers rise.
Figure 6 indicates how, as expected, if sensitivity improves over time (green lines), then the true epidemic curve is expected to be flatter than the observed one. It also appears that observed and true curves may well fall within the range of 95% confidence intervals around the observed counts (blue lines). If sensitivity decreases over time (brown lines), then the true epidemic curve is expected to be steeper than the observed one. In either scenario, there can be an under-counting of cases by nearly a factor of two, most apparent as the incidence grows, such that on March 24, 2020 (t = 19), there may have been almost 120 cases vs. 62 observed. This is alarming, because misdiagnosed patients can spread infection if they have not self-isolated (perhaps a negative test result provided a false sense of security) and it is impossible to know who they are among the thousands of symptomatic persons tested per day around that time (Table A1). (Fig. 6: Uncertainty in the epidemic curve of COVID-19 on March 28, 2020 in Alberta, Canada, due to imperfect sensitivity (Sn) with standard deviation 5%; assumes specificity 100%; increasing or decreasing sensitivity in time.) Uncertainty in epidemic curves due to imperfect testing: Philadelphia, USA Figure 7 presents the impact on the epidemic curve of degrading Sn that is constant in time. As in Alberta, when misclassification errors increase, uncertainty about epidemic curves also increases. It is also apparent that the shape of the epidemic curve, especially when counts are high, can be far steeper than that inferred assuming perfect testing. (Fig. 7: Uncertainty in the epidemic curve of COVID-19 on March 31, 2020 in Philadelphia, USA, due to imperfect sensitivity (Sn) with standard deviation 5%; assumes specificity 100%; time-invariant sensitivity.) Figure 8 indicates that if sensitivity improves over time (green lines), then the true epidemic curve is expected to be practically indistinguishable from the observed one in Philadelphia: e.g., it is within random variation of observed counts represented by 95% confidence intervals (blue lines). This is comforting, because this seems to be the most plausible scenario of improvement over time in the quality of testing (identification of the truly infected). However, if sensitivity decreases over time (brown lines), then under-counting of cases by the hundreds in late March 2020 cannot be ruled out. We again have the same concern as for Alberta: misdiagnosed patients can spread infection unimpeded and it is impossible to know who they are among the hundreds of symptomatic persons tested in late March 2020 (Table A2). (Fig. 8: Uncertainty in the epidemic curve of COVID-19 on March 31, 2020 in Philadelphia, USA, due to imperfect sensitivity (Sn) with standard deviation 5%; assumes specificity 100%; increasing or decreasing sensitivity in time.) In all examined scenarios, in both Alberta and Philadelphia, the lack of sensitivity in testing seems to matter far less when the observed counts are low early in the epidemic. The gap between observed and adjusted counts grows as the number of observed cases increases. This reinforces the importance of early testing, at least with respect to describing the time-course of the epidemic. Discussion Given the current uncertainty in the accuracy of the SARS-CoV-2 diagnostic assays, we tried to learn about sensitivity and specificity using the time-series of laboratory tests and time trends in test results. Although we are confident that typical specificity exceeds 0.98, there is very little learning about sensitivity from prior to posterior. However, it is important not to generalize this lack of learning about sensitivity, because it can occur when stronger priors on prevalence are justified and/or when there are more pronounced trends in the prevalence of positive tests. We therefore encourage every jurisdiction with suitable data to attempt to gain insights into the accuracy of tests using our method: now that the method to do so exists, it is simpler and cheaper than laboratory and clinical validation studies. However, validation studies, with approaches like the one illustrated in Burstyn et al., are still the most reliable means of determining the accuracy of a diagnostic test [14]. Knowing sensitivity and specificity is important, as demonstrated in the uncertainty/bias analysis of the impact on epidemic curves under some optimistic assumptions of near-perfect specificity and a reasonable range of sensitivity. The observed epidemic curves may bias estimates of the effective reproduction number (R_e) and the magnitude of the epidemic (peak) in unpredictable directions. This may also have implications for understanding the proportion of the population non-susceptible to COVID-19. As researchers attempt to develop pharmaceutical prophylaxis (i.e., a vaccine), combined with a greater number of people recovering from SARS-CoV-2 infection, having insight into the herd threshold will be important for resolving current and future outbreaks. Calculations such as the basic and effective reproductive numbers, and the herd threshold, depend upon the accuracy of the surveillance data described in the epidemic curves. As the title suggests, we view the lab time-series and the epidemic curve as two distinct entities: Figs. 1 through 4 are based on the former. This distinction is important to stress, because a lot of the public-facing dashboards etc. are plotting new cases by report date instead of, or in addition to, by symptom onset date; both are commonly labelled as "epidemic curves" while strictly only the latter should be referred to as such. We emphasise this distinction by adopting different notation for counts of positive test results on the t-th day, Y_t, vs. incident cases on the t-th day, C_t. Future work is envisioned which links Y_t and C_t, so that joint inference could be undertaken for data from jurisdictions which report both series. In situations where only the lab-testing series is available, external prior knowledge could be used to describe implications for the epidemic curve. As an example, while the modelling in Dehning et al. [15] is in a very different direction, one component of their model is an informative prior distribution on the reporting delay between infection date and lab report date. Limitations of our approach include the dynamic nature of data that changes daily and may not be perfectly aligned in time due to batch testing. There are some discrepancies in the data that should be resolved in time, such as fewer cases testing positive than appear in the epidemic curve in Alberta, but the urgency of the current situation justifies doing our best with what we have now. We also make some strong ad hoc assumptions about breakpoints in the segmented regression of time-trends in sensitivity and prevalence, further assuming that the same breakpoints are suitable for trends in both parameters. Although not as much of an issue based on our analysis, we do need to consider imperfect specificity, which creates false positives, albeit nowhere near the magnitude of false negatives in the middle of an outbreak. This results in wasted resources. In ideal circumstances we would employ a two-stage test: a highly sensitive serological assay that, if positive, triggers a PCR-based assay. Two-stage tests would resolve a lot of the uncertainty and speculation over a single PCR test combined with signs and symptoms. Indeed, this is the model used for diagnosis of other infectious diseases, such as HIV and Hepatitis C. Our work also only focuses on the validity of laboratory tests, not the sensitivity and specificity of the entire process of identification of cases, which involves selection for testing via a procedure that is designed to induce systemic biases relative to the population. Conclusions We conclude that it is of paramount importance to validate laboratory tests and to share this knowledge, especially as the epidemic matures into its full force. Insights into the ascertainment bias by which people are selected for tests, whose results are then used to estimate epidemic curves, are likewise important to obtain and quantify. Quantification of these sources of misclassification and bias can lead to adjusted analyses of epidemic curves that can help make more appropriate public health policies. Abbreviations Y_t*: Count of persons who tested positive at time t; n_t: Count of persons tested at time t (observed from surveillance data); Y_t: True count of persons who tested positive at time t (latent); C_t*: Count of persons having onset of symptoms at time t and who have tested positive; C_t: True count of persons having onset of symptoms at time t and who have tested positive (latent); Sn_t: Sensitivity of test, P(Y_t* = 1 | Y_t = 1), at time t (subscript t is suppressed for simplicity in text); Sp_t: Specificity of test, P(Y_t* = 0 | Y_t = 0), at time t (subscript t is suppressed for simplicity in text); eSn: Sensitivity of ascertainment of an incident case, P(C_t* = 1 | C_t = 1); eSp: Specificity of ascertainment of an incident case, P(C_t* = 0 | C_t = 0)
5,835.2
2020-06-06T00:00:00.000
[ "Medicine", "Mathematics" ]
Polarization-resolved characterization of plasmon waves supported by an anisotropic metasurface Optical metasurfaces have great potential to form a platform for manipulation of surface waves. A plethora of advanced surface-wave phenomena such as negative refraction, self-collimation and channeling of 2D waves can be realized through on-demand engineering of the dispersion properties of a periodic metasurface. In this letter, we report on polarization-resolved measurement of the dispersion of plasmon waves supported by an anisotropic metasurface. We demonstrate that a subdiffractive array of strongly coupled resonant plasmonic nanoparticles supports both TE and TM plasmon modes at optical frequencies. With the assistance of numerical simulations we identify dipole and quadrupole dispersion bands. The shape of the isofrequency contours of the modes changes drastically with frequency, exhibiting nontrivial transformations of their curvature and topology, which is confirmed by the experimental data. By revealing the polarization degree of freedom of surface waves, our results open new routes for designing planar on-chip devices for surface photonics. Introduction Metasurfaces are a two-dimensional analogue of bulk metamaterials. They represent a dense array (usually periodic) of subwavelength scatterers [1], which are often called meta-atoms. The term metasurface was introduced recently [2], but such objects are fairly well known in electromagnetism as impedance or frequency-selective surfaces. They have been actively studied for more than 100 years [3], targeting radio frequencies and microwaves. The majority of the results reported so far are related to free-space optics, with metasurfaces functioning in the transverse configuration. In this case the leaky and quasi-guided resonances play a major role [44,45]. However, for on-chip photonic applications, metasurfaces are expected to operate in the in-plane mode, where the surface modes move to the forefront. For routing of optical signals and all-optical networking, precise control over the directivity of surface waves is needed. One way is to involve Dyakonov surface waves, since high directivity is their intrinsic feature [46][47][48][49][50]. However, their weak localization and very specific existence conditions impede large-scale implementation of Dyakonov waves in photonic circuits. Alternatively, it is possible to exploit conventional surface plasmon polaritons (SPPs), whose directivity and wave fronts can be engineered via nanostructuring of plasmonic interfaces. In [51] Liu and Zhang showed that isofrequency contours of SPPs propagating along a metallic grating can have elliptic, flat, or hyperbolic shape depending on the geometry of the grating. This results in broadband negative refraction and non-divergent propagation of SPPs. A visible-frequency non-resonant hyperbolic metasurface based on a nanometer-scale silver/air grating was implemented in [52]. Recently, it has been predicted that the spectrum of a resonant metasurface described in terms of a local anisotropic surface conductivity tensor consists of two hybrid TE-TM polarized modes that can be classified as TE-like and TM-like plasmons [53,54]. Their isofrequency contours are of elliptic, hyperbolic, 8-shaped, rhombic, or arc form depending on the frequency. Such a variety of shapes can support diverse phenomena, e.g. negative refraction, self-collimation, channeling of surface waves, and a giant enhancement of the spontaneous emission of quantum emitters due to the large density of optical states.
Similar phenomena can be observed for metasurfaces implemented using nanostructured graphene monolayers and naturally anisotropic ultrathin black phosphorus films [14,54,55]. In this work, we report the characterization of a resonant anisotropic plasmonic metasurface consisting of a dense array of thin gold elliptic nanodisks supporting the propagation of plasmon modes in the optical range. We characterize the dispersion of both TE- and TM-like plasmons with attenuated total internal reflection spectroscopy and reveal a topological transition of their isofrequency contours. Fabrication and characterization The anisotropic metasurface was fabricated on a fused silica substrate using electron beam lithography followed by thermal evaporation of a 20 nm thick layer of gold and a lift-off process. The fabricated structure is a 200×200 µm² array of cylindrical gold nanodisks with an elliptical base. The period of the array is 200 nm in both directions, while the long and short axes of the nanodisks are 175 and 140 nm, respectively [see Fig. 1(a)]. To facilitate surface-wave propagation in a symmetric environment, the sample was subsequently covered by a 200 nm layer of transparent resist (ZEP 520A) with a refractive index closely matching that of silicon oxide in the visible and near-IR spectral regions [56]. To characterize the dispersion of the surface waves we resorted to attenuated total internal reflection spectroscopy in the Otto geometry [see Fig. 1(b)]. To excite the surface waves, one needs to provide wavevectors of the exciting wave residing under the light line of the dielectric environment of the metasurface (i.e., the silicon oxide substrate and the resist superlayer). For this purpose, we used a zinc selenide (ZnSe) hemicylindrical prism with a refractive index of around 2.48 in the near-IR range [57]. The sample was attached to the prism with a polymer screw to minimize the weakly controllable air gap between the sample and ZnSe interfaces [see the inset in Fig. 1(b)]. In this configuration (known as the Otto geometry), surface waves can be excited via evanescent coupling of light incident at the ZnSe-sample interface at angles greater than the critical angle, which is about 36° in the spectral range of interest. By measuring reflectance spectra at different angles of incidence θ, it is possible to reconstruct the dispersion of the surface waves excited at the metasurface. In the experiment, the sample was illuminated by a supercontinuum laser source (NKT Photonics SuperK Extreme), polarized with a Glan-Taylor prism and focused by a series of parabolic mirrors onto the sample surface through the ZnSe prism to a spot of approximately 150 µm in size. The reflected light was collected with another parabolic mirror and then sent to a spectrometer (Ando AQ-6315E) through an optical fiber. The sample and the collection optics were mounted on separate rotation stages, which allowed for reflectance spectra measurements over a broad range of incident angles (from 10° up to 60°). Numerical simulations mimicking the experiment were carried out using the frequency-domain solver of the COMSOL Multiphysics package. The simulation cell with periodic boundary conditions in both directions is shown schematically in Fig. 1(b). The exact dimensions of the structure were verified by means of scanning electron microscopy [Fig. 1(a)]. The refractive indices of the materials were obtained from the literature (ZEP 520A [56], zinc selenide [57], gold [58], and fused silica [59]).
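To illustrate how a reflectance dip at a given wavelength and angle of incidence maps onto a point of the surface-wave dispersion in this Otto-geometry experiment, the sketch below computes the critical angle of the prism-environment interface and the probed in-plane wavevector k_par = (2π/λ)·n_ZnSe·sin θ. The prism index follows the value quoted in the text; the environment index (fused silica / index-matched resist) is an assumed illustrative value, and the example wavelength and angle are not measured data.

```python
import numpy as np

n_prism = 2.48   # ZnSe refractive index in the near-IR, as quoted in the text
n_env   = 1.46   # fused silica / index-matched resist (assumed value for illustration)

# Critical angle above which evanescent coupling to surface waves becomes possible
theta_c = np.degrees(np.arcsin(n_env / n_prism))
print(f"critical angle ~ {theta_c:.1f} deg")     # ~36 deg, consistent with the text

def k_parallel(wavelength_nm, theta_deg):
    """In-plane wavevector (rad/um) probed at a given wavelength and angle of incidence."""
    k0 = 2.0 * np.pi / (wavelength_nm * 1e-3)    # free-space wavevector in rad/um
    return k0 * n_prism * np.sin(np.radians(theta_deg))

# Example: a reflectance dip at 950 nm and 50 deg incidence corresponds to this k_par;
# collecting many (wavelength, angle, orientation) dips reconstructs the dispersion
# and the isofrequency contours of the TE- and TM-like plasmons.
print(f"k_par(950 nm, 50 deg) = {k_parallel(950, 50):.2f} rad/um")
print(f"light line of the environment at 950 nm: {2*np.pi/0.95*n_env:.2f} rad/um")
```

Dips appearing at k_par beyond the light line of the glass-resist environment are the bound surface modes discussed in the following paragraphs.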
The size of the air gap in the simulations was chosen to be 25 nm, according to the best match of the simulated spectra with the experimental ones. Figure 2 shows the experimental and simulated reflectance maps plotted in "wavelength - angle of incidence" axes for both TE- and TM-polarized excitation and three azimuthal angles ϕ = 0°, 45°, 90° describing the orientation of the plane of incidence with respect to the long axis of the nanodisks. The measured reflectance maps demonstrate good correspondence with the simulated ones for all considered cases. Some discrepancy can be attributed to inhomogeneities of the sample and deviations of the air gap size in the experiment. Discussion The reflectance maps demonstrate a rich variety of near- and far-field features corresponding to the regions above and below the critical angle, respectively. The pronounced reflectance dips above the critical angle correspond to the surface modes supported by the metasurface. In the near-IR range (λ > 750 nm), two types of surface waves are excited: a short-wavelength TM-like plasmon and a long-wavelength TE-like plasmon [60,61]. Observation of these two types of plasmons agrees with the predictions of the local analytical model for a resonant anisotropic metasurface [53]. The spectral position of the reflectance dips corresponding to these modes strongly depends on the orientation of the sample, clearly demonstrating the anisotropy of their dispersion. The TE-like mode has no frequency cut-off and can propagate at arbitrarily low frequencies, where its dispersion curve asymptotically tends to the light line. This means that the mode becomes weakly confined and leaks into the ZnSe prism, which results in broadening of the resonance and a decrease of its intensity. The TM-like mode, residing in the shorter-wavelength region, has a cutoff frequency that depends on the propagation direction. Importantly, due to their controllable dispersion, both TE- and TM-like plasmons can exhibit wavevectors and a density of optical states larger than those of plasmons at a gold-air interface. These properties are essential for high-resolution imaging and sensing. The origin of the two-dimensional TE- and TM-like plasmons is as follows. The TE-like mode is formed due to the coupling of electric dipoles induced in the plasmonic nanodisks in the direction perpendicular to the wavevector of the surface wave. Existence of such a mode is possible only for negative polarizability of the plasmonic particles [62], i.e., at frequencies lower than their plasmon resonance [53]. The TM-like mode is formed due to the coupling of the dipoles oriented along the propagation direction. Existence of such a mode is possible only for positive polarizability of the plasmonic particles, i.e., at frequencies higher than the plasmon resonance [62]. Due to the elliptic shape of the nanodisks and their interaction with neighbors, the degeneracy of the localized plasmon resonance is lifted. This results in pronounced anisotropy of the dispersion of the surface modes and in hybridization of their polarization. The latter is seen from the numerical results presented in Fig. 2 (middle row, ϕ = 45°). In this direction of low crystallographic symmetry, both TE and TM modes are excited by incident light of either TE or TM polarization. This minute effect is not resolved in the measured reflectance maps, which can be attributed to the non-optimal prism-to-sample distance realized in the experiment.
The dipole plasmon resonances are clearly manifested at small angles of incidence (below the critical angle) in both polarizations as broad peaks. The strong dependence of their spectral position on the orientation of the metasurface and the polarization of the excitation wave confirms the non-degeneracy of the dipole plasmon resonances of the nanodisks. The surface modes of plasmonic metasurfaces can also be formed due to the coupling of higher multipole plasmon resonances, e.g., quadrupolar, octupolar, etc. [1]. The reflectance maps shown in Fig. 2 contain additional quadrupole branches near 700 nm, observed under both TM- and TE-polarized excitation. These modes are characterized by barely visible dispersion (low group velocity) and a narrow spectral half-width both above and below the critical angle. The in-plane quadrupole modes in a thin nanodisk are dark modes under normal excitation. The calculated surface charge densities for TE- and TM-like plasmons plotted in Figs. 3(a), 3(b) verify that the modes are formed due to coupling of the dipole plasmon resonances of the nanodisks, which is also confirmed by the multipole decomposition (Table 1). The quadrupole nature of the modes near 700 nm becomes apparent from both the surface charge density profiles [Figs. 3(c), 3(d)] and the multipole decomposition (Table 1). The almost dispersionless behavior of the quadrupole modes is due to stronger field localization and weaker interaction between neighboring particles in comparison with the dipole modes. The anisotropic properties of the plasmon modes are manifested most clearly in the isofrequency contours plotted within the first Brillouin zone. Figure 4 demonstrates the spectral evolution of the isofrequency contours calculated numerically for both TM- and TE-like surface waves. These contours are shown as blue curves lying between the light circles of the glass-resist environment and ZnSe. To compare the calculated isofrequency contours with the experimental data, we extract the positions (ω, k) of the reflectance minima corresponding to the surface waves from the measured reflectance maps for three orientations of the sample. The extracted points are shown with crosses imposed on the isofrequency contours in Fig. 4 and demonstrate decent agreement with the numerically calculated reflectance minima. The discrepancies observed in the short-wavelength region are most likely related to the inaccuracy of the model of gold permittivity dispersion. At low frequencies, far from the plasmon resonance, the anisotropy is vanishingly small, and the isofrequency contour of the TE-like plasmon is very close to a circle. With increasing frequency the circle clearly transforms into an ellipse (λ = 1450 nm). Then the contour ruptures, giving rise to a forbidden range of propagation directions along the x-axis (λ = 1280 nm). At higher frequencies (λ = 950 nm) the TE-like plasmon can propagate only in narrow angular bands in the vicinity of the diagonals of the Brillouin zone. The directivity of the TM-like plasmon (allowed angular band) demonstrates an even more dramatic evolution with the change of wavelength. Near the frequency cutoff (λ ≈ 900 nm) the surface wave propagates nearly along the long axis of the nanodisks, completely vanishing in the other directions. At higher frequencies (λ ≈ 750 nm) it propagates only along the short axis of the nanodisks. Such tunability can be exploited for on-chip routing of optical signals.
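The link between the shape of an isofrequency contour and the direction of energy flow, which underlies the directivity discussion above and the propagation regimes described next, can be made concrete with a toy model. The sketch below uses a generic elliptic dispersion (not the metasurface's actual dispersion) and exploits the fact that the group velocity ∇_k ω is normal to the isofrequency contour, so phase and group directions coincide only along the symmetry axes of an anisotropic contour.

```python
import numpy as np

# Toy anisotropic (elliptic) dispersion: omega(kx, ky) = c * sqrt((kx/a)^2 + (ky/b)^2)
a, b, c = 1.3, 0.8, 1.0   # arbitrary anisotropy parameters, for illustration only

def omega(kx, ky):
    return c * np.sqrt((kx / a) ** 2 + (ky / b) ** 2)

def group_velocity(kx, ky):
    """Analytic gradient of omega with respect to k for the toy dispersion."""
    w = omega(kx, ky)
    return np.array([c**2 * kx / (a**2 * w), c**2 * ky / (b**2 * w)])

# Walk along the isofrequency contour omega = w0 and compare phase and group directions
w0 = 1.0
for phi in np.linspace(0, np.pi / 2, 4):
    r = w0 / (c * np.sqrt((np.cos(phi) / a) ** 2 + (np.sin(phi) / b) ** 2))
    kx, ky = r * np.cos(phi), r * np.sin(phi)   # point on the contour along direction phi
    vg = group_velocity(kx, ky)
    print(f"phase direction {np.degrees(phi):5.1f} deg -> "
          f"group (energy) direction {np.degrees(np.arctan2(vg[1], vg[0])):5.1f} deg")
```

For more complex contours of the kind reported here (ruptured, arc-shaped, or hyperbolic), the same normal-to-contour rule concentrates the allowed group-velocity directions into narrow angular bands, which is what produces the highly directional propagation described below.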
The form of the isofrequency contours predicts the relationship between the group and phase velocities and defines the shape of the wavefront and the character of propagation [63,64]. For example, a positive curvature corresponds to divergent propagation, a negative curvature corresponds to self-collimated propagation, and a flat contour corresponds to the non-diffractive regime. Thus, the TM-like plasmon demonstrates self-focusing along the long axis of the nanodisks at λ = 830 nm and along the short axis at λ = 770 nm, when the hyperbolic regime takes place. Therefore, the considered metasurface supports elliptic, hyperbolic, and more complex dispersion regimes, which can be fine-tailored at the fabrication stage by deliberate shaping of the particles. Summary To conclude, we have studied the dispersion and polarization properties of the surface waves supported by a thin metasurface composed of resonant plasmonic nanoparticles. The particles are arranged in a dense square array with a subwavelength period. They have the shape of elliptic cylinders, and due to this shape the metasurface exhibits strong anisotropic properties. The metasurface has been characterized by means of polarization-resolved attenuated total internal reflection spectroscopy in a broad range of wavelengths between 600 nm and 1600 nm under different angles of incidence and orientations of the structure. The existence of resonant surface waves with anisotropic dispersion bands has been confirmed. The striking difference with the case of metal-dielectric interfaces is that, along with the expected TM-like plasmons, the structure supports TE-like surface states, both maintaining a highly directional propagation regime. Full-vectorial simulations consistently support our findings. For both types of waves we have analyzed the spectral evolution of the isofrequency contours. A highly nontrivial and polarization-dependent transformation of the contours has been observed. Numerical analysis helped us to classify the principal bands as tightly bound electric dipole resonances. We have also shown that such a metasurface supports quadrupole modes with extremely weak dispersion due to weak interparticle coupling. Our findings on a resonant anisotropic metasurface supporting directional and polarization-dependent surface waves provide a flexible platform for on-chip surface photonics for various applications, such as processing and routing of optical signals in quantum communication systems, high-resolution sensing and enhancement of non-linear processes. Table 1 demonstrates the contributions of the dipole and quadrupole moments obtained from the multipole decomposition of the polarization within a single gold nanoparticle of the metasurface. The plane of incidence is parallel to the long axis of the elliptical particles. The angle of incidence is θ = 50° (see Fig. 2). The components of the electric dipole and quadrupole moments of the particle are obtained using the following relations [65]:
3,529.4
2017-12-25T00:00:00.000
[ "Physics", "Materials Science" ]
Smooth and Resilient Human–Machine Teamwork as an Industry 5.0 Design Challenge Smart machine companions such as artificial intelligence (AI) assistants and collaborative robots are rapidly populating the factory floor. Future factory floor workers will work in teams that include both human co-workers and smart machine actors. The visions of Industry 5.0 describe sustainable, resilient, and human-centered future factories that will require smart and resilient capabilities both from next-generation manufacturing systems and human operators. What kinds of approaches can help design these kinds of resilient human–machine teams and collaborations within them? In this paper, we analyze this design challenge, and we propose basing the design on the joint cognitive systems approach. The established joint cognitive systems approach can be complemented with approaches that support human centricity in the early phases of design, as well as in the development of continuously co-evolving human–machine teams. We propose approaches to observing and analyzing the collaboration in human–machine teams, developing the concept of operations with relevant stakeholders, and including ethical aspects in the design and development. We base our work on the joint cognitive systems approach and propose complementary approaches and methods, namely: actor–network theory, the concept of operations and ethically aware design. We identify their possibilities and challenges in designing and developing smooth human–machine teams for Industry 5.0 manufacturing systems. Introduction The European Commission has presented a vision for the future of European industry, coined 'Industry 5.0', in their policy brief [1]. Industry 5.0 complements the techno-economic vision of the Industry 4.0 paradigm [2] by emphasizing the societal role of industry. The policy brief calls for a transition in European industry to become sustainable, human-centric, and resilient. Industry 5.0 recognizes the power of industry to become a resilient provider of prosperity, by having a high degree of robustness, focusing on sustainable production, and placing the wellbeing of industry workers at the center of the production process. Industry 4.0 is equipping the factory floor with cyber-physical systems that involve different human and smart-machine actors. These systems have much potential to improve the production process and to provide more versatile and interesting jobs for employees, as called for in the Industry 5.0 concept [1]. Reaping the full benefits of this potential requires that the new solutions are designed as sociotechnical systems, as suggested by Cagliano et al. [3], Neumann et al. [4] and Stern and Becker [5]. As illustrated in Figure 1, human-machine teams can include human actors with different skills, collaborative robots, and artificial intelligence (AI)-based systems for assistance or supervision. The design should cover the technical solutions and work organization in human-machine teams in parallel. Task allocation in the human-machine teams should be designed to utilize the best capabilities of all actors, to ensure fair task allocation and to keep human work meaningful and manageable [6]. Smart solutions based on collaborative robots and AI have much potential in easing physically and mentally demanding tasks.
However, nowadays, these kinds of solutions tend to provide a two-fold experience to the workers: on the one hand, workers may feel that the work has become a passive and boring monitoring job, while on the other hand, they describe overwhelming challenges when they need to solve smart machine malfunctions [6]. There is clearly a need to develop work allocation and teamwork in human-machine teams so that human workers feel they are in the loop and that human jobs remain meaningful and manageable. There are still many tasks where humans are superior to machines. Some manual tasks are far too challenging to automate, and human practical knowledge and intuition cannot be replaced with the analytical capabilities of AI. AI is good at learning from past data, while humans can think 'out of the box' and provide creative solutions. Integrating these two complementary viewpoints can bring about an ensemble that is greater than the sum of both human and AI capabilities [7]. Introducing smart solutions provides an opportunity to consider the roles of all actors, raising questions such as: What are the new tasks that each team member is responsible for? How should the team members collaborate and communicate? How can the competence development plans of each team member be supported? Worker roles and related skill demands should be considered in the design of human-machine systems. This is in line with the initial value system in sociotechnical design [8]: while technology and organizational structures may change in the industry, the rights and needs of all employees must always be given a high priority. These rights and needs include varied and challenging work, good working conditions, learning opportunities, scope for making decisions, good training and supervision, and the potential for making progress. Moreover, in the dynamic Industry 5.0 environment, workers should be able to change their roles flexibly on the fly, based on learning new skills or simply for change [6]. Industry 5.0 includes three core elements: human-centricity, sustainability and resilience [1]. Resilience refers to the need to develop a higher degree of robustness in industrial production, better preparing it for disruptions. A central part of resilience is adaptability in the teams of humans and smart machines, where responsibilities and work division must be smoothly altered to react to changing situations. In this paper, we investigate the challenge of smooth and resilient human-machine teamwork from a human-centric design point of view. The long research tradition in joint cognitive systems [9,10] provides a good basis for understanding the focus of the design: the overall system with common goals and various human and machine actors. Cognitive systems engineering is an established practice for designing complex human-machine systems [11,12]. However, complementary approaches and methods are needed to understand and develop the dynamic nature of the overall system, where the capabilities and skills of all the actors change over time, and where the task allocation must be revised resiliently to adapt to changes in the environment. In this paper our aim is to address the following research question: What kinds of approaches and methods could support human-centric, early-phase design and the continuous development of human-machine teams on the factory floor as smoothly working, resilient and continuously evolving joint cognitive systems? In Section 2, we give an overview of related research regarding industrial human-machine systems and their design. In Section 3, we provide an overview of the research on joint cognitive systems and cognitive systems engineering, and we assess how these approaches could be utilized in designing human-machine teams on the factory floor. In this section we also identify the need for complementary approaches and methods. In Section 4, we propose three complementary approaches and methods to support human-centric early-phase design and the continuous development of human-machine teams on the factory floor. The proposed complementary approaches and methods include: actor-network theory to observe dynamic human-machine teams and their behavior; the concept of operations as a flexible co-design and development tool; and ethically aware design to foresee and solve ethical concerns. Finally, in Section 5, we discuss the proposed approach and methods, comparing them to other researchers' views. Related Research Several researchers have addressed the need to extend the focus of the design of industrial systems to the whole sociotechnical system (e.g., [3][4][5]). Cagliano et al.
[3] suggest that technology and work organization should be studied both at the micro level (work design) and the macro level (organizational structure) to implement successful smart manufacturing systems. Neumann et al. [4] propose a framework to systematically consider human factors in Industry 4.0 designs and implementations, integrating technical and social foci in the multidisciplinary design process. Their five-step framework includes (1) defining the technology to be introduced, (2) identifying the humans in the system, (3) identifying, for each human in the system, the new and removed tasks, (4) assessing the human impact of the task changes, and (5) an outcome analysis to consider the possible implications for training needs, as well as the probability of errors, quality, and work well-being. The micro-level design of the sociotechnical system should focus on the sensory, cognitive and physical demands on the workers [3,4], human needs and abilities [5], change of work [3][4][5], job autonomy [3], supervision [4], support for co-workers [4] and interacting with semi-automated systems [5]. The macro-level design of the sociotechnical system should focus on organizing the work [5], decision making [5], the social environment at the workplace [4] and the division of labor [5]. Stern and Becker [5] point out that many future industrial workplaces present themselves as human-machine systems, where tasks are carried out via interactions with several workers, production machines and cyber-physical assistance systems. An important design decision in these kinds of systems is the distribution of tasks between the various human and machine actors. Stern and Becker [5] propose design principles for these kinds of human-machine systems focusing on usability, feedback, and assistance systems. Grässler et al. [13] discuss integrating human factors in the model-based development of cyber-physical production systems. They claim that human actors are often greatly simplified in model-based design, thus disregarding individual personality and skill profiles. They propose a human systems integration (HSI) discipline as a promising approach that aims to design the interaction between humans and technology in such a way that the needs and abilities of humans are appropriately implemented in the system design. In their approach, HSI concepts allow for the integration of the individual capabilities of the workers as a fundamental part of a cyber-physical production system to enable the successful development of production systems [13]. Pacaux-Lemoine et al. [14] claim that the techno-centered design of intelligent manufacturing systems tends to demand extreme skills from the human operators, as they are expected to handle any unexpected situations efficiently. Sgarbossa et al. [15] describe how the perceptual, cognitive, emotional and motoric demands on the user are determined in the design. If these demands are higher than the capacity of an individual, most probably there will be negative impacts both on system performance and worker well-being. Designers should understand employee diversity and the human factors related to new technologies [15]. Pacaux-Lemoine et al. [14] propose that human-machine collaboration should be based on a shared situational view at the plan level, at the plan application level (triggering tasks) and at the level of directly controlling the process.
Kadir and Broberg [16] recommend following a systematic approach for (re)designing work systems with new digital solutions, as well as involving all affected workers early in the design of new solutions. They also emphasize follow-up and standardizing the new ways of working after the implementation of new digital solutions. In current research, human teamwork with collaborative robots and with autonomous agents has been studied separately. O'Neill et al. [17] argue that collaboration with agents and with robots is fundamentally different due to the embodiment of the robots. For example, the physical presence and design of a robot can impact the interactions with humans, the extent to which the robot engages and interests the human, and trust development. Most human-robot interaction research focuses on a single human interacting with a single robot [18]. As robotics technology has advanced, robots have increasingly become capable of working together with humans to accomplish joint work, thus raising design challenges related to interdependencies and teamwork [19]. The influence of robots on work in teams has been studied, focusing on task-specific aspects such as situational awareness, common ground, and task coordination [20]. Robots also affect the group's social functioning: robots evoke emotions, can redirect attention, shift communication patterns and can require changes to organizational routines [20]. Communication, coordination, and collaboration can be seen as cornerstones of human-robot teamwork [19]. Humans may face difficulty in creating mental models of robots and managing their expectations of the behavior and performance of robots, while robots struggle to recognize human intent [19]. Van Diggelen et al. [21] suggest design patterns to support the development of effective and resilient human-machine teamwork. They claim that traditional design methodologies do not sufficiently address the autonomous capabilities of agents, and thus often result in applications where the human becomes a supervisor rather than a teammate. The four types of design patterns suggested by van Diggelen et al. [21] support the description of dynamic team behavior in a team where human and machine actors each have a particular role, and the team together has a goal, which may change over time. O'Neill et al. [17] present a literature review of empirical research related to human-autonomy teams (HAT), i.e., teams of autonomous agents and humans, excluding human-robot teams. They state that the 40-year-long research tradition of human-automation interaction provides a vital foundation of knowledge for HAT research. What is different is that HAT involves autonomous agents, i.e., computer-based entities that are recognized to occupy a distinct role in the team and that have at least a partial level of autonomy [17]. Empirical HAT research is still in its infancy. Of the 76 empirical papers that O'Neill et al. [17] found, 45 dealt with a single human working with a single autonomous agent, and only one paper studied a team with multiple humans and multiple agents. The impacts of Industry 4.0 on factory floor work have been studied under the theme "Operator 4.0", first introduced by Romero et al. [22].
Operator 4.0 describes human-automation symbiosis (or 'human cyber-physical systems') as characterized by the cooperation of machines with humans in work systems, designed not to replace the skills and abilities of humans, but rather to co-exist with and assist humans in being more efficient and effective. Operator 4.0 studies introduce different operator types, based on utilizing different Industry 4.0 technologies. These operator types include, for example, a smarter operator with an intelligent personal assistant and a collaborative operator working with a collaborative robot [23]. The Operator 4.0 typology supports these changes in industrial work, but the focus is mainly on the viewpoint of individual operators rather than teamwork. Kaasinen et al. [6] further extended the Operator 4.0 typology by introducing a vision of future industrial work with personalized job roles, smooth teamwork in human-machine teams, and support for well-being at work. Romero and Stahre [24] suggest that engineering smart resilient Industry 5.0 manufacturing systems implies adaptive automation and adjustable autonomous human and machine agents. Both human and machine capabilities are needed to avoid disruption, withstand disruption, adapt to unexpected changes and recover from unprecedented situations. The predictive capabilities of both humans and machines are needed to alert each other of potential disruptions. Adjustable autonomy supports modifying task allocation and shifting control to manage disruptions. With unexpected changes, human operators provide the highest contribution to resilience due to their agility, flexibility, and ingenuity. When recovering from unprecedented situations, human-machine mutual learning supports gradually improving self-healing properties. Future manufacturing systems will be complex cyber-physical systems, where human actors and smart technologies operate collaboratively towards common goals. Human actors and technology may be physically separated, but not in their operation, where from a functional viewpoint, they are coupled in a higher-order organization, where they share knowledge about each other and their roles in the system. Human operators are generally acknowledged to use a model of the system (machine) with which they work. Similarly, the machine has an image of the operator. The designer of a human-machine system must recognize this and strive to obtain a match between the machine's image and the user characteristics on a cognitive level, rather than just on the level of physical functions. Different research and design approaches have been proposed to manage the challenge of designing these kinds of complex socio-technical systems. In the design of socio-technical systems, technical, contextual, and human factors viewpoints should be considered. The socio-technical systems are dynamic and resilient, and the design should support designing adaptive systems with adaptive actors. These needs are becoming increasingly important as the systems include AI elements that facilitate continuous learning. The established design tradition of joint cognitive systems [9,10] is focused on designing complex human-machine systems, and thus provides a good basis for designing factory floor human-machine teams for Industry 5.0. In the following section, we describe the joint cognitive systems approach and related research. 
We also analyze how the joint cognitive systems approach could be complemented to support human centricity in the early-phase design and continuous development of dynamic and resilient human-machine systems for Industry 5.0. Designing and Studying Joint Cognitive Systems Hollnagel and Woods [9] argue that the disintegrated conception of the human-machine relationship has led to postulating an "interface" that connects the two separate elements. They point out that adopting an integrated approach would "change the emphasis from the interaction between humans and machines to human-technology co-agency" (p. 19). Understanding human-machine co-agency as a functional unity led them to introduce a new design concept called a joint cognitive system (JCS). According to them, a "JCS is not defined by what it is but what it does" (p. 22). Consequently, the emphasis is on considering the functions of the JCS rather than of its parts. These authors see that the shift in perspective denotes a change in the research paradigm regarding human-technology interactions. This change in the research paradigm is widely adopted in the field of cognitive systems engineering (CSE), of which JCS can also be viewed as being part. Together they represent an alternative stream of research on human factors and the design of human-technology interaction [11,12], which earlier, to a large extent, was based on information-processing theories. That is to say, the human actor is seen as a creative factor, and in design the aim is to maximize the benefits of involving the human operators in the system operation by supporting collaboration and coordination between the system elements. Hollnagel and Woods [9] characterize JCSs according to three principles: (a) goal orientation, (b) control to minimize entropy (i.e., disorder in the system), and (c) co-agency in the service of objectives. A cognitive system can be seen as an adaptive system, which functions in a future-oriented manner and uses knowledge about itself and the environment in the planning and modification of the system's actions [25]. Thus, the ability to maintain control of a situation, despite disrupting influences from the process itself or from the environment, is central. This requires considering the dynamics of the situation (i.e., creating an appropriate situational understanding) and accepting the fact that capabilities and needs depend on the situation and may vary over time [26]. Thus, according to Woods and Hollnagel [10], the focus of the analysis should be on the functioning of the human-technology-work system as a whole. They are especially interested in understanding how mutual adaptation takes place in a JCS and have recognized three types of adaptive mechanisms that they also labelled as patterns, that is, patterns of coordinated activity, patterns of resilience and patterns in affordance. From the point of view of design, these patterns can be seen as empirical generalizations abstracted from the specific case study and situation, and they are used to describe the functioning of the joint system. Thus, in design, the analysis of the dynamics and generic patterns of the JCS is proposed to be of central importance. For example, in his analysis of the role of automation in JCS, Hollnagel [27] emphasizes the importance of ensuring redundancy among functions assigned to the various parts of the system.
He recognizes four distinct conditions related to various degrees of responsibility of automation in monitoring and control functions, all of which have their advantages and disadvantages. The opposite ends of the spectrum are conventional manual control and full automation take-over. The two other conditions in between are operation by delegation (human monitoring, automation controlling) and automation-amplified attention (automation monitoring, with human control). When considering these four different conditions, it is easy to understand that none alone can be superior to the others in all operative situations. Instead, Hollnagel emphasizes that a balanced approach is necessary for considering the consequences of the defined automation strategy. According to him, the choice of an automation strategy is always a compromise between the efficiency and flexibility of the JCS, since an increase in automation can positively affect efficiency but can also cause a certain inflexibility in the system, and vice versa. Therefore, he concludes that "human centeredness is a reminder of the need to consider how a joint system can remain in control, rather than a goal. Each case requires its own considerations of which type of automation is the most appropriate, and how control can be enhanced and facilitated via a proper design of the automation involved" [27] (p. 11). Norros and Salo [28] build partly on the ideas of the JCS (e.g., the notion of a pattern to describe joint system functioning) when developing their joint intelligent systems (JIS) approach. The JIS approach acknowledges that technology is becoming ubiquitous and part of a new type of "intelligent environment", where the focus is no longer on interaction with an individual technological tool or solution. Thus, in JIS, the unit of analysis is extended to describe the structure of the joint system: a human, technology and environment system that is, by its very nature, coevolving. One influential researcher and author, Timo Järvilehto, has made particular contributions to how the relationship between humans and the environment is understood in JIS: from a functional point of view, humans and their environment form a functional unity [29]. Here, the environment refers to the part of the world that may potentially be useful for a particular organism. This understanding is also close to how, for example, Gibson [30] defines human activity in an environment through affordances. Thus, a central idea is that the system is organized by its purpose and shaped by the constraints and possibilities of the environment, which must be considered when maintaining adaptive behavior. Moreover, in the JIS approach, "intelligence" is not an attribute of technology or the human element as such but, instead, it refers to the appropriate functioning and adaptation of a system. The whole adaptive system, or "intelligent environment," becomes the object of design [28]. Therefore, it follows that a JIS approach advocates methodologies that aid the understanding of how the system behaves, that is, the patterns of behavior or ways of working that express an internal regularity in the behavior of the system. For instance, a semiotically grounded method for an empirical analysis of people's usage of their tools and technology [31] is suggested. The joint cognitive systems approach has been proposed for the modeling of smart manufacturing systems by Jones et al. [32] and Chacon et al. [33]. Jones et al.
[32] present smart manufacturing systems as collaborative agent systems that can include software and hardware, as well as machine, human, and organizational agents, among many others, in the service of jointly held goals. They propose modeling these agents as joint cognitive systems. They point out that cyber technologies have dramatically increased the cognitive capabilities of machines, changing machines from being reactive to self-aware. This allows humans and machines to work more collaboratively, as joint partners to execute cognitive functions. Humans and machines execute tasks as an integrated team, working on the same tasks and in the same temporal and physical spaces. This integrated view changes the emphasis from the interaction between humans and machines to human-machine co-agency. The behavior of these agents can be highly unpredictable due to the diverse technologies they use and the behaviors they display. Jones et al. [32] propose that, since smart manufacturing systems represent joint cognitive systems, these systems should be engineered and managed according to the principles of cognitive systems engineering (CSE). Chacon et al. [33] point out that the Industry 4.0 paradigm shift from doing to thinking has allowed for the emergence of cognition as a new perspective for intelligent systems. They see joint cognitive systems in manufacturing as the synergistic combination of different technologies such as artificial intelligence (AI), the Internet of Things (IoT) and multi-agent systems (MAS), which allow the operator and process to provide the necessary conditions to complete their work effectively and efficiently. Cognition emerges as goal-oriented interactions of people and artefacts to produce work in a specific context. In this situation, complexity emerges because neither goals, resources nor constraints remain constant, creating dynamic couplings between artefacts, operators, and organizations. Chacon et al. [33] also refer to a CSE approach in analyzing how people manage complexity, understanding how artefacts are used and understanding how people and artefacts work together to create and organize joint cognitive systems, which constitute the basic unit of analysis in CSE. The main challenges in the design of joint cognitive systems are recognized to be human-machine interactions and the allocation of functions between humans and autonomous systems. Therefore, methodologies have been developed to empirically approach and analyze the functioning of complex systems, including methods such as cognitive work analysis [12] and applied cognitive task analysis [34]. Regarding the allocation of functions in the design of joint cognitive systems, a key challenge is that the optimal allocation of functions depends on the operating conditions. In dynamic function allocation, several levels of autonomy are provided, and a decision procedure is applied to switch between these levels. Dynamic function scheduling is a special case of dynamic function allocation in which the re-allocation of a particular function is performed along the agents' timeline (e.g., [35]). Functional situation modelling (FSM), combining both chronological and functional views, is suitable for making considerations regarding dynamic function allocation [36,37]. Functional situation modelling is based on Jens Rasmussen's abstraction hierarchy concept [38] and Kim Vicente's cognitive work analysis methodology [12].
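As a purely illustrative sketch (not drawn from [35-37]), dynamic function allocation can be pictured as a decision procedure that switches a function between autonomy levels; the level names loosely echo the four automation conditions discussed above, and the inputs and thresholds are our own assumptions, not validated values.

```python
from enum import Enum


class AutonomyLevel(Enum):
    MANUAL_CONTROL = 1           # human monitors and controls
    AMPLIFIED_ATTENTION = 2      # automation monitors, human controls
    OPERATION_BY_DELEGATION = 3  # human monitors, automation controls
    FULL_AUTOMATION = 4          # automation monitors and controls


def allocate_function(operator_workload: float, situation_uncertainty: float) -> AutonomyLevel:
    """Toy decision procedure for dynamic function allocation.

    Both inputs are assumed to be normalized to [0, 1]; the thresholds are illustrative.
    """
    if situation_uncertainty > 0.7:
        # Unfamiliar or disrupted situations: keep the human in control, automation supports monitoring.
        return AutonomyLevel.AMPLIFIED_ATTENTION
    if operator_workload > 0.7:
        # Routine situation but a busy operator: delegate execution, human monitors.
        return AutonomyLevel.OPERATION_BY_DELEGATION
    if situation_uncertainty < 0.2 and operator_workload < 0.2:
        return AutonomyLevel.FULL_AUTOMATION
    return AutonomyLevel.MANUAL_CONTROL


# Example: a calm operator in a highly uncertain situation keeps direct control.
print(allocate_function(0.4, 0.9))  # AutonomyLevel.AMPLIFIED_ATTENTION
```

In a real system the inputs would come from workload and situation assessment, and re-allocation would be scheduled along the agents' timeline, as in dynamic function scheduling.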
The chronological and functional views determine a two-dimensional space in which important human-machine actions and operational conditions are mapped. Actions are described under the functions that they are related to. The main function of the domain gives each action a contextual meaning. In the chronological view, the scenario is divided into several phases based on the goal of the activities; in the functional view, the critical functions that are endangered in a specific situation are illustrated [36]. Regarding interaction design, the main challenge is how to promote the mutual interpretation of the functional state of the joint system and the capability of adaptive actions to augment system resilience. On the one hand, this means assuring that the relevant human intentions, activities, and responsibilities are properly communicated to and understood by machines, and on the other hand, that the means by which humans interact with the machines meet their goals and needs. In particular, to ensure the smart integration of humans into the system, the focus should be on understanding how to improve the representation of system behavior and operative limits, in order to achieve the system's operative goals. One useful option is to apply the ecological interface design (EID) approach, which is based on functional modelling [12] and the principles of James Gibson's ecological psychology [30]. EID offers a number of visual patterns for representing system states and opportunities for control; thus, it promotes the perception of the functional states of the joint cognitive system in relation to the overall objectives of the activity. Our research question is: What kinds of approaches and methods could support human-centric, early-phase design and the continuous development of human-machine teams on the factory floor as smoothly working, resilient and continuously evolving joint cognitive systems? The JCS approach and the CSE design approach provide good bases for the design. However, to support the early-phase co-design of joint cognitive systems with relevant stakeholders, methods are needed for the planned allocation of functions, responsibilities, collaboration, and interaction to be described and shared with all stakeholders so that they can comment on and contribute to the design. Furthermore, as the human-machine system is continuously evolving and the planned allocation of functions, responsibilities, collaboration and interaction changes, there is a need for approaches with which the current status of the joint cognitive system and its actors can be studied. The increasing autonomy of machines raises many ethical concerns, and such concerns should be studied in the early design phases. Human factors engineering (HFE) is a scientific approach to the application of knowledge regarding human factors and ergonomics to the design of complex technical systems. It can be considered as one engineering framework among several others under the CSE discipline [39]. Its tasks and activities can be grouped according to the stages of the design process they relate to. Typically, HFE activities are classified into four groups: analysis, design, assessment, and implementation/operation. One of the first tasks in the analysis stage is to develop a Concept of Operations (ConOps) for the new system. The ConOps method was introduced by Fairley and Thayer in the 1990s [40] as a bridge from operational requirements to technical specifications.
The key task in the development of a ConOps is the allocation of functions and stakeholder requirements to the different elements of the proposed system. For example, it can be used to describe and organize the interaction between human operators and a swarm of autonomous or semi-autonomous robots. Typically, ConOps is considered to be a transitional design artefact that plays a role in the requirements specification during the early stages of the design and involves various stakeholders. Thus, ConOps could support the early-phase co-design of joint cognitive systems. Actor-network theory (ANT) was developed in the early 1980s by the French scholars of science and technology studies (STS) Michel Callon and Bruno Latour, as well as the British sociologist John Law [41]. ANT is a theory but also a methodology, which is typically used to describe the relations between humans and non-humans and ideas of technology. The theory argues that technology and the social environment interact with each other, forming complex networks. These sociotechnical networks [42] consist of multiple relationships between the social, the technological or material, and the semiotic. Digitalization has made networks more complex and varied. In these networks, digital devices can be actors that are just as important as human actors. They may even be more important, as smart machine actors are no longer simply instruments for human interaction. To understand these unorthodox networks of human and non-human actors, more specific analysis tools are required. One option is to perceive them as actor-networks and name the actors as 'actants' when executing actor-network analyses. Thus, ANT could support the analysis of a continuously evolving human-machine system and the relations within it. When designing smooth human-machine teamwork, striving for ethically sound solutions is an important design goal. While the importance of ethics is widely acknowledged in design, whether it concerns human-machine teamwork or other fields of applying new technologies, embedding ethics in the practical work of designers may be challenging. Several approaches have been proposed to support considering ethics in the design process, and most of them highlight the need to consider ethics already in the early phases of design. Proactive ethical thinking to address ethical issues is particularly highlighted in the Ethics by Design approach [43], which aims to create a positive, ethically aware mindset for the project group to actively work towards designing ethically sound solutions. In the next section, we focus more on these three complementary approaches to support human-centric early-phase design and the continuous development of human-machine teams on the factory floor. We will describe the approaches and the related methods, and we will analyze their opportunities and risks. Approaches and Methods for Human-Centric Early-Phase Design and Continuous Development of Dynamic and Resilient Industry 5.0 Human-Machine Teams Visions of the future present a factory floor where humans, collaborative robots, and autonomous agents form dynamic teams capable of reacting to changing needs in the production environment. The human actors have personally defined roles based on their current skills, but they can continuously develop their skills, and thus can also change their role in the team. Similarly, autonomous agents and collaborative robots are able to learn based on their artificial intelligence capabilities.
All the actors are in continuous interaction by communicating, collaborating, and coordinating their responsibilities. How can this kind of a dynamic entity be designed? Clearly, after it has been initially designed and implemented, it starts developing on its own as both human actors and smart machine actors learn new skills and gain new capabilities. Based on the literature analysis and our own experiences, we propose that a joint cognitive systems approach can be complemented with three promising approaches and methods: 1. Actor-network theory is focused on describing sociotechnical networks and interaction via empirical, evidence-based analyses. It is promising as it treats human and non-human actors equally. Actor-network theory can help understand how a dynamic human-machine team works and how it evolves over time. This can support design and development activities. 2. Concept of operations is a promising method and design tool that provides means to describe different actors and the interdependencies between them. Compared to earlier methods based on modelling, it better supports both the dynamic nature of the overall system and co-design and development activities with relevant stakeholders. 3. Human-machine teams raise ethical concerns related to aspects such as task allocation, machine-based decision making, and human roles. The gradually evolving joint system may introduce new ethical issues that were not identified in the initial design. Thus, an ethical viewpoint should be an integral part of not only the initial design but also the continuous development of human-machine teams. Figure 2 illustrates how the proposed three approaches incorporate different viewpoints for analyzing and designing Industry 5.0 human-machine teams as joint cognitive systems. In this section, we discuss actor-network theory, the concept of operations and ethically aware design in separate subsections, analyzing what kinds of concrete support these approaches and methods could bring to the design and development of human-machine teams. We also identify potential challenges and risks. Actor-Network Theory (ANT) Technology and the social environment interact with each other, forming complex networks between humans and non-humans. These networks consist of multiple relationships between the social, technological, and semiotic [41,42]. At best, actor-network analyses will clarify the roles and positions of actors, their relations, sociotechnical networks, and processes. Industry 5.0 will create new networks, relations, and alliances, which include humans and non-humans. On the factory floor, non-humans are typically artefacts: machines, devices, and their human-created components.
At many workplaces, networks are digitalized, and workers need to act and interact with different actors: other humans, various machines, and new technologies. Three types of interaction in digitalized networks can be found: social interaction between humans, human-machine interaction, and interaction between machines. Interaction between humans is social [44], and the oldest example of this is face-to-face interaction (F2F). In the analyses of social networks, attention is focused on human or organizational actors and their nodes (or hubs) or on the relations between these actors. It requires cultural knowledge to understand social relationships. Successful social networking requires understanding, trust, and commitment between human actors. In other types of digitalized networks, at least one actor is non-human, and the interaction is characterized as human-machine interaction (HMI) or, when interacting with digital devices, human-computer interaction (HCI). There is also a type of purely non-human interaction between machines and computers, better known as the Internet of Things (IoT). In empirical analyses, actor-networks are usually approached ethnographically, as Bruno Latour and Steve Woolgar demonstrated in their classic study Laboratory Life: The Construction of Scientific Facts [45], in which they showed the complexity of laboratory work. Latour and Woolgar wrote 'a thick description' with detailed descriptions and interpretations of the human and non-human activities they observed in the laboratory. As a result of their ethnographic field work, these descriptions were multi-dimensional, material, social, and semiotic. ANT stresses the capacity of technology to be 'an actor' (sociology) or 'an actant' (semiotics) in and of itself, which influences and shapes interactions in material-semiotic networks.
Here, the strict division of human and non-human (nature and culture) becomes useless and all actors are treated with the same vocabulary. In an actor-network, actions can be blurred and mixed. For this reason, it can be hard to analyze causal relations or even to say which results different actions will produce in the network processes. On the factory floor, non-human actors can be simple tools such as axes or hammers, or more complex items such as mechanical or automated machines, robotic hands or intelligent robots, in addition to elements such as electricity, software, microchips, or AI. Many of these actors are material, which makes it possible to observe their action and interaction. AI and software may take visible form in intelligent robots, chatbots, digital logistics systems or augmented reality. There are already examples of complex actor-networks in healthcare, where social and care robots are interacting with patients and seniors, registering their attitudes and habits, and collecting sensitive individual data. Christoph Lutz and Aurelia Tamò [46] analyzed privacy and healthcare robots with ANT. Regarding ANT as a method, they argued that "ANT is a fruitful lens through which to study the interaction between humans and (social) robots because of its richness of useful concepts, its balanced perspective which evades both technology-determinism and social-determinism, and its openness towards new connected technology" [46]. Another key concept of ANT is 'translation'. According to Callon [41], translation has four stages: the problematization, interessement, enrollment and mobilization of network allies. John Shiga [47] has utilized translation in his analyses of how to distinguish human and social relations from technical and material artifacts (MP3s, iPods, and iTunes). His study shows how the non-human agency of artifacts constitutes the social world in translations of "industry statements, news reports, and technical papers on iPods, data compression codecs, and copy protection techniques". Shiga [47] argues that ANT "provides a useful framework for engaging with ... foundational questions regarding the role of artifacts in contemporary life". Even if robots or other AI-based actors can make decisions and enact them in the world, they are not sovereign actors (cf. [41,48]) or legal subjects in society. All these artefacts are not only non-human but are also created by humans. Their presence can be observed with the senses, or it can take a more implicit form. The Internet of Things (IoT) might seem the most extreme version of a network of non-human actants, but even that is not quite correct: the influence of humans can still be seen in the ideas behind and the execution of the technology. IoT-based services that utilize sensor data, clouds and Big Data networks are examples of the most sophisticated technologies where non-humans and humans are both present. Actor-network methodology and analyses will show, for example, how things happen, how interaction works, and which actors support sociotechnical network interaction. In general, ANT is a descriptive framework that aims to illustrate relationships between actants. Still, ANT does not explain social activity or the causality of action. It does not give answers to the question of why. The power of ANT lies in asking how, and in understanding the nature of agency and the multiple interactions in sociotechnical networks.
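As a purely illustrative aid, and not part of ANT itself, observations of this kind could be recorded as a small graph of actants and labelled relations from which simple textual network maps can be drawn; all actant names below are invented examples.

```python
from collections import defaultdict

# A tiny actor-network recorded from (hypothetical) fieldwork notes.
# Actants may be human or non-human; relations are directed and labelled.
actants = {
    "operator_anna": "human",
    "cobot_arm_3": "machine",
    "scheduling_ai": "software",
    "quality_gauge": "device",
}

relations = [
    ("scheduling_ai", "assigns task to", "cobot_arm_3"),
    ("scheduling_ai", "assigns task to", "operator_anna"),
    ("operator_anna", "adjusts", "cobot_arm_3"),
    ("cobot_arm_3", "reports measurement to", "quality_gauge"),
    ("quality_gauge", "alerts", "operator_anna"),
]

# Group relations by source actant to print a simple network map in text form.
network_map = defaultdict(list)
for source, label, target in relations:
    network_map[source].append(f"--{label}--> {target} ({actants[target]})")

for source, edges in network_map.items():
    print(f"{source} ({actants[source]})")
    for edge in edges:
        print(f"  {edge}")
```

Such a structure only records who relates to whom and how; the interpretive work of the ethnographic description remains with the analyst.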
By writing a rich ethnographic description, it becomes possible to understand more precisely how the interaction process works. ANT makes visible the active, passive, or neutral roles which 'actants' have taken, mistaken, or obtained in the network during the process. Additionally, understanding observed disorders and problems between human and non-human agency will underline the need for better design. The results of actor-network analyses can be utilized in designing smooth human-machine teams. In the ANT method, the purpose of action and the intention of agency are recorded in rich descriptions. With careful ethnographic fieldwork, the observed relations in the network, the agency of humans and artifacts, and the positions of actors/actants can be determined in the analyses. The challenge of ANT is that the evaluation of human-machine teams becomes difficult if the executed fieldwork is poor, descriptions incomplete, and analyses too shallow or blurry. Then, there is a risk that the ANT approach will be unable to produce useful information for the early phases of design. Concept of Operations IEEE standard 1362 [45] defines a ConOps document as a user-oriented document that describes a system's operational characteristics from the end user's viewpoint. It is used to communicate overall quantitative and qualitative system characteristics among the main stakeholders. ConOps documents have been developed in many domains, such as the military, health care, traffic control, space exploration and financial services, as well as in different industries, such as nuclear power, pharmaceuticals, and medicine. At first glance, ConOps documents come in a variety of forms, reflecting the fact that they are developed for different purposes in different domains. Typically, ConOps documents are based on textual descriptions, but they may include informal graphics that aim to portray the key features of the proposed system, for example its objectives, operating processes, and main system elements [46]. ConOps has the potential to be used at all stages of the development process, and a general ConOps is a kind of template that can be modified and updated according to specific needs and use cases [47]. A ConOps document typically describes the main system elements, stakeholders, tasks, and explanations of how the system works [49]. A typical ConOps document contains, at a suitable level of detail, the following kinds of information [40,49]: the operational goals and constraints of the system, interfaces with external entities, main system elements and functions, operational environment and states, operating modes, allocation of responsibilities and tasks between humans and autonomous system elements, operating scenarios, and high-level user requirements. Väätänen et al. [50] pointed out that the ConOps specification should consider three main actors: the autonomous system, human operators, and other stakeholders. Initial ConOps diagrams can be created based on these three main parts of the ConOps structure. During the ConOps diagram development process, each actor can be described in more detail and their relationships with each other can be illustrated. On the other hand, the initial or case-specific ConOps diagrams can be used in co-design activities when defining, e.g., operator roles and requirements. The concept of operations approach illustrates how an autonomous system should function and how it can be operated in different tasks.
Operators supervise the progress of the tasks, and they can react to possible changes. Robots can be fully autonomous in planned tasks and conditions, but in some situations an operator's actions may be required. Laarni et al. [51] presented a classification of robot swarm management complexity. In their classification, the simplest operation mode is when one operator supervises a particular swarm. The most complicated situation was seen when human teams from different units co-operated with swarms from several classes. This scenario also considers that the human teams work together, and autonomous or semi-autonomous swarms engage in machine-machine interactions. The following numbered list indicates in more detail the six classes of robot swarm management complexity presented by Laarni et al. [51]: 1. An operator designates a task to one swarm belonging to one particular class and supervises it; 2. An operator designates a task to several swarms belonging to one particular class and supervises its progress; 3. An operator designates tasks to several swarms belonging to different classes and supervises their progress; 4. Several operators supervise several swarms belonging to different classes; 5. Human teams and swarms belonging to one particular class are designated; 6. Human teams and swarms belonging to several classes are designated. A ConOps can be presented in different forms, and we have identified three types of ConOps that can be derived from the source: syntactic, interpretative, and practical. We propose that since the uncertainties and novelties between actors become larger when moving from the syntactic to the practical level, "stronger" ConOps tools are required. ConOps can be presented at different levels of detail so that, by zooming in and out of the ConOps hierarchy, different elements of the system come into focus. So, even though a ConOps is a high-level document by definition, this kind of high-level description can be outlined at different levels of system hierarchy. Regarding collaborative robots and human-machine teams, these can be demonstrated at the level of factory infrastructure within a particular domain of manufacturing (i.e., system of systems level), at the level of a swarm of collaborative robots in a single factory (i.e., system level), and in the context of human-AI teams controlling a single robot (i.e., sub-system level). Even though ConOps provides a good way to co-design human-machine systems with relevant stakeholders and to illustrate design decisions, the approach also has challenges and risks. Mostashari et al. [52] noted that ConOps can be perceived as burdensome rather than beneficial due to the laborious documentation requirements. Therefore, it may be difficult to justify its value to the design team and other stakeholders. Moreover, the contents and quality of ConOps documents can vary greatly, especially because ConOps standards and guidelines are poorly adhered to. Since the development of ConOps is often a lengthy and laborious process, it may be challenging to estimate the required resources. ConOps work should be designed and tailored to match the available resources and time in development or research projects. This may be challenging if the proposed concept should address a wide range of actors and relationships and dependencies between them in a comprehensive and reliable way. 
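As a concrete, hypothetical illustration of the kind of information a ConOps gathers, the sketch below outlines a sub-system-level ConOps record for a single operator-cobot cell; it is not a standardized schema, and all field names and example values are our own assumptions.

```python
from dataclasses import dataclass, field


@dataclass
class ConOps:
    """Minimal, illustrative structure mirroring typical ConOps contents (cf. [40,49])."""
    operational_goals: list[str]
    constraints: list[str]
    external_interfaces: list[str]
    system_elements: list[str]
    operating_modes: list[str]
    task_allocation: dict[str, list[str]]   # actor -> responsibilities
    operating_scenarios: list[str]
    user_requirements: list[str] = field(default_factory=list)


# Hypothetical sub-system-level example: one operator supervising one collaborative robot.
cell_conops = ConOps(
    operational_goals=["assemble product variant A on demand"],
    constraints=["operator safety zone of 0.5 m", "cycle time under 90 s"],
    external_interfaces=["factory MES", "maintenance dashboard"],
    system_elements=["operator", "collaborative robot", "vision system"],
    operating_modes=["normal production", "manual recovery", "maintenance"],
    task_allocation={
        "operator": ["load parts", "approve deviations", "handle recovery"],
        "collaborative robot": ["pick and place", "fastening", "self-check"],
    },
    operating_scenarios=["normal cycle", "part misalignment", "robot fault"],
    user_requirements=["operator can pause the robot at any time"],
)
```

A record like this would typically be accompanied by diagrams and textual descriptions; its main value in co-design is that each field can be discussed and revised with the stakeholders it concerns.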
Ethically Aware Design The successful design of human-machine collaboration is not only a matter of effective operations, but also a matter of ethical design choices. We encourage considering ethical aspects already in the early design phases, as well as later, in an iterative manner as the sociotechnical system and work practices evolve. For example, respecting workers' autonomy, privacy and dignity is important when allocating tasks between humans and machines and when designing how to maintain or increase the sense of meaningfulness at work, utilize the strengths of humans and machines, and create work practices in which humans and machines complement each other. To ensure that ethical aspects are considered in the design of smooth human-machine teamwork, several approaches can be applied. Ethical questions can be identified and addressed by identifying the values of the target users and responding to them [53], by assessing the ethical impacts of the design [54] or by creating and following ethical guidelines (see, e.g., [55]). In each of these approaches, different methods can be utilized. One of the first and most established approaches for ethically aware design is value-sensitive design [53]. This is a theoretically grounded approach, which accounts for human values throughout the design process. The approach consists of three iterative parts: conceptual, empirical, and technical investigations. The conceptual part focuses on the identification of the direct and indirect stakeholders affected by the design, the definition of relevant values and the identification of conflicts between competing values. Empirical investigations refer to studies conducted in usage contexts that focus on the values of individuals and groups affected by the technology and the evaluation of designs or prototypes. Technical investigations focus on the technology and design itself: how the technical properties support or hinder human values, and the proactive design of the system to support the values identified in the conceptual investigation. While value-sensitive design focuses on identifying the values of stakeholders, ethics can also be approached from the perspective of assessing the potential ethical impacts of adopting new technologies. A framework for ethical impact assessment [54] is based on four key principles: respect for autonomy, nonmaleficence, beneficence and justice. Each of these principles includes several sub-sections and example questions for a designer, in order to consider the impacts of technology from the perspective of each principle. More straightforwardly, ethics can be considered in design by following guidelines throughout the design process. For example, Kaasinen et al. [56] define five ethical guidelines to support ethics-aware design in the context of Operator 4.0 solutions, applying the earlier work by Ikonen et al. [55]. The guidelines are based on the ethical themes of privacy, autonomy, integration, dignity, reliability, and inclusion, and they describe the guidance related to each theme with one guiding sentence. For example, to support workers' autonomy, the guidelines suggest that the designed solutions should allow the operators to choose their own way of working, and to support inclusion, the designed solutions should be accessible to operators with different capabilities and skills. Although established methods for ethically aware design exist, ethical aspects are not always considered in the design process.
Even though ethics may be felt to be an important area of design, the lack of concrete tools or of integration in the design process may prevent designers from considering ethical aspects in design. Ethical values or principles may either sound too abstract to be interpreted into design decisions or come as an exhausting list of principles, which does not encourage inexperienced designers to study them. Moreover, the first impression of the design team may be that the new work processes would not include any ethical aspects, even though there may be several significant ethical questions when the process is regarded from the wider perspective of several stakeholders or from the perspective of selected ethical themes. In addition, new work processes may evoke unexpected ethical issues while new ways of working are adopted and gradually evolve. What would be the key means to overcome the challenges of embedding ethics into existing design practices? In line with the Ethics by Design approach [43], one of the most important starting points is to elicit ethical thinking already in the early design process as a joint action of the design team and relevant stakeholders. This approach has a number of benefits: it supports proactive ethical thinking, engages diverse ethical perspectives on the design, and encourages commitment to the identified values or principles. Ethics awareness is more likely to be maintained throughout the design process when the ethical aspects are not brought in from outside the project but co-identified within the project team. Co-created guidelines or principles are easier to understand and to concretize than a list of ready-made aspects. Furthermore, they are probably more relevant to the existing organization as they include aspects identified as particularly pertinent to the work process being designed. An ethical design approach is particularly important in phases when value conflicts may occur and design compromises must be made. For example, pursuing both the efficiency of operations and workers' well-being may require making design choices that balance these impacts, and design thinking that does not only consider short-term objectives but also long-term sustainability. As not all of the impacts of changing work practices and co-working with machines can be foreseen beforehand, an ethically aware mindset is also needed later to analyze the impacts and guide the process in the desired direction. Ethically aware design should reduce the risk of planning human-machine teamwork that does not meet the needs and values of the stakeholders. Yet, encouraging ethical thinking carries a risk of overanalyzing the potential consequences of work changes and of excessive thinking about exceptional situations or risks of misuse. This may hinder or slow down the adoption of new ways of working. To avoid this, ethics should be seen as one perspective of design among other important aspects, such as usability, safety or ergonomics. A framework that includes all such viewpoints, such as the design and evaluation framework for Operator 4.0 solutions [56], can serve as a tool to create a joint ethically aware mindset in the design group, as well as serving as a reminder to include ethics in the design activities throughout the design process. Ethics can be discussed as one theme in user studies or stakeholder workshops, and it can be considered when defining user experience goals for design.
For example, a goal to make workers feel encouraged and empowered at work [57] supports ethical design that does not concentrate on problems or exceptional situations, but drives the design towards the desired, sustainable direction. In addition, ethical questions can be addressed through dedicated design activities that fit the design process of the organization. In practice, for example, creating future scenarios and walking them through with workers, experts or the design team may help to identify the potential ethical impacts of the design and may guide the design activities further (see, e.g., [58]). As the system and the work practices evolve, scenarios can be re-visited to review whether new ethical questions have occurred or can be identified based on the experience of the new ways of co-working. This supports the continuous pursuit towards an ethically sustainable work community and human-machine teamwork. Summary In this section we have presented three complementary approaches to support the design of Industry 5.0 human-machine teams as joint cognitive systems. In Table 1, we summarize and compare the approaches regarding their background, aims and results, as well as their benefits and challenges. These approaches support human centricity in the early phases of the design, and thus complement the more technical cognitive systems engineering approaches. Table 1 (condensed): Results. Actor-network theory: the actors in the human-machine team, their roles, connections, and interactions. Concept of operations: the operational concept of the human-machine team and the responsibilities and tasks of the actors. Ethically aware design: an ethical contribution to the system design and ethical guidelines. Form of results. Actor-network theory: a rich description with network maps. Concept of operations: initial and use-case-specific ConOps diagrams supplemented by user requirements tables and a high-level textual description of the system. Ethically aware design: ethical principles and aspects to be considered in the design, potential ethical impacts, and a contribution to design decisions. Benefits. Concept of operations: illustrates complex systems from the end-users' points of view, concentrating specifically on the relationships between stakeholders, the technical system and the use cases. Ethically aware design: provides methods for identifying and considering ethical aspects in design, supports proactive ethical thinking, and engages diverse perspectives in the design and commitment towards ethically sustainable design. Risks and challenges. Actor-network theory: overly shallow or blurry analyses, misinterpreted purposes, and problems clarifying the responsibilities of the actors. Concept of operations: laborious to draft, difficult to allocate a suitable amount of resources to its development, and difficulties in convincing stakeholders of its value. Ethically aware design: overanalyzing potential consequences of technology adoption or work changes, excessive focus on exceptional situations or risks of misuse. Discussion Industry 5.0 visions describe sustainable, resilient, and human-centered future factories, where human operators and smart machines work in teams. These teams need to be resilient to be able to respond to changes in the environment. The resilience needs to be reflected in dynamic task allocation and in the flexibility of the team members to change roles. After a joint cognitive system has been initially designed and implemented, it continuously co-evolves as both human and machine actors learn, thus gaining new skills and capabilities. Additionally, changes in the production environment require the system to be resilient and to adapt in response to changing expectations and requirements.
We have studied the kinds of approaches and methods that could support a human-centric early design phase and the continuous development of Industry 5.0 human-machine teams on the factory floor as smoothly working, resilient and continuously evolving joint cognitive systems. It is not sufficient simply to have methods for the design of a joint cognitive system; there is also a need for methods that support the monitoring and observing of a joint cognitive system to understand how the system is currently functioning, what the current skills and capabilities of the different human and machine actors are, and how tasks are allocated among the actors. Moreover, there is a need for co-design methods and tools to allow relevant stakeholders to participate in designing the joint human-machine system and to influence its co-evolvement. Autonomous and intelligent machines raise different ethical concerns, and thus ethical issues should be studied already in the early phases of the design and during co-evolvement. The joint cognitive systems approach was introduced already in the 1980s and has changed the design focus from the mere interaction between humans and machines to human-technology co-agency [9]. This is important when designing human-machine teams, as in these teams human operators are not merely "using" the machines; rather, both humans and machines have their own tasks and roles, targeting common goals. Another characteristic feature of joint cognitive systems is the dynamic and flexible allocation of functions between humans and machines. The joint cognitive systems approach has been utilized in designing automated systems where the ability to maintain control of a situation, despite disrupting influences from the process itself or from the environment, is central. This requires managing the dynamics of the situation by maintaining and sharing situational awareness, and considering how the capabilities and needs depend on the situation and how they may vary over time [26]. Resilience to changes in the system itself and in the environment is also central in designing human-machine teams. The design focus should be on supporting co-agency through shared situational awareness, collaboration, and communication. Jones et al. [32] point out that cyber technologies have dramatically increased the cognitive capabilities of machines, changing machines from being reactive to self-aware. Jones et al. identify the various actors in Industry 4.0/5.0 manufacturing systems as human, organizational and technology-based agents. Chacon et al. [33] point out how these agents are based on a synergistic combination of different technologies such as artificial intelligence (AI), the Internet of Things (IoT) and multi-agent systems (MAS). Manufacturing systems consisting of various human and machine agents can be seen as joint cognitive systems. Both Jones et al. [32] and Chacon et al. [33] propose a cognitive systems engineering (CSE) approach to designing these multi-agent-based manufacturing systems. CSE focuses on cognitive functions and analyzes how people manage complexity, understanding how artefacts are used and how people and artefacts work together to create and organize joint cognitive systems [33]. In CSE, the design focus is on the mission that the joint cognitive system will perform. The joint cognitive system works via cognitive functions such as communicating, deciding, planning, and problem-solving.
These cognitive functions are supported by cognitive processes such as perceiving, analyzing, exchanging information and manipulating [33]. Cognitive systems engineering is based on various forms of modelling, such as cognitive task modelling, which is often a quite laborious process. As Grässler et al. [13] point out, human actors are often greatly simplified in model-based design. Human performance modelling may then be hampered as it does not properly take into account the variability in personal characteristics, skills, and preferences. Another design challenge, raised by van Diggelen et al. [21], is that traditional design methods do not sufficiently address the autonomous capabilities of machine agents, resulting in systems where the human becomes a supervisor rather than a teammate. The concept of operations method supports focusing on the teamwork in the design. The concept of operations is a high-level document providing a starting point for the modelling of a joint cognitive system by laying the basis for the requirements specification activity [40]. A ConOps can also be considered as a boundary object promoting communication and knowledge sharing among stakeholders, especially in the beginning phase of the design process. This supports the involvement of all affected workers early in the design, as suggested by Kadir and Broberg [16]. ConOps can also serve the continuous co-development of the system. As the overall joint cognitive system is dynamically evolving over time, it becomes important to understand the entity and the different actors, as well as their roles and responsibilities. For that purpose, we propose actor-network theory (ANT) [41] as an old but promising approach. Actor-network analyses require ethnographic field work with observation and other evidence-based analytical methods that aim to describe the interaction between actors and make network processes visible. The ANT approach is unbiased: it is equally interested in human and non-human agency. This is important because, in human-machine teams, the available capabilities should be utilized similarly regardless of which actor is providing the capability. Actor-network analyses support the understanding of how something happens, how interaction works, and which actors support sociotechnical network interactions. However, ANT does not explain social activity or the causality of action, nor the simple reasons why things happen. Despite these limitations, it can support the understanding of how the dynamic joint cognitive system works, what the current roles of the different actors are, and what their relationships with the other actors are. All of this supports the further development of the overall joint cognitive system as well as the development of the individual actors. An important aspect when designing sustainable human-machine teams is ethical considerations. As Pacaux-Lemoine et al. [14] point out, techno-centered design tends to demand extreme skills from human operators. In the teams, human work should remain meaningful, and the design should not straightforwardly expect the human operators to take responsibility for all the situations where machine intelligence faces its limits. Still, the human actors should be in control, and the machines should serve the humans rather than vice versa. In this paper, we have investigated the challenge of designing resilient human-machine teams for Industry 5.0 smart manufacturing environments.
We are convinced that the long tradition of joint cognitive systems research also provides a good basis for the design of smart manufacturing systems. However, we suggest that the heavy cognitive systems engineering approach can be complemented with the concept of operations design approach, especially in the early phases of the design. The ConOps approach supports the involvement of different stakeholders in the design of the overall functionality and operating principles of the system. ConOps documents facilitate the sharing and co-development of design ideas both for the overall system and for different subsystems. The documents illustrate the allocation of responsibilities and tasks in different situations. Actor-network theory provides methods to analyze and observe a gradually evolving, dynamic team of various actors, and thus to understand how the roles of the actors and the work allocation evolve over time. This supports a better understanding and the further development of the system. While traditional design methods tend to see humans as users or operators, actor-network theory treats human and machine actors equally, and thus can support analyzing actual collaboration to identify where it works well and where it could be improved. Smooth human-machine teamwork requires mutual understanding between humans and machines, and this easily leads to machines monitoring people and their behavior. The task allocation between humans and machines should be fair. Among many other issues, this raises the need to focus on ethical issues from the very beginning and throughout the design process. Ethically aware design supports proactive ethical thinking, integrating the perspectives of different worker groups and other stakeholders. Moreover, ethically aware design provides tools to commit the whole design team to ethically sustainable design. A challenge with the proposed approaches and methods is that a moderate amount of work is required from human factors experts, other design team members and factory stakeholders to apply the approaches efficiently. Another challenge is to make sure that the results actually influence the design. Our future plans include applying the ideas presented in this paper in practice. This will provide a further understanding of the actual design challenges and of how well the approaches and methods proposed here work in practice. It will be especially interesting to study how to monitor and guide the continuous co-evolvement of a joint human-machine system.
Exact Solution for Three-Dimensional Ising Model The three-dimensional Ising model in zero external field is solved exactly by operator algebras, similar to Onsager's approach in two dimensions. The partition function of the simple cubic crystal, subject to the periodic boundary condition along two directions and the screw boundary condition along the third direction, is calculated rigorously. In the thermodynamic limit an integral replaces a sum in the formula of the partition function. The critical temperatures, at which order-disorder transitions in the infinite crystal occur along the three axis directions, are determined. The analytical expressions for the internal energy and the specific heat are also presented. I. INTRODUCTION The exact solution of the three-dimensional (3D) Ising model has been one of the greatest challenges to the physics community for decades. In 1925, Ising presented the simple statistical model in order to study the order-disorder transition in ferromagnets [1]. Subsequently the so-called Ising model has been widely applied in condensed matter physics. Unfortunately, the one-dimensional Ising model has no phase transition at nonzero temperature. However, such systems can have a transition at nonzero temperature in higher dimensions [2]. In 1941, Kramers and Wannier located the critical point of the two-dimensional (2D) Ising model at finite temperature by employing the dual transformation [3]. About two and a half years later Onsager solved the 2D Ising model exactly by using an algebraic approach [4] and calculated its thermodynamic properties. In contrast to the continuous internal energy, the specific heat becomes infinite at the transition temperature $T = T_c$ given by the condition $\sinh(2J/k_B T_c)\,\sinh(2J'/k_B T_c) = 1$, where $J$ and $J'$ are the interaction energies along the two perpendicular directions in the plane. Later, the partition function of the 2D Ising model was also re-evaluated by a spinor analysis [5]. Up to now many 2D statistical systems have been solved exactly [6]. Since Onsager exactly solved the 2D Ising model in 1944, much attention has been paid to the investigation of the 3D Ising model. In Ref. [7], Griffiths presented the first rigorous proof of an order-disorder phase transition in the 3D Ising model at finite temperature by extending Peierls's argument for the 2D case [2]. In 2000, Istrail proved that solving the 3D Ising model on the lattice is an NP-complete problem [8]. We also note that the critical properties of the 3D Ising model have been widely explored by employing conformal field theories [9,10,11], the self-consistent Ornstein-Zernike approximation [12], renormalization group theory [13], Monte Carlo simulations [14], principal components analysis [15], etc. However, despite great efforts, the 3D Ising model has not been solved exactly yet due to its complexity. It is beyond question that an exact solution of the 3D Ising model would be a huge leap forward, since it could be used not only to describe a broad class of phase transitions ranging from binary alloys, simple liquids and lattice gases to easy-axis magnets [16], but also to verify the correctness of numerical simulations and finite-size scaling theory in three dimensions. Because there is no dual transformation, the critical point of the 3D Ising model cannot be fixed by such a symmetry. We also find that it is impossible to write out the Hamiltonian along the third dimension of the 3D Ising model with periodic boundary conditions (PBCs) in terms of the Onsager operators.
In addition, due to the existence of nonlocal rotation, 3D Ising model with PBCs seems not to be also solved by the spinor analysis [5]. Therefore, the key to solve 3D Ising model is to find out the operator expression of the interaction along the third dimension. We note that the transfer matrix in 3D Ising model is constructed by the spin configurations on a plane, which the boundary conditions (BCs) play an important role to solve exactly 3D Ising model. In this paper, we introduce a set of operators, which is similar to that in solving 2D Ising model [4]. Under suitable BCs, 3D Ising model with vanishing external field can be described by the operator algebras, and thus can be solved exactly. II. THEORY Consider a simple cubic lattice with l layers, n rows per layer, and m sites per row. Then the Hamiltonian of 3D Ising model is H = − m,n,l i,j,k=1 (J 1 σ z ijk σ z i+1kj + J 2 σ z ijk σ z ij+1k + Jσ z ijk s z ijk+1 ), where σ z ijk = ±1 is the spin on the site [ijk]. Assume that ν k labels the spin configurations in the kth layer, we have 1 ≤ ν k ≤ 2 mn . As a result, the and E 2 (ν k ) are the energies along two perpendicular directions in the kth layer, respectively, and E(ν k , ν k+1 ) is the energy between two adjacent layers. Now we define ( Here we use the periodic boundary conditions along both (010) and (001) directions and the screw boundary condition along the (100) direction for simplicity [3] (see Fig. 1). So the spin configurations along the X direction in a layer can be described by the spin variables σ z 1 , σ z 2 , · · · , σ z mn . Because the probability of a spin configuration is proportional to We note that V 1 , V 2 and V 3 are 2 mn -dimensional matrices, and both V 1 and V 2 are diagonal. Following Ref. [4], we obtain where , and H * = 1 2 ln coth H = tanh −1 (e −2H ). In order to diagonalize the transfer matrix V ≡ V 1 V 2 V 3 , following the Onsager's famous work in two dimensions, we first introduce the operators in spin space Γ along the X direction under the boundary conditions mentioned above. Here a, b = 1, 2, · · · , 2mn, σ x a , σ y a and σ z a are the Pauli matrices at site a, respectively. Then we have L 2 a,b = 1 and with Q ≡ nm a=1 σ x a = ±1. It is obvious that the period of L a,b is 2mn. We note that these operators L a,b are identical to P ab in Ref. [4] except mn replaces n. H x and H z in the transfer matrix V can be expressed as Following Onsager's idea [4], we introduce the operators where x is an arbitrary index. Obviously, we have α −r = α r , β −r = −β r , β 0 = β mn = 0, γ −r = −γ r , and γ 0 = γ mn = 0. Eqs. (6) can be rewritten as where A s = mn a=1 L a,a+s and G s = 1 2 mn a=1 (L a,x L a+s,x − L x,a L x,a+s ). According to the orthogonal properties of the coefficients, we obtain From Eqs. (5)- (8), H x and H z have the expansions Because A mn+s = −QA s = −A s Q and G mn+s = −QG s = −G s Q, and combining with Eqs. (8), we have So we can investigate the algebra (8) with Q = 1 or -1 independently. However, we keep them together for convenience. In order to diagonalize the transfer matrix V , we must first determine the commutation relations among the operators α r , β r and γ r . Similar to those calculations in Ref. [4], we obtain Substituting Eqs. (8) into Eqs. (11), we arrive at where r = 1, 2, · · · , mn − 1, and all the other commutators vanish. Obviously, the algbra (12) is associated with the site r, and hence is local. Because α r , β r , and γ r obey the same commutation relations with −X r , −Y r , and −Z r in Ref. 
[4], we have the further relations We , where s = 1, 2, · · · , 2n, and When m = p = 1, Eqs. (14) recover the results in two dimensions [4]. It is obvious that A p,i and G p,j also satisfy the commutation relations (11). We have obtained the expressions of H x and H z in terms of the operators α r , β r and γ r in the space Γ. In order to get the Hamiltonian in the third dimension, we project the operator algebra in the space Γ into the Y direction. Then we have m subspaces Γ p (p = 1, 2, · · · , m), in which the operator algebra with period 2n is same with that in Γ. along the Y direction. Then we have A p,s = n a=1 L p a,a+s and G p, , which also obey the same commutation relations (11) and (12), similar to A p,s and G p,s . Then the Hamiltonian H y = m p=1 A p,1 . Because [L p a,a+s , L p b,b+s ] = 0 (see Fig. 2), we have [A p,s , A p,s ] = 0, which leads to A p,s ≡ A p,s due to their common local algebra (12). This is a renormalization of operator, which means that A p,s and A p,s have same eigenfunctions and eigenvalues in Γ p or Γ space. We note that V 2 is the transfer matrix along Y direction, which must be calculated in Γ rather than Γ p space by mapping A p,1 ≡ A p,1 in order to diagonalize total transfer matrix V . Therefore, we have Here, we would like to mention that H z = − m p=1 A p,0 ≡ −A 0 , which is same with that in (9). This means that when J 1 = 0, the Hamiltonian of 2D Ising model is recovered immediately. V ] = 0, V and Q can be simultaneously diagonalized on the same basis. In other words, the eigenvalue problem of V can be classified by the value ±1 of Q. The transfer matrix V with Eqs. (9) and (16) becomes where In order to obtain the eigenvalues of the transfer matrix V , we first diagonalize U r by employing the general unitary transformation: e i 2 ηr γr e ar (αr cos θr+βr sin θr) U r ×e −ar(αr cos θr+βr sin θr) e − i 2 ηrγr = e ξrαr . Here θ r is an arbitrary constant and can be taken to be zero without loss of generality, and cosh ξ r = D r , sinh ξ r cos η r = A r , tanh(2a r ) = Cr Br , sinh ξ r sin η r = B r cosh(2a r ) − C r sinh(2a r ), where We note that D 2 r + C 2 r − A 2 r − B 2 r ≡ 1, which ensures that 3D Ising model can be solved exactly in the whole parameter space. When H 2 = 0(i.e.J 2 = 0) and n = 1, or H 1 = 0(i.e.J 1 = 0) and m = 1, we have a r = H * . So Eqs. (19) recover the Onsager's results in 2D Ising model [4]. Then the transfer matrix V has a diagonal form III. TRANSFORMATIONS Following the procedure above, we can diagonalize the transfer matrix V , i.e. e mn−1 r=1 where sinh ξ r cos η * and We also have D 2 r + C 2 r − A * 2 r − B * 2 r ≡ 1. The transfer matrix reads where Similarly, we have e mn−1 r=1 Here, and The identity D 2 r + C 2 r − A ′2 r − B ′2 r ≡ 1 also holds. If H 2 = 0 or H 1 = 0, we obtain the critical temperature in 2D Ising model [3,4]. We note that the exact critical line (32) between the ferromagnetic and paramagnetic phases coincides completely with the result found in the domain wall analysis [17]. In the anisotropic limit, i.e. η = (H 1 + H 2 )/H → 0, the critical temperature determined by Eq. (32) also agrees perfectly with the asymptotically exact value H = 2[lnη −1 − lnlnη −1 + 0(1)] −1 shown in Refs. [18,19]. When H 1 = H 2 = H, the critical value H c = J/(k B T c ) = 0.30468893, which is larger than the conjectured value about 0.2216546 from the previous numerical simulations [12,14]. 
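As a quick numerical reference (an illustrative check, not part of the original derivation), the snippet below compares the isotropic 2D critical coupling implied by Onsager's condition quoted in the introduction with the 3D value obtained in this work and with the numerical estimate cited in the text.

```python
import math

# 2D isotropic square lattice: Onsager's condition sinh(2*K_c)^2 = 1
# gives K_c = J/(k_B*T_c) = asinh(1)/2 = ln(1 + sqrt(2))/2.
K_c_2d = math.asinh(1.0) / 2.0        # ~ 0.440687

# Values quoted in the text for the isotropic simple cubic lattice:
H_c_paper = 0.30468893                # critical coupling obtained in this work
H_c_numerical = 0.2216546             # estimate from previous numerical simulations [12,14]

print(f"2D Onsager K_c          : {K_c_2d:.6f}")
print(f"3D, this work           : {H_c_paper:.6f}")
print(f"3D, numerical estimate  : {H_c_numerical:.6f}")
print(f"ratio this work/numeric : {H_c_paper / H_c_numerical:.3f}")
```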
We shall see from the analytical expressions (35) and (36) of the partition function per atom below that this discrepancy mainly comes from the oscillatory terms with respect to the system size m along X direction, which were not taken into account in all the previous numerical simulations. We note that the thermodynamic properties of a large crystal are determined by the largest eigenvalue λ max of the transfer matrix V . Following Ref. [4], we have Here ∆ 1 = ∆ 3 = · · · = ∆ mn−1 = 1, which are same with the eigenvalues of the operators X r in Ref. [4]. We note that these two results above can be combined due to ξ −r = ξ r and ξ mn In order to calculate the partition function per atom λ ∞ = lim m,n→∞ (λ max ) 1 mn for the infinite crystal, we replace the sum in Eq. (34) by the integral where Similarly, the continuous A(ω), A * (ω), A ′ (ω), B(ω), B * (ω), B ′ (ω), C(ω), ξ m (ω), η(ω), η * (ω), and η ′ (ω) replace the discrete A r , A * r , A ′ r , B r , B * r , B ′ r , C r , ξ r , η r , η * r , and η ′ r , respectively, by letting ω = rπ mn . Here we emphasis that when H 2 = 0, or H 1 = 0, Eq. (35) is nothing but the Onsager's famous result in the 2D case [4]. We also note that very different from the 2D case, the partition function of 3D Ising model is oscillatory with m. Therefore, the conjectured values extrapolating to the infinite system in the numerical calculations seem to be inaccurate, and the 3D finite-size scaling theory must be modified. We consider the special case of J 1 = J 2 , where the calculation of the thermodynamic functions can be simplified considerably. After integrating, Eq. (36) can be rewritten as where It is surprising that Eq. (40) is nothing but that in 2D Ising model with the interaction energies (J 1 , J 2D ) and H 2D = J2D kB T . Therefore, ln λ ∞ − 1 2 ln[2 sinh(2H)] in three dimensions can be obtained from ln λ 2D ∞ − 1 2 ln[2 sinh(2H 2D )] in two dimensions by taking the transformation (41). In other words, the thermodynamic properties of 3D Ising model originate from those in 2D case. We can also see from Eq. (41) that both 2D and 3D Ising systems approach simultaneously the critical point, i.e. H * 2D = H 1 and H * = 2H 1 . It is expected that the scaling laws near the critical point in two dimensions also hold in three dimensions [6]. The energy U and the specific heat C of 2D Ising model with the quadratic symmetry (i.e. H 1 = H 2D ) have been calculated analytically by Onsager and can be expressed in terms of the complete elliptic integrals [4]. The critical exponent associated with the specific heat α 2D = 0. Because 3D Ising model with the simple cubic symmetry (i.e. H 1 = H 2 = H) can be mapped exactly into 2D one by Eq. (41), the expressions of U and C in three dimensions have similar forms with those in two dimensions. So the critical exponent α 3D of the 3D Ising model is identical to α 2D , i.e. α 3D = 0. According to the scaling laws dν = 2 − α and µ + ν = 2 − α [6], we have ν 3D = 2 3 and µ 3D = 4 3 . Up to now, we have obtained the partition function per site and some physical quantities when the z axis is chosen as the transfer matrix direction. However, if the x(y) axis is parallel to the transfer matrix direction, the corresponding partition function per site can be achieved from Eqs. (35) and (36) by exchanging the interaction constants along the x(y) and z axes. 
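For reference, the 2D limit invoked above (Eq. (35) with $H_2 = 0$ or $H_1 = 0$ reducing to Onsager's result) can be written in its standard form; since the displayed equations were lost in extraction, the well-known 2D expression is reproduced here as background rather than copied from this paper:

$$\ln\lambda_\infty^{\,2D} = \ln 2 + \frac{1}{8\pi^2}\int_0^{2\pi}\!\!\int_0^{2\pi}\ln\bigl[\cosh 2H\,\cosh 2H' - \sinh 2H\,\cos\omega_1 - \sinh 2H'\,\cos\omega_2\bigr]\,d\omega_1\,d\omega_2,$$

with $H = J/(k_B T)$, $H' = J'/(k_B T)$, and the free energy per spin given by $f = -k_B T\,\ln\lambda_\infty$.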
Therefore, the total physical quantity in 3D Ising model, such as the free energy, the internal energy, the specific heat, and etc., can be calculated by taking the average over three directions. We note that the average of a physical quantity naturally holds for 2D Ising model. Obviously, the difference between λ ∞ and λ p ∞ comes from the screw boundary condition along the X direction (see Fig. 1). We note that the k 2 term in Eqs. (44) and (45) vanishes, which can be seen as a feature of 3D Ising model. VI. CONCLUSIONS We have exactly solved 3D Ising model by an algebraic approach. The critical temperature T ic (i = 1, 2, 3), at which an order transition occurs, is determined. The expression of T ic is consistent with the exact formula in Ref. [17]. At T ic , the internal energy is continuous while the specific heat diverges. We note that if and only if the screw boundary condition along the (100) direction and the periodic boundary conditions along both (010) and (001) directions are imposed, the Onsager operators (15) along Y direction can form a closed Lie algebra, and then the Hamiltonian H y (16) is obtained rigorously. For PBCs, the Onsager operators along X or Y direction cannot construct a Lie algebra, and hence 3D Ising model is not solved exactly. Therefore, the numerical simulations on 3D finite Ising model with PBCs are unreliable due to the unclosed spin configurations on the transfer matrix plane. It is known that the BCs (the surface terms) affect heavily the results on small system, which lead to the different values extrapolating to the infinite system. However, the impact of the BCs on the critical temperatures can be neglected in the thermodynamic limit. Because the partition function per atom of 3D Ising model with H 1 = H 2 is equivalent to that of a 2D Ising model, the thermodynamic properties in three dimensions are highly correlated to those of 2D Ising system. When the interaction energy in the third dimension vanishes, the Onsager's exact solution of 2D Ising model is recovered immediately. This guarantees the correctness of the exact solution of 3D Ising model.
Gated Silicon Drift Detector Fabricated from a Low-Cost Silicon Wafer Inexpensive high-resolution silicon (Si) X-ray detectors are required for on-site surveys of traces of hazardous elements in food and soil by measuring the energies and counts of X-ray fluorescence photons radially emitted from these elements. Gated silicon drift detectors (GSDDs) are much cheaper to fabricate than commercial silicon drift detectors (SDDs). However, previous GSDDs were fabricated from 10-kΩ·cm Si wafers, which are more expensive than 2-kΩ·cm Si wafers used in commercial SDDs. To fabricate cheaper portable X-ray fluorescence instruments, we investigate GSDDs formed from 2-kΩ·cm Si wafers. The thicknesses of commercial SDDs are up to 0.5 mm, which can detect photons with energies up to 27 keV, whereas we describe GSDDs that can detect photons with energies of up to 35 keV. We simulate the electric potential distributions in GSDDs with Si thicknesses of 0.5 and 1 mm at a single high reverse bias. GSDDs with one gate pattern using any resistivity Si wafer can work well for changing the reverse bias that is inversely proportional to the resistivity of the Si wafer. Introduction Various types of X-ray detectors, such as silicon (Si) pin detectors and silicon drift detectors (SDDs) , are used to measure the energy and photon count of X-ray fluorescence photons. Si X-ray detectors with a thick Si substrate, a large active area, and small capacitance are desirable [29][30][31][32]. A pin structure is used to collect charge carriers, the number of which are proportional to the energy of an X-ray photon. In X-ray fluorescence spectroscopy, the capacitance of a pin detector increases with the active area of the detector because the anode (n-type layer) and the cathode (p-type layer) have equal areas. The increase in the capacitance degrades its performance. However, SDDs have a much smaller capacitance than pin detectors [1]. This is because the anode, which is on one surface of the n − Si substrate (n − or i-layer), is much smaller than the pin detector, whereas the entrance window layer, which is the cathode on the opposite surface, is kept large [1]. The anode is surrounded by multiple p-type rings (p-rings), to which a different bias voltage is applied. The resulting electric field makes the electrons flow smoothly toward the anode. To form a sufficiently strong electric field toward the anode in the SDD, the p-rings are electrically coupled with expensive built-in metal-oxide-semiconductor field-effect transistors (MOSFETs) or implanted resistors. The 10-kΩ·cm Si wafers are more expensive than the 2-kΩ·cm Si wafers used in commercial SDDs. In the present study, to fabricate much cheaper X-ray detectors, we used a device simulation to design adequate gate patterns for GSDDs formed from 2-kΩ·cm Si wafers. Structure and Advantages of Gated Silicon Drift Detectors GSDDs have a cathode and only one p-ring, and to which the same reverse bias can be applied. Figure 1 shows half of a schematic cross section of a cylindrical GSDD with seven ring-shaped gates and one p-ring that does not contain MOSFETs or implanted resistors [37,38,[40][41][42][43]. In SDDs and GSDDs, n-type layers (anode and ground rings) and p-type layers (cathode, p-ring, and floating rings) are fabricated by the same processes. In SDDs, multiple inner p-rings located between the anode and the p-ring are formed. 
Compared with GSDDs, the extra fabrication processes in SDDs are for creating the built-in MOSFETs or implanted resistors to couple the p-rings together electrically, which lowers the yield rate of detectors. The passivating oxide layers (SiO 2 ) are formed, and the anode, p-ring, ground rings, and cathode are metallized. During metallization, the innermost p-ring is also metallized in SDDs, whereas gates are formed in GSDDs. In GSDDs, no extra fabrication processes are required to form the gates because the metal gates are formed on the SiO 2 during metallization of the anode and the p-ring. As a result, the fabrication of GSDDs is much simpler than that of commercial SDDs. Moreover, the same high reverse bias can be applied to the cathode, the p-ring, and all the gates, which means that GSDDs require only one high-voltage source. Therefore, GSDDs greatly reduce the cost of the X-ray detection system. Figure 1. Half of a schematic cross section of a cylindrical GSDD structure with one p-ring and seven gates. The same negative voltage was applied to the cathode, the p-ring, and all the gates. Device Simulation Processes The device simulations were carried out by using the ATLAS Device Simulator (Silvaco International). All the simulations were performed by solving Poisson's equation and the carrier continuity equations. This provides a complete description of the system in terms of electrical quantities, such as electric potential and electric field distributions, carrier densities, and current densities. The thicknesses of the n − Si substrate (d Si ) were 0.5 and 1 mm, and the values of ρ Si were 2 and 10 kΩ·cm. The radius of the anode (R a ) at the center of the cylindrical GSDD was fixed as 0.055 mm, which kept the capacitance of all GSDDs small. The widths of the p-ring (W p ), p-type floating rings (W f ), and n-type ground rings (W g ) were 0.545, 0.03 and 0.39 mm, respectively. The gap between the p-ring and the floating ring (G pf ) and the gap between the floating and ground rings (G fg ) were all 0.04 mm. The thickness of SiO 2 on the cathode (d c ) was 0.75 µm. The thickness of SiO 2 on the other side (d g ) was changed to constrain the electric field in the SiO 2 between the gates and the Si substrate at ≤ 2.5 MV/cm, which is less than the SiO 2 breakdown electric field of 10 MV/cm [44]. The sheet density of positive fixed charges in SiO 2 near the SiO 2 /Si interface (Q F ) was fixed as 3 × 10 10 cm −2 , which has been reported for the present fabrication process [45]. The acceptor densities of the cathode, p-ring, and floating rings were 1 × 10 18 cm −3 , and the donor densities of the anode and ground rings were 1 × 10 19 cm −3 . The depths of the cathode, p-ring, anode, ground rings, and floating rings were all 1 µm. Seven gates were considered in this study. Figure 1 shows that G a1 was the gap between the anode and the innermost gate, and G 12 , G 23 , G 34 , G 45 , G 56 and G 67 were the gaps between the gates, from the innermost to outermost. G 7p was the gap between the outermost gate and the p-ring. W 1 , W 2 , W 3 , W 4 , W 5 , W 6 and W 7 were the widths of the seven gates, from the innermost to outermost, respectively. The radii of the cathode (R c = 3 mm) and the GSDD chip (R chip = 3.5 mm) were fixed. As a result, the area inside the inner edge of the p-ring (S area ) was 18.9 mm 2 , which is nearly equal to that of commercial small-area SDDs. The same reverse bias voltage (V R ) was applied to the cathode, the p-ring, and all the gates. 
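Before turning to the individual gate designs, it may help to note how the required bias scales with the substrate parameters just listed. A rough textbook estimate (not part of the ATLAS simulations reported here) is the full-depletion voltage of a planar diode on an n-type substrate, V_dep = d_Si^2/(2 ε_Si μ_n ρ_Si), which makes the V_R ∝ 1/ρ_Si and V_R ∝ d_Si^2 scalings used in Equations (1) and (2) below explicit. The sketch uses standard silicon parameters; the GSDD itself depletes at a somewhat lower bias than a simple pin diode on the same substrate.

```python
# Rough full-depletion estimate for an n-type Si substrate (textbook formula,
# not part of the ATLAS device simulations reported in the text).
# Since rho = 1/(q*mu_n*N_D) and V_dep = q*N_D*d^2/(2*eps_Si),
# V_dep = d^2 / (2 * eps_Si * mu_n * rho), i.e. V_dep scales as 1/rho and as d^2.

EPS_SI = 11.7 * 8.854e-12   # permittivity of silicon [F/m]
MU_N = 0.145                # electron mobility [m^2/(V*s)] (1450 cm^2/(V*s) at room temperature)

def depletion_voltage(d_si_mm, rho_kohm_cm):
    d = d_si_mm * 1e-3                 # substrate thickness [m]
    rho = rho_kohm_cm * 1e3 * 1e-2     # resistivity [ohm*m] (1 kOhm*cm = 10 Ohm*m)
    return d ** 2 / (2.0 * EPS_SI * MU_N * rho)

for d_mm, rho in [(0.5, 10.0), (0.5, 2.0), (1.0, 2.0)]:
    print(f"d_Si = {d_mm} mm, rho_Si = {rho:4.1f} kOhm*cm -> V_dep ~ {depletion_voltage(d_mm, rho):5.0f} V")
```

These scalings reproduce the factor-of-five bias increase when moving from 10- to 2-kΩ·cm material and the factor-of-four increase when doubling the thickness that appear in the following subsections, and the 1-mm, 2-kΩ·cm case lands near the pin-diode value quoted later in the text.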
0.5-mm-Thick GSDD Formed from a 10-kΩ·cm Si Wafer The d Si and ρ Si of the n − Si substrate were 0.5 mm and 10 kΩ·cm, respectively, and d g was 0.75 µm. In Gate A, the values of W 1 , W 2 , W 3 , W 4 , W 5 , W 6 , and W 7 were 0.1, 0.1, 0.19, 0.29, 0.39, 0.47 and 0.51 mm, respectively. G a1 was 0.04 mm and G 12 and G 23 were both 0.03 mm. G 34 , G 45 , G 56 , G 67 and G 7p were all 0.05 mm. Figure 2 shows the simulated electric potential distribution in the Si substrate inside the p-ring of the GSDD at V R of −60 V for Gate A. The voltage midway between the p-ring and the cathode was −37 V, and the electric field along the electric potential valley was strong enough to make all the electrons produced by an X-ray photon flow smoothly to the anode. Therefore, the electrons produced within the radius of the inner edge of the p-ring can be directed to the anode, indicating that the effective active area is approximately 18 mm 2 . Figure 2. Simulated electric potential distribution in the Si substrate inside the p-ring of a 0.5-mm-thick GSDD with R chip of 3.5 mm and ρ Si of 10 kΩ·cm for Gate A. A reverse bias voltage of −60 V was applied to the cathode, p-ring, and seven gates. Q F was assumed to be 3 × 10 10 cm −2 . Equipotential lines are shown at 1 V intervals. We fabricated GSDDs using the design of Gate A. In the GSDD, an energy resolution of 145 eV at 5.9 keV was obtained from a 55 Fe source at −38 o C [41]. The effective active area of the detector was found to be approximately 18 mm 2 by irradiating X-ray photons through a pinhole with diameter 0.1 mm [40], which is in good agreement with our simulation. These experimental results indicate that GSDDs with the design from which the simulated electric potential distribution similar to that in Figure 2 is obtained can work well. 0.5-mm-Thick GSDD Formed from a 2-kΩ·cm Si Wafer The value of ρ Si was decreased from 10 kΩ·cm to 2 kΩ·cm. Figure 3 shows the simulated electric potential distribution in the Si substrate inside the p-ring of the GSDD with Gate A at V R of −60 V. Because the voltage drops at G 67 and G 7p were too large, the electric potential was almost zero between the anode and the outermost gate, and also over approximately 60% of the n − Si substrate, where the electrons produced by an X-ray photon are recombined with the holes produced by the X-ray photon. To deplete the whole n − Si substrate, V R was increased from −60 to −300 V, following the relation Figure 4 shows the simulated electric potential distribution in the Si substrate inside the p-ring of the GSDD with Gate A at V R of −300 V. It is clear from Figure 4 that the whole Si substrate was depleted, and all the electrons produced by an X-ray photon flowed smoothly to the anode. This finding indicates that GSDDs with Gate A can work well for any Si resistivity if V R follows Equation (1). V R of −300 V was twice that of a commercial 0.5-mm-thick SDD using a 2-kΩ·cm Si wafer. Therefore, a gate pattern that can reduce V R was investigated. Because in Figure 3 the voltage decreases at G 67 and G 7p is too large, G 67 and G 7p in Gate B were decreased from 0.05 to 0.02 mm. The G 34 , G 45 and G 56 values were also decreased from 0.05 to 0.02 mm, and G 23 was decreased from 0.03 to 0.02 mm. The value of G a1 was increased from 0.04 to 0.07 mm, so that the potential at the innermost gate could be increased and the potential around the anode would not be zero. 
To keep R c in Gate B the same as R c in Gate A, the values of W 3 , W 4 , W 5 , W 6 and W 7 were changed to 0.21, 0.31, 0.41, 0.51 and 0.54 mm, respectively. To reduce V R from 200 to 150 V, which is V R of commercial 0.5-mm-thick SDDs using 2-kΩ·cm Si wafers, in Gate C the values of G a1 , G 12 , G 23 , G 34 , G 45 , G 56 , G 67 and G 7p were changed to 0.11, 0.02, 0.01, 0.01, 0.01, 0.01, 0.005 and 0.005 mm, respectively. To keep R c in Gate C the same as R c in Gate A, the values of W 1 , W 2 , W 3 , W 4 , W 5 , W 6 and W 7 were 0.01, 0.05, 0.24, 0.34, 0.44, 0.54 and 0.60 mm, respectively. Figure 6 shows the simulated electric potential distribution in the Si substrate inside the p-ring of the GSDD for Gate C at V R of −150 V. In the electric potential distribution, the voltage midway between the p-ring and the cathode was approximately −78 V, and consequently the electric field along the electric potential valley strong enough to make all the electrons produced by the X-ray photons flow smoothly to the anode. Figure 6. Simulated electric potential distribution in the Si substrate inside the p-ring of a 0.5-mm-thick GSDD with R chip of 3.5 mm and ρ Si of 2 kΩ·cm for Gate C. A reverse bias voltage of −150 V was applied to the cathode, p-ring, and seven gates. Equipotential lines are shown at 2.5 V intervals. 1-mm-Thick GSDD Formed from a 2-kΩ·cm Si Wafer To detect traces of hazardous or radioactive elements in food, soil, and the human body effectively, the absorption of X-ray fluorescence photons of these elements, such as Cd (23.1 keV) and Cs (30.8 keV), by GSDDs must be increased. However, the thickness of the Si substrates in commercial SDDs is approximately 0.5 mm; thus, the absorbed fractions of Cd and Cs X-ray fluorescence photons are 29.1% and 14.4%, respectively. In contrast, for a 1-mm-thick Si substrate, the absorbed fractions increase to 49.7% and 26.8%, respectively. In other words, the commercial SSDs up to 0.5 mm thick can detect photons with energies up to 27 keV for X-ray absorbance higher than 20%, whereas our gate pattern for the GSDD can detect photons with energies up to 35 keV. Here, we simulate the electric potential distribution in the GSDD with a Si thickness of 1 mm. In the 1-mm-thick GSDDs, d g was changed from 0.75 to 3 µm to avoid SiO 2 breakdown caused by the high electric field. In Gate D, the values of G a1 , G 12 , G 23 , G 34 , G 45 , G 56 , G 67 and G 7p were changed to 0.33, 0.06, 0.02, 0.02, 0.02, 0.02, 0.01 and 0.01 mm, respectively. To keep R c in Gate D the same as R c in Gate A, the values of W 1 , W 2 , W 3 , W 4 , W 5 , W 6 and W 7 were changed to 0.02, 0.07, 0.18, 0.28, 0.38, 0.47 and 0.51 mm, respectively. To deplete the whole n − Si substrate, the value of V R was increased from 150 to 600 V, following the relation V R ∝ d 2 Si (2) Figure 7 shows the simulated electric potential distribution for Gate D in the Si substrate inside the p-ring of the GSDD at V R of −600 V. The voltage at the saddleback was approximately −175 V. Because the average electric field toward the anode along the electric potential valley was approximately 950 V/cm, the average electron drift velocity was higher than 1 × 10 6 cm/s at the operating temperature (≤0 • C). This was caused by the electron mobility of 1450 cm 2 · V −1 ·s −1 in the Si substrate at room temperature [44]. This indicates that the electric field along the electric potential valley was strong enough to make all the electrons produced by the X-ray photons flow smoothly to the anode. 
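As a side check on the absorbed fractions quoted above (again an illustrative calculation, not taken from the paper), the Beer-Lambert law 1 − exp(−μd) links the 0.5-mm and 1-mm numbers: deriving an effective attenuation coefficient from the 0.5-mm values reproduces the quoted 1-mm values.

```python
import math

# Beer-Lambert consistency check of the absorbed fractions quoted in the text.
# The effective attenuation coefficient mu is inferred from the 0.5-mm values,
# then used to predict the absorption in a 1-mm Si substrate.

quoted = {
    "Cd fluorescence, 23.1 keV": {"f_half_mm": 0.291, "f_one_mm_quoted": 0.497},
    "Cs fluorescence, 30.8 keV": {"f_half_mm": 0.144, "f_one_mm_quoted": 0.268},
}

for line, v in quoted.items():
    mu = -math.log(1.0 - v["f_half_mm"]) / 0.5   # effective attenuation coefficient [1/mm]
    f_one_mm = 1.0 - math.exp(-mu * 1.0)          # predicted absorbed fraction in 1 mm of Si
    print(f"{line}: mu ~ {mu:.3f} /mm, predicted 1-mm absorption "
          f"{f_one_mm:.1%} (quoted {v['f_one_mm_quoted']:.1%})")
```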
For a Si pin diode with d Si of 1 mm and ρ Si of 2 kΩ·cm, a reverse bias of approximately −1500 V is required to deplete the whole Si layer. However, for the GSDD, a reverse bias of only −600 V was required, which is an advantage of GSDDs. Figure 7. Simulated electric potential distributions in the Si substrate inside the p-ring of a 1-mm-thick GSDD with R chip of 3.5 mm and ρ Si of 2 kΩ·cm for Gate D. A reverse bias voltage of −600 V was applied to the cathode, p-ring, and seven gates. Equipotential lines are shown at 10 V intervals. Conclusions GSDDs are inexpensive Si X-ray detectors because of their simple structure. Although we have investigated GSDDs with 10-kΩ·cm Si because we have designed thicker GSDDs to detect X-ray photons with high energies, 10-kΩ·cm Si wafers are much more expensive than 2-kΩ·cm Si wafers, from which commercial SDDs are fabricated. Therefore, GSDDs with 2-kΩ·cm Si were investigated to develop low-cost X-ray detectors that can accurately detect photon counts and energies of X-ray fluorescence photons with energies of up to 35 keV. Device simulations of GSDDs with 0.5and 1-mm-thick, 2-kΩ·cm Si substrates indicated that the X-ray detectors should work well when they are produced by using current fabrication processes. GSDDs with one gate pattern can work well for any resistivity Si substrate if the reverse bias is inversely proportional to the resistivity of the Si substrate. These findings indicate that the cost of portable X-ray fluorescence instruments can be reduced considerably.
Simulation and risk analysis of dam failure in small terrace reservoirs in hilly areas . This study uses numerical simulation to analyse the dam failure and risk of small series (parallel) reservoir clusters in the hilly areas of China. Four small series (parallel) reservoirs in the Guozhuang River basin of Tai'an City are used as the target of the study, and a two-dimensional hydrodynamic reservoir failure numerical model was constructed to analyse the riskiness of various series (parallel) dam failure scenarios. The results of the analysis can provide some technical support for the forecasting and scheduling of small terrace reservoirs in hilly areas. Introduction In recent years, climate anomalies and extreme weather events have been frequent.The spatial and temporal distribution of precipitation has been seriously uneven, with record rainfall in many places.As a water hub that undertakes multiple functions such as flood control, irrigation, water supply and ecology, reservoirs are important infrastructures to ensure regional economic and social development.The majority of them are managed by townships or village collectives, and are affected by historical conditions and insufficient investment. Due to the suddenness of the event, there is little historical research on the measured data of dam failure floods, which are now usually studied using numerical calculations or physical model tests. In 1871, the French mechanic Saint-Venant proposed a system of Saint-Venant equations [1].In 1892, the German scholar Ritter obtained a simplified solution for free outflow in rectangular river valleys [2].In 1949, Schoklitch obtained an empirical formula for the calculation of the peak flow at a dam site in the event of an instantaneous partial breach [3].In 1951, Frank derived the equation based on the Ritter instantaneous total breach solution [4].In 1970, Su et al. applied the regressive solution to triangular, rectangular and parabolic crosssection open channels [5].In 1980, Chen [6] In 1982 and 1984, Hunt et al. considered the effect of friction on the dam-break problem for finite-length reservoirs [7].In 1986, Wu Chao and Tan Zhenhong [8] In 1986, Wu Chao and Tan Zhenhong derived a simplified solution for the U-shaped section dam-break wave by using the Riemann equation.1999, WuC [9] In 2001, Yan Echuan et al. derived an equation for the peak flow of a lateral partial breach of a flat-bottomed unresisted river dam [10].Although some research results have also been published on dam failure in groups of terraced reservoirs [11][12][13][14][15][16], they mainly focus on the discussion of the dam failure flow equation, while the numerical simulation of dam failure floods and the analysis of the inundation risk to the downstream areas need to be further explored and studied. Therefore, this paper takes four small reservoirs in series (parallel) in the Gozhuang River sub-basin of Geshishi Town, Ningyang County, Tai'an City, Shandong Province as the research object, and constructs a twodimensional hydrodynamic model to numerically simulate different reservoir combinations with various dam failure scenarios.The results of the study can provide a reference basis for the safe scheduling and operation of reservoirs in the basin, the planning of downstream areas, flood warning and the formulation of dam failure countermeasures, and are of great theoretical and practical significance. 
Overview of the study area The Guozhuang River basin is located in Ningyang County, Tai'an City, Shandong Province across Ge Shi Town and Mound Town, with a basin area of 23.54km2 .The upstream of the Guozhuang River is the Xing Shan Reservoir, which is divided into two branches at He Wa Village.The south branch of the Guozhuang River is 1.43km from the bifurcation and 2.66km from the Xing Shan Reservoir dam site; the south branch of the Guozhuang River is 1.58km upstream from the Xing Shan Southwest Reservoir dam site, 3.01km from the bifurcation and 4.24km from the Xing Shan Reservoir dam site.The north and south branches of the Guozhuang River converge at Maozhuang Qiaobei Village and join the Shiji River in the south of Da'an Village. 3 Mathematical model construction Model scope and main boundary conditions The calculation area is based on the watershed boundary of the Guozhuang River, specifically from the east to the east of the village of Xingshanzhuang, from the west to the confluence of the Yueya River and the Shiji River, from the north to the south of the dam site of the Yueya River Reservoir and from the south to the Shiji River.The calculated area is 38.62km².The two-dimensional hydrodynamic model uses an irregular grid.According to the requirements of the Technical Rules for the Preparation of Flood Risk Maps (for trial implementation), the maximum grid area should not exceed 0.1km² for irregular grids.Taking into account the model area, simulation accuracy, calculation time and software performance, for embankments, roads and other areas with drastic topographic changes, the calculation grid is appropriately encrypted under the premise of model stability; near the river channel, the buffer line of the Guozhuang River is buffered outward for 200m for encryption, and the maximum grid area within the buffer line is limited to 0.0001km², while the maximum grid area of the model area outside the buffer line is limited to 0.005km².The maximum grid area in the buffer line is limited to 0.0001km², and the maximum area of the model area outside the buffer line is limited to 0.005km².The mesh used for the calculation was generated by Mesh generator, with a total of 2357996 meshes and 118370 nodes.The elevation points used for terrain interpolation were extracted from the 5mDEM, and the river embankments and main roads were corrected according to the supplementary survey data.A total of 619,000 elevation scatter points were extracted, with a spacing of 5m for rivers, embankments and roads and 10m for other features. As all four reservoir dam types in the study area are earth and rock dams, this earth and rock dam breach was treated as an instantaneous lateral partial breach according to the Hydraulic Calculation Manual (2nd Edition). (1) Dam breach width Earthen dams are poorly resistant to scouring, and there is actual evidence that the breach will scour to the base of the dam or even form a local scour pit.The following calculation method is recommended by the Yellow Commission's Institute of Water Sciences based on the analysis of actual data. (1) where: b -average width of the mouth door (m).V -reservoir capacity at the time of dam breach (million m³). H -depth of water in front of the dam at the time of breach (m). k -coefficient related to the soil quality of the dam, with k values of approximately 0.65 for clayey soils and 1.3 for loamy soils. 
B -the width of the water surface along the dam axis or the length of the top of the dam when the dam breaks m.If the width of the water surface in the reservoir area at the dam site section is greater than the length of the dam, the width of the water surface in the reservoir area at the dam site section should be used.If the water surface in the reservoir area is wider than the dam length, the width of the water surface in the reservoir area at the dam site section shall be used. (2) Dam breach flow The instantaneous lateral partial breach of a rectangular breach is chosen for the calculation of earth and rock dam failure floods.As the breach b is smaller than the dam length B, this negative wave upstream of the breach will have the characteristic of providing water discharge in multiple directions, thus increasing the depth of the breach and consequently the flow rate.The maximum flow cannot be calculated directly using the instantaneous total dam break, but needs to be multiplied by a correction factor greater than 1 .The exponent of the correction factor, , is generally taken to be 0.25-0.4.In this case, = 0.4 based on experimental data from the overall model of the Yellow Commission dam failure flood evolution.(3) Dam failure flood process This time a quadratic parabola is used to generalise the flow process.Where: K -coefficient, generally take 4 to 5. V -reservoir dischargeable volume before dam failure (m³) 2) Initially determine the flow process line from Table 3.3-1 based on T, Qm and Q0. Check whether the water volume between the process line and the Q=Q0 line is equal to the dischargeable volume V.If it is not equal, adjust the T-value and retry the calculation until they are equal. Working condition settings In order to study different dam-break scenarios for small clusters of terraced reservoirs in hilly areas and their impact on disaster prevention objects in the downstream basin, this paper designs 10 series and parallel scenarios including single reservoir breaches and multiple reservoir breaches, as detailed in Table 3.The different scenarios show a single wave propagation downstream, with the maximum flow at the beginning of the breach when the breach occurs and then decreasing rapidly thereafter.The comparison of the different scenarios shows that the larger the reservoir capacity and the higher the dam site, the higher the breach flow.From scenarios 2 and 5, 3 and 6, 4 and 7 and 9, it can be seen that when there is a succession of breaches in the upstream gradient reservoirs, the resulting breached flow is greater than when there is a single reservoir breach, because as the breached flow evolves in the upstream reservoirs to the downstream reservoirs, the breached flow peaks will be superimposed and will be greater than the individual breached flows in the downstream reservoirs.As can be seen from Scenarios 4 and 8, the flow flood peaks from the simultaneous breach of the upstream and downstream reservoirs do not have a significant impact on the flow flood peaks from the separate breach of the downstream reservoirs.The distance between the two reservoirs is 1.58km.When the two reservoirs are breached at the same time, the peak flow will be significantly reduced during the evolution of the upstream reservoir breaching flood into the downstream reservoir due to the river roughness.The flood peak will then be smaller than its own peak flow. 
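To make the hydrograph-fitting step described earlier in this section concrete (determining the duration T so that the volume between the flow process line and the Q = Q0 line equals the dischargeable volume V), the sketch below assumes a receding quadratic parabola Q(t) = Q0 + (Qm − Q0)(1 − t/T)² and adjusts T by bisection until the volumes match. The parabolic shape and the example numbers are assumptions for illustration only; the tabulated process lines in the Hydraulic Calculation Manual may differ, and for this particular shape T also has the closed form 3V/(Qm − Q0).

```python
# Hedged sketch of the "adjust T until the volumes match" step described above.
# Assumed hydrograph shape (illustration only): Q(t) = Q0 + (Qm - Q0) * (1 - t/T)**2,
# i.e. the peak flow Qm occurs at the instant of breach and recedes to the base flow Q0.

def breach_volume(T, Qm, Q0, n=5000):
    """Volume between the hydrograph and the Q = Q0 line (trapezoidal rule), in m^3."""
    dt = T / n
    total = 0.0
    for i in range(n):
        q_a = (Qm - Q0) * (1.0 - i * dt / T) ** 2
        q_b = (Qm - Q0) * (1.0 - (i + 1) * dt / T) ** 2
        total += 0.5 * (q_a + q_b) * dt
    return total

def fit_duration(V, Qm, Q0, T_lo=1.0, T_hi=1.0e6, tol=1.0e-3):
    """Bisect on the duration T until the released volume equals the dischargeable volume V."""
    while T_hi - T_lo > tol:
        T_mid = 0.5 * (T_lo + T_hi)
        if breach_volume(T_mid, Qm, Q0) < V:
            T_lo = T_mid
        else:
            T_hi = T_mid
    return 0.5 * (T_lo + T_hi)

# Hypothetical example: 1.0 million m^3 dischargeable volume, 800 m^3/s peak, 5 m^3/s base flow.
T = fit_duration(V=1.0e6, Qm=800.0, Q0=5.0)
print(f"fitted breach duration T ~ {T:.0f} s (closed form 3V/(Qm - Q0) = {3 * 1.0e6 / 795.0:.0f} s)")
```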
Analysis of the maximum flow along the route of the dam failure flood for different scenarios The peak flow is gradually reduced from the breach site onwards.The following is the trend of the peak flow for each representative section of the breach flood along the route of each scenario.From options 2 and 5, 3 and 6, it is known that the upstream and downstream reservoirs collapse together compared to the downstream reservoirs collapse alone The peak flows generated are not only the highest at the dam site, but also the maximum flows along the course of the flood as it evolves downstream.the higher the peak flow generated by the successive breaches of the terrace reservoirs, the faster the receding rate during the evolution, but always greater than the single reservoir breach flow.As can be seen from Scenarios 4 and 8, the maximum flood flows are the same when the downstream reservoirs are breached alone and when the upstream and downstream gradient reservoirs are breached simultaneously, but the maximum alongstream flow is greater in the latter than in the former.Scenarios 2 and 5, 3 and 6, 4 and 7 and 9 show that the inundation area resulting from the successive breaches of the terrace reservoirs is greater than that resulting from the single breach of the downstream reservoirs.Comparing Scenario 4, Scenario 7 and Scenario 8, it is clear that the inundation area resulting from a downstream reservoir breach due to the flow of the upstream reservoir is greater than the inundation area resulting from a simultaneous breach of the upstream and downstream reservoirs, while the inundation area resulting from a simultaneous breach of the upstream and downstream reservoirs is greater than the inundation area resulting from a separate breach of the downstream reservoir.According to the analysis in section 4.2, although the simultaneous dam failure in the upstream and downstream reservoirs produces a flood flow equal to the flood peak of the downstream reservoir alone, it is generated by the superposition of the two parts of the flood flow and therefore the flood volume is larger and the inundated area is greater. Analysis of coastal village inundation impacts The analysis of the two-dimensional dam failure flood calculation results shows that within the calculation range, the villages of Hehua, Xing Shanzhuang and Xiadaitang are closer to the Xing Shan reservoir dam site, and the leading edge and peak present time of the small reservoir dam failure flood is fast and the emergency response time is short, so it is necessary to develop a reasonable avoidance and relocation plan for the inundated villages.Given that scenario 10 caused the greatest inundation impact during this simulation, this most unfavourable scenario was considered for selection. 
Of the inundated features, 90.9% were arable land and only 9.1% were residential land.In this simulation, 10 villages were inundated: Hewa, Xingshanzhuang, Xiadaitang,Wangjiashidai, Xujiaying, Shijie, Guozhuang, Xiningjiazhuang,Sujiazhuang and Maozhuangqiao North.Residents of He Wa and Heng Shan Zhuang can move southeast along the path up the mountain to settlement 1 near Dongliang Highway, while Xu Jiaying, Shijie Tun, Xia Dai Tang and Wang Jia Xia Dai can move along the path to settlement 2 on the high ground near Dongliang Highway between them, and Go Zhuang, Xining Jia Zhuang, Su Jia Zhuang and Mao Zhuang Qiao Bei villages can move to settlement 3 on the high ground near G342 in the west.The village of Hehua has the shortest emergency transfer time before the front edge of the flooding arrives, and should therefore respond quickly and move as soon as instructions to do so are received. Conclusion In the analysis of the impact on downstream inundation in both single and continuous failure modes, all the conditions in this study show that when a single failure occurs in the downstream reservoirs, the maximum flow at the breach is greater than when a continuous failure occurs in the upstream and downstream terrace reservoirs, and the maximum flow along the course of the flood is also greater, and the impact on downstream inundation is more extensive in terms of area. The conclusions drawn from this simulation comparing flood simulations in Scenario 7 and Scenario 8 are not generalisable.When the upstream and downstream gradient reservoirs are breached simultaneously, the final inundation results differ depending on multiple factors such as the maximum breach flow of the upstream and downstream reservoirs, the distance of the river between the two reservoirs and the impediment of the in-channel roughness to the rate of flood evolution, and should therefore be analysed on a case-by-case basis.If the upstream reservoir is large and the two reservoirs are close together, the impact on the downstream will be even greater if a succession of reservoir breaches occurs.Therefore, the study of graded reservoir breaches is of great significance for flood control and the protection of people's lives and property, and will be of great value for the development of flood evolution theory and engineering applications in the future. Due to the complexity of the reservoir failure mechanism, different calculation methods yield different results.The calculation method used in this paper is relatively rough, and the results of the inundation analysis may be different from the actual situation, but it can provide some reference for subsequent studies on the analysis of dam failure floods in terrace reservoirs. Fig. 1 . Fig. 1.Overview map of the study area. 4 Analysis of the simulation results of the dam failure flood 4 . 1 Analysis of the flow processes in the different scenarios of routingThe analysis of breach flow characteristics is the main research content of the breach flood, and the analysis of the maximum flow of the breach flood is an important work in the flood control planning and engineering design and construction of the downstream urban areas of the reservoir, and the breach flow can reflect the great destructiveness of the breach flood.Based on the 10 dam breach scenarios developed, the flood flow process at the breach was analysed and calculated for each scenario, and the flow process line at the breach is shown in Figure2. Fig. 2 . Fig. 2. 
Flow process lines at the breach for different scenarios. Fig. 3 . Fig. 3. Comparison of maximum flows along the length of the dam breach without options. Table 1 . Table of reservoir parameters in the study area. name Size Longitude Latitude Total storage capacity (million m3 ) Dam height (m) Dam type . . . Table 4 . Table of parameters characterising the breach under different scenarios. Table 5 . Table of inundated areas under different working conditions.
Voltage-dependent dynamics of the BK channel cytosolic gating ring are coupled to the membrane-embedded voltage sensor In humans, large conductance voltage- and calcium-dependent potassium (BK) channels are regulated allosterically by transmembrane voltage and intracellular Ca2+. Divalent cation binding sites reside within the gating ring formed by two Regulator of Conductance of Potassium (RCK) domains per subunit. Using patch-clamp fluorometry, we show that Ca2+ binding to the RCK1 domain triggers gating ring rearrangements that depend on transmembrane voltage. Because the gating ring is outside the electric field, this voltage sensitivity must originate from coupling to the voltage-dependent channel opening, the voltage sensor or both. Here we demonstrate that alterations of the voltage sensor, either by mutagenesis or regulation by auxiliary subunits, are paralleled by changes in the voltage dependence of the gating ring movements, whereas modifications of the relative open probability are not. These results strongly suggest that conformational changes of RCK1 domains are specifically coupled to the voltage sensor function during allosteric modulation of BK channels. Introduction The open probability of large conductance voltage-and Ca 2+ -activated K + (BK or slo1) channels is regulated allosterically by voltage and intracellular concentration of divalent ions (Barrett et al., 1982;Moczydlowski and Latorre, 1983;Horrigan and Aldrich, 2002;Latorre et al., 2017). This feature makes BK channels important regulators of physiological processes such as neurotransmission and muscular function, where they couple membrane voltage and the intracellular concentration of Ca 2+ (Robitaille and Charlton, 1992;Hu et al., 2001;Wang et al., 2001;Raffaelli et al., 2004). The BK channel is formed in the membrane as tetramers of a subunits, encoded by the KCNMA1 gene (Shen et al., 1994;Quirk and Reinhart, 2001). Each a subunit contains seven transmembrane domains (S0 to S6), a small extracellular N-terminal domain and a large intracellular C-terminal domain (Wallner et al., 1996;Meera et al., 1997;Tao et al., 2017) (Figure 2a). Similar to other voltage-gated channels, the voltage across the membrane is sensed by the voltage sensor domain (VSD), containing charged amino acids within transmembrane segments S2, S3 and S4 (Díaz et al., 1998;Ma et al., 2006;Pantazis and Olcese, 2012;Tao et al., 2017). The sensor for divalent cations is at the C-terminal region and is formed by two Regulator of Conductance for K + domains (RCK1 and RCK2) per a subunit (Wei et al., 1994;Moss and Magleby, 2001;Xia et al., 2002;Zeng et al., 2005;Wu et al., 2010). In the tetramer, four RCK1-RCK2 tandems pack against each other in a large structure known as the gating ring (Wu et al., 2010;Yuan et al., 2011;Giraldez and Rothberg, 2017;Tao et al., 2017;Zhou et al., 2017). Two high-affinity Ca 2+ binding sites are located in the RCK2 (also known as 'Ca 2+ bowl') and RCK1 domains, respectively. Additionally, a site with low affinity for Mg 2+ and Ca 2+ is located at the interface between the VSD and the RCK1 domain (Shi and Cui, 2001;Zhang et al., 2001;Bao et al., 2002;Xia et al., 2002;Yang et al., 2007;Yang et al., 2008a;Tao et al., 2017) (Figure 2a). The high-affinity binding sites show structural dissimilarity (Zhang et al., 2010;Tao et al., 2017) and different affinity for divalent ions (Zeng et al., 2005). 
Apart from Ca 2+ , it has been described that Cd 2+ selectively binds to the RCK1 site, whereas Ba 2+ and Mg 2+ show higher affinity for the RCK2 site (Xia et al., 2002;Zeng et al., 2005;Yang et al., 2008b;Zhou et al., 2012;Miranda et al., 2016). Thus, intracellular concentrations of Ca 2+ , Cd 2+ , Ba 2+ or Mg 2+ can shift the voltage dependence of BK activation towards more negative potentials. Using patch clamp fluorometry (PCF), we have shown that these cations trigger independent conformational changes of RCK1 and/or RCK2 within the gating ring, measured as large changes in the efficiency of Fluorescence Resonance Energy Transfer (FRET) between fluorophores introduced into specific sites in the BK tetramer. These rearrangements depend on the specific interaction of the divalent ions with their high-affinity binding sites, showing different dependences on cation concentration and membrane voltage (Miranda et al., 2013;Miranda et al., 2016). To date, the proposed transduction mechanism by which divalent ion binding increases channel open probability was a conformational change of the gating ring that leads to a physical pulling of the channel gate, where the linker between the S6 transmembrane domain and the RCK1 region acts like a passive spring (Niu et al., 2004). Such a mechanism would be analogous to channel activation by ligand binding in glutamate receptor or cyclic nucleotide-gated ion channels, also tetramers (Sobolevsky et al., 2009;James et al., 2017). Our previous results do not support this as the sole mechanism underlying coupling of divalent ion binding to channel opening, since the gating ring conformational changes that we have recorded: 1) are not strictly coupled to the opening of the channel's gate, and 2) show different voltage dependence for each divalent ion. In addition, the recent cryo-EM structure of the full slo1 channel of Aplysia californica Tao et al., 2017) shows that the RCK1 domain of the gating ring is in contact with the VSD, predicting that changes in the voltage sensor position could be reflected in the voltage dependent gating ring reorganizations. Understanding the nature of the voltage dependence associated with individual rearrangements produced by binding of divalent ions to the gating ring is essential to untangle the mechanism underlying the role of such rearrangements in BK channel gating. To this end, we have now performed PCF measurements with human BK channels heterologously expressed in Xenopus oocytes, including a range of VSD mutations or co-expressed with different regulatory subunits. Here we provide evidence for a functional interaction between the gating ring and the voltage sensor in fulllength, functional BK channels at the plasma membrane, in agreement with the structural data from Aplysia BK. Moreover, these data support a pathway that couples to divalent ion binding to channel opening through the voltage sensor. Results Voltage dependence of gating ring rearrangements is associated to activation of the RCK1 binding site BK a subunits labeled with fluorescent proteins CFP and YFP in the linker between the RCK1 and RCK2 domains (position 667) retain the functional properties of wild-type BK channels (Miranda et al., 2013;Miranda et al., 2016). This allowed us to use PCF to detect conformational rearrangements of the gating ring measured as changes in FRET efficiency (E) between the fluorophores (Miranda et al., 2013;Miranda et al., 2016). 
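As background for readers less familiar with the FRET readout (this is standard Förster theory, not a relation derived in the paper), the efficiency E between a donor-acceptor pair falls off steeply with their separation r,

$$E = \frac{1}{1 + (r/R_0)^6},$$

where R_0 is the Förster distance of the pair (on the order of 5 nm for CFP-YFP). The E values reported here are apparent, ensemble-averaged efficiencies across the four labeled subunits, so an increase in E is read as a decrease in the average distance between the labeled positions on the gating ring rather than as a single molecular distance.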
Binding of Ca 2+ ions to both high-affinity binding sites (RCK1 and Ca 2+ bowl) produces an activation of BK channels, coincident with an increase in E from basal levels reaching saturating values at high Ca 2+ concentrations (Miranda et al., 2013 and Figure 1a). In addition, we observed that the E signal has the remarkable property that in intermediate Ca 2+ concentrations (from 4 mM to 55 mM) it shows voltage dependence besides its Ca 2+ dependence (Miranda et al., 2013 and Figure 1a). As discussed previously (Miranda et al., 2013), these changes in E with voltage are not conformational dynamics of the gating ring that simply follow the Figure 1. Voltage dependence of gating ring rearrangements is associated to activation of the RCK1 binding site. G-V (left panels) and E-V curves (right panels) obtained simultaneously at several Ca 2+ concentrations from (a) the BK667CY construct, (b) mutation of the RCK1 high-affinity site (D362A/D367A), (c) mutation of the Ca 2+ bowl (5D5A), or (d) both (D362A/D367A 5D5A). Note that the voltage dependence of the E signal is only abolished after Figure 1 continued on next page voltage dependence of VSD. For instance, at 0 Ca 2+ concentrations movements of the VSD occurs between 0 and +300 mV (Stefani et al., 1997;Horrigan et al., 1999;Horrigan and Aldrich, 2002;Zhang et al., 2014;Carrasquel-Ursulaez et al., 2015;Zhang et al., 2017). However, we do not observe changes in E between 0 and +240 mV (Figure 1a). Similarly, at 100 mM Ca 2+ , charge movement takes place between À100 and +150 mV (Carrasquel-Ursulaez et al., 2015), while our FRET signals at 95 mM Ca 2+ do not vary within this voltage range (Figure 1a). Independent activation of high-affinity binding sites by other divalent ions (Ba 2+ , Cd 2+ , or Mg 2+ (Miranda et al., 2016)) led us to postulate that Ca 2+ activation has a site-dependent relation to voltage. To further evaluate the effect of individual high-affinity Ca 2+ binding sites on the voltage-dependent component of the gating ring conformational changes we first selectively mutated the binding sites. Mutations D362A and D367A (Xia et al., 2002;Zeng et al., 2005) were introduced in the BK667CY construct (BK667CY D362A/D367A ) to remove the high-affinity binding site located in the RCK1 domain. Figure 1b shows the relative conductance and E values for the BK667CY D362A/D367A construct at different membrane voltages for various Ca 2+ concentrations. As described previously, the G-V curves show a significantly reduced shift to more negative potentials when Ca 2+ is increased, as compared to the non-mutated BK667CY (Figure 1a-b, left panels). Specific activation of the Ca 2+ bowl renders a smaller change in E values, which are not voltage-dependent within the voltage range tested (Figure 1b, right panel). To test the effect of eliminating the RCK2 Ca 2+ binding site -the Ca 2+ bowlwe mutated five aspartates to alanines (5D5A) (Bao et al., 2002). As expected, activation of only the RCK1 domain by Ca 2+ reduced the Ca 2+ -dependent shift in the GV curves ( Figure 1c, left panel). Even though the extent to which the E values changed with Ca 2+ was reduced (Figure 1c), there was a persistent voltage dependence equivalent to that shown in Figure 1a corresponding to the nonmutated channel (most appreciable at 12 mM and 22 mM Ca 2+ concentrations; Figure 1c, right panel) (Miranda et al., 2013). 
Further, at these two Ca 2+ concentrations the changes in E occurred within the same voltage range (+60 to +120 mV) in channels with the Ca 2+ bowl mutated (Figure 1c) or not (Figure 1a). This effect seems not to be attributable to Ca 2+ binding to unknown binding sites in the channel, since the double mutation of the RCK1 and RCK2 sites abolishes the change in the FRET signal (Figure 1d). Altogether, these results indicate that the voltage-dependent component of the gating ring conformational changes triggered by Ca 2+ in the BK667CY construct depends on activation of the RCK1 binding site. Because the gating ring is not within the transmembrane region, it is not expected to be directly influenced by the transmembrane voltage. Therefore, the voltage-dependent FRET signals must be coupled to the dynamics of the gate region associated with the opening and closing of the channel and/or those of the voltage sensor domain.

The voltage-dependent conformational changes of the gating ring are not related to the opening and closing of the pore domain

To test whether the voltage-dependent FRET signals relate to the opening and closing of the channel (intrinsic gating), we used two modifications of BK channel function in which the relative probability of opening is shifted along the voltage axis, yet the actual dynamics of the voltage sensor are expected to be unaltered (Figure 2b). We reasoned that, if the voltage-dependent FRET signals of the gating ring are coupled to the opening and closing, they should follow a similar displacement with voltage. The first BK channel construct is the a subunit including the single point mutation F315A, which has been described to shift the voltage dependence of the relative conductance of the channel to more positive potentials by uncoupling the voltage sensor activation from the gate opening (Figure 2c) (Carrasquel-Ursulaez et al., 2015). Figure 2d shows the relative conductance and E vs. voltage for the BK667CY F315A mutant at various Ca 2+ concentrations. Our results show that the shift of the G-V curves produced by the F315A mutation is not accompanied by an equivalent displacement of the E-V relationships (Figure 2d). It should be noted that the extent of the shifts induced by the mutation are smaller than previously reported (Carrasquel-Ursulaez et al., 2015), which could arise from the different experimental conditions and/or our fluorescent construct. The second modification of BK function consisted in co-expressing the wild-type a subunit with the auxiliary subunit g1 (Yan and Aldrich, 2010; Yan and Aldrich, 2012; Gonzalez-Perez et al., 2014; Li and Yan, 2016). In this case, the relative probability of opening is shifted to more negative potentials by increasing the coupling between the voltage sensor and the gate of the channel (Figure 2e). This construct adds the advantage of representing a physiologically relevant modification of channel gating. Figure 2f shows the relative conductance and E vs. voltage in oocytes co-expressing BK667CYa and g1 at voltages ranging from −160 to +260 mV, with three [Ca 2+ ] concentrations: nominal 0, 12 mM and 22 mM. As expected, the presence of the g1 subunit drives the relative conductance curves to more negative potentials (Figure 2f, left panel) compared to the values obtained without g1 (Figure 2f, dashed lines). Remarkably, the change in the voltage dependence of the relative conductance induced by g1 does not alter the simultaneously recorded FRET signals (Figure 2f, right panel), which remain indistinguishable from those recorded with BK667CYa (Figure 2f, dashed lines).

Figure 1 continued. Data corresponding to each Ca 2+ concentration are color-coded as indicated in the legend at the bottom. Solid curves in the G-V graphs represent Boltzmann fits. For reference, grey shadows in the (a-d) left panels represent the full range of G-V curves corresponding to non-mutated BK667CY channels from 0 mM Ca 2+ to 95 mM Ca 2+ (indicated with colored dashed lines). Data points and error bars represent average ± SEM (n = 3-14, N = 2-8). Part of the data in (a, b and c) are taken from Miranda et al. (2013) and Miranda et al. (2016).

Figure 2 legend: (e) The interaction with the g1 subunit favors the VSD-PD coupling mechanism. (f) G-V (left) and E-V curves (right) of BK667CY a subunits co-expressed with g1 subunits. In all panels, data corresponding to each Ca 2+ concentration are color-coded as indicated in the bottom legend. Colored dashed lines represent the G-V and E-V curves corresponding to BK667CYa channels (Miranda et al., 2013; Miranda et al., 2016). The solid curves in the G-V graphs represent Boltzmann fits. The full range of G-V curves from 0 mM Ca 2+ to 95 mM Ca 2+ from BK667CY is represented as a grey shadow in left panels (d and f), for reference. Data points and error bars represent average ± SEM (n = 3-8; N = 2-3). DOI: https://doi.org/10.7554/eLife.40664.003

The dynamics of the VSD are directly reflected in the gating ring conformation

Using the allosteric HA model of BK channel function, Horrigan and Aldrich (2002) proposed that Ca 2+ binding to the Ca 2+ bowl is coupled to the voltage sensor activation. Yet, the strength of that interaction (allosteric constant E) was smaller than those corresponding to the coupling of the Ca 2+ - or V-sensors with channel opening (Horrigan and Aldrich, 2002). Interestingly, when E was derived from gating current data, a larger value was obtained (Carrasquel-Ursulaez et al., 2015). Further, Ca 2+ binding to the RCK1 domain (but not to the Ca 2+ bowl) is voltage-dependent (Sweet and Cox, 2008), which, as the authors hypothesized, might originate from physical interactions between the voltage sensors and the RCK1 domains. Additionally, using the cut-open oocyte voltage-clamp fluorometry approach, Savalli et al. (2012) showed that fluorescence emission from reporters within the VSD could change upon uncaged Ca 2+ stimuli. This evidence indicates that the VSD is coupled to the gating ring, but none of these approaches directly monitored the conformational changes of the gating ring structure. Therefore, we decided to explore whether the voltage dependence of the gating ring movements is attributable to the voltage sensor activation. To this end, we modified the voltage dependence of the VSD activation by co-expression with b auxiliary subunits or by introducing specific mutations in the VSD (Figure 3 and Figure 4). The effects of co-expressing the BK a subunit with the four different types of auxiliary b subunits have been extensively studied (Tseng-Crank et al., 1996; Behrens et al., 2000; Brenner et al., 2000; Cox and Aldrich, 2000; Uebele et al., 2000; Lingle et al., 2001; Zeng et al., 2001; Bao and Cox, 2005; Orio and Latorre, 2005; Yang et al., 2008a; Sweet and Cox, 2009; Contreras et al., 2012; Li and Yan, 2016). The b1 subunit has been previously proposed to alter the voltage sensor-related voltage dependence, as well as the intrinsic opening of the gate and Ca 2+ sensitivity (Figure 3a) (Cox and Aldrich, 2000; Bao and Cox, 2005; Orio and Latorre, 2005; Sweet and Cox, 2009; Contreras et al., 2012; Castillo et al., 2015).
Recordings from BK667CYa co-expressed with b1 subunits reveal the expected modifications in the voltage dependence of the relative conductance, that is an increase in the apparent Ca 2+ sensitivity ( Figure 3b, left panel) (Wallner et al., 1995;Cox and Aldrich, 2000;Bao and Cox, 2005;Orio and Latorre, 2005;Sweet and Cox, 2009;Contreras et al., 2012). In addition, it has been reported that b1 subunit alters the function of the VSD (Orio and Latorre, 2005;Castillo et al., 2015). Notably, the E-V curves are shifted to more negative potentials (Figure 3b, right panel), similarly to the described modification (Castillo et al., 2015). The structural determinants of the b1 subunit influence on the VSD reside within its N-terminus, which has been shown by engineering a chimera between the b3b subunit (which does not influence the VSD) and the N-terminus of the b1 (b3bNb1) (Castillo et al., 2015). We recapitulated this strategy. First, we co-expressed BK667CY a subunits with b3b and observed the expected inactivation of the ionic currents at positive potentials, yet with different blockade kinetics (see Figure 3-figure supplement 1) (Uebele et al., 2000;Xia et al., 2000;Lingle et al., 2001). The relative open probability of this complex is like BK667CYa alone, except that at extreme positive potentials the values of relative conductance at the tails decrease due to inactivation (Figure 3-figure supplement 1b, left panel). The values of E vs V remained comparable to those observed for BK667CYa (Figure 3-figure supplement 1b, right panel). We then co-expressed the b3bNb1 chimera (Castillo et al., 2015) with BK667CYa ( Figure 3c). This complex did not modify the relative conductance vs. voltage relationship (Figure 3d, left panel) as compared with BK667CYa alone (Figure 3d, grey shadow). On the other hand, while the magnitude of the FRET change is the same as in BK667CYa, the voltage dependence of E values at [Ca 2+ ] of 4 mM, 12 mM and 22 mM shifted to more negative potentials compared to the values of BK667CYa alone (Figure 3d, right panel, compare dashed to solid lines). Altogether, these results indicate that the alteration of the voltage dependence of the voltage sensor induced by the amino terminal of b1within the b3bNb1 chimera underlies the modification of the voltage dependence of the gating ring conformational changes, reinforcing the hypothesis that this voltage dependence is directly related to VSD function. VSD activation can also be altered by introducing single point mutations that modify the voltage of half activation of the voltage sensor, V h (j). This parameter is determined by fitting data to the HA allosteric model (Ma et al., 2006) or directly from gating current measurements (Zhang et al., 2014). Mutations of charged amino acids on the VSD have been reported to produce different modifications in the V h (j) values. In some cases, other parameters related to BK channel activation are additionally affected by the mutations. Mutation R210E shifts the V h (j) value from +173 mV to +25 mV at 0 Ca 2+ in BK channels (Figure 4a) (Ma et al., 2006). Consistent with this, introduction of this Left panel, G-V curves obtained at several Ca 2+ concentrations after co-expression of BK667CY with the b1 subunit, which induces a leftward shift in the E-V curves obtained simultaneously (right). (c) b3bNb1 chimeras produce similar effects to b1 on VSD function, since they retain the N-terminal region of b1 (Castillo et al., 2015). 
(d) G-V (left) and E-V curves (right) of BK667CY a subunits co-expressed with the b3bNb1 chimera. Data corresponding to each Ca 2+ concentration are color-coded as indicated in the legend at the bottom. Colored dashed lines represent the G-V and E-V curves corresponding to BK667CYa channels (Miranda et al., 2013;Miranda et al., 2016). The solid curves in the G-V graphs represent Boltzmann fits. The full range of G-V curves from 0 mM Ca 2+ to 95 mM Ca 2+ from BK667CY is represented as a grey shadow in left panels (b and d), for reference. Data points and error bars represent average ± SEM (n = 3-10; N = 2-4). DOI: https://doi.org/10.7554/eLife.40664.004 The following figure supplement is available for figure 3: and the coupling between the VSD and channel gate (Zhang et al., 2014). As previously reported, BK667CY E219R showed modified relative conductance vs. voltage relationships at different Ca 2+ concentrations (Figure 4d, left panel) (Zhang et al., 2014). In addition, this construct revealed a shift to more negative potentials in the E vs. voltage dependence at intermediate Ca 2+ concentrations (12 mM and 22 mM Ca 2+ ; Figure 4d, right panel), paralleling the reported negative shift in V h (j) (Ma et al., 2006;Zhang et al., 2014). Since mutations displacing the V h (j) to more negative potentials induce equivalent shifts in the voltage dependence of the gating ring motion (measured as E), we tested if other mutations previously reported to induce positive shifts on V h (j) (Ma et al., 2006) were also associated with changes of the E-V curves in the same direction. As shown by Ma et al., the largest effect on V h (j) is induced by the R213E mutation, producing a shift of DV h (j)=+337 mV (Figure 4e) (Ma et al., 2006). The BK667CY R213E construct showed a significant shift in the voltage dependence of the relative conductance to more positive potentials (Figure 4f, left panel). Notably, this effect was paralleled by a large displacement in the E vs. voltage dependence towards more positive potentials (Figure 4f, right panel). Taken together, our data show that modifications of the V h (j) values caused by mutating the VSD charged residues are reflected in equivalent changes in the voltage dependence of the gating ring conformational rearrangements, which occur in analogous directions and with proportional magnitudes at intermediate Ca 2+ concentrations. All these results on the VSD modifications and their corresponding changes in FRET signals support the existence of a direct coupling mechanism between the VSD function and the gating ring conformational changes. Parallel alterations of the voltage dependence of VSD function and gating ring motions by selective activation of the RCK1 binding site We have previously shown that specific interaction of Cd 2+ with the RCK1 binding site leads to activation of the BK channel, which is accompanied by voltage-dependent changes in the E values at intermediate Cd 2+ concentrations of 10 mM and 30 mM (Miranda et al., 2016). To further assess the role of the RCK1 binding site activation in the voltage dependence of the gating ring motions, we studied activation by Cd 2+ of selected BK667CY VSD mutants ( Figure 5). Addition of Cd 2+ to the BK667CY E219R mutant (Figure 5a) shifted the voltage dependence of E towards more negative potentials at intermediate Cd 2+ concentrations (10 mM and 30 mM; Figure 5b) when compared to non-mutated BK667CY (Figure 5b; dashed lines). 
This change in the E-V curves induced by selective activation of the RCK1 binding site with Cd 2+ paralleled the large negative shift (DV h (j) = À110 mV) previously reported with the E219R mutant BK channels (Ma et al., 2006;Zhang et al., 2014). We also tested Cd 2+ activation in the mutant BK667CY R201Q , which shifts the V h (j) parameter by 47 mV towards positive potentials (Figure 5c) (Ma et al., 2006). Addition of Cd 2+ rendered right-shifted E vs. voltage relationships (Figure 5d, right panel), following the direction of the predicted V h (j) shift described for this mutant BK channel (Ma et al., 2006). Finally, addition of Cd 2+ to the BK667CY F315A construct (Figure 5e) (Carrasquel-Ursulaez et al., 2015) did not have any effect on the E-V relationship (Figure 5f). These results are consistent with a mechanism in which specific binding of Cd 2+ to the RCK1 binding site allows voltage-dependent conformational changes in the gating ring that are directly related to VSD activation. Voltage dependence of Ba 2+ -induced gating ring movement is related to function of the channel gate Ca 2+ , Mg 2+ and Ba 2+ bind to the Ca 2+ bowl and trigger conformational changes of the gating ring region (Miranda et al., 2016). However, the effects of these ions on BK function and gating ring motions are fundamentally different. Notably, Ba 2+ induces a rapid blockade of the BK current after a transient activation that is measurable at low Ba 2+ concentrations (Zhou et al., 2012;Miranda et al., 2016) (Figure 6a). In addition, we previously showed that the gating ring conformational motions induced by Ba 2+ show a voltage-dependent component, which is not observed when Ca 2+ or Mg 2+ bind to the Ca 2+ bowl (Miranda et al., 2013;Miranda et al., 2016) (Figure 6b). We combined mutagenesis with the cation-specific activation strategy to identify the structural source of the voltage dependence in Ba 2+ -triggered gating ring motions. In this case, alteration of VSD function by mutating charged residues (Figure 6c and e) was not reflected in any change of the E vs. voltage relationships, as shown in Figure 6d and f for constructs BK667CY R210E and BK667CY R213E , respectively. These results indicate that the voltage dependence of Ba 2+ -induced gating ring conformational changes, unlike those induced by Ca 2+ and Cd 2+ through activation of the RCK1 binding site, may not be related to VSD activation. This conclusion is further supported by the lack of changes in Ba 2+ responses when mutations in the VSD were made in a RCK1 Ca 2+ binding site knockout (D362A D367A) background ( Figure 6-figure supplement 1b & c). Next, we studied the effect of Ba 2+ on BK667CY channels containing the F315A mutation (Figure 6g) (Carrasquel-Ursulaez et al., 2015). As shown in Figure 6h, the E values reached similar levels to those of nonmutated BK667CY channels at saturating Ba 2+ concentrations. However, at intermediate Figure 5. Voltage dependence of gating ring rearrangements after specific activation of RCK1 high-affinity binding site by Cd 2+ . (a) Effect of the VSD E219R mutation on the selective activation of RCK1 by Cd 2+ . (b) G-V (left panels) and E-V curves (right panels) obtained simultaneously at several Ca 2+ concentrations from constructs BK667CY E219R . 
(c) VSD R201Q mutation induces a positive shift of V h(j) (d) G-V (left panels) and E-V curves (right panels) obtained simultaneously at several Cd 2+ concentrations from constructs BK667CY R201Q (e) Effect of the F315A mutation on the selective activation of RCK1 by Cd 2+ . (f) G-V (left panels) and E-V curves (right panels) obtained simultaneously at several Cd 2+ concentrations from constructs BK667CY F315A . Data corresponding to each Cd 2+ concentration are color-coded as indicated in the legend at the bottom. Colored dashed lines represent the G-V and E-V curves corresponding to BK667CYa channels (Miranda et al., 2013;Miranda et al., 2016). The solid curves in the G-V graphs represent Boltzmann fits. The full range of G-V curves from 0 mM Cd 2+ to 100 mM Cd 2+ corresponding to non-mutated BK667CY is represented as a grey shadow in left panels (b), (d), and (f), for reference. Data points and error bars represent average ± SEM (n = 3-4; N = 2). DOI: https://doi.org/10.7554/eLife.40664.007 Figure 6. Voltage dependence of gating ring movements triggered by Ba 2+ . (a) The RCK2 site is selectively activated by Ba 2+ , which additionally induces pore block. (b) FRET efficiency (E) data obtained at several Ba 2+ concentrations from BK667CY constructs (Miranda et al., 2016). (c) Effect of the VSD R210E mutation after selective activation of the RCK2 binding site by Ba 2+. (d) E-V curves obtained at several Ba 2+ concentrations from Figure 6 continued on next page concentrations of Ba 2+ the E-V curves were shifted towards more positive potentials when compared with BK667CY channels (Figure 6h, dashed line). These results suggest that the voltage-dependent component of the conformational changes triggered by Ba 2+ binding to the Ca 2+ bowl are not directly related to VSD activation, but rather to the function of the channel gate. Discussion Using fluorescently labeled BKa subunit constructs reporting protein dynamics between the RCK1 and RCK2 domains, we previously demonstrated that the channel high-affinity binding sites can be independently activated by different divalent ions, inducing energetically-additive rearrangements of the gating ring measured as changes in the FRET efficiency values, E (Miranda et al., 2013;Miranda et al., 2016). Further, the effects of Ca 2+ , Cd 2+ and Ba 2+ on the E values showed a voltage-dependent component, for which we could not provide an explanation. Voltage dependence of Ca 2+ -induced rearrangements seemed to be specifically related to RCK1 activation, since only the mutation of that binding site resulted in voltage-independent E signals (Miranda et al., 2016 and Figure 1). One possibility to explain this result is the existence of direct structural interactions of the RCK1 domain and the VSD. Interestingly, the recently obtained cryo-EM full BK structure from Aplysia californica revealed the existence of specific protein-protein interfaces formed by the amino terminal lobes of the RCK1 domains facing the transmembrane domain and the VSD/S4-S5 linkers . According to the structural data obtained in saturating Mg 2+ and Ca 2+ concentrations, gating of the channel by Ca 2+ was proposed to be mediated, at least partly, by displacement of these interfaces causing the VSD/S4-S5 linkers to move, contributing to pore opening ( Tao et al., 2017); but see also (Zhou et al., 2017)). Our work provides functional data supporting this mechanism. 
Our data show that mutations altering the voltage dependence of BK VSD are reflected in the voltage dependence of the gating ring movements triggered by activation of the RCK1 binding site by Ca 2+ or Cd 2+ . Mutations altering VSD function by inducing large leftward shifts in the V h (j) values (Ma et al., 2006;Zhang et al., 2014) strongly correlate with negative shifts in the voltage dependence of the E signals. Likewise, mutations inducing positive shifts in the VSD voltage dependence of the voltage sensor function are reflected in E-V shifts towards more positive membrane voltages. Interestingly, we also observe a correlation between the changes in the slope of the G-V curves and that of the E-V curves (e.g. Figure 4f; see also Supplementary file 1), suggesting the existence of an interaction between the VSD and the gating ring. This idea is further supported by the effect of b1 which has been proposed to alter the voltage dependence of VSD function (Wallner et al., 1995;Cox and Aldrich, 2000;Nimigean and Magleby, 2000;Bao and Cox, 2005;Orio and Latorre, 2005;Contreras et al., 2012;Castillo et al., 2015). We observed that b1 and b3bNb1 induce a leftward shift in the E-V curves. Conversely, two experimental strategies known to influence the G-V curves without direct interference with the VSD did not affect the voltage dependence of E. The lack of effect on the E-V curves of the mutation F315A can be explained because the shift in the G-V curves arises from the influence of this mutation in the C !O transition with minor effects on the voltage dependence of the gating currents (Carrasquel-Ursulaez et al., 2015). Analogously, no change in the voltage dependence of E was observed after co-expression of BKa with the g1 subunit, which shifts the voltage dependence of pore opening by enhancing its allosteric coupling with the voltage sensor activation (Yan and Aldrich, 2010). As with the mutation F315A, the presence of g1 subunit produces a minor shift in the Q-V distributions, not paralleling the large shift in the G-V curves (Carrasquel-Ursulaez and Ramon Latorre, personal communication). A puzzling result from our previous study was the observation that Ba 2+ binding to the Ca 2+ bowl triggers voltage-dependent conformational changes (Miranda et al., 2016). Even though we still do not know the mechanisms of this unique response to Ba 2+ , here we learned that it is not related to the dynamics of VSD, but rather influenced by perturbations affecting the opening and closing of the channel at the pore domain. Why Ba 2+ but not Ca 2+ ? A possible answer for this question is that Ba 2+ has the additional property of blocking the permeation pathway (Miller, 1987;Neyton and Miller, 1988;Zhou et al., 2012), which could somehow be transmitted allosterically to the gating ring. If simply ion permeation blockade is what matters, then we might expect that blocking permeation with the high affinity quaternary ammonium derivative N-(4-[benzoyl]benzyl)-N,N,N-tributylammonium (bb-TBA) (Tang et al., 2009) should produce a voltage dependent FRET signal with Ca 2+ activation. But, it does not ( Figure 6-figure supplement 1d). Another possibility for the Ba 2+ effect could be a direct allosteric interaction between the intrinsic gating in the pore and the divalent binding site in RCK2, which needs to be tested further. 
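One way the mutation-by-mutation comparison described above could be made quantitative is a simple linear regression of the shift in the E-V midpoint against the reported shift in V h (j) for each VSD mutant. The sketch below assumes that framing; the V h (j) shifts are the values quoted in the text, while the E-V midpoint shifts are hypothetical placeholders, not fitted values from the paper.

```python
# A minimal sketch (hypothetical numbers): regress E-V midpoint shifts against
# the V_h(j) shifts quoted in the text for the VSD mutants E219R, R210E, R201Q
# and R213E. The E-V shifts below are placeholders for illustration only.
import numpy as np
from scipy import stats

delta_vh_j = np.array([-110.0, -148.0, 47.0, 337.0])     # mV, from the text
delta_vhalf_E = np.array([-60.0, -75.0, 30.0, 180.0])    # mV, hypothetical placeholders

slope, intercept, r_value, p_value, stderr = stats.linregress(delta_vh_j, delta_vhalf_E)
print(f"slope = {slope:.2f}, r = {r_value:.2f}, p = {p_value:.3f}")
```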
Irrespective of the fluorescent construct (Miranda et al., 2013) or the divalent ion used to activate the BK channel (Miranda et al., 2016), we have consistently observed that the conformational changes monitored as changes in the FRET efficiency are not strictly coupled to the intrinsic gating of the channel. In this study, we have found that the consequences of manipulating the voltage dependence of the intrinsic gating, by means of the VSD and of the pore region, are paralleled by the FRET efficiencies. These results rule out the possibilities that FRET signals derive from conformational changes in an unknown Ca 2+ binding site or that they are completely uncoupled from the intrinsic gating. In conclusion, our functional data show a strong correlation between the VSD function and the RCK1 conformational changes, suggesting a transduction mechanism from ion binding to channel activation. This transduction mechanism is in agreement with the existence of structural interactions between the RCK1 domain and the VSD. The correlation between VSD function and the RCK1 conformational changes is not observed between RCK2 and the VSD, suggesting the existence of a different transduction mechanism that may include an indirect mechanism through the RCK1 or the RCK1-S6 linker.

Materials and methods

Simultaneous fluorescent and electrophysiological recordings were obtained as previously described (Miranda et al., 2013; Miranda et al., 2016). Conductance-voltage (G-V) curves were obtained from tail currents using standard procedures. The G-V relations were fit with the Boltzmann function G/Gmax = 1/(1 + exp(−zF(V − Vhalf)/RT)), where Gmax is the maximum tail current, z is the voltage dependence of activation, and Vhalf is the half-activation voltage of the ionic current. T is the absolute temperature (295 K), F is Faraday's constant and R the universal gas constant. Fit parameters are provided in Supplementary file 1. Conformational changes of the gating ring were tracked as intersubunit changes of the FRET efficiency between CFP and YFP, as previously reported (Miranda et al., 2013; Miranda et al., 2016). Analysis of the FRET signal was performed using emission spectra ratios. We calculated the FRET efficiency as E = (RatioA − RatioA0)/(RatioA1 − RatioA0), where RatioA and RatioA0 are the emission spectra ratios for the FRET signal and for the acceptor-only control, respectively (Zheng and Zagotta, 2003); RatioA1 is the maximum emission ratio that we can measure in our system (Miranda et al., 2013; Miranda et al., 2016). This value of E is proportional to the FRET efficiency (Zheng and Zagotta, 2003). The E value shown is an average of the E values corresponding to each tetramer present in the membrane patch and represents an estimate of the distance between the fluorophores located at the same position in the four subunits of the tetramer. Where possible, the E-V relations were fit with the Boltzmann function E = 1/(1 + exp(−zF(V − Vhalf)/RT)), where z is the voltage dependence of the gating ring movement (E) and Vhalf is the half-activation voltage of the fluorescent signal. Fit parameters are provided in Supplementary file 1.

Acknowledgments

MH and PM were supported by the intramural section of the National Institutes of Health (NINDS). TG was funded by the Spanish Ministry of Economy and Competitiveness (grants SAF2013-50085-EXP and RyC-2012-11349) and the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement 648936).
We thank Deepa Srikumar for technical assistance and Andrew Plested for useful comments on the manuscript. The g1 clone and the b3bNb1 chimera were kind gifts from Chris Lingle and Ramon Latorre, respectively. The funders had no role in study design, data collection and interpretation, or the decision to submit the work for publication.

Figures 1-6. The G-V and E-V relations were fit with the Boltzmann functions G/Gmax = 1/(1 + exp(−zF(V − Vhalf)/RT)) and E = 1/(1 + exp(−zF(V − Vhalf)/RT)), where Gmax is the maximum tail current, z is the voltage dependence of activation (G) or of the gating ring movement (E), and Vhalf is the half-activation voltage of the ionic current or of the fluorescent signal. T is the absolute temperature (295 K), F is Faraday's constant and R the universal gas constant.

Data availability. All data generated and analysed during this study are included in the manuscript.
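As a worked illustration of the fitting procedures summarized above, the following sketch computes E from emission spectra ratios and fits a Boltzmann function to a conductance-voltage relation. The data are synthetic placeholders; only the functional forms and the temperature follow the description in the methods.

```python
# Minimal sketch of the analysis described above: compute E from emission
# spectra ratios and fit a Boltzmann function to a G-V (or E-V) relation.
# The data arrays are synthetic placeholders, not measurements from the paper.
import numpy as np
from scipy.optimize import curve_fit

F = 96485.0   # C/mol, Faraday's constant
R = 8.314     # J/(mol K), universal gas constant
T = 295.0     # K, as stated in the methods

def ratio_to_E(ratioA, ratioA0, ratioA1):
    """E = (RatioA - RatioA0) / (RatioA1 - RatioA0), per Zheng & Zagotta (2003)."""
    return (ratioA - ratioA0) / (ratioA1 - ratioA0)

def boltzmann(V_mV, z, Vhalf_mV):
    """Normalized Boltzmann: 1 / (1 + exp(-zF(V - Vhalf)/RT)), V in mV."""
    V = V_mV * 1e-3
    Vh = Vhalf_mV * 1e-3
    return 1.0 / (1.0 + np.exp(-z * F * (V - Vh) / (R * T)))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    V = np.arange(-80, 241, 20, dtype=float)                       # mV
    g_norm = boltzmann(V, 1.2, 90.0) + rng.normal(0, 0.02, V.size) # synthetic G/Gmax
    (z_fit, vh_fit), _ = curve_fit(boltzmann, V, g_norm, p0=[1.0, 100.0])
    print(f"z = {z_fit:.2f}, Vhalf = {vh_fit:.1f} mV")
```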
8,699.6
2018-12-11T00:00:00.000
[ "Physics", "Biology" ]
Protein Biomarkers in Glaucoma: A Review Glaucoma is a multifactorial disease. Early diagnosis of this disease can support treatment and reduce the effects of pathophysiological processes. A significant problem in the diagnosis of glaucoma is limited access to the tested material. Therefore, intensive research is underway to develop biomarkers for fast, noninvasive, and reliable testing. Biomarkers indicated in the formation of glaucoma include chemical compounds from different chemical groups, such as proteins, sugars, and lipids. This review summarizes our knowledge about protein and/or their protein-like derived biomarkers used for glaucoma diagnosis since 2000. The described possibilities resulting from a biomarker search may contribute to identifying a group of compounds strongly correlated with glaucoma development. Such a find would be of great importance in the diagnosis and treatment of this disorder, as current screening techniques have low sensitivity and are unable to diagnose early primary open-angle glaucoma. Introduction Glaucoma refers to a group of optic neuropathies with characteristic morphological changes in the retinal nerve fiber layer and the optic nerve head (ONH). These changes are associated with slow and progressive retinal ganglion cell (RGC) death, characteristic changes in neuroretinal rim tissue in the ONH, and visual field loss [1,2]. Primary openangle glaucoma (POAG) is the most prevalent form of glaucoma in the Western world [3][4][5]. One of the most important problems facing the field of ophthalmology is determining how to diagnose glaucoma early. So far, the threat of blindness is prevented by timely treatment through the lowering of intraocular pressure (IOP). The diagnosis of glaucoma requires a detailed examination of the optic disc structure and visual field; combinations of patient history and objective methods for the evaluation of the ONH, including the retinal nerve fiber layer (RNFL), visual fields, tonometry, and corneal thickness; and assessing the structure and function of the eye. Potential screening tests classify subjects as healthy, as glaucoma suspects, or as having glaucomatous pathology of an insufficient predictive power [17,18]. A significant problem in diagnosing the disease is limited access to the tested material. The process of neurodegeneration occurs in the optic nerve and RGCs; examination of these tissues in patients is not feasible. Less invasive and more accessible clinical testing for glaucoma could be improved if specific biomarkers were detected in body fluids such as the tear film, urine, and whole blood or serum [19,20]. According to the National Institutes of Health's Biomarkers Definitions Working Group, a biomarker is defined as a characteristic that is objectively measured and evaluated as an indicator of normal biological processes, pathogenic processes, or pharmacologic responses to therapeutic intervention and has valuable applications in disease detection and monitoring of health status [21]. To identify a biomarker for clinical utility, it must be confirmed as valid, reproducible, specific, and sensitive. Biomarkers are needed for early diagnosis of this blinding disease, and prediction of its prognosis could promote precise treatment [22]. One of the important challenges of serum biomarker detection is related to the much lower abundance of most proteomic biomarkers than some disease-irrelevant serum proteins [23]. 
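To make the screening-performance vocabulary used above concrete, the short sketch below computes sensitivity, specificity and predictive values for a hypothetical biomarker threshold. The counts are invented for illustration and do not come from any cited study.

```python
# Minimal sketch (hypothetical counts): screening metrics for a candidate
# biomarker dichotomized at some threshold against a clinical reference standard.

def screening_metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
    sens = tp / (tp + fn)   # sensitivity: positives detected among diseased
    spec = tn / (tn + fp)   # specificity: negatives among healthy
    ppv = tp / (tp + fp)    # positive predictive value
    npv = tn / (tn + fn)    # negative predictive value
    return {"sensitivity": sens, "specificity": spec, "PPV": ppv, "NPV": npv}

# Hypothetical 2x2 table for a candidate glaucoma biomarker
print(screening_metrics(tp=42, fp=15, fn=18, tn=125))
```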
Among the biomarkers indicated in glaucoma formation are chemical compounds belonging to different chemical groups such as proteins, sugars, and lipids. In this review, we systematize the knowledge gained since 2000 about biomarkers characteristic for glaucoma, and we provide an overview of biomarkers in biochemical groups of protein and/or protein-like derivation. Herein, we also include suggestions for appropriate research methods that will allow their detection in the biological material of patients.

Proteins

A protein comprises one or more long chains of amino acid residues. Because of their various biological functions, proteins can be categorized as enzyme catalysts, structural proteins, hormones, transfer proteins, antibodies, storage proteins, and protective proteins [24]. Selected proteins shown to be correlated with glaucoma development are presented in Table 1.

Table 1. Proteins evaluated as potential biomarkers in glaucoma (biomarker; glaucoma type; study groups/material; main findings; reference).
- NGF; glaucoma vs. healthy controls; serum: Serum levels of NGF in glaucoma patients were significantly lower than those measured in healthy control subjects (4.1 ± 1 pg/mL vs. 5.5 ± 1.2 pg/mL, p = 0.01). Subgroup analysis showed that serum levels of NGF were significantly lower in early (3.5 ± 0.9 pg/mL, p = 0.0008) and moderate glaucoma (3.8 ± 0.7 pg/mL, p < 0.0001) but not in advanced glaucoma (5.0 ± 0.7 pg/mL, p = 0.32) compared to healthy control subjects. [29]
- BDNF; NTG; n = 20 NTG, n = 20 control; tears: The mean level of BDNF detected in the tears of the normal subjects was 77.09 ± 4.84 ng/mL, and the BDNF level in the tears of case group subjects was 24.33 ± 1.48 ng/mL (p < 0.001). [30]
- BDNF; POAG; n = 25 POAG, n = 25 control; serum: Mean BDNF level in the serum was 27.16 ± 5.53 ng/mL in the control subjects and 18.42 ± 4.05 ng/mL in the subjects with early-stage glaucoma; there were no significant differences in serum BDNF levels according to the subjects' age, gender, duration of glaucoma, mean IOP, or blood pressure (p > 0.05). [31]
- N-terminal fragment of the proatrial natriuretic peptide (NT-proANP, 1-98); glaucoma and cataract; n = 58 POAG, n = 32 cataract (control); plasma and aqueous humor: The plasma NT-proANP concentration was significantly increased in patients with POAG compared to those in the control group (7.00 vs. 4.65 nmol/L, p = 0.0054). The NT-proANP concentration in the aqueous humor was significantly higher in the POAG patients (0.47 vs. 0.09 nmol/L, p = 0.0112). There was no correlation between the NT-proANP values in the aqueous humor and the plasma of the POAG patients or between the NT-proANP values in the aqueous humor and IOP. [32]
- Asymmetric dimethylarginine (ADMA), a dimethylated isomeric derivative of the amino acid l-arginine; glaucoma; n = 211 glaucoma, n = 295 control; serum: A significant increase in serum ADMA concentration was detected in advanced glaucoma cases compared with control cases (p ≤ 0.0001). [33]
- Panel of 17 most differentially altered proteins; POAG, PEXG; n = 73 (POAG), n = 59 (PEXG), n = 70 healthy controls; serum: The seventeen most differentially altered proteins identified in this analysis were confirmed to be overexpressed in the intact serum of newly recruited glaucoma patients. [36]
- Main matricellular proteins (SPARC, thrombospondin-2, and osteopontin); acute primary angle closure (APAC), non-glaucomatous cataract; n = 29 APAC, n = 12 previous APAC, n = 22 cataract; aqueous humor: The levels of SPARC, thrombospondin-2, and osteopontin were significantly elevated in the APAC group as compared to the cataract group (p < 0.001, p < 0.001, and p = 0.009, respectively). [37]

Farkas et al.
[38] showed that elevated ferritin, an iron-regulating protein, is present in glaucoma. Serum ferritin is related to oxidative stress and inflammation. Lin et al. [35] revealed that an increased serum ferritin level was associated with a greater probability of glaucoma in a representative sample of South Koreans, and an increased serum ferritin level was associated with a high risk of glaucoma in men but not in women [34]. Nerve growth factor (NGF) and brain-derived neurotrophic factor (BDNF), members of the neurotrophin family, have been shown to control a number of aspects of survival, development, and function of neurons in both the central and peripheral nervous systems [39][40][41]. Several studies indicate that NGF and BDNF are involved in RGC survival [29]. Ghaffariyeh et al. [30,31] suggested that BDNF found in tears might be a useful biochemical marker for early detection of normal-tension glaucoma (NTG). They proposed that identification of this biomarker might be a reliable, time-efficient, and cost-effective method for diagnosing, screening, and assessing the progression of POAG. Oddone et al. [29] showed that BDNF and NGF serum levels were reduced in the early and moderate glaucoma stages, suggesting the possibility that both factors could be further investigated as potential circulating biomarkers for the early detection of glaucoma. Wang et al. [37] reported significantly elevated levels of secreted proteins such as cysteine (SPARC), thrombospondin-2, and osteopontin in patients with acute primary angle closure (APAC) compared to the cataract group (p < 0.001, p < 0.001, and p = 0.009, respectively). All four matricellular proteins were found to have a positive correlation with IOP in the current APAC group, but no correlation was found in the previous APAC or cataract groups. Alterations in sera proteins between patients with POAG, pseudoexfoliation glaucoma (PEXG), and healthy controls were presented by González-Iglesias et al. [36]. The authors identified the 17 most differentially altered proteins overexpressed in the intact serum of newly recruited glaucoma patients. They then proposed a panel of candidates for glaucoma biomarkers and suggested that those candidates are part of a network linked to regulating immune-and inflammatory-related processes. Peptides and Amino Acids Amino acids are the components that serve as substrates for protein synthesis (protein amino acids); they may be referred to as nonprotein amino acids [42]. The studies described here indicate the utility of selected amino acids as biomarkers for glaucoma. Homocysteine (Hcy) is an amino acid that serves as an intermediate in methionine metabolism to cysteine (Cys) [43]. Researchers proposed a correlation with glaucoma based on studies about the increased risk of cardiovascular diseases [44]. Lin et al. [25] suggested that increased levels of Hcy and Cys may be associated with glaucoma, especially in POAG. However, it may not be useful as a reliable biomarker in glaucoma. In a study from Lopez-Riquelme et al. [28], Hcy levels were significantly higher (p = 0.002) in the POAG group compared to the NTG and control groups. Lee et al. [26] also showed that the Hcy level is associated with the presence of glaucomatous RNFL defects. Conversely, Leibovitzh and Cohen [27] presented a retrospective cross-sectional analysis of the relationship between Hcy and IOP and concluded that Hcy may not be useful as a predictive parameter to recognize subjects prone to the development of elevated IOP. 
No clinical correlation between the Hcy level and IOP was found. Two dimethylated isomeric derivatives of the amino acid l-arginine-asymmetric dimethylarginine (ADMA) and symmetric dimethylarginine (SDMA)-were shown to be correlated with advanced glaucoma. The derivative ADMA is an endogenous inhibitor of nitric oxide synthase (NOS), while SDMA is a competitive inhibitor of the cellular uptake of l-arginine, the substrate for NOS. According to the nitric-oxide pathway in glaucoma pathogenesis, these metabolites are associated with endothelial dysfunction [33]. Endothelin-1, a peptide hormone that plays multiple complex roles in the cardiovascular, neural, pulmonary, reproductive, and renal systems [45], was shown to be elevated in the POAG group compared to NTG and control group serum samples [28]. The N-terminal fragment of the proatrial natriuretic peptide (NT-proANP, 1-98) is connected with cardiovascular effects, including the regulation of vascular tone, renal sodium handling, and myocardial hypertrophy. It is synthesized within the heart in response to myocardial stretch, and the development of glaucoma was identified to be associated with elevated levels in the plasma and the aqueous humor of patients with POAG [32]. Peptides and amino acids evaluated as potential biomarkers in glaucoma are presented in Table 1. Autoantibodies and Antibodies Autoantibodies may cause pathology by many different mechanisms and induce disease through a multitude of pathophysiological pathways. These include mimicking receptor stimulation, blocking neural transmission, induction of altered signaling, triggering uncontrolled microthrombosis, cell lysis, neutrophil activation, and induction of inflammation. Within diseases, multiple mechanisms may contribute to clinical manifestation [46]. According to Grus et al. [47], complex antibody profiles are very stable in glaucoma patients. Moreover, it has been suggested that autoantibody profiles (in body fluids such as serum, aqueous humor, or tears) may become powerful and highly specific tools to designate as markers in the diagnosis of glaucoma, characterized by early detection before the appearance of any clinical signs [48]. Gramlich et al. [48] presented a wide range of autoantibodies-such as anti-HSP70, antiphosphatidylserine, g-enolase, glycosaminoglycans, neuron-specific enolase, glutathione-S-transferase, a-fodrin, vimentin, myelin basic protein (MBP), glial fibrillary acidic protein (GFAP), retinaldehyde binding protein, and retinal S-antigen-and their role in glaucoma. Using an experimental autoimmune glaucoma (EAG) animal model, Hohenstein-Blaul et al. [18] demonstrated an IOP-independent loss of RGCs, accompanied by antibody depositions and increased levels of microglia. The correlation between neuronal damage and changes in autoantibody reactivity suggests that autoantibody profiling could be a useful glaucoma biomarker. The authors concluded that the absence of some autoantibodies in glaucoma patients reflects a loss of the protective potential of natural autoimmunity and may thus encourage neurodegenerative processes. Furthermore, a number of serum proteins identified by chromatography analysis of human glaucoma may represent diseased tissue-related antigens and serve as candidate biomarkers of glaucoma. However, it is unclear whether the IgG-bound serum proteins identified in this study reflect diseasecausing antigens [23]. Joachim et al. 
[50] compared the entire IgG autoantibody patterns against different ocular antigens (retina, optic nerve, and optic nerve head) in the sera of glaucoma patients and healthy subjects. All groups showed different and complex antibody patterns against the three ocular tissues. Joachim et al. [51] showed the significant differences between the IgG antibody profiles against retinal antigens of the glaucoma groups (PEX and POAG) and controls (up-and downregulations), and the identified biomarkers included heat shock protein 27, α-enolase, actin, and glyceraldehyde-3-phosphate dehydrogenase (GAPDH). Additionally, very complex IgG antibody patterns against retinal antigens were found in all analyzed aqueous humor samples of the NTG and control groups (p < 0.001) [51,52]. The findings of Schmelter et al. [53] indicate that glaucoma is accompanied by systemic effects on antibody production and B cell maturation, possibly offering new prospects for future diagnostic or therapy purposes. In total, 75 peptides of the variable IgG domain showed significant glaucoma-related changes. Six peptides were highly abundant in POAG sera, whereas 69 peptides were minimal in comparison to the control group. Table 2 presents autoantibodies and antibodies evaluated as potential biomarkers in glaucoma. [49] IgG antibodies against retinal antigens NTG n = 21 NTG, n = 21 controls/aqueous humor α B-crystallin, vimentin, and heat-shock protein were analyzed as the antigen bands. [52] Cytokines and Growth Factors Cytokines and growth factors play an essential role in the functioning of the human body, modulating (among others) the immune and nervous systems. They are involved in intercellular communication and transmit signals to the appropriate cells by acting on receptors placed in their cell membranes. High levels of proinflammatory cytokines have been shown to have a significant impact on the development of glaucoma. Li et al. [54] demonstrated that the factor contributing to glaucoma development was the menopausal decrease in hormones in women, with a simultaneous high concentration of proinflammatory cytokine as interleukin-8 (IL-8) in the serum. This finding emphasizes the role of the immune system in the development of glaucoma [6]. Gupta et al. [55] analyzed tear films collected from patients without and with newly diagnosed POAG to assess the concentration of 10 proinflammatory cytokines-IFNγ, IL-10, IL-12p70, IL-13, IL-1β, IL-2, IL-4, IL-6, IL-8, and TNFα. Mean concentrations of tear film cytokines were shown to be lower in the glaucoma group for most of the tested cytokines, among which IL-12p70 may be the most important for diagnosis. The authors concluded that despite the small amount of protein available in the samples, the assessment of tear-film cytokines can be used as an indicator of early POAG. The remaining cytokines are also considered as factors that may enable the evaluation of glaucoma development. Tumor necrosis factor alpha (TNF-α) has been proven to be a proinflammatory cytokine that can play a role in glaucomatous neurodegeneration. Paschalis [56] showed that increasing concentrations of TNF-α and its receptors (TNFR1 and TNFR2), observed after ocular injury, can contribute to progressive damage to the retina and subsequent glaucoma. This relationship was confirmed even with well-controlled IOP in patients with the Boston Keratoprosthesis (KPro). 
Similar conclusions were described by Kondkar [57], who observed that an elevated level of tumor necrosis factor alpha (TNF-α) can induce RGC apoptosis and plays a key role in glaucoma neurodegeneration. This relationship was also confirmed in a different group of patients when researchers moderated a positive and significant correlation between the TNF-α level and cup/disc ratio as an important clinical index for pseudoexfoliation glaucoma (PEG) [58]. Therefore, these authors emphasized the possibility of using TNF-α as a biomarker in the early diagnosis of glaucoma and assessment of the severity of the disease. Growth factors activate the repair mechanisms in human cells by stimulating cells to divide, differentiate, and grow. Serum levels of nerve growth factor (NGF) and BDNF in patients affected by POAG with a wide spectrum of disease severity have proved to be significantly reduced compared to healthy controls [29]. The BDNF influences survival and growth of neurons, serves as a modulator of neurotransmitters, and participates in neuronal plasticity. Its decreased concentration and neurodegenerative effects are observed not only among patients with glaucoma but also those with Parkinson's and Alzheimer's diseases [59]. The important actions of insulin-like growth factor-1 (IGF-1) include neurogenesis, angiogenesis, protection against cells in the brain, anti-inflammatory effects, and anti-apoptotic effects. Its serum concentration decreases with increasing age in the elderly population [60,61]. The role of IGF-1 in neurodegenerative diseases is being studied intensively as a factor correlated with defective brain insulin signaling [62]. Dogan et al. [63] showed that IGF-1 levels in serum did not differ in the presence of PEX syndrome, with or without glaucoma. It is worth emphasizing that PEX is the most common identifiable cause of glaucoma, and aging is the major risk factor. This allows for not just possibilities in the search for glaucoma biomarkers but also those characteristics for neurodegenerative symptoms observed in elderly patients [64,65]. The role of transforming growth factor-β (TGF-β) is to control proliferation and differentiation in most cell types and to act as an anti-inflammatory agent. The frizzled secretion protein (SFRP) family consists of five human-secreted glycoproteins (SFRP1, SFRP2, SFRP3, SFRP4, SFRP5) that play roles in cell signaling. In addition, SFRP1 and SFRP5 may be involved in determining the polarity of photoreceptor cells in the retina. Guo et al. [66] evaluated bioactive transforming growth factor-β2 (TGFβ2) and secreted frizzled-related protein-1 (SFRP1) levels in the aqueous humor of different types of glaucoma: POAG, chronic angle-closure glaucoma (CACG), primary angle-closure suspects (PACS), and acute angle-closure glaucoma (AACG). The study was performed by means of an ELISA test, and patients with cataracts were considered as a control group. The concentration of this growth factor was significantly higher in the aqueous humor collected from POAG patients compared to control patients. However, this correlation was not identified in CACG, PACS, or AACG patients. Authors have also observed differences in the level of TGFβ2 depending on high and normal IOP in patients from the AACG group. There were no significant differences in the levels of SFRP1 analyzed in aqueous humor collected from tested groups. However, patients with primary POAG with high IOP had lower levels of SFRP1 than patients with normal IOP [66]. 
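As an illustration of how the group comparisons reviewed in this section (for example, the tear-film cytokine panel of Gupta et al.) are typically analyzed, the sketch below runs per-analyte Welch t-tests with a Benjamini-Hochberg correction. The analyte names are taken from the text, but all concentrations are simulated placeholders rather than reported values.

```python
# Minimal sketch (simulated data): compare a small cytokine panel between
# control and glaucoma groups with Welch t-tests and Benjamini-Hochberg
# adjustment for multiple comparisons.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
analytes = ["IL-6", "IL-8", "IL-12p70", "TNF-alpha"]
control = {a: rng.normal(10, 3, 20) for a in analytes}   # hypothetical pg/mL
glaucoma = {a: rng.normal(8, 3, 20) for a in analytes}   # hypothetical pg/mL

pvals = np.array([stats.ttest_ind(control[a], glaucoma[a], equal_var=False).pvalue
                  for a in analytes])

# Benjamini-Hochberg adjusted p-values
order = np.argsort(pvals)
ranked = pvals[order] * len(pvals) / (np.arange(len(pvals)) + 1)
adjusted = np.clip(np.minimum.accumulate(ranked[::-1])[::-1], 0, 1)
for name, p, q in zip(np.array(analytes)[order], pvals[order], adjusted):
    print(f"{name}: p = {p:.3f}, BH-adjusted = {q:.3f}")
```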
The studies assessing the role of cytokines and growth factors in glaucoma are presented in Table 3. Table 3. Cytokines and growth factors evaluated as potential biomarkers in glaucoma. The mean TNF-α level was significantly more increased in POAG cases than in control cases (p = 0.003). Logistic regression analysis showed that the risk of POAG was most significantly affected by TNF-α level (not by age and sex). [55] Serum levels of BDNF in glaucoma patients were significantly lower compared to healthy control subjects (p = 0.03). Additionally, serum levels of BDNF were significantly lower in early (p = 0.019) and moderate (p = 0.04) glaucoma but not in advanced glaucoma (p = 0.06) comparied to control subjects. Serum levels of NGF in glaucoma patients were significantly lower compared to control subjects (p = 0.01). Additonally, serum levels of NGF were significantly lower in early (p = 0.0008) and moderate glaucoma (p < 0.0001) but not in advanced glaucoma (p = 0.32) compared to healthy control subjects. [29] Insulin-like growth factor-1 (IGF-1) Pseudoexfoliation (PEX) with or without glaucoma n = 110 participants (age 65 years or older) who were divided into groups: 1. patients with PEX syndrome (n = 35), 2. patients with PEX glaucoma (n = 34), 3. participants without PEX or glaucoma (n = 41)/serum Statistically significant differences between the groups in terms of IGF-1 concentration were not observed (p = 0.276). IGF-1 levels in circulation did not differ in the presence of PEX syndrome with or without glaucoma. [63] Transforming growth factor-β2 (TGFβ2), secreted frizzled-related protein-1 (SFRP1) Different types of glaucoma n = 105 patients divided into five groups: cataract (control), POAG, chronic angle-closure glaucoma (CACG), primary angle-closure suspects (PACS), and acute angle-closure glaucoma (AACG)/aqueous humor The concentration of TGFβ2 in POAG patients (but not CACG, PACS, or AACG patients) was significantly higher compared to control subjectsDifferences in TGFβ2 concentration within AACG patients were observed after consideration of IOP (high and normal). The concentration of SFRP1 was not significantly different among the groups, but a statistically significant negative correlation between SFRP1 and IOP existed in the POAG group. [66] Hormones and Enzymes Retinal ganglion cells are known to express estrogen receptors. Prior studies have suggested an association between postmenopausal hormone (PMH) use and decreased IOP, suggesting that sex hormones may play a role in the development of glaucoma and decrease the risk for POAG [67]. Li et al. [68] showed that a decreased level of 17-βestradiol (E2) in the serum of postmenopausal women is correlated with an increased risk of glaucoma progression. This is consistent with the results of a previous study [67] that confirmed the use of PMH preparations containing estrogens can help reduce the risk of POAG. The proposed mechanism still needs to be confirmed, but it is known that RGCs express estrogen receptors, suggesting that PMH use may reduce the risk of glaucoma development. Canizales et al. [69] analyzed the possibilities of using factors related to oxidative stress as biomarkers for the early diagnosis of glaucoma. The mRNA expression level of several biomarkers of oxidative stress in the aqueous humor was assessed in patients with POAG compared to the control group. 
Authors have proved that the mRNA expression level of superoxide dismutase 1 (SOD1) is significantly reduced in patients with POAG than in control subjects. Li et al. [70] also studied the relationship between oxidative stress biomarkers, including serum superoxide dismutase (SOD), total antioxidant state (TAS), hydrogen peroxide (H 2 O 2 ), malondialdehyde (MDA), glutathione peroxidase, glutathione reductase, and visual field progression in patients with PACG. Serum SOD and TAS levels in the PACG group were significantly lower with simultaneously higher levels of MDA and H 2 O 2 compared to the control group. These results may indicate that oxidative stress is one of the key factors involved in the formation and development of PACG. The important studies assessing the role of hormones and enzymes in glaucoma are presented in Table 4. Table 4. Hormones and enzymes evaluated as potential biomarkers in glaucoma. Uric Acid: An Important Biomarker Combined with Protein Metabolism Uric acid (UA) is the final product of nitrogen metabolism in humans. Based on its protective effect against oxidative damage [71] in the central nervous system, UA was proposed as a biomarker for POAG. The relationship between serum UA concentration and glaucoma severity was explored. The level of serum UA in the POAG group was approximately 13% lower (p < 0.001) than that of the control group. The UA/creatinine (Cr) ratio was approximately 15% lower (p < 0.001) in patients with POAG compared with the control group [72]. In addition, the levels of UA were significantly lower in PACG patients compared with control subjects, which makes it an important candidate in reaction to oxidative stress in glaucoma pathogenesis [73]. The studies assessing the role of uric acid in glaucoma are presented in Table 5. Table 5. Uric acid as a potential biomarker in glaucoma. Detection Methods Proteomics for the discovery of biomarkers might reveal many important issues, including the inherent differences between biological fluids (and how these differences affect current analytical approaches) and experimental design to maximize efficiency [74]. The method allows us to unravel the biological complexity encoded by the genome at the protein level and is built on technologies that analyze large numbers of proteins in a single experiment [75]. Alterations in sera proteins between patients may be identified through a proven approach utilizing equalization of high-abundance serum proteins with ProteoMiner™ (Bio-Rad, California, USA), two-dimensional fluorescent difference gel electrophoresis (2D-DIGE), matrix-assisted laser desorption/ionization time-of-flight/time-of-flight (MALDI-TOF/TOF), and nanoscale liquid chromatography coupled to tandem mass spectrometry (nanoLC-MS-MS) analysis [36]. A large number of plasma proteins were also observed in tear fluid. The proteins found in tears play an important role in maintaining the ocular surface; changes in tear protein components may reflect changes in the health of the ocular surface. Using reverse-phase high-pressure liquid chromatography (RP-HPLC) and nanoscale liquid chromatography coupled to tandem electrospray ionization mass spectrometry (nanoLC-nano-ESI-MS/MS), 60 tear proteins were identified with high confidence, including well-known abundant tear proteins and tear-specific proteins such as lacritin and proline-rich proteins [76,77]. The search for markers concerns not only peptides and proteins but other groups of chemical compounds as well. Pan et al. 
[78] conducted a metabolomic analysis of aqueous humor samples using nontargeted gas chromatography combined with a time-of-flight mass spectrometer in patients with POAG undergoing surgery and their results of the patients undergoing cataract surgery. The mean age of the study participants was more than 70 years. The authors identified differences in the metabolomic profiles of the samples obtained from both groups of patients. Reduction of biotin, glucose-1-phosphate, methylmalonic acid, N-cyclohexylformamide 1, sorbitol, and spermidine was observed in POAG patients compared to control subjects. Conversely, it was found that mercaptoethanesulfonic acid 2, D-erythronolactone 2, D-thalose 1, dehydroascorbic acid 2, galactose 1, mannose 1, pelargonic acid, and ribitol were increased in participants with POAG compared to patients with cataracts. The obtained results may contribute to the development of a new therapeutic approach [78]. An example of a technique supporting protein identification is 2D electrophoresis, which allows for the possibility of analyzing proteome profiles to search for protein changes in the levels of pre-existing proteins, induction of new products, or coregulated polypeptides. However, 2D electrophoresis' limitations include the heterogeneity of biopsy material, the lack of procedures for quantifying protein changes, and the need for better image-analysis systems for supporting gel comparisons and databasing [79]. Methods of antibody-profile detection may be validated with specific antigen microarrays [80], or immunological tests based on antibody responses that could be used for diagnosis and screening purposes [18], or serological proteome analysis (SERPA) for initial autoantibody profiling [49]. The standard techniques also include serological methods such as enzyme-linked immunosorbent assay (ELISA), which offers specific detection of a wide variety of target analytes in different kinds of samples. However, ELISA assays have numerous limitations, such as laborious procedures, the need for a relatively large sample volume, and an insufficient level of sensitivity [81]. Multiple approaches of proteomic technologies are required to cover most of the metabolites. Consequently, metabolome profiling is hampered mainly by its diversity, variation of metabolite concentration by several orders of magnitude, and biological data interpretation [82]. Conclusions Currently, there is tremendous interest in research into biological markers in both life sciences and clinical sciences. A biomarker can be used as an unbiased differential indicator of disease onset, as an aid in classifying a disease state, or as an assessment of the severity and progression of the disease. Diagnostic and prognostic biomarkers may be critically useful for the timely treatment of many diseases. Therefore, the search for specific biomarkers is still a challenge and a goal of many clinical and research centers. Intensive research is currently underway to develop biomarkers for the diagnosis of glaucoma. In relation to the studies presented in this manuscript, most biomarkers were analyzed from blood (serum/plasma) samples. Interestingly, the next biological material used in the analyzed studies was the aqueous humor, although its collection is both difficult and invasive. There were only a few biomarker studies using tears or urine-materials relatively easy to collect. No research was found that tracked protein-biomarkers in saliva (Figure 2). 
Limited screening methods and the increasing number of glaucoma patients underscore the need for new biomarkers of POAG. Available clinical analysis tools for glaucoma have limitations, and most glaucoma patients show minimal symptoms at the time of diagnosis. Although there is no gold standard for the detection of progression, standard automated perimetry and, more recently, optical coherence tomography are available as established tests for this purpose. Nevertheless, finding molecular biomarkers and diagnostic factors is imperative in order to predict the occurrence of the disease and to develop new treatments. The approaches to biomarker discovery described here may contribute to the identification of a group of compounds strongly correlated with the development of glaucoma. This is of great importance in diagnosing and treating this disorder, as current screening techniques have low sensitivity and are unable to diagnose early POAG.
6,673.8
2021-11-01T00:00:00.000
[ "Medicine", "Biology" ]
Earth Diseases, Exploding Stars & Sea Ice Footprints - Part One The Supernova and Nova Impact Theory, SNIT, has proposed that pandemic diseases occur on Earth due to exploding star debris streams impacting our planet. A number of cases involving this phenomenon have been mentioned in papers by the author on the internet. New information concerning the SNIT has become available as new papers were published. The new information is used in these results involving the average velocity of debris streams between exploding star remnants and Earth. The locations of sea ice melts at both poles versus month and nova or supernova maximum hotspots have been analyzed for nova WZ Sagittae. Proof of changing average Alaskan temperatures attributed to WZ Sagittae verifies the SNIT model. Introduction In the beginning of the SNIT, it was assumed that the average velocity of an exploding star's debris between the remnant and our planet was the same for all cases being studied. The mathematical relationship used to predict the impact time for the debris stream of an exploding star is: Impact year = Year the explosion was seen + Constant x Distance (light years) (1). The average velocity between supernova, SN, 1006 and Earth became known due to many temperature records being set on Earth in the year 2012. The Constant in equation (1) for SN 1006 is 0.13337. The average velocity between nova WZ Sagittae and Earth became known due to the recent impact of the 2001 major outburst of WZ Sagittae that was flagged by a Martian dust storm, an extensive ice melt at both poles, and extreme high temperatures in the Barents Sea. The Constant in equation (1) for this case of the WZ Sagittae major outburst is 0.119. The average velocity is higher when the distance between our planet and the remnant is a minimum. This is simple physics because the shorter distance of travel provides less resistance to the velocity of the debris stream. The eastern terminus for WZ Sagittae at 115 degrees west longitude places the black plague in the western USA. The CAM date of January 20 should be near the time of the western USA cases, but the author does not have this information. The western USA and Madagascar locate the latitudes of the northern and central tines of Satan's pitchfork on different dates for this case. The SNIT theory proposes that the Leonids meteor storms originate from the explosions of WZ Sagittae, and therefore outbreaks of plague should correlate with large Leonids meteor showers [6]. Case 2 exploding star WZ Sagittae The exploding star WZ Sagittae is a recurrent nova. It has super and normal outbursts that have occurred at various time intervals as shown in Figure 1 [3]. The nova WZ Sagittae is 142 light years from Earth and the constant for equation (1) is 0.119. The outbursts of WZ Sagittae impact Earth 16.9 years after they are seen; so, 16.9 years must be added to the times of the outbursts in Figure 1 to obtain the impact years. The location of the western terminus longitude of WZ Sagittae is 65 degrees east and it has a CAM date of July 20 [4]. The eastern terminus is 115 degrees west longitude and it has a CAM date of January 20 [4]. The western and eastern termini are the points of maximum debris particle densities and the CAM dates are the times they occur. Case 3 exploding star Vela Jr The distance to supernova Vela Jr is 652 light years. The Constant in equation (1) is 0.12 and the correction term is 78 years. The explosion was seen at 1250 AD, giving the impact time as 1328 AD. The fourteenth century Black Death, in which a large part of the world's population was destroyed, followed [2].
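As a worked illustration of equation (1), the short script below recomputes the impact years quoted for the cases above from the published distances and constants. It is only a sketch of the relationship as reconstructed from the text; the function and variable names are illustrative and are not taken from the original papers.

```python
def impact_year(year_seen, distance_ly, constant):
    """Equation (1) as reconstructed from the text:
    impact year = year the explosion was seen + constant * distance (light years).
    The product constant * distance is the 'correction term' in years."""
    return year_seen + constant * distance_ly

# Cases quoted in the text (values taken directly from the paper).
print(impact_year(1250, 652, 0.12))      # Vela Jr: 1250 + 78 ~= 1328 AD
print(impact_year(1054, 7175, 0.13267))  # SN 1054: 1054 + 952 ~= 2006 AD
print(impact_year(2001, 142, 0.119))     # WZ Sagittae 2001 outburst: ~2017.9
```

Read this way, a smaller constant corresponds to a shorter delay between the light and the debris for a given distance, i.e., a faster average debris velocity, which is consistent with the discussion above.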
It must be remembered that the high density longitude is a circular area that may be 30 degrees wide and not a point. Case 4 exploding star SN 1054 The distance of SN 1054 is 7,175 light years. Using equation (2) with the known distance gives a constant for equation (1) of 0.13267. The time the light was seen is 1054 AD. The correction term is 952 years. Using equation (1) gives the impact time of 2006 AD. The plague cases maximize, as shown in Figure 3 [8], in the year 2006. Since SN 1054 is so far away, the correction term is large and the accuracy is extraordinary. When studying the calving of glaciers in Antarctica, the deflection area, DA, of SN 1054 was found to match the longitudinal degree range for the moose die-off in the northern USA. It is safe to say the plague cases match the same DA zone, with some variation in longitude and latitude degrees due to the times of death being in different years in the USA. When discussing the region of maximum debris stream particle density, WT, DA, and ET are west, central, and east longitudinal locations, and the north, central, and south tines of Satan's pitchfork are north-south locations that occur at all focal points. The DA area of SN 1054 caused the extensive USA flu outbreak in 2017 and is expected to repeat in 2020. Other years of large flu outbreaks are predicted for the USA [9]. The Black Death appeared in India in 1334 AD. In places in Europe, the Black Death killed 50 percent of the population. The western and eastern termini for the debris stream of supernova Vela Jr in 1328 AD are 114 west and 66 east longitude, respectively (Table 1, pg 139). The Black Death should be most severe at these locations in the northern hemisphere because solar gravity focusing produces maximum particle density at these points. The high particle density point moves from west to east and back to west between the longitudes of the termini. The western hemisphere near St. Louis, USA, and India in the eastern hemisphere fit these longitude locations. The disappearing culture of Cahokia, an ancient city near the location of St. Louis in North America, could be explained by the Black Plague of the 14th century [7]. The location matches the longitude of the western terminus of the Vela Jr supernova. The high debris particle point also moves north to south in 11-year cycles due to the varying magnetic field of the Sun. Table 2 [6] lists European, India, and China plagues attributed to supernova Vela Jr. Notice the latitude range in Table 1 and Table 2 is much smaller than the longitude range. The two impacts in Table 2 in the year 1334 are frontal and zonal. The frontal impact is in China when the debris stream is making contact with the planet. Frontal impacts are not restricted to the 180 degrees between the western and eastern termini. The zonal impact is in India when the planet is moving into the existing zone of the eastern terminus of the debris stream. Figure 4 shows the western terminus, deflection area, eastern terminus, and longitudinal locations with CAM dates for the four active energy sources, the exploding stars. Case 5 WZ Sagittae mumps outbreak in Alaska The northern tine of the eastern terminus of WZ Sagittae produces a mumps outbreak in Alaska starting in 2017 and continuing into 2018. These simple rules will be used to show polar sea ice melts match the predicted locations of hotspot storms created by the incoming debris streams from the four exploding stars.
The data in Figure 4 are color coded, and the red line tagged as 115W, Jan 20, and ET is the eastern terminus of nova WZ Sagittae. Rotating this line 30 degrees west gives a west longitude of 145 degrees. The west longitude for Anchorage, AK is 150 degrees. Figure 5 shows 391 cases of mumps that occurred in Alaska, of which 351 cases occurred in Anchorage, AK [10]. The red arrow shows the CAM date of January 20 when the maximum disease particle density will be near Anchorage. The black line date indicates the approach of the focal point of maximum density from the east, and the blue line indicates the direction of the dates when the focal point of maximum density returns to the west. The occurrence of mumps cases maximizes when the eastern terminus of WZ Sagittae is over Anchorage, AK near the CAM date of January 20, 2018. There are more examples of this debris-transmitted disease phenomenon from exploding stars in the referenced papers of the author. One exceptional example is the star debris streams that are causing global warming. The data are plotted on the Arctic minimum sea ice background for 2018. Since each line represents a maximum incoming particle density, each line also represents a location of incoming maximum kinetic energy, which will be converted to heat that will melt sea ice. The equations for L, longitude, and RA, right ascension, on page 136 of reference [4] are used to generate all the data in Figure 4. The only independent variable required is the right ascensions for the four exploding stars. The calculation assumes the earth is simply a sphere orbiting the sun and does not consider the earth's magnetic field. The magnetic field density of the earth increases as either pole is approached. The higher values of magnetic field density closer to the poles cause the incoming charged particle debris stream to be deflected, and the longitudes vary from the calculated values shown in Figure 4 and Table 3. Any heat transfer expert can see the warm lines need to be rotated 30 degrees clockwise to match the sea ice melt pattern shown in Figure 4. Using the concept of a symmetric magnetic field and knowing the magnetic field is in the opposite direction at the South Pole provides a rotation of 30 degrees in the counterclockwise direction at the South Pole. Simply stated, incoming charged debris streams will rotate 30 degrees west at the North Pole. The hotspots in Table 3 are at continuously varying longitudinal locations as they cycle between western and eastern termini locations. The January 20 eastern terminus location used in the mumps disease case is in the fourth line of the table. Each exploding star has two different dates for crossing DA locations, but only one date for an eastern or western terminus, which are lines that are not crossed. V603 Aquilae sea ice footprints The four maximum incoming stream density times will be studied next for nova V603 Aquilae. Thousands of oblong-nosed antelope died in Asia in May of 2014. The cause of death was attributed to internal ruptured organs. The deflection area zone's longitude of SN 1006 was correlated with the kill zone of the antelopes and a large sea ice melt region in Antarctica. Table 3 shows the calculated CAM dates, longitudes, ET, DA, and WT for the active debris streams of the four exploding stars bombarding our planet.
The color on the name in the first line of Table 3 matches the color for the debris stream. By observing the sea ice slides for December 2017, January, and February 2018, shown as Figure 6A and Figure 6C, it can be seen that the freezing and heating energy fluxes are nearly equal because the edge of the sea ice barely moves for the maximum heating date. Supernova 1006 & 1054 and Nova WZ Sagittae & V603 Aquilae Focal Point Locations The January 1 location for the hotspot of V603 Aquilae at the South Pole is shown in Figure 7 as the green line. The green line is tracing the 102W - 20 = 82W longitude line due to the subtraction of 30 + 20 degrees from the calculated value of 132W from Table 3. V603 January 1 sea ice melt From Table 3, nova V603 Aquilae should show a specific sea ice melt on January 1 at 132W, the eastern terminus. Looking at Figure 1, it should be noted the hotspot is traveling clockwise after January 1, or to the west, from the corrected position 162W that results from the addition of 30 degrees to the calculated value 132W. The green line in Figure 6 traces the 162 west longitude line and, one would assume, the lost sea ice due to the clockwise motion of the hotspot from the eastern terminus in the month of January 2018 is shown in Figure 7. Figure 7A shows a small melt between Jan 1 and Feb 1 that may be due to the V603 Aquilae or WZ Sagittae hotspot from the DA location of November 2. Notice the match of hotspots on both sides of the pole with the green line in Figure 8A. Sea ice melting should occur at both locations on different sides of the pole [12]. This indicates that the incoming debris stream is split into two different heating zones, and the green circle of Figure 8A shows where melting should occur on the opposite side of the pole (compare Figure 6B and Figure 6C). There is no sea ice melting indicated at 83W longitude. When the Longitude equation was derived, the plague longitude in the USA was used. Since the longitude of an incoming debris stream varies from north to south, an additional 20 degrees needs to be added for the shift to the east for the southern tine. Since the motion of the hotspot at the North Pole in Figure 6 is clockwise, or to the west, the motion of the South Pole hotspot will be counterclockwise, or to the east. There is no blue open water in Figure 7. The hotspot has been over the sea ice loss area before the location of the July 1st green line, traveling counterclockwise, and overflows the bar slightly before reversing direction and moving clockwise, or east, as shown in Figure 13 [11]. July is a freezing month in the Antarctic, so the freezing and heating fluxes are nearly equal and competing with each other, making the sea ice loss area small. The locations of the northern and southern debris stream focal points, or hotspots, shown in Figure 14 agree with the derived values from Table 3 except where corrections are noted. It is obvious the southern tine has done some melting to the east before July 1 [12]. The important part of the hotspot is indicated by the red circle in Figure 14A, and it moves down the red line to the green line and back. The motion is along the sea ice edge [13]. The hotspot in Figure 14B at 94E longitude on the eastern shore of Antarctica, rebounding clockwise, is in agreement with Figure 13 [13]. V603 October sea ice melt The melt of the sea ice before and after the green bar deflection area, DA, can be seen in Figure 15.
The rotated location agrees with the sea ice melt in Figure 15, again allowing for October to be a freezing month for the Antarctic. The very large hotspot shown in Figure 16A [12,13] produces melts on both sides of the green line near the Asian shore, shown in Figure 17 [11]. Heat is added to the unfrozen water of the Laptev Sea in October, delaying the normal freezing time. The DA hotspot traveling clockwise in Figure 16B [13] is shown in Figures 10A and 10, where the heating that occurs is shown in red. The heating area for the northern tine's latitude shown in Figure 10 is too low to melt sea ice at the North Pole (Figures 9 and 9A). This slide is the comparison of the following month, April, to bracket the March 17 date. It denotes no visible significant change, but does not show possible sea ice thinning between the green lines of Figure 9A related to the July ice melt. The southern tine, which is a deflection area, DA, appears to be correctly predicted by the green bar being right of center in the sea ice loss area and moving from west to east, or counterclockwise, in Figure 10 [11]. The sea ice loss area continues to grow in the correct direction, counterclockwise, after March 1 (Figure 11A). Normally the beginning of the sea ice loss will be shown in the Climate4u sea ice figure on the first day of the month containing the CAM date of the maximum hotspot; so, the following months will no longer be shown in this paper. The reader may check the following months if interested. The green line in Figure 10A. V603 July sea ice melt Since this case is a western terminus and the counterclockwise, CC, rotation indicates the hotspot is headed east after touching the green bar on July 1st, it has already been over the sea ice loss area shown in Figure 12 before July 1 [14]. The reason the double and triple hotspots are formed is the small variation of the right ascension, RA, values of the three exploding stars. The right ascension value for WZ Sagittae is 20h 07m 36.5s. The right ascension value for V603 Aquilae is 18h 48m 54.6s. The difference of the two values is 1h 18m 41.9s, giving 19.7 degrees longitude as the distance between the red and green lines of Figure 4. The unusual melt in February 2018 in the Bering Sea was not caused by the double hotspot of V603 Aquilae and WZ Sagittae, but was caused by the WZ Sagittae hotspot after thinning by other exploding star incoming energies. It was noted that February 1989 showed a similar decline of sea ice in the Bering Sea. WZ Sagittae, being a recurring nova, was actively impacting the planet in 1989 at the same longitude as today [16]. It will be interesting to calculate the impact date of nova V606 Aquilae, which was seen exploding in 1899 and is 700 ± 86 light years away from our planet [18]. The right ascension value of V606 Aquilae is 19h 20m 24.3s. The difference of the RAs between V606 Aquilae and WZ Sagittae is 47m 12.2s, giving 11.8 longitude degrees difference. The nearness of the RA values means the effect of impact will be near the same longitude. The constant from equation (2) is 0.1200. The correction term is 84 ± 19.32 years. The impact time of V606 Aquilae is 1983 ± 19.32 years, which includes 1989. Therefore, there was a good chance of another double hotspot attack on the Bering Sea in 1989 by the novae WZ Sagittae and V606 Aquilae. Nova WZ Sagittae has been selected as the impacting heat source for the early February melts in the Bering Sea in 2018 and 2019, but the heat source of the late February melts is unknown, as shown in Figure 16B [4]. When the remnant's location is greater than 15 degrees, the location of 13W longitude is shifted to the east. The location of the Western Terminus is 73E longitude.
The value of the longitude of the Eastern Terminus is on the other side of the Earth, or 180 degrees away, at 107W, the theoretical value that would be plotted in Figure 4. Rotating the theoretical value 30 degrees due to the earth's magnetic field at the North Pole gives 137W longitude. For the hotspot to move to 166W, 29 degrees are required. Using the SNIT theory rule of one degree per day, we add 29 days to the CAM date of January 30 to produce the date of March 1 for the melt of sea ice by the Veil Nebula debris stream, yearly. Shorthand for the same calculation for SS Cygni: RA = 21h 42m 43s = 21.712h; DEC = +43 [20]; DOY = 44 = Feb 13. V603 sea ice melts summary The V603 melting sea ice longitudinal locations agree with the derived theoretical values at both poles. The North Pole melt of July 1 is extraordinary and results from a double pass over the melt area by the V603 hotspot. The theoretical values of longitude for lines of maximum incoming heat flux must be shifted 30 degrees west at the North Pole, and the resulting North Pole longitudinal value must be shifted 80 degrees east to find the correct South Pole longitudinal location. The shift of the theoretical values at both poles is due to the effect of the Earth's magnetic field. November and December Bering Sea Pause in Sea Ice Extent The motion of the SN 1054 hotspot in Figure 4 shows the hotspot was over 166W longitude, 175E (155W revolved 30 degrees west) minus 166W being 19 degrees, or 19 days before December 12 (one degree equals one day), on the date of November 24. The hotspot is moving east, and the vertical black line in Figure 18 shows the date for the center of the hotspot's location at 166W. The center of the hotspot touches the area where sea ice extent is being measured at 180W, shifting the date when the effect of the hotspot would be felt by 14 days, to the vertical red line [15]. The hotspot is not a point but a large circular area; the blue arrow in Figure 18 represents the radius of the hotspot circle in days. The conclusion is that the November 1 decrease of the increasing sea ice extent is due to the SN 1054 hotspot. The red line of WZ Sagittae's eastern terminus in Figure 4 is to be shifted west 30 degrees to 145W on January 20. The central location of 166W occurs in 21 days with the hotspot of WZ Sagittae moving west, giving the date of February 10. Since the right ascension values of the exploding stars have not changed, the longitude locations and dates of incoming energy should be the same for the red and green lines, 2017 and 2018, of Figure 18 [15]. Double or Triple debris stream hotspot phenomena The January 1 green line for V603 Aquilae in Figure 4 also locates the hotspot for WZ Sagittae on January 1 that is traveling toward its eastern terminus. The WZ Sagittae hotspot is not at its maximum incoming energy, which occurs on January 20, but it contributes at the same location as the V603 maximum, forming a maximum incoming total energy location. If this double hotspot location is moved 30 degrees west to correct the calculated value to the real value, the double hotspot is located at 166W longitude in the Bering Sea (January point, Figure 16A). The hotspot for SN 1054 is also at the 166W longitude location on January 1, giving a triple hotspot.
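The longitude bookkeeping used above (right ascension differences converted to degrees of longitude, a 30-degree magnetic rotation at the North Pole, and the one-degree-per-day drift rule) can be illustrated with a short script. This is only a sketch of the arithmetic as described in the text, not the author's own code; the helper names are illustrative.

```python
def ra_to_hours(h, m, s):
    """Convert a right ascension given as hours, minutes, seconds to decimal hours."""
    return h + m / 60.0 + s / 3600.0

def ra_difference_degrees(ra1_hours, ra2_hours):
    """Difference of two right ascensions in degrees of longitude (15 degrees per hour)."""
    return abs(ra1_hours - ra2_hours) * 15.0

# RA separation between WZ Sagittae and V603 Aquilae (values quoted in the text).
wz_sge = ra_to_hours(20, 7, 36.5)
v603_aql = ra_to_hours(18, 48, 54.6)
print(round(ra_difference_degrees(wz_sge, v603_aql), 1))  # ~19.7 degrees

def days_to_reach(start_lon_west, target_lon_west):
    """One-degree-per-day rule: degrees of longitude between two positions read as days."""
    return abs(target_lon_west - start_lon_west)

# Veil Nebula example from the text: the rotated eastern terminus at 137W reaches 166W
# after 29 degrees, i.e., 29 days after the CAM date of January 30, giving March 1.
print(days_to_reach(137, 166))  # 29
```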
The maximum hotspot value for WZ Sagittae occurs January 20 and moves to the 166W location through 21 degrees of longitude in 21 days, giving the date February 10 for the unusual 8-day melt in the Bering Sea (February point, Fig 16A) [16,17]. If the V603 Aquilae hotspot is not thermally effective January 1, WZ Sagittae and SN 1054 still provide a double hotspot. For SS Cygni, the theoretical value of the Eastern Terminus is 92W and the rotated Eastern Terminus is 122W; for the hotspot to move to 166W takes 44 days, giving Mar 29. The variable star SS Cygni debris stream appears to cause the last reduction of Bering Sea ice extent before summer in the northern hemisphere (see Figure 17). The important factor in sea ice melting may be the period of time elapsed after the initial impact of the exploding star debris stream. Conclusions For the simple case of the Spanish Flu and R Aquarii, it can be observed that the impact time could have been in error by 85 years, and it would have been impossible to connect the epidemic to the exploding star 710 light years from our planet. This is the first time that deep-space astronomy has been linked to a significant effect on Earth's biosphere. Astronomers should be given the task to see if the author has missed any recent or near-future debris stream impacts of importance. Currently it is the opinion of the author that the GK Persei explosion's impact may destroy the society of the USA and Western Europe beginning in the year 2083. Hopefully mankind will learn how to protect our planet from incoming exploding star debris streams before that date. The theoretical locations of polar sea ice melts and hotspot storms do not always agree exactly, as can be seen in the numerous figures, but a high percentage of the cases do agree to a reasonable accuracy. The assumption of a constant longitude for incoming debris storms has been changed to a skewed distribution, resulting in a 30 degree shift to the west for the Arctic melts and storms and a 50 degree shift to the east for Antarctic melts and storms. Storms that occur at the extremes, the eastern or western termini, of the 180 degree sector for the longitudinal travel of a debris stream will change east-west direction at the terminus location. Sea ice melts can occur in warm polar months as the result of ice thinning in cold polar months. The pattern of ice melts could vary in latitude in following years for the western terminus and deflection area, but eastern terminus sea ice melts should recur at the same time and location yearly. To date, doctors are doing a great job stopping pandemics with inoculations. If the debris particles could be stopped from impacting Earth's atmosphere, many lives could be saved and Earth could become a paradise free of disease. It is sad that we spend so much effort inventing methods to destroy others when we should be researching how to protect everyone from death. The discovery of the Veil Nebula supernova melting sea ice in the Bering Sea means a number of other active debris streams are impacting Earth. The Veil Nebula supernova explosion was visible 3000 years ago and has been actively adding heat to our planet for over 2000 years at regular times in our orbit. If you have ever looked at the thermal model of our planet you can surmise what a bubble gum and bailing
5,987.8
2019-03-27T00:00:00.000
[ "Physics" ]
Recent advances in imaging crustal fault zones: a review Crustal faults usually have a fault core and surrounding regions of brittle damage, forming a low-velocity zone (LVZ) in the immediate vicinity of the main slip interface. The LVZ may amplify ground motion, influence rupture propagation, and hold important information on earthquake physics. A number of geophysical and geodetic methods have been developed to derive the high-resolution structure of the LVZ. Here, I review a few recent approaches, including ambient noise cross-correlation on dense across-fault arrays and GPS recordings of fault-zone trapped waves. Despite the past efforts, many questions concerning the LVZ structure remain unclear, such as the depth extent of the LVZ. High-quality data from larger and denser arrays and new seismic imaging techniques using a larger portion of the recorded waveforms, which are currently under active development, may be able to better resolve the LVZ structure. In addition, the effects of along-strike segmentation and of gradational velocity changes across the boundaries between the LVZ and the host rock on rupture propagation should be investigated by conducting comprehensive numerical experiments. Furthermore, high-quality active sources such as recently developed large-volume airgun arrays provide a powerful tool to continuously monitor temporal changes of fault-zone properties, and thus can advance our understanding of fault zone evolution. Introduction A crustal fault is a fracture or a zone of fractures that separates different blocks of crust, accumulates aseismic strain, and is subjected to large stress concentrations (e.g., Yang 2010). When the energy associated with the accumulated strain is suddenly released, an earthquake occurs on the fault and may cause severe ground shaking. Geological studies of faults exposed on the ground indicate that a fault is characterized as a narrow zone, termed the "fault zone" (FZ) (e.g., Chester and Logan 1986; Chester et al. 1993). Since it is composed of highly fractured materials (e.g., Chester and Logan 1986; Schulz and Evans 1998, 2000; Sammis et al. 2009), the FZ is seismically recognized as a low-velocity zone (LVZ) with reduced seismic velocities and elastic moduli relative to the host rocks. It has been suggested that the FZ structure holds critical keys to earthquake generation and physics (e.g., Scholz 1990; Kanamori 1994; Kanamori and Brodsky 2004). Furthermore, material properties of the FZ may also influence the migration of hydrocarbons and fluids, as well as the morphology of the land surface (e.g., Wibberley et al. 2008; Faulkner et al. 2010). High-resolution imaging of the FZ structure not only advances our understanding of earthquake physics, and thus better seismic hazard preparation and mitigation, but is also critical to evaluating the long-term deformation of the crust. Tremendous efforts have been made to obtain high-resolution images of FZ structure, including geological, geodetic, and geophysical approaches. However, many fundamental questions concerning the mechanics, structure, and evolution of the FZ remain unclear. Recently, there have been a number of new approaches to infer the LVZ structure, e.g., ambient noise cross-correlation using a dense seismic array (Zhang and Gerstoft 2014; Hillers et al. 2014), modeling FZ trapped waves from high-rate GPS data (Avallone et al. 2014), analysis of interseismic strain localization (Lindsey et al. 2014), and so on.
In this paper, I review the results from a number of geophysical and geodetic techniques that were developed to image high-resolution fault zone structure. These techniques include gravity inversions, modeling of Interferometric Synthetic Aperture Radar (InSAR) and GPS data, regional seismic tomography, FZ trapped waves, FZ head waves, FZ body waves, and ambient noise cross-correlation. Following these methods, I discuss the questions that remain open for debate and suggest a few research directions that may lead to a better understanding of the FZ structure and evolution of crustal faults. FZ properties and their effects Direct measurements of FZ properties are conducted on exhumed faults and by drilling active faults to seismogenic depths (e.g., Chester et al. 1993; Sieh et al. 1993; Johnson et al. 1994; Oshiman et al. 2001; Ma et al. 2006; Li et al. 2013b; Ujiie et al. 2013). The results have shown that the FZ includes a fault core that is usually a few centimeters to several meters in thickness (Fig. 1). Most of the tectonic deformation, including coseismic slip, is accommodated in the clay-rich fault core (e.g., Chester et al. 1993; Evans and Chester 1995). For example, the primary slip zone of the Flowers Pit Fault, a young normal fault located southwest of Klamath Falls, Oregon, has been found to be no more than 20 mm in width (Sagy and Brodsky 2009). Drilling across the Alpine fault, New Zealand, reveals a ~0.5-m-thick principal slip zone bounded by a 2-m-thick fault gouge (Sutherland et al. 2012). Furthermore, core samples recovered from the Japan Trench Fast Drilling Project (JFAST), which was conducted by the Integrated Ocean Drilling Program (IODP) following the 2011 Mw 9.0 Tohoku-Oki earthquake, reveal a several-meter-thick pelagic clay layer where the plate-boundary faulting is suggested to occur (Ujiie et al. 2013). In general, the size of the fault core is below the resolution of geophysical and geodetic imaging, and thus has to be analyzed by direct sampling. Surrounding the fault core is a damage zone that is usually composed of highly fractured materials, breccia, and pulverized rocks (e.g., Chester et al. 1993; Evans and Chester 1995). The damage zone, or the LVZ, often has a width of hundreds of meters to several kilometers (Fig. 1). For instance, analysis of core samples from the Yingxiu-Beichuan fault that ruptured in the devastating 2008 Mw 8.0 Wenchuan earthquake reveals a damage zone of approximately 100-200 m in width (Li et al. 2013b). Furthermore, fracture density has been found to drop drastically outside the central damage zones (100 m thick) along the Gole-Larghe Fault Zone in the Italian Alps and the Flowers Pit Fault, Oregon (Sagy and Brodsky 2009). The LVZ is often interpreted as a result of accumulated damage induced by past earthquakes. Therefore, the LVZ structure may reflect the behavior of past ruptures (e.g., Dor et al. 2006; Ben-Zion and Ampuero 2009; Xu et al. 2012). Furthermore, the LVZ can exert significant influence on the properties of future ruptures, based on results of numerical experiments (e.g., Harris and Day 1997; Huang and Ampuero 2011; Huang et al. 2014). For example, waves reflected from and propagated along the interfaces between the LVZ and intact rocks may strongly modulate the rupture propagation, including oscillations of rupture speed and generation of multiple slip pulses. In addition, the LVZ may result in amplification of ground motion near faults (e.g., Wu et al. 2009; Avallone et al. 2014; Kurzon et al.
2014) and may influence long-term deformation processes in the crust (e.g., Finzi et al. 2009). Moreover, cracks in the damage zone may store and transport fluids that play an important role in fault zone strength (Eberhart-Phillips et al. 1995). Due to the closure of cracks and/or the prevalence of dry over wet cracks, seismic velocities in the LVZ may be elevated over time after a large earthquake. Such an increase in the seismic velocities of the LVZ indicates a healing process of the damage zone that is important to understanding the earthquake cycle and the evolution of fault systems (Li et al. 1998; Vidale and Li 2003). (Fig. 1: a schematic plot of a typical fault zone, after Chester and Logan 1986.) For example, kilometer-wide compliant zones of reduced rigidity have been detected geodetically along the Calico and Pinto Mountain faults, Southern California (Fialko et al. 2002). In addition, many geophysical methods have been applied in deriving FZ properties, such as gravity and electromagnetic surveys, seismic reflection and refraction, travel-time tomography, earthquake location, waveform modeling of FZ-reflected body waves, FZ head waves, and FZ trapped waves (e.g., Mooney and Ginzburg 1986; Ben-Zion and Malin 1991; Ben-Zion et al. 1992; Hole et al. 2001; Prejean et al. 2002; Waldhauser and Ellsworth 2002; Li et al. 2002; McGuire and Ben-Zion 2005; Bleibinhaus et al. 2007; Li et al. 2007; Yang et al. 2009; Roland et al. 2012). In the following, I briefly review a few seismological and geodetic methods that have been used to derive FZ structure. FZ waves The most frequently used seismological technique to image FZ structure is to model the FZ trapped waves (FZTW), which are low-frequency wave trains with relatively large amplitude following the S wave (e.g., Li et al. 1990). This method has been applied to different faults around the world, such as the North Anatolian fault zone in Turkey, the Nocera Umbra and San Demetrio faults in central Italy (Rovelli et al. 2002; Calderoni et al. 2012), the San Andreas fault (SAF) near Parkfield (Korneev et al. 2003; Li et al. 2004; Li and Malin 2008; Wu et al. 2010), the Lavic Lake fault zone (Li et al. 2003b), the Calico fault zone (Cochran et al. 2009), the Landers fault zone (Li et al. 1994, 1999, 2000; Peng et al. 2003), and the San Jacinto fault zone (SJFZ) in California (Li et al. 1997; Li and Vernon 2001; Lewis et al. 2005). Most FZTW studies have revealed an LVZ ranging from ~75 m to ~350 m in width with the shear wave velocity reduced by 20%-50% compared to the host rock. However, considerable uncertainties of FZTW modeling results due to the non-uniqueness and trade-off among FZ parameters have been noted (e.g., Lewis et al. 2005). Furthermore, it is still open for debate whether the trapping structure is shallow or deep (Li et al. 1997; Li and Vernon 2001; Ben-Zion and Sammis 2003; Lewis et al. 2005). For instance, a 15-20-km-deep LVZ of the SJFZ was reported by Li and Vernon (2001) while another group argued that it was only 3-5 km deep (Lewis et al. 2005). Recent FZTW studies using newly deployed dense linear arrays also reveal shallow damage zones near Jackass Flat across the SJFZ (Qiu et al. 2014). In addition to the waves trapped within the LVZ, another type of wave that travels mostly along the fault interface, the so-called FZ head wave (FZHW), is also used to derive the FZ structure at depth. FZHWs are identified as low-amplitude and long-period precursory signals with polarities opposite to the direct P waves. The differential times between the FZHWs and direct P waves provide high-resolution information on the velocity contrast across the fault interface.
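To make the relationship between FZHW-P differential times and velocity contrast concrete, the sketch below estimates an across-fault contrast from hypothetical differential times measured at several along-fault propagation distances. It assumes the simple bimaterial geometry in which the differential time grows with distance roughly as r(1/v_slow - 1/v_fast); the numbers and function names are illustrative, not data or code from the studies cited above.

```python
import numpy as np

def velocity_contrast(distances_km, dt_sec, mean_vp_kms):
    """Estimate a fractional P-velocity contrast across a bimaterial fault interface.

    Assumes dt ~ r * (1/v_slow - 1/v_fast) ~ r * dv / v_mean**2, so the slope of dt
    versus distance multiplied by the average velocity approximates dv/v
    (small-contrast limit)."""
    slope = np.polyfit(distances_km, dt_sec, 1)[0]   # seconds per km of along-fault path
    return slope * mean_vp_kms                       # dimensionless dv/v

# Hypothetical measurements: along-fault distances (km) and FZHW-P differential times (s).
r = np.array([10.0, 20.0, 30.0, 40.0])
dt = np.array([0.08, 0.17, 0.25, 0.33])             # roughly linear moveout with distance
print(round(velocity_contrast(r, dt, mean_vp_kms=6.0), 3))  # ~0.05, i.e., ~5% contrast
```

The linear moveout of the differential time with propagation distance is also what allows FZHWs to be separated from site or source effects in practice.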
For instance, measurements of such differential times and waveform modeling of head waves document a 20%-50% velocity contrast in the Bear Valley region of the SAF (McGuire and Ben-Zion 2005). Similarly, along-strike variation of the velocity contrast along the Calaveras fault has also been suggested by modeling FZHW (Zhao and Peng 2008). In addition to inspecting waveforms directly, polarization analysis has been applied in automatically identifying FZHW (Allam et al. 2014b). By applying such automatic detection of FZHW and analysis of the differential times between the FZHW and direct P waves, average velocity contrasts of 3%-8% have been found along the Hayward fault, Northern California (Allam et al. 2014b). More recently, the FZHW has been observed along the Garzê-Yushu fault ruptured during the 2010 Mw 6.9 Yushu earthquake (Yang et al. 2015). This is so far the first observation of FZHWs along a fault that is not a major plate boundary. Analysis of the time delays between the direct P waves and FZHWs suggests a 5%-8% velocity contrast across the fault. More recently, waveforms of body waves transmitted through the FZ and reflected from its boundaries have been used to derive the degree of damage, width, and depth extent of the LVZ (Fig. 2) (Li et al. 2007; Yang and Zhu 2010a; Yang et al. 2011, 2014). This technique not only uses measurements of travel-time delays of direct P and S waves, but also takes into account the differential times between the direct and FZ-reflected P and S waves. Thus, the trade-off between the FZ width and the velocity reduction is largely removed (Li et al. 2007). Since the body waves are of higher frequencies than the FZTW, the body waves sampling the FZ may resolve the FZ structure at higher resolution. Another advantage of this method is that it allows us to trace individual seismic rays that travel through and bounce back multiple times from the FZ boundaries, as the synthetic travel times and waveforms are computed using the Generalized Ray Theory (GRT) (Helmberger 1983). In addition, this technique can well constrain the depth extent of the LVZ if it is combined with the waves that are diffracted from the base of the LVZ (Fig. 2b) and/or the cross-fault travel-time delays of first arrivals (Yang et al. 2011). For instance, the Buck Ridge branch of the SJFZ was suggested to host a 2-km-deep LVZ based on waveform modeling of the FZ-diffracted waves (Yang and Zhu 2010a). Ambient noise cross-correlation In the past decade, ambient noise cross-correlation (ANCC) has been extensively used in imaging subsurface structures at regional and continental scales (Campillo and Paul 2003; Shapiro et al. 2005; Yao et al. 2006). As recent data recordings from dense arrays become available, the ANCC technique has also been applied in higher frequency bands to derive high-resolution crustal structure. For instance, by performing noise cross-correlation in a frequency band above 0.5 Hz, Zhang and Gerstoft (2014) have retrieved reliable Green's functions from long recordings (6 months) of ambient noise at a dense array across the Calico fault. The temporary array consists of 40 intermediate-period and 60 short-period (L22, 2-Hz corner frequency) seismometers in a 1.5 km × 5.5 km grid adjacent to the Calico fault with a minimum spacing of ~50 m (Fig. 3). Similarly, FZ trapped waves were constructed from the scattered seismic wavefield recorded by the same array (Hillers et al. 2014).
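The core of the ANCC workflow described above can be sketched in a few lines: continuous records at two stations are cut into windows, normalized, cross-correlated, and the correlations are stacked so that coherent noise converges toward the inter-station response. The snippet below is a minimal illustration under simplifying assumptions (purely synthetic data and simple one-bit normalization); it is not the processing actually used in the cited studies.

```python
import numpy as np

def noise_cross_correlation(trace_a, trace_b, win_len, max_lag):
    """Stack windowed cross-correlations of two continuous noise records.

    One-bit normalization (the sign of the data) is applied to each window to suppress
    transient sources; the stacked correlation approximates the inter-station response."""
    n_win = min(len(trace_a), len(trace_b)) // win_len
    stack = np.zeros(2 * max_lag + 1)
    for i in range(n_win):
        a = np.sign(trace_a[i * win_len:(i + 1) * win_len])
        b = np.sign(trace_b[i * win_len:(i + 1) * win_len])
        full = np.correlate(a, b, mode="full")      # lags -(win_len-1) ... +(win_len-1)
        mid = win_len - 1                           # index of zero lag
        stack += full[mid - max_lag: mid + max_lag + 1]
    return stack / n_win

# Synthetic example: two stations recording the same noise, trace A delayed by 20 samples.
rng = np.random.default_rng(0)
noise = rng.standard_normal(200_000)
sta_a, sta_b = noise[:-20], noise[20:]
ccf = noise_cross_correlation(sta_a, sta_b, win_len=2_000, max_lag=100)
print(int(np.argmax(ccf)) - 100)   # ~20: trace A lags trace B by 20 samples
```

In a field application the peak lag would correspond to the inter-station travel time of the dominant surface waves, which is the quantity inverted for velocity structure.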
Hillers et al. (2014) also find a critical frequency of ~0.5 Hz, a threshold above which the in-fault scattered wavefield has increased isotropy and coherency compared to the ambient noise. The advantage of this approach is that the resolved structure is nearly independent of the background velocity model. The limitation of this technique, however, is that it can provide little constraint on the depth extent of the LVZ. In addition to the Calico array, the ANCC technique has also been used on larger dense arrays. Between January and June 2011, a very dense seismic array was deployed in the Long Beach area as part of an active-source survey in the petroleum industry (Schmandt and Clayton 2013). This array consists of more than 5200 high-frequency (10-Hz corner frequency) seismic velocity sensors within an area of about 7 km × 10 km, with a mean spacing of 120 m (Fig. 4a). As a by-product of the industry survey, the continuous recordings of the dense array provide unprecedented opportunities to image the subsurface structure at depths greater than the petroleum reservoirs. For instance, clear fundamental-mode Rayleigh waves were observed from ambient noise tomography between 0.5 and 4 Hz, roughly corresponding to 1-km depth and above. A fast velocity anomaly was found beneath the active northwest-southeast trending Newport-Inglewood fault system that is covered by the Long Beach array. Whether this velocity anomaly is associated with the damaged fault zone needs further investigation. In addition to deriving shallow crustal structure, data recorded at such a dense array can be used in illuminating deep crustal properties. For example, a sharp change in Moho depth was inferred from coherent lateral variations of travel times and amplitudes of P waves at frequencies near 1 Hz (Schmandt and Clayton 2013). Gravity Owing to a large number of fractures and possible fluid interaction and chemical alteration within the FZ, densities of the LVZs are significantly smaller than those of the host rocks. Thus, the LVZ is usually associated with a negative Bouguer gravity anomaly, so its structure can be inferred from gravity data. From modeling Bouguer gravity data along the SJFZ, a 2-5 km wide damage zone on both sides of the surface trace of the fault was proposed in the region (Stierman 1984). Similar results were reported along the Bear Valley section of the SAF (Wang et al. 1986). In addition, analysis of the horizontal components of the Bouguer anomaly can clearly delineate a major plate-boundary fault, e.g., the Red River Fault in the South China Sea (Li et al. 2013a). Note that gravity data are very valuable in deriving FZ structure in the ocean because seismic and other geodetic data are significantly limited there. However, it is challenging to determine the fine structure and depth extent of the LVZ using gravity data alone due to the nonuniqueness of the solution and relatively low resolution. (Fig. 2: schematic plots of the direct and FZ-reflected P and S waves (a) and FZ-diffracted body waves (b); after Yang and Zhu 2010a.) Thus, the application of gravity data to deriving FZ structure is considerably limited. InSAR data The aforementioned LVZ of the Calico fault was first discovered from InSAR data showing anomalous ground deformation induced by the 1999 Hector Mine earthquake (Fialko et al. 2002). Following this discovery, high-resolution satellite image data have been used to derive the FZ structure of other faults, e.g., the Pinto Mountain fault and the Lenwood fault (Fialko 2004).
It has also been pointed out that there might be considerable trade-off among the width, degree of damage (reduction in elastic moduli), and depth extent of the damage zone using the InSAR data. Besides the coseismic deformation, interseismic strain derived from InSAR data could also be used to determine the structure of the damaged FZ. For instance, the observed asymmetric patterns of interseismic deformation along the SAF and the SJFZ might be associated with kilometer-wide damage zones (Fialko 2006). However, such observations can be explained equally well by a model of a dipping fault zone (Fialko 2006). As such, application of InSAR data to modeling FZ structure at depth should consider the nonuniqueness and trade-off among FZ parameters. GPS data In addition to modeling InSAR data, FZ structure can also be derived from interseismic strain by analysis of GPS data. Using a dislocation model in a heterogeneous elastic half space, Lindsey et al. (2014) have shown that a reduction in shear modulus within the FZ by a factor of 2.4 is required to fully explain the observed strain rate along the SJFZ, assuming a 15-km locking depth. Such a reduction in elastic modulus, and thus in seismic velocities, is much larger than that imaged by regional seismic tomography (Allam and Ben-Zion 2012; Allam et al. 2014a). However, a reduction in elastic modulus similar in magnitude to the seismically determined value would result in a locking depth of 10 km, much shallower than the observed 14-18 km depth of seismicity along the SJFZ (Lindsey et al. 2014). An alternative possibility is reduced yield strength within the upper FZ leading to distributed plastic failure, as suggested by numerical modeling results (Duan 2010). In addition to recording interseismic strain, high-rate GPS data may also record coseismic deformation. For instance, FZ trapped waves have been found, for the first time, at a 10-Hz sampling frequency GPS station near the April 6, 2009, Mw 6.1 L'Aquila earthquake (Avallone et al. 2014). The GPS station was installed near L'Aquila a few days before the mainshock. The horizontal components of the mainshock waveforms contain a high-amplitude (43 cm peak-to-peak), nearly harmonic (1 Hz) wave train. The absence of this wave train in other nearby instrumental records indicates a local site effect, which is the wave energy trapped near the GPS station. The observation can be well fit by synthetic trapped waves in a model consisting of two quarter spaces separated by a 650-m-wide LVZ that has a 50% velocity reduction and a Q value of 20 (Avallone et al. 2014). Depth extent of the LVZ Despite past efforts at imaging high-resolution FZ structure, many fundamental questions concerning the mechanics, structure, and evolution of the FZ remain unclear. For example, a central debate in imaging FZ structure is to what depth the LVZ extends (Yang and Zhu 2010b). A number of studies suggest that the LVZ may extend to the base of the seismogenic zone, such as along the SJFZ (Li et al. 1997; Li and Vernon 2001), the SAF near Parkfield (Wu et al. 2010), and the San Demetrio Fault in central Italy (Calderoni et al. 2012). In stark contrast, a group of studies proposes that the LVZ only extends to shallow depths, e.g., a few km below the surface. For example, it has been suggested that the SJFZ hosts LVZs extending to 2-5 km in depth according to modeling of LVZ trapped and diffracted waves (e.g., Lewis et al. 2005; Yang and Zhu 2010a).
The LVZ of the Calico fault is also suggested to be no more than 3 km in depth (Yang et al. 2011). Using newly deployed dense seismic networks, recent results of seismic imaging at regional scale have shown significant improvements in the resolution of crustal structure (e.g., Allam et al. 2014b; Zigone et al. 2014). Although the 100-m-wide LVZ is still below the current resolution of seismic images at regional scale, the newly obtained tomographic images of crustal velocity structure clearly show several-km-wide LVZs along the SJFZ, with strong velocity contrasts with the host rocks (Allam et al. 2014b; Zigone et al. 2014). Furthermore, such LVZs are most prominent in the top 5 km, suggesting a shallow LVZ extent (Allam et al. 2014b; Zigone et al. 2014). The observed shallow LVZ structures are consistent with numerical simulation results on decreasing damage with depth (Ben-Zion and Shi 2005; Finzi et al. 2009; Kaneko and Fialko 2011). Such shallow LVZ structure is also implied from laboratory experimental measurements on rock samples from an exhumed fault, the Gole-Larghe Fault Zone (GLFZ) in the Italian Southern Alps. The GLFZ is hosted in jointed crystalline basement and exposed across glacier-polished outcrops in the Italian Alps. Widespread occurrence of cataclasites associated with pseudotachylytes (solidified frictional melts) indicates ancient large earthquakes. Geochemical dating and analysis of the pseudotachylytes suggest that they were formed at 9-11 km depth, with ambient temperatures of 250-300°C. Using a combination of structural line transects and image analysis of samples collected across the fault strike, Smith et al. (2013) document a broadly symmetric across-strike damage structure of the GLFZ. The damage zone is distinguished by large variations in fracture density, distribution of pseudotachylytes, and microfracture sealing characteristics compared to the host rocks. The central damage zone is 100 m in width and is associated with the highest fracture density. In addition to measuring the fracture density, Mitchell et al. (2013) have also measured acoustic P-wave velocities of the samples in the laboratory. The results, however, show a higher P-wave velocity within the central damage zone than in the surrounding alteration zones, which have smaller fracture density. Why isn't the central damage zone an LVZ? The interpretation is that the large number of cracks within the central damage zone was associated with pervasive sealing of fractures at depth, which is consistent with the low permeability measured in the samples. Therefore, the seismic wave velocities in the FZ may not be lower than those in the host rocks. Considering that the samples were exhumed from ~10 km depth, the results imply that the FZ damage may have been generated at greater depth following large earthquakes but may have healed rapidly due to fluid-rock interaction. FZ healing The healing process is an important component in understanding the earthquake cycle and the long-term evolution of fault systems (e.g., Li et al. 1998; Vidale and Li 2003). It has been pointed out that an FZ, at least its shallow portion, may experience an increase in seismic velocity with time, indicating the healing of the damage zone after a large earthquake (Li et al. 1998, 2003a). By conducting a pair of seismic explosions and analyzing travel-time changes of identical shot-receiver pairs across the Johnson Valley fault ruptured in the 1992 Landers earthquake, Li et al.
(1998) have found a ~1% increase in both P-wave and S-wave velocities from 1994 to 1996. Such a velocity increase is consistent with the prevalence of dry over wet cracks and/or the closure of dry cracks (Li et al. 1998). Similar to the observations along the Landers rupture zone, the fault zones ruptured during the 1999 Hector Mine Mw 7.1 earthquake have experienced a strengthening process during which the P-wave and S-wave velocities of FZ rocks increased by ~0.7%-1.4% and ~0.5%-1.0% between 2000 and 2001, respectively (Li et al. 2003a). To accurately document the degree of FZ healing in the field, repeatable seismic sources such as repeating earthquakes and explosions are often used. However, even repeating earthquakes identified from waveform correlations and man-made explosions may not provide perfectly repeatable sources, which may lead to considerable uncertainties in estimating the relatively small changes in velocity over time. Moreover, the episodic behavior of such natural and anthropogenic seismic sources prevents continuous monitoring of the temporal changes of subsurface structures. The ANCC method, in comparison, is independent of seismic sources and can provide continuous measurement of temporal changes of medium properties (e.g., Brenguier et al. 2008). For example, more than 5 years of ambient noise data have been used to derive the velocity changes over time at the SAF near Parkfield. The results illustrate the coseismic damage due to the M 6.5 San Simeon and the M 6.0 Parkfield earthquakes and the consequent healing, i.e., increase in velocity (Brenguier et al. 2008). Many observations of temporal velocity changes in fault zones and volcanic areas have been reported from ANCC results (e.g., Chen et al. 2014; Liu et al. 2014). However, the resolution of ANCC results is often limited by the frequency content of the extracted signals. Recently, a new artificial seismic source, termed the transmitting seismic station (TSS), has been implemented to probe high-resolution temporal changes of crustal velocities (Wang et al. 2012). The TSS is heavy-duty, easily controllable, and environmentally friendly. Its core part is an air-gun array that can suddenly release large-volume compressed air in a short time interval, producing highly repeatable (if not completely identical) seismic signals with cross-correlation coefficients larger than 0.99. The signals can be clearly observed at seismic stations over distances of ~300 km after stacking, showing clear crustal phases (Fig. 5). Therefore, the TSS can not only be used to investigate FZ healing but can also provide valuable information on temporal structural changes over a larger scale. The TSS is currently under active development in China. After it was first launched in Binchuan, Yunnan, in April 2011 (Wang et al. 2012), another three test sites have been constructed in Xinjiang, Fujian, and Gansu provinces. Figure 5 shows the location of the TSS in Xinjiang and an example of the stacked waveforms excited by the air-gun source. LVZ boundaries It has been documented that FZ damage may remain throughout (and beyond) earthquake cycles (Cochran et al. 2009; Yang et al. 2014). For instance, a 1.5-km-wide LVZ of the Calico fault has been suggested to be a long-lived damage zone, given that there have been no significant earthquakes along the Calico fault in the past thousands of years (Cochran et al. 2009; Yang et al. 2011).
A more recent study also points out that the prominent LVZ near the Anza seismic gap along the SJFZ indicates a slow healing process, at least at shallow depths (Yang et al. 2014), as the Anza seismic gap has not experienced any surface-rupturing earthquakes for at least 200 years (Salisbury et al. 2012). Such a healing process may be due to the closure of open cracks and fluid-sealing of fractures, which may in turn lead to less sharp boundaries between the LVZ and the host rock. Whether the velocity contrast occurs in a gradual transition zone or along a sharp interface may strongly affect the rupture properties of earthquakes on the fault and the FZ-related waveforms, e.g., the FZ-reflected waves. Prominent FZ-reflected P and S waves were first observed at a linear array across the Landers fault that was deployed shortly after the 1992 Mw 7.3 Landers earthquake (Li et al. 2007). In contrast, such clear FZ-reflected phases are not identified coherently on the cross-fault arrays along the SJFZ and the Calico fault (Yang and Zhu 2010a; Yang et al. 2011, 2014). The difference may reflect the boundary properties between the FZ and the host rock. Since the array across the Landers fault was deployed shortly after the 1992 Mw 7.3 earthquake, the earthquake-induced damage may have generated a strong velocity contrast between the damage zone and the host rock, forming sharp boundaries that can generate clear FZ-reflected waves. In contrast, the SJFZ and the Calico fault have not experienced major ruptures in over 100 and 1000 years, respectively. Therefore, the LVZ boundaries might be gradual, resulting in less prominent reflected phases. Such gradual changes in FZ materials are observed by drilling into seismogenic faults. For example, a ~50-m-thick alteration zone was found to obscure the boundary between the damage zone and fault core across the Alpine fault, New Zealand (Sutherland et al. 2012). Effects of the FZ properties on dynamic ruptures It has been well documented that the LVZ structure may strongly affect the dynamic rupture process (Harris and Day 1997; Huang and Ampuero 2011; Huang et al. 2014). Results of recent numerical simulations have shown the impacts on the rise time of slip and the transition from pulse-like to crack-like rupture as the LVZ width increases (Huang and Ampuero 2011). Considering the large variations in the width of damage fault zones (Savage and Brodsky 2011), e.g., the faults in the Eastern California Shear Zone, the rupture propagation pattern for earthquakes occurring on these faults may differ considerably. The present numerical simulations of the effects of the LVZ on rupture dynamics are mostly conducted in 2-D cases with simplified LVZ structures. Many features of FZ structure revealed from observations are reasonably ignored. As mentioned above, there may be a transition zone between the damage zone and the host rock. The effects of such gradual changes in seismic velocity and elastic properties on rupture nucleation, propagation, and arrest are not well understood yet. Moreover, it has been pointed out that there are along-strike segmentations of the LVZ, as suggested along the SJFZ (Yang et al. 2014) and the SAF near Parkfield (Lewis and Ben-Zion 2010). Whether and how the segmented LVZs affect future earthquake ruptures needs to be investigated through numerical experiments. Furthermore, it has been shown that geometrically complex and heterogeneous fault properties also play important roles in the dynamic rupture process (e.g., Yang et al. 2012, 2013).
When coupled with the complex structure of the LVZ, which is likely the case in the field, the factors influencing rupture nucleation and propagation need to be investigated by conducting comprehensive numerical simulations. Conclusions and future directions Tremendous efforts have been made in imaging FZ structure and investigating FZ evolution and the effects of the FZ on rupture dynamics. In this paper, I briefly review the recent advances that have been used in deriving high-resolution images of the FZ structure, including seismological and geodetic approaches. The availability of active sources and dense array deployments, as well as more advanced computational resources, lays out the road to obtaining better images of the FZ structure and to advancing our understanding of FZ evolution, earthquake initiation, rupture propagation, and termination on the fault. Based on what is reviewed above, I think much can be learned by pursuing research in the following directions: (1) Deployment of large dense arrays. There are numerous examples showing the power of seismic arrays in deriving better images of Earth structure. This also holds true for relatively small structures such as an FZ. For example, the Long Beach array provides an unprecedented opportunity to study high-resolution FZ structure using a variety of techniques. Figure 4b shows a waveform record section along a linear profile of the array in which a number of coherent FZ-related phases can be identified and then modeled to obtain a more detailed FZ structure. In addition to providing higher-quality data, dense arrays will also promote the development of new seismic imaging techniques. (2) Development of new imaging techniques. So far, modeling of FZ waves is mostly focused on separate seismic phases, such as FZ-reflected body waves, FZHW, and FZTW. By using a larger portion of the recorded seismic wavefield, we should be able to better resolve the FZ structure to which the different waves are sensitive. As more advanced computational resources become available, wavefield-based seismic imaging techniques such as adjoint tomography can be applied to derive high-resolution images of FZ structure. (3) Monitoring FZ evolution. As highly repeatable seismic sources can be routinely used (Wang et al. 2012), continuously monitoring FZ properties at high resolution becomes feasible, given that a dense seismic network is deployed across an FZ. With such a dense array and a high-quality active source, we can derive high-resolution temporal variations of FZ structure and better understand how an FZ evolves with time.
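As a concrete illustration of the monitoring idea in direction (3), the sketch below estimates a relative velocity change (dv/v) by the waveform-stretching approach that is often used with repeatable sources or noise correlation functions: a later waveform is stretched or compressed in time until it best matches a reference, and the best-fitting stretch factor approximates dv/v. This is a generic, simplified illustration under the stated assumptions, not the processing used in the studies cited in this review; the synthetic waveform and parameter values are invented for the example.

```python
import numpy as np

def stretching_dvv(reference, current, dt, trial_dvv=np.linspace(-0.02, 0.02, 401)):
    """Estimate dv/v by stretching the current waveform onto the reference.

    A homogeneous velocity increase dv/v moves all lapse times to earlier times by a
    factor (1 - dv/v), so the reference is approximately recovered by evaluating the
    current trace at times t * (1 - dv/v); the trial value that maximizes the
    correlation coefficient with the reference is taken as the dv/v estimate."""
    t = np.arange(len(reference)) * dt
    best_cc, best_dvv = -1.0, 0.0
    for dvv in trial_dvv:
        stretched = np.interp(t * (1.0 - dvv), t, current)   # resample at stretched times
        cc = np.corrcoef(reference, stretched)[0, 1]
        if cc > best_cc:
            best_cc, best_dvv = cc, dvv
    return best_dvv, best_cc

# Synthetic test: a decaying coda-like wavelet and a "later" version in which all
# arrivals occur 0.5% earlier, mimicking a 0.5% velocity increase (healing).
dt = 0.01
t = np.arange(0, 20, dt)
reference = np.sin(2 * np.pi * 2.0 * t) * np.exp(-t / 8.0)
current = np.sin(2 * np.pi * 2.0 * (1.005 * t)) * np.exp(-(1.005 * t) / 8.0)
print(round(stretching_dvv(reference, current, dt)[0], 4))   # ~0.005, i.e., ~0.5% increase
```

With a TSS-type repeatable source, the same measurement repeated over months would trace the healing (velocity recovery) curves discussed in the FZ healing section.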
7,540.4
2015-03-21T00:00:00.000
[ "Geology" ]
Isthmin1, a secreted signaling protein, acts downstream of diverse embryonic patterning centers in development Extracellular signals play essential roles during embryonic patterning by providing positional information in a concentration-dependent manner, and many such signals, like Wnt, fibroblast growth factor (FGF), Hedgehog (Hh), and retinoic acid, act by being secreted into the extracellular space, thereby triggering receptor-mediated responses in other cells. Isthmin1 (ism1) is a secreted protein whose gene expression pattern coincides with that of early dorsal determinants, nodal ligand genes like sqt and cyc, and with fgf8 during various phases of zebrafish development. Ism1 functions in early embryonic patterning and development are poorly understood; however, it has recently been shown to interact with nodal pathway genes to control organ asymmetry in chicken. Here, we show that misexpression of ism1 deletion constructs disrupts embryonic patterning in zebrafish and exhibits genetic interactions with both Fgf and nodal signaling. Unlike Fgf and nodal pathway mutants, CRISPR/Cas9-engineered ism1 mutants did not show obvious developmental defects. Further, in vivo single-molecule fluorescence correlation and cross-correlation spectroscopy (FCS/FCCS) showed that Ism1 diffuses freely in the extracellular space, with a diffusion coefficient similar to that of Fgf8a; however, our measurements do not support direct molecular interactions between Ism1 and either nodal ligands or Fgf8a in the developing zebrafish embryo. Together, data from gain- and loss-of-function experiments suggest that zebrafish Ism1 plays a complex role in regulating extracellular signals during early embryonic development. Electronic supplementary material The online version of this article (10.1007/s00441-020-03318-2) contains supplementary material, which is available to authorized users. Introduction Major patterning and tissue segregation events are established early during development. In zebrafish, by 6 h post fertilization (hpf), the germ layers are specified and segregation occurs. Multiple extracellular signaling molecules play key roles in these early patterning processes, and their individual activities are carefully balanced by mechanisms such as the employment of several redundant signaling molecules for a single process and the presence of mutually antagonistic activities of different molecules. For instance, establishment of the dorso-ventral axis in zebrafish involves secreted factors from at least four signaling molecule families (Wnt, Bmp, Nodal, and Fgf), along with their respective inhibitors (Schier and Talbot, 2005). Another principle that has emerged over recent years is that besides the concerted action of agonists and antagonists, embryonic patterning in organisms also relies on the presence of specific receptors, co-receptors, and extracellular modulators that regulate the assembly of functional signaling complexes and their activity (see, e.g., Böttcher et al., 2004; Gritsman et al., 1999; Itasaki et al., 2003). The Nodal proteins belong to the TGFβ signaling family, which includes a large number of secreted factors that are thought to signal through type I and II receptor S/T kinases, with intracellular signaling via the Smads (Constam, 2014).
Nodal-related genes in zebrafish such as squint/sqt (nodal-related 1, ndr1), cyclops/cyc (nodal-related 2, ndr2), and southpaw/spaw (spaw) are secreted molecules that are involved in various developmental processes, including establishment of the mesendoderm and left/right asymmetry, and candidate receptors for Cyc and Sqt are the activinlike receptors ActRIIA (acvr2aa), ActRIIB (acvr2ba), and ActRIB (acvr1ba) (Shen, 2007;Schier, 2009). In addition to signaling via receptor S/T-kinases, Sqt and Cyc require the presence of a membrane-bound co-receptor, Egf-cfc, which is the affected molecule in the one-eyed pinhead/oep (teratocarcinoma-derived growth factor 1, tdgf1) mutant (Gritsman et al., 1999). Further, the type I receptor TARAM-A/Tar (acvr1ba) has been proposed as a Nodal receptor in zebrafish because both its activated and dominant negative forms can be used to interfere with normal levels of Nodal signaling during embryogenesis (Aoki et al., 2002;Dickmeis et al., 2001;Peyriéras et al., 1998;Renucci et al., 1996). Genes that share spatiotemporal expression profiles are likely to be involved in the same biological process (es) and are categorized into synexpression groups (Eisen et al., 1998;Niehrs and Pollet, 1999). Isthmin (ism), originally isolated in Xenopus and proposed as a member of the fgf synexpression group (Pera et al., 2002), is thought to act as an extracellular antagonist of NODAL signaling during chick development wherein it controls organ asymmetry (Osório et al., 2019). Further, Ism1 has been identified as a clefting and craniofacial patterning gene in humans (Lansdon et al., 2018) and is required for normal hematopoiesis in developing zebrafish (Berrun et al., 2018), apart from acting as an angiogenesis inhibitor and preventing tumor growth (Xiang et al., 2011). In other independent studies, Ism1 expression has been described in conjunction with transcriptome analysis of dorsal-ventral patterning genes and in a genomewide RNA tomography study of zebrafish embryos (Bennett et al., 2007;Fodor et al., 2013;Junker et al., 2014). Here, we describe in detail the spatiotemporal nature of zebrafish ism1 expression and reveal that its early expression overlaps with nuclear localization of β-catenin and with the expression of oep, sqt, and cyc. Consistently, early expression of ism1 depended on dorsalizing determinants such as β-catenin and Nodal signaling, whereas, in the later stages, brain-specific expression of ism1 was under the control of Fgf signaling. In contrast to Fgf-and nodal family gene mutants, global loss-of-function Ism1 mutants, generated using CRISPR/Cas9, did not show any developmental defects. Furthermore, in vivo single molecule fluorescence cross correlation assays did not show any interaction between Ism1 and nodal ligands, suggesting that while Ism1 may not play a direct role in modulating nodal signaling pathways during early zebrafish development, it may have a complex role in regulating extracellular signals during early embryonic development. Ism1 and ism2 expression during zebrafish embryonic development To identify embryonic tissues in which isthmin may be expressed, we performed whole mount in situ hybridization with riboprobes against ism1 and ism2 in embryos at various developmental stages. While ism1 could be detected early, i.e., during shield stage, tail-bud stage, and early somitogenesis, ism2 expression could not be detected until 24-h post fertilization (hpf) (not shown). 
Specifically, expression of ism1 was first detectable around the sphere stage (Fig. 1a) and was localized dorsally, based on co-expression of the organizer gene bozozok (boz)/dharma (dharma) (not shown). Ism1 was also enriched in the dorsal embryonic margin until gastrulation (Fig. 1b). A faint expression around the marginal zone may correspond to the external yolk syncytial layer (YSL) and/or the presumptive mesendoderm. During early gastrulation, ism1 was expressed in the anterior-most cells of the hypoblast (Fig. 1c, d). Weaker ism1 expression was found in the internal YSL at the pre-gastrula stages and became more pronounced at the shield stage (Fig. 1e, f). During gastrulation, expression in the dorsal hypoblast became restricted to predominantly axial levels (Fig. 1g, h). At the tailbud stage, expression was found in the posterior paraxial and medial aspects of the mesendoderm and also extended anteriorly to the prechordal plate (Fig. 1i-k). During early segmentation, ism1 expression appeared in the midbrain-hindbrain region, where it persisted into the later stages of embryogenesis. Prominent expression was also found in parts of the somitogenic mesoderm (Fig. 1l, m), around Kupffer's vesicle (Fig. 1l, arrows), and in the notochord (Fig. 1l, m, s). During somitogenesis, co-staining for fgf8 mRNA revealed close proximity between fgf8 sources and domains of ism1 expression, including in the tailbud/caudal mesoderm (Fig. 1l, m), and a strong overlap in the midbrain-hindbrain boundary region (MHB; Fig. 1o, p). In contrast, at 24 hpf, ism2 expression was most prominent in two bilateral streams of mesenchymal cells in the head region (Fig. 1t, u); additionally, punctate expression was observed in the tail/trunk (Fig. 1v). As the aim was to understand the role of Isthmin in early embryogenesis, we henceforth focused only on ism1.

Fig. 1 Expression of ism1 and ism2 during zebrafish embryogenesis. Whole mount ISH was performed at the indicated stages during zebrafish embryogenesis with riboprobes against ism1 (panels a-s; detected in purple), fgf8 (panels l-p, detected in red), and ism2 (t-v; detected in purple). a, b, e, g Lateral views, dorsal to the right; c animal view, shield to the right; d corresponding dorsal view, animal to the top; f, h animal-dorsal views; i lateral view, anterior to the left; j, k corresponding dorsal views; l, m flat-mounted embryos, dorsal view, anterior to the left; n-p transverse sections at indicated positions in panels l and m. q-t, v Lateral views, anterior to the left. u Dorsal view, anterior to the left. Stages are indicated as % epiboly or as sph: sphere stage, sh: shield stage, tb: tailbud stage; ss: somite stage (5, 10, 18ss correspond to 5, 10, or 18 somite stage). Arrowheads in a, b, and c represent the dorsal expression domain; arrows in e and f represent expression around YSL nuclei; white asterisks in i and m correspond to presomitic mesoderm; black arrowheads in j and s point to axial mesoderm/notochord; white arrowhead in k points to adaxial cells; white arrowheads in l and n correspond to head mesenchyme; arrows in l point to ism1 expression close to Kupffer's vesicle; asterisks in q and s represent the tailbud; red asterisks in l, m, q, and r correspond to the MHB; arrowhead in t points to expression in the nasal primordium; arrowheads in v represent scattered expression in the trunk; arrows in v point to expression in the tailfin bud. Scale bar: 200 μm in a-e and i-k; 100 μm in f-h, q, n-r, and r-v
Our detailed expression analysis of ism1 during early embryogenesis in zebrafish indicates that its expression domain coincides not only with that of fgf8, as previously suggested (Pera et al., 2002), but also with that of early dorsal determinants and nodal genes such as sqt and cyc (see below). Therefore, we next investigated if either of these signaling pathways might control the expression of ism1 or vice versa. Regulation of ism1 expression during early development The early expression of ism1 in the dorsal aspects of the blastula suggests direct or indirect control by dorsalizing factors such as β-catenin. Additionally, nuclear localization of β-catenin in the dorsal aspects of the zebrafish YSL and in the overlying blastoderm is similar to the spatiotemporal regulation of ism1 in the early blastula. In the presence of a constitutively active form of β-catenin (CA-βcatenin), ism1 expression became ectopic and was visible in the entire blastula. Conversely, injection of a dominant negative variant of Tcf3 (DN-Tcf3) strongly reduced ism1 expression (Fig. 2a-c). Together, these observations indicate that early expression of ism1 is under the control of dorsal determinants. However, nuclear β-catenin is a transient signal and its expression, unlike that of ism1, may not encompass the entire blastoderm margin. Further, because Wnt signaling may extend more ventrolaterally in Xenopus and as β-catenin may also indirectly regulate ism1, we next analyzed additional factors that might regulate ism1 expression. The nodal signaling gene, sqt, and the homeobox transcription factor bozozok/boz, are two main targets activated by dorsal determinants. Injection of boz mRNA led to radial ectopic expression of ism1 in the marginal zone of the embryos, similar to the ectopic expression of fgf8 (Fig. 2i, j), indicating that boz acts upstream of ism1. Notably, boz was able to elicit ism1 expression even when β-catenin signaling was inhibited by injecting dominant negative TCF3 (Fig. 2c, d), implying that the Wnt/β-catenin pathway may not be strictly required for ism1 activation. Similarly, sqt presence was able to rescue ism1 expression in DN-Tcf3injected embryos (Fig. 2c, e), indicating that sqt is also a direct and upstream regulator of ism1. Notably, sqt expression pattern was similar to that of ism1 as it was observed not only in the dorsal aspects of the blastula but also in the circumference of the blastoderm margin, where it is involved in specification of the mesendoderm. Likewise, cyc was also expressed around the blastoderm margin in the late blastula, which is consistent with a possible influence on ism1 expression. Therefore, we next focused on a more detailed analysis of the correlation between ism1 and Nodal signaling in the early embryo. As the two Nodal-related ligands, Sqt and Cyc, are thought to be redundant, we analyzed ism1 expression in maternal-zygotic oep (MZoep) mutants because Nodal signaling is thought to be completely abolished in these animals (Gritsman et al., 1999). In MZoep blastulae, ism1 expression was observed in the YSL, but not in the blastoderm, indicating that ism1 expression in the YSL is independent of Nodal signals (Fig. 2f, g). During gastrulation, ism1 transcripts were not detectable in MZoep mutants except in the YSL, which is consistent with the fact that MZoep embryos lack most of the mesendoderm. 
However, at the tailbud stage, we noticed a small deep-layer expression domain on the dorsal side of the embryo, which may represent residual ventral mesendoderm after its convergence with the dorsal side of the embryo (Fig. 2h). Consistent with this interpretation, lefty1/antivin (lft1)-injected embryos that lack the mesendoderm did not display residual ism1 expression (data not shown). Conversely, injection of both sqt and cyc mRNA elicited ectopic ism1 expression in the blastula (Fig. 2l-n), indicating that Nodal signaling is not only necessary for early ism1 expression but is also sufficient to ectopically induce it. In contrast to these early-stage observations, later-stage ism1 expression appeared to be independent of Nodal signaling. Specifically, the onset of ectodermal ism1 expression in the MHB was not affected in MZoep embryos, and its expression in this region persisted even in the later stages, i.e., in the telencephalon as well (data not shown). The overexpression of sqt induced ubiquitous expression of fgf8 at the sphere stage (Fig. 2i, k), raising the possibility that fgf8 might mediate the induction of ism1 by sqt. Therefore, we investigated ism1 expression during perturbed Fgf signaling, wherein FgfR-dependent signaling was inhibited between fertilization and the late blastula stage using the small molecule inhibitor SU5402 (Reifers et al., 2000). This did not lead to a significant decrease in ism1 expression (not shown), and similar results were obtained after the injection of RNA encoding a dominant negative Fgf receptor (Hongo et al., 1999) at the 1-cell stage (data not shown). Moreover, we could not detect any changes in ism1 or sqt expression at the sphere stage in fgf8 mRNA-injected embryos (Fig. 2o-q).

Fig. 2 Ism1 expression depends on dorsal determinants, and on Nodal and Fgf signaling. Misexpression of constitutively active β-catenin induces ectopic ism1 expression (b, asterisk) compared with controls (a); conversely, dominant negative TCF3 diminishes ism1 expression (c). However, boz (d) and sqt (e) can induce ism1 (arrows) in the absence of Wnt/β-catenin signaling. f, g ism1 expression in the blastoderm (arrows) is absent in MZoep mutants, while its expression in the YSL remains unaffected (asterisks). h Residual ism1 expression in MZoep mutants at the end of gastrulation (arrowhead). Both boz (j) and sqt (k) elicit ectopic fgf8 expression (arrows, asterisk) compared with controls (i). Both sqt (m) and cyc (n) elicit ectopic ism1 expression compared with controls (l). o-q Neither sqt (p) nor ism1 is ectopically induced by fgf8 overexpression. r-z Fgf-dependence of ism1; r, s pharmacological FgfR inhibition during gastrulation abolishes ism1 expression in the axial mesoderm and in the posterior paraxial mesoderm (white asterisks); v, w FgfR inhibition during somitogenesis abolishes ism1 expression in the MHB (red asterisks) and forebrain (arrowheads); t, u, x, z ism1 expression in the MHB (asterisk) and forebrain (arrowhead) is missing in fgf8/ace mutants. Note that expression in the MHB in panels x-z appears to depend on the genetic dose of fgf8. Genotypes (pink), stages (black), treatment (red), and riboprobes (blue) are indicated on the panels. a-g Lateral views, dorsal to the right; h, x-z lateral views, anterior to the left; i-p animal views, dorsal to the right if discernible; q dorsal view, animal pole to the top; r-w dorsal views, anterior to the left. Scale bar: 200 μm in a-e and i-q; 100 μm in f-h and r-z
These results imply that the earliest expression of ism1 is not controlled by Fgf signaling. In contrast, when SU5402 was added to the medium during the late gastrulation stages, ism1 expression was predominantly abolished (Fig. 2r, s); this pattern of change in gene expression is similar to that seen with other target genes of the Fgf signaling pathway, such as pea3, whose expression was used to ensure the efficiency of inhibitor treatment. Moreover, ism1 transcripts were completely absent from the MHB when embryos were treated with SU5402 from early to mid-somitogenesis (Fig. 2v, w), indicating that subsequent neuroectodermal expression of ism1 depends on FgfR signaling. A major Fgf ligand involved in MHB organization is Fgf8. Consistent with a role for Fgf8 in the regulation of ism1 expression in the MHB, ism1 expression was absent in homozygous fgf8/ace mutant fish and it was reduced in heterozygous mutants (Brand et al., 1996;Reifers et al., 1998) (Fig. 2t, u, x-z). Taken together, these results imply that ism1 expression is temporally controlled by different signaling systems ( Fig. 3a): while its early expression is controlled by dorsal determinants such as boz and sqt, blastoderm expression requires nodal signaling, and its later expression, especially in the midbrain-hindbrain boundary, is responsive to Fgf8 levels. Dorsalization of embryos in Ism1 deletion variants Ism1 is characterized by the presence of multiple sequences such as the thrombospondin-type 1 repeat (TSR), an adhesion-associated domain in Muc4 and other proteins (AMOP) sequence, a signal peptide sequence, and a conserved sequence present in all Isthmins, which we refer to as the Ism-specific domain (ISD) (Fig. 3b). To dissect the function of ism1 and these three protein motifs, we designed expression constructs with full-length ism1 and deletion variants and injected in vitro transcribed mRNA of these constructs into fertilized zebrafish eggs to verify if they caused patterning abnormalities. Consistent with earlier observations (Pera et al., 2002), full-length ism1 did not appear to cause strong morphological defects. However, a construct that lacked the N-terminal portion including the IND-motif but retained the signal peptide, ism1 (∆N), caused dorsalization of 23% of the injected embryos, and cyclopia, albeit with lower frequency ( Fig. 3c; Table 1). When we removed the remaining C-terminal portion of the molecule, we found that both the TSR and AMOP domains elicited a similar phenotype on their own and with similar penetrance (Ism1-TSR and Ism1-AMOP; summarized in Table 1), implying that both domains contribute to the effects observed with ism1 lacking the ISD motif. In contrast to the marked dorsalization and cyclopia, none of the deletion constructs disturbed the general morphology of the isthmic constriction, whose formation critically depends on functional Fgf signaling (Reifers et al., 2000;Meyers et al., 1998). As dorsalization was the most prominently observed phenotype, we next investigated if this early dorso-ventral polarity was disturbed in embryos injected with Ism1 deletion variants. Using din (chordin, chrd) as a dorsal-specific marker (Miller-Bertoglio et al., 1997) and evenskipped/ eve1 (eve1) as a ventral marker (Joly et al., 1993), we found that embryos injected with ism1 (∆N), ism1-TSR, or ism1-AMOP were dorsalized at the late blastula stage (Fig. 3d-g; Table 2). 
Further, as Sqt and boz act upstream of din while establishing the dorsal organizer in zebrafish, we tested if ectopic activation of din was preceded by ectopic expression of sqt or boz. Upon misexpression of ism1 (∆N), punctate expression of sqt could be detected already at the sphere stage; however, we did not observe ectopic boz expression, suggesting that the expansion of din could be mediated by ectopic activation of sqt alone (Fig. 3h-k). Consistent with the observed upregulation of sqt and din, we noted that the wnt8 expression pattern was interrupted in injected embryos, indicating that ectopic activation of organizer genes occurred at the expense of ventrolateral germ ring identity (Fig. 3l, m). Further, ectopic expression of fgf8 could be induced by ism1 (∆N) (Fig. 3n). These data are consistent with the notion that the early dorsalization of the embryo upon ism1 (∆N) injection is due to overactivation of the Nodal signaling pathway. In contrast to the Ism1 versions lacking the N-terminal ISD domain, mRNAs for all Ism1 versions containing the N-terminal half, including wild-type Ism1, caused little or no dorsalization in our assays (Table 2), implying that loss of the N-terminal half of Ism1 was responsible for the observed dorsalization.

Fig. 3 Dorsalization of embryos in ism1 deletion mutants. a Schematic representation of upstream regulators of ism1 during various stages of embryonic development. b Schematic representation of the Ism1 protein and the amino acid length of its domains: signal sequence, Isthmin-specific domain (ISD), thrombospondin-type 1 repeat (TSR), and an adhesion-associated domain in Muc4 and other proteins (AMOP). c Dorsalized and cyclopic phenotype observed in ism1 (∆N) mRNA-injected embryos at 24 hpf (see Table 1). d-g Dorsalization of the injected embryos at 50% epiboly, or late blastula (see Table 2). d, f Uninjected controls and e, g ism1(∆N)-injected embryos were probed for din and eve1 as dorsal and ventral markers, respectively. h, i boz appears unchanged after injection of ism1 (∆N). j, k, n din, sqt, and fgf8 can be induced by ism1 (∆N).

Generation and characterization of ism1 mutants To assess if Ism1 was required for embryonic development, we used CRISPR-Cas9 mediated mutagenesis to create stable mutant lines. Targeting two sgRNAs (Ts1 and Ts2) to the Ism-specific domain, ISD, resulted in multiple mutant alleles. We selected a 55-bp deletion and established founder lines for this mutation (Fig. 4a, b). This deletion is predicted to yield a truncated protein (91 instead of 461 amino acids) wherein the first 91 amino acids span the signal peptide and parts of the ISD domain (Fig. 4a) and terminate before the TSR and AMOP domains. Founders (F0) were outcrossed to wild types (F1), and the mutant alleles were maintained as outcrosses. Mutants were identified by PCR, and the concomitant loss of a StyI restriction site was used to differentiate wild-type and mutant alleles (Fig. 4a′). In situ hybridization using an ism1 riboprobe yielded only faint ism1 mRNA staining in ism1 mutants, consistent with the notion that mutant ism1 mRNA, with its premature stop codon, undergoes nonsense-mediated decay (Fig. 4c, c′). Next, qRT PCR for Ism1 and Ism2 in 24 hpf embryos of ism1 mutants showed a significant reduction in Ism1 mRNA only, with no change in Ism2 levels (Fig. 4d).
Homozygous and heterozygous mutants generated by F1 in-crosses (F2) showed no developmental abnormalities and could be raised to adulthood, indicating that the loss of ism1 can be compensated for by other molecules during development. Next, we analyzed multiple markers during mes-endoderm differentiation and in the Nodal pathway but found no observable phenotypic differences between wildtype and homozygotic mutants. The hatching gland, marked by the expression of hgg1, which is derived from the pre chordal plate mesoderm and the anterior axial mesoderm, also showed no observable differences between the wildtype animals and ism1 mutants (Fig. 4e, e′). In oep mutants, hgg1 expression is strongly reduced or absent. Additionally, analysis of otx2, a marker for anterior neuroectoderm and later for the midbrain, showed a similar expression pattern in the wild-type and ism1 mutants, suggesting no defects in head formation observed in various mutants of the nodal pathway (Fig. 4f, f′). To explore whether the lack of developmental phenotypes was due to the specific allele generated, an independent sgRNA (Ts3) targeting a site within an intron downstream of the ISD was injected along with Ts1 and Ts2. The resulting 2.5-kb deletion mutant allele did not exhibit any developmental phenotype either (data not shown). Taken together, these loss-of-function results indicate that ism1 is dispensable during early embryonic development in zebrafish. Ism1 diffuses in the extracellular space, but does not directly interact with Fgf or Nodal ligands To test whether Ism1 is a secreted protein, we created an expression construct that contained the full-length Isthmin with a C-terminally fused GFP moiety (Ism1-GFP) and co-injected this with a membrane-bound mKate2 fluorescent protein (HRAS-mKate2) into one-cell-stage embryos. This resulted in a fluorescent signal at or close to the plasma membrane and in the extracellular space (Fig. 5a, a′, and a″, arrowhead), supporting the idea that Ism1 is a secreted protein. Further, overexpression of full-length Ism1-GFP did not lead to any obvious dorsalization phenotype and these animals were similar to those injected with the untagged full-length protein ( Supplementary Fig. S1a-b). Despite the dispensability of ism1 during early embryonic development, our results suggested a possible interaction between Ism1 and Fgf or Nodal signaling components. To assess if Ism1 interacts in vivo with components of either Fgf or Nodal pathways and to possibly modify their respective activities during development, we utilized fluorescence correlation spectroscopy (FCS), wherein interaction between biomolecules is detected through their correlated motion in space and time Wang et al., 2016;Yu et al., 2009). FCS is based on detecting fluorescence fluctuations in a small confocal volume (∼ 0.5 fl). Statistical analysis by autocorrelation of these fluctuations provides quantitative information on local concentrations and diffusion coefficients of the fluorescent molecules present. By cross-correlating fluorescence (FCCS) fluctuations in two spectral channels, bimolecular binding can be inferred, because only co-diffusing binding partners will lead to a considerable cross-correlation. In other words, a strong cross-correlation percentage indicates a high probability of detecting both fluorescent molecules in the observed volume due to the bimolecular interactions. 
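As a concrete illustration of the autocorrelation analysis just described, the following sketch fits a single-component, three-dimensional free-diffusion FCS model to an autocorrelation curve and converts the fitted diffusion time into a diffusion coefficient. This is not the authors' analysis code: the model form is the standard 3D Gaussian-volume expression, and the beam waist, structure factor, and synthetic data are assumptions made only for the demonstration.

```python
import numpy as np
from scipy.optimize import curve_fit

def fcs_3d(tau, N, tau_d, s=5.0):
    """Autocorrelation of a single species diffusing freely in a 3D Gaussian
    confocal volume: G(tau) = (1/N) (1 + tau/tau_d)^-1 (1 + tau/(s^2 tau_d))^-1/2,
    with N the mean particle number and s = wz/wxy the structure factor."""
    return (1.0 / N) / ((1.0 + tau / tau_d) * np.sqrt(1.0 + tau / (s**2 * tau_d)))

# Hypothetical lag times and a noisy synthetic curve standing in for data.
tau = np.logspace(-6, 0, 120)                      # lag times in seconds
rng = np.random.default_rng(0)
g_obs = fcs_3d(tau, N=8.0, tau_d=1.2e-3) + rng.normal(0.0, 2e-3, tau.size)

# Fit N and tau_d while keeping the structure factor fixed.
popt, _ = curve_fit(lambda t, N, td: fcs_3d(t, N, td), tau, g_obs, p0=(5.0, 1e-3))
N_fit, tau_d_fit = popt

w_xy = 0.25e-6                                     # assumed lateral beam waist, m
D = w_xy**2 / (4.0 * tau_d_fit)                    # diffusion coefficient, m^2/s
print(f"tau_D = {tau_d_fit*1e3:.2f} ms, D = {D*1e12:.1f} um^2/s")
```

With a calibrated beam waist, the same fit applied to the measured Ism1-eGFP autocorrelation would yield a diffusion coefficient of the kind reported below, while the cross-correlation amplitude between the two spectral channels quantifies co-diffusion.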
Embryos that were coinjected with secreted forms of mRFP (Sec-mRFP) and eGFP (Sec-eGFP) showed no cross-correlation (negative control), while a control tandem construct with mRFP and eGFP showed cross-correlation (positive control) (Fig. 5b, c). Moreover, a test employing tagged Squint and Lefty proteins (Sqt-gfp, Lef-mcherry) showed significant cross-correlation of these two factors (45%) (Fig. 5d), as previously shown (Wang et al., 2016). These results established that this FCCS system can be used to infer binding partners in vivo. Full length Ism1-GFP fusion mRNA was injected into 1-cell stage embryos, and tagged ligands of Fgf and Nodal pathways, namely, Fgf8-mcherry, Sqt-RFP, Cyc-RFP, and Lefty-RFP, were injected at the 32-cell stage. Dual color FCCS was performed to evaluate potential interactions between Ism1 and Fgf8 or the Nodal ligands. Ism1-eGFP and Fgf8a-mRFP fusion proteins showed weak cross-correlation percentage (14% compared with 45% for Sqt-lft). Similarly, Ism1-rfp showed low cross-correlation percentage with nodal ligands sqt and cyc and the nodal antagonist lft (14%, 10%, and 15% respectively, compared with 45% for Sqt-lft). These FCCS data did not reveal strong interactions between ism1 and Fgf8a, or any of the nodal signaling molecules tested (Table 3; Fig. 5d). Next, to investigate how Ism1 diffuses in the extracellular space, FCS was carried out at the animal pole in sphere stage embryos where ectopically expressed ism1-eGFP was the only source of ism1. The FCS autocorrelation curve fitted well to a threedimensional diffusion model with a diffusion coefficient of 48 ± 7 μm 2 s −1 , and 100% of the molecules were fast diffusing. Taken together, these results indicate that Ism1 freely diffuses in the extracellular space, but that there is no strong interaction with Nodal components or Fgf8 (Fig. 5e). Next, to address if overexpression of full length Ism1 (gain-of-function) has an impact on Nodal signaling, we used mes-endoderm differentiation as the read-out as this process is a consequence of active Nodal signal transduction. Ism1-GFP mRNA or mCerulean was injected at the 1-cell stage, embryos were fixed at 50% epiboly (5.3 hpf), and in situ hybridization was used to evaluate markers belonging to the mes-endoderm lineage, namely, ntl for the mesoderm and Sox32 for the endoderm. However, ISH pattern and signal intensity were indistinguishable between the gain-offunction mutants and wild-type embryos, and these results were corroborated using quantitative RT-PCR, suggesting that Ism1 may not affect nodal signaling during embryonic development in zebrafish (Fig. 5f, g). Thus, even though Ism1 diffuses freely in the extracellular space, it does not directly interact with Fgf or Nodal ligands, and the overexpression of full length Ism1 does not alter mesendoderm differentiation. Discussion Secreted proteins have the potential to influence multiple signaling cascades, for example, secreted morphogens, such as Fgf and Wnt control patterning and development in a context-and concentration-dependent manner (Bökel and Brand, 2013). While the gene expression pattern of Ism1 was interesting and suggested possible association with sites of Nodal activity, such as the dorsal hypoblast, the YSL, the mesendodermal layer of the gastrula, and the prechordal plate, lossof-function, misexpression experiments with Ism1 deletion constructs yielded contrasting results. 
Nonetheless, misexpression of an Ism1 derivative without the ISD-containing N-terminus, Ism1(∆N), caused marked dorsalization of the blastula and, sqt and other genes characteristic of the dorsal blastula, were ectopically induced in the presumptive mesendoderm. Such dorsalization and ectopic induction of nodal genes are known to be caused by the ectopic activation of Nodal signaling (Erter et al., 1998;Rebagliati et al., 1998;Feldman et al., 1998;Shimizu et al., 2000). In contrast, lossof-function Ism1 mutants did not show any developmental defects, suggesting complex molecular interactions of Ism1 (∆N) in the early blastula stage embryos. Our results with the Ism1 (∆N) deletion constructs, namely, dorsalization, and nodal signaling, are in agreement with those from a recent study in chick, wherein Ism1 was shown to interact with a Nodal ligand and type 1 receptor ACVR 1B to influence organ asymmetry (Osório et al., 2019). However, observations with full length Ism1 in chicken could not be observed in zebrafish and this could be due to sequence differences downstream of the signal peptide (Osório et al., 2014) (Supplementary Fig. S1c). Additionally, ISH and qRT-PCR analysis with full length Ism1 overexpression did not reveal problems in mesendoderm differentiation, suggesting differences in molecular interaction between full length Ism1 and the Ism1 (∆N) protein. Loss-of-function experiments with morpholinos targeting the 5′UTR and the start codon to prevent Ism1 translation in zebrafish embryos result in phenotypes affecting the hematopoietic stem and progenitor cells (HSPCs), rather than abnormalities or phenotypes resembling known nodal pathway mutants (Berrun et al., 2018). Similarly, Xiang et al. (2011) have shown that isthmin1 may be an angiogenesis inhibitor, and knocking down isthmin-1 with morpholino targeting the splicing site yielded no phenotype affecting early embryonic patterning or morphogenesis (Xiang et al., 2011). Although disorganized inter-segmental vessels were observed in the study by Xiang et al., we did not test such phenotypes in our mutant lines. Importantly, these results using morpholinos are in line with our observations with CRIPSR mutants which also showed no observable early embryonic patterning phenotype. Furthermore, it is possible that the loss of Ism1 in the early embryos could be compensated for by the presence/activity of other genes, leading to the absence of a noticeable phenotype or by the phenomenon of genetic compensation triggered by mutant mRNA (El-Brolosy et al., 2019;Rossi et al., 2015). The diffusion coefficient of Ism1 (48 ± 7 μm 2 s −1 ) is comparable to that of Fgf8a (53 ± 7 μm 2 s −1 ), suggesting free diffusion in extracellular space; however, in contrast to Fgf8 , there was no slow diffusing fraction in Ism1. The slow diffusing fraction of Fgf8 is due to its interactions with heparin sulphate proteoglycans (HSPG) in the ECM, but Ism1 may not be restricted by such molecules in the extracellular space. While these results from FCS and FCCS cannot exclude the possibility of Ism1 interacting with other signaling pathways, the complex temporal regulation of ism1 argues for a more intricate role for this factor, maybe as a multivalent cofactor in extracellular signaling. Furthermore, shared molecular motifs and domains on Ism1 also argue for the presence of such a crosstalk between signaling pathways. 
For instance, R-Spondin2, a novel secreted activator of Wnt/β-catenin signaling, also possesses a TSP1 domain that does not seem to be necessary for its Wnt-related activity (Kazanskaya et al., 2004). Moreover, ism1 is also expressed in the midbrain-hindbrain area, where its expression is critically dependent on FgfR signaling. As the midbrain-hindbrain boundary is a known source of extracellular signaling molecules of the Fgf and Wnt classes (reviewed in Raible and Brand, 2004; Gibbs et al., 2017), our results support a possible role for Ism1 in multiple signaling processes, and the availability of viable adult Ism1 mutants can shed light on its role in tissue homeostasis and regeneration. In summary, we describe in detail the expression profile of ism1 and its upstream activators during various stages of embryonic development in zebrafish. Further, the dorsalization phenotypes obtained with the deletion constructs support a role for ism1 in early patterning, while the absence of developmental defects in the CRISPR mutants suggests that Ism1 is dispensable, or that its loss is compensated for, during early embryonic development.

In situ hybridization Whole mount in situ hybridization was performed as previously described (Reifers et al., 1998), and tissue sections were obtained using tungsten needles. Plasmids used for in situ hybridizations were obtained from the following sources: cyc, M. Rebagliati; din, M. E. Halpern; eve1, M.

Fig. 4 Generation and characterization of ism1 mutants. a CRISPR gRNA target sites (Ts) were located in the exon (Ts1 and Ts2) and in the intron (Ts3). The StyI restriction site is marked in yellow. Schemes representing the protein domains in wild-type Ism1 and in the truncated version in CRISPR mutants are shown. The genotyping strategy is shown schematically in a′: loss of the StyI restriction site in the mutants results in an 837-bp band, while the wild type (Wt) yields two bands (i.e., 427 + 410 bp). b Sequence alignment between Wt and Ism1 mutant embryos shows the 55-bp deletion, with the target sites in cyan and the StyI restriction site in yellow. c and c′ Whole mount in situ hybridization in 18 somite stage (18ss; 18 hpf) embryos for Ism1 shows decreased expression in Ism1−/− embryos compared with wild-type (Wt) embryos. d qRT PCR for Ism1 and Ism2 in 24 hpf embryos of ism1−/− mutants shows a significant reduction in Ism1 mRNA only, with no change in Ism2 levels. e and e′ Whole mount in situ hybridization for hgg1, a marker for the anterior prechordal plate and the later hatching gland, shows similar expression and distribution patterns in the hatching gland between Wt and Ism1−/− embryos.

DNA constructs and injections The full ism1 coding sequence was identified in a screen for Fgf-dependent genes (our unpublished data). A partial ism2 cDNA fragment was isolated by RT-PCR using cDNA from 24 hpf embryos, and the fragment was extended using inverse PCR and 5′RACE. cDNAs for ism1, ism1:eGFP, sqt, and fgf8 were subcloned into pCS2+ vectors for in vitro transcription. The sources of other plasmids used for mRNA injection are as follows: lefty1/antivin, K. Knight and J. Yost; ∆Xfgfr4a, H. Okamoto; and dnTCF3 and constitutively active β-catenin, T. Hirano. The dominant negative Tcf3 (dnTcf3) lacks the beta-catenin binding domain and has been shown to abrogate both beta-catenin translocation to the nucleus and beta-catenin-mediated transcriptional activation (Molenaar et al., 1996). Capped mRNA for injection was transcribed from linearized plasmid templates using the SP6 mMESSAGE mMACHINE kit (Ambion).
For zebrafish injections, mRNA was injected at a default concentration of 100-200 pg/embryo at the one-cell stage.

Pharmacological treatments To block embryonic FgfR signaling, embryos were incubated with diluted SU5402 (Calbiochem/EMD Biosciences) in embryonic medium E3 at a final concentration of 16 µM as described (Reifers et al., 2000).

Confocal microscopy Ism1-eGFP mRNA-injected embryos were mounted in a drop of 1.5% low-melting temperature agarose in E3 buffer, and fluorescence was observed using a laser scanning microscope.

CRISPR-Cas9 mutagenesis Two target sites (Ts1 and Ts2) in the Ism1 cDNA were identified using the webtool CHOPCHOP (Montague et al., 2014). Both sgRNA and Cas9 mRNA were prepared as previously described (Jao et al., 2013). Cas9 mRNA was injected at 150 ng/µl, and Ts1 and Ts2 sgRNA were injected at 20 and 35 ng/µl, respectively. All injections were carried out in the wild-type AB strain, and injected embryos were raised and outcrossed with wild-type strains (AB). Founders were identified by PCR and restriction digestion of F1 embryos, which was later also confirmed by DNA sequencing. Those mutations that predicted a premature termination of the protein were maintained as heterozygotes. For the 2.5-kb deletion mutant, all three sgRNAs (Ts1-3) were coinjected at a concentration of 20 ng/µl each. The sequences for Ts1-3 are provided in Table 4.

Genotyping Genomic DNA was extracted either from single embryos (1 day old), from finclip samples (3 months old) (hot shot DNA extraction method), or from PFA-fixed embryos after in situ hybridization (Yang and Gu, 2017). PCR was performed using primers that flank the target site; the primer sequences are provided in Table 6. PCR products were digested with the restriction enzyme StyI and analyzed by gel electrophoresis, wherein wild types produced 427- and 410-bp bands, while mutants produced one 837-bp band. All genotyping results were later confirmed by sequencing.

Fluorescence correlation spectroscopy (FCS) and fluorescence cross-correlation spectroscopy (FCCS) Messenger RNA encoding Ism1-eGFP or Ism1-RFP was injected into the cytoplasm of one-cell stage embryos, and various nodal components were injected into one cell of a 32-cell-stage embryo expressing Ism1-eGFP or Ism1-RFP. FCS and FCCS were carried out in embryos at the sphere and dome stages (4 and 4.3 hpf) using an inverted confocal microscope (Zeiss LSM 780) with a 40× water immersion objective (NA 1.2). Analysis was carried out as previously described (Yu et al., 2009). Measurements were taken from multiple embryos for each molecular pair, and the experiments were repeated at least twice. The number of embryos and total readings for each molecular pair are listed in Table 3. The plasmids used for generating the various mRNAs are listed in Table 5. The fusion constructs for sqt, cyc, and lefty have been previously described and used in FCCS experiments (Wang et al., 2016). For fgf8-RFP, the fgf8-GFP construct was used as a template to replace the fluorescent protein.

Fig. 5 Ism1 interaction with Fgf8 and Nodal molecules in vivo. a Ism1-eGFP and membrane-localized mKate2 (HRAS-mKate2) were co-injected into 1-cell stage embryos, and images from live embryos were obtained at 50% epiboly (5.3 hpf). Ism1-eGFP was predominantly observed in the extracellular space (white arrowhead) and close to the plasma membrane (yellow arrowhead). a′ and a″ Magnified images of the marked region in a. b, c By cross-correlating fluorescence fluctuations (FCCS) in two spectral channels, bimolecular binding can be inferred because only co-diffusing binding partners lead to a considerable cross-correlation. Coinjection of secreted mRFP (Sec-mRFP) and secreted eGFP (Sec-eGFP) showed no cross-correlation, while a tandem construct with mRFP and eGFP showed cross-correlation. d sqt and lefty have about 45% cross-correlation, while that between Ism1 and fgf8a, sqt, cyc, or lefty was about 15%, which is in the range of random interactions (background). e The diffusion coefficient of Ism1-eGFP was measured using FCS in the extracellular space (indicated with arrowhead) of sphere stage embryos (4 hpf) injected with Ism1-eGFP mRNA at the 1-cell stage. The Ism1-eGFP diffusion coefficient was 48 ± 7 μm² s⁻¹. f and g qRT PCR analysis of Ism1-eGFP or mCerulean (control) injected embryos (mRNA or DNA) showed no significant difference in the gene expression levels of the mesoderm marker ntl or the endoderm marker sox32. Embryos were harvested at 50% epiboly (5.3 hpf), and each sample represents a pool of 10 embryos. P values from unpaired t tests are indicated within the graph. Numbers of embryos analyzed for FCS and FCCS are presented in Table 3. The total number of embryos analyzed in panel a was n = 8. Scale bar 20 μm.

Quantitative Real-Time PCR (qRT PCR) For qRT PCR analysis of Ism1 and Ism2 in Ism1−/− mutants and wild type, tail biopsies for genotyping were first taken from 24 hpf embryos, and the rest of the embryo was utilized for RNA extraction. PCR genotyping was performed to identify homozygotes before proceeding with qRT PCR analysis. Ism1-eGFP mRNA (200 pg) or the plasmid expressing Ism1-eGFP (25 pg) was injected into 1-cell stage wild-type embryos. Embryos expressing strong eGFP were selected, 10 such embryos were pooled, and 3 such groups were collected. As control, mCerulean mRNA or plasmid was injected and processed identically to the Ism1-eGFP-injected embryos. All embryos were collected at 50% epiboly (5.3 hpf), lysed in extrazol (BLIRT S.A.), and subjected to RNA extraction. All samples were treated with DNAse. One-step real-time reverse transcription PCR (Takara) was performed to quantify expression of ntl and sox32 in the Ism1-eGFP embryos, and their levels were compared with those in control embryos. Beta-actin was used as the housekeeping gene to normalize expression values. All qRT PCR primers are listed in Table 6. Fold changes were calculated using the 2^−ΔΔCT method (Livak and Schmittgen, 2001), and the two-tailed, unpaired t test was used to calculate statistical significance at a p value of 0.05 (GraphPad Prism, ver. 5.0).
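The fold-change calculation mentioned above (the 2^−ΔΔCT method of Livak and Schmittgen, 2001) is simple enough to sketch directly. The Ct values below are invented placeholders for illustration only; beta-actin serves as the reference gene, as in the study.

```python
import numpy as np

def fold_change_ddct(ct_target_treated, ct_ref_treated,
                     ct_target_control, ct_ref_control):
    """Relative expression by the 2^-ddCt method.

    dCt = Ct(target) - Ct(reference), computed per condition;
    ddCt = dCt(treated) - dCt(control); fold change = 2^(-ddCt).
    Inputs are replicate Ct values; replicates are averaged first.
    """
    d_ct_treated = np.mean(ct_target_treated) - np.mean(ct_ref_treated)
    d_ct_control = np.mean(ct_target_control) - np.mean(ct_ref_control)
    dd_ct = d_ct_treated - d_ct_control
    return 2.0 ** (-dd_ct)

# Hypothetical Ct values for ntl in Ism1-eGFP-injected vs control embryos,
# normalized to beta-actin.
fc = fold_change_ddct(ct_target_treated=[22.1, 22.3, 22.0],
                      ct_ref_treated=[16.5, 16.4, 16.6],
                      ct_target_control=[22.2, 22.4, 22.1],
                      ct_ref_control=[16.5, 16.6, 16.4])
print(f"ntl fold change (injected vs control): {fc:.2f}")
```

A fold change near 1, as in this toy example, corresponds to the "no significant difference" outcome reported for ntl and sox32.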
9,398.2
2020-12-28T00:00:00.000
[ "Biology", "Medicine" ]
Spinning U(1) gauged Skyrmions We construct axially symmetric solutions of U(1) gauged Skyrme model. Possessing a nonvanishing magnetic moment, these solitons have also a nonzero angular momentum proportional to the electric charge. features of small nuclei. The U (1) gauged Skyrme model was originally proposed by Callan and Witten to study the decay of the nucleons in the vecinity of a monopole [16]. Axially symmetric solutions of this model were constructed previously in [17], but the emphasis there was on the static properties of nucleons and not the calculation of its classical spin. We define our model in terms of the O(4) sigma model field φ a = (φ α , φ A ), α = 1, 2; A = 3, 4, satisfying the constraint |φ α | 2 + |φ A | 2 = 1, the Lagrangean of the Maxwell gauged Skyrme model is (up to an overall factor which we set equal to one) in terms of the Maxwell field strength F µν , and the covariant derivatives defined by the gauging prescription The energy-momentum tensor which follows from (1) is Here we note that the Skyrmion gauged with the purely magnetic U (1) field is a topologically stable soliton. This is stated in terms of topological lower bound on the static energy density functional of the purely magnetically gauged system, namely the T tt component of (3) with A t = 0, Defining the gauge invariant topological charge density as the gauge invariance of ̺ is manifest from (5), while it is easily checked that the finite energy conditions lead to the vanishing of the surface integral term in (6), as a result of which the topological is simple the volume integral of the first term, namely the winding number n or, Baryon charge. As was shown in [17] in detail, the energy density functional (4) is bounded from below by The ansatz.-In a cylindrical coordinate system, we parametrise the axially symmetric Maxwell connection as a(ρ, z) and b(ρ, z) corresponding to the electric and magnetic potentials, with n a positive integer -the winding number, and the polar parametrisation of the chiral field in terms of the two functions f (ρ, z) and g(ρ, z) as φ α = sin f sin g n α , φ 3 = sin f cos g, φ 4 = cos f, where ρ = |x α | 2 , α = 1, 2, and z = x 3 . In the following we will find it convenient instead to work with spherical coordinates (r, θ), i.e. ρ = r sin θ and z = r cos θ. After replacing this ansatz in (1), one finds the reduced lagrangeean The Euler-Lagrange equations arising from the variations of this Lagrangean have been integrated by imposing the following boundary conditions, which respect finite mass-energy and finite energy density conditions as well as regularity and symmetry requirements. We impose at infinity, and at the origin. For solutions with parity reflection symmetry (the case considered in this paper), the boundary conditions along the z-axis are and agree with the boundary conditions on the ρ-axis, except for g(r, θ = π/2) = π/2. It may appear from the boundary conditions (11)-(13) that the natural condition a| θ=0,π = n is not imposed. This is not done since its imposition in addition to (11)-(13) would be an an overdetermination. We have nontheless checked that a = n is satisfied on the z-axis by the numerical solutions. The constant V appearing in (11) corresponds to the magnitude of the electric potential at infinity and has a direct physical relevance. In the pure Maxwell theory, one can set set V = 0 (or any other value) without any loss of generality. 
In the U(1) gauged Skyrme model, however, such a gauge transformation would render the whole configuration time-dependent. Integration over all space of the energy density E yields the total mass-energy, E = T tt √ −gd 3 x. The total angular momentum is given by However, by using the field equations, the volume integral of the above quantity can be converted into a surface integral at infinity in terms of Maxwell potentials The field equations imply the asymptotic behaviour of the electric potential b ∼ V − Q/(2r) + O(1/r 2 ), the parameter Q corresponding to the electric charge of the solutions. Therefore the following relation holds which resembles the case of a monopole-antimonopole configuration in a YMH theory [7]. Note that the solutions discussed here possess also a magnetic dipole moment [17] which can be read from the asymptotics of the U (1) magnetic potential, A ϕ ∼ µ sin θ/r 2 . Numerical solutions.-Subject to the above boundary conditions, we solve numerically the set of four Maxwell-Skyrme equations. The numerical calculations are performed by using the program CADSOL [18], based on the iterative Newton-Raphson method. As initial guess in the iteration procedure, we use the spherically symmetric regular solutions of the pure Skyrme model. The typical relative error is estimated to be lower than 10 −3 . For a given Baryon number, the solutions depend on two continuos parameters, the values V of the electric potential at infinity and the Skyrme coupling constant κ. Here we consider solutions in the one baryon sector only, although similar results have been found for n > 1. The solutions with V = 0 have b = 0 and correspond to static dipoles discussed in [17]. A nonvanishing V leads to rotating regular configurations, with nontrivial functions f, g, a and b. Rotating solutions appear to exist for any value of κ. As we increase V from zero while keeping κ fixed, a branch of solutions forms. Along this branch, the total energy and the angular momentum increase continuously with V . The ration J/E increases also, but remains always smaller than one. At the same time, the numerical errors start to increase and we obtain large values for both E and J, and for some V max the numerical iterations fail to converge. An accurate value of V max is rather difficult to obtain, especially for large values of κ. Alternatively, we may keep fixed the magnitude of the electric potential at infinity and vary the parameter κ. In Figure 1 we present the properties of typical branches of solutions. In Figure 1a, the angular momentum and the energy are parametrised by V for several fixed value of κ, while in Figure 1b these quantities are parametrised with κ for several fixed values of V , including V = 0 corresponding to the non-spinning soliton. The energy bound in the purely magnetically gauged case with V = 0 is not saturated, as is the case also for the ungauged Skyrmion. We expect likewise that this numerically constructed solution is topologically stable, but cannot estimate the energy excess above the lower bound analytically. One can see from Figure 1b that, for a given value of κ, the energy of the spinning soliton is always smaller than the energy of the ungauged Skyrmion, but is larger than the energy of the corresponding nonspinning static gauged solution. 
The latter is gauged only with the magnetic field and minimises the energy functional, while the spinning system gauged with both the magnetic and the electric fields minimises the non-positive definite Lagrangian density, and the additional electric field does not feature in the topological lower bound. As a result, the spinning, electrically charged, solutions have higher energies than the static ones. The situation here is identical with that of the Julia-Zee dyon, in this respect. In Figure 2a we plot the energy density E = T tt , and in Figure 2b the angular momentum density T ϕt of a typical n = 1 solution as a function of the coordinates ρ, z, for κ = 0.72, V = 0.067. We notice that the energy density ǫ = T tt does not exhibit any distinctly localised individual components, a surface of constant energy density being topologically a sphere. However, this is a deformed sphere such that the profiles of E = T tt (r, θ) versus r for each value of θ are distinct and non overlapping. It presents a peak on the symmetry axis, and the density profiles decrease monotonically with r. Also, the electrically charged U (1) gauged Skyrmion rotates as a single object and the T ϕt -component of the energy momentum tensor associated with rotation presents a maximum in the z = 0 plane and no local extrema (see Figure 2b). Conclusions.-We have presented here the first example of spinning solution residing in the one-soliton sector of the theory which has a topologically stable limit. These solutions of the U (1)-gauged Skyrme model carry mass, angular momentum, electric charge and a magnetic dipole momentum. The electric charge is induced by rotation and equals the total angular momentum. Similar qualitative results have been found by adding to the Lagrangean (1), a self-interaction potential of the O(4) scalar field representing the pion mass. Nonzero pion masses lead to larger values for the energy and angular momentum. Also, we have found that similar to the ungauged case, the spinning Skyrmions admit gravitating generalisations, which are currently under study. These solutions satisfy also the generic relation (16).
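Since the numbered equations did not survive extraction in this copy, it may help to collect in one place the asymptotic relations and global charges quoted in the text. The index placement and normalizations below are assumptions based on standard conventions; this is not a reconstruction of the paper's own equations (1)-(16).

```latex
% Global charges (standard conventions assumed):
E \;=\; \int T_{t}{}^{t}\,\sqrt{-g}\;d^{3}x \,, \qquad
J \;=\; \int T_{\varphi}{}^{t}\,\sqrt{-g}\;d^{3}x \,.
% Asymptotics of the Maxwell potentials as quoted in the text:
b \;\sim\; V \;-\; \frac{Q}{2r} \;+\; \mathcal{O}\!\left(r^{-2}\right) \,, \qquad
A_{\varphi} \;\sim\; \frac{\mu\,\sin\theta}{r^{2}} \,,
% with Q the electric charge and \mu the magnetic dipole moment; per the
% conclusions, the total angular momentum of these solutions satisfies
% J \propto Q, with J = Q for the n = 1 configurations discussed here.
```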
2,160.2
2005-09-01T00:00:00.000
[ "Physics" ]
Molecular Prevalence and Identification of Ehrlichia canis and Anaplasma platys from Dogs in Nay Pyi Taw Area, Myanmar Ticks are vectors of different types of viruses, protozoans, and other microorganisms, including Gram-negative prokaryotes of the genera Rickettsia, Ehrlichia, Anaplasma, and Borrelia. Canine monocytic ehrlichiosis caused by Ehrlichia canis and canine cyclic thrombocytopenia caused by Anaplasma platys are of veterinary importance worldwide. In Myanmar, there is limited information concerning the tick-borne pathogens Ehrlichia and Anaplasma spp., as well as the genetic characterization of these species. We performed nested PCR for the gltA gene of Ehrlichia spp. and the 16S rRNA gene of Anaplasma spp. with blood samples from 400 apparently healthy dogs in the Nay Pyi Taw area. These amplicon sequences were compared with other sequences from GenBank. Among the 400 blood samples from dogs, 3 (0.75%) were positive for E. canis and 1 (0.25%) was positive for A. platys. The partial sequences of the E. canis gltA and A. platys 16S rRNA genes obtained were highly similar to E. canis and A. platys isolated in other countries. Introduction Tick-borne bacteria and parasites are important pathogens of domestic dogs and are potentially of public health significance. At least five bacterial species, Ehrlichia canis, E. chaffeensis, E. ewingii, Anaplasma platys, and A. phagocytophilum, have been reported in domestic dogs [1]. E. canis is transstadially transmitted by the brown dog tick, Rhipicephalus sanguineus; all feeding stages of the tick can transmit the infection to susceptible dogs, and nymphs and adults can transmit E. canis for at least 155 days after detachment from an infected host [2]. Ehrlichia canis was the first member of the Rickettsiales described in dogs and is the causal agent of canine monocytic ehrlichiosis (CME), which has a worldwide distribution, particularly in tropical and subtropical regions [3][4][5]. These bacteria are classified in the family Anaplasmataceae, which includes obligate intracellular prokaryotic parasites that reside within a parasitophorous vacuole [6]. In canine hosts, E. canis is infective for monocytes [7]. Anaplasma platys infections in dogs are distributed throughout the world. A. platys is the causative agent of canine infectious cyclic thrombocytopenia, which infects the platelets, although infected dogs often show no clinical signs [8]. A. platys infection is difficult to detect not only in vivo, because of the low bacteremias, but also serologically, because of cross-reaction with other Anaplasma species [9,10]. Thus, a PCR assay is a reliable method for the detection of A. platys infection in dogs [11]. The objectives of this study were to determine the presence of E. canis and A. platys in dogs and to compare Myanmar isolates with those from other regions. Herein, we used nested PCR and phylogenetic analysis to detect the molecular characteristics of E. canis and A. platys infections from dogs in the Nay Pyi Taw area, Myanmar. Study Site and Sample Collection. This study was conducted in four townships, including Lewe (Figure 1). Between December 2016 and March 2017, blood samples were collected from 400 apparently healthy dogs. From the urban and rural areas of each township, 100 dogs were sampled. Most of the dogs in rural Myanmar are free-roaming, although they belong to owners. Before taking blood samples, we explained the aim of the study to each owner and obtained consent for the experiment from the dog owners.
Blood collection (approximately 3 ml) was performed from the saphenous vein and jugular vein, and samples were put into ethylene diamine tetraacetic acid (EDTA) tubes. All collected samples were transferred to the laboratory at 4°C. DNA extraction was conducted within 24 hr of sample collection. During blood collection, dogs were examined for the presence of ticks, and if present, ticks were collected in plastic containers containing a small piece of wet sponge for further taxonomic identification. DNA Extraction from Canine Blood. Extraction of DNA from the blood samples was conducted using a commercial DNA extraction reagent (DNAzol®) (Molecular Research Center, Inc., USA) according to the manufacturer's instructions [12]. The volume of blood used for DNA extraction was 100 μl. The extracted DNAs were eluted in 200 μl elution buffer and stored at −80°C. DNA concentration was estimated using a NanoDrop 2000 spectrophotometer (ThermoFisher Scientific, MA, USA). Polymerase Chain Reaction (PCR) to Amplify Ehrlichia and Anaplasma spp. For Ehrlichia spp., seminested PCR amplification of the gltA gene fragment was performed using a SimpliAmp Thermal Cycler (Applied Biosystems, USA) as previously described [13]. Outer primers, EHRCS-131F (CAGGATTTATGTCTACTGCTGCTTG) and EHRCS-1226R (CCAGTATATAAYTGACGWGGACG), were used for the amplification of the first-round product (1,096 bp), and inner primers, EHRCS-131F (CAGGATTTATGTCTACTGCTGCTTG) and EHRCS-879R (TIGCKCCACCATGAGCTG), were used for the amplification of the second-round product (748 bp). For Anaplasma spp., seminested PCR amplification of the 16S rRNA gene fragment was performed according to Inokuma et al. [14]. Outer primers, fD1 (AGAGTTTGATCCTGGCTCAG) and EHR16SR (TAGCACTCATCGTTTACAGC), were used for the first-round product (1,000 bp), and inner primers, EHR16SD (GGTACC(C/T)ACAGAAGAAGTCC) and Rp2 (ACGGCTACCTTGTTACGACTT), were used for the second-round product (1,000 bp). The PCR mixture contained approximately 20-100 ng of extracted DNA, 0.3 μM of each primer, 0.025 U/μL of Tks Gflex™ DNA polymerase (Takara Bio Inc., Tokyo, Japan), and 1× Gflex buffer in a volume of 25 μL. For both species, cycling conditions were denaturation for 1 min at 94°C, followed by 98°C for 10 s. The annealing temperature used was 50°C for 15 s for Ehrlichia spp. and 55°C for Anaplasma spp., followed by 68°C for 90 s, for 40 cycles, and a final extension for 5 min at 68°C. The PCR products were visualized by electrophoresis on 1.5% agarose gels stained with RedSafe (NIPPON Genetics, Düren, Germany). Multiple sequence alignments of positive amplicons and gltA and 16S rRNA sequences from GenBank were performed using ClustalW version 1.8 [15]. Phylogenetic trees were inferred using neighbor-joining (NJ) analysis in MEGA software version 7.0 [16]. The distance matrix of nucleotide divergences was calculated according to Kimura's two-parameter model furnished by MEGA (a minimal illustrative sketch of this distance calculation is given below). A bootstrap resampling technique of 1000 replications was performed to statistically support the reliabilities of the nodes on the trees. Ethical Considerations Experiments were carried out in accordance with the guidelines laid down by the Institutional Ethics Committee. All studies using animal subjects were approved by the Ethics Committee of the University of Veterinary Science, Nay Pyi Taw, Myanmar (approval no. 309/Katha (postgraduate)/2016). Results and Discussion Of the 400 dogs analyzed, 3 samples (0.75%) were positive for E. canis, while 1 sample (0.25%) was positive for A. platys (Table 1, Figures 2(a) and 2(b)).
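The Kimura two-parameter distance used for the NJ trees described above is straightforward to compute from a pairwise alignment. The sketch below only illustrates that calculation (in the study it is delegated to MEGA); the two short sequences are invented placeholders, not the study's gltA or 16S rRNA amplicons, and gap/ambiguity handling is simplified.

```python
import math

PURINES = {"A", "G"}
PYRIMIDINES = {"C", "T"}

def kimura_2p_distance(seq1, seq2):
    """Kimura two-parameter distance between two aligned sequences.

    P = proportion of transitions (A<->G, C<->T), Q = proportion of
    transversions; d = -0.5*ln(1 - 2P - Q) - 0.25*ln(1 - 2Q).
    Sites with gaps or ambiguous bases are skipped (simplification).
    """
    assert len(seq1) == len(seq2), "sequences must be aligned"
    transitions = transversions = valid = 0
    for a, b in zip(seq1.upper(), seq2.upper()):
        if a not in "ACGT" or b not in "ACGT":
            continue
        valid += 1
        if a == b:
            continue
        if ({a, b} <= PURINES) or ({a, b} <= PYRIMIDINES):
            transitions += 1
        else:
            transversions += 1
    P, Q = transitions / valid, transversions / valid
    return -0.5 * math.log(1 - 2 * P - Q) - 0.25 * math.log(1 - 2 * Q)

# Toy aligned fragments (placeholders only).
s1 = "ATGGCTACCTTGTTACGACTTAGGCC"
s2 = "ATAGCTACCTTGTCACGACTTAGACC"
print(f"K2P distance = {kimura_2p_distance(s1, s2):.4f}")
```

Such pairwise distances, computed over all aligned sequences, form the matrix from which the neighbor-joining tree and its bootstrap support values are built.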
Descriptive data of the sampled dogs and tick infestation are shown in Table 2. All PCR-positive samples for A. platys and E. canis were confirmed by sequencing. In the phylogenetic trees based on gltA genes, the E. canis detected in dogs T1, T8, and T9 grouped in the same cluster as other E. canis strains, supported by a 100% bootstrap value. The A. platys 16S rRNA gene from dog Z4 was found in the same cluster as other A. platys strains, supported by a 100% bootstrap value. The sequences obtained were similar to those of E. canis strains from the Philippines, Italy, Spain, France, China, and Thailand (GenBank accession no. JN391409, AY647155, AY615901, AF304143, KX987357, KU765198, and KU765199) with similarities of 98.46-100% (Figure 3). The A. platys 16S rRNA sequences obtained were similar to those of A. platys strains from India, Thailand, Italy, Okinawa, Croatia, China, Spain, and South Africa (GenBank accession no. KT982643, Figure 4). The results from the phylogenetic analysis confirmed that the amplified genes belong to the respective species. Sequences generated in the present study have been submitted to GenBank under accession numbers LC545959 to LC545962. In this study, molecular identification from 400 local dog samples demonstrated a prevalence of 0.75% for E. canis infection and 0.25% for A. platys infection. There were no mixed infections in this study. According to these findings, E. canis was the more common canine tick-borne pathogen compared with Anaplasma spp. The present results indicate a low prevalence of subclinical infection in dogs. In Turkey, the prevalence of E. canis in asymptomatic dogs was 4.9%, that of A. platys was 0.5%, and mixed infections of E. canis and A. platys were detected in 0.3% [17]. In Brazil, only 4.8% of the dogs were seroreactive to E. canis [18]. Previous studies have described molecular prevalences of E. canis ranging from 3.1% to 88% [19][20][21][22][23]. The variation might be due to the sample size, climatic conditions that directly influence the tick population, and the time of sample collection. A higher prevalence of E. canis and A. platys has also been reported by some workers. In Praia, Cape Verde, Gotsch et al. [24] reported that PCR positivity in dogs was 26.2% for E. canis and 7.7% for A. platys. In North Carolina, USA, 33% of 27 dogs were A. platys PCR-positive [25]. In Okinawa, Japan, 32% of 200 stray dogs were positive by A. platys-specific PCR [26]. In those previous studies, the positive dogs were sick animals with clinical signs compatible with vector-borne diseases that had been admitted for medical treatment, whereas in the present study all the dogs sampled were apparently healthy. Anaplasma platys is a thrombocytotrophic bacterium of dogs that is characterized by clinical abnormalities such as fever, anorexia, petechial haemorrhages, and uveitis [27]. This is the first detection of A. platys infection in dogs in Myanmar. The prevalence of A. platys in this study (0.25%) was lower than that reported in Italy (4%), Nigeria (6.6%), and Venezuela (16%). In Portugal, A. platys DNA has been detected in clinically suspected dogs living in the north and south of the country [28], while the overall national seroprevalence of Anaplasma spp. has ranged from 4.5% in apparently healthy dogs to 9.2% in clinically suspect dogs [29]. The lower prevalence of A. platys in this study might be due to different DNA extraction methods, and the local breed of the examined dogs may be genetically resistant to tick-borne pathogens. In this study, older dogs were more likely to be positive and could have a greater risk of tick-borne diseases. Moreover, younger dogs might be maternally immune to tick-borne infection. Since local dogs are free roaming in rural areas and have never been treated for ticks or had ticks removed, they may have acquired natural resistance to tick-borne diseases. However, further studies are necessary to identify E. canis and A. platys infections in both ticks and hosts. In this study, all the tick samples collected during sampling were morphologically and molecularly identified as R. sanguineus (data not shown). However, the occurrence of tick infestation in dogs in the study area was low (11%, 44/400) [30]. A total of 237 ticks were collected from 44 dogs, with an average of 4-5 ticks per dog. Three out of four positive dogs were infested with ticks in the studied areas. These data suggest that E. canis and A. platys might be shared by the same vector, R. sanguineus. In Myanmar, Chel [31] reported that the prevalence of the R. sanguineus tick in the Nay Pyi Taw area was 0% in the summer season, 84.7% in the rainy season, and 15.3% in the winter season. Asebe et al. [32] also noted that in tropical climates there is a marked decrease in the tick population at the end of the rainy season, with a progressive fall to almost zero in the dry season. In fact, as stated by Huang et al. [33], one of the reasons for the low prevalence of E. canis and A. platys might also be the very small number of R. sanguineus ticks collected in the present study. Moreover, this might be due to the climatic conditions during the sampling period (from December to March), which were not favourable for the development and survival of R. sanguineus. The partial sequences of the gltA and 16S rRNA genes obtained in this study were highly similar to strains of E. canis and A. platys isolated from other countries. This implies that the E. canis and A. platys isolates found in Myanmar are not divergent from the strains of other countries, possibly because the transboundary movement of domestic and wild animals carries infected ticks between Myanmar and neighboring countries. The vectors might thus distribute genetically similar pathogens among these countries. Conclusion The findings of this study provide basic information regarding E. canis and A. platys infection in Myanmar. Further research on the genetic diversity of E. canis and A. platys in other areas of Myanmar should be conducted. Data Availability Sequences generated in the present study have been submitted to GenBank under accession numbers LC545959 to LC545962. Conflicts of Interest The authors declare that they have no conflicts of interest.
2,963.8
2021-02-08T00:00:00.000
[ "Biology" ]
Commercial bank digital transformation, information costs, and corporate financial constraints ABSTRACT The rapid development of digital finance has transformed traditional financial institutions. We investigate the effect and economic consequences of bank digital transformation on corporate financial constraints using data from China. The results show that bank digital transformation alleviates corporate financial constraints by decreasing information search, processing, and verification costs. Furthermore, the effect of bank digital transformation on corporate financial constraints is more pronounced for firms with higher contract intensity, more intangible assets, and poorer external information environment. We also find that bank digital transformation alleviates corporate financial constraints by increasing debt financing. In addition, we show that digital transformation promotes lending by big banks, resulting in the crowding-out effect. Finally, we find that bank digital transformation promotes the flow of credit resources to non-zombie firms, which effectively improves credit allocation efficiency. This paper extends research on digital finance and new structural finance from the perspective of bank digital transformation. Introduction The early lending market in China mainly comprised pawnshops, which required significant collateral and set extremely high interest rates (Li, 2002).During the Age of Discovery, the modern commercial bank began to take shape.The Industrial Revolution further stimulated the development of banking credit.The Information Revolution, which was marked by the invention and application of electronic computers, brought unprecedented changes to the banking industry.Economic changes and technological advances have caused the credit market to germinate, grow and then blossom, and the banking industry to evolve from manual bookkeeping to basic computerisation to the advanced use of information technology.However, neither traditional pawnshops nor modern commercial banks have fundamentally changed the basic tenet of using assets as collateral for loans. CONTACT Guochao Yang<EMAIL_ADDRESS>of New Institutional Accounting, School of Accounting, Zhongnan University of Economics and Law, No.182 Nanhu Avenue, Wuhan, China Paper accepted by Guliang Tang. With the current wave of digitalisation, data have become the fifth production factor adding to the well-established four: labour, land, knowledge, and technology.Data assets are the core assets of many emerging companies.However, despite their great value, there are no reliable accounting methods for measuring data assets.The inability of financial statements to reflect the actual value of data assets is a serious challenge for corporate valuation and has created a huge gap between the asset structure of many emerging companies and the collateral-based bank credit placement model.This, coupled with the fact that the lending decisions of commercial banks in China rely heavily on hard assets available for collateral (Qian et al., 2019), excludes companies with limited hard assets but high growth prospects from the financial market.This significantly restricts companies' incentives to innovate and develop.Accordingly, the inadequate supply of traditional financial services seriously constrains the transformation and highquality development of the Chinese economy. 
The gradual integration of digital finance methods and credit business could alleviate the mismatch between firms and credit resources.By using new technological tools such as information technology and big data in the credit approval process, banks could decrease the information search, processing, and verification costs and with better information, lending decisions would rely less on collateral (Cortina Lorente & Schmukler, 2018;Gong et al., 2021;Liang & Zhang, 2020;Sutherland, 2020;Zhang et al., 2021;Zhao & Tan, 2012).Specifically, banks can use these new technological tools to collect or obtain data on firms' e-commerce transactions, logistics, and settlements and on taxation, thus reduce information search costs.These technologies also allow banks to make efficient use of large datasets through data mining and feature extraction processes.They can create corporate portraits that can be used to judge corporate credit risks and approve credit limits, which helps to reduce information processing costs.In addition, digital supply chain finance systems combine the four flows (i.e.information flow, commercial flow, logistics, and capital flow) of business processes with the financing information chain, thus enhancing data credibility and reducing information verification costs. In summary, bank digital transformation may alleviate information asymmetry between banks and companies by decreasing information search, processing, and verification costs, and thus alleviating corporate financial constraints.For example, Zhao and Tan (2012) find that e-commerce platforms reduce information asymmetry between banks and companies, thereby alleviating the mismatch of credit resources.Gong et al. (2021) show that digital supply chain financial services help banks to obtain high-quality verifiable information, which improves credit allocation efficiency.However, digital finance may also lead to risk-taking behaviours of banks through scale effects and competition effects (Dai & Fang, 2014;Guo & Shen, 2019;Qiu et al., 2018), reducing resource allocation efficiency and exacerbating corporate financial constraints.Thus, the effects of bank digital transformation on corporate financial constraints in the context of digital finance remain an empirical question. 
To investigate the impact of bank digital transformation on corporate financial constraints, we use the Bank Digital Transformation Index data from the Institute of Digital Finance at Peking University and hand-collected bank loan data of Chinese listed companies to construct bank digital transformation indicators at the firm level.Our findings are as follows.First, bank digital transformation can alleviate corporate financial constraints.We get similar results when using three subs-indices of bank digital transformation (organisational, product, and cognitive), and taking into consideration potential measurement errors, omitted variables, and other endogeneity issues.Second, the effect of bank digital transformation on corporate financial constraints is more pronounced for firms with higher contract intensity, more intangible assets, and a poorer external information environment.Furthermore, bank digital transformation alleviates corporate financial constraints through increasing debt financing rather than lowering financing cost.Third, digital transformation promotes lending by big banks, resulting in the crowdingout of smaller banks.That is, digital transformation reinforces the information advantage of big banks.Finally, digital transformation drives banks to provide loans to non-zombie firms, contributing to improved credit allocation efficiency. The main contributions of this paper are as follows.First, our study contributes to the digital finance literature by exploring the impact of banks' implementation of digital finance tools on corporate financial constraints.Prior research on digital finance mainly focuses on P2P lending, equity crowdfunding, and mobile payments (Li & Shen, 2019;Liao et al., 2014;Mollick, 2014;Tang, 2019;Vallee & Zeng, 2019;Wang & Huang, 2018).Unlike P2P lending, equity crowdfunding, and mobile payments, which mainly serve individuals and micro and small enterprises, bank loans are the most important source of external financing for firms in China.1 Therefore, understanding the impact of digital finance on the real economy in China requires a full consideration of its impact on banks, which play a dominant role in the Chinese financial system.However, there are few studies on the effects of banks' implementation of digital finance.Thus, our study expands the digital finance literature by showing that bank digital transformation can narrow the information gap between banks and companies and hence alleviate corporate financial constraints. 
Second, our study re-examines the potential changes in the market structure of the banking industry in a digital economy.A common assumption is that under new structural finance, optimal banking structures vary between countries based on a country's stage of development (Lin & Li, 2001;Zhang et al., 2019).The logic of this assumption is that small banks have better access to soft information in local credit markets than big banks and thus perform better in servicing small business credit needs.However, technological developments may change the way banks overcome information asymmetry, altering the comparative advantages of big and small banks.We find that bank digital transformation significantly increases the marginal gains of big banks in processing soft information, thereby challenging the information advantage of small and medium-sized banks in local credit markets and resulting in a crowding-out effect.Therefore, by identifying the impact of digital transformation on banks' credit operations, our study further enriches the literature on new structural finance. Finally, our findings have some practical value.How to improve the credit allocation efficiency of traditional financial markets and effectively ease corporate financing woes are important issues in China's financial reform.Thus, our study, showing the positive effect of the digital transformation of banks on the efficiency of credit resource allocation, has important implications for policymakers seeking to effectively alleviate corporate financial constraints. The remainder of this paper is organised as follows.Section 2 introduces the institutional background and hypothesis development.Section 3 describes the research design.Section 4 discusses our main results.Section 5 presents additional analysis, and Section 6 concludes the study. Institutional background We use the Bank Digital Transformation Index data from the Institute of Digital Finance at Peking University to measure the digital transformation of commercial banks in China.This index, which started in 2010, has been used to analyse the digital transformation of commercial banks in China (Wang & Xie, 2021).It collects texts and financial data from the annual reports of 228 banks, including 6 large state-owned commercial banks, 12 jointstock commercial banks, 121 urban commercial banks, 51 rural commercial banks, 24 foreign banks, and 14 private banks.As of the end of 2018, these banks held 98.35% of the total assets of commercial banks in China.Thus, the banks in this index system are reliable representatives of China's commercial banks. 
Figure 1 illustrates the three dimensions used to construct the Bank Digital Transformation Index: cognition, organisation, and product.Cognition, as a precursor to behaviour, is the commercial bank's awareness, understanding, and attention to the technological changes associated with digital finance.Thus, a higher score on the cognitive sub-index indicates a higher likelihood that a bank uses digital technology to transform and upgrade its financial services.Organisational reforms help banks implement digital finance transformation effectively and successfully, as the ability of commercial banks to innovate, execute, and integrate their business strategies is dependent on their internal structure.In particular, setting up a professional digital finance department is vital for laying out and coordinating related processes.For example, commercial banks can establish a Fintech committee at the head office to help the whole bank formulate appropriate development strategies.Finally, for the product dimension, integrating digital technology and financial products helps banks customise their financial services and provide added value to companies. The Bank Digital Transformation Index consists of three sub-indices, representing the three dimensions.The cognitive sub-index measures the frequency of keywords related to digital finance in commercial banks' annual reports, such as Fintech, technology finance, and cloud computing.The organisational sub-index captures whether commercial banks have carried out relevant organisational reforms, such as setting up digital transformation-related departments, hiring IT executives or directors, and cooperating with digital finance companies.The product sub-index measures whether commercial banks have launched products such as mobile banking, WeChat banking, online loans, or e-commerce. Figure 2 illustrates the changes in the Digital Transformation Index and sub-indices scores of commercial banks in China from 2010 to 2018.As shown in the figure, the digital transformation of banks in China has steadily increased over this period, with the index score rising from 12.28 in 2010 to 82.32 in 2018.Furthermore, after the introduction of Internet finance in 2013 and the establishment of the Fintech Committee by the People's Bank of China (PBC) in 2017, the speed of this digital transformation substantially increased. Hypothesis development The rise of the digital economy has not only challenged banks' reliance on collateral to make lending decisions but has also changed the incentives for banks to gather and generate information.Specifically, before the emergence of the digital economy, collateral was a necessary and sufficient condition for banks' lending decisions, resulting in a lack of incentives for banks to collect costly information.Gorton and Ordonez (2014) find that banks are reluctant to collect and generate information when the perceived quality of collateral is high, which is common in China's banking system due to its excessive emphasis on collateral (Song et al., 2011).However, in the digital economy, business modes are gradually changing, and firms often lack tangible assets, which is forcing banks to make substantial changes to collect information to avoid losing customers. 
We examine the impact of bank digital transformation on corporate financial constraints from the perspective of information search, processing, and verification costs.First, digital transformation helps decrease banks' information search costs.Digital technology can provide easy access to multiple dimensions of information (Berg et al., 2020;Goldstein et al., 2019, Liberti & Petersen, 2019;Xie & Zou, 2012;Zhu, 2019), such as real-time operational data (e.g.e-commerce transactions and logistics) and external information (e.g.taxation).For example, Zhao and Tan, (2012) show that banks can use e-commerce platforms to collect information on firms' business and transactions, as well as information about peer firms for comparative analysis of investment risks.Zhu (2019) shows that big data, including satellite images, provide granular indicators of fundamentals and thus help banks to predict firms' future profitability.Liberti et al. (2022) find that information sharing helps banks improve access to credit and build relationships with high-quality borrowers.Therefore, digital technology helps reduce information asymmetry between banks and firms by decreasing information search costs. Second, digital transformation helps decrease banks' information processing costs.Digital technologies such as blockchain, cloud computing, and artificial intelligence can efficiently process massive amounts of data, thereby reducing banks' information processing costs.Information in traditional bank databases is uncoded and unstructured, and each project constitutes a separate data set.Thus, it is difficult for banks to integrate, process, and analyse these data.In contrast, using data mining techniques, banks can transform raw data into useful information that can directly improve lending decisions, resulting in a better assessment of a company's creditworthiness.For example, Barboza et al. (2017) and Jiang et al. (2018) find that machine learning can help banks predict corporate bankruptcy risk.Lee et al. (2019) construct an analytical framework that includes FinTech, information frictions, and financing gaps, and find that both information processing technology and information collecting technology can help banks identify firm quality and thus close the financing gap by lowering the probability of the misclassification of good firms as bad firms.Therefore, bank digital transformation helps reduce information asymmetry between banks and firms by decreasing information processing costs.Third, digital transformation helps decrease banks' information verification costs by cross-validating data.Before the emergence of the digital economy, banks generally verified the reliability of a company's financial information through offline due diligence.In contrast, the multi-dimensional information obtained by digital technologies makes it easier to certify the reputation and trustworthiness of firms.For example, digital supply chain finance enhances data credibility by combining the four flows in business processes (i.e.information flow, commercial flow, logistics, and capital flow) with upstream information.Gong et al. (2021) note that digital supply chain financial services help banks to obtain high-quality verifiable information, thereby improving credit allocation efficiency.Bushman et al. 
(2017) find that media coverage of borrowers can reduce information asymmetry not only between lenders and borrowers but also within a bank syndicate, thus attracting more non-relationship lenders to participate in loan syndication and reducing the cost of debt. Therefore, bank digital transformation helps reduce information asymmetry between banks and firms by decreasing information verification costs. Furthermore, in the ex-ante screening process, digital tools help banks to improve the assessment of corporate creditworthiness by building algorithms that harden corporate soft information and by processing big data (Sutherland, 2018, 2020; Tang et al., 2020). Digital transformation also improves banks' financial services by allowing them to customise financial services to meet borrowers' specific financial needs. In the ex-post monitoring process, digital transformation helps banks improve credit risk management (Sutherland, 2020; Tang et al., 2020). For example, digital tools help banks to monitor the actions of borrowers and prevent high-risk borrowers from engaging in high-risk investments. Thus, bank digital transformation can decrease information asymmetry and improve credit allocation efficiency, and this alleviates firms' financial constraints. This leads to our main hypothesis. H1: Bank digital transformation is negatively associated with corporate financial constraints. Research design Following the previous literature (e.g. Custódio & Metzger, 2014; Fazzari et al., 1988; Jiang et al., 2019), we construct the following model using firm-year observations:

INV_i,t = α0 + α1·BF_i,t−1 + α2·CF_i,t + α3·BF_i,t−1×CF_i,t + β′X_i,t + γ′Z_c,t + η_i + φ_t + ε_i,t (1)

where INV_i,t, the ratio of investment to assets for firm i in city c and year t, is measured as the capital expenditure scaled by lagged total assets, and cash flow (CF) is measured as the net cash flow from operating activities scaled by lagged total assets. The independent variable BF_i,t is the weighted digital transformation index (BankTech) or sub-indices (BankOrg, BankPduc, or BankCon) of the banks that lend to firm i in year t. To construct a weighted bank digital transformation index at the firm-year level, we use the Commercial Bank Digital Transformation Index data from the Institute of Digital Finance at Peking University and employ firm-year-bank observations where a lending relationship exists:

BF_i,t = Σ_j Ratio_i,j,t × BF_j,t (2)

where BF_j,t is the digital transformation index or sub-indices for bank j in year t and Ratio_i,j,t is defined as the proportion of the loans provided by bank j to the sum of bank loans, based on hand-collected bank loan information. To mitigate potential endogeneity issues, we use a one-period lag of bank digital transformation in equation (1). Meanwhile, to facilitate interpretation of the regression coefficients, we use the standardised digital transformation index or sub-indices as independent variables. We follow Custódio and Metzger (2014) and Jiang et al.
(2019) to include the following firm-level control variables (X): first, firm ownership characteristics: whether or not the firm is a state-owned enterprise (SOE); second, financial indicators: size (SIZE), leverage (LEV), growth (TobinQ), performance (OPROA), and the proportion of fixed assets (PPE); third, corporate governance: shareholding of the first largest shareholder (FIRST) and board independence (INDP). We also include region-level control variables (Z): marketisation level (MktIndex), regional GDP growth rate (GDPG), and GDP per capita (GDPPer). η_i and φ_t are firm and year fixed effects. We expect α3 to be significantly negative. Table 1 shows the variable definitions. Following Petersen (2009), standard errors are clustered at the firm level.

Table 1. Variable definitions.
INV: Corporate investment, capital expenditure scaled by lagged total assets. Capital expenditure is calculated as: cash paid to acquire fixed assets, intangible assets, and other long-term assets + cash paid to acquire subsidiaries and other business entities − cash from disposing of fixed assets, intangible assets, and other long-term assets − cash from disposing of subsidiaries and other business entities − depreciation of fixed assets − amortisation of intangible assets − deferred expenses.
CF: Cash flow, computed as the net cash flow from operating activities scaled by lagged total assets.
BankTech: The degree of bank digital transformation, the Bank Digital Transformation Index weighted by the loan amount at the firm-year level.
BankOrg: The organisational dimension of the Bank Digital Transformation Index.
BankPduc: The product dimension of the Bank Digital Transformation Index.
BankCon: The cognitive dimension of the Bank Digital Transformation Index.
Control variables:
SOE: An indicator equal to one if the firm is an SOE, and 0 otherwise.
SIZE: The logarithm of the total market capitalisation of the company.
LEV: Corporate leverage, total liabilities divided by total assets.
TobinQ: Ratio of the market value of equity to the book value of equity.
OPROA: Company performance, operating profits divided by total assets.
PPE: The proportion of fixed assets, fixed assets divided by total assets.
FIRST: Ownership of the first largest shareholder.
INDP: Board independence, the proportion of independent directors.
GDPG: GDP growth rate of the city where the company is located.
GDPPer: The logarithm of GDP per capita in the city where the company is located.

Sample selection Our sample includes A-share non-financial listed firms from 2011 to 2019. The key variable of interest is the degree of bank digital transformation, which is measured using the Commercial Bank Digital Transformation Index data from the Institute of Digital Finance at Peking University (Wang & Xie, 2021). We manually collect information on material bank loans disclosed in annual reports and interim announcements of listed companies to obtain detailed firm-year-bank loan information, and other financial data are obtained from the China Stock Market & Accounting Research Database (CSMAR). The sample selection process is as follows. First, we delete observations where the name of the bank or the amount of bank loans is missing. Second, we delete observations with missing bank digital transformation indices. Third, we exclude financial firms. This process leaves us with a final sample of 9,437 firm-year observations. Summary statistics Table 2 reports the summary statistics. We winsorise all continuous variables at the 1st and 99th percentiles. According to Table 2, the mean value of firm investment level (INV) is 0.06 and the mean value of cash flow (CF) is 0.04, which is consistent with the results of Jiang et al. (2016). The mean value of the bank digital transformation index (not standardised) (BankTech) is 76.20. The impact of bank digital transformation on corporate financial constraints The regression results of equation (1) are shown in Table 3. Column (1) of Table 3 shows that the coefficient on the interaction term BF*CF is −0.041 and significantly negative at the 5% level, indicating that bank digital transformation helps to reduce corporate investment-cash flow sensitivity. Columns (2)-(4) of Table 3 present the results for the three sub-indices, which are consistent with those in column (1). Therefore, the above evidence shows that bank digital transformation helps to alleviate corporate financial constraints and supports H1. Endogeneity tests to solve the problem of measurement error The bank digital transformation index is correlated with bank characteristics (e.g. size) and may reflect banks' ability to provide services to firms, suggesting that there may be measurement error in constructing BF. Specifically, larger banks are more likely to serve larger firms with fewer financial constraints. Meanwhile, larger banks are more likely to pursue digital transformation. As a result, firms with fewer financial constraints are more likely to have a higher degree of BF. To address the measurement error, we use the residual from regressing the digital transformation index on bank fundamental characteristics. That is, we use bank-year data and regress bank digital transformation on bank fundamental characteristics, including total assets (BankSIZE), performance (BankROA), and leverage (BankLEV), to obtain the residual, and then construct a firm-year weighted index (Resid_BF). The corresponding regression results are reported in Table 4. After ruling out the effect of bank fundamental characteristics, we still get similar results.
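As a minimal sketch of how the loan-weighted index in equation (2) and the baseline specification in equation (1) could be estimated, the snippet below uses pandas and statsmodels. The file names and column names are assumptions for illustration; this is not the authors' actual code.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Equation (2): loan-weighted bank digital transformation index at the firm-year level.
loans = pd.read_csv("loans.csv")            # firm, year, bank, loan_amount, BankTech
loans["Ratio"] = loans["loan_amount"] / loans.groupby(["firm", "year"])["loan_amount"].transform("sum")
bf = (loans.assign(w=loans["Ratio"] * loans["BankTech"])
           .groupby(["firm", "year"], as_index=False)["w"].sum()
           .rename(columns={"w": "BF"}))

panel = pd.read_csv("panel.csv").merge(bf, on=["firm", "year"], how="left").sort_values(["firm", "year"])
panel["BF_lag"] = panel.groupby("firm")["BF"].shift(1)                                 # one-period lag
panel["BF_lag"] = (panel["BF_lag"] - panel["BF_lag"].mean()) / panel["BF_lag"].std()   # standardise

# Equation (1): investment-cash flow sensitivity with firm and year fixed effects and
# standard errors clustered at the firm level (Petersen, 2009).
cols = ["INV", "BF_lag", "CF", "SOE", "SIZE", "LEV", "TobinQ", "OPROA", "PPE",
        "FIRST", "INDP", "MktIndex", "GDPG", "GDPPer", "firm", "year"]
est = panel[cols].dropna()
formula = ("INV ~ BF_lag * CF + SOE + SIZE + LEV + TobinQ + OPROA + PPE + FIRST + INDP "
           "+ MktIndex + GDPG + GDPPer + C(firm) + C(year)")
fit = smf.ols(formula, data=est).fit(cov_type="cluster", cov_kwds={"groups": est["firm"]})
print(fit.params["BF_lag:CF"])   # alpha_3, expected to be negative
```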
Endogeneity tests to solve the problem of omitted variables The results in the main test may be due to omitted variables. On the one hand, regional digital finance development helps complement traditional finance and improve access to credit (Tang et al., 2020; Xie et al., 2018), so the decrease in corporate financial constraints might result from an increase in digital finance development in the city where the company is located rather than from bank digital transformation. On the other hand, some unobservable time factors may simultaneously lead to an increase in bank digital transformation and a decrease in corporate financial constraints. Thus, we address these omitted variable problems by adding the regional digital finance development level and its interaction term with cash flow (CF), and the interaction term of cash flow with time trends, respectively. The regression results (not tabulated for brevity) corroborate the OLS estimates. Instrumental variable test Unobservable firm characteristics, such as implicit government guarantees, may simultaneously lead to fewer financial constraints and easier access to loans from banks with higher levels of digital transformation. To address this endogeneity concern, we introduce the following instrumental variables: first, the proportion of the labour force receiving formal IT training and the proportion of employees who regularly use computers (IT_Index), from the 'Government Governance, Investment Environment and Harmonious Society: Improving the Competitiveness of 120 Chinese Cities' report published by the World Bank in 2006; second, whether a top 20 university for computer science, based on the 'Computer Science and Technology' discipline rankings, is located in the city where the bank is incorporated (College20). On the one hand, a city with many employees who have received formal IT training provides sufficient human capital for banks to develop digital technology. Similarly, the development of computer science in the city can lead to improvements in technology and an increase in talent, thus promoting bank digital transformation. On the other hand, corporate financial constraints are determined by firms' business decisions and financial development in the city and are unlikely to be directly affected by the IT index or the availability of top 20 universities for computer science. Further, following the previous literature (e.g. Braggion et al., 2017; Mukherjee et al., 2017), we run a first-stage regression based on bank-year data, i.e. regressing the bank digital transformation index on the instrumental variables, with control variables including: total bank assets (BankSIZE), bank performance (BankROA), bank leverage (BankLEV), regional bank competition (HHI), regional informatization (CityTech), the share of the tertiary sector (Third), regional marketisation (MktIndex), regional GDP growth (GDPG), and GDP per capita (GDPPer). Specifically, HHI is measured as the negative of the combined share of the top three banks in the city, and CityTech is measured as the number of mobile phone users per one hundred people. Then, the fitted value (BankTech_hat) from the first-stage regression is used in the second-stage regression. Column (1) of Table 5 shows that the coefficients on the two instrumental variables in the first-stage regression are significantly positive. Column (2) of Table 5 shows that the results of the second-stage regression are consistent with the OLS estimates.
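A rough sketch of the two-stage procedure described above, under the assumption of a bank-year table containing the instruments and controls; the file and column names are hypothetical and this is not the authors' implementation.

```python
import pandas as pd
import statsmodels.formula.api as smf

# First stage (bank-year level): regress the bank digital transformation index on the
# two instruments plus the bank- and region-level controls listed above.
banks = pd.read_csv("bank_year.csv")   # bank, year, BankTech, IT_Index, College20, controls
first = smf.ols("BankTech ~ IT_Index + College20 + BankSIZE + BankROA + BankLEV + HHI"
                " + CityTech + Third + MktIndex + GDPG + GDPPer", data=banks).fit()
banks["BankTech_hat"] = first.fittedvalues

# Second stage: weight the fitted values to the firm-year level with the loan shares and
# re-estimate equation (1) with BF replaced by the fitted index (BankTech_hat).
loans = pd.read_csv("loans.csv").merge(banks[["bank", "year", "BankTech_hat"]],
                                       on=["bank", "year"], how="left")
loans["Ratio"] = loans["loan_amount"] / loans.groupby(["firm", "year"])["loan_amount"].transform("sum")
bf_hat = (loans.assign(w=loans["Ratio"] * loans["BankTech_hat"])
               .groupby(["firm", "year"], as_index=False)["w"].sum()
               .rename(columns={"w": "BF_hat"}))
# BF_hat is then lagged, standardised, and interacted with CF exactly as in the OLS sketch;
# because BF_hat is a generated regressor, the second-stage standard errors would need a
# correction (e.g., bootstrapping).
```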
Corporate contract intensity High contract intensity usually indicates that the transaction is exclusive, inimitable, or even irreplaceable, as with human capital contracts and technology transfer contracts. Contract-intensive transactions therefore often lack an open market, which makes it difficult for outsiders, such as banks, to obtain reference prices for these transactions. As a result, it is difficult to obtain and verify information for firms with higher contract intensity, which weakens their financing ability. Digital technology, with its information searching and processing advantages, helps banks to obtain information and deal with a complex information environment, and thus effectively reduces the information asymmetry between banks and highly contract-intensive firms. Therefore, we expect that the effect of bank digital transformation on corporate financial constraints is more pronounced for firms with higher contract intensity. We estimate the following model, where Group is a dummy variable for more contract-intensive firms (MoreContract):

INV_i,t = α0 + α1·BF_i,t−1 + α2·CF_i,t + α3·BF_i,t−1×CF_i,t + α4·Group_i + α5·BF_i,t−1×Group_i + α6·CF_i,t×Group_i + α7·BF_i,t−1×CF_i,t×Group_i + β′X_i,t + γ′Z_c,t + η_i + φ_t + ε_i,t (3)

Specifically, we follow Nunn (2007) to measure industry contract intensity using information on intermediate inputs in the input-output table, and MoreContract takes the value of one if the contract intensity of the industry to which the firm belongs is in the top tercile of the sample, and zero otherwise. The industry contract intensity (z_h) is calculated by equation (4), following Nunn (2007) and Li and Wang (2010). In equation (4), θ_hk = u_hk/u_h, where u_hk is the value of input k used in industry h and u_h is the total value of all inputs used in industry h; R^neither_k is the proportion of input k that is neither sold on an organised exchange nor reference priced; and R^referprice_k is the proportion of input k that is not sold on an organised exchange but is reference priced. A larger value of z_h indicates that industry h relies more on specific investments and is more contract intensive. We expect α7 to be significantly negative. In Table 6, the coefficients on BF*CF*MoreContract are significantly negative, which is consistent with our expectation. (The industry contract intensity index is derived from Nunn (2007); since this index is available only for the manufacturing industry, the sample observations in Table 6 are fewer than in the main test.) Corporate intangible assets In comparison with tangible assets, it is difficult to obtain information on intangible assets, which complicates asset valuation. Intangible assets also generate soft information, making it more difficult for firms with more intangible assets to obtain loans from banks. However, digital transformation provides banks with high-quality technical tools for valuing intangible assets, thus making it easier for banks to process soft information. Therefore, we predict that the effect of bank digital transformation on corporate financial constraints is more pronounced for firms with more intangible assets. To test this expectation, Group in model (3) is replaced by a dummy variable for firms with more intangible assets (MoreInt), with MoreInt taking the value of one when the proportion of intangible assets of the firm is in the top tercile of the sample and zero otherwise. We expect the coefficients on BF*CF*MoreInt to be significantly negative. The regression results are shown in Table 7, where the coefficients on the interaction term BF*CF*MoreInt are significantly negative, which is in line with our expectation.
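The contract-intensity measure described above is only given verbally; a sketch of the weighted sum in the spirit of Nunn (2007) follows. Because the text defines both R^neither and R^referprice, both common variants are computed, and it is an assumption which one the paper actually uses; the file and column names are illustrative.

```python
import pandas as pd

io = pd.read_csv("input_output.csv")            # industry_h, input_k, u_hk (input value used)
rk = pd.read_csv("input_classification.csv")    # input_k, R_neither, R_referprice

io["theta_hk"] = io["u_hk"] / io.groupby("industry_h")["u_hk"].transform("sum")
io = io.merge(rk, on="input_k")

# Two common variants of Nunn's (2007) weighted sum: z1 counts inputs that are neither
# exchange-traded nor reference priced; z2 additionally counts reference-priced inputs.
z = (io.assign(z1=io["theta_hk"] * io["R_neither"],
               z2=io["theta_hk"] * (io["R_neither"] + io["R_referprice"]))
       .groupby("industry_h", as_index=False)[["z1", "z2"]].sum())

z["MoreContract"] = (z["z1"] >= z["z1"].quantile(2 / 3)).astype(int)   # top-tercile dummy
print(z.head())
```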
Corporate external information environment The poorer the information environment of a company, the greater the information asymmetry between the bank and the company. Bank digital transformation can help decrease information search, processing, and verification costs, thus reducing information asymmetry. Therefore, we expect that the effect of bank digital transformation on corporate financial constraints is more pronounced for firms with a poorer external information environment. To test this inference, Group in model (3) is replaced by a dummy for firms followed by fewer analysts (LessAnalyst). Specifically, LessAnalyst takes the value of one when the number of following analysts is in the bottom tercile of the sample and zero otherwise. We expect α7 to be significantly negative. The results in Table 8 show that the coefficients on the interaction term BF*CF*LessAnalyst are significantly negative, which is in line with our expectation. Mechanism tests We further explore the channels through which bank digital transformation alleviates corporate financial constraints. We perform tests based on the size of debt financing using the following two variables: new long-term loans (ΔLTLOAN), measured as the change in long-term loans scaled by total assets, and new short-term loans (ΔSTLOAN), measured as the change in short-term loans scaled by total assets. We also perform a test based on the cost of debt (Cost of debt), measured as the interest expense scaled by lagged liabilities. The impact of bank digital transformation on firms' long-term and short-term loans is shown in Panels A and B of Table 9. The results in Panel A show that the coefficients on BF are significantly positive (except for column (3)), indicating that bank digital transformation alleviates corporate financial constraints by increasing long-term loans. The results for short-term loans in Panel B show that the coefficients on BF are not significant, indicating that bank digital transformation does not affect firms' short-term loan financing. This result may arise because banks face less information asymmetry in short-term lending than in long-term lending, so banks benefit more in long-term lending from the decrease in information asymmetry resulting from bank digital transformation. In addition, the results in Panel C show that the coefficients on BF are not significant; that is, bank digital transformation does not affect the cost of debt. This result may be because the cost of debt financing is determined by corporate credit risk, and bank digital transformation does not change corporate credit risk and is therefore unlikely to affect the cost of debt. In summary, bank digital transformation alleviates corporate financial constraints through increasing debt financing rather than lowering financing cost.
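A small sketch of how the mechanism variables defined above could be constructed from a firm-year financial table; the column names are hypothetical.

```python
import pandas as pd

fs = pd.read_csv("firm_financials.csv").sort_values(["firm", "year"])
grp = fs.groupby("firm")

fs["dLTLOAN"] = grp["long_term_loans"].diff() / fs["total_assets"]       # new long-term loans
fs["dSTLOAN"] = grp["short_term_loans"].diff() / fs["total_assets"]      # new short-term loans
fs["Cost_of_debt"] = fs["interest_expense"] / grp["total_liabilities"].shift(1)  # lagged liabilities
```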
The impact of bank digital transformation on bank market structure As big banks have greater ability to access, invest in, and apply digital resources than small banks, digital transformation may narrow the gap between big and small banks in dealing with soft information and reduce the information disadvantage of big banks in using soft information, thus intensifying competition between big and small banks. Therefore, the digital transformation of big banks may threaten the dominant role of small and medium-sized banks in the local credit market, leading to a crowding-out effect. We estimate the following model using detailed firm-year-bank loan information:

Loan_i,j,t = β0 + β1·BF_j,t + β2·BF_j,t×BigBank_j + δ′X_i,t + λ′B_j,t + γ′Z_c,t + η_i + γ_j + φ_t + ε_i,j,t

where loan size (Loan_i,j,t) is the logarithm of the amount of loans obtained by firm i from bank j in year t. The independent variable BF_j,t is measured as the digital transformation index or sub-indices of bank j in year t. We define large state-owned commercial banks and joint-stock banks as big banks, and BigBank is the big bank dummy variable, which takes the value of one if bank j is a big bank and zero otherwise. η_i, γ_j, and φ_t are firm, bank, and year fixed effects; the bank fixed effects absorb the effect of BigBank, so the coefficients on BigBank are not reported. In addition to the control variables in model (1), we further include bank-level control variables (B): total bank assets (BankSIZE) and bank performance (BankROA). The coefficients on BF*BigBank are expected to be significantly positive. Column (1) of Table 10 shows that the coefficient on BF*BigBank is significantly positive at the 1% level. Columns (2)-(4) report the results for the three sub-indices, which are in line with column (1). The above results show that bank digital transformation promotes lending by big banks, creating a crowding-out effect on small banks. The impact of the "Internet Finance" policy The People's Bank of China and other relevant government departments promulgated the 'Guiding Opinion on Promoting the Healthy Development of Internet Finance' in July 2015. This document aimed to guide the compliant and sustainable development of Internet finance and to encourage banks and other financial institutions to rely on Internet technologies to transform and upgrade their traditional financial businesses and services. Banks with a high development of digital transformation have already invested in the necessary hardware and software, and their employees have the corresponding skills to adapt to digital transformation. Thus, the 'Internet Finance' policy can quickly help them to transform and upgrade their business. However, it is difficult for banks with a low development of digital transformation to discard the knowledge and capabilities built on traditional lending decisions and to quickly adapt or transition to the new digital system. Therefore, it usually takes more time for the 'Internet Finance' policy to take effect for banks with a low development of digital transformation. As a result, we expect the effect of the 'Internet Finance' policy to be stronger for banks with a high development of digital transformation than for banks with a low development of digital transformation.
Then, we distinguish two groups with a high or low development of bank digital transformation based on their bank digital transformation index in 2014 (one year before the implementation of the 'Internet Finance' policy), and compare the changes in corporate financial constraints between the two groups before and after the implementation of the policy. We estimate a model with the triple interaction Post*HighBF*CF, where Post is the 'Internet Finance' policy dummy variable, which takes the value of one in 2015 and beyond, and zero otherwise, and HighBF is a dummy variable that takes the value of one if the bank digital transformation index in 2014 is in the top tercile of the sample distribution, and zero otherwise. We also include firm fixed effects (η_i) and year fixed effects (φ_t). We expect the coefficient on Post*HighBF*CF to be significantly negative. Column (1) of Table 11 shows that the coefficient on Post*HighBF*CF is significantly negative, indicating that the 'Internet Finance' policy has a negative impact on investment-cash flow sensitivity for firms with a high development of bank digital transformation. We further test the dynamic impact of the 'Internet Finance' policy. Specifically, using the years 2011-2012 as the baseline period, the indicator variable Yr2013_2014 equals one for the years 2013-2014; the indicator variable Yr2015 (or Yr2016) equals one for the year 2015 (or 2016); and the indicator variable Aft2017 takes the value of one when the sample year is 2017 or later. Then, HighBF and HighBF*CF are interacted with the separate indicator variables for each period around the implementation of the policy. The results in column (2) of Table 11 show that the coefficient on Yr2013_2014*HighBF*CF is insignificant, indicating that there is no significant difference in corporate financial constraints between the two groups before the implementation of the 'Internet Finance' policy. However, the coefficients on Yr2015*HighBF*CF and Yr2016*HighBF*CF are both significantly negative, indicating that the 'Internet Finance' policy has a stronger impact on firms with a higher development of bank digital transformation. Economic consequences of bank digital transformation In order to investigate the economic consequences of bank digital transformation for credit allocation efficiency, we explore whether bank digital transformation leads to a shift in credit resources towards efficient companies. We follow Li et al. (2020) and Liu et al. (2019) to define high-efficiency firms based on zombie firms. Specifically, we follow Huang and Chen (2017) to identify zombie firms (Zombie) by the real profit method and treat zombie firms as inefficient companies. Then, we examine whether bank digital transformation is more likely to alleviate the financial constraints of non-zombie firms. The regression results are shown in Table 12. Table 12 shows that the coefficients on BF*CF*Zombie are significantly positive at the 5% level, indicating that the alleviation of corporate financial constraints by bank digital transformation occurs mainly in non-zombie firms. In summary, the results in Table 12 show that bank digital transformation significantly alleviates the financial constraints of high-efficiency firms and improves the efficiency of credit resource allocation.
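A compact sketch of the 'Internet Finance' difference-in-differences comparison described above (not the zombie-firm test), assuming a firm-year panel that already contains the weighted index BF; file and variable names are illustrative, and controls are omitted for brevity.

```python
import pandas as pd
import statsmodels.formula.api as smf

panel = pd.read_csv("panel_with_bf.csv")                 # firm-year data with BF, INV, CF

# Treatment status is fixed at 2014 (one year before the policy): top tercile of the index.
bf2014 = panel.loc[panel["year"] == 2014, ["firm", "BF"]]
treated = bf2014.assign(HighBF=(bf2014["BF"] >= bf2014["BF"].quantile(2 / 3)).astype(int))
panel = panel.merge(treated[["firm", "HighBF"]], on="firm", how="inner")  # keep firms observed in 2014

panel["Post"] = (panel["year"] >= 2015).astype(int)
panel = panel.dropna(subset=["INV", "CF"])

did = smf.ols("INV ~ Post:HighBF:CF + Post:CF + HighBF:CF + CF + C(firm) + C(year)",
              data=panel).fit(cov_type="cluster", cov_kwds={"groups": panel["firm"]})
print(did.params["Post:HighBF:CF"])
# For the dynamic version, Post is replaced with the Yr2013_2014 / Yr2015 / Yr2016 / Aft2017
# indicators, each interacted with HighBF*CF in the same way.
```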
Conclusion In this paper we investigate the effect and economic consequences of bank digital transformation on corporate financial constraints. We find that bank digital transformation helps to alleviate corporate financial constraints, and this effect is more pronounced for companies with higher contract intensity, more intangible assets, and a poorer external information environment. Mechanism tests show that bank digital transformation alleviates corporate financial constraints by increasing debt financing rather than lowering financing cost. Further, bank digital transformation promotes lending by big banks and crowds out lending by small banks. Finally, we show that bank digital transformation leads to a shift in credit resources towards high-efficiency firms, improving the efficiency of credit allocation. Our findings also have important practical implications. First, we provide new clues for promoting financial supply-side reform. The Chinese banking system suffers from resource mismatch, such as banks selecting borrowers based on collateral, resulting in the financial exclusion of small and growth firms. We explore the mechanisms by which bank digital development affects the access of companies to credit, which sheds light on how to ease the financing woes of small and medium-sized firms in China and promote financial supply-side reform. Second, our findings have important policy implications for improving the efficiency of credit resource allocation to promote high-quality development of the Chinese economy. In the current period, there is still misallocation of credit resources in China, with a large amount of credit funds flowing into inefficient firms such as zombie firms, creating a great obstacle to high-quality development. We find that bank digital transformation drives credit resources to non-zombie firms, suggesting that bank digital transformation helps alleviate the misallocation of credit resources in China. Therefore, government departments in China should pay attention to bank FinTech and encourage the integration of technology with finance and capital markets, so as to better promote high-quality economic development.
Figure 1. Framework of the Commercial Bank Digital Transformation Index.
Figure 2. The Digital Transformation Index and sub-indices of commercial banks from Peking University. Note: The left axis shows the Digital Transformation Index, the organisational sub-index, and the product sub-index; the right axis shows the cognitive sub-index.
Table 3. Effects of bank digital transformation on corporate financial constraints. The t-statistics, calculated using robust standard errors clustered at the firm level, are reported in parentheses. *, **, and *** represent significance at the 10%, 5%, and 1% levels, respectively. The same applies below.
Table 4. Endogeneity tests to solve the measurement error issue. Note: Based on the 'Computer Science and Technology' rankings released by the China Academic Degree and Graduate Education Development Center, College20 takes the value of one if a top 20 university for computer science is located in the city where the bank is incorporated, and zero otherwise.
Table 6. Cross-sectional tests based on corporate contract intensity.
Table 7. Cross-sectional tests based on corporate intangible assets.
Table 8. Cross-sectional tests based on corporate external information environment.
Table 10. Effects of bank digital transformation on bank market structure.
Table 11. The impact of the 'Internet Finance' policy. Notes: (1) The firm fixed effects absorb the effect of HighBF, while the year fixed effects absorb the effect of Post; therefore, the coefficients on HighBF and Post are not reported. (2) In order to distinguish the treatment and control groups, we only retain observations with bank digital transformation data in 2014, so the sample observations in Table 11 are fewer than in the main test.
Table 12. Economic consequences of bank digital transformation.
9,146.2
2024-01-02T00:00:00.000
[ "Business", "Economics", "Computer Science" ]
Collusion Fraud Risk Mitigation with Integration of Data Analytics in E-Tendering There are already mandates and recommendations for detecting indications of tender collusion, but the risk of collusion in e-tendering has not been handled properly. Meanwhile, data analytics competency has become a prerequisite for successful digital transformation. This study aims to reveal the projection of data analytics integration in controlling collusion risk in e-tendering. This study uses a quantitative research method. The object of this study includes data on the risk of tender collusion and the KPPU's decisions for 2021 and 2022. The results of this study reveal that the average similarity of bids is 0.5308, a parameter indicating the risk of collusion in tenders. Existing controls have not been effective in dealing with this risk. Control development can be designed by referring to KPPU regulations and recommendations to LKPP. Maximum control standards can be applied by developing preventive controls in the form of data analytics competence training for the Selection Committee so that it is able to detect indications of collusion in tenders. In addition, data analytics tools need to be integrated into e-tendering in the Electronic Procurement System (SPSE). INTRODUCTION One of the major contributors to the country's economy is the procurement of goods/services (Arfanti, 2014). However, procurement can be a potential area for budget leakage (Faisol et al., 2014). In fact, the trend of corruption in procurement handled by the Corruption Eradication Commission (KPK) has continued to increase (Kamal, 2019). Prevention of fraud in procurement can be pursued by applying information technology, one example being e-procurement through Electronic Procurement Services (LPSE) (Faisol et al., 2014; Febrina, 2017). However, tender collusion cannot be completely handled through e-procurement practices (Faisol et al., 2014). More than 80% of the total business competition violation reports received by the Indonesian Competition Commission (ICC/KPPU) are complaints related to tender collusion (Wisny, 2016). In 2021, 65% of the total number of collusion investigations in business competition handled by the KPPU were tender collusion cases. Meanwhile, legal decisions were issued in 10 (39%) of the 26 tender cases during 2021 (Kamal, 2022). In fact, the West Kalimantan Police have caught an Electronic Procurement Service (LPSE) official red-handed (Kiwi, 2022). This could be part of the signal that there is still collusion in tenders, even though the information has not yet become a final legal decision. Tender collusion can result in state losses (Munawir & Hasibuan, 2017) and obstacles for business people who do not commit collusion (Febrina, 2017). From the perspective of the socio-engineering system, tender collusion can also result in construction failures (Saputra et al., 2016). In addition, spending the budget through e-procurement is often carried out irresponsibly (Febrina, 2017; Keintjem, 2016). In fact, procurers can obtain unfair profits because the bid price becomes abnormal (Febrina, 2017; Keintjem, 2016; Saputra et al., 2016). This loss can be a burden to the wider community (Keintjem, 2016). The results of previous research indicate that there have been several discussions regarding the criteria and impact of tender collusion, the modes of collusion, efforts to prevent collusion, and law enforcement against tender collusion.
Research conducted by Ustien (2019) reveals the criteria for actions that can be categorized as tender collusion. According to Suryoprayogo (2022), as a consequence, the commitment-making official (PPK) can terminate a contract resulting from tender collusion. From 2015 to 2018, it was found that 57% of unfair competition cases involved tender collusion (Purwadi, 2019). The modes of tender collusion include the use of the same Internet Protocol (IP) address and mechanisms outside the e-tendering system (Wulan et al., 2019), association engagement (Suradiyanto & Pratiwie, 2020), and vertical and horizontal collusion (Manihuruk et al., 2016). Tender collusion can be prevented by using information technology in tender implementation (Suhermin, 2012). According to Febrina (2017), tender collusion can be prevented by creating an atmosphere of competition through procurement services. However, e-tendering has no effect on preventing tender collusion (Faisol et al., 2014). Sociologically, technical and non-technical obstacles can open up opportunities for tender collusion (Arfanti, 2014). In addition, there are still several regulations that conflict with the principle of competition, such as the requirement to appoint a subsidiary within a State-Owned Enterprise (Maria & Anggraini, 2013). The description above shows that there is still a research gap on the prevention of tender collusion. Previous research has also examined the handling of tender collusion by referring to the Law of the Republic of Indonesia Number 5 of 1999. The law strictly regulates the prohibition of unfair conspiracy and its enforcement (Keintjem, 2016). The elements of tender collusion must be proven adequately (Munawir & Hasibuan, 2017). In enforcing the law, the Indonesian Competition Commission (ICC/KPPU) can impose administrative sanctions on the perpetrators of tender collusion (Fitriani, 2021; Made, 2021; Maheswari, 2020; Wisny, 2016). Meanwhile, in dealing with tender collusion related to criminal acts of corruption, the KPPU cooperates with the Corruption Eradication Commission (KPK) (Ferdinand et al., 2020). There is also research that reveals the limitations of the ICC's authority in law enforcement against procurement actors who are not bidders (Dwi Prabawa & Hadi, 2018). Detailed competition regulations in e-tendering can reduce tender collusion (Andriana, 2021; Ma et al., 2022; Sirait, 2020; Wibowo, 2021). This condition also reveals that there is a gap in research related to the handling of tender collusion. Tender collusion is also among the highest-level fraud risks in procurement (Kamal & Elim, 2021). However, on the LKPP website, the tender collusion risk is not yet part of the risk list in the Risk Management Document of the Procurement Work Unit (MR UKPBJ). Procurement actors are not yet fully independent and professional, which is one of the challenges for the national strategy for preventing corruption (Stranas PK) in the field of procurement. In fact, there is already a mandate regarding risk assessment and handling in Government Regulation Number 60/2008 concerning the Government Internal Control System (SPIP) and a mandate for Procurement Work Unit risk management (UKPBJ MR) in LKPP Regulation Number 10/2021 concerning the Procurement Work Unit (UKPBJ). There is also a policy mandate for the use of information technology in the procurement of goods/services.
In addition, competence in big data analytics is an important requirement for playing a role in digital transformation (Oktorialdi, 2019). However, government procurement (PBJP) actors still work by relying on the electronic procurement system (SPSE). Data analytics competence has not been integrated into the tender process to handle the risk of tender collusion.

LITERATURE REVIEW AND HYPOTHESIS Law Number 5 of 1999, concerning the prohibition of monopolistic practices and unfair business competition, explains, in Article 1 point 8, that collusion or business collusion is a form of cooperation between business actors with the aim of controlling the relevant market for the interests of the colluding business actors. Meanwhile, a colluder is a person who participates in a conspiracy to commit a crime, fraud, and so on. If the prospective providers or bidders conspire, they may be subject to sanctions in the form of, among other things, failing the tender process and being proposed for blacklisting (Keintjem, 2016). Collusion can result in unfair competition (Wisny, 2016), thereby injuring the principle of competition because it creates pseudo-competition (Anindyajati, 2018). There are several categories of tender collusion: horizontal collusion, vertical collusion, and/or horizontal-vertical collusion. Horizontal collusion is collusion between business actors/service providers. Vertical collusion is collusion between one or several business actors and the tender committee. Meanwhile, horizontal-vertical collusion is collusion among business actors, other business actors, and the tender committee (Anindyajati, 2018; Febrina, 2017). Indications of tender collusion can be identified through an analysis of the results of law enforcement or an analysis of KPPU legal decisions (Wisny, 2016). Tender collusion is part of the integrity risk in the procurement process (OECD, 2016). To detect the risk of fraud in the form of collusion, procurement management can apply machine learning (ML) (García Rodríguez et al., 2022) and data analytics in e-tendering (Kamal & Elim, 2021). There are two data analytics techniques that can be used, statistical techniques and visualization techniques (CAGI, 2017), which can be applied to descriptive, diagnostic, predictive, and prescriptive analysis (principa.co, 2017). The application of data analytics can optimize procurement performance. Descriptive, predictive and prescriptive analytics are useful in improving the quality of information about the procurement environment and patterns related to procurement (Hallikas et al., 2021). In addition, the purpose and type of data analytics need to be considered in the application of data analytics techniques (Jugulum, 2016). There are several examples of implementing data analytics, such as fraud analytics (Baesens et al., 2015) and fraud risk management (Horsey, 2017). Data analytics can also be used to detect indications of fraud by analyzing anomalies in the data (Banarescu, 2015; Gee, 2015; IIA, 2017) and to uncover corruption schemes through association analysis of transaction data (Gee, 2015). The use of data analytics has helped uncover risks of collusion in road construction tenders in Poland (Anysz et al., 2019) and to score collusion risk in tenders in Korea by assigning attributes based on law enforcement experience (OECD, 2017). There are several options for analytical techniques. The first option is Rules-based Analytics.
This technique requires the role of human resources (HR) to identify rules first before implementing data analytics. Rules can be in the form of red flags and/or certain restrictions from procedures/regulations. The second option is Distributional Analytics. This technique uncovers anomalies in the distribution of the data population. The third option is Predictive Analytics. This technique uses historical data to predict possible future events. The fourth option is Linkage/Social Network Analytics. This technique analyzes unstructured data through network and/or social connectivity (Phillips & Lanclos, 2014). There is no research that reveals the design of controls in e-tendering and projections of the application of data analytics to detect the risk of fraud in the form of collusion. In fact, there is already a mandate to increase the use of information technology in government procurement. The need for the use of information systems must be adjusted to the development of business scale and data complexity in the 4.0 era. This has consequences for the urgency of using methods or techniques in accordance with the digital transformation mandate (Bagas, 2018). The description above and several previous studies have not revealed the ideal control and data analytics integration to deal with the risk of fraud in the form of tender collusion. Therefore, this study focuses on the integration of data analytics as part of internal control to handle the risk of tender collusion. The research questions posed are: what is the description of the control design for collusion risk in current e-tendering? And what is the description of integrated data analytics control over the risk of collusion in e-tendering? The purpose of this study is to reveal an overview of the integration of data analytics in controlling the risk of fraud in the form of collusion in e-tendering.

METHODS This study uses quantitative methods with data analytics techniques in the form of statistical and visualization techniques (CAGI, 2017), supported by literature, normative, prospective and retrospective studies. The statistical techniques used are descriptive statistics in MS Excel and decision trees in the RapidMiner application. The literature study is carried out by reviewing literature and papers related to the research objects (Arikunto, 2014). The normative study is carried out by referring to regulations related to the research objects. The prospective study is used for risk analysis of tender collusion (Kamal & Elim, 2021). Meanwhile, the retrospective study is carried out through an analysis of the 2021 and 2022 KPPU legal decisions related to tender collusion published on the KPPU website, e.g. Salinan Putusan Perkara 25-KPPU-I-2020.pdf (Figure 1).

RESULTS AND DISCUSSION Tender Collusion Risk Several signals of possible horizontal tender collusion can be identified: rotation of winning bids among tender winners across several tenders (Keintjem, 2016; OLAF, 2017); association between several partners in several tenders (Anysz et al., 2019); a single winner or a low-bid partner (OECD, 2017; Vadász et al., 2016); four or fewer bidders (Anysz et al., 2019); and a clustered distribution of providers (Vadász et al., 2016). The tender collusion risk level of 9.06 is shown in Table 1. This level is greater than 4, which means that it needs handling (Kamal & Elim, 2021). The collusion risk in Figure 2 is horizontal collusion because those who conspire are fellow bidders (Anindyajati, 2018; Febrina, 2017). A rules-based screen for these signals is sketched below.
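As a purely illustrative aid, the following is a minimal sketch, in R, of the "Rules-based Analytics" option applied to the red flags just listed; the data frame, thresholds and variable names are hypothetical and are not taken from the study.

# Hypothetical toy table of tenders and two of the red flags described above:
# few bidders (4 or fewer) and a repeated winner across tenders.
tenders <- data.frame(
  tender_id = c("T01", "T02", "T03", "T04"),
  n_bidders = c(3, 6, 4, 2),
  winner    = c("PT_A", "PT_B", "PT_A", "PT_A")
)

few_bidders   <- tenders$n_bidders <= 4                        # red flag 1
repeat_winner <- tenders$winner %in%
                 names(which(table(tenders$winner) > 1))       # red flag 2

# Each flag contributes one point; tenders accumulating several flags
# would be queued for closer review by the Selection Committee.
tenders$risk_flags <- few_bidders + repeat_winner
tenders[order(-tenders$risk_flags), ]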
The tender collusion risk level needs to be reviewed. This high level of risk needs to be reduced to an acceptable or tolerable level (LKPP, 2016). Referring to PP 60/2008, risk assessment is a source of information for developing control activities. Therefore, this level of risk needs to be handled/controlled both at its likelihood level and at its impact level. Control activities should, in principle, address both the sources of risk and the impact of risks. Handling the source of risk means preventing risks from occurring through preventive controls. Thus, it is hoped that the likelihood of occurrence will be low, or even that the cause of the risk can be eliminated. Mitigating the impact of risk means anticipating what must be done if the risk occurs, through detection controls and/or alternative controls. This needs to be done so that the impacts that arise can be minimized (BPKP, 2015). The process of evaluating risk controls needs to be carried out by reviewing existing controls, reviewing maximum control standards, and analyzing the gaps between the two previous reviews (BPKP, 2015), as shown in Figure 3.

Handling Collusion Fraud Risk using Existing Controls The design of the existing controls can be reviewed and identified from Presidential Regulation 16/2018 jo 12/2021 and other related regulations. Some of these regulations disclose electronic tendering mechanisms and procedures. Meanwhile, a review of the tender process contained in the KPPU's decisions regarding tender collusion can serve as a representation of the implementation of the existing controls. The KPPU's decisions reveal the chronology of the tender practices. The results of the analysis show that the implementation of the existing controls consists of arithmetic corrections, administrative evaluations, technical evaluations, price evaluations, and verification of qualifications. The existing controls are carried out one by one, i.e. applied to each bidder separately (Kamal, 2022). Meanwhile, procurement regulations reveal the mandate for the imposition of sanctions on providers if there are indications of collusion with other participants to set bid prices, or indications of corruption, collusion and/or nepotism in the selection of providers. Thus, procurement actors or the Selection Committee should test for the existence of these indications (Kamal, 2022). Judging from the KPPU's decisions (Figure 4), which revealed evidence of at least 9 (nine) "similar" bids, the Selection Committee needs to compare the bids between bidders. Therefore, the residual risk from collusion fraud risk in the tender is still above the risk appetite (Figure 5). The figure reveals that the residual risk is not yet within the risk appetite because the existing controls do not involve any comparison of bids between bidders. It can be concluded that the existing controls have not been effective in dealing with the risk of tender collusion (Kamal, 2022).

Handling Collusion Fraud Risk using Standard Maximum Control Decision Number 04/KPPU-L/2020 states that, according to the Presidential Regulation on procurement, if there are two indications of unfair business competition in the procurement of goods and services, the tender should be declared a failure. Controls need to be optimized to find these indications.
The 11 decisions of the KPPU (Figure 4) reveal at least 9 (nine) modes of "similarity in bids", with a mean value of 0.5308 per decision (Figure 6). This shows that the risk of collusion fraud can be identified by an average similarity in bids above 50% (one possible way of computing such a similarity score is sketched below).
Figure 3. Risk Control Evaluation Process. Source: BPKP, 2015.
Meanwhile, the highest similarity weight is similarity in typing or typing errors, with a value of 90.91%. This also provides the insight that the likelihood of collusion fraud in tenders is high when the parameter of typing similarity or typing errors is fulfilled together with several other similarity combinations. Therefore, maximum control standards for the risk of tender collusion must be developed for the detection of similarity in bids. Those who have the authority to carry out this detection are, of course, the Selection Committee of providers. In addition, several KPPU decisions (Figure 6) show that KPPU's recommendations to LKPP have been disclosed in decisions regarding tender collusion cases, and these can be classified into 4 groups of recommendations (Figure 7). The most frequent recommendation is to improve procurement regulations, especially to provide an accountable basis for the Selection Committee of providers in detecting indications of tender collusion. Meanwhile, the other 3 recommendations are closely related to the competence of human resources in procurement and in the implementation of the electronic procurement system.

Handling Collusion Fraud Risk using Data Analytics Integration Based on the 3 groups of recommendations related to HR in procurement, the development strategy needs to consider the existing electronic procurement system (SPSE). The Selection Committee of providers needs to receive training related to competence in detecting collusion indications, in accordance with the competency needs of the digital era. The detection tool used must be integrated into the electronic procurement system (SPSE). If the digital competence of the Selection Committee is good and the detection tools are also well designed, it is necessary to prepare technical guidelines for the detection of indications of fraud as a reference for the Selection Committee in determining providers. Developing controls against the risk of tender collusion requires optimizing data analytics competence (Kamal & Elim, 2021). Data analytics competence is an important requirement for playing a role in the digital era (Oktorialdi, 2019). The application of data analytics requires human talent. A data culture must be built on human and organizational resources to increase organizational value (Grover et al., 2018). Therefore, goods and services procurement officials and government agencies must increase their data awareness and data analytics skills to be able to detect the risk of collusion fraud in e-tendering.

An Overview of Data Analytics in E-Tendering There are several examples of using data analytics to detect collusion risks in tenders which can be used as ideas for integrating data analytics into e-tendering. In Korea, the integration of data analytics into the Bid-Rigging Indicator Analysis System (BRIAS) is carried out by automating the weighted score of "the likelihood of bid rigging". The attributes used come from an analysis of the results of past law enforcement. These attributes include the success rate of tender winners, few bidders, and non-competitive tenders (OECD, 2017). This non-competitive tender attribute is not explained in detail.
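Returning to the bid-similarity score discussed above: the study reports a mean similarity of 0.5308 but does not spell out how the score is computed. The following is a minimal sketch that assumes a simple Jaccard similarity over the words of each bid document as one possible operationalisation; the bid texts are invented.

# Illustrative only: Jaccard similarity over the word sets of hypothetical
# bid documents, as one way a bid-similarity score could be computed.
jaccard <- function(a, b) {
  ta <- unique(tolower(unlist(strsplit(a, "\\W+"))))
  tb <- unique(tolower(unlist(strsplit(b, "\\W+"))))
  length(intersect(ta, tb)) / length(union(ta, tb))
}

# Hypothetical bid texts from three bidders in one tender
bids <- c(
  bidder_A = "unit price 125000 delivery 30 days warranty 12 months",
  bidder_B = "unit price 125000 delivery 30 days warranty 12 months",
  bidder_C = "unit price 98000 delivery 45 days warranty 6 months"
)

# Pairwise similarity matrix; an average above 0.50 would mirror the
# 0.5308 mean similarity reported in the study as a collusion red flag
sim <- outer(seq_along(bids), seq_along(bids),
             Vectorize(function(i, j) jaccard(bids[i], bids[j])))
dimnames(sim) <- list(names(bids), names(bids))
mean(sim[upper.tri(sim)])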
In Poland, an Artificial Neural Network (ANN) has been used to predict the level of bid rigging. The results fall into three categories: no collusion, an indication of collusion, and a strong indication of collusion. The ANN is applied using collusion parameters such as a small number of tender participants, rotation of tender winners, and repetition of tender winners (Anysz et al., 2019). Based on KPPU's recommendation to LKPP (KPPU, 2022), LKPP could adopt BRIAS or an ANN as tools inherent in the electronic procurement system (SPSE). In addition, a Decision Tree (DT) can also be used to detect the likelihood of collusion risk. A DT works by splitting actual events along predictor variables or attributes, sorting cases into 'fraud' and 'non-fraud' (normal) (Vadász et al., 2016). DT is a part of machine learning that is user friendly and can be used without a deep understanding of statistics or complicated formulas (Lee et al., 2022). This study provides an overview of the application of DT with a dataset (Figure 8) developed from a study of KPPU decisions (Figure 4). The results of DT machine learning in the RapidMiner application show that the accuracy of predicting the likelihood of collusion risk is 85% (Figure 4). The visualization of the DT provides insight into the roots of the Decision Tree, which show that tender collusion is predicted when there is similarity in typing/typing errors in the bid combined with: a. similarity in bidding metadata, or b. no similarity in metadata, but similarity in IP addresses and bidders' addresses. A toy sketch of such a classifier appears at the end of this paper. These examples of the application of data analytics in e-tendering can be part of the ideas for developing maximum control standards. Therefore, studies on control development can be carried out by combining the results of the studies on existing controls and maximum control standards. This can be done through the comparison in Table 1. Applying the maximum control standard will be able to reduce the level of residual risk to the level of the risk appetite. Figure 10 part A provides an overview of the application of data analytics integration in the electronic tender process. In this part it is revealed that preventive control is carried out prior to the implementation of the tender. Meanwhile, detection control is inherent in the tender process. As an idea, this example provides an illustration of collusion risk detection carried out after verifying qualifications, i.e. before determining the tender winner. Meanwhile, Figure 10 part B reveals the curve for handling tender collusion risk from the IR (inherent risk) level to the RR (residual risk) level. The likelihood of the risk can be reduced by preventive controls. Meanwhile, the impact of the risk can be reduced by detection controls. Data analytics competency training for the Selection Committee is part of prevention (Lee et al., 2022). Data analytics practice is part of detection control. Data analytics is developed to be an inherent part of the tender evaluation process.

CONCLUSION The risk of collusion in e-tendering is the highest-level risk in procurement. The KPPU's decisions reflect that collusion in tenders is still high. More than 50% similarity of bids is an indication of collusion risk. However, the existing controls have not been able to handle this risk. Therefore, it is necessary to develop accountable controls.
KPPU's decision review provides recommendations to LKPP to improve the collusion risk management system in tenders. A maximum control standard can be designed by developing data analytics competencies for procurement officers, especially the Selection Committee of providers. Data analytics tools to detect indications of collusion also need to be designed so that they are integrated with the electronic tender process in the electronic procurement system (SPSE). The results of this study support the results of research on fraud risk management strategies (Kamal & Elim, 2021), i.e. the mitigation of the highest-level risk. In addition, the findings of this study also support the results of previous research on the use of data analytics in detecting the risk of collusion in tenders (Anysz et al., 2019; Lee et al., 2022; OECD, 2017; Vadász et al., 2016). It is hoped that the results of this research can be used to improve procurement policies, especially to support the achievement of the National Strategy for Corruption Prevention. Collusion will develop ever more dynamically, yet many procurement actors are still not concerned with the development of a control system. Therefore, the findings of this study can be an innovative idea for realizing credible and accountable procurement of goods/services. A limitation of this research is that the KPPU decision data still need to be expanded and enlarged. In addition, data on the competence of the Selection Committee of providers and on procurement risk management in the work units for the procurement of goods/services have not been included. This could be an opportunity for future research.
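As a purely illustrative footnote to the decision-tree analysis described in the Results and Discussion section: the study's dataset (Figure 8) and RapidMiner model are not reproduced in this extract, so the sketch below uses a toy data frame with hypothetical variable names, and the rpart package stands in for the RapidMiner learner.

# Toy decision-tree classifier over invented similarity flags; rpart is used
# here only as an illustrative substitute for the RapidMiner Decision Tree.
library(rpart)

set.seed(1)
n <- 60
toy <- data.frame(
  typo_similarity = rbinom(n, 1, 0.40),  # same typing errors across bids
  metadata_match  = rbinom(n, 1, 0.30),  # identical document metadata
  same_ip         = rbinom(n, 1, 0.30),  # bids uploaded from one IP address
  same_address    = rbinom(n, 1, 0.25)   # bidders share a business address
)
# Label roughly follows the rule the paper reports: typing similarity combined
# with metadata similarity, or with IP plus address similarity, flags collusion.
toy$collusion <- factor(ifelse(
  toy$typo_similarity == 1 &
    (toy$metadata_match == 1 | (toy$same_ip == 1 & toy$same_address == 1)),
  "fraud", "non_fraud"))

fit  <- rpart(collusion ~ ., data = toy, method = "class")
print(fit)                          # inspect the learned splits
pred <- predict(fit, toy, type = "class")
mean(pred == toy$collusion)         # in-sample accuracy of the toy model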
5,558
2023-06-30T00:00:00.000
[ "Business", "Computer Science" ]
Graphical Markov Models with Mixed Graphs in R In this paper we provide a short tutorial illustrating the new functions in the package ggm that deal with ancestral, summary and ribbonless graphs. These are mixed graphs (containing three types of edges) that are important because they capture the modified independence structure after marginalisation over, and conditioning on, nodes of directed acyclic graphs. We provide functions to verify whether a mixed graph implies that A is independent of B given C for any disjoint sets of nodes and to generate maximal graphs inducing the same independence structure as non-maximal graphs. Finally, we provide functions to decide on the Markov equivalence of two graphs with the same node set but different types of edges.

Introduction and background Graphical Markov models have become a part of the mainstream of statistical theory and application in recent years. These models use graphs to represent conditional independencies among sets of random variables. Nodes of the graph correspond to random variables and edges to some type of conditional dependency.

Directed acyclic graphs In the literature on graphical models the two most used classes of graphs are directed acyclic graphs (DAGs) and undirected graphs. DAGs have proven useful, among other things, to specify the data generating processes when the variables satisfy an underlying partial ordering. For instance, suppose that we have four observed variables: Y, the ratio of systolic to diastolic blood pressure, and X, the diastolic blood pressure, both on a log scale; Z, the body mass; and W, the age. Suppose further that a possible generating process is a linear recursive regression model in which all the variables are mean-centred and the error terms are zero-mean, mutually independent Gaussian random errors. In this model we assume that there exists a genetic factor U influencing the ratio and levels of blood pressure. This model can be represented by the DAG in Figure 1(a), with nodes associated with the variables and edges indicating the dependencies represented by the regression coefficients (γs). From the graph it is seen, for instance, that the ratio of the two blood pressures (Y) is directly influenced by body mass (Z) but not by age (W). Thus a consequence of the model is that the variables must satisfy a set of conditional independencies: for example, the ratio of the blood pressures is independent of the age given the body mass, written as Y ⊥⊥ W | Z. A remarkable result is that the independencies can be deduced from the graph alone, without reference to the equations, by using a criterion called d-separation. In fact, in the graph of Figure 1(a), the nodes Y and W are d-separated given Z. This can be checked using special graph algorithms included, for example, in the packages gRain (Højsgaard, 2012) and ggm (Marchetti et al., 2012). For more details on DAG models and their implementation in R see the extensive discussion in Højsgaard et al. (2012).

Hidden variables and induced graphs The model has four observed variables but includes an unobserved variable, that is, the genetic factor U.
When U is hidden, the model for the observed variables becomes a recursive system of regression equations whose parameters retain a regression interpretation but whose residuals are correlated. The induced model is said to be obtained after marginalisation over U. In this model some of the original independencies are lost, but we can observe the implied independencies Y ⊥⊥ W | Z and X ⊥⊥ Z | W. It can also be shown that it is impossible to represent such independencies in a DAG model defined for the four observed variables. Therefore, we say that DAG models are not stable under marginalisation. A mixed graph with arrows and arcs, as shown in Figure 1(b), can be used to represent the induced independence model after marginalisation over U. In this representation, beside the arrows, represented by the γs, we have the arc Y ↔ X associated with the (partial) correlation ω_YX.

The graph of Figure 1(b) belongs to a class of models called regression chain graph models. This class generalises the recursive generating process of DAGs by permitting joint responses, coupled in the graph by arcs, and thus appears to be an essential extension for applications; see Cox and Wermuth (1996). Regression chain graphs can be used as a conceptual framework for understanding multivariate dependencies, for example in longitudinal studies. The variables are arranged in a sequence of blocks, such that (a) all variables in one block are of equal standing and any dependence between them is represented by an arc, and (b) all variables in one block are responses to variables in all blocks to their right, so that any dependencies between them are directed, represented by an arrow pointing from right to left. The graph shows how the data analysis can be broken down into a series of regressions and informs about which variables should or should not be controlled for in each regression.

More general induced graphs The class of regression chain graphs is not, however, stable under marginalisation. For instance, suppose that the generating process for the blood pressure data is defined by the more general regression chain graph of Figure 2(a), where L is a further variable representing a common hidden cause of systolic blood pressure and body mass. Then, after marginalisation over L, the model can still be described by a linear system of equations with correlated residuals and can be represented by the mixed graph shown in Figure 2(b). But the resulting graph is neither a DAG nor a regression chain graph, because it contains the pair of variables (Y, Z) coupled by both a directed edge and a path composed of bi-directed arcs. Thus Y cannot be interpreted as a pure response to Z, and in addition Y and Z are not two joint responses.

Stable mixed graphs The previous illustrations show that when there are unobserved variables, DAG or regression chain graph models are no longer appropriate. The discussion could be extended to situations where there are selection variables, that is, hidden variables that are conditioned on. This motivates the introduction of a more general class of mixed graphs, which contains three types of edges: lines (undirected edges), arrows (directed edges), and arcs (bi-directed arrows, ↔). In the case of regression models, explained above, lines generally link pairs of joint context (explanatory) variables and arcs generally link pairs of joint response variables.
There are at least three known classes of mixed graphs without self loops that remain in the same class, i.e. that are stable under marginalisation and conditioning. The largest one is that of ribbonless graphs (RGs) (Sadeghi, 2012a), defined as a modification of MC-graphs (Koster, 2002). Then there is the subclass of summary graphs (SGs) (Wermuth, 2011), and finally the smallest class, that of ancestral graphs (AGs) (Richardson and Spirtes, 2002).

Four tasks of the current paper In this paper, we focus on the implementation of four important tasks performed on the class of mixed graphs in R:
1. Generating different types of stable mixed graphs after marginalisation and conditioning.
2. Verifying whether an independency of the form Y ⊥⊥ W | Z holds by using a separation criterion called m-separation.
3. Generating a graph that induces the same independence structure as an input mixed graph such that the generated graph is maximal, i.e. each missing edge of the generated graph implies at least one independence statement.
4. Verifying whether two graphs are Markov equivalent, i.e. whether they induce the same independencies, and whether, given a graph of a specific type, there is a graph of a different type that is Markov equivalent to it.

Package ggm The tasks above are illustrated by using a set of new functions introduced into the R package ggm (Marchetti et al., 2012). In the next section we give the details of how general mixed graphs are defined. The following four sections deal with the four tasks respectively. For each task we give a brief introduction at the beginning of its corresponding section. Some of the functions generalise previous contributions of ggm discussed in Marchetti (2006). The ggm package has been improved and is now more integrated with other contributed packages related to graph theory, such as graph (Gentleman et al., 2012), igraph (Csardi and Nepusz, 2006), and gRbase (Dethlefsen and Højsgaard, 2005), which are now required for representing and plotting graphs. Specifically, in addition to adjacency matrices, all the functions in the package now accept graphNEL and igraph objects as input, as well as a new character string representation. A more detailed list of available packages for graphical models can be found at the CRAN Task View gRaphical Models in R at http://cran.r-project.org/web/views/gR.html.

Defining mixed graphs in R For a comprehensive discussion on the ways of defining a directed acyclic graph, see Højsgaard et al. (2012). A mixed graph is a more general graph type with at most three types of edge: directed, undirected and bi-directed, with possibly multiple edges of different types connecting two nodes. In ggm we provide some special tools for mixed graphs that are not present in other packages. Here we briefly illustrate some methods to define mixed graphs, and we plot them with a new function, plotGraph, which uses a Tk GUI for basic interactive graph manipulation. The first method is based on a generalisation of the adjacency matrix. The second uses a descriptive vector and is easy to use for small graphs. The third uses a special function makeMG that allows the directed, undirected, and bi-directed components of a mixed graph to be combined.

Adjacency matrices for mixed graphs In the adjacency matrix of a mixed graph we code the three different edge types with a binary indicator: 1 for directed, 10 for undirected and 100 for bi-directed edges. When there are multiple edges the codes are added.
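As a small aid before the formal decomposition that follows, here is a minimal sketch, in base R, of this coding applied to a hypothetical three-node graph (the node names and edges are invented for illustration):

# Coding a mixed graph's adjacency matrix directly: 1 for an arrow i -> j,
# 10 for a line, 100 for an arc, with codes added for multiple edges.
N <- c("X", "Y", "Q")
A <- matrix(0, 3, 3, dimnames = list(N, N))
A["X", "Y"] <- 1 + 100   # X -> Y and X <-> Y: multiple edge, 1 + 100 = 101
A["Y", "X"] <- 100       # the arc is symmetric
A["X", "Q"] <- 10        # X -- Q (line, symmetric)
A["Q", "X"] <- 10
A

The code for each ordered pair is simply the sum of the codes of the edges present between the two nodes, which is what the decomposition A = B + S + W below formalises.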
Thus the adjacency matrix of a mixed graph H with node set N and edge set F is an |N| × |N| matrix obtained as A = B + S + W, the sum of three matrices: B coding the arrows, S the lines and W the arcs. Notice that, because of the symmetric nature of lines and arcs, S and W are symmetric, whereas B is not necessarily symmetric.

Defining mixed graphs by using vectors A more convenient way of defining small mixed graphs is based on a simple vector coding, as follows. The graph is defined by a character vector of length 3f, where f = |F| is the number of edges, and the vector contains a sequence of triples (type, label1, label2), where type is the edge type and label1 and label2 are the labels of the two nodes. The edge type accepts "a" for a directed arrow, "b" for an arc and "l" for a line. Notice that isolated nodes may not be created by this method. For example, the vector representation of the previous mixed graph is

> mgv <- c("b","X","Y","a","X","Y","l","X","Q",
           "b","Q","X","a","Y","Q","b","Y","Z",
           "a","Z","W","a","W","Z","b","W","Q")

Once again, as in the DAG case, we can use plotGraph(mgv) to plot the defined graph.

Mixed graph using the function makeMG Finally, the adjacency matrix of a mixed graph may be built up with the function makeMG. This function requires three arguments dg, ug and bg, corresponding respectively to the three adjacency matrices B, S and W composing the mixed graph. These may also be obtained by the constructor functions DG and UG of ggm for directed and undirected graphs respectively. Thus for the previous mixed graph we can issue the corresponding makeMG command, obtaining the same adjacency matrix (up to a permutation).

Generating stable mixed graphs There are three general classes of stable mixed graphs. The most general class is that of ribbonless graphs: these are mixed graphs without a specific set of subgraphs called ribbons. Figure 3 below shows two examples of ribbons. The exact definition of ribbons is given in Sadeghi (2012a). Figure 3: Two commonly seen ribbons ⟨h, i, j⟩. The lack of ribbons ensures that, for any RG, there is a DAG whose independence structure, i.e. the set of all conditional independence statements that it induces after marginalisation over, and conditioning on, two disjoint subsets of its node set, can be represented by the given RG. This is essential, as it shows that the independence structures corresponding to RGs are probabilistic, that is, there exists a probability distribution P that is faithful with respect to any RG, i.e. for random vectors X_A, X_B, and X_C with probability distribution P, X_A ⊥⊥ X_B | X_C if and only if the statement ⟨A, B | C⟩ is in the independence structure induced by the graph. This probability distribution is the marginal and conditional of a probability distribution that is faithful to the generating DAG. The other classes of stable graphs are further simplifications of the class of ribbonless graphs. Summary graphs have the additional property that there are neither arrowheads pointing at nodes that are endpoints of lines nor directed cycles with all arrows pointing in one direction. Ancestral graphs have the same constraints as summary graphs plus the additional prohibition of bows, i.e. arcs with one endpoint that is an ancestor of the other endpoint; see Richardson and Spirtes (2002). However, for some ribbonless and summary graphs the corresponding parametrisation is sometimes not available, even in the case of a standard joint Gaussian distribution.
If we suppose that stable mixed graphs are only used to represent the independence structure after marginalisation and conditioning, we can consider all types as equally appropriate. However, each of the three types has been used in different contexts and for different purposes. RGs have been introduced in order to deal straightforwardly with the problem of finding a class of graphs that is closed under marginalisation and conditioning, by a simple process of deriving them from DAGs. SGs are used when the generating DAG is known, to trace the effects in the sets of regressions as described earlier. AGs are simple graphs, meaning that they do not contain multiple edges, and the lack of bows ensures that they satisfy many desirable statistical properties. In addition, when one traces the effects in regression models with latent and selection variables (as described in the introduction), ribbonless graphs are more alerting to possible distortions (due to indirect effects) than summary graphs, and summary graphs are more alerting than ancestral graphs; see also Wermuth and Cox (2008). For the exact definition and a thorough discussion of all such graphs, see Sadeghi (2012a). Sadeghi (2012a) also defines the algorithms for generating stable mixed graphs of a specific type, for a given DAG or for a stable mixed graph of the same type, after marginalisation and conditioning, such that they induce the marginal and conditional DAG-independence structure. We implement these algorithms in this paper. By "generating graphs" we mean applying the defined algorithms, e.g. those for generating stable mixed graphs, to graphs in order to generate new graphs.

Functions to generate the three main types of stable mixed graphs Three main functions RG, SG, and AG are available to generate and plot ribbonless, summary, and ancestral graphs from DAGs, using the algorithms in Sadeghi (2012a). These algorithms look for the paths with three nodes and two edges in the graph whose inner nodes are being marginalised over or conditioned on, and generate appropriate edges between the endpoints. They have two important properties: (a) they are well-defined, in the sense that the process can be performed in any order and will always produce the same final graph, and (b) the generated graphs induce the modified independence structure after marginalisation and conditioning; see Sadeghi (2012a) for more details. The functions RG, SG, and AG all have three arguments: a, the given input graph; M, the marginalisation set; and C, the conditioning set. The graph may be of class "graphNEL" or of class "igraph", or may be represented by a character vector or by an adjacency matrix, as explained in the previous sections. The sets M and C (default c()) must be disjoint vectors of node labels, and they may possibly be empty sets. The output is always the adjacency matrix of the generated graph. There are two additional logical arguments, showmat and plot, to specify whether the adjacency matrix must be explicitly printed (default TRUE) and whether the graph must be plotted (default FALSE).
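The worked example that this section goes on to use (the object exvec in the call below) is not reproduced in this extract. As a stand-in, here is a minimal sketch assuming a DAG consistent with the blood-pressure example of the introduction (the exact edge set of Figure 1(a) is an assumption, as is the reading of "a", label1, label2 as an arrow from label1 to label2), using only the vector coding and the RG/SG/AG arguments documented above:

library(ggm)

# Assumed DAG for illustration: U is the hidden genetic factor, W age,
# Z body mass, X diastolic pressure, Y the pressure ratio.
dagvec <- c("a","W","Z",   # W -> Z
            "a","Z","Y",   # Z -> Y
            "a","W","X",   # W -> X
            "a","U","Y",   # U -> Y
            "a","U","X")   # U -> X

# Ancestral graph induced after marginalising over the hidden node U
ag <- AG(dagvec, M = "U", showmat = TRUE, plot = FALSE)

# Ribbonless and summary graphs are generated in the same way
rg <- RG(dagvec, M = "U", showmat = FALSE)
sg <- SG(dagvec, M = "U", showmat = FALSE)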
> AG(exvec, M, C, showmat = FALSE, plot = TRUE)

Verifying m-separation To verify globally whether an independence statement of the form A ⊥⊥ B | C is implied by a mixed graph, we use a separation criterion called m-separation. This has been defined in Sadeghi (2012a) for the general class of loopless mixed graphs and is the same as the m-separation criterion defined in Richardson and Spirtes (2002) for ancestral graphs. It is also a generalisation of the d-separation criterion for DAGs (Pearl, 1988). This is a graphical criterion that looks to see if the graph contains special paths connecting two sets A and B and involving a third set C of the nodes. These special paths are said to be active or m-connecting. For example, a directed path from a node in A to a node in B that does not contain any node of C is m-connecting A and B. However, if such a path intercepts a node in C, then A and B are said to be m-separated given C. This behaviour changes if the path connecting A and B contains a collision node, or collider for short, that is, a node c where the edges meet head-to-head (e.g. → c ← or → c ↔). In general, a path is said to be m-connecting given C if all its collider nodes are in C or in the set of ancestors of C, and all its non-collider nodes are outside C. For two disjoint subsets A and B of the node set, we say that C m-separates A and B if there is no m-connecting path between A and B given C.

Function for verifying m-separation The m-separation criterion has been implemented in ggm and is available by using the function msep. Note that there is still a function dSep in ggm for d-separation, although it is superseded by msep. The function has four arguments, where the first is the graph a, in one of the forms discussed before, and the other three are the disjoint sets A, B, and C.

Examples For example, consider the DAG of Figure 1(a). We see that Y and W are m-separated given Z:

> msep(a, "Y", "W", "Z")
[1] TRUE

and the same statement holds for the induced ancestral graph after marginalisation over U:

> b <- AG(a, M = "U")
> msep(b, "Y", "W", "Z")
[1] TRUE

This was expected because the induced ancestral graph respects all the independence statements induced by m-separation in the DAG that do not involve the variable U. As a more complex example, consider the following summary graph. Then the two following statements verify whether X is m-separated from Y given Z, and whether X is m-separated from Y (given the empty set):

> msep(a, "X", "Y", "Z")
[1] FALSE
> msep(a, "X", "Y")
[1] TRUE

Verifying maximality For many subclasses of graphs a missing edge corresponds to some independence statement, but for the more complex classes of mixed graphs this is not necessarily true. A graph where each of its missing edges is related to an independence statement is called a maximal graph. For a more detailed discussion of the maximality of graphs and of graph-theoretical conditions for maximal graphs, see Richardson and Spirtes (2002) and Sadeghi and Lauritzen (2012). Sadeghi and Lauritzen (2012) also give an algorithm for generating a maximal ribbonless graph that induces the same independence structure as an input non-maximal ribbonless graph. This algorithm has been implemented in ggm as illustrated below.
Function for generating maximal graphs Given a non-maximal graph, we can obtain the adjacency matrix of a maximal graph that induces the same independence statements with the function Max. This function uses the algorithm by Sadeghi (2012b), which is an extension of the implicit algorithm presented in Richardson and Spirtes (2002).

Verifying Markov equivalence Two graphical models are said to be Markov equivalent when their associated graphs, although non-identical, imply the same independence structure, that is, the same set of independence statements. Thus two Markov equivalent models cannot be distinguished on the basis of statistical tests of independence, even for arbitrarily large samples. For instance, it is easy to verify that the two directed acyclic graph models X ← U → Y and X ← U ← Y both imply the same independence statements and are, therefore, Markov equivalent. Sometimes we can check whether graphs of different types are Markov equivalent. For instance, the DAG X → U ← Y is Markov equivalent to the bi-directed graph X ↔ U ↔ Y. Markov equivalent models may be useful in applications because (a) they may suggest alternative interpretations of a given well-fitting model, or (b) on the basis of the equivalence one can choose a simpler fitting algorithm. For instance, the previous bi-directed graph model may be fitted, using the Markov equivalent DAG, in terms of a sequence of univariate regressions. In the literature several problems related to Markov equivalence have been discussed. These include (a) verifying the Markov equivalence of given graphs, (b) presenting conditions under which a graph of a specific type can be Markov equivalent to a graph of another type, and (c) providing algorithms for generating Markov equivalent graphs of a certain type from a given graph.

Functions for testing Markov equivalences The function MarkEqRcg tests whether two regression chain graphs are Markov equivalent. This function simply finds the skeleton and all unshielded collider V-configurations in both graphs and tests whether they are identical; see Wermuth and Sadeghi (2012). The arguments of this function are the two graphs a and b, in one of the allowed forms (an illustrative call is sketched at the end of this section). To test Markov equivalence for maximal ancestral graphs the algorithm is much more computationally demanding (see Ali and Richardson (2004)), and for this purpose the function MarkEqMag has been provided. Of course, one can use this function for Markov equivalence of regression chain graphs (which are a subclass of maximal ancestral graphs).

Figure 2: (a) A regression chain graph model; (b) the mixed graph obtained after marginalisation over L, which is not a regression chain graph.
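A minimal sketch of such a call, using the vector coding from earlier and the two Markov equivalent DAGs discussed above (this is an assumed illustration, not an example reproduced from the original article):

# g1 encodes X <- U -> Y and g2 encodes X <- U <- Y, reading "a",from,to
# as an arrow from -> to (an assumption about the coding convention).
g1 <- c("a","U","X",  "a","U","Y")
g2 <- c("a","Y","U",  "a","U","X")
MarkEqRcg(g1, g2)
# Expected: TRUE, since both graphs share the same skeleton and neither
# contains an unshielded collider.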
5,241.6
2012-01-01T00:00:00.000
[ "Mathematics", "Computer Science" ]
CATHETER-RELATED URINARY TRACT INFECTION IN PATIENTS SUFFERING FROM SPINAL CORD INJURIES Urinary tract infection is more common in patients with spinal cord injuries because of incomplete bladder emptying and the use of catheters, which can result in the introduction of bacteria into the bladder.  patients suffering from spinal cord injuries, admitted to the Institute for Physical Medicine and Rehabilitation, Centre for Paraplegia of the Clinical Centre of the University of Sarajevo, were included. The patients were divided into three groups according to the method of bladder drainage: Group A (n=) consisted of patients on clean intermittent catheterization; Group B (n=) consisted of patients with indwelling catheters; Group C (n=) consisted of patients who had performed self-catheterization. From a total of  urine samples,  (,) were positive and  (,) were sterile. More than  of the infected patients were asymptomatic. The overall rate of urinary infection amounted to about , episodes, and bacteriuria to , episodes per patient.  of infections (/) were acquired within seven days of catheterization. Infection was usually polymicrobial; the greatest number of urine samples, / (,), included more than one bacterium. The vast majority of cases of urinary tract infection and bacteriuria are caused by Gram-negative bacilli and enterococci, commensal organisms of the bowel and perineum, representative of those from the hospital environment. Providencia stuartii (,) was the most common, followed by Proteus mirabilis (,), Escherichia coli (,), Pseudomonas aeruginosa (,), Klebsiella pneumoniae (,), Morganella morganii (,), Acinetobacter baumannii (,), and Providencia rettgeri (,). , of isolates were Gram-positive, with Enterococcus faecalis (,) as the most common. , of isolates were multidrug-resistant, and the highest rates of resistance were found among Acinetobacter baumannii (,), Providencia rettgeri (,), Pseudomonas aeruginosa (,), Providencia stuartii (,) and Morganella morganii (,). Lower rates of resistance were found in Group C, i.e. patients on intermittent self-catheterisation. Eradication of organisms was achieved in only  (,) of patients; hence, antibiotic therapy had no or very low effect.
CATHETER-RELATED URINARY TRACT INFECTION IN PATIENTS SUFFERING FROM SPINAL CORD INJURIES Amela Dedeić-Ljubović*, Mirsada Hukić, Institute of Clinical Microbiology, Clinical Centre of the University of Sarajevo, Bolnička ,  Sarajevo, Bosnia and Herzegovina. * Corresponding author

Introduction Urinary tract infections in patients with spinal cord injury (SCI) are the most frequent complication due to vesical neurogenic alteration (). Several factors may act to predispose patients with neurogenic bladder to UTI. The most important of these are high-pressure voiding, large post-voiding residuals, bladder catheterization, vesicoureteral reflux, bladder overdistension, stones in the urinary tract and outlet obstruction (). Recurrent UTI requires multiple courses of antibiotic therapy, thus markedly increasing the incidence of multidrug-resistant (MDR) bacteria (). Bacteriuria, the presence of bacteria in the urine, is very common in patients with an indwelling catheter (). Studies show that in patients with spinal cord injuries the incidence of bacteria in the bladder is - per catheterization, and - episodes of bacteriuria occur per  days of intermittent catheterization performed  times a day (). Abnormal levels of pyuria are present in the great majority of people with SCI who have indwelling catheters and also in those using IC. Lack of pyuria reasonably predicts the absence of UTI in SCI patients (). The majority of organisms are from the patients' own colonic flora and may be native inhabitants or new immigrants, that is, exogenous organisms from the hospital environment (, ). Bacteria enter the urinary tract through the meatus, migrate to the bladder, and proliferate in the urinary tract. Within  hours of insertion of a catheter, a biofilm can be found on the surface of the catheter, the drainage bag and the mucosa. This biofilm consists of the Tamm-Horsfall protein, struvite and apatite crystals, bacterial polysaccharides, glycocalyces and living bacteria. The presence of the biofilm is thought to be responsible for the persistence of bacteriuria (, , and ). Additionally, exogenous organisms may colonize catheter equipment if transferred via the hands of health care personnel (). Among short-term catheterized patients, Escherichia coli is the most frequent species isolated. Other common organisms are Pseudomonas aeruginosa, Klebsiella pneumoniae, Proteus mirabilis and enterococci. Particularly when antibiotics are in use, yeast may be isolated as well. Most bacteriuria in short-term catheterization involves a single organism; however, as much as  may be polymicrobial. Among long-term catheterized patients infections are mostly polymicrobial, in up to  of urine specimens. Such specimens commonly have - bacterial species, each at concentrations of  CFU/ml or more; some may have up to - species at that concentration. As noted, these include common uropathogens such as Escherichia coli, Pseudomonas aeruginosa and Proteus mirabilis, as well as less familiar species such as Providencia stuartii, Morganella morganii and Acinetobacter spp. This high prevalence of polymicrobial bacteriuria and of unfamiliar uropathogens is sometimes not recognized by clinicians and laboratories (, ). The use of intermittent catheterization has improved the care of these patients, but infections still arise, and the dilemma facing the urologist or physician is whether or not to administer antibiotic therapy ().
Material and Methods Patients  patients suffering from spinal cord injuries, admitted to the Institute for Physical Medicine and Rehabilitation, Centre for Paraplegia, Clinical Centre of the University of Sarajevo, were included. The patients were divided into three groups according to the method of bladder drainage: Group A (n=) consisted of patients on clean intermittent catheterization; Group B (n=) consisted of patients with indwelling catheters; Group C (n=) consisted of patients who had performed self-catheterization.

Patient data Demographic information (age, sex, spinal cord injury and time of injury, time and frequency of hospitalization, intake of systemic antibiotics with activity against urinary pathogens during the study period, method of bladder drainage) was collected. The age distribution was comparable for all three groups and ranged from  to  years, with a mean age of , years. The male/female proportion was / or ,/,.

Specimen A total of  urine samples were examined:  were from patients on clean intermittent catheterization,  from patients who had performed self-catheterization and  from patients with indwelling catheters. Urine samples were collected by catheterization or using the clean-catch technique for patients able to void spontaneously. Urine specimens collected by catheterization were obtained by aseptically aspirating the clamped and disinfected catheter with a sterile syringe.

Microscopic examination Staining of the uncentrifuged urine with methylene blue was performed, and the presence of bacteria and leukocytes was examined. The presence of at least one bacterium per oil-immersion field in a midstream, clean-catch, uncentrifuged urine correlates with ³ bacteria or more per millilitre of urine. The absence of bacteria in several fields of stained uncentrifuged urine indicates the probability of fewer than ³ bacteria/ml. A finding of ≥  leukocytes per high-power field is considered abnormal (pyuria).
Urine culture Quantitative urine culture was done at the Institute of Microbiology, Immunology and Parasitology, Clinical Centre of the University of Sarajevo. The filter paper method (Leigh and Williams, "Basic Laboratory Procedures in Clinical Bacteriology," WHO, Geneva, ), in which a given volume of urine is absorbed by a piece of filter paper and then put on a plate, was performed. Filter paper strips were cut to the following dimensions: length , cm, width , cm, and at a length of , cm they were curved and then autoclaved. This curved part of the filter paper was put on a plate and held for - seconds. Blood agar plates (containing Columbia blood agar base by Becton Dickinson and  of sheep blood) were used for the bacterial count and to facilitate the growth of fastidious microorganisms, particularly Gram-positive bacteria. Endo agar and MacConkey agar (Becton Dickinson) were used for selective isolation of Enterobacteriaceae. These media are specially designed to distinguish lactose-fermenting (pink to red) from non-lactose-fermenting colonies (colorless or slightly beige). The plates were incubated overnight at °C (±, °C) in bacteriological incubators under atmospheric conditions (Endo agar and MacConkey agar) and in an atmosphere enriched with  CO₂ (blood agar); the bacterial count was performed and, if judged to be significant, isolates were identified to the species level. According to the Medical Laboratory Manual (), carbohydrate fermentation patterns and the activity of amino acid decarboxylases and other enzymes are used in biochemical differentiation (). Significant bacteriuria was defined as starting from the level of  or more colony-forming units of bacteria per cm³.

Antimicrobial susceptibility testing Susceptibility testing was only performed on bacteria considered significant. For Gram negatives, the primary antibiotic sensitivity screening consisted of ampicillin, cotrimoxazole, nitrofurantoin, cephalexin, cefuroxime, gentamicin, and quinolones. For multiresistant strains (fewer than two susceptibilities left in the primary antibiotic sensitivity screening), amikacin, piperacillin+tazobactam, ceftriaxone, ceftazidime, and imipenem were tested. For Pseudomonas aeruginosa, we tested gentamicin, amikacin, ciprofloxacin, piperacillin+tazobactam, ceftazidime and imipenem. Penicillin, methicillin, gentamicin, nalidixic acid, nitrofurantoin and trimethoprim-sulphamethoxazole were tested in the case of staphylococci. Streptococci were tested against penicillin, ampicillin, amoxicillin, nitrofurantoin and trimethoprim-sulphamethoxazole. Disk diffusion was used according to the Kirby-Bauer method and CLSI criteria (Medical Laboratory Manual, Volume II: Microbiology) ().

Effectiveness of antibiotic therapy The effectiveness of antibiotic therapy was determined on the basis of the elimination of bacteria from urine (sterility of urine)  hour after the beginning of antibiotic therapy.

Statistical analysis The χ² test, Student's t-test and Spearman's rank correlation were used for statistical data processing. The significance of the differences observed was assessed using Pearson's chi-square test, with p<0,05 considered to be statistically significant. Test results are presented both graphically and in tabular form.
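As an illustration of the group comparison described here: the actual counts behind Table 1 are not reproduced in this extract, so the numbers below are invented placeholders. A Pearson chi-square test of positive versus sterile cultures across the three drainage groups could be run in R as follows.

# Hypothetical counts of positive vs. sterile urine cultures for the three
# bladder-drainage groups (placeholders, not the study's Table 1 values).
cultures <- matrix(c(180, 20,    # Group A: positive, sterile
                     220, 15,    # Group B
                     120, 60),   # Group C
                   nrow = 3, byrow = TRUE,
                   dimnames = list(c("A_intermittent", "B_indwelling",
                                     "C_self_catheterization"),
                                   c("positive", "sterile")))
chisq.test(cultures)   # Pearson chi-square; p < 0.05 taken as significant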
Results Of the  urine samples  (,) were positive and  (, ) sterile.Patients who had performed self-catheterization (Group C) had a signifi cantly higher percentage of sterile urine culture versus Groups A and B (P<,) (Table ).Th e frequency of signifi cant bacteriuria (> CFU/ml) was calculated, giving the result of , (/).Th ere were no signifi cant differences between patients in respect of the percentage of significant bacteriuria (p>,) (Table ).Pyuria was found only in / (,) urine samples of patients on self-catheterization.This is a statistically significant difference (significantly less) in comparison with Groups A and B (p<,) (Table ). During hospitalisation there were signifi cant diff erences in the number of episodes of bacteriuria and urinary tract infection between the three Groups.Th e median number of episodes of bacteriuria for Groups A, B and C amounted to , : , : ,, and urinary tract infection to , : , : , respectively.Th e average number of urinary tract infections was , episodes per patient with minimum , and maximum  episodes.Graph . shows the number of urinary tract infections related to the three methods of bladder drainage., (/) of infections were acquired within seven days from catheterization.Patients on self-catheterization had a signifi cantly smaller number of episodes of bacteriuria and urinary tract infection in comparison to patients on clean intermittent catheterization or with indwelling catheter (p<,). Discussion Urinary tract infection is a serious complication of neurogenic bladder in SCI patients, related to high morbidity and mortality rates ().Despite improved methods of treatment, urinary tract morbidity still ranks as the second leading cause of death in the SCI patient (). The most important factor which may act to predispose the patients with neurogenic bladder to UTI is bladder catheterization (). In our study only , of urine samples remained sterile, while in all other samples one or more etiological agents were isolated.Similarly, Betran et al, () and Ramić () in their studies on urine samples from patients with spinal cord injury found high percentages of positive urine culture (, and   respectively).The presence of significant bacteriuria in our study (> CFU/ml) was found in , of patients, and the majority of samples showed the amount of  CFU/ ml (,).Billote-Domingo et al. 
() in their study conducted in Spain from April-November  report that  of samples had significant bacteriuria.Abnormal levels of pyuria are present in the great majority of people with SCI who have indwelling catheters and also in those using IC.Lack of pyuria reasonably predicts the absence of UTI in SCI patients (, ).In our study pyuria was present in only , of samples.Rather than infection, in most of the cases asymptomatic bacteriuria (colonisation) was found.Asymptomatic bacteriuria (colonisation of distal urethra) was found in many studies, mostly present in - of patients with spinal cord injury (, , , and ).Average number of urinary tract infection in our study was Polymicrobial bacteriuria is the rule in patients with indwelling catheters, and occurred in  of culture-positive urine specimens.Concerning the types of organisms causing urinary tract infections in our three Groups of patients, fi gures are very similar to those found in the literature about patients with chronic indwelling catheters (, , ), only Providencia stuartii being more highly represented in our population.Providencia stuartii is an important urinary pathogen in SCI with a degree of isolation twenty-fold higher than in the rest of patients and with antimicrobial multiresistance ().Th e reason for this frequent change in the pathological organism causing urinary infections needs to be further investigated.A possible explanation could be changes occurring in the urethral fl ora.Th e place of origin of these organisms causing infections needs to be clarifi ed.Probably, they are present in the residential fl ora in the fosse navicularis and are introduced into the urinary tract, despite disinfection, whenever catheterization occurs. As to the catheterization technique, it is widely accepted that intermittent catheterization, when compared with indwelling catheters, reduces the risk of urinary tract infection in SCI patients and is the preferred method of bladder drainage in this patient population (). Because of the disadvantages of indwelling urethral catheters it is the best to change to intermittent catheterization as soon as possible after injury.By doing this, the risk of bladder and kidney infection is reduced and the bladder will return to a more natural pattern of regular fi lling and emptying (, ).Patients can perform intermittent catheterization themselves without increasing the risk for infections.Th e programme followed in our hospital to learn this technique, therefore, fulfils the standard requirements of hygiene.As soon as pos- sible, patients are trained to assess self-catheterization. Th ere is a trend towards more resistant isolates in all three groups of patients.Prolonged or repeated exposure to antimicrobial agents and the consequent antibiotic pressure increase the risk of colonization and infection with multiresistant bacteria (, ,  and ).Waites et al. () have reported that  of isolates from SCI outpatients were multiresistant organisms. Many patients with signifi cant bacteriuria are considered to be colonized rather than infected, and treatment should be reserved for those with clinical symptoms or other signs of infection.Asymptomatic bacteriuria need not be treated with antibiotics (, ).Prophylactic antibiotics are not routinely recommended either, because of their cost, potential adverse eff ects and the increased rate of isolation of resistant organisms (). 
Conclusion The urinary tract of catheterized patients is highly susceptible to infection. Recurrent problems with these nosocomially acquired catheter-related urinary tract infections are the changes in the microbiological and antibiotic sensitivity patterns of the pathogens isolated. There is an emergence of antibiotic-resistant organisms. Close urological follow-up is crucial in ensuring that adequate bladder drainage is achieved, avoiding the use of long-term indwelling urinary catheters if at all possible. For those patients who require long-term urinary appliances, patient education and strict attention to hygiene and catheter care policies are important. The most highly recommended preventive strategies include proper hand washing, aseptic insertion technique, maintaining a closed sterile drainage system and maintaining an unobstructed urine flow. Bacteriuria is inevitable in patients with long-term catheterization, and in most cases treatment should be started only in the presence of symptoms. Treatment of UTI in patients with urinary catheters requires replacement of the catheter and selection of antibiotics based on the extension of the infection and the results of the urine culture.
TABLE 1. Positive and sterile urine culture in three groups of patients.
TABLE 2. Total number of bacteria/ml according to the method of bladder drainage (χ² = 133.282, p < 0.05, statistically significant).
TABLE 3. Pyuria in urine samples of three groups of patients (χ² = 1357.873, p < 0.05).
TABLE 4. Number of isolates of three groups of patients.
TABLE 5. Multidrug resistance (strains resistant to ≥4 antimicrobials).
3,861.2
2009-02-20T00:00:00.000
[ "Medicine", "Biology" ]
Habitat amount or landscape configuration: Emerging HotSpot analysis reveals the importance of habitat amount for a grassland bird in South Dakota Habitat loss and fragmentation are two important drivers of biodiversity decline. Understanding how species respond to landscape composition and configuration in dynamic landscapes is of great importance for informing the conservation and management of grassland species. With limited conservation resources, prescribed management targeted at the appropriate landscape process is necessary for the effective management of species. We used pheasants (Phasianus colchicus) across South Dakota, USA as a model species to identify environmental factors driving spatiotemporal variation in population productivity. Using an emerging Hotspot analysis, we analyzed annual count data from 105 fixed pheasant brood routes over a 24-year period to identify high (HotSpot) and low (ColdSpot) pheasant population productivity areas. We then applied classification and regression tree modeling to evaluate landscape attributes associated with pheasant productivity among spatial scales (500 m and 1000 m). We found that the amount of grassland at a local spatial scale was the primary factor influencing an area being a HotSpot. Our results also demonstrated non-significant or weak effects of fragmentation per se on pheasant populations. These findings are in accordance with the habitat amount hypothesis highlighting the importance of habitat amount in the landscape for maintaining and increasing the pheasant population. We, therefore, recommend that managers should focus on increasing the total habitat area in the landscape and restoring degraded habitats. Our method of identifying areas of high productivity across the landscape can be applied to other species with count data. Introduction Habitat loss and fragmentation are two of the greatest threats to wildlife conservation [1,2]. The fragmentation process involves the splitting of natural habitat into smaller, more isolated patches and is intrinsically coupled with habitat loss [3]. While habitat loss can lead to a decline in wildlife populations, habitat fragmentation may increase the cost of moving among habitat patches and therefore reduce the accessibility and suitability of surrounding patches for wildlife [3]. Habitat loss and fragmentation combined with greater exposure to human land uses have resulted in widespread declines in biodiversity. These landscape changes have been linked to negative impacts on populations of fish [4], mammals [5], birds [6,7], insects [8], and plants [9]. Indeed, one of the key questions in conservation biology is determining the effects of habitat loss versus habitat fragmentation per se [10,11]. This further leads to the debate of conserving multiple small or fewer large habitat patches [12]. Grasslands are among the most threatened biomes worldwide [13]. In North America, nearly 98% of the native northern tallgrass prairie has been lost to the cultivation of row crops and the planting of non-native grasses for livestock production [14]. Numerous species have suffered severe population declines as a result of the frequency and intensity of landscape changes. Grassland songbirds are experiencing the steepest population decline of any bird group [15]. From 1968 to 2008, 37% of grassland obligate bird species experienced a population decline [16]. In the United States, South Dakota has experienced a substantial decline in perennial grassland [17,18]. 
Between 2006 and 2012 South Dakota lost ~76% of total extant grassland areas to other land uses [19], which has threatened its grassland species. The ring-necked pheasant (Phasianus colchicus; hereafter pheasant) is an edge-tolerant species that is negatively impacted by the conversion of grassland to cultivation [20,21]. Pheasants were introduced to the United States in the early 1900s and they soon adapted to not only coexist with but thrive under primitive agriculture [22]. The landscapes at that time were high-quality pheasant habitats. Relatively primitive agricultural practices created a landscape containing a diversity of crop types established over a variety of field sizes [22,23]. Abundant weeds in the crop fields and inefficient harvest of grain leaving waste grains helped provide ideal brood habitat and high-quality winter cover [22]. For the past 30 years, however, cultivation has intensified, leading to a decline in grassland and emergent wetland area or habitat quality [22,23]. In South Dakota, nearly 58% of grassland loss from 2006-2012 occurred in key pheasant regions [19], and pheasant populations have been declining since then [22]. For example, annual brood survey data in South Dakota indicated a nearly 41% decline in pheasant relative abundance from 2008 to 2018 [24], which coincided with a 37% reduction in the area of grasslands enrolled in the Conservation Reserve Program and a 24% increase in the area of harvestable corn and soybean [25]. Although introduced, pheasants are economically and socially important in South Dakota. According to the South Dakota Department of Game, Fish, and Parks [26], pheasants are the most sought-after and profitable upland game species, with pheasant harvest being a multimillion-dollar industry in the state. Pheasant hunting is also an important social activity that reunites families and friends [26]. A recent analysis suggested that in a single county in South Dakota, pheasant hunting generated $9.7 million in economic benefit and created 111 jobs (Gregory and Mills, unpublished data). Pheasants are important to the economy and culture of South Dakota [26] and, therefore, conserving pheasants can protect the habitat for native grassland species. By attracting funding from individual donors and wildlife organizations, they act as a surrogate for broader biodiversity conservation, especially for grassland species. The distribution, abundance, and survival of this species reflect the quality and conservation status of the grassland it inhabits. Understanding the drivers of recent broad-scale pheasant population declines in South Dakota is an important management objective and can provide insights into the sensitivity of a grassland species to landscape changes. Moreover, the dynamic nature of this agriculturally dominated landscape provides an opportunity to investigate species-habitat relationships and identify landscape attributes useful in predicting habitat quality. Furthermore, habitat loss or habitat fragmentation does not affect all species equally; sensitivity to these processes varies with species' numerous ecological traits. We, therefore, need to understand how the habitat amount and configurational landscape heterogeneity (connectivity of fragments, number of fragments) influence this species' abundance. With limited conservation resources, targeted and prescribed management at the appropriate landscape process at the appropriate spatial scale is required to optimize conservation efforts.
Here, we aim to identify landscape factors, whether habitat amount or habitat configuration, influencing spatiotemporal variation in pheasant productivity across South Dakota. In this study, we used an emerging HotSpot analysis of annual pheasant brood survey data to investigate the spatial and temporal drivers of pheasant population dynamics. Specifically, we evaluated 1) the spatial and temporal variability of high and low pheasant productivity areas in South Dakota, 2) the spatial context and landscape heterogeneity of high pheasant productivity areas relative to areas under agricultural production, 3) the degree to which high pheasant productivity areas were correlated to natural land cover, and 4) how the juxtaposition of agricultural land uses and natural areas impacted pheasant productivity. Study system South Dakota is part of the prairie potholes ecosystem and is composed primarily of open grasslands east of the Missouri River and upland steppe ecotypes in the west. Our study occurred primarily in eastern South Dakota, which was characterized by tallgrass prairie and highly fragmented by agriculture [27,28]. Our study system had a mid-continent, mid-latitude temperature and precipitation regime characterized by cold snowy winters and hot dry summers. The average low temperature for January was ~ −11˚C, while the average high temperature for July was ~30˚C. Late springs and early summers experienced moderate rainfall with average annual precipitation of 508 mm [29]. Cultivated agriculture was the dominant land use and a key component of the regional economy, accounting for nearly $25.6 billion (~30%) of South Dakota's total economy [30]. Pheasant data We used annual pheasant brood survey data collected from 1993 to 2016 by the South Dakota Department of Game, Fish, and Parks. Annual pheasant brood surveys included counts of males, females, and broods observed along 110 fixed 48-km survey routes distributed across the pheasant range in South Dakota [31]. Routes were surveyed from 25 July to 15 August each year using standardized methods on mornings when weather conditions were optimal for detecting pheasants (i.e., clear skies, heavy dew, and light winds). During surveys, one observer counted the number of pheasants and broods observed within 0.2 km of the roadway while driving at a speed <48 km/hour [31]. Raw pheasant counts were converted into a pheasants·km⁻¹ index of pheasant abundance [24]. We censored 5 routes that were west of the Missouri River where route density was too low to adequately parameterize the spatial analysis and to account for the difference in land cover (i.e., dominated by mixed-grass prairie) from the tallgrass prairies (Fig 1). Spatial coverage of the remaining 105 routes (93 located east and 12 southwest of the Missouri River) aligned with areas where pheasant populations in South Dakota were concentrated; thus, our sampling extent included the majority of the pheasant population in South Dakota (Fig 1, [31]). Land cover data We used the Cropland Data Layer (CDL) to characterize the land cover for each route [32]. One drawback to the CDL for South Dakota is that data are not available before 2006 [33]. Therefore, we confined our analysis of the influence of land cover to the 11-year period from 2006 to 2016. We reclassified the original 133 CDL land-cover classes into five cover classes: grassland, row crops, small grains, wetlands, and others.
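As a rough illustration of this reclassification step (not the exact GIS workflow used in the study), the short Python sketch below collapses raw CDL codes into the five analysis classes; the specific code numbers in the mapping are examples only and would need to be checked against the CDL legend.

```python
import numpy as np

# Hypothetical mapping from a few CDL class codes to the five cover classes
# used in this study (the real CDL defines 100+ codes; these are examples only).
RECLASS = {
    1: "row crops",      # corn (example code)
    5: "row crops",      # soybeans (example code)
    4: "row crops",      # sorghum (example code)
    23: "small grains",  # spring wheat (example code)
    28: "small grains",  # oats (example code)
    176: "grassland",    # grassland/pasture (example code)
    195: "wetlands",     # herbaceous wetlands (example code)
}

def reclassify(cdl_raster: np.ndarray) -> np.ndarray:
    """Collapse raw CDL codes into the five analysis classes; any code
    not listed in RECLASS falls into the 'others' class."""
    classes = ["grassland", "row crops", "small grains", "wetlands", "others"]
    out = np.full(cdl_raster.shape, classes.index("others"), dtype=np.int8)
    for code, label in RECLASS.items():
        out[cdl_raster == code] = classes.index(label)
    return out

# Example: a tiny fake raster (code 999 is unknown and becomes 'others')
demo = np.array([[1, 176, 23], [195, 999, 5]])
print(reclassify(demo))
```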
Grass-dominated land cover ranged from native prairie to anthropogenically altered grasslands such as hay lands and pastures. Because of their spectral similarity, these different cover types were difficult to resolve in satellite imagery. Agricultural crops including corn, soybeans, and sorghum were categorized as row crops. Crops including wheat, barley, and oats were classified as small grains. Woody and herbaceous wetlands were classified as wetlands. The remaining land-cover types were classified as others [34]. Identifying areas of high, average, and low pheasant productivity To classify areas as having high (HotSpots), average (AverageSpots), or low (ColdSpots) pheasant productivity, all routes were converted to point features using ArcGIS version 10.6 (ESRI, Inc., Redlands, CA, USA), where each point depicted the mid-point of the respective route. We then applied the Getis-Ord Gi* statistic to conduct an independent HotSpot analysis of pheasants·km⁻¹ for each year from 1993 to 2016 [35]. We used incremental spatial autocorrelation to identify the distance band threshold that exhibited maximum clustering [36]. Once we determined HotSpots, AverageSpots, and ColdSpots for each year of the 24-year study period, we created separate point feature files for significant HotSpots and ColdSpots stratified by year, and then bound those areas using a minimum convex polygon (MCP). This yielded a set of 24 HotSpot raster images and a set of 24 ColdSpot raster images, one for each year. In each of these HotSpot or ColdSpot MCPs, we coded 1 for those areas that were HotSpots or ColdSpots, respectively, and 0 for all others. We then overlaid the HotSpot or ColdSpot MCP layers and summed the MCPs to calculate the number of times over our 24-year study period that an area was a HotSpot or ColdSpot. Similarly, we calculated areas that were AverageSpots. To characterize the trend in the pheasant population across these different levels of pheasant productivity, we then calculated average pheasants·km⁻¹ along routes identified as HotSpots, ColdSpots, and AverageSpots. Determination of landscape characteristics We computed landscape metrics associated with our reclassified land-cover data for each route classified as either a HotSpot or a ColdSpot annually for the 11-year period from 2006 to 2016 with FRAGSTATS version 4.2 [37]. All landscape metrics from FRAGSTATS were computed at two spatial neighborhoods (500 m and 1000 m). This process involved the creation of 500-m and 1000-m buffers around each route (1 and 2 times the average pheasant home range size during nesting and brooding seasons, respectively; [38,39]). We chose routes, instead of points, for creating buffers because landscape characteristics across these 48-km routes describe an area being a HotSpot or a ColdSpot and allow us to link the spatial land-cover and land-use attributes that may influence whether that area is a HotSpot or a ColdSpot. Reclassified land cover was then extracted for each of the buffered routes and used to calculate landscape metrics that we predicted would be important factors influencing pheasant HotSpots based on the ecology of gallinaceous birds. This included composition, contiguity, and fragmentation metrics of each land-cover class for each spatial neighborhood [40][41][42]. Composition metrics included the proportion of area of each land-cover type in each buffer.
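The HotSpot classification above rests on the Getis-Ord Gi* statistic. As a rough illustration of that calculation (not the ArcGIS implementation used in the study), the sketch below computes Gi* z-scores for route midpoints with a fixed distance band; the coordinates, counts, and band width are made-up example values.

```python
import numpy as np

def getis_ord_gi_star(xy, values, band):
    """Gi* z-scores for points `xy` (n x 2) with attribute `values`,
    using binary weights within `band` distance (focal point included, as in Gi*)."""
    n = len(values)
    xbar, s = values.mean(), values.std(ddof=0)
    d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=2)
    w = (d <= band).astype(float)           # includes the focal point itself (d = 0)
    sw = w.sum(axis=1)
    num = w @ values - xbar * sw
    den = s * np.sqrt((n * (w ** 2).sum(axis=1) - sw ** 2) / (n - 1))
    return num / den                        # > +1.96 ~ HotSpot, < -1.96 ~ ColdSpot

rng = np.random.default_rng(0)
pts = rng.uniform(0, 100, size=(105, 2))    # pretend route midpoints (km)
counts = rng.gamma(2.0, 2.0, size=105)      # pretend pheasants per km
z = getis_ord_gi_star(pts, counts, band=25.0)
print("HotSpot routes:", np.sum(z > 1.96), "ColdSpot routes:", np.sum(z < -1.96))
```

With binary weights and the focal point included, z-scores above roughly +1.96 flag candidate HotSpots at the 95% level, mirroring the significance screening applied to each survey year.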
The contiguity of land cover was measured using the contiguity index, which represented the size and connectivity of patches of a given land-cover type on a scale of 0 (small patches) to 1 (large and contiguous patches). Fragmentation was measured using the number of patches, which summed the number of patches of each land-cover type at each scale. Increasing fragmentation of a land-cover type represented an increase in the number of patches. We also assumed that pheasant HotSpots would be impacted by the total number of patches in the landscape and by the contiguity of the landscape at both 500-m and 1000-m scales. Data analysis The analysis of pheasant HotSpots was conducted using a classification and regression tree (CART, [43]) approach. CART is a nonparametric machine learning algorithm that makes no assumptions about relationships between features and is robust to correlated variables [44]. The CART analysis was conducted using the rpart package in the R programming language [45]. To begin the CART analysis, simple random sampling without replacement was used to partition the full data set into a training dataset containing 80% of the observations and a testing and validation data set containing 20% of the observations. A CART was then parameterized with all independent variables at both 500-m and 1000-m scales. The CART was applied first on the training data set and then on the test data set to assess model generalizability and to evaluate any over-fitting of the model to the training sample. A confusion error matrix was then used to further evaluate model performance [46]. Results The total area that was a HotSpot for some period of time during the 11-year study period was 47,643 km² (~38% of the total study area). This included a core area of 3,512 km² that was consistently a HotSpot for all 11 years (Fig 4A). ColdSpots occupied a total of 20,846 km² (~17% of the study area), of which there was a 454 km² core that was a ColdSpot for 9 of 11 years (Fig 4B). The maximum number of years that an area was a ColdSpot was 9 of 11 years, as there was no overlap of ColdSpots from 2015 or 2016. HotSpots over the 24-year period had an additional area of 7,034 km² compared to HotSpots over the 11-year period. Similarly, ColdSpots for the 24-year period had an additional area of 3,938 km² compared to ColdSpots for the 11-year period (S1 File). We observed a decline in areas under AverageSpots over both the 11-year (−0.01 ± 0.05 km²·yr⁻¹) period and the 24-year (−0.02 ± 0.01 km²·yr⁻¹) period (Fig 5). ColdSpots showed a decreasing trend (−0.03 ± 0.03 km²·yr⁻¹) over the 11-year period and an increasing trend (0.05 ± 0.03 km²·yr⁻¹) over the 24-year period (Fig 5). Areas under HotSpots demonstrated a positive trend over both the 11-year (0.02 ± 0.02 km²·yr⁻¹) and 24-year (0.09 ± 0.03 km²·yr⁻¹) periods (Fig 5). Environmental drivers of HotSpots The results of the CART analysis showed grassland area at a 500-m scale to be the only variable influencing HotSpots. HotSpots were predicted for sites with > 33% of the area under grassland at a 500-m scale (Fig 6). The result from the confusion matrix was 0.75. Discussion Over the 24-year period, pheasant populations in South Dakota have shown a positive population trend; however, during the latter decade, pheasant populations across South Dakota have declined.
When we consider spatial variation in pheasant productivity, we observed high rates of decline across HotSpots, ColdSpots, and AverageSpots over the latter 11-year period. We anticipated that HotSpots would occur in areas of high-quality habitat and would support positive pheasant population trajectories [48,49]. Rather, HotSpots also contained declining pheasant populations, albeit at a slower rate than was observed for ColdSpots and AverageSpots, suggesting HotSpots had relatively higher suitability for pheasant populations. One potential explanation is that land-use changes over the past decade have incurred an extinction debt upon pheasants and the pheasant population is still responding to the new landscape configuration [50,51]. Similar results have been shown to occur for birds [52], mammals [53], plants [54], and butterflies [51]. This delay in pheasant response can be explained by habitat fragmentation and habitat loss, both of which degrade habitat quality and therefore affect breeding success, recruitment, and survival [55]. This result could also be attributed to an increase in predator populations in fragmented landscapes [55], which could further negatively impact the pheasant population. We observed an increase in the HotSpot area with a simultaneous decrease in pheasant abundance and an overall lowering of the pheasant numbers required for an area to be a HotSpot over the years. This suggests that even though we observed an expansion of the HotSpot area across South Dakota, the overall quality of these HotSpots was declining to become more similar to AverageSpots or ColdSpots. We hypothesize that this could also be a response to increased habitat fragmentation restricting access to resources below a level suitable to sustain pheasant viability [54][55][56]. Fragmentation may also enhance predation pressure in landscapes by increasing predator abundance and inducing edge effects [57,58]. This further highlights the importance of identifying patches for prioritization in habitat management to deal with a potential extinction debt and avoid future population decline [54,59,60]. Apart from the mosaic of habitat that is necessary to fulfill the life-stage requirements of pheasants, pheasant populations are significantly impacted by harsh weather conditions [20,26]. For example, drought is known to limit resources (e.g., concealment and food), which can necessitate increased movements and decrease rates of pheasant survival and reproduction [61], while harsh winter conditions (e.g., high snow depth) can have severe negative impacts on pheasant survival [62]. The summer of 2012 brought one of the harshest droughts in South Dakota history [29]. When coupled with a harsh winter and numerous early-season blizzards in 2013 [29], 2013-2014 was one of the worst pheasant productivity years across the state (Fig 3). To further exacerbate the situation, one of the largest net losses of grassland area to cultivation occurred from 2012 to 2014 [63]. During this period, we observed that HotSpots exhibited a significant reduction in grassland area and an increase in fragmentation among grassland patches (S1 File). This land-use conversion and these climatic stressors combined to produce the greatest per capita decline in pheasant counts observed in HotSpots throughout our analysis (Fig 3). Moreover, we note that in many cases it is not a single stressor that pushes a population past a threshold but a combination of stressors.
For example, sage-grouse (Centrocercus urophasianus) in Wyoming were relatively resistant to West Nile virus or oil and gas fracking alone, but the combination of both stressors resulted in rapid population decline and in some cases extirpation [64]. Similarly, it appears that in South Dakota the combination of extreme weather events and rampant landscape conversion to cultivation is contributing to the observed pheasant decline. Despite pheasants being classified by some as habitat generalists, due to their distribution across a wide range of habitats [21,22,26], they are primarily a grassland species and require large tracts of grassland to successfully fledge offspring and to support adult survival [38,65]. It was not surprising that the area under grassland habitat was the main explanatory variable for predicting whether an area was a HotSpot at a small spatial scale. The positive relationship between habitat area and the number of individuals it can support is one of the most important phenomena in ecology and has been frequently used to describe the effects of area loss on species density or frequency of occurrence [6,66]. Many studies on the impact of fragmented landscapes have demonstrated strong area effects on species abundances and concluded that differences in habitat area are a primary factor determining population persistence [67,68]. We did not find any significant relationship between fragmentation per se and pheasant HotSpots, suggesting that pheasants respond more strongly to habitat loss than to fragmentation. This could further be explained by the habitat amount hypothesis (HAH), which suggests that species density increases with the total habitat area in the landscape around the sample site [11]. As such, the HAH implies that habitat fragmentation (the configuration of patches in the landscape) is ultimately not decisive for understanding species density; configuration matters only to the extent that it influences the amount of habitat in the local landscape. Similar results, favoring the HAH over fragmentation, were observed in many other studies [69][70][71]. Thus, these results reinforce the HAH and corroborate the idea that fragmentation per se has a weak effect on the ecological response of pheasants when the habitat amount is controlled. This study, therefore, helps to inform the debate on the relative importance of habitat amount versus fragmentation per se for species abundance [10,72,73]. Our results suggest that conservation efforts for pheasants should focus on habitat preservation and restoration. These results further help simplify decision-making in species conservation policy, since efforts can focus on preventing habitat loss, as well as on increasing or maintaining the total habitat amount in the landscape. We suggest that to improve management efficacy and the long-term persistence of populations, managers need to identify ecological factors at multiple scales that enhance, facilitate, or constrain populations. We recommend that managers should focus on preserving and restoring the maximum overall amount of habitat regardless of its configuration. Maintaining habitat amounts by managing habitat patches, large and small, could enhance the benefits of local management practices for pheasants. In landscape systems where the majority of the land is privately owned, groups of landowners may be incentivized to coordinate efforts at the landscape scale.
This process can be expanded to include smaller parcels of public land by developing relationships with neighboring landowners and providing incentives for cooperative conservation agreements among private landowners to facilitate public × private landscape conservation cooperatives. For example, cooperative farming agreements can be utilized whereby private landowners plant crops in a rotation specified by managers to create a landscape mosaic maximally beneficial to pheasants; in turn, landowners may receive either direct payments or tax credits for their participation and adherence to management planting guidelines. This will help in creating more habitats for pheasants in the landscape. We used pheasants as a model organism to demonstrate the usefulness of the emerging HotSpot analysis for assessing the status of long-monitored species. We demonstrated a novel approach for identifying high productivity areas and factors influencing these areas for a species of management interest at a landscape scale, which could be extended for other species of management and conservation concern. Our use of an emerging HotSpot analysis is the first application of this approach to wildlife count data used to index populations and could be applied to any count surveys that index population abundance. An important feature of this analysis is that it produced an index of relative productivity regardless of annual variation in productivity because even in poor years the highly productive areas were still identified as being more productive relative to other areas of the landscape. Consequently, this analysis identified regions that were relatively more or less productive regardless of annual population performance. This is an important attribute of this analysis as other species of gallinaceous birds have been shown to have high periodicity in annual count data and to respond quickly to environmental conditions [74,75]. Our CART analysis approach also provided a framework for estimating the thresholds of important land-cover types and landscape metrics necessary for sites to be a HotSpot. Supporting information S1 File. Distribution and fragmentation indices of pheasant productivity in the landscape. (DOCX)
5,611.2
2022-09-26T00:00:00.000
[ "Environmental Science", "Biology" ]
Intraguild predation in three generalist predatory mites of the family Phytoseiidae (Acari: Phytoseiidae) The predatory mites Neoseiulus californicus (McGregor), N. barkeri (Hughes), and Amblyseius swirskii Athias-Henriot are important predators attacking many insect and mite pests. They can coexist in the same habitat and engage in intraguild predation (IGP). IGP was assessed between the exotic species N. californicus and the native species N. barkeri and A. swirskii, each acting as intraguild predator (IG-predator) or intraguild prey (IG-prey), in either the absence or the presence of the extra-guild prey Tetranychus urticae Koch (EG-prey). In the laboratory, the physiological parameters longevity, fecundity, and predation rate of females of these predatory mites were evaluated, with phytoseiid larvae offered as IG-prey either alone or combined with the EG-prey. All predatory species consumed larval stages of each other, but in the case of N. californicus, females failed to sustain oviposition on N. barkeri larvae. It was also noticed that N. californicus females killed three times more A. swirskii larvae than N. barkeri larvae, whereas A. swirskii consumed more N. californicus than N. barkeri larvae. Neoseiulus californicus lived longer on T. urticae and A. swirskii larvae than on N. barkeri, while the latter survived longer on T. urticae alone than on the other prey or on combinations with T. urticae. Amblyseius swirskii lived for a shorter time when fed exclusively on T. urticae or IG-prey than on EG-prey combined with IG-prey. In choice experiments, N. californicus showed a clear preference for consuming T. urticae over either of the phytoseiid larvae. The comparison between T. urticae and IG-prey diets confirmed the greater influence of T. urticae on fecundity in N. californicus and N. barkeri, whereas in A. swirskii fecundity was equal on T. urticae and on IG-prey N. californicus larvae. A. swirskii seemed to be the strongest IG-predator. Background Neoseiulus barkeri (Hughes), Amblyseius swirskii Athias-Henriot, and Neoseiulus californicus (McGregor) (Acari: Phytoseiidae) are efficient control agents of the mite pest Tetranychus urticae (Koch) (Tetranychidae). Neoseiulus barkeri is considered a generalist predatory mite (type III subtype e) which can feed on Frankliniella occidentalis (Pergande) (Ramakers and Van Lieburg 1982), Thrips tabaci Lind. (Bonde 1989), spider mites (Momen and El-Borolossy 1999), stored product mites (Huang et al. 2013), and pollen grains (Addison et al. 2000). Amblyseius swirskii (type III subtype b) usually feeds on whiteflies as well as spider, tarsonemid, and eriophyid mites, and also on thrips and pollen grains (Messelink et al. 2006; Momen 2009; Riahi et al. 2017), whereas N. californicus (a selective predator of tetranychid mites, type II) feeds on various species of the family Tetranychidae (McMurtry et al. 2013). Maleknia et al. (2016) stated that in both greenhouse and outdoor conditions, T. urticae is an important pest of cucumber and needs to be controlled by predatory mites of the family Phytoseiidae. Intraguild predation (IGP) can occur when two or more predatory species share the same habitat and compete for the same prey (extra-guild prey, EG-prey), with one species acting as the intraguild predator (IG-predator) and the others as intraguild prey (IG-prey) (Janssen et al. 2006; Momen and Abdel-Khalek 2009b).
Some phytoseiid mites can kill and consume phytoseiid competitors when the density of their natural or favored mite/insect prey is low (Ahmad et al. 2015). Discrimination between conspecific (the same predatory species) and heterospecific (another predatory species) individuals as IG-prey has been reported in some generalist phytoseiid mites (Schausberger 1999). Generalist phytoseiid mites (type III) prefer to prey on heterospecifics over conspecifics, although T. urticae was not reduced (Schausberger 1999). Ahmad et al. (2015) indicated that predation on heterospecific immature stages is the main aspect of IGP. In previous investigations, some traits of IGP in immature and female predatory phytoseiid mites were studied (Momen and Abdel-Khalek 2009a, b; Momen 2010; Ahmad et al. 2015). Amblyseius swirskii had higher predation rates on the heterospecific prey Typhlodromus athiasae Porth and Swirski and Euseius scutalis (Athias-Henriot) than on conspecific prey, and all females failed to predate on eggs and protonymphs of their own species (Momen and Abdel-Khalek 2009b). Also, A. swirskii was able to consume all stages (eggs, larvae, protonymphs, deutonymphs) of N. barkeri and Phytoseiulus persimilis Athias-Henriot (Maleknia et al. 2016). Neoseiulus barkeri females consumed more larvae and protonymphs of Typhlodromus negevi Swirski and Amitai than of their own species (Momen 2010) and also attacked similar amounts of A. swirskii as of P. persimilis (Maleknia et al. 2016). Neoseiulus californicus, Typhlodromips montdorensis (Schicha), and T. pyri (Scheuten) can feed on larval stages of each other and sustain oviposition (Hatherly et al. 2005). Both N. californicus and A. swirskii could serve as IG-predators and could develop on their IG-prey (Guo et al. 2016). Several factors may affect the strength of IGP and the outcome of biological control; this depends primarily on the predator species and ranges from harmful to harmless IG-predators (Walzer and Schausberger 2013). These factors include predator aggressiveness, activity, and habitat characteristics (Walzer et al. 2004). Because no previous study exists on IGP among N. barkeri, N. californicus, and A. swirskii in the absence or presence of T. urticae, the aim of the present study was to determine the interactions among these generalist predators in the absence and presence of the EG-prey T. urticae. In addition, the effect of different IG-prey on the oviposition period, longevity, predation rate, and fecundity of predatory mite females acting as IG-predators was investigated. All of the above parameters for IG-predators fed IG-prey were also compared with those obtained on IG-prey mixed with the EG-prey T. urticae. Mite rearing An initial culture of the two-spotted spider mite, T. urticae, was obtained from bean plants (Phaseolus vulgaris L.) grown in the field at Giza Governorate, Egypt. It was maintained under laboratory conditions of 25 ± 1°C, 60 ± 5% RH, and a 16:8 h (L:D) photoperiod on the acalypha plant Acalypha wilkesiana (Euphorbiaceae), a wild plant, in plastic trays. Fresh plants infested with T. urticae were placed in the trays weekly. Neoseiulus barkeri and A. swirskii were obtained from cucumber plants (Cucumis sativus L.) grown in Fayoum Governorate, while N. californicus was collected from pepper in Giza Province. In the laboratory, they were reared on whole acalypha leaves that were densely infested with T. urticae in a growth chamber at 28 ± 2°C, 75 ± 5% RH, 16:8 h (L:D).
New infested bean leaves were added to each predatory culture, and the old ones were removed from each colony daily. The leaves were placed on water-saturated cotton pads in Petri dishes. Leaf disks Leaf disks, cut from leaflets collected from the middle part of the acalypha plant, were placed in Petri dishes (6 cm in diameter). Each plate was considered a replicate. The leaf disks (3.5 cm in diameter each) were placed on a water-saturated cotton pad in the Petri dishes in order to keep the leaves fresh. A water-saturated, absorbent cotton strip (1 cm wide) was placed around the edge of each leaf disk to prevent mites from escaping. A newly emerged female of each species and one male were transferred onto a rearing leaf disk with an excess of food and left to mate. The male was then removed, and the female was transferred to a fresh leaf disk and left for 24 h without food to guarantee that all females had been starved for an equal period of time. Each experiment consisted of 24 mated females on individual disks supplied with a specific prey species. Intraguild predation test In the experiments, females of the predatory mites were considered the IG-predator, while larvae of heterospecific species were considered the IG-prey (Montserrat et al. 2012). In the first set of experiments, each predatory mite female was provided with only larval stages of its phytoseiid prey. The larval stage was selected because it is easy to handle and could be quickly collected once the larvae had hatched. The larva was also a preferred stage for N. barkeri, A. swirskii, and N. californicus (Momen 2010). As a control, females of each predatory species were fed solely on T. urticae larvae. Choice test The second set of experiments was the choice test, which provided female mites of each predatory species with 50% T. urticae larvae and 50% phytoseiid larvae as a food source. Every 24 h, the ovipositional period, ovipositional rate, number of each food source consumed (determined from larval corpses), and female survival were recorded. All excess food and larval corpses on each disk were removed at each observation period and replaced with an identical amount of food as previously supplied. Dead and eaten larvae were removed from the arenas and replaced daily. The shriveled corpses of the dead larvae were taken as evidence of predation. Observations were made daily and predatory females were checked until their death. In the preference test (T. urticae and phytoseiid larvae), the number of T. urticae eaten was compared with the number of phytoseiid larval prey consumed. Statistical analysis One-way analysis of variance (ANOVA) (SPSS computer program) was conducted to evaluate the mean preoviposition and oviposition periods, longevity, mean total and daily numbers of eggs laid per female, and mean total and daily numbers of prey (IG-prey/EG-prey) consumed per female for each predator species kept on each of its prey sources. Before the analyses, data were checked for normality. Data fitted the assumption of normality, were not transformed, and means were compared by Tukey's HSD (P = 0.05 level). Results and discussion Performance of Neoseiulus californicus (IG-predator) on Neoseiulus barkeri/Amblyseius swirskii (IG-prey) and Tetranychus urticae (EG-prey) Neoseiulus californicus lived significantly longer when fed solely on T. urticae and A. swirskii than when fed on N. barkeri or on a mixed diet of T. urticae and phytoseiid prey (F 6,161 = 85.70, P = 0.0001). The mean ovipositional period of N.
californicus females fed on the combined diet (10 T. urticae + 10 N. barkeri) was shorter than that of females fed only on A. swirskii or on T. urticae combined with the other IG-prey. When N. californicus fed solely on N. barkeri larvae, females failed to sustain oviposition, while feeding on T. urticae gave a higher fecundity rate than feeding on A. swirskii or on a mixture of T. urticae and any other phytoseiid larvae (F 6,161 = 3364, P = 0.000) (Table 1). Prey type and number influenced the number of prey consumed by N. californicus. When N. californicus fed solely on EG-prey, IG-prey, or combined prey, a significant difference was observed in its predation rate (F 6,161 = 4009.60, 3017.76, P = 0.000). When Neoseiulus californicus fed on N. barkeri or A. swirskii larvae, its predation rate was significantly higher on the latter species (Table 1). Neoseiulus californicus consumed more T. urticae in total than individuals fed a mixed diet. It consumed statistically similar amounts of the IG-prey N. barkeri and A. swirskii when these were combined with T. urticae (the case of 20 T. urticae + 20 IG-prey). It was also noticed that providing T. urticae significantly decreased the predation rate on IG-prey (Table 1). Neoseiulus californicus consumed up to 3.68 and 4.13 T. urticae for every one N. barkeri, and 2.93 and 3.67 T. urticae for every one A. swirskii, when fed on both prey sources. According to Lucas (2005), IGP can be unidirectional or bidirectional (mutual), the latter case arising when two or three predator species prey on each other, so that each predator is also prey and vice versa. The present study showed that predation among the three tested predator species was bidirectional in the absence or presence of T. urticae as EG-prey. Previous research has demonstrated that N. barkeri, N. californicus, and A. swirskii can serve as either prey or predators in intraguild predatory interactions among biological control agents (Maleknia et al. 2016; Haghani et al. 2019). Females of N. californicus consumed the IG-prey N. barkeri and A. swirskii in non-choice experiments. Predator females ate nearly three times more A. swirskii than N. barkeri. When T. urticae was combined with IG-prey, females of N. californicus fed on both T. urticae and both IG-prey, with a higher preference for T. urticae than for phytoseiid larvae, suggesting that the two-spotted spider mite is its preferred food. Similar results were reported by Hatherly et al. (2005), who stated that when N. californicus was offered a mixed diet of phytoseiid larvae and T. urticae, it showed a marked preference for T. urticae. When N. californicus females were offered T. urticae combined with IG-prey, the IGP rate declined although the fecundity increased, except when fed on A. swirskii. This result is similar to those obtained by Meszaros et al. (2007) with Typhlodromus exhilaratus Ragusa and T. phialatus Athias-Henriot, and to general trends reported for the family Phytoseiidae (Schausberger 2003). The difference in IGP by N. californicus on A. swirskii and N. barkeri suggests that N. barkeri is an unfavorable IG-prey for N. californicus to feed and reproduce on, which could be explained by differences in the distribution of the two predators within their habitats: N. californicus is more dominant on plants and is always associated with tetranychid mites that produce heavy webbing, while N. barkeri lives in soil/litter habitats and A. swirskii lives on glabrous leaves (McMurtry et al. 2013). Moreover, N. californicus failed to sustain egg production when it was fed the IG-prey N. barkeri. On the contrary, Farazmand et al. (2015) indicated that N. californicus was able to sustain oviposition on IG-prey.
When the population of T. urticae is low, which can happen at the beginning and end of the cropping season, N. californicus may be able to feed and reproduce on the IG-prey A. swirskii to maintain its population for a short time, but certainly not on the IG-prey N. barkeri, since predation on that diet serves survival only and not the production of offspring. When N. barkeri was fed solely on N. californicus or A. swirskii larvae, females laid statistically similar total numbers of eggs, while feeding solely on T. urticae resulted in higher fecundity than on either IG-prey or on a mixture of T. urticae and phytoseiid larvae (F 6,161 = 1529.19, P = 0.000) (Table 2). When N. barkeri fed solely on EG-prey, IG-prey, or combined prey, a significant difference was observed in its predation rate (F 6,161 = 7782.66, 1512.10, P = 0.000). Neoseiulus barkeri consumed more T. urticae, in total and per day, than individuals fed on a mixed diet (Table 2). The mean total number of A. swirskii larvae eaten by N. barkeri was statistically higher than that of N. californicus. Providing T. urticae significantly decreased the predation on IG-prey (Table 2). Neoseiulus barkeri consumed up to 1.85 and 1.48 T. urticae for every one A. swirskii, while that ratio was 0.96 and 0.90 T. urticae for every one N. californicus when fed on both prey sources (Table 2). Females of N. barkeri fed daily on similar amounts of both IG-prey, A. swirskii and N. californicus. When T. urticae was combined with IG-prey, females of N. barkeri fed on both prey, preferring T. urticae over the IG-prey A. swirskii but the IG-prey N. californicus over T. urticae. IG-prey might comprise a less nutritive food for the IG-predator, especially during its oviposition period (Walzer and Schausberger 1999). Also, the daily number of IG-prey consumed by N. barkeri was lower (nearly half) than the number of prey offered, suggesting that the IG-predator response was affected by food quality (Ahmad et al. 2015). The total egg production of N. barkeri on both IG-prey was similar and relatively higher than that of females fed IG-prey combined with EG-prey. Walzer and Schausberger (1999), Hatherly et al. (2005), Momen and Abdel-Khalek (2009b), and Farazmand et al. (2015) indicated that predatory phytoseiids receive more nutritional benefit from phytoseiid larvae in the absence of their main prey (EG-prey). The prey conversion rate was lower for N. barkeri on N. californicus as IG-prey than on A. swirskii. According to Ahmad et al. (2015), that parameter could be a sign of the ability for population persistence when the main prey becomes scarce. Interestingly, in the present study the fecundity of A. swirskii fed exclusively on the IG-prey N. californicus was similar to that of females fed the EG-prey T. urticae. IG-prey might be an equally good or better food source than the EG-prey (thrips) for both A. swirskii and N. cucumeris (Oudemans) (Buitenhuis et al. 2010). In their studies, Guo et al. (2016) indicated that the IG-prey A. orientalis (Ehara) appeared to be a better food source for the development of A. swirskii than the EG-prey Bemisia tabaci Gennadius. They added that A. swirskii appears to be a less suitable prey for A. orientalis. On the contrary, Polis et al. (1989) demonstrated that the quality of IG-prey is often lower than the quality of EG-prey. Momen and El-Borolossy (2010) showed that A. swirskii was able to feed and develop on both the IG-prey C. negevi and Phytoseius finitimus Ribaga, whereas the latter species failed to develop on either IG-prey. Research has been done by Pratt et al.
(2002) and also Xu and Enkegard (2010) indicated that several factors are responsible for the preferences of predatory mites, such as plant architecture, prey-stage preference, and the interaction between the pest and the predator. (Table footnote: Where applicable, the ratio of T. urticae to larval phytoseiids consumed by adult females is given. Values in each column followed by the same letter are not significantly different (P > 0.05) when Neoseiulus barkeri fed on 20 T. urticae/phytoseiid larvae is compared to the other four prey combinations.) Conclusion Based on this study, A. swirskii seems to be a stronger IG-predator than the other two species because it consumed more larvae of N. barkeri and N. californicus and also laid more eggs on both IG-prey. According to this study, A. swirskii, N. barkeri, and N. californicus are IG-predators on each other even when T. urticae is present. The results of this study showed that IGP among these three phytoseiids is not unidirectional. Potential IG-interactions among these predatory mites may strongly influence predator efficiency in T. urticae control. Information about the strength and direction of IGP among these predators can be helpful for choosing the best strategy of multiple releases to improve the control of T. urticae.
4,124.4
2021-01-05T00:00:00.000
[ "Biology", "Environmental Science" ]
Inversion Algorithm of Fiber Bragg Grating for Nanofluid Flooding Monitoring In the current study, we developed an adaptive algorithm that can predict oil mobilization in a porous medium on the basis of optical data. Associated mechanisms based on tuning the electromagnetic response of magnetic and dielectric nanoparticles are also discussed. This technique is a promising method for tracking fluid mobility via rational magnetophoresis using a fiber Bragg grating (FBG). The obtained wavelength shift due to Fe3O4 injection was 75% higher than that of dielectric materials. This use of FBG magneto-optic sensors could be a remarkable breakthrough for fluid-flow tracking in oil reservoirs. Our computational algorithm, based on piecewise linear polynomials, was evaluated against an analytical technique for homogeneous cases and achieved 99.45% accuracy. Theoretical values obtained via coupled-mode theory agreed with our FBG experiment data at a level of 95.23% accuracy. Introduction Enhanced oil recovery is a tertiary method designed to recover the remaining oil left in a particular reservoir after primary and secondary methods are exhausted [1][2][3][4][5]. The nature and location of reservoir placement in the subsurface make it difficult to estimate the amount of remaining oil and the position of mobilized oil in the reservoir. The introduction of nanotechnology in enhanced oil recovery (EOR) has huge potential to increase total recovery in both light and heavy oil reservoirs. On the basis of previous studies [5][6][7][8][9][10][11], employing nanoparticles can shift reservoir wettability from oil-wet to water-wet and reduce oil viscosity. At high temperatures, however, nanoparticles can create a massive diffusion-driving force caused by a large surface-to-volume ratio [12][13][14], with the penetration of these tiny particles into pore spaces observable by using currently available technologies [15][16][17][18]. To address these problems, a new, advanced technology that can withstand high-temperature, high-pressure (HTHP) environments had to be designed. Moreover, conventional methods are no longer applicable in high-temperature, high-pressure environments. Low-frequency electromagnetic-wave energy can instead be used to stimulate oil in the reservoir owing to its penetration depth. This method can enhance oil recovery via the interaction of nanomaterials in the form of nanofluids at the molecular level [19][20][21][22]. The magnetic force acting on a magnetic nanoparticle can be written as $\vec{F}_{mag} = M_s V_{mag} \nabla \vec{B}$, (1) where $\vec{F}_{mag}$ is the magnetic force, $M_s$ is the magnetization saturation, $V_{mag}$ is the particle volume, and $\vec{B}$ is the magnetic induction. On the basis of the literature, Yang et al. [32] reported on the direct coupling of a magnetic field with an electromagnetic (EM) wave in a Bragg sensor using a TbFeCo thin film (84-285 nm) as cladding; nevertheless, the magnetostrictive contribution meant that the true influence of magneto-optic effects could not be discerned [33]. In addition, Pu et al. [34] developed ferrofluid as fiber-optic cladding to alter light transmission in a general single-mode fiber, but the ferrofluid's magnetic response was slow (i.e., Hz) and dictated by particle motion in the fluid [35] rather than by ferromagnetic resonance (GHz). Moreover, the transmission-based magneto-optic coupling was not conducive to multiplexing many sensors onto a single fiber. The Faraday effect is a well-documented example of a magneto-optic effect that alters the imaginary component of permittivity, but without the effective index changes associated with real polarization directions.
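As a rough numerical illustration of Equation (1) (not a reproduction of the study's calculations), the sketch below evaluates the magnetophoretic force on a single magnetite nanoparticle; the particle diameter, saturation magnetization, and field gradient are assumed example values.

```python
import math

# Assumed example values (not taken from the paper)
d = 20e-9                      # particle diameter, m
M_s = 4.8e5                    # saturation magnetization of magnetite, A/m (typical bulk value)
grad_B = 10.0                  # magnetic-field gradient, T/m

V_mag = math.pi / 6.0 * d**3   # particle volume, m^3
F_mag = M_s * V_mag * grad_B   # Eq. (1): F = M_s * V_mag * dB/dx

print(f"Particle volume: {V_mag:.3e} m^3")
print(f"Magnetophoretic force: {F_mag:.3e} N")
```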
Therefore, further research is necessary to develop and understand the response of fiber-optic sensors integrated with magnetic sensing capabilities. The Bragg wavelength of an optical fiber grating is a function of the grating period (Λ) and the effective refractive index (η_eff) of the fiber core, as represented by Equation (2): $\lambda_B = 2\,\eta_{eff}\,\Lambda$. (2) Magnetic fluid is an example of a stable colloidal solution composed of ferromagnetic nanoparticles. The behavior of the ferromagnetic particles that appear in magnetic fluids depends on the external magnetic field, so the refractive index of a magnetic fluid can be seen to be magnetic-field-dependent [36][37][38]. The refractive index η is given by [36] $\eta = \sqrt{\varepsilon_r \mu_r}$, where $\varepsilon_r$ is the relative permittivity and $\mu_r$ represents the relative permeability. Nuclear magnetic resonance (NMR) is used for the real-time quantitative detection of multiphase flow in oil and gas wells and pipelines (Shi et al., 2019). The advantages of NMR include noninvasive flow detection, environmental protection, and coverage of the full oil, gas, and water three-phase range. The fiber Bragg grating (FBG) detector considered here, however, exhibits an interaction with these nanofluids that has not been reported elsewhere. The novelty of the FBG sensor lies in its application to nanofluid-enhanced oil recovery (EOR), where it is used to monitor the flow of mobilized fluid in a reservoir. In existing core-flooding systems, the delineation of fluid mobilization and magnetic-field strength has not been developed. Oil mobilization can instead be detected using optical sensors followed by a computer algorithm to convert these sensor data and predict an image of oil movement [39]. The main objective of our research was to develop an adaptive computational algorithm based on finite differences and coupled-mode theory to image oil mobility inside a porous medium. The following sections summarize the main ideas behind the involved optical sensors, magnetization, coupled-mode theory, and computational algorithms. Molecular-Dynamics Simulation In this work, an Angsi oilfield sandstone structure with 24% porosity, butane as oil, and 10,000 ppm of brine (H2O + NaCl) was simulated and optimized with the Forcite module of the software suite Materials Studio 18.1. Nanoparticle structures were imported from the library of materials inside this software. Van der Waals interactions between different particles were calculated within the framework of the Lennard-Jones (LJ) potential. Molecular-dynamics (MD) simulations were completed in the canonical ensemble (NVT: amount of substance (N), volume (V), and temperature (T)) with a time step of 1 fs. A Nose thermostat was used to keep the temperature at 343.15 K, and a universal force field was applied via an Ewald electrostatic method. In the MD simulation, the stress autocorrelation function is a summative function that can be used for estimating the pressure correlation between two surfaces. It therefore takes into account the total effects of all atoms involved in the process [40] and is expressed as $C_P(t) = \langle P_{xy}(t)\, P_{xy}(0) \rangle$, where $P_{xy}$ refers to an independent component of stress in the xy direction (i.e., the shear stress); a minimal numerical sketch of this estimator is given below. For a molecular fluid, two formalisms can be employed to estimate the stress tensor: atomic and molecular formalisms, with minimal variance in the obtained result [41]. The stress tensor can be calculated on the basis of the motion of individual atoms in the system as $P_{xy} = \frac{1}{V} \sum_i \sum_a \left( m_{ia} v_{ia,x} v_{ia,y} + r_{ia,x} f_{ia,y} \right)$, where $m_{ia}$, $r_{ia}$, $v_{ia}$, and $f_{ia}$ are the mass, position, velocity, and force of atom a of molecule i, respectively.
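The sketch below is a minimal way to estimate such a stress autocorrelation function from a shear-stress time series; the synthetic series stands in for MD output, and the series length and correlation parameters are assumptions, not values from the study.

```python
import numpy as np

def stress_autocorrelation(pxy, max_lag):
    """Estimate C(lag) = <P_xy(t) * P_xy(t + lag)> from a shear-stress time series."""
    pxy = np.asarray(pxy, dtype=float)
    n = len(pxy)
    return np.array([np.mean(pxy[: n - lag or None] * pxy[lag:]) for lag in range(max_lag)])

# Synthetic stand-in for an MD shear-stress trace (correlated noise; not simulation output)
rng = np.random.default_rng(1)
n_steps = 20000
pxy = np.empty(n_steps)
pxy[0] = 0.0
for i in range(1, n_steps):
    pxy[i] = 0.99 * pxy[i - 1] + rng.standard_normal()   # AR(1) process as a placeholder

sacf = stress_autocorrelation(pxy, max_lag=500)
print("C(0) =", sacf[0], " C(100) =", sacf[100], " C(499) =", sacf[499])
```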
For the molecular formalism, the stress-tensor calculation is based on molecule motion in the system, given as $P_{xy} = \frac{1}{V} \sum_i \left( m_i v_{i,x} v_{i,y} + r_{i,x} f_{i,y} \right)$, where $m_i$, $r_i$, $v_i$ and $f_i$ are the mass, center-of-mass position, center-of-mass velocity, and total force of molecule i, respectively. Thus, a stress-autocorrelation function (SACF) can be employed to estimate the shear stress between the oil and the rock surface, with and without nanoparticles. Experiment Work Our model comprised a cylindrical container as the core, glass beads as sandstone, and an iron rod with 60 turns of copper wire (a solenoid) as a magnetic transmitter connected to an electromagnetic machine operating at a frequency of 200 kHz, together with an FBG. The container was filled with glass beads representing sandstone in a porous medium and was initially filled with 20% oil and 80% brine. Simulation Setup The equivalent form of a two-dimensional solution domain of core flooding was truncated, as seen in Figure 1. The length and height of the solution domain were 30.90 and 3.80 cm, respectively. The domain of interest was discretized into 46,968 cells, and the dimensions of each cell were 500 × 500 µm. Both etched FBG sensors (FBG 1 and 2) were used to sense the strength of the magnetic field radiated by a solenoid source. FBG 1 and 2 were used to investigate the spectral reflectivity, bandwidth, and side lobes at Bragg wavelengths of 1534 and 1552 nm, respectively. The physical orientation of FBG 1 and 2 was assumed to be 20.0 and 30.0 cm, respectively, in the direction of the x-axis. Strength and diffusion rate were determined by iteratively applying a computational algorithm to solve the governing field equation of our magnetic field. Source modeling for the aforementioned simulation setup was then implemented in COMSOL Multiphysics software, and the source beam was inserted into the proposed computational algorithm. The 2D domain of interest with our solenoid-based magnetic source is shown in Figure 2. The intensity of the time-varying magnetic field was sensed by the FBG sensor in our real-time environment. Variation in intensity and the Bragg shift of the sensed optical data occurred because of changes in the magnetic field and fluid dynamics. The strength of the magnetic field in our region of interest was computed by iteratively solving the governing field equation, Equation (7).
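Equation (7) itself is not reproduced in this excerpt, so the following is only a generic sketch of the kind of iterative finite-difference update such an algorithm might use, assuming for illustration a Laplace-type magnetostatic problem on a 2D grid; the grid (coarsened here), boundary values, and tolerance are arbitrary and are not the study's settings.

```python
import numpy as np

# Coarsened stand-in for the discretized 30.90 cm x 3.80 cm domain (assumed grid)
nx, ny = 124, 16
H = np.zeros((ny, nx))
H[:, 0] = 1.0                      # assumed source boundary on the left edge

# Jacobi iteration for a Laplace-type field equation (illustrative only)
for it in range(50000):
    H_new = H.copy()
    H_new[1:-1, 1:-1] = 0.25 * (H[:-2, 1:-1] + H[2:, 1:-1] + H[1:-1, :-2] + H[1:-1, 2:])
    if np.max(np.abs(H_new - H)) < 1e-7:
        H = H_new
        break
    H = H_new

print("iterations used:", it + 1, " field at domain centre:", H[ny // 2, nx // 2])
```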
The strength of the magnetic field in our region of interest was computed by iteratively solving Equation (7): The intensity of the time-varying magnetic field was sensed by the FBG sensor in our real-time environment. Variation in intensity and the Bragg shift of sensed optical data occurred because of changes in the magnetic field and fluid dynamics. The strength of the magnetic field in our region of interest was computed by iteratively solving Equation (7): The magnetization force was given as where χ V(magnetite) was the volume magnetic susceptibility for magnetite. This magnitude of magnetization force was used to calculate the value of the refractive index as below: where a and b were proportional constants, and M was magnetization. This explains that the magnetization of our ferromagnetic materials was caused by the exposure of coated magnetite nanoparticles to an external magnetic field. The refractive index of magnetite nanoparticles changed following the fluctuation of magnetization at different external magnetic fields. A stronger magnetic field led to a higher refractive index. This process was iteratively performed to reduce the distance between modeling data (d m ) and processed data (d p ). Figure 3 shows the complete flow diagram of the adaptive iterative algorithm that predicted our profile of oil mobilization in a region of interest. Figure 4 shows the numerical validation of the proposed algorithms based on our analytical solution and experiment data. Finally, source modeling and a magnetic-field profile during fluid motion in the porous medium is shown in Figure 5. The magnetization force was given as where was the volume magnetic susceptibility for magnetite. This magnitude of magnetization force was used to calculate the value of the refractive index as below: where a and b were proportional constants, and M was magnetization. This explains that the magnetization of our ferromagnetic materials was caused by the exposure of coated magnetite nanoparticles to an external magnetic field. The refractive index of magnetite nanoparticles changed following the fluctuation of magnetization at different external magnetic fields. A stronger magnetic field led to a higher refractive index. This process was iteratively performed to reduce the distance between modeling data (dm) and processed data (dp). Figure 3 shows the complete flow diagram of the adaptive iterative algorithm that predicted our profile of oil mobilization in a region of interest. Figure 4 shows the numerical validation of the proposed algorithms based on our analytical solution and experiment data. Finally, source modeling and a magnetic-field profile during fluid motion in the porous medium is shown in Figure 5. Simulations of Fe2O3, Fe3O4, ZnO, Al2O3, and CNS In solid-state physics, band structure, otherwise known as the electronic band structure of a solid, describes the range of energies that an electron within a solid may or may not have. These existing energy bands are the allowed bands, while bands that do not contain energy are called energy gaps or forbidden bands. Band theory is used to describe physical properties, such as electrical Simulations of Fe2O3, Fe3O4, ZnO, Al2O3, and CNS In solid-state physics, band structure, otherwise known as the electronic band structure of a solid, describes the range of energies that an electron within a solid may or may not have. These existing energy bands are the allowed bands, while bands that do not contain energy are called energy gaps or forbidden bands. 
Band theory is used to describe physical properties, such as electrical Simulations of Fe 2 O 3 , Fe 3 O 4 , ZnO, Al 2 O 3 , and CNS In solid-state physics, band structure, otherwise known as the electronic band structure of a solid, describes the range of energies that an electron within a solid may or may not have. These existing energy bands are the allowed bands, while bands that do not contain energy are called energy gaps or forbidden bands. Band theory is used to describe physical properties, such as electrical resistivity and optical absorption. Figure 6 shows the band structure of (a) Fe 2 FBG Response for Fe2O3, Fe3O4, ZnO, Al2O3, and CNS Our research was designed to investigate the effects of FBG through the use of various fluidbased NPs for magnetic transmission at a frequency of 200 kHz. In this manner, we determined the most effective nanofluids that reacted with the FBG. Figure 7 illustrates the graph of FBG wavelength shift versus time as Fe2O3, Fe3O4, ZnO, Al2O3, and CNS were injected. FBG Response for Fe 2 O 3 , Fe 3 O 4 , ZnO, Al 2 O 3 , and CNS Our research was designed to investigate the effects of FBG through the use of various fluid-based NPs for magnetic transmission at a frequency of 200 kHz. In this manner, we determined the most effective nanofluids that reacted with the FBG. FBG Response for Fe2O3, Fe3O4, ZnO, Al2O3, and CNS Our research was designed to investigate the effects of FBG through the use of various fluidbased NPs for magnetic transmission at a frequency of 200 kHz. In this manner, we determined the most effective nanofluids that reacted with the FBG. Figure 7 illustrates the graph of FBG wavelength shift versus time as Fe2O3, Fe3O4, ZnO, Al2O3, and CNS were injected. Numerical Algorithm Based on Finite-Difference Technique In this research, a numerical algorithm based on a finite-difference (FD) technique was used and validated via the aforementioned analytical solution. Obtained results for both the analytical and Sensors 2020, 20, 1014 9 of 14 the adaptive numerical algorithm are shown in Figure 8a; the magnetic-field intensity curves exactly matched our proposed analytical solution. The relative error of the proposed algorithm was found to be 0:005429 for N x = 1000 data points. In this research, a numerical algorithm based on a finite-difference (FD) technique was used and validated via the aforementioned analytical solution. Obtained results for both the analytical and the adaptive numerical algorithm are shown in Figure 8a; the magnetic-field intensity curves exactly matched our proposed analytical solution. The relative error of the proposed algorithm was found to be 0:005429 for Nx = 1000 data points. Prior to implementation of our proposed algorithm, source modeling was performed in CST EM Studio (CST2012 version), and the obtained results are shown in Figure 8b. Figure 9 shows the magnetic-field distribution in a porous medium. The intensity of the magnetic field inside our core container was very weak. (a) Times series peaks of FBG obtained by coupled-mode modeling. Prior to implementation of our proposed algorithm, source modeling was performed in CST EM Studio (CST2012 version), and the obtained results are shown in Figure 8b. Figure 9 shows the magnetic-field distribution in a porous medium. The intensity of the magnetic field inside our core container was very weak. 
Discussion Dielectric materials have a large band gap due to being electrical insulators, exhibiting more energy excitement from valence to covalent bands and allowing electrons to move freely and conduct electricity ( Figure 6). The formation of these bands is mostly a feature of the outermost electrons in an atom [42]. Band gaps (Table 1) are essentially leftover energy ranges that are not covered by any band and that resulted from finite widths of energy bands, with widths dependent upon the degree of overlap in the atomic orbitals from which they arise [43]. Discussion Dielectric materials have a large band gap due to being electrical insulators, exhibiting more energy excitement from valence to covalent bands and allowing electrons to move freely and conduct electricity ( Figure 6). The formation of these bands is mostly a feature of the outermost electrons in an atom [42]. Band gaps (Table 1) are essentially leftover energy ranges that are not covered by any band and that resulted from finite widths of energy bands, with widths dependent upon the degree of overlap in the atomic orbitals from which they arise [43]. It is clear from Figure 6 that the upper half of the electronic band structure (above Fermi level) described the π*energy-antibonding band and the lower half (below the Fermi level) described the π energy-bonding band. Fermi energy lies exactly at the Dirac point: the π band was completely filled, while the π* band was empty; π* and π bands both degenerate at the K-point (Dirac point). Energy dispersions gradually deviated from linear relation in the higher-energy region [44]. As seen in Figure 7, the overall trend of the FBG response showed that there was a wavelength shift during Fe 3 O 4 injection. No wavelength shifts occurred for ZnO, Al 2 O 3 , and CNS over that time. This is due to the activation of magnetic material (Fe 3 O 4 ) by our magnetic field. On the basis of EM theory, the propagation of a magnetic field happens in a horizontal vector that gives a plane wave to the horizontal FBG. Magnetic fluid is a kind of stable colloidal solution of ferromagnetic nanoparticles. The behavior of ferromagnetic particles in a magnetic fluid is dependent on the external magnetic field, so the refractive index of magnetic fluid tends to be magnetic-field-dependent. The wavelength shift of FBG was due to magnetic force F m that is represented by the following equation [45]: where p e is the effective strain-optic coefficient, E is Young's modulus, and A is the cross-sectional area. Thus, the recovery factor of our injected nanofluid Fe 3 O 4 in the presence of a magnetic field and at a frequency of 200 kHz was 15% higher than that without EM application, as seen in We used a magnetic transmitter so that only magnetic material would have a wavelength-shift response of FBG towards the nanofluid's injection. Fe 3 O 4 nanofluid material was chosen to validate our adaptive algorithm. The exponential form of the analytical method used to predict our electromagnetic field was also used for validation. The gradient of magnetic-field intensity shown in Figure 8 shows that the maximum density field remained inside the solenoid, while the outer side of magnetic field intensity was sybaritically weak. Figure 5a shows the maximum value of the magnetic field inside the solenoid started at 3.63 × 10 −5 V.s/m 2 , and blue lines indicated that the value of magnetic-field intensity within a few centimeters dropped to 3.03 × 10 −6 V.s/m 2 . 
The source waveform for the adaptive algorithm was obtained by using data from a modeled source in COMSOL Multiphysics software. The proposed algorithm was used to iteratively solve a governing field equation for each node of the discretized domain. The source waveform was also introduced exactly at the same location as that discussed in Figure 1. The electrical properties of the porous medium were activated by filling up 24% of the pores with brine, oil, or air. Inside our solenoid, perfect conducting properties were introduced according to our lab's experiment setup, and the transmitter source was allowed to radiate magnetic flux towards the surrounding medium. In order to plot our results, an arbitrary unit was introduced to our plot contour, as can be seen in the graphed gradient of Figure 9. The diffusion of magnetic field in brine-, oil-, and air-filled pores can clearly be seen. Deriving from Maxwell's equations, the attenuation factor of the propagated EM wave can be generally written as [46]: where σ, ε, and µ represent electric conductivity, permittivity, and the magnetic permeability of the medium, respectively, and ω is the angular frequency of the emitted EM wave. For our air-filled porous medium, conductivity was negligible, and the attenuation factor in Equation (11) approached zero. However, for oil-and brine-filled pores (σ/ωε) 1 we thus obtained: Equation (12) indicates low-frequency EM wave propagation. The strength of the magnetic field in the air-filled pores, with a negligible attenuation factor, was dramatically higher than that in the oil-and brine-filled pores, which indicated a more significant attenuation factor. In comparison with the low-conductive oil-filled pores, magnetic-field strength dropped off quickly in the conductive brine-filled pores. Magnetic nanoparticles mixed with brine/oil were injected into the pores, and each of these particles experienced Lorentz force. The timed-varied strengths of the magnetic field caused the motion of these charges. The diffused magnetic field and magnetization force changed the cladding refractive index of our FBG sensor [47]. The intensity and Bragg wavelength shift of two consecutive FBG peaks are shown in Figure 8a. The distance between fall time and rise time for each main peak are indicated by d1-d4. In this case, d1-d4 indicate refractive-index values of 1:001, 1:257, 1:410, and 1:617, respectively. Continuous wavelet transformation was used to investigate temporal changes in the optical data of two consecutive peaks, as illustrated for each case in Figure 8b. The FBG response was a narrow-band reflective filter that was centered at the Bragg wavelength, with the spectral response for uniform FBG affected by grating length and refractivity. Spectral reflectivity was a summation of η ∞ e f f and η cl e f f , in which η co e f f was independent of the surrounding environment and constant. However, η cl e f f was dependent on magnetization-force strength sensed by our FBG sensors. Conclusions Wavelength shift only occurred for a horizontal FBG setup during Fe 3 O 4 injection. This indicates that the magnetic material (Fe 3 O 4 ) was activated by EM waves. This paper showcases an adaptive algorithm to simulate oil mobilization in the presence of a magnetic field and the resultant responses sensed by etched optical sensors based on FBG. An equivalent theoretical model based on an adaptive algorithm was then implemented. 
The considered algorithm was validated by comparing existing analytical solutions with a respective error of 0.005429. Our study clearly shows the ability to compute an adaptive algorithm to predict oil mobilization. We found that the proposed computational method displays robustness with respect to the perturbation parameter in approximating a solution.
5,430.6
2020-02-01T00:00:00.000
[ "Engineering", "Materials Science", "Environmental Science", "Physics" ]
MATHEMATICS IN APPLIED INFORMATICS EDUCATION – NEW CHOICES AND CHALLENGES MATHEMATICS IN APPLIED INFORMATICS EDUCATION – NEW CHOICES AND CHALLENGES We The traditional view of teaching Mathematics in Engineering environment In Engineering education, there was traditionally a big difference between the way of teaching Mathematics and other, specialised subjects e.g. from mechanical, electrical, civil engineering or informatics. The mathematicians like to expose the essence of Mathematics in the strictly logical form of reasoning using axioms, definitions an theorems, eventually with rigorous (or informal, but more intuitive) proofs whenever it can contribute to better understanding of explained concepts. The engineers are more interested in the solution of real-world problems, using selected mathematical apparatus for formulation of (simplified) computationally tractable model and for solution of that model problem with the aid of some of available (or suitably modified) algorithms, usually only with certain (prescribed or estimated) accuracy. Standing on this viewpoint, the practical engineers perceive many of traditional topics of algebra or calculus as more or less unnecessary or simply useless for engineering practice. On the other hand, mathematicians must follow the logical continuity and dependencies between introduced concepts. There is no way to fully understand the concept of Fourier transform without the basic knowledge concerning the definite integrals. In linear algebra, the basic concept is that of matrix. Without proper theoretical grounding (linear independence, determinants, inverse matrix, eigenvalues and eigenvectors, etc.) the students cannot fully understand and explore many of practical algorithms for solution of linear systems, optimisation algorithms such as simplex method, spectral analysis of linear processes, etc. Two principal questions in this field are about teaching material -(what to teach?) and teaching methods -(how to teach?). In our opinion, the latter question is far more significant. In present, the concrete information we give to students is not so important as the message it cares about ideas, leading to some intellectual or practical achievements. We should communicate ideas not the plain information. Our students should be able to make individual progress in the process of complementing their existing skills and knowledge with new tools and information as demanded by needs of their professional career. Thus, the common point for both mathematics and engineering educators is to prepare students to independent, logical, clear and continuous thinking. It is not sufficient to give the students many (maybe, highly practical) isolated pieces of information in the cookbook style. We should bring to the students the feeling of beautiful integral view of The Forest of Science, not to walk with them in the darkness of labyrinth of individual trees. If such walk is necessary, then it should be not too long and lead to a peak with far, clear views. We (the communities of mathematicians and engineers) should choose the adequate tools to be in state of resonance and to support each another, not to make the useless remarks about (supposed) weak points of either "engineering" or "mathematical" mode of thinking. In this paper, we will report our experience with computer assisted teaching of some topics in Numerical analysis and some negative habits observed in students' behaviour. 
Dangers coming from mindless use of computers With the wide accessibility of computers, there is the growing principal danger of overestimating their role in education and in the engineering practice. In the naive students' perception the computers with highly intelligent software can be used as substitute for the (painful) process of creative thinking. But also on the teacher's side there are some overly optimistic expectations from using tools like LMS (Learning Management Systems, e-learning). Michal Kaukič * In this paper, new ways of teaching Mathematics, which have been opened by the widespread of computers and notebooks and also by adequate software tools, are discussed. Based on our experience, we advocate the use of Open Source software as the mainstream tools in Mathematics and Informatics education. We give the more concrete examples of tools, used for teaching of Numerical Analysis topics, which can be successfully explored also for majority of other subjects in Applied Informatics and Mathematics. Some recommendations for the choices of suitable software and for further coordination in this field are given. The common sense, of course, tells us that nothing can replace the individual creative thinking and that the personality of teacher will always play the key role in the process of knowledge, intuition and skill transfer to the students. Let us bring the quotation from SEFI (European Society for Engineering Education) document [1]: There are some signs that the involvement of computers in the teaching of undergraduate engineering mathematics is beginning to gain momentum. However, the experience of the last thirty years warns that care must be taken. When the pocket calculator arrived we were told that there would be two advantages: first, the students would be relieved of hours of tedious calculation, leaving them free to concentrate on concepts and understanding; second, it was more likely that the calculations would be performed correctly. The reality is somewhat different. The most trivial of calculations is often subcontracted to the machine, the students have little or no feel for what they are actually doing or to what precision they should quote their answers. What they can do is to obtain obviously unrealistic results more quickly and to more significant figures. There has also been a tendency to replace analytical reasoning by trialand-error methods. This is the clear analysis of dangers, caused by unqualified use of computers. We should bear this in mind and to create the environment where computers are helpful but where critical, logical thinking is all the time at the first plane. In the next section we will show an example of such environment, we now use for the teaching of Numerical analysis. Pylab -the environment for experimenting and visualization of Numerical analysis concepts Effective teaching of Numerical Analysis is closely related to the problem of using computers as the laboratory, experimental equipment, in the sense which is well known from other natural sciences. Numerical analysis must take into account the real world imperfections, which can be happily ignored by theoreticians. Thus, in the field of Numerical analysis there is a big room for experiments. We believe that numerous computer-based experiments are essential for successful teaching of Numerical Analysis. The choice of appropriate software tools is of the fundamental importance. 
This choice, once made, determines the effectiveness and long-term maintainability of educational and research activities for many future years. On the first look, we will probably find three well-known commercial software systems for mathematical teaching and research: MATLAB, Mathematica, Maple. 1) Besides of certain advantages, commercial software systems have also many drawbacks (for more detailed discussion, see [2]). Our search for alternative software leads us to several Open Source systems with MATLAB-like capabilities. The most MATLAB-compatible among them was Octave [3]. In the course of using MATLAB and later Octave we constantly perceived the apparent deficiencies of underlying simple programming language. Moreover, programming in MATLAB can be done in very bad style by inexperienced students not encouraging them to learn new ways of thinking about problems. The MATLAB language simply is not general enough to be useful outside of its primary domain -matrix and vector manipulations. Although there is no universal system suitable for all areas of mathematical education, we can try to minimise the number of different tools used. The choice should be made so that students can reuse our tools in possibly widest area of their future professional career (having in mind preferably the students of Informatics). This brings us to the idea of using some of general purpose programming languages. The language of our choice should allow to express mathematical ideas with minimum programming effort and in "nearly mathematical" notation. In present, we use the Pylab environment, comprised of interpreted language Python [4], complemented by user-friendly interactive environment (IPython shell, [5]), modules for linear algebra and numerical analysis (see [6], [7]) and also module Matplotlib [8] for excellent quality two-dimensional graphics. We will show the typical use of this environment on the following example. First, we can give the nice graphical interpretation for this example. The solutions will be exactly the common points of zero contour levels of surfaces z 1 (x, y), z 2 (x, y), given by left sides of equations, i.e. The corresponding plot will be generated by the commands (exactly the same, as it would be in MATLAB): For the numerical solution of nonlinear systems we can use the function fsolve. The first argument of this function is the vector function, describing the given nonlinear system -the input is in our case the tuple (x, y) and the function returns the tuple of values (z 1 (x, y), z 2 (x, y)). The code for this function is as follows (we can save it to the file nlfct.py): def nlfct(xy): x,y = xy # xy has two components, separate them z1=sin(x*y*y)-cos(x*x-y)+0.2 z2=x**3+y**3-3*x*y return (z1, z2) The second argument for fsolve is the initial approximation of solution. Thus, we can get the first solution in interactive IPython environment by commands: run nlfct from scipy.optimize import fsolve r=fsolve(nlfct, (1.23, 0.55)) # The answer is: r=(1.2328735, 0.5521787) As always, we cannot blindly trust the results, we get from computer. But we can easily verify that the command nlfct(r), which computes the values of functions z 1 (r), z 2 (r) returns very small values (approximately Ϫ4.552 и 10 Ϫ15 , Ϫ3.997 и 10 Ϫ14 . The same is true for remaining three solutions, but we will not repeat the calculations here. We can see that Pylab is, indeed, the very high level language -no type declarations of variables and functions, no troubles with memory management. 
There are many ready-made useful functions for numerical analysis (data interpolation and approximation, linear algebra functions -eigenvectors and eigenvalues, determinant, inverse matrix, solution of linear equations, also in least-squares sense, numerical integration, optimisation with bound constraints, solution of differential and differential-algebraic equations and many others). Pylab has also many functions for statistics and probability theory. There is also a signal processing toolbox analogical to that of MATLAB. Using additional Python modules, we can easily add e.g. the SQL DBMS interface for working with commercial (Oracle) or Open Source (PostgreSQL, mysql, SqLite) SQL databases, there are other modules for discrete and continuous simulations, linear and, more general, convex programming. Now, we can summarise some experience we gained in teaching Numerical Analysis with Pylab. The very positive thing is the possibility of rapid prototyping and interactive debugging of little scripts. We have prepared the material about basic programming in Pylab (see [9]), which was sufficient for students to start working in the interactive environment. Unfortunately, many students have strange and non-productive habits of programming. IPython environment has excellent command completion, but students prefer the (erroneous) lengthy typing. Or, there is no need to position the cursor on the end of line before entering the command, but they have constantly made this action. IPython saves command line history, so any previous commands can be conveniently re-entered or edited, but students like to type the same thing many times, instead of using this feature. The biggest misbehaviour from students' side is the programming without thinking. They start typing something, having little understanding of the given task and the methods they will use for solution. They put emphasis on programming routine, but not on "The Art of Computer Programming" as seen by the classical works of D. Knuth [10]. We have the strong opinion that for beginners the clear and simple programming language should be used. Neither Pascal, nor C/C++ or Java can fulfil these basic requirements. We propose to try (at University level) to unify the programming tools for basic courses on algorithmization and programming using Python-based environment. The considerable amount of work should be done for analysis of existing tools and environments (especially non-commercial, OpenSource alternatives), the choice of basic software for applied informatics education and the customisation of chosen software for the specific needs of faculty and university. We think there should be first a broad discussion at inter-faculty level and afterwards the work group responsible for the implementation of decisions coming from this discussion should be formed. The process of implementation will, of course, take several years to be reasonably complete. Conclusion We hope that the reader got some feeling of what can be done with newly available Open Source software tools in the field of mathematical education at University level. The cultivation of mind in Mathematics and in Computer Science can be done in close cooperation. This can lead to the synergy effect of better understanding of the common points and the role of clear, logical, independent thinking in all applied informatics activities. 
This is especially important for making right key decisions, which will be crucial for the future of the fields of Applied informatics and Mathematics at our University and in our country. We saw the example of successful use of user-friendly environment, based on Open Source tools in teaching of Numerical Analysis. This environment, in our opinion, has the great potential of becoming the tool of choice for nearly all educational topics in traditional and modern branches of Mathematics (algebra, calculus, probability and statistics, graph theory, mathematical programming and optimisation, cryptography, numerical computations, computer graphics, computational geometry, etc.) and also for many important concepts of Applied Informatics (the basic principles of algorithmization, data structures, discrete and continuous simulation, stochastic processes, signal processing, parallel computations, etc.). For optimal results in applied informatics education (bachelor, undergraduate, graduate and doctoral study) we cannot use the multitude of software tools without any coordination. There should be clearly defined "central path" software, which will be used continually during all the time of study. We are convinced that the right choice for this durable software tool cannot be based on traditional, low level languages like Pascal, C/C++. The good candidate should be very high level language with clear, simple syntax and high expressive power -at present, the language of our choice is Python with suitable modules. Using it has also the additional benefit of learning a simple, useful general-purpose language, which students can use later in their career (important not only for students of Computer Science but also for general engineering students and mathematicians, too).
3,313.8
2006-09-30T00:00:00.000
[ "Mathematics", "Education", "Computer Science" ]
Virtual Reality Tool for Exploration of Three-Dimensional Cellular Automata : We present a Virtual Reality (VR) tool for exploration of three-dimensional cellular automata. In addition to the traditional visual representation offered by other implementations, this tool allows users to aurally render the active (alive) cells of an automaton in sequence along one axis or simultaneously create melodic and harmonic textures, while preserving in all cases the relative locations of these cells to the user. The audio spatialization method created for this research can render the maximum number of audio sources specified by the underlying software (255) without audio dropouts. The accuracy of the achieved spatialization is unrivaled since it is based on actual distance measurements as opposed to coarse distance approximations used by other spatialization methods. A subjective evaluation (effectively, self-reported measurements) of our system ( n = 30) indicated no significant differences in user experience or intrinsic motivation between VR and traditional desktop versions (PC). However, participants in the PC group explored more of the universe than the VR group. This difference is likely to be caused by the familiarity of our cohort with PC-based games. Introduction Cellular automata (CAs) are one of the earliest computer-based models that mimic some properties of biological systems [1]. In its digital form, an automaton is comprised of a lattice of multi-state cells that, once initialized, can change their state according to some fixed rules. These rules are usually local and simple but can create complex evolutive patterns in the automaton. Although a CA lattice can have arbitrary dimensions, one-or two-dimensional models have been traditionally preferred. In the latter case, square cells have been preferred over triangular or hexagonal ones to create a lattice; these three regular polygons are the only ones capable of tiling the plane without overlaps or gaps [2]. The complexity that CAs can achieve increases with the number of dimensions of their lattice. So does their inspection difficulty, which is usually performed using visual displays. In one-dimensional CAs, the state of a cell in the next iteration depends on its current state and those of the neighboring cells. Considering only adjacent cells, the state of three cells, two adjacent plus the cell itself, would be necessary and sufficient for computing the state of that cell (active or inactive, in the simplest case) in the next iteration. For these three cells, there are 2 3 = 8 possible state combinations and 2 8 = 256 rules. Registering the outcomes of each iteration in a second dimension results in patterns that are useful to classify the rules generating these patterns into those yielding homogeneous states (class 1), leading to stable or simply periodic structure (class 2), leading to non-periodic/chaotic-like behaviors (class 3), and those resulting in complex patterns with self-propagating properties (class 4) [3]. Although this classification is not exclusive to one-dimensional CAs, the classes are notoriously more difficult to determine in higher dimensions. In some cases, they have been observed only through two-dimensional slices of the full CAs [3]. In two-dimensional CAs, the number of adjacent cells can be up to eight (including those sharing vertices and edges), so the number of possible state combinations and rules increase accordingly. 
Traditionally, outcomes of these rules are mapped to time instead of a third geometric dimension. This is partly because 3D graphics techniques are often required and such techniques evolved long after 2D ones. The latter techniques were already available for many researchers who may have considered further geometrical dimensions as an unnecessary nuisance. There is no straightforward way to visualize or inspect three-or higher-dimensional CAs. Displays such as Virtual Reality (VR) headsets (featuring stereoscopic vision and spatial audio) could help in these tasks, but they would inherit natural limitations of human perception: vision and audition (the superior sensory modalities in humans) are more accute in front of a person and gradually deteriorate away from that direction [4]. However, audition trumps vision in allowing the perception of objects in all directions, even those which are occluded by closer ones. Hence, presenting spatially distributed sounds allows for larger data representations [5], and could ease the exploration of CAs more complex than one-dimensional ones. The main purpose of this research is to use VR techniques, including spatial sound, to explore three-dimensional CAs. By providing users with VR simulations, we intend to ease the inspection of such systems and gain higher introspection of their capabilities. We also address whether VR offers benefits in terms of user experience, intrinsic motivation, and subjective performance relative to non-immersive presentations of the same visualization and auralization. Additionally, we explore the use of these automata as a source of generative music: by associating each cell to a different timbre depending on its location, the 3D-automaton may be aurally rendered sequentially or in parallel to create melodic and musical textures. To that end, we have created a VR application where users donning Head-Mounted Displays (HMDs) and headphones are able, among other affordances, to navigate, modify rules, store, and retrieve CAs at will. We have focused on CAs similar to the Game of Life (Life) [6] in the definition of rules and lattice structure for our proof-of-concept application. The next section presents a literature review on CAs, and their applications in music. This is followed by an explanation on how our VR application and the spatialization engine were created and evaluated, segueing into a general discussion, and concluding remarks. Previous results of this research were presented in [7]. The current manuscript builds upon those results and comments of the reviewers, specifically, elaborating on the results of using HMD. Life and Life-Like CA The cellular automaton known as "Life" can be considered a simplification of the processes involved in the creation, evolution, and destruction of organic life. The state of a cell in the next iteration, "alive" or "dead", is determined by the following rules [8]: • Birth: A dead cell adjacent to three alive ones becomes alive. • Survival: Cells adjacent to two or three alive cells remain alive. • Death: Alive cells adjacent to four or more alive cells, or those adjacent to less than two alive cells die. Originally confined to a 2D-lattice (as shown in Figure 1), three-dimensional extensions were soon explored. In this case, the only regular solid that can fill the space is the cube. Each cubic cell has up to 26 adjacent cells considering those that share at least a vertex. 
Consequently, rules governing a cell state in the next iteration were generalized and defined in terms of four integer arguments (r 1 , . . . , r 4 ) in the range of 0 to 26, and the number of neighbors n [9]: • Birth: A dead cell becomes alive if r 1 ≤ n ≤ r 2 . • Survival: A cell remains alive if n ≥ r 4 and n ≤ r 3 . • Death: A cell dies if n > r 3 or n < r 4 . Figure 1. Recreation of "Big beacon," a period-8 oscillator pattern in Life described in [10]. One of the earliest reports on three-dimensional versions of Life was presented by Bays [11]. He formally defined rules governing this CA in terms of environment and fertility and formulated a set of theorems to construct these rules. Later on, several software implementations of three-dimensional CAs were proposed, for example, "Kaleidoscope of 3D life" [9] and CA3D [12]. A currently abandoned project, Kaleidoscope allowed users to define rules as well as the size of the lattice. Additionally, users could select various known presets and store their own patterns. CA3D is an HTML5 and Javascript program where the initial alive cells are set at random. Intersecting lines in the display are used to mark the center of cells, and users can explore this 3D grid by means of arrow keys or GUI buttons. One of the most impressive software implementations of cellular automata is Golly [13]. This open source, cross-platform application includes features such as bounded and unbounded universes, fast generating algorithms, etc. It also allows creation of 3D models wherein users can define initial conditions or set them at random. As in previously mentioned implementations, Golly inherits the limitations imposed by 2D-displays (e.g., controlling the user perspective by means of buttons), making this kind of CA exploration somewhat cumbersome. Generative Music The paths of music and algorithm-assisted composition crossed centuries before the conception of modern computers [14]. A complete review of these interactions is beyond the scope of this article. Instead, we limit the discussion to some applications of Cellular Automata in music. WolframTones [3] is arguably the most successful application of CAs to music. Users are presented with high level parameters that drive the generation of music. These parameters target relevant aspects in the generation process: By choosing a CA rule, initial conditions, vicinity range, among others, users can generate musical patterns according to different styles, orchestration, tuning system, tempo, etc. Admittedly, the resulting musical pattern does not always agree with the selected style, but it could be used as initial step towards a more personalized creation. Further, CAs have been used in granular synthesis of sounds [15]; to map cells directly to music notes, as in the case of CAMUS (Cellular Automata Music) [16]; or to sequence chord progressions (with limited success) [17]. Recently, Haron and colleagues [18] introduced an interactive sequencer where the movement of a user was employed to determine the configuration of Life in a 20 × 20 space. This automaton mapped cells to musical pitches, depending only on a single dimension (x-axis). The three-dimensional space was scanned sequentially (using different strategies) to create musical patterns. Using massive parallel processing hardware (Field-Programmable Gate Arrays-FPGAs), Nedjah et al. [19] used CAs for generation of melodic intervals complying with the Musical Instrument Digital Interface (MIDI) standard [20]. 
More recently, an algorithm capable of learning CA rules from MIDI sequences has been proposed in [21]. In this research, CA rules evolve with time and provide a way to produce new MIDI sequences based on them. Auralization Associating sounds directly with cell locations can be considered a case of data sonification [4]. In addition to cell-tone associations, the relative location of each cell with respect to a listener can also be beneficial to data exploration via sound spatialization, in what is known as auralization [22]. Whereas the mapping of cells onto sounds is arbitrary and offer limited difficulty from the software development point of view, correct spatialization of audio sources is difficult, especially in the near-field, i.e., when "The acoustic field is so close to an extended source that the effects of the source size are manifest in measurements" [23]. One reason for this is that the filters used to create binaural signals from monophonic ones are captured in anechoic conditions under the assumptions that (1) the sound source is small compared to the head, and (2) it is far enough that the wave front is planar. These assumptions are often violated in the near-field and when not, the recording process is painstaking, so with the exception of a few (e.g., [24][25][26]), most filters are recorded at a fixed distance outside the near-field. As a consequence, whereas horizontal and vertical angles (azimuth and elevation, respectively) are often accurately simulated in VR scenes, distance (the third dimension) is mostly approximated by means of different attenuation curves [27,28]. We have devised a better spatialization engine that correctly expresses the three dimensions, as explained in the following section. Method We decided to take full advantage of VR technologies (stereoscopic vision, spatial audio, limb-and head-tracking, etc.) to facilitate exploration of three-dimensional CAs. We also provide a desktop version (inheriting the interaction limitations pointed before) since VR technologies are not mainstream yet among consumers. The desktop version uses the same audio spatialization technique, and it was amply discussed in [7]. Although we focus on the VR version in this manuscript, pointers to the desktop version are added in the discussion when necessary. Our proof-of-concept application was programmed in Unity [28] since this platform allows deployment of applications to several platforms, including VR headset-and desktop-based ones [29], and the integration of alternative audio spatializers. Source code, executable versions, and video demonstrations of this project are freely available from https://github.com/YKariyado/LG (accessed on 1 February 2022). In what follows, we explain the main components of our solution. Apparatus Users of our system donned an HMD comprising 2880 × 1600 pixel stereoscopic display (HTC VIVE headset) and a pair of closed-circumaural headphones (Sony MDR-CD380). A single trackable controller with multi-function trackpad, grip, trigger, and other buttons were used to interact with the system, as shown in Figure 2a. Tracking the locations of the headset and the controllers as users moved through space was enabled by beaming infrared light from two HTC VIVE base stations. Headset, headphones, and infrared stations were connected directly to a server running Windows 10, equipped with an Intel i7 processor (six cores), NVIDIA GeForce GTX 1080 graphic card, 16 GB of RAM, and a 500 GB solid state drive for storage. 
This apparatus was located in a quiet room (average RT 60 = 322 ms and Noise Criterion NC = 25 [500]). The actual simulation area within this room was about 2 m 2 . An overview of the settings of our system is shown in Figure 2b. Cellular Automata We restricted our simulation to a cubic space with an edge length 12 ≤ l ≤ 2048 cells, defined by the user. As in Life, each cell has only two states and the rules for birth, survival, and death were built based upon the same four arguments r 1 , . . . , r 4 previously described in Section 2.1. Users can also determine the behavior of the automaton at its boundaries: if "periodic", opposite sides of the l-cube are treated as contiguous so that cells in these sides become adjacent. A large number of cells in the automaton that may need to be processed at a certain time could generate considerable latency between generations. For instance, the system may not be able to compute the next generation when it needs to be displayed. To minimize latency between the computation of the CA and its display (video and audio), we divided the workload into three parallel threads: A main thread to manipulate all visual assets of the game engine, and two auxiliary threads, one to compute the CA model according to the user-defined parameters (rules, periodicity, etc.), and the other dedicated solely to rendering audio. The current population of alive cells is maintained in a hash table of cell IDs. Iterating through all elements in this table, the number of alive neighbors n for a given cell is computed and added to a dictionary comprising pairs of cell ID and its corresponding n. Then, death and birth rules (in this order) are used to update the hash table, deleting or adding IDs as needed. Simultaneously, a 3D sparse matrix is gradually filled with alive cell coordinates. When a new CA iteration is completely computed, its corresponding matrix is added to a queue of matrices where future iterations are stored. When the main thread requires a new CA iteration, the oldest matrix in the queue is extracted. In addition, to minimize the latency introduced by refreshing the display, the main thread uses a sub-routine for swapping between two buffers of visible alive cells. A foreground buffer contains the cells currently in display while in a background one cells are being rendered. Further, the audio and video representations are synchronized in the same sub-routine by setting sonification parameters (i.e., file to playback, center frequency f c , and Q-factor of a filter), as explained in Section 3.4. Navigation Our simulation was comprised of Menu and Rendition modes, as shown in Figure 3. Initially presented with the Menu mode, users are able to switch between modes by pressing the Menu button of a controller (see Figure 2a). In the Menu mode (Figure 3a), users can adjust aspects of the CA such as length of the cube l, rule parameters, initially alive cells, behavior of the automaton at the boundaries, etc. Users can also store interesting CAs or retrieve previously stored ones for further inspection. The Menu mode allows users to define some aspects of the rendition such as its speed (in beats per minute), whether presenting the alive cells simultaneously or in a sequence of slices determined by their z-coordinate, start or stop the automaton, and when the sequential presentation is enabled, whether or not to follow the current slice, or choose a given slice for better inspection. 
Interactions with the user virtual interface are done by pointing to a given element (text-, check-box, slider, etc.) with a virtual laser-like beam and pressing the trigger button in the controller. When text boxes are selected, input was registered from a virtual keyboard. While the size of the universe is controllable by the user, displaying a large number of sources simultaneously becomes impractical. To avoid that, we fixed the display to a radius of six cells around the user in the Rendition mode. Only alive cells within this radius (max. 1728) are actively rendered: visually, by spheres whose color depended on their absolute location in the mesh (mapping Red, Green, and Blue channels to x, y, and z axes, respectively); aurally, by a sound depending on its current location. Dead cells, on the other hand, are represented by small semi-transparent black spheres with no sound. Initially, users are located at the center of the l-cube, but they are free to move to different locations of the automaton by physically waking around the simulation area, entering a set of coordinates in the Menu mode, by locating the camera at the center of the universe, or by means of the teleporting method of the VIVE Input Utility [30]. Besides the teleportation method, the displacement by controller/joystick method was also considered. However, and as it has been reported by Boletsis et al. [31], due to the motion sickness produced in pilot experiments, this method was discarded. The current location of a user within the universe is always visible at the top-right corner of the visual display, as shown in Figure 3b. In the VR version, video and audio are presented from an endocentric point of view [32], i.e., cells in the near field are centered at the middle of the head. For the desktop version, while audio is still endocentric, the video is presented from an egocentric point of view, effectively tethering a visual camera behind the head of a user's avatar. Sonification To represent the CA with sound, we opted for an auditory display in which features of the same musical timbre changed relative to the location and state of a cell. Since one of the design goals was to create musical sounds from CAs, the large maximum number of cells in each dimension (2048) became challenging, especially for representing pitch. There are many ways to map music scale features onto 3D spaces [33], but we resorted to Shepard tones [34] to finesse the problem of mapping pitch. Shepard tones take advantage of the circularity judgement of pitch (chroma) by which octaves of the same tone are considered to be at the same location, independent of their absolute frequency. For our environment, individual WAV files (sampled at 16 bit/48 kHz) for each Shepard tone were created in Matlab [35]. Twelve Shepard tones separated 100 cents from each other (i.e., a 12-TET-Tone Equal Temperament tuning) were created, each tone comprising 10 octaves. The lowest Shepard tone was set to have a first harmonic f 0 = 22 Hz. These complex tones were 250 ms long, and amplitude modulated to have instantaneous attack time, 63 ms of decay and release time, and −4.46 dB (re. full-scale) sustain level. In his invention, Shepard also included a band-pass filter. The effect of this filter is to reinforce sensation of a pitch at its center frequency f c and to let the tone harmonics to gradually fade as they recede from this frequency. 
In our implementation, f c took values between 100 Hz and 10 kHz in l equal intervals of a logarithmic scale and were mapped onto the x (lateral) axis, while Shepard tones in ascending order were mapped onto the y (vertical) axis, repeating each tone every twelve cells. The quality factor Q of the band-pass filter was mapped to the z (longitudinal) axis in equal intervals of a linear scale between 1 and 100. Whereas Shepard tones were computed in advance, filtering was implemented in Faust [36] and integrated in the audio pipeline so that it was computed online, previous to spatialization, as shown in Figure 4. Band-pass filter Spatializer Mono input [left, right] Channels Figure 4. DSP in the audio thread. Mono input signals coming from audio files are first band-pass filtered and then spatialized, creating a stereo signal that is sent to the audio buffer in Unity. Audio Spatialization Each alive cell in our system is an audio source whose location relative to the user is accordingly simulated. In Unity, audio sources are divided into audible (real) and inaudible (virtual) ones [37]. There are up to 255 audible sources at any given time in Unity. They are determined by a programmable priority, distance to a listener (closer/louder sources are promoted), and everything being equal, whichever audio is queued first, in that hierarchy. In our simulation, 1728 cells could be alive at any point. These are visually rendered without difficulty by the application but, because of the aforementioned restrictions, only 255 are audio rendered. To spatialize these audio sources, we implemented a plugin filter based on a reconstruction of Head-Related Transfer Functions (HRTFs) by Eigen decomposition, as described in [38]. In short, this method reconstructs HRTFs by adding to the mean HRTF computed from a distance-dependent HRTF database [25] the products of Eigenvalues and Eigenvectors specific to a given location until a spectral distortion criterion is reached (<1 dB between 0.1 and 16 kHz). Convolving an HRTF with a monophonic audio is an effective way to produce a spatialized version of it at the location where the HRTF was captured [39]. Not all possible locations in a virtual scene are included in HRTF databases. In such cases, interpolation between the closest HRTFs to the desired location is commonly used [40], but it is also possible to use only the closest HRTF when processing time is an issue. In our plugin, space around a listener is discretized differently depending on spherical coordinates: For elevation, −40 • ≤ φ ≤ 90 • , in 2 • steps. For azimuth θ, the discretization depends additionally on elevation: θ φ = 359 • cos(φ) + 1 • . Finally, for distance ρ, the original intervals at which the HRTFs were captured in [25] (i.e., [20,30], [30,40], . . . , [130, 160] cm) are sub-divided into ten equal parts. With these changes, the HRTF database used for our spatializer comprises 1,212,680 locations. No HRTF interpolation is applied in our solution, so cell locations are approximated by the closest entry in our database. The spatialization is performed on the previously mentioned audio thread by doing frequency-domain multiplication of the HRTF retrieved from the database and the fast-Fourier transformed audio signal associated with each alive cell. The simulation area was somewhat larger than the maximum distance recorded in our database, so the actual distance in the room was scaled down to fit that of the spatializer. 
Furthermore, audio sources are disabled once their corresponding audio file has been played. That way, the audio thread can incorporate incoming sources from the next iteration of the CA. To demonstrate our system, we present video recordings of a user interacting with the system on our repository [41]. Note that in order to render this monoscopic video from the original one (stereoscopic), some visual elements (such as the three-axes for navigation) are displaced. However, the auralization is correctly preserved. In addition, we shipped our system with initial configurations that yielded interesting patterns after 100 iterations of the automaton. We mainly found oscillators such as those reported previously in [9], for which a detailed description is given in [7]. These initial configurations can be loaded from the Menu mode, as previously explained. Objective Evaluation Performance tests were carried out on a MacBook Pro 2019 (processor Intel core I9-9880H with 8 cores at 2.3 GHz and DDR4 SDRAM memory with 32 GB at 2666 MHz) running Unity version 2020.1.9f1. Audio in the desktop and VR versions ran at the same sampling frequency (48 kHz) but differ in the size of the block used for processing audio: 256 and 1024 samples for the Desktop and VR version, respectively. The block size determines the maximum time that the audio thread could use to avoid audio dropouts (5.333 and 21.333 ms, respectively). To test the spatializer capabilities, we replaced audio files corresponding to cells in the CA by longer WAV files (60 s long sampled at 48 kHz/16-bits) since our short files posed no processing difficulties. These longer files were successively added every 5 s and the CPU time consumed by the spatializer was monitored using the Unity profiler [42]. Results of these tests are shown in Figure 5. For a block of 256 samples, the most stringent case, we found that on average the maximum CPU time used by the spatializer was 2.735 ms, whereas for the 1024 sample block, this time was 6.247 ms. These figures represent 51% and 29% of the block times, indicating that the maximum number of audio sources determined by default in Unity (255) can be spatialized without audio drop-outs with our current hardware set-up. Subjective Evaluation We performed a subjective experiment to compare the desktop and VR versions of our system. In addition to the performance achieved by the subject (i.e., covered distance and number of targets identified, as explained later), we measured user experience (UX) by means of the UEQ questionnaire [43], a set of 26 questions and interpretations, and intrinsic motivation (IMI) [44], another multidimensional scale, by means of 20 questions. Participants Permission for performing this experiment was obtained following the University of Aizu ethics guidelines. A total of 31 young adult students (22 year old, σ = 2.0) from the University of Aizu volunteered for this experiment. Data from one of them was excluded from further analyses, as this participant had hearing thresholds > 20 dB (HL) in at least two bands (0.125-8 kHz). Hearing thresholds were measured with an MA25 audiometer (MAICO, Eden Prairie, MN, USA). Participants were mostly Japanese right-handed males (only 4 were non-Japanese, 5 left-handed, and 3 females) who claimed to play regular videogames 8 h a week on average (min. = 0, max. = 35, median= 5). Only three participants claimed to regularly play VR games (1, 5, and 7 h a week). 
Materials
The experiment was conducted with a set-up similar to that described in Section 3.1. For this experiment, a pair of HD 380 pro (closed) headphones (Sennheiser, Wedemark, Germany) was used instead. The sound level was adjusted to be comfortable for the participants. A game based on our CA visualization system was implemented. The game comprised two scenarios (demonstration and main task) and was built for VR and regular desktop. The main difference between the two versions lies in their navigation interface: while navigation in the VR version is the same as described in Section 3.3, in the desktop version the 'W', 'A', 'S', and 'D' keys of a QWERTY keyboard were used to move forward, left, back, and right, respectively, as is common in PC games. In addition, the left mouse button was used to rotate the camera and to interact with UI elements in the Menu mode, and the right button to identify a target pattern. The game had a fixed duration of 10 min (the remaining time was always visible), and participants earned points by finding a target pattern (pressing the trigger button in the VR version, or right-clicking in the PC version). The current score was also always visible. Correctly identified patterns were indicated by visual effects and a unit increment in the score; participants were unable to further interact with correctly identified patterns. Incorrectly identified patterns were indicated by different visual effects, but participants were able to continue interacting with them. The demonstration scenario was presented first. It comprised four patterns: two repetitions of the target and two distractors. For the main task scenario, elapsed time, pattern (target or distractor), and user locations were recorded for analysis. In all cases, the patterns were oscillators, so that possible interactions between patterns could not annihilate them. The automaton was set in advance, with 400 patterns (100 targets) distributed throughout the universe.
Procedure
The task was to play the described game and earn as many points as possible. After that, participants were instructed to fill out a questionnaire. Since one of the UEQ dimensions is novelty, and VR-based presentations are usually regarded as more novel than desktop-based ones, participants were randomly assigned to two groups: the control group played the PC version of the game and the experimental group played the VR version. A session started with a brief explanation of 3D cellular automata. After that, an explanation of how to interact with the game was provided verbally. During the demonstration scenario, participants were asked to memorize the target pattern by looking at it from different perspectives. Participants were also monitored while performing the task, and additional instructions were provided when necessary. When the participants were judged to be proficient at the game, the main task scenario was presented; no feedback or further explanations were provided. After finishing the main task scenario, participants answered a questionnaire including the UEQ and IMI questions; all items were mandatory. UEQ and IMI questions were answered using a discrete scale marked from 1 to 7. The UEQ dimensions were derived from questions where participants rated the experience between two polar values such as "annoying/enjoyable", "creative/dull", etc., mapped to the extremes of the scale. The low and high extremes of the scale for IMI statements (e.g., "I think I am pretty good at this task", "I thought the task was very boring", etc.)
were marked with "not at all true" and "very true", respectively. All interactions with the participants (verbal and written) were made in either Japanese or English, according to their preference. On average, participants spent less than 30 min to finish the whole experience. Finally, as a measure against the spread of COVID-19, participants were requested to wear masks and nitrile gloves. Likewise, the keyboard, headphones, mouse, etc., were disinfected after each participant and the room was ventilated. Only one participant was allowed in the room during the experiment.
Results
The UEQ questionnaire evaluates UX in six different dimensions: attractiveness, dependability, efficiency, novelty, perspicuity, and stimulation. These dimensions are derived from 7-point scale questions and zero-centered. The UEQ also allows for benchmarking results against data from 468 studies (21,175 persons). Compared with the benchmark data, the VR version of the game was rated "excellent" in attractiveness while the PC version was rated "good". However, differences between the two groups were not significant, as indicated by a series of between-subject ANOVAs performed with the 'ez' library [45] in R [46]. These results are shown in Table 1 and Figure 6a. The IMI questionnaire evaluates the intrinsic motivation of the participants in five dimensions: (perceived) choice, (perceived) competence, effort, interest, and (perceived) value. No significant differences between the two groups were found for any of these dimensions, as shown in Figure 6b and Table 1. The PC group traveled on average more than twice the distance traveled by the VR group, as shown in Figure 6c. This difference was significant [F(1, 28) = 27.296; p < 0.001; η²G = 0.493]. In a similar fashion, the PC group earned on average 10 points more than the VR group. This difference, shown in Figure 6d, was also significant [F(1, 28) = 6.017; p = 0.021; η²G = 0.001]. The PC group also misidentified more targets than the VR group, but this difference was not significant, as indicated by an ANOVA on the arcsine-transformed fraction of correct responses out of total responses [F(1, 28) = 0.029; p = 0.865] and shown in Figure 6e.
Discussion and Future Work
Auralization in our case is limited by the number of audio sources that Unity can render simultaneously. Our results show that the implemented spatializer was able to handle these audio sources without dropouts. Life models capable of presenting thousands of cells dwarf our proposal. However, we found that our small mesh was large enough to challenge the limitations of the underlying VR platform (Unity) and to successfully generate musical patterns with interesting rhythmical possibilities. We are now working on offering greater freedom in the mapping of acoustic features to cell properties by means of an editor. In a previous report [7], we included the results of a subjective experiment (n = 21) performed with the desktop version of our system. We reported that immersion and perceived quality were rated significantly higher when audio spatialization was used than when no audio was presented. We also found no significant differences between the effects of sequential and simultaneous presentation. Our study suggests that any differences between VR and PC presentations are rather small and not observable with our sample size. These results are consistent with those of An et al.
[47], who found no performance difference between the two groups (PC and VR) while playing a cultural game. More recently, the effect of platform (PC vs. VR) has been found to be less important than embodiment (opportunities to interact with and manipulate virtual assets) [48]. Furthermore, a limitation of our study, namely our cohort's familiarity with PC video games and lack of experience with VR games, could explain the similarities between the two groups. Familiarity with PC video games could also explain the larger distance covered on average by the PC cohort, since few VR participants made extensive use of the teleporting method. Teleporting was chosen because other displacement methods, such as controller/joystick locomotion, produced motion sickness in pilot experiments. Thus, we are interested in repeating this experiment with a larger sample that is more balanced in terms of familiarity with both platforms. The results obtained from our CA visualization and auralization indicate that they are beneficial for the exploration of patterns within these models. In terms of immersion and perceived quality, subjective experiments based on a desktop version of our system also indicate improvements over visual-only explorations. In our simulations, we opted for an auditory display where band-pass filtered tones vary in pitch, center frequency, and filter quality factor depending on the cell's location in the CA lattice. Although this arbitrary mapping helps to aurally explore the CA, it falls short of our ambition to generate enticing music from it. As much as we adhere to the definition of music offered by Berio, that "Music is everything that one listens to with the intention of listening to music" [49], we consider that other mappings could generate musical patterns more familiar to many users, for example by using the circle of fifths (a different circular arrangement of the western musical scale). In the same vein, since some CA rules have been proven to be universal Turing machines, like rule 110 in one-dimensional CAs [50] or the Game of Life [51], we are investigating the generation of arbitrary melodies using Genetic Algorithms. In this case, we look for a set of rules and an initial state that results in a given melody after a certain number of CA iterations. We defer, however, such endeavors to further reports.
Conclusions
A VR tool for the exploration of three-dimensional CAs was introduced. In addition to the visual representation and exploration by means of HMDs, the implemented tool renders the CA using spatial sound. The audio spatializer developed for this research is capable of rendering the maximum number of audio sources allowed by the game engine, and it uses actual HRTF measurements, so accurate renditions of sources in the near field can be achieved. Contrary to our expectations, we found no benefit of exploring CAs using VR relative to non-immersive explorations in terms of user experience, intrinsic motivation, and subjective performance. The reasons for this outcome are not well understood, but similar findings have been reported in the literature. It is possible that familiarity with desktop interfaces has a larger effect than we expected.
8,524.6
2022-02-08T00:00:00.000
[ "Computer Science" ]
The Effect of Fabrication Error on the Performance of Mid-Infrared Metalens with Large Field-of-View Mid-infrared large field-of-view (FOV) imaging optics play a vital role in infrared imaging and detection. The metalens, which is composed of subwavelength-arrayed structures, provides a new possibility for the miniaturization of large FOV imaging systems. However, fabrication inaccuracy is the main obstacle to developing practical uses for metalenses. Here, we introduce the principle and method of designing a large FOV doublet metalens in the mid-infrared band. Then, the quantitative relationship between fabrication error and the performance of the large FOV doublet metalens is explored for four different fabrication errors by using the finite-difference time-domain method. The simulation results show that the inclined sidewall error has the greatest impact on the focusing performance, the interlayer alignment error deforms the focusing beam and degrades the focusing performance, while the spacer thickness error has almost no impact on the performance. The contents discussed in this paper can help manufacturers determine the allowable processing error range of the large FOV doublet metalens and the priority level for optimizing the process, which is of practical significance.
Introduction
The mid-infrared band (2.5-25 µm), which contains two atmospheric windows and a molecular "fingerprint" region, plays an important role in the medical and scientific fields, in applications such as biological imaging, target detection, and microscopic imaging; imaging systems with a large field-of-view (FOV) are crucial in these applications [1][2][3]. In large FOV imaging systems, the correction of off-axis aberrations (coma, astigmatism, field curvature, and distortion) is the key to obtaining high-quality imaging; among these aberrations, the distortion, which causes geometric deformation, can be eliminated by subsequent image processing [4,5]. In traditional large FOV imaging systems, off-axis aberration correction is usually achieved by combining multiple lenses or lens arrays, resulting in a large volume and heavy weight of the optical system. A metasurface, an optical device composed of subwavelength-scale anisotropic or isotropic scatterers (micro-nano structures) arrayed on a substrate at sub-wavelength intervals, provides a possible solution for realizing lightweight and compact optical components. According to the generalized Snell's law, by changing the parameters (shape, size, azimuth, etc.) of these micro-nano structures and the refractive-index contrast between them and the surrounding medium, the metasurface can flexibly adjust and control almost any electromagnetic wave parameter (phase, amplitude, polarization, frequency) [6][7][8][9]. Among metasurface applications, the metalens is a metasurface with a lens function. It realizes the performance and function of a traditional lens while being extremely thin and easy to integrate [10][11][12][13], which makes it one of the most exciting and important topics in current research. Based on the above advantages, large FOV imaging devices based on a metalens have also been widely studied. One of the main approaches is to integrate two layers of metalens with different phase distributions on both sides of the same substrate [14,15].
The first layer of metalens is used to correct the spherical aberration and to separate normally incident and obliquely incident rays so that they can be focused by different parts of the second layer of metalens. In contrast to a single-layer metalens [16,17], this construction can simultaneously correct spherical aberration and off-axis aberrations and is suitable for large numerical aperture (NA) application scenarios. During the design of any metalens, it is common to simulate and verify its phase, performance, or function. Even if the simulation results are excellent, the fabricated devices usually do not exhibit equally good performance on account of processing errors and may even differ greatly [18,19]. Various micro-nano manufacturing methods have developed rapidly in the past few decades, such as electron beam lithography (EBL), focused ion beam (FIB), laser direct writing (LDW), and template transfer technology [20][21][22][23]. However, random factors such as time, temperature, and human operation, or inherent physical mechanisms such as the standing wave effect [24] and the proximity effect [25], introduce fabrication errors and affect the final performance. These factors are challenges faced by any lithography technology. As a consequence, the fabrication of micro-nano structures with high resolution, high precision, and a high aspect ratio is the key to taking the metalens from theory to practical applications. Compared with metalenses working in the visible and near-infrared bands, a metalens working in the mid-infrared band is easier to produce at large scale and low cost because of its larger characteristic size, and is thus closer to practical applications. Therefore, it is necessary to explore the influence of fabrication error on device performance to guide the processing of a high-quality metalens in the mid-infrared band. In this work, to analyze the influence of fabrication errors that may be introduced during the preparation process on the performance of a large FOV doublet metalens, simulation analyses are carried out in the mid-wave infrared for four aspects: the critical dimension (CD) bias error of the micro-nano structures, the sidewall inclination, the alignment between the two layers of the metalens, and the thickness of the substrate. In the second part of this paper, we first introduce the working principle and the optimization design process of a doublet metalens with a large FOV and show the simulated performance of the doublet metalens without error. In the third part, the XY and XZ plane light intensity distributions, the modulation transfer function (MTF), and the focusing efficiency are used as performance indicators to compare the doublet metalens under each error condition with the no-error condition. Although the doublet metalens discussed in this paper operates in the mid-infrared, the influence of each error on its performance is also a reference for other wavelength bands.
Principle and Design Method
Figure 1a is a schematic diagram of the doublet metalens: two layers of metalens with different phase distributions are integrated on both sides of the same substrate. Since the first layer of metalens is used to correct spherical aberration, it can be called the correction layer (CL), and the second layer of metalens is used for focusing and is called the focusing layer (FL).
In order to correct spherical aberration, the CL designed in this paper adopts the even-polynomial phase profile shown in Equation (1) [14,15], which has a phase distribution similar to that of a Schmidt plate [26]: it makes the edge rays diverge and the central rays converge, which is contrary to the refraction behaviour of a spherical lens. Therefore, the Schmidt plate can play a certain role in correcting spherical aberration.

φ(x, y) = M Σ_{n=1}^{5} a_n (r/R_0)^{2n}   (1)

where M is the diffraction order, r = √(x² + y²) is the radial distance from each micro-nano structure to the center of the metalens, R_0 is the normalization radius of the metalens, a_n are the optimization coefficients, the subscript n indexes the optimization coefficients, and the polynomial degree here is 5.

The second layer of the doublet metalens also adopts the phase profile of Equation (1), and the off-axis aberrations can be eliminated to some extent by optimizing its coefficients and the thickness of the substrate [27].
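For illustration only, the short sketch below evaluates an even-polynomial phase profile of this form on a lattice of unit cells; the coefficient values are placeholders, not the optimized ZEMAX coefficients used in the paper.

```python
import numpy as np

def even_polynomial_phase(x, y, coeffs, R0, M=1):
    """Evaluate phi(x, y) = M * sum_n a_n * (r / R0)^(2n) for n = 1..len(coeffs)."""
    r = np.sqrt(x**2 + y**2)                      # radial distance to the metalens centre
    rho = r / R0                                  # normalized radius
    terms = [a * rho**(2 * n) for n, a in enumerate(coeffs, start=1)]
    return M * np.sum(terms, axis=0)

# Hypothetical usage on a 37 x 37 grid of 2 um unit cells (placeholder coefficients).
cells = np.arange(-18, 19) * 2.0e-6              # cell-centre coordinates in metres
X, Y = np.meshgrid(cells, cells)
phi = even_polynomial_phase(X, Y, coeffs=[10.0, -2.0, 0.5, -0.1, 0.01], R0=37e-6)
phi_wrapped = np.mod(phi, 2 * np.pi)             # the structures only need the phase modulo 2*pi
```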
In this paper, the optimization design of the large FOV doublet metalens is completed jointly with the optical simulation software ZEMAX and the 3D finite-difference time-domain (FDTD) method. The design steps are as follows: ray tracing is carried out in ZEMAX to determine the ideal phase distribution of the CL, the thickness of the substrate, and the ideal phase distribution and diameter of the FL; FDTD Solutions is used to calculate the phase of micro-nano unit structures with different size parameters to obtain a phase database; in MATLAB, phase matching is carried out at each coordinate of the metalens, selecting the structure whose phase differs least from the ideal phase at that coordinate; finally, the focusing performance is verified by FDTD simulation.

Before the ZEMAX ray-tracing optimization, we first determined that the doublet metalens substrate is BaF2, a low-refractive-index material commonly used in the infrared, and that the metalens layers (the Binary 2 layers) are made of Si, a high-refractive-index material; the high refractive-index contrast of the two materials enhances the phase modulation of the micro-nano structures [13]. Figure 1b displays the basic dimensions of the doublet metalens optimized in ZEMAX: the diameters of the CL and the FL are 40 µm and 74 µm, respectively, the thickness of the substrate is 58 µm, and the FOV is 40°. Figure 1c,d shows the ideal phase distribution of the doublet metalens. The phase coverage of the CL is small because its role is to correct aberrations, while the FL is used for focusing, so its phase must cover 2π. Figure 1e shows the MTF curves of the ideal doublet metalens obtained by the ZEMAX optimization at each FOV together with the diffraction limit; the MTF at each FOV is consistent with the diffraction limit.

Before using the FDTD method to simulate the actual diffraction process of the doublet metalens, it is established that both the CL and the FL adopt the propagation phase design, which is the phase mechanism best suited to wide-angle incidence [28]; it is not affected by the polarization state of the incident light and thus has a wide range of application scenarios. The micro-nano structures adopt a cylindrical shape, as shown in Figure 2a, and the introduced phase can be expressed as φ = (2π/λ) n_eff H, where λ is the design wavelength, H is the height of the micro-nano pillar, and n_eff is its effective refractive index; n_eff, and hence the phase response, can be adjusted by changing the pillar diameter [29,30]. In order to simulate the diffraction process accurately while keeping the simulation time short, the CL and the FL are simulated separately, and the propagation between the two layers of the doublet metalens, as well as between the FL and the focal plane, is calculated by far-field projection. On this basis, the no-error doublet metalens is simulated at a wavelength of 4.2 µm. According to the Nyquist sampling theorem, the unit period of the micro-nano structures is set to 2 µm, and the diameter varies from 0.5 µm to 1.5 µm. Figure 2b maps the simulated phase shift of the micro-nano structure as a function of its height and diameter (the white and black lines mark the chosen structure heights of the CL and the FL, respectively); based on it, the micro-nano structure heights of the CL and the FL are set to 0.7 µm and 2.3 µm, respectively, by weighing the phase matching error against the transmission. Figure 2c,d compares the ideal and actual phases of the CL and the FL (blue plus markers: ideal phase; red circles: design phase), where the abscissa is the radial distance to the center of the metalens; the actual phase of the doublet metalens matches the ideal phase to a high degree.
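A minimal sketch of the phase-matching step described above is given below: for each lattice coordinate, the pillar whose tabulated phase is closest (modulo 2π) to the ideal phase is selected. The phase database used here is a placeholder; in the paper it comes from FDTD simulations of individual pillars.

```python
import numpy as np

def match_pillars(ideal_phase, diameters, pillar_phase):
    """For every ideal phase value, pick the pillar diameter whose simulated
    phase minimizes the wrapped phase difference (mod 2*pi).

    ideal_phase : array of ideal phases (rad) at the lattice coordinates
    diameters   : candidate pillar diameters (same length as pillar_phase)
    pillar_phase: simulated phase (rad) of each candidate pillar
    """
    delta = ideal_phase[..., None] - pillar_phase[None, ...]
    delta = np.angle(np.exp(1j * delta))          # wrap differences to (-pi, pi]
    best = np.argmin(np.abs(delta), axis=-1)      # index of the closest candidate
    return diameters[best]

# Hypothetical usage with a placeholder (linear) phase database.
diameters = np.linspace(0.5e-6, 1.5e-6, 11)                      # 0.5-1.5 um candidates
pillar_phase = np.linspace(0.0, 2 * np.pi, 11, endpoint=False)   # placeholder phases
ideal = np.mod(np.random.default_rng(0).random((37, 37)) * 2 * np.pi, 2 * np.pi)
chosen_diameters = match_pillars(ideal, diameters, pillar_phase)
```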
Figure 3a displays the light intensity distributions in the actual focal plane and along the focal-length direction of the above no-error doublet metalens at three incidence angles of 0°, 10°, and 20°, obtained through FDTD simulation. During the simulation, a ring of PEC (perfect electrical conductor) material acts as the aperture stop; it is worth mentioning that, according to our simulations, the diffraction introduced by the PEC ring itself is negligible, so its introduction has no impact on the function of the system. The actual focal plane obtained from the simulation is located at Z = 38 µm, a 2 µm difference from the design focal length. A possible reason is that the metalens is treated as a plane in ZEMAX, whereas in the FDTD Solutions simulation of the diffraction process the metalens has a finite height, so the actual value deviates somewhat from the design value.
Figure 3b shows the MTF curves obtained by Fourier transforming the intensity distribution at the focal plane; they comprehensively reflect the resolution and contrast of the metalens [31,32], and the larger the area under the MTF curve, the better the metalens. Compared with the diffraction limit, the MTF curve at each FOV shows a certain attenuation, which is caused by aberration and diffraction. Figure 3c shows the corresponding focusing efficiency: the maximum focusing efficiency is 75.50% and the average focusing efficiency is 68.14%. The focusing efficiency here is defined as the light intensity within an area of diameter 3 × FWHM (full width at half maximum) on the focal plane divided by the total incident light intensity [29].
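As an illustration of this metric, the sketch below estimates the focusing efficiency from a simulated focal-plane intensity map by integrating the intensity inside a disc of diameter 3 × FWHM around the focal spot; the grid spacing and the way the FWHM is estimated (from a single horizontal cut) are simplifying assumptions, not the exact post-processing used in the paper.

```python
import numpy as np

def focusing_efficiency(intensity, dx, total_incident_power):
    """Fraction of the incident power contained in a disc of diameter 3*FWHM
    centred on the intensity maximum of a focal-plane map.

    intensity: 2-D array of focal-plane intensity samples
    dx: grid spacing (same units on both axes)
    """
    peak_idx = np.unravel_index(np.argmax(intensity), intensity.shape)
    peak = intensity[peak_idx]

    # Crude FWHM estimate from a horizontal cut through the peak.
    cut = intensity[peak_idx[0], :]
    above = np.where(cut >= 0.5 * peak)[0]
    fwhm = (above[-1] - above[0] + 1) * dx

    # Integrate the intensity inside the 3*FWHM-diameter disc.
    yy, xx = np.indices(intensity.shape)
    r = np.hypot((xx - peak_idx[1]) * dx, (yy - peak_idx[0]) * dx)
    focal_power = np.sum(intensity[r <= 1.5 * fwhm]) * dx**2
    return focal_power / total_incident_power
```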
In the process of manufacturing the metalens, errors will inevitably be introduced. For example, during the photolithography process, accidental factors such as the photoresist thickness, mechanical vibration, dust particles, temperature changes, development time, and pretreatment conditions affect the manufacturing quality; these can be addressed through repeated process experiments and the optimization of process parameters. However, it is difficult to solve the problems caused by inherent physical mechanisms such as the standing wave effect and the proximity effect. At present, most high-aspect-ratio etching processes use inductively coupled plasma reactive ion etching (ICP-RIE) [33], which has the advantages of high control accuracy, large-area etching consistency, good verticality, and low material loss. In the field of deep silicon etching, the currently widely adopted process flow is based on the etching-passivation gas alternation technology invented by Bosch [34,35]: a passivation layer is generated on the sidewall of the etched material, and the etching gas etches both the Si and the passivation layer at the same time, which protects the sidewall Si from lateral etching; however, attention must be paid to controlling the time interval between passivation and etching. In addition to the accidental factors above, other factors also influence the etching process, such as the selection and proportion of the reaction gases, the working pressure in the chamber, and the inductively coupled plasma (ICP) power, all of which affect the structure morphology and size.
Critical Dimension (CD) Bias Error
CD bias error refers to the inconsistency between the radial size of the fabricated micro-nano structures and the ideal design, which is a common error in nanomanufacturing technology. Most photoresist profiles are regular trapezoids, and the photoresist at the edge is relatively thin, so resist retreat occurs during etching, that is, the photoresist in the edge area is exhausted and the radial size of the pillars becomes smaller and thinner. At the same time, in the Bosch process, a balance must be maintained between the etching agent and the passivating agent to keep the etching anisotropic; if the balance is broken by too much etching agent, the process becomes more isotropic, resulting in CD loss. In the alternating etching-passivation cycles, a rough sidewall is inevitable [34,36]. Since the roughness is regular and occurs on a length scale much smaller than the wavelength, this error can be simplified as a reduction of the diameter of the micro-nano pillars to facilitate qualitative analysis. In this paper, the CD of the micro-nano structures is reduced as a whole. Figure 4a shows a schematic of the CD bias error, where M is the reduction coefficient. Based on this, we simulate and analyze the cases in which M is 95.0%, 90.0%, 85.0%, 80.0%, 75.0%, and 70.0%. The focusing efficiency and MTF curves for each error condition are evaluated at the respective actual focal plane, rather than at the no-error focal length; therefore, under some specific errors, one or two performance indicators may be slightly better than in the no-error condition. Figure 4b,c shows the light intensity distributions along the XY and XZ directions at different FOVs under the various error conditions. It can be found that as the CD bias error increases, the sub-diffraction orders become more obvious.
At the same time, the MTF curves in Figure 4d show that when the error is 80.0% and 70.0% the curves fluctuate, indicating field curvature. To help readers distinguish the MTF curves for different errors and FOVs, this paper selectively displays the MTF curves for only some of the errors. In addition, the MTF value of each curve at a frequency of 33.33 lp/mm is extracted and plotted as a line chart in Figure 4e for easier comparison. From the chart, when the error increases from 100.0% to 70.0%, the average MTF value decreases from 0.716 to 0.154; meanwhile, Figure 4f shows that the average focusing efficiency drops from 68.14% to 33.41%. We also find that the focusing efficiency is higher for errors of 95.0% and 90.0% than in the no-error case, mainly because of the increased transmittance of the FL: when the diameters of the micro-nano structures are reduced as a whole, the spacing between structures increases and the coupling between them decreases, so the phase modulation capability and the transmittance are improved to a certain extent; this phenomenon occurs only within a certain range of CD reduction.
Inclined Sidewall Error
The inclined sidewall error means that the radii of the upper and lower surfaces of the micro-nano structure are not consistent, resulting in a slope-like sidewall. In addition to accidental factors, the anisotropy of the process itself also causes the inclined sidewall error. Its essence is that deep reactive ion etching (DRIE) achieves the desired etching depth by continuously alternating passivation and etching cycles, but as the etching depth increases, the density of active F radicals reaching the bottom gradually decreases and the etching speed also drops, so that the etching and passivation processes can no longer reach an effective balance, which affects the sidewall roughness and verticality [37,38]. Although this effect can be reduced by controlling the alternating intervals of the etching and passivation steps, an inclined sidewall cannot be completely avoided. To analyze its influence on the large FOV doublet metalens, the same inclination is applied to all micro-nano structures when simulating a single error condition. Figure 5a is a schematic diagram of the inclined sidewall error, and Figure 5b-f shows the focusing performance of the doublet metalens under six cone angles of 1.0°, 2.5°, 5.0°, 7.5°, 10.0°, and 12.5°. As can be seen from Figure 5c, the sub-diffraction order becomes obvious from a cone angle of 7.5°, especially at 12.5°.
This trend is also reflected in Figure 5b. At the same time, Figure 5d shows that the MTF curves under large error conditions fluctuate, that is, field curvature appears; this is also manifested in Figure 5c, where the focal planes of the three FOVs do not coincide under large error conditions. From Figure 5e,f, when the cone angle increases from 0° to 12.5°, the average MTF value at a frequency of 33.33 lp/mm decreases from 0.716 to 0.155, and the average focusing efficiency decreases from 68.14% to 19.64%. The reduction of focusing efficiency has two main causes. One is the reduction of the transmittance of the FL: Figure 5e presents the line chart of the transmittance of the selected FL structures as a function of their bottom radius, and with increasing cone angle more pillars have low transmittance, lowering the transmittance of the FL. The other is the decrease of the relative focusing efficiency on the focal plane: as seen in Figure 5h, when the cone angle increases beyond a certain point, the phase provided by the structures can no longer cover 2π and the lens eventually fails to focus.
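As a rough geometric illustration (not the FDTD model used in the paper), the sketch below assumes a linear taper and estimates how much the top radius of a pillar shrinks for a given cone angle; for the 2.3 µm-tall FL pillars, even moderate cone angles remove a large fraction of the 0.25-0.75 µm radius range, which is consistent with the phase no longer covering 2π.

```python
import math

def tapered_top_radius(bottom_radius_um, height_um, cone_angle_deg):
    """Top radius of a pillar with a linearly inclined sidewall (truncated cone)."""
    shrink = height_um * math.tan(math.radians(cone_angle_deg))
    return max(bottom_radius_um - shrink, 0.0)

# Illustrative values: FL pillar height 2.3 um, largest bottom radius 0.75 um.
for angle in (1.0, 2.5, 5.0, 7.5, 10.0, 12.5):
    r_top = tapered_top_radius(0.75, 2.3, angle)
    print(f"cone angle {angle:4.1f} deg -> top radius {r_top:.2f} um")
```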
Interlayer Alignment Error
The interlayer alignment error refers to the deviation between the center of the currently prepared metalens pattern and the center of the metalens pattern already prepared on the substrate. The causes of such errors include deformation of the wafer itself, errors of the lithography machine, uneven movement of the wafer workpiece, errors in the alignment mark, and environmental factors [39]. This alignment step is difficult, and the already prepared pattern must also be protected from damage while the second layer of patterns is processed. To quantitatively analyze the interlayer alignment error, this paper takes the center of the CL as the reference and offsets the center of the FL in the X direction. Figure 6a is a schematic diagram of the interlayer alignment error, and ∆S represents the offset. We simulate doublet metalenses with ∆S equal to 5.0%, 10.0%, 15.0%, 20.0%, 25.0%, and 30.0% of the diameter of the FL (D_FL), respectively. From Figure 6b,c, it can be seen that as the error increases, the focusing beam is deformed and the focusing performance degrades. From Figure 6d,f, when the error increases from 0.0% to 30.0%, the average MTF value at a frequency of 33.33 lp/mm decreases from 0.716 to 0.587, and the average focusing efficiency decreases from 68.14% to 43.88%.
Spacer Thickness Error
The spacer thickness error is the difference between the actual thickness of the substrate and the ideal thickness. The spacer thickness in this paper is obtained by the ZEMAX optimization; if the actual substrate thickness is to match this value, a grinding process is required. The wafer grinding process, also known as backside grinding, thins the back side of the wafer, and grinding the wafer to the ideal thickness range is one of its goals. The wafer grinding error is inversely proportional to the wafer thickness [40], so the quality of the wafer can be improved if the ground-off thickness is minimized. One of the objectives of this paper is to find the tolerable range of the spacer thickness error.
Figure 7a shows a schematic diagram of the spacer thickness error, where ∆L represents the spacer thickness increment. This paper simulates the doublet metalens when ∆L is 5.0%, 10.0%, 15.0%, 20.0%, 25.0%, and 30.0% of the ideal spacer thickness L, respectively. As can be seen from the light intensity distributions in the XY and XZ directions in Figure 7b,c, even the largest error has only a slight influence on the performance of the doublet metalens. From the MTF curves and the focusing-efficiency line chart in Figure 7d-f, when the error increases from 0.0% to 30.0%, the average MTF value changes only slightly, from 0.716 to 0.729, and the average focusing efficiency changes from 68.14% to 68.57%. It can be seen that a discrepancy in the spacer thickness has little effect on the focusing performance of the doublet metalens.
Conclusions
In summary, the basic design principle and method of a mid-infrared doublet metalens with a large FOV are introduced, and the effects of common micro-nano fabrication errors on the performance of the doublet metalens are simulated and analyzed, including the CD bias error, the inclined sidewall error, the interlayer alignment error, and the spacer thickness error. The simulation results show that both the CD bias error and the inclined sidewall error have a great impact on the focusing performance, with the inclined sidewall error being the more serious: under this error the MTF decreases rapidly and fluctuates obviously, and the focusing efficiency also decreases significantly as the error increases. The interlayer alignment error not only deforms the focusing beam but also affects the final focusing efficiency. In contrast, the effect of the spacer thickness error on the focusing performance is minimal, and according to the performance analysis its effects are negligible.
The influence of fabrication error on the performance of the large FOV doublet metalens is explored in this paper, which can help manufacturers determine the allowable processing error range and guide the fabrication of high-quality doublet metalenses. Data Availability Statement: The data presented in this study are available on request from the corresponding author. Conflicts of Interest: The authors declare no conflict of interest.
8,635.2
2023-01-21T00:00:00.000
[ "Physics" ]
Detection and explanation of anomalies in healthcare data The growth of databases in the healthcare domain opens multiple doors for machine learning and artificial intelligence technology. Many medical devices are available in the medical field; however, medical errors remain a severe challenge. Various algorithms have been developed to identify and address medical errors, for example by detecting anomalous readings or anomalous health conditions of a patient; however, they fail to answer why those entries are considered anomalies. This research gap leads to the problem of outlying aspect mining. The problem of outlying aspect mining aims to discover the set of features (a.k.a. subspace) in which a given data point is dramatically different from the others. In this paper, we present a framework that detects anomalies in healthcare data and then provides an explanation of the anomalies. This paper aims to effectively and efficiently detect anomalies and explain why they are considered anomalies by detecting their outlying aspects. First, we revisit four anomaly detection techniques and four outlying aspect mining algorithms. Then, we evaluate the performance of the anomaly detection techniques and choose the best anomaly detection algorithm. Next, we take the top-k anomalies as queries and detect their outlying aspects. Lastly, we evaluate performance on 16 real-world healthcare datasets. The experimental results show that the latest isolation-based outlying aspect mining measure, SiNNE, has outstanding performance on this task and yields promising results.
Introduction
Despite improvements in healthcare instruments, the presence of medical errors remains a severe challenge [1]. Applying machine learning (ML) and artificial intelligence (AI) algorithms in the healthcare industry helps improve patients' health more efficiently. According to [2], around 86% of healthcare companies use machine learning and artificial intelligence algorithms. These algorithms help in many ways, such as medical image diagnosis [3,4], disease detection/classification [5][6][7], medical data analysis [8], medical data classification [9,10], drug discovery [8], robot surgery [8], and detecting anomalous readings [11]. Recently, researchers have become interested in detecting abnormal activity in the healthcare industry. An anomaly (or outlier) is defined as a data instance that does not conform with the remainder of the set of data instances. In the healthcare domain, an anomaly refers to an unusual health condition or activity of a patient [12,13]. A vast number of applications have been developed to detect anomalies in medical data [14][15][16][17]. However, to the best of our knowledge, no study has investigated why these points are considered anomalies, i.e., on which set of features a data point is dramatically different from the others. The problem of detecting such an explanation leads to outlying aspect mining (a.k.a. outlier explanation, outlier interpretation, or outlying subspace detection). Outlying aspect mining aims to identify the set of features where a given point (or a given anomaly) is most inconsistent with the rest of the data. In many healthcare applications, a medical officer wants to know the most outlying aspects of a specific patient compared with other patients. For example, suppose you are a doctor treating patients with Pima Indian diabetes; while treating a particular patient, you want to know in which aspects this patient differs from the others. As an illustration, consider the Pima Indian diabetes data set.
For 'Patient A', the most outlying aspect is having the highest number of pregnancies and a low diabetes pedigree function (see Fig. 1; the description of the data set is provided in Table 1), compared with other subspaces. Another example is when a medical insurance analyst wants to know in which aspects a given insurance claim is most unusual. The above applications are different from anomaly detection: instead of searching the whole data set for anomalies, in outlying aspect mining we are specifically interested in a given data instance. The goal is to find the outlying aspects in which that data instance stands out. Such a data instance is called a query q. These interesting applications of outlying aspect mining in the medical domain motivated us to write this paper. In this paper, we first introduce four anomaly detection techniques and four outlying aspect mining methods. Later, we evaluate their performance on 16 healthcare datasets. To the best of our knowledge, this is the first time these algorithms have been applied to healthcare data. Our results verify their performance on the anomaly detection and outlying aspect mining tasks and show that isolation-based algorithms have promising performance, i.e., iForest performs well for anomaly detection and SiNNE performs well for the outlying aspect mining task. The rest of the paper is organized as follows. Section 2 summarizes the principles and working mechanisms of four outlying aspect mining algorithms and four anomaly detection algorithms. Next, the experimental setup and results are summarized in Sects. 3 and 4, respectively. Finally, we conclude the paper in Sect. 5.
Existing methods
Before describing the different outlying aspect mining algorithms, we first provide the problem formulation.
Basic notations and definitions
Definition 1 (Problem definition) Given a set of n instances X (|X| = n) in d-dimensional space, a data point q ∈ X is called an anomaly iff
- q dramatically differs from the others in the full feature space,
and a subspace S is called an outlying aspect of q iff
- the outlyingness of q in subspace S is higher than in other subspaces, and there is no other subspace with the same or higher outlyingness.
Outlying aspect mining algorithms require a scoring measure to compute the outlyingness of the query in a subspace and a search method to search for the most outlying subspace. In the rest of this section, we review the different scoring measures only. For the search part, we use the Beam [18] search method because it is the most recent search method and has been used in several studies [18][19][20][21][22][23]. The flowchart of the complete process is presented in Fig. 2.
Existing anomaly detection scoring measures
LOF
The core idea of density-based anomaly detection is that the density of an anomalous object is significantly different from that of normal instances. The first local density-based approach, LOF (Local Outlier Factor), was introduced by [24] and is a widely used local outlier detection approach. For any data object, the LOF score is the ratio of the average local density of its k-nearest neighbours to its own local density [25]. The LOF score of a data object q is defined as

LOF_k(q) = ( Σ_{x ∈ N_k(q)} lrd(x) / lrd(q) ) / |N_k(q)|,

where lrd(q) = |N_k(q)| / Σ_{x ∈ N_k(q)} max(dist_k(x, X), dist(q, x)) is the local reachability density of q, N_k(q) is the set of k-nearest neighbours of q, dist(q, x) is the distance between q and x, and dist_k(x, X) is the distance between x and its k-th nearest neighbour in X. The LOF score reflects the sparseness of the neighbourhood of the data object; data objects with higher LOF values are considered anomalies.
iForest
Liu et al.
[26] presented a framework called Isolation Forest, or iForest, which isolates each data point by axis-parallel partitioning of the attribute space. To the best of our knowledge, iForest is the first technique that uses an isolation mechanism to detect anomalies. iForest builds an ensemble of trees called isolation trees (iTrees). Each iTree is built using a randomly selected sub-sample drawn without replacement from the data set. At each node, a random split is performed on a randomly selected attribute value. The partitioning terminates once every node contains only one data object or the nodes reach the height limit of the iTree. The anomaly score for q ∈ R^d based on iForest is computed from the path lengths l_i(q) of q in the trees T_i: the shorter the average path length over the trees, the more anomalous q is.
Sp
Rather than searching for the k-nearest neighbours in the whole data set, [27] employs a scoring measure based on the nearest neighbour (k = 1) in a random sub-sample S ⊂ X. The Sp score of a data object q is defined as

Sp(q) = min_{x ∈ S} dist(q, x),

where dist(q, x) is the distance between q and x. In [27], the authors show that Sp performs better than the state-of-the-art anomaly detector LOF and runs faster than LOF.
iNNE
Bandaragoda et al. [28] proposed iNNE, which stands for isolation using Nearest Neighbor Ensemble. The core idea behind iNNE is that an anomaly lies far from its nearest neighbours, whereas the opposite is true for normal objects. The iNNE implementation is influenced by iForest and LOF. The critical difference between iNNE and iForest is that iForest builds trees from subspaces while iNNE builds hyperspheres using all dimensions. In each of the t sets (sub-samples), an isolation score of q is computed with respect to cnn(q), the hypersphere from that sub-sample used to score q; the anomaly score of q is then obtained by aggregating the isolation scores I_i(q) over the t sub-samples.
Outlying aspect mining algorithms
OAMiner
Duan et al. [29] introduced the Outlying Aspect Miner (OAMiner in short), which uses a Kernel Density Estimation (KDE) [30] based scoring measure to compute the outlyingness of query q in subspace S:

f_S(q) = 1 / (n (2π)^{m/2} Π_{i ∈ S} h_i) Σ_{x ∈ X} exp( − Σ_{i ∈ S} (q_i − x_i)² / (2 h_i²) ),

where f_S(q) is the kernel density estimate of q in subspace S, m is the dimensionality of subspace S (|S| = m), and h_i is the kernel bandwidth in dimension i. Duan et al. [29] observed that density is biased towards high-dimensional subspaces: density tends to decrease as the dimensionality increases. Thus, to remove the effect of dimensionality bias, they proposed using the query's density rank as the measure of outlyingness. To find the most outlying subspace of the query, the density of all data points must be computed in each subspace, and the subspace in which the query has the best rank is selected as the outlying aspect of the given query. OAMiner systematically enumerates all possible subspaces using the set enumeration tree approach [31], which is widely used by the data mining research community, and traverses the subspaces in a depth-first manner [32]. OAMiner uses an anti-monotonicity property to prune subspaces: given a data set O, a query object q, and a subspace S, if rank(f_S(q)) = 1, then no super-set of S can be a minimal outlying subspace, and the branch can be pruned.
Beam
Vinh et al.
Beam

Vinh et al. [18] capture the concept of dimensionality unbiasedness and investigate dimensionally unbiased scoring functions. Dimensionality unbiasedness is an essential property for outlyingness measures because the query object is compared across subspaces with different numbers of dimensions. They proposed two novel outlying scoring metrics, (1) the density Z-score and (2) the isolation path score (iPath in short), and showed that both are dimensionally unbiased. The density Z-score is defined as

Z(f_S(q)) = ( f_S(q) − µ_{f_S} ) / σ_{f_S},

where µ_{f_S} and σ_{f_S} are the mean and standard deviation of the density of all data instances in subspace S, respectively. The iPath score is motivated by the isolation Forest (iForest) anomaly detection approach [26]. The intuition behind iForest is that anomalies are few and susceptible to isolation. iForest constructs t trees, each built from a randomly selected sub-sample of size ψ (ψ ≪ n), which it then partitions with axis-parallel random splits. Since in the outlying aspect mining context the main focus is on the path length of the query, the authors ignore the other parts of the tree. The intuition behind the iPath score is that in the most outlying subspace a given query is easier to isolate than the rest of the data. The iPath of query q w.r.t. sub-samples of size ψ is the average path length

iPath_S(q) = (1/t) Σ_{i=1}^{t} l_i^S(q),

where l_i^S(q) is the path length of q in the i-th tree in subspace S. Vinh et al. [18] were the first to coin the term dimensionality unbiasedness.

Definition 2 (Dimensionality unbiased [18]) A dimensionality unbiased outlyingness measure (OM) is a measure whose baseline value, i.e., its average value over any data sample O = {o_1, o_2, ..., o_n} drawn from a uniform distribution, is independent of the dimension of the subspace S, i.e., avg_{o ∈ O} OM(o, S) = const. w.r.t. S.

In [18, Theorem 3] it is proven that rank transformation and Z-score normalization result in a constant average value for any data distribution. It is also worth noting that for the Z-score scoring function not only the average but also the variance of the normalized measure is constant with respect to the dimension.

The overall beam search process is divided into three stages. In the first stage, all 1-D subspaces are inspected to identify trivial outlying features. In the second stage, an exhaustive search is performed over all possible 2-dimensional subspaces. In the third stage, beam search is applied level by level: at each level l the algorithm keeps only the top W subspaces (W is called the beam width). The total number of subspaces considered by the beam algorithm is of the order O(d² + W · d_max), where d_max is the maximum subspace dimension and W is the beam width. A sketch of this three-stage procedure is given below.
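The staged search just described can be sketched as follows. Here score is any subspace outlyingness measure (smaller meaning more outlying, as for ranks or Z-scores), and the function name, beam width default and depth handling are illustrative assumptions rather than the authors' implementation.

```python
from itertools import combinations

def beam_search(score, d, d_max=3, W=100):
    """Staged beam search over feature subspaces (illustrative sketch).
    score(subspace) returns the query's outlyingness in that subspace."""
    best = min(([i] for i in range(d)), key=score)                 # stage 1: all 1-D subspaces
    pairs = sorted((list(c) for c in combinations(range(d), 2)), key=score)
    if pairs:                                                      # stage 2: all 2-D subspaces
        best = min([best, pairs[0]], key=score)
    beam = pairs[:W]                                               # keep the top-W candidates
    for _ in range(3, d_max + 1):                                  # stage 3: beam expansion
        cand = {tuple(sorted(s + [f])) for s in beam for f in range(d) if f not in s}
        cand = sorted((list(c) for c in cand), key=score)
        if cand and score(cand[0]) < score(best):
            best = cand[0]
        beam = cand[:W]
    return best

# usage (hypothetical): beam_search(lambda S: density_rank(X, q, S), d=X.shape[1])
```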
sGrid

Wells and Ting [23] introduced a simple grid-based density estimator called sGrid, a smoothed variant of a grid-based density estimator [30]. Let O be a collection of n data objects in D-dimensional space and x.S be the projection of a data object x ∈ O onto subspace S. The sGrid density of a point q is computed from the points that fall into the bin covering q and its surrounding neighbouring bins. Their work showed that this density estimator has advantages over the kernel density estimator in outlying aspect mining: by replacing KDE with the sGrid density estimator, OAMiner [29] and Beam [18] run two orders of magnitude faster than their original implementations. However, sGrid is not a dimensionally unbiased measure and therefore requires Z-score normalization, which again makes it computationally less efficient.

SiNNE

Very recently, [21] proposed the Simple Isolation score using Nearest Neighbour Ensemble (SiNNE in short), a measure derived from the Isolation using Nearest Neighbour Ensembles (iNNE in short) method for anomaly detection [28]. SiNNE constructs an ensemble of t models. Each model is built on a random sub-sample D_i of size ψ and consists of ψ hyperspheres, where the radius of the hypersphere centred at a ∈ D_i is the Euclidean distance from a to its nearest neighbour in D_i. The outlying score of q in model M_i is I(q | M_i) = 0 if q falls inside any of the hyperspheres and 1 otherwise. The final outlying score of q using t models is

SiNNE_S(q) = (1/t) Σ_{i=1}^{t} I(q | M_i).

In their work, the authors argue that Z-score normalization is biased towards subspaces with high density variance and that the definition of dimensionality unbiasedness needs to be revised. Furthermore, SiNNE is computationally faster than density- and distance-based measures.

Datasets

In this study, we used 16 publicly available benchmark medical datasets for anomaly detection: BreastW and Pima are from [33]; Annthyroid, Cardiotocography, Heart disease, Hepatitis, WDBC and WPBC are from [34]; and Arrhythmia, Lympho, Mammography, Musk, Thyroid, Vertebral, WBC, and Yeast are from [35]. The summary of each data set is provided in Table 1.

Algorithm implementation and parameters

We use the PyOD [36] Python library to implement the anomaly detection algorithms. For the OAM algorithms, we used the Java implementations of sGrid and SiNNE made available by the authors of [23] and [21], respectively, and implemented RBeam and Beam in Java using WEKA [37]. We used the default parameters of each algorithm as suggested in the respective papers unless specified otherwise.

Evaluation measure

We used the area under the ROC curve (AUC) [39] and precision at n (P@n) [40] to measure the effectiveness of the anomaly ranking produced by an anomaly detector; a high AUC indicates better detection accuracy, whereas a low AUC indicates poor detection accuracy. Samariya and Ma [20] proposed a kernel mean embedding-based evaluation measure for the outlying aspect mining domain. The intuition behind this measure is that in the most outlying aspects a query q is far from the distribution of the data in those aspects.

Definition 3 The quality of a discovered aspect (or subspace) S for a query q is computed as the average kernel similarity between q and the data in S,

quality(q, S) = (1/n) Σ_{x ∈ X} K_S(q, x),    (1)

where K_S(q, x) is a kernel similarity between q and x in subspace S; the farther q lies from the data distribution in S, the smaller this average similarity.

All experiments were conducted on a machine with an Intel 8-core i9 CPU and 16 GB main memory, running macOS Big Sur version 11.1. Each job was run on a single CPU thread, with jobs parallelized using GNU parallel [42].

Empirical evaluation

In this section, we present the results of the four anomaly detection methods (LOF, iForest, Sp, and iNNE) and the four outlying scoring measures used with Beam search (Kernel Density Rank, RBeam; Density Z-score, Beam; sGrid Z-score, sGBeam; and SiNNE, SiBeam) on the medical datasets. All experiments were run for 1 h; unfinished tasks were killed and are presented as '‡'.
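Before turning to the individual experiments, the following is a hedged sketch of how the anomaly-detection side of the pipeline above could be reproduced: PyOD's LOF and IForest and scikit-learn's roc_auc_score are existing APIs, while the wrapper functions, parameter values and query-selection helper are illustrative; Sp and iNNE are not included and would require the authors' implementations.

```python
import numpy as np
from pyod.models.iforest import IForest
from pyod.models.lof import LOF
from sklearn.metrics import roc_auc_score

def evaluate_detectors(X, y):
    """Fit each detector on X (unsupervised) and compute the AUC of its
    anomaly ranking against ground-truth labels y (1 = anomaly)."""
    detectors = {"LOF": LOF(n_neighbors=20), "iForest": IForest(n_estimators=100)}
    aucs = {}
    for name, clf in detectors.items():
        clf.fit(X)
        aucs[name] = roc_auc_score(y, clf.decision_scores_)   # higher score = more anomalous
    return aucs

def iforest_queries(X, k=10):
    """Indices of the top-k most anomalous points according to iForest,
    used as queries in the outlying aspect mining step (Experiment-2)."""
    scores = IForest(n_estimators=100).fit(X).decision_scores_
    return np.argsort(-scores)[:k]
```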
Experiment-1: Performance of anomaly detection algorithms

In this sub-section, we present the results of the four anomaly detection techniques LOF, iForest, Sp, and iNNE in terms of AUC. The AUC comparison is presented in Table 2 (cf. columns 2 to 5 of Table 2). It is interesting to note that no single anomaly detection algorithm performs best on every dataset. However, iForest is the best-performing measure, achieving the best AUC on 10 datasets. The last row of Table 2 shows the average AUC of each method: iForest produced the best average AUC, Sp a significantly lower one, while LOF and iNNE produce comparable results. The total runtime, which includes pre-processing, model building, ranking the n instances, and computing the AUC, is also presented in Table 2 (cf. columns 6 to 9 of Table 2). Overall, Sp is the fastest measure, while iForest and iNNE take almost the same time.

Experiment-2: Performance of outlying aspect mining algorithms

For each data set, we first use the iForest anomaly detection method to detect the top k = 10 anomalies, which are then used as queries. Each scoring measure identifies outlying aspects for each query, and we assess the quality of the discovered subspace using Eq. 1. Tables 3, 4, 5 and 6 show the subspaces found by the four scoring measures and the quality of the discovered subspaces on the 16 real-world medical datasets. RBeam and Beam cannot finish on annthyroid and musk within an hour; these cases are presented as '‡'.

Out of 160 queries, SiBeam detects a better subspace for 116 queries, whereas sGBeam detects a better subspace for only 23 queries. RBeam detects better subspaces for 40 out of 140 queries and Beam only for 6. Overall, SiBeam is the best-performing measure; RBeam is a slow measure, but it still performs better than the Z-score-based measures. As mentioned in [20,21], Z-score-based measures are biased towards subspaces with high variance, which is why both Z-score-based measures perform worst in this comparison.

Next, we visually present the subspaces discovered by the different scoring measures for three queries from each data set. Note that each one-dimensional subspace is plotted using a histogram with 10 equal-width bins.

Table 8 Visualization of discovered subspaces by RBeam, Beam, sGBeam, and SiBeam in the arrhythmia data set

Comparing the discovered subspaces visually, out of 48 queries (3 from each data set), SiBeam and sGBeam detect better subspaces for 39 and 18 queries, respectively, whereas RBeam and Beam detect better subspaces for 29 and 11 out of 42 queries. Overall, we can say that visually SiBeam performs best or is comparable to RBeam, Beam, and sGBeam.

Conclusion

This paper shows an interesting application of OAM in the healthcare domain. We first introduced four anomaly detection and four outlying aspect mining algorithms, and then presented a framework that not only detects anomalies but also explains why a given query is an anomaly, by providing the set of features in which it is most outlying compared to the others. Our evaluation on 16 medical datasets shows that iForest is the best-performing anomaly detection measure. Furthermore, our experiment on the task of anomaly explanation (outlying aspect mining) shows that the recently developed isolation-based outlying scoring measure SiNNE outperforms the other state-of-the-art outlying aspect mining scoring measures. In the medical domain it is essential to have a fast algorithm; thus, kernel density and Z-score-based scoring measures are not suitable when the data set is huge.
Fig. 1 Outlying aspects of Patient A on different features. The square point represents Patient A
Table 7 Visualization of discovered subspaces by RBeam, Beam, sGBeam, and SiBeam in the annthyroid data set
Table 9 Visualization of discovered subspaces by RBeam, Beam, sGBeam, and SiBeam in the breastw data set
Table 10 Visualization of discovered subspaces by RBeam, Beam, sGBeam, and SiBeam in the cardiotocography data set
Table 11 Visualization of discovered subspaces by RBeam, Beam, sGBeam, and SiBeam in the diabetes data set
Table 13 Visualization of discovered subspaces by RBeam, Beam, sGBeam, and SiBeam in the hepatitis data set
Table 14 Visualization of discovered subspaces by RBeam, Beam, sGBeam, and SiBeam in the lympho data set
Table 15 Visualization of discovered subspaces by RBeam, Beam, sGBeam, and SiBeam in the mammography data set
Table 17 Visualization of discovered subspaces by RBeam, Beam, sGBeam, and SiBeam in the pima data set
Table 18 Visualization of discovered subspaces by RBeam, Beam, sGBeam, and SiBeam in the thyroid data set
Table 19 Visualization of discovered subspaces by RBeam, Beam, sGBeam, and SiBeam in the vertebral data set
Table 20 Visualization of discovered subspaces by RBeam, Beam, sGBeam, and SiBeam in the wbc data set
4,867.8
2023-04-06T00:00:00.000
[ "Computer Science" ]
Correlation between sperm characteristics and testosterone in bovine seminal plasma by direct radioimmunoassay

The objectives of this study were to validate a non-extractive RIA for seminal testosterone, to quantify the hormone using a solid-phase commercial kit, and to study the correlation between testosterone in seminal plasma and sperm characteristics. Parallelism showed a correlation index r = 0.992 (Y = -5.47 + 1.073X; R² = 0.985), indicating that the non-extractive method presented is indicated particularly for the assessment of testosterone when establishing comparisons between samples. The overall mean (±SD) testosterone level was 0.60±0.65 ng/mL. A correlation was only found between the seminal concentration of testosterone and the pH of the semen.

Introduction

Testosterone is the major circulating androgen in males, essential for normal spermatogenesis and the expression of secondary sexual characteristics. It is produced by Leydig cells, almost exclusively of testicular origin (Aümuller & Seitz, 1990). Sertoli cells are the main target for testosterone and secrete androgen binding protein (ABP), which binds to the hormone present in the seminal tubule fluid. This event inhibits absorption by cells in the wall of the ductules and ensures the arrival of sufficient testosterone in the epididymis (Ruckebusch et al., 1991; Laudat et al., 1998). Its concentration in seminal plasma correlates with sperm concentration, percentage of motile spermatozoa, and other sperm characteristics (Laudat et al., 1998). Post & Christensen (1976) and Andersson (1992) observed a positive correlation between male serum testosterone and cow pregnancy rates when evaluating the fertility of crossbred bulls. Thus, testosterone is an important variable in studies of male fertility, and its determination has been the object of research on extraction and chromatography procedures (Schanbacher & D'Occhio, 1982).

Testosterone has been quantified in several kinds of samples from many animal species by radioimmunoassay (RIA). Hormone levels were determined by RIA after an ether extraction procedure in serum and testicular interstitial fluid from rats (Sun et al., 1989). Incubated media from testicular tissue of hamsters were assayed with a solid-phase RIA kit (Oh et al., 1995). Blood and seminal plasma testosterone of bulls were assessed by enzymatic immunoassay (EIA) after extraction with tertiary butylmethylether (Sauerwein et al., 1992; 2000). Direct RIA has been applied to hypothalamic studies in bull, ram, and rat, and because there are no extraction, chromatography or transfer procedures, the recovery of serum testosterone is 100% (Schanbacher & D'Occhio, 1982). Therefore, the objectives of this study were to determine the testosterone concentration in the seminal plasma of mature Simmental bulls, validating a non-extractive RIA technique that employs a solid-phase commercial kit, and to observe the correlation between seminal testosterone and sperm cell characteristics.

Material and Methods

The experimental procedures adopted in this experiment are in agreement with the Principles of Ethics in Animal Research adopted by the Commission of Bioethics, School of Veterinary Medicine and Animal Science, University of São Paulo (FMVZ/USP, protocol # 403/2003). Local weather conditions were monitored 60 days before and throughout the entire semen collection period.
Data on air temperature (maximum, minimum and average), relative humidity and solar radiation were obtained every 10 minutes from an on-site meteorological station.

Seven weekly semen collections were obtained from 8 purebred Simmental bulls (33 to 41 months of age) by electroejaculation (Eletrogen©, Santa Lydia, Presidente Prudente, Brasil). Each ejaculate was divided into two sub-samples. The first sub-sample was used to assess pH, through an electronic probe, and sperm characteristics immediately after semen collection. This sub-sample was kept in warmed tubes, 8 μL of semen were placed on pre-warmed slides, and motility (%), vigor (1-5), and mass movement (1-5) were analyzed at 100× by phase contrast microscopy (Axiostar, Carl Zeiss®, Germany). Sperm morphology analysis was performed by the humid chamber technique. Another aliquot of the sub-sample was diluted and fixed in pre-warmed (37 °C) formaldehyde buffered saline. After loading with one drop of diluted semen, the Neubauer chamber was kept for 15 minutes to allow sedimentation of the cells before assessment. Evaluation was performed by counting 200 cells at 1,000× by differential interference microscopy (model 80i, Nikon, Japan). The second sub-sample was kept on ice for approximately 20 minutes before processing. Seminal plasma was obtained by two-step centrifugation (500 g/15 minutes and 6,000 g/15 minutes at 4 °C) and stored in cryogenic tubes at -20 °C until the assays could be performed.

Determination of testosterone levels in seminal plasma was performed by RIA according to the manufacturer of the commercial kit ACTIVE™ DSL 4000 RIA (DSL, Webster, Texas, USA). After thawing, 50 μL of each sample were transferred to polypropylene test tubes and 500 μL of 125I-labelled testosterone (T[125I]) were added, followed by 60 minutes of incubation at 37 °C. Tubes were then decanted, inverted on paper towels for 5 minutes, and counted for one minute on the gamma counter. Standards and controls followed the same procedure.

Throughout the study, results are presented as mean values and variation is expressed as standard deviation (±SD). Descriptive statistics, ANOVA, and Pearson correlations between testosterone levels and seminal characteristics (mass movement, vigor, motility, sperm abnormalities and pH) were obtained using the Statistical Analysis System (SAS, 1995).

Results and Discussion

During the overall experimental period with the animals, including the 60 days before semen collections, the average recorded temperatures were 17.6, 22.8, and 29.4 °C for minimum, average and maximum, respectively. Mean humidity was 73.4% and mean solar radiation was 384 W/m². Observation of the climatic data provided valuable information, because it could be verified that the animals were constantly exposed to mean temperatures above 22.0 °C, and average maximum temperatures were above 30 °C at some moments. Relative humidity and solar radiation also indicated that the local weather conditions were typical of a tropical climate. Some extreme values for the four parameters were observed around noon on a few days. On the other hand, it must be taken into account that the animals were constantly exposed to such climatic conditions, since they remained in the pasture with no shelters to provide shade. This situation is likely to impair steroidogenesis and spermatogenesis, influencing hormones and cells in the semen samples.
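The statistical analysis above was carried out in SAS; as a loose illustration only, equivalent descriptive statistics and Pearson correlations could be computed in Python as sketched below, where the file name and column names are hypothetical.

```python
import pandas as pd
from scipy import stats

# hypothetical layout: one row per ejaculate with the measured traits
df = pd.read_csv("seminal_data.csv")   # assumed file; columns below are illustrative
traits = ["mass_movement", "vigor", "motility", "sperm_abnormalities", "pH"]

print(df["testosterone"].agg(["mean", "std"]))            # overall mean ± SD

for trait in traits:
    r, p = stats.pearsonr(df["testosterone"], df[trait])  # Pearson correlation and p-value
    print(f"testosterone vs {trait}: r = {r:.2f}, P = {p:.3f}")
```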
Testosterone was detectable even in low concentrations (0.02 and 25.0 ng/mL minimum and maximum sensitivity, respectively) (Table 1).Maximum intra and inter-assay coefficient of variation values were lower than 11.57 and 4.51%, respectively, indicating that results were precise and consistent within and among assays.The correlation index observed in the parallelism verification was r = 0.992 (Y = -5.47+1.073X;R 2 = 0.985). The variation in the percentage of bounds of the standard curve and the serial dilutions were parallel, showing parallel dose-response results (Figure 1).The position observed for both standard and serially diluted samples means that the seminal serially diluted samples under investigation resulted in parallel inhibition of the labeled antibody, binding to the antigen to be quantified (testosterone) in a similar pattern to that inhibition observed for the standard samples.No major cross reactions were detected.Schanbacher & D'Occhio (1982) also reported successful validation of direct RIA for testosterone in serum of five mammalian species, including bulls. Concentration of testosterone in seminal plasma varied among bulls (Figure 2) and consecutive ejaculations (Figure 3).Minimum and maximum values observed for a single ejaculation along the entire collection period were 0.05 and 2.96 ng/mL, respectively.Overall mean hormone level for multiple ejaculations was 0.60±0.65 ng/mL.Data reported by several authors show some different values in comparison with the present results.Sauerwein et al. (2000) verified that Simmental bulls had from 1.9 to 3.8 ng/mL of testosterone in the seminal plasma.Testicular tissue was found to have from 0.62 to 2.44 ng/mg of testosterone (Barth, 1993).Santos et al. (2004) verified that serum concentrations of testosterone in Nelore bulls ranged from 0.80 to 16.7 ng/mL, averaging 4.04 ng/mL.They observed an effect of the moment of sampling (P<0.01) with a peak of secretion at 9:00 h.Borg et al. (1991) found no significant alterations in testosterone levels associated with sexual behavior, although values were different for Angus bulls with different reproductive behavior performance.Bulls achieving at least 8 mounts and finishing a service showed higher serum concentrations than bulls with less than 8 mounts and non-servicing (3.8±0.1 and 3.9±0.1 ng/mL comparing to 3.7±0.1 and 3.3±0.2ng/mL, respectively).Borg et al. (1991) also verified that serum levels of testosterone changed before (4.0±0.5 ng/mL), during (3.6±0.7 ng/mL) and after (3.6±0.6 ng/mL) a 30-minute exposure to an estrous female, although with no significant differences.In the present study, bulls were not exposed to females and were randomly assigned to a constant sequence of semen collection from 8:00 to 16:00 hours.The two variables studied by Borg et al. 
(1991) were not the subject of this research, but they could help to explain the differences observed between the present results and data from the literature. It was expected that seminal testosterone would show higher values, based on previous reports and on the site of testosterone production. One possible cause of the differences in the results is the interference of seminal proteins: if not removed, the quantity of hormone available for the antigen-antibody reaction can be reduced. Individual differences, breed and age are also possible factors causing divergences from previous studies.

Seminal characteristics showed low correlation with seminal testosterone (Table 2). Mass movement and vigor were found to have poor correlation with the androgen in the seminal plasma (P>0.05). Similarly, motility was not correlated with seminal testosterone (P>0.05). Santos et al. (2004) also verified low correlation values (0.32, 0.04 and 0.25 between testosterone and mass movement, vigor, and motility, respectively). It is well known that the hormone is required by accessory sex glands such as the prostate. Since their secretions influence seminal composition, it would be reasonable to suppose a relation between its concentration and spermatozoa activity. Total sperm abnormalities did not correlate with seminal testosterone (P>0.05). A positive correlation between seminal testosterone and total sperm abnormalities was expected, since the role of testosterone in the spermatogenic process is well established. However, Santos et al. (2004) observed sperm defects having a negative correlation (r = -0.38, P<0.05) with serum levels of testosterone. It is possible that most of the testosterone secreted into the seminiferous tubules and local circulation was metabolized at different rates by target cells at different moments.

Seminal testosterone was shown to have a low negative correlation with pH (P<0.01). The current understanding is that cell activity alters seminal pH: as hormone-induced metabolism rises, the presence of acidic substances in the medium increases, reducing the pH (Mann & Lutwak-Mann, 1981; Matsuoka et al., 2006). That would explain the inverse correlation between testosterone and seminal pH.

Type of sample, breed, environmental and experimental conditions are well-known factors which could cause the differences between some of the results of the present report and the current literature. Thus, the authors are conservative, suggesting that RIA without an extraction process should be studied further when the objective is to determine the total testosterone concentration in quantitative studies. There is still a lack of evidence on the correlation between seminal testosterone and sperm morphology and motility, since the literature is not unanimous on this issue.

Conclusions

The employment of a solid-phase RIA kit is adequate to assess testosterone in bull seminal plasma, particularly in comparative studies. The mean seminal testosterone observed in Simmental bulls was 0.60±0.65 ng/mL, with values varying from 0.32±0.29 to 0.75±0.67 ng/mL. No correlation was observed between seminal testosterone and sperm characteristics.

Figure 1 - Parallelism test between curves performed with T and serial dilutions containing 100, 50, 25, 12.5, and 6.25% of volume of control samples of bovine seminal plasma diluted in PBS BSA 1%.
Table 1 - Quality control for RIA of T in bovine seminal plasma
Figure 3 - Variation in testosterone levels (ng/mL) in seminal plasma of 3 bulls over seven weekly collections. *Score values from 0 to 5.
Table 2 - Mean seminal characteristics and respective coefficient of correlation with T concentration in seminal plasma
2,657.2
2011-12-01T00:00:00.000
[ "Agricultural and Food Sciences", "Biology" ]
$tt^*$ equations, localization and exact chiral rings in 4d N=2 SCFTs We compute exact 2- and 3-point functions of chiral primaries in four-dimensional N=2 superconformal field theories, including all perturbative and instanton contributions. We demonstrate that these correlation functions are nontrivial and satisfy exact differential equations with respect to the coupling constants. These equations are the analogue of the $tt^*$ equations in two dimensions. In the SU(2) N=2 SYM theory coupled to 4 hypermultiplets they take the form of a semi-infinite Toda chain. We provide the complete solution of this chain using input from supersymmetric localization. To test our results we calculate the same correlation functions independently using Feynman diagrams up to 2-loops and we find perfect agreement up to the relevant order. As a spin-off, we perform a 2-loop check of the recent proposal of arXiv:1405.7271 that the logarithm of the sphere partition function in N=2 SCFTs determines the K\"ahler potential of the Zamolodchikov metric on the conformal manifold. We also present the $tt^*$ equations in general SU(N) N=2 superconformal QCD theories and comment on their structure and implications. 1 Introduction In this paper we are interested in four-dimensional theories with N = 2 superconformal invariance. There are many well known examples of N = 2 quantum field theories (with or without a known Lagrangian description) that exhibit manifolds of superconformal fixed points (specific examples will be discussed in the main text). Although particular neighborhoods of these manifolds can sometimes be described by a conventional weakly coupled Lagrangian, the generic fixed point is a superconformal field theory (SCFT) at finite or strong coupling. It is of considerable interest to determine how the physical properties of these theories vary as we change the continuous parameters (moduli) that parametrize these manifolds 1 . A well-studied maximally supersymmetric example with a (complex) one-dimensional conformal manifold is N = 4 super-Yang-Mills (SYM) theory. Large classes of examples are also known in theories with minimal (N = 1) supersymmetry (see e.g. [1]). Four-dimensional superconformal field theories with N = 2 supersymmetry are particularly interesting because they are less trivial than the N = 4 theories, but are considerably more tractable compared to the N = 1 theories. A particularly interesting subsector of N = 2 dynamics is controlled by chiral primary operators. These are special operators in short multiplets annihilated by all supercharges of one chirality. They form a chiral ring structure under the operator product expansion (OPE). The exact dependence of this structure on the marginal coupling constants is currently a largely open interesting problem. In two spacetime dimensions the application of the 'topological anti-topological fusion' method gives rise to a set of differential equations, called tt * equations, which were employed successfully in the past [2,3] to determine the coupling constant dependence of correlation functions in the N = (2, 2) chiral ring. An analogous set of tt * equations in four-dimensional N = 2 theories was formulated using superconformal Ward identities in [4]. 2 In four dimensions, however, it is less clear how to solve these differential equations without further input. 
More recently, a different line of developments has led to the proposal that the exact quantum Kähler potential on the N = 2 superconformal manifold is given by the S 4 partition function of the theory [7]. The latter can be determined non-perturbatively with the use of localization techniques [8]. As a result, it is now possible to compute exactly the Zamolodchikov metric on the manifold of superconformal deformations of N = 2 theories via second derivatives of the S 4 partition function. Equivalently, the two-point function of scaling dimension 2 chiral primaries is expressed in terms of second derivatives of the S 4 partition function. We review the relevant statements in section 2. In the present work we take a further step and argue that, when combined with the tt * 1 The moduli of the conformal manifold in this paper should be distinguished from the moduli space of vacua, e.g. Coulomb or Higgs branch moduli, of a given conformal field theory. 2 In a different direction, tt * geometry techniques have also been applied to higher dimensional quantum field theories more recently in [5,6]. equations of [4], the exact Zamolodchikov metric is a very useful datum that leads to exact information about more general properties of the chiral ring structure of N = 2 SCFTs. Specifically, it provides useful input towards an exact solution of the tt * equations, which encodes the non-perturbative dependence of 2-and 3-point functions of chiral primary operators on the marginal couplings of the SCFT. In this solution, correlation functions of chiral primaries with scaling dimension greater than two are expressed in terms of more than two derivatives of the S 4 partition function. A review of the relevant concepts with the precise form of the tt * equations is presented in section 3. Such results can have wider implications. In subsection 3.5 we demonstrate that a solution of the 2-and 3-point functions in the N = 2 chiral ring has immediate implications for a larger class of n-point 'extremal' correlation functions. Moreover, it is not unreasonable to expect that 2-and 3-point functions in the chiral ring may eventually provide useful input towards a more general solution of the theory using conformal bootstrap techniques. In section 4 we demonstrate the power of these observations in an interesting well-known class of theories: N = 2 superconformal QCD defined as N = 2 SYM theory with gauge group SU (N ) coupled to 2N fundamental hypermultiplets. This theory has a (complex) one-dimensional manifold of exactly marginal deformations parametrized by the complexified gauge coupling constant τ = θ 2π + 4πi . For the SU (2) theory, which has a single chiral ring generator, we demonstrate that the tt * equations take the form of a semi-infinite Toda chain 3 . Solving this chain in terms of the SU (2) S 4 partition function provides the exact 2-and 3point functions of the entire chiral ring. Unlike the N = 4 SYM case, where these correlation functions are known not to be renormalized [9][10][11][12][13][14][15][16][17][18], in N = 2 theories they turn out to have very nontrivial, and at the same time exactly computable, coupling constant dependence that we determine. In section 4 we also comment on the transformation properties of these results under SL(2, Z) duality. In the more general SU (N ) case, the presence of additional chiral ring generators makes the structure of the tt * equations considerably more complicated. 
A recursive use of the tt * equations is now less powerful and appears to require information beyond the Zamolodchikov metric (e.g. information about the exact 2-point functions of the additional chiral ring generators) which is not currently available. We present the SU (N ) tt * equations and provide preliminary observations about their structure. Independent evidence for these statements is provided in section 5 with a series of computations in perturbation theory up to two loops. Already at tree-level, agreement with the predicted results is a non-trivial exercise, where the generic correlation function comes from a straightforward, but typically involved, sum over all possible Wick contractions. We find evidence that there are compact expressions for general classes of tree-level correlation functions in the SU (N ) theory. The next-to-leading order contribution arises at two loops. We provide an explicit 2-loop check for the general correlation function in the SU (2) N = 2 superconformal QCD theory. As a by-product of this analysis we present a 2-loop check of a recently proposed relation [7] between the quantum Kähler potential on the superconformal manifold and the S 4 partition function. Some of the wider implications of the tt * equations and interesting open problems are discussed in section 6. Useful facts, conventions and more detailed proofs of several statements are collected for the benefit of the reader in four appendices at the end of the paper. A companion note [19] contains a consice presentation of some of the main results of this work with emphasis on the SU (2) N = 2 superconformal QCD theory. The R-symmetry of 4d N = 2 SCFTs is SU (2) R × U (1) R . We concentrate on (scalar) chiral primary operators defined as superconformal primary operators annihilated by all supercharges of one chirality. These operators belong to short multiplets of type "E R 2 (0,0) " in the notation of [20] 4 . As was shown there, these must be singlets of the SU (2) R and must have nonzero charge R under U (1) R . We work in conventions 5 where the unitarity bound is Superconformal primaries saturating the bound ∆ = R 2 are annihilated by all right-chiral supercharges Q iα . We call them chiral primaries and denote them by φ I . Their conjugate, which obey ∆ = − R 2 , are annihilated by Q i α . We call them anti-chiral primaries and denote them as φ I . We write the 2-point functions of chiral primaries as By the symbol g J I we denote the inverse matrix i.e. g IJ g JK = δ K I . It is well known that the OPE of chiral primaries is non-singular where φ K is also chiral primary and C K IJ are the chiral ring OPE coefficients [22]. We also define the 3-point function of chiral primaries For an interesting recent discussion of other higher-spin chiral primary operators see [21]. 5 In these conventions the supercharges Q i α have U (1)R charge equal to −1 and Q iα have +1. The α,α are Lorentz spinor indices, while the i is an SU (2)R index. and we have the obvious relation between OPE and 3-point coefficients So far we have defined the chiral ring for one particular N = 2 SCFT. In general, such SCFTs may have exactly marginal coupling constants. In that case the elements of the chiral ring (i.e. the corresponding 2-and 3-point functions) will become functions of the coupling constants. The goal of our paper is to analyze this (typically non-trivial) coupling-constant dependence of the chiral ring. Marginal deformations We are interested in N = 2 SCFTs with exactly marginal deformations. 
We parametrize the space of marginal deformations (conformal manifold), called M from now on, by complex coordinates λ i , λ i . Under an infinitesimal marginal deformation the action changes by It can be shown that the marginal deformation preserves N = 2 superconformal invariance, if and only if the marginal operators are descendants of (anti)-chiral primaries with ∆ = 2 and R = ±4, more specifically where φ i is chiral primary of charge R = 4. The notation O i = Q 4 · φ i means that O i can be written as the nested (anti)-commutator of the four supercharges of left chirality. Their Lorentz and SU (2) R indices of the supercharges are combined to give a Lorentz and SU (2) R singlet. The overall normalization of factors of 2 etc. is fixed so that equation (2.10) holds. Notice that since the Q's have U (1) R charge equal to −1 the marginal operators are U (1) R neutral, as they should. From now on in this section and the next we use lowercase indices i, j, ... to indicate chiral primaries of R-charge equal to ±4. These are special since, via (2.7), they correspond to marginal deformations. We use uppercase indices I, J, .. to denote general chiral primaries of any R-charge. The Zamolodchikov metric is defined by the 2-point function 6 The conformal manifold M equipped with this metric is a complex Kähler manifold (possibly with singularities). The corresponding "metric" for the chiral primaries is We define the normalization of (2.7) in such a way that (2.10) The exact Zamolodchikov metric from supersymmetric localization In [7] it was shown that the partition function of an N = 2 theory on the four-sphere S 4 , regulated in a scheme that preserves the massive supersymmetry algebra OSp(2|4), computes the Kähler potential for the Zamolodchikov metric. The result is Combining this result with (2.10) we conclude that The partition function Z S 4 can be computed exactly for a certain class of N = 2 SCFTs, using supersymmetric localization [8]. Via (2.13) this immediately provides the 2-point functions of chiral primaries with scaling dimension ∆ = 2. Our strategy will be to use these 2-point functions and the tt * equations that we derive in the following section to compute the 2-point functions of chiral primaries of higher R-charge. In turn, this will allow us to compute the exact, non-perturbative 3-point functions of chiral primaries over the conformal manifold. 3 tt * equations in four-dimensional N = 2 SCFTs In this section we review the analogue of the tt * equations for 4d N = 2 SCFTs, which were derived in [4]. We omit proofs, which can be found there. tt * equations and the connection on the bundles of chiral primaries We parametrize the conformal manifold M by complex coordinates λ i , λ i . In general, the chiral primary 2-and 3-point functions are non-trivial functions of the coupling constants. In order to discuss the coupling constant dependence of correlators we have to address issues related to operator mixing. This mixing is an intrinsic property of the theory, similar to the (in general, non-abelian) Berry phase, which appears in perturbation theory in Quantum Mechanics 8 . The operator mixing in conformal perturbation theory has been discussed in several earlier works, here we mention those that are most relevant for our approach [4,[23][24][25][26][27][28]. In order to describe the operator mixing, it is useful to think of local operators as being associated to vector bundles over the conformal manifold. 
These bundles are equipped with a natural connection that we denote by (∇ µ ) L K = δ L K ∂ µ + (A µ ) L K . This connection encodes the mixing of operators with the same quantum numbers under conformal perturbation theory. The curvature of this connection can be defined in terms of an integrated 4-point function in conformal perturbation theory, by the expression The index L is raised with the inverse of the matrix of 2-point functions. The reason that the RHS is not identically zero, despite the antisymmetrization in the indices µ, ν, is that the integral on the RHS has to be regularized to remove divergences from coincident points. The need for regularization is one way to understand why we end up with nontrivial operator mixing. A very thorough explanation of the regularization procedure needed to do the double integral is given in [27] 9 . In the case of N = 2 SCFTs, and when considering operators in the chiral ring, this double integral can be dramatically simplified, given that the marginal operators are descendants of chiral primaries of the form O i = Q 4 · φ i and similarly for the antiholomorphic deformations. As was shown in [4], we can use the superconformal Ward identities to move the supercharges from one insertion to the other, and using the SUSY algebra {Q i α , Q jβ } = 2P αβ δ ij repeatedly, we get derivatives inside the integral. Then, by integrations by parts the integral simplifies drastically, and only picks up contributions which are determined by chiral ring 2-and 3-point functions and the CFT central charge c. The interested reader should consult [4] for details. The final result is that in N = 2 SCFTs the curvature of bundles of chiral primaries is given by The equations on the first line express the fact that the bundles of chiral primaries are (at least locally 10 ) holomorphic vector bundles over the conformal manifold. In the second line, R is the U (1) R charge of the bundle, c the central charge of the CFT and g ij is the 2-point function of chiral primaries of ∆ = 2, whose descendants are the marginal operators (2.7). These equations are the analogue of the tt * equations derived in [2] for the Berry phase of the Ramond ground states and the chiral ring of N = (2, 2) theories in two dimensions. 9 In [27] only 2d CFTs are discussed but several of their statements can be generalized to 4d conformal perturbation theory. 10 From now on, whenever we say 'holomorphic bundle', 'holomorphic section', 'holomorphic function' these terms should be understood in the sense of 'locally holomorphic', since the equations we derived are local and we have not analyzed global issues. There may be obstructions in extending the holomorphic dependence globally. Moreover, it can be shown [4] that the OPE coefficients of chiral primaries are covariantly holomorphic ∇ j C I JK = 0 (3.3) and that OPE coefficients obey the analogue of the WDVV equations [29][30][31] which have the form Here, and according to our notation, the indices i, j run over the marginal deformations, while K, L, can be any chiral primary. Finally, the supercharges and supercurrents are associated to a holomorphic line bundle L over the conformal manifold, whose curvature is given by 11 (3.5) The bundle L encodes the ambiguity of redefining the phases of the supercharges as Q i α → e iθ Q i α and Q iα → e −iθ Q iα (the superconformal generators transform as S → e −iθ S andS → e iθS , while the bosonic generators remain invariant). 
It is clear that this transformation is an automorphism of the N = 2 superconformal algebra. The equations (3.5) are saying that in the natural connection defined by conformal perturbation theory, the choice of this phase varies as we move on the conformal manifold. As we see from (3.5) the curvature of the corresponding bundle L is proportional to the Kähler form of the Zamolodchikov metric. The statements above are covariant in the sense that they hold independent of how we select the normalization/basis of chiral primaries as a function of the coupling constants. However, it is more practical to select a particular scheme, where we will see that the equations above reduce to standard partial differential equations for the 2-and 3-point functions, without any reference to the connection A on the bundles. A natural choice would be to select a basis of chiral primaries over the conformal manifold that consists of holomorphic sections of the corresponding bundles. Furthermore, from (3.2a) we see that it is possible to go to a holomorphic gauge (A j ) L K = 0, where ∇ j = ∂ j . In this gauge, the condition (3.3) simply becomes ∂ j C I JK = 0, so the OPE coefficients are holomorphic functions of the couplings. Let us denote the chiral primaries in the gauge where they are holomorphic sections as φ ′ I and the corresponding 2-point functions as In terms of these holomorphic sections, the curvature of the underlying holomorphic bundles can be simply expressed as and there is no longer any explicit dependence on the connection A. Here we used the compatibility of the connection and the metric on the bundle, see [27] for explanations. We could continue working with these holomorphic sections, but we need to pay attention to the following technical detail. The marginal operators O i can be related to the chiral primaries φ ′ i with ∆ = 2 by an expression of the form O i = Q ′ 4 · φ ′ i . The supercharges Q ′ can be viewed as sections of the holomorphic bundle L mentioned in equations (3.5). Having chosen a convention for O i and φ ′ i we have also chosen the conventions for the section Q ′ . Assuming O i is holomorphic (from (2.6)), the above choice of the holomorphic section φ ′ i implies that Q ′ is a holomorphic section of L. These conventions for the supercharges are not the standard ones following from the supersymmetry algebra. In the standard conventions, although the overall phase of the supercharges can be redefined in a coupling-constant dependent way due to the U (1) automorphism of the algebra, the "magnitude" of the normalization of the supercharges is fixed in order to satisfy the standard supersymmetry algebra Equivalently, the normalization of the 2-point function of the corresponding supercurrents is independent of the coupling constant. Since the supercharges Q with this standard choice have constant magnitude, they cannot be a holomorphic section of the bundle L. 12 Hence, the standard Q and the Q ′ above are different types of sections. What is the precise relation between them? Equation (3.5) implies that the combination can be a holomorphic section for an appropriate choice of the (coupling-constant dependent) phase of Q. K is the Kähler potential of the Zamolodchikov metric. Notice that the appropriate choice of the phase of Q depends on the choice of Kähler gauge. Under a Kähler transformation, There is an overall holomorphic factor e 2f c ′ and the original phase of Q has been shifted. 
With these specifications (3.7) is the relation between Q and Q ′ that we are looking for. This suggests the following choice of conventions: select chiral primaries φ I at any level of R-charge R so that φ ′ I = e − R c ′ K φ I are holomorphic sections. Equivalently, if we have already a choice of holomorphic sections φ ′ I (as above), then we define a new non-holomorphic basis by φ I = e R c ′ K φ ′ I . 13 The corresponding 2-point functions obey the relation where Q are supercharges with the standard 12 Had they been holomorphic sections with constant magnitude, we would conclude from (3.6) that the curvature of L is zero, which is inconsistent with the direct computation leading to (3.5). 13 Again, this definition of φI depends on the Kähler gauge and the resulting 2-point function g IJ transforms as g IJ → e normalization. The non-holomorphicity of φ i precisely cancels the non-holomorphicity of Q. In addition, the general OPE coefficients are the same in the two bases, C I JK = C I ′ J ′ K ′ , as a consequence of R-charge conservation. In the φ I -basis the curvature of the bundles becomes Inserting into (3.2b) we obtain the partial differential equations 14 Differential equations for 2-and 3-point functions of chiral primaries The result of this choice of gauge (scheme) is that the tt * equations reduce to differential equations for the 2-and 3-point functions, where there is no explicit appearance of the connection on the bundles. For the sake of clarity we summarize here the detailed form of the equations with all indices written out As we can see these differential equations relate the coupling constant dependence of 2-and 3-point functions of various chiral primaries. They have to be supplemented by equation (3.3), which in this gauge takes the simpler form 11) and the WDVV equations (3.4) In the examples that we will study later the conformal manifold is 1-(complex) dimensional, hence the WDVV equations are trivially obeyed and that is why we do not discuss them any further. In other N = 2 theories with higher dimensional conformal manifolds they may be nontrivial. Let us elaborate a little further on the notation in equation (3.10). The lowercase indices i, j run over (anti)-chiral primaries of ∆ = 2, R = ±4, or equivalently, over the marginal directions along the conformal manifold. We remind that chiral primaries of R = ±4 and dimension ∆ = 2 are those whose descendants are the marginal operators corresponding to λ i , λ j on the LHS. The capital indices run over general chiral primaries of any R-charge. These equations can be applied for each possible sector of chiral primaries. The function 14 The reader familiar with the 2d tt * equations should notice that the last term −g ij δ L K can be effectively removed by a slight redefinition, see the discussion around (4.9) for an example. g KM is the 2-point function of chiral primaries of charge R. The OPE coefficients C P iK relate the chiral primaries of charge R (corresponding to the index K) to the chiral primaries of charge R + 4 (corresponding to the index P ). The indices U, V correspond to chiral primaries of charge R − 4. Finally by C * Q jR we mean (C Q jR ) * . 
Remark on the curvature of the Zamolodchikov metric If we consider equation (3.10) specifically for the bundle of chiral primaries of R-charge 4 (whose descendants are the marginal operators) and using (2.10) and the general formula for the Riemann tensor of a Kähler manifold we get the equation We notice that the curvature of the conformal manifold obeys an equation, which is reminiscent of the one for the moduli space of 2d N = (2, 2) SCFTs with general values of the central charge, as some sort of generalization of special geometry [25,26]. Note on normalization conventions We emphasize once again that the differential equations (3.10) hold in a particular choice of normalization conventions described near the end of section 3.1. The benefit of this choice is that it allows us to circumvent the details of a non-trivial connection on the chiral primary bundles. These normalization conventions are typically different from the more common ones in conformal field theory where one diagonalizes the 2-point functions of conformal primary fields, (3.14) In the conventions (3.14) the OPE coefficients C K IJ are no longer holomorphic functions of the marginal couplings and therefore do not obey (3.11) (but they still obey (3.3)). In the examples of section 4 a natural basis of chiral primaries will lead to the holomorphic gauge of equation (3.11). Once there is a solution of the tt * equations in this basis, it is not hard to rotate to the more conventional basis (3.14). Global issues When studying the equations (3.10) it is important and interesting to explore certain global issues 15 of the bundles of chiral primaries over the conformal manifold M. The equations are local, since they were derived in conformal perturbation theory, but the conformal manifold may have special points (e.g. the weak coupling point g Y M = 0) and nontrivial topology like in the class S theories [32,33], where the conformal manifold is related to the moduli space of punctured Riemann surfaces. Because of these global issues, it is conceivable that in certain theories, the connection on the space of operators is not entirely determined by the local curvature expression (3.10), but there may be additional "Wilson line"-like configurations around the special points/nontrivial cycles on the conformal manifold. Moreover, whether we can find global holomorphic sections or not and if we can set ∂C = 0 globally, may be a nontrivial question. In this paper, since we are dealing mostly with the simpler superconformal QCD theories, we will not go into these global issues but we are planning to return to them in future work. Solving the tt * equations The resulting equations (3.10) are a set of coupled differential equations for the 2-and 3point functions of chiral primaries. In certain 2d N = (2, 2) QFTs the tt * equations could be solved [2,3] just from the requirement that the 2-point functions must be positive and from knowing the correlators in the weak coupling region. For this to work it was important that the chiral ring in 2d is finite dimensional. For example, in N = (2, 2) SCFTs a unitarity bound constrains the R-charge by |q| ≤ c 3 , which shows that in theories with reasonable spectrum the chiral ring is truncated. In 4d N = 2 SCFTs the chiral ring has no known upper bound in R-charge and if we try to apply these equations we end up with an infinite set of coupled differential equations. 
For instance, while in certain 2d examples one gets equations corresponding to the periodic Toda chain [3], in 4d N = 2 SCFTs we find equations similar to the semi-infinite Toda-chain (this will become more clear in section 4). Unlike what happened to 2d examples [2, 3], we have not been able to find a way to uniquely determine a solution of these equations, just from the requirement of positivity of the 2-point functions and the boundary conditions at weak coupling. On the other hand, in certain 4d N = 2 SCFTs, these equations have a recursive structure: if we somehow fix the coupling constant dependence of the lowest nontrivial chiral primaries, then the equations predict the 2-and 3-point functions of higher-charge chiral primaries. As we explained in section 2, the 2-point functions of chiral primaries of R-charge 4, are proportional to the Zamolodchikov metric on the conformal manifold. Hence, if we knew the exact Zamolodchikov metric as a function of the coupling, we would also know the 2-point function of chiral primaries of R-charge 4, and then by plugging this into the sequence of tt * equations we would be able to compute the 2-and 3-point functions of an infinite number of other chiral primaries. Progress in this direction becomes possible after the recent proposal [7], which relates the partition function of N = 2 SCFTs on S 4 computed by localization in the work of Pestun [8], to the Kähler potential of the Zamolodchikov metric on the moduli space. While this strategy allows us to partly solve the tt * equations, it would be interesting to explore whether it is possibile to determine the relevant solution of these equations without input from localization. This could perhaps be possible by demanding positivity of all 2-point functions of chiral primaries over the conformal manifold supplemented by some weak coupling perturbative data, in analogy to what was done in [3]. This is a very speculative possibility, which if true, would in principle lead to an alternative computation of the nontrivial information encoded in the sphere partition function, without the use of localization. We plan to investigate this further in future work. Extremal correlators By computing the 2-and 3-point functions of chiral primaries we can also get exact results for more general "extremal correlators". These are correlators of the form where φ I k are chiral primaries and φ J is antichiral, with R-charges related as First, it is convenient to use a conformal transformation of the form to write the correlator as where the x ′ 's on the RHS are related to x's by (3.16). For an extremal correlator in N = 2 SCFT, the superconformal Ward identities imply that is independent of the positions x i . Consequently, we are free to evaluate it in any particular limit. Let us define a new chiral primary φ I by fusing together all the chiral primaries where the symbol × refers to an OPE. Notice that, since all operators are chiral primaries, this multi-OPE is non-singular and associative, so the limit is well defined and it is simply given by a chiral primary φ I of charge R I = k R I k . Then we find that where on the last step we got the usual 2-point functions of chiral primaries (2.2). Due to the associativity of the chiral ring we can also write Re-instating the full coordinate dependence from (3.17), we can write the following formula for extremal correlators N = 4 theories Until this point we considered general theories with N = 2 supersymmetry. 
It is interesting to ask parenthetically how the formalism captures the properties of N = 4 theories. An N = 4 theory can also be written as an N = 2 theory, so our formalism should apply. The R-symmetry SU (2) R × U (1) R of the N = 2 viewpoint, is embedded inside the underlying SO(6) R of the full N = 4 theory. We proceed to flesh out the pertinent details and verify that the tt * equations work correctly in N = 4 theories. Consider an N = 4 gauge theory with semi-simple gauge group G. The theory has 6 real scalars Φ I , I = 1, ..., 6. It is useful to define the complex combination which is the bottom component of an SU (3) highest weight N = 1 superfield. The U (1) R symmetry that rotates this field corresponds to rotations on the 1-2 plane. The chiral primary, whose descendant is the N = 4 marginal operator, has the form From the N = 4 viewpoint this is the superconformal primary of the 1 2 -BPS short representation of N = 4 which contains, among other operators, the R-symmetry currents, stress tensor and marginal operators. General chiral primaries of charge R in 1 2 -BPS representations can be deduced from multitrace operators of the form where 2 n i = R. The trace is taken in the adjoint of G. The conformal manifold of this theory is parametrized by the complexified coupling up to global identifications due to S-duality transformations. θ denotes the θ-angle and g Y M the Yang-Mills coupling. An important point is that for N = 4 theories the Zamolodchikov metric on the conformal manifold does not receive any quantum corrections and in our conventions is equal to This means that the conformal manifold is locally a two-dimensional homogeneous space of constant negative curvature. The marginal operators O τ , O τ can be thought of as holomorphic and antiholomorphic tangent vectors to the conformal manifold. Since the manifold (3.27) has nonzero curvature, the marginal operators have a nontrivial connection. On the other hand, we will argue that the bundles encoding the connection for chiral primaries have vanishing curvature in N = 4 theories. This can be seen as follows: while from the N = 2 point of view the chiral primaries are only charged under U (1) R , in the underlying N = 4 theory they belong to representations of SO(6) R . Since the conformal manifold is one-complex dimensional and the holonomy of the tangent bundle is only U (1), it is not possible to have notrivial SO(6)-valued curvature for bundles over the conformal manifold, without breaking the SO(6) invariance of the theory. Hence we conclude that the bundles of chiral primaries for N = 4 theories must have vanishing curvature. One might wonder, how this statement can be consistent with the fact that the tangent bundle has nontrivial curvature and the fact that the marginal operators are descendants of the chiral primaries. The resolution is simple. Recalling the relation O τ = Q 4 · φ 2 , we can see that the curvature corresponding to O τ is given by the sum of the curvature of the supercurrents plus that of φ 2 . Since the latter is vanishing, we learn that the curvature of the tangent bundle comes entirely from that of the supercharges (3.5). It is easy to check that, using (3.5), the relation O τ = Q 4 · φ 2 and comparing with the curvature of the tangent bundle of (3.27), all factors work out correctly. Alternatively, we can verify the fact that the chiral primaries in N = 4 have vanishing curvature directly from the tt * equations. This can be done in two steps. 
The first step is to observe that in N = 4 theories we have a non-renormalization theorem for 3-point functions [9][10][11][12][13][14][15][16][17][18], which can be expressed as equation (3.28). The second step requires taking the covariant derivative (either ∇ or its conjugate ∇̄) of both sides of the tt * equation (3.2b). The covariant derivative of the RHS, which involves the two-point function coefficients g and the 3-point function coefficients C, vanishes by (3.28) and by the compatibility of g with the connection, which implies ∇g = ∇̄g = 0. The vanishing of the covariant derivative of the RHS implies that the covariant derivative of the LHS also vanishes, from which we deduce that the bundles must have covariantly constant curvature. This allows a direct evaluation of the curvature in the weak coupling limit. Hence, in order to show that the curvature vanishes in N = 4 theories for all values of the coupling, it is enough to show that the RHS of the tt * equations (3.2b) vanishes in the weak coupling limit. All ingredients on the RHS of (3.2b) can be evaluated, in principle, by standard, albeit in general rather involved, Wick contractions. In appendix C we provide an alternative derivation of the general combinatoric/group-theoretic identity (3.29). This is an identity for free-field contractions between traces that should hold for any semisimple group G; the subscript 2 refers to the chiral primary φ 2 = Tr[ϕ 2 ]. (It is quite possible that this identity corresponds to a natural group-theoretic statement, but we have not yet investigated this in detail. See also section 5.2 for related explicit tree-level 2-point functions.) Using this identity, we can demonstrate the desired result, i.e. that the RHS of the tt * equation vanishes for N = 4 theories: in standard N = 4 gauge theories the central charge is related to dim G by a fixed proportionality, and inserting this relation into (3.29) we find (3.30), which is precisely what we wanted to show. As a final comment we would like to clarify a possibly confusing point. The tt * equations (3.10) predict that the chiral primaries in N = 2 theories have nonzero curvature even in the limit of weak coupling. Indeed, the relation between c and dim G is different for N = 2 theories compared to N = 4 theories, and as a result (3.30) does not hold in N = 2 theories ((3.29), however, does hold). On the other hand, we argued that the curvature of operators in conformal perturbation theory is computed by (3.1). In the free limit the 4-point function inside the double integral, relevant for the computation of the curvature of chiral primaries, is the same in N = 2 and N = 4 theories. How can it then be that in N = 2 the bundle of primaries has nonzero curvature even in the weak coupling limit, while in N = 4 the curvature vanishes? The answer is that the two processes, taking the zero coupling limit and doing the double regularized integral, do not commute. In principle, the correct computation is to first compute the integral at some finite value of the coupling, and then send the coupling to zero. If one (wrongly) first takes the zero coupling limit inside the integral, then operators whose conformal dimension takes an "accidentally" small value at zero coupling start to contribute to the double integral. At infinitesimally small coupling these operators lift and their contribution discontinuously drops out of the double integral. Such operators are different between N = 2 and N = 4, thus resolving the aforementioned paradox.
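Before moving on to the explicit SCQCD example, it is worth noting that the statement earlier in this section, that the N = 4 conformal manifold is locally a two-dimensional homogeneous space of constant negative curvature, can be checked with a one-line symbolic computation. The sketch below assumes the standard hyperbolic form of the metric, ds 2 = (dx 2 + dy 2 )/y 2 with τ = x + iy on the upper half-plane; it illustrates only the geometric statement and does not reproduce the paper's normalization conventions (rescaling the metric by a constant rescales the curvature without changing its sign or constancy).

```python
# Minimal check: Gaussian curvature of ds^2 = (dx^2 + dy^2)/y^2 on the upper
# half-plane (tau = x + i*y), taken here as the local form of the N = 4
# conformal-manifold metric up to an overall normalization (our assumption).
import sympy as sp

x, y = sp.symbols('x y', real=True, positive=True)

# Conformally flat 2d metric ds^2 = exp(2*phi) (dx^2 + dy^2) with exp(2*phi) = 1/y^2
phi = -sp.log(y)

# Gaussian curvature of a conformally flat 2d metric: K = -exp(-2*phi) * Laplacian(phi)
K = -sp.exp(-2 * phi) * (sp.diff(phi, x, 2) + sp.diff(phi, y, 2))

print(sp.simplify(K))   # prints -1: constant and negative, as stated in the text
```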
4 N = 2 superconformal QCD as an instructive example Definitions The N = 2 SYM theory with gauge group SU (N ) coupled to 2N hypermultiplets (in short, N = 2 superconformal QCD or SCQCD) is a well known superconformal field theory for any value of the complexified gauge coupling constant (3.26). This theory will serve as a testing ground for the general ideas presented above. The bosonic field content of the theory comprises of: (a) the gauge field A µ and a complex scalar field ϕ in the adjoint representation The proportionality constant is convention-dependent (specific convention choices will be made below). The remaining fields of the chiral ring are generated by products of the fields We note in passing that the chiral ring defined in terms of an N = 1 subalgebra contains the additional mesonic superconformal primaries A sum over the gauge group indices is implicit, the index j = 1, . . . , 2N runs over the number of hypermultiplets, I, J , K = ± are SU (2) R indices, and the subindex 3 denotes that this particular combination belongs in a triplet representation of the SU (2) R 17 . Such primaries are not part of the N = 2 chiral ring defined in section 2.1 and therefore will not be part of our analysis. SU (2) with 4 hypermultiplets We begin the discussion with the SU (2) case which provides a simple clear demonstration of the general ideas in section 3. In this case, φ 2 is the single chiral ring generator. We normalize φ 2 by requiring the validity of the conventions (2.6), (2.7), (2.10) (see also section 5.1.1 for an explicit tree-level implementation of these conventions). We notice that since O τ is, by this definition, related to a holomorphic section of the tangent bundle of the conformal manifold, then as explained in section 3, φ 2 ∝ Tr[ϕ 2 ] (with a normalization that is a holomorphic function of τ ) is a non-holomorphic section of the bundle of chiral primaries. A holomorphic φ 2 arises by multiplying Tr[ϕ 2 ] with the non-holomorphic factor e − K 384 c , where K is the Kähler potential for the Zamolodchikov metric. In addition, the chiral ring includes a unique chiral primary φ 2n ∝ Tr[ϕ 2 ] n at each scaling dimension ∆ = 2n (n ∈ Z + ) (generated by φ 2 with repeated multiplication). We normalize the higher order chiral primaries φ 2n (n > 1) by requiring the OPE Notice that this choice is consistent with the holomorphic gauge (3.11). Moreover, as a straightforward consequence of the associativity of the chiral ring all the non-vanishing OPE coefficients are fixed to one; namely, one can further show that have a non-trivial dependence on the modulus τ . Our purpose is to determine the exact form of the functions g 2n (τ,τ ). This will immediately provide information about 3-point functions as well. Since we have a one-dimensional sequence of chiral primaries without any non-trivial degeneracies, the tt * equations (3.10) assume the following particularly simple form where n = 1, 2, ... and g 0 = 1 by definition. This infinite sequence of differential equations can be recast as the more familiar semi-infinite Toda chain ∂ τ ∂τ q n = e q n+1 −qn − e qn−q n−1 , n = 2, . . . by setting g 2n = exp (q n − log Z S 4 ). A reality condition on q n implies that g 2n are positive, which is expected by unitarity. In section 5 we collect several perturbative checks of equations (4.8). It may be interesting to classify the most general solution of the equations (4.8), subject to positivity over the entirety of the conformal manifold, but this is beyond the scope of the current paper. 
18 Instead, in what follows we will use these equations to solve recursively for the 2-point functions as follows Knowledge of a single 2-point function, e.g. g 2 , implies recursively the precise form of all the rest. As we show now, for SU (2) this provides the complete non-perturbative determination of the 2-and 3-point functions of all chiral primary operators. 18 We do not expect positivity alone to fix the solution uniquely. It is worth exploring the possibility that positivity, in combination with the data of higher order perturbative corrections around the point weak coupling point Imτ = ∞, might lead to a unique solution, in analogy to 2d examples [3]. Exact 2-point functions We can use supersymmetric localization on S 4 and the formula (2.13) to determine the exact coupling constant dependence of g 2 . For the SU (2) SCQCD theory an integral expression for the sphere partition function gives [8] H is a function on the complex plane defined in terms of the Barnes G-function [35] as Further details are summarized for the convenience of the reader in appendix A. Z inst is the Nekrasov partition function [36] that incorporates the contribution from all instanton sectors. Consequently, implementing (2.13) we obtain the exact 2-point function of the lowest chiral primary φ 2 as The 2-point functions of the higher order chiral primaries can be computed recursively using (4.10). We will return to the resulting expressions momentarily. Exact 3-point functions The general non-vanishing 3-point function (4.14) follows immediately from the above data since C 2m 2n 2m+2n = C 2(m+n) 2m 2n g 2(m+n) = g 2(m+n) . In the second equality we made use of the OPE coefficients (4.6). This formula provides the non-perturbative 3-point functions of chiral primaries as a function of the modulus τ , including all instanton corrections. Following section 3.5 it is straightforward to extend this result to any extremal correlator of chiral primaries. While the above normalization of the chiral primaries is very convenient for the type of computations of the previous section, it is common in conformal field theory to work with orthonormal fieldsφ I for which (4. 16) In these conventions, the OPE coefficientsĈ K IJ depend non-trivially on the moduli. Converting to this normalization in the case at hand we find the structure constantŝ C 2m 2n 2m+2n = g 2m+2n g 2m g 2n . (4.17) Perturbative expressions The tt * equations have allowed us to obtain exact results for 2-and 3-point functions of the chiral primary fields. The resulting expressions depend implicitly on the S 4 partition function of the SU (2) theory, which is given in terms of an one-dimensional integral (4.11). It is interesting to work out the first few orders in the perturbative expansion of the exact expressions. This will be useful later on in section 5 when we compare against independent computations in perturbation theory. 0-instanton sector Consider the perturbative contributions around the weak coupling regime g Y M → 0, or equivalently τ → +i∞. Working with the perturbative (0-instanton) part of the S 4 partition function we obtain implies the perturbative expansion (see also [37]) In section 5 we verify independently the validity of the first two orders of these expressions (for arbitrary g (0) 2n ) in perturbation theory. For each of these 2-point functions, the leading order term comes from a tree-level computation. The one-loop contribution is always vanishing and the next-to-leading order contribution comes from a two-loop computation. 
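The recursive structure just described can also be made concrete numerically. The sketch below is ours and is not part of the original analysis: it steps up the semi-infinite Toda chain quoted in the previous subsection, ∂ τ ∂ τ̄ q n = e q n+1 −q n − e q n −q n−1 with g 2n = exp(q n − log Z S 4 ), on a grid in the τ plane. The seed q 0 = log Z S 4 is our reading of the boundary condition g 0 = 1 (which amounts to extending the chain equation down to n = 1), the input arrays logZ and g2 are placeholders standing for the localization data of (4.11), and the finite-difference treatment of the mixed derivative is purely illustrative.

```python
# Hedged numerical sketch of the recursion: given log Z_{S^4} and g_2 sampled on a
# rectangular grid in tau = x + i*y (spacings hx, hy), step up the Toda chain to
# obtain g_4, g_6, ...  All function names and the seeding convention are ours.
import numpy as np

def mixed_derivative(q, hx, hy):
    """d_tau d_taubar q = (1/4)(d^2/dx^2 + d^2/dy^2) q, central differences
    on the interior of the grid (boundary entries are left as NaN)."""
    out = np.full_like(q, np.nan)
    qxx = (q[2:, 1:-1] - 2.0 * q[1:-1, 1:-1] + q[:-2, 1:-1]) / hx**2
    qyy = (q[1:-1, 2:] - 2.0 * q[1:-1, 1:-1] + q[1:-1, :-2]) / hy**2
    out[1:-1, 1:-1] = 0.25 * (qxx + qyy)
    return out

def toda_step(q_prev, q_curr, hx, hy):
    """Solve the quoted Toda equation for q_{n+1}:
    q_{n+1} = q_n + log( d_tau d_taubar q_n + exp(q_n - q_{n-1}) )."""
    rhs = mixed_derivative(q_curr, hx, hy) + np.exp(q_curr - q_prev)
    return q_curr + np.log(rhs)

def g2n_tower(logZ, g2, hx, hy, n_max):
    """Return [g_2, g_4, ..., g_{2*n_max}] on the grid, seeding the chain with
    q_0 = log Z_{S^4} (our reading of g_0 = 1) and q_1 = log g_2 + log Z_{S^4}."""
    q_prev, q_curr = logZ, np.log(g2) + logZ
    tower = [g2]
    for _ in range(1, n_max):
        q_prev, q_curr = q_curr, toda_step(q_prev, q_curr, hx, hy)
        tower.append(np.exp(q_curr - logZ))
    return tower
```

Each step up the chain consumes one layer of boundary grid points (the mixed derivative is only defined in the interior), so in practice the input grid must be taken somewhat larger than the region where the g 2n are wanted.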
The corresponding 3-point functions follow immediately from equation (4.15). In the alternative conventions (4.16) they follow from a straightforward application of equation (4.17). The first few coefficients arê 2n at every level g 2n . For the first terms we find It is straightforward to continue with higher n if desired. Analogous results can be obtained likewise for the general ℓ-instanton sector. From these 2-point functions we can also express the exact instanton corrections to chiral primary 3-point functions. It would be interesting to confirm these results with an independent perturbative computation in the ℓ-instanton sector. Comments on SL(2, Z) duality It is interesting to explore the transformation properties of correlators of chiral primaries in N = 2 SCQCD under non-perturbative SL(2, Z) transformations We expect that the Zamolodchikov metric obeys the identity or equivalently and taking into account the transformation (4.37), we notice that the validity of (4.39) requires the partition function Z S 4 to be SL(2, Z) invariant up to Kähler transformations The issue we would like to address here is the following: suppose that we have verified the correct SL(2, Z) transformation of g 2 . What is the SL(2, Z) behavior of the 2-point functions g 2n of the higher order chiral primaries? The tt * equations provide a specific answer. Assuming g ′ 2 = |cτ + d| 2 g 2 , it is easy to verify recursively from (4.10) that which is consistent with expectations. See [38] for a related discussion of the S-duality properties of chiral primary correlation functions in N = 4 SYM theory. SU (N ) with 2N hypermultiplets The case of general SU (N ) gauge group can be analyzed in a similar fashion. Unfortunately, for general N ≥ 3 it is less clear under which conditions we can identify the relevant solution of the tt * equations. We proceed to discuss the detailed structure of the SU (N ) tt * equations. The general SU (N ) N = 2 SCQCD theories possess N − 1 chiral ring generators represented by the single-trace operators (4.44) The general element of the chiral ring is freely generated from these operators and can be viewed as a linear combination of the primaries The operator that gives rise to the single exactly marginal direction O τ of the theory is We notice that the scaling dimension of the generic chiral primary (4.45) is ∆ = N −1 i=1 (i+1)n i . Obviously, there are values of ∆ where more than one chiral primary can have the same scaling dimension. Such chiral primaries can mix non-trivially with each other to exhibit non-diagonal τ -dependent 2-point function matrices. We verify this mixing explicitly in specific examples at tree-level in subsection 5.2. The OPE of the chiral primaries (4.45) can be chosen to take the form φ (n 1 ,...,n N−1 ) (x) φ (m 1 ,...,m N−1 ) (0) = φ (n 1 +m 1 ,...,n N−1 +m N−1 ) (0) + . . . , (4.47) or in more compact notation This choice allows us to fix the non-vanishing OPE coefficients to 49) in analogy to the SU (2) equation (4.6). In this way, once we choose the normalization of the chiral ring generators (4.44) the normalization of all the chiral primary fields is uniquely determined. We will consider a normalization of φ 2 that adheres to the conventions (2.6), (2.10). The remaining chiral primaries in (4.44) are chosen with an arbitrary normalizing factor N K (τ ) that is a holomorphic function of the complex coupling τ . 
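Before turning to the detailed structure of the SU (N ) equations in these conventions, here is a brief symbolic aside, ours rather than the paper's, on the modular weight appearing in the SL(2, Z) discussion above: under τ → (aτ + b)/(cτ + d) with ad − bc = 1, the imaginary part of τ transforms with exactly the inverse of the factor |cτ + d| 2 that multiplies g 2 . This is elementary SL(2, R) algebra and does not rely on any convention of the present paper.

```python
# Check that Im(tau') = Im(tau)/|c*tau + d|^2 for tau' = (a*tau + b)/(c*tau + d)
# with real a, b, c, d and a*d - b*c = 1; this is the elementary identity behind
# weight-(1,1) objects such as g_2 in g_2' = |c*tau + d|^2 g_2.
import sympy as sp

x, y, a, b, c, d = sp.symbols('x y a b c d', real=True)
tau = x + sp.I * y

# Rationalize tau' by multiplying with the conjugate of the denominator.
num = sp.expand((a * tau + b) * sp.conjugate(c * tau + d))
abs2 = sp.expand((c * x + d)**2 + (c * y)**2)        # |c*tau + d|^2
im_tau_prime = sp.im(num) / abs2                     # Im(tau')

# Im(tau') * |c*tau + d|^2 equals (a*d - b*c) * Im(tau), i.e. Im(tau) when ad - bc = 1.
check = sp.simplify(im_tau_prime * abs2 - (a * d - b * c) * y)
print(check)   # 0
```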
The structure of the SU (N ) tt * equations In these conventions the tt * equations (3.10) become The addition of 2 in the index notation K + 2 refers to the element φ 2 φ K . The subindex ∆ on the indices has been added here to flesh out the scaling dimension of the corresponding chiral primaries. Sample tree-level checks of equations (4.50) (that exhibit the non-trivial mixing of chiral primaries) are collected in section 5.2. Similar to the SU (2) case the equations (4.50) relate 2-point functions of chiral primaries at three different scaling dimensions and can be recast in the recursive form However, unlike the situation of the SU (2) gauge group, the complicated degeneracy pattern of the general SU (N ) theory and the corresponding non-trivial mixing of the chiral primary fields makes this system of differential equations a far more complicated one to solve explicitly in terms of a few externally determined data (like the Zamolodchikov metric). Most notably, the LHS of equation (4.51) involves primaries that belong in a subsequence generated by multiplication with the field φ 2 . In contrast, the RHS involves in general 2-point functions of all available chiral primaries. This feature complicates the recursive solution of the system of equations (4.51). As we move up in scaling dimension with the action of φ 2 the number of degenerate fields will stay the same or increase. Increases are due to the appearance of additional degenerate chiral primary fields that involve the action of the extra chiral ring generators other than φ 2 , i.e. Tr[ϕ 3 ] etc. In such cases, there are seemingly new 2-point function coefficients that have not been determined recursively from the previous lower levels and represent new data that need to be provided externally. It is an interesting open question whether other properties (like the property of positivity over the entire moduli space) are strong enough to reduce the number of unknowns and fix the full solution uniquely. Despite the apparent complexity of (4.51), it is quite likely that this system has a hidden structure that allows to simplify its description. For example, in section 5.2 we find preliminary evidence at tree-level that one can isolate differential equations that form a closed system on the subsequence of the chiral primary fields (φ 2 ) n . If true, the data of such subsequences could be determined solely in terms of the SU (N ) S 4 partition function in direct analogy to the SU (2) case. Such possibilities are currently under investigation. 3-point functions The non-vanishing 3-point structure constants of the SU (N ) theory are This relation is the SU (N ) generalization of (4.14), (4.15). Consequently, a solution of the tt * equations (4.50) determines immediately also the 3-point functions (4.52). The conversion of the above results into the language of the common alternative normalization (4. 16) φ at each scaling dimension ∆, where the matrix elements N L K are suitable functions of the 2-point coefficients g KL . Once the matrix elements N L K are determined the 3-point structure constantsĈ IJK in the basis (4.53) can be written aŝ (4.55) Checks in perturbation theory In this section we perform a number of independent checks of the above statements in perturbation theory. These checks provide a concrete verfication of the validity of the general formal proof of the tt * equations in [4], and allow us to verify that the tt * equations were applied correctly in the previous section. 
In the process, we encounter and comment on several individual properties of correlation functions in N = 2 SCQCD. We work in the conventions listed in appendix B. SU (2) SCQCD We begin with a perturbative computation up to 2 loops of the 2-point coefficients g 2n in the SU (2) N = 2 SCQCD theory. Tree-level Let us start with a comment about normalizations in the general SU (N ) theory. At leading order in the weak coupling limit, g Y M ≪ 1, (and the conventions summarized in appendix B) the 2-point function of the adjoint scalars ϕ = ϕ a T a is T a (a = 1, . . . , N 2 − 1) is a basis of the SU (N ) Lie algebra. Normalizing the chiral primary operator φ 2 as we obtain which is consistent with the conventions (2.8), (2.9), (2.10). This is important for the validity of the tt * equations (3.10), or the equations (4.8) in the SU (2) case of this subsection. Specializing now to the SU (2) case we find that the 2-point function (5.3) has the treelevel coefficient We can read off the 2-point function coefficients g 2n of the higher chiral primary operators φ 2n = (φ 2 ) n from free field Wick contractions in the 2-point correlation function A brute-force computation gives g 2n = (2n + 1)! 6 n g n 2 . (5.7) With this result the tt * equations (4.8) reduce at tree-level to the differential equation which is found to hold for the g 2 given in equation (5.5). Quantum corrections up to 2 loops We proceed to compute the first non-vanishing quantum corrections to g 2n in perturbation theory. This will allow us to reproduce the Zamolodchikov metric derived from localization [7] at g 4 Y M order and will provide a test of the tt * equations at the quantum level. Furthermore, due to the discussion in section 3.2, this provides a g 4 Y M check of the chiral primary threepoint functions in a diagonal basis as well. We will use the techniques of [39], namely we will exploit the fact that quantum corrections for N = 4 SYM vanish at each order in perturbation theory 21 , so that we only need to compute the diagrammatic difference between the N = 2 and N = 4 theories. 20 At tree-level only the gauge part iπ 16 F a µν+ F µν+a of Oτ in (B.11) contributes. The auxiliary fields contribute only contact terms and the cubic interactions are subleading in gY M . The boson and fermion kinetic terms vanish on-shell. A similar observation was made in [12]. 21 See [40,41] for perturbative computations of 2-point functions of chiral primaries in N = 4 SYM. Following [39], it is easy to see that the diagrammatic difference between N = 2 and N = 4 at order g 2 Y M vanishes. It immediately follows that the theory does not receive quantum corrections to this order, consistent with the results from localization (4.21)-(4.25). We now examine the diagrams that contribute to order g 4 Y M to the 2-point function To understand what type of diagrams can contribute to this order, it is convenient to temporarily regard the adjoint scalar ϕ lines as external and change the normalization of the fields so that the coupling constant dependence is on the vertices. Diagrams which differ between N = 2 and N = 4 must involve hypermultiplets running in the internal lines. 
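As a quick aside before turning to the loop diagrams, the tree-level factor (5.7), g 2n = (2n + 1)!/6 n g n 2 , can be confirmed by brute force for small n: the ratio g 2n /g n 2 is a pure color-combinatorics number obtained by summing free-field Wick contractions of (Tr[ϕ 2 ]) n against its conjugate, since the spacetime factors and the normalization of φ 2 cancel in the ratio. The short script below is our own helper (names are hypothetical) and enumerates these contractions for the three adjoint colors of SU (2).

```python
# Brute-force check of g_{2n}/g_2^n = (2n+1)!/6^n (equation (5.7)) from free-field
# Wick contractions: only the delta^{ab} color structure of the propagator enters.
from itertools import product
from math import factorial, prod

DIM = 3  # number of adjoint colors for SU(2): a = 1, 2, 3

def raw_correlator(n):
    """Color part of <(phi^a phi^a)^n (phibar^b phibar^b)^n> in the free theory:
    sum over color assignments and over all Wick pairings, each pairing
    contributing a product of Kronecker deltas."""
    total = 0
    for a in product(range(DIM), repeat=n):        # colors at x; each appears twice
        for b in product(range(DIM), repeat=n):    # colors at 0; each appears twice
            counts_x = [2 * a.count(v) for v in range(DIM)]
            counts_0 = [2 * b.count(v) for v in range(DIM)]
            if counts_x == counts_0:
                # pairings must match equal colors: product of factorials of multiplicities
                total += prod(factorial(m) for m in counts_x)
    return total

g2_raw = raw_correlator(1)                         # = 6 for DIM = 3
for n in range(1, 5):
    ratio = raw_correlator(n) / g2_raw**n
    prediction = factorial(2 * n + 1) / 6**n       # (2n+1)!/6^n from equation (5.7)
    print(n, ratio, prediction)                    # the two columns agree
```

Replacing DIM by N 2 − 1 gives the analogous color sums for the (φ 2 ) n subsequence of the SU (N ) theory discussed in section 5.2, up to the overall normalization of φ 2 .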
After a brief inspection of the N = 2 SCQCD Lagrangian it is not too hard to convince oneself that the only possible types of diagrams that can contribute at order g 4 Y M (and which differ between N = 2 and N = 4) come from two types of topologies when trying to connect the 2n 'external lines' of ϕ at point x to the 2n 'external lines' of ϕ̄ at point 0: a) corrections to a single ϕ propagator (diagrams with two external ϕ lines), and b) diagrams in which four external ϕ lines are connected through internal hypermultiplet lines. Let us examine the former first. We denote the quantum-corrected propagator as in (5.11), where S (0) (x − y) is the tree-level propagator (5.1) and we have used the fact that the g 4 Y M corrections are proportional to the tree-level propagator [39]; f 1 is a numerical constant that we will determine in the following. Regarding diagrams of type b), there are only two diagrams that can contribute to this order, which are shown in figure 1. (We remind the reader that we are only considering diagrams which differ between N = 2 SCQCD and N = 4 SYM with the same gauge group; moreover, the statement that these are the only diagrams is true only for the SU (2) gauge group. See [39] for useful background.) In the diagram D 1 hyperscalars run in the internal loop, while the diagram D 2 corresponds to the exchange of hyperfermions. In more detail, we define D 1 (x, y) and D 2 (x, y) as connected correlators, cf. (5.13), where Ξ 1 and Ξ 2 are the interaction actions associated to the terms in the Lagrangian (B.6) coupling the vector sector to the hypermultiplet sector, given explicitly in (5.14), and we take the Wick contractions that correspond to connected diagrams only. It is easy to see that all the other diagrams either vanish or are identical to their N = 4 counterparts. We start by examining the gauge structure of these diagrams. Both are proportional to Tr(T a T c T b T d ) (or permutations thereof), so the difference between the N = 2 and N = 4 color factors can be written down directly, where the factor of 4 in the resulting expression comes from the fact that the N = 2 theory has 4 hypermultiplets. It is thus convenient to define the quantity C ≡ δ ac δ bd + δ ad δ bc + δ ab δ cd , (5.17) and to parametrize the contribution from these two diagrams as in (5.18), where f 2 is a numerical constant that we will determine momentarily. With these results, it is straightforward to work out the combinatorics and find the g 4 Y M corrections to the correlation functions g 2n as a function of the two contributions f 1 and f 2 . After some work we find the result (5.19), where f 1 and f 2 are defined in equations (5.11) and (5.18) respectively. In order to derive this expression, one has to consider all the possible ways to connect the propagators associated to φ 2n with those associated to φ̄ 2n , with the insertion of the g 4 Y M corrections coming from the diagrams described above. We notice that the contribution coming from g 4 Y M diagrams with two external ϕ lines has a different dependence on n compared to the one coming from diagrams with four external ϕ lines, reflecting the different combinatorial properties of these graphs. It is important to notice that the resulting equation is not automatically consistent with the tt * equations. In fact, we find that demanding that (5.19) satisfies the tt * equations leads to the non-trivial condition (5.20). We conclude that the tt * equations do encode non-trivial information about the quantum corrections to chiral primary correlation functions, as they are sensitive to the ratio f 2 /f 1 . Determining this ratio by explicitly computing the relevant Feynman diagrams will thus provide us with a stringent test of these equations at the quantum level.
We will now determine the values of f 1 and f 2 by computing the Feynman diagrams D 1 and D 2 . We will show that their ratio is precisely the one predicted by the tt * equations. Furthermore, the result will allow us to compute the g 4 Y M correction to the Zamolodchikov metric, thus providing a perturbative check of the results of [7]. Computation of f 1 and f 2 Recall the tree-level propagator (5.1). As is customary, we work in momentum space and in dimensional regularization, where the spacetime dimension is d = 4 − 2ǫ. The g 4 Y M correction to the propagator S (1) (x − y) was computed in [39], which in turn implies the value of f 1 quoted in (5.23). To compute the remaining two diagrams, we employ standard techniques [42] to reduce any 3-loop integral to a linear combination of "master integrals", whose ǫ-expansions can be found in the literature. We will see in a moment that the only master integrals that we need are those that correspond to the topologies shown in figure 2. The contributions coming from the diagrams D 1 and D 2 in momentum space will be denoted by D̃ 1 (p) and D̃ 2 (p) respectively. At the end of the computation, we transform back to position space using the formula (5.29). This formula tells us that we only need to determine the 1/ǫ term in the Feynman diagrams of interest, since it is the only one that can contribute to the finite part of the position-space correlator (see also [40, 41]). We will explicitly show that all the higher-order poles cancel exactly between the two diagrams D 1 and D 2 , as expected from extended supersymmetry. We first examine the diagram D 1 . We find that its contribution in momentum space, D̃ 1 (p), is directly proportional to the master integral B 62 associated to the topology of the corresponding diagram in figure 2, with the color factor C defined in (5.17). Since the diagram is already in "master integral" form, we do not need to reduce it further and we can directly use the result in equation (5.27). The Feynman diagram D 2 is more complicated, but can also be reduced to a linear combination of master integrals as explained above. We used the Mathematica package FIRE [44] to perform the reduction; the result is again a combination of the master integrals of figure 2. Combining the results in equations (5.24)-(5.28), we obtain the combined momentum-space contribution of the two diagrams, up to terms of order ǫ 0 or higher. It is pleasing to see that the 1/ǫ 3 and 1/ǫ 2 poles precisely cancel, as well as all the non-ζ(3) contributions to the simple pole. Finally, we use equation (5.29) to transform back to position space, and comparing the final result with (5.18) we immediately read off the value of f 2 given in (5.34). Using the results (5.23) and (5.34) we can confirm the relation (5.20), which, as was explained around equation (5.19), implies the validity of the tt * equations for the entire set of chiral-ring 2-point functions g 2n up to the relevant order. Moreover, using equation (5.19) we are able to provide an independent derivation of the g 4 Y M perturbative correction to the Zamolodchikov metric. Recalling that the tree-level propagator is given by equation (5.1), we find perfect agreement with the result from localization (4.21) and the prediction of [7]. SU (N ) SCQCD at tree level We continue with a tree-level investigation of the tt * equations for the general SU (N ) group. The 2- and 3-point functions entering in (4.50) can be computed directly by straightforward Wick contractions. Examples of such computations will be provided below. However, before we enter these examples it is worth first making the following general point.
Although the explicit implementation of Wick contractions can be rather cumbersome with complicated combinatorics, it is trivial to obtain the τ -dependence of the 2-point function at leading order in the weak coupling limit. In general, where g KM is coupling constant independent and contains the combinatorics from the contractions of the traces. From this expression the LHS of the tt * equations (4.50) follows trivially as The RHS of the tt * equations (4.50) has the form Notice that the tree level 2-and 3-point functions in this expression are exactly the same as the ones we encountered in section (3.6) in the context of N = 4 SYM theory. As a result, we can use the identity (3.29) to recast (5.38) into the simpler form We used the fact that for the SU (N ) theories dim G = N 2 − 1. Comparing the LHS (5.37) and the RHS (5.39) we find that the tt * equations are obeyed at tree level for any SU (N ) N = 2 SCFT and for all sectors of charge R in the chiral ring. The reader should appreciate that the short argument we have just presented is simpler than the general proof of the tt * equations in [4] because it makes explicit use of the special properties of correlators in a free CFT, such as the tree-level identity (3.29), and its proof in appendix C. SU (3) examples To illustrate the content of the above equations and the new features of the SU (N ) tt * equations (N ≥ 3) (compared to the SU (2) case) we consider a few sample tree-level computations in the SU (3) theory. SU (N ) observations After the implementation of (5.36) the SU (N ) tt * equations (4.50) take the following algebraic form at tree-level ∆ 4gM In this formula (x) n denotes the Pochhammer symbol (x) n = x(x + 1) · · · (x + n − 1) , (5.51) S 2n refers to the group of permutations of 2n elements and σ is the generic permutation in this group. Although currently we do not have an analytic proof of this formula, we expect that it holds generally for any value of the positive integers n ≥ 1, N > 1. For example, for N = 2 (the SU (2) case, where there are no degeneracies and equations (5.8) make up the full set of tt * equations) one can easily see that the Pochhammer formula (5.50) reproduces the result (5.5), (5.7). As another explicit check, notice that all the values of g 2n (for n = 1, 2, 3, 4) in the previous SU (3) section are consistent with (5.50). The intriguing fact about (5.50) is that it predicts values of g 2n (at all N > 1) that obey the tree-level version of the same semi-infinite Toda chain that followed directly from the tt * equations in the SU (2) case. This is not an obvious property of the matrix equations (5.49) at arbitrary N and hints at a hidden underlying structure that will be useful to understand further. Moreover, if (5.52) holds for g 2n at all N beyond tree-level it would allow us to use the SU (N ) S 4 partition function to obtain a complete non-perturbative solution of the two-point functions (φ 2 ) n (x) (φ 2 ) n (0) in the SU (N ) theory similar to the SU (2) case above. These issues and their implications for the structure of the SU (N ) tt * equations (as well as possible extensions to more general chiral primary fields) are currently under investigation. Summary and prospects We argued that the combination of supersymmetric localization techniques and exact relations like the tt * equations opens the interesting prospect for a new class of exact non-perturbative results in superconformal field theories. In this paper we focused on four-dimensional N = 2 superconformal field theories. 
Combining the tt * equations of Ref. [4] with the recent proposal [7] that relates the Zamolodchikov metric on the moduli space of N = 2 SCFTs to derivatives of the S 4 partition function we found useful exact relations between 2-and 3-point functions of N = 2 chiral primary operators. In some cases, like the case of SU (2) SCQCD, the tt * equations form a semi-infinite Toda chain and a unique solution can be determined easily in terms of the well-known S 4 partition function of the SU (2) theory. The solution provides exact formulae for the 2-and 3-point functions of all the chiral primary fields of this theory as a function of the (complexified) gauge coupling. We verified independently several aspects of this result with explicit computations in perturbation theory up to 2-loops. In more general situations, e.g. the SU (N ) SCQCD theory, the structure of the tt * equations is further complicated by the non-trivial mixing of degenerate chiral primary fields. We provided preliminary observations of an underlying hidden structure in these equations that is worth investigating further. The minimum data needed to determine a unique complete solution of the general SU (N ) tt * equations, and the structure of that solution, remains an interesting largely open question. It would be useful to know if a few fundamental general properties, like positivity of 2-point functions over the entire conformal manifold, combined with some 'boundary' data, e.g. weak coupling perturbative data, are enough to specify a unique solution. An exact solution of the tt * equations would have several important implications. In section 3.5 we argued that the explicit knowledge of 2-and 3-point functions of chiral primary operators can be used to determine also the generic extremal n-point correlation function of these operators. In a different direction these results can also be used as input in a general bootstrap program in N = 2 SCFTs to determine wider classes of correlation functions, spectral data etc. Interesting work along similar lines appeared recently in [46]. For the case of N = 2 SCQCD we note that the methods developed in [46] (e.g. the correspondence with two-dimensional chiral algebras) are best suited for a discussion of the mesonic (Higgs branch) chiral primaries and are less useful for the N = 2 (Coulomb branch) chiral primaries analyzed in the present paper. As a result, our approach can be viewed in this context as a different method providing useful complementary input. In the main text we considered mostly the case of N = 2 SCQCD theories as an illustrative example. It would be interesting to extend the analysis to other four-dimensional N = 2 theories, e.g. other Lagrangian theories, or the class S theories [32,33]. Eventually, one would also like to move away from N = 2 supersymmetry and explore situations with less supersymmetry where quantum dynamics are known to exhibit a plethora of new effects. Two obvious hurdles in this direction are the following: (i) it is known that the S 4 partition function of N = 1 theories is ambiguous [7]; (ii) it is currently unknown whether there is any useful generalization of the tt * equations to N = 1 theories [4]. A related question has to do with the extension of these techniques to theories of diverse amounts of supersymmetry in different dimensions, e.g. three-dimensional SCFTs. Originally, topological-antitopological fusion and the tt * equations [2,3] were also useful in analyzing two-dimensional N = (2, 2) massive theories. 
Therefore, another interesting direction is to explore whether a similar application of the tt * equations is also possible in four dimensions. Massive four-dimensional N = 2 theories, like N = 2 SYM theory, would be an interesting example. Related questions were discussed in [5]. where the integral is performed over the Cartan subalgebra of the gauge group G, is the Vandermond determinant, Z tree is the classical tree-level contribution, Z 1−loop is the 1-loop contribution and Z inst is Nekrasov's instanton partition function [36]. r denotes the radius of S 4 and q = e 2πiτ . |W| is the order of the Weyl group G. In the case of the SU (N ) N = 2 SCQCD theories the elements of the Cartan subalgebra are parametrized by N real parameters a i (i = 1, . . . , N ) satisfying the zero-trace condition N i=1 a i = 0, and The instanton factor Z inst has a more complicated form. General expressions can be found in [33,36,49]. In the main text we set r = 1 for the radius of S 4 . The special function H that appears in the one-loop contribution is related to the Barnes G-function [35] G(1 + z) = (2π) of the chiral primary φ 2 , that has the lowest scaling dimension ∆ = 2, controls the exactly marginal deformation 3) The complex marginal coupling is τ = θ 2π + 4πi , and we normalize the elementary fields of the theory so that the full Lagrangian in components takes the form the D-term potential for the hypermultiplet complex scalars Q. We use standard notation where In this normalization all the τ dependence is loaded on the vector part of the Lagrangian. 24 This is consistent with (B.2), (B.3) and the identification (B.11) F µν± = F µν ∓ iF µν is the (anti)self-dual part of the gauge field strength. D and F are respectively the D-and F -auxiliary fields of the N = 1 vector and N = 1 chiral multiplet that make up the N = 2 vector multiplet. C An (eccentric) proof of equation (3.29) In this section we will give a proof of equation (3.29). Instead of giving a direct combinatoric proof, we will proceed as follows. Consider the N = 4 SYM theory with gauge group G, in the free limit. This can also be thought of as an N = 2 SCFT. This theory has 6 real scalars Φ I , I = 1, ..., 6. We consider the complex combination The chiral primary, whose descendant is the marginal operator, has the form where 2 n i = R. The trace is taken in the adjoint of G. Similarly we define the anti-chiral primaries and the matrix of 2-point functions g KL = φ K φ L . Notice that the matrix of 2-point functions g KL is not diagonal in the basis of multitrace operators and is somewhat cumbersome to compute by considering Wick contractions. Our starting point is to consider the following 4-point function Here K, L can be different chiral primaries, but by R-charge conservation this 4-point function is nonzero only if K, L have the same R-charge. By Wick contractions it is not hard to see 24 The last term of the hypermultiplet interactions, g 2 Y M V(Q), appears to be gY M -dependent, but this is only so after we integrate out the D auxiliary field. Before integrating out D the Lagrangian Lvector has a term 1 2g 2 Y M D 2 and L hyper has no explicit gY M -dependence. that there are only three possible structures of the coordinate dependence for this correlator. So the general form is A = p 1 |x 12 | 4 |x 34 | 2∆ K + p 2 |x 12 | 2 |x 14 | 2 |x 23 | 2 |x 34 | 2∆ K −2 + p 3 |x 14 | 4 |x 23 | 4 |x 34 | 2∆ K −4 . (C. 
6) In principle we can compute the constants p 1 , p 2 , p 3 by working out the combinatorics of the Wick contractions, however we will try to avoid this. By considering the double OPE in the (13) → (24) channel we learn that By considering the OPE in the (12) → (34) channel we have Finally from the OPE in the (14) → (23) channel we find Using these results we have completely fixed the 4-point function (C.5) in the free limit, in terms of the 2-and 3-point function coefficients which enter the tt * equations. However, the desired equation (3.29) expresses a nontrivial relation among these coefficients. We will now argue that the consistency of the underlying CFT implies the desired relation. We will establish the relation by the following argument. The tree level correlator (C.5) can be thought of as a correlator in a theory of only dim G complex scalar fields 25 . This by itself is a consistent conformal field theory with a central charge c scalar which is related to dim G by c scalar = 8 3 dimG. (C.10) To derive equation (C.8) we considered the OPE in the channel (12) → (34) and only kept the leading term, i.e. the identity operator. One of the subleading contributions involves conformal block of the stress energy tensor. In any consistent CFT the contribution of this block is fully determined using Ward identities, by the central charge of the CFT and by the conformal dimension of the external operators [20]. Our strategy is to: a) isolate the contribution of the conformal block of the stress energy tensor for the 4-point function (C.5), (C.6) written in terms of the data (C.7), (C.8), (C.9) and b) demand that this contribution is the same as that predicted by general arguments based on the Ward identities for CFTs. We will discover that this requirement leads to the desired formula (3.29). 25 Since we are in the free limit the presence of the other fields does not make any difference to the counting of the Wick combinatorics. We write equation (C.6) in a notation which is somewhat more convenient to perform the conformal block expansion where we have introduced the conformal cross ratios u = |x 12 | 2 |x 34 | 2 |x 13 | 2 |x 24 | 2 , v = |x 14 | 2 |x 23 | 2 |x 13 | 2 |x 24 | 2 . (C.12) It is easy to see that the term p 1 = g 22 g KL is coming from the exchange of the identity operator (the reason that it is not equal to 1 is because our 2-point functions are not normalized to be ∝ 1 |x| 2∆ ). With a little work on the conformal block expansion in the u → 0, v → 1 channel, we find that the block of the stress tensor comes with the coefficient A = 1 |x 12 | 4 |x 34 | 2∆ K . . . + 2 3 p 2 uG (2) (1, 1, 4, u, v) + . . . , (C. 13) where the function G (2) is defined in [20]. (C.15) Comparing this to what we found in (C.13) we conclude that consistency of the CFT demands the relation Using the expression (C.7), (C.8) and (C.9) it is straightforward to show that this implies
Sustainable development in a developing economy: Challenges and prospects Sustainable development implies development which ensures the maximization of human well-being for today's generation without leading to declines in future well-being. Attaining this path requires eliminating those negative externalities that are responsible for natural resource depletion and environmental degradation. All human activities and developmental projects are associated with environmental degradation in one form or another, with the attendant generation of wastes. As a result, environmental problems of various types and intensities have emerged to threaten man's well-being and the natural environment which serves as his life support system. In the light of the present global drive towards sustainable development and the concern of the Federal Government of Nigeria, fundamental strategies are presented here to provide a sound basis for comprehensive plans for environmental management in Nigeria, which is a fast-developing economy. The basic paradigms of the interaction of man and the environment were used as the basis for this study, and the socioeconomic potential of effective environmental management is presented. Application of the strategies has the potential to contribute significantly to the national GDP and will also ensure that development is in harmony with the environment. INTRODUCTION The World Commission on Environment and Development, commonly referred to as the Brundtland Commission, defined the concept of sustainable development as "development that meets the needs of the present without compromising the ability of future generations to meet their own needs, and at the same time takes into account the needs of the poor in the developing world". Sustainable development can be defined in technical terms as a development path along which the maximization of human well-being for today's generations does not lead to declines in future well-being. Attaining this path requires eliminating those negative externalities that are responsible for natural resource depletion and environmental degradation. It also requires securing those public goods that are essential for economic development to last, such as those provided by well-functioning ecosystems, a healthy environment and a cohesive society. Sustainable development also stresses the importance of retaining the flexibility to respond to future shocks, even when their probability, and the size and location of their effects, cannot be assessed with certainty.
The scope and scale of environmental problems have expanded considerably over the past three decades (Colby, 1991). This expansion ranges from pollution issues at local, regional and then international levels, to deforestation, soil erosion, declining water tables and other forms of natural resource depletion and degradation, to global concerns such as climate change and the ozone layer. It has coincided with unprecedented growth in the scope and scale of human activities and, in many countries, improvements in human welfare. All human activities take place in the context of certain types of relationships between society and the biophysical world (the rest of nature). "Development" involves transformations of these relationships. When human activities took place on a scale that was minor compared to that of nature's own, it did not matter much whether the relationships were of a "parasitic" or "mutualistic" type. However, world population has tripled and the world economy has expanded to 20 times its size in 1900 (Speth, 1989). Vitousek et al. (1986) have estimated that humankind is now responsible for the consumption of some 40% of all terrestrial primary productivity. Matter and energy flows (the physical presence of the economy within the ecosphere) now rival in magnitude the flow rates of many natural cycles and fluxes. RECENT SUSTAINABLE DEVELOPMENT ISSUES When modeling the impacts of the latest trends in CO 2 emissions, projections show that global average temperatures will increase by about 3.5°C by 2100 (Climate Action Tracker, 2012). This is well above the 2°C of warming considered by many to be the threshold for triggering dangerous, runaway climate change (UK Met. Office, 2010). Even with rapid decarbonisation and a green growth revolution, most scientists now consider 2°C to be unobtainable, though this remains a target for political negotiations. Such rapid warming has fundamental implications for development and economic activity. More frequent and severe extreme weather, combined with ever-growing numbers of people and assets in exposed coastal areas and floodplains, will lead to massive economic losses. This is particularly so in Asia, where 125 million people are expected to be exposed to tropical cyclones by 2030, double the number in 1990 (IPCC SREX, 2011; Peduzzi et al., 2011). Significant long-term shifts and inter-annual variability in agricultural yields will amplify food insecurity through unpredictable supply. In a world of global food supply chains, direct climate impacts will have diverse, distant and indirect effects. For example, based on modeling of warming of 4°C, soya bean yield will be at least halved in almost every developing country in which it is grown (Osborne et al., 2009).
This threat of dramatic climate change hangs over a world in which resources are already scarce in many regions, with global scarcity of key resources a real risk under business-as-usual scenarios. By 2030, the world will need at least 50% more food, 45% more energy and 30% more water (United Nations Secretary General High Level Panel on Global Sustainability, 2012). Almost one quarter (23%) of the substantial increase in crop production achieved over the past four decades was due to the expansion of arable land. Agriculture accounts for about 70% of water withdrawals, while water extraction from rivers and lakes has doubled since 1960 (Turral et al., 2011). At present, only 13% of global energy comes from renewable sources, but the imperative of emissions reduction means that renewable energy must increase, with consequences for both land and water resources (Intergovernmental Panel on Climate Change, 2012). Contemporary globalization presents a paradox of inequality. Inequality between countries (by money-metric measures) is declining as a large cohort of developing countries catches up with OECD nations in terms of national income and wealth. There is a corresponding change in the balance of the global middle class, which Kenny and Summer (2011) expect to grow massively in developing countries in the next 20 years. Taking a metric of per capita household consumption of between $10 and $100 of purchasing power parity (PPP) per day, Kenny and Summer (2011) estimate that the global middle class will increase from 1.8 billion people in 2009 to 4.9 billion by 2030 (United Nations, 2011). The world environmental conference that took place in Stockholm in 1972 drew world attention to the inextricable links between development and the environment. Incidentally, that conference took place at the height of the drought in the West African Sahel that caused so much human misery and death in that part of the African continent. Since 1972, the twin issues of economic development and environmental protection have engaged the attention of scientists and non-scientists alike all over the world (Okonkwo, 2000). Environmental problems are a manifestation of disharmony between human activities and the environment. When the human population was small and man's technological ability limited, his activities inflicted little damage on the environment, and such damage was repaired by the regenerative powers of nature. As population and man's technological capabilities increased, man was able to dominate nature temporarily, but at an increasing cost to his own well-being and survival. Environmental problems of various types and intensities have emerged to threaten his well-being and the natural environment which serves as his life-support system. Man has thus realized that development cannot be sustained without sound environmental management; there are, essentially, ecological limits to economic growth. Sustainable development and environmental protection and management are now among the major issues facing mankind. The problem of sustainable development also brought the world together in a convention tagged the United Nations Conference on Environment and Development (UNCED, or Earth Summit) in Rio de Janeiro (Brazil) in 1992. The conference made a critical appraisal of the state of the global environment and proffered strategies for the mitigation of environmental pollution for sustainable development.
To ensure a practicable balance between development and environmental protection conscious efforts must be made as have been highlighted here.These efforts are to be undertaken by all stakeholders (government, community and industries) and should be pursued in a comprehensive manner. RELATIONSHIP BETWEEN ENVIRONMENTAL MANAGEMENT AND DEVELOPMENT There are five basic "paradigms" of the relationship between humans and nature or of "environmental management in development" (Colby 1991).Figure 1 shows graphically the nature of the evolutionary relationships between the five paradigms.Each paradigm has different assumptions about human nature, about nature itself and their interactions. The Frontier Economics treats nature as an infinite supply of physical resources (raw materials, energy, water, soil, air) to be used for human benefit, and as an infinite sink for the by-products of the consumption of these benefits, in the form of various types of pollution and ecological degradation.Deep ecology (Naess, 1973;Sessions and Devall, 1985) advocate the merging of scientific aspects of systems ecology with a "biocentric" (non-anthropocentric) or harmonious view of the relationship between man and nature.Among the basic tenets are intrinsic" biospecies equality, major reductions in human population, bioregional autonomy, promotion of biological and cultural diversity, decentralized planning utilizing multiple value systems, non growth oriented economies, non dominant (simple or low) technology and more use of indigenous management and technological systems.Environmental protection emphasized rational means for assessing the costs and benefits of development activities.This led to the institutionalization of "environmental impact statements".Resource management aim at incorporation of all types of capital and resources-biophysical, human, infrastructural and monetary into calculations of natural accounts, productivity, and policies for development and investment planning.Eco-development (Riddell, 1981;Glaeser, 1984) explicitly sets out to restructure the relationship between society and nature into a " positive sum game" by reorganizing human activities so as to be synergetic with ecosystem processes and services.Ecodevelopment emphasizes biophysical economics model of a thermodynamically open economy embedded within the ecosystem: biophysical resources (energy, materials and ecological processing cycles) flow from the ecosystem into the economy, and degraded (non-useful) energy and other by products (pollution) flow through to the ecosystem as shown in Figure 2. 
In addition to the economic justifications for adopting improved environmental governance practice, there are also strategically defensive justifications for doing so. Negligent corporate environmental stewardship cases have raised the ire of environmental interest groups, government regulators and society in general. Accordingly, improved environmental governance practices are viewed as a way to stave off both public protest and regulatory intervention (Reinhardt, 1999). In short, even for critics who view environmental governance as an overall cost of doing business, there is a degree of acceptance that environmental management practices have strategic defensive value (Palmer et al., 1995; Kiernan, 2001). The framework of Khanna (2005) depicted in Figure 3 summarizes many of the diverse forces that compel firms to adopt improved environmental management techniques. With respect to earlier studies, a great deal is now known about the myriad ways in which improved environmental governance can benefit firms; yet the amalgamation of such knowledge into a functional strategic planning framework is still at an evolutionary stage. This has been highlighted previously by Porter and Kramer (2006), who, in relation to the broader field of corporate social responsibility (CSR), pointed out that the prevailing approaches to CSR are so fragmented and so disconnected from business and strategy as to obscure many of the greatest opportunities for companies to benefit society. NATURE OF ENVIRONMENTAL PROBLEMS The broad divisions of the environment comprise land (terrestrial), water and air (atmosphere). As a result of man's activities, the natural characteristics and features of these divisions have been degraded: deterioration of water bodies and devastation of aquatic life, defacing of the land and deforestation, denaturization of the atmosphere, global warming, etc. Table 1 shows some reported environmental degradation accidents. TYPES AND CAUSES OF ENVIRONMENTAL POLLUTION AND DEGRADATION There are three main types of environmental pollution, namely air, water and land pollution. Air pollution is caused by the emission of gaseous pollutants into the atmosphere. These pollutants include sulphur dioxide, sulphur trioxide, nitrogen dioxide, nitrous oxide, hydrocarbon vapours, photochemical oxidants, particulates, hydrogen sulphide, asbestos dust, herbicides, pesticides, ammonia, carbon monoxide, radioactive substances and combustion products of fossil fuels. Water pollution is caused by the discharge into water bodies of pollutants such as oil and grease, heavy metals, chemical sludges, dyes, acids, bases, hospital wastes and waste chemicals. Land pollution is caused by the accumulation of machinery scraps, municipal solid wastes (MSW), used packaging materials and plastics, industrial sludges, etc. The sources of these environmental pollutants are industries, households, offices and small-scale business centers. Industrial wastes Industrialization is the linchpin of development, but the various disasters which have occurred repeatedly over the last decades implicate industries as major contributors to environmental degradation and pollution problems of various magnitudes (Katsina, 2004).
Industrial wastes and emissions contain toxic and hazardous substances, most of which can be detrimental to human health. These include heavy metals such as lead, cadmium and mercury, toxic organic chemicals like pesticides, polychlorinated biphenyls (PCBs), petrochemicals and phenolic compounds (FEPA, 1991). In Nigeria, most industries discharge untreated and toxic liquid effluents into open drains, rivers, streams, etc. The solid wastes they generate are often dumped in heaps within the premises, while gaseous emissions and particulate matter are freely discharged into the air (Katsina, 2004). The effect of such uncontrolled pollution, as seen in most of the heavily industrialised centers in the country, is to degrade the nearby rivers, streams and underground water systems. Table 2 shows some wastes generated in typical manufacturing processes. Household wastes As households (a group of people sharing common cooking and housekeeping arrangements) carry out normal family activities, wastes are generated. Various factors influence the rate and modes of waste generation; these include population growth, urbanization, industrialization, general economic growth, and the consumption patterns and practices of individuals and families (Nwaedozie, 2001). The Nigeria Environmental Society estimated in 1991 that about 20 kg of solid waste is generated per capita per year in Nigeria. This is equivalent to 3.0 million tonnes in a year, given Nigeria's estimated population of 150 million, and the quantity is expected to keep growing as the population grows. Furthermore, using the United Nations estimate of 0.54 kg/day of waste generation per person in developing countries and a population figure of about 1.5 million people for the Kaduna metropolis, the municipal solid waste (MSW) generated there is about 810 tonnes/day (Hussaini, 2004). The non-biodegradable wastes that are commonly generated in Nigerian households are various forms of plastic ware: bags, wrappers and containers, as well as plastic kitchen and table wares (e.g. buckets, jerry cans, basins, cups and spoons) and clothing articles such as shoes. The biodegradable wastes consist mainly of kitchen wastes. The generation of liquid wastes is essentially minimized by the adoption of the water closet system by households; however, the municipal wastewater that runs through gutters is equally large in volume, hence the need to handle it properly. This wastewater has a high biological oxygen demand. Offices and small-scale business wastes The wastes generated by this category of operators include solid, liquid and gaseous wastes. Waste generators in this category include mechanic workshops, restaurants, small-scale manufacturers, filling stations, retail and wholesale shops, government offices, etc. The typical wastes generated include waste oils and grease, machinery scraps, kitchen wastes, plastic containers, scrap paper and office equipment, packaging materials (both paper and plastic), hydrocarbon vapours and gases, and fuel combustion gases.
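The two solid-waste estimates quoted above follow from simple arithmetic on the figures given in the text, as the short check below shows (the numbers are exactly those cited; nothing new is assumed).

```python
# Back-of-the-envelope check of the two solid-waste estimates quoted above.

# Nigeria-wide estimate (Nigeria Environmental Society, 1991):
per_capita_kg_per_year = 20            # kg of solid waste per person per year
population_nigeria = 150_000_000       # estimated population used in the text
national_tonnes_per_year = per_capita_kg_per_year * population_nigeria / 1000
print(national_tonnes_per_year)        # 3,000,000 tonnes/year, i.e. 3.0 million tonnes

# Kaduna metropolis estimate (UN figure of 0.54 kg/person/day; Hussaini, 2004):
per_capita_kg_per_day = 0.54
population_kaduna = 1_500_000
kaduna_tonnes_per_day = per_capita_kg_per_day * population_kaduna / 1000
print(kaduna_tonnes_per_day)           # 810 tonnes/day
```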
STRATEGIES FOR ENVIRONMENTAL MANAGEMENT The current global magnitude and spread of environmental pollution requires a comprehensive approach to the realization of a balance between development and environmental protection to minimize the adverse effects of urbanization, population growth and industrialization which are typical of developing economies on people"s lives, the following steps need to be taken on wholesome basis. The five basic paradigms of man"s interaction with nature and his environment provide a sound basis for comprehensive approach to environmental management. Governmental sectoral policies The government needs to develop robust and integrated sustainability goals in sectoral policies.These sectors through the goods and services they provide, contribute to meeting human needs but, through their activities, also impinge on the resources available to other sectors and to future generations.The neglect of this interdependence in sectoral policies may jeopardize other policy objectives and reduce total well being.The following sectors need to be critically integrated.a) Energy is a key requirement for economic and social development, but certain forms of energy can damage environmental quality when they are produced, transported and used.Energy accounts for 85% of total greenhouse gas emissions in OECD countries.It also contributes substantially to emissions of sulphur oxides, volatile organic compounds and particulates.The challenge for energy policy is that of reducing the environmental costs of energy production and use while extending access to basic services in developing countries and preserving energy security.b) Transportation contributes to economic growth and to meeting social needs for access and mobility.But it also contributes to environmental degradation, depletion of non renewable resources, and damage to and loss of human health.The sectoral policy should address:( i) a better integration of transport and land planning policies, (ii) improvements in the use of transport infrastructure, (iii) shifts of demand for new vehicles towards more fuel efficient ones (through fiscal incentives) c) Past agricultural growth has been achieved with fewer workers and less land, but using more water, chemicals and machinery.This has led to increased pollution and natural resource use, greater homogenization of landscape and destruction of wildlife habitat.The policy direction should address (i) the strengthening of the agricultural knowledge system, to encourage farmers to adopt sustainable methods, (ii) measures to facilitate the structural adjustment of affected workers and communities, (iii) increased use of pollution charges, to correct environmental damage caused by agriculture. Public education The inter-governmental conference on environmental education held in Tblisi (in former USSR) on October 1997 stressed the need for an all -out education programme on environmental problems if nations are to be saved from environmental disasters (Atachia, 1989), it is then necessary to integrate the environmental education into the formal education system, that is, from pre-primary level to tertiary level of education.This will provide the necessary knowledge, understanding, values and skills required by the general public and many occupational groups for their participation in devising solutions to environmental questions.It is also necessary to evolve a vigorous non formal education programme for the "man-in-the street". 
Non-governmental organizations (NGOs) The National Policy on the environment recognize the need to include NGOs and community based organization (CBOs) in the implementation of its policy objectives (FEPA, 1991).The emerging global rapid, complex and often unpredictable political, institutional, environmental, demographic, social and economic changes has brought to fore the need for alternative solutions to the man"s problems and this will essentially revolve around NGOs and CBOs (Achi, 2001). The establishment of specialized NGOs with thrust towards solving environmental problems should be encouraged.Trust funds should also be established by corporate organization and well meaning individuals from which these NGOs can draw funds for their projects. Effective town planning Government at all levels should embark on development of functional and effective master plans for all centres of development, towns and villages.This will go a long way towards solving many ecological and environmental degradations being experienced.These master plans must also include adequate provision of central waste management schemes for handling of liquid, solid and gaseous effluents from the various human activities.The central waste handling facilities should be part of the plans for housing estates, markets, shopping complexes, industrial areas and designated sections of the towns and cities. It has been observed that this approach is grossly neglected in the recent developments across the country. Enforcement of existing environmental Regulations Pollution control and waste management objectives can be attained through a variety of policy instruments.These instruments can be categorized into: a.The command-and-control or direct regulation along with monitoring and enforcement.b.The economic strategies. The regulatory approach generally requires government to set health or ecology based ambient environmental objectives and specify the standards or amount of pollutants that can be discharged or the technology by which polluters should meet these objectives.In most cases, the command-and-control approach also specifies schedules or approach to the standards, permitting and enforcement procedure for facilities, liability assignment and penalties for non-compliance.The responsibility for defining and enforcing the standards and other requirements is shared in legislatively specified ways between the national, state and local governments. The major environmental laws in Nigeria could be said to be contained in the following legislations and their subsidiary legislations: These standards however should be continuously updated to meet international standards and accommodate emerging challenges imposed by technological developments, socio-political and cultural changes.Developing economies face challenges of being dumping grounds of the more developed economies and poor implementation of development plans, which lead to inundation of these economies with goods which sometimes are not environmentally friendly, there thus have a robust regulatory framework that will respond adequately to these challenges. 
The economic approach, on the other hand, incorporating among other things the polluter-pays principle (PPP), is usually adopted to introduce more flexibility, efficiency and cost-effectiveness into pollution control measures. Under the PPP, a polluter pays a financial penalty, or receives a financial reward for lower levels of pollution. Strict enforcement of the existing regulations and their continuous upgrading will assure better environmental protection. Furthermore, the polluter-pays principle is expected to contribute to the pool of funds for government interventions in the management of environmental and ecological degradation.

Waste minimization and waste recycling

Waste minimization is the reduction of waste at source through technological innovation and behavioural change. Waste reduction is considered the topmost priority in the waste management hierarchy. Recycling involves the basic steps of separating and processing recyclable materials from the waste stream and reselling the reprocessed items. Recyclable materials include glass, paper, cardboard, plastic, metals, etc.; recycling is the third most preferable option in the waste management hierarchy. The waste management scheme and the potential for waste recycling activities are shown in Figure 4. Waste-to-wealth is a product of waste recycling. In developing economies that face the twin problems of high unemployment and underemployment, recycling can create a window of opportunity for engaging the population in the productive creation of goods and services, which can contribute immensely to the GDP of the economy. Waste minimization and recycling strategies should be practiced in both the household and industrial sectors to achieve a safe environment.

CONCLUSION

Environmental management is a complex activity and needs a global and comprehensive approach to achieve sustainable development. Comprehensive strategies for management of the environment have been presented. Nigeria, being a fast-developing nation, should adopt these strategies to ensure that her developmental efforts are achieved in an environmentally friendly manner. The need for government action to limit environmental degradation is emphasized; however, it is expedient that the policy framework integrate all the sectors. The five basic paradigms of the relationship between nature, environmental management and development were employed to proffer strategies for comprehensive environmental management in developing nations like Nigeria. The socio-economic potentials of environmental management were highlighted. It has been shown that effective environmental management also provides a window of economic emancipation through engagement in waste-to-wealth activities, which will improve the GDP of the nation. It has been suggested that environmental management, being dynamic in nature, will require continuous review so as to encourage best practices.

Figure 2. Evolution of environment and development paradigms.
Figure 4. Waste-to-wealth activity and processing profiles.
Table 1. Some reported environmental degradation cases.
Table 2. Typical waste materials generated from industrial manufacturing.
5,425
2013-08-31T00:00:00.000
[ "Environmental Science", "Economics" ]
Tourist Sentiment Mining Based on Deep Learning Mining the sentiment of the user on the internet via the context plays a significant role in uncovering the human emotion and in determining the exactness of the underlying emotion in the context. An increasingly enormous number of user-generated content (UGC) in social media and online travel platforms lead to development of data-driven sentiment analysis (SA), and most extant SA in the domain of tourism is conducted using document-based SA (DBSA). However, DBSA cannot be used to examine what specific aspects need to be improved or disclose the unknown dimensions that affect the overall sentiment like aspect-based SA (ABSA). ABSA requires accurate identification of the aspects and sentiment orientation in the UGC. In this book chapter, we illustrate the contribution of data mining based on deep learning in sentiment and emotion detection. Introduction Since the world has been inundated with the increasing amount of tourist data, tourism organizations and business should keep abreast about tourist experience and views about the business, product and service. Gaining insights into these fields can facilitate the development of the robust strategy that can enhance tourist experience and further boost tourist loyalty and recommendations. Traditionally, business rely on the structured quantitative approach, for example, rating tourist satisfaction level based on the Likert Scale. Although this approach is effective to prove or disprove existing hypothesis, the closed ended questions cannot reveal exact tourist experience and feelings of the products or services, which hampers obtaining insights from tourists. Actually, business have already applied sophisticated and advanced approaches, such as text mining and sentiment analysis, to disclose the patterns hidden behind the data and the main themes. Sentiment analysis (SA) has been used to deal with the unstructured data in the domain of tourism, such as texts, images, and video to investigate decision-making process [1], service quality [2], destination image and reputation [3]. As for the level of sentiment analysis, it has been found that most extant sentiment analysis in the domain of tourism is conducted at document level [4][5][6][7]). Document-based sentiment analysis (DBSA) regards the individual whole review or each sentence as an independent unit and assume there is only one topic in the review or in the sentence. However, this assumption is invalid as people normally express their semantic orientation on different aspects in a review or a sentence [8]. For example, in the sentence "we had impressive breakfast, comfortable bed and friendly and professional staff serving us", the aspects discussed here are "breakfast", "bed" and "staff" and the users give positive comments on these aspects ("impressive", "comfortable" and "friendly and professional"). Since the sentiment obtained through DBSA is at coarse level, aspect-based sentiment analysis (ABSA) has been suggested to capture sentiment tendency of finer granularity. To obtain the sentiment at the finer level, ABSA has been proposed and developed over the years. ABSA normally involves three tasks, the extraction of opinion target (also known as the "aspect term"), the detection of aspect category and the classification of sentiment polarity. Traditional methods to extract aspects rely on the word frequency or the linguistic patterns. 
Nevertheless, it cannot identify infrequent aspects and heavily depends on the grammatical accuracy to manipulate the rules [9]. As for the detection of sentiment polarity, supervised machine learning approaches, like Maximum Entropy (ME), Conditional Random Field (CRF) and Support Vector Machine (SVM). Although machine learning-based approaches have achieved desirable accuracy and precision, they require huge dataset and manual training data. In addition, the results cannot be duplicated in other fields [10]. To overcome these shortcomings, ABSA of deep learning (DL) approaches has the advantage of automatically extracting features from data [9]. Extant studies based on DL methods in tourism have investigated and explored tourist experiences in economy hotel [11], the identification of destination image [12], review classification [13]. Although DL methods have been applied in tourism, ABSA in tourism is scant. Therefore, this study reviewed sentiment analysis at aspect level conducted by DL approaches, compared the performance of DL models, and explored the model training process. With the references of surveys about DL methods [9,14], this study followed the framework of ABSA proposed by Liu (2011) [8] to achieve the following aims: (1) provide an overview of the studies using DL-based ABSA in tourism for researchers and practitioners; (2) provide practical guidelines including data annotation, pre-processing, as well as model training for potential application of ABSA in similar areas; (3) train the model to classify sentiments with the state-of-art DL methods and optimizers using datasets collected from TripAdvisor. This paper is organized as follows: Section 2 reviews the cutting-edge techniques for ABSA, studies using DL for NLP tasks in tourism, and research gap; Section 3 presents the annotation schema of the given corpus and DL methods used in this study; Section 4 describes the details of annotation results, model training, and the experiment results. Section 5 provides the conclusions and future extensions. Literature review An extensive literature review of the state-of-art techniques for ABSA and the studies using DL in tourism is provided in this section. Input vectors To convert the NLP problems into the form that computers can deal with, the texts are required to be transformed into a numerical value. In ML-based approaches, One-hot and Counter Vectorizer are commonly used. One-hot encoding can realize a token-level representation of a sentence. However, the use of One-hot encoding usually results in high dimension issues, which is not computationally efficient [15]. Another issue is the difficulty of extracting meanings as this approach assumes that words in the sentence are independent, and the similarities cannot be measured by distance nor cosine-similarity. As for Counter Vectorizer, although it can convert the whole sentence into one vector, it cannot consider the sequence of the words and the context. Nevertheless, in DL based approaches, pre-trained word embeddings have been proposed in [16,17]. Word embedding, or word representation, refers to the learned representation of texts in which the words with identical meanings would have similar representation. It has been proved that the use of word embeddings as the input vectors can make a 6-9% increase in aspect extraction [18] and 2% in the identification of sentiment polarity [19]. 
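As a concrete illustration of the input-vector construction discussed above, the sketch below builds a dense sentence matrix from pre-trained GloVe vectors instead of sparse one-hot rows. It is a minimal sketch, not the pipeline of any cited study: the embedding file path and the out-of-vocabulary handling (zero vectors) are assumptions for illustration.

```python
import numpy as np

# Hypothetical path to pre-trained GloVe vectors (one token followed by 300 floats per line).
GLOVE_PATH = "glove.840B.300d.txt"

def load_glove(path, vocab, dim=300):
    """Load pre-trained vectors only for the words that appear in our vocabulary."""
    table = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            parts = line.rstrip().split(" ")
            if parts[0] in vocab and len(parts) == dim + 1:
                table[parts[0]] = np.asarray(parts[1:], dtype=np.float32)
    return table

def sentence_matrix(tokens, table, dim=300):
    """Dense input matrix: one row per token, unlike a vocabulary-sized one-hot row."""
    return np.stack([table.get(t.lower(), np.zeros(dim, dtype=np.float32)) for t in tokens])

tokens = "we had impressive breakfast and friendly staff".split()
embeddings = load_glove(GLOVE_PATH, set(tokens))
print(sentence_matrix(tokens, embeddings).shape)   # (7, 300) instead of (7, vocab_size)
```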
Pre-trained word embeddings are favored because random initialization could cause stochastic gradient descent (SGD) to become trapped in local minima [20]. Based on the neural network language model, a feedforward architecture combining a linear projection layer and a non-linear hidden layer could learn word vector representations together with a statistical language model [21]. Word2Vec [16] proposed the skip-gram and continuous bag-of-words (CBOW) models. Given a window size, skip-gram predicts the context from the given word, while CBOW predicts the word from its context. Because word frequency is a suitable criterion for deriving classes in neural network language models, frequent words are assigned short binary codes in Huffman trees; this practice in Word2Vec reduces the number of output units that need to be evaluated. However, the window-based approaches of Word2Vec do not operate on the co-occurrence statistics of the text and therefore do not exploit the large amount of repetition in the corpus. To capture a global representation of the words across all sentences, GloVe takes advantage of the nonzero elements in a word-word co-occurrence matrix [17]. Although the models discussed above perform well on similarity tasks and named entity recognition, they cannot cope with polysemous words. In more recent developments, Embeddings from Language Models (ELMo) [22] and Bidirectional Encoder Representations from Transformers (BERT) [23] can identify context-sensitive features in the corpus. The main difference between the two architectures is that ELMo is feature-based, while BERT is deeply bidirectional. Specifically, in ELMo the contextual representation of each token is obtained by concatenating the left-to-right and right-to-left representations. In contrast, BERT applies masked language modeling (MLM) to acquire pre-trained deep bidirectional representations. MLM randomly masks certain tokens of the input and predicts the masked tokens based only on their context. Additionally, BERT is capable of addressing long-range dependencies in text. Researchers have also combined additional features with word embeddings to produce more pertinent results. These features include Part-Of-Speech (POS) and chunk tags, and commonsense knowledge. It has been observed that aspect terms are usually nouns or noun phrases [8]. The original word embeddings of the texts are concatenated with k-dimensional binary vectors that represent the k POS or chunk tags, and the concatenated embeddings are fed into the models (Do, Prasad, Maag, and Alsadoon, 2019) [9]. It has been shown that using POS tags as input can improve the performance of aspect extraction, with gains from 1% [18,20] to 4% [24]. Apart from POS, concepts closely related to affect have been suggested as additional embeddings [25,26]. POS tags capture the grammatical role of the words in a corpus, while concepts extracted from SenticNet emphasize multi-word expressions and the dependency relations between clauses. For example, the multi-word expression "win lottery" can be related to the emotion "Arise-joy", and the single-word expression "dog" is associated with the property "Isa-pet" and the emotion "Arise-joy" [26]. After parsing with SenticNet, the obtained concept-level information (properties and emotions) is embedded into deep neural sequential models.
The performance of the Long Short-Term Memory (LSTM) [27] combined with SenticNet exceeded the baseline LSTM [26]. DL methods for ABSA This section reviews the DL methods used for ABSA, including Convolutional Neural Network (CNN), Recurrent Neural Network (RNN), Attention-based RNN, and Memory Network. CNN CNN can learn to capture the fixed-length expressions based on the assumption that keywords usually include the aspect terms with few connections of the positions [28]. Besides, as CNN is a non-linear model, it usually outperforms the linearmodel and rarely relies on language rules [29]. A local feature window of 5 words was firstly created for each word in the sentence to extract the aspects. Then, a seven-layer of CNN was tested and generated better results [29]. To capture the multi-word expressions, the model proposed [30] contained two separate convolutional layers with non-linear gates. N-gram features can be obtained by the convolutional layers with multiple filters. [13] put position information between the aspect words and the context words into the input layer in CNN and introduced the aspect-aware transformation parts. [31] integrated the attention mechanism with a convolutional memory network. This proposed model can learn multi-word expressions in the sentence and identify long-distance dependency. Apart from simply extracting the aspects alone, CNN can identify the sentiment polarity at the same time, which can be regarded as multi-label tasking classification or multitasking issues. As for researchers who considered ABSA multi-label tasking classification, a probability distribution threshold was applied to select the aspect category and the aspect vector was concatenated with the word embedding, which was then further performed using CNN. [32] combined the CNN with the nonlinear CRF to extract the aspect, which was then concatenated with the word embeddings and fed into another CNN to identify the sentiment polarity. [33] proposed a CNN with two levels that integrated the aspect mapping and sentiment classification. Compared with conventional ML approaches, this approach can lessen the feature engineering work and elapsed time [9]. It should be noticed that the performance of multitasking CNN does not necessarily outperform multitasking methods [19]. RNN and attention-based RNN RNN has been applied for the ABSA and SBSA in the UGC. RNN models use a fixed-size vector to represent one sequence, which could be a sentence or a document, to feed each token into a recurrent unit. The main differences between CNN and RNN are: (1) the parameters of different layers in RNN are the same, making a fewer number of parameters required to be learned; (2) since the outputs from RNN relies on the prior steps, RNN can identify the context dependency and suitable for texts of different lengths [34][35][36]). However, the standard RNN has prominent shortcomings of gradient explosion and vanishing, causing difficulties to train and fine-tune the parameter during the process of prorogation [34]. LSTM and Gated Recurrent Unit (GRU) [37] have been proposed to tackle such issues. Also, Bi-directional RNN (Bi-RNN) models have been proposed in many studies [38,39]. The principle behind Bi-RNN is the context-aware representation can be acquired by concatenating the backward and the forward vectors. Instead of the forward layer alone, a backward layer was combined to learn from both prior and future, enabling Bi-RNN to predict by using the following words. 
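A bidirectional recurrent encoder of the kind described above takes only a few lines in PyTorch; the sketch below shows how forward and backward hidden states are concatenated into a context-aware representation. The dimensions are illustrative assumptions and this is not the configuration of any specific cited study.

```python
import torch
import torch.nn as nn

class BiGRUEncoder(nn.Module):
    """Bidirectional GRU: each position gets [forward; backward] context-aware states."""
    def __init__(self, vocab_size, embed_dim=300, hidden_dim=150):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim)
        self.rnn = nn.GRU(embed_dim, hidden_dim, batch_first=True, bidirectional=True)

    def forward(self, token_ids):                 # (batch, seq_len)
        embedded = self.embedding(token_ids)      # (batch, seq_len, embed_dim)
        outputs, _ = self.rnn(embedded)           # (batch, seq_len, 2 * hidden_dim)
        return outputs                            # forward and backward states concatenated

encoder = BiGRUEncoder(vocab_size=10_000)
hidden_states = encoder(torch.randint(0, 10_000, (8, 40)))  # 8 sentences, 40 tokens each
print(hidden_states.shape)                                   # torch.Size([8, 40, 300])
```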
It has been proved that the Bi-RNN model achieved better results than LSTM in the highly skewed data in the task of aspect category detection [40]. Especially, Bi-directional GRU is capable of extracting aspects and identifying the sentiment in the meanwhile [23,41] by using Bi-LSTM-CRF and CNN to extract the aspects in the sentence that has more than one sentiment targets. Another drawback of RNN is that RNN encodes peripheral information, especially when it is fed with information-rich texts, which would further result in semantic mismatching problems. To tackle the issue, the attention mechanism is proposed to capture the weights from each lower level, which are further aggregated as the weighted vector for high-level representation [42]. In doing so, the attention mechanism can emphasize aspects and the sentiment in the sentence. Single attention-based LSTM with aspect embeddings [43], and position attentionbased LSTM [44], syntactic-aware vectors [45] were used to capture the important aspects and the context words. The aspect and opinion terms can be extracted in the Coupled Multi-Layer Attention Model based on GRU [46] and the Bi-CNN with attention [47]. These frameworks require fewer engineering features compared with the use of CRF. Memory network The development of the deep memory network in ABSA was originated from the multi-hop attention mechanism that applies the exterior memory to compute the influence of context words on the given aspects [36]. A multi-hop attention mechanism was set over an external memory that can recognize the importance level of the context words and can infer the sentiment polarity based on the contexts. The tasks of aspect extraction and sentiment identification can be achieved simultaneously in the memory network in the model proposed by [13]. [13] used the signals obtained in aspect extraction as the basis to predict the sentiment polarity, which would further be computed to identify the aspects. Memory networks can tackle the problems that cannot be addressed by attention mechanism. To be specific, in certain sentences, the sentiment polarity is dependent on the aspects and cannot be inferred from the context alone. For example, "the price is high" and "the screen resolution is high". Both sentences contain the word "high". When "high" is related to "price", it refers to negative sentiment, while it represents positive sentiment when "high" is related to "screen resolution". [48] proposed a target-sensitive memory network proposed six techniques to design target-sensitive memory networks that can deal with the issues effectively. Studies using DL methods in tourism and research gap To obtain finer-grained sentiment of tourists' experiences in economy hotels in China, [11] used Word2Vec to obtain the word embeddings as the model input, and bidirectional LSTM with CRF model was used to train and predict the data. The whole model includes the text layer, POS layer, connection layer, and output layer, in which CRF was used for data output, reaching an accuracy of 84%. [49] applied GloVe to pre-train the word embedding. To improve the performance, feature vectors, like sentiment scores, temporal intervals, reviewer profiles, were added into CNN models. Their results proved that temporal intervals made a greater contribution than the sentiment score and review profile for the managers to respond to the reviews. 
[50] explored the model that built CNN on LSTM and proved that the combined model outperformed the single CNN or LSTM model, with an improvement of 3.13% and 1.71% respectively. To summarize, DL methods have been extensively used to perform ABSA. However, ABSA in the domain of tourism is little in the literature. Therefore, this study aimed at conducting ABSA using a dataset collected from TripAdvisor for predicting sentiments. Based on the literature review, it can be observed that RNN models especially attention-based RNN models achieved better performance than CNN models in terms of accuracy. Therefore, attention-based gated RNN models including LSTM and GRU were used in this study, which is summarized in the following section. [14] conducted a series of ABSA on Semeval datasets [51,52] using various DL methods. The experimental results confirmed that RNN with an attention-based mechanism obtained higher accuracies but relatively low precisions and recalls. This is because the Semeval datasets are naturally unbalanced datasets in which the fraction of positive sentiment samples is significantly higher than the fractions of neutral and negative sentiment samples, which indicates the importance of fractions of sentiment samples in the datasets. Inspired by ABSA on Semeval datasets, four datasets with different fractions of sentiment samples were resampled from the dataset of TripAdvisor hotel reviews to investigate the effect of sample imbalance on the model performance. Also, optimizers to minimize loss play a key role in model training. Therefore, three optimizers including the state-of-art optimizer were used in this study to compare their performance. Corpora design Based on the consideration and the purpose of the study, the corpora in this study will be completely in English and will include reviews collected from casino resorts in Macao. A self-designed tool programmed in Python was implemented to acquire all the URLs, which were first stored and further used as the initial page to crawl all the UGC that belongs to the hotel. The corpus includes 61544 reviews of 66 hotels. The length of the reviews varied greatly, with a maximum of 15 sentences, compared to the minimum of one sentence. In terms of the size of the corpora that requires annotation, as there is no clear instruction regarding the size of the corpora, this study referred to Liu's work and SemEval's task. In machine learning based studies, it is reasonable to consider that the corpus that has 800-1000 aspects would be sufficient, while for deep-learning based approach, we think at least 5000 aspects in total would be acceptable. As the original data was annotated first to be further analyzed, 1% of the reviews were randomly sampled from the corpus. Therefore, 600 reviews that contain 5506 sentences were selected for ABSA in this study. Annotation Although previous works annotated the corpora and performed sentiment analysis, they did not reveal the annotation principles [51,53] and the categories are rather coarse. For example, [53] used pre-defined categories to annotate the aspects of the restaurant. The categories involved "Food, Service, Price, Ambience, Anecdotes, and Miscellaneous", which did not annotate the aspects of finer levels. In addition, the reliability and validity of the annotation scheme have not been proved. As the training of the models discussed above requires the annotation of domain-specific corpora, this study referred to [54]. 
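The 1% sampling step for the annotation corpus described above is straightforward to reproduce; the sketch below assumes the crawled reviews are stored in a CSV file, and both the file name and column layout are hypothetical.

```python
import pandas as pd

# Hypothetical file produced by the crawler: one row per review, ~61,544 rows.
reviews = pd.read_csv("tripadvisor_macao_hotels.csv")

# Randomly sample 1% of the reviews for manual aspect-sentiment annotation.
annotation_pool = reviews.sample(frac=0.01, random_state=42)
annotation_pool.to_csv("annotation_pool.csv", index=False)
print(f"{len(annotation_pool)} reviews selected for annotation")
```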
The design of the annotation schema calls for the identification of aspect-sentiment pairs. Specifically, A is the collection of aspects a_j (with j = 1, …, s). A sentiment polarity p_k (with k = 1, …, t) is then attached to each aspect in the form of a tuple (a_j, p_k). To ensure reliability and validity, Cohen's kappa, Krippendorff's alpha, and Inter-Annotator Agreement (IAA) are used in this study; they are calculated with the agreement package in NLTK. The indicators are used to measure (1) the agreement on the entire aspect-sentiment pair and (2) the agreement on each independent category.

LSTM unit

The LSTM unit [27] overcomes the gradient vanishing and exploding issues of the standard RNN. The LSTM unit consists of forget, input, and output gates, as well as a memory cell state. Instead of the recurrent unit computing a weighted sum of the inputs and applying an activation function, the LSTM unit maintains a memory cell c_t at time t. Each LSTM unit is computed as follows:

f_t = σ(W_f [h_{t−1}; x_t] + b_f)  (2)
i_t = σ(W_i [h_{t−1}; x_t] + b_i)  (3)
o_t = σ(W_o [h_{t−1}; x_t] + b_o)  (4)
c_t = f_t ⊙ c_{t−1} + i_t ⊙ tanh(W_c [h_{t−1}; x_t] + b_c)  (5)
h_t = o_t ⊙ tanh(c_t)  (6)

where W_f, W_i, W_o, W_c ∈ ℝ^{d×2d} are the weight matrices and b_f, b_i, b_o, b_c ∈ ℝ^d are the bias vectors to be learned, parameterizing the transformations of the three gates; d is the dimension of the word embedding; σ is the sigmoid activation function and ⊙ denotes element-wise multiplication; x_t and h_t are the word embedding vector and the hidden state at time t, respectively. The forget gate decides the extent to which the existing memory is kept (Eq. (2)), while the extent to which new memory is added to the memory cell is controlled by the input gate (Eq. (3)). The memory cell is updated by partially forgetting the existing memory and adding new memory content (Eq. (5)). The output gate controls how much of the memory content is exposed by the unit (Eq. (4)). With its three gates the LSTM unit can decide whether to keep the existing memory. Intuitively, if the LSTM unit detects an important feature in an input sequence at an early stage, it can carry this information (the existence of the feature) over a long distance, hence capturing potential long-distance dependencies.

GRU

A Gated Recurrent Unit (GRU) that adaptively remembers and forgets was proposed by [37]. Compared with the LSTM unit, a GRU has reset and update gates that modulate the flow of information inside the unit without a separate memory cell. Each GRU is computed as follows:

h_t = (1 − z_t) ⊙ h_{t−1} + z_t ⊙ h̃_t  (7)
r_t = σ(W_r [h_{t−1}; x_t])  (8)
z_t = σ(W_z [h_{t−1}; x_t])  (9)
h̃_t = tanh(W_h [r_t ⊙ h_{t−1}; x_t])  (10)

The reset gate filters the information from the previous hidden layer, as the forget gate does in the LSTM unit (Eq. (8)); it effectively allows irrelevant information to be dropped, giving a more compact representation. The update gate decides how much the GRU updates its state (Eq. (9)), which is similar to the LSTM. However, the GRU fully exposes its state at every time step and has no mechanism to control the degree of exposure.

Attention mechanism

The standard LSTM and GRU cannot detect which part of a sentence is important for aspect-level sentiment classification. To address this issue, [43] proposed an attention mechanism that allows the model to capture the key part of a sentence when different aspects are concerned. A gated RNN model with the attention mechanism produces an attention weight vector α and a weighted hidden representation r.
r = H α^T  (13)

where H ∈ ℝ^{d_h×N} is the matrix of hidden vectors, d_h is the dimension of the hidden layer and N is the length of the given sentence; v_a ∈ ℝ^{d_a} is the aspect embedding, and e_N ∈ ℝ^N is an N-dimensional vector of ones; ⊗ represents element-wise multiplication; W_h ∈ ℝ^{d×d}, W_v ∈ ℝ^{d_a×d_a}, W_m ∈ ℝ^{d+d_a} and α ∈ ℝ^N are the parameters to be learned. The feature representation of a sentence with respect to an aspect, h*, is given by

h* = tanh(W_p r + W_x h_N)  (14)

where h_N is the last hidden vector, h* ∈ ℝ^d, and W_p, W_x ∈ ℝ^{d×d} are parameters to be learned. To take better advantage of the aspect information, the aspect embedding is appended to each word embedding so that it contributes to the attention weights. The hidden layer can therefore gather information from the aspect, and the interdependence of words and aspect can be modeled when computing the attention weights.

Annotation results

In the first trial, Cohen's kappa and Krippendorff's alpha were 0.80 and 0.78, respectively, which is acceptable since these scores measure agreement on the overall attribute and polarity. To identify the category with the largest variation between the two coders, Cohen's kappa was calculated separately for each label. The results (Table 1) indicated that polarity had the highest agreement, while the attribute showed lower agreement between the two annotators. At the end of the first trial, both coders discussed the issues they had encountered while annotating the corpus and revised the preliminary annotation schema; the problems included how to handle sentences for which aspects are difficult to assign. With the revised annotation schema, the coders conducted a second trial, in which Cohen's kappa for the attribute and the polarity reached 0.89 and 0.91, respectively. In addition, Cohen's kappa and Krippendorff's alpha for the aspect-sentiment pair, computed at the end of the second trial, were 0.82 and 0.81 respectively, indicating that the annotation schema used in this study is valid.

Model training

The experiment was conducted on the dataset of TripAdvisor hotel reviews, which contains 5506 sentences; the numbers of positive, neutral, and negative sentiment samples are 3032, 2986, and 2725, respectively. Given a dataset, maximizing the predictive performance and training efficiency of a model requires finding the optimal network architecture and tuning the hyper-parameters. In addition, the composition of the samples can significantly affect model performance. To investigate the effect of sentiment sample fractions on model performance, four sub-datasets of 4000 sentiment samples with different sentiment fractions were resampled from the TripAdvisor hotel dataset as train sets: one is a balanced dataset and three are unbalanced datasets in which positive, neutral, and negative samples dominate, respectively. It is also observed that the average number of aspects in a sentence is about 1.4 and the average length of an aspect is about 8.0 characters, indicating that one sentence normally contains more than one aspect and that an aspect contains on average eight characters. The numbers of aspects in the train and test sets are more than 850 and 320, respectively, which confirms the diversity of aspects in the dataset of TripAdvisor hotel reviews. For each train set, 20% of the reviews were selected as the validation set. Attention-based gated RNN models, including LSTM and GRU, were used for ABSA.
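The aspect-aware attention step of Eqs. (13)-(14) can be sketched in a few lines of PyTorch. The module below follows the AT-LSTM/ATAE-LSTM formulation described in the text, but the layer names and dimensions are illustrative assumptions rather than the exact configuration used in the experiments.

```python
import torch
import torch.nn as nn

class AspectAttention(nn.Module):
    """Aspect-aware attention over LSTM/GRU hidden states (cf. Eqs. (13)-(14))."""
    def __init__(self, hidden_dim=300, aspect_dim=100):
        super().__init__()
        self.W_h = nn.Linear(hidden_dim, hidden_dim, bias=False)
        self.W_v = nn.Linear(aspect_dim, aspect_dim, bias=False)
        self.w_m = nn.Linear(hidden_dim + aspect_dim, 1, bias=False)  # scores each position
        self.W_p = nn.Linear(hidden_dim, hidden_dim, bias=False)
        self.W_x = nn.Linear(hidden_dim, hidden_dim, bias=False)

    def forward(self, H, v_a):
        # H: (batch, N, hidden_dim) hidden states; v_a: (batch, aspect_dim) aspect embedding.
        N = H.size(1)
        v_rep = self.W_v(v_a).unsqueeze(1).expand(-1, N, -1)     # repeat aspect for each word
        M = torch.tanh(torch.cat([self.W_h(H), v_rep], dim=-1))  # (batch, N, hidden+aspect)
        alpha = torch.softmax(self.w_m(M).squeeze(-1), dim=-1)   # attention weights over words
        r = torch.bmm(alpha.unsqueeze(1), H).squeeze(1)          # weighted representation, Eq. (13)
        h_star = torch.tanh(self.W_p(r) + self.W_x(H[:, -1]))    # sentence-aspect feature, Eq. (14)
        return h_star, alpha

attn = AspectAttention()
H = torch.randn(8, 40, 300)       # batch of 8 sentences, 40 hidden states each
v_a = torch.randn(8, 100)
h_star, alpha = attn(H, v_a)
print(h_star.shape, alpha.shape)  # torch.Size([8, 300]) torch.Size([8, 40])
```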
Attention-based GRU/LSTM without and with aspect embedding are referred to as AT-GRU/AT-LSTM and ATAE-GRU/ATAE-LSTM, respectively. The details of the configurations and the hyper-parameters used are summarized in Table 2. In the experiments, all word embeddings, with a dimension of 300, were initialized with GloVe [17]; the embeddings were pre-trained on an unlabeled corpus of about 840 billion tokens. The dimensions of the hidden layer vectors and the aspect embedding are 300 and 100, respectively. The weight matrices are initialized from the uniform distribution U(−0.1, 0.1), and the bias vectors are initialized to zero. The learning rate and mini-batch size are 0.001 and 16, respectively. The best optimizer and number of epochs were selected from {SGD, Adam, AdaBelief} and {100, 300, 500}, respectively, via grid search. The parameters giving the best performance on the validation set were kept, and the resulting model was used for evaluation on the test set. The aim of training is to minimize the cross-entropy error between the target sentiment distribution y and the predicted sentiment distribution ŷ. However, overfitting is a common issue during training. To avoid overfitting, regularization procedures including L2-regularization, early stopping and dropout were used in the experiments. L2-regularization adds the squared magnitude of the coefficients as a penalty term to the loss function:

loss = −Σ_i Σ_j y_i^j log ŷ_i^j + λ‖θ‖²

where i is the index of the review; j is the index of the sentiment class (the classification in this paper is three-way); λ is the L2-regularization term, which modifies the learning rule to multiplicatively shrink the parameter set on each step before performing the usual gradient update; and θ is the parameter set. Early stopping is another commonly used and effective way to avoid overfitting: it reliably happens that the training error decreases steadily over time while the validation error eventually begins to rise again, so early stopping terminates training when no parameters have improved on the best recorded validation error for a pre-specified number of iterations. Additionally, dropout is a simple way to prevent a neural network from overfitting; it refers to temporarily removing cells and their connections from the network [55]. In an RNN model, dropout can be applied to the input, output, and hidden layers. In this study, only the output layer, with a dropout ratio of 0.5, was followed by a linear layer to transform the feature representation into the conditional probability distribution. Optimizers are the algorithms used to update the attributes of the neural network, such as the parameter set and the learning rate, in order to reduce the loss and provide the most accurate results possible. Three optimizers, namely SGD [56], Adam [57], and AdaBelief [58], were used in the experiment to search for the best performance. The standard SGD uses a randomly selected batch of samples from the train set to compute the derivative of the loss, on which the update of the parameter set depends. The updates of standard SGD are noisy because the derivative does not always point toward the minimum; as a result, standard SGD may take longer to converge and can get stuck in local minima. To overcome this issue, SGD with momentum was proposed by Polyak [56] to denoise the derivative by incorporating previous gradient information into the current update of the parameter set.
Given a loss function f(θ) to be optimized, SGD with momentum is given by

m_t = β m_{t−1} + (1 − β) g_t,  θ_{t+1} = θ_t − α m_t

where α > 0 is the learning rate; β ∈ [0, 1] is the momentum coefficient, which decides the degree to which the previous gradients contribute to the update of the parameter set; and g_t = ∇f(θ_t) is the gradient at θ_t. Both Adam and AdaBelief are adaptive-learning-rate optimizers. Adam records the first moment of the gradient, m_t, which is similar to SGD with momentum, and at the same time the second moment of the gradient, v_t. m_t and v_t are updated using the exponential moving average (EMA) of g_t and g_t², respectively:

m_t = β₁ m_{t−1} + (1 − β₁) g_t
v_t = β₂ v_{t−1} + (1 − β₂) g_t²

where β₁ and β₂ are exponential decay rates. The second moment s_t in AdaBelief is instead updated using the EMA of (g_t − m_t)², an easy modification of Adam that requires no extra parameters:

s_t = β₂ s_{t−1} + (1 − β₂) (g_t − m_t)²

The update rules for the parameter set using Adam and AdaBelief are given by Eqs. (23) and (24), respectively:

θ_{t+1} = θ_t − α m_t / (√v_t + ε)  (23)
θ_{t+1} = θ_t − α m_t / (√s_t + ε)  (24)

where ε is a small number, typically set to 10⁻⁸. Specifically, the update direction in Adam is m_t/√v_t, while the update direction in AdaBelief is m_t/√s_t. Intuitively, 1/√s_t is the "belief" in the observation: viewing m_t as the prediction of g_t, AdaBelief takes a large step when the observation g_t is close to the prediction m_t, and a small step when the observation deviates greatly from the prediction. It is noted that the best models on the validation set were obtained by returning to the parameter set at the point in time with the lowest validation error.

Results and analysis

For the confusion matrix of a multi-class classification task, accuracy is the most basic evaluation measure. Accuracy represents the proportion of correct predictions of the trained model and can be calculated as

Accuracy = (Σ_{i=1}^{C} TP_i) / N

where C is the number of classes (C equals 3 in this study); N is the number of samples in the test set; and TP_i is the number of true predictions for the samples of the i-th class, positioned on the diagonal of the confusion matrix. In addition to accuracy, classification effectiveness is usually evaluated in terms of macro precision and recall, which are aimed at a class with only local significance. As Figure 1 illustrates, the class being measured is referred to as the positive class and the remaining classes are uniformly referred to as the negative classes. The macro precision is the proportion of correct predictions among all predictions of the positive class, while macro recall is the proportion of correct predictions among all positive instances. The macro F1-score is the harmonic mean of macro precision and recall. The macro-averaged measures take the evaluation of each class into consideration and can be computed as

MacroPrecision = (1/C) Σ_{i=1}^{C} Precision_i
MacroRecall = (1/C) Σ_{i=1}^{C} Recall_i
MacroF1 = 2 · MacroPrecision · MacroRecall / (MacroPrecision + MacroRecall)

The experimental results show that: (1) AT-GRU and AT-LSTM performed better than the corresponding models with aspect embedding (ATAE-GRU and ATAE-LSTM). Taking Dataset 1 as an example, the best accuracy in the test set using AT-GRU was 80.7%, while the best accuracy using ATAE-GRU was 75.3%; (2) attention-based GRU performed better than attention-based LSTM: taking AT-GRU and AT-LSTM as examples, the accuracy and macro F1-score of AT-GRU were higher than those of AT-LSTM for all datasets; (3) the balanced dataset (Dataset 1) achieved the best predictive performance for all models. For the unbalanced datasets, the accuracy was very close to that of the balanced dataset.
However, the macro precision, recall, and F1-score were significantly lower than those of the balanced dataset, which confirmed that the balanced dataset had the best generalization and stability in this study; (4) For Dataset 3 in which the neutral sentiment samples dominated, all of the models exhibited the worst predictive performance compared with other datasets. The candidate model for each dataset is illustrated in Figure 1. It is noted that the candidate model was selected according to accuracy. However, the model with a higher macro F1-score was selected as the candidate model instead when the accuracies of models were similar. Among 16 models, AT-GRU trained with the optimizer of AdaBelief and epoch of 300 in Dataset 1 achieved the highest accuracy of 80.7% and macro F1score of 75.0% in the meanwhile. Figure 2 illustrates the normalized confusion matrix of the best predictive model of which diagonal represented for the precisions. The precisions of positive and negative sentiment classification were about 20% higher than that of neutral sentiment classification, which confirmed that the need to boost the precision of neutral sentiment classification in order to globally improve the accuracy of the model in future work. Early stopping was used in this research to avoid overfitting and save training time. Figure 3 illustrates the learning history of AT-GRU using early stopping in four datasets, where the training stopped when the validation loss kept increasing for 5 epochs (i.e., "patience" equals to 5 in this study). For all datasets, the validation accuracy was exactly close to the training validation during the training procedure, which confirmed that early stopping was able to effectively avoid overfitting. Experimental results of A/P/R/F obtained based on training AT-GRU and AT-LSTM using early stopping. The accuracies obtained by AT-GRU and AT-LSTM were similar. For the balanced dataset, the accuracy and macro F1-score obtained by early stopping were significantly lower than that obtained by the corresponding model without early stopping. This is because the loss function probably found the local minima if the training stopped when the loss started to rise for 5 epochs. All of the optimizers used in this study were aimed at avoiding the loss function sticking at the local minima to find the global loss minima, therefore, using more epochs in the training was effective to obtain the best predictive performance model. On the other hand, for the unbalanced datasets, the accuracy and macro F1-score obtained by early stopping were similar to that obtained by the corresponding model without early stopping, which indicated that early stopping was effective to avoid overfitting as the loss converged fast in the unbalanced dataset. Although early stopping is a straightforward way of avoiding overfitting and improving training efficiency, the trade-off is that the model for test set possibly returns at the time point when reaching the local minima of loss function especially for the balanced dataset, and a new hyper-parameter of "patience" which is sensitive to the results is introduced. Three optimizers were used in this study to find the best model. Figure 4 illustrates the learning history of AT-GRU in four datasets. The gap between training and validation accuracy was the largest, which indicated that the worst generalization of Adam among three optimizers in this study although it converged quickly at the very beginning except for Dataset 3. 
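Since the Adam and AdaBelief updates compared above differ only in the second-moment term (Eqs. (23)-(24)), the toy NumPy sketch below makes that difference explicit. Bias correction is omitted to mirror the simplified equations, and the test function and hyper-parameters are illustrative, not those of the experiments.

```python
import numpy as np

def adam_like_step(theta, g, state, lr=1e-3, beta1=0.9, beta2=0.999,
                   eps=1e-8, belief=False):
    """One simplified Adam/AdaBelief update (Eqs. (23)-(24)); bias correction omitted."""
    state["m"] = beta1 * state["m"] + (1 - beta1) * g                          # first moment (EMA of g)
    if belief:
        state["v"] = beta2 * state["v"] + (1 - beta2) * (g - state["m"]) ** 2  # EMA of (g - m)^2
    else:
        state["v"] = beta2 * state["v"] + (1 - beta2) * g ** 2                 # EMA of g^2
    return theta - lr * state["m"] / (np.sqrt(state["v"]) + eps)

theta_a = np.array([1.0, -2.0])   # trajectory under the Adam-style update
theta_b = theta_a.copy()          # trajectory under the AdaBelief-style update
grad = lambda th: 2 * th          # gradient of f(theta) = ||theta||^2
adam_state = {"m": np.zeros(2), "v": np.zeros(2)}
belief_state = {"m": np.zeros(2), "v": np.zeros(2)}
for _ in range(200):
    theta_a = adam_like_step(theta_a, grad(theta_a), adam_state)
    theta_b = adam_like_step(theta_b, grad(theta_b), belief_state, belief=True)
print(theta_a, theta_b)           # both trajectories should approach [0, 0]
```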
Both SGD and AdaBelief can achieve good predictive performance with good generalization; however, AdaBelief converged faster than SGD, and the best results were achieved by AdaBelief.

Conclusions and future extensions

In this study, a hotel review dataset collected from TripAdvisor for aspect-level sentiment classification was first established. The dataset contains 5506 sentences, in which the numbers of positive, neutral, and negative sentiment samples are 3032, 2986, and 2725, respectively. To study the effect of the fraction of sentiment samples on model performance, four sub-datasets with various fractions of sentiment samples were resampled from the TripAdvisor hotel review dataset as the train sets. The task in this study is to determine the aspect polarity of a given review with the corresponding aspects. To achieve good predictive performance on this multi-class classification task, attention-based GRU and LSTM (AT-GRU and AT-LSTM), as well as attention-based GRU and LSTM with aspect embedding (ATAE-GRU and ATAE-LSTM), were optimized with SGD, Adam, and AdaBelief and trained for 100, 300, and 500 epochs. Conclusions from these experiments are as follows:

1. AT-GRU and AT-LSTM performed better than ATAE-GRU and ATAE-LSTM. Taking the balanced dataset as an example, the best accuracy in the test set using AT-GRU was 80.7%, while the best accuracy using ATAE-GRU was 75.3%.

2. Attention-based GRU performed better than attention-based LSTM. Taking AT-GRU and AT-LSTM as examples, the accuracy and macro F1-score of AT-GRU were higher than those of AT-LSTM for all datasets.

3. The balanced dataset achieved the best predictive performance. For the unbalanced datasets, the accuracy was very close to that of the balanced dataset; however, the macro precision, recall, and F1-score were significantly lower than those of the balanced dataset, which confirmed that the balanced dataset had the best generalization and stability in this study. For the dataset in which the neutral sentiment samples dominated, all of the models exhibited the worst predictive performance.

4. For the balanced dataset, the accuracy and macro F1-score obtained with early stopping were significantly lower than those obtained by the corresponding model without early stopping. However, for the unbalanced datasets, the accuracy and macro F1-score obtained with early stopping were similar to those obtained without early stopping, which indicated that early stopping was effective in avoiding overfitting, as the loss converged quickly on the unbalanced datasets.

5. For optimizers, both SGD and AdaBelief can achieve good predictive performance with good generalization; however, AdaBelief converged faster than SGD, and the best results were achieved by AdaBelief.

This work applies natural language processing technologies to aspect-level sentiment analysis of the TripAdvisor hotel dataset, and several extensions remain to be explored:

1. Enlargement of the dataset. This study focused on hotels in Macau, collecting 600 annotated reviews (5506 sentences) from TripAdvisor. To improve model performance, hotels from other countries and regions can be added to the dataset.
9,146.2
2021-07-19T00:00:00.000
[ "Computer Science" ]
Wrinkle force microscopy: a machine learning based approach to predict cell mechanics from images Combining experiments with artificial intelligence algorithms, we propose a machine learning based approach called wrinkle force microscopy (WFM) to extract the cellular force distributions from the microscope images. The full process can be divided into three steps. First, we culture the cells on a special substrate allowing to measure both the cellular traction force on the substrate and the corresponding substrate wrinkles simultaneously. The cellular forces are obtained using the traction force microscopy (TFM), at the same time that cell-generated contractile forces wrinkle their underlying substrate. Second, the wrinkle positions are extracted from the microscope images. Third, we train the machine learning system with GAN (generative adversarial network) by using sets of corresponding two images, the traction field and the input images (raw microscope images or extracted wrinkle images), as the training data. The network understands the way to convert the input images of the substrate wrinkles to the traction distribution from the training. After sufficient training, the network is utilized to predict the cellular forces just from the input images. Our system provides a powerful tool to evaluate the cellular forces efficiently because the forces can be predicted just by observing the cells under the microscope, which is much simpler method compared to the TFM experiment. Additionally, the machine learning based approach presented here has the profound potential for being applied to diverse cellular assays for studying mechanobiology of cells. The goal is to propose a much simpler method based on only phase contrast microscopy imaging. For this, authors seeded cells on silicon coated with fluorescent beads to allow performing on the same field traction force and wrinkling imaging. Then based on TFM data thay trained neuronal network to extract traction fields. Although I am not in a position, as biologist, to evaluate what is new here compared to ref 18 and 19, I have some concerns about: -the spatial resolution -the sensitivity of the methods -it possible or not application to cell layers -the fact that silicon wrinkling modifying the layer in 3D may affect TFM extraction These major points are not addressed here and they need at least to be evaluated. Also, as authors say, this would be mostly interesting for screening, high throuput, but no proof of principle for its application in this field is provided. Reviewer #3 (Remarks to the Author): Li et al. provide an elegant method to extract quantitative absolute traction force measurements from wrinkle images using deep learning approaches. The absolute measure of forces is typically done using Traction Force Microscopy (TFM), which requires the imaging of fluorescent bead displacement which can then be converted into absolute forces from the knowledge of the mechanical properties of the substrate. An associated method uses the observation of substrate buckling (leading to the observation of wrinkles) which only requires the acquisition of brightfield or phase contrast of the cells' substrate but is less quantitative with respect to absolute force measurements. Here, the authors generate a paired dataset of phase contrast images of wrinkles and TFM data, the latter being able to generate force maps. Then they used this data to train a conditional GAN neural network to predict force maps from such phase contrast images. 
This has the advantage to only require the acquisition of phase contrast images without the need for a reference image (as it typically required by TFM). The authors show nice quantifications of the performance of the approach on test dataset that show a broad agreement with ground truth and demonstrate the validity of the method. Deep learning constitutes a great approach to perform complex transformation of data such as wrinkle images into force maps. The authors rightly explain that the information is there but may be difficult to extract in a quantitative and spatially resolved manner as TFM does. What I am not sure about is how it compares to previous efforts to convert wrinkle images into force measures. The authors mention that wrinkle length and direction constitute two measures that relate to amplitude and direction of the forces but has there been any past efforts to convert these into force maps using deep learning or otherwise? The authors should discuss this in a bit more details, perhaps in introduction, and if no work has ever intended to do this quantitatively successfully, that will only strengthen the case for using deep learning and the present method. The authors quantify errors compared to ground truth TFM data from the ensemble distribution of force magnitudes and angles. They show an agreement of the best performing method (GAN from wrinkles) within about 30% and mention that more training data would improve that. The angular errors are within 20 degrees. I would like to see a little more characterisation of these errors as they form the basis of the method and would help understand the caveats compared to TFM (here considered as gold standard). So here, I think it would be useful to see the actual distributions of force amplitudes and force angles from both predictions and ground truth to show precision and potential biases. This could also be briefly discussed in my opinion. Additionally, since the approach is meant to spatially resolve the forces, it would also be useful to show spatial error maps (difference or root square error, RSE, or similar) of both magnitude and angle for the test datasets, as is commonly done for validation of deep learning producing images. This would also potentially highlight issues of where the errors come from mostly and how they relate to certain spatial features. This would be a nice complement to the correlation curve shown in Fig. 5a. I also have a concern about generalisation of the work. Although the authors show nicely that there is a decent agreement with TFM from the test data. It looks to me that all the dataset (both training and test) were acquired on the same day from the same dish. I would like to see whether the approach would generalise to a couple of test datasets acquired on a different day from a different dish, that were not present in the training data. This would constitute the ideal test dataset here. Biological variability as well as variability of how the substrate may be made can cause variability that may throw off the model and produce poor quality predictions. This is not uncommon. Reproducibility is also an issue here since neither the data nor the code for Deep Learning has been made available freely and directly. This is to me an important aspect that's missing and essential for transparency. Minor comments: Box plots in Fig. 5 are not defined. Temporal information of the time-course data shown in the movies are not indicated anywhere. 
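Reviewer #3's summary above describes the core machine-learning step: a conditional GAN trained on paired wrinkle images and TFM force maps. For orientation, a pix2pix-style training step for such paired image-to-image translation looks roughly like the sketch below. The stub architectures and the adversarial-plus-L1 loss weighting are illustrative assumptions, not the authors' implementation; in practice the generator would be a deeper image-to-image network.

```python
import torch
import torch.nn as nn

# Stand-in networks: a real generator would be a U-Net-like model mapping a wrinkle image
# to a 2-channel traction map (x/y components); the discriminator scores (input, map) pairs.
generator = nn.Sequential(nn.Conv2d(1, 2, 3, padding=1))
discriminator = nn.Sequential(nn.Conv2d(3, 1, 3, padding=1))

adv_loss, l1_loss = nn.BCEWithLogitsLoss(), nn.L1Loss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

def train_step(wrinkle_img, traction_map, l1_weight=100.0):
    """One conditional-GAN step on a paired (wrinkle image, traction map) sample."""
    fake = generator(wrinkle_img)
    # Discriminator: push real pairs toward 1 and generated pairs toward 0.
    d_opt.zero_grad()
    d_real = discriminator(torch.cat([wrinkle_img, traction_map], dim=1))
    d_fake = discriminator(torch.cat([wrinkle_img, fake.detach()], dim=1))
    d_loss = adv_loss(d_real, torch.ones_like(d_real)) + adv_loss(d_fake, torch.zeros_like(d_fake))
    d_loss.backward()
    d_opt.step()
    # Generator: fool the discriminator while staying close to the TFM ground truth.
    g_opt.zero_grad()
    d_fake = discriminator(torch.cat([wrinkle_img, fake], dim=1))
    g_loss = adv_loss(d_fake, torch.ones_like(d_fake)) + l1_weight * l1_loss(fake, traction_map)
    g_loss.backward()
    g_opt.step()
    return d_loss.item(), g_loss.item()

print(train_step(torch.randn(4, 1, 64, 64), torch.randn(4, 2, 64, 64)))
```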
Reply to reviewer's comments We gratefully acknowledge the constructive comments and suggestions of the referees and the editor. The responses to each reviewer's comments are listed below. Before replying to each comment, we would like to note that we reduced the number of the training dataset from 332 to 252. In the submitted version, we included same cells at different time (few hours difference; 20 cells) in order to increase the number of the training data. In the revised manuscript, we decided not to include those cells in order to show the robustness of our method. Note that it is common to do "data augmentation" in order to increase the number of training data in the field of machine learning, since the performance increase with the data amount. Instead of the image processing techniques that are usually utilized to increase the training images, we used same cells at different time to increase the data, in the submitted version. Since we noticed that the WFM works perfectly even without those same cells, we decided to remove those dataset from our training. REVIEWER #1: In this work, Honghan Li and colleagues describe an analysis pipeline showing how Deep Learning approaches can be used to predict the forces generated by cells from wrinkle force microscopy images. Such an approach promises that it would 1) enable fast traction force measurements, 2) avoid the phototoxicity associated with classical TFM approaches, and 3) easy to implement. Overall the work is of interest for the biological community interested in analysing how cells apply force on their substrate. My main concern is that the results are not described sufficiently to be reproduced by others. Also, this article is very much a proof of principle article, and no new biological phenomenon is described here. Response to Comment 1: We share the same concern about the importance of reproduciblity and as detailed below have now appended all the data and the codes alongside the paper, and have made them freely accessible on github. Comment 1-1: The authors do not provide the code, or the training dataset(s) used to produce the deep learning models presented here. The authors also do not provide the model themselves. I believe that these should be provided alongside the paper so that readers can have access to these critical materials. To use the strategy described here, one will need to reimplement the algorithm used, generate their training data and train them. This will drastically limit the usability of the method described here. Also, not data availability statement appear to be available. How were the Deep learning model implemented? No information concerning the programing language or essential library used are available. Regarding the approach itself, does the wrinkles affect cellular behaviour? or the amount of forces generated by the cells? Substrate patterning is well known to affect cellular functions (migration, proliferation and cell fate). Response to Comment 1-2: It is well-established that both the substrate stiffness (which affects wrinkles pattern) and the contact guidance mechanism (induced by wrinkles) affect the migration speed of the cells (Dokukina and Gracheva, Biophysical Journal, 2010). To test this in our experiments, we evaluated the proliferation ( Figure S2) and migration ( Figure S3) with three different substrates (CY with mixed ratio 1.2:1, CY with 1.0:1 and glass). Although the cell proliferation is slightly slower for glass substrate, there is no qualitative difference in the growth rate. 
On the other hand, the migration velocity is different for three cases, as expected (Dokukina and Gracheva, 2010). In summary, although there are difference in the cell movements due to the substrate differences, there is no fatal or harmful effect on the cell nature. We have added these results and the corresponding discussion on the impact of the substrate stiffness to the revised manuscript. In Supplemental materials: added) Figure S2 and S3 Comment 1-3: How prominent are the wrinkles? Response to Comment 1-3: This is a very good comment that can be addressed based on the following two points: • We added a new section "Wrinkle mechanics" in the materials and method. Based on the existing theories of non-linear wrinkle formation on elastic substrates, the wrinkle height is expected to be ∼ 200 nm. • In our previous study, we measured the wrinkle height of a sylgard substrate using AFM (atomic force microscopy) and the height was again in a range 200-300 nm (see Figure A; peak-to-peak height 500 nm). Although the material is different from the current paper (CY), it still gives us a rough estimate on the wrinkle height. We added the following section. page 7. In "Materials and Methods" • added) new section "Wrinkle mechanics" Comment 1-4: What is the force threshold for wrinkle formation? The wrinkle height of a sylgard substrate using AFM (atomic force microscopy). An AFM image and phase contrast image taken at the same position are superimposed. The peakto-peak height is ∼500 nm as examined along the black line in the AFM image; and note that cells are plated on a square micropatterned region to minimize the movement of the cells during the image acquisitions, so that a corner of the square is imaged in the AFM/optical images. Response to Comment 1-4: As shown in Fig.3(b), there is no wrinkle generation when the average force isf ≤ 10 Pa. Comment 1-5: What is the sensibility of this approach? It is also unclear how this approach can resolve smaller forces that may arise at different angles than the prominent wrinkles. Response to Comment 1-5: The sensibility of the current approach is 10 Pa: the cells exhibit wrinkles when the average forcef is greater than 10 Pa as shown in Fig. 3(b). Small forces can be recovered in our approach. As shown in Fig. 5, the force distributions are well predicted even when the place is far from the wrinkles. From the wrinkle geometry, our method predicts the landscape of force distributions and also how the forces decay with a distance. Comment 1-6: This approach uses PDMS for force measurement. What are the stiffness range that can be used to visualise wrinkles? Response to Comment 1-6: In the manuscript we have used the material with elasticity 5.4 kPa and as discussed in reply to the Comment 1-2 above, we have now extended the analyses to two additional substrates with stiffness 16.3 kPa (CY 1.0:1) and glass (see also Figure S3). Moreover, as we have now summarized in the new content "Wrinkle mechanics", not only the stiffness itself but the stiffness contrast between surface/bulk of the material is important. The wrinkle wavelength λ would be longer if the substrate E p becomes stiffer compared to the bulk E m . If we would like to observe 20 wrinkles for each cell with a size ∼ 100 µm, the stiffness ratio should be E p /E m ≤ 1500. page 7. In "Materials and Methods" • added) Note if one needs to observe 20 wrinkles for each cell with a size ∼ 100 µm, the stiffness ratio should be E p /E m ≤ 1500 from Eq. [9]. 
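As a rough numerical illustration of the criterion quoted above, the sketch below uses the standard wavelength relation for a stiff surface film on a compliant bulk, λ ≈ 2πh (Ē_p / 3Ē_m)^{1/3} (as in Groenewold 2001 and Cerda and Mahadevan 2003). The film thickness h and the Poisson ratios are assumed values chosen only for illustration, not quantities taken from the manuscript.

```python
import numpy as np

def plane_strain(E, nu):
    """Plane-strain modulus E / (1 - nu^2)."""
    return E / (1.0 - nu ** 2)

def wrinkle_wavelength(h, Ep, Em, nu_p=0.5, nu_m=0.5):
    """Wavelength of a stiff film of thickness h on a compliant bulk:
    lambda = 2*pi*h * (Ebar_p / (3*Ebar_m))**(1/3)."""
    return 2.0 * np.pi * h * (plane_strain(Ep, nu_p) / (3.0 * plane_strain(Em, nu_m))) ** (1.0 / 3.0)

# Assumed illustrative numbers (not taken from the manuscript):
h = 100e-9          # oxidized surface-layer thickness, 100 nm
Em = 5.4e3          # bulk modulus of the soft substrate, ~5.4 kPa as used in the paper
cell_size = 100e-6  # ~100 µm cell
n_wrinkles = 20     # target number of wrinkles per cell

lam_max = cell_size / n_wrinkles            # need lambda <= 5 µm
# Invert the wavelength relation for the largest admissible stiffness ratio Ep/Em
ratio_max = 3.0 * (lam_max / (2.0 * np.pi * h)) ** 3

print(f"lambda at Ep/Em = 1500: {wrinkle_wavelength(h, 1500 * Em, Em) * 1e6:.2f} um")
print(f"largest Ep/Em giving >= {n_wrinkles} wrinkles per cell: {ratio_max:.0f}")
```

With these assumed inputs the largest admissible ratio comes out close to the E_p/E_m ≤ 1500 criterion stated above; the exact bound scales with the assumed layer thickness h.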
Comment 1-7: Should a new Deep Learning model trained for each stiffness to be analysed? Response to Comment 1-7: It is recommended to do the training again for different stiffness in most of the cases, except limited cases described below. Assume that the stiffness of oxidized layer is E 0 p and bulk substrate is E 0 m for our original experiment, while they are E 1 p and E 1 m for the experiment with new conditions. As we have described in the new section "Wrinkle mechanics", conditions for the wrinkle generation would be identical if the stiffness ratio is the same for new experiment: Therefore, only thing we need to modify is to multiply E 1 /E 0 to the force that CNN predicted, for this case. When the stiffness ratio is not fixed E 0 p /E 0 m = E 1 p /E 1 m , CNN cannot directly predict the force distribution since the wavelength λ (Eq. [9] in the manuscript) and the critical strain for the wrinkle generation ε c (Eq. [11]) is different for this condition. Although it might be still possible to estimate the strain considering the condition differences, it is still difficult to do the direct prediction as before. We pointed out this statement in the new section. page 7 "Wrinkle mechanics" • If the substrate stiffness is different from the trained condition, the system needs to be trained once again with a new set of data. However, the trained system is still applicable when the stiffness ratio E p /E m of a new substrate is the same as the trained condition since the criteria for wrinkle generation would be identical, as shown in Eqs. [9] and [11]. By multiplying the ratio of Young's modulus, we can estimate the force distribution just for this case. Comment 1-8: On the same point, Should a Deep learning model be trained for each cell type to be imaged? Can the authors demonstrate that this is not the case? Response to Comment 1-8: The wrinkle patterns are determined only from the traction distributions. Therefore, same principle can be applied to other cell lines unless the cells have comparable size with that of the training data. When the cell size is not comparable, the model should be trained once again since wrinkle patterns with similar length scale might not be included in the training data. Figure S6 shows the distribution of traction force for MEF (Mouse embryonic fibroblast), which has a comparable size with A7r5 cells (training data). Our machine learning approach well predicts the distribution from the wrinkle patterns. page 6: in "Discussions": • Additionally, since the wrinkle patterns are determined only from the traction distributions, we expect the same principle can be applied to other cell lines unless the cells have comparable sizes with those used in the training data. When the cell size is not comparable, the model should be trained once again since wrinkle patterns with similar length scale might not be included in the training data. Figure S6 shows the distribution of traction force for MEF (Mouse embryonic fibroblast), which has a similar size with A7r5 cells (training data). Our machine learning approach well predicts the distribution from the wrinkle patterns. In Supplemental materials: added) Figure S6 Comment 1-9: What is the resolution of the approach used here? The TFM maps used are of very low resolution. Can this approach also be used to resolve smaller details? Response to Comment 1-9: The spacial resolution of the force distribution is 3.44 µm × 3.44 µm. 
Yes, we can change the spacial resolution to smaller value but there will be a trade-off with the accuracy. The resolution is limited by the PIV method, which is used to evaluate the substrate displacement. This method finds the displacement by image cross-correlations, and the prediction accuracy becomes lower by decreasing the reference image size. We use this spacial resolution to ensure the prediction accuracy. Note that the spatial resolution is in a same order with other papers (for example, Tang et al., PLOS Computational Biology, 2014), and this is a common resolution used in TFM. Comment 1-10: The authors show that they obtain excellent predictions using their approach. It is, however, unclear how these analyses were done as no details are provided. How many images were analysed? Were these images part of the training dataset used to train the Deep Learning network? Response to Comment 1-10: We wrote the detail in the section "Traction force prediction using GAN", and we used N = 252 training images and 3 test images for the evaluation. The test images are not included during the training process. We repeated the test 5 times and the errors are evaluated with 15 total test images. 5 Comment 1-11: While I do not think that the authors should discover new biological phenomena, they should demonstrate that this approach can be used to observe meaningful changes in force transmission. For instance, show that forces are reduced following a drug treatment. Response to Comment 1-11: • In our previous paper (Ref. 20: Nehwa et al., BBRC, 2020), we used the current experimental setup (with the multi-well plate) to evaluate how the cellular force changes with 19 different drugs. By the drug treatment, the force, which is measured by the wrinkle length, became 20-130% compare to the control condition. This work clearly shows that the current approach can be utilized as a high-throughput system for force measurement. Note: In Nehwa et al., we use the machine learning system just to extract the wrinkles (step2 in the present work), and the evaluation of force distribution (step 3) is newly developed for the present work. • We add a new figure, Fig. 4, to show that the geometrical feature, whether the wrinkles are clustered or dispersed, can be used to estimate the relative strength of the cellular force and the force isotropy. page 3, "Simultaneous measurement of wrinkles and traction forces": • added) Figure 4 • added) Table 1 • added) Topological features of the wrinkle also give us insights into the force distributions. As shown in Fig. 4(a), some cells exhibit wrinkles in a single region (top) while the others show several separated regions of wrinkles (bottom). By categorizing the cell images (total 34 images) into two types as clustered patterns (9 images) and dispersed patterns (25 images), we summarize the difference in the force distributions in Figs. 4(b) and (c): the average forcef and force isotropy I are significantly larger (f : p < 0.01, I: p < 0.05) for clustered pattern, where isotropy is evaluated as I = |f p /f min p | where f min p is the smaller eigenvalue of S ij . The wrinkles are generated by the pairwise forces that are transmitted via focal adhesions. When the wrinkles are dispersed, cells tend to be elongated, as seen in the pictures, and we can hypothesize that cells might not be strong enough to generate continuous wrinkles. The current result suggests that the wrinkle topological features have rich information on the force distributions. 
• added) Finally, the correlations between cellular properties and characters of contractile forces are summarized in table 1. Since each variable has moderate positive correlations (∼ 0.5) with all other variables, it can be concluded that the cells are circular when the size is larger, and both force magnitude and isotropy increase with the cell size. Comment 1-12: The discussion is very much a summary of the study itself. I would recommend that the authors also discuss the limitations of the approach described here and its possible uses. Response to Comment 1-12: Thank you for this suggestion. We have added discussion of the current limitations and possible applications of our approach to the Discussion section in the revised manuscript. page 6, "Discussion": • added) The relationship between morphology and biological functions of living creatures has long been an intense subject of research. This topic has been investigated at the individual cellular level as well, in which cellular contractile forces were implicated in diverse functions including proliferation, differentiation, apoptosis, and tumorigenesis. Given its complicated nature, however, the whole relationship associated with cellular forces remains fully understood. In this regard, the technology described here has a potential to significantly advance research in this field as it allows for easier acquisition of the equivalent of TFM data. Indeed, we found the tendency of circular cells being stronger and higher in the magnitude and isotropy of the contractile force, respectively, compared to elongated ones. Thus, WFM will also be available, in addition to drug screening as we discussed below, for extensively probing how the force of cells is related to their functions including maintenance of morphological phenotypes. It is instructive to discuss limitations and advantages of the proposed approach. In this paper we have presented data for a stiffness range 5.4 − 16.3 kPa. In principal, it is recommended to do the training again for different stiffness in most of the cases, except limited cases described below. As we have described in the Methods section "Wrinkle mechanics", conditions for the wrinkle generation would be identical if the stiffness ratio is the same for new experiment: E 0 p /E 0 m = E 1 p /E 1 m . Therefore, only thing we need to modify is to multiply E 1 /E 0 to the force that CNN predicted, for this case. When the stiffness ratio is not fixed E 0 p /E 0 m = E 1 p /E 1 m , CNN cannot directly predict the force distribution since the wavelength λ (Eq. [9] in the manuscript) and the critical strain for the wrinkle generation ε c (Eq. [11]) is different for this condition. Although it might be still possible to estimate the strain considering the condition differences, it is still difficult to do the direct prediction as before. Additionally, since the wrinkle patterns are determined only from the traction distributions, we expect the same principle can be applied to other cell lines unless the cells are not much bigger than those used in the training data. When the cell size is not comparable, the model should be trained once again since wrinkle patterns with similar length scale might not be included in the training data. Figure S6 shows the distribution of traction force for MEF (Mouse embryonic fibroblast), which has a similar size with A7r5 cells (training data). Our machine learning approach well predicts the distribution from the wrinkle patterns. Comment 1-13: Fiji software is not referenced. 
Response to Comment 1-13: We have cited Fiji software in our latest version. Comment 1-14: It would be useful to provide the actual images used to train and generated by the GAN. Not just images with force vectors. Response to Comment 1-14: We have uploaded the training data to an open-access workplace, at https://github.com/ Minatsukiyoshino/Wrinkle_force_microscopy in the data folder. REVIEWER #2: The authors in this manuscript combine the well-recognized method of TFM with the much older described wrinkling imaging on silicon surfaces to propose an easier method to measure traction exerted by the cells on a deformable substratum using machine learning. The goal is to propose a much simpler method based on only phase contrast microscopy imaging. For this, authors seeded cells on silicon coated with fluorescent beads to allow performing on the same field traction force and wrinkling imaging. Then based on TFM data thay trained neuronal network to extract traction fields. Response to Comment 2-1: The biggest advance in this paper is the quantitative measurement of force distribution from the surface wrinkle. Previous works such as [Ref 18 -Burton and Taylor, 1997;Ref 26 -Fukuda et al., 2017] provided an estimated of the force magnitude using an analytical relation that predicts a dependence of the contractile traction force magnitude on the wrinkle length. Although the approach was successful in giving rough estimations of the force change in a glance, it has a disadvantage in predicting the force distribution, which is important to understand the cellular mechanotransduction and morphologies. Due to the complex force-geometry relation of the wrinkle, Ref 18 can only predict the approximate force directions. Our present method overcame this disadvantage by letting the system learn the force-geometry relation. Moreover, in terms of the physics of wrinkling there has been a long-standing debate on the force-wrinkle length dependence: While some earlier works (Ref 18) predicted a linear relationship, a rigorous analytical treatment of Ref 36 [Cerda and Mahadevan, PRL, 2002] showed that the relationship should be quadratic. As shown in Figure 3(c) of our manuscript, the experimental measurements indeed show a rather linear relation between the traction force and the wrinkle length, which validates a more recent theoretical prediction based on far-from threshold theory of wrinkling [Davidovitch et al., PNAS, 2011] and is consistent with the experiments on droplets on Polystyrene films [Huang et al., Science 2007]. Comparison with Ref. 19 [Li et al., BBRC, 2019] : Ref 19 is our previous work, and the wrinkle extraction using the machine learning is proposed (Note that the wrinkle extraction method is utilized in "Step2: Wrinkle extraction" in the current work). The current work is a drastic improvement from the previous work, and we can now predict the stress distribution from the microscope images. We stress this point in the introduction as follows. page 1, in "Introduction": • added) Although the geometrical information of wrinkles (Groenewold, 2001;Beningo and Wang 2002;Cerda and Mahadevan, 2003), such as wavelengths, would give an estimation in the force magnitude and direction, the geometry is still not enough to predict the local force distribution in a sub-cellular scale that is important to understand the cellular mechanotransduction and morphologies. 
page 3, in "Simultaneous measurement of wrinkles and traction forces": • added) While earlier work predicted a linear relationship between the traction force and the wrinkle length, analytical treatment showed that the relationship should be quadratic. The linear relation that is measured here validates a more recent theoretical prediction based on far-from threshold theory of wrinkling and is consistent with the experiments on droplets on polystyrene films. Comment 2-2: I have some concerns about, the spatial resolution Response to Comment 2-2: The spacial resolution of the force distribution is 3.44 µm × 3.44 µm. The spacial resolution can be set to smaller value but there will be a trade-off with the accuracy. The resolution is limited by the PIV method, which is used to evaluate the substrate displacement. This method finds the displacement by image cross-correlations, and the prediction accuracy becomes lower by decreasing the reference image size. We use this spacial resolution to ensure the prediction accuracy. Note that this spatial resolution is in a same order with other papers (for example, Tang et al., PLOS Computational Biology, 2014), and this is a common resolution used in TFM. Comment 2-3: the sensitivity of the methods Response to Comment 2-3: As shown in Fig. 3(c), the cells exhibit wrinkles when the average forcef is greater than 10 Pa. Therefore, the forces can be predicted via the wrinkles forf ≥ 10 Pa. Comment 2-4: it possible or not application to cell layers Response to Comment 2-4: Yes, we can apply this framework to predict the force of cell layers, but we need to do the training again with a dataset of cell layers. The cell mechanics are different for single cells and cell layers: for example, there are no cell-cell junctions and cell-cell force transmissions for single cells. We tried to predict the force of cell layers with a system that was trained with the single cell dataset, but the prediction was not good compared to the one we reported in the paper. We are planning to report a system to predict the force of cell layers in our next work. Comment 2-5: the fact that silicon wrinkling modifying the layer in 3D may affect TFM extraction Response to Comment 2-5: The wrinkle generation would only have small error (∼5%) on the strain estimation. Assume that there is a wrinkle with a shape ζ(x) = A cos(2πx/λ) where A is the amplitude and λ is the wavelength. By assuming that the amplitude is smaller than the wavelength (A/λ 1) [Ref 22: Groenewold, Phys. A: Stat. Mech its Appl., 2001], the arc length of a single wave can be given as This equation indicates that there would be a excess length change π 2 A 2 /λ when a segment with length shrinks to λ. As we discussed in a new section "Wrinkle mechanics", the parameters are λ ∼ 3 µm and A ∼ 0.2 µm in our experiments. Using the parameters, the excess length π 2 A 2 /λ change compared to the wavelength λ is given as π 2 A 2 /λ 2 ≈ 0.04. Therefore, there is only partial effect on the strain estimation with the wrinkle generation. We added new section as follows. page 7. In "Materials and Methods" • added) new section "Wrinkle mechanics" Comment 2-6: Also, as authors say, this would be mostly interesting for screening, high throuput, but no proof of principle for its application in this field is provided. Response to Comment 2-6: In our previous paper (Ref. 20: Nehwa et al., BBRC, 2020), we used the current experimental setup (with the multi-well plate) to evaluate how the cellular force changes with 19 different drugs. 
By the drug treatment, the force, which is measured by the wrinkle length, became 20-130% compare to the control condition. This work clearly shows that the current approach can be utilized as a high-throughput system for force measurement. Note: In Nehwa et al., we use the machine learning system just to extract the wrinkles (step2 in the present work), and the evaluation of force distribution (step 3) is newly developed for the present work. We added the following sentence. In Introduction: For instance, the force measurements with dozens of different drugs can be done simultaneously by implementing the wrinkle assay to a multi-well plate (Nehwa et al., BBRC, 2020). Li et al. provide an elegant method to extract quantitative absolute traction force measurements from wrinkle images using deep learning approaches. The absolute measure of forces is typically done using Traction Force Microscopy (TFM), which requires the imaging of fluorescent bead displacement which can then be converted into absolute forces from the knowledge of the mechanical properties of the substrate. An associated method uses the observation of substrate buckling (leading to the observation of wrinkles) which only requires the acquisition of brightfield or phase contrast of the cells' substrate but is less quantitative with respect to absolute force measurements. Here, the authors generate a paired dataset of phase contrast images of wrinkles and TFM data, the latter being able to generate force maps. Then they used this data to train a conditional GAN neural network to predict force maps from such phase contrast images. This has the advantage to only require the acquisition of phase contrast images without the need for a reference image (as it typically required by TFM). The authors show nice quantifications of the performance of the approach on test dataset that show a broad agreement with ground truth and demonstrate the validity of the method. Comment 3-1: Deep learning constitutes a great approach to perform complex transformation of data such as wrinkle images into force maps. The authors rightly explain that the information is there but may be difficult to extract in a quantitative and spatially resolved manner as TFM does. What I am not sure about is how it compares to previous efforts to convert wrinkle images into force measures. The authors mention that wrinkle length and direction constitute two measures that relate to amplitude and direction of the forces but has there been any past efforts to convert these into force maps using deep learning or otherwise? The authors should discuss this in a bit more details, perhaps in introduction, and if no work has ever intended to do this quantitatively successfully, that will only strengthen the case for using deep learning and the present method. Response to Comment 3-1: There is an attempt to only roughly evaluate the force magnitude using a simplified relation that the wrinkle length increases linearly with the contractile force (Ref. 18: Burton and Taylor, Nature, 1997) as we wrote in the introduction, but there is no attempt for quantitative conversion to the "actual" force distribution, to the best of authors' knowledge. Due to the complex forcegeometry relation of the wrinkle, note that Ref. 18 could only predict the approximate force directions. 
Our present method overcame this disadvantage by letting the system learn the forcegeometry relation, and thus we believe our method provides a huge advance over the previous studies, in obtaining the actual cellular force magnitude and distribution. We added the following sentence to stress this point. page 1, "Introduction": • added) Although the geometrical information of wrinkles (Groenewold, 2001;Beningo and Wang 2002;Cerda and Mahadevan, 2003), such as wavelengths, would give an estimation in the force magnitude and direction, the geometry is still not enough to predict the local force distribution in a sub-cellular scale that is important to understand the cellular mechanotransduction and morphologies. The authors quantify errors compared to ground truth TFM data from the ensemble distribution of force magnitudes and angles. They show an agreement of the best performing method (GAN from wrinkles) within about 30% and mention that more training data would improve that. The angular errors are within 20 degrees. I would like to see a little more characterisation of these errors as they form the basis of the method and would help understand the caveats compared to TFM (here considered as gold standard). So here, I think it would be useful to see the actual distributions of force amplitudes and force angles from both predictions and ground truth to show precision and potential biases. This could also be briefly discussed in my opinion. Response to Comment 3-2: Thank you very much for your interesting suggestion, in response to which we add new figure S4. As shown in the figure, there is a good agreement between the ground truth and the prediction. Although the angle has small deviation, the prediction still well capture the distribution of the ground truth. In Supplemental materials: added) Figure S4 Comment 3-3: Additionally, since the approach is meant to spatially resolve the forces, it would also be useful to show spatial error maps (difference or root square error, RSE, or similar) of both magnitude and angle for the test datasets, as is commonly done for validation of deep learning producing images. This would also potentially highlight issues of where the errors come from mostly and how they relate to certain spatial features. This would be a nice complement to the correlation curve shown in Fig. 5a. Response to Comment 3-3: We add new figures S2 in the appendix. As expected, the error is large at the image edge while it is relatively small at center. In Supplemental materials: added) Figure S5 Comment 3-4: I also have a concern about generalisation of the work. Although the authors show nicely that there is a decent agreement with TFM from the test data. It looks to me that all the dataset (both training and test) were acquired on the same day from the same dish. I would like to see whether the approach would generalise to a couple of test datasets acquired on a different day from a different dish, that were not present in the training data. This would constitute the ideal test dataset here. Biological variability as well as variability of how the substrate may be made can cause variability that may throw off the model and produce poor quality predictions. This is not uncommon. Response to Comment 3-4: 63 data is collected from the experiments of five different days and seven different dishes that are independently prepared. 
Since the training and test data were picked at random, our approach generalizes across different days and dishes (note: the 15 test images come from experiments on four different days and five different dishes). We mention this point as follows. p. 7: Materials and Methods "Step 3: Prediction of traction force based on GAN-based system" added) Note that the 63 images were acquired from experiments on five different days and seven different dishes. Comment 3-5: Reproducibility is also an issue here since neither the data nor the code for Deep Learning has been made available freely and directly. This is to me an important aspect that's missing and essential for transparency. Response to Comment 3-5: We have uploaded the code and the data to an open-access repository at https://github.com/Minatsukiyoshino/Wrinkle_force_microscopy. Minor comments: • Temporal information of the time-course data shown in the movies is not indicated anywhere. • We have added the time interval between frames to the movie captions.
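For readers who want a concrete starting point, the sketch below shows a generic pix2pix-style conditional GAN in PyTorch of the kind described in the manuscript (wrinkle image in, two-component traction map out). It is not the authors' released implementation (which is available at the GitHub repository linked above); the architecture, image size, and loss weights are placeholder choices.

```python
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Small encoder-decoder mapping a 1-channel wrinkle image to a 2-channel traction map."""
    def __init__(self):
        super().__init__()
        self.encode = nn.Sequential(
            nn.Conv2d(1, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.BatchNorm2d(64), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.BatchNorm2d(128), nn.LeakyReLU(0.2),
        )
        self.decode = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.BatchNorm2d(64), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.BatchNorm2d(32), nn.ReLU(),
            nn.ConvTranspose2d(32, 2, 4, stride=2, padding=1),   # traction (x, y) components
        )

    def forward(self, x):
        return self.decode(self.encode(x))

class Discriminator(nn.Module):
    """PatchGAN-style critic on the concatenated (wrinkle image, traction map) pair."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1 + 2, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.BatchNorm2d(64), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 1, 4, stride=1, padding=1),             # per-patch real/fake logits
        )

    def forward(self, wrinkle, traction):
        return self.net(torch.cat([wrinkle, traction], dim=1))

def train_step(gen, disc, opt_g, opt_d, wrinkle, traction, l1_weight=100.0):
    """One conditional-GAN update: adversarial loss plus L1 reconstruction loss."""
    bce, l1 = nn.BCEWithLogitsLoss(), nn.L1Loss()

    # Discriminator update on real and generated pairs
    fake = gen(wrinkle).detach()
    d_real, d_fake = disc(wrinkle, traction), disc(wrinkle, fake)
    loss_d = bce(d_real, torch.ones_like(d_real)) + bce(d_fake, torch.zeros_like(d_fake))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Generator update: fool the discriminator and stay close to the TFM target
    fake = gen(wrinkle)
    d_fake = disc(wrinkle, fake)
    loss_g = bce(d_fake, torch.ones_like(d_fake)) + l1_weight * l1(fake, traction)
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
    return loss_d.item(), loss_g.item()

if __name__ == "__main__":
    gen, disc = Generator(), Discriminator()
    opt_g = torch.optim.Adam(gen.parameters(), lr=2e-4, betas=(0.5, 0.999))
    opt_d = torch.optim.Adam(disc.parameters(), lr=2e-4, betas=(0.5, 0.999))
    wrinkle = torch.randn(4, 1, 64, 64)    # placeholder batch of wrinkle images
    traction = torch.randn(4, 2, 64, 64)   # placeholder batch of traction maps
    print(train_step(gen, disc, opt_g, opt_d, wrinkle, traction))
```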
Modelling and forecasting risk dependence and portfolio VaR for cryptocurrencies In this paper, we investigate the co-dependence and portfolio value-at-risk of cryptocurrencies, with the Bitcoin, Ethereum, Litecoin and Ripple price series from January 2016 to December 2021, covering the crypto crash and pandemic period, using the generalized autoregressive score (GAS) model. We find evidence of strong dependence among the virtual currencies with a dynamic structure. The empirical analysis shows that the GAS model smoothly handles volatility and correlation changes, especially during more volatile periods in the markets. We perform a comprehensive comparison of out-of-sample probabilistic forecasts for a range of financial assets and backtests and the GAS model outperforms the classic DCC (dynamic conditional correlation) GARCH model and provides new insights into multivariate risk measures. Introduction During the last years, cryptocurrencies gain more and more attention not only from ordinary investors but also from regulatory authorities and policy makers. Cryptocurrencies are decentralized currencies that are powered by their users with no central authority and therefore are independent of monetary politics and not controlled by the B Jie Cheng<EMAIL_ADDRESS>1 School of Computing and Mathematics, Keele University, MacKay Building, Keele ST5 5BG, UK existing banking system 1 . Bitcoin, the largest cryptocurrencies was created in 2009 and since then numerous other cryptocurrencies have been created. After a stable period of development, most of the cryptocurrencies started to climb and dramatically increased in the period 2016 to 2020 with pricing bubbles in 2018 (Corbet et al. 2018). After that, all major cryptocurrencies' prices have exhibited tremendous fluctuation with the sharpest drop during March 2020 selloff, as a result of the COVID-19 outbreak. Existing literature on the cryptocurrencies market includes studies focusing on hedging and safe-haven properties of cryptocurrencies (e.g. Bouri et al. 2017;Conlon and McGee 2020), market efficiency (e.g. Nadarajah and Chu 2017;Tran and Leirvik 2020), volatility patterns and portfolio of cryptocurrency markets (Katsiampa 2017), most of which provide the within-sample fit for univariate cases. On the other hand, to account for the structure linkage and interdependencies among the cryptocurrencies and other financial assets, different multivariate approaches including the GARCH-DCC models Ghabri et al. 2021), the GARCH-BEKK models (Katsiamp et al. 2019;Stavroyiannis and Babaros 2017) and GARCH-copula models (Bouri et al. 2018;Boako et al. 2019;Syuhada and Hakim 2020) have documented for volatility forecasting and risk management. While these studies provide useful analyses, they also confirm that both the conditional volatilities and the correlations of the cryptocurrencies change over time, especially during the bubble period in 2018 and the pandemic era in 2020. Therefore, we pay attention to the observation-driven time-varying multivariate generalized autoregressive score (GAS) model to examine the price dependency relationships and portfolio value-at-risk (VaR) of cryptocurrencies; particularly, Bitcoin (BTC), Ethereum (ETH), Litecoin (LTC) and Ripple (XRP) are considered. The generalized autoregressive score-driving model (GAS) is proposed by Creal et al. (2013), and it nests many well-known models, including GARCH (Bollerslev 1986) and ACD (Engle and Russell 1998) models. Tafakori et al. 
(2018) consider an asymmetric exponential GAS model to predict Australian electricity returns. Chen and Xu (2019) use both univariate and bivariate GAS models to analyse and forecast volatilities and correlations between Brent, WTI and gold prices. To the best of our knowledge, no other study has ever used the multivariate GAS model to forecast the volatility and correlation of cryptocurrencies. Due to the relatively young literature on cryptocurrency, there are few studies related to out-of-sample forecasting performance for both dependence structure and volatility. Amongst those, Syuhada and Hakim (2020) construct a dependence model through vine copula and provide the value-at-risk (VaR) forecasts. Chi and Hao (2021) show GARCH model's volatility forecast is better than the option implied volatility using the BTC and ETH prices. In our paper, we conduct out-of-sample forecasting performance for both point forecasts (e.g. VaR) and density forecasts. In order to see how effectively the GAS model treats different dynamic features simultaneously in a unified way, we compare the forecasting results with those of the classic dynamic conditional correlation generalized autoregressive conditional heteroskedasticity (DCC-GARCH) model (Engle 2002). Our main findings are as follows: First, beside the most applied volatility models, GARCH, asymmetric GARCH specifications including GJR-GARCH and APARCH models are also considered for the univariate ETH, LTC, BTC and XRP return series. Interestingly, the additional parameters in these models, which are supposed to show the asymmetric volatility response to past returns (so-called leverage effect), are not significant for all the cryptocurrencies in this paper. These results are consistent with those found in Chi and Hao (2021) and Syuhada and Hakim (2020). Several studies apply the asymmetric GARCH models to cryptocurrencies' return series; however, they either use a GARCH-type model with Gaussian innovation (Cheikh et al. 2020) or show rather weak significant additional terms, which are supposed to reflect the asymmetry (Apergis 2021). One possible explanation is that the traders or investors from the cryptocurrency market are different to those from the stock market. Unlike the stock market which is usually dominated by well informed investors, the cryptocurrency market has more uninformed investors, and the volatility asymmetry, which can be traced to trading activity that has been guided by information asymmetry between well informed and uninformed traders in the market (Avramov et al. 2006), is not significant as it did in the stock market. Second, we find empirical evidence to show that the forecasting ability of the GAS model is better than those of the DCC-GARCH model. More specifically, the GAS model accounts for large price changes in a very natural way when updating the correlations and volatilities over time, especially during extreme events. This is particularly important when we form a portfolio risk and estimate the corresponding VaR forecasts. Through a sequence of statistical tests, our results prefer the GAS model to the DCC-GARCH models in terms of point (volatilities and correlations) forecasts, quantile (value-at-risk) forecasts and density forecasts. This paper is organized as follows: Section 2 describes the multivariate GAS model and the DCC-GARCH model. Section 3 provides the data source and preliminary analysis. In Sect. 
4, we applied the two multivariate models to the daily cryptocurrencies and present the estimation results for the within-sample period. Moreover, we conduct out-of-sample forecasting performance for volatilities, correlations, VaRs and probability distributions for the two models. Section 5 concludes. The multivariate GAS model Let r t be an N -dimensional random vector at time t with conditional distribution where F t−1 contains all the information up to time t − 1, θ t is a vector of time-varying parameters depending on F t−1 and a set of static parameters φ for all time t. The GAS(p,q) model is an observation-driven model, and the time-varying parameters θ t are governed by the score of the conditional density in (1) and an autoregressive updating equation where κ, A and B are the coefficient matrices with proper dimensions and s t is the scaled score function with where the expectation is taken with respect to the conditional distribution in (1). The additional parameter γ is fixed. By choosing different values of γ , the GAS model encompasses some well-known models (e.g. GARCH, ACD and ACM models, see Creal et al. 2013, for a detailed discussion). In the application, we consider a GAS(1,1) model with γ = 0 and the conditional distribution in (1) follows a multivariate standardized Student-t distribution (Ardia et al. 2019). Therefore, the time-varying parameter vector θ (including location μ, scale σ , correlation ρ and shape ν parameters) is given by: and a natural choice for S t is identity matrix. The multivariate DCC-GARCH model Following Engle (2002), the DCC-GARCH(1,1) model is as follows. Let r t be an N-dimensional random vector at time t, we consider where F t−1 is the information available up to time t − 1, D t is a diagonal matrix such that D t = diag( h 11,t , · · · , h nn,t ) and h ii,t , i = 1, 2, · · · , N is the conditional variance obtained from the univariate model, which is usually GARCH-type model and R t is the dynamic conditional correlation matrix. More specifically, let then the time-varying correlation matrix Q t can be updated by whereQ is a symmetric time-invariant unconditional covariance matrix and Z t = D −1 t ε t . In our application, we assume ε t follows a multivariate standardized Studentt distribution, as we did in GAS(1,1) model. Empirical application Daily Cryptocurrencies data, Ethereum (ETH), Litecoin (LTC), Bitcoin (BTC) and Ripple (XRP), in US dollars, are obtained from https://www.cryptocompare.com 2 using a Python script. Our sample period is from 1 January 2016 till 31 December 2021. We split the sample into two parts, a within-sample period from 1 January 2016 to 31 December 2018, which includes a total of 1096 daily prices and out-of-sample period from 1 January 2019 to 31 December 2021. For each of the datasets, the returns r t of ETH, LTC, BTC and XRP are calculated as where P t is the daily closing price at time t. Cryptocurrency returns are extremely volatile, so we winsorized them at the 0.005% and 99.5% levels. Figure 1 displays the winsorized return series for ETH, LTC, BTC and XRP during the full sample period, i.e. from January 2016 to December 2021. We observe multiple volatile periods for different returns series, but they behave more similarly after 2018. During the March 2020 selloff, all of them experienced the most negative changes. It is worth mentioning that XRP suffered significant price fluctuations during first half of 2021 due to an SEC lawsuit Ripple faced at the end of 2020. 
Therefore, volatility changes of XRP were mostly caused by updates on the SEC lawsuits after 2021. Table 1 reports the descriptive statistics for the ETH, LTC, BTC and XRP return series. All of them have positive mean returns and leptokurtic empirical distributions for both sample periods. Moreover, the skewness for BTC (XRP) is negative (positive) across the full sample, while ETH and LTC present positive skewness before 2019 and negative one after 2019. For all returns series, the augmented Dickey and Fuller statistics reject the unit root null at 1% significance level, in favour of the stationary time series. The normality is significantly rejected by the enormous Jarque-Bera statistics, indicating the fat-tailed distribution. Engle's ARCH test (Engle 1982) results reveal the significant ARCH effect, highlighting the application of GARCH-type models. (2012), we first study the full sample rolling unconditional correlations between the ETH, LTC, BTC and XRP return series using a bivariate approach. We rescale the return series by subtracting their means and dividing by their standard deviations and specify the regression of the rescaled return r r m,t on the rescaled return r r l,t , with l, m = 1, 2, 3, 4 and l = m: r r m,t = μ +ρr r l,t + η t andρ is the estimated unconditional correlation between the two cryptocurrencies returns r m and r l . The time-varying estimated correlation is obtained by using a rolling window of fix length equal to 30 days. The rolling correlations of full-sample return series are plotted in Fig. 2. Before 2017, the correlation between BTC and LTC stays high and positive while those between ETH, BTC and XRP are low and negative. This is not surprising as Litecoin was one of the first "altcoins" to draw from Bitcoin's original open-source code to create a new cryptocurrency, therefore one of the most correlated altcoins with Bitcoin, while Ethereum is launched based on the platform which enables building and deploying smart contracts and decentralized applications, and compete against Bitcoin for market shares; XRP is created as a faster, cheaper, and more energy-efficient digital asset that can process transactions within seconds and consume less energy than some counterpart cryptocurrencies. From the beginning of 2017 to the middle of 2018, distinct spikes in the correlation can be generally found between the cryptocurrencies. Such spikes may reflect J. Cheng (Blau et al. 2020). Moreover, a significant drop in rolling window correlations can be observed at the beginning of 2021 in the cryptocurrency pairs ETH-XRP, LTC-XRP, and BTC-XRP. Again, this is due to the SEC lawsuit Ripple faced. The above bivariate approach considers two return series at a time, as such, cannot exploit the dynamic interdependence simultaneously. To address this issue, we consider the multivariate GAS and DCC models in the next section. In-sample results For notational convenience, let r t = (r 1 , r 2 , r 3 , r 4 ) be the returns of the four assets ETH, LTC, BTC and XRP at time t and ρ 12 , ρ 13 , ρ 14 , ρ 23 , ρ 24 and ρ 34 be the correlation of the return series ETH and LTC, ETH and BTC, ETH and XRP, LTC and BTC, LTC and XRP, and BTC and XRP, respectively. We use the multivariate GAS(1,1) model and the DCC-GARCH(1,1) model (hereafter GAS and DCC) we mentioned in the last section to fit the multivariate return series r t , respectively. 
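A minimal sketch of the 30-day rolling correlation estimates described earlier in this section is given below. It assumes the daily return series are held in a pandas DataFrame with one column per cryptocurrency (column names are illustrative); the rolling Pearson correlation computed here is equivalent in spirit to the regression on rescaled returns used in the text.

```python
import pandas as pd

def rolling_correlations(returns: pd.DataFrame, window: int = 30) -> pd.DataFrame:
    """Pairwise rolling Pearson correlations between daily return series.

    returns : DataFrame with one column per cryptocurrency (e.g. ETH, LTC, BTC, XRP)
    window  : rolling-window length in days (30 in the text)
    Returns a DataFrame with one column per pair, e.g. 'ETH-BTC'.
    """
    cols = list(returns.columns)
    out = {}
    for i, a in enumerate(cols):
        for b in cols[i + 1:]:
            out[f"{a}-{b}"] = returns[a].rolling(window).corr(returns[b])
    return pd.DataFrame(out, index=returns.index)

# Illustrative usage with a hypothetical DataFrame `prices` of daily closing prices
# (columns ETH, LTC, BTC, XRP); log returns are computed from closing prices P_t:
# import numpy as np
# returns = np.log(prices / prices.shift(1)).dropna()
# rho = rolling_correlations(returns, window=30)
# rho.plot()   # reproduces the style of the rolling-correlation figure
```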
Based on the fat-tail leptokurtic empirical distributions we obtained in Table 1, the conditional distribution of r t in the GAS model is specified by the multivariate standardized Student-t distribution; the univariate and multivariate residuals in the DCC model are also specified by the t-distribution. Asymmetric GARCH specifications including GJR and EGARCH models are also considered for both GAS and DCC models. Interestingly, the additional parameters in these models, which are supposed to show the asymmetric volatility response to past returns (so-called leverage effect), are not significant for all the cryptocurrencies in this paper. These results are consistent with those found in Chi and Hao (2021) and Syuhada and Hakim (2020). Several studies apply the asymmetric GARCH models to cryptocurrencies' return; however, they either use the GARCH-type model with Gaussian innovation (Cheikh et al. 2020) or show rather weak significant additional terms which are supposed to reflect the asymmetry (Apergis 2021). For the GAS model, the conditional distribution parameters are as follows: θ = (μ 1 , μ 2 , μ 3 , μ 4 , σ 1 , σ 2 , σ 3 , σ 4 , ρ 12 , ρ 13 , ρ 14 , ρ 23 , ρ 24 , ρ 34 , ν) where (μ 1 , μ 2 , μ 3 , μ 4 ), (σ 1 , σ 2 , σ 3 , σ 4 ), (ρ 12 , ρ 13 , ρ 14 , ρ 23 , ρ 24 , ρ 34 ), ν are location, scale/volatility, correlation and shape parameters of the conditional t-distribution, respectively. Following ( Table 2. It is clear that model 5 seems to be a reasonable choice, i.e. the GAS model with time-varying volatility and correlation, location and shape model is used for the return series r t during 2016 to 2019. The estimation results are presented in Table 3. All the parameters, especially the time-varying parameters of the model (left panel), are significant at the 5% level. We also present the unconditional parameters (right panel) by considering the long-term Table 4. The parameters can be divided into two parts, the results of the GARCH model for each individual return series (upper panel) and the dynamic correlation using multivariate t distribution (lower panel). In Figs. 3, 4, 5 and 6, we plot the estimated volatilities for ETH, LTC, BTC and XRP using both GAS and DCC models during the in-sample period, respectively. For all four return series, the DCC model seems to provide more fluctuant volatilities than the GAS model, especially during the 2018 crash period. Clearly, the extreme returns appear to have a strong effect on estimated volatilities for the GARCH models, whereas those for the GAS model appear to be robust. The correlation estimates from the two models, which are presented in Figs. 7 and 8, show a substantial difference though both models identify a significant persistence of correlations in high positive values between the cryptocurrencies since 2018. The GAS model suggests, in general, positive correlations, varying from -0.15 to 1 between It is worth noting that the dynamic correlations we derive from DCC multivariate modelling approach appear to be similar to the rolling correlations we estimate in the previously described bivariate setting while those by GAS approach seem to produce more smoothed correlation estimates due to its desirable robust future. Out-of-sample results We now turn to the out-of-sample (OOS) forecast performance of the two models. We compare the one-step-ahead forecasting performance of the GAS model and DCC model using a rolling window scheme. 
The length of the rolling estimation window is set to 1096 observations, so that 1096 observations (from 1 January 2019 until 31 December 2021) are left for out-of-sample forecast evaluation.

Volatility and correlation forecast evaluation

To evaluate the forecasting performance of the two models, we construct two measures of realized volatility and correlation using intraday data. The realized volatility is computed as the sum of squared intraday returns (see, e.g. Andersen et al. 2001),

RV_t = Σ_{i=1}^{N_t} r_{t,i}^2,

where r_{t,i} is the intraday return on day t for intraday period i (i = 1, 2, ..., N_t). We use transaction prices of ETH, LTC, BTC and XRP from January 2019 to December 2021. The realized correlation between two cryptocurrencies X and Y on day t is computed analogously as

RCorr_{xy,t} = Σ_{i=1}^{N_t} r_{x,t,i} r_{y,t,i} / √(RV_{x,t} RV_{y,t}),

where r_{x,t,i} and r_{y,t,i} are the intraday return series for cryptocurrencies X and Y on day t for intraday period i (i = 1, 2, ..., N_t), and RV_{x,t} and RV_{y,t} are the realized volatilities of X and Y on day t. Following Patton (2011), we use two popular and robust loss functions, the mean square error (MSE) and the Gaussian quasi-likelihood (QLIKE), to compare the forecast accuracy of the GAS and DCC models on the out-of-sample data. These two loss functions are given by

MSE = (1/N) Σ_{i=1}^{N} (ĥ_i − h_i)^2,   (8)

QLIKE = (1/N) Σ_{i=1}^{N} [ log ĥ_i + h_i / ĥ_i ],   (9)

where ĥ_i denotes the rolling forecast for day i from the two models (volatility σ̂_i^2 or correlation ρ̂_i), h_i is the corresponding realized measure for day i (realized volatility σ_i^2 or realized correlation ρ_i), and N is the total number of volatility/correlation forecasts. We also use the Diebold and Mariano (1995) method to test the null hypothesis that the forecasts from the GAS model are no more accurate than the forecasts from the DCC model.

Table 5 reports the OOS losses for volatility and correlation, using the loss functions in (8) and (9), for the GAS and DCC models. The Diebold–Mariano statistics on the loss differences are also presented to assess whether the gains are statistically significant. Overall, the volatility and correlation forecasting ability of the GAS model is superior to that of the DCC model. Judging by the MSE and QLIKE, the GAS model delivers significantly better correlation forecasts than the DCC model, although the two models provide similar correlation forecasts between the BTC and XRP return series in terms of MSE. The volatility forecast comparison between the two models under MSE and QLIKE is mixed: the MSE favours the GAS model for all volatilities, while the QLIKE supports the GAS model for the XRP volatility only, and there is no evidence of a significant difference in the volatility forecasts for ETH, LTC and BTC in terms of QLIKE. These results are further confirmed in the plots. The difference in correlation forecasts between the two models can be seen across the whole OOS period (Figs. 9 and 10), while the volatility forecasts of BTC are similar for both models (Figs. 11, 12, 13 and 14). Note that the DCC model repeatedly produces large volatility forecasts whenever there are large changes in the return series. Interestingly, we find that, on average, for both models the dynamic correlation forecasts between cryptocurrencies behave similarly in all pairs. The correlations remain positive and at high levels with a few fluctuations across the whole OOS period under the GAS model, while those from the DCC model show more sensitive dynamics, especially after January 2020. This can be seen as a consequence of the COVID-19 shock to the cryptocurrency market.
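Before turning to specific sub-periods, the loss comparison reported in Table 5 can be illustrated with a short sketch. It assumes the rolling forecasts and the corresponding realized measures are available as NumPy arrays (names are illustrative) and computes the MSE and QLIKE losses in (8)–(9) together with a simple Diebold–Mariano statistic using a Newey–West long-run variance.

```python
import numpy as np

def mse_loss(forecast, realized):
    return (forecast - realized) ** 2

def qlike_loss(forecast, realized):
    # Gaussian quasi-likelihood loss for variance forecasts, as in eq. (9)
    return np.log(forecast) + realized / forecast

def diebold_mariano(loss_a, loss_b, max_lag=5):
    """DM statistic for H0: equal predictive accuracy of forecasts A and B.

    Negative values indicate that A (e.g. the GAS model) has the smaller average loss.
    The long-run variance uses a Newey-West (Bartlett) kernel with `max_lag` lags.
    """
    d = np.asarray(loss_a) - np.asarray(loss_b)
    n = d.size
    d_bar = d.mean()
    gamma0 = np.mean((d - d_bar) ** 2)
    lrv = gamma0
    for k in range(1, max_lag + 1):
        gamma_k = np.mean((d[k:] - d_bar) * (d[:-k] - d_bar))
        lrv += 2.0 * (1.0 - k / (max_lag + 1)) * gamma_k
    return d_bar / np.sqrt(lrv / n)

# Illustrative usage with hypothetical arrays of length N (one entry per forecast day):
# sig2_gas, sig2_dcc : one-step-ahead variance forecasts from the two models
# rv                 : realized variance from intraday returns
# dm = diebold_mariano(qlike_loss(sig2_gas, rv), qlike_loss(sig2_dcc, rv))
```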
In particular, from January 2020 to May 2020, weak correlation forecasts can be observed between XRP and the other cryptocurrencies using both models, which is, again, due to the SEC lawsuit. (Table 5 notes: the table presents the t-statistics from Diebold–Mariano (DM) tests of equal predictive accuracy for the rolling-window out-of-sample forecasts under the corresponding loss functions; a t-statistic greater than 1.96 in absolute value indicates rejection of the null of equal predictive accuracy at the 0.05 level and is marked with an asterisk; the sign of the t-statistic indicates which forecast performed better for each loss function, a negative value meaning that the GAS forecast produced a smaller average loss than the DCC forecast and a positive value the opposite. Figures 9 and 10 show the out-of-sample estimated correlations from the GAS and DCC models, and Figs. 11–14 show the out-of-sample estimated volatilities of the ETH, LTC, BTC and XRP returns from the two models.)

Density forecast evaluation

To extend the comparison, we use the estimation results for each model in the previous section to obtain one-step-ahead density forecasts, and the evaluation is based on scoring rules, which are widely used in weather and climate prediction (Palmer 2012) and financial risk management (Groen et al. 2013). Let y = (y^(1), ..., y^(N)) be an observation of the N-dimensional random vector, let f(.) denote a forecast density for y, let Ω denote the set of possible values of y, and let F denote a convex class of probability distributions on Ω. A scoring rule is a loss function

S : Ω × F → R,

such that a better forecast yields a lower score. A scoring rule S is said to be proper if the expected score is optimized when the true distribution of the observation is issued as the forecast, i.e.

E_g[S(y, g)] ≤ E_g[S(y, f)]  for all f, g ∈ F.   (10)

Furthermore, a scoring rule is called strictly proper if equality in (10) holds only if f = g. A natural choice is the logarithmic score (Good 1952; Mitchell and Hall 2005; Amisano and Giacomini 2007), defined as

LogS(y, f) = −log f(y).   (11)

However, the logarithmic score is not sensitive to distance: it rewards predictive densities only for assigning high probabilities to the realized values, not to neighbouring values. To overcome this problem, Gneiting and Raftery (2007) introduce the energy score, a generalization of the univariate continuous ranked probability score (CRPS), which allows a direct comparison of density forecasts. The energy score is defined as

ES(y, f) = E_f ||Y − y||^β − (1/2) E_f ||Y − Ỹ||^β,   (12)

where Ỹ is an independent copy of Y, so it is drawn independently from the same distribution f(.) as Y, and ||.|| is the Euclidean norm. Gneiting and Raftery (2007) show that the energy score is strictly proper for β ∈ (0, 2). In applications, β = 1 is the standard choice and the score is usually calculated by Monte Carlo methods. Pinson and Tastu (2013) show that the discrimination ability of the energy score may be limited when the dependence structure of the multivariate probabilistic forecast is misspecified.
To overcome this problem, Scheuerer and Hamill (2015) propose the variogram score which is based on pairwise differences: where N is the dimension of random vector y, x i and x j are the ith and jth component of a random vector x that is from the distribution f , w i j are nonnegative weights that allows one to emphasize pairs of component combinations and standard choice for weights is w i j = 1. p > 0 is the order of the variogram score. The variogram score is proper relative to the class of distributions for which the 2 p-th moments of all elements are finite and it is not strictly proper (Scheuerer and Hamill 2015). In application, the choice of p is a trade-off between all relative moments of the pairwise deviation and outliers. Typical choices of p include 0.5 and 1. To test the null hypothesis of equal predictive ability of two competing models based on a given scoring rule, we consider (Diebold and Mariano 1995) type tests using score difference. Given a scoring rule S, the score difference is defined as: wheref 1 andf 2 are the density forecasts. The null hypothesis of equal scores is: versus the alternative H 1 : E(d t ) = 0. It can be shown that, under the null hypothesis, with certain conditions (e.g. see Giacomini and White 2006), the statistic where n is the forecast sample size,d = 1 n n t=1 d t andσ 2 is a heteroskedasticity and autocorrelation-consistent variance estimator of σ 2 = var( √ nd). We applied the above three scores to evaluate and compare the density forecasts by GAS and DCC models. For variogram score, we present the results with different p values ( p = 0.5, 1, 2) as used in Scheuerer and Hamill (2015)). The overall density forecast can be evaluated using average scored during the whole out-of-sample period 6 and the DM statistics are obtained using the log score in (11), the energy score in (12) and the variogram score in (13). The score difference d t is computed by subtracting the score of the DCC model density forecast from the score of the GAS density forecast, such that negative values of d t indicate the better predictive ability of the forecast method based on the GAS model. Table 6 shows the average score differencesd n with The table presents the average score differenced * and the corresponding test statistics (with p values in the parentheses) for the log score in (11), the energy score in (12) and the variogram score in (13). The variogram scores are presented with p = 0.5, 1 and 2. The score difference d t is computed for density forecasts obtained from a GAS model with multivariate t innovations relative to the DCC model with same innovations, for daily ETH, LTC, BTC and XRP returns over the evaluation period 1 January 2019-31 December 2021 the accompanying tests of equal predictive accuracy as in (14). These results clearly demonstrate that both energy and variogram scoring rules suggest superior density predictive ability of the GAS model. The large values of average variogram score difference with p = 2 are caused by the nature of quadratic form, and the results are in accord with the simulation studies by Scheuerer and Hamill (2015). From the risk management point of view, it is also important to focus on the performance of density forecasts in the region of interest. Therefore, we compare the models in terms of correctly forecasting the 1% and 5% value-at-risk (VaR) at 1day horizons for both individual cryptocurrencies and different portfolios that can be constructed from the three cryptocurrencies. 
We define five different arbitrary portfolios, p jt = g j r t for given 4 × 1 weight vectors g j and for j = 1, 2, 3, 4, 5. By ordering the cryptocurrencies as ETH, LTC, BTC and XRP, we construct the following long-only and long-short portfolios: g 1 = (1/4, 1/4, 1/4, 1/4), g 2 = (1/4, 1/4, 1/4, −1/4), g 3 = (1/4, 1/4, −1/4, 1/4), g 4 = (1/4, −1/4, 1/4, 1/4) and g 5 = (−1/4, 1/4, 1/4, 1/4). The long-short positions reflect the relative value bets among these cryptocurrencies. We simulate 10000 sample paths for r t+1 = (r 1 , r 2 , r 3 , r 4 ) , denoted by r s t+1 for s = 1, 2, · · · , 10000 using the multivariate t distribution by the GAS and DCC models. We then construct the simulated individual returns r s i,t+1 for i = 1, 2, 3, 4 and portfolio returns p s j,t+1 = g j r s t+1 for j = 1, 2, 3, 4, 5. We use the sample of 10000 simulated paths to estimate the quantiles of the forecasting distribution at the 1-day horizon. The out-of-sample VaR accuracy is assessed through the unconditional coverage (UC) test (Kupiec 1995) and the conditional coverage (CC) test (Christoffersen 1998). Table 7 presents the UC and CC test statistics and the corresponding p values of the 5% and 1% VaR forecasts for both individual returns (upper panel) and four portfolios (lower panel). For the individual VaR forecasts, all results, except for BTC returns series, suggest that GAS model performs better than DCC model at the 1% and 5% quantile levels. The GAS and DCC models provide same results for the BTC return: the 1% VaRs forecasts perform reasonably well, but the 5% VaR forecasts are rejected for both tests. Meanwhile, the GAS model outperforms the DCC model in general The one-step-ahead VaR forecasts for both individual ETH, LTC, BTC and XRP returns and arbitrary portfolios are found simultaneously based on simulated innovation. By ordering the cryptocurrencies as ETH, LTC, BTC and XRP, portfolios 1, 2, 3, 4 and 5 are constructed using weight vectors g 1 = (1/4, 1/4, 1/4, 1/4), g 2 = (1/4, 1/4, 1/4, −1/4), g 3 = (1/4, 1/4, −1/4, 1/4), g 4 = (1/4, −1/4, 1/4, 1/4) and g 5 = (−1/4, 1/4, 1/4, 1/4), respectively. The column labelled UC and CC reports the unconditional coverage test of Kupiec (1995) and the conditional coverage test of Christoffersen (1998) with p values in the parentheses, respectively for all portfolios in the forecasting experiment. The only exception is the portfolio with weights g 2 = (1/4, 1/4, 1/4, −1/4) and g 5 = (−1/4, 1/4, 1/4, 1/4) at the 5% significant level and the portfolio with weight g 4 = (1/4, −1/4, 1/4, 1/4) at the 1% significant level, for which both model perform poorly. In Figs. 15 and 16, we show the 1% and 5% VaR estimates against the realized returns for portfolio 1, i.e. the long-only portfolio with equal weights for the four cryptocurrencies. We observe that typically the VaR estimates based on the DCC models are more extreme, confirming that the DCC model significantly overestimates the risk at both 5% and 1% quantile levels, especially when the return changes are large (e.g. April 2020 and May 2021). These results are in accordance with previous findings (Creal et al. 2011). The estimates of the DCC model are based on lagged squared returns and the forecasts thus move stochastically every day. However, the updating equation in the GAS model with the Student-t density provides a more moderate increase in the variance/correlation for a large absolute realization of return. The forecasts using the GAS model naturally inherit the return information. 
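For concreteness, a minimal sketch of the two backtests used in Table 7 is given below: the Kupiec (1995) unconditional coverage likelihood-ratio test and the Christoffersen (1998) conditional coverage test, applied to the boolean series of VaR violations (realized return below the simulated quantile). These are the standard formulations of the tests; the helper names are illustrative.

```python
import numpy as np
from scipy.stats import chi2

def kupiec_uc(violations, alpha):
    """Kupiec (1995) unconditional coverage LR test; violations is a boolean
    array that is True on days when the realized return breaches the VaR."""
    v = np.asarray(violations, dtype=bool)
    n, x = len(v), int(v.sum())
    if x == 0 or x == n:                       # degenerate violation counts
        return np.nan, np.nan
    pi_hat = x / n
    lr = -2 * (x * np.log(alpha) + (n - x) * np.log(1 - alpha)
               - x * np.log(pi_hat) - (n - x) * np.log(1 - pi_hat))
    return lr, 1 - chi2.cdf(lr, df=1)

def christoffersen_cc(violations, alpha):
    """Christoffersen (1998) conditional coverage: Kupiec UC combined with a
    first-order Markov test of independence of the violation sequence."""
    v = np.asarray(violations, dtype=int)
    n00 = np.sum((v[:-1] == 0) & (v[1:] == 0))
    n01 = np.sum((v[:-1] == 0) & (v[1:] == 1))
    n10 = np.sum((v[:-1] == 1) & (v[1:] == 0))
    n11 = np.sum((v[:-1] == 1) & (v[1:] == 1))
    eps = 1e-12
    pi01 = n01 / max(n00 + n01, 1)
    pi11 = n11 / max(n10 + n11, 1)
    pi = (n01 + n11) / max(n00 + n01 + n10 + n11, 1)
    ll_alt = (n00 * np.log(1 - pi01 + eps) + n01 * np.log(pi01 + eps)
              + n10 * np.log(1 - pi11 + eps) + n11 * np.log(pi11 + eps))
    ll_null = (n00 + n10) * np.log(1 - pi + eps) + (n01 + n11) * np.log(pi + eps)
    lr_ind = -2 * (ll_null - ll_alt)
    lr_uc, _ = kupiec_uc(violations, alpha)
    lr_cc = lr_uc + lr_ind
    return lr_cc, 1 - chi2.cdf(lr_cc, df=2)

# usage sketch: VaR from the 10000 simulated portfolio paths, then the backtests
# var_t = np.quantile(simulated_portfolio_returns_t, 0.05)
# violations_t = realized_portfolio_return_t < var_t
```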
Overall, we conclude that the GAS model has better out-of-sample forecasting behavior. Conclusion We have investigated the co-dependence and portfolio VaR of cryptocurrencies using four popular virtual currencies (Bitcoin, Ethereum, Litecoin and Ripple). The results of the multivariate GAS model show strong dynamic interdependence among the cryptocurrencies throughout the sample period. Our out-of-sample forecasting period notably included the COVID-19 outbreak, which lasted from early 2020 to the end of 2021; the analysis therefore sheds new light on multivariate risk measures of cryptocurrencies for global investors. We examined the out-of-sample predictive performance of the multivariate GAS model at various quantile levels and, using a battery of scoring rules and backtesting procedures, found that it significantly outperforms the traditional DCC-GARCH model. These results continue to hold when different cryptocurrencies are considered. There is plenty of room for future research on the analysis of cryptocurrencies, especially during periods of financial turmoil. The existing scoring rules (especially in the multivariate case) could be extended to more flexible forms that emphasize a particular region of the density. An alternative extension could explore the safe-haven properties of cryptocurrencies, stablecoins and traditional assets; within such a framework, the dynamic correlations and portfolio diversification can be studied systematically.
7,519.4
2023-01-16T00:00:00.000
[ "Economics", "Mathematics" ]
Image Retrieval Based on the Combination of Region and Orientation Correlation Descriptors A large number of growing digital images require retrieval effectively, but the trade-off between accuracy and speed is a tricky problem. This paperwork proposes a lightweight and efficient image retrieval approach by combining region and orientation correlation descriptors (CROCD). The region color correlation pattern and orientation color correlation pattern are extracted by the region descriptor and the orientation descriptor, respectively. The feature vector of the image is extracted from the two correlation patterns. The proposed algorithm has the advantages of statistic and texture description methods, and it can represent the spatial correlation of color and texture. The feature vector has only 80 dimensions for full color images specifically. Therefore, it is very efficient in image retrieving. The proposed algorithm is extensively tested on three datasets in terms of precision and recall. The experimental results demonstrate that the proposed algorithm outperforms other state-ofthe-art algorithms. Introduction The rapid and massive growth of digital images requires effective retrieval methods, which motivates people to research and develop effective image storage, indexing, and retrieval technologies [1][2][3][4]. Image retrieval and indexing have been applied in many fields, such as the internet, media, advertising, art, architecture, education, medical, biological, and other industries. The text-based image retrieval process first manually labels the image with text and then uses keywords to retrieve the image. This method of retrieving an image based on the degree of character matching in the image description is time-consuming and subjective. The content-based image retrieval method overcomes the shortcomings of the text-based method, starting from the visual characteristics of the image (color, texture, shape, etc.) and finding similar images in the image library (search range). According to the working principle of general image retrieval, there are three keys to content-based image retrieval: selecting appropriate image features, adopting effective feature extraction methods, and accurate feature matching strategies. Texture is an important and difficult-to-describe feature in images. Aerial, remote sensing pictures, fabric patterns, complex natural landscapes, and animals and plants all contain textures. Generally speaking, the local irregularity in the image and the macroscopic regularity are called textures, and the areas with repetitiveness, simple shapes, and consistent intensity are regarded as texture elements. After local binary pattern (LBP) [5], there are many similar methods proposed in recent years, i.e., local tridirectional patterns [6], local energy-oriented pattern [7], 3D local transform patterns [8], local structure cooccurrence pattern [9], local neighborhood difference pattern [10], etc. Color histogram is the most commonly used and most basic method in color characteristics; however, it loses the correlation between pixel points. To solve this problem, many researchers have come up with their own visual models. Color correlogram [11] and color coherence vector (CCV) [12] characterize the color distributions of pixels and the spatial correlation between pair of colors. The gray cooccurrence matrix [13,14] describes the cooccurrence relationship between the values of two pixels. Mehmood et al. 
present an image representation based on the weighted average of triangular histograms (WATH) of visual words [15]. This approach adds the image spatial contents to the inverted index of the bag-of-visual words (BoVW) mode. 1.1. Related Works. Color, texture, and shape are prominent features of an image, but a single feature usually has some limitations. To overcome these problems, some researchers have proposed multifeature fusion methods, which utilize two or more features simultaneously. In [16], Pavithra et al. proposed an efficient framework for image retrieval using color, texture, and edge features. Fadaei et al. proposed a new content-based image retrieval (CBIR) scheme based on the optimised combination of the color and texture features to enhance the image retrieval precision [17]. Reta et al. put forward color uniformity descriptor (CUD) in the Lab color space [18]. Color difference histograms (CDH) count the perceptually uniform color difference between two points under different backgrounds with regard to colors and edge orientations in the Lab color space [19]. Taking advantage of multiregion-based diagonal texture structure descriptor for image retrieval is proposed in the HSV space [20]. In [21], Feng et al. proposed multifactor correlation (MFC) to describe the image, which includes structure element correlation (SEC), gradient value correlation (GVC), and gradient orientation correlation (GDC). Wang and Wang proposed SED [22], which integrates the advantages of both statistical and structural texture description methods, and it can represent the spatial correlation of color and texture. Singh et al. proposed BDIP+ BVLC+CH (BBC) [23], which is represented by a combination of texture feature block difference of inverse probabilities (BDIP) and block variation of local correlation coefficients (BVLC) and color histograms. In [24], the visual contents of the images have been extracted using block level discrete cosine transformation (DCT) and gray level cooccurrence matrix (GLCM) in RGB channel, respectively. It can be represented as DCT+ GLCM. In addition, local extrema cooccurrence pattern for color and texture image retrieval is proposed in [25]. According to the texton theory proposed by Julesz [26], many scholars have proposed texton-based algorithms. Texton cooccurrence matrix (TCM) [27], a combination of at rous wavelet transform (AWT) and Julesz's texton elements, is used to generate the texton image. Further, texton cooccurrence matrix is obtained from texton image which is used for feature extraction and retrieval of the images from natural image database. Multitexton histogram (MTH) integrates the advantages of cooccurrence matrix and histogram, and it has a good discrimination power of color, texture, and shape features [28]. Correlated primary visual texton histogram features (CPV-THF) is proposed for image retrieval [29]. Square Texton Histogram (STH) is derived based on the correlation between texture orientation and color information [30]. Main Contributions. Considering that color, texture, and uniformity features are of relevant importance in recognition of visual patterns [17][18][19][20][21], an algorithm proposed in this paper combines region and orientation correlation descrip-tors (CROCD). This method entails two compact descriptors that characterize the image content by analyzing similar color regions and four orientation color edges in the image. It is based on the HSV color space since it is in better agreement with the visual assessments [20]. 
Contrasting with other approaches, CROCD features have the advantage of balancing operation speed and accuracy. The rest of the paper is organized as follows. In Section 2, the overall introduction and workflow of the algorithm are presented. Section 3 explains the proposed algorithm in detail. Experimental results are obtained in Section (3). Finally, the whole work is concluded in Section 4. Region Correlation and Orientation Correlation Descriptors There are different objects in an image. The same object is usually a certain area made up of the same or approximate color, which constitutes the texture of the internal area of the object. The edges of an object have distinct color differences from the surrounding ones, and the edges of every object are the same or similar in color. Based on the above analysis, this paper presents a method of combining region color correlation descriptor and orientation color correlation descriptor. This method is also an effective method of combining color, texture, and edges to retrieve images. Firstly, the color image is quantified and coded, and then, the region color correlation pattern is calculated by the region descriptor; after that, the region correlation vector is calculated. Secondly, the orientation color correlation pattern is obtained by the orientation descriptor, and the color correlation histogram of the four orientations is obtained by statistics of the correlation pattern. The orientation color correlation vector of the image is calculated. The feature vector of image is obtained by concatenating the two-color correlation vectors of region and orientation. Finally, use similarity distance measure for comparing the query feature vector and feature vectors of database and sort the distance measure, then produce the corresponding images of the best match vectors as final results. The workflow of the proposed algorithm is shown in Figure 1. The Algorithm Process 3.1. Image Color Quantization. Common color spaces for images are RGB, HSV, and Lab. Among them, the HSV space is a uniform quantized space, which could mimic human color perception well; thus, many researchers use it for image processing [17,[20][21][22]25]. The HSV color space is defined in terms of three components: hue (H), saturation (S), and value (V). H component describes the color type which ranges from 0 to 360. S component refers to the relative purity or how much the color is polluted with white color which ranges from 0 to 1. V component is used for the amount of black that is mixed with a hue or represents the brightness of the color. It also ranges 0-1. Image color quantization is a common method in image processing, especially in image retrieval. Assuming that the same objects are detected, the color will be slightly different 2 Journal of Sensors due to the influence of light, environment, and background. These effects can be eliminated by quantization with appropriate bins. On the other hand, quantization in image processing can also make the operation simple and reduce the operation time. Therefore, giving a color image I (x, y), the quantization is presented as follows [22]: (1) Nonuniformly quantize the H, S, and V channels into 8, 3, and 3 bins, respectively, as equations (1), (2) Calculate the value of every point according to formula (4). where Q s , Q v are the quantization bins of color S and V, respectively. As mentioned above, both S and V are quantified into 3 bins, respectively, so both values are 3. 
Substitute them into equation (4) to get the following formula: (3) Obtain the quantized color image. The quantized image is denoted by I Q , and I Q ðx, yÞ ∈ L i as follows: This set of points will be used for color statistics of the region and orientation descriptor, respectively, and the dimension of the quantized image I Q is denoted by bins. Region Correlation Descriptor. The concept of texton element is proposed by Julesz [26]. Texton is an important concept in texture analysis. In general, textons are defined as a set of blobs or emergent patterns sharing a common property all over the image. The features of an image have close relation to the distribution of textons. Different textons form different images. If the textons in the image are small and the color tone difference between adjacent textons is large, the image may have a smooth texture. If the texton is large and composed of multiple points, the image may have a rough texture. At the same time, a smooth or rough texture is also determined by proportion of textons. If the textons in the image are large and have only a few types, distinct shapes may be formed. In fact, textons can be simply expressed by region correlation descriptors in a way [19]. Five region correlation templates are presented here, as shown in Figure 2. The shaded portion of the 2 × 2 grid indicates that these values are the same. The process of extracting the region color correlation pattern I R is shown in Figure 3. Figure 3(a) is a schematic diagram of a descriptor. The template moves from top to bottom, left to right, in two steps throughout the image I Q . When the values in the grayscale frame where the image and template coincide are the same, these pixels are the color correlation region. The other templates are used successively to obtain the result pattern of that template. The corresponding shaded parts of the five templates in the quantization pattern I Q are retained, and the rest are left blank to obtain the 3 Journal of Sensors regional color correlation pattern I R , as shown in Figure 3(c). Calculate its histogram, constitute a quantization vector, and get the region color correlation vector HðI R Þ. Orientation Correlation Descriptor. The orientation templates are shown in Figure 4, which can be used to detect the lines with the same color in the orientations of horizontal, vertical, diagonal, and antidiagonal, respectively. In other words, the edge information of an image can be detected. Figure 5 shows the operation diagram of horizontal, vertical, diagonal, and antidiagonal descriptors from top to bottom. These templates move through the whole image I Q from top to bottom, left to right, in two steps. When the values in the grayscale frame where the image and template coincide are the same, the two pixels are the color correlation pixels of the orientation. The corresponding shadow part of the four orientation template in quantization pattern I Q is retained, and the rest part is left blank to obtain quantization pattern I O , as shown in Figure 5(d). Then, the quantization histogram of each orientation is counted, and the color correlation vector of the orientation is calculated. For the sake of illustration, only three quantization elements are taken as examples in Figure 5. In practice, it is the quantized value of image (0, bin-1). The specific steps are as follows: (1) Construct a statistical matrix of 4x bins. 
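A compact sketch of these two steps is given below. Since equations (1)–(5) are not reproduced in full above, the quantization assumes the usual combination L = Q_s·Q_v·H + Q_v·S + V = 9H + 3S + V with 8/3/3 bins, and the five 2×2 region templates are assumed to be the row pair, column pair, the two diagonals and the full block; both are stand-ins for the paper's Figure 2 and should be read as an illustration rather than the exact implementation.

```python
import numpy as np
import cv2  # only used to convert BGR -> HSV

def quantize_hsv(img_bgr):
    """Quantize an HSV image into 8 x 3 x 3 = 72 bins.
    Assumption: L = 9*H_q + 3*S_q + V_q (eq. (4) with Q_s = Q_v = 3); uniform
    H bins are used here as a stand-in for the paper's nonuniform ones."""
    hsv = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2HSV)
    h = hsv[..., 0] * 2.0            # OpenCV stores H in [0, 180)
    s = hsv[..., 1] / 255.0
    v = hsv[..., 2] / 255.0
    h_q = np.minimum((h / 45.0).astype(int), 7)
    s_q = np.minimum((s * 3).astype(int), 2)
    v_q = np.minimum((v * 3).astype(int), 2)
    return 9 * h_q + 3 * s_q + v_q   # values in [0, 71]

# assumed 2x2 region templates; True marks the cells that must share one color
TEMPLATES = [
    np.array([[1, 1], [0, 0]], bool), np.array([[1, 0], [1, 0]], bool),
    np.array([[1, 0], [0, 1]], bool), np.array([[0, 1], [1, 0]], bool),
    np.array([[1, 1], [1, 1]], bool),
]

def region_correlation_histogram(iq, bins=72):
    """Region color correlation vector H(I_R): histogram of quantized values
    retained by the templates, scanning the image in steps of two pixels."""
    hist = np.zeros(bins)
    rows, cols = iq.shape
    for r in range(0, rows - 1, 2):
        for c in range(0, cols - 1, 2):
            block = iq[r:r + 2, c:c + 2]
            for t in TEMPLATES:
                vals = block[t]
                if np.all(vals == vals[0]):   # same color on the template cells
                    for val in vals:
                        hist[val] += 1
    return hist / max(hist.sum(), 1)          # normalized 72-d vector
```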
Each row of the matrix represents the orientation of horizontal, vertical, diagonal, and antidiagonal, respectively, and the number of columns is the bins of quantization (2) In the orientation color correlation pattern I O , if it meets one of the orientation descriptor conditions, add 1 to the corresponding quantization value in the matrix. Composition of Feature Vector. The objects may have the same texture, but the edge characteristics of the objects may be different. The two factors can complement each other to improve the retrieval accuracy. The region correlation descriptor represents the texture features of an object and mainly represents the texture features of some areas inside the object, and the features are 72 dimensions. The orientation correlation descriptor represents the edge characteristics of the object. Different objects usually have different edge distributions. By taking the respective averages and standard deviations of the colors in the four directions of the horizontal, vertical, diagonal, and diagonal edges, the average color value and color offset in the four edge directions can be expressed and the object edge features are only represented by 8-dimensional feature vectors, which can improve the retrieval efficiency. Therefore, the region correlation descriptor in these two operators works better, and the later experimental part also proves that. In Section 4.4, the experiments demonstrated that quantizing the HSV color space into 72 color bins nonuniformly is well suitable for our proposed algorithm. Therefore, HðI R Þ can represent the histogram of the region correlation image obtained by the region correlation descriptor, leading to a 72 dimensional vector. TðI O Þ can represent the orientation correlation image obtained by the orientation correlation descriptor, leading to an 8-dimensional vector. Finally, the two vectors are concatenated into a vector to obtain an 80dimensional vector representing the image. Figure 6 shows two images and their own feature vectors of CROCD. Experimental Dataset. For the purpose of experimentation and verification, experiments are conducted over the benchmark Corel-1K, Corel-5K, and Corel-10K datasets. (1) 1K dataset (as shown in Figure 7(a)), with a size of 384 × 256 (or 256 × 384), contains 10 categories of original residents, beaches, buildings, public buses, dinosaurs, elephants, flowers, horses, valleys, and food, with 100 images for each category, and a total of 1000 images. (2) 5K dataset (shown in Figure 7(b)), with a size of 187 × 126 (or 126 × 187), contains 50 categories of images, including lion, bear, vegetable, female, castle, and fireworks, with 100 images for each category, a total of 5,000 images. (3) 10K dataset (as shown in Figure 7(c)), with a size of 187 × 126 (or 126 × 187), contains 100 category images of flags, stamps, ships, motorcycles, sailboats, airplanes, and furniture and 100 images of each category, a total of 10,000 images. In this section, we evaluate the performance of our method by these Corel datasets. Performance Evaluation Metrics. The performance of an image retrieval system is normally measured using precision P T and recall P R for retrieving top T images defined by formula (9) and (10), respectively, where n is the number of relevant images retrieved from top T positions and R is the total Journal of Sensors number of images in the dataset that are similar to the query image. Precision is used to describe the accuracy of algorithm query. 
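Reusing the helpers sketched above, the fragment below illustrates how the 4 × bins statistical matrix can be reduced to the 8-dimensional orientation vector (a mean and a standard deviation of the correlated color values per orientation) and concatenated with the 72-dimensional region histogram into the 80-dimensional CROCD descriptor; the pixel-pair offsets for the four orientations are assumptions consistent with the description of the templates.

```python
def orientation_correlation_features(iq, bins=72):
    """Orientation color correlation vector T(I_O): build the 4 x bins matrix
    (horizontal, vertical, diagonal, antidiagonal) and keep, per orientation,
    the mean and standard deviation of the correlated color values (8 numbers)."""
    offsets = [(0, 1), (1, 0), (1, 1), (1, -1)]   # assumed pair offsets
    matrix = np.zeros((4, bins))
    rows, cols = iq.shape
    for k, (dr, dc) in enumerate(offsets):
        for r in range(0, rows - 1, 2):
            for c in range(1 if dc < 0 else 0, cols - 1, 2):
                r2, c2 = r + dr, c + dc
                if 0 <= r2 < rows and 0 <= c2 < cols and iq[r, c] == iq[r2, c2]:
                    matrix[k, iq[r, c]] += 1
    feats, levels = [], np.arange(bins)
    for k in range(4):
        counts = matrix[k]
        total = max(counts.sum(), 1)
        mean = (levels * counts).sum() / total
        std = np.sqrt(((levels - mean) ** 2 * counts).sum() / total)
        feats.extend([mean, std])
    return np.array(feats)                        # 8-d vector

def crocd_feature(img_bgr):
    """80-d CROCD descriptor: 72-d region histogram + 8-d orientation vector."""
    iq = quantize_hsv(img_bgr)
    return np.concatenate([region_correlation_histogram(iq),
                           orientation_correlation_features(iq)])
```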
Recall is used to describe the comprehensiveness of algorithm query. The higher the precision and recall are, the better the function of the algorithm is. Precision and recall are the most extensive evaluation criteria for evaluating query algorithms. In these experiments, we randomly selected 10 images from each category. In other words, 100, 500, and 1,000 images are selected randomly from three datasets, respectively, as query images to compare various results. Similarity Measure. In the content-based image retrieval system, the retrieval precision and recall are not only related to the extracted features but also related to the similarity measurement. So, choosing an appropriate measure for our algorithm is a key step. In this experiment, we compared several common similarity criteria, such as Euclidean, L1, weighted L1, Canberra, and χ 2 . There are two feature vectors x = ðx 1 , x 1 ,⋯,x n Þ T and y = ðy 1 , y 1 ,⋯,y n Þ T extracted from images; their similarity measures can be expressed as Calculate the value according to the above formulas and sort it from smallest to largest. The smaller the value is, the more similar the two images are. Table 1 shows the comparison results of different distance measurement methods. The test dataset is Corel-1K, and the statistical precision and recall are taken, respectively, when the total returned images from 10 to 30. It can be seen that the commonly used Euclidean distance is not good, while weighted L1 is the best. The average precision and recall of HSV, RGB, and Lab are shown in Table 2. Images returned in the experiment range from 10 to 30. When color quantization is increased from 45 to 225 dimensions in the Lab color space, the precision and recall of the proposed method are both increased on the whole. There are the same in two other color spaces. On the other hand, the more quantization will increase the noise; thus, the precision and recall of the proposed method are both decreased when the quantization is 225 in the Lab color space. The highest precision of the top-10 image retrieval results is 79.2% and 71.5% in the RGB and Lab spaces, respectively. The best results are seen in the HSV space, which range from 78.7% to 83.2%. The precision of uniform quantization is not more than 81%; thus, we chose the HSV space of 72-dimensional quantization nonuniformly. In order to test our proposed algorithm, we compared the algorithms proposed by CDH [19], SED [22], BBC [23], DCT + GLCM [24], TCM [27], and MTH [28] on Corel-1K and compared the retrieval precision and recall of 10 categories when the top retrieval image is 15, as shown in Table 3. Five of the ten classes in the proposed method are the best, and its average precision and recall are obviously higher than other algorithms. In addition, the average precision and recall curve of the algorithm and other algorithms on Corel-1K dataset is shown in Figure 8. According to the results, the average precision of the proposed algorithm has been significantly improved from DCT+ GLCM, CDH, TCM, BBC, SED, and MTH up to 11.6%, 9.74%,7%, 5.54%, 5.27%, and 4.27%, respectively, when the top retrieval image is 15. Moreover, the area enclosed by the P-R curve of the proposed algorithm is the largest. Therefore, the precision and recall of the proposed algorithm are higher than the other six algorithms. Based on these analyses, this method has better robustness. 
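A minimal sketch of the retrieval and evaluation loop follows. The weighted L1 distance is written in a commonly used form, |x_i − y_i| / (1 + x_i + y_i), since the paper's formula is not reproduced above; the precision and recall definitions follow formulas (9) and (10) as described in the text.

```python
def weighted_l1(x, y):
    """Weighted L1 distance; the 1/(1 + x_i + y_i) weighting is an assumed,
    commonly used form standing in for the paper's expression."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    return np.sum(np.abs(x - y) / (1.0 + x + y))

def retrieve(query_vec, db_vecs, top_t=15):
    """Rank database images by increasing distance to the query and keep top T."""
    d = np.array([weighted_l1(query_vec, v) for v in db_vecs])
    return np.argsort(d)[:top_t]

def precision_recall(retrieved_ids, relevant_ids, total_relevant):
    """P_T = n / T and R_T = n / R, with n the number of relevant images among
    the top T returned and R the number of relevant images in the dataset."""
    n = len(set(retrieved_ids) & set(relevant_ids))
    return n / len(retrieved_ids), n / total_relevant
```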
To illustrate the universality of the algorithm, the precision and recall of the algorithm and other algorithms on Corel-5K and Corel-10K dataset are shown in Tables 4 and 5, respectively. When tested on Corel-5K and Corel-10K Journal of Sensors datasets, the P 10 of the proposed method is 60.2% and 50.02%, respectively, which are superior to the other six algorithms. To give an intuitive view, Figure 9 shows the P-R curves of the seven algorithms. It can also be seen from the figure that the algorithm proposed in this paper has the best effect. The region correlation descriptor (RCD) and orientation correlation descriptor (OCD) in the CROCD algorithm make different contributions to the retrieval results. Retrieval results of region correlation vector, orientation correlation vector, and their combination (CROCD) are shown in Table 6 on the datasets Corel-1K, Corel-5K, and Corel-10K when the returned image is 15. In the dataset Corel-1K, the precision of RCD and OCD is 71.42% and 38.54%, respectively. The combination of the two, that is, CROCD is 78.07%, and the precision is increased by 6.65%. In the datasets Corel-5K and Corel-10K, the precision of CROCD increased by 5.49% and 5.43%, respectively, compared with the bigger one between RCD and OCD. So, in both the region correlation vector and the orientation correlation vector, the region correlation vector makes a major contribution to the final retrieval result. The results of orientation correlation vector alone are not very good, but after combining with region correlation vector, the proposed algorithm is better than other state-of-the-art retrieval methods. For an intuitive display, the contents of Table 6 are shown in Figure 10. Figure 11 shows four images retrieved by CROCD from dataset Corel-10K and lists the first 30 returned images according to their similarity to the query images. The first 30 images returned from the tree branch (Figure 11(a)) and dinosaur (Figure 11(b)) images are related to the query images, respectively. And, of course, not all query images of these two categories have such effect, but it can be shown that the proposed algorithm has the superiority to those objects which have the obvious color and texture in the similar background. Of the 30 returned images in the snow mountain category (Figure 11(c)), 27 were returned correct. Those incorrect images (enclosed by the rectangular box), the three billow images, have similar colors and textures as snow mountains. Machinery category (Figure 11(d)) also has the 27 returned correct. In the three images returned by the error (enclosed by the rectangular box), they have similar textures and colors to the query image. Computational Complexity. The complexity of the proposed algorithm consists of the amount of calculations required to complete a retrieval which is divided into three parts: query image and database image feature extraction, similarity measurement, and ranking retrieval. As for feature extraction, the calculation amount of extracting the correlation features of the region is K × 17 M × N, and the calculation amount of extracting the correlation features of orientation correlation is K × ð5 M × N + 16 L + 8Þ, and the total is K × ð22 M × N + 16L + 8Þ, which is K × ½OðMNÞ + OðLÞ, where M and N are the length and width of the image. L is the dimensions of the image color quantization space. The variable K represents the total number of images in the dataset. 
As for similarity measurement, the weighted 1 criterion is adopted, and the calculation amount is K × ð4D − 1Þ, that is, Journal of Sensors the order of K × OðDÞ. The dimension of the feature vector is D. As for sort and search, the quick sort method is used. The calculation amount for sorting and searching the relevant images from the dataset is OðK log 2 KÞ + Oðlog 2 KÞ [24]. The total amount of calculation is The best retrieval results are shown in bold, which means that CROCD has the best performance on this condition. The best retrieval results are shown in bold, which means that CROCD has the best performance on this condition. The best retrieval results are shown in bold. Journal of Sensors The speed of extracting similar images to the query image depends on the feature vector length of the image. Lengthy feature vector takes more time in calculating the difference between query image and database images. The comparison of feature vector of the proposed method with other methods has been given in Table 7 for speed evaluation. Also, feature extraction time for one image has been given in Table 7 for all methods including the proposed method. These As demonstrated in the table, the proposed method is slightly slower than SED but faster than the other methods for feature extraction. The feature vector length of the proposed method is slightly longer than the DCT+ GLCM but shorter than other methods. Moreover, the proposed method outperforms the other methods in terms of accuracy as mentioned in different datasets. 14 Journal of Sensors
5,762.4
2020-06-10T00:00:00.000
[ "Computer Science" ]
A Hot-Hole Transport Model Based on Spherical Harmonics Expansion of the Anisotropic Bandstructure To represent the valence bands of cubic semiconductors a coordinate transformation is proposed such that the hole energy becomes an independent variable. This choice considerably simplifies the evaluation of the integrated scattering probability and the choice of the state after scattering in a Monte Carlo procedure. In the new coordinate system, a numerically given band structure is expanded into a series of spherical harmonics. This expansion technique is capable of resolving details of the band structure at the Brillouin zone boundary and hence can span an energy range of several electron-volts. Results of a Monte Carlo simulation employing the new band representation are shown. INTRODUCTION Efforts on numerical modeling of hot carrier transport published to date deal mainly with hot electrons.One reason might be that for electrons some important transport properties are readily revealed by assuming simple effective-mass band models.For holes, how- ever, an effective mass approximation is poor even very close to the F-point.Non-parabolicity is very pronounced and cannot be described by simple ana- lytic expressions.The warped-band model [1 ], which is essentially parabolic, cannot be implemented in the Monte Carlo technique without additional simplifica- tions [2]. The representation of the valence bands we present is specifically tailored to the needs of Monte Carlo transport calculations.These needs include efficient calculation of the scattering integrals and a straight-forward algorithm for the choice of the state after scattering. REPRESENTATION OF THE BANDSTRUCTURE To obtain the total scattering rate the transition proba- bility given by Fermi's Golden rule has to be inte- grated in the three-dimensional k-space.Because of the energy-conserving 8-function in the transition probability a coordinate transformation is desirable such that energy becomes one of the integration varia- bles.Assume that the band structure is given in polar coordinates" e %(k, f2).We now introduce a coordinate transformation (k, if2) ---) (e, f2) by inverting the function %(k, f2) with respect to k.The result of such an inversion is a function describing equi-energy Corresponding author.Tel: +43 58801-3719.Fax: +431 5059224.E-mail<EMAIL_ADDRESS>surfaces in k-space k K(v-, f2).Inversion of a func- tion is possible only in an interval where the function is monotonous.By inspection of the full band struc- ture one finds that both the heavy hole and split-off bands can entirely be represented by such functions K Above a hole energy of E x (3.04eV) inversion of the light hole band is no longer unique. In this work, we represent the function as a series of spherical harmonics. Kb(''2)3 Derivation of the scattering rates is considerably eased by taking the third power of as the function to be expanded.For symmetry reasons non-vanishing coefficients only exist for even values of and for rn being a multiple of 4. With (1) a set of functions ab,tm(e) contains the whole band structure informa- tion. 
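Because the displayed expressions (1) and (2) are not legible above, the fragment below only sketches the idea: the cube of the inverse band function, K_b(ε, Ω)^3, is expanded in spherical harmonics with energy-dependent coefficients a_{b,lm}(ε), and the density of states of a band follows from the energy derivative of the zero-order coefficient (the exact prefactor of eq. (2) is deliberately omitted here). The coefficient container and normalization convention are assumptions.

```python
import numpy as np
from scipy.special import sph_harm   # sph_harm(m, l, azimuth, polar)

def k_surface(a_lm, eps_idx, theta, phi):
    """Evaluate K_b(eps, Omega) = [ sum_lm a_{b,lm}(eps) * Y_lm(Omega) ]^(1/3).

    a_lm    : dict {(l, m): array over the energy grid} of expansion coefficients
              (by symmetry only even l and m multiples of 4 are non-zero).
    eps_idx : index into the energy grid; theta: polar angle, phi: azimuth.
    Real harmonics are mimicked by taking the real part; the paper's exact
    normalization convention is not reproduced here."""
    k3 = 0.0
    for (l, m), coeff in a_lm.items():
        k3 += coeff[eps_idx] * np.real(sph_harm(m, l, phi, theta))
    return np.cbrt(k3)

def density_of_states(a00, eps_grid):
    """DOS of one band, proportional to d a_{b,00}(eps)/d eps as stated around
    eq. (2); the constant prefactor is omitted in this sketch."""
    return np.gradient(a00, eps_grid)
```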
The density of states of a band represented by (1) is solely determined by the zero order coefficient.d gb(t) 4/1:3 dv.a6,oo(v.),b-H,L, SO (2) All these mechanisms induce both intraband and interband transitions.Other than for electrons, over- lap integrals cannot be neglected for holes.The used approximations are of the form Gii ( + 3COS2[) Gij-----]sin2[3.The Coulomb scattering rate, which additionally depends on the solid angle of the wave vector, is expressed as a series of spheri- cal harmonics.In Eq. ( 5), (kj) denotes an average value over the solid angle, which is defined as (3 a 1 ./3 (kj)j, 00(e) The coefficients hjJ(e) being a result of integration can be expressed in terms of Legendre functions of the second kind. The distribution functions of the solid angle after scattering are given as spherical harmonics series.In a Monte Carlo procedure, the after scattering state can be chosen according to these distributions by a simple rejection technique. SCATTERING RATES Wihthin this framework, we derived the scattering rates for acoustic deformation potential (ADP) scat- tering in the elastic approximation, optic deformation potential (ODP) scattering and ionized impurity scat- tering (ION) in the Brooks and Herring formalism.8rc2hOv2 -aj,oo(e) In this work, we use the series expansion (1) to repre- sent the heavy and light hole bands up to eole 3.04eV, which is the band-energy at the X-points.The numerical band structure has been computed by a nonlocal empirical pseudopotential method. The functions ab,tm(e) are represented numerically by means of a finite element method.To ensure con- tinuous derivatives shape functions of third order have been chosen.The unknowns associated with the nodes of the energy grid have been determined by a variational approach [3].From numerical band data the functions ab,lm(E can well be computed for non- vanishing hole energies, but not for an energy of zero. To obtain the ab,tm(O) we expand the expression for the warped band approximation.In this way, our band model combines the warped band approximation in the vicinity of the F-point where not enough numeri- cal data points are available, and the numerical band structure for higher hole energies.band of silicon.Symbols refer to the data points of the numerical band structure, solid lines to the series expansion.Equi-energy lines in k-space are plotted in Figure 2. It turned out that at low energies less har- monics are required than at high energies.Therefore, we make the number of harmonics a function of energy.For instance, for the light hole band lmax 20 at 0.5eV, and lmax 60 at 3.0eV.The weak ripples at 3.0eV indicate that some higher order harmonics are still missing.In general, the higher the number of har- moncis, the better the details of the band structure can be resolved at the boundary of the Brillouin zone.On the other hand, for hole energies below Et (1.27eV), where the band structure does not yet touch the zone boundary, a lower value of Imax is sufficient (typically lmax < 28). As can be seen in Figure 2 the series representation provides states outside the first Brillouin zone which do not exist in reality.These artificial states yield an increased density of states and hence increased scat- tering rates.In the Monte Carlo procedure, scattering events to such artificial states outside the Brillouin zone are rejected and self-scattering is performed instead. 
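The two Monte Carlo ingredients described in this passage, drawing the solid angle of the state after scattering from a distribution given as a spherical-harmonics series and rejecting artificial states outside the first Brillouin zone in favor of self-scattering, can be sketched as follows; the angular distribution and the Brillouin-zone test are placeholder callables, not the paper's implementation.

```python
import numpy as np

def sample_after_scattering_angle(pdf_omega, pdf_max, rng):
    """Draw (theta, phi) after scattering by rejection.

    pdf_omega(theta, phi): angular distribution reconstructed from its
    spherical-harmonics series (placeholder); pdf_max must bound it from above."""
    while True:
        phi = rng.uniform(0.0, 2.0 * np.pi)
        costh = rng.uniform(-1.0, 1.0)        # uniform in cos(theta) -> uniform on the sphere
        theta = np.arccos(costh)
        if rng.uniform(0.0, pdf_max) <= pdf_omega(theta, phi):
            return theta, phi

def accept_state_or_self_scatter(k_vector, inside_first_bz):
    """Reject the artificial states produced by the series expansion outside the
    first Brillouin zone; the Monte Carlo step then becomes self-scattering."""
    return "scatter" if inside_first_bz(k_vector) else "self-scatter"
```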
In Figure 3 the simulated drift velocity is compared to measured data [4]; the split-off band has been neglected in this simulation. CONCLUSION A new method to represent numerical valence-band data for Monte Carlo transport calculations has been developed. A function basically describing equi-energy surfaces in k-space is expanded into a series of spherical harmonics. Depending on the energy range accounted for and the number of harmonics invoked, the model can be considered either as an improved analytical band model or as a full-band model. In this work we demonstrated the full-band capabilities for hole energies up to E_X (3.04 eV). Figure captions: comparison of the numerical band structure (symbols) and the spherical harmonics expansion (lines) for the heavy hole band (Figure 1); equi-energy lines in k-space, where the surrounding octagon indicates the boundary of the Brillouin zone (Figure 2); comparison of simulated and measured [4] hole drift velocities as a function of the electric field at 300 K (Figure 3).
1,621.2
1998-01-01T00:00:00.000
[ "Physics" ]
A MORSE LEMMA FOR DEGENERATE CRITICAL POINTS WITH LOW DIFFERENTIABILITY It is still an open problem if the conditions on the cohomology may be withdrawn. On the other side, there are examples of Finsler metrics on rank one symmetric spaces with only a finite number of closed geodesics. One of the differences between the critical points theory for the Finsler and the Riemannian cases is that the Finsler energy is not C2. In fact it is twice differentiable at the critical points, but not in general outside the regular curves. Therefore, in order to have a Morse theory for the Finsler case, we need a Morse lemma for functions with such a level of regularity. This was done in [4] for the case of nondegenerate critical points and, as a consequence, they obtained the result of Gromoll and Meyer for the Finsler case with the additional hypothesis that the closed geodesics are nondegenerate circles of critical points in the space of closed curves (nondegenerate in the sense of Bott [1]). Introduction Morse theory has been successfully applied for proving existence and multiplicity results for extremals of various variational problems.In particular the following beautiful theorem of Gromoll and Meyer (see [3]). Theorem 1.1.If M is a compact, simply connected Riemannian manifold whose cohomology is not isomorphic to the one of a compact symmetric space of rank one, that is, a sphere or a projective space, then M has infinitely many closed geodesics (nontrivial and geometrically distinct). It is still an open problem if the conditions on the cohomology may be withdrawn.On the other side, there are examples of Finsler metrics on rank one symmetric spaces with only a finite number of closed geodesics.One of the differences between the critical points theory for the Finsler and the Riemannian cases is that the Finsler energy is not C 2 .In fact it is twice differentiable at the critical points, but not in general outside the regular curves.Therefore, in order to have a Morse theory for the Finsler case, we need a Morse lemma for functions with such a level of regularity.This was done in [4] for the case of nondegenerate critical points and, as a consequence, they obtained the result of Gromoll and Meyer for the Finsler case with the additional hypothesis that the closed geodesics are nondegenerate circles of critical points in the space of closed curves (nondegenerate in the sense of Bott [1]). The aim of this paper is to prove a degenerate-critical-point version of the Morse lemma as in [2] with conditions of low differentiability that, although stronger than those in [4], are verified by the Finsler energy.More precisely, let f : ᐁ ⊂ H → R be a C 1 function defined on an open set of a Hilbert space H. Suppose that f is twice differentiable at 0 and let N be the kernel of the symmetric operator A : H → H given by Thus we can look at z ∈ H as x + y ∈ N ⊥ ⊕ N. We will prove the following theorem. 
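Since the displayed formulas in this passage are not legible, the sketch below records, under the standard conventions for splitting lemmas of Gromoll–Meyer type, the operator defining A and the normal form that the construction is aimed at; the precise hypotheses and conclusion are those of Theorem 1.2 below, and the statement may equivalently be written with the norms of the positive and negative components of x.

```latex
% Sketch of the setup, assuming the usual conventions for a splitting lemma.
% A is the self-adjoint operator representing the second differential of f at 0:
\langle A u, v \rangle = d^2 f(0)[u, v], \qquad N = \ker A, \qquad H = N^{\perp} \oplus N .
% Expected normal form: a homeomorphism \varphi of a neighborhood of the origin with
f\bigl(\varphi(x, y)\bigr) \;=\; \tfrac12\,\langle A x, x \rangle \;+\; f\bigl(g(y) + y\bigr),
\qquad x \in N^{\perp},\; y \in N,
% where g : U \subset N \to N^{\perp} parametrizes the critical points of the
% restrictions x \mapsto f(x, y) near the origin (cf. Proposition 2.1 below).
```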
Theorem 1.2.If f is strongly differentiable at the origin, there is a neighborhood V of 0 in H and a homeomorphism ϕ : where g is a function g : Proof of Theorem 1.2 Recall that a function between two Banach spaces, f : E → F, is said to be strongly differentiable at where r(y) is the rest function of the Taylor's formula for f around x.In other words, f is strongly differentiable at x if and only if it is differentiable and, given ε > 0, there is a neighborhood of x where r(y) is ε-Lipschitzian (hence so is f ).It is clear also that if f is differentiable on a neighborhood of x and its differential f is continuous at x, then f is strongly differentiable at x.Moreover, if f is continuous in a neighborhood of x and strongly differentiable at x, with invertible differential, then f is invertible around x and the proof is the same as the classical inverse function theorem.Also the corresponding version of the implicit function theorem gives the following proposition. Proposition 2.1.Using the conditions and notation above, let 0 be a critical point of f and suppose that f is strongly differentiable at the origin.Then there is a continuous function g : U ⊂ N → N ⊥ on an open set U containing 0 such that (∂f/∂x)(g(y), y) ≡ 0 and g(0) = 0.Moreover, g is strongly differentiable at the origin and dg 0 = 0. Remark 2.2.If we write f (z) = f (x, y) and look at the restrictions of f on the planes N ⊥ ×{y}, the above function g gives us a parametrization of the critical points of such restrictions on a neighborhood of the origin in H. We go now into the proof of Theorem 1.2.Define h 1 : It is clear that h 1 is strongly differentiable at 0 and that dh 1 (0, 0) = I , hence h 1 is a homeomorphism of a neighborhood V 1 of the origin of H onto another neighborhood of 0, say 3) The symbol | • | means the norm induced by this inner product. From now on we are looking for a homeomorphism h 2 : We will search h 2 of the form where λ : and so, f (ϕ(z)) = ψ(z) if and only if λ satisfies Now observe that φ(0, y) = 0, thus any value we give for λ(0, y) satisfies the last equation on these points, in particular λ(0, y) = 0.For x = 0 and y ∈ N fixed, consider the function : R → R defined by (2.9) We want a neighborhood of 0 in R × (N ⊥ ⊕ N) where is a contraction and, therefore, λ = λ(x, y) will be a fixed point of .We begin estimating .Observe before that hence (∂φ/∂x)(0, y) = 0 and, since φ is strongly differentiable at the origin, it follows that, given ε > 0, if x + y is small enough, where k 1 , k 2 are constants.So, if we choose ε small and |λ| ≤ 1/(4k 2 ), for instance, we will have | | ≤ 1/2 and, using the mean value theorem, (2.12) Observe that (0, x, y) = φ(x, y)/|x| 2 .Then, taking λ 0 = 0, we get The function λ constructed in this way is bounded on an entire neighborhood of the origin and might be discontinuous only at the points (0, y).Since λ(x, y) ∈ [−η(x, y), η(x, y)] we see it is continuous at 0. However h 2 is continuous, even where λ is not, (2.17) As well as h 1 , h 2 is differentiable at the origin and dh 2 (0) = I .In fact, |h 2 (x, y) − x − y| = |λ(x, y)| |x| and the differentiability follows from the continuity of λ at 0. 
Unfortunately, we cannot guarantee the strong differentiability of h 2 at 0, what leads us to search the inverse h 3 of h 2 explicitly.Choosing a neighborhood of 0 in H in such a way that |λ(x, y)| < 1/2, the function h 3 defined below is the function we are looking for In fact, continuity is clear in the whole neighborhood except at the points (0, y).For these we have and so h 3 is continuous.That h 3 really is a local inverse of h 2 is just a simple verification of the equalities h 2 • h 3 = id = h 3 • h 2 .Finally we see that dh 3 (0) = I , (2.20) and using the continuity of λ at 0 again completes the proof. It is not difficult to see that the Finsler energy verifies the hypothesis of this Morse lemma so a Morse theory can be developed as in the Riemannian case.Moreover, the index formula works the same as in the Riemannian case (see [5]) so we can repeat the proof of [3] obtaining the analogue of Theorem 1.1 for the Finsler case. y) x − + y − y 0 2 = x + 2 |1 2 |1 2 , + λ(x, y)| 2 + x − − λ(x, y)| 2 + y − y 0 ). Observe that what h 1 is doing is to move an open set of {0} × N onto the parametrized surface of "critical points" (in the sense of Remark 2.2) {(g(y), y); y ∈ U ⊂ N}.Since A| N ⊥ is an isomorphism, we can write N ⊥ = H − ⊕ H + where H − and H + are the A-invariant subspaces where A is negatively defined and positively defined, respectively.This way we have H = H − ⊕ H + ⊕ N and z = x + y with x = x − + x + .It is convenient to introduce an inner product on N ⊥ , (•, •), equivalent to the former •, • , which makes the decomposition H − ⊕ H + orthogonal.It suffices to take
2,126.6
2000-01-01T00:00:00.000
[ "Mathematics" ]
Delft University of Technology Demodulation of a tilted fibre Bragg grating transmission signal using α-shape modified Delaunay triangulation Reflective Tilted Fiber Bragg Grating (TFBG) sensors have intriguing sensing capabilities due to the resonance-peaks present in their transmitted spectrum. Previous works measured the external refractive index (ERI) in which the TFBG sensor is placed, by considering the wavelengths or the envelope of the cladding-modes resonances. In this paper, primarily, we demonstrate the effectiveness of an alternative global technique, based on Delaunay triangulation, to analyze the TFBG spectrum for refractometer purposes. Hence, we performed the correlation between the area subtended by the upper and lower cladding-modes peaks and the ERI. An investigation on the goodness-of-fit correlation functions is also presented for TFBG sensors written in standardand thin-optical fibres and considering different values of the fundamental triangulation parameter a. 2020 The Authors. Published by Elsevier Ltd. This is an open access article under theCCBY license (http:// creativecommons.org/licenses/by/4.0/). Introduction Short-period gratings with periodicity around 500 nm, otherwise called Fiber Bragg gratings (FBGs), are formed by a permanent modulation of the core of a single-mode (SM) optical fibre (OF). The variation of the refractive index can be imposed with several refractive index profiles along the length of the Bragg grating which modify the propagation of electromagnetic waves inside and, hence, the spectrum of the FBG. Depending on the application of the FBG, uniform, chirped, Gaussian and apodizating are profiles commonly used [1]. In the case of tilted FBG sensors, the previous profile scan is written with a so-called tilt angle with respect to the fibre axis, so that the Bragg gratings are single-sided or, even double-sided tilted [2]. Adjusting the tilt of the gratings, the TFBG transmission spectrum acquires unique features which make it suitable in several applications. Indeed, the detection of the shifting in the spectrum of different peaks allows the separate and simultaneous measurement of strain and temperature at the sensor location [3][4][5][6][7]. Furthermore, the TFBG spectral bandwidth is composed of multiple resonance peaks, called cladding modes, that are more or less sensitive within a determined range of external refractive index (ERI) variations, based on the Bragg grating's characteristics. Therefore, changes of the wavelength and amplitude of these resonances can be exploited to use the TFBG as a refractometer [8][9][10][11][12][13]. Also, cladding modes with high effective refractive index are susceptible to macro-bending applied to the OF [14][15][16]. In other FBG types (such as uniform, chirped, Gaussian and apodizating), cladding resonances are seldom visible in the transmission spectrum; as a well-designed grating keeps the light in the core mode [1,2]. Long Period Gratings (LPG) sensors have been applied for refractometer purposes because their spectrum contains a number of attenuation bands coming from the coupling between the core guided mode and a subset of cladding modes [1]. The resulting peaks are sensitive to external refractive index changes [8], however, as a refractometer, LPGs suffer of some metrological issues, high temperature-strain-bending crosssensitivity, their grating's length prevents point measurements, short RI sensitivity range and higher cost of the sensor [5,10]. 
While, in the case of TFBGs, the cross sensitivity is a key point, in fact, a three-parameter optical sensor based on weakly slanted short-period gratings can be developed as the spectral response offers the possibility to considering the shifting of determined peaks and its total transmission power to detect simultaneously strain, temperature and ERI [5,17]. In the last years, attractive sensing abilities have arisen from the embedding of TFBG sensors inside composite materials to monitor their thermo-mechanical deformation state [18,19] and the degree of cure of the resin during the composite manufacturing [20,21]. The interest for refractometers based on TFBGs is also increasing in chemical and biochemical fields, because they offer the possibility of using an optical approach to detect chemical and biochemical species without resorting to luminescence-or absorption-based measurements [22,23]. A common application is the measurement of solute concentration in a liquid solution [24,25]. In this work, we propose a new technique based on a-shape modified Delaunay triangulation to demodulate the TFBG transmitted spectrum and perform a correlation with the ERI. The demodulation technique allows the calculation of the envelope area of the upper and lower cladding resonance peaks easily and quickly by considering the TFBG spectrum as a point set for which the Delaunay triangulation is applied. Then, the correlation between the normalized envelope area and the ERI range used during the calibration here performed. Then, we demonstrate that by changing the a-shape value of the triangulation a higher matching can be obtained between the fit functions and the correlation curve. Correlation methods between ERI and TFBG transmission spectrum Substantially, for refractometric purposes, all the demodulation approaches exploit the correlation between the cladding losspeaks in the spectrum and the ERI variation. However, a fundamental distinction can be considered between those methods using wavelength-encoded information (wavelength shifting [10] and cutoff resonance [26]), and information contained in transmission spectral changes of the cladding resonances (envelope [8] and area method [17], standard deviation method [24], and contour length approach [27]). These two demodulation classes are respectively called wavelength shift and global methods. Although the wavelength methods are easy to be applied, as they are based on the wavelength shifts of cladding resonances peaks caused by and ERI changes, the use of these techniques would require the temperature and RI sensitivities coefficients of some selected cladding resonances to be known during the calibration step. That makes these techniques sensitive to the cladding resonances to parameters. Moreover, in case of multi-parameter measurements, the coefficients of the sensitivity matrix are fundamental to uncouple opportunely the different perturbations (if the coefficients are close then it is not possible to separate individual effect). The second demodulation group is more suitable for simultaneous multiparameter measurements because these methods are independent of thermo-mechanical perturbations and they offer precise measurements in a certain ERI range. However, often these approaches have to follow many steps and they are time consuming to perform. 
In fact, as the spectral peaks of cladding modes are not uniform in amplitude and wavelength distribution, the process of obtaining the envelope of the upper and lower peaks is not always straightforward to perform. In this context, both the envelope curves can be smoother than the spectrum and/or the integral of the transmission, may undergo approximations to better adapt to the mathematical trend. Furthermore, the envelope functions are often generated through piecewise-functions whose integration process is computationally heavy and slow. In this paper we want to introduce a new global approach based on the Delaunay triangulation (also known as Delone triangulation) [28], to calculate the cladding resonances area in the TFBG spectra. This technique is simple and fast to implement, and its performance is independent of the shape of the spectrum of the TFBG. The technique is applied to analyze the cladding resonances of weakly tilted short-period gratings spectra when the sensors were totally surrounded by several liquids with different well-defined refractive indexes (±0.0002). Hence, the correlation between the spectra and the ERI variations was performed for several values of a fundamental triangulation parameter, the triangulation radius. This approach allows each minimal variation of shape peaks to be considered (according to the resolution of the interrogator device used to obtain the spectra), and moreover, based on the measuring ERI range, the slope of the correlation curve can be increased so that the resolution of the measurement system to the ERI change increases. Weakly tilted FBG theory and spectrum In a standard FBG, since the multiplexed gratings are approximately perpendicular to the optical axis of the fibre, coupling is only allowed between the modes propagating inside the core. Specifically, the core forward-propagating light is coupled only with the backward-propagating reflection core mode with a welldefined wavelength determined by the gratings. However, when a tilt angle is imposed, the tilted grating reflects part of the light into the cladding layer of the fibre, where it can be coupled from the cladding modes to the surrounding medium. Hence, a double coupling system occurs inside the core, and in the case of tilt angles h < 45°, the core forward-propagating mode is also coupled with the core and cladding backward-propagating modes [29]. When the tilt angle reaches a range close to 45°, a great amount of the reflected light in the cladding is irradiated out of the optical fibre, this effect is called radiation mode coupling. The angle range, in which radiation modes are present, depends on a critical angle that can be easily calculated by knowing the cladding and surrounding medium refractive indexes [30]. The radiation mode coupling generates a lack of resonances in the TFBG transmitted spectrum as the cladding modes are no longer guided as they are coupled out from the OF. With a further increase of h, out of the radiation range, the cladding modes are reflected so that they change their propagation direction from backward to forward and then the coupling with the core mode occurs in this same direction. The coupling modalities influence strongly the transmission spectrum of TFBGs, as well as the tilt angle affecting the coupling between the in-fibre propagating modes, consequently the shape of the TFBG transmission spectrum depends an the value of h imposed by the Bragg gratings. 
In this work only weakly tilted FBGs (h < 10°, also called reflective TFBGs) were used because the unique features of their spectra are more suitable for aerospace applications (for realtime and parallel thermo-mechanical measurements), however the approach introduced below could be applied to any kind of TFBG spectrum. Inside the TFBG, the light radiation conditions, with varying the tilt angle, can be represented by the core K core , grating K G and radiated K R light wave vectors. Specifically, the strongest modes coupling occurs when a phase-match condition is satisfied, which is written as: Since the refractive indices of the core and cladding are similar, we can consider the weakly guided OF approximation to be applicable. As a consequence the amplitudes of K core and K R may be assumed to be approximately the same. In Fig. 1, d is the radiation angle, K G is the grating period, K is the nominal grating period and h the tilt angle. From the vector compositions, it is possible to see that, for a tilt angle of less than 45°, the light of the core mode is coupled to the backwardpropagating core mode and the cladding modes. K R is completely dependent of the inclination of the Bragg gratings. This kind of mode coupling involves special spectral features when the FBG has a tilt angle of less than 10°. Specifically, three main regions can be identified in the spectrum: Bragg peak, Ghost peak, and cladding-mode resonances (Fig. 2). The Bragg peak is obtained by the same core-core mode coupling that generates the main peak in FBG spectrum. The Ghost res-onance appears in the spectrum slightly away from the Bragg peak, and it is the result of a group of low-order and strongly guided cladding modes coupling with the core light. A possible explanation for why the Ghost peak is single, although the coupling happens between multiple low-order cladding modes, could be that the wavelength resonance of each coupling is near to each other. This means the contribution to the reflection of each of these modes is such that, in transmission, the Ghost peak is a single peak but with larger bandwidth than the other resonances peak (as it is already possible to see in Fig. 2). These Ghost modes have a low interaction with the interface between the cladding and the surrounding as they propagate in a well-confined region of the cladding layer with a strong interaction with the fibre core. The Bragg and Ghost peaks are typically present in the spectrum of FBG with small tilt angles (h < 10°), however they become invisible for greater grating inclination. The wavelengths of the Bragg peak and the cladding resonances in the spectrum can be determined by applying the phase-match condition (Eq. (1)) and substituting the parameters characterizing the coupling between the modes. Then, the wavelengths of the Bragg (k Bragg ) peak and cladding (k clad,i ) resonances are obtained [31]: where n eff,core and n eff,clad,i are respectively the refractive index of the core and cladding modes. As the above equations show, after the coupling, the spectral position of each resonance is dependent on the effective refractive index n eff,i of the modes and the tilt angle and period of the gratings K G . Although, all the peaks are sensitive to thermo-mechanical perturbations (the wavelength shifting occurs when a temperature or strain variation is applied to the gratings), in the case of ERI variation, only the cladding resonances are affected by wavelength and amplitude peak loss [8,10]. 
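Since the resonance-wavelength expressions referred to as eqs. (2)–(3) are not reproduced above, the sketch below uses the forms commonly quoted for weakly tilted gratings, λ_Bragg = 2 n_eff,core Λ / cos θ and λ_clad,i = (n_eff,clad,i + n_eff,core) Λ / cos θ, with Λ the nominal grating period and θ the tilt angle; the numerical values are hypothetical.

```python
import numpy as np

def tfbg_resonances(n_eff_core, n_eff_clad_modes, period_nm, tilt_deg):
    """Bragg and cladding-mode resonance wavelengths of a weakly tilted FBG.

    Assumed phase-match forms (stand-ins for the paper's eqs. (2)-(3)):
        lambda_Bragg  = 2 * n_eff_core * Lambda / cos(theta)
        lambda_clad_i = (n_eff_clad_i + n_eff_core) * Lambda / cos(theta)
    """
    lam_g = period_nm / np.cos(np.radians(tilt_deg))  # grating period along the fibre axis
    lam_bragg = 2.0 * n_eff_core * lam_g
    lam_clad = (np.asarray(n_eff_clad_modes) + n_eff_core) * lam_g
    return lam_bragg, lam_clad

# hypothetical values for a ~1550 nm grating written with a 6 degree tilt
bragg, clads = tfbg_resonances(1.4460, [1.430, 1.420, 1.410],
                               period_nm=535.0, tilt_deg=6.0)
print(bragg, clads)
```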
Indeed, as the surrounding RI changes, each cladding mode adapts to the new condition, propagating with a different effective refractive index. The amplitude of the peaks is also affected: when the surrounding RI becomes close to the effective RI of the i-th mode, that mode becomes less guided within the cladding and its resonance weakens. When the external RI matches the effective RI of the i-th mode, the mode is radiated out and the corresponding resonance no longer appears in the spectrum. Specifically, the i-th cladding mode propagates out of the fibre because, the mode and the surrounding medium having the same refractive index, there is no longer a boundary between the two layers and total internal reflection is lost. Furthermore, the influence of strain and temperature variations on the cladding-resonance amplitudes and on the RI measurement needs to be considered. As demonstrated in [17,32], although a translation of the whole spectrum is observed when strain is applied to the OF, the area obtained from the envelope of the cladding resonances is unaffected; moreover, the only effect of a temperature variation on the spectrum area is generated by the corresponding change of the surrounding RI. This is the physical theory behind the mechanisms on which the transmitted-signal demodulation techniques presented in this section are based. The new approach proposed in this paper exploits the decay of the cladding resonances and the change in the shape of the transmitted spectrum caused by the ERI effect. Delaunay triangulation Triangulation methods allow the partitioning of complex polygons into multiple triangles (in 2D) and of polyhedrons into tetrahedrons (in 3D), which can then be used to compute areas and volumes or to discretize a point set into a convex hull. Other applications in which triangulation is useful are the creation of interpolation functions and the generation of refined meshes for FEM analysis of complex body parts [28,33]. Several algorithms based on different triangulation strategies have been developed over the years: recursive diagonal insertion, ear cutting, prune and search, decomposition into monotone polygons, divide and conquer, sweep-line, Graham scan, randomized incremental, and algorithms using bounded integer coordinates [34]. However, all these algorithms share a common definition of triangulation, such that the conditions of Eq. (4) are satisfied, where S is a set of points, T is the array of triangles t_i (the triangulation as a whole), V_i and l_m are the vertices and edges of the triangles, and i, m and n are the indices of the corresponding elements. The first expression in Eq. (4) states that the convex hull of S is the union of all the triangles generated by the triangulation. The second condition in Eq. (4) establishes that each point of S is a vertex of the generated triangles. The third and last condition implies that the intersection between the convex hull of the point set and the triangles generated by the triangulation coincides with the vertices or the edges of the triangles, or is empty. Although the previously mentioned strategies may result in more or less efficient triangulation algorithms, the obtained triangles are often skinny and have long edges. This is undesirable because the triangulation might be non-uniformly distributed and, consequently, the convex hull of the discretized point set may differ from the appropriate one. Indeed, the vertices of some triangles may tend to be spread out from each other or not to be connected properly to each other.
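As a minimal illustration of the triangulation step described above (a sketch of ours, not the authors' implementation; the spectrum samples below are synthetic and the variable names hypothetical), a point set built from (wavelength, transmission) samples can be triangulated with SciPy's Qhull-based wrapper, which is in fact the Delaunay variant adopted in the next paragraph:

```python
# Minimal sketch (assumed tooling, not the paper's code): triangulating spectrum
# samples treated as a 2D point set S.
import numpy as np
from scipy.spatial import Delaunay

# Hypothetical spectrum: wavelength in nm, transmission in dB (synthetic shape only).
wavelength = np.linspace(1520.0, 1560.0, 2000)
transmission = -10.0 * np.abs(np.sin(0.8 * wavelength)) - 2.0
S = np.column_stack([wavelength, transmission])   # the point set S of Eq. (4)

T = Delaunay(S)               # triangulation T; each row of T.simplices holds the
print(T.simplices.shape)      # indices of the three vertices of one triangle
```

In practice the wavelength and transmission axes would have to be scaled consistently, since the circle radius used in the next paragraph mixes the units of the two axes.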
Delaunay (D-)triangulation can be used to avoid the issues just described and to obtain well-shaped and uniformly distributed triangles (Fig. 3) [35]. Starting from Eq. (4), D-triangulation can be defined by adding the so-called empty circumcircle property (Eq. (5)), where the (x_i, y_i) are the coordinates of the points in the plane and the c_n are the circles generated during the triangulation. Briefly, Eq. (5) means that, for each edge of the triangles generated by the triangulation, a circle exists such that the endpoints of the edge (the vertices) lie on the boundary of the circle while, at the same time, no other vertex of S lies in the interior of the same circle c. When Delaunay triangulation is not applicable, the constrained version can be used, in which two vertices of the same triangle of T must lie on the boundary of c and the third can be inside or outside the circle c_n [34]. As Fig. 3 shows, each point of the set is a vertex of one or more triangles generated on the circles and, simultaneously, each point lies outside every circle. Moreover, since only one circle can pass through three given points, the triangle inscribed in it, having those points as vertices, is unique. D-Triangulation demodulation approach In this section Delaunay triangulation is applied to the TFBG spectrum with the aim of calculating the cladding-resonance area and, subsequently, of correlating it to the external RI variation. For the reasons reported in Section 3, the demodulation technique was applied to the bandwidth corresponding to the cladding resonances and to half of the Ghost peak. This choice allows the spectral variations occurring in the bandwidth between the Ghost peak and the first peak of the cladding resonances to be taken into account. In Fig. 4, the TFBG transmission spectrum is processed as a sequence of points whose coordinates in the plane are known; these can be treated as a set S (Fig. 4). Each point of the spectrum is potentially a vertex of the Delaunay triangulation T. However, if the technique were applied globally to S, the convex hull resulting from the triangulation would most likely differ from the actual envelope of the cladding-resonance peaks. Indeed, without a parameter to control the proper connection distance between the vertices, very long edges l could be generated by the algorithm between vertices located in distant bandwidths. This issue can easily be overcome by introducing the notion of α-shape into the Delaunay triangulation algorithm [36,37]. Substantially, in this way, T is built according to the Delaunay definition but, simultaneously, the triangulation is kept under control through the parameter α, which represents the radius r of the circles c_n and which can be assigned different values under different conditions. Specifically, to obtain a D-triangulation suitable for our aims, let us consider the Delaunay triangulation T = {t_1, t_2, . . ., t_n} and the vertices P_1(x_1, y_1), P_2(x_2, y_2), P_3(x_3, y_3) = V_i (from Eq. (5)). Triangles generated inside circles with r ≤ α are allowed by the algorithm; this allows small variations of the cladding-resonance area to be considered and the 'mesh' to appear smooth and uniform. During the demodulation phase, we noted that the triangular mesh elements usually have smaller sizes where they are used in the interior of the convex hull of the spectrum (where the concentration of vertices is higher), while longer edges l are used for the border elements.
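A minimal sketch of the radius-controlled selection described above (an assumption of how such an α filter can be coded, not the authors' algorithm): each Delaunay triangle is kept only if the radius of its circumscribed circle does not exceed α.

```python
# Minimal sketch (assumption, not the paper's code): alpha-controlled Delaunay
# triangulation, keeping only triangles whose circumradius r satisfies r <= alpha.
import numpy as np
from scipy.spatial import Delaunay

def alpha_filtered_triangles(points, alpha):
    """Return vertex-index triples of Delaunay triangles with circumradius <= alpha."""
    tri = Delaunay(points)
    kept = []
    for simplex in tri.simplices:
        p1, p2, p3 = points[simplex]
        a = np.linalg.norm(p2 - p3)              # triangle side lengths
        b = np.linalg.norm(p1 - p3)
        c = np.linalg.norm(p1 - p2)
        # triangle area from the 2D cross product of two edge vectors
        area = 0.5 * abs((p2[0] - p1[0]) * (p3[1] - p1[1])
                         - (p2[1] - p1[1]) * (p3[0] - p1[0]))
        if area == 0.0:
            continue                             # skip degenerate (collinear) triples
        if (a * b * c) / (4.0 * area) <= alpha:  # circumradius R = abc / (4 * area)
            kept.append(simplex)
    return np.array(kept)
```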
The α value strongly influences the resulting triangulation of the spectrum and, in order to calculate the cladding-peak area accurately, an optimized value should be used to perform the discretization. Increasing α further (α = 2, 6), the spectrum is perfectly discretized into triangles with a good envelope of the peaks. In these cases, due to the large values of α, some external regions of the spectrum are triangulated as well (as the red arrows indicate in Fig. 5c) and d)). This could seem to be an issue for the calculation of the resonance area; however, as will be demonstrated later, it does not compromise the correlation. Rather, since these external areas grow and shrink congruently with the cladding-resonance peaks, a better fitting function of the RI correlation might be obtained, resulting in a higher predictability and a smoother trend of the curve. Obviously not all radii α allow a useful area measurement to be determined; in fact, a trade-off analysis should be performed to select the optimum α. However, a range of radii exists in which every α value still returns an acceptable correlation (see Section 8). Correlation ERI-normalized area In this work, D-triangulation was applied to the demodulation of tilted FBG spectra written in Fibercore SM1500(9/125)P standard (cladding diameter 125 μm) and thin single-mode OFs (thin-SMF, cladding diameter 80 μm). The sensors were manufactured using the tilted phase-mask technique by the FORC-Photonics company. The sensors were interrogated with a 4-channel NI PXI-4844 Universal Input Module based on Fabry-Pérot tunable-filter technology, able to scan the wavelength range between 1510 nm and 1590 nm with a resolution of 4 pm. Considering the working bandwidth, 20 × 10³ points are therefore available in the spectrum for the triangulation. A set of Cargille oils was used to surround the OF sensor in a well-defined RI environment; the employed liquids had RIs from 1.3 to 1.7 in steps of 0.01, with an accuracy of ±0.0002 in the temperature range between 18 and 32 °C. The following results are for 5° tilted FBGs, 10 mm sensor length, written in standard and thin single-mode fibre. The coating layer (polyamide) was removed before the experiment to obtain a bare OF along the entire length of the TFBG. The temperature of the immersed sensor was monitored with a K-type thermocouple and kept at 25 ± 0.75 °C. A digital camera microscope was used to check that no air bubbles remained at the interface between the oil and the cladding of the OF. Moreover, the OF was mounted on a translation stage so that the TFBG sensor could be kept straight while being dipped into a special polycarbonate bowl containing the RI liquids. Once the spectra were acquired via the DAQ system, they were processed by applying the D-triangulation through a dedicated algorithm. Below, considering the TFBG in standard SMF, three overlapped spectra are reported after the meshing was performed with the sensor surrounded by three oils with different RIs: the red spectrum corresponds to the oil with RI = 1.47; the yellow and red regions correspond to the TFBG surrounded by a liquid with RI = 1.42; while in the case of RI = 1.33 the spectrum is composed of the blue, yellow and red regions. As expected, Fig. 6 shows the reduction of the region formed by the D-triangulation of the cladding resonances as the surrounding refractive index increases. In particular, as reported below, the correlation graph shows that, for a refractive index of 1.33, the triangulated cladding area is at its maximum (blue + red + yellow in Fig.
6) and incorporates all the cladding resonances. However, when the ERI grows, the upper and lower relative peak amplitudes fall; hence, for 1.42 the same blue region shrinks to the yellow one, and then to the red one for 1.47, so that a 'funnel effect' occurs between the several areas. This effect is also visible for the cladding modes with higher effective RI after zooming into parts of the spectrum, as seen in Fig. 7. Once the spectrum is discretized, the total area can easily be calculated by adding up the areas of all the triangles (Eq. (7)). From Eq. (7) it is proposed that A_T(n) is a function of the ERI. To investigate this, A_T(n) is calculated for various ERI values, and the correlation between them is then determined. By convention, a normalized area A is used to create the graph of the TFBG behaviour trend with the surrounding RI variation [8]. A is defined as the ratio between the i-th area and a reference area (Eq. (8)). For this work, n_ref corresponds to the value obtained for a surrounding RI of 1.33. This choice is arbitrary; however, as n_ref yields the maximum area, the y-axis of the correlation graph in Fig. 8 then lies between 0 and 1. In Fig. 8, the two curves show different trends although the only parameter changing between the TFBG customizations is the cladding diameter of the OF in which the sensors are inscribed. From the correlation graph it is possible to deduce some features of the TFBG spectra and behaviours. Starting from RI = 1, the trend is substantially linear up to around 1.33, where both curves have their maximum. In this RI range, the same linear trend was detected that was found in previous works [8,38], so it was considered unnecessary to use RIs between 1 and 1.3 for the correlation. In this same range, the red curve (thin-OF) is above the blue curve: as the normalization condition is applied in the same way for both correlations, the ratio A is larger for the thin-OF TFBG. Considering that a reduction of the cladding diameter generates an enhancement of the cladding-mode coupling in TFBG sensors [38], the transmitted resonance peaks of the highest cladding modes (Fig. 9) have a much deeper loss than those recorded for the standard OF (Fig. 3). A possible explanation is that, for low RI values, the contribution of the high cladding-mode peaks to building up the area is greater for the TFBG written in thin OF than for standard waveguides, so that the ratio with the maximum area is larger. Moreover, the two lines are not parallel; in particular, the red line has a more accentuated slope, which means that the susceptibility of the cladding-mode couplings to the ERI is higher in a thin-OF TFBG sensor. From RI = 1.33 the trend of both curves changes, showing the maximum sensitivity of the TFBGs to RI variations. Although the standard-OF TFBG is not very sensitive until 1.4, this sensor is highly susceptible to RI variations between 1.4 and 1.46, and its curve has a strong slope; meanwhile the other sensor exhibits a smooth trend and a good slant along the entire range 1.33-1.46. The above description demonstrates how the customization of TFBG sensors can influence their sensing abilities. In particular, in this case, the TFBG in standard OF is extremely sensitive between 1.42 and 1.46, while the second is susceptible over a broader range, although with a lower sensitivity. In the last RI interval (1.46-1.7) of our working wavelength window, both sensors show substantially the same behaviour, remaining more or less sensitive to RI changes.
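Going back to Eqs. (7)-(8), a minimal sketch of the area step (our assumption of the post-processing, with hypothetical variable names; the reference RI of 1.33 is the one used in the text):

```python
# Minimal sketch (assumed post-processing): Eq. (7) total area of the kept triangles
# and Eq. (8) normalisation against the area obtained at the reference RI of 1.33.
import numpy as np

def total_area(points, triangles):
    """Sum of triangle areas over the alpha-filtered mesh (Eq. (7))."""
    total = 0.0
    for i, j, k in triangles:
        p1, p2, p3 = points[i], points[j], points[k]
        total += 0.5 * abs((p2[0] - p1[0]) * (p3[1] - p1[1])
                           - (p2[1] - p1[1]) * (p3[0] - p1[0]))
    return total

# Hypothetical usage: 'spectra' maps each surrounding RI to its (N, 2) point set,
# and alpha_filtered_triangles() is the filter sketched in the previous section.
# A_T = {ri: total_area(pts, alpha_filtered_triangles(pts, alpha=2.0))
#        for ri, pts in spectra.items()}
# A_norm = {ri: a / A_T[1.33] for ri, a in A_T.items()}     # Eq. (8)
```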
Although the purpose of this work is not to study the behaviour observed in the 1.46-1.7 interval, we tried to explain it by considering the effect of internal reflection and of refraction at the interface between the cladding and the surrounding medium. When the surrounding RI reaches that of the cladding material, all the cladding modes are radiated out of the OF because they propagate as if the two materials were optically the same. However, by further increasing the external RI, the two materials become optically different again. In particular, part of the light continues to be radiated externally, while some modes return to being internally reflected towards the core, where they are coupled with the forward-propagating core mode. In fact, in the cladding of the OF thousands of modes with different effective RIs are present, for which the critical angle (or cut-off angle) is not the same; in particular, when the external RI is higher than the cladding RI, some modes are reflected back internally at the cladding-surrounding interface. Once the correlation curve is obtained with the D-T demodulation technique, a graphical and computation-time comparison with the other methods can be performed. In particular, Fig. 10 reports a graph containing several RI correlation curves obtained by applying the D-T and other demodulation techniques (envelope [8], wavelength shift separation [10] and area method [17]). Moreover, the times necessary to demodulate a single TFBG transmission spectrum into an auxiliary parameter (normalized area or wavelength shift separation) for each technique are reported in Table 1. (Fig. 7. Triangulated-area zoom of the peaks of the cladding modes with medium-high effective RI; the 'funnel effect' is visible here as well, as smaller ERIs produce greater areas that incorporate those obtained for larger ERIs. Fig. 8. ERI-normalized area correlation performed through D-triangulation for the 5° tilted FBG sensors written in standard (blue curve) and thin-cladding (red curve) OF, α = 2.) From Fig. 10, the envelope and the D-T triangulation techniques provide very close correlation curves, whereas the area method provides a correlation curve restricted to a smaller excursion range. The demodulation technique based on the wavelength shift separation develops a correlation curve only up to 1.45, because the selected peak was no longer recognizable after this RI value. From the times reported in Table 1, the fastest demodulation technique among the main methods analysed here is the wavelength shift separation. However, the D-T technique is the fastest of the global demodulation methods. Although the difference in time between the global techniques may not appear relevant, when a high number of spectra has to be demodulated the difference in processing time between the techniques becomes important. It is enough to note that the time necessary to demodulate one TFBG spectrum using the Area method is the same as that needed by the D-T technique to process more than 16 TFBG spectra. Moreover, the computational-time factor is fundamental during real-time measurements, because this value should be smaller than the detection rate of the interrogation system in order to perform appropriate real-time monitoring.
The wavelength shifting method is faster; however, considering the several drawbacks described in Section 2, the difference in processing time is so small that it does not justify its use with respect to the benefits guaranteed by the D-T demodulation technique. Table 1. Computational times of the analysed techniques to demodulate a single TFBG spectrum: D-T demodulation technique, 0.120 s; Envelope method [8], 1.198 s; Wavelength shift separation [10], 0.095 s; Area method [17], 1.998 s. (Fig. 9. Transmission spectrum of a 5° tilted FBG written in Fibercore thin-SM1500(9/125)P optical fibre surrounded by air.) Fitting function of correlation curves Once the correlation curve has been obtained, it is easy to understand the RI applicability range of the TFBG sensor or, in other words, where its sensing capability is more effective. However, for practicality, an appropriate fitting function describing the correlation curve analytically should be found in the range of interest. This allows the RI value to be obtained directly from the TFBG spectrum area by solving the equation of the fitting function. However, if the entire RI measurement range is taken into account, some normalized-area values could correspond to two different RIs. This ambiguity may be solved by considering the two statistical parameters Skewness and Kurtosis during the demodulation, as in [32]; the D-triangulation approach can be used together with these statistical parameters, but this topic is not addressed in this work. Since the optimum fitting can change depending on the RI interval, only the RI range of best sensitivity is considered here. To assign a goodness-of-fit to each function, the R-square (R²) statistical approach was used. R², also known as the coefficient of determination, is the square of the correlation between the response values and the predicted response values, and is hence defined as the ratio between the deviation explained by the regression and the total deviation. R² is useful because, being constrained between 0 and 1, it provides an intuitive description of the goodness of the fit: the closer R² is to 1, the closer the data points are to the regression line. In this work, we attributed R-square values to the fitting functions only with the purpose of evaluating the quality of our calibration. The next results show the fitting of the correlation curves of the standard and thin-OF TFBG sensors used in the previous section. Fig. 11 reports a plot of the fitting curves of the correlation points with different polynomial degrees along an RI interval. Although the linear regression is not properly able to represent the correlation trend, because the points appear spread out with respect to the regression line, already with a quadratic degree the fitting function is such that the fitting curve is closer to the data points. As described above, R² allows the goodness of fit to be quantified. Tables 2 and 3 report several R² values, taking into account different RI ranges and polynomial degrees of the fitting functions. In most cases a quadratic function is already enough to match the trend of A(RI).
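As a minimal sketch of the fitting and R² evaluation just described (the calibration points below are purely illustrative, not measured values):

```python
# Minimal sketch (assumption): polynomial fit of the normalised-area vs RI calibration
# points and the coefficient of determination R^2 used to grade the fit.
import numpy as np

def fit_and_r2(ri, area_norm, degree):
    coeffs = np.polyfit(ri, area_norm, degree)        # least-squares polynomial fit
    predicted = np.polyval(coeffs, ri)
    ss_res = np.sum((area_norm - predicted) ** 2)     # residual sum of squares
    ss_tot = np.sum((area_norm - np.mean(area_norm)) ** 2)
    return coeffs, 1.0 - ss_res / ss_tot              # R^2

# Illustrative points in a high-sensitivity RI range (hypothetical values):
ri = np.array([1.41, 1.42, 1.43, 1.44, 1.45])
area_norm = np.array([0.95, 0.90, 0.82, 0.71, 0.60])
coeffs, r2 = fit_and_r2(ri, area_norm, degree=2)
```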
Taking into account the resolution of the interrogation system used to detect the TFBG spectra, the resolution of the RI measurement performed through the optical-fibre sensor can be calculated. In this case, the NI PXI-4844 interrogator was used to obtain the TFBG spectral responses; it has a wavelength scanning resolution of 4 pm and a power transmission resolution of 4 × 10⁻⁶ dB. The minimum detectable area variation is then obtained and, consequently, the minimum normalized variation with respect to A_T(n_ref). Considering for both TFBG sensors the same RI interval 1.41-1.45 and a triangulation parameter α = 2 with the respective linear fitting functions, the obtained RI resolution is 3.774 × 10⁻⁴ for the standard OF and 3.585 × 10⁻⁴ for the thin OF at 1.44. However, as previous works ([8,17]) have already demonstrated, the resolution can be improved if a better RI reference is taken. Moreover, the resolution can also be improved by using a particular α value and, of course, a different fitting correlation function. At this point, a flow chart is reported below (in Fig. 12) as a summary and example of the methodology to follow in order to integrate the algorithms of the demodulation technique introduced here into an RI measurement system, also considering a possible real-time application. In the flow chart, the blue arrows show the way the analysis can be performed during real-time measurements; it can be noted that this path is composed of only 6 steps. The reduction is due to the fact that the TFBG was calibrated in advance, hence its measurement algorithm is already optimized for the whole RI range. Regarding the discretization or triangulation strategies, as described in Section 4 there are many possible options, whose use depends substantially on the programmer; the analyses reported here were performed using the triangulation with the divide-and-conquer method. For real-time applications, the discretization into triangles could be performed using a moving and adaptive mesh, which would reduce the triangulation time and speed up the calculation. Influence of α-parameter on correlation curve In this section, the problem posed at the end of Section 5, about the influence of the α value used to perform the correlation, is addressed. The first step was to identify the interval of α values in which the correlation curve has a physical meaning. Each correlation curve obtained in this work, and found in the literature for any TFBG customization, is characterized by a slightly increasing trend up to a maximum point; then, from this peak, it shows a higher-sensitivity RI range with a strong decrease and, from the minimum point, the trend becomes increasing again. Hence, there are substantially three trend intervals, which can be physically justified (Section 3). These trend intervals are expected to remain constant if external disturbances (temperature fluctuations, bending) remain constant. By developing several correlations with many α values and taking into account the physical meaning described above, two different α intervals were obtained. For the standard-OF TFBG, the suitable interval is 1.6 ≤ α ≤ 6, while in the case of the thin-OF TFBG any α greater than 0.8 is useful to perform the demodulation technique. At this point, since the correlation curves essentially keep the same trends, it is not possible to appreciate graphically in detail the differences between different α values in Fig. 13 and Fig. 14. For this reason, to evaluate how the quality of our RI calibration of the TFBGs changes, a study focused on the variation of the R-square value of the fitting functions is necessary.
Taking into account several α radii, the same polynomial degrees of the fitting functions as before, and the widest sensitivity RI range of the same TFBGs (in which the correlation curve has the best slope), the calculated R² values are reported in Tables 4 and 5. From the R-square values it is possible to state that any allowable α value is suitable to obtain an acceptable fitted correlation curve. Nevertheless, by using a specific α, the fitting quality can be studied during calibration: for example, considering the TFBG in standard OF, the best fit for 2nd- and 3rd-order functions is obtained using α = 6, while for the linear or 4th-order fitting functions the D-triangulation should be performed with α = 1.6. For the TFBG in thin OF, α = 4 provides the best fit quality for fitting functions up to 4th order. However, when comparing the R-square values in Tables 4 and 5, as mentioned before, the use of another allowable α value does not cause a serious drop in the goodness of the correlation. Conclusions In this study a new concept of demodulation technique for tilted FBG sensors was introduced and demonstrated for refractometric purposes. In principle, the approach is based on the use of the Delaunay triangulation to obtain the area enclosed between the upper and lower peaks of the cladding resonances in the TFBG transmission spectrum, which is correlated to the surrounding RI. The technique is fast and simple to implement; in fact, the triangulation of the set of input data points coming from the interrogation of the TFBGs is the only operating step necessary to obtain the area of the cladding resonances. In this way, the technique does not require the determination of the peaks, the creation of envelope curves, the resolution of integrals, or the determination of sensitivity coefficients, saving time and reducing the computational power required. After the spectrum has undergone the meshing, the total area is easily calculated as the sum of the areas of the triangles created in the spectrum. We demonstrated that this approach can be applied to different kinds of optical fibre in which the TFBG is written and that there always exists an α range for which the correlation can be performed, or indeed improved. This demodulation technique can be applied to the analysis of a confined part, or of the total length, of the spectrum; moreover, since its applicability does not depend on the spectrum shape, it can be used with any kind of TFBG customization. Potentially, these features also allow this approach to be exploited for real-time spectrum monitoring with a moving mesh, and this demodulation method is moreover compatible with other techniques for simultaneous multiple measurements. Furthermore, both in real-time measurements and in data post-processing, when the spectrum is affected by disturbances such as connector power losses or bending of the optical fibre, for which the TFBG signal could present some ripples, this approach should be able to operate without particular issues, being insensitive to the spectrum shape and power loss. In this paper the technique was applied substantially to correlate part of the power transmitted by the TFBG with the surrounding refractive index; however, the same approach may be used in all cases where the power contained in a signal needs to be converted into, or correlated with, another significant parameter.
An immediate future research objective is a work focused on multi-parameter sensing using the D-triangulation-based demodulation technique. CRediT authorship contribution statement Declaration of Competing Interest The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
9,353.4
2020-12-15T00:00:00.000
[ "Physics" ]
Quality of Education and Economic Integration in Central America This paper draws on the literature that has shown that the main determinant of the economic growth of Latin American countries is the quality of education, measured as the scores obtained in national or international standardized tests. Based on that evidence, it is postulated that, in the Central American economic integration scheme, if a given country increases its education quality, its rates of investment and economic growth will increase. The increased economic dynamism in this country would lead it to increase its imports from the other member countries which, in consequence, would experience more rapid rates of economic growth. Moreover, it is shown that the GDP increases resulting from increases in education quality are larger when they take place on a Central American basis, that is, when each country increases its education quality simultaneously, than when they occur on an individual basis, a result that can be viewed as a regional concertation externality. The paper concludes by arguing that it is important that Central American countries increase their education quality in order to acquire the capacity to grow endogenously. Introduction The quality of education is a topic that continues to receive attention in economics. The interest stems from the evidence presented by various authors that the quality of education is an important determinant of economic growth. In recent years it has been observed that studies on the influence of human capital on economic development emphasize the role of the knowledge acquired by students in school, and not simply the number of years that students stay in the educational system. This knowledge, which represents the quality of education, is usually measured by the scores obtained by students in national or international tests of reading or mathematics. In view of its role in increasing economic growth, it is valid to postulate that if one or several member countries of an economic integration scheme improve their quality of education, they will experience increases in their economic growth rates, and therefore they will "share" their economic dynamism with the other member countries through increases in intra-regional trade flows, thus giving rise to a regional network of economic growth impulses. The objective of this paper is the quantification of such a network of growth impulses at the Central American level. The next section presents a brief overview of the relationships between quality of education and economic growth. This is followed by the development of a theoretical model derived from Metzler's (1950) model, which is estimated to compute the multiplier matrix and to quantify the repercussions on the Central American countries derived from increases in the quality of education by one, or several, member countries. The virtuous circles resulting from increases in the quality of education are analyzed as well. The work ends with a series of conclusions. The quality of education allows the diffusion and acquisition of technologies, thus increasing the capacity for innovation and therefore labor productivity. It should be noted that there is microeconomic evidence of the role of the quality of education in increasing the income of individuals. Hanushek (2005) showed that students' labor market performance after graduating from high school was such that those with higher test scores in their final year of high school subsequently had higher monthly earnings.
There is also evidence that public expenditure directed to increasing the quality of education pays for itself. Hanushek and Woessmann (2007) showed that an increase in the quality of education from 450 to 500 points in international tests in the last year of secondary school leads the respective country, over a horizon of 60 years, to have a GDP 25 percent higher than in the case in which it had not invested in increasing it. Regarding the Latin American countries, reference is made to the study by Hanushek and Woessmann (2009b), who estimated various equations to explain the per capita economic growth of these countries using various independent variables, including the scores resulting from the standardized tests carried out in Latin American countries by the Second Regional Evaluation of the Quality of Education in 2005 (SERSE, 2008). The results indicated that the average country scores were the most important determinant of economic growth in the region. Several authors have tried to identify the variables that determine the quality of education in a country. The study by Jackson, Johnson and Persico (2015) on the impacts of increasing spending per student in USA schools reported that a 10 percent increase in spending per student in each of the twelve years of public education led to an increase in the level of schooling of 0.43 additional years, an increase of 9.5 percent in wages, and a reduction of the adult poverty rate by 6.8 percent. Tsounta and Osueke (2014) presented evidence that the scores of the Latin American countries on the PISA tests increase with public spending on education as a percentage of GDP. Regarding the relationship between spending per student and student performance, Hanushek and Kimbo (2000) reported that the shortage of teaching materials had a negative impact on the quality of education. The positive impact of teachers' salaries has been reported by Lee and Barro (2001) and by Dalton and Marcenaro-Gutierrez (2010). Hanushek, Piopiunik, and Wiederhold (2014) reported that, in a sample of OECD countries, the quality of teachers and the amounts of their salaries had significant impacts on student scores in PISA. On the other hand, based on the analysis of the results of grades 7 and 8 of the TIMSS, Woessmann (2003) and Woessmann and West (2006) reported that the negative impact of the number of students per teacher decreased as the level of teacher proficiency and wages increased. Of particular importance is the evidence that eighth-grade students' standardized test scores increased proportionally with the time they had spent at the preschool level (Schuetz, Ursprung, & Woessmann, 2008). The study by Arias, Yamada, and Tejerina (2002) on education and wages in Brazil used the ratio of students to teachers as an indicator of the quality of education and reported that, as this index increased, the rate of return to education fell, so that a reduction of 10 students in this ratio would lead to an increase in the average rate of return of 0.9 points. A similar result was reported by Card and Krueger (1992) for the US. With respect to the number of students per teacher, Barro and Lee (2001) reported that its reduction had a positive impact on the quality of education, a result that has also been pointed out by McEwan (2014). Data The main source of data is the World Bank's World Development Indicators. The mean values and standard deviations of the variables are presented in table 1.
Quality of Education and Macroeconomics in Latin American Countries As an indicator of the quality of education in the Latin American countries, this paper uses the scores on the Third Regional Comparative and Explanatory Study, prepared by the Latin American Laboratory for Assessment of the Quality of Education (UNESCO-TERCE, 2016), which included reading, mathematics, and science tests conducted in 2013 in 15 countries of the region, with a coverage of 67,000 students. The average country scores are presented in table 2. Regarding the determinants of the quality of education, the TERCE scores allow corroborating the findings of several studies that the scores are positively related to public spending on education as a percentage of GDP, as shown in Figure 1. Likewise, in accordance with the literature on this subject, Figure 2 shows that there is a negative relationship between the number of students per teacher (studentsteacher) and the mathematics scores in third grade (thirdgrademath). (Figure 2. Number of students per teacher and math scores in third grade.) The quality of education also has an impact on macroeconomic variables. Figure 3 shows the relationship between sixth-grade math scores and the 2013 national savings rate (nationalsavings), while Figure 4 shows a close association between third-grade reading scores (thirdgradescoring) and the rate of private investment (privateinvestment). In view of the role of public spending on education in increasing the quality of education (Figure 1), a positive relationship is also observed between spending on education as a percentage of GDP and the 2013 private investment rate, as can be seen in Figure 5. Of importance for the objectives of this paper is the quantification of the impact of the quality of education on private investment. Equations that express the private investment rate in terms of variables that represent the quality of education are shown in table 3, where the qualitative variable Cuali takes the value of one when the variables correspond to Chile, and zero otherwise. Table 3 shows that reading and science scores explain about 50 percent of the variance of private investment, while public spending on education explains 72 percent. Note 1: In this and all tables the t statistics are shown underneath the corresponding coefficients. The Proposed Model The proposed model rests on the evidence of the role of the quality of education in affecting certain national economic and social variables. This makes it possible to postulate that, in an economic integration scheme characterized by macroeconomic interdependence relationships, the macroeconomic impacts resulting from changes in the quality of education in a member country will be transmitted to the other countries through intra-regional trade channels, thus affecting the macroeconomic variables of all member countries. The model developed is based on the well-known work of Metzler (1950); in that sense it is linear and demand-driven, and it assumes that there are no limitations on the supply side. The gross domestic product of country i, Yi, is given by: Yi = Ci + Gi + Ipi + Igi + Eoi - Moi + ∑j (Eij - Mij) (1), where Ci is private consumption, Gi is public consumption, Ipi is private investment, Igi is public investment, Eoi is extra-regional exports, Moi is extra-regional imports, Eij is exports from country i to j, Mij is imports of i from j (equal to Eji), and ∑ represents the summation sign.
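For readability, the national-income identity just stated (Eq. (1)) can be restated as a single typeset expression; this is only a rewrite of the text above, with private investment written as I_{pi} for consistency with the variable definitions.

```latex
% Eq. (1): GDP of country i in the demand-driven, Metzler-type model
Y_i = C_i + G_i + I_{pi} + I_{gi} + E_{oi} - M_{oi} + \sum_{j \neq i} \left( E_{ij} - M_{ij} \right)
```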
Private consumption and investment, Ci and Ipi, and public consumption and investment, Gi and Igi, as well as imports from outside the region, Moi, are assumed to be determined by GDP (equations (2) to (6)). Intraregional exports, Eij, and imports, Mij, depend on the GDP of the importing country (equations (7) and (8)). By substituting equations (2)-(8) in equation (1) it is obtained that (1 - f + ∑j mij) Yi - ∑j xij Yj = Eoi (9), where f = ai + bi + ci + di - ei. Expression (9) can be represented in matrix terms (equation (10)), where A is a diagonal matrix with elements Aii = 1 - f + ∑j mij and with the elements outside the main diagonal equal to zero, B is a matrix with zeros on the main diagonal and with the elements outside the main diagonal equal to -xij, Y is the vector of the GDPs of the countries, Eo is the vector of extra-regional exports, and (A + B) is the Metzler matrix (T). The vector of GDPs can be obtained from expression (10). The quality of education is introduced into the model through the result of equation (3) of table 3, which expresses the propensity to invest of the private sector, ci, in terms of the third-grade reading score, Hi: ci = hi(Hi). Therefore, the elements of matrix A can be rewritten accordingly, and the matrix A can be expressed as the sum of two diagonal matrices C and D. Equation (10) can then be rewritten as equation (12). The effects of changes in the quality of education in a given country on the GDP vector are found by differentiating expression (12) with respect to Hi (equations (13) and (14)). The multiplier matrix is equal to (T)^-1, and therefore expression (14) can be written as equation (15). The acceleration of economic growth is given by expression (16), which implies that economic growth results both from the evolution of the exogenous variables, that is, extra-regional exports, and from structural change within the regional economic structure. The structural changes are the result of improvements in the quality of education. Results The parameters of equations (2)-(8) were computed using mean values of the variables for the years 2015 and 2016, taken from the World Bank's World Development Indicators. On this basis, the Metzler matrix (A + B + C) was calculated, which is presented in Table 4, where the letters GUA represent Guatemala, ES El Salvador, HO Honduras, NI Nicaragua, CR Costa Rica and PA Panama. The multiplier matrix is obtained by taking the inverse of the Metzler matrix, and is shown in Table 5. The multiplier matrix indicates that, for example, if Honduras' extra-regional exports increased by 100 US dollars, its GDP would increase by 158.36 US dollars, while the GDPs of Guatemala and El Salvador would increase by 25.10 and 14.31 US dollars respectively. Guatemala is the country that receives the largest multipliers from the other countries; in second place in terms of the size of the multipliers received is Costa Rica, followed by El Salvador, while the last place corresponds to Panama, which reflects its relatively long distance from the other countries. Impacts of Changes in the Quality of Education Three cases of increases in the quality of education in given countries are analyzed below. In the first case, it is assumed that Guatemala's score in reading at third grade increases by 50 points.
From the computations indicated by equation (15), the changes in GDPs are obtained. The resulting vector indicates that the GDP of Guatemala increases by 731.67 million US dollars, which is equivalent to a growth of 1.10 percent, while El Salvador experiences an increase of 19.73 million, or 0.10 percent. In the second case, El Salvador's third-grade reading score is assumed to increase by 50 points, and the changes of the regional GDP vector are again found by equation (15). The results indicate that the 50-point increase in the third-grade reading score in El Salvador leads to its GDP increasing by 190.50 million, which is equivalent to a 0.81 percent increase in economic growth, while the GDP of Guatemala increases by 36.98 million, which represents an increase in economic growth of 0.06 percent per year. To estimate the cost of the 50-point increase in the third-grade reading test, an equation relating the score to public spending on education was estimated. Using this equation, the increase in public spending as a percentage of GDP required for the third-grade reading score to increase by 50 points is 3.3 percentage points. After 25 years, El Salvador's accumulated growth would be 20 percent of GDP (0.81 x 25); given that El Salvador's tax ratio is 18 percent, the additional collection of tax revenue would be 3.6 percent (20 x 0.18). It can be seen that the increase in spending on education by 3.3 percentage points generates an additional 3.6 percent of tax revenue; therefore, it can be deduced that this investment pays for itself. An additional calculation was carried out to compute the effects of all countries increasing their scores on the third-grade reading test by 50 points. In this case, El Salvador's GDP experiences an increase of 1.0 percent, while Guatemala's GDP increases by 1.22 percent and Costa Rica's by 0.80 percent. This exercise indicates that concerted action at the Central American level in relation to increasing the quality of education can be more valuable in achieving higher economic growth rates than if the increases were made exclusively at the national level. This is a case in which regional concerted action yields greater benefits than national actions, insofar as it gives rise to a series of additional impulses of economic growth with regional coverage. In view of the additional increase in economic growth that El Salvador receives, over a 25-year horizon the additional accumulated GDP would be 25 percent, which would lead to an additional tax collection of 4.5 percent, higher than the 3.3 percent required to increase its score by 50 points. In other words, concerted regional action to increase the quality of education "reduces" the costs of the required investments. This result can be called the creation of regional externalities of a social nature, which has gone unnoticed in the literature on economic integration. It should be noted that these increases in the GDP of Guatemala and Costa Rica, of 811 and 456 million respectively, are equivalent to the GDP increases that would result from increases in their own extra-regional exports of 248 and 210 million. These increases in extra-regional exports depend on the international economy, but the increases in education quality depend on national fiscal and education policies.
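The matrix mechanics used in the three exercises above can be illustrated with a minimal numerical sketch (all coefficients below are hypothetical placeholders rather than the estimated values of Tables 4-5, and the sign and orientation adopted for B are our assumption based on expression (9)):

```python
# Minimal sketch (hypothetical coefficients): Metzler matrix T = A + B, the multiplier
# matrix T^{-1}, and the GDP response to a change in extra-regional exports.
import numpy as np

countries = ["GUA", "ES", "HO", "NI", "CR", "PA"]
n = len(countries)

# f_i aggregates the domestic propensities (a + b + c + d - e) of expression (9);
# m[i, j] is country i's propensity to import from country j (zero on the diagonal).
f = np.array([0.35, 0.30, 0.32, 0.28, 0.33, 0.25])
m = np.full((n, n), 0.01)
np.fill_diagonal(m, 0.0)

A = np.diag(1.0 - f + m.sum(axis=1))   # A_ii = 1 - f_i + sum_j m_ij
B = -m.T                               # off-diagonal -x_ij, with x_ij = m_ji (assumed)
np.fill_diagonal(B, 0.0)

T = A + B                              # Metzler matrix
M = np.linalg.inv(T)                   # multiplier matrix

dE = np.zeros(n)
dE[countries.index("HO")] = 100.0      # +100 of Honduran extra-regional exports
dY = M @ dE                            # induced GDP changes across the region
print(dict(zip(countries, np.round(dY, 2))))
```

The same machinery carries the education-quality channel: a change in the reading score Hi alters the propensity to invest ci = hi(Hi), which enters the diagonal of the Metzler matrix and therefore the multiplier matrix.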
In this sense, the improvement of the quality of education constitutes a means to "shield" the national economies from the vicissitudes of the international economy and to acquire the capacity for self-determination in economic matters. Import Substitution Scenarios The multiplier matrix was also calculated under a scenario in which the average propensity to import from outside the region decreases by 0.10 in each country, and this decrease in extra-regional imports translates into an increase of 0.02 in the average propensity to import intra-regionally in each country. In this case, the multiplier matrix shows elements of greater magnitude than those of the original multiplier matrix, as seen in table 6. After performing the computations indicated by expression (15) using the matrix shown in table 6, it is found that the structural change resulting from import substitution constitutes a valuable instrument for energizing economies within the framework of an economic integration scheme. Given the stagnation tendencies shown by the Central American countries after the reforms of the 1990s, import substitution emerges as an economic policy option of singular value. Role of Distance Figures 6 and 7 show the multipliers received by Guatemala from the other Central American countries in terms of distance, in kilometers. Figure 6 shows the case of the original multiplier matrix, while Figure 7 represents the multipliers under the assumption that the import substitution change had occurred (table 6). It is observed that in both cases, as the distance from Guatemala City increases, the multipliers tend to fall, although less markedly in the case in which import substitution has occurred. Interdependence in an Integration Scheme As defined by Engerman (1968), the interdependence index of a country in an economic system is the increase in GDP that the country obtains when its own extra-regional exports increase, relative to the sum of the increases it experiences when the extra-regional exports of the other member countries increase. These indices are shown in table 7 for the original multiplier matrix and for the multiplier matrix that resulted from the import substitution exercise. A low interdependence index indicates that the corresponding country is more connected to the rest of the member countries and therefore that the interdependence is greater. In other words, a low index implies that the country in question "shares" its income more with the other members of the integration scheme, that is, there is less economic polarization. It is observed that in the import substitution scenario the interdependence indices of all the countries become lower, denoting increases in connectivity within the integration system. The country with the greatest interdependence with the rest of the system, in both cases, is Costa Rica, followed by Guatemala and El Salvador. In the case of import substitution, the disparities between the interdependence indices decrease substantially, indicating that all countries tend to become interdependent to the same degree. In other words, import substitution can make the integration system more balanced, which has implications for the distribution of the costs and benefits of integration. The above shows the potential benefits of reestablishing the import substitution model in the Central American countries, which, contrary to what was argued in the 1990s, was not "exhausted", as Rodrik (1998) has shown.
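A minimal sketch of one possible reading of the Engerman-type index described above (the row/column orientation of the multiplier matrix is our assumption, consistent with the previous sketch where dY = M @ dE): the own-export multiplier of each country divided by the sum of the multipliers it receives from the other members.

```python
# Minimal sketch (assumed index construction): interdependence indices computed from a
# multiplier matrix M whose element M[i, j] is the GDP gain of country i when country
# j's extra-regional exports rise by one unit.
import numpy as np

def interdependence_indices(M):
    M = np.asarray(M, dtype=float)
    own = np.diag(M)                   # gain from the country's own extra-regional exports
    received = M.sum(axis=1) - own     # gains received from the other members' exports
    return own / received              # a low value denotes higher interdependence
```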
Virtuous Circles of Quality of Education With this base, the positive relationship between the tax rate, Tax 1, and exports, Exp 1, is constructed in quadrant (4). Quadrant (5) shows the association between the quality of education, Calid 1, and the corresponding economic growth rate, Crec 1, as has been shown for Latin American countries by Hanushek and Woessmann (2009) and Cáceres (2018). Based on the relationships shown in quadrants (4) and (5), the positive relationship between the tax rate, Tax 1, and the economic growth rate is constructed in quadrant (6), which depicts a relationship in which an initial tax increase has led to increased economic growth, Crec 1. Quadrant (7) shows the increase in tax revenue, Tax 2, which results from the increase in the economic growth rate. In other words, the economic dynamism generated by the improvement of the quality of education gives rise to higher tax revenue, which in turn leads to increases in spending on education, Gasto 2, as shown in quadrant (8). With this quadrant and quadrant (6), the positive relationship between the original tax revenues, Tax 1, and the subsequent increase in education spending, Gasto 2, is constructed in quadrant (9). This indicates that the original additional fiscal effort subsequently led to increases in education spending and in its quality. In quadrant (10) it is shown that the increase in spending on education, Gasto 2, gives rise to another round of improvements in the quality of education, Calid 2, which in turn leads to increasing exports, Exp 2, shown in quadrant (11). Based on this and quadrant (9), the positive relationship between the original tax rate and subsequent exports is described in quadrant (12), while quadrant (13) describes the relationship between the original quality of education and future exports. Quadrant (14) shows the relationship between Calid 2 and Crec 2. Of particular interest is the association shown in quadrant (15) between the original tax effort, Tax 1, and future economic growth, Crec 2. The relationships of special interest in this Figure are, firstly, the associations between the original quality of education and future exports (quadrant 13), as well as the association between future exports and the original tax rate (quadrant 12), and the relationships between the original tax rate and future economic growth (quadrant 15) and between future quality of education and future economic growth (quadrant 14). History and Quality of Education Just as it is possible that a strong initial impulse to education expenditures and to the quality of education gives rise to the development of a virtuous circle, as shown in Figure 8, the absence of that impulse can lead to a vicious cycle of stagnation of the quality of education, exports and economic growth. A current low level of the quality of education can be seen as a reflection of poor support for education in the past, which can be related to a weak mobilization of tax resources; that is to say, low taxation in the past did not allow governments to carry out investments of great magnitude in education and health that would have led to the future development of exports and innovations. This can be associated with the low priority that the development of human capital has received throughout history in Latin American countries, and with the reluctance of high-income groups to pay taxes.
Sokoloff and Zolt (2004) have presented per capita tax revenue data for the year 1870, in US dollars, for 10 Latin American countries and, as shown in the following Figures, this 1870 fiscal indicator is a "very early" indicator of the quality of education in 2013, and of other macroeconomic variables of Latin American countries today. Figure 9 shows the positive relationship between the per capita taxes prevailing in 1870, Tax 1870, and the third-grade reading scores in 2013, Tercerolectura. This Figure implies that the current low quality of education in the region can be seen as an "inheritance" of low tax revenues in the past. It can be argued that the current quality of education reflects the level of taxation of about 150 years ago. In other words, deficient quality of education has been reproduced over time, reflecting the low initial taxation efforts. In countries with high initial per capita tax revenues, it was possible to carry out investments in human capital that generated increases in economic growth and thus created additional tax revenues, which in turn allowed the continuation of high investments in human capital. The opposite occurred in countries with low initial levels of taxation. The positive association between Tax 1870 and the rates of private investment in 2013, privateinvestment (Figure 10), and of economic growth, economicgrowth (Figure 11), should be highlighted. Table 8 below presents the results of the estimation of equations that express the role of per capita taxes in 1870 on the quality of education, spending on education, and economic growth in 2013. Equations 1 and 2, whose dependent variables are the third-grade results in the reading and mathematics tests respectively, show that the coefficients of the variable Tax 1870 are significant, together with the qualitative variable Cuali, which takes the value of unity when the dependent variable corresponds to Chile. The R² of these equations are around 60 percent, indicating that the tax effort of a century and a half ago explains more than half of the variance in the quality of education in a sample of Latin American countries in 2013. Equation 3 shows that Tax 1870, together with the qualitative variable, explains about half of the variance of public spending on education as a percentage of GDP in 2013. One implication is that if the tax effort is not increased, the situation of low-quality education will continue to reflect the past, and economic and social development will continue to be elusive. It can be argued that an initial increase in tax resources will generate subsequent increases in tax revenues due to the increase in economic growth resulting from the original additional expenditure on the quality of education. In other words, it is possible to create a virtuous circle on the supply side of the economy by increasing tax revenues. On the contrary, a fall in taxation can lead to persistent subsequent declines in economic growth and tax revenues, to the extent that productivity and economic growth contract in the face of the deterioration of human capital. Thus, the only curve that matters is the one shown in Figure 8: increasing current tax revenues leads to additional education expenditures (quadrant 9), to subsequent improvements in the quality of education (quadrant 10), and to increasing economic growth (quadrant 15). The rest is fake news (Note 1).
Of special interest is the result of equation 4 in table 8, whose dependent variable is the average annual per capita growth rate in the period 2005-2012. The coefficient of Tax 1870 is significant and explains 67 percent of the variance of economic growth even without a qualitative variable. It is valid to deduce that the economic growth of the present partly reflects the low tax revenues of the past, or rather the lack of political will to mobilize tax resources and allocate them to education, 150 years ago. It can be postulated that the decision to increase taxation in some countries led to the initiation of a cumulative process of human capital generation, which is reflected in their relatively high levels of human development today. Quality of Education, National Savings, Investment, and Economic Growth Given that the quality of education is a determinant of private investment, it is possible that the lack of quality education in a country becomes an obstacle to the dynamism of investment and economic growth. If this occurs in countries that are members of an integration scheme, their intra-regional trade flows, and their external sectors in general, could become stagnant. The relationship between the quality of education and investment is analyzed next, in a framework in which the economy in question requires a minimum of skills or knowledge to give rise to the growth of investment. In Figure 12, national saving is initially given by the function S(1), which reflects the level of knowledge H1. Savings (Ahorro) grow along the horizontal axis, thus sustaining the investment shown on the vertical axis. Investment, for its part, generates the economic growth rate shown on the left horizontal axis. Upon reaching point S1, national savings no longer translate into investment, due to the limitations or lack of the knowledge and skills required by investment. At this point, increasing investment requires additional knowledge of operations management and co-ordination, as well as additional innovations. If this knowledge gap were not resolved, investment would not increase to levels above I1, and thus the economy would enter a period of stagnation at a persistent growth rate of g1. The knowledge gap can be bridged through increases in the quality of education, by means of increases in spending on education (see Figure 1), so that the acquisition of new skills by the population would lead to national savings now being given by the function S(2). It is observed that S(2) has a greater slope than S(1) because the increase in the quality of education has led to an increase in the rule of law (Cáceres, 2018a), which contributes to increasing investment. National saving grows to point S2, giving rise to I2 and g2, after which skills again become insufficient or obsolete to transform saving into investment. This situation of economic stagnation can be overcome by the generation of human capital with higher levels of education quality and by the generation of innovations, which will sustain growth rates above g2. It should be noted that when there is a knowledge gap, an ex-ante overabundance of national savings may occur which, in the absence of adequate knowledge to turn savings into investment, dissipates into private overconsumption and, consequently, the national savings rate falls.
This shows that behind the savings gap, and behind the external gap, typified by the two-gap model, one can find a gap that determines both, which is the knowledge gap. Another implication is that in an integration scheme, intra-regional trade may eventually lose dynamism and momentum, because of the population's weak skills and knowledge, and the low capacity for innovation. This shows that within the framework of an integration scheme, development of human capital and the generation of innovations have important relevance to impart dynamism to intra-regional trade flows. In other words, it can be argued that development of quality human capital is an essential element to impart dynamism to economic integration. Figure 12. Knowledge gap as a constraint to economic growth Regional Quality of Education Strategy It is convenient to structure a strategy for the quality of education at the Central American level that would allow each country to advance in this field, taking advantage of the benefits that exist because of concerted regional actions. This strategy would include the design and execution of national actions according to an agreed schedule and, likewise, would include actions of a regional nature based on consensus about areas where concerted actions can be beneficial. Within the scope of national actions, those aimed at reducing the number of students per teacher, by building additional school infrastructure and hiring more teachers, stand out. At the regional level, an action of special importance would be executing joint actions geared towards obtaining international co-operation resources to allocate them to areas that promote the quality of education. Other areas of special regional importance are the preparation of content in specific subjects, such as mathematics, sciences, and the teaching of indigenous languages, as well as the production of textbooks. Of special importance would be the adoption of regional policies that have the consensus of the Central American countries, in terms of supporting single mothers, supporting children with special education needs, those with disabilities, and LGBT students, as well as the indigenous and Afro-Central American student populations, among others. Figure 13 presents a summary of national and regional actions related to quality of education. One element of the regional strategy would be the adoption by the Central American countries of a common framework for evaluation and monitoring and the measurement of results, with the understanding that said evaluations would be presented to the Central American Presidents at their periodic meetings. Among the variables that would be subject to monitoring and evaluation would be the following: public sector expenditure on education, number of students per teacher, scores on national and international standardized tests, female and male dropout rates, adolescent fertility rate, percentage of children who don't work or study, coverage of special education and early childhood development, enrollment in technical education, number of computers per pupil, teacher salaries, percentage of teachers with pedagogical training, and number of reported cases of violence at school, among others. 
National actions: increase public sector expenditure in education; investment in school infrastructure; investment in technology; decrease the number of students per teacher; teacher accountability system; number of books; teaching in Native languages; building libraries; reduction of school desertion. Regional actions: joint actions to obtain financial resources; production of teaching content; regional gender policy; regional education policy directed to Indigenous and Afro Central American populations; regional special education policy; regional policy to attend to the needs of persons with disabilities; preparation of content on Indigenous and Afro Central American cultures; regional literacy policy; regional policy of measurement of results. Figure 13. National and regional actions to support the quality of education. Conclusions This paper has presented evidence that the quality of education has regional impacts on the economic growth of member countries of an integration scheme. It should be emphasized that the increase in the third grade reading score by 50 points in Guatemala and El Salvador allows these countries to increase their economic growth rates respectively by 1.1 and 0.8 percent; in the scenario in which the other Central American countries also increased their public spending to obtain similar increases in their scores in this test, the impacts on the growth rates of Guatemala and El Salvador would result in additional increases of 0.12 and 0.2 percent respectively. This scenario is based, of course, on the premise that Central American countries undertake coordination efforts in matters of economic and education policies. In view of the additional increase in economic growth that El Salvador receives, over a 25-year horizon the accumulated additional GDP would be 25 percent, which would lead to an additional tax collection of 4.5 percent, higher than the 3.3 percent required to increase the score by 50 points. In other words, concerted regional action to increase the quality of education "reduces" the costs of the required investments. This can be called a creation of regional externalities of a social nature, which has gone unnoticed in the literature on economic integration. It is apparent that the additional economic growth resulting from regional coordination is "free" for the Central American countries, given that their efforts at the national level become "more productive", since the other countries are carrying out similar efforts, which energize their own economies, as well as intra-regional trade and the economies of other countries in the region. This is a case in which regional concerted actions give rise to social externalities of economic growth through consensual actions for increasing the quality of education. Given the evidence that the quality of education is an important determinant of private investment, it follows that the "business climate" passes through the quality of education. It could happen that, despite various measures to improve the business climate, private investment does not react positively due to poor and stagnant quality of education. Another implication is that the pressures and arguments to reduce taxes are contradictory with respect to the objective of increasing private investment and economic growth, and, in general terms, with the purpose of improving the wellbeing of the population. 
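The 25-year arithmetic quoted above can be checked with a short back-of-the-envelope computation; the baseline growth rate and the tax-to-GDP ratio used below are assumptions chosen only to reproduce the order of magnitude, not figures from the paper.

```python
# Back-of-the-envelope check of the cumulative effect of ~1 extra point of growth.
baseline_growth = 0.03      # assumed baseline annual per capita growth
extra_growth    = 0.010     # 0.8% own effect + 0.2% regional spillover (El Salvador)
years           = 25
tax_to_gdp      = 0.18      # assumed tax burden

level_base  = (1 + baseline_growth) ** years
level_extra = (1 + baseline_growth + extra_growth) ** years
additional_gdp = level_extra / level_base - 1        # roughly a quarter of GDP
additional_tax = additional_gdp * tax_to_gdp         # same order as the 4.5% quoted

print(f"additional GDP after {years} years: {additional_gdp:.1%}")
print(f"implied additional tax collection: {additional_tax:.1%}")
```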
In recent years much attention has been given to the issue of "trade facilitation", and many countries have made significant investments in this field. It should be noted that, according to the results of this study, the quality of education, as well as public spending on education, constitutes a valuable trade facilitation measure. It should be noted that the benefits in terms of economic growth derived from the customs union between Guatemala and Honduras, which was initiated in 2018, have been estimated at 1.1 and 0.8 percent per year (Redondo, 2018), a magnitude of the same order as what these countries would achieve through increasing their third-grade reading scores by 50 points. When Central American countries undertake simultaneous and concerted improvements in the quality of education, the resulting increases in economic growth in the countries will be of greater dimension than in the case when the improvement in education quality occurs in only one country. It follows that there are regional external economies that act to make investments in national education more "efficient". This has implications for the profitability of human capital investments, as well as for providing the economies of the integration scheme with greater resilience to the ups and downs of the international economy. Hence the importance of reaching regional agreements to promote the quality of education and, in general terms, to promote human development. It can be deduced from the results presented in this paper that the economic integration process can reach a point of "saturation", or stagnation, that would manifest itself in the stagnation of intra-regional trade. This can be overcome by boosting the quality of education and hence the capacity for innovation. One implication is the importance that the region under economic integration be an area of continuous generation of knowledge and innovation. The important benefits resulting from implementing an import substitution model like the one that prevailed until the early 1990s should be highlighted. These benefits are additional to the increases in economic growth, since the increase in intra-regional trade would serve as a springboard to increase extra-regional exports, according to the evidence presented by Smith and Venables (1988) that economic integration in the EEC countries led them to increase their extra-regional exports, in view of the reduction in costs due to the economies of scale that the expanded market offered. Likewise, Webb and Fackler (1993) have shown that Costa Rica's extra-regional exports were preceded by the export of the same products to the Central American market. In fact, there is a case to analyze the costs and benefits of integration in broader and dynamic terms that include aspects of knowledge generation, promotion of extra-regional exports, and regional coordination, among others, which would be more relevant than the method of computing measures of trade creation and diversion, which falls short of understanding the diversity and complexity of the integration process. It should be emphasized that the increase in the quality of education, as well as in import substitution, inasmuch as they stimulate intra-Central American trade, would increase quality employment, according to the evidence presented by Cáceres (2018b) for the case of El Salvador, which would also contribute to reducing violence. 
This is another issue that should be included in the analysis of the costs and benefits of integration, especially considering that in some Central American countries the growth of the world economy, represented by the US economy, does not contribute to increasing quality employment (Cáceres, 2021). The quality of education has other benefits in addition to boosting economic growth: it boosts governance, reduces self-employment and violence, and stops irregular emigration (Cáceres, 2018b). For this reason, efforts to invest in higher levels of human capital in Central American countries must have the support of international cooperation, given that donor countries obtain benefits from this type of investment. In this sense, Central American countries, with support from international co-operation agencies, should reach agreements on specific goals related to human capital in a certain period, comprising prenatal care, coverage of early childhood education and technical education, pedagogical preparation of teachers, school infrastructure, and public health, as well as emergency employment programs, in a scheme of shared costs and benefits. It must be emphasized that in Latin America the main motivation or argument for the creation of regional integration schemes has rested on their role in boosting industrialization and economic growth, because of the economies of scale that result from the expanded market (Prebisch, 1950). The results shown in this paper highlight that this argument maintains its validity in the framework of the external economies resulting from regional agreements on social development. In addition, an integration scheme with increasing quality of education, and with continuous generation of innovations, may prevent the integration process from losing dynamism, particularly if regional consultation is used as an integration instrument. The increasing amount of human capital would contribute to strengthening the institutions, in view of the evidence that in Latin American countries both the indicators of accountability and control of corruption, as well as the rule of law, increase as a country acquires higher levels of human capital (Cáceres, 2010). These results also demonstrate the feasibility of achieving regional endogenous growth and, furthermore, offer an analytical framework for theories related to economic growth in a regional context. 
9,651.6
2021-01-01T00:00:00.000
[ "Economics", "Education" ]
Research of Online Hand–Eye Calibration Method Based on ChArUco Board To solve the problem of inflexibility of offline hand–eye calibration in “eye-in-hand” modes, an online hand–eye calibration method based on the ChArUco board is proposed in this paper. Firstly, a hand–eye calibration model based on the ChArUco board is established by analyzing the mathematical model of hand–eye calibration and the image features of the ChArUco board. According to the advantages of the ChArUco board, which combines the checkerboard and the ArUco marker, an online hand–eye calibration algorithm based on the ChArUco board is designed. Then, the online hand–eye calibration algorithm, based on the ChArUco board, is used to realize the dynamic adjustment of the hand–eye position relationship. Finally, the hand–eye calibration experiment is carried out to verify the accuracy of the hand–eye calibration based on the ChArUco board. The robustness and accuracy of the proposed method are verified by online hand–eye calibration experiments. The experimental results show that the accuracy of the online hand–eye calibration method proposed in this paper is between 0.4 mm and 0.6 mm, which is almost the same as the offline hand–eye calibration accuracy. The method in this paper utilizes the advantages of the ChArUco board to realize online hand–eye calibration, which improves the flexibility and robustness of hand–eye calibration. Introduction The application of industrial robots has greatly improved the production efficiency and product quality of enterprises [1,2]. However, the traditional working methods of industrial robots have been unable to meet the increasing production demand. Industrial robots equipped with vision sensors perceive the surrounding environment through image processing technology, which improves the "flexibility" of robot operations, and enables robots to complete more complex production tasks [3][4][5]. Hand-eye calibration is an indispensable part of the visual servo operation of the robot, which is an important bridge between the robot and the vision sensor. Hand-eye calibration is the process of solving the coordinate transformation relationship between the camera coordinate system and the robot coordinate system. The accuracy of hand-eye calibration directly affects the accuracy of the robot operation. For the research of hand-eye calibration, scholars have proposed many theories and methods. Wu et al. [6] and Yang et al. [7] solved the hand-eye problem by tracking the position of a calibration ball with a 3D camera and the movement trajectory of the robot. Chen et al. [8] and Wang et al. [9] calibrated the position relationship between the camera and the calibrator by identifying the feature points of a 3D calibration object, and solved the hand-eye matrix through the corresponding posture of the robot. Although the hand-eye calibration method based on 3D calibrators is one of the most effective ways to solve the hand-eye matrix, the processing of 3D data consumes a lot of time, and the quality of the 3D point cloud has a decisive influence on the calculation results. Compared with the hand-eye calibration method based on the 3D calibrators, the hand-eye calibration method based on 2D images is more widely used. Since the feature points of the checkerboard pattern are easy to detect, most of the classical hand-eye calibration theories use it as the calibrator [10][11][12]. 
In recent years, other scholars have carried out research on hand-eye calibration methods based on the checkerboard. Deng et al. [13] proposed a hand-eye calibration method based on a unit octonion, and verified the effectiveness of the method using a camera and a checkerboard. Lee et al. [14] designed an automatic hand-eye calibration system based on a checkerboard calibration. In addition, as an image coding technique, an ArUco marker is often used for hand-eye calibration. Huang et al. [15] adopted the hand-eye calibration method, based on using ArUco markers and the support vector machine (SVM) detection method, to identify specific objects to realize flexible grasping of the robot. Feng et al. [16] realized the robot's vision-guided autonomous assembly and scanning by using the hand-eye calibration method based on the ArUco marker. The black and white interlaced pattern features of the checkerboard make the feature points easy to identify. However, when the checkerboard target is occluded, or the lighting conditions are poor, the pose of the calibration board is difficult to recognize. ArUco markers have the advantages of rapid detection and flexibility. However, the detection accuracy of the ArUco marker in corners is not very high. Furthermore, most hand-eye calibration algorithms are performed offline. When the relative position of the camera and the robot changes, the hand-eye calibration often needs to be reperformed. To solve these problems, an online hand-eye calibration method based on the ChArUco board is proposed in this paper. The ChArUco board is a combination of an ArUco marker and a checkerboard, and possesses the advantages of both. The ChArUco board rectifies the shortcomings of the ArUco marker's poor positioning accuracy by using the high sub-pixel detection accuracy of the checkerboard corners of the calibration board, and solves the problem of the poor robustness of the checkerboard to occlusion and interference through the unique encoding of the ArUco marker. Based on the detection advantages of the ChArUco board, this paper designs a closed-loop feedback adjustment system for the robot to realize online hand-eye calibration of eye-in-hand mode. The other parts of the paper are arranged as follows: Section 2 introduces the principle of hand-eye calibration and the ChArUco board; Section 3 describes the online hand-eye calibration method, based on the ChArUco board; Section 4 verifies the accuracy and feasibility of the method proposed in this paper; finally, Section 5 summarizes the work of this paper, and looks into future research issues. Hand-Eye Calibration and ChArUco Board The purpose of hand-eye calibration was to calculate the coordinate transformation relationship between the camera coordinate system and the robot coordinate system, that is, to solve the rotation matrix R and the translation vector t. The essence of hand-eye calibration was to solve the problem of AX = XB [17][18][19]. Hand-eye calibration was divided into eye-in-hand and eye-to-hand modes, as shown in Figure 1. As shown in Figure 1b, the relative position of the robot base and the calibration board was unchanged, as was the relative position of the camera and the robot end. According to multiple sets of known invariants, the hand-eye calibration matrix could be solved, as expressed in Formula (1): 
${}^{b}T_{t} = {}^{b}T_{g}\,{}^{g}T_{c}\,{}^{c}T_{t}$ (1), where ${}^{b}T_{g}$ represents the homogeneous matrix of the robot end coordinate system, relative to the robot base coordinate system; ${}^{c}T_{t}$ represents the homogeneous matrix of the calibration board coordinate system, relative to the camera coordinate system; ${}^{g}T_{c}$ represents the homogeneous matrix of the camera coordinate system, relative to the robot end coordinate system. Figure 1. Two hand-eye calibration modes: (a) Eye-to-hand; (b) Eye-in-hand. Base represents the base coordinate system of the robot; Camera represents the camera coordinate system; and Calibrator represents the calibration board. As shown in Formula (2), the hand-eye calibration matrix could be solved through the calculation of the robot pose transformation and the camera extrinsic parameters: for two robot postures i and j, ${\big({}^{b}T_{g}^{(j)}\big)}^{-1}\,{}^{b}T_{g}^{(i)}\,{}^{g}T_{c} = {}^{g}T_{c}\,{}^{c}T_{t}^{(j)}\,{\big({}^{c}T_{t}^{(i)}\big)}^{-1}$ (2), which has the form AX = XB with the unknown X = ${}^{g}T_{c}$. According to the described mathematical model of hand-eye calibration, the solution process of hand-eye calibration needed to obtain the homogeneous matrix ${}^{c}T_{t}$ of the calibration board coordinate system, relative to the camera coordinate system, that is, it needed to calculate the external parameters of the camera. In this paper, the ChArUco board was selected as the calibration object to calculate ${}^{c}T_{t}$. ArUco markers have the advantages of flexibility and easy detection. However, the ArUco marker has the problem that the detection accuracy of the edge corners is not high. Even if sub-pixel processing is performed on the corner, the expected accuracy is still not achieved. The black and white interlaced pattern of the checkerboard makes the corners easy to detect. Unfortunately, the flexibility of the checkerboard is not as extensive as the ArUco marker. When the checkerboard was used as a calibration object, the checkerboard needed to be completely visible, and could not be blocked. The ChArUco board possessed the advantages of both the checkerboard and the ArUco marker. In addition, the ChArUco board ameliorated the deficiencies of both. Figure 2 shows a schematic diagram of three calibration objects. 
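To make the AX = XB step concrete, the following is a minimal Python sketch of how the hand-eye matrix gTc could be obtained with OpenCV's calibrateHandEye once the per-pose robot transforms bTg and board poses cTt have been collected; the function and variable names are illustrative and are not taken from the paper.

```python
# Sketch: solving the eye-in-hand AX = XB problem with OpenCV (illustrative only).
import numpy as np
import cv2

def solve_hand_eye(R_gripper2base, t_gripper2base, R_target2cam, t_target2cam):
    """R_/t_gripper2base: robot end pose in the base frame (bTg), one per teaching pose.
    R_/t_target2cam: ChArUco board pose in the camera frame (cTt), one per teaching pose.
    Returns gTc, the camera pose relative to the robot end (the unknown X)."""
    R_cam2gripper, t_cam2gripper = cv2.calibrateHandEye(
        R_gripper2base, t_gripper2base,
        R_target2cam, t_target2cam,
        method=cv2.CALIB_HAND_EYE_TSAI,   # PARK, HORAUD, ANDREFF, DANIILIDIS also available
    )
    gTc = np.eye(4)
    gTc[:3, :3] = R_cam2gripper
    gTc[:3, 3] = t_cam2gripper.ravel()
    return gTc
```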
Online Hand-Eye Calibration Based on ChArUco Board As shown in Figure 3, when corner detection was performed on the ChArUco board, the corners of the checkerboard and the coding pattern were identified. According to the decoding information of the coding pattern, the positions of each corner of the calibration board could be sorted in an orderly manner. Even if the ChArUco board was partially occluded, it did not affect the order of the corners. Figure 4 shows the calculation result of camera extrinsic parameters under different occlusion conditions of the ChArUco board. When the occluded area of the ChArUco board was small, it did not affect the calculation results of the camera external parameters. When the occluded area of the ChArUco board was too large, the external parameters of the camera could not be calculated. However, some coding patterns and checkerboard positions could still be recognized. In addition, the pose of each recognizable encoding pattern on the ChArUco board could be computed as if the pose of a single ArUco marker were recognized. Figure 5a shows the pose of the ArUco marker in the camera coordinate system. Figure 5b shows the pose of each recognizable coding pattern of ChArUco in the camera coordinate system. 
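A minimal sketch of the ChArUco detection and pose estimation step described above is given below, using the older cv2.aruco API from opencv-contrib-python (the class-based API introduced in OpenCV 4.7 differs slightly); the dictionary, board geometry and thresholds are illustrative assumptions, not the values used in the paper.

```python
# Sketch: ChArUco board detection and pose estimation (illustrative parameters).
import cv2
import cv2.aruco as aruco

dictionary = aruco.getPredefinedDictionary(aruco.DICT_5X5_100)
# squaresX, squaresY, square side [m], marker side [m]
board = aruco.CharucoBoard_create(7, 5, 0.04, 0.03, dictionary)

def board_pose(gray, camera_matrix, dist_coeffs):
    # 1) detect the ArUco coding patterns (robust to partial occlusion)
    marker_corners, marker_ids, _ = aruco.detectMarkers(gray, dictionary)
    if marker_ids is None:
        return None
    # 2) interpolate the chessboard corners adjacent to the detected markers
    #    (sub-pixel checkerboard accuracy, corner ordering given by the marker ids)
    n_corners, charuco_corners, charuco_ids = aruco.interpolateCornersCharuco(
        marker_corners, marker_ids, gray, board)
    if not n_corners or n_corners < 4:
        return None
    # 3) estimate the board pose cTt (rvec, tvec) in the camera frame
    ok, rvec, tvec = aruco.estimatePoseCharucoBoard(
        charuco_corners, charuco_ids, board, camera_matrix, dist_coeffs, None, None)
    return (rvec, tvec) if ok else None
```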
Based on the detection advantages of the ChArUco board, this paper designed a closed-loop feedback adjustment system for the robot to realize online hand-eye calibration. As shown in Figure 6, in the case of the eye-in-hand, the camera could be seen as a "tool" attached to the end of the robot. The "inaccurate" ${}^{g}T_{c}$ was calculated by the robot-based tool calibration method [20]. In addition, ${}^{c}T_{t}$ could be calculated by the detection of the ChArUco board. ${}^{b}T_{g}$ could be obtained directly through the robot teach pendant. With the above information, the approximate positional relationship between the ChArUco board and the robot could be calculated. Figure 7 shows the online hand-eye calibration process, based on the ChArUco board. Firstly, after the robot moved to the teaching posture, the camera captured the ChArUco board calibration image. Then, it was judged whether the position of the calibration board satisfied the condition by detecting the ChArUco board. If it was not satisfied, the robot posture was automatically adjusted, according to the feedback of the ChArUco board position information, and the above steps were repeated. Otherwise, the data were saved, the posture of the robot adjusted, and the next teaching action was reached. When the collected hand-eye calibration data met the requirements, the hand-eye calibration matrix was solved. 
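The closed-loop data collection of Figure 7 could be organized as in the sketch below; move_robot, capture_image, robot_pose, board_in_field_of_view and adjust_toward_center are hypothetical stand-ins for the robot and camera interfaces (they are not APIs from the paper), and board_pose is the detection sketch shown earlier.

```python
# Sketch of the online hand-eye calibration loop (hypothetical robot/camera helpers).
import cv2

def collect_and_calibrate(teach_poses, camera_matrix, dist_coeffs,
                          max_adjust=5, n_required=15):
    Rg2b, tg2b, Rt2c, tt2c = [], [], [], []
    for pose in teach_poses:
        move_robot(pose)                               # go to the taught posture
        for _ in range(max_adjust):
            img = capture_image()
            result = board_pose(img, camera_matrix, dist_coeffs)
            if result is None:
                break                                  # board not detected at this pose
            rvec, tvec = result
            if board_in_field_of_view(tvec):           # acceptance condition satisfied
                R_base, t_base = robot_pose()          # bTg from the controller
                Rg2b.append(R_base); tg2b.append(t_base)
                Rt2c.append(cv2.Rodrigues(rvec)[0]); tt2c.append(tvec)
                break
            # otherwise: feedback adjustment so the board moves toward the image centre
            move_robot(adjust_toward_center(pose, rvec, tvec))
    if len(Rg2b) < n_required:
        raise RuntimeError("not enough valid hand-eye calibration data")
    return cv2.calibrateHandEye(Rg2b, tg2b, Rt2c, tt2c,
                                method=cv2.CALIB_HAND_EYE_PARK)
```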
Experiments and Analysis In this paper, Hikvision's MV-CE050-30GM camera, FUJINON's CF8ZA-1S lens and ABB's IRB2600-20 robot were used to build the experimental platform. The experimental platform is shown in Figure 9. The camera was fixed on the end of the robot, and the calibration board was fixed on the experimental table. Based on this experimental platform, this paper tested the accuracy of the three calibration methods of the ArUco marker, the checkerboard and the ChArUco board, and carried out an online hand-eye calibration experiment based on the ChArUco board. In this paper, 10 sets of independent hand-eye calibration experiments were carried out, and 25 sets of hand-eye calibration data, based on the ArUco marker, the checkerboard and the ChArUco board, were collected in each experiment. The calculation results of camera extrinsic parameters for the three calibration methods are shown in Figure 10. Figure 11 shows the hand-eye calibration uncertainty, based on the ArUco marker, the checkerboard and the ChArUco board. From the analysis of the experimental data, it could be seen that the hand-eye calibration accuracy of the ArUco marker was the worst; compared with the checkerboard and the ChArUco board, the maximum deviation was 1.5 mm. Between the checkerboard and the ChArUco board, the calibration accuracy was almost the same. 
In online hand-eye calibration experiments based on the ChArUco board, 10 sets of independent online hand-eye calibration experiments were carried out. In a single set of independent experiments, 25 sets of robot teaching actions were set, and the ChArUco board was placed away from the center of the camera's field of view. Figure 12 shows the change of the camera angle of view before and after the online hand-eye calibration. In addition, as shown in Figure 13, this paper conducted an uncertainty analysis on the 10 sets of online hand-eye calibration experiments. It can be seen from the figure that the uncertainty of the online hand-eye calibration, based on the ChArUco board, is between 0.4 mm and 0.6 mm, which is almost the same as the accuracy of the offline hand-eye calibration. The experimental results showed that the method in this paper effectively solved the problem of online hand-eye calibration, and also ensured the stability of the calibration accuracy. Based on the above experimental results, it can be seen that the online hand-eye calibration method proposed in this paper could effectively solve the problem of online hand-eye calibration, and had good performance in robustness and accuracy. Conclusions In order to reflect the practicability of this method, the performance of this method was compared with the existing methods, and the results are shown in Table 1. The methods of [21,22] could only obtain the position of the calibration plate in the image. In the process of hand-eye calibration, the methods of [21,22] needed to ensure that the calibration plate was visible as a whole, and the checkerboard feature points and circular patterns could not be blocked. The ChArUco board used in this method could obtain the position and posture of the calibration board, and had a certain tolerance to occlusion. Therefore, the method in this paper had better flexibility in robot adjustment. In the process of hand-eye calibration, the method in this paper had better robustness. 
The feature point calculation process of the ChArUco board used in this paper was more time-consuming, but the method in this paper was able to obtain the position and attitude of the calibration board, which reduced the number of adjustments of the robot. The overall time consumption was better than the methods of [21,22]. The method in this paper only needed 0.5-2 s to complete a single valid hand-eye calibration datum. In addition, the method in this paper had better flexibility and robustness, so that more effective hand-eye calibration data could be collected under the same disturbance. Therefore, the accuracy of hand-eye calibration would also be better. Aiming at the problem of inflexible offline hand-eye calibration in eye-in-hand mode, an online hand-eye calibration method based on the ChArUco board was proposed in this paper. The method in this paper utilized the advantages of the ChArUco board, which has the advantages of high sub-pixel recognition accuracy of the checkerboard corner, and the strong flexibility of the ArUco marker, to realize the positioning of the calibration board. The position relationship between the calibration board and the robot was established. Then, the closed-loop feedback automatically adjusted the robot by detecting the position of the ChArUco board in the image. Enough hand-eye calibration data were collected by robot automatic control to complete online hand-eye calibration. In this paper, the accuracy of hand-eye calibration based on the ChArUco board was verified by comparative experiments. The robustness and accuracy of the method were verified by online hand-eye calibration experiments. In the current research of this paper, the influence of the hand-eye calibration caused by the robot motion error was not considered. In future work, we will study the influence of robot motion error on hand-eye calibration accuracy, and consider how to eliminate the influence of robot motion error. This will be an interesting and meaningful research direction. Data Availability Statement: The data presented in this study are available on request from the corresponding author. The data are not publicly available due to privacy. Conflicts of Interest: The authors declare no conflict of interest.
5,875.8
2022-05-01T00:00:00.000
[ "Physics" ]
Revised chrono-biostratigraphy of Lower Miocene deposits of the Eastern Mediterranean (SW Turkey), based on calcareous nannofossils Lower Miocene deposits of the Güneyce Formation formerly described as the Elmalı Formation of Lutetian-Burdigalian age are located near the villages of Gökçebağ (Burdur) and Yakaören (Isparta), (southwestern Turkey), Eastern Mediterranean, and overlie the pre-Neogene tectonostratigraphic units of the Isparta Angle. The purpose of this study is to discuss new biostratigraphic data calibrated to originally classified nannofossil records. Three Early Miocene nannofossil biozones, NN1 Triquetrorhabdulus carinatus Zone, NN2 Discoaster druggii Zone and NN3 – Sphenolithus belemnos Zone, were defined in clastic sediments of the Güneyce Formation. In addition, one Lutetian biozone, NP16 – Discoaster tanii nodifer Zone, was recognized in the remaining outcrops of the Isparta Formation unconformably underlying the Güneyce Formation. Nannofossil assemblages of shallow marine deposits in the Güneyce Formation contain high amounts of reworked (Palaeogene and Cretaceous) specimens. New biostratigraphic data and sedimentary features of the Güneyce Formation clastics indicate shallow marine deposition and the beginning of the transgression, spreading over an erosional surface on the ophiolitic melange and Cretaceous to Eocene marine successions rising to the west of the region. 
INTRODUCTION The Cretaceous to Neogene marine successions are in the northern, western and eastern parts of the Isparta Angle. Within the context of the regional stratigraphy, sedimentology and tectonic evolution, many studies were carried out by GUTNIC et al. (1979), KARAMAN (1990, 1994), YAĞMURLU (1994), ŞENEL (1997), GÖRMÜŞ & ÖZKUL (1995), GÖRMÜŞ et al. (2001, 2004), POISSON et al. (2003) and ROBERTSON et al. (2003). There are problems concerning the geological mapping, age of sedimentation and environmental interpretation of Palaeogene and Neogene sediments in the region. In previous studies the marine clastics and the carbonate rock series of the studied area were mainly described as Eocene flysch by GUTNIC et al. (1979), the Kayıköy Formation (Eocene) by YAĞMURLU (1994), Isparta flysch (Eocene) by GÖRMÜŞ & ÖZKUL (1995), the Elmalı Formation (Lutetian to Burdigalian) by ŞENEL (1997) and Yavuz Flysch (Eocene) by POISSON et al. (2003). Clastic and carbonaceous deposits, belonging to the Güneyce Formation in the studied area, were interpreted by BARRIER & VRIELYNCK (2008) as the Derinçay Formation corresponding to platform or shallow shelf carbonate or terrigenous clastics, deposited at the back-arc basin or on the low-land margin of the active subduction zone placed in the Eastern Mediterranean Basin. The scope of the study is to provide new biostratigraphic data from the sediments of the Isparta and the Güneyce formations for the new geologic map. The proposed new Lower Miocene marine clastics of the Güneyce Formation, which extend in the northern part of the Isparta Angle, are significant for the Eastern Mediterranean geology. GEOLOGICAL SETTING There are two different geological interpretations on the stratigraphy of the Palaeogene and Neogene marine sequences and their structural constructions related to the study area. The accepted former (and common) view is that the Eocene marine sedimentary successions (Elmalı Formation) were overthrust by the ophiolitic melange (Gökçebağ Melange) after the Early Miocene (Fig. 1). However, the results of this study show that there are two angular unconformable marine successions, the Isparta Formation (Lutetian) and the Güneyce Formation (Early Miocene), overlying the ophiolitic melange (Fig. 2). In the region, the Yavuz Flysch (GUTNIC et al., 1979) was described as marine carbonate and clastics deposited during the Paleocene to Eocene (Lutetian interval), and the second succession, the Elmalı Formation (ŞENEL, 1997), was described as marine shelf clastics deposited during the Lutetian or ranging from the Lutetian to the Burdigalian. According to GÖRMÜŞ et al. (2001), the studied Güneyce Formation comprises six lithological units from the bottom to the top: (1) mudstone dominated facies (muddy facies); (2) sandstone dominated facies (sandy facies); (3) olistostrome facies; (4a) rhythmic sandstone-mudstone facies; (4b) carbonate facies; (5) sandstone facies and (6) coarse clastics-conglomerates facies. In this area, there are particularly clastic facies 1 to 4a. The overlying units of the investigated area are Plio-Quaternary volcanics and Quaternary alluvium or colluvial fan deposits. 
MATERIAL AND METHODS In this study, 35 samples of various grain-sized clastic or carbonate rocks such as marls, mudstones, sandstones or limestones were collected and examined from 17 locations within the three measured stratigraphic sections. All the study material obtained from YAVUZLAR (2015) is preserved in the General Geology Laboratory of Suleyman Demirel University. The three stratigraphic sections, Necibin Tepe, Arapdere and Abidinoğlutaşı Tepe, were researched in the field (Fig. 3). The sedimentary succession of the Güneyce Formation begins with terrestrial deposits including gypsum and coal-bearing sandy-mudstones, and continues with rhythmic sandstone and mudstone/marl alternations including sand-dominated levels. The clastic rock packages seen are partially similar to the depositional facies 1-4a of GÖRMÜŞ et al. (2001). Stratigraphic sections The Necibin Tepe stratigraphic section (Figs. 3 A, 4) belongs to the first sedimentary levels of the Güneyce Formation overlying the ophiolitic melange. In the lower part of the succession (up to 44 m), there are sandy mudstones with sandstone interbeds of the sandy facies of the Güneyce Formation, followed by dolomitic limestone lenticular beds within the muddy marine sedimentation. In addition, these levels include lense-shaped gypsum and coal intercalations that could belong to very shallow olistostrome facies. Above sixty metres, there is a shallow marine sandstone and mudstone alternation in the upper part of the succession belonging to the rhythmic sandstone-mudstone facies of the Güneyce Formation. In the Necibin Tepe section, a transgression resulted in deposition of shallow marine sediments on top of terrestrial sedimentary rocks and facies of the succession. (Figure caption: A new detailed version of the map, built on the existing geologic maps; GUTNIC et al., 1979, ŞENEL, 1997.) The Arapdere stratigraphic section (Figs. 3 B, 5) belongs to the middle levels of the Güneyce Formation following the sediments of the Necibin Tepe section. Here, sediments and facies of the succession exhibit the rhythmic alternations of mudstones and sandstones representing shallow marine sedimentation. The Abidinoğlutaşı Tepe stratigraphic section (Figs. 3 C, 6) belongs to the rhythmic sandstone-mudstone facies of the Güneyce Formation, overlies the ophiolitic melange and coincides with the sediments of the Arapdere section. The sedimentary successions and facies of this section also include mudstone and sandstone rhythmic alternation representing shallow marine sedimentation. Investigation methods The standard nannofossil preparation method was used without applying any concentration or cleaning process, and taxonomic concepts followed PERCH-NIELSEN (1985a) and BOWN & YOUNG (1998). Analyses were carried out by polarized light microscopes at x2500 magnification, using a Nikon Optiphot-Pol and Leica DM2700 P, at the Department of Geology, Suleyman Demirel University. For each smear slide, nannofossil specimens were counted in 200 fields of view and the identified species were separated into two groups as autochthonous and reworked taxa. 
In addition, thinned (20-25 microns) petrographic sections were prepared from coarse grained rock samples such as sandstone and limestone (SAGULAR, 2003a, 2003b). Nannofossil records within lithoclasts or within the matrix in thin-sections of a sandstone are considered to be mainly reworked sedimentary grains. The nannofossils within intraclasts or within the calcite cement of a calcarenite or limestone could be coeval with sedimentation (synsedimentary). Nannofossil records observed in lithoclasts are considered as reworked from extrabasin sources, whereas those in intraclasts are considered as removed from elsewhere within the basin. Fossil data found in matrix or cement are considered either reworked or resedimented. Thus, the autochthonous or reworked nannofossil assemblages and their sedimentological properties were determined in both nannofossil smear-slides and thin-sections. The results of the laboratory determinations were correlated with field observations and previous studies. A previous study (2006) used the nannofossil population as an indicator of sea level changes. SAGULAR (2003b) described another method based on reworked nannofossil data, where the reworked specimen proportions are used as an indicator of sea level changes in coastal environments. The method which SAGULAR (2003b) described is applied in this study. Nannofossil biostratigraphy Four nannofossil biozones according to MARTINI's (1971) zonation were identified (Fig. 10). For the NN2 (Discoaster druggii) Zone, the type locality is the Arapdere section; remarks on assemblages: Discoaster druggii, Reticulofenestra haqii, Coccolithus miopelagicus and Sphenolithus moriformis are present; apart from the samples of the measured section, Triquetrorhabdulus challengeri was identified in the mudstone spot sample BI016D. For the NN3 (Sphenolithus belemnos) Zone, remarks on assemblages: Sphenolithus belemnos, S. disbelemnos, S. compactus and S. conicus are considered as marker species in this zone. Assemblage composition 4.2.1. Necibin Tepe section In this section fourteen samples were studied and half of them were sterile. The following species were identified: Sphenolithus compactus, S. moriformis and Thoracosphaera heimii. Almost half of the assemblage contains reworked specimens from the Eocene, Reticulofenestra hesslandii and T. saxea, and from the Cretaceous, Micula murus and Watznaueria barnesiae (Table 1). Above 60 m in the section, deposits turn to the 4a rhythmic sandstone-mudstone facies of the Güneyce Formation defined in calcareous nannofossil NN1, NN2 and NN3 zones at the Arapdere section. Below 60 m in the section, sediments contain only very sparse Miocene Sphenolithus compactus, the first appearance of which is also observed in the NN1 Zone (PERCH-NIELSEN, 1985; AUBRY, 1989), in the autochthonous nannofossil assemblage, with 33 to 67% reworked nannofossils from the Cretaceous and 33-100% from the Lutetian (Fig. 7). This section is very close to the Arapdere section (Fig. 2) and it is positioned at the base of the Arapdere section. Arapdere section From a total of eleven samples, two were thin-sections and contain reworked nannofossil taxa (Table 2). In the samples of the Arapdere section, the NN1 zone contains 28% autochthonous and 72% reworked nannofossils. The NN2 zone contains from 50 to 60% autochthonous and 40 to 50% reworked forms. The NN3 zone includes 58 to 98% autochthonous forms and between 2 and 42% reworked nannofossil taxa (Fig. 8). The autochthonous species belonging to the NN3 zone display percentages from 50 to 73% and reworked species comprise 27 to 50% of the assemblage (Fig. 9). 
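The bookkeeping behind the percentages quoted from Tables 1-3 amounts to converting per-sample counts of autochthonous and reworked specimens into proportions, the quantity used (following SAGULAR, 2003b) as a relative sea level indicator; the sample labels and counts below are illustrative, not data from the paper.

```python
# Illustrative computation of autochthonous vs. reworked nannofossil percentages.
counts = {
    # sample: (autochthonous, reworked Palaeogene, reworked Cretaceous) specimen counts
    "sample A": (12, 5, 3),
    "sample B": (30, 8, 2),
    "sample C": (4, 10, 6),
}
for sample, (auto, rew_pg, rew_k) in counts.items():
    total = auto + rew_pg + rew_k
    pct_auto = 100.0 * auto / total
    print(f"{sample}: autochthonous {pct_auto:.0f}%, reworked {100.0 - pct_auto:.0f}%")
```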
Biostratigraphy and palaeoecology Previous studies in this region were undertaken by GÖRMÜŞ et al. (2001), who first identified the NN1 and NN2 zones. The NN1 Zone was defined by the presence of C. floridanus, C. abisectus, C. pelagicus, Discoaster deflandrei, D. bisectus, Helicosphaera obliqua, S. conicus, S. compactus, S. dissimilis and Z. bijugatus. The NN2 Zone was defined by the presence of Discoaster druggii. A general distinction of autochthonous and reworked nannofossil taxa allows the definition of depositional and paleoenvironmental characteristics and leads to more reliable sedimentary and stratigraphic interpretations, especially for a fluctuating and unfavourable coastal environment (SAGULAR, 2003a, b). In this type of environment, marine deposits, such as the Güneyce Formation, generally contain very low amounts of autochthonous nannofossil assemblages. Reworked nannofossil assemblages are predominant in the sediments of the Güneyce Formation (Tables 1, 2 and 3). While autochthonous nannofossils display percentages between 33 and 100%, the reworked taxa account for between 6 and 100% of the assemblage composition. Based on the generally very high percentages of reworked taxa in the assemblages, the Güneyce Formation can be interpreted as representing a very shallow marine environment (Figs. 7, 8 and 9). The calcareous nannofossil assemblages of the Güneyce Formation are dominated by the genus Sphenolithus, which is more common at lower latitudes and prefers warm, shallow waters to the open ocean (PERCH-NIELSEN, 1985b). According to WEI & WISE (1990), sphenolith diversity is higher in equatorial sites, being less diverse in the mid-latitude areas. Only S. moriformis existed in the high-latitude sites. Within the whole assemblage, 150 individuals were counted and S. moriformis is presented here as the most abundant species. Based on the work of the aforementioned authors, Sphenolithus spp. abundance and diversity indicate a shallow marine environment of the mid-latitudes. Necibin Tepe section The clastic levels in approximately the first 44 m of the section have no nannofossil data due to a prevailing terrestrial or shoreline environment (Figure 4, Table 1). From 44 m to 60 m, the presence of gypsum and microalgal (derived from dinoflagellate cysts) sapropelic coal lenses can be observed in the muddy levels of the succession. Due to the presence of such coal (Fig. 11) and gypsum lenses in the samples of the Necibin Tepe section, it is considered that the Early Miocene sedimentation started in a terrestrial setting and continued in a littoral environment (Fig. 7). Besides, as the nannofossil data indicate the NN1 biozone, this section includes the beginning of the Early Miocene sedimentation of the Güneyce Formation. Arapdere section Based on the original distribution of nannofossil assemblages within the Miocene to Cretaceous (Table 2), the Arapdere section represents transgressive shallow marine sediment levels in the Güneyce Formation. They include beach sediments or shoreline sandstones representing shallow marine environments such as littoral, tidal or the offshore-transitional zone or clastic shelf, and contain mudstones with low coeval nannofossil contents. The distribution of autochthonous and reworked nannofossils in rock samples indicates sea level fluctuations in the Early Miocene shoreline transgression (Figs. 8 and 12). 
Abidinoğlutaşı Tepe section According to the nannofossil records of the Abidinoğlutaşı Tepe section, autochthonous nannofossil species decrease from the bottom levels and then become dominant again in the upper parts of the section (Fig. 9, Table 3). The interpretation of this section is that the sediments of these levels represent shallow marine depositional environments. Additionally, the presence of ascidian spicules in the samples 14G017B, 14G018B and 14G016B (Table 3) can be considered as an indicator of a shallow environment (VAN NAME, 1945). From the bottom to the top of the section, the transgressive character and sea level fluctuations continue throughout. The proportions of autochthonous and reworked nannofossil assemblages in the samples of the Abidinoğlutaşı Tepe section indicate a generally transgressive trend with unstable sea level change observed in a very shallow coastal environment during the Burdigalian, after the beginning of the Miocene (Figs. 9, 13 and 14). Based on the new results, in addition to previous studies in the southern part of Isparta (GÖRMÜŞ et al., 2001; HEPDENİZ & SAGULAR, 2009), the marine carbonate and clastics of the Isparta Formation were deposited during the Lutetian and the shallow marine shelf clastics of the Güneyce Formation were deposited during the Lower Miocene. CONCLUSIONS The following nannofossil biozones are represented in this study: the NP16 - Discoaster tanii nodifer Zone, which was defined in the Isparta Formation, and three zones in the Lower Miocene: the NN1 - Triquetrorhabdulus carinatus Zone, NN2 - Discoaster druggii Zone and NN3 - Sphenolithus belemnos Zone in the Güneyce Formation. Based on the high amount of reworked nannofossil species, the Güneyce Formation was probably deposited in a shallow marine environment. The map of the study area was refined and two marine clastic sedimentary units were revised and described (the Isparta and Güneyce Formations). An ophiolitic melange (Gökçebağ melange) was illustrated in regional maps in former studies as having been thrust over all the sedimentary sequence; in this study the map is modified according to the presented results. Marine clastic sediments unconformably overlie the ophiolitic melange. In addition, the 3D geologic map of the study area is revised and presented here. Figure 7. Distribution of original nannofossil percentages in the Necibin Tepe section (* indicates nannofossil data in thin section). Discoaster tani nodifer Zone (NP16) Definition: the last occurrence (LO) of Rhabdolithus gladius to the LO of Chiasmolithus solitus. Authors: HAY et al. (1967), emend. MARTINI (1970). Age: Lutetian, Middle Eocene. Type locality: the NP16 Zone covers the Isparta Formation observed in only a few outcrops within the investigated area (south of Gökçebağ Village, west of the investigated area, Fig. 2). Remarks on assemblage: the LO of Chiasmolithus solitus in the upper part of the zone and the first occurrence (FO) of Reticulofenestra umbilicus and LO of Blackites inversus at the base. Figure 8. Distribution of original nannofossil percentages in the Arapdere section (* indicates nannofossil data in thin section). Figure 9. Distribution of original nannofossil percentages in the Abidinoğlutaşı Tepe section (* indicates nannofossil data in thin section). Figure 10. Biozones and marker nannofossils of the Isparta and the Güneyce formations. Table 1. 
List of calcareous nannofossil species distribution, counts and percentages in the Necibin Tepe section (* indicates thin section nannofossil data).
Table 2. List of calcareous nannofossil species distribution, counts and percentages in the Arapdere section (* indicates thin section nannofossil data).
Table 3. List of calcareous nannofossil species distribution, counts and percentages in the Abidinoğlutaşı Tepe section (* indicates thin section nannofossil data).
4,097.8
2018-10-10T00:00:00.000
[ "Geology" ]
Optimal control of multiscale systems using reduced-order models
We study optimal control of diffusions with slow and fast variables and address a question raised by practitioners: is it possible to first eliminate the fast variables before solving the optimal control problem and then use the optimal control computed from the reduced-order model to control the original, high-dimensional system? The strategy "first reduce, then optimize" -- rather than "first optimize, then reduce" -- is motivated by the fact that solving optimal control problems for high-dimensional multiscale systems is numerically challenging and often computationally prohibitive. We state sufficient and necessary conditions under which the "first reduce, then optimize" strategy can be employed and discuss when it should be avoided. We further give numerical examples that illustrate the "first reduce, then optimize" approach and discuss possible pitfalls.

Introduction
Optimal control problems for diffusion processes have attracted a lot of attention in recent decades, both in terms of the development of the theory and in terms of concrete applications to problems in the sciences, engineering and finance [20,39]. Stochastic control problems appear in a variety of applications, such as statistics [17,16], financial mathematics [15,53], molecular dynamics [55,28] and materials science [57,6], to mention just a few. A common feature of the models used is that they are high-dimensional and possess several characteristic time scales. For instance, in single-molecule alignment experiments, a laser field is used to stabilize the slowly varying orientation of a molecule in solution that is coupled to the fast internal vibrations of the molecule, but ideally the controller would like to base the control protocol only on the relevant slow degree of freedom, i.e. the orientation of the molecule [56].

If the time scales in the system are well separated, it is possible to eliminate the fast degrees of freedom and to derive low-order reduced models, using averaging and homogenization techniques [51]. Homogenization of stochastic control systems has been extensively studied by applied analysts using a variety of different mathematical tools, including viscosity solutions of the Hamilton-Jacobi-Bellman equation [8,18,1,42], backward stochastic differential equations [11,12,31], Gamma convergence [41,46] and occupation measures [37,38,36]. The latter have also been employed to analyse deterministic control systems, together with differential inclusion techniques [21,58,24,5,59]. The convergence analysis of multiscale control systems, both deterministic and stochastic, is quite involved and non-constructive, in that the limiting equations of motion are not given in explicit or closed form; see [35,22,33] for notable exceptions, dealing mainly with the case when the dynamics are linear. We shall refer to all these approaches -- without trying to be exhaustive -- as "first optimize, then reduce".
On the other side of the spectrum are model order reduction (MOR) techniques for large-scale linear and bilinear control systems that are based on tools from linear algebra and rational approximation. MOR aims at approximating the response of a controlled system to any given control input from a certain class, e.g., piecewise constant or square integrable functions; see, e.g., [25,4] and the references given there. A very popular MOR method is balanced truncation, which gives easily computable error bounds in terms of the Hankel norm of the corresponding transfer functions [44,23], and which has recently been extended to deterministic and stochastic slow-fast systems, using averaging and homogenization techniques [29,26,27]. In applications MOR is often used to drastically reduce the system dimension before a possibly computationally expensive optimal control problem is solved. In most real-world applications, solving an optimal control problem on the basis of the unreduced large-scale model is prohibitive, which explains the popularity of MOR techniques. We will call this approach "first reduce, then optimize".

The MOR approach: first reduce, then optimize
In this paper we focus on optimal control of diffusions with two characteristic time scales. As a representative example, we consider the diffusion of a driven Brownian particle in a two-scale energy landscape in one dimension,

dx_s^ε = (σ u_s − ∇Φ(x_s^ε, x_s^ε/ε)) ds + σ β^{−1/2} dw_s,

where u is any time-dependent driving force (or control variable) and w_s is standard one-dimensional Brownian motion. The potential consists of a large metastable part with small-scale superimposed periodic fluctuations, Φ(x, y) = Φ_0(x) + p(y), with p(·) a 1-periodic function. A typical potential is shown in Figure 1. Now, if u is given as a function of time, say bounded and continuous, it is known that x_s^ε converges in distribution to a limiting process x_s as ε → 0, where x_s solves the homogenized equation (2) [52]. Here 0 < A < 1 is an effective diffusivity that accounts for the slowing down of the dynamics due to the presence of local minima in the two-scale potential. The property that x^ε weakly converges to x in the sense of probability measures will be referred to as forward stability of the homogenized equation. Now imagine a situation in which u depends on x_s^ε via a feedback law, where c(·; ε) is a measurable function of x. (For simplicity, we do not consider the case that c carries an explicit time-dependence.) Specifically, we choose u from an admissible class of feedback controls so that the cost functional is minimized for some given running cost L ≥ 0 associated with the sample paths of x_s^ε and u_s up to a random stopping time τ of the process. The aim of the paper is to study situations where the cost functional evaluated at u^ε converges to J(u), with u being the limit of u^ε (in some appropriate sense). Specifically, we are dealing with the situation that J^ε(u^ε) → J(u) as ε → 0, a property that we will refer to as backward stability. If the homogenized equation is backward stable, it does not matter whether one first solves the optimal control problem and then sends ε to 0 or vice versa, in which case the control u is simply treated as a parameter. One of the implications then is that we can compute optimal controls from the homogenized model, such as (2), and use them in the original equation when ε is sufficiently small. Unfortunately very few systems are backward stable in this sense, a notable exception being a system of the form (1) when the running cost L is quadratic in u, e.g. [38, Sec. 4.1]. The reader may wonder why one should first reduce the equations before solving the optimal control problem anyway, rather than the other way round. One answer is that solving optimal control problems for high-dimensional multiscale systems is usually computationally infeasible, which often leaves no other choice; another answer is that there may be situations in which a fully resolved model may not be explicitly available, but one only has a sufficiently accurate low-order model that captures the relevant dynamics of the system. In both cases one wants to make sure that the controls obtained from the low-order reduced model can be used in order to control the original system.

Mathematical justification of the MOR approach
In this article we consider the exceptional cases of backward stability and give necessary and sufficient conditions under which the reduced systems (disregarding the control) are indeed backward stable. It turns out that a class of optimal control problems that are backward stable are systems that are linear-quadratic in the control variable; they may be nonlinear in the state variables, though, and therefore cover many relevant applications in the sciences and engineering. Moreover, we find that an additional requirement is that the controls of the multiscale system converge in a strong sense; an example of weak convergence, in which the system fails to be backward stable due to lack of sequence continuity, is when the controls are oscillatory with rate 1/ε around their homogenization limit, in which case J^ε(u^ε) does not converge to J(u) unless J is linear in u. For a related discussion of weak convergence issues in optimal control, we refer to [2,3]. Similar problems for parameter estimation and filtering are discussed in [22,52,50,32,49].

Strong convergence of the control is a necessary, but not sufficient, condition for backward stability of the model reduction approach (first reduce, then optimize), in which the control variable is treated as a parameter during the homogenization procedure. The class of control problems which can be homogenized in the above way are systems of SDEs that can be transformed to systems in which the controls are absent. Such systems are linear-quadratic in the controls (but possibly nonlinear in the states), and can be transformed by a suitable logarithmic transformation of the value function of the optimal control problem: it can be shown (see [20]) that the log-transformed value function solves a linear boundary value problem that does not involve any control variables and can be homogenized using standard techniques. Once the linear equation has been homogenized, it can be transformed back to an equivalent optimal control problem that is precisely the limiting equation of the original multiscale control problem. A nice feature of the logarithmic transformation approach is that the optimal control can be expressed in terms of the solution of the linear boundary value problem, which can be solved efficiently using Monte Carlo methods. This approach is helpful when the dynamics are high-dimensional, in which case any grid-based discretization of the above linear boundary value problem is prohibitive. (The case when the stopping time τ is deterministic and the log-transformed value function solves a linear transport PDE can be treated analogously.) Our approach is summarized in Table 1.

Table 1: Schematic approach of the homogenization procedure using logarithmic transformation.
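The control-free sampling idea mentioned above also suggests a direct numerical recipe: simulate the uncontrolled dynamics until the stopping time, average the exponential of the accumulated running cost to estimate the log-transformed function, and take minus the logarithm over β to recover the value function. The following is a minimal sketch of that idea; the one-dimensional drift, the unit running cost, the domain and all function names are illustrative assumptions and are not taken from the paper.

import numpy as np

def mc_value_function(z0, drift, G, beta=1.0, sigma=1.0, dt=1e-3,
                      domain=(-1.5, 1.5), n_paths=500, rng=None):
    # Monte Carlo estimate of psi(z0) = E[exp(-beta * int_0^tau G(z_s) ds)] for the
    # *uncontrolled* dynamics dz = drift(z) dt + sigma*sqrt(2/beta) dw, stopped at the
    # first exit time tau from `domain`; the value function is V(z0) = -log(psi)/beta.
    rng = np.random.default_rng() if rng is None else rng
    acc = 0.0
    for _ in range(n_paths):
        z, cost = z0, 0.0
        while domain[0] < z < domain[1]:
            cost += G(z) * dt
            z += drift(z) * dt + sigma * np.sqrt(2.0 * dt / beta) * rng.standard_normal()
        acc += np.exp(-beta * cost)
    psi = acc / n_paths
    return psi, -np.log(psi) / beta

# Toy example: bistable drift -Phi0'(x) and unit running cost (an exit-time type cost).
psi_hat, V_hat = mc_value_function(0.5, drift=lambda x: -(x**3 - x),
                                   G=lambda x: 1.0, beta=1.0)
print(psi_hat, V_hat)

The estimator is grid-free, which is what makes this route attractive in higher dimensions; the price is a Monte Carlo error that grows with the size of β times the accumulated running cost.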
The article is organized as follows: in Section 2 the model reduction approach for the indefinite time-horizon control problem with multiple time scales is outlined, with a brief introduction to dynamic programming and logarithmic transformations in Section 2.1. The model reduction problem is illustrated in Section 3 with three different numerical examples: underdamped motion of Langevin type (Sec. 3.1), diffusion in a highly oscillatory potential (Sec. 3.2), and the Gaussian linear quadratic regulator (Sec. 3.3). The article contains three appendices: Appendix A discusses weak convergence under logarithmic transformations, Appendix B introduces the infinite time-horizon problem associated with the linear quadratic regulator example, and Appendix C contains the proof of Theorem 3 and records various identities to bound the cost functional and the value function when using suboptimal controls.

Multiscale control problem
We start by setting the notation which we will use throughout this article. We denote by O ⊂ R^n a bounded open set with sufficiently smooth boundary ∂O. Further let (z_s^{ε,u})_{s≥0} be a stochastic process assuming values in R^n that is the solution of (4), where u_s ∈ U ⊆ R^n is the control applied at time s, w = (w_s)_{s≥0} is n-dimensional Brownian motion and β > 0 is the (dimensionless) inverse temperature of the system. We assume that, for each ε > 0, the drift and noise coefficients, b(·; ε) and σ(·; ε), are continuous functions on Ō, satisfying the usual Lipschitz and growth conditions that guarantee existence and uniqueness of the process [47].

Cost functional
We want to control (4) in such a way that an appropriate cost criterion is minimized, where the control is active until the process leaves the set O. Assuming z_0^{ε,u} = z ∈ O, we define τ^ε to be the stopping time, i.e., τ^ε is the first exit time of the process z_s^{ε,u} from O. Our cost criterion reads where L is the running cost that we assume to be of the form with G being continuous on Ō. Note that the ε-dependence of the cost functional J comes only through the dependence of the control on z_s^{ε,u}. We will omit the dependence on z in J(u; z) and write it as J(u) whenever there is no ambiguity.

Logarithmic transformation
In order to pass to the limit ε → 0 in (4)-(7), we resort to the technique of logarithmic transformations that has been developed by Fleming and coworkers (see [20] and the references therein). We start by recalling the dynamic programming principle for stochastic control problems of the form (4)-(7). To this end we make the following assumptions (see [20] for further details on the first two of them):

Assumption 2. The running cost G(z) is continuous, nonnegative, and G(z) ≤ M_1 for all z ∈ Ō, with bounded first-order partial derivatives in z.

Assumption 3. There exist constants γ, C_1 > 0, which are independent of ε, such that E(exp(γτ^ε)) ≤ C_1.

We define the generator of the dynamics z_s^{ε,u} by L^ε(u). Notice that the generator depends on the control u. When the control is absent we will use the notation L^ε := L^ε(0). The next result is standard (e.g., see [20, Sec. IV.2]) and stated without proof.
Theorem 1. Let V^ε be the solution of the Hamilton-Jacobi-Bellman (HJB) equation (8), where the minimum goes over all admissible feedback controls of the form u_s = c(z_s^{ε,u}, s; ε). The minimizer is unique and is given by the feedback law (9). The function V^ε is called the value function or optimal cost-to-go.

The homogenization problem for (4)-(7) can be studied using a multiscale expansion of the nonlinear PDE (8) in terms of the small parameter ε; see, e.g., [7,38]. In this article we remove the nonlinearity from the equation by means of a logarithmic transformation of the value function. Specifically, let ψ^ε = exp(−βV^ε). By the chain rule, this, together with the relation between V^ε and ψ^ε, implies that (8) is equivalent to the linear boundary value problem (10) for the function ψ^ε. By the Feynman-Kac formula, (10) has an interpretation as a control-free sampling problem (see [47, Thm. 8.2.1]), where z_s^ε solves the control-free SDE. Equations (8)-(11) express a Legendre-type duality between the value of an optimal control problem and cumulant generating functions [14,20]: in other words, where z_s^{ε,u} satisfies the controlled SDE (4) and z_s^ε = z_s^{ε,0}.

By the above assumptions and the strong maximum principle for elliptic PDEs it follows that (10) has a classical solution ψ^ε ∈ C^{1,2}(O) ∩ C(Ō). Moreover, combining Assumption 3, (11) and Hölder's inequality, we have the corresponding bound with p = βM_1/γ + 1 and q = γ/(βM_1) + 1. In the course of the paper we will drop the assumption that the operator L^ε is uniformly elliptic and instead require only that it is hypoelliptic [43]. In this case the matrix σσ^T can be semidefinite, provided the vector field b satisfies an additional controllability assumption, known as Hörmander's condition [10], which guarantees that the transition probability has a strictly positive density with respect to Lebesgue measure, in which case (10) and (8) have classical solutions; cf. [20, Sec. IV].

Homogenization problem
We now specify the class of multiscale systems considered in this article. Specifically, we address slow-fast systems of the form (13), together with an exponential expectation. Letting L^ε denote the infinitesimal generator of (13), it holds that the generator splits into contributions at different orders of ε. Let us assume that ψ^ε admits a perturbation expansion in powers of ε. By substituting the ansatz into (15) and comparing different powers of ε we obtain a hierarchy of equations, the first three of which are given in (16). We suppose that, for each fixed x, the dynamics (13b) of the fast variables are ergodic, with unique invariant density ρ_x(y). Then by construction ρ_x is the unique solution of the equation L_0^* ρ_x(y) = 0, which together with the first equation of (16) implies that ψ_0 is independent of y. In order to proceed, we further assume that f_0(x, y) satisfies the centering condition. The centering condition, together with the strong maximum principle, implies that the solution of the cell problem is unique, with ψ_1(x, y) = Θ(x, y) · ∇_x ψ_0(x). Multiplying both sides of the third equation in (16) by ρ_x(y) and integrating with respect to y, we obtain the homogenized equation for ψ_0, with coefficients as given in (20).

Homogenized control system
It follows from standard homogenization theory for linear elliptic equations (e.g. [48,51]) that for ε → 0 the solution of (15) converges to the leading term of the asymptotic expansion, where x_s is the solution of the homogenized SDE with coefficients as given in (20).
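For intuition about what the homogenized coefficients look like, the one-dimensional periodic setting that reappears in Section 3.2 is convenient: there the cell problem can be solved in closed form and the effective diffusivity reduces to the classical two-average formula. The sketch below assumes overdamped fast dynamics with invariant density ρ(y) proportional to exp(−βp(y)) on the unit cell; the potential, the function names and the quadrature are illustrative assumptions rather than the paper's exact setting.

import numpy as np

def effective_diffusivity(p, beta=1.0, n=4001):
    # Effective diffusivity for overdamped diffusion in a 1-periodic potential p(y),
    # using the classical homogenization result K = 1 / (<exp(beta*p)> <exp(-beta*p)>),
    # where <.> denotes the average over one period (the 1D cell problem in closed form).
    y = np.linspace(0.0, 1.0, n)
    zp = np.trapz(np.exp(beta * p(y)), y)    # <exp(+beta*p)>
    zm = np.trapz(np.exp(-beta * p(y)), y)   # <exp(-beta*p)>
    return 1.0 / (zp * zm)

# Example: p(y) = cos(2*pi*y); the result K < 1 quantifies the slow-down caused by the
# small-scale wells, in line with 0 < A < 1 in the introductory two-scale example.
p = lambda y: np.cos(2.0 * np.pi * y)
for beta in (0.5, 1.0, 2.0):
    print(beta, effective_diffusivity(p, beta))

By Jensen's inequality the product of the two period averages is at least one, so the computed value always lies in (0, 1], consistent with the interpretation of the effective diffusivity as a slow-down factor.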
The corresponding asymptotic expansion of the value function V^ε for ε → 0 is obtained by the logarithmic transformation (12). Therefore, using the ansatz and the log-transformation property of the cumulant generating function (p. 8), we conclude that V_0 is the value function of the optimal control problem in which the minimization is subject to the homogenized dynamics. According to (9), the optimal feedback law for the homogenized problem reads as stated below.

Control of the full dynamics using reduced models
Our goal is to find the optimal control policy û^ε = (û^{1,ε}, û^{2,ε}) for the fast/slow system (13) for ε ≪ 1. Using Theorem 1 and the asymptotic expansion of V^ε, we obtain (25). Notice that the leading terms in (25) are related to the value function of the optimal control problem for the reduced SDE. This indicates that we may design the control policy from the reduced problem and use it to control the original multiscale equation. This assertion is justified by the following result for the general optimal control problem (4)-(7).

Theorem 3. Let Assumptions 1, 2 and 3 hold and, furthermore, suppose that ε < (γ/β)^{1/2} and |u_t − û_t| ≤ ε uniformly in t. Then the bound stated below holds. The proof of the theorem can be found in Appendix C.

Upon combining the above theorem with the formula for the optimal control policy in (25) we conclude that, when the two time scales in the system are well separated, ε ≪ 1, the optimal control policy is well approximated by the leading-order terms in (25) and results in a cost value that is nearly optimal.

Remark 4. All considerations in this paper readily generalize to the averaging problem, i.e. when f_0 = g_1 = 0 in (13). This is not surprising, since for averaging problems strong convergence ψ^ε → ψ is expected to hold (when the diffusion coefficient α_1 in (13) is independent of the fast variable y). Related problems have been addressed in [49], in which the authors study parameter estimation and convergence of the maximum likelihood function under averaging and homogenization.

Three prototypical applications
In this section we apply the results presented in the previous section to three typical multiscale models. For each model we first state the optimal control problem along with its log-transformed counterpart; then we study the asymptotic limits of the value function and of the optimal control policy and give explicit formulae for the solution. The first two examples are taken from [49], while the third is adapted from [25].

Overdamped Langevin equation
We consider the second-order Langevin equation (27), where ε ≪ 1, x ∈ R^n, β > 0, and Φ is a smooth potential energy function. Introducing the auxiliary variable y, we can recast (27) as (28). We consider the solution of the optimal control problem (29) under the controlled Langevin dynamics (30). We notice that (28) is somewhat different from the form specified in Section 2, since there is no noise and hence no control term in the equation for x^ε. The infinitesimal generator corresponding to (28) is hypoelliptic (rather than elliptic). Yet the standard homogenization arguments apply, for here the fast variable is y and the noise is acting uniformly in y. As a consequence the generator of the fast dynamics is uniformly elliptic, and hence the standard theory applies. Assuming that the linear boundary value problem (10) associated with ψ^ε has a classical solution, the dual relation V^ε = −β^{−1} log ψ^ε holds and the results of the previous section carry over without alterations.
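Before passing to the limit, it can be instructive to integrate the scaled second-order system directly and watch the position variable approach its overdamped limit as ε shrinks. The sketch below is a plain Euler-Maruyama integrator; the Smoluchowski-Kramers-type scaling written in the comments is one common convention assumed here for illustration, and need not coincide exactly with (27)-(28); the potential, the control and all names are likewise illustrative.

import numpy as np

def simulate_langevin(eps, x0=1.0, y0=0.0, beta=1.0, T=5.0, dt=None,
                      grad_phi=lambda x: x**3 - x, u=lambda x: 0.0, rng=None):
    # Euler-Maruyama integration of a scaled second-order Langevin system with a slow
    # position x and a fast velocity-like variable y (assumed scaling):
    #   dx = (y/eps) dt
    #   dy = (u(x) - grad_phi(x))/eps dt - y/eps**2 dt + sqrt(2/beta)/eps dw.
    # As eps -> 0, x approaches the overdamped dynamics dx = (u - grad_phi) dt + sqrt(2/beta) dw.
    rng = np.random.default_rng() if rng is None else rng
    dt = min(1e-4, 0.1 * eps**2) if dt is None else dt   # resolve the fast time scale
    n = int(T / dt)
    x, y = x0, y0
    xs = np.empty(n)
    for k in range(n):
        dw = np.sqrt(dt) * rng.standard_normal()
        x += (y / eps) * dt
        y += ((u(x) - grad_phi(x)) / eps - y / eps**2) * dt + np.sqrt(2.0 / beta) / eps * dw
        xs[k] = x
    return xs

paths = {eps: simulate_langevin(eps) for eps in (0.5, 0.2, 0.1)}

The stiff 1/ε² relaxation of y is what forces the tiny time step; this is precisely the numerical burden that the reduced-order (overdamped) model removes.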
Homogenized control system
From the above and the considerations of the previous section we can conclude that the leading term of V^ε(x, y) satisfies the optimal control problem (31), subject to the homogenized equation (32). Equation (32) is called the overdamped Langevin equation; it is obtained from (27) by letting the inertial second-order term tend to zero [45].

We now derive an explicit asymptotic expression for the optimal feedback law û_t^ε := û_t^{2,ε}, with û_t^ε = ĉ^ε(x_t^{ε,u}, y_t^{ε,u}). From (30) and the expansion of ψ^ε we obtain (33). As before, Θ is the solution to the associated cell problem. To solve it we notice the form of the infinitesimal generator of (28), which implies that the cell problem for Θ has the unique solution Θ(x, y) = y. Combining this with (33), we obtain the sought asymptotic expression (36) for the optimal feedback law, with V_0 as given in (31). We therefore conclude that the optimal control û^ε for the Langevin equation (27) converges to the optimal control of the overdamped equation (32) as ε → 0. Moreover, Theorem 3 guarantees that the control value is asymptotically exact if we replace û^ε with the control û = −√2 ∇_x V_0 in the multiscale dynamics (30). Hence the overdamped equation is backward stable.

Langevin dynamics in a double-well potential
As an example, consider the case n = 1, with running cost G(x) = 1 in (29) and random stopping time τ^ε = inf{s > 0 : x_s^{ε,u} > 2}. The dynamics are governed by the double-well potential depicted in Figure 2A. As the homogenized problem is one-dimensional, the leading term V_0 of the value function V^ε can be computed by solving a two-point boundary value problem. The resulting leading term (36) for the optimal control û_t = ĉ(x_t^{ε,u}) is shown in Figure 2B. We then computed the cost function J^ε = J(û^ε), starting from three different initial points x_0 = 1.0, 1.2, 1.5, using the approximation û_t ≈ −√2 ∇_x V_0(x_t^{ε,u}). Figure 3 clearly shows that J^ε approaches its infimum V_0(x_0) as ε → 0.

A clear advantage of controlling the full dynamics using the optimal control obtained from the reduced model here is that the infinitesimal generator L^ε of the original Langevin dynamics is not self-adjoint, whereas the infinitesimal generator L of the reduced dynamics is essentially self-adjoint. That is, not only do we benefit from the lower dimensionality of the reduced-order model (by a factor of 2), but we also avoid solving a boundary value problem with a non-self-adjoint operator.

Diffusion in a periodic potential
We now consider the SDE (37) [16,51].
In order to relate this system to the homogenization problem studied in Section 2.2, we introduce the auxiliary variable y^ε = x^ε/ε and reformulate (37) as (39)-(40), where x_s^ε and y_s^ε are driven by the same noise w_s. The associated value function reads as in (41). Notice that the same noise and the same control are applied to both equations. Clearly V^ε(x) = V^ε(x, x/ε), and the dual relation V^ε(x, y) = −β^{−1} log ψ^ε(x, y) applies, where ψ^ε is defined as in Section 2.2. The generator of (40) then follows accordingly.

Homogenized control system
Applying the results of Section 2, we conclude that the leading term of V^ε(x) is the value function of the following reduced-order optimal control problem: minimize (42) subject to the homogenized dynamics (43), with the effective diffusivity K. In this formula, ρ(y) = Z^{−1} exp(−βp(y)) denotes the invariant density of the fast variable y and Θ(y) is the solution of the associated Poisson equation; see [52] for details. The value function of the homogenized control problem (42)-(43) and the corresponding optimal control satisfy the relations in which Lψ_0(x) = K L_2 ψ_0(x) = βG(x)ψ_0(x), with ψ_0(x)|_{∂O} = 0, as given in (18).

Reduced model is not backward stable
In contrast to the previous example, however, the optimal control û^ε obtained from the homogenized equation alone does not meet the requirements of backward stability. This can be understood by noting that the optimal control of the original dynamics is given by a feedback law which can be formally derived from the expansion ψ^ε(x, x/ε) = ψ_0(x) + εψ_1(x, x/ε) + ... After some manipulations we find that the asymptotic expression for c^ε reads as in (46), where we used the shorthand c(x) = −√2K ∇V_0(x) in the last row. Therefore we conclude that c^ε must be of this form. Yet c(x, x/ε) does not converge to c(x) in any reasonable norm, for the x/ε part keeps oscillating as ε → 0. What does converge, however, is the average. This fact is illustrated in Figure 5, which shows the oscillations of order one that are a consequence of the ε-periodic oscillations of the value function; since the optimal control law involves the derivative of the value function, oscillations of size ε in the value function turn into O(1) contributions to the optimal control. Figure 6 shows the difference between the homogenized value function V_0(x) and its multiscale counterpart V^ε(x) in the L²-norm. The figure also shows the L²-difference between the multiscale optimal feedback law c^ε(x) and the corrected homogenized feedback law c(x, x/ε), including the oscillatory correction. This demonstrates strong O(ε) convergence in L² of both the value function and the optimal control.

Remark 5. The above case is an example in which using reduced-order models for optimal control is not recommended, for J(û^ε) does not converge to J(û) as ε → 0. Nonetheless, Theorem 3 suggests that we can use the leading term of c^ε in (46) as an approximation of the feedback law for the multiscale dynamics (39). The effect of the corrector estimate (46) is to enforce convergence of the derivative of the value function, which entails (weak) convergence of the optimal control and convergence of the optimal cost value (cf. [16] for an application in importance sampling).

Mean first passage time and value function.
As a specific example, we have solved the optimal control problem (38)-(39) for the mean first passage time, with G(x) = 1 and τ being the first passage time of the set {x ≤ 1.5}, and compared it with the solution of the homogenized system (42)-(43). The potential Φ_0 is chosen to be a tilted double-well potential, Φ_0(x) = −5(exp(−0.2(x + 2.5)²) + exp(−0.2(x − 2.5)²)) + 0.01x⁴ + 0.8x. We have solved the associated boundary value problems using the finite-volume method presented in [40], with a mesh sufficiently fine for the error to be smaller than a certain threshold. The resulting value functions are presented in Figure 7. For comparison, we have also simulated the multiscale system driven by the optimal control for the homogenized system (44), with û_t = ĉ(x_t^{ε,u}) and ĉ = −√2K ∇V_0. This amounts to using the (wrong) homogenized control in the original multiscale dynamics. To illustrate the shortcoming of such an approach, we have calculated the control value by Markov-jump Monte Carlo (MJMC) simulations (see [40]). As shown in Figure 7, equation (47) does not capture the control value J(û^ε) as ε → 0; in order to reproduce the control value correctly, one must instead use the corrected control as given in (46).
Figure 7: numerical solution of eq. (10); dashed line: numerical solution of eq. (18); MJMC sampling of (47); MJMC sampling using (48). Throughout the simulations we have set β = 2.

Linear-quadratic regulator
The third example is a multiscale linear quadratic regulator (LQR) problem that falls slightly outside the previous category. Specifically, we seek to minimize the time-averaged quadratic cost (49), where I_{n×n} denotes the n × n identity matrix. Plugging the ansatz into (51), it readily follows that S^ε solves (52). Hence the optimal control for the linear quadratic regulator (49)-(50) is given by the linear feedback law (53). Under the above assumptions, the Riccati equation has a unique symmetric positive definite solution S^ε for all values of ε > 0. Moreover, it follows that η^ε = BB^T : S^ε, which is the principal eigenvalue of the linear eigenvalue equation for the log-transformed eigenfunction ψ^ε = exp(−βV^ε). Notice that the eigenfunction ψ^ε corresponding to the principal eigenvalue −βη^ε ≤ 0 is strictly positive as a consequence of the Perron-Frobenius theorem, hence its log transformation is well defined.
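As a numerical aside, a feedback law of this linear-quadratic type can be obtained with an off-the-shelf solver for the algebraic Riccati equation, and the same routine applies verbatim to the reduced problem discussed next. The matrices, the cost weights and the specific call to scipy.linalg.solve_continuous_are below are illustrative assumptions and are not the paper's actual data for (49)-(52).

import numpy as np
from scipy.linalg import solve_continuous_are

def lqr_gain(A, B, Q, R):
    # Solve the algebraic Riccati equation A^T S + S A - S B R^{-1} B^T S + Q = 0
    # and return S together with the gain K = R^{-1} B^T S, so that u = -K x.
    S = solve_continuous_are(A, B, Q, R)
    K = np.linalg.solve(R, B.T @ S)
    return S, K

# Made-up slow-fast pair: one slow and one fast state, eps = 0.1.
eps = 0.1
A = np.array([[-1.0, 1.0],
              [1.0 / eps, -2.0 / eps**2]])
B = np.array([[1.0],
              [1.0 / eps]])
S_full, K_full = lqr_gain(A, B, Q=np.eye(2), R=np.eye(1))

Comparing the upper-left block of the full Riccati solution with the solution of a reduced Riccati equation, as is done for the ISS model below, is then a one-line matrix-norm computation.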
Reduced Riccati equation
Given the above assumptions on the matrices A and B, the homogenized version of the linear eigenvalue equation (53) can be easily computed, since the cell problem has an explicit solution. We find the homogenized coefficients, with the additive constant denoting the sum of the eigenvalues of the asymptotic covariance matrix of the fast degrees of freedom. The limiting eigenpair (η, ψ) is given in terms of S̄, the solution of the homogenized Riccati equation, in accordance with the solution of the algebraic Riccati equation of singularly-perturbed LQR problems that has been discussed in the literature; see [22] and the references therein. It can be shown by perturbation analysis of the Riccati equation (52), using the Chow transformation (see, e.g., [34] and the references therein), that S̄ corresponds to the top-left k × k block of the matrix S^ε up to O(ε²). Moreover, for any open and bounded subset Ω ⊂ R^n with smooth boundary, we have the corresponding bound for V = −β^{−1} log ψ and some constant 0 < C_1 < ∞. The latter implies a uniform bound on [0, τ_Ω], where τ_Ω is the first exit time from Ω ⊂ R^n and 0 < C_2 < ∞. For large values of β the probability that the process exits from Ω is exponentially small in β, i.e., the exit from the domain is a rare event (see, e.g., [60]), and hence we can employ the approximation τ_Ω ≈ ∞ for all practical purposes.

270-dimensional ISS model
We consider the 270-dimensional model of a component of the International Space Station (ISS) that is taken from the SLICOT benchmark library [13]. In this case, n = 270 and l = 3 in equation (49); the dimension of the slow subspace is set to k = 4, because the spectrum of dimensionless Hankel singular values of the full system shows a significant spectral gap at k = 4 when the slow variables are chosen as the observed variables; see [26] for details. The original system is Hamiltonian, but we pay no attention to the specific geometric structure of the equations here; cf. [29] for related work. The corresponding control task for the 4-dimensional reduced system is thus to minimize (56) subject to the reduced dynamics, with the homogenized coefficients as in (55). Without loss of generality, we have ignored the additive constant in the cost term that appears in the homogenized eigenvalue equation (54). As before, the optimal control is given by the linear feedback law û_s = −B^T S x_s, where S denotes the solution of (52). To verify the convergence of the value function numerically, we have computed the eigenvalues of S̄ and S^ε, the matrix norm of S̄ − S^ε_{11}, and the norm of the matrix S^ε with the S^ε_{11} block set to zero, called S^ε_r. Here S^ε_{11} refers to the upper-left k × k block of the matrix S^ε, in accordance with the notation in (50). Figure 8 shows this comparison for β = 0.01, which, given the parameters of the ISS model, amounts to the small-noise regime; the plots clearly show that the convergence is of O(ε²). We refrain from testing the convergence η^ε → η of the corresponding nonlinear eigenvalue, since the 1/ε² singularity makes the evaluation of the trace term BB^T : S^ε numerically unstable for all interesting values of ε.

A Weak convergence under logarithmic transformations
As we have seen in Section 3.2, loss of backward stability of the model reduction approach is related to weak convergence of the multiscale controls. Weak convergence is mainly an issue for homogenization problems with periodic coefficients that do not involve any explicit time-dependence. For control problems on a finite time-horizon, a well-known result (e.g., see [48, Sec. 3] or [51, Sec.
20]) that is based on the maximum principle states that the convergence of the log-transformed parabolic equation is uniform on bounded time intervals under fairly weak assumptions.

In the indefinite time-horizon case considered in this paper, however, the lowest-order approximation gives only weak convergence. In general, weak convergence is not preserved under nonlinear transformations. That is, given a weakly convergent sequence ψ^ε on R and a nonlinear continuous function F: R → R, ψ^ε ⇀ ψ does not imply F(ψ^ε) ⇀ F(ψ). In our case, however, weak convergence follows from the properties of the logarithm and the fact that ψ^ε is bounded away from 0. Let ψ^ε be the solution of the elliptic boundary value problem (10) for T → ∞, and recall that ψ^ε → ψ strongly in L²(Ō) and ψ^ε ⇀ ψ weakly in H¹(Ō). Since log C > −∞ and O ⊂ R^n is bounded, it follows that log ψ^ε ∈ L²(Ō) and, by the same argument, log ψ ∈ L²(Ō). Convergence now follows from the fact that log(x) is Lipschitz continuous with a Lipschitz constant determined by the lower bound C. This implies strong convergence of the value function. For the optimal control, the above conditions give only weak convergence, which is implied by:

Lemma 7. We have log ψ^ε ⇀ log ψ weakly in H¹(Ō).

Proof. It suffices to show that ∇ log ψ^ε ⇀ ∇ log ψ in L²(Ō). To this end recall that ∇ψ^ε ⇀ ∇ψ in L²(Ō), since ψ^ε converges weakly in H¹(Ō). Then, for all test functions φ ∈ L²(Ō), using again that ψ^ε ≥ C > 0 pointwise and uniformly in ε, we can split the relevant expression into two integrals, which we look at separately. Using that 0 < ψ^ε ≤ 1, the first integral is controlled since ∇ψ^ε ⇀ ∇ψ weakly in L²(Ō). Now for the second integral: since the weakly convergent sequence ψ^ε and its limit ψ are bounded in H¹(Ō), we conclude that ∇ψ ∈ L²(Ō), which together with the boundedness of |ψ^ε − ψ| implies that (ψ^ε − ψ)∇ψ ∈ L²(Ō). The claim then follows by the Cauchy-Schwarz inequality, which, together with the last lemma, yields the assertion.

B Ergodic control problem
We briefly discuss the ergodic control problem of Section 3.3, which is known to be related to an elliptic eigenvalue problem [30,9,19]. In principle, the equivalence of (53) and (51) follows directly from the logarithmic transformation. Here, we give an alternative derivation of the associated HJB equation, starting from the underlying Kolmogorov backward equation. To this end, let G: R^n → [0, ∞) be a continuous bounded function, and let ϕ^ε(z, t) be given by the corresponding exponential expectation. By the Feynman-Kac formula, ϕ^ε(z, t) is the solution of (58), where L denotes the infinitesimal generator of our generic uncontrolled diffusion process. Setting V^ε = −β^{−1} log ϕ^ε, we can rewrite Equation (58) in HJB form, with η = lim_{t→∞} V^ε(z, t)/t. Plugging the separation ansatz into (60), with L̄, Ḡ defined in (20), then indicates that the leading nonlinear eigenpair (η_0, V_0) satisfies η_0 = lim sup of a time-averaged expectation. By ergodicity of the controlled process, this expectation is independent of the distribution of the initial values; see [55] and the references therein.
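To see the eigenvalue characterization at work on a toy problem, one can discretize the generator of a one-dimensional uncontrolled diffusion, form the operator L − βG, and read off its principal eigenvalue and positive eigenfunction; the ergodic rate and the value function then follow from the logarithmic transformation. The potential, the running cost, the reflecting truncation of the domain and all names in the sketch below are illustrative assumptions.

import numpy as np

def principal_eigenpair(phi0_prime, G, beta=1.0, x=np.linspace(-3.0, 3.0, 301)):
    # Principal eigenpair of L - beta*G on a grid, where L = -phi0'(x) d/dx + (1/beta) d^2/dx^2
    # is the generator of the uncontrolled dynamics and the domain is truncated to a large
    # interval with approximate reflecting boundaries.  The principal eigenvalue equals
    # -beta*eta, and the eigenfunction plays the role of psi = exp(-beta*V).
    n, h = x.size, x[1] - x[0]
    b = -phi0_prime(x)                        # drift of the uncontrolled dynamics
    M = np.zeros((n, n))
    for i in range(n):
        up = b[i] / (2.0 * h) + 1.0 / (beta * h**2)
        dn = -b[i] / (2.0 * h) + 1.0 / (beta * h**2)
        if i > 0:
            M[i, i - 1] = dn
        else:
            M[i, i] += dn                     # fold the ghost node back (left boundary)
        if i < n - 1:
            M[i, i + 1] = up
        else:
            M[i, i] += up                     # fold the ghost node back (right boundary)
        M[i, i] += -2.0 / (beta * h**2) - beta * G(x[i])
    lam, vecs = np.linalg.eig(M)
    k = np.argmax(lam.real)                   # principal eigenvalue (largest real part)
    psi = np.abs(vecs[:, k].real)             # positive by Perron-Frobenius
    eta = -lam[k].real / beta
    V = -np.log(psi / psi.max()) / beta       # value function up to an additive constant
    return eta, V

eta, V = principal_eigenpair(lambda x: x**3 - x, G=lambda x: 0.5 * x**2, beta=2.0)

For high-dimensional dynamics this dense eigenvalue computation is of course hopeless, which is exactly where the Monte Carlo evaluation of the exponential expectation from Section 2.1 becomes the method of choice.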
C Entropy bounds for the cost function
In this section we study the cost function of the optimal control problem from the point of view of a change of measure. Consider the SDEs (61) and (62), where u_s is any bounded measurable control that is adapted to z_s. Let µ and µ_u denote the path measures generated by (61) and (62), respectively. Then Girsanov's theorem [47] gives the corresponding likelihood ratio. Let a cost functional be given by (63), where G satisfies Assumption 2 from Section 2.1. Here we use the notation E_{µ_u} to indicate that the expectation is understood with respect to the probability measure µ_u. Moreover, the dependence of J on the initial value z is omitted. Let û = argmin J(u); then from Theorem 1 we know that û_s depends only on z_s. Let µ̂ denote the measure µ_û for simplicity. Our purpose here is to estimate |J(u) − J(û)| when ‖u − û‖_{L∞} is small. We will make use of the following definition.

Definition 8. For two probability measures µ_u, µ̂ with µ_u ≪ µ̂, the Kullback-Leibler divergence of µ_u relative to µ̂ is defined as stated below.

We also assume that Assumption 3 from Section 2.1 holds: there exists γ > 0 such that E_µ(e^{γτ}) = C_1 < +∞. As in Section 2.1, we have the corresponding bound. Here and in the following, the conditioning on the initial value is omitted. We also need two technical estimates in order to study the convergence of the cost functional. We start with the following estimate.

Figure 1: Bistable potential (shown in red) with superimposed small-scale oscillations of period ε (in blue).
Figure 3: Overdamped Langevin dynamics. Cost function for different values of ε. Different colors correspond to different initial values x_0. Lines marked with "×" are the value function V^ε computed from the exponential expectation using Monte Carlo. Lines marked with " " are the cost function J^ε = J(û^ε), computed from the homogenized control with the original dynamics. We observe that the two values approach V_0(x_0) as ε → 0 (horizontal line).
Figure 4: Controlled diffusion in a multiscale potential: minimize the transition time from the red to the blue region.
Figure 5: Value function and resulting optimal control (lower panel).
Figure 6: Strong L² convergence of value function and optimal control.
Figure 8: Hankel singular values and quadratic convergence of the matrix S^ε in terms of the k dominant eigenvalues (upper left panel), the 1-1 matrix block (upper right panel) and the residual matrix S^ε_r (lower left panel); for smaller values of ε the numerical solution of the Riccati equation is dominated by roundoff errors, hence the results are not shown. The lower right panel shows the first 40 Hankel singular values (out of 270) when the slow variables are observed; the Hankel singular values are independent of ε.
8,967
2014-06-13T00:00:00.000
[ "Engineering", "Mathematics", "Physics" ]
Linking Entrepreneurial Activities and Community Prosperity/Poverty in United States Counties: Use of the Enterprise Dependency Index : More than 3000 U.S. counties are used to examine a hypothesis that the enterprise dependency index (population numbers/enterprise numbers, EDI) can serve as a measure of community prosperity/poverty. The theoretical derivation of EDI is presented. Then, a slightly nonlinear relationship between the total and poor populations of the counties is recorded. Poverty is slightly more systematically concentrated in smaller counties. The foregoing indicates that poverty forms part of the demographic–socioeconomic–entrepreneurial nexus of human settlements. The EDIs and poverty rates of counties are statistically significantly and positively correlated. The nonlinear power law, however, explains only about 45% of the variation, suggesting that the two measures are not identical. Further analyses confirm the independence of the two measures. Poverty is only one part of the two-part prosperity/poverty continuum. Measurement of poverty rates seemingly ignores the economic impacts of prosperity in communities. The analyses suggest that EDI, based on the ability of communities to ‘carry’ enterprises, is a more sensitive measure of community prosperity/poverty than the poverty rate. The hypothesis that EDI is a useful measure of community prosperity/poverty is accepted. Further research is, however, needed to optimize the use of this measure. Introduction Considerations of sustainability should not only include ecosystem health and economic development but also social justice [1]. Poor communities and sustainable development might, however, be incompatible, because these communities usually have few resources under their control and in situ resource exploitation is often an issue of survival [2]. The so-called Brundtland Commission considered sustainable development goals and introduced the normative imperative that the needs of the present generation should be met without compromising the ability of future generations to meet their own needs [3]. Poverty, racism, political marginalization, and the opportunity to make a livelihood should, therefore, feature in consideration of sustainable societies [1]. Community poverty, and its measurement, is, therefore, an important issue. The purpose of this contribution is to promote a different way to measure community poverty. It is based on the enterprise dynamics of human urban settlements. To develop the reasoning behind the suggestion, background information is provided. First of all, community poverty and ways to measure it are reviewed. Thereafter, new approaches to the study of urban socioeconomics are reviewed, leading to considerations of the enterprise dynamics of human settlements. The development of a population and enterprise-based method to measure community prosperity/poverty is then explained. Community Poverty For a long time, poverty research focused on the culture of poverty and not community poverty. Myrdal [4] was the first to ignore this dominant trend. He emphasizes the rise of an underclass of unemployed and unemployable people and families, which was a seemingly inevitable result of the structural and technological problems inherent in modern society [4]. Community poverty became an important national and international issue. For instance, President Lyndon B. Johnson declared a "War on Poverty" in January 1964 [5]. 
The ending of poverty then became a major issue in the policy discussions of richer countries [6]. In 2015, the international community formulated a number of Sustainable Development Goals, which include the aim of ending extreme poverty by 2030 [7]. Yet, poverty remains a problem. In 2018, poverty in the United States, one of the world's most prosperous nations, was still persistently high [8]. In 2017, an estimated 9.2 percent of the global population lived below the international poverty line of 1.90 USD a day [9]. There were still 689 million extremely poor people in the world in 2017. The measurement of poverty faces two distinct problems: the identification of the poor among a total population, and the construction of an index of poverty using the available information on the poor [10]. The former requires the choice of a criterion of poverty (e.g., the selection of a poverty line) and then determining those who satisfy that criterion (e.g., fall below the poverty line, sometimes referred to as the 'head count') and those who do not. Over a long period, the appropriateness of the choice of the poverty criterion elicited many opinions [11][12][13][14]. Yet, decades ago, Sen [10] had pointed out that the most common procedure is simply to count the number of poor and then to calculate the poverty rate, i.e., the percentage of the total population falling in the poverty group. Although the poverty rate is only a crude index [10], it is still widely used in research e.g., [15][16][17][18][19]. McGranahan and Beale [20] also used poverty rates to explain rural population loss in the U.S. They [20] remarked that, generally, economic models of regional growth and decline suggest that areas of high poverty should also be areas of population loss because, as opportunities decline in an area, poverty rates rise and people move to other areas in search of better opportunities. Outmigration subsequently reduces the poverty rate, such that poverty rates should ultimately equalize across areas. However, based on poverty rates, McGranahan and Beale [20] argued that two facts about rural distress in the U.S. refute this general model. Firstly, areas with high poverty rates and areas with large population losses usually have had these conditions for a long time. The conditions did not originate in the short term. Secondly, rural counties with high poverty rates in 1990 were no more likely to have population losses in the 1990 to 2000 period than other rural counties. QuickFacts of the U.S. Census Bureau also presents the poverty rate (percent of people in poverty) for states, counties, cities, towns, and zip codes in the U.S [21]. A major weakness of the poverty rate is that it deals with only one side of the twosided prosperity/poverty continuum of communities and societies. The continuum reflects the ratio between the number of not-poor (i.e., prosperous) and the number of poor people in a society. The poverty rate is not an ideal measure because the same number of poor people could be resident in communities with widely differing numbers of prosperous residents and, thus, widely differing states of prosperity/poverty. It is, therefore, worthwhile considering a poverty measure that is not based on a criterion of poverty (e.g., a poverty line), but which measures the prosperity/poverty state of a community. To explore this possibility, it is necessary to consider the enterprise dynamics of human settlements. 
Human Settlements and Enterprise Dynamics In the new millennium, urban scaling research [22] demonstrates that the spatial and temporal levels of social, economic, and political interactions of urban settlements are subject to constraints imposed by environmental conditions, technology, and institutions [23,24]). Scaling simply refers, in its most elemental form, to how a system responds when its size changes [25]. In other words, it can potentially reveal the underlying principles that determine the dominant behaviour of highly complex systems. The development of a Settlement Scaling Theory (SST) for cities [22,24] is based on the nonlinear scaling that has been observed in human settlements whose populations vary in size by many orders of magnitude [26]. SST provides a set of hypotheses and mathematical relationships that together generate predictions of how measurable quantitative attributes of settlements, including their business establishments, are related to their population sizes [24][25][26]. Business establishments (here called enterprises) are used in urban studies as fundamental units of economic analysis [26]. The reason is that innovation, wealth generation, entrepreneurship, and job creation all manifest themselves through the formation and growth of workplaces. Youn et al. [26] reported a surprisingly simple result: the total number of enterprises in U.S. metropolitan statistical areas (MSAs) is linearly proportional to their population sizes. This relationship is true for MSA populations that differ over many orders of magnitude. Thus, in a constant fashion, larger MSAs have proportionally more enterprises, and vice versa. Linear proportionalities (between population numbers and enterprise numbers) have also been observed for South African towns [27,28] and U.S. counties [29]. However, in all of these cases, be it MSAs, counties, or towns, there was also some variation around the lines of best fit between population and enterprise numbers. In other words, a specific population number could be associated with a range of enterprise numbers in different human settlements, or a specific enterprise number could be associated with a range of population numbers in different settlements. Toerien [28,29] argued that if more people are needed to support a given level of enterprises in one community compared to another, the buying power to support enterprises of the former group is lower and that of the latter group is higher. The former is a poorer community and the latter a more prosperous community. Enterprise Dynamics and Prosperity/Poverty of Communities The observed linear relationships between enterprise numbers and population numbers of human settlements [26,28,29] can be stated as: The regression coefficient, b, is: b = Enterprise numbers/Population numbers (2) The regression coefficient (b) indicates how many enterprises can be 'carried' by a specific population. It is a measure of the 'enterprise carrying capacity' (ECC) of a human settlement. Although the concept of carrying capacity is used widely in ecological disciplines [30], it is not used much in mainstream economics [25]. In ecological systems, carrying capacity is shaped by processes and interdependent relationships between finite resources and the consumers of those resources [30]. In human settlements, the enterprise carrying capacity is determined by the needs and wants of people and the money they have available for spending. This determines the number and types of enterprises from which they can buy. 
Toerien [28,29] called the inverse of the regression coefficient of Equation (2) above, (i.e., 1/b), the enterprise dependency index (EDI), and suggested that it is a measure of community prosperity/poverty. The ECC (enterprises/population) and EDI (population/enterprises) are obviously two sides of the same coin. They are both assessments of the prosperity/poverty statuses of communities. Because of the similarity, this contribution focuses mainly on the application of EDI in community prosperity/poverty analyses. It must be kept in mind that ECC could have been used in its stead. It should be possible to link demographic information (population numbers) and entrepreneurial information (enterprise numbers) and the prosperity/poverty information (EDI) of human communities. However, would such a linkage be useful? Because community poverty is an important consideration in many countries, including the United States, it could be very important. For instance, the Great Recession (2008 to 2010) increased neighbourhood poverty in the U.S. in the midst of affluence [15]. The result was the re-emergence of a racial and ethnic underclass living in inner-city neighbourhoods. Consequently, there is a need for a geographic redirection of poverty research because poverty should be understood in terms of where it is located [31]. Benzow and Fikri [31] reflected on the 1980 to 2018 poverty trends of American neighbourhoods. During this period, approximately 4300 neighbourhoods that housed 16 million Americans crossed the high-poverty threshold (a 30 percent poverty rate or higher). The quest to reduce poverty has historically relied on two levers: economic growth and the intentional redistribution of resources to the poor, either by the domestic state or foreign aid [7]. Unemployment and underemployment are some of the strongest predictors of poverty [8,32,33]. For instance, households whose usual breadwinners are out of work are three times more likely to be poor than working households [33]. Rifkin [34] reflected on the future impacts of technological development on enterprises, and, hence, on employment. He argued that a new age of global markets, automated production, and a near-workerless economy is in sight. What happens in the workplace in the future would have important implications for humanity, and future unemployment could be a serious problem. Whereas Rifkin [34] is concerned about the impact of enterprise dynamics on employment (as impacted by technology), it has become necessary to examine the possible application of the EDI in urban research in much greater detail. This is done here. Purpose of This Contribution The prime purpose of this contribution is to examine a hypothesis that EDI, an enterprise dynamics characteristic, is a useful measure of community prosperity/poverty. Until recently, the prosperity/poverty status of communities has not been measured using enterprise and demographic dynamics. Such an approach was shown to be potentially useful [28,29] and it is now subjected to more extensive analysis. Methods A specific challenge confronted the planning of this contribution. The hypothesis to be tested involved the evaluation of a new method that is not based on the number of poor people but on the ability of communities (the people in U.S. counties) to financially support enterprises. This meant that the analytical strategy had to be based on scaling laws (i.e., cross-sectional scaling [35]) regularly encountered in modern urban research. 
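To fix ideas before turning to the data, the two ingredients of this strategy, the EDI of each county and the cross-sectional power-law (log-log) regression between county characteristics, can be written down in a few lines. The sketch below uses invented population and enterprise counts purely for illustration; it is not drawn from the 2017 county datasets described next, and the R² is computed on the log-transformed values, as is usual in the scaling literature.

import numpy as np

def power_law_fit(x, y):
    # Fit y = a * x^k by ordinary least squares on log10-transformed data and return
    # the scaling exponent k, the prefactor a and the R^2 of the log-log fit.
    lx, ly = np.log10(x), np.log10(y)
    k, loga = np.polyfit(lx, ly, 1)
    resid = ly - (k * lx + loga)
    r2 = 1.0 - resid.var() / ly.var()
    return k, 10.0 ** loga, r2

# Hypothetical county data: populations and enterprise counts (not real 2017 figures).
population = np.array([900, 12_000, 85_000, 400_000, 2_500_000])
enterprises = np.array([25, 310, 2_100, 11_000, 70_000])

edi = population / enterprises     # enterprise dependency index: people per enterprise
ecc = enterprises / population     # enterprise carrying capacity, the inverse of EDI
k, a, r2 = power_law_fit(population, enterprises)
print(edi.round(1), round(float(k), 3), round(float(r2), 3))

An exponent k close to one recovers the near-linear proportionality reported for MSAs and counties, while the county-to-county scatter of the EDI values is exactly the residual variation that the following analyses attribute to community prosperity/poverty.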
The analysis used data from more than 3000 U.S. counties. The 2017 dataset of the counties' demographic, socioeconomic, and enterprise characteristics was examined to test the utility of EDI as a measure of community prosperity/poverty. This dataset was selected because 2017 is long after the Great Recession (2008 to 2010) and before the onset of the COVID pandemic. It was expected to reflect normal conditions. The scaling analyses were used to: (i) determine if the number of poor people in counties is related to their total population, and (ii) examine whether the county enterprise and demographic dynamics are similar to those of cities [22][23][24][25]. These analyses would establish whether the SST philosophy [22,24] could be applied. Once this was established, the relationships between demography and enterprise numbers (relating to ECC), as well as between the enterprise numbers and demography (relating to EDI), of the counties were examined. Having established that population and enterprise numbers are linearly correlated, the variation of data around the regression line was examined. Pearl and Mackenzie [36] suggested that, in cases where confounding might be an issue, it is helpful to hold the value of one characteristic constant and to monitor the behaviour of another. The stated hypothesis implies that the prosperity/poverty state (measured as EDI) is a confounder of the relationship between population numbers and enterprise numbers and causes the variation observed. The suggestion of Pearl and Mackenzie [36] was applied for both population numbers and enterprise numbers. The results were then interpreted in terms of EDI dynamics. Thereafter, it was necessary to quantify the relationship, if any, between EDI and the level of personal income in counties. Hereafter, it was necessary to determine if the total income of counties is related to their enterprise numbers and population numbers, and hence their prosperity/poverty states. Finally, the relationship between EDI and the poverty rate was examined to assess the utility of each in estimating community poverty/prosperity. This allowed a final assessment of the stated hypothesis. Datasets Used The 2017 Poverty and Median Household Income Estimates dataset [37] was used to extract the 2017 numbers of officially poor people in U.S. counties. The poverty rates in this dataset were then used to provide estimates of the total population of each county. A dataset of the Bureau of Economic Affairs [38] provides estimates of the 2017 personal income levels of U.S. counties. The personal income levels of each county were multiplied by its estimated population numbers to provide estimates of total personal income in the counties. The County Business Patterns: 2017 dataset [39] provides estimates of the 2017 number of establishments (here called enterprises) in the counties. Power Law Analyses Scaling is a general analytical framework used by many disciplines to characterize how population-averaged properties of a collective vary with its size [22][23][24][25]. Power law analyses (log-log regressions) were used to examine if scaling is present in the relationships between the various micropolitan characteristics. Microsoft Excel software was used for all of these analyses. Scaling Terminology The scaling terms sub-linear, super-linear, and linear are used by West [25] in the application of power law analyses. 
These terms indicate the following: sub-linear scaling indicates disproportionate agglomeration of one socioeconomic characteristic at smaller or lower values of another characteristic of a county. It indicates economies of scale. Superlinear scaling is associated with disproportionate agglomeration of one socioeconomic characteristic at the larger or higher values of another characteristic of a county. It indicates increasing returns to scale. Linear scaling indicates that one characteristic is linearly associated with another characteristic irrespective of the size of the human settlement. Link between Demography and Poverty There is a strong and almost linear power law association (exponent = 0.95) between the total population and the number of poor people in more than 3000 U.S. counties ( Figure 1). Overall, poverty numbers form a reasonably constant fraction of county populations over many orders of magnitude. This suggests that there might be a limit as to how many poor people can be 'carried' by the prosperous ('non-poor') fraction of a population. However, there is a small scaling effect indicating a slight decrease in the poverty rate (%) associated with increasing population sizes (Table 1). Poverty is somewhat more prevalent in smaller than larger counties. The dynamics of poor populations clearly form part of the orderliness of the demographic-socioeconomic-entrepreneurial nexus of U.S. counties and, thereby, justifies the SST approach used in this analysis. Population-Enterprise Relationships The 2017 power law association between the population numbers and enterprise numbers of the selected U.S counties is depicted in Figure 2. The power law relationship (diagonal red line) has an exponent that indicates an almost linear relationship. The power law explains more than 95 percent of the variation (see R 2 in Figure 2). The power law is very similar to that reported for U.S. metropolitan statistical areas (MSAs) [26]. Youn et al. [26] reported an exponent of 0.98 ± 0.2 for MSAs (see Figure 1 in [26]). The exponent for the U.S. counties is 0.9855 (Figure 2). The power law covers a population range of more than four orders of magnitude, i.e., from less than one thousand to about 10 million people per county. The enterprise range also covers four orders of magnitude, i.e., from about 10 to more than 100 thousand enterprises per county. This power law provides information about the enterprise carrying capacity (ECC, the enterprises/population relationship) of the counties. As expected, a statistically significant power law also describes the relationship between enterprise numbers and population numbers (Figure 3). The power law relationship (diagonal red line) has an exponent that indicates a slightly sub-linear relationship. It also explains more than 95 percent of the variation (see R 2 in Figure 3) and covers enterprise and population ranges of more than four orders of magnitude. The power law provides information about the EDIs (the population/enterprises relationships) of the counties. The strong and extended proportionalities depicted in Figures 2 and 3 indicate a need for more detailed analyses. Despite the fact that strong population-enterprise or enterprisepopulation relationships are reflected in Figures 2 and 3, it is evident that some variation is still not fully explained by the power laws (see the distribution of data points around the power law lines in Figures 2 and 3). 
For instance, constant population levels (such as those depicted in line AB in Figure 4) are associated with a range of different enterprise number levels. Similarly, constant numbers of enterprises (such as those depicted in line CD in Figure 4) are associated with a range of different population numbers. It follows from Equations (1) and (2) in the text that if either the population or enterprise numbers of a group of counties are kept constant, variation in the magnitude of the other variable would indicate that the EDI, and hence the prosperity/poverty status of the counties, is changing. This was verified in two ways. Firstly, two examples of constant enterprise numbers per county (400 to 410 enterprises in Figure 5A and 1000 to 1025 enterprises in Figure 5B) are presented. Large populations with constant enterprises had high EDIs, indicating higher community poverty, and small populations with constant enterprises had low EDIs, indicating more prosperous communities. This conclusion was corroborated by higher poverty rates when EDIs were high and lower poverty rates when EDIs were low (Figure 5A,B). Secondly, two examples of constant population numbers per county are presented (Figure 6A and, covering 100,000 to 105,000 people, Figure 6B). Constant populations with higher enterprise numbers indicated lower EDIs, i.e., more prosperous communities. Constant populations with fewer enterprises indicated higher EDIs, i.e., poorer communities. This was true irrespective of the levels of the constant populations. These analyses suggest that the prosperity/poverty statuses of counties influence the population number-enterprise number relationships of U.S. counties. In other words, the number of enterprises in a county is not only a function of its population number but also of its prosperity/poverty state, i.e., the buying power of its population. Relationships between County Incomes, Populations, and Enterprises The foregoing conclusion prompted an analysis of the relationships between total county incomes and county populations (Figure 7A) and county incomes and county enterprise numbers (Figure 7B). Population numbers and enterprise numbers are slightly disproportionately and sub-linearly correlated with total county incomes (see equations in Figure 7A,B). In other words, county populations and enterprises are disproportionately higher in counties with lower incomes. Having money to spend is obviously an important county property. The spread of points around the regression lines in Figure 7A,B raised the question as to whether the prosperity/poverty statuses of counties might play a role in the above relationships. This possibility was tested by examining the EDIs of counties with constant incomes (Figure 8A,B). There was a gradient from low EDIs to high EDIs in each comparison, irrespective of the magnitude of constant county incomes. County prosperity/poverty statuses, therefore, play important roles as confounders. They moderate the relationships between total county incomes and population or enterprise numbers. Relationship between Personal Incomes and Prosperity/Poverty States What is the link between EDI (as a measure of prosperity/poverty) and the levels of personal income? Figure 9 shows that there is a negative relationship. Higher personal incomes, indicating higher community prosperity, are generally associated with lower community EDIs. Conversely, lower personal incomes are associated with higher EDIs, which is indicative of more poverty (Figure 9). However, this only explains about 34% of the variation, indicating a need to delve deeper.
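The 'hold one characteristic constant' check described above can likewise be sketched in code. The 400 to 410 band below mirrors the example in Figure 5A, while the poverty_rate column is again a hypothetical placeholder for the rate taken from the poverty estimates dataset.

import pandas as pd

df = pd.read_csv("county_data.csv")
df = df[(df["population"] > 0) & (df["enterprises"] > 0)]
df["EDI"] = df["population"] / df["enterprises"]

# Counties with near-constant enterprise numbers (illustrative 400 to 410 band).
band = df[df["enterprises"].between(400, 410)]

# If EDI confounds the population-enterprise relationship, counties in this band with
# larger populations should show higher EDIs and, in general, higher poverty rates.
cols = ["population", "enterprises", "EDI", "poverty_rate"]
print(band[cols].describe())
print(band.sort_values("EDI")[cols])

The same slicing, applied to near-constant population bands or near-constant total incomes, corresponds to the comparisons reported in Figures 6 and 8.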
Relationship between the Enterprise Dependency Index and the Poverty Rate There is a weak but nevertheless highly statistically significant (p < 0.01) power law correlation (r = 0.45, n = 3134) between the EDIs and the poverty rates of the counties (Figure 10). Higher EDIs are generally associated with higher poverty rates and vice versa. However, only some 19% of the variation is explained in this way (Figure 10). A wide spread of data points supports indications that poverty rates and EDIs are not identical. The lack of a strong relationship between EDI and the poverty rate in counties was confirmed. Counties grouped on the basis of constant poverty rates (10% and about 20%) have widely differing EDIs (Figure 11). The EDIs of counties with a 10 percent poverty rate vary from as low as 20 people per enterprise to more than 80 people per enterprise. Counties with poverty rates of approximately 20 percent are associated with EDIs of less than 40 to more than 100. Similar results (not shown) were obtained for poverty rate groupings of approximately 5% and 15%. EDI measures the number of people needed to 'carry' the average enterprise in a county. The large non-poor fraction of county populations most probably masked the expression of community prosperity/poverty when measured by the poverty rate. Compared to EDI, the poverty rate appears to be an inferior measure of the relationship between entrepreneurship and community prosperity/wealth. Discussion Poverty research has many dimensions, ranging from efforts to define poverty [40], measure poverty [14,41], model poverty [42], identify the determinants of relative poverty in advanced capitalist democracies [16], develop poverty policies [43], etc. Because U.S. neighbourhood poverty increased in the midst of affluence during the Great Recession, Lichter et al. [15] suggested that poverty should be understood in terms of where it is located. In other words, poverty research should also focus on communities. Community poverty is a constant concern all over the world [4][5][6][7]. Poverty is also important in considerations of sustainability [1]. Together with ecosystem health and economic development, community poverty should also be taken into account [1]. Sustainability is concerned with the well-being of future generations [1,44]. The Brundtland Report of 1987 provided the inspiration for questions about the meaning of sustainability as a concept [3,44]. Brundtland and her colleagues proposed that sustainable development should meet the needs of the present without compromising the ability of future generations to meet their own needs. Poverty clearly reduces the ability of future generations to meet their own needs. Sustainability is now almost always considered in terms of three dimensions: social, economic, and environmental [44]. Community poverty and its measurement and dynamics should form part of sustainability considerations. The measurement of community poverty is considered to be important [11,13,14]. The measurement of poverty rates became the established, and virtually only, way of determining the poverty states of communities [11][12][13][14][15][16][17][18][19]. This happened despite the knowledge that the poverty rate is a crude index [10]. To determine community poverty rates, some measure of poverty is usually chosen, the number of poor people in communities is counted, and poverty rates are calculated [10].
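A small worked contrast between the two measures, using invented numbers purely for illustration rather than values from the study's data, makes the difference concrete:

# Two hypothetical counties with identical poverty rates but different enterprise bases.
counties = {
    "County A": {"population": 50_000, "poor": 5_000, "enterprises": 1_250},
    "County B": {"population": 50_000, "poor": 5_000, "enterprises": 500},
}

for name, c in counties.items():
    poverty_rate = 100 * c["poor"] / c["population"]  # share of officially poor people (%)
    edi = c["population"] / c["enterprises"]          # people per enterprise
    print(f"{name}: poverty rate = {poverty_rate:.1f}%, EDI = {edi:.1f} people per enterprise")

Both invented counties return the same 10% poverty rate, obtained simply by counting the poor and dividing by the total population, whereas the EDI separates them: County B needs twice as many people to 'carry' each enterprise.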
In this process, only one side (the number of poor people in communities) of the two-sided prosperity/poverty continuum is taken into account. The contribution of non-poor (prosperous) people to community prosperity/poverty is ignored. To overcome this weakness, a way was needed to estimate the state of community prosperity/poverty that is not dependent on the number of poor people. This study evaluated an alternative method based on demographic and enterprise dynamics to measure community prosperity/poverty. The proposed method [28,29] draws upon research in the new millennium that demonstrated strong orderliness in the demographic-socioeconomic domains of urban settlements [22,25]. Human settlements are extremely complex systems with many interdependent facets, e.g., social, economic, infrastructural, and spatial characteristics [22,25,[45][46][47][48]. These characteristics are highly correlated and interconnected and are driven by the same underlying dynamics [25]. Scaling phenomena are prevalent [22]. Such systemic regularities are embodied in SST (a scaling theory) [24] and provide windows onto the underlying mechanisms, dynamics, and structures common to all such settlements. This study reports on a number of poverty-related community dynamics. Firstly, the dynamics of the number of poor people in U.S. counties are shown to be part of SST [22]. Scaling laws constrain the development of new theories [49]. Any theory that attempts to explain a phenomenon should be compatible with the empirical scaling relationships that the data exhibit [49]. A highly statistically significant (p < 0.01) power law relationship was detected between the number of poor people and the total populations of U.S. counties (Figure 1). The nature of such a relationship is not considered in most, if not all, community poverty research efforts [11][12][13][14][15][16][17][18][19][20]. Yet, it has a direct influence on the calculation of poverty rates (Table 1). The number of poor people in counties scales slightly sub-linearly with population size (see power law equation in Figure 1). In general, poverty rates tend to be systematically higher in smaller than in larger counties (Table 1). These dynamics support the suggestion that the poverty rate is a crude measure [10]. Secondly, the main focus of this contribution is to examine whether the relationships between population numbers and enterprise numbers in human settlements could be applied in the evaluation of the proposed method. In particular, it examines whether the EDI (enterprise dependency index) could serve as a measure of community prosperity/poverty. The EDI relates to how many people are associated with the average enterprise in a community. Strong linear relationships have previously been observed between the number of people and the number of enterprises in groups of human settlements [26][27][28][29]. These highly significant and linear relationships meet the requirement for the development of new theories [49]. Linear relationships are also found in the evaluation of U.S. counties (Figures 2 and 3). They stretch over many orders of magnitude of population and enterprise numbers. The relationship for U.S. counties (Figure 2) is almost identical to the relationship for U.S. cities [26]. The number of county enterprises is, consequently, closely and positively associated with the sizes of county populations. Thirdly, county population sizes and enterprise numbers are closely correlated with county incomes (Figure 7).
Higher personal incomes are generally associated with greater community prosperity and vice versa (Figure 9). Fourthly, a theoretical derivation of EDI as a measure of community prosperity/poverty from the linear relationship between population and enterprise numbers in human settlements is presented (Equations (1) and (2) in the text, Figure 4). Variation around the lines of best fit of the relationships depicted in Figures 2, 3 and 7 is explained in terms of differences in the counties' EDIs, i.e., their prosperity/poverty states (Figures 5, 6 and 8). The prosperity/poverty states of counties moderate the magnitudes of their incomes, populations, and enterprises. The size of a county population and the population's ability to pay enterprises for services and goods apparently determine the number of county enterprises. The prosperity/poverty states of counties and their measurement (as EDIs) are important socioeconomic derivatives that should be used in socioeconomic analyses and in considerations of sustainability. Fifthly, the question arose as to whether poverty rates and EDI reflect in similar ways on the prosperity/poverty statuses of U.S. counties. For the first time, it is demonstrated that the EDIs and poverty rates of U.S. counties are weakly, but statistically significantly, related (Figure 10). However, only about 20% of the variation is explained. These two community poverty measures are clearly not identical (Figure 11). Higher EDIs are generally associated with higher poverty rates and vice versa, yet counties with constant poverty rates are associated with widely varying EDIs (Figure 11). The large non-poor (prosperous) fraction of county populations appears to be a confounder that partly masks the expression of community prosperity/poverty when measured by the poverty rate. EDI appears to be a more sensitive measure of community prosperity/poverty than the poverty rate. Given the results presented, the hypothesis that EDI is a useful measure of community prosperity/poverty is accepted. However, further research is needed to optimise the use of this measure. This includes actions such as examining the role of poverty in the sustainability of communities, determining the limits of the ability of human settlements to 'carry' poor people, understanding poverty as a component of SST (settlement scaling theory), and determining how events such as the Great Recession or the COVID-19 pandemic affect the prosperity/poverty statuses of human settlements. There is scope for much more research. Conclusions Community poverty is an important socioeconomic and sustainability issue. The measurement of community poverty by way of poverty rates is known to be crude. It deals with only one part of the two-part prosperity/poverty continuum in communities. There is a need for an alternative method to measure community poverty. The number of poor people and the total populations of U.S. counties are correlated, with slightly sub-linear scaling. The population dynamics of poor people in U.S. counties fit into settlement scaling theory (SST) and into the demographic-socioeconomic-entrepreneurial nexus of the counties. In general, poverty tends to be systematically higher in smaller than in larger counties. This fact is ignored in most poverty research. Earlier work indicates that the new urban research that led to settlement scaling theory (SST) could form the basis of a new way to estimate community prosperity/poverty.
The method is not derived from the numbers or fractions of poor people in populations but is based on the ratio between population numbers and enterprise numbers. It determines an enterprise dependency index (EDI) in the form of how many people are associated with the average enterprise in a community. This contribution examines this method in greater detail. Linear relationships between population and enterprise numbers in U.S. counties extend over many orders of magnitude. The number of county enterprises is closely and positively associated with the size of county populations. Community poverty, measured as EDI, is a confounder. It moderates that relationship. County population numbers and enterprise numbers are closely correlated with total county incomes. Higher personal incomes in U.S. counties are generally associated with more community prosperity and vice versa. Income appears to be an important driver in county socioeconomic dynamics. The size of a county's population and its population's ability to pay enterprises for services and goods apparently determine the number of county enterprises. The prosperity/poverty states of counties and their measurement (as EDIs) are important socioeconomic derivatives that should be used in socioeconomic analyses and considerations of sustainability. EDIs and poverty rates of U.S. counties are weakly, but statistically significantly, positively related. Higher EDIs are generally associated with higher poverty rates and vice versa, but there is much variation. These measures are not identical. EDI appears to be a better measure of community prosperity/poverty than poverty rates based on the number of poor people. The hypothesis that EDI is a useful measure of community prosperity/poverty is accepted. There is much scope for further research on the utility of EDI. Funding: This research received no external funding. Institutional Review Board Statement: Ethical review and approval were waived for this study because it raised no ethical issues. Informed Consent Statement: Not applicable. Data Availability Statement: Publicly available data were used in the study; the author is prepared to respond to requests.
7,753.2
2022-02-28T00:00:00.000
[ "Economics", "Sociology" ]
Analysis of myocardial enzyme spectrum in 230 COVID-19 patients of Chongqing, China. Background The outbreak of the novel coronavirus disease COVID-19 caused a pandemic in China and worldwide. In addition to pneumonia, cardiac failure is also a clinical outcome of coronavirus (COVID-19) patients and one of the leading causes of death in COVID-19 patients. This study focused on a spectrum of cardiac enzymes to provide biomarkers for the severity of cardiomyopathy and to provide guidance for clinical treatment. Methods 230 coronavirus patients (182 mild and 48 severe cases) enrolled in the Three Gorges Hospital of Chongqing University from January to March 2020 were analyzed for a spectrum of cardiac injury enzymes including α-hydroxybutyric dehydrogenase (αHBDH), lactic acid dehydrogenase (LDH), creatine kinase (CK), and creatine kinase isoenzyme (CK-MB). Results The severe cases had significantly higher myocardial enzyme levels than mild cases, in both males and females. Males appeared to be more susceptible than females to COVID-19-induced heart injury, having higher CK and CK-MB in mild cases, and higher αHBDH and LDH levels in severe cases. Age is also a susceptibility factor for COVID-19, but affected males were younger than females. Conclusions This study reveals that the heart is also a major target of COVID-19 infection, and myocardial enzyme spectrum assays could help diagnosis and prognosis and guide treatments to prevent heart failure in COVID-19 patients. Background The novel coronavirus (COVID-19) pneumonia has caused a pandemic in China and the rest of the world since its discovery in Wuhan in December 2019. It is a respiratory infectious disease caused by a beta coronavirus subtype. The pneumonia diagnosis and treatment plan (Trial Version 7) [1] explicitly proposes, based on autopsy and puncture tissue pathological observations, that the virus may accumulate in the heart and cause degeneration and necrosis of myocardial cells. However, no changes in myocardial enzymes have been reported after the onset of the disease.
There are currently over 2,200,000 infected patients worldwide, so a summary of myocardial enzymes in these patients is of significance for understanding the disease and for subsequent guidance of clinical treatment. Study Subjects The study subjects were patients with COVID-19 pneumonia clinically diagnosed at the Three Gorges Hospital of Chongqing University from January 2020 to March 2020. All cases met the clinical diagnostic criteria for the novel coronavirus pneumonia [1] promulgated by the National Health Commission at different times; severe patients included severe and critically ill patients according to the above diagnostic standard, and mild patients included light and general patients according to the above diagnostic criteria. Methods This retrospective analysis includes 230 COVID-19 patients with 943 assays (each patient was subjected to 3-5 assays). The myocardial enzyme spectrum assays include α-hydroxybutyrate dehydrogenase (α-HBDH), lactate dehydrogenase (LDH), creatine kinase (CK), and creatine kinase isoenzyme (CK-MB), routinely performed in the Three Gorges Hospital. The data analysis was designed by the Cardiac Vascular Surgery Department and performed by a statistician. Statistics Data were expressed as mean ± SD (standard deviation) and analyzed with SPSS 22.0 software. The data were subjected to ANOVA, followed by Chi-square tests, multivariable tests, and Student's t tests. The criterion of significance was set at p < 0.05. Results Myocardial enzyme spectrum in 230 COVID-19 patients In 230 patients (182 of whom were mild), a two-sample independent t-test was performed on changes in myocardial enzymes in the mild and severe groups. The results are as follows: all indicators were statistically different, with each indicator relatively higher in the severe patients; there was also a statistical difference between the average ages of the mild and severe groups, with the severe group being older. Table 2. Myocardial enzyme spectrum in mild and severe COVID-19 female patients In 103 female patients (81 of whom were mild), two independent-sample t tests were performed on the different myocardial enzymes according to the severity of the disease (mild and severe), with the following results: the differences in the four detection indicators were statistically significant, and the various indicators of the severe patients were relatively high. Table 3. Myocardial enzyme spectrum in mild and severe COVID-19 male patients In 127 male patients (101 of whom were mild), two independent-sample t tests were performed on the different myocardial enzymes according to the severity of the disease, with the following results: the differences in the four detection indicators were statistically significant, and each indicator of the severe patients was relatively high. Table 4. Sex difference in enzyme spectrum in mild COVID-19 patients In 182 mild patients (81 females), myocardial enzyme changes between genders were compared with two independent-sample t tests. The difference was statistically significant for CK (81.07 ± 125.28) U/L and CK-MB (11.91 ± 5.49) U/L. Table 5. Sex difference in myocardial enzyme spectrum in severe COVID-19 patients In the forty-eight patients with severe illness (including 22 females), two independent-sample t tests were performed on changes in the myocardial enzyme spectrum between genders. The differences were statistically significant for α-HBDH, LDH, and age. Among them, male patients had the higher detection values:
α-HBDH (285.26 ± 119.57) U/L and LDH (380.24 ± 177.83) U/L. Discussion COVID-19 is a new, deadly, and highly contagious infectious disease. This new coronavirus resembles the SARS coronavirus and enters cells via the ACE2 receptor, which is highly expressed in the lung and heart; the lung is the major target of COVID-19, causing COVID-19 pneumonia leading to death [2]. The heart is also a major target of this new coronavirus, and clinical observation has revealed cardiomyocyte injury [3,4]; however, the focus has been on troponin (cTnI/cTnT), B-type natriuretic peptide (BNP) and N-terminal pro-brain natriuretic peptide (NT-proBNP) [4,5]. α-HBDH, LDH, CK, and CK-MB are routinely examined myocardial enzymes in clinical practice [6]. They exist in normal cardiomyocytes, and under normal conditions, serum levels of these enzymes are low. If coronary blood flow is suddenly reduced, the permeability of cardiomyocytes increases and these enzymes can be released into the blood; elevated serum levels are therefore regarded as indicators of acute cardiomyocyte injury and useful biomarkers to evaluate the severity of heart injury [7]. It is reported that the enzymes α-HBDH and CK increase quickly, within 3-8 h of cardiac infarction, and reach their peak at 10-36 h, and they therefore serve as sensitive biomarkers of acute heart injury. CK-MB reaches its peak in 12-36 h; the more severe the heart injury, the higher the enzyme levels. The myocardial enzyme spectrum is widely used as a diagnostic and prognostic index. In addition, LDH increases are also seen in liver injury and pulmonary embolism, and increases in CK-MB are also seen in angina pectoris, pericarditis, and other kinds of myocardial injury [8]. In this study, we examined the myocardial enzyme spectrum in 230 COVID-19 patients with 943 repeated measures. Severe cases showed significantly higher myocardial enzymes than mild cases, and males were more susceptible to COVID-19-induced myocardial injury, with higher enzyme activities than females; in severe cases, males were younger than females, suggesting that males are more vulnerable to heart injury than females and that young males are also vulnerable to the disease. This gender difference in COVID-19-induced heart injury is probably due to X chromosomes and female sex hormones, which could render females more resistant to virus infection. In further analysis of mild and severe cases, the statistically significant changes are dramatic between mild and severe cases and somewhat correlated with gender differences [9]. Compared with mild female patients, mild male patients also had higher CK and CK-MB, suggesting that male patients could be more susceptible to heart infarction or viral myocarditis and prone to accompanying liver and other major organ failures. This needs to be further investigated. In regard to the severity of disease, severe patients had higher α-HBDH, LDH, CK and CK-MB than mild patients, and the severe cases were older than the mild cases. Thus, we conclude that severe patients are more susceptible to coronavirus-induced heart injury, consistent with the observations of Li et al. [10][11][12]. Thus, it is important to monitor these parameters in severe cases and give early drug intervention to prevent heart failure. Whether suddenly increased α-HBDH, LDH, CK, and CK-MB could predict the prognosis needs further investigation to guide clinical treatment and management of severe COVID-19 patients.
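The group comparisons reported above rest on two-sample independent t-tests of enzyme levels between mild and severe cases. A minimal sketch of such a comparison is given below; the study used SPSS, and the LDH values generated here are illustrative stand-ins, not patient data.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
ldh_mild = rng.normal(loc=230, scale=60, size=182)    # hypothetical LDH values, mild group (U/L)
ldh_severe = rng.normal(loc=380, scale=170, size=48)  # hypothetical LDH values, severe group (U/L)

# Welch's two-sample t-test (does not assume equal variances between the groups).
t_stat, p_value = stats.ttest_ind(ldh_mild, ldh_severe, equal_var=False)

print(f"mild:   {ldh_mild.mean():.1f} ± {ldh_mild.std(ddof=1):.1f} U/L")
print(f"severe: {ldh_severe.mean():.1f} ± {ldh_severe.std(ddof=1):.1f} U/L")
print(f"t = {t_stat:.2f}, p = {p_value:.3g}  (difference significant if p < 0.05)")

The same procedure, applied enzyme by enzyme and within the sex-stratified subgroups, underlies the comparisons summarized in Tables 2-5.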
Summary This study clearly demonstrates that the heart is a target of COVID-19 infection, and the mechanism needs to be further elucidated. For severe and critical patients, especially the elderly, in addition to lung injury, cardiomyocyte injury should also be monitored with the myocardial enzyme spectrum to guide drug interventions to prevent heart failure. Myocardial enzyme spectrum in 230 COVID-19 patients. The myocardial enzyme spectrum assays, including α-hydroxybutyrate dehydrogenase (α-HBDH), lactate dehydrogenase (LDH), creatine kinase (CK), and creatine kinase isoenzyme (CK-MB), in 230 COVID-19 patients were determined. Values for CK-MB* were multiplied 10-fold to make the changes more visible. Data are mean ± SD. *Significantly different from mild cases, p < 0.05.
2,363.2
2020-05-18T00:00:00.000
[ "Medicine", "Biology" ]
Beyond lockdown? The ethics of global movement in a new era ABSTRACT A collection of recent works offer a route into rethinking the ethics of borders at a time when the rules and practices of global mobility have been called into question by the coronavirus pandemic. What counts as a legitimate justification for the closure of borders and who gets to decide? Who has responsibility for the protection of refugees? Just how practical is the ideal of ‘open borders’ and is there a trade-off between justice in immigration and the stability of a liberal political order? While some commentators have claimed that the coronavirus pandemic sounded the death knell for the ideal of open borders, its true import is to highlight our mutual vulnerability and the need for effective global co-ordination of migration and asylum. The four contributions I discuss provide vital moral arguments and conceptual distinctions relevant to thinking about the contours of a post-pandemic regime of global mobility. While they differ on the question of who the liberal state may justifiably exclude, and on the desirability and practicality of cosmopolitan reform, they converge in assigning states a far greater role in protecting the human rights of vulnerable non-citizens and in their condemnation of a cruel and repressive status quo. Introduction The international state system affords freedom of movement and residency rights to individuals in inverse proportion to the degree that they need them. Those with power and resources can -in 'normal' times -easily circulate between the world's countries, while the impoverished and oppressed -including those who count as refugees under international law -face severe, life-threatening obstacles to finding liveable conditions. This situation arises from a system that gives states discretionary power over who to admit onto their territory, subject to important (though limited) legal constraints on refugees and human rights that they can flout without serious repercussions. The mobility-curtailing lockdown measures deployed against COVID-19 forced many people otherwise unaffected by border controls to confront the fraught question of who gets to move and why. The pandemic has underlined the many reasons, from care to subsistence, that human beings need to travel, the pain of being separated from loved ones and the intrinsic value of feeling unrestricted. At the same time, the public health crisis dramatically worsened pre-existing trends towards more restrictive and repressive modes of enforcement, prompting a mass shutdown of borders (with many states not recognizing exemptions for asylum) 1 ; a dramatic expansion in state surveillance and a spike in xenophobia and racism; all at a time when the economic and political fallout is pushing millions across the globe into destitution and conflict. For some commentators, the pandemic sounds the death knell for unchecked movement as a political ideal and underlines the primacy of the nation state (Glasman 2020). But the triumph of exclusionary nationalism is by no means inevitable. The virus has provided undeniable evidence of humanity's shared vulnerability and interdependence; the fact -as the director of the WHO put it -that 'no one is safe until everyone is safe' (Tidley and Bauomy 2020, August 18). 
The example of countries, such as Portugal, which at the start of the crisis automatically accorded residency rights to irregular migrants and asylum-seekers, shows that greater repression and precarity are not inevitable (Drury 2020, March 28). At this juncture, it pays to revisit some fundamental principles regarding border controls. The basic moral situation entails that, when a person presents themselves at the borders of a state, they find themselves confronted by an authority which demands their compliance as a matter of right and which can deploy legal threats and violence against them without any of the mechanisms of accountability we expect in other contexts. Four recent works of political philosophy address themselves to this complex and morally fraught reality, adding to a burgeoning and increasingly sophisticated literature on migration and refugees. The four books -by Chris Bertram (2018), David Owen (2020), Alex Sager (2020) and Michael Blake (2020) -cover the institution of refugeehood and what states owe to those in extreme need; the moral grounds and limits of a state's purported 'right to exclude;' the case for open borders, and the ethics of resisting an unjust status quo. Written before the global health crisis began in 2020, their arguments are even more salient in its aftermath as the rules and practices governing international mobility are redefined. All four authors share the view that the international status quo is deeply unjust, but disagree on the reasons why and just how radical the response should be, ranging from Sager's full-blooded defence of open borders as a practical goal we should move towards, to Blake's 'institutional conservatism' (Blake 2020, 10), involving a defence of the right to exclude tempered by human rights and the political virtue of 'mercy'. These authors are also grappling with vexed questions about the role of philosophical thinking on migration faced with the simplistic dogmas and undisguised cruelty of populist nationalism. Like the rest of political philosophy, the sub-field of immigration ethics has been confronted by recent methodological calls for a suitably 'realistic' approach that can meaningfully guide change in our 'non-ideal' -that is, deeply unjust -world (Little and MacDonald 2015). As these works show, however, appeals to realism are rarely conclusive. On the one hand, we are urged to confront the uncomfortable electoral reality of widespread public hostility towards greater immigration and, on the other hand, the immense suffering caused by real-world border enforcement practices and their pervasive racism disturbs neat, idealized defences of the state's 'right to exclude'. Just how ambitious reform should be, and which facts about contemporary immigration practices philosophy should take into account, is a matter of ongoing dispute at a time when fundamental assumptions about movement and membership are in question. Refugees and humanitarian assistance The 1951 Refugee Convention defines a refugee as someone with a well-founded fear of persecution on account of their 'race, religion, nationality, membership of a particular social group or political opinion'. This 'political' category of refugees focuses on the distinct wrong which comes from the denial of membership by a persecuting state. It is commonly distinguished from 'humanitarian' refugees, who are the unfortunate victims of violence, war, crop failure, climate change and 'natural' disasters.
From the perspective of the moral interests of the refugee, the exclusion of this latter group from the Convention appears arbitrary. As Bertram puts it, the Convention ignores many people who 'common sense would suggest have a valid claim to protection' (2018, p. 42). To some extent, international practice recognizes the claims of those not covered by the Convention. As the refugee system developed, the norm of 'non-refoulement' was extended to those whose human rights are under threat but who do not meet the Convention's definition, while the UN High Commissioner for Refugees also works with humanitarian refugees. Some scholars have argued that the Convention itself should be extended to those who are not threatened with persecution (Shacknove 1985). Others argue that the focus on persecution is justified, since the provision of asylum has a distinctive expressive role to play in condemning oppressive states internationally (Price 2009). Owen's insightful book aims to reconcile these concerns, arguing for an expansion in protection to all those whose rights are under threat, but one which takes account of the distinct political responses that different categories of refugee call for. Both Owen's and Bertram's books are published as part of Polity's handy 'Political Theory Today' series, with both offering short, accessible introductions to their subjects, which are also original contributions in their own right. Owen's approach is to analyse the institution of refugeehood in light of how it has historically developed, taking seriously the state interests that underpin the system along with its guiding normative ideals. The world today is parcelled up between sovereign states whose legitimacy derives from their role in protecting the human rights of those individuals within their territory. For Owen, it follows that the existence of refugees is always evidence that some states are failing in their role, requiring international society to step in 'in loco civitatis' and offer substitute protection (2020, p. 12). The institution of refugeehood thus acts as a 'legitimacy repair mechanism' that reaffirms 'the minimal conditions of the imagined reconciliation of an international order of sovereign states and a cosmopolitan order of human rights' (2020, p. 47). Understood in this way, the protection of refugees not only answers to the morally urgent need of the world's most vulnerable groups; it functions as a global 'public good' from which all states benefit in terms of their legitimacy (2020, p. 100). An important contribution of Owen's work is the three categories of refugees he proposes, each with their specific vulnerabilities and calling for different institutional responses. Refugees needing 'asylum' are threatened politically by their state and stand in need of substitute membership, the provision of which expresses condemnation of the state that persecutes them. Those who require 'sanctuary' face a situation of political breakdown and/or generalized violence and need a society they can integrate into, which requires admission to a country with a similar language and culture. Refugees needing 'refuge', meanwhile, are fleeing disasters and ought to be accorded safe temporary shelter in a nearby country (Owen 2020, Chapter 3).
Owen offers a nuanced set of considerations for determining 'fair shares' in the allocation of refugee protection among states, which takes into account a state's capacity and resources and the type of protection required, along with the family ties and preferences of refugees themselves. Importantly, it also takes into account special obligations derived from a state's contribution to background global injustices that generate refugees (Owen 2020, pp. 86-94). The full practical implications are not set out, though we might assume that participation in recent unjust wars and out-sized contributions to climate emissions would figure. In the current context, where destination states are making aggressive efforts to violate the existing legal rights of refugees, there seems to be little prospect of achieving such a dramatic expansion in protection, even while the pandemic presents a humanitarian crisis comparable in its global reach to that which precipitated the formation of the current refugee system after world war two. Still, Owen's approach strikes a useful balance in offering moral critique that is attuned to the role human rights play within the existing international order of sovereign states concerned to safeguard their own legitimacy. The current under-supply of protection to refugees is the product of a motivational deficit among citizens and political leaders rather than states' lack of capacity. In theorizing about refugees, of course, we are already adopting a 'non-ideal' perspective (in the Rawlsian sense) in supposing that at least some states are not protecting human rights. Yet it would be too concessive to assume that the present unwillingness of receiving states to act justly is a consideration we should give significant ethical weight to in the name of realism since this risks holding up something as a model that is 'unpalatable, unbearable and unjust' (Fine 2020, 9-10). In 'Justice, Migration and Mercy', Michael Blake starts from a cosmopolitan commitment to human rights shared with Owen, Bertram and Sagar, and likewise laments the callous treatment of refugees by self-interested states. Of the four authors, however, Blake is the most sceptical when it comes to institutional schemes that will require significant amounts of collective co-ordination and reform at the global level, focusing instead on state policy-making with particular reference to the US. In the literature on migration, Blake is well-known for offering a 'jurisdictional' argument for a state's right to exclude that focuses on what is distinctively owed by states to those under their coercive power (Blake 2013). In this thoughtful and intricately argued book, he provides a more fully worked out vision of what the jurisdictional approach implies, including some notable reflections on refugee protection. The starting point for Blake is that all human beings have a right to membership in a state in which their human rights are protected, but not to a state of their own choosing. This is because when a state takes on a new member its citizens assume a set of obligations for the protection and fulfilment of that person's human rights. Even where there are no direct financial costs, these new obligations impinge on citizens' freedom and morally speaking citizens are only obliged to assume them when the human rights of the person in question are not protected elsewhere (Blake 2020, Chapter 4). 
This is an argument for the right to exclude, then, but one which entails greater rights of entry than the status quo arising from the basic claim each individual has to secure protection of their human rights. In focusing on the co-operative character of states as coercive entities, Blake's argument at least aims in the right direction, singling out the most pertinent facts about statehood relevant to the justification of a public system of rights and obligations. By contrast, rival discussions of a state's ownership of its territory (Risse 2012) or political institutions (Pevnick 2011) and national culture (Miller 2005) each understand exclusion with reference to something 'accidental' about the state as a political community, as Blake puts it (2020, p. 51). Blake is less convincing, however, when it comes to defending the principle that individuals have a right to be free from unwanted obligations which is necessary for his defence of unilateral rights of exclusion to succeed. We regularly incur new obligations towards individuals without giving our consent, as with the obligations we incur to a new neighbour or someone else's new-born child, and Blake's response to these counterexamples is not wholly persuasive. His standout proposal on refugees is that states should offer the persecuted not merely admission, but active protection, including through coercive intervention in the affairs of persecuting states (Blake 2020, Ch., 5). States, then, would be under not merely a negative duty to not prevent Syrian refugees from entering their territory, but a positive duty to provide them with carriers and to coerce the Syrian state if necessary to stop them interfering. In this way, for Blake, the Refugee Convention can be interpreted in line with the doctrine of R2P, as one of a 'set of legal and moral guarantees offered to those facing persecution, which collectively provide the authorization for coercive force to be directed against their persecutors' (2020, p. 111). While the proposal of a duty to provide evacuation to refugees is valuable, the wider argument would seem to involve a significant expansion in the justifications states have for the use of armed force and Blake says little about how such a system would be managed to prevent self-interested intervention and dangerous escalation. Blake hopes to show that the argument for coercive intervention is a natural extension of the positive duties of assistance to refugees that we already recognize. However, this leads him to make the unconvincing claim that carrier sanctions (which states levy on airlines who allow refugees to board without the correct documents) are wrong because they violate a positive duty of assistance, rather than refugees' negative rights to be free from coercive interference (Blake 2020, 101). Yet, as with David Miller's attempt to deny that certain forms of external border controls involve coercion, the view rests upon an implausibly narrow conception of coercion (Miller 2010) . It would seem to entail, for instance, that a state that levies fines against a publisher who publishes my political tract would not be coercively interfering with my freedom of speech, but merely withdrawing a positive duty of assistance. While generating these unpalatable conclusions elsewhere, this view has the unfortunate effect of obscuring the power exercised when states seek to legally stop people from moving. How open should borders be? 
Questions of what counts as coercion and what is owed morally speaking to those who are coerced, are of course central to thinking not merely about the claims of refugees, but the whole permissibility of immigration enforcement. Typically, those most sceptical about claims to unilateral state discretion over immigration draw moral comparisons between a state's use of force at the border and its use of force on its own territory, while those who defend a right to exclude point to purported disanalogies. In 'Do states have a right to exclude?', Bertram stakes out a position firmly in the former camp. He begins from the Kantian idea that the use of force against others must be justified on terms that they are capable of accepting and then considers which global norms for the governance of immigration could justify a state's use of force at the border (Bertram 2018, pp. 51-55). While Blake and Owen take an immanent approach that departs from existing institutions and practices, Bertram's approach is more philosophically systematic. He adapts the Rawlsian device of a 'veil of ignorance' to model an impartial choice situation for arriving at a just immigration regime. In this procedure, the representative choosers who must live under such a regime are ignorant of their citizenship, race, religion, and social class, though they are aware of certain general facts about human beings and the functioning of societies, which -importantly -includes an awareness that human beings have certain generic human interests that mobility can secure (Bertram 2018, pp. 56-60). A Rawlsian method of this kind was famously used by Joseph Carens to argue for open borders in his path-breaking article that helped establish the sub-field of immigration ethics (Carens 1987), though Bertram's approach differs in according greater significance to state interests in the regulation of competing claims, resulting in a set of recommendations that are procedural in character. Four possible models are considered from behind the veil of ignorance. The first entails total state discretion over borders. This would be rejected, Bertram suggests, because it would effectively consign those suffering extreme disadvantage to their home states. A second model is an idealized version of the status quo affording states discretion over borders, constrained by human rights and refugee law, but under conditions of full compliance (rather than the current widespread flouting of international law). This, too, Bertram argues, would be rejected given the risks of assuming a form of citizenship which leaves one's basic interests unprotected if one cannot meet the visa requirements of a state with robust rights protection and falls outside the protection of the Convention (Bertram 2018, pp. 62-65). The third possible model is open borders. Bertram thinks this would be more appealing than the first two to impartial choosers, given the more extensive opportunities it offers individuals to secure their life chances. Bertram believes, however, that there are sometimes good reasons why states can control borders, highlighting the limits on the numbers who can sustainably live in 'ecologically fragile areas of the planet' (Bertram 2018, 67). We might likewise note justifications based on public health brought into view by COVID-19, though here it is important to be cautious. 
The appropriate rationale for these acts of territorial quarantining is that they are exceptional and time-limited acts employed by states to fulfil their primary obligation to protect the lives of their citizens, rather than a more permanent abridgement of mobility rights (Lanard 2020). Despite nationalist claims to the contrary, the closure of borders to migration would not provide much protection from infectious diseases short of an extreme move towards national autarky. It was, after all, international trade and tourism that propelled the spread of the pandemic, rather than the settlement of refugees and immigrants (Caplan 2020). The challenge here, of course, is to prevent the identification of a legitimate state interest from being weaponized by rich states as a rationale for socially distancing from the citizens of poor states now labelled as 'contagious'. Crucially, Bertram's fourth procedural model -the one he believes impartial decision-makers would choose -introduces a test of global public justification designed to filter out sectional defences of privilege of this kind. A global constitutional convention would flesh out global norms to govern immigration which would then be enforced by an international adjudicatory body. The details of this are somewhat sketchy, but we are told that it 'would involve a range of different actors, including states, NGOs and a representative selection of affected persons, including, most important, migrants themselves' (Bertram 2018, 70). A convention process would, Bertram predicts, establish a presumption in favour of freedom of movement, but it would also consider under what circumstances it can legitimately be curtailed by state interests. Those inclined to a realist perspective might regard Bertram's approach as an exercise in top-down cosmopolitan 'moralism', which is insufficiently attentive to the role of power politics in the international arena. Yet it is notable that the recommendations he arrives at are distinctly political ones aimed at bolstering political processes and empowering groups who are routinely ignored in shaping the laws and institutions that determine global mobility. His recommendations centre respect for the autonomy of migrants and refugees as political agents, rather than seeing them as mere objects of coercive regulation or humanitarian pity. It is an ideal worth heeding even if we are some way off the much more utopian prospect of a global constitutional convention. For his part, Blake's identification of states as the appropriate arena for political claim-making among equals does not require the empowerment of all those whose interests are affected by border controls. It makes sense therefore that he should think that citizens' decision-making power over admissions is most appropriately disciplined by internal constraints -in the form of the political virtues -rather than the external constraint of democracy. The most novel part of Blake's book is an intriguing defence of the relevance of the virtue of 'mercy' to discourse on migration policy where the language of justice has limits. Mercy, for Blake, is exercised when an agent refuses to enforce their rights against a person out of concern for the success of that person's life. Someone without mercy does no wrong, but is a 'bad example of what a person can be' (Blake 2020, 8).
According to Blake, when a state fails to grant entry rights for those seeking family unification or valuable career options, and even when it fails to grant residency to undocumented migrants who have been settled for many years, they are not committing an injustice. Rather, they are failing to be sufficiently merciful (Blake 2020, Chapter 9). I seriously doubt that many of these categories of migrants do not also have claims grounded in justice -and the case of long-term undocumented residents facing deportation stands out as a particularly glaring case -but Blake's efforts to broaden the moral vocabulary with which philosophers discuss migration is welcome. Perhaps with an eye to US evangelicals, he notes that mercy has strong resonance in the Christian tradition, and that a number of Christian groups have a proud history of providing support framed in this way (Blake 2020, pp. 197-198). There remains the concern however that mercy leaves power-divides unchallenged in taking the perspective of powerful rich Westerners, affording mercy to the weak. A contrasting approach is taken by Luis Cabrera, who likewise offers an analysis of migration policy through the lens of the political virtues, but talks instead of the need for 'political humility' among states so as to reorient global institutions towards the moral claims and perspectives of those they exclude (Cabrera 2019). A reliance on mercy sits uneasily with the equal respect owed to migrants and their sense of self-respect, positioning them in the submissive position of requesting gifts and charity, rather than demanding what is due to them as a matter of right (Feinberg 1970). It is no coincidence that, in the republican tradition, the antonym of freedom as non-domination is to be 'at the mercy' of another (Skinner 1998, 43). Indeed, for Alex Sager, whose work draws on republican political theory, the fact some groups are placed at the mercy of others by borders provides an additional reason to condemn them (Sager 2014). Like Blake, he thinks that the language of abstract justice does not capture all we want to say about immigration. Instead of reaching for the classical virtues, however, he aims to offer arguments that speak to activists engaged in life-and-death struggles against the violence of border controls. Sager is concerned that other open borders advocates, such as Carens, see their proposals as meant for some indeterminate future, rather than the here and now. His short and punchy book aims to galvanize activism by foregrounding those at the rough end of the immigration system and giving 'a central place to the violent policing of political borders and how this promotes structural injustice against (often racialized) immigrant groups' (Sager 2020, 3). For Sager, arguments for open borders are 'among the strongest in political philosophy and applied ethics' (Sager 2020, 2), and the movement for open borders represents the next step in liberatory struggles, comparable to women's suffrage and the LGBT movement. Sager mounts a multi-pronged case for open borders with a handy overview of philosophical arguments based on freedom, democracy and economic justice. The most interesting and original part of his contribution however focuses on the need to dismantle systems of violence and oppression. 
The core idea is that, even if a state's right to exclude was in principle justifiable, the way in which this right is enforced, through a brutal and unaccountable system of containment, involves unacceptable harms (Sager 2020, Chapter 4). The argument resonates with the analysis of empirical scholars, such as that of geographer Reece Jones in 'Violent Borders' and anthropologist Ruben Andersson in 'Illegality, Inc.', making a case for inclusion, based on the violence of the existing system and its historical role in upholding race and class-based inequalities (Andersson 2014; Jones 2016). Sager is sceptical of any methodological approach which assumes -as Blake does (2020, pp. 3-4) -that historic issues of racial justice can be bracketed in thinking about immigration policy. Drawing on critical migration studies, Sager demonstrates how immigration controls construct people as citizen and migrant, inferior and subordinate, so as to generate a pool of cheap exploitable labour. In our existing world -and not the idealized world of philosophical imaginings -border controls are structurally racist, he argues, bearing analogy with notorious past projects of racialized containment, including Apartheid and even slavery (Sager 2020, 36). Refreshingly, Sager seeks to demonstrate that open borders are not just desirable but practically achievable. In doing so, he mobilizes a wide range of arguments and empirical evidence, drawing from history and the social sciences. Much of this is vital and riveting stuff, debunking some of the persistent myths of pro-borders advocates around the supposed 'threat' immigrants pose to the economy, the rule of law and so on. Other arguments look a little too quick. At one stage, for instance, Sager cites the fact that more migrants currently move to adjacent countries than further afield as evidence that fears of 'mass migration' under a system of open borders are misplaced. It is indeed likely that many of these fears are exaggerated. But it is not obvious that migration trends, under the hugely coercive containment regime Sager criticizes, are useful in predicting how many people would move were there no such obstacles and were automatic membership in place, as he favours (Sager 2020, 68). More broadly, there is an unacknowledged tension between, at one moment, using empirical literature to paint a dark picture of a deeply racist and unjust status quo in Western countries in the context of comparatively limited immigration, before sketching a much more optimistic and tolerant vision of how things will play out in the event that borders are opened. As someone who leans towards the open borders side of Sager, I have no obvious answer to this. It strikes me as worth acknowledging that, in the event of fully opening borders, many people are likely to want to move and that this has the potential to place considerable strain on political institutions if left unaddressed. In future debates, advocates of open borders ought to think much more systematically about the various potential trade-offs involved and how the proposal sits within a wider project to create the conditions for background justice and resilient human rights protection at a global level. Resistance to immigration law Part of the practical turn in migration ethics has involved theorists focusing attention on the permissibility of resisting unjust border laws.
Ever since the migration and refugee 'crisis' of 2015, when record numbers of people from Syria, Afghanistan and Iran attempted to enter Europe, we have become accustomed to seeing spectacular forms of disobedience to immigration controls. Media images portray flimsy overcrowded dinghies, the slow march of refugee collectives, the breaching of fences and barbed wire, makeshift protests at border sites and violent repulsions by border guards. Over the course of the COVID-19 pandemic, there have likewise been courageous acts of resistance by migrants who have protested the closure of encampments and their exposure to the virus in cramped, unhygienic camps and detention centres (Guerrero 2020, 01 September). Key ethical questions here include what authority (if any) immigration law carries for non-citizen outsiders, whether such actions count as civil disobedience, what moral considerations ought to constrain resistance and the duties of citizens (Hidalgo 2019). Bertram endorses extensive rights to evade, deceive and resist border guards on the basis that existing immigration law lacks legitimacy as a sectarian assertion of state power. He suggests the normal reasons for complying with unjust law, which apply to citizens, are not relevant to migrants who are not co-authors of the law and derive no benefits from membership (Bertram 2018, Chapter 3). Sager in turn defends rights of resistance to any immigration laws that would 'make people worse off than they would otherwise be by interfering with their ability to access opportunities' (Sager 2020, 92). Yet, accepting that the violation of border laws can be justified, how should we categorize such acts and how (if at all) should they be constrained? Should migrants aspire to be 'civil', for instance, in the sense denoted by the traditional understanding of civil disobedience? For Owen, the efforts of refugees to thwart containment measures are indeed acts of 'transnational civil disobedience', involving 'a refusal to be governed unjustly' (Owen 2020, 110). By violating entry laws -often at serious risk to themselves -refugees highlight how the international state system falls short of its own claims to legitimacy. Owen's analysis here dovetails with recent arguments by other theorists (Benli 2018;Celikates 2019). There is an undoubted appeal to this labelling, which associates refugee's efforts with the heroic historical tradition of the Suffragettes, Martin Luther King and others and calls attention to their demands as disenfranchised agents. Yet there are also concerns. In the case of refugees, the focus on the supposed illegality of their actions may detract from the fact that seeking asylum is a right under international law and it is often states themselves who are behaving illegally in forcefully repelling refugees (Scheuerman 2018, 175). The label of civil disobedience may be a better fit for border-crossing by 'economic' migrants and others who do not fit the definition of the Convention (Cabrera 2010). Even here, however, there is a basic tension with the traditional normative core of civil disobedience as a public, political act in which agents accept the moral burdens of action (including acceptance of arrest in some accounts) in order to demonstrate their conscientiousness and convince others (Smith and Cabrera 2015). There have been some cases of unlawful border-crossing that are consciously conducted as civil disobedience (Nigg 2015). 
As a general framework for the justification of illegal immigration, however, the theory of civil disobedience risks burdening migrants with a set of expectations that, given their precarious status, heighten their risk of detention and deportation. Instead, it may be better to think of such actions as a form of principled political resistance unconstrained by the normative expectations traditionally attached to civil disobedience (Blunt 2018; Delmas 2018). Immigration ethics in an era of populist nationalism The gulf between philosophical reflection and popular political discourse is never more glaring than in the case of immigration ethics. The recent coronavirus crisis has superimposed itself on a much more prolonged crisis of the liberal order, which has seen populist nationalists make headway in a number of established democracies using xenophobic and racist language to drum up fears of immigrants taking jobs, putting pressure on public services, and bringing crime and terrorism. In a 2018 interview, Hillary Clinton posed this challenge in the following terms, in a passage quoted by both Blake and Sager: 'I think Europe needs to get a handle on migration because that is what lit the flame . . . I admire the very generous and compassionate approaches that were taken particularly by leaders like Angela Merkel, but I think it is fair to say Europe has done its part, and must send a very clear message - "we are not going to be able to continue to provide refuge and support" - because if we don't deal with the migration issue it will continue to roil the body politic' (Blake 2013, 140). The claim by Clinton that Europe 'has done its part' in the provision of asylum is not defensible. As Owen points out, 85% of refugees around the world are living in developing states near those they have left (Owen 2020, 98). The Syrian conflict has produced 5.6 million refugees living abroad, of which Turkey hosts 3.6 million, Lebanon 879,000 and Jordan 661,000 (UNHCR 2020b). The EU - with a population of 500 million - saw just over 1 million asylum applications (UNHCR 2019). According to Owen's analysis - and any minimally plausible account of 'fair shares' in refugee hosting - wealthy European states have not done their part. Indeed, through their aggressive policing of 'Fortress Europe', they are complicit in a catastrophic humanitarian situation. For Sager, Clinton's words merely give succour to racism and xenophobia, acting as 'fodder for far-right or populist parties' (Sager 2020, 86). Such claims, he suggests, need to be confronted and exposed as bigotry. On the other hand, Blake interprets Clinton's argument as identifying a genuinely tragic choice: states can keep liberal institutions or admit everyone with a just claim, but not both. He worries about a 'bigot's veto', which entails that any vote for more open borders, in line with the demands of liberalism, would increase support for authoritarian nationalism whose resistance could 'place the future of that polity at risk' (Blake 2020, 118). He thinks his own proposals for justice in immigration would increase support for nationalists, while open borders definitely would (Blake 2020, 139). In regard to the nationalistic attitudes of fellow citizens, much hinges on whether we regard antipathy to immigration as a fixed constraint, deeply rooted in human psychology, or something more malleable and contingent.
Both Bertram and Sager do a good job of showing just how recent and constructed is the notion of homogeneous nation-states maintaining their demographic balance through restrictive immigration controls. There has, nonetheless, been a fairly consistent pattern of opposition to immigration among voters in prosperous destination states. As is well known, this opposition is often accompanied by false empirical beliefs about both the scale of immigration and its negative impact across salient policy domains, such as welfare, crime and the economy (see e.g. Denvir 2020, p. 183; Goodfellow 2020, p. 152). Unfortunately, shifting these beliefs is not simply a matter of exposing voters to the appropriate evidence or countervailing arguments. This is because false beliefs on immigration are themselves frequently derived from ethnic bias rooted in identity-based loyalties, which accounts for why these beliefs uniformly cast immigration in a negative, rather than positive, light. As Peter Higgins argues, the 'pervasiveness and persistence of the belief that immigration is economically harmful in the face of compelling evidence to the contrary is a testament to the power of xenophobia and racism' (2013, p. 202). The solution is not simply a matter of pointing out the correct empirical facts or 'exposing' the underlying prejudice at work. Rather, the task is to confront the nexus of interests that reinforce nationalist framings that turn issues of distributive justice and power into identity-based arguments, which includes the echo chamber of right-wing media and social media that amplifies nationalist partisans (Müller 2019). As well as being intrinsically unjust, appeasing nationalists who hold democracy to ransom has a tendency to embolden them, advancing their goals and legitimizing their narratives (Denvir 2020; Goodfellow 2020). While there is nearly always a hard core of voters who lean towards populist nationalism, it flourishes best under conditions of fear and precarity where its scapegoating message takes root. Many of the policy solutions to address the electoral threat of populist nationalism are therefore in line with those made urgent by the pandemic, including social welfare provision, decent healthcare and housing, and putting an end to low-paid and insecure work (Solberg and Akufo-Addo 2020). Conclusion While the contours of a post-pandemic regime of global mobility are as yet unclear, the crisis presents a unique moment to re-evaluate the most basic assumptions that should govern it. One danger is that pro-borders nationalists will assert a disturbing new justification for excluding the disadvantaged, one that stigmatizes them as bearers of disease, and that 'emergency' measures to close borders will become normalized, as happened after 9/11. It is to be hoped that, in the aftermath of a crisis that impacted the freedom of many unaccustomed to restrictions on their movement, those with relative privilege in the citizenship power they possess might reflect on the immense cruelty and harm caused by borders. There is an urgent need for effective international co-ordination of migration and asylum, with the perspectives of those most directly impacted by border controls included as a matter of justice. The four contributions I have discussed offer illuminating conceptual distinctions and compelling moral arguments to feed into this debate.
While they disagree on what the contours of an ideal system would be, they converge in foregrounding the moral claim of every human being to a minimally decent existence and their condemnation of a lethal and repressive status quo. Disclosure statement No potential conflict of interest was reported by the author.
8,887.2
2021-01-02T00:00:00.000
[ "Philosophy", "Political Science", "Sociology" ]
Rhythmic movement as a tacit enactment goal mobilizes the emergence of mathematical structures This article concerns the purpose, function, and mechanisms of students ’ rhythmic behaviors as they solve embodied-interaction problems, specifically problems that require assimilating quantitative information structures embedded into the environment. Analyzing multimodal data of one student tackling a bimanual interaction design for proportion, we observed the (1) evolution of coordinated movements with stable temporal – spatial qualities; (2) breakdown of this proto-rhythmic form when it failed to generalize across the problem space; (3) utilization of available resources to obtain greater specificity by way of measuring spatial spans of movements; (4) determination of an arithmetic pattern governing the sequence of spatial spans; and (5) creation of a meta-rhythmic form that reconciles continuous movement with the arithmetic pattern. The latter reconciliation selectively retired, modified, and recombined features of her previous form. Rhythmic enactment, even where it is not functionally imper-ative, appears to constitute a tacit adaptation goal. Its breakdown reveals latent phenomenal properties of the environment, creating opportunities for quantitative reasoning, ultimately supporting the learning of curricular content. 1 Attending to physical movement as a characteristic of an embodiment approach to research on mathematics education The objective of this paper is to contribute to a growing body of educational research scholarship that has been promoting the theorization of mathematics learning as a process of guided reflection on situated physical enactment (Bamberger & diSessa, 2003;Kelton & Ma, 2018;Nemirovsky & Ferrara, 2009;Radford, Arzarello, Edwards, & Sabena, 2017;Roth & Thom, 2009;Simmt & Kieren, 2015;Sinclair, Chorney, & Rodney, 2016). Specifically, some scholars informed by the embodiment turn in the cognitive sciences have been evaluating the thesis that individual comprehension of mathematical concepts emerges through discursive objectification of tacit sensorimotor adaptations to the social enactment of cultural practice (Abrahamson, 2009(Abrahamson, , 2014Abrahamson & Trninic, 2015;Nemirovsky, Kelton, & Rhodehamel, 2013;Radford, 2009). Often operating in the design-based approach to educational research, these scholars of enactive mathematics have created task-based activities that offer students opportunities to (a) develop new goal-oriented sensorimotor schemes for moving effectively within the constraints of a learning environment; (b) reflect on their solutions in qualitative, pre-symbolic register; and ultimately (c) refine and consolidate the solutions via appropriating normative frames of reference from the target discipline (Abrahamson & Lindgren, 2014). One consequence of a research focus on students' physical movement is enhancing our capacity to appreciate and investigate any rhythmic qualities these may bear. Radford (2015), who approaches mathematical thinking as Bfully material, corporeal, embodied, and sensuous phenomenon^(p. 82), implicates rhythm as a central organizing principle of thinking. Radford calls for further research on the evolution of rhythm components in mathematical thinking. Here, we are interested in particular in the emergence of rhythmic qualities and their iterative adaptation to emergent environmental constraints on effective movement. 
Drawing on constructivist, enactivist, and coordination-dynamics literature, we believe that this process, in which physical actions fall into regulated spatial-temporal forms, is largely tacit. We maintain that this tacit process is important for educational researchers to understand, because the process is pivotal in coordinating effective enactment in new interaction environments, such as those designed to foster conceptual change. In order to demonstrate what we mean by emergent rhythmic qualities of students' enacted solutions to physical interaction problems as well as the pedagogical potential of these rhythmic movements and their research appeal more generally, the paper will consider empirical results from implementing a design for proportion that used the Mathematics Imagery Trainer (Abrahamson & Trninic, 2015). Discussing rhythmic qualities inherent in students' physical movements within this environment, we will theorize the micro-process by which tacit phenomenal features of sensorimotor interaction emerge for conscious reflection and elaboration that in turn lead to insight and codification relevant to mathematics learning. Our interest in the rise of latent features of an environment into a child's consciousness, as she engages in solving a situated problem, suggests the seminal work of John Mason (1989Mason ( , 2010 on the role of attentional shifts during mathematics learning. Indeed, in a sense, we are hoping to extend that general research orientation, which by and large has considered patternfinding sensory perception of static visual displays, such as geometrical inscriptions, so as here to foreground pro-action sensorimotor aspects of developing competence in handling interactive dynamical displays. Drawing also on the work of Roth and collaborators (e.g., Bautista & Roth, 2012), we thus examine movement forms students develop, perform, refine, and articulate in the course of participating in educational activities designed for learning mathematical content. We argue for the formative role of these emergent movement forms as creating opportunities for mobilizing students' proto-mathematical reasoning and learning. Thus, whereas other researchers have argued for students' generalization processes in interactive learning environments (e.g., Leung, Baccaglini-Frank, & Mariotti, 2013), our enactivist approach seeks to characterize these processes by revealing and foregrounding tacit coordinative aspects of these processes. In particular, we demonstrate the spontaneous incorporation of regulating temporal elements in students' explorative manipulation of interactive display elements. That is, we are interested in evaluating for any pedagogical affordances inherent in the emergence and adaptation of rhythmical qualities of physical performance that we observe in students' attempts to develop goal-oriented situated competence. More broadly, our research program looks to orchestrate the acculturation of enactive artifacts (Abrahamson & Trninic, 2015), that is, domain-general movement forms of cultural-historical significance. For example, our research agenda includes an interest in fostering public adoption of an enactive artifact for proportional equivalence in the form of a bimanual conceptual gesture, that is, a conventional multimodal sign bearing and evoking generic dynamical embodied meanings of mathematical nomenclature. 
The paper will now continue with a literature review of research on the tacit emergence of rhythmic structure in enactive mathematics learning (Section 2), followed by the case study (methods in Section 3, results in Section 4), and ending with implications for the research and design of mathematics learning (Section 5). 2 Rhythmic orientation to movement enactment as epigenetic inclination of the human neural architecture: implications for mathematics education The phenomenon of rhythmic qualities in children's physical enactment of repetitive movement has recently been considered by a range of scholars with interest in both typical and atypical cognitive development and learning (Trninic & Saxe, 2017). The phylogenetically evolved tendency of the human cognitive architecture to enact physical movement in coordinated rhythmic structure bears developmental advantages (Kelso & Engstrom, 2006;Richmond & Zacks, 2017;Vandervert, 2016). For example, motor-action researchers Spencer, Semjen, Yang, and Ivry (2006) demonstrated the utility of rhythm in constructing and enacting a temporal event structure consisting of bimanual actions. Presumably, constructing and routinizing tightly encapsulated event structures bears pragmatic advantages by way of freeing cognitive resources during motor enactment. Humans' epigenetic inclination to engage in regular spatial-temporal micro-routines in enacting cultural practice has drawn the attention of educational researchers with an interest in envisioning new pedagogical horizons. For example, rhythmic features of social activity in generating musical performance have been found to facilitate the learning of ratio, fractions, and proportion (Bamberger & diSessa, 2003). Abrahamson (2004) has implicated rhythmic discursive gesture as marking students' negotiation between situated enactment and normative mathematical forms. Bautista and Roth (2012), who studied grade 3 students classifying threedimensional objects, documented the appearance of rhythmical hand movements apparently emerging from the students' dynamical haptic interaction with structural regularities in the material resources they were manipulating. They suggest that rhythm is both a resource and an outcome of engaging in geometry activities. In like vein, Sinclair et al. (2016) used rhythm as their focal analytic construct in investigating the mathematical activity of young children working with a tablet application designed for learning number. They implicate rhythmic actions as the embodied origin of cognitive structure, prior to planning and reflection. Radford (2015) conceptualizes situated rhythmic dynamics as a constituting quality of mathematical thinking. Radford lists four aspects of rhythmic behaviors he observed in analyzing the implementation of an algebraic pattern-generating activity: meter, rhythmic grouping, prolongation, and theme. These spatial-temporal qualities of students' multimodal enactment emerged as the students considered and eventually constructed one inscribed shape, then another, then another. The abovementioned spontaneous rhythmic qualities observed in empirical investigation of children's goal-oriented situated movement, we propose, can be seen as related to the cultural practices of measurement as defined by cognitive developmental psychology. 
Piaget, Inhelder, and Szeminska (1960) offer that "To measure is to take out of a whole one element, taken as a unit, and to transpose this unit on the remainder of a whole: measurement is therefore a synthesis of sub-division and change of position" (p. 3). When we imbue this logico-mathematical definition of measuring from Piaget et al. (1960) with the phenomenal parametrization of enacted rhythm from Radford (2015), we may speculate on the epigenesis of measuring operations as a cultural enhancement of rhythmic adaptation to contingencies of a learning environment. By this view, rhythmic enactment: (1) conserves the nature and size of the unit as it is extracted or ported (theme and prolongation) and (2) iterates the unit (rhythmic grouping and meter). This speculative epigenetic trajectory of rhythmic enactment bears implications for educational design. As educational design researchers, we seek to both foster and understand student mathematization of movement while accomplishing situated tasks. As such, our data analysis was initially geared to characterize which temporal-spatial attributes of manual movement precipitate the emergence of mathematical structures. In the course of our analysis, a new research interest arose concerning learners' adaptive responses when encountering contexts where their rhythmic movement falls short of achieving the task objective, leading to pivotal insights bearing conceptual potential. In our explorative study, we sought to contribute to the literature on the role of rhythmic movement in mathematics learning by investigating: (1) the emergence and adaptation of rhythmic movement through task-based interaction with instructional artifacts as well as (2) students' assimilation of quantitative frames of reference into their rhythmic enactment. We will demonstrate a case study of a student who, at two milestone events of different phenomenological quality, responded adaptively to experiences of enactment breakdown by modifying selective temporal-spatial attributes of her movement elements; in so doing, she subsumed an earlier set of local enactment patterns into a new global enactment pattern that led to articulating the design's targeted learning objective - proportional reasoning. Method The empirical context for this study was a design-based educational research project evaluating a new activity genre centered on an interactive technological device called the Mathematics Imagery Trainer for Proportion (MITp, Abrahamson & Trninic, 2015; see Fig. 1). K was an 11-year-old female student, one of 25 students participating voluntarily in a task-based semi-structured clinical interview (for details, see Rosen, Palatnik, & Abrahamson, 2018). The interview lasted 18 min in total, where, following a brief task introduction, K manipulated two virtual iconic images of (a) hot-air balloons (7 min), (b) cars (4 min), and (c) crosshair targets (7 min). The interview took place in our laboratory and was audio-video recorded for subsequent analysis. We identified all events where the student verbally expressed a new insight pertaining to an effective manipulation strategy. We then parsed the interview into episodes, which we characterize as subtasks, running from each insight to the subsequent insight. Subtasks were further coded as local ("finding green" by placing both cursors at once at particular screen locations and leaving the hands there statically) or global ("keeping green" while sliding the cursors up and down the screen continuously).
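The feedback contingency at the heart of the MITp task (described in the Fig. 1 caption that follows: the screen is green only when the cursors' heights stand in a hidden target ratio, e.g., 1:2, and red otherwise) can be summarized in a short sketch. This is a minimal illustration in Python; the target ratio, tolerance band, and function names are assumptions made for exposition, not the parameters of the actual device.

```python
def screen_color(left_height, right_height, target_ratio=2.0, tolerance=0.1):
    """Illustrative MITp-style feedback: green only near the hidden ratio.

    Heights are measured from the screen base. The 1:2 ratio and the
    tolerance band are assumed values chosen for this sketch.
    """
    if left_height <= 0:  # cursors at the very base: no defined ratio
        return "red"
    ratio = right_height / left_height
    return "green" if abs(ratio - target_ratio) <= tolerance else "red"

# "Keeping green": a trajectory that preserves the 1:2 relation stays green.
for left in (1, 2, 3, 4):
    print(left, 2 * left, screen_color(left, 2 * left))   # all green

# Raising both hands while keeping a fixed interval (K's first global
# strategy) lets the ratio drift away from 1:2, so the screen turns red.
print(screen_color(3, 5))   # started green at (2, 4); the interval of 2 is kept
```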
We applied a grounded theory micro-genetic analysis methodology to our empirical data (e.g., Goldin, 2000), focusing on the student's range of physical actions and multimodal utterance around the available media (Nemirovsky & Ferrara, 2009) as well as on the task-effectiveness of her actions. First, we attended to the student's actions that preceded her articulation of a new rule for "making green." We searched in particular for patterns in the timing and sequencing of her hand movements through space (Sinclair et al., 2016). A rudimentary choreographic notation system emerged, through our iterative analytic process, for marking the most frequently used movements (see Table 1). Second, we analyzed how K responded to our recurring question, "How would you explain your strategy for finding green to another person?" We thus probed for an association between two facets of K's behavior: (1) apparent transformation in her explorative manipulation strategy (captured by means of the hand-movement notation rubric, see Table 1) and (2) apparent changes we observed in the succession of her multimodal discursive responses to the recurring interview probe. These transformations, we sensed, could be marking a sequence in the adaptive emergence of K's spatial-temporal micro-routine for enacting the task solution. Our analysis drew also on our earlier findings regarding K's case (Rosen et al., 2018), where we investigated the emergence of new perceptual structures mediating her interaction with the technology. (Fig. 1 caption: The Mathematics Imagery Trainer for Proportion (MITp). The student manipulates two cursors along vertical axes, one by each hand. The task is to make the screen green and then keep it green while moving your hands. The screen will be green only when the heights of the two cursors above the screen base relate by a particular ratio unknown to the user (e.g., here 1:2); otherwise, the screen is red.) We wish to emphasize that prior to the data analysis phase of this study we had not anticipated engaging in issues of rhythm. As such, we note also that our interview protocol had not been constructed with an eye on evoking rhythmic behaviors, let alone unit-based mathematization of these behaviors. Thus, we cannot at this point depict K's case as generalizing to the set of our study participants. Rather, K is perhaps a case of where this educational design genre could evolve and where we or others could adapt the activity sequence so as to encourage these observed behaviors. 4 Breakdown in rhythmic enactment elicits latent phenomenal features of a problem space: the case of K adapting simple bimanual movements to the emergent constraints of a proportionality-based interactive environment In this section, we present results from the qualitative micro-analysis of K's hot-air balloons episode. The episode comprises K's attempts at performing a total of seven alternating local and global subtasks. As we will explain, K began with heuristic explorative hand movements that gradually took form as a locally effective, stable, and iterated enactment bearing a situated temporal-spatial structure of coordinated bimanual movements (evolution of proto-rhythmic forms). As the episode ensues, though, this early, locally effective enactive form proves inadequate across the parametric span of the entire problem space (breakdown of proto-rhythmic movement form), so that K must respond to the environment's unexpected feedback by accommodating the form.
In so doing, K engaged in reflective discourse with the tutor, who encouraged K to articulate how she had had to adapt her form; this intervention, in turn, led K to introduce a measuring unit and devise an arithmetic pattern governing the sequence of spatial spans (quantification of rhythmic movement form), which she then incorporated into a new scheme for enacting the adapted form (creation of a meta-rhythmic form that reconciles continuous movement with the arithmetic pattern). K's multimodal actions, reactions, and utterances in addressing emergent problems in the course of attempting to satisfy the task thus made manifest her logical and increasingly quantitative reasoning. The structure of the report, below, takes into account Mason's (2002) notion of accounting of (noticing of what happened) and accounting for (theorizing why events occur). The first two subsections present an account of K's episode as unfolding along the following formative events: (Section 4.1) epigenesis - evolution and breakdown of proto-rhythmic movement form vis-à-vis emergent problems of enactment, and (Section 4.2) cultural intervention - quantification of movement forms and consequent creation of a meta-rhythmic form that reconciles continuous movement with an arithmetic pattern. A final subsection (Section 4.3), continuous movement revisited as discreetly discrete: hidden rhythmic qualities of enacting proportion, offers a summative analysis of these formative events and provides an account for the role that rhythmic qualities of enacting proportion play in the student's mathematical thinking. Epigenesis: evolution and breakdown of rhythmic movement structures vis-à-vis emergent problems of enactment Local subtask The tutor asked K to "find green" at any location on the screen. Very soon (01:00) she did so. K placed her fingers on the screen, laterally aligned but not contiguous (••); she moved the fingers horizontally toward each other (→←), which does not change the feedback in this task; she moved the fingers vertically apart (↓↑) until achieving green; and then she held the fingers stationary (□□) at green. The tutor asked K to "find green" also at the top and then at the bottom of the screen. Attempting to find green at each screen location, K repeated one movement combination from the previous enactment (↑↓) while slightly altering another (→←). Asked to explain what she noticed about her hands' positions at these three locations (middle, top, and bottom), K articulated her rule for obtaining green as "the balloons were roughly one above the other" (02:05). Global subtask In the next subtask (02:26), the interviewer asked K to "keep green" on the screen while moving her hands from the bottom of the screen to the top of the screen. Note that this task a priori negates a direct utilization of the "↓↑" bimanual form that K had established to solve the succession of local tasks, because now her hands must necessarily move in a co-oriented form, "↑↑". Thus, where oppositional movements might be used contextually, correctively, they cannot constitute the base form. As such, to leverage her earlier discovery, K would need to decompose that form into its elements, modify one of them ("↓↑" becomes "↑↑"), and then recompose the elements. Alternatively, she must ignore the ill-adapted oppositional movement element and attend only to its end-result spatial property (the changing extent of the interval between her hands). All this, only if K had noticed the behavior of the spatial property, which she had not.
As we will see, K developed for the global task a completely different scheme. K first found green as previously (••; →←; ↓↑; □□). She then moved her hands slowly in a fixed "one above the other" formation, thus keeping a constant interval between the left- and right-hand index fingers (↑↑), which resulted in a red screen. (Recall that the application measures for a goal ratio, and so keeping a fixed interval between the hands while raising them, rather than increasing the interval, will inevitably violate the goal ratio, so that the screen will turn from green to red.) As she raised her hands up along the screen, K responded to the color feedback by correcting the (relative) location of her fingers so as to re-achieve a green screen (↓↑). Yet though K thus effectively was gradually increasing the interval between her hands as she raised them, which is patently clear to any observer of these data, K nevertheless explained that the hands should be in the "same position, same distance from each other" (see Reinholz, Trninic, Howison, & Abrahamson, 2010, for similar findings with this design, where students' verbal report contradicts their actions, so that one might say that the body is at the vanguard of the student's mathematical discovery). Local subtask In her next attempt (03:15), K slid her fingers on the screen, enacting a more complex movement pattern (see Fig. 2). K repeated this pattern at different screen locations: bottom, middle, and top. Her movements were slow (approximately 5 s for the whole pattern of movements at each location) but deliberate. Asked to articulate her current rule, K turned to the screen and, gesturing toward it, said: "Down here [screen bottom] my hands were really close, and then up here [screen middle] they were a little apart, and then up here [screen top] they were really apart." We thus see that K believes that the relative vertical positioning of the two hands matters in this particular activity. Curiously, she nevertheless perseverates in enacting lateral displacement of the hands (→←; ←→), perhaps because this contextually redundant movement never bears any negative consequences. Global subtask When K was again asked to "maintain green" on the screen while moving her hands continuously (04:30), she placed both hands near the bottom of the screen and moved them both simultaneously up, vertically, with RH moving twice as fast as LH (↑↑x2). In so doing, K maintained green almost without performing any corrections. She then repeated this action, at the same pace, voicing over: "Well, I think one hand moves faster than the other as it goes up. My hands keep gradually farther apart [sic]. It stays green, whole way… So then eventually it's farther away from the other hand than it was at the beginning." (For other cases of students shifting between mathematically complementary visualizations of the goal bimanual movement, see Abrahamson, Lee, Negrete, & Gutiérrez, 2014.) To summarize, K has developed two different sensorimotor schemes as her solutions to the tasks she was assigned (Fig. 3). For the local subtask of finding green at distinct screen locations, she determined the "the higher-the bigger" scheme governing the covariation of the interval's vertical location and spatial extension: "really close," "a little apart," "really apart" (see Fig. 3, local task 3), and for the global subtask of keeping green while moving her hands continuously up, she determined the "one hand faster than the other" scheme (see Fig. 3, global task 4).
Note that both schemes were articulated in a qualitative register. (Fig. 3 caption: Construction of two different sensorimotor schemes as solutions to the local and global tasks. Local tasks 1 and 3 lead to the "the higher-the bigger" scheme; global tasks 2 and 4 lead to the "one hand faster than the other" scheme.) Cultural intervention: quantification of movement forms and consequent creation of a meta-rhythmic form that reconciles continuous movement with an arithmetic pattern In the hope of eliciting from K greater specificity on her movement rules, the interviewer chose to focus on the interval between the cursors: "Ok. Do you have any sense of… kind of… how this [the interval] is changing? How much it is changing, how much faster it is moving?" K placed her left- and right-hand index fingers on the screen and then moved them away from each other along the vertical axis (••; ↓↑) until she got green (Fig. 4, local task 5). At 5:41, K used quantitative language for the first time in the interview: "Maybe they are twice as far apart… or more… actually… four times, I don't know." Local subtask Asked again to explain what happens at the different screen locations, K once again enacted the same movement pattern. It is as if K has quickly conjured a little empirical experiment comparing the behavior of the vertical interval at three different locations along the screen. Referring to the relative positions of the two balloon icons, as they appeared at the bottom, middle, and top of the screen, K described the intervals between them in balloon units. Thus, K's quantitative construction of her solution enactment has itself now fallen into an articulated pattern, the arithmetic sequence of "0, 1, 2" mapped respectively onto the bottom, middle, and top of the screen. It is of note that enacting the same movement form "••; ↓↑" lasted approximately 1 s at the bottom, approximately 2.5 s in the middle, and 4 s at the top (Fig. 4, local task 6). The interviewer then asked K whether she now knew "how to keep the screen green." Global subtask Without actually touching the screen, K gestured toward the screen a performance of continuous hand movements (↑↑x2). She then placed her hands on the screen and moved the virtual icons in the same manner (with one hand moving twice as fast) (Fig. 4, global task 7). (7:15) K: "I would say, like, start at the bottom, and put them close together. And then, move one hand up faster… Wait, actually (switches hands to ↑x2↑), [inaudible] one hand up faster and, as I said, in the middle, they are separated like one balloon [inaudible] (makes a correction ↑↓), and at the top (makes a correction ↑↓) two balloons." Thus, in the current episode, K attempted to incorporate features of the bottom, middle, and top solutions to the local subtasks into a single global enactment, where her earlier local performances become milestone goals for the global continuous movement (see Fig. 4). Later in the course of the interview (14:00), the iconic manipulation cursors (the two hot-air balloons) were replaced with generic manipulation cursors (two crosshair targets). Working on the global subtask, K said, "One of them has to go faster, to stay green… The same as the last time. Twice as fast, maybe." K persistently articulated her new rules in a quantitative register. Also, she did not use the target's height as a unit, but instead used more general terms. K thus articulated yet another insight, a ratio-like 1:2 rule, by which for every 1 unit you rise on the left, you rise 2 units on the right (an "a-per-b" enactment rule).
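K's two quantitative insights, the arithmetic sequence of intervals (0, 1, 2 balloon-heights at the bottom, middle, and top) and the a-per-b rule (rise 2 units on the right for every 1 unit on the left), describe one and the same 1:2 covariation. The sketch below makes the equivalence explicit; the specific left-hand heights, expressed in arbitrary "balloon" units, are illustrative assumptions rather than K's measured positions.

```python
# Left-hand heights at the bottom, middle, and top of the screen, in arbitrary
# "balloon-height" units (illustrative values, not measurements from the study).
left_heights = [0, 1, 2]

for left in left_heights:
    right = 2 * left            # a-per-b rule: the right hand rises 2 units per 1
    interval = right - left     # spatial span between the two hands
    print(f"left={left}  right={right}  interval={interval} balloon(s)")

# The printed intervals are 0, 1, 2 -- K's "it grows by one at a time" pattern --
# while the height ratio stays fixed at 1:2 wherever it is defined.
```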
We will now offer summative commentary on the earlier subsections. Continuous movement revisited as discreetly discrete: hidden rhythmic qualities of enacting proportion In tackling the local subtasks, K explored the problem space in the following way. First, she constructed a pattern of movements at one location; next, she iterated the pattern at a sequence of locations; and then she refined the pattern. In so doing, K experimented with different combinations of the same movements (recall one of the iterated forms: ••; ↓↑; →←; ←→; ↑↓; →←) in an apparent attempt to decide whether any elements were disruptive, inefficient, or redundant. Applying Radford's (2015) framework for analyzing the rhythmic qualities of manual movements, we can characterize the initial result of K's exploration as a rhythmic structure in formation: patterns containing a movement (↓↑) leading to green (themes), and iteration of these patterns (rhythmic grouping). Now applying the same framework to the next subtasks, we witness a difference. K is not experimenting with themes anymore. On the one hand, there is the same rhythmic structure consisting of a stable and simple movement pattern: ••; ↓↑ (a theme); iteration of this pattern (rhythmic grouping); and constant locations at the bottom, middle, and top (three-syllable meter). However, a new quality of enactment will now emerge - arrhythmic prolongation. Namely, at the three locations (bottom, middle, top), the hands must traverse increasingly greater intervals so as to achieve green. In a sense, even as K set her rhythm she recognized a destructive element in it (see Table 2). Being engaged in reflective discourse with the tutor, K was encouraged to offer greater specificity in describing the variation she had perceived in her own enactment of the movement pattern at the three spatial localities. This specificity was achieved through measuring. K utilized the serendipitously available hot-air balloon icons themselves so as to quantify her qualitative descriptions of moving these icons (cf. Abrahamson & Trninic, 2015). A hot-air balloon icon appears to afford measuring: It has physical height, which is suitable for subtending a vertical interval; moreover, as a familiar worldly artifact, balloons prime and vectorize the student's sensory attention toward the vertical dimension, because that is a balloon's normative, anticipated, prominent trajectory (on the effect of iconic vs. generic manipulatives on students' task framing, see Rosen et al., 2018). Eventually, K's failed attempt to enact one and the same rhythmic movement pattern at all three localities was a phenomenological mobilizer of breakdown and insight. As a consequence, K constructed a meta-rhythmic movement form that modulated selected phenomenal elements of locally effective movement patterns into a globally effective structure (same theme, same grouping, same meter, but an arithmetic sequence of prolongations - "0… 1… 2… So it grows by one at a time"). K's measuring units and quantification enabled her to regain a species of globally effective rhythmic equilibrium that subsumed the locally effective schemes into enactive coherence. (Table 2 caption: Evolution of rhythmic components leads to breakdown: surfacing of different prolongations in local tasks prevents establishment of an overall rhythmic structure for the global task. Disequilibrium resolved: meta-rhythmic sequential structure with prolongations lasting respectively 0, 1, and 2 temporal-duration "units".) In the global subtasks, K was oriented by the tutor to move along the vertical axis.
She applied corrections (↓↑) to her movement (↑↑) so as to "keep green." The higher her hands went, the bigger the corrections she had to make. In her multimodal utterance to the tutor, K demonstrated that she had recognized a relationship between two complementary proto-mathematical notions of proportion: (1) as the hands rise, the distance between them increases and (2) the upper hand is rising faster than the lower hand (see Table 3). Though logically trivial, this connection, we submit, is psychologically and mathematically profound. The enactive polysemy of one and the same (bimanual) movement is a case where different potential meanings are phenomenologically disparate yet conceptually complementary, and where reconciling these would-be conflicted meanings creates a powerful learning opportunity (see also Abrahamson & Wilensky, 2007). We argue that this reconciliation required K to decompose and relinquish selected components from a locally effective rhythmic scheme, maintaining only those elements that still obtain for the new task. K's rhythmic enactment in the set of local subtasks had distinctive features: an iterated movement ↓↑ and an arithmetic sequence of prolongations - "0… 1… 2." In the global task, the movement component of ↓↑ faded away as redundant, while the expression of spatial extensions in terms of units and quantities was transferred from local to global enactment (see Table 3). K had performed the task of making the screen green at several discrete positions on the screen and in so doing appeared to develop a rhythmic movement pattern of placing her hands alongside each other and then moving one hand up and the other down until she had achieved green. But when she transitioned to the next task of keeping the screen green while moving her hands continuously up the screen, this rhythmic movement pattern proved inadequate, so that K realized she was modifying the pattern along two dimensions: (1) she had to replace the downward movement with an upward movement, because she needed to raise both hands, and (2) she had to modify the spatial-temporal prolongation as corresponding to the screen location (the higher you go, the greater the spatial-temporal span). In the course of doing this work, K realized that her upper hand should move faster than the lower hand. She also attended to the spatial spans extending between her hands, and she used ad hoc units to quantify these spans. In turn, K then relied on these quantifications to create goal locations for her hands along the continuous upward movement. Once evoked, unitization and quantification informed K's movement in the course of the interview (from the 7th minute on), as evident from her utterances "…twice as fast…" or "…twice as far each time…" instead of just "faster." (Table 3 caption: Components of local rhythmic enactment are adopted differently in global enactment: the ↓↑ movement is implicated as redundant for global enactment, whereas the expression of spatial extensions in terms of units and quantities was transferred from local to global enactment. Global task 2: ↑↑ with corrections (↓↑); "hands are in the same position, same distance from each other." Global task 4: ↑↑x2 with minor (↓↑) corrections; "one hand moves faster than the other as it goes up. My hands keep gradually farther apart." Global task 7: ↑↑x2 and ↑x2↑; "move one hand up faster…, in the middle, they are separated like one balloon…, and at the top two balloons.")
With the problem space now parsed by virtual units, K was able to maintain a 1:2 rhythm globally and reflect on this performance. In turn, this performance gave rise to K noticing yet a third proto-mathematical notion of proportion: the a-per-b expression of the quantitative relation between the left and right hands' respective quotas along the parallel progression. Rhythm makes enactment mathematizable On the basis of our proposed interpretation of these empirical data, we are putting forth an argument for the instrumental role of individuals' rhythmic enactive forms in learning mathematics. These rhythmic enactment forms serve initially as an efficient way of organizing one's interaction over space, time, materials, and other humans and, subsequently, as phenomenological mobilizers of change. Rhythmic movement is a tacit enactment goal mobilizing the emergence of mathematical structures in two interrelated ways: first, by creating temporal-spatial movement patterns and sustaining the learner's attention to these patterns as a means of organizing and regulating the enactment of a new comprehensive movement and, second, by alerting the learner's attention to latent irregularities in the enactment that result from encountering unfamiliar information structures in the workspace; these irregularities emerge as rhythmic breakdown, a breakdown that must be resolved through a new meta-pattern of enactment. As such, humans seek rhythmic structures to consolidate and simplify our natural and cultural mundane activities, thus relieving cognitive resources for coping with novelty, but we learn when our default actions fall out of rhythm upon encountering novelty. These breakdowns and their resolutions create pedagogical opportunities (Koschmann, Kuutti, & Hickman, 1998). In the presented case, a student's gradual refinement of rhythmic structures was interwoven with the refinement of her reasoning and served as a means of solving a coordination task that instantiates the calculus of proportion. In the course of the interview, we observed a self-organizing event-structuring cycle. We observed feedback loops, where at first unsystematic and explorative movements were coordinated into proto-rhythmic patterns, and those patterns in turn were iteratively repeated. Students in embodied-learning environments incline toward rhythmic enactment (with stable theme, meter, grouping, and temporal-spatial prolongation, cf. Radford, 2015). When rhythm is disturbed, cognitive resources are mobilized to restore equilibrium, for instance by means of unitizing. The entire process was gently steered by the instructor, who took measures to orient the student toward useful regions and behaviors of the interactive problem space, in so doing applying socio-epistemic pressure on the student to reflect on selected aspects of her actions. We note that the empirical context of our study was substantially different from previous studies probing rhythmic qualities of mathematical thinking (e.g., Bautista & Roth, 2012; Sinclair et al., 2016). Paraphrasing Radford (2015), in the current study, mathematical thinking was not only a movement of thought but an outcome of the learner's movement.
In the embodied-interaction design that served as our empirical context for this study, perceptual information across the problem space became available to the student only through interaction; the information was not presented a priori or simultaneously, as in traditional algebraic patterngenerating activities that present assemblies of static visual displays. Thus, tasks where information becomes available only through enactment may solicit rhythmic behavior that Bcarries^a working hypothesis from one location to another. The inherent ephemeral nature of movement tacitly conjures rhythmic lulling as an embodied cognitive orientation, an organizing principle and mnemonic device for gravitating toward some encapsulating action form within continuous media. We argue that despite their ephemeral nature, students' actions in the course of an embodied-learning mathematical activity constitute appropriate events for productive proto-mathematical reasoning and learning. In particular, quantification of rhythmic movement forms may play a bridging role between construction and refinement of local rhythmic movement structures and their further decomposition and recomposition into a globally effective rhythmic movement structure. A sequence of formative events in K's case resonates with the findings of Spencer et al. (2006) on rhythm as a formative factor in constructing an event structure. More generally, forms that emerge through the social enactment of cultural practice mediate the development of intellectual activity (e.g., Newman, Griffin, & Cole, 1989;Radford, 2009). As such, rhythmic enactment can serve in transitioning from naïve to scientific reasoning (Abrahamson, 2004). This observation concurs with and expands findings of Bautista and Roth (2012) on bodily rhythm as a vital dimension of geometrical proficiency. Students participating in action-based embodied design activities are challenged to perform tasks that require the coordination of continuous motor actions. To achieve such coordination in the form of a task-effective sensorimotor scheme, the students may need to develop auxiliary enactive forms, for instance rhythm, which the designer had preconceived as grounding the target mathematical concepts (Abrahamson, 2014). We note that some students, like our case study, may be more attentive than others to ephemeral qualities of rhythmic structures. To support more students in these activities, we could look into technological resources and instructional practices for enhancing student production of rhythmic behaviors as well as for surfacing relevant features of these behaviors for their scrutiny and quantification. Compliance with ethical standards The research program was approved by and strictly complied with the university's Internal Review Board stipulations.
8,449
2018-09-03T00:00:00.000
[ "Mathematics", "Education" ]
Population Genetic Structure and Habitat Connectivity for Jaguar (Panthera onca) Conservation in Belize Background: Effective connectivity between jaguar ( Panthera onca ) populations across the American continent will ensure the natural gene flow and the long-term survival of the species throughout its range. Jaguar conservation efforts have focused primarily on connecting suitable habitat in a broad-scale. However, accelerated habitat reduction, limited funding, and the complexity of jaguar behaviour have proven challenging to maintain connectivity between populations effectively. Here we used individual-based genetic analysis in synthesis with landscape permeability models to assess levels of current genetic connectivity and identify alternative corridors for jaguar movement between two core areas in central and southern Belize. Results: We use 12 highly polymorphic microsatellite loci to identify 50 distinct individual jaguars, including 41 males, 3 females and 6 undetermined animals, from scat samples collected in The Cockscomb Basin Wildlife Sanctuary and The Central Belize Corridor. Using Bayesian and multivariate models of genetic structure, we identified one single group across two sampling sites with low genetic differentiation between them. We used fine-scale data on biodiversity features as vegetation types to predict the most probable corridors using least-cost path analysis and circuit theory. Conclusions: The results of our study highlight the importance of expanding the boundaries of the Central Belize Corridor to effectively cover areas that would more easily facilitate jaguar movement between locations. by correlating the spatial heterogeneity of landscapes with estimates of gene flow (18). This approach provides important opportunities to identify areas of conservation priority and provide critical knowledge on habitat fragmentation, dispersal ecology, and functional connectivity in complex landscapes for effective conservation planning and management actions (19). The establishment of corridors to improve population connectivity, particularly in Mesoamerica, has been among some of the most important efforts to prevent the loss of biodiversity in the world's biologically richest regions (20). One of the most important jaguar populations in Mesoamerica can be found in the forests of Belize, a core and critical area for the species throughout its range (21). According to the Belize National Protected Areas System Plan (22), 36% of Belize land territory is under conservation management. In particular, with its 390 square kilometres of protected forests, The Cockscomb Basin Wildlife Sanctuary harbours one of the largest concentrations of jaguars in the country (21,23). This area is connected to other Natural Protected Areas via the Central Belize Corridor, which extends over 750 square kilometres and is considered the most critical and important corridor of the Belize National Protected Areas Systems (24). Range-wide corridors have been established as a major tool to improve population connectivity and thus aid the subsistence of jaguars (5). However, to build a well-connected protection network, considerations must be taken on the spatial scale at which conservation strategies are implemented. Broad-scale conservation efforts will benefit from information gained at a finer scale, especially in heterogeneous or fragmented areas (3, 9,25). 
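The least-cost path analysis mentioned above can be illustrated with a small, self-contained sketch: a Dijkstra search over a toy resistance grid in which habitat cells are cheap to cross and a highway row is expensive. The grid values, the 4-connected neighborhood, and the function name are assumptions made for this illustration; the study itself built its resistance surface from fine-scale vegetation data and used dedicated GIS and circuit-theory tools.

```python
import heapq

def least_cost_path(resistance, start, goal):
    """Dijkstra least-cost route over a resistance grid (4-connected).

    `resistance` is a list of lists of per-cell movement costs (low for
    habitat such as forest, high for highways or settlements). The grid,
    costs, and function name are illustrative assumptions only.
    """
    rows, cols = len(resistance), len(resistance[0])
    best = {start: resistance[start[0]][start[1]]}   # accumulated cost per cell
    came_from = {}
    frontier = [(best[start], start)]
    while frontier:
        cost, (r, c) = heapq.heappop(frontier)
        if cost > best.get((r, c), float("inf")):
            continue                                  # stale queue entry
        if (r, c) == goal:
            break
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                new_cost = cost + resistance[nr][nc]
                if new_cost < best.get((nr, nc), float("inf")):
                    best[(nr, nc)] = new_cost
                    came_from[(nr, nc)] = (r, c)
                    heapq.heappush(frontier, (new_cost, (nr, nc)))
    path, node = [goal], goal
    while node != start:                              # walk the route back
        node = came_from[node]
        path.append(node)
    return path[::-1], best[goal]

# Toy resistance surface: a high-cost "highway" row separates two habitat
# blocks, with a single cheaper crossing point; the least-cost route between
# the blocks funnels through that crossing.
grid = [
    [1, 1, 1, 1],
    [9, 9, 2, 9],
    [1, 1, 1, 1],
]
print(least_cost_path(grid, (0, 0), (2, 3)))
```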
The effective collaboration among scientists, practitioners, non-governmental organisations and politicians will tap the full potential of reservoir projects and conservation actions across the jaguar's range. Here, we present a comprehensive study on genetic structuring and patterns of landscape heterogeneity to identify alternative habitat corridors for gene flow between populations within two principal locations within a jaguar stronghold in Belize. Using jaguar scat samples, we investigate population genetic structure, levels of inbreeding and gene flow. Additionally, we correlate landscape features and patterns of gene flow to examine landscape permeability for jaguars between The Cockscomb Basin Wildlife Sanctuary and The Central Belize Corridor. Genetic variation A total of 536 scat samples collected across the two sampling areas were positively matched to P. onca. Other identified species included Puma concolor, Leopardus wiedii, Leopardus pardalis, Herpailurus yaguarondi, and Canis familiaris. Genotyping revealed a total of 50 unique multilocus genotypes (37 from the Cockscomb reserve and 13 from the Central Corridor); these included 41 prospective males, 3 prospective females and 6 individuals of undetermined sex (Additional file 3). MICROCHECKER detected three loci (FCA 212, 229, and 075) showing signs of a null allele, but did not find evidence of scoring mistakes or large allele dropout (Additional file 1). The mean polymorphic information content was PIC = 0.642. The genotyping results allowed the identification of 50 distinct individual jaguars, 37 corresponding to the Cockscomb Reserve and 13 to the Central Corridor. The geographical coordinates assigned to each individual were determined by averaging the coordinates of all the samples corresponding to that particular individual. Tests for departure from Hardy-Weinberg Equilibrium were variable across loci, with four loci deviating from HWE (Table 1). This deviation could be explained by a deficit of heterozygotes within the population, potentially caused by inbreeding or by the presence of null alleles (FIS = 0.22, p-value = 0.001). Furthermore, overall F-statistics showed evidence of low genetic differentiation (FST = 0.021, p-value = 0.007; FIT = 0.237, p-value = 0.001); linkage disequilibrium was not significant for any pair of loci. The PCA of genetic diversity shows overlapping of the two sites, indicating overlapping allele frequencies and little differentiation between groups (Figure 1D). The AMOVA analysis revealed that less than 2% of the genetic variance occurred between individuals in the Cockscomb Reserve and individuals in the Central Corridor, and showed low levels of genetic differentiation between groups (FST = 0.015, p-value = 0.026). Results from the Mantel test showed significant evidence of isolation by distance (Rxy = 0.167, p-value = 0.01; Figure 1C). Population Structure and relatedness Data analysis using STRUCTURE revealed that k = 1 had the highest mean probability value, and k = 4 had the highest delta-K value (Figure 1B). This was consistent with the results from TESS, where k = 1 also had the highest probability (ΔK = 10.15); in both cases, no clear pattern of genetic structure can be observed when rendering the assignment probabilities in bar plots (Figure 1B). Results from GENELAND revealed that k = 1 also had the highest probability in 8 of 10 runs, and the final map does not show a clear population boundary between sampling sites (Figure 1E).
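The hierarchy of F-statistics reported here (FIS = 0.22, FST = 0.021, FIT = 0.237) can be reproduced in spirit with a toy, Nei-style calculation for a single biallelic locus. The allele frequencies and observed heterozygosities below are invented for illustration, and no small-sample correction is applied, unlike the estimators typically used for microsatellite data (e.g., Weir and Cockerham's).

```python
def f_statistics(subpops):
    """Nei-style F-statistics for a single biallelic locus.

    `subpops` is a list of (p, h_obs) pairs, one per sampling site: p is the
    frequency of one allele and h_obs the observed heterozygosity. Equal
    sample sizes are assumed and no small-sample correction is applied,
    unlike the estimators used on real microsatellite data.
    """
    n = len(subpops)
    h_obs = sum(h for _, h in subpops) / n               # mean observed heterozygosity
    h_s = sum(2 * p * (1 - p) for p, _ in subpops) / n   # mean expected He within sites
    p_bar = sum(p for p, _ in subpops) / n               # pooled allele frequency
    h_t = 2 * p_bar * (1 - p_bar)                        # expected He of pooled population
    f_is = 1 - h_obs / h_s   # heterozygote deficit within sites
    f_st = 1 - h_s / h_t     # differentiation among sites
    f_it = 1 - h_obs / h_t   # overall heterozygote deficit
    return f_is, f_st, f_it

# Two hypothetical sites with similar allele frequencies but a heterozygote
# deficit, mimicking the reported pattern of high FIS and very low FST:
print(f_statistics([(0.60, 0.38), (0.55, 0.36)]))
# -> roughly (0.24, 0.003, 0.24)
```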
The DAPC analysis showed that the lowest BIC value (68.42) corresponded to K = 2 and is represented in a single discriminant function; however, the BIC difference between K = 2 and K = 1 is negligible. The performance of the seven relatedness estimators was analysed to provide information on the degree of resolution expected in our dataset. Mean relatedness amongst individuals from the Central Corridor was -0.046 ± 0.068 (SE = 0.008), and from the Cockscomb Reserve -0.15 ± 0.086 (SE = 0.003). Amongst all individuals, mean relatedness was -0.01 ± 0.08 SD (SE = 0.002). Overall, individuals from the Central Corridor were more closely related to each other than to those in the Cockscomb Reserve.

Landscape permeability

The least-cost path analysis inferred the best route from the most northern point in the Central Corridor to the most southern point in the Cockscomb Reserve.

Population genetics

This study presents estimates of genetic variation for individuals sampled within two core areas in central and southern Belize. Twelve polymorphic microsatellite loci allowed us to successfully identify 50 distinct individuals, corresponding to 41 males, 3 females and 6 of undetermined sex. The relatively low number of females could be explained by the sampling methodology rather than reflecting the proportion of sexes in the area. Sampling was conducted close to paths or dirt roads, and closer to human settlements; because females may be more elusive, have smaller home ranges, hide their scats and avoid crossing open spaces and wide paths (3,12,26), this method could favour the sampling of male scats and therefore bias the analysis towards the more frequently observed sex. Studies of dispersal in large felines show that males are the dispersing sex, while females tend to be more philopatric (9,11,27,28); other measurements of genetic differentiation between sexes and more female scat samples are necessary to confirm sex-biased dispersal in this area. Additionally, properties of our data such as sample size, number of loci, polymorphism, and null alleles could also have influenced their performance (30,31). Their power to detect population structure has been shown to decrease in accuracy at very low levels of population differentiation (FST < 0.05) (29), as the number of estimated populations can be affected by violations of model assumptions and cryptic relatedness (32). Furthermore, GENELAND and DAPC also indicated a single cluster as the most probable number of populations. GENELAND assigned individuals to each population considering the sampling locations and measurements of genetic differentiation; as this method considers spatial autocorrelation and is better able to detect low levels of genetic differentiation, it more accurately reflects the true k (33). Congruent results from our genetic clustering analyses (k = 1) suggest there is no population structuring between individuals in the Central Corridor and individuals in the Cockscomb Reserve. The very low levels of genetic differentiation found in this study could be the result of limited dispersal between sampling localities caused by behavioural characteristics of the species, but are more likely due to significant isolation by distance and a sampling gap between the two regions. Studies conducted with radio telemetry show that jaguars depend on large patches of habitat and can have home ranges that surpass 100 km² (3,21,34); however, females have smaller home ranges and tend to avoid roads and human-dominated landscapes to a higher degree, showing a preference for intact forests (3,35).
Although jaguars have large home ranges and the ability to move considerable distances, they tend to avoid human-dominated areas and show sex-specific differences in movement (3,35). Genetic subdivision across the country has been

Local-scale connectivity

The relatively high levels of gene flow and low genetic differentiation found in our study attest to the success of the corridor established to connect these two areas of Belize, which were a continuum of jaguar habitat in the distant past. These results are especially informative to aid conservation efforts in other areas of the species' range, such as the Atlantic Forest of South America, where there is a lack of genetic connectivity among isolated remnant jaguar populations (36). However, anthropogenic barriers (such as the Hummingbird Highway) could be altering gene flow between core jaguar areas and should alert conservation managers to improve connectivity in future conservation actions and corridor management. The negative impact of roads on jaguar populations should especially be taken into consideration when improving existing corridors or designing new ones; road construction and/or expansion within protected areas increases the accessibility of hunters to jaguars and their natural prey, and leads to a reduction in the potential of these lands to sustain viable populations of top predators (37). Rabinowitz & Zeller (2010) developed a range-wide model of landscape connectivity to identify potential corridors connecting jaguar populations across the Americas. Their study provided critical information for conservation actions such as corridor design across the range of the species. However, although extremely useful for large-scale planning, the model is more challenging to apply to local or regional corridor design and zoning. The model depends on a least-cost path analysis that relies on coarse-grained environmental data to determine habitat connectivity and could ignore factors that affect how animals utilise the landscape (38). This range-wide corridor could be improved with fine-scale studies that inform targeted conservation actions (12,38). Currently, jaguars seem to move across the two sites, but this highway, other roads and urbanisation in general could be shaping population structure by presenting physical barriers to gene flow. To improve connectivity between these sites, the coverage of the Central Corridor needs to be expanded so that its boundaries cover the areas that would more easily facilitate jaguar movement, as evidenced in this study. Conservation efforts should focus on habitat restoration of corridor networks that reduce the landscape resistance between the Cockscomb Reserve and the Central Corridor; actions to secure movement between and across jaguar core areas could include building wildlife crossings where the resistance surface for movement breaks (e.g. highway junctions in the Cayo and Stann Creek Districts). The uncertainty over the dispersal ability of jaguars and the extent of their use of corridors highlights the importance of incorporating data at a regional scale to better delineate corridors that facilitate gene flow (9,39).

Conclusion

Our results provide a snapshot of the genetic patterns of animals whose scats were sampled

Tests for linkage disequilibrium (LD) between pairs of loci were performed in GENEPOP v4.2 (50) with default settings.
We used GenAlEx v6.0 (51) to explore multivariate patterns of molecular diversity among populations via Principal Coordinate Analysis (PCoA) and Mantel tests of matrix correspondence to test for isolation by distance (IBD); we assessed the partitioning of genetic variation between sampling localities with an Analysis of Molecular Variance (AMOVA).

Population Structure

We estimated population genetic structure using Bayesian assignment methods with STRUCTURE v2.3.4 (52), which assigns individuals to a number K of genetically homogeneous groups based on Bayesian estimation under the assumptions of Hardy-Weinberg equilibrium and the absence of linkage disequilibrium between loci. We ran STRUCTURE with the LOCPRIOR option to allow sampling location to assist the clustering, and we performed 20 independent runs for k = 1-10. We set a burn-in period of 100,000 followed by 1,000,000 MCMC iterations and assumed an admixture model with correlated allele frequencies. To determine the optimal number of clusters and render bar plots, we implemented the Evanno method (30) using the POPHELPER package v1.2.1 in R (53). Furthermore, we inferred spatial genetic structure with TESS 2.3.1 (54). This program assumes that population memberships follow a hidden Markov random field model in which the log-probability of an individual belonging to a particular population, given the population membership of its closest neighbours, is equal to the number of neighbours belonging to this population (55). We tested the CAR and BYM models with a linear trend surface to define the spatial prior for admixture (31); we set a burn-in period of 100,000 and 1,000,000 sweeps across 10 independent runs, testing the maximal number of clusters from 1 to 10. To decide the optimal K, we plotted the deviance information criterion (DIC) against Kmax. We used GENELAND v4.0.6 (55) as an additional method to infer the number of populations and the spatial location of genetic discontinuities. This program uses georeferenced individual multilocus genotypes to infer the number of populations and the spatial location of genetic discontinuities between those populations. We determined K across 20 independent runs with 1,000,000 MCMC iterations. Thinning was set at 100, allowing K to vary from 1 to 10. We used the correlated allele model and set the maximum rate of the Poisson process at 50 (the number of individuals), the maximum number of nuclei in the Poisson-Voronoi tessellation at 150 (three times the number of individuals), and the uncertainty of the spatial coordinates of the collections at 25 metres. We re-ran the analysis ten times to check for consistency across runs. To further explore genetic diversity and structure among individuals, we reduced the dimensions via a Discriminant Analysis of Principal Components (DAPC) without a priori group assignment using the adegenet package v2.0.1 in R (49,56). The tools implemented in DAPC make it possible to resolve complex population structures by summarising the genetic differentiation between groups while overlooking within-group variation, thereby achieving the best discrimination of individuals into pre-defined groups (57). This multivariate method is useful for identifying clusters of genetically related individuals when group priors are lacking. Estimation of the number of clusters is performed by comparing the different clustering solutions using the Bayesian Information Criterion (BIC).
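The Evanno method referred to above selects K by locating the largest second-order rate of change of the STRUCTURE log-likelihoods across replicate runs, deltaK = |L(K+1) - 2 L(K) + L(K-1)| / sd(L(K)). A minimal sketch of that computation is given below; the log-likelihood values are purely illustrative stand-ins for real run output (in practice POPHELPER performs this step).

```python
import statistics as st

# Hypothetical mean log-likelihoods lnP(D) over replicate runs for K = 1..5,
# stored as {K: [lnP(D) of each run]}; the numbers are illustrative only.
lnp = {
    1: [-2105, -2103, -2104],
    2: [-2001, -2003, -1999],
    3: [-1990, -1995, -1988],
    4: [-1989, -1991, -1992],
    5: [-1993, -1990, -1996],
}

mean_l = {k: st.mean(v) for k, v in lnp.items()}
sd_l = {k: st.stdev(v) for k, v in lnp.items()}

# deltaK is defined for interior K values only:
# |L(K+1) - 2 L(K) + L(K-1)| / sd(L(K)).
for k in sorted(mean_l)[1:-1]:
    second_diff = mean_l[k + 1] - 2 * mean_l[k] + mean_l[k - 1]
    delta_k = abs(second_diff) / sd_l[k]
    print(f"K={k}: deltaK={delta_k:.2f}")
```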
We compared the results from the three Bayesian approaches and the DAPC to provide confidence in the spatial designation of genetic groupings.

Relatedness

Levels of genetic relatedness were calculated using seven estimators as implemented in the related package v1.0 in R (58). Pairwise relatedness was calculated using the estimators described by Queller and Goodnight (1989), Li et al. (1993), Ritland (1996), Lynch and Ritland (1999) and Wang (2002), as well as the dyadic likelihood estimator described in Milligan (2003).

Landscape Connectivity

We predicted the most effective corridor via least-cost path analysis using the gdistance package v1.1 in R (66). This approach gives the shortest cost-weighted distance between two sampling points; the package allows grid-based distances and routes to be calculated and is comparable to ArcGIS Spatial Analyst (67), GRASS GIS (GRASS Development Team 2012), and CIRCUITSCAPE (68). The package implements measures to model dispersal histories and contains specific functionality for geographical genetic analyses (66). The least-cost path analysis was inferred from the most northern GPS point in the Central Corridor to the most southern point in the Cockscomb Reserve. Additionally, we used CIRCUITSCAPE v3.5 (68) to model resistance surfaces of the landscape as an alternative to the least-cost path analysis, which assumes that gene flow is associated with the total cost along a single optimal path (69). Circuit theory considers all possible paths and is useful for assessing interactions between different landscape features. With these two approaches, we were able to identify the most probable routes for dispersal and gene flow between localities. Ecosystem preference costs were based on a literature review and expert opinion on habitat use by jaguars (3,5,6,35,70-72). To model each ecosystem as analogous to an electrical circuit, each pixel was assigned a resistance value on a scale of 0-9 based upon land cover (Table 2). The resistance value represents the relative effort required to move from one point to another, and the map of resistance values is used to derive all possible pathways for jaguar movement. Spatial data were obtained from the Biodiversity and

Table 2. Ecosystem types and cost values for jaguar movement based on expert knowledge. Values range from 0 (highly costly) to 9 (no cost for movement).
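To illustrate the logic behind the least-cost path analysis described above, the sketch below runs a plain Dijkstra search over a small, hypothetical cost raster with 8-connected moves. The grid, cost values and endpoints are invented for illustration; the actual analysis was carried out with the gdistance and CIRCUITSCAPE tools cited in the text.

```python
import heapq

# Hypothetical cost raster (higher = harder to cross), e.g. derived from
# land-cover classes after converting preference scores to movement costs.
cost = [
    [1, 1, 2, 8, 8],
    [1, 2, 2, 8, 1],
    [2, 2, 1, 1, 1],
    [8, 8, 2, 1, 1],
]

def least_cost_path(cost, start, goal):
    """Dijkstra over an 8-connected grid; edge weight = mean cost of the two cells."""
    rows, cols = len(cost), len(cost[0])
    dist = {start: 0.0}
    prev = {}
    pq = [(0.0, start)]
    while pq:
        d, node = heapq.heappop(pq)
        if node == goal:
            break
        if d > dist.get(node, float("inf")):
            continue
        r, c = node
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                if dr == dc == 0:
                    continue
                nr, nc = r + dr, c + dc
                if 0 <= nr < rows and 0 <= nc < cols:
                    nd = d + 0.5 * (cost[r][c] + cost[nr][nc])
                    if nd < dist.get((nr, nc), float("inf")):
                        dist[(nr, nc)] = nd
                        prev[(nr, nc)] = node
                        heapq.heappush(pq, (nd, (nr, nc)))
    # Reconstruct the path from goal back to start.
    path, node = [goal], goal
    while node != start:
        node = prev[node]
        path.append(node)
    return path[::-1], dist[goal]

path, total = least_cost_path(cost, start=(0, 0), goal=(3, 4))
print("least-cost path:", path, "total cost:", total)
```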
4,075.4
2020-01-01T00:00:00.000
[ "Environmental Science", "Biology" ]
QANOVA: quantile-based permutation methods for general factorial designs Population means and standard deviations are the most common estimands to quantify effects in factorial layouts. In fact, most statistical procedures in such designs are built toward inferring means or contrasts thereof. For more robust analyses, we consider the population median, the interquartile range (IQR) and more general quantile combinations as estimands in which we formulate null hypotheses and calculate compatible confidence regions. Based upon simultaneous multivariate central limit theorems and corresponding resampling results, we derive asymptotically correct procedures in general, potentially heteroscedastic, factorial designs with univariate endpoints. Special cases cover robust tests for the population median or the IQR in arbitrary crossed one-, two- and higher-way layouts with potentially heteroscedastic error distributions. In extensive simulations, we analyze their small sample properties and also conduct an illustrating data analysis comparing children’s height and weight from different countries. Introduction Factorial designs are popular in various fields such as ecology, biomedicine and psychology (Cassidy et al. 2008;Mehta et al. 2010;Kurz et al. 2015) as they allow us to study interaction effects between different factors alongside their main effects. In fact, Lubsen and Pocock (1994) pointed out that "it is desirable for reports of factorial trials to include estimates of the interaction between the treatments." The ANOVA-F-test is the most common tool for this but suffers from restrictive assumptions such as homoscedasticity and normality. Thus, several tests have been developed that allow for non-normal errors or are valid for heteroscedastic one-and two-way or even more general factorial designs (Johansen 1980;Brunner et al. 1997;Bathke et al. 2009;Pauly et al. 2015;Friedrich et al. 2017a, b;Harrar et al. 2019). All these procedures describe effects by (contrasts of) means. This is in line with a phenomenon observed in various areas: Comparisons are mainly based upon means or variances but not on their robust counterparts. This can be explained in part by the simplicity and elegance gained by using linear or, under independence, additive statistics. Nevertheless, it contradicts the important role of statistics based on quantiles, like the median and the interquartile range (IQR), in data exploration and modeling, e.g., in boxplots or summary statistics. The interest in analyzing quantiles has led to the development of quantile regression, which is commonly established nowadays (Koenker and Hallock 2001;Koenker et al. 2019). However, as, e.g., stressed by Beyerlein (2014) "it appears to be quite underused in medical research." One reason may be that, although there exist several approaches for specific designs (Sen 1962;Potthoff 1963;Fung 1980;Hettmansperger and McKean 2010;Fried and Dehling 2011;Chung and Romano 2013), there does not exist an equal abundance of methods based on quantiles for general factorial designs. There are procedures, at least for the median, but they often require strong distributional assumptions (as symmetry) or, at least, an extension to factorial designs is missing. Therefore, the main aims of the present paper are to develop inference procedures (tests and compatible confidence regions) (i) for the median, the IQR or any linear combination of quantiles. (ii) for factorial designs to study robust main and interaction effects. 
(iii) for general heterogeneous or heteroscedastic models beyond normality. (iv) being theoretically valid and performing satisfactorily for finite samples. To achieve these goals, we combine and extend the ideas of Chung and Romano (2013) (tests for equality of medians in one-way ANOVA models) and Pauly et al. (2015) (mean-based testing procedures in general factorial designs) to (simultaneously) infer arbitrary linear contrasts of general quantiles. In view of (ii) and (iv), we follow the idea of permuting studentized Wald-type statistics to obtain methods that are finitely correct in case of exchangeable data (e.g., under the null hypothesis of equal means/medians in the classic F-ANOVA normal model) but also asymptotically valid for general non-exchangeable settings. This alluring technique has originally been developed for special two-sample models (Neuhaus 1993;Janssen 1997;Janssen and Pauls 2003;Pauly 2011) and has recently displayed its full strength to obtain accurate methods in one-way (Chung and Romano 2013) and more general factorial designs Friedrich et al. 2017a;Smaga 2017;Harrar et al. 2019). However, to derive the fore-mentioned theoretical evidence in our general quantilebased approaches we could not employ the methods derived in the previously mentioned papers. In fact, to overcome some technical difficulties that occur when jointly permuting sample quantiles, we had to take a detour in which we extended some results for general permutation empirical processes and uniform Hadamard differentiability (van der Vaart and Wellner 1996) that are of own mathematical interest. Anyhow, this finally results in (i)-(iv), i.e., a flexible toolbox for inferring contrasts of different quantiles in factorial designs. In the special case of the median and its bootstrap-based variance estimator, we obtain the one-way permutation test derived in Chung and Romano (2013). Outline: We first introduce the model, estimators for population quantiles and how to formulate null hypotheses in them to test for certain main or interaction effects. In Sect. 3, we state the theory to handle the joint asymptotics for sample quantiles and their covariance matrix estimators. As the latter are crucial to obtain the correct dependency structure, we study different approaches: kernel density estimators, bootstrapping or certain interval estimates. As they are mostly only known for the sample median, we explain in Sects. 3.1-3.3 how to extend them to our general situations. From these findings, we deduce three different asymptotically valid testing procedures. To improve their small sample performance, we consider their respective permutation versions in Sect. 4, prove asymptotic exactness and analyze their power under local and fixed alternatives. To compare the small sample behavior of the resulting tests, we conducted extensive simulations presented in Sect. 5. Finally, we illustrate the new methodology by analyzing a recent dataset on height and weight of children from different countries in Sect. 6. All proofs details to higher-way layouts and additional simulation results are deferred to supplement. The setup We consider a general model given by mutually independent random variables, e.g., corresponding to the outcome from independent patients in randomized clinical trials, with absolutely continuous distribution functions F i and densities f i . This setup allows to incorporate factorial structure of different kinds by adequately splitting up indices. 
To accept this, consider, e.g., a two-way design with factors A (a levels) and B (b levels). Setting k = a · b, we split up the group index i = (i 1 , i 2 ) and model observations as X i 1 i 2 j ∼ F i 1 i 2 (i 1 = 1, . . . , a; i 2 = 1, . . . , b). Factorial designs of more complexity can be incorporated similarly ). Having the model fixed, we now turn to the parameters of interest: Choosing m ∈ N different probabilities 0 < p 1 < . . . < p m < 1, we want to study inference methods for the corresponding quantiles Pooling them in q = (q 1 , . . . , q k ) = (q 11 , . . . , q 1m , q 21 , . . . , q km ) , we are particularly interested in testing the QANOVA null hypothesis H 0 : Hq = 0 r for a contrast matrix H ∈ R r ×km of interest. Here, H is called a contrast matrix if H1 km = 0 r , where 1 d and 0 d are vectors in R d consisting of 1's and 0's only. Choosing the con-trast matrices in line with the design and the question of interest allows us to test various hypotheses about main and interaction effects. Moreover, we want to point out that respective confidence regions for corresponding contrasts of quantiles can be obtained straightforwardly by inverting the test procedures. In what follows, we will therefore focus on hypothesis testing but provide some exemplary confidence intervals in the context of the illustrative data analyses in Sect. 6. Turning back to the null hypothesis, we recall from general ANOVA that it is convenient to re-formulate it as H 0 : Tq = 0 km for the unique projection matrix T = H (HH ) + H (Brunner et al. 1997;Pauly et al. 2015;Smaga 2017). Here, A + denotes the Moore-Penrose inverse of the matrix A. In fact, both matrices, H and T, describe the same null hypothesis, while T has preferable properties as being symmetric and idempotent. To infer H 0 , we propose sensitive test statistics in the vector of corresponding sample quantiles. To introduce them, let 1{X i j ≤ t} denote the group-specific and pooled empirical distribution function, respectively, where n = k i=1 n i . Let X (i) 1:n i ≤ . . . ≤ X (i) n i :n i be the order statistics of group i. Then, the natural estimator of the quantile q ir is Examples of specific hypotheses To give some examples of hypotheses covered within this framework, we first consider a one-way design. For m = 1, we obtain the k-sample null hypothesis Here, I k ∈ R k×k denotes the unit matrix, J k = 1 k 1 k and we suppressed the second index of the quantiles (m = 1). Choosing p 1 = 1/2 gives the null hypothesis of equal medians which reduces to the null hypothesis of equal means in case of symmetric error distributions. Setting k = ab, we consider a two-way design with factors A (having levels i 1 = 1, . . . , a) and B (with levels i 2 = 1, . . . , b) and suppose that we like to formulate main and interaction effects in terms of quantiles: Here, ⊗ is the Kronecker product andq i 1 · ,q ·i 2 andq ·· are the means over the dotted indices. The latter hypotheses can also be described more lucid by utilizing an additive effects notation. To this end, we decompose the quantile q i 1 i 2 = q μ + q α i 1 + q β i 2 + q αβ i 1 i 2 from group (i 1 , i 2 ) into a general effect q μ , main effects q α i 1 and q β i 2 as well as an interaction effect q αβ i 1 i 2 assuming the usual side conditions i 1 q α Then, the null hypotheses can be written as This methodology can be straightforwardly extended to higher-way layouts as described in supplementary material. 
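As a concrete illustration of how such contrast matrices can be assembled, the sketch below builds the Kronecker-product contrasts for the main and interaction effects in a 2 x 2 design with a single quantile per group (m = 1) and forms the projection matrix T = H'(HH')^+ H. The helper names are ours, and the construction is one standard centering-matrix form consistent with the notation above, not the authors' code.

```python
import numpy as np

def centering(d):
    """P_d = I_d - J_d / d, the usual centering matrix."""
    return np.eye(d) - np.ones((d, d)) / d

def averaging(d):
    """(1/d) J_d, the averaging matrix."""
    return np.ones((d, d)) / d

a, b = 2, 2  # levels of factors A and B; one quantile per group (m = 1)

# Kronecker-form contrast matrices for main effect A, main effect B and the
# AB interaction, consistent with the two-way hypotheses described in the text.
H_A = np.kron(centering(a), averaging(b))
H_B = np.kron(averaging(a), centering(b))
H_AB = np.kron(centering(a), centering(b))

def projection(H):
    """Unique projection matrix T = H'(HH')^+ H describing the same null hypothesis."""
    return H.T @ np.linalg.pinv(H @ H.T) @ H

for name, H in (("A", H_A), ("B", H_B), ("AxB", H_AB)):
    T = projection(H)
    assert np.allclose(T @ T, T)                 # T is idempotent
    assert np.allclose(H @ np.ones(a * b), 0.0)  # rows sum to zero: H is a contrast matrix
    print(f"effect {name}: rank(T) = {np.linalg.matrix_rank(T)}")
```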
Beyond working with specific quantiles, it is also possible to infer hypotheses about linear combinations c q i = m r =1 c r q ir of quantiles. Here, c ∈ R k is an arbitrary vector, e.g., choosing c 1 = −c 2 = −1 for m = 2 and setting p 1 = 0.25 and p 2 = 0.75 lead to the group-specific interquartile ranges c q i = I Q R i . To obtain similar hypothesis in these parameters as above, the contrast matrix has to be specified to H = H ⊗ (c 1 , . . . , c r ), where H is one of the aforementioned contrast matrices. For example, H = P k together with the previous choices for c and p 1 , p 2 gives the null hypothesis {I Q R 1 = · · · = I Q R k } of equal IQRs among all k groups. However, the framework is much more flexible and even allows to infer hypotheses about IQRs and medians simultaneously by choosing p 1 = 0.5, p 2 = 0.25 and p 3 = 0.75 together with adequate contrast matrices. Asymptotic results To establish the joined asymptotic theory for the sample quantiles and their covariance matrix estimators, we assume non-vanishing groups throughout: Recall that the sample median will be asymptotically normal if the underlying density is positive and continuous in a neighborhood of the true median. This statement can be extended to the multivariate case (Serfling 2009), e.g., under the following assumption, which we consider throughout. Assumption 1 Let F i be continuously differentiable at q ir with positive derivative f i (q ir ) > 0 for every r = 1, . . . , m and i = 1, . . . , k. Proposition 1 (Theorem B in Sec. 2.3.3 of Serfling (2009) where Z i is a zero-mean, multivariate normal distributed random variable with nonsingular covariance matrix (i) given by its entries In general, the covariance matrix is unknown and, thus, needs to be estimated. Let us suppose, for a moment, that a consistent estimator (i) for (i) is available. Then, we could define a Wald-type statistic for testing H 0 : where ⊕ denotes the direct sum. By Proposition 1, the limiting covariance matrix (i) is positive definite, which implies that the Moore-Penrose inverse (Rao and Mitra 1971, Theorem 9.2.2). We summarize this as Thus, comparing S n (T) with the (1 − α)-quantile of the limiting null distribution defines an asymptotic exact level α test ϕ n = 1{S n (T) > χ 2 rank(T),1−α }. As Proposition 1 is not restricted to the null hypothesis, we can even deduce that n −1 S n (T) always converges in probability to (Tq) (T T) + Tq. Since Tq = 0 km implies (Tq) (T T ) + Tq > 0 (see Supplement for a verification) consistency follows. It remains to find appropriate estimators (i) for the unknown covariance matrices. For that purpose, we examine different strategies: "Brute force" via plug-in of a kernel density estimator into (6) or using a different approach that first estimates the diagonal elements (i) aa and then employs their following relationship with the remaining matrix elements: In the latter case, we consider two ways for estimating the variances (i) aa : Via bootstrapping (Efron 1979) or with the interval estimator proposed in Price and Bonett (2001). In the following, we explain all three possibilities in detail. Kernel estimator A popular way to estimate densities is so-called kernel density estimators, which are based on a Lebesgue density K : R → [0, ∞) with K (x) dx = 1 and a bandwidth h n → 0. For more flexibility, we allow for different choices within the groups and add the corresponding group index, i.e., we work with K i and h ni . 
Then, the kernel density estimator for f i is given by Nadaraya (1965) proved strong uniform consistency of (9), i.e., we have Assumption 2 Let K i be of bounded variation and f i be uniformly continuous. Furthermore, suppose that ∞ n=1 exp(−γ nh 2 ni ) converges for any choice of γ . Here, the convergence of the series ∞ n=1 exp(−γ nh ni ) is, e.g., implied by choosing h n,i = n −θ i for some θ ∈ (0, 1/2). We further note that Schuster (1969) discussed necessary and sufficient conditions for the stated uniform consistency. In particular, all f i need to be uniformly continuous. Moreover, the conditions on the bandwidths can be weakened when the kernel fulfills additional regularity conditions (Silverman 1978). Anyhow, combining Proposition 1 and the strong consistency of (9) yields consistency of the plug-in covariance matrix estimators. Bootstrap estimator In their one-way tests for equality of medians, Chung and Romano (2013) used the bootstrap approach of Efron (1979) to estimate the sample median's asymptotic variance. We adopt this idea for general quantiles. Therefore, for every group i, let X * i1 , . . . , X * in i denote a bootstrap sample (drawn with replacement) from the observations X i = (X i j ) j=1,...,n i . From this, we can calculate bootstrap versions of all previous estimators which we indicate by a superscript * . The mean squared error of the bootstrapped sample quantile q * ir given the data can be explicitly calculated using (3) Following Efron (1979), the probabilities P i j can be rewritten to where B n, p denotes a binomial distributed random variable with size parameter n and success probability p. In contrast to the standard jackknife method, the bootstrap median variance estimator ( σ * i (1/2)) 2 converges to 1/(4 f 2 i (F −1 i (1/2))) as desired (Efron 1979). Moreover, a detailed proof for strong consistency of this estimator was given by Ghosh et al. (1984) under Interval-based estimator McKean and Schrader (1984) introduced an estimator for the sample median standard deviation based on a standardized confidence interval. Later, Price and Bonett (2001) suggested to modify this estimator to improve its performance in small sample size settings. Both estimators are consistent (Price and Bonett 2001) and can compete with the aforementioned bootstrap approach in simulations (McKean and Schrader 1984; Price and Bonett 2001) with a slightly better performance of the Price-Bonnet modification. While both papers only treat the median, extensions to general quantiles follow intuitively and have already been used, e.g., for the 25%-and 75%-quantile in Bonett (2006). The (extended) McKean-Schrader estimator for the standard deviation of the pth sample quantile, p ∈ (0, 1), is given by are the lower and upper limits of binomial intervals. Here, z α/2 denotes the (1−α/2)-quantile of the standard normal distribution. Typically, α = 0.05 is chosen leading to z α/2 ≈ 1.96. A brief discussion on the effect of the choice α on the estimator can be found in Price and Bonett (2001). In fact, the Price-Bonnet modification concerns the choice of α: They propose to replace it in the denominator by the following finite sample correction (for ease of notation we suppressed the dependency on i) Clearly, α * n ( p) → α by the central limit theorem. For large sample sizes, the benefit of the correction is negligible and may even lead to computational problems due to n i j 1, especially for j ≈ n i /2. 
Thus, we only use the modifications for sample sizes smaller than 100 and recommend to set α * n ( p) = α for larger values (n i > 100). Moreover, the simulations of Price and Bonett (2001) reveal that additionally adding 2n −1/2 i to the denominator results in a slight reduction of bias and mean squared error. Altogether, we thus define their extended estimator for the respective standard deviation as As explained above, this estimator is consistent for the variance, leading to: Lemma 3 We have for all i = 1, . . . , k and a, b = 1, . . . , m: Utilizing the different choices of covariance estimators results in three different versions of the asymptotic test ϕ n . However, simulation results (Sect. 5) exhibit serious issues for small to moderate sample sizes which may be due to a rather poor χ 2approximation to the test statistic. To tackle this problem, we propose the initially mentioned technique of permuting studentized statistics. Permutation test For a better finite sample performance, it is often advisable to replace the asymptotic critical value of the test, here the (1 − α)-quantile of the χ 2 rank(T) -distribution, by a resampling-based critical value. For the current problem, we promote the permutation approach, which leads to a finitely exact test under exchangeability, i.e., under H 0 : F 1 = . . . = F k . Moreover, the proper studentization within the Waldtype statistic makes it possible to transfer the consistency and asymptotic exactness (under H 0 : Tq = 0) of the tests ϕ n to their permutation versions. To explain this, let X π = (X π i j ) i=1,...,k; j=1,...,n i be a random permutation of the pooled data X = (X i j ) i=1,...,k; j=1,...,n i . As for Efron's bootstrap, we draw new samples from the pooled data, but now without replacement. In other words, we randomly permute the group memberships of the observations X i j . Pooling the data affects our Assumptions 1 and 2 such that we need to replace the original distribution functions F i and their densities f i by their pooled versions To be concrete, we postulate Assumption 4 Let F be differentiable with uniformly continuous derivative f such that f (F −1 ( p r )) > 0 for all r , and K i be a kernel fulling Assumption 2. As in Chung and Romano (2013), it turned out that the asymptotic correctness of the permutation approach needs a certain convergence rate in (4): Theorem 3 Under H 0 : Tq = 0 km as well as under H 1 : Tq = 0 km , the permutation version S π n (T) of S n (T) with any of the covariance estimators (10)-(11) always mimics its null distribution asymptotically, i.e., Replacing the critical value χ 2 rank(T),1−α of the asymptotical tests with c π n (α), the (1 − α)-quantile of the conditional distribution function x → P(S π n (T) ≤ x|X), leads to three different permutation tests ϕ π n = 1{S n (T) > c π n (α)}. Under the assumptions given in Theorem 3, it follows that c π n (α) converges in probability to χ 2 rank(T),1−α irrespective whether the null hypothesis is true or not. Thus, we can deduce the asymptotic exactness of the permutation test and its consistency for general fixed alternatives (Janssen and Pauls 2003, Lemma 1 and Theorem 7). In addition, we prove in the next section that the permutation test has an asymptotic relative efficiency of 1 compared to the asymptotic test, i.e., the tests' asymptotic power values coincide for local alternatives. 
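To summarize the overall procedure in code, the following sketch implements a simplified variant of the studentized permutation test: group-wise sample quantiles, a bootstrap estimate of their variances (so the root-n scaling is absorbed into the studentization), a Wald-type statistic for H0: Hq = 0, and random permutations of the pooled observations. It is a rough illustration under these simplifications, not the authors' implementation, and it omits the kernel- and interval-based variance estimators as well as the full covariance structure across quantiles.

```python
import numpy as np

rng = np.random.default_rng(1)

def group_quantiles(groups, probs):
    """Stacked vector of sample quantiles (one block of len(probs) per group)."""
    return np.concatenate([np.quantile(x, probs) for x in groups])

def bootstrap_var(groups, probs, n_boot=100):
    """Diagonal covariance estimate of the quantile vector via Efron's bootstrap."""
    var = []
    for x in groups:
        boots = np.array([np.quantile(rng.choice(x, size=len(x), replace=True), probs)
                          for _ in range(n_boot)])
        var.extend(boots.var(axis=0, ddof=1))
    return np.diag(var)

def wald_stat(groups, probs, H):
    q = group_quantiles(groups, probs)
    V = bootstrap_var(groups, probs)
    Hq = H @ q
    return Hq @ np.linalg.pinv(H @ V @ H.T) @ Hq

def qanova_permutation_test(groups, probs, H, n_perm=199):
    """Permutation p-value for H0: H q = 0, permuting the pooled observations."""
    observed = wald_stat(groups, probs, H)
    sizes = [len(x) for x in groups]
    pooled = np.concatenate(groups)
    count = 0
    for _ in range(n_perm):
        perm = rng.permutation(pooled)
        split = np.split(perm, np.cumsum(sizes)[:-1])
        if wald_stat(split, probs, H) >= observed:
            count += 1
    return observed, (count + 1) / (n_perm + 1)

# Example: test equality of medians (p = 0.5) across k = 3 groups.
k, probs = 3, [0.5]
groups = [rng.normal(loc=mu, scale=1.0, size=30) for mu in (0.0, 0.0, 0.5)]
H = np.eye(k) - np.ones((k, k)) / k          # centering contrast P_k
stat, pval = qanova_permutation_test(groups, probs, H)
print(f"Wald-type statistic: {stat:.2f}, permutation p-value: {pval:.3f}")
```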
Local alternatives To study local alternatives, we need to replace Model (1) with its local counterpart given by a triangular array of row-wise independent random variables X ni j ∼ F ni (i = 1, . . . , k; j = 1, . . . , n i ) with absolutely continuous distribution functions F ni , corresponding densities f ni , quantiles q nir and quantile vector q n = (q n11 , . . . , q n1m , q n21 , . . . , q nkm ) . Within this framework, we discuss local alternatives Tq n = O(n −1/2 ), i.e., small perturbations of the null hypotheses, under the following additional regularity conditions: Assumption 5 For every i = 1, . . . , k, let F i be an absolutely continuous distribution function with corresponding density f i . Moreover, set for all n ∈ N and all x ∈ R. (ii) Suppose that f i is continuous and positive at q ir = F −1 i ( p r ) and that f ni converges uniformly to f i in a compact neighborhood around q ir for all r . (iii) For the permutation approach, suppose additionally (12), Assumption 4 and uniform convergence of f ni to f i in a compact neighborhood around q r = F −1 ( p r ) for every r . (1), condition (i) ensures the usual √ n-convergence of F ni to F i . Anyhow, the tests' asymptotic power functions can be described by means of a non-central χ 2 distribution: Simulations To assess the tests' small sample performance, we complement our theoretical findings with numerical comparisons. For ease of presentation, we consider 1. A one-way layout in which we like to infer the null hypothesis H 0 : {I Q R 1 = · · · = I Q R 4 } of equal IQRs, i.e., as described at the end of Sect. 2 we choose probabilities p 1 = 0.25 and p 2 = 0.75 and specify the contrast matrix as H = P 4 ⊗ (−1, 1). 2. A 2 × 2 layout in which we test for the presence of main or interaction effects measured in terms of medians, i.e., setting k = a ·b = 2 ·2 we infer the hypotheses H 0 : {H A q = 0 ab } (no main median effect of factor A) and H 0 : In addition, we present detailed simulations for a five-factor model in supplement. Data were simulated within Model (1) where we consider (a) balanced and unbalanced settings given by sample size vectors n 1 = (15, 15, 15, 15) and n 2 = (10, 10, 20, 20), respectively. (b) five different distributions for i j : the standard normal distribution (N 0,1 ), Student's t-distribution with d f = 2, 3 degrees of freedom (t 2 , t 3 ), the Chi-square distribution with d f = 3 (χ 2 3 ) and the standard log-normal distribution (L N 0,1 ). All distributions were centered by subtracting the respective median m i from i j . (c) a homoscedastic setting σ 1 = (σ 1 , . . . , σ 4 ) = (1, 1, 1, 1) and heteroscedastic designs σ 2 = (1, 1.25, 1.5, 1.75) and σ 3 = (1.75, 1.5, 1.25, 1). Together with n 2 , the latter represent a positive and negative pairing, respectively. The simulations were conducted by means of the computing environment R (R Core Team 2020), version 3.5.0, generating N sim = 5000 simulation runs and N perm = 1999 permutation iterations for each setting. The nominal level was set to α = 5%. We compare the type-1 error rate as well as the power values of our tests below. In both cases, we include all three variance estimation strategies introduced in Sects. 3.1-3.2. For the kernel density estimation, we choose the classical Gaussian kernel with a bandwidth according to Silverman's rule-of-thumb (Silverman 1986, Eq. (3.31)), where we applied the function bw.nrd0 from the R package stats to determine the latter. 
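For concreteness, one simulated dataset following the design just described could be generated as in the sketch below. The particular setting shown (unbalanced sample sizes, heteroscedastic scaling, chi-square errors centred at their median) is only one of the combinations listed in (a)-(c), and the code is our reading of that setup rather than the authors' simulation script.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

# One setting of the simulation design: unbalanced sample sizes n2,
# heteroscedastic scaling sigma2 and chi-square(3) errors, centred at their
# theoretical median so that all groups share the true median 0.
n_vec = [10, 10, 20, 20]
sigma_vec = [1.0, 1.25, 1.5, 1.75]
error = stats.chi2(df=3)
center = error.median()                      # median-centering, as in (b)

groups = [s * (error.rvs(size=n, random_state=rng) - center)
          for n, s in zip(n_vec, sigma_vec)]
print("sample medians:", [round(float(np.median(g)), 2) for g in groups])
```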
In case of the 2 × 2-median design, these methods are additionally compared with the current state-of-the-art tests for regression parameters in quantile regression: From the R package quantreg (Koenker et al. 2019), we choose the rank inversion method by Koenker and Machado (1999) for non-iid errors, the default choice in quantreg, and the wild bootstrap approach of Feng et al. (2011). For a fair comparison, we include the main factors A and B and their interaction in the respective regression model. Hence, regression parameters β A , β B and β AB are estimated, and corresponding p-values for testing H 0 : β A = 0 (no main effect A) and H 0 : β AB = 0 (no interaction effect) are derived by both quantreg approaches. Type-1 error In this subsection, we discuss the type-1 error control of all procedures. To simulate under the corresponding null hypotheses, we set μ i = μ i 1 i 2 = 0 in the 2 × 2-median-based cases and restrict to the homoscedastic setting σ = σ 1 for the 4-sample IQR testing question. The standard error of the estimated sizes in case of N = 5000 simulation runs is 0.3% if the true type-1 error probability is 5%, i.e., estimated sizes outside the interval [4.4%,5.6%] deviate significantly from the nominal 5% significance level. Table 1 Type-1 error rate in % (nominal level α = 5%) for testing the median null hypothesis of no main effect in the 2 × 2 design for the rank-based (Rank) and wild bootstrap (Wild) quantile regression approach as well as all asymptotic and permutation tests using the interval-based (Int), kernel density (Kern) and bootstrap (Boot) approach for estimating the covariance matrix The observed type-1 error rates for the 2×2-median design are displayed in Table 1 for testing the hypothesis of no main effect. It is readily seen that all asymptotic tests are rather conservative with type-1 errors reaching down to 1.7% for the bootstrapbased and 0.7% for the interval-based approaches, respectively. This conservativeness is less pronounced for the test based upon the kernel density variances estimator that exhibits values between 2.7% and 5.7% and a reasonable good error control in case of the standard normal and χ 2 3 distribution except for the settings with positive variance pairing. In contrast, all permutation methods control the type-1 error level reasonably well except for the situations with a skewed distribution and negative pairing. Here, we find error rates up to 7.2% for the tests based upon the interval-and kernel-based variance estimators. For the two quantile regression methods from the R package quantreg (Koenker et al. 2019), the observations are diverse: The rank-based approach tends to conservative test decisions in case of unbalanced sample sizes with observed error rates in the range 2.5−3.5%. However, in case a balanced homoscedastic design with symmetric errors, a slight liberality (6.4−6.9%) is detected. For all other settings, Table 2 Type-1 error rate in % (nominal level α = 5%) for the four-sample IQR testing problem of our asymptotic and permutation tests using the interval-based (Int), kernel density (Ker) and bootstrap (Boo) approach for estimating the covariance matrix the decisions are accurate. In comparison, the wild bootstrap strategy is liberal for almost all balanced settings (with observed error rates up to 7.7%) and conservative for all positive pairings (2.8−3.7%). Overall, the permutation procedure that uses a bootstrap variance estimator exhibits the most robust type-1 error control with values ranging from 4.7−6.4%. 
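The binomial bound quoted at the beginning of this subsection can be checked directly: with N_sim = 5000 runs and a true level of 5%, the Monte Carlo standard error is sqrt(0.05 * 0.95 / 5000), roughly 0.31%, which gives the acceptance band of about [4.4%, 5.6%] used above. A one-line verification:

```python
from math import sqrt

alpha, n_sim = 0.05, 5000
se = sqrt(alpha * (1 - alpha) / n_sim)            # Monte Carlo SE of an estimated size
band = (alpha - 1.96 * se, alpha + 1.96 * se)     # approximate 95% acceptance band
print(f"SE = {se:.4f}; band = [{band[0]:.3%}, {band[1]:.3%}]")
```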
Summarizing the results for the interaction tests presented in supplement, we get a similar impression for the wild bootstrap quantile regression strategy and the six Waldtype procedures. For them, the only major difference is that the permutation methods also exhibit a fairly well error control for the settings with skewed distributions and negative pairing. However, the results for the rank-based quantile regression method are partially different: While the type-1 error rate is still accurate for balanced sample sizes, the decisions become very liberal in the unbalanced scenarios with estimated type-1 error rates between 6.1% and 10.1%. The type-1 error rates in the situation of the four-sample testing problem of equal IQRs are presented in Table 2. Here, the finite sample behavior of the asymptotic tests becomes even more extreme: For the symmetric distributions, the type-1 error rates are between 0.4% and 1.3% for the interval-based estimator and between 0.3 and 1.2% for the bootstrap approach, i.e., very conservative. In contrast, the decisions for the kernelbased method are quite accurate with values between 3.7% and 5.0%. Switching to skewed distribution, however, the type-1 error rates increase, leading to very liberal decisions in the log-normal case with values up to 10.2% for the kernel-based and 7.5% for the interval-based tests. Here, only the bootstrap-based method remained very conservative. In comparison, all permutation counterparts lead to satisfactory type-1 error control close to the 5% level. Due to the extreme behavior of the asymptotic tests in this setting, we conducted additional simulation results in supplement. Therein, all asymptotic tests for equality of IQRs more or less approach the 5% level for larger group-specific sample sizes n i ≥ 150. Power behavior Due to the diverse behavior of the asymptotic tests and the rank-based quantile regression method under the null hypotheses and for ease of presentation, we solely focus on permutation tests and the wild bootstrap quantile regression strategy here. The results for the asymptotic tests are presented in supple- Fig. 1 Power curves for the 2 × 2-median testing problem (first two rows) and for the four-sample IQR testing problem (last row) of the permutation PBK test (dash-dotted), the wild bootstrap quantile regression test (dashed) and the three permutation tests based on interval-based (long-dashed), kernel density (dotted) and bootstrap (solid) covariance matrix estimation, respectively, for n = n 2 , σ = σ 1 and shift alternatives μ = (0, 0, 0, δ) (median) or scale alternatives σ = (1, 1, 1, 1 + δ) (IQR) ment, and apart from their different levels under H 0 , their power curves run almost parallel to the ones of the permutation version. To achieve a scenario under the alternative in the 2 × 2-median test setting, we disturbed the respective null setup by adding a shift parameter δ = μ 2,2 to the last group. In addition to the aforementioned methods, we considered the permutation Wald-type test (PBK) of Pauly et al. (2015) which was developed for testing means in general factorial designs. Their procedure is implemented in the R package GFD (Friedrich et al. 2017b). For a fair comparison, we included their PBK test just for the cases where mean and median coincide, i.e., for the symmetric distributions. The results for the procedures inferring a main effect are presented in Fig. 1, while the corresponding power curves of the interaction tests are shown in supplement. Studying Fig. 
1, we observe that the PBK test leads to higher power values compared to our tests for the normal distribution settings but is less powerful under the t 2 -and t 3distributions. An explanation may be given by the (asymptotic) efficiencies of the location estimators: While the sample mean is more efficient than the sample median under normal distributions, the situation is reversed for the two more heavy-tailed t-distributions. A comparison among the median-based permutation tests shows that the interval-based approach leads to lower power values than the other two methods for both t-distributions, while the bootstrap approach is slightly less powerful than the other two for the skewed log-normal distribution. Under normality, however, the tests' power functions are almost identical. In comparison, the wild bootstrap quantile regression method has considerably less power than all other methods for testing main median effects. The power curves for the interaction effects presented in supplement show a similar pattern for almost all tests. The only exception is the wild bootstrap approach which exhibits a similar power behavior as the permutation tests. Moreover, it is slightly advantageous for shift alternatives with δ > 1. To obtain alternatives for the four-sample IQR testing problem, we consider scale alternatives σ = (1, 1, 1, 1 + δ). For ease of presentation, we only show the results for normal as well as lognormal distributions here. The resulting power curves are plotted in Fig. 1. We can observe that the kernel density approach leads to lower power values compared to the other two methods. Recommendation Summarizing the findings, we recommend the use of the permutation methods over their asymptotic counterparts as they show a much better type-1 error control in case of small and moderate sample sizes (n i ≤ 200). However, there is no general recommendation for choosing between the three permutation versions as their power behavior (slightly) differed with respect to underlying settings, e.g., for comparing IQRs the interval-and bootstrap-based approaches performed better, while the kernel method exhibits the largest power for testing medians in a 2 × 2 design with heavy tails. In comparison with quantile regression, the advantage of the proposed factorial design approach is the simple incorporation of interaction effects without a loss in power. This benefit can be seen in the power simulations for the 2 × 2-median design, as shown in Fig. 1, and becomes even more pronounced in higher-way layouts, see Green et al. (2002) and Green (2012) and the additional simulation results in supplement. Illustrative data analysis A typical everyday situation in which we are confronted with quantiles is percentile curves for child heights and weights. We re-analyzed growth and weight data of children from five sites (Brazil, India, Guatemala, the Philippines, South Africa), which was provided to us by the COHORTS group (Richter et al. 2012). Both, height and weight, were converted to z-scores regarding the WHO child standards (de Onis et al. 2007). Having a comparison of percentile curves in mind, we test for effects in the 25%-, 50%-and 75%-quantile simultaneously. This also demonstrates the flexibility of the proposed methodology. For illustrative purposes, we focus on the following subgroups: Example 1 We compare the birth weight of firstborns from the countries (factor A) Brazil and South Africa including both genders (factor B). To avoid confounding Fig. 
2 Group-wise boxplots (outliers are not displayed) for the birth weight data from Example 1 (left) and the height data from Example 2 (right) Table 3 For the effect of the country on the birth weight (Example 1) and the maternal height on the height at 2 years, the p values (in %) are shown for our asymptotic and permutation approach using the interval-based (Int), kernel density (Ker) and bootstrap (Boo) strategies for covariance matrix estimation effects regarding age, education or marital status, we restrict our analysis to 30-yearold or younger married mothers with a comparable education level of 9 completed school years. The n = 173 children are divided into n 1 = 65 boys and n 2 = 46 girls from Brazil and n 3 = 36 boys and n 4 = 26 girls from South Africa. We would like to infer whether there are differences between the countries regarding the boys' and girls' birth weight, respectively. Example 2 We investigate the effect of the mother's height (factor A) on the children's height at the age of 2 years. Both sexes (factor B) are included. We restrict to firstborns of unmarried mothers from the Philippines. For this analysis, we divide the women into the groups "small" and "tall" consisting of the women, respectively, being smaller and taller than the median height of 150cm. The group "small" consists of data for n 1 = 8 boys and n 2 = 13 girls, and in the group "tall," there are data for n 3 = 12 boys and n 4 = 11 girls. To get a first graphical impression, the group-specific box plots are presented in Fig. 2. In both cases, it appears that factor A (country and maternal height, respectively) leads to a shift of all three empirical quantiles of the children's height and weight. To infer this conjecture, we like to check for a main effect of factor A regarding the three quantiles q i = (q i1 , q i2 , q i3 ) , i = 1, . . . , 4 corresponding to the probabilities ( p 1 , p 2 , p 3 ) = (0.25, 0.5, 0.75) simultaneously. That is, we test H 0 : {q 1 +q 2 = q 3 + q 4 }. The p values of all three asymptotic and permutation tests (ignoring multiplicity) are summarized in Table 3. It is apparent that the asymptotic and permutation test leads to different decisions at the nominal level α = 5%. In fact, the seemingly present effect from Fig. 2 is Table 4 Point estimates θ for the difference θ i 2 = q 2i 2 − q 1i 2 of the countries' median with respect to sex for Example 1 together with permutation-based 95% confidence intervals Here, Int (interval-based), Ker (kernel density) and Boo (bootstrap) indicate the applied covariance matrix estimation technique not detected by any asymptotic tests with p values around 8-10%. In contrast, the p values of the permutation approaches are, except for the kernel density method in Example 2, less than 5%. To investigate the reasons why these decisions are different, we run additional simulations for the three-quantile testing problem under the sample size settings of Example 2. The results are presented in supplement and may explain the above decisions to some extent: As in Sect. 5, the asymptotic tests are quite conservative with type-1 error rates ranging between 0.8% and 4.2%. Moreover, the permutation kernel density approach is less powerful than the other two permutation methods under shift alternatives for skewed distributions. Beyond hypothesis testing, the theoretical results can also be used to formulate asymptotically valid confidence regions for contrasts of quantiles by inverting the corresponding tests. 
We exemplify this for the difference between two quantiles as effect parameter of interest. To this end, consider Example 1 and encode factor A (country) and factor B (gender) as follows: i 2 = 1 for the boys, i 2 = 2 for the girls, i 1 = 1 for Brazil and i 1 = 2 for South Africa. Then, for a fixed gender i 2 , the asymptotic correct z-and permutation-(1 − α)-confidence intervals for the difference θ i 2 = q 2i 2 − q 1i 2 of the countries' quantiles are ( q 2i 2 − q 1i 2 ) ± z α/2 √ n σ 2 1i 2 + σ 2 2i 2 , ( q 2i 2 − q 1i 2 ) ± c π ni 2 (α/2) √ n σ 2 1i 2 + σ 2 2i 2 , respectively. Here, σ 2 i 1 i 2 = (i 1 i 2 ) 11 is an estimator for the asymptotic variance of √ n( q i 1 i 2 − q i 1 i 2 ) using one of our strategies from Sects. 3.1-3.3 and c π ni 2 (α/2) is the (1−α/2)-quantile of the permutation distribution of √ n( q π 2i 2 − q π 1i 2 )( σ 2,π 1i 2 + σ 2,π i 2 ) −1/2 . To illustrate the application, we calculated the 95% permutation-based confidence intervals for the median difference separately for gender in Table 4. Ignoring multiplicity, we see that all three permutation procedures agree on a significant difference in the girl's median birth weight (at level α = 5%) but do not find a corresponding effect for the boys. Discussion While an abundance of methods exists for inferring means and mean vectors in general heterogeneous factorial designs (Johansen 1980;Brunner et al. 1997;Bathke et al. 2009;Zhang 2012;Konietschke et al. 2015;Pauly et al. 2015;Harrar et al. 2019), there are not so many methods for the analysis of medians or quantiles. To this end, we combined the idea of studentized permutations from heteroscedastic mean-based and one-way median-based ANOVA (Chung and Romano 2013) to establish flexible methods for inferring quantiles in general factorial designs which we coin QANOVA. In fact, we proposed three permutation methods in Wald-type statistics that only differ in the way the covariance matrix is estimated. All of them are applicable to construct confidence regions and to test null hypotheses about arbitrary contrasts of different quantiles. The resulting procedures are finitely exact under exchangeability of the data and shown to be asymptotically valid. In doing so, we had to extend some results about permutation empirical processes and uniform Hadamard differentiability (van der Vaart and Wellner 1996) that are of own mathematical interest. From them, we could deduce the asymptotic validity as well as results about the procedures' asymptotics under fixed and local alternatives. In the special case of the median, these results even reveal new insights into one-way permutation test of Chung and Romano (2013). In addition to the theoretical findings, we analyzed the procedures in extensive simulations. Our results indicate an accurate type-1 error control for the permutation methods in almost all settings. Only in case of skewed distributions and small unbalanced samples with a heteroscedastic negative pairing, a slight liberality was found when testing for main effects. Beyond this, we can recommend all three permutation methods with clear conscience. Currently, we work on implementing them within an R-package. We are confident that the current results can be transferred to questions about related quantile-based estimands, e.g., coefficients of quartile variation (Bonett 2006) as well as to complex ANOVA settings with correlated variables, e.g., quantilebased repeated measurements or complex MANOVA designs.
10,109.4
2019-12-19T00:00:00.000
[ "Mathematics" ]
A MODULAR MOBILE MAPPING PLATFORM FOR COMPLEX INDOOR AND OUTDOOR ENVIRONMENTS : In this work we present the development of a prototype, mobile mapping platform with modular design and architecture that can be suitably modified to address effectively both outdoors and indoors environments. Our system is built on the Robotics Operation System (ROS) and utilizes multiple sensors to capture images, pointclouds and 3D motion trajectories. These include synchronized cameras with wide angle lenses, a lidar sensor, a GPS/IMU unit and a tracking optical sensor. We report on the individual components of the platform, it’s architecture, the integration and the calibration of its components, the fusion of all recorded data and provide initial 3D reconstruction results. The processing algorithms are based on existing implementations of SLAM (Simultaneous Localisation and Mapping) methods combined with SfM (Structure-from-Motion) for optimal estimations of orientations and 3D pointclouds. The scope of this work, which is part of an ongoing H2020 program, is to digitize the physical world, collect relevant spatial data and make digital copies available to experts and public for covering a wide range of needs; remote access and viewing, process, design, use in VR etc. INTRODUCTION 3D mapping platforms are a core component for many and diverse workflows and are becoming increasingly useful the last years. There is a growing interest for platforms that allow accurate, but also fast and massive capturing of data for 3D mapping. Many commercial solutions are offered as products or services that target the recording of either outdoors or indoors environments. They usually rely on a combination of modern geospatial technologies such as laser scanning, GNSS navigation and photogrammetry. They come in different forms; systems mounted on cars, trolleys, backpacks, or even autonomous robots. Such systems can capture effectively large urban areas, or buildings and construction sites, but they remain very expensive due to the top-end hardware components they rely on. Urban planning and public infrastructure management are application areas that benefit from such technologies as they require constantly updated geographic information. Mobile mapping technologies have been widely used in a variety of applications in urban areas, for mapping transportation infrastructure, utilities, buildings, vegetation and lately for autonomous vehicle driving (Shi et al., 2017). A recent survey of such applications for lidar based mobile mapping is presented in the work of Wang et al. (2019). Real Estate, and Architecture, Engineering, Construction (AEC) sectors are also adopting digitization. Building Information Modelling (BIM) and Geographic Information Systems (GIS) are becoming standard tools that handle large amounts of geospatial data. Besides specialists, the general public is also daily consuming mobile mapping data through online tools or mobile apps like Google Street View (Anguelov et al., 2010). Of special interest is also the case of Mapillary, a collaborative alternative of Google Street View that allows users not only to access but also to capture street level videos or image sequences with any camera and upload them on a map. 
The development of new, more versatile mobile mapping systems is expected to grow due to i) an abundance of new medium/low cost sensors that are widely produced for the mobile phone and the automotive industry, and ii) constant advancement of the underlying methods and technologies from the robotics and autonomous navigation communities. The work presented here is part of an ongoing European H2020 STARTS Research Program called "Mindspaces" that aims to utilize 3D mapping among other technologies, towards artdriven adaptive outdoors and indoors design (Alvanitopoulos et al., 2019). The scope of our research is to provide a tool that captures multiple types of relevant spatial data of the environment such as raw video footage, georeferenced imagery, pointclouds etc. These can be subsequently exploited by designers and artists to collaborate with scientists and engineers towards the creation of innovative designs and experiences. In the following sections, after a short review of related work, we present the individual sensors and components of our platform, we describe their integration within the Robotics Operation System (ROS) and discuss processing workflows for generating 3D reconstructions from the collected data. Initial experiments are also presented. Mobile Mapping Systems 3D laser scanners, photogrammetry and surveying have been the typical means for 3D recording of physical world and manmade constructions. The scientific and technological advances during the last decade have made possible the adaptation in everyday use of much more scalable approaches of data capturing for 3D reconstruction. In this context, during the last years, several mobile mapping platforms are available in the market (Puente et al., 2013). These fall into two main categories, i) those that are suitable for mapping of large-scale outdoor environments and ii) those suitable for indoor scenes. Outdoor mapping Most major players in the geospatial market offer similar systems, like UltraCam Mustang by VEXCEL (VEXCEL, 2020), RIEGL (RIEGL, 2020), LEICA Pegasus by HEXAGON (LEICA, 2020) and VIAMETRIS (VIAMETRIS, 2020). All these systems combine proprietary high-end laser scanners with high performance INS/GNSS units and optionally 360 panoramic high resolutions multi-camera rigs. The latter two offer also backpack versions of their platforms for vehicle restricted areas. Imajbox by imajing, originally designed for trains and now updated for cars is a lower cost vision-based alternative (imajing, 2020). Indoor mapping For interior spaces different approaches exist. There are platforms with similar technologies like the car-mounted systems that are built on trolleys (NAVVIS, 2020), helmets (REscan, 2020) or backpacks (LEICA, 2020) (VIAMETRIS, 2020). Handheld devices like the PARACOSM PX-80 (PARACOSM, 2020) are also available but their accuracy is not directly comparable to the above systems. Matterport (MATTERPORT, 2020) has a dedicated solution for creating digital twins for the Real Estate market. Indoor spaces are scanned via a proprietary low-cost 360 camera with depth sensors, or lately via a mobile phone and all required processes as well as hosting of data is done on a web service they provide. For the Architecture, Engineering, Construction (AEC) market Doxel (DOXEL, 2020) provides automated solutions for quality inspection and progress tracking. They use artificial intelligence and autonomous robots that capture images and perform laser scanning surveys on a daily basis. 
Simultaneous Localization and Mapping Estimating the 6 Degrees of Freedom (DoF) motion trajectory of a mobile mapping platform is key to obtain georeferenced data. Direct Georeferencing from the GNSS/INS sensors is not always accurate and can fail in GPS restricted areas. Lately many systems adopt workflows from the robotics literature, like Visual Odometry or Simultaneous Localization and Mapping. A taxonomy and review of standard methods for visual odometry can be found in the well-known articles of Fraundorfer (2011a, 2011b). Simultaneous Localization and Mapping is currently under heavy research. Current state-of-the-art SLAM algorithms exploit a broad range of data, such as images, IMUs or laser scanners and achieve remarkable results (Zhang and Singh, 2015), especially in autonomous driving scenarios, whilst maintaining near real-time performance. Visual methods of SLAM can be divided into feature-based, where features are first extracted on images (Mur-Artal et al., 2015) and direct methods that exploit all image gradients on the available images (Newcombe et al., 2011). Forster et al., (2017) proposed a semidirect approach that combines direct methods for tracking pixels and features correspondences to refine both camera poses and structure by bundle adjustment. In a recent publication Kuo et al. (2020) propose a generic vision based SLAM solution, which is sensor-agnostic and adapts to arbitrary multi-camera configurations. Other approaches rely on 3D point cloud to image matching using specialized descriptors (Pujol-Miro et al., 2017), as well as on constraining a SLAM algorithm given a street map background (Vysotska and Stachniss, 2017). Structure from Motion Structure from Motion (SfM), for the last two decades, is widely considered as the dominant image-based technique for automatic image alignment and 3D model generation and can be employed in workflows of processing data from mobile mapping platforms. SfM is a well-studied topic in the research community with a lot of nearly production-ready implementations (Schonberger and Frahm, 2016) and extensions, such as integration of video from aerial platforms (Leotta et al., 2016). However, it's still an open research field and new approaches have emerged, for example in robust image matching, mainly due to recent developments in deep learning (Yi et al., 2016). SENSORS -COMPONENTS Most mobile mapping systems share similar sensors for recording simultaneously visual information, depth, 3D point clouds, as well as the position and the orientation of the system in the world. More specifically, the proposed space sensing platform can support multiple sensors. The current implementation ( Figure 1, Figure 2) consists of: i) four embedded 13MP machine vision cameras by econ-systems which can record still images or synchronized 4K video sequences, ii) a Velodyne® PUC VLP-16 LiDAR sensor which captures 3D point clouds, iii) an Xsens MTI-G-700 GPS/IMU unit that record absolute 3D positions and rotations and iv) an Intel RealSense T265 Tracking camera for relative positioning in GPS restricted areas (such as indoor scenes). Currently no lighting device is integrated in the platform. Lidar The platform uses a Velodyne® PUC VLP-16 LiDAR sensor for pointcloud recording. The specific sensor is selected for it's relatively low cost and high performance balance. It has a range of 100m, a positional accuracy of ~3cm, and 360 o horizontal and 30 o vertical fields of view (at 16 discrete channels). 
It can capture up to 600,000 points/second depending on the selected horizontal rotation velocity. It can be directly connected to a GPS/IMU device and supports data synchronization with precise GPS-supplied time via Pulse-Per-Second (PPS), in conjunction with a once-per-second NMEA GPRMC or GPGGA sentence. The lidar sensor is mounted on a ball camera tripod head on top of the platform to avoid occlusions from the other sensors, and it is placed with an inclination of ~35° to capture floors and ceilings. GPS/IMU For direct georeferencing in outdoor spaces the platform uses the Xsens® Mti-G-700 GPS/IMU unit. It is the 4th-generation motion tracker by Xsens and has built-in vibration-rejecting gyroscopes and accelerometers, a multi-GNSS receiver (GPS, GLONASS, BeiDou and Galileo) and a barometer. It measures attitude angles and accelerations, and Xsens applies a Kalman-filter-based sensor fusion algorithm to provide 3D position and orientation information. Tracking camera For GPS-restricted areas like indoor environments the platform uses a new sensor by Intel®, the RealSense™ Tracking Camera T265. It is an embedded computer vision solution that combines two fisheye lens sensors with a combined, close-to-hemispherical ~160° FOV, an Inertial Measurement Unit (IMU) and an Intel Movidius Myriad 2 Visual Processing Unit (VPU) that runs a proprietary visual SLAM algorithm directly on the device. The T265 is connected and powered via USB and outputs 6DoF data at a sample rate of 200 Hz. Embedded PC & Laptop PC (optional) To host the multi-camera rig, the platform utilizes an NVIDIA® Jetson AGX Xavier™ development kit that is widely used for the development of end-to-end AI robotics applications. This kit bundles a carrier board and an integrated thermal solution together with the embedded system-on-module (SoM) Jetson AGX Xavier. It combines an 8-core ARM v8.2 64-bit CPU, a 512-core Volta GPU with Tensor Cores and 32 GB of 256-bit LPDDR4x memory. It is configured to run Ubuntu 18.04. An NVMe disk is added for storage and an LCD touch screen for control and visualization. Since this embedded system is powerful enough, our initial intention was to build the entire platform on it. This was partially achieved, except for support for the Xsens GPS/IMU unit, since no drivers were implemented for the ARM architecture. Thus, a laptop PC configured with Ubuntu 18.04 is an additional component used to include the GPS/IMU sensor (see the distributed architecture implementation in Section 4.1). In an upcoming version of the platform we plan to replace this sensor with one compatible with the embedded PC. Power supply To power all the sensors and the embedded PC, a 4-cell LiPo battery of 5500 mAh and 14.8 V was used. This provides enough power to run the platform for ~30 min. When the platform is mounted on a car, a typical 12 V-220 V inverter can be used instead. Mounting To physically combine all available sensors, a prototype base was designed in 3D and then 3D printed (Figure 3). This design also provided a good approximation of all sensors' relative orientations (boresight alignment parameters). The base includes a typical camera mount that can be connected to a camera tripod on a dolly (Figure 2) and pushed around to perform data collections of interior environments or relatively small outdoor areas (like squares, individual buildings, etc.). Alternatively, it can be mounted on a car roof via a DSLR suction cup camera mount.
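As a rough, back-of-the-envelope cross-check of the battery sizing above (our own estimate, not a figure from the authors): 5500 mAh at 14.8 V corresponds to about 81 Wh of stored energy, so a ~30 min session implies an average system draw of at most roughly 160 W, the actual figure depending on sensor duty cycles and conversion losses.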
To further optimize the capturing process, the use of a gimbal to reduce sensor shake as well as a backpack form-factor version are considered for future implementations. SENSORS INTEGRATION & DATA PROCESSING Our platform aims to provide precise and fast 3D recording of outdoor and indoor spaces and consists of two main modules, i) the space sensing module that is responsible for data collection and ii) the 3D reconstruction module for processing all available data. The platform also runs in two separate modes, one for indoors and one for outdoors, each with some different hardware components, such as the tracking camera for indoors and the GPS/IMU sensor for outdoors. Space Sensing Module For the integration of all sensors into a single capturing system, the Robotic Operation System (ROS) (Quigley et al., 2009) was adopted. ROS is a middleware that is widely used by robotics teams both in academia and in the development of commercial products. Several ROS-based open-source projects that implement sensor integration are available. Lately, ROS has also proven to be a suitable platform for building mobile mapping systems to capture 3D interior (Blaser et al., 2018) and underground environments (Blaser et al., 2019). ROS was selected since it is open source, it supports multiple programming languages (C++ and Python), it allows for low-level device control and it is modular by design, making it relatively easy to add or remove devices. ROS implements a message-passing communication architecture. A node is created for each sensor, which publishes sensor data as messages on specific topics. Topics may contain raw or processed values, and each data entry inside a topic is assigned a timestamp. Real-time processes are usually implemented as nodes that subscribe to specific topics and then publish their estimations on new topics. For offline processing, all messages are recorded in a single "bag" file. This is implemented by a "rosbag" node that subscribes to all messages from available sensors and stores them on the disk drive in a "bag" file. "Bag" files can then be replayed for developing and testing algorithms. ROS offers tools to monitor all recorded topics ("rqt-topic") (Figure 4) as well as tools to visualize 2D and 3D sensor data (rviz) (Figure 5). Since all data are published as topics with timestamps, these timestamps are recorded in the "bag" file. Synchronization during offline data access or processing is usually handled by taking the data of each sensor that corresponds to the nearest timestamp or by interpolation. [Figure 4. Data from all sensors in ROS can be monitored via the "rqt-topic" tool. Data entries are usually accessed through a timeline feature.] More specifically, the proposed space sensing platform was implemented in ROS Melodic Morenia on Ubuntu 18.04. A node was created for each sensor. Nodes communicate with the sensors and publish their data on a suitably designed topic. [Figure 5. Visualization of the pointcloud topic from the Velodyne VLP-16 lidar node in the rviz ROS tool.] For the Velodyne® PUC VLP-16 LiDAR node, the official ROS "velodyne_driver" and "velodyne_pointcloud" packages were used. The first provides basic device handling for Velodyne lidars and publishes the raw data packets that are transmitted from the sensor through an Ethernet connection. The second provides point cloud conversions.
The Velodyne node publishes a "velodyne_points (sensor_msgs/PointCloud2)" topic which contains accumulated Velodyne points transformed into a selected frame of reference. The official ROS package "xsens_mti_driver" was also used for the Xsens® Mti-G-700 GPS/IMU unit node. The node publishes a "tf (geometry_msgs/TransformStamped)" topic that contains 6-DoF pose parameters (X, Y, Z translations and quaternion rotations) transformed into a selected frame of reference. A similar topic is published by the RealSense™ Tracking Camera T265 node, which uses the ROS "realsense2_camera" package. A new package was developed by our team for the multi-camera rig, since no ROS-compatible implementation was available. It is designed to work on the NVIDIA® Jetson AGX Xavier™, with custom-made nodes and topics. The package consists of two subprograms, the "capturer" (C) and the "publisher" (C++). The first handles the cameras and captures images via the v4l2 and gStreamer libraries, while the second is responsible for publishing image data and metadata as a ROS topic. Initially we published image frames in a ROS topic, but this approach led to low FPS performance. In the current implementation the "capturer" app records four synchronized 4K videos at 30 FPS as .mkv files with H.264 encoding at a storage path defined by the "publisher" app. The latter publishes the start/end timestamps of the video sequence, as well as the timestamps and frame_ids of every synchronized frame that is added to the gStreamer buffer. Video files require ~60 MB/camera/minute. ROS natively supports a distributed architecture where sensors can run across multiple machines, which communicate through a local network via a talker/listener logic. All nodes are configured to use a single ROS Master app ("roscore"), the address of which is defined by an environment variable ("ROS_MASTER_URI"). Although the initial plan was to build the space sensing module on a single machine (NVIDIA® Jetson AGX Xavier™) to which all sensors would be connected, due to the incompatibility of the GPS/IMU sensor with ARM processors the system was actually built following two alternative architectures (Figure 6): a single-machine mode, used when the GPS/IMU is not needed (for example in indoor environments), and a distributed one that supports the GPS/IMU device. In the latter configuration all sensors are connected to a laptop PC, except for the camera rig, which by design must run on the Nvidia Jetson Xavier embedded computer. The space sensing module is executed by a script that launches all processes (Figure 7). A basic GUI for touch screens (Figure 8) was also developed. It has tools to set the capturing parameters and start/stop the capturing session. Tools to inspect sensor connectivity and to assist the refocusing of each camera are also included. 3D Reconstruction Module Every capturing mission with the space sensing module collects multiple types of data from the connected sensors, which are stored into a "bag" file. All data are organised based on their timestamps. For any given time point or period it is possible to retrieve the corresponding data (i.e., image frames, pointclouds and 6DoF motion trajectories) and apply 3D reconstruction workflows. Since the work presented here is part of ongoing research, several alternative approaches are under consideration before settling on an optimal workflow.
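As a concrete illustration of the offline, nearest-timestamp synchronization described above, the following minimal sketch reads a recorded "bag" file back with the ROS1 Python API and pairs each lidar cloud with the temporally closest pose message. It is an illustration, not part of the authors' codebase: the topic names follow the ones used in this section, the bag filename is a placeholder, and a ROS Melodic environment is assumed.

import bisect
import rosbag   # ROS1 Python API, available in a ROS Melodic environment

poses, clouds = [], []                      # lists of (timestamp [s], message)
with rosbag.Bag("session.bag") as bag:      # placeholder filename
    for topic, msg, t in bag.read_messages(topics=["/tf", "/velodyne_points"]):
        (poses if topic == "/tf" else clouds).append((t.to_sec(), msg))

pose_times = [pt for pt, _ in poses]        # bag messages arrive in time order

def nearest_pose_index(t_cloud):
    """Index of the pose whose timestamp is closest to the cloud's timestamp."""
    i = bisect.bisect_left(pose_times, t_cloud)
    candidates = [j for j in (i - 1, i) if 0 <= j < len(poses)]
    return min(candidates, key=lambda j: abs(pose_times[j] - t_cloud))

pairs = [(cloud_msg, poses[nearest_pose_index(tc)][1]) for tc, cloud_msg in clouds]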
More specifically, for the 3D reconstruction module of our platform we relied on existing software libraries, such as ORB-SLAM2 (Mur-Artal et al., 2015) and Google Cartographer (Hess et al., 2016) for vision-based and lidar-based SLAM, respectively, AliceVision & Meshroom (Jancosek and Pajdla, 2011, and Moulon et al., 2012) for Structure-from-Motion, and Open3D (Zhou et al., 2018) for pointcloud processing. Direct georeferencing from the specific GPS/IMU sensor or the tracking camera is not preferred due to their limited accuracy. However, initial tests have shown that the trajectories provided by these sensors can be used to assist vision-based or lidar-based SLAM algorithms. The latter provide more accurate estimations of the platform's motion trajectory and initialization of orientation parameters for the individual pointclouds and the image frames. Global maps in the form of registered pointclouds are also provided but are most of the time sparse, incomplete, and noisy. In most cases, though, the 3D models can be further improved by means of Structure-from-Motion solutions. Since the four cameras of the platform capture images at high FPS rates, each data collection mission consists of several million image frames. Using the SLAM-provided image orientations, key poses of the multi-camera rig are selected so that they capture the area of interest with sufficient overlap and leave no gaps. Only these key frames are used in a Structure-from-Motion workflow through the Meshroom open-source software framework. Meshroom implements a self-calibration bundle adjustment solution that supports Camera Rig Calibration. This allows for optimal estimation of the four cameras' interior orientation parameters along with their relative orientation. This also leads to more accurate and consistent 3D reconstruction results. Finally, dense 3D point clouds are generated via Multi View Stereo 3D reconstruction algorithms. In GPS-deprived areas, where absolute orientation is a requisite, relative path and reconstruction estimations can be updated by means of Ground Control Points (GCPs) measured through standard surveying techniques. It must be mentioned that since all sensors were placed on a custom-designed, 3D-printed case with known dimensions, good approximations of all sensors' relative orientations were a priori available. The effect of small misalignments was handled by the SLAM and bundle-adjustment solutions. An approach that we plan to further investigate is to update those relative orientation parameters by matching in 3D space the individual motion trajectories provided by the different sensors. EXPERIMENTS To demonstrate the effectiveness of a first prototype of the platform, two data collection experiments are presented. Indoors Environment During the development of both the space sensing and the 3D reconstruction modules of the platform, several experiments were conducted inside our office space. The presented survey corresponds to a single loop of the platform around an open desk area of ~90 m². Figure 9 shows four image frames from the multi-camera rig. In this experiment the 6-DoF motion trajectory from the RealSense™ T265 Tracking Camera was used to initialize a Structure-from-Motion solution with Camera Rig Calibration. A subset of the multi-camera rig image frames was used (Figure 10), taking images at a given distance interval.
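The distance-interval key-frame selection just mentioned can be prototyped in a few lines. The sketch below is a greedy variant of ours, not the authors' implementation, and the 0.5 m spacing is an assumed value: a frame is kept whenever the SLAM-estimated camera position has moved at least a minimum distance since the last kept frame.

import numpy as np

def select_keyframes(positions, min_spacing=0.5):
    """Greedy key-frame selection: positions is an (N, 3) array of SLAM camera
    positions in metres; a frame is kept once the platform has moved at least
    min_spacing metres since the previously kept frame."""
    keep = [0]
    for i in range(1, len(positions)):
        if np.linalg.norm(positions[i] - positions[keep[-1]]) >= min_spacing:
            keep.append(i)
    return keep     # indices of the frames to pass to the SfM workflow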
Figure 11 shows an accumulated pointcloud from individual scans of the Velodyne® PUC VLP-16 LiDAR. The 3D translations and rotations of each individual scan were interpolated from the synchronized trajectory of the tracking camera. Outdoors Environment A second, more complete data capture mission was carried out in the area around the cultural centre of Tecla Sala, which is situated in the city of L'Hospitalet, in Barcelona, Spain. This is the first Pilot Use Case of the "Mindspaces" H2020 Research Program. The tripod dolly was moved slowly around the cultural centre building to ensure sufficient overlap between scanlines and image frames. The whole survey with the mobile mapping platform lasted ~15 minutes. In this experiment a vision-based SLAM solution was used to estimate the translation and rotation trajectory of the platform, and then an automatically selected subset of the collected image dataset was fed to the Structure-from-Motion workflow (Figure 14). A dense point cloud was also computed via Multi View Stereo dense reconstruction (Figure 15). CONCLUDING REMARKS In this contribution we presented a first implementation of a modular mobile mapping platform that is based on commercial hardware components and open-source software libraries. The integration of all sensors was carried out with the Robotics Operation System, which allows for easy additions, changes and updates of the platform's components. Several improvements are under consideration. A first one is the replacement of the GPS/IMU sensor with one compatible with ARM CPUs. This will allow the platform to run exclusively on the NVIDIA® Jetson AGX Xavier™ development kit and thus minimize its hardware dependencies and its size, improving its overall portability. The mounting of the platform on a camera gimbal is also considered, to facilitate data collection sessions and obtain more stabilized data. The 3D reconstruction module requires further development, and all workflows need to be thoroughly tested and evaluated with respect to their effectiveness, accuracy, and performance in well-organised experiments. A final, more general remark has to do with a well-known restriction of mobile mapping systems, which is the inability to capture spatial information that is not directly visible from street level (i.e., building roofs, backyards, etc.). Occlusions due to obstacles such as buildings, parked cars or trees also lead to unavoidable gaps. To get complete digital copies of complex spaces, mobile mapping missions need to be combined either with existing geospatial data from open databases or with aerial missions from drones. This was initially planned for the Tecla Sala Pilot Use Case but was not realised because of a general ban on drone flight missions in the specific area. However, a second Pilot Use Case is currently under preparation in a more suitable area where a licence to perform both mobile and aerial mapping missions has been granted to the consortium. This will allow us to soon present the potential of combining mobile mapping with UAV photogrammetry.
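The scan accumulation behind Figure 11 (interpolating the tracking-camera trajectory at each scan's timestamp and transforming the scan into a common frame) can be sketched as follows. This is our illustration, not the authors' code: it uses linear interpolation for translation and SLERP for rotation, ignores the lidar-to-camera boresight offset, and assumes the trajectory is given as timestamps, positions and quaternions.

import numpy as np
from scipy.spatial.transform import Rotation, Slerp

def pose_at(t, traj_t, traj_xyz, traj_quat):
    """Interpolate a 6-DoF pose at time t (within the trajectory's time span):
    linear interpolation for the translation, SLERP for the rotation."""
    xyz = np.array([np.interp(t, traj_t, traj_xyz[:, k]) for k in range(3)])
    rot = Slerp(traj_t, Rotation.from_quat(traj_quat))([t])[0]
    return xyz, rot

def accumulate_scans(scans, scan_times, traj_t, traj_xyz, traj_quat):
    """Transform each lidar scan (an (N, 3) point array) into the trajectory's
    frame using the pose interpolated at its timestamp, and stack the result."""
    world_points = []
    for pts, t in zip(scans, scan_times):
        xyz, rot = pose_at(t, traj_t, traj_xyz, traj_quat)
        world_points.append(rot.apply(pts) + xyz)
    return np.vstack(world_points)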
5,845
2020-08-06T00:00:00.000
[ "Computer Science" ]
Stability Enhancement of a Single-Stage Transonic Axial Compressor Using Inclined Oblique Slots A casing treatment using inclined oblique slots (INOS) is proposed to improve the stability of the single-stage transonic axial compressor, NASA Stage 37, during operation. The slots are installed on the casing of the rotor blades. The aerodynamic performance was estimated using three-dimensional steady Reynolds-Averaged Navier-Stokes analysis. The results showed that the slots effectively increased the stall margin of the compressor with slight reductions in the pressure ratio and adiabatic efficiency. Three geometric parameters were tested in a parametric study. A single-objective optimization to maximize the stall margin was carried out using a Genetic Algorithm coupled with a surrogate model created by a radial basis neural network. The optimized design increased the stall margin by 37.1% compared to that of the smooth casing with little impacts on the efficiency and pressure ratio. Introduction The aerodynamic performance and stability of a compressor are affected by various factors. For transonic compressors, tip leakage vortex is a notable source of instability and loss, which greatly limits the machine's safety and performance. This mainly resulted from its interaction with the in-passage shock. Many studies showed that the topology of the tip leakage vortex changes considerably from design condition to near-stall/stall condition [1][2][3][4]. At the design condition, under a moderate pressure ratio, the vortex has a stable structure and can pass through the passage shock without difficulty. However, as the compressor becomes throttled toward the stall condition, the pressure ratio significantly increases, the shock moves forward, and the vortex experiences a severe deceleration across the passage shock barrier. This results in a vortex breakdown and formation of low momentum regions, which act as blockages to the main flow. As the compressor moves closer to the stall condition, the blockages expand until a limit is reached, and finally stall occurs, causing severe instability and degradation of aerodynamic performance. One way to delay stall inception induced by the tip leakage vortex is the casing treatment using slots. Airflow is recirculated through the slots due to the pressure difference. The recirculating flows re-energize the low energy air in the blockage zone. More detailed descriptions of the axial slot's enhancing mechanism and its interaction with the main flow can be found in the works of Wilke and Kau [5] and Schnell et al. [6]. Wilke et al. [7] examined the effects of semi-circular slots on the aerodynamic performance of the NASA Rotor Stage 37. Their best configuration, which comprised 4 slots per passage covering 50% of the blade's chord, increased the mass flow range by 60% with a 0.7% improvement in the maximum pressure ratio. Lu et al. [8] carried out an experimental and numerical study on a casing treatment using six bent skewed slots per passage in a low-speed subsonic compressor. With an exposure of 33.3% of the rotor's axial chord, the treatment provided an improvement of about 22% with a slight increase in isentropic efficiency. A numerical study by Goinis et al. [9] applied surrogate models with an evolutionary algorithm to search for an optimal axial-slot design for the first stage of a transonic axial compressor. The optimal configurations successfully increased the stall margin without a noticeable penalty in efficiency. Streit et al. 
[10] attempted to alleviate the efficiency penalty when using axial-slot treatment in a 1.5-stage transonic axial compressor with rotor redesign. The numerical results indicated that incorporating the slot treatment allowed a lower number of rotor blades and about a 0.7% gain in efficiency with a sufficient stall margin improvement. Ma et al. [11] focused on alleviating the efficiency deficit generally caused by the application of an axial-slot casing treatment. From an extensive study with 21 designs for a low-speed large scale compressor and a transonic compressor, two slots geometries with isoscelestrapezoid shape were devised, which increased the peak efficiency by about 0.5%. An axial-slot design by Inzenhofer et al. [12] nearly doubled the compressor's flow range with an improved pressure ratio. They concluded that the axial-slot enhanced the compressor's stability by alleviating the pressure difference at the blade tip and ingesting a part of the low-momentum leakage air stream. Zhang et al. [13] examined the impacts of inverse blade angle slots on the efficiency and operating stability of the NASA Rotor 67. Three designs with different axial coverages and skewed angles were tested, and the highest stall margin improvement was 24.3% with a 0.755% loss in efficiency. In the last couple of decades, surrogate-based optimization has shown great potential in the design of turbomachinery. The major advantages of this optimization technique are a significant reduction in time for the design and its ability to explore and discover unexpected designs. Samad and Kim [14] used the elitist non-dominated sorting genetic algorithm (NSGA-II) [15] and response surface approximation (RSA) [16] to optimize the blade shape of a transonic axial compressor. Two objective functions, total pressure ratio and adiabatic efficiency, were examined and with the two extreme-end designs of the Pareto front, they were increased by 1.76% and 0.41%, respectively. Kim et al. [17] conducted a shape optimization of an impeller in a centrifugal compressor, aiming to maximize the total pressure ratio. The pressure rise was successfully improved by 2.46% at the design condition using the Sequential Quadratic Programming (SQP) technique [18] coupled with a radial basis neural network (RBNN) [19]. Using the hybrid multi-objective evolutionary algorithm (MOEA) [20], Kim et al. [21] attempted to enhance the efficiency and mitigate the noise level of an axial fan. Khalfallah et al. [22] applied the NSGA-II algorithm and RBNN model to find an optimal design of a centrifugal compressor to maximize the efficiency and stall margin. Ma et al. [23] implemented the RBNN model with different optimization techniques to optimize a ring cavity for the stall margin improvement of a centrifugal compressor. Four algorithms, i.e., genetic algorithm (GA) [24], particle swarm optimization (PSO) [25], simulated annealing [26], and SQP, were tested and the PSO was most effective in terms of computing time and stall margin gain. In the present study, a new slot casing treatment design for a transonic axial compressor is proposed, namely inclined oblique slots (INOS). Although there have been many investigations of the slot casing treatment, they mostly concern the use of axial designs with multiple slots per passage. The unique design of the INOS treatment, which makes an oblique angle with the compressor's rotational axis, has not been investigated yet. 
In addition, the treatment comprises only one slot per passage, which may help facilitate the manufacturing process in real applications. The focus of this study is on the treatment's impact on the compressor's operating stability and efficiency as well as its pressure rise. Numerical analysis was used to analyze the compressor's aerodynamic performance. A genetic algorithm (GA) was implemented with RBNN modeling to maximize the stall margin improvement by INOS. Compressor Model and Casing Treatment Design The test subject in the present work is the single-stage transonic axial compressor, NASA Stage 37. The report from Reid and Moore [27] provided detailed information regarding the compressor's geometry and aerodynamic performance. Other research has pointed out that this compressor exhibited a spike-type rotating stall that originated from the rotors' tip ([4,7]), which makes it well suited for the investigation of casing treatments. The compressor comprises 36 rotor blades rotating at a design speed of 17,185.7 rpm and 46 stationary stator blades. The tip clearances are 0.0400 cm under the rotor shroud and 0.0762 cm over the stator hub, which correspond to 1.44% of the axial chord length at the rotor tip and 2.11% of the axial chord length at the stator tip, respectively. From the experiment by Reid and Moore [27], when running at 100% design speed and the peak efficiency condition, the compressor's total pressure ratio and adiabatic efficiency were 2.00 and 84.00%, respectively, with a mass flow rate of 20.74 kg/s. At the near-stall condition, the compressor's total pressure ratio was 2.093 and the mass flow rate was 19.60 kg/s. The mass flow rate reached 20.93 kg/s at the choking condition. The reference pressure and temperature were 101,325 Pa and 288.15 K, respectively. The design and position of INOS are presented in Figure 1. The slots are positioned at the rotor tip casing. There are 36 slots in total, and they are uniformly distributed at equal intervals around the compressor's casing. Table 1 shows the geometric parameters of the casing treatment with their reference values: axial location (LF), slot's width (W), slot's depth (D), oblique angle (α), inclined angle (β), and slot's circumferential coverage (γ). The coverage indicates the angle around the compressor axis. Except for the angles, the other parameters are non-dimensionalized using the axial chord length of the rotor blades at the tip (C R ). Numerical Method The aerodynamic analysis was conducted by solving the three-dimensional (3D) RANS equations with the commercial CFD code ANSYS CFX 15.0 ® (ANSYS, Canonsburg, PA, USA) [28]. TurboGrid ® and ICEM-CFD ® were used to create a hexahedral mesh system for the computational domain. TurboGrid ® was used to generate the mesh for the rotor and stator blocks (more information can be found in [29]). The slot's mesh was created using ICEM-CFD ® with the determinant quality kept at no less than 0.7. To save computing time, only one compressor passage comprising one rotor-stator pair was considered in the computation. As illustrated in Figure 2, the computational domain comprises three sub-domains: the rotor, stator, and slot. Assigning the boundary conditions, solving the governing equations, and post-processing the numerical results were performed with ANSYS CFX-Pre, CFX-Solver, and CFD-Post, respectively.
In order to obtain steady-state numerical solutions, a fully implicit element-based finite volume method was implemented to discretize the 3D governing differential equations. For the advection terms, a high-resolution scheme using the principles of Barth and Jesperson [30] was used. It has second-order accuracy in space. Air as an ideal gas was selected as the working fluid. An average static pressure was set at the outlet of the stator domain to obtain steady-state results. At the inlet of the rotor domain, the turbulence intensity and total temperature were specified as 5% and 288.15 K, respectively. Smooth and adiabatic conditions were applied to all wall boundaries. The side boundaries of the rotor and stator domains were specified with periodic conditions. To obtain steady-state results, the rotating and stationary blocks were linked by a general grid interface (GGI) method. As the GGI method, the frozen-rotor method was used at the interface between the rotor and stator domains (360°/36 = 10° for the rotor, 360°/46 = 7.826° for the stator) and also between the slot and rotor domains (4° for the slot and 10° for the rotor). The influences of the frozen-rotor method on the numerical results were compared with those of the stage method developed by Shim and Kim [31]. The k-ε turbulence model was used with a scalable wall function, while the y+ values of the first nodes near the walls were maintained in a range of 20-100. In this work, the numerical calculations were performed up to the near-stall point, which shows the highest pressure ratio. Due to the intrinsically unsteady characteristics of the stall/surge phenomenon, strict criteria for the near-stall point must be implemented to achieve a reasonably accurate assessment with steady simulations. The convergence criteria used in this work were those suggested by Chen et al. [32]: the variation between the inlet and outlet mass flow rates is less than 0.3%, the inlet mass flow rate fluctuation is less than 0.001 kg/s over 300 steps, and the change of adiabatic efficiency is less than 0.3% per 100 steps. To obtain the performance curve of the compressor, the average static pressure at the compressor's outlet was varied from the choking condition (0 Pa) to the near-stall condition, with a step size of 100 Pa around the peak efficiency condition and a step size of 10 Pa (which corresponds to about 0.0001 kg/s of mass flow rate) near the stall condition. The convergence criterion for the root-mean-square residual of each governing equation was limited to 10^-6. The calculations were carried out on a computer with an Intel i7-4930K 3.4 GHz CPU. Each operating point was obtained after 2-3 h of simulation time, and it took about 70 h on average to find a performance curve. Performance Parameters Three parameters, i.e., the total pressure ratio (PR), adiabatic efficiency (η), and stall margin (SM), were used to assess the aerodynamic performance and operational stability of the transonic compressor with and without the casing treatment [29,33,34]. In these definitions, p_t, T_t, and ṁ are the total pressure, total temperature, and normalized mass flow rate, respectively. The subscripts in and out indicate values measured at the compressor's inlet and outlet, respectively. The subscripts peak and NS refer to the peak adiabatic efficiency and near-stall conditions, respectively. The subscripts SC and CT refer to the smooth casing and casing treatment, respectively.
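The display equations defining these three parameters did not survive the text extraction. For reference, the standard forms used for this compressor in comparable studies, consistent with the symbols just listed, are presumably

$$\mathrm{PR}=\frac{p_{t,\mathrm{out}}}{p_{t,\mathrm{in}}},\qquad \eta=\frac{\left(p_{t,\mathrm{out}}/p_{t,\mathrm{in}}\right)^{(\gamma-1)/\gamma}-1}{T_{t,\mathrm{out}}/T_{t,\mathrm{in}}-1},\qquad \mathrm{SM}=\left(\frac{\mathrm{PR}_{\mathrm{NS}}}{\mathrm{PR}_{\mathrm{peak}}}\cdot\frac{\dot{m}_{\mathrm{peak}}}{\dot{m}_{\mathrm{NS}}}-1\right)\times 100\%,$$

with γ the specific heat ratio of air; the stall margin improvement is then typically reported as the relative change of SM between the casing-treatment (CT) and smooth-casing (SC) configurations.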
Grid Dependency Test Detailed validation of the numerical results for the NASA Stage 37 compressor model without casing treatment can be found in the work of Dinh et al. [29]. The same numerical methods were used for the same compressor model in the present investigation. In Figure 3, the predicted performance curves show good agreement with the experimental data from Reid and Moore [27]. To evaluate the grid dependency of the numerical results with the slots, the grid convergence index (GCI) based on the Richardson extrapolation technique was analyzed in this work. This GCI analysis technique was proposed by Celik and Karatekin [35] and is widely accepted for establishing a grid-independent solution. For the computational domain outside the casing slots, the grid system used in a previous work [29] was also used in the present work. Table 2 presents detailed results of the GCI analysis of the mesh system in the slots. The result for the stall margin converges monotonically as the number of mesh elements increases. Since the deviation between the first (S1) and second (S2) meshes is insignificant, S2 was selected as the final number of elements for further calculation. Optimization Technique The single-objective optimization problem is defined as follows: maximize F(x) subject to x_i^L ≤ x_i ≤ x_i^U for i = 1, ..., n, where F is an objective function and x (= {x_i}) is a vector of n design variables. An optimization algorithm performs a search procedure to find an optimal solution within the design space specified by the lower limit x_i^L and upper limit x_i^U of each design variable. The stall margin was selected as the objective function for optimization in the present study, i.e., F(x) = SM. A design of experiments (DOE) was created using the Latin hypercube sampling (LHS) method [36]. Sampling points (or design points) are generated by LHS and are composed of an i × j matrix, where i and j represent the numbers of sampling points and design variables, respectively. Each column j is filled with a randomly paired permutation of the levels 1, 2, ..., i to form a Latin hypercube. The method is provided as the function lhsdesign in MATLAB [18]. The objective function was approximated by surrogate modeling using RBNN. RBNN is a two-layer neural network that comprises a hidden layer of radial basis functions and a linear output layer. The two hyperparameters of this surrogate model are an error goal and a spread constant. The former's value can be chosen by the user based on the permissible error from the mean input response. MATLAB provides a built-in function, newrb, to apply RBNN modeling [18]. GA is a meta-heuristic, population-based optimization technique which replicates the process of natural selection, mutation, and evolution. Each solution is considered a "chromosome", which contains "genes" that are the values of the design variables being optimized. The algorithm starts with a random population of solutions (the first generation). To imitate the "survival of the fittest" in nature, the fitness of each solution (the objective function) is evaluated to find the best chromosomes. The offspring of these superior candidates are then created through crossover and mutation. If the new chromosomes have better fitness, they evolve the population by replacing the ones with poorer fitness. This process is repeated through generations until termination conditions are reached and optimal solutions are found. In MATLAB, the single-objective GA is included in the optimization toolbox and can be used by calling the ga function [18].
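The DOE-surrogate-search loop just outlined (Latin hypercube sampling, a radial-basis surrogate, and an evolutionary search) can be sketched in Python. In this illustration, SciPy's Latin hypercube sampler, RBF interpolator and differential evolution stand in for MATLAB's lhsdesign, newrb and ga; the two design variables and their bounds follow the angles used later in the paper, and the objective function is a placeholder for the RANS-computed stall margin, not the paper's CFD solver.

import numpy as np
from scipy.stats import qmc
from scipy.interpolate import RBFInterpolator
from scipy.optimize import differential_evolution

# Design variables: oblique angle alpha [deg] and inclined angle beta [deg].
bounds = [(0.0, 8.0), (60.0, 90.0)]
lo, hi = [b[0] for b in bounds], [b[1] for b in bounds]

# 1) Latin hypercube DOE (stand-in for MATLAB's lhsdesign).
X = qmc.scale(qmc.LatinHypercube(d=2, seed=0).random(n=12), lo, hi)

# 2) Placeholder objective; in the paper each value comes from a RANS run.
def stall_margin_placeholder(x):
    a, b = x
    return 13.0 - 0.3 * (a - 4.0) ** 2 - 0.002 * (b - 85.0) ** 2

y = np.array([stall_margin_placeholder(x) for x in X])

# 3) Radial-basis surrogate of the stall margin (stand-in for newrb).
surrogate = RBFInterpolator(X, y, kernel="thin_plate_spline")

# 4) Evolutionary search on the surrogate (stand-in for ga): minimize -SM.
result = differential_evolution(lambda x: -surrogate(x.reshape(1, -1))[0],
                                bounds, seed=0, maxiter=400, tol=1e-4)
print("surrogate optimum:", result.x, "predicted stall margin:", -result.fun)

As in the paper, the surrogate optimum would then be verified with an additional RANS run at the predicted optimal design point.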
The parameters of the algorithm were set as follow: population size (50), crossover fraction (0.8), generation (400), and function tolerance (10 −4 ). The whole optimization process is presented in the flow chart in Figure 4. Figure 5 compares the performance curves between smooth casing and the reference INOS. In this figure, the mass flow rate is normalized by the maximum mass flow rate obtained at choke condition. As illustrated in the figure, the application of INOS extends the stable mass flow range of the compressor, by lowering the near-stall normalized mass flow rate from 0.9385 to 0.9303, with a small deficit of the pressure ratio. As a result, the stall margin of the compressor is increased from 9.95% of the smooth casing to 11.16% by installing the reference INOS. The peak adiabatic efficiency and maximal pressure ratio, however, are slightly reduced by 0.4% and 0.07%, respectively. Since the impact of INOS is more pronounced at near-stall condition, the flow field at the low mass flow rate condition is analyzed comparatively, in Figure 6, which presents the relative Mach number contours of smooth casing and reference INOS at 98% blade span. The blockages in the passages are characterized by a Mach number lower than 0.4. It is evident that with the casing treatment applied, the size of these low-speed regions is significantly reduced compared to that of smooth casing, both in rotor and stator domains. In the rotor domain, the blockage area is shrunk from 4.39% of the rotor passage with smooth casing to 4.10% with INOS. Furthermore, as illustrated in Figure 7, there is a notable decrease in the pressure difference between two sides of the rotor blade from the leading edge to the position of the slots at 40% blade chord. This helps to lower the amount of leakage flow fed to the vortex and impede the enlargement of the blockage zones. This reduction, however, may have caused the fall in pressure ratio at the near-stall condition. In addition, looking at the streamlines on the suction side of rotor in Figure 8, it is observed that the separation line (red dash line) is shorter with INOS. This change indicates that the casing treatment prevents flow separation from occurring at the tip region and from intensifying the instability. The overall improvement in stability is, therefore, speculated to be the combined result of all the changes above. Figure 9 shows that when INOS is used, a region of high entropy, which is an indication of efficiency loss, is formed near the location of the slots. This loss can be attributed to the flow recirculation where the airflow spends energy moving in and out of the slots. Parametric Study The selection of design variables and their ranges to generate the DOE is of great importance in surrogate-based optimization. Narrowing the range for each variable will reduce the size of the DOE required to approximate an accurate surrogate model. It is also necessary to perform a sensitivity analysis, i.e., a parametric study, to find the parameters which are most influential on the objective function. Among the geometric parameters of INOS, the oblique angle (α), inclined angle (β), and the slot's depth (D) were used for the parametric study. Their ranges are listed in Table 3. Figure 10 shows the effects of the oblique angle α on the stall margin, peak adiabatic efficiency (η peak ), and near-stall total pressure ratio (PR NS ). The stall margin reaches the maximum value of 13.17% at α = 4 • . 
Meanwhile, increasing the oblique angle generally reduces the peak adiabatic efficiency and near-stall total pressure ratio, except at α = 8°. Notably, at α = 0°, the application of INOS increases the pressure ratio at the near-stall condition with a negligible reduction in efficiency compared to the case without casing treatment. In Figure 11, it is found that the compressor achieves the highest stall margin of 13.59% when the inclined angle β is a right angle. Changing this angle has little impact on the peak adiabatic efficiency and near-stall total pressure ratio in the range of β from 60° to 90°. As illustrated in Figure 12, the influence of the slot's depth on the stall margin and peak adiabatic efficiency is minimal. However, it has a discernible effect on the near-stall pressure ratio. D/C R = 10% shows a pressure ratio lower than that without casing treatment, but D/C R = 5% and 15% show 0.29% and 0.36% larger values, respectively. Based on the above results, the sensitivities of the stall margin to the three parameters are compared in Figure 13. Since the sensitivity to the slot's depth is negligible in comparison with the other parameters, the two angles, α and β, are chosen as design variables for the subsequent optimization. Optimization The LHS method was used to generate 12 design points, and their stall margin values were obtained using RANS analysis. An RBNN surrogate model was constructed for the objective function, i.e., the stall margin, from these results, and an optimal design was found at α = 3.58° and β = 82.22° using GA. The results of the optimization are summarized in Tables 4 and 5. The optimized stall margin was predicted to be 13.52%, while the result from RANS analysis at the optimal design was 13.64%, which are very close to each other. The optimization is clearly a success, as the optimized INOS increased the stall margin to 13.64% from 11.16% for the reference design. In the following figures and tables, the smooth casing, reference design and optimized design are denoted by the subscripts SC, REF, and GAopti, respectively. Figure 14 shows that the optimized INOS extends the compressor's operating flow range with little negative impact on pressure rise and efficiency. It also mitigates the efficiency loss compared to the reference design at the peak efficiency condition. Figure 15 illustrates the relative Mach number contours at 98% blade span at the near-stall condition for the reference and optimized designs of INOS. The optimized design shows a reduced blockage area in the rotor passage: it is only 3.58% of the total passage area, compared to 4.10% with the reference design. A more favorable pressure distribution is also achieved with the optimal design. As shown in Figure 16, the pressure differences at the rotor's leading edge and downstream of the slots are reduced compared to those of the reference design. The improvement in the adiabatic efficiency with the optimized design can be explained by the changes in entropy distribution shown in Figure 17. The high-entropy area near the rotor's tip, which is an indication of loss, is smaller in the case of the optimal design in comparison with that of the reference design. Conclusions The INOS casing treatment was designed to enhance the operating stability of the single-stage transonic axial compressor, NASA Stage 37. Its impacts on the aerodynamic performance of the axial compressor were analyzed using 3D RANS analysis.
The results confirmed the effectiveness of the proposed casing treatment in improving the stability of the compressor without significantly penalizing the adiabatic efficiency and pressure ratio. The reference INOS increased the stall margin of the compressor to 11.16% from 9.95% without casing treatment. The oblique angle (α) and inclined angle (β) of INOS were selected as the design variables for the optimization through a sensitivity analysis. The optimal design, found at α = 3.58° and β = 82.2° using GA coupled with an RBNN model, achieved a stall margin of 13.64%, an improvement of 37.1%, with only 0.17% and 0.002% reductions in the peak adiabatic efficiency and maximum pressure ratio, respectively, compared with the case without casing treatment. By applying the single-objective optimization, it was possible to create an INOS design with a greater stability enhancement capability. Nevertheless, to obtain in-depth information about the mechanism and advantages of the new treatment, future work is needed to examine the unsteady behavior at the near-stall condition, as well as the impact of the proposed casing treatment on the compressor's mechanical integrity.
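For orientation, the quoted 37.1% is the relative gain over the smooth casing, (13.64 − 9.95)/9.95 ≈ 0.371; in absolute terms the stall margin increases by about 3.7 percentage points.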
5,709.2
2021-04-21T00:00:00.000
[ "Engineering" ]
Analysis of Arabidopsis glucose insensitive growth mutants reveals the involvement of the plastidial copper transporter PAA1 in glucose-induced intracellular signaling. Sugars play important roles in many aspects of plant growth and development, acting as both energy sources and signaling molecules. With the successful use of genetic approaches, the molecular components involved in sugar signaling have been identified and their regulatory roles in the pathways have been elucidated. Here, we describe novel mutants of Arabidopsis (Arabidopsis thaliana), named glucose insensitive growth (gig), identified by their insensitivity to high-glucose (Glc)-induced growth inhibition. The gig mutant displayed retarded growth under normal growth conditions and also showed alterations in the expression of Glc-responsive genes under high-Glc conditions. Our molecular identification reveals that GIG encodes the plastidial copper (Cu) transporter PAA1 (for P(1B)-type ATPase 1). Interestingly, double mutant analysis indicated that in high Glc, gig is epistatic to both hexokinase1 (hxk1) and aba insensitive4 (abi4), major regulators in sugar and retrograde signaling. Under high-Glc conditions, the addition of Cu had no effect on the recovery of gig/paa1 to the wild type, whereas exogenous Cu feeding could suppress its phenotype under normal growth conditions. The expression of GIG/PAA1 was also altered by mutations in the nuclear factors HXK1, ABI3, and ABI4 in high Glc. Furthermore, a transient expression assay revealed the interaction between ABI4 and the GIG/PAA1 promoter, suggesting that ABI4 actively regulates the transcription of GIG/PAA1, likely binding to the CCAC/ACGT core element of the GIG/PAA1 promoter. Our findings indicate that the plastidial Cu transporter PAA1, which is essential for plastid function and/or activity, plays an important role in bidirectional communication between the plastid and the nucleus in high Glc. Sugars have multifaceted roles in plant growth and development, including their roles as energy sources and signaling molecules (Smeekens, 2000;Ramon et al., 2008). During the life cycle, plants can sense sugar levels, control the expression of many genes involved in physiological and developmental processes accordingly, and thus modulate growth and development to adapt to changes in sugar levels (Smeekens, 2000;Gibson, 2005;Rolland et al., 2006;Ramon et al., 2008). Genetic approaches have been fruitful in elucidating the molecular components in sugar signaling; several Arabidopsis (Arabidopsis thaliana) mutants insensitive to high-sugar conditions have been isolated and well characterized at the molecular level (Smeekens, 2000;Rook and Bevan, 2003;Gibson, 2005;Rolland et al., 2006;Ramon et al., 2008). In particular, glucose insensitive (gin) mutant seedlings are able to grow in the presence of 6% (w/v) Glc, which causes developmental arrest in wild-type seedlings (León and Sheen, 2003;Gibson, 2005). Some of these mutants are allelic to abscisic acid (ABA) biosynthesis (e.g. gin1/aba2 and gin5/aba3) or signaling mutants (e.g. gin6/aba insensitive4 [abi4]), demonstrating extensive interactions between sugar and phytohormone ABA (Arenas-Huertero et al., 2000;Cheng et al., 2002;Finkelstein and Gibson, 2002;León and Sheen, 2003;Gibson, 2005). In addition, the GIN2 locus encodes a hexokinase (HXK1), which phosphorylates Glc to glucose-6-phosphate, and the gin2 mutants overcome developmental arrest in the presence of 6% Glc (Moore et al., 2003). 
It has also been well established that HXK1 acts as an evolutionarily conserved Glc sensor in sugar signaling (Moore et al., 2003;Rolland et al., 2006). Coordination of growth and developmental responses to sugar levels is a complex process due to the existence of cross talk between sugar and other signaling pathways. Plastids are known as sites of photosynthesis and of the synthesis and storage of biomolecules such as carbohydrates and hormones (Buchanan et al., 2000;Jung and Chory, 2010). Thus, the developmental, functional, and metabolic states of plastids can act as signals that modify the expression of nuclear genes (Nott et al., 2006;Pogson et al., 2008;Kleine et al., 2009;Pfannschmidt, 2010). Extensive genetic screens have been undertaken to identify mutants impaired in the plastid-to-nucleus retrograde signaling. Such mutants showed loss of intercompartmental communication, including aberrant control of the expression of nucleusencoded plastid genes at the transcriptional level (Nott et al., 2006;Koussevitzky et al., 2007;Pogson et al., 2008). Interestingly, the ABA signaling pathway is also implicated in retrograde signaling (Penfield et al., 2006;Shen et al., 2006;Koussevitzky et al., 2007;Kim et al., 2009;Priest et al., 2009;Jung and Chory, 2010;Leister et al., 2011). Indeed, Koussevitzky et al. (2007) demonstrated that ABI4, an AP2-type transcription factor, serves as a point of convergence and regulates nuclear gene expression in retrograde signaling. However, the impact of sugars on interorganellar communication between the plastid and the nucleus is relatively unexplored. Copper (Cu) is a microelement essential for living organisms as a cofactor (Palmer and Guerinot, 2009). However, excess Cu causes visible toxicity in Arabidopsis, indicating that adequate amounts need to be delivered to the various subcellular compartments (Shikanai et al., 2003;Abdel-Ghany et al., 2005;Palmer and Guerinot, 2009). In particular, the Arabidopsis plastidial P 1B -type ATPase Cu transporters, PAA1 (also known as AtHMA6) and its closest homolog PAA2 (also known as AtHMA8), play an important role in Cu delivery to plastids and, as a result, in the maintenance of Cu homeostasis (Shikanai et al., 2003;Abdel-Ghany et al., 2005). The Cu transporter PAA1, localized to the inner chloroplast envelope, transports Cu across the envelope into the stroma, and PAA2, localized to the thylakoid membrane, further transports Cu into the thylakoid lumen (Shikanai et al., 2003;Abdel-Ghany et al., 2005). Not surprisingly, paa1 paa2 double mutants are seedling lethal, indicating their important roles in Cu delivery during postembryonic growth and development (Shikanai et al., 2003;Abdel-Ghany et al., 2005). However, the potential role of PAA1 in sugar-induced intercompartmental signaling is largely unknown. In a screen for altered response in sugar signaling, we identified novel Arabidopsis mutants with insensitivity to growth inhibition in the presence of 6% Glc. We report here the genetic and physiological analyses of the recessive Glc-insensitive gig mutants (for glucose insensitive growth). In the presence of 1% Glc, however, the gig mutants showed a reduction of division potential in the root meristem, resulting in the retardation of root growth. Interestingly, under high-Glc conditions, gig is epistatic to both hxk1 and abi4, and the expression levels of the nuclear genes, in particular ABI4 and HXK1, were significantly decreased in gig. 
We found that the GIG locus encodes the plastidial P 1B -type ATPase Cu transporter PAA1 (Shikanai et al., 2003; Williams and Mills, 2005). Under high-Glc conditions, the addition of Cu had no effect on the recovery of gig/paa1 to the wild type, whereas exogenous Cu feeding, as reported previously (Shikanai et al., 2003), could suppress its phenotype under normal growth conditions. In the presence of 6% Glc, the expression of GIG/PAA1 was also altered by mutations in the nuclear factors HXK1, ABI3, and ABI4. A transient expression assay further revealed the interaction between ABI4 and the GIG/PAA1 promoter, suggesting that ABI4 actively regulates the transcription of GIG/PAA1, likely binding to the CCAC/ACGT core element of the GIG/PAA1 promoter. Our findings provide evidence for a novel function of the plastidial Cu transporter PAA1 in Glc-induced signaling between the plastid and the nucleus. [Figure 1. Isolation of gig with insensitivity to growth inhibition in the presence of 6% Glc. A, D, F, and H, Twelve-day-old seedlings of Col-0 (left) and gig (right) grown on MS agar plates with different sugars as indicated: 6% Glc (A), 6% Suc (D), 12% Suc (F), and 300 mM mannitol (H). B, E, G, and I, Root length of 12-d-old seedlings in the presence of the same sugars: 6% Glc (B), 6% Suc (E), 12% Suc (G), and 300 mM mannitol (I). C, Anthocyanin accumulation in Col-0 (blue) and gig (red) grown on 1% and 6% Glc. Statistical significance of differences was determined by Student's t test (*P < 0.05). Error bars indicate SE from three biological replicates. Bars = 0.5 cm.] Identification of Novel Mutants Insensitive to High-Glc-Induced Growth Inhibition To date, genetic approaches have been successful in identifying constituents and elucidating their roles in sugar signaling (Smeekens, 2000; Rook and Bevan, 2003; Gibson, 2005; Rolland et al., 2006; Ramon et al., 2008). In an attempt to identify additional components in the sugar signaling pathway, we screened a population of approximately 1,500 M2 activation-tagged lines for insensitivity to growth arrest under high-Glc conditions. In this mutant screening, we identified one line that exhibited insensitivity to the inhibition of seedling growth in the presence of 6% Glc (Fig. 1A). In a root growth assay, the root length of this line was longer, by approximately 3-fold, than that of wild-type Columbia (Col-0) seedlings (Fig. 1B). In addition, we analyzed anthocyanin accumulation, which is enhanced by high-Glc-induced growth inhibition (Tsukaya et al., 1991; Mita et al., 1997; Xiao et al., 2000; Baier et al., 2004; Teng et al., 2005; Jeong et al., 2010), in this mutant line. In the presence of 6% Glc, anthocyanin accumulation in the mutant was reduced by approximately 2.5-fold compared with that in Col-0 (Fig. 1C). Our findings indicate that the mutant line exhibits an insensitive growth phenotype in high (6%) Glc. To determine whether the insensitive growth phenotype is Glc specific, we analyzed this mutant line under high-Suc conditions. In the presence of 6% Suc, root growth of the mutant was slightly less sensitive than that of the wild type (Fig. 1, D and E). When grown in the presence of 12% Suc, which is nearly the same as 6% Glc in molarity, the growth phenotype of the mutant line was almost indistinguishable from that of Col-0 seedlings (Fig. 1, F and G). To test whether the phenotype was attributable to osmotic stress, we cultured both Col-0 and the mutant in the presence of 300 mM mannitol, which is also the same as 6% Glc in molarity.
We found that both the mutant and Col-0 seedlings were indistinguishably similar in growth (Fig. 1, H and I). Our observations indicate that the insensitive growth phenotype of the mutant was attributed primarily to high levels of exogenous Glc, and hence we named the mutant gig. For further genetic analysis, gig was backcrossed at least four times to Col-0 and could be reproducibly identified in the F2 progeny based on its insensitivity to 6% Glc. Under high-Glc conditions, gig was segregated in the typical 3:1 ratio, indicating that the gig mutation was inherited as a single recessive Mendelian locus (Table I). The gig Mutant Exhibits Growth Retardation under Normal Growth Conditions To address whether its insensitive growth phenotype in the presence of 6% Glc is due to relatively lower inhibition of vigor in gig seedlings, we investigated the gig mutant in the presence of 1% Glc (hereafter referred to as normal growth conditions in this study). Interestingly, gig showed a short-root phenotype compared with Col-0 seedlings under normal growth conditions ( Fig. 2A). In a root growth assay, we found that differences in the growth rate between Col-0 and gig increased gradually as plants became mature, indicating that gig root growth was retarded (Fig. 2B). We further investigated the root meristem size of gig by measuring the number and length of ground cells from the quiescent center (QC) in the presence of 1% Glc, as described previously (Dello Ioio et al., 2007;Achard et al., 2009;Ubeda-Tomás et al., 2009;Heo et al., 2011). The root meristem size of gig was smaller than that of Col-0, suggesting a reduction of cell division in the meristem zone (MZ; Fig. 2, C-E). To further characterize this defect, a CYCB1::GUS mitotic marker (Donnelly et al., 1999) was monitored both in gig and Col-0, and indeed cell division potential in gig roots was significantly reduced (Fig. 2, F and G). Our findings indicate that the retarded root growth of gig is attributable to a decrease in cell division potential. In the Arabidopsis root meristem, the stem cell niche, including the QC and initials, replenishes all the cell files (Scheres, 2007). We thus investigated developmental defects in and around the stem cell niche of gig using cell-specific markers, including SCARECROW (SCR), SHORT-ROOT (SHR), QC25, and WUSCHEL RELATED HOMEOBOX5 (WOX5; Di Laurenzio et al., 1996;Helariutta et al., 2000;Nakajima et al., 2001;Sabatini et al., 2003;Gallagher et al., 2004;Sarkar et al., 2007). In the presence of 1% Glc, the stem cell niche of gig was indistinguishable from that of Col-0, indicating that the reduction of the root meristem size of gig is not due to defects in the stem cell niche (Supplemental Fig. S1). In addition to its short-root phenotype, gig adult plants were also dwarf compared with Col-0 under our growth conditions (16-h-light/8-h-dark cycles; Supplemental Fig. S2). Taken together, gig exhibited growth retardation under normal (1% Glc) growth conditions, unlike its insensitive growth under high-Glc (6% Glc) conditions. The gig Mutant Is Epistatic to abi4 and hxk1 under High-Glc Conditions The insensitive phenotype under high-Glc conditions raised the question of whether GIG plays a role in sugar signaling. 
To address this question, we adopted a genetic approach by using hxk1/gin2 and abi4/gin6 mutants, which are well-characterized mutants with defects in sugar signaling (Arenas-Huertero et al., 2000;Huijser et al., 2000;Finkelstein and Gibson, 2002;Arroyo et al., 2003;Moore et al., 2003;Acevedo-Hernández et al., 2005;Dekkers et al., 2008). In the presence of 6% Glc, both hxk1 and abi4, as expected, exhibited insensitivity to growth arrest compared with Col-0 seedlings (Fig. 3, A, C, and D). Intriguingly, gig was more insensitive in root growth compared with both hxk1 and abi4 under high-Glc conditions (Fig. 3, B-D). To investigate genetic interactions of gig and these mutants, we generated the double mutant combinations gig hxk1 and gig abi4 and examined their growth phenotype in the presence of 6% Glc. To our surprise, the seedling growth of both gig hxk1 and gig abi4 was indistinguishable from that of the gig single mutant (Fig. 3, B, E, and F). Our findings indicate that gig is epistatic to both hxk1 and abi4 in high Glc.

Figure 4 (legend, in part). The boxes depict the coding regions, whereas the lines represent the noncoding regions. The red triangle denotes the position of the T-DNA insertion in gig-1, which was isolated from an activation-tagged population, whereas the blue triangle indicates the T-DNA insertion site in gig-2, which was identified from the SALK T-DNA database. The GIG locus encodes the plastidial P1B-type ATPase Cu transporter PAA1. B, Expression of PAA1 in Col-0, gig-1, and gig-2 seedlings. No expression was detected in either gig-1 or gig-2 seedlings. C, Allelism test of gig-1 and gig-2. From left to right, Col-0, gig-1, gig-2, and F1 progeny of a cross between gig-1 and gig-2 (gig-1 gig-2). The F1 progeny show insensitivity to growth inhibition in 6% Glc. Bar = 1 cm.

GIG Encodes the Plastidial P1B-Type ATPase Cu Transporter PAA1

To determine the molecular basis of the gig phenotypes, we identified the GIG locus using thermal asymmetric interlaced PCR (Liu et al., 1995), since the mutant was initially isolated in an activation-tagged population. We found a T-DNA insertion in the sixth intron of the GIG locus (At4g33520; Fig. 4A), which encodes the plastidial P1B-type ATPase Cu transporter PAA1 (also known as AtHMA6; Shikanai et al., 2003;Williams and Mills, 2005). Previous studies have demonstrated that PAA1, localized in the chloroplast inner membrane, mediates Cu delivery into the stroma (Shikanai et al., 2003;Abdel-Ghany et al., 2005). With specific primers for PAA1, we detected no expression, indicating that, in accordance with our genetic analysis, gig is a loss-of-function mutant (Fig. 4B; Table I). Additionally, we identified another T-DNA insertion allele in the SALK database (http://signal.salk.edu) and found that this new allele, named gig-2 (SALK_043208; Fig. 4, A and B), was also similarly insensitive to growth inhibition under high-Glc conditions (Fig. 4C). Subsequently, we reciprocally crossed these mutants for a complementation test. In the presence of 6% Glc, the F1 progeny of gig-1 and gig-2 showed indistinguishably insensitive growth compared with each parental line (Fig. 4C), corroborating their allelic relation. To further verify whether mutations in the GIG/PAA1 locus cause the growth insensitivity to high levels of exogenous Glc, we transformed gig mutants with a translational fusion (hereafter referred to as pGIG::GIG-GFP) including 443 bp of the putative promoter region and the open reading frame (ORF) fused to GFP.
In the presence of 1% Glc, the translational fusion restored the retarded growth phenotype of gig to that of Col-0 (Fig. 5A). As expected, transgenic seedlings with pGIG::GIG-GFP exhibited similar sensitivity to high-Glc-induced growth inhibition compared with Col-0 (Fig. 5B). Taken together, our findings indicate that the insensitivity of gig to high-Glc-induced growth inhibition is, indeed, due to the loss of plastidial Cu transporter PAA1 function.

The gig/paa1 Mutant Is Not Rescued by Cu Addition in High Glc

Previously, it was shown that paa1 mutants had a lower Cu content in the chloroplast and that the addition of exogenous 10 μM CuSO4 could suppress their growth defects, indicating that PAA1, which mediates Cu delivery across the plastid envelope, is an essential component of the plastidial Cu transport system (Shikanai et al., 2003). Hence, we cultured gig/paa1 on Murashige and Skoog (MS) agar plates with increasing Cu concentrations (5, 10, and 50 μM). When grown on MS agar plates supplemented with 10 μM CuSO4, the growth of gig in the presence of 1% Glc was nearly indistinguishable from that of Col-0 seedlings (Supplemental Fig. S3). To assess the relationship between Cu and Glc, we analyzed the root growth of Col-0 and gig/paa1 in the absence or presence of 10 μM CuSO4 with increasing Glc concentrations. In the absence of 10 μM CuSO4, the growth of gig/paa1 was retarded, as expected, compared with Col-0, up to the point where exogenous Glc was added to 3% (w/v; Supplemental Fig. S4A). Whereas both Col-0 and gig/paa1 were nearly identical in the presence of 4% Glc, the difference in growth between Col-0 and gig/paa1 was reversed under high-Glc conditions, in that gig/paa1 was more insensitive. When supplemented with 10 μM CuSO4, however, gig/paa1 was nearly indistinguishable from Col-0 seedlings up to 5% Glc (Supplemental Fig. S4B). Interestingly, gig/paa1 was still insensitive to high (6% and 7%) Glc, with no recovery of the growth defect (Fig. 5, F and G; Supplemental Fig. S4B). These findings indicate that the addition of Cu had no effect on reverting gig/paa1 to the wild type under high-Glc conditions, whereas exogenous Cu feeding could suppress its phenotype under normal growth conditions.

Expression Analysis of GIG/PAA1

To obtain further insight into gig/paa1 phenotypes, we investigated the in planta expression patterns of GIG/PAA1. To this end, we generated transgenic lines harboring a transcriptional fusion of the GUS marker gene under the control of the GIG/PAA1 cis-regulatory sequence located upstream of the ORF (hereafter referred to as pGIG::GUS). The putative promoter sequence selected was the longest that was used for molecular complementation of the gig/paa1 mutant. Therefore, we expected that this cis-regulatory sequence would be informative in monitoring the expression patterns of GIG/PAA1 in planta. In the shoot of 12-d-old seedlings, we observed GUS staining in marginal regions of cotyledons and leaves and primarily in the vasculature (Fig. 6, A and B). In the root, GIG/PAA1 expression was rather cell type specific, being detected only in the vascular tissues (Fig. 6C). In parallel, we also analyzed the seedling root with pGIG::GIG-GFP (translational fusion), and similarly, localization of GIG-GFP was observed only in the vasculature (Fig. 6D). For a more detailed analysis, we generated transverse sections of the primary root and detected GUS expression in the vascular bundle (Fig. 6E). Interestingly, however, no GUS expression was detected in the root tip (Fig.
6F), implying that cells in the MZ are more sensitive to fluctuations in Glc levels. Our findings suggest the involvement of the plastidial Cu transporter PAA1 in nongreen tissues, in which we primarily observed the growth retardation of gig. We also analyzed the levels of GIG/PAA1 expression by an independent, complementary method: reverse transcription-based quantitative PCR (qRT-PCR). In our analysis, the expression of GIG/PAA1 was detected in various organs, with the highest expression in the flower (Fig. 6G). Additionally, the levels of GIG/PAA1 mRNA accumulation were significantly increased, by approximately 3-fold, in the presence of 6% Glc, indicating that its expression is also subject to regulation by high Glc (Fig. 6H).

Figure 6 (legend, in part). The GIG mRNA level in the shoot is arbitrarily set to 1. H, Expression of GIG in the presence of 1% and 6% Glc by qRT-PCR. The statistical significance of differences was determined by Student's t test (*P < 0.05). Error bars indicate SE from three biological replicates.

Expression of Glc-Responsive Genes in gig/paa1

In addition to growth arrest, high levels of exogenous Glc also regulate a wide variety of genes at the transcriptional level, including APL3 (a large subunit of ADP-Glc pyrophosphorylase involved in starch biosynthesis) and CHS (for chalcone synthase; Koch, 1996;Li et al., 2006). Hence, we analyzed the expression levels of these Glc-responsive genes in both Col-0 and gig/paa1 in the presence of 1% and 6% Glc. Under high-Glc conditions, the APL3 mRNA level was, as expected, markedly elevated in Col-0, whereas the level of induction was significantly reduced in gig/paa1 (Fig. 7A). Likewise, the CHS mRNA level was increased in the presence of 6% Glc, but in gig/paa1, the level of induction was significantly reduced (Fig. 7B). On the basis of our findings that gig is epistatic to both abi4 and hxk1 in response to high-Glc concentrations, we subsequently analyzed the transcript levels of ABI4 and HXK1 in Col-0 and gig/paa1 in the presence of 1% and 6% Glc. Under high-Glc concentrations, the HXK1 mRNA level was, as expected, also induced by approximately 3-fold in Col-0, whereas it was significantly reduced in gig/paa1 (Fig. 7C). Interestingly, high-Glc activation of ABI4 was completely abolished in gig/paa1 (Fig. 7D). To investigate the extent of Glc-induced gene regulation, we also examined the ABI3 and ABI5 expression levels, which are known to be induced by high Glc (Cheng et al., 2002;Arroyo et al., 2003;Yuan and Wysocka-Diller, 2006;Dekkers et al., 2008). In the presence of 6% Glc, such Glc activation of ABI3 was almost completely eliminated in gig/paa1 (Fig. 7E). Likewise, induction of ABI5 expression was markedly reduced in gig/paa1 (Fig. 7F). Furthermore, we also analyzed the expression of APL3, HXK1, and ABI4 in the presence of 10 μM CuSO4. Interestingly, the expression levels of these genes in gig/paa1 were not recovered to the Col-0 levels (Supplemental Fig. S5). This observation is consistent with our previous finding (Fig. 5, F and G; Supplemental Fig. S4) that the addition of Cu in high Glc could not suppress the gig/paa1 phenotype. Taken together, our expression studies indicate that the loss of plastidial Cu transporter PAA1 function significantly alters the expression levels of Glc-responsive genes. In particular, the reduced expression of the nuclear factors ABI3, ABI4, ABI5, and HXK1 in gig/paa1 under high-Glc conditions suggests a novel retrograde plastid-to-nucleus signaling pathway in which the plastidial Cu transporter PAA1 is involved.

Figure 7. Expression analysis of Glc-responsive genes. qRT-PCR in Col-0 and gig is shown in the presence of 1% (blue) and 6% (red) Glc. A, APL3. B, CHS. C, HXK1. D, ABI4. E, ABI3. F, ABI5. The statistical significance of differences was determined by Student's t test (*P < 0.05). Error bars indicate SE from biological triplicates.

ABI4 Is Essential for the Activation of GIG/PAA1 Expression

Our findings that the loss of GIG/PAA1 function results in a dramatic reduction of the nuclear gene expression of ABI3, ABI4, ABI5, and HXK1 in the presence of high Glc and that the GIG/PAA1 promoter itself contains a CCAC/ACGT sequence for ABI4 binding (Supplemental Fig. S6; Strand et al., 2003;Koussevitzky et al., 2007) raised the question of whether the transcription of GIG/PAA1 itself is subject to regulation by the nuclear transcription factor ABI4. To test this, we first examined the expression of GIG/PAA1 in abi3, abi4, and abi5 in the presence of 1% and 6% Glc. As shown in Figure 8A, the GIG/PAA1 mRNA levels were significantly reduced in both abi3 and abi4, but not in abi5, under high-Glc conditions. In addition, we also monitored GIG/PAA1 expression in hxk1 in the presence of 1% and 6% Glc, since the Glc sensor HXK1 can be localized in the nucleus and regulate gene expression (Cho et al., 2006). In high Glc, the induction of GIG/PAA1 mRNA was almost completely abolished in hxk1 (Fig. 8A). Next, we addressed whether the expression of GIG/PAA1 is regulated by ABA. Indeed, the level of GIG/PAA1 mRNA accumulation was substantially increased in the presence of 10 μM ABA (Fig. 8B). To further investigate the interaction between ABI4 and the GIG/PAA1 promoter, we performed a transient expression assay using Arabidopsis protoplasts as described previously (Yoo et al., 2007). The reporter plasmid containing the 443-bp fragment that was used for both the transcriptional fusion and molecular complementation, and the effector plasmid 35S::ABI4, were introduced into Arabidopsis protoplasts in the absence or presence of high Glc (300 mM). When relative luciferase (LUC) activity was monitored, the expression of GIG/PAA1 was, as expected, induced by high Glc alone and by ABI4 alone (Fig. 8C). Interestingly, GIG/PAA1 expression was synergistically induced by both high Glc and ABI4, suggesting that the nuclear transcription factor ABI4 actively regulates the transcription of GIG/PAA1, which is essential for plastid function and/or activity. Our findings suggest a molecular mechanism of bidirectional (plastid-to-nucleus and nucleus-to-plastid) communication in response to high Glc levels.

Figure 8. Regulation of GIG expression levels by ABA. A, GIG transcript levels in abi3, abi4, abi5, and hxk1 in the presence of 1% and 6% Glc. B, Expression of GIG in the absence or presence of 10 μM ABA. C, Transient expression assay for the interaction between ABI4 and the GIG promoter. The effector and reporter plasmids are schematically shown. Relative LUC activity was determined by the addition of Glc alone, ABI4 alone, and both Glc and ABI4. Error bars indicate SE from biological triplicates.

DISCUSSION

In this study, we identified novel mutants, designated gig, that showed insensitivity to growth inhibition in high (6%) Glc. The expression levels of the Glc-responsive ABI3, ABI4, ABI5, APL3, CHS, and HXK1 genes were significantly altered in gig/paa1 in the presence of 6% Glc.
Interestingly, gig abi4 and gig hxk1 double mutants were indistinguishable from the gig single mutant under high-Glc conditions, indicating that gig is epistatic to both abi4 and hxk1. Subsequent molecular cloning led us to the conclusion that the insensitivity to high-Glc-induced growth inhibition was caused by the loss of plastidial Cu transporter PAA1 function (Shikanai et al., 2003). When complemented with the GIG/PAA1 genomic fragment or supplemented with exogenous Cu, growth defects in the gig/paa1 mutants were completely restored to levels indistinguishable from Col-0 seedlings. These results, together with those from the phenotypic analyses of gig/paa1, indicate the important role of the GIG/PAA1 Cu transporter in Glc-induced retrograde plastid-to-nucleus signaling. In retrograde signaling, particularly the plastid signaling revealed in this study, integrated information on developmental, functional, and metabolic states is conveyed to the nucleus, in which the expression of nuclear genes is modified accordingly (Nott et al., 2006;Pogson et al., 2008;Kleine et al., 2009;Pfannschmidt, 2010;Leister et al., 2011). In particular, ABI4 plays an important role in the integration of retrograde signaling to regulate the transcription of nuclear genes (Koussevitzky et al., 2007). Furthermore, intracellular communication between the plastid and the nucleus, as suggested previously (Jung and Chory, 2010), is essentially bidirectional.

Figure 9. Bidirectional communication between the chloroplast and the nucleus. Changes in the functional states/activities of plastids caused by high Glc can possibly act as a retrograde signal to alter the expression of nuclear genes such as ABI3, ABI4, ABI5, and HXK1. In particular, the transcription factor ABI4, which likely binds directly to the CCAC/ACGT core element of the GIG/PAA1 promoter, regulates the expression of the plastidial Cu transporter PAA1 that is essential for plastid function and/or activity.

Thus, we investigated whether the GIG/PAA1 gene itself is subject to transcriptional regulation by the nuclear transcription factor ABI4. First, we found a significant reduction of the GIG/PAA1 transcript levels in abi3, abi4, and hxk1 mutants. Next, we analyzed the interaction between ABI4 and the GIG/PAA1 promoter, which contains a combination of the CCAC/ACGT core element for ABI4 binding and retrograde signaling (Strand et al., 2003;Koussevitzky et al., 2007). Indeed, our transient expression assay reveals that the transcription of GIG/PAA1 is synergistically regulated by high Glc and ABI4. These results lend support to our hypothesis that the plastidial Cu transporter PAA1 plays a role in the coordination of the bidirectional intercompartmental communication between the plastid and the nucleus in the presence of high Glc. Previously, it was shown that miRNA398 levels, which function to repress two Cu/zinc superoxide dismutases (CSD1 and CSD2), were decreased by the addition of Cu (Sunkar et al., 2006;Yamasaki et al., 2007;Dugas and Bartel, 2008). Moreover, Dugas and Bartel (2008) demonstrated that in the presence of Suc, miRNA398 levels were increased, whereas the protein levels of CSD1 and CSD2, the miRNA398 targets, were substantially decreased. The inverse relationship between Cu and sugars in the regulation of miRNA398 levels, and in turn the CSD1 and CSD2 levels, suggests that sugars affect the results of Cu feeding. Interestingly, we also found that the addition of Cu could not revert gig/paa1 to the wild type in the presence of high Glc.
Since it was demonstrated that both transcript and protein levels of CSD1 and CSD2 were higher in gig/ paa1 than in the wild type (Shikanai et al., 2003;Abdel-Ghany et al., 2005), it will be interesting to determine the levels of miRNA398 accumulation in gig/paa1 in the absence or presence of high Glc with the addition of Cu. Since GIG/PAA1 is expressed in roots as well as in green tissues (Shikanai et al., 2003;Abdel-Ghany et al., 2005) and reactive oxygen species (ROS) can modulate the balance between cell proliferation and differentiation in roots (Tsukagoshi et al., 2010), it is tempting to speculate that ROS and/or sugar levels in plastids of both roots and green tissues may be influenced by proper Cu delivery to CSD1 and CSD2, which is controlled by PAA1 (Shikanai et al., 2003;Abdel-Ghany et al., 2005). Thus, changes in ROS, sugar levels, functional states, and/or activities of plastids caused by high Glc can act as retrograde signals (Oswald et al., 2001;Fey et al., 2005) and in turn regulate the expression of nuclear genes such as ABI3, ABI4, ABI5, and HXK1 (Nott et al., 2006;Koussevitzky et al., 2007;Pogson et al., 2008;Kleine et al., 2009;Pfannschmidt, 2010). Subsequently, alterations in the expression levels of the nuclear factors, in particular ABI4, which can bind directly to the CCAC/ACGT core element of the GIG/PAA1 promoter, regulate the expression of the plastidial Cu transporter PAA1 that is essential for plastid function and/or activity (Fig. 9). In summary, we report a previously unrecognized role for the plastidial Cu transporter PAA1 in the coordination of Glc-induced intracellular signaling. Further molecular and biochemical characterization with respect to the nature of retrograde signals (e.g. ROS, sugar levels, or both), for which GIG/PAA1 may well be responsible, will be necessary to elucidate the precise molecular mechanism of Glc-induced intercompartmental signaling between the plastid and the nucleus. Screening for gig and Isolation of the GIG Locus For the identification of gig, an activation tagging library was generated as described (Weigel et al., 2000;Hwang et al., 2010). A population of approximately 1,500 M2 lines was screened for insensitivity to growth arrest on one-half-strength MS agar plates containing 6% (w/v) Glc. The gig mutant with enhanced high-Glc tolerance relative to wild-type seedlings was visually identified and was subsequently backcrossed to Col-0 at least four times for further analysis. To identify the GIG locus, thermal asymmetric interlaced PCR was performed as described previously (Liu et al., 1995). The primer sequences are listed in Supplemental Table S1. Root Growth Assays and Statistical Analysis Digital images of seedlings grown vertically on one-half-strength MS agar plates were taken using an SP-560UZ digital camera (Olympus) at each time point as indicated. Root length was measured from the digital images of the plates using ImageJ software (http://rsbweb.nih.gov/ij). The experiments were independently repeated three times, and the data were analyzed using the Excel statistical package (Microsoft). Student's t test was performed to compare the mean values of triplicates, and SE values are indicated. Anthocyanin Quantitation Anthocyanin accumulation in Arabidopsis seedlings was quantitatively determined as described previously (Mita et al., 1997;Teng et al., 2005;Solfanelli et al., 2006) with minor modifications. 
For anthocyanin extraction, frozen, homogenized seedlings (100 mg) at 12 d after germination were incubated with 800 μL of 1% (v/v) hydrochloric acid in methanol overnight at 4°C. Subsequently, 400 μL of distilled water and 200 μL of chloroform were added to the mixture and mixed vigorously by vortexing. After centrifugation at 13,000 rpm for 15 min, the supernatant was collected and its absorbance was measured at 530 and 657 nm using a spectrophotometer, as described previously (Mita et al., 1997;Teng et al., 2005;Solfanelli et al., 2006).

qRT-PCR Analysis

Total RNA samples were prepared from 12-d-old seedlings grown on one-half-strength MS agar plates containing either 1% or 6% Glc, using Easy-Blue reagent (Intron). RNA samples were treated with RQ1 RNase-free DNase (Promega) to eliminate potential contamination with genomic DNA and further purified using an RNeasy Plant Mini kit according to the manufacturer's instructions (Qiagen). The quality and quantity of the isolated RNA were assessed by both gel electrophoresis and spectrophotometry as described previously (Lee et al., 2008;Heo et al., 2011). The protocols used for qRT-PCR were essentially the same as those described previously (Lee et al., 2008;Heo et al., 2011) with minor modifications. Approximately 0.5 μg of total RNA was used for the synthesis of cDNA using the iScript cDNA synthesis kit according to the manufacturer's instructions (Bio-Rad). These cDNA samples were used for qRT-PCR with SYBR Premix ExTaq reagents (Takara) on the Mx3000P QPCR System (Agilent Technologies). The ACTIN2 (At3g18780) gene was used as the internal reference as described previously (Lee et al., 2008). Each experiment was conducted independently at least three times with biological replicates.

Plasmid Construction and Transformation

To generate a transcriptional fusion of the GIG/PAA1 locus to the GUS reporter gene, a 443-bp fragment containing the cis-sequence upstream of the start codon, which is the longest intergenic region of the GIG/PAA1 locus, was PCR amplified and subsequently cloned into the pENTR/D-TOPO vector (Invitrogen). The error-free promoter fragment of GIG/PAA1 was subcloned into the binary pMDC162 vector (Curtis and Grossniklaus, 2003) by Gateway recombination cloning technology (Invitrogen). For molecular complementation, we generated a translational fusion of the wild-type GIG/PAA1 ORF to GFP under the control of the 443-bp GIG/PAA1 promoter in the binary pMDC107 vector (Curtis and Grossniklaus, 2003) using Gateway technology (Invitrogen). The resulting plasmids were introduced into Agrobacterium tumefaciens (GV3101) and then into Col-0 and gig plants by the floral dipping method (Clough and Bent, 1998). For the transient expression assay, the reporter and effector plasmids were constructed using the pBI221 vector. In the reporter plasmid, the GUS gene was replaced with the firefly LUC gene, and the cauliflower mosaic virus 35S promoter was replaced with the 443-bp GIG/PAA1 promoter. To generate the effector plasmid, the full-length ABI4 coding region was amplified by PCR and then inserted into pBI221, replacing the GUS gene. As a control plasmid, the GUS gene in pBI221 was replaced with the Renilla LUC gene.

Histochemical GUS Assays and Microscopy

The protocols used for the histochemical localization of GUS activity were essentially the same as those described previously (Yu et al., 2010;Heo et al., 2011) with minor modifications.
MS agar plate-grown seedlings at 12 d after germination were incubated overnight at 37°C in GUS staining solution [0.4 mM 5-bromo-4-chloro-3-indoxyl-β-D-glucuronic acid, 2 mM K3Fe(CN)6, 2 mM K4Fe(CN)6, 0.1 M sodium phosphate, 10 mM EDTA, and 0.1% Triton X-100]. After staining overnight, the samples were washed and cleared as described (Yu et al., 2010;Heo et al., 2011). For plastic sections, GUS-stained roots were embedded in Technovit 7100 (Heraeus Kulzer) and sectioned using an HM 355S microtome (Microm) as described previously (Yu et al., 2010;Heo et al., 2011). Samples were observed with differential interference contrast optics using an Axio Imager.A1 microscope (Carl Zeiss), and digital images were obtained with the AxioCam MRc5 digital camera (Carl Zeiss) mounted on the microscope. For GFP visualization, a Fluoview FV300 (Olympus) confocal laser scanning microscope was used as described (Yu et al., 2010;Heo et al., 2011).

Transient Expression Assay

The protocol used for the transient expression assay, including protoplast preparation, transformation, and LUC activity measurement, is basically the same as that described previously (Yoo et al., 2007) with minor modifications. Protoplasts were prepared from rosette leaves of 3- to 4-week-old Col-0 plants using cellulase R10 and macerozyme R10 (Yakult Pharmaceutical). Transformation was performed for 5 min using a polyethylene glycol solution consisting of 40% (w/v) polyethylene glycol 4000 (Fluka), 200 mM mannitol (Duchefa Biochemie), and 100 mM CaCl2 (Sigma) in the presence of the effector and/or reporter plasmids. The plasmids were prepared with the AxyPrep Maxi-Plasmid kit (Axygen), and a total of 10 μg of plasmid DNA was used at a ratio of 9:9:2 (effector:reporter:control). After transformation, protoplasts were incubated in washing and incubation buffer (500 mM mannitol, 4 mM MES, and 20 mM KCl) in the absence or presence of 300 mM Glc at 22°C for 16 h in the dark. Harvested protoplasts were lysed and LUC activity was measured using the Dual-Luciferase Reporter Assay System (Promega). The reporter gene activity was normalized to Renilla LUC activity. Experiments were independently repeated three times for biological replicates.

Supplemental Data

The following materials are available in the online version of this article.

Supplemental Figure S1. Analysis of the stem cell niche in Col-0 and gig roots.
Supplemental Figure S2. Comparative analysis of Col-0 and gig adult plants.
Supplemental Figure S3. Complementation of gig with Cu supplementation.
Supplemental Figure S4. Root growth assay in the absence or presence of Cu with increasing Glc concentrations.
Supplemental Figure S5. Expression analysis of Glc-responsive genes with the addition of Cu.
Supplemental Figure S6. Prediction of an ABA-responsive element sequence in the GIG promoter.
Supplemental Table S1. Sequence information of PCR primers used in this study.
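As a concrete illustration of the quantitative steps described in the qRT-PCR and statistical analysis sections above, the following short Python sketch (not the authors' code) computes expression relative to the ACTIN2 reference gene and applies Student's t test across three biological replicates. The 2^-ddCt formulation and every Ct value shown are illustrative assumptions, not data from this study.

```python
# Minimal sketch, assuming the standard 2^-ddCt relative quantification against ACTIN2
# and a two-sample Student's t test on biological triplicates, as described in the Methods.
import numpy as np
from scipy import stats

def relative_expression(ct_target, ct_actin2, calibrator_dct):
    """Return fold change via the 2^-ddCt method against a calibrator sample."""
    dct = np.asarray(ct_target) - np.asarray(ct_actin2)   # normalize to the ACTIN2 reference
    ddct = dct - calibrator_dct                            # compare with a calibrator (e.g. Col-0, 1% Glc)
    return 2.0 ** (-ddct)

# Hypothetical Ct values for a Glc-responsive gene in Col-0 vs. gig at 6% Glc (n = 3).
col0 = relative_expression([21.1, 21.3, 20.9], [17.0, 17.1, 16.9], calibrator_dct=6.0)
gig  = relative_expression([23.0, 23.2, 22.8], [17.0, 17.2, 16.8], calibrator_dct=6.0)

t_stat, p_value = stats.ttest_ind(col0, gig)               # Student's t test between genotypes
print(f"Col-0 mean fold change: {col0.mean():.2f} (SE {col0.std(ddof=1)/np.sqrt(3):.2f})")
print(f"gig   mean fold change: {gig.mean():.2f} (SE {gig.std(ddof=1)/np.sqrt(3):.2f})")
print(f"P = {p_value:.3g}; significant at *P < 0.05: {p_value < 0.05}")
```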
Revolutionizing student course selection: Exploring the application prospects and challenges of blockchain token voting technology

This paper explores the utilization of blockchain token voting technology in student course selection systems. Current course selection systems face various issues, which can be mitigated through the implementation of blockchain technology. The advantages of blockchain technology, including consensus mechanisms and smart contracts, are discussed in detail. The token voting mechanism, encompassing its concepts, token issuance and distribution, and voting rules and procedures, is also explained. The system design takes into account the system architecture, user roles and permissions, course information on the blockchain, the student course selection voting process, and the tallying and public display of course selection results. The technology offers advantages such as transparency, fairness, data security and privacy protection, and improved system efficiency. However, it also poses several challenges, such as technological and regulatory hurdles. The prospects for the application of blockchain token voting technology in student course selection systems and its potential impact on other fields are summarized. Overall, the utilization of blockchain token voting technology in student course selection systems holds promise and could help transform the education sector.

Introduction

The issue of course selection has always posed a challenge for both students and universities. During the concentrated course selection period, traditional systems frequently experience severe congestion due to increasing numbers of students and limited server processing capacity [1], which can negatively impact students' course experience. Blockchain is a decentralized public ledger based on a peer-to-peer network [2]. Its decentralization, openness, and transparency can enhance the credibility and security of the system, while also enabling distributed storage and management of course selection information and thereby improving fault tolerance and scalability. Additionally, tokens are currency-like digital assets with payment and designated circulation functions [3,4]. To integrate blockchain technology into the course selection system, we propose the introduction of a token voting system. Blockchain technology supports token issuance, voting, and trading functions, which can streamline students' course selection operations and the tallying of results. Such a system can improve the fairness and efficiency of course selection while also reducing the load on centralized server resources. This paper discusses how to leverage blockchain technology and a token voting system to enhance university course selection systems, offering a new perspective for addressing the challenges of course selection in academia.

Blockchain technology overview

This chapter provides an overview of blockchain technology, focusing on its key components and their roles in supporting various applications. A blockchain adopts a chain-like structure in which blocks are linked in chronological order, and its immutability and resistance to forgery are ensured through cryptography. It is a reliable technology for recording transactions of changing network data, with features such as security, decentralization, transparency, and immutability [5].
In the context of a student course selection system based on blockchain token voting technology, we discuss how consensus mechanisms and smart contracts work together to create a secure and transparent environment for managing course selection data. As of 2020, the application of blockchain technology in the education field was still at an early stage, mainly used for verification of methods and the sharing of certificates [6]. These practices have demonstrated that blockchain technology can provide a reliable and secure distributed environment, capable of carrying and recording data such as students' course selections.

Consensus mechanisms

A consensus mechanism is the process by which the nodes of a blockchain network reach agreement on a single version of the transaction history. Consensus mechanisms ensure the security and reliability of the blockchain network by making it difficult for attackers to tamper with recorded transactions. Ethereum currently uses a Proof of Stake (PoS) consensus mechanism, whose main features are: a. Validator staking: In the PoS mechanism, nodes, or validators, stake their tokens as collateral to obtain the right to produce blocks and validate transactions. In Ethereum, a minimum of 32 ETH is required to become a validator. b. Block production and validation: When new transactions occur, validators compete for the right to produce blocks. The validator who obtains the block production right chooses the transactions to be packed into a block, creates the new block, and adds it to the blockchain. After the new block is created, other validators verify its correctness. c. Rewards and penalties: Validators earn rewards through staking and block production. If validators behave maliciously, for example by tampering with transactions or trying to create invalid blocks, their staked tokens are forfeited. The Proof of Stake mechanism offers higher energy efficiency and security than other consensus mechanisms such as Proof of Work (PoW), the mechanism used by Bitcoin.

Smart contracts

Smart contracts are self-executing contracts that run on a blockchain. They are written in a specific programming language and executed on-chain; when predetermined conditions are met, a smart contract automatically carries out its operations. Ethereum was the first blockchain platform to support smart contracts, which are typically written in the Solidity programming language. The main features of smart contracts are: a. Decentralization: Smart contracts are deployed on blockchain networks and are therefore not subject to the control of any individual, which enables greater transparency and immutability. b. Automatic execution: Smart contracts execute automatically according to preset conditions, eliminating the need for human involvement and reducing the likelihood of errors. c. Programmability: Smart contracts such as Ethereum smart contracts are Turing-complete, which allows developers to create contracts with complex functionality. This has enabled the development of various decentralized applications (DApps) on Ethereum.
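To make the idea of condition-triggered, automatic execution more concrete, the following minimal sketch uses plain Python to stand in for an on-chain Solidity contract. All names (CourseVotingContract, vote, stage) are hypothetical, and the point is only to show how a voting contract's state and guard conditions might be organized: a state change is allowed only when the preset conditions, here time windows and token balances, are satisfied.

```python
# Illustrative sketch only; a real deployment would be a Solidity contract on Ethereum.
import time

class CourseVotingContract:
    def __init__(self, voting_start, voting_end, course_capacity, token_balances):
        self.voting_start, self.voting_end = voting_start, voting_end
        self.capacity = dict(course_capacity)      # course_id -> number of seats
        self.balances = dict(token_balances)       # student -> remaining semester tokens
        self.votes = {}                            # (student, course_id) -> tokens committed

    def stage(self, now=None):
        """The process moves one way only: preparation -> voting -> settlement."""
        now = time.time() if now is None else now
        if now < self.voting_start:
            return "preparation"
        return "voting" if now < self.voting_end else "settlement"

    def vote(self, student, course_id, tokens, now=None):
        """Record a vote only if the preset conditions hold (akin to require() guards)."""
        if self.stage(now) != "voting":
            raise RuntimeError("voting window is closed")
        if self.balances.get(student, 0) < tokens:
            raise RuntimeError("insufficient tokens")
        self.balances[student] -= tokens
        self.votes[(student, course_id)] = self.votes.get((student, course_id), 0) + tokens
```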
System design

This chapter introduces the implementation of a student course selection system based on token voting, deployed on a decentralized blockchain network that supports smart contract technology. The system aims to provide a safe, transparent, and fair course selection platform, enabling students to choose courses more conveniently and efficiently. The overall voting and course selection process is controlled by smart contracts on the blockchain, and every interaction with the smart contract is written into a block, making the course selection process trustworthy and tamper-proof. The main implementation steps are: writing a smart contract for the voting-based course selection scheme, designing a method for distributing tokens to students, and implementing a web front end for the course selection system that interacts with the blockchain smart contract.

System architecture

The student course selection system based on token voting is deployed on a decentralized blockchain network that supports smart contract technology. Ethereum is such a network: it is secure, stable, transparent, and open, and it supports smart contracts. These characteristics eliminate the need for centralized servers, so the Ethereum blockchain can host the deployment of this system. The architecture described below therefore assumes deployment on the Ethereum blockchain network.

The voting protocol, including the management and execution of the entire voting process, is implemented in an Ethereum smart contract. The smart contract runs automatically on the Ethereum chain after being initialized by the voting initiator. During operation, the smart contract controls the voting process and automatically enters the next stage at the predetermined time. The schematic diagram of the project is shown in Figure 1, where each voter's voting process is as follows: a vote for the selected course is initiated through the front end of the webpage, the webpage interacts with the smart contract, a transaction is sent to the smart contract address, and the corresponding method in the smart contract is invoked. When the smart contract reports the operation on the blockchain as successful, the user has successfully participated in the vote. The smart contract is deployed on the Ethereum blockchain, and its compiled code is distributed and stored on every node of the blockchain network. When the smart contract is invoked, it is executed in a decentralized manner by the Ethereum Virtual Machine (EVM), so each node on the network holds a copy of the data, which underpins the stability of the decentralized voting and course selection system.
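The front-end-to-contract interaction just described can be sketched with the web3.py client. The node URL, contract address, ABI, and the vote() method below are placeholders for whatever interface a deployed contract would actually expose; they are assumptions for illustration, not details specified in this paper.

```python
# Hedged sketch of how the web front end might submit a vote as an Ethereum transaction.
from web3 import Web3

NODE_URL = "https://ethereum-node.example.edu"                     # assumed RPC endpoint
CONTRACT_ADDRESS = "0x0000000000000000000000000000000000000000"    # placeholder contract address
CONTRACT_ABI: list = []                                            # ABI produced when the (hypothetical) contract is compiled

w3 = Web3(Web3.HTTPProvider(NODE_URL))
contract = w3.eth.contract(address=CONTRACT_ADDRESS, abi=CONTRACT_ABI)

def submit_vote(student_address: str, course_id: int, tokens: int) -> bool:
    """Send the voting transaction and wait for it to be mined into a block."""
    tx_hash = contract.functions.vote(course_id, tokens).transact({"from": student_address})
    receipt = w3.eth.wait_for_transaction_receipt(tx_hash)
    return receipt.status == 1                                      # status 1 means the call succeeded
```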
Students (voters) can decide which courses to vote for and how many votes to cast for each selected course. The Vote Web UI is the page through which students interact with the system, and it both displays information and handles interaction. For display, the web front end shows course information, remaining tokens, the voting period, and the outcome of course selection requests, to support students' voting decisions. For interaction, the web front end obtains the course ID and the number of votes selected by the user and then sends a request to the Ethereum smart contract. The Ethereum smart contract is the core component of the entire system and has three responsibilities: control, transaction generation, and result generation. For control, the contract advances the process through the preparation, voting, and tallying stages in a fixed one-way order and verifies the legitimacy of each vote (time window, number of tokens, voter identity, etc.). For transaction generation, the smart contract records the outcome of each vote (success or failure) on the blockchain. For result generation, once voting has ended, calling the corresponding method on the smart contract returns the final course selection statistics, that is, the successful candidates for each course.

Voting rules and mechanisms

In this course selection system, the voting rules and mechanisms are implemented through smart contracts, with the aim of ensuring security, stability, and transparency, so that each student can obtain a satisfactory course combination according to their own needs as far as possible. The implementation proceeds as follows. First, the smart contract allocates the optional courses for the voting stage based on the number of credits each student declared for the semester during the declaration stage. After the voting stage opens, students are free to invest their tokens in the courses they want. After the voting phase ends, the system enters the settlement phase, which is processed by the smart contract using an allocation algorithm: for each course, students are ranked by the number of tokens they committed, and the top n students, where n is the course capacity, are selected as successful candidates for that course. Students who invest more tokens therefore have an advantage in the competition and are more likely to obtain the courses they expect.

If a student fails to win a seat in a given course, they still have the opportunity to secure a spot in other, less competitive courses. For this group of students, the system processes the other courses one by one, in descending order of vote count. This reassignment follows two principles: no time conflicts and no duplicate courses. Among the courses that satisfy these requirements, the system assigns courses to these students until the pre-declared number of credits is reached.
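One reasonable reading of this settlement stage can be sketched as follows. The data structures (dictionaries keyed by student and course) and the exact fallback order are illustrative assumptions on my part rather than a specification taken from the paper.

```python
from collections import defaultdict

def settle(votes, capacity, credits, timeslot, credit_target):
    """votes: {(student, course): tokens committed}; capacity: {course: seats};
    credits: {course: credit value}; timeslot: {course: meeting time};
    credit_target: {student: declared credits}. Returns {student: set of courses won}."""
    enrolled = defaultdict(set)
    seats_left = dict(capacity)

    # Primary round: for each course, the top-n bidders (n = course capacity) win a seat.
    for course, seats in capacity.items():
        bids = sorted(((t, s) for (s, c), t in votes.items() if c == course), reverse=True)
        for _, student in bids[:seats]:
            enrolled[student].add(course)
            seats_left[course] -= 1

    # Fallback round: losing bids are revisited in descending token order; each may be
    # redirected to a course with free seats, no time conflict, and no duplication,
    # until the student's declared credit total is reached.
    for (student, wanted), _ in sorted(votes.items(), key=lambda kv: -kv[1]):
        if wanted in enrolled[student]:
            continue
        chosen = enrolled[student]
        if sum(credits[c] for c in chosen) >= credit_target[student]:
            continue
        for course in sorted(seats_left, key=seats_left.get, reverse=True):
            if (seats_left[course] > 0 and course not in chosen
                    and timeslot[course] not in {timeslot[c] for c in chosen}):
                chosen.add(course)
                seats_left[course] -= 1
                break
    return dict(enrolled)
```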
Token issuance and distribution

Based on the teaching plan and teaching circumstances, the school determines and publishes the total number of credits that students in each major are expected to take, as a reference for students in different majors. In the token voting course selection system, before the course selection stage of each semester, students declare to the smart contract the number of course credits they intend to take [7]. The smart contract then distributes to each student a number of tokens proportional to the declared credits. The tokens for each semester are independent and are not carried over between semesters. For example, the "Spring2023" token issued for course selection in the spring of 2023 can only be used for course selection in the spring semester of 2023. For the following semester, a new "Fall2023" token is issued, while the outdated "Spring2023" token becomes invalid and can no longer be used for voting. This mechanism helps to ensure that the average token cost for all courses is the same.

In the course selection stage, each student needs to consider the time slots of the different courses and make a preliminary course plan, allocating tokens in a deliberate, organized way and ultimately committing to each chosen course the number of tokens the student judges appropriate. In each semester's course selection, students must ensure that the courses of each required type add up to the credit totals set by the college, in order to satisfy the requirements for graduation.

Comparison between the token voting system and the traditional course selection system

The traditional course selection system places heavy demands on server response time, and some students may miss out on courses because of external factors such as network speed. In the token voting system, by contrast, students commit tokens to the courses they want on the basis of their own planning, overall arrangement, and careful consideration. The token voting system can thus better encourage students to think about resource allocation, guide them toward courses that fit their professional plans, and help them explore their future path [8]. Moreover, the token voting approach largely avoids a drawback of traditional course selection systems, namely server paralysis caused by a large number of students entering the system within a short period of time. On the other hand, in terms of system design, the token-based course selection system requires greater openness, more computation, and more sorting of data, so it places higher demands on algorithms and compatibility.
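A compact sketch of the issuance rule above: each semester a fresh, semester-tagged token is created, and each student receives an amount proportional to the credits they declared. The proportionality factor (tokens per credit) and the student names are assumed design parameters for illustration, not values given in the paper.

```python
TOKENS_PER_CREDIT = 100  # assumed scaling factor

def issue_tokens(declared_credits, semester):
    """declared_credits: {student: credits declared for this semester}.
    Balances are tagged with the semester and are never carried over."""
    return {student: {semester: credits * TOKENS_PER_CREDIT}
            for student, credits in declared_credits.items()}

spring = issue_tokens({"student_a": 18, "student_b": 24}, semester="Spring2023")
# The next semester's contract only accepts the newly issued tag (e.g. "Fall2023"),
# so any leftover "Spring2023" balance is effectively expired.
```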
Evaluation and limitation analysis of the token voting system in the course selection system

During peak periods of high resource demand, traditional network architectures show poor overall performance and uneven distribution of real-time traffic, resulting in poor responsiveness. Because quotas are limited and access is concentrated in short time windows, traditional course selection systems suffer from excessive thread blocking, low server throughput, extremely slow response, and severe congestion and lag; the token voting system eliminates the need for centralized servers and defuses this surge in access [9]. The system can meet the course selection needs of most students within a short period of time, reduce the time cost and mental pressure of course selection, and greatly improve the course selection experience. At the same time, the mechanism reduces, to some extent, the consumption of server resources and the maintenance costs borne by the school.

The introduction of a token voting system into the course selection process is highly innovative and also addresses practical difficulties faced by students. Before each semester's course selection, students determine and declare to the system the number of courses and corresponding credits they intend to take that semester, and the system then does its best to satisfy students' initial wishes during course selection. This not only gives students the right to make independent choices but also allows them to arrange their four-year university curriculum according to their own development needs, and to some extent it tests their ability to coordinate and plan. Because the number of credits declared and the number of tokens invested each semester matter, students are reminded to consider every choice carefully.

The token voting system also has limitations. Firstly, the system carries a certain investment risk: it cannot prevent a small number of students from spending too many tokens early on, whether out of a gambling mentality or poor planning. As graduation approaches, such students may still have many courses they could not take for lack of tokens and may ultimately fail to complete the total credits required across the course types of their major, resulting in unsuccessful graduation. Secondly, with the introduction of the token system, the essence of course selection shifts from the traditional "first come, first served" race for network resources to a bidding process in which more tokens win. The competition therefore cannot guarantee complete fairness, and some students who bid heavily will see their tokens depleted [10].
Conclusion

The application of the token voting system to the course selection process has significant implications. Not only does it improve the student course selection experience and optimize the course selection system, it also has the potential to align closely with local conditions and create added value. Through this system, the popularity of different courses, and of different content within them, can be directly observed and used as one criterion for evaluating teachers' teaching. In the future, blockchain technology is expected to enable information sharing between teachers and students nationwide or even globally. By comparing the number of tokens students allocate to different courses, or to different teachers of the same course, and by collecting corresponding data through real-time questionnaires, a comprehensive analysis can be conducted. This allows teachers to stay aware of students' needs and to optimize course arrangements so as to improve the course selection experience. Through the cycle of semester course offerings, token-based course selection, questionnaire collection, and statistical analysis of the data, a closed loop of mutual understanding between students and teachers can be achieved, realizing an effective two-way communication mode.
One size fits all?: A simulation framework for face-mask fit on population-based faces

The use of face masks by the general population during viral outbreaks such as the COVID-19 pandemic, although at times controversial, has been effective in slowing down the spread of the virus. The extent to which face masks mitigate the transmission is highly dependent on how well the mask fits each individual. The fit of simple cloth masks on the face, as well as the resulting perimeter leakage and face mask efficacy, are expected to be highly dependent on the type of mask and facial topology. However, this effect has, to date, not been adequately examined and quantified. Here, we propose a framework to study the efficacy of different mask designs based on a quasi-static mechanical model of the deployment of face masks onto a wide range of faces. To illustrate the capabilities of the proposed framework, we explore a simple rectangular cloth mask on a large virtual population of subjects generated from a 3D morphable face model. The effect of weight, age, gender, and height on the mask fit is studied. The Centers for Disease Control and Prevention (CDC) recommended homemade cloth mask design was used as a basis for comparison and was found not to be the most effective design for all subjects. We highlight the importance of designing masks accounting for the widely varying population of faces. Metrics based on aerodynamic principles showed that thin, feminine, and young faces benefit from mask sizes smaller than that recommended by the CDC. Besides mask size, side-edge tuck-in, or pleating, of the masks was also studied as a design parameter and found to have the potential to cause a larger localized gap opening.

Introduction

During the COVID-19 pandemic, wearing face masks is the new status quo, and it has become apparent that the fit of the mask is important. In the early stages of the pandemic, face masks were primarily used as a barrier to small droplets that could carry the virus. Recently, however, scientists have urged public-health authorities to acknowledge the potential for airborne transmission of the novel SARS-CoV-2 coronavirus [1]. While there is still a lot that is unknown about the transmission of the SARS-CoV-2 virus, it is evident now that, like its predecessor, SARS-CoV-1, airborne transmission is a significant mode of transmission [2][3][4]. Airborne transmission happens when a susceptible person inhales microscopic bio-aerosols in the air which are generated from a respiratory event such as a cough, sneeze, or even just breathing and talking [2,5]. While larger droplets (≥100 μm) reach the ground within a second, aerosols can linger in the air for hours, increasing the probability of a susceptible person coming in contact with the virus [6,7]. For this reason, mask fit is important. Experimental studies with human subjects and manikins show that mask usage can limit the droplet and airborne transmission of various infections to and from the wearer [8][9][10][11][12][13][14][15][16]. Air leakage has been observed around the perimeter of the mask where it does not make a seal with the face, reducing the effectiveness of the mask [6,17]. Perimeter leakage is caused by loose or improperly fitting face masks and can be significantly impacted by facial features [18][19][20][21].
A recent study found that seemingly insignificant facial features have an impact on the fitting of the mask on the face and concluded that 3D models could be used to assess mask fit in relation to the subtle changes in facial topology [17]. While proper-fitting of respirators on a face has always been stressed for effective filtering of all contaminants, there is a lack of knowledge on how important the fit of homemade cloth and simple surgical masks is. Surgical masks are primarily designed for outward protection from droplets, not aerosols, and therefore, the fit is much looser. Homemade masks made from cotton or similar fabrics are likely to be even looser, allowing for more leakage. These looser-fitting masks are more susceptible to perimeter leakage and, therefore, not as effective against aerosols. In a study comparing the effectiveness of different face masks, homemade masks were shown to be half as effective as surgical masks and 50 times less effective than an FFP2 mask (similar to an N95 respirator; filters 94% of particles larger than 0.3 microns). These effects were even more pronounced amongst the children subjects, likely due to an inferior fit on their smaller faces [15]. More recently, Verma et al. experimentally explored the effect of different mask types by visualizing the respiratory jets and observed leakage through the perimeter of the mask [22]. The study reported that both the mask material and fit have an important impact on the mask's effectiveness, with all masks tested showing leakage from the top of the mask due to poor fitting. The studies by Oestenstad et al [19] and Oestenstad and Bartolucci [20] used a fluorescent tracer to identify leak location and shape on subjects wearing half-mask respirators. They tested the effect of gender, race, respirator brand, and facial dimensions and found that facial dimensions were significantly correlated to the leakage location. Tang et al. studied the jet generated from coughing and the effect of wearing surgical masks or N95 respirators [23]. They found that a surgical mask effectively blocks the forward momentum of the cough jet, but the loose fit of the mask allows air leakage around the perimeter of the mask primarily through the top and sides. Lei et al. used a headform finite element model to study the leakage locations of an N95 respirator and show that the most leakage occurs along the top perimeter of the mask near the nose [24]. From previous studies, we can conclude that although it is understood that mask fit is important and affected by facial features, it is not clear yet how and which features impact the fit. The simulations of face-masks and population-based headform models have the potential to accurately estimate the location and amount of leakage for different facial structures. These 3D models can also be leveraged to systematically explore the effect of different facial features in order to design better masks. A strong argument can be made for the importance of accurate mask-fit models for the prediction of virus transmission. Mathematical frameworks that model the spread of a virus in public spaces, cities and entire countries must take into account the rate of transmission to and from each member in the population. The rate of transmission of a virus is dependent on the severity of the virus itself, population density, and mask effectiveness [4,9,25,26]. The parameters relating to the effectiveness of face-masks in these transmission models vary significantly, drastically affecting the results. 
Eikenberry et al. included the effect of mask usage in their model based on the inward and outward efficiencies of face-masks. They cite a wide range of mask efficiencies, ranging from 20 to 80%, derived from experimental studies [14][15][16]. The experimental studies of course are limited in the number of subjects tested, with all experimental studies mentioned previously having less than 25 subjects and in some cases as few as 7 subjects tested, from which mask efficiencies were calculated. The limitation of the number of subjects and range of facial features in experimental studies means that statistically significant results from which we can derive correlations of mask fit and face types or topology are hard to come by, if not nonexistent. Eikenberry et al. illustrate the significance of proper estimation of mask effectiveness by showing that the effective transmission rate decreases linearly proportional to the mask efficiency such that masks with 20% and 80% efficiency decrease effective transmission rate by approximately 20% and 80% respectively [9]. Such disparities in mask efficiency can lead to less than reliable models of the spread of the virus. Mittal et al. recently proposed a new transmission model, the COVID-19 airborne transmission (CAT) inequality. The model was designed for simplicity so that it can serve as a common scientific basis and also be understood by a more general audience [4]. Like Eikenberry's and Briennen's models, the CAT inequality accounts for the protection afforded by face coverings. The effect of face-masks in the CAT inequality is primarily based on the filtration properties of the material. All of these transmission models attempt to predict the spread of viruses, which requires a proper statistical model to account for the effectiveness of face-masks. Understanding and developing reliable models for the effectiveness of face coverings based on not only the fabric material but also the fit can lead to better transmission models. Accurate characterization of mask effectiveness due to individual fit goes beyond reliable transmission models and more directly affects the general public. The CDC has provided design guidelines for home-made masks, but as we will show in this study the recommended mask design, or any single mask design, may not be optimally effective for the many different facial structures inside a population. That is to say, one size does not fit all. Instead, to ensure the effectiveness of the mask, particular sizes and designs should be recommended for several distinct population categories. Here, we develop a framework to study the effect of the varying facial features in a large population on the mask fit and efficacy. The framework provides a systematic platform on which many different mask designs can be quickly tested on a larger population of faces than could otherwise be achieved under very extensive and costly traditional experimental tests. This study aims to provide a framework to develop better mask designs and provide motivation as to why mask fit and mask design should be studied at an individual level. Specifically, we look at the leakage from a simple home-made mask design as recommended on the CDC website [27], and show how the size and simple adjustment of facemasks can affect the mask leakage based on the facial features. The goal is to illustrate the practical application of such a framework and identify potential discriminative metrics for future studies. 
Here, different faces are categorized based on the subject's weight, age, gender, and height. The proposed framework can further be extended to different facial features and more complex mask designs. However, here, we introduce the methodology and its application for studying the efficacy of rectangular homemade cloth masks. Methodology The performance of the face-masks is highly dependent on the properties of the mask as well as the fit of the mask on a given face. Here, we employ three-dimensional morphable models of the human face to account for gender, age, and other body-habitus-associated variabilities in face morphologies and conduct mask-deployment simulations for a large "virtual cohort" of individuals. The goal is to quantify the mode of perimeter opening and study how the mask leakage is changing with a population's facial features. The components of the computational model are represented below. Virtual cohort of faces The morphable model is based on the Basel Face Model (BFM) [28], a publicly available database that includes face scans of more than 100 males and 100 females ranging from 8 to 62 years old with weight ranging from 40 to 123 kg. Since the BFM database is pre-processed with principal component analysis (PCA), we will use the low dimensional PCA subspace to create realistic in-silico face realizations [29]. Fig 1 shows sample realizations of a face based on subspace synthesis. In addition, using identified feature vectors associated with weight, gender, age, and height, each realization can further be modified toward a particular shape. A similar morphing mesh is used for different faces, and separate regions and landmarks of the lips, ears, nose, and eyes are identified on the model. These landmarks are utilized to establish the mask position for a given face. Due to the PCA subspace, feature vectors associated with any number of relevant face characteristics beyond the ones studied here can be identified and systematically correlated to the mask fit. In addition, different facial expressions can also be modeled similarly. Deployment modeling A quasi-static model is employed for the deployment of a mask on a given virtual face. In the simulation, the mask is initially placed in front of the face with elastic bands wrapped around the ears but with zero tension. The resting length of the band gradually decreases from the initial length to its final value during the initial transient phases and for each stage, the intermediate quasi-static equilibrium position of the mask is calculated from the model in section 2.3. The procedure is continued until the mask rests in the final configuration on the face (Fig 2c). The procedure is repeated for different face realizations, and for each realization, the face is systematically modified in 8 directions, namely, thinner or heavier, younger or older, more feminine or more masculine, and shorter or taller features. For each case, the ensemble statistics of a particular group are calculated and cross-compared. Fabric mask model using minimum energy concept Because of the mask's small flexural stiffness, the band's geometrical constraints, and the contact between the mask and the face, the mask could have local buckling as wrinkles and slacks on its surface [30]. To account for all of these effects, a detailed multi-scale approach is required to represent diversely scaled elements from the dominant fibers in the mask to the interaction between human facial tissue and the mask surface. 
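To make the face-synthesis step of the virtual cohort concrete, the following is a minimal sketch, not the authors' code, of how realizations could be drawn from a PCA-based morphable model such as the BFM; the array names, the feature-direction vectors, and the scaling convention are assumptions for illustration only.

```python
import numpy as np

def sample_face(mean_shape, components, eigenvalues, rng,
                feature_dir=None, strength=0.0):
    """Draw one in-silico face from a PCA morphable model.

    mean_shape  : (3N,) mean vertex coordinates of the template mesh
    components  : (K, 3N) principal components (one mode per row)
    eigenvalues : (K,) variance captured by each mode
    feature_dir : optional (K,) direction in coefficient space associated with
                  an attribute (e.g. weight, age, gender, height)
    strength    : signed amount by which the face is pushed along feature_dir
    """
    # Random coefficients, scaled by one standard deviation per mode
    coeffs = rng.standard_normal(len(eigenvalues)) * np.sqrt(eigenvalues)
    if feature_dir is not None:
        # Shift the realization along the attribute direction (e.g. thinner/heavier)
        coeffs = coeffs + strength * feature_dir * np.sqrt(eigenvalues)
    return mean_shape + components.T @ coeffs

# Usage idea: draw 150 random subjects, then modify each one along the
# 8 feature directions (thin/heavy, young/old, feminine/masculine, short/tall).
```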
Here we use the minimum energy concept as a unified principle of mechanics that works across all scales and governs the position of the mask on the face. In this method, the total elastic energy of the system is expressed as E_total = E^s_cloth + E^s_border + E^b_border + E^s_band, where E^s_cloth is the extensional elastic energy stored in the cloth, E^s_border is the tension and compression energy in the border strip around the cloth mask, E^b_border is the stored bending energy in the border strip around the cloth, and E^s_band is the tension energy stored in the connecting bands (refer to Fig 2a for a depiction of these effects). The cloth is assumed to be made up of two orthonormal fiber bundles whose extensional elastic energy is an order of magnitude larger than the in-plane shear and bending energies. This assumption is justified because regular cloth masks can be modeled as thin membranes that easily undergo localized buckling and show negligible bending stiffness. Moreover, to account for the wrinkling effect, the energy associated with the area change of the mask is not considered. Instead, the extensional elastic energy stored in the cloth is taken as the stored energy of a group of initially orthonormal fibers, obtained by integrating the strain energy density over the unloaded cloth domain, E^s_cloth = ∫_0^{L_1} ∫_0^{L_2} W^s_cloth dX_1 dX_2, where L_1 and L_2 are the initial unloaded edge lengths of the cloth mask. Here, W^s_cloth is the strain energy density, a function of the elastic modulus E_clo and the Poisson ratio ν_clo of the cloth and of I_1 = 2(D_11 + D_22), the first invariant of the Green strain tensor D computed from the current position X and the initial position X_0 of the cloth. Similarly, E^s_border and E^b_border are defined as line integrals along the border strip, whose total length is the sum of the lengths of all four edges of the cloth mask. Here, if X(s) and X_0(s) are the coordinates of the border in the deformed and reference configurations, respectively, we can define the coordinate system attached to the border in its deformed configuration using its tangent vector t = X_,s/‖X_,s‖, the binormal b = (t × X_,ss)/‖t × X_,ss‖, and the normal vector n = (b × X_,s)/‖b × X_,s‖. A similar definition is also used for the reference position of the border. The extensional strain energy density of the border is then expressed in terms of the stretch of the strip along t, and the bending strain energy density is approximated as proportional to (κ − κ_0)^2, where κ and κ_0 are the curvatures in the directions n and n_0, respectively. Here, the initial curvature of the border is chosen to be zero. The curvature can be evaluated from local sets of three consecutive discrete nodes X_{i−1}, X_i, X_{i+1} along the rod [31], in terms of the angle θ between two consecutive segments of the line. The energy contribution from the stretching band is defined similarly to E^s_border, with an extensional stiffness of A_ban E_ban/(1 + ν) and an unstretched length of L_0 (Fig 2b). In addition to the internal energy, the non-penetration contact between the mask and the facial tissue is represented by non-conservative forces, f_t = f_contact, acting on the mask surface. Here, we assume soft contact between the face and the mask, in which f_contact = k_con ‖X − X_con‖ n_con whenever (X − X_con) · n_con < 0 (and zero otherwise), where X_con is the closest point on the face to the point X on the mask, and n_con is the outward normal at this point. The contact stiffness between the skin and the mask is represented by k_con.
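As an illustration of the soft-contact term described above, the following is a minimal sketch assuming the penalty form reconstructed here (force along the outward face normal, active only under penetration); the function name and units are illustrative and not the authors' implementation.

```python
import numpy as np

def contact_force(x, x_con, n_con, k_con=1.0e6):
    """Soft-contact penalty force on one mask node.

    x     : (3,) current position of the mask node
    x_con : (3,) closest point on the face surface
    n_con : (3,) outward unit normal of the face at x_con
    k_con : contact stiffness (penalty constant, assumed units)

    The node is pushed back along the face normal only when it has
    penetrated the face, i.e. when (x - x_con) . n_con < 0.
    """
    d = x - x_con
    if np.dot(d, n_con) < 0.0:                 # penetration detected
        return k_con * np.linalg.norm(d) * n_con
    return np.zeros(3)                          # no contact, no force
```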
By relating the internal forces of the mask to the derivatives of the energy density function, nonlinear sets of equations are obtained for the node placements X_k = (x, y, z)_k in the discrete model of the mask. The resulting equations are solved to find the equilibrium position at a given deployment stage. Moreover, since the equilibrium shape is only slowly modified from the previous deployment stage, X_{n−1}, a linearized equilibrium equation is derived for δX = X_n − X_{n−1} and is employed as a preconditioner to accelerate the convergence of the solution. This is done by defining the spatial virtual work in terms of the virtual velocity, δv(X), and a discretized solution of X, φ(X), obtained from the discrete models of the mask, border, and band. The equilibrium equation is solved iteratively to find the new position X_n from X_{n−1} using the projective dynamics methodology [32]. A sensitivity study was carried out to determine how the initial placement of the mask and the elastic modulus of the material affect the final fit. Fig 3a and 3b show the sensitivity to the elastic modulus for all the face categories included in this study. The elastic modulus of the cloth was varied between 7 and 13 MPa and E_ban was varied between 30 and 50 MPa, in the range of typical cloth material properties. The total leakage area (A) and maximum gap distance (max(H)) show no significant effects. The initial position of the mask was varied such that the top edge of the mask moves between its extreme positions: the lowest acceptable position is when the top edge is placed on the nose tip, and the highest placement is when the mask covers the vision of the subject. Varying the initial position of the mask also does not result in a significant change of A or max(H). Finally, we tested whether an initial bulge in the mask affects the leakage from the sides and found no significant effect there either. Therefore, all subsequent simulations are initialized with the mid nominal parameters. The mask center is assumed to approach, along the normal direction, the midpoint between the mouth and nose. We choose E_clo = E_bor = 10 MPa, and the Young's modulus of the band is chosen to be 4 times larger, with E_ban = 40 MPa. All the Poisson ratios are fixed at 0.3 [33], which was tested (not shown here) and found not to change the observed behavior. The thickness of the cloth mask is chosen to be 0.5 mm, and the band is made up of 0.5 in folded cloth fabric following the recommendation of the CDC for the short edges [27]. The stretching band is assumed to be circular with a diameter of 1 mm, and its initial length is equal to the length of the ear. These parameters are chosen to be close to those of typical cotton fabric and an elastic band. In addition, it is assumed that the contact stiffness between the mask and the different parts of the face is the same at every location on the face, with the contact stiffness k_con = 1 MPa, in the range of skin (≈0.6 MPa) and thick muscles (≈0.8 MPa) [34,35]. These parameters have been checked to make sure that their values do not have a significant effect on the results. Also, the cloth mask is discretized with Δs = 2 mm, and grid refinement studies have been performed to ensure that the simulation results are independent of the grid size. Effect of mask size and side tuck-in ratio Here, the leakage area for a rectangular cloth face-mask for three different sizes and tuck-in conditions of the side edges is explored.
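For reference, the nominal parameter values quoted above can be collected in one place; the dictionary below is purely illustrative bookkeeping (the field names are not from the paper).

```python
# Nominal simulation parameters quoted in the text (field names are illustrative)
NOMINAL_PARAMS = {
    "E_cloth_MPa": 10.0,       # cloth elastic modulus (7-13 MPa in sensitivity study)
    "E_border_MPa": 10.0,      # border strip elastic modulus
    "E_band_MPa": 40.0,        # ear-band elastic modulus (30-50 MPa in sensitivity study)
    "poisson_ratio": 0.3,      # used for cloth, border, and band
    "cloth_thickness_mm": 0.5,
    "band_diameter_mm": 1.0,
    "k_contact_MPa": 1.0,      # skin ~0.6 MPa, thick muscle ~0.8 MPa
    "grid_spacing_mm": 2.0,    # discretization of the cloth mask
}
```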
The tuck-in of the mask is mathematically modeled as the gradual reduction of the unstretched length of the side edges from the initial length (L_0) to L = α_T L_0, where α_T is the tuck-in ratio (Fig 4). The tuck-in can also be thought of as pleating the sides of the mask such that the side edges become shorter than the original length, as shown in the inset of Fig 4b. The mask shape and size are chosen based on the CDC guideline on how to sew cloth face coverings [27]. From the guideline, the baseline case is selected to be a mask with L_0 = 5.5 in and W = 9 in (referred to as the medium mask), with α_T = 0.5. Two other sizes of W = 8 in (hereafter referred to as the small mask) and W = 10 in (referred to as the large mask) with the same aspect ratio are also tested to account for the variation in mask size. To ensure the ensemble statistics are sufficient to reach a confident inference from the simulations, 150 random subjects are selected from the virtual cohort of faces and, for each subject, 8 modified configurations are generated to systematically explore how the facial features affect the leakage area around the perimeter of the mask. The modification to each random face is done along one feature direction at a time. The feature vectors in this study are restricted to weight (thin to heavy), age (young to old), gender (feminine to masculine), and height (short to tall), as shown in Fig 4a. The selection is based on the availability of the prior database. Other important features of faces, such as race, are not considered and will be explored in future work, as explained in the conclusion section. Fig 5a shows the statistical mean value of the cumulative leakage area around the perimeter of small, medium, and large masks with tuck-in ratios of α_T = 0.7, 0.5, and 0.3. The bars are for the CDC recommended mask (medium size), while the blue and red dots represent the large and small masks, respectively. Each category of faces is plotted with a different color, and the tuck-in ratios are shown with bars with solid (α_T = 0.7), dashed (α_T = 0.5), and dotted (α_T = 0.3) borders. The error bar for each data point shows the standard deviation of the computed parameters over the 150 random face realizations. It is found that the smaller mask size, relative to the CDC recommended size, has minimal effect on the total leakage area for the base cases, especially for higher tuck-in ratios. However, there are substantial changes in the total leakage area for thinner, younger, and more feminine faces with a decrease in mask size, regardless of the tuck-in ratio. In general, the trend seems to indicate that smaller masks will reduce the area across the spectrum of faces. Similarly, the total leakage area continuously reduces with more tuck-in on the sides for all but the heavy face category. The side-edge tuck-in has a more pronounced effect on the large mask size. Later we show how the large mask is oversized for some of the cases, and this oversized mask hangs off the chin instead of fitting snugly against the face. Tucking in (pleating) the sides of the mask shifts the lower edge of the mask closer to the chin, significantly reducing the gap area. The leakage around the edges of the mask is also dependent on the hydraulic perimeter of the opening (both the area and shape of the opening) [36,37].
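The ensemble described above (150 random subjects, 8 feature directions, 3 mask sizes, 3 tuck-in ratios) can be enumerated as in the following sketch; the case dictionary keys and the assumption that the shorter mask dimension scales with the width are illustrative only.

```python
from itertools import product

MASK_WIDTHS_IN = {"small": 8.0, "medium": 9.0, "large": 10.0}  # CDC medium: 9 in x 5.5 in
ASPECT = 5.5 / 9.0                                             # aspect ratio kept fixed
TUCK_IN_RATIOS = [0.7, 0.5, 0.3]
FEATURE_DIRECTIONS = ["thin", "heavy", "young", "old",
                      "feminine", "masculine", "short", "tall"]
N_SUBJECTS = 150

def simulation_cases():
    """Enumerate the (subject, feature, mask, tuck-in) cases described above."""
    for subject, feature, (name, width), alpha_t in product(
            range(N_SUBJECTS), FEATURE_DIRECTIONS,
            MASK_WIDTHS_IN.items(), TUCK_IN_RATIOS):
        side_length = width * ASPECT              # L0 of this mask size (assumed scaling)
        yield {"subject": subject, "feature": feature, "mask": name,
               "width_in": width, "side_length_in": side_length,
               "tuck_in_ratio": alpha_t,
               "side_edge_rest_length_in": alpha_t * side_length}  # L = alpha_T * L0
```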
To explore this effect, in Fig 5b we show the maximum opening (the maximum distance between the mask edge and the face). Surprisingly, the reduction of the mask size does not have a monotonic effect on the maximum gap opening (Fig 5b). That is, while the majority of face categories show some reduction in the maximum opening, the base, heavy, and masculine faces can have a larger maximum opening with smaller masks. Only the thin, feminine, and short faces show a significant improvement in maximum opening with a reduction in mask size from the CDC recommended size. The maximum opening also reveals that the tuck-in ratio does not have a universal effect. As an example, smaller tuck-in ratios result in larger openings in older faces, albeit a small increase. The main observation is that proper mask sizing is the most effective way to reduce the maximum opening, and a smaller tuck-in ratio could only be beneficial for certain categories of faces. The comparisons between mask sizes and tuck-in ratios for the median cases in each category are shown in Fig 6. It can be seen that the placement of the mask on the face is greatly modified when the mask becomes smaller than a threshold. In particular, the lower edge of the mask shifts from below the chin to the top of the chin for heavy and tall faces with a small mask. Consequently, the bottom support of the mask can slip easily in tangential directions, which could easily lead to changes in the mask placement during routine daily activities such as talking and breathing. Any case that exhibits mask slippage, whether the bottom edge slipping past the chin support point or the top edge slipping below the nose tip point, is considered a failure and was not used in evaluating any of the metrics in this study. We note that the small mask on heavy and tall faces failed in the majority of cases. The results presented are for the cases where the mask did not slip; however, because the majority of cases with small masks in these particular categories (heavy and tall) failed, we disregard them in the rest of our discussion. The thin and feminine cases exhibit a rapid increase in the opening gap, primarily in the chin area, with an increase in the mask size. The results suggest that the addition of a tuck-in mechanism to the lower edge of the medium-size mask is a simple modification that would make it more effective for feminine and thin faces. The leakage around the mask can be divided into three distinct segments: the top edge near the nose, the side edges near the cheeks, and the lower edge near the chin. The maximum gap opening and the contribution of each segment of the mask to the total leakage area are compared in Fig 7. It is observed that the leakage from the nose area is independent of the tuck-in ratio and mask size, except for thin, feminine, and short faces (i.e., smaller faces). When the mask is medium or small, the tuck-in ratio can be used to further reduce the opening near the nose (side 1) in thin, young, and short faces without increasing the maximum gap distance. For the other cases, more tuck-in is accompanied by an increase in the maximum gap opening at the top of the mask. The increase is primarily due to changes in the placement of the mask on the nose.
The insensitivity to mask size and tuck-in ratio in the majority of the face groups suggests that a clip or other mechanical element is necessary to reduce the top edge opening, something that is standard in some higher-quality masks currently on the market. A different trend is observed for the cheek edges (side 2), where there is a substantial reduction of the opening with a decrease in mask size and tuck-in ratio. The maximum opening is significantly reduced by increasing the tucking from α = 0.7 to α = 0.5 in large masks; a further increase in tuck-in (lower α) does not change the maximum opening. The leakage area from this side shows the greatest reduction with more tuck-in relative to the other two regions of the mask. The reduced area and unchanged maximum opening indicate that, with smaller tuck-in ratios, the side gap opening becomes more concentrated. The maximum opening and the leakage area near the chin (side 3) are major components of the total leakage for thin and feminine faces. The tuck-in ratio has an insignificant role in this part of the mask, while the mask size is the primary driving factor. Thin and feminine faces show a more than 50% reduction in leakage area with the mask smaller than the CDC recommended size. An exception is the heavy faces, where a small mask induces larger maximum openings in the lower edge. This is due to the face-mask slipping on the face and the placement of the lower edge on top of the chin area. It is apparent that the leakage area (A) or maximum gap (max(H)) alone are not perfect indicators of the mask's effectiveness. Instead, we propose looking at the mask as a set of N channels with one end at the mouth/nose (i.e., the region at which a high pressure is generated that will drive the air towards the perimeter of the mask), and the other end at the outer edge of the mask. This can be visualized as rays emanating from the mouth to points along the mask edge (s_i), as shown in Fig 8a. These rays can be thought of as two-dimensional channels of length L and height H, equal to that of the mask opening at each point along the perimeter of the mask. Fig 8b shows such a channel; note that the length of each channel is given by the distance from the mouth to the corresponding point on the mask perimeter (s_i). Considering the system in this way, we can derive a hydraulic resistance corresponding to each point along the perimeter of the mask (R_i), which describes the relative amount of the airflow that leaks out at this point compared to the amount filtered out through the mask cloth. The velocity profile for the channel flow can be taken as the plane Poiseuille profile, v(y) = Δp y(H − y)/(2μL). Note that although one side of the channel is porous, i.e., the mask, it has been shown that the velocity profile does not change significantly and, therefore, the Hagen-Poiseuille flow profile is still valid for our analysis [38]. Upon integrating v across the height of the channel, the mass flow rate per unit channel width is obtained as ṁ = ρΔpH^3/(12μL), where Δp = P_0 − P_1 is the difference between the high pressure near the mouth and the ambient pressure just outside the mask.
From this, the equivalent resistance model seen in Fig 8b can be used to derive a hydraulic resistance for each point along the mask perimeter as R_i = Δp/ṁ_i = 12μL_i/(ρH_i^3). Only L and H are geometrically varying parameters in this expression; therefore, we introduce a new parameter R̄ = L/H^3 to characterize the relative changes in the hydraulic resistance. Since R̄ is inversely proportional to the leakage mass flow rate, a larger R̄ is viewed as more effective. The average and minimum of R̄, for each face category and mask design, are shown in Fig 9. The average R̄ is taken as the average, over all cases for each face category and mask, of the total resistance of each case. As shown in Fig 8b, all channels, and therefore also the resistances, are in parallel, such that 1/R_total = Σ_i 1/R_i. Contrary to the previous metrics A and max(H), the mean hydraulic resistance R_avg does not show that smaller masks are generally a better choice. Instead, there is a clearer distinction between the most effective mask design for each face category. The largest mask provides the highest R_avg for heavy and tall faces. Base, young, old, and masculine faces benefit the most from the medium mask, while the rest of the faces attain the best protection in terms of R_avg with the small mask. As noted previously with A and max(H), any mask larger than the small mask on thin faces is mostly ineffective due to the large gaps, especially in the overhanging lower edge. The hydraulic resistance accounts for the proximity of the mask edge to the mouth/nose, such that if a smaller mask causes slippage that reduces the distance between the mask edge and the mouth/nose, the hydraulic resistance will decrease. Interestingly, the non-monotonic effect of the tuck-in ratio is more apparent in R_avg. Tuck-in seems to provide the necessary adjustment to yield the most effective mask in several cases. Young faces show that smaller masks are more effective for their face shape; however, it seems that a medium mask with tuck-in can be the better choice. Masculine and short faces show similar behavior. In the base case, for example, increasing the tuck-in ratio is detrimental to the overall effectiveness of the small mask. As the mask size is increased, the larger tuck-in ratios perform better. This trend is clearly not monotonic; instead, R_avg initially increases with α, but increasing beyond α = 0.5 results in a decrease in R_avg. To understand this, we take a closer look at the masculine faces with the large mask, where the effect is most noticeable. It is found that increasing the tuck-in ratio from α = 0.3 to α = 0.5 results in a decrease in gap area and maximum gap opening for all sections of the mask, resulting in a higher hydraulic resistance. Increasing to α = 0.7 results in an increase in the maximum gap opening. This translates to a change in the shape of the openings from wide and shallow to more localized larger gaps, accounting for the decrease in hydraulic resistance for large masks and large tuck-in. The effectiveness of the mask could also be defined by the most likely point of leakage, here defined as the point with the lowest hydraulic resistance (R_min). While there are minor differences in the trends, they are mostly insignificant. The most notable difference in R_min is for the short faces, where the medium mask seems to be more effective than the small mask, contrary to the observations made with R_avg.
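A minimal sketch of the hydraulic-resistance metric as reconstructed above, assuming plane Poiseuille flow in each perimeter channel and a parallel combination of the channels; the viscosity value and the per-unit-width convention are assumptions, and only the geometric ratio L/H^3 matters for the relative metric R̄.

```python
import numpy as np

MU_AIR = 1.8e-5  # dynamic viscosity of air [Pa s] (assumed value)

def channel_resistance(L, H, mu=MU_AIR):
    """Hydraulic resistance of one perimeter 'channel' (per unit width),
    from the plane Poiseuille relation Q = dp * H**3 / (12 * mu * L)."""
    return 12.0 * mu * L / H**3

def perimeter_resistance(lengths, gaps):
    """Combine all perimeter channels in parallel: 1/R_tot = sum(1/R_i).

    lengths : distances from the mouth/nose to each sampled edge point s_i
    gaps    : corresponding gap heights H_i between the mask edge and the face
    """
    lengths, gaps = np.asarray(lengths, float), np.asarray(gaps, float)
    open_pts = gaps > 0.0                        # closed points leak nothing
    if not np.any(open_pts):
        return np.inf                            # fully sealed perimeter
    r_i = channel_resistance(lengths[open_pts], gaps[open_pts])
    return 1.0 / np.sum(1.0 / r_i)

def r_bar(L, H):
    """Relative resistance used in the text: R_bar = L / H**3 (constants dropped)."""
    return L / H**3
```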
The previous results show that both the leakage area and the maximum gap opening of the edges should be considered to reach a discriminatory factor that can identify different modes of leakage around the mask. Toward this, we found that H_SD/H̄ can serve as the parameter for unsupervised classification of the results, where H_SD is the standard deviation of the opening gap and H̄ is the average opening distance along the edges. Fig 10 shows the scatter plot for the mask size and tuck-in ratio effects. It is found that the data can be grouped into 5 clusters to best separate the effect of facial features. The number of clusters (K) is chosen such that, with a further increase of K, the reconstruction error is not significantly reduced. The reconstruction error is defined as E(D, K) = (1/|D|) Σ_{x_i ∈ D} ‖x_i − μ_{z_i}‖^2, where D is the data set, μ_k is the center of cluster k, and z_i = argmin_k ‖x_i − μ_k‖^2. These clusters are marked with different colors in each sub-figure, while different symbols are used to distinguish between different feature categories. In addition, the inset figures present the percentage of face categories in each cluster. The thin face category (number 2 in the legend) is consistently the primary contributor to cluster 1 (dark blue cluster). Similar observations can be made for heavier faces and cluster 5 (yellow cluster). The center of cluster 1 shifts to higher H_SD/H̄|_1 and H_SD/H̄|_{2,3} with decreasing tuck-in ratio, consistent with previous observations that a smaller tuck-in ratio leads to more non-uniform gap distributions. An increase in mask size shifts the cluster centroid to lower H_SD/H̄|_{2,3}, indicating that the mask size primarily affects the gap opening on the lower and side edges. Cluster 5 only shows variations along the H_SD/H̄|_1 axis. Also, we see that the masculine face category is the most prevalent member of cluster 3, with a minor centroid shift among the cases. The other two clusters, clusters 2 and 4, are mixed sets of faces, suggesting that other facial features are needed to classify this region of the feature sub-space. It is found that the tuck-in ratio can only induce a minor shift of these clusters, but the mask size can substantially modify the mode of opening along the edges. In Fig 11, we plot the cases of the dominant feature category nearest the cluster centroids of Fig 10 for the medium (a), small (b), and large (c) mask sizes. Each figure also includes the results for the tuck-in ratios of 0.7 (top row), 0.5 (middle row), and 0.3 (bottom row). As mentioned previously, clusters 1, 3, and 5 are predominantly comprised of thinner, more masculine, and heavier faces, respectively. The young and short faces are the most representative feature categories of cluster 2, while the old and feminine faces form the majority of cluster 4 for the medium and large masks. Nonetheless, it is found that none of the feature categories is the dominant constituent of clusters 2 and 4 with more than 1/3 of the members. The results from unsupervised clustering based on the face-associated variabilities suggest that for certain groups, such as heavy and thin faces, it is possible to find the most effective face covering with minimal gap opening, especially with the use of a proper mask size. However, for the other cases, one needs to consider other factors, such as the shape and geometrical parameters of the face, to better identify the optimal cloth mask covering size and tuck-in ratio.
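The clustering step can be reproduced in spirit with a plain K-means pass over the H_SD/H̄ features, choosing K from the elbow of the reconstruction error; the sketch below is illustrative and not the authors' implementation (the feature matrix X is assumed to hold one H_SD/H̄ value per mask segment and case).

```python
import numpy as np

def kmeans_reconstruction_error(X, K, n_iter=100, seed=0):
    """Plain K-means on the H_SD/H_mean feature space; returns the mean
    squared distance of each sample to its assigned cluster centre."""
    X = np.asarray(X, float)
    rng = np.random.default_rng(seed)
    centres = X[rng.choice(len(X), K, replace=False)]
    for _ in range(n_iter):
        # Assignment step: z_i = argmin_k ||x_i - mu_k||^2
        d2 = ((X[:, None, :] - centres[None, :, :]) ** 2).sum(axis=2)
        z = d2.argmin(axis=1)
        # Update step: mu_k = mean of the samples assigned to cluster k
        for k in range(K):
            if np.any(z == k):
                centres[k] = X[z == k].mean(axis=0)
    return ((X - centres[z]) ** 2).sum(axis=1).mean()

# Elbow criterion: stop increasing K once the error no longer drops appreciably,
# e.g. errors = [kmeans_reconstruction_error(X, K) for K in range(1, 9)]
```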
Role of facial features This section explores how the changes in the categorical facial features (weight, age, and gender) affect the findings presented. The leakage area and maximum gap distance are shown in Fig 12. The horizontal axis is hereafter referred to as the weight/age/gender index, with a zero value corresponding to the base case. The associated facial feature for each value is depicted in the legends, and the cases used in the previous sections are marked. The left column is for different mask sizes and a tuck-in ratio of 0.5, and the right column is for the medium (CDC recommended) mask size with tuck-in ratios of 0.3, 0.5, and 0.7. The dashed lines show the standard deviation of the data. Finally, in Fig 12d we plot the median cases for the marked dots in sub-figures (i-1) for the medium mask and the tuck-in ratio of 0.5. The increase in the weight feature results in a decaying leakage area until a threshold at which the leakage area reaches an asymptotic value. This threshold is very similar between different mask sizes and occurs at a weight index of 0.5 (Fig 12a-i). The weight index with the minimum gap is highly dependent on the mask size, observed at -0.3, 0, and 0.6 for the small, medium, and large masks, respectively. On the other hand, a higher tuck-in ratio marginally reduces the opening area for all weight indices and has almost no effect on the minimum opening gap (Fig 12a-ii). As shown in Fig 12d-1, the gap opening along the bottom edge of the mask changes most significantly, and its tightness on the chin is correlated with the transition point observed above. The change in the age feature of the face has a different outcome on the leakage area and maximum gap opening. Both the tuck-in ratio and mask size almost equally modify the leakage area and maximum opening with the age index (Fig 12b). The minimum leakage area with respect to the age index shifts to older faces with increasing mask size but is not affected by the tuck-in ratio. The large mask shows an almost similar maximum opening across all ages, while the other sizes show an initial decay and a subsequent rise with the age index. The smallest maximum opening occurs at a lower age index than does the smallest leakage area. The tuck-in ratio further changes the maximum opening, especially at the low and high extremes of the age index. The median realizations for the highlighted cases in panel (i-1) in Fig 12d-2 indicate that there is a shift in the location of the upper edge of the mask on the nose with the age index. A simple mask design is found to be incapable of reducing the top edge opening near the nose, as the geometric dissimilarities between the mask and the face always result in a non-zero gap at the top edge. The gender index, while showing a similar trend in total leakage area to the age index, has a distinct maximum opening profile, which is more similar to that of the weight index (Fig 12c). The minimum leakage area is shifted to more feminine faces with smaller mask sizes, but the maximum opening becomes larger with the increase of the feminine gender index. In fact, the smallest mask exhibits the largest gap opening of any case when the gender index is -1 (most feminine). For more masculine faces, the response of all mask sizes is similar. Finally, the tuck-in ratio has negligible effects across the gender index.
From the tested cases, it is found that faces across the gender index have similar gap distributions in the cheek and chin areas, but they differ in the opening at the nose area compared to other indices such as weight. Summary The findings from the previous sections illustrate that it is important to account for a wide, representative population of faces whenever a mask fit/design study is performed. It is shown that cases with smaller facial dimensions, such as thinner, younger, and more feminine faces, tend to suffer from more leakage due to the improper fit of homemade cloth masks. There is, in fact, a threshold in mask size at which these faces show a significant increase in leakage area, especially from the bottom edge of the mask. Based on the depiction of the median cases (Fig 12d), it is observed that this increase is due to an oversized mask hanging off the face near the chin. Of the three mask sizes studied, all but the heavy and masculine faces showed a reduced leakage area with the smaller masks, in some cases reducing the leakage area by over 50% compared to the CDC recommended size. Although the total leakage area was reduced with smaller masks for most faces, the smaller masks do not extend below the chin for all face types (heavier and more masculine). This, of course, can increase the risk of new perimeter leaks during routine daily activities like breathing and talking, and especially during high-transmission actions such as sneezing and coughing. The other simple modification to masks, besides the size, is the tuck-in of the side edges of the mask. In general, larger tuck-in (smaller tuck-in ratio) leads to reduced leakage areas. However, we know, at least intuitively, that small masks are not the most effective for all faces. The effective hydraulic resistance is proposed as a more discriminatory metric that considers the gap along the perimeter of the mask and the distance from the mouth (the source of aerosols). The hydraulic resistance shows a clearer distinction between the most effective masks for the different face categories. Smaller does not seem to be better, as indicated by R_avg. Thin, feminine, and short faces showed the smallest mask to be the most effective. Base, young, old, and masculine faces had the highest hydraulic resistance with the medium mask, while heavy and tall faces did best with the largest mask. Although not explicitly clear, the hydraulic resistance accounts for the shift of the mask lower on the nose for smaller masks: the shift of the mask on the nose reduces the distance from the mouth to the outer edge, which is reflected in the hydraulic resistance. It is clear from Fig 9a that even simple mask design elements, such as mask size and tuck-in ratio, have significantly different effects on each face type and, further, that the combined effects of these design elements are not easily predicted. As mentioned in Section 3.1, the shape of the opening is also crucial in determining the effectiveness of a mask, especially for outward protection. We describe this with the maximum gap opening max(H). Given the same leakage area, the mask that produces a larger max(H) has more localized openings, as opposed to the mask with a smaller max(H), which would have a more uniform, slit-like opening along the length of the mask edge. During a respiratory event, the more localized openings would create higher exit-velocity jets, which would further spread the aerosols.
From this, we cannot definitively conclude that masks that reduce leakage areas are best. Instead, we must also ensure that the reduction in leakage area is not accompanied by an increase in max(H). This effect is also present indirectly in the hydraulic resistance, and hence is the reason why we see differences in the optimal mask design for each face type. A deeper study of key features, such as weight, age, and gender, further highlighted the non-monotonic effects of the explored design elements. The direct effect of the weight-dependent facial features shows that, beyond a weight index of 0.5, mask size and tuck-in have no effect on the leakage area. Heavier faces were observed to have more uniform openings (smaller maximum gap) with the larger masks, but this was strongly dependent on the weight index. The same analysis was carried out on the age and gender feature categories. Both of these saw, by and large, a decrease in leakage with smaller masks across the feature's index. Masculine faces, similar to heavier faces, show negligible effects of both mask size and tuck-in ratio. The feminine faces did show some reduction in leakage area when the mask is small but, interestingly, the small mask produced the largest maximum gap opening in the most feminine faces (index of -1). There are optimal values for both the age and gender indices where the leakage area and maximum opening attain their minimum values. This would lead one to believe that there are parameters, other than the feature categories explored here, on which the mask fit depends. Strengths and limitations Several limitations are present in this study. The design elements of masks are numerous, and only two simple design elements were discussed. This, of course, was a necessary step to limit the size of this study. However, the framework proposed can be easily extended to account for any other mask design, including varying geometries, stitch patterns, and mechanical devices. Another limitation is related to the static nature of the model. It is known that during violent respiratory events such as coughing, the mask can deform due to the pressure build-up inside the mask, which can affect its efficacy. The flow speeds during the inspiration and expiration phases also differ. It is anticipated that the lower pressure in the region interior to the face-mask during inhalation could induce inward deformation of the mask and reduce the perimeter leaks. On the other hand, the higher pressure during the expiration process might create larger leakage openings and, depending on the instantaneous shape of the mask, induce stronger or weaker leakage jets at the sides. Activities such as speaking can also cause the mask to shift and deform, affecting the efficacy of the mask. The 3D morphable face model accounts for large sample sizes of subjects with many different facial features at scales not achievable by experimental methods. It also serves to systematically study independent characteristics, such as the shape and size of the nose and jaw, or macro features such as those studied here in Section 3.2. The entire framework is flexible enough to allow fast exploration of many mask designs, providing us with a powerful tool for developing more effective and comfortable masks. Conclusion The effect of mask fit for a large cohort of individuals with varying facial features was studied using three-dimensional, morphable headform models onto which a cloth mask was deployed via a quasi-static simulation.
The categorical study of facial features (weight, age, gender, height) shows that the CDC recommended mask size is perhaps not the most effective mask size for the entire population. Thin, young, feminine, and short faces were observed to benefit from a smaller mask size. Heavier and taller faces, on the other hand, would benefit from a larger mask. For the base, masculine, and older faces, the best performance is achieved with a medium mask. More importantly, although tuck-in of the side edges can reduce the leakage area, it can, in turn, cause larger gaps. The effect of the tuck-in ratio was observed to be more pronounced on larger masks. However, the tuck-in ratio has a non-monotonic behavior that changes with each subject and mask size. It is apparent that it would be nearly impossible to have one universal recommendation for all subjects based solely on the feature vectors in this study, alluding to important topological features that can significantly impact the mask fit. It further highlights the necessity of approaching the task of mask design in a statistically inclusive way that accounts for the large variation in the population of faces. It is also found that the effects of the mask design elements are not easily predictable and need to be characterized on a more individual basis. The total leakage area does not reveal the complete picture of mask effectiveness. The examination of the leakage area by section shows the principal sections of concern for each category of faces. The total leakage area in thinner and feminine faces predominantly comprises the opening near the chin due to the oversized mask sagging below the chin. A lower-edge tuck-in modification could reduce the leakage near the chin. For the remainder of faces, except the heavier and more masculine faces, the cheek opening is the major component of the total leakage area and can be reduced with tuck-in. Finally, the top edge opening in all subjects is mostly unaffected by the mask size and tuck-in, suggesting that other mechanical means are necessary to reduce the leakage in this section. As a more discriminatory metric to determine the best mask for each face, the hydraulic resistance was introduced. Analysis of the flow during respiratory events needs to be carried out to arrive at a definitive conclusion on the most appropriate metric to quantify efficacy. As a future direction, more facial features and race should be included in the population-based study, as well as the comfort factor. More design elements should also be explored, including mechanical nose clips and different mask geometries. Supporting information S1 Dataset. Mask fit metrics. Data files corresponding to the figures in this paper are provided as a Zip file. The data is archived as a MATLAB .mat file and a description of the data is included in the archive file. (ZIP)
12,247.2
2021-06-16T00:00:00.000
[ "Physics" ]
An Improved IoT-Based System for Detecting the Number of People and Their Distribution in a Classroom This paper presents an improved IoT-based system designed to help teachers handle lessons in the classroom in line with COVID-19 restrictions. The system counts the number of people in the classroom as well as their distribution within the classroom. The proposed IoT system consists of three parts: a Gate node, IoT nodes, and server. The Gate node, installed at the door, can provide information about the number of persons entering or leaving the room using door crossing detection. The Arduino-based module NodeMCU was used as an IoT node and sets of ultrasonic distance sensors were used to obtain information about seat occupancy. The system server runs locally on a Raspberry Pi and the teacher can connect to it using a web application from the computer in the classroom or a smartphone. The teacher is able to set up and change the settings of the system through its GUI. A simple algorithm was designed to check the distance between occupied seats and evaluate the accordance with imposed restrictions. This system can provide high privacy, unlike camera-based systems. Introduction Following the COVID-19 outbreak in 2019, we have been facing different and difficult challenges in all aspects of our lives. One of them is without a doubt the continuous fulltime educational process. Online education has its advantages, however, it cannot replace full-time education and student skills gained from face-to-face experience and practice, especially when it comes to education in technology and engineering. Various countries have different approaches that are enabling full-time education and access to facilities for students. Rules to reduce the maximum number of people in a classroom alongside social distancing rules have been introduced widely. These two mentioned restrictions were our primary motivation to propose an improved smart IoT-based system for detecting the number of people and their distribution in the indoor environment (e.g., classroom, or any type of room). There have been a multitude of studies published in the area of detecting and counting people in indoor spaces. Most of the proposed solutions are based on image processing from cameras installed in the area. For example, Myint and Sein [1] proposed a robust camera-based system which is able to estimate the number of people entering and exiting a room. Their solution is based on Raspberry Pi and software using a pre-trained VGG-16 CNN model and an SVM classifier based on TensorFlow and the Keras library. Another camera-based system for counting people using Raspberry Pi was proposed by Rantelobo et al. in [2]. This system can distinguish between people entering or leaving a room by performing image processing using background subtraction, morphological transformation, and calculating the contour area of the image. The main advantage of this solution is that the system can run on cheap hardware such as Raspberry Pi. Similar solutions relying on cameras and computer vision algorithms have been presented in [3,4]. Moreover, Hou et al. [5] presented a solution for social distancing detection based on a deep learning model. The primary goal was distance evaluation between individuals to mitigate the impact of the COVID-19 pandemic and reduce the virus transmission rate in indoor spaces. 
The detection tool was developed to alert people to maintain a safe distance from each other by processing a video feed from cameras used to monitor the environment. A similar system was proposed by Sharma in [6]. This system helps people to ensure proper social distancing in crowded places and highlights the violations of these norms in real time. The proposed system is based on image processing. There are many more published papers (e.g., [7,8]) solving the problem of social distancing using feeds from cameras and computer vision [9,10]. The work presented in [11] proposed a large Convolutional Neural Network (CNN) trained using a single-step model and You Only Look Once version 3 (YOLOv3) on Google Colaboratory to process the images within a database and accurately locate people within the images. The trained neural network was able to successfully generate test data, achieving a mean average precision of 78.3% and a final average loss of 0.6 while confidently detecting the people within the images. Yet another work presented in [12] uses YOLO v3 and Single Shot multi-box Detector (SSD) to detect and count people. The authors analyzed both methods and their comparison of the achieved results showed that the precision, recall, and F1 measure achieved for SSD were higher than for YOLO v3. The main issue related to camera-based systems involves privacy concerns, as data collected by cameras can be misused for face recognition, thus revealing the identity of the individuals in the area [13]. On the other hand, there are multiple works for counting people or measuring social distance which do not require the installation of camera systems. Among such systems, a smart social distancing monitoring system based on Bluetooth and GPS was described in [14]. In this system, an application can offer a solution for monitoring public spaces and reminding users to maintain distance. The work presented in [15] is based on an ultra-wideband radar sensor for a people counting algorithm. The proposed algorithm can operate in real time and is able to achieve a mean absolute error of less than one person. The system in [16] relies on Wi-Fi probing requests to count people in a crowd by taking advantage of people's smartphones. Another way of counting people or measuring social distance could be using Internet of Things (IoT) technology. The IoT can be described as a network of physical objects (things) that are equipped with sensors, software, and other technologies to connect and exchange data with other devices or systems via the internet or a local network. IoT represents interaction between the physical and the digital world in its simplest form. An IoT object in the world can be a simple sensor equipped with a communication interface or a smart self-driving car equipped with state-of-the-art technology. The advantages of using IoT technology in real-world applications are almost unlimited, and use cases can be found in such disparate areas as Industry 4.0 [17,18], smart agriculture [19,20], smart cities [21,22], smart transportation [23,24], smart homes [25,26], eHealth [27], and wearables [28]. With the massive adoption of IoT technology, it is finding applications in many areas [29,30]. For example, the authors of [29] stated that their platform, based on a combination of IoT and fog cloud, can be used in systematic and intelligent COVID-19 prevention and control. 
The system involves five use cases, including COVID-19 Symptom Diagnosis, Quarantine Monitoring, Contact Tracing and Social Distancing, COVID-19 Outbreak Forecasting, and SARS-CoV-2 Mutation Tracking [31]. Another IoT-based COVID-19 and Other Infectious Disease Contact Tracing Model was described by the authors of [32]. They presented an RFID-based proof-of-concept for their model and leveraged blockchain-based trust-oriented decentralization for on-chain data logging and retrieval. The wearable proximity sensing system presented in [33] is based on an oscillating magnetic field that overcomes many of the weaknesses of the current state-of-the-art Bluetooth-based proximity detection. The authors proposed, implemented, and evaluated their system and demonstrated that the proposed magnetic field-based system is much more reliable than previously proposed Bluetooth-based approaches. Another possible solution for monitoring people in indoor environments was introduced by Perra et al. [34]. The proposed device implements a novel real-time pattern recognition algorithm for processing data sensed by a low-cost infrared (IR) array sensor. The device can perform local processing of infrared array sensor data, and in this way is able to monitor occupancy in any space of a building while maintaining people's privacy. A seat-occupancy detection system based on Low-Cost mm-Wave Radar at 60 GHz was presented in [35]. Detection is based on Pulsed Coherent Radar in the unlicensed 60 GHz ISM band. The system can detect the presence of people occupying the seats by measuring small movements of the body, such as breathing. The solution for counting the people in the classroom proposed by Zhang et al. [36] is similar to the work presented in this paper. In their case, the authors used two E18-D80NK photoelectric sensors to count people in a classroom and an hc-sr501 infra-red sensor for detection of seat occupancy. However, the authors presented only the basic principles and hardware design of the system. The solution proposed in this paper is based on a slightly different technology with lower energy consumption. Moreover, it provides a complex solution with a user-friendly GUI and advanced functionalities, e.g., management of the rules and functions supporting deployment of the system. The remainder of this paper is organized as follows. Section 2 is devoted to a description of the system concept. The system's implementation is described in detail in Section 3, including the implemented methods, software, and hardware design. In Section 4, the achieved results are presented and discussed. Section 5 provides a comparison with other systems proposed for occupancy detection, and Section 6 concludes the paper. System Concept The main goal of the proposed system is to deliver counting of incoming students when they are entering the classroom as well as detection of the distribution of the students among the seats. The proposed system is portable and can be easily deployed and managed by the teacher. The main purpose of the proposed system is to help teachers with the management of their classes while respecting implemented COVID-19 restrictions. The concept of the proposed system is presented in Figure 1. The number of IoT nodes and sensor devices connected to the system is variable, however, there is one server per classroom. 
The teacher can access the system's Graphical User Interface (GUI) from a mobile device or personal computer in order to set up the system as well as to check whether all restrictions are being obeyed. Each seat available in the classroom is equipped with a single HC-SR04 ultrasonic distance sensor. Up to sixteen distance sensors can be connected to a single IoT node, which is responsible for the evaluation of the seat occupancy in a single row. Each IoT node collects data from the connected sensors and sends the data via the MQTT protocol to the main server. The IoT nodes are based on the NodeMCU Arduino board. The entrance to the classroom is equipped with a Gate node. The purpose of the Gate node is to count the students entering or leaving the classroom. The Gate node consists of a single NodeMCU board with two HC-SR04 sensors used for door crossing detection by persons entering or leaving the classroom. The HC-SR04 is a low-cost sensor which can provide non-contact distance measurements between 2 and 400 cm with a ranging accuracy of up to 3 mm. The sensor accuracy is sufficient for the purposes of occupancy detection. Each sensor module includes an ultrasonic transmitter, a receiver, and a control circuit. The sensor's working principle is as follows: • to trigger a measurement, the trigger pin has to be activated for at least 10 µs; • the module automatically sends a burst of eight 40 kHz pulses and detects the reflected signal; • the distance is calculated as d = (t × 343)/2, where t is the duration of the echo pulse in seconds, 343 is the speed of sound in m/s, and the division by 2 accounts for the round trip of the sound wave. The sensor implemented in the system operates at 5 volts and consumes 15 mA, while its dimensions are 45 × 20 × 15 mm. For the sake of comparison, the Infrared Proximity Sensor E18-D80NK used for people counting and the HC-SR501 PIR sensor used for detecting seat occupancy in [36] consume 25-100 mA and 65 mA, respectively. Moreover, it is important to note that the infrared proximity sensor has a shorter sensing range compared to the ultrasonic sensor implemented in the proposed solution. On the other hand, the ultrasonic distance sensor has its own disadvantages, such as sensitivity to variations in the ambient temperature and difficulties when reading reflections from soft, curved, thin, and small objects. The server is based on the Raspberry Pi model 4B+ and is responsible for managing the whole system. The proposed solution consists of a Message Queuing Telemetry Transport (MQTT) broker for communication, Node-RED for logic, a Mongo database for data storage, and a React-based web application that serves as the GUI. The system is deployed using Docker containers, and the docker-compose tool ensures container orchestration. Cloud Technologies Versus Self-Hosted Solutions There are two main categories of technology that can be implemented on the server side: • cloud-based technology; • self-hosted on-premises solutions. Both of these have their advantages and disadvantages. Cloud technologies are relatively new platforms; however, they can offer many built-in features that are ready to use without a long setup. On the other hand, users may argue that when their data are not stored on their own hardware, they do not have full control over the hardware, nor over the data. The primary goal of cloud technology for IoT is to provide universal functionality for application development as well as ubiquitous access to the data.
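The echo-to-distance conversion and the seat-occupancy threshold described above can be expressed compactly as follows; this is an illustrative Python sketch (the actual nodes run Arduino firmware), and it assumes the echo pulse width is measured in seconds.

```python
SPEED_OF_SOUND_M_S = 343.0

def echo_to_distance_cm(echo_pulse_s):
    """Convert the HC-SR04 echo pulse width (seconds) to a distance in cm.
    The pulse covers the round trip to the object and back, hence the /2."""
    return echo_pulse_s * SPEED_OF_SOUND_M_S / 2.0 * 100.0

def seat_occupied(echo_pulse_s, threshold_cm=70.0):
    """A seat is considered taken when something is closer than the
    configurable decision distance (70 cm during the tests)."""
    return echo_to_distance_cm(echo_pulse_s) < threshold_cm
```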
Therefore, the user of an IoT platform can focus only on the functionality and value of their product and does not need to care about the hardware itself. IoT platforms are commonly divided into four categories. Well-known IoT platforms in 2022 include the IBM Watson IoT Platform, Particle, AWS IoT Core, Google Cloud IoT Core, Azure IoT Central, and many others. While many of the features provided by these platforms are free of charge, users must pay in order to get the most out of each of these platforms. The alternative is self-hosting. In this case, the user has to install and configure services according to their project's needs. In the proposed system, a self-hosted solution based on the Raspberry Pi was chosen. The primary reason for this decision was that internet access may not be available in every classroom, and the Raspberry Pi can create its own Wi-Fi network. Therefore, there is no need to add an extra router in order to provide connectivity for the system. With this in mind, the proposed system can be deployed in any classroom without any significant limitations. System Implementation The first step in the system implementation is the design of the communication flow diagram. The flowchart of the system functionalities is presented in Figure 2. Individual users can visit the GUI either from a personal computer in the classroom or through their smartphone. The communication between the GUI and the rest of the system was implemented using the HTTP protocol. WebSocket-based communication was implemented in order to obtain real-time information when seat occupancy changes. The main communication node of the system is the MQTT broker. All communication between the nodes and the Node-RED application is handled by this broker. All connected IoT nodes send their data to the broker. The Node-RED application is responsible for data processing and for storing the results in the Mongo database server using the MongoDB driver. At the same time, it provides a RESTful application programming interface (API) and a WebSocket endpoint that serve the data for the GUI. The hardware of the system consists of three main blocks: • The Gate node for counting people at the entrance; • IoT nodes for detecting seat occupancy; • The self-hosted server. The Gate Node The primary task of the Gate node is to detect people who cross the door, i.e., those entering or leaving the classroom. The Gate node consists of one NodeMCU board equipped with two HC-SR04 distance sensors. The principal functionality of the Gate node is depicted in Figure 3. As mentioned earlier, from the sensing point of view the gate consists of two distance sensors, which are enough to detect when a person is crossing the door of the classroom. However, there is one limitation to this approach: only one person at a time can cross the door of the classroom. The flowchart of the software implemented in the Gate node is shown in Figure 4. The software managing the Gate node starts with the initialization of variables and definitions. Moreover, it is necessary to set up the Wi-Fi network name (SSID), Wi-Fi password, server IP address, and MQTT credentials. When the node is connected to the Wi-Fi network as well as to the MQTT broker, the Gate node starts to measure values from the distance sensors. The PubSubClient library, which is available for Arduino-based boards, was used for the connection and data transfer over MQTT. The MQTT topics used for this type of message are "sensors/gateEnter" and "sensors/gateExit". 
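The firmware itself is Arduino C++ using the PubSubClient library; purely as a hardware-independent illustration of the publishing step, the following Python sketch uses the paho-mqtt client, and the broker address and the way a crossing direction is obtained are assumptions introduced for the example.

```python
# Hypothetical illustration of the Gate node publishing step (the real firmware is Arduino C++ with PubSubClient).
import paho.mqtt.client as mqtt

BROKER_IP = "192.168.4.1"            # assumed address of the Raspberry Pi server on its own Wi-Fi network
TOPIC_ENTER = "sensors/gateEnter"    # topics named in the text
TOPIC_EXIT = "sensors/gateExit"

client = mqtt.Client()
client.connect(BROKER_IP, 1883)

def report_crossing(direction):
    """Publish one door-crossing event detected from the two HC-SR04 sensors."""
    topic = TOPIC_ENTER if direction == "enter" else TOPIC_EXIT
    client.publish(topic, "1")       # the server-side flow increments or decrements the room counter
```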
The algorithm for door-crossing detection operates by detecting a specific sequence of events reported by the sensors. To explain the crossing detection algorithm, we define two states for the sensors S1 and S2. The first is the "active" state, which means that the distance measured by the sensor is shorter than the defined door width, i.e., the sensor detects an object. The second is the "inactive" state, which means that the measured distance is not shorter than the defined door width, i.e., there is no object in front of the sensor. The possible sequences of sensor states resulting in successful crossing detection by the algorithm implemented in the Gate node are shown in Table 1, which lists the states of S1 and S2 at steps t(0)-t(4) for a person entering and a person leaving the room, where 0 represents the inactive state of a sensor and 1 stands for the active state. In the table, steps t(0)-t(4) are defined by the times at which the state of a sensor changes. The system detects a door-crossing event only when the states of the S1 and S2 sensors change according to the sequences provided in the table. When other sequences are detected, the system does not register a door-crossing event, and thus does not change the number of persons in the room. IoT Nodes for Detecting Seat Availability Each IoT node is based on a NodeMCU board which can connect up to sixteen distance sensors. In the proposed system design, a single distance sensor per seat is sufficient. Therefore, it is possible to determine seat occupancy across the classroom and automatically check whether the students in the room are keeping the desired social distance simply by evaluating data from the individual distance sensors. The ultrasonic distance sensor (HC-SR04) can be placed under the PC monitor or attached to the bottom part of the table. In order to connect sixteen distance sensors to the NodeMCU board, it is necessary to use a 16-channel (16:1) analogue multiplexer, in our case the CD74HC4067. An example of the sensor placement is shown in Figure 5. The implemented sensor can provide non-contact distance measurements between 2 and 400 cm with a ranging accuracy of up to 3 mm. Each sensor module includes an ultrasonic transmitter, a receiver, and a control circuit. During operation, it is necessary to establish whether there is a person relatively close to the sensor. The decision-making distance was set to 70 cm during the tests; however, the teacher is able to change the decision-making distance based on the conditions in the classroom using the web application interface of the server. When the measured distance is shorter than the threshold value, the seat is considered to be taken, represented by a logical one; otherwise, the seat is considered to be free (logical zero). When the seat occupancy status changes, the updated seat occupancy information is sent to the server for evaluation and storage. A flowchart of the software implementation for the IoT node for detecting seat availability is shown in Figure 6. The software of the IoT node starts with variable initialization and definitions. The first part of the code matches the software for the Gate node, such as the Wi-Fi and MQTT connections and the initialization of libraries. Afterwards, the node measures the distances from each sensor in a loop. When a change in a sensor value is detected, the node sends a message to the server for evaluation. 
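The per-row occupancy logic can be summarized in a short sketch. The real implementation is Arduino C++ firmware on the NodeMCU; the Python fragment below only illustrates the decision rule, and the read_distance_cm(channel) helper (reading one multiplexer channel and converting the echo time with distance = t × 343 / 2) and the publish callback are assumptions introduced for illustration.

```python
# Hedged sketch of one IoT node's seat-occupancy loop (real firmware: Arduino C++ on the NodeMCU).
THRESHOLD_CM = 70        # decision-making distance; configurable from the GUI
ROW_POSITION = 1         # row served by this node (set in the configuration message)
NUM_SEATS = 8            # HC-SR04 sensors attached through the CD74HC4067 multiplexer

def echo_to_cm(echo_time_s):
    """HC-SR04 ranging: distance = t * 343 / 2 (t is the round-trip echo time in seconds), returned in cm."""
    return echo_time_s * 343.0 / 2.0 * 100.0

last_seats = None

def scan_row(read_distance_cm, publish):
    """Read every seat sensor once and publish the occupancy vector only when it changes."""
    global last_seats
    seats = [1 if read_distance_cm(ch) < THRESHOLD_CM else 0 for ch in range(NUM_SEATS)]
    if seats != last_seats:
        publish("/sensors/distanceChanged", {"row": ROW_POSITION, "seats": seats})
        last_seats = seats
```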
The MQTT topic used for this type of message is "/sensors/distanceChanged". The useful information in the messages is represented by JSON objects in string form. A message with occupancy data sent from the sensors looks like this: {row: 1, seats: [0, 1, 0, 1, 0, 0, 0, 1]}; where row specifies the position in the classroom and seats is an array of values that define the seat occupancy for the individual positions in the row. In the improved version of the IoT-based system, the option to set up the whole classroom from scratch using only the GUI was added. For this purpose, each node subscribes to the topic "nodes/setupConfig" to enable receiving of the configuration messages and is able to send status data to the topic "nodes/nodeStatusUpdate". The setup message for the IoT node is as follows: {rowPosition: 1, sensorCount: 8, thresholdDistance: 70}; where rowPosition specifies the row position in the classroom, sensorCount defines the number of seats in a particular row, and thresholdDistance is used to set the decision-making value for seat occupancy. The status message sent by the node contains the following information: {nodeId: "011808db24be32c5", nodeStatus: 0}; where nodeId is a string which represents the node's unique identification in the system and nodeStatus defines the status of the node in the system. The node status can have two values, zero and one. The node sends a status message with nodeStatus equal to zero when it connects to the system. The Node-RED application evaluates the message and checks whether the node is already registered in the system. In that case, Node-RED automatically responds with the configuration message for this node. The node then responds to Node-RED with a status message with the value of nodeStatus equal to one to confirm a successful setup. On the other hand, when the node ID is not registered in the system, i.e., a new node is connected, it needs to be configured from the GUI. The process of creating the classroom and configuring the nodes is described in Section 4. Self-Hosted Server Solution The server, which is the central unit of the proposed system, is based on the Raspberry Pi minicomputer. This device runs all the applications and services that provide connectivity to the IoT nodes as well as management of the system, data evaluation, and storage. There are five primary services running on the server: the Mosquitto MQTT broker, a Node-RED application, a React web application, an Nginx server, and the MongoDB database system. MongoDB is a popular general-purpose document-based distributed database that stores all data for current and further evaluation. Mosquitto is an open-source MQTT message broker which is widely used across various IoT applications. The logic of the proposed system is implemented by a Node-RED application. The schematic design of the flow for handling messages from the Gate node for processing, evaluation, and representation of results is shown in Figure 7. The flow starts with the MQTT broker input nodes. These input nodes listen to the topics "sensors/gateEnter" and "sensors/gateExit". The purpose of this part of the flow is to collect data from the Gate node and evaluate the number of students that are currently in the classroom. All pieces of information are stored in the Mongo DB, and thus are available to Node-RED. However, the latest values are also stored in flow variables, and their updates are sent via WebSocket to the connected client using the React web application. 
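The node-registration handshake described above is likewise handled by Node-RED flows; the following Python fragment is only a hedged illustration of that server-side logic, with a hypothetical registry dictionary and publish callback standing in for the Node-RED nodes and the MongoDB storage.

```python
# Hedged illustration of the server-side handling of node status messages (implemented as Node-RED flows in the system).
import json

registry = {}   # nodeId -> stored configuration; persisted in MongoDB in the real system

def on_node_status_update(payload, publish):
    """React to messages arriving on 'nodes/nodeStatusUpdate'."""
    msg = json.loads(payload)
    if msg["nodeStatus"] == 0:                       # node has just connected
        config = registry.get(msg["nodeId"])
        if config is not None:                       # known node: push its stored configuration automatically
            publish("nodes/setupConfig", json.dumps(config))
        # an unknown node stays unconfigured until the teacher assigns it a row in the GUI
    elif msg["nodeStatus"] == 1:                     # node confirms a successful setup
        registry.setdefault(msg["nodeId"], {})["configured"] = True
```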
The message with the number of people currently in the classroom appears as follows: {"counter":14, "limit":20}; where counter represents the number of people and limit is the maximum allowed number of people who can be inside the room due to the active restrictions. This limit can be changed by the teacher via the GUI. Another MQTT broker input node listens to the topic "/sensors/distanceChanged". This node is responsible for handling the incoming messages from the IoT nodes about changes in seat occupancy. The structure of this message was provided in the previous section. The next part of the flow is responsible for providing all system data through the RESTful API. An example of this flow design is depicted in Figure 8. Description of the REST API POST routes: • /setupDevice: this route serves to set up an IoT node. It expects JSON data in the request body with parameters specifying the node position, sensor count, and threshold for the decision-making distance at which a seat is considered occupied; • /startLesson: this route starts a new lesson; it expects only one parameter, the lesson name; • /finishLesson: this route finishes the current lesson and is parameter-less. Two of the routes shown in Figure 9 handle the IoT nodes' configuration setup from the GUI. The configuration process of the IoT nodes is described in the next section. Simple algorithms were designed to evaluate whether the distribution of students across the classroom meets the requirements defined by the COVID-19 restrictions. It is assumed that the students are seated in rows; however, the rows do not have to contain the same number of seats. The implemented algorithm considers the student distribution to be valid when the distance between taken seats is at least two positions in both the x and y directions. Otherwise, the system shows a popup notification about the violation of the restriction. Experimental Results In this section, the GUI of the proposed IoT-based system for detecting the number of people and their distribution in the classroom is presented. The application helps the teacher to easily handle the implementation of the restrictions defined due to the COVID-19 outbreak. The home screen of the developed web application is shown in Figure 10. The teacher's daily routine is as follows. Before the beginning of the class, the teacher enters the classroom and resets the current count of students in the classroom. Afterwards, students can enter the classroom. The web application shows any changes in seat occupancy in real time using WebSocket communication. When students are heading to their chosen seat, they can cross other seats and temporarily change the seat occupancy status. This could lead to an evaluation of the person distribution that does not meet the requirements defined by the COVID-19 restrictions. Therefore, the system shows an alert only when an incorrect seat occupancy state holds for more than one minute. In a typical scenario, the system evaluates the data in real time, and a temporarily changed seat occupancy status does not cause an alert. All other communication between the web application and the Node-RED backend is carried out via the HTTP protocol. The data received from the WebSocket or from an HTTP GET request representing the student distribution contain two entries: "distributionState", which tells whether the students' distribution across the classroom meets the requirements defined by the restrictions, and "data", which represents the real seat occupancy in the classroom. 
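The seat-distribution check described above can be sketched in a few lines. The rule "at least 2 in both the x and y directions" is read here as meaning that two occupied seats must not be within one position of each other in both the row and the column direction simultaneously; this interpretation, together with the function and variable names, is an assumption for illustration rather than the system's exact implementation.

```python
# Hedged sketch of the seat-distribution check; the exact reading of the distance rule is an assumption.
from itertools import combinations

def distribution_valid(rows):
    """rows: per-row occupancy lists, e.g. [[0, 1, 0, 1], [0, 0, 0, 1]].
    Invalid when two occupied seats are within one position of each other in both directions."""
    occupied = [(r, c) for r, row in enumerate(rows) for c, taken in enumerate(row) if taken]
    for (r1, c1), (r2, c2) in combinations(occupied, 2):
        if abs(r1 - r2) < 2 and abs(c1 - c2) < 2:
            return False            # the GUI would show a popup notification about the violation here
    return True

print(distribution_valid([[1, 1, 0, 0]]))  # False: two students in adjacent seats of the same row
print(distribution_valid([[1, 0, 1, 0]]))  # True: one free seat between them in the row direction
```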
When the teacher finishes the lesson, all information gathered during the lesson is stored in the database for later analysis. The teacher can check the relevant data from the finished lessons at any time. The user window that provides the classroom setup and configuration is shown in Figure 11. The proposed system was designed to be as simple as possible from both the setup and the implementation point of view. After all the software is installed on the Raspberry Pi, the user is able to configure the classroom via the GUI. The configuration process is as follows. First, the user needs to set up the room limits, such as the maximum number of students in the classroom and the minimum distance limits. Afterwards, the teacher turns on the first IoT node in the first row, and the IoT node is registered in the system. By clicking on the button "Add IoT module", the teacher can display a window with a list of all unconfigured nodes in the system. Then, the teacher selects one of the unconfigured IoT nodes and configures it by setting its position in the classroom (i.e., the row number parameter) and the number of sensors connected to the IoT node, which is the number of seats in the particular row. The IoT node is then ready for use, and the teacher can add the rest of the IoT nodes. Moreover, the teacher is able to change the configuration of any IoT node at any time. Experiments were carried out to test the robustness and reliability of the door-crossing detection algorithm at the Gate node. The tests were performed considering the following scenarios: 1. The person enters the classroom; 2. The person leaves the classroom; 3. The person enters the Gate area, stops, and then continues in the same direction; 4. The person enters the Gate area, stops, and leaves in the direction from which they entered. Each scenario was tested 100 times, and the achieved results are presented in Table 2. From the achieved results, it is obvious that the proposed system is robust and reliable and is able to correctly detect door crossings using the Gate node. The Gate node is able to distinguish between all four cases, i.e., a person who enters the room, leaves the room, or decides to return after entering the door from either side. The Gate can also detect other crossing objects that are not human targets, such as bags, cabinets, or tables; however, it is not possible to distinguish what type of object crosses through the Gate. Moreover, the system can operate in real time and is designed to be deployed at a relatively low cost. The hardware cost of the equipment required for one classroom with a capacity of 40 seats, i.e., five rows and eight seats per row, is approximately EUR 210, as can be seen in Table 3. It is important to note here that deployment of the system does not require any pre-existing infrastructure for the internet connection. The server, running on the Raspberry Pi, can provide a wireless connection to all IoT nodes in the room and store all the data locally. Discussion In this section, the proposed solution for counting and detecting the distribution of people around the classroom is compared with other state-of-the-art works. As our literature review found only a single work dealing with a similar solution based on data from a sensor network, solutions based on image processing were also considered in the comparison with the proposed system. It is important to note that most papers present only a limited amount of information about their systems' requirements, cost, and power consumption. 
However, based on the provided information, several parameters for comparison could be estimated. For the comparison, implementation of the system for a classroom with a capacity of 40 seats, i.e., five rows and eight seats per row, was considered. Unfortunately, due to the lack of information provided in the literature, it is not possible to cover all comparison parameters for all solutions. The comparison of the proposed system with the other solutions is shown in Table 4. From the comparison, it is clear that the proposed system can provide information about seat occupancy with high accuracy while maintaining the privacy of people in the monitored area. In addition to privacy, the advantage of the proposed system over systems based on image processing is that its accuracy is consistent over the whole area, while in systems based on image processing the accuracy can be affected by the position of the camera. On top of that, image processing-based solutions are not designed to provide information about the distribution of people in the area. Moreover, the proposed system can operate in real time with a low implementation cost and lower power consumption than the IoT-based solution proposed in [36]. Conclusions In this paper, an improved IoT-based system for detecting the number of people in a classroom and their distribution was proposed. The main purpose of the proposed system is to help teachers to manage their classes with respect to the rules implemented due to COVID-19 restrictions. An improved system was presented in which the teacher can set up the whole classroom from the GUI. The system is more robust and much easier to extend than the previous version published in [37]. The teacher is able to configure the IoT nodes from the GUI and change the configuration at any time. The system consists of a Gate node for counting people entering or leaving the classroom, IoT nodes with distance sensors placed in the room for detecting the availability of seats, and a server based on a Raspberry Pi. It is possible to connect up to sixteen HC-SR04 ultrasonic distance sensors to a single IoT node; thus, a single node is able to check the availability of sixteen seats. The Raspberry Pi server can create a Wi-Fi network, which is used to transfer the data from the IoT nodes; therefore, there is no need for an extra Wi-Fi router to provide connectivity in the classroom. Five system services running on the Raspberry Pi provide all functionalities of the proposed system: an Nginx server for request routing, a React web application, the MongoDB database, a Node-RED application for the logic part, and the Eclipse Mosquitto MQTT broker. All services run on the server as Docker containers. Thanks to this setup, it is easy to deploy the proposed system in any classroom. The main idea of our system is to help teachers to handle COVID-19 restrictions; however, there may be other use cases for the system. For example, the system can check the distance between students during exams and tests in order to help prevent cheating. The main disadvantage of the proposed system is the complexity of the deployment of the IoT modules compared to camera-based systems. When camera-based systems are deployed, it is sufficient to simply place the camera in the classroom and the system is ready immediately. In the proposed system, however, it is necessary to connect cables between the sensors and the IoT nodes and to provide a power supply for the individual IoT nodes. 
On the other hand, when privacy is considered, the proposed system has the advantage that it cannot provide any information about the identities of the people in the room. The proposed system was deployed and tested in a classroom at the University of Zilina. The presented IoT-based system is highly reliable and robust thanks to the algorithm's simplicity and clarity. The main advantage of the system is the possibility of deployment without jeopardizing the privacy of individuals in the classroom, as there is no way to identify individuals, unlike in camera-based systems. Funding: This paper was supported by the project of the Operational Programme Integrated Infrastructure "Independent research and development of technological kits based on wearable electronics products, as tools for raising hygienic standards in a society exposed to the virus causing the COVID-19 disease", ITMS2014+ code 313011ASK8, by the European Regional Development Fund, and by the Slovak Research and Development Agency under contract no. PP-COVID-20-0100: DOLORES.AI: The pandemic guard system. Informed Consent Statement: Not applicable. Data Availability Statement: The data are available from the authors upon request.
8,099
2022-10-01T00:00:00.000
[ "Computer Science", "Education", "Engineering" ]
The SNPcurator: literature mining of enriched SNP-disease associations Abstract The uniqueness of each human genetic structure motivated the shift from the current practice of medicine to a more tailored one. This personalized medicine revolution would not be possible today without the genetics data collected from genome-wide association studies (GWASs) that investigate the relation between different phenotypic traits and single-nucleotide polymorphisms (SNPs). The huge increase in the literature publication space imposes a challenge on the conventional manual curation process, which is becoming more and more expensive. This research aims at automatically extracting SNP associations of any given disease together with their reported statistical significance (P-value) and odds ratio, as well as cohort information such as size and ethnicity. Our evaluation illustrates that SNPcurator was able to replicate a large number of SNP-disease associations that were also reported in the NHGRI-EBI Catalog of published GWASs. SNPcurator was also tested by eight external genetics experts, who queried the system to examine diseases of their choice, and was found to be efficient and satisfactory. We conclude that the text-mining-based system has a great potential for helping researchers and scientists, especially in their preliminary genetics research. SNPcurator is publicly available at http://snpcurator.science.uu.nl/. Database URL: http://snpcurator.science.uu.nl/ Introduction Ever since its completion in 2003, the Human Genome Project has accelerated and encouraged research on decoding the genome structure and functionality, powered by the huge advances in genotyping technologies. The main goal of genomic studies is to identify and reveal the genetic variations associated with diseases and their prevalence across different populations. Such studies contribute to more tailored detection, prevention and treatment of diseases, which lays the groundwork for the era of personalized medicine (1). In the hunt for correlation between genotype and phenotype, single-nucleotide polymorphisms (SNPs) are considered genetic signatures for the majority of polymorphisms responsible for trait susceptibility (2). By definition, a SNP is a single base-pair (A, T, C or G) variation that occurs at a specific site in the DNA sequence. It does not directly cause a disease but increases the genetic predisposition of individuals towards a certain disease and can affect their responses to drugs and medications (3). Currently, information on SNPs is available in databases such as the genome-wide association study (GWAS) Catalog (4), GWAS Central (5), GWASdb (6), MirSNP (7) and GRASP (8). These resources are constructed and curated manually; however, the richness of information specifically related to the clinical impact of SNPs is contained in free text in the form of biomedical publications (9). The process of updating databases requires substantial human resources and financial support, not to mention time (4). This imposes a challenge as the number of published studies is steadily increasing, and hence manual curation is proving more and more inefficient. Text-mining tools have been employed recently to overcome the mentioned limitations and accelerate the curation process. MutationFinder (MF) extracts mutations through regular expressions, while tmVar (10) extracts mutations based on conditional random fields. 
Open Mutation Miner (OMM) (11) uses MF to recognize single mutations and extends its regular expression set to detect mutation series. The Extractor of Mutations (EMU) (12) detects mutations in text and links them to genes, proteins and diseases. The SNP Extraction Tool for Human Variations (SETH) (13) implements an Extended Backus-Naur Form grammar and regular expressions with more emphasis on short sequence variations and SNPs. The DisGeNET database (14) lists results compiled from expert-curated databases and enhances them by incorporating the BeFree text-mining system (15). PolySearch (16) associates genetic variants with diseases and drugs based on their co-occurrence frequency in abstracts. Most of the above tools achieved high performance levels on different corpora. However, there is still a gap between the research community and the biomedical text-mining community. Jimeno Yepes and Verspoor compare in (17) the performance of EMU, OMM, MF, tmVar and SETH intrinsically on the Variome corpus, and extrinsically on the COSMIC and InSiGHT databases. The study also discusses the technical aspects related to using the tools; some of them require an intermediate to advanced level of programming knowledge. Furthermore, the practical utility of the tools has not been properly investigated, nor has it been shown how they can be adapted to fulfill a researcher's tasks efficiently. In this article, we present SNPcurator, a system oriented towards information extraction specifically from genome-wide and candidate-gene studies. The proposed model is constructed out of different natural language processing (NLP) modules to aid scientists in their search for relevant disease-associated SNPs through an intuitive web interface. It incorporates both syntactic and semantic methods to extract relevant information from PubMed abstracts, such as cohort size and ethnicity, SNP IDs and the reported results. The motivation behind this research is to create a publicly available, scalable and fully automated extraction tool. Materials and methods SNPcurator has an online web interface at http://snpcurator.science.uu.nl/ and allows researchers to easily query and search diseases; it provides a useful resource for overview and summarization of associated SNPs found in the literature. Sample code and the files used for evaluation are also available for download. Figure 1 illustrates the system's overview and workflow. Data collection The first step is to identify genetic research studies from the complete PubMed repository without any limitation on citation counts, publishing journals or a certain time period. The NCBI E-utilities (https://eutils.ncbi.nlm.nih.gov/entrez/eutils/), in particular ESearch and EFetch, are used in conjunction with the BioPython module to query the PubMed database. All articles included in PubMed are associated with Medical Subject Headings (MeSH) terms (http://www.ncbi.nlm.nih.gov/mesh). MeSH is a controlled vocabulary used for indexing articles according to the Unified Medical Language System meta-thesaurus, which powers the search capability of PubMed. SNPcurator combines the user input (the disease term in the current version) with relevant genetic search terms to construct a PubMed query. ESearch returns a list of IDs that match the query, while EFetch returns the data records for the retrieved ID list. 
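A minimal sketch of this retrieval step is given below, assuming the Biopython Entrez interface; the query string is a simplified stand-in, since the exact search terms combined by SNPcurator are not reproduced here.

```python
# Hedged sketch of the PubMed retrieval step using Biopython's Entrez wrappers (ESearch + EFetch).
from Bio import Entrez

Entrez.email = "researcher@example.org"   # NCBI requires a contact e-mail address

def fetch_pubmed_records(disease, retmax=1000):
    """Search PubMed for SNP-association studies of a disease and fetch the matching records."""
    query = f'"{disease}" AND (polymorphism OR SNP) AND association'  # simplified stand-in query
    handle = Entrez.esearch(db="pubmed", term=query, retmax=retmax)
    id_list = Entrez.read(handle)["IdList"]
    handle.close()
    if not id_list:
        return ""
    handle = Entrez.efetch(db="pubmed", id=",".join(id_list), rettype="medline", retmode="text")
    records = handle.read()
    handle.close()
    return records
```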
Not all retrieved articles are included in the final search results; as the system only relies on the information found in abstracts and not in the full-text article, all records with a missing/incomplete abstract text or in a foreign language are excluded. The downloaded results include literature with research on multiple types of genetic variants, such as mutations or genes. To limit our search scope to SNP-association studies only, we apply a second filter to determine whether the abstract text is relevant by checking for SNP occurrences in the text. Information extraction The filtered results are then pre-processed by several NLP modules. SNPcurator employs the publicly available spaCy toolkit, which is optimized for both accuracy and performance (https://spacy.io/). It accomplishes all necessary NLP tasks easily through its Python API. SpaCy is arguably one of the fastest publicly available parsers (18). We make use of its sentence splitting, tagging, tokenization, named entity recognition and dependency parsing modules to extract SNP-disease pairs. SNPs identification Scientists refer to SNPs in their research through unique ID numbers in a standard format assigned by the NCBI dbSNP database (http://www.ncbi.nlm.nih.gov/SNP/). The ID format differs between a newly submitted SNP (e.g., ss28937569) and a reference SNP cluster, refSNP (e.g., rs28937569), where the latter is assigned to a submitted SNP after the sequence is aligned to its appropriate region (19). A regular expression, '[rR][sS][ ]*[0-9][0-9]*', is used to identify the SNP identifier format and extract a list of SNP occurrences in the abstract text. This expression has previously achieved a 100% recall over a test set of 300 SNP mentions (20). In our model, the latter formula has been improved to optionally accept the characters G, C, T and A at the end of SNP mentions, as authors tend to specify reference/alternative alleles or chromosome positions in a non-standard format. Results matching In the analysis of genetic variations, researchers follow exhaustive guidelines and conduct several statistical tests in order to report positive associations in GWAS, which in turn requires the definition of a known threshold. The P-value, a parameter of statistical significance, is used to determine the certainty of an association. It provides the probability that a given result from a test is due to chance, with lower P-values giving higher confidence in the association. Many values have been reported as a threshold to attain genome-wide significance (21). Therefore, our model includes all studies, even those with marginally significant and insignificant P-values. By displaying all results without filtering their significance, SNPcurator aims to help researchers rule out a given hypothesis or trigger more questions for further investigation. To relate a reported value(s) to the mentioned SNPs, the system first highlights one or more 'result sentence(s)' from the abstract. A 'result sentence' is identified as any phrase with a P-value mention along with a numeric value. Each flagged sentence is first tokenized into words and then preprocessed to remove any unnecessary characters like quotes and brackets. There are also a number of naming patterns for reporting a P-value according to how it was calculated (e.g. 'P-combine', 'P-value', 'P-meta'), which requires normalization to one single standard format that is easy to capture and extract. 
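As a small illustration, the identifier pattern above can be applied directly with Python's re module; the optional trailing allele letter ([GCTA]?) reflects the extension described in the text, but since the exact extended expression used by SNPcurator is not reproduced in the paper, this suffix handling is an assumption.

```python
# Hedged sketch of rs-identifier extraction; the [GCTA]? allele suffix is an assumed extension of the base pattern.
import re

RS_PATTERN = re.compile(r"[rR][sS][ ]*[0-9][0-9]*[GCTA]?")

def extract_snps(abstract):
    """Return the list of SNP identifier mentions found in an abstract."""
    return RS_PATTERN.findall(abstract)

print(extract_snps("The SNP rs28937569 and rs 12914385C were associated with the trait."))
# -> ['rs28937569', 'rs 12914385C']
```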
Following that, a set of predefined regular expressions is used to capture and recognize the value format, covering floating-point, scientific and exponent number notations. However, since a sentence's meaning relies on its semantics and structure, the coupling of a P-value to its corresponding SNP is not the same for all sentences. The most common and straightforward way of stating results is by mentioning both the SNP and the P-value in the same sentence. In this case, the extraction of the needed <SNP, P-value> pairs relies on the order, as illustrated in the examples shown in Table 1. The system forms a pair by extracting the nearest P-value to the SNP mention. Note that a detected P-value can only be coupled with a single SNP mention, and thus is not considered when measuring the distance for the next SNP. This allows an accurate coupling when multiple values are involved, such as in examples (2, 3 and 4). The same process also applies to the extraction of the odds ratio (OR) values reported in the abstract text. In general, the OR is a measure of the effect size in a case-control study; in genomic studies specifically, it denotes the probability of having the disease in individuals with and without certain genotypes of SNPs. Patient information extraction The sample size included in the study is another key parameter when confirming an association with statistical confidence. For that purpose, we created two sets of keywords that are commonly used to describe the patient cohort ('PatientKeywordSet') and the control group ('ControlKeywordSet'). The 'PatientKeywordSet' consists of words such as ['patient', 'case', 'subject'] and the 'ControlKeywordSet' includes words like ['control', 'normal', 'healthy']. The spaCy parser is then invoked to search for all numeric modifier (NUMMOD) dependencies found in the abstract text. The keyword sets are first compared against the head token of each candidate modifier and, in the case of no match, they are then compared against a list of two neighboring words of the candidate. Examples of control and patient group sizes extracted from evidence sentences are demonstrated in Table 2. Because genetic variants often have markedly different frequencies across populations, the system is also able to extract the ethnicity and nationality of patients through the spaCy named entity recognition module. The ents attribute of the processed input abstract lists all entity objects found. An entity object is a combination of text, category and word position within the text. We are particularly interested in the NORP entity type, which extracts nationalities, religious or political groups efficiently (where the latter two are irrelevant in this context). To limit the number of false positives, we also set a minimum word length for a nationality. If more than one nationality/ethnicity is found, the most frequent nationality mentioned is reported, as shown in Table 3. The SNPcurator web tool is driven by Python scripts: the disease query is passed from the front-end, and the server initiates the text-mining module script and displays its output. All the extracted information is displayed in a tabular format, where each row includes a single SNP item. The last column allows the user to check the result sentence from which the <SNP, P-value> tuple was extracted. The user is able to sort the results according to the P-value, OR value, group sizes, date of publication or simply the alphabetical/numerical order of the PubMed or SNP IDs. 
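A condensed sketch of the cohort-size and ethnicity extraction described above is given below; it assumes the small English spaCy model, uses abbreviated keyword sets, and omits the neighbouring-word fallback, so it is a simplification rather than the system's actual configuration.

```python
# Hedged sketch of cohort-size and ethnicity extraction with spaCy (simplified keyword matching).
from collections import Counter
import spacy

nlp = spacy.load("en_core_web_sm")
PATIENT_KEYWORDS = {"patient", "case", "subject"}
CONTROL_KEYWORDS = {"control", "normal", "healthy"}

def extract_cohort_info(abstract):
    doc = nlp(abstract)
    sizes = {"patients": [], "controls": []}
    for token in doc:
        if token.dep_ == "nummod":                      # numeric modifier, e.g. "512 patients"
            head = token.head.lemma_.lower()
            if head in PATIENT_KEYWORDS:
                sizes["patients"].append(token.text)
            elif head in CONTROL_KEYWORDS:
                sizes["controls"].append(token.text)
    norps = [ent.text for ent in doc.ents if ent.label_ == "NORP" and len(ent.text) > 3]
    ethnicity = Counter(norps).most_common(1)[0][0] if norps else None
    return sizes, ethnicity

print(extract_cohort_info("We genotyped 512 patients and 498 healthy controls of Japanese ancestry."))
```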
For example, by ranking results according to ascending P-values, researchers can easily view the SNPs with the strongest association values that attained the genome-wide significance level. The web tool is deployed on a server connected to the internet and can be accessed at http://snpcurator.science.uu.nl/; Figure 2 shows screenshots of SNPcurator's interface. Results To assess SNPcurator's performance, we compared the information extracted by SNPcurator to data found in the GWAS Catalog for two queries: Obesity and Lung Cancer. According to cancer.org, lung cancer is the second most common type of cancer affecting both men and women (combined), and it is the first cause of cancer death. Although obesity is not in itself a direct cause of death, overweight is a major risk factor for several diseases leading to death. However, it is also considered one of the top preventable diseases. Therefore, by identifying individuals with an increased risk of obesity, an early treatment plan would help to avoid mortal consequences (22). SNPcurator was able to extract 1422 associations, while the GWAS Catalog shows a greater number of results, with evidence of 1887 associations for the obesity trait. For the lung cancer disease, SNPcurator shows 620 associations versus 629 found in the GWAS Catalog. This was expected, given that the GWAS Catalog is manually curated from full-length articles while the SNPcurator results are limited to the analysis of abstracts and titles only. Nevertheless, SNPcurator achieves a similar number of associations. It is important to note that SNPcurator results will almost certainly include false positives but, as mentioned previously, reporting an association relies on a set of standards and rules that differ from one database to the other. The SNPcurator results leave such evaluations to the user, based on the information extracted. By observing the top 25 results when ordering associations in ascending order of their P-values, SNPcurator shows evidence for SNPs (rs8102683, rs465498 and rs12914385) and SNP (rs11127958) for the lung cancer and the obesity queries, respectively. Even though the GWAS Catalog curation constraints might be the reason why these records are not listed in the catalog, we were still able to confirm these associations through other resources (6). To further investigate the text-mining module performance, we compared the results from SNPcurator to associations derived from abstracts and titles only in the GWAS Catalog, without considering data from full text. The catalog cites a total of 37 articles for the obesity disease; only a subset of 22 articles was selected. A chosen article must include both the SNP ID and the corresponding P-value in its abstract text. SNPcurator was able to replicate 21 associations, while 9 associations were missed or not properly extracted. For the lung cancer search, 25 associations were matched from a total of 34 associations reported in the GWAS Catalog, extracted from 28 articles. On average, SNPcurator was able to replicate around 70% of the associations. The performance of the information extraction component Recently, SNPPhenA, a new corpus of SNP and phenotype associations extracted from GWA studies, was published online (http://nil.fdi.ucm.es/?q=node/639). The corpus was constructed by querying GoPubMed (http://www.gopubmed.org/) with 20 popular SNPs fetched from SNPedia. The original 20 SNP names were used as seeds for the abstract collection process, which resulted in 360 abstracts with 875 distinct SNPs. 
The novelty of the SNPPhenA corpus lies in ranking the associations by classifying them into three classes: positive, negative and neutral. The associations were manually annotated by two experts in the genetics field and, in the case of contradictory results, the verdict of a third annotator was taken into consideration. Moreover, a confidence level of positive associations was manually assigned based on the strength of the linguistic correlation between SNP and disease mentions in the abstract; the associations were categorized into weak, moderate and strong. More details on the annotation process and the corpus statistics can be found in (23). To our knowledge, SNPPhenA is one of the first datasets to include the degree of certainty and confidence of associations instead of only reporting binary relations that simply indicate association or no association between SNPs and a disease. Despite the relevance of the dataset to our work, the corpus authors relied on linguistic information, negation, modality markers and neutral candidates to label associations. In our approach, we determine the significance of associations in terms of biomedical statistical tests and study size. It is worth mentioning that P-values were regarded as an extra factor by the annotators of SNPPhenA when identifying the confidence levels of reported associations. To properly evaluate our model, only a subset of the corpus, SNPPhenA_mod, was used. All records of the original corpus with no P-values/OR values reported in the abstract text were excluded and not considered when building the new corpus. The modified corpus, SNPPhenA_mod, consists of 120 abstracts with 166 key sentences and a total of 331 SNP-phenotype association candidates with 160 distinct SNP identifiers. The new corpus was constructed following the manual annotation of associations by a biological expert; it is available for download in XML format from the about page. We evaluated the performance of extracting the <SNP, P-value> pairs in terms of precision, recall and F-measure. The model achieved 81%, 86% and 83% for these metrics, respectively. What mostly affects the system's sensitivity are the false negative (FN) cases, which represent missed associations that were not detected by our system. Missed associations occurred due to failing to detect the P-value correctly or failing to link the correct P-value to the correct SNP. Failing to detect the correct P-value happens when values are reported in ranges instead of a single value. Another limitation of the system is that it can only extract associations within the same sentence, i.e., both the SNP and the P-value must be mentioned in the same sentence. It also assumes that one P-value applies to multiple SNPs if there is a single P-value mention with multiple SNPs mentioned in the sentence, which in some cases results in false positives. On the other hand, failing to couple SNPs to their corresponding P-values contributes to a larger portion of FN and also to FP cases. Table 4 illustrates some sample cases where SNPcurator failed to extract the correct associations, for instance when only the first of two P-values reported in a sentence was detected and the second association was missed, when several SNP mentions were matched to a single P-value, or when the annotators referred to an allele-specific identifier (e.g., rs5770917C) while the system detected only the base identifier (rs5770917). External evaluation Singhal et al. discuss in (24) the need to advance biomedical sciences through text mining by empowering the roles of the stakeholders involved (researchers, publishers and experts). Domain experts can evaluate the mined results and provide requirements, comments and guidelines for future improvements. For that reason, we presented the SNPcurator website to a number of interested scientists to solicit their feedback through an online survey. 
All eight participants have doctoral degrees in genetics or biosciences and are currently involved in mutation and polymorphism research. Their years of experience in the genetics field varied: one participant was a full professor, four were lecturers, two were assistant lecturers and one was a genetic engineering expert from industry. Three participants were affiliated with the genetics department of the Suez Canal Faculty of Medicine, two with the genetics department of the Alexandria Faculty of Medicine, two with the Medical Research Institute in Alexandria and one with the Molecular Medicine and Tissue Culture sector of the European Egyptian Pharmaceutical Industries. The survey followed the original Davis technology acceptance model guidelines (25) by addressing its two main aspects: the perceived usefulness of the system and its ease of use. Moreover, it collected suggestions for improvements as well as the participants' opinions on the context in which they envision using the tool. Participants were encouraged to try different diseases of their choice and evaluate the extracted results. The overall system performance was satisfactory to all of the users; Figure 3 demonstrates the users' responses to the questionnaire. Most of them agreed that SNPcurator would be useful for conducting general preliminary research, studying SNP variations among populations or highlighting related studies for further reading. Some participants pointed out that the initial results were limited and some known associations were missing despite their mention in multiple citations. For that reason, we increased the number of papers downloaded from PubMed through the E-utilities to 3000 instead of only 1000. This resulted in a longer loading time, but when we asked the experts to repeat their search queries, the missing associations were present. They also suggested extracting more data from the literature to enrich the records, such as the minor allele frequency and the phenotypic effect. However, this information is usually found in the full text and not in the abstract, and hence could not be implemented for now. Another suggestion was to enable users to search by SNPs, not just by disease; this would require us to implement a SNP-trait association extraction module, which we intend to pursue in future work. Conclusion In this article, we presented SNPcurator, a text-mining web tool for automating the curation of SNP-trait information from GWASs. The system efficiently extracts associations and matches them with their statistical significance parameters (P-value and OR) reported in abstracts; moreover, it extracts population-related information such as the cohort sizes and ethnicity. To illustrate the tool's usability, we compared the results of two sample queries against the manually curated database. 
The system was able to report a comparable number of associations and also to recreate a number of existing associations found in the GWAS Catalog. The system was also able to identify new associations not found in the catalog that might be interesting for experts to investigate further. Furthermore, to evaluate its usefulness and ease of use in daily research practice, a group of experts used the tool to search for diseases of interest; their opinions were very encouraging and confirmed the potential of SNPcurator. The tool was proven scalable and robust by applying it to the whole PubMed repository and by increasing the number of fetched articles from 1000 to 3000 per query. A main limitation was the analysis of abstract text only; we believe more accurate data would be extracted from full-text articles. We acknowledge the fact that text-mining systems will never reach the level of sophistication of researchers when reading and analyzing an article and hence will never fully replace human curation. However, it is also almost impossible to keep up with the fast publication rates of scientific research today. The speed and satisfying results of SNPcurator may be very beneficial in accelerating that process. Future work Results have shown a difference between the number of associations reported in abstracts and those reported in the article text itself. In the future, we aim to expand the text-mining scope to full-length articles and also to any extra data provided by authors, to further improve the system's performance. This would allow the extraction of more information regarding associations and also detailed cohort descriptions. Another potential add-on to the system would be a disease-SNP relation extraction module that allows an inverse query search by SNP ID or even simply by a set of PubMed article IDs. The system would also benefit from a general scoring scheme or a filter to rank the results not only on the basis of the statistical findings but also incorporating the number of citations and the impact factors of the publications, to indicate the credibility of the reported associations. Conflict of interest. None declared.
5,604.2
2018-03-08T00:00:00.000
[ "Biology", "Computer Science", "Medicine" ]
Semantic Representation of Domain Knowledge for Professional VR Training Domain-specific knowledge representation is an essential element of efficient management of professional training. Formal and powerful knowledge representation for training systems can be built upon the semantic web standards, which enable reasoning and complex queries against the content. Virtual reality training is currently used in multiple domains, in particular if the activities are potentially dangerous for the trainees or require advanced skills or expensive equipment. However, the available methods and tools for creating VR training systems do not use knowledge representation. Therefore, creation, modification and management of training scenarios is problematic for domain experts without expertise in programming and computer graphics. In this paper, we propose an approach to creating semantic virtual training scenarios, in which users' activities and mistakes, as well as the equipment and its possible errors, are represented using domain knowledge understandable to domain experts. We have verified the approach by developing a user-friendly editor of VR training scenarios for electrical operators of high-voltage installations. Introduction Progress in the quality and performance of graphics hardware and software observed in recent years makes realistic interactive presentation of complex virtual spaces and objects possible even on commodity hardware. The availability of diverse inexpensive presentation and interaction devices, such as glasses, headsets, haptic interfaces, motion tracking and capture systems, further contributes to the increasing applicability of virtual reality (VR) and augmented reality (AR) technologies. VR/AR applications have become popular in various application domains, such as e-commerce, tourism, education and training. Especially in training, VR offers significant advantages by making the training process more efficient and flexible, reducing the costs, liberating users from acquiring specialized equipment, and eliminating the risks associated with training in a physical environment. Training staff in virtual reality is becoming widespread in various industrial sectors, such as production, mining, gas and energy. However, building useful VR training environments requires competencies in both programming and 3D modeling, as well as the domain knowledge necessary to prepare practical applications in a given domain. Therefore, this process typically involves IT specialists and domain specialists, whose knowledge and skills in programming and 3D modeling are usually low. Particularly challenging is the design of training scenarios, as it typically requires advanced programming skills, and the level of code reuse in this process is low. High-level componentization approaches commonly used in today's content creation tools are insufficient because the required generality and versatility of these tools inevitably lead to high complexity of the content design process. Therefore, the availability of user-friendly tools that allow domain experts to design VR training scenarios using domain knowledge becomes essential to reduce the required time and effort, and consequently to promote the use of VR in training. A number of solutions enabling efficient modeling of VR content using techniques for domain knowledge representation have been proposed in previous works. In particular, the semantic web provides standardized mechanisms to describe the meaning of any content in a way understandable to both users and software. 
The semantic web is based on description logics, which permit formal representation of concepts, roles and individuals. Such representations can be subject to reasoning, which leads to the inference of implicit knowledge based on explicit knowledge, as well as to queries including arbitrarily complex conditions. These are significant advantages for the creation and management of content by users in different domains. However, usage of the semantic web requires skills in knowledge engineering, which is not acceptable in the practical preparation of VR training. Thus, the challenge is to elaborate a method of creating and managing semantic VR scenarios which could be employed by users who do not have advanced knowledge and skills in programming, 3D modeling and knowledge engineering. In this paper, we propose a new method of building and managing VR training scenarios based on semantic modeling techniques with a user-friendly editor. The editor enables domain experts to design scenarios in an intuitive visual way using domain knowledge described by ontologies. Our approach takes advantage of the fact that in concrete training scenes and typical training scenarios, the variety of 3D objects and actions is limited. Therefore, it becomes possible to use ontologies to describe the available training objects and actions, and to configure them into complex scenarios based on domain knowledge. The work described in this paper has been performed within a project aiming at the development of a flexible VR training system for electrical operators. All examples, therefore, relate to this application domain. However, the developed method and tools can be similarly applied to other domains, provided that relevant 3D objects and actions can be identified and semantically described. The remainder of this paper is structured as follows. Section 2 provides an overview of the current state of the art in VR training applications and a review of approaches to semantic modeling of VR content. Section 3 describes an ontology of training scenarios. The proposed method of modeling training scenarios is described in Section 4. An example of a VR training scenario along with a discussion of the results is presented in Section 5. Finally, Section 6 concludes the paper and indicates possible future research. Training in VR VR training systems enable achieving a new quality of employee training. With the use of VR, it becomes possible to digitally recreate real working conditions with a high level of fidelity. Currently available systems can be categorized into three main groups: desktop systems, semi-immersive systems and fully immersive systems. Desktop systems use mainly traditional presentation and interaction devices, such as a monitor, mouse and keyboard. Semi-immersive systems use advanced VR/AR devices either for presentation, e.g., head-mounted displays (HMDs), or for interaction, e.g., motion tracking. Immersive systems use advanced VR/AR devices for both presentation and interaction. Below, examples of VR training systems from all three categories are presented. The ALEn3D system is a desktop system developed for the energy sector [1]. The system enables interaction with 3D content displayed on a 2D monitor screen, using a mouse and a keyboard. Scenarios implemented in the system mainly focus on training the operation of power lines and consist of actions performed by line electricians. The system includes two modules: a VR environment and a course manager. 
The VR environment can operate in three modes: virtual catalog, learning and evaluation. The course manager is a browser application that allows trainers to create courses, register students, create theoretical tests and monitor learning progress. An example of a semi-immersive system is the IMA-VR system [2]. It enables specialized training in a virtual environment aimed at transferring motor and cognitive skills related to the assembly and maintenance of industrial equipment. The specially designed IMA-VR hardware platform is used to work with the system. The platform consists of a screen and a haptic device. This device allows a trainee to interact with and manipulate virtual training scenes. The system records accomplished tasks and statistics, e.g., time, required assistance, errors made and correct steps. An example of a fully immersive AR system is the training system for repairing electrical switchboards developed by Schneider Electric in cooperation with MW PowerLab [3]. The system is used for training in operation on electrical switchboards and replacement of their parts. The system uses the Microsoft HoloLens HMD. After a user puts on the HMD, the system scans the surroundings for an electrical switchboard. The system can work in two ways: providing tips on a specific problem to be solved or providing general tips on operating or repairing the switchboard. Semantic modeling of VR content A number of works have been devoted to ontology-based representation of 3D content, including a variety of geometrical, structural, spatial and presentational elements. A comprehensive review of the approaches has been presented in [4]. Existing methods are summarized in Table 1. Five of the methods address the low (graphics-specific) abstraction level, while six methods address a high (general or domain-specific) abstraction level. Three of those methods are general, i.e., they may be used with different domain ontologies. For the methods that address a high abstraction level in specific application domains, the domains are indicated. Table 1 lists, for each approach, the application domain addressed at the high abstraction level (approaches addressing only the low, 3D-graphics-specific level are marked accordingly): De Troyer et al. [5]-[9]: general; Gutiérrez et al. [10], [11]: humanoids; Kalogerakis et al. [12]: low level only; Spagnuolo et al. [13]-[15]: humanoids; Floriani et al. [16], [17]: low level only; Kapahnke et al. [18]: general; Albrecht et al. [19]: interior design; Latoschik et al. [20]-[22]: general; Drap et al. [23]: archaeology; Trellet et al. [24], [25]: molecules; Perez-Gallardo et al. [26]: low level only. The presented review indicates that there is a lack of a generic method that could be used for creating interactive VR training scenarios in different application domains. The existing ontologies are either 3D-specific (with a focus on static 3D content properties) or domain-specific (with a focus on a single application domain). They lack a domain-independent conceptualization of actions and interactions, which could be used by non-technical users in different domains to generate VR applications with limited help from graphics designers and programmers. In turn, the solutions focused on 3D content behavior, such as [27], [28], do not provide concepts and roles for the representation of training scenarios. Ontological Representation of VR Training Scenarios A scenario ontology has been designed to enable semantic representation of VR training scenarios. The scenario ontology consists of a TBox and an RBox. The TBox is a specification of classes (concepts) used to describe training scenarios. 
The RBox is a specification of properties (roles) of instances (individuals) of the classes. A particular training scenario is an ABox including instances of TBox classes described by RBox properties. The scenario ontology and particular training scenarios are separate documents implemented using the RDF, RDFS and OWL standards. RDF is the data model for the ontology and scenarios. In turn, RDFS and OWL provide vocabularies, which enable expression of such relations as concept and role inclusion and equivalence, role disjointedness, individual equality and inequality, and negated role membership. The entities specified in the scenario ontology as well as the relations between them are depicted in Fig. 1. The entities encompass classes (rectangles) and properties (arrows) that fall into three categories describing: the workflow of training scenarios, objects and elements of the infrastructure, and equipment necessary to execute actions on the infrastructure. Step, which is the basic element of the workflow, which consists of at least one Activity. Steps and activities correspond to two levels of generalization of the tasks to be completed by training participants. Activities specify equipment required when performing the works. In the VR training environment, it can be presented as a toolkit, from which the user can select the necessary tools. Steps and activities may also specify protective equipment. Actions, which are grouped into activities, specify particular indivisible tasks completed using the equipment specified for the activity. Actions are executed on infrastructural components of two categories: Objects and Elements, which form two-level hierarchies. A technician, who executes an action, changes the State of an object's element (called Interactive Element), which may affect elements of this or other objects (called Dependent Elements). For example, a control panel of a dashboard is used to switch on and off a transformer, which is announced on the panel and influences the infrastructure. N-ary relations between different entities in a scenario are represented by individuals of the Context class, e.g., associated actions, elements and states. Non-typical situations in the workflow are modeled using Errors and Problems. While errors are due to the user, e.g., a skipped action on a controller, problems are due to the infrastructure, e.g., a controller's failure. Designing VR Training Scenarios The concept of the method of modeling VR training scenarios is depicted in Fig. 2. The method consists of two main stages, which are accomplished using two modules of the editor we have developed. At the first stage, electricians who directly train new specialists provide primary information about scenarios using the Scenario Editor tool. At the second stage, the information collected from the first stage is used by the managers of technical teams to refine, manage and provide scenarios in their final form using the Semantic Scenario Manager. Next, the final scenarios are used to train specialists with the VR application. Scenario Editor The Scenario Editor is a visual tool based on MS Excel. Its main goal is to enable efficient and user-friendly collection of data about training scenarios by electricians who directly work with trainees and the high-voltage installations. Scenarios are stored as Excel files based on a specific scenario template. A single scenario is represented by several worksheets, each worksheet contains numerous rows with data. 
Data in a row is organized in a pair <attribute, value>. Rows containing data relating to the same topic are grouped into sections, where each section is identified by a header. The Scenario Exporter has been implemented as an Excel extension using C# programming language. Its class diagram is presented in Fig. 3. The OntologyStore class is responsible for managing mappings between scenario content (scenario sections and rows within the sections) and elements of the scenario ontology (classes and properties). The mappings are stored in a template file-the same file which is used by the Scenario Editor. While instantiating, the OntologyStore class parses the template file and builds in-memory object-oriented representation of the mappings. Each row in the template file is described by the corresponding mapping unit(s). A single mapping unit consists of three entities: Class, Property and Range. The Class entity defines a class which will be assigned to a domain individual introduced in the row of scenario content. Examples of such domain individuals are Scenario Step, Step Activity, and Activity Action. The Property entity defines an object property or a data property. The domain of that property is a class specified inline or above a row the given property is associated with. If it is a data property, the Range entity must be void; in this case, while exporting scenario content, the value inserted in a given scenario row is used as the object of the serialized triple. If it is an object property, the Range entity must be set to a class the object property refers to with optional name of a data property specified. While exporting scenario content, when no name of the data property is specified, the last seen individual of that class is used as the object of the serialized triple. Otherwise, when the name of the data property is specified, the last seen individual having the specific property value is used. The mapping units can be aggregated, i.e., more than a single mapping unit can be specified for a single scenario row. In this case, while exporting scenario content for a single row, more than one RDF triple will be generated. The resulting knowledge base includes data from two sources: the Excel file containing scenario content and a database of scene objects and equipment. The classes responsible for parsing those data sources are the ScenarioParser and the DatabaseParser respectively, both inheriting from the parent abstract class PrincipleParser. The parser classes generate instances of the DataTuple class, which represents data in an agnostic manner, i.e., independently of its origin. While conducting a parse, the parser classes use the OntologyStore class to obtain references to the appropriate mappings; the references are stored in instances of the DataTuple class together with the data value. To gain independence from the physical storage of data in various databases, the DatabaseParser class uses implementations of the IDatabas-eService interface. The RdfGenerator class represents an implementation of the IKnBaseGenerator interface for generating a semantic knowledge base in a form of RDF triples. The generating process performs as follows. First, the generator is fed with instances of the DataTuple class containing data values together with corresponding mappings to ontology elements. Then, the generator iterates through all data tuples and transforms them to appropriate RDF triples according to mappings. 
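As an illustration of this generation step, the following minimal Python sketch (using the rdflib library as a stand-in for the C# RDF classes described above) shows how a data tuple, i.e., a scenario value paired with a mapping unit <Class, Property, Range>, could be turned into RDF triples and serialized to Turtle. The namespace IRI, the identifier scheme and the example rows are hypothetical and only indicate the principle; they are not taken from the actual Scenario Exporter.

from rdflib import Graph, Namespace, Literal
from rdflib.namespace import RDF

# Hypothetical namespace; the actual IRI of the scenario ontology is not given in the paper.
STO = Namespace("http://example.org/scenario-ontology#")

def generate_triples(graph, data_tuples):
    """Turn (value, mapping unit) pairs into RDF triples, loosely mirroring the RdfGenerator."""
    last_individual = {}                      # last created individual per class ("last seen" rule)
    for value, (cls, prop, rng) in data_tuples:
        subject = STO[f"{cls}_{len(graph)}"]  # illustrative identifier scheme
        graph.add((subject, RDF.type, STO[cls]))
        if rng is None:
            # data property: the value from the scenario row becomes a literal object
            graph.add((subject, STO[prop], Literal(value)))
        else:
            # object property: link to the last seen individual of the Range class
            graph.add((subject, STO[prop], last_individual[rng]))
        last_individual[cls] = subject
    return graph

g = Graph()
g.bind("sto", STO)
rows = [
    ("Prepare the workstation", ("Step", "hasTitle", None)),
    ("Switch off transformer",  ("Activity", "belongsToStep", "Step")),
]
print(generate_triples(g, rows).serialize(format="turtle"))

Running the sketch prints a small Turtle fragment in which the Activity individual is linked to the most recently created Step individual, mirroring the "last seen individual" rule described above.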
Because, in general, a data tuple can have several mapping units assigned, each data tuple can result in more than one RDF triple generated. The generated RDF triples are stored in a form of a semantic graph represented by the Graph class. An RDF triple is represented by the Triple class and consists of three entities: subject, predicate and object. These entities are included within the graph as its nodes and are represented by various classes being implementations of the INode interface: • the UriNode class: a node with a full identifier (a name), used to uniquely represent an RDF triple entity within the whole graph, • the LiteralNode class: a node with a literal text value, enriched with optional metadata: data type and language, used to store single data values of scenario content, • the BlankNode class: an anonymous node (without a public identifier), used to group a set of other nodes into a subgraph. The IIdGenerator interface defines a method for generating RDF triples with domain-specific identifiers for individuals of objects, elements and states included in a knowledge base. The IdGenerator class, which implements this interface, first uses the IQueryManager implemented as the QueryManager class to query the semantic graph for all mentioned above individuals. Next, it uses the IIdProvider implemented as the IdProviderDatabase class to retrieve the appropriate identifiers from the database of objects and equipment. Finally, RDF triples with the identifiers are generated and asserted into a semantic graph implemented through the Graph class. A semantic graph can be serialized to a text file or saved to a remote triple store. The TurtleWriter class is used to serialize a graph to a text file compliant with Turtle syntax. Semantic Scenario Manager The Semantic Scenario Manager is an intuitive visual tool based on Windows Presentation Foundation, which is used by the managers of electricians' teams. Its main goal is to enable refinement and management of the particular training scenarios on the basis of data provided by the electricians using the Scenario Editor. The Semantic Scenario Manager presents a user with a number of simple and intuitive forms enabling modification of scenario elements. The forms include the names of attributes as well as textboxes or drop-down lists, where the user can provide the necessary information (Fig. 4). The values presented in the drop-down lists are acquired from the scenario ontology. The user needs to provide general information, such as the type of work and a scenario title. Also, the scenario must be classified as elementary, complementary, regular, or verifying. Next, based on the type of work, the user gives information about the works: their category, symbol, technology used and workstation number. The last step is to provide which elements of protective equipment are necessary to complete the training. The user can choose the equipment from a list. After completing the general information about the scenario, the manager can review and modify the particular steps, activities and actions that trainees need to perform in this scenario. In each scenario, at least one step with at least one activity with at least one action must be specified (cf. Section 3). Actions are associated with interactive and dependent objects' elements as well as possible problems and errors that may occur during the action. 
The manager can refine and manage the details of the scenario by editing its tree view, which is a widespread and intuitive form of presentation of hierarchical data (Fig. 5). The hierarchy encompasses the scenario steps, activities, actions, problems, errors and objects, which are distinguished by different icons. The user can expand and collapse the list of subitems for every item in the tree. The user can also visually add, modify and delete the items in During the scenario design, the manager can potentially make a mistake leading to unexpected results in the VR training scene. For that reason, the Semantic Scenario Manager validates the entire scenario against the Scenario Ontology (cf. Section 3) to check whether the scenario is correct. The validation is the consistency checking process on the Scenario Ontology combined with the ABox describing the scenario. It verifies multiple elements of the scenario, including mandatory fields and permitted values, the number of steps, activities and actions, as well as relations between individual instances of classes. The Semantic Scenario Manager highlights the incorrect attributes and the encompassing tree items. Demonstration and Discussion Training of employees in practical industrial environments requires designing new and modifying existing training scenarios efficiently. In practice, the number of scenarios is by far larger than the number of training scenes. One of the possible applications of our approach is the representation of the training of operators of high-voltage installations. In this case, typically, one 3D model of an electrical substation is associated with at least a dozen different scenarios. These scenarios include learning daily maintenance operations, reactions to various problems that may occur in the installation as well as reactions to infrastructure malfunction. In the presented approach, all scenarios are knowledge bases structured according to the generic scenario ontology. The scenario ontology consists of 343 axioms, 18 classes, 34 object properties and 47 datatype properties, which can be used in different scenarios. A scenario knowledge base is an ABox specifying a concrete training scenario consisting of steps, activities and actions, along with its elements and infrastructure objects, which are described by classes and properties specified in the scenario ontology (Fig. 6). Scenario knowledge bases are encoded in OWL/Turtle. To perform training, a scenario knowledge base is imported into the VR Training Application by an importer module, which -based on the scenario KB -generates the equivalent object model of the scenario. An example view of a user executing the "Karczyn" VR training scenario action is presented in Fig. 7. The example scenario "Karczyn" covers the preparation of a trainee for specific maintenance work and consists of 4 steps, 11 activities and 17 actions. For each action, there are dependent objects (44 in case of this scenario). For each step, activity, action and object, the scenario provides specific attributes (9-10 for each item). For each attribute, the name, value, command and comment are provided. In total, the specification of the course of the scenario consists of 945 rows in Excel. In addition, there are 69 rows of specification of errors and 146 rows of specification of problems. The scenario also covers protective equipment, specific work equipment and others. The generic scenario ontology (TBox) encoded in OWL takes 1,505 lines of code and 55,320 bytes in total. 
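As a side note on the validation step mentioned above, a consistency check of this kind can also be scripted with off-the-shelf semantic web tooling. The following minimal sketch assumes the scenario ontology and a scenario ABox are available as RDF/XML OWL files (the file names are hypothetical); owlready2 and its bundled HermiT reasoner are used here purely as an example, not as the tooling employed in the described system.

from owlready2 import get_ontology, sync_reasoner, OwlReadyInconsistentOntologyError

# Hypothetical file names; the actual ontology and scenario documents are not distributed with the paper.
onto = get_ontology("file://scenario_ontology.owl").load()
scenario = get_ontology("file://karczyn_scenario.owl").load()

try:
    # Runs the HermiT reasoner over the combined TBox + ABox; an inconsistent scenario
    # (e.g., an Action not linked to any Activity where the ontology requires one)
    # raises an exception that an editor could map to highlighted attributes.
    sync_reasoner()
    print("Scenario is consistent with the scenario ontology.")
except OwlReadyInconsistentOntologyError:
    print("Scenario is inconsistent - report the offending elements to the user.")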
The "Karczyn" scenario saved in Turtle (which is a more efficient way of encoding ontologies and knowledge bases) has 2,930 lines of code and 209,139 bytes in total. Implementation of the "Karczyn" scenario directly as a set of Unity 3D C# scripts would lead to very complex code, difficult to verify and maintain even by a highly-proficient programmer. The design of such a scenario is clearly beyond the capabilities of most domain experts dealing with the everyday training of electrical workers. An important aspect to consider is the size of the scenario representations. The total size of the "Karczyn" Unity 3D project is 58 GB, while the size of the executable version is only 1.8 GB. Storing 20 scenarios in editable form as Unity projects would require 1.16 TB of disk space. Storing 20 scenarios in the form of semantic knowledge bases requires only 4MB of storage space (plus the size of the executable application). The use of semantic knowledge bases with a formal ontology described in this paper enables the concise representation of training scenarios and provides means of editing and verifying scenarios correctness with user-friendly and familiar tools. Conclusions and Future Works The approach proposed in this paper enables the semantic representation of training scenarios, which is independent of particular application domains. The representation can be used in various domains when accompanied by domain-specific knowledge bases and 3D models of objects. In this regard, it differs from the approaches summarized in Table 1, which are not related to training, even if they permit representation of 3D content behavior. The approach enables flexible modeling of scenarios at a high level of abstraction using concepts specific to training instead of forcing the designer to use low-level programming with techniques specific to computer graphics. The presented editor, in turn, enables efficient and intuitive creation and modification of the scenarios by domain experts. Hence, the method and the tool make the development of VR applications, which generally is a highly technical task, attainable to non-technical users allowing them to use the terminology of their domains of interest in the design process. Future works include several elements. First, the environment will be extended to support collaborative creation of scenarios by distributed users. Second, we plan to extend the training application to support not only the training mode, but also the verification mode of operation with appropriate scoring based on user's performance. Finally, we plan to extend the scenario ontology with concepts of parallel sequences of activities, which can be desirable for multi-user training, e.g., in firefighting.
5,775
2021-01-01T00:00:00.000
[ "Computer Science", "Education" ]
Superior performance of macroporous over gel type polystyrene as support for the development of photo ‐ bactericidal materials Hexanuclear molybdenum cluster [Mo 6 Introduction In the current context of increasing antibiotic resistance there is an urgent need for alternative ways to fight bacterial infections. 1,2It has been estimated that in Europe healthcare associated infections cause 37,000 deaths every year, with an economic burden of €13-24 billion. 3In the United States, the estimated death toll is about 99,000 every year, with an economic cost of $33 billion. 4Although the general public is becoming slowly aware of this global threat, public institutions have been warning since long time ago about the imminent crisis on this regard.The problem could reach in the future unprecedented proportions since microbial resistance has been recently described both as a 'One Health Issue' and as a 'One World Issue'. 5][8][9][10][11] It is a technique consisting in the application of light to a compound capable to generate reactive oxygen species (ROS) cytotoxic to the pathogenic microbes.4][15][16][17][18][19][20] A broad number of chemical families have been proposed as effective generators of ROS, especially singlet oxygen ( 1 O 2 ). 21Among those molecules, xanthenic dyes, phenothiazinium compounds, porphyrins and phthalocyanins have been studied. 224][15][16][17][18][19][20] This last polymer is one of the most popular supports employed so far, dating back to the pioneering works of Schaap and Neckers. 36Polystyrene is a versatile system, used in applications as diverse as fluorescence sensing 37 or catalysis. 388][49] In contrast with the frequently used gel-type supports, there is another kind of resin of technological interest: macroporous resins are made of highly-crosslinked polystyrene and have the unique property of having permanent pores allowing effective mass transfer throughout the entire polymeric matrix. 39This fact is known and used regularly in the fields of catalysis and also for separations, but has not been explored, to the best of our knowledge, in the aPDI context. In this work we describe a comparative study of the photodynamic activity of two ion exchange resins used for immobilization of cluster 1 (Figure 1a), one gel-type (P gel ) and the other one macroporous (P mp ).Both resins were loaded with the same amount of photosensitizer Please do not adjust margins Please do not adjust margins Pseudomonas aeruginosa respectively) has been tested.In our preliminary communication only one type of polymer (P mp ) and one strain of bacteria (S.aureus) were tested, and the objective of the research was to check the ability of supported 1 to generate cytotoxic 1 O 2 . 23Here we focus our attention on deciphering the role of the organic polymeric matrix in the antimicrobial activity.As it will be shown in the following lines, the solid support is a variable that critically matters and hence must be taken into account in the development of effective photobactericidal materials.(right); (c) Phosporescence emission of solid powder of 1 (λ max = 678 nm),1@Pmp(λ max = 676 nm) and 1@P gel (λ max = 671 nm); (d) Emission of 1 in ACN (λ max =675 nm), acetone (λ max =676 nm),THF (λ max =674 nm) and toluene (λ max =671 nm). It must be noted that recently the research in the field of aPDI has focused its attention on the development of nanomaterials for the delivery of photosensitizers inside the pathogen cells. 
50However the polymer beads employed in this study are much larger (about 600-700 microns) than the bacteria to be killed (S.aureus are about half a micron in diameter and P.aeruginosa are few microns in length 51 ).Hence, the materials here employed can be considered as surrogates of macroscopic surfaces, and thus as models for the development of bactericidal materials to prevent biofilm formation leading to nosocomial infections, for instance. Experimental Preparation of photoactive polymers Washed resin (1g) Amberlite TM IRA-900 or Amberlite TM IRA-400 (both from Sigma-Aldrich, chloride form) was suspended in 50 mL of absolute EtOH containing dissolved cluster 1 (1.5 mg, as tetrabutylammonium salt).The suspension was stirred overnight at room temperature, filtered and the polymer washed with 100 mL of absolute EtOH.UV-vis absorption measurements before and after the overnight period showed complete exchange of chloride by the octahedral molybdenum anionic complex (no remaining cluster detected in the supernatant).Characterization of supported photosensitizers was performed by emission and diffuse reflectance spectroscopies, thermogravimetric analysis (TGA), porosimetry, confocal laser scanning microscopy (CLSM) (see below). Steady state emission The steady-state emission of the samples was recorded with a Spex Fluorolog 3-11 apparatus equipped with a 450W xenon lamp, operated in the front-face mode.The samples were introduced into quartz cells, sealed and purged with nitrogen prior to measurements.Five measurements were carried out for each sample in different areas of the solid and the results were averaged.Excitation was set at 400 nm. Time resolved emission Data was recorded using a Varian Cary Eclipse apparatus (75W pulsed lamp) with excitation at 400 nm and placing the solid samples (beads of supported photosensitizers) inside 1mL quartz cells (sealed and purged with nitrogen prior measurement) and using a holder for solids oriented at the appropriated angle to maximize the intensity of the signal. Thermogravimetric Analyses TGA experiments were performed on TG-STDA Mettler Toledo model TGA/SDTA851e/LF/ 1600 apparatus.For the determination of the water content of the polymers, 100 mg of P gel and P mp where hydrated with water for 30 min and dried with filter paper prior to perform the thermogravimetry from 25 to 200 0 C. Porosimetry Measurements were performed on Micromeritics ASAP 2020 gas porosimeter equipped with degasification system Smart Vac.1g of P gel and P mp were washed and dried at 60 o C vacuum oven for 24h prior to perform the measurements. Diffuse reflectance Spectroscopy Measurements were performed on Agilent Cary 400 UV-Vis-Nir spectrophotometer equipped with a diffuse reflectance accessory.The measurements were performed from 700 to 200 nm. Confocal Laser Scanning Microscopy (CLSM) Experiments were performed on an inverted confocal microscope Leica TCS SP8.Images where obtained with HCX PL APO CS 10x/0.40DRY objective.Excitation of samples was done with a diode laser (488 nm) and images were acquired with a PMT detector.The polymers were observed directly on Please do not adjust margins Please do not adjust margins sterilized Ibidi µ-Slide 8 Well Glass Bottom: # 1.5H (170 µm ± 5 µm) Schott glass. 
Chemical trapping of singlet oxygen Photo-oxygenation reactions were performed inside open Erlenmeyer flasks containing aerated acetonitrile solutions of the trap (50 mL, DMA 0.1 mM) and 500 mg of 1@P mp or 1@P gel .Irradiations were carried out, with continuous stirring, using two LED lamps (11W each, Lexman, ca.400-700 nm emission output) placed 3 cm away from the Erlenmeyer flask.The evolution of the photoreactions was monitored over time by means of UV-vis absorption spectrophotometry (decrease of absorbance at 376 nm).The initial points of the kinetic traces were fitted to a pseudo-first order model (ln C/C 0 = -k obs • t , where C is the concentration of DMA at a certain time t and C 0 is the initial concentration of DMA).All the groups were prepared and handled under light-restricted conditions.The six groups were shaken during the whole time of the photodynamic treatment.The antimicrobial effect was determined by counting the number of colony-forming units (CFU) / mL at different light doses up to a maximum of 200 J/cm 2 .Bacterial cultures were incubated at 35 0 C for 24 h.CFU counting was performed using Flash and Go automatic colony counter. Results and Discussion For the sake of reproducibility, P gel and P mp were obtained from commercial sources, since the properties of these ion exchange resins have been thoroughly studied from different viewpoints. 39P gel is Amberlite TM IRA 400 and P mp is Amberlite TM IRA 900, both type I strong anion exchange resins (trimethylammonium chloride functionality) differing only in the crosslinking degree and the porosity of their structure (present in P mp , negligible in P gel ).Supported photosensitizers (1@P gel and 1@P mp ) were prepared easily via chloride exchange by the dianionic complex 1 and isolated by filtration, as described earlier (final loading: 0.74 µmol of 1/g of polymer). 23he visual appearance of 1@P gel is translucent whereas 1@P mp is opaque, suggesting a different interaction with light (much different refraction index; see Figure 1b).Diffuse reflectance spectra of both materials showed unequivocally the presence of 1 (signal at ca. 405-485 nm; see ESI). The optical properties of both materials were also examined by emission spectroscopy (Figure 1c).5][26][27][28][29][30][31][32][33][34][35] In our case solid powder of 1 emitted phosphorescence at λ max = 678 nm, 1@P gel emitted at λ max = 671 nm and 1@P mp displayed emission at λ max = 676 nm.The intensity of both emissions are quenched by oxygen upon exposure of the samples to air.Also emission lifetimes are shortened (see ESI), thus confirming that the origin of the emission is the triplet state of 1.][26][27][28][29][30][31][32][33][34][35] The performance of 1@P gel and 1@P mp as bactericidal materials was tested against S.aureus and P.aeruginosa.For both types of microorganisms notable differences were found between the gel-type and macroporous resins, especially in the case of gram positive bacteria.1@P mp is more effective for the eradication of S.aureus than 1@P gel as it can be seen in Figure 2. 
At a light dose of 110 J/cm2, the decrease in S. aureus population photoinduced by macroporous 1@Pmp was 8 log units of colony forming units (CFU) (hence 99.999999% killing) [23]. At the same dose, the reduction caused by gel-type 1@Pgel is only about 4 log units. Control experiments showed that the photosensitizers in the dark, or irradiation of the polystyrene matrixes alone, caused no important reductions in the populations of S. aureus (see ESI). Hence, in the case of these Gram-positive bacteria, the effect of the matrix is very clear: the macroporous polymer favours the inactivation of the pathogens. It is worth noting that, for P. aeruginosa, both the macroporous material Pmp and the gel polymer Pgel alone (both without 1) have an intrinsic photobactericidal effect, which is potentiated by 1 (for instance, the decrease of log CFU after 200 J/cm2 irradiation is ca. 6 for Pmp, 7 for 1@Pmp, 3 for Pgel and 5 for 1@Pgel; see control experiments in the ESI). This striking performance could have practical implications for the development of inert surfaces without added photosensitizers, but the clarification of this effect falls outside the scope of this work and will be studied in detail in the future. The finding that a porous structure like Amberlite TM IRA 900 has a remarkable effect as a bactericidal support for photosensitizer 1 is important from the technological perspective, since it can point the way for future investigations with other sensitizers and matrixes. Among the reports on materials with photobactericidal properties, only a few mention the importance of porosity, and none directly compares a porous system with a nonporous material loaded with the same photosensitizer. However, some porous materials with photosensitizing properties have been studied. The group of Orellana reported a porous silicone hosting photoactive Ru(II) complexes [52,53]. Greer and coworkers studied the generation of 1O2 in a porous Vycor glass from the photophysical point of view, even envisaging the photobactericidal utility of such a material [54]. The group of Hao found that a micro-patterned structure had some advantageous photobactericidal properties for the killing of Escherichia coli, but with only a moderate 83% population reduction [55].
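For comparison across these reports, the relation between a reduction of Δ log units in the bacterial population and the percentage of killing is simply

\frac{\mathrm{CFU}}{\mathrm{CFU_0}} = 10^{-\Delta}, \qquad \mathrm{killing}\,(\%) = \left(1 - 10^{-\Delta}\right)\times 100,

so the 8 log-unit reduction achieved by 1@Pmp corresponds to 99.999999% killing and the 4 log-unit reduction of 1@Pgel to 99.99%, whereas an 83% reduction amounts to less than 1 log unit.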
The enhanced porosity of 1@Pmp (as compared to 1@Pgel), and the concomitantly better mass transport, is suggested as a preliminary hypothesis to account for the superior bactericidal properties of this material. In order to gain a deeper knowledge of the studied resins, a comparative study was performed using thermogravimetric analysis (TGA), nitrogen porosimetry, and confocal laser scanning microscopy (CLSM). TGA characterization was performed in order to estimate the amount of water absorbed by the polymers, resulting in ca. 14 and 25% (weight) for 1@Pgel and 1@Pmp respectively (Figure 3a). Porosimetry measurements were conducted in order to estimate the specific surface of both systems. For 1@Pmp a value of 66.36 m2/g was obtained (32 nm pore diameter), whereas for 1@Pgel the estimated value falls below the detection limit of the technique (Figure 3b). CLSM was envisaged as a means to visualize the distribution of the photosensitizer in the polymers. Since the phosphorescence of cluster 1 is easily quenched by oxygen, direct estimation of its distribution in 1@Pgel and 1@Pmp was not possible by CLSM. However, we employed an analogue of 1 for this task: fluorescein (another dianionic system, whose fluorescence is not quenched in air) was diffused into samples of Pgel and Pmp (1.2 µmol of dye/g of polymer). As can be seen in Figure 3c-f, the differences between both polymers are remarkable. Whereas in the case of the porous matrix the dye is distributed almost uniformly throughout the material, in the case of the gel resin the dye is exclusively confined to an outer shell of ca. 60 µm. The ultimate importance of this distribution for the photobiological performance remains to be exactly determined, but it suggests that photosensitizer 1 must be spread out over a more extended volume, hence increasing the probability of interaction with oxygen or the microbes (or both). This idea must be considered with caution, since it is well known that singlet oxygen diffusion in water is conditioned by the short lifetime of this species (3.5 µs), hence the distances travelled are limited to a few microns [56]. Additionally, fluorescein is structurally very different from complex 1 and might not exactly reflect the behavior of the molybdenum complex in the polymers. Another proof of the enhanced generation of singlet oxygen in 1@Pmp as compared to 1@Pgel comes from the different photosensitizing activity displayed by them when used to sensitize a benchmark oxygenation reaction. Samples of both resins were submitted to irradiation in the presence of the well-known 1O2 trap 9,10-dimethylanthracene (DMA) [57-61], in ethanol and in acetonitrile.
Scheme 1. 9,10-Dimethylanthracene (DMA) oxidation by singlet oxygen photosensitized by 1@P (P refers to Pmp or Pgel).
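The kinetic analysis applied to these photo-oxygenation experiments (fitting the initial points of the kinetic traces to ln C/C0 = -k_obs·t, as described in the Experimental section) can be reproduced with a few lines of Python; the absorbance values below are synthetic, illustrative numbers, not the measured data of this work.

import numpy as np

# Synthetic, illustrative DMA absorbance trace at 376 nm (proportional to [DMA]); not measured data.
t = np.array([0.0, 5.0, 10.0, 15.0, 20.0, 30.0])   # irradiation time, min
A = np.array([1.00, 0.86, 0.74, 0.64, 0.55, 0.41])

# Pseudo-first-order model: ln(C/C0) = -k_obs * t, fitted to the initial points of the trace
y = np.log(A / A[0])
slope, intercept = np.polyfit(t, y, 1)
k_obs = -slope                                      # min^-1
print(f"k_obs = {k_obs:.3f} min^-1")

The same linearization applied to the measured traces yields the k_obs values discussed below.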
Please do not adjust margins Please do not adjust margins As it can be seen in Figure 4 in both media, resin 1@P mp showed clear superiority for the generation of 1 O 2 as evidenced by the higher calculated pseudo-first order kinetic constants (0.018 min -1 for 1@P mp and 0.006 min -1 for 1@P gel in ethanol; 0.049 min -1 for 1@P mp and 0.012 min -1 for 1@P gel in acetonitrile).A compilation of characterization data, including photosensitizing activity, is presented in Table 1.To explain the superior performance of P mp as support for photosensitizer 1 as bactericidal material we have focused our attention in the better capacity of the porous material to generate 1 O 2 , as it has been demonstrated above.However other factors could be playing key roles to rationalize the bactericidal performance.Specifically it is well known the cytotoxic effect for bacteria of cationic groups (like the ammonium moieties present in our polymers) and also the recently discovered importance of the roughness of the surfaces (1@P mp and 1@P gel are very different in this regard). 62- 66Overall, the photodynamic effect of the materials here presented should be explained as a combination of all the above mentioned factors. Conclusions In summary, two readily available polystyrene matrixes (P gel and P mp ) have been used as supports for singlet oxygen photosensitizer 1, and the resulting supported sensitizers compared for the inactivation of microorganisms like Gram-positive S.aureus and Gram-negative P.aeruginosa.P gel (Amberlite TM IRA 400) is a gel structure with no permanent pores and P mp (Amberlite TM IRA 900) is a macro-reticular scaffold with permanent pores and high specific surface.The nature of the support has an enormous influence on the bactericidal action induced by light, in such a way that, at the same photosensitizer loading and light dose, 1@P gel induces the killing of 99.99% of bacteria, whereas 1@P mp is able to kill 99.999999% of such microorganisms.The case of the highly resistant P.aeruginosa is qualitatively the same, with better performance of the macroporous resin over the gel type support.The comparative study of these two types of resins, common place in the field of catalysis and separation science, is novel in the area of aPDI.Although this is a basic research, we hope that the findings here reported will help to develop in the future improved materials for the generation of singlet oxygen with applications not only in aPDI but also in related areas. Figure 1 . Figure 1.(a) Cluster 1, [Mo 6 I 8 Ac 6 ] 2-, Ac is acetate; (b) picture of 1@P mp (left) and 1@P gel Staphylococcus aureus ATCC 29213 and Pseudomonas aeruginosa ATCC 27853 were obtained from the American Type Culture Collection (ATCC; Rockville, MD).Columbia Blood Agar (BA) was purchased from Oxoid.Microorganisms were grown aerobically in BA medium at 35 0 C for 24 h.Stock inoculum suspensions were prepared in bi distilled water and adjusted to optical densities corresponding to 0.5 McFarland containing > 10 8 cell/mL.A Showtec LED Par 64 Short lamp was used for the irradiations.Irradiation was performed up to a dose of 200 J/cm 2 with a blue LED lamp (maximum emission at 460 nm, 0.013 W/cm 2 ) at a distance of 17 cm for 12 minutes and 49 seconds every 10 or 20 J/cm 2 .Three groups of microorganisms were prepared for the irradiations (and other three as controls in the darkness): 5 mL of the initial suspensions with the desired McFarland value of S. aureus or P. 
aeruginosa were dropped into different RODAC plates, and then (a) 200 mg of supported photosensitizer, (b) the same amount of control matrix (without photosensitizer), or (c) no polymer was added. The final concentration in the experiments was 40 mg polymer/mL.
Figure 2. Comparison of antimicrobial photodynamic therapy of 1@Pgel and 1@Pmp in two different bacterial species: (a) S. aureus and (b) P. aeruginosa.
4,216.8
2017-08-02T00:00:00.000
[ "Materials Science", "Chemistry" ]
Physics-based modeling and multi-objective parameter optimization of excitation coil for giant magnetostrictive actuator used on fuel injector Serving as one of the core component, the excitation coil can exert remarkable influence on the output performance of giant magnetostrictive actuator (GMA) for electronic controlled fuel injector. In this paper, a multi-objective coil optimization scheme is proposed to balance the conflicting response speed and magnetic field intensity determined by the coil parameters. Firstly, a physics-based coil model is established for optimization, whose parameters can be directly calculated by the coil dimensions. Then, with the current response of the coil calculated, a multi-objective optimization framework is conducted attempting to get the selection guideline for the enameled wire diameter of the coil. The optimal choices exist when the outer diameter of the enameled wire falls in 0.9∼1.6 mm, and the inner/outer diameter ratio is relatively high. Finally, a series of experiments are conducted and the results indicate that the proposed model is able to accurately describe the current response throughout the operating frequencies, and the optimization scheme can provide a valuable roadmap to design coil for high performance GMA. Introduction Gradually growing demand on fuel efficiency and pollutant emission of the internal combustion engine has resulted in rapid development of novel electronic controlled fuel injectors (ECI) based on smart materials including piezoelectric crystal, shape memory alloy, and giant magnetostrictive material (GMM), which possess some unique advantages over the traditional injectors. 1 Among the widely used smart materials, GMM is well-known for its excellent constitutive properties such as great output, significant efficiency, and high reliability. [2][3][4] Giant magnetostrictive actuator (GMA), designed utilizing GMM, inherits almost all the fantastic features of this material and becomes a promising device to drive the high-performance electronic controlled injectors. 5,6 Serving as the energy source of the magnetic field, the excitation coil plays an important role in determining the transient and steady state characteristics of GMA, like the response time, the magnetic field distribution, the maximum output, and the energy loss. 7,8 Therefore, it can be of paramount importance for researchers to conduct coil optimization, so that the optimal performance can be achieved when the structure of GMA is, to some extent, fixed. However, so far, only a few literatures have focused on this particular issue, and, meanwhile, many of them aim to obtain an evenly-distributed magnetic field along the axis of GMM rod. 9,10 For example, Gao et al. attempt to design and optimize the excitation coil parameters of GMA, expecting to acquire a highly-uniform magnetic field, and the uniformity rate can rise to 99.35% according to the simulation results in Ansoft. 11 Yang et al. investigate the influence of the coil length on the magnetic field distribution in GMA, and discover that it can generate quite evenly-distributed field while the coil possesses approximately the same length as the GMM rod. 12 In addition, Fan et al. propose optimization methods intended to minimize the power consumption utilizing shape factors, while Bai et al. design parallel coil with heat loss by decreasing the coupling inductance. 
13,14 Several useful conclusions are drawn in these publications, which can provide some general roadmaps to conduct excitation coil design and optimization. 15,16 However, compared with other application conditions, GMA for ECI is a little different, for the magnetic field intensity uniformity and energy efficiency are usually not the most important indices to evaluate the output performance of this kind of actuator but the response speed. [17][18][19] Actually, although GMM can realize magneto-mechanical transition in several microseconds, the overall response speed of GMA is still not as quick as expected, due to the large inductance of the excitation coil, which can account for almost the entire response time of the actuator. Therefore, coils with smaller inductance are preferred while attempting to obtain faster response actuators. However, it should not be neglected that small inductance coil can lead to inadequate magnetic field, which may jeopardize the final output of the fuel injector. Based on the above analysis, how to balance the response speed and magnetic field intensity can be a challenging issue waiting to be addressed and this will be the main focus of this paper. To the best of the authors' knowledge, only one published research focuses on this pending problem, which establishes a model to describe the relationship of coil turns, steady magnetic field intensity, and response time. 20 However, since this model is only suitable for the quasi-static excitations, and no direct conclusion on coil optimization is achieved, this research can hardly be useful in the practical optimization process. When conducting coil optimization, it is necessary to employ voltage-current model to describe the coil current behaviors under different excitation signals. Unfortunately, modeling activities from the excitation voltage to the coil current are somewhat ignored in the present researches. And most of the GMA models begin with the coil current, making it hard to analyze the actual response time using these models. [21][22][23][24] According to the establishment process, the existing voltage-current models can be roughly divided into two types, namely, the statistical models and the physical models. The statistical models are established to mimic the dynamic properties of a system by different algorithms, which attract the researchers' attention due to its high accuracy. 25,26 Different from the statistical models, the physical models are built based on certain physical mechanisms. Although the physical models are usually not as accurate as the statistical models especially in complex system cases, they are still widely used by scholars for they can explain the operating process of the target systems. Since the coil of GMA do not belong to the complex systems, the physical models are preferred, and the most popular one treats the coil as the series connection of a resistor and an inductor, which can describe the current using a first order differential equation. 1,27,28 This model exhibits good accuracy in low frequency ranges, but can cause unacceptable error while predicting the mid or high frequency responses. In this condition, some scholars propose several complicated models specially for the high frequency excitation. 7,29 However, some parameters of these models lack physical meanings, so it can be hard to use these models to guide coil optimization. Given the above analysis, some attempts have been made in this work, and the contribution can be summarized as follows. 
Firstly, a fully physics-based optimization-oriented coil current model is established, where all the parameters can be directly computed by the structural dimensions of the excitation coil. Then, utilizing this model, the magnetic field intensity and the response time can be calculated, and a multiobjective optimization scheme is proposed to obtain the optimal coil design choice. Finally, a series of experiments under different excitation signals are conducted to validate the proposed coil model and the optimization method, and several conclusions are obtained to guide the structural design of the excitation coil. The rest of this paper is outlined as follows: Structure of GMA for the fuel injector describes the basic structure and operating process of GMA for ECI; Physics-based model of the excitation coil establishes the physics-based voltagecurrent model used for optimization; Response analysis of the coil circuit model analyzes the current response of the excitation coil utilizing the proposed model; Parameter optimization of the excitation coil covers the proposed multiobjective optimization framework and achieves the optimization results; in Experimentation and model validation, experiments are conducted and the results are discussed; in Conclusions, some conclusions are provided concerning the optimal choice of the GMA coil design. Structure of GMA for the fuel injector Figure 1 shows the structure of GMA used for ECI, and its overall working principle can be expressed as follows: A rectangular voltage signal is usually exerted on the excitation coil while operating. When the signal turns to the high voltage, the response current will be generated in the coil and also form the excitation magnetic field along the coil axis. With the magnetic field, the GMM rod produces axial strain, pushing the nut and the output rod to move upward. After that, the ball valve fixed at the end of the rod opens upward, and the fuel injection can be realized. When the signal goes to the low voltage, the output rod can return under the action of spring. Then the ball valve closes, and the fuel injection activity stops. Model of the excitation coil dimensions The structure and dimensions of the coil skeleton for GMA are shown in Figure 2, where d h is the diameter of the enameled wire, d l is the diameter of the copper core in the wire, c, c + 2b and a are the inner diameter, outer diameter, and effective length of the coil skeleton, respectively. According to the geometric relation, the average winding diameter of the coil can be represented by b + c. In the following part, some dimensional relationship is analyzed including that between the overall excitation coil frame dimensions and the enameled wire dimensions, as shown in Figure 3. If the overall dimensions of the coil frame are fixed, the number of coil turns is mainly determined by the outer diameter of the enameled wire. And the layers in the axial and radial directions can be expressed as follows where b � c represents rounding down to the nearest integer. Then the total number of the coil N can be calculated as follows. where d � e represents rounding up to the nearest integer. 
Considering the winding clearance, a modification factor k N is introduced, and equation (3) can be described as Based on the above equations, the total winding length of the excitation coil can be computed as follows During the winding process, a deviation exists between the actual average coil radius and the theoretical one, which can result in a considerable gap between the actual wire length and the calculated value. Therefore, a length correction factor k l is added to equation (5), and the total length of the excitation coil can be expressed as It should be noted that the value of k l can be obtained by fitting a series of actual results with the same coil skeleton and different wire diameters. Model of the excitation coil circuit In order to study the voltage-current relationship of the excitation coil, it is necessary to establish an equivalent circuit model. For a multi-layer winding coil, it can be regarded as the series connection of an ideal resistor and an ideal inductor, and also parallel connection with an ideal capacitor, shown as Figure 4. Although this equivalent model is relatively simple, it matches the actual condition with acceptable accuracy. Especially when the signal frequency remains low, the steady and transient characteristics of the circuit can be well described. In the equivalent circuit, all the resistance, inductance, and capacitance can be calculated by the coil dimensions. The equivalent coil resistance R e can be calculated by the following equation. where ρ r denotes the resistivity of the enameled wire, and S l denotes the cross-sectional area of the copper core in the enameled wire. For the long coil whose skeleton length is larger than 0.75 times of the average diameter, the equivalent inductance L e can be calculated by the following equation where 4 denotes the inductance correction value considering the radial thickness of the coil, L 0 denotes the inductance of the solenoid with the same length and average diameter as the excitation coil, and the calculation equation can be expressed as where μ 0 denotes the air permeability, d s denotes the diameter of the solenoid, whose value is b + c, and Φ s denotes a constant related to the dimensions of the solenoid. The inductance correction value can be calculated as where α denotes the ratio of the coil length a to the average diameter b + c, ρ denotes the ratio of the coil radial thickness b to the average diameter b + c, Γ(α, ρ, β) denotes a function determined by α, ρ and β, and the calculation equation of β can be expressed as 30 In the excitation coil, the interlayer capacitance should be treated as the dominant part of the equivalent capacitance. In order to make full use of the skeleton dimensions, the "U" type winding method is usually used in the GMA coil, shown as Figure 5. Considering the energy storage capacity in the capacitance element, the equivalent capacitance between two adjacent layers of the coil can be calculated from the stored electric energy. 
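As a quick numerical illustration of how the lumped parameters introduced so far follow from the geometry, the Python sketch below evaluates the turn count, wire length, resistance and a rough inductance from the skeleton and wire dimensions. It is a simplified stand-in for equations (1)-(10): the correction factors k_N and k_l are fixed placeholder values, the ideal long-solenoid formula replaces the corrected inductance with Φ_s and Γ(α, ρ, β), and the interlayer capacitance treated next is omitted; all dimensions are illustrative rather than those of the actual GMA coil.

import math

MU0 = 4 * math.pi * 1e-7          # vacuum permeability, H/m
RHO_CU = 1.7e-8                   # copper resistivity, ohm*m

def coil_parameters(a, b, c, d_h, d_l, k_N=0.95, k_l=1.05):
    """Rough equivalent-circuit parameters from skeleton (a, b, c) and wire (d_h, d_l) dimensions (metres)."""
    n_axial = math.floor(a / d_h)             # turns per layer along the skeleton length a
    n_radial = math.ceil(b / d_h)             # winding layers across the radial thickness b
    N = k_N * n_axial * n_radial              # total turns with a winding-clearance factor
    l_wire = k_l * N * math.pi * (b + c)      # wire length from the average winding diameter b + c

    S_l = math.pi * (d_l / 2) ** 2            # copper core cross-section
    R_e = RHO_CU * l_wire / S_l               # equation (7)
    L_e = MU0 * N ** 2 * math.pi * ((b + c) / 2) ** 2 / a   # ideal-solenoid estimate of L_0
    return N, l_wire, R_e, L_e

# Illustrative dimensions (not those of the actual coil): 40 mm skeleton length, 5 mm winding
# depth, 12 mm inner diameter, 1.0 mm enameled wire with a 0.9 mm copper core.
N, l_wire, R_e, L_e = coil_parameters(a=0.040, b=0.005, c=0.012, d_h=0.0010, d_l=0.0009)
print(f"N = {N:.0f} turns, wire = {l_wire:.1f} m, R_e = {R_e:.2f} ohm, L_e = {L_e * 1e3:.2f} mH")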
In the i th and (i + 1) th layer of the coil, the calculation equation of the stored electric field energy can be computed as follows (12) where ε 0 denotes the vacuum permittivity, ε r denotes the relative permittivity of copper, l i denotes the average turn length in the i th layer, U i (k) denotes the electric potential difference of the k th turn between the i th and (i + 1) th layer in the coil, which can be obtained by the ratio of the winding length of the i th layer to the total winding length of the coil, N i denotes the number of coil turns in the i th layer, q denotes the distance between the corresponding turn in two adjacent layers, and the value can be represented by ffiffi ffi 3 p d h =2, shown in Figure 3. For the coil with n layers, the stored electric energy can be regarded as the sum of that in every two adjacent layers where W e denotes the total energy stored between adjacent layers. Meanwhile, the stored electric energy can be expressed by the equivalent capacitance as follows where C e denotes the equivalent capacitance of the excitation coil, and U denotes the voltage exerted on the coil. Response analysis of the coil circuit model With the excitation coil circuit model established, further analysis on the performance of the excitation coil and influence of the coil parameters on its response characteristics under different operating conditions can be conducted. Therefore, the sinusoidal and square wave response properties of the circuit need to be considered. When the excitation signal is employed on the coil, the response can be displayed as Figure 6, where the equivalent current in the coil can be represented by i(t). Based on the nodal current law, the total current in the excitation coil is equal to the sum of the current in each branch expressed as i(t) = i 1 (t) + i 2 (t). In addition, the voltage on each parallel branch remains equal, expressed by where u(t) denotes the voltage on the excitation coil, u c (t), u R (t), and u L (t) denotes the voltages exerted on the capacitance, resistance, and inductance in the equivalent circuit, respectively. Considering the properties of the capacitance, resistance, and inductance, the above equations can be expressed as follows 8 > > < > > : where R e , L e and C e represent the resistance, inductance, and capacitance of the equivalent circuit, respectively. Response of the sinusoidal excitation According to the theories of signal processing, the periodic excitation can be decomposed into a series of sinusoidal excitations. Therefore, analyzing the sinusoidal excitation response of the coil can be regarded as the premise and foundation for studying the response characteristics under other periodic excitations. Assume that the excitation signal exerted on the coil can be written as uðtÞ ¼ U sinðωt þ φÞ, where ω denotes the angular frequency, and φ denotes the initial phase angle. In the resistanceinductance branch, the current response can be represented by the solution of equation (16a), and the expression is i 1 (t) = i 1s (t) + i 10 (t), where i 1s (t) and i 10 (t) are the steady and transient components of the circuit response, respectively. The steady-state current can be expressed by a sinusoidal function as After that, the transient component of the response current i 10 (t) is considered, which can be regarded as the particular solution of the homogeneous differential equation in equation (16a), represented by where A is a constant, determined by the initial state of the coil. 
When the initial value of the coil current is i 10i , In particular, if i 10i equals to 0, the coil is excited in the zero initial state, and A ¼ �U =jZjsinðφ � θÞ. In the capacitance branch of the equivalent circuit, the current response is the solution of equation (16b), which can be expressed as Response of the square excitation When the injector operates, the square signal should be applied on the excitation coil. In the ideal condition, the excitation signal should realize the voltage conversion at the time point t 1 . However, in the actual circuit, the devices cannot achieve ideal state, and the voltage is unable to realize sudden change, and a time interval Δt is often obliged in this procedure. Therefore, the actual voltage signals on the excitation coil are shown as Figure 7, where U h denotes the amplitude of the high voltage, U l denotes the amplitude of the low voltage, and Δt 1 , Δt 2 , Δt 3 , Δt 4 denote the voltage rising time, high voltage duration time, voltage declining time, and low voltage duration time respectively. It can be discovered from Figure 7 that a complete excitation cycle can be divided into four stages. To obtain the current response of the system, it is necessary to analyze the characteristics of each excitation phase separately. The excitation voltage in the rising phase can be expressed as u(t)=At+B. A and B are determined by the starting and ending time of voltage rising, and for the first excitation waveform in Figure 7 Considering the rising voltage, the solution of the equation (16a) can be obtained, which can represent the current response in the resistance-inductance branch as follows where C 1 denotes a constant, determined by the initial current value in this branch. Substituting it into the general solution, the value of C 1 can be obtained as where I r10 denotes the initial current in the voltage rising phase. For the capacitance branch, its current response is determined by the changing rate of the excitation voltage. The calculation equation can be expressed as i r2 ðtÞ ¼ C e A. When the voltage reaches the high level holding phase, the excitation voltage remains constant for some time, and the general solution of the current response in the resistanceinductance branch can be represented by where C 2 is determined by the initial value of the branch current. In the capacitor branch, since the excitation voltage remains constant, the current in the capacitor always stays zero. In the voltage declining phase, the excitation signal can also be expressed by u(t)=Et+F. E and F can be calculated by the starting and ending time of the rising voltage as Similarly, the current response of the resistance-inductance branch can be obtained as follows where i d10 denotes the initial current in the voltage declining phase, namely, the final current in the high voltage holding phase. Considering the capacitance branch of the equivalent Figure 11. Pareto solution distribution after optimization. circuit, the calculation equation of the current response remains to be i d2 ðtÞ ¼ C e E. In the end, the low voltage holding phase is analyzed. During this period, the excitation voltage maintains at a constant value close to zero, and there is no current passing through the capacitance branch. In the resistance-inductance branch, the general solution of the current response is Similarly, the value of C 4 can be determined by the final current in this branch excited by the previous phase. 
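The per-phase solutions above can be cross-checked numerically. The short Python sketch below integrates equation (16) directly for a trapezoidal excitation of the form shown in Figure 7, with the resistance-inductance branch handled by explicit Euler integration and the capacitance branch taken as C_e·du/dt; all circuit and timing values are illustrative placeholders, not the parameters of the actual coil.

import numpy as np

# Illustrative equivalent-circuit and excitation parameters (placeholders, not the actual coil values)
R_e, L_e, C_e = 4.0, 2e-3, 2e-10                 # ohm, H, F
U_h, U_l = 24.0, 0.5                             # high / low voltage levels, V
dt1, dt2, dt3, dt4 = 20e-6, 2e-3, 20e-6, 2e-3    # rise, high-hold, fall, low-hold durations, s
T = dt1 + dt2 + dt3 + dt4

def u(t):
    """Trapezoidal excitation voltage over one cycle (cf. Figure 7)."""
    t = t % T
    if t < dt1:               return U_l + (U_h - U_l) * t / dt1
    if t < dt1 + dt2:         return U_h
    if t < dt1 + dt2 + dt3:   return U_h - (U_h - U_l) * (t - dt1 - dt2) / dt3
    return U_l

# Resistance-inductance branch, equation (16a): L_e di1/dt + R_e i1 = u(t), explicit Euler
dt = 1e-7
time = np.arange(0.0, 2 * T, dt)
i1 = np.zeros_like(time)
for k in range(len(time) - 1):
    i1[k + 1] = i1[k] + dt * (u(time[k]) - R_e * i1[k]) / L_e

# Capacitance branch, equation (16b): i2 = C_e du/dt (nonzero only while the voltage ramps)
i2 = C_e * np.gradient(np.array([u(tk) for tk in time]), dt)
i_total = i1 + i2
print(f"steady high-level current ~ {U_h / R_e:.2f} A, simulated maximum {i_total.max():.2f} A")

For holding phases much longer than L_e/R_e the simulated branch current settles to its steady value U_h/R_e, while shorter holding phases leave it below that value, consistent with the discussion that follows.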
From the above analysis, it can be seen that the current calculation in the capacitance branch tends to be relatively simple, which only depends on the voltage changing rate on the coil. In contrast, the resistance-inductance branch exhibits quite complex response characteristics, not only affected by the excitation voltage amplitude, but also closely related to the holding period of the high and low voltages. When the high voltage and low voltage holding phases in the excitation signal stay relatively long, the response current can reach the steady state value. In this condition, the initial branch current of the voltage rising and declining phases in each excitation cycle can be expressed as U h =R, and U l =R, respectively. On the contrary, when the holding phases are not long enough, the current will enter the next excitation phase while not reaching the steady state. The initial values of the branch current in the rising and declining phases need to be calculated iteratively based on the response of the previous phase. Parameter optimization of the excitation coil The response speed, regarded as the most important index to evaluate the performance of GMA, directly determines the fuel injection accuracy of the injector. The excitation coil contains large inductance, and will inevitably cause the lag of the response current and reduce the response speed of the system. In the design phase of the excitation coil, it is necessary to optimize the parameters aiming to shorten the response time. Meanwhile, to take full advantages of the large magnetostriction in GMM, improve the output displacement of the actuator, and reduce the heat generation problem of the device, the excitation coil should produce a relatively large magnetic field under small current condition. And in this way, the saturation magnetization of GMM can be ensured. Therefore, the parameter optimization of the excitation coil should surround the topic of improving the magnetic field intensity and reducing the current response time. Analysis of the magnetic field intensity Due to low permeability and poor magnetic conductivity, it is difficult for GMM to converge the magnetic field inside. Therefore, the closed magnetic circuit is often utilized in GMA structure design to improve the output efficiency. The magnetic circuit of GMA designed in this paper is shown in Figure 8(a). Based on the different materials and shapes, it is necessary to treat the components in GMA as different magnetic reluctance elements. The final equivalent magnetic circuit model can be displayed as Figure 8(b), where the magnetic field source is generated by the excitation coil. For the magnetic reluctance of each component in the equivalent magnetic circuit, the calculation equation can be expressed as where l i , μ i and S i respectively denote the axial length, permeability and equivalent cross-sectional area of each component in the magnetic circuit. It should be noted that the permeability of GMM merely accounts for one thousandth of that of other components inside the magnetic circuit, indicating its magnetic reluctance far larger than those of other components. As for series connected magnetic circuit, the magnetic potential shared by each component always stays proportional to the magnetic reluctance of each component. 
Therefore, the magnetic potential generated by the excitation coil is exerted almost entirely on the GMM rod, and the magnetic flux in the magnetic circuit can be calculated accordingly, where l_GMM, μ_GMM and S_GMM denote the length, permeability and cross-sectional area of the GMM rod, respectively. Substituting Φ_M(t) = μ_GMM H_GMM(t) S_GMM into equation (24) then gives equation (25), where H_GMM(t) denotes the magnetic field intensity on the GMM rod. It should be noted that a correction coefficient C_R between 0 and 1 is introduced into equation (25).

Analysis of the response time

The giant magnetostrictive actuator for ECI usually operates under DC square-wave excitation, with the high voltage opening the injector and the low voltage returning it. It is therefore important to analyze the current response time when the excitation voltage switches from high to low or from low to high. When the excitation frequency is low, the coil current can reach steady state. Let ΔI denote the difference between the high and low steady-state currents; the time at which the current change reaches 0.9ΔI is taken as the characteristic parameter for evaluating the response speed of the coil, with t_on and t_off denoting this response time in the current rising and declining phases, respectively. The derivation of equation (27) is given in Appendix A, and equation (28) is derived in a similar way. According to circuit theory, the coil response time under DC square-wave excitation depends only on the circuit parameters and is not influenced by the excitation frequency; even if the excitation frequency is so high that the response current cannot reach steady state during the rising and declining phases, the response time can still be evaluated with equations (27) and (28).

Parameter optimization of the excitation coil

Once the coil skeleton dimensions are fixed, the steady-state magnetic field and the response time of the coil are mainly determined by the number of winding turns and by the equivalent impedance, respectively, and these parameters must be optimized to achieve the best performance. Based on the coil dimension model and the coil circuit model, the turn number and the equivalent impedance of the excitation coil are both determined by the inner and outer diameters of the enameled wire, so these two diameters are taken as the final optimization parameters. In practice the inner diameter must be strictly smaller than the outer diameter, which is taken as a constraint condition in the optimization. Substituting the circuit-parameter expressions into equations (27) and (28), the magnetic field intensity and the response time can be expressed as functions of the inner and outer diameters of the enameled wire, where F(a, b, c) denotes a function of the coil dimensional parameters a, b and c, whose meanings are the same as in the sections above. As described in Model of the excitation coil circuit, α, β and ρ are all defined in terms of a, b and c. The derivation of equation (30) is given in detail in Appendix B. The steady-state magnetic field intensity and the response time for different inner and outer wire diameters are shown in Figures 9 and 10; since the inner diameter must be smaller than the outer one, only half of each surface is plotted in these figures.
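For intuition about t_on and t_off, note that the resistance-inductance branch driven by an ideal voltage step behaves as a first-order system, so the time needed to cover 90 % of ΔI is (L/R)·ln 10. The short check below uses assumed values of R, L, U_h and U_l; the paper's equations (27) and (28) refine this picture by accounting for the actual waveform and the coil dimensional parameters.

```python
import numpy as np

R, L = 2.0, 1e-4          # assumed branch resistance (ohm) and inductance (H)
Uh, Ul = 24.0, 0.5        # assumed high / low voltage levels (V)

I_low, I_high = Ul / R, Uh / R
dI = I_high - I_low

# Step response of the RL branch starting from the low steady state
t = np.linspace(0.0, 10 * L / R, 20001)
i = I_high + (I_low - I_high) * np.exp(-R * t / L)

t_on_numeric = t[np.argmax(i >= I_low + 0.9 * dI)]   # first crossing of the 0.9*dI level
t_on_closed = (L / R) * np.log(10.0)                 # closed form for an ideal step

print(f"t_on from the simulated trace : {t_on_numeric * 1e6:.1f} us")
print(f"t_on from (L/R) * ln(10)      : {t_on_closed * 1e6:.1f} us")
```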
It should be noted that the insulating layer of the enameled wire is usually thin compared with the wire diameter, so the behavior of the curves near the diagonal of Figures 9 and 10 deserves particular attention. As the inner and outer diameters of the wire increase, the steady-state magnetic field intensity rises gradually, but the response speed of the coil tends to decrease. These two performance indicators therefore constrain each other, and a multi-objective optimization method is needed to obtain the optimal coil parameters.

The MATLAB built-in function gamultiobj, which is based on the genetic algorithm (GA), performs multi-objective optimization under given constraints and inherits the advantages of the traditional GA, including strong global search ability, good robustness and high efficiency. According to the above analysis, the optimization is formulated as the simultaneous minimization of two objectives, namely the negative steady-state magnetic field intensity and the response time, subject to the constraint that the inner wire diameter remains smaller than the outer diameter, where ε denotes a given sufficiently small number used to enforce this strict inequality. Since the response time of the excitation coil is on the order of microseconds, the objective function f_2 is scaled up by a factor of 1000 during optimization to avoid numerical failure of the algorithm. The parameters of the optimization algorithm are listed in Table 1; among them, TolFun is set to an extremely small value to guarantee the accuracy of the output.

Running the optimization algorithm yields 60 Pareto-front solutions, whose distribution is shown in Figure 11. Since the Pareto front is the set of nondominated solutions, all of which are equally acceptable with respect to the objectives, the solution best suited to the coil optimization case must be selected using additional criteria. Like other ferromagnetic materials, GMM saturates under magnetic excitation: once the external magnetic field intensity exceeds a certain value, the magnetostriction hardly increases further. A sensible choice is therefore the Pareto solution with the fastest response among those satisfying the magnetic field intensity requirement. The selection is carried out with the following search procedure:

Step 1. Calculate the magnetic field intensity of all Pareto solutions and sort them.

Step 2. Obtain the H-M (magnetic field intensity versus magnetization intensity) curve of the GMM used and determine the saturation threshold.

Step 3. Keep the Pareto solutions whose magnetic field intensity exceeds the threshold, and sort them by response time.

Step 4. Choose the Pareto solution with, or close to, the shortest response time.

With this method, balanced solutions can be determined on the basis of both magnetic field intensity and response time. In this paper, the selected solutions have outer enameled-wire diameters in the range 0.9 to 1.6 mm and a relatively high inner-to-outer diameter ratio, which can serve as a guideline for coil optimization in GMA for ECI. The complete parameter optimization procedure of the excitation coil is summarized in the flowchart of Figure 12.
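The selection rule of Steps 1-4 is easy to prototype. The sketch below does not call gamultiobj; it brute-forces a grid of candidate inner/outer diameters with toy stand-ins for the two objective functions, extracts the nondominated set, and then keeps only solutions above an assumed saturation threshold before picking the fastest one. All functional forms and numbers are placeholders for the paper's equations (29) and (30).

```python
import numpy as np

def objectives(d_in, d_out):
    """Toy stand-ins for H(d_in, d_out) and t_response(d_in, d_out) (arbitrary forms)."""
    H = 4e4 * d_out * d_in / (d_in + 0.1)         # field grows with both diameters
    t_resp = 50e-6 * (1.0 + 8.0 * d_out * d_in)   # response slows for thicker wire
    return H, t_resp

# Candidate grid with the constraint that the inner diameter stays below the outer one
cands = [(di, do) for di in np.linspace(0.2, 1.5, 40)
                  for do in np.linspace(0.25, 1.6, 40) if di < do]
vals = np.array([objectives(di, do) for di, do in cands])   # columns: H, t_resp

def nondominated(v):
    """Indices of points not dominated by any other (maximize H, minimize t_resp)."""
    idx, keep = np.arange(len(v)), []
    for i, (H, t) in enumerate(v):
        dominates_i = ((v[:, 0] >= H) & (v[:, 1] <= t)
                       & ((v[:, 0] > H) | (v[:, 1] < t)) & (idx != i))
        if not dominates_i.any():
            keep.append(i)
    return keep

pareto = nondominated(vals)

# Steps 1-4: keep solutions above the (assumed) saturation threshold, take the fastest
H_sat = 3.0e4                                     # assumed saturation threshold, A/m
feasible = [i for i in pareto if vals[i, 0] >= H_sat]
best = min(feasible, key=lambda i: vals[i, 1])
d_in_best, d_out_best = cands[best]
print(f"chosen wire diameters: inner {d_in_best:.2f} mm, outer {d_out_best:.2f} mm, "
      f"H = {vals[best, 0]:.0f} A/m, t = {vals[best, 1] * 1e6:.0f} us")
```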
As the magnetic field intensity and the response time can be calculated directly from the coil dimensions using analytical expressions, almost all of the model complexity lies in the MATLAB function gamultiobj, whose complexity is O(GMN²), where G denotes the number of iteration generations, M the number of objectives and N the population size.

Experimentation and model validation

To validate the coil model and the parameter optimization method proposed in this paper, excitation coil performance experiments are conducted on the GMA testing system shown in Figure 13. The system consists of several essential parts, and its operation can be described as follows. The digital oscilloscope, as the core of the system, generates the signal waveforms specified by the master computer; after being amplified by the power amplifier, these signals are applied to the coil. The response current then develops in the coil, forming an excitation magnetic field in three-dimensional space and driving the GMM rod to produce magnetostriction. A current clamp collects the coil current, and this current signal, together with the power amplifier voltage, is fed into the digital oscilloscope; after being processed by the data acquisition software, the results are displayed on the master computer.

Validation of the coil model

With the parameters of the coil skeleton fixed, several excitation coil prototypes are fabricated from enameled wire of different diameters. As shown in Figure 14, the wire diameters of Coils 1-6 are 0.29 mm, 0.38 mm, 0.49 mm, 0.59 mm, 0.69 mm and 0.80 mm, respectively. From equation (4), with the coil skeleton dimensions fixed, the turn number of the coil should be linearly related to the reciprocal of the squared wire diameter. This relationship is plotted in Figure 15, which shows a clear linear relation between the actual coil turns and the reciprocals of the squared wire diameters. Moreover, the measured values are distributed evenly on both sides of the fitted line, supporting the accuracy of the coil model. The maximum error occurs for the 0.29 mm wire, where the difference between the actual and fitted values is 46 turns; in this case the wire diameter is relatively small, so the clearance between adjacent turns is unavoidable and harder to control than for larger-diameter wire.

Validation of the coil circuit model

Under practical experimental conditions, it is neither feasible nor sufficiently accurate to measure the resistance, inductance and capacitance of the equivalent circuit directly. In this paper, the component values of the equivalent circuit are therefore obtained by identifying the modulus and phase angle of the measured impedance. For the circuit diagram in Figure 6, the impedance can be calculated, and its modulus and phase angle follow from it. Using these relations, a particle swarm optimization algorithm is employed for parameter identification owing to its high precision, and the identified values for Coils 1-6 are listed in Table 2.
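The identification step can be illustrated as follows. The paper uses a particle swarm optimizer; purely to show the idea, the sketch below fits R, L and C of the equivalent circuit (resistance-inductance branch in parallel with the capacitance) to synthetic impedance data with a generic least-squares routine, so all numerical values are placeholders rather than the paper's measurements.

```python
import numpy as np
from scipy.optimize import least_squares

def impedance(params, w):
    """Equivalent circuit: (R + jwL) branch in parallel with capacitance C."""
    R, L, C = params
    z_rl = R + 1j * w * L
    return z_rl / (1.0 + 1j * w * C * z_rl)

# Synthetic "measurement": true values plus mild noise (all numbers are placeholders)
rng = np.random.default_rng(0)
true_params = (3.1, 2.4e-3, 4.0e-9)
w = 2 * np.pi * np.logspace(2, 5, 40)             # angular frequencies over the sweep
z_meas = impedance(true_params, w) * (1 + 0.01 * rng.standard_normal(w.size))

def residuals(params):
    z = impedance(params, w)
    return np.concatenate([np.log(np.abs(z) / np.abs(z_meas)),   # modulus mismatch
                           np.angle(z) - np.angle(z_meas)])      # phase mismatch

fit = least_squares(residuals, x0=(1.0, 1e-3, 1e-9), bounds=(0.0, np.inf))
print("identified R, L, C:", fit.x)
```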
Substituting the identified parameters into equation (6), the relationship between the modification factor k_l and the wire diameter is obtained, as shown in Figure 16 (values of k_l for the different enameled wire diameters). From Figure 16, it can be seen that when the wire diameter is larger than 0.4 mm, k_l is almost linear in the wire diameter, whereas below 0.4 mm k_l remains nearly unchanged. The value of k_l can therefore be expressed as a piecewise function of the wire diameter.

Based on this analysis, the calculated values of resistance, inductance and capacitance in the equivalent circuit are obtained as listed in Table 3, and the corresponding relative errors are plotted in Figure 17. From Table 3 and Figure 17, most of the calculated values lie quite close to the identified ones, indicating that the established model describes the properties of the excitation coil well. The maximum relative error, around 6.2 %, occurs for the capacitance. The likely explanation is that the capacitance of the equivalent circuit is comparatively small and has little influence on the coil behavior, which makes it difficult for the identification algorithm to pin down its exact value.

To further verify the equivalent circuit model and the parameter calculation method proposed in this paper, the calculated impedance is compared with the measured one; the results are shown in Figures 18 and 19. The calculated values agree well with the measured modulus and phase angle of the coil impedance throughout the operating frequency range. The largest relative error, 8.41 %, arises when computing the phase angle of Coil 6, indicating that the coil equivalent circuit model and parameter calculation method proposed in this paper possess relatively high accuracy.

Validation of the response current model

To further examine the excitation coil model and its ability to describe the transient current response, sinusoidal and square-wave excitation experiments are conducted. The experimental results under sinusoidal excitation, and their errors with respect to the calculated values, are shown in Figures 20 and 21. Under sinusoidal excitation the calculated results remain in good agreement with the experimental ones, and the model accurately describes both the amplitude and the phase of the sinusoidal response current. The error curves show that the absolute errors between the experimental and model data remain low; except for a few isolated points, the relative errors stay below 8 %, and most are smaller than 5 %. In addition, to quantify the difference between model and experiment, root mean square errors (RMSE) are calculated. For the cases shown in Figure 20, the RMSEs at the three frequencies are 2.24 mA, 1.88 mA and 2.20 mA, which are very small compared with the current amplitudes. The RMSEs for Figure 21 are 5.85 mA and 28.04 mA; these larger values can be attributed to measurement errors of the current clamp in its higher measurement range, but relative to the current amplitudes of Coil 4 and Coil 6 they still indicate good model accuracy.
In addition to sinusoidal excitation, a series of DC square-wave signals at different frequencies are applied to Coil 1, and the current response is shown in Figure 22. At the different frequencies the model calculations coincide well with the experimental results, and the model retains high accuracy in describing the basic characteristics of the current response, such as the upper and lower limits, the rising and declining times and the overall trend. The error curves in the figures show that the absolute errors are even smaller than those under sinusoidal excitation. The computed RMSEs at the three frequencies shown are 1.79 mA, 1.78 mA and 1.81 mA, indicating the model's suitability for square-wave excitation. Figure 23 shows the current response curves of Coil 2 and Coil 4 at an excitation frequency of 300 Hz; the RMSEs are 3.68 mA and 9.90 mA, respectively. Combined with the experimental results of Coil 1, the model shows strong adaptability to coils of different dimensions and can accurately predict the square-wave excitation results of coils with different dimensional parameters.

Validation of the multi-objective optimization

To validate the multi-objective optimization scheme of this paper, the responses of all coils are obtained and plotted in Figure 24. The steady-state intensity varies considerably with the wire diameter, whereas the response speeds are harder to compare because the differences are small. To analyze the results more clearly, the response time and the steady-state current of each coil are extracted and compared with the calculated values, as shown in Figure 25. The calculated results predict the experimental ones well, with a maximum relative error of about 3.45 % for the magnetic field intensity and 9.85 % for the response time. The larger error in the response time arises because measurement noise makes it difficult to extract an accurate response time from the experimental data. The results also show that the steady-state magnetic field intensity increases markedly as the wire diameter increases, whereas the trend of the response time is less clear because the inner and outer diameters of the enameled wire increase simultaneously. Among the prototypes, the best choice is Coil 6, whose dimensions lie very close to the optimal range.

Discussion

From the experimental results and the corresponding analysis, it can be concluded that the proposed method performs the multi-objective optimization of GMA coil design well. Nevertheless, a few problems remain to be solved in future work:

(1) The optimization-oriented coil model needs further improvement. Although the model results agree well with the experimental results in most cases, some deviations exist, especially when the response current in the coil is small, indicating room for improvement in the coil model.

(2) Processing of the experimental data needs to be investigated carefully. In this paper, the experimental data were not processed to suppress noise before being compared with the model data, because the noise does not affect the data strongly. Since noise is nevertheless present, however, data processing methods such as filtering should be added, which would make quantitative comparison between the experimental and model data easier.
Conclusions

This work provides a general roadmap for excitation coil modeling and optimization for GMA used in ECI. The theoretical analysis and the experimental work lay a foundation for further research on the design and application of GMA. The main conclusions are as follows:

(1) An optimization-oriented excitation coil model is established for GMA, in which the model parameters can be calculated directly from the coil dimensions.

(2) Using the established model, the sinusoidal and square-wave responses of the excitation coil are analyzed, which provides a concise method to predict the final coil performance under different excitation forms.

(3) The influence of the coil dimensions on its response is investigated. Multi-objective optimization is conducted to balance the trade-off between the response speed and the steady-state magnetic field, and the optimal range of dimensional parameters is obtained.

(4) Several coil prototypes are manufactured from enameled wires with different parameters. A series of experiments shows that the excitation coil model describes the current response well throughout the operating frequency range, and that the optimization scheme can serve as a guideline for designing excitation coils for GMA in high-performance ECI.

Declaration of conflicting interests

The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.

Funding

The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: This work is supported by the National Natural Science Foundation of China (Project No. 51275525) and the First Batch of Yin Ling Fund (Project No. ZL3H39).

Data availability

The data that support the findings of this study are available from the corresponding author upon reasonable request.

ORCID iD

Ce Rong  https://orcid.org/0000-0002-4280-7676

Appendix A. Calculation of equation (27)

The time at which the current change reaches 0.9ΔI is taken as the characteristic parameter for evaluating the response speed of the coil. Therefore, the rise time of the coil current from U_l/R to (0.1U_l/R + 0.9U_h/R) needs to be calculated. For the actual excitation waveform of Figure 7, the steady-state coil current at the time point t_1 is U_l/R. Combining equations (20) and (21), the coil current at the time point t_2 can be calculated as the sum of the currents in the two branches. In the high-level voltage phase, since no current flows in the capacitor branch, the response time can be calculated from the expression for i_h(t), where C_2 is a constant obtained from the continuity of the coil current at the time point t_2. Substituting (0.1U_l/R + 0.9U_h/R) into equation (A.3) then yields the required expression for t_on.

Appendix B. Calculation of equation (30)

Substituting equations (6), (7) and (8) into equation (27), the ratio L/R can be expressed in terms of the coil dimensional parameters, and equation (B.1) can be simplified accordingly. The response time t_on then follows from the resulting expression, which combines Γ(α, ρ, β) with the factor 4ρ_r k_l k_N ab(b + c)/(d_h² d²).
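Appendix A's continuity argument can be reproduced numerically in a few lines. With assumed values of R, L, the voltage levels and the rise time, the sketch below evaluates the current at the end of the rising ramp from the phase-wise solution and then solves the high-voltage exponential for the instant at which the current reaches 0.1U_l/R + 0.9U_h/R.

```python
import numpy as np

R, L = 2.0, 1e-4                      # assumed branch parameters (ohm, H)
Uh, Ul, dt1 = 24.0, 0.5, 20e-6        # assumed voltage levels (V) and rise time (s)

# Current at the end of the rising ramp (u = A*t + B, starting from the low steady state)
A, B = (Uh - Ul) / dt1, Ul
def i_part(t):
    return (A / R) * t + B / R - A * L / R**2
i_t2 = i_part(dt1) + (Ul / R - i_part(0.0)) * np.exp(-R * dt1 / L)

# High-voltage holding phase: i_h(t) = Uh/R + C2*exp(-R t / L), C2 fixed by continuity at t2
C2 = i_t2 - Uh / R
i_target = 0.1 * Ul / R + 0.9 * Uh / R
t_after_t2 = -(L / R) * np.log((i_target - Uh / R) / C2)

print(f"current at t2: {i_t2:.3f} A")
print(f"t_on = {dt1 * 1e6:.1f} us (rise) + {t_after_t2 * 1e6:.1f} us (exponential part)")
```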
9,980.6
2022-05-01T00:00:00.000
[ "Physics", "Engineering" ]
Spatial coalescent connectivity through multi-generation dispersal modelling predicts gene flow across marine phyla Gene flow governs the contemporary spatial structure and dynamic of populations as well as their long-term evolution. For species that disperse using atmospheric or oceanic flows, biophysical models allow predicting the migratory component of gene flow, which facilitates the interpretation of broad-scale spatial structure inferred from observed allele frequencies among populations. However, frequent mismatches between dispersal estimates and observed genetic diversity prevent an operational synthesis for eco-evolutionary projections. Here we use an extensive compilation of 58 population genetic studies of 47 phylogenetically divergent marine sedentary species over the Mediterranean basin to assess how genetic differentiation is predicted by Isolation-By-Distance, single-generation dispersal and multi-generation dispersal models. Unlike previous approaches, the latter unveil explicit parents-to-offspring links (filial connectivity) and implicit links among siblings from a common ancestor (coalescent connectivity). We find that almost 70 % of observed variance in genetic differentiation is explained by coalescent connectivity over multiple generations, significantly outperforming other models. Our results offer great promises to untangle the eco-evolutionary forces that shape sedentary population structure and to anticipate climate-driven redistributions, altogether improving spatial conservation planning. 1) The authors use Wright's classic formula based on an island model (equal population sizes and equal migration rates among all demes) to convert migration probabilities from their biophysical model into Fst. This simply doesn't make sense. First, we are clearly not in an island model and both the biophysical model and the empirical data clearly violate most of its assumptions (Whitlock and McCauley 1998) . Second, the m in this conversion is the proportion of migrants, or more specifically to the system, the proportion of individuals that send a migrant to another deme. This is not analogous to the dispersal probability estimated from the biophysical model and corrected for multiple generations etc. Finally, the use of a constant and low (for marine populations) Ne just amplifies the violation of the model's assumptions of equal Ne. Fluctuation in local Ne is probably the largest component in the mismatch between Fst and various models (Faurby and Barber 2012). A possible alternative is to simply look for correlations between observed Fst and the different corrections to dispersal probability. 2) Lagrangian methods for biophysical modeling get a lot of attention in seascape genetics because they more realistically model particle movement, but where they fall down is in the relatively small number of particles that they can model. This is important in population genetic applications because Fst is sensitive to a small number of longdistance dispersal events. 100 particles for >1000 demes is fine for a Lagrangian model, but it comes nowhere close to modeling the actual number of larvae released and it probably misses nearly all of the long-distance dispersal events in the Mediterranean system. I'm not sure what can be done with the manuscript at this stage, but at least a healthy paragraph of discussion is warranted. 
# Specific Comments Title and throughout -While I understand that the word "coalescent" is used correctly here, it runs the risk of confusing readers (as it did me) that it is referring to the coalescent theory of population genetics. In this paper, it is the spatial model that is coalescent, not the genetic model. L88 -"While a small proportion of migrants could be sufficient to ensure gene flow between distant populations 7, the inherent spatial scales of genetic structures are generally a few orders of magnitude higher than potential dispersal distances over a single generation 25, even for species exhibiting extremely rare long-distance dispersal 26,27." Not sure I follow this sentence, which may just be a grammatical thing. Are you saying that in the ocean, the spatial scale at which populations are structured is orders of magnitude greater than dispersal distances? I would agree. L406 -cumulative L262 and throughout -because this journal uses numerical citations, author names and year need to be cited when using them in a sentence. L270 "which is comparable to a previous meta-analysis" -also comparable to Selkoe and Toonen 2011. L276 "Single-generation dispersal models are worse than IBD models to predict genetic connectivity probably because IBD is supported by robust theory." -this seems like an oversimple explanation. The values from a biophysical model are just another distance after all. Reviewer #1 (Remarks to the Author): Predicting gene flow patterns from simulations as done in the present work offer promising approaches to unveil eco-evolutionary forces shaping population differentiation. In species where connectivity is mainly driven by dispersal larval phases their modelling has been done considering only single generations. The authors compare several models of dispersal (Euclidian distance, Sea least-cost distance, singlegeneration explicit, multi-generation explicit and multi-generation implicit) with the observed Fst distances reported in 58 population genetic studies. They found that multi-generation coalescent connectivity is significantly better and explain 50% of observed genetic differentiation variance. The obtained results seem reasonable and evolutionary meaningful and merit their publication. Thank you for the positive appreciation and for your interests in our work. Why for dispersal models testing IBD, Euclidian and sea-least cost distances were chosen instead of the shortest distance following the coast line? How meaningful in biological terms for the species would be the comparison with these three models? This information could be provided and discussed Our paper does not intend to test IBD by itself; conversely, previous models of spatial genetic structures short-listed here are cross-compared to show that our novel models perform better than classical approaches. Here we retain the well-documented "sea-least cost distances" instead of the "shortest distance following the coastline" to avoid redundancy (in the absence of islands, both measures would be very similar) and since the latter is less accurate than the former. This said, we acknowledge (in Discussion, l.321-326) that these methods have little biological meaning as they only relate to geographical distances without considering properly the complex movements of individuals. Through the development and application of our multi-generation dispersal models simulating plainly propagules' movements across the seascape, we hope to limit the shortcomings of IBD models in the near future. 
It seems to me that 100 propagules were simulated in each release and for each experiment repeated approximately 10 times per year (10-day periodicity) along 10 years. The number of Lagrangian experiments in Table SI-4 is related to that? Please rewrite for clarity. This is correct, we implemented 100 propagules in each population, now referred to as localities (i.e. the green squares displayed in Figure 1b,c). There are approximately 1,200 localities in each habitat, meaning we tracked approximately 120,000 propagules per Lagrangian experiment. The Table SI-4 refers to the number of Lagrangian experiments aggregated for each spawning season. Finally, dispersal probabilities associated to each study were obtained by integrating about 12,000,000 (seasonal spawning) to 48,000,000 (annual spawning) propagules trajectories. The main text (in Methods, subsection "Biophysical Modelling") and the caption of Table SI-4 were rewritten to improve clarity. Was the simulation carried out for the 1170 populations in the shallow coastal habitat and 1163 for the neritic shelf so that each connectivity matrix is considering those populations, or was it among the 8196 nodes? The composite matrix P for a given species in single generation dispersal estimates would contain the populations sampled in the genetic study? Please specify for clarity. For estimating multi-generation dispersal probabilities considering explicit or implicit links which and how many are the putative intermediate non-sampled populations? One for each generation? We considered all the populations, now referred to as localities, delineated by both habitats (i.e. 1170 and 1163 localities for the shallow coastal and neritic shelf habitats, respectively) in the estimation of multi-generation dispersal probabilities. The composite matrix P (e.g. equivalent to single generation dispersal) for a given species does contain the dozens of localities sampled in the studies, along with all the remaining non-sampled ones. In other words, there are ~ 1200 putative intermediate non-sampled localities when estimating filial or coalescent connectivity between two sampled localities. It has been clarified in the revised manuscript (l. 441 to l. 451). Although the modelling is described in a previous paper it would be interesting to explain it in more detailed in the methods section to improve the comprehension since it is key in the analysis. While the method is fully described in Ser-Giacomi et al., (2021), we made additional efforts to explain it here in a clear, yet simple, manner without duplicating already published work. In particular, Fig. 2 presents well-thought schematics to visualize how explicit and implicit links have been computed. In addition, we rewrote the subsection "Cumulating implicit and explicit links in multi-generation dispersal models" in Methods (including simplified equations with intelligible examples) to improve clarity (l. 452 to l.496). Is the optimal M in table SI-1 the optimal number of generations? Explain what the headers are in all tables for clarity. Yes, it is correct. We added details (now in Tables SI-7, SI-8, SI-9 of the revised SI) to clarify this doubt. Note that we now employ MLPE linear mixed models to test for the predictions of observed Fst by different approaches (see responses to reviewers #2 and #3); consequently, the AIC (Akaike Information Criterion) allows assessing the quality of each model (each generation, in this case) to select the optimal generation. 
In the Mantel tests methods section it is indicated that 40 generations maximizes the significant Mantel correlations according to SI-1 and SI-4. This information is difficult to interpret from these two tables. Is it only referring to SI-1? We rewrote this section since we now used MLPE linear mixed models rather than Mantel tests. We also added Fig. SI-7 to improve comprehension. It is indicated that the optimal number of generations to best predict gene flow significantly correlates with the sampling coverage scaled by the species-specific dispersal abilities. Please clarify. We rewrote this section since we now used MLPE linear mixed models rather than Mantel tests. Fig 5a for 15 populations the probability of significant gene flow prediction is 0? Why in We binned the studies considering the number of populations sampled, so each point in this figure corresponds to a particular number of sampled populations (in the words of population genetic studies). Across the meta-analysis, only one study (i.e. Marzouck et al., 2017) used 15 populations. In this study, two genetic markers were used to assess Fst between sampled populations: a nuclear marker and a mtDNA marker, both characterized by nucleotide sequences. This study focuses on a mollusk, Hexaplex trunculus, which has the particularity to aggregate to spawn large egg masses (~ 10 cm) that then undergo intracapsular development (no larval phase indeed). This could alter its dispersal by currents and may thus explain why our model returns inconsistent gene flow predictions for both markers. Note that this figure is no longer in the revised main manuscript. The last sentence in page 10, does this mean that the model implemented can only provide accurate results for widely distributed sampling designs? We agree with the reviewer on this point: in this study and in all the population genetics analysis, the ability of our multi-generation dispersal model to accurately predict gene flow is influenced by the sampling strategy. When validating our methodology through correlations with observed Fst, it clearly shows that widely distributed sampling design gives more power (i) to evaluate effectively genetic differentiation over the species broad-scale distribution and (ii) to predict accurately gene flow (i.e. more points in the mixed model). Low correlations can thus arise either from the model itself or from sampling deficiencies, and there is no statistical way to objectively tease those two hypotheses apart. Reviewer #2 (Remarks to the Author): The study uses existing data on 47 marine organisms over the Mediterranean basin to test multigeneration and coalescent multi-generation dispersal models against more classic isolation-bydistance (IBD) and single-generation dispersal models. The results show that the multi-generation and coalescent multi-generation models explain a greater proportion of genetic variation. Furthermore, the multi-generation models provide the opportunity to explore the number of generations that maximise the fit between predicted and observed values of genetic differentiation. The large dataset considered also allows to do this with respect to the spatial scale and number of populations considered. The results show that the numbers of generations relevant to link demographic and genetic connectivity are in the order of tens of generations, that a few tens of populations need to be sampled, and that these numbers depend on the pelagic larval duration of the species considered. The manuscript is clear and well written. 
Assuming that the model published in reference 42 (Ser-Giacomi et al. 2021, which I did not review) is correct, the study appears to be sound. It is an important contribution because it contributes to improve our understanding of the link between demographic and genetic connectivity, which represents an important knowledge gap. Thank you for the positive comments; we are glad that you consider this work promising and impacting as it bridges an important knowledge gap. We also hope that, once published, it will help the research community to better comprehend the causes and implications of both demographic and genetic connectivity. One aspect that I find paradoxical is that in the island model that is used to make the link between modeled dispersal and modeled genetic structure, dispersal is not spatially explicit (i.e. it is equally likely between any pair of population). So a classic model in which dispersal is explicit but not spatially explicit is used to test a spatially explicit model that forcefully shows that space is important when interpreting genetic structure. How can this conundrum be resolved? We had initially used the island model to transform our multi-generation dispersal probabilities into genetic distances and then used Mantel test with Pearson correlation method to test for correlations with observed genetic differentiation. Despite the caveats intrinsically linked to the island model theory (as documented by the reviewer), we had chosen this model because it corresponds to a reciprocal transformation of dispersal probabilities into genetic distances. As the relationship between Fst and Nem was merely interpreted, any reciprocal conversion would work fine. Following this relevant comment, we now use a reciprocal transformation of dispersal probabilities; we rewrote accordingly the sections Methods, Results and SI-V in the revised manuscript. There is indeed no consensus on how to transform "dispersal probabilities" or "oceanographic distances" into "genetic distances". Some studies used a log10 transformation (e.g. Crandall et al., 2012; Jahnke et al., 2018), which however does not allow handling null probabilities. We also tested (not shown) Reynold's distance log10(1-Fst), which (i) returned worse results than those obtained with the reciprocal transformations and (ii) does not allow comparing models with AIC. Last but not least, we use in the revised manuscript Fst/(1-Fst) instead of Fst when testing for predictions of our five dispersal models to emphasize that fact the island model has been replaced by a more accurate framework (Rousset, 2001). Also the island model assumes well-defined populations but the "populations" considered in the model (black and green squares in Figure 1b and c) are clearly not discrete populations. We do not rely anymore on the island model in the revised manuscript so that this comment is now outdated. Nevertheless, the reviewer is right for the physical assumptions of our models: it evaluates multistep connectivity across an ensemble of contiguous, yet discrete, sampled, or non-sampled populations of a given habitat (now called localities). It may thus appear quite different from the discrete and almost-isolated populations assumed by the island model. We made this choice because broad-scale spatialized data of both substrate and species distribution do not exist or are not precise and homogeneous enough over the Mediterranean Sea to properly delineate discrete populations for each species of the meta-analysis. 
It is perhaps one of the main difficulties of investigating connectivity over temperate seascapes (characterized by relatively long and continuous coastlines with diverse and patchy habitats) as compared to tropical seascapes (characterized by small isolated islands and less variable habitats) such as in the South pacific (Crandall et al., 2012) or the Caribbean Sea (Kool et al., 2010). The study shows that the multi-generation and coalescent and multi-generation dispersal models outperform the IBD and single-generation models but Figure 3 suggests that the coalescent aspect only represents a slight improvement to the multi-generation model. In which cases/situations is the coalescent aspect most important? The coalescent model is conceptually the most natural way of evaluating gene flow between two contemporary sampled populations (i.e. the two sampled populations share the same temporality, contrary to filial connectivity which consider both populations isolated by the number of generation considered, see Figure 2b,c). Moreover, in the case of self-recruitment, filial connectivity is a particular case of coalescent connectivity (e.g. when A = k2 in Figure 2c). In other words, and considering that simulated self-recruitment is prominent, filial connectivity appears included within coalescent connectivity. Then, (i) when explicit links are dominant, implicit links would not add much information (i.e. coalescent connectivity is equivalent to filial connectivity), (ii) when explicit links are weak, implicit links can be strong. The latter situations are those where coalescent connectivity largely improve any evaluation solely based on filial connectivity. The model assumes symmetric dispersal but dispersal is probably highly asymmetric (the oceanographic model can be used to address this in detail). How is that expected to affect the results? The reviewer is right, dispersal is, in nature, highly asymmetric. However, Fst are symmetric. Hence, we had to transform asymmetric dispersal probabilities between two populations (e.g. PAB from A to B and PBA from B to A in Fig. 2b) into a unique and symmetric metric evaluating connection probabilities thanks to current-driven dispersal. This transformation would not be necessary if we compiled in our meta-analysis directional migration value obtained by other genetic analysis methods (e.g. DivMigrate or GENECLASS2, Jahnke et al., 2018). However, these methods are not as ubiquitous as Fst, preventing the global synthesis achieved through our meta-analysis. Note that this asymmetric to symmetric transformation applies to filial connectivity only; indeed, coalescent connectivity is symmetric by construction (Ser-Giacomi et al., 2021). The strength of this transformation is to remain in a probabilistic framework: we do not compute a mean or select the max/min value between PAB-PBA, but we rather look for the maximized probability of connection between two populations for a given number of generations. For example, if P AB = 0.9 and P BA = 0.1; the symmetric probability will be 0.91. Or if PAB = 0.5 and PBA = 0.5; the symmetric probability will be 0.75. Another point is that this approach requires a sophisticated distributional and oceanographic model of the study area. In the absence of such a model this approach cannot be implemented. This is important to remind. The reviewer is totally right, any estimation of currents-mediated connectivity (i.e. 
single generation dispersal, multi-generation dispersal using either explicit or implicit connections) requires velocity fields from oceanographic models. Nowadays, operational ocean models have been developed for many oceanic systems at both regional and global scales and many products are available for the scientific communities. As an example, we used velocity fields for the Mediterranean Sea available on https://resources.marine.copernicus.eu/products, but other oceanic domains are available (i.e. Baltic sea, Antarctic ocean, Atlantic ocean, up to the global ocean). Finally, considering not the general approach but the specific case of the Mediterranean Sea: did the study reveal any new pattern or process? In accord with the scope of the target journal and its international readership, we focus the writing on the general results that are understandable by anyone and are applicable anywhere, without going into details on the Mediterranean Sea. In other words, the Mediterranean Sea is here used a natural laboratory to investigate how to best simulate gene flow. Note that in Ser-Giacomi et al., (2021), we reported that transport barriers (often invoked in Mediterranean population genetic studies) are indeed permeable to implicit connections, notably the Oran-Almeria front and the Balearic front, suggesting that transport barriers are permeable to gene flow. From a genetic perspective, our meta-analysis showed that the southeastern Mediterranean shorelines are largely under-sampled, preventing a global comprehension of basin-scale genetic differentiation patterns. As said in the manuscript, the coalescent connectivity concept offers great perspectives to further investigate evolutive processes over the Mediterranean Sea. Introduction "While a small proportion of migrants could be sufficient to ensure gene flow between distant populations": precise that this is considering an infinite island model. Thanks, the corresponding sentence has been rewritten in the revised manuscript. Results "for phylogenetically divergent 47 marine species ": rephrase Thanks, it has been modified in the revised manuscript. Discussion replace "so that it can be readily apply" by "so that it can be readily applied" Thanks, it has been modified in the revised manuscript. "About one third of the compiled studies displayed significant IBD predictions with a mean Mantel R²" clarify what is meant by "with a mean Mantel R²" Thanks, it has been detailed in the revised manuscript. Thanks, it has been modified in the revised manuscript. Reviewer #3 (Remarks to the Author): The authors present a meta-analysis of 58 population genetic studies from the Mediterranean basin in which they use new distance measures based on connectivity probability graphs from a Lagrangian biophysical model to explain Fst among populations. The distances they use are the cumulative products of multigenerational dispersal through the graphs; both an explicit distance based on dispersal from parents and an implicit distance which includes dispersal by siblings are calculated. Both of these multigenerational distances from a biophysical model have a higher mean Mantel R^2 with observed Fst than more traditional distances such as Euclidean or overwater distance, with the implicit distance having the highest mean correlation. Interestingly, genetic sampling strategy is found to be predictive of a significant correlation. 
This is an interesting study, and I'm quite excited that the authors have taken graph theory the extra generational steps beyond what others have done to derive these distances and show that they generally do a better job in explaining observed Fst across a decently large sample of species. However I have a number of reservations about their methods. Thank you very much for your interests in our work and for the constructive comments. We have carefully addressed all your reservations below, which helped us to further improve the quality of our manuscript. 1) The authors use Wright's classic formula based on an island model (equal population sizes and equal migration rates among all demes) to convert migration probabilities from their biophysical model into Fst. This simply doesn't make sense. First, we are clearly not in an island model and both the biophysical model and the empirical data clearly violate most of its assumptions (Whitlock and McCauley 1998) . Second, the m in this conversion is the proportion of migrants, or more specifically to the system, the proportion of individuals that send a migrant to another deme. This is not analogous to the dispersal probability estimated from the biophysical model and corrected for multiple generations etc. Finally, the use of a constant and low (for marine populations) Ne just amplifies the violation of the model's assumptions of equal Ne. Fluctuation in local Ne is probably the largest component in the mismatch between Fst and various models (Faurby and Barber 2012). A possible alternative is to simply look for correlations between observed Fst and the different corrections to dispersal probability. That is a good point indeed (also raised by reviewer #2) that we have taken into account. As reported above, we previously used the inverse function of the island model to transform our multigeneration dispersal probabilities into genetic distances, without really exploiting the relationship between Fst and Nem. Following these valuable comments, the island model was disregarded from the revised manuscript, which now uses different models of reciprocal transformations (see Methods, Results and section SI-5 in the revised manuscripts). 2) Lagrangian methods for biophysical modeling get a lot of attention in seascape genetics because they more realistically model particle movement, but where they fall down is in the relatively small number of particles that they can model. This is important in population genetic applications because Fst is sensitive to a small number of long-distance dispersal events. 100 particles for >1000 demes is fine for a Lagrangian model, but it comes nowhere close to modeling the actual number of larvae released and it probably misses nearly all of the long-distance dispersal events in the Mediterranean system. I'm not sure what can be done with the manuscript at this stage, but at least a healthy paragraph of discussion is warranted. The reviewer raises an interesting point that motivated us to do further analyses to test the sensitivity of long-distance dispersal events to the number of particles released in our biophysical model. To do so, we compared synthetic dispersal kernels over a single generation for all the sampled populations of the shallow coastal habitat using 100 (as in the manuscript) and 1000 particles per node. 
Dispersal kernels relate the distance of all connected populations to any sampled population with the probability of the connected populations to act as the sources of the sampled populations (as shown in Figure R1a,b). The binned comparison between both dispersal kernels (i.e. the probabilities associated to each distance categories, Fig. R1c) showed that long dispersal events are not sensitive to the number of particles modelised ( Fig. R1d; r = 0.9998***). We recognize that rare long-distance dispersal events could shape the patterns of genetic differentiation through density-dependent processes (e.g. gene surfing, see Waters et al., 2013), and is more likely related to exceptionally long PLDs (rare event for which biological knowledge is sparse) than numerical restrictions. It could be very interesting to investigate the impact of these rare longdispersal events on genetic differentiation in further work. Figure R1: Sensitivity of long-distance dispersal events to the number of simulated particles. Test is done for one exemplary dispersal event (spawning date on 01/06/2012) for a 30 days PLD case study species. a,b Dispersal plume and associated connection probabilities from the most southern sampled population (red contour) using a 100 particles and b 1000 particles. c Synthetic dispersal kernels for the 559 sampled populations using 100 (blue curve) and 1000 particles (red curve). d Comparison between the dispersal kernel computed with 100 particles and the dispersal kernel with 1000 particles (r = 0.9998***). Distances from source nodes to sampled nodes were binned into 50 km classes from 0 to 800 km. , Mantel tests are not appropriate for comparing two matrices that are both autocorrelated because it will give a high rate of false positives. See Wagner and Fortin (2015) for possible alternatives. 3) As described by The reviewer raises a relevant issue that we carefully considered. Indeed, the usefulness of Mantel tests in sea/landscape genetics has been questioned in the last decade (e.g. Selkoe et al., 2016). Note however that the caveat of Mantel tests (e.g. inflating type I error rate) impacts primarily "artificial" dissimilarity matrices computed from spatialized data, such as environmental distances between two populations computed from gridded environmental datasets (e.g. Isolation-By-Environment, Wang et al., 2013), but not "pure" dissimilarity matrices that can be directly formulated in terms of distances (i.e. genetic distance or Fst, geographic distance or oceanographic/dispersal distance, Legendre & Fortin 2010, Legendre et al., 2015. This is why Guillot & Rousset (2013) stated: "The simple Mantel test is therefore suitable to test the absence of IBD from population genetic data in this case". Thus, previously used Mantel tests were appropriate for our analysis because all matrices are "pure" dissimilarity matrices. Nevertheless, and as suggested by the reviewer, seascape genetics must move beyond Mantel tests (Selkoe et al., 2016). We took this opportunity to repeat all our analyses using mixed models, such as maximum-likelihood population effects (MLPE) models, as a robust alternative to Mantel test since it permits to evaluate predictors of pairwise population genetic differentiation while accounting for non-independence of pairwise comparisons (e.g. Selkoe et al., 2016, Boulanger et al., 2019, Jahnke & Jonsson, 2022. 
Moreover, using mixed models allow comparing gene flow predictors (IBDs and single-or multi-dispersal models using either explicit or implicit connections) using relative likelihood computed with AIC. The latter properties is very useful in our work so that it has been retained, following this constructive reviewer's suggestions, returning results overall clearer than before (e.g. ~70% instead of 50% of the variance explained). # Specific Comments Title and throughout -While I understand that the word "coalescent" is used correctly here, it runs the risk of confusing readers (as it did me) that it is referring to the coalescent theory of population genetics. In this paper, it is the spatial model that is coalescent, not the genetic model. Even if it could add minor confusion, we think the use of the term "coalescent" is judicious to define our novel dispersal model with implicit connections while linking biophysical models with genetic theory. It also lays the emphasis on this concept mostly disregarded by the marine connectivity community (that remains to-date focused on filial connectivity). Thanks to this relevant suggestion, we now clearly stated in the revised manuscript (l.137) that we refer to the dispersal model, not the genetic one. L88 -"While a small proportion of migrants could be sufficient to ensure gene flow between distant populations 7, the inherent spatial scales of genetic structures are generally a few orders of magnitude higher than potential dispersal distances over a single generation 25, even for species exhibiting extremely rare long-distance dispersal 26,27." Not sure I follow this sentence, which may just be a grammatical thing. Are you saying that in the ocean, the spatial scale at which populations are structured is orders of magnitude greater than dispersal distances? I would agree. Yes, this is exactly what we wanted to address with this sentence (that has been now slightly rewritten). L110 -connections Thanks, it has been modified in the revised manuscript. L111 -Seldom used Thanks, it has been modified in the revised manuscript. Thanks, it has been modified in the revised manuscript. L146 -"observed genetic structures" (and throughout the manuscript) This is grammatically correct, but usage in the population genetics community is to have "structure" as singular. Thanks, it has been modified in the revised manuscript. L174 -what is meant by least-cost in this context? What is the cost? Is this shortest overwater distance? Yes, least-cost refers to the shortest overwater distance, we used the same semantic as McRae & Beier, 2007. We added a precision in the revised manuscript. L333 -What specifically was extracted from the 58 studies? Was it pairwise Fst? Or were the actual data-reanalyzed? If the former, there are a lot of different estimators of Fst... how was this standardized among studies? Since it is practically impossible to obtain all raw data from each historical study, re-computing all Fst values was not feasible. Pairwise Fst estimates were extracted from the 58 studies except for a few cases (one or two) where we obtained raw data and computed them (or separated outlier and nonoutlier SNPs before computing Fst). However, the design of our approach is robust and does not require standardization because there are no comparisons of Fst values among studies. Indeed, each observational study is treated individually when compared to biophysical models, prior to any global analysis of the congruence between genetic data and biophysical models. 
Thanks, we replaced "population" by "locality" in the revised manuscript. L376 -100 propagules per population is quite low!! See above our answer to a similar comment. L381 -It should be made clear that the 5 PLDs are the 5 bins that were created for the empirical studies. Thanks, it has been detailed in the revised manuscript. L406 -cumulative It has been rewritten. L262 and throughout -because this journal uses numerical citations, author names and year need to be cited when using them in a sentence. Thanks, it has been modified in the revised manuscript. L270 "which is comparable to a previous meta-analysis" -also comparable to Selkoe and Toonen 2011. Thanks, it has been added in the revised manuscript. L276 "Single-generation dispersal models are worse than IBD models to predict genetic connectivity probably because IBD is supported by robust theory." -this seems like an oversimple explanation. The values from a biophysical model are just another distance after all. Thanks, it has been detailed in the revised manuscript. L312 "Moreover, *island model theory* assumes..." Thanks, it has been detailed in the revised manuscript. Also, how were confidence intervals determined? Finally, the text refers to Fisher's combined probability being depicted here, but I can't find it. Circles or dots indicate the mean Δ Mantel R² (Mantel R² of the left model minus Mantel R² of the right model) to perform pair-wise comparisons among Pearson correlation coefficients obtained with Mantel tests. The 95 % confidence interval is determined by the multiplication of the standard error (std(Δ Mantel R²)/(Nbr of studies -1)) with the 2.5 th and 97.5 th percentile of the Student's t distribution with (Nbr of studies -1). The Fisher's combined probabilities are reported by asterisks when significant (*,** or ***) or by "ns" otherwise. We intentionally use inverse color-scale to ease visual understanding and best represent that high value of Fst intuitively represents low connectivity between population pairs. In brief, we binned the 58 studies according to their number of populations sampled. For each "number of populations sampled categories", we account for the number of significant studies among the number of non-significant studies to display a probability of significant gene flow prediction. We already addressed (see a previous answer to reviewer #1) the specific case of the null prediction for the 15
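To illustrate the filial and symmetrized connection probabilities discussed in the responses above, here is a toy sketch (the real matrices cover roughly 1,200 localities per habitat, and the exact multi-generation formulation is that of Ser-Giacomi et al., 2021, not reproduced here). Chaining single-generation probabilities through matrix powers gives a simple stand-in for explicit multi-generation links, and the asymmetric pair (P_AB, P_BA) is reduced to a single symmetric value as the probability that at least one direction connects, which reproduces the 0.9/0.1 → 0.91 and 0.5/0.5 → 0.75 examples quoted above.

```python
import numpy as np

# Toy single-generation dispersal matrix P[i, j]: probability that a propagule
# released at locality i settles at locality j (rows need not sum to 1; losses allowed).
P = np.array([
    [0.60, 0.25, 0.05, 0.00],
    [0.10, 0.55, 0.20, 0.05],
    [0.00, 0.15, 0.50, 0.25],
    [0.00, 0.00, 0.10, 0.70],
])

def filial(P, generations):
    """Chain single-generation probabilities over several generations (illustration only)."""
    return np.linalg.matrix_power(P, generations)

def symmetrize(p_ab, p_ba):
    """Probability that at least one direction connects the pair."""
    return 1.0 - (1.0 - p_ab) * (1.0 - p_ba)

print(symmetrize(0.9, 0.1))    # 0.91, as in the example above
print(symmetrize(0.5, 0.5))    # 0.75

P3 = filial(P, 3)
i, j = 0, 3                    # a pair with no direct single-generation link
print("1 generation :", symmetrize(P[i, j], P[j, i]))
print("3 generations:", symmetrize(P3[i, j], P3[j, i]))
```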
Comparative Error Analysis in Neural and Finite-state Models for Unsupervised Character-level Transduction

Traditionally, character-level transduction problems have been solved with finite-state models designed to encode structural and linguistic knowledge of the underlying process, whereas recent approaches rely on the power and flexibility of sequence-to-sequence models with attention. Focusing on the less explored unsupervised learning scenario, we compare the two model classes side by side and find that they tend to make different types of errors even when achieving comparable performance. We analyze the distributions of different error classes using two unsupervised tasks as testbeds: converting informally romanized text into the native script of its language (for Russian, Arabic, and Kannada) and translating between a pair of closely related languages (Serbian and Bosnian). Finally, we investigate how combining finite-state and sequence-to-sequence models at decoding time affects the output quantitatively and qualitatively.

Introduction and prior work

Many natural language sequence transduction tasks, such as transliteration or grapheme-to-phoneme conversion, call for a character-level parameterization that reflects the linguistic knowledge of the underlying generative process. Character-level transduction approaches have even been shown to perform well for tasks that are not entirely character-level in nature, such as translating between related languages (Pourdamghani and Knight, 2017). Weighted finite-state transducers (WFSTs) have traditionally been used for such character-level tasks (Knight and Graehl, 1998; Knight et al., 2006). Their structured formalization makes it easier to encode additional constraints, imposed either by the underlying linguistic process (e.g. monotonic character alignment) or by the probabilistic generative model (Markov assumption; Eisner, 2002). Their interpretability also facilitates the introduction of useful inductive bias, which is crucial for unsupervised training (Ravi and Knight, 2009; Ryskina et al., 2020). Unsupervised neural sequence-to-sequence (seq2seq) architectures have also shown impressive performance on tasks like machine translation (Lample et al., 2018) and style transfer (Yang et al., 2018; He et al., 2020). These models are substantially more powerful than WFSTs, and they successfully learn the underlying patterns from monolingual data without any explicit information about the underlying generative process. As the strengths of the two model classes differ, so do their weaknesses: the WFSTs and the seq2seq models are prone to different kinds of errors.

1 Code will be published at https://github.com/ryskina/error-analysis-sigmorphon2021

[Figure 1 examples: это точно → 3to to4no (Russian); mana belagitu (Kannada); техничка и стручна настава → tehničko i stručno obrazovanje (Serbian → Bosnian)]
Figure 1: Parallel examples from our test sets for two character-level transduction tasks: converting informally romanized text to its original script (top; examples in Russian and Kannada) and translating between closely related languages (bottom; Bosnian-Serbian). Informal romanization is idiosyncratic and relies on both visual (ч → 4) and phonetic (т → t) character similarity, while translation is more standardized but not fully character-level due to grammatical and lexical differences ('nastava' → 'obrazovanje') between the languages. The lines show character alignment between the source and target side where possible.
On a higher level, it is explained by the structure-power trade-off: while the seq2seq models are better at recovering long-range dependencies and their outputs look less noisy, they also tend to insert and delete words arbitrarily because their alignments are unconstrained. We attribute the errors to the following aspects of the trade-off:

Language modeling capacity: the statistical character-level n-gram language models (LMs) utilized by finite-state approaches are much weaker than the RNN language models with unlimited left context. While a word-level LM can improve the performance of a WFST, it would also restrict the model's ability to handle out-of-vocabulary words.

Controllability of learning: more structured models allow us to ensure that the model does not attempt to learn patterns orthogonal to the underlying process. For example, domain imbalance between the monolingual corpora can cause the seq2seq models to exhibit unwanted style transfer effects like inserting frequent target-side words arbitrarily.

Search procedure: WFSTs make it easy to perform exact maximum likelihood decoding via the shortest-distance algorithm (Mohri, 2009). For the neural models trained using conventional methods, decoding strategies that optimize for the output likelihood (e.g. beam search with a large beam size) have been shown to be susceptible to favoring empty outputs (Stahlberg and Byrne, 2019) and generating repetitions (Holtzman et al., 2020).

Prior work on leveraging the strengths of the two approaches proposes complex joint parameterizations, such as neural weighting of WFST arcs or paths (Rastogi et al., 2016; Lin et al., 2019) or encoding alignment constraints into the attention layer of seq2seq models (Aharoni and Goldberg, 2017; Wu et al., 2018; Wu and Cotterell, 2019; Makarov et al., 2017). We study whether performance can be improved with simpler decoding-time model combinations, reranking and product of experts, which have been used effectively for other model classes (Charniak and Johnson, 2005; Hieber and Riezler, 2015), evaluating on two unsupervised tasks: decipherment of informal romanization (Ryskina et al., 2020) and related language translation (Pourdamghani and Knight, 2017). While there has been much error analysis for the WFST and seq2seq approaches separately, it largely focuses on the more common supervised case. We perform detailed side-by-side error analysis to draw high-level comparisons between finite-state and seq2seq models and investigate if the intuitions from prior work would transfer to the unsupervised transduction scenario.

Tasks

We compare the errors made by the finite-state and the seq2seq approaches by analyzing their performance on two unsupervised character-level transduction tasks: translating between closely related languages written in different alphabets and converting informally romanized text into its native script. Both tasks are illustrated in Figure 1.

Informal romanization

Informal romanization is an idiosyncratic transformation that renders a non-Latin-script language in the Latin alphabet, extensively used online by speakers of Arabic (Darwish, 2014), Russian (Paulsen, 2014), and many Indic languages (Sowmya et al., 2010). Figure 1 shows examples of romanized Russian (top left) and Kannada (top right) sentences along with their "canonicalized" representations in Cyrillic and Kannada scripts respectively.
Unlike official romanization systems such as pinyin, this type of transliteration is not standardized: character substitution choices vary between users and are based on the specific user's perception of how similar characters in different scripts are. Although the substitutions are primarily phonetic (e.g. Russian н /n/ → n), i.e. based on the pronunciation of a specific character in or out of context, users might also rely on visual similarity between glyphs (e.g. Russian ч /tʃʲ/ → 4), especially when the associated phoneme cannot be easily mapped to a Latin-script grapheme (e.g. Arabic ع /ʕ/ → 3). To capture this variation, we view the task of decoding informal romanization as a many-to-many character-level decipherment problem. The difficulty of deciphering romanization also depends on the type of writing system the language traditionally uses. In alphabetic scripts, where grapheme-to-phoneme correspondence is mostly one-to-one, there tends to be a one-to-one monotonic alignment between characters in the romanized and native script sequences (Figure 1, top left). Abjads and abugidas, where graphemes correspond to consonants or consonant-vowel syllables, increasingly use many-to-one alignment in their romanization (Figure 1, top right), which makes learning the latent alignments, and therefore decoding, more challenging. In this work, we experiment with three languages spanning three major types of writing systems (Russian: alphabetic, Arabic: abjad, Kannada: abugida) and compare how well-suited character-level models are for learning these varying alignment patterns.

Related language translation

As shown by Pourdamghani and Knight (2017) and Hauer et al. (2014), character-level models can be used effectively to translate between languages that are closely enough related to have only small lexical and grammatical differences, such as Serbian and Bosnian (Ljubešić and Klubička, 2014). We focus on this specific language pair and tie the languages to specific orthographies (Cyrillic for Serbian and Latin for Bosnian), approaching the task as an unsupervised orthography conversion problem. However, the transliteration framing of the translation problem is inherently limited since the task is not truly character-level in nature, as shown by the alignment lines in Figure 1 (bottom). Even the most accurate transliteration model will not be able to capture non-cognate word translations (Serbian 'настава' [nastava, 'education, teaching'] → Bosnian 'obrazovanje' ['education']) and the resulting discrepancies in morphological inflection (Serbian -a endings in adjectives agreeing with feminine 'nastava' map to Bosnian -o representing agreement with neuter 'obrazovanje'). One major difference with the informal romanization task is the lack of the idiosyncratic orthography: the word spellings are now consistent across the data. However, since the character-level approach does not fully reflect the nature of the transformation, the model will still have to learn a many-to-many cipher with highly context-dependent character substitutions.

Figure 2: A parallel example from the LDC BOLT Arabizi dataset, written in Latin script (source) and converted to Arabic (target) semi-manually. Some source-side segments (in red) are removed by annotators; we use the version without such segments (filtered) for our task. The annotators also standardize spacing on the target side, which results in differences from the source (in blue).
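As an illustration of the decipherment framing (and of the kind of exact search that finite-state models permit), here is a minimal, self-contained toy sketch: a character bigram language model over the native script combined with a substitution-only channel model, decoded with Viterbi search under a one-to-one monotonic alignment. All probabilities and mappings below are invented for illustration; the real WFST cascade used later also models insertions and deletions and learns its weights without supervision.

```python
import math

# Toy channel model p(latin | native): how each native character tends to be romanized.
EMIT = {
    "э": {"e": 0.7, "3": 0.3},
    "т": {"t": 1.0},
    "о": {"o": 0.9, "0": 0.1},
    "ч": {"4": 0.6, "c": 0.4},   # visual mapping ч -> 4, as in Figure 1
    "н": {"n": 1.0},
}
NATIVE = list(EMIT)

# Toy bigram LM over native characters; unseen bigrams get a small floor probability.
BIGRAMS = {("<s>", "э"): 0.5, ("э", "т"): 0.6, ("т", "о"): 0.7,
           ("о", "ч"): 0.3, ("ч", "н"): 0.5, ("н", "о"): 0.6}

def lm(prev, curr, floor=1e-3):
    return BIGRAMS.get((prev, curr), floor)

def viterbi_decode(latin):
    """Most probable native-script string for a romanized input, assuming a one-to-one,
    monotonic character alignment. The search is exact, like shortest-path WFST decoding."""
    best = {"<s>": (0.0, "")}                      # last native char -> (log-prob, decoded prefix)
    for obs in latin:
        new_best = {}
        for prev, (lp, prefix) in best.items():
            for nat in NATIVE:
                emit_p = EMIT[nat].get(obs, 0.0)
                if emit_p == 0.0:
                    continue                        # this native char cannot emit the observed one
                score = lp + math.log(lm(prev, nat)) + math.log(emit_p)
                if nat not in new_best or score > new_best[nat][0]:
                    new_best[nat] = (score, prefix + nat)
        if not new_best:
            return None                             # no native character can emit this symbol
        best = new_best
    score, decoded = max(best.values(), key=lambda v: v[0])
    return decoded, score

print(viterbi_decode("3to4no"))   # toy input; recovers "эточно" (spaces are not modeled here)
```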
Data

Arabic

We use the LDC BOLT Phase 2 corpus (Bies et al., 2014) for training and testing the Arabic transliteration models (Figure 2). The corpus consists of short SMS and chat messages in Egyptian Arabic represented using Latin script (Arabizi). The corpus is fully parallel: each message is automatically converted into the standardized dialectal Arabic orthography (CODA; Habash et al., 2012) and then manually corrected by human annotators. We split and preprocess the data according to Ryskina et al. (2020), discarding the target (native script) and source (romanized) parallel sentences to create the source and target monolingual training splits respectively.

Russian

We use the romanized Russian dataset collected by Ryskina et al. (2020), augmented with the monolingual Cyrillic data from the Taiga corpus of Shavrina and Shapovalova (2017) (Figure 3). The romanized data is split into training, validation, and test portions, and all validation and test sentences are converted to Cyrillic by native speaker annotators. Both the romanized and the native-script sequences are collected from public posts and comments on the Russian social network vk.com, and they are on average 3 times longer than the messages in the Arabic dataset (Table 1). However, although both sides were scraped from the same online platform, the relevant Taiga data is collected primarily from political discussion groups, so there is still a substantial domain mismatch between the source and target sides of the data.

Table 1: Dataset splits for each task and language. The source and target train data are monolingual, and the validation and test sentences are parallel. For the informal romanization task, the source and target sides correspond to the Latin and the original script respectively. For the translation task, the source and target sides correspond to source and target languages. The validation and test character statistics are reported for the source side.

Annotated - Source: proishodit s prirodoy 4to to very very bad; Filtered: proishodit s prirodoy 4to to <...>; Target: происходит с природой что-то <...>; Gloss: 'Something very very bad is happening to the environment'
Monolingual - Source: -; Target: это видеоролики со съезда партии "Единая Россия"; Gloss: 'These are the videos from the "United Russia" party congress'
Figure 3: Top: A parallel example from the romanized Russian dataset. We use the filtered version of the romanized (source) sequences, removing the segments the annotators were unable to convert to Cyrillic, e.g. code-switched phrases (in red). The annotators also standardize minor spelling variation such as hyphenation (in blue). Bottom: a monolingual Cyrillic example from the vk.com portion of the Taiga corpus, which mostly consists of comments in political discussion groups.

Kannada

Our Kannada data (Figure 4) is taken from the Dakshina dataset (Roark et al., 2020), a large collection of native-script text from Wikipedia for 12 South Asian languages. Unlike the Russian and Arabic data, the romanized portion of Dakshina is not scraped directly from the users' online communication, but instead elicited from native speakers given the native-script sequences. Because of this, all romanized sentences in the data are parallel: we allocate most of them to the source-side training data, discarding their original script counterparts, and split the remaining annotated ones between validation and test.

Gloss: 'to use DDR3 in the source circuit'
Figure 4: A parallel example from the Kannada portion of the Dakshina dataset.
The Kannada script data (target) is scraped from Wikipedia and manually converted to Latin (source) by human annotators. Foreign target-side characters (in red) get preserved in the annotation but our preprocessing replaces them with UNK on the target side.

Gloss: 'Everyone has the right to life, liberty and security of person.'
Figure 5: A parallel example from the Serbian-Cyrillic and Bosnian-Latin UDHR. The sequences are not entirely parallel on the character level due to paraphrases and non-cognate translations (in blue).

Related language translation

Following prior work (Pourdamghani and Knight, 2017; Yang et al., 2018; He et al., 2020), we train our unsupervised models on the monolingual data from the Leipzig corpora (Goldhahn et al., 2012). We reuse the non-parallel training and synthetic parallel validation splits of Yang et al. (2018), who generated their parallel data using the Google Translation API. Rather than using their synthetic test set, we opt to test on natural parallel data from the Universal Declaration of Human Rights (UDHR), following Pourdamghani and Knight (2017). We manually sentence-align the Serbian-Cyrillic and Bosnian-Latin declaration texts and follow the preprocessing guidelines of Pourdamghani and Knight (2017). Although we strive to approximate the training and evaluation setup of their work for fair comparison, there are some discrepancies: for example, our manual alignment of UDHR yields 100 sentence pairs compared to 104 of Pourdamghani and Knight (2017). We use the data to train the translation models in both directions, simply switching the source and target sides from Serbian to Bosnian and vice versa.

Inductive bias

As discussed in §1, the WFST models are less powerful than the seq2seq models; however, they are also more structured, which we can use to introduce inductive bias to aid unsupervised training. Following Ryskina et al. (2020), we introduce informative priors on character substitution operations (for a description of the WFST parameterization, see §4.1). The priors reflect the visual and phonetic similarity between characters in different alphabets and are sourced from human-curated resources built with the same concepts of similarity in mind. For all tasks and languages, we collect phonetically similar character pairs from the phonetic keyboard layouts (or, in the case of the translation task, from the default Serbian keyboard layout, which is phonetic in nature due to the dual orthography standard of the language). We also add some visually similar character pairs by automatically pairing all symbols that occur in both source and target alphabets (same Unicode codepoints). For Russian, which exhibits a greater degree of visual similarity than Arabic or Kannada, we also make use of the Unicode confusables list (different Unicode codepoints but same or similar glyphs).³ It should be noted that these automatically generated informative priors also contain noise: keyboard layouts have spurious mappings because each symbol must be assigned to exactly one key in the QWERTY layout, and Unicode-constrained visual mappings might prevent the model from learning correspondences between punctuation symbols (e.g. Arabic question mark ؟ → ?).

Preprocessing

We lowercase and segment all sequences into characters as defined by Unicode codepoints, so diacritics and non-printing characters like ZWJ are also treated as separate vocabulary items.
To filter out foreign or archaic characters and rare diacritics, we restrict the alphabets to characters that cover 99% of the monolingual training data. After that, we add any standard alphabetical characters and numerals that have been filtered out back into the source and target alphabets. All remaining filtered characters are replaced with a special UNK symbol in all splits except for the target-side test.

Methods

We perform our analysis using the finite-state and seq2seq models from prior work and experiment with two joint decoding strategies, reranking and product of experts. Implementation details and hyperparameters are described in Appendix B.

Base models

Our finite-state model is the WFST cascade introduced by Ryskina et al. (2020). The model is composed of a character-level n-gram language model and a script conversion transducer (emission model), which supports one-to-one character substitutions, insertions, and deletions. Character operation weights in the emission model are parameterized with multinomial distributions, and similar character mappings (§3.3) are used to create Dirichlet priors on the emission parameters. To avoid marginalizing over sequences of infinite length, a fixed limit is set on the delay of any path (the difference between the cumulative number of insertions and deletions at any timestep). Ryskina et al. (2020) train the WFST using stochastic stepwise EM (Liang and Klein, 2009), marginalizing over all possible target sequences and their alignments with the given source sequence. To speed up training, we modify their training procedure towards 'hard EM': given a source sequence, we predict the most probable target sequence under the model, marginalize over alignments, and then update the parameters. Although the unsupervised WFST training is still slow, the stepwise training procedure is designed to converge using fewer data points, so we choose to train the WFST model only on the 1,000 shortest source-side training sequences (500 for Kannada).

Our default seq2seq model is the unsupervised neural machine translation (UNMT) model of Lample et al. (2018, 2019) in the parameterization of He et al. (2020). The model consists of an LSTM (Hochreiter and Schmidhuber, 1997) encoder and decoder with attention, trained to map sentences from each domain into a shared latent space. Using a combined objective, the UNMT model is trained to denoise, translate in both directions, and discriminate between the latent representations of sequences from different domains. Since a sufficient amount of balanced data is crucial for UNMT performance, we train the seq2seq model on all available data on both source and target sides. Additionally, the seq2seq model decides on early stopping by evaluating on a small parallel validation set, which our WFST model does not have access to.

Table 3: Character and word error rates (lower is better) and BLEU scores (higher is better) for the related language translation task. Bold indicates best per column. The WFST and the seq2seq have comparable CER and WER despite the WFST being trained on up to 160x less source-side data (§4.1). While none of our models achieve the scores reported by Pourdamghani and Knight (2017), they all substantially outperform the subword-level model of He et al. (2020). Note: base model results are not intended as a direct comparison between the WFST and seq2seq, since they are trained on different amounts of data.
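A minimal sketch of the 'hard EM' outer loop described in the Base models paragraph above, written with plain Python data structures rather than the actual OpenFst-based implementation. `decode_best_target` and `align_counts` are hypothetical stand-ins for the real shortest-path decoding and alignment-marginalization routines, and the Dirichlet prior enters as pseudo-counts on the emission multinomials; none of these names come from the released code.

```python
from collections import defaultdict

def normalize(counts):
    """Turn a {symbol: count} dict into a multinomial distribution."""
    total = sum(counts.values())
    return {sym: c / total for sym, c in counts.items()}

def hard_em(source_seqs, prior_pseudocounts, num_iters, decode_best_target, align_counts):
    """Hard-EM estimation of the emission (script conversion) parameters.

    decode_best_target(src, emission) -> most probable target string under the current model
    align_counts(src, tgt, emission)  -> {(src_char, tgt_char): expected count}, marginalized
                                         over alignments of src with tgt
    prior_pseudocounts                -> Dirichlet prior built from similar-character pairs
    """
    # Initialize the emission multinomials from the prior alone.
    emission = {s: normalize(t_counts) for s, t_counts in prior_pseudocounts.items()}
    for _ in range(num_iters):
        counts = defaultdict(lambda: defaultdict(float))
        # Seed counts with the prior pseudo-counts (MAP-style smoothing).
        for s, t_counts in prior_pseudocounts.items():
            for t, c in t_counts.items():
                counts[s][t] += c
        for src in source_seqs:
            tgt = decode_best_target(src, emission)            # "hard" step: 1-best target only
            for (s, t), c in align_counts(src, tgt, emission).items():
                counts[s][t] += c                              # still soft over alignments
        emission = {s: normalize(t_counts) for s, t_counts in counts.items()}
    return emission
```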
The WFST model treats the target and source training data differently, using the former to train the language model and the latter for learning the emission parameters, while the UNMT model is trained to translate in both directions simultaneously. Therefore, we reuse the same seq2seq model for both directions of the translation task, but train a separate finite-state model for each direction.

Model combinations

The simplest way to combine two independently trained models is reranking: using one model to produce a list of candidates and rescoring them according to another model. To generate candidates with a WFST, we apply the n-shortest-paths algorithm (Mohri and Riley, 2002). It should be noted that the n-best list might contain duplicates since each path represents a specific source-target character alignment. The length constraints encoded in the WFST also restrict its capacity as a reranker: beam search in the UNMT model may produce hypotheses too short or too long to have a non-zero probability under the WFST.

Our second approach is a product-of-experts-style joint decoding strategy (Hinton, 2002): we perform beam search on the WFST lattice, reweighting the arcs with the output distribution of the seq2seq decoder at the corresponding timestep. For each partial hypothesis, we keep track of the WFST state s and the partial input and output sequences x_{1:k} and y_{1:t}.⁴ When traversing an arc with input label i ∈ {x_{k+1}, ε} and output label o, we multiply the arc weight by the probability of the neural model outputting o as the next character: p_seq2seq(y_{t+1} = o | x, y_{1:t}). Transitions with o = ε (i.e. deletions) are not rescored by the seq2seq. We group hypotheses by their consumed input length k and select the n best extensions at each timestep.

Additional baselines

For the translation task, we also compare to prior unsupervised approaches of different granularity: the deep generative style transfer model of He et al. (2020) and the character- and word-level WFST decipherment model of Pourdamghani and Knight (2017). The former is trained on the same training set tokenized into subword units (Sennrich et al., 2016), and we evaluate it on our UDHR test set for fair comparison.

Table 4: Example predictions for a sentence from the Serbian-to-Bosnian UDHR test set.
Input: свако има право да слободно учествује у културном животу заједнице, да ужива у уметности и да учествује у научном напретку и у добробити која отуда проистиче.
Ground truth: svako ima pravo da slobodno sudjeluje u kulturnom životu zajednice, da uživa u umjetnosti i da učestvuje u znanstvenom napretku i u njegovim koristima.
WFST: svako ima pravo da slobodno učestvuje u kulturnom životu s jednice , da uživa u m etnosti i da učestvuje u naučnom napretku i u dobrobiti koja otuda pr ističe .
Reranked WFST: svako ima pravo da slobodno učestvuje u kulturnom životu s jednice , da uživa u m etnosti i da učestvuje u naučnom napretku i u dobrobiti koja otuda pr ističe .
Seq2Seq: svako ima pravo da slobodno učestvuje u kulturnom životu zajednice , da učestvuje u naučnom napretku i u dobrobiti koja otuda proističe .
Reranked Seq2Seq: svako ima pravo da slobodno učestvuje u kulturnom životu zajednice , da uživa u umjetnosti i da učestvuje u naučnom napretku i u dobrobiti koja otuda proističe
Product of experts: svako ima pravo da slobodno učestvuje u kulturnom za u s ajednice , da živa u umjetnosti i da učestvuje u naučnom napretku i u dobro j i koja otuda proisti
Subword Seq2Seq: s ami ima pravo da slobodno u tiče na srpskom nivou vlasti da razgovaraju u bosne i da djeluje u medunarodnom turizmu i na buducnosti koja muža decisno .
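The following schematic sketch mirrors the product-of-experts decoding just described, with the WFST reduced to a plain transition function and the seq2seq reduced to a next-character distribution. `wfst_arcs`, `is_final`, and `seq2seq_next_prob` are illustrative stand-ins (states are assumed to be plain integers here), so this shows the shape of the algorithm rather than the authors' implementation.

```python
import heapq
import math

def product_of_experts_decode(x, start_state, wfst_arcs, is_final, seq2seq_next_prob,
                              beam_size=8, epsilon="<eps>"):
    """Beam search over the WFST lattice, rescoring non-epsilon outputs with the seq2seq.

    wfst_arcs(state) -> iterable of (in_label, out_label, log_weight, next_state),
                        where labels may be the epsilon symbol
    seq2seq_next_prob(x, y_prefix, char) -> p(char | x, y_prefix) under the neural decoder
    """
    # Hypotheses are grouped by consumed input length k: k -> [(neg_log_score, state, output)].
    beams = {0: [(0.0, start_state, "")]}
    best_final = None
    for _ in range(4 * len(x) + 4):                    # generous step limit for the sketch
        new_beams = {}
        for k, hyps in beams.items():
            for neg_score, state, y in hyps:
                for in_lab, out_lab, logw, nxt in wfst_arcs(state):
                    if in_lab == epsilon:
                        k2 = k                          # insertion: consumes no input
                    elif k < len(x) and in_lab == x[k]:
                        k2 = k + 1                      # matches the next input character
                    else:
                        continue
                    score = -neg_score + logw
                    y2 = y
                    if out_lab != epsilon:              # deletions (epsilon output) are not rescored
                        score += math.log(seq2seq_next_prob(x, y, out_lab) + 1e-12)
                        y2 = y + out_lab
                    new_beams.setdefault(k2, []).append((-score, nxt, y2))
        if not new_beams:
            break
        # Keep the beam_size best extensions within each consumed-input-length group.
        beams = {k: heapq.nsmallest(beam_size, v, key=lambda h: h[0]) for k, v in new_beams.items()}
        for h in beams.get(len(x), []):                 # hypotheses that consumed all of the input
            if is_final(h[1]) and (best_final is None or h[0] < best_final[0]):
                best_final = h
    return (best_final[2], -best_final[0]) if best_final else None
```

Reranking, by contrast, simply generates an n-best list from one model (n-shortest paths for the WFST, beam search for the UNMT) and rescores each complete candidate with the other model.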
While the train and test data of Pourdamghani and Knight (2017) also use the same respective sources, we cannot account for tokenization differences that could affect the scores reported by the authors.

Results and analysis

Tables 2 and 3 present our evaluation of the two base models and three decoding-time model combinations on the romanization decipherment and related language translation tasks respectively. For each experiment, we report character error rate, word error rate, and BLEU (see Appendix C). The results for the base models support what we show later in this section: the seq2seq model is more likely to recover words correctly (higher BLEU, lower WER), while the WFST is more faithful at the character level and avoids word-level substitution errors (lower CER). Example predictions can be found in Table 4 and in the Appendix. Our further qualitative and quantitative findings are summarized in the following high-level takeaways:

#1: Model combinations still suffer from search issues. We would expect the combined decoding to discourage all errors common under one model but not the other, improving the performance by leveraging the strengths of both model classes. However, as Tables 2 and 3 show, they instead mostly interpolate between the scores of the two base models. In the reranking experiments, we find that this is often due to the same base model error (e.g. the seq2seq model hallucinating a word mid-sentence) repeating across all the hypotheses in the final beam. This suggests that successful reranking would require a much larger beam size or a diversity-promoting search mechanism. Interestingly, we observe that although adding a reranker on top of a decoder does improve performance slightly, the gain is only in terms of the metrics that the base decoder is already strong at (character-level for reranked WFST and word-level for reranked seq2seq), at the expense of the other scores. Overall, none of our decoding strategies achieves the best results across the board, and no model combination substantially outperforms both base models in any metric.

Figure 6: Highest-density submatrices of the two base models' character confusion matrices (WFST left, Seq2Seq right), computed in the Russian romanization task. White cells represent zero elements. The WFST confusion matrix (left) is noticeably sparser than the seq2seq one (right), indicating more repetitive errors. The # symbol stands for UNK.

#2: Character tokenization boosts performance of the neural model. In the past, UNMT-style models have been applied to various unsupervised sequence transduction problems. However, since these models were designed to operate at the word or subword level, prior work assumes the same tokenization is necessary. We show that for tasks allowing a character-level framing, such models in fact respond extremely well to character input. Table 3 compares the UNMT model trained on characters with the seq2seq style transfer model of He et al. (2020) trained on subword units. The original paper shows improvement over the UNMT baseline in the same setting, but simply switching to character-level tokenization without any other changes results in a gain of 30 BLEU points for either direction. This suggests that the tokenization choice could act as an inductive bias for seq2seq models, and character-level framing could be useful even for tasks that are not truly character-level. This observation also aligns with the findings of the recent work on language modeling complexity (Park et al., 2021; Mielke et al., 2019).
For many languages, including several Slavic ones related to the Serbian-Bosnian pair, a character-level language model yields lower surprisal than one trained on BPE units, suggesting that the effect might also be explained by character tokenization making the language easier to language-model.

#3: WFST model makes more repetitive errors. Although two of our evaluation metrics, CER and WER, are based on edit distance, they do not distinguish between the different types of edits (substitutions, insertions and deletions). Breaking them down by the edit operation, we find that while both models favor substitutions on both the word and character levels, insertions and deletions are more frequent under the neural model (43% vs. 30% of all edits on the Russian romanization task). We also find that the character substitution choices of the neural model are more context-dependent: while the total counts of substitution errors for the two models are comparable, the WFST is more likely to repeat the same few substitutions per character type. This is illustrated by Figure 6, which visualizes the most populated submatrices of the confusion matrices for the same task as heatmaps. The WFST confusion matrix is noticeably more sparse, with the same few substitutions occurring much more frequently than others: for example, the WFST tends to mistake a given character for one particular substitute and rarely for other characters, while the neural model's substitutions of the same character are distributed closer to uniform. This suggests that the WFST errors might be easier to correct with rule-based postprocessing. Interestingly, we did not observe the same effect for the translation task, likely due to the more constrained nature of the orthography conversion.

Figure 7: Error rate distribution across output words. The predictions are segmented using the Moses tokenizer (Koehn et al., 2007) and aligned to ground truth with word-level edit distance. The increased frequency of CER=1 for the seq2seq model as compared to the WFST indicates that it replaces entire words more often.

#4: Neural model is more sensitive to data distribution shifts. The language model aiming to replicate its training data distribution could cause the output to deviate from the input significantly. This could be an artifact of a domain shift, such as in Russian, where the LM training data came from a political discussion forum: the seq2seq model frequently predicts unrelated domain-specific proper names in place of very common Russian words, e.g. жизнь [žizn, 'life'] → Зюганов [Zjuganov, 'Zyuganov (politician's last name)'] or это [èto, 'this'] → Единая Россия [Edinaja Rossija, 'United Russia (political party)'], presumably distracted by the shared first character in the romanized version. To quantify the effect of a mismatch between the train and test data distributions in this case, we inspect the most common word-level substitutions under each decoding strategy, looking at all substitution errors covered by the 1,000 most frequent substitution 'types' (ground truth-prediction word pairs) under the respective decoder. We find that 25% of the seq2seq substitution errors fall into this category, as compared to merely 3% for the WFST, which is notable given the relative proportion of in-vocabulary words in the models' outputs (89% for UNMT vs. 65% for WFST). Comparing the error rate distribution across output words for the translation task also supports this observation.
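A small generic sketch of the bookkeeping behind this breakdown: a Levenshtein alignment with a backtrace that counts substitutions, insertions, and deletions separately and accumulates a substitution confusion matrix. This is standard edit-distance code written for illustration, not the analysis scripts used in the paper.

```python
from collections import Counter

def edit_ops(ref, hyp):
    """Align ref and hyp by Levenshtein distance and return the list of edit operations."""
    n, m = len(ref), len(hyp)
    d = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        d[i][0] = i
    for j in range(1, m + 1):
        d[0][j] = j
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d[i][j] = min(d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1]),  # sub / match
                          d[i - 1][j] + 1,                               # deletion
                          d[i][j - 1] + 1)                               # insertion
    ops, i, j = [], n, m
    while i > 0 or j > 0:
        if i > 0 and j > 0 and d[i][j] == d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1]):
            ops.append(("match" if ref[i - 1] == hyp[j - 1] else "sub", ref[i - 1], hyp[j - 1]))
            i, j = i - 1, j - 1
        elif i > 0 and d[i][j] == d[i - 1][j] + 1:
            ops.append(("del", ref[i - 1], None))
            i -= 1
        else:
            ops.append(("ins", None, hyp[j - 1]))
            j -= 1
    return list(reversed(ops))

def error_profile(pairs):
    """pairs: iterable of (reference, hypothesis) strings.
    Returns per-operation counts and a character confusion matrix over substitutions."""
    op_counts, confusion = Counter(), Counter()
    for ref, hyp in pairs:
        for op, r, h in edit_ops(ref, hyp):
            op_counts[op] += 1
            if op == "sub":
                confusion[(r, h)] += 1
    return op_counts, confusion

# Toy usage with made-up prediction pairs:
ops, conf = error_profile([("naucnom", "naucnon"), ("zajednice", "s jednice")])
print(ops)                    # e.g. Counter({'match': ..., 'sub': ..., ...})
print(conf.most_common(3))    # most frequent character substitutions
```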
As can be seen from Figure 7, the seq2seq model is likely to either predict the word correctly (CER of 0) or entirely wrong (CER of 1), while the WFST more often predicts the word partially correctly; examples in Table 4 illustrate this as well. We also see this in the Kannada outputs: the WFST typically gets all the consonants right but makes mistakes in the vowels, while the seq2seq tends to replace the entire word.

Conclusion

We perform comparative error analysis in finite-state and seq2seq models and their combinations for two unsupervised character-level tasks, informal romanization decipherment and related language translation. We find that the two model types tend towards different errors: seq2seq models are more prone to word-level errors caused by distributional shifts while WFSTs produce more character-level noise despite the hard alignment constraints. Although none of our simple decoding-time combinations substantially outperforms the base models, we believe that combining neural and finite-state models to harness their complementary advantages is a promising research direction. Such combinations might involve biasing seq2seq models towards WFST-like behavior via pretraining or directly encoding constraints such as hard alignment or monotonicity into their parameterization (Wu et al., 2018; Wu and Cotterell, 2019). Although recent work has shown that the Transformer can learn to perform character-level transduction without such biases in a supervised setting (Wu et al., 2021), exploiting the structured nature of the task could be crucial for making up for the lack of large parallel corpora in low-data and/or unsupervised scenarios. We hope that our analysis provides insight into leveraging the strengths of the two approaches for modeling character-level phenomena in the absence of parallel data.

A Data download links

The romanized Russian and Arabic data and preprocessing scripts can be downloaded here. This repository also contains the relevant portion of the Taiga dataset, which can be downloaded in full at this link. The romanized Kannada data was downloaded from the Dakshina dataset. The scripts to download the Serbian and Bosnian Leipzig corpora data can be found here. The UDHR texts were collected from the corresponding pages: Serbian, Bosnian. The keyboard layouts used to construct the phonetic priors are collected from the following sources: Arabic 1, Arabic 2, Russian, Kannada, Serbian. The Unicode confusables list used for the Russian visual prior can be found here.

B Implementation

WFST

We reuse the unsupervised WFST implementation of Ryskina et al. (2020), 5 which utilizes the OpenFst (Allauzen et al., 2007) and OpenGrm (Roark et al., 2012) libraries. We use the default hyperparameter settings described by the authors (see Appendix B in the original paper). We keep the hyperparameters unchanged for the translation experiment and set the maximum delay value to 2 for both translation directions.

UNMT

We use the PyTorch UNMT implementation of He et al. (2020) 6 which incorporates improvements introduced by Lample et al. (2019) such as the addition of a max-pooling layer. We use a single-layer LSTM (Hochreiter and Schmidhuber, 1997) with hidden state size 512 for both the encoder and the decoder and embedding dimension 128. For the denoising autoencoding loss, we adopt the default noise model and hyperparameters as described by Lample et al. (2018). The autoencoding loss is annealed over the first 3 epochs.
We predict the output using greedy decoding and set the maximum output length equal to the length of the input sequence. Patience for early stopping is set to 10.

C Metrics

The character error rate (CER) and word error rate (WER) are measured as the Levenshtein distance between the hypothesis and the reference divided by the reference length:

CER or WER = EditDistance(hypothesis, reference) / |reference|,

with both the numerator and the denominator measured in characters and words respectively. We report the BLEU-4 score (Papineni et al., 2002), measured using the Moses toolkit script.⁷ For both BLEU and WER, we split sentences into words using the Moses tokenizer (Koehn et al., 2007).

Input: kongress ne odobril biudjet dlya osuchestvleniye "bor'bi s kommunizmom" v yuzhniy amerike.
Ground truth: конгресс не одобрил бюджет для осуществления "борьбы с коммунизмом" в южной америке.
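For reference, a minimal implementation of the CER and WER definitions above (generic code for illustration, not the evaluation scripts used in the paper; the word splitter here is plain whitespace rather than the Moses tokenizer).

```python
def levenshtein(a, b):
    """Edit distance between two sequences (lists or strings)."""
    prev = list(range(len(b) + 1))
    for i, x in enumerate(a, start=1):
        curr = [i]
        for j, y in enumerate(b, start=1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (x != y)))     # substitution / match
        prev = curr
    return prev[-1]

def cer(hypothesis, reference):
    """Character error rate: edit distance over characters divided by reference length."""
    return levenshtein(list(hypothesis), list(reference)) / max(len(reference), 1)

def wer(hypothesis, reference, tokenize=str.split):
    """Word error rate: edit distance over tokens divided by reference token count."""
    hyp, ref = tokenize(hypothesis), tokenize(reference)
    return levenshtein(hyp, ref) / max(len(ref), 1)

print(cer("to4no", "tochno"))          # toy strings, not real system output
print(wer("eto ne to4no", "eto tochno"))
```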
A Role for Bone Morphogenetic Protein-4 in Lymph Node Vascular Remodeling and Primary Tumor Growth

Running title: BMP-4 in vascular remodeling and tumor growth

Abstract

Lymph node metastasis, an early and prognostically important event in the progression of many human cancers, is associated with expression of vascular endothelial growth factor-D (VEGF-D). Changes to lymph node vasculature that occur during malignant progression may create a metastatic niche capable of attracting and supporting tumor cells. In this study, we sought to characterize molecules expressed in lymph node endothelium that could represent therapeutic or prognostic targets. Differential mRNA expression profiling of endothelial cells from lymph nodes that drained metastatic or non-metastatic primary tumors revealed genes associated with tumor progression, in particular bone morphogenetic protein-4 (BMP-4). Metastasis driven by VEGF-D was associated with reduced BMP-4 expression in high endothelial venules, where BMP-4 loss could remodel the typical high-walled phenotype to thin-walled vessels. VEGF-D expression was sufficient to suppress proliferation of the more typical BMP-4-expressing high endothelial venules in favor of remodeled vessels, and mechanistic studies indicated that VEGFR-2 contributed to high endothelial venule proliferation and remodeling. BMP-4 could regulate high endothelial venule phenotype and cellular function, thereby determining morphology and proliferation responses. Notably, therapeutic administration of BMP-4 suppressed primary tumor growth, acting both at the level of tumor cells and tumor stromal cells. Together, our results show that VEGF-D-driven metastasis induces vascular remodeling in lymph nodes. Further, they implicate BMP-4 as a negative regulator of this process, suggesting its potential utility as a prognostic marker or anti-tumor agent.

Introduction

Lymphatic dissemination is considered to be an early and crucial route of metastasis for many cancers (1,2). Blind-ending lymphatic capillaries drain fluid, cells, and macromolecules from the tissue interstitium into a hierarchy of vessels punctuated by lymph nodes (LN), which provide immunologic surveillance for a particular lymphatic drainage basin (3). The presence of metastatic tumor cells in the "sentinel" LN draining a tumor site is a key factor in disease management: substantial clinical data indicates adverse prognostic significance of tumor-positive LNs for many tumor types (4,5). However, a clear understanding of the mechanistic role of LNs in tumor progression is still lacking.
VEGF-D and VEGF-C are important inducers of the growth and differentiation of blood vessels and lymphatics.When overexpressed in experimental tumors these growth factors elicit angiogenesis and lymphangiogenesis, and are furthermore associated with increased metastasis to LNs and distant organs (1).VEGF-D and VEGF-C expression is also associated with metastasis to LNs in many human cancers, and is independently associated with poor prognosis (6).Recently, it has emerged that modulation of lymphatics and blood vesselsincluding high endothelial venules (HEV), vessels specialized for leukocyte trafficking (7,8)-also occurs in draining LNs of some tumors (9,10).Such alterations can precede the arrival of metastatic cells (7,(11)(12)(13), and members of the VEGF family have been implicated in these changes (12)(13)(14)(15).The importance of alterations to LN endothelium is highlighted by studies of human breast cancer: lymphangiogenesis or angiogenesis within metastatic tumor deposits in sentinel LNs was found to be associated with, and sometimes independently predictive of, distant metastasis or survival (9,16,17). Here, we sought to characterize changes to the vasculature within tumor-draining LNs, to identify molecules with prognostic or therapeutic potential.We compared the molecular profiles of enriched endothelial cell (EC) populations from LNs draining nonmetastatic tumors with those from LNs draining metastatic (VEGF-D-overexpressing) tumors.BMP-4 was downregulated in the HEVs of LNs draining metastatic tumors.This observation was linked with the remodeling of HEVs induced by VEGF-D-driven metastasis, thus implicating BMP-4 as a regulator of HEV morphology and cell function.Furthermore, therapeutically applied BMP-4 protein inhibited primary tumor growth.This study indicates that VEGF-D's prometastatic activity includes remodeling of specialized LN endothelium, and identifies new roles for BMP-4 in cancer and vascular biology. Materials and Methods Lists of antibodies, primers and detailed protocols are contained in the Supplementary Methods section. Metastatic and nonmetastatic xenograft tumor models 293 EBNA-1 tumor cell lines stably expressing full-length human VEGF-D (293-VEGF-D), human VEGF-C (293-VEGF-C), or vector alone (293-Apex) were established in SCID/NOD mice as described (18).293 EBNA-1 cells were a gift from Kari Alitalo, University of Helsinki, Finland.Regular growth and morphology of transfected cell lines was monitored routinely and growth factor expression verified by Western blot prior to each experiment.LNs were analyzed within the timeframe that metastasis typically occurs in this model; that is, 2 to 4 weeks postimplantation.All animal experiments were carried out with the approval of the Institutional Animal Ethics Committee. Enrichment of LN EC populations Draining LNs of metastatic or nonmetastatic tumors pooled from 1 to 5 mice were enzymatically digested, then tumor cells and leukocytes were depleted using immunomagnetic selection (Miltenyi Biotec) for class I HLA and CD16/CD32.The remaining cells were cultured in EGM-2 MV media (Lonza) before enrichment for ECs by selection for podoplanin (19).See Supplementary Fig. S1 for detailed procedure. 
Microarray analysis Duplicate samples of LN EC total RNA (RNeasy Plus kit, Qiagen) were applied to Affymetrix expression arrays (430 2.0; Australian Genome Research Facility).Raw intensity data were analyzed using GeneChip Operating Software (Affymetrix), and profiles compared via Robust Multiarray Analysis and linear modeling using AffylmGUI software (20).Microarray data are deposited in NCBI's Gene Expression Omnibus; series accession number GSE31123 (http://www.ncbi.nlm.nih.gov/geo/query/acc.cgi?acc¼GSE31123). Human LNs Breast cancer-associated LNs with or without histologically identifiable metastases (n ¼ 7 patients, 22 LNs), or control nontumor-associated LNs collected during cardiac surgery (n ¼ 3 patients), were obtained as a pilot study.Access to deidentified tissue (formalin fixed, paraffin embedded) was provided by the Pathology Department, Royal Melbourne Hospital, with permission from the Melbourne Health Human Research Ethics Committee. Immunostaining and image quantitation For BMP-4/MECA-79 quantitation, 2 to 3 sections of each tumor-draining LN ($6 per group) were immunostained (18).For HEV morphometry, the luminal and basal edges of HEVs were traced using Metamorph Premier (Molecular Devices), to determine lumen area, average vessel wall width and endothelial area using Integrated Morphometry Analysis parameters (journal available on request).HEVs with 50% or more of their circumference staining for BMP-4 were designated BMP-4 high ; or otherwise BMP-4 low .Data were analyzed according to a linear mixed model (Supplementary Methods). Treatment of ear-draining LNs with recombinant VEGF-D One microgram of purified VEGF-D dimers (0.05 mg/mL; Vegenics Ltd.) in PBS, or PBS alone as control, was injected intradermally into the ears of SCID/NOD mice for 3 consecutive days.On day 4, BrdU (Invitrogen) was injected intraperitoneally, and ear-draining (superficial parotid) LNs were harvested 2 days later. Treatment of tumors with neutralizing antibodies Mice bearing metastatic (VEGF-D overexpressing) tumors received 3 times weekly intraperitoneal injections of neutralizing antibodies (800 mg) to VEGF receptor-2 (VEGFR-2; DC101; Imclone) or VEGF-D (VD1; ref. 21), or PBS.For analysis of HEVs, sections of LNs draining nonmetastatic and antibodytreated metastatic tumors were used from one experiment.LNs of PBS-treated metastatic tumors where HEVs were not obscured by tumor infiltration were included from an identically conducted experiment as a control. BMP-4 therapeutic model Tumor-bearing mice were injected intraperitoneally from day 1, 3 times weekly, with 1.4 mg of human BMP-4 (R&D Systems) in 200 mL PBS with 0.652 mg/mL BSA, or a vehicle control of PBS with 0.32 mmol/L HCl and 1 mg/mL BSA, until day 12 or experiment termination.Serum was sampled 60 minutes posttreatment and BMP-4 quantified by ELISA (R&D). Statistical analysis Data were compared using a 2-tailed Student t test, or Fisher's exact test for comparison of proportions.Graphed data represent mean AE SE unless specified otherwise. Enrichment of endothelial cells from tumor-draining LNs A model of VEGF-D-driven tumor metastasis to regional LNs was used to examine molecular changes in LN endothelium during metastasis (Fig. 
1A).Overexpression of VEGF-D in 293-EBNA-1 tumor cells drives metastasis to local LNs within 2 to 4 weeks of implantation in approximately 80% of cases.Vector-transfected tumor cells (no VEGF-D) served as a nonmetastatic control (18).Podoplanin (19) was used as a highly expressed, protease-resistant selection marker to derive cell populations enriched for lymphatic ECs and related EC types, which may respond to VEGF-D (Fig. 1B).Microarray analysis revealed expression of EC-characteristic genes, including VEGFR-2, neuropilin-1 and neuropilin-2, endothelial nitric oxide synthase, CD34 and TIE-2; while desmin and calponin-1, found in fibroblastic lineages, and chondroitin sulfate proteoglycan 4 (NG-2 antigen), characteristic of pericytes, were absent.These findings confirmed that the podoplanin þve cells were enriched for ECs.The LN ECs heterogeneously expressed ICAM-1 and endoglin, markers of endothelial activation in inflammation and angiogenesis, respectively (Fig. 1C; Supplementary Methods). Identification of endothelial-expressed genes modulated during metastasis to LNs LN ECs from metastatic and nonmetastatic tumor models were compared by microarray (Fig. 2A).Of the top 10 differentially expressed genes (ranked by adjusted P value), 9 were downregulated in LNs draining metastatic tumors compared with their nonmetastatic counterparts, and all 10 showed more than 2-fold difference in expression (Table 1; Fig. 2B).Candidates were subsequently selected for further analysis based on relevance to endothelial and cancer biology.qRT-PCR validated the downregulation of Bmp4, Unc5c, Cfh, Emcn, and Gpr39 in ECs from LNs draining metastatic tumors, and the upregulation of Nova1 (Fig. 2C).Bmp4 showed the greatest abundance and more than 5-fold difference in expression, and was thus selected for further investigation. Localization of BMP-4 protein in HEVs and differential expression in metastasis Immunohistochemistry showed that BMP-4 protein was localized to HEVs (Fig. 3A), confirmed by costaining for the specific MECA-79 epitope (22).BMP-4 protein was present in a subset of HEVs in LNs draining both nonmetastatic and metastatic tumors (Fig. 3A), and in LNs from nontumorbearing SCID/NOD and immunocompetent mice (Fig. 3A, data not shown).HEVs did not endogenously express podoplanin (Supplementary Fig. S2), suggesting podoplanin probably became upregulated in HEV ECs during the brief culturing between extraction from LN and purification for microarray analysis (23,24).Although MECA-79 stained the surface of HEV ECs, BMP-4 seemed primarily in the cytoplasm (Fig. 3A inset), implying that HEV ECs express BMP-4 protein.No other sites of BMP-4 localization were observed in the LN or primary tumor.This supported the conclusion that HEV ECs are the main source of BMP-4 mRNA and protein in LNs. Quantitation of staining revealed that HEV-expressed BMP-4 was significantly reduced (by $50%), in LNs draining metastatic versus nonmetastatic tumors (P < 0.001; Fig. 3B and C).This illustrated a shift from predominately BMP-4 high to predominately BMP-4 low HEVs in LNs draining nonmetastatic versus metastatic tumors respectively; however, both LN types contained some BMP-4 high and some BMP-4 low HEVs (Fig. 3B and C).Therefore, the downregulation of BMP-4 mRNA was reflected at the protein level in vivo. BMP-4 loss marks HEV remodeling in cancer We examined LNs for evidence of tumor-induced HEV remodeling (7), and explored whether VEGF-D or BMP-4 was associated with this process (Fig. 
4A).In LNs draining nonmetastatic tumors, BMP-4 high HEVs had significantly smaller lumen areas than BMP-4 low HEVs (P ¼ 0.0017; Fig. 4B).In LNs draining metastatic tumors, however, the BMP-4 high HEVs were more dilated than in the nonmetastatic context (P ¼ 0.028; Fig. 4B).Significantly, BMP-4 high HEVs had thicker vessel walls than BMP-4 low HEVs in all LNs (P < 0.001; Fig. 4B), suggesting that BMP-4 expression was closely linked with HEV morphology.Although the remaining BMP-4 high HEVs in LNs draining metastatic tumors largely retained their greater vessel wall width, there was a strong trend suggesting reduced width compared with those in LNs draining nonmetastatic tumors, indicating that VEGF-D-driven metastasis could affect the endothelial width of BMP-4 high HEVs (P ¼ 0.064; Fig. 4B).We also observed remodeled HEVs in a pilot study of human breast cancer-associated LNs with or without histologically identifiable metastasis (Fig. 4E), confirming its occurrence in human disease (7). We next investigated whether HEV remodeling involved EC proliferation.Interestingly, BMP-4 high HEV ECs in LNs draining metastatic tumors had a significantly lower proliferation rate than those from the nonmetastatic model (P ¼ 0.026; Fig. 4C).Furthermore, BMP-4 low HEV ECs in LNs draining metastatic tumors had a significantly higher proliferation rate than the BMP-4 high HEV ECs (P ¼ 0.015; Fig. 4C).These results indicated that the proliferation response of HEV ECs to tumor-secreted VEGF-D may be modulated by BMP-4; another way in which VEGF-D-driven metastasis may induce remodeling of HEV characteristics via reduction of BMP-4 expression. The role of VEGF-D and VEGFR-2 in HEV remodeling To determine whether HEVs could respond directly to tumor-secreted human VEGF-D, we examined VEGFR-2 and VEGFR-3 expression in LNs.VEGFR-2 was expressed on most HEVs, blood vessel capillaries and lymphatics (Fig. 4D).VEGFR-3 was strongly expressed on lymphatics, but was essentially absent from HEVs.Thus HEVs are capable of responding to VEGFR-2 ligands. In vivo approaches were utilized to investigate the specific pathways controlling HEV remodeling.Injection of VEGF-D into the mouse ear mimics tumor-secreted growth factor draining to regional LNs.After 3 days of VEGF-D treatment, proliferation of BMP-4 high HEV ECs was decreased (P ¼ 0.034; Fig. 5A), suggesting VEGF-D was responsible for the effect observed in tumor-draining LNs (Fig. 4C), and that suppression of proliferation in BMP-4 high HEVs by VEGF-D may occur early in the metastatic process.Alteration of lumen area, vessel wall width and BMP-4 expression may require a longer stimulation period as neither was affected in this experiment (Fig. 5A and B); however, BMP-4 high HEVs again exhibited significantly thicker vessel walls (Fig. 5A). Effects of exogenous BMP-4 on tumor progression As this study was designed to identify and analyze molecular targets with prognostic and/or therapeutic potential, we established a therapeutic model to determine the effects of exogenously-administered BMP-4.Activity and stability of recombinant human BMP-4 were verified by bioassay (Supplementary Fig. S3A; Supplementary Methods).As shown in Fig. 
6A, BMP-4 inhibited the exponential growth of VEGF-D-overexpressing primary tumors by approximately 50% (day 20, P ¼ 0.056; day 22, P ¼ 0.036; day 24, P ¼ 0.080).In addition, similar tumors overexpressing VEGF-C were reduced in size by approximately 56% by BMP-4 treatment (day 15, P ¼ 0.067; day 18, P ¼ 0.021; day 23, P ¼ 0.026).BMP-4 could thus inhibit tumor growth driven by 2 different lymphangiogenic/angiogenic growth factors.ELISA results confirmed that injected BMP-4 reached the systemic circulation at approximately 1,200 pg/mL in serum after 60 minutes (Fig. 6B).Interestingly, under the conditions and timecourse of these experiments the BMP-4 treatment did not seem to affect metastasis to LNs or HEV morphology (Fig. 6C and data not shown).Analysis of HEVs did reveal a trend suggesting that in metastasis-positive LNs draining VEGF-D-overexpressing tumors, more BMP-4 high HEVs were observed under BMP-4 treatment than for the control (mean AE SE: BMP-4, 40.9 AE 10.1; vehicle, 29.5 AE 10.0; n ¼ 5, P ¼ 0.16).Furthermore, BMP-4 high HEVs again exhibited thicker vessel walls than BMP-4 low HEVs in both treatment conditions (P < 0.001; Supplementary Fig. S3B), confirming the importance of endogenous BMP-4 expression. Mechanisms of BMP-4-induced tumor growth suppression To clarify the mechanism by which BMP-4 suppressed primary tumor growth, we first examined the distribution of its receptors.BMPs bind a heterodimeric complex of type I and type II receptors (25).Immunohistochemistry for BMPR-II revealed broad expression on multiple cell types including tumor cells, stroma, and endothelium of large blood vessels (Fig. 6D).Microarray analysis indicated that the VEGF-Doverexpressing tumor cells expressed BMPR2, as well as BMPRIA and ACTR1A, but not BMPR1B (Supplementary Table S2), whereas immunocytochemistry confirmed expression of BMPR-IA and BMPR-II protein on tumor cells and tumorderived fibroblasts (Supplementary Fig. S4A; Supplementary Methods).Interestingly, Western blotting revealed that BMPR-II protein was more abundant in BMP-4-treated than controltreated VEGF-D-overexpressing metastatic tumors (P ¼ 0.048; Fig. 6E and Supplementary Methods), potentially representing a feedback loop that could contribute to tumor suppression. Discussion Changes to the blood or lymphatic vasculature in tumordraining LNs have prognostic significance in cancer (9,16,17,26), and may facilitate metastasis (11)(12)(13).Understanding the mechanisms and functional consequences of these alterations will be critical in determining the overall role of LN metastasis in tumor progression, and could advance prognostication and treatment for cancer patients.Here, we have identified molecules involved in the remodeling of HEVs in tumor-draining LNs, and an additional role for BMP-4 in suppressing primary tumor growth. 
Microarray analysis of enriched LN EC populations revealed differential expression of several genes with significance to endothelial and tumor biology.Analysis of isolated EC subtypes has enabled identification of important functional molecules (ref.27 and manuscript submitted).Although our isolation strategy utilized podoplanin, commonly used to distinguish lymphatic endothelium, immunohistochemical validation revealed BMP-4 to be differentially expressed in HEVs, a specialized venous endothelial type that did not express podoplanin in vivo.Subsequent to observations that blood vascular ECs cocultured with lymphatic ECs could spontaneously acquire expression of lymphatic-characteristic molecules including podoplanin (23), it has been shown that substantial plasticity exists between arterial, venous, and lymphatic EC lineages, controlled by specific transcription factors and reflecting their common embryonic origin (24).Our observations provide further confirmation of this plasticity and relatedness.Another similar study used microarray analysis of isolated lymphatic ECs from primary tumors, which were briefly cultured, to identify novel markers with prognostic significance (28).Our study advances upon this by examining the endothelium of tumor-draining LNs. The morphologic changes we observed to be associated with VEGF-D-driven metastasis and BMP-4 reduction-that is, remodeling of the normally "high"-walled HEVs into flat walled, more dilated vessels with altered proliferation responseswere consistent with those observed in mouse models and human breast cancer (7).Other investigators observed suppression of the HEV-expressed lymphotactic chemokine CCL21 and reduced lymphocyte recruitment in tumor-draining LNs (29).Such physical and molecular features of HEVs endothelium are integral to their role in trafficking leukocytes into the LN to facilitate immune responses (8).Although these investigators analyzed total HEVs, we identified HEV subtypes (BMP-4 high and BMP low ) which can respond differentially to tumor-associated stimuli.Although the functional significance of HEV height is poorly understood, flattening of HEV ECs seems to reduce leukocyte transmigration rates (30).Lower branching-order HEVs were observed to support lower rates of lymphocyte adhesion than higher-order HEVs (29); interestingly, in our studies lower-order HEVs tended to have flatter endothelium and lower BMP-4 expression than higher-order HEVs.It is possible that HEV remodeling may echo homeostatic differences in the morphologic, molecular, and functional characteristics of different branching-order HEVs.Ultimately, tumor-induced HEV remodeling could assist in generating a metastatic niche (31): proliferating, dilated blood vessels derived from remodeled HEVs could enrich the nutrient and oxygen supply to a LN, whereas impaired immune function would promote tumor cell survival.The proximity of remodeled HEVs and lymphatic vessels could provide a shortcut for metastatic cells into the blood vasculature and thus systemic circulation (31,32). Our study provides an important contribution to understanding the molecular mechanisms driving tumor-induced HEV remodeling (Fig. 
The effects of BMP-4 and VEGF-D-driven metastasis on HEV vessel wall width were strongly evident, whereas differences in lumen area and proliferation were more dynamic and may be sensitive to other factors. The differing impacts of VEGFR-2 and VEGF-D blockade suggest involvement of another VEGFR-2 ligand. Several studies have implicated VEGF-A in stimulating HEV growth and remodeling in immune responses (33, 34); thus endogenous VEGF-A could contribute to VEGFR-2-mediated HEV dilation in tumor-draining LNs. In addition, VEGF-A might be involved in the differential proliferative response of BMP-4-high and BMP-4-low HEVs to VEGF-D. BMP-4 can increase expression and phosphorylation of VEGFR-2 in ECs, thus enhancing responsiveness to autocrine or paracrine VEGF-A (35). BMP-4 itself could signal to ECs in an autocrine manner (36), and might upregulate VEGF-A expression by LN stromal cells (34, 37), thus potentiating a VEGF-A/VEGFR-2 signaling loop. VEGF-D may then inhibit proliferation of BMP-4-high HEV ECs in the tumor context by competing with VEGF-A for binding to VEGFR-2.

As a member of the TGF-β superfamily of multipotent cytokines, the role of BMP-4 in tumor progression can be complex and highly context specific (25, 39). We showed that while endogenously expressed BMP-4 regulates HEVs, exogenous BMP-4 can restrict primary tumor growth. BMP-4 is also known to induce apoptosis of other tumor cell lines (40, 41) and microvascular ECs (42), although in other studies proangiogenic responses were observed, possibly due to potentiation of VEGF-A/VEGFR-2 signaling (35). Our data suggest that lymphatic ECs may respond to BMP-4 in a similar way. An increase in proliferation of tumor-derived fibroblasts stimulated with BMP-4 in vitro is intriguing considering that cancer-associated fibroblasts are commonly implicated in promoting tumorigenesis (43). The upregulation of BMPR-II expression in BMP-4-treated tumors recapitulates a similar observation in Xenopus embryos indicating that Bmpr2 is a target gene of BMP-4 signaling (44). Expression of several other regulators of BMP-4 signaling is also induced by BMP-4, raising the possibility that blockade of relevant signaling inhibitors might enhance the efficacy of BMP-4 treatment. Previous in vivo studies have described antitumorigenic effects of BMP-4 for several tumor types (40, 45-47), as well as protumorigenic effects for some, but thus far only one other study, using a model of glioblastoma multiforme, has demonstrated an antitumor effect of therapeutically administered recombinant BMP-4 (48). Although the authors identified a prodifferentiation effect on tumor stem cells, we noted that VEGF-D is highly expressed in glioblastoma multiforme (49). Our study adds weight to the potential of BMP-4 as an antitumor agent by showing that it can inhibit tumor growth driven by 2 different lymphangiogenic/angiogenic factors through action on both tumor cells and stroma.
The context-specific nature of BMP-4 signaling does compel careful tuning of BMP-4 targeting and dosage to ensure a robust antitumor effect. A more constant dosage of BMP-4, or a delivery system more targeted to the LN, may help clarify whether therapeutically administered BMP-4 can reverse HEV remodeling or inhibit metastasis. Nevertheless, reduction of BMP-4 expression in HEVs is an important early molecular indicator of remodeling, as it precedes loss of MECA-79 upon incorporation into the vasculature of the tumor deposit (7). Clinical studies will establish whether BMP-4 may represent a convenient surrogate marker of HEV remodeling in cancer. Furthermore, BMP-4 or HEV remodeling may serve as indicators of systemic or distant effects of prometastatic tumor-derived factors such as VEGF-D, and provide prognostic information relevant to metastasis, treatment response, or patient outcome. Our data further highlight the need to better understand the functional and prognostic significance of the LN, and in particular its vasculature, to cancer metastasis, as well as the potential of BMP-4 as a multipotent antitumor agent.

Figure 1. Isolation of ECs from tumor-draining LNs. A, schematic of approach to investigate differentially expressed genes in enriched ECs from LNs draining metastatic or nonmetastatic tumors. B, immunomagnetic selection for podoplanin-enriched populations of LN ECs, as confirmed by flow cytometry. Gray line, isotype control; percentages represent proportions within the podoplanin +ve gate (isotype control proportion subtracted). C, enriched EC populations from LNs draining metastatic tumors were analyzed for ICAM-1 (green) and endoglin expression by immunofluorescence or flow cytometry.

Figure 2. Identification of differentially expressed genes in LN ECs. A, ECs from LNs draining metastatic or nonmetastatic tumors (labeled nonmetastatic or metastatic LN EC) were compared by microarray. B, a volcano plot of log odds of differential expression against fold change illustrates significantly differentially expressed genes. C, for selected genes, differential expression was validated by qRT-PCR. Shown are 2 representative examples (1 and 2) of pairwise comparisons. Data are mean ± SD of triplicate reactions. *, P < 0.05; **, P < 0.01; ***, P < 0.001.

Figure 6. Therapeutic administration of BMP-4. A, BMP-4 or vehicle control was administered to mice from day 1 until day 12 or experiment termination, and tumor volume measured (n = 9-11). B, detection of BMP-4 in serum by ELISA (n = 3). C, LNs were scored histologically positive or negative for metastatic cells. D, immunohistochemistry detecting BMPR-II expression on multiple tumor cell types including blood vessels, inset. E, Western blot detecting BMPR-II in cultured tumor cells and metastatic (VEGF-D) tumor lysates, and densitometric quantitation of expression (n = 3; full-length blot, Supplementary Fig. S5).

M.G. Achen and S.A. Stacker: commercial research grant, Imclone; ownership interest, Circadian Technologies; consultant/advisory board, Vegenics. R. Shayan: ownership interest, Circadian Technologies. The other authors disclosed no potential conflicts of interest.
5,520.2
2011-10-15T00:00:00.000
[ "Biology", "Medicine" ]
DFT approaches to transport calculations in magnetic single-molecule devices

Electron transport properties of single-molecule devices based on the [Fe(tzpy)2(NCS)2] complex placed between two gold electrodes have been explored using three different atomistic DFT methods. This kind of single-molecule device is quite appealing because it can present magnetoresistance effects at room temperature. The three employed computational approaches are: (i) self-consistent non-equilibrium Green functions (NEGF) with periodic models, which can be regarded as the most accurate among the state-of-the-art methods, and two non-self-consistent NEGF approaches using either a periodic or a non-periodic description of the electrodes (ii and iii). The analysis of the transmission spectra obtained with the three methods indicates that they provide similar qualitative results. To obtain a reasonable agreement with the experimental data, it is mandatory to employ density functionals beyond the commonly employed GGA (i.e., hybrid functionals) or to include on-site corrections for the Coulomb repulsion (GGA+U method).

Introduction

The field of Molecular Electronics has been developed with the goal of providing a miniaturization of electronic devices beyond the limit of silicon technology. [1] Nowadays, the development of new devices that can go beyond Moore's law is mandatory because the current gate size of silicon-based transistors (14 nm in the latest technologies) is close to its limits. Thus, new technologies "More than Moore" and "beyond CMOS" [2] are required to achieve future generations of computing devices. Under this perspective, heterogeneous devices replacing some silicon components by nanometric 2D or 1D systems and molecules are a good alternative. Furthermore, the possibilities for controlling electron spin in molecular systems open many opportunities and challenges in the Molecular Spintronics field. [3,4] Several examples of devices based on multilayer systems exploiting spin properties have been developed (e.g., spin filters, spin valves, negative differential resistance devices). [5][6][7] Most of these devices are based on a nonmagnetic layer sandwiched between two magnetic electrodes and show, among other properties, magnetoresistance. This property consists of a change in the conductivity when the magnetic polarization of one of the magnetic electrodes is switched. The use of molecules in this research field is relatively new and most of the experiments were performed far from room conditions, i.e., under ultrahigh vacuum and at low temperatures. [8][9][10][11] The development of new devices has been carried out basically using break-junction or scanning tunneling microscope (STM) experiments with the magnetic molecule (mainly single-molecule magnets and spin-crossover systems) placed between the two metal electrodes. Recently, some of us reported molecular-based devices displaying room-temperature spin-dependent transport: single-molecule STM junctions built by bridging individual small magnetic molecules, such as the spin-crossover Fe(II) complexes [Fe(tzpy)2(NCX)2] (X = S or Se) in a high-spin S = 2 configuration deposited on a gold substrate, using a magnetic nickel tip. [12] From the computational point of view, many theoretical models have been proposed for the study of Molecular Electronics systems. Atomistic calculations remain restricted to DFT methods, [13][14][15] due to the complexity and large number of atoms (two metal electrodes plus the magnetic molecule) present in the devices. [16,17]
DFT approaches are commonly based on the assumption that the electrons travel through the molecule rapidly (tunneling) without causing reduction or oxidation of the molecule. In the experiments, such conditions can be achieved with small molecules and good molecule-electrode contacts. The opposite mechanism is the Coulomb blockade regime, in which the conduction electron remains captured in the molecule (usually with larger molecules and poor contacts); this regime has been mainly explored using the Master Equation formalism but also time-dependent DFT methods. [18,14] Furthermore, DFT-based approaches consider that the electron transport in the scattering region (molecule and electrode surfaces at the contacts) is coherent (no change in the wavefunction phase) and elastic (no change in energy during the transport process), although some inelastic corrections can be included, mainly to take into account electron-phonon interactions.

The most common procedure to analyze phase-coherent transport in molecular transport junctions is based on the work of Landauer, Imry and Buttiker. [19] Their expression for the conductance (G) for a given system with current I and voltage V is

G = I/V = (2e²/h) Σᵢ Tᵢᵢ  (1)

where Tᵢᵢ, e and h are the transmission through channel i, the electron charge and Planck's constant. Usually, the pre-factor combination of constants 2e²/h is defined as the quantum of conductance, G₀ (the conductance through a single-atom metal wire). From the practical point of view, the sum in Eq. 1 is considered in terms of the molecular orbitals of the molecule that can provide an electron pathway (channel) between the two electrodes. The energy of such orbitals must be closer to the Fermi level of the metal electrodes than the applied voltage. During the travel through the molecule, there is a scattering process but there are no significant changes in the electronic structure of the molecule (tunneling) and the process is energy conserving (in the original Landauer formulation, the scattering was always considered elastic). Eq. 1 cannot be directly applied by quantum chemistry methods. However, using non-equilibrium Green functions (NEGF), [20] the current (I) can be expressed as an integral over voltage- and energy-dependent magnitudes:

I(V) = (2e/h) ∫ dE Tr[Γ_L(E,V) G^r(E,V) Γ_R(E,V) G^a(E,V)] [f_L(E,V) − f_R(E,V)]  (2)

where G^r (G^a) is the retarded (advanced) Green function, Γ is the spectral density of the electrodes (twice the imaginary component of the self-energies) and f is the Fermi distribution of the electrodes (left and right). From Eq. 2, the transmission can be expressed as:

T(E,V) = Tr[Γ_L(E,V) G^r(E,V) Γ_R(E,V) G^a(E,V)]  (3)

The terms needed to apply Eq. 3 can be extracted from the corresponding Hamiltonian and overlap matrices obtained from any electronic structure method, for instance extended-Hückel (tight-binding) or DFT. Thus, the retarded Green's function matrix for the whole system is

G^r(E,V) = [(E + i0⁺) S − H − Σ_L(E,V) − Σ_R(E,V)]⁻¹  (4)

where S and H are the overlap and Hamiltonian matrices and Σ_L and Σ_R are the self-energies of the left and right electrodes.
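To make the role of Eqs. (1)-(4) concrete, the short Python sketch below evaluates the zero-bias transmission of a toy four-orbital "molecule" in the wide-band limit, where the electrode spectral densities Γ are taken as energy-independent constants. It is only an illustration of the working equations; the Hamiltonian, the coupling strengths and the orbitals chosen to contact the leads are hypothetical values, not parameters of the [Fe(tzpy)2(NCS)2] junction or of any of the codes discussed below.

import numpy as np

H = np.diag([-1.0, -0.3, 0.4, 1.2])                 # toy molecular orbital energies (eV)
H += 0.05 * (np.eye(4, k=1) + np.eye(4, k=-1))      # weak inter-orbital coupling

gamma = 0.1                                         # electrode broadening (eV), wide-band limit
Gamma_L = np.zeros((4, 4)); Gamma_L[0, 0] = gamma   # left lead couples to orbital 0
Gamma_R = np.zeros((4, 4)); Gamma_R[3, 3] = gamma   # right lead couples to orbital 3
Sigma = -0.5j * (Gamma_L + Gamma_R)                 # retarded self-energy in the wide-band limit

def transmission(E):
    """T(E) = Tr[Gamma_L G^r Gamma_R G^a] at zero bias (Eqs. 3 and 4)."""
    Gr = np.linalg.inv(E * np.eye(4) - H - Sigma)   # retarded Green's function
    Ga = Gr.conj().T                                # advanced Green's function
    return np.trace(Gamma_L @ Gr @ Gamma_R @ Ga).real

energies = np.linspace(-2.0, 2.0, 801)
T = np.array([transmission(E) for E in energies])
G0 = 7.748e-5                                       # quantum of conductance 2e^2/h, in siemens
print("zero-bias conductance at E_F = 0:", G0 * transmission(0.0), "S")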
The goal of this paper is to analyze three different DFT approaches and their accuracy for studying coherent elastic spin-polarized transport through a magnetic single-molecule device (see Figure 1; the spin-crossover Fe(II) complex [Fe(tzpy)2(NCS)2] mentioned above, placed between gold electrodes):

(i) The first approach employs a combination of the NEGF technique with density functional theory to obtain the self-consistent mean-field Hamiltonian of the system subject to a finite bias voltage and, from it, the Green's functions providing the non-equilibrium electronic density and current. [21,14] This approach is available in several computer codes, for instance Transiesta, [21] ADF-BAND, [22,23] Smeagol [24] or ATK, [25] and allows a periodic description of the electrodes, normally associated with large computational demands. Magnetic molecules can pose additional complications for this approach, as correct orbital occupations must be retained in the calculations. In addition, the large self-interaction error of the commonly employed GGA functionals (see ref. [27]) will result in small energy gaps between empty and occupied orbitals, thus giving wrong transmission curves and conductance values that are often too large. This can be improved by using more sophisticated functionals that partially remove such error (for instance, hybrid or long-range corrected functionals).

(ii) The second approach tries to solve the main drawbacks of the first one, but at the price of describing the electrodes as a relatively small finite cluster of metal atoms, which allows a common quantum chemical molecular calculation to be performed on them (Gaussian, [28] Q-Chem, [29] ADF, [30] Turbomole [31] or FHI-aims [32]). Thus, hybrid or long-range corrected exchange-correlation functionals can be employed to provide better orbital energies and, consequently, transport properties. These computer codes also provide control over the electronic structure of the magnetic molecule once it is placed between the electrodes, so that a correct open-shell electron distribution is obtained. The Green's functions are calculated in the less accurate wide-band-limit approximation, which assumes that the density of states (DOS) of the electrode is independent of the energy. [22] Thus, the Green's functions of the electrode surface (in Eq. 6) are just a constant value below the Fermi level. In these codes, the transport module is often implemented as a post-processing tool (Artaios, [33] AITRANSS [34]) and is usually restricted to zero bias; however, other codes such as Alacant [35] also allow a fully self-consistent NEGF approach implemented on top of the Gaussian code.

(iii) Finally, the third approach is the one adopted in the GOLLUM computer code, [36] which uses periodic tight-binding calculations or maps a DFT Hamiltonian matrix (calculated with another code, for instance Siesta, [37,38] which can incorporate DFT+U and dispersion-corrected functionals and also provides good control of the electronic structure of magnetic open-shell systems) onto a tight-binding description. Then, in a post-processing step based on simpler equilibrium transport theory, a large variety of transport properties can be obtained with fewer computational resources than with the self-consistent NEGF codes.

Computational details

Calculations were performed with the ATK, [25] Gaussian [28]-Artaios [33] and Siesta [38]-Gollum [36] packages. The self-consistent NEGF calculations were performed with the ATK code using periodic models, the PBE [39] and PBE+U functionals, and a double-zeta basis set with polarization for all the elements, with the exception of gold, for which a single-zeta basis with polarization was used in combination with an 11-electron pseudopotential. The periodic model structure has the thiocyanato S atoms located in threefold hollow sites of the Au(111) surface [40] (S-surface distance of 2.5 Å, Figure 1). The calculations were carried out with the molecule sandwiched between three gold layers with a 6 × 6 surface unit cell.
The Gaussian calculations were performed with release D01 using the pure GGA (PBE), [39] hybrid (B3LYP), [41] hybrid meta-GGA (TPSSh), [42] and long-range corrected (ωB97X) [43] exchange-correlation functionals with a LANL2DZ basis set including pseudopotentials for all the elements. [44] The choice of the TPSSh functional is due to the well-known problems of most DFT functionals [45] in correctly reproducing the energy difference between the high- and low-spin states of spin-crossover complexes; this exchange-correlation functional provides the best results. [46][47][48] The post-processing procedure to obtain the transmission curves was performed with the Artaios code (version 1.9). [33] The non-periodic model has the same structure as the periodic model used with ATK, but the gold electrode is cut keeping only 22 gold atoms (3, 7 and 12 atoms in the first, second and third layer, respectively). Calculations with the Siesta code [38,37] were performed using version 3.0 with the PBE [39] and PBE+U functionals and a double-zeta basis set for all the elements, with the exception of gold, for which a single-zeta basis was used in combination with an 11-electron pseudopotential.

Results and Discussion

Previously, some of us analyzed the transport properties of the high- and low-spin states of the [Fe(tzpy)2(NCS)2] complex placed between two gold electrodes using the PBE functional and periodic calculations with the self-consistent NEGF Transiesta code. [40] The high-spin state (S = 2, t2g^4 eg^2), with five alpha and one beta electrons, shows a much higher conductance than the low-spin state (S = 0, t2g^6 eg^0). [40,50] The reason for the large high-spin conductivity, considering that there is only one beta electron in a t2g orbital, is that the orbital bearing this electron, as well as the first unoccupied beta t2g orbital, are the ones closest in energy to the Fermi level of the gold electrodes, providing effective channels for transport through the molecule (see Figure 2). However, in the low-spin state, both sets of orbitals (t2g and eg) are relatively distant from the Fermi level, thus resulting in a lower conductivity. As in the high-spin state, the transport is due to the beta orbitals. The current is highly spin polarized, carried by minority beta carriers, and this property is crucial for the magnetoresistance behavior found experimentally in STM measurements of such systems. [12]

Self-consistent NEGF with periodic models

The calculations were performed using the ATK code [25] and the PBE [39] and PBE+U (U = 4.0 eV, a recommended value for Fe(II) complexes [51]) functionals. The transmission spectra for the high-spin S = 2 Fe(II) complex for the PBE and PBE+U functionals are represented in Fig. 2. There is a clear influence of the inclusion of the electron repulsion through the U parameter, resulting in a much larger gap between occupied and empty orbitals. Thus, the t2g levels are more distant from the Fermi level of the electrodes and, logically, a smaller intensity is obtained with this method (see Fig. 2).

Non Self-Consistent NEGF with cluster models

As mentioned above, the non-self-consistent NEGF methods using cluster models have the limitations of the structural model and also of the simplifications in the calculation of the surface Green's functions.
However, it is possible to employ common quantum chemistry codes (e.g., Gaussian09 [28]), opening the possibility of using a wide range of exchange-correlation functionals and of extracting the transport properties (only transmission spectra) in a post-processing procedure (Artaios code [33]). In the previous section, we showed the usual overestimation of the transport properties provided by the GGA functionals. Thus, in our analysis we compare four different functionals: (i) the PBE functional, [39] to have the same reference as in the self-consistent NEGF periodic method; (ii) the popular hybrid B3LYP functional; [41] (iii) the hybrid meta-GGA functional TPSSh, [42] usually recommended to provide a good description of spin-state stabilities in metal complexes; and (iv) the long-range corrected ωB97X functional, [43] which should improve the asymptotic behavior and, consequently, reduce the self-interaction error to give a better description of the excitations.

The comparison of the PBE transmission spectra (Figures 2 and 6) shows very similar results, with two broad beta transmission peaks close to the Fermi level (indicating a strong contact interaction with the gold electrode) separated by around 0.5 eV (2 eV for PBE+U with U = 4.0 eV), while the alpha gap is around 2 eV. The position of the beta transmission peaks is the key parameter for the transport properties; the gap between both peaks (with a large contribution of t2g-like metal orbitals, see Figure 6) is around 3, 2.3 and 8 eV for the B3LYP, TPSSh and ωB97X functionals, respectively. Thus, the two hybrid functionals give similar results (a slightly higher energy gap for B3LYP due to the larger exact-exchange contribution), close to those obtained with the self-consistent NEGF periodic PBE+U calculations. However, the long-range corrected ωB97X functional gives a much higher energy gap. Concerning the position of these two peaks relative to the Fermi level, the empty t2g-like orbital remains relatively close with all the functionals while the occupied one is considerably shifted to lower energy values. Thus, this result indicates that the transport would occur through the unoccupied levels, as is usually found for this kind of single-molecule device.

Non Self-Consistent NEGF with periodic models

This method allows the transport properties to be calculated in a post-processing step, but using periodic boundary conditions for the description of the electrodes. Thus, the option to improve the results with DFT+U methods is available by carrying out regular calculations with the Siesta code [38,37] and subsequently obtaining the transport properties with the Gollum package. [36] Furthermore, being based on a non-self-consistent approach results in a considerable reduction of the required computational resources.

It is worth noting that another representation of transport properties commonly employed with the Gollum program is to analyze the dependence of the conductance G on the Fermi level value. [52,53] This representation (see Fig. 8) is often employed because it is a well-known drawback that the Fermi level values calculated with common DFT functionals are usually inaccurate, especially taking into account that the transport properties are highly sensitive to small energy shifts of the transmission peaks. Thus, it is possible to easily choose the Fermi level value that provides a reasonable agreement with the experimental data; otherwise, the Fermi level is fixed at zero. The I/V characteristics can also be calculated (see Fig. 9) with this post-processing approach, leading to almost identical results to those obtained with the self-consistent NEGF approach.
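As a small illustration of these two post-processing representations, the sketch below continues the toy transmission example given earlier: it evaluates the zero-bias conductance for a range of trial Fermi-level positions and a zero-temperature Landauer I-V curve obtained by integrating T(E) over the bias window. It is a generic illustration under those assumptions, not a reproduction of the Gollum workflow or of the results for this junction.

import numpy as np

G0 = 7.748e-5          # quantum of conductance 2e^2/h (S)
e_charge = 1.602e-19   # electron charge (C)
h_eVs = 4.136e-15      # Planck's constant (eV*s)

def conductance_vs_fermi(energies, T):
    """Zero-bias conductance for every trial Fermi-level position: G(E_F) = G0 * T(E_F)."""
    return G0 * np.asarray(T)

def current(energies, T, E_F, V):
    """I(V) = (2e/h) * integral of T(E) over the symmetric bias window (T = 0 K limit of Eq. 2)."""
    window = (energies >= E_F - V / 2.0) & (energies <= E_F + V / 2.0)
    integral = np.trapz(T[window], energies[window])   # eV * (dimensionless)
    return (2.0 * e_charge / h_eVs) * integral          # result in amperes

# example usage with the 'energies' and 'T' arrays of the previous sketch:
# G_of_EF = conductance_vs_fermi(energies, T)
# I = [current(energies, T, E_F=0.0, V=v) for v in np.linspace(0.0, 1.0, 11)]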
Concluding Remarks

Theoretical methods can provide very useful information about the transport mechanism in single-molecule devices. The presence of magnetic molecules in such devices opens the field of Molecular Spintronics to new physical properties. We have analyzed three different theoretical approaches to study devices based on the [Fe(tzpy)2(NCS)2] complex, which shows room-temperature magnetoresistance in single-molecule devices. Due to the small size of the molecule and the relatively good molecule-gold electrode contact, the transport mechanism in this system is basically tunneling. Hence, this mechanism can be analyzed with atomistic DFT calculations combined with the non-equilibrium Green function approach to obtain transmission and current properties through the Landauer equation.

The most accurate approach is the self-consistent NEGF method using a periodic description of the electrodes. The calculations were performed using PBE and PBE+U (U = 4.0 eV for the Fe d orbitals). Due to the well-known drawback of the large self-interaction error of the PBE functional, the presence of occupied and empty molecular levels too close to the Fermi level of the electrode results in a considerable overestimation of the current. Thus, the PBE+U approach improves the results considerably within this methodology. The inclusion of the +U correction shifts the orbitals away from the Fermi level, increasing the energy gap, and consequently the overestimation of the current obtained with the PBE functional is corrected.

The second theoretical approach allows the calculations to be performed with a general computer code in which many different flavors of exchange-correlation functional are available (e.g., Gaussian09). The transport properties are calculated non-self-consistently using NEGF with a post-processing tool, with small finite metal clusters modeling the electrodes. Using the same functional as in the first approach provides similar results and, again, the use of hybrid functionals to reduce the self-interaction error is needed to obtain transmission curves similar to those obtained with the periodic self-consistent NEGF DFT+U calculations.

Finally, the third approach is also based on a periodic description of the electrodes, but the transport properties are again obtained with a post-processing tool within a non-self-consistent NEGF scheme. The comparison of the same functionals (PBE and PBE+U) with the self-consistent NEGF periodic method actually gives very similar transmission curves with considerably smaller computational resources. Hence, we can conclude that the three theoretical approaches can achieve a correct semi-quantitative description if an appropriate functional is employed and that, qualitatively, even the simplest GGA functional provides a correct description of the orbitals involved in the transport channels, despite a strong overestimation of the calculated current.
4,351.2
2016-07-15T00:00:00.000
[ "Physics" ]
Family of Quantum Sources for Improving Near Field Accuracy in Transducer Modeling by the Distributed Point Source Method

The distributed point source method, or DPSM, developed in the last decade has been used for solving various engineering problems, such as elastic and electromagnetic wave propagation, electrostatic, and fluid flow problems. Based on a semi-analytical formulation, the DPSM solution is generally built by superimposing the point source solutions or Green's functions. However, the DPSM solution can also be obtained by superimposing elemental solutions of volume sources having some source density called the equivalent source density (ESD). In earlier works mostly point sources were used. In this paper the DPSM formulation is modified to introduce a new kind of ESD, replacing the classical single point source by a family of point sources that are referred to as quantum sources. The proposed formulation with these quantum sources does not change the dimension of the global matrix to be inverted to solve the problem when compared with the classical point source-based DPSM formulation. To assess the performance of this new formulation, the ultrasonic field generated by a circular planar transducer was compared with the classical DPSM formulation and the analytical solution. The results show a significant improvement in the near field computation.

Introduction

Modeling of ultrasonic and sonic fields generated by a planar transducer of finite dimension is a fundamental problem whose solution is available in textbooks in this area [1][2][3][4]. A good review of the earlier developments of the ultrasonic field modeling in front of a planar transducer can be found in [5]. A list of more recent developments is given by Sha et al. [6]. The pressure field in front of a planar transducer in a homogeneous isotropic fluid has been computed both in the time domain [5,7,8] and in the frequency domain [9][10][11][12][13][14]. In the frequency domain analysis, which is more popular, the transducers are assumed to be vibrating with constant amplitudes at certain frequencies, and the pressure fields in front of the transducers are computed. Most of these models give the steady-state response of the transducers. Recently, phased array transducers have been modeled as well [15][16][17][18][19][20].

In all modeling approaches that are based on the Huygens-Fresnel superposition principle and the Rayleigh-Sommerfeld Integral (RSI), the strengths of the point sources modeling the transducer are assumed to be known. For type-I RSI the pressure distribution, and for type-II RSI the velocity distribution, on the transducer surface is assumed to be known. The distributed point source method (DPSM) [21][22][23] has been developed relatively recently for solving electrostatic, electromagnetic, and ultrasonic wave propagation problems. DPSM was first proposed by Placko and his co-workers for solving electrostatic and magnetostatic problems [21]. Later it was developed further by Placko and Kundu [22] and Kundu et al. [23] for solving ultrasonic problems. In ultrasonic modeling the source strengths can be treated as unknowns and obtained from the transducer surface boundary conditions as velocity or pressure. Alternatively, all point sources can be assumed to have the same strength in order to model the uniform velocity condition at the transducer surface. In recent years DPSM has been extended to solve high-frequency fluid flow problems [24] and fluid mechanics problems involving flow around an obstacle [25].
DPSM showed significant savings in computational time compared to the Finite Element Method (FEM) for modeling ultrasonic problems [23,24,26] or electromagnetic problems [27]. A three-dimensional (3D) wave field modeling problem in a homogeneous fluid in front of a square transducer was solved in 2 min by DPSM while the COMSOL Multi-Physics FEM code took 35 h to solve the identical problem on the same computer [23]. Jarvis and Cegla [26] used DPSM and FEM for solving ultrasonic wave reflections from a rough surface and concluded, "[FEM solution is] two orders of magnitude slower than the equivalent DPSM simulation on the same machine".

In the above works on DPSM, point sources were uniformly distributed near the boundaries, interfaces, and front faces of the transducer (or emitter). The source strengths were then calculated by solving a system of linear equations obtained by satisfying boundary and continuity conditions for the problem geometry. In the near field, very close to the point sources, the computed field satisfies the boundary conditions only at the discrete points where the boundary conditions are specified. In between those points the boundary conditions are generally not satisfied. To overcome this shortcoming the use of a scaling factor was proposed [23]. In [23] Kundu et al. showed that the computed values in the far field (away from the transducer face) could be accurately modeled after incorporating the scaling factor. After considering this scaling factor for both near and far field computations, it was observed that at sections very close to the transducer face the average value of the computed field matched well with the true solution, although for individual points the deviation from the true solution could still be significant. To overcome this shortcoming Rahani and Kundu [28] tried DPSM with equivalent source densities instead of the unique point source. They proposed two modifications: (1) G-DPSM, or Gaussian DPSM, replacing the single point source by a family of sources whose strength variations follow a Gaussian profile, and (2) ESM (Element Source Method), replacing discrete point sources by continuously varying element sources. Although these two modifications avoided the near field inaccuracy problem, both approaches had their limitations. Results generated by G-DPSM were strongly affected by the Gaussian distribution function parameters and the ESM formulation was relatively complex. Thus, one major advantage of the DPSM formulation, its simplicity, was compromised by ESM. The concept of equivalent source density, including elemental point sources and linear, surface, or volume repartitions, was presented and advocated by Placko et al. [29].

The objective of this paper is to propose a new ESD (equivalent source density) approach which is much simpler and can improve the near field computation accuracy without increasing the matrix size that is to be inverted for obtaining the solution. In this solution, the single point source is replaced by a source family. Point sources in a family are denoted as quantum sources to distinguish them from the regular point sources used in the classical DPSM formulation. A transducer can be modeled by M number of regular point sources in the classical DPSM formulation or M × F quantum sources in the modified formulation. If every one of the M point sources is replaced by F number of quantum sources then the total number of quantum sources is equal to M × F.
The quantum source strengths are set such that a given radiation pattern from the quantum source family is obtained. G-DPSM, proposed by Rahani and Kundu [28], assumed a Gaussian distribution of source strength variations in a source family, and their final results were strongly affected by the Gaussian distribution function parameters. However, in the technique proposed here, the strength variation of the quantum sources in a family is assumed to be unknown, but the radiation pattern generated by a source family is considered to follow a specific pattern; a Gaussian distribution is one possibility, but not necessarily the only possibility. However, not assuming a Gaussian distribution of the source strengths in a family significantly increases the number of unknown source strengths. It is illustrated in the Formulations section how this improvement can be achieved without increasing the final matrix size that is to be inverted. Thus, this modification can improve the computational accuracy without significantly increasing the computational time.

Formulations

The classical and the proposed DPSM formulations using family ESD are presented in this section. Although the classical DPSM formulation using unique point sources is available in the literature, in the various publications mentioned above, for the benefit of the readers the basic steps of the classical DPSM formulation and its current limitations are briefly reviewed first in the following subsection, before introducing the new formulation.

Classical DPSM Formulation: ESD is a Single Point Source

To generate the acoustic field in the fluid medium in front of an ultrasonic transducer or the electric field in front of an emitter by the DPSM technique, a number of point sources are placed slightly behind the transducer (or emitter) face as shown in Figure 1. This figure shows M number of source points placed at the centers of M spheres of radius r_s. The spheres touch the front face of the transducer. Thus, the point sources are located at a distance of r_s behind the transducer face to avoid the singularity. The DPSM formulation for ultrasonic wave field modeling is presented below. However, for electric field modeling the derivations are identical; the parameter names are simply changed from pressure, velocity, etc., to electric field, magnetic field, etc.

From [3] the pressure and velocity fields in a fluid at point x at a distance r_m from the m-th point source of strength A_m can be written as follows. The pressure field:

p_m(x) = A_m exp(i k_f r_m)/r_m  (1)

The velocity in the radial direction, at a distance r from the m-th point source:

v_r(x) = [A_m/(iωρ)] (i k_f r − 1) exp(i k_f r)/r²  (2)

Therefore, the z-component of the velocity is:

v_z(x) = [A_m z_m/(iωρ)] (i k_f r_m − 1) exp(i k_f r_m)/r_m³  (3)

where k_f is the wave number in the fluid, ω the angular frequency and ρ the fluid density. If there are M point sources distributed over the transducer surface, as shown in Figure 1, then the total pressure and the z-direction velocity values at point x are given by:

p(x) = Σ_{m=1..M} A_m exp(i k_f r_m)/r_m  (4)

v_z(x) = Σ_{m=1..M} [A_m z_m/(iωρ)] (i k_f r_m − 1) exp(i k_f r_m)/r_m³  (5)

where z_m is the distance of point x in the z-direction (perpendicular to the transducer face) measured from the m-th point source.

If the transducer surface velocity in the z direction is given by v_0 then for all x values on the transducer surface the velocity should be equal to v_0. Therefore:

Σ_{m=1..M} [A_m z_m/(iωρ)] (i k_f r_m − 1) exp(i k_f r_m)/r_m³ = v_0  (6)

By satisfying Equation (6) at M points on the transducer surface, where the spheres touch the transducer face, it is possible to obtain a system of M linear equations with M unknowns (A_1, A_2, A_3, ..., A_M). If the point sources are placed slightly behind the transducer face as shown in Figure 1 then r_m of Equation (6) can never be equal to zero; thus, singular points are avoided.

If point x in Figure 1 coincides with the n-th target point and the velocity at the n-th target point or observation point is to be computed, then from Equation (6) one obtains:

v_zn = Σ_{m=1..M} [A_m z_nm/(iωρ)] (i k_f r_nm − 1) exp(i k_f r_nm)/r_nm³  (7)

In Equation (7) the first subscript n (of z and r) corresponds to the n-th target point and the second subscript m represents the m-th source point; therefore, r_nm is the distance of the n-th target point from the m-th source point while z_nm is the distance along the z-direction between the n-th target point and the m-th source point. If the number of target points is made equal to the number of source points (both equal to M) and the target points are placed on the transducer surface then Equation (7) can be written in matrix form as:

V_S = M_SS A_S  (8)

where V_S is the (M × 1) vector of the velocity components at the M target points on the transducer surface, and A_S is the (M × 1) vector containing the strengths of the M point sources. M_SS is the (M × M) square matrix relating the two vectors V_S and A_S:

V_S = [v_z1 v_z2 v_z3 ... v_zM]^T  (9)

where the superscript T indicates the transpose and v_zm represents the z-direction velocity at the m-th target point. If the transducer surface vibrates with a constant velocity amplitude v_0 then all elements of Equation (9) should have the same value v_0.
Vector A_S of the source strengths is given by:

A_S = [A_1 A_2 A_3 ... A_M]^T  (10)

The matrix M_SS is obtained from Equation (7):

M_SS = [f(r_nm, z_nm)]  (11)

where:

f(r, z) = [z/(iωρ)] (i k_f r − 1) exp(i k_f r)/r³  (12)

Equation (8) can be inverted to obtain the point source strengths:

A_S = [M_SS]⁻¹ V_S  (13)

Instead of velocity, if pressure is specified on the transducer surface, then the source strength is obtained from the following equation:

A_S = [Q]⁻¹ P_S  (14)

where P_S is the specified pressure field at the transducer surface and the Q matrix relates the pressure field and the source strength (obtained from Equation (4)).

It should be noted here that Equation (13) can also be rewritten as A*_S = [M*_SS]⁻¹ V*_S, where V*_S is a (3M × 1) vector containing all three components of velocity at the M test points. For inversion of this equation, when needed, we may have used the concept of ESD with triplet sources at every center point, so that A*_S becomes a (3M × 1) vector containing all three strengths of the triplets at the M source center points. See [3] for a more detailed derivation.

After evaluating the source strength vector A_S from Equation (13) or (14), the pressure p(x) or velocity vector V(x) at any set of target points x (on the transducer surface or away from it) can be obtained from the following equations:

P_T = Q_T A_S,  V_T = M_TS A_S,  V*_T = M*_TS A_S  (15)

For a set of N target points the vector P_T contains N pressure values at the N target points, so its size is (N × 1). V_T is an (N × 1) vector containing the z-components of velocity at the N target points, and V*_T is a (3N × 1) vector containing all three components of velocity at the N target points.

Therefore, the velocity field on the transducer surface can be computed at N points, where N is much greater than M. It should be noted that only at M points are the boundary conditions specified. The velocity variation on the transducer face shown in Figure 2 is obtained with M = 464 (the number of source points considered for modeling the transducer) and N = 2500 (the number of target points where the velocity is computed on the plane of the transducer face). Velocity values are computed on a square surface that coincides with the transducer face. Some points on the square surface that are near its four corners are outside the transducer face and give zero displacement, as expected. However, on the transducer face the computed velocity is not uniform. It reaches the peak value of v_0 (in our case 1) at the 464 points where the small spheres of Figure 1 touch the transducer face but, in between these points, it is significantly smaller. Thus, the average velocity of the transducer face becomes less than v_0, giving rise to smaller pressure values from the DPSM analysis. When the DPSM results are multiplied by an appropriate scaling factor (obtained by dividing the given transducer face velocity by the average velocity of the transducer surface) then the discrepancy in the far field disappears [3]. However, in the near field (on the transducer face and very close to the transducer face) the scaling factor multiplication cannot remove the oscillation patterns shown in Figure 2, although their average value matches the prescribed velocity value of the transducer face.
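The classical formulation above maps directly onto a few lines of linear algebra. The Python sketch below is a minimal illustration of that workflow, assuming the standard monopole Green's function written in Equations (1)-(7) and a simple square grid of source spheres masked to the circular aperture; the grid spacing, the number of sources and the observation point are illustrative choices and differ from the 464-source model used in the paper.

import numpy as np

rho, c_f, freq = 1000.0, 1480.0, 1.0e6       # water properties and 1 MHz excitation
omega = 2.0 * np.pi * freq
k_f = omega / c_f
a = 6.0e-3                                   # transducer radius (m)
r_s = 0.5e-3                                 # source-sphere radius (m)
v0 = 1.0                                     # prescribed face velocity

# square grid of source centres on the face; sources sit at z = -r_s, targets at z = 0
xs = np.arange(-a, a + r_s, 2 * r_s)
X, Y = np.meshgrid(xs, xs)
mask = X**2 + Y**2 <= a**2
src = np.column_stack([X[mask], Y[mask], np.full(mask.sum(), -r_s)])
tgt = np.column_stack([X[mask], Y[mask], np.zeros(mask.sum())])

def vz_green(targets, sources):
    """z-velocity at each target due to unit-strength point sources (Eqs. 3 and 7)."""
    d = targets[:, None, :] - sources[None, :, :]
    r = np.linalg.norm(d, axis=-1)
    z = d[..., 2]
    return z * np.exp(1j * k_f * r) * (1j * k_f * r - 1.0) / (1j * omega * rho * r**3)

M_SS = vz_green(tgt, src)                              # (M x M) matrix of Eq. (11)
A_S = np.linalg.solve(M_SS, np.full(len(tgt), v0))     # source strengths, Eq. (13)

def pressure(targets):
    """Total pressure at target points by superposing all point sources (Eq. 4)."""
    d = targets[:, None, :] - src[None, :, :]
    r = np.linalg.norm(d, axis=-1)
    return (np.exp(1j * k_f * r) / r) @ A_S

p_axis = pressure(np.array([[0.0, 0.0, 10.0e-3]]))
print("on-axis |p| at z = 10 mm:", abs(p_axis[0]))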
Formulation with ESD Families

In the proposed formulation every point source of Figure 1 is replaced by a family of point sources as shown in Figure 3. These point sources in the family are called quantum sources to distinguish them from the unique point sources in the classical DPSM formulation. If there are F quantum sources in every family then, from the M original point sources, a total of (M × F) quantum sources are generated. If the classical DPSM formulation as described above is applied to these (M × F) quantum sources then the matrix size becomes (M × F) × (M × F) (see Equations (13) and (14)).

To reduce the matrix size and the number of unknown source strengths from (M × F) to M, it is assumed that the relative source strength distribution within a family is the same for all M families. Then, the F sources of the i-th family (i = 1, 2, ..., M) should have the following source strength distribution:

A_i = λ_i q = λ_i [q_1 q_2 q_3 ... q_(F−1) q_F]^T  (16)

The i-th family strength is denoted by λ_i and its distribution is denoted by the vector q. The distribution vector q is the same for all M families but the strength λ_i can be different for different families. Then the velocity field at the M target points generated by the (M × F) quantum sources can be written as:

v_zi = Σ_{j=1..M} Σ_{k=1..F} λ_j q_k m^i_jk,  i = 1, 2, ..., M  (17)

where m^i_jk represents the velocity at the i-th target point generated by the k-th quantum source of unit amplitude in the j-th family. The above equation can be re-written as:

v_zi = Σ_{j=1..M} λ_j (Σ_{k=1..F} q_k m^i_jk)  (18)

Or:

V_S = M_SS^q Λ  (19)

where Λ = [λ_1 λ_2 ... λ_M]^T and the elements of the (M × M) matrix M_SS^q are (M_SS^q)_ij = Σ_{k=1..F} q_k m^i_jk. Therefore:

Λ = [M_SS^q]⁻¹ V_S  (20)

It should be noted that both Equations (13) and (20) require the inversion of an M × M matrix. However, for the classical DPSM formulation (Equation (13)) the number of point sources is M, while for the new formulation (Equation (20)) the number of quantum sources is (M × F).

To obtain the relative strength variation vector q = [q_1 q_2 q_3 ... q_(F−1) q_F]^T (see Equation (16)) of the quantum sources in a family, the radiation pattern generated by the quantum sources has to be assumed. The radiation pattern describes the velocity or pressure distribution generated by a family of quantum sources for ultrasonic field modeling, or the electric field distribution for electrostatic problems. In the classical DPSM formulation the Green's function for a point source determines the radiation pattern. Typically the wavefront generated by a point source is a sphere in an isotropic medium. However, for a family of quantum sources any radiation pattern can be achieved by properly adjusting the source strength distribution vector q in the family. We may notice that, for the far field, all of the terms of the line matrix [m^1_11 ... m^1_1F] of Equation (18) tend to the same value, which corresponds to the m_11 term of Equation (12). Then, the family is seen as a cluster of sources equivalent to a unique point source of strength equal to the sum q_1 + q_2 + q_3 + ... + q_(F−1) + q_F.
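To show how Equations (16)-(20) keep the inverted matrix at M × M, the sketch below continues the previous Python example and reuses its vz_green(), src, tgt, r_s and v0. One family of F quantum sources is first given a relative-strength vector q that reproduces an assumed Gaussian velocity profile radiated by an isolated family; the q-weighted columns then build the reduced M × M system for the family strengths λ. The cluster layout, the value of σ and all other numbers are illustrative assumptions, not the 30-source families used in the paper.

import numpy as np

F = 7
ring = np.linspace(0.0, 2.0 * np.pi, F - 1, endpoint=False)
offsets = np.vstack([[0.0, 0.0, 0.0],
                     np.column_stack([0.4 * r_s * np.cos(ring),
                                      0.4 * r_s * np.sin(ring),
                                      np.zeros(F - 1)])])   # in-plane cluster layout

# Step 1: relative strengths q of one isolated family (its sources at z = -r_s),
# chosen so that the family radiates a Gaussian z-velocity profile at F face points.
sigma = 1.5 * r_s
face_pts = offsets.copy()                                    # control points on the face (z = 0)
fam_src = offsets + np.array([0.0, 0.0, -r_s])               # quantum sources of one family
gauss = np.exp(-(face_pts[:, 0]**2 + face_pts[:, 1]**2) / (2.0 * sigma**2))
q = np.linalg.solve(vz_green(face_pts, fam_src), gauss)
q = q / q.sum()            # normalized so the far field matches one unit point source

# Step 2: reduced M x M system of Eq. (20); column j is the q-weighted sum over
# the F quantum sources of family j, so the matrix to invert stays M x M.
M_fam = np.zeros((len(tgt), len(src)), dtype=complex)
for j in range(len(src)):
    quantum_j = src[j] + offsets                             # F quantum sources of family j
    M_fam[:, j] = vz_green(tgt, quantum_j) @ q
lam = np.linalg.solve(M_fam, np.full(len(tgt), v0))          # family strengths lambda_i

Because q is fixed once and reused for every family, increasing F refines the local radiation pattern without changing the size of the matrix that is inverted.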
Results

Numerical results are generated for a Gaussian radiation pattern, as shown in Figure 4, generated by every family. Quantum source strengths within every family are obtained for this radiation pattern following the classical DPSM approach and ignoring the presence of other families. Then the pressure field in the fluid in front of the uniformly vibrating circular transducer with a radius of 6 mm is computed by considering all families, as shown in Figure 3. For clarity, 50 families are shown in Figure 3. However, during the actual computation 464 families were considered. Every family is composed of 30 quantum sources. The transducer vibration frequency is 1 MHz. Fluid properties considered for this simulation are those of water: density (ρ) = 1000 kg/m³ and P-wave speed (c_f) = 1480 m/s.

The normal velocity component is first computed at 2500 points on a square surface that coincides with the transducer surface. The computed results with the new formulation are shown in Figure 5a and with the old formulation (classical DPSM) in Figure 5b. Clearly, the oscillations are much smaller (almost disappearing) in Figure 5a. In both formulations the final matrix that was inverted had dimensions of 464 × 464. In the new formulation 464 clusters gave this matrix dimension, while in the old formulation 464 point sources generated this matrix. To improve the accuracy of the classical DPSM computation, the number of point sources and, hence, the matrix dimension must be increased while, in the new formulation, the number of quantum sources can be increased without increasing the number of clusters and, therefore, the matrix dimension.

Figure 6 shows the variation of the normal velocity along the diameter (the y-axis going through the center of the transducer face). The continuous line shows the field computed by the new formulation while the dotted line is obtained from the classical DPSM formulation. Nine points where the two curves match correspond to the boundary points where the velocity is specified. These nine points out of the 464 points appear on the diameter along which the velocity values are computed. Classical DPSM gives correct values only at those nine points and deviates significantly from the correct value of 1 at other points. The results generated by the new formulation are much closer to the true values and match the true values at several other points where the boundary conditions are not specified. To get a quantitative measure of the difference between these curves the error is computed using Equation (21), where V(i)_true is the true value of the velocity amplitude at the i-th point, which is 1 on the transducer surface and zero outside the transducer, as shown in Figure 6. The true value is denoted as the objective function (dashed-dotted line) in this figure. V(i)_computed are the computed values (continuous and dotted lines of Figure 6). The computed error is 34.8% for classical DPSM and 3.05% for the proposed DPSM with the family ESD technique.

From the symmetry of the problem it is obvious that the fluid velocity parallel to the transducer surface in the x-direction should be zero along the y-axis going through the transducer center. This velocity component on the transducer surface is plotted in Figure 7. It is close to zero for the new formulation. However, for the classical formulation, strong velocity values are observed near the periphery of the transducer. Both formulations give zero velocity near the central axis of the transducer.

From Figures 6 and 7 it is clear that the new formulation significantly improves the field computed in the fluid adjacent to the transducer face. To investigate how the computed fields away from the transducer face are affected by these two formulations, the normal velocity components are plotted along the y-axis for four different distances (z = 0, r_s/3, 2r_s/3 and r_s) from the transducer surface. These plots are shown in Figure 8. It should be noted that r_s is the radius of the source sphere, as shown in Figure 1. Therefore, r_s is also the distance of the point sources from the transducer surface. Figure 8 clearly shows that as the observation points move away from the transducer face by a distance equal to r_s, or when they are located at a distance 2r_s from the point sources, the oscillations in the computed field disappear for the classical DPSM as well. However, the curves generated by the classical DPSM and the family ESD DPSM do not match, although their shapes appear to be similar. The classical DPSM results need to be multiplied by a scaling factor to match the new results.

Classical and family ESD DPSM results are then compared with the analytical solution in Figure 9. For a circular transducer of radius a the analytical expression of the pressure field along the central axis is given by [23]:

p(z) = ρ c_f v_0 [exp(i k_f z) − exp(i k_f √(z² + a²))]  (22)

where p is the fluid pressure at a distance z from the transducer face vibrating at frequency ω (rad/s) in a perfect fluid of density ρ. The normal velocity component (v_z) at the transducer surface is v_0. The wave number k_f is equal to ω/c_f, where c_f is the acoustic wave speed in the fluid. The dashed-dotted line of Figure 9 shows the analytical solution, the continuous line is the family DPSM solution, and the dotted line is the classical DPSM solution. Clearly, the family DPSM solution is closer to the analytical solution (error of 5.57%) than the classical DPSM solution (error of 32.7%), but neither solution matches the analytical solution exactly. This is because we arbitrarily assumed the Gaussian radiation pattern for each family. This assumption might be better than the spherical radiation pattern generated by a point source in classical DPSM modeling, but it is still not necessarily the actual radiation pattern needed for a perfect match. However, both curves can be matched with the analytical solution simply by multiplying them with appropriate scaling factors.
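The closed-form benchmark of Equation (22) is easy to script. The short Python block below evaluates it along the axis and defines one plausible percentage-error measure for comparing a computed profile against the objective function; the exact norm used in Equation (21) is not reproduced in the extracted text, so that definition is an assumption. The material constants repeat the values quoted above.

import numpy as np

rho, c_f, v0, a = 1000.0, 1480.0, 1.0, 6.0e-3   # water, unit face velocity, 6 mm radius
k_f = 2.0 * np.pi * 1.0e6 / c_f                  # wave number at 1 MHz

def p_axis_analytical(z):
    """|p| on the axis of a uniformly vibrating circular piston of radius a (Eq. 22)."""
    return rho * c_f * v0 * np.abs(np.exp(1j * k_f * z)
                                   - np.exp(1j * k_f * np.sqrt(z**2 + a**2)))

def rel_error(computed, true):
    """Percentage RMS deviation from the objective values (assumed error measure)."""
    computed, true = np.asarray(computed), np.asarray(true)
    return 100.0 * np.sqrt(np.mean(np.abs(computed - true)**2)) / np.max(np.abs(true))

z = np.linspace(1.0e-3, 60.0e-3, 200)
p_true = p_axis_analytical(z)
# compare e.g. against the on-axis DPSM pressure of the earlier sketches:
# p_dpsm = np.abs(pressure(np.column_stack([np.zeros_like(z), np.zeros_like(z), z])))
# print(rel_error(p_dpsm, p_true))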
the analytical solution simply by multiplying them with appropriate scaling factors. The sensitivity of the family DPSM-computed results on the assumed radiation pattern of the individual families is investigated in Figures 10 and 11. The standard deviation (σ) of the Gaussian radiation profile is changed from 1.3r_s to 1.7r_s with an interval of 0.2r_s, and the normal velocity profiles on the transducer surface along a diameter are plotted in Figure 10. Clearly, the results are sensitive to the assumed radiation pattern. However, all three curves generated by the family DPSM formulation are much closer to the true solution (v_z = 1) than the classical DPSM-generated results. It should be mentioned here that Figures 5-9 were generated for σ = 1.5r_s. Finally, Figure 11 shows the pressure field variations along the central axis generated by classical DPSM and family DPSM with the same three values of σ as considered in Figure 10. Note that for some radiation patterns the family DPSM-computed field matches very well with the analytical solution. However, all of these curves can be brought very close to the true solution by multiplying them with the appropriate scaling factor. These results show that if we are interested in computing the field very close to the transducer surface (z < r_s), then the family DPSM formulation must be adopted, because simply multiplying the computed field by a scaling factor cannot get rid of the oscillations in the computed field near the transducer. However, if such near field solutions are not needed, then the classical DPSM solution with a scaling factor generates accurate results.
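To make the comparison in Figure 9 concrete, the closed-form on-axis piston pressure used as the reference solution can be evaluated directly. The sketch below is a minimal illustration only: the water properties (ρ = 1000 kg/m³, c_f = 1480 m/s) and the 1 MHz excitation are taken from the simulation parameters stated in the text, while the transducer radius a and the unit surface velocity v_0 are illustrative assumptions, not values from the paper.

```python
import numpy as np

# On-axis pressure of a baffled circular piston of radius a (standard closed form),
# p(z) = rho * c_f * v0 * (exp(1j*k_f*z) - exp(1j*k_f*sqrt(z**2 + a**2))).
rho = 1000.0          # fluid density [kg/m^3] (water, as stated in the text)
c_f = 1480.0          # acoustic wave speed in the fluid [m/s]
f = 1.0e6             # transducer vibration frequency [Hz]
omega = 2 * np.pi * f
k_f = omega / c_f     # wave number
a = 5.0e-3            # assumed transducer radius [m] (illustrative)
v0 = 1.0              # assumed normal surface velocity [m/s] (illustrative)

z = np.linspace(1e-6, 50e-3, 2000)   # axial distances from the transducer face [m]
p = rho * c_f * v0 * (np.exp(1j * k_f * z) - np.exp(1j * k_f * np.sqrt(z**2 + a**2)))

# |p| oscillates in the near field and decays smoothly beyond the last axial maximum,
# located near z = a^2 * f / c_f when k_f * a >> 1.
print("last axial maximum (approx.) at z =", a**2 * f / c_f, "m")
print("|p| at z = 10 mm:", abs(p[np.argmin(abs(z - 10e-3))]), "Pa")
```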
Conclusions
A new formulation is presented to improve the accuracy of DPSM for the near field computation. The classical DPSM technique works very well for the far field computation but cannot accurately model the near field (the field on the transducer surface and very close to the transducer surface), because DPSM based on the Huygens-Fresnel principle assumes that the point of observation is not very close to the transducer face modeled by distributed point sources. The new formulation explores a new possibility of equivalent source density and makes the DPSM technique accurate in both the near field and the far field. This is achieved by replacing every point source in the classical DPSM formulation by families of quantum sources. An individual family of sources can generate any radiation pattern, unlike an individual point source for which the radiation wave front can only be spherical in an isotropic medium. The new formulation is developed in such a manner that it does not increase the dimension of the square matrix that needs to be inverted for the source strength calculation. Thus, the main advantage of DPSM, that it requires low computational time compared to FEM, is not compromised. In both FEM and classical DPSM formulations the standard approach for improving accuracy is to refine the mesh for FEM or to increase the number of point sources for DPSM. Both of these approaches significantly increase the computational time and are avoided here. Numerical examples are presented demonstrating the
improvement in the computational accuracy with the new technique.
Figure 1. M point sources are placed slightly behind the transducer face and the ultrasonic field is computed at N target points in front of the transducer.
Figure 2. Normal velocity on the transducer face computed by the DPSM (distributed point source method) technique when point sources, as shown in Figure 1, are used to model the transducer face.
When the computed results are scaled (by dividing the prescribed transducer face velocity by the average velocity of the transducer surface), the discrepancy in the far field disappears [3]. However, in the near field (on the transducer face and very close to the transducer face) the scaling factor multiplication cannot remove the oscillation patterns shown in Figure 2, although their average value matches with the prescribed velocity value of the transducer face.
Figure 3. Quantum source clusters or source families replace the individual point sources in the new formulation for modeling the transducer.
For clarity, 50 families are shown in Figure 3. However, during actual computation 464 families were considered. Every family is composed of 30 quantum sources. The transducer vibration frequency is 1 MHz. Fluid properties considered for this simulation are those of water: density (ρ) = 1000 kg/m³ and P-wave speed (c_f) = 1480 m/s.
Figure 4. Normal velocity variation (or radiation pattern) from one source family.
Figure 5. The normal velocity on the transducer face computed by the DPSM technique when the transducer face is modeled by (a) source clusters, as shown in Figure 3 (new formulation); and (b) individual point sources, as shown in Figure 1 (old formulation).
Figure 6. The normal velocity variation on the transducer surface: dashed-dotted line, true value; solid line, computed results using source clusters, as shown in Figure 3; and dashed line, computed by superimposing individual point source contributions, as shown in Figure 1.
Figure 7. The X-component of the velocity variation on the transducer surface: solid line, computed results using source clusters, as shown in Figure 3; and dashed line, computed by superimposing individual point source contributions, as shown in Figure 1.
Figure 8. The normal velocity variation on the transducer surface (bottom figure) and near it (the other three figures); 'z' is the distance from the transducer face; solid line, computed results using source clusters, as shown in Figure 3; and dashed line, computed by superimposing individual point source contributions, as shown in Figure 1.
Figure 9. Pressure variation along the central axis in front of the transducer: dashed-dotted line, theoretical value; solid line, computed results using source clusters, as shown in Figure 3; and dashed line, computed by superimposing individual point source contributions, as shown in Figure 1.
Figure 10. The normal velocity variation on the transducer surface: dashed line, true value; dotted line, computed by superimposing individual point source contributions, as shown in Figure 1; solid lines, computed results using source clusters, as shown in Figure 3, for three different radiation patterns.
Figure 11. Pressure variation along the central axis in front of the transducer: dashed line, theoretical value; dotted line, computed by superimposing individual point source contributions, as shown in Figure 1; solid lines, computed results using source clusters, as shown in Figure 3, for three different radiation patterns.
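The classical DPSM procedure referred to throughout (distribute M point sources slightly behind the transducer face, solve a square system for their strengths so that the prescribed normal velocity is reproduced at M surface points, then evaluate the field at arbitrary target points) can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' code: the source layout, stand-off distance and fluid parameters are assumed, and the velocity kernel is the standard spherical-wave point-source expression.

```python
import numpy as np

rho, c_f, f = 1000.0, 1480.0, 1.0e6       # water properties and frequency from the text
omega = 2 * np.pi * f
k = omega / c_f

def vz_kernel(targets, sources):
    """Normal (z) particle velocity at targets due to unit-strength point sources
    with pressure p = exp(i k r)/r, using v = grad(p)/(i omega rho)."""
    dx = targets[:, None, :] - sources[None, :, :]
    r = np.linalg.norm(dx, axis=-1)
    dpdz = dx[..., 2] * np.exp(1j * k * r) * (1j * k * r - 1.0) / r**3
    return dpdz / (1j * omega * rho)

# Assumed geometry: a 10 mm diameter face sampled on a grid, with one source a small
# stand-off distance behind each surface point (classical DPSM layout).
n = 12
xs = np.linspace(-5e-3, 5e-3, n)
X, Y = np.meshgrid(xs, xs)
mask = X**2 + Y**2 <= (5e-3)**2
surf = np.stack([X[mask], Y[mask], np.zeros(mask.sum())], axis=1)
h = 0.5e-3                                 # assumed stand-off of the sources
src = surf - np.array([0.0, 0.0, h])

# Solve the square system so that the computed v_z equals the prescribed v0 = 1 on the face.
A = vz_kernel(surf, src)
strengths = np.linalg.solve(A, np.ones(len(surf), dtype=complex))

# Evaluate the reconstructed normal velocity along a diameter on the face; the
# near-field oscillations discussed in the text appear away from the matched points.
line = np.stack([xs, np.zeros(n), np.zeros(n)], axis=1)
vz_line = vz_kernel(line, src) @ strengths
print(np.round(np.abs(vz_line), 3))
```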
10,683.4
2016-10-18T00:00:00.000
[ "Engineering", "Physics" ]
$R({K^{(*)}})$ from dark matter exchange Hints of lepton flavor universality violation have been observed by LHCb in the rate of the decay $B\to K\mu^+\mu^-$ relative to that of $B\to K e^+e^-$. This can be explained by new scalars and fermions which couple to standard model particles and contribute to these processes at loop level. We explore a simple model of this kind, in which one of the new fermions is a dark matter candidate, while the other is a heavy vector-like quark and the scalar is an inert Higgs doublet. We explore the constraints on this model from flavor observables, dark matter direct detection, and LHC run II searches, and find that, while currently viable, this scenario will be directly tested by future results from all three probes.
Introduction. The LHCb experiment has observed intriguing deficits in R(K) and R(K*), defined as the ratio of branching ratios B(B → K^(*) µ⁺µ⁻)/B(B → K^(*) e⁺e⁻) [1,2]. These "hadronically clean" ratios are free from theoretical uncertainties in hadronic matrix elements, which cancel out [3]. In the standard model (SM) it is expected that R(K^(*)) = 1 [4], while experimentally deficits of approximately 20% are observed. Although the significance of either observation, K or K*, is not high, model-independent fits to both data, possibly also including quantities more sensitive to hadronic physics, such as B_s → µ⁺µ⁻, B_s → φµ⁺µ⁻ and the angular observable P5′, indicate a higher significance of ∼ 4σ [5][6][7][8][9]. Ref. [10] shows that the best fits and significance do not change appreciably whether one includes the hadronically sensitive observables or not, and that it is possible to find a good fit to the data by including a single dimension-6 operator, eq. (1), in the effective Hamiltonian. The new physics contribution (1) can be obtained from tree-level exchange of a heavy Z′ vector boson [11][12][13][14][15][16][17][18] or leptoquark [19][20][21][22][23][24][25][26][27][28][29][30][31][32][33][34][35][36], or through loop effects of new particles. In ref. [37], an exhaustive classification and study of the simplest loop models was carried out, where it was shown that one needs either two new scalars and one new fermion, or two new fermions and one new scalar, to explain the B decay anomalies. Many choices of quantum numbers for the new particles are possible. Here we note that these include cases where one of them can be neutral under the SM gauge interactions, opening the possibility that it could be dark matter (DM), and thus allowing the model to explain two observed phenomena requiring new physics. We prefer to minimize the number of new scalars so there is just one, thereby allowing the DM candidate to be one of the new fermions. (Ref. [38] focuses on the opposite choice, and observes that the possible scalar dark matter candidate cannot satisfy direct detection constraints because of its coupling to the Z. Previous attempts to connect R(K^(*)) to dark matter can be found in refs. [39][40][41][42][43][44][45][46]. In addition, refs. [47,48] recently studied models similar to ours, but in which the DM is chosen to be a new scalar. These studies do not fully consider the impact of the Higgs portal coupling λ|H|²|φ|² on the DM relic density and direct detection. In ref. [49] it was shown that λ tends to dominate over any other new physics effects; even if it vanishes at tree level, the one-loop correction tends to be too large to ignore without fine tuning.) Fermionic dark matter is free from relevant Higgs portal couplings, making for a
more predictive theory in which the dark matter properties are determined by the same couplings that explain the flavor anomaly. It will be shown that considerations of the dark matter relic density and direct detection give interesting additional restrictions on the model, and that it is also constrained by existing LHC searches as well as flavor-changing neutral current processes. The model therefore has high potential for discovery by a variety of complementary experimental searches.
Model and low-energy effective theory. We introduce a Majorana fermionic DM particle S, a vector-like heavy quark Ψ that carries SM color and hypercharge, and a scalar φ that is an inert SU(2)_L doublet. The quantum numbers are shown in table I. The only couplings of the new fields to SM particles allowed by gauge and global symmetries (see table I) are those given in eq. (2), where Q, L are the SM quark and lepton doublets, a is the SU(2)_L index and i is the flavor index. The relevant interactions at low energy are generated at one loop and thus require sizable couplings. Since there is no flavor symmetry, we will see that this model lives in a corner of parameter space where meson mixing constraints are nearly saturated. In a more complete model, the global symmetries could be an accidental consequence of a spontaneously broken gauge symmetry under which the new physics particles are charged. The Higgs portal couplings λ_{H,i} play no important role in the following; λ_{H,1} gives an overall shift to m_φ² after electroweak symmetry breaking, while λ_{H,2} splits the charged and neutral components of φ by a small amount (relative to m_φ² as constrained by LHC searches). An additional allowed coupling, with coefficient λ_{H,3}, violates lepton number conservation, as can be seen from the charge assignments in table I. (Notice that S cannot be assigned lepton number since it is Majorana.) Of course one expects that L is only an approximate symmetry, if neutrinos have Majorana masses, which constrains the size of λ_{H,3}. In fact this operator could be the origin of one of the neutrino masses through the loop diagram shown in fig. 2, with a mass matrix that has a single nonvanishing eigenvalue given by the trace. If m_{ν,3} = 0.05 eV for example, λ_{H,3} ∼ 10⁻⁹/Σ_i λ_i². To make definite predictions from (2), we must specify which field bases are referred to. We will assume that for the leptons and down-type quarks, it is the mass eigenbasis. This implies that up-type quarks have couplings that are rotated by the CKM matrix. The box diagrams relevant for b → s ℓ⁺ℓ⁻, ℓ_i → 3ℓ_j, neutral meson mixing and DM scattering on nucleons are shown in fig. 1. Evaluating them, we find effective dimension-6 operators of the same form as (1) but with different external states. The operator coefficients are shown in table II, where for simplicity we take m_Ψ = m_φ = M. Below we will see that M ≳ 1 TeV is needed to meet LHC constraints, but S can be light since it is dark matter. The loop functions f_1 and f_2 depend only on the ratio r ≡ m_S²/M².
Flavor constraints. To match the observed B anomalies, we require that λ̄_2 λ̄_3* |λ_2|² ≅ (M/0.88 TeV)² [10]. Therefore the couplings must be of order unity, since LHC searches discussed below require M ≳ 1 TeV. On the other hand, strong B_s mixing constraints, as determined by the mass splitting between B_s and B̄_s, limit the coefficient of the (s̄b)² operator in table II to be less than 1/(408 TeV)² at 95% confidence level (c.l.) [37], giving the bound |λ̄_2 λ̄_3| ≲ M/(6.6 TeV).
Combined with the previous determination, this demands large λ_2: |λ_2| ≳ 2.9 (M/1 TeV)^{1/2} (5). As an example, suppose that M = 1 TeV and the bound on B_s mixing is saturated. We can satisfy all other constraints with hierarchical quark couplings |λ̄_1| = 0.014, |λ̄_2| = 0.14, |λ̄_3| = 1.1, together with the lepton coupling |λ_2| = 2.9 (6). If all of the couplings are positive and real, the CKM-rotated combination entering D mixing is λ̄_1^u λ̄_2^u = 0.009, right at the D mixing 95% c.l. limit. If λ̄_1 has the opposite sign to λ̄_{2,3}, λ̄_1^u λ̄_2^u is smaller, ≅ 0.004. The hierarchical nature of the quark couplings is preserved under renormalization group running, since they are multiplicatively renormalized. The one-loop beta functions [52,53] show that, for the choice of couplings in (6), there is a Landau pole in λ_2 at a scale of around 8 m_φ, indicating the need for further new physics at such scales. For example a spontaneously broken nonabelian gauge symmetry, such as we already suggested for explaining the global symmetries of the model, could avert the Landau pole. It is technically natural to assume the other leptonic couplings λ_{1,3} are negligible, since they are generated radiatively only through neutrino mass insertions. However, aesthetically it may seem peculiar to have λ_2 ≫ λ_3. If λ_{1,3} ≠ 0, the box diagrams lead to lepton flavor-violating decays such as τ → 3µ and µ → 3e. However, because of the Majorana nature of S, there are crossed box diagrams, shown in fig. 1, that exactly cancel the uncrossed ones in the limit where external momenta are neglected in the loop. Their amplitudes then scale as λ_3 λ_2³ m_τ²/m_φ⁴ and λ_2 λ_1³ m_µ²/m_φ⁴, respectively. After comparing them to the corresponding amplitudes for leptonic decays in the SM, and imposing the experimental limits on the forbidden decay modes [54], we find no significant constraints on λ_1 or λ_3. Radiative transitions are another flavor-sensitive observable, as shown in fig. 3. For b → sγ, fig. 3(a) generates the dipole operator with loop function f(R) = (R³ − 6R² + 3R + 6R ln R + 2)/(6(R − 1)⁴), where R = m_Ψ²/m_φ², q is the photon momentum and f(1) = 1/12. The electric charges q_i of Ψ and φ are as in table I. Due to operator mixing, the chromomagnetic moment also contributes. Using the results of ref. [37], the Wilson coefficients for our benchmark model with m_φ = m_Ψ = 1 TeV give C_7 + 0.24 C_8 = −9 × 10⁻³, a factor of 10 below the current limit on this combination from measurements of the branching ratio of b → sγ. Fig. 3(b) gives a contribution to the anomalous magnetic moment of the muon, Δ(g − 2)_µ, evaluated taking m_φ = 1 TeV. Ultimately this model increases the tension between the measured and predicted values of g − 2, but the effect is minimal, 20 times smaller than the SM discrepancy [54]. A similar diagram with the photon replaced by the Z leads to a correction of the coupling of the Z to left-handed muons of the form δg_L/g_L^{SM}(q² = m_Z²) ≅ −(λ_2 m_Z/(24π m_φ))² ≅ −0.0012% [37]. This is significantly smaller than the uncertainty on the most accurate measurements of this coupling by LEP, g_L(m_Z²) = −0.2689 ± 0.0011 [55], which has a 0.4% error at the 1σ level.
Table II. Effective Hamiltonian dimension-6 operators and coefficients; (f₁f₂)(f₃f₄) denotes (f̄_{1L} γ^µ f_{2L})(f̄_{3L} γ_µ f_{4L}) (with the exception of (SS), which corresponds to ½ (S̄ γ_µ γ₅ S)), and coefficients are in units of 1/(384π² M²), with m_Ψ = m_φ = M and loop functions f_i given in the text; r ≡ m_S²/M².
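As a numerical cross-check of the benchmark point (6), the sketch below combines the R(K^(*)) requirement λ̄_2 λ̄_3* |λ_2|² ≅ (M/0.88 TeV)² with the B_s mixing bound |λ̄_2 λ̄_3| ≲ M/(6.6 TeV) to obtain the minimal |λ_2|, and estimates the CKM-rotated combination entering D mixing. The CKM magnitudes and the rotation convention λ̄^u = V λ̄ are illustrative assumptions; phases and subleading Wolfenstein parameters are ignored.

```python
import numpy as np

M = 1.0  # TeV, benchmark mass m_phi = m_Psi

# R(K^(*)) fit:  lam2bar * lam3bar * |lam2|^2 = (M / 0.88 TeV)^2
# Bs mixing:     |lam2bar * lam3bar| <= M / (6.6 TeV)
# => minimal lepton coupling when the mixing bound is saturated:
lam2_min = np.sqrt((M / 0.88) ** 2 / (M / 6.6))
print("minimal |lambda_2| =", round(lam2_min, 2), "(text quotes 2.9 for M = 1 TeV)")

# Benchmark quark couplings from (6)
lam_bar = np.array([0.014, 0.14, 1.1])

# CKM-rotated up-type couplings (assumed convention lam_bar_u = V lam_bar,
# with approximate CKM magnitudes; signs/phases neglected)
V = np.array([[0.974, 0.225, 0.004],
              [0.225, 0.973, 0.041],
              [0.009, 0.040, 0.999]])
lam_bar_u = V @ lam_bar

# Combination entering D meson mixing
print("lambda_bar_u1 * lambda_bar_u2 =", round(lam_bar_u[0] * lam_bar_u[1], 4),
      "(text quotes ~0.009, at the D-mixing limit)")
```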
If the couplings λ_1, λ_3 are nonzero, there are contributions to τ → µγ, τ → eγ, and µ → eγ, with partial width δΓ ≅ µ_{i,j}² m_i³/8π [56], where µ_{i,j} ≅ e λ_i λ_j m_i/(192π² m_φ²). Using λ_2 = 2.9 and m_φ = 1 TeV, the requirement that the partial width of τ → µγ induced by the new physics contributions not exceed the measured value requires |λ_3| < 0.8, while µ → eγ leads to a correspondingly strong limit on λ_1.
Dark matter constraints. The dark matter candidate in our model has tree-level annihilation to µµ̄ and ν_µ ν̄_µ. The s-wave contribution to the cross section is helicity suppressed, so the v² term dominates [61]. The total thermally averaged annihilation cross section, counting both final states (muons or neutrinos), is therefore p-wave suppressed, scaling as 1/x, where x = m_S/T. To get the observed relic density [62], at the freeze-out temperature T_f this should be roughly equal to the standard value ⟨σ v_rel⟩_0 ≅ 4.6 × 10⁻²⁶ cm³/s [63] appropriate for p-wave annihilating Majorana dark matter in the mass range m_S ≳ 50 GeV, which we will see is required by collider constraints. By assuming that λ_2 saturates the inequality (5), so that it is no larger than needed to satisfy the flavor constraints, the relation ⟨σ v_rel⟩(x_f) = ⟨σ v_rel⟩_0 determines m_φ as a function of m_S, eq. (10). This is valid if m_φ ≥ m_Ψ; one can show that (10) is further reduced by the factor m_φ/m_Ψ if m_φ < m_Ψ. We verified the previous estimate by numerically solving the Boltzmann equation with micrOMEGAs 4.3.5 [64]; contours corresponding to the cosmologically preferred value Ωh² = 0.1199 [62] are displayed in fig. 4.
Figure 4. Constraints in the m_S-m_φ plane: the ATLAS dilepton search [57] (green), and the requirement that S is the lightest particle so that it can be the DM (grey). The blue lines correspond to values of m_φ and m_S that give the correct relic density for different values of the ratio m_Ψ/m_φ. λ_2 is set everywhere to the minimum value that allows for explanation of the flavor anomalies while avoiding B_s mixing constraints.
Figure 5. The current limit on the anapole moment from LUX at 90% c.l. [58,59] and the estimated eventual sensitivity of the DARWIN experiment [60]. The prediction of our model for this quantity, based on the need to achieve the correct relic density and explain the B anomalies, is shown by the red curve.
S annihilations can lead to indirect signals in gamma rays and charged cosmic rays, but the p-wave suppression of the cross section makes the limits from such searches very weak. Collider limits are far more constraining, notably ATLAS searches for 2 leptons and missing transverse energy [57], which exclude the green region in fig. 4. Because S is a Majorana particle, the box diagram for scattering of S off quarks leads only to spin-dependent or velocity-suppressed scattering off nucleons. The spin-dependent cross section for DM scattering off a single nucleon is given by σ = µ_{n,S}² |λ_2|⁴/(256 π^{5/2} M²)² for low-energy scattering (e.g. [65]), where µ_{n,S} is the DM-nucleon reduced mass. The determination of the Δ_q^{(n)} parameters is reviewed in [66]. For our benchmark model with M = 1 TeV this leads to σ ∼ 10⁻⁵⁰ cm² for scattering off neutrons, far below current experimental limits on spin-dependent scattering from the PICO-60 direct detection experiment [67]. Had the dark matter been Dirac, diagram (c) of fig. 3 would give both a magnetic moment for the dark matter, µ_S ≅ e|λ_2|² m_S/(64π² m_φ²) [approximating m_S ≪ m_φ, consistently with eq. (10)], and a charge-radius interaction (S̄γ^µ S)∂^ν F_{µν} that lead to scattering on protons.
Although the former is below current direct detection limits, the latter is far too large, which obliges us to take S to be Majorana. Then there is only an anapole moment A (S̄γ^µ γ₅ S) ∂^ν F_{µν}, which has been computed and constrained (using 2013 LUX results) for our class of models in ref. [58]. We rescale their limit on A to reflect more recent results from LUX [59], as well as the projected eventual sensitivity of DARWIN [60], in fig. 5. The predicted value is also shown, using (5) and (10) with x_f = 22 to eliminate λ_2 and m_φ in favor of m_S. For the lowest allowed value of m_S = 60 GeV (considering that m_φ ≳ 500 GeV from LHC constraints), the limit is a factor of 22.5 weaker than the prediction, corresponding to a factor of 500 in the cross section. This is below the reach of the LZ experiment [68], but slightly above the expected sensitivity of DARWIN, leaving open the possibility of direct detection.
Collider constraints. The new states φ and Ψ carry SM quantum numbers, and can therefore be pair-produced in particle collisions. Fig. 6 shows the main production modes at a hadron collider and their decays.
Figure 6. Processes for production of quark jets, leptons, and missing energy.
The final states necessarily include hard lepton pairs, since the splitting between m_φ and m_S must be large, eq. (10). This also produces missing energy, as the decay products inevitably include dark matter SS pairs. Moreover, hadronic jets appear if Ψ is produced, since Ψ decays into φ plus quarks. For Drell-Yan production of φ-φ* pairs, the signal is lepton pairs and missing energy, with no jets. (One of the leptons is a neutrino if qq̄ → W → φ± φ⁰ occurs.) This is the same final state as in production of slepton pairs, so SUSY searches [57] may be applied. The excluded region is shown in fig. 4, constraining m_φ ≳ 500 GeV for all m_S for which the relic density can be accommodated.
Figure 7. Shaded regions in the m_S-m_Ψ plane are excluded at 95% c.l. by ATLAS run 2 searches for one (blue) or two (red) leptons, jets, and missing energy [69,70]. For each point, m_S and the couplings are set as described in the text to satisfy flavor and DM relic density constraints.
In diagrams 6(b,c,d), Ψ is produced, which subsequently decays to b µ⁺ S or t ν_µ S. Such final states have been searched for by ATLAS in 13.3 fb⁻¹ of √s = 13 TeV data, including events with one or two leptons, jets and missing transverse momentum [69,70]. These analyses have been implemented in CheckMATE 2.0.14 [72], which we used to constrain our model, in conjunction with FeynRules 2.3 [73] and MadGraph 2.6.0 [71]. 20,000 events per model point were generated for the process pp → ΨΨ̄ (pp → Ψφ* is suppressed by the small couplings of Ψ and φ to first generation quarks, or by the parton distribution function of b or t). The subsequent showering and hadronisation of the final state partons was modelled with Pythia 8.230 [74] and detector simulation was done with Delphes 3.4.1 [75]. Fig. 7 shows the resulting 95% c.l. limits on m_Ψ versus m_S for models which both explain the flavor anomalies and give the correct DM relic density. Here m_φ is set by eq. (10) with x_f = 22, and the couplings are scaled relative to (6) by the factor (M/1 TeV)^{1/2}, where M = max(m_φ, m_Ψ); this choice keeps all the box diagrams approximately constant. At values of m_S ≈ 60 GeV, the lowest values that allow for the correct relic density while avoiding slepton search constraints, the one-lepton search requires m_Ψ ≳ 950 GeV, except for a narrow window with m_S just below m_φ.
The two-lepton search does not constrain m_Ψ as strongly but is more sensitive to larger DM masses.
Conclusions. The indications from LHCb of lepton flavor universality breaking down are currently our best hint of physics beyond the standard model from colliders. These anomalies should be verified within a few years by further data from LHCb and Belle II [76]. If confirmed, it is not unreasonable to expect that the relevant new physics could also shed light on other shortcomings of the standard model. We have shown how a very economical model, in which dark matter plays an essential role, could be the source of R(K^{(*)}) anomalies, while predicting imminent tensions in other flavor observables, notably B_s mixing. The model may be tested by the next generation of direct detection searches and can be discovered at the LHC via searches for leptons, jets and missing energy.
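Two small consistency checks of statements made in the preceding sections can be done by hand: the anapole scattering rate scales as the square of the moment, so a factor of 22.5 in A corresponds to roughly a factor of 500 in the direct-detection cross section, and rescaling all couplings by (M/1 TeV)^{1/2} leaves the box-diagram coefficients, which scale as (coupling)⁴/M², unchanged. A minimal sketch:

```python
# Anapole moment: the cross section scales as A^2, so a factor of 22.5 in A
# is roughly a factor of 500 in the direct-detection cross section.
print(22.5 ** 2)   # ~506

# Coupling rescaling used for the collider scan: lambda -> lambda * (M / 1 TeV)**0.5
# keeps box-diagram coefficients ~ lambda^4 / M^2 fixed.
for M in (1.0, 1.5, 2.0):  # TeV
    lam = 2.9 * M ** 0.5   # benchmark lepton coupling rescaled with M
    print(M, "TeV ->", round(lam, 2), "  lambda^4 / M^2 =", round(lam ** 4 / M ** 2, 1))
```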
4,757.2
2017-11-29T00:00:00.000
[ "Physics" ]
Hydroxypropyltrimethyl Ammonium Chloride Chitosan Functionalized-PLGA Electrospun Fibrous Membranes as Antibacterial Wound Dressing: In Vitro and In Vivo Evaluation A novel poly(lactic-co-glycolic acid) (PLGA)-hydroxypropyltrimethyl ammonium chloride chitosan (HACC) composite nanofiber wound dressing was prepared through electrospinning and the entrapment-graft technique as an antibacterial dressing for cutaneous wound healing. HACC with 30% degrees of substitution (DS) was immobilized onto the surface of PLGA membranes via the reaction between carboxyl groups in PLGA after alkali treatment and the reactive groups (–NH2) in HACC molecules. The naked PLGA and chitosan graft PLGA (PLGA-CS) membranes served as controls. The surface immobilization was characterized by scanning electron microscopy (SEM), atomic force microscopy (AFM), Fourier transform infrared (FTIR), thermogravimetric analysis (TGA) and energy dispersive X-ray spectrometry (EDX). The morphology studies showed that the membranes remain uniform after the immobilization process. The effects of the surface modification by HACC and CS on the biological properties of the membranes were also investigated. Compared with PLGA and PLGA-CS, PLGA-HACC exhibited more effective antibacterial activity towards both Gram-positive (S. aureus) and Gram-negative (P. aeruginosa) bacteria. The newly developed fibrous membranes were evaluated in vitro for their cytotoxicity using human dermal fibroblasts (HDFs) and human keratinocytes (HaCaTs) and in vivo using a wound healing mice model. It was revealed that PLGA-HACC fibrous membranes exhibited favorable cytocompatibility and significantly stimulated adhesion, spreading and proliferation of HDFs and HaCaTs. PLGA-HACC exhibited excellent wound healing efficacy, which was confirmed using a full thickness excision wound model in S. aureus-infected mice. The experimental results in this work suggest that PLGA-HACC is a strong candidate for use as a therapeutic biomaterial in the treatment of infected wounds. Introduction The healing of wounds, especially extensive full-thickness wounds, is one of the most challenging clinical problems [1,2]. Skin wound dressing is of significant importance in wound healing, as it prevents bacterial contamination, absorbs excess exudates, ensures sufficient gas and nutrient exchange and maintains a moist environment for cell proliferation and migration [3][4][5]. A range of dressing types has been fabricated and used in accelerating wound healing and skin regeneration. Preparation of PLGA Nanofibrous Membranes The PLGA nanofiber membranes were electrospun as previously reported [12]. Firstly, a 10 wt % PLGA solution was prepared by dissolving the polymer with HFIP. The polymer solution was then fed into a syringe capped with an internal diameter of 0.35 mm. A DC voltage of 12 kV potential was applied between the syringe tip and an aluminum sheet-collector at a distance of 15-20 cm and at a syringe flow rate of 1.5-2.0 mL/h at an ambient temperature of 25 • C. The fibers were then dried in a vacuum oven at 37 • C for 48 h to remove residual HFIP. Preparation of HACC, CS-Conjugated PLGA Nanofibrous Membranes HACC with 30% degrees of substitution (DS) was prepared according to a previously described modified method [33]. The prepared PLGA nanofibrous membranes were soaked in NaOH aqueous solution (50 mg·mL −1 ) for 2 h to form reactive carboxyl groups followed by a thorough rinse using abundant distilled water. 
Subsequently, the PLGA nanofibrous membranes were cross-linked with 0.2% HACC in EDC (0.40 g)/NHS (0.097 g)/MES (50 mL) solution for 24 h. Finally, the PLGA-HACC fibrous membrane was rinsed with abundant distilled water and then dried. A chitosan-modified PLGA (PLGA-CS) membrane was also prepared using a method similar to the preparation of PLGA-HACC [40]. All fibers were dried under vacuum at 37 °C for 24 h.
Characterization. Attenuated total reflectance-Fourier transform infrared (ATR-FTIR) spectroscopy of the membranes was recorded on a spectrophotometer (Perkin-Elmer Co., Waltham, MA, USA) at wavelengths ranging from 800-2400 cm⁻¹ and at a resolution of 4.0 cm⁻¹ over 16 scans. The morphologies of the electrospun PLGA, PLGA-CS and PLGA-HACC membranes were observed using scanning electron microscopy (SEM, HITACHI SU8220, Tokyo, Japan). The elemental composition of the PLGA-HACC membrane surface was detected using an energy dispersive X-ray spectrometer (EDX) (HITACHI, Tokyo, Japan). The topography of the membranes was also determined by atomic force microscopy (AFM, XE-100, Park SYSTEMS Co., Suwon, Korea). The two-dimensional images were converted to three-dimensional images, and a roughness analysis was done using the XEP Data Acquisition Program (Park SYSTEMS Co., Suwon, Korea). The amount of grafted HACC and chitosan on the surface of the membranes was measured via thermogravimetric analysis (TGA, TA Q-200, New Castle, DE, USA) from 30-700 °C at a heating rate of 10 °C/min with a nitrogen flow rate of 50 mL·min⁻¹. The CS and HACC grafting ratios in PLGA-HACC and PLGA-CS were calculated according to the following equation [41]: Grafting ratio (%) = (WL_PLGA (%) − WL_PLGA-HACC (%))/(100% − WL_HACC (%)) (1), where WL denotes weight loss. For calculating the grafting ratio of CS, WL_PLGA-HACC and WL_HACC were replaced by WL_PLGA-CS and WL_CS. The fiber diameter and pore size were measured by using ImageJ software (National Institutes of Health, Bethesda, MD, USA) on SEM micrographs at 30 random locations. The porosity of the membrane was calculated by using Equation (2) [42]: Porosity (%) = (W_e − W_0)/(ρ_e · V_s) × 100 (2), where ρ_e represents the density of ethanol (0.789 g/cm³), V_s represents the geometrical volume of the samples, W_0 represents the dry samples' weight and W_e represents the wet samples' weight.
Antibacterial Assays. Bacterial activity and morphology on the membranes at 24 h were determined using SEM and confocal laser scanning microscopy (CLSM, Leica TCS SP8, Leica Microsystems, Mannheim, Germany) observation. A volume of 500 µL of the S. aureus and P. aeruginosa bacterial suspensions in tryptic soy broth (TSB) medium (1 × 10⁶ CFUs/mL) was added into wells containing PLGA, PLGA-CS and PLGA-HACC membranes and then incubated at 37 °C for 24 h. Afterwards, the three samples were gently washed with PBS two to three times to remove loosely-adherent S. aureus and P. aeruginosa and then sonicated for 5 min with an ultrasonic apparatus to re-suspend the bacteria. After that, the sonicated solution was serially diluted 100-fold using TSB, and 50 µL of the diluted bacteria solution were plated in triplicate onto tryptic soy agar (TSA) plates. Then, after 24 h of incubation in an incubator at 37 °C, the number of bacteria colonies on the plates was counted and multiplied by the dilution ratio. The bacteria growth on membranes was also observed using CLSM. After 24 h of culture, the samples were fixed with 2.5% glutaraldehyde for 30 min at 37 °C.
The membranes were then stained in a fresh 24-well plate with 500 µL of combination dye (LIVE/DEAD BacLight viability kits, L7012; Molecular Probes, Life Technologies, Carlsbad, CA, USA). The images were acquired at random positions within the membranes. S. aureus and P. aeruginosa adhesion on PLGA, PLGA-CS and PLGA-HACC membranes was also observed using SEM. The PLGA, PLGA-CS and PLGA-HACC membranes were incubated with S. aureus or P. aeruginosa suspended in TSB at a concentration of 1 × 10 6 CFUs/mL. After incubation for 24 h, the three membranes were fixed in 2.5% glutaraldehyde for 30 min at 37 • C, then dehydrated through a series of graded ethanol solutions (50%, 70%, 95% and 100%). The samples were subsequently dried at 37 • C and then observed using SEM. Cell Attachment and Proliferation The three membranes (1 cm × 1 cm) were cultured with HDFs and HaCaTs at a density of 1 × 10 6 cells/mL in 24-well plates. HDFs' and HaCaTs' attachment on the membranes was observed using CLSM after staining with 4,6-diamidino-2-phenylindole (DAPI), and the cell number on membranes (150 µm × 150 µm, n = 5) was calculated by ImageJ. The proliferation of HDFs and HaCaTs on the membranes was also tested using the cell counting kit-8 (CCK-8) assay [13]. The absorbance was measured spectrophotometrically at wavelengths of 450 nm with the microplate reader. HDFs and HaCaTs with a density of 2.0 × 10 4 /cm 2 were seeded on the membrane surface and incubated for 1, 4 and 7 days. The CCK-8 assay was applied to evaluate cell proliferation according to the manufacturer's instructions. The morphologies of HDFs and HaCaTs on the membrane surface were also observed using CLSM. The seeding procedures were similar to those of the cell attachment assay. After incubating for 24 h, the cell cultured membranes were fixed with 2.5% glutaraldehyde for 30 min, then stained with rhodamine-labelled phalloidin for 45 min and then stained with DAPI for 10 min. The samples were washed with PBS, and then, CLSM was used to visualize the cell cytoskeleton and the nuclei on the membranes. HDFs Migration Assay The dissolution products of the membranes were prepared for the migration assay. One gram of each kind of membrane was soaked in 10 mL DMEM and incubated for 24 h, and the resultant solution was obtained. HDFs were seeded in 24-well plates at 4 × 10 5 cells per well and cultured in an incubator humidified at 37 • C with 5% CO 2 . After 24 h of culture, a scratch was made with a 200-µL pipette tip at the bottom of each well followed by washing the cells with PBS. The culture medium was then replaced with PLGA, PLGA-CS and PLGA-HACC dissolution products in order to determine the effect of the materials on HDF migration. The cells cultured with DMEM (without FBS) were regarded as a control. After being cultured for 12 h, the cells were fixed with 4% paraformaldehyde for 30 min, and after being washed twice with PBS, the cells were observed and images taken with an inverted microscope (Leica DMI 3000B, Wetzlar, Germany). In addition, a statistical analysis of the HDF migration assay was performed. We measured the original width and final width of the scratches in the two groups, and the percentage of scratch shrinkage was calculated using a previously reported method [43]: scratch shrinkage (%) = (original width − final width)/original width × 100 (3) Enzyme-Linked Immunosorbent Assay HDF cells at a density of 1 × 10 6 cells/mL were incubated with PLGA, PLGA-CS and PLGA-HACC membranes in 24-well plates. 
After incubation for 24 h, the cell culture medium was collected and centrifuged. The expression levels of FGF-2 were determined via an enzyme-linked immunosorbent assay (ELISA) kit, according to the instructions of the manufacturer [44].
Western Blot Analysis. HDF cells at 2 × 10⁶ cells/well were cultured with PLGA, PLGA-CS and PLGA-HACC in 6-well plates for 24 h. At the 24-h time point, attached HDF cells were washed with PBS, digested with trypsin and dissolved in a lysis buffer containing a protease inhibitor (Roche, Grenzach, Germany). Total protein was isolated from the cell homogenates, subjected to a 10% polyacrylamide gel (Invitrogen, Carlsbad, CA, USA) and transferred to 0.22-µm nitrocellulose membranes (Invitrogen). Mouse anti-human PCNA (Abcam, Cambridge, UK) was used as the primary antibody and incubated with a horseradish peroxidase-conjugated secondary antibody (Santa Cruz Biotechnology, Santa Cruz, CA, USA) for 1 h at 37 °C. The band images were obtained using a ChemiDoc™ XRS+ System with Image Lab™ Software (Bio-Rad, Hercules, CA, USA).
In Vivo Studies Using a Full-Thickness Excision Wound Healing Mice Model. BALB/c mice (four weeks old) were used in the animal study. All animal experimental procedures were performed according to the guidelines of the Animal Ethics Committee of Shanghai Ninth People's Hospital (No. HKDL 2017100). The mice dorsum skin was shaved and then disinfected using 75% ethanol after anesthetization through the intraperitoneal injection of pentobarbitone sodium at a concentration of 50 mg/kg. An open excision-type wound with a diameter of 1 cm was incised on the dorsum of each mouse, then a 100-µL S. aureus suspension (1 × 10⁸ CFU/mL) was injected onto the wound surface, which was then covered with PLGA, PLGA-CS or PLGA-HACC fibrous membranes, each of which was fixed in place with a bandage. Samples were taken from the center of the wound after 3 and 7 days and cultured to evaluate the antibacterial activity. Wound closure was assessed on Days 3, 7, 11 and 15 post surgery. This is in accordance with previously reported protocols for such studies [45]. The wound closure rate is expressed by the following equation from a previous study [46]: Wound closure rate (%) = (A_o − A_t)/A_o × 100, where A_o is the original wound area and A_t is the wound area at a specified time point. The wound sections with adjacent normal skin were excised for histology and fixed with 10% formaldehyde for the further histological analysis. The tissue samples were then analyzed by H&E and Masson's trichrome staining for histological observation.
Statistical Analysis. All data are presented as the mean ± standard deviation (SD). The statistical significance was assessed by analysis of variance (ANOVA). Each result is an average of at least three parallel experiments.
Physical Characteristics. The HACC-functionalized PLGA membranes were synthesized using electrospinning and surface modification as illustrated in Figure 1a. The carbonyl groups in the PLGA membranes were activated as carboxyl groups in NaOH solution, and the carboxyl groups of PLGA then reacted with the amino groups on HACC chains in EDC·HCl/NHS/MES solution. Figure 1b presents SEM images of the PLGA, PLGA-CS and PLGA-HACC fibrous membranes. The electrospun PLGA membranes have a randomly interconnected structure with no formed beads and smooth nanofibers with a diameter of several hundred nanometers (Table 1).
The porosities of the membranes are all in the range of 60-80%, with pore sizes in the range of 2.6-3.3 µm, whereas the surfaces of the PLGA-HACC and PLGA-CS nanofibers appear rough, which is possibly due to the alkali treatment and HACC or CS layer immobilization. More importantly, the HACC and CS surface modification process did not deform the fibrous structure of PLGA, as is clearly evident in the SEM images (Figure 1b). The evolution of the topography of the PLGA, PLGA-CS and PLGA-HACC membranes' surfaces as observed by AFM is illustrated in Figure 1c. The virgin PLGA film had a relatively smooth surface. The surfaces became rougher with CS and HACC immobilization. Figure 2a shows the C, O, N and Cl elemental distributions in the PLGA-HACC membranes as detected by EDX mapping, and the elemental N derived from HACC in the membranes is illustrated. The ATR-FTIR spectra of the PLGA, PLGA-HACC and PLGA-CS membranes are shown in Figure 2b. It can be clearly observed that pure PLGA had a peak at 1752 cm⁻¹ (carbonyl -C=O stretch), as well as peaks at 1182 and 1082 cm⁻¹ (C-O-C ether group) [47]. The characteristic peak at 1650 cm⁻¹ represents amide I, and 1540 cm⁻¹ corresponds to amide II in CS. For the HACC, the peak at 1480 cm⁻¹ was assigned to the C-H bending of the trimethylammonium group [33]. The FTIR spectra of the PLGA-CS and PLGA-HACC membranes exhibited peaks at 1650 and 1540 cm⁻¹, suggesting that CS and HACC had been successfully immobilized on the PLGA membranes.
The PLGA membranes showed an average roughness of 0.12 ± 0.09 nm. The CS and HACC immobilization led to increased average roughnesses of 1.35 ± 0.18 and 1.76 ± 0.28 nm, respectively (Figure 2b). The thermal characteristics of the PLGA membrane samples before and after CS or HACC surface modification were investigated using TGA (Figure 2d). The PLGA-only membranes started to degrade at about 300 °C, and degradation finished at around 380 °C with complete weight loss occurring. The PLGA-HACC and PLGA-CS membranes showed the same behavior in their starting degradation temperatures. The thermal degradation of pure CS and HACC started at 220 °C, and 64.08% and 64.29% of the weight, respectively, was lost at about 700 °C. The membranes were heated up to 700 °C, and at that temperature the PLGA completely degraded, with only CS or HACC left. The total weight losses were 92.36% and 93.02% for the PLGA-HACC and PLGA-CS membranes, respectively. The grafting ratios of the PLGA-CS and PLGA-HACC membranes were 21.39% and 19.43% (these values are checked numerically in the sketch below).
Attachment, Spreading and Proliferation of HDFs and HaCaTs. DAPI staining was used to evaluate cell attachment. Figure 3a shows the numbers of HDFs and HaCaTs on the surfaces of the three different membranes after 6 h of culture stained with DAPI. The cell number on the surface of the membranes was calculated by ImageJ. The numbers of adherent HDFs and HaCaTs on the surfaces of the PLGA-CS (63 ± 7, 35 ± 8) and PLGA-HACC (82 ± 9, 52 ± 8) membranes were significantly higher than those on the PLGA membranes (18 ± 5, 16 ± 7). More HDFs and HaCaTs attached to PLGA-HACC than to PLGA-CS at the 6-h time point (Figure 3a). The proliferation rates of HDFs and HaCaTs cultured on the membrane surfaces are shown in Figure 3b,c. The proliferation rates of HDFs on PLGA are not very promising from Day 1 to Day 7. HDF cells on the PLGA-HACC membranes showed a higher proliferation rate compared with those on the PLGA and PLGA-CS membranes at Day 4 and Day 7. The HDF numbers on the PLGA-HACC membranes increased significantly from Day 1 to Day 7. The same trend is also observed in the case of HaCaTs (Figure 3c). A significantly higher growth rate of HDFs and HaCaTs on PLGA-HACC was observed compared to that on PLGA and PLGA-CS. Figure 4 shows the spreading of HDF and HaCaT cells on the PLGA, PLGA-CS and PLGA-HACC membrane surfaces as observed by CLSM. As shown in the CLSM micrographs, after 24 h of incubation, the cells grown on the PLGA-HACC membranes displayed more actin filaments linking adjacent cells and had a characteristic shape; however, the cells on the PLGA membranes exhibited poor spreading and had a dispersed monolayer with fewer actin filaments. The cell density and morphology were better for the PLGA-CS membranes than for the PLGA membranes.
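A quick numerical check of the grafting-ratio relation (Equation (1)) against the TGA weight losses quoted above, together with a small illustration of the ethanol-infiltration porosity relation (Equation (2)), is sketched below. The wet and dry weights and sample volume used for the porosity part are illustrative assumptions, not measured values from the study.

```python
# Grafting ratio, Equation (1):
#   grafting ratio (%) = (WL_PLGA - WL_PLGA-X) / (100% - WL_X)
# using the TGA weight losses quoted in the text (values in %).
WL_PLGA = 100.0           # plain PLGA degrades completely by 700 C
WL_PLGA_HACC = 92.36
WL_PLGA_CS = 93.02
WL_HACC = 64.29
WL_CS = 64.08

graft_from_HACC_loss = (WL_PLGA - WL_PLGA_HACC) / (100.0 - WL_HACC) * 100.0
graft_from_CS_loss = (WL_PLGA - WL_PLGA_CS) / (100.0 - WL_CS) * 100.0
print(f"grafting ratio from the PLGA-HACC weight loss: {graft_from_HACC_loss:.2f} %")
print(f"grafting ratio from the PLGA-CS weight loss:   {graft_from_CS_loss:.2f} %")
# The text quotes grafting ratios of 21.39 % and 19.43 % for the two modified membranes.

# Porosity, Equation (2) (ethanol infiltration):
#   porosity (%) = (W_e - W_0) / (rho_e * V_s) * 100
# W_e, W_0 and V_s below are illustrative numbers only.
rho_e = 0.789             # ethanol density [g/cm^3]
W_0, W_e = 0.020, 0.065   # assumed dry and wet membrane weights [g]
V_s = 0.080               # assumed geometrical sample volume [cm^3]
porosity = (W_e - W_0) / (rho_e * V_s) * 100.0
print(f"porosity: {porosity:.1f} %  (membranes in the study fall in the 60-80 % range)")
```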
Enhanced Regenerative Activities of Skin Cells Cultured on PLGA-HACC In Vitro. HDFs are believed to play a key role in wound healing by synthesizing extracellular matrix components, which allow the epithelial cells to affix to the matrix, thereby allowing the epidermal cells to effectively join together to form the top layer of the skin. Cell migration is a complex multistep process that involves the movement of cells from one area to another and plays a vital role in wound repair. The wound scratch model is a 2D assay for evaluating wound healing in vitro [48]. The membrane dissolution products with control medium (without FBS) were used to detect the effects of the membranes on HDF migration, and the results are shown in Figure 5a. At 0 h, scratches with the same width were made on the bottom of each well covered with HDFs. Next, the cells were cultured with control medium and with PLGA, PLGA-CS and PLGA-HACC dissolution products for 12 h: the scratches in the control, PLGA and PLGA-CS groups became slightly narrower, while the scratch in the PLGA-HACC-containing group almost disappeared, which indicates that HACC stimulated the HDFs to migrate into the scratch area. Figure 5b shows the statistical analysis of the HDF migration assay; the scratch shrinkage percentage in the PLGA-HACC group (83% ± 3.5%) was much higher than that in the PLGA-CS (58.3% ± 4.2%), PLGA (44% ± 3.9%) and control groups (42% ± 3.5%).
To evaluate the effects of PLGA-HACC on skin cells in vitro, HDFs were cultured for 24 h with PLGA, PLGA-CS and PLGA-HACC membranes. Fibroblast growth factor (FGF-2) production from HDFs was enhanced in the PLGA-HACC group (Figure 5c). Compared with the cells cultured on PLGA, dermal fibroblasts cultured with PLGA-HACC showed an increase in cell proliferation (expression of proliferating cell nuclear antigen, PCNA) (Figure 5d). This is particularly important because wound healing requires fibroblast proliferation and differentiation into myofibroblasts [43,49]. The above results suggest that PLGA-HACC could accelerate wound healing.
Antibacterial Efficacy of Different Membranes. The bacterial activities of P. aeruginosa and S. aureus on the PLGA, PLGA-CS and PLGA-HACC membranes at the 24 h time point were observed using both SEM and CLSM. Considerably fewer live bacteria (appearing as green fluorescence) could be observed on the PLGA-HACC membranes compared to the PLGA and PLGA-CS membranes, which indicates significantly fewer adherent surviving bacteria on PLGA-HACC than on the PLGA membranes. A considerable density of dead bacteria (appearing as red fluorescence) indicated that dead colonies could be observed on the PLGA-HACC membranes. Figure 6b shows SEM images of bacterial morphology on the PLGA, PLGA-CS and PLGA-HACC membranes. S. aureus and P. aeruginosa showed more attachment on PLGA membranes compared to the PLGA-CS and PLGA-HACC membranes. A decrease in bacterial attachment was also seen on PLGA-CS membranes compared with the PLGA membranes. Very few sparsely-distributed S. aureus and P. aeruginosa could be spotted over the entire surface of the PLGA-HACC fibrous membranes. These bacteria exhibited visibly impaired structure, and their cell membranes had begun to disintegrate, which indicated that they were inactivated on the PLGA-HACC surface. Figure 6c,d quantitatively shows the surviving S. aureus and P. aeruginosa strains on the membranes at 24 h as determined by the spreading plate method.
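The plate-count arithmetic behind the CFU-per-membrane values reported in Figure 6c,d follows the protocol described in the Antibacterial Assays section (resuspend in 500 µL, dilute 100-fold, plate 50 µL, count colonies after 24 h). A minimal sketch is given below; the colony count is an invented example, and the scaling from the plated aliquot back to the full 500 µL suspension is an assumption about how the per-membrane numbers were obtained, not a step stated explicitly in the text.

```python
import math

# CFU per membrane from a spread-plate count, following the stated protocol.
colonies = 63                 # example colony count on one plate (illustrative)
dilution = 100                # 100-fold serial dilution stated in the text
plated_volume_ul = 50.0       # volume plated on each TSA plate
suspension_volume_ul = 500.0  # total volume of the sonicated suspension

cfu_per_membrane = colonies * dilution * (suspension_volume_ul / plated_volume_ul)
print(f"{cfu_per_membrane:.2e} CFU per membrane")

# Log10 reduction relative to the PLGA control burden quoted in the text (S. aureus).
control_cfu = 4.78e7
print("log10 reduction vs PLGA:", round(math.log10(control_cfu / cfu_per_membrane), 2))
```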
The numbers of S. aureus and P. aeruginosa on the PLGA-HACC membrane surfaces were found to be significantly lower than those on the PLGA and PLGA-CS membrane surfaces. It is evident that there are fewer viable bacteria on PLGA-CS than on the surface of PLGA. The S. aureus and P. aeruginosa burden per membrane of the PLGA (4.78 × 10^7 ± 7.8 × 10^5 and 7.60 × 10^7 ± 8.45 × 10^5 CFUs/membrane, respectively) and PLGA-CS (5.43 × 10^4 ± 5.50 × 10^3 and 6.53 × 10^4 ± 7.14 × 10^3 CFUs/membrane, respectively) was significantly higher than that of the PLGA-HACC (6.31 × 10^3 ± 3.40 × 10^3 and 1.92 × 10^4 ± 4.48 × 10^3 CFUs/membrane, respectively) after 24 h of incubation (Figure 6d). Thus, HACC demonstrated very effective antibacterial activity against S. aureus and P. aeruginosa. In Vivo Wound Healing We used a full-thickness infected cutaneous wound model to evaluate the healing characteristics of PLGA-HACC membranes in vivo. Figure 7a shows optical microscopic images of wound cuts treated with PLGA, PLGA-CS and PLGA-HACC fibrous membranes for 3, 7, 11 and 15 days. Differing from the PLGA and PLGA-CS groups, PLGA-HACC reduces the wound size by 21.8% after three days (Figure 7a,b). After seven days of treatment, the wound size of the PLGA membrane group was not significantly reduced, while both PLGA-CS and PLGA-HACC were able to reduce the wound size, by 34.5% and 66.4%, respectively (Figure 7b). After 11 days of treatment with either PLGA, PLGA-CS or PLGA-HACC, the wound size was reduced by 24.2%, 57.1% and 91.8%, respectively (Figure 7b). After 15 days of treatment, PLGA-HACC had a wound healing ratio of nearly 100%, significantly higher than that of the PLGA (46.4%) and PLGA-CS groups (61.3%) (Figure 7b). Such a strong wound-healing effect of PLGA-HACC could be attributed to the synergistic effects between antibacterial performance and cell migration promotion by HACC.
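As a quick arithmetic illustration of how strong the in vitro effect quantified above is, the plate counts in Figure 6d can be converted into log10 reductions relative to the unmodified PLGA control. The short sketch below is only a reader's-side calculation using the mean CFU/membrane values quoted in the text; the dictionary keys and the helper loop are ours and are not part of the original analysis.

```python
import math

# Mean CFUs per membrane after 24 h of incubation (values quoted in the text, Figure 6d)
cfu = {
    "PLGA":      {"S. aureus": 4.78e7, "P. aeruginosa": 7.60e7},
    "PLGA-CS":   {"S. aureus": 5.43e4, "P. aeruginosa": 6.53e4},
    "PLGA-HACC": {"S. aureus": 6.31e3, "P. aeruginosa": 1.92e4},
}

# Log10 reduction of each modified membrane relative to the PLGA control
for membrane in ("PLGA-CS", "PLGA-HACC"):
    for strain, count in cfu[membrane].items():
        reduction = math.log10(cfu["PLGA"][strain] / count)
        print(f"{membrane} vs PLGA, {strain}: {reduction:.1f} log10 reduction")
```

For PLGA-HACC this gives roughly a 3.9 and 3.6 log10 reduction for S. aureus and P. aeruginosa, respectively, consistent with the qualitative SEM and CLSM observations.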
To determine the amounts of bacteria in infected tissue samples, the tissue was homogenized in normal saline (1.0 mL) and plated on LB agar, and the number of colonies was counted. Homogenized tissue dispersions from the infection site were added to bacteria-coated plates to evaluate therapeutic efficacy (Figure 8). The S. aureus load in the infected tissue at Day 3 was 6.6 × 10^6 CFU·g^−1 in mice treated with PLGA and was reduced by PLGA-CS and PLGA-HACC treatment to 8.1 × 10^5 and 2.1 × 10^4 CFU·g^−1, respectively. At Day 7, the bacterial load in the infected tissue was 1.3 × 10^6 and 6.1 × 10^4 CFU·g^−1 in response to PLGA and PLGA-CS treatment, respectively, while the PLGA-HACC treatment reduced the bacterial load to 3.2 × 10^3 CFU·g^−1. Histomorphological determination of wound regeneration at different phases was conducted by HE and Masson's trichrome staining (Figure 9). At Day 3, there was enhanced infiltration of inflammatory cells in both the PLGA and PLGA-CS dressing groups, especially for macrophages and other monocytes in the PLGA group. Compared with the PLGA and PLGA-CS groups, inflammatory cell infiltration was partially suppressed, and far more fibroblasts were gathered around the impaired region for PLGA-HACC. On Day 11, many inflammatory cells appeared on the PLGA-treated wound.
The PLGA-CS group's wounds showed a faster healing rate than the PLGA group's. Compared with the PLGA and PLGA-CS groups, the PLGA-HACC group showed a higher regularity of both epithelium and connective tissue, with more fibroblasts and a more complete epithelial structure than the two control groups, which might be attributed to the high antibacterial efficiency and the promotion of cellular activities (such as proliferation and migration), including the activities of fibroblasts and keratinocytes, by HACC. After 15 days of treatment, many inflammatory cells still appeared on the PLGA-treated wounds, compared with the wounds treated with PLGA-CS membranes. In addition, after 15 days of treatment, PLGA-HACC dressings led to the production of hair follicles (indicated by black arrows). Inflammatory cells disappeared from the PLGA-HACC-treated wounds after 15 days, and the PLGA-treated wounds were covered by an incomplete epidermis. A thickened and complete epidermis was observed in the PLGA-HACC membrane groups. Masson's trichrome staining (Figure 9b) was performed to assess the collagen deposition (seen as blue) in the wound site. At both Day 3 and Day 11 of treatment, the PLGA-HACC treatment groups showed significantly higher collagen deposition than the PLGA and PLGA-CS groups. After 15 days of treatment, compared to the PLGA and PLGA-CS treatment groups, more mature collagen fibers and the production of hair follicles (indicated by black arrows) were also observed in the PLGA-HACC treatment group, with less inflammatory cell presence, which is consistent with the HE staining results. More collagen tissue could help in the reconstruction of the ECM and further support skin tissue growth. Discussion The aim of the current study was to fabricate PLGA electrospun membranes with HACC surface modification to repair infected wounds.
The SEM and AFM images showed that the prepared PLGA-HACC membranes continue to have a uniform, porous structure after the immobilization process (Figure 1). The nanoscale topography of the PLGA-HACC membranes mimics the natural extracellular matrix and is, thus, favorable for cell attachment, migration and proliferation [50]. The porosity of the prepared membranes favors nutrient and gas exchange in wound treatment, which is beneficial for wound healing [51]. During wound healing, infections caused by pathogens delay the closing of the wound and increase the healthcare burden [52,53]. Among a plethora of different antiseptics for wound care (mostly alcohols, hydrogen peroxide and iodine), there have already been attempts to load silver into electrospun nanofibers [54]. Because the utilization of antimicrobial agents can cause toxicity and the development of drug resistance, other approaches are highly desired. Our previous study found that HACC with moderate DS was not cytotoxic and had high antibacterial activity. In this study, HACC with 30% DS was grafted onto PLGA nanofibers to prevent post-wound infections. Based on the in vitro results (Figure 6), PLGA-HACC exhibited effective antibacterial activity towards both Gram-positive (S. aureus) and Gram-negative (P. aeruginosa) bacteria. Our previous study demonstrated that HACC exhibited a broad spectrum of antibacterial ability against various Gram-positive bacteria, which may be due to the electrostatic interaction of the positively-charged HACC and the negatively-charged bacterial membranes, which causes cytoplasmic membrane fracture of bacterial cells [34]. In addition, it can also be expected that the application of this material will not induce bacterial resistance. PLGA is a synthetic polymer that is used as a biomedical material because of its good biocompatibility, adjustable mechanical properties and tunable degradation rate [41]. However, the electrospun PLGA membranes had a relatively smooth surface (Figure 1) and supported low levels of cell adhesion (Figure 3). The surfaces became rougher after alkali treatment and CS and HACC immobilization. More cells adhered to the surface of the PLGA-HACC membranes, and the roughness of the PLGA-HACC surface might have contributed to these results. Several other studies have shown that cell response is improved by rough material surfaces [55,56]. It was demonstrated that the PLGA-HACC membranes could promote the migration of HDFs with increased PCNA and FGF-2 expression (Figure 5), which enhances myofibroblastic differentiation, the growth and migration of dermal fibroblasts, angiogenesis and wound healing [49]. We concluded that the material made by grafting HACC onto PLGA membrane surfaces demonstrated acceptable cytocompatibility, as well as good antimicrobial activity. The in vivo experiments further demonstrated that the PLGA-HACC membranes were effective in reducing the inflammatory response after implantation into the infected wound (Figure 9). As expected, type I collagen was the most expressed form of collagen in the skin, serving as the framework for connecting skin tissue [57]. The PLGA-HACC membranes stimulated COL expression on Day 11 and Day 15 in infected wound skin, and using these membranes also resulted in higher collagen production, as observed by Masson's trichrome staining (Figure 9b). Conclusions In this study, HACC-modified PLGA nanofibrous membranes were fabricated through entrapment-graft treatment.
Bacterial colonization on the surface of PLGA-HACC was observed using SEM and CLSM. Compared with PLGA and PLGA-CS membranes, PLGA-HACC membranes exhibited effective antibacterial activity towards both Gram-positive (S. aureus) and Gram-negative (P. aeruginosa) bacteria. HACC modification exhibited favorable cytocompatibility and a significant ability to stimulate the adhesion, spreading and proliferation of HDFs and HaCaTs. The in vivo study demonstrated that PLGA-HACC exhibits excellent wound healing efficacy in an infected full-thickness excision wound model in mice, with significant re-epithelialization and dermal reconstruction.
Non-planar ABJ Theory and Parity While the ABJ Chern-Simons-matter theory and its string theory dual manifestly lack parity invariance, no sign of parity violation has so far been observed on the weak coupling spin chain side. In particular, the planar two-loop dilatation generator of ABJ theory is parity invariant. In this letter we derive the non-planar part of the two-loop dilatation generator of ABJ theory in its SU(2) × SU(2) sub-sector. Applying the dilatation generator to short operators, we explicitly demonstrate that, for operators carrying excitations on both spin chains, the non-planar part breaks parity invariance. For operators with only one type of excitation, however, parity remains conserved at the non-planar level. We furthermore observe that, as for ABJM theory, the degeneracy between planar parity pairs is lifted when non-planar corrections are taken into account. Introduction The concept of spin chain parity [1] played a crucial role in the discovery of higher loop integrability of the planar spectral problem of N = 4 SYM [2]. For a spin chain state the parity operation simply inverts the order of spins at the sites of the chain. In the field theory language the operation correspondingly inverts the order of fields inside a single trace operator or, equivalently, complex conjugates the gauge group generators. N = 4 SYM theory is parity invariant. In particular, the theory's dilatation generator commutes with parity. Integrability of the planar spectral problem at one loop order, discovered first in [3], implies the existence of a tower of higher conserved charges. The first of these, while commuting with the dilatation generator, anti-commutes with parity. As a consequence one finds in the planar spectrum pairs of operators with opposite parity but the same conformal dimension, denoted as planar parity pairs. The fact that these planar parity pairs survived higher loop corrections constituted the seed for the unveiling of higher loop integrability [2,4]. When non-planar corrections were taken into account, parity was still a good quantum number but the degeneracies between planar parity pairs disappeared [2]. While not disproving integrability this shows that the standard construction of conserved charges does not work any more. The discovery of a novel AdS_4/CFT_3 correspondence [5,6] has provided us with the possibility of studying the effects of parity violation in a supersymmetric gauge theory and its dual string theory. A supersymmetric N = 6 Chern-Simons-matter theory with gauge group SU(M)_k × SU(N)_{−k}, where k denotes the Chern-Simons level, has been found to be dual to type IIA string theory on AdS_4 × CP^3 with a background NS B-field B_2 having non-trivial holonomy on CP^1 ⊂ CP^3. This B-field holonomy causes breaking of world-sheet parity for M ≠ N and results in a string background which breaks target-space parity [6]. Correspondingly, the dual field theory does not respect three-dimensional parity invariance. For M = N the Chern-Simons-matter theory is known as ABJM theory whereas the general version is denoted as ABJ theory. Our aim is to investigate how the parity breaking on the field theory side manifests itself in the spin chain language. The first steps in this direction were taken in [7,8] where the two-loop planar dilatation generator of ABJ theory was derived, respectively in an SU(4) sub-sector and for the full set of fields. However, rather surprisingly, in these studies no effects of parity violation were seen.
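Since the parity operation on trace operators is used repeatedly below, a toy sketch may help fix the convention: a single-trace operator is represented as a cyclic word of field labels, and parity simply reverses that word. This is purely illustrative; the field labels and helper functions are our own and are not the notation of the letter.

```python
# Toy illustration of spin-chain parity: a single-trace operator is a cyclic
# word of field labels, and parity reverses the order of the fields.

def parity(trace):
    """Invert the order of fields inside a single trace."""
    return tuple(reversed(trace))

def cyclic_equal(a, b):
    """Two traces describe the same operator if they agree up to cyclic shifts."""
    return len(a) == len(b) and any(b == a[i:] + a[:i] for i in range(len(a)))

# Example: a length-6 operator with alternating fields from the two SU(2)s
O = ("Z1", "W1", "Z1", "W2", "Z2", "W1")
print(parity(O))                   # ('W1', 'Z2', 'W2', 'Z1', 'W1', 'Z1')
print(cyclic_equal(O, parity(O)))  # False: this operator is not parity invariant
```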
In fact the planar two-loop dilatation generator of ABJ theory differs from that of ABJM theory [9,10,11] only by an overall pre-factor. This raises the question of whether the parity symmetry of the spin chain has a deeper significance, or is simply an accidental symmetry of the two-loop planar approximation. In the present letter we will derive the two-loop non-planar dilatation generator of ABJ theory in an SU(2) × SU(2) ⊂ SU(4) sub-sector and explicitly demonstrate parity-breaking effects. We start by, in section 2, briefly describing ABJ theory and subsequently proceed to derive its full (planar plus non-planar) two-loop dilatation generator in the SU(2) × SU(2) sector in section 3. As the derivation follows closely that of ABJM theory [12] we shall be very brief. In section 4 we explicitly apply the dilatation generator to a series of short operators and determine their spectrum. In particular, we show that the non-planar part of the dilatation generator does not conserve parity. In addition, we observe a lifting of all planar degeneracies. Finally, section 5 contains our conclusion. ABJ theory Our notation will follow that of references [13,10]. ABJ theory [6] (see also [14] for a discussion at the classical level) is a three-dimensional N = 6 superconformal Chern-Simons-matter theory with gauge group U(M)_k × U(N)_{−k} and R-symmetry group SU(4). The matter content consists of scalars and fermions transforming in the bifundamental representation of the gauge group. Expressed in terms of these fields, the action contains Chern-Simons terms for the two gauge fields, kinetic terms built from the covariant derivatives D_m of the scalars and fermions (and similarly for D_m ξ_B and D_m ω_B), and bosonic and fermionic interaction potentials. The decomposition of the scalars and fermions into their SU(2) components has allowed us to split the bosonic as well as the fermionic potential into D-terms and F-terms. The precise form of these can be found in [12]. The theory has two 't Hooft parameters, λ = M/k and λ̄ = N/k, and one can consider the double 't Hooft limit in which M, N and k are taken to infinity with λ and λ̄ kept fixed. In this letter we will be interested in studying non-planar corrections (i.e. 1/N and 1/M corrections) to anomalous dimensions at the leading two-loop level. We shall restrict ourselves to considering scalar operators belonging to an SU(2) × SU(2) sub-sector, i.e. single-trace operators built from alternating scalars, schematically tr(Z_{A_1} W_{B_1} Z_{A_2} W_{B_2} · · · Z_{A_L} W_{B_L}) where A_i, B_i ∈ {1, 2}, and their multi-trace generalizations. A central object in our analysis will be the parity operation P, which acts on an operator by inverting the order of the fields inside each of its traces. To derive the dilatation generator it is convenient to make use of the method of effective vertices [15]. An effective vertex is a space-time independent vertex which, when contracted with a given operator of the type (5), gives the combinatorial factor associated with a particular Feynman integral times the value of the integral. If things work as in N = 4 SYM and as in ABJM theory [12], the contribution from the bosonic D-terms should cancel against contributions from gluon exchange, fermion exchange and self-interactions to all orders in the genus expansion, and this is indeed what happens. To prove this we first calculate the effective vertices corresponding to the four diagrams in figure 1. We notice, however, that for operators belonging to the SU(2) × SU(2) sector there are no contributions from Fig. 1d. Adding the contributions from the bosonic potential, gluon exchange and fermion exchange we find the combination given in eqn. (7), in which : ... : means that self-contractions should be excluded. The quantity V appearing there is a vertex which can be shown to give a vanishing contribution when applied to any operator in the SU(2) × SU(2) sector. Furthermore, the last term in eqn.
(7) has exactly the form expected for self-energies and one can show that it precisely cancels the contribution from these. To do so one has to check the cancellation of both the planar and the non-planar part of the constant appearing in eqn. (8). The planar part of the analysis can be carried out with the aid of reference [7]. The non-planar part, however, requires a careful analysis of the non-planar versions of the 14 self-energy diagrams. Collecting everything, we thus verify that the full two-loop dilatation generator is indeed given only by the F-terms in the bosonic potential, i.e. by the expression in eqn. (9). It is easy to see that the dilatation generator vanishes when acting on an operator consisting of only two of the four fields from the SU(2) × SU(2) sector. Accordingly we will denote two of the fields, say Z_1 and W_1, as background fields and Z_2 and W_2 as excitations. It is likewise easy to see that operators with only one type of excitation, say W_2's, form a closed set under dilatations. For operators with only W_2-excitations the dilatation generator consists of four terms, whereas in the case with two different types of excitations it has 16 terms. In both cases D is easily seen to reduce, in the planar limit, to the dilatation generator of [9,10], in which P_{k,k+2} denotes the permutation between sites k and k + 2 and 2L denotes the total number of fields inside an operator. It differs from the planar dilatation generator of ABJM theory only by having the pre-factor λλ̄ instead of λ². As explained in [9,10] this is the Hamiltonian of two alternating SU(2) Heisenberg spin chains, coupled via a momentum condition. As mentioned earlier, integrability implies that there exists a tower of charges which all commute and which commute with the Hamiltonian. In particular, there exists one such charge Q_3 which anti-commutes with parity. In addition, the planar dilatation generator itself commutes with parity, i.e. [D_planar, P] = 0. As a consequence, the spectrum of the planar theory has degenerate parity pairs, i.e. pairs of operators with identical anomalous dimension but opposite parity. In reference [12] it was shown that for ABJM theory at the non-planar level the two-loop dilatation generator still commutes with parity but the degeneracies between parity pairs are lifted. This hinted towards the absence of higher conserved charges, at least in a standard form. Below we will analyse the situation for ABJ theory and find that again the planar degeneracies disappear, but in addition the non-planar two-loop dilatation generator does not any longer commute with parity. When acting with the dilatation generator on a given operator we have to perform three contractions, as dictated by the three hermitian conjugate fields. It is easy to see that by acting with the dilatation generator one can change the number of traces in a given operator by at most two. More precisely, the two-loop dilatation generator has the schematic expansion D = D_0 + (1/M)(D_+ + D_−) + (1/M²)(D_{++} + D_{−−} + D_{00}). Here D_+ and D_{++} increase the number of traces by one and two respectively, and D_− and D_{−−} decrease the number of traces by one and two. Finally, D_0 does not change the number of traces and D_{00} first adds one trace and subsequently removes one, or vice versa. The quantity 1/M stands for 1/N or 1/M, and 1/M² stands for 1/N², 1/M² or 1/(MN). Even for short operators it is in practice hard to diagonalise the full dilatation generator exactly. But one can relatively easily diagonalise the planar dilatation generator, either by brute force or by means of the Bethe equations.
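For orientation, the planar operator being diagonalised here is the standard alternating-Heisenberg-chain Hamiltonian; the display below is a hedged reconstruction (the overall normalization should be taken from [9,10]), with the ABJM prefactor λ² replaced by λλ̄ as stated in the text.

```latex
% Hedged reconstruction of the planar two-loop dilatation generator in the
% SU(2) x SU(2) sector; site indices are understood modulo 2L.
\begin{equation}
  D_{\mathrm{planar}} \;\propto\; \lambda \bar{\lambda}
  \sum_{k=1}^{2L} \left( \mathbb{1} - P_{k,k+2} \right)
\end{equation}
```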
Subsequently the non-planar terms can be treated as perturbations and the energy corrections found approximately using quantum mechanical perturbation theory [16]. Notice that while energy corrections are generically of order 1/M², degeneracies in the planar spectrum will lead to energy corrections of order 1/M. (For details see [12].) Short Operators In this section we determine non-planar corrections to the anomalous dimensions of a number of short operators. This is done by explicitly computing and diagonalising the planar mixing matrix (aided by GPL Maxima as well as Mathematica) and subsequently determining the non-planar corrections by quantum mechanical perturbation theory. Operators with excitations on the same chain In this sector, the simplest set of operators for which one observes degenerate parity pairs as well as non-trivial mixing between operators with one, two and three traces consists of operators of length 14 with three excitations. There are in total 17 such non-protected operators. Among the non-protected operators there are only eight which are not descendants. Their explicit form can be found in reference [12]. The corresponding mixing matrix of course reduces to that of ABJM theory for N = M, as it should, cf. [12]. We notice that for this type of operators the positive and negative parity states still decouple, i.e. parity is preserved. The states O_1 and O_2 are exact eigenstates of the full dilatation generator. We also observe a degeneracy between the negative parity double-trace state O_2 and the positive parity triple-trace state O_8, as well as a degeneracy between the double-trace state O_6 and the triple-trace state O_7, both of positive parity. However, states with different numbers of traces cannot be connected via the conserved charge Q_3. Notice that by construction the mixing matrix is not hermitian but related to its hermitian conjugate by a similarity transformation [17,16]. For the remaining operators we observe that all matrix elements between degenerate states vanish. Thus the leading non-planar corrections to the anomalous dimensions can be found using second order non-degenerate perturbation theory. We observe that all degeneracies found at the planar level get lifted when non-planar corrections are taken into account, for all values of M and N. This in particular holds for the degeneracies between the members of the planar parity pair (O_1, O_3). We have considered a number of different types of states with only one type of excitation and have found that the same pattern persists in all cases. In fact, one can explicitly show that the matrix elements between n- and (n + 1)-trace states of the normal ordered operator in eqn. (9) (i.e. D without its pre-factor) can only depend on M and N through the combination M + N. Thus one cannot have parity breaking. Operators with excitations on both chains The simplest multiplet of operators which have non-planar energy corrections are operators of length six with two excitations. There are in total three such non-protected highest weight states, O_1, O_2 and O_3. Comparing their planar anomalous dimensions (in units of λλ̄), parities and trace structures, one finds already in this simple case one pair of degenerate states with opposite parity, namely O_1 and O_2.
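Before turning to the explicit mixing matrices, the generic mechanism invoked here, degenerate planar levels splitting at order 1/M while isolated levels shift only at order 1/M², can be mimicked numerically with a toy matrix. The numbers below are invented for illustration and have nothing to do with the actual ABJ mixing matrices.

```python
import numpy as np

# Planar (diagonal) anomalous dimensions with one degenerate pair, plus a
# toy non-planar perturbation V suppressed by a factor 1/M.
H0 = np.diag([4.0, 4.0, 6.0])
V = np.array([[0.0, 1.0, 0.5],
              [1.0, 0.0, 0.2],
              [0.5, 0.2, 0.0]])

for inv_M in (0.0, 0.05, 0.1):
    eigenvalues = np.sort(np.linalg.eigvalsh(H0 + inv_M * V))
    print(f"1/M = {inv_M:.2f}: {np.round(eigenvalues, 4)}")

# The degenerate pair splits linearly in 1/M (degenerate perturbation theory),
# whereas the isolated level moves only at order (1/M)**2.
```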
Expressing the dilatation generator in this basis and taking into account all non-planar corrections (in units of λλ̄), we observe that in this case the dilatation generator does mix states with different parity. In other words, the non-planar dilatation generator does not commute with P. Calculating the energies by second order quantum mechanical perturbation theory, we find in particular that the planar degeneracy is lifted. Let us analyse a slightly larger multiplet of operators with two excitations of different types that exhibits some more of the above mentioned non-trivial features of the topological expansion: operators of length eight with one excitation of each type. There are in total 7 such non-protected operators, denoted as O_1, . . . , O_7. Their explicit form, together with their planar anomalous dimensions (in units of λλ̄), trace structures and parities, can be found in reference [12]. Expressing the dilatation generator in the basis given above and taking into account all non-planar corrections (in units of λλ̄), we obtain a mixing matrix which of course reduces to that of ABJM theory for N = M, as it should, cf. [12]. We observe again that the dilatation generator does mix states with different parity. To find the corrections to the eigenvalues we use perturbation theory as described in section 3. First, we notice that most matrix elements between degenerate states vanish. The only exceptions are the matrix elements between the states O_1 and O_3. To find the non-planar correction to the energy of these states we diagonalise the Hamiltonian in the corresponding subspace. For the remaining operators the leading non-planar corrections to the energy can be found using second order non-degenerate perturbation theory. We again notice that all degeneracies observed at the planar level get lifted when non-planar corrections are taken into account, for all values of M and N. This in particular holds for the degeneracies between the members of the two parity pairs. We have examined a number of operators with excitations of two different types and found that the same pattern persists in all cases. A closer scrutiny of the action of the dilatation generator reveals that the asymmetry between M and N originates from the situation where the operator separates two neighbouring excitations, a situation which one does not encounter when the two excitations are on the same chain. Let us note that the characteristic polynomial of the anomalous dimension matrices will always be even in M − N. This implies that the eigenvalues will generically be even under the interchange of M and N (as is the case above). A possible exception might arise in cases where nonzero matrix elements appear between planar degenerate states which have opposite parity and differ in trace number by one (notice that the requirement of different trace structure prevents this complication from arising for planar parity pairs). Although mixing of the above type does occur, we did not observe any asymmetry in the eigenvalues for the explicit cases we examined. Conclusion We have derived and analysed the non-planar corrections to the two-loop dilatation generator of ABJ theory in the SU(2) × SU(2) sub-sector. Our analysis shows that these corrections mix states with positive and negative parity, i.e. the non-planar dilatation generator does not commute with the parity operator P; more precisely, the value of the commutator is proportional to M − N.
This is in contrast to earlier studies of the planar two-loop dilatation generator, which did not reveal any sign of parity breaking [7,8]. Furthermore, whereas the planar dilatation generator could be proved to be integrable, we do not see any indication of this being the case for the non-planar one, since none of the planar degeneracies between parity pairs survive the inclusion of non-planar corrections. It is an interesting question whether the planar dilatation generator remains integrable and parity invariant when higher loop corrections are taken into account. In this connection it is worth mentioning that parity breaking does not prevent integrability [7,8]. At planar level, one could try to address the question of parity breaking at higher-loop order from the string theory side by calculating a transition amplitude between two string states of different parity living in an instanton background of the ABJ theory dual. We note that an interesting effect of parity breaking in the non-interacting string theory has been observed in [18]. One could also try to match the results of the present calculation to the behaviour of the dual string theory by calculating the semi-classical amplitude for non-parity-conserving splitting of a one-string state into a two-string state in the spirit of [19,20]. Of course, this calculation would at best allow us to obtain qualitative agreement between non-planar gauge theory and interacting string theory. How to achieve quantitative agreement remains a challenge.
A Motion Planning Method for Automated Vehicles in Dynamic Traffic Scenarios: We propose a motion planning method for automated vehicles (AVs) to complete driving tasks in dynamic traffic scenes. The proposed method aims to generate motion trajectories for an AV after obtaining the surrounding dynamic information and making a preliminary driving decision. The method generates a reference line by interpolating the original waypoints and generates optional trajectories with costs in a prediction interval containing three dimensions (lateral distance, time, and velocity) in the Frenet frame, and filters the optimal trajectory by a series of threshold checks. When calculating the feasibility of optional trajectories, the cost of all optional trajectories after removing obstacle interference shows obvious axisymmetric regularity concerning the reference line. Based on this regularity, we apply the constrained Simulated Annealing Algorithm (SAA) to improve the process of searching for the optimal trajectories. Experiments in three different simulated driving scenarios (speed maintaining, lane changing, and car following) show that the proposed method can efficiently generate safe and comfortable motion trajectories for AVs in dynamic environments. Compared with the method of traversing sampling points in discrete space, the improved motion planning method saves 70.23% of the computation time, and overcomes the limitation of the spatial sampling interval. Introduction As an important aspect of profiling the evolution of human civilization, transportation modalities are rapidly moving towards automation and interconnection. It is widely accepted that transportation systems will become much more efficient and safer when the task of vehicle driving shifts from a manual process to an automatic process. Over the past few decades, researchers have devoted a great deal of effort to realize the goal of autopiloting. The current process of achieving autonomous driving consists of environment perception and localization, behavior decisions, motion planning, and trajectory tracking. The purpose of motion planning is to generate a trajectory that satisfies the constraints of vehicle dynamics, driving safety, comfort, and efficiency, after the AV makes the next initial driving decision based on dynamic environment information and its state. Motion planning is a key part of the autonomous driving technology process and a critical factor in drivers' experience and safety. Related Work The methods of motion planning for AVs originally evolved from mobile robot technology [1][2][3][4]. As autopilot technology has gradually developed, numerous motion planning methods have become available for AVs [5]. Sampling-based methods, originally applied in the field of mobile robotics, typically traverse all sampling points to select the optimal trajectory from the sampling space, which is time-consuming. In addition, increasing the speed by decreasing the density of sampling points may lead to a decrease in the accuracy of the motion planning method, due to the fact that reducing the sampling points may cause the optimal trajectory to be missed. Achieving a balance between the number of optional trajectories in the sampling space and the efficiency of the method has rarely been mentioned in previous studies. Contribution To overcome these limitations, this paper proposes a safe and efficient motion planning method for dynamic traffic scenes by combining the sampling-based method in the Frenet frame and the optimal trajectory searching method improved by SAA.
The motion planning is divided into two parts: trajectory generation and optimal trajectory searching. The lateral and longitudinal motions of the AV are generated by the sampling-based method in the Frenet frame [22,23], which ensures a continuous rate of change of the vehicle's acceleration. The lateral motions are modeled by a quintic polynomial, and the longitudinal motions are modeled by a quintic or quartic polynomial. Through the combination of lateral and longitudinal motions, the AV can handle different dynamic traffic scenarios. The cost function and the trajectory checking function are designed in accordance with traffic rules and drivers' habits. The proposed trajectory generation strategy satisfies the constraints of kinematics, dynamics, and the physical shape of the road. To efficiently search for the optimal trajectory, the constrained SAA is applied to search for the minimum cost from the non-convex cost distribution space. Based on the axisymmetric property of the cost distribution space, the initial value is reserved in the searching process of the optimal trajectory to ensure efficiency. Because the selection probability of all trajectories in the sampling space is equal, the number of optional trajectories in the sampling space is guaranteed. In addition, as the searching process is improved from ergodic to probabilistic, the execution time of the method no longer depends on the setting of the sampling space, but on the parameter settings of the SAA. This searching strategy balances the number of optional trajectories in the sampling space with the efficiency of the motion planning method. Driven by the cost and constraint, this motion planning method can actively adapt to different dynamic traffic scenarios. Under different driving strategies, the performance of the AV using the proposed motion planning method in simulated dynamic traffic scenarios was evaluated. The remainder of the paper is organized as follows. Section 2 introduces the methodological framework, including coordinates transformation, reference line generation, generation of optional trajectories, searching for optimal trajectories, and final path selection. Section 3 presents an improved method for the optimal trajectories searching process. Section 4 conducts numerical experiments on the proposed methodology with a simulated urban road. Finally, Section 5 concludes the paper. Problem Description and Basic Assumptions Consider the driving scenario shown in Figure 1, where the vehicle is expected to drive along the center line of the current lane (i.e., the reference line in Figure 1). A human driver will adjust the distance between the vehicle and the reference line during the driving process with his perception and experience, and an AV will be expected to control the vehicle in the same manner as a human driver when performing the same driving task. A Cartesian coordinate system can be established with reference to the orientation of the road to specify the vehicle's position (X 0 , Y 0 ). However, it is complicated to evaluate the deviation of the vehicle from the reference line in the Cartesian coordinate system, and it is not well adapted for curved roads. Because the reference line tends to be generated along the centerline of the road, the reference line is differentiable in most roads. Then, the tangential and normal directions of any point on the reference line can be calculated, and a Frenet frame as shown in Figure 2 can be constructed.
For each planning period, there will be a Frenet coordinate origin, and there will be vehicle coordinates and obstacle coordinates in the Frenet frame corresponding to the Cartesian coordinate system. Taking the scenario in Figure 2 as an example, the Frenet coordinates of the obstacle are (s 5 , d 2 ) at this point. When the vehicle moves along the green trajectory in Figure 2, a set of Frenet coordinates consisting of an arc length s and a normal offset length d corresponding to their Cartesian coordinates exists at each moment. These two coordinate systems will be used to describe the motion state of the AV in this paper. Assuming that the road edge is continuously smooth, and its centerline is accordingly smooth, AVs are expected to follow the reference line in most cases. It is necessary to temporarily deviate from the reference line for the AV when there are obstacles or other vehicles on the reference line. The task of the motion planning is to find the safest and most comfortable trajectory for the AV within a planning period.
Coordinates Transformation and the Reference Line Generation The trajectory of an AV can more easily be represented in the Frenet frame due to the irregular road structure, whereas the inputs and outputs of the motion planning need to be transformed into Cartesian coordinates. Moreover, the process of calculating the distance between two points on the trajectory is indispensable during the trajectory checking. For each set of coordinates in the Frenet frame, it is necessary to define the transformation between it and the corresponding coordinates in the Cartesian coordinate system. As shown in Figure 3, assuming that the Cartesian coordinates (X t , Y t ) of a point at moment t on the real trajectory of the AV are known, there exists a reference point (X r , Y r ) on the s-axis of the Frenet frame that minimizes the distance from the point (X r , Y r ) to the point (X t , Y t ). Then, the curve length from the point (X r , Y r ) to the coordinate origin in the Frenet frame is the s-coordinate of the AV at time t, and the distance from the point (X r , Y r ) to the point (X t , Y t ) is the l-coordinate of the AV at time t. Therefore, the Frenet coordinates of the vehicle at moment t are (s t , l t ). The value of l t can be calculated from Equations (1) and (2), where X and R are the vectors pointing from the coordinate origin to the vehicle position and to the reference point, respectively; sign represents the sign function, which can only take the value of 1 or −1; and ρ l is the parameter that determines whether the l-coordinate is positive or negative, which can be calculated from Equation (3), where θ r is the heading angle of the reference point at moment t.
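A minimal numerical version of this projection is sketched below, assuming the reference line is available as a dense polyline with precomputed arc lengths and heading angles; the function name cartesian_to_frenet and the variable names are ours, not the paper's.

```python
import numpy as np

def cartesian_to_frenet(x, y, ref_xy, ref_s, ref_theta):
    """Project a Cartesian point onto a densely sampled reference line.

    ref_xy    : (N, 2) array of reference-line points
    ref_s     : (N,) cumulative arc length at each reference point
    ref_theta : (N,) heading angle at each reference point
    Returns (s, l) in the Frenet frame; l is signed as in Equations (1)-(3).
    """
    d = ref_xy - np.array([x, y])
    i = int(np.argmin(np.hypot(d[:, 0], d[:, 1])))      # nearest reference point
    dx, dy = x - ref_xy[i, 0], y - ref_xy[i, 1]
    # Sign of the lateral offset: +1 if the point lies to the left of the heading.
    rho_l = np.sign(np.cos(ref_theta[i]) * dy - np.sin(ref_theta[i]) * dx)
    return ref_s[i], rho_l * np.hypot(dx, dy)

# Example: straight reference line along the x-axis.
xs = np.linspace(0.0, 100.0, 1001)
ref_xy = np.column_stack([xs, np.zeros_like(xs)])
ref_s = xs.copy()
ref_theta = np.zeros_like(xs)
print(cartesian_to_frenet(30.0, -1.5, ref_xy, ref_s, ref_theta))  # ≈ (30.0, -1.5)
```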
When the Frenet coordinates (s t , d t ) of the AV at moment t are known, the Cartesian coordinates of the AV at that time can be found with the help of the heading angle of the reference line. As shown in Figure 3, the unit tangent vector t r and the unit normal vector n r can be found at all points on the reference line. The angle θ r between t r and the parallel line of the x-axis in the Cartesian coordinate system is defined as the heading angle of that point, and n r is in the same direction as the d-axis of the Frenet frame at this moment. The coordinates in the Cartesian coordinate system of the AV at moment t can be calculated by Equation (4). The premise of the above transformation process is that the points on the s-axis in the Frenet frame are known. AVs usually undertake global path planning based on the HD map according to their origin and destination. Then the lane center lines at each stage are spliced to obtain the reference line by referring to the road information given by the global path planning. From the perspective of vehicle dynamics and comfort, the reference line must not have any curvature interruption. The discrete original waypoints obtained on the HD map are interpolated by applying the cubic spline curve to obtain a reference line that matches the human driver's driving habits. The reference line is usually used as the s-axis of the Frenet frame in the motion planning process. The s-coordinates in the Frenet frame are expected to be mapped to the Cartesian coordinate system. The path consisting of the original waypoints is sampled uniformly, and its cumulative length is expressed as S i . The reference line is parameterized with S i as the independent variable. The coordinates (X ri , Y ri ) of the reference line are transformed into functions of S i , i.e., X ri = f x (S i ) and Y ri = f y (S i ), respectively. Taking the interpolation of (S i , X ri ) as an example, the process of cubic spline interpolation is shown in Equations (5)-(9), where h i = S i+1 − S i represents the step size and m i is the second-derivative value at each knot, which satisfies the cubic-spline continuity conditions; the value of m i can be solved by applying the Not-A-Knot boundary condition to these equations. After the cubic spline interpolations, the information of all the sampling points on the pre-selected driving centerline can be obtained in the Frenet frame and the Cartesian coordinate system, and a usable reference line can be obtained. For the convenience of presentation, the longitudinal and lateral positions mentioned in this paper represent the coordinates in the Frenet frame. The vehicle positions in the Cartesian coordinate system will be expressed in x- and y-coordinates. Trajectories Generation In this section, the optional trajectories of the AV are described by mathematical equations. Safety and comfort are especially important during the motion of an AV. The degree of jerking, i.e., the derivative of the vehicle acceleration, is used to evaluate the comfort level when generating a trajectory.
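Returning to the reference-line construction described above, the interpolation step can be sketched with SciPy's not-a-knot cubic spline. The waypoints below are invented example values; only the use of the chord-length parameter S_i and the Not-A-Knot boundary condition follows the text.

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Discrete waypoints from the HD map (made-up example values).
wx = np.array([0.0, 10.0, 20.5, 35.0, 70.5])
wy = np.array([0.0, -6.0,  5.0,  6.5,  0.0])

# Cumulative chord length S_i along the waypoint polyline.
ds = np.hypot(np.diff(wx), np.diff(wy))
S = np.concatenate([[0.0], np.cumsum(ds)])

# Parameterize X_r = f_x(S) and Y_r = f_y(S) with not-a-knot cubic splines.
fx = CubicSpline(S, wx, bc_type="not-a-knot")
fy = CubicSpline(S, wy, bc_type="not-a-knot")

# Densely sampled reference line plus heading angle for later Frenet conversions.
s_dense = np.linspace(0.0, S[-1], 500)
ref_x, ref_y = fx(s_dense), fy(s_dense)
ref_theta = np.arctan2(fy(s_dense, 1), fx(s_dense, 1))  # heading from first derivatives
```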
The polynomial can generate smooth curves while imposing a series of constraints on the curves, which is very suitable for solving the problem of trajectory generation. The trajectories in a time-varying and two-dimensional space need to satisfy multiple constraints from the lateral and longitudinal motion in the Frenet frame. Because the accuracy and speed of the solution can be affected when the order of the polynomial is too high, the lateral and longitudinal motions of the vehicle are described as polynomials of the vehicle state with respect to time. For the lateral motion, d t0 and d t1 represent the starting and ending lateral positions of the vehicle, their first time derivatives represent the starting and ending lateral velocities, and their second time derivatives represent the starting lateral acceleration and the ending lateral acceleration of the vehicle, respectively. The motion planning problem can be transformed into a constrained functional extremum problem as shown in Equation (10), where f lateral (t) is the mathematical function describing the lateral distance of the motion of the AV with respect to moment t, and the objective is the integral of the square of the lateral motion jerk from t0 to t1. Obviously f lateral (t) must be a continuous bounded function in [t0, t1]; otherwise the jerk (the third time derivative of f lateral ) would become infinite and make the problem unsolvable. The quintic polynomial shown in Equation (11) is used to represent f lateral (t), where a d0 , a d1 , a d2 , a d3 , a d4 , and a d5 are the parameters of the quintic polynomial, which can be solved by substituting the constraints to obtain the values of each parameter, as shown in Equations (12)-(17). For convenience, assume that t1 − t0 = τ. It is important to note that f lateral (t) alone does not constitute a complete planning process, because the AV cannot move only in the lateral direction as time t progresses. Therefore f lateral (t) can only guarantee the smoothness and comfort of the trajectories in the lateral direction. It is also necessary to describe and constrain the longitudinal motion. Similar to the lateral motion planning, the longitudinal motion planning problem can be transformed into finding the f longitudinal (t) that minimizes the integral of the square of the longitudinal jerk over the planning period. It is feasible to describe longitudinal motion in the same manner, but the driving strategies sometimes affect the vehicle's motion constraints. In the usual case, the AV is expected to arrive at a specified location within time τ to execute the order of the upper system. The constraints of longitudinal motion in this case are like those on lateral motion, and a quintic polynomial as shown in Equation (18) is used to represent f longitudinal (t) in this case, where s t0 and s t1 represent the starting and ending longitudinal positions of the vehicle motion, their first time derivatives represent the starting and ending longitudinal speeds of the vehicle, their second time derivatives represent the starting and ending longitudinal accelerations of the AV, and a s0 , a s1 , a s2 , a s3 , a s4 , and a s5 are the parameters of the quintic polynomial, which can be solved by substituting the constraints to obtain the values of each parameter, as shown in Equations (19)-(24). In addition, there are cases when we want the AV to maintain a constant speed while ensuring safety and smoothness of trajectories, and its ending position within a planning period is not considered. Then, the number of constraints is 5 (i.e., the starting longitudinal position, the starting and ending longitudinal speeds, and the starting and ending longitudinal accelerations). A quartic polynomial, as shown in Equation (25), is used to represent f longitudinal (t) for the velocity maintenance case.
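The coefficient solve behind Equations (11)-(17) is a small linear system; the sketch below (our own helper function, with τ = t1 − t0 and the six lateral boundary conditions as inputs) illustrates it for the lateral quintic, and the quintic or quartic longitudinal cases follow the same pattern.

```python
import numpy as np

def quintic_coefficients(d0, d0_dot, d0_ddot, d1, d1_dot, d1_ddot, tau):
    """Coefficients a_d0..a_d5 of d(t) = sum_k a_dk * t**k on t in [0, tau],
    matching position, velocity and acceleration at both ends."""
    A = np.array([
        [1, 0,   0,      0,        0,         0],          # d(0)
        [0, 1,   0,      0,        0,         0],          # d'(0)
        [0, 0,   2,      0,        0,         0],          # d''(0)
        [1, tau, tau**2, tau**3,   tau**4,    tau**5],     # d(tau)
        [0, 1,   2*tau,  3*tau**2, 4*tau**3,  5*tau**4],   # d'(tau)
        [0, 0,   2,      6*tau,    12*tau**2, 20*tau**3],  # d''(tau)
    ], dtype=float)
    b = np.array([d0, d0_dot, d0_ddot, d1, d1_dot, d1_ddot], dtype=float)
    return np.linalg.solve(A, b)

# Example: move 1.5 m laterally in 3 s, starting and ending at rest.
coeffs = quintic_coefficients(0.0, 0.0, 0.0, 1.5, 0.0, 0.0, tau=3.0)
t = np.linspace(0.0, 3.0, 7)
d = sum(c * t**k for k, c in enumerate(coeffs))
print(np.round(d, 3))   # smooth lateral profile rising from 0.0 to 1.5
```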
The parameters b s0 , b s1 , b s2 , b s3 , and b s4 of this quartic polynomial can be obtained by substituting the constraints, as shown in Equations (26)-(30). After determining f lateral (t) and f longitudinal (t), a series of smooth trajectories can be obtained between the starting position and all optional ending positions of the AV in a motion planning period. Optimal Trajectories Searching Based on the Cost Function In a motion planning period, optional trajectories are obtained after selecting an optional ending position. The performance of the trajectories is evaluated by defining the cost function; a series of costs and their weights are defined as shown in Table 1. In consideration of human comfort, we consider lateral and longitudinal jerking, and the jerking is used to evaluate the comfort level of trajectories when the AV is moving, i.e., the derivative of the AV's acceleration. In consideration of efficiency, we also penalize longer trajectories that take more time to assess. In addition, shorter trajectories can sometimes lead to higher mobility. In general, we expect the AV to follow the reference line, so we penalize deviations from the reference line. For safety, we introduce the reciprocal of the cumulative distance between the trajectory and the obstacles to avoid risky behaviors. The weights of the cost function can be adjusted according to different scenarios. Equal weights are used in this study, which means each indicator is considered to be equally important. As shown in Table 1, C Jd and C Js represent the jerk cost of lateral motion and the jerk cost of longitudinal motion of the AV, C offset represents the degree of offset of the AV from the reference line, C velocity represents the deviation between the planning speed and the desired speed, distance x−obstacle represents the minimum distance between the AV and the obstacles, and C collision represents the cost of a collision between the AV and the static or dynamic obstacles. In an automated vehicle system, these costs are calculated in the form of discrete sampling points. Then, all the integral operations are replaced by cumulative summation. Supposing that there are n sampling moments in the planning period [t0, t1], the cost function of a trajectory is expressed as Equation (31), where C total represents the total cost of an optional trajectory. Therefore, the trajectory with the lowest cost can be found by calculating the cost of all trajectories in turn. Final Path Selection After the optimal trajectory searching process is completed, all trajectories are sorted in order of cost from minimum to maximum, and pass through the trajectory checking process in this order. As shown in Figure 4, assuming that there are m sets of different combinations of lateral ending position d t1 , ending time t1, and ending speed in the sampling space of motion planning at moment t0, there are m optional ending positions in the sampling space of the AV in this motion planning period, which means that there are m optional trajectories. The costs of the m trajectories are calculated according to Equation (31). These trajectories are sorted in order of cost from minimum to maximum. From the perspective of safety and comfort, these trajectories need to be checked by the maximum velocity, maximum acceleration, maximum curvature, and collision to obtain a preliminary total of m′ trajectories satisfying those physical properties.
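A discrete stand-in for the total cost of Equation (31) is sketched below. The specific weights, the 0.5 m collision threshold and the exact set of terms are simplifying assumptions on our part, since Table 1 is not reproduced here; the sketch only mirrors the qualitative structure described above (jerk, offset, speed deviation, obstacle proximity and collision).

```python
import numpy as np

def trajectory_cost(d, s, v_desired, obstacle_dists, dt, weights=None):
    """Discrete cost of one candidate trajectory.

    d, s           : lateral / longitudinal positions at the n sampling moments
    v_desired      : desired longitudinal speed
    obstacle_dists : minimum distance to any obstacle at each sampling moment
    """
    d = np.asarray(d, float)
    s = np.asarray(s, float)
    obstacle_dists = np.asarray(obstacle_dists, float)
    w = weights or {"jerk_d": 1.0, "jerk_s": 1.0, "offset": 1.0,
                    "velocity": 1.0, "obstacle": 1.0, "collision": 1.0}
    jerk_d = np.diff(d, 3) / dt**3                     # discrete lateral jerk
    jerk_s = np.diff(s, 3) / dt**3                     # discrete longitudinal jerk
    v = np.diff(s) / dt                                # longitudinal speed
    return (w["jerk_d"] * np.sum(jerk_d**2) * dt
            + w["jerk_s"] * np.sum(jerk_s**2) * dt
            + w["offset"] * np.sum(d**2) * dt          # deviation from reference line
            + w["velocity"] * np.sum((v - v_desired)**2) * dt
            + w["obstacle"] / max(np.sum(obstacle_dists), 1e-6)
            + w["collision"] * float(np.min(obstacle_dists) < 0.5))
```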
Final Path Selection After the optimal trajectory searching process is completed, all trajectories are sorted in order of cost from minimum to maximum and pass through the trajectory checking process in this order. As shown in Figure 4, assuming that there are m sets of different combinations of lateral ending position d_t1, ending time t1, and ending speed ṡ_t1 in the sampling space of motion planning at moment t0, there are m optional ending positions in the sampling space of the AV in this motion planning period, which means that there are m optional trajectories. The cost of the m trajectories is calculated according to Equation (31), and the trajectories are sorted in order of cost from minimum to maximum. From the perspective of safety and comfort, these trajectories are then checked against the maximum velocity, maximum acceleration, maximum curvature, and collision to obtain a preliminary set of m′ trajectories satisfying those physical properties. The first trajectory that passes the check in this sorted set of trajectories is the final path selected for the current period, and the AV then moves along this path until the start of the next planning period; this check-and-select step is sketched below. When the next planning period starts, the AV's motion state is taken as the initial state of the AV for the next period and the above process is repeated; the AV obtains the complete path through continuous loop iteration.
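The sorted check-and-select step described above can be summarized by the following minimal sketch (Python; the limit values and helper names such as collides_with are assumptions for illustration, not the authors' interface).

```python
V_MAX, A_MAX, K_MAX = 20.0, 3.0, 0.2   # illustrative velocity/acceleration/curvature limits

def select_final_path(trajectories, obstacles):
    """Sort candidate trajectories by cost and return the first one that passes
    the velocity, acceleration, curvature, and collision checks."""
    for traj in sorted(trajectories, key=lambda t: t.cost):
        if max(traj.speeds) > V_MAX:
            continue
        if max(abs(a) for a in traj.accelerations) > A_MAX:
            continue
        if max(abs(k) for k in traj.curvatures) > K_MAX:
            continue
        if any(traj.collides_with(obs) for obs in obstacles):
            continue
        return traj          # first feasible trajectory in cost order
    return None              # no feasible trajectory in this planning period
```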
Improved Optimal Trajectory Searching Process Based on SAA After determining the sampling space, it is common practice to compute all the trajectories and to examine them [22,23,31]. The costs of all trajectories in a typical sampling space are shown in Figure 5, where different trajectories are distinguished by the lateral and longitudinal coordinates of the Frenet frame at the ending position of the AV during the motion planning period. Assuming that the initial position of the AV is located on the reference line, the costs of different trajectories without obstacles are shown in Figure 5a, where the x-axis represents the longitudinal positions of the ending points of the trajectories, the y-axis represents the transverse positions of the ending points of the trajectories, and the z-axis represents the costs. Figure 5b shows the contour projection based on Figure 5a. It can be observed that the costs show an obvious axisymmetric regularity about the reference line: high-cost trajectories end off the reference line, whereas low-cost trajectories tend to end on the reference line. This conclusion can also be verified in theory through the cost calculation function in Equation (31), because we penalize the degree of reference line offset and the lateral jerk. In addition, the other penalty terms also show the same symmetry regularity along the reference line. Based on this regularity, it can be speculated that when there are obstacles in the sampling space, the cost distribution of the trajectories close to the obstacles will show obvious fluctuations. To verify this speculation, two scenarios with static obstacles were set up. It is assumed that the starting position of the AV is located on the reference line and that the AV drives at 40 km/h along the reference line direction. The costs of different trajectories with a static obstacle placed 35 m along the reference line direction, directly in front of the AV, are shown in Figure 6a, where the x-axis indicates the longitudinal positions of the ending points of the trajectories, the y-axis indicates the lateral positions of the ending points of the trajectories, and the z-axis indicates the costs. Figure 6b shows the contour projection of the costs. It can be observed that when there is a static obstacle on the reference line, the costs of all trajectories along the reference line, i.e., with a lateral ending position of 0 m, increase significantly. Figure 6c,d show the costs of different trajectories when the static obstacle is placed 35 m in front of the AV in the adjacent left lane. In this case, the costs are similar to the previous static obstacle case; however, the fluctuation is not obvious for longer trajectories, because the arc of a long trajectory circumvents the obstacle closer to the starting point. Based on the above verification, it can be determined that there is a certain regularity in the costs of the trajectories: the costs show an axisymmetric regularity when the starting position of the AV is located on the reference line, and the existence of obstacles leads to an increase in the costs of the nearby trajectories.
Combining this symmetry regularity, we modify the cost function of the AV so that it better follows the driving habits of human drivers and has better safety and comfort. Due to traffic rules and driving habits, human drivers in China tend to pass or avoid other vehicles on the left side. Therefore, we prefer the trajectory on the left side to be less costly when there are feasible trajectories on both sides of the reference line. The cost is modified as shown in Equation (32), where C_final represents the modified cost; ε represents a very small non-zero positive number; sign represents the sign function; and d_t1 represents the distance between the ending position of the trajectory and the reference line. The costs of all trajectories in the sampling space of a planning period form a nonconvex space, and the global optimal solution is usually found by calculating the costs of all trajectories after sampling in order to avoid a locally optimal solution. We use the SAA to replace this exhaustive cost calculation, which effectively refines the sampling interval with a fixed number of samples. The process of finding the optimal trajectory by SAA is contained in the dashed box in Figure 7. The variables in a candidate solution are the lateral ending position of the trajectory (di), the planning time during the period (ti), and the longitudinal ending speed of the trajectory (tv). The optimal combination of variables is obtained by the searching process, which consists of an inner isothermal layer and an outer layer whose acceptance probability is adjusted as the temperature is gradually lowered. In addition, the corresponding trajectory is generated and checked immediately after each set of solutions to ensure comfort and safety; a new set of solutions is generated if the check does not pass. The computational effort of trajectory generation and cost calculation is greatly reduced by setting a reasonable iteration number and temperature coefficients. The key part of SAA is the Metropolis acceptance rule: we avoid falling into a local optimum by accepting poorer solutions with a certain probability, described mathematically in Equation (33), where P represents the probability of accepting the new solution; Cost_now and Cost_next represent the costs of the trajectories corresponding to the current solution and the new solution, respectively; k is the temperature dropping rate, with a value between 0 and 1; and T represents the system temperature.
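Since Equation (33) itself is not reproduced in this excerpt, the sketch below shows the standard Metropolis acceptance rule (Python) purely as an illustration of the acceptance step described above: improvements are always accepted, and worse candidates are accepted with a probability that decreases with the cost increase and with the temperature.

```python
import math
import random

def metropolis_accept(cost_now, cost_next, temperature):
    """Standard Metropolis rule: accept an improvement unconditionally, otherwise
    accept a worse solution with probability exp(-(cost_next - cost_now) / T)."""
    if cost_next <= cost_now:
        return True
    p_accept = math.exp(-(cost_next - cost_now) / temperature)
    return random.random() < p_accept
```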
Numerical Experiments In this section, the performance of the proposed motion planning method is analyzed to demonstrate its effectiveness in several typical traffic scenarios on a three-lane road, including maintaining speed, lane changing, and car following. It was assumed that the decisions of the AV were provided by the behavior planning layer and that all traffic participants were traveling in the same direction. In these experiments, the lateral and longitudinal motions of the AV were given by the polynomials described in Equations (11), (18), and (25). By sampling the road environment ahead, a series of optional motion trajectories consisting of different lateral and longitudinal motions were obtained. The traversal method and the SAA-based method proposed in this study were then applied to search for the optimal trajectory among these optional trajectories. In each planning period, the AV followed the optimal path that had been searched for and checked. The state of the AV at the end of a planning period was inherited as the initial state of the next period, and the global driving performance of the AV was obtained by cycling through this process. We compared the traversal method [22,23,31] and the improved method based on SAA under the same environment and parameter settings, and analyzed the computational effort, the motion planning performance, and the driving comfort. Scenario and Parameter Settings The AV was placed in a three-lane urban road scenario.
The whole road was connected from 10 original waypoints according to the cubic spline curve interpolation method described in this paper. The rough length of the road was 500 m, with curves within the first 200 m, and the width of each lane was 3.6 m. Three vehicles driven by human drivers at different speeds drove around the AV at the beginning of the simulation, and the parameters of all vehicles were set as shown in Table 2. For convenience, we named the method that calculates the trajectory costs by exhaustive traversal Method A, and the method based on SAA Method B. To compare the two methods, the sampling space needed to be set consistently, and the parameters of the sampling space for the AV in each motion planning period were set as shown in Table 3. For Method A, the number of generated trajectories and costs that need to be calculated increases rapidly with the number of sampling points, which could not satisfy the requirement of trajectory computation; therefore, we limited the number of lateral sampling points in Method A to improve the calculation performance. For Method B, the parameters of SAA were set as shown in Table 4.
Table 4. Parameters of SAA.
Initial temperature (°C): 100
Length of Markov chain: 5
Rate of temperature decrease: 0.9
Temperature at which the algorithm stops (°C): 3
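The following sketch (Python) shows how the parameters in Table 4 could drive the outer cooling loop and the inner isothermal (Markov-chain) loop of the SAA search; the neighbor-generation and cost functions are placeholders, not the authors' implementation.

```python
import math
import random

def saa_search(initial_solution, cost_fn, neighbor_fn,
               t_init=100.0, t_stop=3.0, chain_length=5, cooling_rate=0.9):
    """Simulated annealing with the settings of Table 4: start at temperature 100,
    stop at 3, evaluate 5 candidates per temperature, multiply the temperature by
    0.9 after each outer step."""
    current = best = initial_solution
    cost_now = cost_best = cost_fn(current)
    temperature = t_init
    while temperature > t_stop:                    # outer layer: constant cooling
        for _ in range(chain_length):              # inner isothermal layer
            candidate = neighbor_fn(current)       # perturb (d_i, t_i, ending speed)
            cost_next = cost_fn(candidate)
            # Metropolis rule (see the earlier sketch): accept improvements, or
            # worse solutions with probability exp(-(cost_next - cost_now) / T).
            if (cost_next <= cost_now
                    or random.random() < math.exp(-(cost_next - cost_now) / temperature)):
                current, cost_now = candidate, cost_next
                if cost_now < cost_best:
                    best, cost_best = current, cost_now
        temperature *= cooling_rate
    return best, cost_best
```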
Performance of the AV with Methods A and B All vehicles departed at the same time in the preset dynamic scenario. The AV followed the maintain-speed command for the first 15 s, received the command to change to the left lane after 15 s, and performed the following command after the longitudinal distance to the left obstructing vehicle became less than 25 m. The trajectory tracking module of the AV was considered to be performing under ideal operating conditions. We tested the methods on a computer with an AMD Ryzen 5 3600 CPU (6-core, 3.9 GHz) and 16 GB RAM. The two experiments were run on the same computer with the same configuration; the computing elapsed times were long and were affected by plotting and data storage, and the program can be simplified to meet real-time requirements in practical applications. The performance of the AV when applying Methods A and B is shown in Figures 8 and 9, respectively. The blue rectangle represents the AV, and the black rectangles represent the obstructing vehicles driven by human drivers. The green and red trajectory lines represent the optional and non-optional trajectories planned by the AV at the current moment, respectively. The series of blue trajectory points extending in front of the AV represents its selected optimal trajectory at the current moment, and similarly, the black trajectory points in front of the obstructing vehicles represent the predicted points of their motion trajectories in the future period. The traces left by each vehicle represent its real trajectory in the past. As shown in Figure 8f, when the distance between the AV and the obstructing vehicle in the left lane was less than 25 m, the decision command was changed to follow the vehicle, and the AV started to adjust its speed to follow the vehicle ahead. As the searching process of Method A traversed all the sampled points, the AV calculated the costs of all the trajectories at each moment; this is evidenced by the red and green trajectories shown in Figure 8. Figure 9 shows the performance of the AV when applying the motion planning Method B. The whole calculation process took 1538.82 s, and all the settings in this test were the same as those for the test of Method A, except for the different searching method for optimal trajectories. The motion planning performance of the AV under each decision command when applying Method B was basically the same as that when applying Method A; both methods completely avoided the moving obstructing vehicles. It is worth noting that the trajectories searched by Method B, shown in Figure 9, are significantly sparser at each moment compared with those of Method A, which means that Method B may achieve the same motion planning results as Method A with less computational effort. The driving path of the AV in the two numerical experiments over time and its two-dimensional projection are shown in Figure 10. The reference line is always defined as the centerline of the middle lane, and the two offsets of the AV relative to the reference line correspond to the overtaking and lane changing processes in the experiments, respectively. The 3D paths generated by Method A and Method B almost overlap, which indicates that the optimal trajectories selected by the AV at each moment are approximately the same when applying Method A and Method B. In addition to safety and efficiency, the comfort of the AV needs to be ensured. We extracted the velocity, acceleration, and jerk of the AV in the two experiments to visualize the comfort. Figure 11a,b shows the velocity and acceleration changing process of the AV in the two experiments, respectively; the velocity change based on Method A and Method B is almost the same. The averages of the absolute values of the velocity error under Method A and Method B were 0.2580 m/s and 0.2450 m/s, respectively.
Method B, based on SAA, showed better velocity performance due to denser sampling points. The large velocity deviations of both methods occurred during the lane changing process and at the beginning of the following process, and the large changes in acceleration and jerk of the AV also occurred in these two processes. The smaller the rate of change of acceleration, namely the jerk, the greater the comfort (Figure 11c). Comparison of the Efficiency of Methods A and B Because motion planning is only one part of the process of automating an AV, it is essential to conserve the computational effort of the on-board computer. After determining that the two methods can produce nearly consistent motion planning performance for an AV in dynamic environments, we focused on the differences between the two methods in terms of computational efficiency. Figure 12a,b shows the process of searching for the optimal trajectories of the AV in the first planning period at the beginning of the simulation when applying Method A and Method B, respectively. Method A traversed the sampling space at a fixed interval and computed the costs of 561 trajectories at the moment t = 0, taking 15.38 s. As shown by the searching process in Figure 12a, Method A searched the sampling space in more detail, but at the same time searched some unnecessary and costly trajectories. Method B computed the costs of 165 trajectories at the moment t = 0, taking 5.87 s. As shown in Figure 12b, Method B focused more on searching for less costly trajectories under the effect of SAA, and the AV still performed well in dynamic traffic environments. Method B saved 62% of the planning time compared with Method A in this planning period, which shows that Method B improves the efficiency of the motion planning at the moment t = 0.
Figure 12. The process of searching for optimal trajectories: (a) the process of searching for optimal trajectories in the first planning period at t = 0 with Method A; (b) the process of searching for optimal trajectories in the first planning period at t = 0 with Method B. In addition, the difference between the searching times of Method A and Method B was not constant. Figure 13 shows the comparison of the searching time between the two experiments within 25 s. It can be observed that the searching time of Method A was roughly divided into three intervals: the speed maintaining process from 0 to 15 s, the lane changing process from 15 to 20 s, and the following process from 20 to 25 s. This was because the change in the decision commands of the AV affected the sampling space. The speed maintaining decision command corresponded to the largest sampling space, covering three lanes, with the longest searching time. When the AV received the lane changing command, the sampling space was reduced to half of the original size, and the searching time was also reduced by nearly half. The sampling space of the AV was the smallest when it was performing the following command [35]. It is worth noting that the searching time of Method B was relatively stable and short, because the SAA-based search does not depend on the sampling space settings but rather on the temperature settings to determine the searching process. The simulation process took 5168.84 s when Method A was applied and 1538.82 s when Method B was applied; Method B accelerated the whole motion planning process by 70.23% compared with Method A. Conclusions Combined with the analysis of the experimental results, the following conclusions can be drawn. 1. We propose a motion planning method applicable to AVs in dynamic traffic scenarios.
The trajectories are solved by polynomials in the Frenet frame, and the costs of the optional trajectories are quantified through a cost function that includes terms for safety, comfort, efficiency, and so on. An improved optimal trajectory searching method applying SAA is proposed. The experimental results in the simulated dynamic traffic scenarios show that the method proposed in this paper is feasible and efficient. 2. The process of searching for the least costly trajectory is visualized, and the axisymmetric regularity of the costs about the reference line is summarized. Based on this, the cost function is modified to make the performance of the AV more consistent with the driving behaviors of human drivers. 3. Compared with the optimal trajectory searching method that traverses the sampling space, the method proposed in this paper saves 70.23% of the searching time without affecting the performance of the AV. In addition, the searching time of the proposed method shows good robustness to variations in the sampling space, which is conducive to improving the motion planning adaptability of AVs in a variety of road scenarios. In this study, we assumed that the decisions of the AV were provided by the behavior planning layer. However, decision making is another challenging problem for AVs in urban scenarios. In the future, we will focus on human-like decision making and test the performance of the method in a more realistic environment.
12,678.4
2022-01-21T00:00:00.000
[ "Engineering", "Computer Science" ]
Secondary range symmetric matrices The concept of secondary range symmetric matrices is introduced here. Some characterizations as well as equivalent conditions for a range symmetric matrix to be a secondary range symmetric matrix are given. The ideas of range symmetric matrices, range symmetric matrices over Minkowski space, and secondary range symmetric matrices are different, and this is illustrated with the help of suitable examples. Finally, a necessary and sufficient condition for a secondary range symmetric matrix to have a secondary generalized inverse is obtained. Introduction The theory of symmetric matrices as well as range symmetric matrices is well known in the literature. A matrix is said to be EP (or range symmetric) whenever the range space of the matrix is equal to the range space of its conjugate transpose. In other words, a matrix is EP whenever its null space is the same as the null space of its conjugate transpose. Ballantine 1 studied when the product of two EP matrices of specific rank is again an EP matrix. In Ref. 2, new characterizations of EP matrices are given; also, weighted EP matrices are defined and characterized. Meenakshi 3 extended the concept of range symmetric matrices to Minkowski space. In 2014, the same author defined range symmetric matrices in an indefinite inner product space. 4 In Ref. 5, the concept of EP matrices is extended to bounded operators with closed range on a Hilbert space. For more characterizations of EP and hypo-EP operators one can refer to Refs. 6,7. For an n × n matrix A, the secondary transpose is related to the transpose of the matrix by the relation A^S = V A^T V. Here, the matrix V has nonzero unitary entries only on the secondary diagonal. For a matrix A with complex entries, the secondary transpose is replaced by the secondary conjugate transpose A^θ, given by A^θ = V A* V. The concept of the secondary conjugate transpose has been gaining importance in recent years. Shenoy 8 defined the Outer-Theta inverse by combining the outer inverse and the secondary transpose of a matrix A. The Drazin-Theta matrix A^(D,θ) 9 is a new class of generalized inverse introduced for a square matrix of index m. One can refer to Ref. 10 for the extension of these inverses to rectangular matrices. R. Vijayakumar 11 introduced the concept of the secondary generalized inverse with the help of the secondary transpose of a matrix. This concept is similar to the Moore-Penrose inverse, but unlike the Moore-Penrose inverse, the existence of the s-g inverse is not assured in general. In Ref. 12, necessary and sufficient conditions for the existence of the s-g inverse are given; in the same article, a few characterizations and a determinantal formula for the s-g inverse are also discussed. In 2009, Krishnamoorthy and Vijayakumar 13 defined the concept of S-normal matrices with the help of the secondary transpose for a class of complex square matrices. Jayashree 14 defined secondary k-range symmetric fuzzy matrices and their relation with S-range symmetric fuzzy matrices, k-range symmetric fuzzy matrices, and EP matrices. In this article, we define secondary range symmetric matrices. Several equivalent conditions for a matrix to be secondary range symmetric are obtained here. Also, the existence of the secondary generalized inverse of a secondary range symmetric matrix is discussed. Below are some useful definitions and results related to the secondary conjugate transpose.
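For concreteness, the exchange matrix V and the two secondary transposes mentioned above can be written as follows (a LaTeX rendering of the definitions in the text; the entrywise formula is added here for illustration).

```latex
% V is the exchange matrix: ones on the secondary diagonal, zeros elsewhere.
(V)_{ij} = \delta_{i,\,n+1-j}, \qquad
A^{S} = V A^{T} V, \qquad A^{\theta} = V A^{*} V,
\qquad\text{so entrywise}\quad
(A^{S})_{ij} = a_{\,n+1-j,\;n+1-i},
% i.e. the entries of A are reflected through the secondary diagonal.
```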
Preliminaries The set of all n × n matrices with complex entries is denoted by ℂ^(n×n) (ℝ^(n×n) for matrices with real entries). Also, N(A) represents the null space of the matrix A; the column space and rank of A are denoted in the usual way, the rank being written ρ(A) below. The secondary transpose (or secondary conjugate transpose A^θ in the case of complex matrices) of A is denoted by A^S. Definition 2 (Ref. 16) Let A ∈ ℂ^(n×n). Then the conjugate secondary transpose of A is denoted by A^θ. The secondary transpose of a matrix is obtained by reflecting the entries through the secondary diagonal. Definition 3 (Ref. 16) A matrix is said to be secondary normal (S-normal) if AA^S = A^S A. Definition 4 (Ref. 12) A^†_S is said to be a secondary generalized inverse of A if the defining equations hold and AA^†_S and A^†_S A are S-symmetric. Note that the matrix A is said to be secondary symmetric, or S-symmetric, if and only if A^S = A. Theorem 0.1 (Ref. 12) Given an m × n matrix A, the following statements are equivalent. (1) A has the S-cancellation property (i.e., A^S AX = 0 implies AX = 0, and YAA^S = 0 implies YA = 0). Definition 5 (Ref. 17) A matrix A ∈ ℂ^(n×n) is said to be EP (or range symmetric). Meenakshi 3 has defined EP matrices in Minkowski space and has given equivalent conditions for a matrix to be range symmetric there. Let the components of a complex vector u in C^n be indexed from 0 to n−1, and denote the Minkowski metric tensor by G. The Minkowski inner product on C^n is defined by (u, v) = <u, Gv>, where <·,·> is the Hilbert inner product. A space with the Minkowski inner product is called a Minkowski space. The idea of Minkowski space arose when Xing 18 tried to study the optical devices described by the Mueller matrix, which may not have a singular value decomposition; the problem was solved by Renardy 19 by defining the Minkowski space and obtaining the singular value decomposition of the Mueller matrix over the Minkowski space. Definition 6 (Ref. 3) A matrix A ∈ ℂ^(n×n) is said to be range symmetric in Minkowski space if and only if the corresponding null-space condition holds, where A^+ represents the Minkowski adjoint, given by A^+ = G A* G, and G is the Minkowski metric tensor. Results In this section, we define secondary range symmetric matrices, in analogy with range symmetric matrices. Some equivalent conditions for a matrix to be secondary range symmetric are also given here. Definition 7 X is a secondary right (left) normalized g-inverse of A, where A ∈ ℝ^(n×n), if AXA = A, XAX = X, and AX is S-symmetric (XA is S-symmetric). Example 1: X is both a secondary right normalized g-inverse and a secondary left normalized g-inverse of A; the conditions AXA = A and XAX = X can be easily verified. Here X is also a left normalized g-inverse. Definition 8 Consider A ∈ ℝ^(m×n). The S-transpose of A is defined analogously, by reflecting the entries through the secondary diagonal. The matrix considered in the accompanying example is not range symmetric, but it is secondary range symmetric. Theorem 0.2 Let A ∈ ℝ^(n×n). Then the following conditions are equivalent. (1) A is secondary range symmetric; here B and C denote some nonsingular matrices appearing in the remaining conditions. This proves the equivalence of (1) and (4).
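The distinction between range symmetry and secondary range symmetry can be illustrated with a small example (constructed here for illustration; it is not one of the article's original examples).

```latex
A = \begin{pmatrix} 0 & 1\\ 0 & 0 \end{pmatrix}, \qquad
A^{T} = \begin{pmatrix} 0 & 0\\ 1 & 0 \end{pmatrix}, \qquad
A^{S} = V A^{T} V = \begin{pmatrix} 0 & 1\\ 0 & 0 \end{pmatrix} = A .
% Hence N(A^S) = N(A), so A is secondary range symmetric (indeed S-symmetric),
% while N(A) = span{e_1} differs from N(A^T) = span{e_2}, so A is not range symmetric.
```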
A^S (by the property of the secondary transpose). From the following example, it is clear that EP matrices in Minkowski space, as defined by Meenakshi, 3 and secondary range symmetric matrices are two different concepts. Note that here the secondary transpose A^S of the matrix A coincides with A. Clearly A is secondary range symmetric (i.e., N(A) = N(A^S)); here, A is secondary normal as well. However, the matrix is not range symmetric in Minkowski space. In this example, the matrix A is secondary range symmetric but not range symmetric in Minkowski space. In the following example, B is range symmetric in Minkowski space, but it is not secondary range symmetric: clearly B is not secondary range symmetric, while B is range symmetric in Minkowski space. These examples show that secondary range symmetric matrices and range symmetric matrices in Minkowski space are two different classes of matrices, even though the proof techniques adopted here are similar. A necessary condition for a matrix to be S-EP (secondary range symmetric) is proved next; thus A is secondary range symmetric. A relation connecting range symmetric and secondary range symmetric matrices is given below. Theorem 0.4 Let A ∈ ℝ^(n×n). Then any two of the following conditions imply the third one. Hence A is range symmetric; thus (1) holds. For any square complex matrix A, there exist unique S-symmetric matrices M and N in terms of which A can be expressed. In the following theorem, an equivalent condition for a matrix A to be secondary range symmetric is obtained in terms of M, the S-symmetric part of A. Theorem 0.5 For A ∈ ℝ^(n×n), A is secondary range symmetric if and only if the stated condition on M holds. Since both M and N are S-symmetric, they are secondary range symmetric; thus A is secondary range symmetric. We shall now discuss the existence of the secondary generalized inverse of a secondary range symmetric matrix. First, we shall prove certain lemmas to simplify the proof of the main result. Lemma 1 For an m × n matrix A. Theorem 0.6 For an n × n matrix A, the following are equivalent: (1) A is secondary range symmetric and ρ(A) = ρ(A²); (2) A^†_S exists and A^†_S is secondary range symmetric; (3) there exists a symmetric idempotent matrix E such that AE = EA. Since A is secondary range symmetric, by using Theorem 0.2 we have a corresponding rank condition; hence, by Theorem 0.1, it follows that A^†_S exists. By Lemma 1 and Theorem 0.2, and by the equivalence of conditions (1) and (5) of Theorem 0.2, A^†_S is secondary range symmetric. Since E is S-symmetric and idempotent, AA^r is S-symmetric; hence, by Definition 7, A^n exists and AA^n = EE^†_S = E, which implies EA = A. By hypothesis AE = EA = A; therefore AA^n = A^n A = E. Thus both AA^n and A^n A are S-symmetric. By Definition 2, A^†_S exists and E = AA^†_S = A^†_S A. By taking the secondary transpose of AE, (1) holds. Hence the theorem. Corollary 1 Let A be an n × n secondary range symmetric matrix. Then A^†_S exists if and only if ρ(A) = ρ(A²). Proof Since A is secondary range symmetric and ρ(A) = ρ(A²), the existence of A^†_S follows from the equivalence of (1) and (2) of Theorem 0.6. Conversely, if A is secondary range symmetric and A^†_S exists, then by the equivalence of (2) and (3) of Theorem 0.6
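Continuing the illustrative example introduced earlier, the rank condition in Corollary 1 can be checked directly (again a constructed illustration, not one of the article's examples).

```latex
\text{For } A = \begin{pmatrix} 0 & 1\\ 0 & 0 \end{pmatrix}:\qquad
\rho(A) = 1, \qquad A^{2} = 0 \ \Rightarrow\ \rho(A^{2}) = 0 \neq \rho(A).
% A is secondary range symmetric, yet rho(A) != rho(A^2), so by Corollary 1 its
% secondary generalized inverse A^dagger_S does not exist -- consistent with the
% remark that, unlike the Moore-Penrose inverse, the s-g inverse need not exist.
```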
Conclusion In this article we defined and characterized the concept of secondary range symmetric matrices. The Moore-Penrose inverse exists for any matrix, but in the case of the secondary generalized inverse this is not true. Here, we obtained a necessary condition for a secondary range symmetric matrix to have an s-g inverse; in fact, this condition holds true for the existence of the secondary generalized inverse of any matrix. As an extension of this work, sums of range symmetric matrices are discussed in Ref. 20. One can think of defining weighted secondary EP matrices and their characterizations. Also, extending secondary range symmetric matrices to indefinite inner product spaces will open up a new area of research. Is the work clearly and accurately presented and does it cite the current literature? Yes. Is the study design appropriate and is the work technically sound? Yes. Are sufficient details of methods and analysis provided to allow replication by others? Yes. If applicable, is the statistical analysis and its interpretation appropriate? Not applicable. Are all the source data underlying the results available to ensure full reproducibility? Yes. 1. The article requires a sufficient number of illustrations to explain the theory and definitions, which are currently missing. 2. A new preliminary section should be included, containing all the necessary basic concepts to provide a strong foundation for the reader. 3. Definition 7 does not adequately explain the entries of matrix A and needs to be revised for clarity. 4. Definitions 7 and 8 should be illustrated through examples to enhance comprehension. 5. The proofs, especially the proof of Theorem 1, are not written in an easily understandable manner and should be simplified for better readability. 6. A conclusion and discussion section must be included and well explained to summarize the findings and implications of the study. 7. The definition of Minkowski's space should be included, accompanied by examples to illustrate its application. 8. The examples given at the beginning of page 5 are not properly explained and need to be elaborated for clarity. 9. By addressing these points, the article can be significantly improved and provide a clearer, more comprehensive understanding of range symmetric matrices. Response: In the revised article, this is elaborated and an example [example 2, Results] has been added for better understanding of the concept. Comment 2: The article requires a sufficient number of illustrations to explain the theory and definitions, which are currently missing. Response: As per the suggestion, the article has been modified to provide sufficient examples. On page 3, the definitions are explained with examples [examples 1, 2 and 3]. Also, a clearer explanation is given for the example after the proof of Theorem 0.1. Comment 3: A new preliminary section should be included, containing all the necessary basic concepts to provide a strong foundation for the reader. Response: A preliminary section [Section 2] is included with all necessary basic concepts.
Comment 4: Definition 7 does not adequately explain the entries of matrix A and needs to be revised for clarity. Response: Definition 7 has been revised. The entries of the matrix are from a real field; the explanation has been provided in the text. Comment 6: The proofs, especially the proof of Theorem 1, are not written in an easily understandable manner and should be simplified for better readability. Response: Explanations of the steps of the proof of Theorem 1 have been provided within parentheses. Is the work clearly and accurately presented and does it cite the current literature? No. Is the study design appropriate and is the work technically sound? Yes. Are sufficient details of methods and analysis provided to allow replication by others? Yes. If applicable, is the statistical analysis and its interpretation appropriate? Not applicable. Are all the source data underlying the results available to ensure full reproducibility? Yes. Are the conclusions drawn adequately supported by the results? Yes. Competing Interests: No competing interests were disclosed. Reviewer Expertise: Functional Analysis. I confirm that I have read this submission and believe that I have an appropriate level of expertise to confirm that it is of an acceptable scientific standard. Pankaj Kumar Manjhi, Department of Mathematics, Vinoba Bhave University, Hazaribag, Jharkhand, India. I am very delighted to see the scholarly interest in the study of range symmetric matrices. Here are my suggestions for improving the article. Comment 1: Some definitions (such as Definition 1) are not clearly written and need to be clarified for better understanding. Response: Definition 1 (of the secondary transpose of a rectangular matrix) has been quoted from: A. Lee, Secondary symmetric, skew symmetric and orthogonal matrices, Period. Math. Hungar. 7(1), pp. 63-70, 1976. Are all the source data underlying the results available to ensure full reproducibility? Partly. Are the conclusions drawn adequately supported by the results? Partly. Competing Interests: No competing interests were disclosed. Reviewer Expertise: Combinatorial matrices, Discrete Mathematics, computer science. I confirm that I have read this submission and believe that I have an appropriate level of expertise to confirm that it is of an acceptable scientific standard; however, I have significant reservations, as outlined above. Comment 5: Definitions 7 and 8 should be illustrated through examples to enhance comprehension. Response: As per the suggestion, Definitions 7 and 8 are explained with examples.
3,781.4
2024-02-19T00:00:00.000
[ "Mathematics" ]
Endodontic flare-up incidence in pulp necrosis in Universitas Airlangga Dental Hospital (RSKGMP Universitas Airlangga) Background: Dental caries occurs as a result of demineralization of the hard tissues of the teeth followed by the destruction of the organic matter, resulting in bacterial invasion and death of the pulp, which can lead to pulp necrosis. One of the treatments for pulp necrosis is endodontic treatment. Endodontic treatment includes root canal preparation techniques, root canal irrigation materials, root canal dressing materials, and root canal obturation techniques, and flare-ups can occur during endodontic treatment. An endodontic flare-up is pain or swelling after endodontic treatment that occurs within a relatively short time. Purpose: To describe the incidence of endodontic flare-ups in pulp necrosis at RSKGMP Universitas Airlangga. Methods: A descriptive observational study was conducted on patients who had endodontic treatment at the UPF Dental Conservation, Dental and Oral Hospital, Faculty of Dental Medicine, Airlangga University in 2018, 2019, and 2020. Results: A total of 28 of 93 patients (30.1%) experienced pain or flare-ups after root canal treatment. Conclusion: From the results of this study, it can be concluded that there is still an incidence of endodontic flare-ups in pulp necrosis at RSKGMP Universitas Airlangga, namely 30.1%. INTRODUCTION Dental caries occurs as a result of demineralization of the hard tissues of the teeth followed by the destruction of their organic matter, resulting in bacterial invasion and pulp death. 1 Pulp necrosis is a root canal disease caused by the spread of bacterial infection and trauma due to untreated pulpitis, causing pulp death. 2 One of the treatments for pulp necrosis is root canal treatment. Endodontic treatment is a procedure used to treat inflamed and necrotic pulp tissue due to caries or trauma to the teeth. The success of endodontic treatment can reach up to 97%. 3 However, in some cases, endodontic treatment can also cause pain and swelling after treatment, which is known as a flare-up. 4 A flare-up can be defined as pain or swelling caused by endodontic treatment that occurs within a relatively short time, within 24 hours. 5 Flare-ups can be caused by microbial factors as well as mechanical and chemical mediators. 5 Flare-ups can occur due to several factors related to root canal preparation techniques, root canal irrigation materials, root canal dressing materials, and root canal obturation techniques. The incidence of flare-ups is generally still relatively low when compared to other cases. 6 According to the research of Onay et al., the incidence of flare-ups was 3.2%. 7 A retrospective study conducted by Nair et al. showed that 2% of patients had flare-ups. 8 A prospective study conducted by Aoun et al. showed that 1.9% of 423 patients had flare-ups. 9 Based on these data, there is still pain or flare-ups after endodontic treatment, which should not occur. This study was conducted to obtain a database regarding the incidence of flare-ups at RSKGMP Universitas Airlangga. Respondents who had undergone root canal treatment at RSKGMP Universitas Airlangga were used to assess the incidence of flare-ups.
The criteria for the patient sample were male and female patients, aged 20-60 years, who had undergone root canal treatment with a diagnosis of pulp necrosis, and who were willing to sign the informed consent and fill out a questionnaire. Interviews with the patients were then conducted online. MATERIALS AND METHODS After the patients filled out the questionnaire, the interview data were obtained. The assessment and calculation of scores and the results of direct interviews with the patients were then carried out, followed by data processing using descriptive statistical tests. After that, the research results were presented in the form of percentages based on root canal preparation techniques, root canal irrigation materials, root canal dressings, and root canal obturation techniques. RESULTS The data in Table 1 show that 28 respondents (30.1%) experienced pain after root canal treatment (flare-up) with a diagnosis of pulp necrosis within 24 hours after treatment, while 65 respondents (69.9%) did not experience pain after root canal treatment (flare-up) with a diagnosis of pulp necrosis within 24 hours after treatment. Table 2 also shows that of the 28 patients who experienced flare-ups, 9 patients (32.14%) were male and 19 patients (67.86%) were female. Then, 18 patients (64.28%) were aged 20-30 years, 3 patients (10.72%) were aged 31-40 years, 4 patients (14.28%) were aged 41-50 years, and 3 patients (10.72%) were aged 51-60 years. It is also known that 14 patients (50%) who experienced flare-ups came to the UPF Dental Conservation RSKGMP Universitas Airlangga for one visit and 14 patients (50%) for multiple visits. From the results in Table 3, it can be seen that flare-ups can occur due to the root canal preparation technique used during root canal treatment for pulp necrosis: all 28 patients (100%) who experienced flare-ups were treated with the Crown Down Pressure-less preparation technique. From Table 4, it is known that 2 patients experienced flare-ups due to this irrigation material, both of them (100%) female; 1 patient (50%) was aged 20-30 years and 1 patient (50%) was aged 51-60 years; 1 patient (50%) came for one visit and 1 patient (50%) came for multiple visits. From Table 5, it is known that 15 patients experienced flare-ups due to this irrigation material, with a distribution of 4 patients (26.67%) male and 11 patients (73.33%) female; 11 patients (73.33%) were aged 20-30 years, 3 patients (20%) were aged 41-50 years, and 1 patient (6.67%) was aged 51-60 years; 7 patients (46.67%) came for one visit and 8 patients (53.33%) came for multiple visits. From the results in Table 6, it is known that 9 patients experienced flare-ups due to this irrigation material, with a distribution of 4 patients (44.44%) male and 5 patients (55.56%) female; 7 patients (77.78%) were aged 20-30 years and 2 patients (22.22%) were aged 31-40 years; 4 patients (44.44%) came for one visit and 5 patients (55.56%) came for multiple visits. From the results in Table 7, it is known that 2 patients experienced flare-ups due to this irrigation material, both of them (100%) female; 1 patient (50%) was aged 20-30 years and 1 patient (50%) was aged 31-40 years.
Then, both patients (100%) came with one visit. From the results in Table 9, it can be seen that flare-ups can occur due to the single cone obturation technique. Flare-ups can also occur due to the warm vertical compaction obturation technique when performing root canal treatment for pulp necrosis (Table 10): 1 patient (100%) experienced a flare-up due to this obturation technique, the patient being female, aged 31-40 years, and coming with one visit. DISCUSSION In general, flare-ups are pain or swelling after root canal treatment that occurs within a relatively short time, 24 hours after treatment. 5 From the results of this study, it was found that flare-ups most often occurred in female patients, aged 20-30 years, with the same number of visits at one visit and multiple visits. This is in line with research by El Mubarak et al. (2010), which stated that flare-ups were more common among patients aged between 18 and 33 years. 10 Women tend to experience flare-ups more often than men; this can be because women are more susceptible to psychosomatic disturbances of their emotions, such as being easily anxious, afraid, and stressed. It can also be caused by changes in hormone levels in women during menstruation, hormone therapy, and the use of contraception, where changes in serotonin and noradrenaline levels can cause a decrease in the pain threshold. 6,8 The comparison of the number of patient visits to the clinic, either one visit or multiple visits, cannot be a benchmark for the occurrence of flare-ups. According to research by Nair et al. (2017) and Sevekar and Gowda (2017), there was no significant difference between one visit and multiple visits in causing flare-ups. 8,11 According to Shahi et al. (2016), pain after root canal treatment (flare-up) can be caused by several factors. 12 The results of the research data indicate that flare-ups can occur due to the root canal preparation technique using the Crown Down Pressure-less technique. This can be caused by operator error in determining the working length, which can lead to over-instrumentation and extrusion of debris into the periapical tissue. 10 It is also known that flare-ups can be caused by the irrigation material used when performing root canal treatment. With NaOCl irrigation, the higher the concentration of NaOCl, the higher the level of toxicity. 13 NaOCl is also less effective in eliminating persistent and toxic bacteria, so its use can be combined with other irrigation materials. 14 NaOCl can also cause serious tissue damage if the solution is extruded into the periapical tissue during application; if NaOCl extrudes into the periapical tissue, it can cause pain, swelling, and redness. 13,15 EDTA only dissolves dentin, so it will cause peritubular and intratubular dentin erosion, which makes the dentin structure irregular and reduces the density and adaptation between sealer and dentin. EDTA causes dentin erosion through the demineralization process and excessive opening of dentinal tubules, which can harm the bonding and sealing process. 16,17 The CHX irrigation material also has drawbacks: it cannot dissolve the remnants of necrotic tissue and is toxic if CHX is extruded outside the root canal, which can cause inflammation. 18 It is also known that flare-ups can be caused by the root canal dressing material, namely calcium hydroxide (Ca(OH)2).
This could be because Ca(OH)2 has limited effectiveness in eliminating endodontic pathogenic bacteria such as E. faecalis and Candida species, which can cause various incidences of recurrent infection and flare-ups. 19,20 A flare-up can also occur due to residual Ca(OH)2 remaining in the apical third of the root canal; before performing the permanent obturation, Ca(OH)2 must therefore be removed completely from the root canal. Otherwise, the residual Ca(OH)2 in the root canal will react chemically with the sealer material, which can reduce the flowability and working time of the sealer, leading to a poor prognosis of the root canal treatment, with possible periapical extrusion and failure of the root canal treatment that can cause pain (flare-up). 21 Flare-ups can also be caused by the root canal obturation technique. In this study, it was found that flare-ups can occur due to root canal obturation using the Single Cone and Warm Vertical Compaction techniques. Flare-ups due to the Single Cone obturation technique can be caused because this technique does not use compaction forces when filling the root canal, so the sealer material does not fill the last apical millimeter of the root canal, which can cause periapical leakage and, in turn, flare-ups. 22 The disadvantage of the Warm Vertical Compaction technique is that there is a risk of vertical root fracture due to excessive force and pressure when the operator fills the root canal, so that the gutta-percha extrudes apically and can cause pain after root canal treatment. 23 In conclusion, of the 93 patients who underwent root canal treatment at RSKGMP Universitas Airlangga with a diagnosis of pulp necrosis, there were 28 patients (30.1%) who experienced flare-ups.
2,755.2
2022-06-30T00:00:00.000
[ "Medicine", "Materials Science" ]
A Defect Localization Approach Based on Improved Areal Coordinates and Machine Learning Defects are usually generated when structural materials are subjected to external loads. Elucidating the position distribution of defects using the acoustic emission (AE) technique provides the basis for investigating the failure mechanisms and prevention of materials and for estimating the location of potentially dangerous sources. However, the location accuracy is heavily affected by both the limitation of the localization area and the reliance on a premeasured wave velocity. Here, we propose a novel AE source localization approach based on generalized areal coordinates and a machine learning algorithmic model. A total of 14641 AE source location simulation cases are carried out to validate the proposed method. The simulation results indicate that the AE sources can be effectively located even under various measurement error conditions. Moreover, the feasibility of the proposed approach is experimentally verified on the AE source localization system. The experimental results show that a mean localization error of 3.64 mm and a standard deviation of 2.61 mm are obtained, representing improvements of 67.55% and 75.46% over the traditional method. The geometric location method is simple and easy to implement, but it depends on several parameters, such as sensor coordinates, signal propagation time, and wave speed. The more parameters required, the more errors may be introduced. Also, this method often has multiple solutions [18]. The sensor array location method can locate the damage source according to the sensor array layout and needs less information to calculate the AE source location. However, this method must meet certain positioning prerequisites (for instance, the method in [8] requires the acoustic emission source to be farther away than the distance between the two sensors in the array) or has a larger positioning error in some monitoring ranges. The point search localization method has a relatively intuitive positioning effect, but it requires a large amount of preexperimental work and does not fully consider the influence of the dispersion effect and the anisotropy of the AE signal speed on the positioning error [19]. From the methods mentioned above, we can see that the accuracy of AE source localization depends on the precision of the AE signal speed measurement. Therefore, it is necessary to remove the dependency on the AE signal speed measurement to improve the localization accuracy. Recently, several localization methods without AE signal speed measurement have been reported. Dong et al. [20,21] developed a velocity-free MS/AE source location method for complex three-dimensional hole-containing structures. Fang et al. [22] gave a location algorithm with unknown velocity based on the INGLADA algorithm. Zhou et al. [23,24] provided an algebraic solution for AE source localization without premeasuring the wave velocity. With the development of artificial intelligence, machine learning methods have also been introduced into the AE source localization area. Ebrahimkhanlou and Salamone [25] introduced two deep learning approaches to localize AE sources within metallic plates, which utilized the reflection and reverberation patterns of AE waveforms as well as their dispersive and multimodal characteristics. Suwansin and Phasukkit [26] proposed a deep learning-based AE scheme for localization of cracks in train rails.
However, all these methods only localize AE sources at the zone level. Moreover, since the input of the deep learning network is the whole AE waveform, these methods often require significant computational resources, which leads to an inefficient and complex positioning process [27]. To address the above problems, Liu et al. [28] developed a new AE source location method, which combined the generalized regression neural network and the TDM localization method to further improve the localization accuracy. However, although these localization methods attempt to alleviate the influence of the wave velocity, there are still several other sources of measurement error (i.e., instrumentation noise, temperature changes, and systematic errors) that can affect their accuracy [29,30]. Moreover, these algorithms commonly investigate AE sources that occur inside the area surrounded by the sensors; very few studies have considered sources outside this area. To further enhance the localization accuracy and improve on previous studies, we developed a novel areal coordinate and machine learning-based AE source localization method that not only realizes wave velocity-free localization and accounts for measurement error disturbances but also avoids the limitation of the localization area. To validate the proposed method, numerical simulations of 14641 different AE source location cases were conducted, and the performance was compared with the conventional algorithm. Besides the numerical simulation validation, AE signal datasets were prepared for detection and localization tasks to train the proposed model; then, the feasibility of the proposed method was experimentally verified on the test datasets. The results were used to test the developed method against our previous study. Localization Approach In this research, an improved areal coordinate-based AE source localization method is adopted to estimate the coordinates of AE sources. This algorithm starts with determining the signed areal coordinates on triangles [31]. Consider a triangular area bounded by three AE sensors i, j, and k; an AE source S occurs in this area, as shown in Figure 1. 2.1. The Principle of the Generalized Areal Coordinate-Based AE Source Localization Method. In order to improve the location accuracy and avoid outliers near the boundary, a generalized areal coordinate-based source localization method, developed from the traditional areal coordinate-based source localization method, is used to locate the AE source. The most obvious difference between the two methods is the number of sensors: more than three AE sensors are used in the generalized areal coordinate-based method. The strategy for computing the Euclidean coordinate of the AE source, p_s, can be expressed as

p_s = a_s1 p_1 + a_s2 p_2 + ⋯ + a_sn p_n,

where p_1, p_2, …, p_n are the Euclidean coordinates of the n AE sensors and a_s1, a_s2, …, a_sn are the areal coordinates of the AE source with respect to the n AE sensors. Three of the n AE sensors are chosen at random, repeatedly, until every combination of three sensors has been selected. The m-th areal coordinates of AE source S with respect to sensors i, j, and k are

a_si^(m) = S_Δsjk / S_Δijk,  a_sj^(m) = S_Δski / S_Δijk,  a_sk^(m) = S_Δsij / S_Δijk,

where S_Δsjk, S_Δski, S_Δsij, and S_Δijk are the signed areas of the corresponding triangles, which can be computed through the Cayley-Menger determinant from the distance measurements among AE source S and sensors i, j, and k: d_ij, d_ik, d_kj, d_si, d_sj, and d_sk.
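As a concrete illustration (a minimal Python sketch, not taken from the paper; the function names and the default tolerance value are assumptions), the unsigned areal coordinates of a source with respect to one sensor triple can be computed from pairwise distances through the Cayley-Menger determinant, the sign pattern can be chosen as the one whose signed coordinates sum closest to 1 (with a small tolerance that becomes important once measurement errors are introduced below), and the source position recovered as the correspondingly weighted combination of the sensor positions:

```python
import itertools
import numpy as np

def cm_area(dab, dac, dbc):
    """Unsigned area of a triangle from its three side lengths (Cayley-Menger determinant)."""
    M = np.array([
        [0.0, 1.0,    1.0,    1.0],
        [1.0, 0.0,    dab**2, dac**2],
        [1.0, dab**2, 0.0,    dbc**2],
        [1.0, dac**2, dbc**2, 0.0],
    ])
    return np.sqrt(max(-np.linalg.det(M) / 16.0, 0.0))

def locate_triangle(p, d, idx, delta=0.1):
    """Areal-coordinate location of an AE source from one sensor triple idx = (i, j, k).

    p     : (n, 2) array of sensor coordinates
    d     : (n,) measured or estimated source-to-sensor distances
    delta : sign-pattern tolerance (grows with the expected measurement error)
    """
    i, j, k = idx
    dij = np.linalg.norm(p[i] - p[j])
    dik = np.linalg.norm(p[i] - p[k])
    djk = np.linalg.norm(p[j] - p[k])
    A_ijk = cm_area(dij, dik, djk)
    # unsigned areal coordinates |a_si|, |a_sj|, |a_sk|
    a = np.array([
        cm_area(d[j], d[k], djk) / A_ijk,   # triangle (S, j, k)
        cm_area(d[i], d[k], dik) / A_ijk,   # triangle (S, k, i)
        cm_area(d[i], d[j], dij) / A_ijk,   # triangle (S, i, j)
    ])
    # pick the sign pattern whose signed coordinates sum closest to 1 (all-negative excluded)
    patterns = [s for s in itertools.product((1, -1), repeat=3) if s != (-1, -1, -1)]
    best = min(patterns, key=lambda s: abs(np.dot(s, a) - 1.0))
    if abs(np.dot(best, a) - 1.0) > delta:
        return None                          # no admissible sign pattern for this triple
    w = np.array(best) * a
    return w @ p[list(idx)]                  # p_S = a_si p_i + a_sj p_j + a_sk p_k
```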
The generalized areal coordinate of AE source S with respect to the n AE sensors is defined as the average over all selected sensor triples,

a_si = (1/M) Σ_{m=1}^{M} a_si^(m),

where M is the number of sensor triples containing sensor i. According to Equation (3), a_si is the average measured value of the areal coordinate; therefore, the obtained areal coordinate a_si achieves higher accuracy and avoids outliers. From the above analysis, the key factor affecting the localization accuracy is the accuracy of the measured distances from the AE source S to the sensors. These distances are easily affected by the propagation speed and the arrival time of the AE wave, so a certain measurement error is inevitable. Based on this, we propose a new location method that does not require measuring the AE wave velocity and that takes measurement error into account. We use the characteristics of the AE wave and artificial neural network (ANN) technology to infer the distance between the AE source and each sensor without measuring the wave velocity (see Section 4.1 for the specific implementation principle). On this basis, we also improve the positioning algorithm by incorporating the measurement error factor to reduce the impact of unavoidable measurement errors on the accuracy of the positioning algorithm. Specifically, when measurement errors exist, the distance measurements between AE source S and sensors i, j, and k are modeled as

d̂_si = d_si + e_i,  d̂_sj = d_sj + e_j,  d̂_sk = d_sk + e_k,

where e_i, e_j, and e_k are the measurement errors. The measurement error obeys a Gaussian, Uniform, or Exponential distribution [32]; the distribution and its parameters differ with the environment and the measurement method. Due to the effect of measurement errors, the original calculation method no longer works when solving for the areal coordinates {a_si^(m), a_sj^(m), a_sk^(m)}. After obtaining the unsigned values |a_si^(m)|, |a_sj^(m)|, and |a_sk^(m)|, the next step is to determine the sign pattern (σ_si, σ_sj, σ_sk). Equation (6) is modified by introducing a tolerance Δ_1, a small positive number that changes depending on the measurement error. When n is 3 (as in the model shown in Figure 1), the condition can be written as

|σ_si |a_si^(m)| + σ_sj |a_sj^(m)| + σ_sk |a_sk^(m)| − 1| ≤ Δ_1,   (7)

where σ_si, σ_sj, and σ_sk are the signs of the coordinates and take values of either −1 or 1. Then, the 7 possible combinations of (σ_si, σ_sj, σ_sk) are substituted into Equation (7), and the corresponding number of solutions is obtained. After the sign determination, the localization of the AE source S is accomplished under the measurement error condition. The procedure of the improved AE source localization method is illustrated in Figure 2. Simulation Validation To validate the performance of the method proposed in the previous section, numerical simulations are conducted to estimate the coordinates of the AE source. A total of 14641 different AE source locations are simulated on the plate, as shown in Figure 3. The interval between two adjacent simulated AE source locations is 5 mm, and the dimension of the simulated area is 600 mm × 600 mm. 3.1. The Traditional AE Source Localization Method without Measurement Error. When there is no measurement error, the obtained measurement distances d_sj, d_sk, and d_si are the true values; the positioning of the AE source is completed using the traditional areal coordinate method based on the sensor distances d_ij, d_jk, and d_ki and the measured distances d_sj, d_sk, and d_si. For this case, the traditional areal coordinate-based AE source localization method is performed.
Numerical simulations are conducted using MATLAB; the coordinates of the three AE sensors are set at (−80 mm, 0 mm), (80 mm, 0 mm), and (0 mm, 80 mm). As shown in Figure 3, the simulated abscissa and ordinate localization errors are less than 1.55 mm and 1.85e-12 mm, the average abscissa and ordinate localization errors are 0.004 mm and 3.34e-14 mm, and the standard deviations of the abscissa and ordinate localization errors are 0.06 mm and 5.61e-14 mm. Because the grid size for the simulation is 5 mm, these localization results can be considered reasonable. However, the above results change when there are measurement errors, so it is necessary to make improvements to adapt to the situation in which measurement errors exist. The simulation results indicate that the AE source localization algorithm has good adaptability and robustness; however, there are still some cases where large error points occur, especially when the AE source lies on a line parallel to one side of the triangle formed by the three sensors. Thus, in order to enhance the localization accuracy, an improved AE source localization method is proposed. 3.3. The Improved AE Source Localization Method with Measurement Error. After introducing the measurement error, the remaining unsolved problem is the misjudgment of the sign pattern, which causes localization errors. To solve this problem, we propose an improved localization method, introduced in Section 2. Taking four sensor nodes as an example, any three of the four nodes can form a triangle, so separate localization results for the AE source S can be obtained from the four AE sensors. From the four sets of localization coordinates obtained, the relatively outlying data are removed, and the remaining data are averaged to compute a new localization result. Specifically, the improved AE source localization method with measurement error is performed in the following steps. Step 1. There are four AE sensors, whose coordinates are set at (−80 mm, 0 mm), (80 mm, 0 mm), (0 mm, −80√3 mm), and (0 mm, 80√3 mm) for the simulation. Three of the four nodes are selected in turn as a group, and AE source localization is performed according to the method in Part B to obtain the AE source coordinates (x_1, y_1). Step 2. The first step is repeated in turn, and the other source coordinates (x_2, y_2), (x_3, y_3), and (x_4, y_4) are calculated from all possible combinations of the sensor groups. Step 3. The obtained four AE source coordinates are compared. The one farthest from the other three is deleted, leaving three coordinates. Step 4. Then, the point farthest from the remaining two points is deleted from the three coordinates, leaving only two coordinates. The remaining two coordinates are averaged to obtain the final AE source coordinates. The performance of the proposed method is investigated when the measurement error obeys the Uniform distribution (e_i ~ U(−1, 1)) and the Gaussian distribution (e_i ~ N(0, 1²)), as shown in Figure 5. If the measurement error obeys the Uniform distribution (e_i ~ U(−1, 1)), the simulated abscissa and ordinate localization errors are less than 25 mm and 42.31 mm, the average abscissa and ordinate localization errors are 1.07 mm and 1.78 mm, and the standard deviations of the abscissa and ordinate localization errors are less than 1.36 mm and 3.08 mm.
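Steps 1-4 can be sketched as follows, building on the locate_triangle helper above; the sensor layout and the U(−1, 1) error model follow the simulation just described, while the source position, tolerance, and random seed are purely illustrative:

```python
def locate_improved(p, d, delta=0.2):
    """Improved localization with n >= 4 sensors: one candidate per 3-sensor
    combination, then the farthest outliers are dropped and the rest averaged."""
    cands = [locate_triangle(p, d, idx, delta)
             for idx in itertools.combinations(range(len(p)), 3)]
    cands = [c for c in cands if c is not None]
    while len(cands) > 2:                               # Steps 3-4: drop the farthest point twice
        spread = [sum(np.linalg.norm(c - o) for o in cands) for c in cands]
        cands.pop(int(np.argmax(spread)))
    return np.mean(cands, axis=0)

sensors = np.array([[-80.0, 0.0], [80.0, 0.0],
                    [0.0, -80.0 * np.sqrt(3)], [0.0, 80.0 * np.sqrt(3)]])
true_source = np.array([30.0, 50.0])                    # illustrative source position in mm
rng = np.random.default_rng(0)
noisy_d = np.linalg.norm(sensors - true_source, axis=1) + rng.uniform(-1.0, 1.0, 4)
print(locate_improved(sensors, noisy_d))                # close to (30, 50) despite e_i ~ U(-1, 1)
```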
If the measurement error obeys the Gaussian distribution (e_i ~ N(0, 1²)), the simulated abscissa and ordinate localization errors are less than 30 mm and 68.34 mm, the average abscissa and ordinate localization errors are 1.76 mm and 2.84 mm, and the standard deviations of the abscissa and ordinate localization errors are less than 2.13 mm and 4.59 mm. Considering that the grid size of the simulation is 5 mm, a localization error greater than or equal to 5 mm is defined as a large error. The large error rates of the simulation results based on the different localization methods are computed and compared (Table 1). Compared with the other four cases, the traditional method with measurement error obeying the Gaussian distribution has the highest large error rate. Besides, the new method with measurement error obeying the Uniform distribution has the lowest large error rate among the methods with measurement error. The average value and standard deviation of the initial simulation results and of the results after eliminating the large errors are also compared in Table 1. It can be seen that the values of all the methods with measurement error are reduced. However, the decrease in the values of the traditional methods is pronounced, which means the traditional methods are more vulnerable to measurement errors, while the new methods achieve competitive performance. Experiments In this section, the performance of the introduced generalized areal coordinate method for AE source localization on a monitored structure with PZT sensors was experimentally validated. The experimental setup is elaborated in Section 4.1, and the experimental results and analysis are given in Section 4.2. Experimental Setup. The experimental setup used to evaluate the effectiveness of the proposed AE source localization approach is shown in Figure 6. The experimental system was mainly composed of AE data acquisition equipment, an aluminum alloy plate, four PZT sensors, and preamplifiers. The sampling frequency was up to 1 MHz. The specimen was a sheet metal plate (material: 6061-T6 aluminum) with dimensions of 600 mm × 600 mm × 3 mm. To collect AE signals, four AE sensors were mounted with Vaseline on the surface of the plate. Based on the previous studies, the sensors were attached at the coordinates S1 (0 mm, 80√3 mm), S2 (−80 mm, 0 mm), S3 (0 mm, −80√3 mm), and S4 (80 mm, 0 mm), relative to the center point of the plate. AE sources were generated by Hsu-Nielsen pencil lead breaks. As shown in Figure 7, the experimental area was 50 mm from the edge of the plate, and the grid size for the AE tests was 50 mm. All the datasets were located at the intersections of the dotted lines plotted in this figure, in which 61 points, 30 points, and 30 points were used as training, validation, and test sets, respectively. At each point, 3 Hsu-Nielsen pencil lead break tests were performed. Before data collection, the AE signals were preamplified by 40 dB and filtered to the frequency range of 100 kHz-200 kHz. The narrowband signals were then extracted by the Shannon wavelet transform and peak detection algorithms. From previous research [33][34][35], it was found that, apart from TOA, the amplitude values of AE signals show great potential in AE source localization because of the strong relationship between the amplitude value of an AE signal and the distance of the damage from each AE sensor.
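The amplitude-feature pipeline can be sketched as below (a minimal Python illustration, not from the paper: a Butterworth band-pass and Hilbert envelope stand in for the Shannon wavelet transform and peak detection, the per-event normalization convention and network layer sizes are assumptions, and solver='lbfgs' is a quasi-Newton optimizer loosely mirroring the training method described in the next paragraph):

```python
import numpy as np
from scipy.signal import butter, hilbert, sosfiltfilt
from sklearn.neural_network import MLPRegressor

def narrowband_peak(waveform, fs=1_000_000, band=(100e3, 200e3)):
    """Peak amplitude of the 100-200 kHz content of one AE waveform."""
    sos = butter(4, band, btype="bandpass", fs=fs, output="sos")
    narrow = sosfiltfilt(sos, waveform)           # zero-phase band-pass filtering
    return float(np.abs(hilbert(narrow)).max())   # envelope peak as the amplitude feature

def normalize_amplitudes(amps):
    """Z-score the four peak amplitudes of each event (mean and standard deviation
    taken here across the sensors of one event)."""
    return (amps - amps.mean(axis=1, keepdims=True)) / amps.std(axis=1, keepdims=True)

def train_distance_net(amps, dists):
    """Fit an MLP mapping normalized amplitudes (n_events, 4) to the
    source-to-sensor distances (n_events, 4); lbfgs is a quasi-Newton solver."""
    net = MLPRegressor(hidden_layer_sizes=(16, 16), activation="tanh",
                       solver="lbfgs", max_iter=5000, random_state=0)
    net.fit(normalize_amplitudes(amps), dists)
    return net

# distances predicted by such a network can then be fed to locate_improved(...) above
```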
In this paper, the amplitude values of the AE signals are obtained from the extracted narrowband signals; the maximum amplitude value of the extracted narrowband signal is used. The normalized amplitude value of each sensor signal, N_amplitude_i, was then computed as

N_amplitude_i = (amplitude_i − amplitude_mean) / σ,

where amplitude_mean is the average value of the signal amplitudes and σ is the standard deviation of the signal amplitudes. An artificial neural network (ANN), a useful tool for nonlinear problems, was adopted to enhance the accuracy of the estimated distances. The neural network used in this paper is shown in Figure 8. The input data are the normalized amplitude values N_amplitude_i, and the output data are the distances between each sensor and the AE source, d_si. As shown in Figure 7, there were 363 AE signals in this research. The AE signal datasets were divided into a training dataset (50%), a validation dataset (25%), and a testing dataset (25%); the training dataset contains 183 signals. The validation dataset provides an unbiased evaluation of a model fit on the training dataset while tuning the model's hyperparameters [36,37] and is used to minimize overfitting. In addition, in order to check the accuracy of the training results, a test dataset of 90 signals is used. The neural network is trained for fast learning with a quasi-Newton method, an optimization algorithm based on Taylor series expansion. The distances estimated by the neural network and the true distances coincide well with each other, so the distances of the AE source from the sensors can be well estimated using the trained weights of the hidden layers. The AE source locations can then be predicted through the improved areal coordinate-based AE source localization method using the estimated distances. 4.2. Results and Analysis. The AE source locations in the test area were calculated using the generalized areal coordinate and ANN-based AE source localization algorithm described above. One example of the detected AE signals is shown in Figure 9. Example localization results of this experiment are shown in Figure 10. In this illustration, the triangle marker "△" represents the actual coordinate of the AE source, the asterisk marker "*" represents the coordinate predicted by the traditional areal coordinate-based localization method, and the plus marker "+" represents the coordinate predicted by the new generalized areal coordinate-based localization method. From the experimental results, we can see that the plus markers "+" are always nearer to the triangle markers "△" than the asterisk markers "*", which means that, compared with the traditional method, the results of the new method coincide better with the actual AE source locations. The coordinates of the 30 AE source locations in the test dataset are shown in Table 2. Figure 11 illustrates the location errors obtained using the traditional method and our proposed new method. The experimental results show that AE source positions N-8, N-11, N-13, N-27, and N-28 have larger errors than the other positions (>25 mm); these larger errors all result from using the traditional method. The maximum localization error obtained using the proposed method is less than 15 mm. Therefore, the proposed new method has higher accuracy in practical AE source localization applications. It can be seen that the distribution of the localization error is consistent with the estimated error in Section 3.
This error seems to be mainly attributable to the locations of these points on the edge of the test area. To expand the coverage area and overcome this limitation, additional PZT sensors can be used; adding an appropriate number of PZT sensors can also improve the positioning performance of the method. Figure 12 shows the mean localization errors and standard deviations for the location results of the 30 AE sources determined by the two methods. The mean localization error and standard deviation of the proposed method are both smaller than those of the traditional method, which further demonstrates that the proposed method holds a more stable and higher location accuracy than the traditional method. The mean localization error (3.64 mm) and the standard deviation (2.61 mm) of the new method are less than those of the traditional method (11.23 mm and 10.67 mm). Therefore, compared with the traditional method, the location accuracy and stability of the proposed method are improved by 67.55% and 75.46%, respectively. Conclusion This study proposes an improved approach for localizing potential defect sources in structural materials using the AE technique. Compared with traditional AE source localization algorithms, the proposed method has higher location accuracy and better stability. Besides, considering the measurement error makes the model more robust, which greatly improves the performance of AE source localization in practical engineering applications. The method also adopts a neural network structure to estimate the damage distances from the AE sensors, which completely removes the reliance on a premeasured wave velocity. A total of 14641 AE source location simulation cases were carried out to validate the proposed method. The simulation results indicate that the AE sources can be effectively located, even under various measurement error conditions. Compared with the traditional method with no measurement error and the traditional methods with measurement errors, the proposed method achieves higher accuracy. In addition, the feasibility of the proposed approach was experimentally verified on the AE source localization system. The experimental results show that a mean localization error of 3.64 mm and a standard deviation of 2.61 mm were obtained, representing improvements of 67.55% and 75.46% over the traditional method. Although the proposed approach realizes the localization of AE sources in structural materials, there are still some limitations. For example, if the normalized amplitude values of the collected AE signals fall outside the training data, the location predictions may be erroneous. In future work, we will further expand the AE signal training dataset and use more novel machine learning methods to optimize the AE source localization approach. Data Availability The data used to support the findings of this study are available from the corresponding author upon request. Conflicts of Interest The authors declare that they have no conflicts of interest.
5,434.8
2022-02-12T00:00:00.000
[ "Materials Science" ]
Genus Spondias: A Phytochemical and Pharmacological Review It is believed that many degenerative diseases are due to oxidative stress. In view of the limited drugs available for treating degenerative diseases, natural products represent a promising therapeutic strategy in the search for new and effective candidates for treating degenerative diseases. This review focuses on the genus Spondias, which is widely used in traditional medicine for the treatment of many diseases. Spondias is a genus of flowering plants belonging to the cashew family (Anacardiaceae). This genus comprises 18 species distributed across tropical regions of the world. A variety of bioactive phytochemical constituents have been isolated from different plants belonging to the genus Spondias. Diverse pharmacological activities have been reported for the genus Spondias, including cytotoxic, antioxidant, ulcer protective, hepatoprotective, anti-inflammatory, antiarthritic, and antidementia effects. These attributes indicate their potential to treat various degenerative diseases. The aim of this review is to draw attention to the unexplored potential of phytochemicals obtained from Spondias species, thereby contributing to the development of new therapeutic alternatives that may improve the health of people suffering from degenerative diseases and other health problems. Introduction Degenerative disease results from a continuous process of degenerative cell changes in tissues and organs, which increasingly deteriorate over time. This might happen due to normal bodily wear or lifestyle choices such as lack of exercise or poor eating habits. Oxidative stress is known to be implicated in the development of degenerative diseases. An imbalance between the formation and neutralization of free radicals leads to oxidative stress. These reactive species seek stability through electron pairing with biological macromolecules such as proteins, lipids, and DNA in healthy cells, leading to protein and DNA damage along with lipid peroxidation [1]. These changes contribute to the development of cancer, atherosclerosis, cardiovascular diseases, aging, inflammatory diseases, and other degenerative changes. All human cells protect themselves against free radical damage by enzymes such as superoxide dismutase (SOD) and catalase, or by antioxidant compounds such as ascorbic acid, tocopherol, and glutathione. Sometimes, these protective mechanisms are disrupted by various pathological processes [1]. In view of the limited drugs available for the treatment of degenerative diseases, there is an urgent need for the development of new, nontoxic, and affordable candidates for treating these diseases, especially from natural sources. Investigation of the phytotherapy of medicinal plants that are highly valued and widely used in traditional medicine may provide efficient management of many diseases. The genus Spondias belongs to the family Anacardiaceae, which comprises 70 genera and 600 species and is indigenous mostly to the tropics and subtropics worldwide but also extends into the temperate zone. Members of this family are used in traditional medicine for the treatment of many ailments [2,3]. Spondias consists of 18 species. Ethnopharmacology Members of the genus Spondias are widely used in traditional medicine for the treatment of numerous diseases, including stomachache, diarrhoea, diabetes, dementia, anemia, dysentery, and various infections. The fruits of various species have been used to treat many ailments. It was reported that S.
dulcis fruits are utilized by the rural population in Bangladesh to improve eyesight and to prevent eye infections [7], while those of S. tuberosa are eaten by rural communities in Brazil due to their high nutritional value [8]. On the other hand, the fruits of S. mombin are used in Nigeria as a diuretic [9]. Powdered ripe fruits of S. pinnata are used in India as an antidote for poison arrows [10]. Regarding the leaves of Spondias, in Mexico an infusion of the fresh leaves of S. purpurea is used to treat stomachache and flatulence [11]. A decoction of the fresh leaves is used in the treatment of anemia, diarrhoea, dysentery, and skin infections [12][13][14], while in Belize a decoction of S. mombin leaves is used to treat diarrhoea and dysentery, and it is used by populations in Nigeria, Benin, and Togo to retain good memory [3]. The aqueous extract of S. mombin leaves is popularly used in Brazil as an abortifacient [15]. In Southwest Nigeria, the leaves are used by traditional healers to manage diabetes mellitus [2]. They also possess antimicrobial [16] and antiviral activities [17]. The gum of S. mombin is used in Belize as an expectorant and to expel tapeworms [18,19]. In India, the gum produced from S. pinnata is used as a demulcent [20] and to treat bronchitis, dysentery, ulcers, diarrhoea, and skin diseases [21]. In Mexico, a decoction of the bark of S. purpurea is used to treat anemia, diarrhoea, dysentery, and skin infections [12][13][14]. In India, the bark of S. pinnata is used as a rubefacient for the treatment of painful joints. It is also used to treat diarrhoea and dysentery and to prevent vomiting [22]. A decoction prepared from the root bark is used to regulate menstruation and to treat gonorrhoea [23]. Various volatile oil constituents have been reported from different Spondias species. Hydrodistillation of the leaves of S. mombin and S. purpurea led to the isolation and identification of α-pinene, β-pinene, caryophyllene, humulene, indene, and cadinene [32]. Pharmacological Effects The different reported pharmacological activities of the genus Spondias are detailed below. Ghate et al. (2013) demonstrated that the methanolic extract of S. pinnata bark exhibited significant cytotoxicity against human lung adenocarcinoma (A549) and human breast adenocarcinoma (MCF-7) cell lines via the induction of apoptosis. An in vitro WST-1 cell proliferation assay was carried out; A549 cells were seeded in a 96-well culture plate at a density of 5 × 10⁴ cells/well, whereas MCF-7 cells were seeded at 1 × 10⁴ cells/well, and the cells were allowed to settle for 2 h. The cells were then treated with the methanolic extract of S. pinnata at concentrations ranging from 0 to 200 µg/ml for 48 h. The 70% methanolic extract of S. pinnata inhibited the growth of both A549 and MCF-7 cells in a dose-dependent manner, with IC50 values of 147.84 and 149.34 µg/ml, respectively. Cell proliferation and viability were quantified by measuring the absorbance of the produced formazan at 460 nm using a microplate ELISA reader. The pathway of apoptosis induction may involve an increase in the Bax/Bcl-2 ratio in both cell types, which results in activation of the caspase cascade and, subsequently, cleavage of poly(ADP-ribose) polymerase [37]. Chaudhuri et al. (2015) tested compounds isolated from the ethyl acetate fraction obtained from the bark of S. pinnata for cytotoxic activity against the human glioblastoma cell line U87.
An in vitro WST-1 cytotoxicity assay was carried out; 1 × 10⁴ cells were treated with compounds isolated from the ethyl acetate fraction (1 to 30 µg/ml) for 48 h in a 96-well culture plate. Two isolated compounds (gallic acid and methyl gallate) showed promising cytotoxic activities, with IC50 values of 59.28 and 8.44 µg/ml, respectively [27]. Gallic acid induced cell death in promyelocytic leukemia HL-60RG cells [38]. Previous studies showed that treatment of murine tumors with methyl gallate extracted from Moutan Cortex Radicis enhances the antitumor effects through modulation of the function of CD4+CD25+ Treg cells. In vitro, methyl gallate decreased CD4+CD25+ Treg cell migration and reduced the suppressive function of effector T-cells. In tumor-bearing animals, treatment with methyl gallate delayed tumor progression and prolonged survival through inhibition of tumor infiltration by CD4+CD25+ Treg cells [39]. 5.2. Antioxidant Activity. Hazra et al. (2008) showed that the 70% methanolic extract of S. mangifera bark is a potent source of antioxidants. The total antioxidant activity was assessed in vitro from the ability of the 70% methanolic extract to scavenge the ABTS radical cation, calculated from the decolorization of the ABTS cation measured spectrophotometrically at 734 nm and compared to a trolox standard; the trolox equivalent antioxidant value was 0.78 [1]. In addition, the S. mangifera methanolic fruit extract at a concentration of 5 µg/ml showed 16% radical scavenging activity, compared to only 5% for the same concentration of vitamin C [40]. Arif et al. (2016) showed that the ethanolic extract of S. mangifera fruits contains large amounts of phenolics, flavonoids, and acid glycosides, such as propan-1,2-dioic acid-3-carboxyl--D-glucopyranosyl-(6→1)--D-glucofuranoside. In vitro and in vivo studies were conducted to test the effects of the ethanolic extract and the acid glycoside as antioxidants in anoxia-stress tolerance, swimming endurance, and cyclophosphamide-induced immune suppression models. The antioxidant activity was compared to the standard drug Geriforte [36]. An in vitro study was carried out against DPPH• and measured with a UV spectrophotometer at 517 nm. Aliquots of 0.05, 0.5, and 1 mg/ml of either the ethanolic extract or the acid glycoside were mixed in test tubes, each containing 3 ml of methanol and 0.5 ml of 1 mM DPPH•; ascorbic acid was used as a standard at the same concentrations, and the reaction mixture was incubated at 37 °C for 30 min. The radical scavenging activity was then calculated; the IC50 was 0.32 and 0.15 mg/ml for the ethanolic extract and the acid glycoside, respectively, while the IC50 of ascorbic acid was 0.015 mg/ml. These results indicated that the ethanolic extract and the acid glycoside exhibit significant antioxidant activity [36]. Furthermore, an in vivo experiment was carried out on thirty Swiss Albino mice divided into five groups of six mice each; group 1 served as the control and received vehicle alone (2% gum acacia), groups 2 and 3 were treated with 100 and 200 mg/kg/day of the ethanolic extract, group and 158.6 min, respectively, whereas the acid glycoside treated mice swam for 155.4 min. It was evident that the mice treated with the ethanolic extract and the acid glycoside exhibited a significant increase in physical swimming endurance time [36].
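For reference, the standard arithmetic behind such DPPH figures is sketched below (a hypothetical Python example; the absorbance readings are invented for illustration and are not data from the study): the scavenging percentage is (A_control − A_sample)/A_control × 100, and the IC50 is read off the dose-response curve.

```python
import numpy as np

def scavenging_percent(a_control, a_sample):
    """DPPH radical scavenging (%) from absorbances measured at 517 nm."""
    return (a_control - a_sample) / a_control * 100.0

def ic50(concentrations, scavenging):
    """Concentration giving 50% scavenging, by linear interpolation of the dose-response."""
    return float(np.interp(50.0, scavenging, concentrations))

# hypothetical readings for extract aliquots of 0.05, 0.5, and 1 mg/ml
conc_mg_ml = [0.05, 0.5, 1.0]
a_control = 0.90
a_samples = [0.78, 0.42, 0.20]
inhibition = [scavenging_percent(a_control, a) for a in a_samples]
print(inhibition, ic50(conc_mg_ml, inhibition))
```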
An additional in vivo study was carried out on 24 mice, divided into four groups of six mice each. It was observed that the administration of cyclophosphamide alone (25 mg/kg/day) produced a significant decrease in the total RBC and leukocyte counts, whereas cyclophosphamide given along with the ethanolic extract (100 mg/kg/day) or the acid glycoside (10 mg/kg) conferred good protection by increasing the haematological parameters. Based on this study, it was suggested that the ethanolic extract and the acid glycoside may be coadministered with chemotherapy for the treatment of patients with a severely impaired or suppressed immune system [36], as the ethanolic extract and the acid glycoside are able to reduce the leukopenia and anemia induced by cyclophosphamide administration [36]. Shetty et al. (2016) conducted an in vivo study on Wistar rats to show the effects of combining conventional chemotherapy with S. pinnata bark extract to reduce chemotherapy's side effects. The rats were divided into four groups: group 1 (normal control), group 2 (received etoposide alone (i.p.) in a single dose of 60 mg/kg b.w.), group 3 (received etoposide followed by S. pinnata bark extract (100 mg/kg b.w.) orally once). The results showed that the animals which received chemotherapy in group 2 showed a significant decrease in GSH level in the liver and kidney tissues as compared to the control group, while treatment with S. pinnata bark extract after chemotherapy led to a significant increase in GSH level when compared to group 2. This study demonstrated the protective action of the extract on the liver and kidney against chemotherapy-induced chemical stress [41]. Cabral et al. (2016) showed that the hydroethanolic extract of S. mombin leaves has significant antioxidant activity; in an in vitro DPPH• assay, the hydroethanolic extract was tested at 60, 125, 250, and 500 µg/ml and showed DPPH• radical scavenging activity ranging from 66% to 76% [42]. The methanol extract of S. purpurea fruit showed strong free radical scavenging activity; this was demonstrated in an in vitro study evaluating the ability of the methanol extract of S. purpurea fruit to sequester DPPH• radicals. The flavonoid rutin was used as a positive control and sequestered 90.01% of the DPPH• radicals at a concentration of 250 µg/ml, while the methanol extract of S. purpurea fruit sequestered 74.41%, with an EC50 of 27.11 µg/ml [43]. The strong antioxidant activity of plants belonging to the genus Spondias has been attributed mainly to their flavonoid and phenolic content [41]. An in vivo study was carried out to evaluate the ulcer protective activity of S. mangifera methanolic bark extract. Gastric ulceration was achieved by administering different doses of indomethacin (30, 60, and 100 mg/kg) to rats orally, and 100 mg/kg was found to be the most effective for producing gastric ulceration. The rats were then divided into four groups, each comprising six animals. Food and water were withdrawn 24 h and 2 h, respectively, before drug administration. Rats in group 1 received 100 mg/kg indomethacin, while those in group 2 were pretreated with 100 mg/kg cimetidine. The rats in groups 3 and 4 were pretreated with 100 and 200 mg/kg of bark extract 1 h prior to the administration of indomethacin (100 mg/kg). The drugs were administered intragastrically. After 4 h, the animals were killed by cervical dislocation and their stomachs were removed and opened along the greater curvature. The ulcer index (UI) of each group was calculated.
The groups treated with bark extract showed a marked reduction in the ulcerogenic effect of indomethacin, reducing the ulcer index from 17.7 (ulcerated control) to 8.7 and 6.7 for the groups treated with 100 mg/kg and 200 mg/kg of bark extract, respectively. The methanolic bark extract of S. mangifera was thus concluded to possess a marked inhibitory effect on indomethacin-induced ulceration [46]. Sabiu et al. (2015) tested the gastroprotective and antioxidative potential of the aqueous extract of S. mombin leaves. Ulceration was induced in Albino rats by oral administration of indomethacin, which caused a significant increase in the degree of ulceration. Pretreatment with the extract at 200 mg/kg b.w. facilitated the ulcer healing process, which was associated with a decrease in pepsin activity and an elevation of mucin levels in the gastric mucosa. Moreover, the S. mombin leaf extract ameliorated the oxidative stress and the inhibitory action of indomethacin on prostaglandin synthesis [47]. Hepatoprotective Activity. The ethyl acetate and methanolic extracts of S. pinnata stem heartwood possess a marked in vivo hepatoprotective effect in CCl4-intoxicated rats. The ethyl acetate and methanolic extracts were administered at doses of 100, 200, and 400 mg/kg, p.o., and showed a protective activity in a dose-dependent manner, as evidenced by significant decreases in ALT and AST to their normal levels, comparable to silymarin. The hepatoprotective effect in this study was attributed to the presence of flavonoids. Histopathological examination of the CCl4-intoxicated rats revealed that normal hepatic architecture was retained in rats treated with S. pinnata extracts [48]. Hazra et al. (2013) evaluated the effect of S. pinnata stem bark methanol extract on iron-induced liver injury in mice. Intraperitoneal administration of iron dextran induced iron overload and led to liver damage along with a significant increase in serum hepatic markers (ALT, AST, ALP, and bilirubin). The administration of S. pinnata methanol extract at doses of 50, 100, and 200 mg/kg induced a marked increase in antioxidant enzymes, along with dose-dependent inhibition of lipid peroxidation, protein oxidation, and liver fibrosis. Meanwhile, the levels of serum enzyme markers and ferritin were also reduced, suggesting that the extract is potentially useful as an iron-chelating agent for iron overload diseases [49]. Chaudhuri et al. (2016) evaluated the activity of the methanolic extract of S. pinnata bark against iron-induced liver fibrosis and hepatocellular damage. In an iron-overloaded liver, iron reacts with cellular hydrogen peroxide to generate hydroxyl radicals, which in turn initiate the propagation of various free radicals; this situation leads to oxidative stress. Two compounds (gallic acid and methyl gallate) were isolated from the ethyl acetate fraction of this extract; an in vivo study showed that methyl gallate exhibited better iron chelation properties than gallic acid. It was shown that methyl gallate counteracts hepatic fibrosis by ameliorating oxidative stress and sequestering the iron stored in cells [50]. These results were in accordance with previous studies by Nabavi et al. (2013), which indicated the in vivo protective effect of gallic acid isolated from Peltiphyllum peltatum against sodium fluoride-induced hepatotoxicity and oxidative stress.
The results showed that gallic acid (10 and 20 mg/kg) prevented the sodium fluoride-induced abnormalities in the hepatic biochemical markers; these effects were comparable to the reference drug silymarin (10 mg/kg) [51]. Photoprotective Activity. Ultraviolet A and ultraviolet B radiation are known to induce skin cancer. The free radicals generated by sunlight are responsible for the degradation of essential cellular components such as DNA and proteins [43]. The UVA photoprotective activity of the ethanolic extract of S. purpurea fruit was assessed in vitro by the trans-resveratrol method, which indicated a marked photoprotective ability against UVA radiation [52]. The in vitro UVB photoprotective effect of the ethanol extract of S. purpurea fruit was also tested by a spectrophotometric method. The photoprotective effect was attributed to phenolic compounds in the S. purpurea fruit extract, which are able to absorb solar radiation, scavenge free radicals, and decrease the harmful effects of the sun [43]. 5.6. Anti-Inflammatory Activity. The hydroethanolic extract of S. mombin leaves showed significant anti-inflammatory activity in a carrageenan-induced peritonitis model in mice. Carrageenan induced neutrophil migration to the peritoneal cavity and typical signs of acute inflammation, including vasodilation, edema, and leukocyte infiltration. It was evident from this study that S. mombin leaf extract (100, 200, 300, and 500 mg/kg) reduced the leukocyte influx into the peritoneal cavity of the treated animals [42]. da Silva Siqueira et al. (2016) showed that phenolic compounds were responsible for the anti-inflammatory activity exhibited by the hydroethanolic extract of S. tuberosa leaves. Furthermore, an in vivo study was conducted on Swiss Albino mice, in which dexamethasone was used as a standard anti-inflammatory drug and carrageenan was used to induce hind paw edema. The extract (125, 250, and 500 mg/kg) induced a significant amelioration of the inflammatory response induced by carrageenan, a marked reduction in the number of leukocytes in the peritoneal cavity, and a significant decrease in myeloperoxidase activity [53]. Antiarthritic Activity. Nitric oxide plays an important role in various inflammatory processes. However, sustained production of this radical is directly toxic to tissues and contributes to the vascular collapse associated with septic shock, whereas chronic expression of the nitric oxide radical is associated with various degenerative diseases, including carcinomas and inflammatory conditions such as juvenile diabetes, multiple sclerosis, arthritis, and ulcerative colitis. The toxicity of NO increases greatly when it reacts with a superoxide radical, forming the highly reactive peroxynitrite anion (ONOO−). Hazra et al. (2008) showed that the methanolic extract of S. pinnata inhibits nitrite formation in vitro by directly competing with oxygen in the reaction with nitric oxide. The results revealed that the IC50 of the methanolic extract (tested at 200 µg/ml) was 716.32 µg/ml, which was lower than that of the reference compound gallic acid (IC50 = 876.24 µg/ml). The scavenging percentages were 22.3% and 15.8% for S. pinnata and gallic acid, respectively. This study showed that the extract exhibited more potent peroxynitrite radical scavenging activity than the standard gallic acid [1]. Learning and Memory. The ability to acquire knowledge and to retain this acquired knowledge can be defined as learning and memory.
Several conditions, such as aging and stress, may lead to impairment of learning. It has been shown that aging may lead to various neurodegenerative processes, including memory loss, dementia, and Alzheimer's disease [54]. The aqueous extract of S. mombin leaves (400 and 800 mg/kg b.w.) was shown to enhance the learning and memory capabilities of Wistar rats, attributed to structural changes observed in the cerebrum. Improved learning and memory have also been linked to structural changes in the limbic system [55]. The aqueous extract may also have positively affected the biosynthesis of neurotransmitters, such as acetylcholine, noradrenaline, dopamine, and 5-HT, that are involved in learning and memory mechanisms [56,57]. Ishola et al. (2017) investigated the in vivo protective effect of the hydroethanolic leaf extract of S. mombin (50, 100, or 200 mg/kg, p.o.) and demonstrated protection against scopolamine-induced cognitive dysfunction and memory deficit, which could be attributed to the extract's antioxidant properties [58]. Antipyretic activity has also been reported: pyrexia was induced in Albino rats by brewer's yeast, and the extract showed a significant reduction in pyrexia, which continued for 5 hours after drug administration [60]. It was also shown that both the ethyl acetate and aqueous extracts of S. pinnata fruit, at a concentration of 10 mg/ml, have significant thrombolytic activity compared to streptokinase as a standard [61]. Kamal et al. (2015) showed that the ethanolic extract of S. pinnata leaves (1 mg/ml) has a membrane-stabilizing activity for human RBCs in hypotonic solution-induced hemolysis. In the case of heat-induced hemolysis, S. pinnata extracts produced marked inhibition of hemolysis [62]. Uddin et al. (2016) demonstrated the possible thrombolytic and membrane-stabilizing activities of the ethanolic extract of S. pinnata aerial parts and its different fractions; the ethyl acetate fraction exerted the highest thrombolytic and membrane-stabilizing activity [63]. 5.11. Hypoglycemic Activity. The hypoglycemic activity was tested using different extracts of the genus Spondias. The leaves of S. mombin were tested in vitro by Fred-Jaiyesimi et al. (2009) for their hypoglycemic activity. A new compound, 3olean-12-en-3-yl (9Z)-hexadec-9-enoate, isolated from the diethyl ether fraction of the methanolic extract of S. mombin leaves, showed α-amylase inhibitory activity similar to that of acarbose. The methanolic leaf extract and the isolated new compound decreased postprandial hyperglycemia. The methanolic extract (250 mg/ml) showed 39% inhibition of α-amylase activity, while the diethyl ether fraction (70 mg/ml) showed 73% inhibition and the isolated compound (20 mg/ml) exhibited 57% α-amylase inhibition [2]. A promising hypoglycemic effect of the methanolic bark extract of S. pinnata, comparable to glibenclamide, has also been reported. The test was carried out in vivo, with the methanolic extract administered at a dose of 300 mg/kg to rats. After 30 min of treatment, the rats were loaded orally with glucose (2 g/kg, p.o.). Blood samples were collected before and at 30, 90, and 150 min after glucose administration; the methanol extract was found to reduce the blood glucose level by 63.12%, and the results were comparable to glibenclamide [64]. Acharyya et al. (2010) tested the hypoglycemic activity of both the methanolic and the aqueous extracts of S.
pinnata roots in vivo using an oral glucose tolerance test and found a significant decrease in blood glucose levels after four hours of treatment, comparable to glibenclamide [65]. 5.12. Antifertility Activity. A study was carried out on adult female Wistar rats to determine the effect of the ethanolic extract of S. mombin leaves on the anterior pituitary, ovary, uterus, and serum sex hormones. The animals received the ethanolic extract at dose levels of 250, 350, and 500 mg/kg b.w. The results showed a significant decrease in the weights of the pituitary, ovary, and uterus of the treated animals, along with a significant reduction in FSH, LH, estradiol, and progesterone levels. Therefore, this study concluded that the extract shows antifertility activity and could be used as a contraceptive [66]. 5.13. Antihypertensive Activity. Das and De (2013) tested the in vitro antihypertensive activity of the aqueous extract of S. pinnata fruit (20 µg/ml). The angiotensin-converting enzyme (ACE) inhibitory activity was assayed using ACE from rabbit lung and N-hippuryl-L-histidyl-L-leucine as a substrate; 50% inhibition of the ACE enzyme was observed [67]. 5.14. Antimicrobial Activity. Arif et al. (2008) tested the in vitro antibacterial activity of the methanolic and aqueous extracts of S. pinnata bark by the cup plate diffusion method at concentrations of 50, 100, and 150 mg. The activity was tested against Escherichia coli, Salmonella Typhimurium, and Vibrio cholerae and compared with penicillin and streptomycin as standard drugs. The methanolic extract showed good antibacterial activity against Gram +ve and Gram −ve bacteria, while the aqueous extract showed only mild antibacterial activity. The resin of S. pinnata also showed antibacterial activity against Bacillus subtilis [46]. The 80% ethanolic extract of S. pinnata fruits showed strong antibacterial activity against both Gram +ve and Gram −ve bacteria; the antimicrobial activity was tested by the disc diffusion method, with standard kanamycin discs (30 µg/disc) and blank discs used as positive and negative controls, respectively [68]. Tapan et al. (2014) isolated two new ergosteryl triterpenes (SP-40, SP-60) from S. pinnata bark and tested their antipseudomonal activity by the agar disc diffusion method against a moderately resistant strain of Pseudomonas aeruginosa MTCC 8158. The tested organism was completely resistant to ampicillin and tetracycline at concentrations of 10 and 30 µg/disc, respectively, while exhibiting an inhibition zone of 15 mm against streptomycin at 100 µg/disc. SP-40 exhibited an inhibition zone of 20 mm, which was better than streptomycin at comparable concentrations. SP-60, however, did not show any antimicrobial activity against this organism up to a concentration of 200 µg/disc. The MIC of SP-40 was thus estimated to be between 12.5 and 25 µg/disc [31]. Olugbuyiro et al. (2013) isolated two new phytosterols, stigmasta-9-en-3,6,7-triol and 3-hydroxy-22-epoxystigmastane, from the methanolic extract of S. mombin stem bark. Both compounds exhibited marked antimycobacterial activity, with 93% inhibition against Mycobacterium tuberculosis in a fluorometric microplate Alamar Blue assay [30]. Furthermore, the methanolic fruit extract of S. purpurea showed strong antimicrobial activity against E. coli and P. aeruginosa in the disc diffusion method [43], and similar results were observed when evaluating the antimicrobial activity of S. dulcis fruit [69]. 5.15. Anthelmintic Activity. The ethanolic and acetone extracts of S.
pinnata bark were tested for anthelmintic activity. Florido and Cortiguerra (2003) and Kumar et al. (2012) found that the ethanolic extract, at concentrations of 50 mg/ml and 100 mg/ml, showed more potent activity than the acetone extract [70,71]. The bark of S. pinnata was shown to exhibit anthelmintic activity against Indian earthworms due to the different glycosides present in the bark. The anthelmintic activity of the chloroform extract of S. pinnata bark (10, 15, and 20 mg/ml) was also tested and showed promising effects [72]. It was further shown that the administration of the chloroform and methanol extracts of S. pinnata bark (300 mg/kg) to Wistar Albino rats produced significant diuretic and laxative activities as compared to the reference standards furosemide and agar [73]. Antiepileptic and Antipsychotic Activity. Ayoka et al. (2006) conducted an in vivo study using the methanolic and ethanolic extracts of S. mombin leaves and showed promising antiepileptic and antipsychotic effects. They also tested the effects of the aqueous, methanolic, and ethanolic extracts of S. mombin on hexobarbital-induced sleep in mice. Animals given hexobarbitone (100 mg/kg i.p.) showed loss of the righting reflex within five minutes of administration. The administration of the aqueous extract (100 mg/kg) significantly decreased the latency of sleep and was more potent in increasing hexobarbitone-induced sleeping time in mice. The methanolic extract did not alter the latency of sleep, whereas it increased the latency time at doses of 12.5 and 50 mg/kg. The three extracts produced a dose-dependent prolongation of hexobarbitone-induced sleeping time in mice [74]. Toxicity It was evident that oral administration of the aqueous, methanolic, and ethanolic extracts of S. mombin leaves (≤5 g/kg) did not produce any toxic effects in mice and rats. Intraperitoneal administration of the aqueous extract (≤200 mg/kg) also did not produce any toxic effects; however, the ethanolic and methanolic extracts (>100 mg/kg) produced toxic symptoms. Lethal effects were observed in mice and rats with the three extracts at a dose of 3.2 g/kg administered i.p. The LD50 in mice was 480 mg/kg for the ethanolic extract, 1.1 g/kg for the methanol extract, and 1.36 g/kg for the aqueous extract. In rats, the LD50 of the ethanolic, methanolic, and aqueous extracts was 620 mg/kg, 1.08 g/kg, and 1.42 g/kg, respectively. The LD50 determination of the extracts was carried out over 48 h of continuous observation [74]. Mondal and Dash (2009) tested the acute in vivo toxicity of the chloroform, methanol, and aqueous extracts of S. pinnata bark. The animals were divided into groups of six animals each. The control group received 1% Tween-80 in normal saline (2 ml/kg, p.o.). The other groups received 100, 200, 300, 600, 800, 1000, 2000, and 3000 mg/kg of the tested extracts, respectively, in a similar manner. Immediately after dosing, the animals were observed continuously for the first 4 h for any behavioural changes. They were then kept under observation for up to 14 days after drug administration to record any mortality. It was found that the chloroform and methanol extracts induced sedation, diuresis, and purgation at all tested doses. However, there was no mortality with any of the extracts at the tested doses up to the end of the observation period [64]. Based on these results, it can be concluded that the aqueous extract is the safest among the tested extracts.
Furthermore, the aqueous extract showed a variety of pharmacological activities using different in vitro and in vivo models which could validate its ethnopharmacological use. This evidence of use and the absence of toxicity can provide an important basis for the development of herbal medicines from the aqueous extract of different Spondias species. Conclusion Presently, there is an increased demand worldwide for the use of natural remedies. Herbal medicines could be used as a complementary or alternative medicine to synthetic drugs, and this requires more laboratory investigations on their pharmacological activities. Many degenerative diseases are associated with oxidative stress. There is an increased demand worldwide for nontoxic, easily accessible, and affordable antioxidants of natural origin. Plants belonging to the genus Spondias were widely used in traditional medicine due to their beneficial therapeutic effects. This is attributed to their diverse bioactive phytoconstituents like phenolics and flavonoids which possess marked antioxidant activity and thus are capable of preventing many degenerative diseases. The present review provides a comprehensive understanding of the chemistry and pharmacology of Spondias species, which may help in the discovery of new candidates for the treatment of various degenerative diseases and health problems.
6,994.4
2018-02-12T00:00:00.000
[ "Biology" ]
Ultrafast dissipative soliton generation in anomalous dispersion achieving high peak power beyond the limitation of cubic nonlinearity The maximum peak power of ultrafast mode-locked lasers has been limited by cubic nonlinearity, which collapses the mode-locked pulses and consequently leads to noisy operation or satellite pulses. In this paper, we propose a concept to achieve mode-locked pulses with high peak power beyond the limitation of cubic nonlinearity with the help of dissipative resonance between quintic nonlinear phase shifts and anomalous group velocity dispersion. We first conducted a numerical study to investigate the existence of high peak power ultrafast dissipative solitons in a fiber cavity with anomalous group velocity dispersion (U-DSAD) and found four unique characteristics. We then built long cavity ultrafast thulium-doped fiber lasers and verified that the properties of the generated mode-locked pulses match well with the U-DSAD characteristics found in the numerical study. The best-performing laser generated a peak power of 330 kW and a maximum pulse energy of 80 nJ with a pulse duration of 249 fs at a repetition rate of 428 kHz. Such a high peak power exceeds that of any previous mode-locked pulses generated from a single-mode fiber laser without post-treatment. We anticipate that the means to overcome cubic nonlinearity presented in this paper can give insight in various optical fields dealing with nonlinearity to find solutions beyond the inherent limitations. Introduction Mode-locked fiber lasers have been favored in various fields where ultrashort pulse durations and high peak powers and intensities are demanded.Ultrashort pulse durations are useful in applications such as range-finding and lidar systems using the time-of-flight method to achieve high resolution and high accuracy [1][2][3][4], and high peak powers of mode-locked pulses are advantageous in nonlinear optics, medical surgery, and material processing [5][6][7][8][9].For these applications, various approaches to enhancing the properties of mode-locked lasers have been researched. One effort is to increase the peak power of mode-locked pulses by shortening the pulse duration or increasing the pulse energy.Increasing peak power remains a subtle and sophisticated challenge due to inherent limitations from nonlinear phase shifts in the fiber medium that can break a mode-locked pulse into pieces and drive the lasers to noisy operation [10][11][12][13].Many efforts have been made to reduce the accumulation of nonlinear phase shifts for pulse amplification in laser cavities or post-amplification; examples include applying large mode area fibers or chirped pulse amplification [14][15][16][17][18][19][20][21][22].These methods reduce the effective nonlinearity by alleviating the mode-locked pulse intensity through dispersing the pulse field in the time domain or over the transverse area of the fiber medium.However, laser systems applying these approaches are still under the limitation of nonlinearity and are complicated by the need for sophisticated parameter control [23][24][25][26][27]. 
Another approach to enhance the properties of mode-locked lasers is to optimize the mode-locking operation regimes for applications, such as soliton, dissipative soliton, ultrafast dissipative soliton, similariton, stretched pulse, and dispersion-managed modelocked lasers, which can be conducted together with the other methods mentioned above [10,[28][29][30][31].Recently, a new mode-locking solution of dissipative resonance has been found in anomalous group velocity dispersion (GVD) [32,33].Typically, dissipative solitons can be generated in normal GVD cavities by the help of dissipative resonance between normal GVD and cubic nonlinearity, where both drive the pulse to have positive chirp [29,30].However, in the case of dissipative solitons in anomalous GVD (DSAD), dissipative resonance derives from anomalous GVD and quintic nonlinearity, where both drive the pulse to have negative chirp when the quintic nonlinearity is negative.The solution of DSAD was first presented by Chang et al. in 2009 [32] based on the cubic-quintic Ginzburg-Landau equation, after which DSAD was experimentally demonstrated by Liu et al. in 2011 with square-shaped pulses in the time domain, where the pulse duration was extended with constant peak power as the pump power increased [33].Following this first realization, DSAD has been further studied [34][35][36][37], but to date demonstrations of DSAD have only shown dissipative soliton pulses with a square-shape in the time domain.In other words, ultrafast dissipative solitons have yet to be generated in anomalous GVD. In this paper, we demonstrate a fiber laser exhibiting ultrafast dissipative solitons in a fiber cavity with anomalous group velocity dispersion, or U-DSAD, with a negative quintic nonlinearity, along with the finding that the generation of ultrafast dissipative solitons can realize a high peak power beyond the limitation of cubic nonlinearity.We first investigate and characterize the properties of U-DSAD through a numerical study with Haus's master equation, based on which we propose a principle to generate an ultrafast mode-locked pulse with a high peak power beyond the limitation of cubic nonlinearity.In short, chirps induced by the quintic nonlinearity of the fiber medium compensate the chirps induced by the anomalous dispersion, and this can be exploited to build U-DSAD.Through the numerical study, four unique properties of U-DSAD are found that are quite different from those of typical solitons and dissipative solitons.Based on the numerical study, we then generate U-DSAD with a low repetition rate mode-locked thulium-doped silica fiber (TDF) laser employing the nonlinear polarization rotation (NPR) method.To realize U-DSAD, a few hundred meters long fiber ring cavity is necessary to achieve high total nonlinearity and high total dispersion values with the singlemode fiber used in this work (Thorlabs SM2000).The total length of the fiber cavity is 485 m, and the repetition rate of the mode-locked laser is 427 kHz with a total GVD of -43.7 ps 2 and cubic nonlinearity of 0.22 W −1 .The constructed laser is confirmed to exhibit the four unique properties of U-DSAD found in the numerical simulation.The pulse duration and energy of the mode-locked pulse are 249 fs and 80 nJ, respectively.Assuming the pulse shape to be Gaussian in the time domain, the peak power of the pulse is estimated to be 330 kW, a peak power exceeding that of any previous singlemode fiber lasers.We anticipate that the present demonstration of U-DSAD is useful for a wide range of 
high peak power applications, such as nanomachining, drilling, corneal surgery, and nonlinear optics. Moreover, as this paper presents a method to achieve high peak power beyond the limitation of cubic nonlinearity, the results can give insight not only to research fields focused on mode-locked fiber lasers but also to a variety of optical fields dealing with optical nonlinearity to overcome inherent nonlinearity.

Principles
The self-phase modulation (SPM) of a mode-locked pulse by nonlinear phase shifts is one of the most important factors to stabilize mode-locked lasers and determine the mode-locking regime. Up to now, cubic nonlinearity has attracted the most interest to express mode-locking regimes and to study mode-locking stability. Quintic nonlinearity, on the other hand, has been mostly neglected except in high power phenomena such as pulse explosion or soliton collisions. However, dissipative soliton (DS) solutions have been found in anomalous GVD on account of quintic nonlinearity, and thus quintic nonlinearity is indispensable to study high peak power mode-locked lasers [32,38,39].

We can write the refractive index with nonlinear phase shifts induced by cubic and quintic nonlinearity as

n(I) = n_0 + n_2 I + n_4 I^2,   (1)

where n_0 is the linear refractive index, and n_2, n_4 are the nonlinear refractive indices by cubic and quintic nonlinearity, respectively. Figure 1a shows the amount of nonlinear phase shift with respect to optical intensity when the value of n_4 is negative. At a low intensity level, the amount of nonlinear phase shift increases linearly with intensity. As the intensity further strengthens, at some point the nonlinear phase shift arising from quintic nonlinearity overwhelms that from cubic nonlinearity, and consequently the total nonlinear phase shift decreases. An increment in the nonlinear phase shift drives the mode-locked pulse to have positive chirp in the time domain, while conversely, a decrement leads to negative chirp. Typically, a DS can be generated from dissipative resonance between two positive chirps induced by normal GVD and cubic nonlinearity. In a similar way, we expect that dissipative resonance can also be made at intensities higher than I_C, but in the opposite direction, by two negative chirps derived from anomalous GVD and negative quintic nonlinearity [32][33][34][35][36][37]. The intensity levels for stable solutions of DS and U-DSAD are marked in Fig. 1a.

Figure 1b shows the chirp direction in a typical DS pulse, where the arrows point in the direction of increasing optical field frequency. For example, positive chirp, where the high-frequency optical components are delayed in the time domain, points in the + direction. Chirps induced by normal GVD and by nonlinearity have the same direction and are quite uniform throughout the DS pulse. Correspondingly, Fig. 1c shows the chirp direction in U-DSAD driven by anomalous GVD and nonlinearity. While the amount and direction of chirp induced by GVD are almost uniform throughout the pulse, the direction of chirp induced by nonlinearity is a function of optical intensity for U-DSAD. The pulse center, where the intensity is higher than I_C, has negative chirp, as GVD does. In contrast, the trailing and leading edges of the pulse, where the intensity is lower than I_C, are chirped in the opposite direction by cubic nonlinearity. We anticipate that this nonuniform chirp structure of U-DSAD creates unique characteristics, which we explore via numerical calculations in the next section.
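As a rough illustration of the turnover sketched in Fig. 1a, the short Python snippet below evaluates the cubic-quintic nonlinear phase shift of Eq. (1) over a propagation length and locates the intensity I_C = -n_2/(2 n_4) where the phase shift peaks when n_4 is negative. The numerical values of n_2, n_4, wavelength, and length are placeholders chosen for illustration only, not the parameters of the laser studied here.

```python
import numpy as np

# Placeholder coefficients (illustrative only, not the values used in the paper)
n2 = 2.5e-20          # cubic nonlinear index, m^2/W
n4 = -1.0e-33         # quintic nonlinear index, m^4/W^2 (negative, as assumed here)
wavelength = 2.0e-6   # operating wavelength, m
length = 1.0          # propagation length, m

I = np.linspace(0.0, 5e13, 1000)                       # optical intensity, W/m^2
delta_n = n2 * I + n4 * I**2                           # cubic + quintic index change
phi_nl = (2 * np.pi / wavelength) * delta_n * length   # accumulated nonlinear phase, rad

I_c = -n2 / (2 * n4)                                   # intensity where the phase shift peaks
print(f"Turnover intensity I_C = {I_c:.3e} W/m^2")
print(f"Maximum nonlinear phase = {phi_nl.max():.2f} rad")
```

Above I_C the slope of the phase shift versus intensity becomes negative, which is the regime exploited for dissipative resonance with anomalous GVD.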
Numerical calculation
To find stable solutions of U-DSAD, we conducted numerical calculations of Haus's master equation with cubic-quintic nonlinearity. Haus's master equation in fundamental form with cubic-quintic nonlinearity is written as follows (Eq. (2)) [38][39][40][41][42], where E_{n+1}, E_n denote the electric field that has completed n + 1 and n round trips in the laser cavity, respectively, l_0 is the total loss of the cavity including the fibers and components in the cavity, γ, ν are factors of cubic and quintic nonlinearity, respectively, L is the total length of the fiber cavity, β_2(ω) is the GVD parameter, q(t) is the saturable absorption, G(ω,t) is the gain that depends on optical frequency and time, g_0(ω_0) is the small signal gain at the maximum point in the optical frequency domain, q_0 is the small signal absorption of the saturable absorber, and P_sat,g and P_sat,q are the saturation powers of the gain and saturable absorption, respectively. Cubic and quintic nonlinearity are expressed as γ = n_2/k_0 and ν = n_4/k_0, respectively. The spectral gain shape is assumed to be Gaussian, as shown in Eq. (3), with a spectral gain bandwidth of ω_bw. As Haus's master equation is an integrated form of the nonlinear Schrödinger equation (or Ginzburg-Landau equation) over a round trip, and the total absolute values per round trip of GVD, nonlinearity, and loss are too large to merge into a single value, these have to be calculated with a split-step Fourier method.

For the calculations, we set the values of the mode-locked laser as shown in Table 1. In this condition, we found a soliton mode-locking operation with the help of anomalous GVD and cubic nonlinearity. Figure 2a shows stable mode-locked operation with soliton pulses in the time domain when P_sat,g = 6 mW, γ = 2 × 10^−3 W^−1 m^−1, ν = 0. The optical spectrum of soliton operation with a Kelly sideband and hyperbolic secant spectral shape is shown in Fig. 2d.

Since our interest is to discover the solution of U-DSAD at higher power levels, we checked the intensity profile in the time domain with increasing P_sat,g. When P_sat,g is 63 mW, the intensity profile becomes noisy without quintic nonlinearity, as shown in Fig. 2b and e. With some amount of quintic nonlinearity, ν = −5.55 × 10^−7 W^−2/m, however, we found that a solution of stable mode-locking indeed exists, as shown in Fig. 2c and f. Another noteworthy property is that the optical spectrum of a mode-locked pulse with quintic nonlinearity exhibits a trapezoidal shape with a flat top. A trapezoidal optical spectrum is not typical of mode-locked lasers, but some dissipative solitons have been observed to exhibit such a spectrum shape when the total nonlinearity and GVD of the cavity are high enough [43,44]. The stable mode-locked pulses in Fig. 2c and f appear to be built by dissipative resonance between anomalous GVD and quintic nonlinearity. Based on the results, we presume that quintic nonlinearity is an important factor for the solution of stable mode-locking with high peak power and high pulse energy.
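A minimal sketch of the split-step Fourier propagation mentioned above is given below. It is not the authors' code: the gain, saturable absorption, and spectral filtering terms of the full master equation are omitted for brevity, the time grid and seed pulse are arbitrary, and only γ, ν, and the per-metre GVD (chosen so that 485 m gives roughly the quoted total of -43.7 ps^2) are taken from the values given in the text.

```python
import numpy as np

def split_step_round_trip(E, dt, L, beta2, gamma, nu, n_steps=1000, loss=0.0):
    """Propagate the field E(t) over one cavity pass of length L [m] with a
    symmetric split-step Fourier method: GVD (beta2, s^2/m) and loss are applied
    in the frequency domain, cubic (gamma) and quintic (nu) nonlinear phase in
    the time domain.  Gain and saturable absorption are omitted for brevity."""
    dz = L / n_steps
    w = 2 * np.pi * np.fft.fftfreq(E.size, d=dt)                 # angular frequency grid
    half_disp = np.exp((1j * beta2 / 2 * w**2 - loss / 2) * dz / 2)
    for _ in range(n_steps):
        E = np.fft.ifft(half_disp * np.fft.fft(E))               # half dispersion step
        P = np.abs(E) ** 2
        E *= np.exp(1j * (gamma * P + nu * P**2) * dz)           # cubic + quintic SPM
        E = np.fft.ifft(half_disp * np.fft.fft(E))               # half dispersion step
    return E

# Arbitrary grid and seed (placeholders); gamma, nu, and per-metre beta2 follow the text
dt = 50e-15                                   # time-grid resolution, s
t = np.arange(-2**13, 2**13) * dt
E0 = np.sqrt(1e3) / np.cosh(t / 1e-12)        # 1 kW peak, ~1 ps sech seed pulse
E1 = split_step_round_trip(E0, dt, L=485.0,
                           beta2=-9.0e-26,    # s^2/m, i.e. about -43.7 ps^2 over 485 m
                           gamma=2e-3, nu=-5.55e-7)
print("Peak power after one pass:", np.max(np.abs(E1) ** 2), "W")
```

In a full cavity model this propagation step would be alternated with the gain, saturable-absorption, and output-coupling operations until the field converges from noise to a steady mode-locked solution.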
To clarify that dissipative resonance is one of the key factors for mode-locking solutions at high peak power, we conducted numerical calculations of Haus's master equation over a broad range of parameter values. Figure 3a shows the parameter ranges for stable mode-locking solutions with respect to gain saturation power (P_sat,g) and quintic nonlinearity (ν) from the numerical calculations. Each colored point in the figure indicates a condition where stable mode-locking is observed: red indicates fundamental mode-locking, and green, blue, and gray indicate 2nd, 3rd, and 4th harmonic mode-locking, respectively. The results distinctly show different solution regimes in terms of gain saturation power. Under 10 mW, typical soliton solutions are found with hyperbolic secant-shaped pulses, and the solution regime seems to be independent of the value of quintic nonlinearity. It is natural that soliton solutions are constructed by cubic nonlinearity and anomalous dispersion when the effect of quintic nonlinearity is low. But with increasing gain saturation power, other solutions are found that exhibit atypical properties of mode-locking. When ν = −2.22 × 10^−7 W^−2/m, solutions are found from 75 to 168 mW, where no stable mode-locking solution has previously been found with cubic nonlinearity alone. Unlike typical soliton solutions at low P_sat,g, the solutions above a P_sat,g of 25 mW depend on the amount of quintic nonlinearity, where the value of P_sat,g for a stable solution regime decreases as that of quintic nonlinearity increases. Another remarkable point is that some of these solutions exhibit a flat-top optical spectrum. The parameter values of fundamental mode-locking solutions with a flat-top spectrum are marked on the parameter map in yellow in Fig. 3a.

To look inside these mode-locking solutions, the instantaneous frequency of a mode-locked pulse at low P_sat,g is compared to one at high P_sat,g. Figure 4a shows the instantaneous frequency of a pulse with typical soliton mode-locking. The pulse chirp, which is defined as the slope of the instantaneous frequency in the time domain, is the lowest at the center of the pulse due to counteraction between cubic nonlinearity and anomalous GVD. The leading and trailing edges of a soliton pulse have stronger chirp in the direction of GVD due to decrements in the chirp induced by cubic nonlinearity at low power. One solution at high P_sat,g shows different properties in terms of chirp, as shown in Fig. 4b. Here, the pulse center has the strongest chirp, while the leading and trailing edges have lower chirp. The strongest chirp at the center of the pulse is -583.3 THz/ns. At the pulse center, strong chirp can be made by the coaction of quintic nonlinearity and anomalous GVD, which induce chirp in the same direction. At each edge with low power, cubic nonlinearity is dominant, and consequently, the amount of chirp decreases. This characteristic matches well with our expectation about U-DSAD shown in Fig. 1c. Based on these properties, we can say that the solutions found at higher P_sat,g are U-DSAD built by dissipative resonance between anomalous dispersion and quintic nonlinearity. Due to the fact that the solutions are built by nonlinearity, optical spectra in the solution region have various shapes with respect to P_sat,g. Figure 5a shows an optical spectrum with a shape totally different from that in Fig.
2f. Another atypical characteristic of the U-DSAD solutions is the feasibility of harmonic mode-locking with any shape of the optical spectrum. Generally, harmonic mode-locking with a passive mode-locker can be realized in soliton operation with a hyperbolic secant optical spectrum; harmonic mode-locking is not typical for DSs. Some studies show that harmonic mode-locking of DSs in normal dispersion can be made by strong spectral filtering to saturate the pulse energy. However, in our calculations, no spectral filtering effect is included in the cavity. Harmonic mode-locking of U-DSAD arises naturally from the coaction of nonlinear phase shifts and GVD with various optical spectrum shapes, as shown in Fig. 5b and c. The reason why the spectra in Fig. 5b and c look noisy is that the longitudinal modes corresponding to odd mode orders are suppressed for 2nd harmonic mode-locking.

Based on the numerical study, we found mode-locking solutions of U-DSAD and characterized four unique properties, as follows.
I. The pulse energy of U-DSAD is much higher than the typical soliton pulse energy; at most, it is about 18 times higher in the current work.
II. Some of the solutions exhibit a flat-top spectrum, which has not previously been found in soliton solutions with anomalous dispersion.
III. The optical spectra of U-DSAD vary with respect to P_sat,g.
IV. Harmonic mode-locking with dissipative resonance, without spectral filtering, is feasible.
We believe that these characteristics distinguish DS from U-DSAD and can provide evidence of U-DSAD generation in fiber lasers.

Experiments
Based on the numerical calculations, both anomalous dispersion and high nonlinearity with sufficient quintic nonlinear effects are necessary to realize U-DSAD in fiber lasers. Most experimental cases of DSAD have been demonstrated with TDF lasers in previous studies [32][33][34][35][36][37]. This is assumed to be because silica fiber has a negative value of quintic nonlinearity that properly matches the anomalous dispersion for dissipative resonance at a wavelength of 2 µm at the average power of typical single-mode fiber lasers (under a few hundred mW). Accordingly, we presume that low repetition rate mode-locked TDF lasers are the best choice for U-DSAD [32,33,40,44]. A laser with a low repetition rate can achieve a high pulse energy by elongating the fiber cavity to a few hundred meters [45][46][47]. Furthermore, the amount of dissipative resonance is proportional to the length of the fiber cavity. Numerous factors determine the solution of mode-locked pulses, including spectral shape, recovery dynamics of gain and saturable absorption, amount of gain and loss at each component, etc. To maximize dissipative resonance, a long fiber cavity is essential.

Following this idea, we built low repetition rate mode-locked TDF lasers for U-DSAD. The host material of the fiber is silica. Figure 6 shows a schematic of the low repetition rate mode-locked TDF laser for U-DSAD. For effective allocation of gain along the long fiber cavity, two TDF amplifiers were made. Each TDF amplifier is made of 4 m long TDF (Nufern SM-TDF10/130) pumped by a laser diode (LD) at a wavelength of 793 nm through a beam combiner (BC). The rest of the passive fiber is single-mode silica fiber (Thorlabs SM2000). Mode-locking of the laser is realized by the NPR technique since the long fiber cavity has a large amount of loss from silica glass absorption at 2 µm wavelength, and thus strong saturable absorption is essential [40][41][42].
We tested four different lengths of TDF lasers to find a sufficient fiber cavity length for U-DSAD.The total lengths of the fiber cavities were 159 m, 262 m, 314 m, and 485 m with all-single-mode fibers.Since the material dispersion of silica glass at a wavelength of 2 µm is known as anomalous and since waveguide dispersion is not dominant in single-mode fibers with a core diameter of 8-10 µm, the total GVD of the silica fiber cavity is anomalous.The total GVD and cubic nonlinearity of the laser cavity with a 485 m long fiber cavity are -43.7 ps 2 and 0.22 W −1 , respectively. Output characteristics of the mode-locked pulses generated by the low repetition rate mode-locked TDF laser with a cavity length of 159 m are shown in Fig. 7. Figure 7a shows the mode-locked pulse power profile in the time domain with a repetition rate of 1.3 MHz at 9 W pump power.As shown in Fig. 7b, the laser operation exhibits 2nd harmonic mode-locking when the pump power increases to 13 W.In U-DSAD operation, the fundamentally mode-locked pulse of the fiber laser is split into multiple pulses, and the laser operation evolves to stable harmonic mode-locking in a few seconds by the pump power increment.Figure 7c displays an intensity autocorrelation of mode-locked pulses with a 280 fs pulse duration for fundamental, 2nd harmonic, and 3rd harmonic mode-locking.Based on these results, up to now, the laser operation seems to be typical soliton mode-locking operation.But as shown in Fig. 7d, the pulse energy is significantly higher than soliton solutions.The maximum pulse energy was 20 nJ at a pump power of 11 W, where the pulse energy is evaluated directly from the average output power and repetition rate of the mode-locked pulses.To check the validity of this pulse energy evaluation, we confirmed that no continuous wave power was observed between modelocked pulses compared to the noise signal of the detector when the input optical power was zero.The noise signal and mode-locked pulse signal are shown in the inset of Fig. 7a. Typically, the maximum pulse energy of soliton solutions is under 10 nJ with singlemode all-fiber lasers [48][49][50].In addition to this considerable difference, as shown in Fig. 
8a, the obtained optical spectrum of mode-locking differs from a soliton spectrum. The shape of the spectrum is close to Gaussian rather than the hyperbolic secant function, and no Kelly sideband peak is observed. Accordingly, we presumed that the mode-locked pulses were built by dissipative resonance between anomalous dispersion and quintic nonlinearity, and we tested this conjecture with the other low repetition rate mode-locked TDF lasers with longer fiber cavity lengths. Figure 8b-d plot the optical spectra of mode-locked pulses with fiber cavity lengths of 262 m, 314 m, and 485 m, respectively. As shown in Fig. 8d, the 485 m long fiber cavity exhibits a totally flat-top spectrum with a trapezoidal shape. This kind of spectrum can typically be found in DSs in a fiber cavity with high dispersion and nonlinearity [43,44]. It is worth noting that the flat-top spectrum cannot be achieved with fiber cavity lengths of 159 m, 262 m, or 314 m even if the pump power is increased to realize sufficient nonlinearity. We believe that the high total dispersion value of the long-length fiber cavity is crucial for the flat-top spectrum, to achieve sufficient dissipative resonance without being disturbed by gain or any other components that can affect the soliton solutions. Figure 9 shows the mode-locked pulse energy with respect to pump power for the four different fiber cavity lengths. With the 159 m long fiber cavity, the maximum pulse energy was 36.1 nJ at 10 W pump power. The maximum pulse energy increased to 64 nJ with the 314 m long fiber cavity, and further increased to 80 nJ with the 485 m long fiber cavity. All of the maximum pulse energies were observed in fundamental mode-locking operation.

Figure 10 shows the characteristics of the mode-locked laser output generated from the 485 m long fiber cavity. Figure 10a and b plot the time domain intensity profile and RF spectrum, respectively, which show stable mode-locking operation at a 427 kHz repetition rate. With this mode-locked thulium-doped fiber laser, we found that the shape of the optical spectrum depends on the pump power. Figure 10e plots the optical spectra with respect to various pump powers. The shape of the optical spectrum is mostly Gaussian at a pump power of 9 W, while at a pump power of 11 W, the spectrum shape changes to have steep edges at short wavelengths and rounded edges at long wavelengths. When the pump power is further increased to 13 W, the optical spectrum exhibits a flat-top shape. This trapezoidal shape of the optical spectrum appears almost identical to Fig.
2f, which shows the U-DSAD solution found via numerical simulation. It should be noted that the polarization states of the output lasers differ with the setup for each generated result, since the output laser polarization strongly depends on the birefringence of the fiber cavity and the manipulation of the polarization controller (PC) in the cavity. Nevertheless, fundamentally mode-locked pulses and harmonic mode-locked pulses generated from a given laser setup are expected to have almost identical polarization states, due to the fact that the PC and the birefringence of the fiber are not manipulated when the pump power is increased. Based on the results of Figs. 9c and 10c, we evaluated the peak power of the mode-locked pulse to be 330 kW with the assumption that the pulse shape is Gaussian. Such a high peak power has to date not been observed in former studies on single-mode fiber lasers. We believe that dissipative resonance in anomalous dispersion led to this high peak power, based on the fact that the experimental results exhibited the same four properties that were found in the numerical simulations, as follows. Matching the first property stated at the end of the "Numerical calculation" section, the mode-locked pulse energy of 80 nJ found here is higher than any previously reported pulse energy of mode-locked single-mode fiber lasers. Second, as shown in Fig. 8d, the low repetition rate mode-locked thulium-doped fiber laser exhibits a flat-top optical spectrum, and moreover, the trapezoidal spectrum appears highly analogous to the optical spectrum of U-DSAD in Fig. 2f. Third, the optical spectra of the mode-locked pulses depend on the pump power. Such optical spectrum changes in anomalous dispersion are atypical, although some dispersion-managed mode-locked pulses have been shown to exhibit pump power dependence. Fourth, the present ultrafast mode-locked laser shows harmonic mode-locking. While harmonic mode-locking in anomalous dispersion is not atypical, harmonic mode-locking with a flat-top shaped spectrum is quite unusual. We also note that typical DSs do not exhibit harmonic mode-locking without artificial treatment, such as strong spectral filtering [51][52][53][54].

For a clear comparison of the current work with previous studies, Fig. 11a plots the pulse energy and duration from references related to high peak power or high pulse energy [30,45,47,49,[55][56][57][58][59][60][61][62][63][64][65][66][67][68][69][70][71]. Based on the data in Fig. 11a, the peak power is roughly estimated and organized as in Fig. 11b. The highest peak power of mode-locked pulses among the references is about 17 kW; by comparison, our TDF laser exhibits about an 18 times higher peak power. We believe that this phenomenon is only made possible by the new laser dynamics called U-DSAD.
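As a rough consistency check of the reported figures, the standard ring-cavity and Gaussian-pulse relations can be applied as below. The group index and the Gaussian deconvolution factor are assumptions, so the result is indicative of the order of magnitude rather than a reproduction of the exact reported 330 kW value.

```python
import math

# Assumptions: group index n_g ~ 1.45 for silica fiber, Gaussian pulse shape.
c = 3.0e8                      # speed of light, m/s
n_g = 1.45                     # assumed group index of the fiber
L = 485.0                      # cavity length, m
f_rep = c / (n_g * L)          # ring-cavity repetition rate
print(f"Repetition rate ~ {f_rep / 1e3:.0f} kHz")   # ~427 kHz, matching the text

E_pulse = 80e-9                # measured pulse energy, J
tau_fwhm = 249e-15             # measured pulse duration (FWHM), s
# For a Gaussian pulse: P_peak = 2*sqrt(ln 2 / pi) * E / tau_FWHM ~ 0.94 E / tau
P_peak = 2 * math.sqrt(math.log(2) / math.pi) * E_pulse / tau_fwhm
print(f"Peak power ~ {P_peak / 1e3:.0f} kW")         # of order 300 kW
```

The exact factor relating energy, duration, and peak power depends on the assumed pulse shape, which is why the estimate is quoted only as an order-of-magnitude check.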
Conclusion
We discovered and characterized the properties of U-DSAD (ultrafast dissipative solitons in a fiber cavity with anomalous group velocity dispersion) based on simulations of laser dynamics including quintic nonlinearity. The four properties of U-DSAD found in this study are distinct from those of typical solitons or dissipative solitons. Furthermore, the solution of U-DSAD exhibits a high peak power beyond the cubic nonlinearity limitation of typical solitons. Based on the simulation results and previous studies on DSAD in TDF lasers, we built a TDF laser that can generate mode-locked pulses having atypical properties. Their characteristics do not match any other known laser solutions but fit well with the four unique properties of U-DSAD found via simulation. The highest peak power of the laser was about 330 kW, an unprecedentedly high value compared to former studies on mode-locked fiber lasers with single-mode fibers. We anticipate that this first demonstration of U-DSAD will be beneficial to applications related to high peak power, such as surgical laser scalpels or nonlinear optics.

Fig. 1 (a) Nonlinear phase shift with respect to optical intensity. (b) Optical power profile and chirp direction in a typical dissipative soliton pulse with normal GVD and cubic nonlinearity. (c) Optical power profile and chirp direction in an ultrafast dissipative soliton pulse with anomalous GVD and quintic nonlinearity.
Fig. 4 (a) Power profile of a soliton pulse in the time domain and its instantaneous frequency with ν = 0, P_sat,g = 9 mW. (b) Power profile of U-DSAD in the time domain and its instantaneous frequency with ν = −5.55 × 10^−7 W^−2/m, P_sat,g = 63 mW.
Fig. 5 (a) Optical spectrum of fundamental mode-locking with ν = −5.55 × 10^−7 W^−2/m, P_sat,g = 78 mW. (b) Optical spectra of 2nd harmonic mode-locking with ν = −5.55 × 10^−7 W^−2/m, P_sat,g = 141 mW, and (c) with P_sat,g = 165 mW. The inset in (b) shows the optical power profile in the laser cavity evolving to 2nd harmonic mode-locking.
Fig. 7 Output characteristics of mode-locked pulses generated from a low repetition rate mode-locked TDF laser with a cavity length of 159 m. (a) Optical power profile of fundamental mode-locked pulses in the time domain (inset: mode-locked pulse signal compared to the noise level of the detector). (b) 2nd harmonic mode-locked pulses in the time domain. (c) Intensity autocorrelation signals for fundamental, 2nd harmonic, and 3rd harmonic mode-locked pulses. (d) Pulse energy with respect to pump power.
Fig. 8 Optical spectra of mode-locked pulses generated by a low repetition rate mode-locked TDF laser for U-DSAD with a fiber cavity length of (a) 159 m, (b) 262 m, (c) 314 m, and (d) 485 m.
Fig. 9 Mode-locked pulse energy with respect to pump power from the low repetition rate mode-locked fiber laser with a fiber cavity length of (a) 159 m, (b) 314 m, and (c) 485 m.
Fig. 11 (a) Pulse energy and pulse duration, and (b) roughly estimated peak power from the current study and previous studies related to high peak power or high pulse energy single-mode fiber lasers.
Table 1 Definition of the parameters and their values used in the numerical calculation.

Abbreviations: GVD, group velocity dispersion; DSAD, dissipative solitons in anomalous GVD; TDF, thulium-doped silica fiber; NPR, nonlinear polarization rotation; SPM, self-phase modulation; DS, dissipative soliton; U-DSAD, ultrafast dissipative solitons in a fiber cavity with anomalous group velocity dispersion; LD, laser diode; BC, beam combiner.
6,813.8
2023-10-16T00:00:00.000
[ "Physics" ]
Human-machine interactions based on hand gesture recognition using deep learning methods INTRODUCTION Human-machine interaction is one of the important aspects of modern information technologies. Hand gesture recognition technologies are an innovative approach to providing convenient and natural user interaction with computers and electronic devices. One of the most effective and promising methods for hand gesture recognition is the application of deep learning. In recent years, deep learning has led to significant breakthroughs in pattern recognition [1], computer vision [2]-[4], and natural language processing [5]-[8]. These methods make it possible to automatically extract high-level features from complex data, which has made them especially attractive for hand gesture recognition problems [9]-[11]. This study is devoted to the research and development of a technology for human-machine interaction based on hand gesture recognition using deep learning methods. The main goal of this research is to create an efficient and accurate hand gesture recognition system that will allow users to interact with computers and other devices in a natural and intuitive way. To evaluate the performance of the developed models, experiments were carried out on various data sets with real hand gestures. The system's accuracy, speed, and stability are evaluated to determine its effectiveness and applicability in various scenarios. The tasks considered for hand gesture recognition using deep learning methods can be diverse, e.g., hand gesture recognition, mouse cursor control, interaction with virtual objects, and control of robots and drones. This research is highly relevant and has great potential for application in various areas of human activity, particularly for natural interaction in real-world applications, where deep learning can improve the accuracy and efficiency of the system. In general, hand gesture recognition technology [12]-[15] has significant potential and relevance in the modern world. Its development and application can lead to new innovative products and an improved user experience, which makes this topic very important for research and development in the field of computer technology. Tussupov et al. [16] present a learning-based approach to denoising without the use of clean data. The authors have shown that deep neural networks can be trained with a pair of noisy images without the need for clean training data. Lehtinen et al. [17] propose a method for blind defocusing (blur removal) on images using conditional generative adversarial networks (GANs). The authors demonstrated that deep learning with cGAN can successfully recover images with different blur levels. Zhang et al. [18] proposed the residual dense network (RDN) for the problem of image super-resolution. RDN combines densely connected blocks and residual blocks to achieve outstanding results in super-resolution problems. Huang et al.
[19] present a method called Noise2Void that allows a deep neural network to be trained to denoise based on a single noisy image without using clean data. This provides an efficient solution for object recognition in noisy images. Zhang and Oh [20] present a deep convolutional network that denoises Monte Carlo ray-traced images. This method is mainly applied in computer graphics, but it can be adapted to other problems of object recognition in noisy images. Zhang et al. [21] propose a deep learning model for processing noisy and dark images, which is an important aspect of object recognition in low light conditions. Zhang et al. [22] explore the application of networks with deep residual blocks and an attention mechanism to the problem of image super-resolution. Improving image resolution can also be an important step in object recognition in noisy images. Izadi et al. [23] present a deep convolutional network capable of denoising images obtained by Monte Carlo ray tracing. This method is mainly used in computer graphics, but it can be adapted to other problems of object recognition in noisy images. Lee and Jeong [24] present the Noise2Self method, which uses self-supervision to train a deep neural network on noisy images without the need for clean data. This approach also demonstrates good results in denoising and object recognition problems. Baguer et al. [25] present the deep image prior method, which makes it possible to recover clean images from noisy data without training. This approach is based on the properties of the deep neural network architecture. In study [26], a deep learning model was proposed for processing noisy and dark images, which is an important aspect of object recognition in low light conditions. METHODS The use of MediaPipe hand recognition for human-machine interaction based on hand gesture recognition provides new opportunities for creating intuitive, interactive, and user-friendly user interfaces, which contributes to the development of more advanced and smart systems. MediaPipe hand recognition is powered by deep neural networks trained on huge datasets containing many real-life images and videos with a variety of hand gestures and finger positions. The trained models are used to classify and identify key points on the hand, such as the fingertips, knuckles, and wrist. MediaPipe hand recognition can process both static images and real-time video streams, making it useful for various applications, such as hand gesture recognition in interactive systems, virtual and augmented reality, mouse cursor and interface control using hand movements, and collaborative applications where users can draw, write, or interact with content using hand gestures. In the presented work, the MediaPipe hands library was used to detect key points on the hands in the video stream from a webcam. Specifically, the coordinates of the index fingertip in the image are determined and then used to control the position of the mouse cursor on the computer screen. Human-machine interaction based on hand gesture recognition using deep learning methods such as convolutional neural networks (CNN) and long short-term memory (LSTM) recurrent neural networks is an exciting and promising area of research and development. The considered CNN and LSTM methods make it possible to create more efficient models for processing video streams with hand gestures and analyzing gesture sequences, which improves the accuracy and reliability of the recognition system. In this context, CNNs are
used to process images of hand gestures and extract meaningful features.CNNs are particularly efficient at processing visual information and automatically extract characteristic patterns and structures in images.For successful gesture recognition, CNNs can be used to classify hand images into different categories of gestures. On the other hand, long short-term memory recurrent neural networks are applied to analyze sequences of hand gestures.LSTMs allow for context and dependencies between successive video frames, making them suitable for analyzing temporal data such as hand gestures.This allows you to create models that can capture the dynamics of gestures and understand the sequence of actions.This approach finds application in various fields such as smart device control, medical applications, virtual and augmented reality, interactive systems, and more.The use of CNNs and LSTMs in hand gesture recognition helps create more accurate and reliable human-machine interaction systems, making the interaction more natural and comfortable for users. The process of collecting data for training CNN and LSTM models begins with initializing the window and camera, where the parameters for the size of the video stream display window are set and the camera is configured.Hand recognition is then initialized using the MediaPipe library, configuring parameters such as the maximum number of hands to detect and the confidence level.In the main video stream processing loop, each frame is captured, flipped, converted to RGB format and passed to the hand recognition model.If hands are detected, the coordinates of the pointer and thumb are extracted and the distance between them is calculated to determine gestures.Finger pointer coordinates are converted to screen coordinates using the PyAutoGUI library to control the mouse cursor as well as define gesture actions.The data is collected and stored in a comma-separated values (CSV) file "hand_movement_data.csv" for subsequent training of CNN and LSTM models.The current action is displayed on the frame and the process continues until the "q" key is pressed, at which point the camera is released, the windows are closed, and the data is saved. Thus, the whole process allows you to collect data for training CNN and LTSM models for hand gesture recognition and mouse cursor control based on hand position.Once data is collected, these models can be trained and used to create an interactive application for controlling the mouse cursor using hand gestures.The process of training neural networks based on the above codes includes several stages: Data preparation: First, the data must be loaded from the "hand_movement_data.csv" file using the Pandas library.The data is then divided into features (X) and labels (y), which are the coordinates of the fingertips and the coordinates of the mouse cursor, respectively.The data is scaled using MinMaxScaler so that feature and label values are in the range 0 to 1. 
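A rough sketch of the data-collection and cursor-control loop described above is given below. This is a minimal illustration, not the authors' code: the MediaPipe landmark indices used (8 for the index fingertip, 4 for the thumb tip), the pinch threshold, and the CSV column layout are assumptions made for the example.

```python
import csv
import cv2
import mediapipe as mp
import pyautogui

mp_hands = mp.solutions.hands
screen_w, screen_h = pyautogui.size()

with mp_hands.Hands(max_num_hands=1, min_detection_confidence=0.7) as hands, \
     open("hand_movement_data.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["index_x", "index_y", "thumb_x", "thumb_y", "cursor_x", "cursor_y"])
    cap = cv2.VideoCapture(0)
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        frame = cv2.flip(frame, 1)                     # mirror the image for natural control
        rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)   # MediaPipe expects RGB input
        results = hands.process(rgb)
        if results.multi_hand_landmarks:
            lm = results.multi_hand_landmarks[0].landmark
            index_tip, thumb_tip = lm[8], lm[4]
            # Map normalized fingertip coordinates to screen coordinates
            cx, cy = int(index_tip.x * screen_w), int(index_tip.y * screen_h)
            pyautogui.moveTo(cx, cy)
            # Pinch gesture: a small index-thumb distance triggers a click (threshold assumed)
            dist = ((index_tip.x - thumb_tip.x) ** 2 + (index_tip.y - thumb_tip.y) ** 2) ** 0.5
            if dist < 0.04:
                pyautogui.click()
            writer.writerow([index_tip.x, index_tip.y, thumb_tip.x, thumb_tip.y, cx, cy])
        cv2.imshow("hand tracking", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):          # press "q" to stop collection
            break
    cap.release()
    cv2.destroyAllWindows()
```

For the training stage described in the data-preparation paragraph, a minimal Keras sketch consistent with the pipeline might look as follows; the sequence length, layer sizes, and number of epochs are illustrative assumptions, not the authors' exact configuration.

```python
import numpy as np
import pandas as pd
from sklearn.preprocessing import MinMaxScaler
from sklearn.model_selection import train_test_split
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense

df = pd.read_csv("hand_movement_data.csv")
X = df[["index_x", "index_y", "thumb_x", "thumb_y"]].values   # fingertip features
y = df[["cursor_x", "cursor_y"]].values                       # cursor-position labels

x_scaler, y_scaler = MinMaxScaler(), MinMaxScaler()
X, y = x_scaler.fit_transform(X), y_scaler.fit_transform(y)   # scale to [0, 1]

# Group consecutive samples into short sequences so the LSTM sees the motion history
seq_len = 10
X_seq = np.array([X[i:i + seq_len] for i in range(len(X) - seq_len)])
y_seq = y[seq_len:]
X_train, X_test, y_train, y_test = train_test_split(X_seq, y_seq, test_size=0.2, shuffle=False)

model = Sequential([
    LSTM(64, input_shape=(seq_len, X.shape[1])),
    Dense(32, activation="relu"),
    Dense(2),                                  # predicted (cursor_x, cursor_y), scaled
])
model.compile(optimizer="adam", loss="mse", metrics=["mae"])
history = model.fit(X_train, y_train, validation_data=(X_test, y_test),
                    epochs=20, batch_size=32)
model.save("hand_movement_lstm_model.h5")      # saved model used later for real-time control
```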
RESULTS AND DISCUSSION In this paper, the deep learning methods discussed, such as CNN and LSTM, allow the development of efficient gesture recognition systems that can simulate natural interaction between a person and a computer, which has promising prospects in various fields. One of the main advantages of using CNNs in hand gesture recognition is the ability to automatically extract important features from visual data. CNNs trained on large datasets can recognize hand and gesture features, making them effective tools for classifying various gestures. The ability to use pre-trained CNN models can also significantly reduce training costs and improve system speed. Using CNN and LSTM recurrent neural network methods to control the mouse cursor on a computer or device using hand gestures is an interesting and innovative technology that can significantly improve user experience and make interaction with a computer more natural and convenient. One of the main advantages of using CNNs and LSTMs is their ability to process visual data and analyze sequences of hand movements. CNNs can automatically extract features from hand images, which allows the system to recognize various gestures such as cursor movement, clicks, and scrolling with high accuracy. LSTM, in turn, provides analysis of sequences of gestures. This is especially useful when moving the cursor, as users can perform complex hand movements to accurately position the cursor. LSTM allows the system to take into account the dynamics of movements and adapt to various gestures, which makes cursor control more precise and intuitive. One application of such technology could be the use of hand gestures to control the cursor in virtual or augmented reality. This can provide a convenient and interactive way to interact with virtual objects and environments. However, there are some challenges and limitations that need to be taken into account. First, training efficient CNN and LSTM models requires large amounts of data so that the system can generalize well. Collecting and labeling such data can be a time-consuming process. Secondly, the system must be able to process real-time gestures for a comfortable user experience. This requires optimization and efficient processing of the video stream with gestures. Consideration should also be given to the system's resilience to varying conditions, such as changing lighting, background noise, or varying hand postures. The system must work reliably and accurately in a variety of scenarios. In general, the use of CNN and LSTM techniques to control the mouse cursor with hand gestures represents a promising technology that can make computer interaction more natural and convenient. This allows users to control the cursor and perform actions on the computer or device more efficiently and intuitively, see Figure 1. However, for the successful implementation of such a system, it is necessary to take into account the challenges associated with training models, processing data in real time, and ensuring stable operation in various conditions. The trained model is saved to the "hand_movement_lstm_model.h5" file using the save function. Finally, a graph is plotted of the change in the loss function and the mean absolute error on the training and test samples during the training process. As a result of executing the code, we obtain a trained LSTM model that can predict the coordinates of the mouse cursor based on the coordinates of the fingertips. If the accuracy of the model on the test set satisfies the requirements, it can be used to control the
cursor using hand gestures in real time as shown in Figure 3. As a result of training the considered CNN and LSTM models, it was determined that the CNN does a good job of extracting features from images, which makes it possible to accurately determine the coordinates of the finger pointer in the video stream. Figure 4 shows the result of an experiment using the CNN model to select a folder on the desktop using hand gestures. The model successfully recognized the hand gestures associated with highlighting a folder on the screen. This includes correctly determining the start and end points of the selection, as well as determining the shape and boundaries of the folder. Figure 5 shows how a user can click on a folder with a squeezing hand gesture. In the future, the model can be successfully applied in real use cases, such as managing files and folders on a computer using hand gestures. When training, it is important to ensure that the model is trained well and tested thoroughly to achieve the desired results and ensure high performance of the system. CONCLUSION Using CNN and LSTM recurrent neural networks to control the mouse cursor on a computer or device using hand gestures has improved human-machine interaction, making it more natural and user-friendly. The advantages of using CNNs and LSTMs in this context lie in their ability to efficiently process visual data and analyze hand motion sequences. CNNs automatically extract important features from hand images, which allows the system to recognize various gestures with high accuracy. LSTM, in turn, made it possible to take into account the dynamics of movements and improve the accuracy of cursor control when performing complex gestures. Both architectures have their advantages and limitations, and the choice of the appropriate model may depend on the specific needs and characteristics of the system. For the task of controlling the mouse cursor with hand gestures, the CNN model offered advantages such as the ability to process visual data such as images or videos, the ability to automatically extract important features from hand images, which allows recognition of various gestures with high accuracy, and suitability for classifying and recognizing objects, which is ideal for identifying various hand gestures. The limitation of this model is that it does not take into account sequences of gestures, which can be important for some cursor control scenarios, and it also required more training data, since a CNN has many parameters and large amounts of diverse data are required for successful training.
Figure 1. Algorithm for performing human-machine interaction
Figure 4. Result of the CNN model for highlighting a folder on the desktop using hand gestures
3,407.6
2024-02-01T00:00:00.000
[ "Computer Science" ]
The Ruling of Paper Money Usage: Analysis Based on the Evolution of Currency Development
Efforts to regain and elevate the usage of the gold dinar in Malaysia have created various polemics, whether from the legal viewpoint, economic transactions, or the position of currency law. Views from some quarters advocating the gold dinar as the currency demanded in Islam have implications for the use of paper money today. Consequently, some gold dinar activist groups in Malaysia forbid the use of paper money due to the lack of gold backing. This has created confusion amongst the communities on the ruling of paper money usage. Thus, the aim of this study is to analyze the views of the Muslim scholars regarding the prohibition of paper money and the Islamic point of view. This analysis is based on the various phases of world currency development, beginning from the first phase (paper currency as a receipt of debt), the second phase (a means of payment/fulus), the third phase (paper money as gold value backed) and the fourth phase (paper money as a principal currency). This is a qualitative study using the library research approach. The result of this study shows that the use of paper money is permissible because paper money has now become the world's principal money and is no longer a receipt of debt. The Council of the Islamic Fiqh Academy (Majlis Majma al-Fiqhi al-Islami), in its 5th session held in 1402H/1982AD, also resolved that the use of paper money is permissible.

Introduction
Legal disputes over the gold dinar and paper money exist due to various gold dinar activists with different convictions in their efforts to regain and elevate the usage of the gold dinar as a currency. The gold dinar and silver dirham were seen as the form of Islamic currency used from the Prophet Muhammad's s.a.w. (peace be upon him) era until the era of the Ottoman government, which ended in 1924, while paper money first emerged during the mid-era reign of the Ottoman government and was known as al-Qa'imah.

The World Murabitun Movement (Murabitun), which champions the physical usage of the gold dinar, holds that the gold dinar is the currency of Islam and the use of paper money is illegal. According to Vadillo (2002), the paper money system we have today is forbidden (haram) because it is a receipt of debt or dayn. Islam does not allow debt to be used as a medium of exchange, and its usage must be limited to private contracts. His argument is based on an account in the book of al-Muwatta' (Malik ibn Anas t.th), which relates how Marwan ibn al-Hakam ordered his guards to repossess the receipts of debt (which were traded in the market of al-Jar before the goods arrived) and return them to their owners. This movement also considers paper money prohibited due to the inherent elements of gharar (deception) and riba' (usury) in the system. Gharar is the basis of unfair profit in the exchange of paper money, because real money of dinar and dirham has been replaced with artificial paper money. The excessive profit or surplus (riba') is created through the banks' methods of creating multiplier credits. This money also does not allow people to redeem it for the gold held as backing. Vadillo (2002), Hosein (2008) and Zuhaimy (2003) further concluded that paper money and banking institutions are symbols of the presence of riba'. Riba' is clearly prohibited in Islam and therefore, in their view, paper money is prohibited. However, this view is seen as contradicting those who seek to uphold the gold dinar as a backing of value where paper itself cannot serve as the currency.
Literature Review
Among the responses to the views above are those of earlier Muslim scholars (ulamas) who regarded paper money as a receipt of debt. One of them is al-Husayni (1329H), who stated that a banknote is a document of debt owed to its holder, and that this document is used as a medium of exchange and payment largely because it is lighter and easier to carry. Al-Husayni (1329H) opined that transactions with this paper were similar to the al-hiwalah transaction (transfer). However, this view is refuted by several Muftis in Malaysia, such as Tan Sri Dato' Seri (Dr.) Hj. Harussani bin Hj. Zakaria (Perak State Mufti), Datuk Wira Hj. Rashid Redza bin Hj. Md. Saleh (Malacca State Mufti), Dato Paduka Haji Sheikh Abdul Halim Riza (Kedah State Mufti) and Mohamad Shukri bin Mohamad (Kelantan State Deputy Mufti). All of them are of the view that the use of paper money is permissible and the Muslim community may continue using it. The prohibited part of this system is the implementation mechanism that sets the course towards riba', speculation and fraud. Paper money is not prohibited, and it is not a receipt of debt because it has a preset value. If Malaysia wishes to reintroduce the usage of the gold dinar, then the mechanism needs to be clear and definite. Meanwhile, Syed Nazmi Bin Tuan Taufek (Islamic Affairs Officer, Department of the State Mufti of Terengganu) gives three reasons why paper money today is not prohibited (Edawati, 2005): 1) paper money is not a receipt of debt because it has a value predetermined by the government; 2) the use of paper money is permissible as long as it does not violate the Islamic law, and what is wrong today is its conduct and application by certain parties; and 3) the monetary system, which began with barter, followed by gold and then paper money, evolved in accordance with the changing of times and not simply to deceive mankind, and is therefore not forbidden.

Meanwhile, Mufti Harussani states that if a currency has no sound basis for exchange or backing, then it involves gharar. According to Ibn Manzur (t.th), gharar means exposing a person or a person's assets, unknowingly, to destruction, and generally gharar means danger, catastrophe and risk. Al-Sarakhsi (1979) explained that gharar could occur in conditions of uncertainty about the existence of certain unknowns, such as hidden outcomes or consequences. Gharar is forbidden in Islam because it can lead to disputes and conflicts, and any dispute and conflict in muamalat is forbidden in Islam. Gharar is something that is uncertain and gives rise to doubt, similar to the problem of riba', which is clearly forbidden by Allah S.W.T. If the monetary system cannot escape from the elements of gharar and riba', then the system is not in accordance with the Islamic law (syara') (Joni Tamkin & Mohd Ridhwan 2009). Meanwhile, currency speculation and manipulation lead to oppression and tyrannise small nations. Therefore, in order to create a system that is more just and harmonious, Islam demands its followers to seek an alternative financial system that is more just and stable (Edawati, 2005).
Mufti Rashid Redza concurs with the above and proposes the return of the gold dinar as a currency. According to him, it is a good move and in conformity with the requirements and tenets of Islam. The gold dinar was used during the Prophet Muhammad's s.a.w. time and was a valuable, stable and fair currency. The gold dinar has been proven to have many advantages compared with the existing currency. Hence, the study by Salmy Edawati (2005) concluded that the alternative to the floating paper money system was a gold-based system, or gold dinar system. The gold dinar is a viable alternative currency because of its characteristics that fulfil the currency requirements of Islam, namely intrinsic value, stability and durability. These characteristics can safeguard a currency from riba', inflation and manipulation. The Prophet Muhammad s.a.w. used the dinar and dirham currency in daily transactions, while the fulus currency was used as a supporting currency for smaller and cheaper business transactions. Hence, the words 'dinar' and 'dirham' are mentioned in several verses of the al-Quran, as in verse 75 of surah Al-'Imran, verse 20 of surah Yusuf, verse 59 of surah al-Nisa', and verse 19 of surah al-Kahfi. Because these verses were revealed during the Prophet's s.a.w. time, it is not peculiar that all the concepts and characteristics of currency mentioned in the al-Quran are based on the gold dinar and dirham currencies. Therefore, it is imperative to fine-tune the discussion of currency in Islam so as to resolve the currency problems that have emerged today. Hence, disputes have arisen among the ulamas on the position of the paper money being used today: is paper money permissible in Islam or not? The disputes and contradictory views in determining the exact ruling on the use of paper money today should therefore be discussed and coordinated. A clear ruling on paper money is vital because it involves the daily muamalat of the Muslims. Besides, paper money will also be the medium (wasilah) in the implementation mechanism of a gold dinar value-backed currency system.

Method
This research is a qualitative study using the approach of library research and historical studies. Content analysis was used for the data collection process. Meanwhile, the data analysis was carried out through the textual analysis method, evaluating the views of Islamic scholars on the changing phases of world currency.

Analysis Based on the Views of the Islamic Scholars on the Changing Phases of World Currency
The paper money being used today gives rise to many different opinions among the ulamas in determining the ruling on its use. This is because paper money in the form of a currency came into existence only after the passing of the Prophet Muhammad s.a.w., and there is no clear ruling regarding the use of paper money in the al-Quran, al-Hadis or the old scriptures. Hence, the question of the ruling regarding paper money will be viewed through the evolution of its usage, beginning from the earliest creation of paper money until today.
According to a study by Hassan (1999), fatawas (edicts or legal decisions) fixing the rules regarding currency were issued based on the development of currency, beginning from the use of commodity money, metallic money (dirham and gold), and subsequently paper money. Paper money itself has undergone several developmental phases, beginning from phase one: paper money as a letter of declaration in substitute of money or a receipt of debt (bank money); phase two: paper money as gold value backed; phase three: paper money as a means of payment; and phase four: paper money as a complete substitute for gold and silver money. History has recorded that China was the first country to use paper money, in 1368M. Meanwhile, communities as early as the Roman era had started using paper documents regarded as declaration receipts in substitute for gold, the gold being stored in a safe place such as with goldsmiths, eminent people or trustworthy religious people, or in safe deposit boxes kept in secured places (storage). The declaration letters issued in substitute of money represented the gold belonging to the respective owners, which could be redeemed at any time from those secured places. The declarations normally had IOU (I Owe You) printed on them. Keepers of the gold in storage would impose certain charges, such as custodial and storage charges and charges for reducing the risk of theft (Goldfeld & Chandler, 1981). The Ottoman government produced its first paper money, known as the al-Qa'imah, in 1256H (1840AD). The use of this paper money lasted only 23 years before it was officially rescinded in 1278H (1862AD). However, in 1293H the al-Qa'imah was reintroduced, only to be discontinued a while later because it was not well received by the public. In 1332H, the al-Qa'imah was reintroduced for the third time, and this time its usage was enforced through legislation. The al-Qa'imah paper money continued to be used until the fall of the Ottoman Turkish government in 1924 ('Ali, 1983).

3.1.1 First Phase: Paper Money as a Letter of Declaration in Substitute of Money or Receipt of Debt (Paper Money as Bank Money)

Therefore, most ulamas' thought between the years 1886 and 1924 centred on paper money as a declaration letter in substitute of gold, or a receipt of debt. Some of the ulamas ruled that paper money was a debt document, while others rejected this. The ulamas who ruled paper money to be a debt document or a receipt of debt were Ahmad al-Husayni, Muhammad Amin al-Shanqiti, Salim bin 'Abd Allah bin Samir and Habib bin Sumayth. They were of the opinion that paper money essentially did not fall into the category of a currency, because paper money was just a document evidencing that the bank owed the holders gold. In the event of a transaction using the documents, the value of the backing gold or metal being stored would be the measure. Therefore, the paper money merely served as a document representative of the gold deposited in the bank.
Ahmad al-Husayni (1329H) stated that: "Sometimes the government produces its debt documents, known as 'banknotes', as documents of debt against the holder of those documents. Those documents are used as a medium of exchange and payment because they are light and easy to carry around in one's pocket or in hand in large amounts. To gain the trust of the public, owners of those documents can redeem them for gold at any time according to the values written on them. Sometimes the government also gives permission to the banks to produce these debt documents under specific conditions. Under these circumstances, the public trusts and holds the debt documents in high regard, until the banks and the government encounter problems, when payment processes slow down or the banks withdraw those values. This makes the document worthless and causes a loss of confidence in debt documents. The rule on this transaction is the same as for the al-hiwalah (transfer) transaction". This view was refuted by several ulamas, such as Ahmad Ridha al-Buraylawi and Ahmad Khatib al-Jawi, who both lived during the era when paper money was used as a declaration document in substitute of money. Al-Zuhayli (1986), a contemporary ulama, was also of the view that paper money is not a debt document.

Al-Buraylawi (t.th) states that: "The opinion that paper money is a debt instrument holds that the institution (the bank) responsible for marketing this money owes dues to the depositors of the dirhams. The bank issues a piece of paper as proof of the debt and the amount owed. When the depositor presents that paper to the bank, the debt is paid and the paper is returned to the bank. If the holder of that paper gives it to another person, this means the bank owes the second holder of the paper. In other words, the transfer of the paper from the first holder to the second holder is a proof of debt transfer; the second holder will be owed the same value of debt by the bank. The paper may continue to change hands numerous times, in tandem with the increase in the transfer of debts. This is why it is called a debt document. Yet even children would understand that no such understanding occurs in the heart of any person who transacts with the paper currency; by this exchange they do not intend a loan, a lending or a transfer."

Al-Jawi (1329H) also provides evidence that paper money is not a debt document. Among the evidences are: 1) Paper money loses its value due to physical damage of the paper. For example, if a currency note worth 10 dinars is burnt in a fire, then the value of 10 dinars is gone. If, however, paper money truly had the status of a debt, the owner of the burnt note would still be able to claim the value of 10 dinars from the party issuing the paper money. 2) When the holder of the paper money redeems its value from the issuing party, the paper money is returned to the issuer. Despite this, the paper money can still be resold at the same value; if the transaction were a debt, the now debt-free paper could not be resold. Al-Zuhayli (1986) states that: "It is not true to qiyas (analogize) paper money as a debt, because debt is of no use to its owner, i.e.
the person who gives the loan. Besides, the ulamas do not compel zakat (tithe) payment on a price that is still in the form of debt unless it is delivered, because there is the possibility of loss or of transactions remaining unpaid. On the other hand, paper currency can be beneficial to its holder, like gold, which is considered as having a price value for everything."

There are some ulamas who take the view that the physical paper of paper money has no value, as the value depends on the number written on the paper representing the gold stored. This view is true in the sense that the paper that served as a medium at that time was a representative document of the amount of gold owned and kept in the bank. The views of the ulamas considered at the time were based on the development of paper money during that period as a representative document or a bank's receipt of debt; the period was between the years 1886 and 1914. This means that the argument of paper money as a receipt of debt or representative document is no longer relevant to debate today, when paper money is already considered the staple currency of the world monetary system.

Second and Third Phase: Means of Payment/Fulus and Paper Money as Value Backed

At this stage, the bank document or deposit declaration receipt was not considered as money in totality, because the institutions or members of the public owning such documents would still convert them into metal currency within a short while. This is because, at the time, metallic money (coins) had become the basic currency for most countries in the world. However, as the economies prospered, the documents or bank receipts were not redeemed soon enough; they kept changing hands and circulating in the market and did not return to the issuer of the receipts (the bank). The keepers of the gold soon began to realize these changes and found that most of the paper receipts issued were not returned to them for redemption; only a small number came to redeem their gold or silver. The keepers of gold and silver then started to issue more receipts than the true value of the gold stored. This happened because of the public's confidence in the keepers, since redemption could be made at any time it was needed. Finally, bank receipts went beyond being a mere medium of exchange and at one stroke took over the function of the basic currency. At this time, there were other opinions of the ulamas who regarded paper money as fulus money (chipped money or fractional money). This shows that the status of paper money was on par with fulus in terms of price value. Meanwhile, in terms of fiqh law, some ulamas applied to paper money the same rules as for fulus, while other ulamas analogized (qiyas) it with gold and silver. Among the ulamas supporting the opinion of paper money as fulus are Muhammad Ridha al-Buraylawi, Ahmad al-Khatib al-Jawi, Muhammad 'Ulaysi al-Maliki, Sulayman al-Khalidi al-As'ardi and Muhammad Salamah Jabar.
According to al-Buraylawi (t.th), the copper money of fulus was originally a type of commodity which eventually became the currency in circulation, widely used by the people in the market. This is similar to paper money: paper was originally a type of commodity, and it is now widely accepted as a currency in circulation in the market and well received by the people. The numbers written on the paper money indicate the price value. Al-Buraylawi is also of the opinion that zakat is compulsory on paper money owing to its own intrinsic value and its not being a debt receipt. However, he classifies the paper currency as a wealth of riba', because the value of paper money is fixed through market collaboration, whereas the public has no right to interfere with the contract between the parties involved.

In addition, al-Khatib (1330H) is of the view that the value of paper money is on par with gold and silver in its physical and value properties. However, zakat is not obligatory on paper money, because paper is not included in the list of zakat-obligatory metals such as gold or silver. According to al-Khatib, paper money has no 'ilah like gold; therefore it can either be sold or turned into debts, with repayment of the same or different values, through deferred payment or otherwise. Paper money follows the same rules as copper fulus in totality, to the extent that zakat is not obligatory on it. The same applies to the wealth of riba', where paper money has no element of the riba' 'ilah, unlike gold and silver. This view is similar to the view taken by the Maliki ulamas, who state that the obligation of zakat is confined only to livestock, selected grains, fruits, gold and silver, and business property. Neither paper nor copper is subject to zakat, so, appropriately, neither are paper money and fulus. Nevertheless, al-Zarqa (n.d.) holds a different view from al-Buraylawi and al-Khatib: for him, even though the function and status of paper money are analogized (qiyas) to those of fulus, in fiqh law paper money is subject to the rules of gold and silver, being zakat obligatory, and the same goes for the wealth of riba'.

There are also the differing views of 'Abd Allah ibn Basam, Muhammad Salamah Jabar and Sulayman al-Khalidi, who support the view of equating (qiyas) paper money with fulus as a circulating currency accepted by all. Therefore, they agree that paper money is not a wealth of riba' and that sales of debt can be transacted either through deferred or immediate payment. They also stress that, even though paper money and fulus follow the same currency rules, paper money is subject to the zakat obligation, because paper money is part of one's assets and the zakat obligation is meant to cleanse the assets of the Muslim (Hassan, 1999).

These are all opinions expressed during the simultaneous circulation of paper money with the gold dinar and silver dirham currencies in the market. These opinions are no longer tenable now that paper currency has become the mandatory currency and the use of gold as currency is no longer allowed. The ulamas likened paper money to fulus because fulus was widely circulated in the market at the time, alongside the dinar and dirham, and this situation no longer exists today.

3.1.3 The Fourth Phase: Paper Money as a Full-Fledged Substitute for Metallic Money

At the fourth stage, the production of money is no longer controlled by any law but has been dominated by large nations, i.e.
the United States of America. However, at this stage, the laws still play a role in governing the use of money. For example, beginning from 1933, legislation was introduced prohibiting the use of gold or gold-backed receipts as currency, except for collection purposes or when authorized for specific reasons. Then, in 1971, President Richard Nixon officially broke the tie between paper money and gold. Since then, paper money has become the major and mandatory currency in today's monetary system.

In currency matters, the circulation of gold and silver as currency has been repealed by international laws since 1914. After the end of Bretton Woods in 1971, the gold-backed currency system was officially cut off from the global monetary system. Thus, there are difficulties in using gold and silver as currency, whereas paper money is widely accepted by the market as a medium of exchange and valuation; paper money is therefore seen as something that is easy to implement. Accordingly, based on the fiqh maxim 'What can easily be done is not waived because of what is difficult' (al-maysur la yasqutu bi al-ma'sur), the zakat obligation on assets (paper money) cannot be dismissed following the repeal of the circulation of gold and silver currency. The monetary application of paper money is seen as easily accepted by the public as the currency of today.

Results and Discussions

The Muslim ulamas of today have issued their views on paper money, considering the present paper money as distinct from fulus, gold and silver. Paper money is one of the many developmental phases of money, which started out as a commodity, then metallic coins, and thereafter paper money. These three types of currency are different from one another; the similarities they share are being a medium of exchange and having their values determined by the market. Through these views, several Islamic fiqh rulings on paper money have been issued today, such as the rulings on riba', zakah, mudharabah (sleeping partnership) and several other rulings relating to gold and silver. Based on these views, the fatawa of al-majma' al-fiqhi was also issued during the fifth conference held in 1402H, with the decision that the use of paper money as currency is permissible. The rulings on the use of paper currency decided during the Council of the Islamic Fiqh Academy's 5th session, held in 1402H/1982AD, are the following (Bank Dubai al-Islami, 1411H): 1) The origin of paper money is gold and silver. The 'ilah of riba' that applies to them, based on the strongest view amongst the ulamas, is due to their value (thamaniyyah); the 'ilah is not limited to gold and silver only, even though originally it was so. 2) The currency that exists today has value because of the government guarantee on these values. The community has accepted it as a store of value and as a means of payment, even though its value lies not in the paper but in the numbers or writings printed on it. This makes the existing currency valuable (thamaniyyah), and it has taken the place of gold and silver in its use. 3) The 'ilah that applies to gold and silver clearly exists in today's currency.
Therefore, the members of the Council of the Islamic Fiqh Academy have decided that paper currency is a currency in its own right, which takes on all the rulings applicable to gold and silver. That includes the legal prohibition of riba' al-fadl and riba' al-nasa', the compulsory zakah and other rulings. This is based on the qiyas (analogy) of the existing currency with gold and silver. Hence, based on the analysis of the changing phases of the currency, it can be concluded that the paper money used today is permissible in Islam. Therefore, Islam permits the use of any type of currency, whether paper money or gold coins. Figure 1 shows an overview of the relationship between the fatawas on the ruling of paper money issued by the Islamic jurists and the four phases in the evolution of paper money.

Conclusion

Although the terms 'dinar' and 'dirham', or gold and silver, are mentioned in the al-Quran and al-Hadith, there is no evidence (nas) of directions or commands to use both currencies. The use of the terms 'dinar' and 'dirham' merely reflects the form of money used at that time (during the revelation), but this does not mean that it is the duty of Muslims to use the 'dinar' and 'dirham' (gold and silver currency) today. Therefore, the use of currency other than gold and silver is not disapproved of in Islam. Besides, decisions on the rulings concerning paper money can be viewed through the evolution of the development of paper money.

Historical data show that the evolutionary development of paper money is made up of four phases. The first phase is the phase of the use of the 'Letter of Declaration' or 'Receipt of Debt'. The second phase is the use of paper money as a 'Means of Payment' or 'Fulus'. The third phase is the period in which paper money was backed by gold, and in the final phase paper money has become the world's major (independent) or principal currency. The fuqahas (Islamic jurists) have given their fatawas that the rulings on the use of paper money are based on these developmental phases of paper money. Hence, the rulings on paper money today cannot be equated with those on representative money, fulus or receipts of debt in the previous phases.

Figure 1. Development of fatawas on the law of paper money usage.
6,473.6
2014-01-27T00:00:00.000
[ "Economics" ]
Nanomechanical resonators based on adiabatic periodicity-breaking in a superlattice We propose a novel acoustic cavity design where we confine a mechanical mode by adiabatically changing the acoustic properties of a GaAs/AlAs superlattice. By means of high resolution Raman scattering measurements, we experimentally demonstrate the presence of a confined acoustic mode at a resonance frequency around 350 GHz. We observe an excellent agreement between the experimental data and numerical simulations based on a photoelastic model. We demonstrate that the spatial profile of the confined mode can be tuned by changing the magnitude of the adiabatic deformation, leading to strong variations of its mechanical quality factor and Raman scattering cross section. The reported alternative confinement method could lead to the development of a novel generation of nanophononic and optomechanical systems. Acoustic cavities confine mechanical vibrations in one or more directions of space [1]. They have many applications in the development of novel devices able to generate, manipulate and detect high frequency acoustic phonons [2,3]. Furthermore, such systems are at the core of the development of novel optomechanical devices [4]. Well established designs of one-dimensional acoustic nanocavities are phononic Fabry-Perot resonators capable of operating in the technologically relevant sub-THz range [5-7]. They are built out of highly reflective acoustic distributed Bragg reflectors (DBR) [8,9] obtained by stacking materials with different elastic properties in a periodic way. Most of the mechanical properties of an acoustic DBR can be described by an acoustic band diagram [10]: acoustic minigaps are opened at the center and at the edge of the superlattice Brillouin zone, and the acoustic modes can be described in the Bloch mode formalism [11]. By introducing a defect such as a spacer inside an acoustic DBR, acoustic Fabry-Perot cavities are obtained [5]. Acoustic confinement can be probed by performing Raman scattering spectroscopy and pump-probe experiments [5,12-14]. Moreover, stacking several of these resonators has opened up the possibility to engineer and study the dynamics of complex phononic systems [15,16]. So far, sub-THz phononics has extensively used the standard Fabry-Perot approach and very little work has been dedicated to the development of other designs [17-19]. This is in great contrast with their optical counterparts, for which sophisticated optical cavity designs have emerged over the years, providing stronger spatial confinement, higher quality factors as well as reduced sensitivity to nanofabrication imperfections [20-22].
One elegant design proposed in the optical domain consists in introducing tapered regions where the photonic crystal periodicity is adiabatically broken. Such an approach allows for reduced optical losses, and hence increased optical Q-factors, when going to 3D confinement. This strategy has been adopted in several optical systems, such as two-dimensional photonic crystal membranes [23], nanobeams [24,25] and waveguides [26] or micropillars [27,28]. In this Letter, we report the design and the experimental measurement of the confinement properties of an adiabatic acoustic cavity, designed to operate at a resonance frequency of ≈ 350 GHz. By progressively changing the periodicity of an acoustic DBR, we adiabatically transform the acoustic band diagram of the system, leading to the generation of a confined mechanical state. The presented results were obtained on a sample where the DBR periodicity was adiabatically transformed with a maximum amplitude of approximately 7%, which is technologically challenging to fabricate, even by molecular beam epitaxy (MBE). We probed the presence of the confined phononic state by performing high resolution Raman scattering experiments. Furthermore, by changing the magnitude of the adiabatic transformation, we demonstrate that we can significantly transform the spatial profile of the confined mode, leading to major changes in its mechanical Q-factor and Raman scattering cross section. This kind of design could lead, as in the case of optics, to the development of new three-dimensional mechanical resonators with quality factors overcoming the ones currently achieved with standard Fabry-Perot designs [4,29]. We start the conception of the adiabatic cavity by designing an acoustic GaAs/AlAs DBR constituted by 29 GaAs/AlAs layer pairs. The layer thicknesses of AlAs and GaAs are 12 nm and 3.4 nm, respectively. By choosing these layer thicknesses, we obtain a (λ/4, 3λ/4) acoustic DBR, where λ corresponds to the wavelength of the acoustic phonons in GaAs and AlAs, respectively, for a frequency of 354 GHz [8]. Then, by gradually changing the layers' thicknesses (Fig. 1a, top panel), we introduce an adiabatic perturbation at the center of the structure. The envelope of the perturbation has the shape of a sin² function, an amplitude of 7%, and extends over 12.5 layer pairs. We compute the reflectivity around 350 GHz of the system embedded in a GaAs matrix, as shown with the black curve in Fig. 1b. We note the presence of a sharp dip inside the stop band of the system at a frequency of 353 GHz, corresponding to a confined mode. The spatial profile of the acoustic mode in the adiabatic cavity is shown in Fig. 1a (bottom panel). It is determined by solving the equation for one-dimensional propagation of longitudinal acoustic waves using transfer matrix calculations. It is confined at the center of the structure and decays exponentially when we move away from the adiabatically perturbed region. The red dashed line in Fig. 1b represents the simulated reflectivity spectrum of the considered DBR without any adiabatic perturbation. The high reflectivity region corresponds to the first minigap at the zone center of the Brillouin zone. The presence of a confined state can be explained by locally applying the Bloch mode formalism in the aperiodic part of the sample, and in particular to one period of alternating AlAs/GaAs layer pairs [16,27,30]. We calculate for every pair of AlAs/GaAs layers the corresponding local acoustic band diagram.
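The reflectivity and mode-profile calculations described above rely on standard one-dimensional transfer matrices. The snippet below is a purely illustrative sketch (not the authors' code) of such a calculation for a (λ/4, 3λ/4) GaAs/AlAs stack with a sin² adiabatic taper, using the acoustic analogue of the optical characteristic-matrix method. The densities, sound velocities and the exact parameterization of the taper are nominal assumptions, so the resulting dip should only be expected to fall roughly in the 350 GHz range.

```python
import numpy as np

# Nominal material constants (assumed values, for illustration only).
RHO = {"GaAs": 5317.0, "AlAs": 3760.0}   # mass density, kg/m^3
VEL = {"GaAs": 4730.0, "AlAs": 5650.0}   # longitudinal sound velocity, m/s
Z = {m: RHO[m] * VEL[m] for m in RHO}    # acoustic impedance rho * v

def layer_matrix(material, thickness, freq):
    """Characteristic matrix of one layer for a longitudinal acoustic wave."""
    phi = 2.0 * np.pi * freq * thickness / VEL[material]   # acoustic phase
    z = Z[material]
    return np.array([[np.cos(phi), 1j * np.sin(phi) / z],
                     [1j * z * np.sin(phi), np.cos(phi)]])

def stack_reflectivity(layers, freq, z_in, z_out):
    """|r|^2 of a layer stack embedded between media of impedance z_in / z_out."""
    m = np.eye(2, dtype=complex)
    for material, thickness in layers:
        m = m @ layer_matrix(material, thickness, freq)
    b, c = m @ np.array([1.0, z_out])
    r = (z_in * b - c) / (z_in * b + c)
    return abs(r) ** 2

def adiabatic_stack(n_pairs=29, n_taper=12.5, alpha=0.07):
    """(lambda/4, 3*lambda/4) GaAs/AlAs pairs with a sin^2 thickness taper of
    relative amplitude alpha over n_taper central pairs, as described in the text."""
    d_gaas, d_alas = 3.4e-9, 12.0e-9     # nominal layer thicknesses (m)
    i0 = (n_pairs - n_taper) / 2.0
    layers = []
    for i in range(n_pairs):
        scale = 1.0
        if i0 <= i <= i0 + n_taper:      # inside the adiabatically perturbed region
            scale = 1.0 + alpha * np.sin(np.pi * (i - i0) / n_taper) ** 2
        layers += [("GaAs", d_gaas * scale), ("AlAs", d_alas * scale)]
    return layers

freqs = np.linspace(300e9, 400e9, 2000)
stack = adiabatic_stack()
refl = np.array([stack_reflectivity(stack, f, Z["GaAs"], Z["GaAs"]) for f in freqs])
band = (freqs > 340e9) & (freqs < 365e9)           # search inside the stop band only
print(f"confined-mode dip near {freqs[band][np.argmin(refl[band])] / 1e9:.1f} GHz")
```

With the quoted nominal sound velocities, a λ/4 GaAs layer at 354 GHz is roughly 4730/(4 x 354e9) ≈ 3.3 nm and a 3λ/4 AlAs layer roughly 12 nm, consistent with the layer thicknesses given above.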
The eigenfrequency of the confined mode is represented in the corresponding inset by the horizontal dashed line. By progressively increasing the size of the layers, we gradually redshift the position of the local acoustic bandgap of the system. At the center of the perturbed region, the confined mode is outside the bandgap and is therefore allowed to propagate. However, by moving away from the center, the mode enters adiabatically into the bandgap and is progressively reflected by the DBRs, leading to its confinement. A GaAs/AlAs-based sample was fabricated by molecular beam epitaxy on a (001) GaAs substrate. The adiabatic acoustic cavity was characterized by Raman scattering spectroscopy performed at room temperature. We used a Ti-sapphire tunable laser set at a wavelength of 913 nm, and the collected spectra were dispersed using a double HIIRD2 Jobin Yvon spectrometer equipped with a liquid-N2-cooled charge-coupled device (CCD). The adiabatic acoustic cavity is embedded between two optical Al0.1Ga0.9As/Al0.95Ga0.05As DBRs and constitutes the spacer of a planar optical cavity. The optical confinement not only enhances the Raman scattering efficiency by a factor of up to 10^5, as shown in [31-34], but it also modifies the Raman scattering selection rules, allowing Raman signals associated with the confined mode to be detected in a backscattering experimental configuration. We collected the scattered light at normal incidence, whereas the excitation laser was incident at an angle different from 90°. As the sample has been grown with a thickness gradient, we could tune the resonance wavelength of the collection mode between ≈ 0.8-1.0 µm by changing the position of the laser spot. When the frequency of the scattered photons corresponded to the energy of the collection mode, we were in a condition of single optical resonance. We further enhanced the intensity of the Raman signals by taking advantage of the in-plane dispersion relation of the optical cavity: by carefully changing the incidence angle of the laser, it was possible to set the incoming photons in resonance with the excitation mode. When both the spot position and the laser incidence angle were set in order to maximize a Raman signal, we were in a condition of double optical resonance (DOR) [31,35]. In Fig. 2a (black curve) we show the simulated acoustic band diagram of the (λ/4, 3λ/4) GaAs/AlAs acoustic DBR used for the conception of our sample, without any adiabatic defect. The grey area highlights the spectral interval of the first zone-centre acoustic minigap. The vertical orange solid line indicates the spectral position of the dip in reflectivity marked by a black square in Fig. 1a (bottom panel), corresponding to the resonance frequency of the confined mechanical mode. The measured Raman spectrum is presented in Fig. 2b. The frequency of the inelastically scattered light is f_laser + Δf, where Δf is the frequency shift introduced during the Raman process and f_laser is the frequency of the incident laser. The DOR condition was optimized to maximize Raman signals for Δf ≈ 350 GHz. We observe 4 clear Raman peaks in the measured spectrum. The most intense one is located well within the frequency interval of the zone-centre acoustic minigap, also marked in Fig. 2b by a grey area. This Raman peak is generated by the cavity mode (CM) confined in the adiabatic structure. We implemented a photoelastic model to calculate the Raman spectrum of the structure [8,13,36-38]. The simulated Raman spectrum is shown in Fig. 2b (blue curve), after convolving it with a Gaussian curve to account for the experimental resolution (3.5 GHz).
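The last step mentioned above, smearing the simulated spectrum with a Gaussian whose width matches the 3.5 GHz experimental resolution, can be sketched as follows. This is an assumed implementation, not the authors' routine; `freqs` and `spectrum` stand for the frequency grid and the photoelastic-model output.

```python
import numpy as np

def instrument_broadening(freqs, spectrum, fwhm=3.5e9):
    """Convolve a simulated spectrum with a normalized Gaussian kernel.

    freqs is assumed to be a uniformly spaced grid in Hz; fwhm is the
    instrumental full width at half maximum (3.5 GHz in the text).
    """
    sigma = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))   # FWHM -> standard deviation
    df = freqs[1] - freqs[0]                            # grid spacing
    kernel_freqs = np.arange(-5.0 * sigma, 5.0 * sigma + df, df)
    kernel = np.exp(-0.5 * (kernel_freqs / sigma) ** 2)
    kernel /= kernel.sum()                              # preserve the integrated intensity
    return np.convolve(spectrum, kernel, mode="same")
```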
We note that the simulated spectrum perfectly reproduces all the features of the experimental data, attesting to the good quality of the sample growth process [39]. The peaks located around 310 GHz and 380 GHz would normally be observable in a backscattering (BS) geometry in structures with no optical confinement. These mechanical modes are localized in the DBRs (propagative modes). The mode at 369 GHz, on the contrary, is usually active in a forward scattering (FS) geometry in the absence of optical confinement. Small differences between the experiments and simulations in the relative intensities can be attributed to optical resonant effects. The vertical dashed lines in Fig. 2a and Fig. 2b correspond to the condition q = 2k_laser, where k_laser corresponds to the wavenumber of the incident laser. They indicate the frequencies of mechanical modes that are usually Raman active in a BS geometry for a superlattice [10,36]. We observe that, in the measured Raman spectrum, the peaks associated with backscattering are redshifted with respect to these frequencies. The introduction of an adiabatic defect in a superlattice also affects the spectral position of the Raman peaks associated with propagative modes, as it increases the average thicknesses of the layers at the centre of the structure. We numerically investigated the confinement properties of the adiabatic cavity by exploring the impact of the adiabatic sin² transformation inside the structure. We define α as the maximal adiabatic transformation introduced in the system. For the case of the experimentally studied cavity in this Letter, α = 7%. In Fig. 3a, the black curve (bottom curve) corresponds to the simulated reflectivity curve of the adiabatic cavity around 350 GHz for α = 7%. We then progressively increase the magnitude of α. We observe that the dip corresponding to the confined mode is gradually redshifted inside the acoustic stop band. By further increasing α, a second sharp dip appears in the acoustic stop band, evidencing the presence of a second confined mode. Both dips are clearly visible for α = 11%, as shown with the green reflectivity curve (middle curve) in Fig. 3a, for which we have introduced a vertical offset for clarity. Eventually, by raising α up to 15%, the first mode disappears in the Bragg oscillations of the system, and the second mode reaches the centre of the acoustic stop band (top red curve in Fig. 3a). The spatial profiles of the first and second confined modes are plotted in the insets of Fig. 3b, calculated respectively for α = 7% and α = 15%. Both modes are confined at the centre of the structure. However, the second mode presents two maxima in its displacement pattern. The points corresponding to α = 7% and α = 15% are marked in Fig. 3b by a black square and a red triangle, respectively. The first mode corresponds to the fundamental confined mode (already plotted in Fig. 1a, bottom panel), and the second to the first harmonic of this acoustic equivalent of a quantum well.
It is therefore possible to select the desired spatial profile of the mode that is optimally confined by changing the amplitude of the adiabatic deformation, and to finely tune its mechanical resonance frequency. To characterize the resonator's mechanical performance, we studied the evolution of the confinement properties of the two considered modes as a function of the amplitude of the adiabatic transformation α (Fig. 3b). The values of the mechanical quality factors (Q-factors) increase when the resonance frequencies approach the centre of the acoustic minigap. Maximal values of the mechanical Q-factors for the fundamental and first harmonic modes are reached for 7% (Q_mechanical = 1520) and 15% (Q_mechanical = 1220) adiabatic transformations, respectively, marked by a black square and a red triangle in Fig. 3b. To compare this design to a standard Fabry-Perot cavity, we have simulated the Q-factor of an acoustic Fabry-Perot resonator composed of 14 (λ/4, 3λ/4) GaAs/AlAs layer pairs for each DBR and one λ/2 AlAs spacer. This structure contains the same number of layers as the adiabatic system. The Q-factor reached is 1570, very close to the value of the Q-factor reached for α = 7%. In Fig. 3c we plot the simulated Raman spectra around 350 GHz for cavities with α = 7% (black curve) and α = 15% (red curve, with offset). We have marked with a black square the Raman peak corresponding to the presence of the first confined mode. As has been shown in Fig. 2, the confined phonons in the adiabatic structure are Raman active. For α = 15% (red curve), we marked the resonance frequency of the first harmonic confined mode with a red triangle. The confined mode induced by an α = 15% perturbation presents a different symmetry in strain, resulting in a Raman-inactive mode, as indicated in Fig. 3c by the triangle. By tuning the parameter α it is thus possible to tailor the spatial profile of the adiabatic confined mode and its symmetry. In conclusion, we demonstrated the adiabatic confinement of longitudinal acoustic phonons at a resonance frequency of 350 GHz by progressively breaking the periodicity of an acoustic superlattice. We probed the presence of a confined mode by performing Raman scattering spectroscopy experiments in a DOR configuration. Numerical simulations based on transfer matrix calculations and a photoelastic model reproduce our experimental data well, attesting to the high quality of the MBE-grown sample and showing the feasibility of actually fabricating these systems. We investigated the impact of the adiabatic transformation magnitude on the spatial profile of the confined modes and on their mechanical quality factors. The presented adiabatic cavity is one of the first steps in the study of acoustic phonon resonators where local strain engineering is performed. As has already been demonstrated for standard acoustic Fabry-Perot designs [4,29], it is possible to fabricate out of these planar structures three-dimensional optomechanical microresonators operating at extremely high mechanical frequencies, for which the confined mechanical and optical modes strongly interact. Combining the simultaneous localization of photons and phonons, the reported system has the potential of being at the heart of a novel generation of optomechanical resonators based on DBR structures.
ACKNOWLEDGMENTS This work was partially supported by a public grant overseen by the French National Research Agency (ANR) as part of the "Investissements d'Avenir" program (LabexNanoSaclay, reference: ANR-10-LABX-0035), the ERC Starting Grant No. 715939 NanoPhennec, the French Agence Nationale pour la Recherche (grant ANR QDOM), the French RENATECH network.
3,887.8
2017-08-18T00:00:00.000
[ "Physics", "Engineering" ]
V-Pits and Trench-Like Defects in High Periodicity MQWs GaN-Based Solar Cells: Extensive Electro-Optical Analysis By combining microscopy investigation, light-beam induced current (LBIC), micro-photoluminescence (µ-PL), and micro-electroluminescence (µ-EL) characterization, we investigate the electrical and optical properties of V-pits and trench-like defects in high-periodicity InGaN/GaN multiple quantum wells (MQWs) solar cells. Experimental measurements indicate that V-pits and their complexes are preferential conductive paths under reverse and forward bias. Spectral analysis shows a redshifted wavelength contribution, with respect to the MQWs emission peak wavelength, in the presence of agglomerates of V-pits surrounded by trench-like defects. The intensity of the redshifted wavelength contribution is more pronounced under µ-EL with respect to µ-PL characterization, due to the localization of carrier flow in proximity of V-defects. The results give insight into the role of V-pits and their agglomerates in the electrical and optical properties of high-periodicity quantum well structures, to be used for InGaN-based photodetectors and solar cells.

I. INTRODUCTION

High-periodicity InGaN/GaN multiple quantum well (MQWs) devices are investigated for different applications, from concentrator solar cells [1], [2] to wireless power transfer systems [3] and space applications [4]. Such structures allow for high-efficiency light collection in the short wavelength range, thanks to their high absorption coefficient, outstanding radiation resistance, high thermal stability, and thus reliability in harsh environments [5], [6]. Until now, the research effort in improving the efficiency and reliability of InGaN MQWs solar cells has been focused on optimizing the well and barrier thickness, controlling polarization effects, and improving the material crystal quality [7], [8], [9]. However, the properties of high-periodicity MQW solar cells may be significantly affected by the presence of extended defects, such as dislocations and V-pits, whose properties are still under investigation.
It is well known that, during the growth of GaN devices, the difference in thermal expansion coefficient and the large lattice mismatch between GaN and the sapphire substrate can lead to the formation of different structural defects such as threading dislocations (TDs), inversion domain boundaries (IDBs), basal plane stacking faults (BSFs), and stacking mismatch boundaries (SMBs) [10], [11], [12], [13]. Based on atomic force microscopy (AFM) and transmission electron microscopy (TEM), different papers have reported that TDs lead to the formation of V-defects (or V-pits), which appear at the surface as open hexagonal inverted pyramids with {10-11} side walls [14], [15], [16]. On the other hand, BSFs and SMBs lead to the formation of trench defects, which are closed-loop boundaries with V-shaped grooves [12], [17], [18]. These extended defects may significantly worsen the optical and electrical properties of the devices, and their impact on high-periodicity MQW structures is still under investigation. In the literature, the influence of V-pits on the characteristics of GaN-based solar cells is more extensively investigated in PIN structures [19], [20], [21], [22], [23], focusing on their influence on the open circuit voltage (Voc) and short circuit current (Isc): an in-depth analysis of the impact of V-pits and other extended defects, such as trench-like defects, in GaN-based MQWs solar cells is missing. The goal of this article is to fill this gap, by presenting a comprehensive study based on scanning electron microscopy (SEM), light-beam-induced current (LBIC), electron-beam-induced current (EBIC), micro-electroluminescence (µ-EL), and micro-photoluminescence (µ-PL) characterization. By combining these techniques within a specific region of the device, we investigate the morphological, electrical, and optical characteristics of the defects, and describe unique features in the intensity and spectral data. First, we show that agglomerates of V-pits play a dominant role in current conduction under reverse and forward bias [24]. Second, by integrating microscopy investigation with µ-PL and µ-EL characterization, the spectral properties near different extended defects are explored in detail, focusing on the intensity and peak wavelength dependence: a significant redshift is found in proximity of some defects, which is described through EL and PL measurements and ascribed to the presence of trench defects.

II. EXPERIMENTAL DETAILS

The high-periodicity InGaN/GaN MQWs GaN-based solar cells analyzed in this work are grown on c-plane (0001) sapphire by metal organic chemical vapor deposition (MOCVD). A schematic of the device under test is shown in Fig.
1(a). The structure consists of a 2 µm silicon-doped n-GaN ([Si] = 3 × 10¹⁸ cm⁻³) layer, deposited over a sapphire substrate, and a 125 nm highly silicon-doped n⁺-GaN (Si concentration [Si] = 2 × 10¹⁹ cm⁻³) layer, to create an ohmic contact [25]. Above the n⁺-GaN layer, the high-periodicity MQWs region is grown, composed of 30 pairs of undoped In0.15Ga0.85N quantum wells (well thickness = 3 nm, with an indium mole fraction of 15%) and GaN barriers (barrier thickness = 7 nm). Above the MQWs region, a 5 nm magnesium-doped p-Al0.15Ga0.85N electron blocking layer (EBL) ([Mg] = 2 × 10¹⁹ cm⁻³, with an aluminum mole fraction of 15%) is inserted, to enhance carrier collection at the p-side of the devices by reducing the recombination rate and increasing the carrier lifetime [25]. Above the EBL, a 150 nm magnesium-doped p-GaN layer ([Mg] = 2 × 10¹⁹ cm⁻³) is grown and, finally, a 10 nm highly magnesium-doped p⁺-GaN contact layer ([Mg] > 2 × 10¹⁹ cm⁻³) is formed to create an ohmic contact. A semi-transparent 130 nm indium-tin oxide (ITO) layer is deposited by dc-sputtering on top of the mesa as a current spreading layer, with post-annealing in N2/O2 at 500 °C. Devices were then processed by standard lithography into 1 × 1 mm solar cells and, finally, Ti/Al/Ni/Au ring contacts and Ti/Pt/Au grid contacts were deposited via electron beam evaporation around the perimeter and on the top of the mesa, respectively, to form the cathode and anode. Other details of the device can be found in [25]. SEM characterization to obtain filtering-grid in-beam backscattered electron (f-BSE) images was performed with the TESCAN SOLARIS microscope, a dual-beam system containing the Triglav™ immersion optics column and the Orange™ Ga ion optics column attached to one chamber, which allows surface modification using a focused ion beam (FIB). Exploiting the FIB, a lamella of the analyzed device was obtained, to characterize the cross-sectional structure of the sample. The cross-sectional structure was analyzed by TEM using a JEOL JEM-2200FS field emission microscope equipped with an in-column filter, operated at 200 keV. Imaging was carried out in scanning transmission electron microscope (STEM) mode using a high-angle annular dark-field (HAADF) detector that exploits atomic-number (Z) contrast. Finally, µ-EL and µ-PL characterizations were performed through a custom-designed spectrally resolved confocal microscope setup, which is discussed in more detail in [26], [27], and [28].

III. CHARACTERIZATION AND DISCUSSION

The SEM image reported in Fig. 2(a) shows the In-Beam f-BSE characterization of the area analyzed in this manuscript, obtained by the TESCAN SOLARIS microscope. This area has an extension of 5 × 5 µm² and presents a high number of V-pits. V-pit formation has been ascribed to increased strain energy during GaN growth on a sapphire substrate. Other factors to take into account are a reduced Ga incorporation on the pyramid plane, a higher strain energy in high indium mole fraction InGaN QWs, and the relatively low temperatures used to grow InGaN QWs, leading to degraded GaN composition due to limited gallium surface diffusion [14], [29], [30]. In particular, in Fig.
2(a), region "r1" (red circle), region "r2" (green circle) and region "r4" (purple circle) show agglomerates of V-pits and will be reference regions for the following discussion, as will region "r3" (light blue circle), which, unlike the previous ones, has no V-pits. The well-known open hexagonal inverted pyramid shape of the V-defects, with {10-11} side walls, is observed [31], [32], with a diagonal length in the range of 100 to 160 nm; the density of V-pits is 8.6 × 10⁷ cm⁻² [24].

Fig. 2(b) and (c) report the LBIC current signal measured in the same area at reverse bias (−3 V) and forward bias (2.5 V), respectively, under a monochromatic 375 nm laser beam at 500 µW. Details on the µ-PL setup, including measurements of the focus spot size, are published elsewhere [26]. In these maps, the reference circles are shown to identify the reference regions ("r1," "r2," "r3" and "r4"). In Fig. 2(b), the lower value in the color scale refers to −21 mA and the higher value refers to −29 mA (relative increase of 40% in the brightest region with respect to the darkest one), while in Fig. 2(c), the lower value refers to 400 mA and the highest value refers to 412 mA (relative increase of 3%). This means that the current in the regions with increased signal has the same sign as the bias voltage, and thus we observe an increased photocurrent at reverse bias and an increased forward current at forward bias, as observed in similar PIN GaN-based devices [19], [20], [21], [22]. In particular, from Fig. 2(b) and (c), considering the reference regions in Fig. 2(a), it is clear that agglomerates of V-pits ("r1," "r2" and "r4") are more conductive than the V-pit-free region ("r3"). By comparing the images in Fig. 2, regions with V-pits and their agglomerates exhibit higher conductivity than V-pit-free regions, extending the analysis and the modeling performed in our previous work [24].

The same is true under EBIC characterization (data not shown here), where it is clearly seen that V-pits and their agglomerations are more conductive than the V-pit-free area. V-pits originate at the MQWs region of these devices [33] and, above them, layers grow differently than in the planar region where V-pits are not present, as clearly visible in the STEM image of the lateral cross section of the sample [Fig. 1(b)]. In the device under test, this results in the ITO layer, and thus the p-contact, being closer to the MQWs region, leading to the formation of localized short-circuit paths with a reduced potential barrier at the p-side of the device [24] [a schematic is presented in Fig. 1(c)]. Furthermore, V-pits are shown to enhance hole injection efficiency [34], [35] and feature thinner GaN barriers and lower indium concentration wells along the sidewalls [36], [37], which could constitute preferential current paths with respect to the thicker barriers and deeper wells in the MQWs region [38], [39], leading to a higher current signal in correspondence of V-pits and their agglomerates. This is supported by Fig.
3, which reports the results of the µ-EL analysis performed at a current injection of 80 mA. From the spectral data, measured locally, we could extract the zeroth moment µ0 = Σi Ii, which corresponds to the integral of the spectrum and is thus proportional to the intensity of the emission, where Ii is the intensity signal at each wavelength λi and the integration runs from λ = 315-615 nm. On the other hand, the first moment µ1 = (Σi λi Ii) / (Σi Ii) is the intensity-weighted average of the wavelength, evaluated over the same integration range as the zeroth moment. µ1 coincides with the peak wavelength if the spectrum is symmetric; otherwise, it is a good estimation of the mean emission wavelength [27]. Fig. 3(a) and (b) show the µ-EL mapping carried out on the same 5 × 5 µm² area, presenting the zeroth moment (normalized with respect to the maximum) and the first moment, respectively.

Considering the intensity information in Fig. 3(a) (zeroth moment), it is clear that, in correspondence of the agglomerates of V-pits in regions "r1" and "r2," a higher intensity signal is obtained with respect to the V-pit-free reference region "r3." On the other hand, the agglomerate of V-pits "r4" does not present as high an intensity signal as the other agglomerates. Taking into account the wavelength information in Fig. 3(b) (first moment), it is clear that the higher intensity value is correlated with a redshift (about 10-12 nm) of the weighted average emission wavelength. In the remaining area, the latter is uniform at a value around 450 nm.

Fig. 3(c) reports the reference spectra measured in the different regions (from "r1" to "r4"). The spectra related to regions "r4" (dotted purple line) and "r3" (blue solid line) show a peak wavelength of 447 nm, which is consistent with the emission peak wavelength of devices with similar indium concentration and MQWs thickness [40], [41]. On the other hand, regions "r2" (solid green line) and "r1" (solid red line) also show a peak at a redshifted wavelength of 462 nm. It is thus clear that the higher intensity in the zeroth moment arises from the occurrence of the longer wavelength contribution, which is mostly defined for the agglomerates of V-pits "r1" and "r2," and absent for agglomerate "r4," which presents a spectrum similar to that of the V-pit-free region "r3."

Further detail was obtained by analyzing the µ-PL mapping carried out on the same 5 × 5 µm² area, presented in Fig. 4. Specifically, Fig. 4(a) shows the zeroth moment (normalized with respect to the maximum), Fig. 4(b) shows the first moment and Fig. 4(c) shows the spectrum in the reference regions, measured at forward bias (2 V) under 500 µW optical power laser beam excitation at 375 nm (the mappings are also representative of reverse bias and short circuit operation). Considering the zeroth moment in µ-PL, the intensity is more uniformly distributed over the analyzed area with respect to µ-EL, emphasizing a lower influence of extended defects under µ-PL characterization. Also, the relative amplitude of the redshifted peak is lower in µ-PL compared to µ-EL, resulting in a more uniform emission wavelength.
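The moment definitions above translate directly into a short numerical routine. The following is an illustrative sketch only (the exact discretisation and normalisation used by the authors are not specified in the text): it evaluates µ0 and µ1 from arrays of wavelengths and intensities restricted to the 315-615 nm window.

```python
import numpy as np

def spectral_moments(wavelengths_nm, intensities, lo=315.0, hi=615.0):
    """Zeroth and first spectral moments over the [lo, hi] nm window.

    mu0 = sum_i I_i                (proportional to the integrated emission)
    mu1 = sum_i lam_i I_i / mu0    (intensity-weighted mean wavelength)
    """
    wavelengths_nm = np.asarray(wavelengths_nm, dtype=float)
    intensities = np.asarray(intensities, dtype=float)
    mask = (wavelengths_nm >= lo) & (wavelengths_nm <= hi)
    lam, inten = wavelengths_nm[mask], intensities[mask]
    mu0 = inten.sum()
    mu1 = (lam * inten).sum() / mu0
    return mu0, mu1

# Invented example: a symmetric peak centered at 447 nm gives mu1 at the peak wavelength.
lam = np.linspace(315.0, 615.0, 601)
spec = np.exp(-0.5 * ((lam - 447.0) / 10.0) ** 2)
mu0, mu1 = spectral_moments(lam, spec)
print(f"mu0 = {mu0:.1f} (arb. units), mu1 = {mu1:.1f} nm")
```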
Considering the first moment, reference regions "r3" and "r4" present no redshift of the weighted average wavelength with respect to the peak emission wavelength observable from the spectrum in Fig. 4(c), i.e., 447 nm (blue solid line and dotted purple line, respectively). Even reference region "r2," which shows a significant redshift in the first moment and a significant contribution of the longer wavelength peak in the spectrum under µ-EL characterization [Fig. 3(b), green circle, and Fig. 3(c), solid green line], shows a reduced longer wavelength contribution in the spectrum under µ-PL characterization. Only reference region "r1" presents a high redshift in the first moment due to a significant longer wavelength peak contribution in the spectrum under both µ-EL and µ-PL characterizations [Figs. 3(b) and 4(b), red circle, and Figs. 3(c) and 4(c), solid red line].

Considering the high-resolution in-beam f-BSE zoom characterization of reference region "r1" shown in Fig. 5, it becomes clear why this region has a longer wavelength contribution in the spectrum and thus a redshift in the first moment mapping. In fact, it is possible to recognize V-pits [42], [43] (circled by blue hexagonal lines) and a trench-like defect (circled by a red line) [44]. The latter defect is found to originate from a stacking fault lying in the basal plane (BSF), which is connected to a vertical SMB terminating at the V-shaped trenches [45]. The boundaries of the defects, made of several straight lines oriented at 60° and 120° to one another, are visible in Fig. 5 [44]. SMBs and V-pits involved in the trench-like defects are formed by relaxation, and result in an increased indium content in the region enclosed by the trench defect [44], [46]. As a consequence, a longer wavelength peak contribution in the spectrum with respect to the surrounding region is observed [17], [47]. Furthermore, the difference in strain caused by the trench-like defects could favor InN migration and the formation of QD structures with redshifted emission [48], [49], [50]. Additionally, with decreasing trench width, a reduced redshift and intensity are observed [45], [46], because of a narrower relaxed high-indium-content region, as in the case of reference region "r2" and the region to the right of reference region "r1." Reference region "r4," instead, does not present any trench-like defects, which is why no redshift of the weighted emission wavelength is observed in the first moment. Finally, the presence of the SMB increases the density of non-radiative recombination centers in the material, thus decreasing the quality and internal quantum efficiency of the sample [44]. A lower amount of these defects, visible through SEM, EL and PL characterization, will result in a higher efficiency and reliability of future GaN-based solar cells.
Based on the considerations above, and considering that the indium concentration profile is constant throughout the MQWs region, as shown by secondary-ion mass spectrometry (SIMS, data not presented here), we propose the following model to explain the related data: in µ-EL characterization, agglomerates of V-pits generate a preferential current path both for the reverse photocurrent and the forward current, due to the closer proximity of the p-contact to the MQWs region [24]. This results in the localization of carrier flow and recombination near the V-pit agglomerates. In case a trench-like defect is also present (reference regions "r1" and "r2"), a redshift in the first moment is observed with respect to agglomerates of V-pits without trenches (reference region "r4"), due to the increased indium content in the region enclosed by the trench defect [44], [46]. Under optical excitation, the longer wavelength peak has a lower impact, possibly due to non-radiative recombination losses at SMBs or BSFs [44], [46]. Another relevant factor is the enhanced extraction of carriers (via photocurrent) close to V-pit agglomerates [24], as confirmed by the LBIC data at reverse and forward bias.

IV. CONCLUSION

In conclusion, we analyzed the electrical and optical properties of V-pits and their agglomerates in high-periodicity InGaN/GaN MQWs solar cells. First, SEM investigation combined with LBIC analysis indicates that V-pit agglomerates are preferential paths for current conduction. Second, spectral measurements indicate a significant redshift in emission wavelength in correspondence of agglomerates of V-pits surrounded by trench-like defects. The results were interpreted by considering that agglomerates of V-pits form trench-like complexes, with a stronger indium incorporation and hence a longer wavelength emission, compared to the bulk defect-free material. The results provide relevant information for the study and optimization of high-periodicity MQW structures based on InGaN, for application in photodetectors and solar cells, highlighting the role of V-pits and trench-like defects in current conduction and spectral performance.

Fig. 1. (a) High-periodicity InGaN/GaN MQWs GaN-based solar cell schematic. (b) STEM image of the lateral cross section of the sample with 100 nm p-GaN thickness: the Cut 1 and Cut 2 lines indicate where the band diagram is evaluated following the model in [24]. (c) Band diagram showing the potential barrier reduction at a V-defect.

Fig. 5. High-resolution in-beam f-BSE zoom characterization of an agglomeration of V-pits with trench-like defects: V-pits are surrounded by blue lines, while the trench-like defect is surrounded by a red line.
4,324.6
2024-03-01T00:00:00.000
[ "Physics", "Engineering", "Materials Science" ]
The Synthetic Antimicrobial Peptide 19-2.5 Interacts with Heparanase and Heparan Sulfate in Murine and Human Sepsis Heparanase is an endo-β-glucuronidase that cleaves heparan sulfate side chains from their proteoglycans. Thereby, heparanase liberates highly potent circulating heparan sulfate-fragments (HS-fragments) and triggers the fatal and excessive inflammatory response in sepsis. As a potential anti-inflammatory agent for sepsis therapy, peptide 19-2.5 belongs to the class of synthetic anti-lipopolysaccharide peptides; however, its activity is not restricted to Gram-negative bacterial infection. We hypothesized that peptide 19-2.5 interacts with heparanase and/or HS, thereby reducing the levels of circulating HS-fragments in murine and human sepsis. Our data indicate that the treatment of septic mice with peptide 19-2.5, compared to untreated control animals, lowers levels of plasma heparanase and circulating HS-fragments and reduces heparanase activity. Additionally, mRNA levels of heparanase in heart, liver, lung, kidney and spleen are downregulated in septic mice treated with peptide 19-2.5 compared to untreated control animals. In humans, plasma heparanase level and activity are elevated in septic shock. The ex vivo addition of peptide 19-2.5 to plasma of septic shock patients decreases heparanase activity but not heparanase level. Isothermal titration calorimetry revealed a strong exothermic reaction between peptide 19-2.5 and heparanase and HS-fragments. However, a saturation character has been identified only in the peptide 19-2.5 and HS interaction. In conclusion, the findings of our current study indicate that peptide 19-2.5 interacts with heparanase, which is elevated in murine and human sepsis, and consecutively attenuates the generation of circulating HS-fragments in systemic inflammation. Thus, peptide 19-2.5 seems to be a potential anti-inflammatory agent in sepsis.

Introduction

Sepsis is a common and life-threatening disease, especially in medical and surgical intensive care patients, with mortality rates up to 60% [1]. It is characterized by a systemic inflammatory response to infection triggered by both pathogen-associated molecular patterns (PAMPs) and endogenous danger-associated molecular patterns (DAMPs) [2,3]. Here, heparan sulfate and the enzyme heparanase play key roles. Heparan sulfates are linear polysaccharides composed of repeating disaccharide subunits, which are D-glucosamine and D-glucuronic acid in their unmodified form. They are attached to a cell-surface-bound core protein [4]. Heparanase is an endo-β-glucuronidase that cleaves the heparan sulfate side chains within highly sulfated regions. Thereby, heparanase liberates highly potent circulating heparan sulfate-fragments (HS-fragments) [5]. Circulating HS-fragments are known to act as highly potent DAMPs and trigger the pro-inflammatory response in sepsis through Toll-like receptor 4-dependent pathways [6,7]. Thus, new anti-inflammatory agents interacting with heparanase and reducing the levels of circulating HS-fragments may be promising candidates for sepsis therapy. The naturally occurring antimicrobial cathelicidin peptide LL-37 neutralizes the pro-inflammatory action of PAMPs and DAMPs [8]; however, its therapeutic use is limited due to intrinsic toxicity [9,10]. Therefore, the challenge is to develop synthetic peptide-based drugs on the basis of naturally occurring antimicrobial peptides without causing harm.
The synthetic antimicrobial peptide 19-2.5 belongs to the class of synthetic anti-lipopolysaccharide peptides (SALP = synthetic anti-LPS peptides). However, its activity is not restricted to Gram-negative bacterial infection [11,12], as peptide 19-2.5 shows anti-inflammatory activity against Gram-negative and Gram-positive bacteria as well as against viruses [13]. In this way, it limits systemic inflammation and protects mice from lethal septic shock [11,14]. We recently reported that peptide 19-2.5 is able to decrease the inflammatory response in murine cells stimulated with both PAMPs and DAMPs [3]. However, the interaction of peptide 19-2.5 and DAMPs is still unclear in vivo. Thus, our study aimed to investigate the interaction of peptide 19-2.5 with heparanase and HS-fragments in murine and human sepsis. We investigated peptide 19-2.5 treatment in septic mice using cecal ligature and puncture (CLP), the gold-standard method for studying polymicrobial sepsis in mice [15]. Furthermore, we used the plasma of patients with septic shock as well as of healthy humans and performed isothermal titration calorimetry (ITC) to study the thermodynamics of binding of peptide 19-2.5 with heparanase and HS-fragments. The findings of our present study indicate that peptide 19-2.5 interacts with heparanase and consecutively attenuates the generation of circulating HS-fragments in systemic inflammation. Thus, peptide 19-2.5 may be a promising tool for sepsis therapy.

Plasma sampling

As described before [3], we used plasma from 18 adult individuals within 24 hours after presentation with septic shock, according to the ACCP/SCCM definition [16], after written informed consent of the patients or their legal representatives. Individuals below 18 years were excluded. Furthermore, we used plasma from healthy human donors after written informed consent (n = 10). No personal or identifying information was collected from study participants. All samples are stored in the RWTH centralized Biomaterial Database (RWTH cBMB) of the University Hospital RWTH Aachen. The local ethics committee (University Hospital RWTH Aachen, EK 206_09) approved this study before inclusion of the first individual.

Heparanase and heparan sulfate ELISA

Heparanase levels in human and murine plasma were measured using a commercial ELISA kit (No.: E03H0100, AMS Biotechnology, Oxon, United Kingdom) according to the manufacturer's instructions. The ELISA's detection range is 1.0-5000 ng/ml.

Heparanase activity assay

Heparanase activity in human and murine plasma was quantified using the commercially available Heparanase Assay Kit (No.: Ra001-BE-K, AMS Biotechnology, Oxon, United Kingdom) according to the manufacturer's instructions.

Animal model

All animal experiments were performed in accordance with the guidelines of the Institutional Animal Care and Use Committee (IACUC) and the National Animal Welfare Law, and after approval by the responsible government authority ("Landesamt für Natur, Umwelt und Verbraucherschutz", LANUV-NRW, Germany: AZ 8.87-50.10.35.09.044). All efforts were made to minimize suffering of the animals. Mice were housed under standard laboratory conditions (room temperature 21 ± 1°C; relative humidity 40-55%; photoperiod 12 h light/12 h dark) and supplied with standard feed and tap water ad libitum. The animals were handled according to the guidelines of the Federation of European Laboratory Animal Science Associations (FELASA).
This study used humane endpoints in accordance with the recommendations of the Society of Laboratory Animal Science (GV-SOLAS). The murine model of polymicrobial sepsis was divided into three steps, as described before [14]. Briefly, in a first step, 12 NMRI mice underwent a catheterization procedure under general (isoflurane 1% to 2% in an oxygen/air mix with an FiO2 of 0.3) and local (0.2 ml lidocaine 2%) anesthesia. A central vein catheter (PE tube, self-made) was implanted in the jugular vein. After further local infiltration with lidocaine, the neck was closed with a single suture. To prevent hypothermia, all animals were kept on a heating pad throughout the surgical procedure. Mice were transferred back into the cage to rest for 48 h. The i.v. line was connected to the syringe pump at a rate of 100 μl/h (NaCl 0.9%). In a second step, general and local anesthesia for cecal ligature and puncture (CLP) was performed as described above. The animals were transferred to the cage and reconnected to the i.v. line. Depending on the group, peptide 19-2.5 infusion (20 μg/ml, 100 μl/h) or NaCl 0.9% infusion (100 μl/h) was started. In preceding studies, the dose of 20 μg/ml of peptide 19-2.5 was found to be the most beneficial with the least harm to the animals [3,11-14,17,18]. Finally, in a third step, all animals were killed 24 h after CLP. To this end, the animals were anesthetized as described above, brought to a prone position, and the abdomen and thorax were opened. Blood was sampled into pre-citrated syringes. Directly after the animals were killed under general anesthesia by cervical dislocation, the heart, liver, lung, kidney and spleen were snap-frozen in liquid nitrogen until further processing.

Isothermal Titration Calorimetry

Isothermal titration calorimetry (ITC) experiments were performed as previously described [11]. Briefly, the binding of peptide 19-2.5 to heparanase or HS-fragments was recorded by measuring the enthalpy change of the reaction at 37°C. For this, a total of 100 μg/ml heparanase (R&D Systems Europe Ltd., Abingdon, United Kingdom) or 200 μg/ml HS-fragments (H7640, Sigma-Aldrich, St. Louis, MO, USA) was dispersed into the calorimetric cell, and 2 mM peptide 19-2.5 was titrated to this dispersion stepwise in 3 μl portions. After exploratory experiments, the HS concentration was set to 100 μg/ml and titrated 20 times with 1.5 μl of peptide 19-2.5 (2 mM) at 37°C.

Statistical analyses

All data are given as mean ± standard deviation (SD). As described recently [14], the PCR-derived mRNA expressions were analyzed using the relative expression software tool REST (http://www.gene-quantification.de/rest.html, rest-mcs-beta, 9 August 2006), which performs a randomization test [19]. This tool avoids assumptions on distributions and applies a pairwise fixed reallocation randomization test, which reallocates control and sample groups (= pairwise fixed reallocation). The expression ratios are calculated on the basis of the mean crossing point (CP) values for reference and target genes [14,19]. We used a multiple t-test with Holm-Šídák correction when comparing differences between groups. A p-value of p < 0.05 was considered significant for all tests. We performed all calculations and figures using GraphPad Prism 6 (GraphPad, San Diego, CA, USA).

Patient characteristics

Patient and healthy human donor characteristics are shown in Table 1. All patients met the criteria for septic shock according to the ACCP/SCCM definitions [16].
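The group comparisons above use a multiple t-test with Holm-Šídák correction, performed in GraphPad Prism. A rough equivalent can be sketched in Python with scipy and statsmodels; the endpoint names and measurement values below are invented and serve only to illustrate how the correction is applied across a family of tests, not to reproduce the published analysis.

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(0)

# Hypothetical data: one array per endpoint, n = 6 mice per group as in the study design.
endpoints = ["lung", "liver", "spleen", "heart", "kidney"]
control = {e: rng.normal(1.0, 0.3, 6) for e in endpoints}
treated = {e: rng.normal(0.6, 0.3, 6) for e in endpoints}

# One unpaired t-test per endpoint ...
pvals = np.array([stats.ttest_ind(control[e], treated[e]).pvalue for e in endpoints])

# ... then Holm-Šídák correction across the family of tests.
reject, p_adj, _, _ = multipletests(pvals, alpha=0.05, method="holm-sidak")

for e, p, pa, r in zip(endpoints, pvals, p_adj, reject):
    print(f"{e:6s}  raw p = {p:.4f}  adjusted p = {pa:.4f}  significant: {r}")
```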
Peptide 19-2.5 reduces heparanase level in vivo Several studies indicate that heparanase plays a role in sepsis-associated pulmonary [20] and renal [21] failure. However, the measurements are limited to tissue levels in certain organs [20,21]. Our findings represent the first observations of plasma heparanase levels in murine and human sepsis. We induced sepsis in mice using cecal ligation and puncture (CLP), mimicking polymicrobial abdominal sepsis. To determine whether peptide 19-2.5 impacts on heparanase during sepsis in vivo, we treated septic mice with peptide 19-2.5 (peptide treatment) or saline (NaCl 0.9%) as control. CLP-mice without treatment demonstrated significantly higher levels of plasma heparanase compared to mice treated with peptide 19-2.5 ( Fig 1A). Relative heparanase mRNA expressions in lung, liver, spleen, heart and kidney were significantly higher in untreated CLP-mice compared to mice treated with peptide 19-2.5 (Fig 2A-2E). However, we detected organ-specific differences. For example, heparanase mRNA expression was increased 5.7±1.9 fold in kidney ( Fig 2E) but only 2.2±2.2 fold in lung (Fig 2A). The systemic response in sepsis encompasses both pro-and anti-inflammatory phases over time with an organ-specific time spread [22]. Notably, Schmidt et al. measured a peak in pulmonary heparanase expression 48h after CLP [20]. Thus, our results may be time-dependent and differ in a later stage of sepsis. Peptide 19-2.5 decreased heparanase mRNA expression in all investigated organ tissues (Fig 2). Several studies report an up to 10-fold increase of heparanase expression after stimulation with pro-inflammatory cytokines [20,[23][24][25]. In turn, we could demonstrate in previous experiments that peptide 19-2.5 decreases levels of pro-inflammatory cytokines in CLP-mice and decreases mRNA expression of CD14 [14]. The expression of CD14 showed differences between the organs in mice treated with peptide 19-2.5 [14]. Accordingly, we detected organspecific differences of heparanase levels that may depend on cytokine levels and stage of sepsis. To study heparanase levels in humans, we used plasma samples of septic shock patients (n = 18) and healthy human volunteers (n = 10). Levels of plasma heparanase were significantly higher in septic shock patients compared to healthy volunteers (Fig 1B). In both groups, the ex vivo addition of peptide 19-2.5 did not influence heparanase levels (Fig 1B). Recent studies reported elevated plasma heparanase levels in patients with diabetes mellitus and in pediatric patients with cancer. However, the reported levels are much lower compared to septic shock patients [26,27]. High levels of pro-inflammatory cytokines found in pathological conditions, such as sepsis, diabetes or cancer are the main inductors of heparanase expression [23]. Thus, higher levels of plasma heparanase in septic shock patients may be due to the higher levels of pro-inflammatory cytokines in sepsis [28]. Peptide 19-2.5 reduces heparanase activity in vivo and ex vivo Several studies identified elevated levels of circulating HS-fragments in critically ill patients [3,29,30] with significant higher levels in non-survivors [31]. Johnson et al. administered HS- fragments by intraperitoneal injection in mice resulting in eighty percent mortality. However, they used artificial amounts of 5 mg of HS-fragments for intraperitoneal injection [6]. Thus, attenuating the generation of circulating HS-fragments may serve as a crucial therapeutic effect in sepsis. 
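The fold changes quoted above come from REST, which derives expression ratios from mean crossing point (CP) values and assesses them with a pairwise fixed reallocation randomization test. A stripped-down sketch of that logic is given below; it is not the REST software itself, the CP values and amplification efficiencies are invented, and the Pfaffl-style ratio is used only to illustrate the idea.

```python
import numpy as np

rng = np.random.default_rng(42)

def expression_ratio(cp_t_ctrl, cp_t_trt, cp_r_ctrl, cp_r_trt, e_target=2.0, e_ref=2.0):
    """Relative expression ratio from mean crossing points (Pfaffl-style)."""
    d_target = np.mean(cp_t_ctrl) - np.mean(cp_t_trt)
    d_ref = np.mean(cp_r_ctrl) - np.mean(cp_r_trt)
    return (e_target ** d_target) / (e_ref ** d_ref)

# Hypothetical CP values (6 mice per group): target = heparanase, ref = housekeeping gene.
cp_t_ctrl = rng.normal(24.0, 0.4, 6); cp_t_trt = rng.normal(25.5, 0.4, 6)
cp_r_ctrl = rng.normal(18.0, 0.3, 6); cp_r_trt = rng.normal(18.1, 0.3, 6)

observed = expression_ratio(cp_t_ctrl, cp_t_trt, cp_r_ctrl, cp_r_trt)

# Randomization test: reallocate animals between groups (keeping each animal's
# target/reference pair together) and count ratios at least as extreme as observed.
n_perm, extreme = 5000, 0
t_all = np.concatenate([cp_t_ctrl, cp_t_trt])
r_all = np.concatenate([cp_r_ctrl, cp_r_trt])
for _ in range(n_perm):
    idx = rng.permutation(12)
    ratio = expression_ratio(t_all[idx[:6]], t_all[idx[6:]], r_all[idx[:6]], r_all[idx[6:]])
    if abs(np.log(ratio)) >= abs(np.log(observed)):
        extreme += 1
print(f"expression ratio = {observed:.2f}, randomization p = {extreme / n_perm:.4f}")
```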
Since heparanase is the major mammalian enzyme liberating circulating HS-fragments [5], we investigated heparanase activity in mice subjected to CLP. Treatment with peptide 19-2.5 resulted in significantly lower heparanase activity compared to untreated control animals (Fig 3A). Schmidt et al. observed an association between LPS and activation of heparanase; LPS administration to mouse lung microvascular endothelial cells induced cleavage of 65-kDa heparanase to its active 50-kDa isoform. Additionally, the inhibition of heparanase is lung-protective in murine sepsis [20]. Recently, it was shown that peptide 19-2.5 changes the aggregate structure of LPS, thereby neutralizing the pro-inflammatory effects of LPS [12]. Thus, peptide 19-2.5 may lead to a lower heparanase activity in polymicrobial sepsis by neutralizing LPS. Next, we found significantly elevated heparanase activity in plasma samples from septic shock patients compared to healthy humans (Fig 3B). Notably, Schmidt et al. detected that plasma HS degradation activity is elevated only in individuals with non-pulmonary sepsis [20]. Although limited by a small sample size, our study included septic shock patients with pulmonary and non-pulmonary focus of infection (Table 1). The addition of peptide 19-2.5 to the plasma of healthy humans did not alter heparanase activity. However, peptide addition to the plasma of septic shock patients significantly lowered heparanase activity (Fig 3B). These findings may be due to the significantly lower levels of heparanase in healthy human plasma compared to plasma of septic shock patients. Probably, in samples with heparanase below a certain threshold, the accompanying levels of cytokines and HS are too low to allow a measurable effect of peptide 19-2.5.

Fig 1. Heparanase level in murine (A) and human (B) sepsis. Plasma was obtained from mice subjected to cecal ligature and puncture (CLP) and treated with peptide 19-2.5 (20 μg/ml, n = 6) or NaCl 0.9% as a control (n = 6). Moreover, plasma samples were collected from healthy control volunteers (n = 10) and from patients with septic shock (n = 18). Peptide 19-2.5 (20 μg/ml) was added ex vivo to the plasma. Data are presented as mean ± SD. P-values represent the statistical differences between groups using a multiple t-test with Holm-Šídák correction. ***p < 0.0001, n.s. = not significant.

Fig 2. Relative heparanase mRNA expression in lung (A), liver (B), spleen (C), heart (D) and kidney (E) of mice subjected to cecal ligature and puncture (CLP) and treated with peptide 19-2.5 (20 μg/ml, n = 6) or NaCl 0.9% as a control (n = 6). Data are presented as mean ± SD. P-values represent the statistical differences between groups using a multiple t-test with Holm-Šídák correction. **p < 0.01, ***p < 0.0001.

Recently, we reported that peptide 19-2.5 is able to decrease the inflammatory response in murine cells stimulated with HS-fragments in vitro [3]. Thus, we investigated the impact of peptide 19-2.5 on levels of circulating HS-fragments in vivo. Levels of circulating HS-fragments significantly decreased in CLP-mice treated with peptide 19-2.5 compared to untreated control animals (Fig 4). We postulate two ways by which peptide 19-2.5 reduces levels of circulating HS-fragments in mice: (1) interaction with heparanase and reduction of heparanase activity, and (2) direct binding between the peptide and circulating HS-fragments. Remarkably, Krepstakies et al.
reported an alteration of the peptide's secondary structure and a characteristic change in the hydration and sulfation status of HS after incubation with peptide 19-2.5 [13]. Of note, heparanase induces the release of pro-inflammatory cytokines through the generation of circulating HS-fragments [5], which in turn induce the expression of heparanase [23]. Thus, reducing the level of circulating HS-fragments by peptide 19-2.5 interrupts this cycle of pro-inflammatory action.

Enthalpy changes of the interaction between peptide 19-2.5 and circulating heparan sulfate-fragments

To study the binding of peptide 19-2.5 with HS-fragments and heparanase, we performed isothermal titration calorimetry (ITC). This technique allows statements about the kind of binding, such as Coulomb interactions leading to exothermic reactions, or entropy-governed processes such as the dissociation of water layers leading to endothermic reactions. First, peptide 19-2.5 was titrated to HS-fragments. The single titrations produced a strongly exothermic reaction, which runs into saturation around a [peptide 19-2.5]:[HS] weight ratio of 0.6 (Fig 5A). This indicates a strong Coulomb interaction between peptide 19-2.5 and HS. The thermodynamic parameters deduced from such experiments (n = 4) are summarized in Table 2. The interaction between peptide 19-2.5 and HS-fragments seems to result from Coulomb binding of the positive peptide charges with the negatively charged sulfate groups of the HS-fragments, and the resulting S-shaped curve with saturation is characteristic of complex formation between the peptide and HS-fragments (Fig 5A). These results are in line with the findings of Krepstakies et al., who investigated the impact of peptide 19-2.5 on the attachment and entry of human pathogenic viruses through its interaction with heparan sulfate proteoglycan [13]. A similar procedure was followed to determine the thermodynamic parameters of binding of peptide 19-2.5 with HS-fragments; however, 1 μl of peptide was used for each titration [13]. It is important to note that the high binding constant for this interaction (Table 2) decreases dramatically when the hydrophobic C-terminus of the peptide, FWFWG, is removed (peptide variant 19-2.5gek, GCKKYRRFRWKFKGK). Thus, besides the Coulomb interaction, a second binding process must take place, apparently governed by hydrophobic interactions.

Fig 3. Heparanase activity in murine (A) and human (B) sepsis. Plasma was obtained from mice subjected to cecal ligature and puncture (CLP) and treated with peptide 19-2.5 (20 μg/ml, n = 6) or NaCl 0.9% as a control (n = 6). Furthermore, plasma samples were collected from healthy control volunteers (n = 10) and from patients with septic shock (n = 18). Peptide 19-2.5 (20 μg/ml) was added ex vivo to the plasma. Data are presented as mean ± SD. P-values represent the statistical differences between groups using a multiple t-test with Holm-Šídák correction. *p < 0.05, ***p < 0.0001, n.s. = not significant.

Fig 4. Levels of circulating heparan sulfate-fragments in murine sepsis. Levels of circulating heparan sulfate-fragments (HS-fragments) were measured in plasma of mice subjected to cecal ligature and puncture (CLP) and treated with peptide 19-2.5 (20 μg/ml, n = 6) or NaCl 0.9% as a control (n = 6). Data are presented as mean ± SD. P-values represent the statistical differences between groups using a multiple t-test with Holm-Šídák correction. ***p < 0.0001.
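The saturating, S-shaped titration curve described above for the peptide-HS interaction is what a simple single-site (independent sites) binding model produces. As a schematic illustration of how a binding constant K and enthalpy ΔH can be extracted from per-injection heats by least-squares fitting (not the instrument software actually used; all concentrations and parameter values are invented), consider the following sketch:

```python
import numpy as np
from scipy.optimize import curve_fit

V_cell = 200e-6   # calorimeter cell volume, L (hypothetical)
M_tot = 20e-6     # binding-site concentration in the cell, mol/L (hypothetical)

def cumulative_heat(L_tot, K, dH):
    """Total heat after the titrant concentration reaches L_tot (single-site model)."""
    b = M_tot + L_tot + 1.0 / K
    bound = 0.5 * (b - np.sqrt(b * b - 4.0 * M_tot * L_tot))  # complex concentration
    return dH * bound * V_cell                                 # J

def per_injection_heat(L_tot, K, dH):
    Q = cumulative_heat(L_tot, K, dH)
    return np.diff(np.concatenate(([0.0], Q)))

# Synthetic "experiment": 20 injections, each raising the titrant concentration by 8 µM.
L_tot = np.arange(1, 21) * 8e-6          # mol/L, dilution neglected for simplicity
K_true, dH_true = 5e5, -40e3             # 1/M and J/mol
rng = np.random.default_rng(1)
dq_obs = per_injection_heat(L_tot, K_true, dH_true) + rng.normal(0.0, 2e-9, L_tot.size)

(K_fit, dH_fit), _ = curve_fit(per_injection_heat, L_tot, dq_obs,
                               p0=(1e5, -2e4), bounds=([1e2, -1e6], [1e9, 0.0]))
print(f"fitted K  = {K_fit:.2e} 1/M    (true {K_true:.1e})")
print(f"fitted dH = {dH_fit / 1e3:.1f} kJ/mol (true {dH_true / 1e3:.1f})")
```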
Enthalpy changes of the interaction between peptide 19-2.5 and heparanase

Next, we investigated the interaction of peptide 19-2.5 with heparanase (Fig 5B). ITC revealed an exothermic reaction when peptide 19-2.5 was titrated to heparanase, corresponding to negative enthalpy changes (ΔH) of approximately −25 kJ/mol at the beginning of the experiment. Although an exothermic reaction can be observed between peptide 19-2.5 and heparanase (Fig 5B), there are considerable differences from the behavior described for the HS-fragments (Fig 5A). First, the ΔH values at the beginning of the titration are much lower than those for the HS-fragments titration. Second, they do not show a saturation character: there is no sigmoidal curve, and a substantial ΔH value of around 12 kJ/mol persists at high peptide concentrations. This behavior does not allow a binding constant to be calculated as in the case of the HS-fragments. Third, the processes take place at peptide concentrations that are orders of magnitude higher than those for the interaction with HS-fragments. A polar interaction should take place through binding of the positive peptide charges with the acidic (negative) charges of the protein, although the latter has more positive (100) than negative (86) charges. This indicates that there are no clear binding sites for the peptide in heparanase, and the interaction seems to have a more charge-independent, at least partially hydrophobic character. Taken together, the high-affinity binding of peptide 19-2.5 to HS-fragments as well as to heparanase follows a two-step mechanism: a direct Coulomb interaction between the positive charges of the peptide and the negative groups of HS-fragments or heparanase occurs as the initial step, followed in a second step by a hydrophobic interaction between the respective partners. This leads to a decrease in the activity of the enzyme heparanase as well as to binding of the HS-fragments and a characteristic change in their hydration and sulfation status. Most importantly, however, the conformational change of heparanase induced by the peptide causes a significant decrease of its enzymatic activity (Fig 3A and 3B).

Conclusions

In summary, our data indicate for the first time that plasma heparanase level and activity are elevated in murine and human sepsis. Moreover, we demonstrated that the synthetic antimicrobial peptide 19-2.5 interacts with heparanase and consecutively attenuates the liberation of circulating HS-fragments in systemic inflammation. The recently reported interaction between peptide 19-2.5 and PAMPs in vitro [3] is confirmed by our current study in vivo. Thus, peptide 19-2.5 may be a potential anti-inflammatory agent in sepsis by interacting with heparanase and circulating HS-fragments.
4,839.8
2015-11-23T00:00:00.000
[ "Chemistry", "Medicine" ]
A Duality in Two-Dimensional Gravity We demonstrate an equivalence between two integrable flows defined in a polynomial ring quotiented by an ideal generated by a polynomial. This duality of integrable systems allows us to systematically exploit the Korteweg-de Vries hierarchy and its tau-function to propose amplitudes for non-compact topological gravity on Riemann surfaces of arbitrary genus. We thus quantise topological gravity coupled to non-compact topological matter and demonstrate that this phase of topological gravity at N=2 matter central charge larger than three is equivalent to the phase with matter of central charge smaller than three. Introduction Field theories with N = 2 supersymmetry in two dimensions give rise to topological quantum field theories after twisting [1]. When the starting point is a non-compact conformal field theory, the correlation functions of the resulting topological quantum field theories were recently computed [2]. Subsequently, these theories were coupled to topological gravity [3,4] and the gravitational theory was solved on the sphere. 1 One motivation for studying topological gravity coupled to non-compact matter is to test the gravitational consequences of going beyond the central charge bound c = 3 for N = 2 minimal matter in two dimensions. In [4], it was noted that coupling topological gravity to twisted matter with central charge c > 3 gives rise to critical behavior reminiscent of N = 2 matter with central charge c < 3 coupled to gravity. We will gain more insight into this similarity in the present paper. Another motivation for the study of these theories is the wish to compute topological string amplitudes on asymptotically linear dilaton spaces which are generalizations of non-compact Calabi-Yau manifolds [9][10][11][12][13][14]. In this paper, we exhibit the close relation between non-compact topological quantum field theories [2,4] and the deformations of topologically twisted compact N = 2 minimal models [7]. By detailing the link, we also gain control over the non-compact topological quantum field theories coupled to topological gravity on Riemann surfaces of higher genus. We thus extend their solution at genus zero [4] to arbitrary genera. The underlying idea of the equivalence is simple. An N = 2 minimal model is the infrared fixed point of a Landau-Ginzburg theory in two dimensions with N = (2, 2) supersymmetry and a single chiral superfield, subject to superpotential interactions. The minimal model with central charge c = 3 − 6/k c corresponds to a superpotential monomial W c = X kc where X is a N = (2, 2) chiral superfield. On the other hand, a N = 2 non-rational conformal field theory at central charge c = 3 + 6/k, with k a positive integer can be modelled with a generalized Landau-Ginzburg theory with superpotential Y −k , where Y is again a chiral superfield [2,9,10]. The topological quantum field theories we study are the theories that arise upon twisting and deforming the infrared fixed points. Formally, the change of variables X = Y −1 maps the superpotential of the compact model to the superpotential of the non-compact model (upon identifying the levels k c = k). In this paper, we analyze the extent to which the change of variables proves an equivalence between the compact and non-compact topological quantum field theories, and their coupling to gravity. The correlation functions of both the compact and non-compact models are governed by the Korteweg-de Vries (KdV) or reduced Kadomtsev-Petviashvili (KP) integrable hierarchy. 
2 Our duality comes down to chasing the change of variables X = Y −1 through the equations determining the classical and quantum integrable hierarchy. Thus, the calculational proofs are elementary. We then move to exploit this duality to solve non-compact topological gravity in two dimensions on Riemann surfaces of any genus. The duality provides a technically transparent though conceptually challenging answer to a hard question in two-dimensional quantum gravity, which pertains to the backreaction of gravity in response to a large amount of matter. Similar (though not identical) ideas have been mentioned in the integrable hierarchy literature. Firstly, there is an equivalence relation that was mentioned for the classical rational KP integrable hierarchy in [16]. It was applied to non-polynomial examples. Secondly, a similar device was employed to argue that matrix models with negative power monomial potential are governed by a reduced KP hierarchy [17]. Thirdly, we observe that the inversion of variables comes down to an analytic continuation of the exponent of the superpotential from k c to −k, where both k c and k are positive integers. 3 This analytic continuation was proposed as a method for obtaining results about models of topological gravity coupled to a topological non-compact coset conformal field theory [18], or matrix models with a negative power monomial potential [19]. Our plan is to kick off the paper by proving the equivalence between two dispersionless (i.e. classical, tree level, spherical) integrable hierarchies in section 2. In section 3 we detail the relation between the non-compact solution obtained by duality and the solution to the model obtained by twisting the physical spectrum of N = 2 Liouville theory studied in [2,4]. We then solve the non-compact topological quantum field theory coupled to gravity using a dispersionful (or quantum) KdV hierarchy in section 4. In section 5, we discuss the extent to which our solution relates to the approach of determining the correlators of a non-compact model through analytic continuation in the central charge (or level) of the conformal field theory [18,19]. We conclude in section 6 and comment on the conceptual implications of the duality for two-dimensional gravity. In appendices A and B we provide illustrations that may help to reveal aspects of our paper as either subtle, or simple. The Duality In this section, we briefly review the rational Kadomtsev-Petviashvili (KP) hierarchy reduced with respect to the derivative of a (super)potential. We follow the pedagogical reference [16] and refer to [15] for background. Importantly, the hierarchy can be formulated democratically with respect to the times of the integrable evolutions. Moreover, we carefully choose our setting sufficiently broadly to allow for all manipulations that we will need. Then, we prove an equivalence between a model with polynomial potential and a model which is polynomial in the inverse variable. These are the integrable hierarchies corresponding to a compact topological quantum field theory and a non-compact topological quantum field theory respectively (see e.g. [7] and [2] and references therein for the relation to deformations of supersymmetric conformal field theories and the operation of twisting). We prove the classical equivalence of these models in this section, and discuss the quantum, dispersionful hierarchy in section 4. The Rational KP Hierarchy in a Nutshell We very briefly review the rich rational KP hierarchy. 
We must refer to [16] for more background information and a laundry list of intermediate results. The basic data of the hierarchy is a potential W which is a polynomial in the ring C[X, X −1 ]. 4 It has a minimal and maximal degree. 5 If the maximal degree k max is positive and the minimal degree −k min is negative, we define two formal power series, one in X −1 , and one in X, through the formulas We normalized the first formal power series (by dividing by the leading coefficient c max of the polynomial W ) such that the first term on the right hand side has coefficient one, and we will often pick c min = 1/k min . Next, we define Hamiltonians Q i through the formulas: where the indices on the square brackets indicate which orders in the formal power series we keep. 6 We introduce an infinite set of times t i∈Z\{−1} and the reduced integrable hierarchy is then defined by the evolution equations: which is shorthand for Because the connection Q is flat, the time evolutions in the ring are mutually compatible [16]. We chose a formulation of the hierarchy which is democratic with respect to all time variables. Given the integrable hierarchy, one can define a set of operators, topological quantum field theory correlators, as well as classical topological gravitational correlators that satisfy all the axioms of such theories (such as associativity of the operator product and a topological recursion relation). Moreover, generating functions for these correlators can be constructed. See e.g. [16] for the large set of standard, relevant and explicit formulas. We assume these topological quantum field theories and topological theories of gravity to be known in the following. The Compact and Non-Compact Flows After the brief recap of the general framework of the rational Kadomtsev-Petviashvili hierarchy, we simplify matters considerably in this section. We concentrate on proving and analyzing in detail a duality between two reduced KP hierarchies. The first is the integrable KP hierarchy reduced over the derivative of a polynomial (super)potential in a variable X and the second is a theory reduced over the derivative of a polynomial potential in a variable Y −1 . The existence of such a duality for the strictly rational case is mentioned in [16]. The necessity of introducing the rational framework despite the polynomial nature of our potentials lies in the fact that we wish to be able to divide by polynomials in the following. Our two families of models can be described explicitly as follows. The compact model parameterised by the variable X has a polynomial potential while the non-compact model with variable Y has a polynomial potential in Y −1 (2.6) 6 There exists an important extension to include a Hamiltonian Q −1 , but we barely need it in this paper. The normalization of the coefficients is chosen to agree with [16]. 7 In the following it is important to assume that the coefficients v 1 as well as v −1 are non-zero since these determine the minimal and maximal degree of the derivative of the potential respectively, and consequently the dimension of the quotient ring. The leading coefficient of the potentials is chosen to be fixed and non-zero. The subleading coefficient of the potentials is chosen to be zero through a shift of the variables X, respectively Y −1 . For both the compact and the non-compact models, we can define generators Q i∈Z of KP integrable flows parameterized by times t i∈Z . 
For the compact models only the Hamiltonians with i ≥ −1 are non-trivial, while for the non-compact models those with i ≤ −1 are nontrivial. For simplicity, we again exclude the time t −1 from our considerations in both models, though it can be reinstated if so desired. We label quantities referring to the compact model with an extra index 'c' while those quantities without extra index relate to the non-compact model. Thus, for the compact model we have times t c i≥0 and for the non-compact model times t i≤−2 . To restate the time evolutions, we introduce the roots of the potential for the compact and the non-compact model as formal power series at large X and small Y respectively: The Hamiltonians are (2.9) The reduced integrable hierarchies are then defined by the evolution equations (2.3), valid for both the compact and the non-compact models. In the compact model, the evolution equations take a standard form if we pick t c j = t c 0 . They then read: In the non-compact model, if we pick the first possible time, t j = t −2 as our reference time, than the equations defining the hierarchy become In other words, we find a different symplectic structure. The Equivalence of Flows The first part of our equivalence map is to demonstrate that the classical flows of the compact and non-compact integrable hierarchies are isomorphic. We relate the flows of these two reduced KP hierarchies through the change of variables X = Y −1 . If we equate the levels k c = k in both models as well as the potential coefficients v a = v −a /a for a ∈ {1, 2, . . . , k c − 2}, then after the change of variables X = Y −1 , the compact superpotential W c (2.5) and the non-compact superpotential W (2.6) match. Thus, the series expansions L c (2.7) and L (2.8) are equal. We moreover map the indices i c + 1 ↔ −i − 1 which implies i c ↔ −i − 2, to match the compact Hamiltonians Q i≥−1 c (2.9) with the negative of the non-compact Hamiltonians Q i≤−1 (2.10). Most importantly, we note that the times t c i are mapped to the times t −i−2 , and that under the change of variables X = Y −1 , the flow evolution equations (2.11) and (2.12) are mapped to each other, because ∂ X = −Y 2 ∂ Y . The overall sign works out as well because of the minus sign in the comparison of Hamiltonians. The symplectic structures map into each other. We conclude that the integrable flows agree under the duality map. For the reader's convenience, we summarize the substitution rules: (2.13) The Operator Rings We matched the integrable hierarchies. We now work out further details of how the operator rings and other data match between the topological quantum field theories as well as the theories of topological gravity. For each model, an infinite set of operators is defined as derivatives of the Hamiltonians: (2.14) These operators live in the rings C[X, X −1 ] and C[Y, Y −1 ] respectively. Due to the condition that the coefficients v ±1 are non-zero, the quotient rings where we divide by the ideal generated by the derivative of the superpotential have bases φ α where α ∈ ∆ c = {0, 1, . . . , k c − 2} for the compact model and α ∈ ∆ = {−2, −3, . . . , −k} for the non-compact model. These quotient rings will be more manifestly isomorphic under duality after making a change of basis of the type discussed in [16]. In the non-compact model, we pick a reference basis element φ α 0 =−2 , and divide all operators in the basis by this reference element. 
Firstly, we recall that and therefore the corresponding operator φ −2 is We pick the operator basisφ α in the non-compact model given bỹ In fact, we can more generally define new operators 8 which under the equivalence map (2.13) map onto the operators φ −i−2 c of the compact model. The Topological Quantum Field Theories The topological quantum field theories associated to the equivalent integrable systems are necessarily isomorphic. There are some subtleties in the details of the matching that we want to discuss. The references [4,16] define universal coordinates for the non-compact models. They are useful, and allow for an explicit solution of the non-compact quantum field theory correlators [2,4]. They are, however, different from the universal coordinates one finds under the equivalence map from the universal compact coordinates. In this paper, we work in the latter coordinates, since we are focused on exploiting the equivalence map. To make the relation between the results in this paper and those in [4,16] more manifest, we record the explicit map between the universal coordinates we use in this subsection to those used in the original description of the non-compact model [4,16] in appendix A.1. Below, we define the new non-compact universal coordinates that are natural from the perspective of the duality. The compact universal coordinates u c are described by inverting the series L c (X): They coincide with the i0 component of the Gelfand-Dickey potentials G ij c [16] G ij c = For the non-compact theory on the other hand, we define the potentials [16] and the new universal coordinatesũ i adapted to the choice α 0 = −2: We can prove that the universal coordinates u c andũ do match under the equivalence map, using the following property of residue formulas. The relation between the residue of a function or formal power series at infinity and the residue at zero is (in conventions in which both are defined as the coefficient of the term with power minus one): (2.24) 8 Here, we go beyond our background reference [16]. Using this property, the universal coordinates agree (since the roots of the superpotentials, the indices and the operators are appropriately mapped). It is then straightforward to follow all the quantities that determine the topological quantum field theory through the equivalence map. Indeed, under the substitution rules, we have more generally: Since these are second derivatives of the generating functions [16], we can integrate the equality up and pick generating functions of correlation functions of matter correlation functions F m that are equal The upper index refers to the picture in which we evaluate the correlation functions, which is the zero picture for the compact model, and the minus two picture for the non-compact model (since for the latter we chose the reference time t −2 ). This proves the equality of universal N-point functions, defined as derivatives with respect to u c andũ coordinates respectively. In other words, we have (2.27) To illustrate the equivalence hands-on, we note that the metrics η c αβ = δ α+β,kc−2 and η αβ = δ α+β,−k−2 match since if we have primaries α, β and α + β = k c − 2 before the transformation, then −α − 2 − β − 2 = −k − 2 after the transformation. The structure constants also match. The first reason for this is that the operators φ α c andφ α are bases of the respective rings that agree on the nose under the equivalence map (as shown in the previous subsection). 
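As an aside, both the change of variables underlying the duality and the residue relation (2.24) can be checked mechanically on sample expressions. The following sympy sketch is only an illustrative verification on arbitrary test polynomials (the level, coefficients, and test functions are made up) and is not part of the original derivation.

```python
import sympy as sp

X, Y = sp.symbols('X Y')
k = 4  # sample level, k_c = k

# The change of variables X = 1/Y maps a compact-type potential to a non-compact one.
W_c = X**k + 2*X**2                     # sample compact potential, coefficients arbitrary
print(sp.expand(W_c.subs(X, 1/Y)))      # Y**(-4) + 2/Y**2: a polynomial in Y**(-1)

# Chain rule: d/dX = -Y**2 d/dY, checked on a sample function g.
g = X**3 + 5*X
lhs = sp.diff(g, X).subs(X, 1/Y)              # g'(X) evaluated at X = 1/Y
rhs = -Y**2 * sp.diff(g.subs(X, 1/Y), Y)      # -Y**2 d/dY [g(1/Y)]
print(sp.simplify(lhs - rhs))                 # 0

# Residues: the X**(-1) coefficient of f(X) versus the Y**(-1) coefficient of the
# pulled-back differential f(1/Y) d(1/Y) = -f(1/Y)/Y**2 dY.
f = 3*X**2 + 5*X + 7/X + 2/X**3
res_X = sp.expand(f).coeff(X, -1)
res_Y = sp.expand(f.subs(X, 1/Y) * sp.diff(1/Y, Y)).coeff(Y, -1)
print(res_X, res_Y)   # 7 and -7: equal up to the relative sign fixed by the convention
```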
Thus, the corresponding structure constants will automatically coincide. This is sufficient to prove that the associated topological quantum field theories match on all Riemann surfaces, in agreement with the equality of generating functions. Let us provide even more details on the correspondence of the formulas. We can show how a scalar product on the space of operators agrees between the compact model with respect to time t 0 and the non-compact model with respect to time t −2 . For the compact model, we have the scalar product [16] where we sum over the zeroes of the derivative of the superpotential W c . For the non-compact model, with the reference choice α 0 = −2, we define the scalar product [16]: Firstly, this manifestly implies that (φ, ψ) 0 = (φ,ψ) α 0 , by the definition of the operatorsφ. Secondly, we can sum over the zeroes of the derivative of the superpotential, which for the compact model lie near zero by performing a large contour integral. We can slip this contour over the sphere in the compact model, and evaluate it at X = ∞. For the non-compact model, we reason similarly and can evaluate the residue formula at Y = 0. Using this contour deformation, we find under the equivalence map and using property (2.24) that The structure constants similarly match since (see [16] for the proof of the residue formula for the three-point correlator): Let us remark that we can interpret the φ α 0 insertions in formulas (2.29) and (2.31) as taking us from a zero picture vacuum state to a minus two picture vacuum state where the two are related by |vac (−2) = φ −2 |vac (0) . Finally, we backtrack to the time evolution of the coefficients of the superpotential with the integrable flows parameterized by the times. The reduced KP hierarchy has a topological quantum field theory solutionũ α = η αβ t β when we restrict the range of times to the basis set ∆ (for either the compact or the non-compact theory) [16]. These topological quantum field theory solutions also map from the compact to the non-compact problem under the substitution u α c ↔ũ −α−2 . In summary, we find that a compact topological quantum field theory captured by a polynomial superpotential is equivalent to a non-compact topological quantum field theory defined by a superpotential of an inverse variable. The relation between the theories is rather involved in terms of the standard universal coordinates (see appendix A.1), and becomes straightforward in the universal coordinates suggested by the equivalence map. The Classical Gravitational Equivalence We have proven a classical equivalence of a compact and a non-compact topological quantum field theory. In this subsection, we provide only a few of the details of the equivalence of the topological quantum field theories coupled to topological gravity, at the classical level, i.e. on the sphere. The idea of the map is again simple. We extend the agreement of the flows labelled by α exploited in the topological quantum field theory equivalence to include all the times of the integrable hierarchy. We need to go slightly beyond the discussion provided in [16] on this occasion. There is an equivalence map for descendants fields. We define [16] σ Under the equivalence map, these operators and their normalisations match. 
Since theφ α operators are an alternative basis for the quotient ring, the descendant decomposition theorem (which says that any descendant can be decomposed into primaries) as well as the topological recursion relation for the operators σ N (which says that descendant three-point functions can be recursively computed in terms of primary three-point functions -see [3,7,8] for background -) are valid also in the α 0 = −2 reference frame. Moreover, both theorems are mapped to their compact counterparts under the equivalence map. Thus, the equivalence map extends to the topological quantum field theories on the sphere coupled to gravity. The compact generating function of gravitational correlators is also mapped to its non-compact counterpart. Summary The duality map is now manifest. We used the change of variables X ↔ Y −1 to prove the equivalence of the potentials Q and Q c , of the fields φ c andφ, of the times t i c and t i , of the universal coordinates u c andũ, and finally of the Gelfand-Dickey potentials G and generating functions F . Thus the full classical equivalence is understood. The statement that we obtain is the following. If we consider an extended non-compact model with a superpotential of the form (2.6), then a choice of reference time t −2 provides a model equivalent to the compact model (2.5) in the more standard reference time t 0 . In the next section, we study how the trivialization of this non-compact model is related to the solution of the twisted N = 2 Liouville theory obtained in [2,4] in the zero picture. Moreover, since we have proven the classical equivalence of two models, the quantum equivalence will also hold, if we perform equivalent integrable quantisations. We present the resulting quantum equivalence in section 4. The Strict Non-Compact Model The duality map provides us with a definition of a non-compact topological quantum field theory model, before and after coupling to gravity. We also have a good understanding of a physical twisted non-compact model, namely the twisted relevant deformations of N = 2 Liouville theory, as the limit of an integrable system [2,4]. In the present section, we establish the connection between these two systems in detail. We work at the level of the topological quantum field theories. The Non-Compact Correlators in the Zero Picture In the duality approach, we can compute all the correlators in the −2 picture in a noncompact topological quantum field theory model with a potential (2.6) with constant up to sub-subleading deformations of a leading monomial Y −k . In principle, we have a solution for the generating function of correlation functions F (−2) m . We would like to understand an equivalent description in a (more standard) zero picture. To render the 0 correlators welldefined, namely, to have a sensible time t 0 with a proper time evolution associated to it, one adds to the superpotential (2.6) the linear term in Y . Clearly, this defines a new integrable system for which, among other quantities, the 0 correlators as well as the −2 correlators make sense. We then define a limit on the free energy (in the spirit of [4], but slightly more general) which leaves the 0 correlators well-defined, yet eliminates the leading linear term in the potential. In this manner we define 0 correlators for the superpotential without the linear term. In the following, we study this limiting procedure. The Definition of the Zero Picture Correlators Firstly, we convince ourselves that the limit of the 0-correlators is well-defined. 
To that end, we need to understand the scaling of quantities with the parameter ǫ 1 in which we take the limit. The scaling reduces the dimension of the chiral ring (since the linear term in the superpotential will be eliminated) and is therefore clearly impactful. We start with the superpotential W lin with linear term: The scaling limit is defined as follows. We multiply Y by ǫ 1 and rescale all u i≤−1 by ǫ 1 . By the formula for the v −α in terms of the universal coordinates (see [4,16]), this keeps all but the linear term in the potential fixed. In summary: For ǫ 1 = 0, a basis of the quotient ring is given by 1, Y −1 , . . . , Y −k . For ǫ 1 = 0, a basis of the quotient ring is given by Y −2 , . . . , Y −k . The latter ring has dimension two less than the former. The original model has a generating function of zero correlation functions, which we denote F (0) m . After the scaling transformation, it depends on ǫ 1 . We would like to determine the behaviour of the generating function as a function of the parameter ǫ 1 as we scale ǫ 1 to zero. We think of W lin (ǫ 1 ) as a constant term plus a linear term in ǫ 1 . In this particular model, we have that W lin = L max since the leading power in the potential is one. For the other formal power series, L min , the small Y and small ǫ 1 expansions are straightforwardly compatible. We can work to linear order in ǫ 1 at all stages. We have for instance where W now indicates the target superpotential without linear term, and where L is the formal series corresponding to the undeformed superpotential W . We can then use the formulas for the zero picture one-point functions gathered in [16] for the rational model to understand the first derivatives of the scaled generator of correlation functions F They are given by residues of fractional powers of the roots: We see that we have one linear term in ǫ 1 in the generating function F (0) m (ǫ 1 ) of correlation functions, namely ǫ 1 u −1 u 2 0 /2, and otherwise quadratic terms in ǫ 1 . We also deduce from the last equation that the limit of the zero correlators for the operators that remain in the spectrum, captured by the quadratic terms in ǫ 1 , is given by the naive formula, namely, in which we replace the root L min by the root L of the limit potential W . We conclude that, once we subtract the cubic term ǫ 1 u −1 u 2 0 /2, the limit lim ǫ 1 →0 F (0) m (ǫ 1 ) has at most logarithmic divergences, and those are proportional to u −1 . The generating function of zero correlation functions that do not depend on u −1 is given by integrating up the naive formula (3.5) of the one-point functions of the limit model. A Scaling Law We can revisit the discussion more systematically by observing a scaling law for the model with linear term. We recalled the first derivatives of the generating function F (0) m of topological quantum field theory correlation functions (in the zero picture) in equations (3.5). These first derivatives are determined algebraically. It is an interesting question whether the final integration of the generating function can also be performed algebraically. We already know this to be the case for the topological quantum field theories that arise from deforming topologically twisted conformal field theories. Indeed, the latter satisfy the scaling equation [4,7]: where the sum is over all operators in the spectrum of the topological theory, c is the central charge of the superconformal field theory and q i are the R-charges of the operator insertions. 
This equation allows us to perform the final integration (by computing the left hand side to obtain the right hand side) for the generating function F (0) m of correlation functions. Thus, for these theories, the calculation of the generating function can be performed completely algebraically. This equation is true both in the compact [7] and the non-compact model [4]. The question we turn to is whether there is a similar scaling law for the non-compact model with a linear term (3.1) (or equivalently, a compact model with a X −1 term). We propose the following answer: consider the generating function F (0) m . It has a single logarithmic term. Divide the argument of the logarithm, namely u −k , by a scale factor µ. Then, the generating function F (0) m satisfies the anomalous scaling law: We have checked the anomalous scaling law in examples at levels k ≤ 7. We now show that this scaling law is consistent with the limiting procedure towards the non-compact model without linear term, as well as the behaviour of the generating function in that limit. Indeed, in that limit, the terms quadratic in u −1≥−i≥−k (that correspond to order ǫ 2 1 terms in the generating function) will make for a right hand side in equation (3.7) equal to matching onto the right hand side of equation (3.6). Thus, the limiting scaling law reproduces the known scaling law (3.6) for the deformation of the non-compact conformal field theory. The scaling law allows for an entirely algebraic determination of the generating function F The Relation Between the Zero and the Minus Two Pictures In the previous subsections, we scaled out the linear term from the potential (3.1). The potential then agrees with the one from the integrable system obtained from duality. We have available a description of the correlators in both the −2 and the 0 picture and can now ask for the precise relation between these correlators. It is sufficient to observe that we know the relation between the universal coordinates, given by the equatioñ as well as the equality between the second derivatives of the generating function with respect to these coordinates [16]: Further derivatives give higher-point functions, and these derivatives can be related through the coordinate transformation and the chain rule For the three-point functions, we find for instance: We can confirm this equation using the relation between operators valid modulo W ′ , as well as the residue formulas for the correlators (see [16] for details). Thus, all correlators in the two pictures are in principle related in a straightforward manner. The Link to the Strict Non-Compact Model Finally, we take the limit towards the strict non-compact models described in [2,4] with superpotential (3.14) These are obtained by restricting to the topological degrees of freedom that arise from the physical Hilbert space of the twisted N = 2 Liouville theory at radius √ kα ′ [2,4]. We need to eliminate the constant term and the term proportional to Y −1 in the superpotential (2.6). We therefore send v 0 = u 0 = ǫ 2 and v −1 = u −1 = ǫ 2 to zero. Given that the superpotential and the non-compact root L behave regularly in the limit, there is no subtlety in defining the limiting expressions. Those give rise to the models of [2,4]. 
To recuperate all parameters present in [2,4], one needs to restore two parameters, which one can accomplish by rescaling Y (to obtain a non-trivial parameter in front of the leading order term Y −k ) as well as shift the variable Y −1 (to find a non-trivial subleading term proportional to Y −k+1 ). We refer to [2,4] for a full description of the resulting integrable system. Quantum Non-Compact Gravity In this section, we comment on the solution of non-compact matter coupled to topological gravity on higher Riemann surfaces. Our strategy for solving the quantum model is simply to exploit the solution to the quantum compact model through the equivalence map. Thus, we immediately describe the quantum theory with respect to the non-compact reference time t −2 . Schematically, the reasoning is that we turn the statements about the classical symplectic structure into equivalences on the quantum commutators, through the quantization: in respectively the compact and the non-compact theory. The first equation leads to the standard quantised (or dispersionful) KdV hierarchy, while the second is its image under the equivalence map. We change variables on the left, as in the classical theory, and then quantize and obtain the operators on the right (with identical operator ordering prescriptions in both theories). We refer to [8] for a review of the quantum KdV hierarchy in the context of topological gravity. There it is discussed that at zero compact times t i≥1 c , the initial condition for the quantum Lax operator L c appropriate for the topological quantum field theory coupled to gravity is because of the three-point function for primaries X i at the conformal point, fixed by charge conservation. This then uniquely determines the τ function which is the generator of correlation functions for compact matter coupled to topological gravity: We record what these statements become under the equivalence map. The initial condition will map to To understand the initial condition, we need to realize that the relevant topological quantum field theory three-point functions are the correlators in the minus two picture. As we saw earlier, the operatorsφ i indeed have the same (minus two picture) three-point functions as do the φ i c operators (in the zero picture). Continuing in this vein, the time variables t i will couple to theφ i operators and we define the logarithm log τ of the tau function as: Under the duality map, we then have the equality of tau-functions We can ask for the relation with the generator of correlation functions defined in terms of the original operators φ i or φ α in the non-compact model. We imagine we can restrict to the latter (by the decomposition of descendants into primaries and the renormalization of primary times). We then observe that the fields φ α andφ α are related in the non-compact model by a linear transformation (see equation (3.13)). We thus have a description of the correlators of the φ α (and φ i ) fields in the zero picture as well. Thus, we have provided an algorithm for calculating the correlators of quantum non-compact gravity. On Analytic Continuation In this section, we discuss the extent to which our non-compact topological models are related to the compact topological models by analytic continuation. It is interesting to perform analytic continuation in correlation functions computed for general positive compact level towards negative levels. 
Firstly, in [18], this was shown to reproduce, at negative level k c = −1, the Penner model for Euler characteristics of moduli spaces of Riemann surfaces. Secondly, at level k c = −2, a connection to unitary matrix models was uncovered [20]. Thirdly, at generic negative level k c , an intriguing connection to the spectral density of the SL(2, R)/U(1) coset conformal field theory was suggested in [19]. Fourthly, one can make a tentative link to matrix models with negative power monomial potential [17]. Here, we point out that analytic continuation reproduces a few elementary results in the non-compact topological theories that we obtained through duality. However, we also show that generically, analytic continuation will lead to a different model. Thus, we situate our non-compact model more clearly with respect to the literature. The section is structured as follows. We first make the point that it is hard to generically relate our non-compact models to compact models through analytic continuation. Then, we backtrack and show that for a number of elementary results, there is a connection using analytic continuation. Finally, we illustrate how these links break down for generic correlators in a simple example. A Generic Argument In the compact model coupled to topological gravity, it was argued in [21] that the generator of correlation functions F is a polynomial of maximal degree k c + 1 at genus zero. This follows from the charge conservation rule where s is the number of primary insertions and q i is their R-charge. The central charge c equals c = 3 − 6/k c where k c is a positive integer for compact minimal models. The order of the polynomial is given by the maximal number of insertions. To maximize the number of insertions, we must maximize their R-charge (since the R-charge is strictly smaller than one). Thus, to compute the order of the polynomial, we solve the equation For the maximal R-charge q max = 1 − 2/k c present in N = 2 minimal models, we find that the maximal number of insertions is s max = k c + 1, as stated. For non-compact models at central charge c = 3 + 6/k, the same R-charge conservation rule holds, and one can reason similarly. In order to come as close to the compact model as possible, we will not allow the marginal deformation with R-charge 1. If we allow the subleading R-charge 1 − 1/k, then we will find a polynomial of order 2k − 1. If we only allow for the (sub-subleading) maximal R-charge 1 − 2/k, then the polynomial will be of order k − 1. 9 Thus, we already strongly suspect that generic (zero picture) correlation functions cannot match (under analytic continuation). This generic argument is convincing, but we will confirm it through more detailed reasonings in the following. To increase intrigue, we first discuss a few correlators in which analytic continuation does provide a good guide to non-compact correlators. A Few Elementary Correlators It is known that the three-and four-point functions of topological gravity (at vanishing times) plus the associativity equation determine the generating function F of correlation functions uniquely [21]. Thus, one strategy to see to what extent analytic continuation reproduces the non-compact correlation functions is by starting out with the comparison of low-point functions at the conformal point (i.e. with monomial superpotential and all times equal to zero), and to build up from there. The Three-point Functions When the times vanish, the zero-, one-and two-point functions in topological gravity are zero. 
The three-point functions of three primaries in the compact model are (see e.g. [22]) The delta-function is dictated by charge conservation. In the non-compact model, we have [2,4] The delta-function on the right hand side can be obtained by analytic continuation k c → −k from the compact correlator. Indeed, this is a direct consequence of the relation between the compact and non-compact central charges of the underlying N = 2 superconformal field theories. Note that these correlators match the zero picture matter correlators: and which also translate into one another under analytic continuation in the level. This indicates that if analytic continuation is to work, it will be in the zero picture. Thus, we seem to find that three-point functions compare well under analytic continuation. However, it is crucial to think about the spectrum of R-charges as well. In other words, one should also wonder about how to match observables. The spectrum of R-charges in the zero picture is the set {2/k, 3/k, . . . , 1} in the (strict) non-compact theory, and does not match the spectrum of R-charges {0, 1/k, . . . , 1 − 2/k} in the compact theory. We will come back to this point, since it spoils the correspondence between the generating functions F at low order despite the neat continuation from equation (5.3) to equation (5.4). Four-point Functions A less trivial comparison is provided by the calculation of the four-point function on the sphere at zero times. For the compact four-point function we have the charge conservation rule: The four-point function was calculated in the integrable system formalism in [21], by perturbing the superpotential to first order in times, and following the prescriptions for computing the perturbed three-point functions to linear order in time. In particular, one uses the residue formula for the three-point function and the Hamiltonians and operators to linear order in time. The result for the perturbed three-point function [21], linear in times, in our conventions reads where a sum over i 4 is implied. At zeroth order in time, we confirm the three-point functions. The term linear in time fixes the four-point function at zero time, which satisfies 4 j=1 i j = 2k c − 2: The last equation is proven on a case by case basis. We turn to the non-compact four-point function on the sphere at zero times. At zero time, we take the conformal model We compute the intermediate results: We use the residue formula for the three-point function as a function of times t i , and wish to compute it to linear order in times in order to find the four-point function at zero times. We find: where we established the charge conservation equation for the four-point function 4 m=1 l m = 2k + 2 . (5.14) In the end, we obtain a four-point function: We rewrite the correlators in the compact case as which is proportional to a minimum of NS and R-sector R-charges. For the non-compact case, we write similarly: which has the same dependence on the charges, while the central charge c is an analytic continuation of the compact central charge c c . The overall sign can be made to agree by a change of sign convention for the deformation times. The big caveat however is that the spectrum of charges does not match, as remarked earlier. An Explicit Difference In this subsection, we illustrate in detail how the analytically continued compact and the non-compact topological gravity model part ways. We already mentioned the uniqueness of the higher point functions given limited data on the lower point functions. 
This uniqueness theorem goes through mostly unchanged in the non-compact setting. Thus, to understand the difference between the analytically continued compact model and the non-compact model it is indeed sufficient to study low-point functions. As we have already hinted at, the hiccup lies in the spectrum of R-charges (combined with anomalous R-charge conservation) which leads to differing low-point correlation functions. An Example We provide an example in which the reconstruction of the generating function differs for the compact and the non-compact case, showing non-uniqueness, even in the face of seeming analytic continuation. Consider the level k c = 3. For the compact case, we have the results for the generating function of matter correlation functions in terms of the times t α = u α . Indeed, we have a three-point function φ 0 φ 0 φ 1 0,t=0 = X 0,t=0 = 1 at zero times, and we have the three-point function φ 1 φ 1 φ 1 0,t = X 3 0,t = − u 1 X 0,t = −u 1 at non-zero time, giving rise to the quartic term in the generating function. For the non-compact case, at level k = 3, we have: To find an analytic continuation map, we want to work in the zero picture, as argued previously. We can, for instance, choose a basis of operators: but these have no non-zero two-or three-point functions in the zero picture at zero times and cannot match the compact model. On the other hand, we might choose the basis of operators: and we have a topological quantum field theory two-point function between these two operators in the zero vacuum. (The two-point function however is zero after coupling to topological gravity.) At zero times, there is however no non-zero three-point function involving both types of operators, and therefore, again, we cannot match the compact picture correlators. We conclude that the generating function is not an analytic continuation of the compact generating function. This can also be verified using their explicit expressions. 10 The underlying reason is that, in the zero picture, the spectrum of R-charges (as well as the anomalous R-charge contribution) in the compact model and the non-compact model differ. That makes (for instance) for different cubic terms in the generating function F . At higher levels, one finds even more manifest disagreement, for instance in the degree of the polynomial generating function, as argued in subsection 5.1. We conclude that the formal agreement of zero picture correlation functions that we obtained (at zero times) by analytic continuation, does not translate into identities for the generating functions. Of course, one can mend this disagreement between the compact and non-compact models, through duality. As we saw, in that case we identify the levels, without a sign flip, and do find a correspondence with the minus two picture of the non-compact theory, as described in detail in section 2. Summary Analytic continuation of compact correlators leads to interesting results, including relations to known integrable models [17][18][19][20]. Those models differ from the non-compact models we obtained by twisting N = 2 Liouville theory at asymptotic radius √ kα ′ [2,4], and from the models we obtained through duality. Conclusions We have exploited the transformation of variables X = Y −1 to solve non-compact topological quantum field theories, before and after coupling them to topological gravity, and on Riemann surfaces of arbitrary genus. The transformation reduces the problem to its compact counterpart, which has been solved previously. 
While the conceptual framework is simple, the details are slightly involved. We demonstrated that the duality maps compact zero picture correlators to non-compact minus two picture correlators. The minus two picture non-compact correlators are in turn related to their zero picture counterparts. Finally, the latter naturally arise from twisted topological conformal field theories as described in [2,4]. As a by-product, we were led to conjecture a scaling law for rational models with a leading linear term. Our duality has interesting conceptual consequences. Firstly, we shed new light on the solution of the topological quantum field theories discussed in [2]. Secondly, we extend the solution to non-compact gravity proposed in [4] to arbitrary genus. Thirdly, the duality map provides insight into the observation of [4] that the critical exponents of non-compact two-dimensional gravity are the same as those of the compact models. Indeed, the duality map implies that this must be the case. Non-compact matter in the presence of topological gravity seems to disturb the Riemann surface to a high degree, and precisely such that gravity compensates to make the matter degrees of freedom behave like compact matter once more. A further conceptual clarification of the gravitational backreaction of the non-compact matter would be welcome. How does it precisely come about that the gravitational backreaction makes sure that the combined non-compact gravitational system has critical behaviour that matches the compact critical behaviour? Can this be reproduced by a lattice simulation? Further insight into this mechanism would clarify whether we should expect a similar phenomenon for matter of central charge c > 1 coupled to ordinary gravity. That would solve a longstanding problem in two-dimensional gravity, i.e. it could foreshadow a stable endpoint for two-dimensional gravity coupled to more than minimal matter. Fourthly, the solution of the non-compact gravitational model is one key to solving topological string theories on asymptotically linear dilaton spaces which form a large class of analogues of non-compact Calabi-Yau manifolds. We look forward to exploiting the solution of the non-compact models further. Acknowledgments It is a pleasure to thank our colleagues for creating a stimulating research environment. A A Map and Illustrations This appendix is dedicated to details and illustrations that aid in improving our understanding of aspects of the duality and the models we describe in the bulk of the paper. A.1 The Map of the Universal Coordinates Since we proved the classical equivalence of the compact and non-compact models, we can use the formulas valid for the compact topological quantum field theory [22] in order to reconstruct the solution of the non-compact topological quantum field theory [2,4]. As argued in the bulk of the paper, the universal coordinates natural in the duality map differ from those typically used in the non-compact models. Thus, it is useful to compute the coordinate change. The coordinate change can be constructed as follows. The series expansion of the (compact model) variable X in terms of 1/L c at large L c is identical to the series expansion of 1/Y in terms of 1/L at large L, under the duality map. The subtlety lies in the fact that for the noncompact system the universal coordinates are defined in terms of the series expansion of Y at large L. Still this information is sufficient to find the link between the universal coordinates. 
For simplicity, we restrict to the case where u −2 = u −k = 1 and u −3 = u −k+1 = 0. 11 We then have the non-compact universal coordinates defined by the expansion and therefore derive Comparing the latter sum to the compact universal coordinates by replacing Y −1 = X and L ↔ L c , we find the relation between the universal coordinates in the two systems: We can also compute the inverse relation, expressing the non-compact universal coordinates in terms of the compact universal coordinates: Thus, we have established an explicit map between the compact and the non-compact models for the standard universal coordinates. Note that this also establishes a map between tilded universal coordinates in the non-compact model used in the bulk of the paper and the untilded universal coordinates for the non-compact model used in [2,4]. A.2 The Equivalence Exemplified We compare topological quantum field theory generating functions of correlations functions. On the one hand, we recall the compact generating function F c,m after a relabelling. We also illustrate the coordinate map of appendix A.1 concretely at levels three and four. At level k c = 3 = k, we have the potentials: and after some calculation, we find the generators of correlation functions: We have used the intermediate result that which codes the relation between the non-compact universal coordinates. Moreover, the duality relates the tilded universal non-compact coordinatesũ to the compact universal coordinates u c . We have therefore and the generating functions (A.7) coincide, as implied by duality. Next, we put k c = 4 = k, and work with the superpotentials: After computing the formal series L c and L, and the operators φ α , and plugging them into the second derivative potentials G αβ , we can integrate up twice to find the compact and noncompact generating functions. Alternatively, we can use the known one-point functions and a scaling relation. A number of lines of calculation later one finds: The universal coordinates map as In appendix B we will independently recover these results from the generating function of the non-compact model calculated in the zero basis and using the limiting procedure detailed in the text. We end this section by observing that the expression for the generating functions F B Models and Scaling Laws In this appendix, we provide examples of the limiting procedure discussed in subsection 3.1 of the paper as well as illustrations of the scaling law for a non-compact model supplemented with a linear term in the potential. In particular, we compute examples of the generating function F (0) m through integration, and checked that the result agrees with the algebraic calculation of the function F (0) m using the proposed scaling law (3.7). We also explicitly provide the limiting form of the generating functions in the limit ǫ 1 → 0, defined in subsection 3.1 in the bulk of the paper, as well as an example calculation of the relation between different sets of universal coordinates. As described after equation (3.5), in the generating function F (0) m of the model with the linear term, we scale all the universal coordinates u −j by ǫ 1 except u 0 and then the generating function of the strict non-compact theory is the one obtained by extracting the O(ǫ 2 1 ) coefficient, along with setting u 0 = u −1 = ǫ 2 = 0. One can then check that the resulting generating function indeed satisfies the conformal field theory scaling (3.6) with non-compact central charge c = 3 + 6 k . 
Below we list the zero picture generating function, its ǫ 1 -expansion and finally, the generating function of the strict non-compact model 12 in the zero picture for levels k = 3, 4 and 5. Level Three (B.1) The generating function F s.n.c. of the strict non-compact matter model of subsection 3.2 is obtained by setting u −1 = u 0 = 0 and taking the ǫ 2 1 coefficient of F m . This leads to the generating function F s.n.c. for the strict non-compact model . (B.2) Level Four Level Five The generating function of the strict non-compact model is given by The generating functions of the strict non-compact models can be seen to agree with the results obtained in [2] and they satisfy the scaling law (3.6). Relations between universal coordinates from the generating function Once we have the generating function of the non-compact theory, it is a simple matter to take derivatives and obtain the two point function G βα 0 . According to [16] this provides the relation between the universal coordinates u andũ in the zero and minus two bases. The superpotential for which we do this is The generating function that has been calculated for the examples in section A.2 is for the rational model As discussed in detail previously, to obtain results in the non-compact model defined by the potential (B.7) we perform the Y → ǫ 1 Y scaling, accompanied by the appropriate scaling of the u-variables. In addition, in order to apply these results to the model defined by equation (B.7), we need to set u −k = 1 and u −k+1 = 0. The relevant two point functions of the rational model that define the universal coordinates u −α in the −2 picture, are given bỹ for α ∈ {−2, −3, . . . , −k} (B.9) Let us illustrate this in the case of the level three model. The generating function is given by (see equation (B.1)): By differentiating with respect to the coordinates u we find that Applying the limiting procedure discussed above, we obtaiñ confirming what we found using the Lax operators in appendix A.2. The same procedure can be applied to the higher level examples. For level four we find (B.13) and for level five: (B.14) Once the map between the universal coordinates in the minus two and zero pictures is obtained, one can proceed to relate all n-point correlators as discussed in subsection 3.1.2.
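The R-charge counting used in subsection 5.1 above can be made concrete in a few lines of LaTeX. The explicit form of the genus-zero selection rule below is an assumption on our part (a standard convention for twisted N = 2 models coupled to topological gravity, with \hat{c} = c/3); it reproduces the compact count s_max = k_c + 1 and the non-compact count k - 1 quoted in the text, while the other counts quoted there depend on which charges are admitted and are not rederived here.

\[
  \sum_{i=1}^{s} q_i \;=\; s - 3 + \hat{c}, \qquad \hat{c} \equiv \frac{c}{3}
  \quad (\text{genus zero, } s \text{ primary insertions}).
\]
Compact model: $\hat{c} = 1 - 2/k_c$ and $q_{\max} = 1 - 2/k_c$, so
\[
  s_{\max}\left(1 - \frac{2}{k_c}\right) = s_{\max} - 2 - \frac{2}{k_c}
  \;\;\Longrightarrow\;\; s_{\max} = k_c + 1 .
\]
Non-compact model with only the charge $1 - 2/k$ admitted: $\hat{c} = 1 + 2/k$, so
\[
  s\left(1 - \frac{2}{k}\right) = s - 2 + \frac{2}{k}
  \;\;\Longrightarrow\;\; s = k - 1 .
\]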
12,864.6
2018-12-14T00:00:00.000
[ "Physics" ]
Thermodynamic Analysis of Combined Cooling Heating and Power System with Absorption Heat Pump Based on Small Modular Lead-based Reactor . Small Modular Lead-based Reactor (SMLR) has generated great interest in academic research all around the world due to its good safety characteristics and relatively high core outlet temperature. In this paper, a Combined Cooling Heating and Power (CCHP) system with usage of absorption heat pump, which couples with a SMLR, was proposed to fulfill the energy demands in remote areas. Thermodynamic analysis was implemented to improve the performance of the CCHP system based on SMLR. To meet the remote areas’ energy needs, the main parameters and mass flow rate of a 35 MW th SMLR design were analyzed. The SMLR-CCHP with absorption heat pump system can provide electric power 12.5MW e , heating 9.5MW h , and cooling 2.54MW c. The total energy utilization efficiency of the system can be 69.12 %. This work can provide a reference in the design and optimization of the CCHP system to meet the energy demands in the remote areas. Introduction The Lead-based Fast Reactor is expected to firstly achieve the industrial demonstration and commercial application in all of the fourth-generation nuclear power systems. Compared to other nuclear power systems, the coolant in the Small Modular Lead-based Reactor (SMLR) is chemically inert, the system operates at atmospheric pressure. Therefore, it has the characteristic of high security. Liquid alloy has very good heat transfer ability, the layout is compact and flexible, which makes the SMLR-based Combined Cooling Heating and Power (CCHP) system flexible in the site selection [1]. The system is suitable for a variety of complex geographic environments. The refueling cycle of SMLR is close to ten years, which makes it more sustainable and stable than other prime movers for CCHP systems. Besides, the high coolant outlet temperature can achieve high energy utilization, which well applies for remote areas [2]. There are some researches on the SMLR comprehensive utilization systems. Up to now, the existing studies about SMLR are only in the initial stage, there is no mature design of SMLR-CCHP. Institute of Nuclear Energy Safety Technology (INEST), the Chinese Academy of Sciences (CAS) carried out the concept design development of CCHP system driven by SMLR. Khan [3] proposed the conceptual design of a new double reheat high-efficiency power generation system and carried out parameter optimization analysis. Xu [4] proposed a preliminary design of the cogeneration system for SMFR with the extraction-condensing and back-pressure cogeneration type, pointing out the optimization direction of system integration for the Lead-cooled Fast Reactor. Existing researches focus on to improve the electricity/power generation and heating efficiency. As an energy-saving technology, absorption heat pump is widely used in low temperature waste heat utilization. It uses a small amount of high temperature heat source (such as steam, high temperature hot water, combustible gas combustion heat, etc.) as the driving heat source to produce a large amount of useful heat energy at medium temperature then improve the efficiency of heat energy utilization [5]. The SMLR system discharges large amounts of low-temperature cooling water with abundant low-level energy. Lithium bromide absorption heat pump can use this low-level energy and convert it into usable high level heat energy [6]. 
Li [7] proposed a new type of district heating method based on Co-ah cycle and designed to improve both the capacity of heating system and the energy efficiency of Cogeneration plant. Zhao [8] developed a theoretical model of the LiBr-H 2 O absorption heat pump system, and simulated the thermodynamic cycle of the heat pump system, indicating that COP and pyrotechnic efficiency can be used for performance analysis of absorption heat pumps. This paper designs CCHP systems with absorption heat pump based on SMLR for remote areas. Thermodynamic calculations are performed to analyze the parameters for the second circuit and study the effect of drive steam, outlet temperature of the hot water on the efficiency of the absorption heat pump. Derive the efficiency of the CCHP system at the optimal parameters. This work can provide a reference in the design and optimization of the SMLR-CCHP for remote areas. Design parameters of SMLR We have proposed the SMLR with 35MW because of its ultra-small and high-efficiency technical characteristics. The main design parameters of the SMLR for the CCHP are listed in Table 1. The maximum electricity, cooling and heating demand is shown in Table 2. Some assumptions on parameters are given as shown in Table 3, including the turbine efficiency, absorption chillers efficiency, and heater efficiency. Design of CCHP system based on SMLR The CCHP system usually consists of a power generation system, heat recovery system and cooling system. The SMLR is used as the thermal power source for the CCHP system. Heating is mainly used to supply the hightemperature industrial steam and domestic hot water. The cooling system has two forms: steam absorption chiller and electric compression chiller. The steam outlet temperature of LP turbine is appropriate for a doubleeffect absorption chiller [9]. The steam at the outlet of the second circuit is mainly divided into three parts. i) steam works in the HP and LP turbines, then convert mechanical energy into electrical energy, ii) steam at HP turbine exit is taken out and used to supply the heating for the residents iii) steam from the LP turbine heated exchange with the double absorption chiller to meet the cooling demand. The schematic of SMLR-CCHP is shown in Fig. 1. The main parameters of cold, heat and electricity supply are shown in Table 4 Although the temperature of the steam at the outlet of the LP turbine is 32.87℃, the mass flow rate of the steam is up to 4.9kg/s, the steam has a large amount of usable heat. The energy of the steam will be wasted if cooled by air in condenser directly. Absorption heat pump can make the full use of the low-temperature waste heat from the condenser [10]. To improve the utilization of lowtemperature waste heat from SMLR-CCHP, it is necessary to optimize the parameters for better performance. Thermodynamic model of SMLR-CCHP and absorption heat pump The thermodynamic model is carried out for the system. Thermodynamic model of the system is based on the conservation of mass, energy, and momentum [11]. The parametric calculation of the SMLR-CCHP needs to be considered the mass flow rate, temperature, and enthalpy at each point of the system based on the pressure. 
The power produced by the HP and LP turbines of the SMLR-CCHP is given as follows: The cooling process: The heating process: The compression process: Thermal efficiency of the system: Schematic Flow Chart and Thermodynamic Model of the Absorption Heat Pump The absorption heat pump mainly consists of four parts: evaporator, absorber, generator and condenser [12]. The basic principle is to exploit the difference in absorption characteristics between concentrated and dilute lithium bromide solutions, recycling part of the waste heat carried by the condensate [13]. A schematic flowchart of the absorption heat pump is shown in Fig. 2. Four basic parameters are set for the cycle of the lithium bromide absorption heat pump: the driving heat source is at a pressure of 0.4 MPa, the condensate temperature is 32.87 °C, and the inlet/outlet temperatures of the heating water are 60/80 °C. The system makes full use of the low-temperature waste heat from the condenser to supply energy to remote areas. To study the additional heating provided by the absorption heat pump, the system maintains the same heating and cooling capacities as the original system. Compared with the original system, the low-temperature steam at the condenser is not cooled directly by air but instead exchanges heat with the absorption heat pump. Results & Discussion By introducing the absorption heat pump, a large amount of low-temperature waste heat from the LP turbine exhaust can be fully utilized. Compared with the original system, it theoretically provides an additional 1.04 MWe of electric power. The SMLR-CCHP system can provide 12.56 MWe of electric power, 2.54 MWc of cooling and 9.5 MWh of heating. The energy utilization rate of the system can reach 69.2% with the addition of the absorption heat pump. The parameters are shown in Table 5. Conclusions The overall thermal performance of the CCHP system based on the SMLR was calculated. The thermal performance of the CCHP system with the heat pump is higher than that of the system without the heat pump. At the same cooling and heating capacities, the electric power generation capacity increases from 11.52 MW to 12.56 MW. The SMLR-CCHP system with the addition of an absorption heat pump is suitable for meeting the energy demands of remote areas.
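As a companion to the performance figures above, the following short Python sketch computes a simple first-law energy-utilization factor from the quoted capacities (35 MWth reactor; 12.56 MWe, 9.5 MWh, 2.54 MWc with the heat pump; 11.52 MWe without it). This is an illustration only: the exact efficiency definition, auxiliary loads, and absorption-chiller accounting used in the paper are not reproduced here, so the result need not match the quoted 69.2% exactly.

# Illustrative first-law utilization factor for the SMLR-CCHP figures quoted above.
# The capacity values come from the text; the efficiency definition is an assumption.

def utilization_factor(q_thermal_mw, p_electric_mw, q_heating_mw, q_cooling_mw):
    """Ratio of delivered electricity + heating + cooling to reactor thermal power."""
    return (p_electric_mw + q_heating_mw + q_cooling_mw) / q_thermal_mw

if __name__ == "__main__":
    with_heat_pump = utilization_factor(35.0, 12.56, 9.5, 2.54)
    without_heat_pump = utilization_factor(35.0, 11.52, 9.5, 2.54)
    print(f"with absorption heat pump:    {with_heat_pump:.1%}")
    print(f"without absorption heat pump: {without_heat_pump:.1%}")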
1,982.4
2021-01-01T00:00:00.000
[ "Engineering", "Environmental Science" ]
Mathematical Reasoning in a Technological Environment . Dynamic geometry software has been accused of contributing to an empirical approach to school geometry. However, used appropriately it can provide students with a visually rich environment for conjecturing and proving. Year 8 students who were novices with regard to geometric proof were able to exploit the features of Cabri Geometry II to assist them in formulating and proving in the context of Cabri simulations of mechanical linkages. Introduction Most mathematicians would agree that it is proof which sets mathematics apart from the empirical sciences, and forms the foundation of our mathematical knowledge.Yet research indicates that students often fail to understand the purpose of mathematical proof, and readily base their conviction on empirical evidence or the authority of a textbook or teacher.A large-scale survey of above average Year 10 students in the UK (Healy and Hoyles, 1999), for example, has shown that many students, even those who have been taught proof, have little idea of the significance of mathematical proof, are unable to recognise a valid proof, and are unable to construct a proof in either familiar or unfamiliar contexts.Mathematics curricula in many countries are now emphasising the need for students to justify and explain their reasoning.A further important issue is the introduction into schools of a class of software known as dynamic geometry, such as Cabri Geometry II TM and The Geometer's Sketchpad R .Screen drawings in this software can be purely visual or they can be constructed using in-built tools based on Euclidean geometry, such as parallel or perpendicular lines, angle bisectors or perpendicular bisectors; segments or angles can be constructed precisely; accurate measurements can be made; and the loci of points traced.The 'drag' facility distinguishes dynamic geometry software from other computer drawing software, since only those features based on the use of appropriate geometric tools, such as parallel or perpendicular lines, will remain invariant when a screen drawing is dragged.These dynamic geometry environments have created widespread interest as constructivist learning tools, and have the potential to transform the teaching and learning of geometry. Despite this potential, though, concern has been expressed that dynamic geometry software is contributing to an empirical approach to geometry.Noss and Hoyles (1996) note that in the UK, for example, geometry is being reduced to pattern-spotting in data generated by dragging and measurement of screen drawings, with little or no emphasis on theoretical geometry: "school mathematics is poised to incorporate powerful dynamic geometry tools in order merely to spot patterns and generate cases" (p.235).Hölzl (2001) asserts, however, that the problem lies with the way dynamic geometry software is used, rather than with the software itself: The often mentioned fear that the computer hinders the development of an already problematic need for proof is too sweeping.It is the context in which the computer is a part of the teaching and learning arrangement that strongly influences the ways in which the need for proof does -or does not -arise (pp.68-69). 
De Villiers (1998) has criticised the emphasis on the verification aspect of proof in school mathematics, asserting that in a dynamic geometry environment the focus should move to proof as explanation rather than verification.While some students may have a cognitive need for proof as conviction, many see little point in proving something which they already 'know' to be true.Hofstadter (1997, p. 10) argues that the certainty given by dragging a dynamic geometry construction is more convincing for him than a proof: "it's not a proof, of course, but in some sense, I would argue, this kind of direct contact with the phenomenon is even more convincing than a proof, because you really see it all happening right before your eyes".The question, then, is how to exploit the rich visual environment of dynamic geometry software to engage students in deductive reasoning and proof.Scher (1999, p. 24) suggests that through an interplay between experimentation and deductive reasoning, "dynamic geometry can provide not only data to feed a conjecture, but tools to jump-start ideas and feed a proof". Mechanical Linkages as a Pathway to Deductive Reasoning My quest for a motivating, visually rich context in which to introduce Year 8 students to geometric proof led me to mechanical linkages, or systems of hinged rods (see Cundy and Rollett, 1981;Bolt, 1991).Found in many common household items, as well as in 'mathematical machines' from the past, mechanical linkages are often based on simple geometry such as similar figures, isosceles triangles, parallelograms or kites.With the emphasis on the underlying geometry, dynamic geometry software models of linkages provide an interface between the concrete and the theoretical, and a visually rich environment for students to explore, conjecture and construct geometric proofs.In this context of mechanical linkages, proof has the functions of verification of the truth of conjectures, promoting understanding of geometric relationships, and explanation, that is, giving insight into why a particular linkage works the way it does. Developing a Cognitive Need for Proof In a research experiment with Year 8 students, Tchebycheff's linkage (Cundy and Rollett, 1981) for approximate linear motion (see Fig. 1) was introduced as a means of developing a cognitive need for geometric proof.The linkage consists of three rigid bars, AC, BD and CD, with lengths five, five, and two units respectively.Points A and B are fixed, with the distance AB equal to four units.When CD rotates, the midpoint of CD moves along an almost linear path.The students first constructed the linkage from plastic strips, and conjectured that the midpoint of CD moved in a straight line. Fig. 2 shows a Cabri Geometry II (referred to from now on as Cabri) model of Tchebycheff's linkage, with the tabulated measurement data and the trace of point P (the midpoint of CD) demonstrating the closeness of the path of P to linear motion.When the students dragged the Cabri linkage, their realisation that the path was not in fact linear, and their astonishment at seeing how little the path actually deviated from a straight line, was sufficient to convince them that visual and empirical evidence could not be trusted. Linkages which Produce True Linear Motion During the nineteenth century several mathematicians became involved in designing linkages for converting circular motion to linear motion.Sylvester's linkage (Fig. 
3), for example, is based on two similar kites, AEDC and DCBF , with O and F fixed so that OABF is a parallelogram.As point B is dragged, the locus of E appears to be a straight line through F , while measurement of angles suggests that ∠OF E is a rightangle.Using the geometry of the similar kites and the parallelogram, OABF , it can be proved that ∠OF E is indeed a right-angle. Pantographs Pantographs -mechanical devices used for copying or enlarging drawings -are readily modelled using dynamic geometry software.Sylvester's pantograph (Fig. 4) consists of a parallelogram OABC and two links, AP and CP , where AP = AB = OC, CP = CB = OA and ∠BAP = ∠BCP = α, a fixed angle.Tracing the paths of P and P as P is dragged, demonstrates to students that P traces out a rotated image of the path of P .Feedback from dragging the dynamic geometry model and measurement of OP , OP and ∠P OP should lead students to the conjectures that OP = OP and ∠P OP = α.Proof of these conjectures, based on congruent triangles OAP and OBP , then confirms why the movement of P is an image of the movement of P , rotated through an angle equal to α. The pantograph shown in Fig. 5, in which ABDC is a parallelogram, points O, C and E are collinear, and O is fixed, can be used for enlarging or reducing.By tracing the locus of points C and E students can compare the sizes of the loci and construct a proof based on the conjecture that ∆OAC, ∆OBE, and ∆CDE are similar. Pascal's Angle Trisector In Pascal's angle trisector (see Fig. 6), OA = AP = P B so that triangles OAP and AP B are isosceles triangles.Rods OC and OD are hinged at O and rod AP is hinged at A. As the rod OD is rotated to change the size of ∠BP C, B slides along OD and P slides along OC.The proof that ∠BOP is one third of ∠BP C is based on exterior angles of triangles. Year 8 Students' Conjecturing and Proving This section focuses on the role of feedback from Cabri linkage models during argumentation, conjecturing, and proving by two pairs of Year 8 students -Anna and Kate, and Lucy and Rose -who were novices with regard to geometric proof.The students were able to exploit the features of Cabri to assist them in formulating and proving in the context of Cabri simulations of mechanical linkages.In the transcriptions included in this section, TR refers to the teacher-researcher. Anna and Kate: Pascal's Angle Trisector Pascal's angle trisector was Anna and Kate's first linkage task, and their first attempt at conjecturing and proving.They commenced their investigation of the linkage with a metal strip model (see Fig. 7a) which was introduced to them as 'Pascal's mathematical machine' so they did not know the purpose of the device.Their knowledge of isosceles triangles and exterior angles of triangles soon led them to the angle relationships shown in Fig. 7b.However, Anna and Kate were unable to make any further progress in their reasoning until they were given the Cabri model (see Figs. 8 and 9).They measured angles in the Cabri figure (Fig. 9a) then tried to find relationships between the angles (Fig. 
9b).Kate observed that 55.8 plus 27.9 was equal to 83.7, and therefore that ∠BDC + ∠BAC = ∠DCX.Initially they had observed only the two triangles, ABC and BCD.Dragging of the Cabri figure allowed Anna and Kate to notice ∆CAD, and they then recognised that ∠DCX was in fact an exterior angle of ∆CAD, and that ∠DCX was equal to three times ∠BAC.pantograph, conjecturing that the image was congruent to the shape they had drawn on the paper.Rose also tentatively suggested that the image was rotated by the fixed angle of the pantograph: "Maybe that angle . . .I'm not sure . . .maybe not . ..".They were then given a Cabri model of the pantograph where the distances OA, AB, BC, OC, AP , and CP were all equal, and ∠P AB = ∠P CB = 30 • .Lucy used the Cabri Triangle tool to draw a triangle with one of its vertices coinciding with point P , then selected Trace for point P .She dragged P around her triangle so that a trace of the path of P was drawn (see Fig. 11a). Lucy then placed points at the vertices of the trace formed by P , and removed the trace to expose the three points, which she then joined with segments (Fig. 11b).Lucy and Rose observed that the two triangles were "about the same", but rotated.Lucy measured ∠P AB and ∠P CB, noting that they were always 30 degrees.Rose then suggested that they should measure the angle between corresponding sides of the original triangle and the one they had drawn over the trace (see Fig. 12).Lucy had anticipated that the angle would be 30 degrees, but probably recognised the inaccuracy associated with constructing the triangle over the trace and moving the original triangle to coincide with this second triangle. Conclusion The students involved in the sequence of conjecturing-proving tasks displayed high levels of motivation, no doubt due in part to the tactile and novel experience of working with physical models of the linkages.It was, however, the accuracy of the feedback from the Cabri models which allowed the students to formulate their conjectures, and gave them the confidence and motivation to seek explanations for these conjectures.The unique features of dynamic geometry software -constructions based on Euclidean geometry, accurate measurements, tabulation of data, and the tracing of loci and the drag facility -rather than eliminating the need for proof, created a visually rich and motivating environment for these Year 8 students to explore, conjecture and construct geometric proofs. Fig. 1.Pencil-and-paper: Tracing the paths of points on Tchebycheff's linkage. Fig. 11.Using the Cabri model to investigate Sylvester's pantograph. Fig. 12 . Fig. 12. Measuring the angle of rotation of the image at P . Put that shape [the triangle drawn by P ] down there [pointing to image] 089 Rose: And angle BCP equals BAP because given . . .OAB plus BCP . . .090 Lucy: Those two added together, that whole angle . . .that means . . .091 Rose: Once we've proved that angle, then the whole thing's easy 'cause side angle side . . .see, if you have two sides and how big it's going to be in between . . .when you join them up the triangles will be the same. . .092 Lucy: Oh, yep.So . . .angle P CO will be equal to ... 093 Rose: Therefore . . .P CO equals P AO because . . .say side angle side so it makes congruent triangles.So OP equals OP .094 Lucy: Right, now prove that P OP equals angle P CB and P AB.In triangle P OA . . . 
The author studied mathematics and chemistry at the University of Melbourne, Australia, and has had 22 years' experience teaching mathematics and science. Since 1991 she has been teaching mathematics at Melbourne Girls Grammar School, and is also working as a part-time research fellow in the Education Faculty at the University of Melbourne. She has just completed a PhD in mathematics education, researching the use of dynamic geometry software for introducing Year 8 students to geometric proof. She has written several books for secondary school mathematics, including Computer Enriched Mathematics for Years 7 and 8, and Exploring 2-dimensional Space with Cabri Geometry, which have been published by the Mathematical Association of Victoria. She has also presented workshops for teachers on the use of Cabri Geometry and MicroWorlds Pro.
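As a numerical companion to the Tchebycheff linkage discussed above (AC = BD = 5, CD = 2, AB = 4, with P the midpoint of CD), the sketch below traces P with NumPy and reports how far its path deviates from a straight line. The crank-angle range and the choice of assembly branch are illustrative assumptions, not taken from the paper.

import numpy as np

# Fixed pivots of Tchebycheff's linkage (units as in the text: AB = 4, AC = BD = 5, CD = 2).
A = np.array([0.0, 0.0])
B = np.array([4.0, 0.0])
R_CRANK, R_FOLLOWER, R_COUPLER = 5.0, 5.0, 2.0

def coupler_point(theta):
    """Return P, the midpoint of CD, for crank angle theta of bar AC (one assembly branch)."""
    C = A + R_CRANK * np.array([np.cos(theta), np.sin(theta)])
    d = np.linalg.norm(B - C)
    if not (abs(R_COUPLER - R_FOLLOWER) <= d <= R_COUPLER + R_FOLLOWER):
        return None  # the linkage cannot close at this crank angle
    # D lies on the intersection of circle(C, 2) and circle(B, 5).
    a = (d**2 + R_COUPLER**2 - R_FOLLOWER**2) / (2.0 * d)
    h = np.sqrt(max(R_COUPLER**2 - a**2, 0.0))
    u = (B - C) / d                     # unit vector from C towards B
    perp = np.array([-u[1], u[0]])      # perpendicular direction
    D1, D2 = C + a * u + h * perp, C + a * u - h * perp
    D = D1 if D1[1] < D2[1] else D2     # lower intersection: the branch that gives the near-straight path
    return 0.5 * (C + D)

if __name__ == "__main__":
    points = [coupler_point(t) for t in np.radians(np.linspace(40.0, 100.0, 200))]
    ys = np.array([p[1] for p in points if p is not None])
    print(f"height of P: min {ys.min():.4f}, max {ys.max():.4f}, spread {ys.max() - ys.min():.4f}")
    # The spread is small but non-zero: the path of P is only approximately a straight line.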
3,234
2003-01-01T00:00:00.000
[ "Computer Science", "Engineering", "Mathematics" ]
Study of the effects of ß-myrcene on rat fertility and general reproductive performance ß-Myrcene (MYR) is a monoterpene found in the oils of a variety of aromatic plants including lemongrass, verbena, hop, bay, and others. MYR and essential oils containing this terpenoid compound are used in cosmetics, household products, and as flavoring food additives. This study was undertaken to investigate the effects of MYR on fertility and general reproductive performance in the rat. MYR (0, 100, 300 and 500 mg/kg) in peanut oil was given by gavage to male Wistar rats (15 per dose group) for 91 days prior to mating and during the mating period, as well as to females (45 per dose group) continuously for 21 days before mating, during mating and pregnancy, and throughout the period of lactation up to postnatal day 21. On day 21 of pregnancy one-third of the females of each group were submitted to cesarean section. Resorption, implantation, as well as dead and live fetuses were counted. All fetuses were examined for external malformations, weighed, and cleared and stained with Alizarin Red S for skeleton evaluation. The remaining dams were allowed to give birth to their offspring. The progeny was examined at birth and subsequently up to postnatal day 21. Mortality, weight gain and physical signs of postnatal development were evaluated. Except for an increase in liver and kidney weights, no other sign of toxicity was noted in male and female rats exposed to MYR. MYR did not affect the mating index (proportion of females impregnated by males) or the pregnancy index (ratio of pregnant to sperm-positive females). No sign of maternal toxicity and no increase in externally visible malformations were observed at any dose level. Only at the highest dose tested (500 mg/kg) did MYR induce an increase in the resorption rate and a higher frequency of fetal skeleton anomalies. No adverse effect of MYR on postnatal weight gain was noted but days of appearance of primary coat, incisor eruption and eye opening were slightly delayed in the exposed offspring. On the basis of the data presented in this paper the no-observed-adverse-effect level (NOAEL) for toxic effects on fertility and general reproductive performance can be set at 300 mg of ßmyrcene/kg body weight by the oral route. Correspondence Introduction ß-Myrcene (7-methyl-3-methylene-1,6octadiene) (MYR) is an acyclic monoterpene found in a large variety of plants including lemongrass, verbena, hop, bay, and others (1,2).ß-Myrcene and essential oils containing this terpenoid compound are widely used as a fragrance in cosmetics, as a scent in household products, and as a flavoring additive in food and alcoholic beverages (3).Furthermore, it was reported that ßmyrcene is an analgesic substance and the active principle of lemongrass (Cymbopogon citratus Stapf) abafado, an infusion made with the pan covered in order to prevent the loss of volatile constituents (4).Lemongrass abafado is widely used in Brazilian folk medicine as a sedative and as a remedy for gastrointestinal disorders (5). The importance of human exposure to ßmyrcene, the widespread use of plants as well as essential oils containing large amounts of this monoterpene (e.g.lemongrass oil), and the relative paucity of data about its health risks prompted us to perform a rather comprehensive study of its reproductive toxicity. 
The metabolism of ß-myrcene was studied in the rabbit (6) and in the rat (7), and has been shown to induce liver monooxygenases (8,9).The acute toxicity of ß-myrcene was reported to be low (10) and this monoterpene was shown to have no genotoxic activity in vitro (11) or in vivo (12).No evidence that ß-myrcene is a teratogenic substance was found (13) and the no-observed-adverse-effect level (NOAEL) for peri-and postnatal developmental toxicity in the rat was set at 0.25 g ß-myrcene/kg body weight (14). The aim of the present study was to investigate the effects of ß-myrcene on rat fertility and general reproductive performance.This is segment I study, part of a more comprehensive evaluation of the reproductive toxicity of ß-myrcene designed in three segments as recommended by the guidelines of the Food and Drug Administration, and of the Organization for Economic Cooperation and Development (OECD). Animals Male and female Wistar rats (Bor: spf, TNO; Fa.Winkelmann, Borchen, Germany) were kept under spf conditions at a constant 12-h light-dark cycle (lights on from 9:00 to 21:00 h), at a room temperature of 21 ± 1 o C and 50 ± 5% relative humidity.The animals received a standard pelleted diet (Altromin 1324, Lage, Germany) and tap water ad libitum during the experiment.All rats were adapted to the conditions of our animal quarters for three weeks before starting the experiment. Mating procedure Males were housed individually in a Macrolon type 3 cage with wood shavings as bedding.Three virgin females were placed inside the cage of one male for 2 h each day (7:00 to 9:00 h) and vaginal smears were evaluated for sperm.The first 24-h period following the mating procedure was called day 0 of pregnancy if sperm were detected in the smear.The mating procedure was repeated every working day until all three females became sperm-positive or, alternatively, for fifteen mating sessions extending over three weeks. Treatment Commercially available ß-myrcene (Sigma Chemical Co., St. Louis, MO) was purified up to 95% (methanol extraction and HPLC purification) at our laboratory and administered to rats once a day during the following periods: a) male rats for 91 days prior to mating and during the mating period; b) female rats for 21 days prior to mating, during the mating period, and during pregnancy and lactation until day 21 after parturition. Three experimental groups (15 males and 45 females per group) were treated by gavage with ß-myrcene (100, 300 and 500 mg/ kg body weight) dissolved in peanut oil (pharmaceutical grade).The control group received a similar treatment but with vehicle only (peanut oil, 2.5 ml/kg body weight). Evaluation of the animals All F o -males and -females were evaluated for weight development, mortality, and signs of toxicity.Pregnant females were also observed for weight gain, signs of abortion, dystocia and prolonged duration of pregnancy.All males were sacrificed by decapitation and autopsied at the end of the mating period.All major organs were inspected macroscopically and weighed.Livers and one of the two testes were fixed in a 10% neutral buffered formalin solution for routine histological processing and light microscopic evaluation of sections stained with hematoxylin-eosin.The number of spermatozoa in the remaining testis and cauda epididymis was counted as described elsewhere (15).The following indices were used: mating index = [No. of sperm-positive females ÷ No. of mated females] x 100; pregnancy index = [No. of pregnant females ÷ No. 
of sperm-positive females] x 100. Cesarean section On day 21 of pregnancy one-third of the females in each group were anesthetized by ethyl ether inhalation and killed by decapitation.The gravid uterus was weighed with its contents.Resorption as well as living and dead fetuses were counted.The number of implantation sites was determined by the method of Salewski (16).All living fetuses were immediately weighed, numbered with a marker pen, examined for externally vis-ible malformations and fixed in a 5% formalin solution.All fetuses were examined for skeletal anomalies after clearing with potassium hydroxide and staining with Alizarin Red S (17). Postnatal development of the offspring All the remaining pregnant females were allowed to give birth to their offspring.From pregnancy day 20 on the dams cages were inspected for births and the day of birth was designated as postnatal day 1.As soon as possible after birth the numbers of viable and dead newborns were recorded, and the pups were sexed and weighed.Any newborn death on postnatal day 1 was considered to be a stillbirth.The weight gain of the pups was recorded on postnatal days 6, 11, 16 and 21.Each pup was examined for signs of physical development and the days on which developmental landmarks appeared were recorded as follows: a) incisor eruption: the first sign of eruption through the gums of both lower incisors; b) fur development: the first detection of downy hair; c) eye opening: total separation of the upper and lower eyelids and complete opening of both eyes. At weaning (postnatal day 21) all mothers were anesthetized with ethyl ether, killed by decapitation and subjected to postmortem examination. Statistical analysis Data were analyzed by one-way analysis of variance or, alternatively, by the Kruskal-Wallis test whenever the data did not fit a normal distribution.Differences between groups were tested by the two-tailed Student t-test or Mann-Whitney U-test.Proportions were analyzed by the chi-square test or, alternatively, by the Fischer exact test.Statistical evaluation was performed using a MINITAB program (MTB, University of Pennsylvania, 1984), and a difference was considered statistically significant at P<0.05. Body weight changes and toxicity to the parental generation No deaths were induced and no other signs of toxicity were apparent in male rats treated orally with ß-myrcene (100, 300 and 500 mg/kg body weight) for 91 days prior to mating and during the mating period.There were no statistically significant differences in body weight gain between the control and the MYR-treated male rats (Table 1).Except for a slight increase in both absolute and relative weights of liver and kidneys of males exposed to the highest dose tested (Table 1), no other treatment-related abnormality was noted in MYR-treated rats at autopsy.Light microscopy evaluation of sections stained with hematoxylin and eosin revealed no morphological alterations in the liver or testicular tissue of male rats exposed to ßmyrcene.Moreover, no effect of MYR treat-ment was found either on the number of spermatids in the testis or on the number of spermatozoa in the cauda epididymis (data not shown). Similarly, no adverse effects on body weight gain and no other signs of toxicity were observed in MYR-treated female rats during the premating (21 days) and mating periods. 
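To illustrate the kinds of comparisons described under "Statistical analysis" above (one-way ANOVA or, alternatively, the Kruskal-Wallis test for continuous measures, the chi-square test for proportions, significance at P < 0.05), here is a minimal SciPy sketch. The numbers are made-up placeholder values for four dose groups, not data from the study.

import numpy as np
from scipy import stats

# Hypothetical body-weight-gain data (g) for four dose groups (0, 100, 300, 500 mg/kg).
groups = [
    np.array([182, 175, 190, 168, 177]),   # control
    np.array([179, 181, 170, 176, 185]),   # 100 mg/kg
    np.array([173, 169, 180, 166, 178]),   # 300 mg/kg
    np.array([171, 164, 175, 169, 160]),   # 500 mg/kg
]

# One-way ANOVA across dose groups (or Kruskal-Wallis if normality is doubtful).
f_stat, p_anova = stats.f_oneway(*groups)
h_stat, p_kw = stats.kruskal(*groups)

# Chi-square test on a hypothetical pregnancy-index table: pregnant vs. non-pregnant females per group.
pregnancy_table = np.array([[27, 3], [26, 4], [28, 2], [25, 5]])
chi2, p_chi2, dof, _ = stats.chi2_contingency(pregnancy_table)

print(f"ANOVA:          F = {f_stat:.2f}, P = {p_anova:.3f}")
print(f"Kruskal-Wallis: H = {h_stat:.2f}, P = {p_kw:.3f}")
print(f"Chi-square:     chi2 = {chi2:.2f}, df = {dof}, P = {p_chi2:.3f}")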
Outcome of fertility tests ß-Myrcene did not present any adverse effect on fertility indices at the dose range tested.As can be seen in Table 2, the proportion of females impregnated by male rats (mating index), and the ratio of pregnant to sperm-positive females (pregnancy index) did not differ between control and MYRtreated groups.Thus, no indication was found that MYR administered orally at doses as high as 500 mg/kg could impair male or female fertility. Evaluation of embryo-fetotoxic effects No adverse effect of MYR on pregnancy weight gain was noted at any dose level (Table 3).Except for a slight increase in the weights of liver and kidneys in the MYRtreated females, no other sign of toxicity to the maternal organism was observed.The body weight of MYR-treated fetuses did not differ from that of the control group at any dose level (Table 3).However, at the highest dose tested (500 mg/kg), MYR produced a slight increase in the resorption rate and a parallel decrease in the ratio of live fetuses per implantation site (Table 3). The effects of prenatal exposure to MYR on the occurrence of fetal skeleton abnormalities are shown in Table 4.No differences between control and treated groups were observed at doses up to 300 mg of MYR per kg body weight, but the frequency of skeletal malformations was increased at 500 mg/kg.Nonetheless, the higher incidence of skeletal abnormalities observed at this dose level seems to have been due, to a large extent, to an increase in the occurrence of anomalies such as fused os zygomatic, dislocated sternum (non-aligned sternebrae) and lumbar extra ribs, the spontaneous frequencies of which are high in our rat strain.Anyhow, the higher incidence of skeletal abnormalities as well as the embryolethal effect clearly indicated that a dose as high as 500 mg MYR/kg is embryotoxic to rats. Perinatal toxicity and postnatal development of the exposed offspring As shown in Table 5, duration of pregnancy was not affected by treatment with MYR at any dose level.No adverse effect of MYR on labor was noted in this experiment, and pup mortality in the treated groups was not above that observed in the vehicle-control group on the first day of life (stillbirths) or throughout lactation, i.e. from postnatal day 2 through day 21 (Table 5).Furthermore, no differences between control and MYR-treated groups were found with regard to maternal or offspring weight changes during the lactation period (Tables 5 and 6).In spite of the absence of MYR-induced effects on offspring body weight development, exposure to this monoterpene seemed to have caused a slight retardation in the appearance of incisor eruption, primary coat and eye opening (Table 7).This effect was not doserelated and the MYR-induced delay was more evident with incisor eruption (300 mg/kg) and eye opening (100 and 300 mg/kg).Except for this minor effect on physical maturation, no other indication was found that MYR at doses up to 500 mg/kg impaired the postnatal development of the treated offspring. Discussion Except for a slight increase in the absolute and relative weights of liver and kidneys, no other effects were noted in male rats continuously exposed to ß-myrcene for 91 days prior to mating and during the mating period.Since ß-myrcene has proved to be an inducer of hepatic monooxygenases (8), liver enlargement probably resulted from the marked hypertrophy of the endoplasmic reticulum to the induction of microsomal enzyme synthesis in treated animals (18). 
On the other hand, the mechanism underlying the ß-myrcene-induced enlargement of kidneys is still far from being entirely under-stood.ß-Myrcene was reported to cause a sex-specific hyaline droplet nephropathy in male rats very similar to that produced by dlimonene, a monocyclic monoterpene (19).d-Limonene-induced hyaline nephropathy was shown to be due to an epoxide metabolite (d-limonene-1,2-oxide) that binds to an α 2µ -globulin thereby preventing its lysosomal degradation and leading to an accumulation of this low molecular weight protein in the cytoplasm (hyaline droplets) of the proximal tubule cells (20).In the case of MYRinduced kidney damage the α 2µ -globulin ligand is still unknown, but it should be noted that microsomal oxidation of MYR olefinic bonds seems to yield epoxide metabolites structurally similar to d-lim-onene1,2-oxide (21).Anyhow, since the hyaline droplet nephropathy is sex-specific (i.e.female rats do not produce α 2µ -globulin), and the increase in kidney weight was noted in both males and females, the renal enlargement cannot be attributed only to the accumulation of hyaline droplets.One possible explanation for the kidney enlargement would be an induction of renal microsomal enzyme synthesis by MYR and d-limonene.Data from the present study suggest that continuous exposure of male rats to ßmyrcene for 91 days prior to mating and during the mating period did not cause any histological changes in the testis and did not impair male fertility.Since ß-myrcene is a highly lipophilic substance that reaches a rather high concentration in the testes (Webb J, Chahoud I and Paumgartten FJR, unpublished results), it seems unlikely that the absence of adverse effects on male fertility is due to an insufficient exposure of male germ cells to this monoterpene.The results also suggest that female fertility was not affected by continuous exposure to ß-myrcene for 21 days prior to mating and during the mating period.The percentages of MYR-treated females that copulated (mating index) and were impregnated by males (pregnancy index) did not differ from those obtained for the vehicle-control group at any dose level.In addition, MYR had no detectable effect on the frequency of pre-implantation losses since no difference in the number of implantation sites per dam was found when treated animals were compared to the controls.Thus, apparently no adverse effect on reproductive function was caused by MYR from gametogenesis up to implantation of the blastocyst in the maternal uterus. Notwithstanding the absence of adverse effects on female fertility in the present study, MYR, at doses as high as 1.0 and 1.5 g/kg, was shown to impair female offspring fertility in a segment III-designed study in rats (14).It should be emphasized, however, that not only the doses, but also the periods of exposure to MYR were quite different in the two studies.While in the present experiment only adult females were treated with MYR, in the segment III study female offspring were exposed while still in utero, from pregnancy day 15 on and throughout lactation.Therefore, the absence of adverse effects on female fertility in the present study might have been due either to the lower dose levels tested, or to the different period of exposure, or to both.In any case, MYR-induced impairment of female fertility in the segment III study was apparent only at very high doses when a pronounced perinatal mortality occurred as well (14). 
Except for liver and kidney enlargement, no indication of MYR-induced maternal effects was found at any dose level and no embryotoxic effects were detected at doses lower than 500 mg/kg body weight.On the other hand, a slight increase in the resorption rate and a higher incidence of skeletal abnormalities indicated that, in the present study, 500 mg MYR/kg was an embryotoxic dose.Absence of signs of maternal toxicity at 500 mg MYR/kg was also found in segment II- (13) as well as in segment III-designed studies in rats (14).Nonetheless, in the segment II study MYR-induced embryotoxic effects were observed only at a higher and maternally toxic dose (13).Since in the segment II study MYR was administered during the second week (pregnancy days 6 to 15) whereas in the present study treatment continued throughout pregnancy, the highest susceptibility of the embryos in the latter may have been due to a longer exposure to this monoterpene.It should also be pointed out that, in the present study, most of the skeletal abnormalities whose incidence was increased in MYR-treated animals were anomalies which occurred at a high frequency in the historical group as well as in the vehicle-control group.Under these circumstances, the toxicological significance of the fetal skeleton findings seems to be minor. Parturition, perinatal pup mortality as well as postnatal weight gain of the exposed offspring during the lactation period were not affected by MYR at any of the doses tested.The only effect of MYR on postnatal development detected in the present study was a substance-produced delay on the day of appearance of some milestones of somatic maturation.No dose-response relationship was found and retardation was more evident at 300 mg/kg (incisor eruption and eye open-ing) and 300 mg/kg (eye opening) than at the highest dose tested (500 mg/kg).Since this slight effect was not related to the dose and was not accompanied by any other indication that postnatal development was impaired in the treated animals, it was not taken into account for setting the present study-derived NOAEL . Contrasting with the absence of toxicity in the present experiment, an increased pup mortality on the first day of life and during the first week of lactation, as well as a reduced pup birth weight, were found after treatment with 500 mg MYR/kg in the segment III study (14).A possible explanation for this discrepancy is the development of tolerance to the toxic effects of ß-myrcene, since in the segment III study treatment began on pregnancy day 15, whereas in the present study administration of this monoterpene to the females started 21 days before mating.The induction of liver microsomal enzymes by MYR (8) and the observed crosstolerance with pentobarbital effects (9) are findings that give additional support to this interpretation. On the basis of the data presented in this paper the NOAEL for the toxic effects of ßmyrcene on fertility and general reproductive performance can be set at 300 mg/kg body weight by the oral route.This dose is about the same NOAEL as found in a segment-III-designed study and approximately half the NOAEL obtained for MYR-induced embryotoxicity (segment II) in the rat.Although no quantitative data on human exposure to ß-myrcene are available, it seems very unlikely that dose levels comparable to this experimentally derived NOAEL could be attained when humans are exposed to this olefinic monoterpene through the use of MYR-containing essential oils or folk medicine potions. 
Table 1 - Body weight gain and organ weight changes in male rats treated orally with β-myrcene (0, 100, 300 and 500 mg/kg body weight) for 91 days prior to mating. Values are reported as means ± SD. Data were analyzed by ANOVA and the Student t-test. *P<0.05 compared to controls.

Table 3 - Effects of β-myrcene (0, 100, 300 and 500 mg/kg) administered by gavage during the premating, mating and gestation periods on parameters evaluated at the time of cesarean section performed on pregnancy day 21. Proportions were analyzed by the chi-square test. Live fetuses per litter, uterus weight, maternal weight gain and fetal body weight are reported as means ± SD and were analyzed by one-way analysis of variance. Mean litter weight was taken as the unit of analysis for fetal body weight.

Table 5 - Duration of pregnancy, number of stillbirths, postnatal mortality and weight gain of offspring of rats treated orally with β-myrcene (0, 100, 300 and 500 mg/kg body weight) during pregnancy and lactation.

Table 6 - Weight development of female rats treated orally with β-myrcene (0, 100, 300 and 500 mg/kg body weight) during pregnancy and lactation. Values are reported as means ± SD. Data were analyzed by ANOVA and the Student t-test and no significant differences were detected. Dams submitted to cesarean section on pregnancy day 21 were not included. Days of pregnancy and days of lactation are indicated by subscripts P and L, respectively.
4,956
1998-07-01T00:00:00.000
[ "Biology", "Medicine" ]
mRNA transcript quantification in archival samples using multiplexed, color-coded probes Background A recently developed probe-based technology, the NanoString nCounter™ gene expression system, has been shown to allow accurate mRNA transcript quantification using low amounts of total RNA. We assessed the ability of this technology for mRNA expression quantification in archived formalin-fixed, paraffin-embedded (FFPE) oral carcinoma samples. Results We measured the mRNA transcript abundance of 20 genes (COL3A1, COL4A1, COL5A1, COL5A2, CTHRC1, CXCL1, CXCL13, MMP1, P4HA2, PDPN, PLOD2, POSTN, SDHA, SERPINE1, SERPINE2, SERPINH1, THBS2, TNC, GAPDH, RPS18) in 38 samples (19 paired fresh-frozen and FFPE oral carcinoma tissues, archived from 1997-2008) by both NanoString and SYBR Green I fluorescent dye-based quantitative real-time PCR (RQ-PCR). We compared gene expression data obtained by NanoString vs. RQ-PCR in both fresh-frozen and FFPE samples. Fresh-frozen samples showed a good overall Pearson correlation of 0.78, and FFPE samples showed a lower overall correlation coefficient of 0.59, which is likely due to sample quality. We found a higher correlation coefficient between fresh-frozen and FFPE samples analyzed by NanoString (r = 0.90) compared to fresh-frozen and FFPE samples analyzed by RQ-PCR (r = 0.50). In addition, NanoString data showed a higher mean correlation (r = 0.94) between individual fresh-frozen and FFPE sample pairs compared to RQ-PCR (r = 0.53). Conclusions Based on our results, we conclude that both technologies are useful for gene expression quantification in fresh-frozen or FFPE tissues; however, the probe-based NanoString method achieved superior gene expression quantification results when compared to RQ-PCR in archived FFPE samples. We believe that this newly developed technique is optimal for large-scale validation studies using total RNA isolated from archived, FFPE samples. Background A vast collection of formalin-fixed and paraffinembedded (FFPE) tissue samples are currently archived in anatomical pathology laboratories and tissue banks around the world. These samples are an extremely valuable source for molecular biology studies, since they have been annotated with varied information on disease states and patient follow-up, such as disease progression in cancer and prognosis/survival data. Although FFPE samples provide an ample source for genetic studies, formalin fixation is known to affect the quality of DNA and RNA extracted from FFPE samples and its downstream applications, such as amplification by the Polymerase Chain Reaction (PCR) or microarrays [1]. Von Ahlfen et al., 2007 [1] described the different factors (e.g. fixation, storage time and conditions) that can influence the integrity of RNA extracted from FFPE tissues, and its downstream applications. They showed that differences in storage time and temperature had a large effect on the degree of RNA degradation. In their study, RNA samples extracted within 1 to 3 days after formalin fixation and paraffin embedding maintained their integrity. Similarly, RNA isolated from FFPE samples that were stored at 4°C showed higher quality compared to samples stored at room temperature or at 37°C . They also reported that RNA fragmentation occurs gradually over time. It is also known that cDNA synthesis from FFPE-derived RNA is limited due to the use of formaldehyde during fixation. 
Formaldehyde induces chemical modification of RNA, characterized by the formation of methylene crosslinks between nucleic acids and protein. These chemical modifications can be partially irreversible [2], limiting the application of techniques such as reverse transcription, which uses mRNA as a template for cDNA synthesis. A fixation time over 24 hours was shown to result in a higher number of irreversible crosslinks [3,4]. Overall, fixation time and method of RNA extraction are the main factors that determine the extent of methylene crosslinks [1]. A recently developed probe-based technology, the NanoString nCounter™ gene expression system, has been shown to allow accurate mRNA expression quantification using low amounts of total RNA [5]. This technique is based on direct measurement of transcript abundance, by using multiplexed, color-coded probe pairs, and is able to detect as little as 0.5 fM of mRNA transcripts; described in detail in Geiss et al., 2008 [5]. In brief, unique pairs of a capture and a reporter probe are synthesized for each gene of interest, allowing~800 genes to be multiplexed, and their mRNA transcript levels measured, in a single experiment, for each sample. In addition, in a recent study, mRNA expression levels obtained using NanoString were more sensitive than microarrays and yielded similar sensitivity when compared to two quantitative real-time PCR techniques: TaqMan-based RQ-PCR and SYBR Green I fluorescent dye-based RQ-PCR [5]. Although NanoString and RQ-PCR were shown to produce comparable data in good quality samples, NanoString is hybridization-based, and does not require reverse transcription of mRNA and subsequent cDNA amplification. This feature of Nano-String technology offers advantages over PCR-based methods, including the absence of amplification bias, which may be higher when using fragmented RNA isolated from FFPE specimens. In addition, NanoString assays do not require the use of control samples, since absolute transcript abundance is determined for each single sample and normalized against the expression of housekeeping genes in that same sample [5]. Although NanoString technology has been optimized for gene expression analysis using formalin-fixed samples, to our knowledge we are the first to report the use of this technology for mRNA transcript quantification using clinical, archival, FFPE cancer tissues. In our pilot study, we used the NanoString nCounter™ assay for gene expression analysis of archival oral carcinoma samples. In order to show that mRNA levels obtained by NanoString analysis of FFPE tissues were accurate, we compared quantification data obtained using RNA isolated from paired fresh-frozen and FFPE oral cancer samples. Our goal was to determine whether this technology could be applied for accurate gene expression quantification using archived, FFPE oral cancer tissues. We also aimed to compare whether quantification data obtained by NanoString achieved a higher correlation than data obtained by SYBR Green I fluorescent dyebased RQ-PCR, using the same paired fresh-frozen and FFPE samples. Tissue samples This study was performed under approval of the Research Ethics Board at University Health Network. Tissues were collected with informed patient consent. Study samples included primary fresh-frozen and formalinfixed, paraffin-embedded (FFPE) tumor samples from 19 patients with oral squamous cell carcinoma. All patients had surgery as primary treatment. 
Fresh-frozen tissues were collected at the time of surgical resection, and samples were snap frozen and kept in liquid nitrogen until RNA extraction. RNA from these tumor samples was extracted and kept at -80C for long term storage. Representative FFPE tissue sections were obtained from the same tumor samples. We collected a total of 38 tumor samples (paired fresh-frozen and FFPE) from 19 patients. In addition, we included the analysis of a commercially available human universal RNA (pool of cancer cell lines) (Stratagene) and human normal tongue RNA (Stratagene); these samples were used as quality controls, since they are a source of high quality RNA, and have been previously used in other studies [6,7]. RNA extraction and cDNA synthesis Total RNA was isolated from fresh-frozen tissues using Trizol reagent (Life Technologies, Inc., Burlington, ON, Canada), followed by purification using the Qiagen RNeasy kit and treatment with the DNase RNase-free set (Qiagen, Valencia, CA, USA). RNA extraction and purification steps were performed according to the manufacturers' instructions. For FFPE tissues, one tissue section was taken from each specimen, prior to RNA extraction, stained with hematoxylin and eosin (H&E) and examined by a pathologist (B.P-O), to ensure that tissues contained >80% tumor cells. RNA was isolated from five 10 μm sections from FFPE samples, using the RecoverAll™ Total Nucleic Acid Isolation Kit (Ambion, Austin, TX, USA), following the manufacturer's procedures. RNA extracted from both fresh-frozen and FFPE tissues was assessed for quantity using Nanodrop 1000 (Nanodrop), and for quality using the 2100 Bioanalyzer (Agilent Technologies, Canada). For RQ-PCR experiments, cDNA was synthesized from 1 μg total RNA isolated from fresh-frozen or FFPE tissues, using the M-MLV reverse transcriptase enzyme and according to manufacturer's protocol (Invitrogen). Gene expression quantification using multiplexed, colorcoded probe pairs (NanoString nCounter™) Genes selected for testing in this technical report are frequently over-expressed in oral cancer (our own data, currently submitted for publication elsewhere). Probe sets for each gene were designed and synthesized by NanoString nCounter™ technologies (Table 1). Probe sets of 100 bp in length were designed to hybridize specifically to each mRNA target. Probes contained one capture probe linked to biotin and one reporter probe attached to a color-coded molecular tag, according to the nCounter™ code-set design. RNA samples were randomized using a numerical ID, in order to blind samples for sample type (freshfrozen or FFPE) and sample pairs. Samples were then subjected to NanoString nCounter™ analysis by the University Health Network Microarray Centre (http:// www.microarrays.ca/) at the Medical Discovery District (MaRS), Toronto, ON, Canada. The detailed protocol for mRNA transcript quantification analysis, including sample preparation, hybridization, detection and scanning followed the manufacturer's recommendations, and are available at http://www.nanostring.com/ uploads/Manual_Gene_Expression_Assay.pdf/ under http://www.nanostring.com/applications/subpage.asp? id=343. We used 100 ng of total RNA isolated from fresh-frozen tissues, as suggested by the manufacturer. FFPE tissues required a higher amount of total RNA (400 ng) for detection of probe signals. Technical replicates of three paired fresh-frozen and FFPE tissues were included. Data were analyzed using the nCoun-ter™ digital analyzer software, available at http://www. 
nanostring.com/support/ncounter/. Quantitative real-time RT-PCR In addition, we performed RQ-PCR analysis in the same fresh-frozen and FFPE samples and compared this to gene expression data determined by the Nano-String nCounter assay. RQ-PCR analysis was performed as previously described, using SYBR Green I fluorescent dye [8,9]. Gene IDs and primer sequences are described in Table 2. Primer sequences were designed using Primer-BLAST (http://www.ncbi.nlm. nih.gov/tools/primer-blast/). Gene expression levels were normalized against the average Ct (cycle threshold) values for the two internal control genes (GAPDH and RPS18) and calculated relative to a commercially available normal tongue reference RNA (Stratagene). Ct values were extracted using the SDS 2.3 software (Applied Biosystems). Data analysis was performed using the ΔΔCt method [10]. Statistical analysis Absolute mRNA quantification values obtained by NanoString as well as relative expression values obtained by RQ-PCR were log2-transformed. Summary statistics as median, mean, range were provided. Pairwise Pearson product-moment correlation analysis [11] was applied to test the correlation between gene expression data obtained by NanoString and RQ-PCR analysis in fresh-frozen vs. FFPE samples, as well as the correlation between NanoString and RQ-PCR data in fresh-frozen or FFPE samples. Both overall correlation and correlation across sample pairs were calculated. Statistical analyses were performed using version 9.2 of the SAS system and user's guide (SAS Institute, Cary, NC). In addition, Pearson correlation between sample pairs was plotted as heatmaps, in order to visualize the grouping of similar samples. Heatmaps were generated by hierarchical clustering analysis, using hclust R function, in R statistical environment [12]. Technical data on sample quality Bioanalyzer results for fresh-frozen samples showed a mean RNA integrity number (RIN) of 8.3 (range 4.6-9.8), with the majority of fresh-frozen samples (13/19) having a RIN ≥8. FFPE samples were degraded and the mean RIN was 2.3 (range 1.5-2.5); this result was expected since FFPE samples are archival tissues. Representative examples of the Bioanalyzer results for one fresh-frozen and one FFPE sample are shown in Figure 1. FFPE samples used in our study have been archived from a time period between 1997-2008. Correlation between mRNA transcript quantification in fresh-frozen vs. FFPE samples (NanoString) Raw data quantification values obtained by NanoString were log2 transformed, and values derived from the 19 paired fresh-frozen and FFPE samples were compared. The pair-wise Pearson product-moment correlation was 0.90 (p < 0.0001). The scatter plot and histogram for log2 values from fresh-frozen and FFPE samples are shown in Figure 2A. Analysis of the three replicate pairs (log2 transformed values) demonstrated a correlation of 0.93 (p < 0.0001). In addition, we performed unsupervised hierarchical clustering analysis of these data, and heatmaps are shown in Figure 2B. We also performed a correlation analysis between mRNA transcript quantification values (log2 transformed values) for each pair of fresh-frozen versus FFPE sample (sample by sample comparison). This analysis is important as it allows us to determine whether the amount of mRNA transcripts of a given gene is maintained in individual sample pairs. The mean correlation coefficient obtained was 0.94, with a minimum correlation of 0.77 and a maximum correlation of 0.99. 
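As a concrete illustration of the RQ-PCR normalization described above, the following sketch computes relative expression with the ΔΔCt method, normalizing a target gene against the mean Ct of the two internal control genes (GAPDH and RPS18) and expressing it relative to the normal tongue reference RNA, with the result also reported on the log2 scale used in the statistical analysis. This is a minimal sketch; the function name and all Ct values are invented placeholders, not data from this study.

```python
import math

def delta_delta_ct(ct_target_sample, ct_refs_sample, ct_target_cal, ct_refs_cal):
    """Relative expression by the ddCt method.
    dCt = Ct(target) - mean Ct(reference genes);
    ddCt = dCt(sample) - dCt(calibrator); relative quantity = 2**(-ddCt)."""
    d_ct_sample = ct_target_sample - sum(ct_refs_sample) / len(ct_refs_sample)
    d_ct_cal = ct_target_cal - sum(ct_refs_cal) / len(ct_refs_cal)
    dd_ct = d_ct_sample - d_ct_cal
    rq = 2 ** (-dd_ct)
    return rq, math.log2(rq)

# Hypothetical Ct values for one target gene in a tumor sample vs. the
# normal tongue calibrator, normalized to GAPDH and RPS18.
rq, log2_rq = delta_delta_ct(
    ct_target_sample=24.1, ct_refs_sample=(18.0, 19.2),   # tumor sample
    ct_target_cal=27.3,    ct_refs_cal=(18.4, 19.0),      # calibrator RNA
)
print(f"relative quantity = {rq:.2f}, log2 = {log2_rq:.2f}")
```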
Correlation between gene expression levels in freshfrozen vs. FFPE samples (RQ-PCR) We also compared gene expression levels determined by RQ-PCR analysis in fresh-frozen versus FFPE samples. The overall pair-wise Pearson product-moment correlation coefficient was 0.53 (p < 0.0001) ( Figure 3A). Heatmap analysis of these data is shown in Figure 3B. A sample-bysample (fresh-frozen/FFPE sample pair) correlation analysis of RQ-PCR data revealed a mean correlation of 0.54, varying between 0.12 and 0.99, with the majority of sample pairs (12/19) showing a correlation ≥0.50. Comparison of mRNA quantification data using NanoString versus RQ-PCR Since all RNA samples isolated from FFPE tissues were degraded, as confirmed by Bioanalyzer analysis, we expected that a probe-based assay would generate more accurate gene expression quantification data compared to amplification-based assays, such as RQ-PCR. For each sample type (fresh-frozen or FFPE), we compared mRNA transcript quantification as determined by NanoString analysis and gene expression levels as determined by RQ-PCR. For fresh-frozen tissues, this comparison analysis showed that the overall pair-wise Pearson productmoment correlation coefficient was 0.78 (p < 0.0001). Figure 4A shows the scatter plot for the Log(NanoString) vs. Log (QPCR) and their histogram in fresh-frozen tissues. This same analysis in FFPE samples showed a lower overall correlation coefficient of 0.59 (p < 0.0001); 11/19 FFPE sample pairs showed a correlation ≥0.60. Figure 4B shows the scatter plot for the Log(NanoString) vs. Log(QPCR) and their histogram in FFPE tissues. Unsupervised hierarchical clustering analysis of these data was performed and corresponding heatmaps are shown in Figure 4C and 4D. Discussion In this pilot study, we showed that NanoString technology is suitable for accurately detecting and measuring mRNA transcript levels in clinical, archival, FFPE oral carcinoma samples. Our results demonstrated that this probe-based assay (NanoString) achieved a good overall Pearson correlation when we compared mRNA transcript quantification results between paired fresh-frozen and FFPE samples. In addition, correlation coefficients were determined in a sample-by-sample comparison, and results showed that mRNA levels in single sample pairs (fresh-frozen and FFPE) were maintained across the sample pairs when using NanoString technology. When we compared gene expression levels obtained by RQ-PCR, we obtained a lower overall correlation coefficient between fresh-frozen and FFPE tissues, and across sample pairs. These results suggest that mRNA transcript levels are more concordant between fresh-frozen and FFPE sample pairs when using NanoString technology. A recently published study [13] evaluated the performance of quantitative real-time PCR using TaqMan assays (TaqMan Low Density Arrays platform), for gene expression analysis using paired fresh-frozen and FFPE breast cancer samples. The investigators found a good overall correlation coefficient of 0.81 between fresh-frozen and FFPE samples; however, when they compared individual sample pairs, they found a low correlation of 0.33, with variability of 0.005-0.81. These authors suggested that the extensive RNA sample degradation in FFPE samples is likely the cause for the low correlation coefficients observed across sample pairs [13]. 
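The overall and sample-by-sample correlations reported in these sections can be reproduced from a genes-by-samples matrix of log2 values in a few lines. The sketch below is a generic illustration using numpy and a small random matrix rather than the study data: the overall coefficient pools every gene/sample value from both platforms, whereas the per-pair coefficients correlate the 20-gene profiles of each fresh-frozen/FFPE pair, which is the quantity whose mean is quoted above (0.94 for NanoString, 0.54 for RQ-PCR). Variable names and the simulated noise level are assumptions made only for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy log2 expression matrices: rows = 20 genes, columns = 19 paired samples.
# ffpe is simulated as a noisy copy of fresh_frozen purely for illustration.
fresh_frozen = rng.normal(loc=8.0, scale=2.0, size=(20, 19))
ffpe = fresh_frozen + rng.normal(scale=1.0, size=fresh_frozen.shape)

# Overall correlation: pool all gene/sample values from both sample types.
overall_r = np.corrcoef(fresh_frozen.ravel(), ffpe.ravel())[0, 1]

# Sample-by-sample correlation: one coefficient per fresh-frozen/FFPE pair,
# computed across the 20-gene profile of that pair.
per_pair_r = np.array([
    np.corrcoef(fresh_frozen[:, j], ffpe[:, j])[0, 1]
    for j in range(fresh_frozen.shape[1])
])

print(f"overall r = {overall_r:.2f}")
print(f"per-pair r: mean = {per_pair_r.mean():.2f}, "
      f"min = {per_pair_r.min():.2f}, max = {per_pair_r.max():.2f}")
```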
Indeed, Bioanalyzer results for our samples showed that fresh-frozen tissues had a good-quality RIN and were suitable for gene expression analysis, while FFPE tissues were degraded and had a low RNA integrity number. This RNA degradation in FFPE samples also resulted in higher Ct values initially detectable by RQ-PCR, with loss of amplifiable templates. The low RIN characteristic of FFPE samples did not, however, seem to affect the efficiency of NanoString results when we compared quantification values obtained using RNA isolated from fresh-frozen vs. FFPE tissues. Although quantitative PCR-based assays have been used for gene expression analysis in FFPE samples [13][14][15], these assays do carry some disadvantages, such as the need for optimization strategies aimed at reducing amplification bias and increasing the number of detectable amplicons when using RNA extracted from FFPE samples. To date, some of the recommended strategies include optimization of the RNA extraction method and designing primers able to detect short amplicons [16]. In our study, primers for RQ-PCR experiments yielded amplicon lengths between 72-170 bp (as detailed in Table 2). Only 2/19 primer pairs yielded amplicons >110 bp in size. Such short amplicons are well-suited for PCR amplification using FFPE samples. Our results showed that, although we did obtain gene expression data using RQ-PCR in our FFPE samples, both the overall and the sample-by-sample correlation between fresh-frozen and FFPE samples was notably lower for RQ-PCR data than for data obtained using NanoString. This suggests that this newly developed technology, NanoString nCounter™, offers advantages over RQ-PCR for gene expression analysis in archival FFPE samples.

Figure 4. Correlation between data obtained from NanoString and RQ-PCR analysis on fresh-frozen and FFPE tissues. Scatter-plot matrices examining the correlation between NanoString and RQ-PCR data in fresh-frozen (A) and FFPE (B) samples. Scatter plot matrices show normalized quantification values. The pair-wise Pearson product-moment correlation coefficient for NanoString vs. RQ-PCR data in fresh-frozen samples was r = 0.78 (p < 0.0001); this same analysis revealed a lower correlation coefficient in FFPE samples (r = 0.59) (p < 0.0001). A corresponding heatmap for the Pearson correlation of gene expression abundance in fresh-frozen (FF) and FFPE samples using NanoString vs. RQ-PCR is shown to the right of each scatter plot (C and D, respectively). These results show a good correlation between NanoString and RQ-PCR in fresh-frozen samples, and a lower correlation between data obtained using these two different technologies when using clinical, archival, FFPE tissues.

Conclusions We found that the multiplexed, color-coded probe-based method (NanoString nCounter™) achieved superior gene expression quantification results when compared to RQ-PCR, when using total RNA extracted from clinical, archival, FFPE samples. Such technology could thus be very useful for applications requiring the use of clinical archival material, such as large-scale validation of gene expression data generated by microarrays for the generation of tissue-specific gene expression signatures.
4,137
2011-05-09T00:00:00.000
[ "Biology" ]
Monocytes and Macrophages in Spondyloarthritis: Functional Roles and Effects of Current Therapies Spondyloarthritis (SpA) is a family of chronic inflammatory diseases, the most prevalent of which are ankylosing spondylitis (AS) and psoriatic arthritis (PsA). These diseases share genetic, clinical and immunological features, such as the implication of the human leukocyte antigen (HLA) class I molecule B27 (HLA-B27), inflammation of the peripheral joints, spine and sacroiliac joints, and the presence of extra-articular manifestations (psoriasis, anterior uveitis, enthesitis and inflammatory bowel disease). Monocytes and macrophages are essential cells of the innate immune system and are the first line of defence against external agents. In rheumatic diseases including SpA, the frequency and the phenotypic and functional characteristics of both cell types are deregulated and are involved in the pathogenesis of these diseases. In fact, monocytes and macrophages play key roles in the inflammatory processes characteristic of SpA. The aim of this review is to analyse the characteristics and functional roles of monocytes and macrophages in these diseases, as well as the impact of different current therapies on these cell types. Introduction Spondyloarthritis (SpA) is defined as a group of chronic inflammatory diseases that affect mainly the spine and joints, with the sacroiliac joint being the most typically involved. These diseases, which affect approximately 1% of the worldwide population, cause serious disorders, pain and disability, leading to significant health and socioeconomic problems. In contrast to other rheumatic diseases such as rheumatoid arthritis (RA) and systemic lupus erythematosus (SLE), the prevalence of spondyloarthritis is similar between males and females, and the onset of the disease (third to fourth decade) is earlier than in other rheumatic joint diseases [1][2][3][4][5]. The diseases framed within the SpA group are ankylosing spondylitis (AS), undifferentiated spondyloarthritis (USpA), axial spondyloarthritis (axSpA), psoriatic arthritis (PsA), reactive arthritis (ReA), enteropathic arthritis and peripheral SpA, the most predominant being AS and PsA [1,[6][7][8]. These diseases can be grouped into axial or peripheral forms depending on the regions of the body that are affected [9], and they share genetic, clinical and immunological features, including joint inflammation (peripheral and axial skeleton), extra-articular manifestations and the absence of diagnostic autoantibodies [10]. Joint involvement, mainly of the sacroiliac joint, is a common sign in SpA, but certain particularities relate to the different diagnoses. In fact, low back pain is typically found in AS and USpA [11]; the conjunctivitis-urethritis-polyarthritis triad characterizes ReA [12]; enteropathic arthritis patients present chronic bowel inflammation [13]; axial skeleton (spine and sacroiliac joint) involvement defines axSpA; psoriasis is the hallmark of PsA [14]; and peripheral arthritis with dactylitis is distinctive of peripheral SpA. Studies of DNA methylation have found differentially methylated positions in the HLA-DQB1 gene in AS patients. In addition, several hypermethylated genes have been described in SpA with respect to controls. Among them, hypermethylation of BCL11B, IRF8 and DNA methyltransferase genes such as DNMT1 was found in patients with AS with respect to healthy controls [38][39][40]. MicroRNAs (miRNAs) are also important epigenetic factors, and multiple studies have reported dysregulation of different miRNAs in SpA patients.
One of the most important works has found a signature of 13 miRNAs deregulated in monocytes, as well as 11 miRNAs deregulated in CD4 + T cells, in patients with axSpA compared to controls, which are implicated in the pathogenesis of SpA [41]. In this regard, correlations have been described between some miRNAs and certain clinical parameters of these diseases. For example, differential expression of miR-146a-5p, miR-125a-5p and miR-22-3p has been correlated in serum samples from SpA patients with TNF and C-reactive protein levels [42]. In addition, other related epigenetic marks have been reported in SpA. This is the case of histone modifications, as AS patients treated with a TNF inhibitor showed increased activity of the enzyme histone acetyl transferase, indicating increased acetylation and thus increased gene expression in these acetylated regions [43]. Also, increased levels of Histone Deacetylase 3 (HDAC3) were found in the peripheral blood of AS patients [44,45]. Environmental factors are also involved in SpA and, for instance, previous infections have been associated with the development of some SpA. A link has been found between the gut flora microbiota and inflammation in these diseases as well [7,46,47]. In fact, a rat model of SpA has shown that intestinal inflammation, dependent of microbiota, enhances bone erosive potential in monocytes. Besides, bowel disruption promoted systemic inflammation, osteoclastogenesis and joint destruction [48]. Conversely, in vitro studies revealed that anti-IL-17 drugs contribute to dysbiosis and gut inflammation through inhibition of IL-17 pathway [49,50]. Finally, mechanical stress seems to be a determining factor in the appearance and development of SpA [51]. Etiopathogenesis Despite the numerous studies in the field, the molecular mechanisms of action involved in the pathogenesis of SpA are still unclear [52]. There are currently different hypotheses, which can occur in combination and have in common that the trigger for autoinflammatory processes are mechanisms mediated by the HLA-B27 antigen. First hypothesis postulates that certain HLA-27 subclasses bind to peptides that are recognized by CD8 + cells, which leads to the activation of autoreactive T cells [46,53]. The second hypothesis suggests that defective HLA-B27 folding, occurring at the endoplasmic reticulum of immune cells, trigger the activation of the unfolding protein response (UPR) pathway, which induces the translocation and therefore activation of transcription factor NF-kB to the nucleus, leading to the production of cytokines involved in the pathogenesis of the disease by different inflammatory cells [46]. Related to this, it has been recently seen in macrophages that HLA-B27 impaired ubiquitination might promote the accumulation of misfolded HLA-B27 dimers, which are involved in the pathogenesis of SpA [54]. Moreover, murine models have revealed SpA-associated unconventional HLA-B27 molecules, detected in monocytes, resident and infiltrating macrophages [55]. The third hypothesis postulates that, since HLA-B27 tends to form homodimers, these are recognized as self-antigens by T cells and natural killer cells, leading to increase the production of cytokines such as IL-17 and IL-23 [46,52,56]. Also, a fourth hypothesis indicates that different polymorphisms in the aminopeptidases ERAP1 and ERAP2 genes would be involved in the production of aberrant forms of HLA-B27 and in the modification of peptides bound to HLA-B27. 
Finally, the most recent hypothesis links gut inflammation and dysbiosis with HLA-B27 and susceptibility to the development of SpA [52]. Overall, the mechanisms of action of these diseases involve the production of autoreactive T cells, through the activation of CD8 + T cells and through a key role of antigen presenting cells such as dendritic cells and macrophages. Following this activation, CD8+ T cells activate and perpetuate the inflammatory process, mainly through the release of key cytokines such as IL-17, IL-23, TNF, and IL-6; chemokines and other mediators such as Receptor Activator of Nuclear Factor-κB (RANK). The final consequences of these processes are the inflammation of the affected tissues (joints, skin, eyes, gut) and the destruction of the joints and spine, due to impaired osteoclastogenesis and bone remodelling [57]. Monocytes Cells of the innate immune system are involved in the onset and development of SpA and, within this group, monocytes have been shown to play a key role in the pathogenesis of these diseases [58]. According with the established literature, there are three classes of monocytes, defined by the expression of the surface markers CD14 and CD16. The classical monocytes, which are the most abundant population (around 90%), show high CD14 expression but no CD16 (CD14 ++ CD16 − ), meanwhile the non-classical express a low level of CD14 together with high CD16 (CD14 + CD16 ++ ). Finally, the intermediate monocytes present high levels of CD14 and low CD16 expression (CD14 ++ CD16 + ) and are considered a transitional population between the classical and non-classical monocyte subsets [58,59]. However, these 3 populations of monocytes reveal unique characteristics in healthy individuals, as they are defined by different transcriptional profiles, possess different repertoires of cell surface receptor genes and distinct cytokine production patterns revealed by LPS activation [59]. Similarly to other diseases, including RA, there is a disbalance in the frequency of three monocyte subsets in SpA patients [60]. In contrast to RA patients, who showed an increase of intermediated monocytes [61], an increment in the frequency of classical monocytes has been observed in SpA [62]. In addition, another study found an expansion of CD14 ++ CD16 + CCR9 + CX 3 CR1 + CD59 + monocytes in the peripheral blood, synovial fluid, synovial tissue and bone marrow of AS patients. Importantly, this population positively correlated with disease activity parameters and levels of C-reactive protein, and these cells displayed more pronounced phagocytic activity in AS patients than in controls [63]. Besides the frequency of monocyte subset, the phenotypic characteristics of monocytes are also altered in SpA patients. For instance, monocytes from AS patients exhibit higher pro-inflammatory phenotypes secrete higher amounts of pro-inflammatory cytokines and proteomic analysis showed a higher activation of leucocyte extravasation vascular, endothelial growth factor, Janus kinase/signal transducer and activator of protein (JAK/STAT) and TLR pathways [64,65]. Related to this, the levels of TLR4 were higher in monocytes from SpA and PsA patients compared to healthy controls [66]. Moreover, monocytes from PsA patients show higher expression of the calcium-binding proteins S100A8/A9, which have a central role in controlling leukocyte trafficking and the metabolic processes of arachidonic acid [67]. 
On another note, the monocytes to lymphocytes ratio (MLR) was increased in patients with AS compared with non-radiographic axSpA, another of the findings that highlights the role of monocytes in SpA. Furthermore, this ratio was correlated with C-reactive protein (CRP) levels, erythrocyte sedimentation rate (ESR) levels and spine movements [68]. Lastly, it is important to highlight that the monocytes are of great importance in these diseases because they are precursors of two key cell types in SpA: macrophages and osteoclasts, as we will discuss in the next sections [69]. Macrophages Macrophages are other cells belonging to innate immune system that are present in all tissues and body compartments and serve as the first line of defence against infection. They are the main phagocytic cells, but they are also antigen presenters and secrete cytokines involved in the immune system activation. Macrophages play also key roles in the maintenance of tissue homeostasis and are critical cells in the orchestration of chronic inflammation observed in different diseases, including SpA [24,60,70,71]. Regarding the ontology of this cell type, it was accepted during decades that all macrophages originate from circulating adult blood monocytes. However, works from last years have shown that tissue macrophages, including microglia, Kupffer cells, Langerhans cells and kidney, alveolar, heart and synovial macrophages, originate during embryonic development from the yolk sac, fetal liver or hematopoietic stem cells [72,73]. The phenotypic characteristics and functional capacities of macrophages are defined by the environmental factors, such as cytokines and pathogens that they are exposed during the differentiation process. Historically, macrophages have been classified into classically activated pro-inflammatory M1 or alternatively activated M2 wound-healing and immunosuppressive macrophages [71]. M1 macrophages, which are induced by IFN-γ, LPS and GM-CSF, are key components of host defence and are characterized by the expression of the pro-inflammatory cytokines IL-1β, IL-6, IL-12, IL-23 and TNF. The Th2 cytokines IL-4 and IL-13 induce the differentiation of wound-healing macrophages, which secrete components of the extracellular matrix and are involved in the tissue homeostasis. Finally, IL-10 drives immunosuppressive M2 macrophages that dampen the immune response and limit inflammation, playing this manner a regulatory role [21,70,71]. In addition to the different functional characteristics, M1 and M2 macrophages also express specific surface markers, such as CD80 and CD64 (M1) and CD200R and CD163 (M2) [74]. This is a useful classification for the in vitro differentiation of macrophages; however in vivo distinction is not easy, as macrophages can show characteristic of both M1 and M2 macrophages and they show a broad heterogeneity. In fact, recent studies have reported the existence of several macrophage subsets in the synovium of RA patients [21,75,76]. Phenotypic characterization of the SpA lining and sublining layers has shown an increased expression of the M2 macrophage marker CD163 compared to RA synovium [77,78]. Importantly, CD163 was also increased in the colonic mucosa of SpA and Crohn's patients versus ulcerative colitis patients and healthy controls [78,79]. 
The expression of M1 macrophages is more controversial, as an initial work found a reduction of CD80 and CD86 expression in the synovium of SpA compared to RA patients [78], but further works have not validated this finding, neither difference in the expression of CD14, CD68 CD64 or CD200R [77,80]. In addition, CD163 synovial expression was correlated with clinical disease parameters, such as swollen joint count (SJC), serum CRP and ESR [78]. In vitro studies also support these findings, as the expression of the M2 markers CD163 and CD200R is induced in peripheral blood monocytes by the stimulation with the synovial fluid of SpA, but not RA patients. In addition, the synovial fluid of SpA patients showed a reduced expression of TNF, IL-1 and CXCL-10, mediators secreted by M1 macrophages [81]. Altogether, these data suggest that in SpA M2 macrophages predominate over M1 phenotype. These M2-like phenotypic characteristics may be responsible of the differences in the synovium of SpA and RA patients, such as the reduced infiltration of B and T cells, higher frequency of Th17 cells and the increased vascularity, with more tortuous vessels found in SpA patients [78,82]. However, despite the M2-like phenotype of SpA macrophages, these cells are able to produce inflammatory cytokines. CD163 + SpA macrophages express high levels of HLA-DR and secrete TNF, but not IL-10 [78]. Moreover, TNF induced the expression of inflammatory mediators by monocyte-derived macrophages from blood and synovial fluid (SF) of PsA patients, as well from monocytes of healthy controls differentiated with the SF of PsA patients. Interestingly, activation of Tie2 signalling enhanced the TNF-dependent expression of these inflammatory mediators [83,84]. As Tie2 is also involved in angiogenesis and Angiopoietin-2, one of the Tie2 receptors, is elevated in the synovium of SpA patients, macrophage Tie2 signalling may be essential for the pathogenesis of SpA [85,86]. Importantly, IL-17, a key cytokine in SpA pathogenesis, stimulates the production and expression of proinflammatory cytokines by human macrophages [87]. Macrophages also express mediators that are elevated in SpA and are involved in the perpetuation of inflammation, such as Ca 2+ binding proteins S100A8 and S100A9 and HMGB1 (high mobility group box 1 proteins) [88,89], and angiogenic factors (vascular endothelial growth factor -VEGF-and basic fibroblast growth factor -BFGF-) that are highly expressed in the synovium of early PsA patients [85]. Finally, macrophages also contribute to the joint destruction through the production of matrix metalloproteinases, including MMP-2, MMP-3, MMP-7 and MMP-9 [90,91]. Osteoclasts Homeostatic bone tissue remodelling is consequence of a tight balance between the levels of bone formation (mediated by osteoblasts) and bone resorption (induced by osteoclasts). In SpA there is an imbalance in bone tissue remodelling, which leads to bone destruction and resorption, mainly at the peripheral joints, but also to new bone formation in the spine triggering disk fusion. Osteoblasts have a mesenchymal origin, meanwhile osteoclasts are multinucleated cells of hematopoietic origin. Osteoclast development is controlled by the interaction of TNF superfamily receptor-ligand pair known as Receptor Activator of Nuclear Factor-κB (RANK) and RANK ligand (RANKL), which are both necessary and sufficient requirement for osteoclast formation [92]. 
RANKL is over-expressed in SpA cells, notably in macrophages and memory T lymphocytes, but also in synovial fibroblasts and osteoblasts [57]. This enhanced expression of RANKL is mediated by different cytokines, such as TNF, IL-17, TGF-β and IL-22 [93,94]. Besides IL-17-mediated RANKL expression, IL-17 also plays a direct role in bone resorption through the expression of RANK in osteoclast precursors [93,95]. The role of IL-23 in osteoclastogenesis seems more controversial, and different studies have shown that its implication in this process is due to the induction of Th17 cells and IL-17 secretion, rather than a direct effect [95]. Interestingly, sera from patients with axSpA, which showed higher levels of TNF and IL-17 compared to healthy controls, promoted the expression of RANK during the osteoclastogenesis process, highlighting the prominent role of both cytokines [96]. Other mediators have been implicated in bone remodelling. For example, macrophage colony-stimulating factor (M-CSF) and IL-34, which maintain macrophage homeostasis and regulate osteoclasts, are raised in PsA serum, and these levels are associated with bone erosion [97]. In addition, inhibition of their shared receptor (CSF-1R) reduced the severity of arthritis and the bone destruction in an arthritis mouse model [98]. SpA Treatments and Effect on Monocyte/Macrophage Function The European Alliance of Associations for Rheumatology (EULAR) and the American College of Rheumatology (ACR) have recently established medication guidelines for treating spondyloarthritis. To date, the first line of pharmacological treatment of axSpA is non-steroidal anti-inflammatory drug (NSAID) therapy, followed by glucocorticoids, while the second and third lines correspond to biological disease-modifying antirheumatic drugs (DMARDs), relegating non-biological DMARDs to the last option. According to the guidelines, the biological DMARDs employed are TNF inhibitors, IL-17 inhibitors and JAK inhibitors, in order of preference. In PsA, the use of non-biological DMARDs prevails over NSAIDs and the administration of IL-12/23 inhibitors is suggested [99][100][101][102]. Nevertheless, newly targeted medications are being studied. Below, we analyse their effects on monocyte and macrophage populations. The mechanisms of action and effects of these treatments on monocytes and macrophages are summarized in Table 1. NSAIDs Although non-steroidal anti-inflammatory drugs (NSAIDs) are the first-line drug treatment for SpA, little evidence is available on the effect of these drugs on SpA monocytes/macrophages. NSAIDs inhibit the activity of cyclooxygenase (COX), which modulates prostaglandin E2 (PGE2) expression. PGE2 is an important early mediator of enthesitis, the hallmark of SpA, and is involved in the differentiation of Th17 cells [116] and, therefore, could activate monocytes/macrophages in an indirect manner. In addition, previous studies showed an immune-modulatory role of NSAIDs, as these drugs attenuated inflammatory processes induced by macrophages and T cells, both key cell types in the pathogenesis of SpA [103]. However, a study in axSpA patients found that NSAID treatment did not modulate monocyte secretion of IL-1β, IL-6 and TNF, in contrast with the effect of TNF inhibitors (TNFi). These results suggest that NSAIDs are not important modulators of monocyte/macrophage function [65].
In contrast to this lack of action on dampening inflammation, NSAIDs minimize radiographic signs of spinal damage progression in axSpA [117,118] and this effect might be due to the reduced COX/PGE 2 activity, which would reduce Th17 differentiation, leading to decreased osteoclastogenesis. Nevertheless, further studies are needed for validating this hypothesis. Glucocorticoids The administration of glucocorticoids is another line of treatment for SpA patients, but the utilization is controversial because of their adverse effects, as osteoporosis and increase of cardiovascular risk. However, a recent systematic review has demonstrated the efficacy of using glucocorticoids in the short term (≤6 months) in SpA. Importantly, no deaths or major adverse events were reported [119]. Glucocorticoids signal through the glucocorticoid receptor (GCR), which activate cellular pathways that modulate the activity of the transcription factors C/EBPβ, PPARs and NFκB. These transcription factors promote the expression of anti-inflammatory mediators [120] and inhibit the expression of inflammatory mediators, including prostaglandins (departing from COX2 transcription blocking) [119,121] and Phospholipase A2 in the arachidonic acid pathway [122]. The overall consequence is the resolution of the inflammation [123][124][125]. In monocytes, glucocorticoids increase IL-10 secretion [126] and deplete the proinflammatory CD16 + cells [104]. It has been proven that glucocorticoids also mitigate some of the effects of macrophage activation, such as the production of important macrophage mediators, including TNF and IL-6. However, macrophages produce large amounts of proinflammatory cytokines, and doses of corticosteroids that are potentially toxic if sustained for more than few days are needed for reducing this production [127]. TNF also takes part in intervertebral disc degeneration, suggesting that corticosteroids could reduce pain and, remarkably, the damage in intervertebral discs via reducing monocyte/macrophage TNF secretion [128]. Finally, it has recently been reported that glucocorticoids may act on macrophages inducing a phenotype involved in tissue repair, but this effect needs to be demosntrated in SpA pathogenesis [120]. Non-Biological Disease-Modifying Anti-Rheumatic Drugs Non-Biologicals disease-modifying antirheumatic drugs (DMARDs) are widely used for the treatment of rheumatic diseases, including SpA. Methotrexate (MTX) is the most common DMARD and, due to its effect modulating cell-specific signalling pathways, inhibits important pro-inflammatory properties of different cell types involved in the pathogenesis of rheumatic diseases, including T cells, macrophages, endothelial cells and fibroblast-like synoviocytes [105,129]. In the context of this review, MTX induces apoptosis of monocytes and reduces the expression of IL-1β by monocyte precursors and the expression of Fcγ receptors on monocytes of RA patients, demonstrating an antiinflammatory role [105,106]. In addition, MTX induces release of adenosine by different cell types and the binding of adenosine to the adenosine receptor (ADORA) 2 a and 3 reduces the monocyte secretion of TNF and IL-6, and induces the polarization of macrophages towards an antiinflammatory M2 phenotype. These data suggest an anti-inflammatory effect of MTX on monocyte/macrophages in an indirect manner [129,130]. 
However, another study showed that MTX dose-dependently induced the expression of IL-1, IL-6 and TNF in a monocytic cell line, which could be implicated in the adverse effect of MTX use in some rheumatic patients [131]. Finally, a study in 10 PsA patients showed that MTX treatment reduced the number of CD68 + macrophages in the synovial tissue, as well the expression of inflammatory mediators secreted by monocytes/macrophages, such as IL-1β, IL-8, TNF and MMP-3 [132]. Anti-TNF Treatments TNF is one of the key cytokines in the pathogenesis of SpA and is involved in different pathogenic processes of the disease, like inflammation, angiogenesis and osteoclastogenesis. TNF signals through TNF receptors (TNFR), mainly TNFR1 and TNFR2. After receptor binding, TNF activates two different pathways. On one hand, TNF activates MAP kinase pathways, such as c-Jun N-terminal kinases (JNKs), which induces the translocation of the transcription factor AP-1 to the nucleus. On the other hand, TNF activates another signalling pathway leading to the phosphorylation and degradation of IκBα, which promote the nuclear translocation of NF-κB. Both transcription factors ultimately trigger the expression of pro-inflammatory, anti-apoptotic, angiogenic and cell proliferation genes, which most of them are involved in the pathogenesis of SpA [133]. For that reason, there are currently several approved anti-TNF drugs for the treatment of these diseases [134]. Etanercept was the first anti-TNF therapy approved for therapeutic use in SpA. This drug is a TNF receptor fusion protein and is approved in different spondyloarthropathies: PsA, AS and non-radiographic axSpA. Regarding its function in monocytes and macrophages, recent studies has shown that etanercept skewed macrophage polarization towards a M2 phenotype and that etanercept reduced the expression of LPS-induced expression of NF-κB target genes and the LPS-induced MMP9 activity in SpA monocytes [107][108][109]. Adalimumab and Infliximab are recombinant human monoclonal antibodies that blocks TNF. They are currently approved for the treatment of several SpA, specifically PsA, plaque psoriasis, AS, ulcerative colitis and Crohn's disease [9,134]. Both TNF inhibitors (TNFi) restrict pathological angiogenesis and inflammatory cell infiltration in the synovium, including macrophage infiltration of the sublining layer [135]. In PsA patients, Adalimumab leads to a decrease in the number of tissue resident macrophages (CD163 + ), infiltrating macrophages (MRP8 + ) and early stage differentiated macrophages (MRP14 + ) [136], while Infliximab significantly reduces the CD68 + macrophage levels in synovial tissue of PsA patients [137,138]. Moreover, both TNFi also enhance in vitro nonclassical monocytes, decrease classical monocytes [139] and modulate macrophage polarization to M2 phenotype [109]. In vitro research studies have shown that both Adalimumab and Infliximab inhibit IL-12/IL-23 production in M1 macrophages through the formation of immune complexes, demonstrating a functional effect of these TNFi in these cell types [140]. Finally, Infliximab also inhibits the osteoclast resorptive activity in AS patients [141]. On the other hand, Certolizumab pegol and Golimumab are also two anti-TNF monoclonal antibody-based treatments used and approved for SpA. Previous research has shown that Certolizumab, as well as Infliximab and Adalimumab, induces differentiation of an immunosuppressive macrophage subtype that inhibits T-cell proliferation [142]. 
Anti-IL-17 Treatments Since IL-17 is an essential cytokine involved in the pathogenesis of SpA, therapies against this cytokine have been developed in the last years. There are currently two approved anti-IL-17 treatments for SpA: Secukinumab and Ixekizumab. Secukinumab is a treatment whose target is the cytokine IL-17A and its structure is a fully human monoclonal IgG1k antibody. It has been validated in several SpA diseases, such as AS [143] and PsA [144]. Likewise, Ixekizumab is an anti-IL-17A drug and is a humanized IgG4 monoclonal antibody. Its efficacy has been validated in different SpA types, such as PsA [145] or AS [146]. Besides that, there are currently several anti-IL-17 drugs of clinical interest in development, such as Bimekizumab [147] and Afasevikumab [148]. Different macrophage subpopulations express several subunits of the IL-17 receptor (IL-17R). Macrophage IL-17 signalling is mediated through IL-17R and downstream signalling leads to the activation of C/EBP proteins and the nuclear translocation of NF-kB. This results in the release of a group of key cytokines such as IL12p70, GM-CSF, IL-3 and IL-9, representing a specific cytokine response profile [149,150]. Thus, direct blockade of IL-17 by binding to an anti-IL17 antibody prevents the ligation to IL-17R, inhibiting this manner the release of the aforementioned inflammatory mediators and therefore the pathogenic pathways that trigger macrophage activation and stimulation. The functional consequences in the context of SpA would be the abrogation of monocyte/macrophage activation and the reduction of IL-17-mediated osteoclastogenesis [110,111]. In fact, recent studies reveal that anti-IL17 drugs, in particular Secukinumab, reduce macrophage infiltration and MMP-3 expression, besides controlling disease signs, all without compromising systemic immune response [112]. 6.6. Anti-IL-12/Anti-IL-23 Therapy IL-12 and IL-23 are cytokines involved in the pathogenesis of autoimmune and immune-mediated diseases. Both cytokines present the p40 subunit and play a key role in immune cell regulation. IL-12 primarily mediates Th1 responses, enhancing IFN-γ production by NK cells and T cells, leading ultimately to the skewing towards M1 macrophages. On the other hand, IL-23 is crucial for the Th17 differentiation and is involved in the production of IL-17A and IL-17F by NK cells. In terms of signalling mechanisms, both IL-12 and IL-23 share similar pathways, such as JAK2, TYK2, STAT1, STAT3, STAT4 and STAT5 [151,152]. Ustekinumab is a monoclonal antibody against IL-12 and IL-23 approved in ulcerative colitis, Crohn's disease and PsA. Its mechanism of action consists in the binding of the antibody to the p40 subunit present in both IL-12 and IL-23, inhibiting both cytokines. Inhibition of this subunit is effective in SpA because the IL-23/IL-17 axis plays a key role in the pathogenesis of these diseases, and macrophages are involved in this process, as they are the main producers of this cytokine. Thus, on one hand, IL-23 blockade inhibits Th17 differentiation and a decrease in IL-17 levels. And, on the other hand, the blockade of IL-12 impairs Th1-lymphocytes differentiation [153]. In addition, a recent functional study of the effect of Ustekinumab on PsA has yielded important results. In this study, a significantly lower infiltration of CD68 + macrophages in the synovial sublining layer was found, which ameliorates the pathogenesis of the disease. 
Regarding the expression of inflammatory mediators, Ustekinumab reduced the levels of IL-23p19 and MMP3 in the synovial tissue biopsies of these patients [113]. JAK Inhibitors Janus kinase (JAK)/STAT pathways are essential in the inflammatory processes observed in rheumatic diseases [154]. There are different JAK family members (JAK1-JAK3 and TYK2), with JAK1 being the most important. In the context of monocytes/macrophages, JAK/STAT pathways are involved in different processes. On one hand, JAK/STAT signalling mediates the response to cytokines involved in macrophage differentiation [155]. For example, the binding of IFN-γ to its receptor activates JAK/STAT pathways, leading to M1 macrophage differentiation. On the other hand, IFN-γ, alone or in combination with IL-12, signals through JAK2-Tyrosine kinase 2 (TYK2) or JAK1-JAK2 pathways and induces the release of TNF by macrophages, which contributes to the development of the disease [154]. Due to the prominent role of JAK/STAT pathways in the production of inflammatory mediators, JAK inhibitors (JAKi) have emerged in recent years as therapeutic options of great relevance for the treatment of several immune-mediated inflammatory diseases. JAKi reduce the production of IL-12 and IFN-γ, which promotes a decrease in the levels of TNF. Besides that, JAK inhibition can also directly or indirectly reduce the production of other cytokines, such as IL-17, IL-23, IL-18, IL-1, IL-6, IL-7, IFN-α and IFN-β, improving the inflammatory status [114]. However, although multiple inflammatory cytokines signal through JAK/STAT, JAKi do not appear to have a direct effect on cytokine targets such as TNF or IL-17. The mechanism of action of JAKi in rheumatic diseases might be the interaction with alternative cytokine pathways [154]. Since JAK-dependent signalling is also involved in different proinflammatory pathways observed in the pathogenesis of SpA, JAKi are promising therapeutic options [156]. In fact, Tofacitinib, a JAK1 and JAK3 inhibitor, is a drug already approved in PsA [157] and with promising results in a phase III trial in AS [158]. However, the effect of Tofacitinib on SpA monocyte/macrophage activation is still unknown and further research is needed in this context. Abatacept, a cytotoxic T-lymphocyte antigen 4-immunoglobulin fusion protein (CTLA4-Ig), is a selective modulator of T cells employed in SpA treatment [159]. Abatacept inhibits CD4 + T cell activation, but it has also been observed that Abatacept reduces the migratory capacity of monocytes from RA patients [115,160]. Moreover, Abatacept downregulates macrophage TNF production induced by activated T cells [161], and a recent paper has shown that Abatacept is able to directly skew RA macrophages towards an M2 phenotype [162]. IL-6 Inhibitors Despite the implication of IL-6 in SpA pathogenesis, its inhibition has failed in clinical trials [163,164], except in anti-TNF-refractory aggressive SpA [165]. According to mouse model experiments, this lack of efficacy could arise because the pathological mechanisms in tissue-resident cells are IL-6-dependent in only a few SpA patients [166]. Directed Therapies: From Monocytes and Macrophages to Disease Management Directly targeting monocytes and macrophages or their pathways is a potential novel therapeutic strategy in SpA, as both cell types are implicated in the pathophysiology of these diseases. Such approaches are therefore promising, but they are not yet approved for clinical use.
In chronic inflammation, hematopoietic stem cells are redirected to myelopoiesis for granulocyte-monocyte progenitors (GMPs) formation. Particularly, in SpA, these GMPs gather in bone marrow, spleen and affected joints, contributing to disease progression. Furthermore, secreted granulocyte-monocyte colony stimulating factor (GM-CSF) is a cytokine essential for the proliferation and differentiation of myeloid cells, including monocytes. Innate lymphoid cells, IL-17A + T cells and mast cells secrete granulocytemonocyte colony stimulating factor (GM-CSF). As these cell types are elevated in SpA patients, consequently the levels of this cytokine are also elevated [167][168][169]. Functionally, GM-CSF also induces monocyte inflammatory activity [170]. Taking this into account, antibody blocking of GM-CSF reveals potential therapeutic value in SpA, as it is being tested in other rheumatic diseases [169,171]. Apheresis Granulocyte and monocyte apheresis, an extracorporeal therapy consisting of selective removal of monocytes and macrophages from blood, is a safe procedure that could have an application in SpA, as it is propose for PsA and other rheumatic diseases [178,179]. However, further studies are needed to fully elucidate the therapeutic use in SpA. Concluding Remarks In this review, we have summarized and analyzed the main molecular mechanisms involved in the pathogenesis of spondyloarthritis and the function of monocytes and macrophages in these diseases ( Figure 1). As it has been observed, approved drugs for the treatment of spondyloarthropathies have a clear involvement at the molecular level with the mechanisms of action of monocytes and macrophages in these diseases. This indicates a preponderant role of monocytes and macrophages in SpA and highlights these cell types as a promising target for the development of new therapies.
7,378
2022-02-01T00:00:00.000
[ "Medicine", "Biology" ]
Improving the Accuracy of Laplacian Estimation with Novel Variable Inter-Ring Distances Concentric Ring Electrodes Noninvasive concentric ring electrodes are a promising alternative to conventional disc electrodes. Currently, the superiority of tripolar concentric ring electrodes over disc electrodes, in particular, in accuracy of Laplacian estimation, has been demonstrated in a range of applications. In our recent work, we have shown that accuracy of Laplacian estimation can be improved with multipolar concentric ring electrodes using a general approach to estimation of the Laplacian for an (n + 1)-polar electrode with n rings using the (4n + 1)-point method for n ≥ 2. This paper takes the next step toward further improving the Laplacian estimate by proposing novel variable inter-ring distances concentric ring electrodes. Derived using a modified (4n + 1)-point method, linearly increasing and decreasing inter-ring distances tripolar (n = 2) and quadripolar (n = 3) electrode configurations are compared to their constant inter-ring distances counterparts. Finite element method modeling and analytic results are consistent and suggest that increasing inter-ring distances electrode configurations may decrease the truncation error resulting in more accurate Laplacian estimates compared to respective constant inter-ring distances configurations. For currently used tripolar electrode configuration, the truncation error may be decreased more than two-fold, while for the quadripolar configuration more than a six-fold decrease is expected. Introduction Electroencephalography (EEG) is an essential tool for brain and behavioral research, as well as one of the mainstays of hospital diagnostic procedures and pre-surgical planning. Despite scalp EEG's many advantages, end users struggle with its poor spatial resolution, selectivity and low signal-to-noise ratio that are critically limiting the research discovery and diagnosis [1][2][3]. In particular, EEG's poor spatial resolution is primarily due to (1) the blurring effects of the volume conductor with disc electrodes; and (2) EEG signals having reference electrode problems as idealized references are not available with EEG and interference on the reference electrode contaminates all other electrode signals [2]. The application of the surface Laplacian (the second spatial derivative of the potentials on the scalp surface) to EEG has been shown to alleviate the blurring effects enhancing the spatial resolution and selectivity, and reduce the reference problem [4][5][6]. Noninvasive concentric ring electrodes (CREs) can resolve the reference electrode problems since they act like closely spaced bipolar recordings [2]. They also act as spatial filters reducing the low spatial frequencies and increasing the spatial selectivity [7][8][9]. Moreover, CREs are symmetrical alleviating electrode orientation problems [9]. Most importantly, tripolar CREs (TCREs; Figure 1B) have been shown to estimate the surface Laplacian directly through the nine-point method, an extension of the five-point method (FPM) used for bipolar CREs, and significantly better than other electrode systems including bipolar and quasi-bipolar CRE configurations [10,11]. Compared to EEG with conventional disc electrodes ( Figure 1A) Laplacian EEG via TCREs (tEEG) have been shown to have significantly better spatial selectivity (approximately 2.5 times higher), signal-to-noise ratio (approximately 3.7 times higher), and mutual information (approximately 12 times lower) [12]. 
Because of such unique capabilities, TCREs have found numerous applications in a wide range of areas including brain-computer interface [13,14], seizure onset detection [15,16], seizure attenuation using transcranial focal stimulation applied via TCREs [17][18][19][20], detection of high-frequency oscillations and seizure onset zones [21], etc. These EEG-related applications of TCREs, as well as recent applications related to electroenterograms [22,23], electrocardiograms [11,[24][25][26], and electrohysterograms [27], suggest the potential of CRE technology in noninvasive electrophysiology, as well as the need for further improvement of CRE design. Recent directions for such improvement include printing disposable TCREs on flexible substrates to increase the electrode's ability to adjust to body contours for better contact and to provide higher signal amplitude and signal-to-noise ratio [23,25,27], as well as assessing the effect of ring dimensions and electrode position on recorded signal [26]. However, the signal recorded from TCREs in References [23,[25][26][27] is either a Laplacian derived for the case of the outer ring and the central disc of the TCRE being shorted together (quasi-bipolar CRE configuration) or just a set of bipolar signals representing differences between potentials recorded from the rings and the central disc. In our work, we are aiming to optimize the CRE design by combining the signals from all the recording surfaces available into a Laplacian estimate since for TCREs such approach has resulted in significantly higher Laplacian estimation accuracy and radial attenuation compared to bipolar and quasi-bipolar CRE configurations [10,11]. In Reference [28] we have shown that accuracy of Laplacian estimation can be improved with multipolar CREs. General approach to estimation of the Laplacian for an (n + 1)-polar electrode with n rings using the (4n + 1)-point method for n ≥ 2 has been proposed. This approach allows cancellation of all the Taylor series truncation terms up to the order of 2n, which has been shown to be the highest order achievable for a CRE with n rings [28]. Proposed approach was validated using finite element method (FEM) modeling.
Multipolar concentric ring electrode configurations with n ranging from 1 ring (bipolar electrode configuration) to 6 rings (septapolar electrode configuration) were compared and the obtained results suggested statistical significance of the increase in Laplacian accuracy caused by the increase of the number of rings n [28]. To the best of the authors' knowledge, all the previous research on CREs was based on the assumption of constant inter-ring distances (distances between consecutive rings). This means that distances between the rings were not considered as a means of improving the accuracy of Laplacian estimation. This paper takes the next fundamental step toward further improving the Laplacian estimation accuracy by proposing novel variable inter-ring distances CREs. Laplacian estimates for linearly increasing and linearly decreasing inter-ring distances TCRE (n = 2) and quadripolar CRE (QCRE; n = 3) configurations are derived using a modified (4n + 1)-point method from Reference [28] and directly compared to their constant inter-ring distances counterparts. Analytic analysis and FEM modeling are used to draw this comparison. Main results include establishing a connection between the analytic truncation term coefficient ratios from the Taylor series used in the (4n + 1)-point method and the respective ratios of Laplacian estimation errors obtained using the FEM model. Both ratios are consistent in suggesting that increasing inter-ring distances CRE configurations may offer more accurate Laplacian estimates compared to respective constant inter-ring distances CRE configurations. For currently used TCREs the Laplacian estimation error may be decreased more than two-fold, while, for the QCREs, a more than six-fold decrease in estimation error is expected. Notations and Preliminaries In Reference [28] a general (4n + 1)-point method for constant inter-ring distances (n + 1)-polar CREs with n rings was proposed. It was derived using a regular plane square grid with all inter-point distances equal to r, presented in Figure 2. First, FPM was applied to the points with potentials $v_0$, $v_{r,1}$, $v_{r,2}$, $v_{r,3}$ and $v_{r,4}$ following Huiskamp's calculation of the Laplacian potential $\Delta v_0$ using the Taylor series [29]: $v_{r,1} + v_{r,2} + v_{r,3} + v_{r,4} - 4v_0 = r^2 \Delta v_0 + \text{truncation terms}$ (1), where the truncation terms consist of fourth- and higher-order spatial derivatives of the potential. Equation (1) was generalized by taking the integral along the circle of radius r around the point with potential $v_0$. Defining x = r cos(θ) and y = r sin(θ) as in Huiskamp [29] we obtain $\frac{1}{2\pi}\int_0^{2\pi} v(r,\theta)\,d\theta - v_0 = \frac{r^2}{4}\Delta v_0$ plus truncation terms of the general form $\frac{r^k}{k!}\int_0^{2\pi}\sum_{j=0}^{k}\sin^{k-j}(\theta)\cos^{j}(\theta)\,d\theta\,\frac{\partial^k v}{\partial x^{k-j}\partial y^{j}}$ for even k ≥ 4 (2), where $\frac{1}{2\pi}\int_0^{2\pi} v(r,\theta)\,d\theta$ is the average potential on the ring of radius r and $v_0$ is the potential on the central disc of the CRE. Next, a second FPM was applied with an integral along a circle of radius 2r ($v_0$, $v_{2r,1}$, $v_{2r,2}$, $v_{2r,3}$ and $v_{2r,4}$ in Figure 2) around the point with potential $v_0$ [10,11], producing the following for the difference between the average potential on the ring of radius 2r and the potential on the central disc of the CRE: $\frac{1}{2\pi}\int_0^{2\pi} v(2r,\theta)\,d\theta - v_0 = \frac{(2r)^2}{4}\Delta v_0$ plus the corresponding truncation terms with r replaced by 2r (3).
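The leading term of Equation (2) can be checked numerically: the average of a smooth potential over a small ring, minus its value at the center, is close to (r^2/4) times the Laplacian at the center. The short Python sketch below does this for an arbitrary test function; the function, the evaluation point and the ring radius are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Arbitrary smooth test potential and its analytical Laplacian (illustrative assumption).
v = lambda x, y: np.exp(0.3 * x) * np.sin(0.5 * y) + x**2 * y
lap_v = lambda x, y: (0.09 - 0.25) * np.exp(0.3 * x) * np.sin(0.5 * y) + 2 * y

def ring_average_minus_center(x0, y0, radius, n_samples=10_000):
    """Numerically average the potential over a circle of the given radius around (x0, y0)."""
    theta = np.linspace(0.0, 2.0 * np.pi, n_samples, endpoint=False)
    ring_mean = v(x0 + radius * np.cos(theta), y0 + radius * np.sin(theta)).mean()
    return ring_mean - v(x0, y0)

x0, y0, r = 0.7, -0.4, 1e-2
lhs = ring_average_minus_center(x0, y0, r)
rhs = (r**2 / 4.0) * lap_v(x0, y0)
print(lhs, rhs)   # the two values differ only by the fourth- and higher-order truncation terms
```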
Finally, generalizing Equations (2) and (3) for the case of a multipolar CRE with n rings (n ≥ 2), we obtain a set of n FPM equations, one for each ring with radii ranging from r to nr ($v_0$, $v_{nr,1}$, $v_{nr,2}$, $v_{nr,3}$ and $v_{nr,4}$ in Figure 2): for each ring of radius lr, with l = 1, ..., n, $\frac{1}{2\pi}\int_0^{2\pi} v(lr,\theta)\,d\theta - v_0 = \frac{(lr)^2}{4}\Delta v_0$ plus truncation terms of order four and higher (4). To estimate the Laplacian for this general case the n equations are combined in a way that cancels all the truncation terms up to the highest order that can be achieved for n rings, increasing the accuracy of the Laplacian estimate. In order to find such a combination we arrange the coefficients $l^k$ of the truncation terms with the general form $\frac{(lr)^k}{k!}\int_0^{2\pi}\sum_{j=0}^{k}\sin^{k-j}(\theta)\cos^{j}(\theta)\,d\theta\,\frac{\partial^k v}{\partial x^{k-j}\partial y^{j}}$, for order k ranging in increments of 2 from 4 to 2n and ring radius multiplier l ranging from 1 (Equation (2)) to n (Equation (4)), into an (n − 1) by n matrix A that is a function only of the number of rings n: the entry of A in the row corresponding to order k and the column corresponding to ring l is equal to $l^k$ (5). A matrix equation of the form Ax = 0 (6) is equivalent to a homogeneous system of linear equations, where 0 is the (n − 1)-dimensional zero vector and x is the n-dimensional vector that allows the cancellation of all the truncation terms up to the order of 2n by setting the linear combination of the n coefficients $l^k$ corresponding to all ring radii for each order k equal to 0 [28]. We have shown that 2n is the highest truncation term order that can be cancelled out for a CRE with n rings while assuring the existence of a nontrivial solution (x ≠ 0) of Equation (6) by keeping the homogeneous system underdetermined [28]. The solution x of Equation (6) is given by the null space (or kernel) of matrix A [30]. Moreover, it should be noted that such null space vectors used for Laplacian estimates are not unique. From the properties of matrix multiplication it follows that for any vector x that belongs to the null space of matrix A and a scalar c the scaled vector cx also belongs to the null space of the same matrix A, since (cA)x = c(Ax). Therefore, any scaled version of a given null space vector would also be a null space vector. Variable (Linearly Increasing and Linearly Decreasing) Inter-Ring Distances CREs We consider the case of CRE configurations with variable inter-ring distances that increase or decrease linearly the further the concentric ring lies from the central disc. To modify the (4n + 1)-point method from Reference [28] to the case of linearly increasing inter-ring distances, the distance between the central point with potential $v_0$ and the four points on the first concentric ring (the smallest and the closest one to the central point) is set equal to r. The distance between the first and the second (second closest to the central point) concentric ring is set equal to 2r. The distance between the second and the third (third closest to the central point) concentric ring is set equal to 3r, etc. In this case, the sum of all the inter-ring distances to the outer (furthest from the central point), n-th, ring can be obtained using the formula for the n-th term of the triangular number sequence, which describes the sum of all points in a triangular grid where the first row contains a single point and each subsequent row contains one more point than the previous one, and is equal to n(n + 1)/2 [31].
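The null-space construction of Equations (5) and (6) can be illustrated with a few lines of Python (using sympy); the helper name below is ours. Given the ring radius multipliers of a CRE, it builds the matrix of truncation-term coefficients and returns an integer-scaled null-space vector; for the constant inter-ring distances TCRE and QCRE it reproduces scaled versions of the (16, −1) and (270, −27, 2) coefficient sets used later in the text.

```python
from sympy import Matrix, lcm

def cre_null_space_coefficients(ring_radius_multipliers):
    """Coefficients that cancel truncation terms of order 4, 6, ..., 2n for a CRE
    whose n rings sit at the given multiples of the innermost distance r."""
    n = len(ring_radius_multipliers)
    # (n - 1) x n matrix of truncation-term coefficients l**k, k = 4, 6, ..., 2n (Equation (5))
    A = Matrix([[l**k for l in ring_radius_multipliers] for k in range(4, 2 * n + 1, 2)])
    x = A.nullspace()[0]                   # underdetermined homogeneous system -> 1-D null space
    scale = lcm([term.q for term in x])    # clear denominators; any scaling is equally valid
    return [int(term * scale) for term in x]

print(cre_null_space_coefficients([1, 2]))      # constant TCRE, rings at r and 2r    -> [-16, 1]
print(cre_null_space_coefficients([1, 2, 3]))   # constant QCRE, rings at r, 2r, 3r   -> [270, -27, 2]
```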
For the linearly increasing inter-ring distances CRE, whose l-th ring therefore lies at the radius l(l + 1)r/2, the modified matrix A′ of truncation term coefficients from Equation (5) has entries $\left(\frac{l(l+1)}{2}\right)^k$ in the row corresponding to order k and the column corresponding to ring l (7). In the opposite case of a CRE configuration with inter-ring distances decreasing linearly the further the concentric ring lies from the central disc, the distance between the outer (furthest from the central point), n-th, concentric ring and the second to last (second furthest from the central point) concentric ring is equal to r. The distance between the second to last and the third to last (third furthest from the central point) concentric rings is set equal to 2r, etc. In this case, the sum of all the inter-ring distances preceding the outer, n-th, ring can also be found using the formula for the n-th term of the triangular number sequence due to the commutative property of addition. Therefore, the l-th ring lies at the radius l(2n − l + 1)r/2, and the modified matrix A″ of truncation term coefficients from Equation (5) for the linearly decreasing inter-ring distances CRE has entries $\left(\frac{l(2n - l + 1)}{2}\right)^k$ in the row corresponding to order k and the column corresponding to ring l (8). An example including both linearly increasing and linearly decreasing inter-ring distances TCREs is presented in Figure 3. FEM Modeling All the FEM modeling was performed using Matlab (Mathworks, Natick, MA, USA). To directly compare the discrete Laplacian estimates, including the previously proposed constant inter-ring distances TCRE (n = 2) and QCRE (n = 3) configurations and their counterparts with variable inter-ring distances, a FEM model from References [10,11,28] was used with an evenly spaced square mesh of size 600 × 600 located in the first quadrant of the X-Y plane above a unit charge dipole projected to the center of the mesh and oriented towards the positive direction of the Z axis, as shown in Figure 4. Namely, comparisons to the linearly increasing and linearly decreasing variable inter-ring distances TCRE and QCRE configurations, respectively, were drawn. The bipolar CRE configuration (n = 1) was also included in the FEM model. To ensure direct comparability of results for different CRE configurations, all modeled bipolar, tripolar and quadripolar CREs had the same dimensions despite having different numbers of rings. The largest, outer ring radius for all the CRE configurations was selected to be equal to 6r, since 6 is the least common multiple of 2 and 3. Relative locations of the concentric rings for all the TCRE and QCRE configurations modeled are presented in Figure 5. The outer ring for the bipolar CRE (the only concentric ring in this configuration) coincides with the outer rings of the TCRE and QCRE configurations (Figure 5). At each point of the mesh, the electric potential φ generated by a unity dipole was calculated with the formula for the electric potential due to a dipole in a homogeneous medium of conductivity σ [32]: $\varphi = \frac{p \cdot (r_p - r)}{4\pi\sigma\,|r_p - r|^3}$ (9), where r = (x, y, z) and p = ($p_x$, $p_y$, $p_z$) represent the location and the moment of the dipole and $r_p$ = ($x_p$, $y_p$, $z_p$) represents the observation point. The conductivity σ of the medium was taken to be 7.14 mS/cm to emulate biological tissue [33].
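A minimal sketch of the forward computation in Equation (9) is given below; the dipole orientation, conductivity and depth follow the values stated in the text, while the mesh extent, its centering above the dipole and the variable names are illustrative assumptions rather than the authors' Matlab implementation.

```python
import numpy as np

sigma = 7.14e-3          # conductivity, S/cm (7.14 mS/cm, as stated in the text)
depth = 5.0              # dipole depth below the recording plane, cm
p = np.array([0.0, 0.0, 1.0])   # unit dipole moment oriented along +Z

# Square mesh in the plane z = depth, centered above the dipole placed at the origin (assumed extent).
xs = np.linspace(-3.0, 3.0, 600)
ys = np.linspace(-3.0, 3.0, 600)
X, Y = np.meshgrid(xs, ys)

# Equation (9): phi = p . (r_p - r) / (4*pi*sigma*|r_p - r|^3), with the dipole at r = 0.
R = np.stack([X, Y, np.full_like(X, depth)], axis=-1)   # observation points r_p
dist = np.linalg.norm(R, axis=-1)
phi = (R @ p) / (4.0 * np.pi * sigma * dist**3)

print(phi.shape, phi.max())   # potential distribution sampled by the simulated electrodes
```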
For this FEM model it was assumed that the medium was homogeneous and p = (0, 0, 1), making the term p/(4πσ) in Equation (9) constant. The analytical Laplacian was then calculated at each point of the mesh by taking the second spatial derivative of the electric potential φ (Equation (10)); according to He and Wu [32], this results in Equation (11). Laplacian estimates for seven CRE configurations were computed at each point of the mesh where appropriate boundary conditions could be applied. Modeling was repeated for different integer multiples of r ranging from 1 to 10. Therefore, in the worst case scenario of a CRE being modeled with the inter-point distance using a multiple value equal to 10, the number of points on the mesh where appropriate boundary conditions could be applied to compute Laplacian estimates was equal to 480 × 480 (since for each dimension of the mesh 600 − 2 × 6 × 10 = 480). Correspondingly, in the best case scenario for a multiple value equal to 1 the number of points on the mesh where Laplacian estimates were computed was equal to 588 × 588 (600 − 2 × 6 × 1 = 588). Since the model was tied to the physical dimensions (in cm) through the target physical size of the CRE, the smallest CRE diameter was equal to 0.5 cm (multiple of r equal to 1) and the largest was equal to 5 cm (multiple of r equal to 10). The dipole depth was equal to 5 cm. Derivation of Laplacian estimate coefficients for variable inter-ring distances CRE configurations was performed using the approach proposed in this paper by finding the null space of the respective matrices A′ (Equation (7)) and A″ (Equation (8)) for n = 2 and n = 3. For TCREs the coefficients were (81, −1) and (81, −16) for increasing and decreasing inter-ring distances, respectively. For QCREs the coefficients were (4374, −70, 1) and (6875, −2187, 625) for increasing and decreasing inter-ring distances, respectively. Coefficients for constant inter-ring distances CRE configurations were adopted from Reference [28]: (16, −1) for TCRE and (270, −27, 2) for QCRE. These seven estimates, including three for TCRE (with constant, increasing, and decreasing inter-ring distances, respectively), three for QCRE, and one for the bipolar CRE configuration, were then compared with the calculated analytical Laplacian for each point of the mesh where the corresponding Laplacian estimates were computed, using Relative Error (Equation (12)) and Maximum Error (Equation (13)) measures [10,11,28,29], where Maximum Error_i = max |∆v − ∆_i v|, i represents the seven Laplacian estimation methods used to approximate the Laplacian potential ∆_i v, and ∆v represents the analytical Laplacian potential. FEM Modeling The FEM modeling results of the two error measures computed for the seven Laplacian estimation methods corresponding to the seven CRE configurations using Equations (12) and (13), respectively, are presented on a semi-log scale in Figure 6 for CRE diameters ranging from 0.5 cm to 5 cm. Laplacian estimation errors in Figure 6 suggest that the increasing inter-ring distances TCRE and QCRE configurations hold potential for an improvement over their constant inter-ring distances counterparts, while the decreasing inter-ring distances TCRE and QCRE configurations do not. Moreover, this improvement appears to become more significant with the increase of the number of rings (i.e., there is more significant improvement for QCREs than for TCREs). This stems from a comparison of averages (mean ± standard deviation over 10 different sizes of each CRE configuration). It should be noted that these averages are presented for the FEM model with dipole depth of 5 cm. This dipole depth was selected since, out of the range of dipole depths (1 cm to 5 cm) that were assessed in Reference [28], it corresponded to the smallest standard deviation values. The smallest standard deviation assures that the Relative and Maximum Errors for 10 different sizes of each CRE configuration are as close as possible to the reported means. Analytic Verification Variable inter-ring distances CREs have the same number of rings and, therefore, the same number and order of truncation terms in their Laplacian estimates as their constant inter-ring distances counterparts. Therefore, constant and variable inter-ring distances CRE configurations can be directly compared by assessing the coefficients of the respective truncation terms that comprise the truncation error of the Laplacian estimation. Analyzing those coefficients will allow us to determine which electrode configuration minimizes the truncation error, resulting in a more accurate Laplacian estimate. Performing this kind of analysis for increasing and constant inter-ring distances as well as for constant and decreasing inter-ring distances TCREs and QCREs would allow verifying the results obtained by FEM modeling analytically. Increasing and Constant Inter-Ring Distances TCREs and QCREs First, we derive the coefficients of the truncation terms for TCRE and QCRE configurations with increasing and constant inter-ring distances as functions of the order of the truncation term, k, under the conditions of the FEM model used in this study: the largest, outer ring radius equals 6r and the relative locations of the concentric rings are as shown in Figure 5.
For constant inter-ring distances TCREs the coefficients used to combine the differences between the concentric ring potentials and the central disc potential into a Laplacian estimate can be derived using the approach proposed in Reference [28]. This approach cancels all the truncation terms up to the order of 2n, which has been shown to be the highest order achievable for a CRE with n rings [28]. In the case of TCREs (n = 2) this corresponds to cancellation of the fourth order, leaving truncation terms of orders 6 and higher. Assuming that our TCRE has two rings with radii αr and βr, respectively, such that β > α, for each ring we take the integral along the circle with the corresponding radius of the Taylor series in a manner identical to deriving Equations (2)-(4), obtaining Equations (14) and (15). For constant inter-ring distances TCREs we combine Equations (14) and (15) with the coefficients 16 and −1, respectively, resulting in Equation (16); for increasing inter-ring distances TCREs, Equations (14) and (15) have to be combined with the coefficients 81 and −1, respectively, resulting in Equation (17). Both Laplacian Equations (16) and (17) allow cancellation of the fourth order truncation term, since (16α^4 − β^4) is equal to 0 for α and β equal to 3 and 6, respectively (constant inter-ring distances TCRE; panel A of Figure 5), and (81α^4 − β^4) is equal to 0 for α and β equal to 2 and 6, respectively (increasing inter-ring distances TCRE; panel B of Figure 5). Now we can express the coefficients c(k) of the truncation terms with the general form $c(k)\,\frac{r^{k-2}}{k!}\int_0^{2\pi}\sum_{j=0}^{k}\sin^{k-j}(\theta)\cos^{j}(\theta)\,d\theta\,\frac{\partial^k v}{\partial x^{k-j}\partial y^{j}}$ as functions of the truncation term order k: $c_C^{TCRE}(k) = \frac{16\cdot 3^k - 6^k}{27}$ for the constant and $c_I^{TCRE}(k) = \frac{81\cdot 2^k - 6^k}{72}$ for the increasing inter-ring distances TCRE, for even k ≥ 6. The same steps can be taken to derive the truncation term coefficient functions for increasing and constant inter-ring distances QCREs (n = 3), cancelling the truncation terms up to the sixth order. For the constant inter-ring distances QCRE, coefficients (270, −27, 2) are used to combine the potentials on three rings with radii 2r, 4r, and 6r (constant inter-ring distances QCRE; panel A of Figure 5) and the central disc, resulting in $c_C^{QCRE}(k) = \frac{270\cdot 2^k - 27\cdot 4^k + 2\cdot 6^k}{180}$ for even k ≥ 8. For the increasing inter-ring distances QCRE, coefficients (4374, −70, 1) are used to combine the potentials on three rings with radii r, 3r, and 6r (increasing inter-ring distances QCRE; panel B of Figure 5) and the central disc, resulting in $c_I^{QCRE}(k) = \frac{4374 - 70\cdot 3^k + 6^k}{945}$ for even k ≥ 8. We hypothesize that the ratios of the constant inter-ring distances truncation term coefficient functions over the increasing inter-ring distances truncation term coefficient functions calculated for the respective TCRE and QCRE configurations will be comparable to the respective ratios of Relative and Maximum Errors obtained using the FEM model. The ratio of truncation term coefficient functions for constant inter-ring distances to increasing inter-ring distances TCRE configurations is the following for even k ≥ 6: $\frac{c_C^{TCRE}(k)}{c_I^{TCRE}(k)} = \frac{72\,(16\cdot 3^k - 6^k)}{27\,(81\cdot 2^k - 6^k)}$ (18). In a similar way, the ratio of truncation term coefficient functions for constant inter-ring distances to increasing inter-ring distances QCRE configurations is the following for even k ≥ 8: $\frac{c_C^{QCRE}(k)}{c_I^{QCRE}(k)} = \frac{945\,(270\cdot 2^k - 27\cdot 4^k + 2\cdot 6^k)}{180\,(4374 - 70\cdot 3^k + 6^k)}$ (19). Plots of both functions from Equations (18) and (19) are presented in Figure 7 for even truncation term order k ranging from 6 to 30 and from 8 to 30, respectively. While the signs of the truncation term coefficients are consistent for both constant and increasing inter-ring distances CRE configurations (all negative for TCREs and all positive for QCREs), Figure 7 serves a three-fold purpose.
First, it shows that absolute values of coefficients are larger for constant inter-ring distances CRE configurations, since ratios of truncation term coefficients for constant inter-ring distances CRE configurations over corresponding increasing inter-ring distances CRE configurations are all larger than 1. Second, Figure 7 shows that the ratios of truncation term coefficients are higher for QCREs than for TCREs. Therefore, the improvement in Laplacian accuracy is likely to become more significant with the increase in the number of rings. Third, Figure 7 shows that all the coefficient ratios increase with the increase of the truncation term order but, according to Reference [34], for Taylor series "higher-order terms usually contribute negligibly to the final sum and can be justifiably discarded". Therefore, we will consider the coefficient ratios for the lowest nonzero truncation term for TCRE (sixth order) and QCRE (eighth order) configurations, equal to 2.25 and 7.11, respectively (dotted lines in Figure 7), as the ones that contribute the most to the truncation error. These analytically obtained ratios are comparable (difference of less than 5%) to the respective ratios of Relative and Maximum Errors obtained using the FEM model (Figure 6) for tripolar (2.23 ± 0.02 and 2.22 ± 0.03, respectively) and quadripolar (6.95 ± 0.14 and 6.91 ± 0.16, respectively) CRE configurations. Even if we take weighted arithmetic means of all the truncation term coefficient ratios from Figure 7 for truncation term orders up to 30, with weights derived from an exponential decay model with unit original amount and decay rate equal to −1 to account for the decreasing contribution of higher order terms, we obtain weighted average ratios of 2.37 and 7.83, respectively. These analytic ratios are still within 20% of the respective ratios of Relative and Maximum Errors obtained by FEM modeling.
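The two lowest-order ratios quoted above can be reproduced directly from the truncation term coefficient functions; in the small Python check below the helper names are ours, and the printed values match 2.25 and 7.11.

```python
# Truncation-term coefficient functions for constant vs. increasing inter-ring distances CREs.
def c_tcre_const(k):  return (16 * 3**k - 6**k) / 27
def c_tcre_incr(k):   return (81 * 2**k - 6**k) / 72
def c_qcre_const(k):  return (270 * 2**k - 27 * 4**k + 2 * 6**k) / 180
def c_qcre_incr(k):   return (4374 - 70 * 3**k + 6**k) / 945

# Lowest nonzero truncation orders: k = 6 for TCREs, k = 8 for QCREs.
print(round(c_tcre_const(6) / c_tcre_incr(6), 2))   # 2.25
print(round(c_qcre_const(8) / c_qcre_incr(8), 2))   # 7.11
```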
Constant and Decreasing Inter-Ring Distances TCREs and QCREs In a manner identical to the one used for increasing inter-ring distances CREs we can show that for the decreasing inter-ring distances TCRE $c_D^{TCRE}(k) = \frac{4\,(81\alpha^k - 16\beta^k)}{81\alpha^2 - 16\beta^2}$ or, for α and β equal to 4 and 6, respectively (decreasing inter-ring distances TCRE; panel C of Figure 5), as defined in the FEM model, $c_D^{TCRE}(k) = \frac{81\cdot 4^k - 16\cdot 6^k}{180}$ for even k ≥ 6. For the decreasing inter-ring distances QCRE, coefficients (6875, −2187, 625) are used to combine the potentials on three rings with radii 3r, 5r, and 6r (decreasing inter-ring distances QCRE; panel C of Figure 5) and the central disc, resulting in $c_D^{QCRE}(k) = \frac{6875\cdot 3^k - 2187\cdot 5^k + 625\cdot 6^k}{7425}$ for even k ≥ 8. The ratio of truncation term coefficient functions for decreasing inter-ring distances to constant inter-ring distances TCRE configurations is the following for even k ≥ 6: $\frac{c_D^{TCRE}(k)}{c_C^{TCRE}(k)} = \frac{27\,(81\cdot 4^k - 16\cdot 6^k)}{180\,(16\cdot 3^k - 6^k)}$ (20). In a similar way, the ratio of truncation term coefficient functions for decreasing inter-ring distances to constant inter-ring distances QCRE configurations is the following for even k ≥ 8: $\frac{c_D^{QCRE}(k)}{c_C^{QCRE}(k)} = \frac{180\,(6875\cdot 3^k - 2187\cdot 5^k + 625\cdot 6^k)}{7425\,(270\cdot 2^k - 27\cdot 4^k + 2\cdot 6^k)}$ (21). Plots of both functions from Equations (20) and (21) are presented in Figure 8 for even truncation term order k ranging from 6 to 30 and from 8 to 30, respectively. Similar conclusions to the ones derived from Figure 7 can be derived from Figure 8. Figure 8 suggests that truncation errors for decreasing inter-ring distances CRE configurations are greater than the ones for corresponding constant inter-ring distances CRE configurations, which results in more accurate Laplacian estimates for constant inter-ring distances CRE configurations, with the extent of improvement related to the increase in the number of rings. More importantly, the coefficient ratios for the lowest nonzero truncation term for the TCRE (sixth order) and QCRE (eighth order) configurations are equal to 1.78 and 3.52, respectively (dotted lines in Figure 8). These analytically obtained ratios are again comparable (difference of less than 5%) to the respective ratios of Relative and Maximum Errors obtained using the FEM model for tripolar (1.75 ± 0.02 and 1.74 ± 0.03, respectively) and quadripolar (3.41 ± 0.09 and 3.38 ± 0.11, respectively) CRE configurations. If we take weighted arithmetic means of all the truncation term coefficient ratios from Figure 8 for truncation term orders up to 30, with weights derived from an exponential decay model with unit original amount and decay rate equal to −1 to account for the decreasing contribution of higher order terms, we obtain weighted average ratios of 1.91 and 3.99, respectively. These ratios are still within 20% of the respective ratios of Relative and Maximum Errors, analytically verifying the results obtained by FEM modeling. Discussion The contribution of this paper is twofold. First, novel variable inter-ring distances CREs are proposed, as opposed to all the previous research on CREs that, to the best of the authors' knowledge, was based on the assumption of constant inter-ring distances. Laplacian estimates for variable inter-ring distances CREs are derived using a modified (4n + 1)-point method from Reference [28] for any given number of rings n.
Second, accuracies of Laplacian estimates corresponding to constant, linearly increasing and linearly decreasing inter-ring distances TCRE and QCRE configurations are directly compared using FEM model analysis. FEM modeling results obtained in this paper are consistent with the previous FEM modeling results obtained for bipolar and tripolar CRE configurations only [10,11], as well as for multipolar CRE configurations up to the septapolar one [28], in terms of the accuracy of Laplacian estimation increasing (Relative and Maximum Errors decrease) with an increase in the number of rings n and decreasing (Relative and Maximum Errors increase) with an increase in the diameter of the CRE. More importantly, the obtained FEM modeling results suggest that increasing inter-ring distances CRE configurations may decrease Relative and Maximum Errors, resulting in more accurate Laplacian estimates compared to respective constant inter-ring distances CRE configurations. For currently used TCREs the truncation error may be decreased more than two-fold, while for QCREs a more than six-fold decrease is expected. These results are verified analytically based on our hypothesis that the ratios of constant inter-ring distances truncation term coefficient functions over the increasing inter-ring distances truncation term coefficient functions (as well as of decreasing inter-ring distances truncation term coefficient functions over constant inter-ring distances truncation term coefficient functions) for TCRE and QCRE configurations will be comparable to the respective ratios of Relative and Maximum Errors obtained using the FEM model. The type of analysis that was used to confirm our hypothesis, providing a new instrument for verification of FEM modeling results, would not have been feasible in our previous works. For example, in Reference [28], where multipolar CRE configurations ranging from bipolar (n = 1) to septapolar (n = 6) were compared using FEM modelling, Laplacian estimates for different CRE configurations had different numbers of truncation terms (one truncation term less for each additional concentric ring, causing an increase in Laplacian estimation accuracy), which made analytical comparison of truncation term coefficients for different CRE configurations infeasible. In this study the proposed variable inter-ring distances CREs have the same numbers of rings and, therefore, the same numbers (and orders) of truncation terms in their respective Laplacian estimates as their constant inter-ring distances counterparts, which allowed us to quantify the expected improvement in estimation accuracy analytically. Therefore, this paper provides a comprehensive theoretical basis for variable inter-ring distances CREs, as well as its validation via analytically verified FEM modeling. The biomedical significance of CREs is related to the fact that the decreases in estimation error presented in this manuscript translate directly into more accurate surface Laplacian estimates, which is of critical importance since, for example, in applications to EEG the surface Laplacian has been shown to alleviate the blurring effects, enhancing the spatial resolution and selectivity, and to reduce the reference problem [4][5][6]. This is why several methods were proposed for Laplacian estimation through interpolation of potentials on a surface and then estimating the Laplacian from an array of conventional (single pole) disc electrodes [35][36][37].
Since only CREs allow estimating Laplacian directly at each electrode instead of combining the data from an array of conventional disc (single pole) electrodes, further improving the accuracy of Laplacian estimation via variable inter-ring distances CREs may be critical to the advancement of noninvasive electrophysiological electrode design with application areas not limited to electroencephalography, electrocardiography, and electromyography. In particular, since "negative Laplacian is approximately proportional to cortical (or dura) surface potential" [38], every application currently utilizing Laplacian signals such as, for example, tEEG [10][11][12][13][14][15][16][17][18]20,21] may benefit from more accurate Laplacian estimation since it improves estimation of the cortical potentials. Moreover, other potential advantages of variable inter-ring distances CREs need to be investigated including, for example, improved control of the electric field used for seizure attenuation compared to the one that current transcranial focal stimulation applied via constant inter-ring distances TCREs can offer [17][18][19][20]. It should be noted that variable inter-ring distances CREs do not cause any inherent growth in the size of the electrode compared to their constant inter-ring distances counterparts since all CRE configurations considered and modeled had the same dimensions. Neither do they cause an inherent growth in computational complexity since null space of matrices from Equations (5), (7), and (8) can be found offline for any given n with the preamplifier board calculating the Laplacian estimate as the linear combination of differences of potentials from each of the n rings and the central disc respectively using this null space vector as the vector of coefficients. Finally, no inherent growth is caused in the number of amplifier channels since surface Laplacian estimate calculated by the preamplifier board is the only signal sent to the amplifier for each CRE. Therefore, variable inter-ring distances CREs are not expected to have any adverse effects on signal acquisition or implementation and complexity of the related hardware. Further investigation is needed to confirm the obtained results. The plan for future work includes several directions and is based on limitations of the current study. The main limitation of both the proposed (4n + 1)-point method and the FEM model is that the width of the concentric rings and the diameter of the central disc are not taken into account and therefore cannot be optimized. To pursue our ultimate goal of being able to determine optimal CRE designs for specific applications these two parameters need to be included into future modifications of the (4n + 1)-point method and into the FEM model along with the currently included number of rings, size of the electrode, and, as proposed in this study, inter-ring distances. Another limitation is that while this study proposes variable inter-ring distances CREs for the first time, only linearly increasing and linearly decreasing inter-ring distances are considered. The solution to the general inter-ring distances optimization problem is likely to result in nonlinear relationship which is why solving this general problem is the second direction of the future work. Third direction is to create prototypes of variable inter-ring distances CREs with 2 and more rings and test them on real life data, both phantom and from human subjects. 
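To illustrate the point about computational complexity, a sketch of the per-electrode computation is given below, assuming the null-space coefficient vector has been found offline. The normalization follows from the second-order term of Equation (2); the function name, variable names and example values are ours rather than the authors' hardware implementation.

```python
def laplacian_estimate(ring_potentials, disc_potential, coeffs, ring_radii):
    """Combine (ring - disc) potential differences into a surface Laplacian estimate.
    coeffs is a null-space vector of the truncation-term matrix; ring_radii are in cm."""
    weighted = sum(c * (v - disc_potential) for c, v in zip(coeffs, ring_potentials))
    norm = sum(c * rad**2 for c, rad in zip(coeffs, ring_radii)) / 4.0  # from the (lr)^2/4 term of Eq. (2)
    return weighted / norm

# Increasing inter-ring distances TCRE: rings at 2r and 6r with coefficients (81, -1).
r = 0.25  # cm, illustrative inner spacing
est = laplacian_estimate(ring_potentials=[1.02e-3, 1.10e-3],  # averaged ring potentials, V (made-up values)
                         disc_potential=1.00e-3,
                         coeffs=[81, -1],
                         ring_radii=[2 * r, 6 * r])
print(est)  # estimated surface Laplacian, V/cm^2
```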
This direction is critical since obtained results suggest that variable inter-ring distances CREs may result in more accurate Laplacian estimates. This raises the question of how small can the distances between concentric rings get before partial shorting due to salt bridges becomes significant enough to affect Laplacian estimation. Moreover, these prototypes would allow investigating the translation of Relative and Maximum Errors assessed in this study into improvement of spatial selectivity, signal-to-noise ratio, mutual information, etc., the same way it was investigated for tEEG compared to EEG with conventional disc electrodes [12]. Prototyping techniques envisioned include using both rigid substrates for nondisposable CREs (e.g., gold-plated copper on biocompatible dielectric) [10][11][12][13][14][15][16][17][18][19][20][21][22]24] and flexible substrates for disposable CREs (e.g., silver paste on polyester film) [23,[25][26][27]39]. A comparative analysis of flexible CRE manufacturing techniques including screen-printing, inkjet, and gravure is available in Reference [39]. Conclusions With tripolar concentric ring electrodes gaining increased recognition in a wide range of applications due to their unique capabilities this study assesses the potential of novel variable inter-ring distances concentric ring electrodes. Results of mathematical analysis and finite element method modeling for tripolar and quadripolar concentric ring electrode configurations are consistent in suggesting that increasing inter-ring distances concentric ring electrodes may offer more accurate Laplacian estimation compared to their constant inter-ring distances counterparts. for Engineering (ENG) General and Age Related Disabilities (GARD) (award number 0933596 to Walter Besio). The content is solely the responsibility of the authors and does not represent the official views of the NASA, NIH or NSF. Conflicts of Interest: The authors declare no conflict of interest. The founding sponsors had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript, and in the decision to publish the results. Abbreviations The following abbreviations are used in this manuscript:
11,720.8
2016-06-01T00:00:00.000
[ "Engineering", "Medicine" ]
Achieving Sustainable Development Goals 2016-2030 in Nigeria through Female Enrolment into Electrical/Electronics Engineering Trade in Technical Colleges of Adamawa State https://www.mautech.edu.ng Abstract Purpose: This study investigated female enrolment into the electrical/electronics engineering trade in technical colleges of Adamawa State in order to suggest ways of augmenting it for the Sustainable Development Goals (SDGs) 2016-2030. Approach/ Methodology/ Design: Two research questions and two null hypotheses were formulated to guide the study. A descriptive survey research design was adopted for the study. The sample of the study comprised 38 teachers and 140 parents. A 50-item Female Enrolment in Electrical/Electronics Engineering Trade (FEEET) Questionnaire was developed by the researchers and used for data collection. The questionnaire was validated by three experts from the Department of Electrical Technology Education, Modibbo Adama University of Technology, Yola, Adamawa State. A reliability coefficient of 0.81 was obtained for the instrument using Cronbach's Alpha reliability method. The mean statistic was used to answer the two research questions while the z-test statistic was used to test the two hypotheses at the 0.05 level of significance. Findings: The findings of the study revealed that inadequate knowledge of female participation in the electrical/electronics engineering trade, the hazards involved in working with electricity, and poor gender policy implementation, among others, were factors affecting female enrolment into the programme. Establishment of electrical/electronics engineering trade skill acquisition centres for females and provision of starter packs for female graduates of the electrical/electronics engineering trade, among others, were strategies identified for improving female enrolment into the programme. Practical Implication: The study has practical implications for achieving the Sustainable Development Goals in Nigeria. A sustainable financing scheme for female trainees of the electrical/electronics engineering trade should be established in order to boost their interest in the programme. Originality/Value: The study identified that inadequate knowledge of female participation in the electrical/electronics engineering trade, the hazards involved in working with electricity, societal perceptions about electricity, cultural sanctions on women, early marriages, and poor gender policy implementation are the main factors that affect female enrolment in technical colleges in Nigeria. Introduction The statistical records of students' enrolment for the three Government Science and Technical Colleges (GSTC) of Adamawa State disclosed a decline in the enrolment of female students into the electrical/electronics engineering trade. For instance, in the 2016/2017 session, out of 365 students that were enrolled, 90.68% (331 students) were males and 9.32% (34 students) were females. There was a further decline in the 2017/2018 session as 392 students were enrolled, of which 92.88% (364 students) were males and 7.12% (28 students) were females. The enrolment of female students declined yet again in the 2018/2019 session, as out of the 362 students that enrolled, only 4.70% (17 students) were females and 95.30% (345 students) were males (Apagu, et al., 2003). The above report points to a rapid decline in female students' enrolment into electrical/electronics engineering trades in the technical colleges of Adamawa State.
If this decline in female students' enrolment is not addressed urgently, the electrical/electronics engineering trades of the technical colleges in Adamawa State might end up with no female enrolment in the programme in the near future. It is against this background that this study sought to ascertain the factors affecting female enrolment in the electrical/electronics engineering trade in the technical colleges of Adamawa State, while also identifying strategies to augment their enrolment as a means of achieving the Sustainable Development Goals. The Sustainable Development Goals (SDGs) define the world we want. They apply to all nations and mean, quite simply, ensuring that no one is left behind; realizing such aspirations has almost always been up to national governments. In the spirit of leaving no one behind, it is up to the United Nations and all its partners and supporters to ensure that everyone has access to the SDGs and their inclusive message (Allen, 2016). That means that the UN, unlike purely commercial ventures, must preserve, for as long as they remain relevant, the 'old' means of communication. Radio, for instance, is still the only way to reach a large share of the population living mostly in rural areas where the Internet has not yet penetrated. According to Abel (2016), young people live in virtual reality, digitally connected, experimenting every few seconds with images and accustomed to change at the press of a button; their universe is visual. The 21st century is an era of exponential technological advancement. Current society has become more dependent on technology than in any other epoch in history. Regardless of occupation or geographic location, technology continually impacts and regulates our lives. Individuals who are technologically literate are able to use, manage, assess, and understand technology. Unfortunately, a large percentage of female Nigerian citizens have entered the 21st century technologically illiterate owing to poor enrolment in technologically inclined courses. Many middle-aged or older citizens claim that they are deficient in technological know-how due to a lack of technical exposure as children; it is therefore essential that young people are introduced to and immersed in technology to prepare them for a future of enhanced and sustained development (Buse, 2015). It is hoped that the findings of this study will help encourage the participation of female students in acquiring skills in the electrical/electronics engineering trade in technical colleges of Adamawa State, while also providing policy makers and educational planners with the information needed for more effective educational planning and better policy formulation to improve female participation in the trade. It is also expected that Adamawa communities will benefit from the findings of the study, as they will encourage female aspirants to pursue careers in the electrical/electronics engineering trade, which will in turn equip them for self-reliance, thereby reducing crime, prostitution and other social vices among females and changing the general perception of society towards female participation in TVET. Literature Review The Sustainable Development Goals (SDGs) are a collection of 17 global goals set by the United Nations General Assembly in 2015 for the year 2030.
The goals, which include Quality Education and Gender Equality, bolster the United Nations' commitment to end poverty, and are unique in that they cover issues that affect various nations. The 17 SDGs are interconnected, implying that success in one brings about success in others; for example, achieving gender equality or inclusiveness will help alleviate poverty. According to the United Nations Development Programme (2019), the belief that education is one of the most powerful and proven vehicles for sustainable development is reaffirmed by achieving inclusive and quality education for all. Unfortunately, Nigerian society has stereotyped some occupations and fields of education as belonging to a particular gender, thereby discouraging the other gender from either benefiting from such programmes or playing an active role in their survival and effectiveness in helping to sustain development in society (Ibanga et al., 2019). The electrical/electronics engineering trade is one of the Technical and Vocational Education and Training (TVET) programmes, with the objective of producing craftsmen and craftswomen who will maintain, service and repair electrical equipment and electronic appliances such as cassette players, radios and televisions, wind electrical machines, and carry out both domestic and industrial wiring and installations (Akinduro, 2006). However, employers of labour are reluctant to employ the few available female graduates of the electrical/electronics engineering trade, probably because of their perceptions of female occupational choice; consequently, female craftswomen receive low patronage from society when they try to be self-reliant (West, 2007). This unwillingness to employ craftswomen contradicts the World Health Organization (2009), which described gender equity as fairness and justice in the distribution of benefits and responsibilities between women and men. Robert (2005) maintained that females have less access to educational opportunities in Nigeria than males. This poor enrolment is further accentuated in Technical and Vocational Education and Training (Ezeliora & Ezeokana, 2010). Gender differences in various fields of education have been reported, and this gender imbalance also affects the pattern of enrolment in the electrical/electronics engineering trade, as observed by Josiah and Etuk (2014). Okwelle and Agwi-Vincent (2018) determined the strategies for improving female students' enrolment in technical and vocational education and training programmes in Nigeria through students' involvement in public relations activities. The study adopted a descriptive survey design. The population included 365 final-year technical education students and 68 technology education lecturers in the three tertiary institutions in Rivers State that offer programmes in technical education. The sample drawn from this population comprised 120 final-year technical education students and 68 technology education lecturers. The simple random sampling technique was used in selecting the respondents. Data were analyzed using mean, standard deviation and t-test statistics.
The findings of this study revealed, among other things, that the following strategies were identified for improving female students' enrolment in TVET programmes through female students' involvement in public relations: effective supervision of students on industrial training and teaching practice, introducing public relations activities into the school curriculum, and sponsoring and participating in student organizations and programmes. It was recommended, among other things, that institutions should establish and fund public relations units without delay, and that school public relations officers should ensure that they carry their TVE students along during public relations activities. In another related study, Khaguya (2014) examined the factors influencing female students' enrolment in technical courses. The study adopted a case study research design, employed the Krejcie and Morgan formula, and used a sample size of 219. Multiple regression was used to analyze the data. The findings revealed that cultural factors such as early marriages, female genital mutilation, cultural beliefs and time spent on household chores leave girls with little time to devote to their academic work. It was also revealed that financial factors such as the fees paid for technical courses and expensive learning materials and books led parents to discourage their daughters from choosing technical courses. Girls are also not informed about possible future salaries and their own abilities, and are therefore not motivated to choose technical courses. Psychological factors also influenced enrolment: the findings show that the majority of respondents indicated that technical courses are seen as masculine and meant to be pursued by boys. The study also revealed that girls perform equally well on many assessments of technical skills and attitudes in the elementary school years, and that what they need are role models to encourage them to pursue technical courses in their tertiary education programmes. In a related study, Angelopulo (2013) identified the drivers of student enrolment and retention in the academic programmes of a South African university. The research was undertaken amongst a diverse group of students, faculty, and support and oversight staff, chosen to represent as wide a range of opinions on the topic as possible. Q methodology was used to categorize the variety and span of subjective opinion on the market-related, service quality and cultural variables that support or undermine student participation in the department's academic programmes. Eight richly diverse accounts were derived, reflecting the most salient perceptions on the topic. Underlying factors that supported student enrolment and retention were the reputation, credibility and image of the university and department, and specific academic, disciplinary, technical and administrative competencies. The study revealed that the main factors that undermined enrolment and retention were the scope of research and tuition, institutional performance, inconsistency in teaching quality and the relative inaccessibility of tuition material. Methodology and Procedures The study adopted a descriptive survey research design and used a structured questionnaire to elicit information from the parents and teachers of the electrical/electronics engineering trade in the government technical colleges of Adamawa State. Two research questions and two null hypotheses were formulated to guide the study.
The study had a total population of 258 respondents, comprising 38 teachers and 220 parents of students enrolled in the electrical/electronics engineering trade in the three technical colleges of Adamawa State. The sample was 178 respondents, comprising 38 teachers and 140 parents. The entire population of teachers was used, while the Krejcie and Morgan (1970) table was used to determine the sample size of parents. Proportionate random sampling was then used to select the 140 parents. A structured Female Enrolment in Electrical/Electronics Engineering Trade (FEEET) Questionnaire developed by the researchers was used for data collection. The responses to the questionnaire were structured on a 5-point Likert scale: Strongly Agreed (SA) = 5, Agreed (A) = 4, Undecided (U) = 3, Disagreed (D) = 2 and Strongly Disagreed (SD) = 1. The questionnaire was validated by three experts from the Department of Electrical Technology Education, Modibbo Adama University of Technology, Yola, Adamawa State. A reliability coefficient of 0.81 was obtained for the instrument using Cronbach's alpha reliability method. The mean statistic was used to answer the two research questions and the z-test was used to test the null hypotheses at the 0.05 level of significance. All items with a mean score of 3.50 and above were considered agreed, while items with a mean score below 3.50 were considered disagreed. Any hypothesis with a p-value less than or equal to 0.05 was regarded as significant; otherwise, it was not. Results and Discussion Research Question 1 What are the factors affecting female enrolment into the electrical/electronics engineering trade in technical colleges of Adamawa State? Data presented in Table 1 showed that the respondents (teachers and parents) agreed on almost all the factors affecting female enrolment in the electrical/electronics engineering trade in technical colleges of Adamawa State, based on the grand means, which range from 3.07 to 4.61, except for two items. Specifically, the respondents did not consider a gender-biased NBTE curriculum or the difficulty of combining home chores with study to be factors affecting female enrolment. Furthermore, with standard deviations ranging from 0.64 to 0.94 and a cluster deviation of 0.78, the results also indicate that the opinions of the respondents are clustered around the mean response. The respondents (teachers and parents) agreed on almost all the factors affecting female enrolment in the electrical/electronics engineering trade in technical colleges of Adamawa State, which include inadequate knowledge of female participation in the trade, the hazards involved in working with electricity, societal perceptions about electricity, cultural sanctions on women, early marriages, and poor gender policy implementation, among others. However, the respondents did not consider a gender-biased NBTE curriculum or the difficulty of combining home chores with study to be factors affecting female enrolment.
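A short sketch of the quantitative machinery described in the Methodology section is given below: the Krejcie and Morgan (1970) sample-size formula (the parameter values chi2 = 3.841, P = 0.5 and d = 0.05 are the conventional ones behind their published table and are assumed here, since the study cites only the table), the 3.50 agreement cutoff applied to item means, and the two-tailed z-test compared against the 0.05 alpha level. The function names are ours, and the means and standard deviations passed to the z-test are placeholders rather than the study's data; only the group sizes of 38 teachers and 140 parents are taken from the text.

```python
import math

def krejcie_morgan(N, chi2=3.841, P=0.5, d=0.05):
    """Krejcie & Morgan (1970) required sample size for a population of size N."""
    return (chi2 * N * P * (1 - P)) / (d ** 2 * (N - 1) + chi2 * P * (1 - P))

def item_decision(mean_score, cutoff=3.50):
    """Decision rule for the research questions: 'agreed' at or above the cutoff."""
    return "agreed" if mean_score >= cutoff else "disagreed"

def two_sample_z(mean1, sd1, n1, mean2, sd2, n2, alpha=0.05):
    """Two-sample z-test for the difference of two group means (two-tailed)."""
    se = math.sqrt(sd1 ** 2 / n1 + sd2 ** 2 / n2)   # standard error of the difference
    z = (mean1 - mean2) / se
    p = math.erfc(abs(z) / math.sqrt(2))            # two-tailed p-value from the normal CDF
    return z, p, ("significant" if p <= alpha else "not significant")

# Sample size of parents: a population of 220 gives roughly 140.1,
# i.e. the tabulated value of 140 used in the study.
print(round(krejcie_morgan(220)))

# Decision rules; 4.61 and 3.07 are the extreme grand means reported for
# Research Question 1, the z-test inputs are hypothetical group summaries.
print(item_decision(4.61), item_decision(3.07))
print(two_sample_z(4.10, 0.70, 38, 3.80, 0.80, 140))
```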
The findings of the study revealed that inadequate knowledge of female participation in the electrical/electronics engineering trade, the hazards involved in working with electricity, societal perceptions about electricity, poor gender policy implementation, poor parental perceptions of female education, cultural sanctions on women, early marriages, a poor primary education background at entry level, and the unwillingness of parents to allow girls to travel long distances are major factors affecting female enrolment into the electrical/electronics engineering trade in technical colleges of Adamawa State. These findings are in line with Gimba (2008), who maintained that poverty and lack of awareness affect female enrolment into formal schooling in northern Nigeria. In support of this view, Amoo (2011) maintained that the enrolment of females, or the girl-child, is even worse in northern Nigeria due to parents' attitudes toward western education. Ayomike (2014) opined that poor societal perception, a poor entry level, lack of recognition, and discrimination against graduates of technical and vocational education (TVE) are some of the factors affecting female enrolment in TVET. The results of this study are also in line with previous studies indicating that the absence of policy implementation strongly affects female enrolment into technical and engineering colleges in Nigeria. More importantly, it is recognized that the media and society have not played an active role in dismantling the stereotypes associated with this type of education. Research Question 2 What are the strategies for improving female enrolment into the electrical/electronics engineering trade in technical colleges of Adamawa State? Data presented in Table 2 showed that the respondents (teachers and parents) agreed on almost all the strategies for improving female enrolment into the electrical/electronics engineering trade in technical colleges of Adamawa State, based on the grand means, which range from 3.21 to 4.50, except for one item. Specifically, the respondents did not consider a downward review of admission requirements for females with an interest in the electrical/electronics engineering trade to be a strategy for improving female enrolment. Furthermore, with standard deviations ranging from 0.51 to 0.91 and a cluster deviation of 0.79, the results also indicate that the opinions of the respondents are clustered around the mean response. The respondents (teachers and parents) agreed on almost all the strategies outlined for improving female enrolment into the electrical/electronics engineering trade in technical colleges of Adamawa State, which include the establishment of electrical/electronics engineering trade skill acquisition centres for females, creating a conducive environment for would-be practising female technicians, and the provision of starter packs for female graduates of the trade, among others. However, the respondents did not consider a downward review of admission requirements for females with an interest in the trade to be a strategy for improving female enrolment.
The findings of the study also revealed that the establishment of electrical/electronics skill acquisition centres for females, the creation of a conducive environment for would-be practising female technicians, the provision of starter packs for female graduates of the electrical/electronics engineering trade, the use of the media to change stereotyped expectations, and the establishment of policies that favour and encourage women's and girls' education, among others, are strategies for improving female enrolment into the electrical/electronics engineering trade in technical colleges of Adamawa State. Abdulahi (2016) suggested that greater public awareness and acceptance, as well as increased enrolment into TVET programmes, will be achieved if the media are involved in awareness campaigns. Nawe (2002) suggested that improving school conditions, increasing school facilities, teacher competence, systematic gender-sensitization programmes, the introduction of a quota system in enrolment, and a gender-sensitive environment will enhance enrolment into TVET programmes, the electrical/electronics engineering trade inclusive. Hypothesis 1 There is no significant difference in the mean responses of parents and teachers on the factors affecting the enrolment of female students into the electrical/electronics engineering trade in technical colleges of Adamawa State. The result in Table 3 reveals a z-value of 2.66 with a p-value of 0.01. Since the p-value is less than the alpha level of the test (p < .05), the null hypothesis is rejected. This means that there is a significant difference in the mean responses of parents and teachers on the factors affecting the enrolment of female students into the electrical/electronics engineering trade in technical colleges of Adamawa State. Findings from the study therefore revealed that there was a significant difference in the mean responses of parents and teachers on the factors affecting the enrolment of female students in the electrical/electronics engineering trade in technical colleges of Adamawa State. This is at variance with Apagu et al. (2003), who revealed that even though there was a disproportionate enrolment pattern of males and females in TVET programmes, the respondents did not differ in their opinions on the factors responsible for poor female enrolment, and with Emmanuel (2015), who reported that the respondents unanimously agreed on the factors hindering female enrolment in TVET programmes. Hypothesis 2 There is no significant difference in the mean responses of parents and teachers on the strategies for improving female enrolment into the electrical/electronics engineering trade in technical colleges of Adamawa State. The result in Table 4 reveals a z-value of 0.15 with a p-value of 0.88. Since the p-value is greater than the alpha level of the test (p > .05), the null hypothesis is not rejected. This means that there is no significant difference in the mean responses of parents and teachers on the strategies for improving the enrolment of female students into the electrical/electronics engineering trade in technical colleges of Adamawa State. Conclusion and Suggestion One of the priorities of the United Nations Educational, Scientific and Cultural Organization (UNESCO), as captured in the SDGs 2016-2030, is to mainstream gender equality and promote equity through TVET policies and programmes.
This is achievable only through policies on TVET that ensure that all youth and adults, including vulnerable and disadvantaged groups, have equal access to learning opportunities and skills development. Therefore, UNESCO integrates gender equality in and through national TVET systems in different ways, including gender-sensitive evaluations of TVET programmes. UNESCO also supports innovative means that seek to widen access to and participation in TVET for vulnerable and disadvantaged groups. This study revealed that females are one of the disadvantaged groups with respect to enrolment in the electrical/electronics engineering trade. It was observed that there is a yearly decline in enrolment into the programme in the technical colleges of Adamawa State, a development that has the potential to endanger the TVET programme and national development. Therefore, mechanisms should be put in place to adopt the strategies identified in this study to improve female enrolment into the electrical/electronics engineering trade, as investing in the education of girls brings high returns in terms of breaking cycles of poverty and the social vices girls might otherwise be lured into, thus aiding economic growth. Based on the findings of this study, the following suggestions were made. The Adamawa State government, in collaboration with government parastatals and non-governmental organizations (NGOs), should establish a sustainable financing scheme for female trainees of the electrical/electronics engineering trade in order to boost their interest in the programme. Provisions and favourable policies should be made to ensure a conducive environment for female trainees of the electrical/electronics engineering trade who want to practise upon graduation. Electrical/electronics engineering trade teachers, in partnership with the state government and NGOs, should organize awareness campaigns on the impact and benefits of female enrolment into the programme. These campaigns will also be an avenue to publicize the incentives that will be available to female enrolees in the programme. More female trainers should be recruited into the electrical/electronics engineering trade in order to encourage more female enrolment, as the female trainers will become role models to other girls with an interest in the trade. This will also help change the negative public view with regard to female enrolment into the programme.
5,086.2
2021-02-18T00:00:00.000
[ "Engineering", "Environmental Science", "Education" ]
The Cubical Cohomology Ring: An Algorithmic Approach A cohomology ring algorithm in a dimension-independent framework of combinatorial cubical complexes is developed with the aim of applying it to the topological analysis of high-dimensional data. This approach is convenient for the cup-product computation and is motivated, among other things, by interpreting pixels or voxels in digital images as cubes. The S-complex theory and so-called coreductions are adopted to build a cohomology ring algorithm that speeds up the algebraic computations. Introduction In the past two decades, homology and cohomology theory have gained vivid attention outside of the mathematics community, prompted by modern applications in sciences and engineering. The development of a computational approach to these theories is motivated, among others, by problems in dynamical systems [18], material science [5,8], electromagnetism [9,14], geometric modeling [10], image understanding and digital image processing [1,3,12,16,22]. Conversely, that development is enabled by progress in computer science. Although algebraic topology arose from applications and has been thought of as a computable tool since its early stage, practical implementations had to wait until the modern generation of powerful computers due to the complexity of the operations involved, especially in high-dimensional problems. Until recently, progress has mainly been achieved in the computation of homology. The software libraries CHomP [4] and RedHom [24] provide a systematic approach to computing the homology of topological spaces in arbitrary dimension. There are also implementations of homology algorithms for some specialized tasks, in particular GAP [11] and Dionysus [19]. The abundance of various homology algorithms and implementations is, at least in part, a consequence of the fact that the optimization techniques depend crucially on the type of input as well as the data structure chosen to represent it. Simplicial complexes constitute a common and historically well-justified method to represent topological spaces. However, in many applications, in particular in rigorous numerics of dynamical systems and in all types of raster graphics, a union of unit cubes in a cubical lattice provides the most natural way to represent sets. This leads to the concepts of a cubical set, a combinatorial cubical complex and cubical homology as introduced in [15]. These concepts should not be confused with either the singular cubical homology [17] or the notion of a cubical set in homotopy theory or algebra [2]. The cubical set in the sense of [15] may look very restrictive from the point of view of the general theory, but it is sufficiently broad in the context of applications and, most importantly, its rigidness allows for bitmap representations, which are extremely efficient due to the natural optimization of processors to perform operations on bitmaps. In this paper, we shall work in the framework of [15], extending the material on cubical homology to the dual cochain groups.
Cohomology theory, no less important than homology from the point of view of applications but intrinsically more difficult, had to wait longer for computer implementations. Whenever a mathematical model made it possible, as for example in the case of orientable manifolds, duality has been used to avoid working explicitly with cohomology. However, among the features distinguishing cohomology from homology is the cup product, which endows cohomology with a ring structure. The cup product is a difficult concept, which has been more challenging to make explicit enough for computer programs than homology or cohomology groups. Some significant application-oriented work on computing the cohomology ring of simplicial complexes has been done by Real et al. [13]. The notion of singular cubical homology and cohomology and a cubical cup product formula were first introduced in 1951 by Serre [25, Chap. 2]. However, we wish to emphasize that this pioneering approach is far from the combinatorial and application-driven spirit of our work, for three main reasons. First of all, singular cubes are defined as equivalence classes of all continuous functions from the standard cube [0, 1]^d to a given topological space. Secondly, Serre, in his original work, does not directly derive the algebraic properties of the singular cubical cohomology ring by arguments within his theory: he only refers to the isomorphism between singular and simplicial (co)homology. Finally, Serre's result on this topic is hidden as part of a highly theoretical work addressed to readers with a deep background in pure mathematics and beyond the reach of, for instance, most of the computer engineering community. This is why authors working on applications of cohomology to 3D digital images, e.g. [12,13], in the framework of 3D cellular cubical complexes tend to derive the needed cubical formulas from the simplicial theory rather than from Serre's work. Our philosophy is based on the observation that the combinatorial cubical complexes presented in [15] are a friendlier framework than the simplicial or singular setting for directly deriving explicit formulas, such as, for instance, the cup product formula, and for implementing them in dimension-independent algorithms. Let us recall the general definition of the cup product used in the standard literature on homological algebra [23]. Definition 1.1 The cup product ⌣ : H^p(X) × H^q(X) → H^{p+q}(X) is defined on cohomology classes of cocycles z^p and z^q by [z^p] ⌣ [z^q] := diag^*([z^p] × [z^q]), where [z^p] × [z^q] is the cohomology cross product and diag^* is the homomorphism induced by the diagonal map diag : X → X × X given by diag(x) := (x, x). An algorithm for the cup product based on formula (1) would require an implementation of the cross product and of the diag^* homomorphism, which is troublesome and would lead to an inefficient algorithm. The main goal of this paper is to provide an explicit formula for computing the cup product in the setting of cubical sets. As we will prove in Theorem 2.24, the computation of the cup product of two generating elementary cubes in dimension d reduces coordinate-wise to dimension one, and the respective formula in dimension one, given by Theorem 2.20, is straightforward to implement.
Such an elementary and easy-to-implement formula is possible because, in the context of cubical sets, the cochain cross product is simply the dual of the cubical product of chains. The concept of the cross product is much easier and more natural in the context of cubical sets than for simplicial or singular complexes, because the cartesian product of generating cubes is again a generating cube. This is not true for simplices. These considerations lead to Definition 2.16 of the cubical cup product in Sect. 2.3. In order to obtain Theorem 2.24, we need to derive an explicit formula for a chain map diag_# : C(X) → C(X × X) induced by the diagonal map. Actually, this task is more complex than it may seem at first glance, and Sect. 2.2 is devoted mainly to the related constructions. Note that the choice of a chain map is not unique; thus the correctness of the definition and the properties of the cup product are achieved at the cohomology level. These properties are discussed in Sect. 2.3. We end the section with an example illustrating the use of the explicit, coordinate-wise formula. Cubical sets arising from the large data sets present in applications are built of a huge number of generating elementary cubes. In order to benefit from the established formula in this context, one needs reduction algorithms which render the computation efficient. Thus the second goal of this paper, reached in Sect. 3, is to show that the techniques of S-reductions of S-complexes, successfully developed in [20,21] for the purpose of computing the homology of large cubical complexes, may be adapted to computing the cohomology ring of a cubical set. The terminology of S-complexes, S-reduction pairs, the coreduction algorithm and the concept of homology models are reviewed and adapted for cohomology. We finish the paper with Sect. 3.5, where computations via S-reductions are carried out and compared on two explicit examples of a still quite simple nature. The implementation of the methods presented in this paper, as well as numerical experiments, are in progress. In particular, the method of S-reductions for cohomology groups has been implemented in the Ph.D. thesis of P. Dłotko [6], and numerical experiments indicate the same efficiency of the implementation for cohomology groups as in the case of homology groups. Cubical Cohomology Groups Recall from [15, Chap. 2] that X ⊂ R^d is a cubical set if it is a finite union of elementary cubes, that is, of finite products Q = I_1 × I_2 × ... × I_d of elementary intervals, each of the form [a, a + 1] or the degenerate interval [a] with a ∈ Z. The dimension dim Q of an elementary cube Q is the number of non-degenerate intervals in its product expression, and the embedding number emb Q is d. The set of all elementary cubes in R^d is denoted by K(R^d) and those of dimension k by K_k(R^d). Those which are contained in X are denoted by K(X) and K_k(X), respectively. The group C_k(R^d) of cubical k-chains is the free abelian group generated by K_k(R^d), its canonical basis. For k < 0 and k > d, we set C_k(R^d) := 0. In the sequel, we identify the geometric elementary cube Q with the corresponding elementary cubical chain. We recall that the cubical cross product is defined on the canonical basis elements P ∈ K_p(R^n) and Q ∈ K_q(R^m) as the cartesian product P × Q and extended to all pairs of chains (c, c') by bilinearity. In [15], this operation is called the cubical product and given a separate notation in order to distinguish it from the cartesian product, but we abandon that notation here to emphasize its equivalence to the cross product in homological algebra.
Given any k ∈ Z, the cubical boundary map for some a ∈ Z.In the first case, ∂ 0 Q = 0.In the second case, we put For d > 1 decompose Q as Q = I × P , where emb I = 1 and emb P = d − 1, and put where p = dim I and q = dim P .The pair We refer to [15,Chap. 2] for the properties of the cubical cross product and cubical boundary maps, in particular for this one: As a consequence of the above formula, the cubical cross product induces an isomorphism of chain complexes (for the definition of the tensor product of chain complexes see e.g.[23,Chap. 7]).For any c, d ∈ C p (R d ), the notation c, d is used for the scalar product defined on the elements P , Q of the canonical basis K p by The support |c| of c is the union of all Q ∈ K p such that c, Q = 0. Given a cubical set X ⊂ R d , the cubical chain complex of X denoted by C(X) is the restriction of C(R d ) to the chains c whose support is contained in X.We refer to [15] for the properties of cubical chain complexes and for the computation of their homology. Definition 2.2 Let X ⊂ R d be a cubical set.The cubical cochain complex (C * (X), δ) is defined as follows.For any k ∈ Z, the k-dimensional cochain group Note that C k (X) is the free abelian group generated by the dual canonical basis {Q |Q ∈ K k (X)} where The notation Q , for the dual of Q, is aimed to be distinct from H * for the cohomology functor. The kth cohomology group of X is the quotient group Definition 2. 4 The cubical cross product of cochains c p ∈ C p (X) and c q ∈ C q (Y ) is a cochain in C p+q (X × Y ) defined on any elementary cube R × S ∈ K p+q (X × Y ), where R ∈ K(X) and S ∈ K(Y ), as follows: We easily check the following. Proposition 2.5 The cross product of cochains is a bilinear map.Moreover, for P ∈ K p (X), Q ∈ K q (Y ), P ×Q = (P ×Q) . Algebraic properties of the cubical product on cubical chains derived in [15, Sect.2.2] readily extend to the cross product on cochains, in particular the following.Proposition 2.6 If c p × c q ∈ C p+q (X), then δ c p ×c q = δc p ×c q + (−1) p c p × δc q . Constructing Chain Maps The most important step towards an explicit formula for the cup product is the construction of a homology chain map diag # : C(X) → C(X × X) induced by the diagonal map diag : X → X × X given by diag(x) := (x, x).This is done using the construction presented in [15,Chap. 6].We briefly outline that construction. Given an elementary cube Q is Q with all its proper faces removed.Given cubical sets X and Y , a multivalued cubical map F : X − → → Y is a map from X to the set of subsets of Y such that The Chain Selector Theorem [15,Theorem 6.22] affirms that, if such a map is lower semicontinuous and has non-empty acyclic values (that is, H * (F (x)) = 0 for all x ∈ X), then it admits a chain selector, that is, a chain map ϕ : C(X) → C(Y ) with the properties Any two such chain selectors are chain homotopic [15,Theorem 6.25], so they give rise to the same map in homology.If a continuous map f : X → Y admits an acyclicvalued representation, that is, a lower semicontinuous cubical map F with the property f (x) ∈ F (x), the homology map of any chain selector ϕ of F is the map H * (f ) induced in homology by f [15,Proposition 6.56].The idea behind this construction is illustrated in Fig. 1. In general, computing a map induced in homology may be hard, because an acyclic-valued representation may not always exist and one has to apply a process of rescaling [15,Sect. 6.4.2].However, for the purpose of this paper, we do not need Fig. 
1 The graph of a continuous map f on an interval (displayed by a smooth curve), its representation F (displayed by shaded rectangles), and a symbolic display of a chain selector: The circles indicate pairs of vertices while the line segments connecting them can be used to localize, on the ordinate, the images of the corresponding edges under the chain map ϕ 1 rescaling, because all considered maps admit the acyclic-valued minimal representations.Recall from [15, Proposition 6.33 and Definition 6.34] that the minimal representation of a continuous map f : X → Y is a lower semicontinuous cubical map F : X − → → Y defined by where, given any set A ⊂ X, the closed hull ch(A) is the smallest cubical set containing A. In the case A = {x}, the set ch({x}) is an elementary cube denoted, for short, by ch(x).In the case of our diagonal map diag : X → X × X, its minimal representation Diag : X − → → (X × X) is given by Given chain maps ϕ : where the right-hand side is the cubical cross product of chains defined in Sect.2.1. Lemma 2.7 Let X 1 , X 2 , Y 1 , Y 2 be cubical sets and let f : continuous maps which admit acyclic-valued representations F , G. Let ϕ and ψ be the chain selectors of F and, respectively, G. Proof The statement in (a) is straightforward because the product of acyclic sets is acyclic.To prove (b), we show first that ϕ ⊗ ψ is a chain map.Given any It remains to check the chain selector conditions (6.12), (6.13) in [15,Theorem 6.22].First, we have Finally, for any vertex The following statement easily follows from the definitions. Proposition 2.8 Let X 1 ⊂ R n and X 2 ⊂ R d−n be the images of a cubical set X ⊂ R d under its projections onto, respectively, the first n and the complementary d Theorem 2.9 Let X, Y be cubical sets and let λ : X × Y → Y × X be the transpose given by λ(x, y) := (y, x). Then λ # is a chain selector of Λ. Proof The statement (a) is a simple check of the definitions.For (b), the conditions (6.12), (6.13) in [15,Theorem 6.22] follow immediately from the definitions, so it remains to check that λ # is a chain map, that is, it commutes with the boundary map. On the one hand, we have On the other hand, we have From Proposition 2.8 and Theorem 2.9 we derive the following corollary. Here are dual statements of Theorem 2.9 and Corollary 2.10 for cochains. Corollary 2.11 Let λ : X × Y → Y × X be the transpose defined in Theorem 2.9. If dim P = p or dim Q = q, then both sides vanish. Corollary 2.12 Let τ : be the permutation discussed in Corollary 2.10.The map induced by τ # is given on products of duals of ) by the formula Let X ⊂ R d be a cubical set.We proceed to the construction of a homologyrepresentative chain map diag # : C(X) → C(X × X) induced by the diagonal map.It is straightforward to see that the diagonal map admits an acyclic-valued representation Diag : X − → → (X × X) given by Diag(x) The construction of a chain selector proceeds by induction on d = emb X. Fig. 2 Illustration of two choices of chain maps induced by the diagonal map in dimension one.The path "up and right" shows the one defined in (7), while the path "right and up" shows the one in ( 8) For k / ∈ {0, 1}, we must have diag k = 0.The formula (7) is illustrated in Fig. 2. In order to show that our formula defines a chain map we are looking for, we need to make two observations.Lemma 2.13 Let emb X = 1.The map diag # defined by ( 6) and ( 7) is a chain selector for Diag. 
Proof The conditions [15, Theorem 6.22, (6.12), (6.13)] follow immediately from the definitions, so it remains to check that diag # is a chain map.Since ∂ 0 = 0 and On the other hand, leads to a different chain selector of Diag and a different definition of a cup product on cochains.Note that the two choices are homologous since These ideas are shown in Fig. 2. Induction Step Suppose that diag # is defined for cubical sets of embedding numbers n = 1, . . ., d −1 and let us construct it for a cubical set X of the embedding number d. where τ : R 2d → R 2d is the permutation of coordinates which transposes the (d +1)st coordinate with the preceding d − 1 coordinates.The formula for the chain maps induced by τ is provided by Corollary 2.10.Consider the images X 1 ⊂ R and X 2 ⊂ R d−1 of X under its projections of, respectively, the first and the complementary d − 1 coordinates in R d .Let diag X 1 and diag X 2 be the diagonal maps defined, respectively, on X 1 and X 2 .Then the diagonal map on X is the composition where j is the inclusion map discussed in Proposition 2.8.Note that τ takes values in defined by the formula where ) is defined by formula (5), and Proof We first check that the composition of the minimal representations of the maps involved in formula (9) and described in the series of preceding lemmas produces formula (4) for Diag.This implies, in particular, that the composition has acyclic values contained in X × X.Let x = (x 1 , x 2 ) ∈ X, with x i ∈ X i , and have ch(y) ⊂ Q and the equality holds when y ∈ We have and the inclusion becomes equality when y ∈ • Q.Thus, by Lemma 2.7(a), we have the inclusion and the two sets are equal when y ∈ • Q.From this, we get Using Corollary 2.10 and arguing as above, we get Hence the map Diag is equal to T • Diag X 1 × Diag X 2 • J with the range restricted to the image X × X.It follows that the composition of the corresponding chain selectors has the image in C(X × X).Finally, it follows from [15,Corollary 6.31] that this composition is a chain selector for Diag. The Cubical Cup Product Definition 2. 16 Let X be a cubical set.The cubical cup product of cochains c p and c q is defined by the formula In particular, for Q ∈ K p+q (X) The distributive law for the cup product on cochains follows immediately from Definition 2.16, the linearity of diag p+q , and the distributive law for the cross product.The proof of the associativity law and the identification of the unit element will be easier once we have established an explicit formula for the cup product.Definition 2.16 is also suitable for proving the boundary properties, and the graded commutative law: Theorem 2.17 (Boundary Properties of the Cup Product) (a) δ(c p c q ) = δc p c q + (−1) p c p δc q .(b) If z p ∈ Z p (X) and z q ∈ Z q (X), then z p z q ∈ Z p+q (X).(c) If x p , y p ∈ Z p (X), x p − y p ∈ B p (X), and z q ∈ Z q (X), then x p z q − y p z q ∈ B p+q (X) and z q x p − z q y p ∈ B p+q (X). (b) is an immediate consequence of (a).(c) Let x p − y p = δw.It follows from (a) that x p − y p z q = δw z q = δ w z q because δz q = 0. Hence x p z q − y p z q ∈ B p+q (X).The second equation follows by the same argument. The property (a) in Theorem 2.17 is the analogy of the boundary property of the cubical cross product defined in [15, Sect.2.2].The property (b) asserts that the cup product sends cocycles to cocycles, and by (c) that it does not depend on a representative of a cohomology class.Thus, we get the definition: Definition 2. 
18 The cup product : H p (X) × H q (X) → H p+q (X) is defined on cohomology classes of cocycles as follows: The distributive law (12) extends easily to cohomology classes.The following property holds only for cohomology classes.Theorem 2.19 (Graded Commutative Law) If z p and z q are cocycles, then z q z p = (−1) pq z p z q . Proof Let λ be the transpose defined in Theorem 2.9, applied for X = Y .Let Q ∈ K(X).On the one hand, we have z p z q , Q = z p ×z q , diag p+q (Q) . We can now derive an explicit formula, suitable for computations, for the cubical cup product on cochains.By the distributive law (12), it is sufficient to present a formula for generators P ∈ K p (X) and Q ∈ K q (X).The formula is developed recursively with respect to d = emb X and presented in three stages, the first one for the case d = 1, the second one for the recursion step, and the last one is the final recursion-free coordinate-wise formula. Theorem 2.20 Let X be a cubical set in R and let In particular, P Q is either zero, or a dual of an elementary interval. We reach the conclusion by expressing the values in terms of elementary intervals [a] = 0, we see that the graded commutative law in Theorem 2.19 does not hold for the cup product on the level of chain complexes. Theorem 2. 22 Let emb X = d > 1, and suppose that the formula for is given for cochains on cubical sets of embedding numbers n = 1, . . ., d − 1.Consider the decomposition of elementary cubes Q 2 be computed using the induction hypothesis.Then , where emb R 1 = 1 and emb R 2 = d − 1.Let p = dim R 1 and q = dim R 2 .Note that p + q = k.By Lemma 2.15 we get Since π k is a projection, its dual map π k is an injection which extends any element of C (X × X) as zero on elementary cubes in K By Proposition 2.8, Corollary 2.12 and Lemma 2.7, we get with a non-trivial value assumed if and only if |x| = R 1 and |y| = R 2 .However, R ∈ K(X) and the conclusion follows. The following example illustrates the need for considering the alternative |x×y| / ∈ K(X) in the formula for P Q in Theorem 2.22. Example 2.23 Let However, if X = P ∪ Q, we get P Q = 0. We now pass to a coordinate-wise formula.Let P and Q be as in Theorem 2.22.Consider their decompositions to products of intervals in R: We get the following. Theorem 2.24 Let d = emb(X) > 1. With the above notation for elementary cubes provided the right-hand side is supported in X, and P Q = 0 otherwise. Proof We derive the formula from Theorem 2.22 by induction on the embedding number d > 1. Let d > 2. We apply Theorem 2.22 with P 1 = I 1 , P 2 = P 1 , Q 1 = J 1 , and Note that the support of the cross product on the right-hand side of ( 14) is the cartesian product of the supports of the terms: Therefore, the support of the right-hand side of ( 14) is non-empty and contained in X if and only if one of x × y in Theorem 2.22 is non-empty and contained in X.If this is not the case, both formulas give the cochain 0. Assume the non-trivial case.Then the support of P 1 Q 1 is non-empty and contained in X 2 .By induction hypothesis, where Let X be a cubical set embedded in R d .We define the weight of a cochain c p in X by By bilinearity, the computation of c p c q for two cochains c p , c q in X reduces to finding w(c p )w(c q ) cup products of generating elementary cubes.Since it follows easily from Theorem 2.24 that the cost of finding the cup product of two generating elementary cubes is O(d 2 ), we obtain the following. 
Corollary 2.25 The computational complexity of evaluating c p c q is O(d 2 mn) where m is the weight of c p and n is the weight of c q . Example 2.26 We illustrate the cup-product formula for the cubical torus 2 is the boundary of the square.Since it is hard to draw pictures in R 4 , we parameterize Γ 1 by the interval [0, 4] with identified endpoints 0 ∼ 4, which permits visualizing T as the square [0, 4] 2 with pairs of identified boundary edges, as shown in Fig. 3. Consider the cocycle x 1 generated by the sum of four solid line vertical edges with [2,3] at the second coordinate, and y 1 by the sum of solid line horizontal edges with [1,2] at the first coordinate.Only the edges of the parametric square [1, 2] × [2, 3] may contribute to non-zero terms of x 1 y 1 .Thus, using Theorem 2.20 and Theorem 2.24, The cohomology classes of cochains x 1 and y 1 generate H 1 (T ), and where We are now ready to prove the remaining ring properties for the cubical cup product. Theorem 2.27 Let X be a cubical set.The cup product on cubical cochains is associative, that is, The cochain given by 1 C 0 (X) := V ∈K 0 (X) V is the unit element, that is, A fortiori, these formulas are valid for cohomology classes. Proof By the distributive law (12), it is sufficient to work with generators P ∈ K p (X), Q ∈ K q (X), and R ∈ K r (X).Let d = emb(X). The unit element property easily follows from Theorem 2.20 in the case d = 1 and from Theorem 2.24 in the case d > 1. We prove the associativity by induction on d.When d = 1, a routine verification of the formula in Theorem 2.20 shows that (P Q ) R = 0 if and only if there exists a ∈ Z such that the triple (P , Q, R) takes one of the forms In the first case (P Q ) R = [a] and in the remaining cases it is [a, a + 1].The same is verified for P (Q R ).Let now d > 1 and suppose the conclusion is true for the embedding numbers smaller than d. where the first component of each elementary cube is in R and the second one in R d−1 .Using Theorem 2.20 and Theorem 2.22, we prove that where provided the displayed cross product is non-trivial and supported in X, and it is 0 otherwise.Indeed, assume the non-trivial case.The first application of Theorem 2.22 gives By Theorem 2.20, Let S = S 1 × S 2 .By hypothesis, S ∈ K(X).Hence (17) can be written as Another application of Theorem 2.22 gives We obtain (15) by combining (18) with (19) and passing (−1) γ inside the cross product term containing S 2 .Analogously, where with the same condition on the support.By the induction hypothesis, the expressions inside the square brackets in equations ( 15) and ( 20) are equal.In particular their supports are equal, so we may assume that both supports are non-empty and contained in X.It remains to show that (−1) α = (−1) β .By Definition 2.16, hence the conclusion follows.By Definition 2.18, the last statement on extension to the cohomology classes is obvious. The algebraic properties listed in (12), Theorem 2.27 and in Theorem 2.19 are referred to as the graded ring properties.Thus we arrived at the key definition: Definition 2.28 Let X be a cubical set.The cubical cohomology ring of X is the graded abelian group H (X) with the graded multiplication given by the cup product. It is known that the ring structure introduced in Definition 2.28 may be used to distinguish non homeomorphic spaces even if their homology and cohomology groups are isomorphic.This is shown in the example presented in Sect.3.5. 
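The coordinate-wise reduction of Theorem 2.24 makes the cup product of duals of elementary cubes straightforward to compute. The sketch below works over Z_2 coefficients, so the sign factor of Theorem 2.24 can be ignored (cf. Remark 2.29 on ring coefficients). Since the one-dimensional formula of Theorem 2.20 is not fully legible above, the front-face/back-face rule used in cup_1d is an assumption; it is, however, consistent with Example 2.26, whose contributing term is reproduced at the end of the sketch. The encoding of intervals and cubes, and the function names, are ours.

```python
# Elementary intervals are encoded as tuples: (a, a) is the degenerate
# interval [a], (a, a + 1) is the non-degenerate interval [a, a + 1].
# An elementary cube is a tuple of such intervals; a cubical set X is
# given as the set of its elementary cubes, closed under taking faces.

def cup_1d(I, J):
    """Assumed one-dimensional cup product of duals of elementary intervals.

    Front-face/back-face rule: nonzero only for
      [a]* cup [a]*         = [a]*
      [a]* cup [a, a+1]*    = [a, a+1]*
      [a, a+1]* cup [a+1]*  = [a, a+1]*
    Returns the supporting elementary interval, or None for the zero cochain.
    """
    a0, a1 = I
    b0, b1 = J
    if a0 == a1 and b0 == b1 and a0 == b0:       # vertex cup vertex
        return I
    if a0 == a1 and b1 == b0 + 1 and a0 == b0:   # vertex cup edge (front face)
        return J
    if a1 == a0 + 1 and b0 == b1 and b0 == a1:   # edge cup vertex (back face)
        return I
    return None

def cup(P, Q, X):
    """Cup product of the duals of elementary cubes P and Q in the cubical
    set X, over Z_2.  Returns the supporting elementary cube, or None."""
    if len(P) != len(Q):
        raise ValueError("P and Q must have the same embedding number")
    factors = []
    for I, J in zip(P, Q):                       # coordinate-wise reduction
        R = cup_1d(I, J)
        if R is None:
            return None
        factors.append(R)
    R = tuple(factors)
    return R if R in X else None                 # must be supported in X

# X: the closed square [1,2] x [2,3] (the square, its 4 edges, its 4 vertices).
X = {((1, 2), (2, 3)), ((1, 1), (2, 3)), ((2, 2), (2, 3)),
     ((1, 2), (2, 2)), ((1, 2), (3, 3)), ((1, 1), (2, 2)),
     ((1, 1), (3, 3)), ((2, 2), (2, 2)), ((2, 2), (3, 3))}
P = ((1, 1), (2, 3))   # the edge [1] x [2,3], a term of x^1 in Example 2.26
Q = ((1, 2), (3, 3))   # the edge [1,2] x [3], a term of y^1 in Example 2.26
print(cup(P, Q, X))    # -> ((1, 2), (2, 3)), the square carrying x^1 cup y^1
```

The membership test at the end of cup corresponds to the requirement in Theorems 2.22 and 2.24 that the right-hand side be supported in X.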
Remark 2.29 All what we have done until now can be extended to chain complexes C(X; R) := C(X) ⊗ R with coefficients in a ring with unity R, which are graded modules over R.This gives rise to the cohomology ring H * (X; R).We have initially chosen coefficients in Z for the sake of clarity and, in particular, to avoid confusion between two rings, R and the graded cohomology ring.However, we introduce ring coefficients in the next section because, for computational purposes, it is often convenient to choose coefficients in the finite field R = Z p , p a prime number.Field coefficients are sufficient in many practical applications. Computing Cohomology The aim of this section is to show that the techniques of S-reductions of S-complexes developed in [20,21] in order to construct efficient algorithms computing homology of cubical complexes may be easily adapted to provide algorithms computing the cohomology ring of a cubical set. S-Complexes Let R be a ring with unity and let S be a finite set.Denote by R(S) the free module over R generated by S. Let (S q ) q∈Z be a gradation of S such that S q = ∅ for all q < 0. Then (R(S q )) q∈Z is a gradation of the module R(S) in the category of modules over the ring R. For every element s ∈ S, the unique number q such that s ∈ S q is called the dimension of s and denoted by dim s.We use the notation •, • : R(S) × R(S) → R for the inner product which is defined on generators by t, s = 1 for t = s, 0 otherwise, and extended bilinearly to R(S) × R(S). We recall (see [20,21]) that a pair (S, κ), where κ : S × S → R is a map such that κ(s, t) = 0 unless dim s = dim t + 1, is called an S-complex, if the pair (R(S), ∂ κ ) is a free chain complex with base S and the boundary map ∂ κ : R(S) → R(S) is defined on generators s ∈ S by The homology of an S-complex (S, κ) is the homology of the associated chain complex (R(S), ∂ κ ), denoted H (S, κ) or simply H (S). The elements of R(S) are called chains. Any cubical set X ⊂ R d discussed in Sect. 2 defines an S-complex (S, κ), where S = K(X) is the set of all elementary cubes of X and κ(Q, P ) := ∂Q, P , ∂ the cubical boundary map.Its chain complex (R(S), ∂ κ ) is equal to C(X; R) = C(X)⊗R, the cubical chain complex of X with coefficients in R. Let R (S) := Hom(R(S), R) be the group of cochains.The coboundary map defined as the dual δ κ := (∂ κ ) satisfies for duals of generators t ∈ S.Moreover, for any pair of a chain c ∈ R(S q ) and a cochain d ∈ R (S q−1 ) we have The cohomology of the cochain complex (Hom(R(S), R), δ κ ) is called the cohomology of the S-complex and denoted H * (S). In the following, we will drop the superscript κ in ∂ κ and δ κ whenever κ is clear from the context. The technique of S-reductions consists of replacing the original set of generators S by a subset S ⊂ S, and the original coincidence index κ by the restriction κ := κ| S ×S .This has to be done in such a way that (S , κ ) is still an S-complex, and the (co)homology does not change.A subset K ⊂ K is an S-subcomplex of the S-complex K if (K , κ ), with κ := κ| K ×K , the restriction of κ to K × K , is itself an S-complex, i.e. if (R[K ], ∂ κ ) is a chain complex.Note that the concept of an S-subcomplex is not the same as the chain subcomplex (see [7,Example 1]). 
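As stated above, any cubical set X defines an S-complex with S = K(X) and κ(Q, P) = ⟨∂Q, P⟩. Over Z_2 the incidence data reduce to the sets of codimension-one faces, which are easy to generate from the interval encoding used in the cup-product sketch above; the helper names below are ours. The resulting dictionary is the input format used by the coreduction sketch further below.

```python
def faces(cube):
    """Codimension-one faces of an elementary cube (its Z_2 boundary).

    cube : tuple of intervals (a, a) or (a, a + 1), as in the cup-product sketch.
    """
    result = set()
    for i, (a0, a1) in enumerate(cube):
        if a1 == a0 + 1:                          # non-degenerate coordinate
            for v in (a0, a1):
                result.add(cube[:i] + ((v, v),) + cube[i + 1:])
    return result

def s_complex(cubes):
    """Boundary dictionary of the S-complex generated by a set of elementary
    cubes over Z_2 (the set is assumed to be closed under taking faces)."""
    return {Q: faces(Q) for Q in cubes}
```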
Two important special cases of S-subcomplexes are the closed and open subset of an S-complex.In order to define these concepts we introduce the following notation for any subset A ⊂ S: Therefore, there is a well defined restriction Similarly, if K is open in K, then there is a well defined quotient complex (R[K]/R[K \ K ], ∂ ) with the boundary map ∂ taken as the respective quotient map of ∂ κ .From the computational point of view it is worth to observe that the quotient complex is isomorphic to the S-complex (K , δ κ ) where K = K \ K and κ = κ |K ×K (cf.[20]). The following theorem is a straightforward extension of Theorem 3.4 in [20] to cohomology.Theorem 3.1 Let (S, κ) be an S-complex over the ring R, S ⊂ S a closed subset and S := S \ S the associated open subset.Then we have the following long exact sequence of homology modules: and the following long exact sequence of cohomology modules: in which ι * : H q (S ) → H q (S) and ι * : H q (S) → H q (S ) are induced by the inclusion ι : R(S ) → R(S), whereas π * : H q (S) → H q (S ) and π * : H q (S ) → H q (S) are induced by the projection π : R(S) → R(S ). S-Reduction Pairs and the Coreduction Algorithm Let (S, κ) be an S-complex.A pair (a, b) of elements of S is called an S-reduction pair if κ(b, a) is invertible and either cbd S a = {b} or bd S b = {a}.In the first case the S-reduction pair is referred to as an elementary reduction pair and in the other case as an elementary coreduction pair. Arguing as in the proof of [20,Theorem 4.1] we obtain the following theorem.[20,Algorithm 6.1].The same algorithm without any changes may be used to speed up computation of cohomology modules.The algorithm consists of performing as many S-reductions as possible before applying the general Smith diagonalization algorithm to the reduced S-complex in order to compute the homology or cohomology module.To make it useful, one needs to find as many S-reduction pairs as feasible.In the case of simplicial complexes and cubical complexes it is straightforward to provide examples which admit elementary reduction pairs, but elementary coreduction pairs are not possible right away.However, it is easy to observe that by removing a vertex one obtains an open subcomplex which admits elementary coreduction pairs.Moreover, the homology of this subcomplex coincides with the reduced homology of the original complex and the cohomology of this complex coincides with the reduced cohomology of the original complex.Therefore, not only elementary reduction pairs, but also elementary coreduction pairs are useful when computing the homology or cohomology of simplicial or cubical complexes. If the reduced S-complex is small when compared to the original S-complex then the coreduction algorithm is fast, because the reduction process is linear whereas the Smith diagonalization algorithm is supercubical.In fact, numerical experiments indicate that elementary coreduction pairs provide essentially deeper reductions than the elementary reduction pairs, and the speed up is essential.For details we refer the reader to [20,Sect. 5]. Homology Models The Smith diagonalization algorithm applied to the reduced S-complex enables computing the cohomology module of the original S-complex up to isomorphism.In order to compute the cohomology ring of a cubical set it is not sufficient to have the cohomology generators in a reduced S-complex.It is necessary to construct the cohomology generators in the original cubical set. 
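The coreduction step itself is simple to code because, as noted above, an S-reduction only restricts the coincidence index to the remaining generators, so over Z_2 no coefficient updates are needed. The minimal sketch below takes a boundary dictionary in the format built above, removes one vertex first so that elementary coreduction pairs become available (thereby computing reduced (co)homology, as discussed above), and omits the bookkeeping of the chain maps needed to transport generators between the original complex and its reduction.

```python
from collections import deque

def coreduce(boundary, seed_vertex):
    """Perform elementary coreductions on an S-complex over Z_2.

    boundary    : dict mapping each generator to the set of generators in its
                  boundary (Z_2 coefficients, so a set of faces suffices).
    seed_vertex : a 0-dimensional generator removed first, so that elementary
                  coreduction pairs (b with a single remaining face a) appear.
    Returns the set of surviving generators.
    """
    remaining = set(boundary)
    coboundary = {s: set() for s in remaining}
    for s, bd in boundary.items():
        for t in bd:
            coboundary[t].add(s)

    remaining.discard(seed_vertex)
    queue = deque(coboundary[seed_vertex])        # candidates for coreduction pairs

    while queue:
        b = queue.popleft()
        if b not in remaining:
            continue
        bd = boundary[b] & remaining
        if len(bd) != 1:                          # not (yet) a coreduction pair
            continue
        (a,) = bd                                 # bd b = {a}: remove the pair (a, b)
        remaining.discard(a)
        remaining.discard(b)
        queue.extend(coboundary[a] | coboundary[b])   # boundaries that just shrank

    return remaining

# Toy example: the interval [0, 2] subdivided into two edges; this boundary
# dictionary is what s_complex from the previous sketch produces for it.
boundary = {
    ((0, 0),): set(), ((1, 1),): set(), ((2, 2),): set(),
    ((0, 1),): {((0, 0),), ((1, 1),)},
    ((1, 2),): {((1, 1),), ((2, 2),)},
}
print(coreduce(boundary, ((0, 0),)))   # -> set(): the interval is acyclic
```

A full implementation would, of course, also record each removed pair so that the maps ψ_ω and ι_ω of the homology model can be reconstructed and used to transport (co)homology generators, as discussed in the following subsections.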
Since mutually inverse chain equivalences induce isomorphisms in cohomology, we get the following corollary.As we already mentioned, the coreduction algorithm consists of performing a sequence of reductions.A reduction sequence of an S-complex (S, κ) is a sequence of pairs ω = {(a i , b i )} i=1,2,...,n in S such that (a i , b i ) is a reduction pair in (S i−1 , κ i−1 ), where the S-complexes (S i , κ i ) are defined recursively by taking We then use the notation S ω for the last chain complex in the sequence of S-complexes {S i } i=1,2,...,n and call this S-complex the ω-reduction of S. A homology model of S is an ω reduction S ω together with the chain equivalences. In order to discuss the benefits of constructing a homology model of an S-complex S let us define first the weight of S by w(S) := max max(card bd s, card cbd s) | s ∈ S . The construction of a homology model of an S-complex S may be performed in time O(w(S) card S) (see [20,Theorem 6.2]).In particular, in the case of cubical sets of fixed embedding dimension the homology model construction takes linear time.When the ω-reduction of S is small relative to S, one can profit from the homology model whenever homology generators and/or a decomposition of a homology class on the generators are needed.To construct the generators of H (S) one constructs the generators of H (S ω ) and transports them to H (S) via the map ι ω , i.e. computes their image in ι ω .The computational complexity of this computation is O(w(S) card S) (see [21,Theorem 3.1]).To decompose a homology class in H (S) one transports the class via the map ψ ω to H (S ω ) and finds the decomposition there.See [21,Sect. 3.1] for details. Homology Models for Cohomology Precisely the same method may be used to speed up the construction of the cohomology generators in H * (S) and the same complexity analysis applies to this case.One only uses the dual (ι ω ) of ι ω to transport the cochains in the S-complex to its ω reduction and the dual (ψ ω ) of ψ ω to transport the cochains in the ω-reduction back to the original S-complex.However, the transport requires an analogue of Theorem 3.4 for the duals ψ and ι .For this it is convenient to make the following convention.If T is an S-subcomplex of the S-complex S and t is a generator in T then dual of t in T is the restriction to T of the dual of t in S. Since the dual of t in S is always zero on S \ T , it is convenient to identify both duals and denote them by the same symbol t .Using this convention and the setting of Theorem 3.4 we have the following theorem. Theorem 3.6 The duals of the chain maps ψ and ι are given by Proof It is enough to verify the formulas on generators.Let t ∈ S and let c ∈ R(S). Surprisingly, there are even more benefits from the homology model for cohomology computations than for homology computations.This is because of the following theorem. Theorem 3.7 Assume ω is a reduction sequence of an S-complex consisting only of elementary coreduction pairs.Then (ψ ω ) is an inclusion R S ω → R (S), that is, (ψ ω ) (c) = c for any c ∈ R (S ω ). Proof If (a, b) is an elementary coreduction pair, then bd b = {a}.Since a ∈ S = S \ {a, b}, we have b , δc = ∂b, c = 0 for any c ∈ R( S).Therefore, (ψ (a,b) ) (c) = c for any c ∈ R ( S).Since the reduction sequence ω consists only of elementary coreduction pairs, the conclusion follows. 
Note the following consequence of Theorem 3.7. In the case of a reduction sequence ω consisting only of elementary coreduction pairs there is no need to transport the cohomology generators from the ω-reduction back to the original S-complex: the cohomology generators constructed in the ω-reduction are already the cohomology generators in the original S-complex. This is particularly useful for computing the ring structure of a cubical set X, because we can apply formula (14) directly to the cohomology generators in the ω-reduction.

Computational Example

Observe that the set X considered here is a cubical subset of R³ homeomorphic to S² ∨ S¹ ∨ S¹ (see Fig. 4, top left). The coreduction algorithm described above is applied to X and to the cubical torus T of Example 2.26. Equations (26) and (27) show that the cohomology rings of X and T are different. In this simple case it is possible to make the necessary computations by hand. However, one may have two cubical sets, homeomorphic respectively to X and T, whose representations consist of millions of cubes. Such cubical sets often result from rigorous numerics of dynamical systems, data analysis or image analysis. The benefit of computing the ring structure via the cohomology model is then evident. This is visible even in the case of a simple rescaling (for the definition of rescaling see [15, Sect. 6.4.2]) of the cubical sets in our two examples (see Fig. 5).

By Lemma 2.15 the composition of maps on the right-hand side of (9) takes values in X × X. Let d = emb X > 1 and assume that the chain selector diag# of Diag is defined for cubical sets of embedding numbers less than d; consider the corresponding chain map.

Fig. 3 The graphical representation of the cubical torus discussed in Example 2.26. The solid-line vertical edges carry the cocycle x¹ and the horizontal ones the cocycle y¹. The shaded square carries x¹y¹.

Corollary 3.5 If (S, κ) is an S-complex over the ring R and (a, b) is an S-reduction pair in S, then the isomorphisms pointed out in Corollary 3.3 are induced by the chain maps defined in (22) and (23) for homology, and by their duals for cohomology.

Fig. 5 Rescaled wedge of the sphere and two circles from Fig. 4 (top left) and the result of a coreduction (top right). Rescaled cubical torus (bottom left) and the result of a coreduction (bottom right).

The cochain groups are C^k(X) := Hom(C_k(X), Z), where Hom(−, Z) is the functor assigning to any abelian group G the group of all homomorphisms from G to Z, called the dual of G. Elements of C^k(X) are called cochains and are denoted either by c^k, d^k or by c, d if we do not need to specify their dimension k. The value of a cochain c^k on a chain d_k is denoted by ⟨c^k, d_k⟩.

Theorem 3.2 Assume S is an S-complex and (a, b) is an S-reduction pair in S. If (a, b) is an elementary reduction pair then {a, b} is open in S. If (a, b) is an elementary coreduction pair then {a, b} is closed in S. Moreover, in both cases {a, b} is an S-subcomplex of S and H_*({a, b}) = H^*({a, b}) = 0.

Corollary 3.3 If (a, b) is an S-reduction pair in an S-complex S, then the homology modules H_*(S) and H_*(S \ {a, b}), as well as the cohomology modules H^*(S) and H^*(S \ {a, b}), are isomorphic.

Corollary 3.3 lies at the heart of the coreduction homology algorithm presented in [20, Algorithm 6.1].
10,613.4
2012-10-26T00:00:00.000
[ "Computer Science", "Mathematics" ]
Benchmarks of Generalized Hydrodynamics for 1D Bose Gases Generalized hydrodynamics (GHD) is a recent theoretical approach that is becoming a go-to tool for characterizing out-of-equilibrium phenomena in integrable and near-integrable quantum many-body systems. Here, we benchmark its performance against an array of alternative theoretical methods, for an interacting one-dimensional Bose gas described by the Lieb-Liniger model. In particular, we study the evolution of both a localized density bump and dip, along with a quantum Newton's cradle setup, for various interaction strengths and initial equilibrium temperatures. We find that GHD generally performs very well at sufficiently high temperatures or strong interactions. For low temperatures and weak interactions, we highlight situations where GHD, while not capturing interference phenomena on short lengthscales, can describe a coarse-grained behaviour based on convolution averaging that mimics finite imaging resolution in ultracold atom experiments. In a quantum Newton's cradle setup based on a double-well to single-well trap quench, we find that GHD with diffusive corrections demonstrates excellent agreement with the predictions of a classical field approach. Introduction.-Thestudy of dynamics of integrable and near-integrable quantum many-body systems has been a thriving area of research for more than a decade since the landmark experiments on relaxation in the quantum Newton's cradle setup [1] and in coherently split one-dimensional (1D) Bose gases [2].During this time, an in-depth understanding of the mechanisms of thermalization and emergent out-of-equilibrium phenomena within these systems has been developed [3][4][5][6][7][8].A recent breakthrough in this area has been the discovery of the theory of generalized hydrodynamics (GHD) [9,10] (for recent reviews, see [11][12][13]).This new theory is capable of simulating large-scale dynamics of integrable and nearintegrable systems across a significantly broader range of particle numbers and interaction strengths than those accessible using previous approaches [14][15][16].Because of its broad applicability, GHD is currently regarded as well on its way to becoming "a standard tool in the description of strongly interacting 1D quantum dynamics close to integrable points" [16]. 
In the years since its discovery, GHD has been rapidly developed to include diffusive terms [17][18][19][20][21][22], particle loss [23], calculations of quantum and Euler-scale correlations [24][25][26][27][28][29], as well as the incorporation of numerous beyond-Euler scale effects [30][31][32][33] (see also [34][35][36][37] in a special issue).Recently, GHD applied to a 1D Bose gas has been experimentally verified in a variant of the quantum Newton's cradle setup in the weakly interacting regime [15], and in a harmonic trap quench in the strongly interacting regime [16].In both cases, GHD provided an accurate coarse-grained model of the dynamics, exceeding conventional (classical) hydrodynamics.In addition to comparisons with experiments, GHD was benchmarked against other established theoretical approaches-most prominently for the 1D Bose gas and XXZ spin chain [9,10,14,15,24,27,33,[38][39][40][41][42].As the purpose of these initial benchmarks was to validate GHD, the typical dynamical scenarios considered were in regimes where GHD was expected to be a valid theory.In all such cases GHD demonstrated very good agreement with the alternative approaches.On the other hand, in scenarios involving, for example, short wavelength density oscillations due to interference phenomena (which are not captured by GHD), it was conjectured that GHD would nevertheless adequately describe spatial coarse-grained averages of the more accurate theories [14,15,32].More generally, it is of significant interest to scrutinize the performance of GHD by extending its benchmarks to a more challenging set of dynamical scenarios.This is important for understanding exactly how GHD breaks down when it is pushed towards and beyond the limits of its applicability. In this Letter, we systematically benchmark the performance of GHD for the 1D Bose gas in several paradigmatic out-of-equilibrium scenarios.In particular, we focus on the regime of dispersive quantum shock waves emanating from a localized density bump of the type explored recently in Ref. [43].We use an array of theoretical approaches, including finite temperature c-field methods, the truncated Wigner approximation, and the numerically exact infinite matrix product state (iMPS) method, spanning the entire range of interaction strengths, from the nearly ideal Bose gas to the strongly interacting Tonks-Girardeau (TG) regime.We also analyse the dynamics of a localized density dip which sheds grey solitons, hence benchmarking GHD in scenarios not previously considered.In doing so we address the question of how well GHD predictions agree with coarse-grained averaging of the results of the more accurate theoretical approaches.Additionally, we explore the dynamics of a thermal quasicondensate in a quantum Newton's cradle setup [15,44,45] using Navier-Stokes type diffusive GHD [17,20,46], and address the question of characteristic thermalization rates [19,44]. Expansion from a localized density bump.-Webegin our analysis by considering dispersive quantum shock waves of the type studied recently in Ref. [43]. 
More specifically, we first focus on the weakly interacting regime of the 1D Bose gas of N particles, and consider the dynamics of the oscillatory shock wave train generated through a trap quench from an initially localized perturbation on top of a flat background to free propagation in a uniform box of length L with periodic boundary conditions [47,48]. The weakly interacting regime is characterized by the Lieb-Liniger [49,50] dimensionless interaction parameter γ_bg = mg/ℏ²ρ_bg ≪ 1, defined with respect to the background particle number density, ρ_bg, where g > 0 is the strength of the repulsive contact interaction and m is the mass of the particles.

In our first example, we consider the case of a large total number of particles, N = 2000, and γ_bg = 0.01, so that the gas is in the Thomas-Fermi regime where the interaction energy per particle dominates the kinetic energy. We assume that the gas is initialized in the zero-temperature (T = 0) ground state of a dimple trap that results in the density profile of Eq. (1) given in Appendix A. At time τ = 0, the dimple trap is suddenly switched off, and we follow the evolution of the system in a uniform 1D trap. In Figs. 1(a) and (b), we show snapshots of the density profiles at different times, and compare the GHD results with those obtained using the mean-field Gross-Pitaevskii equation (GPE) and the truncated Wigner approximation (TWA), which incorporates the effect of quantum fluctuations ignored in the GPE [51]. The snapshot at τ = 0.00014, which corresponds to the onset of a shock formation due to a large density gradient, shows excellent agreement between GHD and the more accurate microscopic approaches. Such an agreement at early times is remarkable given that GHD, which is derived here at Euler scale [52], becomes formally exact only in the limit of infinitely large length and time scales [11,14,27].

Past this time, the GPE and TWA show the formation of an oscillatory shock wave train, which has been identified in Ref. [43] as a result of self-interference of the expanding density bump with its own background. The interference contrast in this regime is generally large, even though the quantum fluctuations present in the TWA approach cause a visible reduction in contrast compared with the mean-field GPE result. The GHD prediction, on the other hand, completely fails to capture the oscillations, as these occur on a microscopic lengthscale. The characteristic period of oscillations here (which we note are chirped) is given approximately by the healing length l_h = ℏ/√(mgρ_bg) (l_h/L = 0.0057), which is smaller than the width σ (σ/L = 0.02) of the initial bump and hence represents the shortest lengthscale of the problem in the bulk of the shock wave train. Thus, even though the local density approximation (required for GHD to be applicable to an inhomogeneous system in the first place) is valid for the initial Thomas-Fermi density profile, the failure of GHD at later times is expected since it is not supposed to capture phenomena on microscopic lengthscales, which emerge here dynamically.
FIG. 1. Dimensionless density profiles, ρL, of quantum shock waves in the 1D Bose gas, as a function of the dimensionless coordinate ξ ≡ x/L at different times τ ≡ tℏ/mL². In (a) we show the initial (τ = 0) and time-evolved (τ = 0.00014) profiles of a weakly interacting gas at zero temperature, for γ_bg = 0.01 and N = 2000 (with N_bg ≈ 1761 the number of particles in the background). Due to the symmetry about the origin, we only show the densities for ξ > 0. In (b), the time-evolved profile is shown at τ = 0.0007. Panel (c) demonstrates the results of finite resolution averaging of both GPE and TWA data from (b) and compares them with the same GHD result. Panel (d) shows the same system as in (b), but at finite temperatures, simulated using the stochastic projected GPE (SPGPE) [43]; the dimensionless temperature T here is defined as the ratio of the absolute temperature to the degeneracy temperature T_d = ℏ²ρ²_bg/2mk_B [50]. Panel (e) compares GHD predictions with exact diagonalization (ED) results in the TG regime (γ_bg → ∞) for N = 1000 (N_bg ≈ 884), at τ = 0.00004. In all examples, the initial profiles are characterized by the amplitude height β = 1 and dimensionless width of the bump σ = 0.02; see Appendix A for details.

Despite this failure, GHD clearly captures the average density of the oscillations for the fully formed shock wave train, similar to that shown in [13]. This is consistent with the analysis of Bettelheim [53], who showed that the Whitham approach, which allows one to write equations for averaged quantities in the oscillatory shock wave train, is equivalent to GHD in the semiclassical limit (c = mg/ℏ² → 0) of the Lieb-Liniger model [54,55]. This is also consistent with the expectation that GHD in an interfering region would correspond to a coarse-grained average density [14,15]. To quantitatively assess this expectation, we perform a type of convolution averaging that mimics the finite resolution of in-situ imaging systems used in quantum gas experiments (see Appendix B). As the imaging resolution is usually unable to resolve wavelengths on the order of the healing length (typically in the submicron range), one expects that such averaging will smear out the interference fringes seen in the GPE and TWA data, just as GHD implicitly does. In Fig. 1(c) we show the results of convolution-averaged density profiles performed on the GPE and TWA data of Fig. 1(b) and compare them with the same GHD curve. The level of agreement between all three curves is now remarkable, a result which was not a priori obvious for both GPE and TWA under this model of coarse-graining. This highlights the quantitative success of GHD in describing the dynamics on large scales despite interference or short-wavelength phenomena being present.

FIG. 2. Quantum shock waves at zero temperature for N = 50 particles (N_bg ≈ 44.03), over the entire range of interaction strengths. In all examples, the initial density profiles (not shown) closely match Eq. (1) in Appendix A, with β = 1 and σ = 0.02. In all panels, we show the GHD (dashed lines) and iMPS (full lines) results for the evolved density profiles at two time instances. In (a) there is no phase coherence beyond the mean interparticle separation (1/ρ_bg L ≈ 0.0227), whereas in (e) the shortest lengthscale that determines the characteristic period of oscillations is given by the width of the initial Gaussian bump σ (σ/L = 0.02), which is much smaller than the healing length l_h (l_h/L = 0.227).

In our second set of examples, shown in Fig.
1(d), we consider the same shock wave scenario, except now for a phase fluctuating quasicondensate at finite temperatures.Here, the effect of thermal fluctuations is expected to lead to a smearing of the interference contrast due to a reduced thermal phase coherence length in the system, l T = 2 ρ bg /mk B T [56][57][58].A well-established theoretical approach to model this is a c-field stochastic projected GPE (SPGPE) approach [59, 60] (see also [44,[61][62][63]), and we indeed observe such smearing in Fig. 1(d) [64], in addition to seeing the expected very good agreement of GHD with these c-field results. Our third example is shown in Fig. 1(e) and lies in the TG regime of infinitely strong interactions, γ bg → ∞.It further illustrates the same observation-that the performance of GHD improves with the loss of phase coherence in the system, wherein interference phenomena are suppressed.Here, we compare the predictions of GHD for the shock wave scenario at T = 0 with the results of exact diagonalization.In the TG regime, the system does not posses phase coherence beyond the mean interparticle separation 1/ρ bg , hence the absence of interference fringes in the evolution of a density bump whose initial width is larger than 1/ρ bg [43].Accordingly, we see very good agreement of GHD with exact diagonalization, ig-noring the small-amplitude density ripples that can be seen in the exact result.Such density ripples (which we note have different origin to Friedel oscillations) have been predicted to occur in the ideal Fermi gas by Bettelheim and Glazman [65] (see also [66]).By the Fermi-Bose mapping [67, 68], these same ripples should emerge in the TG gas, which we confirm here through exact diagonalization.However, their description lies beyond the scope of GHD as a large-scale theory [69]. The final set of examples for the evolution of a density bump is shown in Fig. 2. Here, we consider a range of interaction strengths, starting from very strong and going back [from (a) to (e)] to weak interactions, all at zero temperature and N = 50.We compare the GHD results with iMPS simulations, which are numerically exact at all interaction strengths [43].At this relatively low particle number, the strongly interacting regime displays Friedel oscillations which appear in the iMPS result and are, as expected, absent from the prediction of GHD.However, there is generally good agreement between GHD and iMPS at large scale.As the interaction strength is reduced, and hence the phase coherence of the gas increases, the Friedel oscillations disappear and interference fringes return, which now have period ∼ σ (with σ < l h ) since the gas is no longer in the Thomas-Fermi regime.The worst performance of GHD is observed for γ bg = 0.01, which lies in the nearly ideal (noninteracting) Bose gas regime for N = 50.In this regime, the local density approximation, intrinsic to GHD [14][15][16]50], is no longer valid even for the initial density profile, and we see that Euler-scale GHD breaks down both spatially and temporally, explaining the failure of GHD to agree with iMPS results even in the coarse-grained sense. In addition to considering the dynamics of a localized density bump, we have also analyzed evolution of an initial density dip in a uniform background.This scenario is known to shed a train of grey solitons in the mean-field GPE treatment [47,48,70], and the results of comparison of GHD simulations with those of GPE and TWA are presented in Appendix C. 
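The convolution averaging invoked above for Fig. 1(c), and applied again to the density-dip results of Appendix C, is simple to reproduce numerically. The sketch below follows the procedure of Appendix B: convolve the density profile with a normalized Gaussian impulse response of width w and then average over pixels of width Δ. The function name, grid handling and default values (w = 1 µm, Δ = 4.5 µm) are our own illustrative choices.

import numpy as np

def imaged_density(x, rho, w=1.0e-6, pixel=4.5e-6):
    # x     : uniformly spaced positions [m]
    # rho   : density profile on x [1/m]
    # w     : width of the Gaussian impulse response A(x) [m]
    # pixel : pixel size of the imaging system [m]
    dx = x[1] - x[0]
    # Gaussian impulse response, normalized so that its integral is 1
    kx = np.arange(-5 * w, 5 * w + dx, dx)
    kernel = np.exp(-kx**2 / (2 * w**2))
    kernel /= kernel.sum() * dx
    rho_conv = np.convolve(rho, kernel, mode="same") * dx
    # average the convolved profile over pixels of finite width
    npix = max(1, int(round(pixel / dx)))
    nbins = len(x) // npix
    x_pix = x[: nbins * npix].reshape(nbins, npix).mean(axis=1)
    rho_pix = rho_conv[: nbins * npix].reshape(nbins, npix).mean(axis=1)
    return x_pix, rho_pix

Applied to the GPE and TWA profiles, this smears out fringes on the scale of the healing length while leaving the large-scale envelope, which is the quantity GHD is expected to reproduce.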
The overall conclusions regarding the performance of GHD in this scenario are similar to those for a density bump, including good agreement of GHD with coarse-grained averages of GPA and TWA results in the soliton train region. Quantum Newton's cradle in a thermal quasicondensate.-Ourfinal scenario for benchmarking GHD is in a variant of the quantum Newton's cradle setup for a weakly interacting 1D Bose gas in the quasicondensate regime.Namely, we analyze the release from a symmetric double-well trap to a single-well harmonic trap of frequency ω, similar to the type utilized in Ref. [15].Here, we use the SPGPE to simulate collisional dynamics and eventual thermalization, as in Ref. [44], and for the sake of one-to-one comparison, we also simulate the same system using the Navier-Stokes type of diffusive GHD [17,46], solved using a secondorder backwards-implicit algorithm [18,38,71]. Comparison of the results using the two methods are shown in Fig. 3, where we illustrate the evolution of the density distribution [(a) -for diffusive GHD, and (c) -for SPGPE] over the initial few oscillations, as well as after sufficiently long time, when the system has already thermalized.In Ref. [72] we give further details of how the final relaxed states were assessed within GHD and the SPGPE, whereas here, in Figs. 3 (b) and (d), we simply show the respective relaxed density profiles, along with their corresponding thermal equilibrium profiles from Yang-Yang thermodynamics [50,[72][73][74], as well as density profiles at earlier times illustrating their contrast to the relaxed state.The overall conclusion here is that GHD demonstrates excellent agreement with SPGPE in both short-and long-term dynamics, as well as in the characteristic thermalization rate [75]. We have also simulated the quantum Newton's cradle experiment in the original Bragg pulse scenario [1], except in a weakly interacting quasicondensate regime.In this scenario, we observe different thermalization rates in GHD and SPGPE simulations, and we discuss these results and the reasons behind the discrepancy in the Supplementary Material [72]. 
Summary.-We have benchmarked GHD in a variety of out-of-equilibrium scenarios in a 1D Bose gas against alternative theoretical approaches which are not limited to long-wavelength excitations.In particular, we have focused on systems supporting dispersive quantum shock waves and soliton trains, demonstrating that GHD generally agrees with the predictions of these approaches at sufficiently high temperatures and strong interactions.Here, the good agreement stems from a reduced phase coherence length of the gas, which in turn leads to a suppression of interference phenomena and therefore an absence of high-contrast short-wavelength interference fringes in the density.At low temperatures and weak interactions, where interference phenomena are more pronounced, the predictions of GHD only agree with a coarse-grained convolution averaging approximation.The effect of such averaging is similar to having finite imaging resolution in quantum gas experiments, and explains why GHD may perform well when compared to experiments, whilst departing from the predictions of theoretical approaches that are valid at short wavelengths.We have also benchmarked Navier-Stokes GHD within a quantum Newton's cradle setup for a doublewell to single-well trap quench of a weakly interacting quasicondensate, observing excellent agreement with the SPGPE in both transient dynamics and final relaxed state, as well as in the characteristic relaxation timescale.K. V. K. acknowledges stimulating discussions with I. Bouchoule, M. J. Davis, and D. M. Gangardt.This work was supported through Australian Research Council (ARC) Discovery Project Grants No. DP190101515. Appendix A: Parametrization of the density bump.-Theinitial density profile in Fig. 1 (a), in dimensionless units, is set to where the dimensionless coordinate, time, and density are introduced, respectively, according to ξ ≡ x/L, τ ≡ t/mL 2 , and ρ(ξ, τ ) ≡ ρ(x, t)L, with ρ bg = ρ bg L = N bg being the dimensionless background density equivalent to the total number of particles in the background, 2σ )] from the normalization.In addition, the width and amplitude of the bump above the background are characterized by the dimensionless parameters σ ≡ σ/L and β > 0, respectively. The associated trapping potential that is required for preparation of such a density profile as an initial ground or thermal equilibrium state of the 1D Bose gas in different regimes is discussed in Ref. [43].Within the meanfield approximation, described by the Gross-Pitaevskii equation, the density profile of Eq. ( 1) corresponds to the mean field amplitude being initialized as a simple Gaussian bump superimposed on a constant background, Appendix B: Finite resolution averaging -Finite resolution averaging procedure implemented in Fig. 1(c) emulates the finite spatial resolution of experimental absorption imaging systems.Following Ref. [76], we denote the impulse response function of the imaging system by A(x), which we here assume to be a normalized Gaussian.The impulse response for a pixel of width ∆ centered at x p is then, The measured atom number in the given pixel is then given by where N provides the correct normalization for the total particle number in the limit of zero pixel width. In our particular example of such averaging, the density profile ρ(x) (at any given time step, with the time argument t being omitted here for notational simplicity) is convoluted with a Gaussian resolution function of width w = 1 µm and then averaged over a finite pixel size ∆ = 4.5 µm, as in Ref. 
[76]. These absolute values translate to dimensionless values of w/L = 0.01 and ∆/L = 0.045, assuming L ∼ 100 µm, with the results being generally insensitive to the exact values of these parameters around these typical values. For comparison, the healing length in this example is equal to l_h/L = 0.0057. Considering ⁸⁷Rb atoms, which have a scattering length of a ≈ 5.3 nm, in a system of size L = 100 µm, this corresponds to an absolute healing length of l_h = 0.57 µm. These choices of dimensionless parameters, and γ_bg = 0.01, can be realized at a background density of ρ_bg ≈ 1.8 × 10⁷ m⁻¹, with an interaction parameter g ≈ 2ℏω_⊥a ≈ 1.4 × 10⁻³⁸ J·m [77], where ω_⊥/2π ≈ 1.9 kHz is the frequency of the transverse harmonic trapping potential.

FIG. 4. Evolution of a density dip in a 1D Bose gas. Panel (a) shows the initial (τ = 0) and time-evolved (τ = 0.0005) density profiles from GPE, TWA and GHD simulations, for γ_bg = 0.01 and N = 1688 (N_bg ≈ 1761); panel (b) shows a time-evolved density profile at a later time (τ = 0.002), where we can see a fully formed train of three grey solitons in the mean-field GPE (full yellow) curve. Panels (c) and (d) compare the same GHD results (notice the different scale of the vertical axis) at τ = 0.0005 and τ = 0.002 with the outcomes of finite resolution averaging of both GPE and TWA curves. In panel (e), we show a time-evolved snapshot of the density profile in the TG regime (γ_bg → ∞) for N = 844 (N_bg ≈ 880.5), and compare the GHD result with that of exact diagonalization (ED). Panel (f) is in the nearly ideal Bose gas regime, with γ_bg = 0.01, N = 42 (N_bg ≈ 44). In all examples, the initial density profile is given by Eq. (1) with β = −0.5 and σ = 0.02.

Appendix C: Dynamics of a localized density dip.-In this Appendix, we present the results of evolution of a localized density depression, after quenching (at time τ = 0) the initial trap potential with a localized barrier to uniform. We assume that the initial density profile is given by the same Eq. (1), except with β being negative and satisfying −1 < β < 0. In Figs. 4(a) and (b), we consider the weakly interacting regime (with γ_bg = 0.01) and show the results of the GPE, TWA, and GHD simulations, for a gas with N = 1688 atoms and the same N_bg ≈ 1761 as in Fig. 1(a). In this scenario, the steep gradient of the shock front forms as the background fluid flows inward and tries to fill the density depression. As a result, one first observes the emergence of large-amplitude structures, forming multiple density troughs, which then evolve into a train of grey solitons propagating away from the origin [47,48,55,70,78,79]. The differences between the TWA and pure mean-field GPE results, seen in Fig. 4(b), are consistent with previous observations [80][81][82] that quantum fluctuations lower the mean soliton speed and fill in the soliton core. The GHD result, on the other hand, fails to capture the solitonic structures, whose characteristic width (on the order of the microscopic healing length) lies beyond the intended range of applicability of GHD. However, GHD still manages to adequately capture the coarse-grained description of the density across the soliton train, which is rather remarkable. This is seen in Fig. 4(c) and (d), where we demonstrate the outcomes of finite resolution averaging applied to the GPE and TWA results of panels (a) and (b), respectively. Similarly to Fig.
1(c), here we used the same normalized Gaussian resolution function of width 1 µm and adopted 87 Rb atoms as an example species for the relevant parameter values (see Appendix B).For panel (c) we used the same pixel size (∆ = 4.5 µm) as before, whereas for panel (d), due to the presence of fully formed grey solitons whose width is on the order of (2 − 4)l h , we used a twice larger pixel size (∆ = 9.0 µm).A larger pixel size here results in ∆/l h 16 1, which is required in order to comply with the large-scale framework of GHD. The last two examples, shown in Figs. 4 (c) and (d), correspond, respectively, to the strongly interacting TG and nearly ideal Bose gas regimes.The overall behaviour and conclusions about the performance of GHD in these examples are the same as in the equivalent scenario of the density bump discussed earlier in Figs. 1 (e) and 2 (a). Appendix D: Parametrization of the doublewell trap-The initial (pre-quench) double-well trap potential is set to V ( x) 2.16×10 −3 x 4 −5.27×10−1 x 2 in dimensionless form, where x = x/l ho , V = V / ω, and l ho = /mω, where ω is the post-quench single-well harmonic trap frequency.The initial dimensional temperature of the cloud, in harmonic oscillator units, is set to T = T /( ω/k B ) 205.In this configuration, the initial density profile for a total of N = 3340 atoms is double peaked, with the dimensionless interaction strength at either of the peaks given by γ max 0.0138.We briefly review the various formulations of generalized hydrodynamics (GHD) used in the main text and its relation to the Lieb-Liniger model of the 1D Bose gas.In particular we cover Euler-scale GHD, its zero-entropy subspace method, and its relation to the Wigner function formulation of the free Fermi gas.We also give a brief introduction to Navier-Stokes GHD and numerical tools used to simulate the dynamics of thermalization in Newton's cradle setups. A. The one-dimensional Bose gas The Lieb-Liniger model of a 1D Bose gas with repulsive contact interaction is a paradigmatic integrable quantum model, described by the following second-quantized Hamiltonian [S1, S2] Here, Ψ † (x) and Ψ(x) are the boson creation and annihilation field operators, obeying the canonical bosonic commutation relations [ Ψ(x), Ψ † (x ′ )] = δ(x − x ′ ), m is the bosonic mass, g is the one-dimensional interaction strength which is taken to be positive for repulsive interactions.For a uniform system of 1D density ρ, the exact eigenstates of the Lieb-Liniger model and its ground state properties can be found through the Bethe ansatz [S1], whereas the finite temperature equilibrium properties can be treated using Yang-Yang thermodynamic solutions [S3].The dimensionless interaction strength that characterises the system is introduced via γ = mg/ℏ 2 ρ. 
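For completeness, the second-quantized Hamiltonian referred to above is the standard Lieb-Liniger form; written out explicitly in terms of the field operators and coupling constant g defined in the text, it reads
\[
\hat H = \int \mathrm{d}x \left[ \frac{\hbar^{2}}{2m}\,
\partial_x \hat\Psi^{\dagger}(x)\, \partial_x \hat\Psi(x)
+ \frac{g}{2}\, \hat\Psi^{\dagger}(x)\hat\Psi^{\dagger}(x)\hat\Psi(x)\hat\Psi(x) \right].
\]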
In the presence of an external trapping potential, V (x), the Hamiltonian acquires an additional term, ´dx V (x) Ψ † Ψ.Through this, the model becomes generically non-integrable outside of the ideal (noninteracting) Bose gas limit or the Tonks-Girardeau limit of infinitely strong interactions.However, the exact solutions for a uniform gas can be still utilized for finding thermodynamic properties of inhomogeneous gases in the local density approximation (LDA) [S4].In the LDA, the Yang-Yang equations are solved using a local chemical potential µ(x) = µ 0 − V (x), where µ 0 is the global chemical potential of the system; as a result, the local dimensionless interaction strength γ(x) = mg/ℏ 2 ρ(x) acquires position-dependence through the inhomogeneity of the density profile ρ(x). B. Euler-scale generalized hydrodynamics GHD for the 1D Bose gas is expressed using the language of Yang and Yang's thermodynamic Bethe ansatz [S3, S5-S7].Here we present a brief formulation of first order (or Euler-scale) GHD in terms of the density of quasiparticles, f p (λ; x, t) [S3].Through f p (λ; x, t) we express the core 'Bethe-Boltzmann' equation of Euler-scale GHD in an inhomogeneous external potential [S8, S9], where λ is the rapidity or quasi-velocity variable of the quasiparticle excitations. Through the occupation number function, n(λ; x, t), related to the density of quasiparticle and quasi-hole excitations, n = f p /(f p + f h ) [S3], one may express this hydrodynamic equation of motion in the form [S8, S9] Though they are equivalent, Eq. ( S3) is more convenient for simulating Euler-scale GHD, as it can be solved through the method of characteristics [S10, S11].The effective velocity, v eff [n], is a functional of the occupation number function, and is written in terms of the quasiparticle energy E and momentum p [S6, S7], and is the group velocity, v gr = ∂ λ E/∂ λ p, of the excitations renormalized by interactions through the dressing operation [S7, S11], where we suppress dependence on space and time variables for clarity.The scattering kernel, θ(λ), in Eq. ( S5) is the first model-dependent quantity utilized in GHD.For the Lieb-Liniger model it is given by θ(λ) = 2ℏg/(g 2 + (ℏλ) 2 ) [S1, S3, S12].The dressing equation may be seen as a generalization of the type of equation present throughout the thermodynamic Bethe ansatz; in particular, one may express the total state density, ].The other model-dependent quantities are the singleparticle eigenvalue functions, h i (λ) (i = 0, 1, 2, . . .), given in the Lieb-Liniger model by polynomials, h i (λ) ∝ λ i , with h 0 = 1, h 1 (λ) = p(λ) = mλ being the quasiparticle momentum, and h 2 (λ) = E(λ) = mλ 2 /2 -the quasiparticle energy, as examples [S1, S3].Correspondingly, the effective velocity simplifies to v eff (λ) = id dr (λ)/1 dr (λ), where id dr (λ) is the identity function id(λ) = λ, dressed according to Eq. 
( S5), likewise 1 dr (λ) is the dressed unit function 1(λ) = 1 [S13].The evaluation of average charge densities takes a simple integral form [S3, S6, S7], where, for example, ⟨q 0 ⟩(x, t) = ρ(x, t) is the particle number density, ⟨q 1 ⟩(x, t) is the momentum or masscurrent density (where, in conventional hydrodynamics, ⟨q 1 ⟩(x, t)/mρ(x, t) = v(x, t) would correspond to the velocity field), and ⟨q 2 ⟩(x, t) = e(x, t) is the energy density.Evaluation of the initial equilibrium state in an arbitrary external potential requires knowledge of the local chemical potential, µ(x) = µ 0 − V (x).Under the LDA, the gas is treated as locally uniform across mesoscopic 'fluid cells', a process which coarse-grains over microscopic lengthscales [S14].Calculation of the equilibrium quasiparticle density on each fluid cell is achieved through an iterative numerical procedure utilizing the local chemical potential [S3, S15], with the GHD evolution of this density according to Eq. ( S3) occurring on macroscopic scales between these fluid cells. In our GHD simulations of systems evolving from finite-temperature thermal equilibrium states, we used the iFluid software package [S15], which is an efficient, easily expandable open-source numerical framework based in Matlab.Simulations of systems evolving from zero-temperature ground states, on the other hand, were carried out using zero-entropy subspace methods found in Ref. [S13] and detailed below. C. Zero-entropy subspace methods and the hard-core limit At zero temperature the occupation number simplifies to an indicator function, and 0 outside this region, where λ F is the Fermi rapidity and is generally dependent on the local effective chemical potential [S13].Thus, dynamics of the occupation number is restricted to its edges, or its 'Fermi contour' in systems of finite size [S16, S17].Numerically solving the GHD equation of motion of these systems may be accomplished through the zero-entropy algorithm presented in the supplementary material of Ref. [S13].This algorithm works with the finite set of curves defined by the edges of the zero-temperature occupation number, with each point on these curves given by a position and rapidity coordinate (x i , λ i ).For a single time-step, δt, the position x i is shifted by an amount v eff {λ} (λ i )δt, where calculating the effective velocity given in Eq. (S4) through the dressing operation in Eq. ( S5) is simplified due to the form of the zero-temperature occupation number described above. In the Tonks-Girardeau limit of hard-core bosons, γ → ∞, at zero temperature there is an equivalence between the occupation number and the Wigner function in the semiclassical formulation for the free Fermi gas [S18]. 
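For reference, the Euler-scale equation of motion for the occupation number referred to above as Eq. (S3), in the form commonly quoted for the Lieb-Liniger gas in an external potential V(x) (with the rapidity λ carrying units of velocity), together with the zero-temperature form of n, can be written as follows; these are standard restatements consistent with the definitions above rather than additional results:
\[
\partial_t n(\lambda;x,t) + v^{\mathrm{eff}}[n](\lambda;x,t)\,\partial_x n(\lambda;x,t)
- \frac{\partial_x V(x)}{m}\,\partial_\lambda n(\lambda;x,t) = 0 ,
\]
with average charge densities obtained as $\langle q_i \rangle(x,t) = \int \mathrm{d}\lambda\, h_i(\lambda)\, f_p(\lambda;x,t)$. At zero temperature the occupation number reduces to an indicator function,
\[
n(\lambda;x,t) =
\begin{cases}
1, & \lambda^{-}_{F}(x,t) \le \lambda \le \lambda^{+}_{F}(x,t), \\
0, & \text{otherwise},
\end{cases}
\]
written here for a state whose local Fermi sea consists of a single interval bounded by the Fermi rapidities $\lambda^{\pm}_{F}$, so that the dynamics is carried entirely by the Fermi contour.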
Here, the Fermi rapidity is equivalent to the Fermi pseudovelocity λ F (x) = 2µ(x)/m [S13].In this limit, the GHD equation of motion is equivalent to the evolution of the Wigner function, and is given in the form of a Boltzmann equation [S13, S18], Quantum corrections to the semiclassical Wigner function description were analysed by Bettelheim et al., demonstrating the presence of long-lived 'quantum ripples' in the dynamics [S18-S20].Analysis presented in the supplementary material of [S13] demonstrated that these corrections to the local density approximation are negligible at large scales in the regime where In this limit, the Wigner function oscillates rapidly as a function of λ such that, when calculating observables using Eq.(S6), the integration over rapidity averages over these fast oscillations, and eliminates the oscillatory terms [S13]. D. Navier-Stokes GHD Higher order corrections to the Euler-scale hydrodynamics presented above have recently been formulated, extending GHD to the Navier-Stokes scale and incorporating diffusive effects [S11, S21-S25].At this level, the hydrodynamic equation of motion is modified through the incorporation of a diffusion operator, D, arising through two-body scattering processes among quasiparticles [S26].Subsequently, it was shown in Ref. [S23], and later fully justified in Ref. [S24], that the diffusive hydrodynamic equation for the quasiparticle density, f p , in the presence of an external trapping potential is given by where (D∂ x f p )(λ; x, t) = ´dλ ′ D(λ, λ ′ ; x, t)∂ x f p (λ ′ ; x, t), and D(λ, λ ′ ; x, t) is the diffusion kernel, calculable from elements already present in the Euler scale theory [S21, S26].Through a scaling analysis performed at second order, it was shown in Ref. [S23] that corrections to Eq. (S9) arising from the Euler-scale force term do not contribute to the dynamics.It was thus demonstrated that, when an external trapping potential breaks the integrability of a gas, diffusive dynamics inevitably lead the system to thermalize at late times [S11, S23].Diffusive dynamics are soluble through a second-order backwards implicit algorithm, first demonstrated in Ref. [S27], and available within the iFluid package [S15]. II. QUANTUM NEWTON'S CRADLE We provide further analysis for the thermalization of the double-well to single-well quantum Newton's cradle discussed in the main text.Additionally, we provide comparison between GHD and SPGPE dynamics for a quantum Newton's cradle under a Bragg pulse protocol, thermalization of which was recently studied in Ref. [S28]. A. Additional results for double-well to harmonic trap quench Here, we further investigate dynamical thermalization of a double-well to single-well trap quench, demonstrated in Fig. 3 of the main text.To quantify the thermalization rate observed in Navier-Stokes GHD, we have access to the thermodynamic entropy density [S3] s which may be integrated for the total entropy per particle, S/k B N = N −1 ´dxs/k B .The total entropy per particle is known to plateau upon the system reaching a thermalized state [S23], and is demonstrated for the Navier-Stokes GHD simulations in Fig. 
S1(a), showing thermalization at time t ≃ 100/(ω/2π).For direct comparison between GHD and SPGPE results (as entropy is not accessible in SPGPE simulations), we also estimate the rate of thermalization through monitoring the peak density averaged over the region x/l ho ∈ [−2, 2], where l ho = ℏ/mω is the harmonic oscillator length.This quantity is plotted Figs.S1(a) and (c) for Navier-Stokes GHD and SPGPE simulation, respectively.Under non-equilibrium Newton's cradle dynamics, the peak density undergoes oscillations at twice the longitudinal trap frequency, eventually relaxing to a final thermal state at time t ≃ 100/(ω/2π) for both simulation methods.This thermalization time agrees with that extracted from the plateauing of GHD entropy per particle.As noted in Ref. [S23], observed thermalization times are generically observable-dependent, however the results presented here demonstrate that, for this system, the thermalization times estimated through relaxation of peak and entropy agree with each other.The temperature, T = T /(ℏω/k B ), and global chemical potential, µ 0 , of the relaxed state for the Navier-Stokes GHD result are fixed by the initial total energy and number of particles [S23].This determines the final temperature of the system to be T ≃ 213, with the corresponding Yang-Yang thermodynamic density profile shown as the cyan dotted line in Fig. 3(b) of the main text.Temperature estimation of the relaxed state for the SPGPE result is achieved via Yang-Yang thermometry of the relaxed density profile [S29, S30], and is shown as the cyan dotted line in Fig. 3(d) of the main text at a temperature of T ≃ 216.The small difference in the temperatures extracted for the relaxed states in GHD and SPGPE simulations comes from a small difference between the respective final density profiles, so that the best-fit density profiles from Yang-Yang thermodynamics return slightly different values of the respective temperatures. Additionally, we calculate the proximity of evolving density distributions in both GHD and SPGPE simulations to the respective Yang-Yang thermal density distributions through the Bhattacharyya statistical distance [S31] where B(P, P ′ ) is the Bhattacharyya coefficient of two normalized probability density functions P (x i , t) and P ′ (x i ) of the same discrete variable x i , Here, P (x i , t) = ρ(x i , t)/ i ρ(x i , t) is the normalized evolving density profile of either GHD, shown in Fig. 3 (a), or SPGPE, shown in Fig. 3 (c) of the main text, with x i being the position on our computational lattice.P ′ (x i ) = ρ YY (x i )/ i ρ YY (x i ), on the other hand, is taken to be the normalized density profile of the Yang-Yang thermal state, ρ YY (x i ), fitted to the final relaxed density profile of GHD or SPGPE evolution.At any given time t, the Bhattacharyya distance serves as a measure of similarity between the instantaneous distribution P (x i , t) and P ′ (x i ).For P → P ′ , the Bhattacharyya coefficient becomes B(P, P ′ ) → 1 (due to the normalization condition) and hence D B → 0, implying complete overlap of the two distributions.As we see from Figs. S1 (b) and (d), as the evolving density profile approaches the relaxed state, the Bhattacharyya distance, D B , tends to a vanishingly small constant, indicating near perfect overlap between the relaxed and Yang-Yang thermal states. 
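The statistical distance used above is straightforward to evaluate on discretized density profiles; a minimal sketch (array and function names are ours) is

import numpy as np

def bhattacharyya_distance(rho, rho_yy):
    # Normalize both density profiles (defined on the same grid) to
    # discrete probability distributions, form the Bhattacharyya
    # coefficient B = sum_i sqrt(P_i * P'_i), and return D_B = -ln(B).
    p = np.asarray(rho, dtype=float)
    q = np.asarray(rho_yy, dtype=float)
    p = p / p.sum()
    q = q / q.sum()
    coeff = np.sqrt(p * q).sum()
    return -np.log(coeff)

Evaluated at each time step between the evolving GHD or SPGPE profile and the Yang-Yang profile fitted to the final relaxed state, this quantity decays towards zero as the cloud thermalizes, as in Figs. S1(b) and (d).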
We further point out here that as the SPGPE is a classical field method [S33, S34], it has an inherent dependency on the high-energy cutoff ϵ cut separating out the classical field region of relatively highly occupied modes from the sparsely occupied modes treated as an effective reservoir.More specifically, the results of SPGPE simulations can be cutoff dependent subject to the observable in question.In the simulations performed here, the highenergy cutoff was chosen according to the prescription of Ref. [S35], namely, such that the mode occupancy of the highest energy mode in the harmonic oscillator basis (of Hermite-Gauss polynomials) was on the order of ∼ 0.3.Such a cutoff mode occupancy, which is somewhat lower than a more conventional cutoff choice at mode occupancy of the order of ∼ 1 (see, e.g., [S36-S38] and references therein), must be taken in order to faithfully reproduce the tails of the initial thermal equilibrium density distribution.In doing so, we have also observed that the simulation of the quantum Newton's cradle system under variation of the high-energy cutoff shows a dependence of the thermalization time on this cutoff mode occupancy.In particular, as the cutoff mode occupancy is reduced (corresponding to an increase in the high-energy cutoff ϵ cut ), not only does the initial density profile match better to that of Yang-Yang thermodynamics, but the thermalization times are additionally shortened.As increasing the high-energy cutoff corresponds to including a larger number of thermally occupied modes, which speed up collisional relaxation, such shortening of thermalization times is expected.Ultimately, as we have seen from the results presented in Fig. S1, the characteristic thermalization times extracted from GHD and SPGPE agreed well with each other under the cutoff mode occupancy of 0.3.Accordingly, all SPGPE simulations reported here, including the ones corresponding to quantum Newton's cradle with a Bragg pulse (see next section) were carried out with ϵ cut corresponding to this optimal cutoff mode occupancy of 0.3. B. Bragg pulse quantum Newton's cradle Collisional dynamics and thermalization of a quantum Newton's cradle under a Bragg pulse protocol in the finite-temperature quasicondensate regime has recently been studied in Ref. [S28].Here, we provide an additional point of comparison between GHD and SPGPE dynamics through simulation of this system, studying the rate of convergence to thermalization within the two simulation methods.Dynamics are instigated from a thermal state in a harmonic trap of frequency ω, characterized by dimensionless interaction parameter γ 0 = 0.01 in the trap centre, with temperature T ≃ 152, and total atom number N ≃ 1960.A Bragg pulse is then applied to initiate Newton's cradle dynamics; this splits the initial atomic wavepacket at rest into two counter-propagating halves corresponding to ±2ℏk 0 diffraction orders of Bragg scattering [S28, S39], where k 0 is the Bragg momentum in wave-number units. 
Implementing the Bragg pulse within the SPGPE method consists of replacing each stochastic realisation of the initial thermal equilibrium state, ψ(x, t = 0 − ) by a coherent superposition, ψ(x, t = 0 + ) = 1 √ 2 (e i2k0x + e −i2k0x )ψ(x, t = 0 − ), which subsequently evolve in time according to the projected GPE [S28, S39].Such a superposition of wavefunctions with positive and negative momentum boosts of ±2k 0 is known to be an excellent approximation to experimental implementation of the Bragg pulse [S28, S39, S40] via an external periodic lattice (Bragg) potential formed by two counterpropagating laser beams [S41].The microscopic effect of the realistic Bragg pulse in the SPGPE simulation can be seen as the fast oscillating interference fringes in the spatial density profile at early times, during the periods when the two counter-propagating halves of the density profile overlap spatially. Simulating a Bragg pulse protocol in GHD, by contrast, is done through the quasiparticle density distribution, f p (λ; x, t), which does not directly depend on the bare atomic momenta, instead relying on the rapidity of quasiparticles, and is generally not equivalent to the momentum distribution, except in free models [S16, S42, S43].Implementing a Bragg pulse in GHD requires one to work with the quasiparticle density distribution of the initial thermal state, f t=0 − p (λ; x, t), adding positive and negative momenta to the quasiparticles with equal probability.Correspondingly, the post-pulse quasiparticle distribution may be modeled as Thus, in contrast to the quantum Newton's cradle presented in the main text, which is instigated in exactly the same way-through a sudden quench of the actual external trap potential V (x) from double to single well-in both the SPGPE and GHD simulation, the implementation of the Bragg pulse quantum Newton's cradle is achieved in SPGPE and GHD via two different approximations of the post-Bragg pulse state.Additionally, as the GHD is a large-scale theory, its post-Bragg pulse state fails to capture the fast-oscillating interference fringes observed in the SPGPE at early times. In GHD implementation, one typically chooses the dimensionless quasimomentum of the Bragg pulse, λ Bragg = (m/ℏ)λ Bragg l ho (given here in harmonic oscillator units), to be equal to the Bragg momentum, q 0 ≡ k 0 l ho , however this is an approximation that assumes a large momentum difference between the clouds [S43].Here, we instead choose the Bragg pulse quasimomentum such that the total energy imparted in the GHD simulation is equal to the energy difference between the initial thermal state and the post-Bragg pulse state calculated in the SPGPE.Using this method, we observe a large difference between the physical Bragg momentum and the corresponding GHD quasimomentum under a low momentum Bragg pulse (∼219% difference at a Bragg momentum of q 0 = 1), with this difference decreasing for larger momentum Bragg pulses (∼ 16.8% difference at a Bragg momentum of q 0 = 5). We first investigate a low momentum Bragg pulse of q 0 = 1 and illustrate the dynamics of the density profile for GHD and SPGPE simulations in Fig. 
S2(a) and (d), respectively.We observe a qualitative similarity in the oscillations of collisional dynamics between the two simulation methods, however, there is a clear disagreement in the rate of thermalization.The observed thermalization times of the Navier-Stokes GHD simulation is observed to be t f ≃ 80/(ω/2π), after approximately 160 oscillation periods, relaxing to a state of temperature T ≃ 214.In comparison, the SPGPE simulation of this q 0 = 1 Bragg pulse Newton's cradle system thermalizes at an earlier time of t f ≃ 30/(ω/2π), after approximately 60 oscillation periods, to a relaxed state at T ≃ 261.The observed difference in thermalization rates between Navier-Stokes GHD and the SPGPE methods of simulating this system likely stems from the approximate method utilized for simulating the Bragg pulse within GHD, described above. Simulation of the same system, but with a larger Bragg momentum of q 0 = 5, is illustrated in Fig. S3(a)-(c) for GHD, and in Fig. S3(d)-(f) for SPGPE, respectively.Here, we observe a greater disparity in the thermalization rates with SPGPE simulations thermalizing at around t f ≃ 40/(ω/2π) (approximately 80 oscillation periods), and GHD at t f ≃ 350/(ω/2π) (approximately 700 oscillation periods).This disparity is confirmed when comparing the average peak density between GHD and SPGPE, shown in Fig. S3(b) and (e), respectively. The density profile of the final relaxed state for the GHD simulation is shown in Fig. S3 (c), with its matching thermal density profile of temperature T ≃ 336.Likewise, the relaxed density profile for the SPGPE simula-tion is shown in Fig. S3 (f), with the density profile of its matching Yang-Yang thermodynamic state of temperature T ≃ 348.We thus observe that, for the Bragg pulse quantum Newton's cradle, the discrepancy in thermalization rates is present regardless of the momentum of the simulated Bragg pulse.However, short and intermediate time dynamics, along with the final relaxed states of both simulation methods, are seen to be qualitatively similar. To summarise, we emphasize that the quantum Newton's cradle in double-well to single-well quench is implemented in exactly the same way in both the GHD and SPGPE approaches-via a sudden quench of the external trapping potential at time t = 0. Accordingly, as we discussed in the main text, the GHD and SPGPE thermalization times agree with each other in this setup.In contrast to this, the original Bragg pulse quantum Newton's cradle scenario of Ref. [S41], which employs a splitting of the initial quasicondensate wavefunction into two counter-propagating halves, is implemented in GHD in an approximate and qualitatively different way to the SPGPE [S28, S43].Because of this, we observe significantly different thermalization rates (especially for large Bragg momenta) using GHD and SPGPE simulations, even though the overall dynamics of collisional oscillations are still qualitatively very similar. FIG. 3 . FIG. 3. 
Evolution and thermalization of the density distribution ρ(x, t) in a quantum Newton's cradle setup initialized from a double-well to single-well trap quench, simulated using (a)-(b) Navier-Stokes GHD, and (c)-(d) SPGPE.The initial cloud of N = 3340 atoms at temperature T = 205 (in harmonic oscillator units) is prepared in a thermal equilibrium state of a symmetric double-well trap potential (see Appendix D for details).Panel (b) demonstrates the relaxed density profile of the Navier-Stokes GHD evolution at t = 100/(ω/2π) (black solid line), alongside a best-fit thermal equilibrium profile from Yang-Yang thermodynamics at T 213 (cyan dotted line), and an additional GHD density profile at earlier time t = 6.79/(ω/2π) (red dashed line).Panel (d) is the same as (b), but for the SPGPE, with the relaxed density profile at t 100/(ω/2π), Yang-Yang thermodyanmic density profile of T 216, and an additional density profile at t = 6.81/(ω/2π). FIG. S1.Convergence to thermalization for the double-well to single-well trap quench demonstrated in Fig. 3 of the main text.Panel (a) shows the evolution of the peak density for the Navier-Stokes GHD simulation (averaged over the region x/l ho ∈ [−2, 2], where l ho = ℏ/mω is the harmonic oscillator length), alongside the respective total entropy per particle, both of which plateau upon reaching the final thermal state.Panel (b) demonstrates the Bhattacharyya statistical distance of the GHD evolving density profile to the corresponding Yang-Yang thermodynamic density profile (fitted to the final relaxed GHD profile) of temperature T = T /(ℏω/kB) ≃ 213 shown in Fig. 3 (b) of the main text.The final observed Bhattacharyya distance here is DB = 2.23 × 10 −4 , which is rather small and hence indicates near-perfect overlap of the two distributions.Similarly, panel (c) shows the evolution of the peak density for the SPGPE simulation (again averaged over the region x/l ho ∈ [−2, 2]) as it approaches thermalization and plateaus; (d) demonstrates the Bhattacharyya distance of the evolving SPGPE density profile to the Yang-Yang thermodynamic density profile of temperature T ≃ 216 shown in Fig. 3 (d) of the main text, with a final observed distance of DB = 4.28 × 10 −3 . FIG. S2 . FIG. 
S2.Dynamics of thermalization of a harmonically trapped quasiondensate for GHD and SPGPE evolution in a quantum Newton's cradle setup under a Bragg pulse of momentum q0 = k0l ho = 1.The corresponding value of Bragg quasimomentum for this system is λBragg = 3.19.Initial thermal states are characterised by a dimensionless interaction parameter at the trap centre of γ0 = 0.01, and dimensionless temperature T ≃ 152, with a total atom number of N ≃ 1960 [S32].Panel (a) shows Navier-Stokes GHD evolution of the density profile, ρ(x, t)l ho ; (b) demonstrates the evolution of the respective peak density, averaged over the region x/l ho ∈ [−2, 2] (blue, fast-oscillating curve), alongside the total entropy per particle of the GHD simulation (red, smooth curve), which both plateau upon reaching the final thermal state; (c) shows the density profile of the relaxed GHD state after dynamics at time t = 80/(ω/2π) (black solid line), as well as the Yang-Yang thermodynamic density profile (cyan dotted line) of temperature T ≃ 214 that fits best to that of the relaxed GHD state.Also shown in (c) is a GHD density profile at an earlier time t ≃ 6.56/(ω/2π) (red dashed line), chosen to illustrate its deviation from the density profile of the final relaxed state.Density profile evolution under SPGPE simulation is shown in (d); panel (e) demonstrates the evolution of the respective peak density (averaged over the region x/l ho ∈ [−2, 2]) as it approaches thermalization and plateaus; in (f) we show the relaxed density profile after dynamics at time t = 30/(ω/2π) (black solid line), as well a best fit Yang-yang thermodynamic density profile of temperature T ≃ 261 (cyan dotted line), and the SPGPE density profile at an earlier time t ≃ 6.54/(ω/2π) (red dashed line), which is already much closer to the final relaxed density profile compared to that of GHD simulation around the same time. FIG. S3 . FIG. S3.Same as in Fig. S2, but for q0 = 5.The corresponding Bragg quasimomentum for this system is λBragg = 5.84, whereas the total atom number, and the initial values of γ0 and T are the same.The final relaxed density profile from GHD in (c) is at time t = 350/(ω/2π) (black solid line), and is compared with a corresponding Yang-Yang thermodynamic density profile at temperature T = 336 (cyan dotted line), and a respective density profile taken at time t ≃ 21.5/(ω/2π) (red dashed line).The relaxed SPGPE state in (f) is given at a time t = 40/(ω/2π) (black solid line), with a respective Yang-Yang thermodynamic density profile of temperature T = 348 (cyan dotted line), and an additional density profile demonstrated at a time of t ≃ 22.1/(ω/2π) (red dashed line). weakly interacting one-dimensional Bose gas, Phys.Rev.A 86, 033626 (2012).[59]Y.Castin, R. Dum, E. Mandonnet, A. Minguzzi, and I. Carusotto, Coherence properties of a continuous atom laser, Journal of Modern Optics 47, 2671 (2000).[60]P.Blakie, A. Bradley, M. Davis, R. Ballagh, and C. Gardiner, Dynamics and statistical mechanics of ultra-cold Bose gases using c-field techniques, Advances in Physics 57, 363 (2008).[61]I.Bouchoule, S. S. Szigeti, M. J. Davis, and K. V.For the examples considered in Fig. 
For the examples considered in Fig. 1(d), the thermal phase coherence lengths are lT/L ≈ 0.1 for T = 0.01 and lT/L ≈ 0.01 for T = 0.1. These values, in turn, are comparable to or smaller than the width of the initial density bump σ = 0.02, implying a reduced phase coherence over the extent of the bump and hence loss of interference contrast upon expansion of the bump into the background [43]. [65] E. Bettelheim and L. Glazman, Quantum Ripples Over a Semiclassical Shock, Phys. Rev. Lett. 109, 260602 (2012). [66] In the example of Fig. 1(e), which is for N = 1000 particles, we have been able to discriminate between the
12,306
2022-08-13T00:00:00.000
[ "Physics" ]
Red Panda Optimization Algorithm: An Effective Bio-Inspired Metaheuristic Algorithm for Solving Engineering Optimization Problems
This paper presents a new bio-inspired metaheuristic algorithm called Red Panda Optimization (RPO) that imitates the natural behaviors of red pandas. The main design idea of RPO is derived from two characteristic natural behaviors of red pandas: (i) foraging strategy, and (ii) climbing trees to rest. The proposed RPO approach is mathematically modeled in two phases: exploration, based on the simulation of red pandas' foraging strategy, and exploitation, based on the simulation of red pandas' movement in climbing trees. The main advantage of the proposed approach is that there is no control parameter in its mathematical modeling, and for this reason it does not need a parameter adjustment process. The performance of RPO is evaluated on fifty-two standard benchmark functions including unimodal, high-dimensional multimodal, and fixed-dimensional multimodal types, as well as the CEC 2017 test suite. The optimization results obtained by the proposed RPO approach are compared with the performance of twelve well-known metaheuristic algorithms. The simulation results show that RPO, by maintaining the balance between exploration and exploitation, is effective in solving optimization problems and its performance is superior to that of the competitor algorithms. Based on the analysis of the optimization results, RPO has provided more successful performance than the competitor algorithms in 100% of unimodal functions, 100% of high-dimensional multimodal functions, 100% of fixed-dimensional multimodal functions, and 86.2% of CEC 2017 test suite benchmark functions. Also, the statistical analysis of the Wilcoxon rank sum test shows that the superiority of RPO over the compared algorithms is significant from a statistical point of view. In addition, the results of implementing RPO on four engineering design problems confirm the ability of the proposed approach to handle real-world optimization applications.
I. INTRODUCTION
Optimization problems are problems that have more than one feasible solution. According to this definition, optimization is the process of finding the best feasible solution among the available solutions for a problem [1]. From a mathematical point of view, an optimization problem can be modeled considering three main parts: decision variables, constraints, and objective function. The main goal in optimization is to set values for the decision variables such that the objective function is optimized subject to the constraints of the problem [2]. There are numerous optimization problems in science that have become more complex with the advancement of technology, and this is the reason for the need for powerful tools for solving optimization problems [3]. Problem-solving methods in optimization studies are classified into two groups: deterministic and stochastic approaches [4]. Deterministic approaches are effective tools for solving linear, convex, continuous, differentiable, and low-dimensional problems [5]. However, for more complex optimization problems, deterministic approaches lose their efficiency due to getting stuck in local optima, while many of today's optimization problems and real-world applications are non-linear, non-convex, discontinuous, non-differentiable, and high-dimensional [6].
These disadvantages and the inability of deterministic approaches to solve complex optimization problems have prompted researchers to develop stochastic approaches. Stochastic approaches, without the need for derivative information from the objective function and problem constraints, are able to provide suitable solutions for optimization problems based on the random search process in the problem-solving space [7]. Metaheuristic algorithms are one of the most effective stochastic approaches in solving optimization problems. Advantages such as simplicity of concepts, convenient implementation, no dependence on the type of problem, no need for derivative information, efficient performance in solving nonlinear, non-convex, high-dimensional, and NPhard problems, as well as desirable efficiency in nonlinear and unknown search spaces are the main reasons for the popularity of metaheuristic algorithms [8]. The nature of random search in metaheuristic algorithms means that there is no guarantee of achieving the global optimal solution with these approaches. However, since the solutions obtained by metaheuristic algorithms are close to the global optima, they are acceptable and known as quasi-optimal solutions [9]. In order to organize an effective search process, metaheuristic algorithms must be able to scan the problem-solving space appropriately at both global and local levels. Global search with the concept of exploration leads to the ability of the algorithm to comprehensively search the problem-solving space with the aim of discovering the main optimal area and preventing the algorithm from getting stuck in local optima. Local search with the concept of exploitation leads to the ability of the algorithm to achieve possible better solutions near the discovered solutions [10]. In addition to exploration and exploitation abilities, what leads to the success of metaheuristic algorithms in the optimization process is the balancing of exploration and exploitation during the search process [11]. The efforts of researchers to achieve more effective solutions for optimization problems have led to the design of numerous metaheuristic algorithms [12]. These algorithms are employed in various optimization applications in science, such as energy [13], [14], [15], [16], protection [17], energy carriers [18], [19], and electrical engineering [20], [21], [22], [23], [24], [25]. The main research question in the study of metaheuristic algorithms is that considering various algorithms presented so far, is there still a need to design newer metaheuristic algorithms? In response to this question, No Free Lunch (NFL) theorem [26] explains that there is no specific metaheuristic algorithm to be considered as the best optimizer for all optimization problems. In fact, the optimal performance of an algorithm in solving a set of optimization problems is not a guarantee for the similar performance of that algorithm in solving other optimization problems. According to NFL theorem, a successful algorithm in solving an optimization problem may even fail in solving another problem. Therefore, there is no guarantee for the success or failure of implementing an algorithm on a problem. By keeping open the field of metaheuristic algorithms study, NFL theorem encourages researchers to provide more effective solutions to optimization problems by designing newer metaheuristic algorithms. 
The innovation and novelty of this paper is the introduction and design of a new metaheuristic algorithm called Red Panda Optimization (RPO) to solve optimization problems. The main contributions of this paper are as follows:
• The proposed RPO approach is based on the simulation of red panda behaviors in nature.
• The fundamental inspiration for the RPO design is the foraging strategy and tree climbing ability of red pandas.
• The mathematical model of RPO is presented in two phases of exploration and exploitation.
• The efficiency of RPO in optimization has been evaluated on fifty-two benchmark functions consisting of unimodal, high-dimensional multimodal, and fixed-dimensional multimodal types, as well as the CEC 2017 test suite.
• The performance of RPO is compared with twelve well-known metaheuristic algorithms.
• The effectiveness of RPO in handling real-world applications is examined on four engineering design problems.
The rest of this article is organized as follows: first, the literature review is presented in section II. Then, the proposed Red Panda Optimization (RPO) algorithm is introduced and mathematically modeled in section III. Simulation studies and results are presented in section IV. The performance of the proposed RPO in solving real-world applications is evaluated in section V. Finally, conclusions and several proposals for future studies are provided in section VI.
II. LITERATURE REVIEW
Metaheuristic algorithms have been developed with inspiration from various natural phenomena, animal life in nature, biological sciences, physical laws and phenomena, rules of games, human interactions, and other evolutionary processes. Based on the idea used in the design, metaheuristic algorithms can be broadly classified into five groups: swarm-based, evolutionary-based, physics-based, human-based, and game-based approaches [27]. Swarm-based metaheuristic algorithms are developed based on simulating the natural swarm behavior of animals, birds, aquatic animals, insects, plants, and other living organisms in nature. Among the well-known algorithms of this group, one can mention Particle Swarm Optimization (PSO) [28], Ant Colony Optimization (ACO) [29], Artificial Bee Colony (ABC) [30], and Firefly Algorithm (FA) [31]. ACO was proposed based on modeling the ability of an ant swarm to identify the shortest communication path between nests and food sources. ABC was designed based on simulating the interactions and natural behaviors of colony bees in obtaining food resources. FA was inspired by the communication feature of flashing light in the firefly swarm. Finding food resources, migration, and chasing are common natural behaviors among animals, whose simulation has inspired researchers to design several swarm-based algorithms such as: Coati Optimization Algorithm (COA) [32], Golden Jackal Optimization (GJO) [33], White Shark Optimizer (WSO) [34], Marine Predator Algorithm (MPA) [35], African Vultures Optimization Algorithm (AVOA) [36], Pelican Optimization Algorithm (POA) [37], Tunicate Swarm Algorithm (TSA) [38], Honey Badger Algorithm (HBA) [39], Whale Optimization Algorithm (WOA) [40], Reptile Search Algorithm (RSA) [41], Green Anaconda Optimization (GAO) [42], Cuckoo Search Algorithm (CSA) [43], and Grey Wolf Optimizer (GWO) [44].
Evolutionary-based metaheuristic algorithms are introduced based on the concepts of genetics, biology, natural selection, and survival of the fittest. Genetic Algorithm (GA) [45] and Differential Evolution (DE) [46] are widely used approaches in this group. GA and DE were developed based on reproductive process modeling, biology concepts and stochastic operators such as selection, crossover, and mutation. Artificial Immune Systems (AISs) is another evolutionary approach that has been introduced based on the ability of the human body's defense system against diseases and microbes [47]. Some other evolutionary-based metaheuristic algorithms are: Evolution Strategy (ES) [48], Genetic programming (GP) [49], and Cultural Algorithm (CA) [50]. Physics-based metaheuristic algorithms are designed based on simulating concepts, phenomena, and laws in physics. Simulated Annealing (SA) [51] is one of the most widely used physics-based approaches. SA was developed based on the modeling of metal annealing phenomenon in physics, where metals are melted under heat and then cooled in order to achieve ideal crystal. The modeling of physical forces has been the starting point for the introduction of several physics-based algorithms, such as: Spring Search Algorithm (SSA) [52], Momentum Search Algorithm (MSA) [53], and Gravitational Search Algorithm (GSA) [54]. SSA was introduced based on the simulation of Hooke's law, spring elastic force, and Newton's laws of motion in a system consisting of weights connected by springs. MSA was proposed based on the modeling of the force resulting from the momentum between the bullets. GSA was designed based on simulating the gravitational force that masses exert on each other at different distances. The physical phenomenon of matter state transitions for water was employed in design of the Water Cycle Algorithm (WCA) [55]. Some other physics-based metaheuristic algorithms are: Nuclear Reaction Optimization (NRO) [56], Lichtenberg Algorithm (LA) [57], Archimedes Optimization Algorithm (AOA) [58], Equilibrium Optimizer (EO) [59], Multi-Verse Optimizer (MVO) [60], and Electro-Magnetism Optimization (EMO) [61]. Human-based metaheuristic algorithms have been developed based on the simulation of human interactions, communication, thinking, and decision-making in social and individual lives. Teaching-Learning Based Optimization (TLBO) [62] is one of the most widely used human-based algorithms. The basic inspiration in its design was modelling the educational interactions of students and teachers in the classroom. The economic activities of both the poor and the rich sections of the society, who are trying to improve their economic conditions, have been a source of inspiration in the design of Poor and Rich Optimization (PRO) [63]. Therapeutic interactions between patients and physicians were employed in the design of Doctor and Patients Optimization (DPO) [64]. The cooperation between the members of a team who are trying to achieve the team goal was the basic idea in the design of Teamwork Optimization Algorithm (TOA) [65]. Some other human-based metaheuristic algorithms are: Ali Baba and the Forty Thieves (AFT) [66], Skill Optimization Algorithm (SOA) [67], Language Education Optimization (LEO) [68], Coronavirus Herd Immunity Optimizer (CHIO) [69], War Strategy Optimization (WSO) [70], and Driving Training-Based Optimization (DTBO) [71]. 
Game-based metaheuristic algorithms have been introduced based on modeling the rules of various individual and group games, the strategy of players, coaches, referees, and other influential persons of the games. Football Game Based Optimization (FGBO) [72] and Volleyball Premier League (VPL) [73] are two game-based approaches, which were designed based on the simulation of competitions between clubs in soccer and volleyball leagues. The skill of players in putting together the pieces of a puzzle has been a source of inspiration in the design of the Puzzle Optimization Algorithm (POA) [74]. The strategy of players in throwing darts and collecting points in the darts game was employed in the design of the Darts Game Optimizer (DGO) [75]. Some other game-based metaheuristic algorithms are: Tug of War Optimization (TWO) [76], Billiards Optimization Algorithm (BOA) [77], Dice Game Optimization (DGO) [78], Ring Toss Game-Based Optimization (RTGBO) [79], Orientation Search Algorithm (OSA) [80], and Archery Algorithm (AA) [81]. Based on the best knowledge obtained from the literature review, no metaheuristic algorithm has been designed based on simulating the natural behavior of the red panda. Meanwhile, the foraging behavior of red pandas and their habit of resting on trees are intelligent activities that have the potential to inspire the design of a metaheuristic algorithm. In order to address this research gap in the studies of metaheuristic algorithms, in this paper a new metaheuristic algorithm is introduced based on the mathematical modeling of the natural behavior of the red panda, which is discussed in the next section.
Algorithm 1. Pseudocode of RPO.
Start RPO.
Input: The problem information (variables, objective function, and constraints).
Set RPO population size (N) and the total number of iterations (T).
Generate the initial population matrix at random using (1) and (2).
Evaluate the objective function by (3).
For t = 1 to T
  For i = 1 to N
    Phase 1: The strategy of red pandas in foraging
      Update the food positions set for the ith RPO member using (4).
      Determine the food selected by the ith red panda at random.
      Calculate the new position of the ith RPO member based on the 1st phase of RPO using (5).
      Update the ith RPO member using (6).
    Phase 2: Skill in climbing and resting on the tree
      Calculate the new position of the ith RPO member based on the 2nd phase of RPO using (7).
      Update the ith RPO member using (8).
  Save the best candidate solution found so far.
end
Output: The best solution obtained by RPO.
End RPO.
III. RED PANDA OPTIMIZATION ALGORITHM
In this section, the proposed Red Panda Optimization (RPO) algorithm is introduced, then its mathematical modeling is presented.
A. INSPIRATION OF RPO
The red panda is a small animal endemic to southern China and the eastern Himalayas. It has dense reddish-brown hair on its body and legs, a black belly and legs, white-lined ears, a mainly white muzzle, and a ringed tail. It has a head-to-body length of 51-63.5 cm, a tail length of 28-48.5 cm, and weighs between 3.2 and 15 kg. Because of its flexible joints and curved semi-retractile claws, it is well adapted to climbing [82]. The red panda inhabits temperate broadleaf and mixed forests as well as coniferous forests, favoring steep slopes with dense bamboo cover close to water sources. It is largely arboreal and solitary [83]. A picture of the red panda is shown in Figure 1.
The red panda is largely herbivorous and eats mainly bamboo leaves and shoots, as well as blooms and fruits. The red panda has a good sense of sight, smell, and hearing and uses its long white whiskers to search for food at night [84]. According to observations, the red panda is a nocturnal animal. Due to its high ability to climb, it sleeps and rests in high places, especially trees, during the day [85]. Among the natural behaviors of the red panda, its foraging strategy based on its keen hearing, sight, and smell, as well as its high skill in climbing trees, is the most distinctive. Mathematical modeling of these natural behaviors of the red panda is the basis for the design of the proposed RPO approach, which is explained below.
B. MATHEMATICAL MODELLING
In this subsection, first the initialization of the proposed RPO approach is described; then, based on the simulation of the natural behaviors of the red panda, the mathematical model of updating the candidate solutions in two phases of exploration and exploitation is presented.
1) INITIALIZATION
The proposed RPO approach is a population-based metaheuristic algorithm, whose members consist of red pandas. In the RPO design, each red panda is a candidate solution to the problem, which suggests certain values for the problem variables based on its position in the search space. Therefore, from a mathematical point of view, each red panda (i.e., candidate solution) is modeled using a vector. Together, the red pandas of the algorithm population can be mathematically modeled using a matrix according to (1). Each row of this matrix represents a red panda (i.e., candidate solution), whose initial position in the search space is generated at random using (2). Here X is the population matrix of red pandas' locations, X_i is the ith red panda (i.e., candidate solution), x_i,j is its jth dimension (problem variable), N is the number of red pandas, m indicates the number of problem variables, r_i,j are random numbers in the interval [0, 1], and lb_j and ub_j are the lower bound and upper bound of the jth problem variable, respectively. Considering that the position of each red panda is a candidate solution for the problem, the objective function of the problem corresponding to each of these candidate solutions can be evaluated. The set of evaluated values for the objective function can be represented using a vector according to (3), where F is the objective function values vector and F_i denotes the value of the objective function obtained by the ith red panda. The evaluated values for the objective function of the problem are the main criterion in determining the quality of the candidate solutions. In other words, the best obtained value for the objective function corresponds to the best candidate solution and, similarly, the worst obtained value for the objective function corresponds to the worst candidate solution. Since the candidate solutions are updated in each iteration, the best and worst candidate solutions must also be updated in each iteration. After the implementation of the algorithm, the best candidate solution obtained during the iterations of the algorithm is presented as the solution to the problem. The process of updating candidate solutions in the proposed RPO consists of two phases of exploration and exploitation, which are described as follows.
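Before the two phases are detailed, the initialization just described (random candidate positions within the bounds per (1)-(2) and objective evaluation per (3)) can be illustrated with a minimal sketch. This is not code from the paper: the bounds, population size, and sphere objective are placeholder assumptions, and the random-position formula x_i,j = lb_j + r_i,j · (ub_j − lb_j) is inferred from the variable descriptions above.

```python
import numpy as np

rng = np.random.default_rng(0)

def sphere(x):
    # Placeholder objective (an F1-style sphere function); the paper's benchmarks are not reproduced here.
    return np.sum(x**2)

def initialize_population(N, m, lb, ub):
    """Random population matrix X (N x m) and objective vector F, following the
    textual description of (1)-(3): x_ij = lb_j + r_ij * (ub_j - lb_j)."""
    r = rng.random((N, m))
    X = lb + r * (ub - lb)
    F = np.array([sphere(x) for x in X])
    return X, F

# Illustrative usage with assumed bounds and sizes.
N, m = 30, 10
lb, ub = np.full(m, -100.0), np.full(m, 100.0)
X, F = initialize_population(N, m, lb, ub)
best = X[np.argmin(F)]  # best candidate solution so far (minimization)
```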
2) PHASE 1: THE STRATEGY OF RED PANDAS IN FORAGING (EXPLORATION)
The position of red pandas in the first phase of RPO is modeled based on their movement in order to forage in the wild. Red pandas are highly skilled in identifying and moving towards the location of food sources using their keen senses of smell, hearing, and vision. In the RPO design, for each red panda, the locations of other red pandas that lead to better objective function values are considered as the locations of food resources. The set of proposed food resource positions for each red panda, based on the comparison of the objective function values, is modeled using (4). Among these proposed positions, one position is randomly determined as the food position selected by the corresponding red panda. Here PFS_i is the set of proposed food sources for the ith red panda and X_best is the location of the red panda with the best value for the objective function (the best candidate solution). Moving towards the food source leads to large changes in the position of red pandas, which improves the capability of the proposed algorithm in exploration and global search in the problem-solving space. In order to model the behavior of red pandas during foraging, first a new position is calculated for each red panda based on movement towards the location of the food source (the best candidate solution) using (5). Then, if the value of the objective function is improved in the new position, the position of the red panda is updated to the position calculated in the exploration phase using (6). Here X^P1_i is the new position of the ith red panda based on the first phase of RPO, x^P1_i,j is its jth dimension, F^P1_i represents its objective function value, SFS_i is the selected food source for the ith red panda, SFS_i,j denotes its jth dimension, r is a random number in the interval [0, 1], and I is a number selected at random from the set {1, 2}.
3) PHASE 2: SKILL IN CLIMBING AND RESTING ON THE TREE (EXPLOITATION)
The position of red pandas in the second phase of RPO is modeled based on the skill of this animal in climbing trees and resting on them. Red pandas spend most of their time resting on trees. After foraging on the ground, this animal climbs the nearby trees. Moving towards the tree and climbing it leads to small changes in the position of red pandas, which increases the capability of the proposed RPO algorithm in exploitation and local search in promising areas. In order to mathematically model the natural behavior of red pandas in climbing trees, first a new position is calculated for each red panda using (7). Then, if the value of the objective function is improved, this new position replaces the previous position of the corresponding red panda using (8).
x^P2_i,j = x_i,j + (lb_j + r · (ub_j − lb_j)) / t,   i = 1, 2, . . . , N, j = 1, 2, . . . , m, and t = 1, 2, . . . , T,   (7)
where X^P2_i is the new position of the ith red panda based on the second phase of RPO, x^P2_i,j is its jth dimension, F^P2_i indicates its objective function value, r is a random number in the interval [0, 1], t represents the iteration counter of the algorithm, and T is the maximum number of iterations.
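A compact sketch of how the two phases could be realized in code is given below. Two hedges apply: equation (5) for the exploration move is not reproduced in this excerpt, so the step towards the selected food source is written in a plausible form x + r·(SFS − I·x) consistent with the variables described above (r ∈ [0, 1], I ∈ {1, 2}) and should be read as an assumption rather than the paper's exact formula; the exploitation step follows the formula given for (7), and the greedy acceptance implements the updates described for (6) and (8). The bound clipping is a common safeguard not stated in the excerpt.

```python
import numpy as np

rng = np.random.default_rng(1)

def rpo_iteration(X, F, f, lb, ub, t):
    """One RPO-style iteration over the population X with objective values F.
    f is the objective function, lb/ub the bounds, t the (1-based) iteration counter."""
    N, m = X.shape
    for i in range(N):
        # Phase 1 (exploration): proposed food sources are members with better objective values.
        better = np.where(F < F[i])[0]
        sfs = X[rng.choice(better)] if better.size else X[np.argmin(F)]  # selected food source
        r = rng.random(m)
        I = rng.integers(1, 3)                       # random number from {1, 2}
        x_p1 = X[i] + r * (sfs - I * X[i])           # assumed form of eq. (5)
        x_p1 = np.clip(x_p1, lb, ub)                 # safeguard, not stated in the excerpt
        f_p1 = f(x_p1)
        if f_p1 < F[i]:                              # greedy update, eq. (6)
            X[i], F[i] = x_p1, f_p1
        # Phase 2 (exploitation): small, iteration-damped step, eq. (7).
        r = rng.random(m)
        x_p2 = np.clip(X[i] + (lb + r * (ub - lb)) / t, lb, ub)
        f_p2 = f(x_p2)
        if f_p2 < F[i]:                              # greedy update, eq. (8)
            X[i], F[i] = x_p2, f_p2
    return X, F

# Illustrative usage, continuing from the initialization sketch above:
# for t in range(1, T + 1):
#     X, F = rpo_iteration(X, F, sphere, lb, ub, t)
```

The greedy acceptance keeps each member's objective value non-increasing across iterations, which matches the text's statement that a new position is adopted only if it improves the objective.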
C. REPETITION PROCESS, FLOWCHART, AND PSEUDO-CODE OF RPO
The proposed RPO approach is an iteration-based metaheuristic algorithm. After updating the positions of all red pandas based on the exploration and exploitation phases, the first iteration of RPO is completed. Then, based on the new values, the algorithm enters the next iteration and the process of updating the positions of the red pandas is repeated using (4) to (8) until the last iteration of the algorithm. After completion of the RPO implementation, the position of the best red panda, which results in the best value for the objective function, is presented as the solution of the problem. The implementation steps of RPO are presented in the form of a flowchart in Figure 2 and its pseudocode is given in Algorithm 1.
D. COMPUTATIONAL COMPLEXITY
In this subsection, the computational complexity analysis of the proposed RPO approach is discussed. RPO initialization has a computational complexity equal to O(Nm), where N is the number of red pandas and m denotes the number of problem variables. In each iteration of the algorithm, the position of red pandas is updated in the two phases of exploration and exploitation. Therefore, the red panda update process has a computational complexity equal to O(2NmT), where T is the maximum number of algorithm iterations. Therefore, the total computational complexity of RPO is equal to O(Nm(1 + 2T)).
IV. SIMULATION STUDIES AND DISCUSSION
In this section, simulation studies on the performance of the proposed RPO in solving optimization problems are presented. For this purpose, fifty-two standard benchmark functions consisting of unimodal, high-dimensional multimodal, and fixed-dimensional multimodal types as well as the CEC 2017 test suite [86] are employed. Also, in order to analyze the quality of RPO in providing appropriate solutions, the results obtained from the proposed approach are compared with the performance of twelve well-known metaheuristic algorithms including: GA, PSO, GSA, TLBO, GWO, MVO, WOA, TSA, MPA, RSA, WSO, and AVOA. The values of the control parameters for these competitor algorithms are specified in Table 1. Simulation results are reported using six statistical indicators: mean, best, worst, standard deviation (std), median, and rank. The ranking criterion for metaheuristic algorithms in solving each benchmark function is to provide a better value for the mean index.
A. EVALUATION OF UNIMODAL TEST FUNCTIONS
Seven benchmark functions F1 to F7 are selected from the unimodal type. Because these functions have no local optima, they are suitable options for evaluating the exploitation power of metaheuristic algorithms. The optimization results for the functions F1 to F7 using RPO and the competitor algorithms are presented in Table 2. Based on the optimization results, RPO with high exploitation ability has converged to the global optima in solving functions F1, F2, F3, F4, F5, and F6. In solving the function F7, RPO is the first-best optimizer. The analysis of the simulation results shows that the proposed RPO approach has provided better results, and in total, by winning the first rank, it has achieved superior performance in the optimization of unimodal benchmark functions compared to the competitor algorithms. B.
EVALUATION OF HIGH-DIMENSIONAL MULTIMODAL TEST FUNCTIONS Six benchmark functions F8 to F13 are selected from high-dimensional multimodal type. In addition to the global optima, these functions have a large number of local optima, and for this reason, they are suitable options for evaluating the exploration power of metaheuristic algorithms. The implementation results of RPO and the competitor algorithms for the functions F8 to F13 are reported in Table 3. The optimization results show that RPO with high exploration ability has converged to the global optima in the optimization of F9 and F11 functions in addition to identifying the main optimal area in the search space. In solving the functions F8, F10, F12, and F13, RPO has provided suitable solutions with high exploration ability and is the first-best optimizer for these functions. The comparison of the simulation results indicates that the proposed RPO approach, with a high exploration ability in the case of high-dimensional multimodal functions, has obtained superior performance over the competitor algorithms. C. EVALUATION OF FIXED-DIMENSIONAL MULTIMODAL TEST FUNCTIONS Ten benchmark functions F14 to F23 have been selected from the fixed-dimensional multimodal type. These functions, compared to functions F8 to F13, have a lower number of local optima. Functions F14 to F23 are suitable options for evaluating the ability of metaheuristic algorithms in balancing exploration and exploitation features during the search process. The results of using RPO and the competitor algorithms for optimizing the functions F14 to F23 are presented in Table 4. Based on the optimization results, RPO is the first-best optimizer for the functions F14, F15, F21, F22, and F23. In solving the functions F16, F17, F18, F19, and F20, RPO has the same conditions as some of the competing algorithms considering the mean index criterion. However, RPO has provided a more effective performance in handling these functions by providing better results from the std index viewpoint. What is evident from the analysis of simulation results, RPO has achieved better results in solving fixed-dimensional multimodal functions with an appropriate ability to balance exploration and exploitation, and compared to the competitor algorithms, it has provided superior performance in optimizing these functions. The performance of RPO and the competitor algorithms in solving benchmark functions F1 to F23 is illustrated in the form of convergence curves in Figure 3. D. EVALUATION OF CEC 2017 TEST SUITE RPO's performance in solving optimization problems is evaluated on CEC 2017 test suite. This test suite has thirty benchmark functions consisting of three unimodal functions of C17-F1 to C17-F3, seven multimodal functions of C17-F4 to C17-F10, ten hybrid functions of C17-F11 to C17-F20, and ten composition functions of C17-F21 to C17-F30. The C17-F2 function has been excluded from the simulation studies due to its unstable behavior. The full description of CEC 2017 test suite is provided in [86]. The implementation results of RPO and the competitor algorithms on CEC 2017 test suite are reported in Table 5. Based on the optimization result, RPO is the first best optimizer for functions C17-F1, C17-F4 to C17-F8, C17-F10 to C17-F21, C17-F23, C17-F24, and C17-F26 to C17-F30. The performance of RPO and the competitor algorithms in solving the CEC 2017 test suite is drawn as boxplot diagrams in Figure 4. 
Analysis of the simulation results shows that RPO has provided better results for most of the benchmark functions of the CEC 2017 test suite and overall, by winning the first rank, it has provided superior performance over the competitor algorithms in solving the CEC 2017 test suite.
E. STATISTICAL ANALYSIS
In this subsection, a statistical analysis of the performance of RPO and the competitor algorithms is presented to determine whether RPO has a significant statistical superiority or not. For this purpose, the Wilcoxon rank sum test [87] is employed, which is a non-parametric statistical test used to determine the significance of the difference between the averages of two data samples. In the Wilcoxon rank sum test, an index called the p-value is utilized to evaluate whether RPO is significantly superior to any of the competitor algorithms from a statistical point of view. The results of the statistical analysis on the performance of RPO and the competitor algorithms are reported in Table 6. Based on the results, in cases where the p-value is less than 0.05, RPO has a significant statistical superiority over the corresponding competitor algorithm.
V. RPO FOR REAL-WORLD APPLICATIONS
In this section, the effectiveness of the proposed RPO approach for solving optimization problems in real-world applications is tested. In this regard, RPO is implemented on four engineering design problems.
A. PRESSURE VESSEL DESIGN PROBLEM
Pressure vessel design is an engineering minimization problem with the aim of reducing the design cost. The schematic of this design is presented in Figure 5. The pressure vessel design mathematical model is as follows [88]: The results of implementing RPO and the competitor algorithms for solving the pressure vessel design problem are reported in Tables 7 and 8. Based on the obtained results, RPO has provided the optimal solution of this design with the design values equal to (0.778027, 0.384579, 40.31228, 200) and the corresponding objective function value equal to (5882.895). The convergence curve of RPO while achieving the solution for pressure vessel design is drawn in Figure 6. Based on the simulation results, RPO has provided superior performance in pressure vessel design optimization compared to the competitor algorithms.
B. SPEED REDUCER DESIGN PROBLEM
Speed reducer design is an engineering minimization problem with the aim of reducing the weight of the speed reducer. The schematic of this design is illustrated in Figure 7. The mathematical model of the speed reducer design is as follows [89], [90]: Speed reducer design optimization results using RPO and the competitor algorithms are reported in Tables 9 and 10. Based on the obtained results, RPO has provided the optimal solution of this design with the design values equal to (3.5, 0.7, 17, 7.3, 7.8, 3.350215, 5.286683) and the corresponding objective function value equal to (2996.348). The RPO convergence curve during solving of the speed reducer design is drawn in Figure 8. Analysis of the simulation results shows that RPO is superior to the competitor algorithms by providing better results in the optimization of the speed reducer design.
C. WELDED BEAM DESIGN PROBLEM
Welded beam design is a real-world application with the aim of minimizing the fabrication cost of the welded beam. The schematic of this design is shown in Figure 9.
The mathematical model of the welded beam design is as follows [40]: The results of employing RPO and the competitor algorithms in handling the welded beam design problem are presented in Tables 11 and 12. Based on the obtained results, RPO has provided the optimal solution of this design with the design values equal to (0.20573, 3.470489, 9.036624, 0.20573) and the corresponding objective function value equal to (1.72468). The convergence curve of RPO while reaching the solution for welded beam design is drawn in Figure 10. As is evident from the simulation results, RPO has a higher ability compared to the competitor algorithms in dealing with the welded beam design problem.
D. TENSION/COMPRESSION SPRING DESIGN PROBLEM
Tension/compression spring design is a real-world application aimed at minimizing the weight of a tension/compression spring. The schematic of this design is presented in Figure 11. The mathematical model of the tension/compression spring design is as follows [40]: The implementation results of RPO and the competitor algorithms for the tension/compression spring design problem are reported in Tables 13 and 14. Based on the obtained results, RPO has provided the optimal solution of this design with the design values equal to (0.051689, 0.356718, 11.28897) and the corresponding objective function value equal to (0.012602). The convergence curve of RPO while achieving the optimal solution for tension/compression spring design is drawn in Figure 12. As can be concluded from the simulation results, RPO has provided a more effective performance compared to the competitor algorithms in solving the tension/compression spring design problem.
VI. CONCLUSION AND FUTURE WORKS
In this paper, a new bio-inspired metaheuristic algorithm called Red Panda Optimization (RPO) was introduced, which can be applied to solve optimization problems. The fundamental inspiration of RPO is the simulation of the behavior of red pandas when foraging and their ability to climb trees to rest. The implementation steps of RPO were described and mathematically modeled in two phases of exploration and exploitation. The effectiveness of RPO in solving optimization problems was evaluated considering fifty-two benchmark functions consisting of unimodal, high-dimensional multimodal, and fixed-dimensional multimodal types, as well as the CEC 2017 test suite. The optimization results of the unimodal functions indicated the high ability of RPO in local search and exploitation. The optimization results of the multimodal functions showed that RPO has a high ability in global search and exploration. Also, the optimization results of the CEC 2017 test suite showed the high capability of the proposed RPO approach in providing simultaneous exploration and exploitation during the search process. The results obtained from the implementation of RPO were compared with the performance of twelve well-known metaheuristic algorithms. The simulation results showed that the proposed RPO approach, by balancing exploration and exploitation during the search process, has provided superior performance over the competitor algorithms.
Based on the simulation results, the proposed RPO approach provided better results compared to the competitor algorithms in 100% of unimodal functions, 100% of high-dimensional multimodal functions, 100% of fixed-dimensional multimodal functions, and 86.2% of CEC 2017 test suite benchmark functions. In addition, the implementation of RPO on four engineering design problems showed that the proposed algorithm has a high ability to solve optimization problems in real-world applications. Following the introduction of the proposed RPO approach, several research paths are activated for further studies. The development of binary and multi-objective versions of RPO is one of the most significant research potentials in this regard. The use of RPO for solving optimization problems in various fields of science as well as optimization tasks in real-world applications is one of the other suggestions of this paper for future investigations.
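As a small numerical cross-check of the pressure vessel result reported in Section V-A above: the excerpt cites [88] but does not reproduce the cost model, so the sketch below uses the widely cited four-variable pressure vessel cost function as an assumption. Evaluated at the reported design (0.778027, 0.384579, 40.31228, 200), it returns a value close to the reported 5882.895.

```python
# Hypothetical check: the standard pressure vessel cost function (assumed, not taken from [88]).
def pressure_vessel_cost(x1, x2, x3, x4):
    # x1: shell thickness, x2: head thickness, x3: inner radius, x4: cylinder length
    return (0.6224 * x1 * x3 * x4
            + 1.7781 * x2 * x3**2
            + 3.1661 * x1**2 * x4
            + 19.84 * x1**2 * x3)

print(pressure_vessel_cost(0.778027, 0.384579, 40.31228, 200))  # ~5882.9
```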
8,435
2023-01-01T00:00:00.000
[ "Engineering", "Computer Science" ]
The orbit method for Poisson orders A version of Kirillov's orbit method states that the primitive spectrum of a generic quantisation $A$ of a Poisson algebra $Z$ should correspond bijectively to the symplectic leaves of Spec$(Z)$. In this article we consider a Poisson order $A$ over a complex affine Poisson algebra $Z$. We begin by defining a stratification of the primitive spectrum Prim$(A)$ into symplectic cores, which should be thought of as families of coherent symplectic leaves on a non-commutative space. We define a category $A$-$\mathcal{P}$-Mod of $A$-modules adapted to the Poisson structure on $Z$, and we show that when the symplectic leaves of $Z$ are Zariski locally closed and $Z$ is regular, there is a natural homeomorphism from the spectrum of annihilators of simple objects in $A$-$\mathcal{P}$-Mod to the set of symplectic cores in Prim$(A)$ with its quotient topology. Applications of this result include a classification of annihilators of simple Poisson $Z$-modules when $Z =\mathbb{C}[\mathfrak{g}^*]$ where $\mathfrak{g}$ is the Lie algebra of a complex algebraic group, or when $Z$ is a classical finite $W$-algebra. The homeomorphism is constructed by defining and studying the Poisson enveloping algebra $A^e$ of a Poisson order $A$, an associative algebra which captures the Poisson representation theory of $A$. When $Z$ is a regular affine algebra we prove a PBW theorem for the enveloping algebra $A^e$ and use this to characterise the annihilators of simple Poisson modules in several different ways: we show that the annihilators of simple objects in $A$-$\mathcal{P}$-Mod, the Poisson weakly locally closed, Poisson primitive and Poisson rational ideals all coincide. This last statement can be seen as a semiclassical version of the Dixmier--M{\oe}glin equivalence. The orbit method Kirillov's orbit method appears in a wide variety of contexts in representation theory and Lie theory, and is occasionally referred to as a philosophy rather than a theory, on account of the fact that it serves as a guiding principle in many cases where it cannot be formulated as a precise statement. The original manifestation of the orbit method states that characters of simple modules for Lie groups can be expressed as normalisations of Fourier transforms of certain functions on coadjoint orbits [26], but perhaps the most concrete algebraic expression of the orbit method is a well-known theorem of Dixmier which asserts that when G is a complex solvable algebraic group and g = Lie(G), the primitive ideals of the enveloping algebra U (g) lie in natural one-to-one correspondence with the set-theoretic coadjoint orbit space g * /G; see [8,Theorem 6.5.12]. Dixmier's theorem fails for complex simple Lie algebras [16,Remark 9.2(c)]; however, progress has been made recently by Losev [28] using techniques from deformation theory to show that g * /G canonically maps to Prim U (g), and the map is an embedding in classical types. The image consists of a certain completely prime ideals and conjecturally it is always injective. The Kirillov-Kostant-Souriau theorem asserts that the coadjoint orbits are actually the symplectic leaves of the Poisson variety g * , and so a broad interpretation of the orbit method philosophy is the following: suppose that Z is a Poisson algebra and A is a quantisation of Z, then the primitive spectrum Prim(A) should correspond closely to the set of symplectic leaves of Spec(Z); indeed, a slightly more general principle was suggested by Goodearl in [16, § 4.4]. 
There are several examples of quantum groups and quantum algebras where the correspondence we allude to here actually manifests itself as a bijection, or better yet a homeomorphism, once the set of leaves is endowed with a suitable topology; the reader should refer to [16] where numerous correspondences of this type are surveyed. Poisson orders and their modules In deformation theory, Poisson algebras arise as the semi-classical limits of quantisations, as we briefly recall. If A is a torsion-free C[q]-algebra where q is a parameter, such that A 0 := A/qA is commutative, then A 0 is equipped with a Poisson bracket by setting {π(a 1 ), π(a 2 )} := π(q −1 [a 1 , a 2 ]), (1.1) where π : A → A 0 is the natural projection and a 1 , a 2 ∈ A. Of course, we do not need to assume that A 0 is commutative to obtain a Poisson algebra since formula (1.1) endows the centre Z(A 0 ) with the structure of a Poisson algebra regardless. In fact, something stronger is true: by choosing π(a 1 ) ∈ Z(A 0 ) and π(a 2 ) ∈ A 0 formula (1.1) endows A 0 with a biderivation which restricts to a Poisson bracket {·, ·} : Z(A 0 ) × Z(A 0 ) → Z(A 0 ). In [6] Brown and Gordon axiomatised this structure in cases where A 0 is a Z(A 0 )-module of finite type by saying that A 0 is a Poisson order over Z(A 0 ). The precise definition will be recalled in § 2.2, and a slightly more general approach to constructing Poisson orders in deformation theory will be explained in § 2.3. The bracket (1.2) induces a map H : Z(A 0 ) → Der C (A 0 ) and the image is referred to as the set of Hamiltonian derivations of A 0 . In op. cit., they proved some very attractive general results with the ultimate goal of better understanding the representation theory of symplectic reflection algebras. In this paper, we pursue the themes of the orbit method in the abstract setting of Poisson orders. When Z is a Poisson algebra and A is a Poisson order over Z, we define a Poisson A-module to be an A-module with a compatible action for the Hamiltonian derivations H(Z); see § 2.2. In the case where A = Z, these modules are closely related to D-modules over the affine variety Spec(Z) (see Remark 2.4), and they have appeared in the literature many times (see [1,10,19,27,32], for example). In the setting of Poisson orders, a similar category of modules was studied in [35]. (1. 3) The set of annihilators of simple Poisson A-modules will always be equipped with its Jacobson topology. Symplectic cores versus annihilators of simple Poisson modules A primitive ideal of A is the annihilator of a simple A-module and the set of such ideals equipped with their Jacobson topology is called the primitive spectrum, denoted as Prim(A). It is often the case that simple A-modules cannot be classified but Prim(A) can be described completely, which offers good motivation for studying primitive spectra. The Poisson core of an ideal I ⊆ A is the largest ideal P(I) of A contained in I which is stable under the Hamiltonian derivations, and we define an equivalence relation on the set Prim(A) by saying I ∼ J if P(I) = P(J). The equivalence classes are called the symplectic cores of Prim(A), the set of symplectic cores is denoted by Prim C (A) and the symplectic core of Prim(A) containing I is denoted by C(I). We view Prim C (A) as topological space endowed with the quotient topology. 
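The symplectic core construction described above can be summarised compactly; the display below is a sketch of the definitions as stated in the prose (the notation follows the text, but the display itself is not the paper's):
\[
P(I) = \text{the largest Poisson ideal of } A \text{ contained in } I, \qquad I \sim J \iff P(I) = P(J),
\]
\[
C(I) = \{\, J \in \operatorname{Prim}(A) \;:\; P(J) = P(I) \,\}, \qquad \operatorname{Prim}_C(A) = \operatorname{Prim}(A)/\sim \ \text{(with the quotient topology)}.
\]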
In case Z is an affine Poisson algebra such that Spec(Z) has Zariski locally closed symplectic leaves, Proposition 3.6 in [6] shows that the symplectic leaves coincide with the symplectic cores of Spec(Z); see also Proposition 2.7. Thus, the cores of Prim(A) can occasionally be regarded as non-commutative analogues of symplectic leaves. The hypothesis that the symplectic leaves of Spec(Z) are algebraic can be replaced with something strictly weaker; see Remark 6.2. Our theorem has obvious parallels with Joseph's irreducibility theorem [23], which states that for g a complex semisimple Lie algebra and M a simple g-module, the variety {χ ∈ g * | χ(gr Ann U (g) (M )) = 0} contains a unique dense nilpotent orbit. Other closely related results can be found in [13,29], although all of the papers we cite here apply to Poisson structures which have finitely many symplectic leaves -this is a hypothesis we do not require. It is natural to wonder whether our first theorem might serve as a starting point for a new proof of the irreducibility theorem. First Theorem. Suppose that Spec(Z) is a smooth complex affine Poisson variety with Zariski locally closed symplectic leaves, and A is a Poisson order over Z. For every simple Poisson In order to illustrate in what sense our theorem is an expression of the orbit method philosophy, we record the following special case where A = Z = C[g * ] is the natural Poisson structure arising on the dual of an algebraic Lie algebra. that there exists a bijection P-Prim(A) ↔ Prim C (A). The first and second steps are really consequences of our second main theorem, which gives a detailed comparison of different types of H(Z)-stable ideals in Poisson orders, as we now explain. The Poisson-Dixmier-Moeglin equivalence for Poisson orders Let k be any field, Z be an affine Poisson k-algebra and A a Poisson order over Z. As usual Spec(A) denotes the set of prime two-sided ideals, and we have Prim(A) ⊆ Spec(A). The Poisson ideals of A are the two-sided ideals which are stable under the Hamiltonian derivations H(Z). We write P-Spec(A) for the set of all Poisson ideals of A which are also prime. The set P-Spec(A) endowed with its Jacobson topology is referred to as the Poisson spectrum of A, and the elements are known as Poisson prime ideals of A. If A is prime and Z is an integral domain, then we can form the field of fractions Q(Z), and since A is a Z-module of finite type, the tensor product A ⊗ Z Q(Z) is isomorphic to Q(A) the division ring of fractions of A; in particular Q(A) exists. When I ∈ Spec(A), we have I ∩ Z ∈ Spec(Z) (see [30,Theorem 10.2.4], for example) and so we can form the division ring Q(A/I). If I is a Poisson ideal, then the set of derivations H(Z) acts naturally on A/I, and the action extends to an action on Q(A/I) by the Leibniz rule where δ ∈ H(Z) and a, b ∈ A/I with b = 0. The centre of Q(A/I) will be written CQ(A/I) and we define the Poisson centre of Q(A/I) to be the subalgebra Let I ∈ P-Spec(A) be a Poisson prime ideal. We say that: When the symplectic leaves of Z are locally closed in the Zariski topology conditions (I) and (II) hold for Z as a Poisson order over itself. If these equivalent conditions hold for Z, then they also hold for A. The equivalence of (i), (ii) and (iii) is known as the weak Poisson Dixmier-Moeglin equivalence (PDME). It has recently been proven for Poisson algebras by Bell et al. 
[3], and our approach is to lift the theorem to the setting of Poisson orders using the close relationships between prime and primitive ideals in finite centralising extensions. Part (a) is similarly a well-known fact in the setting of Poisson algebras, and the same proof works here. When the Poisson primitive ideals of a Poisson algebra Z are Poisson locally closed, we say that the PDME holds for Z. Generalising this rubric, we shall say that the PDME holds for A when conditions (I) and (II) hold for A. It was an open question from [6] as to whether every affine Poisson algebra satisfies the PDME; however, counterexamples have recently been discovered [3]. We may now rephrase the last sentence of the second theorem: we have shown that if the PDME holds for a complex affine Poisson algebra Z, then it holds for every Poisson order A over Z. The universal enveloping algebra of a Poisson order To conclude our statement of results, it remains to offer some commentary on (b) and (c) of the second main theorem. Statement (c) was proposed in the case A = Z in [32], although the proof contains an error†. The converse was conjectured at the same time and proven in the case where Z is a polynomial algebra in [33]. Since our results are stated in the setting of Poisson modules over Poisson orders, we require new tools and new methods. Our main technique is to define and study the (universal) enveloping algebra of a Poisson order. This is an associative algebra A^e generated by symbols {m(a), δ(z) | a ∈ A, z ∈ Z} subject to certain relations (3.2)-(3.5), such that the category A^e-Mod of left modules is equivalent to the category of Poisson A-modules. Using this construction, we are able to define localisation of Poisson modules over Poisson orders, which is our main tool in proving part (b) of the second main theorem. In order to prove part (c), we show that when Z is regular, A^e is a free (hence faithfully flat) A-module (Corollary 3.14), which implies that the ideals of A^e are closely related to the ideals of A (cf. Lemma 4.1). The fact that A^e is A-free follows quickly from our last main theorem of this paper, which we view as a Poincaré-Birkhoff-Witt (PBW) theorem for the enveloping algebras of Poisson orders. There is a natural filtration A^e = ⋃_{i≥0} F_i A^e, defined by placing the generators {m(a) | a ∈ A} in degree 0 and {δ(z) | z ∈ Z} in degree 1, which we call the PBW filtration of A^e. The associated graded algebra is denoted by gr A^e. The statement and proof of our third and final main theorem are quite similar to Rinehart's PBW theorem for Lie algebroids [34]. Third Theorem. Suppose that Z is affine and regular over a field. Then the natural surjection induced by multiplication in gr A^e is an isomorphism. Structure of the paper We now describe the structure of the current paper. In § 2 we state the definition of a Poisson order: our definition is very slightly different to the one originally given in [6], although a careful comparison is provided in Remark 2. We also prove the equivalence of (I) and (II), as well as the subsequent two assertions of the second main theorem. In § 3 we introduce the enveloping algebra of a Poisson order. We state the universal property in § 3.1 and prove a criterion for A^e to be noetherian. In § 3.2 we use A^e to define and study localisations of Poisson A-modules, whilst in § 3.3 we prove the PBW theorem and state some useful consequences. In § 4 we prove (b) and (c) of the second theorem using the tools developed in § 3.
In § 5 we prove (a) and the equivalence of (i), (ii) and (iii) in the second main theorem. Following [15], we observe that results, such as the PDME, can be studied in the slightly more general context of finitely generated algebras equipped with a set of distinguished derivations, and it is in this setting that we prove the results of § 5. Finally, in § 6 we show that the second theorem implies the first. In § 6.3 we make a careful comparison between Dixmier's bijection g * /G → Prim U (g) and our bijection {annihilators of simple Poisson C[g * ]-modules} → g * /G in the case where g is a solvable, and finally in § 6.4 we discuss some famous examples of Poisson orders arising in deformation theory to which our first main theorem can be applied. We conclude the article by posing some questions about their Poisson representation theory. A discussion of related results and new directions It is worth mentioning that our first main theorem is very close in spirit to a conjecture of Hodges and Levasseur [21, § 2.8, Conjecture 1] which seeks to relate the primitive spectrum of the quantised coordinate ring of a complex simple algebra group O q (G) in the case where q is a generic parameter, to the Poisson spectrum of the classical limit O(G); see [16, § 4.4] for a survey of results. Although the spectra are always known to lie in natural bijection, this bijection is only known to be a homeomorphism in case G = SL 2 (C) and SL 3 (C) [12]. By contrast, our bijection is always a homeomorphism, however our results only apply to these families of algebras when the parameter is a root of unity. It would be natural to attempt to strengthen this comparison. Although our results are fairly comprehensive we expect that part (c) of the second main theorem should hold without the hypothesis that Z is regular, and so the first main theorem should hold true without assuming Spec(Z) is smooth. Note that the symplectic leaves of a singular Poisson variety can be defined, thanks to [6, § 3.5]. This would constitute an extremely worthwhile development, as there are important examples of Poisson orders over singular Poisson varieties, for example, rational Cherednik algebras. At least this should be achievable for Poisson orders over isolated surface singularities using the methods of [27, § 3.4] along with our proof of Theorem 4.2, which only depends upon the PBW theorem for A e . Another motivation for this work is the following: there appear to be deep connections between the dimensions of simple modules of a Poisson order A over Z and the dimensions of its symplectic leaves of Z. We expect that the Poisson representation theory of A will be closely related to the representation theory of A, and so the current paper will lay the groundwork for such relationships to be understood in a broader context. Notations and conventions For the first and second sections, we let k be any field whilst in subsequent sections we shall work over C. When the ground field is fixed, all vector spaces, algebras and unadorned tensor products will be defined over this choice of field. When we say that A is an algebra, we mean a not necessarily commutative unital k-algebra. When we say that A is affine, we mean that it is semiprime and finitely generated. By an A-module we mean a left module, unless otherwise stated. By a primitive ideal we always mean the annihilator of a simple left A-module. The category of all A-modules is denoted by A -Mod and the subcategory of finitely generated A-modules is denoted by A -mod. 
When we say that A is filtered, we mean that there is a non-negative Z-filtration As usual, the associated graded algebra of a filtered algebra is gr A := Furthermore, A is said to be almost commutative if gr A is commutative. Poisson orders and their modules A Poisson algebra is a commutative algebra Z endowed with a skew-symmetric k-bilinear biderivation {·, ·} : Z × Z → Z which makes Z into a Lie algebra. Let A be a Z-algebra which is a module of finite type over Z. We say that A is a Poisson order (over Z) if the Poisson bracket on Z extends to a map [6] is slightly weaker than the one given here, as they only assume property (ii) of H in the case where x, y, a ∈ Z. Our justification for choosing this definition is twofold: firstly, the most interesting examples which arise in deformation theory satisfy these slightly stronger properties; see § 2.3. Secondly the stronger definition suggests a stronger definition for a Poisson A-module, and the enveloping algebra for this category of modules satisfies the PBW theorem of § 3, which is fundamental to all of our results. When Z is a fixed Poisson algebra and A is a Poisson order over Z, we define a Poisson A-module to be an A-module M together with a linear map such that for all x, y ∈ Z, all a ∈ A and all m ∈ M , we have The morphisms of Poisson A-modules are defined in the obvious manner, and the category of all Poisson A-modules will be denoted by A-P -Mod. Since Poisson A-modules are Poisson Z-modules by restriction, we are considering a special class of flat Poisson connections. Remark 2.2. It is not true that simple Poisson A-modules are necessarily finitely generated over A. For example, when A = Z = C[g * ] and g is a simple Lie algebra, it is not hard to see that simple Poisson A-modules annihilated by the augmentation ideal (g) A are the same as simple g-modules, and these are often infinite-dimensional. We thank Ben Webster for this useful observation. Examples of Poisson orders and their modules Every Poisson algebra is a Poisson order over itself. Furthermore, for Z fixed there are several constructions which allow us to construct new Poisson orders over Z from old ones. Let A be a Poisson order over Z. Then: (1) Mat n (A) is a Poisson order for any n > 0, with (2) the opposite algebra A opp is a Poisson order; (3) the tensor product A ⊗ Z B of two Poisson orders is again a Poisson order with The above constructions are very suggestive of a theory of a Brauer group over Z adapted to the theory of Poisson orders. This is a theme we hope to pursue in future work. All other examples of Poisson orders which we will be interested in arise in the context of deformation theory. We follow [24] closely. Let R be a commutative associative algebra and let ( ) ⊆ R be a principal prime ideal, and write k = R/( ). Consider the k-algebra A := A/ A with centre Z and write N for the preimage of Z in A under the natural projection π : A A . For a ∈ N and b ∈ A, we have [a, b] ∈ A and so we may define generalising (1.1). When A is finite over Z , the bracket (2.2) makes Z into a Poisson algebra. If Z ⊆ Z is any Poisson subalgebra such that A is a Z-module of finite type, then A becomes a Poisson order over Z. Notable examples include. (1) When R = Z, = p ∈ Z is any prime number and g Z is the Lie algebra of a Z-group scheme, then U (g Z )/pU (g Z ) ∼ = U (g p ) where g p := g Z ⊗ Z F p and F p := Z/(p). 
It is well known that g p is a restricted Lie algebra and the calculations of [24] show that the p-centre Z p (g) is a Poisson subalgebra of the centre of U (g p ), naturally isomorphic to F p [g * p ] with its Lie-Poisson structure. Since U (g p ) is finite over the p-centre, we see that U (g p ) is a Poisson order over Z p (g). (2) Let R = C[t ±1 ], = (t − q 0 ) for some primitive th root of unity q 0 ∈ C, and A is any of the following: a quantised enveloping algebra of a complex semisimple Lie algebra, a quantised coordinate ring of a complex algebraic group, any quantum affine space. It is well known that the th powers of the standard generators of A generate a central subalgebra Z 0 over which A is a finite module, and (2.2) equips Z 0 with the structure of a complex affine Poisson algebra, so A is a Poisson order over Z 0 . We continue with A a Poisson order over Z and list some elementary examples of Poisson modules. The first example of a Poisson A-module is A, with map ∇ defined by ∇(z)a := {z, a}. If I A is any left ideal which is also Poisson, then both I and the quotient A/I admit the structure of a Poisson A-module. A natural way to construct such ideals is to consider those of the form AI where I Z is any Poisson ideal. A method for constructing Poisson A-modules from Poisson Z-modules occurs as a special case of the following crucial lemma. Proof. To see that ∇ A is well defined, we must check that the kernel of the natural For the rest of the proof, tensor products a ⊗ m will be taken over B. The first axiom of a Poisson A-module follows from the calculation where x, y ∈ Z, a ∈ A and m ∈ M . The second axiom of a Poisson module is a consequence of the next calculation, in which a, b ∈ A and x, m are as before The third axiom of a Poisson module only regards the Lie algebra structure and so follows from the Hopf algebra structure on the universal enveloping algebra of the Lie algebra Z, since A and M are Poisson Z-modules. Remark 2.4. It was observed in [10, Proposition 1.1] that when Z is a symplectic affine, Poisson algebra over C every Poisson Z-module arises from a unique D-module on Spec(Z). Symplectic cores in primitive spectra We continue with an affine Poisson k-algebra Z and a Poisson order A over Z. If S is any collection of ideals of A, then we can endow S with the Jacobson topology by declaring the sets {I ∈ S | J∈S J ⊆ I} to be closed, where S ⊆ S is any subset. We will refer to such a set S as a space of ideals to suggest that we are equipping it with the Jacobson topology. The spaces of prime ideals and primitive ideals of A are denoted as Spec(A) and Prim(A), respectively. A ring is Jacobson if every prime ideal is an intersection of primitive ideals; clearly this property is equivalent to the statement that Prim(A) is a topological subspace of Spec(A), not just a subset. It is well known that Z is Jacobson, since it is affine and commutative, and so it follows from [30, 9.1.3] that A is a Jacobson ring. The H(Z)-stable ideals of Z and A are called Poisson ideals and the space of prime Poisson ideals is called the Poisson spectrum, denoted as P-Spec(Z) and P-Spec(A), respectively. Recall that for any ideal I ⊆ A, the Poisson core P(I) is the largest Poisson ideal contained in I; by [8, 3.3.2], we have P(I) prime whenever I is prime, and the same holds for Z. Lemma 2.5. Let A be a Poisson order over Z and I ⊆ J ⊆ A are any ideals with I Poisson. Denote the quotient map π : A → A/I. We have: Proof. 
To prove (i), it suffices to observe that P(J ∩ Z) ⊆ P(J) ∩ Z ⊆ J ∩ Z, by the definition of P, whilst (ii) follows from the fact that π defines an inclusion preserving bijection between the set of Poisson ideals of A/I and the set of Poisson ideals of A which contain I. Remark 2.6. It is not hard to see that the topology on P-Spec(A) is the subspace topology from the embedding P- Our purpose now is to define the symplectic stratification of the primitive spectrum Prim(A) used in the statement of the first main theorem. Consider the following diagram, which is commutative by part (i) of Lemma 2.5: The vertical arrows denote contraction of ideals I → I ∩ Z. The fibres of the map Prim(Z) → P-Spec(Z) are called the symplectic cores of Prim(Z), and they were first studied by Brown and Gordon in [6]. We define the symplectic cores of Prim(A) to be the fibres of the map Prim(A) → P-Spec(A). For m ∈ Prim(Z) we write C(m) for the symplectic core of m, and for I ∈ Prim(A) we write C(I) for the symplectic core of I. The following result shows that the symplectic cores of Prim(Z) are closely related to the symplectic leaves; the first part was proven in [6, Proposition 3.6], and the second statement in [16,Theorem 7.4(c)]. where the union is taken over all n ∈ Prim(Z) such that L(n) C(m). Thus, we think of the symplectic cores of Prim(A) as being something similar to the symplectic leaves of the primitive spectrum. If the Poisson primitive ideals of A are Poisson locally closed, then we say that the PDME holds for A. Later on in the paper (Lemma 5. Furthermore, if the PDME holds for Z as a Poisson order over itself, then it also holds for A. Proof. If I ∈ Prim(A) then, using Lemma 5.2, {P(I)} is a locally closed subset of P-Prim(A) if and only if the intersection N I properly contains P(I), so (i) ⇔ (ii). We point out that the lemma just cited does not depend on any of the results of this article which precede it, and follows straight from the definitions. It is not hard to see that for I ∈ Prim(A), we have if and only if N I = P(I), from which the equivalence of (ii) and (iii) follows. Now suppose that P(I ∩ Z) P(m) where the intersection is taken over all ideals m ∈ Prim(Z) such that P(I ∩ Z) P(m). Using Lemma 2.5, we deduce that P( where the intersection is taken over all m ∈ Prim(Z) such that I ∩ Z P(m). By the incomparability property over essential extensions [11,Theorem 6.3.8], we see that P(I) P(J) implies P(I ∩ Z) P(J ∩ Z) and so from P(m) ⊆ N I ∩ Z, we deduce that P(I) N I . We conclude from (iii) ⇒ (i) that the PDME holds for A. The universal enveloping algebra of a Poisson order Throughout this entire section, we work over an arbitrary field k. Let Z be a Poisson k-algebra and let A be a Poisson order over Z. Definition and first properties of the enveloping algebra Poisson A-modules can be thought of as modules over a non-associative algebra due to the action of the derivations ∇(Z), and one encounters elementary technical problems with dealing with such modules. For example, if M is a simple Poisson A-module, then it is not necessarily finitely generated over A (cf. Remark 2.2); this contrasts with the situation for simple A-modules where any such module is generated by any non-zero element. To remedy this problem we take a viewpoint which is common in universal algebra: we write down an associative algebra whose module category is equivalent to A-P -Mod and we use this new algebra to study simple Poisson A-modules and their annihilators. 
The Poisson enveloping algebra A e of the Poisson order A over Z is the k-algebra with generators and relations α : A → A e is a unital algebra homomorphism; for all x, y ∈ Z and all a ∈ A. Recall that the Poisson algebra Z is a Poisson order over itself and we write Z e for the enveloping algebra of Z. The algebra Z e has been extensively studied in the mathematical literature, although the first results appeared in [34], since Poisson algebras are examples of Lie-Rinehart algebras. Our next observation follows straight from the relations. There is a natural homomorphism Z e → A e which sends the elements {α(z), δ(z) | z ∈ Z} of Z e to the elements of A e with the same names. Next, we record some criteria for A e to satisfy the ascending chain condition on ideals. Proof. The Artin-Tate lemma shows that when A is finitely generated so too is Z, and so Z is noetherian. It suffices to prove that when Z is noetherian so too are A and A e . The extension Z ⊆ A is centralising in the sense of [30, 10.1.3] and so Corollary 10.1.11 of that book shows that A is noetherian. Now by the relations the map of rings α : A → A e is almost normalising in the sense of [30, 1.6.10] and so the lemma follows from Theorem 1.6.14 of the same book. When a ∈ A e we write ad(a) for the derivation of A e given by b → ab − ba. The following two statements can be proven by induction on (3.4) and (3.5), respectively. Lemma 3.3. For x 1 , . . . , x n ∈ Z and a ∈ A, we have: We define a filtration on A e by placing A in degree 0 and δ(z) in degree 1 for all z ∈ Z. We call the resulting filtration the PBW filtration on A e and as usual we denote the associated graded algebra by gr A e . One of our main tools in this paper is a precise description of gr A e , which we give in Theorem 3.12. For now we record a precursor to that result which will be needed when describing localisations of Poisson modules. where i 1 , . . . , i n ∈ I and j 1 · · · j m lie in J. The same statement holds with the elements α(a) occurring after the elements δ(x). Proof. It follows from relations (3.2) and part (ii) of Lemma 3.3 that the algebra A e is generated by the set {α(a i ), δ(x j ) | i ∈ I, j ∈ J}. Therefore, the lemma will follow from the claim that gr Z e is central in gr A e . This is clear upon examining the top graded components of relations (3.3) and (3.4). We now record the universal property of A e which allows us to view Poisson A-modules as A e -modules. Consider the category U whose objects are triples (B, α , δ ) where B is an associative algebra with unital algebra homomorphism α : A → B and Lie algebra homomorphism δ : Z → B satisfying (3.4) and (3.5), and where the morphisms (B, α , δ ) → (C, α , δ ) between two objects in U are the algebra homomorphisms β : B → C making the diagrams below commute. (1) (A e , α, δ) is an initial object in the category U ; (2) There is a category equivalence Remark 3.6. We may now define the category A-P -Mod, of finitely generated Poisson A-modules, to be the essential image of A e -mod in A-P -Mod under the above equivalence. Localisation of Poisson A-modules It is well known that if S ⊆ Z is any multiplicative subset containing no zero divisors, then the localisation Z S := Z[S −1 ] carries a unique Poisson algebra structure such that the natural map Z → Z S is a Poisson algebra homomorphism. Briefly, this structure is defined by extending the Hamiltonian derivations to Z S via (1.4). The reader may refer to the proof of [25,Lemma 1.3] for the precise formula. 
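For orientation, the extension of the bracket to the localisation is the familiar quotient rule for derivations; the following is a sketch of the shape of that formula (not a verbatim quotation of (1.4) or of the cited lemma):

\[
\{z s^{-1}, w\} = \{z, w\}\, s^{-1} - z\, \{s, w\}\, s^{-2}, \qquad z, w \in Z,\ s \in S,
\]
and, applying this twice,
\[
\{z_1 s_1^{-1},\, z_2 s_2^{-1}\} = s_1^{-1}s_2^{-1}\{z_1, z_2\} - z_2\, s_1^{-1}s_2^{-2}\{z_1, s_2\} - z_1\, s_1^{-2}s_2^{-1}\{s_1, z_2\} + z_1 z_2\, s_1^{-2}s_2^{-2}\{s_1, s_2\}.
\]

In particular each Hamiltonian derivation {z, ·} of Z extends uniquely to Z S , and the Leibniz and Jacobi identities are inherited from Z.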
In the same manner, when a multiplicative set S ⊆ Z consists of nonzero divisors of A, the algebra A S := A ⊗ Z Z S carries a unique structure of a Z S -Poisson order and A S is a faithful Z S -module. Let A e S denote the Poisson enveloping algebra of A S . Now we use the following characterisation of simple A e -modules: they are precisely the modules which are generated by any non-zero element. Pick 0 = s −1 m ∈ M S and let t −1 n be any other element. Since M is a simple A e -module, there is a ∈ A e such that am = n, and it follows that t −1 as(s −1 m) = t −1 n. Hence M S is generated as an A e S -module by any non-zero element, and so M S is a simple Poisson A S -module as required. Remark 3.11. When p ∈ Spec(Z), we adopt the usual convention of writing A p and Z p for the localisations A S\p and Z S\p . When z ∈ Z \ {0} is not nilpotent, we write A z and Z z for the localisations at the multiplicative set {z i | i 0}. A Poincaré-Birkhoff-Witt theorem for the enveloping algebra Our present goal is to describe the associated graded algebra gr A e with respect to the PBW filtration (3.6) in the case where Z is a regular Poisson algebra, that is, when Spec(Z) is a smooth affine variety. Let Ω := Ω Z/k denote the Z-module of Kähler differentials for Z; see [20,Chapter II,§ 8] for an overview. The relations in the enveloping algebra imply that there is a natural map A ⊗ Z S Z (Ω) → gr A e which is surjective. The PBW theorem for Poisson orders takes the following form. Theorem 3.12. Suppose that Z is a regular, affine Poisson algebra over an algebraically closed field k. The following hold. (1) The natural surjective algebra homomorphism gr A e is an isomorphism. (2) There is an isomorphism of (A, Z e )-bimodules (3) There is an isomorphism of (Z e , A)-bimodules The proof will occupy the rest of the current subsection. The approach is modelled on that of [34] where a similar result was proven for enveloping algebras of Lie-Rinehardt algebras. We first prove the theorem in the case where Ω is a finitely generated free Z-module and then use localisation of Poisson orders to deduce the theorem in the case where Ω is locally free, that is, projective. By [20,Theorem 8.15], we know that Ω is a projective Z-module if and only if Z is regular, from which we will conclude the theorem. Suppose that Ω is a free Z-module of finite type, so there exist z 1 , . . . , z n ∈ Z such that d(z 1 ), . . . , d(z n ) is a basis for Ω. Therefore the symmetric algebra S Z (Ω) is free over Z and the ordered products d(z I ) := d(z i1 ) · · · d(z im ) with i 1 · · · i m provide a basis. When I is a sequence 1 i 1 · · · i m n we write |I| = m and write j I if j i 1 . Lemma 3.13. Let Ω be a free Z-module with a finite basis. There is a Poisson whenever j I. Proof. It was first observed by Huebschmann that when Z is a Poisson algebra, Ω carries a natural Lie algebra structure and that (Z, Ω) is a Lie-Rinehardt algebra; see [22,Theorem 3.11]. Therefore we may apply the first part of the proof of [34, Theorem 3.1] to deduce that S Z (Ω) carries a Poisson Z-module structure satisfying (3.11), provided that Ω is free. Using Lemma 2.3 we see that A ⊗ Z S Z (Ω) carries the required Poisson A-module structure. Proof of Theorem 3.12. We start by proving the statement of part (1) of the theorem; however, for the moment we replace the hypothesis that Z is regular with the assumption that Ω is a free Z-module of finite rank. We adopt the notation introduced preceding Lemma 3.13 so that z 1 , . . . 
, z n is a basis for Ω over Z, and we write We have A = F 0 A e ∼ = F 0 A e /F −1 A e ⊆ gr A e and so gr A e is a left A-module. We need to show that the set spans a free left A-submodule of gr A e . Observe that the Poisson A e -module structure defined in Lemma 3.13 makes T := A ⊗ Z S Z (Ω) into a filtered A e -module, and so gr(T ) ∼ = T is a graded gr A e -module. We denote the operation gr A e ⊗ k T → T by u ⊗ a → u · a. Thanks to (3.11) the map ψ : gr A e → T defined by u → u · (1 ⊗ 1) sends δ(z I ) to 1 ⊗ d(z 1 ) i1 · · · d(z n ) in for I = (i 1 , . . . , i n ). Since ψ is A-equivariant and the image of (3.12) is A-linearly independent, we deduce that (3.12) is A-linearly independent, as claimed. This proves part (1) in the case where Ω is a free Z-module. Now we suppose that Z is regular. Then it follows from [20,Theorem 8.15] that Ω is a locally free Z-module in the sense that there is a function r : Spec Z → N 0 such that as Z-modules, for all p ∈ Spec Z. By the previous paragraph, we deduce that the natural is an isomorphism. This shows that there is a commutative diagram of algebra homomorphisms: We point out that the natural map this is a special case of the very general statement that a Z-module M embeds in the product of the localisations over Spec Z. We deduce from the diagram that the natural map gr A e is an injection, hence an isomorphism as required. We now prove (2). There is a surjective homomorphism of (A, Z e )-bimodules Here, we view A as a subalgebra of A e as explained in Remark 3.8 and Z e → A e is the map described in Lemma 3.1. The kernel of φ is an A-linear dependence between the ordered monomials δ(z I ) in A e but by part (1) we know that all such dependences are trivial, whence (2). Part (3) follows by a symmetrical argument. We now list some results which follow easily from Theorem 3.12. We thank the referee for pointing out the proof of freeness in part (iv) of the following result. Corollary 3.14. Suppose that Z is regular and affine. Then the following hold. (i) The natural map Z e → A e from Lemma 3.1 is an inclusion. (ii) If A is a free Z-module, then A e is a free (left and right) Z e -module. (iii) If A is a projective Z-module, then A e is a projective (left and right) Z e -module. (iv) A e is a free (left and right) A-module, hence A e is projective and faithfully flat over A. Proof. The PBW theorems for Z e and A e show that the map Z e → A e is injective on the level of associated graded algebras, proving (i). Part (ii) follows from parts (2) and (3) of the PBW theorem, whilst (iii) is an application of Hom-tensor duality. Part (iv) requires slightly more work, and we begin by showing that A e is a countable direct sum of projective A-module. As we noted many times previously, when Z is regular and affine, we have Ω finitely generated and projective. Write S Z (Ω) = k 0 S k Z (Ω) for the Z-module decomposition into symmetric powers. Since projective modules of finite type are retracts of finite rank free modules, and since symmetric powers S k Z preserve retracts and free modules of finite rank, we see that S k Z (Ω) is projective of finite type for all k 0. If S k Z (Ω) ⊕ Q k = Z n(k) for Z-modules {Q k | k 0} and integers {n(k) ∈ N | k 0}, then we see splits for all k 0, which implies that A e is a direct sum of projective (left) A-modules, hence projective. In this last deduction we have used the fact that F 0 A e ∼ = A is a projective A-module. A symmetrical argument shows that A e is projective also as a right A-module. 
We have actually shown that A e is a direct sum of countably many projective A-modules. It follows that if I ⊆ A is a two-sided ideal, then A e /IA e is not finitely generated as an A-module. In the language of [2], we have that A e is an ℵ 0 -big projective A-module. By Lemma 3.2 we know that A is noetherian, so A e satisfies the hypotheses of [2, Corollary 3.2] and A e is a free left A-module; by symmetry, it is also free as a right A-module. Faithful flatness follows immediately. Poisson primitive ideals versus annihilators of simple Poisson A-modules In this section, we shall prove parts (b) and (c) of the second main theorem which relate the Poisson primitive ideals of a Poisson order to the annihilators of simple modules. For the rest of the paper, the ground field will be the complex numbers C. Poisson primitive ideals are annihilators Let Z be a complex affine Poisson algebra and let A be a Poisson order over Z. We write Proof. The first part follows from part (iv) of Corollary 3.14 and [4, Chapter I, § 3, No. 5, Proposition 8]. For I ∈ I P (A), we have φ(I) := IZ e = Z e I ∈ I(A e ), thanks to relation (3.4) in A e . Furthermore, when J ∈ I(A e ), we see that the derivation ad(δ(z)) stabilises ψ(J) := J ∩ A ⊆ A e for all z ∈ Z. Using (3.4) again, the latter assertion is equivalent to saying that ψ(J) ∈ I P (A). We now prove part (c) of the second main theorem. Proof. Since I is primitive, there is a maximal left ideal L ⊆ A such that I = Ann A (A/L ). We consider the left ideal A e L ∈ I l (A e ) containing A e I and observe that, by Zorn's lemma, there is a maximal left ideal L ∈ I l (A e ) containing A e L and the quotient A e /L is a simple left A e -module. Since L is a proper ideal of A e , it follows that L ∩ A is a proper left ideal of A. By part (ii) of Lemma 4.1 we have L = A e L ∩ A ⊆ L ∩ A and so the maximality of L implies that (4.1) The annihilator Ann A e (A e /L) is the largest two-sided ideal contained in L, and we claim that Ann A e (A e /L) ∩ A = P(I). If we can show that P(I) (1) ⊆ Ann A (A e /L) (2) ⊆ I, then the claim will follow, since we know that P(I) is the largest Poisson ideal contained in I, Annihilators are Poisson rational In order to prove part (b) of the second main theorem, we make a more detailed study of the torsion subset of a simple module. as desired, proving (ii). Set I := Ann Z (M ). We have I ⊆ P(T (M )) and we now prove that this is an equality. According to [8, 3.3 .2], we have According to (4.4), this set is equal to We claim that (4.6) is equal to {z ∈ Z | z∇(x 1 ) · · · ∇(x n )m 0 = 0 for all x 1 , . . . , x n ∈ Z, n 0}, (4.7) where ∇ : Z → End C (M ) is the structure map of the module M . To prove they are equal, we define . , x n ∈ Z, k n 0}; J k := {z ∈ Z | z∇(x 1 ) · · · ∇(x n )m 0 = 0 for all x 1 , . . . , x n ∈ Z, k n 0}, and we show that I k = J k for all k 0. The case k = 0 is trivial and so we prove the case k > 0 by induction. By part (i) of Lemma 3.3 and part (iii) of Lemma 3.5, we have The right-hand side of (4.8) is a sum of expressions of the form where {j 1 , . . . , j n } = {1, . . . , n} and 0 p n, and there is a unique summand in (4.8) with p = 0, in which case (j 1 , . . . , j n ) = (n, n − 1, . . . , 1). If z∇(x 1 ) · · · ∇(x n )m 0 = 0 for all x 1 , . . . , x n ∈ Z, k n 0, then it follows immediately that (4.8) vanishes, whence J k ⊆ I k . Conversely, if z ∈ I k , then z ∈ J k−1 by the inductive hypothesis, and so we deduce z∇(x k ) · · · ∇(x 1 )m 0 = 0 for all x 1 , . . . 
, x k ∈ Z from our description of the summands occurring in (4.8). This shows that I k ⊆ J k . Since (4.6) is equal to k 0 I k and (4.7) is equal to k 0 J k , we have proven that P(T (M )) is given by (4.7). It follows that this ideal annihilates Z e m 0 . By Lemma 3.4 we see that M = A e m 0 = AZ e m 0 . If z ∈ P(T (M )), then since z is central in A we have zM = A(zZ e m 0 ) = 0 and we have shown that P(T (M )) = I. This proves (iii). We are ready to prove part (b) of the second main theorem. Let ab −1 ∈ C P Q(A/I) (defined in (1.5)) with b ∈ Z/Z ∩ I. We claim that ab −1 has a representative such that b / ∈ T (M ). To see this, suppose that b ∈ T (M ) and consider the ideal It is not hard to see that: This shows that I is rational and completes the proof. Remark 4.5. (i) It seems credible that the hypothesis Z is regular can be removed from Theorem 4.2; see the remarks in § 1.7 for a suggested approach in some special cases. (ii) The hypothesis that Z is affine is necessary in the statement of Theorem 4.4, as the following example shows. We are grateful to Sei-Qwon Oh for explaining this to us, and permitting us to reproduce it here. Let The weak Poisson-Dixmier-Moeglin equivalence Once again the ground field is C. In this section we prove (a) and the equivalence of (i), (ii) and (iii) from the second main theorem. Δ-ideals in Δ-algebras It will be convenient to work in a slightly more general context than the setting of Poisson orders over affine algebras: we do not need to assume that the derivations H(Z) arise from a Poisson structure in order to state and prove that Poisson weakly locally closed, Poisson primitive and Poisson rational ideals all coincide. We proceed by stating all of the notations needed, which will remain fixed throughout the current section. Let A be a finitely generated semiprime noetherian C-algebra which is a finite module over some central subalgebra Z. By the Artin-Tate lemma, it follows immediately that Z is an affine algebra. The centre of A will be written C(A) for the current section. We continue to denote the primitive and prime spectra of A by Prim(A) and Spec(A), endowed with their Jacobson topologies (cf. § 2.4). We fix for the entire section an arbitrary subset Δ ⊆ Der C (A) such that Δ(Z) ⊆ Z, and we remark that we do not need to assume that Δ is a Lie algebra, or even a vector space in what follows. When I is any subset of A, we write Δ(I) ⊆ I whenever δ(I) ⊆ I for all δ ∈ Δ. We say that an ideal I of A is a Δ-ideal if Δ(I) ⊆ I. For every ideal I ⊆ A, we consider the Δ-core of I, denoted as P Δ (I), which is the unique maximal two-sided Δ-ideal of A contained in I. It is easy to see that such an ideal exists and is unique since it coincides with the sum of all Δ-ideals contained in I. The Δ-primitive ideals of A are the ideals An ideal is called Δ-prime if whenever J, K are Δ-ideals satisfying JK ⊆ I we have J ⊆ I or K ⊆ I. The Δ-spectrum of A is the space of all Δ-prime ideals, equipped with the Jacobson topology, denoted by Spec Δ (A). Lemma 5.1. The following hold. (2) If I is a Δ-ideal and I 1 , . . . , I n are the minimal prime ideals over I, then I 1 , . . . , I n are Δ-ideals. ( The following result was first proven for Poisson algebras in [32]. Proof. Let I be Δ-locally closed. By [30, Lemma 9.1.2(ii), Corollary 9.1.8(i)] we know that A is a Jacobson ring, and so I = s∈S I s for some index set S where each I s is a primitive ideal of A. From part (4) of Lemma 5.1, we deduce that I = s∈S P Δ (I s ). 
Now by Lemma 5.2 it follows that I = I s for some s ∈ S, hence I is Δ-primitive. We now prove (2), so suppose that I = P Δ (J) is Δ-primitive, and that J ⊆ A is primitive. Since J is primitive, Dixmier's lemma tells us that J 0 is a maximal ideal of Z. Since A is a finite module over Z, it follows that A/J 0 A is finite-dimensional over C, thus C is the only subfield of A/J 0 A. Hence once we have proven the existence of such an embedding, the lemma will be complete. After replacing A by A/I we can assume that P Δ (J) = 0 and show that C Δ (Q(A)) → A/J 0 A. Since A is prime and finite over its centre, we have Q(A) ∼ = A ⊗ Z Q(Z) (cf. § 1.4) and this isomorphism is Δ-equivariant. Now if a ⊗ z −1 ∈ C Δ (A ⊗ Z Q(Z))), then by (1.4) we have a ⊗ z −1 = δ(a) ⊗ δ(z) −1 for all δ ∈ Δ. Since J contains no non-zero Δ-ideals, we can use this observation repeatedly to find a representative a 1 ⊗ z −1 1 of a ⊗ z −1 such that z 1 / ∈ J. In other words, we have a ⊗ z −1 ∈ A ⊗ Z Z J0 where Z J0 denotes the localisation of Z at the prime J 0 Z. We have and so there is a map The composition is necessarily an embedding since C Δ (Q(A)) is a field. The Δ-rational ideals are the Δ-primitive ideals Now we suppose that S ⊆ Z is a multiplicative subset, so that the localisation ZS −1 may be defined. Notice that S is also a multiplicative subset of A and the ring AS The lemma leads directly to a crucial proposition, which is probably well known. Proposition 5.5. The following hold. (1) Extension and contraction of ideals define inverse bijections between the sets Spec S (A) and Spec(AS −1 ). (3) When AS −1 is countably generated, these bijections also preserve the set of all primitive ideals. Proof. It is a fact, easily checked using part (i) of the previous lemma, that contraction through a central extension of rings preserves prime ideals. For the reader's convenience, we check that extension sends Spec S (A) to Spec(AS −1 ). Pick P ∈ Spec S (A) and suppose that IJ ⊆ P e . Then I c J c ⊆ P ec = P by parts (ii) and (iii) of the previous lemma, and so we may assume I c ⊆ P by primality. Using part (i) of that same lemma once again I = I ce ⊆ P e and so ideals in Spec S (A) extend to Spec(AS −1 ) as claimed. Now apply all three parts of the previous lemma to deduce that extension and contraction are inverse bijections on prime ideals, proving (1) of the current proposition. The fact that the Δ-ideals are preserved is an immediate consequence of the Leibniz rule for derivations. The statement regarding primitive ideals requires slightly more work, as we now explain. Suppose that M is a simple A-module with I = Ann A (M ) satisfying I ∩ S = ∅. We claim that MS −1 := M ⊗ Z ZS −1 is a simple non-zero AS −1 -module. The kernel of map M → M [S −1 ] consists of m ∈ M such that sm = 0 for some s ∈ S. If such an m = 0 exists, then M = Am and so sM = 0, meaning s ∈ I ∩ S. Since this is not the case, MS −1 is nonzero and we conclude that it is also simple over AS −1 , by essentially the same argument as we used in the proof of Proposition 3.10. What is more, the reader can easily verify that Ann A (M ) = A ∩ Ann AS −1 (MS −1 ). This shows that every primitive ideal in Spec S (A) is equal to J ∩ A for some J ∈ Prim(AS −1 ). We claim that whenever M is a simple AS −1 -module, there exists a simple A-submodule N ⊆ M . To show that such a simple A-submodule N ⊆ M exists, we observe that AS −1 is a countably generated C-algebra and hence it satisfies the endomorphism property [30, Proposition 9.1.7]. 
Since A is a finite module over Z, it follows that AS −1 is a finite ZS −1 -module, say certain elements a 1 , . . . , a t ∈ AS −1 . Therefore, for any 0 = m ∈ M , Recall that we say that A is an essential extension of Z, provided that every non-zero ideal of A intersects Z non-trivially. Lemma 5.6 [11,Theorem 6.3.8]. If A is a prime C-algebra and a finite extension of a central subalgebra Z, then A is an essential extension of Z. Proof. We suppose that J is a Δ-prime ideal, that C Δ (Q(A/J)) = C and we aim to find a primitive ideal I ⊆ A with P Δ (I) = J. After replacing A by A/J and replacing Z by Z/Z ∩ J, we see that it is sufficient to suppose that C Δ (Q(A)) = C and find a primitive ideal I ⊆ A with P Δ (I) = (0). By part (3) of Lemma 5.1, we may adopt the hypothesis that A is prime. Since C Δ (Q(Z)) → C Δ (Q(A)), it follows that C Δ (Q(Z)) = C. Let M be the set of a minimal non-zero Δ-prime ideals of A. We claim that M is countable. First of all notice that, since A is a finite extension of Z, there are finitely many prime ideals of A lying over each prime ideal of Z. For the reader's convenience we sketch the proof of this fact. If p ∈ Spec(Z) is any prime ideal, then the ideal pA is not necessarily prime, but since A is a noetherian ring, there are finitely many prime ideals P 1 , . . . , P m of A over pA. Suppose that Q ∈ Spec(A) is any ideal with Q ∩ Z = p. Then it follows that Q contains one of the minimal primes P 1 , . . . , P m . We may suppose P 1 ⊆ Q. Since A/P 1 is an essential extension of Z/P 1 ∩ Z, we conclude from Lemma 5.6 that either Q = P 1 or the image of Q in A/P 1 intersects Z/P 1 ∩ Z non-trivially. By assumption Q ∩ Z = p = P 1 ∩ Z and so the latter does not hold, hence we may conclude Q = P 1 . This proves that there are finitely many primes of A lying over p. Now in order to prove that M is countable, it suffices to show that Z contains only countably many minimal non-zero Δ-prime ideals. This follows from the argument given in [3, Theorem 3.2], using the fact that C Δ (Q(Z)) = C. Now we may enumerate M = {P 1 , P 2 , P 3 , . . .} and write p i := P i ∩ Z for all i = 1, 2, 3, . . .. By assumption A is prime and so thanks to Lemma 5.6, we see that A is an essential extension of Z. In particular {p 1 , p 2 , p 3 , . . .} are all non-zero. Choose non-zero elements {s 1 , s 2 , s 3 , . . .} with s i ∈ p i and let S be the multiplicative subset of S generated by {s 1 , s 2 , s 3 , . . .}. Note that AS −1 is countably generated, so we are in a position to apply every conclusion of Proposition 5.5. Let I ∈ Spec(AS −1 ) be any primitive ideal. Suppose for a moment that I contains some nonzero Δ-prime ideal. Then it must contain a minimal non-zero Δ-prime ideal, which we may denote by K. It follows from Proposition 5.5 that K ∩ A is a non-zero Δ-prime ideal which intersects S trivially. This is impossible, since S contains a non-zero element of every non-zero Δ-prime ideal of A. We may conclude that P Δ (I) = (0). Now apply part (4) of Lemma 5.1 to see that P Δ (I ∩ A) = P Δ (I) ∩ A = (0). Thanks to the last part of Proposition 5.5, we see that I ∩ A is a primitive ideal of A and so we have shown that (0) is Δ-primitive, as required. The Δ-weakly locally closed ideals are the Δ-rational ideals We now go on to prove that Δ-rational ideals of A enjoy a property which is strictly weaker than being Δ-locally closed. We say that an ideal I ⊆ A is Δ-weakly locally closed if the following set is finite Proof. 
After replacing A with A/I we may show that (0) is Δ-rational if and only if it is Δ-weakly locally closed. We begin by supposing that (0) is not Δ-rational and show that it is not Δ-weakly locally closed. Recall that we identify Q(A) with A ⊗ Z Q(Z), and we identify A and (Z \ {0}) −1 with subsets of Q(A). Suppose that there exists some nonconstant az −1 ∈ C Δ (Q(A)) with 0 = z ∈ Z. We consider the localisation Z z := Z[z −1 ] and . For all c ∈ C we observe that az −1 − c is central in A z and we consider the ideals I c := (az −1 − c)A z . We claim that {I c | c ∈ C} are generically proper ideals. Combined with the fact that az −1 − c is not a zero divisor, we conclude that (az −1 − c) −1 is central in A z . Writing C(A z ) for the centre of A z , we conclude that I c is proper if and only if J c := (az −1 − c)C(A z ) is proper. The Artin-Tate lemma implies that C(A z ) is an affine algebra and so the ideals {J c | c ∈ C} are generically proper, which confirms the claim at the beginning of this paragraph. Since A is a noetherian ring so is A z and so by [30, 4.1.11], there are finitely many minimal prime ideals over I c each of which has height one. If some prime ideal contains both I c and I c for some c, c ∈ C, then it contains c − c and so these prime ideals are all distinct. Now we have found infinitely many Δ-prime ideals of A z , all of height one. Finally we apply parts (1) and (2) of Proposition 5.5 to see that (0) is not Δ-weakly locally closed, as desired. Now we show that if (0) is Δ-rational, it is Δ-weakly locally closed. First of all we observe that (0) is a Δ-rational ideal of Z, thanks to the identification Q(A) = A ⊗ Z Q(Z). Thanks to [3,Theorem 7.1] we know that (0) is Δ-weakly locally closed in Spec(Z). Suppose that p 1 , . . . , p l are the set of those minimal non-zero prime ideals of Z which are Δ-stable. Since there are finitely many prime ideals of A lying above each ideal p 1 , . . . , p l , it suffices to show that each one of the minimal non-zero prime ideals of A which is Δ-stable lies over one of p 1 , . . . , p l . Let P be a minimal non-zero prime of A which is Δ-stable and observe that P ∩ Z is a Δ-prime ideal. We must show that P ∩ Z is minimal amongst non-zero primes of Z. If not then there exists a prime q with q P ∩ Z. It then follows from the going down theorem [30, 13.8.14(iv)] that there is a prime Q of A with Q ∩ Z = q and Q P ; this contradicts the minimality of P and so we deduce that P ∩ Z is a minimal non-zero prime, as required. Proof of the first theorem and some applications In this section, we continue to take all vector spaces over C. Existence of the bijection We begin with some topological observations about the space Prim(A) when A is finite over its centre. . . , J t . Since A is prime and finite over the centre Z, we have that Z ⊆ A is an essential extension by Lemma 5.6, and so we can choose non-zero element i k ∈ I k ∩ Z for k = 1, . . . , s and j k ∈ J k ∩ Z for k = 1, . . . , t. Since A is prime, not one of these elements is nilpotent, and we may form the multiplicative subset S ⊆ Z \ {0} which they generate. The set O 1 ∩ O 2 = {J ∈ Prim(A) | I 1 , . . . , I s , J 1 , . . . , J t ⊂ J} contains the set {J ∈ Prim(A) | J ∩ S = 0} and according to Proposition 5.5, the latter is bijection with Prim(AS −1 ). Note that the hypotheses of that proposition are satisfied because A is a finite module over a finitely generated algebra, hence finitely generated. We know that Prim(AS −1 ) is non-empty by Zorn's lemma, which proves (1). 
Suppose that Prim(A) = k∈K L k (6.1) decomposes as a disjoint union of locally closed subsets and L 1 , L 2 are two of these sets such that L 1 = L 2 = V (I) for some prime I. Then applying part (1) to the open subsets L 1 , L 2 of Prim(A/I), we see that L 1 ∩ L 2 = ∅, and so L 1 = L 2 since the decomposition (6.1) is disjoint. Suppose that Z is a regular complex affine Poisson algebra, so that Z := Spec(Z) is smooth, and that the symplectic leaves of Z are locally closed in the Zariski topology. Thanks to the last part of the second main theorem, we know that the PDME holds for A. By the equivalence of (I) and (II) in the second main theorem, we know that the closures of the symplectic cores of Prim(A) are the sets of the form V (P(I)) = {J ∈ Prim(A) | P(I) ⊆ J}. Combining parts (b) and (c) and the equivalence of (ii) and (iii) from the second main theorem, we deduce that the closures of the symplectic cores are precisely the sets {V(M ) | M simple Poisson A-module}. Applying Lemma 6.1 we see that each symplectic core C(I) ⊆ Prim(A) is uniquely determined as the open core in its closure, which shows that the map from the first main theorem is a bijection, as claimed. Remark 6.2. The same proof works if we replace the assumption that the leaves are algebraic with the assumption that the PDME holds for Z, which is a strictly weaker hypothesis, as may be seen upon comparing [17, Example 3.10(v)] and [15]. Another example of a Poisson algebra Z with non-algebraic symplectic leaves for which the PDME holds is the polynomial ring C[x 1 , x 2 , x 3 ] with brackets {x, y} = ax, {y, z} = −y, {x, y} = 0. According to [16,Example 3.8], the leaves are non-algebraic but the PDME holds [3] since the Gelfand-Kirillov (GK) dimension is 3. The bijection is a homeomorphism Retain the hypotheses of the previous subsection. Now that we have shown that the Poisson primitive ideals of A are the annihilators of simple modules, we may denote the set of such ideals P-Prim(A). Denote the bijection described in the first main theorem by φ : P-Prim(A) → Prim C (A). It remains to prove that φ is a homeomorphism. We must give a precise description of the topology on each of these two spaces and observe that φ sets up a bijection between closed subsets of its domain and codomain. The where the right-hand vertical arrow is constructed in our first main theorem, and all other arrows are defined above. The composition of the horizontal maps are the constant maps, respectively, sending Prim U (g) to the annihilator of the trivial Poisson C[g * ]-module, and sending g * /G to the zero orbit. Quantum groups and open problems In the introduction, we proposed two applications of the first main theorem: a description of the annihilators of simple Poisson C[g * ]-modules when g is a complex algebraic group, and also annihilators of simple modules over the classical finite W -algebra. Both of these examples are Poisson algebras and so they do not use the full force of the first main theorem. We conclude by mentioning some famous examples where A is a Poisson order over a proper Poisson subalgebra Z, satisfying the hypotheses of the main theorem. Let q be a variable and consider the affine C[q]-algebra A generated by X 1 , . . . , X n subject to relations X i X j − qX j X i for i < j. This is the single parameter generic quantum affine space. When we specialise q → where is a primitive th root of unity for some > 1, we obtain a Poisson order. To be precise, the subalgebra Z 0 of A := A/(q − )A generated by {X i | i = 1, . . . 
, n} is central, known as the -centre. Following the observations of § 2.3, we see that Z 0 is a Poisson algebra and A is a Poisson order over Z 0 . There is a (k × ) naction on A rescaling the generators and it is not hard to see that there are only finitely many (k × ) n -stable Poisson prime ideals. It follows from the results of [15] that the PDME holds for Z 0 and so by Remark 6.2, our first main theorem applies to A . In particular, the space P-Prim(A ) of annihilators of simple Poisson A -modules is homeomorphic to the set Prim C (A ). Two other natural examples to consider are the quantised enveloping algebras U q (g) where g is a complex semisimple Lie algebra, and their restricted Hopf duals O q (G); see [5] for more detail. Once again, after specialising the deformation parameter q to an th root of unity, we denote one of these algebras by A . Just as for quantum affine space, the th powers of the standard generators in either of these examples generate a central subalgebra Z 0 . In these cases the symplectic leaves are actually locally closed and so our main theorem applies here too. Problem. For all of the families of algebras discussed above: Of course, Theorem 4.2 shows that such a module exists but the proof is non-constructive. We hope that by constructing modules more explicitly (for example, by generators and relations) as per Problem (2), it will become more apparent how we could hope to deform a simple A /mA -module over the closure of the symplectic core C(m), where m ∈ Prim(Z 0 ).
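As a small worked illustration of the ℓ-centre bracket in the simplest of these families, take quantum affine 2-space (the quantum plane). This is a sketch, assuming the normalisation {π(a), π(b)} := π((q − ε)^{-1}[a, b]) for the specialisation at q = ε; the bracket defined in (2.2) may differ from this by a nonzero scalar.

\[
A = \mathbb{C}[q]\langle X_1, X_2\rangle / (X_1 X_2 - q X_2 X_1), \qquad A_\varepsilon := A/(q-\varepsilon)A, \quad \varepsilon \text{ a primitive } \ell\text{th root of unity}.
\]
Since \(X_1^{\ell} X_2^{\ell} = q^{\ell^2} X_2^{\ell} X_1^{\ell}\), we have \([X_1^{\ell}, X_2^{\ell}] = (q^{\ell^2} - 1)\, X_2^{\ell} X_1^{\ell} \in (q - \varepsilon)A\), and dividing by \(q - \varepsilon\) and evaluating at \(q = \varepsilon\) (where \(\varepsilon^{\ell^2} = 1\)) gives
\[
\{x_1, x_2\} = \ell^2 \varepsilon^{-1}\, x_1 x_2, \qquad x_i := \pi\big(X_i^{\ell}\big) \in Z_0 .
\]

Up to this scalar, Z 0 = C[x 1 , x 2 ] carries the log-canonical bracket; its symplectic leaves are the open torus {x 1 x 2 = 0} and the individual points of the two coordinate axes, all of which are locally closed, in line with the discussion above.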
Multi-Component Extension of CAC Systems In this paper an approach to generate multi-dimensionally consistent N -component systems is proposed. The approach starts from scalar multi-dimensionally consistent quadrilateral systems and makes use of the cyclic group. The obtainedN -component systems inherit integrable features such as Bäcklund transformations and Lax pairs, and exhibit interesting aspects, such as nonlocal reductions. Higher order single component lattice equations (on larger stencils) and multi-component discrete Painlevé equations can also be derived in the context, and the approach extends to N -component generalizations of higher dimensional lattice equations. Introduction Integrability of nonlinear partial differential equations is of sovereign importance in the study of soliton theory. For discrete equations, especially quadrilateral equations, consistency around the cube (CAC) [12,43,48] provides an interpretation of integrability. A classification of discrete integrable equations with scalar-valued fields was obtained by Adler-Bobenko-Suris (ABS) in [4] and more general classes of scalar equations where classified in [5,13]. The CAC property is also applicable to higher order lattice equations by introducing suitable multi-component forms [24,52,57]. Further examples of multi-component integrable systems can be found in the literature [34,35,36,50]. However, general classification results have not been obtained yet. For CAC equations, the equations on the side faces of the consistent cube can be interpreted as an auto-Bäcklund transformation (BT). CAC also enables one to construct Lax pairs [14,43], as well as to find soliton solutions [27,28,29]. Whereas the classification in [4] requires the equations on the six faces of the cube to be the same, in [7] alternative auto-BTs were given for several ABS equations, giving rise to consistent systems where the equations on the side faces are different from the equation on the top and bottom of the cube. Moreover, (non-auto) BTs between distinct equations were also provided, corresponding to consistent systems with different equations on the top and the bottom faces of the cube. Other classifications of CAC systems with asymmetrical properties, other relaxations, and 3D affine linear lattice equations with 4D consistency have been also considered [5,6,13,25,53]. We will refer to a system of equations which is consistent on a cube, as a cube system. In Appendix B of the PhD. Thesis of J. Atkinson [8], multi-component versions of CAC scalar systems were introduced under the name "The trivial Toeplitz extension". Atkinson trivially extends a scalar equation for a field u to an N -component system for fields u 1 , . . . , u N , and then applies the transformation u i (l, m) → u i+l+m mod N (l, m). He remarks that the resulting system is trivial (indeed the inverse transformation decouples it), and that extensions of multi-dimensional consistent scalar equations are multi-dimensional consistent (and hence would emerge in classifications of multi-component discrete integrable systems). In [22], (N = 2)-component ABS equations resulting from such an extension were investigated. Here the CAC property (with affine linearity, D4 symmetry and the tetrahedron property) of these coupled systems was established, and solutions provided. In this paper, we consider more general (but still trivial) extensions of not only scalar equations, but also of cube systems. 
For scalar equations these extension take the (non-Toeplitz) form with a, b ∈ Z. We will provide Lax pairs, BTs, solutions and reductions for multi-component extensions of CAC lattice equations and cube systems, as well as multi-component extensions of 3D lattice equations with 4D consistency. The paper is organized as follows. In Section 2 we construct multi-component extensions of two distinguished kinds of systems, CAC cube systems (Section 2.1) and CAC lattice systems (quadrilateral equations in Section 2.2, and higher dimensional equations in Section 2.3). CAC lattice systems can be consistently posed on a lattice, whereas CAC cube systems need to be accompanied by reflected cube systems and posed on lattices similar to so-called black and white lattices [5,58]. Examples include multi-component extensions of a Boll cube system [13] (Section 2.1), an equation from the ABS list [4], and (auto and non-auto) Bäcklund transformations [7] (Section 2.2.1). We show that multi-component extensions admit Lax-pairs in Section 2.2.2. Assuming D4 symmetry we count the number of different non-decoupled N -component extensions in Section 2.2.3. In Section 3 we provide several kinds of reductions. Nonlocal reductions are given in Section 3.1, reductions to higher order scalar equations are given in Section 3.2, and a reduction to a multi-component Painlevé lattice equation is considered in Section 3.3. In Section 4.1 some particular solutions are given. These can be constructed from N solutions of the scalar equation. A solution for a nonlocal equation is provided in Section 4.2. In Section 5 we summarize and discuss the results, and we point out that particular examples of N -component generalised systems have appeared in the literature in different contexts. Multi-component extension of CAC systems In this section we construct multi-component systems that are consistent around the cube, a.k.a. CAC. We first focus on a single 3D cube with consistent face equations. Multi-component CAC cube systems We will be concerned with quadrilateral equations of the form Q u, u, u, u = 0, (2.1) where we use u and u to denote shifts of u in two different directions. Posing six such equations on a cube yields a general type of system of the form We assume the functions Q, A, B, Q * , A * , B * are affine linear with respect to each variable, and the symbols u, u, u, . . . , u represent the values of the field at the vertices of the cube, see Fig. 1(a). Each equation may depend on additional (edge) parameters but we omit these. The system (2.2) is called CAC if the three values for u calculated from the three starred equations coincide for arbitrary initial data u, u, u, u, i.e., In order to get a multi-component extension of the system (2.2), we consider the vertex symbols u, u, u, u, u, u, u to be N × N diagonal matrices, e.g., We introduce a cyclic group using the generator σ = σ N , defined as the N × N matrix with elements given by Thus, a cyclic transformation of u (a permutation of the components on the diagonal) can be denoted by Note that σ N = I N which is the N × N identity matrix. Lemma 2.1. If the scalar cube system (2.2) is CAC, the following multi-component cube system is CAC as well, where the variables u, u, u, u, u, u, u are diagonal matrices as in (2.4), and the T k i are cyclic transformations as defined in (2.6), with k i ∈ Z (mod N ). Proof . Note that the equations in the system (2.2) are affine linear and the variables denote the values of fields at the vertices of the cube. 
Replacing these variables by diagonal matrices they remain commutative. If the system (2.2) is CAC and u has a unique expression (2.3), it then follows that the system (2.7) is CAC in the sense that u has a unique expression in which the function F is the same as in equation (2.3), and the fields at the vertices are relabelled. In Fig. 1 the cube on the right has equation A u, T k 1 u, T k 3 u, T k 5 u = 0 on its front face, which can be thought of in two distinct ways: as a relabeling of the variable names (which we do in the proof), or, as introducing coupling between different components of the fields at the vertices (which yield multi-component coupled systems of equations). As an example we consider a cube system of Boll [13, equations (3.31), (3.32)]. With N = 2, denoting the field components by u, v (instead of u 1 , u 2 ), and taking k i = 1 2 (1 − (−1) i ) we find the following 2-component cube system (written as vector system instead of as a matrix system): where α, δ 1 , δ 2 are parameters. It is consistent around the cube. Remark 2.2. As in the scalar case, one can not straightforwardly impose the cube system (2.7) on the Z 3 lattice. It needs to be accompanied by 7 other cube systems which are obtained from the original one by reflections. If R i denotes a reflection in the ith direction, e.g., application of R 1 gives the cube system depicted in Fig. 2, then on the cube with center n + 1 2 , m + 1 2 , l + 1 2 one should impose the cube system reflected by R n Multi-component CAC lattice systems In this section we consider CAC lattice systems. The difference with the previous section is that we now require that the lattice equation Q = 0 can be consistently imposed on the entire Z 2 lattice, together with the cubes they are part of. The consequence of this requirement is two-fold: • we have to restrict ourselves to cube systems with A = A * and B = B * , B u, u, u, u = 0, B u, u, u, u = 0, (2.8c) because consistent cubes with Q = 0 on the bottom face need to be glued together so that their common faces carry same equation, A = 0 or B = 0. Note that we want to allow for the possibility that Q = Q * , so that (non-auto) Bäcklund transformations are included in the same framework. • we need to restrict the values the parameters of the extension, k i , can acquire. Theorem 2.3. Suppose that the system (2.8) is CAC in the sense u is uniquely determined by (2.3) in terms of initial values u, u, u, u. Extending u to be a diagonal matrix (2.4), the system is CAC as well, where a, b, c ∈ Z (mod N ), and can be consistently defined on Z 2 ⊗ {0, 1}. Proof . The CAC property follows directly from Lemma 2.1. We have to show that we can consistently define the same cube system on neighboring cubes. Consider two neighboring cubes, as in Fig. 3. Before we can glue them together we need to apply T a to every vertex of the cube on the right. But then we have to establish, e.g., that the shifted system of equations where S n f (n, m) = f (n + 1, m), is the same system of equations as the system Indeed, we have the identity which shows that the system (2.11) is just a rearrangement of the components of the system (2.10). In fact, it can be shown that for any fractional affine linear function g of diagonal N × N matrices (m 1 , . . . , m n ), we have θg(m 1 , . . . , m n )θ −1 = g θm 1 θ −1 , . . . , θm n θ −1 , for any invertible N × N matrix θ. 
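The two ingredients used above — the cyclic generator σ N acting on diagonal matrices by conjugation, and the fact that conjugation commutes with fractional affine-linear functions of commuting matrices — are easy to experiment with numerically. The following is a minimal sketch; the convention chosen for σ N below is an assumption (the explicit formula (2.5) is not reproduced in this extract), and g is a typical fractional affine-linear map (of lpKdV type, cf. the next subsection), used purely as an example.

    import numpy as np

    def cyclic_sigma(N):
        # One possible convention for the cyclic generator sigma_N: sigma[i, (i+1) % N] = 1,
        # so that sigma^N = I_N.
        s = np.zeros((N, N))
        for i in range(N):
            s[i, (i + 1) % N] = 1.0
        return s

    def T(k, m, sigma):
        # Cyclic transformation T^k m = sigma^k m sigma^{-k}; on a diagonal m it permutes the entries.
        sk = np.linalg.matrix_power(sigma, k % sigma.shape[0])
        return sk @ m @ sk.T          # sigma is a permutation matrix, so its inverse is its transpose

    # A typical fractional affine-linear function of commuting (diagonal) matrices:
    # u12 = u - (p - q)(u1 - u2)^{-1}.
    def g(u, u1, u2, p=1.3, q=-0.7):
        return u - (p - q) * np.linalg.inv(u1 - u2)

    N = 4
    sigma = cyclic_sigma(N)
    rng = np.random.default_rng(1)
    u, u1, u2 = (np.diag(rng.normal(size=N)) for _ in range(3))

    print(np.diag(T(1, u, sigma)))    # a cyclic permutation of the diagonal entries of u
    assert np.allclose(np.linalg.matrix_power(sigma, N), np.eye(N))

    # The identity theta g(m1, ..., mn) theta^{-1} = g(theta m1 theta^{-1}, ..., theta mn theta^{-1})
    # used in the proof, here with theta = sigma^2:
    lhs = T(2, g(u, u1, u2), sigma)
    rhs = g(T(2, u, sigma), T(2, u1, sigma), T(2, u2, sigma))
    assert np.allclose(lhs, rhs)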
It follows from the proof that the equations in (2.9) on the right may be simplified, i.e., Q * u, T a u, T b u, T a+b u = 0, A u, T a u, T c u, T a+c u = 0, B u, T c u, T b u, T b+c u = 0. It also follows that in the case where Q * = Q, the cube system can be imposed on the entire Z 3 -lattice. In the case where Q * = Q one needs a second cube system obtained from the first by the reflection R 3 , and impose the reflected system on cubes with center n + 1 2 , m + 1 2 , l + 1 2 with l odd. Remark 2.4. The cubes in Figs. 1(b), 2, and 3 are useful to define the equations which live on the faces of the cubes. However, one should be aware that the field u (which provides the support for equations (2.7) and (2.9) is defined in the usual way, namely u(n, m, l) = u(n + 1, m, l), as in Fig. 1. We do not have u(n, m, l) = T a u(n + 1, m, l). Examples The N -component equation where u is an N × N diagonal matrix (2.4)), will be referred to as the N [a, b] extension of the scalar equation (2.1). Similarly, the multi-component cube system (2.9) will be referred to as the N [a, b, c] extension of the cube system (2.8). In this terminology, the trivial Toeplitz extension introduced in Appendix B of [8] corresponds to the N [1, 1] extension. Discrete Burgers A simple example of a CAC scalar equation is the 3-point discrete Burgers equation [16,39] The parameters in this equation, p, q, are called lattice parameters, p corresponds to the tilde-direction, q corresponds to the hat-direction, and there is a third parameter, r, which corresponds to the bar-direction. The equations on the faces of the corresponding consistent cube are each of the form (2.13) with different dependence on the lattice parameters, i.e., setting The latter form has been investigated in [22]. Explicitly, for the H1 equation, also known as the lattice potential KdV equation (lpKdv), The latter appeared in [14], where the CAC property was used to construct its Lax pair. Here the B equation has the same form as the Q equation, but the A equation is decoupled. Auto-Bäcklund transformations There also exist CAC systems containing two different equations. For example, one can compose a CAC system using the lattice potential modified KdV (lpmKdV, or H3 0 ) equation on the side faces and the discrete sine-Gordon (dsG) equation for the other four faces of the cube (Q, A) [13,26]. Multi-component extension yields the CAC system with their shifts. In particular, we mention the 2[1, 1] extension of the dsG equation whose auto-BT consists of the dsG equation and the lpmKdV equation, which gives rise to an asymmetric Lax-pair, given in Section 2.2.2. The auto-Bäcklund transformations given in Table 2 of [7] provide other instances of the same situation. For example, one can take the ABS equation called Q1 1 as the Q-equation, that is at the bottom face, and Q u, u, u, u; p, q = 0 on the top face. A CAC system is obtained by placing the auto-BT, where the Bäcklund parameter r plays the role of the lattice parameter, on the front face, A u, u, u, u, p, r = 0 on the back face, A u, u, u, u, q, r = 0 on the left face and A u, u, u, u, q, r = 0 on the right face. Such CAC lattice systems can be consistently extended to multi-component CAC lattice systems by virtue of Theorem 2.3. Thus, when Q * = Q the multi-component equations can be interpreted as an auto-BT, mapping one solution u to another solution u. 
The interpretation as an auto-BT holds because the top equation in (2.9) can be rewritten in the required form. We remark that the equation (2.25) (in fact, any auto-BT) is an integrable equation on the Z^2 lattice. The equation (2.25) is not in the ABS list and neither is the sine-Gordon equation, because they are not CAC with copies of themselves. However, they do possess an auto-BT, and hence (non-symmetric) Lax pairs (where the Bäcklund parameter provides the so-called spectral parameter) can be constructed, see Section 2.2.2.

Bäcklund transformations

For (non-auto) Bäcklund transformations, such as the ones given in Table 3 in [7], the equations on the bottom face, Q, and on the top face, Q*, are different. For example, taking Q = H2 and posing A and B, together with their shifted versions, on the side faces, one finds that on the top face the variable u satisfies the Q* = H1 equation (2.17). In the general N-component cube system, denoted N[a, b, c], the side system provides a BT between the N-component H2 system and the N-component H1 system. There are other examples of CAC lattice systems, such as the ones in [25]. These all allow multi-component extension. On the other hand, one can directly write down a Lax pair of the N-component system (2.12) in terms of Lax matrices of the scalar equation. Suppose that (2.8) is a scalar 3D consistent lattice system, and the bottom equation has a 2 × 2 Lax pair (e.g., the one obtained from the BT at hand), where ψ = (ψ_1, ψ_2)^T and L and M are 2 × 2 matrices. Considering N copies of the equation, one obtains a Lax pair for the N-component system in which L and M are the Lax matrices given in (2.29), θ = σ_{2N}^2, and σ_{2N} is a 2N × 2N cyclic matrix defined by (2.5). Note that the gauge transformation Φ ↦ θ^c Φ transforms the Lax pair (2.30) into one which does not depend on c.

Proof. The compatibility condition is equivalent to equation (2.28) with the diagonal matrix u given by (2.4). From the compatibility of (2.30) we find an equation which gives rise to (2.12), as (2.28) arises from (2.33).

Asymmetrical Lax pairs

When equation A is not related to equation B by a standard change in lattice parameters, the corresponding Lax pair for Q is asymmetrical. Similarly, one also finds asymmetry if one constructs a Lax pair for an auto-BT (A), using its auto-BT given by B, Q. For example, the auto-BT (2.23b)–(2.23c) provides an asymmetrical Lax pair for the 2-component dsG equation (2.24). For all the auto-BTs given in [7, Table 2] a superposition principle emerges for solutions of the equation that are related by the auto-BT. This gives rise to a different asymmetrical Lax pair for the auto-BT. The superposition principle for solutions of the lpmKdV equation related by the dsG equation is equation (2.34), which is an lpmKdV equation with p and s interchanged. The cube system with the superposition principle (2.34) on the bottom and top faces and the dsG equation on the side faces, cf. Fig. 4, is consistent, and admits multi-component extension. One can also construct Lax pairs from non-auto BTs [7, Table 3]; however, these will not contain a spectral parameter.

In (2.35) we can consider u and its shifts to be diagonal matrices (2.4). By relabelling we also have Q(u, T^a u, T^b u, T^{a+b} u; p, q) = ±Q(u, T^b u, T^a u, T^{a+b} u; q, p) = ±Q(T^a u, u, T^{a+b} u, T^b u; p, q) = ±Q(T^b u, T^{a+b} u, u, T^a u; p, q). Now we introduce v(n, m) = u(−n, m) and write (2.12) in terms of v. Applying a tilde-shift and using the D4 symmetry, one finds that the systems obtained in this way are all equivalent up to coordinate reflections. Consequently, we can assume 0 ≤ a ≤ b ≤ c ≤ N/2 without loss of generality.
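The reduction to representatives with 0 ≤ a ≤ b ≤ c ≤ N/2 can be checked by brute force. The sketch below is illustrative only (the helper names are not from the paper): for a given N it forms the orbit of each parameter triple (a, b, c) mod N under permutations of the entries and the sign changes coming from coordinate reflections, and confirms that every orbit contains a representative in the stated range.

```python
from itertools import permutations, product

def orbit(triple, N):
    """Orbit of (a, b, c) mod N under permutations of the entries and
    the sign changes a -> -a (mod N) coming from coordinate reflections."""
    out = set()
    for perm in permutations(triple):
        for signs in product((1, -1), repeat=3):
            out.add(tuple((s * x) % N for s, x in zip(signs, perm)))
    return out

def has_canonical_representative(triple, N):
    """Does the orbit contain some (a, b, c) with 0 <= a <= b <= c <= N/2?"""
    return any(a <= b <= c <= N / 2 for (a, b, c) in orbit(triple, N))

N = 7
all_ok = all(has_canonical_representative(t, N) for t in product(range(N), repeat=3))
print(f"every (a, b, c) mod {N} has a representative with 0 <= a <= b <= c <= {N}/2:", all_ok)
```

The same reduction applies to the two-dimensional labels (a, b) of the N[a, b] extensions, which is the form used in the counting that follows.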
In the next theorem, for which we include a proof in Appendix B, we give conditions which decide when a system is decoupled or non-decoupled. The greatest common divisor of integers a, b, . . . , c will be denoted gcd(a, b, . . . , c). For the N[a, b] extensions, we let α_N denote the number of N-component extensions (2.12) that decouple, and we let β_N denote the number of N-component extensions (2.12) that do not decouple. For example, α_6 = β_6 = 5. In the following theorem we give formulas for the functions α_N and β_N. We use notation as follows. Let s be a set. By P(s) we denote the powerset of s, by #s the number of elements in s, and Πs denotes the product of the elements in s; e.g., with s = {1, 2, 3} we have #s = 3 and Πs = 6. For N = 4, 5, . . . , 20 the values of β_N are 3, 5, 5, 9, 9, 12, 13, 20, 15, 27, 24, 28, 30, 44, 33, 54, 42, and we note that α_{2n} + β_{2n} = α_{2n+1} + β_{2n+1} = (n + 1)(n + 2)/2, and α_p = 1 when p is prime. Running through all the primes in P_N, by the inclusion-exclusion principle one finds the formula (2.37) for α_N. Due to (2.39) the formula for β_N is then given by (2.38).

One can include arbitrary coefficients {α_j} in the equations, which can be gauged to any nonzero value [49]. The stencils of these two equations are depicted in Fig. 5. Any 3D CAC lattice equation can be generalised to a multi-component 3D equation which inherits the CAC property. We have the following result.

Multi-component CAC 3D lattice equations

Theorem 2.10. Suppose that the scalar lattice system consisting of the equations Q = 0, A = 0, B = 0, together with their dot-shifted counterparts (the dot denoting the shift in the fourth lattice direction), is consistent around the 4D cube in Fig. 6, i.e., the value of the fully shifted field is uniquely determined by suitably given initial values. Then, after replacing u with the diagonal form (2.4), the corresponding multi-component system is also consistent around the 4D cube. We will call the resulting system (abusing notation) its multi-component extension.

Nonlocal systems

A few years ago, nonlocal integrable systems were introduced by Ablowitz and Musslimani [1]. They studied the nonlocal nonlinear Schrödinger equation i q_t(x, t) = q_{xx}(x, t) ± 2 q^2(x, t) q*(−x, t), where i is the imaginary unit and q* denotes the complex conjugate of q. The equation is called nonlocal as it involves functions which depend on the points −x and x, which are far apart. Most nonlocal integrable systems are continuous or semi-discrete. In this section we show that nonlocal fully discrete integrable equations can be constructed as reductions of 2-component ABS systems, starting from a 2[0, 1] ABS system (2.15).

Higher order equations from eliminations

Multi-component extensions can be reduced to higher order lattice equations by elimination of field components. Examples of higher order lattice equations can be found in [12, 17, 21, 23, 33, 38, 47, 51]. The first instance of such an elimination procedure appeared in the theory of integrable discretisation of holomorphic and harmonic functions [18, 40]. More recently such techniques were applied in discrete integrable systems, e.g., in [33], where multi-quadratic relations were obtained and related to Yang–Baxter difference systems.

The discrete Burgers equation

An example where the elimination can be done by a single substitution is provided by the discrete Burgers equation. Eliminating the variable v in the 2[0, 1] extension (2.14) yields a four-point equation. Equations on similar but larger stencils are easily obtained from N-component extensions.

ABS equations

For ABS equations, the generic form of the function Q is, cf. [56],
Q(u, ũ, û, û̃) = k_1 u ũ û û̃ + k_2 (u ũ û + u ũ û̃ + u û û̃ + ũ û û̃) + k_3 (u ũ + û û̃) + k_4 (u û + ũ û̃) + k_5 (u û̃ + ũ û) + k_6 (u + ũ + û + û̃) + k_7, (3.9) where the coefficients depend on the lattice parameters p, q. Due to the D4 symmetry property Q(u, ũ, û, û̃; p, q) = ±Q(û̃, û, ũ, u; p, q), there exists a function G such that û̃ = G(u, ũ, û; p, q) and u = G(û̃, û, ũ; p, q). This leads to a relation H = 0, (3.12), which is multi-quadratic in each variable. Multi-quadratic CAC equations related to the ABS equations have been studied in the literature, cf. [9, 33]. The multi-quadratic equations in [9, 33] have been written in quadrilateral form. However, considering, e.g., dQ1* [33], equation (3.12) yields a six-point equation; to our knowledge this is a new equation. It can be written as a quadrilateral system by introducing the variables x = ũ − u, y = û − u. Eliminating ψ we arrive at a five-point lattice equation, which is called a discrete Toda-type equation in [2, 3]. In the multiplicative case we have an analogous relation among products of Φ evaluated at neighbouring points, which is again a discrete Toda-type equation after the transformation Φ = e^φ. Explicitly, the 5-point equation derived from 2[1, 1]H1 or 2[1, 1]Q1_0 can be written in closed form; this equation can also be found by elimination of the single shifts in the 7-point equation [46, equation (3)]. In [35] certain a- and m-bond systems lead to double H-type vertex systems, cf. [35, Proposition 6.1]. As mentioned in [35, Remark 6.2], two of these are two-component extensions of the type discussed here, the third is a mixed 2-component system, H1×H2, and they provide Bäcklund transformations for the above 5-point schemes. The other 2[1, 1] ABS equations relate to further 5-point equations of the same type.

Higher order scalar equations from 2[1, 1, 1] AKP and BKP

Here we show that higher order equations can also be obtained from multi-component extensions of 3D lattice equations. One can eliminate u from the 2[1, 1, 1] AKP equation (2.44) as follows. Denote the left-hand side of (2.44a) by E and the left-hand side of (2.44b) by F. First we solve ũ from Ẽ = 0, ū from Ē = 0, û from F = 0 and ū from F = 0. Then we substitute ũ, ū into F/(ṽ v v) and û, û into Ê/(v v v). The difference of the results gives rise to the 12-point equation (3.18). Similar to the 2D case, the coupled system (2.44) can be considered as either a BT or a Lax pair of (3.18). From the 2-component BKP equation (2.45), using a similar elimination scheme, one obtains the 14-point King–Schief equation (3.19), which arose in the study of nondegenerate Cox lattices [37]. Its BT/Lax pair is provided by (2.45), with u acting as an eigenvalue function. The analysis in [37, Section 5] reveals that Cox–Menelaus lattices are intimately related to the AKP equation. We expect that (3.18) will play a similar role in that context as (3.19) plays in the context of nondegenerate Cox lattices. We further note that equation (3.19) relates, by a simple coordinate transformation [55], to an equation that appeared in a completely different setting, namely through the notion of duality, employing conservation laws of the lattice AKP equation [54].

N-component q-Painlevé III equation

The results in Section 2 remain true for non-autonomous multi-dimensionally consistent systems, extending the spacing parameters p → p(n), q → q(m) and r → r(l). There are close relations, cf. [32, 42], between non-autonomous ABS lattice equations and discrete Painlevé equations, exploiting the affine Weyl group.
A particular example is provided by a non-autonomous version of the lpmKdV equation (2.22), which can be reduced to a q-Painlevé III equation by performing a periodic reduction. We use this example to illustrate that such a link extends to the multi-component case. For the non-autonomous lpmKdV equation we use the bottom and front equations on its multi-component consistent cube, i.e., equations (3.20). Imposing the periodic reduction (cf. [32]) on (3.20), and replacing p, q by pq, qq, the first equation (3.20a) is unchanged, but we rewrite it for convenience as (3.22a). For (3.20b), after a tilde/hat-shift and making use of the reduction (3.21), we obtain (3.22b). Then, introducing diagonal matrices f and g, from (3.22a) and (3.22b) we find an N-component q-Painlevé III equation. Taking N = 3 and a = 1, this yields the 3-component q-Painlevé III system.

Solutions

Let w_j, j = 1, 2, . . . , N, be solutions of the scalar equation (4.1), and define u_k(n, m) = w_{k−an−bm}(n, m), (4.2), where the sub-index is taken modulo N. With this substitution the system of equations (2.12) comprises N copies of the scalar equation. Thus, (4.2) provides a solution to (2.12). If (2.12) is non-decoupled, i.e., when gcd(a, b, N) = 1, then u_k will run over {w_j : j = 1, 2, . . . , N} by virtue of Lemma B.1 in Appendix B. The pattern of u_k is depicted in Fig. 8. For the 2[1, 1] ABS equation (2.16), according to (4.2) its solution can be given by (4.3), where each w_i satisfies the scalar equation (4.1). This coincides with the result in [22]. For the 3[1, 1] ABS equation (2.12), solutions can be presented analogously, provided each w_i solves the scalar equation (4.1). Note that (4.2) has the so-called jumping property (cf. [22]), and for (4.3) this property is illustrated by Fig. 9.

Bilinear equations

Many equations in the ABS list have been bilinearized [27]. Suppose a scalar ABS equation (2.28) has a bilinear form H = 0, (4.4), in f, g and their shifts, with transformation u = F(f, g); for example, the H1 equation (2.17) has such a bilinear form with p = −α^2, q = −β^2. Then for the N[a, b] system (2.12) a bilinear form can be given by H(f, g, T^a f, T^a g, T^b f, T^b g, T^{a+b} f, T^{a+b} g) = 0, (4.5), through the transformation u = F(f, g), where f, g are the diagonal forms in (2.27). With respect to solutions, suppose (f_j, g_j) are arbitrary solutions of (4.4). Using them we define f_k(n, m) = f_{k−an−bm}(n, m), g_k(n, m) = g_{k−an−bm}(n, m). Then (f, g) composed of such components will be a solution to (4.5). As an example, the 2[0, 1] H1 equation (2.18) has a bilinear form with transformation u_i = αn + βm + r − g_i/f_i and, with m taken modulo 2, components built from solutions (f_i, g_i) of (4.4). We remark that rational solutions in terms of {x_j} have been obtained for all the ABS equations except Q4 [59, 61]. This implies that rational solutions of nonlocal ABS equations can be derived, which will be explored elsewhere.

Conclusion

We have presented a systematic way to generate multi-component lattice equations which are CAC, by making use of the cyclic group. We note that cyclic matrices have been used in 3-point differential-difference equations [10, 15] and in fully discrete Lax pairs [20] to generate multi-component systems. Although the multi-component extensions we have considered are more general than "the trivial Toeplitz extension" introduced in Appendix B of the PhD thesis of J. Atkinson [8], the key idea is the same: starting from a single-component CAC (2D or 3D) discrete system (2.2), replacing u by a diagonal matrix (2.4), and applying permutations T^k to shifted field components, one obtains a multi-component CAC system (2.9).
Posing the equations on lattices, D4 symmetry, criteria for decoupled and non-decoupled cases, BTs, auto-BTs, Lax pairs, solutions, nonlocal reductions, elimination of components to obtain equations on larger stencils, and reduction to multi-component discrete Painlevé equations have been investigated in detail. Isolated examples of multi-component extensions sporadically appear in the literature in different contexts; we mention the two-component potential KdV system [14], among others. What we have not touched upon is the fact that the multi-component extension can also be applied to systems of equations. For example, the discrete Boussinesq family contains several multi-component CAC lattice systems [24]. These also allow N[a, b] extension, cf. the 2[1, 1] Boussinesq extension given in [22]. Finally, we would like to point out that the multi-component extensions considered here are commutative. Non-commutative lattice equations, which are multi-component generalisations, have also been considered in the literature: 3D matrix discrete integrable systems can be found in [45, equations (1.5)–(1.9)], a non-commutative version of Q1_0 was given in [11, equation (23)], and a matrix version of H1 was obtained in [19, equation (2.1)], cf. [60], where several matrix discrete integrable equations derived from the Cauchy matrix approach were presented.

Φ(T^c f, T^c g) = θ^c Φ(f, g), Φ(T^{a+c} f, T^{a+c} g) = θ^{a+c} Φ(f, g), Φ(T^{b+c} f, T^{b+c} g) = θ^{b+c} Φ(f, g).

B Proof of Theorem 2.8

We first prove a useful lemma. Although we believe it is an elementary result in number theory, we include it for completeness. We denote N = {1, 2, . . .} and N_0 = {0, 1, 2, . . .}.

Lemma B.1. Let a, b ∈ N_0 and N ∈ N be such that gcd(a, b, N) = 1, and let A = {ia + jb + kN : i, j, k ∈ Z}. There are i_0, j_0 ∈ N and k_0 ∈ Z such that i_0 a + j_0 b + k_0 N = 1, (B.1) and hence A = Z.

Proof. Let s_0 = i_0 a + j_0 b + k_0 N be the smallest positive integer in A. For any s ∈ A with s > 0 there are i, j, k ∈ Z such that s = ia + jb + kN, and there exist q, r ∈ Z such that s = s_0 q + r, where q > 0 and 0 ≤ r < s_0. Then we have r = s − s_0 q ∈ A. Since s_0 is the smallest positive number in A and 0 ≤ r < s_0, we must have r = 0, which leads to s = s_0 q. As s was arbitrary, s_0 divides every positive element of A; in particular, since a, b and N all belong to A, it follows that s_0 is a divisor of gcd(a, b, N), which implies s_0 = 1 because gcd(a, b, N) = 1. Thus we reach (B.1), and consequently A covers Z. If i_0 and j_0 are not positive, there exist i, j ∈ N such that iN + i_0 > 0 and jN + j_0 > 0, and from (B.1) we have 1 = (iN + i_0)a + (jN + j_0)b + (k_0 − ia − jb)N.

We now prove Theorem 2.8.

Proof. We consider two cases, d = gcd(a, b, N) > 1 and d = 1. Suppose Y is a subset of equations which depend on a subset of variables U ⊂ V. If all equations that depend on variables in U are in Y and Y is a proper subset, then the system is decoupled. For d = 1 we distinguish three sub-cases.

a = 0: Starting from Q[1] and following the dependence on the variables, any such subset Y contains N equations. As 1 + ib ≡ 1 + jb (mod N) implies i ≡ j (mod N) when gcd(b, N) = 1, they are all distinct. This shows that Y is not a proper subset and hence the system is non-decoupled.

a = b: The generic equation in the system N[b, b], with b ≠ 0, is Q[k]: Q(u_k, u_{k+b}, u_{k+b}, u_{k+2b}) = 0. As in the case a = 0, attempting to construct a proper subset of equations Y, we find (B.2).

a < b: For the system N[a, b], with 0 < a < b, the generic equation has the form Q[k]: Q(u_k, u_{k+a}, u_{k+b}, u_{k+a+b}) = 0. Starting from Q[1] ∈ Y and following the dependence on the variables, it follows from Lemma B.1 that Y contains N distinct equations, and therefore this case is non-decoupled as well.
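As a quick sanity check of Lemma B.1 and of the non-decoupling criterion it supports, the following sketch (illustrative only; the helper names are not from the paper) follows the dependence between component indices exactly as in the proof: starting from one index and repeatedly adding a or b mod N, it verifies that all N indices are reached precisely when gcd(a, b, N) = 1, which is why u_k runs over all scalar solutions w_j in the non-decoupled case.

```python
from math import gcd

def coupled_indices(a, b, N, start=0):
    """Component indices reachable from `start` by repeatedly adding a or b mod N,
    i.e., the indices coupled to u_start through the equations of the N[a, b] system."""
    seen, frontier = {start % N}, [start % N]
    while frontier:
        k = frontier.pop()
        for step in (a, b):
            nxt = (k + step) % N
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return seen

# The dichotomy of Theorem 2.8: full coupling exactly when gcd(a, b, N) = 1.
for N in range(2, 13):
    for a in range(N):
        for b in range(a, N):
            fully_coupled = len(coupled_indices(a, b, N)) == N
            assert fully_coupled == (gcd(a, gcd(b, N)) == 1)
print("coupling matches the gcd criterion for all tested (a, b, N)")
```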