id | url | text | source | categories | token_count | subcategories |
|---|---|---|---|---|---|---|
413,340 | https://en.wikipedia.org/wiki/Sporophyte | A sporophyte is the diploid multicellular stage in the life cycle of a plant or alga which produces asexual spores. This stage alternates with a multicellular haploid gametophyte phase.
Life cycle
The sporophyte develops from the zygote produced when a haploid egg cell is fertilized by a haploid sperm and each sporophyte cell therefore has a double set of chromosomes, one set from each parent. All land plants, and most multicellular algae, have life cycles in which a multicellular diploid sporophyte phase alternates with a multicellular haploid gametophyte phase. In the seed plants, the largest groups of which are the gymnosperms and flowering plants (angiosperms), the sporophyte phase is more prominent than the gametophyte, and is the familiar green plant with its roots, stem, leaves and cones or flowers. In flowering plants, the gametophytes are very reduced in size, and are represented by the germinated pollen and the embryo sac.
The sporophyte produces spores (hence the name) by meiosis, a process also known as "reduction division" that reduces the number of chromosomes in each spore mother cell by half. The resulting meiospores develop into a gametophyte. Both the spores and the resulting gametophyte are haploid, meaning they only have one set of chromosomes.
The mature gametophyte produces male or female gametes (or both) by mitosis. The fusion of male and female gametes produces a diploid zygote which develops into a new sporophyte. This cycle is known as alternation of generations or alternation of phases.
Examples
Bryophytes (mosses, liverworts and hornworts) have a dominant gametophyte phase on which the adult sporophyte is dependent for nutrition. The embryo sporophyte develops by cell division of the zygote within the female sex organ or archegonium, and in its early development is therefore nurtured by the gametophyte.
Because this embryo-nurturing feature of the life cycle is common to all land plants they are known collectively as the embryophytes.
Most algae have dominant gametophyte generations, but in some species the gametophytes and sporophytes are morphologically similar (isomorphic). An independent sporophyte is the dominant form in all clubmosses, horsetails, ferns, gymnosperms, and angiosperms that have survived to the present day. Early land plants had sporophytes that produced identical spores (isosporous or homosporous) but the ancestors of the gymnosperms evolved complex heterosporous life cycles in which the spores producing male and female gametophytes were of different sizes, the female megaspores tending to be larger, and fewer in number, than the male microspores.
Evolutionary history
During the Devonian period, several plant groups independently evolved heterospory and subsequently the habit of endospory, in which the gametophytes develop in miniaturized form inside the spore wall. By contrast, in exosporous plants, including modern ferns, the gametophytes break the spore wall open on germination and develop outside it. The megagametophytes of endosporic plants such as the seed ferns developed within the sporangia of the parent sporophyte, producing a miniature multicellular female gametophyte complete with female sex organs, or archegonia. The oocytes were fertilized in the archegonia by free-swimming flagellate sperm produced by windborne miniaturized male gametophytes in the form of pre-pollen. The resulting zygote developed into the next sporophyte generation while still retained within the pre-ovule, the single large female meiospore or megaspore contained in the modified sporangium or nucellus of the parent sporophyte. The evolution of heterospory and endospory was among the earliest steps in the evolution of seeds of the kind produced by gymnosperms and angiosperms today. The rRNA genes seem to escape the global methylation machinery in bryophytes, unlike in seed plants.
See also
Alternation of generations
References
Further reading
Plant morphology
Plant reproduction | Sporophyte | [
"Biology"
] | 927 | [
"Behavior",
"Plant reproduction",
"Plants",
"Reproduction",
"Plant morphology"
] |
413,430 | https://en.wikipedia.org/wiki/Jan%20Oort | Jan Hendrik Oort (28 April 1900 – 5 November 1992) was a Dutch astronomer who made significant contributions to the understanding of the Milky Way and who was a pioneer in the field of radio astronomy. The New York Times called him "one of the century's foremost explorers of the universe"; the European Space Agency website describes him as "one of the greatest astronomers of the 20th century" and states that he "revolutionised astronomy through his ground-breaking discoveries." In 1955, Oort's name appeared in Life magazine's list of the 100 most famous living people. He has been described as "putting the Netherlands in the forefront of postwar astronomy".
Oort determined that the Milky Way rotates and overturned the idea that the Sun was at its center. He also postulated the existence of the mysterious invisible dark matter in 1932, which is believed to make up roughly 84.5% of the total mass in the Universe and whose gravitational pull causes "the clustering of stars into galaxies and galaxies into connecting strings of galaxies". He discovered the galactic halo, a group of stars orbiting the Milky Way but outside the main disk. Additionally Oort is responsible for a number of important insights about comets, including the realization that their orbits "implied there was a lot more solar system than the region occupied by the planets."
The Oort cloud, the Oort constants, Oort Limit, an impact crater on Pluto (Oort), and the asteroid 1691 Oort were all named after him.
Early life and education
Oort was born in Franeker, a small town in the Dutch province of Friesland, on April 28, 1900. He was the second son of Abraham Hermanus Oort, a physician, who died on May 12, 1941, and Ruth Hannah Faber, who was the daughter of Jan Faber and Henrietta Sophia Susanna Schaaii, and who died on November 20, 1957. Both of his parents came from families of clergymen; his paternal grandfather, a Protestant clergyman with liberal ideas, "was one of the founders of the more liberal Church in Holland" and "was one of the three people who made a new translation of the Bible into Dutch." The reference is to Henricus Oort (1836–1927), who was the grandson of a famous Rotterdam preacher and, through his mother, Dina Maria Blom, the grandson of theologian Abraham Hermanus Blom, a "pioneer of modern biblical research". Several of Oort's uncles were pastors, as was his maternal grandfather. "My mother kept up her interests in that, at least in the early years of her marriage", he recalled. "But my father was less interested in Church matters."
In 1903 Oort's parents moved to Oegstgeest, near Leiden, where his father took charge of the Endegeest Psychiatric Clinic. Oort's father "was a medical director in a sanitorium for nervous illnesses. We lived in the director's house of the sanitorium, in a small forest which was very nice for the children, of course, to grow up in." Oort's younger brother, John, became a professor of plant diseases at the University of Wageningen. In addition to John, Oort had two younger sisters and an elder brother who died of diabetes when he was a student.
Oort attended primary school in Oegstgeest and secondary school in Leiden, and in 1917 went to Groningen University to study physics. He later said that he had become interested in science and astronomy during his high-school years, and conjectured that his interest was stimulated by reading Jules Verne. His one hesitation about studying pure science was the concern that it "might alienate one a bit from people in general", as a result of which "one might not develop the human factor sufficiently." But he overcame this concern and ended up discovering that his later academic positions, which involved considerable administrative responsibilities, afforded a good deal of opportunity for social contact.
Oort chose Groningen partly because a well known astronomer, Jacobus Cornelius Kapteyn, was teaching there, although Oort was unsure whether he wanted to specialize in physics or astronomy. After studying with Kapteyn, Oort decided on astronomy. "It was the personality of Professor Kapteyn which decided me entirely", he later recalled. "He was quite an inspiring teacher and especially his elementary astronomy lectures were fascinating." Oort began working on research with Kapteyn early in his third year. According to Oort one professor at Groningen who had considerable influence on his education was physicist Frits Zernike.
After taking his final exam in 1921, Oort was appointed assistant at Groningen, but in September 1922, he went to the United States to do graduate work at Yale and to serve as an assistant to Frank Schlesinger of the Yale Observatory.
Career
At Yale, Oort was responsible for making observations with the Observatory's zenith telescope. "I worked on the problem of latitude variation", he later recalled, "which is quite far away from the subjects I had so far been studying." He later considered his experience at Yale useful as he became interested in "problems of fundamental astronomy that [he] felt was capitalized on later, and which certainly influenced [his] future lectures in Leiden." Personally, he "felt somewhat lonesome in Yale", but also said that "some of my very best friends were made in these years in New Haven."
Early discoveries
In 1924, Oort returned to the Netherlands to work at Leiden University, where he served as a research assistant, becoming Conservator in 1926, Lecturer in 1930, and Professor Extraordinary in 1935. In 1926, he received his doctorate from Groningen with a thesis on the properties of high-velocity stars. The next year, Swedish astronomer Bertil Lindblad proposed that the rate of rotation of stars in the outer part of the galaxy decreased with distance from the galactic core, and Oort, who later said that he believed it was his colleague Willem de Sitter who had first drawn his attention to Lindblad's work, realized that Lindblad was correct and that the truth of his proposition could be demonstrated observationally. Oort provided two formulae that described galactic rotation; the two constants that figured in these formulae are now known as "Oort's constants". Oort "argued that just as the outer planets appear to us to be overtaken and passed by the less distant ones in the solar system, so too with the stars if the Galaxy really rotated", according to the Oxford Dictionary of Scientists. He "was finally able to calculate, on the basis of the various stellar motions, that the Sun was some 30,000 light-years from the center of the Galaxy and took about 225 million years to complete its orbit. He also showed that stars lying in the outer regions of the galactic disk rotated more slowly than those nearer the center. The Galaxy does not therefore rotate as a uniform whole but exhibits what is known as 'differential rotation'."
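The figures quoted above (a galactocentric distance of about 30,000 light-years and an orbital period of about 225 million years) can be turned into the Sun's orbital speed and, assuming a simple circular orbit around a point mass, into a rough estimate of the mass enclosed within the Sun's orbit. The sketch below is illustrative only and not from the source; the circular-orbit assumption and the rounded physical constants are mine.

```python
import math

# Figures quoted in the text above (Oort's historical values)
R_LY = 30_000        # Sun's distance from the Galactic centre, light-years
T_MYR = 225          # Sun's orbital period, millions of years

# Rounded physical constants (assumed, not from the source)
LY_M = 9.461e15      # metres per light-year
YR_S = 3.156e7       # seconds per year
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30     # solar mass, kg

R = R_LY * LY_M                 # orbital radius in metres
T = T_MYR * 1e6 * YR_S          # orbital period in seconds

v = 2 * math.pi * R / T         # circular orbital speed
M_enc = v**2 * R / G            # enclosed mass, assuming a circular orbit about a point mass

print(f"orbital speed ~ {v / 1e3:.0f} km/s")                # roughly 250 km/s
print(f"enclosed mass ~ {M_enc / M_SUN:.1e} solar masses")  # of order 1e11, i.e. ~100 billion
```

The enclosed-mass figure is consistent in order of magnitude with the 100-billion-solar-mass value cited later in this article.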
These early discoveries by Oort about the Milky Way overthrew the Kapteyn system, named after his mentor, which had envisioned a galaxy that was symmetrical around the Sun. As Oort later noted, "Kapteyn and his co-workers had not realized that the absorption in the galactic plane was as bad as it turned out to be." Until Oort began his work, he later recalled, "the Leiden Observatory had been concentrating entirely on positional astronomy, meridian circle work and some proper motion work. But no astrophysics or anything that looked like that. No structure of the galaxy, no dynamics of the galaxy. There was no one else in Leiden who was interested in these problems in which I was principally interested, so the first years I worked more or less by myself in these projects. De Sitter was interested, but his main line of research was celestial mechanics; at that time the expanding universe had moved away from his direct interest." As the European Space Agency states, Oort "sh[ook] the scientific world by demonstrating that the Milky Way rotates like a giant 'Catherine Wheel'." He showed that all the stars in the galaxy were "travelling independently through space, with those nearer the center rotating much faster than those further away."
This breakthrough made Oort famous in the world of astronomy. In the early 1930s he received job offers from Harvard and Columbia University, but chose to stay at Leiden, although he did spend half of 1932 at the Perkins Observatory, in Delaware, Ohio.
In 1934, Oort became assistant to the director of Leiden Observatory; the next year he became General Secretary of the International Astronomical Union (IAU), a post he held until 1948; in 1937 he was elected to the Royal Academy. In 1939, he spent half a year in the U.S., and became interested in the Crab Nebula, concluding in a paper, written with American astronomer Nicholas Mayall, that it was the result of a supernova explosion.
Nazi invasion of Netherlands
In 1940, Nazi Germany invaded the Netherlands. Soon after, the occupying regime dismissed all Jewish professors from Leiden University and other universities. "Among the professors who were dismissed", Oort later recalled, "was a very famous … professor of law by the name of Meyers. On the day when he got the letter from the authorities that he could no longer teach his classes, the dean of the faculty of law went into his class … and delivered a speech in which he started by saying, 'I won't talk about his dismissal and I shall leave the people who did this, below us, but will concentrate on the greatness of the man dismissed by our aggressors.'"
This speech (26 November 1940) made such an impression on all his students that on leaving the auditorium they defiantly sang the anthem of the Netherlands and went on strike. Oort was present for the lecture and was greatly impressed. This occasion formed the beginning of the active resistance in Holland. The speech by Rudolph Cleveringa, the dean of the faculty of Law and former graduate student of professor Meijers, was widely circulated during the rest of the war by the resistance groups. Oort was in a little group of professors in Leiden who came together regularly and discussed the problems the university faced in view of the German occupation. Most of the members of this group were put in hostage camps soon after the speech by Cleveringa. Oort refused to collaborate with the occupiers, "and so we went down to live in the country for the rest of the war." Resigning from the Royal Academy, from his professorial post at Leiden, and from his position at the Observatory, Oort took his family to Hulshorst, a quiet village in the province of Gelderland, where they sat out the war. In Hulshorst, he began writing a book on stellar dynamics.
Oort's radio astronomy
Before the war was over, he initiated, in collaboration with a Utrecht University student, Hendrik van de Hulst, a project that eventually succeeded, in 1951, in detecting the 21-centimeter spectral line of interstellar hydrogen at radio frequencies. Oort and his colleagues also made the first investigation of the central region of the Galaxy, and discovered that "the 21-centimeter radio emission passed un-absorbed through the gas clouds that had hidden the center from optical observation. They found a huge concentration of mass there, later identified as mainly stars, and also discovered that much of the gas in the region was moving rapidly outward away from the center." In June 1945, after the end of the war, Oort returned to Leiden, took over as director of the Observatory, and became Full Professor of Astronomy. During this immediate postwar period, he led the Dutch group that built radio telescopes at Radio Kootwijk, Dwingeloo, and Westerbork and used the 21-centimeter line to map the Milky Way, including the large-scale spiral structure, the Galactic Center, and gas cloud motions. Oort was helped in this project by the Dutch telecommunications company, PTT, which, he later explained, "had under their care all the radar equipment that was left behind by the Germans on the coast of Holland. This radar equipment consisted in part of reflecting telescopes of 7 1/2 meter aperture.... Our radio astronomy was really started with the aid of one of these instruments… it was in Kootwijk that the first map of the Galaxy was made." For a brief period, before the completion of the Jodrell Bank telescope, the Dwingeloo instrument was the largest of its kind on Earth.
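For orientation (not from the source), the 21-centimeter line corresponds to a frequency of about 1.4 GHz, and a dish of the 7.5-meter size mentioned above has a diffraction-limited beam of roughly two degrees at that wavelength; the sketch below uses the standard 1.22 λ/D criterion and rounded values as assumptions.

```python
import math

C = 2.998e8              # speed of light, m/s
wavelength = 0.211       # 21-cm hydrogen line, metres (rounded)
dish_diameter = 7.5      # aperture of the repurposed German radar dishes, metres (quoted above)

frequency = C / wavelength                      # ~1.42e9 Hz
beam_rad = 1.22 * wavelength / dish_diameter    # diffraction-limited beam width, radians

print(f"line frequency ~ {frequency / 1e9:.2f} GHz")
print(f"beam width     ~ {math.degrees(beam_rad):.1f} degrees on the sky")
```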
It has been written that "Oort was probably the first astronomer to realize the importance" of radio astronomy. "In the days before radio telescopes," one source notes, "Oort was one of the few scientists to realise the potential significance of using radio waves to search the heavens. His theoretical research suggested that vast clouds of hydrogen lingered in the spiral arms of the Galaxy. These molecular clouds, he predicted, were the birthplaces of stars." These predictions were confirmed by measurements made at the new radio observatories at Dwingeloo and Westerbork. Oort later said that "it was Grote Reber's work which first impressed me and convinced me of the unique importance of radio observations for surveying the galaxy." Just before the war, Reber had published a study of galactic radio emissions. Oort later commented, "The work of Grote Reber made it quite clear [radio astronomy] would be a very important tool for investigating the Galaxy, just because it could investigate the whole disc of the galactic system unimpeded by absorption." Oort's work in radio astronomy is credited by colleagues with putting the Netherlands in the forefront of postwar astronomy. Oort also investigated the source of the light from the Crab Nebula, finding that it was polarized, and probably produced by synchrotron radiation, confirming a hypothesis by Iosif Shklovsky.
Comet studies
Oort went on to study comets, about which he formulated a number of revolutionary hypotheses. He hypothesized that the Solar System is surrounded by a massive cloud consisting of billions of comets, many of them "long-period" comets that originate in a cloud far beyond the orbits of Neptune and Pluto. This cloud is now known as the Oort Cloud. He also realized that these external comets, from beyond Pluto, can "become trapped into tighter orbits by Jupiter, and become periodic comets, like Halley's comet." According to one source, "Oort was one of the few people to have seen Comet Halley on two separate apparitions. At the age of 10, he was with his father on the shore at Noordwijk, Netherlands, when he first saw the comet. In 1986, 76 years later, he went up in a plane and was able to see the famous comet once more."
In 1951 Oort and his wife spent several months in Princeton and Pasadena, an interlude that led to a paper by Oort and Lyman Spitzer on the acceleration of interstellar clouds by O-type stars. He went on to study high-velocity clouds. Oort served as director of the Leiden Observatory until 1970. After his retirement, he wrote comprehensive articles on the galactic center and on superclusters and published several papers on the quasar absorption lines, supporting Yakov Zel'dovich's pancake model of the universe. He also continued researching the Milky Way and other galaxies and their distribution until shortly before his death at 92.
One of Oort's strengths, according to one source, was his ability to "translate abstruse mathematical papers into physical terms," as exemplified by his translation of the difficult mathematical terms of Lindblad's theory of differential galactic rotation into a physical model. Similarly, he "derived the existence of the comet cloud on the outskirts of the Solar System from the observations, using the mathematics needed in dynamics, but then deduced the origin of this cloud using general physical arguments and a minimum of mathematics."
Personal life
In 1927, Oort married Johanna Maria (Mieke) Graadt van Roggen (1906–1993). They had met at a university celebration at Utrecht, where Oort's brother was studying biology at the time. Oort and his wife had two sons, Coenraad (Coen) and Abraham, and a daughter, Marijke. Abraham became a professor of climatology at Princeton University.
According to the website of Leiden University, Oort was very interested in and knowledgeable about art. "[W]hen visiting another country he would always try to take some time off to visit the local museums and exhibitions…and in the fifties served for some years as chairman of the pictorial arts committee of the Leiden Academical Arts Centre, which had among other things the task of organizing expositions".
"Colleagues remembered him as a tall, lean and courtly man with a genial manner," reported his New York Times obituary.
Writings
An incomplete list:
Oort, J.H., "Some Peculiarities in the Motion of Stars of High Velocity," Bull. Astron. Inst. Neth. 1, 133–37 (1922).
Oort, J.H., "The Stars of High Velocity," (Thesis, Groningen University) Publ. Kapteyn Astr. Lab, Groningen, 40, 1–75 (1926).
Oort, Jan H., "Asymmetry in the Distribution of Stellar Velocities," Observatory 49, 302–04 (1926).
Oort, J.H., "Non-Light-Emitting Matter in the Stellar System," public lecture of 1926, reprinted in The Legacy of J. C. Kapteyn, ed. by P. C. van der Kruit and K. van Berkel (Kluwer, Dordrecht, 2000) [abstract].
Oort, J.H., "Observational Evidence Confirming Lindblad's Hypothesis of a Rotation of the Galactic System," Bull. Astron. Inst. Neth. 3, 275–82 (1927).
Oort, J.H., "Investigations Concerning the Rotational Motion of the Galactic System together with New Determinations of Secular Parallaxes, Precession and Motion of the Equinox (Errata: 4, 94)," Bull. Astron. Inst. Neth. 4, 79–89 (1927).
Oort, J.H., "Dynamics of the Galactic System in the Vicinity of the Sun," Bull. Astron. Inst. Neth. 4, 269–84 (1928).
Oort, J.H., "Some Problems Concerning the Distribution of Luminosities and Peculiar Velocities of Extragalactic Nebulae," Bull. Astron. Inst. Neth. 6, 155–59 (1931).
Oort, J.H., "The Force Exerted by the Stellar System in the Direction Perpendicular to the Galactic Plane and Some Related Problems," Bull. Astron. Inst. Neth. 6, 249–87 (1932).
Oort, J.H., "A Redetermination of the Constant of Precession, the Motion of the Equinox and the Rotation of the Galaxy from Faint Stars Observed at the McCormick Observatory," Bull. Astron. Inst. Neth. 8, 149–55 (1937).
Oort, J.H., "Absorption and Density Distribution in the Galactic System," Bull. Astron. Inst. Neth. 8, 233–64 (1938).
Oort, J.H., "Stellar Motions," MNRAS 99, 369–84 (1939).
Oort, J.H. "Some Problems Concerning the Structure and Dynamics of the Galactic System and the Elliptical Nebulae NGC 3115 and 4494," Ap.J. 91, 273–306 (1940).
Mayall, N.U. & J.H. Oort, "Further Data Bearing on the Identification of the Crab Nebula with the Supernova of 1054 A.D. Part II: The Astronomical Aspects," PASP 54, 95–104 (1942).
Oort, J. H., & H.C. van de Hulst, "Gas and Smoke in Interstellar Space," Bull. Astr. Inst. Neth. 10, 187–204 (1946).
Oort, J.H., "Some Phenomena Connected with Interstellar Matter (1946 George Darwin Lecture)," MNRAS 106, 159–79 (1946).
Oort, J.H., "The Structure of the Cloud of Comets Surrounding the Solar System and a Hypothesis Concerning its Origin," Bull. Astron. Inst. Neth. 11, 91–110 (1950).
Oort, J.H., "Origin and Development of Comets (1951 Halley Lecture)," Observatory 71, 129–44 (1951) Halley Lecture.
Oort, J.H. & M. Schmidt, "Differences between New and Old Comets," Bull. Astron. Inst. Neth. 11, 259–70 (1951).
Westerhout, G. & J.H. Oort, "A Comparison of the Intensity Distribution of Radio-frequency Radiation with a Model of the Galactic System," Bull. Astron. Inst. Neth. 11, 323–33 (1951).
Morgan, H.R. & J.H. Oort, "A New Determination of the Precession and the Constants of Galactic Rotation," Bull. Astron. Inst. Neth. 11, 379–84 (1951).
Oort, J.H. "Problems of Galactic Structure," Ap.J. 116, 233–250 (1952) [Henry Norris Russell Lecture, 1951].
Oort, J. H., "Outline of a Theory on the Origin and Acceleration of Interstellar Clouds and O Associations," Bull. Astr. Inst. Neth. 12, 177–86 (1954).
van de Hulst, H.C., C.A. Muller, & J.H. Oort, "The spiral structure of the outer part of the Galactic System derived from the hydrogen emission at 21 cm wavelength," Bull. Astr. Inst. Neth. 12, 117–49 (1954).
van Houten, C.J., J.H. Oort, & W.A. Hiltner, "Photoelectric Measurements of Extragalactic Nebulae," Ap.J. 120, 439–53 (1954).
Oort, Jan H. & Lyman Spitzer, Jr., "Acceleration of Interstellar Clouds by O-Type Stars," Ap.J. 121, 6–23 (1955).
Oort, J.H., "Measures of the 21-cm Line Emitted by Interstellar Hydrogen," Vistas in Astronomy. 1, 607–16 (1955).
Oort, J.H., "A New Southern Hemisphere Observatory," Sky & Telescope 15, 163 (1956).
Oort, J. H. & Th. Walraven, "Polarization and Composition of the Crab Nebula," Bull. Astr. Inst. Neth. 12, 285–308 (1956).
Oort, J.H., "Die Spiralstruktur des Milchstraßensystems," Mitt. Astr. Ges. 7, 83–87 (1956).
Oort, J.H., F.J. Kerr, & G. Westerhout, "The Galactic System as a Spiral Nebula," MNRAS 118, 379–89 (1958).
Oort, J.H., "Summary – From the Astronomical Point of View," in Ricerche Astronomiche, Vol. 5, Specola Vaticana, Proceedings of a Conference at Vatican Observatory, Castel Gandolfo, May 20–28, 1957, ed. by D.J.K. O'Connell (North Holland, Amsterdam & Interscience, NY, 1958), 507–29.
Oort, Jan H., "Radio-frequency Studies of Galactic Structure," Handbuch der Physik vol. 53, 100–28 (1959).
Oort, J.H., "A Summary and Assessment of Current 21-cm Results Concerning Spiral and Disk Structures in Our Galaxy," in Paris Symposium on Radio Astronomy, IAU Symposium no. 9 and URSI Symposium no. 1, held 30 July – 6 August 1958, ed. by R.N. Bracewell (Stanford University Press, Stanford, CA, 1959), 409–15.
Rougoor, G. W. & J.H. Oort, "Neutral Hydrogen in the Central Part of the Galactic System," in Paris Symposium on Radio Astronomy, IAU Symposium no. 9 and URSI Symposium no. 1, held 30 July – 6 August 1958, ed. by R.N. Bracewell (Stanford University Press, Stanford, CA, 1959), pp. 416–22.
Oort, J. H. & G. van Herk, "Structure and dynamics of Messier 3," Bull. Astr. Inst. Neth. 14, 299–321 (1960).
Oort, J. H., "Note on the Determination of Kz and on the Mass Density Near the Sun," Bull. Astr. Inst. Neth. 15, 45–53 (1960).
Rougoor, G.W. & J.H. Oort, "Distribution and Motion of Interstellar Hydrogen in the Galactic System with Particular Reference to the Region within 3 Kiloparsecs of the Center," Proc. Natl. Acad. Sci. 46, 1–13 (1960).
Oort, J.H. & G.W. Rougoor, "The Position of the Galactic Centre," MNRAS 121, 171–73 (1960).
Oort, J.H., "The Galaxy," IAU Symposium 20, 1–9 (1964).
Oort, J.H. "Stellar Dynamics," in A. Blaauw & M. Schmidt, eds., Galactic Structure (Univ. of Chicago Press, Chicago, 1965), pp. 455–512.
Oort, J. H., "Possible Interpretations of the High-Velocity Clouds," Bull. Astr. Inst. Neth. 18, 421–38 (1966).
Oort, J. H., "Infall of Gas from Intergalactic Space," Nature 224, 1158–63 (1969).
Oort, J.H., "The Formation of Galaxies and the Origin of the High-Velocity Hydrogen," Astronomy & Astrophysics 7, 381–404 (1970).
Oort, J.H., "The Density of the Universe," Astronomy & Astrophysics 7, 405 (1970).
Oort, J.H., "Galaxies and the Universe," Science 170, 1363–70 (1970).
van der Kruit, P.C., J.H. Oort, & D.S. Mathewson, "The Radio Emission of NGC 4258 and the Possible Origin of Spiral Structure," Astronomy & Astrophysics 21, 169–84 (1972).
Oort, J.H., "The Development of our Insight into the Structure of the Galaxy between 1920 and 1940," Ann. NY Acad. Sci. 198, 255–66 (1972).
Oort, Jan H. "On the Problem of the Origin of Spiral Structure," Mitteilungen der AG 32, 15–31 (1973) [Karl Schwarzschild Lecture, 1972].
Oort, J.H. & L. Plaut, "The Distance to the Galactic Centre Derived from RR Lyrae Variables, the Distribution of these Variables in the Galaxy's Inner Region and Halo, and A Rediscussion of the Galactic Rotation Constants," Astronomy & Astrophysics 41, 71–86 (1975).
Strom, R. G., G.K. Miley, & J. Oort, "Giant Radio Galaxies," Sci. Amer. 233, 26 (1975).
Pels, G., J.H. Oort, & H.A. Pels-Kluyver, "New Members of the Hyades Cluster and a Discussion of its Structure," Astronomy & Astrophysics 43, 423–41 (1975).
Rubin, Vera C., W. Kent Ford, Jr., Charles J. Peterson, & J.H. Oort, "New Observations of the NGC 1275 Phenomenon," Ap.J. 211, 693–96 (1977).
Oort, J.H., "The Galactic Center," Annual Review of Astronomy & Astrophysics 15, 295–362 (1977).
Oort, J.H., "Superclusters and Lyman α Absorption Lines in Quasars," Astronomy & Astrophysics 94, 359–64 (1981).
Oort, J.H., H. Arp, & H. de Ruiter, "Evidence for the Location of Quasars in Superclusters," Astronomy & Astrophysics 95, 7–13 (1981).
Oort, J.H., "Superclusters," Annual Review of Astronomy & Astrophysics 21, 373–428 (1983).
Oort, J.H., "Structure of the Universe," in Early Evolution of the Universe and its Present Structure; Proceedings of the Symposium, Kolymbari, Greece, August 30 – September 2, 1982, (Reidel, Dordrecht & Boston, 1983), 1–6.
Oort, Jan H. "The Origin and Dissolution of Comets (1986 Halley Lecture)" Observatory 106, 186–93 (1986).
Oort, Jan H. "Origin of Structure in the Universe," Publ. Astron. Soc. Jpn. 40, 1–14 (1988).
Oort, J.H., "Questions Concerning the Large-scale Structure of the Universe," in Problems in Theoretical Physics and Astrophysics: Collection of Articles in Celebration of the 70th Birthday of V. L. Ginzburg (Izdatel'stvo Nauka, Moscow, 1989), pp. 325–37.
Oort, J.H., "Orbital Distribution of Comets," in W.F. Huebner, ed., Physics and Chemistry of Comets (Springer-Verlag, 1990), pp. 235–44 (1990).
Oort, J.H., "Exploring the Nuclei of Galaxies," Mercury 21, 57 (1992).
A few of Oort's discoveries
In 1924, Oort discovered the galactic halo, a group of stars orbiting the Milky Way but outside the main disk.
In 1927, he calculated that the center of the Milky Way was 5,900 parsecs (19,200 light years) from the Earth in the direction of the constellation Sagittarius.
In 1932, by measuring the motions of stars in the Milky Way he was the first to find evidence for dark matter, when he found the mass of the galactic plane must be more than the mass of the material that can be seen.
He showed that the Milky Way had a mass 100 billion times that of the Sun.
In 1950, he suggested that comets came from a common region of the Solar System (now called the Oort cloud).
He found that the light from the Crab Nebula was polarized, and produced by synchrotron emission.
Honours
Awards
Bruce Medal of the Astronomical Society of the Pacific in 1942
Gold Medal of the Royal Astronomical Society in 1946
Janssen Medal from the French Academy of Sciences in 1946
Prix Jules Janssen, the highest award of the Société astronomique de France, the French astronomical society (1947)
Henry Norris Russell Lectureship of the American Astronomical Society in 1951
Gouden Ganzenveer in 1960
Vetlesen Prize in 1966
National Radio Astronomy Observatory, Jansky Prize, 1967
Karl Schwarzschild Medal of the Astronomische Gesellschaft in 1972
Association pour le Développement International de l'Observatoire de Nice, ADION medal, 1978
Balzan Prize for Astrophysics in 1984
Inamori Foundation, Kyoto Prize, 1987
Named after him
1691 Oort (asteroid)
Oort cloud (Öpik–Oort cloud)
Oort limit
Oort constants
Oort building, the current building of the Leiden Observatory
Memberships
Member of the Royal Netherlands Academy of Arts and Sciences (1937–1943, 1945–)
Member of the American Academy of Arts and Sciences (1946–)
Member of the United States National Academy of Sciences (1953–)
Member of the American Philosophical Society (1957–)
Upon his death, Nobel Prize winning astrophysicist Subrahmanyan Chandrasekhar remarked, "The great oak of Astronomy has been felled, and we are lost without its shadow."
References
Notes
Biographical materials
Blaauw, Adriaan, Biographical Encyclopedia of Astronomers (Springer, NY, 2007), pp. 853–55.
Chapman, David M.F., "Reflections: Jan Hendrik Oort – Swirling Galaxies and Clouds of Comets," JRASC 94, 53–54 (2000).
ESA Space Science, "Comet Pioneer: Jan Hendrik Oort," 27 February 2004.
Katgert-Merkelijn, J., University of Leiden, "Jan Oort, Astronomer".
Katgert-Merkelijn, J.K.: The letters and papers of Jan Hendrik Oort, as archived in the University Library, Leiden. Dordrecht, Kluwer Academic Publishers, 1997.
Oort, J.H., "Some Notes on My Life as an Astronomer," Annual Review of Astronomy & Astrophysics 19, 1 (1981).
van de Hulst, H.C., Biographical Memoirs of the Royal Society of London 40, 320–26 (1994).
van der Kruit, Pieter C.: Jan Hendrik Oort. Master of the Galactic System. Springer Nature, 2019.
van Woerden, Hugo, Willem N. Brouw, and Henk C. van de Hulst, eds., "Oort and the Universe: A Sketch of Oort's Research and Person" (D. Reidel, Dordrecht, 1980).
Obituaries
Blaauw, Adriaan, Zenit jaarg, 196–210 (1993).
Blaauw, Adriaan & Maarten Schmidt, PASP 105, 681 (1993).
Blaauw, Adriaan, "Oort in Memoriam," in Leo Blitz & Peter Teuben, eds., 169th IAU Symposium: Unsolved Problems of the Milky Way (Kluwer Acad. Publishers, 1996), pp. xv–xvi.
Pecker, J.-C., "La Vie et l'Oeuvre de Jan Hendrik Oort," Comptes Rendus de l'Académie des Sciences: La Vie des Science 10, 5, 535–40 (1993).
van de Hulst, H.C., QJRAS 35, 237–42 (1994).
van den Bergh, Sidney, "An Astronomical Life: J.H. Oort (1900–1992)," JRASC 87, 73–76 (1993).
Woltjer, L., J. Astrophys. Astron. 14, 3–5 (1993).
Woltjer, Lodewijk, Physics Today 46, 11, 104–05 (1993).
Literature
External links
Oral history interview transcript with Jan Oort on 10 November 1977, American Institute of Physics, Niels Bohr Library & Archives
Jan Oort, astronomer (Leiden University Library, April–May 2000)—Online exhibition
1900 births
1992 deaths
Academic staff of Leiden University
Foreign associates of the National Academy of Sciences
Foreign members of the Royal Society
Foreign members of the Russian Academy of Sciences
Foreign members of the USSR Academy of Sciences
Kyoto laureates in Basic Sciences
Members of the American Philosophical Society
Members of the Royal Netherlands Academy of Arts and Sciences
People from Franekeradeel
Presidents of the International Astronomical Union
Recipients of the Gold Medal of the Royal Astronomical Society
University of Groningen alumni
Vetlesen Prize winners | Jan Oort | [
"Astronomy"
] | 7,748 | [
"Astronomers",
"Presidents of the International Astronomical Union"
] |
413,472 | https://en.wikipedia.org/wiki/Idli | Idli or idly (plural: idlis) or iddali or iddena is a type of savoury rice cake, originating from South India, popular as a breakfast food in Southern India and in Sri Lanka. The cakes are made by steaming a batter consisting of fermented de-husked black lentils and rice. The fermentation process breaks down the starches so that they are more readily metabolised by the body.
Idli has several variations, including rava idli, which is made from semolina. Regional variants include sanna of Konkan.
History
A precursor of the modern idli is mentioned in several ancient Indian works. Vaddaradhane, a 920 CE Kannada language work by Shivakotiacharya, mentions "iddalige", prepared only from a black gram batter. Chavundaraya II, the author of the earliest available Kannada encyclopedia, Lokopakara (1025 CE), describes the preparation of this food from black gram soaked in buttermilk, ground to a fine paste, and mixed with the clear water of curd and spices. The Western Chalukya king and scholar Someshwara III, reigning in the area now called Karnataka, included an idli recipe in his encyclopedia, Manasollasa (1130 CE). This Sanskrit-language work describes the food as iḍḍarikā. In Karnataka, a 1235 CE source describes the idli as "light, like coins of high value", which is not suggestive of a rice base. The food prepared using this recipe is now called uddina idli in Karnataka.
The recipe mentioned in these ancient Indian works leaves out three key aspects of the modern idli recipe: the use of rice (not just black gram), the long fermentation of the mix, and the steaming for fluffiness. The references to the modern recipe appear in the Indian works only after 1250 CE. Food historian K. T. Achaya speculates that the modern idli recipe might have originated in present-day Indonesia, which has a long tradition of fermented food. According to him, the cooks employed by the Hindu kings of the Indianised kingdoms might have invented the steamed idli there, and brought the recipe back to India during 800–1200 CE. Achaya mentioned an Indonesian dish called "kedli", which according to him, was like an idli. However, Janaki Lenin was unable to find any recipe for an Indonesian dish by this name. According to food historian Colleen Taylor Sen the fermentation process of idli batter is a natural process that was discovered independently in India, since nearly all cultures use fermentation in some form.
The Gujarati work Varṇaka Samuccaya (1520 CE) mentions idli as idari, and also mentions its local adaptation, idada (a non-fermented version of dhokla).
The earliest extant Tamil work to mention idli (as itali) is Maccapuranam, dated to the 17th century. In 2015, Chennai-based Idli caterer Eniyavan started celebrating March 30 as "World Idli Day".
Preparation
To make idli, four parts uncooked rice (idli rice or parboiled rice) to one part whole white lentil (black gram, Vigna mungo) are soaked separately for at least four hours – overnight if more convenient. Optionally, spices such as fenugreek seeds can be added at the time of soaking for additional flavour. Once soaked, the lentils are ground to a fine paste and the rice is separately coarsely ground, then they are combined. Next, the mixture is left to ferment overnight, during which its volume will more than double. After fermentation, some of the batter may be kept as a starter culture for the next batch. The finished idli batter is put into greased moulds of an idli tray or "tree" for steaming. The perforated moulds allow the idlis to be cooked evenly. The tree holds the trays above the level of boiling water in a pot, and the pot is covered until the idlis are done (about 10–25 minutes, depending on size). A more traditional method is to use leaves instead of moulds.
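A small helper that scales the 4:1 rice-to-lentil ratio and the timings described above for an arbitrary batch; it is purely illustrative and not from the source, and the batter-per-idli yield and the dry-weight fraction are assumptions.

```python
# Scale the 4:1 rice-to-lentil ratio described above for a given number of idlis.
# The 100 g of batter per idli and the 50% dry-weight fraction are assumptions.

def idli_batch(idli_count: int, batter_per_idli_g: float = 100.0) -> dict:
    """Approximate dry-ingredient weights and timings for a batch of idlis."""
    total_batter_g = idli_count * batter_per_idli_g
    dry_weight_g = total_batter_g * 0.5              # assumed dry-ingredient share of the batter
    return {
        "rice_g": round(dry_weight_g * 4 / 5),       # four parts idli or parboiled rice
        "black_gram_g": round(dry_weight_g * 1 / 5), # one part whole white lentil
        "soak_hours": 4,                             # minimum soaking time from the text
        "ferment": "overnight",                      # volume more than doubles
        "steam_minutes": (10, 25),                   # depending on size
    }

print(idli_batch(16))
```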
Serving
Since plain idlis are mild in taste, a condiment is considered essential. Idlis are often served with chutneys (coconut-based), sambar and Medu vada. However, this varies greatly by region and personal taste; it is also often served with kaara chutney (onion-based) or spicy fish curries. The dry spice mixture podi is convenient while travelling.
Variations
There are several regional variations of idlis made in South India and Sri Lanka. With the emigration of south Indians and Sri Lankans throughout the region and world, many variations on idli have been created in addition to the almost countless local variations. Hard-to-get ingredients and differing cooking customs have required changes in both ingredients and methods. Parboiled rice can reduce the soaking time considerably. Store-bought ground rice or cream of rice may also be used. Similarly, semolina or cream of wheat may be used for preparing rava idli (wheat idli). Dahi (yogurt) may be added to provide the sour flavour for unfermented batters. Pre-packaged mixes allow for almost instant idlis.
In addition to or instead of fenugreek, other spices may be used such as mustard seeds, chili peppers, cumin, coriander, ginger, etc. Sugar may be added to make them sweet instead of savoury. Idli may also be stuffed with a filling of potato, beans, carrot and masala. Leftover idlis can be cut-up or crushed and sautéed for a dish called idli upma. A microwave or an automatic electric steamer that is non-stick is considered to be a convenient alternative to conventional stovetop steamers. Batter preparation using a manual rocking rock grinder can be replaced by electric grinders or blenders. Many restaurants have also come up with fusion recipes of idlis such as idly manchurian, idly fry, chilly idly, stuffed idly, to name a few.
Batter fermentation mechanism
Fermentation of idli batter results in both leavening caused by the generation of carbon dioxide as well as an increase in acidity. This fermentation is performed by lactic acid bacteria, especially the heterofermentative strain Leuconostoc mesenteroides and the homofermentative strain Enterococcus faecalis (formerly classified as Streptococcus faecalis). Heterofermentative lactic acid bacteria such as L. mesenteroides generate both lactic acid as well as carbon dioxide whereas homofermentative lactic acid bacteria only generate lactic acid.
Both L. mesenteroides and E. faecalis are predominantly delivered to the batter by the black gram. Both strains start multiplying while the grains are soaking and continue to do so after grinding.
L. mesenteroides tolerates high concentrations of salt unlike most other bacteria. Hence the salt in the batter and the ongoing generation of lactic acid both suppress the growth of other undesirable micro-organisms.
Idli Day
March 30 is celebrated as World Idli Day. It was first celebrated in 2015 at Chennai.
See also
Bhapa pitha
Cuisine of Karnataka
Dhokla
List of Indian breads
List of steamed foods
Puttu
Rava idli
Sanna
References
Bibliography
Devi, Yamuna (1987). Lord Krishna's Cuisine: The Art of Indian Vegetarian Cooking, Dutton.
Jaffrey, Madhur (1988). A Taste of India, Atheneum.
Rau, Santha Rama (1969). The Cooking of India, Time-Life Books.
Andhra cuisine
Culture of Mumbai
Fermented foods
Indian breads
Indian rice dishes
Indian cuisine
Indo-Caribbean cuisine
Karnataka cuisine
Kerala cuisine
Konkani cuisine
Mangalorean cuisine
Malaysian breads
South Indian cuisine
Sri Lankan rice dishes
Steamed foods
Tamil cuisine
Telangana cuisine | Idli | [
"Biology"
] | 1,686 | [
"Fermented foods",
"Biotechnology products"
] |
413,755 | https://en.wikipedia.org/wiki/Stapler | A stapler is a mechanical device that joins pages of paper or similar material by driving a thin metal staple through the sheets and folding the ends. Staplers are widely used in government, business, offices, workplaces, homes, and schools.
The word "stapler" can actually refer to a number of different devices of varying uses. In addition to joining paper sheets together, staplers can also be used in a surgical setting to join tissue together with surgical staples to close a surgical wound (much in the same way as sutures).
Most staplers are used to join multiple sheets of paper. Paper staplers come in two distinct types: manual and electric. Manual staplers are normally hand-held, although models that are used while set on a desk or other surface are not uncommon. Electric staplers exist in a variety of different designs and models. Their primary operating function is to join large numbers of paper sheets together in rapid succession. Some electric staplers can join up to 20 sheets at a time. Typical staplers are a third-class lever.
History
The growing usage of paper in the 19th century created a demand for an efficient paper fastener.
In 1841 Slocum and Jillion invented a "Machine for Sticking Pins into Paper", which is often believed to be the first stapler. But their patent (September 30, 1841, Patent #2275) is for a device used for packaging pins. In 1866, George McGill received U.S. patent 56,587 for a small, bendable brass paper fastener that was a precursor to the modern staple. In 1867, he received U.S. patent 67,665 for a press to insert the fastener into paper. He showed his invention at the 1876 Centennial Exhibition in Philadelphia, Pennsylvania, and continued to work on these and other various paper fasteners throughout the 1880s. In 1868 an English patent for a stapler was awarded to C. H. Gould, and in the U.S., Albert Kletzker of St. Louis, Missouri, also patented a device.
In 1877 Henry R. Heyl filed patent number 195,603 for the first machines to both insert and clinch a staple in one step, and for this reason some consider him the inventor of the modern stapler. In 1876 and 1877, Heyl also filed patents for the Novelty Paper Box Manufacturing Co. of Philadelphia, PA. However, the N. P. B. Manufacturing Co.'s inventions were to be used to staple boxes and books.
The first machine to hold a magazine of many pre-formed staples came out in 1878.
On February 18, 1879, George McGill received patent 212,316 for the McGill Single-Stroke Staple Press, the first commercially successful stapler. This device weighed over two and a half pounds and loaded a single wire staple, which it could drive through several sheets of paper.
The first published use of the word "stapler" to indicate a machine for fastening papers with a thin metal wire was in an advertisement in the American Munsey's Magazine in 1901.
In the early 1900s, several devices were developed and patented that punched and folded papers to attach them to each other without a metallic clip. The Clipless Stand Machine (made in North Berwick) was sold from 1909 into the 1920s. It cut a tongue in the paper that it folded back and tucked in. Bump's New Model Paper Fastener used a similar cutting and weaving technology.
The modern stapler
In 1941, the type of paper stapler that is the most common in use was developed: the four-way paper stapler. With the four-way, the operator could either use the stapler to staple papers to wood or cardboard, use pliers for bags, or use the normal way with the head positioned a small distance above the stapling plate. The stapling plate is known as the anvil. The anvil often has two settings: the first, and by far most common, is the reflexive setting, also known as the "permanent" setting. In this position, the legs of the staple are folded toward the center of the crossbar. It is used to staple papers which are not expected to need separation. If rotated 180° or slid to its second position, the anvil will be set on the sheer setting, also known as the "temporary" or "straight" setting. In this position, the legs of the staple are folded outwards, away from the crossbar, resulting in the legs and crossbar being in more or less a straight line. Stapling with this setting will result in more weakly secured papers but a staple that is much easier to remove. The use of the second setting is almost never seen, however, due to the prevalence of staple removers and the general lack of knowledge about its use. Some simple modern staplers feature a fixed anvil that lacks the sheer position.
Modern staplers continue to evolve and adapt to users' changing habits. Less effort or easy-squeeze/use staplers, for example, use different leverage efficiencies to reduce the amount of force the user needs to apply. As a result, these staplers tend to be used in work environments where repetitive, large stapling jobs are routine.
Some modern desktop staplers make use of Flat Clinch technology. With Flat Clinch staplers, the staple legs first pierce the paper and are then bent over and pressed absolutely flat against the paper – doing away with the two-setting anvil commonly used and making use of a recessed stapling base in which the legs are folded. Accordingly, staples do not have sharper edges exposed and lead to flatter stacking of paper – saving on filing and binder space.
Some photocopiers feature an integrated stapler allowing copies of documents to be automatically stapled as they are printed.
Industry
In 2012, $80 million worth of staplers were sold in the US. The dominant US manufacturer is Swingline.
Methods
Permanent fastening binds items by driving the staple through the material and into an anvil, a small metal plate that bends the ends, usually inward. On most modern staplers, the anvil rotates or slides to change between bending the staple ends inward for permanent stapling or outward for pinning (see below). Clinches can be standard, squiggled, flat, or rounded completely adjacent to the paper to facilitate neater document stacking.
Pinning temporarily binds documents or other items. To pin, the anvil slides or rotates so that the staple bends outwards instead of inwards. Some staplers pin by bending one leg of the staple inwards and the other outwards. The staple binds the item with relative security but is easily removed.
Tacking fastens objects to surfaces, such as bulletin boards or walls. A stapler that can tack has a base that folds back out of the way, so staples drive directly into an object rather than fold against the anvil. In this position, the staples are driven similar to the way a staple gun works, but with less force driving the staple.
Saddle staplers have an inverted V-shaped saddle for stapling pre-fold sheets to make booklets.
Stapleless staplers, invented in 1910, are a means of stapling that punches out a small flap of paper and weaves it through a notch. A more recent alternative method avoids the resulting hole by crimping the pages together with serrated metal teeth instead.
Surgical staplers
Surgeons can use surgical staplers in place of sutures to close the skin or during surgical anastomosis.
A skin stapler does not resemble a standard stapler, as it has no anvil. Skin staples are commonly preshaped into an "M." Pressing the stapler into the skin and applying pressure onto the handle bends the staple through the skin and into the fascia until the two ends almost meet in the middle to form a rectangle.
Staplers are commonly used intra-operatively during bowel resections in colorectal surgery. Often these staplers have an integral knife which, as the staples deploy, cuts through the bowel and maintains the aseptic field. The staples, made from surgical steel, are typically supplied in disposable sterilized cartridges.
Types
See also
Office Space, a 1999 comedy film where a stapler is one of the plot objects
Staple remover
Staple gun
References
External links
Fasteners
American inventions
Packaging machinery
Stationery
19th-century inventions
Office equipment | Stapler | [
"Engineering"
] | 1,732 | [
"Construction",
"Packaging machinery",
"Fasteners",
"Industrial machinery"
] |
414,048 | https://en.wikipedia.org/wiki/40%20Eridani | 40 Eridani is a triple star system in the constellation of Eridanus, abbreviated 40 Eri. It has the Bayer designation Omicron2 Eridani, which is Latinized from ο2 Eridani and abbreviated Omicron2 Eri or ο2 Eri. Based on parallax measurements taken by the Gaia mission, it is about 16.3 light-years from the Sun.
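The quoted distance follows from the standard parallax relation, distance in parsecs = 1 / parallax in arcseconds. The sketch below back-computes it from a parallax of roughly 0.200 arcseconds; that rounded parallax value is an illustrative assumption rather than a figure quoted in the text.

```python
LY_PER_PC = 3.2616          # light-years per parsec

parallax_arcsec = 0.200     # approximate parallax of 40 Eridani (illustrative, rounded)
distance_pc = 1.0 / parallax_arcsec
distance_ly = distance_pc * LY_PER_PC

print(f"distance ~ {distance_ly:.1f} light-years")   # ~16.3 ly, matching the quoted value
```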
The primary star of the system, designated 40 Eridani A and named Keid, is easily visible to the naked eye. It is orbited by a binary pair whose two components are designated 40 Eridani B and C, and which were discovered on January 31, 1783, by William Herschel. It was again observed by Friedrich Struve in 1825 and by Otto Struve in 1851.
In 1910, it was discovered that although component B was a faint star, it was white in color. This meant that it had to be a small star; in fact it was a white dwarf, the first discovered. Although it is neither the closest white dwarf, nor the brightest in the night sky, it is by far the easiest to observe; it is nearly three magnitudes brighter than Van Maanen's Star, the nearest solitary white dwarf, and unlike the companions of Procyon and Sirius it is not swamped in the glare of a much brighter primary.
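The "nearly three magnitudes brighter" comparison with Van Maanen's Star can be translated into a brightness ratio with the standard magnitude relation; the round value of 3.0 magnitudes below is an assumption for illustration.

```python
# Convert a magnitude difference into an apparent-brightness (flux) ratio.
delta_m = 3.0                          # "nearly three magnitudes", taken as a round figure
flux_ratio = 10 ** (0.4 * delta_m)     # each magnitude is a factor of 10**0.4 ~ 2.512

print(f"~{flux_ratio:.1f}x brighter")  # roughly 16 times brighter
```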
Nomenclature
40 Eridani is the system's Flamsteed designation and ο² Eridani (Latinised to Omicron2 Eridani) its Bayer designation. The designations of the sub-components – B and C – derive from the convention used by the Washington Multiplicity Catalog (WMC) for multiple star systems, and adopted by the International Astronomical Union (IAU). Component C also bears the variable star designation DY Eridani.
The system bore the traditional name Keid, derived from the Arabic word قيض meaning "the eggshells," alluding to its neighbour Beid (Arabic for "egg"). In 2016, the IAU organized a Working Group on Star Names (WGSN) to catalogue and standardize proper names for stars. The WGSN decided to attribute proper names to individual stars rather than entire multiple systems. It approved the name Keid for the component 40 Eridani A on 12 September 2016, and it is now so included in the List of IAU-approved Star Names.
Properties
40 Eridani A is a main-sequence dwarf of spectral type K1, 40 Eridani B is a 9th magnitude white dwarf of spectral type DA4, and 40 Eridani C is an 11th magnitude red dwarf flare star of spectral type M4.5e. When component B was a main-sequence star, it is thought to have been the most massive member of the system, but ejected most of its mass before it became a white dwarf. B and C orbit each other approximately 400 AU from the primary star, A. Their orbit has a semimajor axis of 35 AU and is rather elliptical, with an orbital eccentricity of 0.410.
As seen from the 40 Eridani system, the Sun is a 3.4-magnitude star in Hercules, near the border with Serpens Caput.
Potential for life
The habitable zone of 40 Eridani A, where a planet could exist with liquid water, is near 0.68 AU from A. At this distance a planet would complete a revolution in 223 Earth days (according to Kepler's third law), and 40 Eridani A would appear nearly 20% wider than the Sun does from Earth. An observer on a planet in the system would see the B-C pair as unusually bright white and reddish-orange stars in the night sky – magnitudes −8 and −6, slightly brighter than the appearance of Venus seen from Earth as the evening star.
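The 223-day figure can be checked with Kepler's third law in solar units, P² = a³/M; the stellar mass of about 0.84 solar masses used below is an assumption (a typical value for a K1 dwarf) and is not stated in the text.

```python
# Kepler's third law in solar units: P[years]**2 = a[AU]**3 / M[solar masses]
a_au = 0.68           # orbital distance quoted above, AU
m_star = 0.84         # assumed mass of 40 Eridani A (typical K1 dwarf), solar masses

period_years = (a_au**3 / m_star) ** 0.5
print(f"orbital period ~ {period_years * 365.25:.0f} days")   # ~223 days
```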
It is unlikely that habitable planets exist around 40 Eridani B, because any such planets would have been sterilized by its evolution into a white dwarf. As for 40 Eridani C, it is prone to flares, which cause large momentary increases in the emission of X-rays as well as visible light. This would be lethal to Earth-type life on planets near the flare star.
Search for planets
40 Eridani A shows periodic radial velocity variations, which were suggested to be caused by a planetary companion. The 42-day period is close to the stellar rotation period, which made the possible planetary nature of the signal difficult to confirm. A 2018 study found that most evidence supports a planetary origin for the signal, but this has remained controversial, with a 2021 study characterizing the signal as a false positive. As of 2022, the cause of the radial velocity variations remained inconclusive.
Further studies in 2023 and 2024 concluded that the radial velocity signal very likely does originate from stellar activity, and not from a planet.
The candidate planet would have had a minimum mass of , and would lie considerably interior to the habitable zone, receiving nine times more stellar flux than Earth, which is an even greater amount than Mercury, the innermost planet in the Solar System, on average receives from the Sun.
In fiction
In the Star Trek franchise, the planet Vulcan orbits 40 Eridani A. Vulcan has been referenced in relation to the real-life search for exoplanets in this system. The hypothetical planet 40 Eridani A b is also mentioned in the book Project Hail Mary as the home of the eponymous Eridian species.
Notes
References
External links
Keid at Jim Kaler's STARS.
Omicron(2) Eridani entry at the Internet Stellar Database.
Emission-line stars
K-type main-sequence stars
M-type main-sequence stars
White dwarfs
Eridani, 40
Triple star systems
Hypothetical planetary systems
Keid
Eridanus (constellation)
Eridani, Omicron2
1325
Durchmusterung objects
Eridani, 40
0166
026965
019849
Eridani, DY | 40 Eridani | [
"Astronomy"
] | 1,208 | [
"Eridanus (constellation)",
"Constellations"
] |
414,144 | https://en.wikipedia.org/wiki/Sterilization%20%28microbiology%29 | Sterilization refers to any process that removes, kills, or deactivates all forms of life (particularly microorganisms such as fungi, bacteria, spores, and unicellular eukaryotic organisms) and other biological agents (such as prions or viruses) present in fluid or on a specific surface or object. Sterilization can be achieved through various means, including heat, chemicals, irradiation, high pressure, and filtration. Sterilization is distinct from disinfection, sanitization, and pasteurization, in that those methods reduce rather than eliminate all forms of life and biological agents present. After sterilization, fluid or an object is referred to as being sterile or aseptic.
Applications
Foods
One of the first steps toward modernized sterilization was made by Nicolas Appert, who discovered that application of heat over a suitable period of time slowed the decay of foods and various liquids, preserving them for safe consumption for a longer time than was typical. Canning of foods is an extension of the same principle and has helped to reduce foodborne illness ("food poisoning"). Other methods of sterilizing foods include ultra-high temperature processing (which uses a shorter duration of heating), food irradiation, and high pressure (pascalization).
In the context of food, sterility typically refers to commercial sterility, "the absence of microorganisms capable of growing in the food at normal non-refrigerated conditions at which the food is likely to be held during distribution and storage" according to the Codex Alimentarius.
Medicine and surgery
In general, surgical instruments and medications that enter an already aseptic part of the body (such as the bloodstream, or penetrating the skin) must be sterile. Examples of such instruments include scalpels, hypodermic needles, and artificial pacemakers. This is also essential in the manufacture of parenteral pharmaceuticals.
Preparation of injectable medications and intravenous solutions for fluid replacement therapy requires not only sterility but also well-designed containers to prevent entry of adventitious agents after initial product sterilization.
Most medical and surgical devices used in healthcare facilities are made of materials that are able to undergo steam sterilization. However, since 1950, there has been an increase in medical devices and instruments made of materials (e.g., plastics) that require low-temperature sterilization. Ethylene oxide gas has been used since the 1950s for heat- and moisture-sensitive medical devices. Within the past 15 years, a number of new, low-temperature sterilization systems (e.g., vaporized hydrogen peroxide, peracetic acid immersion, ozone) have been developed and are being used to sterilize medical devices.
Spacecraft
There are strict international rules to protect Solar System bodies from contamination by biological material from Earth. Standards vary depending on both the type of mission and its destination; the more likely a planet is considered to be habitable, the stricter the requirements are.
Many components of instruments used on spacecraft cannot withstand very high temperatures, so techniques not requiring excessive temperatures are used as tolerated, including heating to at least , chemical sterilization, oxidization, ultraviolet, and irradiation.
Quantification
The aim of sterilization is the reduction of initially present microorganisms or other potential pathogens. The degree of sterilization is commonly expressed by multiples of the decimal reduction time, or D-value, denoting the time needed to reduce the initial number to one tenth () of its original value. Then the number of microorganisms after sterilization time is given by:
N(t) = N0 × 10^(−t/D).
The D-value is a function of sterilization conditions and varies with the type of microorganism, temperature, water activity, pH, etc. For steam sterilization (see below), typically the temperature, in degrees Celsius, is given as an index.
Theoretically, the likelihood of the survival of an individual microorganism is never zero. To compensate for this, the overkill method is often used. Using the overkill method, sterilization is performed by sterilizing for longer than is required to kill the bioburden present on or in the item being sterilized. This provides a sterility assurance level (SAL) equal to the probability of a non-sterile unit.
For high-risk applications, such as medical devices and injections, a sterility assurance level of at least 10^−6 is required by the United States Food and Drug Administration (FDA).
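As a concrete illustration of this bookkeeping, the short Python sketch below applies the D-value relation with invented numbers (an assumed bioburden and an assumed D-value); it is an illustration of the arithmetic, not a validated cycle calculation.

```python
# Survivor count after a hold time t, given a decimal reduction time (D-value):
# N(t) = N0 * 10**(-t / D); the exponent t/D is the number of log reductions.
def survivors(n0, t_minutes, d_minutes):
    return n0 * 10 ** (-t_minutes / d_minutes)

n0 = 1e6   # assumed initial bioburden per item (CFU)
d = 1.5    # assumed D-value at the cycle temperature, in minutes

t = 12 * d  # an "overkill"-style cycle: twelve decimal reductions
print(f"hold {t:.1f} min -> expected survivors per item: {survivors(n0, t, d):.1e}")
# Starting from 1e6 organisms, twelve log reductions leave ~1e-6 expected
# survivors per item, i.e. a sterility assurance level (SAL) of 10^-6.
```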
Heat
Steam
Steam sterilization, also known as moist heat sterilization, uses heated saturated steam under pressure to inactivate or kill microorganisms via denaturation of macromolecules, primarily proteins. This method is a faster process than dry heat sterilization. Steam sterilization is performed using an autoclave, sometimes called a converter or steam sterilizer. The object or liquid is placed in the autoclave chamber, which is then sealed and heated using pressurized steam to a temperature set point for a defined period of time. Steam sterilization cycles can be categorized as either pre-vacuum or gravity displacement. Gravity displacement cycles rely on the lower density of the injected steam to force cooler, denser air out of the chamber drain. In comparison, pre-vacuum cycles create a vacuum in the chamber to remove cool dry air prior to injecting saturated steam, resulting in faster heating and shorter cycle times. Typical steam sterilization cycles are between 3 and 30 minutes at , but adjustments may be made depending on the bioburden of the article being sterilized, its resistance (D-value) to steam sterilization, the article's heat tolerance, and the required sterility assurance level. Following the completion of a cycle, liquids in a pressurized autoclave must be cooled slowly to avoid boiling over when the pressure is released. This may be achieved by gradually depressurizing the sterilization chamber and allowing liquids to evaporate under a negative pressure, while cooling the contents.
Proper autoclave treatment will inactivate all resistant bacterial spores in addition to fungi, bacteria, and viruses, but is not expected to eliminate all prions, which vary in their heat resistance. For prion elimination, various recommendations state for 60 minutes or for at least 18 minutes. The 263K scrapie prion is inactivated relatively quickly by such sterilization procedures; however, other strains of scrapie and strains of Creutzfeldt-Jakob disease (CJD) and bovine spongiform encephalopathy (BSE) are more resistant. Using mice as test animals, one experiment showed that heating BSE-positive brain tissue at for 18 minutes resulted in only a 2.5 log decrease in prion infectivity.
Most autoclaves have meters and charts that record or display information, particularly temperature and pressure as a function of time. The information is checked to ensure that the conditions required for sterilization have been met. Indicator tape is often placed on the packages of products prior to autoclaving, and some packaging incorporates indicators. The indicator changes color when exposed to steam, providing a visual confirmation.
Biological indicators can also be used to independently confirm autoclave performance. Simple biological indicator devices are commercially available, based on microbial spores. Most contain spores of the heat-resistant microbe Geobacillus stearothermophilus (formerly Bacillus stearothermophilus), which is extremely resistant to steam sterilization. Biological indicators may take the form of glass vials of spores and liquid media, or as spores on strips of paper inside glassine envelopes. These indicators are placed in locations where it is difficult for steam to reach to verify that steam is penetrating that area.
For autoclaving, cleaning is critical. Extraneous biological matter or grime may shield organisms from steam penetration. Proper cleaning can be achieved through physical scrubbing, sonication, ultrasound, or pulsed air.
Pressure cooking and canning is analogous to autoclaving, and when performed correctly renders food sterile.
To sterilize waste materials that are chiefly composed of liquid, a purpose-built effluent decontamination system can be utilized. These devices can function using a variety of sterilants, although using heat via steam is most common.
Dry
Dry heat was the first method of sterilization and is a longer process than moist heat sterilization. The destruction of microorganisms through the use of dry heat is a gradual phenomenon. With longer exposure to lethal temperatures, the number of killed microorganisms increases. Forced ventilation of hot air can be used to increase the rate at which heat is transferred to an organism and reduce the temperature and amount of time needed to achieve sterility. At higher temperatures, shorter exposure times are required to kill organisms. This can reduce heat-induced damage to food products.
The standard setting for a hot air oven is at least two hours at . A rapid method heats air to for 6 minutes for unwrapped objects and 12 minutes for wrapped objects. Dry heat has the advantage that it can be used on powders and other heat-stable items that are adversely affected by steam (e.g., it does not cause rusting of steel objects).
Flaming
Flaming is done to inoculation loops and straight-wires in microbiology labs for streaking. Leaving the loop in the flame of a Bunsen burner or alcohol burner until it glows red ensures that any infectious agent is inactivated or killed. This is commonly used for small metal or glass objects, but not for large objects (see Incineration below). However, during the initial heating, infectious material may be sprayed from the wire surface before it is killed, contaminating nearby surfaces and objects. Therefore, special heaters have been developed that surround the inoculating loop with a heated cage, ensuring that such sprayed material does not further contaminate the area. Another problem is that gas flames may leave carbon or other residues on the object if the object is not heated enough. A variation on flaming is to dip the object in a 70% or more concentrated solution of ethanol, then briefly leave the object in the flame of a Bunsen burner. The ethanol will ignite and burn off rapidly, leaving less residue than a gas flame
Incineration
Incineration is a waste treatment process that involves the combustion of organic substances contained in waste materials. This method also burns any organism to ash. It is used to sterilize medical and other biohazardous waste before it is discarded with non-hazardous waste. Bacteria incinerators are mini furnaces that incinerate and kill off any microorganisms that may be on an inoculating loop or wire.
Tyndallization
Named after John Tyndall, tyndallization is an obsolete and lengthy process designed to reduce the level of activity of sporulating microbes that are left by a simple boiling water method. The process involves boiling for a period of time (typically 20 minutes) at atmospheric pressure, cooling, incubating for a day, and then repeating the process a total of three to four times. The incubation allows heat-resistant spores surviving the previous boiling period to germinate and form the heat-sensitive vegetative (growing) stage, which can be killed by the next boiling step. This is effective because many spores are stimulated to grow by the heat shock. The procedure only works for media that can support bacterial growth, and will not sterilize non-nutritive substrates like water. Tyndallization is also ineffective against prions.
Glass bead sterilizers
Glass bead sterilizers work by heating glass beads to . Instruments are then quickly doused in these glass beads, which heat the object while physically scraping contaminants off their surface. Glass bead sterilizers were once a common sterilization method employed in dental offices as well as biological laboratories, but have not been approved by the U.S. Food and Drug Administration (FDA) and Centers for Disease Control and Prevention (CDC) for use as sterilizers since 1997. They are still popular in European and Israeli dental practices, although there are no current evidence-based guidelines for using this sterilizer.
Chemical sterilization
Chemicals are also used for sterilization. Heating provides a reliable way to rid objects of all transmissible agents, but it is not always appropriate if it will damage heat-sensitive materials such as biological materials, fiber optics, electronics, and many plastics. In these situations, chemicals either in a gaseous or liquid form, can be used as sterilants. While the use of gas and liquid chemical sterilants avoids the problem of heat damage, users must ensure that the article to be sterilized is chemically compatible with the sterilant being used and that the sterilant is able to reach all surfaces that must be sterilized (typically cannot penetrate packaging). In addition, the use of chemical sterilants poses new challenges for workplace safety, as the properties that make chemicals effective sterilants usually make them harmful to humans. The procedure for removing sterilant residue from the sterilized materials varies depending on the chemical and process that is used.
Ethylene oxide
Ethylene oxide (EO, EtO) gas treatment is one of the common methods used to sterilize, pasteurize, or disinfect items because of its wide range of material compatibility. It is also used to process items that are sensitive to processing with other methods, such as radiation (gamma, electron beam, X-ray), heat (moist or dry), or other chemicals. Ethylene oxide treatment is the most common chemical sterilization method, used for approximately 70% of total sterilizations, and for over 50% of all disposable medical devices.
Ethylene oxide treatment is generally carried out between with relative humidity above 30% and a gas concentration between 200 and 800 mg/L. Typically, the process lasts for several hours. Ethylene oxide is highly effective, as it penetrates all porous materials, and it can penetrate through some plastic materials and films. Ethylene oxide kills all known microorganisms, such as bacteria (including spores), viruses, and fungi (including yeasts and moulds), and is compatible with almost all materials even when used repeatedly. It is flammable, toxic, and carcinogenic, although adverse health effects have been reported only when it is not used in compliance with published requirements. Ethylene oxide sterilizers and processes require biological validation after sterilizer installation, significant repairs, or process changes.
The traditional process consists of a preconditioning phase (in a separate room or cell), a processing phase (more commonly in a vacuum vessel and sometimes in a pressure rated vessel), and an aeration phase (in a separate room or cell) to remove EO residues and lower by-products such as ethylene chlorohydrin (EC or ECH) and, of lesser importance, ethylene glycol (EG). An alternative process, known as all-in-one processing, also exists for some products whereby all three phases are performed in the vacuum or pressure rated vessel. This latter option can facilitate faster overall processing time and residue dissipation.
The most common EO processing method is the gas chamber. To benefit from economies of scale, EO has traditionally been delivered by filling a large chamber with a combination of gaseous EO, either as pure EO, or with other gases used as diluents; diluents include chlorofluorocarbons (CFCs), hydrochlorofluorocarbons (HCFCs), and carbon dioxide.
Ethylene oxide is still widely used by medical device manufacturers. Since EO is explosive at concentrations above 3%, EO was traditionally supplied with an inert carrier gas, such as a CFC or HCFC. The use of CFCs or HCFCs as the carrier gas was banned because of concerns of ozone depletion. These halogenated hydrocarbons are being replaced by systems using 100% EO, because of regulations and the high cost of the blends. In hospitals, most EO sterilizers use single-use cartridges because of the convenience and ease of use compared to the former plumbed gas cylinders of EO blends.
It is important to adhere to patient and healthcare personnel government specified limits of EO residues in and/or on processed products, operator exposure after processing, during storage and handling of EO gas cylinders, and environmental emissions produced when using EO.
The U.S. Occupational Safety and Health Administration (OSHA) has set the permissible exposure limit (PEL) at 1 ppm – calculated as an 8-hour time-weighted average (TWA) – and 5 ppm as a 15-minute excursion limit (EL). The National Institute for Occupational Safety and Health's (NIOSH) immediately dangerous to life and health limit (IDLH) for EO is 800 ppm. The odor threshold is around 500 ppm, so EO is imperceptible until concentrations are well above the OSHA PEL. Therefore, OSHA recommends that continuous gas monitoring systems be used to protect workers using EO for processing.
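For readers unfamiliar with how a time-weighted average is computed, the short sketch below shows the arithmetic behind an 8-hour TWA using invented exposure intervals; it illustrates the concept only and is not a compliance calculation.

```python
# 8-hour time-weighted average (TWA): sum of (concentration x duration)
# over the shift, divided by 8 hours. Intervals below are invented.
intervals = [     # (concentration in ppm, duration in hours)
    (0.5, 6.0),   # routine work near the sterilizer
    (3.0, 0.5),   # brief task while unloading the chamber
    (0.0, 1.5),   # time away from the source
]

twa = sum(c * t for c, t in intervals) / 8.0
print(f"8-hour TWA = {twa:.2f} ppm (OSHA PEL for EO is 1 ppm)")
```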
Nitrogen dioxide
Nitrogen dioxide (NO2) gas is a rapid and effective sterilant for use against a wide range of microorganisms, including common bacteria, viruses, and spores. The unique physical properties of NO2 gas allow for sterilant dispersion in an enclosed environment at room temperature and atmospheric pressure. The mechanism for lethality is the degradation of DNA in the spore's core through nitration of the phosphate backbone, which kills the exposed organism as it absorbs NO2. This degradation occurs even at very low concentrations of the gas. NO2 has a boiling point of at sea level, which results in a relatively high saturated vapour pressure at ambient temperature. Because of this, liquid NO2 may be used as a convenient source for the sterilant gas. Liquid NO2 is often referred to by the name of its dimer, dinitrogen tetroxide (N2O4). Additionally, the low levels of concentration required, coupled with the high vapour pressure, assure that no condensation occurs on the devices being sterilized. This means that no aeration of the devices is required immediately following the sterilization cycle. NO2 is also less corrosive than other sterilant gases, and is compatible with most medical materials and adhesives.
The most-resistant organism (MRO) to sterilization with NO2 gas is the spore of Geobacillus stearothermophilus, which is the same MRO for both steam and hydrogen peroxide sterilization processes. The spore form of G. stearothermophilus has been well characterized over the years as a biological indicator in sterilization applications. Microbial inactivation of G. stearothermophilus with NO2 gas proceeds rapidly in a log-linear fashion, as is typical of other sterilization processes. Noxilizer, Inc. has commercialized this technology to offer contract sterilization services for medical devices at its Baltimore, Maryland (USA) facility. This has been demonstrated in Noxilizer's lab in multiple studies and is supported by published reports from other labs. These same properties also allow for quicker removal of the sterilant and residual gases through aeration of the enclosed environment. The combination of rapid lethality and easy removal of the gas allows for shorter overall cycle times during the sterilization (or decontamination) process and a lower level of sterilant residuals than are found with other sterilization methods. Eniware, LLC has developed a portable, power-free sterilizer that uses no electricity, heat, or water. The 25 liter unit makes sterilization of surgical instruments possible for austere forward surgical teams, in health centers throughout the world with intermittent or no electricity and in disaster relief and humanitarian crisis situations. The 4-hour cycle uses a single use gas generation ampoule and a disposable scrubber to remove NO2 gas.
Ozone
Ozone is used in industrial settings to sterilize water and air, as well as a disinfectant for surfaces. It has the benefit of being able to oxidize most organic matter. On the other hand, it is a toxic and unstable gas that must be produced on-site, so it is not practical to use in many settings.
Ozone offers many advantages as a sterilant gas; ozone is a very efficient sterilant because of its strong oxidizing properties (E° = 2.076 V vs. SHE), capable of destroying a wide range of pathogens, including prions, without the need for handling hazardous chemicals, since the ozone is generated within the sterilizer from medical-grade oxygen. The high reactivity of ozone means that waste ozone can be destroyed by passing over a simple catalyst that reverts it to oxygen and ensures that the cycle time is relatively short. The disadvantage of using ozone is that the gas is very reactive and very hazardous. NIOSH's IDLH for ozone is smaller than the IDLH for ethylene oxide. NIOSH and OSHA have set the PEL for ozone at , calculated as an 8-hour time-weighted average. The sterilant gas manufacturers include many safety features in their products, but prudent practice is to provide continuous monitoring of exposure to ozone, in order to provide a rapid warning in the event of a leak. Monitors for determining workplace exposure to ozone are commercially available.
Glutaraldehyde and formaldehyde
Glutaraldehyde and formaldehyde solutions (also used as fixatives) are accepted liquid sterilizing agents, provided that the immersion time is sufficiently long. To kill all spores in a clear liquid can take up to 22 hours with glutaraldehyde and even longer with formaldehyde. The presence of solid particles may lengthen the required period or render the treatment ineffective. Sterilization of blocks of tissue can take much longer, due to the time required for the fixative to penetrate. Glutaraldehyde and formaldehyde are volatile, and toxic by both skin contact and inhalation. Glutaraldehyde has a short shelf-life (<2 weeks), and is expensive. Formaldehyde is less expensive and has a much longer shelf-life if some methanol is added to inhibit polymerization of the chemical to paraformaldehyde, but is much more volatile. Formaldehyde is also used as a gaseous sterilizing agent; in this case, it is prepared on-site by depolymerization of solid paraformaldehyde. Many vaccines, such as the original Salk polio vaccine, are sterilized with formaldehyde.
Hydrogen peroxide
Hydrogen peroxide, in both liquid and as vaporized hydrogen peroxide (VHP), is another chemical sterilizing agent. Hydrogen peroxide is a strong oxidant, which allows it to destroy a wide range of pathogens. Hydrogen peroxide is used to sterilize heat- or temperature-sensitive articles, such as rigid endoscopes. In medical sterilization, hydrogen peroxide is used at higher concentrations, ranging from around 35% up to 90%. The biggest advantage of hydrogen peroxide as a sterilant is the short cycle time. Whereas the cycle time for ethylene oxide may be 10 to 15 hours, some modern hydrogen peroxide sterilizers have a cycle time as short as 28 minutes.
Drawbacks of hydrogen peroxide include material compatibility, a lower capability for penetration and operator health risks. Products containing cellulose, such as paper, cannot be sterilized using VHP and products containing nylon may become brittle. The penetrating ability of hydrogen peroxide is not as good as ethylene oxide and so there are limitations on the length and diameter of the lumen of objects that can be effectively sterilized. Hydrogen peroxide is a primary irritant and the contact of the liquid solution with skin will cause bleaching or ulceration depending on the concentration and contact time. It is relatively non-toxic when diluted to low concentrations, but is a dangerous oxidizer at high concentrations (> 10% w/w). The vapour is also hazardous, primarily affecting the eyes and respiratory system. Even short-term exposures can be hazardous and NIOSH has set the IDLH at 75 ppm, less than 1/10 the IDLH for ethylene oxide (800 ppm). Prolonged exposure to lower concentrations can cause permanent lung damage and consequently, OSHA has set the permissible exposure limit to 1.0 ppm, calculated as an 8-hour time-weighted average. Sterilizer manufacturers go to great lengths to make their products safe through careful design and incorporation of many safety features, though there are still workplace exposures of hydrogen peroxide from gas sterilizers documented in the FDA Manufacturer and User Facility Device Experience (MAUDE) database. When using any type of gas sterilizer, prudent work practices should include good ventilation, a continuous gas monitor for hydrogen peroxide, and good work practices and training.
Vaporized hydrogen peroxide (VHP) is used to sterilize large enclosed and sealed areas, such as entire rooms and aircraft interiors.
Although toxic, VHP breaks down in a short time to water and oxygen.
Peracetic acid
Peracetic acid (0.2%) is a recognized sterilant by the FDA for use in sterilizing medical devices such as endoscopes. Peracetic acid which is also known as peroxyacetic acid is a chemical compound often used in disinfectants such as sanitizers. It is most commonly produced by the reaction of acetic acid with hydrogen peroxide by using an acid catalyst. Peracetic acid is never sold in un-stabilized solutions which is why it is considered to be environmentally friendly. Peracetic acid is a colorless liquid and the molecular formula of peracetic acid is C2H4O3 or CH3COOOH. More recently, peracetic acid is being used throughout the world as more people are using fumigation to decontaminate surfaces to reduce the risk of COVID-19 and other diseases.
Potential for chemical sterilization of prions
Prions are highly resistant to chemical sterilization. Treatment with aldehydes, such as formaldehyde, has actually been shown to increase prion resistance. Hydrogen peroxide (3%) used for 1 hour was shown to be ineffective, providing less than 3 logs (10^−3) reduction in contamination. Iodine, formaldehyde, glutaraldehyde, and peracetic acid also fail this test (1 hour treatment). Only chlorine, phenolic compounds, guanidinium thiocyanate, and sodium hydroxide reduce prion levels by more than 4 logs; chlorine (too corrosive to use on certain objects) and sodium hydroxide are the most consistent. Many studies have shown the effectiveness of sodium hydroxide.
Radiation sterilization
Sterilization can be achieved using electromagnetic radiation, such as ultraviolet light (UV), X-rays, and gamma rays, or irradiation by subatomic particles such as electron beams. Electromagnetic or particulate radiation can be energetic enough to ionize atoms or molecules (ionizing radiation), or less energetic (non-ionizing radiation).
Non-ionizing radiation sterilization
UV irradiation (from a germicidal lamp) is useful for sterilization of surfaces and some transparent objects. Many objects that are transparent to visible light absorb UV. UV irradiation is routinely used to sterilize the interiors of biological safety cabinets between uses, but is ineffective in shaded areas, including areas under dirt (which may become polymerized after prolonged irradiation, so that it is very difficult to remove). It also damages some plastics, such as polystyrene foam if exposed for prolonged periods of time.
Ionizing radiation sterilization
The safety of irradiation facilities is regulated by the International Atomic Energy Agency of the United Nations and monitored by the different national Nuclear Regulatory Commissions (NRC). The radiation exposure accidents that have occurred in the past are documented by the agency and thoroughly analyzed to determine the cause and improvement potential. Such improvements are then mandated to retrofit existing facilities and future design.
Gamma radiation is very penetrating, and is commonly used for sterilization of disposable medical equipment, such as syringes, needles, cannulas and IV sets, and food. It is emitted by a radioisotope, usually cobalt-60 (60Co) or caesium-137 (137Cs), which have photon energies of up to 1.3 and 0.66 MeV, respectively.
Use of a radioisotope requires shielding for the safety of the operators while in use and in storage. With most designs, the radioisotope is lowered into a water-filled source storage pool, which absorbs radiation and allows maintenance personnel to enter the radiation shield. One variant keeps the radioisotope under water at all times and lowers the product to be irradiated in the water in hermetically sealed bells; no further shielding is required for such designs. Other uncommonly used designs are dry storage, providing movable shields that reduce radiation levels in areas of the irradiation chamber, etc. An incident in Decatur, Georgia, USA, where water-soluble caesium-137 leaked into the source storage pool, required Nuclear Regulatory Commission (NRC) intervention and led to the use of this radioisotope being almost entirely discontinued in favor of the more costly, non-water-soluble cobalt-60. Cobalt-60 gamma photons have about twice the energy, and hence greater penetrating range, of caesium-137-produced radiation.
Electron beam processing is also commonly used for sterilization. Electron beams use an on-off technology and provide a much higher dosing rate than gamma or X-rays. Due to the higher dose rate, less exposure time is needed and thereby any potential degradation to polymers is reduced. Because electrons carry a charge, electron beams are less penetrating than both gamma and X-rays. Facilities rely on substantial concrete shields to protect workers and the environment from radiation exposure.
High-energy X-rays (produced by bremsstrahlung) allow irradiation of large packages and pallet loads of medical devices. They are sufficiently penetrating to treat multiple pallet loads of low-density packages with very good dose uniformity ratios. X-ray sterilization does not require chemical or radioactive material: high-energy X-rays are generated at high intensity by an X-ray generator that does not require shielding when not in use. X-rays are generated by bombarding a dense material (target) such as tantalum or tungsten with high-energy electrons, in a process known as bremsstrahlung conversion. These systems are energy-inefficient, requiring much more electrical energy than other systems for the same result.
Irradiation with X-rays, gamma rays, or electrons does not make materials radioactive, because the energy used is too low. Generally an energy of at least 10 MeV is needed to induce radioactivity in a material. Neutrons and very high-energy particles can make materials radioactive, but have good penetration, whereas lower energy particles (other than neutrons) cannot make materials radioactive, but have poorer penetration.
Sterilization by irradiation with gamma rays may however affect material properties.
Irradiation is used by the United States Postal Service to sterilize mail in the Washington, D.C. area. Some foods (e.g., spices and ground meats) are sterilized by irradiation.
Subatomic particles may be more or less penetrating and may be generated by a radioisotope or a device, depending upon the type of particle.
Sterile filtration
Fluids that would be damaged by heat, irradiation, or chemical sterilization, such as drug solution, can be sterilized by microfiltration using membrane filters. This method is commonly used for heat labile pharmaceuticals and protein solutions in medicinal drug processing. A microfilter with pore size of usually 0.22 μm will effectively remove microorganisms. Some Staphylococcal species have, however, been shown to be flexible enough to pass through 0.22 μm filters. In the processing of biologics, viruses must be removed or inactivated, requiring the use of nanofilters with a smaller pore size (20–50 nm). Smaller pore sizes lower the flow rate, so in order to achieve higher total throughput or to avoid premature blockage, pre-filters might be used to protect small pore membrane filters. Tangential flow filtration (TFF) and alternating tangential flow (ATF) systems also reduce particulate accumulation and blockage.
Membrane filters used in production processes are commonly made from materials such as mixed cellulose ester or polyethersulfone (PES). The filtration equipment and the filters themselves may be purchased as pre-sterilized disposable units in sealed packaging or must be sterilized by the user, generally by autoclaving at a temperature that does not damage the fragile filter membranes. To ensure proper functioning of the filter, the membrane filters are integrity tested post-use and sometimes before use. The nondestructive integrity test assures that the filter is undamaged and is a regulatory requirement. Typically, terminal pharmaceutical sterile filtration is performed inside of a cleanroom to prevent contamination.
Preservation of sterility
Instruments that have undergone sterilization can be maintained in such condition by containment in sealed packaging until use.
Aseptic technique is the act of maintaining sterility during procedures.
See also
Antibacterial soap
Asepsis
Aseptic processing
Contamination control
Electron irradiation
Food packaging
Food preservation
Food safety
Spaulding classification
Sterilant gas monitoring
References
Sources
WHO - Infection Control Guidelines for Transmissible Spongiform Encephalopathies. Retrieved Jul 10, 2010
Control of microbes
Innovative Technologies for the Biofunctionalisation and Terminal Sterilisation of Medical Devices
Sterilization of liquids, solids, waste in disposal bags and hazardous biological substances
Pharmaceutical Filtration - The Management of Organism Removal, Meltzer TH, Jornitz MW, PDA/DHI 1998
"Association for Advancement of Medical Instrumentation ANSI/AAMI ST41-Ehylene Oxyde Sterilization in Healthcare facilities: Safety and Effectiveness. Arlington, VA: Association for Advancement of Medical Instrumentation; 2000."
“US Department of Labor, Occupational Safety and Health Administration.29 CFR 1910.1020. Access to Employee Medical Records.". October 26, 2007.
Perioperative Standards and Recommended Practices, AORN 2013,
Biocides
Electron beam
Hygiene
Microbiology | Sterilization (microbiology) | [
"Chemistry",
"Biology",
"Environmental_science"
] | 7,345 | [
"Microbiology techniques",
"Biocides",
"Sterilization (microbiology)",
"Toxicology"
] |
414,259 | https://en.wikipedia.org/wiki/Trichomonas%20vaginalis | Trichomonas vaginalis is an anaerobic, flagellated protozoan parasite and the causative agent of a sexually transmitted disease called trichomoniasis. It is the most common pathogenic protozoan that infects humans in industrialized countries. Infection rates in men and women are similar but women are usually symptomatic, while infections in men are usually asymptomatic. Transmission usually occurs via direct, skin-to-skin contact with an infected individual, most often through vaginal intercourse. It is estimated that 160 million cases of infection are acquired annually worldwide. The estimates for North America alone are between 5 and 8 million new infections each year, with an estimated rate of asymptomatic cases as high as 50%. Usually treatment consists of metronidazole and tinidazole.
Clinical
History
Alfred Francois Donné (1801–1878) was the first to describe a procedure to diagnose trichomoniasis through "the microscopic observation of motile protozoa in vaginal or cervical secretions" in 1836. He published this in the article entitled, "Animalcules observés dans les matières purulentes et le produit des sécrétions des organes génitaux de l'homme et de la femme" in the journal, Comptes rendus de l'Académie des sciences. With it, he created the binomial name of the parasite as Trichomonas vaginalis. 80 years after the initial discovery of the parasitic protozoan, Hohne declared Trichomoniasis as a clinical entity in 1916.
Signs and symptoms
Most women (85%) and men (77%) infected with T. vaginalis do not have symptoms. Half of these women can develop symptoms within 6 months and can have vaginal erythema, dyspareunia, dysuria, and vaginal discharge, which is often diffuse, malodorous, and yellow-green, along with itching in the genital region. A "strawberry cervix" occurs in about 5% of women. In men, it can cause urethritis, epididymitis and prostatitis.
Complications
Some of the complications of Trichomonas vaginalis infection in women include preterm delivery, low birth weight, and increased mortality, as well as predisposition to human immunodeficiency virus infection, AIDS, and cervical cancer. Trichomonas vaginalis has been found in diverse locations within the body, such as the urinary tract, fallopian tubes, and pelvis, and can cause pneumonia, bronchitis, and oral lesions.
Diagnosis
Classically, with a cervical smear, infected women may have a transparent "halo" around their superficial cell nucleus, but more typically the organism itself is seen with a "slight cyanophilic tinge, faint eccentric nuclei, and fine acidophilic granules." It is unreliably detected by studying a genital discharge or with a cervical smear because of their low sensitivity. Trichomonas vaginalis is also routinely diagnosed via a wet mount, in which motility is observed. Currently, the most common method of diagnosis is via overnight culture, with a sensitivity range of 75–95%. Newer methods, such as rapid antigen testing and transcription-mediated amplification, have even greater sensitivity, but are not in widespread use.
Treatment
Infection is treated and cured with metronidazole or tinidazole. The CDC recommends a one time dose of 2 grams of either metronidazole or tinidazole as the first-line treatment; the alternative treatment recommended is 500 milligrams of metronidazole, twice daily, for seven days if there is failure of the single-dose regimen. Medication should be prescribed to any sexual partner(s) as well because they may be asymptomatic carriers.
Morphology
Trichomonas vaginalis exists in only one morphological stage, a trophozoite, and cannot encyst (or form cysts.) This protozoan does not typically adhere to one shape, as in different conditions, the parasite has the ability to change. When in culture separate from the host, it usually displays a more "pear" or oval shaped morphology, but when present in a living host, specifically on the epithelial cells of the vaginal wall, the shape is more "amoeboid". It is slightly larger than a white blood cell, measuring 9 × 7 μm. In both forms, Trichomonas vaginalis has five flagella – four protruding from the front or anterior of the parasite and the fifth on the back or posterior end. The functionality of the fifth flagellum is not known. In addition, a barb-like axostyle projects opposite the four-flagella bundle. All of these flagella are connected to an "undulating" membrane. The axostyle may be used for attachment to surfaces and may also cause the tissue damage seen in trichomoniasis. The nucleus is usually elongated, and is located near the anterior end of the protozoan within the cytoplasm which contains many hydrogenosomes (closed-membrane organelle with the ability to produce both adenosine triphosphate and hydrogen while in anaerobic conditions.)
While Trichomonas vaginalis does not have a cyst form, the organism can survive for up to 24 hours in urine, semen, or even water samples. A nonmotile, round, pseudocystic form with internalized flagella has been observed under unfavorable conditions. This form is generally regarded as a degenerate stage as opposed to a resistant form, although viability of pseudocystic cells has been occasionally reported. The ability to revert to trophozoite form, to reproduce and sustain infection has been described, along with a microscopic cell staining technique to visually discern this elusive form.
Metabolism
Trichomonas vaginalis is an anaerobe. There is an absence of cytochrome C and mitochondria, thus making oxygen uptake and synthesis of adenosine triphosphate via oxidative phosphorylation difficult. Although it contains no mitochondria, an analogous structure called a hydrogenosome, which is the site of fermentative oxidation of pyruvate, carries out many of the same metabolic processes. Carbohydrates, specifically those with alpha-1,4-glycosidic linkages, are metabolized and eventually fermented to produce products such as acetate, lactate, malate, glycerol and CO2 under aerobic conditions. Hydrogen is produced under anaerobic conditions. Outside the hydrogenosome, carbohydrate metabolism also occurs freely in the cytoplasm. The Embden-Meyerhof-Parnas pathway is used to convert glucose into phosphoenolpyruvate, which ultimately becomes pyruvate.
Virulence factors
Although Trichomonas vaginalis exists as a trophozoite in its infective form, its amoeboid form is also an important characteristic that adds to how well it is able to infect its host. The amoeboid form, which is pancake shaped, allows for greater surface area contact with epithelial cells of the vagina, cervix, urethra, and prostate. The pseudocyst form is also a way in which the microbe can infect more efficiently, but this is only induced when exposed to cold and other stressors. These various forms are accompanied by differing protein phosphorylation profiles which are triggered by environmental pressures.
One of the hallmark features of Trichomonas vaginalis is the set of adherence factors that allow cervicovaginal epithelium colonization in women. Adherence is specific to vaginal epithelial cells and is pH-, time-, and temperature-dependent. A variety of virulence factors mediate this process, some of which are the microtubules, microfilaments, four adhesins, and cysteine proteinases. The adhesins are four trichomonad enzymes called AP65, AP51, AP33, and AP23 that mediate the interaction of the parasite with receptor molecules on vaginal epithelial cells. The best characterized surface molecule associated with one of the four adhesins is called Trichomonas vaginalis lipoglycan. This molecule is the most abundant on the surface of Trichomonas vaginalis, aids in sticking to vaginal epithelial cells, and can also influence how the human immune system responds, affecting inflammatory responses and macrophages in the body. Cysteine proteinases may be another virulence factor because not only do these 30 kDa proteins bind to host cell surfaces but they may also degrade extracellular matrix proteins like hemoglobin, fibronectin or collagen IV.
Genome sequencing and statistics
The Trichomonas vaginalis genome is approximately 160 megabases in size – ten times larger than predicted from earlier gel-based chromosome sizing. (The human genome is ~3.5 gigabases by comparison.) As much as two-thirds of the Trichomonas vaginalis sequence consists of repetitive and transposable elements, indicative of a drastic, evolutionarily recent expansion of the genome. The total number of predicted protein-coding genes is ~60,000, with the genome being around 65% repetitive (virus-like, transposon-like, retrotransposon-like, and unclassified repeats, all with high copy number and low polymorphism). Approximately 26,000 of the protein-coding genes have been classed as 'evidence-supported' (similar either to known proteins, or to expressed sequence tags), while the remainder have no known function. These extraordinary genome statistics are likely to change downward as the genome sequence, currently very fragmented due to the difficulty of ordering repetitive DNA, is assembled into chromosomes, and as more transcription data (expressed sequence tags, microarrays) accumulate.
TrichDB.org was launched as a free, public genomic data repository and retrieval service devoted to genome-scale trichomonad data. The site currently contains all of the Trichomonas vaginalis sequence project data, several expressed sequence tag libraries, and tools for data mining and display. TrichDB is part of the EupathDB functional genomics database project funded by the National Institutes of Health and National Institute of Allergy and Infectious Diseases.
Genetic diversity
High levels of genetic diversity were detected in Trichomonas vaginalis after phenotypic differences were discovered during clinical presentations. Studies into the genetic diversity of Trichomonas vaginalis have shown that there are two distinct lineages of the parasite found worldwide; both lineages are represented evenly in field isolates. The two lineages differ in whether or not Trichomonas vaginalis virus infection is present. Trichomonas vaginalis virus infection is clinically relevant in that it affects parasite resistance to metronidazole, a first-line drug treatment for human trichomoniasis.
Increased susceptibility to human immunodeficiency virus
In addition to inflammation that Trichomonas vaginalis causes, the parasite also causes lysis of epithelial cells and red blood cells in the area leading to more inflammation and disruption of the protective barrier usually provided by the epithelium. Having Trichomonas vaginalis also may increase the chances of the infected woman transmitting human immunodeficiency virus to her sexual partner(s).
Evolution
The biology of Trichomonas vaginalis has implications for understanding the origin of sexual reproduction in eukaryotes. Trichomonas vaginalis is not known to undergo meiosis, a key stage of the eukaryotic sexual cycle. However, when Malik et al. examined Trichomonas vaginalis for the presence of 29 genes known to function in meiosis, they found 27 homologous genes to the ones found in animals, fungi, plants and other protists, including eight of nine genes that are specific to meiosis in model organisms. These findings suggest that Trichomonas vaginalis has the capability for meiotic recombination, and hence "parasexual" reproduction. 21 of the 27 meiosis genes were also found in another parasite Giardia lamblia (also called Giardia intestinalis), indicating that these meiotic genes were present in a common ancestor of Trichomonas vaginalis and G. intestinalis. Since these two species are descendants of lineages that are highly divergent among eukaryotes, these meiotic genes were likely present in a common ancestor of all eukaryotes.
See also
List of parasites (human)
References
Further reading
External links
TIGR's Trichomonas vaginalis genome sequencing project.
TrichDB: the Trichomonas vaginalis genome resource
NIH site on trichomoniasis.
Taxonomy
eMedicine article on trichomoniasis.
Patient UK
Metamonads
Sexually transmitted diseases and infections
Parasites of humans
Metamonad species | Trichomonas vaginalis | [
"Biology"
] | 2,780 | [
"Parasites of humans",
"Humans and other species"
] |
414,352 | https://en.wikipedia.org/wiki/Pumpjack | A pumpjack is the overground drive for a reciprocating piston pump in an oil well.
It is used to mechanically lift liquid out of the well if there is not enough bottom hole pressure for the liquid to flow all the way to the surface. The arrangement is often used for onshore wells. Pumpjacks are common in oil-rich areas.
Depending on the size of the pump, it generally produces of liquid at each stroke. Often this is an emulsion of crude oil and water. Pump size is also determined by the depth and weight of the oil to remove, with deeper extraction requiring more power to move the increased weight of the discharge column (discharge head).
A beam-type pumpjack converts the rotary motion of the motor (usually an electric motor) to the vertical reciprocating motion necessary to drive the polished-rod and accompanying sucker rod and column (fluid) load. The engineering term for this type of mechanism is a walking beam. It was often employed in stationary and marine steam engine designs in the 18th and 19th centuries.
Names
A pumpjack is also called a beam pump, walking beam pump, horsehead pump, nodding donkey pump (donkey pumper), rocking horse pump, grasshopper pump, sucker rod pump, dinosaur pump, Big Texan pump, thirsty bird pump, hobby horse, or just pumping unit.
Above ground
In the early days, pumpjacks worked by rod lines running horizontally above the ground to a wheel on a rotating eccentric in a mechanism known as a central power. The central power, which might operate a dozen or more pumpjacks, would be powered by a steam or internal combustion engine or by an electric motor. Among the advantages of this scheme was only having one prime mover to power all the pumpjacks rather than individual motors for each. However, among the many difficulties was maintaining system balance as individual well loads changed.
Modern pumpjacks are powered by a prime mover. This is commonly an electric motor, but internal combustion engines are used in isolated locations without access to electricity, or, in the cases of water pumpjacks, where three-phase power is not available (while single phase motors exist at least up to , providing power to single-phase motors above can cause powerline problems, notably voltage sag on startup, and many pumps require more than 10 horsepower). Common off-grid pumpjack engines run on natural gas, often casing gas produced from the well, but pumpjacks have been run on many types of fuel, such as propane and diesel fuel. In harsh climates, such motors and engines may be housed in a shack for protection from the elements. Engines that power water pumpjacks often receive natural gas from the nearest available gas grid.
The prime mover runs a set of pulleys to the transmission, often a double-reduction gearbox, which drives a pair of cranks, generally with counterweights installed on them to offset the weight of the heavy rod assembly. The cranks raise and lower one end of an I-beam which is free to move on an A-frame. On the other end of the beam is a curved metal box called a horse head or donkey head, so named due to its appearance. A cable made of steel (occasionally fibreglass), called a bridle, connects the horse head to the polished rod, a piston that passes through the stuffing box.
The cranks themselves also produce counterbalance due to their weight, so on pumpjacks that do not carry very heavy loads, the weight of the cranks themselves may be enough to balance the well load.
Sometimes, however, crank-balanced units can become prohibitively heavy due to the need for counterweights. Lufkin Industries offer "air-balanced" units, where counterbalance is provided by a pneumatic cylinder charged with air from a compressor, eliminating the need for counterweights.
The polished rod has a close fit to the stuffing box, letting it move in and out of the tubing without fluid escaping. (The tubing is a pipe that runs to the bottom of the well through which the liquid is produced.) The bridle follows the curve of the horse head as it lowers and raises to create a vertical or nearly-vertical stroke. The polished rod is connected to a long string of rods called sucker rods, which run through the tubing to the down-hole pump, usually positioned near the bottom of the well.
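To give a rough feel for how the geometry just described turns crank rotation into a near-vertical polished-rod stroke, here is a simplified kinematic sketch with invented dimensions. It treats the rear end of the walking beam as if it were driven straight up and down by the crank and pitman (ordinary slider-crank kinematics) and then scales that motion by the beam's lever ratio, ignoring the arc the beam actually sweeps, so it is an illustration rather than a model of any real unit.

```python
# Simplified crank-and-pitman approximation of the beam's rear-end motion.
import math

r = 0.75           # crank radius in metres (assumed)
l = 3.0            # pitman length in metres (assumed)
lever_ratio = 1.4  # horse-head arm length / rear arm length (assumed)

def rear_end_height(theta):
    """Vertical position of the beam's rear end for crank angle theta (radians)."""
    return r * math.cos(theta) + math.sqrt(l**2 - (r * math.sin(theta))**2)

heights = [rear_end_height(math.radians(d)) for d in range(360)]
rear_stroke = max(heights) - min(heights)   # equals 2*r in this simple model
print(f"rear-end stroke: {rear_stroke:.2f} m")
print(f"approximate polished-rod stroke: {lever_ratio * rear_stroke:.2f} m")
```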
Down-hole
At the bottom of the tubing is the down-hole pump. This pump has two ball check valves: a stationary valve at bottom called the standing valve, and a valve on the piston connected to the bottom of the sucker rods that travels up and down as the rods reciprocate, known as the traveling valve. Reservoir fluid enters from the formation into the bottom of the borehole through perforations that have been made through the casing and cement (the casing is a larger metal pipe that runs the length of the well, which has cement placed between it and the earth; the tubing, pump, and sucker rod are all inside the casing).
When the rods at the pump end are travelling up, the traveling valve is closed and the standing valve is open (due to the drop in pressure in the pump barrel). Consequently, the pump barrel fills with the fluid from the formation as the traveling piston lifts the previous contents of the barrel upwards. When the rods begin pushing down, the traveling valve opens and the standing valve closes (due to an increase in pressure in the pump barrel). The traveling valve drops through the fluid in the barrel (which had been sucked in during the upstroke). The piston then reaches the end of its stroke and begins its path upwards again, repeating the process.
Often, gas is produced through the same perforations as the oil. This can be problematic if gas enters the pump, because it can result in what is known as gas locking, where insufficient pressure builds up in the pump barrel to open the valves (due to compression of the gas) and little or nothing is pumped. To preclude this, the inlet for the pump can be placed below the perforations. As the gas-laden fluid enters the well bore through the perforations, the gas bubbles up the annulus (the space between the casing and the tubing) while the liquid moves down to the standing valve inlet. Once at the surface, the gas is collected through piping connected to the annulus.
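The valve sequence described above can be condensed into a small state table; the sketch below simply encodes the upstroke and downstroke behaviour in code, using no details beyond the description itself.

```python
# State of the two check valves over the pump cycle, per the description above.
def valve_states(stroke):
    if stroke == "up":
        return {"traveling valve": "closed", "standing valve": "open",
                "effect": "barrel fills from the formation; fluid above the plunger is lifted"}
    if stroke == "down":
        return {"traveling valve": "open", "standing valve": "closed",
                "effect": "plunger drops through the fluid just admitted, ready for the next upstroke"}
    raise ValueError("stroke must be 'up' or 'down'")

for stroke in ("up", "down"):
    print(stroke, valve_states(stroke))
```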
Water well pumpjacks
Pumpjacks can also be used to drive what would now be considered old-fashioned hand-pumped water wells. The scale of the technology is frequently smaller than for an oil well, and can typically fit on top of an existing hand-pumped well head. The technology is simple, typically using a parallel-bar double-cam lift driven from a low-power electric motor, although the number of pumpjacks with stroke lengths and longer being used as water pumps is increasing.
Although the flow rate for a water well pumpjack is lower than that from a jet pump and the lifted water is not pressurised, the beam pumping unit has the option of hand pumping in an emergency, by hand-rotating the pumpjack cam to its lowest position, and attaching a manual handle to the top of the wellhead rod. In larger pumpjacks powered by engines, the engine can run off fuel stored in a reservoir or from natural gas delivered from the nearest gas grid. In some cases, this type of pump consumes less power than a jet pump and is, therefore, cheaper to run.
See also
Gas lift
Progressing cavity pump
Submersible pump
References
External links
All Pumped Up – Oilfield Technology, The American Oil & Gas Historical Society, updated October 2014
Articles containing video clips
Petroleum technology
Pumps
nv:Chidí bikʼáh
tr:Atbaşı | Pumpjack | [
"Physics",
"Chemistry",
"Engineering"
] | 1,624 | [
"Pumps",
"Turbomachinery",
"Petroleum technology",
"Petroleum engineering",
"Physical systems",
"Hydraulics"
] |
414,421 | https://en.wikipedia.org/wiki/John%20Kendrew | Sir John Cowdery Kendrew (24 March 1917 – 23 August 1997) was an English biochemist, crystallographer, and science administrator. Kendrew shared the 1962 Nobel Prize in Chemistry with Max Perutz for their work at the Cavendish Laboratory to investigate the structure of haem-containing proteins.
Education and early life
Kendrew was born in Oxford, son of Wilfrid George Kendrew, reader in climatology in the University of Oxford, and Evelyn May Graham Sandburg, art historian. After preparatory school at the Dragon School in Oxford, he was educated at Clifton College in Bristol, 1930–1936. He attended Trinity College, Cambridge in 1936, as a Major Scholar, graduating in chemistry in 1939. He spent the early months of World War II doing research on reaction kinetics, and then became a member of the Air Ministry Research Establishment, working on radar. In 1940 he became engaged in operational research at the Royal Air Force headquarters; commissioned a squadron leader on 17 September 1941, he was appointed an honorary wing commander on 8 June 1944, and relinquished his commission on 5 June 1945. He was awarded his PhD after the war in 1949.
Research and career
During the war years, he became increasingly interested in biochemical problems, and decided to work on the structure of proteins.
Crystallography
In 1945 he approached Max Perutz in the Cavendish Laboratory in Cambridge. Joseph Barcroft, a respiratory physiologist, suggested he might make a comparative protein crystallographic study of adult and fetal sheep haemoglobin, and he started that work.
In 1947 he became a Fellow of Peterhouse; and the Medical Research Council (MRC) agreed to create a research unit for the study of the molecular structure of biological systems, under the direction of Sir Lawrence Bragg. In 1954 he became a Reader at the Davy-Faraday Laboratory of the Royal Institution in London.
Crystal structure of myoglobin
Kendrew shared the 1962 Nobel Prize for chemistry with Max Perutz for determining the first atomic structures of proteins using X-ray crystallography. Their work was done at what is now the MRC Laboratory of Molecular Biology in Cambridge. Kendrew determined the structure of the protein myoglobin, which stores oxygen in muscle cells.
In 1947 the MRC agreed to make a research unit for the Study of the Molecular Structure of Biological Systems. The original studies were on the structure of sheep haemoglobin, but when this work had progressed as far as was possible using the resources then available, Kendrew embarked on the study of myoglobin, a molecule only a quarter the size of the haemoglobin molecule. His initial source of raw material was horse heart, but the crystals thus obtained were too small for X-ray analysis. Kendrew realized that the oxygen-conserving tissue of diving mammals could offer a better prospect, and a chance encounter led to his acquiring a large chunk of whale meat from Peru. Whale myoglobin did give large crystals with clean X-ray diffraction patterns. However, the problem still remained insurmountable until, in 1953, Max Perutz discovered that the phase problem in the analysis of the diffraction patterns could be solved by multiple isomorphous replacement: comparison of patterns from several crystals, one from the native protein and others that had been soaked in solutions of heavy metals and had metal ions introduced at different well-defined positions. An electron density map at 6 angstrom (0.6 nanometre) resolution was obtained by 1957, and by 1959 an atomic model could be built at 2 angstrom (0.2 nm) resolution.
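The phase problem and the isomorphous-replacement idea can be illustrated with a toy numerical example: a one-dimensional "crystal" of a few point atoms whose positions and scattering factors are invented purely for illustration. An X-ray experiment measures only the magnitudes of the structure factors; comparing the native and heavy-atom-derivative magnitudes, together with the calculable contribution of the known heavy-atom site, is what constrains the missing phases.

```python
# Toy 1-D illustration of the phase problem and isomorphous replacement.
import numpy as np

protein_atoms = [(0.12, 6.0), (0.35, 7.0), (0.71, 8.0)]  # (fractional position, scattering factor)
heavy_atom = (0.25, 50.0)                                 # known heavy-atom site in the derivative

def structure_factor(atoms, h):
    """F(h) = sum_j f_j * exp(2*pi*i*h*x_j) for a 1-D unit cell."""
    return sum(f * np.exp(2j * np.pi * h * x) for x, f in atoms)

h = 3                                                     # one reflection index
F_P = structure_factor(protein_atoms, h)                  # native protein
F_H = structure_factor([heavy_atom], h)                   # heavy atom alone (its site is known)
F_PH = structure_factor(protein_atoms + [heavy_atom], h)  # derivative crystal

# The experiment records only intensities |F|^2, so the phases of F_P and F_PH are lost:
print("measured |F_P| =", abs(F_P), " measured |F_PH| =", abs(F_PH))

# Isomorphous replacement uses F_PH = F_P + F_H: knowing |F_P|, |F_PH| and the
# complex F_H restricts the possible phase of F_P (intersection of two circles).
print("|F_P + F_H| =", abs(F_P + F_H), " (matches |F_PH| above)")
```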
Later career
In 1963, Kendrew became one of the founders of the European Molecular Biology Organization; he also founded the Journal of Molecular Biology and was for many years its editor-in-chief. He became Fellow of the American Society of Biological Chemists in 1967 and honorary member of the International Academy of Science, Munich. In 1974, he succeeded in persuading governments to establish the European Molecular Biology Laboratory (EMBL) in Heidelberg and became its first director. He was knighted in 1974. From 1974 to 1979, he was a Trustee of the British Museum, and from 1974 to 1988 he was successively Secretary General, Vice-President, and President of the International Council of Scientific Unions.
After his retirement from EMBL, Kendrew became President of St John's College at the University of Oxford, a post he held from 1981 to 1987. In his will, he designated his bequest to St John's College for studentships in science and in music, for students from developing countries. The Kendrew Quadrangle at St John's College in Oxford, officially opened on 16 October 2010, is named after him.
Kendrew was married to the former Elizabeth Jarvie (née Gorvin) from 1948 to 1956. Their marriage ended in divorce. Kendrew was subsequently the partner of the artist Ruth Harris. He had no surviving children.
A biography of Kendrew, entitled A Place in History: The Biography of John C. Kendrew, by Paul M. Wassarman was published by Oxford University Press in 2020.
Selected publications
References
Further reading
John Finch; 'A Nobel Fellow on Every Floor', Medical Research Council 2008, 381 pp.; this book is all about the MRC Laboratory of Molecular Biology, Cambridge.
Oxford University Press page on Paul M. Wassarman, A Place in History, 2020
External links
1917 births
1997 deaths
Alumni of Trinity College, Cambridge
Commanders of the Order of the British Empire
British crystallographers
English biologists
English biophysicists
English molecular biologists
English Nobel laureates
Fellows of the Royal Society
Structural biologists
Foreign associates of the National Academy of Sciences
Knights Bachelor
Members of the European Molecular Biology Organization
Nobel laureates in Chemistry
People educated at Clifton College
People educated at The Dragon School
Scientists from Oxford
Presidents of St John's College, Oxford
Presidents of the British Science Association
Royal Medal winners
Trustees of the British Museum
X-ray crystallography
20th-century British biologists
Royal Air Force personnel of World War II
Royal Air Force Volunteer Reserve personnel of World War II
Royal Air Force wing commanders | John Kendrew | [
"Chemistry",
"Materials_science"
] | 1,267 | [
"Crystallography",
"X-ray crystallography",
"Structural biologists",
"Structural biology"
] |
414,736 | https://en.wikipedia.org/wiki/Isochoric%20process | In thermodynamics, an isochoric process, also called a constant-volume process, an isovolumetric process, or an isometric process, is a thermodynamic process during which the volume of the closed system undergoing such a process remains constant. An isochoric process is exemplified by the heating or the cooling of the contents of a sealed, inelastic container: The thermodynamic process is the addition or removal of heat; the isolation of the contents of the container establishes the closed system; and the inability of the container to deform imposes the constant-volume condition.
Formalism
An isochoric thermodynamic quasi-static process is characterized by constant volume, i.e., ΔV = 0.
The process does no pressure-volume work, since such work is defined by
W = ∫ P dV,
where P is pressure. The sign convention is such that positive work is performed by the system on the environment.
If the process is not quasi-static, work may still be done in a constant-volume thermodynamic process.
For a reversible process, the first law of thermodynamics gives the change in the system's internal energy:
dU = δQ − δW
Replacing work with a change in volume gives
dU = δQ − P dV
Since the process is isochoric, dV = 0, the previous equation now gives
dU = δQ
Using the definition of specific heat capacity at constant volume, cv = δQ/(m dT), where m is the mass of the gas, we get
dU = δQ = m cv dT
Integrating both sides yields
Q = m cv (T2 − T1),
where cv is the specific heat capacity at constant volume, T1 is the initial temperature and T2 is the final temperature. We conclude with:
Q = m cv ΔT
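As an illustrative numerical check of Q = m cv ΔT, the following short sketch uses assumed values (roughly 1 kg of air in a rigid container); they are not taken from the source:

```python
# Illustrative check of Q = m * c_v * ΔT for isochoric (rigid-container) heating.
# The gas properties and temperatures below are assumed for the example only.

m = 1.0            # mass of gas, kg (assumed: about 1 kg of air)
c_v = 718.0        # specific heat at constant volume, J/(kg*K), approximate value for air
T1, T2 = 300.0, 400.0

Q = m * c_v * (T2 - T1)   # all heat goes into internal energy, since W = 0
print(f"Heat added at constant volume: {Q:.0f} J")  # ~71800 J
```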
On a pressure volume diagram, an isochoric process appears as a straight vertical line. Its thermodynamic conjugate, an isobaric process would appear as a straight horizontal line.
Ideal gas
If an ideal gas is used in an isochoric process, and the quantity of gas stays constant, then the increase in energy is proportional to an increase in temperature and pressure. For example a gas heated in a rigid container: the pressure and temperature of the gas will increase, but the volume will remain the same.
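A minimal sketch of the resulting pressure rise, using the ideal gas law at constant volume and constant amount of gas (so P/T stays constant); the starting pressure and temperatures below are assumed for illustration only:

```python
# Ideal gas heated in a rigid, sealed container: n and V constant, so P/T is constant.
# Initial pressure and the temperatures are assumed for illustration.

P1 = 101_325.0   # initial pressure, Pa (1 atm)
T1 = 300.0       # initial temperature, K
T2 = 450.0       # final temperature, K

P2 = P1 * T2 / T1   # special case of the ideal gas law at constant V and n
print(f"Final pressure: {P2/1e3:.1f} kPa")  # ~152.0 kPa
```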
Ideal Otto cycle
The ideal Otto cycle is an example of an isochoric process when it is assumed that the burning of the gasoline-air mixture in an internal combustion engine car is instantaneous. There is an increase in the temperature and the pressure of the gas inside the cylinder while the volume remains the same.
Etymology
The noun "isochor" and the adjective "isochoric" are derived from the Greek words ἴσος (isos) meaning "equal", and χῶρος (khôros) meaning "space."
See also
Isobaric process
Adiabatic process
Cyclic process
Isothermal process
Polytropic process
References
Thermodynamic processes | Isochoric process | [
"Physics",
"Chemistry"
] | 563 | [
"Thermodynamic processes",
"Thermodynamics"
] |
414,765 | https://en.wikipedia.org/wiki/Isobaric%20process | In thermodynamics, an isobaric process is a type of thermodynamic process in which the pressure of the system stays constant: ΔP = 0. The heat transferred to the system does work, but also changes the internal energy (U) of the system. This article uses the physics sign convention for work, where positive work is work done by the system. Using this convention, by the first law of thermodynamics,
Q = ΔU + W,
where W is work, U is internal energy, and Q is heat. Pressure-volume work by the closed system is defined as:
W = ∫ p dV,
where Δ means change over the whole process, whereas d denotes a differential. Since pressure is constant, this means that
W = p ΔV.
Applying the ideal gas law, this becomes
W = n R ΔT,
with R representing the gas constant, and n representing the amount of substance, which is assumed to remain constant (e.g., there is no phase transition during a chemical reaction). According to the equipartition theorem, the change in internal energy is related to the temperature of the system by
ΔU = n cV,m ΔT,
where cV, m is molar heat capacity at a constant volume.
Substituting the last two equations into the first equation produces:
Q = n cV,m ΔT + n R ΔT = n (cV,m + R) ΔT = n cP ΔT,
where cP is molar heat capacity at a constant pressure.
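This bookkeeping can be illustrated with a short sketch that evaluates W = nRΔT and ΔU = n cV,m ΔT and checks that their sum equals n cP ΔT; the amount of gas, the temperature rise and the (diatomic) heat capacities are assumed for the example:

```python
# Checking the identity Q = ΔU + W = n*c_p*ΔT for an isobaric step of an ideal gas.
# The amount of gas, temperature rise and (diatomic) heat capacities are assumed.

R = 8.314              # J/(mol*K)
c_v_molar = 2.5 * R    # molar heat capacity at constant volume, diatomic gas
c_p_molar = c_v_molar + R

n = 2.0                # mol
dT = 100.0             # K

W = n * R * dT             # pressure-volume work done by the gas, p*ΔV = n*R*ΔT
dU = n * c_v_molar * dT    # change in internal energy
Q = dU + W                 # heat supplied

print(round(Q, 1), round(n * c_p_molar * dT, 1))  # both ~5819.8 J
```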
Specific heat capacity
To find the molar specific heat capacity of the gas involved, the following equations apply for any general gas that is calorically perfect. The property γ is either called the adiabatic index or the heat capacity ratio. Some published sources might use k instead of γ.
Molar isochoric specific heat:
cV = R / (γ − 1).
Molar isobaric specific heat:
cP = γ R / (γ − 1).
The values for γ are γ = 7/5 for diatomic gases like air and its major components, and γ = 5/3 for monatomic gases like the noble gases. The formulas for specific heats would reduce in these special cases:
Monatomic: cV = (3/2)R and cP = (5/2)R
Diatomic: cV = (5/2)R and cP = (7/2)R
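As a quick, illustrative check of the relations cV = R/(γ − 1) and cP = γR/(γ − 1) for the two special cases above:

```python
# Evaluating c_v = R/(γ - 1) and c_p = γR/(γ - 1) for the two special cases above.

R = 8.314  # J/(mol*K)

for name, gamma in [("monatomic", 5.0 / 3.0), ("diatomic", 7.0 / 5.0)]:
    c_v = R / (gamma - 1.0)
    c_p = gamma * R / (gamma - 1.0)
    print(f"{name}: c_v = {c_v:.2f}, c_p = {c_p:.2f} J/(mol*K)")
# monatomic: c_v ~ 12.47 (3/2 R), c_p ~ 20.79 (5/2 R)
# diatomic:  c_v ~ 20.79 (5/2 R), c_p ~ 29.10 (7/2 R)
```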
An isobaric process is shown on a P–V diagram as a straight horizontal line, connecting the initial and final thermostatic states. If the process moves towards the right, then it is an expansion. If the process moves towards the left, then it is a compression.
Sign convention for work
The motivation for the specific sign conventions of thermodynamics comes from early development of heat engines. When designing a heat engine, the goal is to have the system produce and deliver work output. The source of energy in a heat engine, is a heat input.
If the volume compresses (ΔV = final volume − initial volume < 0), then W < 0. That is, during isobaric compression the gas does negative work, or the environment does positive work. Restated, the environment does positive work on the gas.
If the volume expands (ΔV = final volume − initial volume > 0), then W > 0. That is, during isobaric expansion the gas does positive work, or equivalently, the environment does negative work. Restated, the gas does positive work on the environment.
If heat is added to the system, then Q > 0. That is, during isobaric expansion/heating, positive heat is added to the gas, or equivalently, the environment receives negative heat. Restated, the gas receives positive heat from the environment.
If the system rejects heat, then Q < 0. That is, during isobaric compression/cooling, negative heat is added to the gas, or equivalently, the environment receives positive heat. Restated, the environment receives positive heat from the gas.
Defining enthalpy
An isochoric process is described by the equation Q = ΔU. It would be convenient to have a similar equation for isobaric processes. Substituting the second equation into the first yields
Q = ΔU + Δ(pV) = Δ(U + pV).
The quantity U + pV is a state function so that it can be given a name. It is called enthalpy, and is denoted as H. Therefore, an isobaric process can be more succinctly described as
Q = ΔH.
Enthalpy and isobaric specific heat capacity are very useful mathematical constructs, since when analyzing a process in an open system, the situation of zero work occurs when the fluid flows at constant pressure. In an open system, enthalpy is the quantity which is useful to use to keep track of energy content of the fluid.
Examples of isobaric processes
The reversible expansion of an ideal gas can be used as an example of an isobaric process. Of particular interest is the way heat is converted to work when expansion is carried out at different working gas/surrounding gas pressures.
In the first process example, a cylindrical chamber 1 m2 in area encloses 81.2438 mol of an ideal diatomic gas of molecular mass 29 g mol−1 at 300 K. The surrounding gas is at 1 atm and 300 K, and separated from the cylinder gas by a thin piston. For the limiting case of a massless piston, the cylinder gas is also at 1 atm pressure, with an initial volume of 2 m3. Heat is added slowly until the gas temperature is uniformly 600 K, after which the gas volume is 4 m3 and the piston is 2 m above its initial position. If the piston motion is sufficiently slow, the gas pressure at each instant will have practically the same value (psys = 1 atm) throughout.
For a thermally perfect diatomic gas, the molar specific heat capacity at constant pressure (cp) is 7/2R or 29.1006 J mol−1 deg−1. The molar heat capacity at constant volume (cv) is 5/2R or 20.7862 J mol−1 deg−1. The ratio of the two heat capacities is 1.4.
The heat Q required to bring the gas from 300 to 600 K is
Q = ΔH = n cP ΔT = 81.2438 mol × 29.1006 J mol−1 K−1 × 300 K = 709.3 kJ.
The increase in internal energy is
ΔU = n cV ΔT = 81.2438 mol × 20.7862 J mol−1 K−1 × 300 K = 506.6 kJ.
Therefore,
W = Q − ΔU = 202.7 kJ.
Also
W = p ΔV = 1 atm × 2 m3 = 202.7 kJ, which of course is identical to the difference between ΔH and ΔU.
Here, work is entirely consumed by expansion against the surroundings. Of the total heat applied (709.3 kJ), the work performed (202.7 kJ) is about 28.6% of the supplied heat.
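The figures quoted in this example can be reproduced with a short calculation; the only assumption beyond the stated data is the value of the gas constant, R ≈ 8.3145 J mol−1 K−1:

```python
# Reproducing the first example: 81.2438 mol of a diatomic ideal gas heated from
# 300 K to 600 K at a constant pressure of 1 atm. R is taken as 8.3145 J/(mol*K).

R = 8.3145
n = 81.2438            # mol
dT = 600.0 - 300.0     # K
c_p = 3.5 * R          # ~29.10 J/(mol*K), molar heat capacity at constant pressure
c_v = 2.5 * R          # ~20.79 J/(mol*K), molar heat capacity at constant volume

Q = n * c_p * dT       # heat supplied
dU = n * c_v * dT      # increase in internal energy
W = Q - dU             # expansion work against the surroundings

print(f"Q = {Q/1e3:.1f} kJ, dU = {dU/1e3:.1f} kJ, W = {W/1e3:.1f} kJ, W/Q = {W/Q:.3f}")
# Q ~ 709.3 kJ, dU ~ 506.6 kJ, W ~ 202.7 kJ, W/Q ~ 0.286 (28.6%)
```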
The second process example is similar to the first, except that the massless piston is replaced by one having a mass of 10,332.2 kg, which doubles the pressure of the cylinder gas to 2 atm. The cylinder gas volume is then 1 m3 at the initial 300 K temperature. Heat is added slowly until the gas temperature is uniformly 600 K, after which the gas volume is 2 m3 and the piston is 1 m above its initial position. If the piston motion is sufficiently slow, the gas pressure at each instant will have practically the same value (psys = 2 atm) throughout.
Since enthalpy and internal energy are independent of pressure,
ΔH = 709.3 kJ and ΔU = 506.6 kJ.
As in the first example, about 28.6% of the supplied heat is converted to work. But here, work is applied in two different ways: partly by expanding the surrounding atmosphere and partly by lifting 10,332.2 kg a distance h of 1 m.
Thus, half the work lifts the piston mass (work of gravity, or “useable” work), while the other half expands the surroundings.
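This half-and-half split can be verified with a short sketch; standard gravity and 1 atm = 101,325 Pa are assumed:

```python
# Splitting the ~202.7 kJ of work in the second example into the part that lifts
# the 10,332.2 kg piston by 1 m and the part that pushes back the 1 atm atmosphere
# above it. Standard gravity and 1 atm = 101,325 Pa are assumed.

g = 9.80665            # m/s^2
m_piston = 10_332.2    # kg
h = 1.0                # lift height, m
area = 1.0             # piston area, m^2
p_atm = 101_325.0      # surrounding pressure, Pa

W_gravity = m_piston * g * h       # "useable" work lifting the piston, ~101.3 kJ
W_atmosphere = p_atm * area * h    # work expanding the surrounding atmosphere, ~101.3 kJ

print(f"{W_gravity/1e3:.1f} kJ + {W_atmosphere/1e3:.1f} kJ = {(W_gravity + W_atmosphere)/1e3:.1f} kJ")
# The two parts are essentially equal; their sum matches the total quoted above to within rounding.
```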
The results of these two process examples illustrate the difference between the fraction of heat converted to usable work (mgΔh) vs. the fraction converted to pressure-volume work done against the surrounding atmosphere. The usable work approaches zero as the working gas pressure approaches that of the surroundings, while maximum usable work is obtained when there is no surrounding gas pressure. The ratio of all work performed to the heat input for ideal isobaric gas expansion is
W/Q = nRΔT / (n cP ΔT) = R/cP,
which equals 2/7 ≈ 28.6% for a diatomic gas.
Variable density viewpoint
A given quantity (mass m) of gas in a changing volume produces a change in density ρ. In this context the ideal gas law is written
P = ρ R T / M,
where T is thermodynamic temperature and M is molar mass. When R and M are taken as constant, then pressure P can stay constant as the density-temperature quadrant undergoes a squeeze mapping.
Etymology
The adjective "isobaric" is derived from the Greek words ἴσος (isos) meaning "equal", and βάρος (baros) meaning "weight."
See also
Adiabatic process
Cyclic process
Isochoric process
Isothermal process
Polytropic process
Isenthalpic process
References
Thermodynamic processes
Atmospheric thermodynamics | Isobaric process | [
"Physics",
"Chemistry"
] | 1,711 | [
"Thermodynamic processes",
"Thermodynamics"
] |
1,141,208 | https://en.wikipedia.org/wiki/Independence%20%28mathematical%20logic%29 | In mathematical logic, independence is the unprovability of some specific sentence from some specific set of other sentences. The sentences in this set are referred to as "axioms".
A sentence σ is independent of a given first-order theory T if T neither proves nor refutes σ; that is, it is impossible to prove σ from T, and it is also impossible to prove from T that σ is false. Sometimes, σ is said (synonymously) to be undecidable from T. (This concept is unrelated to the idea of "decidability" as in a decision problem.)
A theory T is independent if no axiom in T is provable from the remaining axioms in T. A theory for which there is an independent set of axioms is independently axiomatizable.
Usage note
Some authors say that σ is independent of T when T simply cannot prove σ, and do not necessarily assert by this that T cannot refute σ. These authors will sometimes say "σ is independent of and consistent with T" to indicate that T can neither prove nor refute σ.
Independence results in set theory
Many interesting statements in set theory are independent of Zermelo–Fraenkel set theory (ZF). The following statements in set theory are known to be independent of ZF, under the assumption that ZF is consistent:
The axiom of choice
The continuum hypothesis and the generalized continuum hypothesis
The Suslin conjecture
The following statements (none of which have been proved false) cannot be proved in ZFC (the Zermelo–Fraenkel set theory plus the axiom of choice) to be independent of ZFC, under the added hypothesis that ZFC is consistent.
The existence of strongly inaccessible cardinals
The existence of large cardinals
The non-existence of Kurepa trees
The following statements are inconsistent with the axiom of choice, and therefore with ZFC. However they are probably independent of ZF, in a corresponding sense to the above: They cannot be proved in ZF, and few working set theorists expect to find a refutation in ZF. However ZF cannot prove that they are independent of ZF, even with the added hypothesis that ZF is consistent.
The axiom of determinacy
The axiom of real determinacy
AD+
Applications to physical theory
Since 2000, logical independence has become understood as having crucial significance in the foundations of physics.
See also
List of statements independent of ZFC
Parallel postulate for an example in geometry
Notes
References
Mathematical logic
Proof theory | Independence (mathematical logic) | [
"Mathematics"
] | 514 | [
"Mathematical logic",
"Proof theory"
] |
1,142,136 | https://en.wikipedia.org/wiki/Biosolids | Biosolids are solid organic matter recovered from a sewage treatment process and used as fertilizer. In the past, it was common for farmers to use animal manure to improve their soil fertility. In the 1920s, the farming community began also to use sewage sludge from local wastewater treatment plants. Scientific research over many years has confirmed that these biosolids contain similar nutrients to those in animal manures. Biosolids that are used as fertilizer in farming are usually treated to help to prevent disease-causing pathogens from spreading to the public. Some sewage sludge can not qualify as biosolids due to persistent, bioaccumulative and toxic chemicals, radionuclides, and heavy metals at levels sufficient to contaminate soil and water when applied to land.
Terminology
Biosolids may be defined as organic wastewater solids that can be reused after suitable sewage sludge treatment processes leading to sludge stabilization, such as anaerobic digestion and composting.
Alternatively, the definition of biosolids may be restricted by local regulations to wastewater solids only after those solids have completed a specified treatment sequence and have concentrations of pathogens and toxic chemicals below specified levels.
The United States Environmental Protection Agency (EPA) defines the two terms – sewage sludge and biosolids – in the Code of Federal Regulations (CFR), Title 40, Part 503 as follows: Sewage sludge refers to the solids separated during the treatment of municipal wastewater (including domestic septage). In contrast, biosolids refers to treated sewage sludge that meets the EPA pollutant and pathogen requirements for land application and surface disposal. A similar definition has been used internationally, for example, in Australia.
Use of the term "biosolids" may officially be subject to government regulations. However, informal use describes a broad range of semi-solid organic products from sewage or sewage sludge. This could include any solids, slime solids or liquid slurry residue generated during the treatment of domestic wastewater including, scum and solids removed during primary, secondary or advanced treatment processes. Materials that do not conform to the regulatory definition of "biosolids" can be given alternative terms like "wastewater solids."
Characteristics
Quantities
Approximately 7.1 million dry tons of biosolids were generated in 2004 at approximately 16,500 municipal wastewater treatment facilities in the United States.
In the United States, as of 2013 about 55% of sewage solids are used as fertilizer. Challenges faced when increasing the use of biosolids include the capital needed to build anaerobic digesters and the complexity of complying with health regulations. There are also new concerns about micropollutants in sewage (e.g. environmental persistent pharmaceutical pollutants), which make the process of producing high quality biosolids complex. Some municipalities, states or countries have banned the use of biosolids on farmland.
Nutrients
Encouraging agricultural use of biosolids is intended to prevent filling landfills with nutrient-rich organic materials from the treatment of domestic sewage that might be recycled and applied as fertilizer to improve and maintain productive soils and stimulate plant growth. Biosolids can be an ideal agricultural conditioner and fertilizer which can help promote crop growth to feed the increasing population. Biosolids may contain macronutrients nitrogen, phosphorus, potassium and sulphur with micronutrients copper, zinc, calcium, magnesium, iron, boron, molybdenum and manganese.
Industrial and man-made contaminants
Biosolids contain synthetic organic compounds, radionuclides and heavy metals. The United States Environmental Protection Agency (EPA) has set numeric limits for arsenic, cadmium, copper, lead, mercury, molybdenum, nickel, selenium, and zinc but has not regulated dioxin levels.
Contaminants from pharmaceuticals and personal care products and some steroids and hormones may also be present in biosolids. Substantial levels of persistent, bioaccumulative and toxic (PBT) polybrominated diphenyl ethers were detected in biosolids in 2001.
In 2014 the United States Geological Survey analyzed nine different consumer products containing biosolids as a main ingredient for 87 organic chemicals found in cleaners, personal care products, pharmaceuticals, and other products. These analyses detected 55 of the 87 organic chemicals measured in at least one of the nine biosolid samples, with as many as 45 chemicals found in a single sample.
In 2014, the City of Charlotte, North Carolina, discovered extreme levels of polychlorinated biphenyls (PCBs) in their biosolids after being alerted that illegal PCB dumping was taking place at regional waste water treatment plants across the state.
Biosolids land application in South Carolina was halted in 2013 after an emergency regulation was enacted by the South Carolina Department of Health and Environmental Control (SCDHEC) that prohibited land application of any PCB-contaminated biosolids, regardless of whether they were Class A or Class B. Very soon thereafter, SCDHEC expanded PCB fish consumption advisories for nearly every waterway bordering biosolids land application fields.
In 2019, the state of Maine found that 95% of sewage sludge produced in the state contained unsafe levels of Per- and polyfluoroalkyl substances (PFAS) chemicals. Several farms that had spread biosolids as fertilizer were found to have PFAS contaminated soil, groundwater, animals and crops. Hundreds of other farms were potentially similarly contaminated. The state subsequently imposed additional rules restricting biosolids spreading. In 2023, Arturo A. Keller at the Bren School of Environmental Science & Management within the University of California, Santa Barbara began working on solutions to eliminate these PFAS, including creating bio-char fertilizers from the bio-solids.
Biosolids used as a fertilizer have resulted in PFAS contamination of beef raised in Michigan.
In October 2021 EPA announced the PFAS Strategic Roadmap which includes risk assessment for PFAS (PFOA and PFOS) in biosolids.
Pathogens
In the United States the EPA mandates certain treatment processes designed to significantly decrease levels of certain so-called indicator organisms, in biosolids. These include, "...operational standards for fecal coliforms, Salmonella sp. bacteria, enteric viruses, and viable helminth ova."
However, the US-based Water Environment Research Foundation has shown that some pathogens do survive sewage sludge treatment.
EPA regulations allow only biosolids with no detectable pathogens to be widely applied; those with remaining pathogens are restricted in use.
Different types of biosolids
Anaerobic Digestion: Micro-organisms decompose the sludge in the absence of oxygen either at mesophilic (at 35 °C) or thermophilic (between 50° and 57 °C) temperatures.
Aerobic Digestion: Micro-organisms decompose the sludge in the presence of oxygen either at ambient and mesophilic (10 °C to 40 °C) or auto-thermal (40 °C to 80 °C) temperatures.
Composting: A biological process where organic matter decomposes to produce humus after the addition of some dry bulking material such as sawdust, wood chips, or shredded yard waste under controlled aerobic conditions.
Alkaline Treatment: The sludge is mixed with alkaline materials such as lime or cement kiln dust, or incinerator fly ash and maintained at pH above 12 for 24 hours (for Class B) or at temperature 70 °C for 30 minutes (for Class A).
Heat Drying: Either convection or conduction dryers are used to dry the biosolids.
Dewatering: The separation of the water from biosolids is done to obtain a semi-solid or solid product by using dewatering technologies (centrifuges, belt filter presses, plate and frame filter presses, and drying beds and lagoons).
Different General & Land Applications of Biosolids
Agriculture - Biosolids as an alternative to Chemical Fertilizers
Biosolids are similar to animal manure in that they consist of various nutrients and organic materials that support the growth of crops, while also enriching the soil and enhancing its capacity to retain water.
Nutrifor is the brand name of a biosolids product created by Metro Vancouver, Canada, from biosolids recovered from advanced wastewater treatment. This substance undergoes a process of high-temperature treatment and decomposition by microorganisms to eradicate detrimental bacteria and diminish unpleasant smells. The final outcome is a nutrient-rich, soil-like substance that can be applied directly onto the ground as a fertilizer or incorporated into soil composition.
The Halton Region, in Ontario, Canada, has introduced a Biosolids Recycling Program in which biosolids are extracted from Halton's 7 wastewater treatment facilities and recycled as an agricultural fertilizer and soil conditioner. The program follows the Ontario Ministry of the Environment, Conservation and Parks (MECP) quality and safety standards for biosolids processing and re-use. On average, the Halton Region has produced over 35,000 tonnes of biosolids per year.
Forestry
Biosolids have been found to accelerate the growth of timber, which facilitates faster and more effective growth of forests. The Regional District of Nanaimo, in BC, Canada, has introduced an award-winning biosolids management program called the Forest Fertilization Program, through which the created biosolids have the potential to enhance the growth of trees in areas with limited nutrients, all the while aligning land utilization practices with forestry activities and recreational pursuits.
Soil Remediation
Biosolids have proven effective in promoting the growth of sustainable vegetation, mitigating the presence of harmful substances in soil, managing soil erosion, and revitalizing soil profiles in compromised areas, which in turn is crucial for the rehabilitation of sites lacking adequate topsoil.
The Regional District of Nanaimo in partnership with Nanaimo Forest Products Ltd. have introduced a Soil Fabrication Program to manufacture soil at the Harmac Mill in Duke Point, British Columbia, Canada. This is also a planned contingency site for GNPCC biosolids
Lawns and Home Gardens
The United States Environmental Protection Agency mentions that biosolids that adhere to the most rigorous standards for reducing pollutants, pathogens, and attractiveness to vectors can be bought by individuals from hardware stores, home and garden centers, or directly from their community's wastewater treatment facility.
Biogas as an alternative Power Source
The City of Barrie, in Ontario, Canada, has partnered with a consulting engineering firm to convert and optimize biogas obtained from the biosolids at its local wastewater treatment facility (WwTF) and to use this biogas, generated from anaerobic digesters, as a power source for the facility's two cogeneration (cogen) engines and boilers.
The City of Hamilton, in Ontario, Canada, also has a similar master plan for the biosolids present in the city's wastewater treatment plants. In its master plan for the next 20 years, the city aims to reduce biosolids management and foster the use of the produced biogas for energy production through co-generation.
In a publication by the Government of Canada and the Ontario Federation of Agriculture, the author notes that the land application method for biosolids is more cost-effective for taxpayers than alternative management methods such as disposal in a landfill.
Classification systems
United States
In the United States Code of Federal Regulations (CFR), Title 40, Part 503 governs the management of biosolids. Within that federal regulation biosolids are generally classified differently depending upon the quantity of pollutants they contain and the level of treatment they have been subjected to (the latter of which determines both the level of vector attraction reduction and the level of pathogen reduction). These factors also affect how they may be disseminated (bulk or bagged) and the level of monitoring oversight which, in turn determines where and in what quantity they may be applied. The National Organic Program prohibits the use of biosolids in farming certified organic crops.
European Union
The European Union (EU) was the first to issue regulations for biosolids land application; these aimed to limit the pathogen and pollution risk. These risks come from the fact that some metabolites remain intact after wastewater treatment processes. Debates over biosolid use vary in severity across the EU.
New Zealand
In 2003, the Ministry for the Environment and the New Zealand Water & Wastes Association produced the document Guidelines for the safe application of biosolids to land in New Zealand. In the document, biosolids were defined as "sewage sludges or sewage sludges mixed with other materials that have been treated and/or stabilised to the extent that they are able to be safely and beneficially applied to land... [and noted that they] have significant fertilising and soil conditioning properties as a result of the nutrients and organic materials they contain."
A New Zealand scientist, Jacqui Horswell later led collaborative research by the Institute of Environmental Science and Research, Scion, Landcare Research and the Cawthron Institute into the management of waste, in particular biosolids, and this has informed the development of frameworks for engaging local communities in the process. In 2016 the project developed a Community Engagement Framework for Biowastes to provide guidelines in effective consultation with communities about the discharge of biowastes to land, and in 2017 another collaborative three-year project with councils aimed to develop a collective biosolids strategy and use the programme in the lower North Island. When the project was reviewed in 2020, the conclusion was that it had shown biosolids can be beneficially reused.
A research paper in 2019 reported on the management considerations around using biosolids as a fertilizer, specifically to account for the complexity of the nutrients reducing their availability for plant uptake, and noted that stakeholders need to "factor in the expected plant availability of the nutrients when assessing the risk and benefits of these biological materials."
History
As public concern arose about the disposal of the increased volumes of solids being removed from sewage in the United States during sewage treatment mandated by the Clean Water Act, the Water Environment Federation (WEF) sought a new name to distinguish the clean, agriculturally viable product generated by modern wastewater treatment from earlier forms of sewage sludge widely remembered for causing offensive or dangerous conditions. Of 300 suggestions, biosolids was attributed to Dr. Bruce Logan of the University of Arizona, and recognized by WEF in 1991.
Examples
Milorganite is the trademark of a biosolids fertilizer produced by the Milwaukee Metropolitan Sewerage District. The recycled organic nitrogen fertilizer from the Jones Island Water Reclamation Facility in Milwaukee, Wisconsin, is sold throughout North America and reduces the need for manufactured nutrients.
Loop is the trademark of a biosolids soil amendment produced by the King County Wastewater Treatment Division. Loop has been blended into GroCo, a commercially available compost product, since 1976. Several local farms and forests also use Loop directly.
TAGRO is short for "Tacoma Grow" and is produced by the City of Tacoma, Washington, since 1991.
Dillo Dirt has been produced by the City of Austin, Texas, since 1989.
Biosolids are applied as fertilizer in the Central Wheatbelt of Australia as a recycling program by the Water Corporation.
See also
Reuse of excreta
References
Organic fertilizers
Sewerage
Environmental engineering | Biosolids | [
"Chemistry",
"Engineering",
"Environmental_science"
] | 3,179 | [
"Chemical engineering",
"Water pollution",
"Sewerage",
"Civil engineering",
"Environmental engineering"
] |
1,143,008 | https://en.wikipedia.org/wiki/Steel%20mill | A steel mill or steelworks is an industrial plant for the manufacture of steel. It may be an integrated steel works carrying out all steps of steelmaking from smelting iron ore to rolled product, but may also be a plant where steel semi-finished casting products are made from molten pig iron or from scrap.
History
Since the invention of the Bessemer process, steel mills have replaced ironworks, based on puddling or fining methods. New ways to produce steel appeared later: from scrap melted in an electric arc furnace and, more recently, from direct reduced iron processes.
In the late 19th and early 20th centuries the world's largest steel mill was the Barrow Hematite Steel Company steelworks located in Barrow-in-Furness, United Kingdom. Today, the world's largest steel mill is in Gwangyang, South Korea.
Integrated mill
An integrated steel mill has all the functions for primary steel production:
iron making (conversion of ore to liquid iron),
steel making (conversion of pig iron to liquid steel),
casting (solidification of the liquid steel),
roughing rolling/billet rolling (reducing size of blocks)
product rolling (finished shapes).
The principal raw materials for an integrated mill are iron ore, limestone, and coal (or coke). These materials are charged in batches into a blast furnace where the iron compounds in the ore give up excess oxygen and become liquid iron. At intervals of a few hours, the accumulated liquid iron is tapped from the blast furnace and either cast into pig iron or directed to other vessels for further steel making operations. Historically the Bessemer process was a major advancement in the production of economical steel, but it has now been entirely replaced by other processes such as the basic oxygen furnace.
Molten steel is cast into large blocks called blooms. During the casting process various methods are used, such as addition of aluminum, so that impurities in the steel float to the surface where they can be cut off the finished bloom.
Because of the energy cost and structural stress associated with heating and cooling a blast furnace, typically these primary steel making vessels will operate on a continuous production campaign of several years duration. Even during periods of low steel demand, it may not be feasible to let the blast furnace grow cold, though some adjustment of the production rate is possible.
Integrated mills are large facilities that are typically only economical to build at annual capacities of 2,000,000 tons per year and up. Final products made by an integrated plant are usually large structural sections, heavy plate, strip, wire rod, railway rails, and occasionally long products such as bars and pipe.
A major environmental hazard associated with integrated steel mills is the pollution produced in the manufacture of coke, which is an essential intermediate product in the reduction of iron ore in a blast furnace.
Integrated mills may also adopt some of the processes used in mini-mills, such as arc furnaces and direct casting, to reduce production costs.
Minimill
A minimill is traditionally a secondary steel producer; however, Nucor (one of the world's largest steel producers) and Commercial Metals Company (CMC) use minimills exclusively. Usually it obtains most of its iron from scrap steel, recycled from used automobiles and equipment or byproducts of manufacturing. Direct reduced iron (DRI) is sometimes used with scrap, to help maintain desired chemistry of the steel, though usually DRI is too expensive to use as the primary raw steelmaking material. A typical mini-mill will have an electric arc furnace for scrap melting, a ladle furnace or vacuum furnace for precision control of chemistry, a strip or billet continuous caster for converting molten steel to solid form, a reheat furnace and a rolling mill.
Originally the minimill was adapted to production of bar products only, such as concrete reinforcing bar, flats, angles, channels, pipe, and light rails. Since the late 1980s, successful introduction of the direct strip casting process has made minimill production of strip feasible. Often a minimill will be constructed in an area with no other steel production, to take advantage of local markets, resources, or lower-cost labour. Minimill plants may specialize, for example, in making coils of rod for wire-drawing use, or pipe, or in special sections for transportation and agriculture.
Capacities of minimills vary: some plants may make as much as 3,000,000 tons per year, a typical size is in the range 200,000 to 400,000 tons per year, and some old or specialty plants may make as little as 50,000 tons per year of finished product. Nucor Corporation, for example, annually produces around 9,100,000 tons of sheet steel from its four sheet mills, 6,700,000 tons of bar steel from its 10 bar mills and 2,100,000 tons of plate steel from its two plate mills.
Since the electric arc furnace can be easily started and stopped on a regular basis, minimills can follow the market demand for their products easily, operating on 24-hour schedules when demand is high and cutting back production when sales are lower.
See also
Foundry
List of steel producers
Steel § Industry
References
Further reading
McGannon, Harold E. (editor) (1971). The Making, Shaping and Treating of Steel: Ninth Edition. Pittsburgh, Pennsylvania: United States Steel Corporation.
External links
Travel Channel video 1 of the Homestead Works
An extensive picture gallery of all methods of production in North America and Europe
History of steelworks in Scotland
Trends in EAF quality capability 1980–2010
Firing techniques
Manufacturing buildings and structures
Steelmaking | Steel mill | [
"Chemistry"
] | 1,142 | [
"Iron and steel mills",
"Metallurgical processes",
"Steelmaking",
"Metallurgical facilities"
] |
1,143,734 | https://en.wikipedia.org/wiki/Peptidomimetic | A peptidomimetic is a small protein-like chain designed to mimic a peptide. They typically arise either from modification of an existing peptide, or by designing similar systems that mimic peptides, such as peptoids and β-peptides. Irrespective of the approach, the altered chemical structure is designed to advantageously adjust the molecular properties such as stability or biological activity. This can have a role in the development of drug-like compounds from existing peptides. Peptidomimetics can be prepared by cyclization of linear peptides or coupling of stable unnatural amino acids. These modifications involve changes to the peptide that will not occur naturally (such as altered backbones and the incorporation of nonnatural amino acids). Unnatural amino acids can be generated from their native analogs via modifications such as amine alkylation, side chain substitution, structural bond extension cyclization, and isosteric replacements within the amino acid backbone. Based on their similarity with the precursor peptide, peptidomimetics can be grouped into four classes (A – D) where A features the most and D the least similarities. Classes A and B involve peptide-like scaffolds, while classes C and D include small molecules (Figure 1).
Class A peptidomimetics
This group includes modified peptides that are mainly composed of proteogenic amino acids thereby closely resembling a natural peptide binding epitope. Introduced modifications usually aim to increase the stability of the peptide, its affinity for a desired binding partner, oral availability or cell permeability. The design of class A peptidomimetics often involves macrocyclization strategies as for example in stapled peptides.
Class B peptidomimetics
This class of peptidomimetics encompasses peptides with a large number of non-natural amino acids, major backbone modifications or larger non-natural building fragments that resemble the conformation of a particular peptide binding motif. Examples involve D-peptide and peptidic foldamers such as beta-peptides.
Class C peptidomimetics
These structural mimetics include molecules that are highly modified when compared to their parent peptide sequence. Usually, a small-molecule scaffold is applied to project groups in analogy to the bioactive conformation of a peptide.
Class D peptidomimetics
These mechanistic mimetics do not directly recapitulate the side chains or conformation of a peptide but mimic its mode-of-action. Class D peptidomimetics can be directly designed from a small peptide sequence or identified by screening compound libraries. For example, Nirmatrelvir is an orally-active small molecule drug derived from lufotrelvir, a modified L-peptide.
Uses and examples
The use of peptides as drugs has some disadvantages because of their bioavailability and biostability. Rapid degradation, poor oral availability, difficult transportation through cell membranes, nonselective receptor binding, and challenging multistep preparation are the major limitations of peptides as active pharmaceutical ingredients. Therefore, small protein-like chains called peptidomimetics could be designed and used to mimic native analogs and conceivably exhibit better pharmacological properties. Many peptidomimetics are utilized as FDA-approved drugs, such as Romidepsin (Istodax), Atazanavir (Reyataz), Saquinavir (Invirase), Octreotide (Sandostatin), Lanreotide (Somatuline), Plecanatide (Trulance), Ximelagatran (Exanta), Etelcalcetide (Parsabiv), and Bortezomib (Velcade).
Peptidomimetic approaches have been utilized to design small molecules that selectively target cancer cells, an approach known as targeted chemotherapy, by inducing programmed cell death by a process called apoptosis. The following two examples mimic proteins involved in key Protein–protein interactions that reactivate the apoptotic pathway in cancer but do so by distinct mechanisms.
In 2004, Walensky and co-workers reported a stabilized alpha helical peptide that mimics pro-apoptotic BH3-only proteins, such as BID and BAD. This molecule was designed to stabilize the native helical structure by forming a macrocycle between side chains that are not involved in binding. This process, referred to as peptide stapling, uses non-natural amino acids to facilitate macrocyclization by ring-closing olefin metathesis. In this case, a stapled BH3 helix was identified which specifically activates the mitochondrial apoptotic pathway by antagonizing the sequestration of BH3-only proteins by anti-apoptotic proteins (e.g. Bcl-2, see also intrinsic and extrinsic inducers of the apoptosis). This molecule suppressed growth of human leukemia in a mouse xenograft model.
Also in 2004, Harran and co-workers reported a dimeric small molecule that mimics the proapoptotic protein Smac (see mitochondrial regulation in apoptosis). This molecule mimics the N-terminal linear motif Ala-Val-Pro-Ile. Uniquely, the dimeric structure of this peptidomimetic led to a marked increase in activity over an analogous monomer. This binding cooperativity results from the molecule's ability to also mimic the homodimeric structure of Smac, which is functionally important for reactivating caspases. Smac mimetics of this type can sensitize an array of non-small-cell lung cancer cells to conventional chemotherapeutics (e.g. Gemcitabine, Vinorelbine) both in vitro and in mouse xenograft models.
Heterocycles are often used to mimic the amide bond of peptides. Thiazoles, for example, are found in naturally occurring peptides and used by researchers to mimic the amide bond of peptides.
See also
Apoptosis
Beta-peptide
Cancer
Clicked peptide polymer
Depsipeptide
Expanded genetic code
Foldamers
Non-proteinogenic amino acids
Stapled Peptides
References
Further reading
Molecular biology
Organic chemistry
Chemical biology | Peptidomimetic | [
"Chemistry",
"Biology"
] | 1,299 | [
"Biochemistry",
"Chemical biology",
"nan",
"Molecular biology"
] |
1,143,981 | https://en.wikipedia.org/wiki/Electrostatic%20lens | An electrostatic lens is a device that assists in the transport of charged particles. For instance, it can guide electrons emitted from a sample to an electron analyzer, analogous to the way an optical lens assists in the transport of light in an optical instrument. Systems of electrostatic lenses can be designed in the same way as optical lenses, so electrostatic lenses easily magnify or converge the electron trajectories. An electrostatic lens can also be used to focus an ion beam, for example to make a microbeam for irradiating individual cells.
Cylinder lens
A cylinder lens consists of several cylinders whose sides are thin walls. Each cylinder lines up parallel to the optical axis along which electrons enter. There are small gaps between the cylinders. When each cylinder has a different voltage, the gap between the cylinders works as a lens. The magnification can be changed by choosing different voltage combinations. Although the magnification of two cylinder lenses can be changed, the focal point is also changed by this operation. Three cylinder lenses achieve the change of magnification while holding the object and image positions, because there are two gaps that work as lenses. Although the voltages have to change depending on the electron kinetic energy, the voltage ratio is kept constant when the optical parameters are not changed.
While a charged particle is in an electric field, a force acts upon it. The faster the particle, the smaller the accumulated impulse. For a collimated beam the focal length is given by the initial momentum divided by the perpendicular impulse accumulated in the lens. This makes the focal length of a single lens a function of the second order of the speed of the charged particle. Single lenses as known from photonics are not easily available for electrons.
The cylinder lens consists of a defocusing lens, a focusing lens and a second defocusing lens, with the sum of their refractive powers being zero. But because there is some distance between the lenses, the electron is deflected three times and reaches the focusing lens at a position farther away from the axis, where it travels through a field with greater strength. This indirectness leads to the fact that the resulting refractive power is the square of the refractive power of a single lens.
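A minimal scaling sketch of this behaviour (not a field calculation): if the refractive power of a single gap scales as 1/v², the combined three-electrode cylinder lens has a focal length scaling roughly as v⁴, i.e. as the square of the kinetic energy. The reference focal length and reference energy below are illustrative assumptions, not material constants:

```python
# Scaling sketch only: assumes the focal length of a cylinder/einzel-type
# electrostatic lens scales as the square of the kinetic energy, as described above.
# f_ref_m and E_ref_eV are illustrative reference values, not measured data.

def cylinder_lens_focal_length(E_kin_eV, f_ref_m=0.10, E_ref_eV=1000.0):
    """Estimated focal length (m) at kinetic energy E_kin_eV (eV), using f ∝ E^2."""
    return f_ref_m * (E_kin_eV / E_ref_eV) ** 2

for energy in (500.0, 1000.0, 2000.0):
    print(energy, "eV ->", cylinder_lens_focal_length(energy), "m")
# Doubling the kinetic energy quadruples the focal length in this approximation.
```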
Einzel lens
An einzel lens is an electrostatic lens that focuses without changing the energy of the beam. It consists of three or more sets of cylindrical or rectangular tubes in series along an axis.
Quadrupole lens
The quadrupole lens consists of two single quadrupoles turned 90° with respect to each other. Let z be the optical axis then one can deduce separately for the x and the y axis that the refractive power is again the square of the refractive power of a single lens.
A magnetic quadrupole works very similar to an electric quadrupole, however the Lorentz force increases with the velocity of the charged particle. In spirit of a Wien filter, a combined magnetic, electric quadrupole is achromatic around a given velocity. Bohr and Pauli claim that this lens leads to aberration when applied to ions with spin (in the sense of chromatic aberration), but not when applied to electrons which also have a spin. See Stern–Gerlach experiment.
Magnetic lens
A magnetic field can also be used to focus charged particles. The Lorentz force acting on the electron is perpendicular to both the direction of motion and to the direction of the magnetic field (vxB). A homogeneous field deflects charged particles, but does not focus them. The simplest magnetic lens is a donut-shaped coil through which the beam passes, preferably along the axis of the coil. To generate the magnetic field, an electric current is passed through the coil. The magnetic field is strongest in the plane of the coil and gets weaker moving away from it. In the plane of the coil, the field gets stronger as we move away from the axis. Thus, a charged particle further from the axis experiences a stronger Lorentz force than a particle closer to the axis (assuming that they have the same velocity). This gives rise to the focusing action. Unlike the paths in an electrostatic lens, the paths in a magnetic lens contain a spiraling component, i.e. the charged particles spiral around the optical axis. As a consequence, the image formed by a magnetic lens is rotated relative to the object. This rotation is absent for an electrostatic lens.
The spatial extent of the magnetic field can be controlled by using an iron (or other magnetically soft material) magnetic circuit. This makes it possible to design and build more compact magnetic lenses with well defined optical properties. The vast majority of electron microscopes in use today use magnetic lenses due to their superior imaging properties and the absence of the high voltages that are required for electrostatic lenses.
Multipole lenses
Multipoles beyond the quadrupole can correct for spherical aberration and in particle accelerators the dipole bending magnets are really composed of a large number of elements with different superpositions of multipoles.
Usually the dependence of the focal length is stated in terms of the kinetic energy, which itself depends on a power of the velocity.
So for an electrostatic lens the focal length varies with the second power of the kinetic energy,
while for a magnetostatic lens the focal length varies proportional to the kinetic energy.
And a combined quadrupole can be achromatic around a given energy.
If a distribution of particles with different kinetic energies is accelerated by a longitudinal electric field, the relative energy spread is reduced leading to less chromatic error. An example of this is in the electron microscope.
Electron spectroscopy
The recent development of electron spectroscopy makes it possible to reveal the electronic structures of molecules. Although this is mainly accomplished by electron analysers, electrostatic lenses also play a significant role in the development of electron spectroscopy.
Since electron spectroscopy detects several physical phenomena from the electrons emitted from samples, it is necessary to transport the electrons to the electron analyser. Electrostatic lenses satisfy the general properties of lenses.
See also
SIMION
Ion funnel
References
Further reading
E. Harting, F.H. Read, Electrostatic Lenses, Elsevier, Amsterdam, 1976.
Electrostatics
Spectroscopy | Electrostatic lens | [
"Physics",
"Chemistry"
] | 1,254 | [
"Instrumental analysis",
"Molecular physics",
"Spectroscopy",
"Spectrum (physical sciences)"
] |
1,144,624 | https://en.wikipedia.org/wiki/Radical%20initiator | In chemistry, radical initiators are substances that can produce radical species under mild conditions and promote radical reactions. These substances generally possess weak bonds—bonds that have small bond dissociation energies. Radical initiators are utilized in industrial processes such as polymer synthesis. Typical examples are molecules with a nitrogen-halogen bond, azo compounds, and organic and inorganic peroxides.
Main types of initiation reaction
Halogens undergo homolytic fission relatively easily. Chlorine, for example, gives two chlorine radicals (Cl•) by irradiation with ultraviolet light. This process is used for chlorination of alkanes.
Azo compounds (R-N=N-R') can be the precursor of two carbon-centered radicals (R• and R'•) and nitrogen gas upon heating and/or by irradiation. For example, AIBN and ABCN yield isobutyronitrile and cyclohexanecarbonitrile radicals, respectively.
Organic peroxides each have a peroxide bond (-O-O-), which is readily cleaved to give two oxygen-centered radicals. The oxyl radicals are unstable and believed to be transformed into relatively stable carbon-centered radicals. For example, di-tert-butyl peroxide (t-BuOOt-Bu) gives two t-butoxy radicals (t-BuO•) and the radicals become methyl radicals (CH3•) with the loss of acetone. Benzoyl peroxide ((PhC(O)O)2) generates benzoyloxyl radicals (PhCOO•), each of which loses carbon dioxide to be converted into a phenyl radical (Ph•). Methyl ethyl ketone peroxide is also common, and acetone peroxide is on rare occasions used as a radical initiator, too.
Inorganic peroxides function analogously to organic peroxides. Many polymers are often produced from the alkenes upon initiation with peroxydisulfate salts. In solution, peroxydisulfate dissociates to give sulfate radicals:
[O3SO-OSO3]2− → 2 [SO4]•−
The sulfate radical adds to an alkene forming radical sulfate esters, e.g. •CHPhCH2OSO3−, that add further alkenes via formation of C-C bonds. Many styrene and fluoroalkene polymers are produced in this way.
In atom transfer radical polymerization (ATRP), carbon-halides reversibly generate organic radicals in the presence of transition metal catalyst.
Safety
Some radical initiators such as azo compounds and peroxides can detonate at elevated temperatures so they must be stored cold.
References | Radical initiator | [
"Chemistry",
"Materials_science"
] | 575 | [
"Radical initiators",
"Polymer chemistry",
"Reagents for organic chemistry"
] |
1,145,404 | https://en.wikipedia.org/wiki/Lead%20zirconate%20titanate | Lead zirconate titanate, also called lead zirconium titanate and commonly abbreviated as PZT, is an inorganic compound with the chemical formula Pb(ZrxTi1−x)O3 (0 ≤ x ≤ 1). It is a ceramic perovskite material that shows a marked piezoelectric effect, meaning that the compound changes shape when an electric field is applied. It is used in a number of practical applications such as ultrasonic transducers and piezoelectric resonators. It is a white to off-white solid.
Lead zirconium titanate was first developed around 1952 at the Tokyo Institute of Technology. Compared to barium titanate, a previously discovered metallic-oxide-based piezoelectric material, lead zirconium titanate exhibits greater sensitivity and has a higher operating temperature. Piezoelectric ceramics are chosen for applications because of their physical strength, chemical inertness and their relatively low manufacturing cost. PZT ceramic is the most commonly used piezoelectric ceramic because it has an even greater sensitivity and higher operating temperature than other piezoceramics. Recently, there has been a large push towards finding alternatives to PZT due to legislations in many countries restricting the use of lead alloys and compounds in commercial products.
Electroceramic properties
Being piezoelectric, lead zirconate titanate develops a voltage (or potential difference) across two of its faces when compressed (useful for sensor applications), and physically changes shape when an external electric field is applied (useful for actuator applications). The relative permittivity of lead zirconate titanate can range from 300 to 20000, depending upon orientation and doping.
Being pyroelectric, this material develops a voltage difference across two of its faces under changing temperature conditions; consequently, lead zirconate titanate can be used as a heat sensor. Lead zirconate titanate is also ferroelectric, which means that it has a spontaneous electric polarization (electric dipole) that can be reversed in the presence of an electric field.
The material features an extremely large relative permittivity at the morphotropic phase boundary (MPB) near x = 0.52.
Some formulations are ohmic until at least (), after which current grows exponentially with field strength before reaching avalanche breakdown; but lead zirconate titanate exhibits time-dependent dielectric breakdown — breakdown may occur under constant-voltage stress after minutes or hours, depending on voltage and temperature, so its dielectric strength depends on the time scale over which it is measured. Other formulations have dielectric strengths measured in the range.
Uses
Lead zirconate titanate-based materials are components of ceramic capacitors and STM/AFM actuators (tubes).
Lead zirconate titanate is used to make ultrasound transducers and other sensors and actuators, as well as high-value ceramic capacitors and FRAM chips. Lead zirconate titanate is also used in the manufacture of ceramic resonators for reference timing in electronic circuitry. Anti-flash goggles featuring PLZT protect aircrew from burns and blindness in case of a nuclear explosion. The PLZT lenses could turn opaque in less than 150 microseconds.
Commercially, it is usually not used in its pure form, rather it is doped with either acceptors, which create oxygen (anion) vacancies, or donors, which create metal (cation) vacancies and facilitate domain wall motion in the material. In general, acceptor doping creates hard lead zirconate titanate, while donor doping creates soft lead zirconate titanate. Hard and soft lead zirconate titanate generally differ in their piezoelectric constants. Piezoelectric constants are proportional to the polarization or to the electric field generated per unit of mechanical stress, or alternatively to the mechanical strain produced per unit of electric field applied. In general, soft lead zirconate titanate has a higher piezoelectric constant, but larger losses in the material due to internal friction. In hard lead zirconate titanate, domain wall motion is pinned by the impurities, thereby lowering the losses in the material, but at the expense of a reduced piezoelectric constant.
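As an order-of-magnitude illustration of the converse piezoelectric effect described above (strain ≈ d33 × E), the following sketch uses an assumed d33 value and actuator geometry; the numbers do not describe any particular commercial composition:

```python
# Order-of-magnitude sketch of the converse piezoelectric effect: strain ~ d33 * E.
# The d33 value, thickness and voltage are assumed purely for illustration.

d33 = 400e-12        # piezoelectric charge constant, m/V (assumed, typical order for a soft PZT)
thickness = 1e-3     # actuator thickness, m
voltage = 200.0      # applied voltage, V

E_field = voltage / thickness       # electric field, V/m
strain = d33 * E_field              # dimensionless strain along the poling axis
displacement = strain * thickness   # resulting thickness change, m

print(f"strain = {strain:.1e}, displacement = {displacement * 1e9:.0f} nm")  # ~80 nm
```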
Varieties
One of the most commonly studied chemical compositions is Pb[ZrxTi1−x]O3 with x near 0.52. The increased piezoelectric response and poling efficiency near x = 0.52 is due to the increased number of allowable domain states at the MPB. At this boundary, the 6 possible domain states from the tetragonal phase ⟨100⟩ and the 8 possible domain states from the rhombohedral phase ⟨111⟩ are equally favorable energetically, thereby allowing a maximum of 14 possible domain states.
Like the structurally similar lead scandium tantalate and barium strontium titanate, lead zirconate titanate can be used for the manufacture of uncooled staring-array infrared imaging sensors for thermographic cameras. Both thin-film (usually obtained by chemical vapor deposition) and bulk structures are used. The formula of the material used usually approaches PbZr0.3Ti0.7O3 (called PZT 30/70). Its properties may be modified by doping it with lanthanum, resulting in lanthanum-doped lead zirconate titanate (PLZT, also called lead lanthanum zirconate titanate), designated PLZT 17/30/70.
See also
Polyvinylidene fluoride (PVDF)
Lithium niobate
References
External links
PZT-5A lead zirconate titanate material properties
Ceramic materials
Piezoelectric materials
Lead(II) compounds
Titanates
Zirconates
Infrared sensor materials
Ferroelectric materials
Perovskites | Lead zirconate titanate | [
"Physics",
"Materials_science",
"Engineering"
] | 1,210 | [
"Physical phenomena",
"Ferroelectric materials",
"Materials",
"Electrical phenomena",
"Ceramic materials",
"Ceramic engineering",
"Piezoelectric materials",
"Hysteresis",
"Matter"
] |
1,145,414 | https://en.wikipedia.org/wiki/Van%20Arkel%E2%80%93Ketelaar%20triangle | Bond triangles or Van Arkel–Ketelaar triangles (named after Anton Eduard van Arkel and J. A. A. Ketelaar) are triangles used for showing different compounds in varying degrees of ionic, metallic and covalent bonding.
History
In 1941 Van Arkel recognised three extreme materials and associated bonding types. Using 36 main group elements, such as metals, metalloids and non-metals, he placed ionic, metallic and covalent bonds on the corners of an equilateral triangle, as well as suggested intermediate species. The bond triangle shows that chemical bonds are not just particular bonds of a specific type. Rather, bond types are interconnected and different compounds have varying degrees of different bonding character (for example, covalent bonds with significant ionic character are called polar covalent bonds).
Six years later, in 1947, Ketelaar developed van Arkel's idea by adding more compounds and placing bonds on different sides of the triangle.
Many others have since developed the triangle idea. Some (e.g. Allen's quantitative triangle) used electron configuration energy as an atom parameter, while others (Jensen's quantitative triangle, Norman's quantitative triangle) used the electronegativity of compounds. Nowadays, electronegativity triangles are mostly used to rate the chemical bond type.
Usage
Different compounds that obey the octet rule (sp-elements) and hydrogen can be placed on the triangle. d-block elements, however, cannot be analysed using the van Arkel–Ketelaar triangle, as their electronegativity is so high that it is taken as a constant. Using electronegativity, with the average electronegativity of the two bonded species on the x-axis and their electronegativity difference on the y-axis, the dominant bond type between the compounds can be rated.
On the right side (from ionic to covalent) lie compounds with a varying difference in electronegativity. Compounds whose atoms have equal electronegativity, such as Cl2 (chlorine), are placed in the covalent corner, while the ionic corner holds compounds with a large electronegativity difference, such as NaCl (table salt). The bottom side (from metallic to covalent) contains compounds with a varying degree of directionality in the bond.
At one extreme are metallic bonds with delocalized bonding, and at the other are covalent bonds in which the orbitals overlap in a particular direction. The left side (from ionic to metallic) is meant for delocalized bonds with a varying electronegativity difference.
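As a concrete illustration of the usage described above, the sketch below computes the two triangle coordinates, average electronegativity and electronegativity difference, for a few species and applies a crude classification. The Pauling electronegativity values and the cut-off thresholds are illustrative assumptions, not values taken from this article.

```python
# Rough sketch of placing species on a Van Arkel-Ketelaar triangle.
# Pauling electronegativities (approximate) and thresholds are illustrative only.
chi = {"Na": 0.93, "Cl": 3.16, "Cs": 0.79}

def triangle_coords(a, b):
    """x = average electronegativity, y = electronegativity difference."""
    x = (chi[a] + chi[b]) / 2
    y = abs(chi[a] - chi[b])
    return x, y

def crude_bond_type(x, y):
    # Very rough cut-offs, only to illustrate how the two axes are read.
    if y > 1.8:
        return "predominantly ionic"
    if x > 2.2:
        return "predominantly covalent"
    return "predominantly metallic"

for pair in [("Na", "Cl"), ("Cl", "Cl"), ("Cs", "Cs")]:
    x, y = triangle_coords(*pair)
    print(pair, f"x = {x:.2f}, y = {y:.2f} ->", crude_bond_type(x, y))
```

Run on these inputs, NaCl lands near the ionic corner, Cl2 near the covalent corner and Cs near the metallic corner, matching the placement described above.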
See also
Covalent bonding
Ionic bonding
Metallic bonding
Atomic orbitals
Hund's rules
Ternary plot
External links
Van Arkel–Ketelaar Triangles of Bonding
Chemical bonding | Van Arkel–Ketelaar triangle | [
"Physics",
"Chemistry",
"Materials_science"
] | 544 | [
"Chemical bonding",
"Condensed matter physics",
"nan"
] |
1,145,733 | https://en.wikipedia.org/wiki/BIBO%20stability | In signal processing, specifically control theory, bounded-input, bounded-output (BIBO) stability is a form of stability for signals and systems that take inputs. If a system is BIBO stable, then the output will be bounded for every input to the system that is bounded.
A signal is bounded if there is a finite value B > 0 such that the signal magnitude never exceeds B, that is
For discrete-time signals: |y[n]| ≤ B for all n;
For continuous-time signals: |y(t)| ≤ B for all t.
Time-domain condition for linear time-invariant systems
Continuous-time necessary and sufficient condition
For a continuous time linear time-invariant (LTI) system, the condition for BIBO stability is that the impulse response, h(t), be absolutely integrable, i.e., its L1 norm exists: ∫ |h(t)| dt = ‖h‖₁ < ∞ (integral over all t).
Discrete-time sufficient condition
For a discrete time LTI system, the condition for BIBO stability is that the impulse response, h[n], be absolutely summable, i.e., its ℓ1 norm exists: Σ |h[n]| = ‖h‖₁ < ∞ (sum over all n).
Proof of sufficiency
Given a discrete time LTI system with impulse response h[n], the relationship between the input x[n] and the output y[n] is
y[n] = h[n] ∗ x[n],
where ∗ denotes convolution. Then it follows by the definition of convolution
y[n] = Σk h[k] x[n − k] (sum over all integers k).
Let ‖x‖∞ be the maximum value of |x[n]|, i.e., the ℓ∞-norm. Then
|y[n]| = |Σk h[k] x[n − k]| ≤ Σk |h[k]| |x[n − k]| (by the triangle inequality) ≤ ‖x‖∞ Σk |h[k]|.
If h[n] is absolutely summable, then Σk |h[k]| = ‖h‖₁ < ∞ and ‖x‖∞ Σk |h[k]| = ‖x‖∞ ‖h‖₁.
So if h[n] is absolutely summable and x[n] is bounded, then y[n] is bounded as well because |y[n]| ≤ ‖x‖∞ ‖h‖₁ < ∞.
The proof for continuous-time follows the same arguments.
Frequency-domain condition for linear time-invariant systems
Continuous-time signals
For a rational and continuous-time system, the condition for stability is that the region of convergence (ROC) of the Laplace transform includes the imaginary axis. When the system is causal, the ROC is the open region to the right of a vertical line whose abscissa is the real part of the "largest pole", or the pole that has the greatest real part of any pole in the system. The real part of the largest pole defining the ROC is called the abscissa of convergence. Therefore, all poles of the system must be in the strict left half of the s-plane for BIBO stability.
This stability condition can be derived from the above time-domain condition as follows:
|H(s)| = |∫ h(t) e^(−st) dt| ≤ ∫ |h(t)| |e^(−st)| dt = ∫ |h(t)| e^(−σt) dt (integrals over all t), where s = σ + jω and σ = Re(s); on the imaginary axis (σ = 0) this is bounded above by ∫ |h(t)| dt < ∞.
The region of convergence must therefore include the imaginary axis.
Discrete-time signals
For a rational and discrete time system, the condition for stability is that the region of convergence (ROC) of the z-transform includes the unit circle. When the system is causal, the ROC is the open region outside a circle whose radius is the magnitude of the pole with largest magnitude. Therefore, all poles of the system must be inside the unit circle in the z-plane for BIBO stability.
This stability condition can be derived in a similar fashion to the continuous-time derivation:
|H(z)| = |Σn h[n] z^(−n)| ≤ Σn |h[n]| |z|^(−n) (sums over all n), where z = r·e^(jω) and r = |z|; on the unit circle (|z| = 1) this is bounded above by Σn |h[n]| < ∞.
The region of convergence must therefore include the unit circle.
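The pole condition above is easy to check numerically. The sketch below uses a hypothetical transfer function (the coefficients are made up for illustration) and tests whether all poles of a causal, rational discrete-time system lie strictly inside the unit circle.

```python
import numpy as np

# BIBO stability check for a causal, rational discrete-time LTI system
# H(z) = B(z)/A(z); the coefficients below are illustrative, not from the article.
a = [1.0, -1.3, 0.4]   # denominator A(z) = 1 - 1.3 z^-1 + 0.4 z^-2

poles = np.roots(a)                     # roots of the denominator polynomial
stable = np.all(np.abs(poles) < 1.0)    # all poles strictly inside the unit circle

print("poles:", poles)                  # here: 0.8 and 0.5
print("BIBO stable (causal case):", bool(stable))
```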
See also
LTI system theory
Finite impulse response (FIR) filter
Infinite impulse response (IIR) filter
Nyquist plot
Routh–Hurwitz stability criterion
Bode plot
Phase margin
Root locus method
Input-to-state stability
Further reading
Gordon E. Carlson Signal and Linear Systems Analysis with Matlab second edition, Wiley, 1998,
John G. Proakis and Dimitris G. Manolakis Digital Signal Processing Principals, Algorithms and Applications third edition, Prentice Hall, 1996,
D. Ronald Fannin, William H. Tranter, and Rodger E. Ziemer Signals & Systems Continuous and Discrete fourth edition, Prentice Hall, 1998,
Proof of the necessary conditions for BIBO stability.
Christophe Basso Designing Control Loops for Linear and Switching Power Supplies: A Tutorial Guide first edition, Artech House, 2012, 978-1608075577
References
Signal processing
Digital signal processing
Articles containing proofs
Stability theory | BIBO stability | [
"Mathematics",
"Technology",
"Engineering"
] | 760 | [
"Telecommunications engineering",
"Computer engineering",
"Signal processing",
"Stability theory",
"Articles containing proofs",
"Dynamical systems"
] |
14,096,979 | https://en.wikipedia.org/wiki/Linear%20sweep%20voltammetry | In analytical chemistry, linear sweep voltammetry is a method of voltammetry where the current at a working electrode is measured while the potential between the working electrode and a reference electrode is swept linearly in time. Oxidation or reduction of species is registered as a peak or trough in the current signal at the potential at which the species begins to be oxidized or reduced.
Experimental method
The experimental setup for linear sweep voltammetry utilizes a potentiostat and a three-electrode setup to deliver a potential to a solution and monitor its change in current. The three-electrode setup consists of a working electrode, an auxiliary electrode, and a reference electrode. The potentiostat delivers the potentials through the three-electrode setup. A potential, E, is applied to the working electrode and swept linearly in time. The slope of the potential vs. time graph is called the scan rate and can range from a few mV/s to 1,000,000 V/s.
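As a small illustration of the potential program, the sketch below generates the linear sweep E(t) = E_start + v·t for an assumed starting potential and scan rate; the numbers are arbitrary examples, not values from the article.

```python
import numpy as np

# Linear potential sweep: E(t) = E_start + v * t.
# Start potential, end potential and scan rate are arbitrary illustrative values.
E_start = -0.2      # V vs. the reference electrode
E_end = 0.8         # V vs. the reference electrode
scan_rate = 0.05    # V/s (i.e. 50 mV/s)

t = np.linspace(0.0, (E_end - E_start) / scan_rate, 201)  # time axis in seconds
E = E_start + scan_rate * t                               # applied potential

print(f"sweep duration: {t[-1]:.1f} s, final potential: {E[-1]:.2f} V")
```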
The working electrode is one of the electrodes at which the oxidation/reduction reactions occur; the processes that occur at this electrode are the ones being monitored. The auxiliary electrode (or counter electrode) is the one at which a process opposite from the one taking place at the working electrode occurs. The processes at this electrode are not monitored. The equation below gives an example of a reduction occurring at the surface of the working electrode, where E° is the reduction potential of A (if the electrolyte and the electrode are in their standard conditions, then this potential is a standard reduction potential). As the applied potential E approaches E°, the current at the surface increases, and when E = E°, the concentration of A equals that of the reduced form at the surface ([A] = [A−]). As the molecules on the surface of the working electrode are oxidized/reduced, they move away from the surface and new molecules come into contact with the surface of the working electrode. The flow of electrons into or out of the electrode causes the current. The current is a direct measure of the rate at which electrons are being exchanged through the electrode-electrolyte interface. When this rate becomes higher than the rate at which the oxidizing or reducing species can diffuse from the bulk of the electrolyte to the surface of the electrode, the current reaches a plateau or exhibits a peak:
A + e− → A−
Reduction of molecule A at the surface of the working electrode.
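The statement above that [A] = [A−] at the surface when E equals the reduction potential can be illustrated with the Nernst equation for a reversible couple, [A]/[A−] = exp(nF(E − E°)/RT). The sketch below evaluates this surface ratio at a few applied potentials; E° and the potentials are assumed purely for illustration.

```python
import numpy as np

# Nernst surface-concentration ratio for a reversible one-electron reduction:
# [A]/[A-] = exp(n*F*(E - E0)/(R*T)).  E0 and the potentials are illustrative.
F = 96485.0       # C/mol, Faraday constant
R = 8.314         # J/(mol K), gas constant
T = 298.15        # K
n = 1             # electrons transferred
E0 = 0.25         # V, assumed reduction potential of A

for E in (0.35, 0.30, 0.25, 0.20, 0.15):          # applied potentials in volts
    ratio = np.exp(n * F * (E - E0) / (R * T))    # [A]/[A-] at the electrode surface
    print(f"E = {E:.2f} V  ->  [A]/[A-] = {ratio:.3f}")
```

At E = E° the ratio is exactly 1, and as the potential is swept negative of E° the surface becomes dominated by the reduced form, which is what drives the rising current.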
The auxiliary and reference electrodes work in unison to balance out the charge added or removed by the working electrode. The auxiliary electrode balances the working electrode, but in order to know how much potential it has to add or remove, it relies on the reference electrode, which has a known reduction potential. The auxiliary electrode tries to keep the reference electrode at a certain reduction potential, and to do this it has to balance the working electrode.
Characterization
Linear sweep voltammetry can identify unknown species and determine the concentration of solutions. E1/2 can be used to identify the unknown species while the height of the limiting current can determine the concentration. The sensitivity of current changes vs. voltage can be increased by increasing the scan rate. Higher potentials per second result in more oxidation/reduction of a species at the surface of the working electrode.
Variations
For reversible reactions cyclic voltammetry can be used to find information about the forward reaction and the reverse reaction. Like linear sweep voltammetry, cyclic voltammetry applies a linear potential over time and at a certain potential the potentiostat will reverse the potential applied and sweep back to the beginning point. Cyclic voltammetry provides information about the oxidation and reduction reactions.
Applications
While cyclic voltammetry is applicable to most cases where linear sweep voltammetry is used, there are some instances where linear sweep voltammetry is more useful. In cases where the reaction is irreversible, cyclic voltammetry will not give any additional data beyond what linear sweep voltammetry provides. In one example, linear sweep voltammetry was used to examine direct methane production via a biocathode. Since the production of methane from CO2 is an irreversible reaction, cyclic voltammetry did not present any distinct advantage over linear sweep voltammetry. This group found that the biocathode produced higher current densities than a plain carbon cathode and that methane can be produced from a direct electric current without the need for hydrogen gas.
See also
Voltammetry
Cyclic voltammetry
Electroanalytical methods
References
Electroanalytical methods | Linear sweep voltammetry | [
"Chemistry"
] | 897 | [
"Electroanalytical methods",
"Electroanalytical chemistry"
] |
14,097,159 | https://en.wikipedia.org/wiki/Staircase%20voltammetry | Staircase voltammetry is a derivative of linear sweep voltammetry. In linear sweep voltammetry the current at a working electrode is measured while the potential between the working electrode and a reference electrode is swept linearly in time. Oxidation or reduction of species is registered as a peak or trough in the current signal at the potential at which the species begins to be oxidized or reduced.
In staircase voltammetry, the potential sweep is a series of stair steps. The current is measured at the end of each potential change, right before the next, so that the contribution to the current signal from the capacitive charging current is reduced.
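A minimal sketch of the staircase potential program and its current-sampling points is given below; the step height, step duration and starting potential are arbitrary illustrative values.

```python
import numpy as np

# Staircase potential program: the potential is held at each step and the
# current is sampled at the end of the step, just before the next potential change.
# Step size, step time and start potential are illustrative values only.
E_start = 0.0        # V
step_height = 0.005  # V per step (5 mV)
step_time = 0.1      # s per step
n_steps = 8

step_potentials = E_start + step_height * np.arange(n_steps)
sample_times = step_time * (np.arange(n_steps) + 1)   # end of each step

for E, t in zip(step_potentials, sample_times):
    print(f"hold at {E * 1000:.1f} mV, sample current at t = {t:.1f} s")
```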
See also
Voltammetry
Electroanalytical methods
Squarewave voltammetry
References
Electroanalytical methods | Staircase voltammetry | [
"Chemistry"
] | 152 | [
"Electroanalytical methods",
"Electroanalytical chemistry",
"Analytical chemistry stubs"
] |
14,097,440 | https://en.wikipedia.org/wiki/Working%20electrode | In electrochemistry, the working electrode is the electrode in an electrochemical system on which the reaction of interest is occurring. The working electrode is often used in conjunction with an auxiliary electrode, and a reference electrode in a three-electrode system. Depending on whether the reaction on the electrode is a reduction or an oxidation, the working electrode is called cathodic or anodic, respectively. Common working electrodes can consist of materials ranging from noble metals such as gold or platinum, to inert carbon such as glassy carbon, boron-doped diamond or pyrolytic carbon, and mercury drop and film electrodes. Chemically modified electrodes are employed for the analysis of both organic and inorganic samples.
Special types
Ultramicroelectrode (UME)
Rotating disk electrode (RDE)
Rotating ring-disk electrode (RRDE)
Hanging mercury drop electrode (HMDE)
Dropping mercury electrode (DME)
See also
Auxiliary electrode
Electrochemical cell
Electrochemistry
Electrode potential
Electrosynthesis
Reference electrode
Voltammetry
References
External links
Electroanalytical chemistry devices
Electrodes | Working electrode | [
"Chemistry"
] | 223 | [
"Electroanalytical chemistry",
"Electrodes",
"Electrochemistry",
"Electroanalytical chemistry devices",
"Electrochemistry stubs",
"Physical chemistry stubs"
] |
14,097,579 | https://en.wikipedia.org/wiki/Squarewave%20voltammetry | Squarewave voltammetry (SWV) is a form of linear potential sweep voltammetry that uses a combined square wave and staircase potential applied to a stationary electrode. It has found numerous applications in various fields, including within medicinal and various sensing communities.
History
When first reported by Barker in 1957, the working electrode utilized was primarily a dropping mercury electrode (DME). When using a DME, the surface area of the mercury drop is constantly changing throughout the course of the experiment; for this reason, complex mathematical modeling was at times required in order to analyze collected electrochemical data. The squarewave voltammetric technique allowed for the collection of the desired electrochemical data within one mercury drop, meaning that the need for mathematical modeling to account for the changing working electrode surface area was no longer needed. In short, the introduction and development of this technique allowed for the rapid collection of reliable and easily reproducible electrochemical data using DME or SDME working electrodes. With continued improvements from many electrochemists (particularly the Osteryoungs), SWV is now one of the primary voltammetric techniques available on modern potentiostats.
Theory
In a squarewave voltammetric experiment, the current at a (usually stationary) working electrode is measured while the potential between the working electrode and a reference electrode is pulsed forward and backward at a constant frequency. The potential waveform can be viewed as a superposition of a regular squarewave onto an underlying staircase; in this sense, SWV can be considered a modification of staircase voltammetry.
The current is sampled at two times: once at the end of the forward potential pulse and again at the end of the reverse potential pulse (in both cases immediately before the potential direction is reversed). As a result of this current sampling technique, the contribution to the current signal resulting from capacitive (sometimes referred to as non-faradaic or charging) current is minimal. As a result of having current sampling at two different instances per squarewave cycle, two current waveforms are collected; both have diagnostic value and are therefore preserved. When viewed in isolation, the forward and reverse current waveforms mimic the appearance of a cyclic voltammogram (whether a given waveform corresponds to the anodic or the cathodic half depends upon experimental conditions).
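To make the waveform and the sampling scheme concrete, the sketch below superimposes a squarewave on a staircase and lists the two sampling potentials per cycle (end of the forward pulse and end of the reverse pulse); the amplitude, step height and number of cycles are arbitrary illustrative values, and the differential current would be formed as the forward sample minus the reverse sample.

```python
# Squarewave voltammetry potential program: a squarewave superimposed on a staircase.
# Amplitude, step height and starting potential below are illustrative values only.
E_start = -0.1      # V, starting staircase potential
step = 0.004        # V, staircase step per squarewave cycle
amplitude = 0.025   # V, squarewave amplitude added to / subtracted from the staircase
n_cycles = 5

for k in range(n_cycles):
    staircase = E_start + k * step
    E_forward = staircase + amplitude     # potential during the forward pulse
    E_reverse = staircase - amplitude     # potential during the reverse pulse
    # Currents are sampled at the end of each pulse; the differential current
    # for the cycle is the forward sample minus the reverse sample.
    print(f"cycle {k}: sample i_f at {E_forward:+.3f} V, i_r at {E_reverse:+.3f} V")
```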
Despite both the forward and reverse current waveforms having diagnostic worth, it is almost always the case in SWV for the potentiostat software to plot a differential current waveform derived by subtracting the reverse current waveform from the forward current waveform. This differential curve is then plotted against the applied potential. Peaks in the differential current vs. applied potential plot are indicative of redox processes, and the magnitudes of the peaks in this plot are proportional to the concentrations of the various redox active species according to:
Δip = n F A C0* √D0 · ΔΨp / √(π tp)
where Δip is the differential current peak value, n is the number of electrons transferred, F is the Faraday constant, A is the surface area of the electrode, C0* is the concentration of the species, D0 is the diffusivity of the species, tp is the pulse width, and ΔΨp is a dimensionless parameter which gauges the peak height in SWV relative to the limiting response in normal pulse voltammetry.
Renewal of diffusion layer
It is important to note that in squarewave voltammetric analyses, the diffusion layer is not renewed between potential cycles. Thus, it is not possible/accurate to view each cycle in isolation; the conditions present for each cycle is a complex diffusion layer which has evolved through all prior potential cycles. The conditions for a particular cycle are also a function of electrode kinetics, along with other electrochemical considerations.
Applications
Because of the minimal contributions from non-faradaic currents, the use of a differential current plot instead of separate forward and reverse current plots, and significant time evolution between potential reversal and current sampling, high sensitivity screening can be obtained utilizing SWV. For this reason, squarewave voltammetry has been utilized in numerous electrochemical measurements and can be viewed as an improvement to other electroanalytical techniques. For instance, SWV suppresses background currents much more effectively than cyclic voltammetry - for this reason, analyte concentrations on the nanomolar scale can be registered utilizing SWV over CV.
SWV analysis has been used recently in the development of a voltammetric catechol sensor, in the analysis of a large number of pharmaceuticals, and in the development and construction of sensors for 2,4,6-TNT and 2,4-DNT.
In addition to being utilized in independent analyses, SWV has also been coupled with other analytical techniques, including but not limited to thin-layer chromatography (TLC) and high-pressure liquid chromatography.
See also
Voltammetry
Electroanalytical method
References
Electroanalytical methods | Squarewave voltammetry | [
"Chemistry"
] | 982 | [
"Electroanalytical methods",
"Electroanalytical chemistry"
] |
15,181,461 | https://en.wikipedia.org/wiki/Cartilage-derived%20angiogenesis%20inhibitor | A cartilage-derived angiogenesis inhibitor is an angiogenesis inhibitor produced from cartilage. Examples include the peptide troponin I and chondromodulin I.
The antiangiogenic effect may be an inhibition of basement membrane degradation.
These inhibitory agents prevent 'vascular invasion', which is the proliferation of tumor cells in the blood or lymph vessels. They are usually highly expressed in cartilage and within chondrocytes. Their genetic transcription increases upon the expansion of cartilaginous regions.
Recent studies on troponin I hypothesize that this protein exerts its anti-proliferative effect on endothelial cells via interactions with a bFGF receptor. Related studies on other anti-angiogenic factors are ongoing; however, the general mechanism of action remains unknown.
References
Angiogenesis inhibitors | Cartilage-derived angiogenesis inhibitor | [
"Chemistry",
"Biology"
] | 177 | [
"Angiogenesis",
"Biotechnology stubs",
"Biochemistry stubs",
"Angiogenesis inhibitors",
"Biochemistry"
] |
15,181,637 | https://en.wikipedia.org/wiki/MED27 | Mediator of RNA polymerase II transcription subunit 27 is an enzyme that in humans is encoded by the MED27 gene. It forms part of the Mediator complex.
The ubiquitous expression of Med27 mRNA suggests a universal requirement for Med27 in transcriptional initiation. Loss of Crsp34/Med27 decreases amacrine cell number, but increases the number of rod photoreceptor cells.
The activation of gene transcription is a multistep process that is triggered by factors that recognize transcriptional enhancer sites in DNA. These factors work with co-activators to direct transcriptional initiation by the RNA polymerase II apparatus. The protein encoded by this gene is a subunit of the CRSP (cofactor required for SP1 activation) complex, which, along with TFIID, is required for efficient activation by SP1. This protein is also a component of other multisubunit complexes, e.g. thyroid hormone receptor (TR)-associated proteins, which interact with TR and facilitate TR function on DNA templates in conjunction with initiation factors and cofactors.
See also
Mediator complex
References
Further reading
Protein families | MED27 | [
"Biology"
] | 232 | [
"Protein families",
"Protein classification"
] |
15,181,699 | https://en.wikipedia.org/wiki/TBPL1 | TATA box-binding protein-like protein 1 is a protein that in humans is encoded by the TBPL1 gene.
Function
Initiation of transcription by RNA polymerase II requires the activities of more than 70 polypeptides. The protein that coordinates these activities is transcription factor IID (TFIID), which binds to the core promoter to position the polymerase properly, serves as the scaffold for assembly of the remainder of the transcription complex, and acts as a channel for regulatory signals. TFIID is composed of the TATA-binding protein (TBP) and a group of evolutionarily conserved proteins known as TBP-associated factors or TAFs. TAFs may participate in basal transcription, serve as coactivators, function in promoter recognition or modify general transcription factors (GTFs) to facilitate complex assembly and transcription initiation. This gene encodes a protein that serves the same function as TBP and substitutes for TBP at some promoters that are not recognized by TFIID. It is essential for spermiogenesis and believed to be important in expression of developmentally regulated genes.
Interactions
TBPL1 has been shown to interact with GTF2A1.
References
Further reading
External links
Transcription factors | TBPL1 | [
"Chemistry",
"Biology"
] | 246 | [
"Induced stem cells",
"Gene expression",
"Transcription factors",
"Signal transduction"
] |
15,181,810 | https://en.wikipedia.org/wiki/MLL4 | Myeloid/lymphoid or mixed-lineage leukemia 4, also known as MLL4, is a human gene.
This gene encodes a protein which contains multiple domains including a CXXC zinc finger, three PHD zinc fingers, two FY-rich domains, and a SET (suppressor of variegation, enhancer of zeste, and trithorax) domain. The SET domain is a conserved C-terminal domain that characterizes proteins of the MLL (mixed-lineage leukemia) family. This gene is ubiquitously expressed in adult tissues. It is also amplified in solid tumor cell lines, and may be involved in human cancer. Two alternatively spliced transcript variants encoding distinct isoforms have been reported for this gene, however, the full length nature of the shorter transcript is not known.
References
Further reading
External links
Transcription factors | MLL4 | [
"Chemistry",
"Biology"
] | 176 | [
"Induced stem cells",
"Gene expression",
"Transcription factors",
"Signal transduction"
] |
15,182,309 | https://en.wikipedia.org/wiki/Underpotential%20deposition | Underpotential deposition (UPD), in electrochemistry, is a phenomenon of electrodeposition of a species (typically reduction of a metal cation to a solid metal) at a potential less negative than the equilibrium (Nernst) potential for the reduction of this metal. The equilibrium potential for the reduction of a metal in this context is the potential at which it will deposit onto itself. Underpotential deposition can then be understood to be when a metal can deposit onto another material more easily than it can deposit onto itself.
Interpretation
The occurrence of underpotential deposition is often interpreted as a result of a strong interaction between the electrodepositing metal M and the substrate S (of which the electrode is built). The M-S interaction needs to be energetically favoured over the M-M interaction in the crystal lattice of the pure metal M. This mechanism is deduced from the observation that UPD typically occurs only up to a monolayer of M (sometimes up to two monolayers). The electrodeposition of a metal on a substrate of the same metal occurs at an equilibrium potential, thus defining the reference point for the underpotential deposition. Underpotential deposition is much sharper on monocrystals than on polycrystalline materials.
References
Electrochemistry
Electrochemical potentials | Underpotential deposition | [
"Chemistry"
] | 267 | [
"Electrochemical potentials",
"Electrochemistry",
"Physical chemistry stubs",
"Electrochemistry stubs"
] |
15,182,511 | https://en.wikipedia.org/wiki/ZNF274 | Zinc finger protein 274 is a protein that in humans is encoded by the ZNF274 gene.
This gene encodes a zinc finger protein containing five C2H2-type zinc finger domains, one or two Kruppel-associated box A (KRAB A) domains, and a leucine-rich domain. The encoded protein has been suggested to be a transcriptional repressor. It localizes predominantly to the nucleolus. Alternatively spliced transcript variants encoding different isoforms exist. These variants utilize alternative polyadenylation signals.
References
Further reading
External links
Transcription factors | ZNF274 | [
"Chemistry",
"Biology"
] | 124 | [
"Induced stem cells",
"Gene expression",
"Transcription factors",
"Signal transduction"
] |
15,182,944 | https://en.wikipedia.org/wiki/KLF8 | Krueppel-like factor 8 is a protein that in humans is encoded by the KLF8 gene.
KLF8 belongs to the family of KLF proteins. KLF8, along with KLF3, is activated by KLF1, while KLF3 represses KLF8.
Interactions
KLF8 has been shown to interact with CTBP2.
References
Further reading
External links
Transcription factors | KLF8 | [
"Chemistry",
"Biology"
] | 83 | [
"Induced stem cells",
"Gene expression",
"Transcription factors",
"Signal transduction"
] |
15,183,349 | https://en.wikipedia.org/wiki/Twist-related%20protein%202 | Twist-related protein 2 is a protein that in humans is encoded by the TWIST2 gene. The protein encoded by this gene is a basic helix-loop-helix (bHLH) transcription factor and shares similarity with another bHLH transcription factor, TWIST1. bHLH transcription factors have been implicated in cell lineage determination and differentiation. It is thought that during osteoblast development, this protein may inhibit osteoblast maturation and maintain cells in a preosteoblast phenotype.
Interactions
TWIST2 has been shown to interact with SREBF1.
Clinical significance
Mutations in the TWIST2 gene that alter DNA-binding activity through both dominant-negative and gain-of-function effects are associated with ablepharon macrostomia syndrome and Barber–Say syndrome.
References
Further reading
External links
Transcription factors | Twist-related protein 2 | [
"Chemistry",
"Biology"
] | 171 | [
"Protein stubs",
"Gene expression",
"Signal transduction",
"Biochemistry stubs",
"Induced stem cells",
"Transcription factors"
] |
15,184,100 | https://en.wikipedia.org/wiki/Neber%20rearrangement | The Neber rearrangement is an organic reaction in which a ketoxime is converted into an alpha-aminoketone via a rearrangement reaction.
The oxime is first converted to an O-sulfonate, for example a tosylate, by reaction with tosyl chloride. Added base forms a carbanion, which displaces the tosylate group in a nucleophilic displacement to form an azirine; added water subsequently hydrolyses it to the aminoketone.
The Beckmann rearrangement is a side reaction.
References
Rearrangement reactions
Name reactions | Neber rearrangement | [
"Chemistry"
] | 125 | [
"Name reactions",
"Chemical reaction stubs",
"Rearrangement reactions",
"Organic reactions"
] |
15,185,443 | https://en.wikipedia.org/wiki/Korn%27s%20inequality | In mathematical analysis, Korn's inequality is an inequality concerning the gradient of a vector field that generalizes the following classical theorem: if the gradient of a vector field is skew-symmetric at every point, then the gradient must be equal to a constant skew-symmetric matrix. Korn's theorem is a quantitative version of this statement, which intuitively says that if the gradient of a vector field is on average not far from the space of skew-symmetric matrices, then the gradient must not be far from a particular skew-symmetric matrix. The statement that Korn's inequality generalizes thus arises as a special case of rigidity.
In (linear) elasticity theory, the symmetric part of the gradient is a measure of the strain that an elastic body experiences when it is deformed by a given vector-valued function. The inequality is therefore an important tool as an a priori estimate in linear elasticity theory.
Statement of the inequality
Let Ω be an open, connected domain in n-dimensional Euclidean space ℝⁿ, n ≥ 2. Let H¹(Ω) be the Sobolev space of all vector fields v = (v_1, …, v_n) on Ω that, along with their (first) weak derivatives, lie in the Lebesgue space L²(Ω). Denoting the partial derivative with respect to the ith component by ∂_i, the norm in H¹(Ω) is given by
‖v‖_{H¹(Ω)} = ( ∫_Ω Σ_i |v_i(x)|² dx + ∫_Ω Σ_{i,j} |∂_j v_i(x)|² dx )^(1/2).
Then there is a (minimal) constant C ≥ 0, known as the Korn constant of Ω, such that, for all v ∈ H¹(Ω),
(1)   ‖v‖²_{H¹(Ω)} ≤ C ∫_Ω Σ_{i,j} ( |v_i(x)|² + |(e v)_{ij}(x)|² ) dx,
where e v denotes the symmetrized gradient given by
(e v)_{ij} = ½ (∂_i v_j + ∂_j v_i).
Inequality (1) is known as Korn's inequality.
See also
Hardy inequality
Poincaré inequality
References
External links
Inequalities
Sobolev spaces
Solid mechanics | Korn's inequality | [
"Physics",
"Mathematics"
] | 331 | [
"Solid mechanics",
"Mathematical theorems",
"Binary relations",
"Mathematical relations",
"Mechanics",
"Inequalities (mathematics)",
"Mathematical problems"
] |
15,187,614 | https://en.wikipedia.org/wiki/Procrustes%20transformation | A Procrustes transformation is a geometric transformation that involves only translation, rotation, uniform scaling, or a combination of these transformations. Hence, it may change the size, position, and orientation of a geometric object, but not its shape.
The Procrustes transformation is named after the mythical Greek robber Procrustes who made his victims fit his bed either by stretching their limbs or cutting them off.
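A minimal numerical sketch of a Procrustes transformation, a uniform scaling and rotation followed by a translation applied to planar points, is given below; the scale, angle and offset are arbitrary example values.

```python
import numpy as np

# Apply a Procrustes transformation (uniform scale * rotation + translation)
# to a set of 2-D points.  Scale, angle and translation are arbitrary examples.
points = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])  # a unit square

s = 2.0                                   # uniform scaling factor
theta = np.deg2rad(30.0)                  # rotation angle
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
t = np.array([3.0, -1.0])                 # translation vector

transformed = s * points @ R.T + t        # size, position and orientation change

# Shape is preserved: all pairwise distances scale by the same factor s.
d_before = np.linalg.norm(points[1] - points[0])
d_after = np.linalg.norm(transformed[1] - transformed[0])
print(d_after / d_before)                 # prints 2.0, the common scale factor
```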
See also
Procrustes analysis
Orthogonal Procrustes problem
Singular value decomposition
Affine transformation, which also allows for shear
References
External links
Procrustes transformation
Euclidean symmetries | Procrustes transformation | [
"Physics",
"Mathematics"
] | 122 | [
"Functions and mappings",
"Euclidean symmetries",
"Mathematical objects",
"Mathematical relations",
"Geometry",
"Geometry stubs",
"Symmetry"
] |
16,921,412 | https://en.wikipedia.org/wiki/Leaching%20%28chemistry%29 | Leaching is the process of a solute becoming detached or extracted from its carrier substance by way of a solvent.
Leaching is a naturally occurring process which scientists have adapted for a variety of applications with a variety of methods. Specific extraction methods depend on the soluble characteristics relative to the sorbent material, such as concentration, distribution, nature, and size. Leaching occurs naturally with plant substances (inorganic and organic), with solute leaching in soil, and in the decomposition of organic materials. Leaching can also be applied deliberately to enhance water quality and contaminant removal, as well as for disposal of hazardous waste products such as fly ash, or rare earth elements (REEs). Understanding leaching characteristics is important in preventing or encouraging the leaching process and preparing for it in the case where it is inevitable.
In an ideal leaching equilibrium stage, all the solute is dissolved by the solvent, leaving the carrier of the solute unchanged. The process of leaching, however, is not always ideal; it can be quite complex to understand and replicate, and different methodologies will often produce different results.
Leaching processes
There are many types of leaching scenarios; therefore, the extent of this topic is vast. In general, however, the three substances can be described as:
a carrier, substance A;
a solute, substance B;
and a solvent, substance C.
Substances A and B are somewhat homogeneous in a system prior to the introduction of substance C. At the beginning of the leaching process, substance C will work at dissolving the surficial substance B at a fairly high rate. The rate of dissolution will decrease substantially once the solvent needs to penetrate through the pores of substance A in order to continue targeting substance B. This penetration can often lead to dissolution of substance A, or to the extraction of more than one solute, both undesirable if selective leaching is intended. The physicochemical and biological properties of the carrier and solute should be considered when observing the leaching process, and certain properties may be more important depending on the material, the solvent, and their availability. These specific properties can include, but are not limited to:
Particle size
Solvent
Temperature
Agitation
Surface area
Homogeneity of the carrier and solute
Microorganism activity
Mineralogy
Intermediate products
Crystal structure
The general process is typically broken up and summarized into three parts:
Dissolution of surficial solute by solvent
Diffusion of inner-solute through the pores of the carrier to reach the solvent
Transfer of dissolved solute out of the system
Leaching processes for biological substances
Biological substances can experience leaching themselves, as well as be used for leaching as part of the solvent substance to recover heavy metals. Many plants experience leaching of phenolics, carbohydrates, and amino acids, and can experience as much as 30% mass loss from leaching, just from sources of water such as rain, dew, mist, and fog. These sources of water would be considered the solvent in the leaching process and can also lead to the leaching of organic nutrients from plants such as free sugars, pectic substances, and sugar alcohols. This can in turn lead to more diversity in plant species that may experience a more direct access to water. This type of leaching can often lead to the removal of an undesirable component from the solid by water; this process is called washing. A major concern for leaching from plants is that pesticides may be leached and carried away in stormwater runoff; this is a concern not only for plant health, but it is also important to control because pesticides can be toxic to human and animal health.
Bioleaching is a term that describes the removal of metal cations from insoluble ores by biological oxidation and complexation processes. This process is done in most part to extract copper, cobalt, nickel, zinc, and uranium from insoluble sulfides or oxides. Bioleaching processes can also be used in the re-use of fly ash by recovering aluminum using sulfuric acid.
Leaching processes for fly ash
Coal fly ash is a product that experiences heavy amounts of leaching during disposal. Though the re-use of fly ash in other materials such as concrete and bricks is encouraged, much of it in the United States is still disposed of in holding ponds, lagoons, landfills, and slag heaps. These disposal sites all contain water, where washing effects can cause leaching of many different major elements, depending on the type of fly ash and the location where it originated. The leaching of fly ash is only concerning if the fly ash has not been disposed of properly, such as in the case of the Kingston Fossil Plant in Roane County, Tennessee. The Tennessee Valley Authority Kingston Fossil Plant structural failure led to massive destruction throughout the area and serious levels of contamination downstream in both the Emory River and the Clinch River.
Leaching processes in soil
Leaching in soil is highly dependent on the characteristics of the soil, which makes modeling efforts difficult. Most leaching comes from infiltration of water, a washing effect much like that described for the leaching of biological substances. The leaching is typically described by solute transport models, such as Darcy's law, mass flow expressions, and diffusion-dispersion relationships. Leaching is controlled largely by the hydraulic conductivity of the soil, which depends on particle size and on the relative density to which the soil has been consolidated under stress. Diffusion is controlled by other factors such as pore size and soil skeleton, tortuosity of the flow path, and distribution of the solvent (water) and solutes.
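Since the paragraph above points to Darcy's law as the basic description of water movement, a one-step illustration is sketched below: the Darcy flux q = K·Δh/L for an assumed hydraulic conductivity and head gradient. All numbers are illustrative, not values from the article.

```python
# Darcy's law sketch for water percolating through a soil column:
#   q = K * (head drop / column length)
# Hydraulic conductivity and head values below are illustrative assumptions.
K = 1e-5          # m/s, saturated hydraulic conductivity of the soil
head_drop = 0.5   # m, hydraulic head difference across the column
length = 1.0      # m, length of the soil column

q = K * head_drop / length          # m/s, magnitude of the Darcy flux
seepage_per_day = q * 86400         # metres of water per day per unit area
print(f"Darcy flux: {q:.2e} m/s  (about {seepage_per_day:.2f} m/day)")
```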
Leaching for mineral extraction
Leaching can sometimes be used to extract valuable materials from wastewater products or raw materials. In the field of mineralogy, acid leaching is commonly used to extract metals such as vanadium, cobalt, nickel, manganese and iron from raw or reused materials. In recent years, more attention has been given to metal leaching to recover precious metals from waste materials, for example the extraction of valuable metals from wastewater.
Leaching mechanisms
Due to the assortment of leaching processes there are many variations in the data to be collected through laboratory methods and modeling, making it hard to interpret the data itself. Not only is the specified leaching process important, but also the focus of the experimentation itself. For instance, the focus could be directed toward mechanisms causing leaching, mineralogy as a group or individually, or the solvent that causes leaching. Most tests are done by evaluating mass loss due to a reagent, heat, or simply washing with water. A summary of various leaching processes and their respective laboratory tests can be viewed in the following table:
Environmentally friendly leaching
Some recent work has been done to see if organic acids can be used to leach lithium and cobalt from spent batteries, with some success. Experiments performed with varying temperatures and concentrations of malic acid show that the optimal conditions are 2.0 mol/L of organic acid at a temperature of 90 °C. The reaction had an overall efficiency exceeding 90% with no harmful byproducts.
4 LiCoO2(solid) + 12 C4H6O5(liquid) → 4 LiC4H5O5(liquid) + 4 Co(C4H5O5)2(liquid) + 6 H2O(liquid) + O2(gas)
The same analysis with citric acid showed similar results with an optimal temperature and concentration of 90 °C and 1.5 molar solution of citric acid.
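As a small arithmetic check on the malic acid reaction written above, the sketch below estimates how much acid the 3:1 molar stoichiometry (12 mol of acid per 4 mol of LiCoO2) implies per batch of cathode material; the molar masses are standard approximate values and the batch size is an arbitrary example.

```python
# Stoichiometric estimate from the malic acid leaching reaction above:
# 4 LiCoO2 + 12 C4H6O5 -> ...   i.e. 3 mol of malic acid per mol of LiCoO2.
M_LiCoO2 = 97.87      # g/mol, approximate molar mass of LiCoO2
M_malic = 134.09      # g/mol, approximate molar mass of malic acid (C4H6O5)

mass_LiCoO2 = 10.0    # g of cathode material to leach (arbitrary example)

mol_LiCoO2 = mass_LiCoO2 / M_LiCoO2
mol_acid = 3 * mol_LiCoO2                 # 12/4 = 3 from the balanced equation
mass_acid = mol_acid * M_malic
print(f"{mass_acid:.1f} g of malic acid per {mass_LiCoO2:.0f} g of LiCoO2")
```

In practice an excess of acid would typically be used; the calculation only shows the minimum implied by the stoichiometry.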
See also
Extraction
Leachate
Parboiling
Surfactant leaching
Sorption
Weathering
References
Industrial processes
Solid-solid separation | Leaching (chemistry) | [
"Chemistry"
] | 1,572 | [
"Solid-solid separation",
"Separation processes by phases"
] |
16,921,425 | https://en.wikipedia.org/wiki/Leaching%20%28metallurgy%29 | Leaching is a process widely used in extractive metallurgy where ore is treated with chemicals to convert the valuable metals within the ore, into soluble salts while the impurity remains insoluble. These can then be washed out and processed to give the pure metal; the materials left over are commonly known as tailings.
Compared to pyrometallurgy, leaching is easier to perform, requires less energy and is potentially less harmful as no gaseous pollution occurs. Drawbacks of leaching include its lower efficiency and the often significant quantities of waste effluent and tailings produced, which are usually either highly acidic or alkali as well as toxic (e.g. bauxite tailings).
There are four types of leaching:
Cyanide leaching (e.g. gold ore)
Ammonia leaching (e.g. crushed ore)
Alkali leaching (e.g. bauxite ore)
Acid leaching (e.g. sulfide ore)
Leaching is also notable in the extraction of rare earth elements, which consists of lanthanides, yttrium and scandium.
Chemistry
Leaching is done in long pressure vessels which are cylindrical (horizontal or vertical) or of horizontal tube form, known as autoclaves. A good example of the autoclave leach process can also be found in the metallurgy of zinc. It is best described by the following chemical reaction:
2 ZnS + 2 H2SO4 + O2 → 2 ZnSO4 + 2 S + 2 H2O
This reaction proceeds at temperatures above the boiling point of water, thus creating a vapour pressure inside the vessel. Oxygen is injected under pressure, making the total pressure in the autoclave more than 0.6 MPa and the temperature 473–523 K.
The leaching of precious metals such as gold can be carried out with cyanide or ozone under mild conditions.
Historical uses
Origins
Heap leaching dates back to the second century BC in China, where iron was combined with copper sulfate. By the time of the Northern Song dynasty, a copper alloy was able to be recovered by leaching.
Leaching can also be traced back to alchemy. Early examples of leaching performed by alchemists resembled mixing iron with copper sulfate, yielding a layer of metallic copper. In the eighth century, Jabir Ibn Hayyan, a Persian alchemist, discovered a substance he coined "aqua regia". Aqua regia, a combination of hydrochloric acid and nitric acid, was found to be effective in dissolving gold, which was previously thought to be insoluble.
Pre-World War II
In the sixteenth century, heap leaching became commonly used to extract copper and saltpeter from organic matter. Primarily used in Germany and Spain, pyrite would be brought to the surface and left out in the open. The pyrite would be set outside for months at a time, where rain and air exposure would lead to chemical weathering. A solution containing copper sulfate would be collected in a basin, then precipitated in a process called cementation, resulting in metallic copper. Heap leaching, in this natural chemical-free form, was further developed to obtain different, more economically viable, types of ore. This was done by incorporating chemical lixiviation, which applies more chemical manipulation and technique to heap leaching.
From 1767 to 1867, the production of potash in Quebec became an important industry supplying France's glass and soap manufacturers. Potash was most frequently made from the ash remains of wood-burning stoves and fireplaces, which were agitated with water and filtered. Once evaporated, the remains would be potash. About 400 tons of hardwood had to be burned to yield one ton of potash.
In 1858 Adolf Von Patera, a metallurgist in Austria, utilized lixiviation to separate soluble and insoluble compounds from silver in an aqueous solution. Von Patera's process, though successful, did not see much use, due partly to the price of hyposulphite. Additionally, with Patera's process, if the sodium hyposulphite failed to dissolve perfectly, silver would often be caught in the extra solution and not properly extracted.
The technique of Patera's lixiviation was further developed by the American E. H. Russell around 1884, creating the "Russell Process". Prior leaching processes often could not concentrate ores with too much base metal, a problem the Russell Process was able to solve, thus making it more lucrative.
In 1887, when the cyanidation process was patented in England, it began to phase out the existing Russell Process. Cyanidation was much more efficient and had a recovery rate of up to 90%.
Leading up to World War I, many new ideas for leaching processes were experimented with. These included using ammonia solutions for copper sulfides, and nitric acid for leaching sulfide ores. Most of these ideas faded into obscurity due to the high cost of the leaching agents required.
Modern leaching
In the 1940s, as a result of the Manhattan Project, the United States government needed ready access to uranium. Many different techniques in leaching were quickly employed at a large scale. Both synthetic resins and organic solvents were used early on to extract uranium. Ultimately, the use of organic solvents was less tedious compared to ion exchange through synthetic resins, and further production of uranium and other rare earth metals moved towards solvent extraction.
In the 1950s, pressure hydrometallurgy was developed for the leaching of multiple different materials, such as sulfide concentrates and laterites. Particularly at the Mines Branch in Ottawa (now known as CANMET), it was demonstrated that pyrrhotite-pentlandite concentrate could be treated in autoclaves, with the resulting nickel in solution while iron oxide and sulfur remained in the residue. This process was later used in other nickel recovery operations across the globe.
In the 1960s, heap and in situ leaching became widely practiced, particularly for copper. In situ leaching was later used for the extraction of uranium as well.
Pressure leaching was further refined in the 1970s and 80s.
See also
Heap leaching
In-situ leaching
Tank leaching
Uranium mining
References
Metallurgical processes | Leaching (metallurgy) | [
"Chemistry",
"Materials_science"
] | 1,281 | [
"Metallurgical processes",
"Metallurgy"
] |
11,503,412 | https://en.wikipedia.org/wiki/Combinatorial%20commutative%20algebra | Combinatorial commutative algebra is a relatively new, rapidly developing mathematical discipline. As the name implies, it lies at the intersection of two more established fields, commutative algebra and combinatorics, and frequently uses methods of one to address problems arising in the other. Less obviously, polyhedral geometry plays a significant role.
One of the milestones in the development of the subject was Richard Stanley's 1975 proof of the Upper Bound Conjecture for simplicial spheres, which was based on earlier work of Melvin Hochster and Gerald Reisner. While the problem can be formulated purely in geometric terms, the methods of the proof drew on commutative algebra techniques.
A signature theorem in combinatorial commutative algebra is the characterization of h-vectors of simplicial polytopes conjectured in 1970 by Peter McMullen. Known as the g-theorem, it was proved in 1979 by Stanley (necessity of the conditions, algebraic argument) and by Louis Billera and Carl W. Lee (sufficiency, combinatorial and geometric construction). A major open question was the extension of this characterization from simplicial polytopes to simplicial spheres, the g-conjecture, which was resolved in 2018 by Karim Adiprasito.
Important notions of combinatorial commutative algebra
Square-free monomial ideal in a polynomial ring and Stanley–Reisner ring of a simplicial complex.
Cohen–Macaulay rings.
Monomial ring, closely related to an affine semigroup ring and to the coordinate ring of an affine toric variety.
Algebra with a straightening law. There are several versions of those, including Hodge algebras of Corrado de Concini, David Eisenbud, and Claudio Procesi.
See also
Algebraic combinatorics
Polyhedral combinatorics
Zero-divisor graph
References
Commutative algebra
Algebraic geometry
Algebraic combinatorics | Combinatorial commutative algebra | [
"Mathematics"
] | 460 | [
"Combinatorics",
"Fields of abstract algebra",
"Algebraic geometry",
"Algebraic combinatorics",
"Commutative algebra"
] |
11,503,563 | https://en.wikipedia.org/wiki/H-vector | In algebraic combinatorics, the h-vector of a simplicial polytope is a fundamental invariant of the polytope which encodes the number of faces of different dimensions and allows one to express the Dehn–Sommerville equations in a particularly simple form. A characterization of the set of h-vectors of simplicial polytopes was conjectured by Peter McMullen and proved by Lou Billera and Carl W. Lee and Richard Stanley (g-theorem). The definition of h-vector applies to arbitrary abstract simplicial complexes. The g-conjecture stated that for simplicial spheres, all possible h-vectors occur already among the h-vectors of the boundaries of convex simplicial polytopes. It was proven in December 2018 by Karim Adiprasito.
Stanley introduced a generalization of the h-vector, the toric h-vector, which is defined for an arbitrary ranked poset, and proved that for the class of Eulerian posets, the Dehn–Sommerville equations continue to hold. A different, more combinatorial, generalization of the h-vector that has been extensively studied is the flag h-vector of a ranked poset. For Eulerian posets, it can be more concisely expressed by means of a noncommutative polynomial in two variables called the cd-index.
Definition
Let Δ be an abstract simplicial complex of dimension d − 1 with fi i-dimensional faces and f−1 = 1. These numbers are arranged into the f-vector of Δ, f(Δ) = (f−1, f0, …, fd−1).
An important special case occurs when Δ is the boundary of a d-dimensional convex polytope.
For k = 0, 1, …, d, let
hk = Σi=0..k (−1)^(k−i) C(d−i, k−i) fi−1,
where C(a, b) denotes a binomial coefficient. The tuple
h(Δ) = (h0, h1, …, hd)
is called the h-vector of Δ. In particular, h0 = 1, h1 = f0 − d, and hd = (−1)^(d−1) χ̃(Δ), where χ̃(Δ) = χ(Δ) − 1 is the reduced Euler characteristic of Δ. The f-vector and the h-vector uniquely determine each other through the linear relation
Σi=0..d fi−1 (t − 1)^(d−i) = Σk=0..d hk t^(d−k),
from which it follows that, for 0 ≤ k ≤ d,
fk−1 = Σi=0..k C(d−i, k−i) hi.
In particular, fd−1 = h0 + h1 + ⋯ + hd. Let R = k[Δ] be the Stanley–Reisner ring of Δ. Then its Hilbert–Poincaré series can be expressed as
Σi=0..d fi−1 t^i / (1 − t)^i = (h0 + h1 t + ⋯ + hd t^d) / (1 − t)^d.
This motivates the definition of the h-vector of a finitely generated positively graded algebra of Krull dimension d as the numerator of its Hilbert–Poincaré series written with the denominator (1 − t)d.
The h-vector is closely related to the h*-vector for a convex lattice polytope, see Ehrhart polynomial.
Recurrence relation
The h-vector can be computed from the f-vector by a simple subtraction recurrence, which for small examples can be carried out quickly by hand by recursively filling the entries of an array similar to Pascal's triangle. For example, consider the boundary complex of an octahedron; its f-vector is (1, 6, 12, 8). To compute its h-vector, construct a triangular array by first writing 1s down the left edge and the f-vector down the right edge, appending a final 0 just to make the array triangular. Then, starting from the top, fill each remaining entry by subtracting its upper-left neighbor from its upper-right neighbor. In this way, we generate the following array:
 1
 1   6
 1   5   12
 1   4    7    8
 1   3    3    1    0
The entries of the bottom row (apart from the final 0) are the entries of the h-vector. Hence, the h-vector of the boundary complex of the octahedron is (1, 3, 3, 1).
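The triangular-array computation just described is easy to automate. The sketch below implements it in Python; the function name and the octahedron test case are chosen here purely for illustration.

```python
def h_vector(f):
    """Compute the h-vector from an f-vector f = (f_-1, f_0, ..., f_(d-1))
    using the triangular difference array described above."""
    right = list(f) + [0]          # append a final 0 to make the array triangular
    row = [right[0]]               # apex of the triangle
    for k in range(1, len(right)):
        new_row = [1]                                   # 1s down the left edge
        for j in range(1, k):
            new_row.append(row[j] - row[j - 1])         # upper-right minus upper-left
        new_row.append(right[k])                        # f-vector down the right edge
        row = new_row
    return row[:-1]                # drop the trailing 0

print(h_vector([1, 6, 12, 8]))     # boundary complex of the octahedron -> [1, 3, 3, 1]
```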
Toric h-vector
To an arbitrary graded poset P, Stanley associated a pair of polynomials f(P,x) and g(P,x). Their definition is recursive in terms of the polynomials associated to intervals [0,y] for all y ∈ P, y ≠ 1, viewed as ranked posets of lower rank (0 and 1 denote the minimal and the maximal elements of P). The coefficients of f(P,x) form the toric h-vector of P. When P is an Eulerian poset of rank d + 1 such that P − 1 is simplicial, the toric h-vector coincides with the ordinary h-vector constructed using the numbers fi of elements of P − 1 of given rank i + 1. In this case the toric h-vector of P satisfies the Dehn–Sommerville equations hk = hd−k.
The reason for the adjective "toric" is a connection of the toric h-vector with the intersection cohomology of a certain projective toric variety X whenever P is the boundary complex of a rational convex polytope. Namely, the components are the dimensions of the even intersection cohomology groups of X: hk = dimℚ IH^(2k)(X, ℚ)
(the odd intersection cohomology groups of X are all zero). The Dehn–Sommerville equations are a manifestation of the Poincaré duality in the intersection cohomology of X. Kalle Karu proved that the toric h-vector of a polytope is unimodal, regardless of whether the polytope is rational or not.
Flag h-vector and cd-index
A different generalization of the notions of f-vector and h-vector of a convex polytope has been extensively studied. Let P be a finite graded poset of rank n, so that each maximal chain in P has length n. For any S, a subset of {1, …, n}, let αP(S) denote the number of chains in P whose ranks constitute the set S. More formally, let
rk: P → {0, 1, …, n}
be the rank function of P and let PS be the S-rank selected subposet, which consists of the elements from P whose rank is in S:
PS = {x ∈ P : rk(x) ∈ S}.
Then αP(S) is the number of the maximal chains in PS and the function
S ↦ αP(S)
is called the flag f-vector of P. The function
S ↦ βP(S),  where βP(S) = ΣT⊆S (−1)^(|S∖T|) αP(T),
is called the flag h-vector of P. By the inclusion–exclusion principle,
αP(S) = ΣT⊆S βP(T).
The flag f- and h-vectors of P refine the ordinary f- and h-vectors of its order complex Δ(P):
fi−1(Δ(P)) = Σ|S|=i αP(S)  and  hi(Δ(P)) = Σ|S|=i βP(S).
The flag h-vector of P can be displayed via a polynomial in noncommutative variables a and b. For any subset S of {1,…,n}, define the corresponding monomial in a and b,
uS = u1 ⋯ un,  where ui = a if i ∉ S and ui = b if i ∈ S.
Then the noncommutative generating function for the flag h-vector of P is defined by
ΨP(a, b) = ΣS⊆{1,…,n} βP(S) uS.
From the relation between αP(S) and βP(S), the noncommutative generating function for the flag f-vector of P is
ΨP(a + b, b) = ΣS⊆{1,…,n} αP(S) uS.
Margaret Bayer and Louis Billera determined the most general linear relations that hold between the components of the flag h-vector of an Eulerian poset P.
Fine noted an elegant way to state these relations: there exists a noncommutative polynomial ΦP(c,d), called the cd-index of P, such that
ΨP(a, b) = ΦP(a + b, ab + ba).
Stanley proved that all coefficients of the cd-index of the boundary complex of a convex polytope are non-negative. He conjectured that this positivity phenomenon persists for a more general class of Eulerian posets that Stanley calls Gorenstein* complexes and which includes simplicial spheres and complete fans. This conjecture was proved by Kalle Karu. The combinatorial meaning of these non-negative coefficients (an answer to the question "what do they count?") remains unclear.
References
Further reading
Algebraic combinatorics
Polyhedral combinatorics | H-vector | [
"Mathematics"
] | 1,487 | [
"Polyhedral combinatorics",
"Algebraic combinatorics",
"Fields of abstract algebra",
"Combinatorics"
] |
11,506,732 | https://en.wikipedia.org/wiki/Journal%20of%20Symbolic%20Logic | The Journal of Symbolic Logic is a peer-reviewed mathematics journal published quarterly by Association for Symbolic Logic. It was established in 1936 and covers mathematical logic. The journal is indexed by Mathematical Reviews, Zentralblatt MATH, and Scopus. Its 2009 MCQ was 0.28, and its 2009 impact factor was 0.631.
External links
Mathematical logic journals
Academic journals established in 1936
Multilingual journals
Quarterly journals
Association for Symbolic Logic academic journals
Logic journals
Cambridge University Press academic journals | Journal of Symbolic Logic | [
"Mathematics"
] | 99 | [
"Mathematical logic",
"Mathematical logic journals"
] |
3,319,001 | https://en.wikipedia.org/wiki/Advanced%20process%20control | In control theory, advanced process control (APC) refers to a broad range of techniques and technologies implemented within industrial process control systems. Advanced process controls are usually deployed optionally and in addition to basic process controls. Basic process controls are designed and built with the process itself, to facilitate basic operation, control and automation requirements. Advanced process controls are typically added subsequently, often over the course of many years, to address particular performance or economic improvement opportunities in the process.
Process control (basic and advanced) normally implies the process industries, which includes chemicals, petrochemicals, oil and mineral refining, food processing, pharmaceuticals, power generation, etc. These industries are characterized by continuous processes and fluid processing, as opposed to discrete parts manufacturing, such as automobile and electronics manufacturing. The term process automation is essentially synonymous with process control.
Process controls (basic as well as advanced) are implemented within the process control system, which may mean a distributed control system (DCS), programmable logic controller (PLC), and/or a supervisory control computer. DCSs and PLCs are typically industrially hardened and fault-tolerant. Supervisory control computers are often not hardened or fault-tolerant, but they bring a higher level of computational capability to the control system, to host valuable, but not critical, advanced control applications. Advanced controls may reside in either the DCS or the supervisory computer, depending on the application. Basic controls reside in the DCS and its subsystems, including PLCs.
Types of Advanced Process Control
Following is a list of well known types of advanced process control:
Advanced regulatory control (ARC) refers to several proven advanced control techniques, such as override or adaptive gain (but in all cases, "regulating or feedback"). ARC is also a catch-all term used to refer to any customized or non-simple technique that does not fall into any other category. ARCs are typically implemented using function blocks or custom programming capabilities at the DCS level. In some cases, ARCs reside at the supervisory control computer level.
Advanced process control (APC) refers to several proven advanced control techniques, such as feedforward, decoupling, and inferential control. APC can also include Model Predictive Control, described below. APC is typically implemented using function blocks or custom programming capabilities at the DCS level. In some cases, APC resides at the supervisory control computer level.
Multivariable Model predictive control (MPC) is a popular technology, usually deployed on a supervisory control computer, that identifies important independent and dependent process variables and the dynamic relationships (models) between them, and often uses matrix-math based control and optimization algorithms to control multiple variables simultaneously. One requirement of MPC is that the models must be linear across the operating range of the controller. MPC has been a prominent part of APC ever since supervisory computers first brought the necessary computational capabilities to control systems in the 1980s. A minimal sketch of the receding-horizon idea appears after this list.
Nonlinear MPC: Similar to Multivariable MPC in that it incorporates dynamic models and matrix-math based control; however, it does not have the requirement for model linearity. Nonlinear MPC is capable of accommodating processes with models that have varying process gains and dynamics (i.e. dead-times and lag times).
Inferential Measurements: The concept behind inferentials is to calculate a stream property from readily available process measurements, such as temperature and pressure, that otherwise might be too costly or time-consuming to measure directly in real time. The accuracy of the inference can be periodically cross-checked with laboratory analysis. Inferentials can be utilized in place of actual online analyzers, whether for operator information, cascaded to base-layer process controllers, or multivariable controller CVs.
Sequential control refers to discontinuous time- and event-based automation sequences that occur within continuous processes. These may be implemented as a collection of time and logic function blocks, a custom algorithm, or using a formalized Sequential function chart methodology.
Intelligent control is a class of control techniques that use various artificial intelligence computing approaches like neural networks, Bayesian probability, fuzzy logic, machine learning, evolutionary computation and genetic algorithms.
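To make the receding-horizon idea referenced above concrete, the following toy sketch (not from the original text; the model coefficients, horizon, weights, and variable names are made up for illustration) controls a single first-order process with an unconstrained linear MPC in Python. Real MPC packages add multivariable models, constraints, and disturbance handling.

```python
import numpy as np

# Toy unconstrained linear MPC for a first-order process x[k+1] = a*x[k] + b*u[k].
a, b = 0.9, 0.1          # assumed process model (illustrative values)
horizon = 20             # prediction horizon in steps
setpoint = 1.0
move_weight = 0.05       # penalty on control effort

def mpc_control(x0):
    """Choose the control sequence minimizing sum (x_k - setpoint)^2 + w*u_k^2,
    then apply only the first move (receding horizon)."""
    # Stacked prediction over the horizon: x = F*x0 + G*u
    F = np.array([a ** (k + 1) for k in range(horizon)])
    G = np.zeros((horizon, horizon))
    for k in range(horizon):
        for j in range(k + 1):
            G[k, j] = a ** (k - j) * b
    # Closed-form least-squares solution of the quadratic objective
    H = G.T @ G + move_weight * np.eye(horizon)
    f = G.T @ (setpoint - F * x0)
    u = np.linalg.solve(H, f)
    return u[0]

x = 0.0
for step in range(30):
    u = mpc_control(x)
    x = a * x + b * u    # "plant" response (same as the model in this toy)
    print(f"step {step:2d}: u = {u:6.3f}, x = {x:6.3f}")
# Note: the small effort penalty leaves a slight steady-state offset,
# which real controllers remove with integrating or bias-update schemes.
```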
Related Technologies
The following technologies are related to APC and in some contexts can be considered part of APC, but are generally separate technologies having their own (or in need of their own) Wiki articles.
Statistical process control (SPC), despite its name, is much more common in discrete parts manufacturing and batch process control than in continuous process control. In SPC, “process” refers to the work and quality control process, rather than continuous process control.
Batch process control (see ANSI/ISA-88) is employed in non-continuous batch processes, such as many pharmaceuticals, chemicals, and foods.
Simulation-based optimization incorporates dynamic or steady-state computer-based process simulation models to determine more optimal operating targets in real-time, i.e. on a periodic basis, ranging from hourly to daily. This is sometimes considered a part of APC, but in practice it is still an emerging technology and is more often part of MPO.
Manufacturing planning and optimization (MPO) refers to ongoing business activity to arrive at optimal operating targets that are then implemented in the operating organization, either manually or in some cases automatically communicated to the process control system.
Safety instrumented system refers to a system that is independent of the process control system, both physically and administratively, whose purpose is to assure basic safety of the process.
APC Business and Professionals
Those responsible for the design, implementation and maintenance of APC applications are often referred to as APC Engineers or Control Application Engineers. Usually their education is dependent upon the field of specialization. For example, in the process industries many APC Engineers have a chemical engineering background, combining process control and chemical processing expertise.
Most large operating facilities, such as oil refineries, employ a number of control system specialists and professionals, ranging from field instrumentation, regulatory control system (DCS and PLC), advanced process control, and control system network and security. Depending on facility size and circumstances, these personnel may have responsibilities across multiple areas, or be dedicated to each area. There are also many process control service companies that can be hired for support and services in each area.
Artificial Intelligence and Process Control
The use of Artificial Intelligence, Machine Learning and Deep Learning techniques in Process Control is also considered an advanced process control approach in which intelligence is used to further optimize operational parameters.
Operations and logic in process control systems in oil and gas have for decades been based only on physics equations that dictate parameters, along with operators' interactions based on experience and operating manuals. Artificial Intelligence and Machine Learning algorithms can examine dynamic operational conditions, analyse them, and suggest optimized parameters that can either directly tune logic parameters or give suggestions to operators. Interventions by such intelligent models lead to optimization in cost, production, and safety.
Terminology
APC: Advanced process control, including feedforward, decoupling, inferentials, and custom algorithms; usually implies DCS-based.
ARC: Advanced regulatory control, including adaptive gain, override, logic, fuzzy logic, sequence control, device control, and custom algorithms; usually implies DCS-based.
Base-Layer: Includes DCS, SIS, field devices, and other DCS subsystems, such as analyzers, equipment health systems, and PLCs.
BPCS: Basic process control system (see "base-layer")
DCS: Distributed control system, often synonymous with BPCS
MPO: Manufacturing planning optimization
MPC: Multivariable Model predictive control
SIS: Safety instrumented system
SME: Subject matter expert
References
External links
Article about Advanced Process Control.
Control theory
Cybernetics
Digital signal processing | Advanced process control | [
"Mathematics"
] | 1,581 | [
"Applied mathematics",
"Control theory",
"Dynamical systems"
] |
3,320,310 | https://en.wikipedia.org/wiki/Ferrosilicon | Ferrosilicon is an alloy of iron and silicon with a typical silicon content by weight of 15–90%. It contains a high proportion of iron silicides.
Production and reactions
Ferrosilicon is produced by reduction of silica or sand with coke in the presence of iron. Typical sources of iron are scrap iron or millscale. Ferrosilicons with silicon content up to about 15% are made in blast furnaces lined with acid fire bricks.
Ferrosilicons with higher silicon content are made in electric arc furnaces. The usual formulations on the market are ferrosilicons with 15%, 45%, 75%, and 90% silicon. The remainder is iron, with about 2% consisting of other elements like aluminium and calcium. An overabundance of silica is used to prevent formation of silicon carbide. Microsilica is a useful byproduct.
The mineral perryite is similar to ferrosilicon, with composition Fe5Si2. In contact with water, ferrosilicon may slowly produce hydrogen. The reaction, which is accelerated in the presence of base, is used for hydrogen production. The melting point and density of ferrosilicon depend on its silicon content, with two nearly-eutectic areas, one near Fe2Si and a second spanning the FeSi2–FeSi3 composition range.
{| class="wikitable" style="text-align:center;"
|+ Physical properties of ferrosilicon
|-
! Si mass fraction (%)
| 0 || 20 || 35 || 50 || 60 || 80 || 100
|-
! Solidus point (°C)
| 1538 || 1200 || 1203 || 1212 || 1207 || 1207 || 1414
|-
! Liquidus point (°C)
| 1538 || 1212 || 1410 || 1220 || 1230 || 1360 || 1414
|-
! Density (g/cm3)
| 7.87 || 6.76 || 5.65 || 5.1 || 4.27 || 3.44 || 2.33
|}
Uses
Ferrosilicon is used as a source of silicon to reduce metals from their oxides and to deoxidize steel and other ferrous alloys. This prevents the loss of carbon from the molten steel (so called blocking the heat); ferromanganese, spiegeleisen, calcium silicides, and many other materials are used for the same purpose. It can be used to make other ferroalloys. Ferrosilicon is also used for manufacture of silicon, corrosion-resistant and high-temperature-resistant ferrous silicon alloys, and silicon steel for electromotors and transformer cores. In the manufacture of cast iron, ferrosilicon is used for inoculation of the iron to accelerate graphitization. In arc welding, ferrosilicon can be found in some electrode coatings.
Ferrosilicon is a basis for manufacture of prealloys like magnesium ferrosilicon (MgFeSi), used for production of ductile iron. MgFeSi contains 3–42% magnesium and small amounts of rare-earth elements. Ferrosilicon is also important as an additive to cast irons for controlling the initial content of silicon.
Magnesium ferrosilicon is instrumental in the formation of nodules, which give ductile iron its flexible property. Unlike gray cast iron, which forms graphite flakes, ductile iron contains graphite nodules, or pores, which make cracking more difficult.
Ferrosilicon is also used in the Pidgeon process to make magnesium from dolomite.
Silanes
Treatment of high-silicon ferrosilicon with hydrogen chloride is the basis of the industrial synthesis of trichlorosilane.
Ferrosilicon is also used in a ratio of 3–3.5% in the manufacture of sheets for the magnetic circuit of electrical transformers.
Hydrogen production
The method has been in use since World War I. Prior to this, the process and purity of hydrogen generation relying on steam passing over hot iron were difficult to control. The chemical reaction uses sodium hydroxide (NaOH), ferrosilicon, and water (H2O). In the "silicol" process, a heavy steel pressure vessel is filled with sodium hydroxide and ferrosilicon and, upon closing, a controlled amount of water is added; the dissolving of the hydroxide heats the mixture and starts the reaction; sodium silicate, hydrogen and steam are produced. The overall reaction of the process is believed to be:
2NaOH + Si + H2O → Na2SiO3 + 2H2
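As an illustrative back-of-the-envelope calculation not taken from the text above, the theoretical hydrogen yield implied by this equation can be estimated for a 75%-silicon ferrosilicon charge, assuming complete reaction, all silicon available, and ideal-gas behaviour (actual yields are lower):

```python
# Theoretical hydrogen yield of the silicol reaction above (illustrative only).
molar_mass_si = 28.085          # g/mol
si_fraction = 0.75              # assumed "FeSi75" grade ferrosilicon
charge_kg = 1.0                 # 1 kg of ferrosilicon

mol_si = si_fraction * charge_kg * 1000 / molar_mass_si
mol_h2 = 2 * mol_si             # 2 mol H2 per mol Si from the equation above
volume_m3 = mol_h2 * 0.022414   # ideal-gas molar volume at 0 °C, 1 atm

print(f"Si: {mol_si:.1f} mol, H2: {mol_h2:.1f} mol ≈ {volume_m3:.2f} m³ (0 °C, 1 atm)")
# ≈ 1.2 m³ of hydrogen per kg of FeSi75 as a theoretical upper bound.
```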
Ferrosilicon is used by the military to quickly produce hydrogen for balloons by the ferrosilicon method. The generator may be small enough to fit in a truck and requires only a small amount of electric power, the materials are stable and not combustible, and they do not generate hydrogen until mixed.
One report notes that this method of hydrogen production was not thoroughly investigated for about a century despite being reported by the US military at the beginning of the 20th century.
Footnotes
References
Further reading
Deoxidizers
Ferroalloys
Iron
Silicon alloys | Ferrosilicon | [
"Chemistry",
"Materials_science"
] | 1,107 | [
"Deoxidizers",
"Silicon alloys",
"Alloys",
"Metallurgy"
] |
3,320,853 | https://en.wikipedia.org/wiki/Chemical%20process | In a scientific sense, a chemical process is a method or means of somehow changing one or more chemicals or chemical compounds. Such a chemical process can occur by itself or be caused by an outside force, and involves a chemical reaction of some sort. In an "engineering" sense, a chemical process is a method intended to be used in manufacturing or on an industrial scale (see Industrial process) to change the composition of chemical(s) or material(s), usually using technology similar or related to that used in chemical plants or the chemical industry.
Neither of these definitions is exact in the sense that one can always tell definitively what is a chemical process and what is not; they are practical definitions. There is also significant overlap between these two definitions. Because of the inexactness of the definition, chemists and other scientists use the term "chemical process" only in a general sense or in the engineering sense. However, in the "process (engineering)" sense, the term "chemical process" is used extensively. The rest of the article will cover the engineering type of chemical processes.
Although this type of chemical process may sometimes involve only one step, often multiple steps, referred to as unit operations, are involved. In a plant, each of the unit operations commonly occur in individual vessels or sections of the plant called units. Often, one or more chemical reactions are involved, but other ways of changing chemical (or material) composition may be used, such as mixing or separation processes. The process steps may be sequential in time or sequential in space along a stream of flowing or moving material; see Chemical plant. For a given amount of a feed (input) material or product (output) material, an expected amount of material can be determined at key steps in the process from empirical data and material balance calculations. These amounts can be scaled up or down to suit the desired capacity or operation of a particular chemical plant built for such a process. More than one chemical plant may use the same chemical process, each plant perhaps at differently scaled capacities.
Chemical processes like distillation and crystallization go back to alchemy in Alexandria, Egypt.
Such chemical processes can be illustrated generally as block flow diagrams or in more detail as process flow diagrams. Block flow diagrams show the units as blocks and the streams flowing between them as connecting lines with arrowheads to show direction of flow.
In addition to chemical plants for producing chemicals, chemical processes with similar technology and equipment are also used in oil refining and other refineries, natural gas processing, polymer and pharmaceutical manufacturing, food processing, and water and wastewater treatment.
Unit processing in chemical process
Unit processing is the basic processing in chemical engineering. Together with unit operations, it forms the main principle of the varied chemical industries. Each type of unit processing follows the same chemical law, much as each type of unit operation follows the same physical law.
Chemical engineering unit processing consists of the following important processes:
Fractionation
Decontamination
Distillation
Filtration
Oxidation
Reduction
Refining / Refining (metallurgy)
Hydrogenation
Dehydrogenation
Hydrolysis
Hydration
Dehydration
Halogenation
Nitrification
Sulfonation
Amination
Alkylation
Dealkylation
Esterification
Polymerization
Polycondensation
Purification
Catalysis
Academic research institutes in process chemistry
Institute of Process Research & Development, University of Leeds
See also
Chemical plant
Chemical reaction
Foam fractionation
Industrial process
Process (engineering)
Separation process
References
Secondary sector of the economy
Industrial processes | Chemical process | [
"Chemistry"
] | 699 | [
"Chemical process engineering",
"Chemical processes",
"nan"
] |
3,321,592 | https://en.wikipedia.org/wiki/Signal-to-quantization-noise%20ratio | Signal-to-quantization-noise ratio (SQNR or SNqR) is a widely used quality measure for analysing digitizing schemes such as pulse-code modulation (PCM). The SQNR reflects the relationship between the maximum nominal signal strength and the quantization error (also known as quantization noise) introduced in the analog-to-digital conversion.
The SQNR formula is derived from the general signal-to-noise ratio (SNR) formula:
where:
is the probability of received bit error
is the peak message signal level
is the mean message signal level
As SQNR applies to quantized signals, the formulae for SQNR refer to discrete-time digital signals. Instead of , the digitized signal will be used. For quantization steps, each sample, requires bits. The probability distribution function (PDF) represents the distribution of values in and can be denoted as . The maximum magnitude value of any is denoted by .
As SQNR, like SNR, is a ratio of signal power to some noise power, it can be calculated as:
The signal power is:
The quantization noise power can be expressed as:
Giving:
When the SQNR is desired in terms of decibels (dB), a useful approximation to SQNR is:
where is the number of bits in a quantized sample, and is the signal power calculated above. Note that for each bit added to a sample, the SQNR goes up by approximately 6 dB (20·log10(2) ≈ 6.02 dB).
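As a quick numerical check, not part of the original article, the following Python sketch measures the SQNR of an idealized uniform quantizer driven by a full-scale sine wave and compares it with the familiar 6.02·n + 1.76 dB rule of thumb, which assumes exactly that signal and quantizer:

```python
import numpy as np

rng = np.random.default_rng(0)

def measured_sqnr_db(n_bits, num_samples=200_000):
    """Empirical SQNR of an ideal uniform quantizer fed a full-scale sine wave."""
    x = np.sin(2 * np.pi * rng.random(num_samples))   # sine-distributed samples in [-1, 1]
    step = 2.0 / (2 ** n_bits)                         # quantizer step size
    x_q = np.round(x / step) * step                    # idealized mid-tread quantizer, no clipping
    noise = x_q - x
    return 10 * np.log10(np.mean(x ** 2) / np.mean(noise ** 2))

for n in (8, 12, 16):
    print(f"{n:2d} bits: measured {measured_sqnr_db(n):6.2f} dB, "
          f"rule of thumb 6.02*n + 1.76 = {6.02 * n + 1.76:6.2f} dB")
```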
References
B. P. Lathi, Modern Digital and Analog Communication Systems (3rd edition), Oxford University Press, 1998
External links
Signal to quantization noise in quantized sinusoidal - Analysis of quantization error on a sine wave
Digital audio
Engineering ratios
Noise (electronics) | Signal-to-quantization-noise ratio | [
"Mathematics",
"Engineering"
] | 365 | [
"Quantity",
"Metrics",
"Engineering ratios"
] |
3,321,849 | https://en.wikipedia.org/wiki/Carbon-13%20nuclear%20magnetic%20resonance | Carbon-13 (C13) nuclear magnetic resonance (most commonly known as carbon-13 NMR spectroscopy or 13C NMR spectroscopy or sometimes simply referred to as carbon NMR) is the application of nuclear magnetic resonance (NMR) spectroscopy to carbon. It is analogous to proton NMR (1H NMR) and allows the identification of carbon atoms in an organic molecule just as proton NMR identifies hydrogen atoms. 13C NMR detects only the 13C isotope; the main carbon isotope, 12C, does not produce an NMR signal. Although about a million times less sensitive than 1H NMR spectroscopy, 13C NMR spectroscopy is widely used for characterizing organic and organometallic compounds, primarily because 1H-decoupled 13C-NMR spectra are simpler, have a greater sensitivity to differences in the chemical structure, and, thus, are better suited for identifying molecules in complex mixtures. At the same time, such spectra lack quantitative information about the atomic ratios of different types of carbon nuclei, because the nuclear Overhauser effect used in 1H-decoupled 13C-NMR spectroscopy enhances the signals from carbon atoms with a larger number of hydrogen atoms attached to them more than from carbon atoms with a smaller number of H's, and because full relaxation of 13C nuclei is usually not attained (for the sake of reducing the experiment time), and the nuclei with shorter relaxation times produce more intense signals.
The major isotope of carbon, the 12C isotope, has a spin quantum number of zero and so is not magnetically active and therefore not detectable by NMR. 13C, with a spin quantum number of 1/2, is not abundant (1.1%), whereas other popular nuclei are 100% abundant, e.g. 1H, 19F, 31P.
Receptivity
13C NMR spectroscopy is much less sensitive to carbon (by about 4 orders of magnitude) than 1H NMR spectroscopy is to hydrogen, because of the lower abundance (1.1%) of 13C compared to 1H (>99%), and because of a lower (0.702 vs. 2.8) nuclear magnetic moment. Stated equivalently, the gyromagnetic ratio (6.728284 × 10⁷ rad T⁻¹ s⁻¹) is only 1/4th that of 1H.
On the other hand, the sensitivity of 13C NMR spectroscopy benefits to some extent from nuclear Overhauser effect, which enhances signal intensity for non-quaternary 13C atoms.
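A rough numerical illustration, not from the article: assuming the usual proportionality of receptivity to natural abundance times the cube of the gyromagnetic ratio (for nuclei of equal spin), the abundance and gyromagnetic ratio quoted above reproduce the roughly four-orders-of-magnitude sensitivity gap:

```python
# Rough estimate of 13C receptivity relative to 1H.
# Assumes receptivity ∝ natural abundance × γ³ for spin-1/2 nuclei (standard proportionality).
gamma_13c = 6.728284e7    # rad T^-1 s^-1
gamma_1h = 26.752218e7    # rad T^-1 s^-1
abundance_13c = 0.011     # ~1.1 % natural abundance (1H taken as ~1.0)

relative_receptivity = abundance_13c * (gamma_13c / gamma_1h) ** 3
print(f"13C receptivity relative to 1H ≈ {relative_receptivity:.2e}")
print(f"i.e. roughly 1 part in {1 / relative_receptivity:,.0f}")
```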
Chemical shifts
The disadvantages in "receptivity" are compensated by the high sensitivity of 13C NMR signals to the chemical environment of the nucleus, i.e. the chemical shift "dispersion" is great, covering nearly 250 ppm. This dispersion reflects the fact that non-1H nuclei are strongly influenced by excited states (the "paramagnetic" contribution to the shielding tensor, which is unrelated to paramagnetism). By comparison, most 1H NMR signals for most organic compounds fall within about 15 ppm.
The chemical shift reference standard for 13C is the carbons in tetramethylsilane (TMS), whose chemical shift is set as 0.0 ppm at every temperature.
Typical chemical shifts in 13C-NMR
Coupling constants
Homonuclear 13C-13C coupling is normally only observed in samples that are enriched with 13C. The range for one-bond 1J(13C,13C) is 50–130 Hz. Two-bond 2J(13C,13C) are near 10 Hz.
The trends in J(1H,13C) and J(13C,13C) are similar, except that the 13C,13C couplings are smaller owing to the modest value of the 13C nuclear magnetic moment. Values for 1J(1H,13C) range from 125 to 250 Hz. Values for 2J(1H,13C) are near 5 Hz and often are negative.
Implementation
Sensitivity
As a consequence of low receptivity, 13C NMR spectroscopy suffers from complications not encountered in proton NMR spectroscopy. Many measures can be implemented to compensate for the low receptivity of this nucleus. For example, high field magnets with internal bores are capable of accepting larger sample tubes (typically 10 mm in diameter for 13C NMR versus 5 mm for 1H NMR). Relaxation reagents allow more rapid pulsing. A typical relaxation agent is chromium(III) acetylacetonate. For a typical sample, recording a 13C NMR spectrum may require several hours, compared to 15–30 minutes for 1H NMR. The nuclear dipole is weaker, the difference in energy between alpha and beta states is one-quarter that of proton NMR, and the Boltzmann population difference is correspondingly less. One final measure to compensate for low receptivity is isotopic enrichment.
Some NMR probes, called cryoprobes, offer 20x signal enhancement relative to ordinary NMR probes. In cryoprobes, the RF generating and receiving electronics are maintained at ~ 25K using helium gas, which greatly enhances their sensitivity. The trade-off is that cryoprobes are costly.
Coupling modes
Another potential complication results from the presence of large one bond J-coupling constants between carbon and hydrogen (typically from 100 to 250 Hz). While potentially informative, these couplings can complicate the spectra and reduce sensitivity. For these reasons, 13C-NMR spectra are usually recorded with proton NMR decoupling. Couplings between carbons can be ignored due to the low natural abundance of 13C. Hence in contrast to typical proton NMR spectra, which show multiplets for each proton position, carbon NMR spectra show a single peak for each chemically non-equivalent carbon atom.
In further contrast to 1H NMR, the intensities of the signals are often not proportional to the number of equivalent 13C atoms. Instead, signal intensity is strongly influenced by (and proportional to) the number of surrounding spins (typically 1H). Integrations are more quantitative if the delay times are long, i.e. if the delay times greatly exceed relaxation times.
The most common modes of recording 13C spectra are proton-noise decoupling (also known as noise-, proton-, or broadband- decoupling), off-resonance decoupling, and gated decoupling. These modes are meant to address the large J values for 13C - H (110–320 Hz), 13C - C - H (5–60 Hz), and 13C - C - C - H (5–25 Hz) which otherwise make completely proton coupled 13C spectra difficult to interpret.
With proton-noise decoupling, in which most spectra are run, a noise decoupler strongly irradiates the sample with a broad (approximately 1000 Hz) range of radio frequencies covering the range (such as 100 MHz for a 23,486 gauss field) at which protons change their nuclear spin. The rapid changes in proton spin create an effective heteronuclear decoupling, increasing carbon signal strength on account of the nuclear Overhauser effect (NOE) and simplifying the spectrum so that each non-equivalent carbon produces a singlet peak.
Both atoms, carbon and hydrogen, exhibit nuclear spin and are NMR active. The nuclear Overhauser effect, in general, shows up when one of two different types of atoms is irradiated while the NMR spectrum of the other type is determined. If the absorption intensities of the observed (i.e., non-irradiated) atom change, enhancement occurs. The effect can be either positive or negative, depending on which atom types are involved.
The relative intensities are unreliable because some carbons have a larger spin-lattice relaxation time and others have weaker NOE enhancement.
In gated decoupling, the noise decoupler is gated on early in the free induction delay but gated off for the pulse delay. This largely prevents NOE enhancement, allowing the strength of individual 13C peaks to be meaningfully compared by integration, at a cost of half to two-thirds of the overall sensitivity.
With off-resonance decoupling, the noise decoupler irradiates the sample at 1000–2000 Hz upfield or 2000–3000 Hz downfield of the proton resonance frequency. This retains couplings between protons immediately adjacent to 13C atoms but most often removes the others, allowing narrow multiplets to be visualized with one extra peak per bound proton (unless bound methylene protons are non-equivalent, in which case a pair of doublets may be observed).
Distortionless enhancement by polarization transfer spectra
Distortionless enhancement by polarization transfer (DEPT) is an NMR method used for determining the presence of primary, secondary and tertiary carbon atoms. The DEPT experiment differentiates between CH, CH2 and CH3 groups by variation of the selection angle parameter (the tip angle of the final 1H pulse): 135° angle gives all CH and CH3 in a phase opposite to CH2; 90° angle gives only CH groups, the others being suppressed; 45° angle gives all carbons with attached protons (regardless of number) in phase.
Signals from quaternary carbons and other carbons with no attached protons are always absent (due to the lack of attached protons).
The polarization transfer from 1H to 13C has the secondary advantage of increasing the sensitivity over the normal 13C spectrum (which has a modest enhancement from the nuclear Overhauser effect (NOE) due to the 1H decoupling).
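A small sketch, not from the article, of how the DEPT selection angle separates CH, CH2, and CH3 signals, using the commonly quoted transfer amplitude proportional to n·sin(θ)·cos^(n−1)(θ) for a CHn group:

```python
import numpy as np

def dept_intensity(n_protons, theta_deg):
    """Relative DEPT intensity of a CHn carbon at final 1H tip angle theta,
    using the commonly quoted form n*sin(theta)*cos(theta)**(n-1)."""
    theta = np.radians(theta_deg)
    return n_protons * np.sin(theta) * np.cos(theta) ** (n_protons - 1)

for theta in (45, 90, 135):
    ch, ch2, ch3 = (dept_intensity(n, theta) for n in (1, 2, 3))
    print(f"theta = {theta:3d} deg:  CH {ch:+.2f}   CH2 {ch2:+.2f}   CH3 {ch3:+.2f}")
# Expected pattern: at 135° CH and CH3 are positive while CH2 is negative,
# at 90° only CH survives, and at 45° all protonated carbons are positive.
```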
Attached proton test spectra
Another useful way of determining how many protons a carbon in a molecule is bonded to is to use an attached proton test (APT), which distinguishes between carbon atoms with even or odd number of attached hydrogens. A proper spin-echo sequence is able to distinguish between S, I2S and I1S, I3S spin systems: the first will appear as positive peaks in the spectrum, while the latter as negative peaks (pointing downwards), while retaining relative simplicity in the spectrum since it is still broadband proton decoupled.
Even though this technique does not distinguish fully between CHn groups, it is so easy and reliable that it is frequently employed as a first attempt to assign peaks in the spectrum and elucidate the structure. Additionally, signals from quaternary carbons and other carbons with no attached protons are still detectable, so in many cases an additional conventional 13C spectrum is not required, which is an advantage over DEPT. It is, however, sometimes possible that a CH and CH2 signal have coincidentally equivalent chemical shifts resulting in annulment in the APT spectrum due to the opposite phases. For this reason the conventional 13C{1H} spectrum or HSQC are occasionally also acquired.
See also
Nuclear magnetic resonance
Hyperpolarized carbon-13 MRI
Triple-resonance nuclear magnetic resonance spectroscopy
References
External links
Carbon NMR spectra, where there are three spectra of ethyl phthalate, ethyl ester of orthophthalic acid: completely coupled, completely decoupled and off-resonance decoupled (in this order).
For an extended tabulation of 13C shifts and coupling constants.
Nuclear magnetic resonance
Carbon | Carbon-13 nuclear magnetic resonance | [
"Physics",
"Chemistry"
] | 2,368 | [
"Nuclear magnetic resonance",
"Nuclear physics"
] |
3,322,454 | https://en.wikipedia.org/wiki/Glyoxylate%20cycle | The glyoxylate cycle, a variation of the tricarboxylic acid cycle, is an anabolic pathway occurring in plants, bacteria, protists, and fungi. The glyoxylate cycle centers on the conversion of acetyl-CoA to succinate for the synthesis of carbohydrates. In microorganisms, the glyoxylate cycle allows cells to use two carbons (C2 compounds), such as acetate, to satisfy cellular carbon requirements when simple sugars such as glucose or fructose are not available. The cycle is generally assumed to be absent in animals, with the exception of nematodes at the early stages of embryogenesis. In recent years, however, the detection of malate synthase (MS) and isocitrate lyase (ICL), key enzymes involved in the glyoxylate cycle, in some animal tissue has raised questions regarding the evolutionary relationship of enzymes in bacteria and animals and suggests that animals encode alternative enzymes of the cycle that differ in function from known MS and ICL in non-metazoan species.
Plants as well as some algae and bacteria can use acetate as the carbon source for the production of carbon compounds. Plants and bacteria employ a modification of the TCA cycle called the glyoxylate cycle to produce four carbon dicarboxylic acid from two carbon acetate units. The glyoxylate cycle bypasses the two oxidative decarboxylation reactions of the TCA cycle and directly converts isocitrate through isocitrate lyase and malate synthase into malate and succinate.
The glyoxylate cycle was discovered in 1957 at the University of Oxford by Sir Hans Kornberg and his mentor Hans Krebs, resulting in a Nature paper Synthesis of Cell Constituents from C2-Units by a Modified Tricarboxylic Acid Cycle.
Similarities with TCA cycle
The glyoxylate cycle uses five of the eight enzymes associated with the tricarboxylic acid cycle: citrate synthase, aconitase, succinate dehydrogenase, fumarase, and malate dehydrogenase. The two cycles differ in that in the glyoxylate cycle, isocitrate is converted into glyoxylate and succinate by isocitrate lyase (ICL) instead of into α-ketoglutarate. This bypasses the decarboxylation steps that take place in the citric acid cycle (TCA cycle), allowing simple carbon compounds to be used in the later synthesis of macromolecules, including glucose. Glyoxylate is subsequently combined with acetyl-CoA to produce malate, catalyzed by malate synthase. Malate is also formed in parallel from succinate by the action of succinate dehydrogenase and fumarase.
Role in gluconeogenesis
Fatty acids from lipids are commonly used as an energy source by vertebrates as fatty acids are degraded through beta oxidation into acetate molecules. This acetate, bound to the active thiol group of coenzyme A, enters the citric acid cycle (TCA cycle) where it is fully oxidized to carbon dioxide. This pathway thus allows cells to obtain energy from fat. To use acetate from fat for biosynthesis of carbohydrates, the glyoxylate cycle, whose initial reactions are identical to the TCA cycle, is used.
Cell-wall containing organisms, such as plants, fungi, and bacteria, require very large amounts of carbohydrates during growth for the biosynthesis of complex structural polysaccharides, such as cellulose, glucans, and chitin. In these organisms, in the absence of available carbohydrates (for example, in certain microbial environments or during seed germination in plants), the glyoxylate cycle permits the synthesis of glucose from lipids via acetate generated in fatty acid β-oxidation.
The glyoxylate cycle bypasses the steps in the citric acid cycle where carbon is lost in the form of CO2. The two initial steps of the glyoxylate cycle are identical to those in the citric acid cycle: acetate → citrate → isocitrate. In the next step, catalyzed by the first glyoxylate cycle enzyme, isocitrate lyase, isocitrate undergoes cleavage into succinate and glyoxylate (the latter gives the cycle its name). Glyoxylate condenses with acetyl-CoA (a step catalyzed by malate synthase), yielding malate. Both malate and oxaloacetate can be converted into phosphoenolpyruvate, which is the product of phosphoenolpyruvate carboxykinase, the first enzyme in gluconeogenesis. The net result of the glyoxylate cycle is therefore the production of glucose from fatty acids. Succinate generated in the first step can enter into the citric acid cycle to eventually form oxaloacetate.
Function in organisms
Plants
In plants the glyoxylate cycle occurs in special peroxisomes which are called glyoxysomes. This cycle allows seeds to use lipids as a source of energy to form the shoot during germination. The seed cannot produce biomass using photosynthesis because of lack of an organ to perform this function. The lipid stores of germinating seeds are used for the formation of the carbohydrates that fuel the growth and development of the organism.
The glyoxylate cycle can also provide plants with another aspect of metabolic diversity. This cycle allows plants to take in acetate both as a carbon source and as a source of energy. Acetate is converted to acetyl CoA (similar to the TCA cycle). This acetyl CoA can proceed through the glyoxylate cycle, and some succinate is released during the cycle. The four carbon succinate molecule can be transformed into a variety of carbohydrates through combinations of other metabolic processes; the plant can synthesize molecules using acetate as a source for carbon. The acetyl CoA can also react with glyoxylate to produce some NADPH from NADP+, which is used to drive energy synthesis in the form of ATP later in the electron transport chain.
Pathogenic fungi
The glyoxylate cycle may serve an entirely different purpose in some species of pathogenic fungi. The levels of the main enzymes of the glyoxylate cycle, ICL and MS, are greatly increased upon contact with a human host. Mutants of a particular species of fungi that lacked ICL were also significantly less virulent in studies with mice compared to the wild type. The exact link between these two observations is still being explored, but it can be concluded that the glyoxylate cycle is a significant factor in the pathogenesis of these microbes.
Vertebrates
Vertebrates were once thought to be unable to perform this cycle because there was no evidence of its two key enzymes, isocitrate lyase and malate synthase. However, some research suggests that this pathway may exist in some, if not all, vertebrates.
Specifically, some studies show evidence of components of the glyoxylate cycle existing in significant amounts in the liver tissue of chickens. Data such as these support the idea that the cycle could theoretically occur in even the most complex vertebrates. Other experiments have also provided evidence that the cycle is present among certain insect and marine invertebrate species, as well as strong evidence of the cycle's presence in nematode species. However, other experiments refute this claim. Some publications conflict on the presence of the cycle in mammals: for example, one paper has stated that the glyoxylate cycle is active in hibernating bears, but this report was disputed in a later paper. Evidence exists for malate synthase activity in humans due to a dual functional malate/B-methylmalate synthase of mitochondrial origin called CLYBL expressed in brown fat and kidney. Vitamin D may regulate this pathway in vertebrates.
Inhibition of the glyoxylate cycle
Due to the central role of the glyoxylate cycle in the metabolism of pathogenic species including fungi and bacteria, enzymes of the glyoxylate cycle are current inhibition targets for the treatment of diseases. Most reported inhibitors of the glyoxylate cycle target the first enzyme of the cycle (ICL). Inhibitors were reported for Candida albicans for potential use as antifungal agents. The mycobacterial glyoxylate cycle is also being targeted for potential treatments of tuberculosis.
Engineering concepts
The prospect of engineering various metabolic pathways into mammals which do not possess them is a topic of great interest for bio-engineers today. The glyoxylate cycle is one of the pathways which engineers have attempted to manipulate into mammalian cells. This is primarily of interest for engineers in order to increase the production of wool in sheep, which is limited by the access to stores of glucose. By introducing the pathway into sheep, the large stores of acetate in cells could be used in order to synthesize glucose through the cycle, allowing for increased production of wool. Mammals are incapable of executing the pathway due to the lack of two enzymes, isocitrate lyase and malate synthase, which are needed in order for the cycle to take place. It is believed by some that the genes to produce these enzymes, however, are pseudogenic in mammals, meaning that the gene is not necessarily absent, rather, it is merely "turned off".
In order to engineer the pathway into cells, the genes responsible for coding for the enzymes had to be isolated and sequenced, which was done using the bacterium E. coli, from which the AceA gene, responsible for encoding isocitrate lyase, and the AceB gene, responsible for encoding malate synthase, were sequenced. Engineers have been able to successfully incorporate the AceA and AceB genes into mammalian cells in culture, and the cells were successful in transcribing and translating the genes into the appropriate enzymes, proving that the genes could successfully be incorporated into the cell’s DNA without damaging the functionality or health of the cell. However, being able to engineer the pathway into transgenic mice has proven to be difficult for engineers. While the DNA has been expressed in some tissues, including the liver and small intestine in test animals, the level of expression is not high, and not found to be statistically significant. In order to successfully engineer the pathway, engineers would have to fuse the gene with promoters which could be regulated in order to increase the level of expression, and have the expression in the right cells, such as epithelial cells.
Efforts to engineer the pathway into more complex animals, such as sheep, have not been effective. This illustrates that much more research needs to be done on the topic, and suggests it is possible that a high expression of the cycle in animals would not be tolerated by the chemistry of the cell. Incorporating the cycle into mammals will benefit from advances in nuclear transfer technology, which will enable engineers to examine and access the pathway for functional integration within the genome before its transfer to animals.
There are possible benefits, however, to the cycle's absence in mammalian cells. The cycle is present in microorganisms that cause disease but is absent in mammals, for example humans. There is a strong plausibility of the development of antibiotics that would attack the glyoxylate cycle, which would kill the disease-causing microorganisms that depend on the cycle for their survival, yet would not harm humans where the cycle, and thus the enzymes that the antibiotic would target, are absent.
See also
Citric acid cycle (Tricarboxylic acid cycle)
References
External links
Comparative Analysis of Glyoxylate Cycle Key Enzyme Isocitrate Lyase from Organisms of Different Systematic Groups
Biochemical reactions
Carbohydrate metabolism
Metabolic pathways | Glyoxylate cycle | [
"Chemistry",
"Biology"
] | 2,537 | [
"Carbohydrate metabolism",
"Biochemistry",
"Biochemical reactions",
"Carbohydrate chemistry",
"Metabolic pathways",
"Metabolism"
] |
3,323,565 | https://en.wikipedia.org/wiki/Cauchy%20stress%20tensor | In continuum mechanics, the Cauchy stress tensor (symbol , named after Augustin-Louis Cauchy), also called true stress tensor or simply stress tensor, completely defines the state of stress at a point inside a material in the deformed state, placement, or configuration. The second order tensor consists of nine components and relates a unit-length direction vector e to the traction vector T(e) across an imaginary surface perpendicular to e:
The SI base units of both stress tensor and traction vector are newton per square metre (N/m2) or pascal (Pa), corresponding to the stress scalar. The unit vector is dimensionless.
The Cauchy stress tensor obeys the tensor transformation law under a change in the system of coordinates. A graphical representation of this transformation law is the Mohr's circle for stress.
The Cauchy stress tensor is used for stress analysis of material bodies experiencing small deformations: it is a central concept in the linear theory of elasticity. For large deformations, also called finite deformations, other measures of stress are required, such as the Piola–Kirchhoff stress tensor, the Biot stress tensor, and the Kirchhoff stress tensor.
According to the principle of conservation of linear momentum, if the continuum body is in static equilibrium it can be demonstrated that the components of the Cauchy stress tensor in every material point in the body satisfy the equilibrium equations (Cauchy's equations of motion for zero acceleration). At the same time, according to the principle of conservation of angular momentum, equilibrium requires that the summation of moments with respect to an arbitrary point is zero, which leads to the conclusion that the stress tensor is symmetric, thus having only six independent stress components, instead of the original nine. However, in the presence of couple-stresses, i.e. moments per unit volume, the stress tensor is non-symmetric. This also is the case when the Knudsen number is close to one, , or the continuum is a non-Newtonian fluid, which can lead to rotationally non-invariant fluids, such as polymers.
There are certain invariants associated with the stress tensor, whose values do not depend upon the coordinate system chosen, or the area element upon which the stress tensor operates. These are the three eigenvalues of the stress tensor, which are called the principal stresses.
Euler–Cauchy stress principle – stress vector
The Euler–Cauchy stress principle states that upon any surface (real or imaginary) that divides the body, the action of one part of the body on the other is equivalent (equipollent) to the system of distributed forces and couples on the surface dividing the body, and it is represented by a field , called the traction vector, defined on the surface and assumed to depend continuously on the surface's unit vector .
To formulate the Euler–Cauchy stress principle, consider an imaginary surface passing through an internal material point dividing the continuous body into two segments, as seen in Figure 2.1a or 2.1b (one may use either the cutting plane diagram or the diagram with the arbitrary volume inside the continuum enclosed by the surface ).
Following the classical dynamics of Newton and Euler, the motion of a material body is produced by the action of externally applied forces which are assumed to be of two kinds: surface forces and body forces . Thus, the total force applied to a body or to a portion of the body can be expressed as:
Only surface forces will be discussed in this article as they are relevant to the Cauchy stress tensor.
When the body is subjected to external surface forces or contact forces , following Euler's equations of motion, internal contact forces and moments are transmitted from point to point in the body, and from one segment to the other through the dividing surface , due to the mechanical contact of one portion of the continuum onto the other (Figure 2.1a and 2.1b). On an element of area containing , with normal vector , the force distribution is equipollent to a contact force exerted at point P and surface moment . In particular, the contact force is given by
where is the mean surface traction.
Cauchy's stress principle asserts that as the area element becomes very small and tends to zero, the ratio of force to area approaches a definite limit and the couple stress vector vanishes. In specific fields of continuum mechanics the couple stress is assumed not to vanish; however, classical branches of continuum mechanics address non-polar materials which do not consider couple stresses and body moments.
This limiting vector is defined as the surface traction, also called the stress vector, traction, or traction vector, given at the point associated with a plane with a normal vector:
This equation means that the stress vector depends on its location in the body and the orientation of the plane on which it is acting.
This implies that the balancing action of internal contact forces generates a contact force density or Cauchy traction field that represents a distribution of internal contact forces throughout the volume of the body in a particular configuration of the body at a given time . It is not a vector field because it depends not only on the position of a particular material point, but also on the local orientation of the surface element as defined by its normal vector .
Depending on the orientation of the plane under consideration, the stress vector may not necessarily be perpendicular to that plane, i.e. parallel to , and can be resolved into two components (Figure 2.1c):
one normal to the plane, called normal stress
where is the normal component of the force to the differential area
and the other parallel to this plane, called the shear stress
where is the tangential component of the force to the differential surface area . The shear stress can be further decomposed into two mutually perpendicular vectors.
Cauchy's postulate
According to the Cauchy Postulate, the stress vector remains unchanged for all surfaces passing through the point and having the same normal vector at , i.e., having a common tangent at . This means that the stress vector is a function of the normal vector only, and is not influenced by the curvature of the internal surfaces.
Cauchy's fundamental lemma
A consequence of Cauchy's postulate is Cauchy's Fundamental Lemma, also called the Cauchy reciprocal theorem, which states that the stress vectors acting on opposite sides of the same surface are equal in magnitude and opposite in direction. Cauchy's fundamental lemma is equivalent to Newton's third law of motion of action and reaction, and is expressed as
Cauchy's stress theorem—stress tensor
The state of stress at a point in the body is then defined by all the stress vectors T(n) associated with all planes (infinite in number) that pass through that point. However, according to Cauchy's fundamental theorem, also called Cauchy's stress theorem, merely by knowing the stress vectors on three mutually perpendicular planes, the stress vector on any other plane passing through that point can be found through coordinate transformation equations.
Cauchy's stress theorem states that there exists a second-order tensor field σ(x, t), called the Cauchy stress tensor, independent of n, such that T is a linear function of n:
This equation implies that the stress vector T(n) at any point P in a continuum associated with a plane with normal unit vector n can be expressed as a function of the stress vectors on the planes perpendicular to the coordinate axes, i.e. in terms of the components σij of the stress tensor σ.
To prove this expression, consider a tetrahedron with three faces oriented in the coordinate planes, and with an infinitesimal area dA oriented in an arbitrary direction specified by a normal unit vector n (Figure 2.2). The tetrahedron is formed by slicing the infinitesimal element along an arbitrary plane with unit normal n. The stress vector on this plane is denoted by T(n). The stress vectors acting on the faces of the tetrahedron are denoted as T(e1), T(e2), and T(e3), and are by definition the components σij of the stress tensor σ. This tetrahedron is sometimes called the Cauchy tetrahedron. The equilibrium of forces, i.e. Euler's first law of motion (Newton's second law of motion), gives:
where the right-hand-side represents the product of the mass enclosed by the tetrahedron and its acceleration: ρ is the density, a is the acceleration, and h is the height of the tetrahedron, considering the plane n as the base. The area of the faces of the tetrahedron perpendicular to the axes can be found by projecting dA into each face (using the dot product):
and then substituting into the equation to cancel out dA:
To consider the limiting case as the tetrahedron shrinks to a point, h must go to 0 (intuitively, the plane n is translated along n toward O). As a result, the right-hand side of the equation approaches 0, so T(n) = T(e1) n1 + T(e2) n2 + T(e3) n3.
Assuming a material element (see figure at the top of the page) with planes perpendicular to the coordinate axes of a Cartesian coordinate system, the stress vectors associated with each of the element planes, i.e. T(e1), T(e2), and T(e3) can be decomposed into a normal component and two shear components, i.e. components in the direction of the three coordinate axes. For the particular case of a surface with normal unit vector oriented in the direction of the x1-axis, denote the normal stress by σ11, and the two shear stresses as σ12 and σ13:
In index notation this is
The nine components σij of the stress vectors are the components of a second-order Cartesian tensor called the Cauchy stress tensor, which can be used to completely define the state of stress at a point and is given by
where σ11, σ22, and σ33 are normal stresses, and σ12, σ13, σ21, σ23, σ31, and σ32 are shear stresses. The first index i indicates that the stress acts on a plane normal to the Xi-axis, and the second index j denotes the direction in which the stress acts (for example, σ12 implies that the stress is acting on the plane that is normal to the 1st axis, i.e. X1, and acts along the 2nd axis, i.e. X2). A stress component is positive if it acts in the positive direction of the coordinate axes, and if the plane where it acts has an outward normal vector pointing in the positive coordinate direction.
Thus, using the components of the stress tensor
or, equivalently,
Alternatively, in matrix form we have
The Voigt notation representation of the Cauchy stress tensor takes advantage of the symmetry of the stress tensor to express the stress as a six-dimensional vector of the form:
The Voigt notation is used extensively in representing stress–strain relations in solid mechanics and for computational efficiency in numerical structural mechanics software.
Transformation rule of the stress tensor
It can be shown that the stress tensor is a contravariant second order tensor, which is a statement of how it transforms under a change of the coordinate system. From an xi-system to an xi' -system, the components σij in the initial system are transformed into the components σij' in the new system according to the tensor transformation rule (Figure 2.4):
where A is a rotation matrix with components aij. In matrix form this is
Expanding the matrix operation, and simplifying terms using the symmetry of the stress tensor, gives
The Mohr circle for stress is a graphical representation of this transformation of stresses.
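A minimal numerical sketch of the transformation rule (not from the article; the stress values and rotation angle are made up for illustration), showing σ' = A·σ·Aᵀ and that the trace is unchanged by the rotation:

```python
import numpy as np

# Rotate a Cauchy stress tensor to a new coordinate frame: sigma' = A @ sigma @ A.T
sigma = np.array([[50.0, 30.0,  0.0],      # illustrative stress components (MPa)
                  [30.0, -20.0, 0.0],
                  [ 0.0,  0.0, 10.0]])

theta = np.radians(30.0)                    # 30° rotation about the x3-axis
c, s = np.cos(theta), np.sin(theta)
A = np.array([[ c,   s,  0.0],              # direction cosines of the new axes
              [-s,   c,  0.0],
              [0.0, 0.0, 1.0]])

sigma_rotated = A @ sigma @ A.T
print(np.round(sigma_rotated, 3))
# The trace (first invariant) is unchanged by the rotation:
print(np.trace(sigma), np.trace(sigma_rotated))
```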
Normal and shear stresses
The magnitude of the normal stress component σn of any stress vector T(n) acting on an arbitrary plane with normal unit vector n at a given point, in terms of the components σij of the stress tensor σ, is the dot product of the stress vector and the normal unit vector:
σn = T(n) · n = σij ni nj.
The magnitude of the shear stress component τn, acting orthogonal to the vector n, can then be found using the Pythagorean theorem:
τn = sqrt( |T(n)|² − σn² ),
where |T(n)|² = T(n) · T(n).
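A minimal numerical sketch (not from the article; the stress values and cutting plane are made up) of computing the traction vector from Cauchy's relation and splitting it into normal and shear parts:

```python
import numpy as np

sigma = np.array([[50.0, 30.0,  0.0],      # illustrative symmetric stress tensor (MPa)
                  [30.0, -20.0, 0.0],
                  [ 0.0,  0.0, 10.0]])

n = np.array([1.0, 1.0, 1.0])
n = n / np.linalg.norm(n)                   # unit normal of the cutting plane

T = sigma @ n                               # traction vector T(n); for symmetric sigma, n @ sigma is identical
sigma_n = T @ n                             # normal stress: projection of T(n) onto n
tau_n = np.sqrt(T @ T - sigma_n ** 2)       # shear stress from the Pythagorean theorem

print("T(n)    =", np.round(T, 3))
print("sigma_n =", round(sigma_n, 3))
print("tau_n   =", round(tau_n, 3))
```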
Balance laws – Cauchy's equations of motion
Cauchy's first law of motion
According to the principle of conservation of linear momentum, if the continuum body is in static equilibrium it can be demonstrated that the components of the Cauchy stress tensor in every material point in the body satisfy the equilibrium equations:
∂σji/∂xj + Fi = 0,
where σij are the components of the Cauchy stress tensor and Fi are the components of the body force per unit volume.
For example, for a hydrostatic fluid in equilibrium conditions, the stress tensor takes on the form:
σij = −p δij,
where p is the hydrostatic pressure, and δij is the Kronecker delta.
{| class="toccolours collapsible collapsed" width="60%" style="text-align:left"
!Derivation of equilibrium equations
|-
|Consider a continuum body (see Figure 4) occupying a volume , having a surface area , with defined traction or surface forces per unit area acting on every point of the body surface, and body forces per unit of volume on every point within the volume . Thus, if the body is in equilibrium the resultant force acting on the volume is zero, thus:
By definition the stress vector is , then
Using the Gauss's divergence theorem to convert a surface integral to a volume integral gives
For an arbitrary volume the integral vanishes, and we have the equilibrium equations
|}
Cauchy's second law of motion
According to the principle of conservation of angular momentum, equilibrium requires that the summation of moments with respect to an arbitrary point is zero, which leads to the conclusion that the stress tensor is symmetric, thus having only six independent stress components, instead of the original nine:
{| class="toccolours collapsible collapsed" width="60%" style="text-align:left"
!Derivation of symmetry of the stress tensor
|-
| Summing moments about point O (Figure 4) the resultant moment is zero as the body is in equilibrium. Thus,
where is the position vector and is expressed as
Knowing that and using Gauss's divergence theorem to change from a surface integral to a volume integral, we have
The second integral is zero as it contains the equilibrium equations. This leaves the first integral, where , therefore
For an arbitrary volume V, we then have
which is satisfied at every point within the body. Expanding this equation we have
σ12 = σ21, σ13 = σ31, and σ23 = σ32,
or in general
This proves that the stress tensor is symmetric
|}
However, in the presence of couple-stresses, i.e. moments per unit volume, the stress tensor is non-symmetric. This also is the case when the Knudsen number is close to one, , or the continuum is a non-Newtonian fluid, which can lead to rotationally non-invariant fluids, such as polymers.
Principal stresses and stress invariants
At every point in a stressed body there are at least three planes, called principal planes, with normal vectors , called principal directions, where the corresponding stress vector is perpendicular to the plane, i.e., parallel or in the same direction as the normal vector , and where there are no normal shear stresses . The three stresses normal to these principal planes are called principal stresses.
The components of the stress tensor depend on the orientation of the coordinate system at the point under consideration. However, the stress tensor itself is a physical quantity and as such, it is independent of the coordinate system chosen to represent it. There are certain invariants associated with every tensor which are also independent of the coordinate system. For example, a vector is a simple tensor of rank one. In three dimensions, it has three components. The value of these components will depend on the coordinate system chosen to represent the vector, but the magnitude of the vector is a physical quantity (a scalar) and is independent of the Cartesian coordinate system chosen to represent the vector (so long as it is normal). Similarly, every second rank tensor (such as the stress and the strain tensors) has three independent invariant quantities associated with it. One set of such invariants are the principal stresses of the stress tensor, which are just the eigenvalues of the stress tensor. Their direction vectors are the principal directions or eigenvectors.
A stress vector parallel to the normal unit vector $n_i$ is given by:
$$T^{(n)}_i = \lambda n_i = \sigma_\mathrm{n} n_i,$$
where $\lambda$ is a constant of proportionality, and in this particular case corresponds to the magnitudes of the normal stress vectors or principal stresses.
Knowing that $T^{(n)}_i = \sigma_{ij} n_j$ and $n_i = \delta_{ij} n_j$, we have
$$\left(\sigma_{ij} - \lambda \delta_{ij}\right) n_j = 0.$$
This is a homogeneous system of three linear equations, i.e. one with zero right-hand side, in which the $n_j$ are the unknowns. To obtain a nontrivial (non-zero) solution for $n_j$, the determinant of the coefficient matrix must be equal to zero, i.e. the system is singular. Thus,
$$\det\left(\sigma_{ij} - \lambda \delta_{ij}\right) = 0.$$
Expanding the determinant leads to the characteristic equation
$$-\lambda^3 + I_1 \lambda^2 - I_2 \lambda + I_3 = 0,$$
where
$$I_1 = \sigma_{11} + \sigma_{22} + \sigma_{33} = \sigma_{kk}, \qquad
I_2 = \sigma_{11}\sigma_{22} + \sigma_{22}\sigma_{33} + \sigma_{11}\sigma_{33} - \sigma_{12}^2 - \sigma_{23}^2 - \sigma_{31}^2, \qquad
I_3 = \det(\sigma_{ij}).$$
The characteristic equation has three real roots, i.e. none are imaginary, due to the symmetry of the stress tensor. The principal stresses $\sigma_1$, $\sigma_2$ and $\sigma_3$ are these three roots, i.e. the eigenvalues of the stress tensor, and they are unique for a given stress tensor. Therefore, from the characteristic equation, the coefficients $I_1$, $I_2$ and $I_3$, called the first, second, and third stress invariants, respectively, always have the same value regardless of the coordinate system's orientation.
For each eigenvalue, there is a non-trivial solution for $n_j$ in the equation $\left(\sigma_{ij} - \lambda \delta_{ij}\right) n_j = 0$. These solutions are the principal directions or eigenvectors defining the plane where the principal stresses act. The principal stresses and principal directions characterize the stress at a point and are independent of the orientation.
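The eigenvalue characterization above translates directly into a short numerical computation. The following sketch (added here for illustration, not part of the original text) uses NumPy on a made-up symmetric stress tensor; the numbers are arbitrary and only the relationships between eigenvalues and invariants matter.

```python
import numpy as np

# Hypothetical Cauchy stress tensor components (MPa); any symmetric 3x3 matrix works.
sigma = np.array([[50.0, 30.0,  20.0],
                  [30.0, -20.0, -10.0],
                  [20.0, -10.0,  10.0]])

# Principal stresses are the eigenvalues; principal directions are the eigenvectors.
eigvals, eigvecs = np.linalg.eigh(sigma)          # eigh exploits symmetry
s3, s2, s1 = eigvals                              # eigh returns ascending order
print("principal stresses:", s1, s2, s3)

# Stress invariants, computed from the components; they are coordinate-independent.
I1 = np.trace(sigma)
I2 = 0.5 * (np.trace(sigma)**2 - np.trace(sigma @ sigma))
I3 = np.linalg.det(sigma)

# Check: the same invariants expressed through the principal stresses.
assert np.isclose(I1, s1 + s2 + s3)
assert np.isclose(I2, s1*s2 + s2*s3 + s3*s1)
assert np.isclose(I3, s1*s2*s3)
```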
A coordinate system with axes oriented to the principal directions implies that the normal stresses are the principal stresses and the stress tensor is represented by a diagonal matrix:
$$\sigma_{ij} = \begin{bmatrix}\sigma_1 & 0 & 0\\ 0 & \sigma_2 & 0\\ 0 & 0 & \sigma_3\end{bmatrix}.$$
The principal stresses can be combined to form the stress invariants $I_1$, $I_2$, and $I_3$. The first and third invariant are the trace and determinant of the stress tensor, respectively. Thus,
$$I_1 = \sigma_1 + \sigma_2 + \sigma_3, \qquad I_2 = \sigma_1\sigma_2 + \sigma_2\sigma_3 + \sigma_3\sigma_1, \qquad I_3 = \sigma_1\sigma_2\sigma_3.$$
Because of its simplicity, the principal coordinate system is often useful when considering the state of the elastic medium at a particular point. Principal stresses are often expressed in the following equation for evaluating stresses in the x and y directions or axial and bending stresses on a part:
$$\sigma_{1,2} = \frac{\sigma_x + \sigma_y}{2} \pm \sqrt{\left(\frac{\sigma_x - \sigma_y}{2}\right)^2 + \tau_{xy}^2}.$$
The principal normal stresses can then be used to calculate the von Mises stress and ultimately the safety factor and margin of safety.
The part of the equation under the square root, taken with the plus and minus sign respectively, equals the maximum and minimum in-plane shear stress:
$$\tau_{\max},\ \tau_{\min} = \pm \sqrt{\left(\frac{\sigma_x - \sigma_y}{2}\right)^2 + \tau_{xy}^2}.$$
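As a quick check of the plane-stress expressions above, the following sketch (with made-up stress values assumed for this example) evaluates the in-plane principal stresses and the in-plane maximum shear stress.

```python
import numpy as np

# Hypothetical plane-stress state (MPa): normal stresses sx, sy and shear txy.
sx, sy, txy = 80.0, -40.0, 25.0

center = (sx + sy) / 2.0
radius = np.sqrt(((sx - sy) / 2.0)**2 + txy**2)   # the square-root term in the formula above
s1, s2 = center + radius, center - radius          # in-plane principal stresses
tau_max_inplane = radius                           # in-plane maximum shear stress (minimum is -radius)
print(s1, s2, tau_max_inplane)
```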
Maximum and minimum shear stresses
The maximum shear stress or maximum principal shear stress is equal to one-half the difference between the largest and smallest principal stresses, and acts on the plane that bisects the angle between the directions of the largest and smallest principal stresses, i.e. the plane of the maximum shear stress is oriented 45° from the principal stress planes. The maximum shear stress is expressed as
$$\tau_{\max} = \tfrac{1}{2}\left|\sigma_{\max} - \sigma_{\min}\right|.$$
Assuming $\sigma_1 \ge \sigma_2 \ge \sigma_3$, then
$$\tau_{\max} = \tfrac{1}{2}\left(\sigma_1 - \sigma_3\right).$$
When the stress tensor is non zero the normal stress component acting on the plane for the maximum shear stress is non-zero and it is equal to
$$\sigma_\mathrm{n} = \tfrac{1}{2}\left(\sigma_1 + \sigma_3\right).$$
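A minimal sketch of these relations, assuming illustrative principal stresses already ordered as s1 ≥ s2 ≥ s3:

```python
# Illustrative principal stresses (MPa), sorted s1 >= s2 >= s3.
s1, s2, s3 = 120.0, 40.0, -30.0
tau_max = 0.5 * (s1 - s3)      # maximum shear stress
sigma_n = 0.5 * (s1 + s3)      # normal stress on the plane of maximum shear
print(tau_max, sigma_n)
```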
{| class="toccolours collapsible collapsed" width="60%" style="text-align:left"
!Derivation of the maximum and minimum shear stresses
|-
|The normal stress can be written in terms of principal stresses as
Knowing that , the shear stress in terms of principal stress components is expressed as
The maximum shear stress at a point in a continuum body is determined by maximizing subject to the condition that
This is a constrained maximization problem, which can be solved using the Lagrangian multiplier technique to convert the problem into an unconstrained optimization problem. Thus, the stationary values (maximum and minimum values) of occur where the gradient of is parallel to the gradient of .
The Lagrangian function for this problem can be written as
where is the Lagrangian multiplier (which is distinct from the symbol used earlier to denote the eigenvalues).
The extreme values of these functions are
thence
These three equations together with the condition may be solved for and
By multiplying the first three equations by and , respectively, and knowing that we obtain
Adding these three equations we get
this result can be substituted into each of the first three equations to obtain
Doing the same for the other two equations we have
A first approach to solve these last three equations is to consider the trivial solution . However, this option does not fulfill the constraint .
Considering the solution where and , it is determined from the condition that , and then from the original equation for it is seen that .
The other two possible values for can be obtained similarly by assuming
and
and
Thus, one set of solutions for these four equations is:
These correspond to minimum values for and verify that there are no shear stresses on planes normal to the principal directions of stress, as shown previously.
A second set of solutions is obtained by assuming and . Thus we have
To find the values for and we first add these two equations
Knowing that for
and
we have
and solving for we have
Then solving for we have
and
The other two possible values for can be obtained similarly by assuming
and
and
Therefore, the second set of solutions for , representing a maximum for is
Therefore, assuming , the maximum shear stress is expressed by
and it can be stated as being equal to one-half the difference between the largest and smallest principal stresses, acting on the plane that bisects the angle between the directions of the largest and smallest principal stresses.
Stress deviator tensor
The stress tensor can be expressed as the sum of two other stress tensors:
a mean hydrostatic stress tensor or volumetric stress tensor or mean normal stress tensor, $\pi\delta_{ij}$, which tends to change the volume of the stressed body; and
a deviatoric component called the stress deviator tensor, $s_{ij}$, which tends to distort it.
So
$$\sigma_{ij} = s_{ij} + \pi\delta_{ij},$$
where $\pi$ is the mean stress given by
$$\pi = \frac{\sigma_{11} + \sigma_{22} + \sigma_{33}}{3} = \tfrac{1}{3}\sigma_{kk} = \tfrac{1}{3}I_1.$$
Pressure (p) is generally defined as negative one-third the trace of the stress tensor minus any stress contributed by the divergence of the velocity, i.e.
where is a proportionality constant (viz. the first of the Lamé parameters), is the divergence operator, is the k-th Cartesian coordinate, is the flow velocity and is the k-th Cartesian component of .
The deviatoric stress tensor can be obtained by subtracting the hydrostatic stress tensor from the Cauchy stress tensor:
$$s_{ij} = \sigma_{ij} - \pi\delta_{ij}.$$
Invariants of the stress deviator tensor
As it is a second order tensor, the stress deviator tensor also has a set of invariants, which can be obtained using the same procedure used to calculate the invariants of the stress tensor. It can be shown that the principal directions of the stress deviator tensor are the same as the principal directions of the stress tensor. Thus, the characteristic equation is
$$\lambda^3 - J_1\lambda^2 - J_2\lambda - J_3 = 0,$$
where $J_1$, $J_2$ and $J_3$ are the first, second, and third deviatoric stress invariants, respectively. Their values are the same (invariant) regardless of the orientation of the coordinate system chosen. These deviatoric stress invariants can be expressed as a function of the components of $s_{ij}$ or its principal values $s_1$, $s_2$, and $s_3$, or alternatively, as a function of $\sigma_{ij}$ or its principal values $\sigma_1$, $\sigma_2$, and $\sigma_3$. Thus,
$$J_1 = s_{kk} = 0, \qquad
J_2 = \tfrac{1}{2} s_{ij} s_{ji} = \tfrac{1}{6}\left[(\sigma_{11}-\sigma_{22})^2 + (\sigma_{22}-\sigma_{33})^2 + (\sigma_{33}-\sigma_{11})^2\right] + \sigma_{12}^2 + \sigma_{23}^2 + \sigma_{31}^2, \qquad
J_3 = \det(s_{ij}).$$
Because $s_{kk} = 0$, the stress deviator tensor is in a state of pure shear.
A quantity called the equivalent stress or von Mises stress is commonly used in solid mechanics. The equivalent stress is defined as
$$\sigma_\mathrm{vM} = \sqrt{3 J_2} = \sqrt{\tfrac{1}{2}\left[(\sigma_1-\sigma_2)^2 + (\sigma_2-\sigma_3)^2 + (\sigma_3-\sigma_1)^2\right]}.$$
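The decomposition and the equivalent stress are straightforward to compute numerically. Here is a small sketch with the same made-up stress tensor used earlier; it is an illustration, not a statement about any particular material.

```python
import numpy as np

# Same hypothetical stress tensor as in the principal-stress sketch (MPa).
sigma = np.array([[50.0, 30.0,  20.0],
                  [30.0, -20.0, -10.0],
                  [20.0, -10.0,  10.0]])

mean_stress = np.trace(sigma) / 3.0
s = sigma - mean_stress * np.eye(3)      # stress deviator tensor
J2 = 0.5 * np.trace(s @ s)               # second invariant of the deviator
von_mises = np.sqrt(3.0 * J2)            # equivalent (von Mises) stress
assert np.isclose(np.trace(s), 0.0)      # deviator is trace-free: a state of pure shear
print(mean_stress, von_mises)
```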
Octahedral stresses
Considering the principal directions as the coordinate axes, a plane whose normal vector makes equal angles with each of the principal axes (i.e. having direction cosines equal to $|1/\sqrt{3}|$) is called an octahedral plane. There are a total of eight octahedral planes (Figure 6). The normal and shear components of the stress tensor on these planes are called the octahedral normal stress $\sigma_\mathrm{oct}$ and the octahedral shear stress $\tau_\mathrm{oct}$, respectively. The octahedral plane passing through the origin is known as the π-plane (π not to be confused with the mean stress denoted by π in the section above). On the π-plane, .
Knowing that the stress tensor of point O (Figure 6) in the principal axes is
$$\sigma_{ij} = \begin{bmatrix}\sigma_1 & 0 & 0\\ 0 & \sigma_2 & 0\\ 0 & 0 & \sigma_3\end{bmatrix},$$
the stress vector on an octahedral plane, with unit normal $n_i = \left(\tfrac{1}{\sqrt 3}, \tfrac{1}{\sqrt 3}, \tfrac{1}{\sqrt 3}\right)$, is then given by:
$$T^{(\mathrm{oct})}_i = \sigma_{ij} n_j = \tfrac{1}{\sqrt 3}\left(\sigma_1, \sigma_2, \sigma_3\right).$$
The normal component of the stress vector at point O associated with the octahedral plane is
$$\sigma_\mathrm{oct} = T^{(\mathrm{oct})}_i n_i = \tfrac{1}{3}\left(\sigma_1 + \sigma_2 + \sigma_3\right) = \tfrac{1}{3} I_1,$$
which is the mean normal stress or hydrostatic stress. This value is the same in all eight octahedral planes.
The shear stress on the octahedral plane is then
$$\tau_\mathrm{oct} = \tfrac{1}{3}\sqrt{\left(\sigma_1 - \sigma_2\right)^2 + \left(\sigma_2 - \sigma_3\right)^2 + \left(\sigma_3 - \sigma_1\right)^2} = \sqrt{\tfrac{2}{3} J_2}.$$
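A short sketch evaluating the octahedral stresses from illustrative principal stresses (the same made-up values as in the earlier examples):

```python
import numpy as np

# Illustrative principal stresses (MPa).
s1, s2, s3 = 120.0, 40.0, -30.0
sigma_oct = (s1 + s2 + s3) / 3.0                                      # octahedral normal stress
tau_oct = np.sqrt((s1 - s2)**2 + (s2 - s3)**2 + (s3 - s1)**2) / 3.0   # octahedral shear stress
print(sigma_oct, tau_oct)
```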
See also
Cauchy momentum equation
Critical plane analysis
Stress–energy tensor
Notes
References
Tensor physical quantities
Solid mechanics
Continuum mechanics
Structural analysis | Cauchy stress tensor | [
"Physics",
"Mathematics",
"Engineering"
] | 5,031 | [
"Structural engineering",
"Solid mechanics",
"Tensors",
"Physical quantities",
"Continuum mechanics",
"Quantity",
"Structural analysis",
"Tensor physical quantities",
"Classical mechanics",
"Mechanics",
"Mechanical engineering",
"Aerospace engineering"
] |
4,503,954 | https://en.wikipedia.org/wiki/FOUP | FOUP (an acronym for Front Opening Unified Pod or Front Opening Universal Pod) is a specialized plastic carrier designed to hold silicon wafers securely and safely in a controlled environment, and to allow the wafers to be transferred between machines for processing or measurement.
FOUPs began to appear along with the first 300 mm wafer processing tools in the mid-1990s. The size of the wafers and their comparative lack of rigidity meant that SMIF pods were not a viable form factor. FOUP standards were developed by SEMI and SEMI members to ensure that FOUPs and all equipment that interacts with FOUPs work together seamlessly. Transitioning from a SMIF pod to a FOUP design, the removable cassette used to hold wafers was replaced by fixed wafer columns. The door was relocated from a bottom orientation to a front orientation, where automated handling equipment can access the wafers. Pitch for a 300 mm FOUP is 10 mm, while 13-slot FOUPs can have a pitch up to 20 mm. The weight of a fully loaded 25-wafer FOUP is between 7 and 9 kilograms, which means that automated material handling systems are essential for all but the smallest of fabrication plants. To allow this, each FOUP has coupling plates and interface holes to allow the FOUP to be positioned on a load port, and to be picked up and transferred by the AMHS (Automated Material Handling System) to other process tools or to storage locations such as a stocker or undertrack storage. FOUPs may use RF tags that allow them to be identified by RF readers on tools or AMHS. FOUPs are available in several colors, depending on the customer's wish.
FOUPs have begun to offer the capability for process, measurement and storage tools to apply a purge gas, in an effort to increase device yield. FOUPs can be purged inside a FOUP stocker or at the equipment accessing the FOUP.
FOSB
FOSB is an acronym for Front Opening Shipping Box. FOSBs are used for transporting wafers between manufacturing facilities.
Manufacturers
ePAK
3S Korea
CKplas
Dainichi Shoji
Entegris
E-SUN System Technology
Gudeng Precision
Mirail
Shin-Etsu Polymer
See also
SMIF
Wafer fabrication equipment
References
Semiconductor fabrication equipment
Semiconductor device fabrication | FOUP | [
"Materials_science",
"Engineering"
] | 484 | [
"Semiconductor device fabrication",
"Semiconductor fabrication equipment",
"Microtechnology"
] |
4,506,313 | https://en.wikipedia.org/wiki/2-Mercaptoethanol | 2-Mercaptoethanol (also β-mercaptoethanol, BME, 2BME, 2-ME or β-met) is the chemical compound with the formula HOCH2CH2SH. ME or βME, as it is commonly abbreviated, is used to reduce disulfide bonds and can act as a biological antioxidant by scavenging hydroxyl radicals (amongst others). It is widely used because the hydroxyl group confers solubility in water and lowers the volatility. Due to its diminished vapor pressure, its odor, while unpleasant, is less objectionable than related thiols.
Production
2-Mercaptoethanol is manufactured industrially by the reaction of ethylene oxide with hydrogen sulfide. Thiodiglycol and various zeolites catalyze the reaction.
Reactions
2-Mercaptoethanol reacts with aldehydes and ketones to give the corresponding oxathiolanes. This makes 2-mercaptoethanol useful as a protecting group, giving a derivative whose stability is between that of a dioxolane and a dithiolane.
Applications
Reducing proteins
Some proteins can be denatured by 2-mercaptoethanol, which cleaves the disulfide bonds that may form between thiol groups of cysteine residues. In the case of excess 2-mercaptoethanol, the following equilibrium is shifted to the right:
RS–SR + 2 HOCH2CH2SH 2 RSH + HOCH2CH2S–SCH2CH2OH
By breaking the S-S bonds, both the tertiary structure and the quaternary structure of some proteins can be disrupted. Because of its ability to disrupt the structure of proteins, it was used in the analysis of proteins, for instance, to ensure that a protein solution contains monomeric protein molecules, instead of disulfide-linked dimers or higher order oligomers. However, since 2-mercaptoethanol forms adducts with free cysteines and is somewhat more toxic, dithiothreitol (DTT) is generally preferred, especially in SDS-PAGE. DTT is also a more powerful reducing agent with a redox potential (at pH 7) of −0.33 V, compared to −0.26 V for 2-mercaptoethanol.
2-Mercaptoethanol is often used interchangeably with dithiothreitol (DTT) or the odorless tris(2-carboxyethyl)phosphine (TCEP) in biological applications.
Although 2-mercaptoethanol has a higher volatility than DTT, it is more stable: 2-mercaptoethanol's half-life is more than 100 hours at pH 6.5 and 4 hours at pH 8.5; DTT's half-life is 40 hours at pH 6.5 and 1.5 hours at pH 8.5.
Preventing protein oxidation
2-Mercaptoethanol and related reducing agents (e.g., DTT) are often included in enzymatic reactions to inhibit the oxidation of free sulfhydryl residues, and hence maintain protein activity. It is often used in enzyme assays as a standard buffer component.
Denaturing ribonucleases
2-Mercaptoethanol is used in some RNA isolation procedures to eliminate ribonuclease released during cell lysis. Numerous disulfide bonds make ribonucleases very stable enzymes, so 2-mercaptoethanol is used to reduce these disulfide bonds and irreversibly denature the proteins. This prevents them from digesting the RNA during its extraction procedure.
Deprotecting carbamates
Some carbamate protecting groups such as carboxybenzyl (Cbz) or allyloxycarbonyl (alloc) can be deprotected using 2-mercaptoethanol in the presence of potassium phosphate in dimethylacetamide.
Safety
2-Mercaptoethanol is considered toxic, causing irritation to the nasal passageways and respiratory tract upon inhalation, irritation to the skin, vomiting and stomach pain through ingestion, and potentially death if severe exposure occurs.
See also
Dithiothreitol (DTT)
Dithiobutylamine (DTBA)
TCEP
References
Thiols
Primary alcohols
Reducing agents
Foul-smelling chemicals | 2-Mercaptoethanol | [
"Chemistry"
] | 943 | [
"Organic compounds",
"Thiols",
"Redox",
"Reducing agents"
] |
4,508,797 | https://en.wikipedia.org/wiki/King%27s%20graph | In graph theory, a king's graph is a graph that represents all legal moves of the king chess piece on a chessboard where each vertex represents a square on a chessboard and each edge is a legal move. More specifically, an king's graph is a king's graph of an chessboard. It is the map graph formed from the squares of a chessboard by making a vertex for each square and an edge for each two squares that share an edge or a corner. It can also be constructed as the strong product of two path graphs.
For an king's graph the total number of vertices is and the number of edges is . For a square king's graph this simplifies so that the total number of vertices is and the total number of edges is .
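These counts can be verified by brute force: two squares are adjacent in the king's graph exactly when their Chebyshev distance is 1. The following Python sketch (written for this summary, not taken from any graph library) checks the closed-form edge count for a few board sizes.

```python
from itertools import product

def king_graph_edges(n, m):
    """Number of edges in the n x m king's graph, counted directly."""
    squares = list(product(range(n), range(m)))
    count = 0
    for (r1, c1), (r2, c2) in product(squares, repeat=2):
        # Count each unordered pair once; adjacency = Chebyshev distance 1.
        if (r1, c1) < (r2, c2) and max(abs(r1 - r2), abs(c1 - c2)) == 1:
            count += 1
    return count

# Closed-form count quoted above: n*m vertices and 4*n*m - 3*(n + m) + 2 edges.
for n, m in [(3, 4), (5, 5), (8, 8)]:
    assert king_graph_edges(n, m) == 4*n*m - 3*(n + m) + 2
```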
The neighbourhood of a vertex in the king's graph corresponds to the Moore neighborhood for cellular automata.
A generalization of the king's graph, called a kinggraph, is formed from a squaregraph (a planar graph in which each bounded face is a quadrilateral and each interior vertex has at least four neighbors) by adding the two diagonals of every quadrilateral face of the squaregraph.
In the drawing of a king's graph obtained from an chessboard, there are crossings, but it is possible to obtain a drawing with fewer crossings by connecting the two nearest neighbors of each corner square by a curve outside the chessboard instead of by a diagonal line segment. In this way, crossings are always possible. For the king's graph of small chessboards, other drawings lead to even fewer crossings; in particular every king's graph is a planar graph. However, when both and are at least four, and they are not both equal to four, is the optimal number of crossings.
See also
Knight's graph
Queen's graph
Rook's graph
Bishop's graph
Lattice graph
References
Mathematical chess problems
Parametric families of graphs | King's graph | [
"Mathematics"
] | 397 | [
"Recreational mathematics",
"Mathematical chess problems"
] |
19,135,050 | https://en.wikipedia.org/wiki/Compressed%20hydrogen | Compressed hydrogen (CH2, CGH2 or CGH2) is the gaseous state of the element hydrogen kept under pressure. Compressed hydrogen in hydrogen tanks at 350 bar (5,000 psi) and 700 bar (10,000 psi) is used for mobile hydrogen storage in hydrogen vehicles. It is used as a fuel gas.
Infrastructure
Compressed hydrogen is used in hydrogen pipeline transport and in compressed hydrogen tube trailer transport.
See also
Combined cycle powered railway locomotive
Cryo-adsorption
Gas compressor
Gasoline gallon equivalent
Hydrogen compressor
Hydrogen safety
Liquid hydrogen
Liquefaction of gases
Metallic hydrogen
Slush hydrogen
Standard cubic foot
Timeline of hydrogen technologies
References
External links
COMPRESSED HYDROGEN INFRASTRUCTURE PROGRAM ("CH2IP")
Hydrogen physics
Hydrogen technologies
Hydrogen storage
Fuel gas
Gases
Industrial gases | Compressed hydrogen | [
"Physics",
"Chemistry"
] | 154 | [
"Matter",
"Phases of matter",
"Industrial gases",
"Chemical process engineering",
"Statistical mechanics",
"Gases"
] |
19,137,315 | https://en.wikipedia.org/wiki/Solid%20hydrogen | Solid hydrogen is the solid state of the element hydrogen. At standard pressure, this is achieved by decreasing the temperature below hydrogen's melting point of . It was collected for the first time by James Dewar in 1899 and published with the title "Sur la solidification de l'hydrogène" (English: On the freezing of hydrogen) in the Annales de Chimie et de Physique, 7th series, vol. 18, Oct. 1899. Solid hydrogen has a density of 0.086 g/cm3 making it one of the lowest-density solids.
Molecular solid hydrogen
At low temperatures and at pressures up to around , hydrogen forms a series of solid phases formed from discrete H2 molecules. Phase I occurs at low temperatures and pressures, and consists of a hexagonal close-packed array of freely rotating H2 molecules. Upon increasing the pressure at low temperature, a transition to Phase II occurs at up to 110 GPa. Phase II is a broken-symmetry structure in which the H2 molecules are no longer able to rotate freely. If the pressure is further increased at low temperature, a Phase III is encountered at about 160 GPa. Upon increasing the temperature, a transition to a Phase IV occurs at a temperature of a few hundred kelvin at a range of pressures above 220 GPa.
Identifying the atomic structures of the different phases of molecular solid hydrogen is extremely challenging, because hydrogen atoms interact with X-rays very weakly and only small samples of solid hydrogen can be achieved in diamond anvil cells, so that X-ray diffraction provides very limited information about the structures. Nevertheless, phase transitions can be detected by looking for abrupt changes in the Raman spectra of samples. Furthermore, atomic structures can be inferred from a combination of experimental Raman spectra and first-principles modelling. Density functional theory calculations have been used to search for candidate atomic structures for each phase. These candidate structures have low free energies and Raman spectra in agreement with the experimental spectra. Quantum Monte Carlo methods together with a first-principles treatment of anharmonic vibrational effects have then been used to obtain the relative Gibbs free energies of these structures and hence to obtain a theoretical pressure-temperature phase diagram that is in reasonable quantitative agreement with experiment. On this basis, Phase II is believed to be a molecular structure of P21/c symmetry; Phase III is (or is similar to) a structure of C2/c symmetry consisting of flat layers of molecules in a distorted hexagonal arrangement; and Phase IV is (or is similar to) a structure of Pc symmetry, consisting of alternate layers of strongly bonded molecules and weakly bonded graphene-like sheets.
See also
Compressed hydrogen
Liquid hydrogen
Metallic hydrogen
Slush hydrogen
Timeline of hydrogen technologies
References
Further reading
Melting Characteristics and Bulk Thermophysical Properties of Solid Hydrogen", Air Force Rocket Propulsion Laboratory, Technical Report, 1972
External links
"Properties of solid hydrogen at very low temperatures" (2001)
"Solid hydrogen experiments for atomic propellants"
Hydrogen physics
Solid-state chemistry
Cryogenics
Ice
fr:Hydrogène solide | Solid hydrogen | [
"Physics",
"Chemistry",
"Materials_science"
] | 623 | [
"Applied and interdisciplinary physics",
"Cryogenics",
"Condensed matter physics",
"nan",
"Solid-state chemistry"
] |
19,144,088 | https://en.wikipedia.org/wiki/Antoine%20Brice | Antoine Brice (26 May 1752, in Brussels, Austrian Netherlands – 23 January 1817, in Brussels, United Kingdom of the Netherlands) was a painter from Brussels.
Life
Antoine Brice was the son of the painter Pierre-François Brice, working in the entourage of Prince Charles Alexander of Lorraine, and his own son Ignace also became a painter. Antoine began his training as a painter under his father at the Brussels Court and was made a master by the Corporation of Painters of Brussels on 5 February 1783. In the meantime he had also followed a more classical training at the Academy of Painting, Sculpture and Architecture of Brussels, where he won first prize in 1776. This training and the entourage of the governor-general's court led him, at the end of the 18th century and the end of the Austrian regime in Brussels, to become a kind of official painter to the city's aristocratic circles.
He became a professor at the Brussels academy and there headed a course on classical art and the principles of drawing. His students included Jean Baptiste Madou. In 1810 he joined with the painters Antoine Cardon, Charles Verhulst and François-Joseph Navez to found a "Société des Amateurs d'Arts".
References
1752 births
1817 deaths
Painters from Brussels
Painters from the Austrian Netherlands
Draughtsmen | Antoine Brice | [
"Engineering"
] | 263 | [
"Design engineering",
"Draughtsmen"
] |
19,145,299 | https://en.wikipedia.org/wiki/Tee%20%28symbol%29 | The tee (⊤, \top in LaTeX), also called down tack (as opposed to the up tack) or verum, is a symbol used to represent:
The top element in lattice theory.
The truth value of being true in logic, or a sentence (e.g., formula in propositional calculus) which is unconditionally true. By definition, every tautology is logically equivalent to the verum.
The top type in type theory.
Mixed radix encoding in the APL programming language.
A lowered phonic in the International Phonetic Alphabet and phonetics. In this usage, it is usually written under the primary IPA symbol.
A similar-looking superscript T may be used to mean the transpose of a matrix.
Encoding
In Unicode, the tee character is encoded as U+22A4 ⊤ DOWN TACK. The symbol is encoded in LaTeX as \top.
A large variant is encoded as U+27D9 ⟙ LARGE DOWN TACK in the Unicode block Miscellaneous Mathematical Symbols-A.
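For illustration, the code points quoted above can be printed directly from Python escape sequences (assuming a font that covers these symbols):

```python
# Print the down tack and its large variant from their Unicode code points.
print("\u22a4")   # U+22A4 DOWN TACK
print("\u27d9")   # U+27D9 LARGE DOWN TACK
```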
See also
Turnstile (⊢)
Up tack (⊥)
List of logic symbols
List of mathematical symbols
Notes
Logic symbols
Typographical symbols | Tee (symbol) | [
"Mathematics"
] | 222 | [
"Typographical symbols",
"Symbols",
"Mathematical symbols",
"Logic symbols"
] |
10,078,099 | https://en.wikipedia.org/wiki/Bandelet%20%28computer%20science%29 | Bandelets are an orthonormal basis that is adapted to geometric boundaries. Bandelets can be interpreted as a warped wavelet basis. The motivation behind bandelets is to perform a transform on functions defined as smooth functions on smoothly bounded domains. As bandelet construction utilizes wavelets, many of the results follow. Similar approaches to take account of geometric structure were taken for contourlets and curvelets.
See also
Wavelet
Multiresolution analysis
Scale space
References
External links
Bandelet toolbox on MatLab Central
Wavelets | Bandelet (computer science) | [
"Technology"
] | 113 | [
"Computing stubs",
"Computer science",
"Computer science stubs"
] |
10,081,505 | https://en.wikipedia.org/wiki/MSCRAMM | MSCRAMM (acronym for "microbial surface components recognizing adhesive matrix molecules") are adhesin proteins that mediate the initial attachment of bacteria to host tissue, providing a critical step to establish infection.
Examples include clumping factor A (ClfA), fibronectin binding protein A (FnbpA) from Staphylococcus aureus, SdrG from Staphylococcus epidermidis, M protein from Streptococcus pyogenes, and protein G in other Streptococcus species. All of these MSCRAMMs bind to fibrinogen, but also other targets for MSCRAMMs are known, such as fibronectin. Protein M binds to the Fc region of certain antibodies.
The MSCRAMMs have mainly been studied in Gram positive pathogens and are promising drug targets.
The monoclonal antibody tefibazumab targets ClfA and has been tested in a phase II trial.
Staphylococcus aureus
An example for MSCRAMMs is S. aureus. On its surface, protein A is expressed, which binds to the Fc region of IgG antibodies (the default antibody type, dealing with bacterial and viral infections). This has an antiphagocytic effect, i.e. macrophages cannot "see" these bacteria as easily as if they were correctly opsonised by antibody. Also, S. aureus expresses fibronectin-binding proteins, which promote binding to mucosal cells and tissue matrices. This protein is also referred to as clumping factor.
References
Bacterial proteins | MSCRAMM | [
"Chemistry"
] | 331 | [
"Biochemistry stubs",
"Protein stubs"
] |
10,082,867 | https://en.wikipedia.org/wiki/Vehicular%20metrics | There are a broad range of metrics that denote the relative capabilities of various vehicles. Most of them apply to all vehicles while others are type-specific.
See also
References
External links
Car Performance Meters
Vehicles
Metrics | Vehicular metrics | [
"Physics",
"Mathematics"
] | 44 | [
"Vehicles",
"Metrics",
"Quantity",
"Physical systems",
"Transport"
] |
10,083,278 | https://en.wikipedia.org/wiki/5-cubic%20honeycomb | In geometry, the 5-cubic honeycomb or penteractic honeycomb is the only regular space-filling tessellation (or honeycomb) in Euclidean 5-space. Four 5-cubes meet at each cubic cell, and it is more explicitly called an order-4 penteractic honeycomb.
It is analogous to the square tiling of the plane and to the cubic honeycomb of 3-space, and the tesseractic honeycomb of 4-space.
Constructions
There are many different Wythoff constructions of this honeycomb. The most symmetric form is regular, with Schläfli symbol {4,3^3,4}. Another form has two alternating 5-cube facets (like a checkerboard) with Schläfli symbol {4,3,3,3^{1,1}}. The lowest symmetry Wythoff construction has 32 types of facets around each vertex and a prismatic product Schläfli symbol {∞}^{(5)}.
Related polytopes and honeycombs
The [4,3^3,4] Coxeter group generates 63 permutations of uniform tessellations, 35 with unique symmetry and 34 with unique geometry. The expanded 5-cubic honeycomb is geometrically identical to the 5-cubic honeycomb.
The 5-cubic honeycomb can be alternated into the 5-demicubic honeycomb, replacing the 5-cubes with 5-demicubes, and the alternated gaps are filled by 5-orthoplex facets.
It is also related to the regular 6-cube which exists in 6-space with three 5-cubes on each cell. This could be considered as a tessellation on the 5-sphere, an order-3 penteractic honeycomb, {4,3^4}.
The Penrose tilings are 2-dimensional aperiodic tilings that can be obtained as a projection of the 5-cubic honeycomb along a 5-fold rotational axis of symmetry. The vertices correspond to points in the 5-dimensional cubic lattice, and the tiles are formed by connecting points in a predefined manner.
Tritruncated 5-cubic honeycomb
A tritruncated 5-cubic honeycomb, , contains all bitruncated 5-orthoplex facets and is the Voronoi tessellation of the D5* lattice. Facets can be identically colored from a doubled ×2, [[4,3^3,4]] symmetry, alternately colored from , [4,3^3,4] symmetry, three colors from , [4,3,3,3^{1,1}] symmetry, and 4 colors from , [3^{1,1},3,3^{1,1}] symmetry.
See also
List of regular polytopes
Regular and uniform honeycombs in 5-space:
5-demicubic honeycomb
5-simplex honeycomb
Truncated 5-simplex honeycomb
Omnitruncated 5-simplex honeycomb
References
Coxeter, H.S.M. Regular Polytopes, (3rd edition, 1973), Dover edition, p. 296, Table II: Regular honeycombs
Kaleidoscopes: Selected Writings of H. S. M. Coxeter, edited by F. Arthur Sherk, Peter McMullen, Anthony C. Thompson, Asia Ivic Weiss, Wiley-Interscience Publication, 1995,
(Paper 24) H.S.M. Coxeter, Regular and Semi-Regular Polytopes III, [Math. Zeit. 200 (1988) 3-45]
Honeycombs (geometry)
6-polytopes
Regular tessellations | 5-cubic honeycomb | [
"Physics",
"Chemistry",
"Materials_science"
] | 747 | [
"Regular tessellations",
"Honeycombs (geometry)",
"Tessellation",
"Crystallography",
"Symmetry"
] |
10,083,813 | https://en.wikipedia.org/wiki/IntruShield | The McAfee IntruShield is a network-based intrusion prevention sensor appliance that is used in prevention of zero-day, DoS (Denial of Service) attacks, spyware, malware, botnets and VoIP threats. It is now called McAfee Network Security Platform.
References
Computer network security | IntruShield | [
"Engineering"
] | 67 | [
"Cybersecurity engineering",
"Computer networks engineering",
"Computer network security"
] |
18,119,040 | https://en.wikipedia.org/wiki/Apeirogonal%20prism | In geometry, an apeirogonal prism or infinite prism is the arithmetic limit of the family of prisms; it can be considered an infinite polyhedron or a tiling of the plane.
Thorold Gosset called it a 2-dimensional semi-check, like a single row of a checkerboard.
If the sides are squares, it is a uniform tiling. If colored with two sets of alternating squares it is still uniform.
Related tilings and polyhedra
The apeirogonal tiling is the arithmetic limit of the family of prisms t{2, p} or p.4.4, as p tends to infinity, thereby turning the prism into a Euclidean tiling.
An alternation operation can create an apeirogonal antiprism composed of three triangles and one apeirogon at each vertex.
Similarly to the uniform polyhedra and the uniform tilings, eight uniform tilings may be based from the regular apeirogonal tiling. The rectified and cantellated forms are duplicated, and as two times infinity is also infinity, the truncated and omnitruncated forms are also duplicated, therefore reducing the number of unique forms to four: the apeirogonal tiling, the apeirogonal hosohedron, the apeirogonal prism, and the apeirogonal antiprism.
Notes
References
T. Gosset: On the Regular and Semi-Regular Figures in Space of n Dimensions, Messenger of Mathematics, Macmillan, 1900
Apeirogonal tilings
Euclidean tilings
Isogonal tilings
Prismatoid polyhedra | Apeirogonal prism | [
"Physics",
"Mathematics"
] | 321 | [
"Isogonal tilings",
"Tessellation",
"Euclidean plane geometry",
"Euclidean tilings",
"Planes (geometry)",
"Symmetry"
] |
18,119,626 | https://en.wikipedia.org/wiki/Apeirogonal%20antiprism | In geometry, an apeirogonal antiprism or infinite antiprism is the arithmetic limit of the family of antiprisms; it can be considered an infinite polyhedron or a tiling of the plane.
If the sides are equilateral triangles, it is a uniform tiling. In general, it can have two sets of alternating congruent isosceles triangles, surrounded by two half-planes.
Related tilings and polyhedra
The apeirogonal antiprism is the arithmetic limit of the family of antiprisms sr{2, p} or p.3.3.3, as p tends to infinity, thereby turning the antiprism into a Euclidean tiling.
Similarly to the uniform polyhedra and the uniform tilings, eight uniform tilings may be based from the regular apeirogonal tiling. The rectified and cantellated forms are duplicated, and as two times infinity is also infinity, the truncated and omnitruncated forms are also duplicated, therefore reducing the number of unique forms to four: the apeirogonal tiling, the apeirogonal hosohedron, the apeirogonal prism, and the apeirogonal antiprism.
Notes
References
The Symmetries of Things 2008, John H. Conway, Heidi Burgiel, Chaim Goodman-Strauss,
T. Gosset: On the Regular and Semi-Regular Figures in Space of n Dimensions, Messenger of Mathematics, Macmillan, 1900
Apeirogonal tilings
Euclidean tilings
Isogonal tilings
Prismatoid polyhedra | Apeirogonal antiprism | [
"Physics",
"Mathematics"
] | 321 | [
"Isogonal tilings",
"Tessellation",
"Euclidean plane geometry",
"Euclidean tilings",
"Planes (geometry)",
"Symmetry"
] |
18,121,901 | https://en.wikipedia.org/wiki/Michell%20solution | In continuum mechanics, the Michell solution is a general solution to the elasticity equations in polar coordinates () developed by John Henry Michell in 1899. The solution is such that the stress components are in the form of a Fourier series in .
Michell showed that the general solution can be expressed in terms of an Airy stress function of the form
The terms and define a trivial null state of stress and are ignored.
Stress components
The stress components can be obtained by substituting the Michell solution into the equations for stress in terms of the Airy stress function (in cylindrical coordinates). A table of stress components is shown below.
Displacement components
Displacements can be obtained from the Michell solution by using the stress-strain and strain-displacement relations. A table of displacement components corresponding the terms in the Airy stress function for the Michell solution is given below. In this table
where is the Poisson's ratio, and is the shear modulus.
Note that a rigid body displacement can be superposed on the Michell solution of the form
to obtain an admissible displacement field.
See also
Linear elasticity
Flamant solution
John Henry Michell
References
Elasticity (physics) | Michell solution | [
"Physics",
"Materials_science"
] | 240 | [
"Deformation (mechanics)",
"Physical phenomena",
"Physical properties",
"Elasticity (physics)"
] |
18,124,446 | https://en.wikipedia.org/wiki/Volumetric%20mesh | In 3D computer graphics and modeling, a volumetric mesh is a polyhedral representation of the interior region of an object. It is unlike polygon meshes, which represent only the surface as polygons.
Applications
One application of volumetric meshes is in finite element analysis, which may use regular or irregular volumetric meshes to compute internal stresses and forces in an object throughout the entire volume of the object.
Volume meshes may also be used for portal rendering.
See also
B-rep
Voxels
Hypergraph
Volume rendering
References
3D computer graphics
Computer graphics data structures
Mesh generation | Volumetric mesh | [
"Physics",
"Mathematics"
] | 117 | [
"Mesh generation",
"Tessellation",
"Applied mathematics",
"Applied mathematics stubs",
"Symmetry"
] |
18,125,091 | https://en.wikipedia.org/wiki/Toda%27s%20theorem | Toda's theorem is a result in computational complexity theory that was proven by Seinosuke Toda in his paper "PP is as Hard as the Polynomial-Time Hierarchy" and was given the 1998 Gödel Prize.
Statement
The theorem states that the entire polynomial hierarchy PH is contained in PPP; this implies a closely related statement, that PH is contained in P#P.
Definitions
#P is the problem of exactly counting the number of solutions to a polynomially-verifiable question (that is, to a question in NP), while loosely speaking, PP is the problem of giving an answer that is correct more than half the time. The class P#P consists of all the problems that can be solved in polynomial time if you have access to instantaneous answers to any counting problem in #P (polynomial time relative to a #P oracle). Thus Toda's theorem implies that for any problem in the polynomial hierarchy there is a deterministic polynomial-time Turing reduction to a counting problem.
An analogous result in the complexity theory over the reals (in the sense of Blum–Shub–Smale real Turing machines) was proved by Saugata Basu and Thierry Zell in 2009 and a complex analogue of Toda's theorem was proved by Saugata Basu in 2011.
Proof
The proof is broken into two parts.
First, it is established that
The proof uses a variation of Valiant–Vazirani theorem. Because contains and is closed under complement, it follows by induction that .
Second, it is established that
Together, the two parts imply
References
Structural complexity theory
Theorems in computational complexity theory | Toda's theorem | [
"Mathematics"
] | 335 | [
"Theorems in computational complexity theory",
"Theorems in discrete mathematics"
] |
18,128,464 | https://en.wikipedia.org/wiki/Jumboization | Jumboization is a technique in shipbuilding consisting of enlarging a ship by adding an entire section to it. By contrast with refitting or installation of equipment, jumboization is a long and complex endeavour which can require a specialized shipyard.
Enlarging a ship by jumboization allows an increase in its capacity and revenue potential without needing to purchase or build an entirely new ship. This technique has been used on cruise ships and tankers, as well as smaller vessels like sailing or fishing ships.
Methods
Large ships often have a long midsection with a uniform profile. In such cases, the ship is cut in two pieces and an additional section is inserted in between. This operation must be performed in a drydock.
On large ships, the additional sections are typically 20 to 30 metres long, consisting of an oil tank, a cargo ship hold, or a group of cabins, depending on the type of ship. The tanker Seawise Giant became the largest ship in the world after her jumboization.
Smaller ships are usually jumboized by replacing the entire bow or stern section of the ship. This is done because the shape of their hull is usually incompatible with the previous method.
See also
List of stretched cruise ships
References
External links
2011 Jumboisation/Body Swapping at Keppel Shipyard
Shipbuilding | Jumboization | [
"Engineering"
] | 263 | [
"Shipbuilding",
"Marine engineering"
] |
5,939,333 | https://en.wikipedia.org/wiki/Secure%20Network | Secure Network is a small offensive security and security research company focusing on Information Security based in Milano, Italy. Besides having notability in Italy, it received international exposure with a research project on Bluetooth security (co-sponsored by F-Secure) codenamed BlueBag, which has been also selected for the Black Hat Briefings conference 2006 in Las Vegas.
In 2009, it also organized SEaCURE.IT, the first international technical security conference ever held in Italy.
Secure Network also offers internet security compliance consulting to private companies.
References
Companies of Italy
Data security
Companies based in Milan | Secure Network | [
"Engineering"
] | 118 | [
"Cybersecurity engineering",
"Data security"
] |
7,811,800 | https://en.wikipedia.org/wiki/Bianchi%20classification | In mathematics, the Bianchi classification provides a list of all real 3-dimensional Lie algebras (up to isomorphism). The classification contains 11 classes, 9 of which contain a single Lie algebra and two of which contain a continuum-sized family of Lie algebras. (Sometimes two of the groups are included in the infinite families, giving 9 instead of 11 classes.) The classification is important in geometry and physics, because the associated Lie groups serve as symmetry groups of 3-dimensional Riemannian manifolds. It is named for Luigi Bianchi, who worked it out in 1898.
The term "Bianchi classification" is also used for similar classifications in other dimensions and for classifications of complex Lie algebras.
Classification in dimension less than 3
Dimension 0: The only Lie algebra is the abelian Lie algebra R0.
Dimension 1: The only Lie algebra is the abelian Lie algebra R1, with outer automorphism group the multiplicative group of non-zero real numbers.
Dimension 2: There are two Lie algebras:
(1) The abelian Lie algebra R2, with outer automorphism group GL2(R).
(2) The solvable Lie algebra of 2×2 upper triangular matrices of trace 0. It has trivial center and trivial outer automorphism group. The associated simply connected Lie group is the affine group of the line.
Classification in dimension 3
All the 3-dimensional Lie algebras other than types VIII and IX can be constructed as a semidirect product of R2 and R, with R acting on R2 by some 2 by 2 matrix M. The different types correspond to different types of matrices M, as described below.
Type I: This is the abelian and unimodular Lie algebra R3. The simply connected group has center R3 and outer automorphism group GL3(R). This is the case when M is 0.
Type II: The Heisenberg algebra, which is nilpotent and unimodular. The simply connected group has center R and outer automorphism group GL2(R). This is the case when M is nilpotent but not 0 (eigenvalues all 0).
Type III: This algebra is a product of R and the 2-dimensional non-abelian Lie algebra. (It is a limiting case of type VI, where one eigenvalue becomes zero.) It is solvable and not unimodular. The simply connected group has center R and outer automorphism group the group of non-zero real numbers. The matrix M has one zero and one non-zero eigenvalue.
Type IV: The algebra generated by [y,z] = 0, [x,y] = y, [x, z] = y + z. It is solvable and not unimodular. The simply connected group has trivial center and outer automorphism group the product of the reals and a group of order 2. The matrix M has two equal non-zero eigenvalues, but is not diagonalizable.
Type V: [y,z] = 0, [x,y] = y, [x, z] = z. Solvable and not unimodular. (A limiting case of type VI where both eigenvalues are equal.) The simply connected group has trivial center and outer automorphism group the elements of GL2(R) of determinant +1 or −1. The matrix M has two equal eigenvalues, and is diagonalizable.
Type VI: An infinite family: semidirect products of R2 by R, where the matrix M has non-zero distinct real eigenvalues with non-zero sum. The algebras are solvable and not unimodular. The simply connected group has trivial center and outer automorphism group a product of the non-zero real numbers and a group of order 2.
Type VI0: This Lie algebra is the semidirect product of R2 by R, with R where the matrix M has non-zero distinct real eigenvalues with zero sum. It is solvable and unimodular. It is the Lie algebra of the 2-dimensional Poincaré group, the group of isometries of 2-dimensional Minkowski space. The simply connected group has trivial center and outer automorphism group the product of the positive real numbers with the dihedral group of order 8.
Type VII: An infinite family: semidirect products of R2 by R, where the matrix M has non-real and non-imaginary eigenvalues. Solvable and not unimodular. The simply connected group has trivial center and outer automorphism group the non-zero reals.
Type VII0: Semidirect product of R2 by R, where the matrix M has non-zero imaginary eigenvalues. Solvable and unimodular. This is the Lie algebra of the group of isometries of the plane. The simply connected group has center Z and outer automorphism group a product of the non-zero real numbers and a group of order 2.
Type VIII: The Lie algebra sl2(R) of traceless 2 by 2 matrices, associated to the group SL2(R). It is simple and unimodular. The simply connected group is not a matrix group; it is the universal cover of SL2(R), has center Z and its outer automorphism group has order 2.
Type IX: The Lie algebra of the orthogonal group O3(R). It is denoted by 𝖘𝖔(3) and is simple and unimodular. The corresponding simply connected group is SU(2); it has center of order 2 and trivial outer automorphism group, and is a spin group.
The classification of 3-dimensional complex Lie algebras is similar except that types VIII and IX become isomorphic, and types VI and VII both become part of a single family of Lie algebras.
The connected 3-dimensional Lie groups can be classified as follows: they are a quotient of the corresponding simply connected Lie group by a discrete subgroup of the center, so can be read off from the table above.
The groups are related to the 8 geometries of Thurston's geometrization conjecture. More precisely, seven of the 8 geometries can be realized as a left-invariant metric on the simply connected group (sometimes in more than one way). The Thurston geometry of type S2×R cannot be realized in this way.
Structure constants
The three-dimensional Bianchi spaces each admit a set of three Killing vector fields which obey the following property:
where , the "structure constants" of the group, form a constant order-three tensor antisymmetric in its lower two indices. For any three-dimensional Bianchi space, is given by the relationship
where is the Levi-Civita symbol, is the Kronecker delta, and the vector and diagonal tensor are described by the following table, where gives the ith eigenvalue of ; the parameter a runs over all positive real numbers:
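To make the parametrization concrete, here is a small NumPy sketch (an illustration assuming the standard relationship quoted above, with hypothetical helper names) that builds the structure constants from a diagonal tensor n and a vector a, and checks the Jacobi identity numerically for two representative types.

```python
import numpy as np
from itertools import product

def bianchi_structure_constants(n_diag, a_vec):
    """C^a_{bc} = eps_{bcd} n^{da} + delta^a_c a_b - delta^a_b a_c (assumed parametrization)."""
    eps = np.zeros((3, 3, 3))
    for i, j, k in product(range(3), repeat=3):
        eps[i, j, k] = 0.5 * (i - j) * (j - k) * (k - i)   # Levi-Civita symbol for indices 0,1,2
    n = np.diag(n_diag)
    C = np.einsum('bcd,da->abc', eps, n)                   # eps_{bcd} n^{da}
    for a, b, c in product(range(3), repeat=3):
        C[a, b, c] += (1 if a == c else 0) * a_vec[b] - (1 if a == b else 0) * a_vec[c]
    return C

def jacobi_residual(C):
    """Sum over d of C^d_{ab} C^e_{dc} + C^d_{bc} C^e_{da} + C^d_{ca} C^e_{db}; zero if Jacobi holds."""
    return (np.einsum('dab,edc->eabc', C, C)
            + np.einsum('dbc,eda->eabc', C, C)
            + np.einsum('dca,edb->eabc', C, C))

# Bianchi IX (n = diag(1,1,1), a = 0) and Bianchi V (n = 0, a = (1,0,0)) both satisfy Jacobi.
for n_diag, a_vec in [((1, 1, 1), (0, 0, 0)), ((0, 0, 0), (1, 0, 0))]:
    C = bianchi_structure_constants(np.array(n_diag, float), np.array(a_vec, float))
    assert np.allclose(jacobi_residual(C), 0.0)
```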
The standard Bianchi classification can be derived from the structural constants in the following six steps:
Due to the antisymmetry , there are nine independent constants . These can be equivalently represented by the nine components of an arbitrary constant matrix Cab: where εabd is the totally antisymmetric three-dimensional Levi-Civita symbol (ε123 = 1). Substitution of this expression for into the Jacobi identity, results in
The structure constants can be transformed as:
The appearance of det A in this formula is due to the fact that the symbol εabd transforms as a tensor density: , where έmnd ≡ εmnd. By this transformation it is always possible to reduce the matrix Cab to the form:
After such a choice, one still has the freedom of making triad transformations but with the restrictions and
Now, the Jacobi identities give only one constraint:
If n1 ≠ 0 then C23 – C32 = 0 and by the remaining transformations with , the 2 × 2 matrix in Cab can be made diagonal. Then
The diagonality condition for Cab is preserved under the transformations with diagonal . Under these transformations, the three parameters n1, n2, n3 change in the following way:
By these diagonal transformations, the modulus of any na (if it is not zero) can be made equal to unity. Taking into account that the simultaneous change of sign of all na produces nothing new, one arrives at the following invariantly different sets for the numbers n1, n2, n3 (invariantly different in the sense that there is no way to pass from one to another by some transformation of the triad ), that is to the following different types of homogeneous spaces with diagonal matrix Cab:
Consider now the case n1 = 0. It can also happen in that case that C23 – C32 = 0. This returns to the situation already analyzed in the previous step but with the additional condition n1 = 0. Now, all essentially different types for the sets n1, n2, n3 are (0, 1, 1), (0, 1, −1), (0, 0, 1) and (0, 0, 0). The first three repeat the types VII0, VI0, II. Consequently, only one new type arises:
The only case left is n1 = 0 and C23 – C32 ≠ 0. Now the 2 × 2 matrix is non-symmetric and it cannot be made diagonal by transformations using . However, its symmetric part can be diagonalized, that is the 3 × 3 matrix Cab can be reduced to the form:
where a is an arbitrary number. After this is done, there still remains the possibility to perform transformations with diagonal , under which the quantities n2, n3 and a change as follows:
These formulas show that for nonzero n2, n3, a, the combination a^2(n2n3)^{-1} is an invariant quantity. By a choice of , one can impose the condition a > 0 and after this is done, the choice of the sign of permits one to change both signs of n2 and n3 simultaneously, that is the set (n2, n3) is equivalent to the set (−n2, −n3). It follows that there are the following four different possibilities:
For the first two, the number a can be transformed to unity by a choice of the parameters and . For the second two possibilities, both of these parameters are already fixed and a remains an invariant and arbitrary positive number. Historically these four types of homogeneous spaces have been classified as:
Type III is just a particular case of type VI corresponding to a = 1. Types VII and VI contain an infinity of invariantly different types of algebras corresponding to the arbitrariness of the continuous parameter a. Type VII0 is a particular case of VII corresponding to a = 0 while type VI0 is a particular case of VI corresponding also to a = 0.
Curvature of Bianchi spaces
The Bianchi spaces have the property that their Ricci tensors can be separated into a product of the basis vectors associated with the space and a coordinate-independent tensor.
For a given metric:
(where are 1-forms), the Ricci curvature tensor is given by:
where the indices on the structure constants are raised and lowered with which is not a function of .
Cosmological application
In cosmology, this classification is used for a homogeneous spacetime of dimension 3+1. The 3-dimensional Lie group serves as the symmetry group of the 3-dimensional spacelike slice, and the Lorentz metric satisfying the Einstein equation is generated by varying the metric components as a function of t. The Friedmann–Lemaître–Robertson–Walker metrics, which are isotropic, are particular cases of types I, V, and IX. The Bianchi type I models include the Kasner metric as a special case.
The Bianchi IX cosmologies include the Taub metric. However, the dynamics near the singularity is approximately governed by a series of successive Kasner (Bianchi I) periods. The complicated dynamics,
which essentially amounts to billiard motion in a portion of hyperbolic space, exhibits chaotic behaviour, and is named Mixmaster; its analysis is referred to as the BKL analysis after Belinskii, Khalatnikov and Lifshitz.
More recent work has established a relation of (super-)gravity theories near a spacelike singularity (BKL-limit) with Lorentzian Kac–Moody algebras, Weyl groups and hyperbolic Coxeter groups.
Other more recent work is concerned with the discrete nature of the Kasner map and a continuous generalisation. In a space that is both homogeneous and isotropic the metric is determined completely, leaving free only the sign of the curvature. Assuming only space homogeneity with no additional symmetry such as isotropy leaves considerably more freedom in choosing the metric. The following pertains to the space part of the metric at a given instant of time t assuming a synchronous frame so that t is the same synchronised time for the whole space.
Homogeneity implies identical metric properties at all points of the space. An exact definition of this concept involves considering sets of coordinate transformations that transform the space into itself, i.e. leave its metric unchanged: if the line element before transformation is
then after transformation the same line element is
with the same functional dependence of γαβ on the new coordinates. (For a more theoretical and coordinate-independent definition of homogeneous space see homogeneous space). A space is homogeneous if it admits a set of transformations (a group of motions) that brings any given point to the position of any other point. Since space is three-dimensional the different transformations of the group are labelled by three independent parameters.
In Euclidean space the homogeneity of space is expressed by the invariance of the metric under parallel displacements (translations) of the Cartesian coordinate system. Each translation is determined by three parameters — the components of the displacement vector of the coordinate origin. All these transformations leave invariant the three independent differentials (dx, dy, dz) from which the line element is constructed. In the general case of a non-Euclidean homogeneous space, the transformations of its group of motions again leave invariant three independent linear differential forms, which do not, however, reduce to total differentials of any coordinate functions. These forms are written as where the Latin index (a) labels three independent vectors (coordinate functions); these vectors are called a frame field or triad. The Greek letters label the three space-like curvilinear coordinates. A spatial metric invariant is constructed under the given group of motions with the use of the above forms:
i.e. the metric tensor is
where the coefficients ηab, which are symmetric in the indices a and b, are functions of time. The choice of basis vectors is dictated by the symmetry properties of the space and, in general, these basis vectors are not orthogonal (so that the matrix ηab is not diagonal).
The reciprocal triple of vectors is introduced with the help of Kronecker delta
In the three-dimensional case, the relation between the two vector triples can be written explicitly
where the volume v is
with e(a) and e(a) regarded as Cartesian vectors with components and , respectively. The determinant of the metric tensor is γ = ηv^2, where η is the determinant of the matrix ηab.
The required conditions for the homogeneity of the space are
The constants are called the structure constants of the group.
{| class="toccolours collapsible collapsed" width="80%" style="text-align:left"
!Proof of
|-
|
The invariance of the differential forms means that
where the on the two sides of the equation are the same functions of the old and new coordinates, respectively. Multiplying this equation by , setting and comparing coefficients of the same differentials dxα, one finds
These equations are a system of differential equations that determine the functions for a given frame. In order to be integrable, these equations must satisfy identically the conditions
Calculating the derivatives, one finds
Multiplying both sides of the equations by and shifting the differentiation from one factor to the other by using , one gets for the left side:
and for the right, the same expression in the variable x. Since x and x′ are arbitrary, these expressions must reduce to constants to obtain .
Multiplying by , can be rewritten in the form
can be written in a vector form as
where again the vector operations are done as if the coordinates xα were Cartesian. Using , one obtains
and six more equations obtained by a cyclic permutation of indices 1, 2, 3.
The structure constants are antisymmetric in their lower indices, as seen from their definition: C^c_{ab} = −C^c_{ba}. Another condition on the structure constants can be obtained by noting that can be written in the form of commutation relations
for the linear differential operators
In the mathematical theory of continuous groups (Lie groups) the operators Xa satisfying conditions are called the generators of the group. The theory of Lie groups uses operators defined using the Killing vectors instead of triads . Since in the synchronous metric none of the γαβ components depends on time, the Killing vectors (triads) are time-like.
The conditions follow from the Jacobi identity
and have the form
It is a definite advantage to use, in place of the three-index constants , a set of two-index quantities, obtained by the dual transformation
where e^{abc} = e_{abc} is the unit antisymmetric symbol (with e^{123} = +1). With these constants the commutation relations are written as
The antisymmetry property is already taken into account in the definition , while property takes the form
The choice of the three frame vectors in the differential forms (and with them the operators Xa) is not unique. They can be subjected to any linear transformation with constant coefficients:
The quantities ηab and Cab behave like tensors (are invariant) with respect to such transformations.
The conditions are the only ones that the structure constants must satisfy. But among the constants admissible by these conditions, there are equivalent sets, in the sense that their difference is related to a transformation of the type . The question of the classification of homogeneous spaces reduces to determining all nonequivalent sets of structure constants. This can be done, using the "tensor" properties of the quantities Cab, by the following simple method (C. G. Behr, 1962).
The asymmetric tensor Cab can be resolved into a symmetric and an antisymmetric part. The first is denoted by nab, and the second is expressed in terms of its dual vector ac:
Substitution of this expression in leads to the condition
By means of the transformations the symmetric tensor nab can be brought to diagonal form with eigenvalues n1, n2, n3. Equation shows that the vector ab (if it exists) lies along one of the principal directions of the tensor nab, the one corresponding to the eigenvalue zero. Without loss of generality one can therefore set ab = (a, 0, 0). Then reduces to an1 = 0, i.e. one of the quantities a or n1 must be zero. The Jacobi identities take the form:
The only remaining freedoms are sign changes of the operators Xa and their multiplication by arbitrary constants. This permits one to simultaneously change the sign of all the na and also to make the quantity a positive (if it is different from zero). Also all structure constants can be made equal to ±1, if at least one of the quantities a, n2, n3 vanishes. But if all three of these quantities differ from zero, the scale transformations leave invariant the ratio h = a2(n2n3)−1.
Thus one arrives at the Bianchi classification listing the possible types of homogeneous spaces classified by the values of a, n1, n2, n3 which is graphically presented in Fig. 3. In the class A case (a = 0), type IX (n(1)=1, n(2)=1, n(3)=1) is represented by octant 2, type VIII (n(1)=1, n(2)=1, n(3)=–1) is represented by octant 6, while type VII0 (n(1)=1, n(2)=1, n(3)=0) is represented by the first quadrant of the horizontal plane and type VI0 (n(1)=1, n(2)=–1, n(3)=0) is represented by the fourth quadrant of this plane; type II (n(1)=1, n(2)=0, n(3)=0) is represented by the interval [0,1] along n(1) and type I (n(1)=0, n(2)=0, n(3)=0) is at the origin. Similarly in the class B case (with n(3) = 0), Bianchi type VIh (a=h, n(1)=1, n(2)=–1) projects to the fourth quadrant of the horizontal plane and type VIIh (a=h, n(1)=1, n(2)=1) projects to the first quadrant of the horizontal plane; these last two types are a single isomorphism class corresponding to a constant value surface of the function h = a2(n(1)n(2))−1. A typical such surface is illustrated in one octant, the angle θ given by tan θ = |h/2|1/2; those in the remaining octants are obtained by rotation through multiples of π/2, h alternating in sign for a given magnitude |h|. Type III is a subtype of VIh with a=1. Type V (a=1, n(1)=0, n(2)=0) is the interval (0,1] along the axis a and type IV (a=1, n(1)=1, n(2)=0) is the vertical open face between the first and fourth quadrants of the a = 0 plane with the latter giving the class A limit of each type.
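A short numerical check of this scheme can be helpful. The Python/NumPy sketch below assumes the standard dual form of the structure constants found in the literature, C^c_{ab} = ε_{abd} n^{dc} + δ^c_b a_a − δ^c_a a_b (an assumption of this sketch, since the article's displayed equations are not reproduced here); it rebuilds C^c_{ab} for several of the types listed above and verifies the antisymmetry in the lower indices and the Jacobi identity, and shows that a choice violating a·n1 = 0 fails as expected.

```python
import itertools
import numpy as np

# Levi-Civita symbol eps_{abc} with eps_{123} = +1 (0-based: eps[0,1,2] = +1).
eps = np.zeros((3, 3, 3))
for p in itertools.permutations(range(3)):
    eps[p] = np.linalg.det(np.eye(3)[list(p)])

def structure_constants(n_diag, a):
    """C^c_{ab} = eps_{abd} n^{dc} + delta^c_b a_a - delta^c_a a_b, stored as C[c, a, b].
    n^{ab} is taken diagonal and a_b = (a, 0, 0) lies along the zero-eigenvalue
    direction of n^{ab}, as described in the text."""
    n = np.diag(n_diag).astype(float)
    a = np.asarray(a, dtype=float)
    d = np.eye(3)
    return (np.einsum('abd,dc->cab', eps, n)
            + np.einsum('cb,a->cab', d, a)
            - np.einsum('ca,b->cab', d, a))

def jacobi_residual(C):
    """C^e_{ab} C^d_{ec} + C^e_{bc} C^d_{ea} + C^e_{ca} C^d_{eb};
    this vanishes exactly when the Jacobi identity holds."""
    t = np.einsum('eab,dec->dabc', C, C)
    return t + np.einsum('dbca->dabc', t) + np.einsum('dcab->dabc', t)

cases = {
    "Type I    (a=0, n=(0,0,0))":  ((0, 0, 0), (0, 0, 0)),
    "Type II   (a=0, n=(1,0,0))":  ((1, 0, 0), (0, 0, 0)),
    "Type VIII (a=0, n=(1,1,-1))": ((1, 1, -1), (0, 0, 0)),
    "Type IX   (a=0, n=(1,1,1))":  ((1, 1, 1), (0, 0, 0)),
    "Type V    (a=1, n=(0,0,0))":  ((0, 0, 0), (1, 0, 0)),
    "forbidden (a=1, n1=1)":       ((1, 0, 0), (1, 0, 0)),   # violates a*n1 = 0
}
for name, (n_diag, a) in cases.items():
    C = structure_constants(n_diag, a)
    antisym = np.allclose(C, -C.transpose(0, 2, 1))
    jacobi = np.allclose(jacobi_residual(C), 0)
    print(f"{name}: antisymmetric={antisym}, Jacobi={jacobi}")
```

As expected, every admissible set of constants passes both checks, while the last case (a ≠ 0 together with n1 ≠ 0) violates the Jacobi identity.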
The Einstein equations for a universe with a homogeneous space can reduce to a system of ordinary differential equations containing only functions of time with the help of a frame field. To do this one must resolve the spatial components of four-vectors and four-tensors along the triad of basis vectors of the space:
where all these quantities are now functions of t alone; the scalar quantities, the energy density ε and the pressure of the matter p, are also functions of the time.
The Einstein equations in vacuum in synchronous reference frame are
where is the 3-dimensional tensor , and Pαβ is the 3-dimensional Ricci tensor, which is expressed by the 3-dimensional metric tensor γαβ in the same way as Rik is expressed by gik; Pαβ contains only the space (but not the time) derivatives of γαβ. Using triads, for one has simply
The components of P(a)(b) can be expressed in terms of the quantities ηab and the structure constants of the group by using the tetrad representation of the Ricci tensor in terms of quantities
After replacing the three-index symbols by two-index symbols Cab and the transformations:
one gets the "homogeneous" Ricci tensor expressed in structure constants:
Here, all indices are raised and lowered with the local metric tensor ηab
The Bianchi identities for the three-dimensional tensor Pαβ in the homogeneous space take the form
Taking into account the transformations of covariant derivatives for arbitrary four-vectors Ai and four-tensors Aik
the final expressions for the triad components of the Ricci four-tensor are:
In setting up the Einstein equations there is thus no need to use explicit expressions for the basis vectors as functions of the coordinates.
See also
Table of Lie groups
List of simple Lie groups
BKL singularity
Notes
References
Bibliography
L. Bianchi, Sugli spazi a tre dimensioni che ammettono un gruppo continuo di movimenti. (On the spaces of three dimensions that admit a continuous group of movements.) Soc. Ital. Sci. Mem. di Mat. 11, 267 (1898) English translation
Guido Fubini Sugli spazi a quattro dimensioni che ammettono un gruppo continuo di movimenti, (On the spaces of four dimensions that admit a continuous group of movements.) Ann. Mat. pura appli. (3) 9, 33-90 (1904); reprinted in Opere Scelte, a cura dell'Unione matematica italiana e col contributo del Consiglio nazionale delle ricerche, Roma Edizioni Cremonese, 1957–62
MacCallum, On the classification of the real four-dimensional Lie algebras, in "On Einstein's path: essays in honor of Engelbert Schucking", edited by A. L. Harvey, Springer
Robert T. Jantzen, Bianchi classification of 3-geometries: original papers in translation
L. D. Landau and E. M. Lifshitz, The Classical Theory of Fields, Vol. 2 of the Course of Theoretical Physics
Lie algebras
Lie groups
Physical cosmology | Bianchi classification | [
"Physics",
"Astronomy",
"Mathematics"
] | 5,333 | [
"Lie groups",
"Mathematical structures",
"Theoretical physics",
"Astrophysics",
"Algebraic structures",
"Physical cosmology",
"Astronomical sub-disciplines"
] |
7,812,016 | https://en.wikipedia.org/wiki/Bernd%20Sturmfels | Bernd Sturmfels (born March 28, 1962, in Kassel, West Germany) is a Professor of Mathematics and Computer Science at the University of California, Berkeley and has been a director of the Max Planck Institute for Mathematics in the Sciences in Leipzig since 2017.
Education and career
He received his PhD in 1987 from the University of Washington and the Technische Universität Darmstadt. After two postdoctoral years at the Institute for Mathematics and its Applications in Minneapolis, Minnesota, and the Research Institute for Symbolic Computation in Linz, Austria, he taught at Cornell University, before joining University of California, Berkeley in 1995. His Ph.D. students include Melody Chan, Jesús A. De Loera, Mike Develin, Diane Maclagan, Rekha R. Thomas, Caroline Uhler, and Cynthia Vinzant.
Contributions
Bernd Sturmfels has made contributions to a variety of areas of mathematics, including algebraic geometry, commutative algebra, discrete geometry, Gröbner bases, toric varieties, tropical geometry, algebraic statistics, and computational biology. He has written several highly cited papers in algebra with Dave Bayer.
He has authored or co-authored multiple books including Introduction to Tropical Geometry with Diane Maclagan.
Awards and honors
Sturmfels' honors include a National Young Investigator Fellowship, an Alfred P. Sloan Fellowship, and a David and Lucile Packard Fellowship. In 1999 he received a Lester R. Ford Award for his expository article Polynomial equations and convex polytopes. He was awarded a Miller Research Professorship at the University of California Berkeley for 2000–2001. In 2018, he was awarded the George David Birkhoff Prize in Applied Mathematics.
In 2012, he became a fellow of the American Mathematical Society.
References
Further reading
External links
Homepage at Berkeley
1962 births
Living people
Scientists from Kassel
20th-century German mathematicians
20th-century American mathematicians
21st-century American mathematicians
University of Washington alumni
Fellows of the American Mathematical Society
UC Berkeley College of Engineering faculty
Fellows of the Society for Industrial and Applied Mathematics
Mathematics popularizers
Technische Universität Darmstadt alumni
Algebraic geometers
Combinatorialists
Sloan Research Fellows
Algebraists
Cornell University faculty
21st-century German mathematicians
Academic staff of Max Planck Society
Max Planck Institute directors | Bernd Sturmfels | [
"Mathematics"
] | 454 | [
"Combinatorialists",
"Combinatorics",
"Algebra",
"Algebraists"
] |
12,458,499 | https://en.wikipedia.org/wiki/Liouville%27s%20theorem%20%28conformal%20mappings%29 | In mathematics, Liouville's theorem, proved by Joseph Liouville in 1850, is a rigidity theorem about conformal mappings in Euclidean space. It states that every smooth conformal mapping on a domain of R^n, where n > 2, can be expressed as a composition of translations, similarities, orthogonal transformations and inversions: they are Möbius transformations (in n dimensions). This theorem severely limits the variety of possible conformal mappings in R^3 and higher-dimensional spaces. By contrast, conformal mappings in R^2 can be much more complicated – for example, all simply connected planar domains are conformally equivalent, by the Riemann mapping theorem.
Generalizations of the theorem hold for transformations that are only weakly differentiable . The focus of such a study is the non-linear Cauchy–Riemann system that is a necessary and sufficient condition for a smooth mapping to be conformal:
where Df is the Jacobian derivative, T is the matrix transpose, and I is the identity matrix. A weak solution of this system is defined to be an element f of the Sobolev space with non-negative Jacobian determinant almost everywhere, such that the Cauchy–Riemann system holds at almost every point of Ω. Liouville's theorem is then that every weak solution (in this sense) is a Möbius transformation, meaning that it has the form
where a, b are vectors in R^n, α is a scalar, A is a rotation matrix, ε = 0 or 2, and the matrix in parentheses is I or a Householder matrix (so, orthogonal). Equivalently stated, any quasiconformal map of a domain in Euclidean space that is also conformal is a Möbius transformation. This equivalent statement justifies using the Sobolev space W^{1,n}, since membership in that space then follows from the geometrical condition of conformality and the ACL characterization of Sobolev space. The result is not optimal however: in even dimensions n = 2k, the theorem also holds for solutions that are only assumed to be in the space W^{1,k}, and this result is sharp in the sense that there are weak solutions of the Cauchy–Riemann system in W^{1,p} for any p < k that are not Möbius transformations. In odd dimensions, the known exponent is not optimal, but a sharp result is not known.
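As an illustrative numerical check of the non-linear Cauchy–Riemann condition (not taken from the article), the Python/NumPy sketch below evaluates the Jacobian of the inversion x ↦ x/|x|² in R^3 at random points and confirms that Df^T Df is a scalar multiple of the identity, with the scalar equal to |det Df|^{2/3}, as a Möbius transformation requires.

```python
import numpy as np

def inversion(x):
    """Inversion in the unit sphere: x -> x / |x|^2 (a Mobius transformation)."""
    return x / np.dot(x, x)

def jacobian(f, x, h=1e-6):
    """Central-difference Jacobian Df(x)."""
    n = x.size
    J = np.zeros((n, n))
    for j in range(n):
        e = np.zeros(n); e[j] = h
        J[:, j] = (f(x + e) - f(x - e)) / (2 * h)
    return J

rng = np.random.default_rng(0)
for _ in range(5):
    x = rng.normal(size=3)
    J = jacobian(inversion, x)
    JTJ = J.T @ J
    lam = np.trace(JTJ) / 3.0          # candidate conformal factor
    print(np.allclose(JTJ, lam * np.eye(3), rtol=1e-4, atol=1e-8),   # Df^T Df = lam * I
          np.isclose(lam ** 3, np.linalg.det(J) ** 2, rtol=1e-4))    # lam = |det Df|^(2/3)
```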
Similar rigidity results (in the smooth case) hold on any conformal manifold. The group of conformal isometries of an n-dimensional conformal Riemannian manifold always has dimension that cannot exceed that of the full conformal group SO(n + 1, 1). Equality of the two dimensions holds exactly when the conformal manifold is isometric with the n-sphere or projective space. Local versions of the result also hold: The Lie algebra of conformal Killing fields in an open set has dimension less than or equal to that of the conformal group, with equality holding if and only if the open set is locally conformally flat.
Notes
References
Harley Flanders (1966) "Liouville's theorem on conformal mapping", Journal of Mathematics and Mechanics 15: 157–61,
Conformal mappings
Eponymous theorems of geometry | Liouville's theorem (conformal mappings) | [
"Mathematics"
] | 655 | [
"Theorems in mathematical analysis",
"Mathematical analysis",
"Mathematical theorems",
"Eponymous theorems of geometry",
"Theorems in geometry",
"Mathematical problems"
] |
16,924,116 | https://en.wikipedia.org/wiki/Introduction%20to%20M-theory | In non-technical terms, M-theory presents an idea about the basic substance of the universe. Although a complete mathematical formulation of M-theory is not known, the general approach is the leading contender for a universal "Theory of Everything" that unifies gravity with other forces such as electromagnetism. M-theory aims to unify quantum mechanics with general relativity's gravitational force in a mathematically consistent way. In comparison, other theories such as loop quantum gravity are considered by physicists and researchers to be less elegant, because they posit gravity to be completely different from forces such as the electromagnetic force.
Background
In the early years of the 20th century, the atom – long believed to be the smallest building-block of matter – was proven to consist of even smaller components called protons, neutrons and electrons, which are known as subatomic particles. Other subatomic particles began being discovered in the 1960s. In the 1970s, it was discovered that protons and neutrons (and other hadrons) are themselves made up of smaller particles called quarks. The Standard Model is the set of rules that describes the interactions of these particles.
In the 1980s, a new mathematical model of theoretical physics, called string theory, emerged. It showed how all the different subatomic particles known to science could be constructed by hypothetical one-dimensional "strings", infinitesimal building-blocks that have only the dimension of length, but not height or width. These strings vibrate in multiple dimensions and, depending on how they vibrate, they might be seen in three-dimensional space as matter, light or gravity. In string theory, every form of matter is said to be the result of the vibration of strings.
However, for string theory to be mathematically consistent, the strings must live in a universe with ten dimensions. String theory explains our perception of the universe to have four dimensions (three space dimensions and one time dimension) by imagining that the extra six dimensions are "curled up", to be so small that they can't be observed day-to-day. The technical term for this is compactification. These dimensions are usually made to take the shape of mathematical objects called Calabi–Yau manifolds.
Five major string theories were developed and found to be mathematically consistent with the principle of all matter being made of strings. Having five different versions of string theory was seen as a puzzle.
Speaking at the string theory conference at the University of Southern California in 1995, Edward Witten of the Institute for Advanced Study suggested that the five different versions of string theory might be describing the same thing seen from different perspectives. He proposed a unifying theory called "M-theory", which brought all of the string theories together. It did this by asserting that strings are an approximation of curled-up two-dimensional membranes vibrating in an 11-dimensional spacetime. According to Witten, the M could stand for "magic", "mystery", or "membrane" according to taste, and the true meaning of the title should be decided when a better understanding of the theory is discovered.
Status
M-theory is not complete, and the mathematics of the approach are not yet well understood. M-theory is a theory of quantum gravity and, like all other such theories, it has not gained experimental evidence that would confirm its validity. It also does not single out our observable universe as being special, and so does not aim to predict from first principles everything we can measure about it.
Nevertheless, some physicists are drawn to M-theory because of its degree of uniqueness and rich set of mathematical properties, triggering the hope that it may describe our world within a single framework.
One feature of M-theory that has drawn great interest is that it naturally predicts the existence of the graviton, a spin-2 particle hypothesized to mediate the gravitational force. Furthermore, M-theory naturally predicts a phenomenon that resembles black hole evaporation. Competing unification theories such as asymptotically safe gravity, E8 theory, noncommutative geometry, and causal fermion systems have not demonstrated any level of mathematical consistency.
See also
History of string theory
References
Further reading
External links
The Elegant Universe – A three-hour miniseries with Brian Greene by NOVA (original PBS Broadcast Dates: October 28 and November 4, 2003). Various images, texts, videos and animations explaining string theory and M-theory.
Philosophy of science
Physical cosmology
String theory | Introduction to M-theory | [
"Physics",
"Astronomy"
] | 911 | [
"Astronomical hypotheses",
"Astronomical sub-disciplines",
"Theoretical physics",
"Astrophysics",
"String theory",
"Physical cosmology"
] |
16,926,906 | https://en.wikipedia.org/wiki/Schuetze%20reagent | Schuetze reagent, also written as Schütze reagent, is made up of iodine pentoxide (I2O5) and sulfuric acid on granular silica gel. It is used to convert carbon monoxide (CO) into carbon dioxide (CO2) at room temperature. This can be used as a method for assaying carbon content in quality control of the production of uranium carbide fuel for nuclear reactors.
References
Oxidizing mixtures
Iodine compounds
Oxides
Acidic oxides | Schuetze reagent | [
"Chemistry"
] | 110 | [
"Inorganic compounds",
"Oxides",
"Oxidizing agents",
"Salts",
"Oxidizing mixtures",
"Inorganic compound stubs"
] |
16,928,506 | https://en.wikipedia.org/wiki/Visual%20servoing | Visual servoing, also known as vision-based robot control and abbreviated VS, is a technique which uses feedback information extracted from a vision sensor (visual feedback) to control the motion of a robot. One of the earliest papers that talks about visual servoing was from the SRI International Labs in 1979.
Visual servoing taxonomy
There are two fundamental configurations of the robot end-effector (hand) and the camera:
Eye-in-hand, or end-point open-loop control, where the camera is attached to the moving hand and observing the relative position of the target.
Eye-to-hand, or end-point closed-loop control, where the camera is fixed in the world and observing the target and the motion of the hand.
Visual Servoing control techniques are broadly classified into the following types:
Image-based (IBVS)
Position/pose-based (PBVS)
Hybrid approach
IBVS was proposed by Weiss and Sanderson. The control law is based on the error between current and desired features on the image plane, and does not involve any estimate of the pose of the target. The features may be the coordinates of visual features, lines or moments of regions. IBVS has difficulties with motions involving very large rotations, which has come to be called camera retreat.
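To make the image-based control law concrete, the sketch below uses the classical proportional law v = −λ L⁺ (s − s*) with the standard interaction matrix of a normalized image point, following the usual tutorial treatment (e.g. Chaumette and Hutchinson) rather than any one method discussed in this article; the point coordinates, the gain, and the depth estimates are illustrative assumptions.

```python
import numpy as np

def interaction_matrix(x, y, Z):
    """Interaction (image Jacobian) matrix of a normalized image point (x, y) at
    depth Z, relating feature velocity to the camera velocity screw
    (vx, vy, vz, wx, wy, wz)."""
    return np.array([
        [-1.0 / Z, 0.0,       x / Z, x * y,     -(1 + x * x),  y],
        [0.0,      -1.0 / Z,  y / Z, 1 + y * y, -x * y,       -x],
    ])

def ibvs_velocity(features, desired, depths, gain=0.5):
    """Proportional IBVS law: v = -gain * pinv(L) * (s - s*)."""
    L = np.vstack([interaction_matrix(x, y, Z)
                   for (x, y), Z in zip(features, depths)])
    error = (np.asarray(features) - np.asarray(desired)).ravel()
    return -gain * np.linalg.pinv(L) @ error

# Example: four point features, all at an (estimated) depth of 1 m.
current = [(-0.12, -0.10), (0.11, -0.09), (0.10, 0.12), (-0.09, 0.11)]
desired = [(-0.10, -0.10), (0.10, -0.10), (0.10, 0.10), (-0.10, 0.10)]
v = ibvs_velocity(current, desired, depths=[1.0] * 4)
print("camera velocity screw:", np.round(v, 4))
```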
PBVS is a model-based technique (with a single camera). This is because the pose of the object of interest is estimated with respect to the camera and then a command is issued to the robot controller, which in turn controls the robot. In this case the image features are extracted as well, but are additionally used to estimate 3D information (pose of the object in Cartesian space), hence it is servoing in 3D.
Hybrid approaches use some combination of the 2D and 3D servoing. There have been a few different approaches to hybrid servoing
2-1/2-D Servoing
Motion partition-based
Partitioned DOF Based
Survey
The following description of the prior work is divided into 3 parts
Survey of existing visual servoing methods.
Various features used and their impacts on visual servoing.
Error and stability analysis of visual servoing schemes.
Survey of existing visual servoing methods
Visual servo systems, also called servoing, have been around since the early 1980s, although the term visual servo itself was only coined in 1987.
Visual Servoing is, in essence, a method for robot control where the sensor used is a camera (visual sensor).
Servoing consists primarily of two techniques: one involves using information from the image to directly control the degrees of freedom (DOF) of the robot, and is thus referred to as Image Based Visual Servoing (IBVS), while the other involves the geometric interpretation of the information extracted from the camera, such as estimating the pose of the target and the parameters of the camera (assuming some basic model of the target is known). Other servoing classifications exist based on the variations in each component of a servoing system, e.g. the location of the camera, the two kinds being eye-in-hand and hand–eye configurations.
Based on the control loop, the two kinds are end-point-open-loop and end-point-closed-loop. Based on whether the control is applied to the joints (or DOF)
directly or as a position command to a robot controller the two types are
direct servoing and dynamic look-and-move.
In one of the earliest works, the authors proposed a hierarchical
visual servo scheme applied to image-based servoing. The technique relies on
the assumption that a good set of features can be extracted from the object
of interest (e.g. edges, corners and centroids) and used as a partial model
along with global models of the scene and robot. The control strategy is
applied to a simulation of a two and three DOF robot arm.
Feddema et al.
introduced the idea of generating task trajectory
with respect to the feature velocity. This is to ensure that the sensors are
not rendered ineffective (stopping the feedback) for any of the robot motions.
The authors assume that the objects are known a priori (e.g. CAD model)
and all the features can be extracted from the object.
The work by Espiau et al.
discusses some of the basic questions in
visual servoing. The discussions concentrate on modeling of the interaction
matrix, camera, visual features (points, lines, etc..).
In other work, an adaptive servoing system was proposed with a look-and-move
servoing architecture. The method used optical flow along with SSD to
provide a confidence metric and a stochastic controller with Kalman filtering
for the control scheme. The system assumes (in the examples) that the plane
of the camera and the plane of the features are parallel. Another paper discusses an approach to velocity control using the Jacobian relationship s˙ = Jv˙. In addition, the author uses Kalman filtering, assuming that the extracted position of the target has inherent errors (sensor errors). A
model of the target velocity is developed and used as a feed-forward input
in the control loop. The author also mentions the importance of looking into kinematic
discrepancy, dynamic effects, repeatability, settling time oscillations and lag
in response.
Corke poses a set of very critical questions on visual servoing and tries
to elaborate on their implications. The paper primarily focuses on the dynamics
of visual servoing. The author tries to address problems like lag and stability,
while also talking about feed-forward paths in the control loop. The paper
also, tries to seek justification for trajectory generation, methodology of axis
control and development of performance metrics.
Chaumette provides good insight into the two major problems with IBVS: one, servoing to a local minimum, and second, reaching a Jacobian singularity. The author shows that image points alone do not make good features due to the occurrence of singularities. The paper continues by discussing possible additional checks to prevent singularities, namely condition numbers of J_s and Ĵ^+_s, and checking the null space of Ĵ_s and J^T_s. One main point that
the author highlights is the relation between local minima and unrealizable
image feature motions.
Over the years many hybrid techniques have been developed. These
involve computing partial/complete pose from Epipolar Geometry using multiple views or multiple cameras. The values are obtained by direct estimation or through a learning or a statistical scheme, while others have used a switching approach that changes between image-based and position-based servoing based on a Lyapunov function.
The early hybrid techniques that used a combination of image-based and
pose-based (2D and 3D information) approaches for servoing required either
a full or partial model of the object in order to extract the pose information
and used a variety of techniques to extract the motion information from the
image. One method used an affine motion model estimated from the image motion, in addition to a rough polyhedral CAD model, to extract the object pose with respect to the camera so as to be able to servo onto the object (along the lines of PBVS).
2-1/2-D visual servoing developed by Malis et al. is a well known technique that breaks down the information required for servoing into an organized fashion which decouples rotations and translations. The papers
assume that the desired pose is known a priori. The rotational information is
obtained from partial pose estimation, a homography, (essentially 3D information) giving an axis of rotation and the angle (by computing the eigenvalues and eigenvectors of the homography). The translational information is
obtained from the image directly by tracking a set of feature points. The only
conditions being that the feature points being tracked never leave the field of
view and that a depth estimate be predetermined by some off-line technique.
2-1/2-D servoing has been shown to be more stable than the techniques that
preceded it. Another interesting observation with this formulation is that
the authors claim that the visual Jacobian will have no singularities during
the motions.
The hybrid technique developed by Corke and Hutchinson, popularly called the partitioned approach, partitions the visual (or image) Jacobian into motions (both rotations and translations) relating the X and Y axes and motions related to the Z axis. The technique breaks out the columns of the visual Jacobian that correspond to the Z axis translation and rotation (namely, the third and sixth columns). The partitioned approach is shown to handle the Chaumette Conundrum discussed above. This technique requires
a good depth estimate in order to function properly.
Another work outlines a hybrid approach where the servoing task is split into two, namely main and secondary. The main task is to keep the features of interest within the field of view, while the secondary task is to mark a fixation point and use it as a reference to bring the camera to the desired pose. The
technique does need a depth estimate from an off-line procedure. The paper
discusses two examples for which depth estimates are obtained from robot
odometry and by assuming that all features are on a plane. The secondary
task is achieved by using the notion of parallax. The features that are tracked
are chosen by an initialization performed on the first frame, which are typically points.
Another paper carries out a discussion on two aspects of visual servoing: feature modeling and model-based tracking. The primary assumption made is that the 3D model of the object is available. The authors highlight the notion that
ideal features should be chosen such that the DOF of motion can be decoupled
by linear relation. The authors also introduce an estimate of the target
velocity into the interaction matrix to improve tracking performance. The
results are compared to well known servoing techniques even when occlusions
occur.
Various features used and their impacts on visual servoing
This section discusses the work done in the field of visual servoing. We try
to track the various techniques in the use of features. Most of the work
has used image points as visual features. The standard formulation of the interaction matrix assumes points in the image are used to represent the target. There is some body of work that deviates from the use of points and uses feature regions, lines, image moments and moment invariants.
In one study, the authors discuss affine-based tracking of image features. The image features are chosen based on a discrepancy measure, which is based on the deformation that the features undergo. The features used were texture patches. One of the key points of the paper was that it highlighted the need to look at features for improving visual servoing.
Other authors look into the choice of image features (the same question was also discussed in the context of tracking). The effect of the choice
of image features on the control law is discussed with respect to just the
depth axis. Authors consider the distance between feature points and the
area of an object as features. These features are used in the control law with
slightly different forms to highlight the effects on performance. It was noted
that better performance was achieved when the servo error was proportional
to the change in depth axis.
One work provides one of the early discussions of the use of moments. The
authors provide a new formulation of the interaction matrix using the velocity
of the moments in the image, albeit complicated. Even though the moments
are used, the moments are of the small change in the location of contour
points with the use of Green’s theorem. The paper also tries to determine
the set of features (on a plane) for a 6 DOF robot.
Further work discusses the use of image moments to formulate the visual Jacobian.
This formulation allows for decoupling of the DOF based on type of moments
chosen. The simple case of this formulation is notionally similar to the 2-1/2-D servoing. The time variation of the moments (m˙ij) is determined using
the motion between two images and Green's theorem. The relation between
m˙ij and the velocity screw (v) is given as m˙_ij = L_m_ij v. This technique
avoids camera calibration by assuming that the objects are planar and using
a depth estimate. The technique works well in the planar case but tends to
be complicated in the general case. The basic idea is based on the work in [4]
Moment invariants have also been used; the key idea is to find
the feature vector that decouples all the DOF of motion. Some observations
made were that centralized moments are invariant for 2D translations. A
complicated polynomial form is developed for 2D rotations. The technique
follows teaching-by-showing, hence requiring the values of desired depth and
area of object (assuming that the plane of camera and object are parallel,
and the object is planar). Other parts of the feature vector are invariants
R3, R4. The authors claim that occlusions can be handled.
Later works build on the work described above, the major difference being that the authors use a similar two-part technique, where the task is broken into two (in the case where the features are not parallel to the camera plane). A virtual rotation is performed to bring the features parallel to the camera plane. This consolidates the work done by the authors on image moments.
Error and stability analysis of visual servoing schemes
Espiau showed from purely experimental work that image based visual servoing (IBVS)
is robust to calibration errors. The author used a camera with no explicit
calibration along with point matching and without pose estimation. The
paper looks at the effect of errors and uncertainty on the terms in the interaction matrix from an experimental approach. The targets used were points
and were assumed to be planar.
A similar study was done where the authors carry out an experimental evaluation of a few uncalibrated visual servo systems that were popular in the 1990s. The major outcome was the experimental evidence of the effectiveness of visual servo control over conventional
control methods.
Kyrki et al. analyze servoing errors for position based and 2-1/2-D
visual servoing. The technique involves determining the error in extracting
image position and propagating it to pose estimation and servoing control.
Points from the image are mapped to points in the world a priori to obtain a mapping (which is basically the homography, although not explicitly stated
in the paper). This mapping is broken down to pure rotations and translations. Pose estimation is performed using standard technique from Computer
Vision. Pixel errors are transformed to the pose. These are propagating to
the controller. An observation from the analysis shows that errors in the
image plane are proportional to the depth and error in the depth-axis is
proportional to square of depth.
Measurement errors in visual servoing have been looked into extensively.
Most error functions relate to two aspects of visual servoing. One being
steady state error (once servoed) and two on the stability of the control
loop. Other servoing errors that have been of interest are those that arise
from pose estimation and camera calibration. Some authors extend the earlier work by considering global stability in the presence of intrinsic and extrinsic calibration errors. Another approach provides a way to bound the task function tracking error. Elsewhere, the authors use a teaching-by-showing visual servoing technique, where the desired pose is known a priori and the robot is moved from a given pose. The main aim of that paper is to determine the upper bound on the positioning error due to image noise using a convex-optimization technique.
One paper provides a discussion on stability analysis with respect to the uncertainty
in depth estimates. The authors conclude the paper with the observation that
for unknown target geometry a more accurate depth estimate is required in
order to limit the error.
Many of the visual servoing techniques implicitly assume that
only one object is present in the image and the relevant feature for tracking
along with the area of the object are available. Most techniques require either
a partial pose estimate or a precise depth estimate of the current and desired
pose.
Software
Matlab toolbox for visual servoing.
Java-based visual servoing simulator.
ViSP (ViSP stands for "Visual Servoing Platform") is modular software that allows fast development of visual servoing applications.
See also
Robotics
Robot
Computer Vision
Machine Vision
Robot control
References
External links
S. A. Hutchinson, G. D. Hager, and P. I. Corke. A tutorial on visual servo control. IEEE Trans. Robot. Automat., 12(5):651—670, Oct. 1996.
F. Chaumette, S. Hutchinson. Visual Servo Control, Part I: Basic Approaches. IEEE Robotics and Automation Magazine, 13(4):82-90, December 2006.
F. Chaumette, S. Hutchinson. Visual Servo Control, Part II: Advanced Approaches. IEEE Robotics and Automation Magazine, 14(1):109-118, March 2007.
Notes from IROS 2004 tutorial on advanced visual servoing.
Springer Handbook of Robotics Chapter 24: Visual Servoing and Visual Tracking (François Chaumette, Seth Hutchinson)
UW-Madison, Robotics and Intelligent Systems Lab
INRIA Lagadic research group
Johns Hopkins University, LIMBS Laboratory
University of Siena, SIRSLab Vision & Robotics Group
Tohoku University, Intelligent Control Systems Laboratory
INRIA Arobas research group
LASMEA, Rosace group
UIUC, Beckman Institute
Robotic sensing
Computer vision
Robot control
Articles containing video clips | Visual servoing | [
"Engineering"
] | 3,608 | [
"Robotics engineering",
"Packaging machinery",
"Robot control",
"Artificial intelligence engineering",
"Computer vision"
] |
16,929,085 | https://en.wikipedia.org/wiki/R%20%28cross%20section%20ratio%29 | R is the ratio of the hadronic cross section to the muon cross section in electron–positron collisions:
where the superscript (0) indicates that the cross section has been corrected for initial state radiation. R is an important input in the calculation of the anomalous magnetic dipole moment. Experimental values have been measured for center-of-mass energies from 400 MeV to 150 GeV.
R also provides experimental confirmation of the electric charge of quarks, in particular the charm quark and bottom quark, and the existence of three quark colors. A simplified calculation of R yields
R = 3 Σq eq², where the sum is over all quark flavors with mass less than the beam energy. eq is the electric charge of the quark, and the factor of 3 accounts for the three colors of the quarks. QCD corrections to this formula have been calculated.
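A quick way to see the effect of the quark thresholds is to evaluate this leading-order expression directly. The short Python sketch below (the quark charge assignments are standard; the QCD correction is only indicated schematically in a comment) gives the familiar step values below the charm, below the bottom, and above the bottom thresholds.

```python
# Leading-order R = N_colors * sum of squared quark charges for all flavors
# light enough to be pair-produced at the given beam energy.
from fractions import Fraction

CHARGES = {          # electric charge in units of e
    "u": Fraction(2, 3), "d": Fraction(-1, 3), "s": Fraction(-1, 3),
    "c": Fraction(2, 3), "b": Fraction(-1, 3),
}

def r_ratio(flavors, n_colors=3):
    return n_colors * sum(CHARGES[q] ** 2 for q in flavors)

print("u,d,s     :", r_ratio("uds"))     # 2     (below the charm threshold)
print("u,d,s,c   :", r_ratio("udsc"))    # 10/3  (below the bottom threshold)
print("u,d,s,c,b :", r_ratio("udscb"))   # 11/3  (above the bottom threshold)
# The leading QCD correction multiplies these values by (1 + alpha_s/pi + ...).
```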
Usually, the denominator in R is not the actual experimental μμ cross section, but the off-resonance theoretical QED cross-section: this makes resonances more visibly dramatic than normalization by the μμ cross section, which is also greatly enhanced at these resonances (hadronic states, and Z boson).
Notes
Scattering
Particle physics | R (cross section ratio) | [
"Physics",
"Chemistry",
"Materials_science"
] | 252 | [
"Scattering stubs",
"Scattering",
"Condensed matter physics",
"Particle physics",
"Nuclear physics"
] |
16,930,987 | https://en.wikipedia.org/wiki/Direct%20exchange%20geothermal%20heat%20pump | A direct exchange (DX) geothermal heat pump is a type of ground source heat pump in which refrigerant circulates through copper tubing placed in the ground unlike other ground source heat pumps where refrigerant is restricted to the heat pump itself with a secondary loop in the ground filled with a mixture of water and anti-freeze.
The simplicity of the DX designs is that high efficiencies can be reached using a shorter and smaller amount of buried tubing thereby reducing both the footprint and installation cost.
Other appellations
The technology has many different others names and designations:
Direct exchange geothermal heat pump
DX
Direct Geoexchange heat pumps – used by the Air-Conditioning, Heating, and Refrigeration Institute (AHRI) and is abbreviated DGX
Direct-expansion ground source heat pumps – used by The American Society of Heating, Refrigerating, and Air-Conditioning Engineers (ASHRAE)
Direct geothermal heat pumps
Refrigerant-based geothermal systems
Waterless Geothermal
History
The first geothermal heat pump was a DX system built in the late 1940s by Robert C. Webber. It used Freon gas and buried copper tubing, for increased efficiency.
Later geothermal heat pumps designs started incorporating an additional plastic pipe loop to circulate water in deep wells in an effort to gather sufficient heat for large industrial applications, such as cement plants. Thus water-source technology advanced due to industrial interest while DX, more suited for residential and light commercial projects such as small businesses and private homes, lagged behind.
While the technology was expanding in the 1980s and 1990s, several of the early manufacturers faced issues with the refrigerant and oil management system. In the past, some DX geothermal manufacturers designed their equipment similarly to an ordinary heat pump sold on the market today. Some of these older DX geothermal designs worked well while others experienced oil return issues.
In 2016 it was suggested only a minority of future systems would use DX technology.
Operation principles
Direct exchange heat pumps are closed-loop geothermal systems which rely on (¼” to 1-1/8”) copper pipes to exchange heat with the earth. The copper pipes are placed in the ground and form a ground loop – sometimes also referred to as earth loop or refrigerant loop – where the circulating refrigerant undergoes phase transition by exchanging heat with the ground: in heating mode it absorbs heat and changes from liquid to gas (evaporation), while in cooling mode it gives heat off and changes from gas to liquid (condensing).
Applications
Direct exchange systems are rather simple refrigerant-based systems and operate without requiring anti-freezing agents, system flushing, circulating pump, water well drilling or plumbing. Direct exchange geothermal systems are the least invasive geothermal systems and feature small earth loop size. Because of that, they can be installed in relatively small areas and in relatively shallow soil – typical loop depth does not exceed 100 linear feet. The compactness of the earth loop systems, which require less drilling and smaller borehole, makes up for a simpler system that is cheaper and quicker to install.
Use of copper
Direct exchange systems use copper because it is an excellent ground heat exchanger material and easy to manufacture. Copper tubing is strong and ductile; resistant to corrosion; has a very high thermal conductivity; and is available in many different diameters and in long coil lengths. Copper connections can be brazed, the tubing may be bent, and copper tubing is economically available.
In addition, copper has a long history of use in air conditioning and refrigeration, and is the material of choice for potable water for water lines buried underground and in buildings.
Copper has been used since antiquity in architectural constructions because it is a noble metal – one of a few that can be found naturally in the ground. This makes it a durable, weatherproof and corrosion-resistant material with an indefinite lifetime in most soils.
Although copper is extracted from the ground itself and is a noble metal – and is therefore almost completely impervious to corrosion from soils found worldwide – it might still undergo some corrosion in abnormally aggressive soils. It generally requires an oxidizing environment to start corrosion, and most soils are reducing, thus they contribute electrons to the copper and protect it against corrosion. In those areas where corrosive conditions may exist, copper will then naturally form a protective film on its surface which remains intact under most soil conditions.
In anticipation of particularly corrosive soils, DX systems come with a Cathodic Protection system. The principle is to protect the metal surface from corrosion by making it the cathode of an electrochemical cell. In that process, the metal – copper – is connected to a sacrificial metal which will corrode in its place. Corrosion of metals is an electrochemical process of deterioration that results from a loss of electrons as they react with water and/or oxygen. As the current flows from the Earth Loop Protection system, the metal surface to be protected is given a uniform negative electrical potential that precludes corrosion of the ground loops, even in hostile environments.
Ground loop configuration
The ground loop system may be installed in several different configurations. The three most common configurations are:
Vertical
Diagonal
Horizontal
Diagonal and Vertical configurations typically require drilling and grouting to be installed in drilled bore holes. Grout reseals the earth below the surface so that natural ground water aquifers are not interrupted. All diagonal and vertical systems must be grouted from the bottom up to the top.
Diagonal systems usually have a very small footprint.
Horizontal configurations usually only require trenching to be installed in excavated trenches or a pit. Horizontal systems do not usually require grout, except in the case of directional boring.
System sizing
DX systems are currently manufactured in sizes from 2 tons (7.03 kW) to 6 tons (21.10 kW). Larger projects can be accomplished through installation of multiple units.
See also
Deep water source cooling
Geothermal heating and cooling
Geothermal heat pump
Ground-coupled heat exchanger
Heat pumps
References
External links
Geothermal Heat Pumps (US Department of Energy)
US Dept. of Energy
Geothermal Exchange Organization (GEO)
Canadian Geoexchange Coalition
International Ground Source Heat Pump Association (IGSHPA)
Ground Source Heat Pump Association – UK
Air-Conditioning, Heating, and Refrigeration Institute (AHRI)
The American Society of Heating, Refrigerating, and Air-Conditioning Engineers (ASHRAE)
Copper Development Association Inc.
Heat pumps
Energy conversion
Building engineering | Direct exchange geothermal heat pump | [
"Engineering"
] | 1,345 | [
"Building engineering",
"Civil engineering",
"Architecture"
] |
16,933,230 | https://en.wikipedia.org/wiki/Oceaneering%20International | Oceaneering International, Inc. is a subsea engineering and applied technology company based in Houston, Texas, U.S. that provides engineered services and hardware to customers who operate in marine, space, and other environments.
Oceaneering's business offerings include remotely operated vehicle (ROV) services, specialty oilfield subsea hardware, deepwater intervention and crewed diving services, non-destructive testing and inspections, engineering and project management, and surveying and mapping services. Its services and products are marketed worldwide to oil and gas companies, government agencies, and firms in the aerospace, marine engineering and mobile robotics and construction industries.
History
Oceaneering was founded in 1964 with the incorporation of World Wide Divers, Inc., one of three companies who merged in 1969 to operate under the name Oceaneering International, Inc. The merged companies were World Wide Divers, Inc. (Morgan City, LA), California Divers, Inc. (Santa Barbara, CA), and Can-Dive Services Ltd (North Vancouver, BC).
World Wide Divers, Inc. was owned by Mike Hughes and Johnny Johnson. California Divers, Inc. was owned by Lad Handelman, Gene Handelman, Kevin Lengyel, and Bob Ratcliffe. Can-Dive Services Ltd was owned by Phil Nuytten and partners. Mike Hughes served as Chairman of the Board and Lad Handelman served as President of the merged companies.
In the early 1970s, Oceaneering supported considerable research into ways to increase safety of their divers and general diving efficiency, including their collaboration with Duke University Medical Center to explore the use of trimix breathing gas to reduce the incidence of high-pressure nervous syndrome.
Oceaneering purchased the rights to the JIM suit in 1975. By 1979, a team from Oceaneering assisted Dr. Sylvia Earle in testing Atmospheric diving suits for scientific diving operations by diving a JIM suit to 1,250 fsw. Oceaneering also used WASP atmospheric diving suits.
A dive team from Oceaneering salvaged three of the four propellers from the RMS Lusitania in 1982.
From 1984 to 1988, Michael L. Gernhardt served as Oceaneering's Manager and then Vice President of Special Projects. He led the development of a telerobotic system for subsea platform cleaning and inspection, and of a variety of new diver and robot tools. In 1988, he founded Oceaneering Space Systems, to transfer subsea technology and operational experience to the ISS program.
After the 1986 Space Shuttle Challenger disaster, Oceaneering teams recovered the Solid Rocket Booster that contained the faulty O-ring responsible for launch's failure.
Oceaneering was a NASDAQ listed company until 1991, when they moved to the New York Stock Exchange.
Oceaneering ROVs were used to determine what happened to the cargo ship Lucona in the 1991 murder and fraud investigation that claimed uranium mining equipment was lost when the vessel went down.
Recovery of the airplane cockpit voice recorder in the loss of ValuJet Flight 592 was a priority in early 1996. In the days following the loss of TWA Flight 800 later that same year, Oceaneering was contacted to provide ROV support to the US Navy lead search and recovery effort.
Boeing and Fugro teamed up with Oceaneering in 2001 to begin integration of their advanced technology into deep sea exploration.
Oceaneering helped recover the Confederate submarine H. L. Hunley, which sank in 1864. Several recovery plans were evaluated; the final recovery included a truss structure with foam to surround the body of the submarine. On August 8, 2000, at 8:37 a.m., the sub broke the surface for the first time in 136 years.
On August 2, 2006, NASA announced it would issue a Request for Proposal (RFP) for the design, development, certification, production and sustaining engineering of the Constellation Space Suit to meet the needs of the Constellation Program. On June 11, 2008, NASA awarded a USD$745 million contract to Oceaneering for the creation and manufacture of this new space suit.
In 2006, NAVSEA awarded Oceaneering a maintenance contract for the Dry Deck Shelter program. Dry Deck Shelters are used to transport equipment such as the Advanced SEAL Delivery System and Combat Rubber Raiding Craft aboard a submarine.
In 2009, Oceaneering installed a demonstrator crane aboard the SS Flickertail State to evaluate its performance in transferring containers between two moving ships, in an operational environment using commercial and oil industry at-sea mooring techniques in the Gulf of Mexico. Developed in conjunction with the Sea Warfare and Weapons Department in the Office of Naval Research, the crane has sensors and cameras as well as motion-sensing algorithms that automatically compensate for the rolling and pitching of the sea, making it much easier for operators to center it over and transfer cargo.
Oceaneering teamed up with the Canadian company GRI Simulations to design and produce the ROV simulators they utilize for training, development of procedures, and equipment staging. After a dispute over theft of trade secrets and copyright infringement that lasted several years, Oceaneering now licenses the VROV simulator system from GRI Simulations.
A 2009 collaboration with Royal Dutch Shell saw the installation of a wireline at a record of water for repairing a safety valve.
On April 22, 2010, three Oceaneering ROV crews aboard the Oceaneering vessel Ocean Intervention III, the DOF ASA Skandi Neptune and the Boa International Boa Sub C began to map the seabed and assess the wreckage from the Deepwater Horizon oil spill. The crews reported "large amounts of oil that flowed out." Oceaneering ROV Technician Tyrone Benton was later called as a witness to provide information on the leaks associated with BOP stack investigation, but gave no reason why he later failed to appear in court.
Petrobras, the biggest deepwater oilfield company in the world, placed the largest umbilical order in company history in 2012.
As of 2012, eighty percent of Oceaneering's income has been derived from deepwater work. It is also the world's largest operator of ROVs.
BAE Systems was contracted in October 2013 to build a Jones Act-compliant multi-service vessel to serve Oceaneering's "subsea intervention services in the ultra-deep waters of the Gulf of Mexico", which was delivered in 2019.
Oceaneering was sanctioned by the Chinese government on December 27, 2024 due to arm sales to Taiwan.
Oceaneering Entertainment Systems
The Oceaneering Entertainment Systems (OES) division is an active developer of educational and entertainment technology, such as the Shuttle Launch Experience at the Kennedy Space Center Visitor Complex in Florida. It is based in Orlando, Florida, with an additional site in Hanover, Maryland.
OES was formed in 1992 when Oceaneering International purchased Eastport International, Inc., which specialized in underwater remotely operated vehicles (ROVs) and had recently been contracted by Universal Studios Florida to redesign and build the animatronic sharks for its Jaws attraction. The original animatronics, ride system and control system had malfunctioned, causing the attraction to close soon after its grand opening. After Eastport's acquisition by Oceaneering, the themed attraction work was moved to the new OES division, which completed the Jaws contract.
OES has since developed motion-based dark ride vehicles for Transformers: The Ride at Universal Studios Florida, Justice League: Battle for Metropolis at Six Flags parks, Antarctica: Empire of the Penguin at SeaWorld, and Speed of Magic at Ferrari World Abu Dhabi, among others. It has also developed animatronics for Universal Studios' Jurassic Park and Jaws rides. It has provided custom show-action equipment for various entertainment projects, including Revenge of the Mummy at Universal Studios Orlando, and Curse of DarKastle at Busch Gardens Williamsburg.
In 2014, the Themed Entertainment Association presented their THEA Award to OES for their Revolution Tru-Trackless ride system. In 2013, OES won the THEA for Transformers The Ride 3-D at Universal Studios Hollywood and Singapore, for Ride & Show Systems. In 2008 they won the THEA for Shuttle Launch Experience.
Community outreach
Oceaneering donated a hyperbaric chamber to assist with the treatment on the Miskito Indian population in 1986. They donated a compressor in 1997 that, along with funding from the Divers Alert Network, supported continued medical support of the Miskito population.
In November 2009, Oceaneering donated an ROV to Stavanger Offshore Tekniske Skole, a Norwegian technical college, to facilitate their students' qualification exams. They donated an ROV to South Central Louisiana Technical College in 2011 to support its unique ROV maintenance curriculum.
See also
List of oilfield service companies
:Category:Amusement rides manufactured by Oceaneering International
References
External links
Oceaneering International home page
SEC Filings
Aerospace companies of the United States
American entities subject to Chinese sanctions
Companies based in Houston
Companies listed on the New York Stock Exchange
Technology companies established in 1964
Underwater diving engineering
Offshore engineering
1964 establishments in Texas | Oceaneering International | [
"Engineering"
] | 1,798 | [
"Construction",
"Underwater diving engineering",
"Marine engineering",
"Offshore engineering"
] |
16,934,550 | https://en.wikipedia.org/wiki/Incremental%20rendering | Incremental rendering refers to a feature built into most modern Web browsers. Specifically, this refers to the browser's ability to display a partially downloaded Web page to the user while the browser awaits the remaining files from the server. The advantage to the user is a perceived improvement in responsiveness, both from the Web browser and from the web site.
The purpose of incremental rendering is similar to the purpose of the interlaced JPEG, which improves the presentation speed to the user by quickly displaying a low-resolution version of an image which improves to a high-resolution, rather than an image that slowly paints from top to bottom.
Without incremental rendering, a web browser must wait until the code for a page is fully loaded before it can present content to the user. Earlier web browsers offered something of a compromise - displaying the HTML page once the entire HTML file had been retrieved, and then inserting the images one-by-one as they were retrieved afterwards.
Although the utility of incremental rendering seems intuitively obvious, making it happen is something of an art as well as a science. The sequence in which the various elements of a Web page render is almost never strictly top-to-bottom. The programming that fills in the missing pieces must do a certain amount of guesswork to determine how to best display partial content. Images in particular are virtually always loaded following the HTML page, as the browser must consult the HTML file in order to know which images to request from the server - as the server doesn't present them automatically without the follow-up request. Web designers and web design software often include hints that assist with this process - for example, including the expected heights and widths of images in the HTML code so the browser may allocate the correct amount of screen space before the image is actually retrieved from the server.
References
Web software | Incremental rendering | [
"Technology"
] | 381 | [
"Computing stubs",
"World Wide Web stubs"
] |
15,192,512 | https://en.wikipedia.org/wiki/Thermal%20time%20scale | In astrophysics, the thermal time scale or Kelvin–Helmholtz time scale is the approximate time it takes for a star to radiate away its total kinetic energy content at its current luminosity rate. Along with the nuclear and free-fall (aka dynamical) time scales, it is used to estimate the length of time a particular star will remain in a certain phase of its life and its lifespan if hypothetical conditions are met. In reality, the lifespan of a star is greater than what is estimated by the thermal time scale because as one fuel becomes scarce, another will generally take its place – hydrogen burning gives way to helium burning, which is replaced by carbon burning.
Stellar astrophysics
The size of a star as well as its energy output generally determine a star's thermal lifetime because the measurement is independent of the type of fuel normally found at its center. Indeed, the thermal time scale assumes that there is no fuel at all inside the star and simply predicts the length of time it would take for the resulting change in outputted energy to reach the surface of the star and become visually apparent to an outside observer.
Its value is approximately τth ≈ GM²/(2RL), where G is the gravitational constant, M is the mass of the star, R is the radius of the star, and L is the star's luminosity. As an example, the Sun's thermal time scale is approximately 15.7 million years.
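As a back-of-the-envelope check of the quoted solar value, the snippet below (a sketch; the factor of 1/2 from the virial-theorem estimate of the kinetic energy content and the particular solar constants used are assumptions of this snippet) reproduces a thermal time scale of roughly 15–16 million years for the Sun.

```python
# Kelvin-Helmholtz (thermal) time scale of the Sun, tau ~ G M^2 / (2 R L).
G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
M = 1.989e30    # solar mass, kg
R = 6.957e8     # solar radius, m
L = 3.828e26    # solar luminosity, W
YEAR = 3.156e7  # seconds per year

tau = G * M**2 / (2 * R * L)
print(f"Kelvin-Helmholtz time scale of the Sun: {tau / YEAR / 1e6:.1f} Myr")  # ~15.7
```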
References
Time scales
Stellar astronomy | Thermal time scale | [
"Physics",
"Astronomy"
] | 283 | [
"Physical quantities",
"Time",
"Astronomy stubs",
"Astrophysics",
"Astronomical coordinate systems",
"Astrophysics stubs",
"Spacetime",
"Time scales",
"Astronomical sub-disciplines",
"Stellar astronomy"
] |
15,193,503 | https://en.wikipedia.org/wiki/TCL%20%28GTPase%29 | TCL is a small (~21 kDa) signaling G protein (more specifically a GTPase), and is a member of the Rho family of GTPases.
TCL (TC10-like) shares 85% and 78% amino acid similarity to TC10 and Cdc42, respectively. TCL mRNA is 2.5 kb long and is mainly expressed in heart. In vitro, TCL shows rapid GDP/GTP exchange and displays higher GTP dissociation and hydrolysis rates than TC10. Like other Rac/Cdc42/RhoUV members, GTP-bound TCL interacts with CRIB domains, such as those found in PAK and WASP. TCL produces large and dynamic F-actin-rich ruffles on the dorsal cell membrane in REF-52 fibroblasts. TCL activity is blocked by dominant negative Rac1 and Cdc42 mutants, suggesting a cross-talk between these three Rho GTPases.
TCL is unrelated to TCL1A, a proto-oncogene implicated in the development of T-Cell Leukemias.
See also
TCL1A
References
G proteins | TCL (GTPase) | [
"Chemistry",
"Biology"
] | 241 | [
"G proteins",
"Biotechnology stubs",
"Signal transduction",
"Biochemistry stubs",
"Biochemistry"
] |
15,194,124 | https://en.wikipedia.org/wiki/L-drive | An L-drive is a type of azimuth thruster where the electric motor is mounted vertically, removing the second bevel gear from the drivetrain. Azimuth thruster pods can be rotated through a full 360 degrees, allowing for rapid changes in thrust direction and eliminating the need for a conventional rudder. This form of power transmission is called a L-drive because the rotary motion has to make one right angle turn, thus looking a bit like the letter "L". This name is used to make clear the arrangement of drive is different from Z-drive.
See also
Notes
Marine propulsion | L-drive | [
"Engineering"
] | 122 | [
"Marine propulsion",
"Marine engineering"
] |
15,197,395 | https://en.wikipedia.org/wiki/Return%20ratio | The return ratio of a dependent source in a linear electrical circuit is the negative of the ratio of the current (voltage) returned to the site of the dependent source to the current (voltage) of a replacement independent source. The terms loop gain and return ratio are often used interchangeably; however, they are necessarily equivalent only in the case of a single feedback loop system with unilateral blocks.
Calculating the return ratio
The steps for calculating the return ratio of a source are as follows:
Set all independent sources to zero.
Select the dependent source for which the return ratio is sought.
Place an independent source of the same type (voltage or current) and polarity in parallel with the selected dependent source.
Move the dependent source to the side of the inserted source and cut the two leads joining the dependent source to the independent source.
For a voltage source, the return ratio is minus the ratio of the voltage across the dependent source to the voltage of the independent replacement source.
For a current source, short-circuit the broken leads of the dependent source. The return ratio is minus the ratio of the resulting short-circuit current to the current of the independent replacement source.
Other methods
These steps may not be feasible when the dependent sources inside the devices are not directly accessible, for example when using built-in "black box" SPICE models or when measuring the return ratio experimentally.
For SPICE simulations, one potential workaround is to manually replace non-linear devices by their small-signal equivalent model, with exposed dependent sources. However this will have to be redone if the bias point changes.
A result by Rosenstark shows that return ratio can be calculated by breaking the loop at any unilateral point in the circuit. The problem is now finding how to break the loop without affecting the bias point and altering the results. Middlebrook and Rosenstark have proposed several methods for experimental evaluation of return ratio (loosely referred to by these authors as simply loop gain), and similar methods have been adapted for use in SPICE by Hurst. See Spectrum user note or Roberts, or Sedra, and especially Tuinenga.
Example: Collector-to-base biased bipolar amplifier
Figure 1 (top right) shows a bipolar amplifier with feedback bias resistor Rf driven by a Norton signal source. Figure 2 (left panel) shows the corresponding small-signal circuit obtained by replacing the transistor with its hybrid-pi model. The objective is to find the return ratio of the dependent current source in this amplifier. To reach the objective, the steps outlined above are followed. Figure 2 (center panel) shows the application of these steps up to Step 4, with the dependent source moved to the left of the inserted source of value it, and the leads targeted for cutting marked with an x. Figure 2 (right panel) shows the circuit set up for calculation of the return ratio T, which is T = -ir / it.
The return current is ir = gm vπ.
The feedback current in Rf is found by current division to be: if = it R2 / (R2 + Rf + R1), where R1 and R2 are the equivalent resistances at the base and collector nodes (defined below).
The base-emitter voltage vπ is then, from Ohm's law: vπ = -if R1.
Consequently, T = gm R1 R2 / (R2 + Rf + R1).
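The algebra above can be checked symbolically; the following sketch simply restates the current-divider relations assumed in this example (the variable names mirror the text and are not part of any standard library):

```python
import sympy as sp

# Symbolic check of the return ratio derivation, assuming the topology described above:
# test current i_t injected in place of the dependent source, R2 to ground at the
# collector node, and Rf in series with R1 from the collector node to ground via the base.
gm, R1, R2, Rf, it = sp.symbols('g_m R_1 R_2 R_f i_t', positive=True)

i_f = it * R2 / (R2 + Rf + R1)   # current division into the Rf + R1 branch
v_pi = -i_f * R1                 # base-emitter voltage developed across R1
i_r = gm * v_pi                  # current returned by the dependent source
T = sp.simplify(-i_r / it)       # return ratio, T = -i_r / i_t

print(T)  # g_m*R_1*R_2/(R_1 + R_2 + R_f)
```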
Application in asymptotic gain model
The overall transresistance gain of this amplifier can be shown to be:
with R1 = RS || rπ and R2 = RD || rO.
This expression can be rewritten in the form used by the asymptotic gain model, which expresses the overall gain of a feedback amplifier in terms of several independent factors that are often more easily derived separately than the overall gain itself, and that often provide insight into the circuit. This form is: G = G∞ T / (T + 1) + G0 / (T + 1),
where the so-called asymptotic gain G∞ is the gain at infinite gm, namely:
and the so-called feed forward or direct feedthrough G0 is the gain for zero gm, namely:
For additional applications of this method, see asymptotic gain model and Blackman's theorem.
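A short numerical sketch of the asymptotic gain form, using purely illustrative values for G∞, G0 and T, shows the overall gain approaching G∞ as the return ratio grows:

```python
# Asymptotic gain model: G = Ginf*T/(1+T) + G0/(1+T).
# The values of Ginf, G0 and T below are hypothetical illustrations only.
def overall_gain(g_inf, g_0, t):
    return g_inf * t / (1 + t) + g_0 / (1 + t)

g_inf = -10e3   # asymptotic gain (a transresistance, in ohms)
g_0 = 50.0      # direct feedthrough
for t in (1, 10, 100, 1000):
    print(t, overall_gain(g_inf, g_0, t))
# As T increases the gain tends to Ginf; for small T the feedthrough term matters.
```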
References
See also
Asymptotic gain model
Blackman's theorem
Extra element theorem
Control theory
Signal processing
Electronic feedback | Return ratio | [
"Mathematics",
"Technology",
"Engineering"
] | 809 | [
"Telecommunications engineering",
"Computer engineering",
"Signal processing",
"Applied mathematics",
"Control theory",
"Dynamical systems"
] |
15,202,398 | https://en.wikipedia.org/wiki/Anisotropy%20energy | Anisotropic energy is energy that is directionally specific. The word anisotropy means "directionally dependent", hence the definition. The most common form of anisotropic energy is magnetocrystalline anisotropy, which is commonly studied in ferromagnets. In ferromagnets, there are islands or domains of atoms that are all coordinated in a certain direction; this spontaneous positioning is often called the "easy" direction, indicating that this is the lowest energy state for these atoms. In order to study magnetocrystalline anisotropy, energy (usually in the form of an electric current) is applied to the domain, which causes the crystals to deflect from the "easy" to "hard" positions. The energy required to do this is defined as the anisotropic energy. The easy and hard alignments and their relative energies are due to the interaction between spin magnetic moment of each atom and the crystal lattice of the compound being studied.
See also
Magnetic anisotropy
References
Energy (physics)
Asymmetry
Orientation (geometry)
Magnetic ordering | Anisotropy energy | [
"Physics",
"Chemistry",
"Materials_science",
"Mathematics",
"Engineering"
] | 230 | [
"Physical quantities",
"Quantity",
"Electric and magnetic fields in matter",
"Materials science",
"Energy (physics)",
"Magnetic ordering",
"Topology",
"Space",
"Condensed matter physics",
"Geometry",
"Asymmetry",
"Spacetime",
"Wikipedia categories named after physical quantities",
"Orienta... |
14,100,331 | https://en.wikipedia.org/wiki/Artificial%20silk | Artificial silk or art silk is any synthetic fiber which resembles silk, but typically costs less to produce. Frequently, the term artificial silk is just a synonym for rayon. When made out of bamboo viscose it is also sometimes called bamboo silk.
The first successful artificial silks were developed in the 1890s from cellulose fiber and marketed as art silk or viscose, the latter originally a trade name of a specific manufacturer.
In the 1910s and 1920s, several manufacturers of viscose competed in Europe and the United States to produce what was frequently called artificial silk. In 1924, the name of the fiber was officially changed in the U.S. to Rayon, although the term viscose continued to be used in Europe. The material is commonly referred to in the industry as viscose rayon.
In 1931, Henry Ford hired chemists Robert Boyer and Frank Calvert to produce artificial silk made with soybean fibers. They succeeded in making a textile fiber of spun soy protein fibers, hardened or tanned in a formaldehyde bath, which was given the name Azlon. It was usable for making suits, felt hats, and overcoats. Though pilot production of Azlon reached per day in 1940, it never reached the commercial market; DuPont's nylon became the most important artificial silk.
Although not sold under the name art silk initially, nylon, the first synthetic fiber, was developed in the United States in the late 1930s and was used as a replacement for Japanese silk during World War II. Its properties are far superior to rayon and silk when wet, and so it was used for many military applications, such as parachutes. Although nylon is not a good substitute for silk fabric in appearance, it is a successful functional alternative. DuPont's original plans for nylon to become a cheaper and superior replacement for silk stockings were soon realized, then redirected for military use just two years later during World War II. Nylon became a prominent industrial fiber in a short time frame, permanently replacing silk in many applications.
In the present day, imitation silk may be made with rayon, mercerized cotton, polyester, a blend of these materials, or a blend of rayon and silk.
Despite a generally similar appearance, genuine silk has unique features that are distinguishable from artificial silk. However, in some cases artificial silk can be passed off as real silk to unwary buyers. A number of tests are available to determine a fabric's basic fiber makeup, some of which can be performed prior to purchasing a fabric whose composition is questionable. Tests include rubbing the pile in the hand, burning a small piece of the fringe to check the smell of the ash and the smoke, and dissolving the pile by performing a chemical test.
References
External links
The burn test and other methods for fiber identification:
Fiber Content Tests
Is Your Silk Oriental Rug Made of Real Silk?
See The Stocking Story: You Be the Historian at the Smithsonian website.
Organic polymers
Cellulose
Synthetic fibers
Silk
Woven fabrics | Artificial silk | [
"Chemistry"
] | 597 | [
"Organic compounds",
"Synthetic materials",
"Organic polymers",
"Synthetic fibers"
] |
14,104,872 | https://en.wikipedia.org/wiki/Tetrakis%28dimethylamido%29titanium | Tetrakis(dimethylamino)titanium (TDMAT), also known as Titanium(IV) dimethylamide, is a chemical compound. The compound is generally classified as a metalorganic species, meaning that its properties are strongly influenced by the organic ligands but the compound lacks metal-carbon bonds. It is used in chemical vapor deposition to prepare titanium nitride (TiN) surfaces and in atomic layer deposition as a titanium dioxide precursor. The prefix "tetrakis" refers the presence of four of the same ligand, in this case dimethylamides.
Preparation and properties
Tetrakis(dimethylamino)titanium is a conventional Ti(IV) compound in the sense that it is tetrahedral and diamagnetic. Unlike the many alkoxides, the diorganoamides of titanium are monomeric and thus at least somewhat volatile. It is prepared from titanium tetrachloride (which is also tetrahedral, diamagnetic, and volatile) by treatment with lithium dimethylamide:
TiCl4 + 4 LiNMe2 → Ti(NMe2)4 + 4 LiCl
Like many amido complexes, TDMAT is quite sensitive toward water, and its handling requires air-free techniques. The ultimate products of its hydrolysis are titanium dioxide and dimethylamine:
Ti(NMe2)4 + 2 H2O → TiO2 + 4 HNMe2
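As a simple stoichiometric illustration of this hydrolysis, the mass of TiO2 expected from a given mass of TDMAT can be estimated as follows (the molar masses are standard approximate values, not figures from this article):

```python
# Stoichiometry of Ti(NMe2)4 + 2 H2O -> TiO2 + 4 HNMe2 (one Ti per formula unit).
M_TDMAT = 224.19   # molar mass of Ti(N(CH3)2)4, g/mol (approximate)
M_TIO2 = 79.87     # molar mass of TiO2, g/mol (approximate)

def tio2_from_tdmat(mass_tdmat_g):
    """Mass of TiO2 (g) from complete hydrolysis of the given mass of TDMAT."""
    return mass_tdmat_g / M_TDMAT * M_TIO2

print(tio2_from_tdmat(10.0))  # roughly 3.6 g of TiO2 from 10 g of TDMAT
```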
In a related reaction, the compound undergoes exchange with other amines, evolving dimethylamine.
TDMAT has been used in metalorganic chemical vapor deposition (MOCVD).
Related compounds
Tetrakis(dimethylamido)vanadium (registry number 19824-56-7)
References
Semiconductor device fabrication
Titanium(IV) compounds
Metal amides | Tetrakis(dimethylamido)titanium | [
"Chemistry",
"Materials_science"
] | 372 | [
"Metal amides",
"Coordination chemistry",
"Semiconductor device fabrication",
"Microtechnology"
] |
14,105,333 | https://en.wikipedia.org/wiki/Electric%20power%20system | An electric power system is a network of electrical components deployed to supply, transfer, and use electric power. An example of a power system is the electrical grid that provides power to homes and industries within an extended area. The electrical grid can be broadly divided into the generators that supply the power, the transmission system that carries the power from the generating centers to the load centers, and the distribution system that feeds the power to nearby homes and industries.
Smaller power systems are also found in industry, hospitals, commercial buildings, and homes. A single line diagram helps to represent this whole system. The majority of these systems rely upon three-phase AC power—the standard for large-scale power transmission and distribution across the modern world. Specialized power systems that do not always rely upon three-phase AC power are found in aircraft, electric rail systems, ocean liners, submarines, and automobiles.
History
In 1881, two electricians built the world's first power system at Godalming in England. It was powered by two water wheels and produced an alternating current that in turn supplied seven Siemens arc lamps at 250 volts and 34 incandescent lamps at 40 volts. However, supply to the lamps was intermittent and in 1882 Thomas Edison and his company, Edison Electric Light Company, developed the first steam-powered electric power station on Pearl Street in New York City. The Pearl Street Station initially powered around 3,000 lamps for 59 customers. The power station generated direct current and operated at a single voltage. Direct current power could not be transformed easily or efficiently to the higher voltages necessary to minimize power loss during long-distance transmission, so the maximum economic distance between the generators and load was limited to around half a mile (800 m).
That same year in London, Lucien Gaulard and John Dixon Gibbs demonstrated the "secondary generator"—the first transformer suitable for use in a real power system. The practical value of Gaulard and Gibbs' transformer was demonstrated in 1884 at Turin where the transformer was used to light up of railway from a single alternating current generator. Despite the success of the system, the pair made some fundamental mistakes. Perhaps the most serious was connecting the primaries of the transformers in series so that active lamps would affect the brightness of other lamps further down the line.
In 1885, Ottó Titusz Bláthy working with Károly Zipernowsky and Miksa Déri perfected the secondary generator of Gaulard and Gibbs, providing it with a closed iron core and its present name: the "transformer". The three engineers went on to present a power system at the National General Exhibition of Budapest that implemented the parallel AC distribution system proposed by a British scientist in which several power transformers have their primary windings fed in parallel from a high-voltage distribution line. The system lit more than 1000 carbon filament lamps and operated successfully from May until November of that year.
Also in 1885 George Westinghouse, an American entrepreneur, obtained the patent rights to the Gaulard-Gibbs transformer and imported a number of them along with a Siemens generator, and set his engineers to experimenting with them in hopes of improving them for use in a commercial power system. In 1886, one of Westinghouse's engineers, William Stanley, independently recognized the problem with connecting transformers in series as opposed to parallel and also realized that making the iron core of a transformer a fully enclosed loop would improve the voltage regulation of the secondary winding. Using this knowledge he built a multi-voltage transformer-based alternating-current power system serving multiple homes and businesses at Great Barrington, Massachusetts in 1886. The system was unreliable and short-lived, though, due primarily to generation issues. However, based on that system, Westinghouse would begin installing AC transformer systems in competition with the Edison Company later that year. In 1888, Westinghouse licensed Nikola Tesla's patents for a polyphase AC induction motor and transformer designs. Tesla consulted for a year at the Westinghouse Electric & Manufacturing Company but it took a further four years for Westinghouse engineers to develop a workable polyphase motor and transmission system.
By 1889, the electric power industry was flourishing, and power companies had built thousands of power systems (both direct and alternating current) in the United States and Europe. These networks were effectively dedicated to providing electric lighting. During this time the rivalry between Thomas Edison and George Westinghouse's companies had grown into a propaganda campaign over which form of transmission (direct or alternating current) was superior, a series of events known as the "war of the currents". In 1891, Westinghouse installed the first major power system that was designed to drive a synchronous electric motor, as well as provide electric lighting, at Telluride, Colorado. On the other side of the Atlantic, Mikhail Dolivo-Dobrovolsky and Charles Eugene Lancelot Brown, built the first long-distance () high-voltage (15 kV, then a record) three-phase transmission line from Lauffen am Neckar to Frankfurt am Main for the Electrical Engineering Exhibition in Frankfurt, where power was used to light lamps and run a water pump. In the United States the AC/DC competition came to an end when Edison General Electric was taken over by their chief AC rival, the Thomson-Houston Electric Company, forming General Electric. In 1895, after a protracted decision-making process, alternating current was chosen as the transmission standard with Westinghouse building the Adams No. 1 generating station at Niagara Falls and General Electric building the three-phase alternating current power system to supply Buffalo at 11 kV.
Developments in power systems continued beyond the nineteenth century. In 1936 the first experimental high voltage direct current (HVDC) line using mercury arc valves was built between Schenectady and Mechanicville, New York. HVDC had previously been achieved by series-connected direct current generators and motors (the Thury system) although this suffered from serious reliability issues. The first solid-state metal diode suitable for general power uses was developed by Ernst Presser at TeKaDe in 1928. It consisted of a layer of selenium applied on an aluminum plate.
In 1957, a General Electric research group developed the first thyristor suitable for use in power applications, starting a revolution in power electronics. In that same year, Siemens demonstrated a solid-state rectifier, but it was not until the early 1970s that solid-state devices became the standard in HVDC, when GE emerged as one of the top suppliers of thyristor-based HVDC.
In 1979, a European consortium including Siemens, Brown Boveri & Cie and AEG realized the record HVDC link from Cabora Bassa to Johannesburg, extending more than that carried 1.9 GW at 533 kV.
In recent times, many important developments have come from extending innovations in the information and communications technology (ICT) field to the power engineering field. For example, the development of computers meant load flow studies could be run more efficiently, allowing for much better planning of power systems. Advances in information technology and telecommunication also allowed for effective remote control of a power system's switchgear and generators.
Basics of electric power
Electric power is the product of two quantities: current and voltage. These two quantities can vary with respect to time (AC power) or can be kept at constant levels (DC power).
Most refrigerators, air conditioners, pumps and industrial machinery use AC power, whereas most computers and digital equipment use DC power (digital devices plugged into the mains typically have an internal or external power adapter to convert from AC to DC power). AC power has the advantage of being easy to transform between voltages and is able to be generated and utilised by brushless machinery. DC power remains the only practical choice in digital systems and can be more economical to transmit over long distances at very high voltages (see HVDC).
The ability to easily transform the voltage of AC power is important for two reasons: firstly, power can be transmitted over long distances with less loss at higher voltages. So in power systems where generation is distant from the load, it is desirable to step-up (increase) the voltage of power at the generation point and then step-down (decrease) the voltage near the load. Secondly, it is often more economical to install turbines that produce higher voltages than would be used by most appliances, so the ability to easily transform voltages means this mismatch between voltages can be easily managed.
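The effect of transmission voltage on losses can be illustrated with a back-of-the-envelope calculation; the line resistance and power figures below are hypothetical:

```python
# Ohmic (I^2 * R) losses for the same delivered power at two transmission voltages.
# The power and line resistance values are hypothetical illustrations.
P = 50e6        # power to transmit, W
R_LINE = 1.0    # total conductor resistance, ohms

for v in (33e3, 330e3):            # 33 kV vs 330 kV
    i = P / v                      # line current (power factor ignored)
    loss = i**2 * R_LINE           # ohmic loss in the conductors
    print(f"{v/1e3:.0f} kV: current {i:.0f} A, loss {loss/1e3:.0f} kW")
# Raising the voltage tenfold cuts the current tenfold and the ohmic loss a hundredfold.
```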
Solid-state devices, which are products of the semiconductor revolution, make it possible to transform DC power to different voltages, build brushless DC machines and convert between AC and DC power. Nevertheless, devices utilising solid-state technology are often more expensive than their traditional counterparts, so AC power remains in widespread use.
Components of power systems
Supplies
All power systems have one or more sources of power. For some power systems, the source of power is external to the system but for others, it is part of the system itself—it is these internal power sources that are discussed in the remainder of this section. Direct current power can be supplied by batteries, fuel cells or photovoltaic cells. Alternating current power is typically supplied by a rotor that spins in a magnetic field in a device known as a turbo generator. There have been a wide range of techniques used to spin a turbine's rotor, from steam heated using fossil fuel (including coal, gas and oil) or nuclear energy to falling water (hydroelectric power) and wind (wind power).
The speed at which the rotor spins in combination with the number of generator poles determines the frequency of the alternating current produced by the generator. All generators on a single synchronous system, for example, the national grid, rotate at sub-multiples of the same speed and so generate electric current at the same frequency. If the load on the system increases, the generators will require more torque to spin at that speed and, in a steam power station, more steam must be supplied to the turbines driving them. Thus the steam used and the fuel expended directly relate to the quantity of electrical energy supplied. An exception exists for generators incorporating power electronics such as gearless wind turbines or linked to a grid through an asynchronous tie such as a HVDC link — these can operate at frequencies independent of the power system frequency.
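The standard relationship between rotor speed, the number of poles and electrical frequency, f = poles × rpm / 120, can be sketched as follows (the example machines are hypothetical):

```python
# Electrical frequency of a synchronous generator: f = poles * rpm / 120.
def frequency_hz(poles, rpm):
    return poles * rpm / 120

print(frequency_hz(2, 3000))   # 2-pole turbo generator at 3000 rpm -> 50.0 Hz
print(frequency_hz(4, 1800))   # 4-pole machine at 1800 rpm -> 60.0 Hz
```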
Depending on how the poles are fed, alternating current generators can produce a variable number of phases of power. A higher number of phases leads to more efficient power system operation but also increases the infrastructure requirements of the system. Electricity grid systems connect multiple generators operating at the same frequency: the most common being three-phase at 50 or 60 Hz.
There are a range of design considerations for power supplies. These range from the obvious: How much power should the generator be able to supply? What is an acceptable length of time for starting the generator (some generators can take hours to start)? Is the availability of the power source acceptable (some renewables are only available when the sun is shining or the wind is blowing)? To the more technical: How should the generator start (some turbines act like a motor to bring themselves up to speed in which case they need an appropriate starting circuit)? What is the mechanical speed of operation for the turbine and consequently what are the number of poles required? What type of generator is suitable (synchronous or asynchronous) and what type of rotor (squirrel-cage rotor, wound rotor, salient pole rotor or cylindrical rotor)?
Loads
Power systems deliver energy to loads that perform a function. These loads range from household appliances to industrial machinery. Most loads expect a certain voltage and, for alternating current devices, a certain frequency and number of phases. The appliances found in residential settings, for example, will typically be single-phase operating at 50 or 60 Hz with a voltage between 110 and 260 volts (depending on national standards). An exception exists for larger centralized air conditioning systems as these are now often three-phase because this allows them to operate more efficiently. All electrical appliances also have a wattage rating, which specifies the amount of power the device consumes. At any one time, the net amount of power consumed by the loads on a power system must equal the net amount of power produced by the supplies less the power lost in transmission.
Making sure that the voltage, frequency and amount of power supplied to the loads is in line with expectations is one of the great challenges of power system engineering. However, it is not the only challenge: in addition to the power used by a load to do useful work (termed real power), many alternating current devices also use an additional amount of power because they cause the alternating voltage and alternating current to become slightly out of sync (termed reactive power). The reactive power, like the real power, must balance (that is, the reactive power produced on a system must equal the reactive power consumed) and can be supplied from the generators; however, it is often more economical to supply such power from capacitors (see "Capacitors and reactors" below for more details).
A final consideration with loads has to do with power quality. In addition to sustained overvoltages and undervoltages (voltage regulation issues) as well as sustained deviations from the system frequency (frequency regulation issues), power system loads can be adversely affected by a range of temporal issues. These include voltage sags, dips and swells, transient overvoltages, flicker, high-frequency noise, phase imbalance and poor power factor. Power quality issues occur when the power supply to a load deviates from the ideal. Power quality issues can be especially important when it comes to specialist industrial machinery or hospital equipment.
Conductors
Conductors carry power from the generators to the load. In a grid, conductors may be classified as belonging to the transmission system, which carries large amounts of power at high voltages (typically more than 69 kV) from the generating centres to the load centres, or the distribution system, which feeds smaller amounts of power at lower voltages (typically less than 69 kV) from the load centres to nearby homes and industry.
Choice of conductors is based on considerations such as cost, transmission losses and other desirable characteristics of the metal like tensile strength. Copper, with lower resistivity than aluminum, was once the conductor of choice for most power systems. However, aluminum has a lower cost for the same current carrying capacity and is now often the conductor of choice. Overhead line conductors may be reinforced with steel or aluminium alloys.
Conductors in exterior power systems may be placed overhead or underground. Overhead conductors are usually air insulated and supported on porcelain, glass or polymer insulators. Cables used for underground transmission or building wiring are insulated with cross-linked polyethylene or other flexible insulation. Conductors are often stranded to make them more flexible and therefore easier to install.
Conductors are typically rated for the maximum current that they can carry at a given temperature rise over ambient conditions. As current flow increases through a conductor it heats up. For insulated conductors, the rating is determined by the insulation. For bare conductors, the rating is determined by the point at which the sag of the conductors would become unacceptable.
Capacitors and reactors
The majority of the load in a typical AC power system is inductive; the current lags behind the voltage. Since the voltage and current are out-of-phase, this leads to the emergence of an "imaginary" form of power known as reactive power. Reactive power does no measurable work but is transmitted back and forth between the reactive power source and load every cycle. This reactive power can be provided by the generators themselves but it is often cheaper to provide it through capacitors, hence capacitors are often placed near inductive loads (i.e. if not on-site at the nearest substation) to reduce current demand on the power system (i.e. increase the power factor).
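One common sizing rule for such a capacitor bank follows from the reactive power needed to move a load from one power factor to another; the load figures in the sketch below are hypothetical:

```python
import math

# Reactive power a capacitor bank must supply to raise a load's power factor:
# Qc = P * (tan(phi1) - tan(phi2)). The 500 kW load and power factors are hypothetical.
def capacitor_var(real_power_w, pf_initial, pf_target):
    phi1 = math.acos(pf_initial)
    phi2 = math.acos(pf_target)
    return real_power_w * (math.tan(phi1) - math.tan(phi2))

print(capacitor_var(500e3, 0.75, 0.95))  # roughly 2.8e5 VAr (about 280 kVAr)
```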
Reactors consume reactive power and are used to regulate voltage on long transmission lines. In light load conditions, where the loading on transmission lines is well below the surge impedance loading, the efficiency of the power system may actually be improved by switching in reactors. Reactors installed in series in a power system also limit rushes of current flow; small reactors are therefore almost always installed in series with capacitors to limit the current rush associated with switching in a capacitor. Series reactors can also be used to limit fault currents.
Capacitors and reactors are switched by circuit breakers, which results in sizeable step changes of reactive power. A solution to this comes in the form of synchronous condensers, static VAR compensators and static synchronous compensators. Briefly, synchronous condensers are synchronous motors that spin freely to generate or absorb reactive power. Static VAR compensators work by switching in capacitors using thyristors as opposed to circuit breakers allowing capacitors to be switched-in and switched-out within a single cycle. This provides a far more refined response than circuit-breaker-switched capacitors. Static synchronous compensators take this a step further by achieving reactive power adjustments using only power electronics.
Power electronics
Power electronics are semiconductor based devices that are able to switch quantities of power ranging from a few hundred watts to several hundred megawatts. Despite their relatively simple function, their speed of operation (typically in the order of nanoseconds) means they are capable of a wide range of tasks that would be difficult or impossible with conventional technology. The classic function of power electronics is rectification, or the conversion of AC-to-DC power, power electronics are therefore found in almost every digital device that is supplied from an AC source either as an adapter that plugs into the wall (see photo) or as component internal to the device. High-powered power electronics can also be used to convert AC power to DC power for long distance transmission in a system known as HVDC. HVDC is used because it proves to be more economical than similar high voltage AC systems for very long distances (hundreds to thousands of kilometres). HVDC is also desirable for interconnects because it allows frequency independence thus improving system stability. Power electronics are also essential for any power source that is required to produce an AC output but that by its nature produces a DC output. They are therefore used by photovoltaic installations.
Power electronics also feature in a wide range of more exotic uses. They are at the heart of all modern electric and hybrid vehicles—where they are used for both motor control and as part of the brushless DC motor. Power electronics are also found in practically all modern petrol-powered vehicles, this is because the power provided by the car's batteries alone is insufficient to provide ignition, air-conditioning, internal lighting, radio and dashboard displays for the life of the car. So the batteries must be recharged while driving—a feat that is typically accomplished using power electronics.
Some electric railway systems also use DC power and thus make use of power electronics to feed grid power to the locomotives and often for speed control of the locomotive's motor. In the middle twentieth century, rectifier locomotives were popular; these used power electronics to convert AC power from the railway network for use by a DC motor. Today most electric locomotives are supplied with AC power and run using AC motors, but still use power electronics to provide suitable motor control. The use of power electronics to assist with the motor control and with starter circuits, in addition to rectification, is responsible for power electronics appearing in a wide range of industrial machinery. Power electronics even appear in modern residential air conditioners and are at the heart of the variable-speed wind turbine.
Protective devices
Power systems contain protective devices to prevent injury or damage during failures. The quintessential protective device is the fuse. When the current through a fuse exceeds a certain threshold, the fuse element melts, producing an arc across the resulting gap that is then extinguished, interrupting the circuit. Given that fuses can be built as the weak point of a system, fuses are ideal for protecting circuitry from damage. Fuses, however, have two problems: first, after they have functioned, fuses must be replaced as they cannot be reset. This can prove inconvenient if the fuse is at a remote site or a spare fuse is not on hand. Second, fuses are typically inadequate as the sole safety device in most power systems as they allow current flows well in excess of levels that would prove lethal to a human or animal.
The first problem is resolved by the use of circuit breakers—devices that can be reset after they have broken current flow. In modern systems that use less than about 10 kW, miniature circuit breakers are typically used. These devices combine the mechanism that initiates the trip (by sensing excess current) as well as the mechanism that breaks the current flow in a single unit. Some miniature circuit breakers operate solely on the basis of electromagnetism. In these miniature circuit breakers, the current is run through a solenoid, and, in the event of excess current flow, the magnetic pull of the solenoid is sufficient to force open the circuit breaker's contacts (often indirectly through a tripping mechanism).
In higher powered applications, the protective relays that detect a fault and initiate a trip are separate from the circuit breaker. Early relays worked based upon electromagnetic principles similar to those mentioned in the previous paragraph, modern relays are application-specific computers that determine whether to trip based upon readings from the power system. Different relays will initiate trips depending upon different protection schemes. For example, an overcurrent relay might initiate a trip if the current on any phase exceeds a certain threshold whereas a set of differential relays might initiate a trip if the sum of currents between them indicates there may be current leaking to earth. The circuit breakers in higher powered applications are different too. Air is typically no longer sufficient to quench the arc that forms when the contacts are forced open so a variety of techniques are used. One of the most popular techniques is to keep the chamber enclosing the contacts flooded with sulfur hexafluoride (SF6)—a non-toxic gas with sound arc-quenching properties. Other techniques are discussed in the reference.
The second problem, the inadequacy of fuses to act as the sole safety device in most power systems, is probably best resolved by the use of residual-current devices (RCDs). In any properly functioning electrical appliance, the current flowing into the appliance on the active line should equal the current flowing out of the appliance on the neutral line. A residual current device works by monitoring the active and neutral lines and tripping the active line if it notices a difference. Residual current devices require a separate neutral line for each phase and must be able to trip within a time frame before harm occurs. This is typically not a problem in most residential applications, where standard wiring provides an active and neutral line for each appliance (which is why power plugs always have at least two prongs) and the voltages are relatively low; however, these issues limit the effectiveness of RCDs in other applications such as industry. Even with the installation of an RCD, exposure to electricity can still prove fatal.
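The underlying comparison is simple enough to express in a few lines; the 30 mA threshold used below is a typical personal-protection rating, chosen here purely as an illustration:

```python
# A residual-current device trips when the active and neutral currents differ by more
# than a threshold, indicating that current is leaking to earth through some other path.
TRIP_THRESHOLD_A = 0.030   # 30 mA, a typical personal-protection rating (illustrative)

def rcd_trips(active_current_a, neutral_current_a, threshold_a=TRIP_THRESHOLD_A):
    return abs(active_current_a - neutral_current_a) > threshold_a

print(rcd_trips(10.000, 10.000))  # False: currents balanced, no leakage
print(rcd_trips(10.050, 10.000))  # True: 50 mA imbalance, the device trips
```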
SCADA systems
In large electric power systems, supervisory control and data acquisition (SCADA) is used for tasks such as switching on generators, controlling generator output and switching in or out system elements for maintenance. The first supervisory control systems implemented consisted of a panel of lamps and switches at a central console near the controlled plant. The lamps provided feedback on the state of the plant (the data acquisition function) and the switches allowed adjustments to the plant to be made (the supervisory control function). Today, SCADA systems are much more sophisticated and, due to advances in communication systems, the consoles controlling the plant no longer need to be near the plant itself. Instead, it is now common for plants to be controlled with equipment similar (if not identical) to a desktop computer. The ability to control such plants through computers has increased the need for security—there have already been reports of cyber-attacks on such systems causing significant disruptions to power systems.
Power systems in practice
Despite their common components, power systems vary widely both with respect to their design and how they operate. This section introduces some common power system types and briefly explains their operation.
Residential power systems
Residential dwellings almost always take supply from the low voltage distribution lines or cables that run past the dwelling. These operate at voltages of between 110 and 260 volts (phase-to-earth) depending upon national standards. A few decades ago small dwellings would be fed a single phase using a dedicated two-core service cable (one core for the active phase and one core for the neutral return). The active line would then be run through a main isolating switch in the fuse box and then split into one or more circuits to feed lighting and appliances inside the house. By convention, the lighting and appliance circuits are kept separate so the failure of an appliance does not leave the dwelling's occupants in the dark. All circuits would be fused with an appropriate fuse based upon the wire size used for that circuit. Circuits would have both an active and neutral wire with both the lighting and power sockets being connected in parallel. Sockets would also be provided with a protective earth. This would be made available to appliances to connect to any metallic casing. If this casing were to become live, the theory is the connection to earth would cause an RCD or fuse to trip—thus preventing the future electrocution of an occupant handling the appliance. Earthing systems vary between regions, but in countries such as the United Kingdom and Australia both the protective earth and neutral line would be earthed together near the fuse box before the main isolating switch and the neutral earthed once again back at the distribution transformer.
There have been a number of minor changes over the years to the practice of residential wiring. Some of the most significant ways modern residential power systems in developed countries tend to vary from older ones include:
For convenience, miniature circuit breakers are now almost always used in the fuse box instead of fuses as these can easily be reset by occupants and, if of the thermomagnetic type, can respond more quickly to some types of fault.
For safety reasons, RCDs are now often installed on appliance circuits and, increasingly, even on lighting circuits.
Whereas residential air conditioners of the past might have been fed from a dedicated circuit attached to a single phase, larger centralised air conditioners that require three-phase power are now becoming common in some countries.
Protective earths are now run with lighting circuits to allow for metallic lamp holders to be earthed.
Increasingly residential power systems are incorporating microgenerators, most notably, photovoltaic cells.
Commercial power systems
Commercial power systems such as shopping centers or high-rise buildings are larger in scale than residential systems. Electrical designs for larger commercial systems are usually studied for load flow, short-circuit fault levels and voltage drop. The objectives of the studies are to assure proper equipment and conductor sizing, and to coordinate protective devices so that minimal disruption is caused when a fault is cleared. Large commercial installations will have an orderly system of sub-panels, separate from the main distribution board to allow for better system protection and more efficient electrical installation.
Typically one of the largest appliances connected to a commercial power system in hot climates is the HVAC unit, and ensuring this unit is adequately supplied is an important consideration in commercial power systems. Regulations for commercial establishments place other requirements on commercial systems that are not placed on residential systems. For example, in Australia, commercial systems must comply with AS 2293, the standard for emergency lighting, which requires emergency lighting be maintained for at least 90 minutes in the event of loss of mains supply. In the United States, the National Electrical Code requires commercial systems to be built with at least one 20 A sign outlet in order to light outdoor signage. Building code regulations may place special requirements on the electrical system for emergency lighting, evacuation, emergency power, smoke control and fire protection.
Power system management
Power system management varies depending upon the power system. Residential power systems and even automotive electrical systems are often run-to-fail. In aviation, the power system uses redundancy to ensure availability. On the Boeing 747-400 any of the four engines can provide power and circuit breakers are checked as part of power-up (a tripped circuit breaker indicating a fault). Larger power systems require active management. In industrial plants or mining sites a single team might be responsible for fault management, augmentation and maintenance. For the electric grid, by contrast, management is divided amongst several specialised teams.
Fault management
Fault management involves monitoring the behaviour of the power system so as to identify and correct issues that affect the system's reliability. Fault management can be specific and reactive: for example, dispatching a team to restring conductor that has been brought down during a storm. Or, alternatively, can focus on systemic improvements: such as the installation of reclosers on sections of the system that are subject to frequent temporary disruptions (as might be caused by vegetation, lightning or wildlife).
Maintenance and augmentation
In addition to fault management, power systems may require maintenance or augmentation. As it is often neither economical nor practical for large parts of the system to be offline during this work, power systems are built with many switches. These switches allow the part of the system being worked on to be isolated while the rest of the system remains live. At high voltages, there are two switches of note: isolators and circuit breakers. Circuit breakers are load-breaking switches, whereas operating isolators under load would lead to unacceptable and dangerous arcing. In a typical planned outage, several circuit breakers are tripped to allow the isolators to be switched before the circuit breakers are again closed to reroute power around the isolated area. This allows work to be completed on the isolated area.
Frequency and voltage management
Beyond fault management and maintenance one of the main difficulties in power systems is that the active power consumed plus losses must equal the active power produced. If load is reduced while generation inputs remain constant the synchronous generators will spin faster and the system frequency will rise. The opposite occurs if load is increased. As such the system frequency must be actively managed primarily through switching on and off dispatchable loads and generation. Making sure the frequency is constant is usually the task of a system operator. Even with frequency maintained, the system operator can be kept occupied ensuring:
Notes
See also
Power system simulation
References
External links
IEEE Power Engineering Society
Power Engineering International Magazine Articles
Power Engineering Magazine Articles
American Society of Power Engineers, Inc.
National Institute for the Uniform Licensing of Power Engineer Inc.
Power engineering
Electric power | Electric power system | [
"Physics",
"Engineering"
] | 6,320 | [
"Physical quantities",
"Energy engineering",
"Power (physics)",
"Electric power",
"Power engineering",
"Electrical engineering"
] |
11,511,542 | https://en.wikipedia.org/wiki/Onboard%20refueling%20vapor%20recovery | An onboard refueling vapor recovery system (ORVR) is a vehicle fuel vapor emission control system that captures volatile organic compounds (VOC, potentially harmful vapors) during refueling. There are two types of vehicle fuel vapor emission control systems: the ORVR, and the Stage II vapor recovery system. Without either of these two systems, fuel vapors trapped inside gas tanks would be released into the atmosphere, each time refueling of the vehicle occurred. However, an ORVR system is able to retain those emissions, delivering them to the vehicle's activated carbon-filled canister and then to dispose of those vapors by adding them to the engine's inlet manifold and the stream of fuel supplying the engine, during normal operation. The goal behind implementing the ORVR system throughout the U.S. is to eventually make the Stage II systems obsolete.
History
William F. Woodcock, William E. Ruhig, Jr., and Loren H. Kline hold the patents for the ORVR system.
According to Freda Fung and Bob Maxwell, the Environmental Protection Agency (EPA) has been controlling emissions in the United States since the 1970s. They implemented regulations which would limit the amount of fuel vapor released into the atmosphere during the refueling of a motor vehicle. Before any EPA mandate was put into action, California devised its own regulations, ahead of every other state by 16 years, when it required the implementation of the Stage II vapor recovery system. The ORVR systems were required but did not take over instantaneously; instead, EPA decided that Stage II control systems were necessary for all areas of non-attainment (an area considered to have air quality worse than the National Ambient Air Quality Standards as defined in the Clean Air Act Amendments of 1970) until the requirement had been dropped by the Clean Air Act of 1990. In the United States, ORVR has been mandated on all passenger cars (phasing in over the 1998-2000 model years) and light trucks up to 10,000 lbs gross vehicle weight rating (phasing in over the 2001-2006 model years) by the EPA.
As the years went by, ORVR systems became so widespread throughout the United States, that Stage II systems were becoming obsolete. On May 9, 2012 EPA Administrators released their final rule making which acknowledged enough ORVR systems were operational to remove further need for Stage II systems. However, it left the option open to those states that felt the use of Stage II was still necessary for their particular area.
Benefits
According to EPA, ORVR vehicles function at 98% efficiency, while the efficiency of the Stage II system ranges from 62% to 92%.
The ORVR system is significantly less expensive than the Stage II system.
The lifespan and durability of the ORVR system is much greater than the Stage II System. That translates into much lower maintenance costs for the ORVR system.
The Stage II system cannot collect diurnal emissions, while the ORVR system can.
Complications between ORVR & Stage II
Vehicle ORVR systems have design characteristics that are not compatible with Stage II vacuum assisted systems. When these two systems work in conjunction, the overall efficiency declines significantly, as compared to each system functioning on its own.
Problem
An ORVR carbon-filled canister (installed on modern vehicles) is designed to capture fuel vapors displaced while refueling, and then to inject them into the intake manifold later on, so that they are burned along with the regular fuel, during normal engine operation. However, a Stage II vapor recovery system, installed on refueling gas station pumps, uses a vacuum to prevent fuel vapors from being released into the atmosphere. The design of the fill pipe seal in ORVR systems prevents fuel vapors from entering the fuel tank fill pipe. That frustrates the purpose of the Stage II nozzle, which was designed to vacuum away any fuel vapors that come up that fill pipe during the refueling process. If the car's own vapor recovery system is working properly, then the Stage II nozzle will only be vacuuming normal fresh air and depositing that into the gas station's underground fuel storage tanks. That ends up causing evaporation of fuel vapors into the atmosphere, because too much pressure builds up in those fuel storage tanks. When that pressure becomes too great, it is released into the atmosphere via a pressure relief pipe.
General components
Carbon canister
The vapors which are displaced from the fuel tank by the incoming fuel are routed via the vapor vent line to the canister and are absorbed by activated carbon. These canisters are made of either steel or plastic. The size of this canister is tailored to accommodate expected evaporative emissions. The emissions occur throughout the day, even when the vehicle is parked.
Fuel tank filler pipe with seal
The fuel tank filler pipe has a seal, either mechanical or liquid, to stop vapors from escaping the filler tube.
A mechanical seal is usually an annular elastomeric material through which the nozzle must pass during refueling, preventing vapors from escaping alongside the nozzle. A liquid seal is created by the design of the filler pipe, which creates a seal with the liquid flowing into the tank. Since the liquid fills the entire pipe, no vapors can escape during refueling. A liquid seal is usually used for smaller vehicles, while the mechanical seal is used for larger vehicles.
Vapor vent line
The vent line is a tube that routes vapors from the fuel tank to the vapor storage device.
Vapor purge line
The vapor purge line directs vapors from the vapor storage device to the engine to be burned, to purge the vapor storage device.
Purge valve
The purge valve controls the vapor purge line, through which the manifold vacuum pulls air through the vapor storage device and purges it of fuel vapors. The electronic control unit (ECU) controls the opening and closing of this valve.
Anti-spitback valve
The anti-spitback valve prevents spilling of vapors and is located in the fillneck.
Fuel tank
The fuel tank necessarily contains fuel vapors (occupying all the space which is not occupied by fuel). These are the vapors that must be contained so they do not escape into the atmosphere.
Vent/rollover valve
The vent/rollover valve provides a method of controlled escape for gasoline vapors during the refueling process. It has a mechanism which closes the vent in the event the vehicle rolls over, to prevent spilling of VOCs or fuel in general. It also acts as a fill limiter.
References
External links
Stage II vapor recovery system
F. Woodcock, William E. Ruhig, Jr., Loren H. Kline
Vehicle emission controls
Pollution control technologies | Onboard refueling vapor recovery | [
"Chemistry",
"Engineering"
] | 1,348 | [
"Pollution control technologies",
"Environmental engineering"
] |
11,513,302 | https://en.wikipedia.org/wiki/Stray%20voltage | Stray voltage is the occurrence of electrical potential between two objects that ideally should not have any voltage difference between them. Small voltages often exist between two grounded objects in separate locations by the normal current flow in the power system. Contact voltage is a better defined term when large voltage appear as a result of a fault. Contact voltage on the enclosure of electrical equipment can appear from a fault in the electrical power system, such as a failure of insulation.
Terminology
The term stray voltage may be used in any case of undesirable elevated electrical potential. More precise terminology gives an indication of the source of the voltage. Neutral to earth voltage (NEV) specifically refers to a difference in potential between a locally grounded object and the grounded return conductor, or neutral, of an electrical system. The neutral is theoretically at 0 V potential, as is any grounded object, but current flowing on the neutral back to the source somewhat elevates the neutral voltage. NEV is the product of current flowing on the neutral and the finite, non-zero impedance of the neutral conductor between a given point and its source, often a distant electrical substation. NEV differs from accidentally-energized objects because it is an unavoidable result of normal system operation, not an accident or a fault in materials or design.
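Because NEV is simply this product of current and impedance, an order-of-magnitude estimate is straightforward; the current and impedance values below are hypothetical:

```python
# Neutral-to-earth voltage (NEV) estimate: V = I_neutral * Z_neutral.
# The current and impedance figures are hypothetical illustrations.
i_neutral = 8.0   # return current on the neutral, amperes
z_neutral = 0.4   # neutral impedance back to the source, ohms

nev = i_neutral * z_neutral
print(f"NEV of roughly {nev:.1f} V between a local ground and the neutral")  # ~3.2 V
```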
Definitions
Official definition (draft)
In 2005, the Institute of Electrical and Electronics Engineers (IEEE) convened Working Group 1695 in an attempt to lay down definitions and guidelines for mitigating the various phenomena referred to as "stray voltage". The working group attempted to distinguish between the terms stray voltage and contact voltage as follows:
Stray voltage is defined as "A voltage resulting from the normal delivery and/or use of electricity (usually smaller than 10 volts) that may be present between two conductive surfaces that can be simultaneously contacted by members of the general public and/or their animals. Stray voltage is caused by primary and/or secondary return current, and power system induced currents, as these currents flow through the impedance of the intended return pathway, its parallel conductive pathways, and conductive loops in close proximity to the power system. Stray voltage is not related to power system faults, and is generally not considered hazardous."
Contact voltage is defined as "A voltage resulting from abnormal power system conditions that may be present between two conductive surfaces that can be simultaneously contacted by members of the general public and/or their animals. Contact voltage is caused by power system fault current as it flows through the impedance of available fault current pathways. Contact voltage is not related to normal system operation and can exist at levels that may be hazardous."
Working definition
In spite of the above definitions, the term stray voltage continues to be used by both utility workers and the general public for all occurrences of unwanted excess electricity. For example, at the annual "Jodie S. Lane Stray Voltage Detection, Mitigation & Prevention Conference", held at the Con Edison headquarters in New York City in April 2009, the presidents of most major utilities from throughout the United States and Canada continued to use stray voltage for all occurrences of unwanted excess electricity. The term contact voltage was used only once, possibly because "contact voltage" is generally the fault of the supply, network, or installation company. Few companies are willing to openly discuss their faults, let alone those that are seen as life-threatening. It would seem that stray voltage is now the common term for all unwanted voltage leakage because it categorises the fault as part of normal operation and so limits liability.
In New York City, a woman, Jodie S. Lane, was electrocuted in January 2004 by a five-foot by eight-foot metal road utility vault plate that was energized by an "improperly insulated wire." In the coverage of her death and the growing concern regarding the role of public utilities in electrical safety in the urban environment, both the media and the New York State regulatory agency used stray voltage for neutral-to-earth voltage (NEV), but conceded that the notoriety of the Lane incident had caused stray voltage to be a term that is well recognized by the public.
The regulator then used stray voltage to refer to any "voltage conditions on electric facilities that should not ordinarily exist. These conditions may be due to one or more factors, including, but not limited to, damaged cables, deteriorated, frayed or missing insulation, improper maintenance, or improper installation." In the same document, the commission accepted NEV to be a naturally occurring condition.
Since then, the term has had at least two very different definitions, which has confused utilities, regulators, and the public. The term "stray voltage" is commonly used for all unwanted electrical leakage, by both the general public and many electrical utility professionals. Other, more esoteric phenomena that result in elevated voltages on normally non-energized surfaces are also referred to as "stray voltage". Examples are voltage from capacitive coupling, current induced by power lines, electromagnetic fields (EMF), lightning, earth potential rise, and problems stemming from open (disconnected) neutrals.
Causes
Coupled voltages
Ungrounded metal objects close to electric field sources such as neon signs or conductors carrying alternating currents may have measurable voltage levels caused by capacitive coupling. Since voltages detected by high-impedance instruments disappear or become greatly reduced when a low impedance is substituted, the effect is sometimes called phantom voltage (or ghost voltage). The term is often used by electricians, and might be seen, for example, when measuring the voltage at a lighting fixture after removing the bulb. It is common to measure phantom voltages of 50–90 V in testing the wiring of ordinary 120 V circuits with a high-impedance instrument. The voltage produced may read almost to the full supply voltage, but the capacitance or mutual inductance between the wires of building wiring systems is typically quite low and incapable of supplying significant amounts of current.
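The voltage-divider behaviour described above can be sketched numerically. Assuming, purely for illustration, a coupling capacitance of a couple of hundred picofarads between a de-energized conductor and a live 120 V, 60 Hz wire, a high-impedance meter reads a large phantom voltage while a low-impedance meter collapses it to almost nothing.

```python
import math

f = 60.0            # Hz, assumed supply frequency
v_source = 120.0    # V, assumed voltage of the nearby live conductor
c_couple = 200e-12  # F, assumed stray coupling capacitance (illustrative only)

z_couple = 1.0 / (2 * math.pi * f * c_couple)   # magnitude of the coupling impedance

for r_meter in (10e6, 10e3):                     # high- vs low-impedance meter
    # Simple voltage divider between the coupling impedance and the meter resistance.
    v_meter = v_source * r_meter / math.hypot(z_couple, r_meter)
    print(f"meter impedance {r_meter:>10.0f} ohm -> reads about {v_meter:6.2f} V")
```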
However, in overhead transmission work on or near high-voltage lines, safety rules require connecting a conductor to earth ground during maintenance, since induced voltages and currents on the conductor could otherwise cause electrocution or serious injury.
Capacitive leakage through insulation
Alternating current is different from direct current in that the current can flow through what would ordinarily seem to be a physical barrier. In a series circuit, a capacitor blocks direct current but passes alternating current.
In power transmission systems, one side of the circuit, known as the neutral, is grounded to dissipate static electricity and to reduce hazardous voltages caused by insulation failure and other electrical faults.
Even a person standing on an insulated surface may get a shock merely by touching the hot wire, because the person's body is capacitively coupled to the ground on which they stand.
Induced voltages
Classical electromagnetic induction can occur when long conductors form an open grounded loop under and parallel to transmission or distribution lines. In those cases, current is induced in the loop when a person makes contact with it and ground. Since this involves real current flow, it is potentially hazardous. This type of induced current occurs most often on long fences and distribution lines built under high-power transmission lines.
Degraded insulation on power conductors
Stray voltage may leak via damaged or degraded insulation. Failing insulation is essentially a high impedance fault which will allow current to flow through any available path to ground, a condition which can cause shocks or fires if left unmitigated. This leakage can occur when there is damage caused by physical, thermal, or chemical stresses to insulation on power lines, especially but not limited to underground or underwater cables. Examples of this damage are swollen or cracked insulation from overheating, abrasions caused by digging or ground seizing, and corrosion damage from salt or oil exposure. Electrical leakage can also result from moisture, salt, dust, and dirt buildup on open-air insulators in overhead power distribution. If the leakage in these cases is severe enough, it can lead to a utility pole fire.
Leakage from single-wire earth return
The term "stray voltage" is used for the gradient (rate of change with respect to distance) of electrical potential in the surface of the soil, associated with single-wire earth return electricity distribution systems used in some rural locations. This gradient is low at points far away from the earth return connections, but increases near the ground rods where the metallic circuit enters the earth.
Neutral return currents through the ground
In three phase four-wire ("wye") electrical power systems, when the load on the phases is not exactly equal, there is some current in the neutral conductor. Because both the primary and secondary of the distribution transformer are grounded, and the primary ground is grounded at more than one point, the earth forms a parallel return path for the neutral current, allowing part of the neutral current to continuously flow through the earth. This arrangement is partially responsible for stray voltage.
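The division of neutral current between the metallic neutral and the parallel earth path can be illustrated with a simple current divider; all resistance and current values in the sketch below are illustrative assumptions.

```python
# Illustrative current divider between the metallic neutral and the parallel earth path.
i_unbalance = 10.0   # A of unbalanced neutral current, assumed
r_neutral = 0.5      # ohm, assumed resistance of the metallic neutral path
r_earth = 4.5        # ohm, assumed resistance of the earth return path

# Current splits in inverse proportion to the path resistances.
i_earth = i_unbalance * r_neutral / (r_neutral + r_earth)
i_metal = i_unbalance - i_earth
print(f"current through the earth: {i_earth:.2f} A, through the neutral conductor: {i_metal:.2f} A")
```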
Stray voltage is a result of the design of a four-wire distribution system and as such has existed as long as such systems have been used. Stray voltage became a problem for the dairy industry some time after electric milking machines were introduced, and large numbers of animals were simultaneously in contact with metal objects grounded to the electric distribution system and the earth. Numerous studies document the causes, physiological effects, and prevention of stray voltage in the farm environment. Today, stray voltage on farms is regulated by state governments and controlled by the design of equipotential planes in areas where livestock eat, drink or give milk. Commercially available neutral isolators also prevent elevated potentials on the utility system neutral from raising the voltage of farm neutral or ground wires.
Railway stray current
Typically a rail transit system will have at least one of the rails act as a return conductor for the traction current. This arrangement is common, based on economic considerations, since it does not require the installation of an additional return conductor. This rail is in contact with the earth at many places throughout its length. Since current will follow every parallel path between source and load, some part of the traction current will also flow through the earth. This is normally referred to as leakage current or stray current. The amount of leaking current depends on the conductance of the return tracks compared to that of the soil, and on the quality of the insulation between the tracks and the soil. Where the railway uses direct current, this stray current can cause damage to other buried metallic objects by electrolysis and accelerate corrosion of metal objects in contact with the soil.
Stray voltage effects
Electrolysis and corrosion
Dissimilar buried metals such as copper and steel can function as the poles of a galvanic cell, using moist soil as the electrolyte. Stray direct currents in soil may counteract the anti-corrosion effect of a cathodic protection system. Design of high voltage direct current transmission systems must take care so that current flowing in the earth does not cause objectionable corrosion to buried objects such as pipelines.
The stray currents from railways create or accelerate the electrolytic corrosion of metallic structures located in the proximity of the transit system. Metal pipes, cables and earthing grids laid in the ground near tracks may have a much shorter usable and safe functional life.
Persons
Small stray voltages may never be noticed and may be detected only by a voltmeter. Larger stray voltages may have a range of effects, from barely perceptible to dangerous electric shocks, or unintended electrical heating resulting in fires. Normally, metal electrical equipment cases are bonded to ground to prevent a shock hazard if energized conductors accidentally contact the case. Where this bonding is not provided or has failed, a severe hazard of electric shock or electrocution is presented when circuit conductors contact the case.
In any situation where energized equipment is in intimate electrical contact with a person or animal (such as swimming pools, surgery, electric milking machines, car washes, laundries, and many others), particular attention must be paid to elimination of stray voltages. Dry intact skin has a higher resistance than wet skin or a wound, so voltages that would otherwise be unnoticed become significant in a wet or surgical situation.
Potential differences between pool water and railings, or shower facilities and grounded drain pipes are common as a result of neutral to earth voltages (NEV). Potential differences can be a major nuisance, but are usually not life-threatening. However, a current carrying conductor with damaged insulation can result in contact voltage in unexpected places. Contact voltage energized metal parts can be very dangerous, and can lead to shock or electrocution. A contact voltage condition can arise spontaneously from mechanical, thermal, or chemical stress on insulation materials, or from unintentional damage from digging activity, freeze-frost seizing, corrosion and collapse of conduit, or even workmanship issues.
Contact voltage energizes objects which are normally safe – metal fences, metal telephone booths, metal street signs, etc. Anywhere buried electric wiring exists, a failure can occur in that wiring and create conditions that allow electricity to flow into the immediate surroundings. Some circuits have protective devices such as circuit breakers or Ground Fault Circuit Interrupters (GFCI) designed to isolate such a fault. However, in the absence of protective devices, a fault will go undetected until it either causes a failure or an energy discharge incident.
Farm animals
Stray voltage can have harmful effects on animal health and productivity. Some dairy farmers have claimed damage to yields or stock caused by it.
Dr. Douglas J. Reinemann, Professor of Biological Systems Engineering at University of Wisconsin–Madison, reported on stray voltages on dairy farms in 2003. Investigation of stray voltage claims must also consider other animal health concerns.
Legal proceedings in Wisconsin
In 2003, the Wisconsin Supreme Court upheld a judgement of $1.2 million against the Wisconsin electrical utility WEPCO in Hoffman v. Wisconsin Electric Power Company. The Hoffman family, dairy farmers near New London, had sued WEPCO after several years of declining production. WEPCO had measured stray-voltage currents on the farm of less than 1 mA, the "level of concern" set by the Public Service Commission of Wisconsin, but the court ruled on procedural grounds that the utility could be found negligent under common law even though it met the state standard. The Hoffmans had presented, the court said, a viable alternative theory that stray voltage had caused them economic harm.
In 2017, a jury sided with farmers Paul and Lyn Halderson, awarding $4.5 million against Xcel Energy. The Haldersons claimed stray voltage from high voltage power lines hurt their 1,000-cow herd and lowered milk production. The jury found that Xcel subsidiary Northern States Power Company was "negligent with respect to the delivery of electrical service." The jury awarded $4.09 million for economic damages and another $409,000 for "inconvenience, annoyance and loss of use and enjoyment" of property.
Public concerns about stray voltage
In metropolitan areas, stray voltage issues became a concern in the 1990s. Many of these areas have large amounts of aging underground and aboveground electrical distribution equipment in crowded public spaces. Even a low rate of insulation failures or current leakage can result in hazardous exposure to the general public.
Consolidated Edison in New York City has had frequent incidents of stray voltage, including the electrocution death of Jodie S. Lane in 2004, while walking her dog in Manhattan. In 2009, the Jodie S. Lane Public Safety Foundation announced a publicly accessible website with maps showing thousands of reported stray voltage locations in New York City. In addition, the Foundation sponsors the "Jodie S. Lane Stray Voltage Detection, Mitigation & Prevention Conference", an annual meeting attended by power utilities and regulators from around the country to discuss stray voltage detection programs. The Foundation also initiated and advocates regular mobile scanning by utility companies for stray voltage hazards.
In Boston, NSTAR Electric (formerly Boston Edison) has also had problems with hazardous stray voltages, which have killed several dogs during the 1990s. As a result, the City of Boston government started a program to detect, report on, and repair stray voltage hazards.
Toronto Hydro pulled all employees off regular duty on the weekend of January 30, 2009 to deal with ongoing stray voltage problems in the city. This came after as many as five children were shocked though none suffered serious injury. The stray voltage problem had claimed the lives of two dogs in the previous few months.
In March 2013, Californian Simona Wilson won a $4 million lawsuit against her power company after stray voltage from an electrical substation near her house repeatedly shocked her and members of her family whenever they were in the shower.
In an August 17, 2000 decision, United States Social Security Administration Administrative Law Judge Edward Bergtholdt awarded Michael Gunner permanent disability for exposure to stray voltage.
Stray/contact voltage detection
Stray voltage is generally discovered during routine electrical work, or as a result of a customer complaint or shock incident. A growing number of utilities in urban areas now conduct routine periodic and systematic active tests for stray voltage (or more specifically, contact voltage) for public safety reasons. Some incipient electrical faults may also be discovered during routine work or inspection programs which are not specifically focused on stray voltage.
Equipment used to detect stray voltage varies, but common devices are electrical tester pens or electric field detectors, with follow-up testing using a low-impedance voltmeter. Electrical tester pens are hand-held devices which detect a potential difference between the user's hand and the object being tested. They generally indicate on contact with an energized object, if the potential difference is above the sensitivity threshold of the device. Reliability of the test can be affected if the user is at an elevated potential him/herself, or if the user is not making firm contact with a bare hand on the reference terminal of the tester.
Capacitive coupling is the mechanism used by electrical tester pen devices. Because the capacitance between an object and a current source is typically small, only very small currents can flow from the energized source to the coupled object. High-impedance digital or analog voltmeters may measure elevated voltages from non-energized objects from the coupling and in effect provide a misleading reading. For that reason, high-impedance voltage measurements of normally non-energized objects must be verified.
Verification of a voltage reading is performed using a low-impedance voltmeter, which usually has a shunt resistor load bridging the voltmeter terminals. Since very little current can flow from a coupled surface through the small shunt or meter resistance, capacitively coupled voltages will collapse to zero, indicating a harmless "false alarm". By contrast, if an object being tested is in contact with a current source, or coupled by a very large capacitance (possible but unlikely in this context), the voltage will drop only slightly as dictated by Ohm's law. In this latter case, real power is being delivered, indicating a potentially hazardous situation.
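The verification step can be illustrated with a simple source-impedance model: a capacitively coupled "phantom" source has a very high effective source impedance, so loading it with a low-impedance meter collapses the reading, whereas a genuinely energized object with a low source impedance barely sags. All component values below are illustrative assumptions.

```python
def meter_reading(v_open_circuit, z_source, z_meter):
    """Voltage divider between the source impedance and the meter input impedance."""
    return v_open_circuit * z_meter / (z_source + z_meter)

cases = {
    "capacitively coupled (phantom)": 20e6,  # ohms, assumed high source impedance
    "truly energized object":         50.0,  # ohms, assumed low source impedance
}

for name, z_src in cases.items():
    high_z = meter_reading(120.0, z_src, 10e6)  # high-impedance voltmeter
    low_z = meter_reading(120.0, z_src, 1e3)    # low-impedance (shunted) voltmeter
    print(f"{name:32s} high-Z reads {high_z:6.1f} V, low-Z reads {low_z:7.3f} V")
```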
Electric field detectors detect the electric field strength relative to the user's body or mounting platform. By sensing electric field gradients at a distance, they can detect energized objects without making direct contact, making these instruments useful for scanning or screening large areas for potential electrical hazards. A low electric field reading also provides a definitive indication that no objects are energized within a tested area. Electric field detectors respond to all field sources, and any positive indications must be verified with a low-impedance voltmeter to eliminate false positives. Electric field proximity sensing also has other industrial applications from manufacturing to building security.
Since stray voltage cannot be seen, smelled, or heard, there is no easy way for the public to know when a dangerous condition exists. Periodic testing is an important precaution, but it is possible that a dangerous condition can develop without warning.
See also
Disturbance voltage
Earth potential rise
Earthing system
Electrical bonding
Gas leak
Neutral and ground
Shaft voltage
References
External links
University of Wisconsin–Madison Report on Stray Voltage
'Electrified Cover Safeguard' website
'Stray voltage' website from the LaCrosse Tribune, including their award-winning coverage
Self-Help Guide: Stray Voltage Detection, Wisconsin Farm Electric Council (2/1997), well written, for farmer-consumers
Midwest Rural Energy Council Stray Voltage portal
Wisconsin Public Service Stray Voltage site
Public Service Commission of Wisconsin Stray Voltage documents (technical)
Pacific Gas and Electric Power Quality Bulletin No. 2, "Stray Voltage" (10/2004)
First conference about "Stray currents in our environment" - November 29, 2007, Ester Technopole Limoges, France
Stray voltage description and mitigation
Electrical parameters | Stray voltage | [
"Engineering"
] | 4,228 | [
"Electrical engineering",
"Electrical parameters"
] |
11,515,113 | https://en.wikipedia.org/wiki/Scanning%20capacitance%20microscopy | Scanning capacitance microscopy (SCM) is a variety of scanning probe microscopy in which a narrow probe electrode is positioned in contact or close proximity of a sample's surface and scanned. SCM characterizes the surface of the sample using information obtained from the change in electrostatic capacitance between the surface and the probe.
History
The name Scanning Capacitance Microscopy was first used to describe a quality control tool for the RCA/CED (Capacitance Electronic Disc), a video disk technology that was a predecessor of the DVD. It has since been adapted for use in combination with scanned probe microscopes for measuring other systems and materials with semiconductor doping profiling being the most prevalent.
SCM applied to semiconductors uses an ultra-sharp conducting probe (often a Pt/Ir or Co/Cr thin film metal coating applied to an etched silicon probe) to form a metal-insulator-semiconductor (MIS/MOS) capacitor with a semiconductor sample if a native oxide is present. When no oxide is present, a Schottky capacitor is formed. With the probe and surface in contact, a bias applied between the tip and sample generates capacitance variations between them. The capacitance microscopy method developed by Williams et al. used the RCA video disk capacitance sensor connected to the probe to detect the tiny changes in semiconductor surface capacitance (attofarads to femtofarads). The tip is then scanned across the semiconductor's surface while the tip's height is controlled by conventional contact force feedback.
By applying an alternating bias to the metal-coated probe, carriers are alternately accumulated and depleted within the semiconductor's surface layers, changing the tip-sample capacitance. The magnitude of this change in capacitance with the applied voltage gives information about the concentration of carriers (SCM amplitude data), whereas the difference in phase between the capacitance change and the applied, alternating bias carries information about the sign of the charge carriers (SCM phase data). Because SCM functions even through an insulating layer, a finite conductivity is not required to measure the electrical properties.
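A simple one-dimensional MOS picture illustrates why the capacitance swing depends on carrier concentration: the oxide and depletion capacitances act in series, and the depletion width (and hence the swing between accumulation and depletion) shrinks as doping increases. The sketch below uses textbook expressions for silicon; the tip area, oxide thickness, band bending and doping levels are illustrative assumptions, not values from the SCM literature.

```python
import math

eps0 = 8.854e-12             # F/m, vacuum permittivity
eps_si = 11.7 * eps0         # silicon permittivity
eps_ox = 3.9 * eps0          # SiO2 permittivity
q = 1.602e-19                # C, elementary charge

t_ox = 2e-9                  # m, assumed native-oxide thickness
area = math.pi * (10e-9)**2  # m^2, assumed effective tip contact area
c_ox = eps_ox * area / t_ox  # oxide capacitance

for n_dop in (1e21, 1e23, 1e25):   # dopant densities in m^-3 (1e15 to 1e19 cm^-3)
    psi = 0.8                                        # V, assumed band bending in depletion
    w = math.sqrt(2 * eps_si * psi / (q * n_dop))    # depletion width
    c_dep = eps_si * area / w                        # depletion-layer capacitance
    c_total = c_ox * c_dep / (c_ox + c_dep)          # series combination in depletion
    swing = c_ox - c_total                           # capacitance change, accumulation vs depletion
    print(f"doping {n_dop:8.1e} m^-3: depletion width {w*1e9:7.1f} nm, dC about {swing*1e18:6.3f} aF")
```

Lower doping gives a wider depletion layer and therefore a larger capacitance change, which is the qualitative basis for dopant-profile mapping.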
Resolution
On conducting surfaces, the resolution limit is estimated as 2 nm. For high resolution, quick analysis of the capacitance of a capacitor with a rough electrode is required. This SCM resolution is an order of magnitude better than that estimated for the atomic nanoscope; however, as with other kinds of probe microscopy, SCM requires careful preparation of the analyzed surface, which is supposed to be almost flat.
Applications
Owing to the high spatial resolution of SCM, it is a useful nanospectroscopy characterization tool. Some applications of the SCM technique involve mapping the dopant profile in a semiconductor device on a 10 nm scale, quantification of the local dielectric properties in hafnium-based high-k dielectric films grown by an atomic layer deposition method, and the study of the room-temperature resonant electronic structure of individual germanium quantum dots with different shapes.
The high sensitivity of dynamical scanning capacitance microscopy, in which the capacitance signal is modulated periodically by the tip motion of the atomic force microscope (AFM), was used to image compressible and incompressible strips in a two-dimensional electron gas (2DEG) buried 50 nm below an insulating layer in a large magnetic field and at cryogenic temperatures.
References
Scanning probe microscopy | Scanning capacitance microscopy | [
"Chemistry",
"Materials_science"
] | 725 | [
"Nanotechnology",
"Scanning probe microscopy",
"Microscopy"
] |
11,515,823 | https://en.wikipedia.org/wiki/Polariton%20superfluid | Polariton superfluid is predicted to be a state of the exciton-polaritons system that combines the characteristics of lasers with those of excellent electrical conductors. Researchers look for this state in a solid state optical microcavity coupled with quantum well excitons. The idea is to create an ensemble of particles known as exciton-polaritons and trap them.
Wave behavior in this state results in a light beam similar to that from a laser but possibly more energy efficient.
Unlike traditional superfluids that need temperatures of approximately 4 K, the polariton superfluid could in principle be stable at much higher temperatures, and might soon be demonstrable at room temperature. Evidence for polariton superfluidity was reported by Alberto Amo and coworkers, based on the suppressed scattering of the polaritons during their motion.
Although several other researchers are working in the same field, the terminology and conclusions are not completely shared by the different groups. In particular, important properties of superfluids, such as zero viscosity, and of lasers, such as perfect optical coherence, are a matter of debate. However, there is clear indication of quantized vortices when the pump beam has orbital angular momentum.
Furthermore, clear evidence has been demonstrated also for superfluid motion of polaritons, in terms of the Landau criterion and the suppression of scattering from defects when the flow velocity is slower than the speed of sound in the fluid.
The same phenomena have been demonstrated in an organic exciton polariton fluid, representing the first achievement of room-temperature superfluidity of a hybrid fluid of photons and excitons.
See also
Bose–Einstein condensation of polaritons
References
External links
YouTube animation explaining what is polariton in a semiconductor micro-resonator.
Description of the experimental research on polariton fluids at the Institute of Nanotechnologies.
Phases of matter
Superfluidity | Polariton superfluid | [
"Physics",
"Chemistry",
"Materials_science"
] | 406 | [
"Physical phenomena",
"Phase transitions",
"Phases of matter",
"Superfluidity",
"Condensed matter physics",
"Exotic matter",
"Matter",
"Fluid dynamics"
] |
11,516,599 | https://en.wikipedia.org/wiki/Activation%20product | An activation product is a material that has been made radioactive by the process of neutron activation.
Fission products and actinides produced by neutron absorption of nuclear fuel itself are normally referred to by those specific names, and the term activation product is reserved for products of neutron capture by other materials, such as structural components of the nuclear reactor or nuclear bomb, the reactor coolant, control rods or other neutron poisons, or materials in the environment. All of these, however, need to be handled as radioactive waste. Some nuclides originate in more than one way, as activation products or fission products.
Activation products in a reactor's primary coolant loop are a main reason reactors use a chain of two or even three coolant loops linked by heat exchangers.
Fusion reactors will not produce radioactive waste from the fusion product nuclei themselves, which are normally just helium-4, but generate high neutron fluxes, so activation products are a particular concern.
Activation product radionuclides include a number of nuclides tabulated in the original article, with branching fractions taken from the LNHB database and renormalised to sum to 1.0.
References
External links
Handbook on Nuclear Activation Cross-Sections, IAEA, 1974
Nuclear Structure and Decay Databases (nndc.bnl.gov)
New and revised half-life measurements results made by the Radioactivity Group of NIST
HANDBOOK OF NUCLEAR DATA FOR SAFEGUARDS: DATABASE EXTENSIONS, AUGUST 2008
Radiation
Neutron
Radiation effects
Nuclear materials | Activation product | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 294 | [
"Transport phenomena",
"Physical phenomena",
"Materials science",
"Waves",
"Materials",
"Radiation",
"Nuclear and atomic physics stubs",
"Condensed matter physics",
"Nuclear materials",
"Nuclear physics",
"Radiation effects",
"Matter"
] |
11,517,213 | https://en.wikipedia.org/wiki/Jaumea%20carnosa | Jaumea carnosa, known by the common names marsh jaumea, fleshy jaumea, or simply jaumea, is a halophytic salt marsh plant native to the wetlands, coastal sea cliffs and salt marshes of the western coast of North America.
Description
It is a perennial dicotyledon. It has succulent green leaves on soft pinkish-green stems, not unlike ice plant in appearance. Its stems are weak and long. Flowers are yellow and the peduncle is enlarged below the head. It spreads by an extensive rhizome system.
Distribution
Jaumea carnosa ranges from British Columbia to northern Baja California, and can be found in wetlands and salt marshes. Some populations are located on the Channel Islands of California.
References
External links
Jepson Manual Treatment, University of California
United States Department of Agriculture Plants Profile
Calflora photo gallery, University of California
Washington State University, intertidal organisms, Jaumea carnosa (Marsh jaumea) photo and commentary
Paul Slichter, Members of the Sunflower Family Found West of the Cascade Mountains With Flower Heads Consisting of Both Disc and Ray Flowers, Fleshy Jaumea, Marsh Jaumea Jaumea carnosa photos
Tageteae
Flora of California
Flora of Washington (state)
Flora of Oregon
Flora of British Columbia
Flora of Baja California
Plants described in 1831
Halophytes
Salt marsh plants
Flora without expected TNC conservation status | Jaumea carnosa | [
"Chemistry"
] | 296 | [
"Halophytes",
"Salts"
] |
11,518,797 | https://en.wikipedia.org/wiki/Cell%20lists | Cell lists (also sometimes referred to as cell linked-lists) is a data structure in molecular dynamics simulations to find all atom pairs within a given cut-off distance of each other. These pairs are needed to compute the short-range non-bonded interactions in a system, such as Van der Waals forces or the short-range part of the electrostatic interaction when using Ewald summation.
Algorithm
Cell lists work by subdividing the simulation domain into cells with an edge length greater than or equal to the cut-off radius of the interaction to be computed. The particles are sorted into these cells and the interactions are computed between particles in the same or neighbouring cells.
In its most basic form, the non-bonded interactions for a cut-off distance r_c are computed as follows:
for all neighbouring cell pairs (C_a, C_b) do
  for all p_a in C_a do
    for all p_b in C_b do
      if |p_a − p_b| ≤ r_c then
        Compute the interaction between p_a and p_b.
      end if
    end for
  end for
end for
Since the cell edge length is at least r_c in all dimensions, no particles within r_c of each other can be missed.
Given a simulation with N particles with a homogeneous particle density, the number of cells m is proportional to N and inversely proportional to the cut-off radius (i.e. if N increases, so does the number of cells). The average number of particles per cell therefore does not depend on the total number of particles. The cost of interacting two cells is in O((N/m)²). The number of cell pairs is proportional to the number of cells, which is again proportional to the number of particles N. The total cost of finding all pairwise distances within a given cut-off is in O(N), which is significantly better than the O(N²) cost of computing all pairwise distances naively.
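A minimal Python sketch of the basic cell-list loop described above; it is a toy illustration rather than an optimized molecular dynamics code, and the function and variable names are our own.

```python
import itertools
import numpy as np

def cell_list_pairs(positions, box, r_cut):
    """All index pairs (i, j), i < j, with |x_i - x_j| <= r_cut.

    positions: (N, 3) coordinates inside a cubic box of side `box`.
    Cells have an edge length of at least r_cut, so only a cell and its
    26 neighbours need to be searched (no periodic boundaries here).
    """
    n_cells = max(1, int(box // r_cut))       # cells per dimension
    cell_len = box / n_cells
    cells = {}                                 # (ix, iy, iz) -> particle indices
    for i, x in enumerate(positions):
        idx = tuple((x // cell_len).astype(int).clip(0, n_cells - 1))
        cells.setdefault(idx, []).append(i)

    pairs = []
    for idx, members in cells.items():
        for d in itertools.product((-1, 0, 1), repeat=3):
            nidx = tuple(idx[k] + d[k] for k in range(3))
            if any(c < 0 or c >= n_cells for c in nidx):
                continue                       # neighbour cell lies outside the box
            for i in members:
                for j in cells.get(nidx, []):
                    if j > i and np.linalg.norm(positions[i] - positions[j]) <= r_cut:
                        pairs.append((i, j))
    return pairs

rng = np.random.default_rng(0)
pts = rng.uniform(0.0, 10.0, size=(200, 3))
print(len(cell_list_pairs(pts, box=10.0, r_cut=2.5)), "pairs within the cut-off")
```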
Periodic boundary conditions
In most simulations, periodic boundary conditions are used to avoid imposing artificial boundary conditions. Using cell lists, these boundaries can be implemented in two ways.
Ghost cells
In the ghost cells approach, the simulation box is wrapped in an additional layer of cells. These cells contain periodically wrapped copies of the corresponding simulation cells inside the domain.
Although the data—and usually also the computational cost—is doubled for interactions over the periodic boundary, this approach has the advantage of being straightforward to implement and very easy to parallelize, since cells will only interact with their geographical neighbours.
Periodic wrapping
Instead of creating ghost cells, cell pairs that interact over a periodic boundary can also use a periodic correction vector c_ab. This vector, which can be stored or computed for every cell pair (C_a, C_b), contains the correction which needs to be applied to "wrap" one cell around the domain to neighbour the other. The pairwise distance between two particles p_a and p_b is then computed as

  r_ab = |x_a − x_b + c_ab|.

This approach, although more efficient than using ghost cells, is less straightforward to implement (the cell pairs need to be identified over the periodic boundaries and the vector c_ab needs to be computed/stored).
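One common way to realize the periodic correction described above is the minimum-image convention for a cubic box; the following sketch (our own illustration) computes the wrapped distance between two particles.

```python
import numpy as np

def periodic_distance(x_a, x_b, box):
    """Distance between two particles in a cubic periodic box of side `box`.

    The correction vector is the multiple of the box length that wraps the
    separation into the range [-box/2, box/2) in each dimension.
    """
    delta = x_a - x_b
    correction = -box * np.round(delta / box)   # periodic correction vector
    return np.linalg.norm(delta + correction)

a = np.array([0.5, 9.8, 5.0])
b = np.array([9.9, 0.2, 5.0])
print(periodic_distance(a, b, box=10.0))        # about 0.72, not the naive 13.4
```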
Improvements
Despite reducing the computational cost of finding all pairs within a given cut-off distance from O(N²) to O(N), the cell list algorithm listed above still has some inefficiencies.
Consider a computational cell in three dimensions with edge length equal to the cut-off radius r_c. The pairwise distance between all particles in the cell and in one of the neighbouring cells is computed. The cell has 26 neighbours: 6 sharing a common face, 12 sharing a common edge and 8 sharing a common corner. Of all the pairwise distances computed, only about 16% will actually be less than or equal to r_c. In other words, 84% of all pairwise distance computations are spurious.
One way of overcoming this inefficiency is to partition the domain into cells of edge length smaller than r_c. The pairwise interactions are then not just computed between neighboring cells, but between all cells within r_c of each other (first suggested in and implemented and analysed in and ). This approach can be taken to the limit wherein each cell holds at most one single particle, therefore reducing the number of spurious pairwise distance evaluations to zero. This gain in efficiency, however, is quickly offset by the number of cells that need to be inspected for every interaction with a given cell, which, for example in three dimensions, grows cubically with the inverse of the cell edge length. Halving the cell edge length, however, already reduces the number of spurious distance evaluations to 63%.
Another approach is outlined and tested in Gonnet, in which the particles are first sorted along the axis connecting the cell centers. This approach generates only about 40% spurious pairwise distance computations, yet carries an additional cost due to sorting the particles.
See also
Verlet list
References
Molecular dynamics
Computational chemistry
Molecular physics
Computational physics
Numerical differential equations | Cell lists | [
"Physics",
"Chemistry"
] | 923 | [
"Molecular physics",
"Computational physics",
"Molecular dynamics",
"Computational chemistry",
"Theoretical chemistry",
" molecular",
"nan",
"Atomic",
" and optical physics"
] |
11,518,816 | https://en.wikipedia.org/wiki/Verlet%20list | A Verlet list (named after Loup Verlet) is a data structure in molecular dynamics simulations to efficiently maintain a list of all particles within a given cut-off distance of each other.
This method may easily be applied to Monte Carlo simulations. For short-range interactions, a cut-off radius is typically used, beyond which particle interactions are considered "close enough" to zero to be safely ignored. For each particle, a Verlet list is constructed that lists all other particles within the potential cut-off distance, plus some extra distance so that the list may be used for several consecutive Monte Carlo "sweeps" (sets of Monte Carlo steps or moves) before being updated. If we wish to use the same Verlet list n times before updating, then the cut-off distance for inclusion in the Verlet list should be r_c + 2nd, where r_c is the cut-off distance of the potential, and d is the maximum Monte Carlo step (move) of a single particle. Thus, we will spend of order N² time to compute the Verlet lists (N is the total number of particles), but are rewarded with n Monte Carlo "sweeps" of order N instead of N². By optimizing the choice of n it can be shown that Verlet lists reduce the overall cost of the Monte Carlo sweeps to a power of N lower than 2.
Using cell lists to identify the nearest neighbors in O(N) further reduces the computational cost.
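A minimal Python sketch of building and reusing a Verlet neighbour list with an extra "skin" distance, in the spirit of the description above; the parameter values and the skin/2 reuse criterion noted in the comment are our own illustrative choices.

```python
import numpy as np

def build_verlet_list(positions, r_cut, skin):
    """Neighbour list of all pairs within r_cut + skin (O(N^2) build step)."""
    r_list = r_cut + skin
    n = len(positions)
    neighbours = [[] for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            if np.linalg.norm(positions[i] - positions[j]) <= r_list:
                neighbours[i].append(j)
    return neighbours

def interacting_pairs(positions, neighbours, r_cut):
    """Pairs actually within r_cut, found by scanning only the Verlet list."""
    pairs = []
    for i, candidates in enumerate(neighbours):
        for j in candidates:
            if np.linalg.norm(positions[i] - positions[j]) <= r_cut:
                pairs.append((i, j))
    return pairs

rng = np.random.default_rng(1)
pts = rng.uniform(0.0, 10.0, size=(100, 3))
nl = build_verlet_list(pts, r_cut=2.5, skin=0.5)
# The list stays valid as long as no particle has moved more than skin/2
# since it was built; after that the expensive build step must be repeated.
print(len(interacting_pairs(pts, nl, r_cut=2.5)), "pairs within the cut-off")
```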
See also
Verlet integration
Fast multipole method
Molecular mechanics
Software for molecular mechanics modeling
References
External links
Constructing a Neighbour List — from Introduction to Atomistic Simulations course at the University of Helsinki.
Molecular dynamics
Computational chemistry | Verlet list | [
"Physics",
"Chemistry"
] | 324 | [
"Molecular physics",
"Theoretical chemistry stubs",
"Computational physics",
"Molecular dynamics",
"Computational chemistry",
"Computational chemistry stubs",
"Theoretical chemistry",
"Physical chemistry stubs"
] |
11,519,910 | https://en.wikipedia.org/wiki/NanoPutian | NanoPutians are a series of organic molecules whose structural formulae resemble human forms. James Tour's research group designed and synthesized these compounds in 2003 as a part of a sequence on chemical education for young students. The compounds consist of two benzene rings connected via a few carbon atoms as the body, four acetylene units each carrying an alkyl group at their ends which represents the hands and legs, and a 1,3-dioxolane ring as the head. Tour and his team at Rice University used the NanoPutians in their NanoKids educational outreach program. The goal of this program was to educate children in the sciences in an effective and enjoyable manner. They have made several videos featuring the NanoPutians as anthropomorphic animated characters.
Construction of the structures depends on Sonogashira coupling and other synthetic techniques. By replacing the 1,3-dioxolane group with an appropriate ring structure, various other types of putians have been synthesized, e.g. NanoAthlete, NanoPilgrim, and NanoGreenBeret. Placing thiol (R-SH) functional groups at the end of the legs enables them to "stand" on a gold surface.
"NanoPutian" is a portmanteau of nanometer, a unit of length commonly used to measure chemical compounds, and lilliputian, a fictional race of humans in the novel Gulliver's Travels by Jonathan Swift.
Background
NanoKids Educational Outreach Program
While there are no chemical or practical uses for the NanoKid molecule or any of its known derivatives outside of the classroom, James Tour has turned the NanoKid into a lifelike character to educate children in the sciences. The goals of the outreach program, as described on the NanoKids website, are:
“To significantly increase students’ comprehension of chemistry, physics, biology, and materials science at the molecular level."
"To provide teachers with conceptual tools to teach nanoscale science and emerging molecular technology."
"To demonstrate that art and science can combine to facilitate learning for students with diverse learning styles and interests."
"To generate informed interest in nanotechnology that encourages participation in and funding for research in the field.”
To accomplish these goals, several video clips, CDs, as well as interactive computer programs were created. Tour and his team invested over $250,000 into their project. In order to raise the funds for this endeavor, Tour used unrestricted funds from his professorship and small grants from Rice University, the Welch Foundation, the nanotech firm Zyvex, and Texas A&M University. Tour also received $100,000 in 2002 from the Small Grants for Exploratory Research program, a division of the National Science Foundation.
The main characters in the videos are animated versions of the NanoKid. They star in several videos and explain various scientific concepts, such as the periodic table, DNA, and covalent bonding.
Rice conducted several studies into the effectiveness of using the NanoKids materials. These studies found mostly positive results for the use of the NanoKids in the classroom. A 2004–2005 study in two school districts in Ohio and Kentucky found that using NanoKids led to a 10–59% increase in understanding of the material presented. Additionally, it was found that 82% of students found that NanoKids made learning science more interesting.
Synthesis of NanoKid
Upper body of NanoKid
To create the first NanoPutian, dubbed the NanoKid, 1,4-dibromobenzene was iodinated in sulfuric acid. To this product, “arms”, or 3,3-Dimethylbutyne, were then added through Sonogashira coupling. Formylation of this structure was then achieved through using the organolithium reagent n-butyllithium followed by quenching with N,N-dimethylformamide (DMF) to create the aldehyde. 1,2-Ethanediol was added to this structure to protect the aldehyde using p-toluenesulfonic acid as a catalyst. Originally, Chanteau and Tour aimed to couple this structure with alkynes, but this resulted in very low yields of the desired products. To remedy this, the bromide was replaced with iodide through lithium-halogen exchange and quenching by using 1,2-diiodoethane. This created the final structure of the upper body for the NanoKid.
Lower body of NanoKid
The synthesis of NanoPutian’s lower body begins with nitroaniline as a starting material. Addition of Br2 in acetic acid places two equivalents of bromine on the benzene ring. NH2 is an electron donating group, and NO2 is an electron withdrawing group, which both direct bromination to the meta position relative to the NO2 substituent. Addition of NaNO2, H2SO4, and EtOH removes the NH2 substituent. The Lewis acid SnCl2, a reducing agent in THF/EtOH solvent, replaces NO2 with NH2, which is subsequently replaced by iodine upon the addition of NaNO2, H2SO4, and KI to yield 3,5-dibromoiodobenzene. In this step, the Sandmeyer reaction converts the primary amino group (NH2) to a diazonium leaving group (N2), which is subsequently replaced by iodine. Iodine serves as an excellent coupling partner for the attachment of the stomach, which is executed through Sonogashira coupling with trimethylsilylacetylene to yield 3,5-dibromo(trimethylsilylethynyl)benzene. Attachment of the legs replaces the Br substituents with 1-pentyne through another Sonogashira coupling to produce 3,5-(1′-Pentynyl)-1-(trimethylsilylethynyl) benzene. To complete the synthesis of the lower body, the TMS protecting group is removed by selective deprotection through the addition of K2CO3, MeOH, and CH2Cl2 to yield 3,5-(1′-Pentynyl)-1-ethynylbenzene.
Attachment
To attach the upper body of the NanoKid to the lower body, the two components were added to a solution of bis(triphenylphosphine)palladium(II) dichloride, copper(I) iodide, TEA, and THF. This resulted in the final structure of the NanoKid.
Derivatives of NanoKid
Synthesis of NanoProfessionals
NanoProfessionals have alternate molecular structures for the top of the head, and possibly include a hat. Most can be synthesized from the NanoKid by an acetal exchange reaction with the desired 1,2- or 1,3- diol, using p-toluenesulfonic acid as catalyst and heated by microwave irradiation for a few minutes. The ultimate set of products was a recognizably diverse population of NanoPutians: NanoAthlete, NanoPilgrim, NanoGreenBeret, NanoJester, NanoMonarch, NanoTexan, NanoScholar, NanoBaker, and NanoChef.
The majority of the figures are easily recognizable in their most stable conformation. A few have as their stable conformation a less recognizable shape, so these are often drawn in the more recognizable but less stable way. Many liberties were taken in the visual depiction of the head dressings of the NanoPutians. Some products are formed as a mixture of diastereomers—the configuration of the "neck" compared to parts of the "hat".
Synthesis of the NanoKid in upright form
3-Butyn-1-ol was reacted with methanesulfonyl chloride and triethanolamine to produce its mesylate. The mesylate was displaced to make thiolacetate. The thiol was coupled with 3,5-dibromo(trimethylsilylethynyl)benzene to create a free alkyne. The resulting product, 3,5-(4’-thiolacetyl-1’-butynyl)-1-(trimethylsilylethynyl)-benzene, had its trimethylsilyl group removed using tetra-n-butylammonium fluoride (TBAF) and AcOH/Ac2O in THF. The free alkyne was then coupled with the upper body product from the earlier synthesis. This resulted in a NanoKid with protected thiol feet.
To make the NanoKid “stand’, the acetyl protecting groups were removed through the use of ammonium hydroxide in THF to create the free thiols. A gold-plated substrate was then dipped into the solution and incubated for four days. Ellipsometry was used to determine the resulting thickness of the compound, and it was determined that the NanoKid was upright on the substrate.
Synthesis of NanoPutian chain
Synthesis of the upper part of the NanoPutian chain begins with 1,3-dibromo-2,4-diiodobenzene as the starting material. Sonogashira coupling with 4-oxytrimethylsilylbut-1-yne produces 2,5-bis(4-tert-butyldimethylsiloxy-1′-butynyl)-1,4-di-bromobenzene. One of the bromine substituents is converted to an aldehyde through an SN2 reaction with the strong base, n-BuLi, and THF in the aprotic polar solvent, DMF to produce 2,5-bis(4-tert-butyldimethylsiloxy-1′-butynyl)-4-bromobenzaldehyde. Another Sonogashira coupling with 3,5-(1′-Pentynyl)-1-ethynylbenzene attaches the lower body of the NanoPutian. The conversion of the aldehyde group to a diether “head” occurs in two steps. The first step involves addition of ethylene glycol and trimethylsilyl chloride (TMSCl) in CH2Cl2 solvent. Addition of TBAF in THF solvent removes the silyl protecting group.
See also
Nanocar
Nanotechnology
Nanostructure
References
External links
http://cohesion.rice.edu/naturalsciences/nanokids/index.cfm
http://pubs.acs.org/cen/education/8214/8214nanokids.html
Nanotechnology
Dioxolanes
Chemistry education
Benzene derivatives
Alkyne derivatives
Substances discovered in the 2000s | NanoPutian | [
"Materials_science",
"Engineering"
] | 2,278 | [
"Nanotechnology",
"Materials science"
] |
704,160 | https://en.wikipedia.org/wiki/Tree%20%28set%20theory%29 | In set theory, a tree is a partially ordered set (T, <) such that for each t ∈ T, the set {s ∈ T : s < t} is well-ordered by the relation <. Frequently trees are assumed to have only one root (i.e. minimal element), as the typical questions investigated in this field are easily reduced to questions about single-rooted trees.
Definition
A tree is a partially ordered set (poset) (T, <) such that for each t ∈ T, the set {s ∈ T : s < t} is well-ordered by the relation <. In particular, each well-ordered set (T, <) is a tree. For each t ∈ T, the order type of {s ∈ T : s < t} is called the height of t, denoted ht(t, T). The height of T itself is the least ordinal greater than the height of each element of T. A root of a tree T is an element of height 0. Frequently trees are assumed to have only one root. Trees in set theory are often defined to grow downward making the root the greatest node.
Trees with a single root may be viewed as rooted trees in the sense of graph theory in one of two ways: either as a tree (graph theory) or as a trivially perfect graph. In the first case, the graph is the undirected Hasse diagram of the partially ordered set, and in the second case, the graph is simply the underlying (undirected) graph of the partially ordered set. However, if T is a tree of height > ω, then the Hasse diagram definition does not work. For example, the partially ordered set ω + 1 = {0, 1, 2, ..., ω} does not have a Hasse diagram, as there is no predecessor to ω. Hence a height of at most ω is required in this case.
A branch of a tree is a maximal chain in the tree (that is, any two elements of the branch are comparable, and any element of the tree not in the branch is incomparable with at least one element of the branch). The length of a branch is the ordinal that is order isomorphic to the branch. For each ordinal α, the α-th level of T is the set of all elements of T of height α. A tree is a κ-tree, for an ordinal number κ, if and only if it has height κ and every level has cardinality less than the cardinality of κ. The width of a tree is the supremum of the cardinalities of its levels.
Any single-rooted tree of height at most ω forms a meet-semilattice, where the meet (common ancestor) is given by the maximal element of the intersection of ancestors, which exists as the set of ancestors is non-empty, finite and well-ordered, hence has a maximal element. Without a single root, the intersection of ancestors can be empty (two elements need not have common ancestors), for example when the two elements are distinct roots; and if an element has infinitely many ancestors (possible only at heights greater than ω), the set of common ancestors need not have a maximal element.
A subtree of a tree (T, <) is a tree (T′, <) where T′ ⊆ T and T′ is downward closed under <, i.e., if s, t ∈ T with s < t and t ∈ T′, then s ∈ T′.
Set-theoretic properties
There are some fairly simply stated yet hard problems in infinite tree theory. Examples of this are the Kurepa conjecture and the Suslin conjecture. Both of these problems are known to be independent of Zermelo–Fraenkel set theory. By Kőnig's lemma, every ω-tree has an infinite branch. On the other hand, it is a theorem of ZFC that there are uncountable trees with no uncountable branches and no uncountable levels; such trees are known as Aronszajn trees. Given a cardinal number κ, a κ-Suslin tree is a tree of height κ which has no chains or antichains of size κ. In particular, if κ is singular then there exists a κ-Aronszajn tree and a κ-Suslin tree. In fact, for any infinite cardinal κ, every κ-Suslin tree is a κ-Aronszajn tree (the converse does not hold).
The Suslin conjecture was originally stated as a question about certain total orderings but it is equivalent to the statement: Every tree of height ω1 has an antichain of cardinality ω1 or a branch of length ω1.
If (T,<) is a tree, then the reflexive closure ≤ of < is a prefix order on T.
The converse does not hold: for example, the usual order ≤ on the set Z of integers is a total and hence a prefix order, but (Z,<) is not a set-theoretic tree since e.g. the set {n ∈Z: n < 0} has no least element.
Examples of infinite trees
Let κ be an ordinal number, and let X be a set. Let T be the set of all functions f : α → X where α < κ. Define f < g if the domain of f is a proper subset of the domain of g and the two functions agree on the domain of f. Then (T, <) is a set-theoretic tree. Its root is the unique function on the empty set, and its height is κ. The union of all functions along a branch yields a function from κ to X, that is, a generalized sequence of members of X. If κ is a limit ordinal, none of the branches has a maximal element ("leaf").
Each tree data structure in computer science is a set-theoretic tree: for two nodes s, t, define s < t if t is a proper descendant of s. The notions of root, node height, and branch length coincide, while the notions of tree height differ by one.
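A small sketch of this correspondence (illustrative code, not from the article): a rooted tree given by a parent map induces the order "s < t iff t is a proper descendant of s", under which the ancestors of every node form a finite, well-ordered chain, and the height of a node is the length of that chain.

```python
# A rooted tree as a parent map; None marks the root.
parent = {"root": None, "a": "root", "b": "root", "c": "a", "d": "a"}

def ancestors(node):
    """The set {s : s < node} under "s < t iff t is a proper descendant of s"."""
    chain = []
    p = parent[node]
    while p is not None:
        chain.append(p)
        p = parent[p]
    return list(reversed(chain))      # from the root down: a finite well-ordered chain

def height(node):
    """Height of a node = order type (here simply the length) of its set of ancestors."""
    return len(ancestors(node))

for n in parent:
    print(n, "ancestors:", ancestors(n), "height:", height(n))
```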
Infinite trees considered in automata theory (see e.g. tree (automata theory)) are also set-theoretic trees, with a tree height of up to .
A graph-theoretic tree can be turned into a set-theoretic one by choosing a root node r and defining s < t if s ≠ t and s lies on the (unique) undirected path from r to t.
Each Cantor tree, each Kurepa tree, and each Laver tree is a set-theoretic tree.
See also
Tree (descriptive set theory)
Continuous graph
References
Chapter 2, Section 5.
External links
Sets, Models and Proofs by Ieke Moerdijk and Jaap van Oosten, see Definition 3.1 and Exercise 56 on pp. 68–69.
tree (set theoretic) by Henry on PlanetMath
branch by Henry on PlanetMath
example of tree (set theoretic) by uzeromay on PlanetMath
Set theory | Tree (set theory) | [
"Mathematics"
] | 1,386 | [
"Mathematical logic",
"Set theory"
] |
705,600 | https://en.wikipedia.org/wiki/Ces%C3%A0ro%20summation | In mathematical analysis, Cesàro summation (also known as the Cesàro mean or Cesàro limit) assigns values to some infinite sums that are not necessarily convergent in the usual sense. The Cesàro sum is defined as the limit, as n tends to infinity, of the sequence of arithmetic means of the first n partial sums of the series.
This special case of a matrix summability method is named for the Italian analyst Ernesto Cesàro (1859–1906).
The term summation can be misleading, as some statements and proofs regarding Cesàro summation can be said to implicate the Eilenberg–Mazur swindle. For example, it is commonly applied to Grandi's series with the conclusion that the sum of that series is 1/2.
Definition
Let (a_n) be a sequence, and let

  s_k = a_1 + a_2 + ⋯ + a_k

be its k-th partial sum.
The sequence (a_n) is called Cesàro summable, with Cesàro sum A, if, as n tends to infinity, the arithmetic mean of its first n partial sums s_1, s_2, ..., s_n tends to A:

  lim_{n→∞} (s_1 + s_2 + ⋯ + s_n) / n = A.

The value of the resulting limit is called the Cesàro sum of the series ∑ a_n. If this series is convergent, then it is Cesàro summable and its Cesàro sum is the usual sum.
Examples
First example
Let a_n = (−1)^n for n ≥ 0. That is, (a_n) is the sequence

  (1, −1, 1, −1, ...).

Let G denote the series

  G = ∑_{n=0}^{∞} a_n = 1 − 1 + 1 − 1 + 1 − ⋯

The series G is known as Grandi's series.
Let (s_k) denote the sequence of partial sums of G:

  (s_k) = (1, 0, 1, 0, ...).

This sequence of partial sums does not converge, so the series G is divergent. However, G is Cesàro summable. Let (t_n) be the sequence of arithmetic means of the first n partial sums:

  (t_n) = (1, 1/2, 2/3, 1/2, 3/5, 1/2, 4/7, ...).

Then

  lim_{n→∞} t_n = 1/2,

and therefore, the Cesàro sum of the series G is 1/2.
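A quick numerical check of this example (a short illustrative script, not part of the article):

```python
from itertools import accumulate

a = [(-1) ** n for n in range(10000)]          # Grandi's series terms: 1, -1, 1, -1, ...
partial_sums = list(accumulate(a))              # s_k: 1, 0, 1, 0, ...
running = list(accumulate(partial_sums))        # s_1 + ... + s_n for each n
cesaro_means = [s / (n + 1) for n, s in enumerate(running)]

print(cesaro_means[:6])   # 1.0, 0.5, 0.666..., 0.5, 0.6, 0.5
print(cesaro_means[-1])   # very close to 1/2
```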
Second example
As another example, let a_n = n for n ≥ 1. That is, (a_n) is the sequence

  (1, 2, 3, 4, ...).

Let G now denote the series

  G = ∑_{n=1}^{∞} a_n = 1 + 2 + 3 + 4 + ⋯

Then the sequence of partial sums (s_k) is

  (1, 3, 6, 10, ...).

Since the sequence of partial sums grows without bound, the series G diverges to infinity. The sequence of means of partial sums of G is

  (1, 2, 10/3, 5, ...).

This sequence diverges to infinity as well, so G is not Cesàro summable. In fact, for the series of any sequence which diverges to (positive or negative) infinity, the Cesàro method also leads to a sequence of means that diverges likewise, and hence such a series is not Cesàro summable.
(C, α) summation
In 1890, Ernesto Cesàro stated a broader family of summation methods which have since been called (C, α) for non-negative integers α. The (C, 0) method is just ordinary summation, and (C, 1) is Cesàro summation as described above.
The higher-order methods can be described as follows: given a series ∑ a_n, define the quantities

  A_n^{−1} = a_n,  A_n^{α} = ∑_{k=0}^{n} A_k^{α−1}

(where the upper indices do not denote exponents) and define E_n^{α} to be A_n^{α} for the series 1 + 0 + 0 + 0 + ⋯. Then the (C, α) sum of ∑ a_n is denoted by (C, α)-∑ a_n and has the value

  (C, α)-∑_{j=0}^{∞} a_j = lim_{n→∞} A_n^{α} / E_n^{α}

if it exists. This description represents an α-times iterated application of the initial summation method and can be restated as

  (C, α)-∑_{j=0}^{∞} a_j = lim_{n→∞} ∑_{j=0}^{n} [ C(n, j) / C(n + α, j) ] a_j,

where C(·, ·) denotes a binomial coefficient.
Even more generally, for α not a negative integer, let A_n^{α} be implicitly given by the coefficients of the series

  ∑_{n=0}^{∞} A_n^{α} x^n = (∑_{n=0}^{∞} a_n x^n) / (1 − x)^{1+α},

and E_n^{α} as above. In particular, E_n^{α} are the binomial coefficients of the power (1 − x)^{−1−α}. Then the (C, α) sum of ∑ a_n is defined as above.
If ∑ a_n has a (C, α) sum, then it also has a (C, β) sum for every β > α, and the sums agree; furthermore we have a_n = o(n^α) if α ≥ 0 (see little-o notation).
Cesàro summability of an integral
Let α ≥ 0. The integral ∫_0^∞ f(x) dx is (C, α) summable if

  lim_{λ→∞} ∫_0^λ (1 − x/λ)^α f(x) dx

exists and is finite. The value of this limit, should it exist, is the (C, α) sum of the integral. Analogously to the case of the sum of a series, if α = 0, the result is convergence of the improper integral. In the case α = 1, convergence is equivalent to the existence of the limit

  lim_{λ→∞} (1/λ) ∫_0^λ ∫_0^x f(y) dy dx,

which is the limit of means of the partial integrals.
As is the case with series, if an integral is (C, α) summable for some value of α ≥ 0, then it is also (C, β) summable for all β > α, and the value of the resulting limit is the same.
See also
Abel summation
Abel's summation formula
Abel–Plana formula
Abelian and tauberian theorems
Almost convergent sequence
Borel summation
Divergent series
Euler summation
Euler–Boole summation
Fejér's theorem
Hölder summation
Lambert summation
Perron's formula
Ramanujan summation
Riesz mean
Silverman–Toeplitz theorem
Stolz–Cesàro theorem
Cauchy's limit theorem
Summation by parts
References
Bibliography
Summability methods
Means | Cesàro summation | [
"Physics",
"Mathematics"
] | 875 | [
"Means",
"Sequences and series",
"Mathematical analysis",
"Point (geometry)",
"Mathematical structures",
"Geometric centers",
"Summability methods",
"Symmetry"
] |
705,621 | https://en.wikipedia.org/wiki/Web%20Services%20Interoperability | The Web Services Interoperability Organization (WS-I) was an industry consortium created in 2002 and chartered to promote interoperability amongst the stack of web services specifications. WS-I did not define standards for web services; rather, it created guidelines and tests for interoperability.
In July 2010, WS-I joined the OASIS, standardization consortium as a member section.
It operated until December 2017.
The WS-I standards were then maintained by relevant technical committees within OASIS.
It was governed by a board of directors consisting of the founding members (IBM, Microsoft, BEA Systems, SAP, Oracle, Fujitsu, Hewlett-Packard, and Intel) and two elected members (Sun Microsystems and webMethods). After it joined OASIS, other organizations have joined the WS-I technical committee including CA Technologies, JumpSoft and Booz Allen Hamilton.
The organization's deliverables included profiles, sample applications that demonstrate the profiles' use, and test tools to help determine profile conformance.
WS-I Profiles
According to WS-I, a profile is
A set of named web services specifications at specific revision levels, together with a set of implementation and interoperability guidelines recommending how the specifications may be used to develop interoperable web services.
WS-I Basic Profile
WS-I Basic Security Profile
Simple Soap Binding Profile
WS-I Profile Compliance
The WS-I is not a certifying authority; thus, every vendor can claim to be compliant with a profile. However, the use of the test tool is required before a company can claim a product to be compliant. See WS-I Trademarks and Compliance claims requirements
See also
Web Services Resource Framework
OASIS
References
External links
WS-I consortium's Home Page
WS-I OASIS member section Home page (2010-2017, maintained as archive by OASIS)
The Microsoft - WS-I controversy, cnet news, May 2002
Web services
Interoperability | Web Services Interoperability | [
"Engineering"
] | 404 | [
"Telecommunications engineering",
"Interoperability"
] |
705,635 | https://en.wikipedia.org/wiki/Air%20mass%20%28astronomy%29 | In astronomy, air mass or airmass is a measure of the amount of air along the line of sight when observing a star or other celestial source from below Earth's atmosphere . It is formulated as the integral of air density along the light ray.
As it penetrates the atmosphere, light is attenuated by scattering and absorption; the thicker the atmosphere through which it passes, the greater the attenuation. Consequently, celestial bodies appear less bright when nearer the horizon than when nearer the zenith. This attenuation, known as atmospheric extinction, is described quantitatively by the Beer–Lambert law.
"Air mass" normally indicates relative air mass, the ratio of absolute air masses (as defined above) at oblique incidence relative to that at zenith. So, by definition, the relative air mass at the zenith is 1. Air mass increases as the angle between the source and the zenith increases, reaching a value of approximately 38 at the horizon. Air mass can be less than one at an elevation greater than sea level; however, most closed-form expressions for air mass do not include the effects of the observer's elevation, so adjustment must usually be accomplished by other means.
Tables of air mass have been published by numerous authors, including Bemporad, Allen, and Kasten and Young.
Definition
The absolute air mass is defined as:

  σ = ∫ ρ ds,

where ρ is the volumetric density of air and the integral is taken along the oblique light path. Thus σ is a type of oblique column density.
In the vertical direction, the absolute air mass at zenith is:

  σ_zen = ∫ ρ dz.

So σ_zen is a type of vertical column density.
Finally, the relative air mass is:

  X = σ / σ_zen.

Assuming air density to be uniform allows removing it from the integrals. The absolute air mass then simplifies to a product:

  σ = ρ̄ s,

where ρ̄ is the average density and s is the arc length of the oblique light path (and likewise σ_zen = ρ̄ s_zen for the zenith path of length s_zen).
In the corresponding simplified relative air mass, the average density cancels out in the fraction, leading to the ratio of path lengths:

  X = s / s_zen.
Further simplifications are often made, assuming straight-line propagation (neglecting ray bending), as discussed below.
Calculation
Background
The angle of a celestial body with the zenith is the zenith angle (in astronomy, commonly referred to as the zenith distance). A body's angular position can also be given in terms of altitude, the angle above the geometric horizon; the altitude and the zenith angle are thus related by
Atmospheric refraction causes light entering the atmosphere to follow an approximately circular path that is slightly longer than the geometric path. Air mass must take into account the longer path . Additionally, refraction causes a celestial body to appear higher above the horizon than it actually is; at the horizon, the difference between the true zenith angle and the apparent zenith angle is approximately 34 minutes of arc. Most air mass formulas are based on the apparent zenith angle, but some are based on the true zenith angle, so it is important to ensure that the correct value is used, especially near the horizon.
Plane-parallel atmosphere
When the zenith angle is small to moderate, a good approximation is given by assuming a homogeneous plane-parallel atmosphere (i.e., one in which density is constant and Earth's curvature is ignored). The air mass then is simply the secant of the zenith angle :
At a zenith angle of 60°, the air mass is approximately 2. However, because the Earth is not flat, this formula is only usable for zenith angles up to about 60° to 75°, depending on accuracy requirements. At greater zenith angles, the accuracy degrades rapidly, with becoming infinite at the horizon; the horizon air mass in the more realistic spherical atmosphere is usually less than 40.
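A minimal Python sketch of this approximation (function names are my own):

```python
import numpy as np

def airmass_plane_parallel(z_deg):
    """Plane-parallel (flat-Earth) air mass X = sec(z); reasonable only up to
    roughly 60-75 degrees of zenith angle."""
    return 1.0 / np.cos(np.radians(z_deg))

print(airmass_plane_parallel(0.0))    # 1.0 at the zenith
print(airmass_plane_parallel(60.0))   # ~2, as noted above
```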
Interpolative formulas
Many formulas have been developed to fit tabular values of air mass; one by included a simple corrective term:
where is the true zenith angle. This gives usable results up to approximately 80°, but the accuracy degrades rapidly at greater zenith angles. The calculated air mass reaches a maximum of 11.13 at 86.6°, becomes zero at 88°, and approaches negative infinity at the horizon. The plot of this formula on the accompanying graph includes a correction for atmospheric refraction so that the calculated air mass is for apparent rather than true zenith angle.
introduced a polynomial in :
which gives usable results for zenith angles of up to perhaps 85°. As with the previous formula, the calculated air mass reaches a maximum, and then approaches negative infinity at the horizon.
suggested
which gives reasonable results for high zenith angles, with a horizon air mass of 40.
developed
which gives reasonable results for zenith angles of up to 90°, with an air mass of approximately 38 at the horizon. Here the second term is in degrees.
developed
in terms of the true zenith angle , for which he claimed a maximum error (at the horizon) of 0.0037 air mass.
developed
where is apparent altitude in degrees. Pickering claimed his equation to have a tenth the error of near the horizon.
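For reference, the sketch below implements two widely cited fits of this type: a Kasten–Young (1989)-style formula in apparent zenith angle and a Pickering (2002)-style formula in apparent altitude. The coefficients are the commonly quoted ones and are included here as an assumption, since the inline formulas above are not reproduced in this text.

```python
import numpy as np

def airmass_kasten_young(z_deg):
    # Commonly quoted Kasten & Young (1989)-style fit; apparent zenith angle in degrees.
    return 1.0 / (np.cos(np.radians(z_deg)) + 0.50572 * (96.07995 - z_deg) ** -1.6364)

def airmass_pickering(h_deg):
    # Commonly quoted Pickering (2002)-style fit; apparent altitude in degrees.
    return 1.0 / np.sin(np.radians(h_deg + 244.0 / (165.0 + 47.0 * h_deg ** 1.1)))

print(airmass_kasten_young(90.0))  # roughly 38 at the horizon
print(airmass_pickering(0.0))      # roughly 38.7 at the horizon
```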
Atmospheric models
Interpolative formulas attempt to provide a good fit to tabular values of air mass using minimal computational overhead. The tabular values, however, must be determined from measurements or atmospheric models that derive from geometrical and physical considerations of Earth and its atmosphere.
Nonrefracting spherical atmosphere
If atmospheric refraction is ignored, it can be shown from simple geometrical considerations (Schoenberg 1929, 173) that the path of a light ray at zenith angle
through a radially symmetrical atmosphere of height above the Earth is given by
or alternatively,
where is the radius of the Earth.
The relative air mass is then:
Homogeneous atmosphere
If the atmosphere is homogeneous (i.e., density is constant), the atmospheric height follows from hydrostatic considerations as:
where is the Boltzmann constant, is the sea-level temperature, is the molecular mass of air, and is the acceleration due to gravity. Although this is the same as the pressure scale height of an isothermal atmosphere, the implication is slightly different. In an isothermal atmosphere, 37% (1/e) of the atmosphere is above the pressure scale height; in a homogeneous atmosphere, there is no atmosphere above the atmospheric height.
Taking , , and gives . Using Earth's mean radius of 6371 km, the sea-level air mass at the horizon is
The homogeneous spherical model slightly underestimates the rate of increase in air mass near the horizon; a reasonable overall fit to values determined from more rigorous models can be had by setting the air mass to match a value at a zenith angle less than 90°. The air mass equation can be rearranged to give
Matching Bemporad's value of 19.787 at a zenith angle of 88° gives a ratio of Earth radius to atmospheric height of approximately 631.01, and a horizon air mass of approximately 35.54. With the same value for the Earth's radius as above, the corresponding atmospheric height is approximately 10,096 m.
While a homogeneous atmosphere is not a physically realistic model, the approximation is reasonable as long as the scale height of the atmosphere is small compared to the radius of the planet. The model is usable (i.e., it does not diverge or go to zero) at all zenith angles, including those greater than 90° (see ). The model requires comparatively little computational overhead, and if high accuracy is not required, it gives reasonable results.
However, for zenith angles less than 90°, a better fit to accepted values of air mass can be had with several of the interpolative formulas.
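A minimal sketch of the homogeneous spherical model, assuming the standard closed form X = sqrt((R/y)^2 cos^2 z + 2R/y + 1) − (R/y) cos z (the inline equation is not reproduced above); the atmospheric height of 8435 m is the same numerical value as the scale height quoted later for the isothermal model.

```python
import numpy as np

R_EARTH = 6371e3   # mean Earth radius, m
Y_ATM = 8435.0     # homogeneous-atmosphere height, m (assumed; matches the scale height quoted below)

def airmass_homogeneous_sphere(z_deg, r=R_EARTH, y=Y_ATM):
    """Nonrefracting, constant-density spherical atmosphere:
    X = sqrt((r/y)**2 * cos(z)**2 + 2*r/y + 1) - (r/y)*cos(z)."""
    c = (r / y) * np.cos(np.radians(z_deg))
    return np.sqrt(c ** 2 + 2.0 * r / y + 1.0) - c

print(airmass_homogeneous_sphere(0.0))    # 1.0 at the zenith
print(airmass_homogeneous_sphere(90.0))   # ~38.9 at the horizon
```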
Variable-density atmosphere
In a real atmosphere, density is not constant (it decreases with elevation above mean sea level). The absolute air mass for the geometrical light path discussed above becomes, for a sea-level observer,
Isothermal atmosphere
Several basic models for density variation with elevation are commonly used. The simplest, an isothermal atmosphere, gives
where is the sea-level density and is the density scale height. When the limits of integration are zero and infinity, the result is known as the Chapman function. An approximate result is obtained if some high-order terms are dropped, yielding:
An approximate correction for refraction can be made by taking
where is the physical radius of the Earth. At the horizon, the approximate equation becomes
Using a scale height of 8435 m, Earth's mean radius of 6371 km, and including the correction for refraction,
Polytropic atmosphere
The assumption of constant temperature is simplistic; a more realistic model is the polytropic atmosphere, for which
where is the sea-level temperature and is the temperature lapse rate. The density as a function of elevation is
where is the polytropic exponent (or polytropic index). The air mass integral for the polytropic model does not lend itself to a closed-form solution except at the zenith, so the integration usually is performed numerically.
Layered atmosphere
Earth's atmosphere consists of multiple layers with different temperature and density characteristics; common atmospheric models include the International Standard Atmosphere and the US Standard Atmosphere. A good approximation for many purposes is a polytropic troposphere of 11 km height with a lapse rate of 6.5 K/km and an isothermal stratosphere of infinite height , which corresponds very closely to the first two layers of the International Standard Atmosphere. More layers can be used if greater accuracy is required.
Refracting radially symmetrical atmosphere
When atmospheric refraction is considered, ray tracing becomes necessary , and the absolute air mass integral becomes
where is the index of refraction of air at the observer's elevation above sea level, is the index of refraction at elevation above sea level, ,
is the distance from the center of the Earth to a point at elevation , and is distance to the upper limit of the atmosphere at elevation . The index of refraction in terms of density is usually given to sufficient accuracy (Garfinkel 1967) by the Gladstone–Dale relation
Rearrangement and substitution into the absolute air mass integral gives
The quantity is quite small; expanding the first term in parentheses, rearranging several times, and ignoring terms in after each rearrangement, gives
Homogeneous spherical atmosphere with elevated observer
In the figure at right, an observer at O is at an elevation above sea level in a uniform radially symmetrical atmosphere of height . The path length of a light ray at zenith angle is ; is the radius of the Earth. Applying the law of cosines to triangle OAC,
expanding the left- and right-hand sides, eliminating the common terms, and rearranging gives
Solving the quadratic for the path length s, factoring, and rearranging,
The negative sign of the radical gives a negative result, which is not physically meaningful. Using the positive sign, dividing by , and cancelling common terms and rearranging gives the relative air mass:
With the substitutions and , this can be given as
When the observer's elevation is zero, the air mass equation simplifies to
In the limit of grazing incidence, the absolute airmass equals the distance to the horizon. Furthermore, if the observer is elevated, the horizon zenith angle can be greater than 90°.
Nonuniform distribution of attenuating species
Atmospheric models that derive from hydrostatic considerations assume an atmosphere of constant composition and a single mechanism of extinction, which is not quite correct. There are three main sources of attenuation: Rayleigh scattering by air molecules, Mie scattering by aerosols, and molecular absorption (primarily by ozone). The relative contribution of each source varies with elevation above sea level, and the concentrations of aerosols and ozone cannot be derived simply from hydrostatic considerations.
Rigorously, when the extinction coefficient depends on elevation, it must be determined as part of the air mass integral, as described by . A compromise approach often is possible, however. Methods for separately calculating the extinction from each species using closed-form expressions are described in and . The latter reference includes source code for a BASIC program to perform the calculations. Reasonably accurate calculation of extinction can sometimes be done by using one of the simple air mass formulas and separately determining extinction coefficients for each of the attenuating species (, ).
Implications
Air mass and astronomy
In optical astronomy, the air mass provides an indication of the deterioration of the observed image, not only as regards direct effects of spectral absorption, scattering and reduced brightness, but also as an aggregation of visual aberrations, e.g. those resulting from atmospheric turbulence, collectively referred to as the quality of the "seeing". On larger telescopes, such as the WHT and VLT, the atmospheric dispersion can be so severe that it affects the pointing of the telescope to the target. In such cases an atmospheric dispersion compensator is used, which usually consists of two prisms.
The Greenwood frequency and Fried parameter, both relevant for adaptive optics, depend on the air mass above them (or more specifically, on the zenith angle).
In radio astronomy, the air mass (which influences the optical path length) is not relevant. The lower layers of the atmosphere, modeled by the air mass, do not significantly impede radio waves, which are of much lower frequency than optical waves. Instead, some radio waves are affected by the ionosphere in the upper atmosphere. Newer aperture-synthesis radio telescopes are especially affected by this, as they "see" a much larger portion of the sky and thus the ionosphere. In fact, LOFAR needs to explicitly calibrate for these distorting effects, but on the other hand can also study the ionosphere by instead measuring these distortions.
Air mass and solar energy
In some fields, such as solar energy and photovoltaics, air mass is indicated by the acronym AM; additionally, the value of the air mass is often given by appending its value to AM, so that AM1 indicates an air mass of 1, AM2 indicates an air mass of 2, and so on. The region above Earth's atmosphere, where there is no atmospheric attenuation of solar radiation, is considered to have "air mass zero" (AM0).
Atmospheric attenuation of solar radiation is not the same for all wavelengths; consequently, passage through the atmosphere not only reduces intensity but also alters the spectral irradiance. Photovoltaic modules are commonly rated using spectral irradiance for an air mass of 1.5 (AM1.5); tables of these standard spectra are given in ASTM G 173-03. The extraterrestrial spectral irradiance (i.e., that for AM0) is given in ASTM E 490-00a.
For many solar energy applications when high accuracy near the horizon is not required, air mass is commonly determined using the simple secant formula described in .
See also
Air mass (solar energy)
Atmospheric extinction
Beer–Lambert–Bouguer law
Chapman function
Computation of radiowave attenuation in the atmosphere
Diffuse sky radiation
Extinction coefficient
Illuminance
International Standard Atmosphere
Irradiance
Law of atmospheres
Light diffusion
Mie scattering
Path loss
Photovoltaic module
Rayleigh scattering
Solar irradiation
Notes
References
Optical Telescopes of Today and Tomorrow
External links
Reed Meyer's downloadable airmass calculator, written in C (notes in the source code describe the theory in detail)
NASA Astrophysics Data System A source for electronic copies of some of the references.
Astronomical imaging
Observational astronomy
Atmospheric optical phenomena
Optical quantities | Air mass (astronomy) | [
"Physics",
"Astronomy",
"Mathematics"
] | 3,089 | [
"Physical phenomena",
"Earth phenomena",
"Physical quantities",
"Quantity",
"Observational astronomy",
"Optical phenomena",
"Optical quantities",
"Atmospheric optical phenomena",
"Astronomical sub-disciplines"
] |
705,749 | https://en.wikipedia.org/wiki/Intersection%20homology | In topology, a branch of mathematics, intersection homology is an analogue of singular homology especially well-suited for the study of singular spaces, discovered by Mark Goresky and Robert MacPherson in the fall of 1974 and developed by them over the next few years.
Intersection cohomology was used to prove the Kazhdan–Lusztig conjectures and the Riemann–Hilbert correspondence. It is closely related to L2 cohomology.
Goresky–MacPherson approach
The homology groups of a compact, oriented, connected, n-dimensional manifold X have a fundamental property called Poincaré duality: there is a perfect pairing
Classically—going back, for instance, to Henri Poincaré—this duality was understood in terms of intersection theory. An element of
is represented by a j-dimensional cycle. If an i-dimensional and an -dimensional cycle are in general position, then their intersection is a finite collection of points. Using the orientation of X one may assign to each of these points a sign; in other words intersection yields a 0-dimensional cycle. One may prove that the homology class of this cycle depends only on the homology classes of the original i- and -dimensional cycles; one may furthermore prove that this pairing is perfect.
When X has singularities—that is, when the space has places that do not look like —these ideas break down. For example, it is no longer possible to make sense of the notion of "general position" for cycles. Goresky and MacPherson introduced a class of "allowable" cycles for which general position does make sense. They introduced an equivalence relation for allowable cycles (where only "allowable boundaries" are equivalent to zero), and called the group
of i-dimensional allowable cycles modulo this equivalence relation "intersection homology". They furthermore showed that the intersection of an i- and an -dimensional allowable cycle gives an (ordinary) zero-cycle whose homology class is well-defined.
Stratifications
Intersection homology was originally defined on suitable spaces with a stratification, though the groups often turn out to be independent of the choice of stratification. There are many different definitions of stratified spaces. A convenient one for intersection homology is an n-dimensional topological pseudomanifold. This is a (paracompact, Hausdorff) space X that has a filtration
of X by closed subspaces such that:
For each i and for each point x of , there exists a neighborhood of x in X, a compact -dimensional stratified space L, and a filtration-preserving homeomorphism . Here is the open cone on L.
.
is dense in X.
If X is a topological pseudomanifold, the i-dimensional stratum of X is the space .
Examples:
If X is an n-dimensional simplicial complex such that every simplex is contained in an n-simplex and every (n−1)-simplex is contained in exactly two n-simplexes, then the underlying space of X is a topological pseudomanifold.
If X is any complex quasi-projective variety (possibly with singularities) then its underlying space is a topological pseudomanifold, with all strata of even dimension.
Perversities
Intersection homology groups depend on a choice of perversity , which measures how far cycles are allowed to deviate from transversality. (The origin of the name "perversity" was explained by .) A perversity is a function
from integers to the integers such that
.
.
The second condition is used to show invariance of intersection homology groups under change of stratification.
The complementary perversity of is the one with
.
Intersection homology groups of complementary dimension and complementary perversity are dually paired.
Examples of perversities
The minimal perversity has . Its complement is the maximal perversity with .
The (lower) middle perversity m is defined by , the integer part of . Its complement is the upper middle perversity, with values . If the perversity is not specified, then one usually means the lower middle perversity. If a space can be stratified with all strata of even dimension (for example, any complex variety) then the intersection homology groups are independent of the values of the perversity on odd integers, so the upper and lower middle perversities are equivalent.
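Since the inline formulas above are not reproduced in this text, the following LaTeX block recalls the standard Goresky–MacPherson forms of these perversities; it is included as a reference assumption, not as a quotation of the article.

```latex
% Perversity conditions: \bar p(2) = 0 and \bar p(k) \le \bar p(k+1) \le \bar p(k) + 1.
\begin{align*}
  \text{minimal (zero) perversity:} \quad & \bar 0(k) = 0,\\
  \text{maximal (top) perversity:}  \quad & \bar t(k) = k - 2,\\
  \text{lower middle perversity:}   \quad & \bar m(k) = \left\lfloor \tfrac{k-2}{2} \right\rfloor,\\
  \text{upper middle perversity:}   \quad & \bar n(k) = \left\lceil \tfrac{k-2}{2} \right\rceil,
  \qquad k \ge 2.
\end{align*}
```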
Singular intersection homology
Fix a topological pseudomanifold X of dimension n with some stratification, and a perversity p.
A map σ from the standard i-simplex to X (a singular simplex) is called allowable if
is contained in the skeleton of .
The chain complex is a subcomplex of the complex of singular chains on X that consists of all singular chains such that both the chain and its boundary are linear combinations of allowable singular simplexes. The singular intersection homology groups (with perversity p)
are the homology groups of this complex.
If X has a triangulation compatible with the stratification, then simplicial intersection homology groups can be defined in a similar way, and are naturally isomorphic to the singular intersection homology groups.
The intersection homology groups are independent of the choice of stratification of X.
If X is a topological manifold, then the intersection homology groups (for any perversity) are the same as the usual homology groups.
Small resolutions
A resolution of singularities
of a complex variety Y is called a small resolution if for every r > 0, the space of points of Y where the fiber has dimension r is of codimension greater than 2r. Roughly speaking, this means that most fibers are small. In this case the morphism induces an isomorphism from the (intersection) homology of X to the intersection homology of Y (with the middle perversity).
There is a variety with two different small resolutions that have different ring structures on their cohomology, showing that there is in general no natural ring structure on intersection (co)homology.
Sheaf theory
Deligne's formula for intersection cohomology states that
where is the intersection complex, a certain complex of constructible sheaves on X (considered as an element of the derived category, so the cohomology on the right means the hypercohomology of the complex). The complex is given by starting with the constant sheaf on the open set and repeatedly extending it to larger open sets and then truncating it in the derived category; more precisely it is given by Deligne's formula
where is a truncation functor in the derived category, is the inclusion of into , and is the constant sheaf on .
By replacing the constant sheaf on with a local system, one can use Deligne's formula to define intersection cohomology with coefficients in a local system.
Examples
Given a smooth elliptic curve defined by a cubic homogeneous polynomial , such as , the affine cone has an isolated singularity at the origin since and all partial derivatives vanish. This is because it is homogeneous of degree 3, and the derivatives are homogeneous of degree 2. Setting and the inclusion map, the intersection complex is given as
This can be computed explicitly by looking at the stalks of the cohomology. At where the derived pushforward is the identity map on a smooth point, hence the only possible cohomology is concentrated in degree . For the cohomology is more interesting since
for where the closure of contains the origin . Since any such can be refined by considering the intersection of an open disk in with , we can just compute the cohomology . This can be done by observing is a bundle over the elliptic curve , the hyperplane bundle, and the Wang sequence gives the cohomology groupshence the cohomology sheaves at the stalk are
Truncating this gives the nontrivial cohomology sheaves , hence the intersection complex has cohomology sheaves
Properties of the complex IC(X)
The complex ICp(X) has the following properties
On the complement of some closed set of codimension 2, we have
is 0 for i + m ≠ 0, and for i = −m the groups form the constant local system C
is 0 for i + m < 0
If i > 0 then is zero except on a set of codimension at least a for the smallest a with p(a) ≥ m − i
If i > 0 then is zero except on a set of codimension at least a for the smallest a with q(a) ≥ (i)
As usual, q is the complementary perversity to p. Moreover, the complex is uniquely characterized by these conditions, up to isomorphism in the derived category. The conditions do not depend on the choice of stratification, so this shows that intersection cohomology does not depend on the choice of stratification either.
Verdier duality takes ICp to ICq shifted by n = dim(X) in the derived category.
See also
Decomposition theorem
Borel–Moore homology
Topologically stratified space
Intersection theory
Perverse sheaf
Mixed Hodge structure
References
Armand Borel, Intersection Cohomology. Progress in Mathematics, Birkhauser Boston
Mark Goresky and Robert MacPherson, La dualité de Poincaré pour les espaces singuliers. C.R. Acad. Sci. t. 284 (1977), pp. 1549–1551 Serie A .
Goresky, Mark; MacPherson, Robert, Intersection homology theory, Topology 19 (1980), no. 2, 135–162.
Goresky, Mark; MacPherson, Robert, Intersection homology. II, Inventiones Mathematicae 72 (1983), no. 1, 77–129. 10.1007/BF01389130 This gives a sheaf-theoretic approach to intersection cohomology.
Frances Kirwan, Jonathan Woolf, An Introduction to Intersection Homology Theory
Kleiman, Steven. The development of intersection homology theory. A Century of Mathematics in America, Part II, Hist. Math. 2, Amer. Math. Soc., 1989, pp. 543–585.
External links
What is the etymology of the term "perverse sheaf"? (includes discussion on the etymology of the term "intersection homology") – MathOverflow
Intersection theory
Algebraic topology
Generalized manifolds
Duality theories
Cohomology theories | Intersection homology | [
"Mathematics"
] | 2,161 | [
"Mathematical structures",
"Algebraic topology",
"Fields of abstract algebra",
"Topology",
"Category theory",
"Duality theories",
"Geometry"
] |
706,247 | https://en.wikipedia.org/wiki/Electronic%20band%20structure | In solid-state physics, the electronic band structure (or simply band structure) of a solid describes the range of energy levels that electrons may have within it, as well as the ranges of energy that they may not have (called band gaps or forbidden bands).
Band theory derives these bands and band gaps by examining the allowed quantum mechanical wave functions for an electron in a large, periodic lattice of atoms or molecules. Band theory has been successfully used to explain many physical properties of solids, such as electrical resistivity and optical absorption, and forms the foundation of the understanding of all solid-state devices (transistors, solar cells, etc.).
Why bands and band gaps occur
The formation of electronic bands and band gaps can be illustrated with two complementary models for electrons in solids. The first one is the nearly free electron model, in which the electrons are assumed to move almost freely within the material. In this model, the electronic states resemble free electron plane waves, and are only slightly perturbed by the crystal lattice. This model explains the origin of the electronic dispersion relation, but the explanation for band gaps is subtle in this model.
The second model starts from the opposite limit, in which the electrons are tightly bound to individual atoms. The electrons of a single, isolated atom occupy atomic orbitals with discrete energy levels. If two atoms come close enough so that their atomic orbitals overlap, the electrons can tunnel between the atoms. This tunneling splits (hybridizes) the atomic orbitals into molecular orbitals with different energies.
Similarly, if a large number of identical atoms come together to form a solid, such as a crystal lattice, the atoms' atomic orbitals overlap with the nearby orbitals. Each discrete energy level splits into as many levels as there are atoms, each with a slightly different energy. Since the number of atoms in a macroscopic piece of solid is a very large number (on the order of 10^22), the number of orbitals that hybridize with each other is very large. For this reason, the adjacent levels are very closely spaced in energy, and can be considered to form a continuum, an energy band.
This formation of bands is mostly a feature of the outermost electrons (valence electrons) in the atom, which are the ones involved in chemical bonding and electrical conductivity. The inner electron orbitals do not overlap to a significant degree, so their bands are very narrow.
Band gaps are essentially leftover ranges of energy not covered by any band, a result of the finite widths of the energy bands. The bands have different widths, with the widths depending upon the degree of overlap in the atomic orbitals from which they arise. Two adjacent bands may simply not be wide enough to fully cover the range of energy. For example, the bands associated with core orbitals (such as 1s electrons) are extremely narrow due to the small overlap between adjacent atoms. As a result, there tend to be large band gaps between the core bands. Higher bands involve comparatively larger orbitals with more overlap, becoming progressively wider at higher energies so that there are no band gaps at higher energies.
Basic concepts
Assumptions and limits of band structure theory
Band theory is only an approximation to the quantum state of a solid, which applies to solids consisting of many identical atoms or molecules bonded together. These are the assumptions necessary for band theory to be valid:
Infinite-size system: For the bands to be continuous, the piece of material must consist of a large number of atoms. Since a macroscopic piece of material contains on the order of 10^22 atoms, this is not a serious restriction; band theory even applies to microscopic-sized transistors in integrated circuits. With modifications, the concept of band structure can also be extended to systems which are only "large" along some dimensions, such as two-dimensional electron systems.
Homogeneous system: Band structure is an intrinsic property of a material, which assumes that the material is homogeneous. Practically, this means that the chemical makeup of the material must be uniform throughout the piece.
Non-interactivity: The band structure describes "single electron states". The existence of these states assumes that the electrons travel in a static potential without dynamically interacting with lattice vibrations, other electrons, photons, etc.
The above assumptions are broken in a number of important practical situations, and the use of band structure requires one to keep a close check on the limitations of band theory:
Inhomogeneities and interfaces: Near surfaces, junctions, and other inhomogeneities, the bulk band structure is disrupted. Not only are there local small-scale disruptions (e.g., surface states or dopant states inside the band gap), but also local charge imbalances. These charge imbalances have electrostatic effects that extend deeply into semiconductors, insulators, and the vacuum (see doping, band bending).
Along the same lines, most electronic effects (capacitance, electrical conductance, electric-field screening) involve the physics of electrons passing through surfaces and/or near interfaces. The full description of these effects, in a band structure picture, requires at least a rudimentary model of electron-electron interactions (see space charge, band bending).
Small systems: For systems which are small along every dimension (e.g., a small molecule or a quantum dot), there is no continuous band structure. The crossover between small and large dimensions is the realm of mesoscopic physics.
Strongly correlated materials (for example, Mott insulators) simply cannot be understood in terms of single-electron states. The electronic band structures of these materials are poorly defined (or at least, not uniquely defined) and may not provide useful information about their physical state.
Crystalline symmetry and wavevectors
Band structure calculations take advantage of the periodic nature of a crystal lattice, exploiting its symmetry. The single-electron Schrödinger equation is solved for an electron in a lattice-periodic potential, giving Bloch electrons as solutions
where is called the wavevector. For each value of , there are multiple solutions to the Schrödinger equation labelled by , the band index, which simply numbers the energy bands.
Each of these energy levels evolves smoothly with changes in , forming a smooth band of states. For each band we can define a function , which is the dispersion relation for electrons in that band.
The wavevector takes on any value inside the Brillouin zone, which is a polyhedron in wavevector (reciprocal lattice) space that is related to the crystal's lattice.
Wavevectors outside the Brillouin zone simply correspond to states that are physically identical to those states within the Brillouin zone.
Special high symmetry points/lines in the Brillouin zone are assigned labels like Γ, Δ, Λ, Σ (see Fig 1).
It is difficult to visualize the shape of a band as a function of wavevector, as it would require a plot in four-dimensional space, vs. , , . In scientific literature it is common to see band structure plots which show the values of for values of along straight lines connecting symmetry points, often labelled Δ, Λ, Σ, or [100], [111], and [110], respectively. Another method for visualizing band structure is to plot a constant-energy isosurface in wavevector space, showing all of the states with energy equal to a particular value. The isosurface of states with energy equal to the Fermi level is known as the Fermi surface.
Energy band gaps can be classified using the wavevectors of the states surrounding the band gap:
Direct band gap: the lowest-energy state above the band gap has the same as the highest-energy state beneath the band gap.
Indirect band gap: the closest states above and beneath the band gap do not have the same value.
Asymmetry: Band structures in non-crystalline solids
Although electronic band structures are usually associated with crystalline materials, quasi-crystalline and amorphous solids may also exhibit band gaps. These are somewhat more difficult to study theoretically since they lack the simple symmetry of a crystal, and it is not usually possible to determine a precise dispersion relation. As a result, virtually all of the existing theoretical work on the electronic band structure of solids has focused on crystalline materials.
Density of states
The density of states function is defined as the number of electronic states per unit volume, per unit energy, for electron energies near .
The density of states function is important for calculations of effects based on band theory.
In Fermi's Golden Rule, a calculation for the rate of optical absorption, it provides both the number of excitable electrons and the number of final states for an electron. It appears in calculations of electrical conductivity where it provides the number of mobile states, and in computing electron scattering rates where it provides the number of final states after scattering.
For energies inside a band gap, .
Filling of bands
At thermodynamic equilibrium, the likelihood of a state of energy being filled with an electron is given by the Fermi–Dirac distribution, a thermodynamic distribution that takes into account the Pauli exclusion principle:
where:
is the product of the Boltzmann constant and temperature, and
is the total chemical potential of electrons, or Fermi level (in semiconductor physics, this quantity is more often denoted ). The Fermi level of a solid is directly related to the voltage on that solid, as measured with a voltmeter. Conventionally, in band structure plots the Fermi level is taken to be the zero of energy (an arbitrary choice).
The density of electrons in the material is simply the integral of the Fermi–Dirac distribution times the density of states:
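As a small numerical sketch (my own construction, not from the article), the occupancy function and the density integral can be evaluated on a sampled energy grid as follows; the density-of-states array g is a placeholder input.

```python
import numpy as np

K_B = 8.617333262e-5  # Boltzmann constant in eV/K

def fermi_dirac(E, mu, T):
    """Occupation probability f(E) = 1 / (exp((E - mu) / kT) + 1)."""
    x = np.clip((E - mu) / (K_B * T), -500.0, 500.0)  # clip to avoid overflow in exp
    return 1.0 / (np.exp(x) + 1.0)

def electron_density(E, g, mu, T):
    """n = integral of g(E) * f(E) dE, via the trapezoidal rule on the sampled grid.
    E: energy grid (eV); g: density of states on the same grid."""
    y = g * fermi_dirac(E, mu, T)
    return np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(E))
```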
Although there are an infinite number of bands and thus an infinite number of states, there are only a finite number of electrons to place in these bands.
The preferred value for the number of electrons is a consequence of electrostatics: even though the surface of a material can be charged, the internal bulk of a material prefers to be charge neutral.
The condition of charge neutrality means that must match the density of protons in the material. For this to occur, the material electrostatically adjusts itself, shifting its band structure up or down in energy (thereby shifting ), until it is at the correct equilibrium with respect to the Fermi level.
Names of bands near the Fermi level (conduction band, valence band)
A solid has an infinite number of allowed bands, just as an atom has infinitely many energy levels. However, most of the bands simply have too high energy, and are usually disregarded under ordinary circumstances.
Conversely, there are very low energy bands associated with the core orbitals (such as 1s electrons). These low-energy core bands are also usually disregarded since they remain filled with electrons at all times, and are therefore inert.
Likewise, materials have several band gaps throughout their band structure.
The most important bands and band gaps—those relevant for electronics and optoelectronics—are those with energies near the Fermi level.
The bands and band gaps near the Fermi level are given special names, depending on the material:
In a semiconductor or band insulator, the Fermi level is surrounded by a band gap, referred to as the band gap (to distinguish it from the other band gaps in the band structure). The closest band above the band gap is called the conduction band, and the closest band beneath the band gap is called the valence band. The name "valence band" was coined by analogy to chemistry, since in semiconductors (and insulators) the valence band is built out of the valence orbitals.
In a metal or semimetal, the Fermi level is inside of one or more allowed bands. In semimetals the bands are usually referred to as "conduction band" or "valence band" depending on whether the charge transport is more electron-like or hole-like, by analogy to semiconductors. In many metals, however, the bands are neither electron-like nor hole-like, and often just called "valence band" as they are made of valence orbitals. The band gaps in a metal's band structure are not important for low energy physics, since they are too far from the Fermi level.
Theory in crystals
The ansatz is the special case of electron waves in a periodic crystal lattice using Bloch's theorem as treated generally in the dynamical theory of diffraction. Every crystal is a periodic structure which can be characterized by a Bravais lattice, and for each Bravais lattice we can determine the reciprocal lattice, which encapsulates the periodicity in a set of three reciprocal lattice vectors . Now, any periodic potential which shares the same periodicity as the direct lattice can be expanded out as a Fourier series whose only non-vanishing components are those associated with the reciprocal lattice vectors. So the expansion can be written as:
where for any set of integers .
From this theory, an attempt can be made to predict the band structure of a particular material, however most ab initio methods for electronic structure calculations fail to predict the observed band gap.
Nearly free electron approximation
In the nearly free electron approximation, interactions between electrons are completely ignored. This approximation allows use of Bloch's Theorem which states that electrons in a periodic potential have wavefunctions and energies which are periodic in wavevector up to a constant phase shift between neighboring reciprocal lattice vectors. The consequences of periodicity are described mathematically by the Bloch's theorem, which states that the eigenstate wavefunctions have the form
where the Bloch function is periodic over the crystal lattice, that is,
Here index refers to the th energy band, wavevector is related to the direction of motion of the electron, is the position in the crystal, and is the location of an atomic site.
The NFE model works particularly well in materials like metals where distances between neighbouring atoms are small. In such materials the overlap of atomic orbitals and potentials on neighbouring atoms is relatively large. In that case the wave function of the electron can be approximated by a (modified) plane wave. The band structure of a metal like aluminium even gets close to the empty lattice approximation.
Tight binding model
The opposite extreme to the nearly free electron approximation assumes the electrons in the crystal behave much like an assembly of constituent atoms. This tight binding model assumes the solution to the time-independent single electron Schrödinger equation is well approximated by a linear combination of atomic orbitals .
where the coefficients are selected to give the best approximate solution of this form. Index refers to an atomic energy level and refers to an atomic site. A more accurate approach using this idea employs Wannier functions, defined by:
in which is the periodic part of the Bloch's theorem and the integral is over the Brillouin zone. Here index refers to the -th energy band in the crystal. The Wannier functions are localized near atomic sites, like atomic orbitals, but being defined in terms of Bloch functions they are accurately related to solutions based upon the crystal potential. Wannier functions on different atomic sites are orthogonal. The Wannier functions can be used to form the Schrödinger solution for the -th energy band as:
The TB model works well in materials with limited overlap between atomic orbitals and potentials on neighbouring atoms. Band structures of materials like Si, GaAs, SiO2 and diamond for instance are well described by TB-Hamiltonians on the basis of atomic sp3 orbitals. In transition metals a mixed TB-NFE model is used to describe the broad NFE conduction band and the narrow embedded TB d-bands. The radial functions of the atomic orbital part of the Wannier functions are most easily calculated by the use of pseudopotential methods. NFE, TB or combined NFE-TB band structure calculations,
sometimes extended with wave function approximations based on pseudopotential methods, are often used as an economic starting point for further calculations.
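As a minimal illustration of the tight-binding idea — a standard textbook special case rather than anything specific to this article — a one-dimensional chain with one orbital per site and nearest-neighbour hopping t has the cosine band E(k) = ε0 − 2t·cos(ka):

```python
import numpy as np

def tb_band_1d(k, a=1.0, eps0=0.0, t=1.0):
    """Nearest-neighbour tight-binding band of a 1D chain: E(k) = eps0 - 2*t*cos(k*a)."""
    return eps0 - 2.0 * t * np.cos(k * a)

k = np.linspace(-np.pi, np.pi, 201)   # first Brillouin zone for a = 1
E = tb_band_1d(k)
print(E.min(), E.max())               # band runs from eps0 - 2t to eps0 + 2t (width 4t)
```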
KKR model
The KKR method, also called "multiple scattering theory" or Green's function method, finds the stationary values of the inverse transition matrix T rather than the Hamiltonian. A variational implementation was suggested by Korringa, Kohn and Rostoker, and is often referred to as the Korringa–Kohn–Rostoker method. The most important features of the KKR or Green's function formulation are (1) it separates the two aspects of the problem: structure (positions of the atoms) from the scattering (chemical identity of the atoms); and (2) Green's functions provide a natural approach to a localized description of electronic properties that can be adapted to alloys and other disordered systems. The simplest form of this approximation centers non-overlapping spheres (referred to as muffin tins) on the atomic positions. Within these regions, the potential experienced by an electron is approximated to be spherically symmetric about the given nucleus. In the remaining interstitial region, the screened potential is approximated as a constant. Continuity of the potential between the atom-centered spheres and interstitial region is enforced.
Density-functional theory
In recent physics literature, a large majority of the electronic structures and band plots are calculated using density-functional theory (DFT), which is not a model but rather a theory, i.e., a microscopic first-principles theory of condensed matter physics that tries to cope with the electron-electron many-body problem via the introduction of an exchange-correlation term in the functional of the electronic density. DFT-calculated bands are in many cases found to be in agreement with experimentally measured bands, for example by angle-resolved photoemission spectroscopy (ARPES). In particular, the band shape is typically well reproduced by DFT. But there are also systematic errors in DFT bands when compared to experiment results. In particular, DFT seems to systematically underestimate by about 30-40% the band gap in insulators and semiconductors.
It is commonly believed that DFT is a theory to predict ground state properties of a system only (e.g. the total energy, the atomic structure, etc.), and that excited state properties cannot be determined by DFT. This is a misconception. In principle, DFT can determine any property (ground state or excited state) of a system given a functional that maps the ground state density to that property. This is the essence of the Hohenberg–Kohn theorem. In practice, however, no known functional exists that maps the ground state density to excitation energies of electrons within a material. Thus, what in the literature is quoted as a DFT band plot is a representation of the DFT Kohn–Sham energies, i.e., the energies of a fictive non-interacting system, the Kohn–Sham system, which has no physical interpretation at all. The Kohn–Sham electronic structure must not be confused with the real, quasiparticle electronic structure of a system, and there is no Koopmans' theorem holding for Kohn–Sham energies, as there is for Hartree–Fock energies, which can be truly considered as an approximation for quasiparticle energies. Hence, in principle, Kohn–Sham based DFT is not a band theory, i.e., not a theory suitable for calculating bands and band-plots. In principle time-dependent DFT can be used to calculate the true band structure although in practice this is often difficult. A popular approach is the use of hybrid functionals, which incorporate a portion of Hartree–Fock exact exchange; this produces a substantial improvement in predicted bandgaps of semiconductors, but is less reliable for metals and wide-bandgap materials.
Green's function methods and the ab initio GW approximation
To calculate the bands including electron-electron interaction many-body effects, one can resort to so-called Green's function methods. Indeed, knowledge of the Green's function of a system provides both ground (the total energy) and also excited state observables of the system. The poles of the Green's function are the quasiparticle energies, the bands of a solid. The Green's function can be calculated by solving the Dyson equation once the self-energy of the system is known. For real systems like solids, the self-energy is a very complex quantity and usually approximations are needed to solve the problem. One such approximation is the GW approximation, so called from the mathematical form the self-energy takes as the product Σ = GW of the Green's function G and the dynamically screened interaction W. This approach is more pertinent when addressing the calculation of band plots (and also quantities beyond, such as the spectral function) and can also be formulated in a completely ab initio way. The GW approximation seems to provide band gaps of insulators and semiconductors in agreement with experiment, and hence to correct the systematic DFT underestimation.
Dynamical mean-field theory
Although the nearly free electron approximation is able to describe many properties of electron band structures, one consequence of this theory is that it predicts the same number of electrons in each unit cell. If the number of electrons is odd, we would then expect that there is an unpaired electron in each unit cell, and thus that the valence band is not fully occupied, making the material a conductor. However, materials such as CoO that have an odd number of electrons per unit cell are insulators, in direct conflict with this result. This kind of material is known as a Mott insulator, and requires inclusion of detailed electron-electron interactions (treated only as an averaged effect on the crystal potential in band theory) to explain the discrepancy. The Hubbard model is an approximate theory that can include these interactions. It can be treated non-perturbatively within the so-called dynamical mean-field theory, which attempts to bridge the gap between the nearly free electron approximation and the atomic limit. Formally, however, the states are not non-interacting in this case and the concept of a band structure is not adequate to describe these cases.
Others
Calculating band structures is an important topic in theoretical solid state physics. In addition to the models mentioned above, other models include the following:
Empty lattice approximation: the "band structure" of a region of free space that has been divided into a lattice.
k·p perturbation theory is a technique that allows a band structure to be approximately described in terms of just a few parameters. The technique is commonly used for semiconductors, and the parameters in the model are often determined by experiment.
The Kronig–Penney model, a one-dimensional rectangular well model useful for illustration of band formation. While simple, it predicts many important phenomena, but is not quantitative.
Hubbard model
The band structure has been generalised to wavevectors that are complex numbers, resulting in what is called a complex band structure, which is of interest at surfaces and interfaces.
Each model describes some types of solids very well, and others poorly. The nearly free electron model works well for metals, but poorly for non-metals. The tight binding model is extremely accurate for ionic insulators, such as metal halide salts (e.g. NaCl).
Band diagrams
To understand how band structure changes relative to the Fermi level in real space, a band structure plot is often first simplified in the form of a band diagram. In a band diagram the vertical axis is energy while the horizontal axis represents real space. Horizontal lines represent energy levels, while blocks represent energy bands. When the horizontal lines in these diagram are slanted then the energy of the level or band changes with distance. Diagrammatically, this depicts the presence of an electric field within the crystal system. Band diagrams are useful in relating the general band structure properties of different materials to one another when placed in contact with each other.
See also
Band-gap engineering – the process of altering a material's band structure
Felix Bloch – pioneer in the theory of band structure
Alan Herries Wilson – pioneer in the theory of band structure
References
Further reading
Ashcroft, Neil and N. David Mermin, Solid State Physics,
Harrison, Walter A., Elementary Electronic Structure,
Harrison, Walter A.; W. A. Benjamin Pseudopotentials in the theory of metals, (New York) 1966
Marder, Michael P., Condensed Matter Physics,
Martin, Richard, Electronic Structure: Basic Theory and Practical Methods,
Millman, Jacob; Arvin Gabriel, Microelectronics, , Tata McGraw-Hill Edition.
Nemoshkalenko, V. V., and N. V. Antonov, Computational Methods in Solid State Physics,
Omar, M. Ali, Elementary Solid State Physics: Principles and Applications,
Singh, Jasprit, Electronic and Optoelectronic Properties of Semiconductor Structures Chapters 2 and 3,
Vasileska, Dragica, Tutorial on Bandstructure Methods (2008)
Solid state engineering
Articles containing video clips | Electronic band structure | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 5,149 | [
"Electron",
"Electronic band structures",
"Electronic engineering",
"Condensed matter physics",
"Solid state engineering"
] |
706,278 | https://en.wikipedia.org/wiki/Squeezed%20coherent%20state | In physics, a squeezed coherent state is a quantum state that is usually described by two non-commuting observables having continuous spectra of eigenvalues. Examples are position and momentum of a particle, and the (dimension-less) electric field in the amplitude (phase 0) and in the mode (phase 90°) of a light wave (the wave's quadratures). The product of the standard deviations of two such operators obeys the uncertainty principle:
and , respectively.
Trivial examples, which are in fact not squeezed, are the ground state of the quantum harmonic oscillator and the family of coherent states . These states saturate the uncertainty above and have a symmetric distribution of the operator uncertainties with in "natural oscillator units" and .
The term squeezed state is actually used for states with a standard deviation below that of the ground state for one of the operators or for a linear combination of the two. The idea behind this is that the circle denoting the uncertainty of a coherent state in the quadrature phase space (see right) has been "squeezed" to an ellipse of the same area. Note that a squeezed state does not need to saturate the uncertainty principle.
Squeezed states of light were first produced in the mid 1980s. At that time, quantum noise squeezing by up to a factor of about 2 (3 dB) in variance was achieved, i.e. . As of 2017, squeeze factors larger than 10 (10 dB) have been directly observed.
Mathematical definition
The most general wave function that satisfies the identity above is the squeezed coherent state (we work in units with )
where are constants (a normalization constant, the center of the wavepacket, its width, and the expectation value of its momentum). The new feature relative to a coherent state is the free value of the width , which is the reason why the state is called "squeezed".
The squeezed state above is an eigenstate of a linear operator
and the corresponding eigenvalue equals . In this sense, it is a generalization of the ground state as well as the coherent state.
Operator representation
The general form of a squeezed coherent state for a quantum harmonic oscillator is given by
where is the vacuum state, is the displacement operator and is the squeeze operator, given by
where and are annihilation and creation operators, respectively. For a quantum harmonic oscillator of angular frequency , these operators are given by
For a real squeeze parameter r (i.e., with the squeezing phase set to zero), the uncertainties in the two quadratures are given by
Therefore, a squeezed coherent state saturates the Heisenberg uncertainty principle , with reduced uncertainty in one of its quadrature components and increased uncertainty in the other.
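The quoted quadrature uncertainties can be checked numerically in a truncated Fock space. The sketch below is my own construction: it assumes the real-parameter squeeze operator S(r) = exp[r(a² − a†²)/2] and natural units ħ = m = ω = 1, and reproduces Var(x) = e^(−2r)/2 and Var(p) = e^(2r)/2 for the squeezed vacuum.

```python
import numpy as np
from scipy.linalg import expm

N = 60                                      # Fock-space truncation
a = np.diag(np.sqrt(np.arange(1, N)), 1)    # annihilation operator in the Fock basis
adag = a.conj().T

def squeezed_vacuum(r):
    """S(r)|0> with S(r) = exp(0.5*r*(a@a - adag@adag)) for real r."""
    S = expm(0.5 * r * (a @ a - adag @ adag))
    vac = np.zeros(N)
    vac[0] = 1.0
    return S @ vac

def quadrature_variances(psi):
    x = (a + adag) / np.sqrt(2.0)            # natural units: hbar = m = omega = 1
    p = (a - adag) / (1j * np.sqrt(2.0))
    var = lambda op: (psi.conj() @ (op @ op) @ psi - (psi.conj() @ op @ psi) ** 2).real
    return var(x), var(p)

r = 0.5
vx, vp = quadrature_variances(squeezed_vacuum(r))
print(vx, np.exp(-2 * r) / 2)   # squeezed quadrature
print(vp, np.exp(+2 * r) / 2)   # anti-squeezed quadrature
```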
Some expectation values for squeezed coherent states are
The general form of a displaced squeezed state for a quantum harmonic oscillator is given by
Some expectation values for displaced squeezed state are
Since and do not commute with each other,
where , with
Examples
Depending on the phase angle at which the state's width is reduced, one can distinguish amplitude-squeezed, phase-squeezed, and general quadrature-squeezed states. If the squeezing operator is applied directly to the vacuum, rather than to a coherent state, the result is called the squeezed vacuum. The figures below give a nice visual demonstration of the close connection between squeezed states and Heisenberg's uncertainty relation: Diminishing the quantum noise at a specific quadrature (phase) of the wave has as a direct consequence an enhancement of the noise of the complementary quadrature, that is, the field at the phase shifted by .
As can be seen in the illustrations, in contrast to a coherent state, the quantum noise for a squeezed state is no longer independent of the phase of the light wave. A characteristic broadening and narrowing of the noise during one oscillation period can be observed. The probability distribution of a squeezed state is defined as the norm squared of the wave function mentioned in the last paragraph. It corresponds to the square of the electric (and magnetic) field strength of a classical light wave. The moving wave packets display an oscillatory motion combined with the widening and narrowing of their distribution: the "breathing" of the wave packet. For an amplitude-squeezed state, the most narrow distribution of the wave packet is reached at the field maximum, resulting in an amplitude that is defined more precisely than the one of a coherent state. For a phase-squeezed state, the most narrow distribution is reached at field zero, resulting in an average phase value that is better defined than the one of a coherent state.
In phase space, quantum mechanical uncertainties can be depicted by the Wigner quasi-probability distribution. The intensity of the light wave, its coherent excitation, is given by the displacement of the Wigner distribution from the origin. A change in the phase of the squeezed quadrature results in a rotation of the distribution.
Photon number distributions and phase distributions
The squeezing angle, that is the phase with minimum quantum noise, has a large influence on the photon number distribution of the light wave and its phase distribution as well.
For amplitude-squeezed light the photon number distribution is usually narrower than that of a coherent state of the same amplitude, resulting in sub-Poissonian light, whereas its phase distribution is wider. The opposite is true for phase-squeezed light, which displays a large intensity (photon number) noise but a narrow phase distribution. Nevertheless, the statistics of amplitude-squeezed light have not been observed directly with a photon-number-resolving detector, owing to experimental difficulty.
For the squeezed vacuum state the photon number distribution displays odd-even-oscillations. This can be explained by the mathematical form of the squeezing operator, that resembles the operator for two-photon generation and annihilation processes. Photons in a squeezed vacuum state are more likely to appear in pairs.
Classification
Based on the number of modes
Squeezed states of light are broadly classified into single-mode squeezed states and two-mode squeezed states, depending on the number of modes of the electromagnetic field involved in the process. Recent studies have looked into multimode squeezed states showing quantum correlations among more than two modes as well.
Single-mode squeezed states
Single-mode squeezed states, as the name suggests, consists of a single mode of the electromagnetic field whose one quadrature has fluctuations below the shot noise level and the orthogonal quadrature has excess noise. Specifically, a single-mode squeezed vacuum (SMSV) state can be mathematically represented as,
where the squeezing operator S is the same as introduced in the section on operator representations above. In the photon number basis, writing this can be expanded as,
which explicitly shows that the pure SMSV consists entirely of even-photon Fock state superpositions.
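A small sketch of the resulting photon statistics, using the standard closed-form coefficients for the squeezed vacuum (assumed here, since the expansion itself is not reproduced above): only even Fock states carry probability, consistent with the odd–even oscillations mentioned earlier.

```python
import numpy as np
from math import factorial

def smsv_photon_probs(r, nmax=40):
    """Photon-number distribution of a single-mode squeezed vacuum:
    P(2n) = [(2n)! / (2**n * n!)**2] * tanh(r)**(2n) / cosh(r),  P(odd) = 0."""
    probs = np.zeros(nmax + 1)
    for n in range(nmax // 2 + 1):
        probs[2 * n] = (factorial(2 * n) / (2 ** n * factorial(n)) ** 2
                        * np.tanh(r) ** (2 * n) / np.cosh(r))
    return probs

p = smsv_photon_probs(1.0)
print(p[:6])     # odd entries are exactly zero
print(p.sum())   # ~1, up to truncation
```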
Single-mode squeezed states are typically generated by degenerate parametric oscillation in an optical parametric oscillator, or by using four-wave mixing.
Two-mode squeezed states
Two-mode squeezing involves two modes of the electromagnetic field which exhibit quantum noise reduction below the shot noise level in a linear combination of the quadratures of the two fields. For example, the field produced by a nondegenerate parametric oscillator above threshold shows squeezing in the amplitude difference quadrature. The first experimental demonstration of two-mode squeezing in optics was by Heidmann et al. More recently, two-mode squeezing was generated on-chip using a four-wave mixing OPO above threshold. Two-mode squeezing is often seen as a precursor to continuous-variable entanglement, and hence a demonstration of the Einstein–Podolsky–Rosen paradox in its original formulation in terms of continuous position and momentum observables. A two-mode squeezed vacuum (TMSV) state can be mathematically represented as
$|\mathrm{TMSV}\rangle = \hat{S}_2(\zeta)\,|0,0\rangle = \exp\!\left(\zeta^{*}\hat{a}\hat{b} - \zeta\,\hat{a}^{\dagger}\hat{b}^{\dagger}\right)|0,0\rangle,$
and, writing $\zeta = r e^{i\phi}$, in the photon number basis as
$|\mathrm{TMSV}\rangle = \frac{1}{\cosh r} \sum_{n=0}^{\infty} \left(-e^{i\phi} \tanh r\right)^{n} |n\rangle_a |n\rangle_b.$
If the individual modes of a TMSV are considered separately, i.e. if one of the modes is traced over or absorbed, the remaining mode is left in a thermal state
with an effective average number of photons $\langle n \rangle = \sinh^{2} r$.
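The thermal character of a single mode of the TMSV can be verified numerically from the Fock-basis expansion given above. The sketch below (Python/NumPy) compares the populations obtained by discarding one mode with a Bose–Einstein (thermal) distribution of mean photon number sinh^2(r); the values of r and phi are arbitrary examples.

    import numpy as np

    r, phi = 0.8, 0.3                                          # squeezing parameter and phase (arbitrary)
    n = np.arange(0, 25)                                       # truncated Fock basis

    c_n = (-np.exp(1j * phi) * np.tanh(r)) ** n / np.cosh(r)   # TMSV amplitudes of |n>|n>
    p_reduced = np.abs(c_n) ** 2                               # populations of one mode after discarding the other

    nbar = np.sinh(r) ** 2                                     # effective mean photon number
    p_thermal = nbar ** n / (nbar + 1) ** (n + 1)              # Bose-Einstein (thermal) populations

    print("maximum deviation from thermal statistics:", np.max(np.abs(p_reduced - p_thermal)))
    print("mean photon number of the remaining mode:", np.sum(n * p_reduced))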
Based on the presence of a mean field
Squeezed states of light can be divided into squeezed vacuum and bright squeezed light, depending on the absence or presence of a non-zero mean field (also called a carrier), respectively. An optical parametric oscillator operated below threshold produces squeezed vacuum, whereas the same OPO operated above threshold produces bright squeezed light. Bright squeezed light can be advantageous for certain quantum information processing applications as it obviates the need to send a local oscillator to provide a phase reference, whereas squeezed vacuum is considered more suitable for quantum-enhanced sensing applications. The AdLIGO and GEO600 gravitational wave detectors use squeezed vacuum to achieve enhanced sensitivity beyond the standard quantum limit.
Atomic spin squeezing
For squeezing of two-level neutral atom ensembles it is useful to consider the atoms as spin-1/2 particles with corresponding collective angular momentum operators defined as
$J_v = \sum_{i=1}^{N} j_v^{(i)},$
where $v \in \{x, y, z\}$ and $j_v^{(i)}$ is the single-spin operator in the $v$-direction. Here $J_z$ corresponds to the population difference in the two-level system, i.e. for an equal superposition of the up and down states $\langle J_z \rangle = 0$. The $J_x$–$J_y$ plane represents the phase difference between the two states. This is also known as the Bloch sphere picture. We can then define uncertainty relations such as $\Delta J_z \cdot \Delta J_y \geq \left|\langle J_x \rangle\right|/2$. For a coherent (unentangled) state, $\Delta J_z = \Delta J_y = \sqrt{N}/2$. Squeezing is here considered the redistribution of uncertainty from one variable (typically $J_z$) to another (typically $J_y$). If we consider a state pointing in the $J_x$ direction, we can define the Wineland criterion for squeezing, or the metrological enhancement of the squeezed state, as
$\xi^{2} = \left(\frac{\Delta J_z}{\Delta J_z^{\mathrm{coh}}}\right)^{2} \left(\frac{\langle J_x \rangle^{\mathrm{coh}}}{\langle J_x \rangle}\right)^{2} = \frac{N \left(\Delta J_z\right)^{2}}{\langle J_x \rangle^{2}},$
where the superscript "coh" refers to the coherent (unentangled) state.
This criterion has two factors: the first factor is the spin noise reduction, i.e. how much the quantum noise in $J_z$ is reduced relative to the coherent (unentangled) state; the second factor is how much the coherence (the length of the Bloch vector, $\langle J_x \rangle$) is reduced by the squeezing procedure. Together these factors quantify how much metrological enhancement the squeezing procedure gives. Here, metrological enhancement is the reduction in averaging time or atom number needed to make a measurement of a specific uncertainty: 20 dB of metrological enhancement means the same precision measurement can be made with 100 times fewer atoms or a 100 times shorter averaging time.
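A short calculation illustrates how the two factors of the Wineland criterion combine into a metrological enhancement. In the sketch below (Python), the assumed spin-noise reduction and loss of Bloch-vector length are illustrative numbers, not results of a particular experiment.

    from math import log10

    N = 1_000_000                        # number of atoms in the ensemble
    var_Jz_coh = N / 4                   # spin-projection noise of the coherent spin state
    Jx_coh = N / 2                       # Bloch-vector length of the coherent spin state

    var_Jz = 0.01 * var_Jz_coh           # assumed 20 dB reduction of the J_z noise
    Jx = 0.9 * Jx_coh                    # assumed 10 % loss of coherence during squeezing

    xi2 = (var_Jz / var_Jz_coh) * (Jx_coh / Jx) ** 2    # Wineland criterion, equals N*var_Jz/Jx^2
    enhancement_dB = -10 * log10(xi2)

    print(f"xi^2 = {xi2:.4f}  ->  metrological enhancement = {enhancement_dB:.1f} dB")
    # xi^2 ~ 0.012, i.e. about 19 dB: the same precision would need roughly
    # 1/xi^2 ~ 81 times fewer atoms or an 81 times shorter averaging time.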
Relation with the concept of quantum phase space
The concept of quantum phase space (QPS) extends the notion of phase space from classical to quantum physics by taking into account the uncertainty principle. The definition of the QPS is based on the introduction of joint momentum–coordinate quantum states, denoted here $|X, P\rangle$, which can be considered as a kind of squeezed coherent state. The expression of the wavefunction corresponding to a state $|X, P\rangle$ in the coordinate representation is
$\langle x | X, P \rangle = \left(2\pi\sigma_x^{2}\right)^{-1/4} \exp\!\left(\frac{i}{\hbar} P x - \frac{(x - X)^{2}}{4\sigma_x^{2}}\right),$
in which:
$\hbar$ is the reduced Planck constant
$x$ and $p$ are respectively the eigenvalues (possible values) of the coordinate operator and the momentum operator
$X$, $P$, $\sigma_x^{2}$ and $\sigma_p^{2}$ are respectively the mean values and statistical variances of the coordinate and momentum corresponding to the quantum state itself
A state $|X, P\rangle$ saturates the uncertainty relation, i.e. one has the following relation:
$\sigma_x \sigma_p = \frac{\hbar}{2}.$
It can be shown that a state $|X, P\rangle$ is an eigenstate of the operator $\frac{\hat{x}}{2\sigma_x} + i\frac{\hat{p}}{2\sigma_p}$. The corresponding eigenvalue equation is
$\left(\frac{\hat{x}}{2\sigma_x} + i\frac{\hat{p}}{2\sigma_p}\right) |X, P\rangle = z\, |X, P\rangle,$
with
$z = \frac{X}{2\sigma_x} + i\frac{P}{2\sigma_p}.$
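The eigenvalue equation can be checked numerically on a grid. The sketch below (Python/NumPy, with hbar = 1 and arbitrary example values of the means and of sigma_x) builds the Gaussian wavefunction given above, applies the operator with the momentum operator represented as a finite-difference derivative, and compares the result with z times the wavefunction.

    import numpy as np

    hbar = 1.0
    X, P, sigma_x = 0.7, -1.3, 0.5               # mean position, mean momentum, position spread
    sigma_p = hbar / (2 * sigma_x)               # saturation of the uncertainty relation

    x = np.linspace(-10, 10, 20001)
    psi = (2 * np.pi * sigma_x ** 2) ** -0.25 * np.exp(
        -(x - X) ** 2 / (4 * sigma_x ** 2) + 1j * P * x / hbar)

    p_psi = -1j * hbar * np.gradient(psi, x)      # momentum operator as -i*hbar*d/dx
    lhs = x / (2 * sigma_x) * psi + 1j / (2 * sigma_p) * p_psi
    z = X / (2 * sigma_x) + 1j * P / (2 * sigma_p)

    mask = np.abs(psi) > 1e-3                     # compare only where psi is not negligible
    print("maximum deviation of lhs/psi from z:", np.max(np.abs(lhs[mask] / psi[mask] - z)))
    # The deviation is small and limited only by the finite-difference accuracy.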
It was also shown that the multidimensional generalizations of these states are the basic quantum states whose wavefunctions are covariant under the action of the group formed by multidimensional linear canonical transformations.
The quantum phase space (QPS) is defined as the set of all possible values of $z$, or equivalently as the set of possible values of the pair $(X, P)$, for a given value of the momentum statistical variance $\sigma_p^{2}$. It follows from this definition that the structure of the quantum phase space depends explicitly on the value of the momentum statistical variance. It is this explicit dependence that makes the definition naturally compatible with the uncertainty principle. It can also be remarked that, at thermodynamic equilibrium, the momentum statistical variance can be related to thermodynamic parameters such as temperature, pressure, and the shape and size of the volume. In the classical limit, when the momentum and coordinate statistical variances are taken to be equal to zero (ignoring the uncertainty principle), the quantum phase space as defined previously reduces to the classical phase space.
There are more general squeezed coherent states, denoted $|n, X, P\rangle$ with $n$ a positive integer, that are related to the concept of QPS and which do not saturate the uncertainty relation for $n > 0$. These states can be deduced from the states $|X, P\rangle$ using the following relation:
$|n, X, P\rangle = \frac{1}{\sqrt{n!}} \left(\frac{\hat{x}}{2\sigma_x} - i\frac{\hat{p}}{2\sigma_p} - z^{*}\right)^{\!n} |X, P\rangle.$
The coordinate and momentum statistical variances, denoted respectively $\sigma_{x,n}^{2}$ and $\sigma_{p,n}^{2}$, corresponding to a state $|n, X, P\rangle$ are
$\sigma_{x,n}^{2} = (2n+1)\,\sigma_x^{2}, \qquad \sigma_{p,n}^{2} = (2n+1)\,\sigma_p^{2}.$
We then have the following relation:
$\sigma_{x,n}\,\sigma_{p,n} = (2n+1)\frac{\hbar}{2}.$
This relation shows that a state $|n, X, P\rangle$ does not saturate the uncertainty relation for $n > 0$, as stated before.
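The variance relation can likewise be checked numerically. In the sketch below (Python/NumPy, with hbar = 1 and arbitrary example parameters), the generalized states are constructed by repeated application of the shifted creation-like operator from the relation given above, and the coordinate variance of the n-th state is compared with (2n+1) sigma_x^2.

    import numpy as np

    hbar = 1.0
    X, P, sigma_x = 0.4, 0.9, 0.6
    sigma_p = hbar / (2 * sigma_x)
    z = X / (2 * sigma_x) + 1j * P / (2 * sigma_p)

    x = np.linspace(-12, 12, 40001)
    dx = x[1] - x[0]
    psi = (2 * np.pi * sigma_x ** 2) ** -0.25 * np.exp(
        -(x - X) ** 2 / (4 * sigma_x ** 2) + 1j * P * x / hbar)

    def raise_state(f):
        # apply (x/(2 sigma_x) - i p/(2 sigma_p) - z*) with p = -i*hbar*d/dx
        return (x / (2 * sigma_x)) * f - (hbar / (2 * sigma_p)) * np.gradient(f, x) - np.conj(z) * f

    for n in range(4):
        prob = np.abs(psi) ** 2
        prob /= np.sum(prob) * dx                       # normalize on the grid
        mean_x = np.sum(x * prob) * dx
        var_x = np.sum((x - mean_x) ** 2 * prob) * dx
        print(f"n = {n}: numerical variance = {var_x:.4f}, (2n+1)*sigma_x^2 = {(2 * n + 1) * sigma_x ** 2:.4f}")
        psi = raise_state(psi)                          # unnormalized next state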
Experimental realizations
There has been a whole variety of successful demonstrations of squeezed states. The first demonstrations were experiments with light fields using lasers and non-linear optics (see optical parametric oscillator). This is achieved by a simple process of four-wave mixing with a χ(3) crystal; similarly, travelling-wave phase-sensitive amplifiers generate spatially multimode quadrature-squeezed states of light when a χ(2) crystal is pumped in the absence of any signal. Sub-Poissonian current sources driving semiconductor laser diodes have led to amplitude-squeezed light.
Squeezed states have also been realized via motional states of an ion in a trap, phonon states in crystal lattices, and spin states in neutral atom ensembles. Much progress has been made on the creation and observation of spin squeezed states in ensembles of neutral atoms and ions, which can be used to enhance measurements of time, accelerations and fields; the current state of the art for measurement enhancement is 20 dB. Generation of spin squeezed states has been demonstrated using both coherent evolution of a coherent spin state and projective, coherence-preserving measurements. Even macroscopic oscillators have been driven into classical motional states that are very similar to squeezed coherent states. The current state of the art in noise suppression, for laser radiation using squeezed light, amounts to 15 dB (as of 2016), which broke the previous record of 12.7 dB (2010).
Applications
Squeezed states of the light field can be used to enhance precision measurements. For example, phase-squeezed light can improve the phase read out of interferometric measurements (see for example gravitational waves). Amplitude-squeezed light can improve the readout of very weak spectroscopic signals.
Spin squeezed states of atoms can be used to improve the precision of atomic clocks. This is particularly important for atomic clocks and other sensors that use small ensembles of cold atoms, where quantum projection noise represents a fundamental limitation to the precision of the sensor.
Various squeezed coherent states, generalized to the case of many degrees of freedom, are used in various calculations in quantum field theory, for example the Unruh effect and Hawking radiation, and more generally in the description of particle production in curved backgrounds and Bogoliubov transformations.
Recently, the use of squeezed states for quantum information processing in the continuous variables (CV) regime has been increasing rapidly. Continuous variable quantum optics uses squeezing of light as an essential resource to realize CV protocols for quantum communication, unconditional quantum teleportation and one-way quantum computing. This is in contrast to quantum information processing with single photons or photon pairs as qubits. CV quantum information processing relies heavily on the fact that squeezing is intimately related to quantum entanglement, as the quadratures of a squeezed state exhibit sub-shot-noise quantum correlations.
See also
Negative energy
Nonclassical light
Optical phase space
Quantum optics
Squeeze operator
References
External links
Tutorial about quantum optics of the light field
Quantum optics
Quantum states
"Physics"
] | 3,174 | [
"Quantum optics",
"Quantum states",
"Quantum mechanics"
] |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.