id (int64) | url (string) | text (string) | source (string) | categories (string) | token_count (int64)
|---|---|---|---|---|---|
77,585,629 | https://en.wikipedia.org/wiki/RTI-5152-12 | RTI-5152-12, or WW-12 (in patent), is a synthetic small-molecule agonist of the atypical chemokine receptor ACKR3 (CXCR7) that was derived from the naturally occurring alkaloid conolidine. RTI-5152-12 has 15-fold improved potency towards ACKR3 relative to conolidine.
ACKR3 is a novel opioid receptor which functions as a broad-spectrum trap or scavenger for endogenous opioid peptides, including enkephalins, dynorphins, and nociceptin. The receptor acts as a negative modulator of the opioid system by decreasing the availability of opioid peptides for their classical receptors like the μ-opioid receptor. Ligands of ACKR3, by competitively displacing endogenous opioid peptides from ACKR3, can potentiate the actions of these endogenous opioids and produce effects like analgesia and anxiolysis in animals.
RTI-5152-12 is being developed as a potential pharmaceutical drug and, as of December 2021, is in the preclinical stage of development for treatment of pain. The chemical structure was not disclosed until a patent was published in June 2022.
See also
LIH383
References
External links
The Good Drug Guide - Conolidine / RTI-5152-12 - David Pearce
Opioid modulators
Opioid peptides
Piperidines
Indoles
Ketones | RTI-5152-12 | Chemistry | 326 |
63,133,081 | https://en.wikipedia.org/wiki/Dapagliflozin/saxagliptin | Dapagliflozin/saxagliptin, sold under the brand name Qtern, is a fixed-dose combination anti-diabetic medication used as an adjunct to diet and exercise to improve glycemic control in adults with type 2 diabetes. It is a combination of dapagliflozin and saxagliptin. It is taken by mouth.
The most common side effects include upper respiratory tract infection (such as nose and throat infections) and, when used with a sulphonylurea, hypoglycaemia (low blood glucose levels).
Dapagliflozin/saxagliptin was approved for medical use in the European Union in July 2016, and in the United States in February 2017.
Medical uses
In the United States, dapagliflozin/saxagliptin is indicated as an adjunct to diet and exercise to improve glycemic control in adults with type 2 diabetes.
In the European Union, it is indicated in adults aged 18 years and older with type 2 diabetes:
to improve glycemic control when metformin with or without sulphonylurea (SU) and either saxagliptin or dapagliflozin does not provide adequate glycemic control.
when already being treated with saxagliptin and dapagliflozin.
References
Adamantanes
Drugs developed by AstraZeneca
Carboxamides
Chloroarenes
Combination diabetes drugs
Dipeptidyl peptidase-4 inhibitors
Glucosides
Nitriles
Nitrogen heterocycles
Phenol ethers
SGLT2 inhibitors
Tertiary alcohols | Dapagliflozin/saxagliptin | Chemistry | 336 |
34,751,472 | https://en.wikipedia.org/wiki/Octopine | Octopine is a derivative of the amino acids arginine and alanine. It was the first member of the class of chemical compounds known as opines to be discovered. Octopine gets its name from Octopus octopodia from which it was first isolated in 1927.
Octopine has been isolated from the muscle tissue of invertebrates such as octopus, Pecten maximus and Sipunculus nudus where it functions as an analog of lactic acid. Plants may also produce this compound after infection by Agrobacterium tumefaciens and transfer of the octopine synthesis gene from the bacterium to the plant.
Octopine is formed by reductive condensation of pyruvic acid and arginine through the action of the NADH-dependent enzyme octopine dehydrogenase (ODH). The reaction is reversible so that pyruvic acid and arginine can be regenerated.
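As a rough sketch of the overall stoichiometry of the ODH-catalysed reaction (the exact protonation states depend on conditions), the reductive condensation can be written as:
L-arginine + pyruvate + NADH + H+ ⇌ octopine + NAD+ + H2O
Read from right to left, the same equilibrium regenerates arginine and pyruvate while NAD+ is reduced back to NADH.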
References
Alpha-Amino acids
Amino acid derivatives
Dicarboxylic acids
Guanidines | Octopine | Chemistry | 216 |
55,545,526 | https://en.wikipedia.org/wiki/Ammonifex%20thiophilus | Ammonifex thiophilus is an extremely thermophilic, anaerobic, and facultatively chemolithoautotrophic bacterium from the genus of Ammonifex which has been isolated from a hot spring in Uzon Caldera in Russia.
References
Thermoanaerobacterales
Bacteria described in 2008
Thermophiles
Anaerobes | Ammonifex thiophilus | Biology | 82 |
25,698,853 | https://en.wikipedia.org/wiki/Journal%20of%20Nonlinear%20Optical%20Physics%20%26%20Materials | The Journal of Nonlinear Optical Physics & Materials is a quarterly peer-reviewed scientific journal that was established in 1992 and is published by World Scientific. It covers developments in the field of nonlinear interactions of light with matter, guided waves, and solitons, as well as their applications, such as in laser and coherent lightwave amplification, and information processing.
Abstracting and indexing
The journal is abstracted and indexed in:
Astrophysics Data System
Chemical Abstracts Service
Current Contents/Physical, Chemical & Earth Sciences
EBSCO databases
Ei Compendex
Inspec
ProQuest databases
Science Citation Index Expanded
Scopus
References
External links
Academic journals established in 1992
Optics journals
Materials science journals
World Scientific academic journals
English-language journals
Quarterly journals | Journal of Nonlinear Optical Physics & Materials | Materials_science,Engineering | 149 |
64,998,690 | https://en.wikipedia.org/wiki/Timeless%20universe | The timeless universe is the philosophical and ontological view that time and associated ideas are human illusions caused by our ordering of observable phenomena. Unlike most variants of presentism and eternalism, the timeless universe entirely rejects the notion of the reality of any time, arguing that it is exclusively a human illusion, and since the universe can know no time, no dimension of time can be permitted in any theoretical explanation of parts of the observable universe. All purported measurements of time must hence according to this view be correlation measurements between movements, as stated by physicist Ernst Mach in 1883:It is utterly beyond our power to measure the changes of things by time. Quite the contrary, time is an abstraction at which we arrive by means of the changes of things; made because we are not restricted to any one definite measure, all being interconnected.In a timeless universe the cosmos in its broadest definition is eternal, without beginning or end, and all physical processes operate within a timeless framework. Since fundamental problems related to time, such as the Arrow of time and time travel, are still among the great unsolved problems of physics, discussions of timeless universes revolve around proposed solutions to these fundamental problems and paradoxa, and the related fundamental problems of philosophy and science.
History
The first person in recorded history to contemplate a timeless universe was the Ancient Greek philosopher Antiphon, who in the 5th century B.C. held that time was a man-made measure, rather than a real thing or substance. The same opinion was held by another Ancient Greek philosopher in the 2nd century B.C., Kritolaos.
In the 2400 years since Antiphon, many different theories of possible timeless universes have been advanced, seeking to explain in whole or in part phenomena otherwise associated with or requiring the existence of time. Thinkers who have examined possible mechanisms responsible for such phenomena in a timeless universe include physicists such as Ernst Mach, Albert Einstein, Kurt Gödel and Carlo Rovelli, and timeless universes have been contemplated in works dedicated specifically to the subject by physicists such as John McTaggart, Julian Barbour and others.
Research indicates that concepts of time have not been universal to humanity since the advent of behavioral modernity, but were developed by some cultures at some unknown point along the trajectory of the development of early human society. Researchers have studied indigenous tribes in Amazonas and elsewhere and found that some have no developed concept of time at all.
Ernst Mach and Albert Einstein
The fundamental 1883 statement above was outlined by Ernst Mach in his book "The Science of Mechanics" and with him the idea of a timeless universe first came to prominence as a serious scientific idea. The particular passage quoted was singled out by Albert Einstein in his 1916 obituary for Mach, in which Einstein also described the great influence the Machian corpus had exerted and continued to exert upon generations of physicists:
I believe that even those who consider themselves opponents of Mach are hardly aware of how much of Mach's way of thinking they imbibed, so to speak, with their mother's milk.
Einstein opens the obituary in the Physikalische Zeitschrift with a firm rejection of those of his contemporary colleagues who view the epistemological inquiries of Mach as being work of lesser importance, and then opens the obituary proper with the central question which according to him ought to passionately interest and concern any disciple of science, such as himself:
What objective can and will that science achieve to which I have devoted myself? To what extent are its general results "true"? What is essential and what is only dependent on the accidents of development?
Einstein then proceeds to the central and largest part of his obituary, stating the importance of Mach and his work as follows:
The importance of such minds, like Mach, certainly lies not only therein, that they satisfy certain philosophical requirements of the age, which by the average inveterate specialist might be characterized as unnecessary luxury. Concepts which have proven useful in the ordering of things, through ourselves easily rise to such authority, that we forget that their origin is earthly - and accept them as unalterable/absolute facts. They are then established as "Logical necessities", "givens a priori", et cetera. The path of scientific progress is oft made inaccessible for generations through such errors. Hence it is certainly no frivolous activity, when we exercise ourselves in the analysis of these long-established concepts; in the analysis and demonstration of the circumstances upon which their justification and utility depend, even as they have emerged from the realm of experience and observed phenomena. In such exercise their overly large authority is broken: If they are unable to properly legitimize themselves, they are eliminated; if their application to a corresponding object in reality was sloppy, they are corrected; and they are replaced by others if it proves possible to adopt some new system, the which we may for some reasons prefer.
Such analyses are mostly perceived by the expert scientific specialist as superfluous, stilted, at times even ridiculous. The situation changes however, when one of the casually employed terms is replaced by a sharper more accurate one, because the development of a particular science-branch commands it: Then energic protest and lamentations are voiced by those guilty of sloppy procedure with their own terms, fearing for their most holy possessions. In this howling the voices of other philosophers chime in who think themselves unable to do without yonder term, because they had so lined it up within their little treasure-trove of the "absolute", the "a priori", et cetera, that they had proclaimed it an irrevocable fundamental principle.
The reader already conjectured that I am here particularly alluding to certain concepts of space and time and mechanics, which have been subjected to a modification through the theory of relativity. No one can take it away from the epistemologists that they paved the way here; at least in my own case I know that I have been propelled immensely, directly and indirectly, especially by Hume and Mach. I ask the reader to take into his hand the work of Mach: "The Science of Mechanics", and to contemplate his observations contained in its second chapter under parts VI. and VII. ("Newton's views of time, space, and motion." and "Discussion of Newton's view of time.")
Here we find thoughts masterfully presented, which as of yet have not at all become the common property of physicists. These parts are also especially intriguing for the reason that they grow directly out of Newtonian passages, quoted verbatim. I provide here a selection of the choicest morsels:
Newton: "Absolute, true, and mathematical time, of itself, and by its own nature, flows uniformly on, without regard to anything external. It is also called duration. Relative, apparent, and common time, is some sensible and external measure of absolute time (duration), estimated by the motions of bodies, whether accurate or inequable, and is commonly employed in place of true time; as an hour, a day, a month, a year..."
Mach: "... When we say a thing A changes with time, we mean simply that the conditions that determine a thing A depend on the conditions that determine another thing B. The vibrations of a pendulum take place in time when its excursion depends on the position of the earth. Since, however, in the observation of the pendulum, we are not under the necessity of taking into account its dependence on the position of the earth, but may compare it with any other thing (the conditions of which of course also depend on the position of the earth), the illusory notion easily arises that all the things with which we compare it are unessential. (....) It is utterly beyond our power to measure the changes of things by time. Quite the contrary, time is an abstraction, at which we arrive by means of the changes of things; made because we are not restricted to any one definite measure, all being interconnected."
These quoted lines show that Mach had clearly recognized the weak sides of classical mechanics, and was not far removed from demanding a general theory of relativity, and this already half a century ago! It is not improbable that Mach should have himself discovered the theory of relativity, if the question of the constancy of the speed of light had pre-occupied the community of physicists when he still had a fresh and youthful mind.
Einstein thus hails Mach as one of the great scientists of the era, and within his obituary gives the above passage dealing with the non-existence of time as evidence for why Mach was such a great physicist.
Kurt Gödel and Albert Einstein
Though the influence of Mach upon Einstein here described by the latter was certainly the most pronounced in the first four decades of his life, it is true that the warmth of Einstein's admiration for Mach cooled considerably after 1921 due to the posthumous publication of a text penned by Mach shortly before the latter's death disavowing any support for the theory of relativity. It must have disappointed Einstein, since we can see from partially extant friendly correspondence between the two that Mach had initially responded positively to the theory of relativity, much to the delight of Einstein. Einstein would later come to view Mach's ideas upon a number of subjects with increasing skepticism, but he never ceased to mention how Mach's work had influenced him and continued to discuss Machian problems and concepts with his close friends and colleagues.
One of these friends was Kurt Gödel, who in his works contemplated and examined universes both with and without concepts of time, the natures of which continued to be the subject of long discussions between himself and Einstein. Although in 1949 Gödel postulated a theorem that stated: "In any universe described by the theory of relativity, time cannot exist", it is important to stress that Gödel was considering many different universes employing for their theoretical constitutions many different theoretical conceptions of time. It was his endeavour to reconcile the paradoxa of time and timelessness with relativity theory upon a sound epistemological footing. Hence when Gödel is popularly known for proposing "his" universe in which time-travel is a possibility, it is in reality a gross overreduction and oversimplification of his work.
Calendars in timeless universes
In a universe where time does not exist, the measure of time is spoken of only as existing within the sphere of man-made society and its sciences. The regular motions of the sun, earth and moon are real, and the shadows they produce can hence be measured and artificially quantified into days, months and years, and subdivided using other movements for measurement into increasingly small subdivisions. This is done for convenience in human societies for purposes of measurement or comparative analysis in history or other sciences.
It is therefore still possible in a timeless universe to speak of a date in time, such as the Battle of Marathon in 490 B.C., as long as such time is clearly separated from physics, and confined to the sphere of man-made historical science and its conventions. That is, when we understand that the year is a measure for revolutions of a planet around the sun, and that our planet was measured by historians as having completed some 2510 revolutions since that event occurred, which we may read out from our catalogues that fix events to observations of physical revolutions. No real time is measured, rather observations of changes in seasons and nights and days are recorded by historians.
References
Ontology
Time | Timeless universe | Physics,Mathematics | 2,359 |
26,715,309 | https://en.wikipedia.org/wiki/Open%20Mobile%20Video%20Coalition | The Open Mobile Video Coalition (OMVC) is a consortium founded to advance free broadcast mobile television in the United States. It was created by TV stations to promote the ATSC-M/H television standard to consumers, electronics manufacturers, the wireless industry, and the Federal Communications Commission.
The OMVC set up the first real-life beta tests for ATSC-M/H on WATL and WPXA in Atlanta, and on KOMO and KONG in Seattle. Most recently, it has also advocated to the FCC, trying to keep it from taking even more of the UHF upper-band TV channels for wireless broadband. The OMVC commissioned a study to emphasize the fact that broadcasting is a far more efficient use of bandwidth than unicasting the same live video stream hundreds of times to every mobile phone that wants to watch local television.
As of January 1, 2013, the OMVC was integrated into the National Association of Broadcasters.
References
Mobile television | Open Mobile Video Coalition | Technology | 194 |
24,478,493 | https://en.wikipedia.org/wiki/C10H14N5O7P | The molecular formula C10H14N5O7P (molar mass: 347.22 g/mol) may refer to:
Adenosine monophosphate (AMP)
Deoxyguanosine monophosphate (dGMP)
Vidarabine phosphate | C10H14N5O7P | Chemistry | 61 |
48,593,234 | https://en.wikipedia.org/wiki/Jenny%20Glusker | Jenny Pickworth Glusker (born 28 June 1931) is a British biochemist and crystallographer. Since 1956 she has worked at the Fox Chase Cancer Center, a National Cancer Research Institute in the United States. She was also an adjunct professor of biochemistry and biophysics at the University of Pennsylvania.
Biography
Jenny Pickworth was born on 28 June 1931 in Birmingham, England, the eldest of three siblings. Her parents were both physicians. Her father Frederick Alfred Pickworth was a chemist who studied medicine and did neurology research in Birmingham. Her mother, Jane Wylie Stocks, was from Scotland, studied medicine in Glasgow, and worked in Dublin in the 1920s. Stocks later got a job in Birmingham, where she married Frederick Alfred Pickworth.
During her school years, Pickworth developed an early enthusiasm for chemistry, due largely to her chemistry teacher and her mother's textbooks. Her parents wanted her to study medicine. She agreed with her father that she would attend the Medical School of the University of Birmingham if she was rejected from Somerville College of the University of Oxford. She successfully completed her entrance exam in Oxford, receiving her bachelor's degree in chemistry in 1953 and later earning her doctorate under Dorothy Hodgkin. By the end of 1955, she was involved in the X-ray structural analysis of the corrin ring of vitamin B12, for which Hodgkin was awarded the 1964 Nobel Prize in Chemistry; Glusker earned her doctorate in 1957.
During her undergraduate studies, she met the American chemist Donald L. Glusker, who had secured a Rhodes Scholarship to Oxford. They married in 1955 in the United States and went together as post-doctoral researchers to Caltech, where Jenny Glusker worked in the laboratory of Linus Pauling. In 1956 she moved with her husband to Philadelphia where she became a research fellow and research associate with Arthur Lindo Patterson at the Institute for Cancer Research (ICR; now Fox Chase Cancer Center). She initially worked only part-time in order to raise her three children. After Patterson died in November 1966, Glusker eventually took over his lab and became a junior faculty member at the Institute for Cancer Research. She became a Member of the Institute for Cancer Research of the Fox Chase Cancer Center in 1977 (equivalent of associate professor) and a Senior Member (equivalent of full professor) in 1979, until her retirement in 2003. She remains a professor emerita of the Fox Chase Cancer Center.
At the ICR, Glusker initially examined the structure of small molecules of the citric acid cycle, in particular aconitase-catalyzed citrate, and its conformation as a ligand to iron atoms of the iron-sulfur cluster of aconitase, which led to a better understanding of the three-dimensional structure and mechanism of the enzymes (ferrous-wheel mechanism). Later her laboratory performed crystallographic analyses of anti-tumor agents and, amongst others, the structure and conformation of estramustine and acridine. They further tested carcinogens such as polycyclic aromatic hydrocarbons as well as the structure of the enzyme xylose isomerase. In 1972 Glusker and structural biologist Helen M. Berman reported on the crystal structure of a nucleic acid-drug complex as a model for anti-tumor agent and mutagen action.
Awards
1979: Garvan–Olin Medal (American Chemical Society)
1995: Fankuchen Award (American Crystallographic Association)
2011: John Scott Medal (Philadelphia City Council)
2014: William Procter Prize for Scientific Achievement
Glusker is an Honorary Fellow of Somerville College.
Selected publications
With Kenneth N. Trueblood: Crystal Structure Analysis: A Primer (1st edition 1972, 2nd edition 1985). 3rd edition, Oxford University Press, Oxford / New York 2010.
With Dan McLachlan (ed.): Crystallography in North America. American Crystallographic Association, New York 1983.
With Mitchell Lewis, Miriam Rossi: Crystal Structure Analysis for Chemists and Biologists. VCH, New York 1994.
References
Further reading
Elizabeth H. Oakes: Encyclopedia of World Scientists. Revised edition, Facts On File, 2007, pp. 276 f. (online).
Tiffany K. Wayne: American Women of Science Since 1900 (Vol. 1: Essays A–H). ABC-Clio, 2011, pp. 435 f.
External links
Memoir: Jenny Pickworth Glusker. ACA History, American Crystallographic Association.
Jenny Glusker, DSc: A Storied 60-Year Career at Fox Chase Fox Chase Cancer Center, Temple University Health System (TUHS).
1931 births
Living people
Structural biologists
British biochemists
British women biochemists
British crystallographers
20th-century British women scientists
Alumni of Somerville College, Oxford
Fellows of Somerville College, Oxford
People educated at King Edward VI High School for Girls, Birmingham
Presidents of the American Crystallographic Association
Fox Chase Cancer Center people | Jenny Glusker | Chemistry | 1,023 |
41,366,705 | https://en.wikipedia.org/wiki/Hoopla%20%28digital%20media%20service%29 | Hoopla (stylized as hoopla) is a web and mobile (Android/iOS) library media streaming platform launched in 2010 for audio books, comics, e-books, movies, music, and TV. Patrons of a library that supports Hoopla have access to its collection of digital media.
Hoopla Digital is a division of Midwest Tape.
Business model
Hoopla is free-of-charge for patrons of participating libraries. The content is paid for by library systems, using a "per circulation transaction model".
Content
Hoopla claims to have over 500,000 content titles across six formats, including over 25,000 comic books. As of November 2016, Hoopla's content comprised 35% audiobooks (for which Hoopla has contracts with publishers such as Blackstone Audio, HarperCollins, Simon & Schuster Audio, Tantor Audio, and others), followed by 22% movies (for which Hoopla has motion picture contracts with publishers such as Disney, Lionsgate, Starz, Warner Bros., and others), 19% music, 12% ebooks, 6% comics, and 6% television. One drawback is that Hoopla has few new bestsellers.
Areas served
Hoopla expanded to serve Australia and New Zealand in June 2021.
Technology
Hoopla content can be borrowed and consumed on the web, or via the native Android or iOS apps. Hoopla broadcasts only in standard definition, unlike most of its competitors such as Kanopy.
Parent company
John Eldred and Jeff Jankowski founded Hoopla's parent company, Midwest Tape, in 1989. Midwest Tape is a library vendor of physical media such as audiobooks, CDs, and DVD/Blu-ray.
Controversy
Hoopla and Midwest Tape were censured by the Library Freedom Project and Library Futures in a joint statement for hosting what they described as "fascist propaganda", including a recent English translation of A New Nobility of Blood and Soil by Richard Walther Darré of the SS and books related to Holocaust denial, in public library collections without input from library staff. Criticism was also directed at the inclusion of books on homosexuality, abortion, and vaccines that the Library Freedom Project and Library Futures claimed to be misinformation. On February 17, 2022, Hoopla removed a number of titles after public outcry about Holocaust denial books available on the app under non-fiction.
See also
Kanopy
Libro.fm
OverDrive (maker of Libby)
CloudLibrary
References
Digital media
Video on demand services
Online content distribution
Mass media companies of the United States
Audiobook companies and organizations | Hoopla (digital media service) | Technology | 522 |
1,326,899 | https://en.wikipedia.org/wiki/Ship%27s%20wheel | A ship's wheel or boat's wheel is a device used aboard a water vessel or airship, in which a helmsman steers the vessel and control its course. Together with the rest of the steering mechanism, it forms part of the helm (the term helm can mean the wheel alone, or the entire mechanism by which the rudder is controlled). It is connected to a mechanical, electric servo, or hydraulic system which alters the horizontal angle of the vessel's rudder relative to its hull. In some modern ships the wheel is replaced with a simple toggle that remotely controls an electro-mechanical or electro-hydraulic drive for the rudder, with a rudder position indicator presenting feedback to the helmsman.
History
Until the invention of the ship's wheel, the helmsman relied on a tiller—a horizontal bar fitted directly to the top of the rudder post—or a whipstaff—a vertical stick acting on the arm of the ship's tiller. Near the start of the 18th century, a large number of vessels appeared using the ship's wheel design, but historians are unclear when the approach was first used.
Design
A ship's wheel is composed of eight cylindrical wooden spokes (though sometimes as few as six or as many as ten or twelve depending on the wheel's size and how much force is needed to turn it) shaped like balusters and all joined at a central wooden hub or nave (sometimes covered with a brass nave plate) which houses the axle. The square hole at the centre of the hub through which the axle runs is called the drive square and was often lined with a brass plate (and therefore called a brass boss, though this term was used more often to refer to a brass hub and nave plate) which was frequently etched with the name of the wheel's manufacturer. The outer rim is composed of sections each made up of stacks of three felloes, the facing felloe, the middle felloe, and the after felloe. Because each group of three felloes at one time made up a quarter of the distance around the rim, the entire outer wooden wheel was sometimes called the quadrant. Each spoke runs through the middle felloe, creating a series of handles beyond the wheel's rim. One of these handles/spokes was frequently provided with extra grooves at its tip which could be felt by a helmsman steering in the dark and used by him to determine the exact position of the rudder—this is the king spoke, and when it pointed straight upward the rudder was believed to be dead straight to the hull. The completed ship's wheel and associated axle and pedestals might even be taller than the person using it. The wood used in construction of this type of wheel was most often either teak or mahogany, both of which are very durable tropical hardwoods capable of surviving the effects of salt water spray and regular use without significant decomposition. Modern design—particularly on smaller vessels—can deviate from the template.
Mechanism
The steering gear of earlier ships' wheels sometimes consisted of a double wheel where each wheel was connected to the other with a wooden spindle that ran through a barrel or drum. The spindle was held up by two pedestals that rested on a wooden platform, often no more than a grate. A tiller rope or tiller chain (sometimes called a steering rope or steering chain) ran around the barrel in five or six loops and then down through two tiller rope/ chain slots at the top of the platform before connecting to two sheaves just below deck (one on either side of the ship's wheel) and thence out to a pair of pulleys before coming back together at the tiller and connecting to the ship's rudder. Movement of the wheels (which were connected and moved in unison) caused the tiller rope to wind in one of two directions and angled the tiller left or right. In a typical and intuitive arrangement, a forward-facing helmsman turning the wheel counterclockwise would cause the tiller to angle to starboard and therefore the rudder to swing to port causing the vessel to also turn to port (see animation). Having two wheels connected by an axle allowed two people to take the helm in severe weather when one person alone might not have had enough strength to control the ship's movements.
When at the full extent of travel, the wheel and rudder are said to be "hard over", hence the order "hard port/starboard" given by the Captain/Officer of the Watch.
Gallery
See also
Steering engine
Steering wheel
Tiller
References
External links
Control devices
Watercraft components | Ship's wheel | Engineering | 934 |
4,525,977 | https://en.wikipedia.org/wiki/Building%20implosion | In the controlled demolition industry, building implosion is the strategic placing of explosive material and timing of its detonation so that a structure collapses on itself in a matter of seconds, minimizing the physical damage to its immediate surroundings. Despite its terminology, building implosion also includes the controlled demolition of other structures, like bridges, smokestacks, towers, and tunnels. This is typically done to save time and money of what would otherwise be an extensive demolition process with construction equipment, as well as to reduce construction workers exposure to infrastructure that is in severe disrepair.
Building implosion, which reduces to seconds a process which could take months or years to achieve by other methods, typically occurs in urban areas and often involves large landmark structures.
The use of the term "implosion" to refer to the destruction of a building is actually a misnomer, as was noted of the destruction of the 1515 Tower in West Palm Beach, Florida: "What happens is, you use explosive materials in critical structural connections to allow gravity to bring it down."
Terminology
The term building implosion can be misleading to a layperson: The technique is not a true implosion phenomenon. A true implosion usually involves a difference between internal (lower) and external (higher) pressure, or inward and outward forces, that is so large that the structure collapses inward into itself.
In contrast, building implosion techniques do not rely on the difference between internal and external pressure to collapse a structure. Instead, the goal is to induce a progressive collapse by weakening or removing critical supports; therefore, the building can no longer withstand gravity loads and will fail under its own weight.
Numerous small explosives, strategically placed within the structure, are used to catalyze the collapse. Nitroglycerin, dynamite, or other explosives are used to shatter reinforced concrete supports. Linear shaped charges are used to sever steel supports. These explosives are progressively detonated on supports throughout the structure. Then, explosives on the lower floors initiate the controlled collapse.
A simple structure like a chimney can be prepared for demolition in less than a day. Larger or more complex structures can take up to six months of preparation to remove internal walls and wrap columns with fabric and fencing before firing the explosives.
Historical overview
As part of the demolition industry, the history of building implosion is tied to the development of explosives technology.
One of the earliest documented attempts at building implosion was the 1773 razing of Holy Trinity Cathedral in Waterford, Ireland, using a quantity of gunpowder that was huge by the standards of the time. The use of low-velocity explosive produced a deafening explosion that instantly reduced the building to rubble.
The late 19th century saw the erection of—and ultimately the need to demolish—the first skyscrapers, which had more complicated structures, allowing greater heights. This led to other considerations in the explosive demolition of buildings, such as worker and spectator safety and limiting collateral damage. Benefiting from the availability of dynamite, a high-velocity explosive based on a stabilized form of nitroglycerine, and borrowing from techniques used in rock-blasting, such as staggered detonation of several small charges, the process of building implosion gradually became more efficient.
Following World War II, European demolition experts, faced with huge reconstruction projects in dense urban areas, gathered practical knowledge and experience for bringing down large structures without harming adjacent properties. This led to the emergence of a demolition industry that grew and matured during the latter half of the twentieth century. At the same time, the development of more efficient high-velocity explosives, such as RDX, and non-electrical firing systems combined to make this a period of time in which the building implosion technique was extensively used.
Meanwhile, public interest in the spectacle of controlled building explosion also grew. The October 1994 demolition of the Sears Merchandise Center in Philadelphia, Pennsylvania drew a cheering crowd of 50,000, as well as protesters, bands, and street vendors selling building implosion memorabilia. Evolution in the mastery of controlled demolition led to the world record demolition of the Seattle Kingdome on March 26, 2000.
In 1997, the Royal Canberra Hospital in Canberra, Australia, was demolished. The main building did not fully disintegrate and had to be manually demolished. The explosion during the initial demolition attempt was not contained on the site, and large pieces of debris were projected towards spectators standing in a location that had been considered safe for viewing. A twelve-year-old girl was killed instantly, and nine others were injured. Large fragments of masonry and metal were found at a distance from the demolition site.
On October 24, 1998, the J. L. Hudson Department Store and Addition in Detroit, Michigan became the tallest, and the largest, building ever imploded.
On February 23, 2007, an unfinished Intel building known as the Intel Shell, construction of which had been halted in April 2001, was imploded in Austin, Texas.
On December 13, 2009, an unfinished 31-story condominium tower, known as the Ocean Tower, was imploded in South Padre Island, Texas. Construction on the new tower had begun in 2006, but it had been sinking unevenly during construction, which halted in 2008, and could not be saved. It is believed to be one of the tallest reinforced concrete structures ever imploded.
Building implosion has been successfully used at Department of Energy sites such as the Savannah River Site (SRS) in South Carolina and the Hanford Site in Washington. The SRS 185-3K or "K" Area Cooling Tower, built in 1992 to cool the water from the K Reactor, was no longer needed when the Cold War ended and was safely demolished by explosive demolition on May 25, 2010.
The Hanford Site Buildings 337, 337B, and the 309 Exhaust Stack, built in the early 1970s and vacated in the mid-2000s due to deteriorating physical condition, were safely razed by explosive demolition on October 9, 2010.
Gallery
See also
Controlled Demolition, Inc.
List of tallest voluntarily demolished buildings
References
External links
Demolition Simulation Advanced structural analysis for predicting demolitions.
A History of Structural Demolition in America by Brent L. Blanchard
How Building Implosions Work by Tom Harris on www.howstuffworks.com
Building Implosions Videos
Bank Implosion Videos and Photos Videos
Building engineering
Demolition
Articles containing video clips
Implosion | Building implosion | Physics,Engineering | 1,282 |
3,111,337 | https://en.wikipedia.org/wiki/Call%20super | Call super is a code smell or anti-pattern of some object-oriented programming languages. Call super is a design pattern in which a particular class stipulates that in a derived subclass, the user is required to override a method and call back the overridden function itself at a particular point. The overridden method may be intentionally incomplete, and reliant on the overriding method to augment its functionality in a prescribed manner. However, the fact that the language itself may not be able to enforce all conditions prescribed on this call is what makes this an anti-pattern.
Description
In object-oriented programming, users can inherit the properties and behaviour of a superclass in subclasses. A subclass can override methods of its superclass, substituting its own implementation of the method for the superclass's implementation. Sometimes the overriding method will completely replace the corresponding functionality in the superclass, while in other cases the superclass's method must still be called from the overriding method. Therefore, most programming languages require that an overriding method must explicitly call the overridden method on the superclass for it to be executed.
The call super anti-pattern relies on the users of an interface or framework to derive a subclass from a particular class, override a certain method and require the overridden method to call the original method from the overriding method:
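A minimal sketch of the pattern is shown below; the class and method names are illustrative and are not taken from any particular framework:
abstract class BaseHandler
{
    // Subclasses are expected to override Handle() and to call base.Handle()
    // somewhere inside the override; the compiler cannot enforce that call.
    public virtual void Handle()
    {
        // setup or bookkeeping the framework needs on every call
    }
}
class CustomHandler : BaseHandler
{
    public override void Handle()
    {
        // subclass-specific work
        base.Handle(); // required by convention only; forgetting it silently breaks the contract
    }
}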
This is often required, since the superclass must perform some setup tasks for the class or framework to work correctly, or since the superclass's main task (which is performed by this method) is only augmented by the subclass.
The anti-pattern is the requirement of calling the parent. There are many examples in real code where the method in the subclass may still want the superclass's functionality, usually where it is only augmenting the parent functionality. If it still has to call the parent class even if it is fully replacing the functionality, the anti-pattern is present.
A better approach to solve these issues is instead to use the template method pattern, where the superclass includes a purely abstract method that must be implemented by the subclasses and have the original method call that method:
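A minimal sketch of that alternative, using the same illustrative names as above:
abstract class BaseHandler
{
    public void Handle()
    {
        // setup or bookkeeping the framework needs on every call
        DoHandle(); // hook that every subclass must implement
    }
    protected abstract void DoHandle();
}
class CustomHandler : BaseHandler
{
    protected override void DoHandle()
    {
        // subclass-specific work; no call back to the parent is needed
    }
}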
Language variation
The appearance of this anti-pattern in programs is usually because few programming languages provide a feature to contractually ensure that a super method is called from a derived class. One language that does have this feature, in a quite radical fashion, is BETA. The feature is found in a limited way in, for instance, Java and C++, where a child class constructor always calls the parent class constructor.
Languages that support before and after methods, such as Common Lisp (specifically the Common Lisp Object System), provide a different way to avoid this anti-pattern. The subclass's programmer can, instead of overriding the superclass's method, supply an additional method which will be executed before or after the superclass's method. Also, the superclass's programmer can specify before, after, and around methods that are guaranteed to be executed in addition to the subclass's actions.
Example
Suppose there is a class for generating a report about the inventory of a video rental store. Each particular store has a different way of tabulating the videos currently available, but the algorithm for generating the final report is the same for all stores. A framework that uses the call super anti-pattern may provide the following abstract class (in C#):
abstract class ReportGenerator
{
    public virtual Report CreateReport()
    {
        // Generate the general report object
        // ...
        return new Report(...);
    }
}
A user of the class is expected to implement a subclass like this:
class ConcreteReportGenerator : ReportGenerator
{
    public override Report CreateReport()
    {
        // Tabulate data in the store-specific way
        // ...
        // Design of this class requires the parent CreateReport() function to be called at the
        // end of the overridden function. But note this line could easily be left out, or the
        // returned report could be further modified after the call, violating the class design
        // and possibly also the company-wide report format.
        return base.CreateReport();
    }
}
A preferable interface looks like this:
abstract class ReportGenerator
{
    public Report CreateReport()
    {
        Tabulate();
        // Generate the general report object
        // ...
        return new Report(...);
    }
    protected abstract void Tabulate();
}
An implementation would override this class like this:
class ConcreteReportGenerator : ReportGenerator
{
    protected override void Tabulate()
    {
        // Tabulate data in the store-specific way
        // ...
    }
}
References
Anti-patterns
Object-oriented programming | Call super | Technology | 998 |
701,169 | https://en.wikipedia.org/wiki/Tadpole%20%28physics%29 | In quantum field theory, a tadpole is a one-loop Feynman diagram with one external leg, giving a contribution to a one-point correlation function (i.e., the field's vacuum expectation value). One-loop diagrams with a propagator that connects back to its originating vertex are often also referred as tadpoles. For many massless theories, these graphs vanish in dimensional regularization (by dimensional analysis and the absence of any inherent mass scale in the loop integral).
Tadpole corrections are needed if the corresponding external field has a non-zero vacuum expectation value, such as the Higgs field.
Tadpole diagrams were first used in the 1960s. An early example was published by Abdus Salam in 1961, though he did not take credit for the name. Physicists Sidney Coleman and Sheldon Glashow made an influential use of tadpole diagrams to explain symmetry breaking in the strong interaction in 1964.
In 1985 Coleman stated (perhaps as a joke) that Physical Review’s editors rejected the originally proposed name "spermion".
References
Quantum field theory | Tadpole (physics) | Physics | 222 |
1,131,022 | https://en.wikipedia.org/wiki/Apoplast | The apoplast is the extracellular space outside of plant cell membranes, especially the fluid-filled cell walls of adjacent cells where water and dissolved material can flow and diffuse freely. Fluid and material flows occurring in any extracellular space are called apoplastic flow or apoplastic transport. The apoplastic pathway is one route by which water and solutes are transported and distributed to different places through tissues and organs, contrasting with the symplastic pathway.
To prevent uncontrolled leakage to unwanted places, in certain areas there are barriers to the apoplastic flow:
in roots the Casparian strip has this function
Outside the plant epidermis of aerial plant parts is a protective waxy film called plant cuticle that protects against drying out, but also waterproofs the plant against external water.
The apoplast is important for all of the plant's interactions with its environment:
The main carbon source (carbon dioxide) needs to be solubilized, which happens in the apoplast, before it diffuses through the cell wall and across the plasma membrane, into the cell's inner content, the cytoplasm, where it diffuses in the symplast to the chloroplasts for photosynthesis.
In the roots, ions diffuse into the apoplast of the epidermis before diffusing into the symplast, or in some cases being taken up by specific ion channels, and being pulled by the plant's transpiration stream, which also occurs completely within the boundaries of the apoplast. Similarly, all gaseous molecules emitted and received by plants such as oxygen must pass through the apoplast.
In nitrate poor soils, acidification of the apoplast increases cell wall extensibility and root growth rate. This is believed to be caused by a decrease in nitrate uptake (due to deficit in the soil medium) and supplanted with an increase in chloride uptake. H+ATPase increases the efflux of H+, thus acidifying the apoplast.
The apoplast is a site for cell-to-cell communication. During local oxidative stress, hydrogen peroxide and superoxide anions can diffuse through the apoplast and transport a warning signal to neighbouring cells. In addition, a local alkalinization of the apoplast due to such stress can travel within minutes to the rest of the plant body via the xylem and trigger systemic acquired resistance.
The apoplast also plays an important role in resistance to aluminium toxicity.
In addition to resistance to chemicals, the apoplast provides a rich environment for endophytic microorganisms, which can increase the abiotic stress resistance of plants.
Exclusion of aluminium ions in the apoplast prevents them from reaching the toxic levels that inhibit shoot growth and reduce crop yields.
History
The term apoplast was coined in 1930 by Münch in order to separate the "living" symplast from the "dead" apoplast.
Apoplastic transport
The apoplastic pathway is one of the two main pathways for water transport in plants, the other being symplastic pathway.
In the root via the apoplast water and minerals flow in an upward direction to the xylem.
The concentration of solutes transported through the apoplast in aboveground organs is established through a combination of import from the xylem, absorption by cells, and export by the phloem.
Transport velocity is higher (transport is faster) in the apoplast than in the symplast.
This method of transport also accounts for a higher proportion of water transport in plant tissues than does symplastic transport.
The apoplastic pathway is also involved in passive exclusion. Some of the ions that enter through the roots do not make it to the xylem. The ions are excluded by the cell walls (plasma membranes) of the endodermal cells.
Apoplastic colonization
It is well known that the apoplast is rich in nutrients, and microorganisms accordingly thrive there. There is an apoplastic immune system, but pathogens with effectors can modulate or suppress the host’s immune responses. This is known as effector-triggered susceptibility. Another factor in pathogens’ frequent colonization of the apoplast is that when they enter from the leaves, the apoplast is the first thing they come across. Therefore, the apoplast is a popular biotic interface and also a reservoir for microbes. One common apoplastic disease appearing in plants without restricted habitat or climate is black rot, caused by the gram-negative bacterium Xanthomonas campestris.
Endophytic bacteria can cause severe problems in agriculture by alkalizing the apoplast with their volatiles and thereby inhibiting plant growth. In particular, the major phytotoxic component of the volatiles of rhizobacteria has been identified as 2-phenylethanol. 2-Phenylethanol can influence the regulation of WRKY18, a transcription factor involved in the signaling of several plant hormones, including abscisic acid (ABA). 2-Phenylethanol modulates ABA sensitivity through WRKY18 and WRKY40, with WRKY18 acting as the central mediator of the pathway by which 2-phenylethanol triggers cell death and modulates ABA sensitivity. The result is inhibition of root growth, leaving the plant unable to grow because its roots cannot absorb nutrients from the soil.
However, microbial colonization of the apoplast is not always harmful to the plant; indeed, it can be beneficial and establish a symbiotic relationship with the host. For example, endophytic and phyllosphere microbes can indirectly promote plant growth and protect the plant from other pathogens by inducing the salicylic acid (SA) and jasmonic acid (JA) signaling pathways, both of which are components of pathogen-associated molecular pattern-triggered immunity (PTI). Production of the SA and JA hormones also modulates ABA signaling as part of defense gene expression, and many further responses involving other hormones allow the plant to respond to different biotic and abiotic stresses. In an experiment performed by Romero et al., the known endophytic bacterium Xanthomonas was inoculated into canola, a plant that grows in multiple habitats, and bacteria recovered from the apoplastic fluid were found, by comparing 16S rRNA sequences with GenBank and reference strains, to be 99% identical to another bacterium, Pseudomonas viridiflava. The authors then performed quantitative reverse-transcription PCR using an SA-responsive transcription factor and other genes, such as lipoxygenase 3, as marker genes for JA and ABA signaling. The results showed that Xanthomonas activates only genes of the SA pathway, whereas Pseudomonas viridiflava is able to trigger genes of both the SA and JA pathways, suggesting that the Pseudomonas viridiflava already present in canola can stimulate PTI through both signaling pathways and thereby inhibit the growth of Xanthomonas. In conclusion, the apoplast plays a crucial role in plants, being involved in many kinds of hormone regulation and nutrient transport, so once it has been colonized the effects cannot be neglected.
See also
Symplast
Tonoplast
Vacuolar pathway
Notes
Apoplast was previously defined as "everything but the symplast, consisting of cell walls and spaces between cells in which water and solutes can move freely". However, since solutes can neither freely move through the air spaces between plant cells nor through the cuticle, this definition has been changed. When referring to "everything outside the plasma membrane", the term "extracellular space" is in use.
The word apoplasm is also in use with similar meaning as apoplast, although less common.
References
Footnotes
.
Plant physiology
Plant anatomy | Apoplast | Biology | 1,706 |
2,578,793 | https://en.wikipedia.org/wiki/Z%20User%20Group | The Z User Group (ZUG) was established in 1992 to promote use and development of the Z notation, a formal specification language for the description of and reasoning about computer-based systems. It was formally constituted on 14 December 1992 during the ZUM'92 Z User Meeting in London, England.
Meetings and conferences
ZUG has organised a series of Z User Meetings approximately every 18 months initially. From 2000, these became the ZB Conference (jointly with the B-Method, co-organized with APCB), and from 2008 the ABZ Conference (with abstract state machines as well). In 2010, the ABZ Conference also includes Alloy, a Z-like specification language with associated tool support.
The Z User Group participated at the FM'99 World Congress on Formal Methods in Toulouse, France, in 1999. The group and the associated Z notation have been studied as a community of practice.
List of proceedings
The following proceedings were produced by the Z User Group:
Bowen, J.P.; Nicholls, J.E., eds. (1993). Z User Workshop, London 1992, Proceedings of the Seventh Annual Z User Meeting, 14–15 December 1992. Springer, Workshops in Computing.
Bowen, J.P.; Hall, J.A., eds. (1994). Z User Workshop, Cambridge 1994, Proceedings of the Eighth Annual Z User Meeting, 29–30 June 1994. Springer, Workshops in Computing.
Bowen, J.P.; Hinchey, M.G., eds. (1995). ZUM '95: The Z Formal Specification Notation, 9th International Conference of Z Users, Limerick, Ireland, September 7–9, 1995. Springer, Lecture Notes in Computer Science, Volume 967.
Bowen, J.P.; Hinchey, M.G.; Till, D., eds. (1997). ZUM '97: The Z Formal Specification Notation, 10th International Conference of Z Users, Reading, UK, April 3–4, 1997. Springer, Lecture Notes in Computer Science, Volume 1212.
Bowen, J.P.; Fett, A.; Hinchey, M.G., eds. (1998). ZUM '98: The Z Formal Specification Notation, 11th International Conference of Z Users, Berlin, Germany, September 24–26, 1998. Springer, Lecture Notes in Computer Science, Volume 1493.
The following ZB conference proceedings were jointly produced with the Association de Pilotage des Conférences B (APCB), covering the Z notation and the related B-Method:
Bowen, J.P.; Dunne, S.; Galloway, A.; King, S., eds. (2000). ZB 2000: Formal Specification and Development in Z and B, First International Conference of B and Z Users, York, UK, August 29 – September 2, 2000. Springer, Lecture Notes in Computer Science, Volume 1878.
Bert, D.; Bowen, J.P.; Henson, M.C.; Robinson, K., eds. (2002). ZB 2002: Formal Specification and Development in Z and B: 2nd International Conference of B and Z Users Grenoble, France, January 23–25, 2002. Springer, Lecture Notes in Computer Science, Volume 2272.
Bert, D.; Bowen, J.P.; King, S.; Walden, M., eds. (2003). ZB 2003: Formal Specification and Development in Z and B: Third International Conference of B and Z Users, Turku, Finland, June 4–6, 2003. Springer, Lecture Notes in Computer Science, Volume 2651.
Treharne, H.; King, S.; Henson, M.C.; Schneider, S., eds. (2005). ZB 2005: Formal Specification and Development in Z and B: 4th International Conference of B and Z Users, Guildford, UK, April 13–15, 2005. Springer, Lecture Notes in Computer Science, Volume 3455.
From 2008, the ZB conferences were expanded to be the ABZ conference, also including abstract state machines.
Chair and secretary
Successive chairs have been:
John Nicholls (1992–1994)
Jonathan Bowen (1994–2011)
Steve Reeves (2011–)
Successive secretaries have been:
Mike Hinchey (1994–2011)
Randolph Johnson (2011–)
See also
Formal methods
References
External links
Z User Group
1992 establishments in the United Kingdom
Organizations established in 1992
Formal methods organizations
Z notation
User groups
Computer clubs in the United Kingdom | Z User Group | Mathematics | 927 |
22,711,136 | https://en.wikipedia.org/wiki/Neoparium | Neoparium, also known as Neopariés, is a glass material made in Japan by Nippon Electric Glass. Described as "crystalized glass ceramic," it was developed as an architectural cladding material for use in harsh environments. Typical units are 5/8" thick in a number of opaque colors. Panels can be fabricated with curves.
The material was most notably used to replace failing marble cladding on the BMA Tower in Kansas City, Missouri, where material replacement was reviewed for compliance with National Register of Historic Places status. Another use was cladding Marco Polo House, an office building built in 1987 in the Victorian district of Battersea, London.
References
External links
Neopariés at TGP Architectural
Neopariés crystallized glass cladding on Selector
Building materials | Neoparium | Physics,Engineering | 156 |
1,204,012 | https://en.wikipedia.org/wiki/Bhangmeter | A bhangmeter is a non-imaging radiometer installed on reconnaissance and navigation satellites to detect atmospheric nuclear detonations and determine the yield of the nuclear weapon. They are also installed on some armored fighting vehicles, in particular NBC reconnaissance vehicles, in order to help detect, localise and analyse tactical nuclear detonations. They are often used alongside pressure and sound sensors in this role in addition to standard radiation sensors. Some nuclear bunkers and military facilities may also be equipped with such sensors alongside seismic event detectors.
The bhangmeter was developed at Los Alamos National Laboratory by a team led by Hermann Hoerlin.
History
The bhangmeter was invented, and the first proof-of-concept device was built, in 1948 to measure the nuclear test detonations of Operation Sandstone. Prototype and production instruments were later built by EG&G, and the name "bhangmeter" was coined in 1950 by Frederick Reines. Bhangmeters became standard instruments used to observe US nuclear tests. A bhangmeter was developed to observe the detonations of Operation Buster-Jangle (1951) and Operation Tumbler-Snapper (1952). These tests laid the groundwork for a nationwide North American deployment of bhangmeters with the Bomb Alarm System (1961-1967).
US president John F. Kennedy and the First Secretary of the Communist Party of the Soviet Union Nikita Khrushchev signed the Partial Test Ban Treaty on August 5, 1963, under the condition that each party could use its own technical means to monitor the ban on nuclear testing in the atmosphere or in outer space.
Bhangmeters were first installed, in 1961, aboard a modified US KC-135A aircraft monitoring the pre-announced Soviet test of Tsar Bomba.
The Vela satellites were the first space-based observation devices jointly developed by the U.S. Air Force and the Atomic Energy Commission. The first generation of Vela satellites were not equipped with bhangmeters but with X-ray sensors to detect the intense single pulse of X-rays produced by a nuclear explosion. The first satellites which incorporated bhangmeters were the Advanced Vela satellites.
Since 1980, bhangmeters have been included on US GPS navigation satellites.
Description
The silicon photodiode sensors are designed to detect the distinctive bright double pulse of visible light that is emitted from atmospheric nuclear weapons explosions. This signature consists of a short and intense flash lasting around 1 millisecond, followed by a second much more prolonged and less intense emission of light taking a fraction of a second to several seconds to build up. This signature, with a double intensity maximum, is characteristic of atmospheric nuclear explosions and is the result of the Earth's atmosphere becoming opaque to visible light and transparent again as the explosion's shock wave travels through it.
The effect occurs because the surface of the early fireball is quickly overtaken by the expanding "case shock", the atmospheric shock wave composed of the ionised plasma of what was once the casing and other matter of the device. Although it emits a considerable amount of light itself, it is opaque and prevents the far brighter fireball from shining through. The net result recorded is a decrease of the light visible from outer space as the shock wave expands, producing the first peak recorded by the bhangmeter.
As it expands, the shock wave cools off and becomes less opaque to the visible light produced by the inner fireball. The bhangmeter starts eventually to record an increase in visible light intensity. The expansion of the fireball leads to an increase of its surface area and consequently an increase of the amount of visible light radiated off to space. The fireball continues to cool down so the amount of light eventually starts to decrease, causing the second peak observed by the bhangmeter. The time between the first and second peaks can be used to determine its nuclear yield.
The effect is unambiguous for explosions below a certain altitude, but above this height a more ambiguous single pulse is produced.
Origin of the name
The name of the detector is a pun which was bestowed upon it by Fred Reines, one of the scientists working on the project. The name is derived from the Hindi word "bhang", a locally grown variety of cannabis which is smoked or drunk to induce intoxicating effects, the joke being that one would have to be on drugs to believe the bhangmeter detectors would work properly. This is in contrast to a "bangmeter" one might associate with detection of nuclear explosions.
See also
Vela incident
WC-135 Constant Phoenix
Nuclear MASINT
Electro-optical MASINT
References
Further reading
Nuclear weapons
Nuclear warfare
Electromagnetic radiation meters
Chemical, biological, radiological and nuclear defense | Bhangmeter | Physics,Chemistry,Technology,Engineering,Biology | 955 |
36,952,902 | https://en.wikipedia.org/wiki/Omphalotus%20mangensis | Omphalotus mangensis is a species of agaric fungus in the family Marasmiaceae. Found in China, the fruit bodies of the fungus are bioluminescent.
See also
List of bioluminescent fungi
References
External links
mangensis
Bioluminescent fungi
Fungi described in 1993
Fungi of China
Fungus species | Omphalotus mangensis | Biology | 69 |
61,928,528 | https://en.wikipedia.org/wiki/Karen%20L.%20Collins | Karen Linda Collins is an American mathematician at Wesleyan University, where she is the Edward Burr Van Vleck Professor of Mathematics, Chair of Mathematics and Computer Science, and Professor of Integrative Sciences. The main topics in her research are combinatorics and graph theory.
Collins graduated from Smith College in 1981, and completed her Ph.D. in 1986 at the Massachusetts Institute of Technology. Her dissertation, Distance Matrices of Graphs, was supervised by Richard P. Stanley. In the same year, she joined the Wesleyan faculty.
She was given the Edward Burr Van Vleck Professorship in 2017.
References
External links
Home page
Year of birth missing (living people)
Living people
20th-century American mathematicians
21st-century American mathematicians
21st-century American women mathematicians
Graph theorists
Smith College alumni
Massachusetts Institute of Technology alumni
Wesleyan University faculty
20th-century American women | Karen L. Collins | Mathematics | 170 |
5,018,166 | https://en.wikipedia.org/wiki/Co-adaptation | In biology, co-adaptation is the process by which two or more species, genes or phenotypic traits undergo adaptation as a pair or group. This occurs when two or more interacting characteristics undergo natural selection together in response to the same selective pressure or when selective pressures alter one characteristic and consecutively alter the interactive characteristic. These interacting characteristics are only beneficial when together, sometimes leading to increased interdependence. Co-adaptation and coevolution, although similar in process, are not the same; co-adaptation refers to the interactions between two units, whereas co-evolution refers to their evolutionary history. Co-adaptation and its examples are often seen as evidence for co-evolution.
Genes and Protein Complexes
At the genetic level, co-adaptation is the accumulation of interacting genes in the gene pool of a population by selection. Selection pressures on one of the genes will affect its interacting proteins, after which compensatory changes occur.
Proteins often act in complex interactions with other proteins and functionally related proteins often show a similar evolutionary path. A possible explanation is co-adaptation. An example of this is the interaction between proteins encoded by mitochondrial DNA (mtDNA) and nuclear DNA (nDNA). MtDNA has a higher rate of evolution/mutation than nDNA, especially in specific coding regions. However, in order to maintain physiological functionality, selection for functionally interacting proteins, and therefore co-adapted nDNA will be favourable.
Co-adaptation between mtDNA and nDNA sequences has been studied in the copepod Tigriopus californicus. The mtDNA of COII coding sequences diverges extensively among conspecific populations of this species. When the mtDNA of one population was placed in the nuclear background of another population, cytochrome c oxidase activity decreased significantly, suggesting co-adaptation. The results make a relationship between mtDNA variation and environmental factors unlikely; a more likely explanation is neutral evolution of the mtDNA (random mutations accumulating over time in isolated populations) accompanied by compensatory changes in the nDNA.
Bacteria and bacteriophage
Gene blocks in bacterial genomes are sequences of genes, co-located on the chromosome, that are evolutionarily conserved across numerous taxa. Some conserved blocks are operons, where the genes are cotranscribed to polycistronic mRNA, and such operons are often associated with a single function such as a metabolic pathway or a protein complex. The co-location of genes with related function and the preservation of these relationships over evolutionary time indicates that natural selection has been operating to maintain a co-adaptive benefit.
As the early mapping of genes on the bacteriophage T4 chromosome progressed, it became evident that the arrangement of the genes is far from random. Genes with like functions tend to fall into clusters and appear to be co-adapted to each other. For instance genes that specify proteins employed in bacteriophage head morphogenesis are tightly clustered. Other examples of apparently co-adapted clusters are the genes that determine the baseplate wedge, the tail fibers, and DNA polymerase accessory proteins. In other cases where the structural relationship of the gene products is not as evident, a co-adapted clustering based on functional interaction may also occur. Thus Obringer proposed that a specific cluster of genes, centered around the imm and spackle genes encodes proteins adapted for competition and defense at the DNA level.
Organs
Similar to traits at the genetic level, aspects of organs can also be subject to co-adaptation. For example, slender bones can perform similarly to thicker bones in bearing daily loads, because slender bones have more mineralized tissue. This means that slenderness and the level of mineralization have probably been co-adapted. However, because they are harder than thick bones, slender bones are generally less pliant and more prone to breakage, especially when subjected to more extreme load conditions.
Weakly electric fish are capable of creating a weak electric field using an electric organ. These electric fields can be used to communicate between individuals through electric organ discharges (EOD), which can be further modulated to create context-specific signals called ‘chirps’. Fish can sense these electric fields and signals using electroreceptors. Research on ghost knifefish indicates that the signals produced by electric fish and the way they are received might be co-adapted, as the environment in which the fish resides (both physical and social) influences selection for the chirps, EODs, and detection. Interactions between territorial fish favour different signal parameters than interactions within social groups of fish.
Behaviour
The behaviour of parents and their offspring during feeding is influenced by one another. Parents feed depending on how much their offspring begs, while the offspring begs depending on how hungry it is. This would normally lead to a conflict of interest between parent and offspring, as the offspring will want to be fed as much as possible, whereas the parent can only invest a limited amount of energy into parental care. As such, selection would occur for the combination of begging and feeding behaviours that leads to the highest fitness, resulting in co-adaptation. Parent-offspring co-adaptation can be further influenced by information asymmetry, such as female blue tits being exposed more to begging behaviour in nature, resulting in them responding more than males to similar levels of stimuli.
Partial and antagonistic co-adaptation
It is also possible for related traits to only partially co-adapt due to traits not developing at the same speed, or contradict each other entirely. Research on Australian skinks revealed that diurnal skinks have a high temperature preference and can sprint optimally at higher temperatures, while nocturnal skinks have a low preferred temperature and optimum temperature. However, the differences between high and low optimal temperatures were much smaller than between preferred temperatures, which means that nocturnal skinks sprint slower compared to their diurnal counterparts. In the case of Eremiascincus, the optimum temperature and preferred temperature diverged from one another in opposite directions, creating antagonistic co-adaptation.
See also
Evolutionary biology
Coevolution
Mutualism
Symbiosis
Linkage disequilibrium
Epistasis
References
External links
Coadaptation entry in a dictionary on evolution.
Evolutionary biology | Co-adaptation | Biology | 1,279 |
1,598,792 | https://en.wikipedia.org/wiki/Scantling | Scantling is a measurement of prescribed size, dimensions, or cross sectional areas.
When used in regard to timber, the scantling is (also "the scantlings are") the thickness and breadth, the sectional dimensions; in the case of stone it refers to the dimensions of thickness, breadth and length.
The word is a variation of scantillon, a carpenter's or stonemason's measuring tool, also used of the measurements taken by it, and of a piece of timber of small size cut as a sample. Sometimes synonymous with story pole. The Old French escantillon, mod. échantillon, is usually taken to be related to Italian scandaglio, sounding-line (Latin scandere, to climb; cf. scansio, the metrical scansion). It was probably influenced by cantel, cantle, a small piece, a corner piece.
Shipbuilding
In shipbuilding, the scantling refers to the collective dimensions of the framing (apart from the keel) to which planks or plates are attached to form the hull. The word is most often used in the plural to describe how much structural strength in the form of girders, I-beams, etc., is in a given section.
Scantling length
The scantling length refers to the structural length of a ship. Its distance is slightly less than the waterline length of a ship, and generally less than the overall length of a ship.
In the American Bureau of Shipping's Rules for Building and Classing Steel Vessels, it is defined as the distance on the summer load line from the fore side of the stem to the centerline of the rudder stock. Scantling length need not be less than 96%, nor more than 97% of the length of the summer load line.
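As a worked illustration with an assumed figure (the length here is hypothetical, not taken from the rules): for a vessel whose summer load line length is 150 m, the scantling length would be taken as at least 0.96 × 150 = 144 m and at most 0.97 × 150 = 145.5 m.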
Most other classification societies use a similar definition of scantling length to define the general length of a ship. The scantling length is used by classification societies for all calculations where the waterline length, overall length, displacement length, etc. is called for. Naval architects wishing to comply with class rules would also use the scantling length.
Shipping
In shipping, a "full scantling vessel" is understood to be a geared ship, that can reach all parts of its own cargo spaces with its own cranes.
References
Oxford English Dictionary
External links
Units of length
Nautical terminology
Naval architecture
Timber framing | Scantling | Mathematics,Technology,Engineering | 483 |
25,092,574 | https://en.wikipedia.org/wiki/Ayodele%20Awojobi | Ayodele Oluwatumininu Awojobi (12 March 1937 – 23 September 1984), also known by the nicknames "Dead Easy",
"The Akoka Giant", and "Macbeth", was a Nigerian academic, author, inventor, social crusader and activist.
He was considered a scholarly genius by his teachers and peers alike.
His research papers, particularly in the field of vibration, are still cited by international research fellows in Engineering as lately as the year 2020, and are archived by such publishers as the Royal Society.
Early life
Born in Oshodi, Lagos State, Awojobi's father, Chief Daniel Adekoya Awojobi, was a stationmaster at the Nigerian Railway Corporation who hailed from Ikorodu in Lagos State. His mother, Comfort Bamidele Awojobi (née Adetunji), was a petty trader who hailed from Modakeke, Ile-Ife, Osun State.
Between 1942 and 1947, he attended St. Peter's Primary School, Faji, Lagos.
It was while at his secondary school, the CMS Grammar School, Lagos, that his academic traits began to manifest. Not only was he seen to be gifted in mathematics and the sciences, he was comfortable also in the arts, becoming a member of the school's literary and debating society. It was during this period that he earned the nickname, "Macbeth": William Shakespeare's famous play, Macbeth, was to be staged in the school. The lead actor took ill a week before, and so Ayodele was called upon to play the lead role in his stead. It is said that not only did Ayodele master his lines as lead actor, but also the entire play, such that he was able to prompt the cast whenever they forgot their lines.
Academic achievements
Ayodele Oluwatumininu Awojobi was a straight-A's secondary school student, while at the CMS Grammar school, passing his West African School Certificate examinations with a record eight distinctions in 1955.
He proceeded to the Nigerian College of Arts, Science and Technology, Ibadan, for his General Certificate of Education, GCE (Advanced Level), where in 1958 he sat for, and obtained distinctions in, all his papers: Physics, Pure Mathematics and Applied Mathematics.
In 1962 Awojobi was awarded his first degree in mechanical engineering – a BSc (Eng) London, with first class honours, at the then Nigerian College of Arts, Science and Technology, Zaria (now Ahmadu Bello University, Zaria).
He had studied there on a federal government scholarship won on the merit of his performance in the GCE (Advanced-level) examinations of 1958.
Akintola Ajai (himself an engineering graduate of the University of London) said that when Awojobi arrived at the Nigerian College of Arts, Science and Technology, Zaria, he openly boasted that he intended to finish the whole course in only three years; a seemingly impracticable feat, since no BSc Mechanical Engineering curriculum was designed to run for less than four years. Awojobi accomplished it in three years, just as he had predicted.
The federal government awarded Awojobi another scholarship in 1962 to study further at the post-graduate level in the field of Mechanical Engineering at the Imperial College of the University of London (now Imperial College London). He completed the course, successfully defending his thesis, and was awarded a PhD in mechanical engineering in 1966.
Landmark degree award
After a period teaching at the University of Lagos, he returned to the Imperial College London for a research study in the field of Vibration, and was awarded the degree of Doctor of Science, DSc. He was the first African to be awarded the Doctor of Science degree in mechanical engineering, at the Imperial College London.
The first university to admit an individual to this degree was the University of London in 1860.
Educator
On his return from England in 1966 Awojobi enrolled as a lecturer in the Faculty of Engineering, University of Lagos, Akoka. His teaching methods endeared him to his engineering students, whose public chants: "Dead easy... Dead easy...", would often be heard shouted in his direction as he went along the campus grounds. He quickly rose in the ranks among his colleagues and would later become the head of department, Mechanical Engineering, University of Lagos.
Awojobi went back to London to study for his higher doctorate. He returned in 1974 and was made an associate professor in mechanical engineering at the University of Lagos. However, one week after he had been appointed associate professor, the University of Lagos Senate, on receiving news that Awojobi had just been awarded the degree of Doctor of Science (DSc), immediately appointed him professor in mechanical engineering, making him the youngest professor in the Faculty of Engineering, University of Lagos, and the first ever to be promoted from associate to full professorship within a week.
By nature, Ayodele Awojobi was a teacher. He imparted knowledge at various other levels, even as he contended with his day job as a full-time professor and university lecturer. He envisaged his country as a whole becoming more advanced technologically – this was exemplified when he refused lucrative offers from commercial outfits for his Autonov 1 invention, preferring instead to preserve his design for his country's future benefit.
He engaged with great educators of his, and earlier generations, such as the late nationalist and Yoruba leader, Obafemi Awolowo (who forwarded several of Ayodele's educational books), the late activist, social crusader and educator, Tai Solarin, and the once Lagos State governor, Lateef Kayode Jakande, who achieved free education at all educational levels in Lagos State, Nigeria.
Jakande believed in Awolowo's visionary ideas about the way forward for the nation, particularly in Awolowo's resounding theme of qualitative and quantitative education across the nation, free of over-bearing school fees.
Ayodele Awojobi became, at one time, the chairman, Lagos State School's Management Board, out of his concern for ways to better improve the problems inherent in secondary school education in Lagos State, Nigeria.
He desired that all his children go to public schools. The older ones all did. Such was his vision and hope that the country would some day attain equitable distribution in the quality of education cutting across different social strata.
He authored several books for both the secondary and tertiary levels of education in Nigeria.
His natural propensity to inform, to educate, drove him to become, in the early 1970s, a quiz-master on national television. The quiz-show, Mastermind, consisted of weekly contestants taking turns in isolation on "the hot-seat", whereupon various categories of questions would be thrown at them. Otunba Gbenga Daniel, former governor of Ogun State, Nigeria, was a returning winner and champion on Mastermind for several episodes running; he was an undergraduate at the time.
Inventor
While as a lecturer at the University of Lagos, Awojobi successfully converted his own family car, an Opel Olympia Rekord, from a right-hand drive to a left-hand drive.
He tinkered further with motor engines when he acquired an army-type jeep and proceeded to invent a second steering-wheel mechanism adjoined to the pre-existing engine at the rear end so that the vehicle could move forward and backwards with all four pre-existing gears.
This gave the hybrid vehicle, which he christened Autonov 1, the ability to achieve its highest speeds at a moment's notice, in the normal reverse direction. He highlighted the advantage this might offer to army vehicles, as an example, that might need to make a fast retreat, in a cul-de-sac or ambush situation.
Activist
Ayodele Awojobi, in the wake of the presidential election results that returned the incumbent, Shehu Shagari, as president in the Second Nigerian Republic, became very vocal in the national newspapers and magazines, going as far as suing the Federal Government of Nigeria over what he strongly believed was widespread election rigging.
With all his court cases against the Nigerian government thrown out, he delved into the law books, despite being only a mechanical engineer, claiming that he would earn his law degrees in record time to enable him to argue better against the opposition in the federal courts.
He used the universities as a bastion, going from campus to campus to make speeches at student-rallies, hoping to sensitise them to what he perceived as the ills of a corrupt government.
Ayodele Awojobi authored several political books over the course of his ideological struggles against a perceived, corrupt federal government. These books were usually made available during his public rallies or symposiums.
Political ambition
Any intention Ayodele Awojobi ever had of entering partisan politics, was revealed by the man himself when he spoke on national television, saying: "At the age of 65, I will have built the infrastructure. There would be very few illiterates in Nigeria when I mount the soapbox. Then, I will go into proper politics". However this ambition was never realised as he died at the age of 47 in 1984.
Death
Ayodele Awojobi died in the morning of Sunday, 23 September 1984, at the age of 47. His death made headline news in most of the national newspapers for days following and he was laid to rest at Ikorodu Cemetery, Lagos. He was survived by his wife, Mrs Iyabode Mabel Awojobi (née Odetunde), and children.
Tribute
Almost every year since his death, a tribute or two in Awojobi's honour has been published as an article in a national newspaper, such as the one published by The Nation on 5 November 2009, entitled "Tribute to Ayodele Awojobi".
In October 2009, the governor of Lagos State Babatunde Fashola dedicated a statue of Awojobi at Onike Roundabout, Yaba, Lagos, in a garden named after him.
On 23 September 2010, Birrel Street – a prominent street in Yaba Local Government Council Area – was renamed "Prof. Ayodele Awojobi Avenue", a further tribute to Awojobi's memory.
List of publications
Research
Vibration of rigid bodies on semi-infinite elastic media – A. O. Awojobi, P. Grootenhuis, 1965
Plane strain and axially symmetric problems of a linearly non-homogeneous elastic half-space – A. O. Awojobi, R. E. Gibson, 1973
Vibration of rigid bodies on non-homogeneous semi-infinite elastic media – A. O. Awojobi, 1973
Torsional vibration of a rigid circular body on an infinite elastic stratum – A. O. Awojobi, 1969
Torsional vibration of a rigid circular body on a non-homogeneous elastic stratum – A. O. Awojobi, 1973
Vertical vibration of rigid bodies with rectangular bases on elastic media – A. O. Awojobi, P. H. Tabiowo
Factors in the design of ultrasonic probes – W. M. R. Smith, A. O. Awojobi, 1979
Determination of the dynamic shear modulus and the depth of the dominant layer of a vibrating elastic medium – A. O. Awojobi, 1970
Ground vibrations due to seismic detonation in oil exploration – A. O. Awojobi, O. A. Sobayo, 1974
Vertical vibrations of a rigid circular body on a non-homogeneous half-space interrupted by a frictionless plane – A. O. Awojobi
Educational
Technical Drawing for Secondary Schools. A. O. Awojobi
325 Worked Examples in Intermediate Mechanics. A. O. Awojobi
Notes and Worked Examples in Physics. A. O. Awojobi
Engineering Drawing. A. O. Awojobi
Political
References
1937 births
CMS Grammar School, Lagos alumni
Alumni of Imperial College London
People associated with the University of London
Ahmadu Bello University alumni
Academic staff of the University of Lagos
Mechanical engineers
Mechanical vibrations
Nigerian democracy activists
20th-century Nigerian inventors
Politics of Nigeria
1984 deaths
Nigerian mechanical engineers
Burials in Lagos State
20th-century Nigerian engineers
20th-century Nigerian educators
Scientists from Lagos | Ayodele Awojobi | Physics,Engineering | 2,590 |
27,612,945 | https://en.wikipedia.org/wiki/Cyclic%20ozone | Cyclic ozone is a theoretically predicted form of ozone. Like ordinary ozone (O3), it would have three oxygen atoms. It would differ from ordinary ozone in how those three oxygen atoms are arranged. In ordinary ozone, the atoms are arranged in a bent line; in cyclic ozone, they would form an equilateral triangle.
Some of the properties of cyclic ozone have been predicted theoretically. It should have more energy than ordinary ozone.
There is evidence that tiny quantities of cyclic ozone exist at the surface of magnesium oxide crystals in air. Cyclic ozone has not been made in bulk, although at least one researcher has attempted to do so using lasers. Another possibility to stabilize this form of oxygen is to produce it inside confined spaces, e.g., fullerene.
It has been speculated that, if cyclic ozone could be made in bulk, and if it proved to have good stability properties, it could be added to liquid oxygen to improve the specific impulse of rocket fuel.
The possibility of cyclic ozone has now been confirmed within diverse theoretical approaches.
References
External links
Allotropes of oxygen
Hypothetical chemical compounds
Three-membered rings
Ozone
Homonuclear triatomic molecules | Cyclic ozone | Chemistry | 237 |
50,102,585 | https://en.wikipedia.org/wiki/Rainer%20Philippson | Rainer Philippson was a German organic chemist. He is known for inventing and patenting the synthesis of clocortolone with Emanuel Kaspar in 1973. The original assignee of the patent was Schering AG. Philippson held a PhD (Dr.rer.nat.) and worked as a researcher at the German Academy of Sciences at Berlin (in East Berlin) in the 1950s and 1960s. In the 1960s, he defected to West Germany and joined Schering AG as a researcher. Philippson was also a co-inventor of several other patents held by Schering.
References
20th-century German chemists
German organic chemists
Schering people
German Academy of Sciences at Berlin people
Year of birth missing
Year of death missing | Rainer Philippson | Chemistry | 149 |
661,648 | https://en.wikipedia.org/wiki/Pluggable%20Authentication%20Module | A pluggable authentication module (PAM) is a mechanism to integrate multiple low-level authentication schemes into a high-level application programming interface (API). PAM allows programs that rely on authentication to be written independently of the underlying authentication scheme. It was first proposed by Sun Microsystems in an Open Software Foundation Request for Comments (RFC) 86.0 dated October 1995. It was adopted as the authentication framework of the Common Desktop Environment. As a stand-alone open-source infrastructure, PAM first appeared in Red Hat Linux 3.0.4 in August 1996 in the Linux PAM project. PAM is currently supported in the AIX operating system, DragonFly BSD, FreeBSD, HP-UX, Linux, macOS, NetBSD and Solaris.
Since no central standard of PAM behavior exists, there was a later attempt to standardize PAM as part of the X/Open UNIX standardization process, resulting in the X/Open Single Sign-on (XSSO) standard. This standard was not ratified, but the standard draft has served as a reference point for later PAM implementations (for example, OpenPAM).
Criticisms
Since most PAM implementations do not interface with remote clients themselves, PAM, on its own, cannot implement Kerberos, the most common type of SSO used in Unix environments. This led to SSO's incorporation as the "primary authentication" portion of the would-be XSSO standard and the advent of technologies such as SPNEGO and SASL. This lack of functionality is also the reason SSH does its own authentication mechanism negotiation.
In most PAM implementations, pam_krb5 only fetches Ticket Granting Tickets, which involves prompting the user for credentials, and this is only used for the initial login in an SSO environment. To fetch a service ticket for a particular application, and not prompt the user to enter credentials again, that application must be specifically coded to support Kerberos. This is because pam_krb5 cannot itself get service tickets, although there are versions of PAM-KRB5 that are attempting to work around the issue.
See also
Implementations:
Java Authentication and Authorization Service
Linux PAM
OpenPAM
Identity management – the general topic
Name Service Switch – manages user databases
System Security Services Daemon – SSO implementation based on PAM and NSS
References
External links
Specifications:
The Original Solaris PAM RFC
X/Open Single Sign-on (XSSO) 1997 Draft Working Paper
Guides:
Pluggable Authentication Modules for Linux
Making the Most of Pluggable Authentication Modules (PAM)
Oracle Solaris Administration: Security Services: Using PAM
Open Group standards
Unix authentication-related software
Computer access control frameworks
Computer security standards
Application programming interfaces | Pluggable Authentication Module | Technology,Engineering | 545 |
4,432,542 | https://en.wikipedia.org/wiki/Renewable%20heat | Renewable heat is an application of renewable energy referring to the generation of heat from renewable sources; for example, feeding radiators with water warmed by focused solar radiation rather than by a fossil fuel boiler. Renewable heat technologies include renewable biofuels, solar heating, geothermal heating, heat pumps and heat exchangers. Insulation is almost always an important factor in how renewable heating is implemented.
Many colder countries consume more energy for heating than for supplying electricity. For example, in 2005 the United Kingdom consumed 354 TWh of electric power, but had a heat requirement of 907 TWh, the majority of which (81%) was met using gas. The residential sector alone consumed 550 TWh of energy for heating, mainly derived from methane. Almost half of the final energy consumed in the UK (49%) was in the form of heat, of which 70% was used by households and in commercial and public buildings. Households used heat mainly for space heating (69%).
The relative competitiveness of renewable electricity and renewable heat depends on a nation's approach to energy and environment policy. In some countries renewable heat is hindered by subsidies for fossil fuelled heat. In those countries, such as Sweden, Denmark and Finland, where government intervention has been closest to a technology-neutral form of carbon valuation (i.e. carbon and energy taxes), renewable heat has played the leading role in a very substantial renewable contribution to final energy consumption. In those countries, such as Germany, Spain, the US, and the UK, where government intervention has been set at different levels for different technologies, uses and scales, the contributions of renewable heat and renewable electricity technologies have depended on the relative levels of support, and have resulted generally in a lower renewable contribution to final energy consumption.
Leading renewable heat technologies
Solar heating
Solar heating is a style of building construction which uses the energy of summer or winter sunshine to provide an economic supply of primary or supplementary heat to a structure. The heat can be used for both space heating (see solar air heat) and water heating (see solar hot water). Solar heating design is divided into two groups:
Passive solar heating relies on the design and structure of the house to collect heat. Passive solar building design must also consider the storage and distribution of heat, which may be accomplished passively, or use air ducting to draw heat actively to the foundation of the building for storage. One such design was measured raising the temperature of a house on a partially sunny winter day (-7 °C or 19 °F), and it is claimed that the system passively provides the bulk of the building's heating. The home cost $125 per square foot (or 370 m2 at $1,351/m2), similar to the cost of a traditional new home.
Active solar heating uses pumps to move air or a liquid from the solar collector into the building or storage area. Applications such as solar air heating and solar water heating typically capture solar heat in panels which can then be used for applications such as space heating and supplementation of residential water heaters. In contrast to photovoltaic panels, which are used to generate electricity, solar heating panels are less expensive and capture a much higher proportion of the sun's energy.
Solar heating systems usually require a small supplementary backup heating system, either conventional or renewable.
Geothermal heating
Geothermal energy is accessed by drilling water or steam wells in a process similar to drilling for oil. Geothermal energy is an enormous, underused heat and power resource that is clean (emits little or no greenhouse gases), reliable (average system availability of 95%), and homegrown (making populations less dependent on oil).
The earth absorbs the sun's energy and stores it as heat in the oceans and underground. The ground temperature remains roughly constant all year round, at a level that depends on where you live on Earth. A geothermal heating system takes advantage of the consistent temperature found below the Earth's surface and uses it to heat and cool buildings. The system is made up of a series of pipes installed underground, connected to pipes in a building. A pump circulates liquid through the circuit. In the winter the fluid in the pipe absorbs the heat of the earth and uses it to heat the building. In the summer the fluid absorbs heat from the building and disposes of it in the earth.
Heat pumps
Heat pumps use work to move heat from one place to another, and can be used for both heating and air conditioning. Though capital intensive, heat pumps are economical to run and can be powered by renewable electricity. Two common types of heat pump are air source heat pumps (ASHP) and ground-source heat pumps (GSHP), depending on whether heat is transferred from the air or from the ground. Air source heat pumps are not effective when the outside air temperature is lower than about -15 °C, while ground-source heat pumps are not affected. The efficiency of a heat pump is measured by the coefficient of performance (CoP): For every unit of electricity used to pump the heat, an air source heat pump generates 2.5 to 3 units of heat (i.e. it has a CoP of 2.5 to 3), whereas a GSHP generates 3 to 3.5 units of heat. Based on current fuel prices for the United Kingdom, assuming a CoP of 3–4, a GSHP is sometimes a cheaper form of space heating than electric, oil, and solid fuel heating. Heat pumps can be linked to an interseasonal thermal energy storage (hot or cold), doubling the CoP from 4 to 8 by extracting heat from warmer ground.
Interseasonal heat transfer
A heat pump with Interseasonal Heat Transfer combines active solar collection to store surplus summer heat in thermal banks with ground-source heat pumps to extract it for space heating in winter. This reduces the "Lift" needed and doubles the CoP of the heat pump because the pump starts with warmth from the thermal bank in place of cold from the ground.
CoP and lift
A heat pump CoP increases as the temperature difference, or "Lift", decreases between heat source and destination. The CoP can be maximized at design time by choosing a heating system requiring only a low final water temperature (e.g., underfloor heating), and by choosing a heat source with a high average temperature (e.g., the ground). Domestic hot water (DHW) and conventional radiators require high water temperatures, affecting the choice of heat pump technology. Low temperature radiators provide an alternative to conventional radiators.
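As a rough illustration of this relationship (an idealised upper bound only; real heat pumps achieve well below the theoretical Carnot limit, and the example temperatures are assumed for the sake of comparison), the best possible heating CoP for a given lift can be written, with temperatures in kelvins, as:

CoP(max) = T(hot) / (T(hot) - T(cold))
35 °C (308 K) water from 10 °C (283 K) ground: 308 / 25 ≈ 12
60 °C (333 K) water from -5 °C (268 K) air: 333 / 65 ≈ 5

The comparison shows why a low final water temperature and a relatively warm heat source both reduce the lift and raise the achievable CoP.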
Resistive electrical heating
Renewable electricity can be generated by hydropower, solar, wind, geothermal and by burning biomass. In a few countries where renewable electricity is inexpensive, resistance heating is common. In countries like Denmark where electricity is expensive, it is not permitted to install electric heating as the main heat source. Wind turbines have more output at night, when demand for electricity is low; storage heaters consume this lower-cost electricity at night and give off heat during the day.
Wood-pellet heating
Wood-pellet heating and other types of wood heating systems have achieved their greatest success in heating premises that are off the gas grid, typically being previously heated using heating oil or coal. Solid wood fuel requires a large amount of dedicated storage space, and the specialized heating systems can be expensive (though grant schemes are available in many European countries to offset this capital cost.) Low fuel costs mean that wood fuelled heating in Europe is frequently able to achieve a payback period of less than 3 to 5 years. Because of the large fuel storage requirement wood fuel can be less attractive in urban residential scenarios, or for premises connected to the gas grid (though rising gas prices and uncertainty of supply mean that wood fuel is becoming more competitive.) There is also growing concern over the air pollution from wood heating versus oil or gas heat, especially the fine particulates.
Wood-stove heating
Burning wood fuel in an open fire is both extremely inefficient (0-20%) and polluting due to low temperature partial combustion. In the same way that a drafty building loses heat through loss of warm air through poor sealing, an open fire is responsible for large heat losses by drawing very large volumes of warm air out of the building.
Modern wood stove designs allow for more efficient combustion and then heat extraction. In the United States, new wood stoves are certified by the U.S. Environmental Protection Agency (EPA) and burn cleaner and more efficiently (the overall efficiency is 60-80%) and draw smaller volumes of warm air from the building.
"Cleaner" should not, however, be confused with clean. An Australian study of real-life emissions from woodheaters satisfying the current Australian standard, found that particle emissions averaged 9.4 g/kg wood burned (range 2.6 to 21.7). A heater with average wood consumption of 4 tonnes per year therefore emits 37.6 kg of PM2.5, i.e. particles less than 2.5 micrometers. This can be compared with a passenger car satisfying the current Euro 5 standards (introduced September 2009) of 0.005 g/km. So one new wood heater emits as much PM2.5 per year as 367 passenger cars each driving 20,000 km a year. A recent European study identified PM2.5 as the most health-hazardous air pollutant, causing an estimated 492,000 premature deaths. The next worst pollutant, ozone, is responsible for 21,000 premature deaths.
Because of the problems with pollution, the Australian Lung Foundation recommends using alternative means for climate control. The American Lung Association "strongly recommends using cleaner, less toxic sources of heat. Converting a wood-burning fireplace or stove to use either natural gas or propane will eliminate exposure to the dangerous toxins wood burning generates including dioxin, arsenic and formaldehyde."
"Renewable" should not be confused with "greenhouse neutral". A recent peer-reviewed paper found that, even if burning firewood from a sustainable supply, methane emissions from a typical Australian wood heater satisfying the current standard cause more global warming than heating the same house with gas. However, because a large proportion of firewood sold in Australia is not from sustainable supplies, Australian households that use wood heating often cause more global warming than heating three similar homes with gas.
High efficiency stoves should meet the following design criteria:
Well sealed and precisely calibrated to draw a low yet sufficient volume of air. Air-flow restriction is critical; a lower inflow of cold air cools the furnace less (a higher temperature is thus achieved). It also allows greater time for extraction of heat from the exhaust gas, and draws less heat from the building.
The furnace must be well insulated to increase combustion temperature, and thus completeness.
A well insulated furnace radiates little heat. Thus heat must be extracted instead from the exhaust gas duct. Heat absorption efficiencies are higher when the heat-exchange duct is longer, and when the flow of exhaust gas is slower.
In many designs, the heat-exchange duct is built of a very large mass of heat-absorbing brick or stone. This design causes the absorbed heat to be emitted over a longer period - typically a day.
Renewable natural gas
Renewable natural gas is defined as gas obtained from biomass which is upgraded to a quality similar to natural gas. By upgrading the quality to that of natural gas, it becomes possible to distribute the gas to customers via the existing gas grid. According to the Energy research Centre of the Netherlands, renewable natural gas is 'cheaper than alternatives where biomass is used in a combined heat and power plant or local combustion plant'. Energy unit costs are lowered through 'favourable scale and operating hours', and end-user capital costs eliminated through distribution via the existing gas grid.
Energy efficiency
Renewable heat goes hand in hand with energy efficiency. Indeed, renewable heating projects depend heavily for their success on energy efficiency; in the case of solar heating to cut reliance on the requirement supplementary heating, in the case of wood fuel heating to cut the cost of wood purchased and volume stored, and in the case of heat pumps to reduce the size and investment in heat pump, heat sink and electricity costs.
Two main types of improvement can be made to a building's energy efficiency:
Insulation
Improvements to insulation can cut energy consumption greatly, making a space cheaper to heat and to cool. However existing housing can often be difficult or expensive to improve. Newer buildings can benefit from many of the techniques of superinsulation. Older buildings can benefit from several kinds of improvement:
Solid wall insulation: A building with solid walls can benefit from internal or external insulation. External wall insulation involves adding decorative weather-proof insulating panels or other treatment to the outside of the wall. Alternatively, internal wall insulation can be applied using ready-made insulation/plaster board laminates, or other methods. Thicknesses of internal or external insulation typically range between 50 and 100 mm.
Cavity wall insulation: A building with cavity walls can benefit from insulation pumped into the cavity. This form of insulation is very cost effective.
Programmable thermostats allow heating and cooling of a room to be switched off depending the time, day of the week, and temperature. A bedroom, for example, does not need to be heated during the day, but a living room does not need to be heated during the night.
Roof insulation
Insulated windows and doors
Draught proofing
Underfloor heating
Underfloor heating may sometimes be more energy efficient than traditional methods of heating:
Water circulates within the system at low temperatures (35 °C - 50 °C) making gas boilers, wood fired boilers, and heat pumps significantly more efficient.
Rooms with underfloor heating are cooler near the ceiling, where heat is not required, but warmer underfoot, where comfort is most required.
Traditional radiators are frequently positioned underneath poorly insulated windows, heating them unnecessarily.
Waste-water heat recovery
It is possible to recover significant amounts of heat from waste hot water via hot water heat recycling. The major consumers of hot water are sinks, showers, baths, dishwashers, and clothes washers. On average 30% of a property's domestic hot water is used for showering. Incoming fresh water is typically at a far lower temperature than the waste water from a shower. An inexpensive heat exchanger recovers on average around 40% of the heat that would normally be wasted, by warming incoming cold fresh water with heat from outgoing waste water.
Heat recovery ventilation
Heat recovery ventilation (HRV) is an energy recovery ventilation system which works between two air sources at different temperatures. By recovering the residual heat in the exhaust gas, the fresh air introduced into the air conditioning system is preheated.
See also
References
External links
Heat pumps based on R744 (CO2) FAQ
Heat pumps Long Awaited Way out of the Global Warming - Information from Heat Pump & Thermal Storage Technology Center of Japan
Department of Trade and Industry, 2005 study on Renewable Heat
Renewable Heat combining asphalt solar collectors, thermal banks and ground source heat pumps.
Energy Saving Trust information on Home Insulation
The Gill report on biomass in the UK - download
Solid wall insulation
Cavity wall insulation
Energy economics
Energy conservation
Heating
Low-energy building
Residential heating
Renewable energy technology
Sustainable technologies
Sustainable building
Sustainable architecture
Sustainable energy | Renewable heat | Engineering,Environmental_science | 3,130 |
24,674,758 | https://en.wikipedia.org/wiki/Goldner%E2%80%93Harary%20graph | In the mathematical field of graph theory, the Goldner–Harary graph is a simple undirected graph with 11 vertices and 27 edges. It is named after A. Goldner and Frank Harary, who proved in 1975 that it was the smallest non-Hamiltonian maximal planar graph. The same graph had already been given as an example of a non-Hamiltonian simplicial polyhedron by Branko Grünbaum in 1967.
Properties
The Goldner–Harary graph is a planar graph: it can be drawn in the plane with none of its edges crossing. When drawn on a plane, all its faces are triangular, making it a maximal planar graph. As with every maximal planar graph, it is also 3-vertex-connected: the removal of any two of its vertices leaves a connected subgraph.
The Goldner–Harary graph is also non-Hamiltonian. The smallest possible number of vertices for a non-Hamiltonian polyhedral graph is 11. Therefore, the Goldner–Harary graph is a minimal example of graphs of this type. However, the Herschel graph, another non-Hamiltonian polyhedron with 11 vertices, has fewer edges.
As a non-Hamiltonian maximal planar graph, the Goldner–Harary graph provides an example of a planar graph with book thickness greater than two. Based on the existence of such examples, Bernhart and Kainen conjectured that the book thickness of planar graphs could be made arbitrarily large, but it was subsequently shown that all planar graphs have book thickness at most four.
It has book thickness 3, chromatic number 4, chromatic index 8, girth 3, radius 2, diameter 2 and is a 3-edge-connected graph.
It is also a 3-tree, and therefore it has treewidth 3. Like any k-tree, it is a chordal graph. As a planar 3-tree, it forms an example of an Apollonian network.
Geometry
By Steinitz's theorem, the Goldner–Harary graph is a polyhedral graph: it is planar and 3-connected, so there exists a convex polyhedron having the Goldner–Harary graph as its skeleton.
Geometrically, a polyhedron representing the Goldner–Harary graph may be formed by gluing a tetrahedron onto each face of a triangular dipyramid, similarly to the way a triakis octahedron is formed by gluing a tetrahedron onto each face of an octahedron. That is, it is the Kleetope of the triangular dipyramid. The dual graph of the Goldner–Harary graph is represented geometrically by the truncation of the triangular prism.
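This Kleetope construction translates directly into code. The following short Python sketch (assuming the NetworkX library is available; the vertex labelling is arbitrary) builds the triangular dipyramid, adds one new vertex inside each of its six triangular faces, and confirms the counts of 11 vertices and 27 edges as well as planarity:

import networkx as nx
from itertools import combinations

# Triangular dipyramid: a 3-cycle "equator" plus two apexes joined to every equatorial vertex.
G = nx.Graph()
equator = [0, 1, 2]
apexes = [3, 4]
G.add_edges_from(combinations(equator, 2))
G.add_edges_from((a, v) for a in apexes for v in equator)

# Kleetope step: put a new vertex in each triangular face and join it to that face's three corners.
faces = [(a, u, v) for a in apexes for (u, v) in combinations(equator, 2)]
for new_vertex, face in enumerate(faces, start=5):
    for corner in face:
        G.add_edge(new_vertex, corner)

print(G.number_of_nodes(), G.number_of_edges())  # 11 27
print(nx.check_planarity(G)[0])                  # True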
Algebraic properties
The automorphism group of the Goldner–Harary graph is of order 12 and is isomorphic to the dihedral group D6, the group of symmetries of a regular hexagon, including both rotations and reflections.
The characteristic polynomial of the Goldner–Harary graph is : .
References
External links
Individual graphs
Planar graphs | Goldner–Harary graph | Mathematics | 643 |
73,090,850 | https://en.wikipedia.org/wiki/Cadusafos | Cadusafos (2-[butan-2-ylsulfanyl(ethoxy)phosphoryl]sulfanylbutane) is a chemical insecticide and nematicide often used against parasitic nematode populations. The compound acts as a acetylcholinesterase inhibitor. It belongs the chemical class of synthetic organic thiophosphates and it is a volatile and persistent clear liquid. It is used on food crops such as tomatoes, bananas and chickpeas. It is currently not approved by the European Commission
for use in the EU. Exposure can occur through inhalation, ingestion or contact with the skin. The compound is highly toxic to nematodes,
earthworms and birds but poses no carcinogenic risk to humans.
History
A patent application for Cadusafos was first filed in Europe on August 13, 1982 by FMC Corporation, an American chemical company which originated as an insecticide producer. In their patent application, they claimed that the compound should preferably be used to “control nematodes and soil insects, but may also control some insects which feed on the above ground portions of the plant.” The patent has expired, meaning that the compound is commercially available from chemical vendors such as Sigma Aldrich. However, the pesticide is not approved for use in Europe due to the lack of information on consumer exposure and the risk to groundwater.
Structure and reactivity
Cadusafos is a synthetic organic thiophosphate compound which is observed as a volatile and persistent clear liquid. The toxin is an organothiophosphate insecticide.
Organothiophosphorus compounds are identified as compounds which contain carbon-phosphorus bonds where the phosphorus atom is also bound to sulphur. Many of these compounds serve as insecticides and cholinergic agents.
Cadusafos contains the phosphorus atom bound to two sulphur atoms, each of which carries a sec-butyl substituent. The phosphorus is also connected to oxygen by a double bond and is bound to an ethyl ether group.
The exact reactivity of Cadusafos as well as that of organothiophosphate compounds in general is, as of yet, unknown. However, the cholinesterase enzyme inhibition mechanism of action of these compounds works similarly to other organophosphates. Examples of organophosphates include nerve gasses such as sarin and VX as well as pesticides like malathion.
Synthesis
The synthesis of Cadusafos can be performed via the substitution reaction of O-ethyl phosphoric dichloride and two equivalents of 2-butanethiol.
Mechanism of action
Cadusafos is an inhibitor of the enzyme acetylcholinesterase. This enzyme binds to acetylcholine and cleaves it into choline and acetate. Acetylcholine is a neurotransmitter which is used in neurons to pass on a neural stimulus. Cadusafos inhibits the function of acetylcholinesterase by occupying the active site of the enzyme, which is then no longer able to function properly, resulting in the accumulation of acetylcholine. This can result in excessive nervous stimulation, respiratory failure and death.
Cadusafos is an organothiophosphate, which is a subclass of organophosphates. Organophosphates can act as inhibitors of acetylcholinesterase by a known mechanism. The active site of acetylcholinesterase contains an anionic site and an esteratic site. The esteratic site contains a serine at the 200th position, which usually binds acetylcholine. Organophosphate inhibitors can phosphorylate this serine and thereby inhibit the enzyme.
Metabolism and biotransformation
In a study, 14C radiolabeled Cadusafos was administered orally to rats. The excretion of feces, urine and CO2 was monitored for seven days. This showed that cadusafos is readily absorbed (90-100%) and mainly eliminated via urine (around 75%), followed by elimination via expired air (10-15%) and via feces (5-15%). Over 90% of the administered dose was eliminated within 48 hours after administration. Analysis of tissue and blood samples collected after seven days showed a remaining radioactivity between 1-3%. The majority of this radioactivity was found in fat, liver, kidney and lung tissue and no evidence of accumulation was found.
A different study was performed in order to identify the metabolites formed in rats after receiving either an oral or intravenous dose of Cadusafos. The metabolic products were analyzed using several analysis methods (HPLC, TLC, GC-MS, 1H-NMR and liquid scintillation). This indicated the presence of the parent compound, Cadusafos, as well as 10 other metabolites. The main pathway of metabolism involves the cleavage of the thio-(sec-butyl) group, forming two primary products: Sec-butyl mercaptan and Oethyl-S-(2-butyl) phosphorothioic acid (OSPA). These intermediate compounds are then degraded further into several metabolites. The major metabolites were hydroxysulfones, followed by phosphorothionic acids and sulfonic acids, which then form conjugates.
Toxicity
A study has been conducted by the Joint FAO/WHO Meeting on Pesticide Residues (JMPR) on rats in which the lethal dose of Cadusafos was investigated. The researchers found a median lethal dose via the oral pathway of 68.4 mg/kg bodyweight (bw) in male rats and 82.1 mg/kg bw in female rats. The rats died of typical symptoms of acetylcholinesterase inhibition. Via the dermal pathway, lower median lethal doses were found; mg/kg bw in males and 41.8 mg/kg bw in females.
Considering the toxicity in humans, there is no data available yet regarding the median lethal dose for a human. The United States Environmental Protection Agency (EPA) did publish a report on the safety concerns of Cadusafos used as a pesticide on bananas and concluded that “Potential acute and chronic dietary exposures from eating bananas treated with Cadusafos are below the level of concern for the entire U.S. population, including infants and children.”
Effects on animals
Cadusafos has been shown to be toxic to fish, aquatic invertebrates, bees, earthworms and other arthropods. Further research was conducted on terrestrial vertebrates, and it is expected to have toxic effects on mammals. Besides its direct toxicity to multiple species, Cadusafos also has the potential to bioaccumulate, so secondary poisoning of earthworm-eating mammals and birds should also be taken into consideration. The estimated risk to bees and aquatic organisms is low because of the way Cadusafos is applied, even though its toxicity to bees is high. The compound is also estimated to be highly toxic to earthworms and birds. A multigeneration study in rats has established a no-observed-adverse-effect level (NOAEL) of 0.03 mg/kg bw per day for the inhibition of cholinesterase activity in plasma and erythrocytes. There is no adequate evidence that Cadusafos is genotoxic. Based on this and on additional research in mice and rats showing Cadusafos to be non-carcinogenic, it can be concluded that Cadusafos is non-carcinogenic for humans.
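As an illustration of how a NOAEL of this kind is commonly turned into a guideline value, the short sketch below applies a conventional 100-fold uncertainty factor (10 for interspecies and 10 for intraspecies variation). The factor and the resulting figure are illustrative assumptions for the sketch, not values taken from the JMPR evaluation.

noael = 0.03                      # mg/kg bw per day, from the rat multigeneration study
uncertainty_factor = 10 * 10      # assumed: 10x interspecies x 10x intraspecies variation
guideline_intake = noael / uncertainty_factor
print(f"Illustrative guideline intake: {guideline_intake:.4f} mg/kg bw per day")  # 0.0003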
Efficacy
Cadusafos proved to be very effective against parasitic nematode populations such as Rotylenchulus reniformis and Meloidogyne incognita. It was shown to be more effective against endoparasitic nematodes than against ectoparasitic nematodes, and when compared with other nematicides such as triazophos, methyl bromide, aldicarb, carbofuran and phorate, Cadusafos proved the most efficient. The effectiveness of Cadusafos improves with increasing dosage or exposure time. Efficacy remained essentially unchanged for up to four successive cropping seasons; however, use for more than four consecutive seasons caused a linear decrease in efficacy.
References
Nematicides
Ethyl esters
Phosphorodithioates
Insecticides
Thioesters
Sec-Butyl compounds | Cadusafos | Chemistry | 1,796 |
42,138,344 | https://en.wikipedia.org/wiki/Rosa%27s%20rule | Rosa's rule, also known as Rosa's law of progressive reduction of variability, is a biological rule that observes the tendency to go from character variation in more primitive representatives of a taxonomic group or clade to a fixed character state in more advanced members. An example of Rosa's rule is that the number of thoracic segments in adults (or holaspids) may vary in Cambrian trilobite species, while from the Ordovician the number of thoracic segments is constant in entire genera, families, and even suborders. Thus, a trend of decreasing trait variation between individuals of a taxon as the taxon develops across evolutionary time can be observed. The rule is named for Italian paleontologist Daniele Rosa.
References
Biological rules
Evolutionary biology | Rosa's rule | Biology | 158 |
12,139,297 | https://en.wikipedia.org/wiki/Spirotryprostatin%20B | Spirotryprostatin B is an indolic alkaloid found in the Aspergillus fumigatus fungus that belongs to a class of naturally occurring 2,5-diketopiperazines. Spirotryprostatin B and several other indolic alkaloids (including Spirotryprostatin A, as well as other tryprostatins and cyclotryprostatins) have been found to have anti-mitotic properties, and as such they have become of great interest as anti-cancer drugs. Because of this, the total syntheses of these compounds are a major pursuit of organic chemists, and a number of different syntheses have been published in the chemical literature.
Total synthesis
The first total synthesis was accomplished in 2000 by the Danishefsky group at Columbia University, with a number of other syntheses following shortly thereafter by Williams, Ganesan, Fuji, Carreira, Horne, Overman, and most recently Trost.
From a synthetic point of view, the most challenging structural features of the molecule are the C3 spirocyclic ring juncture and the adjacent prenyl-substituted carbon. Approaches toward preparing the skeleton of spirotryprostatin B have varied considerably.
Danishefsky spirotryprostatin B synthesis
In the Danishefsky synthesis, an amine derived from tryptophan was condensed with an aldehyde, triggering a Mannich-type reaction wherein the pendant oxindole acted as a nucleophile toward the intermediate iminium species.
Williams spirotryprostatin B synthesis
The synthesis by the Williams group utilized a 3-component coupling reaction. A secondary amine was combined with an aldehyde to form an intermediate azomethine ylide, which underwent a 1,3-dipolar cycloaddition with an unsaturated oxindole also present in the reaction mixture.
Ganesan spirotryprostatin B synthesis
Ganesan made use of a biomimetic strategy in his synthesis of spirotryprostatin B. An indole was treated with N-bromosuccinimide to trigger an oxidative rearrangement, forming the quaternary stereocenter in a diastereoselective manner.
Fuji spirotryprostatin B synthesis
In the synthesis developed by the Fuji group, the stereochemistry at the spirocyclic carbon was established by a nitroolefination reaction. An oxindole with pendant prenyl group was reacted with a nitroolefin bearing a chiral leaving group.
Carreira spirotryprostatin B synthesis
The Carreira group made use of a magnesium iodide promoted annulation reaction in their approach toward spirotryprostatin B. An oxindole bearing a cyclopropane was reacted with an imine in the presence of the magnesium iodide, triggering the ring-expansion reaction.
Horne spirotryprostatin B synthesis
Horne's synthesis of spirotryprostatin B also made use of a Mannich-type process, wherein a chloro-indole served as the pro-nucleophile. The cyclization was triggered by treating the pendant imine with the acyl chloride derived from proline. The resulting iminium species was attacked by the chloro-indole, forming the spirocyclic bond.
Overman spirotryprostatin B synthesis
The Overman group utilized a Heck reaction to prepare the molecule. An iodo-aniline with a tethered alkene was subjected to palladium catalysis. The intermediate palladium-allyl species was intercepted by the pendant amide nitrogen to generate the prenyl stereocenter in the same reaction.
Trost spirotryprostatin B synthesis
In the synthesis developed by the Trost group, the stereochemistry at the spirocyclic ring juncture is established by a decarboxylation-prenylation sequence, reminiscent of the Carroll reaction. Here, a prenyl ester serves as both the nucleophile and electrophile precursor. Upon treatment with a chiral palladium catalyst the prenyl group ionizes and decarboxylates. The resulting ion pair subsequently recombines to generate the prenylated product. Notably, double bond migration occurs and the prenyl group is attacked at the oxindole carbon.
References
Indole alkaloids
Total synthesis
Lactams
Spiro compounds
Diketopiperazines
Heterocyclic compounds with 3 rings
Heterocyclic compounds with 2 rings | Spirotryprostatin B | Chemistry | 965 |
33,294,058 | https://en.wikipedia.org/wiki/Busemann%E2%80%93Petty%20problem | In the mathematical field of convex geometry, the Busemann–Petty problem, introduced by Busemann and Petty, asks whether it is true that a symmetric convex body with larger central hyperplane sections has larger volume. More precisely, if K, T are symmetric convex bodies in R^n such that
Vol_{n−1}(K ∩ A) ≤ Vol_{n−1}(T ∩ A)
for every hyperplane A passing through the origin, is it true that Vol_n(K) ≤ Vol_n(T)?
Busemann and Petty showed that the answer is positive if K is a ball. In general, the answer is positive in dimensions at most 4, and negative in dimensions at least 5.
History
Unexpectedly at the time, Larman and Rogers showed that the Busemann–Petty problem has a negative solution in dimensions at least 12, and this bound was reduced to dimensions at least 5 by several other authors. Ball pointed out a particularly simple counterexample: all sections of the unit volume cube have measure at most √2, while in dimensions at least 10 all central sections of the unit volume ball have measure at least √2. Lutwak introduced intersection bodies, and showed that the Busemann–Petty problem has a positive solution in a given dimension if and only if every symmetric convex body in that dimension is an intersection body. An intersection body is a star body whose radial function in a given direction u is the volume of the hyperplane section u⊥ ∩ K for some fixed star body K.
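In symbols (stated here for concreteness, using the standard formulation of Lutwak's definition), the intersection body IK of a star body K is the star body whose radial function is

\rho_{IK}(u) = \operatorname{Vol}_{n-1}\left(K \cap u^{\perp}\right), \qquad u \in S^{n-1},

so the hypothesis of the Busemann–Petty problem is precisely the statement that the intersection body of K is contained in the intersection body of T.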
Gardner used Lutwak's result to show that the Busemann–Petty problem has a positive solution if the dimension is 3. Zhang claimed incorrectly that the unit cube in R^4 is not an intersection body, which would have implied that the Busemann–Petty problem has a negative solution if the dimension is at least 4. However, Koldobsky showed that a centrally symmetric star-shaped body is an intersection body if and only if the function 1/||x|| is a positive definite distribution, where ||x|| is the homogeneous function of degree 1 that is 1 on the boundary of the body, and used this to show that the unit balls l_p^n, 1 < p ≤ ∞, of n-dimensional space with the l_p norm are intersection bodies for n = 4 but are not intersection bodies for n ≥ 5, showing that Zhang's result was incorrect. Zhang then showed that the Busemann–Petty problem has a positive solution in dimension 4.
Gardner, Koldobsky, and Schlumprecht gave a uniform solution for all dimensions.
See also
Shephard's problem
References
Convex geometry
Geometry problems | Busemann–Petty problem | Mathematics | 471 |
195,454 | https://en.wikipedia.org/wiki/Amiskwia | Amiskwia is a genus of soft-bodied animals known from fossils of the Middle Cambrian Lagerstätten both in the Burgess Shale in British Columbia, Canada and the Maotianshan shales of Yunnan Province, China. It is interpreted as a member of the clade Gnathifera sensu lato or as a basal cucullophoran.
Etymology
The scientific name Amiskwia sagittiformis derives from the Cree amiskwi, "beavertail", a name of various objects in Yoho National Park, and from the Latin sagitta ("arrow") and formis ("shape"), in reference to the general appearance of the animal. "Sinica", of A. sinica, refers to that species' origin from China.
Description
Known specimens of Amiskwia vary in length from and in width from . The body was somewhat flattened. The head had a pair of tentacles that emerged from the midline of the head. The tentacles had a relatively thick base and tapered to a point. Along the sides of the trunk were a pair of lateral fins, which were around one third of the total body length. The trunk terminated with a flat, rounded caudal fin. The gut was straight, and ran from the mouth to the anus, which was located on the underside of the body near the caudal fin. Within the mouth is a pair of semi-circular structures, described as "jaws" each bearing 8-10 conical spikes, which increased in size away from the midline of the structure. Two other structures, dubbed the "dorsal plate" and "ventral plate", are also present in the mouth.
Phylogeny
A dendrogram in Park et al. 2024 shows the evolutionary relationships of Amiskwia; the cladogram is not reproduced here. Among the taxa it includes is a yet-undescribed chaetognath (as of January 2024) from Sirius Passet.
Ecology
Amiskwia was likely a freely swimming (nektonic) organism that was either a predator or a scavenger.
History of research
Amiskwia was originally categorized by paleontologist Charles Walcott. Walcott thought he saw three buccal spines in the fossils, and therefore categorized Amiskwia as a chaetognath worm (arrow worm). However, Amiskwia appears to lack the characteristic grasping spines and teeth of other Burgess fossil arrow worms. Later scientists suggested an affinity with the nemerteans (ribbon worms), but the evidence for this was somewhat inadequate. Conway Morris, on re-examining the Burgess Shale fauna in the 1970s, described it as the single known species in an otherwise unknown phylum, given that it has two tentacles near its mouth, rather than the characteristic single tentacle of true nemerteans. (Nemerteans do not have a single tentacle. However, a pair of antero-lateral tentacles is present in two of the many genera of pelagic nemerteans. Nemerteans do have a single eversible, normally internal, proboscis, which when everted could resemble an anterior median tentacle if fossilized. Whether retracted or everted, the proboscis is the only structure in pelagic nemerteans likely to fossilize, as it is the only structure with substantial connective tissue and muscle. The body wall has almost no muscle or connective tissue and is exceedingly unlikely to fossilize; hence, a pelagic nemertean fossil would be only the proboscis). Butterfield infers from the appearance of the fossils that the organisms may have lacked a cuticle: while this is also true of the nemerteans, these organisms lack a coelom and are thus unlikely to fossilise. He goes on to argue that the absence of cuticle is characteristic of the chaetognaths; whilst teeth would be expected, a similar fossil, Wiwaxia, shows such structures in only 10% of the expected instances, and anomalocaridids are often found detached from their mouthparts, so the absence may be taphonomic rather than genuine. The absence of spines could simply mean that the fossils represent young organisms, or that later chaetognath evolution involved paedomorphosis.
Two studies published in 2019 redescribed Amiskwia. Vinther and Parry (2019) argued that Amiskwia was a stem-group chaetognath, while Caron and Cheung (2019) suggested that the organism was a total group gnathiferan, based on the presence of gnathiferan-like jaws and ventral plates within the mouth. Its precise affinity within this group is difficult to resolve, they suggested that if it fell in the stem lineage of any extant phylum then it would be a gnathostomulid. A 2022 study supported a stem-chaetognath interpretation, suggesting that gnathiferan-like jaws were lost in the ancestor of chaetognaths. A 2024 study again supported a stem-chaetognath position.
See also
Paleobiota of the Burgess Shale
References
External links
Graphic of Amiskwia in motion
Cambrian invertebrates
Burgess Shale fossils
Maotianshan shales fossils
Chaetognatha
Controversial taxa
Cambrian genus extinctions | Amiskwia | Biology | 1,086 |
21,474,902 | https://en.wikipedia.org/wiki/Xi%20Hydrae | Xi Hydrae, Latinised from ξ Hydrae, is a solitary star in the equatorial constellation of Hydra. It was also given the Flamsteed designation 19 Crateris. This magnitude 3.54 star is situated 130 light-years from Earth and has a radius about 10 times that of the Sun. It is radiating 58 times as much luminosity as the Sun.
Flamsteed gave Xi Hydrae the designation 19 Crateris. He included a number of stars now within the IAU boundaries of Hydra as part of a Hydra & Crater constellation overlapping parts of both modern constellations.
The star Xi Hydrae is particularly interesting in the field of asteroseismology since it shows solar-like oscillations. Multiple frequency oscillations are found with periods between 2.0 and 5.5 hours.
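As a rough illustration (the conversion, not the figures, is the point here), periods of 2.0–5.5 hours correspond to oscillation frequencies of roughly 50–140 µHz:

# Convert the reported oscillation periods (2.0 and 5.5 hours) to frequencies in microhertz.
for period_hours in (2.0, 5.5):
    period_seconds = period_hours * 3600.0
    frequency_uhz = 1.0e6 / period_seconds
    print(f"{period_hours} h -> {frequency_uhz:.0f} uHz")
# prints approximately 139 uHz and 51 uHz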
Xi Hydrae has left the main sequence, having exhausted the supply of hydrogen in its core. Its spectrum is that of a red giant. Modelling its physical properties against theoretical evolutionary tracks shows that it has just reached the foot of the red giant branch for a star with an initial mass around . This puts its age at about .
References
External links
Wisky.org Star Catalogue
ESO Article: The Ultrabass Sounds of the Giant Star Xi Hya.
Listen to Xi Hya (wav).
Hydrae, Xi
Hydra (constellation)
Hydrae, 288
100407
056343
4450
Crateris, 19
Durchmusterung objects
G-type giants | Xi Hydrae | Astronomy | 306 |
10,998,659 | https://en.wikipedia.org/wiki/Organ%20console | The pipe organ is played from an area called the console or keydesk, which holds the manuals (keyboards), pedals, and stop controls. In electric-action organs, the console is often movable. This allows for greater flexibility in placement of the console for various activities. Some very large organs, such as the van den Heuvel organ at the Church of St. Eustache in Paris, have more than one console, enabling the organ to be played from several locations depending on the nature of the performance.
Controls at the console called stops select which ranks of pipes are used. These controls are generally either draw knobs (or stop knobs), which engage the stops when pulled out from the console; stop tablets (or tilting tablets) which are hinged at their far end; or rocker-tablets, which rock up and down on a central axle.
Different combinations of stops change the timbre of the instrument considerably. The selection of stops is called the registration. On modern organs, the registration can be changed instantaneously with the aid of a combination action, usually featuring pistons. Pistons are buttons that can be pressed by the organist to change registrations; they are generally found between the manuals or above the pedalboard. In the latter case they are called toe studs or toe pistons (as opposed to thumb pistons). Most large organs have both preset and programmable pistons, with some of the couplers repeated for convenience as pistons and toe studs. Programmable pistons allow comprehensive and rapid control over changes in registration.
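As an illustration of the idea (the class and stop names below are invented for the sketch, not taken from any real console's control system), a combination action can be modeled as a mapping from piston numbers to stored sets of stop names:

# Minimal model of programmable pistons: each piston stores one registration.
class CombinationAction:
    def __init__(self):
        self.presets = {}                      # piston number -> set of stop names

    def set_piston(self, piston, stops):
        self.presets[piston] = set(stops)      # capture the current registration

    def press(self, piston):
        # Pressing a piston instantly replaces the current registration.
        return self.presets.get(piston, set())

action = CombinationAction()
action.set_piston(1, {"Open Diapason 8", "Principal 4"})
action.set_piston(2, {"Open Diapason 8", "Principal 4", "Fifteenth 2", "Mixture IV"})
print(sorted(action.press(2)))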
Newer organs in the 2000s may have multiple levels of solid-state memory, allowing each piston to be programmed more than once. This allows more than one organist to store their own registrations. Many newer consoles also feature MIDI, which allows the organist to record performances. It also allows an external keyboard to be plugged in, which assists in tuning and maintenance.
Organization of console controls
The layout of an organ console is not standardized, but most organs follow historic conventions for the country and style of organ, so that the layout of stops and pistons is broadly predictable. The stops controlling each division (see Keyboards) are grouped together. Within these, the standard arrangement is for the lowest sounding stops (32 ft or 16 ft) to be placed at the bottom of the columns, with the higher pitched stops placed above them (8 ft, 4 ft, 2⅔ ft, 2 ft, etc.); the mixtures are placed above this (II, III, V, etc.). The stops controlling the reed ranks are placed collectively above these in the same order as above, often with the stop engraving in red. In a horizontal row of stop tabs, a similar arrangement would be applied left to right rather than bottom to top. Among stops of the same pitch, louder stops are generally placed below softer ones (so an Open Diapason would be placed towards the bottom and a Dulciana towards the top), but this is less predictable since it depends on the exact stops available and the space available to arrange stop knobs.
Thus, an example stop configuration for a Great division would follow this pattern (the illustrative stop list is not reproduced here).
The standard position for these columns of stops (assuming drawknobs are used) is for the Choir or Positive division to be on the outside of the player's right, with the Great nearer the center of the console and the music rest. On the left hand side, the Pedal division is on the outside, with the Swell to the inside. Other divisions can be placed on either side, depending on the amount of space available. Manual couplers and octave extensions are placed either within the stop knobs of the divisions that they control, or grouped together above the uppermost manual. The pistons, if present, are placed directly under the manual they control.
To be more historically accurate, organs built along historical models will often use older schemes for organizing the keydesk controls.
Keyboards
The organ is played with at least one keyboard, with configurations featuring from two to five keyboards being the most common. A keyboard to be played by the hands is called a manual (from the Latin manus, "hand"); an organ with four keyboards is said to have four manuals. Most organs also have a pedalboard, a large keyboard to be played by the feet. (Note that the keyboards are never actually referred to as "keyboards", but as "manuals" and "pedalboard", as the case may be.)
The collection of ranks controlled by a particular manual is called a division. The names of the divisions of the organ vary geographically and stylistically. Common names for divisions are:
Great, Swell, Choir, Solo, Orchestral, Echo, Antiphonal (English-speaking countries)
(Germany)
(France)
(the Netherlands)
Like the arrangement of stops, the keyboard divisions are also arranged in a common order. Taking the English names as an example, the main manual (the bottom manual on two-manual instruments or the middle manual on three-manual instruments) is traditionally called the Great, and the upper manual is called the Swell. If there is a third manual, it is usually the Choir and is placed below the Great. (The name "Choir" is a corruption of "Chair", as this division initially came from the practice of placing a smaller, self-contained, organ at the rear of the organist's bench. This is also why it is called a Positif which means portable organ.) If it is included, the Solo manual is usually placed above the Swell. Some larger organs contain an Echo or Antiphonal division, usually controlled by a manual placed above the Solo. German and American organs generally use the same configuration of manuals as English organs. On French instruments, the main manual (the Grand Orgue) is at the bottom, with the Positif and the Récit above it. If there are more manuals, the Bombarde is usually above the Récit and the Grand Choeur is below the Grand Orgue or above the Bombarde.
In addition to names, the manuals may be numbered with Roman numerals, starting from the bottom. Organists will frequently mark a part in their music with the number of the manual they intend to play it on, and this is sometimes seen in the original composition, typically in pieces written when organs were smaller and only had two or three manuals. It is also common to see couplers labeled as "II to I" (see Couplers below).
In some cases, an organ contains more divisions than it does manuals. In these cases, the extra divisions are called floating divisions and are played by coupling them to another manual. Usually this is the case with Echo/Antiphonal and Orchestral divisions, and sometimes it is seen with Solo and Bombarde divisions.
Although manuals are almost always horizontal, organs with three or more manuals may incline the uppermost manuals towards the organist to make them easier to reach.
Many new chamber organs and harpsichords today feature transposing keyboards, which can slide up or down one or more semitones. This allows these instruments to be played with Baroque instruments at a′ = 415 Hz, modern instruments at a′ = 440 Hz, or Renaissance instruments at a′ = 466 Hz. Modern organs are typically tuned in equal temperament, in which every semitone is 100 cents wide. Many organs that are built today following historical models are still tuned to historically appropriate temperaments.
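These three pitch standards sit roughly one equal-tempered semitone apart, which can be checked with the standard frequency-ratio formula f = 440 · 2^(n/12); the short sketch below is purely illustrative.

# Frequencies one equal-tempered semitone below and above a' = 440 Hz.
A440 = 440.0
for name, semitones in (("Baroque", -1), ("modern", 0), ("Renaissance", +1)):
    print(f"{name}: {A440 * 2 ** (semitones / 12):.1f} Hz")
# prints approximately 415.3 Hz, 440.0 Hz and 466.2 Hz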
The range (compass) of the keyboards on an organ has varied widely between different time periods and different nationalities. Portative organs may have a range of only an octave or two, while a few large organs, such as the Boardwalk Hall Auditorium Organ, may have some manual keyboards approaching the size of a modern piano. German organs of the seventeenth and eighteenth centuries featured manual ranges from C to f and pedal ranges from C to d, though some organs only had manual ranges that extended down to F. Many French organs of this period had pedal ranges that went down to AA (though this ravalement applied only to the reeds, and may have only included the low AA, not AA-sharp or BB). French organs of the nineteenth century typically had manual ranges from C to g and pedal ranges from C to f; in the twentieth century the manual range was extended to a. The modern console specification recommended by the American Guild of Organists calls for manual keyboards with sixty-one notes (five octaves, from C to c) and pedal keyboards with thirty-two notes (two and a half octaves, from C to g). These ranges apply to the notes written on the page; depending on the registration, the actual range of the instrument may be much greater.
Enclosure and expression pedals
On most organs, at least one division will be enclosed. On a two-manual (Great and Swell) organ, this will be the Swell division (from where the name comes); on larger organs often part, or all of, the Choir and Solo divisions will be enclosed as well.
Enclosure is the term for the device that allows volume control (crescendo and diminuendo) for a manual without the addition or subtraction of stops. All the pipes for the division are surrounded by a box-like structure (often simply called the swell box). One side of the box, usually that facing the console or the listener, will be constructed from vertical or horizontal palettes (wooden flaps) which can be opened or closed from the console. This works in a similar fashion to a Venetian blind. When the box is 'open' it allows more sound to be heard than if it were 'closed'.
The most common form of controlling the level of sound released from the enclosed box is by the use of a balanced expression pedal. This is usually placed above the centre of the pedalboard, rotating away from the organist from a near vertical position ("shut") to a near horizontal position ("open"). Unlike a car accelerator pedal, a balanced expression pedal remains in whatever position it was last moved to.
Historically, the enclosure was operated by the use of the ratchet swell lever, a spring-loaded lever that locks into two or three positions controlling the opening of the shutters. Many ratchet swell devices were replaced by the more advanced balanced pedal because it allows the enclosure to be left at any point, without having to keep a foot on the lever.
In addition, an organ may have a crescendo pedal, which would be found to the right of any expression pedals, and similarly balanced. Applying the crescendo pedal will incrementally activate the majority of the stops in the organ, starting with the softest stops and ending with the loudest, excluding only a handful of specialized stops that serve no purpose in a full ensemble. The order in which the stops are activated is usually preset by the organ builder and the crescendo pedal serves as a quick way for the organist to get to a registration that will sound attractive at a given volume without choosing a particular registration, or simply to get to full organ.
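In effect, the crescendo pedal selects a prefix of a builder-defined stop order. The sketch below is a simplified illustration; the stop names and their order are invented, not taken from any particular instrument.

# Builder's preset crescendo order, from softest to loudest (invented example).
CRESCENDO_ORDER = ["Dulciana 8", "Stopped Diapason 8", "Principal 4",
                   "Fifteenth 2", "Mixture IV", "Trumpet 8"]

def crescendo_stops(pedal_position):
    # pedal_position runs from 0.0 (closed) to 1.0 (fully open).
    count = round(pedal_position * len(CRESCENDO_ORDER))
    return CRESCENDO_ORDER[:count]

print(crescendo_stops(0.5))   # roughly the softer half of the list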
Most organs also have a piston and/or toe-stud labeled "Tutti" or "Sforzando" that activates full organ.
Couplers
A device called a coupler allows the pipes of one division to be played simultaneously from an alternative manual. For example, a coupler labelled "Swell to Great" allows the stops of the Swell division to be played by the Great manual. It is unnecessary to couple the pipes of a division to the manual of the same name (for example, coupling the Great division to the Great manual), because those stops play by default on that manual (though this is done with super- and sub-couplers, see below). By using the couplers, the entire resources of an organ can be played simultaneously from one manual. On a mechanical-action organ, a coupler may connect one division's manual directly to the other, actually moving the keys of the first manual when the second is played.
Some organs feature a device to add the octave above or below what is being played by the fingers. The "super-octave" adds the octave above, the "sub-octave" the octave below. These may be attached to one division only, for example "Swell octave" (the super is often assumed), or they may act as a coupler, for example "Swell octave to Great" which gives the effect while playing on the Great division of adding the Swell division an octave above what is being played. These can be used in conjunction with the standard eight foot coupler. The super-octave may be labelled, for example, Swell to Great 4 ft; in the same manner, the sub-octave may be labelled Choir to Great 16 ft.
The inclusion of these couplers allows for greater registrational flexibility and color. Some literature (particularly romantic literature from France) calls explicitly for octaves aigües (super-couplers) to add brightness, or octaves graves (sub-couplers) to add gravity. Some organs feature extended ranks to accommodate the top and bottom octaves when the super- and sub-couplers are engaged (see the discussion under "Unification and extension").
In a similar vein are unison off couplers, which act to "turn off" the stops of a division on its own keyboard. For example, a coupler labelled "Great unison off" would keep the stops of the Great division from sounding, even if they were pulled. Unison off couplers can be used in combination with super- and sub-couplers to create complex registrations that would otherwise not be possible. In addition, the unison off couplers can be used with other couplers to change the order of the manuals at the console: engaging the Great to Choir and Choir to Great couplers along with the Great unison off and Choir unison off couplers would have the effect of moving the Great to the bottom manual and the Choir to the middle manual.
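The combined effect of unison, octave and unison-off couplers can be described as a small routing rule: each key pressed on a manual is forwarded to a set of (division, transposition) pairs. The sketch below is an illustrative model only, with an invented coupler list, and is not a description of any particular builder's action.

# Each entry forwards notes from a source manual to a target division,
# optionally transposed by +/-12 semitones (super-/sub-octave couplers).
COUPLERS = [
    ("Great", "Great", 0),     # the Great manual playing its own division
    ("Great", "Swell", 0),     # Swell to Great
    ("Great", "Swell", 12),    # Swell to Great 4 ft (super-octave coupler)
]

def sounding_notes(manual, midi_note, unison_off=()):
    # Return (division, note) pairs that sound when a key is pressed on `manual`.
    result = []
    for source, target, shift in COUPLERS:
        if source != manual:
            continue
        if target == source and shift == 0 and source in unison_off:
            continue                     # "unison off" silences the manual's own division
        result.append((target, midi_note + shift))
    return result

print(sounding_notes("Great", 60))                        # middle C on the Great
print(sounding_notes("Great", 60, unison_off=("Great",)))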
Divided pedal
Another form of coupler found on some large organs is the divided pedal. This is a device that allows the sounds played on the pedals to be split, so the lower octave (principally that of the left foot) plays stops from the pedal division while the upper half (played by the right foot), plays stops from one of the manual divisions. The choice of manual is at the discretion of the performer, as is the 'split point' of the system.
The system can be found on the organs of Gloucester Cathedral (added by Nicholson & Co (Worcester) Ltd/David Briggs) and Truro Cathedral (added by Mander Organs/David Briggs), as well as on the new nave console of Ripon Cathedral. The system as found in Truro Cathedral operates like this:
Divided Pedal (adjustable dividing point): A# B c c# d d#
under the 'divide': Pedal stops and couplers
above the 'divide': four illuminated controls: Choir/Swell/Great/Solo to Pedal
This allows four different sounds to be played at once (without thumbing down across manuals), for example:
Right hand: Great principals 8 ft and 4 ft
Left hand: Swell strings
Left foot: Pedal 16 ft and 8 ft flutes and Swell to Pedal coupler
Right foot: Solo Clarinet via divided pedal coupler
Notes and references
Pipe organ components
Musical instrument parts and accessories | Organ console | Technology | 3,145 |
526,618 | https://en.wikipedia.org/wiki/Psychological%20abuse | Psychological abuse, often known as emotional abuse or mental abuse or psychological violence or non-physical abuse, is a form of abuse characterized by a person subjecting or exposing another person to a behavior that may result in psychological trauma, including anxiety, chronic depression, clinical depression or post-traumatic stress disorder amongst other psychological problems.
It is often associated with situations of power imbalance in abusive relationships, and may include bullying, gaslighting, and abuse in the workplace, amongst other behaviors that may cause an individual to feel unsafe. It may also be perpetrated by persons conducting torture, other violence, or acute or prolonged human rights abuses, particularly without legal redress, such as detention without trial, false accusations, false convictions, and extreme defamation, such as that perpetrated by state and media.
General definition
Clinicians and researchers have offered different definitions of psychological abuse. According to current research the terms "psychological abuse" and "emotional abuse" can be used interchangeably, unless associated with psychological violence. More specifically, "emotional abuse" is any abuse that is emotional rather than physical. It can include anything from verbal abuse and constant criticism to more subtle tactics such as intimidation, manipulation, and refusal to ever be pleased. This abuse occurs when someone uses words or actions to try and control the other person, to keep someone afraid or isolated, or try to break someone's self-esteem.
Emotional abuse can take several forms. Three general patterns of abusive behavior include aggressing, denying, and minimizing; "Withholding is another form of denying. Withholding includes refusing to listen, to communicate, and emotionally withdrawing as punishment." Even though there is no single established definition of emotional abuse, its scope extends beyond verbal and psychological abuse. Blaming, shaming, and name calling are a few verbally abusive behaviors that can affect a victim emotionally. The victim's self-worth and emotional well-being are altered and even diminished by the verbal abuse, resulting in an emotionally abused victim.
The victim may experience severe psychological effects. This would involve the tactics of brainwashing, which can fall under psychological abuse as well, but emotional abuse consists of the manipulation of the victim's emotions. The victim may feel their emotions are being affected by the abuser to such an extent that the victim may no longer recognize their own feelings regarding the issues the abuser is trying to control. The result is the victim's self-concept and independence are systematically taken away.
The U.S. Department of Justice defines emotionally abusive traits as causing fear by intimidation, threatening physical harm to self, partner, children, or partner's family or friends, destruction of pets and property, and forcing isolation from family, friends, or school or work. More subtle emotionally abusive behaviors include insults, putdowns, arbitrary and unpredictable behavior, and gaslighting (e.g. the denial that previous abusive incidents occurred). Modern technology has led to new forms of abuse, by text messaging and online cyber-bullying.
In 1996, Health Canada argued that emotional abuse is "based on power and control", and defines emotional abuse as including rejecting, degrading, terrorizing, isolating, corrupting/exploiting and "denying emotional responsiveness" as characteristic of emotional abuse.
Several studies have argued that an isolated incident of verbal aggression, dominant conduct or jealous behaviors does not constitute the term "psychological abuse". Rather, it is defined by a pattern of such behaviors, unlike physical and sexual maltreatment where only one incident is necessary to label it as abuse. Tomison and Tucci write, "emotional abuse is characterized by a climate or pattern of behavior(s) occurring over time ... Thus, 'sustained' and 'repetitive' are the crucial components of any definition of emotional abuse." Andrew Vachss, an author, attorney, and former sex crimes investigator, defines emotional abuse as "the systematic diminishment of another. It may be intentional or subconscious (or both), but it is always a course of conduct, not a single event."
Prevalence
Intimate relationships
When discussing the different types of psychological abuse in terms of domestically violent relationships, it is important to recognize the four different types: Denigrating Damage to Partner's Self-Image or Esteem, Passive Aggressive Withholding of Emotional Support, Threatening Behavior, and Restricting Personal Territory and Freedom:
Denigrating Damage refers to an individual using verbal aggression like yelling towards their partner that is delivered as profane and derogatory.
Passive Aggressive Withholding of Emotional Support refers to an individual intentionally avoiding and withdrawing themselves from their partner in an attempt to be neglectful and emotionally abandoning.
Threatening Behavior refers to an individual making verbal threats towards their partner that could imply eliciting physical harm, threats of divorce, lying, and threats of reckless behavior that could put their safety at risk.
Restricting Personal Territory and Freedom refers to the isolation of social support from family and friends. This could include taking away partner's autonomy and having a lack of personal boundaries.
It has been reported that at least 80% of women who have entered the criminal justice system due to partner violence have also experienced psychological abuse from their partner. This partner violence is also known as domestic abuse.
Domestic abuse—defined as chronic mistreatment in marriage, families, dating, and other intimate relationships—can include emotionally abusive behavior. Although psychological abuse does not always lead to physical abuse, physical abuse in domestic relationships is nearly always preceded and accompanied by psychological abuse. Murphy and O'Leary reported that psychological aggression is the most reliable predictor of later physical aggression.
A 2012 review by Capaldi et al., which evaluated risk factors for intimate partner violence (IPV), noted that psychological abuse has been shown to be both associated with and common in IPV. High levels of verbal aggression and relationship conflict, "practically akin to psychological aggression", strongly predicted IPV; male jealousy in particular was associated with female injuries from IPV.
Attempts to define and describe violence and abuse in hetero-normative intimate relationships can become contentious as different studies present different conclusions about whether men or women are the primary instigators. For instance, a 2005 study by Hamel reports that "men and women physically and emotionally abuse each other at equal rates." Basile found that psychological aggression was effectively bidirectional in cases where heterosexual and homosexual couples went to court for domestic disturbances.
A 2007 study of Spanish college students aged 18–27 found that psychological aggression (as measured by the Conflict Tactics Scale) is so pervasive in dating relationships that it can be regarded as a normalized element of dating, and that women are substantially more likely to exhibit psychological aggression. Similar findings have been reported in other studies. Strauss et al. found that female intimate partners in heterosexual relationships were more likely than males to use psychological aggression, including threats to hit or throw an object.
A study of young adults by Giordano et al. found that females in intimate heterosexual relationships were more likely than males to threaten to use a knife or gun against their partner. While studies allege that women use violence in intimate relationships as often or more often than men, women's violence is typically self-defensive rather than aggressive.
In 1996, the National Clearinghouse on Family Violence, for Health Canada, reported that 39% of married women or common-law wives suffered emotional abuse by husbands/partners; and in a 1995 survey of women aged 15 and over, 36–43% reported emotional abuse during childhood or adolescence, and 39% experienced emotional abuse in marriage/dating; this report does not address boys or men suffering emotional abuse from families or intimate partners. A BBC radio documentary on domestic abuse, including emotional maltreatment, reports that 20% of men and 30% of women have been abused by a spouse or other intimate partner.
Child emotional abuse
Psychological abuse of a child is commonly defined as a pattern of behavior by parents or caregivers that can seriously interfere with a child's cognitive, emotional, psychological, or social development. According to the DSM-5, Child Psychological Abuse is defined as verbal or symbolic acts given by parent or caregiver which can result in significant psychological harm. Examples are yelling, comparing to others, name-calling, blaming, gaslighting, manipulating, and normalizing abuse due to the status of being underage.
Some parents may emotionally and psychologically harm their children because of stress, poor parenting skills, social isolation, and lack of available resources or inappropriate expectations of their children. Straus and Field report that psychological aggression is a pervasive trait of American families: "verbal attacks on children, like physical attacks, are so prevalent as to be just about universal." A 2008 study by English, et al. found that fathers and mothers were equally likely to be verbally aggressive towards their children.
Elder emotional abuse
Choi and Mayer performed a study on elder abuse (causing harm or distress to an older person), with results showing that 10.5% of the participants were victims of "emotional/psychological abuse", which was most often perpetrated by a son or other relative of the victim. Of 1288 cases in 2002–2004, 1201 individuals, 42 couples, and 45 groups were found to have been abused. Of these, 70% were female. Psychological abuse (59%) and material/financial abuse (42%) were the most frequently identified types of abuse. One study found that the overall prevalence rate of abused elderly in Hong Kong was 21.4%. Out of this percentage, 20.8% reported being verbally abused.
Workplace
Rates of reported emotional abuse in the workplace vary, with studies showing 10%, 24%, and 36% of respondents indicating persistent and substantial emotional abuse from coworkers.
Keashly and Jagatic found that males and females commit "emotionally abusive behaviors" in the workplace at roughly similar rates. In a web-based survey, Namie found that women were more likely to engage in workplace bullying, such as name calling, and that the average length of abuse was 16.5 months.
Pai and Lee found that the incidence of workplace violence typically occurs more often in younger workers. "Younger age may be a reflection of lack of job experience, resulting in [an inability] to identify or prevent potentially abusive situations... Another finding showed that lower education is a risk factor for violence." This study also reports that 51.4% of the workers surveyed have already experienced verbal abuse, and 29.8% of them have encountered workplace bullying and mobbing.
Characteristics of abusers
In their review of data from the Dunedin Multidisciplinary Health and Development Study (a longitudinal birth cohort study) Moffitt et al. report that while men exhibit more aggression overall, sex is not a reliable predictor of interpersonal aggression, including psychological aggression.
The DARVO study found that no matter what gender a person is, aggressive people share a cluster of traits, including high rates of suspicion and jealousy; sudden and drastic mood swings; poor self-control; and higher than average rates of approval of violence and aggression. Moffitt et al. also argue that antisocial men exhibit two distinct types of interpersonal aggression (one against strangers, the other against intimate female partners), while antisocial women are rarely aggressive against anyone other than intimate male partners or their own children.
Abusers may aim to avoid household chores or exercise total control of family finances. Abusers can be very manipulative, often recruiting friends, law officers and court officials, and even the victim's family to their side, while shifting blame to the victim. A victim may internalize the abuse and may form future relationships with abusers.
Effects
Abuse of intimate partners
Most victims of psychological abuse within intimate relationships often experience changes to their psyche and actions. This varies throughout the various types and lengths of emotional abuse. Long-term emotional abuse has long term debilitating effects on a person's sense of self and integrity. Often, research shows that emotional abuse is a precursor to physical abuse when three particular forms of emotional abuse are present in the relationship: threats, restriction of the abused party and damage to the victim's property.
Psychological abuse is often not recognized by survivors of domestic violence as abuse. A study of college students by Goldsmith and Freyd report that many who have experienced emotional abuse do not characterize the mistreatment as abusive. Additionally, Goldsmith and Freyd show that these people also tend to exhibit higher than average rates of alexithymia (difficulty identifying and processing their own emotions). This is often the case when referring to victims of abuse within intimate relationships, as non-recognition of the actions as abuse may be a coping or defense mechanism in order to either seek to master, minimize or tolerate stress or conflict.
Marital or relationship dissatisfaction can be caused by psychological abuse or aggression. In a 2007 study, Laurent et al. report that psychological aggression in young couples is associated with decreased satisfaction for both partners: "psychological aggression may serve as an impediment to couples' development because it reflects less mature coercive tactics and an inability to balance self/other needs effectively." In a 2008 study on relationship dissatisfaction in adolescents Walsh and Shulman explain, "The more psychologically aggressive females were, the less satisfied were both partners. The unique importance of males' behavior was found in the form of withdrawal, a less mature conflict negotiation strategy. Males' withdrawal during joint discussions predicted increased satisfaction."
There are many different responses to psychological abuse. Jacobson et al. found that women report markedly higher rates of fear during marital conflicts. However, a rejoinder argued that Jacobson's results were invalid due to men and women's drastically differing interpretations of questionnaires. Coker et al. found that the effects of mental abuse were similar whether the victim was male or female. A 1998 study of male college students by Simonelli & Ingram found that men who were emotionally abused by their female partners exhibited higher rates of chronic depression than the general population. Pimlott-Kubiak and Cortina found that severity and duration of abuse were the only accurate predictors of after effects of abuse; sex of perpetrator or victim were not reliable predictors.
Abuse of children
The effects of psychological abuse on children can involve a variety of mental health concerns such as post-traumatic stress disorder, major depressive disorder, personality disorders, low self-esteem, aggression, anxiety, and emotional unresponsiveness. These effects can be exemplified by the constant criticism, regular living with threats, or being rejected, that can be exemplified by withholding love and support as well as not having any guidance from the guardians of the children.
English et al. report that children specifically whose families are characterized by interpersonal violence, including psychological aggression and verbal aggression, may exhibit these disorders. Additionally, English et al. report that the impact of emotional abuse "did not differ significantly" from that of physical abuse. Johnson et al. report that, in a survey of female patients, 24% suffered emotional abuse, and that this group experienced higher rates of gynecological problems. In their study of men emotionally abused by a wife/partner or parent, Hines and Malley-Morrison report that victims exhibit high rates of post-traumatic stress disorder and drug addiction, including alcoholism.
Glaser reports, "An infant who is severely deprived of basic emotional nurturance, even though physically well cared for, can fail to thrive and can eventually die. Babies with less severe emotional deprivation can grow into anxious and insecure children who are slow to develop and who have low self-esteem." Glaser also informs that the abuse impacts the child in a number of ways, especially on their behavior, including: "insecurity, poor self-esteem, destructive behavior, angry acts (such as fire setting and animal cruelty), withdrawal, poor development of basic skills, alcohol or drug abuse, suicide, difficulty forming relationships and unstable job histories."
Oberlander et al. performed a study which discovered that among the youth, those with a history of maltreatment showed that emotional distress is a predictor of early initiation of sexual intercourse. Oberlander et al. state, "A childhood history of maltreatment, including... psychological abuse, and neglect, has been identified as a risk factor for early initiation of sexual intercourse ... In families where child maltreatment had occurred, children were more likely to experience heightened emotional distress and subsequently to engage in sexual intercourse by age 14. It is possible that maltreated youth feel disconnected from families that did not protect them and subsequently seek sexual relationships to gain support, seek companionship, or enhance their standing with peers." It is apparent that psychological abuse sustained during childhood is a predictor of the onset of sexual conduct occurring earlier in life, as opposed to later.
Abuse in the workplace
Psychological abuse has been found present within the workplace as evidenced by previous research. Namie's study of workplace emotional abuse found that 31% of women and 21% of men who reported workplace emotional abuse exhibited three key symptoms of post-traumatic stress disorder (hypervigilance, intrusive imagery, and avoidance behaviors). The most common psychological, professional, financial, and social effects of sexual harassment and retaliation are as follows:
Psychological stress and health impairment, loss of motivation.
Decreased work or school performance as a result of stressful conditions; increased absenteeism in fear of harassment repetition.
Having to drop courses, change academic plans, or leave school (loss of tuition) in fear of harassment repetition or as a result of stress.
Being objectified and humiliated by scrutiny and gossip.
Loss of trust in environments similar to where the harassment occurred.
Loss of trust in the types of people that occupy similar positions as the harasser or their colleagues, especially in cases where they are not supportive, difficulties or stress on peer relationships, or relationships with colleagues.
Effects on sexual life and relationships: can put extreme stress upon relationships with significant others, sometimes resulting in divorce.
Weakening of support network, or being ostracized from professional or academic circles (friends, colleagues, or family may distance themselves from the victim, or shun him or her altogether).
Depression, anxiety or panic attacks.
Sleeplessness or nightmares, difficulty concentrating, headaches, fatigue.
Eating disorders (weight loss or gain), alcoholism, and feeling powerless or out of control.
Abuse of the elderly
Elderly who have suffered psychological abuse have been found to experience similar outcomes as other population groups such as depression, anxiety, feelings of isolation and neglect, and powerlessness. One study examined 355 Chinese elderly participants (60 and older) and found that 75% of reported abusers were grown-up children of the elderly. Within this study, these individuals suffered outcomes from the abuse, specifically verbal abuse which contributed to their psychological distress.
Prevention
In intimate relationships
Recognition of abuse is the first step to prevention. It is often difficult for abuse victims to acknowledge their situation and to seek help. For those who do seek help, research has shown that people who participate in an intimate partner violence prevention program report less psychological aggression toward their targets of psychological abuse, and reported victimization from psychological abuse decreased over time for the treatment group.
There are non-profit organizations that provide support and prevention services such as the National Domestic Violence Hotline, The Salvation Army, and Benefits.gov.
In the family
Child abuse in the sole form of emotional/psychological maltreatment is often the most difficult to identify and prevent, as government organizations, such as Child Protective Services in the US, are often the only method of intervention, and such an institution "must have demonstrable evidence that harm to a child has been done before they can intervene. Due to this a lot of victims may stay in the care of their abuser. Since emotional abuse doesn't result in physical evidence such as bruising or malnutrition, it can be very hard to diagnose." Some researchers have, however, begun to develop methods to diagnose and treat such abuse, including the ability to: identify risk factors, provide resources to victims and their families, and ask appropriate questions to help identify the abuse.
In the workplace
The majority of companies within the United States provide access to a human resources department, in which to report cases of psychological/emotional abuse. Also, many managers are required to participate in conflict management programs, in order to ensure the workplace maintains an "open and respectful atmosphere, with tolerance for diversity and where the existence of interpersonal frustration and friction is accepted but also properly managed." Organizations must adopt zero-tolerance policies for professional verbal abuse. Education and coaching are needed to help employees to improve their skills when responding to professional-to-professional verbal abuse.
Popular perceptions
Several studies found double standards in how people tend to view emotional abuse by men versus emotional abuse by women. Follingstad et al. found that, when rating hypothetical vignettes of psychological abuse in marriages, professional psychologists tend to rate male abuse of females as more serious than identical scenarios describing female abuse of males: "the stereotypical association between physical aggression and males appears to extend to an association of psychological abuse and males".
Similarly, Sorenson and Taylor randomly surveyed a group of Los Angeles, California residents for their opinions of hypothetical vignettes of abuse in heterosexual relationships. Their study found that abuse committed by women, including emotional and psychological abuse such as controlling or humiliating behavior, was typically viewed as less serious or detrimental than identical abuse committed by men. Additionally, Sorenson and Taylor found that respondents had a broader range of opinions about female perpetrators, representing a lack of clearly defined mores when compared to responses about male perpetrators.
When considering the emotional state of psychological abusers, psychologists have focused on aggression as a contributing factor. While it is typical for people to consider males to be the more aggressive of the two sexes, researchers have studied female aggression to help understand psychological abuse patterns in situations involving female abusers. According to Walsh and Shulman, "The higher rates of female initiated aggression [including psychological aggression] may result, in part, from adolescents' attitudes about the unacceptability of male aggression and the relatively less negative attitudes toward female aggression". This concept that females are raised with fewer restrictions on aggressive behaviors (possibly due to the anxiety over aggression being focused on males) is a possible explanation for women who utilize aggression when being mentally abusive.
Some researchers have become interested in discovering exactly why women are usually not considered to be abusive. Hamel's 2007 study found that a "prevailing patriarchal conception of intimate partner violence" led to a systematic reluctance to study women who psychologically and physically abuse their male partners. These findings state that existing cultural norms show males as more dominant and are therefore more likely to begin abusing their significant partners.
Dutton found that men who are emotionally or physically abused often encounter victim blaming that erroneously presumes the man either provoked or deserved the mistreatment by their female partners. Similarly, domestic violence victims will often blame their own behavior, rather than the violent actions of the abuser. Victims may try continually to alter their behavior and circumstances in order to please their abuser. Often, this results in further dependence of the individual on their abuser, as they may often change certain aspects of their lives that limit their resources. A 2002 study concluded that emotional abusers frequently aim to exercise total control of different aspects of family life. This behavior is only supported when the victim of the abuse aims to please their abuser.
Many abusers are able to control their victims in a manipulative manner, utilizing methods to persuade others to conform to the wishes of the abuser, rather than to force them to do something they do not wish to do. Simon argues that because aggression in abusive relationships can be carried out subtly and covertly through various manipulation and control tactics, victims often do not perceive the true nature of the relationship until conditions worsen considerably.
Cultural causes
A researcher in 1988 said that wife abuse stems from "normal psychological and behavioral patterns of most men ... feminists seek to understand why men, in general, use physical force against their partners and what functions this serves for a society in a given historical context". Dobash and Dobash (1979) said that "Men who assault their wives are living up to cultural prescriptions that are cherished in Western society--aggressiveness, male dominance and female subordination--and they are using physical force as a means to enforce that dominance," while Walker claims that men exhibit a "socialized androcentric need for power".
While some women are aggressive and dominating to male partners, a 2003 report concluded that the majority of abuse in heterosexual partnerships, at about 80% in the US, is perpetrated by men. (Critics stress that this Department of Justice study examines crime figures, and does not specifically address domestic abuse figures. While the categories of crime and domestic abuse may cross-over, many instances of domestic abuse are either not regarded as crimes or reported to police—critics thus argue that it is inaccurate to regard the DOJ study as a comprehensive statement on domestic abuse.) A 2002 study reports that ten percent of violence in the UK, overall, is by females against males. However, more recent data specifically regarding domestic abuse (including emotional abuse) report that 3 in 10 women, and 1 in 5 men, have experienced domestic abuse.
One source said that legal systems have in the past endorsed these traditions of male domination, and it is only in recent years that abusers have begun to be punished for their behavior. In 1879, a Harvard University law scholar wrote, "The cases in the American courts are uniform against the right of the husband to use any chastisement, moderate or otherwise, toward the wife, for any purpose."
While recognizing that researchers have done valuable work and highlighted neglected topics, critics suggest that the male cultural domination hypothesis for abuse is untenable as a generalized explanation for numerous reasons:
A 1989 study concluded that many variables (racial, ethnic, cultural and subcultural, nationality, religion, family dynamics, and mental illness) make it very difficult or impossible to define male and female roles in any meaningful way that apply to the entire population.
A 1995 study concluded that disagreements about power-sharing in relationships are more strongly associated with abuse than are imbalances of power.
Peer-reviewed studies have produced inconsistent results when directly examining patriarchal beliefs and wife abuse. Yllo and Straus (1990) said that "low status" women in the United States suffered higher rates of spousal abuse; however, a rejoinder argued that Yllo and Straus's interpretive conclusions were "confusing and contradictory". Smith (1990) estimated that patriarchal beliefs were a causative factor for only 20% of wife abuse. Campbell (1993) writes that "there is not a simple linear correlation between female status and rates of wife assault." Other studies had similar findings. Additionally, a 1994 study of Hispanic Americans revealed that traditionalist men exhibited lower rates of abuse towards women.
Studies from the 1980s showed that treatment programs based on the patriarchal privilege model are flawed due to a weak connection between abusiveness and one's cultural or social attitudes.
A 1992 study challenged the concept that male abuse or control of women is culturally sanctioned, and concluded that abusive men are widely viewed as unsuitable partners for dating or marriage. A 1988 study concluded that only a minority of abusive men qualify as pervasively misogynistic. A 1986 study concluded that the majority of men who commit spousal abuse agree that their behavior was inappropriate. A 1970 study concluded that only a minority of men approve of spousal abuse under even limited circumstances. Studies from the 1970s and 1980s concluded that the majority of men are non-abusive towards girlfriends or wives for the duration of their relationships, contrary to predictions that aggression or abuse towards women is an innate element of masculine culture.
In 1994, a researcher said that the numerous studies establishing that heterosexual and gay male relationships have lower rates of abuse than lesbian relationships, and the fact that women who have been involved with both men and women were more likely to have been abused by a woman "are difficult to explain in terms of male domination." Additionally, Dutton said that "patriarchy must interact with psychological variables in order to account for the great variation in power-violence data. It is suggested that some forms of psychopathology lead to some men adopting patriarchal ideology to justify and rationalize their own pathology."
A 2010 study said that fundamentalist views of religions tend to reinforce emotional abuse, and that "Gender inequity is usually translated into a power imbalance with women being more vulnerable. This vulnerability is more precarious in traditional patriarchal societies."
In the Book of Genesis God specifically punishes women after Adam and Eve disobey Him: "in sorrow, thou shalt bring forth children: and thy desire shall be to thy husband, and he shall rule over thee"; God also condemns Adam to a lifetime of work, for the sin of listening to his wife.
Some studies say that fundamentalist religious prohibitions against divorce may make it more difficult for religious men or women to leave an abusive marriage. A 1985 survey of Protestant clergy in the United States by Jim M Alsdurf found that 21% of them agreed that "no amount of abuse would justify a woman's leaving her husband, ever", and 26% agreed with the statement that "a wife should submit to her husband and trust that God would honor her action by either stopping the abuse or giving her the strength to endure it." A 2016 report by the Muslim Women's Network UK cited several barriers for Muslim women in abusive marriages who seek divorce through Sharia Council services. These barriers include: selectively quoting religious text to discourage divorce; blaming the woman for the failed marriage; placing greater weight on the husband's testimony; requiring the woman to present two male witnesses; and pressuring women into mediation or reconciliation rather than granting a divorce, even when domestic violence is present.
Various forms of abuse are increasingly being recognized around the world, although the level of awareness varies from one country to another. Both victims and perpetrators are being acknowledged more readily than in past eras.
See also
References
External links
Harassment and bullying | Psychological abuse | Biology | 6,167 |
21,406,724 | https://en.wikipedia.org/wiki/U-Key | A U-Key is an implementation of the MIFARE RFID chip, encased in a plastic key style housing. It is used as a prepayment system on vending machines and for some self-service diving air compressors in Switzerland; the keys are most likely made by Selecta.
References
Smart cards
Radio-frequency identification
Automatic identification and data capture | U-Key | Technology,Engineering | 78 |
483,108 | https://en.wikipedia.org/wiki/Ultrastructure | Ultrastructure (or ultra-structure) is the architecture of cells and biomaterials that is visible at higher magnifications than found on a standard optical light microscope. This traditionally meant the resolution and magnification range of a conventional transmission electron microscope (TEM) when viewing biological specimens such as cells, tissue, or organs. Ultrastructure can also be viewed with scanning electron microscopy and super-resolution microscopy, although TEM is a standard histology technique for viewing ultrastructure. Such cellular structures as organelles, which allow the cell to function properly within its specified environment, can be examined at the ultrastructural level.
Ultrastructure, along with molecular phylogeny, is a reliable phylogenetic way of classifying organisms. Features of ultrastructure are used industrially to control material properties and promote biocompatibility.
History
In 1931, German engineers Max Knoll and Ernst Ruska invented the first electron microscope. With the development and invention of this microscope, the range of observable structures that were able to be explored and analyzed increased immensely, as biologists became progressively interested in the submicroscopic organization of cells. This new area of research concerned itself with substructure, also known as the ultrastructure.
Applications
Many scientists use ultrastructural observations to study the following, including but not limited to:
Human Tumors
Chloroplasts
Bone
Platelets
Sperm
Biology
A common ultrastructural feature found in plant cells is the formation of calcium oxalate crystals. It has been theorized that these crystals function to store calcium within the cell until it is needed for growth or development.
Calcium oxalate crystals can also form in animals, and kidney stones are a form of these ultrastructural features. Theoretically, nanobacteria could be used to decrease the formation of calcium oxalate kidney stones.
Engineering
Controlling ultrastructure has engineering uses for controlling the behavior of cells. Cells respond readily to changes in their extracellular matrix (ECM), so manufacturing materials to mimic ECM allows for increased control over the cell cycle and protein expression.
Many organisms, such as plants, produce calcium oxalate crystals within their cells, and these crystals are usually considered ultrastructural components of plant cells. Calcium oxalate is used to manufacture ceramic glazes and also has biomaterial properties. The crystal is found in fetal bovine serum, which is used for culturing cells and in tissue engineering, and it is an important component of the extracellular matrix of cultured cells.
Ultrastructure is an important factor to consider when engineering dental implants. Since these devices interface directly with bone, their incorporation into the surrounding tissue is necessary for optimal device function. It has been found that applying a load to a healing dental implant allows for increased osseointegration with facial bones. Analyzing the ultrastructure surrounding an implant is useful in determining how biocompatible it is and how the body reacts to it. One study found that implanting granules of a biomaterial derived from pig bone caused the human body to incorporate the material into its ultrastructure and form new bone.
Hydroxyapatite is a biomaterial used to interface medical devices directly with bone at the ultrastructural level. Grafts can be created along with β-tricalcium phosphate, and it has been observed that surrounding bone tissue will incorporate the new material into its extracellular matrix. Hydroxyapatite is a highly biocompatible material, and its ultrastructural features, such as crystalline orientation, can be controlled carefully to ensure optimal biocompatibility. Proper crystal fiber orientation can make introduced minerals, like hydroxyapatite, more similar to the biological materials they are intended to replace. Controlling ultrastructural features makes obtaining specific material properties possible.
References
External links
Electron microscopy
Cell anatomy | Ultrastructure | Chemistry | 802 |
2,057,204 | https://en.wikipedia.org/wiki/Thorin%20%28chemistry%29 | Thorin (also called thoron or thoronol) is an indicator used in the determination of barium, beryllium, lithium, uranium and thorium compounds. Being a compound of arsenic, it is highly toxic.
References
External links
MSDS at Oxford University
Azo compounds
Naphthalenesulfonates
Organic sodium salts
2-Naphthols
Titration
Arsonic acids | Thorin (chemistry) | Chemistry | 81 |
2,733,186 | https://en.wikipedia.org/wiki/Prelog%20strain | In organic chemistry, transannular strain (also called Prelog strain after chemist Vladimir Prelog) is the unfavorable interactions of ring substituents on non-adjacent carbons. These interactions, called transannular interactions, arise from a lack of space in the interior of the ring, which forces substituents into conflict with one another. In medium-sized cycloalkanes, which have between 8 and 11 carbons constituting the ring, transannular strain can be a major source of the overall strain, especially in some conformations, to which there is also contribution from large-angle strain and Pitzer strain. In larger rings, transannular strain drops off until the ring is sufficiently large that it can adopt conformations devoid of any negative interactions.
Transannular strain can also be demonstrated in other cyclo-organic molecules, such as lactones, lactams, ethers, cycloalkenes, and cycloalkynes. These compounds are not without significance, since they are particularly useful in the study of transannular strain. Furthermore, transannular interactions are not relegated to only conflicts between hydrogen atoms, but can also arise from larger, more complicated substituents interacting across a ring.
Thermodynamics
By definition, strain implies instability, so molecules with large amounts of transannular strain should have higher energies than those without. Cyclohexane is essentially strain-free and is therefore quite stable and low in energy. Rings smaller than cyclohexane, such as cyclopropane and cyclobutane, have significant strain caused by small-angle strain, but no transannular strain. Medium-sized rings have no small-angle strain, but they do exhibit large-angle strain. Rings with more than nine members take on some angle and torsional strain to relieve part of the strain caused by transannular interactions.
The relative energies of cycloalkanes increase as ring size increases, reaching a peak at cyclononane (nine members in the ring). Beyond this point, the flexibility of the rings increases with increasing size, which allows for conformations that can significantly mitigate transannular interactions.
Kinetics
Rates of reaction can be affected by the size of rings. Each reaction should essentially be studied on a case-by-case basis, but some general trends have been seen. Molecular mechanics calculations of the strain energy difference ΔSI between the sp2 and sp3 states in cycloalkanes show linear correlations with the rates (as log k) of many reactions involving a transition between sp2 and sp3 states, such as ketone reduction, alcohol oxidation, or nucleophilic substitution; in these correlations the contribution of transannular strain is below 3%.
Rings with transannular strain have faster SN1, SN2, and free radical reactions compared to most smaller and normal sized rings. Five membered rings show an exception to this trend. On the other hand, some nucleophilic addition reactions involving addition to a carbonyl group in general show the opposite trend. Smaller and normal rings, with five membered rings being the anomaly, have faster reaction rates while those with transannular strain are slower.
One specific example is a study of the rates of an SN1 reaction. Rings of various sizes, ranging from four to seventeen members, were used to compare relative rates and to better understand the effect of transannular strain on this reaction. The solvolysis reaction in acetic acid involved the formation of a carbocation as the chloride ion left the cyclic molecule. This study fits the general trend noted above: rings with transannular strain show increased reaction rates compared to smaller rings in SN1 reactions.
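The linear correlation described in this section can be expressed as log k_rel ≈ a·ΔSI + b and recovered by a simple least-squares fit. The sketch below only illustrates that fitting procedure; the numerical values in the arrays are placeholders, not data from the cited studies.

```python
import numpy as np

# Linear free-energy-style relationship: log10(k_rel) ≈ a * ΔSI + b,
# where ΔSI is the calculated sp2/sp3 strain-energy difference.
# The arrays below are illustrative placeholders, not experimental data.
delta_si = np.array([-1.0, 0.5, 2.0, 3.5, 5.0])   # kcal/mol (hypothetical)
log_k_rel = np.array([-0.8, 0.3, 1.4, 2.6, 3.7])  # log relative rate (hypothetical)

# Ordinary least-squares fit of the correlation line.
slope, intercept = np.polyfit(delta_si, log_k_rel, 1)
print(f"slope = {slope:.2f}, intercept = {intercept:.2f}")
```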
Examples of transannular strain
Influence on regioselectivity
The regioselectivity of water elimination is highly influenced by ring size. When water is eliminated from cyclic tertiary alcohols by an E1 route, three major products are formed. The semicyclic isomer (so-called because the double bond is shared by a ring atom and an exocyclic atom) and the (E) endocyclic isomer are expected to predominate; the (Z) endocyclic isomer is not expected to be formed until the ring size is large enough to accommodate the awkward angles of the trans configuration. The exact population of each product relative to the others differs considerably depending upon the size of the ring involved. As the ring size increases, the semicyclic isomer decreases rapidly and the (E) endocyclic isomer increases, but after a certain point, the semicyclic isomer begins to increase again. This can be attributed to transannular strain; this strain is significantly reduced in the (E) endocyclic isomer because it has one less substituent in the ring than the semicyclic isomer.
Influence on medium-sized ring synthesis
One of the effects of transannular strain is the difficulty of synthesizing medium-sized rings. Illuminati et al. have studied the kinetics of intramolecular ring closing using the simple nucleophilic substitution reaction of ortho-bromoalkoxyphenoxides. Specifically, they studied the ring closing of 5 to 10 carbon cyclic ethers. They found that as the number of carbons increased, so did the enthalpy of activation for the reaction. This indicates that strain within the cyclic transition states is higher if there are more carbons in the ring. Since transannular strain is the largest source of strain in rings this size, the larger enthalpies of activation result in much slower cyclizations due to transannular interactions in the cyclic ethers.
Influence of bridges on transannular strain
Transannular strain can be eliminated by the simple addition of a carbon bridge. E,Z,E,Z,Z-[10]-annulene is quite unstable; while it has the requisite number of π-electrons to be aromatic, they are for the most part isolated. Ultimately, the molecule itself is very difficult to observe. However, by the simple addition of a methylene bridge between the 1 and 6 positions, a stable, flat, aromatic molecule can be made and observed.
References
External links
Prelog strain definition: Link
Stereochemistry | Prelog strain | Physics,Chemistry | 1,349 |
19,566,626 | https://en.wikipedia.org/wiki/Erythrism | Erythrism or erythrochroism refers to an unusual reddish pigmentation of an animal's hair, skin, feathers, or eggshells.
Causes of erythrism include:
Genetic mutations which cause an absence of a normal pigment and/or excessive production of others
Diet, as in bees feeding on "bright red corn syrup" used in maraschino cherry manufacturing.
Erythrism in katydids has been occasionally observed. The coloring might be a camouflage that helps some members of the species survive on red plants. There is also consensus that the erythristic mutation is actually a dominant trait among katydid species, albeit a disadvantageous one, due to the overwhelmingly green coloration of most foliage. Hence, most pink or otherwise vividly colored katydids do not survive to adulthood, and this observation explains their rarity. Erythrism in leopards is rare, but one study reported that two of twenty-eight leopards seen in camera traps in a South African nature reserve were erythristic, and the authors found records of five other "strawberry" leopards from the region.
Gallery
See also
Albinism
Amelanism
Dyschromia
Heterochromia iridum
Leucism
Melanism
Piebaldism
Red hair
Vitiligo
Xanthochromism
References
External links
The Mystery of the Red Bees of Red Hook, The New York Times, November 30, 2010
Rare Pink Katydid Discovered in Northern Illinois, Chicago Tribune, August 10, 2011
Another Nice Example of Erythrism: Grasshopper, August 28, 2009
Erythrism: Grasshopper in New Zealand, Rod Morris, 2010
Pink Animal Amazingness, Paula Kashtan, lemondrop.com, December 18, 2008
Disturbances of pigmentation
Genetic disorders with no OMIM
Dermatologic terminology | Erythrism | Biology | 379 |
8,030,336 | https://en.wikipedia.org/wiki/Differentiated%20security | Differentiated security is a form of computer security that deploys a range of different security policies and mechanisms according to the identity and context of a user or transaction.
This makes it much more difficult to scale or replicate attacks, since each cluster/individual has a different security profile and there should be no common weaknesses.
One way of achieving this is by subdividing the population into small differentiated clusters. At the extreme, each individual belongs to a different class.
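As a hedged illustration of the idea, the sketch below assigns one of several distinct security profiles based on a hash of the user's identity and context. The policy fields, cluster count, and hashing scheme are assumptions made purely for illustration and are not drawn from any particular product or standard.

```python
import hashlib
from dataclasses import dataclass

@dataclass
class SecurityPolicy:
    # Illustrative policy parameters; a real deployment would use richer profiles.
    min_password_length: int
    session_timeout_minutes: int
    require_mfa: bool

# Hypothetical differentiated clusters: each cluster enforces a different profile,
# so a weakness found in one profile does not automatically apply to the others.
POLICIES = [
    SecurityPolicy(min_password_length=12, session_timeout_minutes=30, require_mfa=True),
    SecurityPolicy(min_password_length=14, session_timeout_minutes=15, require_mfa=True),
    SecurityPolicy(min_password_length=10, session_timeout_minutes=60, require_mfa=False),
]

def policy_for(user_id: str, context: str) -> SecurityPolicy:
    """Deterministically map a user and transaction context to one cluster's policy."""
    digest = hashlib.sha256(f"{user_id}:{context}".encode()).digest()
    return POLICIES[digest[0] % len(POLICIES)]

print(policy_for("alice", "mobile-banking"))
```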
See also
Differentiated service (design pattern)
External links
Differentiated security in wireless networks Andreas Johnsson, 2002.
Computer security procedures | Differentiated security | Engineering | 117 |
7,076,870 | https://en.wikipedia.org/wiki/Myc | Myc is a family of regulator genes and proto-oncogenes that code for transcription factors. The Myc family consists of three related human genes: c-myc (MYC), l-myc (MYCL), and n-myc (MYCN). c-myc (also sometimes referred to as MYC) was the first gene to be discovered in this family, due to homology with the viral gene v-myc.
In cancer, c-myc is often constitutively (persistently) expressed. This leads to the increased expression of many genes, some of which are involved in cell proliferation, contributing to the formation of cancer. A common human translocation involving c-myc is critical to the development of most cases of Burkitt lymphoma. Constitutive upregulation of Myc genes have also been observed in carcinoma of the cervix, colon, breast, lung and stomach.
Myc is thus viewed as a promising target for anti-cancer drugs. Unfortunately, Myc possesses several features that have rendered it difficult to drug to date, such that any anti-cancer drugs aimed at inhibiting Myc may continue to require perturbing the protein indirectly, such as by targeting the mRNA for the protein rather than via a small molecule that targets the protein itself.
c-Myc also plays an important role in stem cell biology and was one of the original Yamanaka factors used to reprogram somatic cells into induced pluripotent stem cells.
In the human genome, C-myc is located on chromosome 8 and is believed to regulate expression of 15% of all genes through binding on enhancer box sequences (E-boxes).
In addition to its role as a classical transcription factor, N-myc may recruit histone acetyltransferases (HATs). This allows it to regulate global chromatin structure via histone acetylation.
Discovery
The Myc family was first established after the discovery of homology between an oncogene carried by the avian myelocytomatosis virus (v-myc) and a human gene over-expressed in various cancers, cellular Myc (c-Myc). Later, the discovery of further homologous genes in humans led to the addition of n-Myc and l-Myc to the family of genes.
The most frequently discussed example of c-Myc as a proto-oncogene is its implication in Burkitt's lymphoma. In Burkitt's lymphoma, cancer cells show chromosomal translocations, most commonly between chromosome 8 and chromosome 14 [t(8;14)]. This causes c-Myc to be placed downstream of the highly active immunoglobulin (Ig) promoter region, leading to overexpression of Myc.
Structure
The protein products of Myc family genes all belong to the Myc family of transcription factors, which contain bHLH (basic helix-loop-helix) and LZ (leucine zipper) structural motifs. The bHLH motif allows Myc proteins to bind with DNA, while the leucine zipper TF-binding motif allows dimerization with Max, another bHLH transcription factor.
Myc mRNA contains an IRES (internal ribosome entry site) that allows the RNA to be translated into protein when 5' cap-dependent translation is inhibited, such as during viral infection.
Function
Myc proteins are transcription factors that activate expression of many pro-proliferative genes through binding enhancer box sequences (E-boxes) and recruiting histone acetyltransferases (HATs). Myc is thought to function by upregulating transcript elongation of actively transcribed genes through the recruitment of transcriptional elongation factors. It can also act as a transcriptional repressor. By binding Miz-1 transcription factor and displacing the p300 co-activator, it inhibits expression of Miz-1 target genes. In addition, myc has a direct role in the control of DNA replication. This activity could contribute to DNA amplification in cancer cells.
Myc is activated upon various mitogenic signals such as serum stimulation or by Wnt, Shh and EGF (via the MAPK/ERK pathway).
By modifying the expression of its target genes, Myc activation results in numerous biological effects. The first to be discovered was its capability to drive cell proliferation (upregulates cyclins, downregulates p21), but it also plays a very important role in regulating cell growth (upregulates ribosomal RNA and proteins), apoptosis (downregulates Bcl-2), differentiation, and stem cell self-renewal. Nucleotide metabolism genes are upregulated by Myc, which are necessary for Myc induced proliferation or cell growth.
There have been several studies that have clearly indicated Myc's role in cell competition.
A major effect of c-myc is B cell proliferation, and gain of MYC has been associated with B cell malignancies and their increased aggressiveness, including histological transformation. In B cells, Myc acts as a classical oncogene by regulating a number of pro-proliferative and anti-apoptotic pathways; this also includes tuning of BCR signaling and CD40 signaling in the regulation of microRNAs (miR-29, miR-150, miR-17-92).
c-Myc induces MTDH(AEG-1) gene expression and in turn itself requires AEG-1 oncogene for its expression.
Myc-nick
Myc-nick is a cytoplasmic form of Myc produced by a partial proteolytic cleavage of full-length c-Myc and N-Myc. Myc cleavage is mediated by the calpain family of calcium-dependent cytosolic proteases.
The cleavage of Myc by calpains is a constitutive process but is enhanced under conditions that require rapid downregulation of Myc levels, such as during terminal differentiation. Upon cleavage, the C-terminus of Myc (containing the DNA binding domain) is degraded, while Myc-nick, the N-terminal 298-residue segment, remains in the cytoplasm. Myc-nick contains binding domains for histone acetyltransferases and for ubiquitin ligases.
The functions of Myc-nick are currently under investigation, but this new Myc family member was found to regulate cell morphology, at least in part, by interacting with acetyl transferases to promote the acetylation of α-tubulin. Ectopic expression of Myc-nick accelerates the differentiation of committed myoblasts into muscle cells.
Clinical significance
A large body of evidence shows that Myc genes and proteins are highly relevant for treating tumors. Except for early response genes, Myc universally upregulates gene expression. Furthermore, the upregulation is nonlinear. Genes for which expression is already significantly upregulated in the absence of Myc are strongly boosted in the presence of Myc, whereas genes for which expression is low in the absence Myc get only a small boost when Myc is present.
Inactivation of SUMO-activating enzyme (SAE1 / SAE2) in the presence of Myc hyperactivation results in mitotic catastrophe and cell death in cancer cells. Hence inhibitors of SUMOylation may be a possible treatment for cancer.
Amplification of the MYC gene was found in a significant number of epithelial ovarian cancer cases. In TCGA datasets, the amplification of Myc occurs in several cancer types, including breast, colorectal, pancreatic, gastric, and uterine cancers.
In the experimental transformation process of normal cells into cancer cells, the MYC gene can cooperate with the RAS gene.
Expression of Myc is highly dependent on BRD4 function in some cancers. BET inhibitors have been used to successfully block Myc function in pre-clinical cancer models and are currently being evaluated in clinical trials.
MYC expression is controlled by a wide variety of noncoding RNAs, including miRNA, lncRNA, and circRNA. Some of these RNAs have been shown to be specific for certain types of human tissues and tumors. Changes in the expression of such RNAs can potentially be used to develop targeted tumor therapy.
MYC rearrangements
MYC chromosomal rearrangements (MYC-R) occur in 10% to 15% of diffuse large B-cell lymphomas (DLBCL), an aggressive non-Hodgkin lymphoma (NHL). Patients with MYC-R have inferior outcomes and can be classified as single-hit, when they carry only a MYC-R; double-hit, when the rearrangement is accompanied by a translocation of BCL2 or BCL6; and triple-hit, when the MYC-R is accompanied by translocations of both BCL2 and BCL6. Double- and triple-hit lymphomas have recently been classified as high-grade B-cell lymphoma (HGBCL) and are associated with a poor prognosis.
MYC-R in DLBCL/HGBCL is believed to arise through the aberrant activity of activation-induced cytidine deaminase (AICDA), which facilitates somatic hypermutation (SHM) and class-switch recombination (CSR). Although AICDA primarily targets IG loci for SHM and CSR, its off-target mutagenic effects can impact lymphoma-associated oncogenes like MYC, potentially leading to oncogenic rearrangements. The breakpoints in MYC rearrangements show considerable variability within the MYC region. These breakpoints may occur within the so-called “genic cluster,” a region spanning approximately 1.5 kb upstream of the transcription start site, as well as the first exon and intron of MYC.
Fluorescence in situ hybridization (FISH) has become routine practice in many clinical laboratories for lymphoma characterization. A break-apart (BAP) FISH probe is commonly utilized for the detection of MYC-R because of the variability of breakpoints in the MYC locus and the diversity of rearrangement partners, including immunoglobulin (IG) and non-IG partners (i.e. BCL2/BCL6). The MYC BAP probe includes a red and a green probe which hybridize 5' and 3' to the MYC gene, respectively. In an intact MYC locus, these probes yield a fusion signal. When a MYC-R occurs, two types of signals can be observed:
Balanced patterns: These patterns present separate red and green signals.
Unbalanced patterns: An isolated red or green signal is observed in the absence of the corresponding partner signal. Unbalanced MYC-R are frequently associated with increased MYC expression.
There is large variability among laboratories in the interpretation of unbalanced MYC BAP results, which can impact the diagnostic classification and therapeutic management of patients.
Animal models
In Drosophila, Myc is encoded by the diminutive locus (which was known to geneticists prior to 1935). Classical diminutive alleles resulted in a viable animal with small body size. Drosophila has subsequently been used to implicate Myc in cell competition, endoreplication, and cell growth.
During the discovery of the Myc gene, it was realized that chromosomes that reciprocally translocate to chromosome 8 contained immunoglobulin genes at the break-point. To study the mechanism of tumorigenesis in Burkitt lymphoma by mimicking the expression pattern of Myc in these cancer cells, transgenic mouse models were developed. The Myc gene placed under the control of the IgM heavy-chain enhancer in transgenic mice gives rise mainly to lymphomas. Later on, in order to study the effects of Myc in other types of cancer, transgenic mice that overexpress Myc in different tissues (liver, breast) were also made. In all these mouse models overexpression of Myc causes tumorigenesis, illustrating the potency of the Myc oncogene.
In a study with mice, reduced expression of Myc was shown to induce longevity, with significantly extended median and maximum lifespans in both sexes, a reduced mortality rate across all ages, better health, slower cancer progression, and improved metabolism; the mice also had smaller bodies. They showed less TOR, AKT and S6K signalling and other changes in energy and metabolic pathways (such as AMPK), along with higher oxygen consumption and more body movement. The study by John M. Sedivy and others used Cre-Lox recombinase to knock out one copy of Myc, resulting in a haploinsufficient genotype noted as Myc+/-. The phenotypes seen oppose the effects of normal aging and are shared with many other long-lived mouse models such as calorie restriction (CR), Ames dwarf, rapamycin, metformin and resveratrol. One study found that the Myc and p53 genes were key to the survival of chronic myeloid leukaemia (CML) cells. Targeting Myc and p53 proteins with drugs gave positive results in mice with CML.
Relationship to stem cells
Myc genes play a number of normal roles in stem cells including pluripotent stem cells. In neural stem cells, N-Myc promotes a rapidly proliferative stem cell and precursor-like state in the developing brain, while inhibiting differentiation. In hematopoietic stem cells, Myc controls the balance between self-renewal and differentiation. In particular, long-term hematopoietic stem cells (LT-HSCs) express low levels of c-Myc, ensuring self-renewal. Enforced expression of c-Myc in LT-HSCs promotes differentiation at the expense of self-renewal, resulting in stem cell exhaustion. In pathological states and specifically in acute myeloid leukemia, oxidant stress can trigger higher levels of Myc expression that affects the behavior of leukemia stem cells.
c-Myc plays a major role in the generation of induced pluripotent stem cells (iPSCs). It is one of the original factors discovered by Yamanaka et al. to encourage cells to return to a 'stem-like' state alongside transcription factors Oct4, Sox2 and Klf4. It has since been shown that it is possible to generate iPSCs without c-Myc.
Interactions
Myc has been shown to interact with:
ACTL6A
BRCA1
Bcl-2
Cyclin T1
CHD8
DNMT3A
EP400
GTF2I
HTATIP
let-7
MAPK1
MAPK8
MAX
MLH1
MYCBP2
MYCBP
NMI
NFYB
NFYC
P73
PCAF
PFDN5
RuvB-like 1
SAP130
SMAD2
SMAD3
SMARCA4
SMARCB1
SUPT3H
TIAM1
TADA2L
TAF9
TFAP2A
TRRAP
WDR5
YY1 and
ZBTB17.
C2orf16
See also
Myc-tag
C-myc mRNA
References
Further reading
External links
InterPro signatures for protein family: , ,
The Myc Protein
NCBI Human Myc protein
Myc cancer gene
Generating iPS Cells from MEFS through Forced Expression of Sox-2, Oct-4, c-Myc, and Klf4
Drosophila Myc - The Interactive Fly
PDBe-KB provides an overview of all the structure information available in the PDB for Human Myc proto-oncogene protein
Oncogenes
Transcription factors
Human proteins | Myc | Chemistry,Biology | 3,358 |
28,004,040 | https://en.wikipedia.org/wiki/Kolster-Brandes | Kolster-Brandes Ltd was an American owned, British manufacturer of radio and television sets based in Foots Cray, Sidcup, Kent.
History
The company was a descendant of Brandes, a Canadian company founded in Toronto in 1908. Brandes became part of AT&T in 1922 and a British subsidiary Brandes Ltd. was established in Slough, in 1924, to manufacture headphones. The company rapidly expanded producing a range of loud speakers and in 1928 moved to a former silk mill at Foots Cray. The company was renamed Kolster-Brandes Ltd. after the American parent company merged with the Kolster Radio Corporation. In 1930 the company supplied 40,000 of its Masterpiece two-valve, bakelite cabinet radios to the Godfrey Phillips tobacco company, who gave them away to customers in exchange for cigarette coupons. K-B also began a long association with Cunard after they won a contract to provide communications equipment for the ocean liner. In 1938 K-B became part of ITT's British subsidiary STC. The Foots Cray site was also shared by Brimar, another STC company founded in 1933 to manufacture American pattern valves for the British market.
In 1960–61 STC took over Ace, Argosy, Regentone and RGD. In 1968 the KB name was changed to ITT KB, and between 1973 and 1974 the KB was dropped from the logo altogether, after which sets were made only under the ITT label.
Products
Kolster-Brandes later went on to make mid-range electronics such as radios, radiograms, televisions, tape recorders, amplifiers and gramophones.
KB made a large number of radios and radiograms, a few models of which were the 285, 422 Cavalcade, 666 and the CG20.
The company also made a popular selection of record players which include the Playtime, Gaytime, Dancetime, Tunetime and Rhythm, the last two of which are valve operated.
References
Electronics companies of the United Kingdom
Defunct manufacturing companies of the United Kingdom
Radio manufacturers | Kolster-Brandes | Engineering | 425 |
40,407,271 | https://en.wikipedia.org/wiki/Bitruncated%20tesseractic%20honeycomb | In four-dimensional Euclidean geometry, the bitruncated tesseractic honeycomb is a uniform space-filling tessellation (or honeycomb) in Euclidean 4-space. It is constructed by a bitruncation of a tesseractic honeycomb. It is also called a cantic quarter tesseractic honeycomb from its q2{4,3,3,4} construction.
Other names
Bitruncated tesseractic tetracomb (batitit)
Related honeycombs
See also
Regular and uniform honeycombs in 4-space:
Tesseractic honeycomb
Demitesseractic honeycomb
24-cell honeycomb
Truncated 24-cell honeycomb
Snub 24-cell honeycomb
5-cell honeycomb
Truncated 5-cell honeycomb
Omnitruncated 5-cell honeycomb
Notes
References
Kaleidoscopes: Selected Writings of H. S. M. Coxeter, edited by F. Arthur Sherk, Peter McMullen, Anthony C. Thompson, Asia Ivic Weiss, Wiley-Interscience Publication, 1995,
(Paper 24) H.S.M. Coxeter, Regular and Semi-Regular Polytopes III, [Math. Zeit. 200 (1988) 3-45] See p318
George Olshevsky, Uniform Panoploid Tetracombs, Manuscript (2006) (Complete list of 11 convex uniform tilings, 28 convex uniform honeycombs, and 143 convex uniform tetracombs)
x3x3x *b3o *b3o, x3x3x *b3o4o, o3x3o *b3x4o, o4x3x3o4o - batitit - O92
5-polytopes
Honeycombs (geometry)
Bitruncated tilings | Bitruncated tesseractic honeycomb | Physics,Chemistry,Materials_science | 374 |
65,557,712 | https://en.wikipedia.org/wiki/Fenner%20Medal | The Fenner Medal, named after the Australian virologist Frank Fenner, is awarded each year by The Australian Academy of Science for distinguished research in biology (excluding the biomedical sciences) by a scientist up to 10 years post-PhD in the calendar year of nomination.
The award is restricted to Australian residents or for biologists whose research was conducted mainly in Australia.
Recipients
Source: Fenner Medal Awardees Australian Academy of Science
See also
List of academic awards
References
Biology awards
Australian Academy of Science Awards
Awards established in 2000 | Fenner Medal | Technology | 105 |
22,656,247 | https://en.wikipedia.org/wiki/Double-skin%20facade | The double-skin façade is a system of building consisting of two skins, or façades, placed in such a way that air flows in the intermediate cavity. The ventilation of the cavity can be natural, fan supported or mechanical. Apart from the type of the ventilation inside the cavity, the origin and destination of the air can differ depending mostly on climatic conditions, the use, the location, the occupational hours of the building and the HVAC strategy.
The glass skins can be single or double glazing units with a distance from 20 cm up to 2 metres. Often, for protection and heat extraction reasons during the cooling period, solar shading devices are placed inside the cavity.
History
The essential concept of the double-skin facade was first explored and tested by the Swiss-French architect Le Corbusier in the early 20th century. His idea, which he called mur neutralisant (neutralizing wall), involved the insertion of heating/cooling pipes between large layers of glass. Such a system was employed in his Villa Schwob (La Chaux-de-Fonds, Switzerland, 1916), and proposed for several other projects, including the League of Nations competition (1927), Centrosoyuz building (Moscow, 1928–33), and Cité du Refuge (Paris, 1930). American engineers studying the system in 1930 informed Le Corbusier that it would use much more energy than a conventional air system, but Harvey Bryan later concluded Le Corbusier's idea had merit if it included solar heating.
Another early experiment was the 1937 Alfred Loomis house by architect William Lescaze in Tuxedo Park, NY. This house included "an elaborate double envelope" with a 2-foot-deep air space conditioned by a separate system from the house itself. The object was to maintain high humidity levels inside.
One of the first modern examples to be constructed was the Occidental Chemical Building (Niagara Falls, New York, 1980) by Cannon Design. This building, essentially a glass cube, included a 4-feet-deep cavity between glass layers to pre-heat air in winter.
The recent resurgence of efficient building design has renewed interest in this concept. Since the USGBC rewards points for reduction in energy consumption vs. a base case, this strategy has been used to optimize energy performance of buildings.
Examples
Examples of notable buildings which utilise a double-skin facade are 30 St Mary Axe (also known as The Gherkin) and 1 Angel Square. Both of these buildings achieve strong environmental credentials for their size, with the benefits of a double skin key to this. The Gherkin features triangular windows on the outer skin which spiral up the skyscraper. These windows open according to weather and building data, allowing more or less air to cross-flow through the building for ventilation.
Technical details
The cavity between the two skins may be either naturally or mechanically ventilated. In cool climates the solar gain within the cavity may be circulated to the occupied space to offset heating requirements, while in hot climates the cavity may be vented out of the building to mitigate solar gain and decrease the cooling load. In each case the assumption is that a higher insulative value may be achieved by using this glazing configuration versus a conventional glazing configuration.
Recent studies showed that the energy performance of a building connected to a double-skin facade can be improved both in the cold and the warm season or in cold and warm climates by optimizing the ventilation strategy of the facade.
Criticisms
The advantages of double-skin facades over conventional single skin facades are not clear-cut; similar insulative values may be obtained using conventional high performance, low-e windows. The cavity results in a decrease in usable floor space, and depending on the strategy for ventilating the cavity, it may have problems with condensation, becoming soiled or introducing outside noise. The construction of a second skin may also present a significant increase in materials and design costs.
Building energy modelling of double-skin facades is inherently more difficult because of varying heat transfer properties within the cavity, making the modeling of energy performance and the prediction of savings debatable.
See also
Climate-adaptive building shell
Rainscreen
References
External links
Ventilated facades (Wandegar)
European Aluminium Association's publications dedicated to Buildings
European Commission's portal for efficient facades
EN 13830: Curtain Walling - Product Standard
Architectural elements
Energy conservation
Types of wall
Solar architecture | Double-skin facade | Technology,Engineering | 909 |
37,754,174 | https://en.wikipedia.org/wiki/%28S%29-iPr-PHOX | (S)-iPr-PHOX, or (S)-2-[2-(diphenylphosphino)phenyl]-4-isopropyl-4,5-dihydrooxazole, is a chiral, bidentate, ligand derived from the amino alcohol valinol. It is part of a broader class of phosphinooxazolines ligands and has found application in asymmetric catalysis.
Preparation
(S)-iPr-PHOX is prepared from the amino alcohol valinol, which is derived from valine. The phosphine moiety may be introduced first, by a reaction between 2-bromobenzonitrile and chlorodiphenylphosphine; the oxazoline ring is then formed in a Witte-Seeliger reaction. This yields an air-stable zinc complex which must be treated with bipyridine in order to obtain the free ligand. Synthesis is performed under argon or nitrogen to avoid contact with air; however, the final product is not air-sensitive.
Uses
Iridium complexes incorporating (S)-iPr-PHOX have been used for asymmetric hydrogenation.
References
Ligands
Phosphines
Oxazolines | (S)-iPr-PHOX | Chemistry | 265 |
43,886,930 | https://en.wikipedia.org/wiki/Disodium%20hydrogen%20arsenate | Disodium hydrogen arsenate is the inorganic compound with the formula Na2HAsO4.7H2O. The compound consists of a salt and seven molecules of water of crystallization although for simplicity the formula usually omits the water component. The other sodium arsenates are NaH2AsO4 and Na3AsO4, the latter being called sodium arsenate. Disodium hydrogen arsenate is highly toxic. The salt is the conjugate base of arsenic acid. It is a white, water-soluble solid.
Being a diprotic acid, its acid-base properties are described by two equilibria:
H2AsO4⁻ + H2O ⇌ HAsO4²⁻ + H3O⁺ (pKa2 = 6.94)
HAsO4²⁻ + H2O ⇌ AsO4³⁻ + H3O⁺ (pKa3 = 11.5)
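As an illustration not drawn from the source, the pKa values above can be combined with the Henderson-Hasselbalch relation to estimate the speciation of the arsenate anions at a chosen pH:

```python
# Henderson-Hasselbalch: pH = pKa + log10([base]/[acid])
# => [base]/[acid] = 10 ** (pH - pKa)
PKA2 = 6.94   # H2AsO4(-)  <=> HAsO4(2-) + H(+)
PKA3 = 11.5   # HAsO4(2-)  <=> AsO4(3-)  + H(+)

def base_to_acid_ratio(ph: float, pka: float) -> float:
    return 10.0 ** (ph - pka)

# At pH 7.4 (chosen here only as an example), HAsO4(2-) slightly dominates
# over H2AsO4(-), while essentially no fully deprotonated AsO4(3-) is present.
print(base_to_acid_ratio(7.4, PKA2))  # ≈ 2.9
print(base_to_acid_ratio(7.4, PKA3))  # ≈ 7.9e-05
```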
Related compounds
Monopotassium arsenate, KH2AsO4
References
Arsenates
Sodium compounds | Disodium hydrogen arsenate | Chemistry | 193 |
3,465,656 | https://en.wikipedia.org/wiki/Plate%20Boundary%20Observatory | The Plate Boundary Observatory (PBO) was the geodetic component of the EarthScope Facility. EarthScope was an Earth science program that explored the 4-dimensional structure of the North American Continent. EarthScope (and PBO) was a 15-year project (2003-2018) funded by the National Science Foundation (NSF) in conjunction with NASA. PBO construction (an NSF MREFC) took place from October 2003 through September 2008. Phase 1 of operations and maintenance concluded in September 2013. Phase 2 of operations ended in September 2018, along with the end of the EarthScope project. In October 2018, PBO was assimilated into a broader Network of the Americas (NOTA), along with networks in Mexico (TLALOCNet) and the Caribbean (COCONet), as part of the NSF's Geodetic Facility for the Advancement of Geosciences (GAGE). GAGE is operated by EarthScope Consortium.
PBO precisely measured Earth deformation resulting from the constant motion of the Pacific and North American tectonic plates in the western United States. These Earth movements can be very small and incremental and not felt by people, or they can be very large and sudden, such as those that occur during earthquakes and volcanic eruptions. The high-precision instrumentation of the PBO enabled detection of motions to a sub-centimeter level. PBO measured Earth deformation through a network of instrumentation including: high precision Global Positioning System (GPS) and Global Navigation Satellite System (GNSS) receivers, strainmeters, seismometers, tiltmeters, and other geodetic instruments.
The PBO GPS network included 1100 stations extending from the Aleutian Islands south to Baja California and eastward across the continental United States. During the construction phase, 891 permanent and continuously operating GPS stations were installed, and another 209 existing stations (PBO Nucleus stations) were integrated into the network. Geodetic imaging data were transmitted, often in real time, from a wide network of GPS stations, augmented by seismometers, strainmeters and tiltmeters, and complemented by InSAR (interferometric synthetic aperture radar), LiDAR (light detection and ranging), and geochronology.
The GPS stations were categorized into clusters. The transform cluster was near the San Andreas Fault in California; the subduction cluster was in the Cascadia subduction zone (northern California, Oregon, Washington, and southern British Columbia); the extension cluster was in the Basin and Range region; the volcanic cluster was in the Yellowstone caldera, the Long Valley caldera, and the Cascade Volcanoes; the backbone cluster was at 100–200 km intervals across the United States to provide complete spatial coverage.
Data from the PBO was, and NOTA data continue to be, transmitted to the GAGE Facility, operated by EarthScope Consortium, to the data center where it is collected, archived and distributed. These data sets continue to be freely and openly available to the public, with equal access provided for all users. PBO data includes the raw data collected from each instrument, quality-checked data in formats commonly used by PBO's various user communities, and processed data such as calibrated time series, velocity fields, and error estimates.
Some scientific questions addressed by the EarthScope project and the PBO data include:
How does accumulated strain lead to earthquakes?
Are there recognizable precursors to earthquakes?
How does the evolution of the continent influence the motions that are happening today?
What happens to geologic structures at depth?
What influences the location of features such as faults and mountain ranges?
Is it inherited from earlier tectonic events or related to deeper processes in the mantle?
How is magma generated? How does it travel from the mantle to reach the surface?
What are the precursors to a volcanic eruption?
References
Global Positioning System
Plate tectonics | Plate Boundary Observatory | Technology,Engineering | 789 |
43,558,540 | https://en.wikipedia.org/wiki/Z-FA-FMK | Z-FA-FMK, abbreviating for benzyloxycarbonyl-phenylalanyl-alanyl-fluoromethyl ketone, is a very potent irreversible inhibitor of cysteine proteases, including cathepsins B, L, and S, cruzain, and papain. It also selectively inhibits effector caspases 2, 3, 6, and 7 but not caspases 8 and 10. This compound has been shown to block the production of IL1-α, IL1-β, and TNF-α induced by LPS in macrophages by inhibiting NF-κB pathways.
References
Protease inhibitors
Carbamates
Organofluorides | Z-FA-FMK | Chemistry,Biology | 157 |
40,125,862 | https://en.wikipedia.org/wiki/Dark%20Ages%20Radio%20Explorer | The Dark Ages Radio Explorer (DARE) is a proposed NASA mission aimed at detecting redshifted line emissions from the earliest neutral hydrogen atoms, formed post-Cosmic Dawn. Emissions from these neutral hydrogen atoms, characterized by a rest wavelength of 21 cm and a frequency of 1420 MHz, offer insights into the formation of the universe's first stars and the epoch succeeding the cosmic Dark Ages. The intended orbiter aims to investigate the universe's state from approximately 80 million years to 420 million years post-Big Bang by capturing the line emissions at their redshifted frequencies originating from that period. Data collected by this mission is expected to shed light on the genesis of the first stars, the rapid growth of the initial black holes, and the universe’s reionization process. Moreover, it would facilitate the testing of computational galaxy formation models. Furthermore, the mission could advance research into dark matter decay and inform the development of lunar surface telescopes, enhancing the exploration of exoplanets around proximate stars.
Background
The epoch between recombination and the emergence of stars and galaxies is termed the "cosmic Dark Ages". In this era, neutral hydrogen predominated the universe's matter composition. While this hydrogen has not yet been directly observed, ongoing experiments aim to detect the characteristic hydrogen line from this period. The hydrogen line arises when an electron in a neutral hydrogen atom transitions between hyperfine states, either by excitation to a state with aligned spins or by de-excitation as the spins move from alignment to anti-alignment. The energy difference between these hyperfine states, about 5.9 × 10⁻⁶ electron volts, equates to a photon with a wavelength of 21 centimeters. When neutral hydrogen attains thermodynamic equilibrium with cosmic microwave background (CMB) photons, a "coupling" occurs, rendering the hydrogen line undetectable. Observation of the hydrogen line is feasible only when there is a temperature discrepancy between the neutral hydrogen and the CMB.
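The energy quoted above follows from the Planck relation E = hν applied to the 1420 MHz line frequency; the short sketch below (not part of the mission documentation) reproduces the figure.

```python
# Photon energy of the 21 cm hydrogen line from the Planck relation E = h * nu.
PLANCK_H = 6.62607015e-34   # Planck constant, J·s
EV_IN_J = 1.602176634e-19   # joules per electron volt

nu_hz = 1420.405751768e6    # hyperfine transition frequency, Hz
energy_ev = PLANCK_H * nu_hz / EV_IN_J
print(f"{energy_ev:.2e} eV")  # ≈ 5.87e-06 eV
```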
Theoretical motivation
In the immediate aftermath of the Big Bang, the universe was characterized by intense heat, density, and near-uniformity. Its subsequent expansion and cooling created conducive conditions for nuclear and atomic formation. Around 400,000 years post-Big Bang, at a redshift of approximately 1100, the cooling of primordial plasma allowed protons and electrons to merge into neutral hydrogen atoms, rendering the universe transparent as photons ceased to interact significantly with matter. These ancient photons are detectable in the present as the cosmic microwave background (CMB). The CMB reveals a universe that remained smooth and homogeneous.
Following the formation of the initial hydrogen atoms, the universe was composed of an almost entirely neutral, uniformly distributed intergalactic medium (IGM), predominantly made up of hydrogen gas. This epoch, devoid of luminous bodies, is referred to as the cosmic Dark Ages. Theoretical models forecast that, over subsequent hundreds of millions of years, gravitational forces gradually compressed the gas into denser regions, culminating in the emergence of the first stars—a milestone known as Cosmic Dawn.
The formation of additional stars and the assembly of the earliest galaxies inundated the universe with ultraviolet photons, which had the potential to ionize hydrogen gas. Several hundred million years post-Cosmic Dawn, the initial stars emitted sufficient ultraviolet photons to reionize the vast majority of hydrogen atoms in the universe. This reionization epoch signifies the IGM’s transition back to a state of near-complete ionization.
Observational studies have not yet explored the universe’s emerging structural complexity. Studying the universe’s earliest structures necessitates a telescope surpassing the capabilities of the Hubble Space Telescope. While theoretical models indicate that current measurements are starting to examine the concluding phase of Reionization, the initial stars and galaxies from the Dark Ages and Cosmic Dawn remain beyond the observational reach of contemporary instruments.
The envisioned DARE mission aims to conduct pioneering measurements of the inception of the first stars and black holes, as well as ascertain the characteristics of hitherto undetectable stellar populations. These observations would contextualize existing data and enhance our comprehension of the developmental processes of the first galaxies from antecedent cosmic structures.
Mission
The DARE mission aims to analyze the spectral profile of the sky-averaged, redshifted 21-cm signal within a 40–120 MHz radio bandpass, targeting neutral hydrogen at redshifts between 11 and 35, corresponding to a period 420 to 80 million years after the Big Bang. DARE's tentative schedule involves a 3-year lunar orbit, focusing on data collection above the Moon's far side, a region considered free of human-made radio frequency interference and substantial ionospheric activity.
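The redshift range quoted for the 40–120 MHz bandpass follows from the standard relation between the observed and rest-frame frequencies of the hydrogen line; the sketch below is only an illustration of that relation, not mission code.

```python
# Redshifted 21 cm line: nu_obs = nu_rest / (1 + z), with nu_rest ≈ 1420.4 MHz.
NU_REST_MHZ = 1420.4

def observed_frequency_mhz(z: float) -> float:
    return NU_REST_MHZ / (1.0 + z)

def redshift_for(nu_obs_mhz: float) -> float:
    return NU_REST_MHZ / nu_obs_mhz - 1.0

# The edges of the DARE bandpass map onto the quoted redshift range.
print(redshift_for(120.0))  # ≈ 10.8 (low-redshift end of the band)
print(redshift_for(40.0))   # ≈ 34.5 (high-redshift end of the band)
```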
The mission’s scientific apparatus, affixed to an RF-quiet spacecraft bus, comprises a three-part radiometer system featuring an electrically short, tapered, biconical dipole antenna, along with a receiver and a digital spectrometer. DARE’s utilization of the antenna’s smooth frequency response and a differential spectral calibration technique is anticipated to mitigate intense cosmic foregrounds, thereby facilitating the detection of the faint cosmic 21-cm signal.
Related initiatives
In addition to the DARE mission, several other initiatives have been proposed to investigate this field. These include the Precision Array for Probing the Epoch of Reionization (PAPER), the Low Frequency Array (LOFAR), the Murchison Widefield Array (MWA), the Giant Metrewave Radio Telescope (GMRT), and the Large Aperture Experiment to Detect the Dark Ages (LEDA).
See also
Reionization
Wouthuysen-Field coupling
References
Further reading
External links
JPL Helps Shoot for the Moon, Stars, Planets and More
Proposed NASA space probes
Cosmic background radiation
Big Bang
Physical cosmology | Dark Ages Radio Explorer | Physics,Astronomy | 1,182 |
10,995,827 | https://en.wikipedia.org/wiki/Lewin%27s%20equation | Lewin's equation, B = f(P, E), is a heuristic formula proposed by psychologist Kurt Lewin as an explanation of what determines behavior.
Description
The formula states that behavior is a function of the person and their environment:
B = f(P, E), where B is behavior, P is the person, and E is the environment.
This equation was first presented in Lewin's book, Principles of Topological Psychology, published in 1936. The equation was proposed as an attempt to unify the different branches of psychology (e.g. child psychology, animal psychology, psychopathology) with a flexible theory applicable to all distinct branches of psychology. This equation is directly related to Lewin's field theory. Field theory is centered around the idea that a person's life space determines their behavior. Thus, the equation was also expressed as B = f(L), where L is the life space. In Lewin's book, he first presents the equation as B = f(S), where behavior is a function of the whole situation (S). He then extended this original equation by suggesting that the whole situation could be roughly split into two parts: the person (P) and the environment (E). According to Lewin, social behavior, in particular, was the most psychologically interesting and relevant behavior.
Lewin held that the variables in the equation (e.g. P and E) could be replaced with the specific, unique situational and personal characteristics of the individual. As a result, he also believed that his formula, while seemingly abstract and theoretical, had distinct concrete applications for psychology.
Gestalt influence
Many scholars (and even Lewin himself) have acknowledged the influence of Gestalt psychology on Lewin's work. Lewin's field theory holds that a number of different and competing forces combine to result in the totality of the situation. A single person's behavior may be different in unique situations, as he or she is acting partly in response to these differential forces and factors (e.g. the environment, or E): "A physically identical environment can be psychologically different even for the same man in different conditions." Similarly, two different individuals placed in exactly the same situation will not necessarily engage in the same behavior. "Even when from the standpoint of the physicist the environment is identical or nearly identical for a child and for an adult, the psychological situation can be fundamentally different." For this reason, Lewin holds that the person (e.g. P) must be considered in conjunction with the environment. P consists of the entirety of a person (e.g. his or her past, present, future, personality, motivations, desires). All elements within P are contained within the life space, and all elements within P interact with each other.
Lewin emphasizes that the desires and motivations within the person and the situation in its entirety, the sum of all these competing forces, combine to form something larger: the life space. This notion speaks directly to the gestalt idea that the "whole is greater than the sum of its parts." The idea that the parts (e.g. P and E) of the whole (e.g. S) combine to form an interactive system has been called Lewin's 'dynamic approach,' a term that specifically refers to regarding "the elements of any situation...as parts of a system."
Interaction of person and environment
Relative importance of P and E
Lewin explicitly stated that either the person or the environment may be more important in particular situations: "Every psychological event depends upon the state of the person and at the same time on the environment, although their relative importance is different in different cases." Thus, Lewin believed he succeeded in creating an applicable theory that was also "flexible enough to do justice to the enormous differences between the various events and organisms." In a sense, he held that it was inappropriate to pick a side on the classic psychological debate of nature versus nurture, as he held that "every scientific psychology must take into account whole situations, i.e., the state of both person and environment." Further, Lewin stated that: "The question whether heredity or environment plays the greater part also belongs to this kind of thinking. The transition of the Galilean thinking involved a recognition of the general validity of the thesis: An event is always the result of the interaction of several facts."
Specific function linking P and E
Lewin defined an empirical law as "the functional relationship between various facts," where facts are the "different characteristics of an event or situation." In Lewin's original proposal of his equation, he did not specify how exactly the person and the environment interact to produce behavior. Some scholars have noted that Lewin's use of the comma in his equation between the P and E represents Lewin's flexibility and receptiveness to multiple ways that these two may interact. Lewin indeed held that the importance of the person or of the environment may vary on a case-by-case basis. The use of the comma may provide the flexibility to support this assertion.
Psychological reality
Lewin differentiates between multiple realities. For example, the psychological reality encompasses everything that an individual perceives and believes to be true. Only what is contained within the psychological reality can affect behavior. In contrast, things that may be outside the psychological reality, such as bits of the physical reality or social reality, have no direct relation to behavior. Lewin states: "The psychological reality...does not depend upon whether or not the content...exists in a physical or social sense....The existence or nonexistence...of a psychological fact are independent of the existence or nonexistence to which its content refers." As a result, the only reality that is contained within the life space is the psychological reality, as this is the reality that has direct consequences for behavior. For example, in Principles of Topological Psychology, Lewin continually reiterates the sentiment that "the physical reality of the object concerned is not decisive for the degree of psychological reality." Lewin refers to the example of a "child living in a 'magic world.'" Lewin asserts that, for this child, the realities of the 'magic world' are a psychological reality, and thus must be considered as an influence on their subsequent behavior, even though this 'magic world' does not exist within the physical reality. Likewise, scholars familiar with Lewin's work have emphasized that the psychological situation, as defined by Lewin, is strictly composed of those facts which the individual perceives or believes.
Principle of contemporaneity
In Lewin's theoretical framework, the whole situation—or the life space, which contains both the person and the environment—is dynamic. In order to accurately determine behavior, Lewin's equation holds that one must consider and examine the life space at the exact moment when the behavior occurred. The life space, even moments after such behavior has occurred, is no longer exactly the same as it was when behavior occurred and thus may not accurately represent the whole situation that led to the behavior in the first place. This focus on the present situation represented a departure from many other theories at the time. Most theories tended to focus on looking at an individual's past in order to explain their present behavior, such as Sigmund Freud's psychoanalysis. Lewin's emphasis on the present state of the life space did not preclude the idea that an individual's past may impact the present state of the life space: "[The] influence of the previous history is to be thought of as indirect in dynamic psychology: From the point of view of systematic causation, past events cannot influence present events. Past events can only have a position in the historical causal chains whose interweavings create the present situation." Lewin referred to this concept as the principle of contemporaneity.
References
Further reading
Helbing, D. (2010). Quantitative Sociodynamics: Stochastic Methods and Models of Social Interaction Processes (2nd ed.). Springer.
Lewin, K. (1943). Defining the "Field at a Given Time." Psychological Review, 50, 292–310.
Lewin, K (1936). Principles of Topological Psychology. New York: McGraw-Hill.
External links
Lewin, Sticky Minds
Psychological theories
Behavioral concepts | Lewin's equation | Biology | 1,729 |
8,119,545 | https://en.wikipedia.org/wiki/Disodium%20citrate | Disodium citrate, also known as disodium hydrogen citrate, (Neo-Alkacitron) and sesquihydrate, is an acid salt of citric acid with the chemical formula . It is used as an antioxidant in food and to improve the effects of other antioxidants. It is also used as an acidity regulator and sequestrant. Typical products include gelatin, jam, sweets, ice cream, carbonated beverages, milk powder, wine, and processed cheeses.
Uses
Food
It is used as an antioxidant in food and to improve the effects of other antioxidants. It is also used as an acidity regulator and sequestrant. Typical products include gelatin, jam, sweets, ice cream, carbonated beverages, milk powder, wine, and processed cheeses. Disodium citrate can also be used as a thickening agent or stabilizer.
Manufacturing
Disodium citrate can also be used as an ingredient in household products that remove stains.
Health
Disodium citrate may be used in patients to alleviate discomfort from urinary-tract infections.
References
Citrates
Organic sodium salts
Acid salts
E-number additives | Disodium citrate | Chemistry | 256 |
41,449,427 | https://en.wikipedia.org/wiki/Heinz%20Billing | Heinz Billing (7 April 1914 – 4 January 2017) was a German physicist and computer scientist, widely considered a pioneer in the construction of computer systems and computer data storage, who built a prototype laser interferometric gravitational wave detector.
Biography
Billing was born in Salzwedel, in Saxony-Anhalt, Germany. After studying mathematics and physics at the University of Göttingen, he received his doctorate in 1938 in Munich at the age of 24. During the Second World War he worked at the Aerodynamics Research Institute in Göttingen.
On 3 October 1943 he married Anneliese Oetker. Billing had three children: Heiner Erhard Billing (born 18 November 1944 in Salzwedel), Dorit Gerda Gronefeld Billing (born 27 June 1946 in Göttingen) and Arend Gerd Billing (born 19 September 1954 in Göttingen).
He turned 100 in April 2014 and died on 4 January 2017 at the age of 102. Advanced LIGO detected the fourth gravitational wave event GW170104 on the same day.
Computer science
Billing worked at the Aerodynamic Research Institute in Göttingen, where he developed a magnetic drum memory.
According to Billing's memoirs, published by Genscher, Düsseldorf (1997), there was a meeting between Alan Turing and Konrad Zuse. It took place in Göttingen in 1947. The interrogation had the form of a colloquium. Participants were Womersley, Turing, Porter from England and a few German researchers like Zuse, Walther, and Billing. (For more details see Herbert Bruderer, Konrad Zuse und die Schweiz).
After a brief stay at the University of Sydney, Billing returned to join the Max Planck Institute for Physics in 1951. From 1952 through 1961 the group under Billing's direction constructed a series of four digital computers: the G1, G2, G1a, and G3.
He is the designer of the first German sequence-controlled electronic digital computer as well as of the first German stored-program electronic digital computer.
Gravitational wave detector
After transistors had become firmly established, microelectronics had arrived, scientific computers were gradually being overshadowed by commercial applications, and computers were being mass-produced in factories, Heinz Billing left the computer field in which he had been a pioneer for nearly 30 years.
In 1972, Billing returned to his original field of physics, at the Max Planck Institute's new location at Garching near Munich. Beginning in 1972, Heinz Billing became involved in gravitational physics, when he tried to verify the detection claims made by American physicist Joseph Weber. Weber's results were considered to be proven wrong by these experiments.
In 1975, Billing acted on a proposal by Rainer Weiss of the Massachusetts Institute of Technology (MIT) to use laser interferometry to detect gravitational waves. He and colleagues built a 3 m prototype Michelson interferometer using optical delay lines. From 1980 onward, Billing commissioned the development and construction, at the MPA in Garching, of a laser interferometer with an arm length of 30 m. Without the knowledge gained from this prototype, the LIGO project would not have been started when it did.
Awards and honors
In 1987, Heinz Billing received the Konrad Zuse Medal for the invention of magnetic drum storage. In 2015 he received the Order of Merit of the Federal Republic of Germany.
In 1993, the annual Heinz Billing prize for "outstanding contributions to computational science" was established by the Max Planck Society in his honor, with a prize amount of 5,000 Euro.
Selected publications
Heinz Billing: Ein Interferenzversuch mit dem Lichte eines Kanalstrahles. J. A. Barth, Leipzig 1938.
Heinz Billing, Wilhelm Hopmann: Mikroprogramm-Steuerwerk. In: Elektronische Rundschau. Heft 10, 1955.
Heinz Billing, Albrecht Rüdiger: Das Parametron verspricht neue Möglichkeiten im Rechenmaschinenbau. In: eR – Elektronische Rechenanlagen. Band 1, Heft 3, 1959.
Heinz Billing: Lernende Automaten. Oldenbourg Verlag, München 1961.
Heinz Billing: Die im MPI für Physik und Astrophysik entwickelte Rechenanlage G3. In: eR – Elektronische Rechenanlagen. Band 5, Heft 2, 1961.
Heinz Billing: Magnetische Stufenschichten als Speicherelemente. In: eR – Elektronische Rechenanlagen. Band 5, Heft 6, 1963.
Heinz Billing: Schnelle Rechenmaschinenspeicher und ihre Geschwindigkeits- und Kapazitätsgrenzen. In: eR – Elektronische Rechenanlagen. Band 5, Heft 2, 1963.
Heinz Billing, Albrecht Rüdiger, Roland Schilling: BRUSH – Ein Spezialrechner zur Spurerkennung und Spurverfolgung in Blasenkammerbildern. In: eR – Elektronische Rechenanlagen. Band 11, Heft 3, 1969.
Heinz Billing: Zur Entwicklungsgeschichte der digitalen Speicher. In: eR – Elektronische Rechenanlagen. Band 19, Heft 5, 1977.
Heinz Billing: A wide-band laser interferometer for the detection of gravitational radiation. progress report, Max-Planck-Institut für Physik und Astrophysik, München 1979.
Heinz Billing: Die Göttinger Rechenmaschinen G1, G2, G3. In: Entwicklungstendenzen wissenschaftlicher Rechenzentren, Kolloquium, Göttingen. Springer, Berlin 1980, .
Heinz Billing: The Munich gravitational wave detector using laser interferometry. Max-Planck-Institut für Physik und Astrophysik, München 1982.
Heinz Billing: Die Göttinger Rechenmaschinen G1, G2 und G3. In: MPG-Spiegel. 4, 1982.
Heinz Billing: Meine Lebenserinnerungen. Selbstverlag, 1994.
Heinz Billing: Ein Leben zwischen Forschung und Praxis. Selbstverlag F. Genscher, Düsseldorf 1997.
Heinz Billing: Fast memories for computers and their limitations regarding speed and capacity (Schnelle Rechenmaschinenspeicher und ihre Geschwindigkeits- und Kapazitätsgrenzen). In: IT – Information Technology. Band 50, Heft 5, 2008.
References
External links
Tracking down the gentle tremble at Max-Planck-Gesellschaft's website on account history of GEO600 with Heinz Billing.
1914 births
2017 deaths
People from Salzwedel
Scientists from the Province of Saxony
German computer scientists
20th-century German physicists
Gravitational-wave astronomy
Max Planck Society people
German men centenarians
Officers Crosses of the Order of Merit of the Federal Republic of Germany
Max Planck Institute directors | Heinz Billing | Physics,Astronomy | 1,464 |
9,775,501 | https://en.wikipedia.org/wiki/Granin | Granin (chromogranin and secretogranin) is a protein family of regulated secretory proteins ubiquitously found in the cores of amine and peptide hormone and neurotransmitter dense-core secretory vesicles.
Function
Granins (chromogranins or secretogranins) are acidic proteins and are present in the secretory granules of a wide variety of endocrine and neuro-endocrine cells. The exact function(s) of these proteins is not yet settled but there is evidence that granins function as pro-hormones, giving rise to an array of peptide fragments for which autocrine, paracrine, and endocrine activities have been demonstrated in vitro and in vivo. The intracellular biochemistry of granins includes binding of Ca2+, ATP and catecholamines (epinephrine, norepinephrine) within the hormone storage vesicle core. There is also evidence that CgA, and perhaps other granins, regulate the biogenesis of dense-core secretory vesicles and hormone sequestration in neuroendocrine cells.
Structure
Apart from their subcellular location and the abundance of acidic residues (Asp and Glu), these proteins do not share many structural similarities. Only one short region, located in the C-terminal section, is conserved in all these proteins. Chromogranins and secretogranins together share a C-terminal motif, whereas chromogranins A and B share a region of high similarity in their N-terminal section; this region includes two cysteine residues involved in a disulfide bond.
There are considerable differences in the amino acid composition between different animals. Commercial assays for measuring human CGA can usually not be used for measuring CGA in samples from other species. Some specific parts of the molecule have a higher degree of amino acid homology and methods where the antibodies are directed against specific epitopes can be used to measure samples from different animals. Region-specific assays measuring defined parts of CGA, CGB and SG2 can be used for measurements in samples from cats and dogs.
Members
Chromogranins
chromogranin A (CgA)
chromogranin B (CgB)
Secretogranins
secretogranin II (SgII) (see also secretoneurin)
secretogranin III (SgIII)
secretogranin V (SgV)
Extended group
Some other proteins are also proposed to belong to the granins based on their physico-chemical properties. These include NESP55 (SgVI), VGF (SgVII), and ProSAAS (SgVIII).
References
External links
Protein structural motifs | Granin | Biology | 568 |
95,154 | https://en.wikipedia.org/wiki/Associative%20array | In computer science, an associative array, map, symbol table, or dictionary is an abstract data type that stores a collection of (key, value) pairs, such that each possible key appears at most once in the collection. In mathematical terms, an associative array is a function with finite domain. It supports 'lookup', 'remove', and 'insert' operations.
The dictionary problem is the classic problem of designing efficient data structures that implement associative arrays.
The two major solutions to the dictionary problem are hash tables and search trees.
It is sometimes also possible to solve the problem using directly addressed arrays, binary search trees, or other more specialized structures.
Many programming languages include associative arrays as primitive data types, while many other languages provide software libraries that support associative arrays. Content-addressable memory is a form of direct hardware-level support for associative arrays.
Associative arrays have many applications including such fundamental programming patterns as memoization and the decorator pattern.
The name does not come from the associative property known in mathematics. Rather, it arises from the association of values with keys. It is not to be confused with associative processors.
Operations
In an associative array, the association between a key and a value is often known as a "mapping"; the same word may also be used to refer to the process of creating a new association.
The operations that are usually defined for an associative array are:
Insert or put
add a new pair to the collection, mapping the key to its new value. Any existing mapping is overwritten. The arguments to this operation are the key and the value.
Remove or delete
remove a pair from the collection, unmapping a given key from its value. The argument to this operation is the key.
Lookup, find, or get
find the value (if any) that is bound to a given key. The argument to this operation is the key, and the value is returned from the operation. If no value is found, some lookup functions raise an exception, while others return a default value (such as zero, null, or a specific value passed to the constructor).
Associative arrays may also include other operations such as determining the number of mappings or constructing an iterator to loop over all the mappings. For such operations, the order in which the mappings are returned is usually implementation-defined.
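As a concrete illustration of these operations, the short sketch below uses Python's built-in dict; the keys and values are arbitrary examples.

phone_book = {}                          # a new, empty associative array

phone_book["alice"] = "555-0100"         # insert/put: bind a key to a value
phone_book["alice"] = "555-0199"         # inserting again overwrites the old value

number = phone_book.get("alice")         # lookup/get: returns "555-0199"
missing = phone_book.get("bob", None)    # absent key: this lookup returns a default

del phone_book["alice"]                  # remove/delete: unbind the key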
A multimap generalizes an associative array by allowing multiple values to be associated with a single key. A bidirectional map is a related abstract data type in which the mappings operate in both directions: each value must be associated with a unique key, and a second lookup operation takes a value as an argument and looks up the key associated with that value.
Properties
The operations of the associative array should satisfy various properties:
lookup(k, insert(j, v, D)) = if k == j then v else lookup(k, D)
lookup(k, new()) = fail, where fail is an exception or default value
remove(k, insert(j, v, D)) = if k == j then remove(k, D) else insert(j, v, remove(k, D))
remove(k, new()) = new()
where k and j are keys, v is a value, D is an associative array, and new() creates a new, empty associative array.
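These properties can be spot-checked mechanically. The sketch below models the four operations as pure functions over Python dicts, with the string "fail" standing in for the exception or default value; all names are illustrative.

def new():
    return {}

def insert(j, v, d):
    e = dict(d)          # copy, so the operation behaves as a pure function
    e[j] = v
    return e

def remove(k, d):
    e = dict(d)
    e.pop(k, None)
    return e

def lookup(k, d):
    return d.get(k, "fail")

D = insert("a", 1, new())
assert lookup("x", insert("x", 2, D)) == 2                   # k == j case
assert lookup("a", insert("x", 2, D)) == lookup("a", D)      # k != j case
assert lookup("z", new()) == "fail"
assert remove("x", new()) == new()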
Example
Suppose that the set of loans made by a library is represented in a data structure. Each book in a library may be checked out by one patron at a time. However, a single patron may be able to check out multiple books. Therefore, the information about which books are checked out to which patrons may be represented by an associative array, in which the books are the keys and the patrons are the values. Using notation from Python or JSON, the data structure would be:
{
"Pride and Prejudice": "Alice",
"Wuthering Heights": "Alice",
"Great Expectations": "John"
}
A lookup operation on the key "Great Expectations" would return "John". If John returns his book, that would cause a deletion operation, and if Pat checks out a book, that would cause an insertion operation, leading to a different state:
{
"Pride and Prejudice": "Alice",
"The Brothers Karamazov": "Pat",
"Wuthering Heights": "Alice"
}
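The same transition can be written out directly against an associative array; a minimal Python version of the library example above:

loans = {
    "Pride and Prejudice": "Alice",
    "Wuthering Heights": "Alice",
    "Great Expectations": "John",
}

assert loans["Great Expectations"] == "John"   # lookup

del loans["Great Expectations"]                # John returns his book (deletion)
loans["The Brothers Karamazov"] = "Pat"        # Pat checks out a book (insertion)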
Implementation
For dictionaries with very few mappings, it may make sense to implement the dictionary using an association list, which is a linked list of mappings. With this implementation, the time to perform the basic dictionary operations is linear in the total number of mappings. However, it is easy to implement and the constant factors in its running time are small.
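A minimal association-list sketch in Python is shown below; every operation walks the linked list, so the cost is linear in the number of mappings (names are illustrative).

class Node:
    def __init__(self, key, value, rest=None):
        self.key, self.value, self.rest = key, value, rest

def lookup(node, key):
    while node is not None:               # linear scan of the stored pairs
        if node.key == key:
            return node.value
        node = node.rest
    return None

def insert(node, key, value):
    return Node(key, value, node)         # the new pair shadows any older binding

pairs = insert(insert(None, "a", 1), "b", 2)
assert lookup(pairs, "a") == 1 and lookup(pairs, "b") == 2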
Another very simple implementation technique, usable when the keys are restricted to a narrow range, is direct addressing into an array: the value for a given key k is stored at the array cell A[k], or if there is no mapping for k then the cell stores a special sentinel value that indicates the lack of a mapping. This technique is simple and fast, with each dictionary operation taking constant time. However, the space requirement for this structure is the size of the entire keyspace, making it impractical unless the keyspace is small.
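A direct-addressing sketch for keys restricted to the range 0–9, showing both the constant-time operations and the keyspace-sized storage the technique requires:

NO_MAPPING = object()                 # sentinel marking the absence of a mapping
table = [NO_MAPPING] * 10             # one cell per possible key, used or not

table[3] = "three"                    # insert: one array write
value = table[3]                      # lookup: one array read
table[3] = NO_MAPPING                 # remove: one array write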
The two major approaches for implementing dictionaries are a hash table or a search tree.
Hash table implementations
The most frequently used general-purpose implementation of an associative array is with a hash table: an array combined with a hash function that separates each key into a separate "bucket" of the array. The basic idea behind a hash table is that accessing an element of an array via its index is a simple, constant-time operation. Therefore, the average overhead of an operation for a hash table is only the computation of the key's hash, combined with accessing the corresponding bucket within the array. As such, hash tables usually perform in O(1) time, and usually outperform alternative implementations.
Hash tables must be able to handle collisions: the mapping by the hash function of two different keys to the same bucket of the array. The two most widespread approaches to this problem are separate chaining and open addressing. In separate chaining, the array does not store the value itself but stores a pointer to another container, usually an association list, that stores all the values matching the hash. By contrast, in open addressing, if a hash collision is found, the table seeks an empty spot in an array to store the value in a deterministic manner, usually by looking at the next immediate position in the array.
Open addressing has a lower cache miss ratio than separate chaining when the table is mostly empty. However, as the table becomes filled with more elements, open addressing's performance degrades exponentially. Additionally, separate chaining uses less memory in most cases, unless the entries are very small (less than four times the size of a pointer).
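A compact separate-chaining hash table can be sketched in a few lines of Python; this is an illustration of the idea (no resizing, illustrative names), not production code.

class ChainedHashTable:
    def __init__(self, n_buckets=8):
        self.buckets = [[] for _ in range(n_buckets)]

    def _bucket(self, key):
        # The hash function selects one bucket; colliding keys share it.
        return self.buckets[hash(key) % len(self.buckets)]

    def put(self, key, value):
        bucket = self._bucket(key)
        for i, (k, _) in enumerate(bucket):
            if k == key:
                bucket[i] = (key, value)      # overwrite an existing mapping
                return
        bucket.append((key, value))

    def get(self, key, default=None):
        for k, v in self._bucket(key):        # scan only the one bucket
            if k == key:
                return v
        return default

t = ChainedHashTable()
t.put("apple", 1)
assert t.get("apple") == 1 and t.get("pear") is None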
Tree implementations
Self-balancing binary search trees
Another common approach is to implement an associative array with a self-balancing binary search tree, such as an AVL tree or a red–black tree.
Compared to hash tables, these structures have both strengths and weaknesses. The worst-case performance of self-balancing binary search trees is significantly better than that of a hash table, with a time complexity in big O notation of O(log n). This is in contrast to hash tables, whose worst-case performance involves all elements sharing a single bucket, resulting in O(n) time complexity. In addition, and like all binary search trees, self-balancing binary search trees keep their elements in order. Thus, traversing its elements follows a least-to-greatest pattern, whereas traversing a hash table can result in elements being in seemingly random order. Because they are in order, tree-based maps can also satisfy range queries (find all values between two bounds) whereas a hashmap can only find exact values. However, hash tables have a much better average-case time complexity than self-balancing binary search trees of O(1), and their worst-case performance is highly unlikely when a good hash function is used.
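The ordered-traversal and range-query behaviour can be emulated in Python, which has no built-in tree-based map, by keeping the keys in a sorted list and using the standard bisect module; the data values here are arbitrary.

import bisect

keys = [2, 5, 8, 13, 21]                       # kept sorted, as a tree-based map would
values = {2: "a", 5: "b", 8: "c", 13: "d", 21: "e"}

def range_query(lo, hi):
    # All (key, value) pairs with lo <= key <= hi, returned in key order.
    left = bisect.bisect_left(keys, lo)
    right = bisect.bisect_right(keys, hi)
    return [(k, values[k]) for k in keys[left:right]]

assert range_query(5, 13) == [(5, "b"), (8, "c"), (13, "d")]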
A self-balancing binary search tree can be used to implement the buckets for a hash table that uses separate chaining. This allows for average-case constant lookup, but assures a worst-case performance of O(log n). However, this introduces extra complexity into the implementation and may cause even worse performance for smaller hash tables, where the time spent inserting into and balancing the tree is greater than the time needed to perform a linear search on all elements of a linked list or similar data structure.
Other trees
Associative arrays may also be stored in unbalanced binary search trees or in data structures specialized to a particular type of keys such as radix trees, tries, Judy arrays, or van Emde Boas trees, though the relative performance of these implementations varies. For instance, Judy trees have been found to perform less efficiently than hash tables, while carefully selected hash tables generally perform more efficiently than adaptive radix trees, with potentially greater restrictions on the data types they can handle. The advantages of these alternative structures come from their ability to handle additional associative array operations, such as finding the mapping whose key is the closest to a queried key when the query is absent in the set of mappings.
Comparison
Ordered dictionary
The basic definition of a dictionary does not mandate an order. To guarantee a fixed order of enumeration, ordered versions of the associative array are often used. There are two senses of an ordered dictionary:
The order of enumeration is always deterministic for a given set of keys by sorting. This is the case for tree-based implementations, one representative being the std::map container of C++.
The order of enumeration is key-independent and is instead based on the order of insertion. This is the case for the "ordered dictionary" in .NET Framework, the LinkedHashMap of Java, and the built-in dict of Python (which preserves insertion order).
The latter is more common. Such ordered dictionaries can be implemented using an association list, by overlaying a doubly linked list on top of a normal dictionary, or by moving the actual data out of the sparse (unordered) array and into a dense insertion-ordered one.
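The difference between the two senses is visible directly in Python, where the built-in dict enumerates keys in insertion order (guaranteed since Python 3.7), while a key-sorted enumeration must be requested explicitly:

d = {}
d["banana"] = 3
d["apple"] = 1
d["cherry"] = 2

print(list(d))        # ['banana', 'apple', 'cherry'], insertion order
print(sorted(d))      # ['apple', 'banana', 'cherry'], sorted key order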
Language support
Associative arrays can be implemented in any programming language as a package and many language systems provide them as part of their standard library. In some languages, they are not only built into the standard system, but have special syntax, often using array-like subscripting.
Built-in syntactic support for associative arrays was introduced in 1969 by SNOBOL4, under the name "table". TMG offered tables with string keys and integer values. MUMPS made multi-dimensional associative arrays, optionally persistent, its key data structure. SETL supported them as one possible implementation of sets and maps. Most modern scripting languages, starting with AWK and including Rexx, Perl, PHP, Tcl, JavaScript, Maple, Python, Ruby, Wolfram Language, Go, and Lua, support associative arrays as a primary container type. In many more languages, they are available as library functions without special syntax.
In Smalltalk, Objective-C, .NET, Python, REALbasic, Swift, VBA and Delphi they are called dictionaries; in Perl, Ruby and Seed7 they are called hashes; in C++, C#, Java, Go, Clojure, Scala, OCaml, Haskell they are called maps (see map (C++) and unordered_map (C++)); in Common Lisp and Windows PowerShell, they are called hash tables (since both typically use this implementation); in Maple and Lua, they are called tables. In PHP and R, all arrays can be associative, except that the keys are limited to integers and strings. In JavaScript (see also JSON), all objects behave as associative arrays with string-valued keys, while the Map and WeakMap types take arbitrary objects as keys. In Lua, they are used as the primitive building block for all data structures. In Visual FoxPro, they are called Collections. The D language also supports associative arrays.
Permanent storage
Many programs using associative arrays will need to store that data in a more permanent form, such as a computer file. A common solution to this problem is a generalized concept known as archiving or serialization, which produces a text or binary representation of the original objects that can be written directly to a file. This is most commonly implemented in the underlying object model, like .Net or Cocoa, which includes standard functions that convert the internal data into text. The program can create a complete text representation of any group of objects by calling these methods, which are almost always already implemented in the base associative array class.
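For a simple associative array of strings, this kind of archiving can be as small as a round-trip through a text format; a Python sketch using the standard json module:

import json

loans = {"Pride and Prejudice": "Alice", "Great Expectations": "John"}

text = json.dumps(loans)          # serialize to a text representation
restored = json.loads(text)       # reconstruct the associative array later
assert restored == loans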
For programs that use very large data sets, this sort of individual file storage is not appropriate, and a database management system (DB) is required. Some DB systems natively store associative arrays by serializing the data and then storing that serialized data and the key. Individual arrays can then be loaded or saved from the database using the key to refer to them. These key–value stores have been used for many years and have a history as long as that of the more common relational database (RDBs), but a lack of standardization, among other reasons, limited their use to certain niche roles. RDBs were used for these roles in most cases, although saving objects to a RDB can be complicated, a problem known as object-relational impedance mismatch.
After approximately 2010, the need for high-performance databases suitable for cloud computing and more closely matching the internal structure of the programs using them led to a renaissance in the key–value store market. These systems can store and retrieve associative arrays in a native fashion, which can greatly improve performance in common web-related workflows.
See also
Tuple
Function (mathematics)
References
External links
NIST's Dictionary of Algorithms and Data Structures: Associative Array
Abstract data types
Composite data types
Data types | Associative array | Mathematics | 3,011 |
22,902,471 | https://en.wikipedia.org/wiki/Lonnie%20D.%20Bentley | Lonnie D. Bentley (born 1957) is an American computer scientist, and Professor and former Department Head of Computer and Information Technology at Purdue University, known with Kevin C. Dittman and Jeffrey L. Whitten as co-author of the textbook Systems Analysis and Design Methods, which is now in its 7th edition.
Life and work
Born in 1957, Bentley attended the Mountain Home High School in Arkansas. He studied at the Arkansas State University, where in 1979 he received his B.S. in Business Data Processing, and in 1981 his M.S. in Information Systems.
Bentley has taught courses such as Systems Analysis and Design Methods; Systems Analysis (using structured analysis-based methods); Systems Analysis (using information engineering-based methods); Systems Design (using RAD design-based methods); Systems Design (using structured design-based methods); Enterprise Resource Planning and Integration; Business Process Redesign.
Aside from systems analysis and design, Bentley also focuses on enterprise applications, business process redesign, computer-aided software engineering (CASE), rapid application development (RAD), and graphical user interface (GUI) design.
Along with his contributions to higher education, Lonnie is also a founder of Broadband Antenna Tracking Systems (BATS).
Honors and awards
1985 - Best Teacher Award, Department of Computer Technology
1995 - Outstanding Tenured Faculty Award, School of Technology
1998 - $1,000 OOPSLA Educators Scholarship
1998 - $500 Dow Corning Faculty Scholarship
2006 - Arkansas State University Distinguished Alumnus
Selected publications
Bentley, Lonnie D., Kevin C. Dittman, and Jeffrey L. Whitten. Systems analysis and design methods. (1986, 1997, 2004).
Whitten, Jeffrey L., and Lonnie D. Bentley. Using Excelerator for systems analysis and design. (1987).
Bentley, Lonnie D., and Jeffrey L. Whitten. Systems analysis and design for the global enterprise. McGraw-Hill Irwin, 2007.
Whitten, Jeffrey L., and Lonnie D. Bentley. Introduction to systems analysis and design. McGraw Hill Irwin, 2008.
References
External links
Lonnie D. Bentley at purdue.edu
Technology professor wins commercialization award, October 10, 2011
1957 births
Living people
American computer scientists
Information systems researchers
Systems engineers
Arkansas State University alumni
Purdue University faculty | Lonnie D. Bentley | Technology | 481 |
17,051,674 | https://en.wikipedia.org/wiki/AHI%20%28Amiga%29 | AHI (AHI audio system) is a retargetable audio subsystem for AmigaOS, MorphOS and AROS. It was created by Martin Blom in the mid-1990s to allow standardized operating system support for audio hardware other than just the native Amiga sound chip, for example 16-bit sound cards.
AHI offers improved functionality not available through the AmigaOS audio device driver, such as seamless audio playback from a user selected audio device (in applications which supported AHI), standardized functionality for audio recording and efficient software mixing routines for combining multiple sound channels thus overcoming the four channel hardware limit of the original Amiga chipset. It also incorporated a unique mode that produced 14-bit playback using the Amiga chipset by combining two 8-bit channels set at different volumes. The first official release of AHI was in 1996. AHI became a widely supported standard for audio hardware and audio software on Amiga systems and was officially included in later operating system releases.
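The 14-bit mode mentioned above relies on pairing two 8-bit channels whose hardware volumes differ by a factor of 64. The sketch below shows one plausible way of splitting a sample; the volume settings and scaling are assumptions for illustration and are not taken from the AHI source code.

def split_14bit(sample):
    # sample is a signed 14-bit value in the range -8192..8191.
    # 'high' is played on a channel at volume 64, 'low' on a paired channel
    # at volume 1, so the analogue outputs sum to roughly high * 64 + low.
    high = sample >> 6                 # arithmetic shift keeps the sign
    low = sample - (high << 6)         # remaining 6 bits, always in 0..63
    return high, low

assert split_14bit(8191) == (127, 63)
assert split_14bit(-8192) == (-128, 0)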
The author has stated that when referring to this software the correct term is 'AHI audio system' or just 'AHI' and not 'Audio Hardware Interface', term sometimes used by the press.
References
External links
AHI web site
AmigaOS
Amiga
Amiga APIs
Audio libraries
MorphOS | AHI (Amiga) | Technology | 260 |
231,017 | https://en.wikipedia.org/wiki/Advanced%20sleep%20phase%20disorder | Advanced Sleep Phase Disorder (ASPD), also known as the advanced sleep-phase type (ASPT) of circadian rhythm sleep disorder, is a condition that is characterized by a recurrent pattern of early evening (e.g. 7-9 PM) sleepiness and very early morning awakening (e.g. 2-4 AM). This sleep phase advancement can interfere with daily social and work schedules, and results in shortened sleep duration and excessive daytime sleepiness. The timing of sleep and melatonin levels are regulated by the body's central circadian clock, which is located in the suprachiasmatic nucleus in the hypothalamus.
Symptoms
Individuals with ASPD report being unable to stay awake until conventional bedtime, falling asleep too quickly and/or early in the evening, and being unable to stay asleep until their desired waking time, instead experiencing early morning insomnia. When someone has advanced sleep phase disorder, their melatonin levels and core body temperature cycle hours earlier than those of an average person. These symptoms must be present and stable for a substantial period of time to be correctly diagnosed.
Diagnosis
Individuals expressing the above symptoms may be diagnosed with ASPD using a variety of methods and tests. Sleep specialists measure the patient's sleep onset and offset, dim light melatonin onset, and evaluate Horne-Ostberg morningness-eveningness questionnaire results. Sleep specialists may also conduct a polysomnography test to rule out other sleep disorders like narcolepsy. Age and family history of the patient is also taken into consideration.
Treatment
Once diagnosed, ASPD may be treated with bright light therapy in the evenings, or behaviorally with chronotherapy, in order to delay sleep onset and offset. The use of pharmacological approaches to treatment is less successful due to the risks of administering sleep-promoting agents early in the morning. Additional methods of treatment, like timed melatonin administration or hypnotics, have been proposed, but determining their safety and efficacy will require further research. Unlike other sleep disorders, ASPD does not necessarily disrupt normal functioning at work during the day, and some patients may not complain of excessive daytime sleepiness. Social obligations may cause an individual to stay up later than their circadian rhythm requires; however, they will still wake up very early. If this cycle continues, it can lead to chronic sleep deprivation and other sleep disorders.
Epidemiology
ASPD is more common among middle-aged and older adults. The estimated prevalence of ASPD is about 1% in middle-aged adults, and it is believed to affect men and women equally. The disorder has a strong familial tendency, with 40-50% of affected individuals having relatives with ASPD. A genetic basis has been demonstrated in one form of ASPD, familial advanced sleep phase syndrome (FASPS), which implicates missense mutations in the genes hPER2 and CKIdelta in producing the advanced sleep phase phenotype. The identification of two different genetic mutations suggests that there is heterogeneity of this disorder.
Familial advanced sleep phase syndrome
FASPS symptoms
While advanced sleep and wake times are relatively common, especially among older adults, the extreme phase advance characteristic of familial advanced sleep phase syndrome (also known as familial advanced sleep phase disorder) is rare. Individuals with FASPS fall asleep and wake up 4–6 hours earlier than the average population, generally sleeping from 7:30pm to 4:30am. They also have a free running circadian period of 22 hours, which is significantly shorter than the average human period of slightly over 24 hours. The shortened period associated with FASPS results in a shortened period of activity, causing earlier sleep onset and offset. This means that individuals with FASPS must delay their sleep onset and offset each day in order to entrain to the 24-hour day. On holidays and weekends, when the average person's sleep phase is delayed relative to their workday sleep phase, individuals with FASPS experience further advance in their sleep phase.
Aside from the unusual timing of sleep, FASPS patients experience normal sleep quality and quantity. Like general ASPD, this syndrome does not inherently cause negative impacts; however, social norms may lead individuals to delay sleep until a more socially acceptable time, and because they still wake earlier than usual, this can result in lost sleep.
Another factor that distinguishes FASPS from other advanced sleep phase disorders is its strong familial tendency and life-long expression. Studies of affected lineages have found that approximately 50% of directly related family members experience the symptoms of FASPS, which is an autosomal dominant trait. Diagnosis of FASPS can be confirmed through genetic sequencing analysis by locating genetic mutations known to cause the disorder. Treatment with sleep and wake scheduling and bright light therapy can be used to try to delay sleep phase to a more conventional time frame, however treatment of FASPS has proven largely unsuccessful. Bright light exposure in the evening (between 7:00 and 9:00), during the delay zone as indicated by the phase response curve to light, has been shown to delay circadian rhythms, resulting in later sleep onset and offset in patients with FASPS or other advanced sleep phase disorders.
Discovery
In 1999, Louis Ptáček conducted a study at the University of Utah in which he coined the term familial advanced sleep phase disorder after identifying individuals with a genetic basis for an advanced sleep phase. The first patient evaluated during the study reported "disabling early evening sleepiness" and "early morning awakening"; similar symptoms were also reported in her family members. Consenting relatives of the initial patient were evaluated, as well as those from two additional families. The clinical histories, sleep logs and actigraphy patterns of subject families were used to define a hereditary circadian rhythm variant associated with a short endogenous (i.e. internally-derived) period. The subjects demonstrated a phase advance of sleep-wake rhythms that was distinct not only from control subjects, but also to sleep-wake schedules widely considered to be conventional. The subjects were also evaluated using the Horne-Östberg questionnaire, a structured self-assessment questionnaire used to determine morningness-eveningness in human circadian rhythms. The Horne-Östberg scores of first-degree relatives of affected individuals were higher than those of 'marry-in' spouses and unrelated control subjects. While much of morning and evening preference is heritable, the allele causing FASPS was hypothesized to have a quantitatively larger effect on clock function than the more common genetic variations that influence these preferences. Additionally, the circadian phase of subjects was determined using plasma melatonin and body core temperature measurements; these rhythms were both phase-advanced by 3–4 hours in FASPS subjects compared with control subjects. The Ptáček group also constructed a pedigree of the three FASPS kindreds which indicated a clear autosomal dominant transmission of the sleep phase advance.
In 2001, the research group of Phyllis C. Zee phenotypically characterized an additional family affected with ASPS. This study involved an analysis of sleep/wake patterns, diurnal preferences (using a Horne-Östberg questionnaire), and the construction of a pedigree for the affected family. Consistent with established ASPS criteria, the evaluation of subject sleep architecture indicated that the advanced sleep phase was due to an alteration of circadian timing rather than an exogenous (i.e. externally-derived) disruption of sleep homeostasis, a mechanism of sleep regulation. Furthermore, the identified family was one in which an ASPS-affected member was present in every generation; consistent with earlier work done by the Ptáček group, this pattern suggests that the phenotype segregates as a single gene with an autosomal dominant mode of inheritance.
In 2001, the research groups of Ptáček and Ying-Hui Fu published a genetic analysis of subjects experiencing the advanced sleep phase, implicating a mutation in the CK1-binding region of PER2 in producing the FASPS behavioral phenotype. FASPS is the first disorder to link known core clock genes directly with human circadian sleep disorders. As the PER2 mutation is not exclusively responsible for causing FASPS, current research has continued to evaluate cases in order to identify new mutations that contribute to the disorder.
Mechanisms (Per2 and CK1)
Two years after reporting the finding of FASPS, Ptáček's and Fu's groups published results of genetic sequencing analysis on a family with FASPS. They genetically mapped the FASPS locus to chromosome 2q where very little human genome sequencing was then available. Thus, they identified and sequenced all the genes in the critical interval. One of these was Period2 (Per2) which is a mammalian gene sufficient for the maintenance of circadian rhythms. Sequencing of the hPer2 gene ('h' denoting a human strain, as opposed to Drosophila or mouse strains) revealed a serine-to-glycine point mutation in the Casein Kinase I (CK1) binding domain of the hPER2 protein that resulted in hypophosphorylation of hPER2 in vitro. The hypophosphorylation of hPER2 disrupts the transcription-translation (negative) feedback loop (TTFL) required for regulating the stable production of hPER2 protein. In a wildtype individual, Per2 mRNA is transcribed and translated to form a PER2 protein. Large concentrations of PER2 protein inhibits further transcription of Per2 mRNA. CK1 regulates PER2 levels by binding to a CK1 binding site on the protein, allowing for phosphorylation which marks the protein for degradation, reducing protein levels. Once proteins become phosphorylated, PER2 levels decrease again, and Per2 mRNA transcription can resume. This negative feedback regulates the levels and expression of these circadian clock components.
Without proper phosphorylation of hPER2 in the instance of a mutation in the CK1 binding site, less Per2 mRNA is transcribed and the period is shortened to less than 24 hours. Individuals with a shortened period due to this phosphorylation disruption entrain to a 24h light-dark cycle, which may lead to a phase advance, causing earlier sleep and wake patterns. However, a 22h period does not necessitate a phase shift, but a shift can be predicted depending on the time the subject is exposed to the stimulus, visualized on a Phase Response Curve (PRC). This is consistent with studies of the role of CK1ɛ (a unique member of the CK1 family) in the TTFL in mammals and more studies have been conducted looking at specific regions of the Per2 transcript. In 2005, Fu's and Ptáček's labs reported discovery of a mutation in CKIδ (a functionally redundant form of CK1ɛ in the phosphorylation process of PER2) also causing FASPS. An A-to-G missense mutation resulted in a threonine-to-alanine alteration in the protein. This mutation prevented the proper phosphorylation of PER2. The evidence for both a mutation in the binding domain of PER2 and a mutation in CKIδ as causes of FASPS is strengthened by the lack of the FASPS phenotype in wild type individuals and by the observed change in the circadian phenotype of these mutant individuals in vitro and an absence of said mutations in all tested control subjects. Fruit flies and mice engineered to carry the human mutation also demonstrated abnormal circadian phenotypes, although the mutant flies had a long circadian period while the mutant mice had a shorter period. The genetic differences between flies and mammals that account for this difference circadian phenotypes are not known. Most recently, Ptáček and Fu reported additional studies of the human Per2 S662G mutation and generation of mice carrying the human mutation. These mice had a circadian period almost 2 hours shorter than wild-type animals under constant darkness. Genetic dosage studies of CKIδ on the Per2 S662G mutation revealed that depending on the binding site on Per2 that CK1δ interacts with, CK1δ may lead to hypo- or hyperphosphorylation of the Per2 gene.
See also
Delayed sleep phase disorder
Irregular sleep–wake rhythm
Non-24-hour sleep–wake disorder
References
External links
Sleep disorders
Circadian rhythm
Syndromes affecting the nervous system
Sleep physiology
Circadian rhythm sleep–wake disorders | Advanced sleep phase disorder | Biology | 2,600 |
3,546,462 | https://en.wikipedia.org/wiki/Computer%20booking%20system | A computer booking system is a system whereby publicly accessible computers can be reserved for a period of time. These systems are commonly used in facilities such as public libraries to ensure equitable use of limited numbers of computers. Bookings may be done over the internet or within the library itself using a separate computer set up as a booking terminal. Computer booking systems allow public service with reduced staff involvement.
Typically a computer booking system consists of both server and client software. The server software might run within the LAN or more typically is run from a publicly accessible web-server thus enabling users to book or reserve their computer time from their web-browser. There are both commercial and Open Source computer booking products on the market.
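A booking server of this kind ultimately has to answer one question: does a requested slot overlap an existing reservation for the same computer? The sketch below is a hypothetical, simplified version of that check; the names and rules are illustrative assumptions, not taken from any particular product.

from datetime import datetime, timedelta

reservations = {}   # computer id -> list of (start, end) tuples

def book(computer_id, start, minutes):
    end = start + timedelta(minutes=minutes)
    for s, e in reservations.get(computer_id, []):
        if start < e and s < end:                  # the two intervals overlap
            return False                           # reject the booking
    reservations.setdefault(computer_id, []).append((start, end))
    return True

assert book("pc-01", datetime(2024, 1, 5, 10, 0), 60) is True
assert book("pc-01", datetime(2024, 1, 5, 10, 30), 30) is False   # overlaps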
External links
Libki: a cross-platform Open Source computer reservation & time management system.
An example of a computer booking system at Birmingham Central Library, UK
An Open Source PC Reservations and Bookings system
Library resources
Personal computing | Computer booking system | Technology | 187 |
34,015,321 | https://en.wikipedia.org/wiki/Ocean%20disposal%20of%20radioactive%20waste | From 1946 through 1993, thirteen countries used ocean disposal or ocean dumping as a method to dispose of nuclear/radioactive waste with an approximation of 200,000 tons sourcing mainly from the medical, research and nuclear industry.
The waste materials included both liquids and solids housed in various containers, as well as reactor vessels, with and without spent or damaged nuclear fuel. Since 1993, ocean disposal has been banned by international treaties (London Convention (1972), Basel Convention, MARPOL 73/78). Only low-level radioactive waste (LLW) has been dumped at sea in this way, as the disposal of high-level waste has been strictly prohibited.
Ocean floor disposal (or sub-seabed disposal)—a more deliberate method of delivering radioactive waste to the ocean floor and depositing it into the seabed—was studied by the United Kingdom and Sweden, but never implemented.
History
Data are from IAEA-TECDOC-1105, pages 3–4.
1946 First dumping operation at Northeast Pacific Ocean (about 80 km off the coast of California)
1957 First IAEA Advisory Group Meeting on Radioactive Waste Disposal into the Sea
1958 First United Nations Conference on the Law of the Sea (UNCLOS I)
1964 On the 21 April, a satellite failed carrying a SNAP-9A radiothermal generator. 17,000 Ci (630 TBq) plutonium metal fuel burned up.
1972 Adoption of the Convention on the Prevention of Marine Pollution by Dumping of Wastes and Other Matter (London Convention 1972)
1975 The London Convention 1972 entered into force (Prohibition of dumping of high level radioactive waste.)
1978 On the 24 January a satellite named Kosmos 954 failed. It was powered by a liquid sodium–potassium thermionic converter driven by a nuclear reactor containing around 50 kilograms (110 lb) of uranium-235.
1983 Moratorium on low-level waste dumping
1988 Assessing the Impact of Deep Sea Disposal of Low-level Radioactive Waste on Living Marine Resources. IAEA Technical Reports Series No. 288
1990 Estimation of Radiation Risks at Low Dose. IAEA-TECDOC-557
1993 Russia reported the dumping of high level nuclear waste including spent fuel by former USSR.
1994 (February 20) Total prohibition of disposal at sea came into force
1946–1993
Data are from IAEA-TECDOC-1105. Summary of pages 27–120:
Disposal projects attempted to locate ideal dumping sites based on depth, stability and currents, and to treat, solidify and contain the waste. However, some dumping only involved diluting the waste with surface water, or used containers that imploded at depth. Even containers that survived the pressure could physically decay over time.
The countries involved – listed in order of total contributions measured in TBq (1 TBq = 10^12 becquerels) – were the Soviet Union, the United Kingdom, Switzerland, the United States, Belgium, France, the Netherlands, Japan, Sweden, Russia, New Zealand, Germany, Italy and South Korea. Together, they dumped a total of 85,100 TBq (85.1×10^15 Bq) of radioactive waste at over 100 ocean sites, as measured in initial radioactivity at the time of dump.
For comparison:
Global fallout of nuclear weapon tests – 2,566,087×10^15 Bq.
1986 Chernobyl disaster total release – 12,060×10^15 Bq.
2011 Fukushima Daiichi nuclear disaster, estimated total 340×10^15 to 780×10^15 Bq, with 80% falling into the Pacific Ocean.
Fukushima Daiichi nuclear plant cooling water dumped (leaked) to the sea – TEPCO estimate 4.7×10^15 Bq, Japanese Nuclear Safety Commission estimate 15×10^15 Bq, French Nuclear Safety Committee estimate 27×10^15 Bq.
Naturally occurring potassium-40 in all oceans – 14,000,000×10^15 Bq.
One container (net 400 kg) of vitrified high-level radioactive waste has an average radioactivity of 4×10^15 Bq (max 45×10^15 Bq).
Types of waste and packaging
Data are from IAEA-TECDOC-1105.
Liquid waste
unpackaged and diluted in surface waters
contained in package but not solidified
Solid waste
low level waste like resins, filters, material used for decontamination processes, etc., solidified with cement or bitumen and packaged in metal containers
unpackaged solid waste, mainly large parts of nuclear installations (steam generators, pumps, lids of reactor pressure vessels, etc.)
Reactor vessels
without nuclear fuel
containing damaged spent nuclear fuel solidified with polymer agent
special container with damaged spent nuclear fuel (icebreaker Lenin by the former Soviet Union)
Dump sites
Data are from IAEA-TECDOC-1105. There are three dump sites in the Pacific Ocean.
Arctic
Dumping by the Soviet Union, mainly off the east coast of Novaya Zemlya in the Kara Sea, with a relatively small proportion in the Barents Sea. Waste was dumped at 20 sites from 1959 to 1992, a total of 222,000 m³ including reactors and spent fuel.
North Atlantic
Dumping occurred from 1948 to 1982. The UK accounts for 78% of dumping in the Atlantic (35,088 TBq), followed by Switzerland (4,419 TBq), the United States (2,924 TBq) and Belgium (2,120 TBq). Sunken Soviet nuclear submarines are not included; see List of sunken nuclear submarines
There were 137,000 tonnes dumped by eight European countries. The United States reported neither tonnage nor volume for 34,282 containers.
Pacific Ocean
The Soviet Union 874 TBq, US 554 TBq, Japan 606.2 tonnes, New Zealand 1+ TBq. 751,000 m³ was dumped by Japan and the Soviet Union. The United States reported neither tonnage nor volume for 56,261 containers.
Dumping of contaminated water at the 2011 Fukushima nuclear accident (estimate 4,700–27,000 TBq) is not included.
Sea of Japan
The Soviet Union dumped 749 TBq. Japan dumped 15.1 TBq south of main island. South Korea dumped 45 tonnes (unknown radioactivity value).
Environmental impact
Data are from IAEA-TECDOC-1105.
Arctic Ocean
Joint Russian-Norwegian expeditions (1992–94) collected samples from four dump sites. In the immediate vicinity of waste containers, elevated levels of radionuclides were found, but these had not contaminated the surrounding area.
North-East Atlantic Ocean
Dumping was undertaken by UK, Switzerland, Belgium, France, the Netherlands, Sweden, Germany and Italy.
The IAEA had been studying the area since 1977. A 1996 report by CRESP suggests measurable leakages of radioactive material, and concluded that the environmental impact is negligible.
North-East Pacific Ocean, North-West Atlantic Ocean dump sites of USA
These sites are monitored by the United States Environmental Protection Agency and US National Oceanic and Atmospheric Administration. So far, no excess level of radionuclides was found in samples (sea water, sediments) collected in the area, except the sample taken at a location close to disposed packages that contained elevated levels of isotopes of caesium and plutonium.
North-West Pacific Ocean dump sites of the Soviet Union, Japan, Russia, and Korea
The joint Japanese-Korean-Russian expedition (1994–95) concluded that contamination resulted mainly from global fallout. The USSR dumped waste in the Sea of Japan. Japan dumped waste south of the main island.
Policies
The first conversations surrounding dumping radioactive waste into the ocean began in 1958 at the United Nations Law of the Sea Conference (UNCLOS). The conference resulted in an agreement that all states should actively try to prevent radioactive waste pollution in the sea and follow any international guidelines regarding the issue. The UNCLOS also instigated research into the issues radioactive waste dumping caused.
However, by the late 1960s to early 1970s, millions of tons of waste were still being dumped into the ocean annually. By this time, governments began to realize the severe impacts of marine pollution, which led to one of the first international policies regarding ocean dumping in 1972 – the London Convention. The London Convention's main goals were to effectively control sources of marine pollution and take the proper steps to prevent it from happening, mainly accomplishing this by banning specific substances from being dumped in the ocean. The most recent version of the London Convention now bans all materials from marine dumping, except a thoroughly researched list of certain wastes. It also prohibits waste from being exported to other countries for disposal, as well as incinerating waste in the ocean. While smaller organizations like the Nuclear Energy Agency of the European Organization for Economic Cooperation and Development have produced similar regulations, the London Convention remains the central international figure of radioactive waste policies.
Although there are many existing regulations that ban ocean dumping, it is still a prevalent issue. Different countries enforce the ban on radioactive waste dumping on different levels, resulting in an inconsistent implementation of the agreed upon policies. Because of these discrepancies, it is hard to judge the effectiveness of international regulations like the London Convention.
Ocean floor disposal
Ocean floor disposal is a method of sequestering radioactive waste in ocean floor sediment where it is unlikely to be disturbed either geologically or by human activity.
Several methods of depositing material in the ocean floor have been proposed, including encasing it in concrete and as the United Kingdom has previously done, dropping it in torpedoes designed to increase the depth of penetration into the ocean floor, or depositing containers in shafts drilled with techniques similar to those used in oil exploration.
Ocean floor sediment is saturated with water, but since there is no water table per se and the water does not flow through it the migration of dissolved waste is limited to the rate at which it can diffuse through dense clay. This is slow enough that it could potentially take millions of years for waste to diffuse through several tens of meters of sediment so that by the time it reaches open ocean it would be highly dilute and decayed. Large regions of the ocean floor are thought to be completely geologically inactive and it is not expected that there will be extensive human activity there in the future. Water absorbs essentially all radiation within a few meters provided the waste remains contained.
One of the problems associated with this option includes the difficulty of recovering the waste, if necessary, once it is emplaced deep in the ocean. Also, establishing an effective international structure to develop, regulate, and monitor a sub-seabed repository would be extremely difficult.
Beyond technical and political considerations, the London Convention places prohibitions on disposing of radioactive materials at sea and does not make a distinction between waste dumped directly into the water and waste that is buried underneath the ocean's floor. The prohibition remained in force until 2018; thereafter, the sub-seabed disposal option may be revisited at 25-year intervals.
Depositing waste, in suitable containers, in subduction zones has also been suggested. Here, waste would be transported by plate tectonic movement into the Earth's mantle and rendered harmless through dilution and natural decay. Several objections have been raised to this method, including vulnerabilities during transport and disposal, as well as uncertainties in the actual tectonic processes.
See also
Horizontal drillhole disposal
Deep borehole disposal
Yucca Mountain nuclear waste repository
Waste Isolation Pilot Plant
Grigory Pasko
Nuclear fuel cycle
Decommissioning of Russian nuclear-powered vessels
References
Radioactive waste repositories
Hazardous waste
Environmental impact of nuclear power
Ocean pollution | Ocean disposal of radioactive waste | Chemistry,Technology,Environmental_science | 2,343 |
6,037,443 | https://en.wikipedia.org/wiki/Friction%20tape | Friction tape is a type of woven cloth adhesive tape, historically made of cotton, impregnated with a rubber-based adhesive. Sticky on both sides, it is mainly used by electricians to insulate splices in electric wires and cables. The rubber-based adhesive provides a degree of protection from liquids and corrosion, while the cloth mesh protects against punctures and abrasion. It has largely been supplanted by PVC-based electrical tape, although it remains in common use for overwrapping.
Other uses
Friction tape is commonly used to improve the grip on various sporting implements, including tennis racquets, baseball bats, and hockey sticks. It is also used similarly on the handlebars of bicycles, dirt bikes, lawnmowers, and other small machines that require gripping or steering.
See also
List of adhesive tapes
References
Adhesive tape
Dielectrics | Friction tape | Physics | 181 |
53,868,622 | https://en.wikipedia.org/wiki/Juansher%20Chkareuli | Juansher Chkareuli (; born January 13, 1940, Tbilisi) is a Georgian theoretical physicist working in particle physics, Head of Particle Physics Department at Andronikashvili Institute of Physics of Tbilisi State University and Professor at Institute of Theoretical Physics of Ilia State University in Tbilisi.
Academic career
He studied at Tbilisi State University and Lebedev Physical Institute (Moscow), and received MSc in Theoretical Physics in 1965. He completed his PhD in 1970 and DSc in 1985 in Andronikashvili Institute of Physics (Tbilisi) and Joint Institute for Nuclear Research (Russia).
Subsequently, he worked as Principal Research Fellow at Andronikashvili Institute of Physics (1985–present); Professor of Theoretical Physics at Tbilisi State University (1986-1990); Professor of Theoretical Physics at Ilia State University (2006–present).
In 1991-2012 he was also a visiting research professor at many leading centers in high energy physics, including the European Organization for Nuclear Research (CERN) in Geneva, the International Center for Theoretical Physics (ICTP) in Trieste, the Max-Planck Institute in Munich, the University of Glasgow, the University of Maryland, the University of Melbourne and the Institute of High Energy Physics in Beijing.
J.L. Chkareuli is primarily known for his works on family symmetries, extended grand unified theories and emergent gauge and gravity theories. These developments include: An introduction of the chiral family symmetry SU(3) for quark-lepton generations and its application to the flavor mixing of quarks and leptons; A novel missing VEV mechanism in the supersymmetric SU(8) grand unified theory suggesting a simultaneous solution to the gauge hierarchy problem and unification of flavor; New nonlinear sigma models for emergent gauge and gravity theories leading to dynamical generation of local internal and spacetime symmetries with gauge fields and gravitons as massless vector/tensor Goldstone bosons.
He is also known as President of the Georgian Physical Society (1993–99), and an organizer and co-organizer of some notable conferences and workshops on high energy physics – Annual Georgian Winter School on particle physics and cosmology (Bakuriani, Georgia, 1970–1993) which was one of the most popular scientific meetings in the former Soviet Union; International seminar "Standard Model and Beyond" (Tbilisi, 1996); International conference "Low dimensional physics and gauge principles" (Yerevan & Tbilisi, 2011) and others.
Honours and awards
Royal Society Fellowship (1993–94); Royal Society Joint Project Grant (1999-2000), Georgia-US Bilateral Grant (2003-2005); Member of the American Physical Society (1993), Fellow of the Institute of Physics (UK, 2000). Listed in the biographical dictionaries including «Who is Who in Science and Engineering» (2008), Marquis Who's Who, NY; «2000 Outstanding Scientists 2008/2009» (2010), International Biographical Centre, Cambridge.
References
External links
J.L. Chkareuli at Center for Elementary Particle Physics, Ilia State University
J.L. Chkareuli at Andronikashvili Institute of Physics, Tbilisi State University
Scientific publications of J.L. Chkareuli on INSPIRE-HEP
J.L. Chkareuli on Google Scholar
1940 births
Living people
Scientists from Tbilisi
Physicists from Georgia (country)
Theoretical physicists
Particle physicists
People associated with CERN | Juansher Chkareuli | Physics | 696 |
171,104 | https://en.wikipedia.org/wiki/Chitin | Chitin (C8H13O5N)n is a long-chain polymer of N-acetylglucosamine, an amide derivative of glucose. Chitin is the second most abundant polysaccharide in nature (behind only cellulose); an estimated 1 billion tons of chitin are produced each year in the biosphere. It is a primary component of cell walls in fungi (especially filamentous and mushroom-forming fungi), the exoskeletons of arthropods such as crustaceans and insects, the radulae, cephalopod beaks and gladii of molluscs and in some nematodes and diatoms.
It is also synthesised by at least some fish and lissamphibians. Commercially, chitin is extracted from the shells of crabs, shrimps, shellfish and lobsters, which are major by-products of the seafood industry. The structure of chitin is comparable to cellulose, forming crystalline nanofibrils or whiskers. It is functionally comparable to the protein keratin. Chitin has proved useful for several medicinal, industrial and biotechnological purposes.
Etymology
The English word "chitin" comes from the French word chitine, which was derived in 1821 from the Greek word χιτών (khitōn) meaning covering.
A similar word, "chiton", refers to a marine animal with a protective shell.
Chemistry, physical properties and biological function
The structure of chitin was determined by Albert Hofmann in 1929. Hofmann hydrolyzed chitin using a crude preparation of the enzyme chitinase, which he obtained from the snail Helix pomatia.
Chitin is a modified polysaccharide that contains nitrogen; it is synthesized from units of N-acetyl-D-glucosamine (to be precise, 2-(acetylamino)-2-deoxy-D-glucose). These units form covalent β-(1→4)-linkages (like the linkages between glucose units forming cellulose). Therefore, chitin may be described as cellulose with one hydroxyl group on each monomer replaced with an acetyl amine group. This allows for increased hydrogen bonding between adjacent polymers, giving the chitin-polymer matrix increased strength.
In its pure, unmodified form, chitin is translucent, pliable, resilient, and quite tough. In most arthropods, however, it is often modified, occurring largely as a component of composite materials, such as in sclerotin, a tanned proteinaceous matrix, which forms much of the exoskeleton of insects. Combined with calcium carbonate, as in the shells of crustaceans and molluscs, chitin produces a much stronger composite. This composite material is much harder and stiffer than pure chitin, and is tougher and less brittle than pure calcium carbonate. Another difference between pure and composite forms can be seen by comparing the flexible body wall of a caterpillar (mainly chitin) to the stiff, light elytron of a beetle (containing a large proportion of sclerotin).
In butterfly wing scales, chitin is organized into stacks of gyroids constructed of chitin photonic crystals that produce various iridescent colors serving phenotypic signaling and communication for mating and foraging. The elaborate chitin gyroid construction in butterfly wings creates a model of optical devices having potential for innovations in biomimicry. Scarab beetles in the genus Cyphochilus also utilize chitin to form extremely thin scales (five to fifteen micrometres thick) that diffusely reflect white light. These scales are networks of randomly ordered filaments of chitin with diameters on the scale of hundreds of nanometres, which serve to scatter light. The multiple scattering of light is thought to play a role in the unusual whiteness of the scales. In addition, some social wasps, such as Protopolybia chartergoides, orally secrete material containing predominantly chitin to reinforce the outer nest envelopes, composed of paper.
Chitosan is produced commercially by deacetylation of chitin by treatment with sodium hydroxide. Chitosan has a wide range of biomedical applications including wound healing, drug delivery and tissue engineering. Due to its specific intermolecular hydrogen bonding network, dissolving chitin in water is very difficult. Chitosan (with a degree of deacetylation of more than ~28%), on the other hand, can be dissolved in dilute acidic aqueous solutions below a pH of 6.0, such as acetic, formic and lactic acids. Chitosan with a degree of deacetylation greater than ~49% is soluble in water.
Humans and other mammals
Humans and other mammals have chitinase and chitinase-like proteins that can degrade chitin; they also possess several immune receptors that can recognize chitin and its degradation products, initiating an immune response.
Chitin is sensed mostly in the lungs or gastrointestinal tract where it can activate the innate immune system through eosinophils or macrophages, as well as an adaptive immune response through T helper cells. Keratinocytes in skin can also react to chitin or chitin fragments.
Plants
Plants also have receptors that can cause a response to chitin, namely chitin elicitor receptor kinase 1 and chitin elicitor-binding protein. The first chitin receptor was cloned in 2006. When the receptors are activated by chitin, genes related to plant defense are expressed, and jasmonate hormones are activated, which in turn activate systemic defenses. Commensal fungi have ways to interact with the host immune response that were not well understood.
Some pathogens produce chitin-binding proteins that mask the chitin they shed from these receptors. Zymoseptoria tritici is an example of a fungal pathogen that has such blocking proteins; it is a major pest in wheat crops.
Fossil record
Chitin was probably present in the exoskeletons of Cambrian arthropods such as trilobites. The oldest preserved (intact) chitin samples thus far reported date to the Oligocene, from specimens encased in amber in which the chitin has not completely degraded.
Uses
Agriculture
Chitin is a good inducer of plant defense mechanisms for controlling diseases. It has potential for use as a soil fertilizer or conditioner to improve fertility and plant resilience that may enhance crop yields.
Industrial
Chitin is used in many industrial processes. Examples of the potential uses of chemically modified chitin in food processing include the formation of edible films and as an additive to thicken and stabilize foods and food emulsions. Processes to size and strengthen paper employ chitin and chitosan.
Research
How chitin interacts with the immune system of plants and animals has been an active area of research, including the identity of key receptors with which chitin interacts, whether the size of chitin particles is relevant to the kind of immune response triggered, and mechanisms by which immune systems respond. Chitin is deacetylated chemically or enzymatically to produce chitosan, a highly biocompatible polymer which has found a wide range of applications in the biomedical industry. Chitin and chitosan have been explored as a vaccine adjuvant due to its ability to stimulate an immune response.
Chitin and chitosan are under development as scaffolds in studies of how tissue grows and how wounds heal, and in efforts to invent better bandages, surgical thread, and materials for allotransplantation. Sutures made of chitin have been experimentally developed, but their lack of elasticity and problems making thread have prevented commercial success so far.
Chitosan has been demonstrated and proposed to make a reproducible form of biodegradable plastic. Chitin nanofibers are extracted from crustacean waste and mushrooms for possible development of products in tissue engineering, drug delivery and medicine.
Chitin has been proposed for use in building structures, tools, and other solid objects from a composite material, combining chitin with Martian regolith. To build this, the biopolymers in the chitin are suggested as the binder for the regolith aggregate to form a concrete-like composite material. The authors believe that waste materials from food production (e.g. scales from fish, exoskeletons from crustaceans and insects, etc.) could be put to use as feedstock for manufacturing processes.
See also
Chitobiose
Lorica
Sporopollenin
Tectin
References
External links
Acetamides
Biomolecules
Biopesticides
Polysaccharides | Chitin | Chemistry,Biology | 1,830 |
7,222,298 | https://en.wikipedia.org/wiki/Earthpark | Earthpark is a proposed best-in-class educational facility with indoor rain forest and aquarium elements, and a mission of "inspiring generations to learn from the natural world." It was previously called the Environmental Project. Inspired by The Eden Project in Cornwall, England, Earthpark was intended to be an educational ecosystem and a popular visitor attraction. The project has remained dormant since 2008, though it was briefly revisited in 2018.
History and funding
Earthpark was to be located around Lake Red Rock, near the town of Pella, Iowa.
Proposals from a variety of potential hosts, some outside of Iowa, were considered. Talks between Earthpark and the city of Coralville, Iowa, the original planned location of Earthpark, ended in 2006. The project then found a new home in Pella, Iowa, but that did not become a reality.
In December 2007, the $50 million in federal funding from the U.S. Department of Energy for the Pella location was rescinded.
In August 2008, when asked if any efforts would be made to get additional federal money for the project, U.S. Senator Chuck Grassley said "Not by this senator, and I don't think there will be any by other senators." That same year, one of the prospective founders behind the project, Ted Townsend (son of Ray Townsend), pledged $32.9 million of his own money.
Size and scope
The total project cost for Earthpark was estimated to be $155 million, of which over a third had been secured. The complex was planned to include a 600,000-gallon aquarium and outdoor wetland and prairie exhibits.
The Earthpark project was expected to employ 150 people directly and create an additional 2000 indirect jobs. The economic impact was estimated to be US$130 million annually. The park was projected to draw 1 million visitors annually to the Pella area.
References
External links
Entertainment venues in Iowa
Ecological experiments
Environmental design | Earthpark | Engineering | 398 |
7,013,296 | https://en.wikipedia.org/wiki/Sparteine | Sparteine is a class 1a antiarrhythmic agent and sodium channel blocker. It is an alkaloid and can be extracted from scotch broom. It is the predominant alkaloid in Lupinus mutabilis, and is thought to chelate the bivalent metals calcium and magnesium. It is not FDA approved for human use as an antiarrhythmic agent, and it is not included in the Vaughan Williams classification of antiarrhythmic drugs.
It is also used as a chiral ligand in organic chemistry, especially in syntheses involving organolithium reagents.
Biosynthesis
Sparteine is a lupin alkaloid containing a tetracyclic bis-quinolizidine ring system derived from three C5 chains of lysine, or more specifically, L-lysine. The first intermediate in the biosynthesis is cadaverine, the decarboxylation product of lysine catalyzed by the enzyme lysine decarboxylase (LDC). Three units of cadaverine are used to form the quinolizidine skeleton. The mechanism of formation has been studied enzymatically, as well as with tracer experiments, but the exact route of synthesis still remains unclear.
Tracer studies using 13C-15N-doubly labeled cadaverine have shown three units of cadaverine are incorporated into sparteine and two of the C-N bonds from two of the cadaverine units remain intact. The observations have also been confirmed using 2H NMR labeling experiments.
Enzymatic evidence then showed that the three molecules of cadaverine are transformed to the quinolizidine ring via enzyme bound intermediates, without the generation of any free intermediates. Originally, it was thought that conversion of cadaverine to the corresponding aldehyde, 5-aminopentanal, was catalyzed by the enzyme diamine oxidase. The aldehyde then spontaneously converts to the corresponding Schiff base, Δ1-piperideine. Coupling of two molecules occurs between the two tautomers of Δ1-piperideine in an aldol-type reaction. The imine is then hydrolyzed to the corresponding aldehyde/amine. The primary amine is then oxidized to an aldehyde followed by formation of the imine to yield the quinolizidine ring.
Via 17-oxosparteine synthase
More recent enzymatic evidence has indicated the presence of 17-oxosparteine synthase (OS), a transaminase enzyme. The deaminated cadaverine is not released from the enzyme, so it can be assumed that the enzyme catalyzes the formation of the quinolizidine skeleton in a channeled fashion. The formation of 17-oxosparteine requires four units of pyruvate as the NH2 acceptors and produces four molecules of alanine. Both lysine decarboxylase and the quinolizidine skeleton-forming enzyme are localized in chloroplasts.
See also
Lupinus
Lupin poisoning
References
External links
Antiarrhythmic agents
Quinolizidine alkaloids
Sodium channel blockers | Sparteine | Chemistry | 675 |
7,604,268 | https://en.wikipedia.org/wiki/AecXML | aecXML (architecture, engineering and construction extensible markup language) is a specific XML markup language which uses Industry Foundation Classes to create a vendor-neutral means to access data generated by building information modeling, BIM. It is being developed for use in the architecture, engineering, construction and facility management industries, in conjunction with BIM software, and is trademarked by buildingSMART (the former International Alliance for Interoperability), a council of the National Institute of Building Sciences.
Specific subsets are being developed, namely:
Common Object Schema
Infrastructure
Structural
Facility management
Procurement
Project Management
Plant Operations
Building Environmental Performance
See also
Industry Foundation Classes
BuildingSMART
BIM Collaboration Format
External links
National Institute of Building Sciences (NIBS) buildingSMART Alliance (bSa)
Model Support Group (MSG) of NIBS bSa Responsible for AEC Industry Foundation Class (IFC) Development since ~2006
Links Obsolete by July 2009 at the latest:
North American Chapter of the International Alliance for Interoperability
Proposed common objects - a PDF file
XML markup languages
Building information modeling | AecXML | Engineering | 219 |
5,539,282 | https://en.wikipedia.org/wiki/Borel%20hierarchy | In mathematical logic, the Borel hierarchy is a stratification of the Borel algebra generated by the open subsets of a Polish space; elements of this algebra are called Borel sets. Each Borel set is assigned a unique countable ordinal number called the rank of the Borel set. The Borel hierarchy is of particular interest in descriptive set theory.
One common use of the Borel hierarchy is to prove facts about the Borel sets using transfinite induction on rank. Properties of sets of small finite ranks are important in measure theory and analysis.
Borel sets
The Borel algebra in an arbitrary topological space is the smallest collection of subsets of the space that contains the open sets and is closed under countable unions and complementation. It can be shown that the Borel algebra is closed under countable intersections as well.
A short proof that the Borel algebra is well-defined proceeds by showing that the entire powerset of the space is closed under complements and countable unions, and thus the Borel algebra is the intersection of all families of subsets of the space that have these closure properties. This proof does not give a simple procedure for determining whether a set is Borel. A motivation for the Borel hierarchy is to provide a more explicit characterization of the Borel sets.
Boldface Borel hierarchy
The Borel hierarchy or boldface Borel hierarchy on a space X consists of classes $\mathbf{\Sigma}^0_\alpha$, $\mathbf{\Pi}^0_\alpha$, and $\mathbf{\Delta}^0_\alpha$ for every countable ordinal $\alpha$ greater than zero. Each of these classes consists of subsets of X. The classes are defined inductively from the following rules:
A set is in $\mathbf{\Sigma}^0_1$ if and only if it is open.
A set is in $\mathbf{\Pi}^0_\alpha$ if and only if its complement is in $\mathbf{\Sigma}^0_\alpha$.
A set $A$ is in $\mathbf{\Sigma}^0_\alpha$ for $\alpha > 1$ if and only if there is a sequence of sets $A_1, A_2, \ldots$ such that each $A_i$ is in $\mathbf{\Pi}^0_{\alpha_i}$ for some $\alpha_i < \alpha$ and $A = \bigcup_i A_i$.
A set is in $\mathbf{\Delta}^0_\alpha$ if and only if it is both in $\mathbf{\Sigma}^0_\alpha$ and in $\mathbf{\Pi}^0_\alpha$.
The motivation for the hierarchy is to follow the way in which a Borel set could be constructed from open sets using complementation and countable unions.
A Borel set is said to have finite rank if it is in $\mathbf{\Sigma}^0_\alpha$ for some finite ordinal $\alpha$; otherwise it has infinite rank.
If the underlying space is metrizable (in particular, if it is Polish), then the hierarchy can be shown to have the following properties:
For every α, $\mathbf{\Sigma}^0_\alpha \cup \mathbf{\Pi}^0_\alpha \subseteq \mathbf{\Delta}^0_{\alpha+1}$. Thus, once a set is in $\mathbf{\Sigma}^0_\alpha$ or $\mathbf{\Pi}^0_\alpha$, that set will be in all classes in the hierarchy corresponding to ordinals greater than α.
$\bigcup_{\alpha < \omega_1} \mathbf{\Sigma}^0_\alpha = \bigcup_{\alpha < \omega_1} \mathbf{\Pi}^0_\alpha = \bigcup_{\alpha < \omega_1} \mathbf{\Delta}^0_\alpha$. Moreover, a set is in this union if and only if it is Borel.
If $X$ is an uncountable Polish space, it can be shown that $\mathbf{\Sigma}^0_\alpha$ is not contained in $\mathbf{\Pi}^0_\alpha$ for any $\alpha < \omega_1$, and thus the hierarchy does not collapse.
Borel sets of small rank
The classes of small rank are known by alternate names in classical descriptive set theory.
The $\mathbf{\Sigma}^0_1$ sets are the open sets. The $\mathbf{\Pi}^0_1$ sets are the closed sets.
The $\mathbf{\Sigma}^0_2$ sets are countable unions of closed sets, and are called Fσ sets. The $\mathbf{\Pi}^0_2$ sets are the dual class, and can be written as a countable intersection of open sets. These sets are called Gδ sets.
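In this notation, the low levels and the basic inclusions between successive levels (valid in any metrizable space, in particular any Polish space) can be summarized as:

```latex
\begin{aligned}
&\mathbf{\Sigma}^0_1 = \text{open sets}, \qquad
 \mathbf{\Pi}^0_1 = \text{closed sets}, \qquad
 \mathbf{\Sigma}^0_2 = F_\sigma \text{ sets}, \qquad
 \mathbf{\Pi}^0_2 = G_\delta \text{ sets}, \\
&\mathbf{\Sigma}^0_\alpha \cup \mathbf{\Pi}^0_\alpha
 \;\subseteq\; \mathbf{\Delta}^0_{\alpha+1}
 \;=\; \mathbf{\Sigma}^0_{\alpha+1} \cap \mathbf{\Pi}^0_{\alpha+1}
 \qquad \text{for every countable ordinal } \alpha \ge 1 .
\end{aligned}
```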
Lightface hierarchy
The lightface Borel hierarchy (also called the effective Borel hierarchy) is an effective version of the boldface Borel hierarchy. It is important in effective descriptive set theory and recursion theory. The lightface Borel hierarchy extends the arithmetical hierarchy of subsets of an effective Polish space. It is closely related to the hyperarithmetical hierarchy.
The lightface Borel hierarchy can be defined on any effective Polish space. It consists of classes $\Sigma^0_\alpha$, $\Pi^0_\alpha$ and $\Delta^0_\alpha$ for each nonzero countable ordinal $\alpha$ less than the Church–Kleene ordinal $\omega_1^{\mathrm{CK}}$. Each class consists of subsets of the space. The classes, and codes for elements of the classes, are inductively defined as follows:
A set is $\Sigma^0_1$ if and only if it is effectively open, that is, an open set which is the union of a computably enumerable sequence of basic open sets. A code for such a set is a pair (0,e), where e is the index of a program enumerating the sequence of basic open sets.
A set is $\Pi^0_\alpha$ if and only if its complement is $\Sigma^0_\alpha$. A code for one of these sets is a pair (1,c) where c is a code for the complementary set.
A set $A$ is $\Sigma^0_\alpha$ for $\alpha > 1$ if there is a computably enumerable sequence of codes for a sequence of sets $A_1, A_2, \ldots$ such that each $A_i$ is $\Pi^0_{\alpha_i}$ for some $\alpha_i < \alpha$ and $A = \bigcup_i A_i$. A code for a $\Sigma^0_\alpha$ set is a pair (2,e), where e is an index of a program enumerating the codes of the sequence $A_1, A_2, \ldots$.
A code for a lightface Borel set gives complete information about how to recover the set from sets of smaller rank. This contrasts with the boldface hierarchy, where no such effectivity is required. Each lightface Borel set has infinitely many distinct codes. Other coding systems are possible; the crucial idea is that a code must effectively distinguish between effectively open sets, complements of sets represented by previous codes, and computable enumerations of sequences of codes.
It can be shown that for each $\alpha < \omega_1^{\mathrm{CK}}$ there are sets in $\Sigma^0_{\alpha+1}$ that are not in $\Pi^0_\alpha$, and thus the hierarchy does not collapse. No new sets would be added at stage $\omega_1^{\mathrm{CK}}$, however.
A famous theorem due to Spector and Kleene states that a set is in the lightface Borel hierarchy if and only if it is at level $\Delta^1_1$ of the analytical hierarchy. These sets are also called hyperarithmetic. Additionally, for all natural numbers $n$, the classes $\Sigma^0_n$ and $\Pi^0_n$ of the effective Borel hierarchy are the same as the classes $\Sigma^0_n$ and $\Pi^0_n$ of the arithmetical hierarchy of the same name.
The code for a lightface Borel set A can be used to inductively define a tree whose nodes are labeled by codes. The root of the tree is labeled by the code for A. If a node is labeled by a code of the form (1,c) then it has a child node whose code is c. If a node is labeled by a code of the form (2,e) then it has one child for each code enumerated by the program with index e. If a node is labeled with a code of the form (0,e) then it has no children. This tree describes how A is built from sets of smaller rank. The ordinals used in the construction of A ensure that this tree has no infinite path, because any infinite path through the tree would have to include infinitely many codes starting with 2, and thus would give an infinite decreasing sequence of ordinals. Conversely, if an arbitrary tree has its nodes labeled by codes in a consistent way, and the tree has no infinite paths, then the code at the root of the tree is a code for a lightface Borel set. The rank of this set is bounded by the order type of the tree in the Kleene–Brouwer order. Because the tree is arithmetically definable, this rank must be less than $\omega_1^{\mathrm{CK}}$. This is the origin of the Church–Kleene ordinal in the definition of the lightface hierarchy.
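Purely as an illustration of this tree-of-codes structure, the toy sketch below mimics finite code trees and computes a crude level bound; it is not the actual definition, since real codes use indices of programs enumerating possibly infinite sequences, which are here replaced by explicit finite lists:

```python
def level_bound(code):
    """Crude bound on the hierarchy level at which a (simplified) coded set appears.
    Codes are tuples: (0, name) for a basic effectively open set,
    (1, c) for the complement of the set coded by c, and
    (2, [c1, c2, ...]) for a union, truncated here to a finite list of child codes."""
    tag = code[0]
    if tag == 0:                     # basic effectively open set: level 1
        return 1
    if tag == 1:                     # complement: same level as the coded set
        return level_bound(code[1])
    if tag == 2:                     # union: one more than the largest child level
        return 1 + max(level_bound(c) for c in code[1])
    raise ValueError("unknown code tag")

# A union of complements of open sets, i.e. a countable-union-of-closed-sets style code.
example = (2, [(1, (0, "U1")), (1, (0, "U2")), (1, (0, "U3"))])
print(level_bound(example))          # -> 2
```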
Relation to other hierarchies
See also
Projective hierarchy
Wadge hierarchy
Veblen hierarchy
References
Sources
Kechris, Alexander. Classical Descriptive Set Theory. Graduate Texts in Mathematics v. 156, Springer-Verlag, 1995. .
Jech, Thomas. Set Theory, 3rd edition. Springer, 2003. .
Descriptive set theory
Mathematical logic hierarchies | Borel hierarchy | Mathematics | 1,518 |
32,622,962 | https://en.wikipedia.org/wiki/Dipeptidyl-peptidase%20IV%20family | In molecular biology, the dipeptidyl-peptidase IV family is a family of serine peptidases which belong to MEROPS peptidase family S9 (clan SC), subfamily S9B (dipeptidyl-peptidase IV). The protein fold of the peptidase domain for members of this family resembles that of serine carboxypeptidase D, the type example of clan SC. The type example of this family is Dipeptidyl peptidase-4.
Human proteins in this family are:
Dipeptidyl peptidase-4
Dipeptidyl peptidase 8
Dipeptidyl peptidase 9
Inactive dipeptidyl peptidase 10
Dipeptidyl aminopeptidase-like protein 6
Seprase
External links
MEROPS entry for family S9B
References
Protein families | Dipeptidyl-peptidase IV family | Biology | 188 |
1,575,807 | https://en.wikipedia.org/wiki/HD%20188015 | HD 188015 is a yellow-hued star with an exoplanetary companion in the northern constellation of Vulpecula. It has an apparent visual magnitude of 8.24, making it an 8th magnitude star, and thus is too faint to be readily visible to the naked eye. The distance to this star, estimated through parallax measurements, is 165.6 light years from the Sun.
This star was assigned a stellar classification of G5IV by J. F. Heard in 1956, matching the spectrum of an evolving G-type subgiant star. This suggests it has ceased or is about to stop hydrogen fusion in its core. The absolute magnitude of 4.47 lies just above the main sequence. It is estimated to be six billion years old and is chromospherically quiet with a projected rotational velocity of 5 km/s. The star is almost twice as metal-rich as the Sun. It has 1.1 times the mass and 1.2 times the radius of the Sun. HD 188015 is radiating 1.4 times the luminosity of the Sun from its photosphere at an effective temperature of 5,726 K.
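These figures can be cross-checked with the Stefan–Boltzmann scaling L/L☉ = (R/R☉)²(T/T☉)⁴; the short sketch below assumes a solar effective temperature of about 5,772 K:

```python
R = 1.2          # stellar radius in solar radii (from the text above)
T = 5726.0       # effective temperature in kelvins (from the text above)
T_SUN = 5772.0   # assumed solar effective temperature in kelvins

luminosity_ratio = R**2 * (T / T_SUN)**4   # Stefan-Boltzmann scaling
print(round(luminosity_ratio, 2))          # ~1.4, consistent with the quoted luminosity
```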
Companions
A stellar common proper motion candidate was announced in 2006 and designated HD 188015 B. It is located at a position angle of 85° relative to the primary. The photometric distance estimate for this object matches that of the primary within the margin of error.
A Jovian planetary companion to this star was announced in 2005, based on radial velocity measurements indicating a periodic perturbation. It orbits the host star with an eccentricity (ovalness) of 0.14. The inclination of the orbital plane remains unknown, so only a lower bound on the planet's mass can be determined. It has a minimum mass equal to 1.5 times the mass of Jupiter. The orbital path of this object intersects the habitable zone of the star, which is likely to eject any Earth-like planet from that region. Nevertheless, habitable moons are still possible in this system.
See also
HD 187085
List of extrasolar planets
References
External links
G-type subgiants
Planetary systems with one confirmed planet
Vulpecula
Durchmusterung objects
188015
097769 | HD 188015 | Astronomy | 483 |
77,630,097 | https://en.wikipedia.org/wiki/Ecotoxicology%20and%20Environmental%20Safety | Ecotoxicology and Environmental Safety is an open-access peer-reviewed scientific journal published by Elsevier. The editors-in-chief are Richard Handy (University of Plymouth) and Bing Yan (Guangzhou University). Established in 1977, the journal has been published open access since 2021. It has been the official journal of the International Society of Ecotoxicology and Environmental Safety since 1986.
Abstracting and indexing
The journal is abstracted and indexed in major bibliographic databases.
According to the Journal Citation Reports, the journal has a 2023 impact factor of 6.2.
See also
List of environmental journals
References
External links
Academic journals established in 1977
Toxicology journals
Environmental health journals
Semi-monthly journals
English-language journals | Ecotoxicology and Environmental Safety | Environmental_science | 144 |
32,189,100 | https://en.wikipedia.org/wiki/Cysteine-rich%20secretory%20protein%20superfamily | The CAP superfamily (cysteine-rich secretory proteins, antigen 5, and pathogenesis-related 1 proteins (CAP)) is a large superfamily of secreted proteins that are produced by a wide range of organisms, including prokaryotes and non-vertebrate eukaryotes.
The nine subfamilies of the mammalian CAP superfamily include: the human glioma pathogenesis-related 1 (GLIPR1), Golgi associated pathogenesis related-1 (GAPR1) proteins, peptidase inhibitor 15 (PI15), peptidase inhibitor 16 (PI16), cysteine-rich secretory proteins (CRISPs), CRISP LCCL domain containing 1 (CRISPLD1), CRISP LCCL domain containing 2 (CRISPLD2), mannose receptor like and the R3H domain containing like proteins. Members are most often secreted and have an extracellular endocrine or paracrine function and are involved in processes including the regulation of extracellular matrix and branching morphogenesis, potentially as either proteases or protease inhibitors; in ion channel regulation in fertility; as tumour suppressor or pro-oncogenic genes in tissues including the prostate; and in cell-cell adhesion during fertilisation. The overall protein structural conservation within the CAP superfamily results in fundamentally similar functions for the CAP domain in all members, yet the diversity outside of this core region dramatically alters the target specificity and, thus, the biological consequences. A proposed calcium-chelating function would fit with the various signalling processes that members of this family are involved in (e.g. those of the CRISP proteins), and also with the sequence and structural evidence for a conserved pocket containing two histidines and a glutamate.
Many of these proteins contain a C-terminal Cysteine-rich secretory protein (Crisp) domain. This domain is found in the mammalian reproductive tract and the venom of reptiles, and has been shown to regulate ryanodine receptor calcium signalling. It contains 10 conserved cysteines which are all involved in disulphide bonds and is structurally related to the ion channel inhibitor toxins BgK and ShK.
References
Protein domains
Protein superfamilies | Cysteine-rich secretory protein superfamily | Biology | 463 |
43,287,600 | https://en.wikipedia.org/wiki/K%C3%B4di%20Husimi | Kōji Husimi (June 29, 1909 – May 8, 2008) was a Japanese theoretical physicist who served as the president of the Science Council of Japan. Husimi trees in graph theory, the Husimi Q representation in quantum mechanics, and Husimi's theorem in the mathematics of paper folding are named after him.
Education and career
Husimi studied at the University of Tokyo, graduating in 1933. He spent a year there as an assistant, and then moved to Osaka University in 1934, where he soon began working with Seishi Kikuchi. At Osaka, he became Dean of the Faculty of Science. He moved to Nagoya University in 1961, and directed the plasma institute there. He retired in 1973, and became a professor emeritus of both Nagoya and Osaka.
Contributions
Physics
A 1940 paper by Husimi introduced the Husimi Q representation in quantum mechanics. Husimi also gave the name to the kagome lattice, frequently used in statistical mechanics.
Graph theory
In the mathematical area of graph theory, the name "Husimi tree" has come to refer to two different kinds of graphs: cactus graphs (the graphs in which each edge belongs to at most one cycle) and block graphs (the graphs in which, for every cycle, all diagonals of the cycle are edges). Husimi studied cactus graphs in a 1950 paper, and the name "Husimi trees" was given to these graphs in a later paper by Frank Harary and George Eugene Uhlenbeck. Due to an error by later researchers, the name came to be applied to block graphs as well, causing it to become ambiguous and fall into disuse.
Pacifism and world affairs
Husimi was an early member of the Science Council of Japan, joining it in 1949, and it was largely through his efforts that the Science Council in 1954 issued a statement proposing principles for the peaceful use of nuclear power and opposing the continued existence of nuclear weapons. This statement, in turn, led to the Japanese law outlawing military uses of nuclear technology.
Later, he served as president of the Science Council of Japan from 1977 to 1982. He was also a frequent participant in the Pugwash Conferences on Science and World Affairs and a leader of the Committee of Seven for World Peace.
Recreational mathematics
Husimi's recreational interests included origami; he designed several variations of the traditional orizuru (paper crane), folded on paper shaped as a rhombus instead of the usual square, and studied the properties of the bird base that allow it to be varied within a continuous family of deformations.
With his wife, Mitsue Husimi, he wrote a book on the mathematics of origami, which included a theorem characterizing the folding patterns with four folds meeting at a single vertex that may be folded flat. The generalization of this theorem to arbitrary numbers of folds at a single vertex is sometimes called Husimi's theorem.
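The single-vertex flat-foldability condition usually quoted for this four-crease case (and, in its general form, also known as Kawasaki's theorem) is that the four sector angles $\alpha_1, \alpha_2, \alpha_3, \alpha_4$ between consecutive creases satisfy:

```latex
\alpha_1 + \alpha_3 \;=\; \alpha_2 + \alpha_4 \;=\; 180^\circ ,
\qquad \alpha_1 + \alpha_2 + \alpha_3 + \alpha_4 = 360^\circ .
```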
References
Further reading
1909 births
2008 deaths
Japanese physicists
Graph theorists
Quantum physicists
University of Tokyo alumni
Academic staff of Osaka University
Academic staff of Nagoya University
Presidents of the Physical Society of Japan | Kôdi Husimi | Physics,Mathematics | 640 |
702,847 | https://en.wikipedia.org/wiki/True-range%20multilateration | True-range multilateration (also termed range-range multilateration and spherical multilateration) is a method to determine the location of a movable vehicle or stationary point in space using multiple ranges (distances) between the vehicle/point and multiple spatially-separated known locations (often termed "stations"). Energy waves may be involved in determining range, but are not required.
True-range multilateration is both a mathematical topic and an applied technique used in several fields. A practical application involving a fixed location occurs in surveying. Applications involving vehicle location are termed navigation when on-board persons/equipment are informed of its location, and are termed surveillance when off-vehicle entities are informed of the vehicle's location.
Two slant ranges from two known locations can be used to locate a third point in a two-dimensional Cartesian space (plane), which is a frequently applied technique (e.g., in surveying). Similarly, two spherical ranges can be used to locate a point on a sphere, which is a fundamental concept of the ancient discipline of celestial navigation — termed the altitude intercept problem. Moreover, if more than the minimum number of ranges are available, it is good practice to utilize those as well. This article addresses the general issue of position determination using multiple ranges.
In two-dimensional geometry, it is known that if a point lies on two circles, then the circle centers and the two radii provide sufficient information to narrow the possible locations down to two – one of which is the desired solution and the other is an ambiguous solution. Additional information often narrows the possibilities down to a unique location. In three-dimensional geometry, when it is known that a point lies on the surfaces of three spheres, then the centers of the three spheres along with their radii also provide sufficient information to narrow the possible locations down to no more than two (unless the centers lie on a straight line).
True-range multilateration can be contrasted to the more frequently encountered pseudo-range multilateration, which employs range differences to locate a (typically, movable) point. Pseudo range multilateration is almost always implemented by measuring times-of-arrival (TOAs) of energy waves. True-range multilateration can also be contrasted to triangulation, which involves the measurement of angles.
Terminology
There is no accepted or widely used general term for what is termed true-range multilateration here. That name is selected because it: (a) is an accurate description using partially familiar terminology (multilateration is often used in this context); (b) avoids specifying the number of ranges involved (as does, e.g., range-range); (c) avoids implying an application (as do, e.g., DME/DME navigation or trilateration); and (d) avoids confusion with the more common pseudo-range multilateration.
Obtaining ranges
For similar ranges and measurement errors, a navigation and surveillance system based on true-range multilateration provides service to a significantly larger 2-D area or 3-D volume than systems based on pseudo-range multilateration. However, it is often more difficult or costly to measure true-ranges than it is to measure pseudo ranges. For distances up to a few miles and fixed locations, true-range can be measured manually. This has been done in surveying for several thousand years – e.g., using ropes and chains.
For longer distances and/or moving vehicles, a radio/radar system is generally needed. This technology was first developed circa 1940 in conjunction with radar. Since then, three methods have been employed:
Two-way range measurement, one party active – This is the method used by traditional radars (sometimes termed primary radars) to determine the range of a non-cooperative target, and now used by laser rangefinders. Its major limitations are that: (a) the target does not identify itself, and in a multiple target situation, mis-assignment of a return can occur; (b) the return signal is attenuated (relative to the transmitted signal) by the fourth power of the vehicle-station range (thus, for distances of tens of miles or more, stations generally require high-power transmitters and/or large/sensitive antennas); and (c) many systems utilize line-of-sight propagation, which limits their ranges to less than 20 miles when both parties are at similar heights above sea level.
Two-way range measurement, both parties active – This method was reportedly first used for navigation by the Y-Gerät aircraft guidance system fielded in 1941 by the Luftwaffe. It is now used globally in air traffic control – e.g., secondary radar surveillance and DME/DME navigation. It requires that both parties have both transmitters and receivers, and may require that interference issues be addressed.
One-way range measurement – The time of flight (TOF) of electromagnetic energy between multiple stations and the vehicle is measured based on transmission by one party and reception by the other. This is the most recently developed method, and was enabled by the development of atomic clocks; it requires that the vehicle (user) and stations have synchronized clocks. It has been successfully demonstrated (experimentally) with Loran-C and GPS.
Solution methods
True-range multilateration algorithms may be partitioned based on
problem space dimension (generally, two or three),
problem space geometry (generally, Cartesian or spherical) and
presence of redundant measurements (more than the problem space dimension).
Any pseudo-range multilateration algorithm can be specialized for use with true-range multilateration.
Two Cartesian dimensions, two measured slant ranges (trilateration)
An analytic solution has likely been known for over 1,000 years, and is given in several texts. Moreover, one can easily adapt algorithms for a three dimensional Cartesian space.
The simplest algorithm employs analytic geometry and a station-based coordinate frame. Thus, consider the circle centers (or stations) C1 and C2 in Fig. 1 which have known coordinates (e.g., have already been surveyed) and thus whose separation $U$ is known. Place C1 at the origin and C2 on the $x$-axis at $(U, 0)$; the figure 'page' contains C1 and C2. If a third 'point of interest' P (e.g., a vehicle or another point to be surveyed) is at unknown point $(x, y)$, then, with $r_1$ and $r_2$ denoting the measured ranges from C1 and C2 to P, Pythagoras's theorem yields
$$r_1^2 = x^2 + y^2 \qquad\qquad r_2^2 = (x - U)^2 + y^2 .$$
Thus,
$$x = \frac{r_1^2 - r_2^2 + U^2}{2U}, \qquad y = \pm\sqrt{r_1^2 - x^2} .$$
Note that $y$ has two values (i.e., the solution is ambiguous); this is usually not a problem.
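A minimal numerical sketch of this two-range solution (the station frame and the symbols U, r1, r2 follow the derivation above; the example values are hypothetical):

```python
import math

def trilaterate_2d(U, r1, r2):
    """Solve for P = (x, y) given baseline length U (C1 at the origin, C2 at (U, 0))
    and measured ranges r1 = |P - C1|, r2 = |P - C2|.
    Returns both candidate solutions (the ambiguity about the baseline)."""
    x = (r1**2 - r2**2 + U**2) / (2.0 * U)
    y_sq = r1**2 - x**2
    if y_sq < 0:
        raise ValueError("Ranges are inconsistent with the baseline (no intersection).")
    y = math.sqrt(y_sq)
    return (x, y), (x, -y)

# Example with assumed values: stations 10 km apart, ranges 8 km and 6 km.
print(trilaterate_2d(10.0, 8.0, 6.0))   # -> ((6.4, 4.8), (6.4, -4.8))
```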
While there are many enhancements, the relationship above is the most fundamental true-range multilateration relationship. Aircraft DME/DME navigation and the trilateration method of surveying are examples of its application. During World War II, Oboe, and during the Korean War, SHORAN, used the same principle to guide aircraft based on measured ranges to two ground stations. SHORAN was later used for off-shore oil exploration and for aerial surveying. The Australian Aerodist aerial survey system utilized 2-D Cartesian true-range multilateration. This 2-D scenario is sufficiently important that the term trilateration is often applied to all applications involving a known baseline and two range measurements.
The baseline containing the centers of the circles is a line of symmetry. The correct and ambiguous solutions are perpendicular to and equally distant from (on opposite sides of) the baseline. Usually, the ambiguous solution is easily identified. For example, if P is a vehicle, any motion toward or away from the baseline will be opposite that of the ambiguous solution; thus, a crude measurement of vehicle heading is sufficient. A second example: surveyors are well aware of which side of the baseline that P lies. A third example: in applications where P is an aircraft and C1 and C2 are on the ground, the ambiguous solution is usually below ground.
If needed, the interior angles of triangle C1-C2-P can be found using the trigonometric law of cosines. Also, if needed, the coordinates of P can be expressed in a second, better-known coordinate system—e.g., the Universal Transverse Mercator (UTM) system—provided the coordinates of C1 and C2 are known in that second system. Both are often done in surveying when the trilateration method is employed. Once the coordinates of P are established, lines C1-P and C2-P can be used as new baselines, and additional points surveyed. Thus, large areas or distances can be surveyed based on multiple, smaller triangles—termed a traverse.
An implied assumption for the above equations to be true is that $r_1$ and $r_2$ relate to the same position of P. When P is a vehicle, then typically $r_1$ and $r_2$ must be measured within a synchronization tolerance that depends on the vehicle speed and the allowable vehicle position error. Alternatively, vehicle motion between range measurements may be accounted for, often by dead reckoning.
A trigonometric solution is also possible (side-side-side case). Also, a solution employing graphics is possible. A graphical solution is sometimes employed during real-time navigation, as an overlay on a map.
Three Cartesian dimensions, three measured slant ranges
There are multiple algorithms that solve the 3-D Cartesian true-range multilateration problem directly (i.e., in closed-form) – e.g., Fang. Moreover, one can adopt closed-form algorithms developed for pseudo range multilateration. Bancroft's algorithm (adapted) employs vectors, which is an advantage in some situations.
The simplest algorithm corresponds to the sphere centers in Fig. 2. The figure 'page' is the plane containing C1, C2 and C3; take C1 at the origin, C2 on the $x$-axis at $(U, 0, 0)$ and C3 in the $x,y$-plane at $(V_x, V_y, 0)$. If P is a 'point of interest' (e.g., vehicle) at $(x, y, z)$, then Pythagoras's theorem yields the slant ranges between P and the sphere centers:
$$r_1^2 = x^2 + y^2 + z^2 \qquad r_2^2 = (x - U)^2 + y^2 + z^2 \qquad r_3^2 = (x - V_x)^2 + (y - V_y)^2 + z^2 .$$
Thus, the coordinates of P are:
$$x = \frac{r_1^2 - r_2^2 + U^2}{2U}, \qquad y = \frac{r_1^2 - r_3^2 + V_x^2 + V_y^2 - 2 V_x x}{2 V_y}, \qquad z = \pm\sqrt{r_1^2 - x^2 - y^2} .$$
The plane containing the sphere centers is a plane of symmetry. The correct and ambiguous solutions are perpendicular to it and equally distant from it, on opposite sides.
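A corresponding sketch for the three-range case (same assumed station frame as above, with C2 and C3 placed at (U, 0, 0) and (Vx, Vy, 0); all numeric values are hypothetical):

```python
import math

def trilaterate_3d(U, Vx, Vy, r1, r2, r3):
    """Solve for P = (x, y, z) with C1 at the origin, C2 at (U, 0, 0) and
    C3 at (Vx, Vy, 0), given the three measured slant ranges r1, r2, r3.
    Returns both candidate solutions (mirror images about the C1-C2-C3 plane)."""
    x = (r1**2 - r2**2 + U**2) / (2.0 * U)
    y = (r1**2 - r3**2 + Vx**2 + Vy**2 - 2.0 * Vx * x) / (2.0 * Vy)
    z_sq = r1**2 - x**2 - y**2
    if z_sq < 0:
        raise ValueError("Ranges are inconsistent with the station geometry.")
    z = math.sqrt(z_sq)
    return (x, y, z), (x, y, -z)

# Example with assumed station and range values.
print(trilaterate_3d(10.0, 4.0, 7.0, 9.0, 7.0, 8.0))
```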
Many applications of 3-D true-range multilateration involve short ranges—e.g., precision manufacturing. Integrating range measurement from three or more radars (e.g., FAA's ERAM) is a 3-D aircraft surveillance application. 3-D true-range multilateration has been used on an experimental basis with GPS satellites for aircraft navigation. The requirement that an aircraft be equipped with an atomic clock precludes its general use. However, GPS receiver clock aiding is an area of active research, including aiding over a network. Thus, conclusions may change. 3-D true-range multilateration was evaluated by the International Civil Aviation Organization as an aircraft landing system, but another technique was found to be more efficient. Accurately measuring the altitude of aircraft during approach and landing requires many ground stations along the flight path.
Two spherical dimensions, two or more measured spherical ranges
This is a classic celestial (or astronomical) navigation problem, termed the altitude intercept problem (Fig. 3). It's the spherical geometry equivalent of the trilateration method of surveying (although the distances involved are generally much larger). A solution at sea (not necessarily involving the Sun and Moon) was made possible by the marine chronometer (introduced in 1761) and the discovery of the 'line of position' (LOP) in 1837. The solution method now most taught at universities (e.g., U.S. Naval Academy) employs spherical trigonometry to solve an oblique spherical triangle based on sextant measurements of the 'altitude' of two heavenly bodies. This problem can also be addressed using vector analysis. Historically, graphical techniques – e.g., the intercept method – were employed. These can accommodate more than two measured 'altitudes'. Owing to the difficulty of making measurements at sea, 3 to 5 'altitudes' are often recommended.
As the earth is better modeled as an ellipsoid of revolution than a sphere, iterative techniques may be used in modern implementations. In high-altitude aircraft and missiles, a celestial navigation subsystem is often integrated with an inertial navigation subsystem to perform automated navigation—e.g., U.S. Air Force SR-71 Blackbird and B-2 Spirit.
While intended as a 'spherical' pseudo range multilateration system, Loran-C has also been used as a 'spherical' true-range multilateration system by well-equipped users (e.g., Canadian Hydrographic Service). This enabled the coverage area of a Loran-C station triad to be extended significantly (e.g., doubled or tripled) and the minimum number of available transmitters to be reduced from three to two. In modern aviation, slant ranges rather than spherical ranges are more often measured; however, when aircraft altitude is known, slant ranges are readily converted to spherical ranges.
Redundant range measurements
When there are more range measurements available than there are problem dimensions, either from the same C1 and C2 (or C1, C2 and C3) stations, or from additional stations, at least these benefits accrue:
'Bad' measurements can be identified and rejected
Ambiguous solutions can be identified automatically (i.e., without human involvement) -- requires an additional station
Errors in 'good' measurements can be averaged, reducing their effect.
The iterative Gauss–Newton algorithm for solving non-linear least squares (NLLS) problems is generally preferred when there are more 'good' measurements than the minimum necessary. An important advantage of the Gauss–Newton method over many closed-form algorithms is that it treats range errors linearly, which is often their nature, thereby reducing the effect of range errors by averaging. The Gauss–Newton method may also be used with the minimum number of measured ranges. Since it is iterative, the Gauss–Newton method requires an initial solution estimate.
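As an illustration of the iterative approach (a generic sketch, not any particular fielded implementation; the station layout, measured ranges and initial guess are assumed test values), a Gauss–Newton position fix from redundant 2-D ranges might look like:

```python
import numpy as np

def gauss_newton_fix(stations, ranges, x0, iterations=10):
    """Least-squares 2-D position from redundant true-range measurements.
    stations: (n, 2) array of known station coordinates
    ranges:   (n,) array of measured ranges (n >= 2, ideally n > 2)
    x0:       initial position estimate (e.g., from a closed-form two-station solution)"""
    stations = np.asarray(stations, dtype=float)
    x = np.asarray(x0, dtype=float)
    for _ in range(iterations):
        diff = x - stations                    # vectors from each station to the estimate
        pred = np.linalg.norm(diff, axis=1)    # predicted ranges at the current estimate
        J = diff / pred[:, None]               # Jacobian of range w.r.t. position
        residual = np.asarray(ranges, dtype=float) - pred
        dx, *_ = np.linalg.lstsq(J, residual, rcond=None)
        x = x + dx
        if np.linalg.norm(dx) < 1e-9:
            break
    return x

stations = np.array([[0.0, 0.0], [10.0, 0.0], [5.0, 8.0]])
ranges = np.array([8.0, 6.0, 3.4])             # assumed, slightly noisy measurements
print(gauss_newton_fix(stations, ranges, x0=[5.0, 5.0]))
```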
In 3-D Cartesian space, a fourth sphere eliminates the ambiguous solution that occurs with three ranges, provided its center is not co-planar with the first three. In 2-D Cartesian or spherical space, a third circle eliminates the ambiguous solution that occurs with two ranges, provided its center is not co-linear with the first two.
One-time application versus repetitive application
This article largely describes 'one-time' application of the true-range multilateration technique, which is the most basic use of the technique. With reference to Fig. 1, the characteristic of 'one-time' situations is that point P and at least one of C1 and C2 change from one application of the true-range multilateration technique to the next. This is appropriate for surveying, celestial navigation using manual sightings, and some aircraft DME/DME navigation.
However, in other situations, the true-range multilateration technique is applied repetitively (essentially continuously). In those situations, C1 and C2 (and perhaps Cn, n = 3,4,...) remain constant and P is the same vehicle. Example applications (and selected intervals between measurements) are: multiple radar aircraft surveillance (5 and 12 seconds, depending upon radar coverage range), aerial surveying, Loran-C navigation with a high-accuracy user clock (roughly 0.1 seconds), and some aircraft DME/DME navigation (roughly 0.1 seconds). Generally, implementations for repetitive use: (a) employ a 'tracker' algorithm (in addition to the multilateration solution algorithm), which enables measurements collected at different times to be compared and averaged in some manner; and (b) utilize an iterative solution algorithm, as they (b1) admit varying numbers of measurements (including redundant measurements) and (b2) inherently have an initial guess each time the solution algorithm is invoked.
Hybrid multilateration systems
Hybrid multilateration systems – those that are neither true-range nor pseudo range systems – are also possible. For example, in Fig. 1, if the circle centers are shifted to the left so that C1 is at $(-\tfrac{1}{2}U, 0)$ and C2 is at $(\tfrac{1}{2}U, 0)$, then the point of interest P is at
$$x = \frac{r_1^2 - r_2^2}{2U} = \frac{(r_1 + r_2)(r_1 - r_2)}{2U}, \qquad y = \pm\sqrt{r_1^2 - \left(x + \tfrac{1}{2}U\right)^2} .$$
This form of the solution explicitly depends on the sum and difference of $r_1$ and $r_2$ and does not require 'chaining' from the $x$-solution to the $y$-solution. It could be implemented as a true-range multilateration system by measuring $r_1$ and $r_2$.
However, it could also be implemented as a hybrid multilateration system by measuring $r_1 + r_2$ and $r_1 - r_2$ using different equipment – e.g., for surveillance by a multistatic radar with one transmitter and two receivers (rather than two monostatic radars). While eliminating one transmitter is a benefit, there is a countervailing 'cost': the synchronization tolerance for the two stations becomes dependent on the propagation speed (typically, the speed of light) rather than the speed of point P, in order to accurately measure both $r_1 + r_2$ and $r_1 - r_2$.
While not implemented operationally, hybrid multilateration systems have been investigated for aircraft surveillance near airports and as a GPS navigation backup system for aviation.
Preliminary and final computations
The position accuracy of a true-range multilateration system (e.g., accuracy of the coordinates of point P in Fig. 1) depends upon two factors: (1) the range measurement accuracy, and (2) the geometric relationship of P to the system's stations C1 and C2. This can be understood from Fig. 4. The two stations are shown as dots, and BLU denotes baseline units. (The measurement pattern is symmetric about both the baseline and the perpendicular bisector of the baseline, and is truncated in the figure.) As is commonly done, individual range measurement errors are taken to be independent of range, statistically independent and identically distributed. This reasonable assumption separates the effects of user-station geometry and range measurement errors on the error in the calculated coordinates of P. Here, the measurement geometry is simply the angle at which two circles cross – or equivalently, the angle between lines P-C1 and P-C2. When point P is not on a circle, the error in its position is approximately proportional to the area bounded by the nearest two blue and nearest two magenta circles.
Without redundant measurements, a true-range multilateration system can be no more accurate than the range measurements, but can be significantly less accurate if the measurement geometry is not chosen properly. Accordingly, some applications place restrictions on the location of point P. For a 2-D Cartesian (trilateration) situation, these restrictions take one of two equivalent forms:
The allowable interior angle at P between lines P-C1 and P-C2: The ideal is a right angle, which occurs at distances from the baseline of one-half or less of the baseline length; maximum allowable deviations from the ideal 90 degrees may be specified.
The horizontal dilution of precision (HDOP), which multiplies the range error in determining the position error: For two dimensions, the ideal (minimum) HDOP is the square root of 2 (approximately 1.414), which occurs when the angle between P-C1 and P-C2 is 90 degrees; a maximum allowable HDOP value may be specified. (Here, equal HDOPs are simply the locus of points in Fig. 4 having the same crossing angle.)
Planning a true-range multilateration navigation or surveillance system often involves a dilution of precision (DOP) analysis to inform decisions on the number and location of the stations and the system's service area (two dimensions) or service volume (three dimensions). Fig. 5 shows horizontal DOPs (HDOPs) for a 2-D, two-station true-range multilateration system. HDOP is infinite along the baseline and its extensions, as only one of the two dimensions is actually measured. A user of such a system should be roughly broadside of the baseline and within an application-dependent range band. For example, for DME/DME navigation fixes by aircraft, the maximum HDOP permitted by the U.S. FAA is twice the minimum possible value, or 2.828, which limits the maximum usage range (which occurs along the baseline bisector) to 1.866 times the baseline length. (The plane containing two DME ground stations and an aircraft is not strictly horizontal, but usually is nearly so.) Similarly, surveyors select point P in Fig. 1 so that C1-C2-P roughly form an equilateral triangle (where HDOP = 1.633).
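The HDOP figures quoted above (the square root of 2 for a 90-degree crossing angle, about 1.633 for an equilateral triangle) can be reproduced with the standard least-squares dilution-of-precision calculation. A minimal Python sketch, assuming independent range errors of equal variance (function names are illustrative):

    import numpy as np

    def hdop(stations, point):
        # Rows of the design matrix H are unit line-of-sight vectors from the point
        # to each station; HDOP = sqrt(trace((H^T H)^-1)).
        p = np.asarray(point, dtype=float)
        rows = []
        for s in stations:
            d = np.asarray(s, dtype=float) - p
            rows.append(d / np.linalg.norm(d))
        H = np.array(rows)
        return float(np.sqrt(np.trace(np.linalg.inv(H.T @ H))))

    stations = [(0.0, 0.0), (1.0, 0.0)]            # two stations, baseline = 1 unit
    print(hdop(stations, (0.5, 0.5)))              # 90 deg crossing angle: ~1.414
    print(hdop(stations, (0.5, 0.8660254)))        # equilateral triangle: ~1.633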
Errors in trilateration surveys are discussed in several documents. Generally, emphasis is placed on the effects of range measurement errors, rather than on the effects of algorithm numerical errors.
Applications
Land surveying using the trilateration method
Aerial surveying
Maritime archeology surveying
DME/DME RNAV aircraft navigation
Multiple radar integration (e.g., FAA ERAM)
Celestial navigation using the altitude intercept method
Intercept method—Graphical solution to the altitude intercept problem
Calibrating laser interferometers
SHORAN, Oboe, Gee-H—Aircraft guidance systems developed for 'blind' bombing
JTIDS (Joint Tactical Information Distribution System) -- U.S./NATO system that (among other capabilities) locates participants in a network using inter-participant ranges
USAF SR-71 Blackbird aircraft—Employs astro-inertial navigation
USAF B-2 Spirit aircraft—Employs astro-inertial navigation
Experimental Loran-C technique
Navigation and surveillance systems typically involve vehicles and require that a government entity or other organization deploy multiple stations that employ a form of radio technology (i.e., utilize electromagnetic waves). The advantages and disadvantages of employing true-range multilateration for such a system are shown in the following table.
True-range multilateration is often contrasted with (pseudo range) multilateration, as both require a form of user ranges to multiple stations. Complexity and cost of user equipage is likely the most important factor in limiting use of true-range multilateration for vehicle navigation and surveillance. Some uses are not the original purpose for system deployment – e.g., DME/DME aircraft navigation.
See also
Distance geometry problem, similar technique applied to molecules
Celestial navigation—ancient technique of navigation based on heavenly bodies
Distance measuring equipment (DME) -- System used to measure distance between an aircraft and a ground station
Euclidean distance
Intercept method—Graphical technique used in celestial navigation
Laser rangefinder
Multilateration – Addresses pseudo range multilateration
Rangefinder—Systems used to measure distance between two points on the ground
Resection (orientation)
SHORAN—Developed as a military aircraft navigation system, later used for civil purposes
Surveying
Tellurometer—First microwave electronic rangefinder
Triangulation – Surveying method based on measuring angles
References
External links
stackexchange.com, PHP / Python Implementation
Euclidean geometry
Geodesy
Geopositioning | True-range multilateration | Mathematics | 4,653 |
66,395,107 | https://en.wikipedia.org/wiki/2C-T-3 | 2C-T-3 (also initially numbered as 2C-T-20) is a lesser-known psychedelic drug related to compounds such as 2C-T-7 and 2C-T-16. It was named by Alexander Shulgin but was never made or tested by him, and was instead first synthesised by Daniel Trachsel some years later. It has a binding affinity of 11nM at 5-HT2A and 40nM at 5-HT2C. It is reportedly a potent psychedelic drug with an active dose in the 15–40 mg range, and a duration of action of 8–14 hours, with visual effects comparable to related drugs such as methallylescaline.
See also
2C-T-2
2C-T-4
3C-MAL
References
2C (psychedelics)
Entheogens
Thioethers
Amines | 2C-T-3 | Chemistry | 186 |
203,111 | https://en.wikipedia.org/wiki/Tropical%20and%20subtropical%20coniferous%20forests | Tropical and subtropical coniferous forests are a tropical forest habitat type defined by the World Wide Fund for Nature. These forests are found predominantly in North and Central America and experience low levels of precipitation and moderate variability in temperature. Tropical and subtropical coniferous forests are characterized by diverse species of conifers, whose needles are adapted to deal with the variable climatic conditions. Most tropical and subtropical coniferous forest ecoregions are found in the Nearctic and Neotropical realms, from Mexico to Nicaragua and on the Greater Antilles, Bahamas, and Bermuda. Other tropical and subtropical coniferous forests ecoregions occur in Asia. Mexico harbors the world's richest and most complex subtropical coniferous forests. The conifer forests of the Greater Antilles contain many endemics and relictual taxa.
Many migratory birds and butterflies spend winter in tropical and subtropical conifer forests. This biome features a thick, closed canopy which blocks light to the floor and allows little underbrush. As a result, the ground is often covered with fungi and ferns. Shrubs and small trees compose a diverse understory.
Tropical and subtropical coniferous forests ecoregions
See also
Forest
Trees of the world
Arid Forest Research Institute (AFRI)
References
External links
Terrestrial biomes
Conifers
Forests | Tropical and subtropical coniferous forests | Biology | 252 |
67,911 | https://en.wikipedia.org/wiki/Busy%20beaver | In theoretical computer science, the busy beaver game aims to find a terminating program of a given size that (depending on definition) either produces the most output possible, or runs for the longest number of steps. Since an endlessly looping program producing infinite output or running for infinite time is easily conceived, such programs are excluded from the game. Rather than traditional programming languages, the programs used in the game are n-state Turing machines, one of the first mathematical models of computation.
Turing machines consist of an infinite tape, and a finite set of states which serve as the program's "source code". Producing the most output is defined as writing the largest number of 1s on the tape, also referred to as achieving the highest score, and running for the longest time is defined as taking the longest number of steps to halt. The n-state busy beaver game consists of finding the longest-running or highest-scoring Turing machine which has n states and eventually halts. Such machines are assumed to start on a blank tape, and the tape is assumed to contain only zeros and ones (a binary Turing machine). The objective of the game is to program a set of transitions between states aiming for the highest score or longest running time while making sure the machine will halt eventually.
An n-th busy beaver, BB-n or simply "busy beaver" is a Turing machine that wins the n-state busy beaver game. Depending on definition, it either attains the highest score, or runs for the longest time, among all other possible n-state competing Turing machines. The functions determining the highest score or longest running time of the n-state busy beavers by each definition are Σ(n) and S(n) respectively.
Deciding the running time or score of the nth Busy Beaver is incomputable. In fact, both the functions Σ(n) and S(n) eventually become larger than any computable function. This has implications in computability theory, the halting problem, and complexity theory. The concept of a busy beaver was first introduced by Tibor Radó in his 1962 paper, "On Non-Computable Functions". One of the most interesting aspects of the busy beaver game is that, if it were possible to compute the functions Σ(n) and S(n) for all n, then this would resolve all mathematical conjectures which can be encoded in the form "does <this Turing machine> halt". For example, a 27-state Turing machine could check Goldbach's conjecture for each number and halt on a counterexample: if this machine had not halted after running for S(27) steps, then it must run forever, resolving the conjecture. Many other problems, including the Riemann hypothesis (744 states) and the consistency of ZF set theory (745 states), can be expressed in a similar form, where at most a countably infinite number of cases need to be checked.
Technical definition
The n-state busy beaver game (or BB-n game), introduced in Tibor Radó's 1962 paper, involves a class of Turing machines, each member of which is required to meet the following design specifications:
The machine has n "operational" states plus a Halt state, where n is a positive integer, and one of the n states is distinguished as the starting state. (Typically, the states are labelled by 1, 2, ..., n, with state 1 as the starting state, or by A, B, C, ..., with state A as the starting state.)
The machine uses a single two-way infinite (or unbounded) tape.
The tape alphabet is {0, 1}, with 0 serving as the blank symbol.
The machine's transition function takes two inputs:
the current non-Halt state,
the symbol in the current tape cell,
and produces three outputs:
a symbol to write over the symbol in the current tape cell (it may be the same symbol as the symbol overwritten),
a direction to move (left or right; that is, shift to the tape cell one place to the left or right of the current cell), and
a state to transition into (which may be the Halt state).
"Running" the machine consists of starting in the starting state, with the current tape cell being any cell of a blank (all-0) tape, and then iterating the transition function until the Halt state is entered (if ever). If, and only if, the machine eventually halts, then the number of 1s finally remaining on the tape is called the machine's score. The n-state busy beaver (BB-n) game is therefore a contest, depending on definition to find such an n-state Turing machine having the largest possible score or running time.
Example
The rules for one 1-state Turing machine might be:
In state 1, if the current symbol is 0, write a 1, move one space to the right, and transition to state 1
In state 1, if the current symbol is 1, write a 0, move one space to the right, and transition to HALT
This Turing machine would move to the right, swapping the value of all the bits it passes. Since the starting tape is all 0s, it would make an unending string of ones. This machine would not be a busy beaver contender because it runs forever on a blank tape.
Functions
In his original 1962 paper, Radó defined two functions related to the busy beaver game: the score function Σ(n) and the shifts function S(n). Both take a number of Turing machine states and output the maximum score attainable by a Turing machine of that number of states by some measure. The score function Σ(n) gives the maximum number of 1s an -state Turing machine can output before halting, while the shifts function S(n) gives the maximum number of shifts (or equivalently steps, because each step includes a shift) that an -state Turing machine can undergo before halting. He proved that both of these functions were noncomputable, because they each grew faster than any computable function. The function BB(n) has been defined to be either of these functions, so that notation is not used in this article.
A number of other uncomputable functions can also be defined based on measuring the performance of Turing machines in other ways than time or maximal number of ones. For example:
The function num(n) is defined to be the maximum number of contiguous ones a halting Turing machine can write on a blank tape. In other words, this is the largest unary number a Turing machine of n states can write on a tape.
The function space(n) is defined to be the maximal number of tape squares a halting Turing machine can read (i.e., visit) before halting. This includes the starting square, but not a square that the machine only reaches after the halt transition (if the halt transition is annotated with a move direction), because that square does not influence the machine's behaviour. This is the maximal space complexity of an n-state Turing machine.
These four functions together stand in the relation S(n) ≥ space(n) ≥ Σ(n) ≥ num(n). More functions can also be defined by operating the game on different computing machines, such as 3-symbol Turing machines, non-deterministic Turing machines, the lambda calculus or even arbitrary programming languages.
Score function Σ
The score function quantifies the maximum score attainable by a busy beaver on a given measure. This is a noncomputable function, because it grows asymptotically faster than any computable function.
The score function Σ is defined so that Σ(n) is the maximum attainable score (the maximum number of 1s finally on the tape) among all halting 2-symbol n-state Turing machines of the above-described type, when started on a blank tape.
It is clear that Σ is a well-defined function: for every n, there are at most finitely many n-state Turing machines as above, up to isomorphism, hence at most finitely many possible running times.
According to the score-based definition, any n-state 2-symbol Turing machine M whose score equals Σ(n) (i.e., which attains the maximum score) is called a busy beaver. For each n, there exist at least 4(n − 1)! n-state busy beavers. (Given any n-state busy beaver, another is obtained by merely changing the shift direction in a halting transition, a third by reversing all shift directions uniformly, and a fourth by reversing the halt direction of the all-swapped busy beaver. Furthermore, a permutation of all states except Start and Halt produces a machine that attains the same score. Theoretically, there could be more than one kind of transition leading to the halting state, but in practice it would be wasteful, because there is only one sequence of state transitions producing the sought-after result.)
Non-computability
Radó's 1962 paper proved that if f is any computable function, then Σ(n) > f(n) for all sufficiently large n, and hence that Σ is not a computable function.
Moreover, this implies that it is undecidable by a general algorithm whether an arbitrary Turing machine is a busy beaver. (Such an algorithm cannot exist, because its existence would allow Σ to be computed, which is a proven impossibility. In particular, such an algorithm could be used to construct another algorithm that would compute Σ as follows: for any given n, each of the finitely many n-state 2-symbol Turing machines would be tested until an n-state busy beaver is found; this busy beaver machine would then be simulated to determine its score, which is by definition Σ(n).)
Even though Σ(n) is an uncomputable function, there are some small n for which it is possible to obtain its values and prove that they are correct. It is not hard to show that Σ(0) = 0, Σ(1) = 1, Σ(2) = 4, and with progressively more difficulty it can be shown that Σ(3) = 6, Σ(4) = 13 and Σ(5) = 4098 . Σ(n) has not yet been determined for any instance of n > 5, although lower bounds have been established (see the Known values section below).
Complexity and unprovability of Σ
A variant of Kolmogorov complexity is defined as follows: The complexity of a number n is the smallest number of states needed for a BB-class Turing machine that halts with a single block of n consecutive 1s on an initially blank tape. The corresponding variant of Chaitin's incompleteness theorem states that, in the context of a given axiomatic system for the natural numbers, there exists a number k such that no specific number can be proven to have complexity greater than k, and hence that no specific upper bound can be proven for Σ(k) (the latter is because "the complexity of n is greater than k" would be proven if n > Σ(k) were proven). As mentioned in the cited reference, for any axiomatic system of "ordinary mathematics" the least value k for which this is true is far less than 10⇈10; consequently, in the context of ordinary mathematics, neither the value nor any upper-bound of Σ(10⇈10) can be proven. (Gödel's first incompleteness theorem is illustrated by this result: in an axiomatic system of ordinary mathematics, there is a true-but-unprovable sentence of the form , and there are infinitely many true-but-unprovable sentences of the form .)
Maximum shifts function S
In addition to the function Σ, Radó [1962] introduced another extreme function for Turing machines, the maximum shifts function, S, defined as follows:
s(M) = the number of shifts M makes before halting, for any halting n-state 2-symbol Turing machine M,
S(n) = max{s(M)} = the largest number of shifts made by any halting n-state 2-symbol Turing machine.
Because normal Turing machines are required to have a shift in each and every transition or "step" (including any transition to a Halt state), the max-shifts function is at the same time a max-steps function.
Radó showed that S is noncomputable for the same reason that Σ is noncomputable — it grows faster than any computable function. He proved this simply by noting that for each n, S(n) ≥ Σ(n). Each shift may write a 0 or a 1 on the tape, while Σ counts a subset of the shifts that wrote a 1, namely the ones that hadn't been overwritten by the time the Turing machine halted; consequently, S grows at least as fast as Σ, which had already been proved to grow faster than any computable function.
The following connection between Σ and S was used by Lin & Radó [Computer Studies of Turing Machine Problems, 1965] to prove that Σ(3) = 6 and that S(3)=21: For a given n, if S(n) is known then all n-state Turing machines can (in principle) be run for up to S(n) steps, at which point any machine that hasn't yet halted will never halt. At that point, by observing which machines have halted with the most 1s on the tape (i.e., the busy beavers), one obtains from their tapes the value of Σ(n). The approach used by Lin & Radó for the case of n = 3 was to conjecture that S(3) = 21 (after unsuccessfully conjecturing 18), then to simulate all the essentially different 3-state machines (82,944 machines, equal to 2^10 · 3^4) for up to 21 steps. They found 26,073 machines that halted, including one that halted only after 21 steps. By analyzing the behavior of the machines that had not halted within 21 steps, they succeeded in showing that none of those machines would ever halt, most of them following a certain pattern. This proved the conjecture that S(3) = 21, and also determined that Σ(3) = 6, which was attained by several machines, all halting after 11 to 14 steps.
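The generate-and-test approach described above can be reproduced for n = 2 in seconds; the sketch below (Python, illustrative only; Lin and Radó's enumeration and holdout analysis for n = 3 was far more involved) simply tries every 2-state, 2-symbol transition table with a step cap. Because S(2) = 6, a cap of 100 steps cannot exclude any halting 2-state machine, so the maxima found are exact.

    from itertools import product

    def run(tm, limit):
        tape, pos, state, steps = {}, 0, 'A', 0
        while state != 'H':
            if steps >= limit:
                return None                          # still running at the cap
            write, move, state = tm[(state, tape.get(pos, 0))]
            tape[pos] = write
            pos += move
            steps += 1
        return steps, sum(tape.values())

    # Every transition chooses: write 0/1, move left/right, next state A/B/H.
    options = list(product((0, 1), (-1, +1), ('A', 'B', 'H')))
    keys = [('A', 0), ('A', 1), ('B', 0), ('B', 1)]

    best_steps = best_ones = 0
    for choice in product(options, repeat=len(keys)):    # 12^4 = 20,736 machines
        result = run(dict(zip(keys, choice)), limit=100)
        if result:
            best_steps = max(best_steps, result[0])
            best_ones = max(best_ones, result[1])
    print(best_steps, best_ones)                          # -> 6 4, i.e. S(2) = 6, Σ(2) = 4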
In 2016, Adam Yedidia and Scott Aaronson obtained the first (explicit) upper bound on the minimum n for which S(n) is unprovable in ZFC. To do so they constructed a 7910-state Turing machine whose behavior cannot be proven based on the usual axioms of set theory (Zermelo–Fraenkel set theory with the axiom of choice), under reasonable consistency hypotheses (stationary Ramsey property). Stefan O'Rear then reduced it to 1919 states, with the dependency on the stationary Ramsey property eliminated, and later to 748 states. In July 2023, Riebel reduced it to 745 states.
Proof for uncomputability of S(n) and Σ(n)
Suppose that S(n) is a computable function and let EvalS denote a TM that evaluates S(n). Given a tape with n 1s it will produce S(n) 1s on the tape and then halt. Let Clean denote a Turing machine cleaning the sequence of 1s initially written on the tape. Let Double denote a Turing machine evaluating the function n + n. Given a tape with n 1s it will produce 2n 1s on the tape and then halt.
Let us create the composition Double | EvalS | Clean and let n0 be the number of states of this machine. Let Create_n0 denote a Turing machine creating n0 1s on an initially blank tape. This machine may be constructed in a trivial manner to have n0 states (the state i writes 1, moves the head right and switches to state i + 1, except the state n0, which halts). Let N denote the sum n0 + n0.
Let BadS denote the composition Create_n0 | Double | EvalS | Clean. Notice that this machine has N states. Starting with an initially blank tape it first creates a sequence of n0 1s and then doubles it, producing a sequence of N 1s. Then BadS will produce S(N) 1s on tape, and at last it will clear all 1s and then halt. But the cleaning phase will continue for at least S(N) steps, so the running time of BadS is strictly greater than S(N), which contradicts the definition of the function S(n).
The uncomputability of Σ(n) may be proved in a similar way. In the above proof, one must exchange the machine EvalS with EvalΣ and Clean with Increment — a simple TM, searching for a first 0 on the tape and replacing it with 1.
The uncomputability of S(n) can also be established by reference to the blank tape halting problem. The blank tape halting problem is the problem of deciding for any Turing machine whether or not it will halt when started on an empty tape. The blank tape halting problem is equivalent to the standard halting problem and so it is also uncomputable. If S(n) was computable, then we could solve the blank tape halting problem simply by running any given Turing machine with n states for S(n) steps; if it has still not halted, it never will. So, since the blank tape halting problem is not computable, it follows that S(n) must likewise be uncomputable.
Uncomputability of space(n) and num(n)
Both space(n) and num(n) are uncomputable. This can be shown for space(n) by noting that every tape square a Turing machine writes a one to, it must also visit: in other words, space(n) ≥ Σ(n). The function num can be shown to be incomputable by proving, for example, that num(3n + 3) ≥ space(n): this can be done by designing a (3n + 3)-state Turing machine which simulates the n-state space champion, and then uses it to write at least space(n) contiguous ones to the tape.
Generalizations
Analogs of the shift function can be simply defined in any programming language, given that the programs can be described by bit-strings, and a program's number of steps can be counted. For example, the busy beaver game can also be generalized to two dimensions using Turing machines on two-dimensional tapes, or to Turing machines that are allowed to stay in the same place as well as move to the left and right. Alternatively a "busy beaver function" for diverse models of computation can be defined with Kolmogorov complexity. This is done by taking its value at k to be the largest integer whose Kolmogorov complexity in the chosen description language is at most k: that is, the largest integer that some program of length k or less can output in that language.
The longest running 6-state, 2-symbol machine which has the additional property of reversing the tape value at each step produces 1s after steps. So for the Reversal Turing Machine (RTM) class, SRTM(6) ≥ and ΣRTM(6) ≥ . Likewise we could define an analog to the Σ function for register machines as the largest number which can be present in any register on halting, for a given number of instructions.
Different numbers of symbols
A simple generalization is the extension to Turing machines with m symbols instead of just 2 (0 and 1). For example a trinary Turing machine with m = 3 symbols would have the symbols 0, 1, and 2. The generalization to Turing machines with n states and m symbols defines the following generalized busy beaver functions:
Σ(n, m): the largest number of non-zeros printable by an n-state, m-symbol machine started on an initially blank tape before halting, and
S(n, m): the largest number of steps taken by an n-state, m-symbol machine started on an initially blank tape before halting.
For example, the longest-running 3-state 3-symbol machine found so far runs steps before halting.
Nondeterministic Turing machines
The problem can be extended to nondeterministic Turing machines by looking for the system with the most states across all branches or the branch with the longest number of steps. The question of whether a given NDTM will halt is still computationally irreducible, and the computation required to find an NDTM busy beaver is significantly greater than the deterministic case, since there are multiple branches that need to be considered. For a 2-state, 2-color system with p cases or rules, the table to the right gives the maximum number of steps before halting and maximum number of unique states created by the NDTM.
Applications
Open mathematical problems
In addition to posing a rather challenging mathematical game, the busy beaver functions Σ(n) and S(n) offer an entirely new approach to solving pure mathematics problems. Many open problems in mathematics could in theory, but not in practice, be solved in a systematic way given the value of S(n) for a sufficiently large n. Theoretically speaking, the value of S(n) encodes the answer to all mathematical conjectures that can be checked in infinite time by a Turing machine with less than or equal to n states.
Consider any conjecture: any conjecture that could be disproven via a counterexample among a countable number of cases (e.g. Goldbach's conjecture). Write a computer program that sequentially tests this conjecture for increasing values. In the case of Goldbach's conjecture, we would consider every even number ≥ 4 sequentially and test whether or not it is the sum of two prime numbers. Suppose this program is simulated on an n-state Turing machine. If it finds a counterexample (an even number ≥ 4 that is not the sum of two primes in our example), it halts and indicates that. However, if the conjecture is true, then our program will never halt. (This program halts only if it finds a counterexample.)
Now, this program is simulated by an n-state Turing machine, so if we know S(n) we can decide (in a finite amount of time) whether or not it will ever halt by simply running the machine that many steps. And if, after S(n) steps, the machine does not halt, we know that it never will and thus that there are no counterexamples to the given conjecture (i.e., no even numbers that are not the sum of two primes). This would prove the conjecture to be true. Thus specific values (or upper bounds) for S(n) could be, in theory, used to systematically solve many open problems in mathematics.
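A sketch of such a conjecture-testing program for Goldbach's conjecture, written in plain Python rather than as a Turing machine table (helper names are illustrative). The program described above would loop with no upper limit and halt only on a counterexample; the demonstration below stops at an arbitrary bound so that it terminates.

    def is_prime(k):
        if k < 2:
            return False
        i = 2
        while i * i <= k:
            if k % i == 0:
                return False
            i += 1
        return True

    def goldbach_holds(n):
        # True if the even number n is the sum of two primes.
        return any(is_prime(p) and is_prime(n - p) for p in range(2, n // 2 + 1))

    # The "real" machine would use an unbounded loop; here we stop at 10,000 for demonstration.
    for n in range(4, 10_000, 2):
        if not goldbach_holds(n):
            print("counterexample:", n)       # halting would disprove the conjecture
            break
    else:
        print("no counterexample below 10,000")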
However, current results on the busy beaver problem suggest that this will not be practical for two reasons:
It is extremely hard to prove values for the busy beaver function (and the max shift function). Every known exact value of S(n) was proven by enumerating every n-state Turing machine and proving whether or not each halts. One would have to calculate S(n) by some less direct method for it to actually be useful.
The values of S(n) and the other busy beaver functions get very large, very quickly. While the value of S(5) is only around 47 million, the value of S(6) is more than 10⇈15, which is equal to 10^10^...^10 with a stack of 15 tens. This number has 10⇈14 digits and is unreasonable to use in a computation. The value of S(27), which is the number of steps the current program for the Goldbach conjecture would need to be run to give a conclusive answer, is incomprehensibly huge, and not remotely possible to write down, much less run a machine for, in the observable universe.
Consistency of theories
Another property of S(n) is that no arithmetically sound, computably axiomatized theory can prove all of the function's values. Specifically, given a computable and arithmetically sound theory T, there is a number n_T such that for all n ≥ n_T, no statement of the form "S(n) = k" can be proved in T. This implies that for each theory there is a specific largest value of S(n) that it can prove. This is true because for every such theory T, a Turing machine with n_T states can be designed to enumerate every possible proof in T. If the theory is inconsistent, then all false statements are provable, and the Turing machine can be given the condition to halt if and only if it finds a proof of, for example, "0 = 1". Any theory that proves the value of S(n_T) proves its own consistency, violating Gödel's second incompleteness theorem. This can be used to place various theories on a scale, for example the various large cardinal axioms in ZFC: if each theory is assigned n_T as its number, theories with larger values of n_T prove the consistency of those below them, placing all such theories on a countably infinite scale.
Notable examples
A 745-state binary Turing machine has been constructed that halts if and only if ZFC is inconsistent.
A 744-state Turing machine has been constructed that halts if, and only if, the Riemann hypothesis is false.
A 43-state Turing machine was constructed that halts if, and only if, Goldbach's conjecture is false. This was further reduced to a 25-state machine, and later formally proved and verified in the Lean 4 theorem proving language.
A 15-state Turing machine has been constructed that halts if and only if the following conjecture formulated by Paul Erdős in 1979 is false: for all n > 8 there is at least one digit 2 in the base 3 representation of 2^n.
Universal Turing machines
Exploring the relationship between computational universality and the dynamic behavior of Busy Beaver Turing machines, a conjecture was proposed in 2012 suggesting that Busy Beaver machines were natural candidates for Turing universality as they display complex characteristics, known for (1) their maximal computational complexity within size constraints, (2) their ability to perform non-trivial calculations before halting, and (3) the difficulty in finding and proving these machines; these features suggest that Busy Beaver machines possess the necessary complexity for universality.
Known results
Lower bounds
Green machines
In 1964 Milton Green developed a lower bound for the 1s-counting variant of the Busy Beaver function that was published in the proceedings of the 1964 IEEE symposium on switching circuit theory and logical design. Heiner Marxen and Jürgen Buntrock described it as "a non-trivial (not primitive recursive) lower bound". This lower bound can be calculated but is too complex to state as a single expression in terms of n. This was done with a set of Turing machines, each of which demonstrated the lower bound for a certain n. When n=8 the method gives
Σ(8) ≥ 3 × (7 × 3^92 − 1) / 2 ≈ 8.248 × 10^44.
In contrast, the best current (as of 2024) lower bound on Σ(6) is 10↑↑15, where ↑↑ is Knuth's up-arrow notation. This represents 10^10^...^10, an exponentiated chain of 15 tens. The value of Σ(6) is probably much larger still than that.
Specifically, the lower bound was shown with a series of recursive Turing machines, each of which was made of a smaller one with two additional states that repeatedly applied the smaller machine to the input tape. Defining the value of the N-state busy-beaver competitor on a tape containing a given number of ones (the ultimate output of each machine being its value on a blank tape, which contains 0 ones), the recursion relations are as follows:
This leads to two formulas, for odd and even numbers, for calculating the lower bound given by the Nth machine:
for odd N
for even N
The lower bound BB(N) can also be related to the Ackermann function. It can be shown that:
Relationships between Busy beaver functions
Trivially, S(n) ≥ Σ(n) because a machine that writes Σ(n) ones must take at least Σ(n) steps to do so. It is possible to give a number of upper bounds on the time S(n) with the number of ones Σ(n):
(Rado)
(Buro)
(Julstrom and Zwick)
By defining num(n) to be the maximum number of ones an n-state Turing machine is allowed to output contiguously, rather than in any position (the largest unary number it can output), it is possible to show:
(Ben-Amram, et al., 1996)
Ben-Amram and Petersen, 2002, also give an asymptotically improved bound on S(n). There exists a constant c, such that for all :
Exact values and lower and upper bounds
The following table lists the exact values and some known lower bounds for S(n), Σ(n), and several other busy beaver functions. In this table, 2-symbol Turing machines are used. Entries listed as "?" are at least as large as other entries to the left (because all n-state machines are also (n+1) state machines), and no larger than entries above them (because S(n) ≥ space(n) ≥ Σ(n) ≥ num(n)). So, space(6) is known to be greater than 10⇈15, as space(n) ≥ Σ(n) and Σ(6) > 10⇈15. 47,176,870 is an upper bound for space(5), because S(5) = 47,176,870 and S(n) ≥ space(n). 4098 is an upper bound for num(5), because Σ(5) = 4098 and Σ(n) ≥ num(n). The last entry listed as "?" is num(6), because Σ(6) > 10⇈15, but Σ(n) ≥ num(n).
The 5-state busy beaver was discovered by Heiner Marxen and Jürgen Buntrock in 1989, but only proved to be the winning fifth busy beaver — stylized as BB(5) — in 2024 using a proof in Coq.
List of busy beavers
These are tables of rules for Turing machines that generate Σ(1) and S(1), Σ(2) and S(2), Σ(3) (but not S(3)), Σ(4) and S(4), Σ(5) and S(5), and the best known lower bound for Σ(6) and S(6).
In the tables, columns represent the current state and rows represent the current symbol read from the tape. Each table entry is a string of three characters, indicating the symbol to write onto the tape, the direction to move, and the new state (in that order). The halt state is shown as H.
Each machine begins in state A with an infinite tape that contains all 0s. Thus, the initial symbol read from the tape is a 0.
{| class="wikitable"
|+ 1-state, 2-symbol busy beaver
! width="20px" |
! A
|-
! 0
| 1RH
|-
! 1
| (not used)
|}
Result: 0 0 1 0 0 (1 step, one "1" total)
{| class="wikitable"
|+ 2-state, 2-symbol busy beaver
! width="20px" |
! A
! B
|-
! 0
| 1RB
| 1LA
|-
! 1
| 1LB
| 1RH
|}
Result: 0 0 1 1 1 1 0 0 (6 steps, four "1"s total)
{| class="wikitable"
|+ 3-state, 2-symbol busy beaver
! width="20px" |
! A
! B
! C
|-
! 0
| 1RB
| 0RC
| 1LC
|-
! 1
| 1RH
| 1RB
| 1LA
|}
Result: 0 0 1 1 1 1 1 1 0 0 (14 steps, six "1"s total).
This is one of several nonequivalent machines giving six 1s. Unlike the previous machines, this one is a busy beaver for Σ, but not for S. (S(3) = 21, and the 21-step machine obtains only five 1s.)
{| class="wikitable"
|+ 4-state, 2-symbol busy beaver
! width="20px" |
! A
! B
! C
! D
|-
! 0
| 1RB
| 1LA
| 1RH
| 1RD
|-
! 1
| 1LB
| 0LC
| 1LD
| 0RA
|}
Result: 0 0 1 1 1 1 1 1 1 1 1 1 1 1 0 0 (107 steps, thirteen "1"s total)
{| class="wikitable"
|+ 5-state, 2-symbol busy beaver
! width="20px" |
! A
! B
! C
! D
! E
|-
! 0
| 1RB
| 1RC
| 1RD
| 1LA
| 1RH
|-
! 1
| 1LC
| 1RB
| 0LE
| 1LD
| 0LA
|}
Result: 4098 "1"s with 8191 "0"s interspersed in 47,176,870 steps.
Note in the image to the right how this solution is similar qualitatively to the evolution of some cellular automata.
{| class="wikitable"
|+ current 6-state, 2-symbol best contender
! width="20px" |
! A
! B
! C
! D
! E
! F
|-
! 0
| 1RB
| 1RC
| 1LC
| 0LE
| 1LF
| 0RC
|-
! 1
| 0LD
| 0RF
| 1LA
| 1RH
| 0RB
| 0RE
|}
Result: 1 1 1 1 ... 1 1 1 ("10" followed by more than 10↑↑15 contiguous "1"s in more than 10↑↑15 steps, where 10↑↑15 = 10^10^...^10, an exponential tower of 15 tens).
Visualizations
In the following table, the rules for each busy beaver (maximizing Σ) are represented visually, with orange squares corresponding to a "1" on the tape, and white corresponding to "0". The position of the head is indicated by the black ovoid, with the orientation of the head representing the state. Individual tapes are laid out horizontally, with time progressing from top to bottom. The halt state is represented by a rule which maps one state to itself (head doesn't move).
See also
Rayo's number
Turmite
Notes
References
This is where Radó first defined the busy beaver problem and proved that it was uncomputable and grew faster than any computable function.
The results of this paper had already appeared in part in Lin's 1963 doctoral dissertation, under Radó's guidance. Lin & Radó prove that Σ(3) = 6 and S(3) = 21 by proving that all 3-state 2-symbol Turing Machines which don't halt within 21 steps will never halt. (Most are proven automatically by a computer program, however 40 are proven by human inspection.)
Brady proves that Σ(4) = 13 and S(4) = 107. Brady defines two new categories for non-halting 3-state 2-symbol Turing Machines: Christmas Trees and Counters. He uses a computer program to prove that all but 27 machines which run over 107 steps are variants of Christmas Trees and Counters which can be proven to run infinitely. The last 27 machines (referred to as holdouts) are proven by personal inspection by Brady himself not to halt.
Machlin and Stout describe the busy beaver problem and many techniques used for finding busy beavers (which they apply to Turing Machines with 4-states and 2-symbols, thus verifying Brady's proof). They suggest how to estimate a variant of Chaitin's halting probability (Ω).
Marxen and Buntrock demonstrate that Σ(5) ≥ 4098 and S(5) ≥ and describe in detail the method they used to find these machines and prove many others will never halt.
Green recursively constructs machines for any number of states and provides the recursive function that computes their score (computes σ), thus providing a lower bound for Σ. This function's growth is comparable to that of Ackermann's function.
Busy beaver programs are described by Alexander Dewdney in Scientific American, August 1984, pages 19–23, also March 1985 p. 23 and April 1985 p. 30.
Wherein Brady (of 4-state fame) describes some history of the beast and calls its pursuit "The Busy Beaver Game". He describes other games (e.g. cellular automata and Conway's Game of Life). Of particular interest is "The Busy Beaver Game in Two Dimensions" (p. 247). With 19 references.
Cf Chapter 9, Turing Machines. A difficult book, meant for electrical engineers and technical specialists. Discusses recursion, partial-recursion with reference to Turing Machines, halting problem. A reference in Booth attributes busy beaver to Rado. Booth also defines Rado's busy beaver problem in "home problems" 3, 4, 5, 6 of Chapter 9, p. 396. Problem 3 is to "show that the busy beaver problem is unsolvable... for all values of n."
Bounds between functions Σ and S.
Improved bounds.
This article contains a complete classification of the 2-state, 3-symbol Turing machines, and thus a proof for the (2, 3) busy beaver: Σ(2, 3) = 9 and S(2, 3) = 38.
This is the description of ideas, of the algorithms and their implementation, with the description of the experiments examining 5-state and 6-state Turing machines by parallel run on 31 4-core computer and finally the best results for 6-state TM.
External links
The page of Heiner Marxen, who, with Jürgen Buntrock, found the above-mentioned records for a 5 and 6-state Turing machine.
Pascal Michel's Historical survey of busy beaver results which also contains best results and some analysis.
Definition of the class RTM - Reversal Turing Machines, simple and strong subclass of the TMs.
"The Busy Beaver Problem: A New Millennium Attack" (archived) at the Rensselaer RAIR Lab. This effort found several new records and established several values for the quadruple formalization.
Daniel Briggs' website archive and forum for solving the 5-state, 2-symbol busy beaver problem, based on Skelet (Georgi Georgiev) nonregular machines list.
Aaronson, Scott (1999), Who can name the bigger number?
Busy Beaver Turing Machines - Computerphile, Youtube
Pascal Michel. The Busy Beaver Competition: a historical survey. 70 pages. 2017. <hal-00396880v5>
Computability theory
Theory of computation
Large integers
Metaphors referring to animals | Busy beaver | Mathematics | 8,093 |
9,436,252 | https://en.wikipedia.org/wiki/Serial%20binary%20adder | The serial binary adder or bit-serial adder is a digital circuit that performs binary addition bit by bit. The serial full adder has three single-bit inputs for the numbers to be added and the carry in. There are two single-bit outputs for the sum and carry out. The carry-in signal is the previously calculated carry-out signal. The addition is performed by adding each bit, lowest to highest, one per clock cycle.
Serial binary addition
Serial binary addition is done by a flip-flop and a full adder. The flip-flop takes the carry-out signal on each clock cycle and provides its value as the carry-in signal on the next clock cycle. After all of the bits of the input operands have arrived, all of the bits of the sum have come out of the sum output.
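A behavioral sketch of this arrangement in Python (names are illustrative): the carry variable plays the role of the flip-flop, holding each clock cycle's carry-out for use as the next cycle's carry-in.

    def full_adder(a, b, cin):
        s = a ^ b ^ cin
        cout = (a & b) | (cin & (a ^ b))
        return s, cout

    def serial_add(x_bits, y_bits):
        # Operands are lists of bits, least significant bit first.
        carry, out = 0, []
        for a, b in zip(x_bits, y_bits):
            s, carry = full_adder(a, b, carry)   # flip-flop: carry-out feeds the next cycle
            out.append(s)
        out.append(carry)                        # final carry-out is the most significant bit
        return out

    # 5 + 9 = 14: 0101 + 1001, fed LSB first.
    print(serial_add([1, 0, 1, 0], [1, 0, 0, 1]))   # [0, 1, 1, 1, 0], i.e. 01110 = 14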
Serial binary subtractor
The serial binary subtractor operates the same as the serial binary adder, except the subtracted number is converted to its two's complement before being added. Alternatively, the number to be subtracted is converted to its ones' complement, by inverting its bits, and the carry flip-flop is initialized to a 1 instead of to 0 as in addition. The ones' complement plus the 1 is the two's complement.
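The same loop performs subtraction when the subtrahend's bits are inverted on the way in and the carry flip-flop is preset to 1, as described above. A minimal Python sketch, assuming both operands have the same width and discarding the final carry:

    def serial_subtract(x_bits, y_bits):
        # x - y, both LSB first and equal width: add the ones' complement of y
        # with the carry flip-flop initialised to 1 (two's-complement subtraction).
        carry, out = 1, []
        for a, b in zip(x_bits, y_bits):
            b ^= 1                                   # invert the subtrahend bit
            s = a ^ b ^ carry
            carry = (a & b) | (carry & (a ^ b))
            out.append(s)
        return out                                   # final carry discarded

    print(serial_subtract([1, 0, 0, 1], [1, 0, 1, 0]))   # 9 - 5 -> [0, 0, 1, 0] = 4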
Example of operation
Decimal 5+9=14
X=5, Y=9, Sum=14
Binary 0101+1001=1110
Addition at each step (starting from the LSB):
Step 1: X=1, Y=1, carry-in=0 → sum=0, carry-out=1
Step 2: X=0, Y=0, carry-in=1 → sum=1, carry-out=0
Step 3: X=1, Y=0, carry-in=0 → sum=1, carry-out=0
Step 4: X=0, Y=1, carry-in=0 → sum=1, carry-out=0
Result=1110 or 14
See also
Parallel binary adder
References
Further reading
http://www.quinapalus.com/wires8.html
http://www.asic-world.com/digital/arithmetic3.html
External links
Interactive Serial Adder, Provides the visual logic of the Serial Adder circuit built with Teahlab's Simulator.
Binary arithmetic
Adders (electronics) | Serial binary adder | Mathematics | 388 |
5,833,630 | https://en.wikipedia.org/wiki/Transport%20Phenomena%20%28book%29 | Transport Phenomena is the first textbook about transport phenomena. It is specifically designed for chemical engineering students. The first edition was published in 1960, two years after having been preliminarily published under the title Notes on Transport Phenomena based on mimeographed notes prepared for a chemical engineering course taught at the University of Wisconsin–Madison during the academic year 1957-1958. The second edition was published in August 2001. A revised second edition was published in 2007. This text is often known simply as BSL after its authors' initials.
History
As the chemical engineering profession developed in the first half of the 20th century, the concept of "unit operations" arose as being needed in the education of undergraduate chemical engineers. The theories of mass, momentum and energy transfer were being taught at that time only to the extent necessary for a narrow range of applications. As chemical engineers began moving into a number of new areas, problem definitions and solutions required a deeper knowledge of the fundamentals of transport phenomena than those provided in the textbooks then available on unit operations.
In the 1950s, R. Byron Bird, Warren E. Stewart and Edwin N. Lightfoot stepped forward to develop an undergraduate course at the University of Wisconsin–Madison to integrate the teaching of fluid flow, heat transfer, and diffusion. From this beginning, they prepared their landmark textbook Transport Phenomena.
Subjects covered in the book
The book is divided into three basic sections, named Momentum Transport, Energy Transport and Mass Transport:
Momentum Transport
Viscosity and the Mechanisms of Momentum Transport
Momentum Balances and Velocity Distributions in Laminar Flow
The Equations of Change for Isothermal Systems
Velocity Distributions in Turbulent Flow
Interphase Transport in Isothermal Systems
Macroscopic Balances for Isothermal Flow Systems
Energy Transport
Thermal Conductivity and the Mechanisms of Energy Transport
Energy Balances and Temperature Distributions in Solids and Laminar Flow
The Equations of Change for Nonisothermal Systems
Temperature Distributions in Turbulent Flow
Interphase Transport in Nonisothermal Systems
Macroscopic Balances for Nonisothermal Systems
Mass transport
Diffusivity and the Mechanisms of Mass Transport
Concentration Distributions in Solids and Laminar Flow
Equations of Change for Multicomponent Systems
Concentration Distributions in Turbulent Flow
Interphase Transport in Nonisothermal Mixtures
Macroscopic Balances for Multicomponent Systems
Other Mechanisms for Mass Transport
Word play
Transport Phenomena contains many instances of hidden messages and other word play.
For example, the first letters of each sentence of the Preface spell out "This book is dedicated to O. A. Hougen." while in the revised second edition, the first letters of each paragraph spell out "Welcome". The first letters of each paragraph in the Postface spell out "On Wisconsin". In the first printing, in Fig. 9.L (p. 305) "Bird" is typeset safely outside the furnace wall.
Advantages of the first edition over the second edition
According to many chemical engineering professors, the first edition is much better than the second edition. The second edition has been revised many times, yet many defects and typographical errors remain in many parts of the book. To address defects in the revised second edition, the authors published "Notes for the 2nd revised edition of TRANSPORT PHENOMENA" on 9 August 2011.
See also
Chemical engineer
Distillation Design
Transport phenomena
Unit Operations of Chemical Engineering
Perry's Chemical Engineers' Handbook
External links
Publisher's description of this book
References
Chemical engineering books
Engineering textbooks
Science books
Technology books
Transport phenomena | Transport Phenomena (book) | Physics,Chemistry,Engineering | 708 |
59,035,533 | https://en.wikipedia.org/wiki/KZ%20Andromedae | KZ Andromedae (often abbreviated to KZ And) is a double lined spectroscopic binary in the constellation Andromeda. Its apparent visual magnitude varies between 7.91 and 8.03 during a cycle slightly longer than 3 days.
System
Both stars in the KZ Andromedae system are main sequence stars of spectral type K2Ve, meaning that the spectrum shows strong emission lines. This is caused by their active chromospheres that cause large spots on the surface.
KZ Andromedae is listed in the Washington Double Star Catalog as the secondary component in a visual binary system, with the primary being HD 218739. In 50 years of observations, there is little evidence of relative motion between the two stars; however, they have a common proper motion and a similar radial velocity.
Variability
The rotational velocity of both stars is consistent with a synchronous rotation of the pair, and the rotational period is itself comparable to the brightness variation period. KZ Andromedae is thus classified as a BY Draconis variable, and the variability is caused by the large spots on the surface.
References
Andromeda (constellation)
Andromedae, KZ
J23095734+4757300
114379
Spectroscopic binaries
BY Draconis variables
K-type main-sequence stars
Emission-line stars
Gliese and GJ objects
218738 | KZ Andromedae | Astronomy | 290 |
9,620,408 | https://en.wikipedia.org/wiki/Laurence%20Chisholm%20Young | Laurence Chisholm Young (14 July 1905 – 24 December 2000) was a British mathematician known for his contributions to measure theory, the calculus of variations, optimal control theory, and potential theory. He was the son of William Henry Young and Grace Chisholm Young, both prominent mathematicians. He moved to the US in 1949 but never sought American citizenship.
The concept of Young measure is named after him: he also introduced the concept of the generalized curve and a concept of generalized surface which later evolved in the concept of varifold. The Young integral also is named after him and has now been generalised in the theory of rough paths.
Life and academic career
Laurence Chisholm Young was born in Göttingen, the fifth of the six children of William Henry Young and Grace Chisholm Young. He held positions of Professor at the University of Cape Town, South Africa, and at the University of Wisconsin-Madison. He was also a chess grandmaster.
Selected publications
Books
Papers
See also
Bounded variation
Caccioppoli set
Measure theory
Varifold
Notes
References
External links
Obituary on University of Wisconsin web site
20th-century British mathematicians
Alumni of Trinity College, Cambridge
Mathematical analysts
Scientists from Göttingen
1905 births
2000 deaths
Variational analysts
British historians of mathematics
Instituto Nacional de Matemática Pura e Aplicada researchers | Laurence Chisholm Young | Mathematics | 415 |
697,531 | https://en.wikipedia.org/wiki/Geometric%E2%80%93harmonic%20mean | In mathematics, the geometric–harmonic mean M(x, y) of two positive real numbers x and y is defined as follows: we form the geometric mean of g0 = x and h0 = y and call it g1, i.e. g1 is the square root of xy. We also form the harmonic mean of x and y and call it h1, i.e. h1 is the reciprocal of the arithmetic mean of the reciprocals of x and y. These may be done sequentially (in any order) or simultaneously.
Now we can iterate this operation with g1 taking the place of x and h1 taking the place of y. In this way, two interdependent sequences (gn) and (hn) are defined:
g_{n+1} = sqrt(g_n · h_n)
and
h_{n+1} = 2 g_n h_n / (g_n + h_n) = 2 / (1/g_n + 1/h_n).
Both of these sequences converge to the same number, which we call the geometric–harmonic mean M(x, y) of x and y. The geometric–harmonic mean is also designated as the harmonic–geometric mean. (cf. Wolfram MathWorld below.)
The existence of the limit can be proved by the means of Bolzano–Weierstrass theorem in a manner almost identical to the proof of existence of arithmetic–geometric mean.
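A direct transcription of the iteration into Python (simultaneous update; the tolerance and names are chosen for illustration). The second print also checks the relation to the arithmetic–geometric mean stated under Properties below:

    import math

    def geometric_harmonic_mean(x, y, tol=1e-15):
        g, h = x, y
        while abs(g - h) > tol * max(g, h):
            g, h = math.sqrt(g * h), 2 * g * h / (g + h)   # simultaneous update
        return g

    def arithmetic_geometric_mean(x, y, tol=1e-15):
        a, g = x, y
        while abs(a - g) > tol * max(a, g):
            a, g = (a + g) / 2, math.sqrt(a * g)
        return a

    m = geometric_harmonic_mean(1.0, 9.0)
    print(m)                                                # between H = 1.8 and G = 3.0
    print(1.0 / arithmetic_geometric_mean(1.0, 1.0 / 9.0))  # agrees with m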
Properties
M(x, y) is a number between the geometric and harmonic mean of x and y; in particular it is between x and y. M(x, y) is also homogeneous, i.e. if r > 0, then M(rx, ry) = r M(x, y).
If AG(x, y) is the arithmetic–geometric mean, then we also have
M(x, y) = 1 / AG(1/x, 1/y).
Inequalities
We have the following inequality involving the Pythagorean means {H, G, A} and iterated Pythagorean means {HG, HA, GA}:
H(x, y) ≤ HG(x, y) ≤ HA(x, y) = G(x, y) ≤ GA(x, y) ≤ A(x, y)
where the iterated Pythagorean means have been identified with their parts {H, G, A} in progressing order:
H(x, y) is the harmonic mean,
HG(x, y) is the harmonic–geometric mean,
G(x, y) = HA(x, y) is the geometric mean (which is also the harmonic–arithmetic mean),
GA(x, y) is the geometric–arithmetic mean,
A(x, y) is the arithmetic mean.
See also
Arithmetic–geometric mean
Arithmetic–harmonic mean
Mean
External links
Means | Geometric–harmonic mean | Physics,Mathematics | 498 |
61,438,505 | https://en.wikipedia.org/wiki/Near-Earth%20Object%20Confirmation%20Page | The Near-Earth Object Confirmation Page (NEOCP) is a web service listing recently-submitted observations of objects that may be near-Earth objects (NEOs). It is a service of the Minor Planet Center (MPC), which is the official international archive for astrometric observations of minor planets. The NEOCP was established by the MPC on the World Wide Web in March 1996.
Astrometric observations of new NEO candidates are submitted by observers either through email or cURL, after which they are placed in the NEOCP for a period of time until they are confirmed to be a new object, confirmed to be an already-known object, or not confirmed with sufficient follow-up observations. If the object is confirmed as a new NEO, it is given a provisional designation and its observations will be immediately published in a Minor Planet Electronic Circular (MPEC). If the object is a recovery of an already-designated NEO on a new opposition, it will also be immediately published in an MPEC. Otherwise, if the object is confirmed as a minor planet that is not a NEO, it will be published in a Daily Orbit Update MPEC on the following day. Any objects that are not confirmed due to an insufficient observation arc or a false-positive detection will have its observations archived in the MPC's Isolated Tracklet File of unconfirmed minor planet candidates.
This tool is updated throughout the day to facilitate follow-up observations as quickly as possible before an object is lost and no longer observable.
A number of other services make use of the NEOCP and further process the data to make independent predictions of the likelihood of an object being an NEO and also of the likely risk of Earth impact, some of these are listed below.
See also
Scout: NEOCP Hazard Assessment
Near-Earth object
Asteroid impact prediction
NEODyS
References
External links
Near-Earth Object Confirmation Page at MPC
CNEOS / JPL Scout NEOCP Hazard Assessment Tool
SpaceDYS NEOScan Tool
Project Pluto Summary of NEOCP
Separate comet confirmation page at the Minor Planet Center
Interactive graphic showing time of most recent submitted observations for NEOCP objects
Previous NEOCP objects listing disposition/status/outcome upon being removed from the list
Astronomy data and publications | Near-Earth Object Confirmation Page | Astronomy | 448 |
4,269,882 | https://en.wikipedia.org/wiki/Background%20radiation%20equivalent%20time | Background radiation equivalent time (BRET) or background equivalent radiation time (BERT) is a unit of measurement of ionizing radiation dosage amounting to one day worth of average human exposure to background radiation.
BRET units are used as a measure of low level radiation exposure. The health hazards of low doses of ionizing radiation are unknown and controversial, because the effects, mainly cancer and genetic damage, take many years to appear, and the incidence due to radiation exposure can't be statistically separated from the many other causes of these diseases. The purpose of the BRET measure is to allow a low level dose to be easily compared with a universal yardstick: the average dose of background radiation, mostly from natural sources, that every human unavoidably receives during daily life. Background radiation level is widely used in radiological health fields as a standard for setting exposure limits. Presumably, a dose of radiation which is equivalent to what a person would receive in a few days of ordinary life will not increase their rate of disease measurably.
Definition
The BRET is the creation of Professor J R Cameron. The BRET value corresponding to a dose of radiation is the number of days of average natural background dose it is equivalent to. It is calculated from the equivalent dose in sieverts by dividing by the average annual background radiation dose in Sv, and multiplying by 365:
BRET (in days) = (equivalent dose in Sv ÷ average annual background dose in Sv) × 365.
The definition of the BRET unit is apparently unstandardized, and depends on what value is used for the average annual background radiation dose, which varies greatly across time and location. The 2000 UNSCEAR estimate for worldwide average natural background radiation dose is 2.4 mSv (240 mrem), with a range from 1 to 13 mSv. A small area in India has natural background levels as high as 30 mSv (3 rem). Using the 2.4 mSv value, each BRET unit equals 6.6 μSv.
BRET values for diagnostic radiography procedures range from 2 BRET for a dental x-ray to around 400 for a barium enema study.
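The conversion itself is simple arithmetic; a short Python sketch using the 2.4 mSv worldwide average (the example dose of 13 μSv is an assumption chosen to match the dental x-ray scale mentioned above):

    def bret_days(dose_sv, annual_background_sv=2.4e-3):
        # Background radiation equivalent time, in days of average natural background.
        return dose_sv / annual_background_sv * 365

    print(bret_days(13e-6))     # ~2 days: roughly a dental x-ray
    print(bret_days(2.4e-3))    # one year of average background -> 365 days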
See also
Background radiation
Banana equivalent dose
Flight-time equivalent dose
Radiology
References
Utah Division of Radiation Control: X-ray Dose Comparisons
Radioactivity
Background radiation
Equivalent units | Background radiation equivalent time | Physics,Chemistry,Mathematics | 443 |
8,025,600 | https://en.wikipedia.org/wiki/Keith%20Stattenfield | Keith Stattenfield is a senior Apple Computer software engineer. He started at Apple Computer in 1989 in the Information Systems & Technology group, then worked on the Macintosh operating system starting in 1995, from the Mac OS 7.5 release on. He led the Netbooting project starting in Mac OS 8.6, and then served as the overall technical lead of Mac OS 9. His California license plate reads "MAC OS 9".
In 2001, he was ranked 14 on the MDJ POWER 25, a list of the most influential people in the Macintosh community.
He has often presented at conferences such as Apple's Worldwide Developers Conference and MacHack (convention).
Keith has a Public-access television show and web site called Keith Explains. He is also a frequent guest on the show John Wants Answers.
References
Knaster and Rizzo (2004) "Mac Toys: 12 Cool Projects for Home, Office, and Entertainment"
External links
Keith Explains
Keith's personal web page
Keith's net boot patent mentioned on slashdot
Keith's net boot patent
Year of birth missing (living people)
Living people
Macintosh operating systems people
American software engineers
American public access television personalities | Keith Stattenfield | Technology | 236 |
54,028,304 | https://en.wikipedia.org/wiki/Guyan%20reduction | In computational mechanics, Guyan reduction, also known as static condensation, is a dimensionality reduction method which reduces the number of degrees of freedom by ignoring the inertial terms of the equilibrium equations and expressing the unloaded degrees of freedom in terms of the loaded degrees of freedom.
Basic concept
The static equilibrium equation can be expressed as:

$$K u = f$$

where K is the stiffness matrix, f the force vector, and u the displacement vector. The number of degrees of freedom of the static equilibrium problem is the length of the displacement vector.
By partitioning the above system of linear equations with regard to loaded (master, subscript m) and unloaded (slave, subscript s) degrees of freedom, the static equilibrium equation may be expressed as:

$$\begin{bmatrix} K_{mm} & K_{ms} \\ K_{sm} & K_{ss} \end{bmatrix} \begin{bmatrix} u_m \\ u_s \end{bmatrix} = \begin{bmatrix} f_m \\ 0 \end{bmatrix}$$
Focusing on the lower partition of the above system of linear equations, the dependent (slave) degrees of freedom satisfy

$$K_{sm} u_m + K_{ss} u_s = 0 .$$

Solving the above equation in terms of the independent (master) degrees of freedom leads to the following dependency relations:

$$u_s = -K_{ss}^{-1} K_{sm} u_m .$$

Substituting the dependency relations in the upper partition of the static equilibrium problem condenses away the slave degrees of freedom, leading to the following reduced system of linear equations:

$$\left( K_{mm} - K_{ms} K_{ss}^{-1} K_{sm} \right) u_m = f_m .$$

This can be rewritten as:

$$K^{\mathrm{red}} u_m = f_m , \qquad K^{\mathrm{red}} = K_{mm} - K_{ms} K_{ss}^{-1} K_{sm} .$$

The above system of linear equations is equivalent to the original problem, but expressed in terms of the master degrees of freedom alone.
Thus, the Guyan reduction method results in a reduced system obtained by condensing away the slave degrees of freedom.
Linear transformation
The Guyan reduction can also be expressed as a change of basis which produces a low-dimensional representation of the original space, spanned by the master degrees of freedom.
The linear transformation that maps the reduced space onto the full space is expressed as:

$$\begin{bmatrix} u_m \\ u_s \end{bmatrix} = \begin{bmatrix} I \\ -K_{ss}^{-1} K_{sm} \end{bmatrix} u_m = T_G \, u_m$$

where T_G represents the Guyan reduction transformation matrix.
Thus, the reduced problem is represented as:

$$K_G u_m = f_G$$

In the above equation, K_G represents the reduced stiffness matrix that is obtained by applying the Guyan reduction transformation to the full system:

$$K_G = T_G^{\mathsf T} K \, T_G , \qquad f_G = T_G^{\mathsf T} f .$$
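A minimal numerical sketch of the procedure with NumPy follows (illustrative only; the stiffness matrix, load vector and choice of master degrees of freedom are arbitrary assumptions):

import numpy as np

def guyan_reduce(K, f, master_idx):
    """Statically condense away the unloaded (slave) DOFs.

    Returns the reduced stiffness matrix, the reduced load vector and the
    Guyan transformation matrix (rows ordered master DOFs first, then slaves)."""
    n = K.shape[0]
    m = np.asarray(master_idx)
    s = np.setdiff1d(np.arange(n), m)            # slave DOFs: everything not a master
    Kmm, Kms = K[np.ix_(m, m)], K[np.ix_(m, s)]
    Ksm, Kss = K[np.ix_(s, m)], K[np.ix_(s, s)]
    G = -np.linalg.solve(Kss, Ksm)               # dependency relation: u_s = G @ u_m
    T = np.vstack([np.eye(len(m)), G])           # Guyan transformation matrix
    K_red = Kmm + Kms @ G                        # statically condensed stiffness
    f_red = f[m]                                 # slave DOFs carry no load by assumption
    return K_red, f_red, T

# 3-DOF spring chain with loads applied only at DOFs 0 and 2 (the masters)
K = np.array([[ 2.0, -1.0,  0.0],
              [-1.0,  2.0, -1.0],
              [ 0.0, -1.0,  1.0]])
f = np.array([1.0, 0.0, 0.5])
K_red, f_red, T = guyan_reduce(K, f, master_idx=[0, 2])
u_m = np.linalg.solve(K_red, f_red)              # displacements of the master DOFs
u_full = np.linalg.solve(K, f)                   # reference: solve the full system
print(np.allclose(u_m, u_full[[0, 2]]))          # True -- statics are reproduced exactly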
Application
The Guyan reduction is an integral part of the classic dynamic substructuring method known as the Craig-Bampton (CB) method. The static portion of the reduced system matrices derived from the CB method is a direct result of the Guyan reduction. It is calculated in the same manner as the Guyan stiffness matrix shown above. The term -K_ss^{-1} K_sm, in the CB domain, is referred to as the constraint modes. It represents the displacement of all unloaded degrees of freedom when a unit displacement is applied at a single loaded degree of freedom, while keeping the rest constrained.
See also
Model order reduction
Finite element method
References
Finite element method
Structural analysis | Guyan reduction | Engineering | 520 |
20,598,265 | https://en.wikipedia.org/wiki/Phallus | A phallus (plural: phalli or phalluses) is a penis (especially when erect), an object that resembles a penis, or a mimetic image of an erect penis. In art history, a figure with an erect penis is described as ithyphallic.
Any object that symbolically—or, more precisely, iconically—resembles a penis may also be referred to as a phallus; however, such objects are more often referred to as being phallic (as in "phallic symbol"). Such symbols often represent fertility and cultural implications that are associated with the male sexual organ, as well as the male orgasm.
Etymology
The term is a loanword from Latin phallus, itself borrowed from Greek (phallos), which is ultimately a derivation from the Proto-Indo-European root *bʰel- "to inflate, swell". Compare with Old Norse (and modern Icelandic) boli, "bull", Old English bulluc, "bullock", Greek , "whale".
Archaeology
The Hohle Fels phallus, a 28,000-year-old siltstone phallus discovered in the Hohle Fels cave and reassembled in 2005, is among the oldest phallic representations known.
Religion
Ancient Egypt
The phallus played a role in the cult of Osiris in ancient Egyptian religion. When Osiris' body was cut in 14 pieces, Set scattered them all over Egypt, and his wife Isis retrieved all of them except one, his penis, which a fish swallowed; Isis made him a wooden replacement.
The phallus was a symbol of fertility, and the god Min was often depicted as ithyphallic, that is, with an erect penis.
Ancient Greece and Rome
In traditional Greek mythology, Hermes, the god of boundaries and exchange (popularly the messenger god), is considered to be a phallic deity by association with representations of him on herms (pillars) featuring a phallus. There is no scholarly consensus on this depiction, and it would be speculation to consider Hermes a fertility god. Pan, son of Hermes, was often depicted as having an exaggerated erect phallus.
Priapus is a Greek god of fertility whose symbol was an exaggerated phallus. The son of Aphrodite and Dionysus, according to Homer and most accounts, he is the protector of livestock, fruit plants, gardens, and male genitalia. His name is the origin of the medical term priapism.
The city of Tyrnavos in Greece holds an annual Phallus festival, a traditional event celebrating the phallus on the first days of Lent.
The phallus was ubiquitous in ancient Roman culture, particularly in the form of the fascinum, a phallic charm. The ruins of Pompeii produced bronze wind chimes (tintinnabula) that featured the phallus, often in multiples, to ward off the evil eye and other malevolent influences. Statues of Priapus similarly guarded gardens. Roman boys wore the bulla, an amulet that contained a phallic charm until they formally came of age. According to Augustine of Hippo, the cult of Father Liber, who presided over the citizen's entry into political and sexual manhood, involved a phallus. The phallic deity Mutunus Tutunus promoted marital sex. A sacred phallus was among the objects considered vital to the security of the Roman state, which was in the keeping of the Vestal Virgins. Sexuality in ancient Rome has sometimes been characterized as "phallocentric".
Ancient India
Shiva, one of the most widely worshiped male deities in Hinduism pantheon, is worshiped much more commonly in the form of the lingam. Evidence of the lingam in India dates back to prehistoric times. Although Lingam is not a mere phallic iconography, nor do the textual sources signify it as so, stone Lingams with several varieties are found to this date in many of the old temples and in museums in India and abroad, which are often more clearly phallic than later stylized lingams. The famous "man-size" Gudimallam Lingam in Andhra Pradesh is about in height, carved in polished black granite, and clearly represents an erect phallus, with a figure of the deity in relief superimposed down the shaft.
Many of the earliest depictions of Shiva as a figure in human form are ithyphallic, for example, in coins of the Kushan Empire. Some figures up to about the 11th century AD have erect phalluses, although they have become increasingly rare.
Indonesia
According to the Indonesian chronicles of the Babad Tanah Jawi, Prince Puger gained the kingly power from God by ingesting semen from the phallus of the already-dead Sultan Amangkurat II of Mataram.
Bhutan
The phallus is commonly depicted in its paintings. Wooden phalluses, with white ribbons hanging from the tip, are often hung above the doorways of houses to deter evil spirits.
Ancient Scandinavia
The Norse god Freyr is a phallic deity, representing male fertility and love.
The short story Völsa þáttr describes a family of Norwegians worshiping a preserved horse penis.
Some image stones, such as the Stora Hammers and Tängelgårda stones, were phallic shaped.
Iran
Khalid Nabi Cemetery (Persian: گورستان خالد نبی, "Cemetery of the Prophet Khaled") is a cemetery in northeastern Iran's Golestan province. Touristic visitors often have perceived the cylindrical shafts with the thicker top as depictions of male phalli. This gave rise to popular hypotheses about pre-Islamic fertility cults.
Japan
The Mara Kannon Shrine () in Nagato, Yamaguchi prefecture is one of many fertility shrines in Japan that still exist today. Also present in festivals such as the Danjiri Matsuri () in Kishiwada, Osaka prefecture, the Kanamara Matsuri in Kawasaki, and the Hōnen Matsuri (, Harvest Festival) in Komaki, Aichi Prefecture, though historically phallus adoration was more widespread.
Balkans
Kuker is a divinity personifying fecundity, sometimes in Bulgaria and Serbia it is a plural divinity. In Bulgaria, a ritual spectacle of spring (a sort of carnival performed by Kukeri) takes place after a scenario of folk theatre, in which Kuker's role is interpreted by a man attired in a sheep or goat-pelt, wearing a horned mask and girded with a large wooden phallus. During the ritual, various physiological acts are interpreted, including the sexual act, as a symbol of the god's sacred marriage, while the symbolical wife, appearing pregnant, mimes the pains of giving birth. This ritual inaugurates the labours of the fields (ploughing, sowing) and is carried out with the participation of numerous allegorical personages, among which are the Emperor and his entourage.
Switzerland
In Switzerland, the heraldic bears in a coat of arms had to be painted with bright red penises, otherwise, they would have been mocked as being she-bears. In 1579, a calendar printed in St. Gallen omitted the genitals from the heraldic bear of Appenzell, nearly leading to war between the two cantons.
The Americas
Figures of Kokopelli and Itzamna (as the Mayan tonsured maize god) in Pre-Columbian America often include phallic content. Additionally, over forty large monolithic sculptures (Xkeptunich) have been documented from Terminal Classic Maya sites, with most examples occurring in the Puuc region of Yucatán (Amrhein 2001). Uxmal has the largest collection, with eleven sculptures now housed under a protective roof. The largest sculpture was recorded at Almuchil measuring more than 320 cm high with a diameter at the base of the shaft measuring 44 cm.
Alternative sects
St. Priapus Church (French: Église S. Priape) is a North American new religion that centres on the worship of the phallus. Founded in the 1980s in Montreal, Quebec, by D. F. Cassidy, it has a following mainly among homosexual men in Canada and the United States. Semen is also treated with reverence, and its consumption is an act of worship. Semen is esteemed as sacred because of its divine life-giving power.
Psychoanalysis
The symbolic version of the phallus, a phallic symbol, is meant to represent male generative powers. According to Sigmund Freud's theory of psychoanalysis, while males possess a penis, no one can possess the symbolic phallus.
Jacques Lacan's Ecrits: A Selection includes an essay titled The Signification of the Phallus in which sexual differentiation is represented in terms of the difference between "being" and "having" the phallus, which for Lacan is the transcendent signifier of desire. Men are positioned as men insofar as they wish to have the phallus. Women, on the other hand, wish to be the phallus. This difference between having and being explains some tragicomic aspects of sexual life. Once a woman becomes, in the realm of the signifier, the phallus the man wants, he ceases to want it because one cannot desire what one has, and the man may be drawn to other women. Similarly, though, for the woman, the gift of the phallus deprives the man of what he has and thereby diminishes her desire.
Norbert Wiley states that Lacan's phallus is akin to Durkheim's mana.
In Gender Trouble, Judith Butler explores Freud's and Lacan's discussions of the symbolic phallus by pointing out the connection between the phallus and the penis. They write, "The law requires conformity to its own notion of 'nature'. It gains its legitimacy through the binary and asymmetrical naturalization of bodies in which the phallus, though clearly not identical to the penis, deploys the penis as its naturalized instrument and sign". In Bodies that Matter, they further explore the possibilities for the phallus in their discussion of The Lesbian Phallus. If, as they note, Freud enumerates a set of analogies and substitutions that rhetorically affirm the fundamental transferability of the phallus from the penis elsewhere, then any number of other things might come to stand in for the phallus.
Modern use of the phallus
The phallus is often used for advertising pornography, as well as the sale of contraception. It has often been used in provocative practical jokes and has been the central focus of adult-audience performances.
The phallus had a new set of art interpretations in the 20th century with the rise of Sigmund Freud, the founder of modern psychoanalysis of psychology. One example is "Princess X" by the Romanian modernist sculptor Constantin Brâncuși. He created a scandal in the Salon in 1919 when he represented or caricatured Princess Marie Bonaparte as a large gleaming bronze phallus. This phallus likely symbolizes Bonaparte's obsession with the penis and her lifelong quest to achieve vaginal orgasm.
See also
Dog's bollocks (typography)
Hōnen Matsuri
Kanamara Matsuri
Mars symbol
Maypole
Phallic architecture
Phallic narcissism
Phallus paintings in Bhutan
Pizzle
Theory of Phallicism (Black male studies)
References
Citations
Bibliography
Vigeland Monolith – Oslo, Norway Polytechnique.fr
Dulaure, Jacques-Antoine (1974). Les Divinités génératrices. Vervier, Belgium: Marabout. Without ISBN.
External links
Cult of Dionysus
Sexology
Sexuality and society | Phallus | Biology | 2,482 |
14,509,243 | https://en.wikipedia.org/wiki/Basel%20Action%20Network | The Basel Action Network (BAN), a charitable non-governmental organization, works to combat the export of toxic waste from technology and other products from industrialized societies to developing countries. BAN is based in Seattle, Washington, United States, with a partner office in the Philippines. BAN is named after the Basel Convention, a 1989 United Nations treaty designed to control and prevent the dumping of toxic wastes, particularly on developing countries. BAN serves as an unofficial watchdog and promoter of the Basel Convention and its decisions.
Campaigns
BAN currently runs four campaigns focusing on decreasing the amount of toxins entering the environment and protecting underdeveloped countries from serving as a toxic dump of the developed countries of the world. These include:
The e-Stewards Initiative
BAN's e-Stewards Electronics Stewardship campaign seeks to prevent toxic trade in hazardous electronic waste and includes a certification program for responsible electronics recycling known as the e-Stewards Initiative. It is available to electronics recyclers after they prove to have environmentally and socially responsible recycling techniques following audits conducted by accredited certifying bodies. Recyclers can become e-Steward certified after proving that they follow all national and international laws concerning electronic waste and its proper disposal, which includes bans on exporting, land dumping, incineration, and use of prison labor.
When the e-Stewards initiative was initially started with the Electronics TakeBack Coalition, it was called "The Electronics Recycler's Pledge of True Stewardship". In the beginning, the initiative verified a recycler's participation through "desk" and paper audits only. The e-Stewards certification, however, has been updated and requires compliance verification by a third party auditor.
Green ship recycling
BAN has teamed up with several other non-governmental organizations (NGOs), including Greenpeace, to form the NGO Platform on Shipbreaking. The platform is focused on the responsible breaking and disposal of end-of-life shipping vessels. The overall purpose of the platform is to stop the illegal dumping of toxic waste traveling from developed countries to undeveloped countries. The platform seeks more sustainable, environmentally and socially responsible techniques for disposing of such wastes, which can be achieved through a system where the polluter is responsible for paying any fees associated with the legal and safe disposal of ships and other marine vessels. The NGO platform endorses the principles outlined in the Basel Convention on the Control of Transboundary Movements of Hazardous Wastes and their Disposal.
See also
Computer recycling
Electronic waste in the United States
Environmental issues in the United States
Notes
References
Metech Announces Support for BAN E-Stewards Program
USA's trashed TVs, computer monitors can make toxic mess BAN Founder Jim Puckett expects much more e-waste will be exported from the U.S. once the broadcasting industry switches to digital signals on Feb. 17 and millions of households junk their old analog TV sets.
Responsible Electronics Recyclers
"After Dump, What Happens To Electronic Waste?", interview with Jim Puckett by Terry Gross, Fresh Air, NPR, December 21, 2010.
External links
Basel Action Network
e-Stewards
Electronic waste in the United States
Hazardous waste
International environmental organizations
Charities based in Washington (state)
International organizations based in the United States
Non-profit organizations based in Seattle
Waste organizations
501(c)(3) organizations | Basel Action Network | Technology | 685 |
40,449,930 | https://en.wikipedia.org/wiki/Gomisin%20A | Gomisin A is a bio-active compound isolated from Schisandra chinensis.
References
Phytochemicals | Gomisin A | Chemistry | 26 |
772,827 | https://en.wikipedia.org/wiki/Allyl%20group | In organic chemistry, an allyl group is a substituent with the structural formula H2C=CH−CH2−. It consists of a methylene bridge (−CH2−) attached to a vinyl group (−CH=CH2). The name is derived from the scientific name for garlic, Allium sativum. In 1844, Theodor Wertheim isolated an allyl derivative from garlic oil and named it "Schwefelallyl". The term allyl applies to many compounds related to H2C=CH−CH2−, some of which are of practical or of everyday importance, for example, allyl chloride.
Allylation is any chemical reaction that adds an allyl group to a substrate.
Nomenclature
A site adjacent to the unsaturated carbon atom is called the allylic position or allylic site. A group attached at this site is sometimes described as allylic. Thus, allyl alcohol (CH2=CHCH2OH) "has an allylic hydroxyl group". Allylic C−H bonds are about 15% weaker than the C−H bonds in ordinary sp3 carbon centers and are thus more reactive.
Benzylic and allylic are related in terms of structure, bond strength, and reactivity. Other reactions that tend to occur with allylic compounds are allylic oxidations, ene reactions, and the Tsuji–Trost reaction. Benzylic groups are related to allyl groups; both show enhanced reactivity.
Pentadienyl group
A group connected to two vinyl groups is said to be doubly allylic. The bond dissociation energy of C−H bonds on a doubly allylic centre is about 10% less than the bond dissociation energy of a C−H bond that is singly allylic. This weakening of the C−H bonds is reflected in the easy oxidation of compounds containing 1,4-pentadiene linkages. Some polyunsaturated fatty acids feature this pentadiene group: linoleic acid, α-linolenic acid, and arachidonic acid. They are susceptible to a range of reactions with oxygen (O2), starting with lipid peroxidation. Products include fatty acid hydroperoxides, epoxy-hydroxy polyunsaturated fatty acids, jasmonates, divinylether fatty acids, and leaf aldehydes. Some of these derivatives are signalling molecules, some are used in plant defense (antifeedants), some are precursors to other metabolites that are used by the plant.
One practical consequence of their high reactivity is that polyunsaturated fatty acids have poor shelf life owing to their tendency toward autoxidation, leading, in the case of edibles, to rancidification. Metals accelerate the degradation. These fats tend to polymerize, forming semisolids. This reactivity pattern is fundamental to the film-forming behavior of the "drying oils", which are components of oil paints and varnishes.
Homoallylic
The term homoallylic refers to the position on a carbon skeleton next to an allylic position. In but-3-enyl chloride , the chloride is homoallylic because it is bonded to the homoallylic site.
Bonding
The allyl group is widely encountered in organic chemistry. Allylic radicals, anions, and cations are often discussed as intermediates in reactions. All feature three contiguous sp²-hybridized carbon centers and all derive stability from resonance. Each species can be presented by two resonance structures with the charge or unpaired electron distributed at both 1,3 positions.
In terms of MO theory, the MO diagram has three molecular orbitals: the first one bonding, the second one non-bonding, and the higher energy orbital is antibonding.
Reactions and applications
This heightened reactivity of allylic groups has many practical consequences. The sulfur vulcanization of various rubbers exploits the conversion of allylic groups into crosslinks. Similarly, drying oils such as linseed oil crosslink via oxygenation of allylic (or doubly allylic) sites. This crosslinking underpins the properties of paints and the spoilage of foods by rancidification.
The industrial production of acrylonitrile by ammoxidation of propene exploits the easy oxidation of the allylic C−H centers:
2 CH3−CH=CH2 + 2 NH3 + 3 O2 → 2 CH2=CH−C≡N + 6 H2O
An estimated 800,000 tonnes (1997) of allyl chloride is produced by the chlorination of propylene:
CH3CH=CH2 + Cl2 → ClCH2CH=CH2 + HCl
It is the precursor to allyl alcohol and epichlorohydrin.
Allylation
Allylation is the attachment of an allyl group to a substrate, usually another organic compound. Classically, allylation involves the reaction of a carbanion with allyl chloride. Alternatives include carbonyl allylation with allylmetallic reagents, such as allyltrimethylsilane, or the iridium-catalyzed Krische allylation.
Allylation can be effected also by conjugate addition: the addition of an allyl group to the beta-position of an enone. The Hosomi-Sakurai reaction is a common method for conjugate allylation.
Oxidation
Allylic C-H bonds are susceptible to oxidation. One commercial application of allylic oxidation is the synthesis of nootkatone, the fragrance of grapefruit, from valencene, a more abundantly available sesquiterpenoid:
In the synthesis of some fine chemicals, selenium dioxide is used to convert alkenes to allylic alcohols:
R2C=CR'-CHR"2 + [O] → R2C=CR'-C(OH)R"2
where R, R', R" may be alkyl or aryl substituents.
From the industrial perspective, oxidation of benzylic C-H bonds are conducted on a particularly large scale, e.g. production of terephthalic acid, benzoic acid, and cumene hydroperoxide.
Allyl compounds
Many substituents can be attached to the allyl group to give stable compounds. Commercially important allyl compounds include:
Crotyl alcohol (CH3CH=CH−CH2OH)
Dimethylallyl pyrophosphate, central in the biosynthesis of terpenes, a precursor to many natural products, including natural rubber.
Transition-metal allyl complexes, such as allylpalladium chloride dimer
See also
Allylic strain
Carroll rearrangement
Allylic palladium complex
Tsuji–Trost reaction
Propargylic/Homopropargylic
Benzylic
Vinylic
Acetylenic
Naloxone
Allylic rearrangement
References
Alkenyl groups
Allyl compounds | Allyl group | Chemistry | 1,408 |
16,782,074 | https://en.wikipedia.org/wiki/Gliese%20176%20b | Gliese 176 b is a super-Earth exoplanet approximately 31 light years away in the constellation of Taurus. This planet orbits very close to its parent red dwarf star Gliese 176 (also called "HD 285968").
The initial announcement confused the planetary periodicity with the stellar periodicity of 40 days, thus giving a 10.24 day period for a 25 Earth-mass planet. Subsequent readings filtered out the star's rotation, giving a more accurate reading of the planet's orbit and minimum mass.
The planet orbits inside the inner magnetosphere of its star. The quoted temperature of 450 K is a "thermal equilibrium" temperature.
It is projected to be dominated by a rocky core, but the true mass is unknown. If the orbit is oriented such that we are viewing it at a nearly face-on angle, the planet may be significantly more massive than the lower limit. If so, it may have attracted a gas envelope like Uranus or Gliese 436 b.
References
External links
Taurus (constellation)
Exoplanets discovered in 2007
Terrestrial planets
Exoplanets detected by radial velocity
28,121,501 | https://en.wikipedia.org/wiki/Amanita%20franchetii | Amanita franchetii, also known as the yellow veiled amanita, or Franchet's amanita, is a species of fungus in the family Amanitaceae.
Taxonomy
It was given its current name by Swiss mycologist Victor Fayod in 1889 in honor of French botanist Adrien René Franchet.
A. aspera is a synonym of A. franchetii.
There exists a variety known as A. franchetii var. lactella that is entirely white except for the bright yellow universal veil remnants.
Description
The cap is wide, and is yellow-brown to brown in color. The flesh is white or pale yellow, and has a mild odor. The closely spaced gills are the same color as the flesh. The stipe is thick and larger at the base, also white to yellowish; loose areas of yellow veil form on the base. A thick ring is left by the partial veil.
Similar species
A similar fungus in western North America was also referred to as A. franchetii, but was long suspected of being a separate, undescribed species, and in 2013 was formally described under the name A. augusta.
Distribution and habitat
A. franchetii occurs in Europe and North Africa with oaks (Quercus ssp.), chestnuts (Castanea ssp.), and pines (Pinus ssp.).
A. franchetii var. lactella is found in the western Mediterranean region, associated with several species of oak (Quercus suber and Q. robur) and hornbeam (Carpinus betulus), and is also reported from Serbia.
Edibility
A. franchetii is considered inedible, and is reported as being toxic when raw or undercooked. Although the species was implicated in the 2005 deaths of ten people in China who displayed symptoms similar to those caused by alpha-Amanitin poisoning, this case report has been called into question for possible misidentification of the mushrooms involved.
See also
List of Amanita species
References
External links
- A description of the western North American species.
Amanita franchetii var. lactella photo, from Aranzadi Society of Sciences, Mycology Gallery.
franchetii
Fungi of North America
Fungi described in 1889
Inedible fungi
Fungus species | Amanita franchetii | Biology | 475 |
435,881 | https://en.wikipedia.org/wiki/Waveform%20monitor | A waveform monitor is a special type of oscilloscope used in television production applications. It is typically used to measure and display the level, or voltage, of a video signal with respect to time.
The level of a video signal usually corresponds to the brightness, or luminance, of the part of the image being drawn onto a regular video screen at the same point in time. A waveform monitor can be used to display the overall brightness of a television picture, or it can zoom in to show one or two individual lines of the video signal. It can also be used to visualize and observe special signals in the vertical blanking interval of a video signal, as well as the colorburst between each line of video.
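The kind of display this produces can be mimicked with a short Python sketch (illustrative only, operating on a frame already decoded into an 8-bit grayscale NumPy array rather than on the video signal itself); it builds the luma-versus-horizontal-position intensity map that a waveform monitor rasterizes:

import numpy as np

def luma_waveform(frame, levels=256):
    """Build a waveform-style intensity map from one grayscale frame.

    For each image column (the horizontal axis, i.e. position along a video line),
    count how many lines hit each luma level; overlaying all lines this way is what
    the line-mode display of a waveform monitor shows."""
    h, w = frame.shape
    wf = np.zeros((levels, w), dtype=np.int32)
    for x in range(w):
        wf[:, x] = np.bincount(frame[:, x].astype(np.int64), minlength=levels)
    return wf[::-1]          # flip so brighter (higher) levels are drawn toward the top

# Synthetic test frame: a horizontal ramp from black to white, 480 lines of 720 samples
frame = np.tile(np.linspace(0, 255, 720).astype(np.uint8), (480, 1))
wf = luma_waveform(frame)
print(wf.shape)              # (256, 720): one display column per image column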
Waveform monitors are used for the following purposes:
To assist with the calibration of professional video cameras, and to "line up" multiple-camera setups being used at the same location in order to ensure that the same scene shot under the same conditions will produce the same results.
As a tool to assist in telecine (film-to-tape transfer), color correction, and other video production activities
To monitor video signals to make sure that neither the color gamut, nor the analog transmission limits, are violated.
To diagnose and troubleshoot a television studio, or the equipment located therein.
To assist with installation of equipment into a television facility, or with the commissioning or certification of a facility.
In manufacturing test and research and development applications.
For setting camera exposure in the case of video and digital cinema cameras.
A waveform monitor is often used in conjunction with a vectorscope. Originally, these were separate devices; however modern waveform monitors include vectorscope functionality as a separate mode. (The combined device is simply called a "waveform monitor").
Originally, waveform monitors were entirely analog devices; the incoming (analog) video signal was filtered and amplified, and the resulting voltage was used to drive the vertical axis of a cathode-ray tube. A sync stripper circuit was used to isolate the sync pulses and colorburst from the video signal; the recovered sync information was fed to a sweep circuit which drove the horizontal axis. Early waveform monitors differed little from oscilloscopes, except for the specialized video trigger circuitry. Waveform monitors also permit the use of external reference; in this mode the sync and burst signals are taken from a separate input (thus allowing all devices in a facility to be genlocked, or synchronized to the same timing source).
With the advent of digital television and digital signal processing, the waveform monitor acquired many new features and capabilities. Modern waveform monitors contain many additional modes of operation, including picture mode (where the video picture is simply presented on the screen, much like a television), various modes optimized for color gamut checking, support for the audio portion of a television program (either embedded with the video, or on separate inputs), eye pattern and jitter displays for measuring the physical layer parameters of serial-digital television formats, modes for examining the serial digital protocol layer, and support for ancillary data and television-related metadata such as timecode, closed captions and the v-chip rating systems.
Modern waveform monitors and other oscilloscopes have largely abandoned old-style CRT technology as well. All new waveform monitors are based on a rasterizer, a piece of graphics hardware that duplicates the behavior of a CRT vector display, generating a raster signal. They may come with a flat-panel liquid crystal display, or they may be sold without a display, in which case the user can connect any VGA display.
See also
Color suite
Non-linear editing system
Linear video editing
Control room
Television studio production-control room
Television production
Film and video technology
Electronic test equipment | Waveform monitor | Technology,Engineering | 783 |
19,668,009 | https://en.wikipedia.org/wiki/Polymer%20field%20theory | A polymer field theory is a statistical field theory describing the statistical behavior of a neutral or charged polymer system. It can be derived by transforming the partition function from its standard many-dimensional integral representation over the particle degrees of freedom in a functional integral representation over an auxiliary field function, using either the Hubbard–Stratonovich transformation or the delta-functional transformation. Computer simulations based on polymer field theories have been shown to deliver useful results, for example to calculate the structures and properties of polymer solutions (Baeurle 2007, Schmid 1998), polymer melts (Schmid 1998, Matsen 2002, Fredrickson 2002) and thermoplastics (Baeurle 2006).
Canonical ensemble
Particle representation of the canonical partition function
The standard continuum model of flexible polymers, introduced by Edwards (Edwards 1965), treats a solution composed of linear monodisperse homopolymers as a system of coarse-grained polymers, in which the statistical mechanics of the chains is described by the continuous Gaussian thread model (Baeurle 2007) and the solvent is taken into account implicitly. The Gaussian thread model can be viewed as the continuum limit of the discrete Gaussian chain model, in which the polymers are described as continuous, linearly elastic filaments. The canonical partition function of such a system, kept at an inverse temperature and confined in a volume , can be expressed as
where is the potential of mean force given by,
representing the solvent-mediated non-bonded interactions among the segments, while represents the harmonic binding energy of the chains. The latter energy contribution can be formulated as
where b is the statistical segment length and N the polymerization index.
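The discrete Gaussian chain underlying this continuum description can be sampled directly; the following minimal Python sketch (an illustrative aside, with b, N and the sample size chosen arbitrarily) recovers the ideal-chain results for the mean-square end-to-end distance, N b², and the unperturbed radius of gyration, approximately N b²/6:

import numpy as np

rng = np.random.default_rng(0)
b, N, n_chains = 1.0, 100, 5000       # segment length, polymerization index, sample size

# Each bond vector is an independent Gaussian step with <bond^2> = b^2
bonds = rng.normal(scale=b / np.sqrt(3.0), size=(n_chains, N, 3))
beads = np.concatenate([np.zeros((n_chains, 1, 3)), np.cumsum(bonds, axis=1)], axis=1)

mean_Re2 = np.mean(np.sum(beads[:, -1, :] ** 2, axis=1))
print(mean_Re2, N * b**2)             # ideal chain: <R_e^2> = N b^2

com = beads.mean(axis=1, keepdims=True)
mean_Rg2 = np.mean(np.sum((beads - com) ** 2, axis=2))
print(mean_Rg2, N * b**2 / 6)         # unperturbed radius of gyration: <R_g^2> ~ N b^2 / 6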
Field-theoretic transformation
To derive the basic field-theoretic representation of the canonical partition function, one introduces in the following the segment density operator of the polymer system
Using this definition, one can rewrite Eq. (2) as
Next, one converts the model into a field theory by making use of the Hubbard-Stratonovich transformation or delta-functional transformation
where is a functional and
is the delta
functional given by
with representing the
auxiliary field function. Here we note that, expanding the field function in a Fourier series, implies that periodic boundary conditions are applied in all directions and that the -vectors designate the reciprocal lattice vectors of the supercell.
Basic field-theoretic representation of canonical partition function
Using the Eqs. (3), (4) and (5), we can recast the canonical partition function in Eq. (1) in field-theoretic representation, which leads to
where
can be interpreted as the partition function for an ideal gas of non-interacting polymers and
is the path integral of a free polymer in a zero field with elastic energy
In the latter equation the unperturbed radius of gyration of a chain . Moreover, in Eq. (6) the partition function of a single polymer, subjected to the field , is given by
Grand canonical ensemble
Basic field-theoretic representation of grand canonical partition function
To derive the grand canonical partition function, we use its standard thermodynamic relation to the canonical partition function, given by
where is the chemical potential and is given by Eq. (6). Performing the sum, this provides the field-theoretic representation of the grand canonical partition function,
where
is the grand canonical action with defined by
Eq. (8) and the constant
Moreover, the parameter related to the chemical potential is given by
where is provided by Eq. (7).
Mean field approximation
A standard approximation strategy for polymer field theories is the mean field (MF) approximation, which consists in replacing the many-body interaction term in the action by a term where all bodies of the system interact with an average effective field. This approach reduces any multi-body problem into an effective one-body problem by assuming that the partition function integral of the model is dominated by a single field configuration. A major benefit of solving problems with the MF approximation, or its numerical implementation commonly referred to as the self-consistent field theory (SCFT), is that it often provides some useful insights into the properties and behavior of complex many-body systems at relatively low computational cost. Successful applications of this approximation strategy can be found for various systems of polymers and complex fluids, like e.g. strongly segregated block copolymers of high molecular weight, highly concentrated neutral polymer solutions or highly concentrated block polyelectrolyte (PE) solutions (Schmid 1998, Matsen 2002, Fredrickson 2002). There are, however, a multitude of cases for which SCFT provides inaccurate or even qualitatively incorrect results (Baeurle 2006a). These comprise neutral polymer or polyelectrolyte solutions in dilute and semidilute concentration regimes, block copolymers near their order-disorder transition, polymer blends near their phase transitions, etc. In such situations the partition function integral defining the field-theoretic model is not entirely dominated by a single MF configuration and field configurations far from it can make important contributions, which require the use of more sophisticated calculation techniques beyond the MF level of approximation.
Higher-order corrections
One possibility to face the problem is to calculate higher-order corrections to the MF approximation. Tsonchev et al. developed such a strategy including leading (one-loop) order fluctuation corrections, which allowed to gain new insights into the physics of
confined PE solutions (Tsonchev 1999). However, in situations where the MF approximation is bad many computationally demanding higher-order corrections to the integral are necessary to get the desired accuracy.
Renormalization techniques
An alternative theoretical tool to cope with strong fluctuations problems occurring in field theories has been provided in the late 1940s by the concept of renormalization, which has originally been devised to calculate functional integrals arising in quantum field theories (QFT's). In QFT's a standard approximation strategy is to expand the functional integrals in a power series in the coupling constant using perturbation theory. Unfortunately, generally most of the expansion terms turn out to be infinite, rendering such calculations impracticable (Shirkov 2001). A way to remove the infinities from QFT's is to make use of the concept of renormalization (Baeurle 2007). It mainly consists in replacing the bare values of the coupling parameters, like e.g. electric charges or masses, by renormalized coupling parameters and requiring that the physical quantities do not change under this transformation, thereby leading to finite terms in the perturbation expansion. A simple physical picture of the procedure of renormalization can be drawn from the example of a classical electrical charge, , inserted into a polarizable medium, such as in an electrolyte solution. At a distance from the charge due to polarization of the medium, its Coulomb field will effectively depend on a function , i.e. the effective (renormalized) charge, instead of the bare electrical charge, . At the beginning of the 1970s, K.G. Wilson further pioneered the power of renormalization concepts by developing the formalism of renormalization group (RG) theory, to investigate critical phenomena of statistical systems (Wilson 1971).
Renormalization group theory
The RG theory makes use of a series of RG transformations, each of which consists of a coarse-graining step followed by a change of scale (Wilson 1974). In case of statistical-mechanical problems the steps are implemented by successively eliminating and rescaling the degrees of freedom in the partition sum or integral that defines the model under consideration. De Gennes used this strategy to establish an analogy between the behavior of the zero-component classical vector model of ferromagnetism near the phase transition and a self-avoiding random walk of a polymer chain of infinite length on a lattice, to calculate the polymer excluded volume exponents (de Gennes 1972). Adapting this concept to field-theoretic functional integrals, implies to study in a systematic way how a field theory model changes while eliminating and rescaling a certain number of degrees of freedom from the partition function integral (Wilson 1974).
Hartree renormalization
An alternative approach is known as the Hartree approximation or self-consistent one-loop approximation (Amit 1984). It takes advantage of Gaussian fluctuation corrections to the -order MF contribution, to renormalize the model parameters and extract in a self-consistent way the dominant length scale of the concentration fluctuations in critical concentration regimes.
Tadpole renormalization
In a more recent work Efimov and Nogovitsin showed that an alternative renormalization technique originating from QFT, based on the concept of tadpole renormalization, can be a very effective approach for computing functional integrals arising in statistical mechanics of classical many-particle systems (Efimov 1996). They demonstrated that the main contributions to classical partition function integrals are provided by low-order tadpole-type Feynman diagrams, which account for divergent contributions due to particle self-interaction. The renormalization procedure performed in this approach acts on the self-interaction contribution of a charge (like e.g. an electron or an ion), resulting from the static polarization induced in the vacuum due to the presence of that charge (Baeurle 2007). As evidenced by Efimov and Ganbold in an earlier work (Efimov 1991), the procedure of tadpole renormalization can be employed very effectively to remove the divergences from the action of the basic field-theoretic representation of the partition function and leads to an alternative functional integral representation, called the Gaussian equivalent representation (GER). They showed that the procedure provides functional integrals with significantly ameliorated convergence properties for analytical perturbation calculations. In subsequent works Baeurle et al. developed effective low-cost approximation methods based on the tadpole renormalization procedure, which have been shown to deliver useful results for prototypical polymer and PE solutions (Baeurle 2006a, Baeurle 2006b, Baeurle 2007a).
Numerical simulation
Another possibility is to use Monte Carlo (MC) algorithms and to sample the full partition function integral in field-theoretic formulation. The resulting procedure is then called a polymer field-theoretic simulation. In a recent work, however, Baeurle demonstrated that MC sampling in conjunction with the basic field-theoretic representation is impracticable due to the so-called numerical sign problem (Baeurle 2002). The difficulty is related to the complex and oscillatory nature of the resulting distribution function, which causes a bad statistical convergence of the ensemble averages of the desired thermodynamic and structural quantities. In such cases special analytical and numerical techniques are necessary to accelerate the statistical convergence (Baeurle 2003, Baeurle 2003a, Baeurle 2004).
Mean field representation
To make the methodology amenable for computation, Baeurle proposed to shift the contour of integration of the partition function integral through the homogeneous MF solution using Cauchy's integral theorem, providing its so-called mean-field representation. This strategy was previously successfully employed by Baer et al. in field-theoretic electronic structure calculations (Baer 1998). Baeurle could demonstrate that this technique provides a significant acceleration of the statistical convergence of the ensemble averages in the MC sampling procedure (Baeurle 2002, Baeurle 2002a).
Gaussian equivalent representation
In subsequent works Baeurle et al. (Baeurle 2002, Baeurle 2002a, Baeurle 2003, Baeurle 2003a, Baeurle 2004) applied the concept of tadpole renormalization, leading to the Gaussian equivalent representation of the partition function integral, in conjunction with advanced MC techniques in the grand canonical ensemble. They could convincingly demonstrate that this strategy provides a further
boost in the statistical convergence of the desired ensemble averages (Baeurle 2002).
References
External links
University of Regensburg Research Group on Theory and Computation of Advanced Materials
Statistical field theories | Polymer field theory | Physics | 2,498 |
42,128,051 | https://en.wikipedia.org/wiki/Nu%20Microscopii | ν Microscopii, Latinized as Nu Microscopii, is a star in the constellation Microscopium. It is an orange hued star that is visible to the naked eye as a faint point of light with an apparent visual magnitude of 4.76. It was first catalogued as Nu Indi by the French explorer and astronomer Nicolas Louis de Lacaille in 1756, before being reclassified in Microscopium and given its current Bayer designation of Nu Microscopii by Gould. The object is located at a distance of around 252 light-years from the Sun, based on parallax, and is drifting further away with a radial velocity of about +9 km/s.
It is an aging giant star with a stellar classification of K0 III. It has expanded to 10.6 times the girth of the Sun after exhausting the supply of hydrogen at its core and evolving off the main sequence. The star has 2.46 times the mass of the Sun. It is radiating 59.5 times the Sun's luminosity from its swollen photosphere at an effective temperature of 4,925 K.
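As a rough consistency check of these figures, the Stefan–Boltzmann scaling L/L☉ = (R/R☉)²(T/T☉)⁴ can be evaluated with a few lines of Python (an illustrative sketch; the nominal solar effective temperature of 5772 K is assumed):

R = 10.6        # radius in solar radii (quoted above)
T = 4925.0      # effective temperature in kelvin (quoted above)
T_SUN = 5772.0  # nominal solar effective temperature (assumed)

L = R**2 * (T / T_SUN)**4
print(round(L, 1))   # ~59.6, in good agreement with the quoted 59.5 solar luminosities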
References
K-type giants
Microscopium
Microscopii, Nu
Durchmusterung objects
195569
101477
7846 | Nu Microscopii | Astronomy | 265 |
44,981,117 | https://en.wikipedia.org/wiki/Mohai%20Agnes%20mineral%20water | The Mohai Agnes spring is in Hungary, in Fejér county, in the village of Moha next to the Bakony hills.
Moha lies in the central part of Fejér county, north of Székesfehérvár. The Agnes spring is found at the northern end of the 2–4 km long trench between the Bakony and the Vértes. Much of the region is known for its high groundwater level and for its many known and hidden springs.
History
The first records of the spring date from 1374. It was popularly known as "Aldokut" (blessing well); nowadays it is called the Agnes spring.
The earliest known chemical analysis of the water was made in 1810. Further accounts of the Mohai Agnes water were published in Leipzig in 1835 and in Stuttgart in 1876. In 1880 the Hungarian Academy of Sciences supported further study of the water.
According to the chemical analysis, the main components are calcium carbonate, magnesium, sodium, potassium, lithium, iron oxide, calcium sulfate, silicic acid and titanic acid. The analysis emphasizes that the last of these is unusual, since it is very rare; a similar composition has been noted only in one Norwegian mineral water. As for curative effects, the Mohai Agnes water is considered good for respiratory and digestive diseases. The water, at 11.2 °C and enriched with natural carbon dioxide, has been known since the 14th century. The Mohai Agnes mineral water contains various minerals, each present in natural, ionic form, and several other characteristics make it stand out from other mineral waters distributed in Hungary.
Thanks to the work of Amade Tadde, the Bajzáth family and Imre Kempelen in exploring the spring and starting bottling, the Mohai Agnes mineral water became popular throughout the country and well known across Europe. Distribution of the mineral water started in the 19th century and grew continuously until World War I.
After the political transition and privatization, the Elore Mezogazdasagi Termelo es Szolgaltato Szovetkezet (Forward Agricultural Producer and Service Provider Co-operative) bottled the Mohai Agnes. In 1999 Karsai Holding Plc. acquired the majority stake, and in August 2000 Karsai launched Mohai Agnes Ltd.
In 2005 Mohai Agnes was sold to Uniresal Food Ltd. and the mineral water appeared on the shelves of CBA stores. After considerable investment and modernization, liquidation began in 2008.
Composition
Calcium: 339 mg/l
Magnesium: 67 mg/l
Bicarbonate: 1452 mg/l
Potassium: 11.4 mg/l
Sodium: 21 mg/l
Nitrite: 0 mg/l
Nitrate: 0 mg/l
1993
References
Mineral waters, medicinal waters, drinking cures
Main Types of Mineral and Medical Waters
Computer Supported Profile Analysis Of Sensory Quality Of Hungarian Mineral Waters
Mohai Agnes mineral water
Company Overview of Mohai Agnes
The structure and environment of the Mohai Ágnes mineral water(A MOHAI ÁGNES ÁSVÁNYVÍZ SZERKEZETI TELEPTANI KÖRNYEZETÉRŐL)
Mohai Ágnes víz (Mohai Agnes mineral water) on facebook
National and Historical Symbol Moha
Bottled water brands
Mineral water | Mohai Agnes mineral water | Chemistry | 682 |
84,400 | https://en.wikipedia.org/wiki/Zero-point%20energy | Zero-point energy (ZPE) is the lowest possible energy that a quantum mechanical system may have. Unlike in classical mechanics, quantum systems constantly fluctuate in their lowest energy state as described by the Heisenberg uncertainty principle. Therefore, even at absolute zero, atoms and molecules retain some vibrational motion. Apart from atoms and molecules, the empty space of the vacuum also has these properties. According to quantum field theory, the universe can be thought of not as isolated particles but continuous fluctuating fields: matter fields, whose quanta are fermions (i.e., leptons and quarks), and force fields, whose quanta are bosons (e.g., photons and gluons). All these fields have zero-point energy. These fluctuating zero-point fields lead to a kind of reintroduction of an aether in physics since some systems can detect the existence of this energy. However, this aether cannot be thought of as a physical medium if it is to be Lorentz invariant such that there is no contradiction with Einstein's theory of special relativity.
The notion of a zero-point energy is also important for cosmology, and physics currently lacks a full theoretical model for understanding zero-point energy in this context; in particular, the discrepancy between theorized and observed vacuum energy in the universe is a source of major contention. Yet according to Einstein's theory of general relativity, any such energy would gravitate, and the experimental evidence from the expansion of the universe, dark energy and the Casimir effect shows any such energy to be exceptionally weak. One proposal that attempts to address this issue is to say that the fermion field has a negative zero-point energy, while the boson field has positive zero-point energy and thus these energies somehow cancel out each other. This idea would be true if supersymmetry were an exact symmetry of nature; however, the Large Hadron Collider at CERN has so far found no evidence to support it. Moreover, it is known that if supersymmetry is valid at all, it is at most a broken symmetry, only true at very high energies, and no one has been able to show a theory where zero-point cancellations occur in the low-energy universe we observe today. This discrepancy is known as the cosmological constant problem and it is one of the greatest unsolved mysteries in physics. Many physicists believe that "the vacuum holds the key to a full understanding of nature".
Etymology and terminology
The term zero-point energy (ZPE) is a translation from the German . Sometimes used interchangeably with it are the terms zero-point radiation and ground state energy. The term zero-point field (ZPF) can be used when referring to a specific vacuum field, for instance the QED vacuum which specifically deals with quantum electrodynamics (e.g., electromagnetic interactions between photons, electrons and the vacuum) or the QCD vacuum which deals with quantum chromodynamics (e.g., color charge interactions between quarks, gluons and the vacuum). A vacuum can be viewed not as empty space but as the combination of all zero-point fields. In quantum field theory this combination of fields is called the vacuum state, its associated zero-point energy is called the vacuum energy and the average energy value is called the vacuum expectation value (VEV) also called its condensate.
Overview
In classical mechanics all particles can be thought of as having some energy made up of their potential energy and kinetic energy. Temperature, for example, arises from the intensity of random particle motion caused by kinetic energy (known as Brownian motion). As temperature is reduced to absolute zero, it might be thought that all motion ceases and particles come completely to rest. In fact, however, kinetic energy is retained by particles even at the lowest possible temperature. The random motion corresponding to this zero-point energy never vanishes; it is a consequence of the uncertainty principle of quantum mechanics.
The uncertainty principle states that no object can ever have precise values of position and velocity simultaneously. The total energy of a quantum mechanical object (potential and kinetic) is described by its Hamiltonian which also describes the system as a harmonic oscillator, or wave function, that fluctuates between various energy states (see wave-particle duality). All quantum mechanical systems undergo fluctuations even in their ground state, a consequence of their wave-like nature. The uncertainty principle requires every quantum mechanical system to have a fluctuating zero-point energy greater than the minimum of its classical potential well. This results in motion even at absolute zero. For example, liquid helium does not freeze under atmospheric pressure regardless of temperature due to its zero-point energy.
Given the equivalence of mass and energy expressed by Albert Einstein's E = mc², any point in space that contains energy can be thought of as having mass to create particles. Modern physics has developed quantum field theory (QFT) to understand the fundamental interactions between matter and forces; it treats every single point of space as a quantum harmonic oscillator. According to QFT the universe is made up of matter fields, whose quanta are fermions (i.e. leptons and quarks), and force fields, whose quanta are bosons (e.g. photons and gluons). All these fields have zero-point energy. Recent experiments support the idea that particles themselves can be thought of as excited states of the underlying quantum vacuum, and that all properties of matter are merely vacuum fluctuations arising from interactions of the zero-point field.
The idea that "empty" space can have an intrinsic energy associated with it, and that there is no such thing as a "true vacuum" is seemingly unintuitive. It is often argued that the entire universe is completely bathed in the zero-point radiation, and as such it can add only some constant amount to calculations. Physical measurements will therefore reveal only deviations from this value. For many practical calculations zero-point energy is dismissed by fiat in the mathematical model as a term that has no physical effect. Such treatment causes problems however, as in Einstein's theory of general relativity the absolute energy value of space is not an arbitrary constant and gives rise to the cosmological constant. For decades most physicists assumed that there was some undiscovered fundamental principle that will remove the infinite zero-point energy and make it completely vanish. If the vacuum has no intrinsic, absolute value of energy it will not gravitate. It was believed that as the universe expands from the aftermath of the Big Bang, the energy contained in any unit of empty space will decrease as the total energy spreads out to fill the volume of the universe; galaxies and all matter in the universe should begin to decelerate. This possibility was ruled out in 1998 by the discovery that the expansion of the universe is not slowing down but is in fact accelerating, meaning empty space does indeed have some intrinsic energy. The discovery of dark energy is best explained by zero-point energy, though it still remains a mystery as to why the value appears to be so small compared to the huge value obtained through theory – the cosmological constant problem.
Many physical effects attributed to zero-point energy have been experimentally verified, such as spontaneous emission, Casimir force, Lamb shift, magnetic moment of the electron and Delbrück scattering. These effects are usually called "radiative corrections". In more complex nonlinear theories (e.g. QCD) zero-point energy can give rise to a variety of complex phenomena such as multiple stable states, symmetry breaking, chaos and emergence. Active areas of research include the effects of virtual particles, quantum entanglement, the difference (if any) between inertial and gravitational mass, variation in the speed of light, a reason for the observed value of the cosmological constant and the nature of dark energy.
History
Early aether theories
Zero-point energy evolved from historical ideas about the vacuum. To Aristotle the vacuum was , "the empty"; i.e., space independent of body. He believed this concept violated basic physical principles and asserted that the elements of fire, air, earth, and water were not made of atoms, but were continuous. To the atomists the concept of emptiness had absolute character: it was the distinction between existence and nonexistence. Debate about the characteristics of the vacuum were largely confined to the realm of philosophy, it was not until much later on with the beginning of the renaissance, that Otto von Guericke invented the first vacuum pump and the first testable scientific ideas began to emerge. It was thought that a totally empty volume of space could be created by simply removing all gases. This was the first generally accepted concept of the vacuum.
Late in the 19th century, however, it became apparent that the evacuated region still contained thermal radiation. The existence of the aether as a substitute for a true void was the most prevalent theory of the time. According to the successful electromagnetic aether theory based upon Maxwell's electrodynamics, this all-encompassing aether was endowed with energy and hence very different from nothingness. The fact that electromagnetic and gravitational phenomena were transmitted in empty space was considered evidence that their associated aethers were part of the fabric of space itself. However Maxwell noted that for the most part these aethers were ad hoc:
Moreover, the results of the Michelson–Morley experiment in 1887 were the first strong evidence that the then-prevalent aether theories were seriously flawed, and initiated a line of research that eventually led to special relativity, which ruled out the idea of a stationary aether altogether. To scientists of the period, it seemed that a true vacuum in space might be created by cooling and thus eliminating all radiation or energy. From this idea evolved the second concept of achieving a real vacuum: cool a region of space down to absolute zero temperature after evacuation. Absolute zero was technically impossible to achieve in the 19th century, so the debate remained unsolved.
Second quantum theory
In 1900, Max Planck derived the average energy ⟨E⟩ of a single energy radiator, e.g., a vibrating atomic unit, as a function of absolute temperature:

⟨E⟩ = hν / (e^(hν/kT) − 1),

where h is the Planck constant, ν is the frequency, k is the Boltzmann constant, and T is the absolute temperature. The zero-point energy makes no contribution to Planck's original law, as its existence was unknown to Planck in 1900.
The concept of zero-point energy was developed by Max Planck in Germany in 1911 as a corrective term added to a zero-grounded formula developed in his original quantum theory in 1900.
In 1912, Max Planck published the first journal article to describe the discontinuous emission of radiation, based on the discrete quanta of energy. In Planck's "second quantum theory" resonators absorbed energy continuously, but emitted energy in discrete energy quanta only when they reached the boundaries of finite cells in phase space, where their energies became integer multiples of hν. This theory led Planck to his new radiation law, but in this version energy resonators possessed a zero-point energy, the smallest average energy a resonator could take on. Planck's radiation equation contained a residual energy factor, one hν/2, as an additional term dependent on the frequency ν, which was greater than zero (where h is the Planck constant). It is therefore widely agreed that "Planck's equation marked the birth of the concept of zero-point energy." In a series of papers from 1911 to 1913, Planck found the average energy of an oscillator to be:

⟨E⟩ = hν/2 + hν / (e^(hν/kT) − 1)
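The practical difference between the two laws is easiest to see numerically. The following Python sketch (the frequency and temperatures are arbitrary illustrative choices, not values taken from Planck's papers) evaluates both expressions and shows that the 1900 form vanishes as the temperature approaches absolute zero, while the 1912 form tends to the residual hν/2:

import math

h = 6.62607015e-34   # Planck constant, J*s
k = 1.380649e-23     # Boltzmann constant, J/K

def planck_1900(nu, T):
    # Average oscillator energy from Planck's original (1900) law
    return h * nu / math.expm1(h * nu / (k * T))

def planck_1912(nu, T):
    # Average energy from the "second theory", which adds the h*nu/2 zero-point term
    return h * nu / 2 + planck_1900(nu, T)

nu = 1e13  # an infrared frequency in Hz (illustrative choice)
for T in (3000.0, 300.0, 3.0):
    print(f"T = {T:6.1f} K:  1900 law: {planck_1900(nu, T):.3e} J   "
          f"1912 law: {planck_1912(nu, T):.3e} J")
print(f"h*nu/2 = {h * nu / 2:.3e} J  (residual zero-point term)")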
Soon, the idea of zero-point energy attracted the attention of Albert Einstein and his assistant Otto Stern. In 1913 they published a paper that attempted to prove the existence of zero-point energy by calculating the specific heat of hydrogen gas and compared it with the experimental data. However, after assuming they had succeeded, they retracted support for the idea shortly after publication because they found Planck's second theory may not apply to their example. In a letter to Paul Ehrenfest of the same year Einstein declared zero-point energy "dead as a doornail". Zero-point energy was also invoked by Peter Debye, who noted that zero-point energy of the atoms of a crystal lattice would cause a reduction in the intensity of the diffracted radiation in X-ray diffraction even as the temperature approached absolute zero. In 1916 Walther Nernst proposed that empty space was filled with zero-point electromagnetic radiation. With the development of general relativity Einstein found the energy density of the vacuum to contribute towards a cosmological constant in order to obtain static solutions to his field equations; the idea that empty space, or the vacuum, could have some intrinsic energy associated with it had returned, with Einstein stating in 1920:
Kurt Bennewitz and Francis Simon (1923), who worked at Walther Nernst's laboratory in Berlin, studied the melting process of chemicals at low temperatures. Their calculations of the melting points of hydrogen, argon and mercury led them to conclude that the results provided evidence for a zero-point energy. Moreover, they suggested correctly, as was later verified by Simon (1934), that this quantity was responsible for the difficulty in solidifying helium even at absolute zero. In 1924 Robert Mulliken provided direct evidence for the zero-point energy of molecular vibrations by comparing the band spectrum of 10BO and 11BO: the isotopic difference in the transition frequencies between the ground vibrational states of two different electronic levels would vanish if there were no zero-point energy, in contrast to the observed spectra. Then just a year later in 1925, with the development of matrix mechanics in Werner Heisenberg's article "Quantum theoretical re-interpretation of kinematic and mechanical relations" the zero-point energy was derived from quantum mechanics.
In 1913 Niels Bohr had proposed what is now called the Bohr model of the atom, but despite this it remained a mystery as to why electrons do not fall into their nuclei. According to classical ideas, the fact that an accelerating charge loses energy by radiating implied that an electron should spiral into the nucleus and that atoms should not be stable. This problem of classical mechanics was nicely summarized by James Hopwood Jeans in 1915: "There would be a very real difficulty in supposing that the (force) law held down to the zero values of . For the forces between two charges at zero distance would be infinite; we should have charges of opposite sign continually rushing together and, when once together, no force would tend to shrink into nothing or to diminish indefinitely in size." The resolution to this puzzle came in 1926 when Erwin Schrödinger introduced the Schrödinger equation. This equation explained the new, non-classical fact that an electron confined to be close to a nucleus would necessarily have a large kinetic energy so that the minimum total energy (kinetic plus potential) actually occurs at some positive separation rather than at zero separation; in other words, zero-point energy is essential for atomic stability.
Quantum field theory and beyond
In 1926, Pascual Jordan published the first attempt to quantize the electromagnetic field. In a joint paper with Max Born and Werner Heisenberg he considered the field inside a cavity as a superposition of quantum harmonic oscillators. In his calculation he found that in addition to the "thermal energy" of the oscillators there also had to exist an infinite zero-point energy term. He was able to obtain the same fluctuation formula that Einstein had obtained in 1909. However, Jordan did not think that his infinite zero-point energy term was "real", writing to Einstein that "it is just a quantity of the calculation having no direct physical meaning". Jordan found a way to get rid of the infinite term, publishing a joint work with Pauli in 1928, performing what has been called "the first infinite subtraction, or renormalisation, in quantum field theory".
Building on the work of Heisenberg and others, Paul Dirac's theory of emission and absorption (1927) was the first application of the quantum theory of radiation. Dirac's work was seen as crucially important to the emerging field of quantum mechanics; it dealt directly with the process in which "particles" are actually created: spontaneous emission. Dirac described the quantization of the electromagnetic field as an ensemble of harmonic oscillators with the introduction of the concept of creation and annihilation operators of particles. The theory showed that spontaneous emission depends upon the zero-point energy fluctuations of the electromagnetic field in order to get started. In a process in which a photon is annihilated (absorbed), the photon can be thought of as making a transition into the vacuum state. Similarly, when a photon is created (emitted), it is occasionally useful to imagine that the photon has made a transition out of the vacuum state. In the words of Dirac:
Contemporary physicists, when asked to give a physical explanation for spontaneous emission, generally invoke the zero-point energy of the electromagnetic field. This view was popularized by Victor Weisskopf who in 1935 wrote:
This view was also later supported by Theodore Welton (1948), who argued that spontaneous emission "can be thought of as forced emission taking place under the action of the fluctuating field". This new theory, which Dirac coined quantum electrodynamics (QED), predicted a fluctuating zero-point or "vacuum" field existing even in the absence of sources.
Throughout the 1940s improvements in microwave technology made it possible to take more precise measurements of the shift of the levels of a hydrogen atom, now known as the Lamb shift, and measurement of the magnetic moment of the electron. Discrepancies between these experiments and Dirac's theory led to the idea of incorporating renormalisation into QED to deal with zero-point infinities. Renormalization was originally developed by Hans Kramers and also Victor Weisskopf (1936), and first successfully applied to calculate a finite value for the Lamb shift by Hans Bethe (1947). As per spontaneous emission, these effects can in part be understood with interactions with the zero-point field. But in light of renormalisation being able to remove some zero-point infinities from calculations, not all physicists were comfortable attributing zero-point energy any physical meaning, viewing it instead as a mathematical artifact that might one day be eliminated. In Wolfgang Pauli's 1945 Nobel lecture he made clear his opposition to the idea of zero-point energy stating "It is clear that this zero-point energy has no physical reality".
In 1948 Hendrik Casimir showed that one consequence of the zero-point field is an attractive force between two uncharged, perfectly conducting parallel plates, the so-called Casimir effect. At the time, Casimir was studying the properties of colloidal solutions. These are viscous materials, such as paint and mayonnaise, that contain micron-sized particles in a liquid matrix. The properties of such solutions are determined by Van der Waals forces – short-range, attractive forces that exist between neutral atoms and molecules. One of Casimir's colleagues, Theo Overbeek, realized that the theory that was used at the time to explain Van der Waals forces, which had been developed by Fritz London in 1930, did not properly explain the experimental measurements on colloids. Overbeek therefore asked Casimir to investigate the problem. Working with Dirk Polder, Casimir discovered that the interaction between two neutral molecules could be correctly described only if the fact that light travels at a finite speed was taken into account. Soon afterwards after a conversation with Bohr about zero-point energy, Casimir noticed that this result could be interpreted in terms of vacuum fluctuations. He then asked himself what would happen if there were two mirrors – rather than two molecules – facing each other in a vacuum. It was this work that led to his prediction of an attractive force between reflecting plates. The work by Casimir and Polder opened up the way to a unified theory of van der Waals and Casimir forces and a smooth continuum between the two phenomena. This was done by Lifshitz (1956) in the case of plane parallel dielectric plates. The generic name for both van der Waals and Casimir forces is dispersion forces, because both of them are caused by dispersions of the operator of the dipole moment. The role of relativistic forces becomes dominant at orders of a hundred nanometers.
In 1951 Herbert Callen and Theodore Welton proved the quantum fluctuation-dissipation theorem (FDT), which was originally formulated in classical form by Nyquist (1928) as an explanation for observed Johnson noise in electric circuits. The fluctuation-dissipation theorem showed that when something dissipates energy, in an effectively irreversible way, a connected heat bath must also fluctuate. The fluctuations and the dissipation go hand in hand; it is impossible to have one without the other. The implication of the FDT is that the vacuum could be treated as a heat bath coupled to a dissipative force, and as such energy could, in part, be extracted from the vacuum for potentially useful work. FDT has been shown to be true experimentally under certain quantum, non-classical, conditions.
In 1963 the Jaynes–Cummings model was developed describing the system of a two-level atom interacting with a quantized field mode (i.e. the vacuum) within an optical cavity. It gave nonintuitive predictions, such as that an atom's spontaneous emission could be driven by a field of effectively constant frequency (the Rabi frequency). In the 1970s experiments were being performed to test aspects of quantum optics and showed that the rate of spontaneous emission of an atom could be controlled using reflecting surfaces. These results were at first regarded with suspicion in some quarters: it was argued that no modification of a spontaneous emission rate would be possible; after all, how can the emission of a photon be affected by an atom's environment when the atom can only "see" its environment by emitting a photon in the first place? These experiments gave rise to cavity quantum electrodynamics (CQED), the study of effects of mirrors and cavities on radiative corrections. Spontaneous emission can be suppressed (or "inhibited") or amplified. Amplification was first predicted by Purcell in 1946 (the Purcell effect) and has been experimentally verified. This phenomenon can be understood, partly, in terms of the action of the vacuum field on the atom.
Uncertainty principle
Zero-point energy is fundamentally related to the Heisenberg uncertainty principle. Roughly speaking, the uncertainty principle states that complementary variables (such as a particle's position and momentum, or a field's value and derivative at a point in space) cannot simultaneously be specified precisely by any given quantum state. In particular, there cannot exist a state in which the system simply sits motionless at the bottom of its potential well, for then its position and momentum would both be completely determined to arbitrarily great precision. Therefore, the lowest-energy state (the ground state) of the system must have a distribution in position and momentum that satisfies the uncertainty principle, which implies its energy must be greater than the minimum of the potential well.
Near the bottom of a potential well, the Hamiltonian of a general system (the quantum-mechanical operator giving its energy) can be approximated as a quantum harmonic oscillator,

H = V(x₀) + ½ m ω² (x − x₀)² + p²/2m,

where V(x₀) is the minimum of the classical potential well.

The uncertainty principle tells us that

√⟨(x − x₀)²⟩ · √⟨p²⟩ ≥ ħ/2,

making the expectation values of the kinetic and potential terms above satisfy

⟨p²/2m⟩ · ⟨½ m ω² (x − x₀)²⟩ ≥ (ħω/4)².

The expectation value of the energy must therefore be at least

⟨H⟩ ≥ V(x₀) + ħω/2,

where ω is the angular frequency at which the system oscillates.

A more thorough treatment, showing that the energy of the ground state actually saturates this bound and is exactly E₀ = V(x₀) + ħω/2, requires solving for the ground state of the system.
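The bound can also be illustrated with a simple numerical sketch. Assuming minimum-uncertainty Gaussian trial states (the electron mass and oscillation frequency below are arbitrary illustrative choices), the mean energy never falls below ħω/2 and attains it at the optimal width:

import numpy as np

hbar = 1.054571817e-34  # reduced Planck constant, J*s
m = 9.109e-31           # particle mass, kg (an electron, as an illustrative choice)
omega = 1.0e15          # oscillator angular frequency, rad/s (illustrative choice)

# For a minimum-uncertainty Gaussian of position spread sigma, <p^2> = hbar^2/(4 sigma^2),
# so, taking the potential minimum as the zero of energy,
#   E(sigma) = hbar^2/(8 m sigma^2) + (1/2) m omega^2 sigma^2
sigma = np.logspace(-12, -9, 20001)   # scan trial widths in metres
E = hbar**2 / (8 * m * sigma**2) + 0.5 * m * omega**2 * sigma**2

print(f"minimum of E(sigma): {E.min():.4e} J")
print(f"hbar*omega/2       : {hbar * omega / 2:.4e} J")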
Atomic physics
The idea of a quantum harmonic oscillator and its associated energy can apply to either an atom or a subatomic particle. In ordinary atomic physics, the zero-point energy is the energy associated with the ground state of the system. The professional physics literature tends to measure frequency, as denoted by ν above, using angular frequency, denoted ω and defined by ω = 2πν. This leads to a convention of writing the Planck constant with a bar through its top (ħ) to denote the quantity h/2π. In these terms, an example of zero-point energy is the above E = ħω/2 associated with the ground state of the quantum harmonic oscillator. In quantum mechanical terms, the zero-point energy is the expectation value of the Hamiltonian of the system in the ground state.
If more than one ground state exists, they are said to be degenerate. Many systems have degenerate ground states. Degeneracy occurs whenever there exists a unitary operator which acts non-trivially on a ground state and commutes with the Hamiltonian of the system.
According to the third law of thermodynamics, a system at absolute zero temperature exists in its ground state; thus, its entropy is determined by the degeneracy of the ground state. Many systems, such as a perfect crystal lattice, have a unique ground state and therefore have zero entropy at absolute zero. It is also possible for the highest excited state to have absolute zero temperature for systems that exhibit negative temperature.
The wave function of the ground state of a particle in a one-dimensional well is a half-period sine wave which goes to zero at the two edges of the well. The energy of the particle is given by:

E_n = h² n² / (8 m L²),

where h is the Planck constant, m is the mass of the particle, n is the energy state (n = 1 corresponds to the ground-state energy), and L is the width of the well.
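As a rough illustration, the formula can be evaluated for an electron confined to a 1 nm well (the parameters are an arbitrary illustrative choice, not taken from the text above):

h = 6.62607015e-34      # Planck constant, J*s
m_e = 9.1093837015e-31  # electron mass, kg
L = 1e-9                # well width: 1 nm (illustrative choice)
eV = 1.602176634e-19    # joules per electronvolt

for n in (1, 2, 3):
    E_n = n**2 * h**2 / (8 * m_e * L**2)
    print(f"n = {n}:  E = {E_n / eV:.3f} eV")
# The n = 1 value (~0.38 eV) is the zero-point energy: the confined electron
# can never have less energy than this.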
Quantum field theory
In quantum field theory (QFT), the fabric of "empty" space is visualized as consisting of fields, with the field at every point in space and time being a quantum harmonic oscillator, with neighboring oscillators interacting with each other. According to QFT the universe is made up of matter fields whose quanta are fermions (e.g. electrons and quarks), force fields whose quanta are bosons (i.e. photons and gluons) and a Higgs field whose quantum is the Higgs boson. The matter and force fields have zero-point energy. A related term is zero-point field (ZPF), which is the lowest energy state of a particular field. The vacuum can be viewed not as empty space, but as the combination of all zero-point fields.
In QFT the zero-point energy of the vacuum state is called the vacuum energy and the average expectation value of the Hamiltonian is called the vacuum expectation value (also called condensate or simply VEV). The QED vacuum is a part of the vacuum state which specifically deals with quantum electrodynamics (e.g. electromagnetic interactions between photons, electrons and the vacuum) and the QCD vacuum deals with quantum chromodynamics (e.g. color charge interactions between quarks, gluons and the vacuum). Recent experiments advocate the idea that particles themselves can be thought of as excited states of the underlying quantum vacuum, and that all properties of matter are merely vacuum fluctuations arising from interactions with the zero-point field.
Each point in space makes a contribution of E = ħω/2, resulting in a calculation of infinite zero-point energy in any finite volume; this is one reason renormalization is needed to make sense of quantum field theories. In cosmology, the vacuum energy is one possible explanation for the cosmological constant and the source of dark energy.
Scientists are not in agreement about how much energy is contained in the vacuum. Quantum mechanics requires the energy to be large, as Paul Dirac claimed it is: like a sea of energy. Other scientists specializing in general relativity require the energy to be small enough for the curvature of space to agree with observed astronomy. The Heisenberg uncertainty principle allows the energy to be as large as needed to promote quantum actions for a brief moment of time, even if the average energy is small enough to satisfy relativity and flat space. To cope with these disagreements, the vacuum energy is described as a virtual energy potential of positive and negative energy.
In quantum perturbation theory, it is sometimes said that the contribution of one-loop and multi-loop Feynman diagrams to elementary particle propagators are the contribution of vacuum fluctuations, or the zero-point energy to the particle masses.
Quantum electrodynamic vacuum
The oldest and best known quantized force field is the electromagnetic field. Maxwell's equations have been superseded by quantum electrodynamics (QED). By considering the zero-point energy that arises from QED it is possible to gain a characteristic understanding of zero-point energy that arises not just through electromagnetic interactions but in all quantum field theories.
Redefining the zero of energy
In the quantum theory of the electromagnetic field, classical wave amplitudes and are replaced by operators and that satisfy:
The classical quantity appearing in the classical expression for the energy of a field mode is replaced in quantum theory by the photon number operator . The fact that:
implies that quantum theory does not allow states of the radiation field for which the photon number and a field amplitude can be precisely defined, i.e., we cannot have simultaneous eigenstates for a†a and the field amplitude. The reconciliation of wave and particle attributes of the field is accomplished via the association of a probability amplitude with a classical mode pattern. The calculation of field modes is an entirely classical problem, while the quantum properties of the field are carried by the mode "amplitudes" a and a† associated with these classical modes.
The zero-point energy of the field arises formally from the non-commutativity of a and a†. This is true for any harmonic oscillator: the zero-point energy ħω/2 appears when we write the Hamiltonian:

H = ħω (aa† + a†a)/2 = ħω (a†a + 1/2)
It is often argued that the entire universe is completely bathed in the zero-point electromagnetic field, and as such it can add only some constant amount to expectation values. Physical measurements will therefore reveal only deviations from the vacuum state. Thus the zero-point energy can be dropped from the Hamiltonian by redefining the zero of energy, or by arguing that it is a constant and therefore has no effect on the Heisenberg equations of motion. Thus we can choose to declare by fiat that the ground state has zero energy and a field Hamiltonian, for example, can be replaced by:

H_F = ħω a†a,

without affecting any physical predictions of the theory. The new Hamiltonian is said to be normally ordered (or Wick ordered) and is denoted by a double-dot symbol. The normally ordered Hamiltonian is denoted :H_F:, i.e.:

:H_F: = ħω :(aa† + a†a)/2: = ħω a†a.

In other words, within the normal ordering symbol we can commute a and a†. Since zero-point energy is intimately connected to the non-commutativity of a and a†, the normal ordering procedure eliminates any contribution from the zero-point field. This is especially reasonable in the case of the field Hamiltonian, since the zero-point term merely adds a constant energy which can be eliminated by a simple redefinition for the zero of energy. Moreover, this constant energy in the Hamiltonian obviously commutes with a and a† and so cannot have any effect on the quantum dynamics described by the Heisenberg equations of motion.
However, things are not quite that simple. The zero-point energy cannot be eliminated by dropping its energy from the Hamiltonian: When we do this and solve the Heisenberg equation for a field operator, we must include the vacuum field, which is the homogeneous part of the solution for the field operator. In fact we can show that the vacuum field is essential for the preservation of the commutators and the formal consistency of QED. When we calculate the field energy we obtain not only a contribution from particles and forces that may be present but also a contribution from the vacuum field itself i.e. the zero-point field energy. In other words, the zero-point energy reappears even though we may have deleted it from the Hamiltonian.
Electromagnetic field in free space
From Maxwell's equations, the electromagnetic energy of a "free" field i.e. one with no sources, is described by:
We introduce the "mode function" that satisfies the Helmholtz equation:
where and assume it is normalized such that:
We wish to "quantize" the electromagnetic energy of free space for a multimode field. The field intensity of free space should be independent of position such that should be independent of for each mode of the field. The mode function satisfying these conditions is:
where in order to have the transversality condition satisfied for the Coulomb gauge in which we are working.
To achieve the desired normalization we pretend space is divided into cubes of volume and impose on the field the periodic boundary condition:
or equivalently
where can assume any integer value. This allows us to consider the field in any one of the imaginary cubes and to define the mode function:
which satisfies the Helmholtz equation, transversality, and the "box normalization":
where is chosen to be a unit vector which specifies the polarization of the field mode. The condition means that there are two independent choices of , which we call and where and . Thus we define the mode functions:
in terms of which the vector potential becomes:
or:
where and , are photon annihilation and creation operators for the mode with wave vector and polarization . This gives the vector potential for a plane wave mode of the field. The condition for shows that there are infinitely many such modes. The linearity of Maxwell's equations allows us to write:
for the total vector potential in free space. Using the fact that:
we find the field Hamiltonian is:

H_F = ∑_kλ ħω_k (a†_kλ a_kλ + 1/2).

This is the Hamiltonian for an infinite number of uncoupled harmonic oscillators. Thus different modes of the field are independent and satisfy the commutation relations:

[a_kλ(t), a†_k′λ′(t)] = δ_kk′ δ_λλ′,   [a_kλ(t), a_k′λ′(t)] = 0.
Clearly the least eigenvalue for H_F is:

∑_kλ ħω_k/2.

This state describes the zero-point energy of the vacuum. It appears that this sum is divergent – in fact highly divergent, as putting in the density factor

ρ(ω) dω = V ω² dω / π²c³

shows. The summation becomes approximately the integral:

(ħV/2π²c³) ∫ ω³ dω

for high values of ω. It diverges proportionally to ω⁴ for large ω.
There are two separate questions to consider. First, is the divergence a real one such that the zero-point energy really is infinite? If we consider the volume is contained by perfectly conducting walls, very high frequencies can only be contained by taking more and more perfect conduction. No actual method of containing the high frequencies is possible. Such modes will not be stationary in our box and thus not countable in the stationary energy content. So from this physical point of view the above sum should only extend to those frequencies which are countable; a cut-off energy is thus eminently reasonable. However, on the scale of a "universe" questions of general relativity must be included. Suppose even the boxes could be reproduced, fit together and closed nicely by curving spacetime. Then exact conditions for running waves may be possible. However the very high frequency quanta will still not be contained. As per John Wheeler's "geons" these will leak out of the system. So again a cut-off is permissible, almost necessary. The question here becomes one of consistency since the very high energy quanta will act as a mass source and start curving the geometry.
This leads to the second question. Divergent or not, finite or infinite, is the zero-point energy of any physical significance? Ignoring the whole zero-point energy is often encouraged for all practical calculations. The reason for this is that energies are not typically defined by an arbitrary data point, but rather by changes in data points, so adding or subtracting a constant (even if infinite) should be allowed. However, this is not the whole story: in reality energy is not so arbitrarily defined. In general relativity the seat of the curvature of spacetime is the energy content, and there the absolute amount of energy has real physical meaning. There is no such thing as an arbitrary additive constant with the density of field energy. Energy density curves space, and an increase in energy density produces an increase of curvature. Furthermore, the zero-point energy density has other physical consequences, e.g. the Casimir effect, a contribution to the Lamb shift, and the anomalous magnetic moment of the electron; it is clearly not just a mathematical constant or artifact that can be cancelled out.
Necessity of the vacuum field in QED
The vacuum state of the "free" electromagnetic field (that with no sources) is defined as the ground state in which for all modes . The vacuum state, like all stationary states of the field, is an eigenstate of the Hamiltonian but not the electric and magnetic field operators. In the vacuum state, therefore, the electric and magnetic fields do not have definite values. We can imagine them to be fluctuating about their mean value of zero.
In a process in which a photon is annihilated (absorbed), we can think of the photon as making a transition into the vacuum state. Similarly, when a photon is created (emitted), it is occasionally useful to imagine that the photon has made a transition out of the vacuum state. An atom, for instance, can be considered to be "dressed" by emission and reabsorption of "virtual photons" from the vacuum. The vacuum state energy described by H_F is infinite. We can make the replacement:

∑_kλ → 2 (V/8π³) ∫ d³k;

the zero-point energy density is then:

(1/V) ∑_kλ ħω_k/2 = ∫₀^∞ dω ħω³/2π²c³,

or in other words the spectral energy density of the vacuum field:

ρ₀(ω) = ħω³/2π²c³.

The zero-point energy density in the frequency range from ω₁ to ω₂ is therefore:

∫_{ω₁}^{ω₂} dω ρ₀(ω) = ħ(ω₂⁴ − ω₁⁴)/8π²c³
This can be large even in relatively narrow "low frequency" regions of the spectrum. In the optical region from 400 to 700 nm, for instance, the above equation yields around 220 erg/cm3.
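That figure can be checked by numerically integrating the spectral density quoted above over the visible band; a minimal Python sketch (the band edges are simply the 400 nm and 700 nm limits mentioned in the text):

import math

hbar = 1.054571817e-34  # reduced Planck constant, J*s
c = 2.99792458e8        # speed of light, m/s

def omega(wavelength):
    # Angular frequency corresponding to a vacuum wavelength (metres)
    return 2 * math.pi * c / wavelength

w1, w2 = omega(700e-9), omega(400e-9)   # edges of the visible band

# Integrate rho_0(omega) = hbar*omega^3 / (2*pi^2*c^3) from w1 to w2
u = hbar * (w2**4 - w1**4) / (8 * math.pi**2 * c**3)   # J/m^3
print(f"zero-point energy density, 400-700 nm: {u:.1f} J/m^3 = {u * 10:.0f} erg/cm^3")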
We showed in the above section that the zero-point energy can be eliminated from the Hamiltonian by the normal ordering prescription. However, this elimination does not mean that the vacuum field has been rendered unimportant or without physical consequences. To illustrate this point we consider a linear dipole oscillator in the vacuum. The Hamiltonian for the oscillator plus the field with which it interacts is:
This has the same form as the corresponding classical Hamiltonian and the Heisenberg equations of motion for the oscillator and the field are formally the same as their classical counterparts. For instance the Heisenberg equations for the coordinate and the canonical momentum of the oscillator are:
or:
since the rate of change of the vector potential in the frame of the moving charge is given by the convective derivative
For nonrelativistic motion we may neglect the magnetic force and replace the expression for by:
Above we have made the electric dipole approximation in which the spatial dependence of the field is neglected. The Heisenberg equation for is found similarly from the Hamiltonian to be:
in the electric dipole approximation.
In deriving these equations for x, p, and a_kλ we have used the fact that equal-time particle and field operators commute. This follows from the assumption that particle and field operators commute at some time (say, t = 0) when the matter-field interaction is presumed to begin, together with the fact that a Heisenberg-picture operator evolves in time as A(t) = U†(t)A(0)U(t), where U(t) is the time evolution operator satisfying
Alternatively, we can argue that these operators must commute if we are to obtain the correct equations of motion from the Hamiltonian, just as the corresponding Poisson brackets in classical theory must vanish in order to generate the correct Hamilton equations. The formal solution of the field equation is:
and therefore the equation for may be written:
where
and
It can be shown that in the radiation reaction field, if the mass is regarded as the "observed" mass then we can take
The total field acting on the dipole has two parts, and . is the free or zero-point field acting on the dipole. It is the homogeneous solution of the Maxwell equation for the field acting on the dipole, i.e., the solution, at the position of the dipole, of the wave equation
satisfied by the field in the (source free) vacuum. For this reason is often referred to as the "vacuum field", although it is of course a Heisenberg-picture operator acting on whatever state of the field happens to be appropriate at . is the source field, the field generated by the dipole and acting on the dipole.
Using the above equation for we obtain an equation for the Heisenberg-picture operator that is formally the same as the classical equation for a linear dipole oscillator:
where . In this instance we have considered a dipole in the vacuum, without any "external" field acting on it. The role of the external field in the above equation is played by the vacuum electric field acting on the dipole.
Classically, a dipole in the vacuum is not acted upon by any "external" field: if there are no sources other than the dipole itself, then the only field acting on the dipole is its own radiation reaction field. In quantum theory however there is always an "external" field, namely the source-free or vacuum field .
According to our earlier equation for the field operator, the free field is the only field in existence at t = 0, the time at which the interaction between the dipole and the field is "switched on". The state vector of the dipole-field system at t = 0 is therefore of the form

|ψ(0)⟩ = |vac⟩|ψ_D⟩,

where |vac⟩ is the vacuum state of the field and |ψ_D⟩ is the initial state of the dipole oscillator. The expectation value of the free field is therefore at all times equal to zero:

⟨E₀(t)⟩ = 0,

since a_kλ(0)|vac⟩ = 0. However, the energy density associated with the free field is infinite:
The important point of this is that the zero-point field energy does not affect the Heisenberg equation for a_kλ, since it is a c-number or constant (i.e. an ordinary number rather than an operator) and commutes with a_kλ. We can therefore drop the zero-point field energy from the Hamiltonian, as is usually done. But the zero-point field re-emerges as the homogeneous solution for the field equation. A charged particle in the vacuum will therefore always see a zero-point field of infinite density. This is the origin of one of the infinities of quantum electrodynamics, and it cannot be eliminated by the trivial expedient of dropping the ħω/2 term from the field Hamiltonian.
The free field is in fact necessary for the formal consistency of the theory. In particular, it is necessary for the preservation of the canonical commutation relation, which is required by the unitarity of time evolution in quantum theory:

[x(t), p(t)] = iħ
We can calculate from the formal solution of the operator equation of motion
Using the fact that
and that equal-time particle and field operators commute, we obtain:
For the dipole oscillator under consideration it can be assumed that the radiative damping rate is small compared with the natural oscillation frequency, i.e., . Then the integrand above is sharply peaked at and:
The necessity of the vacuum field can also be appreciated by making the small damping approximation in
and
Without the free field in this equation the operator would be exponentially damped, and commutators like [x(t), p(t)] would approach zero for large times. With the vacuum field included, however, the commutator is iħ at all times, as required by unitarity, and as we have just shown. A similar result is easily worked out for the case of a free particle instead of a dipole oscillator.
What we have here is an example of a "fluctuation-dissipation relation". Generally speaking, if a system is coupled to a bath that can take energy from the system in an effectively irreversible way, then the bath must also cause fluctuations. The fluctuations and the dissipation go hand in hand; we cannot have one without the other. In the current example the coupling of a dipole oscillator to the electromagnetic field has a dissipative component, in the form of the zero-point (vacuum) field; given the existence of radiation reaction, the vacuum field must also exist in order to preserve the canonical commutation rule and all it entails.
The spectral density of the vacuum field is fixed by the form of the radiation reaction field, or vice versa: because the radiation reaction field varies with the third derivative of x, the spectral energy density of the vacuum field must be proportional to the third power of ω in order for the canonical commutation relation to hold. In the case of a dissipative force proportional to ẋ, by contrast, the fluctuation force must be proportional to ω in order to maintain the canonical commutation relation. This relation between the form of the dissipation and the spectral density of the fluctuation is the essence of the fluctuation-dissipation theorem.
The fact that the canonical commutation relation for a harmonic oscillator coupled to the vacuum field is preserved implies that the zero-point energy of the oscillator is preserved. It is easy to show that after a few damping times the zero-point motion of the oscillator is in fact sustained by the driving zero-point field.
Quantum chromodynamic vacuum
The QCD vacuum is the vacuum state of quantum chromodynamics (QCD). It is an example of a non-perturbative vacuum state, characterized by non-vanishing condensates such as the gluon condensate and the quark condensate in the complete theory which includes quarks. The presence of these condensates characterizes the confined phase of quark matter. In technical terms, gluons are vector gauge bosons that mediate strong interactions of quarks in quantum chromodynamics (QCD). Gluons themselves carry the color charge of the strong interaction. This is unlike the photon, which mediates the electromagnetic interaction but lacks an electric charge. Gluons therefore participate in the strong interaction in addition to mediating it, making QCD significantly harder to analyze than QED (quantum electrodynamics) as it deals with nonlinear equations to characterize such interactions.
Higgs field
The Standard Model hypothesises a field called the Higgs field (symbol: φ), which has the unusual property of a non-zero amplitude in its ground state (zero-point) energy after renormalization; i.e., a non-zero vacuum expectation value. It can have this effect because of its unusual "Mexican hat" shaped potential whose lowest "point" is not at its "centre". Below a certain extremely high energy level the existence of this non-zero vacuum expectation spontaneously breaks electroweak gauge symmetry which in turn gives rise to the Higgs mechanism and triggers the acquisition of mass by those particles interacting with the field. The Higgs mechanism occurs whenever a charged field has a vacuum expectation value. This effect occurs because scalar field components of the Higgs field are "absorbed" by the massive bosons as degrees of freedom, and couple to the fermions via Yukawa coupling, thereby producing the expected mass terms. The expectation value of φ⁰ in the ground state (the vacuum expectation value or VEV) is then ⟨φ⁰⟩ = v/√2, where v = |μ|/√λ. The measured value of this parameter is approximately 246 GeV/c². It has units of mass, and is the only free parameter of the Standard Model that is not a dimensionless number.
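As a numerical aside, the quoted value can be recovered from the measured Fermi coupling constant via the standard relation v = (√2 G_F)^(−1/2); a minimal check in Python (the G_F value below is the commonly cited figure and is assumed rather than taken from the text):

import math

G_F = 1.1663787e-5   # Fermi coupling constant, GeV^-2
v = 1.0 / math.sqrt(math.sqrt(2.0) * G_F)
print(f"Higgs vacuum expectation value: v = {v:.1f} GeV")   # about 246 GeV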
The Higgs mechanism is a type of superconductivity which occurs in the vacuum. It occurs when all of space is filled with a sea of particles which are charged and thus the field has a nonzero vacuum expectation value. Interaction with the vacuum energy filling the space prevents certain forces from propagating over long distances (as it does in a superconducting medium; e.g., in the Ginzburg–Landau theory).
Experimental observations
Zero-point energy has many observed physical consequences. It is important to note that zero-point energy is not merely an artifact of mathematical formalism that can, for instance, be dropped from a Hamiltonian by redefining the zero of energy, or by arguing that it is a constant and therefore has no effect on the Heisenberg equations of motion, without later consequence. Indeed, such treatment could create a problem in a deeper, as yet undiscovered, theory. For instance, in general relativity the zero of energy (i.e. the energy density of the vacuum) contributes to a cosmological constant of the type introduced by Einstein in order to obtain static solutions to his field equations. The zero-point energy density of the vacuum, due to all quantum fields, is extremely large, even when we cut off the largest allowable frequencies based on plausible physical arguments. It implies a cosmological constant larger than the limits imposed by observation by about 120 orders of magnitude. This "cosmological constant problem" remains one of the greatest unsolved mysteries of physics.
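A rough order-of-magnitude version of this estimate can be sketched in a few lines. The Planck-frequency cutoff and the quoted dark-energy density below are conventional illustrative choices, and the exact size of the discrepancy depends on how the cutoff is chosen:

import math

hbar = 1.054571817e-34  # reduced Planck constant, J*s
c = 2.99792458e8        # speed of light, m/s
G = 6.67430e-11         # gravitational constant, m^3 kg^-1 s^-2

# Cut the zero-point spectrum off at the Planck angular frequency
omega_p = math.sqrt(c**5 / (hbar * G))
rho_zpe = hbar * omega_p**4 / (8 * math.pi**2 * c**3)   # J/m^3

rho_dark = 6e-10   # observed dark-energy density, J/m^3 (order of magnitude)
print(f"zero-point estimate : {rho_zpe:.2e} J/m^3")
print(f"observed dark energy: {rho_dark:.1e} J/m^3")
print(f"discrepancy         : about 10^{math.log10(rho_zpe / rho_dark):.0f}")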
Casimir effect
A phenomenon that is commonly presented as evidence for the existence of zero-point energy in vacuum is the Casimir effect, proposed in 1948 by Dutch physicist Hendrik Casimir, who considered the quantized electromagnetic field between a pair of grounded, neutral metal plates. The vacuum energy contains contributions from all wavelengths, except those excluded by the spacing between plates. As the plates draw together, more wavelengths are excluded and the vacuum energy decreases. The decrease in energy means there must be a force doing work on the plates as they move.
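For ideal, perfectly conducting plates the standard result is an attractive pressure P = π²ħc/240d⁴ for plate separation d; the separations in the following sketch are illustrative choices meant only to show how rapidly the force grows as the plates approach:

import math

hbar = 1.054571817e-34  # reduced Planck constant, J*s
c = 2.99792458e8        # speed of light, m/s

def casimir_pressure(d):
    # Attractive Casimir pressure between ideal parallel plates a distance d apart (metres)
    return math.pi**2 * hbar * c / (240 * d**4)

for d in (1e-6, 100e-9, 10e-9):
    print(f"d = {d * 1e9:6.0f} nm:  P = {casimir_pressure(d):.3e} Pa")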
Early experimental tests from the 1950s onwards gave positive results showing the force was real, but other external factors could not be ruled out as the primary cause, with the range of experimental error sometimes being nearly 100%. That changed in 1997 with Lamoreaux conclusively showing that the Casimir force was real. Results have been repeatedly replicated since then.
In 2009, Munday et al. published experimental proof that (as predicted in 1961) the Casimir force could also be repulsive as well as being attractive. Repulsive Casimir forces could allow quantum levitation of objects in a fluid and lead to a new class of switchable nanoscale devices with ultra-low static friction.
An interesting hypothetical side effect of the Casimir effect is the Scharnhorst effect, a conjectured phenomenon in which light signals travel slightly faster than c between two closely spaced conducting plates.
Lamb shift
The quantum fluctuations of the electromagnetic field have important physical consequences. In addition to the Casimir effect, they also lead to a splitting between the two energy levels 2S1/2 and 2P1/2 (in term symbol notation) of the hydrogen atom which was not predicted by the Dirac equation, according to which these states should have the same energy. Charged particles can interact with the fluctuations of the quantized vacuum field, leading to slight shifts in energy; this effect is called the Lamb shift. The shift, of about 4.4×10⁻⁶ eV (roughly 4×10⁻⁷ of the difference between the energies of the 1s and 2s levels), amounts to 1,058 MHz in frequency units. A small part of this shift (27 MHz ≈ 3%) arises not from fluctuations of the electromagnetic field, but from fluctuations of the electron–positron field. The creation of (virtual) electron–positron pairs has the effect of screening the Coulomb field and acts as a vacuum dielectric constant. This effect is much more important in muonic atoms.
Fine-structure constant
Taking ħ (the Planck constant divided by 2π), c (the speed of light), and e²/4πε₀ (the electromagnetic coupling constant, i.e. a measure of the strength of the electromagnetic force, where e is the absolute value of the electronic charge and ε₀ is the vacuum permittivity) we can form a dimensionless quantity called the fine-structure constant:

α = e²/(4πε₀ħc) ≈ 1/137
The fine-structure constant is the coupling constant of quantum electrodynamics (QED) determining the strength of the interaction between electrons and photons. It turns out that the fine-structure constant is not really a constant at all owing to the zero-point energy fluctuations of the electron-positron field. The quantum fluctuations caused by zero-point energy have the effect of screening electric charges: owing to (virtual) electron-positron pair production, the charge of the particle measured far from the particle is far smaller than the charge measured when close to it.
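The low-energy value quoted above can be checked directly from the defining combination of constants (CODATA values are assumed here):

import math

e = 1.602176634e-19      # elementary charge, C
eps0 = 8.8541878128e-12  # vacuum permittivity, F/m
hbar = 1.054571817e-34   # reduced Planck constant, J*s
c = 2.99792458e8         # speed of light, m/s

alpha = e**2 / (4 * math.pi * eps0 * hbar * c)
print(f"alpha = {alpha:.8f}   (1/alpha = {1 / alpha:.3f})")   # ~1/137 at low energy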
The Heisenberg inequality, where Δx and Δp are the standard deviations of position and momentum, states that:

Δx Δp ≥ ħ/2
It means that a short distance implies large momentum and therefore high energy, i.e. particles of high energy must be used to explore short distances. QED concludes that the fine-structure constant is an increasing function of energy. It has been shown that at energies of the order of the Z0 boson rest energy, about 90 GeV:

α ≈ 1/129,

rather than the low-energy value α ≈ 1/137. The renormalization procedure of eliminating zero-point energy infinities allows the choice of an arbitrary energy (or distance) scale for defining α. All in all, α depends on the energy scale characteristic of the process under study, and also on details of the renormalization procedure. The energy dependence of α has been observed for several years now in precision experiments in high-energy physics.
Vacuum birefringence
In the presence of strong electrostatic fields it is predicted that virtual particles become separated from the vacuum state and form real matter. The fact that electromagnetic radiation can be transformed into matter and vice versa leads to fundamentally new features in quantum electrodynamics. One of the most important consequences is that, even in the vacuum, the Maxwell equations have to be replaced by more complicated formulas. In general, it will not be possible to separate processes in the vacuum from the processes involving matter since electromagnetic fields can create matter if the field fluctuations are strong enough. This leads to highly complex nonlinear interaction – gravity will have an effect on the light at the same time the light has an effect on gravity. These effects were first predicted by Werner Heisenberg and Hans Heinrich Euler in 1936 and independently the same year by Victor Weisskopf who stated: "The physical properties of the vacuum originate in the "zero-point energy" of matter, which also depends on absent particles through the external field strengths and therefore contributes an additional term to the purely Maxwellian field energy". Thus strong magnetic fields vary the energy contained in the vacuum. The scale above which the electromagnetic field is expected to become nonlinear is known as the Schwinger limit. At this point the vacuum has all the properties of a birefringent medium, thus in principle a rotation of the polarization frame (the Faraday effect) can be observed in empty space.
Both Einstein's theory of special and general relativity state that light should pass freely through a vacuum without being altered, a principle known as Lorentz invariance. Yet, in theory, large nonlinear self-interaction of light due to quantum fluctuations should lead to this principle being measurably violated if the interactions are strong enough. Nearly all theories of quantum gravity predict that Lorentz invariance is not an exact symmetry of nature. It is predicted the speed at which light travels through the vacuum depends on its direction, polarization and the local strength of the magnetic field. There have been a number of inconclusive results which claim to show evidence of a Lorentz violation by finding a rotation of the polarization plane of light coming from distant galaxies. The first concrete evidence for vacuum birefringence was published in 2017 when a team of astronomers looked at the light coming from the star RX J1856.5-3754, the closest discovered neutron star to Earth.
Roberto Mignani at the National Institute for Astrophysics in Milan who led the team of astronomers has commented that "When Einstein came up with the theory of general relativity 100 years ago, he had no idea that it would be used for navigational systems. The consequences of this discovery probably will also have to be realised on a longer timescale." The team found that visible light from the star had undergone linear polarisation of around 16%. If the birefringence had been caused by light passing through interstellar gas or plasma, the effect should have been no more than 1%. Definitive proof would require repeating the observation at other wavelengths and on other neutron stars. At X-ray wavelengths the polarization from the quantum fluctuations should be near 100%. Although no telescope currently exists that can make such measurements, there are several proposed X-ray telescopes that may soon be able to verify the result conclusively such as China's Hard X-ray Modulation Telescope (HXMT) and NASA's Imaging X-ray Polarimetry Explorer (IXPE).
Speculated involvement in other phenomena
Dark energy
In the late 1990s it was discovered that very distant supernovae were dimmer than expected suggesting that the universe's expansion was accelerating rather than slowing down. This revived discussion that Einstein's cosmological constant, long disregarded by physicists as being equal to zero, was in fact some small positive value. This would indicate empty space exerted some form of negative pressure or energy.
There is no natural candidate for what might cause what has been called dark energy; the current best guess is that it is the zero-point energy of the vacuum, although this guess is known to be off by some 120 orders of magnitude.
The European Space Agency's Euclid telescope, launched on 1 July 2023, will map galaxies up to 10 billion light years away. By seeing how dark energy influences their arrangement and shape, the mission will allow scientists to see if the strength of dark energy has changed. If dark energy is found to vary throughout time it would indicate it is due to quintessence, where observed acceleration is due to the energy of a scalar field, rather than the cosmological constant. No evidence of quintessence is yet available, but it has not been ruled out either. It generally predicts a slightly slower acceleration of the expansion of the universe than the cosmological constant. Some scientists think that the best evidence for quintessence would come from violations of Einstein's equivalence principle and variation of the fundamental constants in space or time. Scalar fields are predicted by the Standard Model of particle physics and string theory, but an analogous problem to the cosmological constant problem (or the problem of constructing models of cosmological inflation) occurs: renormalization theory predicts that scalar fields should acquire large masses again due to zero-point energy.
Cosmic inflation
Cosmic inflation is a phase of accelerated cosmic expansion just after the Big Bang. It explains the origin of the large-scale structure of the cosmos. It is believed that quantum vacuum fluctuations caused by zero-point energy, arising in the microscopic inflationary period, later became magnified to cosmic size, becoming the gravitational seeds for galaxies and structure in the Universe (see galaxy formation and evolution and structure formation). Many physicists also believe that inflation explains why the Universe appears to be the same in all directions (isotropic), why the cosmic microwave background radiation is distributed evenly, why the Universe is flat, and why no magnetic monopoles have been observed.
The mechanism for inflation is unclear; it is similar in effect to dark energy, but is a far more energetic and short-lived process. As with dark energy, the best explanation is some form of vacuum energy arising from quantum fluctuations. It may be that inflation caused baryogenesis, the hypothetical physical processes that produced an asymmetry (imbalance) between baryons and antibaryons produced in the very early universe, but this is far from certain.
Cosmology
Paul S. Wesson examined the cosmological implications of assuming that zero-point energy is real. Among numerous difficulties, general relativity requires that such energy not gravitate, so it cannot be similar to electromagnetic radiation.
Alternative theories
There has been a long debate over the question of whether zero-point fluctuations of quantized vacuum fields are "real", i.e. do they have physical effects that cannot be interpreted by an equally valid alternative theory? Schwinger, in particular, attempted to formulate QED without reference to zero-point fluctuations via his "source theory". From such an approach it is possible to derive the Casimir effect without reference to a fluctuating field. Such a derivation was first given by Schwinger (1975) for a scalar field, and then generalized to the electromagnetic case by Schwinger, DeRaad, and Milton (1978), in which they state "the vacuum is regarded as truly a state with all physical properties equal to zero". Jaffe (2005) has highlighted a similar approach in deriving the Casimir effect, stating "the concept of zero-point fluctuations is a heuristic and calculational aid in the description of the Casimir effect, but not a necessity in QED."
Milonni has shown the necessity of the vacuum field for the formal consistency of QED. Modern physics does not know any better way to construct gauge-invariant, renormalizable theories than with zero-point energy and they would seem to be a necessity for any attempt at a unified theory.
Nevertheless, as pointed out by Jaffe, "no known phenomenon, including the Casimir effect, demonstrates that zero point energies are 'real'".
Chaotic and emergent phenomena
The mathematical models used in classical electromagnetism, quantum electrodynamics (QED) and the Standard Model all view the electromagnetic vacuum as a linear system with no overall observable consequence. For example, in the case of the Casimir effect, Lamb shift, and so on these phenomena can be explained by alternative mechanisms other than action of the vacuum by arbitrary changes to the normal ordering of field operators. See the alternative theories section. This is a consequence of viewing electromagnetism as a U(1) gauge theory, which topologically does not allow the complex interaction of a field with and on itself. In higher symmetry groups and in reality, the vacuum is not a calm, randomly fluctuating, largely immaterial and passive substance, but at times can be viewed as a turbulent virtual plasma that can have complex vortices (i.e. solitons vis-à-vis particles), entangled states and a rich nonlinear structure. There are many observed nonlinear physical electromagnetic phenomena such as Aharonov–Bohm (AB) and Altshuler–Aronov–Spivak (AAS) effects, Berry, Aharonov–Anandan, Pancharatnam and Chiao–Wu phase rotation effects, Josephson effect, Quantum Hall effect, the De Haas–Van Alphen effect, the Sagnac effect and many other physically observable phenomena which would indicate that the electromagnetic potential field has real physical meaning rather than being a mathematical artifact and therefore an all encompassing theory would not confine electromagnetism as a local force as is currently done, but as a SU(2) gauge theory or higher geometry. Higher symmetries allow for nonlinear, aperiodic behaviour which manifest as a variety of complex non-equilibrium phenomena that do not arise in the linearised U(1) theory, such as multiple stable states, symmetry breaking, chaos and emergence.
What are called Maxwell's equations today, are in fact a simplified version of the original equations reformulated by Heaviside, FitzGerald, Lodge and Hertz. The original equations used Hamilton's more expressive quaternion notation, a kind of Clifford algebra, which fully subsumes the standard Maxwell vectorial equations largely used today. In the late 1880s there was a debate over the relative merits of vector analysis and quaternions. According to Heaviside the electromagnetic potential field was purely metaphysical, an arbitrary mathematical fiction, that needed to be "murdered". It was concluded that there was no need for the greater physical insights provided by the quaternions if the theory was purely local in nature. Local vector analysis has become the dominant way of using Maxwell's equations ever since. However, this strictly vectorial approach has led to a restrictive topological understanding in some areas of electromagnetism, for example, a full understanding of the energy transfer dynamics in Tesla's oscillator-shuttle-circuit can only be achieved in quaternionic algebra or higher SU(2) symmetries. It has often been argued that quaternions are not compatible with special relativity, but multiple papers have shown ways of incorporating relativity.
A good example of nonlinear electromagnetics is in high energy dense plasmas, where vortical phenomena occur which seemingly violate the second law of thermodynamics by increasing the energy gradient within the electromagnetic field and violate Maxwell's laws by creating ion currents which capture and concentrate their own and surrounding magnetic fields. In particular Lorentz force law, which elaborates Maxwell's equations is violated by these force free vortices. These apparent violations are due to the fact that the traditional conservation laws in classical and quantum electrodynamics (QED) only display linear U(1) symmetry (in particular, by the extended Noether theorem, conservation laws such as the laws of thermodynamics need not always apply to dissipative systems, which are expressed in gauges of higher symmetry). The second law of thermodynamics states that in a closed linear system entropy flow can only be positive (or exactly zero at the end of a cycle). However, negative entropy (i.e. increased order, structure or self-organisation) can spontaneously appear in an open nonlinear thermodynamic system that is far from equilibrium, so long as this emergent order accelerates the overall flow of entropy in the total system. The 1977 Nobel Prize in Chemistry was awarded to thermodynamicist Ilya Prigogine for his theory of dissipative systems that described this notion. Prigogine described the principle as "order through fluctuations" or "order out of chaos". It has been argued by some that all emergent order in the universe from galaxies, solar systems, planets, weather, complex chemistry, evolutionary biology to even consciousness, technology and civilizations are themselves examples of thermodynamic dissipative systems; nature having naturally selected these structures to accelerate entropy flow within the universe to an ever-increasing degree. For example, it has been estimated that human body is 10,000 times more effective at dissipating energy per unit of mass than the sun.
One may query what this has to do with zero-point energy. Given the complex and adaptive behaviour that arises from nonlinear systems, considerable attention in recent years has gone into studying a new class of phase transitions which occur at absolute zero temperature. These are quantum phase transitions which are driven by EM field fluctuations as a consequence of zero-point energy. A good example of a spontaneous phase transition that is attributed to zero-point fluctuations can be found in superconductors. Superconductivity is one of the best known empirically quantified macroscopic electromagnetic phenomena whose basis is recognised to be quantum mechanical in origin. The behaviour of the electric and magnetic fields under superconductivity is governed by the London equations. However, it has been questioned in a series of journal articles whether the quantum mechanically canonised London equations can be given a purely classical derivation. Bostick, for instance, has claimed to show that the London equations do indeed have a classical origin that applies to superconductors and to some collisionless plasmas as well. In particular, it has been asserted that the Beltrami vortices in the plasma focus display the same paired flux-tube morphology as Type II superconductors. Others have also pointed out this connection; Fröhlich has shown that the hydrodynamic equations of compressible fluids, together with the London equations, lead to a macroscopic parameter (equal to the electric charge density divided by the mass density) without involving either quantum phase factors or the Planck constant. In essence, it has been asserted that Beltrami plasma vortex structures are able to at least simulate the morphology of Type I and Type II superconductors. This occurs because the "organised" dissipative energy of the vortex configuration comprising the ions and electrons far exceeds the "disorganised" dissipative random thermal energy. The transition from disorganised fluctuations to organised helical structures is a phase transition involving a change in the condensate's energy (i.e. the ground state or zero-point energy) but without any associated rise in temperature. This is an example of zero-point energy having multiple stable states (see Quantum phase transition, Quantum critical point, Topological degeneracy, Topological order) where the overall system structure is independent of a reductionist or deterministic view, and where "classical" macroscopic order can also causally affect quantum phenomena. Furthermore, the pair production of Beltrami vortices has been compared to the morphology of pair production of virtual particles in the vacuum.
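For reference, the London equations discussed in this paragraph take the following standard form in SI units, with j_s the superconducting current density and n_s, e and m the superfluid carrier density, charge and mass:

    \frac{\partial \mathbf{j}_s}{\partial t} = \frac{n_s e^2}{m}\,\mathbf{E}, \qquad \nabla\times\mathbf{j}_s = -\frac{n_s e^2}{m}\,\mathbf{B}.

Combining the second equation with Ampère's law gives \nabla^2\mathbf{B} = \mathbf{B}/\lambda_L^2 with penetration depth \lambda_L = \sqrt{m/(\mu_0 n_s e^2)}, i.e. the Meissner expulsion of magnetic flux. The classical-derivation claims summarised above amount to obtaining a structurally identical pair of equations from the hydrodynamics of a charged compressible fluid, with a ratio of charge density to mass density taking the place of the quantum mechanical coefficient.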
The idea that the vacuum energy can have multiple stable energy states is a leading hypothesis for the cause of cosmic inflation. In fact, it has been argued that these early vacuum fluctuations led to the expansion of the universe and in turn have guaranteed the non-equilibrium conditions necessary to drive order from chaos, as without such expansion the universe would have reached thermal equilibrium and no complexity could have existed. With the continued accelerated expansion of the universe, the cosmos generates an energy gradient that increases the "free energy" (i.e. the available, usable or potential energy for useful work) which the universe is able to use to create ever more complex forms of order. The only reason Earth's environment does not decay into an equilibrium state is that it receives a daily dose of sunshine and that, in turn, is due to the sun "polluting" interstellar space with entropy. The sun's fusion power is only possible due to the gravitational disequilibrium of matter that arose from cosmic expansion. In this sense, the vacuum energy can be viewed as the key cause of the structure throughout the universe. That humanity might alter the morphology of the vacuum energy to create an energy gradient for useful work is the subject of much controversy.
Purported applications
Physicists overwhelmingly reject any possibility that the zero-point energy field can be exploited to obtain useful energy (work) or uncompensated momentum; such efforts are seen as tantamount to perpetual motion machines.
Nevertheless, the allure of free energy has motivated such research, usually falling in the category of fringe science. As long ago as 1889 (before quantum theory or the discovery of zero-point energy) Nikola Tesla proposed that useful energy could be obtained from free space, or what was assumed at that time to be an all-pervasive aether. Others have since claimed to exploit zero-point or vacuum energy, with a large amount of pseudoscientific literature causing ridicule around the subject. Despite rejection by the scientific community, harnessing zero-point energy remains an interest of research, particularly in the US where it has attracted the attention of major aerospace/defence contractors and the U.S. Department of Defense, as well as in China, Germany, Russia and Brazil.
Casimir batteries and engines
A common assumption is that the Casimir force is of little practical use; the argument is made that the only way to actually gain energy from the two plates is to allow them to come together (getting them apart again would then require more energy), and therefore it is a one-use-only tiny force in nature. In 1984 Robert Forward published work showing how a "vacuum-fluctuation battery" could be constructed; the battery can be recharged by making the electrical forces slightly stronger than the Casimir force to re-expand the plates.
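To give a sense of the energy scales involved in such a "vacuum-fluctuation battery", the following minimal Python sketch evaluates the textbook ideal-plate Casimir pressure and energy per unit area; the plate area and separations are arbitrary illustrative values and are not taken from Forward's paper.

import math

hbar = 1.054571817e-34  # reduced Planck constant, J*s
c = 2.99792458e8        # speed of light, m/s

def casimir_pressure(d):
    # Attractive pressure (N/m^2) between ideal parallel plates at separation d (metres)
    return math.pi**2 * hbar * c / (240.0 * d**4)

def casimir_energy_per_area(d):
    # Magnitude of the Casimir interaction energy per unit area (J/m^2)
    return math.pi**2 * hbar * c / (720.0 * d**3)

# Energy nominally released by letting 1 cm^2 plates close from 1 micrometre to 10 nm:
area = 1.0e-4  # m^2
released = area * (casimir_energy_per_area(10e-9) - casimir_energy_per_area(1e-6))
print(f"Casimir pressure at 10 nm: {casimir_pressure(10e-9):.3e} Pa")
print(f"Energy released in one 'discharge': {released:.3e} J")

The result is of the order of tens of nanojoules; "recharging" the battery means doing at least that much electrical work to push the plates apart again, so the device stores energy rather than creating it.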
In 1999, Pinto, a former scientist at NASA's Jet Propulsion Laboratory at Caltech in Pasadena, published in Physical Review his thought experiment (Gedankenexperiment) for a "Casimir engine". The paper showed that continuous positive net exchange of energy from the Casimir effect was possible, even stating in the abstract "In the event of no other alternative explanations, one should conclude that major technological advances in the area of endless, by-product free-energy production could be achieved."
Garret Moddel at the University of Colorado has highlighted that he believes such devices hinge on the assumption that the Casimir force is a nonconservative force; he argues that there is sufficient evidence (e.g. the analysis by Scandurra (2001)) to say that the Casimir effect is a conservative force, and therefore, even though such an engine can exploit the Casimir force for useful work, it cannot produce more output energy than has been input into the system.
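The conservative-force argument can be stated compactly: if the Casimir force between the plates is derivable from a potential energy U(d) that depends only on the separation d, then the net work it does over any closed cycle of separations vanishes,

    \oint \mathbf{F}\cdot\mathrm{d}\mathbf{x} = -\oint \mathrm{d}U = 0,

so whatever energy is extracted while the plates approach must be paid back (plus losses) when the cycle is reset. This is the sense in which Moddel argues that such an engine cannot deliver net output.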
In 2008, DARPA solicited research proposals in the area of Casimir Effect Enhancement (CEE). The goal of the program was to develop new methods to control and manipulate attractive and repulsive forces at surfaces based on engineering of the Casimir force.
A 2008 patent by Haisch and Moddel details a device that is able to extract power from zero-point fluctuations using a gas that circulates through a Casimir cavity. A published test of this concept by Moddel was performed in 2012 and seemed to give excess energy that could not be attributed to another source. However, it has not been conclusively shown to be from zero-point energy, and the theory requires further investigation.
Single heat baths
In 1951 Callen and Welton proved the quantum fluctuation-dissipation theorem (FDT), which was originally formulated in classical form by Nyquist (1928) as an explanation for observed Johnson noise in electric circuits. The fluctuation-dissipation theorem showed that when something dissipates energy, in an effectively irreversible way, a connected heat bath must also fluctuate. The fluctuations and the dissipation go hand in hand; it is impossible to have one without the other. The implication of the FDT is that the vacuum could be treated as a heat bath coupled to a dissipative force, and as such energy could, in part, be extracted from the vacuum for potentially useful work. Such a theory has met with resistance: Macdonald (1962) and Harris (1971) claimed that extracting power from the zero-point energy is impossible, so the FDT could not be true. Grau and Kleen (1982) and Kleen (1986) argued that the Johnson noise of a resistor connected to an antenna must satisfy Planck's thermal radiation formula, thus the noise must be zero at zero temperature and the FDT must be invalid. Kiss (1988) pointed out that the existence of the zero-point term may indicate that there is a renormalization problem (i.e., a mathematical artifact) producing an unphysical term that is not actually present in measurements (in analogy with renormalization problems of ground states in quantum electrodynamics). Later, Abbott et al. (1996) arrived at a different but unclear conclusion that "zero-point energy is infinite thus it should be renormalized but not the 'zero-point fluctuations'". Despite such criticism, the FDT has been shown to be true experimentally under certain quantum, non-classical conditions. Zero-point fluctuations can, and do, contribute towards systems which dissipate energy. A paper by Armen Allahverdyan and Theo Nieuwenhuizen in 2000 showed the feasibility of extracting zero-point energy for useful work from a single bath, without contradicting the laws of thermodynamics, by exploiting certain quantum mechanical properties.
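The disputed zero-point term can be made explicit with the quantum form of the Nyquist formula for resistor voltage noise. The short Python sketch below evaluates the standard symmetrized spectral density, in which a half-quantum survives as the temperature goes to zero; the resistance, frequencies and temperatures chosen are arbitrary illustrative values.

import math

h = 6.62607015e-34   # Planck constant, J*s
kB = 1.380649e-23    # Boltzmann constant, J/K

def noise_psd_quantum(R, f, T):
    # Symmetrized one-sided voltage noise PSD of a resistor R (ohms) in V^2/Hz.
    # The "+ 0.5" term is the zero-point contribution, which remains as T -> 0.
    if T == 0.0:
        return 2.0 * R * h * f
    x = h * f / (kB * T)
    return 4.0 * R * h * f * (1.0 / math.expm1(x) + 0.5)

def noise_psd_classical(R, T):
    # Classical Nyquist (1928) result, frequency independent: 4 k_B T R
    return 4.0 * kB * T * R

R = 50.0
print(noise_psd_quantum(R, 1e9, 300.0), noise_psd_classical(R, 300.0))  # nearly identical at 1 GHz, 300 K
print(noise_psd_quantum(R, 100e9, 0.05))                                # zero-point term dominates at 100 GHz, 50 mK

Whether that residual half-quantum corresponds to power that could ever be delivered to a load, or is an artifact of how the measurement is defined, is precisely the disagreement summarised above.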
A growing number of papers have shown that in some instances the classical laws of thermodynamics, such as limits on the Carnot efficiency, can be violated by exploiting the negative entropy of quantum fluctuations.
Despite efforts to reconcile quantum mechanics and thermodynamics over the years, their compatibility is still an open fundamental problem. The full extent to which quantum properties can alter classical thermodynamic bounds is unknown.
Space travel and gravitational shielding
The use of zero-point energy for space travel is speculative and does not form part of the mainstream scientific consensus. A complete quantum theory of gravitation (that would deal with the role of quantum phenomena like zero-point energy) does not yet exist. Speculative papers explaining a relationship between zero-point energy and gravitational shielding effects have been proposed, but the interaction (if any) is not yet fully understood. According to the general theory of relativity, rotating matter can generate a new force of nature, known as the gravitomagnetic interaction, whose intensity is proportional to the rate of spin. In certain conditions the gravitomagnetic field can be repulsive. In neutron stars for example, it can produce a gravitational analogue of the Meissner effect, but the force produced in such an example is theorized to be exceedingly weak.
In 1963 Robert Forward, a physicist and aerospace engineer at Hughes Research Laboratories, published a paper showing how within the framework of general relativity "anti-gravitational" effects might be achieved. Since all atoms have spin, gravitational permeability may differ from material to material. A strong toroidal gravitational field that acts against the force of gravity could be generated by materials that have nonlinear properties that enhance time-varying gravitational fields. Such an effect would be analogous to the nonlinear electromagnetic permeability of iron, making it an effective core (i.e. the doughnut of iron) in a transformer, whose properties are dependent on magnetic permeability. In 1966 DeWitt was the first to identify the significance of gravitational effects in superconductors. DeWitt demonstrated that a magnetic-type gravitational field must result in the presence of fluxoid quantization. In 1983, DeWitt's work was substantially expanded by Ross.
From 1971 to 1974 Henry William Wallace, a scientist at GE Aerospace, was issued three patents. Wallace used DeWitt's theory to develop an experimental apparatus for generating and detecting a secondary gravitational field, which he named the kinemassic field (now better known as the gravitomagnetic field). In his three patents, Wallace describes three different methods used for detection of the gravitomagnetic field – change in the motion of a body on a pivot, detection of a transverse voltage in a semiconductor crystal, and a change in the specific heat of a crystal material having spin-aligned nuclei. There are no publicly available independent tests verifying Wallace's devices. Such an effect, if any, would be small. Referring to Wallace's patents, a New Scientist article in 1980 stated "Although the Wallace patents were initially ignored as cranky, observers believe that his invention is now under serious but secret investigation by the military authorities in the USA. The military may now regret that the patents have already been granted and so are available for anyone to read." A further reference to Wallace's patents occurs in an electric propulsion study prepared for the Astronautics Laboratory at Edwards Air Force Base, which states: "The patents are written in a very believable style which include part numbers, sources for some components, and diagrams of data. Attempts were made to contact Wallace using patent addresses and other sources but he was not located nor is there a trace of what became of his work. The concept can be somewhat justified on general relativistic grounds since rotating frames of time varying fields are expected to emit gravitational waves."
In 1986 the U.S. Air Force's then Rocket Propulsion Laboratory (RPL) at Edwards Air Force Base solicited "Non Conventional Propulsion Concepts" under a small business research and innovation program. One of the six areas of interest was "Esoteric energy sources for propulsion, including the quantum dynamic energy of vacuum space..." In the same year BAE Systems launched "Project Greenglow" to provide a "focus for research into novel propulsion systems and the means to power them".
In 1988 Kip Thorne et al. published work showing how traversable wormholes can exist in spacetime only if they are threaded by quantum fields generated by some form of exotic matter that has negative energy. In 1993 Scharnhorst and Barton showed that the speed of a photon will be increased if it travels between two Casimir plates, an example of negative energy. In the most general sense, the exotic matter needed to create wormholes would share the repulsive properties of the inflationary energy, dark energy or zero-point radiation of the vacuum. Building on the work of Thorne, in 1994 Miguel Alcubierre proposed a method for changing the geometry of space by creating a wave that would cause the fabric of space ahead of a spacecraft to contract and the space behind it to expand (see Alcubierre drive). The ship would then ride this wave inside a region of flat space, known as a warp bubble and would not move within this bubble but instead be carried along as the region itself moves due to the actions of the drive.
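For reference, Alcubierre's proposal is usually summarised by the line element

    ds^2 = -c^2\,\mathrm{d}t^2 + \bigl(\mathrm{d}x - v_s\, f(r_s)\,\mathrm{d}t\bigr)^2 + \mathrm{d}y^2 + \mathrm{d}z^2,

where v_s(t) is the velocity of the bubble centre and f(r_s) is a smooth shaping function equal to 1 inside the warp bubble and 0 far outside it. Inside the bubble, where f = 1, the craft sits in locally flat space; the contraction and expansion described above are encoded entirely in f, and it is sustaining this geometry that requires the negative-energy (exotic) matter.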
In 1992 Evgeny Podkletnov published a heavily debated journal article claiming a specific type of rotating superconductor could shield gravitational force. Independently of this, from 1991 to 1993 Ning Li and Douglas Torr published a number of articles about gravitational effects in superconductors. One finding they derived is that the source of gravitomagnetic flux in a type II superconductor material is the spin alignment of the lattice ions. Quoting from their third paper: "It is shown that the coherent alignment of lattice ion spins will generate a detectable gravitomagnetic field, and in the presence of a time-dependent applied magnetic vector potential field, a detectable gravitoelectric field." The claimed size of the generated force has been disputed by some but defended by others. In 1997 Li published a paper attempting to replicate Podkletnov's results and showed the effect was very small, if it existed at all. Li is reported to have left the University of Alabama in 1999 to found the company AC Gravity LLC. AC Gravity was awarded a U.S. Department of Defense grant for $448,970 in 2001 to continue anti-gravity research. The grant period ended in 2002 but no results from this research were made public.
In 2002 Phantom Works, Boeing's advanced research and development facility in Seattle, approached Evgeny Podkletnov directly, but was blocked by Russian technology transfer controls. At this time Lieutenant General George Muellner, the outgoing head of the Boeing Phantom Works, confirmed that attempts by Boeing to work with Podkletnov had been blocked by the Russian government, also commenting that "The physical principles – and Podkletnov's device is not the only one – appear to be valid... There is basic science there. They're not breaking the laws of physics. The issue is whether the science can be engineered into something workable".
Froning and Roach (2002) put forward a paper that builds on the work of Puthoff, Haisch and Alcubierre. They used fluid dynamic simulations to model the interaction of a vehicle (like that proposed by Alcubierre) with the zero-point field. Vacuum field perturbations are simulated by fluid field perturbations, and the aerodynamic resistance of viscous drag exerted on the interior of the vehicle is compared to the Lorentz force exerted by the zero-point field (a Casimir-like force is exerted on the exterior by unbalanced zero-point radiation pressures). They find that the negative energy required for an Alcubierre drive is minimised when the craft is a saucer-shaped vehicle with toroidal electromagnetic fields. The EM fields distort the vacuum field perturbations surrounding the craft sufficiently to affect the permeability and permittivity of space.
In 2009, Giorgio Fontana and Bernd Binder presented a new method to potentially extract the zero-point energy of the electromagnetic field and nuclear forces in the form of gravitational waves. In the spheron model of the nucleus, proposed by the two-time Nobel laureate Linus Pauling, dineutrons are among the components of this structure. Similarly to a dumbbell put in a suitable rotational state, but with nuclear mass density, dineutrons are nearly ideal sources of gravitational waves at X-ray and gamma-ray frequencies. The dynamical interplay, mediated by nuclear forces, between the electrically neutral dineutrons and the electrically charged core nucleus is the fundamental mechanism by which nuclear vibrations can be converted to a rotational state of dineutrons with emission of gravitational waves. Gravity and gravitational waves are well described by general relativity, which is not a quantum theory; this implies that there is no zero-point energy for gravity in this theory, and therefore dineutrons will emit gravitational waves like any other known source of gravitational waves. In Fontana and Binder's paper, nuclear species that possess dineutrons and have dynamical instabilities related to the zero-point energy of the electromagnetic field and nuclear forces will emit gravitational waves. In experimental physics this approach is still unexplored.
In 2014 NASA's Eagleworks Laboratories announced that they had successfully validated the use of a Quantum Vacuum Plasma Thruster which makes use of the Casimir effect for propulsion. In 2016 a scientific paper by the team of NASA scientists passed peer review for the first time. The paper suggests that the zero-point field acts as a pilot wave and that the thrust may be due to particles pushing off the quantum vacuum. While peer review does not guarantee that a finding or observation is valid, it does indicate that independent scientists looked over the experimental setup, results and interpretation, could not find any obvious errors in the methodology, and found the results reasonable. In the paper, the authors identify and discuss nine potential sources of experimental error, including rogue air currents, leaky electromagnetic radiation, and magnetic interactions. Not all of them could be completely ruled out, and further peer-reviewed experimentation is needed in order to rule these potential errors out.
Zero-point energy in fiction
Zero-point energy as an energy source has been a recurring element in science fiction and related media.
See also
Casimir effect
Ground state
Lamb shift
QED vacuum
QCD vacuum
Quantum fluctuation
Quantum foam
Scalar field
Time crystal
Topological order
Unruh effect
Vacuum energy
Vacuum expectation value
Vacuum state
Virtual particle
References
Notes
Articles in the press
Bibliography
Further reading
Press articles
Journal articles
Books
External links
Nima Arkani-Hamed on the issue of vacuum energy and dark energy.
Steven Weinberg on the cosmological constant problem.
Energy (physics)
Quantum field theory
Quantum electrodynamics
Concepts in physics
Mathematical physics
Condensed matter physics
Materials science
Quantum phases
Non-equilibrium thermodynamics
Perpetual motion
Physical paradoxes
Thermodynamics | Zero-point energy | Physics,Chemistry,Materials_science,Mathematics,Engineering | 17,794 |
103,073 | https://en.wikipedia.org/wiki/Plexus | In neuroanatomy, a plexus (from the Latin term for "braid") is a branching network of vessels or nerves. The vessels may be blood vessels (veins, capillaries) or lymphatic vessels. The nerves are typically axons outside the central nervous system.
The standard plural form in English is plexuses. Alternatively, the Latin plural plexūs may be used.
Types
Nerve plexuses
The four primary nerve plexuses are the cervical plexus, brachial plexus, lumbar plexus, and the sacral plexus.
Cardiac plexus
Celiac plexus
Renal plexus
Venous plexus
Choroid plexus
The choroid plexus is a part of the central nervous system in the brain and consists of capillaries, brain ventricles, and ependymal cells.
Invertebrates
The plexus is the characteristic form of nervous system in the coelenterates and persists with modifications in the flatworms. The nerves of the radially symmetric echinoderms also take this form, where a plexus underlies the ectoderm of these animals and deeper in the body other nerve cells form plexuses of limited extent.
See also
Cranial nerve
Spinal nerve
Nerve plexus
Brachial nerve
List of anatomy mnemonics
References
Nervous system | Plexus | Biology | 287 |
63,964,488 | https://en.wikipedia.org/wiki/Rapid%20voltage%20change | A rapid voltage change or RVC is one of the power-quality (PQ) issues related to voltage disturbance. According to the IEC 61000-4-30, Ed. 3 standard, an RVC is defined as "a quick transition in root mean square (r.m.s.) voltage occurring between two steady-state conditions, and during which the r.m.s. voltage does not exceed the dip/swell thresholds." Switching processes such as motor starting, capacitor bank on/off switching, load switching, or transformer tap-changer operations can all create RVCs. Moreover, they can also be induced by sudden load variations or by disturbances in the power output of distributed energy sources such as solar or wind power systems. The main known effect of rapid voltage changes is light flicker, but other non-flicker effects have also been reported.
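As a rough illustration of how this definition translates into a measurement procedure, the Python sketch below flags candidate RVC samples in a per-unit r.m.s. voltage series: a sample counts as "not steady" when it deviates from the average of the preceding window by more than a tolerance, while remaining inside the dip/swell limits. The window length, tolerance and threshold values are illustrative placeholders and do not reproduce the exact sliding-reference and hysteresis rules of IEC 61000-4-30.

import numpy as np

def detect_rvc(v_rms, dip=0.90, swell=1.10, steady_tol=0.03, window=100):
    # v_rms: half-cycle r.m.s. voltage values, in per-unit of the nominal voltage.
    v = np.asarray(v_rms, dtype=float)
    flags = np.zeros(v.shape, dtype=bool)
    for i in range(window, v.size):
        ref = v[i - window:i].mean()                 # crude steady-state reference
        not_steady = abs(v[i] - ref) > steady_tol    # voltage is still in transition
        within_limits = dip < v[i] < swell           # never deep enough to count as a dip or swell
        flags[i] = not_steady and within_limits
    return flags

# Example: a step from 1.00 p.u. to 0.95 p.u. stays above the dip threshold, so it registers as an RVC, not a dip.
v = np.concatenate([np.ones(300), 0.95 * np.ones(300)])
print(np.where(detect_rvc(v))[0][:5])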
Rapid voltage change effect
The voltage disturbance level of an RVC is not as large as that of a sag/dip or a swell. While RVC events are generally not destructive to electronic equipment, they can be annoying for end users because they may cause light flicker.
References
Electronics
Voltage stability
Electric power | Rapid voltage change | Physics,Engineering | 232 |