Gravity separation is an industrial method of separating two components, either a suspension or a dry granular mixture, where separating the components with gravity is sufficiently practical: i.e. the components of the mixture have different specific weights. Every gravitational method uses gravity as the primary force for separation. One type of gravity separator lifts the material by vacuum over an inclined, vibrating, screen-covered deck. [ 1 ] This results in the material being suspended in air while the heavier impurities are left behind on the screen and are discharged from the stone outlet. Gravity separation is used in a wide variety of industries, and applications can be most simply differentiated by the characteristics of the mixture to be separated: principally 'wet', i.e. a suspension, versus 'dry', a mixture of granular product. Often other methods are applied to make the separation faster and more efficient, such as flocculation , coagulation and suction. The most notable advantages of the gravitational methods are their cost effectiveness and, in some cases, excellent reduction. Gravity separation is an attractive unit operation as it generally has low capital and operating costs, uses few if any chemicals that might cause environmental concerns, and the recent development of new equipment enhances the range of separations possible.
Agriculture
Gravity separation tables are used for the removal of impurities, admixtures, insect-damaged kernels and immature kernels from crops such as wheat, barley, oilseed rape, peas, beans, cocoa beans and linseed. They can also be used to separate and standardize coffee beans, cocoa beans, peanuts, corn, peas, rice, wheat, sesame and other food grains.
The gravity separator separates products of the same size but with different specific weights. It has a vibrating rectangular deck, which allows the product to travel a longer distance and improves the quality of the end product. Pressurized air in the deck causes the material to stratify according to its specific weight: the heavier particles travel to the higher level while the lighter particles travel to the lower level of the deck. Easily adjustable air fans control the volume of air delivered to different areas of the vibrating deck to meet its air supply needs. The table inclination, the speed of the eccentric motion and the feed rate can be precisely adjusted to achieve smooth operation of the machine. [ 2 ]
Heavy liquids such as tetrabromoethane can be used to separate ores from supporting rocks by preferential flotation. The rocks are crushed, and while sand, limestone , dolomite , and other types of rock material will float on TBE, ores such as sphalerite , galena and pyrite will sink.
Clarification is the method of separating fluid from solid particles. Clarification is often used together with flocculation to make the solid particles sink faster to the bottom of the clarification pool, while fluid that is free of solid particles is drawn from the surface.
Thickening is the reverse of clarification: the solids that sink to the bottom are collected, while the fluid at the surface is rejected.
The difference between these methods can be demonstrated with the methods used in waste water processing: in the clarification phase, sludge sinks to the bottom of the pool and clear water flows over the clear-water grooves and continues its journey. The obtained sludge is then pumped into the thickeners, where it thickens further before being pumped into digestion to be processed into fertilizer.
When cleaning gases, a commonly used and generally effective method for removing large particles is to blow the gas into a large chamber, where its velocity decreases and the solid particles settle to the bottom. This method is used mostly because of its low cost. | https://en.wikipedia.org/wiki/Gravity_separation |
A gravity train is a theoretical means of transportation for purposes of commuting between two points on the surface of a sphere , by following a straight tunnel connecting the two points through the interior of the sphere.
In a large body such as a planet , this train could be left to accelerate using just the force of gravity , since during the first half of the trip (from the point of departure until the middle), the downward pull towards the center of gravity would pull it towards the destination. During the second half of the trip, the acceleration would be in the opposite direction relative to the trajectory, but, ignoring the effects of friction , the speed acquired during the first half of the trajectory would be exactly consumed by this deceleration, and as a result, the train's speed would reach zero at approximately the moment the train reached its destination. [ 1 ]
In the 17th century, British scientist Robert Hooke presented the idea of an object accelerating inside a planet in a letter to Isaac Newton . A gravity train project was seriously presented to the French Academy of Sciences in the 19th century. The same idea was proposed, without calculation, by Lewis Carroll in 1893 in Sylvie and Bruno Concluded . The idea was rediscovered in the 1960s when physicist Paul Cooper published a paper in the American Journal of Physics suggesting that gravity trains be considered for a future transportation project. [ 2 ]
Under the assumption of a spherical planet with uniform density, and ignoring relativistic effects as well as friction, a gravity train has the following properties: [ 3 ]
For gravity trains between points which are not the antipodes of each other, the following hold:
On the planet Earth specifically, since a gravity train's movement is the projection of a very-low-orbit satellite's movement onto a line, it has the following parameters: the travel time is about 42 minutes (half the period of a satellite skimming the surface), regardless of which two surface points the tunnel connects, and for a tunnel through the center the maximum speed, reached at the midpoint, is about 7,900 m/s.
To put some numbers in perspective, the deepest borehole to date is the Kola Superdeep Borehole , with a true depth of 12,262 meters; covering the distance between London and Paris (350 km) via a hypocycloidal path would require the creation of a hole 111,408 metres deep. Not only is such a depth nine times as great, but it would also necessitate a tunnel that passes through the Earth's mantle .
Using the approximations that the Earth is perfectly spherical and of uniform density $\rho$, and the fact that within a uniform hollow sphere there is no gravity, the gravitational acceleration $a$ experienced by a body within the Earth is proportional to the ratio of the distance from the center $r$ to the Earth's radius $R$. This is because being underground at distance $r$ from the center is like being on the surface of a planet of radius $r$, inside a hollow spherical shell which contributes nothing.
On the surface, $r = R$, so the gravitational acceleration is $g = G\rho\,\tfrac{4}{3}\pi R$. Hence, the gravitational acceleration at $r$ is $a = \tfrac{r}{R}\,g$.
In the case of a straight line through the center of the Earth, the acceleration of the body is equal to that of gravity: it is falling freely straight down. We start falling at the surface, so at time $t$ the distance $r$ from the center satisfies the equation of motion

$$\frac{d^{2}r}{dt^{2}} = -\frac{g}{R}\,r = -\omega^{2}r ,$$
where $\omega = \sqrt{\tfrac{g}{R}}$. This class of problems, where there is a restoring force proportional to the displacement away from zero, has general solutions of the form $r = k\cos(\omega t + \varphi)$, and describes simple harmonic motion such as in a spring or pendulum .
In this case $r_{t} = R\cos\left(\sqrt{\tfrac{g}{R}}\,t\right)$ so that $r_{0} = R$: we begin at the surface at time zero, and oscillate back and forth forever.
The travel time to the antipodes is half of one cycle of this oscillator, that is, the time for the argument of $\cos\left(\sqrt{\tfrac{g}{R}}\,t\right)$ to sweep out $\pi$ radians. Using the simple approximations $g = 10\text{ m/s}^{2}$ and $R = 6500\text{ km}$, that time is

$$t = \pi\sqrt{\frac{R}{g}} \approx \pi\sqrt{\frac{6.5\times10^{6}\ \text{m}}{10\ \text{m/s}^{2}}} \approx 2530\ \text{s} \approx 42\ \text{minutes}.$$
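As a quick numerical check of the formula above, here is a minimal C sketch (not part of the original article; the peak-speed line uses the fact that the maximum speed of the oscillation is $\omega R = \sqrt{gR}$):

```c
#include <math.h>
#include <stdio.h>

int main(void)
{
    const double PI = 3.14159265358979;
    const double g  = 10.0;    /* surface gravity, m/s^2 (simple approximation) */
    const double R  = 6.5e6;   /* Earth's radius, m      (simple approximation) */

    double t     = PI * sqrt(R / g);   /* half-period of the oscillation */
    double v_max = sqrt(g * R);        /* speed at the center, omega * R */

    printf("travel time to the antipodes: %.0f s (about %.1f minutes)\n", t, t / 60.0);
    printf("peak speed at the center    : %.0f m/s\n", v_max);
    return 0;
}
```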
For the more general case of the straight line path between any two points on the surface of a sphere we calculate the acceleration of the body as it moves frictionlessly along its straight path.
The body travels along AOB, O being the midpoint of the path, and the closest point to the center of the Earth on this path. At distance $r$ along this path, the force of gravity $g_{r}$ (directed from the body's position X towards the center of the Earth C, along XC) depends linearly on the distance $x$ to the center of the Earth, as above. Expressed in terms of $r$, and using the shorthand $b = R\sin\theta$ for the length OC, we have $x = \sqrt{r^{2} + b^{2}}$ and

$$g_{r} = \frac{x}{R}\,g .$$

The resulting acceleration on the body, because it is on a frictionless inclined surface , is the component of $g_{r}$ along the path, $g_{r}\cos\varphi$:

$$a = g_{r}\cos\varphi = \frac{x}{R}\,g\cos\varphi .$$

But $\cos\varphi = r/x$, so substituting both:

$$a = \frac{x}{R}\,g\cdot\frac{r}{x} = \frac{g}{R}\,r ,$$

which is exactly the same, for this new $r$ (the distance along AOB away from O), as for the $r$ in the diametric case along ACD. So the remaining analysis is the same; accommodating the initial condition that the maximal $r$ is $R\cos\theta = AO$, the complete equation of motion is

$$r_{t} = R\cos\theta\,\cos\left(\sqrt{\tfrac{g}{R}}\,t\right).$$
The time constant $\omega = \sqrt{\tfrac{g}{R}}$ is the same as in the diametric case, so the journey time is still 42 minutes; it is just that all the distances and speeds are scaled by the constant $\cos\theta$.
The time constant $\omega$ depends only on $\tfrac{g}{R}$, so if we expand that using $g = G\rho\,\tfrac{4}{3}\pi R$ we get

$$\omega = \sqrt{\frac{g}{R}} = \sqrt{\tfrac{4}{3}\pi G\rho},$$

which depends only on the gravitational constant and $\rho$, the density of the planet. The size of the planet is immaterial; the journey time is the same if the density is the same.
In the 2012 movie Total Recall , a gravity train called "The Fall" goes through the center of the Earth to commute between Western Europe and Australia. [ 5 ] [ 6 ] | https://en.wikipedia.org/wiki/Gravity_train |
In mathematics , Gray's conjecture is a conjecture made by Brayton Gray in 1984 about maps between loop spaces of spheres. [ 1 ] It was later proved by John Harper. [ 2 ]
| https://en.wikipedia.org/wiki/Gray's_conjecture |
Gray's Paradox is a paradox posed in 1936 by British zoologist Sir James Gray . The paradox concerns how dolphins can attain such high speeds and accelerations with what appears to be a small muscle mass. Gray estimated the power a dolphin could exert based on its physiology and concluded that this power was insufficient to overcome the drag forces in water. He hypothesized that dolphins' skin must have special anti-drag properties. [ 1 ]
In 2008, researchers from Rensselaer Polytechnic Institute , West Chester University and the University of California, Santa Cruz used digital particle image velocimetry to prove that Gray's assumptions oversimplified the relationship between muscle power and drag force. [ 2 ]
Timothy Wei, professor and acting dean of Rensselaer's School of Engineering, videotaped two bottlenose dolphins, Primo and Puka, as they swam through a section of water populated with hundreds of thousands of tiny air bubbles. Computer software and force measurement tools developed for aerospace were then used to study the particle-image velocimetry which was captured at 1,000 frames per second (fps). This allowed the team to measure the force exerted by a dolphin. Results showed the dolphin to exert approximately 200 lb of force every time it thrust its tail – 10 times more than Gray hypothesized – and at peak force can exert between 300 and 400 lb. [ 2 ]
Wei also used this technique to film dolphins as they were doing tail-stands, a trick where the dolphins “walk” on water by holding most of their bodies vertical above the water while supporting themselves with short, powerful thrusts of their tails.
In 2009, researchers from the National Chung Hsing University in Taiwan introduced new concepts of “kidnapped airfoils” and “circulating horsepower” to explain the swimming capabilities of the swordfish . Swordfish swim at even higher speeds and accelerations than dolphins. The researchers claim their analysis also "solves the perplexity of dolphin’s Gray paradox". [ 3 ]
The prior research efforts to refute Gray's paradox only looked at the drag-reducing aspect of dolphins' skin, but never questioned Gray's basic assumption "that drag cannot be greater than muscle work", which led to the paradox in the first place. In 2014, a team of theoretical mechanical engineers from Northwestern University proved the underlying hypothesis of Gray's paradox wrong. [ 4 ] They showed mathematically that the drag power on an undulatory swimmer (such as a dolphin) can indeed be greater than the muscle power it generates to propel itself forward, without this being paradoxical. They introduced the concept of "energy cascade" to show that during steady swimming all of the generated muscle power is dissipated in the wake of the swimmer (through viscous dissipation). A swimmer uses muscle power to undulate its body, which causes it to experience both drag and thrust simultaneously. Muscle power generated should be equated to the power needed to deform the body, rather than to the drag power; drag power should instead be equated to thrust power, because during steady swimming drag and thrust are equal in magnitude but opposite in direction. Their findings can be summarized in a simple power balance equation:
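In symbols (the notation here is illustrative, restating the statements above rather than reproducing the original paper's equation):

$$P_{\text{muscle}} = P_{\text{deformation}}, \qquad P_{\text{thrust}} = P_{\text{drag}},$$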
in which $P_{\text{muscle}}$ is the rate at which the muscles do work, $P_{\text{deformation}}$ is the rate of work needed to deform the undulating body, and $P_{\text{thrust}}$ and $P_{\text{drag}}$ are the thrust and drag powers, which are equal in magnitude during steady swimming.
It is important to acknowledge that a swimmer does not have to overcome drag entirely through its muscle work; it is also assisted by the thrust force in this task. Their research also shows that the drag on a self-propelled swimming body is a matter of definition, and many definitions of drag on the swimming body are prevalent in the literature. Some of these definitions can give a higher value than the muscle power. However, this does not lead to any paradox, because higher drag also means higher thrust in the power balance equation, and this does not violate any energy balance principles. | https://en.wikipedia.org/wiki/Gray's_paradox |
The gray (symbol: Gy ) is the unit of ionizing radiation dose in the International System of Units (SI), defined as the absorption of one joule of radiation energy per kilogram of matter . [ 1 ]
It is used as a unit of the radiation quantity absorbed dose that measures the energy deposited by ionizing radiation in a unit mass of absorbing material, and is used for measuring the delivered dose in radiotherapy , food irradiation and radiation sterilization . It is important in predicting likely acute health effects, such as acute radiation syndrome and is used to calculate equivalent dose using the sievert , which is a measure of the stochastic health effect on the human body.
The gray is also used in radiation metrology as a unit of the radiation quantity kerma ; defined as the sum of the initial kinetic energies of all the charged particles liberated by uncharged ionizing radiation [ a ] in a sample of matter per unit mass. The unit was named after British physicist Louis Harold Gray , a pioneer in the measurement of X-ray and radium radiation and their effects on living tissue. [ 2 ]
The gray was adopted as part of the International System of Units in 1975. The corresponding cgs unit to the gray is the rad (equivalent to 0.01 Gy), which remains common largely in the United States, though "strongly discouraged" in the style guide for U.S. National Institute of Standards and Technology . [ 3 ]
The gray has a number of fields of application in measuring dose:
The measurement of absorbed dose in tissue is of fundamental importance in radiobiology and radiation therapy as it is the measure of the amount of energy the incident radiation deposits in the target tissue. The measurement of absorbed dose is a complex problem due to scattering and absorption, and many specialist dosimeters are available for these measurements, and can cover applications in 1-D, 2-D and 3-D. [ 4 ] [ 5 ] [ 6 ]
In radiation therapy, the amount of radiation applied varies depending on the type and stage of cancer being treated. For curative cases, the typical dose for a solid epithelial tumor ranges from 60 to 80 Gy, while lymphomas are treated with 20 to 40 Gy. Preventive (adjuvant) doses are typically around 45–60 Gy in 1.8–2 Gy fractions (for breast, head, and neck cancers).
The average radiation dose from an abdominal X-ray is 0.7 mGy, that from an abdominal CT scan is 8 mGy, that from a pelvic CT scan is 6 mGy, and that from a selective CT scan of the abdomen and the pelvis is 14 mGy. [ 7 ]
The absorbed dose also plays an important role in radiation protection , as it is the starting point for calculating the stochastic health risk of low levels of radiation, which is defined as the probability of cancer induction and genetic damage. [ 8 ] The gray measures the total absorbed energy of radiation, but the probability of stochastic damage also depends on the type and energy of the radiation and the types of tissues involved. This probability is related to the equivalent dose in sieverts (Sv), which has the same dimensions as the gray. It is related to the gray by weighting factors described in the articles on equivalent dose and effective dose .
The International Committee for Weights and Measures states: "In order to avoid any risk of confusion between the absorbed dose D and the dose equivalent H , the special names for the respective units should be used, that is, the name gray should be used instead of joules per kilogram for the unit of absorbed dose D and the name sievert instead of joules per kilogram for the unit of dose equivalent H ." [ 9 ]
The accompanying diagrams show how absorbed dose (in grays) is first obtained by computational techniques, and from this value the equivalent doses are derived. For X-rays and gamma rays the gray is numerically the same value when expressed in sieverts, but for alpha particles one gray is equivalent to 20 sieverts, and a radiation weighting factor is applied accordingly.
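In symbols, the relation sketched here is the standard weighting of absorbed dose by radiation type,

$$H_{T} = \sum_{R} w_{R}\,D_{T,R},$$

so 1 Gy of absorbed dose from photons ($w_{R} = 1$) corresponds to an equivalent dose of 1 Sv, while 1 Gy from alpha particles ($w_{R} = 20$) corresponds to 20 Sv.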
The gray is conventionally used to express the severity of what are known as "tissue effects" from doses received in acute exposure to high levels of ionizing radiation. These are effects that are certain to happen, as opposed to the uncertain effects of low levels of radiation that have a probability of causing damage. A whole-body acute exposure to 5 grays or more of high-energy radiation usually leads to death within 14 days. LD 1 is 2.5 Gy, LD 50 is 5 Gy and LD 99 is 8 Gy. [ 10 ] The LD 50 dose represents 375 joules for a 75 kg adult.
The gray is used to measure absorbed dose rates in non-tissue materials for processes such as radiation hardening , food irradiation and electron irradiation . Measuring and controlling the value of absorbed dose is vital to ensuring correct operation of these processes.
Kerma ("kinetic energy released per unit mass") is used in radiation metrology as a measure of the liberated energy of ionisation due to irradiation, and is expressed in grays. Importantly, kerma dose is different from absorbed dose, depending on the radiation energies involved, partially because ionization energy is not accounted for. Whilst roughly equal at low energies, kerma is much higher than absorbed dose at higher energies, because some energy escapes from the absorbing volume in the form of bremsstrahlung (X-rays) or fast-moving electrons.
Kerma, when applied to air, is equivalent to the legacy roentgen unit of radiation exposure, but there is a difference in the definition of these two units. The gray is defined independently of any target material, however, the roentgen was defined specifically by the ionisation effect in dry air, which did not necessarily represent the effect on other media.
Wilhelm Röntgen discovered X-rays on November 8, 1895, and their use spread very quickly for medical diagnostics, particularly broken bones and embedded foreign objects where they were a revolutionary improvement over previous techniques.
Due to the wide use of X-rays and the growing realisation of the dangers of ionizing radiation, measurement standards became necessary for radiation intensity and various countries developed their own, but using differing definitions and methods. Eventually, in order to promote international standardisation, the first International Congress of Radiology (ICR) meeting in London in 1925, proposed a separate body to consider units of measure. This was called the International Commission on Radiation Units and Measurements , or ICRU, [ b ] and came into being at the Second ICR in Stockholm in 1928, under the chairmanship of Manne Siegbahn . [ 11 ] [ 12 ] [ c ]
One of the earliest techniques of measuring the intensity of X-rays was to measure their ionising effect in air by means of an air-filled ion chamber . At the first ICRU meeting it was proposed that one unit of X-ray dose should be defined as the quantity of X-rays that would produce one esu of charge in one cubic centimetre of dry air at 0 °C and 1 standard atmosphere of pressure. This unit of radiation exposure was named the roentgen in honour of Wilhelm Röntgen, who had died five years previously. At the 1937 meeting of the ICRU, this definition was extended to apply to gamma radiation . [ 13 ] This approach, although a great step forward in standardisation, had the disadvantage of not being a direct measure of the absorption of radiation, and thereby the ionisation effect, in various types of matter including human tissue, and was a measurement only of the effect of the X-rays in a specific circumstance; the ionisation effect in dry air. [ 14 ]
In 1940, Louis Harold Gray, who had been studying the effect of neutron damage on human tissue, together with William Valentine Mayneord and the radiobiologist John Read, published a paper in which a new unit of measure, dubbed the gram roentgen (symbol: gr) was proposed, and defined as "that amount of neutron radiation which produces an increment in energy in unit volume of tissue equal to the increment of energy produced in unit volume of water by one roentgen of radiation". [ 15 ] This unit was found to be equivalent to 88 ergs in air, and made the absorbed dose, as it subsequently became known, dependent on the interaction of the radiation with the irradiated material, not just an expression of radiation exposure or intensity, which the roentgen represented. In 1953 the ICRU recommended the rad , equal to 100 erg/g, as the new unit of measure of absorbed radiation. The rad was expressed in coherent cgs units. [ 13 ]
In the late 1950s, the CGPM invited the ICRU to join other scientific bodies to work on the development of the International System of Units , or SI. [ 16 ] The CCU decided to define the SI unit of absorbed radiation as energy deposited by reabsorbed charged particles per unit mass of absorbent material, which is how the rad had been defined, but in MKS units it would be equivalent to the joule per kilogram. This was confirmed in 1975 by the 15th CGPM, and the unit was named the "gray" in honour of Louis Harold Gray, who had died in 1965. The gray was thus equal to 100 rad. Notably, the centigray (numerically equivalent to the rad) is still widely used to describe absolute absorbed doses in radiotherapy.
The adoption of the gray by the 15th General Conference on Weights and Measures as the unit of measure of the absorption of ionizing radiation , specific energy absorption , and of kerma in 1975 [ 17 ] was the culmination of over half a century of work, both in the understanding of the nature of ionizing radiation and in the creation of coherent radiation quantities and units.
The following table shows radiation quantities in SI and non-SI units. | https://en.wikipedia.org/wiki/Gray_(unit) |
Gray goo (also spelled grey goo ) is a hypothetical global catastrophic scenario involving molecular nanotechnology in which out-of-control self-replicating machines consume all biomass (and perhaps also everything else) on Earth while building many more of themselves, [ 1 ] [ 2 ] a scenario that has been called ecophagy (literally, "consumption of the environment"). [ 3 ] The original idea assumed machines were designed to have this capability, while popularizations have assumed that machines might somehow gain this capability by accident.
Self-replicating machines of the macroscopic variety were originally described by mathematician John von Neumann , and are sometimes referred to as von Neumann machines or clanking replicators .
The term gray goo was coined by nanotechnology pioneer K. Eric Drexler in his 1986 book Engines of Creation . [ 4 ] In 2004, he stated "I wish I had never used the term 'gray goo'." [ 5 ] Engines of Creation mentions "gray goo" as a thought experiment in two paragraphs and a note, while the popularized idea of gray goo was first publicized in a mass-circulation magazine, Omni , in November 1986. [ 6 ]
The term was first used by molecular nanotechnology pioneer K. Eric Drexler in Engines of Creation (1986). In Chapter 4, Engines Of Abundance , Drexler illustrates both exponential growth and inherent limits (not gray goo) by describing " dry " nanomachines that can function only if given special raw materials :
Imagine such a replicator floating in a bottle of chemicals, making copies of itself...the first replicator assembles a copy in one thousand seconds, the two replicators then build two more in the next thousand seconds, the four build another four, and the eight build another eight. At the end of ten hours, there are not thirty-six new replicators, but over 68 billion. In less than a day, they would weigh a ton; in less than two days, they would outweigh the Earth; in another four hours, they would exceed the mass of the Sun and all the planets combined — if the bottle of chemicals hadn't run dry long before.
According to Drexler, the term was popularized by an article in science fiction magazine Omni , which also popularized the term "nanotechnology" in the same issue. Drexler says arms control is a far greater issue than gray goo "nanobugs". [ 7 ]
Drexler describes gray goo in Chapter 11 of Engines of Creation :
Early assembler-based replicators could beat the most advanced modern organisms. 'Plants' with 'leaves' no more efficient than today's solar cells could out-compete real plants, crowding the biosphere with an inedible foliage. Tough, omnivorous 'bacteria' could out-compete real bacteria: they could spread like blowing pollen, replicate swiftly, and reduce the biosphere to dust in a matter of days. Dangerous replicators could easily be too tough, small, and rapidly spreading to stop — at least if we made no preparation. We have trouble enough controlling viruses and fruit flies.
Drexler notes that the geometric growth made possible by self-replication is inherently limited by the availability of suitable raw materials. Drexler used the term "gray goo" not to indicate color or texture, but to emphasize the difference between "superiority" in terms of human values and "superiority" in terms of competitive success:
Though masses of uncontrolled replicators need not be grey or gooey, the term "grey goo" emphasizes that replicators able to obliterate life might be less inspiring than a single species of crabgrass. They might be "superior" in an evolutionary sense, but this need not make them valuable.
Bill Joy , one of the founders of Sun Microsystems, discussed some of the problems with pursuing this technology in his now-famous 2000 article in Wired magazine, titled " Why The Future Doesn't Need Us ". In direct response to Joy's concerns, the first quantitative technical analysis of the ecophagy scenario was published in 2000 by nanomedicine pioneer Robert Freitas . [ 3 ]
Drexler more recently conceded that there is no need to build anything that even resembles a potential runaway replicator. This would avoid the problem entirely. In a paper in the journal Nanotechnology , he argues that self-replicating machines are needlessly complex and inefficient. His 1992 technical book on advanced nanotechnologies Nanosystems: Molecular Machinery, Manufacturing, and Computation [ 8 ] describes manufacturing systems that are desktop-scale factories with specialized machines in fixed locations and conveyor belts to move parts from place to place. None of these measures would prevent a party from creating a weaponized gray goo, were such a thing possible.
King Charles III (then Prince of Wales ) called upon the British Royal Society to investigate the "enormous environmental and social risks" of nanotechnology in a planned report, leading to much media commentary on gray goo. The Royal Society's report on nanoscience was released on 29 July 2004, and declared the possibility of self-replicating machines to lie too far in the future to be of concern to regulators. [ 9 ]
More recent analysis in the paper titled Safe Exponential Manufacturing from the Institute of Physics (co-written by Chris Phoenix, Director of Research of the Center for Responsible Nanotechnology, and Eric Drexler), shows that the danger of gray goo is far less likely than originally thought. [ 10 ] However, other long-term major risks to society and the environment from nanotechnology have been identified. [ 11 ] Drexler has made a somewhat public effort to retract his gray goo hypothesis, in an effort to focus the debate on more realistic threats associated with knowledge-enabled nanoterrorism and other misuses. [ 12 ]
In Safe Exponential Manufacturing , which was published in a 2004 issue of Nanotechnology , it was suggested that creating manufacturing systems with the ability to self-replicate by the use of their own energy sources would not be needed. [ 13 ] The Foresight Institute also recommended embedding controls in the molecular machines. These controls would be able to prevent anyone from purposely abusing nanotechnology, and therefore avoid the gray goo scenario. [ 14 ]
Gray goo is a useful construct for considering low-probability, high-impact outcomes from emerging technologies. Thus, it is a useful tool in the ethics of technology . Daniel A. Vallero applied it as a worst-case scenario thought experiment for technologists contemplating possible risks from advancing a technology. [ 15 ] This requires that a decision tree or event tree include even extremely low probability events if such events may have an extremely negative and irreversible consequence, i.e. application of the precautionary principle . Dianne Irving admonishes that "any error in science will have a rippling effect". [ 16 ] Vallero adapted this reference to chaos theory to emerging technologies, wherein slight permutations of initial conditions can lead to unforeseen and profoundly negative downstream effects, for which the technologist and the new technology's proponents must be held accountable. | https://en.wikipedia.org/wiki/Gray_goo |
The reflected binary code ( RBC ), also known as reflected binary ( RB ) or Gray code after Frank Gray , is an ordering of the binary numeral system such that two successive values differ in only one bit (binary digit).
For example, the representation of the decimal value "1" in binary would normally be " 001 ", and "2" would be " 010 ". In Gray code, these values are represented as " 001 " and " 011 ". That way, incrementing a value from 1 to 2 requires only one bit to change, instead of two.
Gray codes are widely used to prevent spurious output from electromechanical switches and to facilitate error correction in digital communications such as digital terrestrial television and some cable TV systems. The use of Gray code in these devices helps simplify logic operations and reduce errors in practice. [ 3 ]
Many devices indicate position by closing and opening switches. If that device uses natural binary codes , positions 3 and 4 are next to each other but all three bits of the binary representation differ: decimal 3 is encoded as 011 , while decimal 4 is encoded as 100 .
The problem with natural binary codes is that physical switches are not ideal: it is very unlikely that physical switches will change states exactly in synchrony. In the transition between the two states shown above, all three switches change state. In the brief period while all are changing, the switches will read some spurious position. Even without keybounce , the transition might look like 011 — 001 — 101 — 100 . When the switches appear to be in position 001 , the observer cannot tell if that is the "real" position 1, or a transitional state between two other positions. If the output feeds into a sequential system, possibly via combinational logic , then the sequential system may store a false value.
This problem can be solved by changing only one switch at a time, so there is never any ambiguity of position, resulting in codes assigning to each of a contiguous set of integers , or to each member of a circular list, a word of symbols such that no two code words are identical and each two adjacent code words differ by exactly one symbol. These codes are also known as unit-distance , [ 4 ] [ 5 ] [ 6 ] [ 7 ] [ 8 ] single-distance , single-step , monostrophic [ 9 ] [ 10 ] [ 7 ] [ 8 ] or syncopic codes , [ 9 ] in reference to the Hamming distance of 1 between adjacent codes.
In principle, there can be more than one such code for a given word length, but the term Gray code was first applied to a particular binary code for non-negative integers, the binary-reflected Gray code , or BRGC . Bell Labs researcher George R. Stibitz described such a code in a 1941 patent application, granted in 1943. [ 11 ] [ 12 ] [ 13 ] Frank Gray introduced the term reflected binary code in his 1947 patent application, remarking that the code had "as yet no recognized name". [ 14 ] He derived the name from the fact that it "may be built up from the conventional binary code by a sort of reflection process".
In the standard encoding of the Gray code the least significant bit follows a repetitive pattern of 2 on, 2 off (... 11001100 ...); the next digit a pattern of 4 on, 4 off; the i -th least significant bit a pattern of 2^i on, 2^i off. The most significant digit is an exception to this: for an n -bit Gray code, the most significant digit follows the pattern 2^(n−1) on, 2^(n−1) off, which is the same (cyclic) sequence of values as for the second-most significant digit, but shifted forwards 2^(n−2) places. The four-bit version of this is shown below:
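For reference, the 4-bit binary-reflected Gray code sequence for decimal 0 through 15 is:

0000, 0001, 0011, 0010, 0110, 0111, 0101, 0100, 1100, 1101, 1111, 1110, 1010, 1011, 1001, 1000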
For decimal 15 the code rolls over to decimal 0 with only one switch change. This is called the cyclic or adjacency property of the code. [ 15 ]
In modern digital communications , Gray codes play an important role in error correction . For example, in a digital modulation scheme such as QAM where data is typically transmitted in symbols of 4 bits or more, the signal's constellation diagram is arranged so that the bit patterns conveyed by adjacent constellation points differ by only one bit. By combining this with forward error correction capable of correcting single-bit errors, it is possible for a receiver to correct any transmission errors that cause a constellation point to deviate into the area of an adjacent point. This makes the transmission system less susceptible to noise .
Despite the fact that Stibitz described this code [ 11 ] [ 12 ] [ 13 ] before Gray, the reflected binary code was later named after Gray by others who used it. Two different 1953 patent applications use "Gray code" as an alternative name for the "reflected binary code"; [ 16 ] [ 17 ] one of those also lists "minimum error code" and "cyclic permutation code" among the names. [ 17 ] A 1954 patent application refers to "the Bell Telephone Gray code". [ 18 ] Other names include "cyclic binary code", [ 12 ] "cyclic progression code", [ 19 ] [ 12 ] "cyclic permuting binary" [ 20 ] or "cyclic permuted binary" (CPB). [ 21 ] [ 22 ]
The Gray code is sometimes misattributed to 19th century electrical device inventor Elisha Gray . [ 13 ] [ 23 ] [ 24 ] [ 25 ]
Reflected binary codes were applied to mathematical puzzles before they became known to engineers.
The binary-reflected Gray code represents the underlying scheme of the classical Chinese rings puzzle , a sequential mechanical puzzle mechanism described by the Frenchman Louis Gros in 1872. [ 26 ] [ 13 ]
It can serve as a solution guide for the Towers of Hanoi problem, based on a game by the Frenchman Édouard Lucas in 1883. [ 27 ] [ 28 ] [ 29 ] [ 30 ] Similarly, the so-called Towers of Bucharest and Towers of Klagenfurt game configurations yield ternary and pentary Gray codes. [ 31 ]
Martin Gardner wrote a popular account of the Gray code in his August 1972 "Mathematical Games" column in Scientific American . [ 32 ]
The code also forms a Hamiltonian cycle on a hypercube , where each bit is seen as one dimension.
When the French engineer Émile Baudot changed from using a 6-unit (6-bit) code to 5-unit code for his printing telegraph system, in 1875 [ 33 ] or 1876, [ 34 ] [ 35 ] he ordered the alphabetic characters on his print wheel using a reflected binary code, and assigned the codes using only three of the bits to vowels. With vowels and consonants sorted in their alphabetical order, [ 36 ] [ 37 ] [ 38 ] and other symbols appropriately placed, the 5-bit character code has been recognized as a reflected binary code. [ 13 ] This code became known as Baudot code [ 39 ] and, with minor changes, was eventually adopted as International Telegraph Alphabet No. 1 (ITA1, CCITT-1) in 1932. [ 40 ] [ 41 ] [ 38 ]
About the same time, the German-Austrian Otto Schäffler [ de ] [ 42 ] demonstrated another printing telegraph in Vienna using a 5-bit reflected binary code for the same purpose, in 1874. [ 43 ] [ 13 ]
Frank Gray , who became famous for inventing the signaling method that came to be used for compatible color television, invented a method to convert analog signals to reflected binary code groups using vacuum tube -based apparatus. Filed in 1947, the method and apparatus were granted a patent in 1953, [ 14 ] and the name of Gray stuck to the codes. The " PCM tube " apparatus that Gray patented was made by Raymond W. Sears of Bell Labs, working with Gray and William M. Goodall, who credited Gray for the idea of the reflected binary code. [ 44 ]
Gray was most interested in using the codes to minimize errors in converting analog signals to digital; his codes are still used today for this purpose.
Gray codes are used in linear and rotary position encoders ( absolute encoders and quadrature encoders ) in preference to weighted binary encoding. This avoids the possibility that, when multiple bits change in the binary representation of a position, a misread will result from some of the bits changing before others.
For example, some rotary encoders provide a disk which has an electrically conductive Gray code pattern on concentric rings (tracks). Each track has a stationary metal spring contact that provides electrical contact to the conductive code pattern. Together, these contacts produce output signals in the form of a Gray code. Other encoders employ non-contact mechanisms based on optical or magnetic sensors to produce the Gray code output signals.
Regardless of the mechanism or precision of a moving encoder, position measurement error can occur at specific positions (at code boundaries) because the code may be changing at the exact moment it is read (sampled). A binary output code could cause significant position measurement errors because it is impossible to make all bits change at exactly the same time. If, at the moment the position is sampled, some bits have changed and others have not, the sampled position will be incorrect. In the case of absolute encoders, the indicated position may be far away from the actual position and, in the case of incremental encoders, this can corrupt position tracking.
In contrast, the Gray code used by position encoders ensures that the codes for any two consecutive positions will differ by only one bit and, consequently, only one bit can change at a time. In this case, the maximum position error will be small, indicating a position adjacent to the actual position.
Due to the Hamming distance properties of Gray codes, they are sometimes used in genetic algorithms . [ 15 ] They are very useful in this field, since mutations in the code allow for mostly incremental changes, but occasionally a single bit-change can cause a big leap and lead to new properties.
Gray codes are also used in labelling the axes of Karnaugh maps since 1953 [ 45 ] [ 46 ] [ 47 ] as well as in Händler circle graphs since 1958, [ 48 ] [ 49 ] [ 50 ] [ 51 ] both graphical methods for logic circuit minimization .
In modern digital communications , 1D- and 2D-Gray codes play an important role in error prevention before applying an error correction . For example, in a digital modulation scheme such as QAM where data is typically transmitted in symbols of 4 bits or more, the signal's constellation diagram is arranged so that the bit patterns conveyed by adjacent constellation points differ by only one bit. By combining this with forward error correction capable of correcting single-bit errors, it is possible for a receiver to correct any transmission errors that cause a constellation point to deviate into the area of an adjacent point. This makes the transmission system less susceptible to noise .
Digital logic designers use Gray codes extensively for passing multi-bit count information between synchronous logic that operates at different clock frequencies. The logic is considered operating in different "clock domains". It is fundamental to the design of large chips that operate with many different clocking frequencies.
If a system has to cycle sequentially through all possible combinations of on-off states of some set of controls, and the changes of the controls require non-trivial expense (e.g. time, wear, human work), a Gray code minimizes the number of setting changes to just one change for each combination of states. An example would be testing a piping system for all combinations of settings of its manually operated valves.
A balanced Gray code can be constructed [ 52 ] that flips every bit equally often. Since bit-flips are evenly distributed, this is optimal in the following way: balanced Gray codes minimize the maximal count of bit-flips for each digit.
George R. Stibitz utilized a reflected binary code in a binary pulse counting device in 1941 already. [ 11 ] [ 12 ] [ 13 ]
A typical use of Gray code counters is building a FIFO (first-in, first-out) data buffer that has read and write ports that exist in different clock domains. The input and output counters inside such a dual-port FIFO are often stored using Gray code to prevent invalid transient states from being captured when the count crosses clock domains. [ 53 ] The updated read and write pointers need to be passed between clock domains when they change, to be able to track FIFO empty and full status in each domain. Each bit of the pointers is sampled non-deterministically for this clock domain transfer. So for each bit, either the old value or the new value is propagated. Therefore, if more than one bit in the multi-bit pointer is changing at the sampling point, a "wrong" binary value (neither new nor old) can be propagated. By guaranteeing only one bit can be changing, Gray codes guarantee that the only possible sampled values are the new or old multi-bit value. Typically Gray codes of power-of-two length are used.
Sometimes digital buses in electronic systems are used to convey quantities that can only increase or decrease by one at a time, for example the output of an event counter which is being passed between clock domains or to a digital-to-analog converter. The advantage of Gray codes in these applications is that differences in the propagation delays of the many wires that represent the bits of the code cannot cause the received value to go through states that are out of the Gray code sequence. This is similar to the advantage of Gray codes in the construction of mechanical encoders, however the source of the Gray code is an electronic counter in this case. The counter itself must count in Gray code, or if the counter runs in binary then the output value from the counter must be reclocked after it has been converted to Gray code, because when a value is converted from binary to Gray code, [ nb 1 ] it is possible that differences in the arrival times of the binary data bits into the binary-to-Gray conversion circuit will mean that the code could go briefly through states that are wildly out of sequence. Adding a clocked register after the circuit that converts the count value to Gray code may introduce a clock cycle of latency, so counting directly in Gray code may be advantageous. [ 54 ]
To produce the next count value in a Gray-code counter, it is necessary to have some combinational logic that will increment the current count value that is stored. One way to increment a Gray code number is to convert it into ordinary binary code, [ 55 ] add one to it with a standard binary adder, and then convert the result back to Gray code. [ 56 ] Other methods of counting in Gray code are discussed in a report by Robert W. Doran , including taking the output from the first latches of the master-slave flip flops in a binary ripple counter. [ 57 ]
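As a sketch of the convert, increment, re-convert approach just described (function names are illustrative; this is only one of several ways to step a Gray-code counter):

```c
#include <stdint.h>

/* Decode a reflected Gray code to binary by XORing each bit with all
   more significant bits. */
static uint32_t gray_to_binary(uint32_t g)
{
    uint32_t mask = g;
    while (mask >>= 1)
        g ^= mask;
    return g;
}

/* Encode binary as reflected Gray code: n XOR floor(n / 2). */
static uint32_t binary_to_gray(uint32_t b)
{
    return b ^ (b >> 1);
}

/* Next value of a Gray-code counter: decode, add one, re-encode. */
uint32_t gray_counter_next(uint32_t g)
{
    return binary_to_gray(gray_to_binary(g) + 1);
}
```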
As the execution of program code typically causes an instruction memory access pattern of locally consecutive addresses, bus encodings using Gray code addressing instead of binary addressing can reduce the number of state changes of the address bits significantly, thereby reducing the CPU power consumption in some low-power designs. [ 58 ] [ 59 ]
The binary-reflected Gray code list for n bits can be generated recursively from the list for n − 1 bits by reflecting the list (i.e. listing the entries in reverse order), prefixing the entries in the original list with a binary 0 , prefixing the entries in the reflected list with a binary 1 , and then concatenating the original list with the reversed list. [ 13 ] For example, generating the n = 3 list from the n = 2 list 00, 01, 11, 10: reflecting gives 10, 11, 01, 00; prefixing the original entries with 0 gives 000, 001, 011, 010; prefixing the reflected entries with 1 gives 110, 111, 101, 100; and concatenating yields the 3-bit list 000, 001, 011, 010, 110, 111, 101, 100.
The one-bit Gray code is $G_{1} = (\mathtt{0}, \mathtt{1})$. This can be thought of as built recursively as above from a zero-bit Gray code $G_{0} = (\Lambda)$ consisting of a single entry of zero length. This iterative process of generating $G_{n+1}$ from $G_{n}$ makes the following properties of the standard reflecting code clear:
These characteristics suggest a simple and fast method of translating a binary value into the corresponding Gray code. Each bit is inverted if the next higher bit of the input value is set to one. This can be performed in parallel by a bit-shift and exclusive-or operation if they are available: the $n$th Gray code is obtained by computing $n \oplus \lfloor \tfrac{n}{2} \rfloor$. Prepending a 0 bit leaves the order of the code words unchanged; prepending a 1 bit reverses the order of the code words. If the bits at position $i$ of the codewords are inverted, the order of neighbouring blocks of $2^{i}$ codewords is reversed. For example, if bit 0 is inverted in a 3-bit codeword sequence, the order of each pair of neighbouring codewords is reversed: 000, 001, 010, 011, 100, 101, 110, 111 becomes 001, 000, 011, 010, 101, 100, 111, 110.
If bit 1 is inverted, blocks of 2 codewords change order: the same sequence becomes 010, 011, 000, 001, 110, 111, 100, 101.
If bit 2 is inverted, blocks of 4 codewords reverse order: the sequence becomes 100, 101, 110, 111, 000, 001, 010, 011.
Thus, performing an exclusive or on a bit $b_{i}$ at position $i$ with the bit $b_{i+1}$ at position $i+1$ leaves the order of codewords intact if $b_{i+1} = \mathtt{0}$, and reverses the order of blocks of $2^{i+1}$ codewords if $b_{i+1} = \mathtt{1}$. Now, this is exactly the same operation as the reflect-and-prefix method to generate the Gray code.
A similar method can be used to perform the reverse translation, but the computation of each bit depends on the computed value of the next higher bit, so it cannot be performed in parallel. Assuming $g_{i}$ is the $i$th Gray-coded bit ($g_{0}$ being the most significant bit), and $b_{i}$ is the $i$th binary-coded bit ($b_{0}$ being the most significant bit), the reverse translation can be given recursively: $b_{0} = g_{0}$, and $b_{i} = g_{i} \oplus b_{i-1}$. Alternatively, decoding a Gray code into a binary number can be described as a prefix sum of the bits in the Gray code, where each individual summation operation in the prefix sum is performed modulo two.
To construct the binary-reflected Gray code iteratively, at step 0 start with $\mathrm{code}_{0} = \mathtt{0}$, and at step $i > 0$ find the bit position of the least significant 1 in the binary representation of $i$ and flip the bit at that position in the previous code $\mathrm{code}_{i-1}$ to get the next code $\mathrm{code}_{i}$. The bit positions start 0, 1, 0, 2, 0, 1, 0, 3, ... [ nb 2 ] See find first set for efficient algorithms to compute these values.
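A minimal C sketch of this iterative construction (not from the article; it prints the 4-bit sequence by flipping one bit per step):

```c
#include <stdio.h>

/* Print an n-bit code word, most significant bit first. */
static void print_bits(unsigned code, unsigned n)
{
    for (int b = (int)n - 1; b >= 0; b--)
        putchar(((code >> b) & 1u) ? '1' : '0');
    putchar('\n');
}

int main(void)
{
    const unsigned n = 4;
    unsigned code = 0;
    print_bits(code, n);                      /* step 0: 0000 */
    for (unsigned i = 1; i < (1u << n); i++) {
        unsigned pos = 0;
        while (((i >> pos) & 1u) == 0)        /* position of the least significant 1 in i */
            pos++;
        code ^= 1u << pos;                    /* flip that bit in the previous code */
        print_bits(code, n);
    }
    return 0;
}
```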
The following functions in C convert between binary numbers and their associated Gray codes. While it may seem that Gray-to-binary conversion requires each bit to be handled one at a time, faster algorithms exist. [ 60 ] [ 55 ] [ nb 1 ]
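A minimal, standard pair of conversion routines in C (the function names are illustrative; the third routine is the usual shift-based prefix-XOR that decodes a 32-bit Gray code in a handful of operations):

```c
typedef unsigned int uint;

/* Binary to reflected binary Gray code: n XOR floor(n / 2). */
uint binary_to_gray(uint num)
{
    return num ^ (num >> 1);
}

/* Gray code to binary, one bit at a time: each bit is XORed with
   all more significant bits. */
uint gray_to_binary(uint num)
{
    uint mask = num;
    while (mask) {
        mask >>= 1;
        num ^= mask;
    }
    return num;
}

/* Faster decode for codes of 32 bits or fewer: a parallel prefix XOR
   using doubling shifts. */
uint gray_to_binary32(uint num)
{
    num ^= num >> 16;
    num ^= num >> 8;
    num ^= num >> 4;
    num ^= num >> 2;
    num ^= num >> 1;
    return num;
}
```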
On newer processors, the number of ALU instructions in the decoding step can be reduced by taking advantage of the CLMUL instruction set . If MASK is the constant binary string of ones ending with a single zero digit, then carry-less multiplication of MASK with the Gray encoding of x will always give either x or its bitwise negation.
In practice, "Gray code" almost always refers to a binary-reflected Gray code (BRGC). However, mathematicians have discovered other kinds of Gray codes. Like BRGCs, each consists of a list of words, where each word differs from the next in only one digit (each word has a Hamming distance of 1 from the next word).
It is possible to construct binary Gray codes with n bits with a length of less than 2 n , if the length is even. One possibility is to start with a balanced Gray code and remove pairs of values at either the beginning and the end, or in the middle. [ 61 ] OEIS sequence A290772 [ 62 ] gives the number of possible Gray sequences of length 2 n that include zero and use the minimum number of bits.
Ternary numbers (left) and their corresponding ternary Gray codes (right):

0 → 000, 1 → 001, 2 → 002, 10 → 012, 11 → 011, 12 → 010, 20 → 020, 21 → 021, 22 → 022, 100 → 122, 101 → 121, 102 → 120, 110 → 110, 111 → 111, 112 → 112, 120 → 102, 121 → 101, 122 → 100, 200 → 200, 201 → 201, 202 → 202, 210 → 212, 211 → 211, 212 → 210, 220 → 220, 221 → 221, 222 → 222
There are many specialized types of Gray codes other than the binary-reflected Gray code. One such type of Gray code is the n -ary Gray code , also known as a non-Boolean Gray code . As the name implies, this type of Gray code uses non- Boolean values in its encodings.
For example, a 3-ary ( ternary ) Gray code would use the values 0,1,2. [ 31 ] The ( n , k )- Gray code is the n -ary Gray code with k digits. [ 63 ] The sequence of elements in the (3, 2)-Gray code is: 00,01,02,12,11,10,20,21,22. The ( n , k )-Gray code may be constructed recursively, as the BRGC, or may be constructed iteratively . An algorithm to iteratively generate the ( N , k )-Gray code is presented (in C ):
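The original code listing is not reproduced in this copy; the following is a minimal C sketch of one common iterative construction, the "modular" (n, k)-Gray code, in which a digit may change by wrapping from n − 1 to 0. The resulting sequence is cyclic, as described in the next paragraph, though its ordering differs from the reflected (3, 2) sequence quoted above.

```c
#include <stdio.h>

/* Convert `value` into the digits of a modular (base, digits)-Gray code,
   most significant digit first in gray[0].  Successive values differ in
   exactly one digit, which may wrap from base-1 to 0, and the full cycle
   of base^digits code words is cyclic. */
void to_gray(unsigned base, unsigned digits, unsigned value, unsigned gray[])
{
    unsigned baseN[digits];              /* ordinary base-N digits, least significant first */
    for (unsigned i = 0; i < digits; i++) {
        baseN[i] = value % base;
        value /= base;
    }
    unsigned shift = 0;                  /* running correction from the digits above */
    for (int i = (int)digits - 1; i >= 0; i--) {
        unsigned g = (baseN[i] + shift) % base;
        gray[digits - 1 - i] = g;
        shift += base - g;               /* keep the correction non-negative */
    }
}

int main(void)
{
    unsigned gray[2];
    /* (3, 2)-Gray code: prints 00 01 02 12 10 11 21 22 20 */
    for (unsigned v = 0; v < 9; v++) {
        to_gray(3, 2, v, gray);
        printf("%u%u\n", gray[0], gray[1]);
    }
    return 0;
}
```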
There are other Gray code algorithms for ( n , k )-Gray codes. The ( n , k )-Gray code produced by the above algorithm is always cyclical; some algorithms, such as that by Guan, [ 63 ] lack this property when k is odd. On the other hand, while only one digit at a time changes with this method, it can change by wrapping (looping from n − 1 to 0). In Guan's algorithm, the count alternately rises and falls, so that the numeric difference between two Gray code digits is always one.
Gray codes are not uniquely defined, because a permutation of the columns of such a code is a Gray code too. The above procedure produces a code in which the lower the significance of a digit, the more often it changes, making it similar to normal counting methods.
See also Skew binary number system , a variant ternary number system where at most two digits change on each increment, as each increment can be done with at most one digit carry operation.
Although the binary reflected Gray code is useful in many scenarios, it is not optimal in certain cases because of a lack of "uniformity". [ 52 ] In balanced Gray codes , the numbers of changes in the different coordinate positions are as close to each other as possible. To make this more precise, let G be an R -ary complete Gray cycle having transition sequence $(\delta_{k})$; the transition counts ( spectrum ) of G are the collection of integers defined by

$$\lambda_{k} = \left|\left\{ j \in \mathbb{Z}_{R^{n}} : \delta_{j} = k \right\}\right| , \quad \text{for } k \in \mathbb{Z}_{n} .$$
A Gray code is uniform or uniformly balanced if its transition counts are all equal, in which case we have $\lambda_{k} = \tfrac{R^{n}}{n}$ for all $k$. Clearly, when $R = 2$, such codes exist only if $n$ is a power of 2. [ 64 ] If $n$ is not a power of 2, it is possible to construct well-balanced binary codes where the difference between two transition counts is at most 2; so that (combining both cases) every transition count is either $2\left\lfloor \tfrac{2^{n}}{2n} \right\rfloor$ or $2\left\lceil \tfrac{2^{n}}{2n} \right\rceil$. [ 52 ] Gray codes can also be exponentially balanced if all of their transition counts are adjacent powers of two, and such codes exist for every power of two. [ 65 ]
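As an illustration of the transition spectrum, here is a small C sketch (not from the article) that counts, over one full cycle, how often each bit position changes; applied to the standard 4-bit reflected Gray code it reports the unbalanced spectrum 8, 4, 2, 2 rather than the uniform 4, 4, 4, 4:

```c
#include <stdio.h>

/* Count how many times each of `bits` positions changes over one full
   cycle of the length-`len` code sequence `code` (including the wrap
   from the last word back to the first). */
void transition_spectrum(const unsigned *code, unsigned len, unsigned bits,
                         unsigned counts[])
{
    for (unsigned b = 0; b < bits; b++)
        counts[b] = 0;
    for (unsigned i = 0; i < len; i++) {
        unsigned diff = code[i] ^ code[(i + 1) % len];
        for (unsigned b = 0; b < bits; b++)
            if ((diff >> b) & 1u)
                counts[b]++;
    }
}

int main(void)
{
    unsigned brgc[16], counts[4];
    for (unsigned i = 0; i < 16; i++)
        brgc[i] = i ^ (i >> 1);          /* standard 4-bit reflected Gray code */
    transition_spectrum(brgc, 16, 4, counts);
    for (unsigned b = 0; b < 4; b++)
        printf("bit %u changes %u times\n", b, counts[b]);
    return 0;
}
```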
For example, a balanced 4-bit Gray code has 16 transitions, which can be evenly distributed among all four positions (four transitions per position), making it uniformly balanced: [ 52 ]
whereas a balanced 5-bit Gray code has a total of 32 transitions, which cannot be evenly distributed among the positions. In this example, four positions have six transitions each, and one has eight: [ 52 ]
We will now show a construction [ 66 ] and implementation [ 67 ] for well-balanced binary Gray codes which allows us to generate an n -digit balanced Gray code for every n . The main principle is to inductively construct an ( n + 2)-digit Gray code $G'$ given an n -digit Gray code G in such a way that the balanced property is preserved. To do this, we consider partitions of $G = g_{0}, \ldots, g_{2^{n}-1}$ into an even number L of non-empty blocks of the form

$$\left\{g_{0}\right\}, \left\{g_{1},\ldots,g_{k_{2}}\right\}, \left\{g_{k_{2}+1},\ldots,g_{k_{3}}\right\}, \ldots, \left\{g_{k_{L-2}+1},\ldots,g_{-2}\right\}, \left\{g_{-1}\right\}$$

where $k_{1} = 0$, $k_{L-1} = -2$, and $k_{L} \equiv -1 \pmod{2^{n}}$. This partition induces an $(n+2)$-digit Gray code given by
If we define the transition multiplicities
$$m_{i} = \left|\left\{ j : \delta_{k_{j}} = i ,\ 1 \leq j \leq L \right\}\right|$$
to be the number of times the digit in position $i$ changes between consecutive blocks in a partition, then for the ( n + 2)-digit Gray code induced by this partition the transition spectrum $\lambda'_{i}$ is

$$\lambda'_{i} = \begin{cases} 4\lambda_{i} - 2m_{i}, & \text{if } 0 \leq i < n \\ L, & \text{otherwise} \end{cases}$$
The delicate part of this construction is to find an adequate partitioning of a balanced n -digit Gray code such that the code induced by it remains balanced, but for this only the transition multiplicities matter; joining two consecutive blocks over a digit $i$ transition and splitting another block at another digit $i$ transition produces a different Gray code with exactly the same transition spectrum $\lambda'_{i}$, so one may for example [ 65 ] designate the first $m_{i}$ transitions at digit $i$ as those that fall between two blocks. Uniform codes can be found when $R \equiv 0 \pmod{4}$ and $R^{n} \equiv 0 \pmod{n}$, and this construction can be extended to the R -ary case as well. [ 66 ]
Long run (or maximum gap ) Gray codes maximize the distance between consecutive changes of digits in the same position. That is, the minimum run-length of any bit remains unchanged for as long as possible. [ 68 ]
Monotonic codes are useful in the theory of interconnection networks, especially for minimizing dilation for linear arrays of processors. [ 69 ] If we define the weight of a binary string to be the number of 1s in the string, then although we clearly cannot have a Gray code with strictly increasing weight, we may want to approximate this by having the code run through two adjacent weights before reaching the next one.
We can formalize the concept of monotone Gray codes as follows: consider the partition of the hypercube $Q_n = (V_n, E_n)$ into levels of vertices that have equal weight, i.e.
$$V_n(i) = \{\, v \in V_n : v \text{ has weight } i \,\}$$
for $0 \le i \le n$. These levels satisfy $|V_n(i)| = \binom{n}{i}$. Let $Q_n(i)$ be the subgraph of $Q_n$ induced by $V_n(i) \cup V_n(i+1)$, and let $E_n(i)$ be the edges in $Q_n(i)$. A monotonic Gray code is then a Hamiltonian path in $Q_n$ such that whenever $\delta_1 \in E_n(i)$ comes before $\delta_2 \in E_n(j)$ in the path, then $i \le j$.
An elegant construction of monotonic n-digit Gray codes for any n is based on the idea of recursively building subpaths $P_{n,j}$ of length $2\binom{n}{j}$ having edges in $E_n(j)$. [ 69 ] We define $P_{1,0} = ({\mathtt{0}}, {\mathtt{1}})$, $P_{n,j} = \emptyset$ whenever $j < 0$ or $j \ge n$, and
$$P_{n+1,j} = {\mathtt{1}}P_{n,j-1}^{\pi_n},\ {\mathtt{0}}P_{n,j}$$
otherwise. Here, $\pi_n$ is a suitably defined permutation and $P^{\pi}$ refers to the path P with its coordinates permuted by $\pi$. These paths give rise to two monotonic n-digit Gray codes $G_n^{(1)}$ and $G_n^{(2)}$ given by
$$G_n^{(1)} = P_{n,0}\,P_{n,1}^{R}\,P_{n,2}\,P_{n,3}^{R}\cdots \quad\text{and}\quad G_n^{(2)} = P_{n,0}^{R}\,P_{n,1}\,P_{n,2}^{R}\,P_{n,3}\cdots$$
The choice of $\pi_n$ which ensures that these codes are indeed Gray codes turns out to be $\pi_n = E^{-1}\left(\pi_{n-1}^{2}\right)$. The first few values of $P_{n,j}$ are shown in the table below.
These monotonic Gray codes can be efficiently implemented in such a way that each subsequent element can be generated in O ( n ) time. The algorithm is most easily described using coroutines .
Monotonic codes have an interesting connection to the Lovász conjecture , which states that every connected vertex-transitive graph contains a Hamiltonian path. The "middle-level" subgraph $Q_{2n+1}(n)$ is vertex-transitive (that is, its automorphism group is transitive, so that each vertex has the same "local environment" and cannot be differentiated from the others, since we can relabel the coordinates as well as the binary digits to obtain an automorphism ) and the problem of finding a Hamiltonian path in this subgraph is called the "middle-levels problem", which can provide insights into the more general conjecture. The question has been answered affirmatively for $n \le 15$, and the preceding construction for monotonic codes ensures a Hamiltonian path of length at least 0.839 N , where N is the number of vertices in the middle-level subgraph. [ 70 ]
Another type of Gray code, the Beckett–Gray code , is named for Irish playwright Samuel Beckett , who was interested in symmetry . His play " Quad " features four actors and is divided into sixteen time periods. Each period ends with one of the four actors entering or leaving the stage. The play begins and ends with an empty stage, and Beckett wanted each subset of actors to appear on stage exactly once. [ 71 ] Clearly the set of actors currently on stage can be represented by a 4-bit binary Gray code. Beckett, however, placed an additional restriction on the script: he wished the actors to enter and exit so that the actor who had been on stage the longest would always be the one to exit. The actors could then be represented by a first in, first out queue , so that (of the actors onstage) the actor being dequeued is always the one who was enqueued first. [ 71 ] Beckett was unable to find a Beckett–Gray code for his play, and indeed, an exhaustive listing of all possible sequences reveals that no such code exists for n = 4. It is known today that such codes do exist for n = 2, 5, 6, 7, and 8, and do not exist for n = 3 or 4. An example of an 8-bit Beckett–Gray code can be found in Donald Knuth 's Art of Computer Programming . [ 13 ] According to Sawada and Wong, the search space for n = 6 can be explored in 15 hours, and more than 9500 solutions for the case n = 7 have been found. [ 72 ]
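These existence results can be reproduced by a small exhaustive search, since a Beckett–Gray code is simply a cyclic Gray code over the subsets of actors in which every exit removes the longest-serving actor. The sketch below is illustrative Python (not tied to any published solver) that backtracks over "a new actor enters" / "the oldest actor exits" moves; it is only practical for small n:

```python
from collections import deque

def beckett_gray(n):
    """Backtracking search for a Beckett-Gray code on n actors.

    Looks for a cyclic sequence of all 2**n stage subsets, starting from the
    empty stage, in which consecutive subsets differ by a single entrance or
    exit and every exit removes the actor who entered earliest (FIFO).
    Returns the sequence of subsets, or None if no such code exists.
    """
    total = 2 ** n
    stage = deque()                  # actors currently on stage, oldest at the left
    seen = {frozenset()}             # subsets already used
    order = [frozenset()]

    def extend():
        if len(order) == total:
            # The cycle closes only if a single exit returns us to the empty stage.
            return len(stage) == 1
        for actor in range(n):       # an actor not on stage may enter
            if actor not in stage:
                stage.append(actor)
                s = frozenset(stage)
                if s not in seen:
                    seen.add(s); order.append(s)
                    if extend():
                        return True
                    order.pop(); seen.remove(s)
                stage.pop()
        if stage:                    # or the longest-serving actor may exit
            oldest = stage.popleft()
            s = frozenset(stage)
            if s not in seen:
                seen.add(s); order.append(s)
                if extend():
                    return True
                order.pop(); seen.remove(s)
            stage.appendleft(oldest)
        return False

    return order if extend() else None

if __name__ == "__main__":
    for n in (2, 3, 4):              # n = 5 also succeeds but takes much longer
        print(n, "->", "found" if beckett_gray(n) else "none")
    # Expected: a code exists for n = 2 but not for n = 3 or 4.
```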
Snake-in-the-box codes, or snakes , are the sequences of nodes of induced paths in an n -dimensional hypercube graph , and coil-in-the-box codes, [ 73 ] or coils , are the sequences of nodes of induced cycles in a hypercube. Viewed as Gray codes, these sequences have the property of being able to detect any single-bit coding error. Codes of this type were first described by William H. Kautz in the late 1950s; [ 5 ] since then, there has been much research on finding the code with the largest possible number of codewords for a given hypercube dimension.
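For small dimensions, longest snakes can be found by brute force: a vertex may be appended to an induced path only if it differs from the last vertex in exactly one bit and is not adjacent to any earlier vertex of the path. A minimal, exponential-time Python sketch (illustrative only, and sensible only for low dimensions) is:

```python
def longest_snake(n):
    """Return a longest induced path (snake) in the n-cube found by DFS.

    Vertices are integers 0..2**n-1; two vertices are adjacent when they
    differ in exactly one bit.  By symmetry the search is rooted at 0 -> 1.
    """
    def adjacent(u, v):
        d = u ^ v
        return d != 0 and d & (d - 1) == 0   # exactly one differing bit

    best = []

    def dfs(path):
        nonlocal best
        if len(path) > len(best):
            best = path[:]
        last = path[-1]
        for b in range(n):
            v = last ^ (1 << b)
            if v in path:
                continue
            # Induced path: the new vertex may touch only the current end.
            if all(not adjacent(v, u) for u in path[:-1]):
                path.append(v)
                dfs(path)
                path.pop()

    dfs([0, 1])
    return best

if __name__ == "__main__":
    for n in (3, 4):
        snake = longest_snake(n)
        print(n, len(snake) - 1, snake)   # number of edges and the node sequence
    # Known maximum snake lengths (in edges): 4 in dimension 3, 7 in dimension 4.
```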
Yet another kind of Gray code is the single-track Gray code (STGC) developed by Norman B. Spedding [ 74 ] [ 75 ] and refined by Hiltgen, Paterson and Brandestini in Single-track Gray Codes (1996). [ 76 ] [ 77 ] The STGC is a cyclical list of P unique binary encodings of length n such that two consecutive words differ in exactly one position, and when the list is examined as a P × n matrix , each column is a cyclic shift of the first column. [ 78 ]
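Both defining properties are straightforward to verify for a candidate list of codewords. The sketch below is illustrative Python; the small P = 6, n = 3 code used in the demonstration is a simple textbook-style example, not one of the published large single-track codes:

```python
def is_single_track_gray(words, n):
    """Check the two STGC properties for a cyclic list of n-bit strings."""
    P = len(words)
    # 1. Gray property: cyclically consecutive words differ in exactly one bit.
    gray = all(sum(a != b for a, b in zip(words[i], words[(i + 1) % P])) == 1
               for i in range(P))
    # 2. Single-track property: every column is a cyclic shift of column 0.
    cols = ["".join(w[j] for w in words) for j in range(n)]
    track = cols[0]
    shifts = {track[k:] + track[:k] for k in range(P)}
    single_track = all(c in shifts for c in cols)
    return gray and single_track

if __name__ == "__main__":
    # A small illustrative single-track Gray code with P = 6 positions, n = 3.
    code = ["100", "110", "111", "011", "001", "000"]
    print(is_single_track_gray(code, 3))   # True
```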
The name comes from their use with rotary encoders , where a number of tracks are being sensed by contacts, resulting for each in an output of 0 or 1 . To reduce noise due to different contacts not switching at exactly the same moment in time, one preferably sets up the tracks so that the data output by the contacts are in Gray code. To get high angular accuracy, one needs lots of contacts; in order to achieve at least 1° accuracy, one needs at least 360 distinct positions per revolution, which requires a minimum of 9 bits of data, and thus the same number of contacts.
If all contacts are placed at the same angular position, then 9 tracks are needed to get a standard BRGC with at least 1° accuracy. However, if the manufacturer moves a contact to a different angular position (but at the same distance from the center shaft), then the corresponding "ring pattern" needs to be rotated the same angle to give the same output. If the most significant bit (the inner ring in Figure 1) is rotated enough, it exactly matches the next ring out. Since both rings are then identical, the inner ring can be cut out, and the sensor for that ring moved to the remaining, identical ring (but offset at that angle from the other sensor on that ring). Those two sensors on a single ring make a quadrature encoder. That reduces the number of tracks for a "1° resolution" angular encoder to 8 tracks. Reducing the number of tracks still further cannot be done with BRGC.
For many years, Torsten Sillke [ 79 ] and other mathematicians believed that it was impossible to encode position on a single track such that consecutive positions differed at only a single sensor, except for the 2-sensor, 1-track quadrature encoder. So for applications where 8 tracks were too bulky, people used single-track incremental encoders (quadrature encoders) or 2-track "quadrature encoder + reference notch" encoders.
Norman B. Spedding, however, registered a patent in 1994 with several examples showing that it was possible. [ 74 ] Although it is not possible to distinguish $2^n$ positions with n sensors on a single track, it is possible to distinguish close to that many. Etzion and Paterson conjecture that when n is itself a power of 2, n sensors can distinguish at most $2^n - 2n$ positions and that for prime n the limit is $2^n - 2$ positions. [ 80 ] The authors went on to generate a 504-position single track code of length 9 which they believe is optimal. Since this number is larger than $2^8 = 256$, more than 8 sensors are required by any code, although a BRGC could distinguish 512 positions with 9 sensors.
An STGC for P = 30 and n = 5 is reproduced here:
Each column is a cyclic shift of the first column, and from any row to the next row only one bit changes. [ 81 ] The single-track nature (like a code chain) is useful in the fabrication of these wheels (compared to BRGC), as only one track is needed, thus reducing their cost and size.
The Gray code nature is useful (compared to chain codes , also called De Bruijn sequences ), as only one sensor will change at any one time, so the uncertainty during a transition between two discrete states will only be plus or minus one unit of angular measurement the device is capable of resolving. [ 82 ]
Since this 30-position example was added, there has been a lot of interest in examples with higher angular resolution. In 2008, Gary Williams, [ 83 ] [ user-generated source? ] based on previous work, [ 80 ] discovered a 9-bit single track Gray code that gives a 1 degree resolution. This Gray code was used to design an actual device which was published on the site Thingiverse . This device [ 84 ] was designed by etzenseep (Florian Bauer) in September 2022.
An STGC for P = 360 and n = 9 is reproduced here:
Two-dimensional Gray codes are used in communication to minimize the number of bit errors in quadrature amplitude modulation (QAM) adjacent points in the constellation . In a typical encoding the horizontal and vertical adjacent constellation points differ by a single bit, and diagonal adjacent points differ by 2 bits. [ 85 ]
Two-dimensional Gray codes also have uses in location identification schemes, where the code would be applied to area maps such as a Mercator projection of the earth's surface and an appropriate cyclic two-dimensional distance function such as the Mannheim metric would be used to calculate the distance between two encoded locations, thereby combining the characteristics of the Hamming distance with the cyclic continuation of a Mercator projection. [ 86 ]
If a subsection of a specific code value is extracted from that value (for example, the last 3 bits of a 4-bit Gray code), the resulting code will be an "excess Gray code". This code counts backwards in those extracted bits if the original value is increased further. The reason is that Gray-encoded values do not overflow in the way familiar from classic binary encoding when incremented past the "highest" value.
Example: the highest 3-bit Gray code value, 7, is encoded as (0)100. Adding 1 results in the number 8, encoded in Gray as 1100. The last 3 bits do not overflow but count backwards as the original 4-bit code is increased further.
When working with sensors that output multiple Gray-encoded values in a serial fashion, one should therefore pay attention to whether the sensor produces those multiple values encoded in one single Gray code or as separate ones, as otherwise the values might appear to be counting backwards when an "overflow" is expected.
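The backwards-counting behaviour of the extracted bits is easy to reproduce. A short, illustrative Python sketch using the usual binary-reflected encoding of the 3-bit/4-bit example above:

```python
def gray(i):
    """Binary-reflected Gray encoding of a non-negative integer."""
    return i ^ (i >> 1)

if __name__ == "__main__":
    for i in range(6, 11):
        full = gray(i) & 0b1111       # 4-bit Gray code of i
        low3 = full & 0b111           # the extracted "excess Gray code"
        print(f"{i:2d}  {full:04b}  low 3 bits = {low3:03b}")
    # Around i = 7 -> 8 the full code goes 0100 -> 1100, so the low three
    # bits stay at 100 and then run backwards (100, 101, 111, ...) as i grows.
```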
The bijective mapping $\{\,0 \leftrightarrow 00,\ 1 \leftrightarrow 01,\ 2 \leftrightarrow 11,\ 3 \leftrightarrow 10\,\}$ establishes an isometry between the metric space over $\mathbb{Z}_2^2$ (pairs of bits over the finite field $\mathbb{Z}_2$) with the metric given by the Hamming distance and the metric space over the finite ring $\mathbb{Z}_4$ (the usual modular arithmetic ) with the metric given by the Lee distance . The mapping is suitably extended to an isometry of the Hamming spaces $\mathbb{Z}_2^{2m}$ and $\mathbb{Z}_4^{m}$. Its importance lies in establishing a correspondence between various "good" but not necessarily linear codes as Gray-map images in $\mathbb{Z}_2^2$ of ring-linear codes from $\mathbb{Z}_4$. [ 87 ] [ 88 ]
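The isometry can be checked numerically by extending the map componentwise to $\mathbb{Z}_4^m$ and comparing the Lee distance of two vectors with the Hamming distance of their Gray images. The following Python sketch is illustrative only and not tied to any coding-theory library:

```python
from itertools import product

GRAY_MAP = {0: (0, 0), 1: (0, 1), 2: (1, 1), 3: (1, 0)}   # 0<->00, 1<->01, 2<->11, 3<->10

def lee_distance(u, v):
    """Lee distance on Z_4^m: per coordinate, min(|a-b|, 4-|a-b|)."""
    return sum(min(abs(a - b), 4 - abs(a - b)) for a, b in zip(u, v))

def hamming_distance_of_images(u, v):
    """Hamming distance between the binary Gray images in Z_2^(2m)."""
    gu = [bit for a in u for bit in GRAY_MAP[a]]
    gv = [bit for b in v for bit in GRAY_MAP[b]]
    return sum(x != y for x, y in zip(gu, gv))

if __name__ == "__main__":
    m = 3
    ok = all(lee_distance(u, v) == hamming_distance_of_images(u, v)
             for u in product(range(4), repeat=m)
             for v in product(range(4), repeat=m))
    print("Gray map is an isometry on Z_4^%d:" % m, ok)   # True
```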
There are a number of binary codes similar to Gray codes, including:
The following binary-coded decimal (BCD) codes are Gray code variants as well: | https://en.wikipedia.org/wiki/Gray_isometry |
Gray molasses is a method of sub-Doppler laser cooling of atoms. It employs principles from Sisyphus cooling in conjunction with a so-called "dark" state whose transition to the excited state is not addressed by the resonant lasers. Ultracold atomic physics experiments on atomic species with poorly-resolved hyperfine structure, like isotopes of lithium [ 1 ] and potassium , [ 2 ] often utilize gray molasses instead of Sisyphus cooling as a secondary cooling stage after the ubiquitous magneto-optical trap (MOT) to achieve temperatures below the Doppler limit . Unlike a MOT, which combines a molasses force with a confining force, a gray molasses can only slow but not trap atoms; hence, its efficacy as a cooling mechanism lasts only milliseconds before further cooling and trapping stages must be employed.
Like Sisyphus cooling , the cooling mechanism of gray molasses relies on a two-photon Raman-type transition between two hyperfine-split ground states mediated by an excited state. Orthogonal superpositions of these ground states constitute "bright" and "dark" states, so called since the former couples to the excited state via dipole transitions driven by the laser , and the latter is only accessible via spontaneous emission from the excited state. As neither are eigenstates of the kinetic energy operator, the dark state also evolves into the bright state with frequency proportional to the atom's external momentum. Gradients in the polarization of the molasses beam create a sinusoidal potential energy landscape for the bright state in which atoms lose kinetic energy by traveling "uphill" to potential energy maxima that coincide with circular polarizations capable of executing electric dipole transitions to the excited state. Atoms in the excited state are then optically pumped to the dark state and subsequently evolve back to the bright state to restart the cycle. Alternately, the pair of bright and dark ground states can be generated by electromagnetically-induced transparency (EIT) . [ 3 ] [ 4 ]
The net effect of many cycles from bright to excited to dark states is to subject atoms to Sisyphus-like cooling in the bright state and select the coldest atoms to enter the dark state and escape the cycle. The latter process constitutes velocity-selective coherent population trapping (VSCPT). [ 5 ] The combination of bright and dark states thus inspires the name "gray molasses."
In 1988, the NIST group in Washington led by William Phillips first measured temperatures below the Doppler limit in sodium atoms in an optical molasses , prompting the search for the theoretical underpinnings of sub-Doppler cooling. [ 6 ] The next year, Jean Dalibard and Claude Cohen-Tannoudji identified the cause as the multi-photon process of Sisyphus cooling, [ 7 ] and Steven Chu 's group likewise modeled sub-Doppler cooling as fundamentally an optical pumping scheme. [ 8 ] As a result of their efforts, Phillips, Cohen-Tannoudji, and Chu jointly won the 1997 Nobel Prize in Physics . T. W. Hänsch et al. first outlined the theoretical formulation of gray molasses in 1994, [ 9 ] and a four-beam experimental realization in cesium was achieved by G. Grynberg the next year. [ 10 ] It has since been regularly used to cool all the other alkali (hydrogenic) metals. [ 1 ] [ 2 ] [ 11 ] [ 12 ]
In Sisyphus cooling, the two Zeeman levels of a J = 1 / 2 {\displaystyle J=1/2} atomic ground state manifold experience equal and opposite AC Stark shifts from the near-resonant counter-propagating beams. The beams also effect a polarization gradient, alternating between linear and circular polarizations. The potential energy maxima of one m J {\displaystyle m_{J}} coincide with pure circular polarization, which optically pumps atoms to the other m J {\displaystyle m_{J}} , which experiences its minima in the same location. Over time, the atoms expend their kinetic energy traversing the potential energy landscape and transferring the potential energy difference between the crests and troughs of the AC-Stark-shifted ground state levels to emitted photons. [ 7 ]
In contrast, gray molasses only has one sinusoidally light-shifted ground state; optical pumping at the peaks of this potential energy landscape takes atoms to the dark state, which can selectively evolve to the bright state and re-enter the cycle with sufficient momentum. Sisyphus cooling is difficult to implement when the excited state manifold is poorly-resolved (i.e. whose hyperfine spacing is comparable to or less than the constituent linewidths ); in these atomic species, the Raman-type gray molasses is preferable.
Denote the two ground states and the excited state of the atom $|g_{-1}\rangle$, $|g_{1}\rangle$, and $|e_{0}\rangle$, respectively. The atom also has overall momentum, so the overall state of the atom is a product state of its internal state and its momentum, as shown in the figure. In the presence of counter-propagating beams of opposite polarization, the internal states experience the atom-light interaction Hamiltonian
where $\Omega$ is the Rabi frequency , approximated to be the same for both transitions. Using the definition of the translation operator in momentum space,
the effect of $H_{\mathrm{AL}}$ on the state $|e_0, p\rangle$ is
This suggests the dressed state $|\psi_{\mathrm{c}}(p)\rangle$ that couples to $|\psi_{\mathrm{e}}(p)\rangle = |e_0, p\rangle$ is a more convenient basis state of the two ground states. The orthogonal basis state $|\psi_{\mathrm{nc}}(p)\rangle$ defined below does not couple to $|\psi_{\mathrm{e}}(p)\rangle$ at all.
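For concreteness, in the usual Λ-system treatment the coupled ("bright") and non-coupled ("dark") superpositions take the form below; this is the standard form found in the VSCPT literature, quoted here as an illustration under the assumption of equal Rabi frequencies, not as a reproduction of the original derivation:

$$|\psi_{\mathrm{c}}(p)\rangle = \frac{1}{\sqrt{2}}\bigl(|g_{-1},\,p-\hbar k\rangle + |g_{+1},\,p+\hbar k\rangle\bigr), \qquad |\psi_{\mathrm{nc}}(p)\rangle = \frac{1}{\sqrt{2}}\bigl(|g_{-1},\,p-\hbar k\rangle - |g_{+1},\,p+\hbar k\rangle\bigr).$$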
The action of $H_{\mathrm{AL}}$ on these states is
Thus, $|\psi_{\mathrm{c}}(p)\rangle$ and $|\psi_{\mathrm{e}}(p)\rangle$ undergo Sisyphus-like cooling, identifying the former as the bright state. $|\psi_{\mathrm{nc}}(p)\rangle$ is optically inaccessible and constitutes the dark state. However, $|\psi_{\mathrm{c}}(p)\rangle$ and $|\psi_{\mathrm{nc}}(p)\rangle$ are not eigenstates of the momentum operator, and thus motionally couple to one another via the kinetic energy term of the unperturbed Hamiltonian:
As a result of this coupling, the dark state evolves into the bright state with frequency proportional to the momentum, effectively selecting hotter atoms to re-enter the Sisyphus cooling cycle. This nonadiabatic coupling occurs predominantly at the potential minima of the light-shifted coupling state. Over time, atoms cool until they lack the momentum to traverse the sinusoidal light shift of the bright state and instead populate the dark state. [ 9 ]
The resonance condition of any $\Lambda$-type Raman process requires that the difference in the two photon energies match the difference in energy between the states at the "legs" of the $\Lambda$, here the ground states $|g_{-1}\rangle$ and $|g_{1}\rangle$ identified above. In experimental settings, this condition is realized when the detunings of the cycling and repumper frequencies with respect to the $|g_{-1}\rangle \rightarrow |e_0\rangle$ and $|g_{1}\rangle \rightarrow |e_0\rangle$ transition frequencies, respectively, are equal. [ note 1 ]
Unlike most Doppler cooling techniques, light in the gray molasses must be blue -detuned from its resonant transition; the resulting Doppler heating is offset by polarization-gradient cooling. Qualitatively, this is because the choice of $|F=2\rangle \rightarrow |F'=2\rangle$ means that the AC Stark shifts of the three levels are the same sign at any given position. Selecting the potential energy maxima as the sites of optical pumping to the dark state requires the overall light to be blue-detuned; in doing so, the atoms in the bright state traverse the maximum potential energy difference and thus dissipate the most kinetic energy. A full quantitative explanation of the molasses force with respect to detuning can be found in Hänsch's paper. [ 9 ] | https://en.wikipedia.org/wiki/Gray_molasses |
Grazing-incidence small-angle scattering ( GISAS ) is a scattering technique used to study nanostructured surfaces and thin films. The scattered probe is either photons ( grazing-incidence small-angle X-ray scattering , GISAXS ) or neutrons ( grazing-incidence small-angle neutron scattering , GISANS ). GISAS combines the accessible length scales of small-angle scattering (SAS: SAXS or SANS ) and the surface sensitivity of grazing incidence diffraction (GID).
A typical application of GISAS is the characterisation of self-assembly and self-organization on the nanoscale in thin films. Systems studied by GISAS include quantum dot arrays, [ 1 ] growth instabilities formed during in-situ growth, [ 2 ] self-organized nanostructures in thin films of block copolymers , [ 3 ] silica mesophases, [ 4 ] [ 5 ] and nanoparticles . [ 6 ] [ 7 ]
GISAXS was introduced by Levine and Cohen [ 8 ] to study the dewetting of gold deposited on a glass surface. The technique was further developed by Naudon [ 9 ] and coworkers to study metal agglomerates on surfaces and in buried interfaces. [ 10 ] With the advent of nanoscience other applications evolved quickly, first in hard matter such as the characterization of quantum dots on semiconductor surfaces and the in-situ characterization of metal deposits on oxide surfaces. This was soon to be followed by soft matter systems such as ultrathin polymer films, [ 11 ] polymer blends, block copolymer films and other self-organized nanostructured thin films that have become indispensable for nanoscience and technology. Future challenges of GISAS may lie in biological applications, such as proteins , peptides , or viruses attached to surfaces or in lipid layers.
As a hybrid technique, GISAS combines concepts from transmission small-angle scattering (SAS), from grazing-incidence diffraction (GID), and from diffuse reflectometry. From SAS it uses the form factors and structure factors. From GID it uses the scattering geometry close to the critical angles of substrate and film, and the two-dimensional character of the scattering, giving rise to diffuse rods of scattering intensity perpendicular to the surface. With diffuse (off-specular) reflectometry it shares phenomena like the Yoneda/Vinyard peak at the critical angle of the sample, and the scattering theory, the distorted wave Born approximation (DWBA). [ 12 ] [ 13 ] [ 14 ] However, while diffuse reflectivity remains confined to the incident plane (the plane given by the incident beam and the surface normal), GISAS explores the whole scattering from the surface in all directions, typically utilizing an area detector. Thus GISAS gains access to a wider range of lateral and vertical structures and, in particular, is sensitive to the morphology and preferential alignment of nanoscale objects at the surface or inside the thin film.
As a particular consequence of the DWBA, the refraction of x-rays or neutrons always has to be taken into account in the case of thin film studies, [ 15 ] [ 16 ] because the scattering angles are small, often less than 1 deg. The refraction correction applies to the perpendicular component of the scattering vector with respect to the substrate while the parallel component is unaffected. Thus parallel scattering can often be interpreted within the kinematic theory of SAS, while refractive corrections apply to the scattering along perpendicular cuts of the scattering image, for instance along a scattering rod.
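As an illustration of this refraction correction, a commonly used textbook expression replaces the incident and exit angles by their refracted values inside the film, so that the perpendicular wavevector transfer becomes $q_{z,\mathrm{corr}} = k_0\bigl[\sqrt{\sin^2\alpha_\mathrm{f} - \sin^2\alpha_\mathrm{c}} + \sqrt{\sin^2\alpha_\mathrm{i} - \sin^2\alpha_\mathrm{c}}\bigr]$, while the in-plane components are left unchanged. The short Python sketch below implements this textbook form; the critical-angle value and function names are illustrative assumptions, not taken from this article.

```python
import numpy as np

def qz_refraction_corrected(alpha_i, alpha_f, alpha_c, wavelength):
    """Refraction-corrected perpendicular scattering vector (textbook DWBA form).

    alpha_i, alpha_f : incident and exit angles (rad), measured from the surface
    alpha_c          : critical angle of the film material (rad)
    wavelength       : same length units as the inverse of the returned q_z
    Only the component perpendicular to the surface is corrected; the
    in-plane components q_x, q_y are unaffected.
    """
    k0 = 2 * np.pi / wavelength
    kz_i = k0 * np.sqrt(np.sin(alpha_i) ** 2 - np.sin(alpha_c) ** 2 + 0j)
    kz_f = k0 * np.sqrt(np.sin(alpha_f) ** 2 - np.sin(alpha_c) ** 2 + 0j)
    return (kz_i + kz_f).real   # below the critical angle the wave is evanescent

if __name__ == "__main__":
    deg = np.pi / 180
    # Illustrative values: Cu K-alpha (1.54 A) and a polymer film with alpha_c ~ 0.16 deg.
    print(qz_refraction_corrected(0.4 * deg, 0.6 * deg, 0.16 * deg, 1.54))
```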
In the interpretation of GISAS images some complication arises in the scattering from low-Z films e.g. organic materials on silicon wafers, when the incident angle is in between the critical angles of the film and the substrate. In this case, the reflected beam from the substrate has a similar strength as the incident beam and thus the scattering from the reflected beam from the film structure can give rise to a doubling of scattering features in the perpendicular direction. This as well as interference between the scattering from the direct and the reflected beam can be fully accounted for by the DWBA scattering theory. [ 16 ]
These complications are often more than offset by the fact that the dynamic enhancement of the scattering intensity is significant. In combination with the straightforward scattering geometry, where all relevant information is contained in a single scattering image, in-situ and real-time experiments are facilitated. Specifically self-organization during MBE growth [ 2 ] and re-organization processes in block copolymer films under the influence of solvent vapor [ 3 ] have been characterized on the relevant timescales ranging from seconds to minutes. Ultimately the time resolution is limited by the x-ray flux on the samples necessary to collect an image and the read-out time of the area detector.
Dedicated or partially dedicated GISAXS beamlines exist at most synchrotron light sources (for instance the Advanced Light Source (ALS), Australian Synchrotron, APS , ELETTRA (Italy), Diamond (UK), ESRF , National Synchrotron Light Source II (NSLS-II), Pohang Light Source (South Korea), SOLEIL (France), Shanghai Synchrotron (PR China), and SSRL).
At neutron research facilities , GISANS is increasingly used, typically on small-angle (SANS) instruments or on reflectometers .
GISAS does not require any specific sample preparation other than thin film deposition techniques. Film thicknesses may range from a few nm to several 100 nm, and such thin films are still fully penetrated by the x-ray beam. The film surface, the film interior, as well as the substrate-film interface are all accessible. By varying the incidence angle the various contributions can be identified. | https://en.wikipedia.org/wiki/Grazing-incidence_small-angle_scattering |
A grazing lunar occultation (also lunar grazing occultation , lunar graze , or just graze ) is a lunar occultation in which the occulted star disappears and reappears intermittently at the edge of the Moon . [ 1 ] A team of many observers can combine graze observations to reconstruct an accurate profile of the terrain along the lunar limb .
Since graze paths rarely pass over established observatories , amateur astronomers use portable observing equipment and travel to sites along the shadow path limits. The goal is to report the UTC of each event as accurately as possible, and GPS disciplined devices are frequently used as the time-base.
Two methods are used to observe:
Such observations are useful for: | https://en.wikipedia.org/wiki/Grazing_lunar_occultation |
Grazing marsh is a British Isles term for flat, marshy grassland in polders . It consists of large grass fields separated by fresh or brackish ditches, and is often important for its wildlife.
Grazing marshes were created from medieval times by building sea walls (earth banks) across tidal mudflats and salt marsh to make polders (though the term "polder" is little used in Britain). Polders in Britain are mostly drained by gravity, rather than active pumping. The original tidal drainage channels were augmented by new ditches, and flap valves in the sea walls let water drain out at low tide and prevent the sea or tidal river from entering at high tide. Constructing polders in this way is called inning or reclaiming from the sea.
Grazing marshes have been made in most lowland estuaries in Britain, often leaving only the river channel and the lowest part of the estuary tidal. In a few cases (such as Newtown Harbour on the Isle of Wight , and Pagham Harbour in West Sussex ) the sea walls have been breached, and the estuaries have returned to a tidal state. Grazing marshes have also been made on low-lying open coasts.
Many grazing marshes were inned in stages, and the old sea walls (called counter walls ) may be found marooned far from the current sea wall. Land levels on either side of a counter wall often differ by several metres. Paradoxically, the lower side is the land inned earlier, because sediment continued to build up on the side that remained tidal.
Wintering wildfowl are characteristic of grazing marshes, often including large flocks of Eurasian wigeon , brent goose , white-fronted goose and Bewick's swan . Many of these birds are hunted by predators such as peregrine and marsh harrier .
In spring, waders such as common redshank , Eurasian curlew , snipe , and northern lapwing breed. [ 1 ]
The ditches often have a range of salinity , depending on how close to the sea wall they are. The more saline ditches host specialist brackish-water plants and animals. These include, for example, the rare brackish amphipod Gammarus insensibilis and sea club-rush ( Bolboschoenus maritimus ). Fresher ditches may support rare animals, such as the great silver water beetle ( Hydrophilus piceus ) and the great raft spider ( Dolomedes plantarius ), and a wide range of pondweeds ( Potamogeton and relatives).
The grassland vegetation usually has a fairly small number of species, but those present are often scarce elsewhere, such as sea arrowgrass ( Triglochin maritimum ), divided sedge ( Carex divisa ) and strawberry clover (Trifolium fragiferum) .
Many grazing marshes have been converted into arable land , often using pumped drainage to lower the water levels enough to grow crops, though most are used for grazing cattle. [ 2 ] The low ditch levels and agricultural runoff combine to remove much of the aquatic wildlife, although the arable fields may still be used by some wintering wildfowl .
Some areas of grazing marsh and other polder land have been used to recreate tidal habitats by a process of managed retreat .
Many of the larger areas of grazing marsh bear nature conservation designations, including Site of Special Scientific Interest , Special Protection Area , Special Area of Conservation and Ramsar Site . | https://en.wikipedia.org/wiki/Grazing_marsh |
Gražvydas Lukinavičius is a Lithuanian biochemist . His scientific interests and main area of research focus on the labeling of biomolecules and their visualization using super-resolution microscopy . He is a co-inventor of the DNA labeling technology known as Methyltransferase-Directed Transfer of Activated Groups (mTAG) and of the biocompatible, cell-permeable fluorophore silicon-rhodamine (SiR). Both inventions were commercialized. He studies labeling methods and applies them to the visualization of chromatin dynamics in living cells. [ 1 ]
He was born into the family of an electrician and a land development specialist. Lukinavičius finished secondary school in Jurbarkas .
Lukinavičius completed his bachelor's degree and master's degree in biochemistry at Vilnius University in 2000 and 2002, respectively. During this period he worked as a research assistant in Saulius Klimašauskas ' group, investigating conformational movements of the catalytic loop of DNA methyltransferase .
Later he became interested in S-Adenosyl methionine analogues which can serve as cofactors for methyltransferases . He collaborated with Elmar Weinhold from RWTH Aachen University , where he learned chemical synthesis, and received his PhD in biochemistry at Vilnius University , Lithuania, in September 2007. This led to the development of a new DNA labeling method, the Methyltransferase-Directed Transfer of Activated Groups (mTAG). [ 2 ] [ 3 ] This method was applied by several research groups for optical DNA mapping and for profiling epigenetic modifications.
After obtaining his PhD, he moved to the École polytechnique fédérale de Lausanne for postdoctoral research, where he continued working on protein labeling methods in the group of Kai Johnsson. He improved the SNAP-tag protein labelling technology by developing a new biocompatible fluorophore, silicon-rhodamine (SiR). [ 4 ] [ 5 ] [ 6 ] During this period, he began a collaboration with Stefan Hell to perform one of the first super-resolution microscopy experiments on living cells. [ 7 ]
In 2016, Stefan Hell invited Lukinavičius to the Department of NanoBiophotonics of the Max Planck Institute for Biophysical Chemistry in Göttingen . He has continued working on fluorescence labeling of biomolecules and started a Chromatin Labeling and Imaging group in 2018. [ 1 ]
His most-cited publications, according to Google Scholar are: [ 8 ] | https://en.wikipedia.org/wiki/Gražvydas_Lukinavičius |
Grease is a solid or semisolid lubricant formed as a dispersion of thickening agents in a liquid lubricant. Grease generally consists of a soap emulsified with mineral or vegetable oil .
A common feature of greases is that they possess high initial viscosities , which upon the application of shear, drop to give the effect of an oil-lubricated bearing of approximately the same viscosity as the base oil used in the grease. This change in viscosity is called shear thinning . Grease is sometimes used to describe lubricating materials that are simply soft solids or high viscosity liquids, but these materials do not exhibit the shear-thinning properties characteristic of the classical grease. For example, petroleum jellies such as Vaseline are not generally classified as greases.
Greases are applied to mechanisms that can be lubricated only infrequently and where a lubricating oil would not stay in position. They also act as sealants to prevent the ingress of water and incompressible materials. Grease-lubricated bearings have greater friction than oil-lubricated ones because of the high viscosity of the grease.
A true grease consists of an oil or other fluid lubricant that is mixed with a thickener, typically a soap , to form a solid or semisolid. [ 1 ] Greases are usually shear-thinning or pseudo-plastic fluids , which means that the viscosity of the fluid is reduced under shear stress . After sufficient force to shear the grease has been applied, the viscosity drops and approaches that of the base lubricant, such as mineral oil. This sudden drop in shear force means that grease is considered a plastic fluid , and the reduction of shear force with time makes it thixotropic . A few greases are rheotropic , meaning they become more viscous when worked. [ 2 ] Grease is often applied using a grease gun , which applies the grease to the part being lubricated under pressure, forcing the solid grease into the spaces in the part.
Soaps are the most common emulsifying agent used, and the selection of the type of soap is determined by the application. [ 3 ] Soaps include calcium stearate , sodium stearate , lithium stearate , as well as mixtures of these components. Fatty acids derivatives other than stearates are also used, especially lithium 12-hydroxystearate . The nature of the soaps influences the temperature resistance (relating to the viscosity), water resistance, and chemical stability of the resulting grease. Calcium sulphonates and polyureas are increasingly common grease thickeners not based on metallic soaps. [ 4 ] [ 5 ]
Powdered solids may also be used as thickeners, especially as absorbent clays like bentonite . Fatty oil-based greases have also been prepared with other thickeners, such as tar , graphite , or mica , which also increase the durability of the grease. Silicone greases are generally thickened with silica .
Lithium-based greases are the most commonly used; sodium and lithium-based greases have a higher melting point ( dropping point ) than calcium-based greases but are not resistant to the action of water . Lithium-based grease has a dropping point at 190 to 220 °C (374 to 428 °F). However, the maximum usable temperature for lithium-based grease is 120 °C.
The amount of grease in a sample can be determined in a laboratory by extraction with a solvent followed by e.g. gravimetric determination. [ 6 ]
Some greases are labeled "EP", which indicates " extreme pressure ". Under high pressure or shock loading, normal grease can be compressed to the extent that the greased parts come into physical contact, causing friction and wear. EP greases have increased resistance to film breakdown, form sacrificial coatings on the metal surface to protect if the film does break down, or include solid lubricants such as graphite , molybdenum disulfide or hexagonal boron nitride (hBN) to provide protection even without any grease remaining. [ 3 ]
Solid additives such as copper or ceramic powder (most often hBN) are added to some greases for static high pressure and/or high temperature applications, or where corrosion could prevent dis-assembly of components later in their service life. These compounds work as a release agent . [ 7 ] [ 8 ] Solid additives cannot be used in bearings because of tight tolerances. Solid additives will cause increased wear in bearings. [ citation needed ]
Grease from the early Egyptian or Roman eras is thought to have been prepared by combining lime with olive oil . The lime saponifies some of the triglyceride that comprises oil to give a calcium grease. In the middle of the 19th century, soaps were intentionally added as thickeners to oils. [ 9 ] Over the centuries, all manner of materials have been employed as greases. For example, black slugs Arion ater were used as axle -grease to lubricate wooden axle-trees or carts in Sweden. [ 10 ]
Jointly developed by ASTM International , the National Lubricating Grease Institute (NLGI) and SAE International , standard ASTM D4950 “standard classification and specification for automotive service greases” was first published in 1989 by ASTM International. It categorizes greases suitable for the lubrication of chassis components and wheel bearings of vehicles, based on performance requirements, using codes adopted from the NLGI's “chassis and wheel bearing service classification system” :
A given performance category may include greases of different consistencies. [ 11 ]
The measure of the consistency of grease is commonly expressed by its NLGI consistency number .
The main elements of standard ASTM D4950 and NLGI's consistency classification are reproduced and described in standard SAE J310 “automotive lubricating greases” published by SAE International.
Standard ISO 6743-9 “lubricants, industrial oils and related products (class L) — classification — part 9: family X (greases)” , first released in 1987 by the International Organization for Standardization , establishes a detailed classification of greases used for the lubrication of equipment, components of machines, vehicles, etc. It assigns a single multi-part code to each grease based on its operational properties (including temperature range, effects of water, load, etc.) and its NLGI consistency number. [ 12 ]
Silicone grease is based on a silicone oil , usually thickened with amorphous fumed silica .
Fluoroether-based greases are fluoropolymers containing C-O-C (ether) linkages with fluorine (F) bonded to the carbon. They are more flexible and are often used in demanding environments due to their inertness. Fomblin by Solvay Solexis and Krytox by DuPont are prominent examples.
Apiezon, silicone-based, and fluoroether-based greases are all used commonly in laboratories for lubricating stopcocks and ground glass joints. The grease helps to prevent joints from "freezing", as well as ensuring high vacuum systems are properly sealed. Apiezon or similar hydrocarbon based greases are the cheapest, and most suitable for high vacuum applications. However, they dissolve in many organic solvents . This quality makes clean-up with pentane or hexanes trivial, but also easily leads to contamination of reaction mixtures.
Silicone-based greases are cheaper than fluoroether-based greases. They are relatively inert and generally do not affect reactions, though reaction mixtures often get contaminated (detected through NMR near δ 0 [ 13 ] ). Silicone-based greases are not easily removed with solvent, but they are removed efficiently by soaking in a base bath.
Fluoroether-based greases are inert to many substances including solvents, acids , bases , and oxidizers . They are, however, expensive, and are not easily cleaned away.
Food-grade greases are those greases that may come in contact with food and as such are required to be safe to digest. Food-grade lubricant base oils are generally low-sulfur petrochemical oils, which are less easily oxidized and emulsified; poly-α-olefin base oils are also commonly used. The United States Department of Agriculture (USDA) has three food-grade designations: H1, H2 and H3. H1 lubricants are food-grade lubricants used in food-processing environments where there is the possibility of incidental food contact. H2 lubricants are industrial lubricants used on equipment and machine parts in locations with no possibility of contact. H3 lubricants are food-grade lubricants, typically edible oils, used to prevent rust on hooks, trolleys and similar equipment. [ citation needed ]
In some cases, the lubrication and high viscosity of a grease are desired in situations where non-toxic, non-oil based materials are required. Carboxymethyl cellulose , or CMC, is one popular material used to create a water-based analog of greases. CMC serves to both thicken the solution and add a lubricating effect, and often silicone-based lubricants are added for additional lubrication. The most familiar example of this type of lubricant, used as a surgical and personal lubricant , is K-Y Jelly .
Cork grease is a lubricant used to lubricate cork, for example in musical wind instruments. It is usually applied using small lip-balm /lip-stick like applicators. [ 14 ]
Published literature on axle grease is sparse. One of the rare electronically accessible sources describes the preparation of axle grease by first producing a calcium soap by heating a mixture of "rosin oil" with slaked lime . The resulting thick paste is diluted with additional rosin oil. The author disparages the further addition of talc as a thickener. [ 15 ]
| https://en.wikipedia.org/wiki/Grease_(lubricant) |
The great-circle distance , orthodromic distance , or spherical distance is the distance between two points on a sphere , measured along the great-circle arc between them. This arc is the shortest path between the two points on the surface of the sphere. (By comparison, the shortest path passing through the sphere's interior is the chord between the points.)
On a curved surface , the concept of straight lines is replaced by a more general concept of geodesics , curves which are locally straight with respect to the surface. Geodesics on the sphere are great circles, circles whose center coincides with the center of the sphere.
Any two distinct points on a sphere that are not antipodal (diametrically opposite) both lie on a unique great circle, which the points separate into two arcs; the length of the shorter arc is the great-circle distance between the points. This arc length is proportional to the central angle between the points, which if measured in radians can be scaled up by the sphere's radius to obtain the arc length. Two antipodal points both lie on infinitely many great circles, each of which they divide into two arcs of length π times the radius.
The determination of the great-circle distance is part of the more general problem of great-circle navigation , which also computes the azimuths at the end points and intermediate way-points. Because the Earth is nearly spherical , great-circle distance formulas applied to longitude and geodetic latitude of points on Earth are accurate to within about 0.5%. [ 1 ]
Let $\lambda_1, \phi_1$ and $\lambda_2, \phi_2$ be the geographical longitude and latitude of two points 1 and 2, and $\Delta\lambda, \Delta\phi$ be their absolute differences; then $\Delta\sigma$, the central angle between them, is given by the spherical law of cosines if one of the poles is used as an auxiliary third point on the sphere: [ 2 ]
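In its standard form (reproduced here for completeness, as this is the usual spherical law of cosines for the central angle):

$$\Delta\sigma = \arccos\bigl(\sin\phi_1 \sin\phi_2 + \cos\phi_1 \cos\phi_2 \cos\Delta\lambda\bigr).$$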
The problem is normally expressed in terms of finding the central angle $\Delta\sigma$. Given this angle in radians, the actual arc length d on a sphere of radius r can be trivially computed as $d = r\,\Delta\sigma$.
The central angle $\Delta\sigma$ is related to the chord length $\Delta\sigma_{\text{c}}$ of the unit sphere by $\Delta\sigma = 2\arcsin\!\left(\tfrac{1}{2}\Delta\sigma_{\text{c}}\right)$.
For the short-distance approximation ($|\Delta\sigma_{\text{c}}| \ll 1$), $\Delta\sigma \approx \Delta\sigma_{\text{c}} + \tfrac{1}{24}\Delta\sigma_{\text{c}}^{3}$.
On computer systems with low floating point precision, the spherical law of cosines formula can have large rounding errors if the distance is small (if the two points are a kilometer apart on the surface of the Earth, the cosine of the central angle is near 0.99999999). For modern 64-bit floating-point numbers , the spherical law of cosines formula, given above, does not have serious rounding errors for distances larger than a few meters on the surface of the Earth. [ 3 ] The haversine formula is numerically better-conditioned for small distances by using the chord-length relation: [ 4 ]
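The standard haversine expression for the central angle (reproduced here for completeness) is

$$\Delta\sigma = \operatorname{archav}\bigl(\operatorname{hav}\Delta\phi + \cos\phi_1 \cos\phi_2 \operatorname{hav}\Delta\lambda\bigr) = 2\arcsin\sqrt{\sin^2\!\frac{\Delta\phi}{2} + \cos\phi_1 \cos\phi_2 \sin^2\!\frac{\Delta\lambda}{2}}.$$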
Historically, the use of this formula was simplified by the availability of tables for the haversine function: $\operatorname{hav}\theta = \sin^2{\tfrac{\theta}{2}}$ and $\operatorname{archav} x = 2\arcsin{\sqrt{x}}$.
The following shows the equivalent formula expressing the chord length explicitly:
where $\phi_{\text{m}} = \tfrac{1}{2}(\phi_1 + \phi_2)$.
Although this formula is accurate for most distances on a sphere, it too suffers from rounding errors for the special (and somewhat unusual) case of antipodal points. A formula that is accurate for all distances is the following special case of the Vincenty formula for an ellipsoid with equal major and minor axes: [ 5 ]
where $\operatorname{atan2}(y, x)$ is the two-argument arctangent . Using atan2 ensures that the correct quadrant is chosen.
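A short, self-contained Python sketch of both well-conditioned variants (the haversine formula and the atan2 special case of Vincenty's formula); the function names are illustrative, and the radius used is the mean Earth radius quoted below:

```python
from math import radians, sin, cos, sqrt, asin, atan2

R_EARTH = 6371.009  # mean Earth radius in km (WGS84 mean, see below)

def haversine_distance(lat1, lon1, lat2, lon2, r=R_EARTH):
    """Great-circle distance via the haversine formula (well-conditioned for small distances)."""
    p1, p2 = radians(lat1), radians(lat2)
    dphi, dlmb = radians(lat2 - lat1), radians(lon2 - lon1)
    h = sin(dphi / 2) ** 2 + cos(p1) * cos(p2) * sin(dlmb / 2) ** 2
    return r * 2 * asin(sqrt(h))

def vincenty_sphere_distance(lat1, lon1, lat2, lon2, r=R_EARTH):
    """Great-circle distance via the atan2 form, accurate even for near-antipodal points."""
    p1, p2 = radians(lat1), radians(lat2)
    dlmb = radians(lon2 - lon1)
    y = sqrt((cos(p2) * sin(dlmb)) ** 2 +
             (cos(p1) * sin(p2) - sin(p1) * cos(p2) * cos(dlmb)) ** 2)
    x = sin(p1) * sin(p2) + cos(p1) * cos(p2) * cos(dlmb)
    return r * atan2(y, x)

if __name__ == "__main__":
    # Paris (48.8566 N, 2.3522 E) to New York (40.7128 N, -74.0060 E);
    # both variants print the same value, roughly 5.8 thousand km.
    print(haversine_distance(48.8566, 2.3522, 40.7128, -74.0060))
    print(vincenty_sphere_distance(48.8566, 2.3522, 40.7128, -74.0060))
```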
Another representation of similar formulas, but using normal vectors instead of latitude and longitude to describe the positions, is found by means of 3D vector algebra , using the dot product , cross product , or a combination: [ 6 ]
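The standard identities (reproduced here for completeness) are $\cos\Delta\sigma = \mathbf{n}_1 \cdot \mathbf{n}_2$ and $\sin\Delta\sigma = \left|\mathbf{n}_1 \times \mathbf{n}_2\right|$, combined as

$$\Delta\sigma = \operatorname{atan2}\bigl(\left|\mathbf{n}_1 \times \mathbf{n}_2\right|,\ \mathbf{n}_1 \cdot \mathbf{n}_2\bigr),$$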
where $\mathbf{n}_1$ and $\mathbf{n}_2$ are the normals to the sphere at the two positions 1 and 2. Similarly to the equations above based on latitude and longitude, the expression based on arctan is the only one that is well-conditioned for all angles . The expression based on arctan requires the magnitude of the cross product over the dot product.
A line through three-dimensional space between points of interest on a spherical Earth is the chord of the great circle between the points. The central angle between the two points can be determined from the chord length. The great circle distance is proportional to the central angle.
The great circle chord length, $\Delta\sigma_{\text{c}}$, may be calculated as follows for the corresponding unit sphere, by means of Cartesian subtraction :
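Writing each point in unit-sphere Cartesian coordinates $(\cos\phi\cos\lambda,\ \cos\phi\sin\lambda,\ \sin\phi)$, the chord is (standard form, stated here for completeness)

$$\Delta\sigma_{\text{c}} = \sqrt{\left(\cos\phi_2\cos\lambda_2 - \cos\phi_1\cos\lambda_1\right)^2 + \left(\cos\phi_2\sin\lambda_2 - \cos\phi_1\sin\lambda_1\right)^2 + \left(\sin\phi_2 - \sin\phi_1\right)^2}.$$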
Substituting $\lambda_1 = -\tfrac{1}{2}\Delta\lambda$ and $\lambda_2 = \tfrac{1}{2}\Delta\lambda$, this formula can be algebraically manipulated to the form shown above in § Computational formulae .
The shape of the Earth closely resembles a flattened sphere (a spheroid ) with equatorial radius $a$ of 6378.137 km; distance $b$ from the center of the spheroid to each pole is 6356.7523142 km. When calculating the length of a short north-south line at the equator, the circle that best approximates that line has a radius of $\tfrac{b^2}{a}$ (which equals the meridian's semi-latus rectum ), or 6335.439 km, while the spheroid at the poles is best approximated by a sphere of radius $\tfrac{a^2}{b}$, or 6399.594 km, a 1% difference. So long as a spherical Earth is assumed, any single formula for distance on the Earth is only guaranteed correct within 0.5% (though better accuracy is possible if the formula is only intended to apply to a limited area). Using the mean Earth radius , $R_1 = \tfrac{1}{3}(2a + b) \approx 6371.009\ \text{km}$ (for the WGS84 ellipsoid) means that in the limit of small flattening, the mean square relative error in the estimates for distance is minimized. [ 7 ]
For distances smaller than 500 kilometers and outside of the poles, a Euclidean approximation of an ellipsoidal Earth ( Federal Communications Commission's (FCC)'s formula ) is both simpler and more accurate (to 0.1%). [ 8 ] | https://en.wikipedia.org/wiki/Great-circle_distance |
In geometry , the great 120-cell or great polydodecahedron is a regular star 4-polytope with Schläfli symbol {5,5/2,5}. It is one of 10 regular Schläfli-Hess polytopes . It is one of the two such polytopes that is self-dual.
It has the same edge arrangement as the 600-cell , icosahedral 120-cell as well as the same face arrangement as the grand 120-cell .
Due to its self-duality, it does not have a good three-dimensional analogue, but (like all other star polyhedra and polychora) is analogous to the two-dimensional pentagram .
| https://en.wikipedia.org/wiki/Great_120-cell |
In the geometry of hyperbolic 4-space , the great 120-cell honeycomb is one of four regular star- honeycombs . With Schläfli symbol {5,5/2,5,3}, it has three great 120-cells around each face. It is dual to the order-5 icosahedral 120-cell honeycomb .
It can be seen as a greatening of the 120-cell honeycomb , and is thus analogous to the three-dimensional great dodecahedron {5,5/2} and four-dimensional great 120-cell {5,5/2,5}. It has density 10.
| https://en.wikipedia.org/wiki/Great_120-cell_honeycomb |
The Great American Biotic Interchange (commonly abbreviated as GABI ), also known as the Great American Interchange and the Great American Faunal Interchange , was an important late Cenozoic paleozoogeographic biotic interchange event in which land and freshwater fauna migrated from North America to South America via Central America and vice versa, as the volcanic Isthmus of Panama rose up from the sea floor, forming a land bridge between the previously separated continents . Although earlier dispersals had occurred, probably over water, the migration accelerated dramatically about 2.7 million years ( Ma ) ago during the Piacenzian age. [ 1 ] It resulted in the joining of the Neotropic (roughly South American) and Nearctic (roughly North American) biogeographic realms definitively to form the Americas . The interchange is visible from observation of both biostratigraphy and nature ( neontology ). Its most dramatic effect is on the zoogeography of mammals , but it also gave an opportunity for reptiles , amphibians , arthropods , weak-flying or flightless birds , and even freshwater fish to migrate. Coastal and marine biota were affected in the opposite manner; the formation of the Central American Isthmus caused what has been termed the Great American Schism, with significant diversification and extinction occurring as a result of the isolation of the Caribbean from the Pacific. [ 2 ]
The occurrence of the interchange was first discussed in 1876 by the "father of biogeography ", Alfred Russel Wallace . [ 3 ] [ 4 ] Wallace had spent five years exploring and collecting specimens in the Amazon basin . Others who made significant contributions to understanding the event in the century that followed include Florentino Ameghino , W. D. Matthew , W. B. Scott , Bryan Patterson , George Gaylord Simpson and S. David Webb. [ 5 ] The Pliocene timing of the formation of the connection between North and South America was discussed in 1910 by Henry Fairfield Osborn . [ 6 ]
Analogous interchanges occurred earlier in the Cenozoic, when the formerly isolated land masses of India and Africa made contact with Eurasia about 56 and 30 Ma ago, respectively. [ 7 ] [ 8 ] [ 9 ] [ 10 ] [ 11 ] [ 12 ] [ 13 ] [ 14 ] [ 15 ] [ 16 ] [ 17 ] [ excessive citations ]
After the late Mesozoic breakup of Gondwana , South America spent most of the Cenozoic era as an island continent whose "splendid isolation" allowed its fauna to evolve into many forms found nowhere else on Earth, most of which are now extinct . [ 18 ] Its endemic mammals initially consisted primarily of metatherians ( marsupials and sparassodonts ), xenarthrans , and a diverse group of native ungulates known as the Meridiungulata : notoungulates (the "southern ungulates"), litopterns , astrapotheres , pyrotheres and xenungulates . [ n 1 ] [ n 2 ] A few non- therian mammals – monotremes , gondwanatheres , dryolestids and possibly cimolodont multituberculates – were also present in the Paleocene ; while none of these diversified significantly and most lineages did not survive long, forms like Necrolestes and Patagonia remained as recently as the Miocene . [ 25 ]
Marsupials appear to have traveled via Gondwanan land connections from South America through Antarctica to Australia in the late Cretaceous or early Tertiary . [ 26 ] [ n 3 ] One living South American marsupial, the monito del monte , has been shown to be more closely related to Australian marsupials than to other South American marsupials ( Ameridelphia ); however, it is the most basal australidelphian, [ n 4 ] meaning that this superorder arose in South America and then dispersed to Australia after the monito del monte split off. [ 26 ] Monotrematum , a 61-Ma-old platypus-like monotreme fossil from Patagonia , may represent an Australian immigrant. [ 27 ] [ 28 ] Paleognath birds ( ratites and South American tinamous ) may have made a similar migration around the same time to Australia and New Zealand . [ 29 ] [ 30 ] Other taxa that may have dispersed by the same route (if not by flying or oceanic dispersal ) are parrots , chelid turtles, and the extinct meiolaniid turtles.
Marsupials remaining in South America included didelphimorphs ( opossums ), paucituberculatans ( shrew opossums ) and microbiotheres (monitos del monte). Larger predatory relatives of these also existed, such as the borhyaenids and the saber-toothed Thylacosmilus ; these were sparassodont metatherians, which are no longer considered to be true marsupials. [ 31 ] As the large carnivorous metatherians declined, and before the arrival of most types of carnivorans , predatory opossums such as Thylophorops temporarily attained larger size (about 7 kg).
Metatherians and a few xenarthran armadillos, such as Macroeuphractus , were the only South American mammals to specialize as carnivores ; their relative inefficiency created openings for nonmammalian predators to play more prominent roles than usual (similar to the situation in Australia ). Sparassodonts and giant opossums shared the ecological niches for large predators with fearsome flightless "terror birds" ( phorusrhacids ), whose closest living relatives are the seriemas . [ 32 ] [ 33 ] North America also had large terrestrial predatory birds during the early Cenozoic (the related bathornithids ), but they died out before the GABI in the Early Miocene , about 20 million years ago. Through the skies over late Miocene South America (6 Ma ago) soared one of the largest flying birds known, Argentavis , a teratorn that had a wing span of 6 m or more, and which may have subsisted in part on the leftovers of Thylacosmilus kills. [ 34 ] Terrestrial sebecid ( metasuchian ) crocodyliforms with ziphodont teeth [ n 5 ] were also present at least through the middle Miocene [ 35 ] [ 36 ] [ 37 ] [ 38 ] and maybe to the Miocene-Pliocene boundary. [ 39 ] Some of South America's aquatic crocodilians, such as Gryposuchus , Mourasuchus and Purussaurus , reached monstrous sizes, with lengths up to 12 m (comparable to the largest Mesozoic crocodyliforms). They shared their habitat with one of the largest turtles of all time, the 3.3 m (11 ft) Stupendemys .
Xenarthrans are a curious group of mammals that developed morphological adaptations for specialized diets very early in their history. [ 40 ] In addition to those extant today ( armadillos , anteaters , and tree sloths ), a great diversity of larger types was present, including pampatheres , the ankylosaur -like glyptodonts , predatory euphractines , various ground sloths , some of which reached the size of elephants (e.g. Megatherium ), and even semiaquatic to aquatic marine sloths . [ 41 ] [ 42 ]
The notoungulates and litopterns had many strange forms, such as Macrauchenia , a camel-like litoptern with a small proboscis . They also produced a number of familiar-looking body types that represent examples of parallel or convergent evolution : one-toed Thoatherium had legs like those of a horse, Pachyrukhos resembled a rabbit, Homalodotherium was a semibipedal, clawed browser like a chalicothere , and horned Trigodon looked like a rhinoceros . Both groups started evolving in the Lower Paleocene, possibly from condylarth stock, diversified, dwindled before the great interchange, and went extinct at the end of the Pleistocene. The pyrotheres and astrapotheres were also strange, but were less diverse and disappeared earlier, well before the interchange.
The North American fauna was a typical boreoeutherian one, supplemented with proboscideans of Afrotherian origin.
The invasions of South America started about 40 Ma ago (middle Eocene ), when caviomorph rodents arrived in South America. [ 43 ] [ 44 ] [ 45 ] Their subsequent vigorous diversification displaced some of South America's small marsupials and gave rise to – among others – capybaras , chinchillas , viscachas , and New World porcupines . The independent development of spines by New and Old World porcupines is another example of parallel evolution. This invasion most likely came from Africa. [ 46 ] [ 47 ] The crossing from West Africa to the northeast corner of Brazil was much shorter then, due to continental drift , and may have been aided by island hopping (e.g. via St. Paul's Rocks , if they were an inhabitable island at the time) and westward oceanic currents. [ 48 ] Crossings of the ocean were accomplished when at least one fertilised female (more commonly a group of animals) accidentally floated over on driftwood or mangrove rafts. Hutias (Capromyidae) would subsequently colonize the West Indies as far as the Bahamas , [ 49 ] [ 50 ] reaching the Greater Antilles by the early Oligocene. [ 51 ] Over time, some caviomorph rodents evolved into larger forms that competed with some of the native South American ungulates, which may have contributed to the gradual loss of diversity suffered by the latter after the early Oligocene. [ 18 ] By the Pliocene, some caviomorphs (e.g., Josephoartigasia monesi ) attained sizes on the order of 500 kg (1,100 lb) or larger. [ 52 ]
Later (by 36 Ma ago), [ 53 ] primates followed, again from Africa in a fashion similar to that of the rodents. [ 43 ] Primates capable of migrating had to be small. Like caviomorph rodents, South American monkeys are believed to be a clade (i.e., monophyletic ). However, although they would have had little effective competition, all extant New World monkeys appear to derive from a radiation that occurred long afterwards, in the Early Miocene about 18 Ma ago. [ 43 ] Subsequent to this, monkeys apparently most closely related to titis island-hopped to Cuba , Hispaniola , and Jamaica . Additionally, a find of seven 21-Ma-old apparent cebid teeth in Panama suggests that South American monkeys had dispersed across the seaway separating Central and South America by that early date. However, all extant Central American monkeys are believed to be descended from much later migrants, and there is as yet no evidence that these early Central American cebids established an extensive or long-lasting population, perhaps due to a shortage of suitable rainforest habitat at the time. [ 54 ] [ 55 ]
Fossil evidence presented in 2020 indicates a second lineage of African monkeys also rafted to and at least briefly colonized South America. Ucayalipithecus remains dating from the Early Oligocene of Amazonian Peru are, by morphological analysis, deeply nested within the family Parapithecidae of the Afro-Arabian radiation of parapithecoid simians , with dental features markedly different from those of platyrrhines . The Old World members of this group are thought to have become extinct by the Late Oligocene. Qatrania wingi of lower Oligocene Fayum deposits is considered the closest known relative of Ucayalipithecus . [ 56 ] [ 57 ]
Remarkably, the descendants of those few bedraggled " waifs " that crawled ashore from their rafts of African flotsam in the Eocene now constitute more than twice as many of South America's species as the descendants of all the flightless mammals previously resident on the continent ( 372 caviomorph and monkey species versus 136 marsupial and xenarthran species ). [ n 6 ]
Many of South America's bats may have arrived from Africa during roughly the same period, possibly with the aid of intervening islands, although by flying rather than floating. Noctilionoid bats ancestral to those in the neotropical families Furipteridae , Mormoopidae , Noctilionidae , Phyllostomidae , and Thyropteridae are thought to have reached South America from Africa in the Eocene, [ 59 ] possibly via Antarctica. [ 60 ] Similarly, free-tailed bats (Molossidae) may have reached South America from Africa in as many as five dispersals, starting in the Eocene. [ 59 ] Emballonurids may have also reached South America from Africa about 30 Ma ago, based on molecular evidence. [ 59 ] [ 61 ] Vespertilionids may have arrived in five dispersals from North America and one from Africa. [ 59 ] Natalids are thought to have arrived during the Pliocene from North America via the Caribbean. [ 59 ]
Tortoises also arrived in South America in the Oligocene. They were long thought to have come from North America, but a recent comparative genetic analysis concludes that the South American genus Chelonoidis (formerly part of Geochelone ) is actually most closely related to African hingeback tortoises . [ n 7 ] [ 62 ] Tortoises are aided in oceanic dispersal by their ability to float with their heads up, and to survive up to six months without food or water. [ 62 ] South American tortoises then went on to colonize the West Indies [ 63 ] and Galápagos Islands (the Galápagos tortoise ). A number of clades of American geckos seem to have rafted over from Africa during both the Paleogene and Neogene. [ 64 ] Skinks of the related genera Mabuya and Trachylepis apparently dispersed across the Atlantic from Africa to South America and Fernando de Noronha , respectively, during the last 9 Ma. [ 65 ] Surprisingly, South America's burrowing amphisbaenians [ 66 ] and blind snakes [ 67 ] also appear to have rafted from Africa, as does the hoatzin , a weak-flying bird of South American rainforests. [ 68 ]
The earliest traditionally recognized mammalian arrival from North America was a procyonid that island-hopped from Central America before the Isthmus of Panama land bridge formed, around 7.3 Ma ago. [ 69 ] This was South America's first eutherian carnivore. South American procyonids then diversified into forms now extinct (e.g. the "dog-coati" Cyonasua , which evolved into the bear-like Chapalmalania ). However, all extant procyonid genera appear to have originated in North America. [ 70 ] The first South American procyonids may have contributed to the extinction of sebecid crocodilians by eating their eggs, but this suggestion has not been universally accepted as plausible. [ n 8 ] [ 38 ] The procyonids were followed to South America by rafting or island-hopping hog-nosed skunks [ 71 ] and sigmodontine rodents . [ 72 ] [ 73 ] [ 74 ] [ 75 ] The oryzomyine tribe of sigmodontine rodents went on to colonize the Lesser Antilles as far as Anguilla .
One group has proposed that a number of large Nearctic herbivores actually reached South America as early as 9–10 Ma ago, in the late Miocene, via an early incomplete land bridge. These claims, based on fossils recovered from rivers in southwestern Peru, have been viewed with caution by other investigators, due to the lack of corroborating finds from other sites and the fact that almost all of the specimens in question have been collected as float in rivers with little to no stratigraphic control. [ 76 ] These taxa are a gomphothere ( Amahuacatherium ), [ 77 ] [ 78 ] peccaries ( Sylvochoerus and Waldochoerus ), [ 79 ] tapirs and Surameryx , a palaeomerycid (from a family probably ancestral to cervids). [ 80 ] The identification of Amahuacatherium and the dating of its site is controversial; it is regarded by a number of investigators as a misinterpreted fossil of a different gomphothere, Notiomastodon , and biostratigraphy dates the site to the Pleistocene. [ 81 ] [ 82 ] [ 83 ] The early date proposed for Surameryx has also been met with skepticism. [ 84 ]
Megalonychid and mylodontid ground sloths island-hopped to North America by 9 Ma ago. [ 72 ] A basal group of sloths [ 85 ] had colonized the Antilles previously, by the early Miocene . [ 86 ] In contrast, megatheriid and nothrotheriid ground sloths did not migrate north until the formation of the isthmus. Sloths first appear in Florida after a major sea level lowstand at the terminus of the Miocene. [ 87 ] Terror birds may have also island-hopped to North America as early as 5 Ma ago. [ 88 ]
The Caribbean Islands were populated primarily by species from South America, due to the prevailing direction of oceanic currents, rather than to a competition between North and South American forms. [ 49 ] [ 50 ] Except in the case of Jamaica, oryzomyine rodents of North American origin were able to enter the region only after invading South America.
The formation of the Isthmus of Panama led to the last and most conspicuous wave, the Great American Biotic Interchange (GABI), starting around 2.7 Ma ago. This included the immigration into South America of North American ungulates (including camelids , tapirs , deer and horses ), proboscids ( gomphotheres ), carnivorans (including felids such as cougars , jaguars and saber-toothed cats , canids , mustelids , procyonids and bears ) and a number of types of rodents . [ n 9 ] The larger members of the reverse migration were ground sloths , terror birds , glyptodonts , pampatheres , capybaras , and the notoungulate Mixotoxodon (the only South American ungulate known to have invaded Central America).
In general, the initial net migration was symmetrical. Later on, however, the Neotropic species proved far less successful than the Nearctic. This difference in fortunes was manifested in several ways. Northwardly migrating animals often were not able to compete for resources as well as the North American species already occupying the same ecological niches; those that did become established were not able to diversify much, and in some cases did not survive for long. [ 89 ] Southwardly migrating Nearctic species established themselves in larger numbers and diversified considerably more, [ 89 ] and are thought to have caused the extinction of a large proportion of the South American fauna. [ 71 ] [ 90 ] [ 91 ] (No extinctions in North America are plainly linked to South American immigrants. [ n 10 ] ) Native South American ungulates did poorly, with only a handful of genera withstanding the northern onslaught. (Several of the largest forms, macraucheniids and toxodontids , have long been recognized to have survived to the end of the Pleistocene. Recent fossil finds indicate that one species of the horse-like proterotheriid litopterns did, as well. [ 93 ] The notoungulate mesotheriids and hegetotheriids also managed to hold on at least part way through the Pleistocene.) [A] South America's small marsupials , though, survived in large numbers, while the primitive -looking xenarthrans proved to be surprisingly competitive and became the most successful invaders of North America. The African immigrants, the caviomorph rodents and platyrrhine monkeys, were less impacted by the interchange than most of South America's 'old-timers', although the caviomorphs suffered a significant loss of diversity, [ n 11 ] [ n 12 ] including the elimination of the largest forms (e.g. the dinomyids ). With the exception of the North American porcupine and several extinct porcupines and capybaras, however, they did not migrate past Central America. [ n 13 ]
Due in large part to the continued success of the xenarthrans, one area of South American ecospace the Nearctic invaders were unable to dominate was the niches for megaherbivores. [ 95 ] Before 12,000 years ago, South America was home to about 25 species of herbivores weighing more than 1,000 kg (2,200 lb), consisting of Neotropic ground sloths, glyptodonts, and toxodontids, as well as gomphotheres and camelids of Nearctic origin. [ n 14 ] Native South American forms made up about 75% of these species. However, none of these megaherbivores has survived.
Armadillos, opossums and porcupines are present in North America today because of the Great American Interchange. Opossums and porcupines were among the most successful northward migrants, reaching as far as Canada and Alaska , respectively. Most major groups of xenarthrans were present in North America until the end-Pleistocene Quaternary extinction event (as a result of at least eight successful invasions of temperate North America, and at least six more invasions of Central America only). Among the megafauna , ground sloths were notably successful emigrants; four different lineages invaded North America. A megalonychid representative, Megalonyx , spread as far north as the Yukon [ 97 ] and Alaska, [ 98 ] and might well have invaded Eurasia had a suitable habitat corridor across Beringia been present.
Generally speaking, however, the dispersal and subsequent explosive adaptive radiation of sigmodontine rodents throughout South America (leading to over 80 currently recognized genera ) was vastly more successful (both spatially and by number of species) than any northward migration of South American mammals. Other examples of North American mammal groups that diversified conspicuously in South America include canids and cervids, both of which currently have three or four genera in North America, two or three in Central America, and six in South America. [ n 15 ] [ n 16 ] Although members of Canis (specifically, coyotes ) currently range only as far south as Panama, [ n 17 ] South America still has more extant genera of canids than any other continent. [ n 15 ]
The effect of formation of the isthmus on the marine biota of the area was the inverse of its effect on terrestrial organisms, a development that has been termed the "Great American Schism". The connection between the east Pacific Ocean and the Caribbean (the Central American Seaway ) was severed, setting now-separated populations on divergent evolutionary paths. [ 2 ] Caribbean species also had to adapt to an environment of lower productivity after the inflow of nutrient-rich water of deep Pacific origin was blocked. [ 102 ] The Pacific coast of South America cooled as the input of warm water from the Caribbean was cut off. This trend is thought to have caused the extinction of the marine sloths of the area. [ 103 ]
During the last 7 Ma, South America's terrestrial predator guild has changed from one composed almost entirely of nonplacental mammals ( metatherians ), birds , and reptiles to one dominated by immigrant placental carnivorans (with a few small marsupial and avian predators like didelphine opossums and seriemas ). It was originally thought that the native South American predator guild, including sparassodonts , carnivorous opossums like Thylophorops and Hyperdidelphys , armadillos such as Macroeuphractus , terror birds , and teratorns , as well as early-arriving immigrant Cyonasua -group procyonids , were driven to extinction during the GABI by competitive exclusion from immigrating placental carnivorans , and that this turnover was abrupt. [ 104 ] [ 105 ] However, the turnover of South America's predator guild was more complex, with competition only playing a limited role.
In the case of sparassodonts and carnivorans, which has been the most heavily studied, little evidence shows that sparassodonts even encountered their hypothesized placental competitors. [ 106 ] [ 107 ] [ 108 ] Many supposed Pliocene records of South American carnivorans have turned out to be misidentified or misdated. [ 109 ] [ 106 ] Sparassodonts appear to have been declining in diversity since the middle Miocene , with many of the niches once occupied by small sparassodonts being increasingly occupied by carnivorous opossums, [ 110 ] [ 111 ] [ 112 ] [ 113 ] [ 114 ] which reached sizes of up to roughly 8 kg (~17 lbs). [ 111 ] Whether sparassodonts competed with carnivorous opossums or whether opossums began occupying sparassodont niches through passive replacement is still debated. [ 114 ] [ 113 ] [ 112 ] [ 111 ] Borhyaenids last occur in the late Miocene, about 4 Ma before the first appearance of canids or felids in South America. [ 107 ] Thylacosmilids last occur about 3 Ma ago and appear to be rarer at pre-GABI Pliocene sites than Miocene ones. [ 106 ]
In general, sparassodonts appear to have been mostly or entirely extinct by the time most nonprocyonid carnivorans arrived, with little overlap between the groups. Purported ecological counterparts between pairs of analogous groups (thylacosmilids and saber-toothed cats, borhyaenids and felids, hathliacynids and weasels ) neither overlap in time nor abruptly replace one another in the fossil record. [ 104 ] [ 107 ] Procyonids dispersed to South America by at least 7 Ma ago, and had achieved a modest endemic radiation by the time other carnivorans arrived ( Cyonasua -group procyonids ). However, procyonids do not appear to have competed with sparassodonts, the procyonids being large omnivores and sparassodonts being primarily hypercarnivorous . [ 115 ] Other groups of carnivorans did not arrive in South America until much later. Dogs and weasels appear in South America about 2.9 Ma ago, but do not become abundant or diverse until the early Pleistocene. [ 106 ] Bears , cats, and skunks do not appear in South America until the early Pleistocene (about 1 Ma ago or slightly earlier). [ 106 ] Otters and other groups of procyonids (i.e., coatis , raccoons ) have been suggested to have dispersed to South America in the Miocene based on genetic data, but no remains of these animals have been found even at heavily sampled northern South American fossil sites such as La Venta (Colombia) , which is only 600 km (370 mi) from the Isthmus of Panama. [ 116 ] [ 115 ] [ 117 ] [ 118 ]
Other groups of native South American predators have not been studied in as much depth. Terror birds have often been suggested to have been driven to extinction by placental carnivorans, though this hypothesis has not been investigated in detail. [ 119 ] [ 120 ] Titanis dispersed from South America to North America against the main wave of carnivoran migrations, being the only large native South American carnivore to accomplish this. [ 120 ] However, it only managed to colonize a small part of North America for a limited time, failing to diversify and going extinct in the early Pleistocene (1.8 Ma ago); the modest scale of its success has been suggested to be due to competition with placental carnivorans. [ 121 ] Terror birds also decline in diversity after about 3 Ma ago. [ 106 ] At least one genus of relatively small terror birds, Psilopterus , appears to have survived to as recently as about 96,000 years ago. [ 122 ] [ 123 ]
The native carnivore guild appears to have collapsed completely roughly 3 Ma ago (including the extinction of the last sparassodonts), not correlated with the arrival of carnivorans in South America, with terrestrial carnivore diversity being low thereafter. [ 106 ] [ 124 ] This has been suggested to have opened up ecological niches and allowed carnivorans to establish themselves in South America due to low competition. [ 115 ] [ 125 ] [ 126 ] A meteor impact 3.3 million years ago in southern South America has been suggested as a possible cause of this turnover, but this is still controversial. [ 127 ] [ 124 ] A similar pattern occurs in the crocodilian fauna, where modern crocodiles ( Crocodylus ) dispersed to South America during the Pliocene and became the dominant member of crocodilian communities after the late Miocene extinction of the previously dominant large native crocodilians such as the giant caiman Purussaurus and giant gharial Gryposuchus , which is thought to be related to the loss of wetlands habitat across northern South America. [ 128 ] [ 129 ]
Whether this revised scenario with a reduced role for competitive exclusion applies to other groups of South American mammals such as notoungulates and litopterns is unclear, though some authors have pointed out a protracted decline in South American native ungulate diversity since the middle Miocene. [ 130 ] Regardless of how this turnover happened, it is clear that carnivorans benefitted from it. Several groups of carnivorans such as dogs and cats underwent an adaptive radiation in South America after dispersing there, and the greatest modern diversity of canids in the world is in South America. [ 100 ]
The eventual triumph of the Nearctic migrants was ultimately based on geography, which played into the hands of the northern invaders in two crucial respects. The first was a matter of climate . Any species that reached Panama from either direction obviously had to be able to tolerate moist tropical conditions . Those migrating southward would then be able to occupy much of South America without encountering climates that were markedly different. However, northward migrants would have encountered drier or cooler conditions by the time they reached the vicinity of the Trans-Mexican Volcanic Belt . The challenge this climatic asymmetry presented was particularly acute for Neotropic species specialized for tropical rainforest environments, which had little prospect of penetrating beyond Central America. As a result, Central America currently has 41 mammal species of Neotropical origin, [ n 18 ] compared to only three for temperate North America. However, species of South American origin ( marsupials , xenarthrans , caviomorph rodents , and monkeys ) still comprise only 21% of species from nonflying, nonmarine mammal groups in Central America , while North American invaders constitute 49% of species from such groups in South America . Thus, climate alone cannot fully account for the greater success of species of Nearctic origin during the interchange.
The second and more important advantage geography gave to the northerners is related to the land area in which their ancestors evolved. During the Cenozoic, North America was periodically connected to Eurasia via Beringia , allowing repeated migrations back and forth to unite the faunas of the two continents. [ n 19 ] Eurasia was connected in turn to Africa , which contributed further to the species that made their way to North America. [ n 20 ] South America, though, was connected only to Antarctica and Australia, two much smaller and less hospitable continents, and only in the early Cenozoic. Moreover, this land connection does not seem to have carried much traffic (apparently no mammals other than marsupials and perhaps a few monotremes ever migrated by this route), particularly in the direction of South America. This means that Northern Hemisphere species arose within a land area roughly six times greater than was available to South American species. North American species were thus products of a larger and more competitive arena, [ n 21 ] [ 89 ] [ 131 ] [ 132 ] where evolution would have proceeded more rapidly. They tended to be more efficient and brainier , [ n 22 ] [ n 23 ] generally able to outrun and outwit their South American counterparts, who were products of an evolutionary backwater. In the cases of ungulates and their predators, South American forms were replaced wholesale by the invaders, possibly a result of these advantages.
The greater eventual success of South America's African immigrants compared to its native early Cenozoic mammal fauna is another example of this phenomenon, since the former evolved over a greater land area; their ancestors migrated from Eurasia to Africa , two significantly larger continents, before finding their way to South America. [ 58 ]
Against this backdrop, the ability of South America's xenarthrans to compete effectively against the northerners represents a special case. The explanation for the xenarthrans' success lies in part in their idiosyncratic approach to defending against predation, based on possession of body armor or formidable claws . The xenarthrans did not need to be fleet-footed or quick-witted to survive. Such a strategy may have been forced on them by their low metabolic rate (the lowest among the therians ). [ 140 ] [ 141 ] Their low metabolic rate may in turn have been advantageous in allowing them to subsist on less abundant [ 142 ] or less nutritious food sources. Unfortunately, the defensive adaptations of the large xenarthrans would have offered little protection against humans armed with spears and other projectiles .
At the end of the Pleistocene epoch, about 12,000 years ago, three dramatic developments occurred in the Americas at roughly the same time (geologically speaking). Paleoindians invaded and occupied the New World (although humans may have been living in the Americas, including what is now the southern US and Chile, more than 15,000 years ago [ 143 ] ), the last glacial period came to an end, and a large fraction of the megafauna of both North and South America went extinct. This wave of extinctions swept off the face of the Earth many of the successful participants of the GABI, as well as other species that had not migrated.
All the pampatheres, glyptodonts, ground sloths, equids, proboscideans, [ 144 ] [ 145 ] [ 83 ] giant short-faced bears , dire wolves , and machairodont species of both continents disappeared. The last of the South and Central American notoungulates and litopterns died out, as well as North America's giant beavers , lions , dholes , cheetahs , and many of its antilocaprid , bovid , cervid , tapirid and tayassuid ungulates. Some groups disappeared over most or all of their original range, but survived in their adopted homes, e.g. South American tapirs, camelids, and tremarctine bears (cougars and jaguars may have been temporarily reduced to South American refugia also). Others, such as capybaras, survived in their original range, but died out in areas to which they had migrated. Notably, this extinction pulse eliminated all Neotropic migrants to North America larger than about 15 kg (the size of a big porcupine), and all native South American mammals larger than about 65 kg (the size of a big capybara or giant anteater ). In contrast, the largest surviving native North American mammal, the wood bison , can exceed 900 kg (2,000 lb), and the largest surviving Nearctic migrant to South America, Baird's tapir , can reach 400 kg (880 lb).
The near-simultaneity of the megafaunal extinctions with the glacial retreat and the peopling of the Americas has led to proposals that both climate change and human hunting played a role. [ 95 ] Although the subject is contentious, [ 146 ] [ 147 ] [ 148 ] [ 149 ] [ 150 ] a number of considerations suggest that human activities were pivotal. [ 96 ] [ 151 ] The extinctions did not occur selectively in the climatic zones that would have been most affected by the warming trend, and no plausible general climate-based megafauna-killing mechanism could explain the continent-wide extinctions. The climate change took place worldwide, but had little effect on the megafauna in Africa and southern Asia, where megafaunal species had coevolved with humans . Numerous very similar glacial retreats had occurred previously within the ice age of the last several million years without ever producing comparable waves of extinction in the Americas or anywhere else.
Similar megafaunal extinctions have occurred on other recently populated land masses (e.g. Australia , [ 152 ] [ 153 ] Japan , [ 154 ] Madagascar , [ 155 ] New Zealand , [ 156 ] and many smaller islands around the world, such as Cyprus , [ 157 ] Crete , Tilos and New Caledonia [ 158 ] ) at different times that correspond closely to the first arrival of humans at each location. These extinction pulses invariably swept rapidly over the full extent of a contiguous land mass, regardless of whether it was an island or a hemisphere-spanning set of connected continents. This was true despite the fact that all the larger land masses involved (as well as many of the smaller ones) contained multiple climatic zones that would have been affected differently by any climate changes occurring at the time. However, on sizable islands far enough offshore from newly occupied territory to escape immediate human colonization, megafaunal species sometimes survived for many thousands of years after they or related species became extinct on the mainland; examples include giant kangaroos in Tasmania, [ 159 ] [ 160 ] giant Chelonoidis tortoises of the Galápagos Islands (formerly also of South America [ 95 ] ), giant Dipsochelys tortoises of the Seychelles (formerly also of Madagascar ), giant meiolaniid turtles on Lord Howe Island , New Caledonia and Vanuatu (previously also of Australia), [ 161 ] [ n 24 ] ground sloths on the Antilles , [ 164 ] [ 165 ] Steller's sea cows off the Commander Islands [ 166 ] and woolly mammoths on Wrangel Island [ 167 ] and Saint Paul Island . [ 168 ]
The glacial retreat may have played a primarily indirect role in the extinctions in the Americas by simply facilitating the movement of humans southeastward from Beringia to North America. The reason that a number of groups went extinct in North America but lived on in South America (while no examples of the opposite pattern are known) appears to be that the dense rainforest of the Amazon basin and the high peaks of the Andes provided environments that afforded a degree of protection from human predation. [ 169 ] [ n 25 ] [ n 26 ]
Extant or extinct (†) North American taxa whose ancestors migrated out of South America and reached the modern territory of the contiguous United States include fish, amphibians, birds, and mammals. [ n 27 ]
Extant or extinct (†) North American taxa whose ancestors migrated out of South America, but failed to reach the contiguous United States and were confined to Mexico and Central America, include invertebrates, fish, amphibians, reptiles, birds, and mammals. [ n 27 ] [ n 29 ]
Extant or extinct (†) South American taxa whose ancestors migrated out of North America include amphibians, reptiles, birds, and mammals. [ n 27 ] | https://en.wikipedia.org/wiki/Great_American_Interchange
The Great Calcite Belt (GCB) refers to a region of the ocean where there are high concentrations of calcite , a mineral form of calcium carbonate . The belt extends over a large area of the Southern Ocean surrounding Antarctica. The calcite in the Great Calcite Belt is formed by tiny marine organisms called coccolithophores , which build their shells out of calcium carbonate. When these organisms die, their shells sink to the bottom of the ocean, and over time, they accumulate to form a thick layer of calcite sediment.
Beneath the Great Calcite Belt, calcium carbonate sediments accumulate where the sea floor lies above the calcite compensation depth (CCD), the depth below which calcite from the shells of marine organisms dissolves faster than it can accumulate. In such areas the shells are preserved, producing a higher concentration of calcium carbonate sediments on the ocean floor, observable in the form of white, chalky deposits.
The Great Calcite Belt plays a significant role in regulating the global carbon cycle . Calcite locks up carbon that has been drawn from the atmosphere into the ocean, which helps to reduce the amount of carbon dioxide in the atmosphere and mitigate the effects of climate change . Recent studies suggest the belt sequesters between 15 and 30 million tonnes of carbon per year. [ 1 ]
Scientists have further interest in the calcite sediments in the belt, which contain valuable information about past climate, ocean currents, ocean chemistry, and marine ecosystems. For example, variations in the CCD depth over time can indicate changes in the amount of carbon dioxide in the atmosphere and the ocean's ability to absorb it. The belt is also home to a diverse range of contemporary marine life, including deep-sea corals and fish that are adapted to the unique conditions found in this part of the ocean. The Great Calcite Belt is a region of elevated summertime upper ocean calcite concentration derived from coccolithophores , despite the region being known for its diatom predominance. The overlap of two major phytoplankton groups, coccolithophores and diatoms, in the dynamic frontal systems characteristic of this region provides an ideal setting to study environmental influences on the distribution of different species within these taxonomic groups. [ 2 ]
The Great Calcite Belt can be defined as an elevated particulate inorganic carbon (PIC) feature occurring alongside seasonally elevated chlorophyll a in austral spring and summer in the Southern Ocean. [ 3 ] It plays an important role in climate fluctuations, [ 4 ] [ 5 ] accounting for over 60% of the Southern Ocean area (30–60° S). [ 6 ] The region between 30° and 50° S has the highest uptake of anthropogenic carbon dioxide (CO 2 ) alongside the North Atlantic and North Pacific oceans. [ 7 ] Knowledge of the impact of interacting environmental influences on phytoplankton distribution in the Southern Ocean is limited. For example, more understanding is needed of how light and iron availability or temperature and pH interact to control phytoplankton biogeography . [ 8 ] [ 9 ] [ 10 ] Hence, if model parameterizations are to improve to provide accurate predictions of biogeochemical change, a multivariate understanding of the full suite of environmental drivers is required. [ 11 ] [ 2 ]
The Southern Ocean has often been considered as a microplankton -dominated (20–200 μm) system with phytoplankton blooms dominated by large diatoms and Phaeocystis sp. [ 12 ] [ 13 ] [ 14 ] However, since the identification of the Great Calcite Belt (GCB) as a consistent feature [ 3 ] [ 15 ] and the recognition of picoplankton (< 2 μm) and nanoplankton (2–20 μm) importance in high-nutrient, low-chlorophyll (HNLC) waters, [ 16 ] the dynamics of small (bio)mineralizing plankton and their export need to be acknowledged. The two dominant biomineralizing phytoplankton groups in the GCB are coccolithophores and diatoms. Coccolithophores are generally found north of the polar front, [ 17 ] though Emiliania huxleyi has been observed as far south as 58° S in the Scotia Sea , [ 18 ] at 61° S across Drake Passage , [ 10 ] and at 65°S south of Australia. [ 19 ] [ 2 ]
Diatoms are present throughout the GCB, with the polar front marking a strong divide between different size fractions. [ 20 ] North of the polar front, small diatom species, such as Pseudo-nitzschia spp. and Thalassiosira spp., tend to dominate numerically, whereas large diatoms with higher silicic acid requirements (e.g., Fragilariopsis kerguelensis ) are generally more abundant south of the polar front. [ 20 ] High abundances of nanoplankton (coccolithophores, small diatoms, chrysophytes ) have also been observed on the Patagonian Shelf [ 13 ] and in the Scotia Sea . [ 21 ] Currently, few studies incorporate small biomineralizing phytoplankton to species level. [ 20 ] [ 12 ] [ 13 ] [ 21 ] Rather, the focus has often been on the larger and noncalcifying species in the Southern Ocean due to sample preservation issues (i.e., acidified Lugol’s solution dissolves calcite , and light microscopy restricts accurate identification to cells > 10 μm). [ 21 ] In the context of climate change and future ecosystem function, the distribution of biomineralizing phytoplankton is important to define when considering phytoplankton interactions with carbonate chemistry , [ 22 ] [ 23 ] and ocean biogeochemistry . [ 24 ] [ 25 ] [ 26 ] [ 2 ]
The Great Calcite Belt spans the major Southern Ocean circumpolar fronts: the Subantarctic front, the polar front, the Southern Antarctic Circumpolar Current front, and occasionally the southern boundary of the Antarctic Circumpolar Current . [ 27 ] [ 28 ] [ 29 ] The subtropical front (at approximately 10 °C) acts as the northern boundary of the GCB and is associated with a sharp increase in PIC southwards. [ 6 ] These fronts divide distinct environmental and biogeochemical zones, making the GCB an ideal study area to examine controls on phytoplankton communities in the open ocean. [ 14 ] [ 8 ] A high PIC concentration observed in the GCB (1 μmol PIC L −1 ) compared to the global average (0.2 μmol PIC L −1 ) and significant quantities of detached E. huxleyi coccoliths (in concentrations > 20,000 coccoliths mL −1 ) [ 6 ] both characterize the GCB. The GCB is clearly observed in satellite imagery [ 3 ] spanning from the Patagonian Shelf [ 30 ] [ 31 ] across the Atlantic, Indian, and Pacific oceans and completing Antarctic circumnavigation via the Drake Passage. [ 2 ]
The biogeography of Southern Ocean phytoplankton controls the local biogeochemistry and the export of macronutrients to lower latitudes and depth. Of particular relevance is the competitive interaction between coccolithophores and diatoms, with the former being prevalent along the Great Calcite Belt (40–60°S), while diatoms tend to dominate the regions south of 60°S. [ 32 ]
The ocean is changing at an unprecedented rate as a consequence of increasing anthropogenic CO 2 emissions and related climate change. Changes in density stratification and nutrient supply, as well as ocean acidification , lead to changes in phytoplankton community composition and consequently ecosystem structure and function. Some of these changes are already observable today [ 33 ] [ 34 ] and may have cascading effects on global biogeochemical cycles and oceanic carbon uptake. [ 35 ] [ 36 ] [ 37 ] Changes in Southern Ocean (SO) biogeography are especially critical due to the importance of the Southern Ocean in fuelling primary production at lower latitudes through the lateral export of nutrients [ 38 ] and in taking up anthropogenic CO 2 . [ 39 ] For the carbon cycle , the ratio of calcifying and noncalcifying phytoplankton is crucial due to the counteracting effects of calcification and photosynthesis on seawater pCO 2 , which ultimately controls CO 2 exchange with the atmosphere, and the differing ballasting effect of calcite and silicic acid shells for organic carbon export . [ 32 ]
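As a simplified illustration of why this ratio matters, the net reactions of the two processes (ignoring the full carbonate-system equilibria) push surface-water CO₂ in opposite directions:
Calcification: Ca²⁺ + 2 HCO₃⁻ → CaCO₃ + CO₂ + H₂O (consumes alkalinity and releases CO₂, tending to raise seawater pCO₂)
Photosynthesis (simplified): CO₂ + H₂O → CH₂O + O₂ (consumes CO₂, tending to lower seawater pCO₂)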
Calcifying coccolithophores and silicifying diatoms are globally ubiquitous phytoplankton functional groups. [ 40 ] [ 41 ] Diatoms are a major contributor to global phytoplankton biomass [ 42 ] and annual net primary production. [ 43 ] In comparison, coccolithophores contribute less to biomass [ 42 ] and to global NPP. [ 44 ] [ 45 ] [ 46 ] [ 47 ] [ 32 ]
However, coccolithophores are the major phytoplanktonic calcifier, [ 48 ] thereby significantly impacting the global carbon cycle . Diatoms dominate the phytoplankton community in the Southern Ocean, [ 49 ] [ 50 ] [ 51 ] but coccolithophores have received increasing attention in recent years. Satellite imagery of particulate inorganic carbon (PIC, a proxy for coccolithophore abundance) revealed the "Great Calcite Belt", [ 52 ] an annually recurring circumpolar band of elevated PIC concentrations between 40 and 60°S. In situ observations confirmed coccolithophore abundances of up to 2.4×10 3 cells mL −1 in the Atlantic sector (blooms on the Patagonian Shelf ), up to 3.8×10 2 cells mL −1 in the Indian sector, [ 15 ] and up to 5.4×10 2 cells mL −1 in the Pacific sector of the Southern Ocean [ 53 ] with Emiliania huxleyi being the dominant species. [ 15 ] [ 54 ] However, the contribution of coccolithophores to total Southern Ocean phytoplankton biomass and NPP has not yet been assessed. Locally, elevated coccolithophore abundance in the GCB has been found to turn surface waters into a source of CO 2 for the atmosphere, [ 15 ] emphasising the necessity to understand the controls on their abundance in the Southern Ocean in the context of the carbon cycle and climate change. While coccolithophores have been observed to have moved polewards in recent decades, [ 55 ] [ 56 ] [ 34 ] their response to the combined effects of future warming and ocean acidification is still subject to debate. [ 57 ] [ 55 ] [ 58 ] [ 59 ] [ 60 ] As their response will also crucially depend on future phytoplankton community composition and predator–prey interactions, [ 61 ] it is essential to assess the controls on their abundance in today's climate. [ 32 ]
Coccolithophore biomass is controlled by a combination of bottom-up (physical–biogeochemical environment) and top-down factors ( predator–prey interactions ), but the relative importance of the two has not yet been assessed for coccolithophores in the Southern Ocean. Bottom-up factors directly impact phytoplankton growth, and diatoms and coccolithophores are traditionally discriminated based on their differing requirements for nutrients, turbulence, and light. Based on this, Margalef's mandala predicts a seasonal succession from diatoms to coccolithophores as light levels increase and nutrient levels decline. [ 62 ] In situ studies assessing Southern Ocean coccolithophore biogeography have found coccolithophores under various environmental conditions, [ 15 ] [ 63 ] [ 64 ] [ 54 ] [ 50 ] thus suggesting a wide ecological niche, but all of the mentioned studies have almost exclusively focused on bottom-up controls. [ 32 ]
However, phytoplankton growth rates do not necessarily covary with biomass accumulation rates. Using satellite data from the North Atlantic, Behrenfeld stressed in 2014 the importance of simultaneously considering bottom-up and top-down factors when assessing seasonal phytoplankton biomass dynamics and the succession of different phytoplankton types owing to the spatially and temporally varying relative importance of the physical–biogeochemical and the biological environment. [ 65 ] [ 32 ]
In the Southern Ocean, previous studies have shown zooplankton grazing to control total phytoplankton biomass, [ 66 ] phytoplankton community composition, [ 67 ] and ecosystem structure, [ 68 ] [ 69 ] suggesting that top-down control might also be an important driver for the relative abundance of coccolithophores and diatoms. However, zooplankton grazing is not well represented in current Earth system models, [ 70 ] [ 71 ] and the impact of different grazing formulations on phytoplankton biogeography and diversity is subject to ongoing research. [ 72 ] [ 73 ] [ 32 ]
Marine sediments in the Southern Ocean show a distinct spatial distribution. South of the Polar Front lies the subpolar opal belt, where sediments contain a significant proportion of siliceous plankton frustules. Sediments near Antarctica consist mainly of glacial debris of all grain sizes, eroded and delivered by the Antarctic ice. [ 74 ] [ 75 ] | https://en.wikipedia.org/wiki/Great_Calcite_Belt
The Great Debate , also called the Shapley–Curtis Debate , was held on 26 April 1920 at the U.S. National Museum in Washington, D.C. [ a ] between the astronomers Harlow Shapley and Heber Curtis . It concerned the nature of so-called spiral nebulae and the size of the Universe . Shapley believed that these nebulae were relatively small and lay within the outskirts of the Milky Way galaxy (then thought to be the center or entirety of the universe ), while Curtis held that they were in fact independent galaxies, implying that they were exceedingly large and distant. A year later the two sides of the debate were presented and expanded on in independent technical papers under the title "The Scale of the Universe".
In the aftermath of the public debate, scientists have been able to verify individual pieces of evidence from both astronomers, but on the main point of the existence of other galaxies, Curtis has been proven correct.
The debate was the topic of that year's William Ellery Hale lecture during a meeting of the National Academy of Sciences in the Baird Auditorium of the U.S. National Museum in Washington, D.C. (now the Smithsonian Museum of Natural History ). [ 1 ] Most of the Academy members in attendance that night were not astronomers. The topics considered for that year's meeting included Einstein's theory of relativity , glaciers, and even zoological or biological subjects, before a debate on "The Distance Scale of the Universe" was chosen. [ 2 ] Shapley and Curtis agreed to a format where each would present their opposing views in back-to-back 40-minute lectures. Shapley worked from a typed script and presented a general background introduction to astronomy before going on to his views on the size of the universe. [ 3 ] Curtis worked from a set of notes and presented his lecture points as typewritten text on projected photographic slides . [ 4 ]
No transcript of the debate exists; its content has been pieced together over the years from Shapley's original annotated typewritten script, Curtis's slides (his script was discarded after the lecture), and both participants' letters. [ 5 ]
Shapley presented the case that the Milky Way is the entirety of the Universe. [ 6 ] In his astronomical work he had been coming up with estimates for the size of the galaxy using globular clusters and the Cepheid variables found within them. He presented the audience with a galaxy 300,000 light-years in diameter with the Sun off to one side. [ 7 ] He spent most of his lecture describing the vast size of the Milky Way and towards the end argued that " spiral nebulae " such as Andromeda were simply objects on the edge of the Milky Way itself. He backed up this claim by appealing to their relative sizes: if Andromeda and other spiral nebulae were not part of the Milky Way, then, given the vast size of our galaxy, the distance to them would be a span most contemporary astronomers would not accept. [ 8 ]
Curtis, on the other hand, contended that Andromeda and other such "nebulae" were separate galaxies, or " island universes " (a term invented by the 18th-century philosopher Immanuel Kant , who also argued that the "spiral nebulae" were extragalactic). [ 9 ] He showed that there were more novae in Andromeda than in the Milky Way, and asked why, if Andromeda were not a separate galaxy but merely a nebula within our own, so many novae should be concentrated in that one small region of the sky. This argued for Andromeda being a separate galaxy with its own characteristic age and rate of nova occurrences. [ citation needed ] Curtis also noted the large radial velocities of spiral nebulae that suggested they could not be gravitationally bound to the Milky Way in a Kapteyn -model universe. [ 10 ] Curtis pointed out a similarity in structure that explained why there were no spiral nebulae visible along the plane of the Milky Way (referred to as the zone of avoidance ); both the Milky Way and the spiral nebulae had similar dust clouds along their plane, and that dust in the Milky Way blocked our view of the spiral nebulae. [ 11 ]
In a May 1921 issue of the Bulletin of the National Research Council , Harlow Shapley's and Heber Curtis's sides of the debate were presented and expanded on in independent technical papers under the title "The Scale of the Universe". The published papers each included counterarguments to the position advocated by the other scientist at the 1920 meeting. [ 12 ]
Later in the 1920s, Edwin Hubble showed that Andromeda was far outside the Milky Way by measuring Cepheid variable stars, proving that Curtis was correct. [ 13 ] It is now known that the Milky Way is only one of as many as an estimated 200 billion ( 2 × 10 11 ) [ 14 ] to 2 trillion ( 2 × 10 12 ) or more galaxies in the observable Universe. [ 15 ] [ 16 ] Also, astronomers generally accept that the nova Shapley referred to in his arguments was in fact a supernova , which does indeed temporarily outshine the combined output of an entire galaxy. On other points, the results were mixed (the actual size of the Milky Way is in between the sizes proposed by Shapley and Curtis), or in favor of Shapley (the Sun was near the center of the galaxy in Curtis's model, while Shapley correctly placed the Sun in the outer regions of the galaxy). [ 17 ]
It later became apparent that Adriaan van Maanen's claimed measurements of rotation in the Pinwheel Galaxy , which had bolstered Shapley's case that the spiral nebulae must be relatively nearby, were incorrect: one cannot actually see the Pinwheel Galaxy rotate during a human lifespan . [ 11 ]
The format of the Great Debate has since been used to debate other fundamental questions in astronomy. In honor of the first "Great Debate", the Smithsonian has hosted four more events. [ 18 ] | https://en.wikipedia.org/wiki/Great_Debate_(astronomy)
The Great Green Wall , officially known as the Three-North Shelter Forest Program ( simplified Chinese : 三北防护林 ; traditional Chinese : 三北防護林 ; pinyin : Sānběi Fánghùlín ), is a series of human-planted windbreaking forest strips (shelterbelts) in China , designed to hold back the expansion of the Gobi Desert [ 1 ] and provide timber to the local population. [ 2 ] The program started in 1978 and is planned to complete around 2050, [ 3 ] at which point it will be expected to have created a vast green barrier spanning approximately 4,828 kilometres (3,000 mi) long and up to 1,448 kilometres (900 mi) wide in certain regions, and will encompass around 88 million acres of forests. [ 4 ] [ 5 ] [ 6 ]
The project's name indicates that it is to be carried out in all three northern regions: the North , the Northeast , and the Northwest . [ 7 ] This project has historical precedents dating back to before the Common Era . However, in premodern periods, government-sponsored afforestation projects along the historical frontier regions were mostly for military fortification. [ 8 ]
China has the largest desert area of any country and is heavily impacted by sandstorms. However, the country has implemented various measures to restore grasslands and forests, successfully slowing and now reversing overall desertification. [ 9 ] In November 2024, China's government reported the completion of the 3,000 km green belt around the Taklamakan Desert . The fraction of the country covered by deserts declined from 27.2% in the previous decade to 26.8%. [ 10 ]
Located mostly in Northern China , the Great Green Wall of China is a massive reforestation project meant to counteract desertification and slow the consequences of climate change. [ 11 ] Begun in the late 1970s in reaction to the incursion of the Gobi Desert , the project has been driven by the Chinese government, with early work centered on large-scale tree planting to halt desertification and safeguard local communities and agricultural land. [ 11 ] Aiming to build a green barrier against desertification , dust storms, and ecological damage, [ 12 ] the government's "Three-North Shelterbelt Program" has developed over time into the enormous environmental rehabilitation project known today as the Great Green Wall.
Desertification, the conversion of formerly non-desert land to dry desert conditions through human or natural activity, [ 13 ] is a large and increasing problem faced by modern China. By the year 2000, 29.7% of China had been desertified, with the rate of change increasing almost every year. [ 13 ] In 2003, Worldchanging reported that 3,600 km 2 (1,400 sq mi) of Chinese grassland were being overtaken annually by the Gobi Desert . [ 14 ] In June 2001, National Geographic reported that the dust storms blow off as much as 2,000 km 2 (800 sq mi) of topsoil annually, and that the storms were increasing in severity. Such storms have serious agricultural impacts on other nearby countries, including Japan , North Korea , and South Korea . [ 15 ] The main cause of these changes was human activity, with water usage, mining, excess farming, and wood cutting being the top contributors. [ 13 ] In 2022, Time reported progress in desert stabilization: thousands of acres of shifting dunes had been stabilized, and the nationwide frequency of sandstorms, including those affecting Beijing, had dropped by one-fifth between 2009 and 2014. [ 16 ]
In 2017, National Geographic reported that the policies in place at the time had contributed to the reduction of China's forests and grasslands. One example was the 'grain first' policy, which required grasslands to be converted into croplands. [ 11 ] Reducing grass coverage eliminated one of the barriers to desertification. Most of these causes can be attributed to an underlying issue: population growth. [ 11 ] The number of people living in important ecological areas has grown beyond the carrying capacity of those areas. [ 17 ] In 2019, a study published in Nature Sustainability found that while human activity had previously been associated with land degradation, land-use management programs had since led to a net increase in vegetated land in China. [ 18 ]
In the past 40 years, the world as a whole has desertified a third of its land because of this, and the problem only continues to grow. [ 17 ] The increasing desertification and related storms have caused some major issues for people living in China, especially around the Gobi desert. Crops and buildings are being damaged or destroyed. This has forced many people, who are now called "climate refugees" [ 17 ] to leave their homelands. In total, the effects of desertification have affected the lives of over 400 million people. [ 17 ] The Green Wall project was begun in 1978, with the proposed result of raising northern China's forest cover from 5 to 15 percent, [ 19 ] thereby reducing desertification .
Yin Yuzhen planted trees to rehabilitate the desolate environment in the Uxin Banner of China's semi-arid western landscape. Yin's afforestation efforts have been recognized by individuals such as Chinese Communist Party general secretary Xi Jinping , who, during the 2020 National People's Congress , described the actions of those such as Yin as a remarkable achievement and an overall improvement of the ecology in China. [ 20 ]
China has been increasingly installing solar farms in its desert regions including Kubuqi Desert as part of efforts to combat desertification. Listed in its renewable energy plan for 2021 to 2025, China has called for the “large-scale development” of its sand-plus-solar anti-desertification method, a strategy that Beijing began promoting around 2023. [ 21 ] [ 22 ] Solar farms in desert areas contribute to China’s renewable energy capacity while also helping to stabilize the landscape. The shade provided by the solar panels reduces the harsh impact of the sun on the soil, creating more favorable conditions for vegetation to grow. In some instances, grass has reportedly started to grow beneath the panels, which purportedly aids in reducing soil erosion and supporting the local ecosystem. [ 23 ]
Additionally, solar panels have been documented to lower wind speeds at ground level, helping to prevent the movement of sand dunes and minimizing dust that can degrade the environment. [ 24 ] This can lead to better air quality and improved conditions for plant growth, further aiding in the restoration of desertified land. In particular, the liquorice plant has proven to be effective in the shaded areas beneath solar panels. As a nitrogen-fixing crop, it draws nitrogen from the air, adds organic matter to the soil, and gradually restores soil fertility. Over time, this helps improve the quality of the land, making it suitable for growing a wider variety of crops, such as tomatoes and melons. The more advanced solar farms in Chinese deserts currently feature elevated solar panels that allow for high-tech farming underneath, often in irrigated greenhouses . This approach combines renewable energy production with agricultural practices, contributing to both ecological restoration and food production. Though challenges remain, such as sand buildup on panels and the costs associated with transporting energy from remote areas, overall analysis of Landsat data indicates that solar projects have contributed to the greening of deserts in parts of China in recent years. [ 23 ] [ 24 ]
In 2008, winter storms destroyed 10% of the new forest stock, causing the World Bank to advise China to focus more on quality rather than quantity in its stock species. [ 25 ] By 2009, China's planted forest covered more than 500,000 square kilometers (increasing tree cover from 12% to 18%) – the largest artificial forest in the world. [ 25 ]
According to Foreign Affairs , the Three-North Shelter Forest Program successfully transitioned the economic model of the Gobi Desert region from harmful farming agriculture to ecologically friendly tourism, fruit business, and forestry. [ 26 ]
In 2018, the United States' National Oceanic and Atmospheric Administration found that the increase in forest coverage observed by satellites is consistent with Chinese government data. [ 27 ] According to Shixiong Cao, an ecologist at Beijing Forestry University, the Chinese government recognized the problem of water shortages in arid regions and changed its approach, planting vegetation with lower water requirements. [ 27 ] Zhang Jianlong, head of the forestry department, told the media that the goal was to sustain the health of vegetation and choose suitable plant species and irrigation techniques. [ 27 ]
According to a 2019 NASA Earth Observatory report, satellite data from 2000 to 2017 showed that China contributed significantly to global greening efforts, primarily through both large-scale afforestation programs and intensive agricultural practices. NASA's Moderate Resolution Imaging Spectroradiometer (MODIS) data suggested that China accounted for approximately 25% of the global increase in greening, with about 42% of this growth attributed to both forest conservation and expansion efforts aimed at mitigating soil erosion and climate change. [ 28 ]
According to a BBC News report in 2020, tree plantation programs resulted in significant carbon dioxide absorption and helped mitigate climate change; the benefit of tree planting was underestimated by previous research. [ 29 ] The Three-North Shelter Forest Program was also found to have reversed the desertification of the Gobi Desert, which grew 10,000 square kilometers per year in the 1980s, but was shrinking by more than 2,000 square kilometers per year in 2022. [ 30 ]
In November 2024, China's government reported that after 46 years of work it had finished the 3,000-kilometer green belt around the Taklamakan Desert . The country's forest coverage grew from 10% of the overall territory in 1949 to 25% in 2024; the green belt project contributed to this achievement. Although the country's desert coverage was still 26.8% in 2024, this was down slightly from 27.2% a decade previously. [ 10 ]
To date, at least 30 million hectares of trees have been planted. [ 31 ] Moreover, the authorities have reported a marked decrease in the frequency and intensity of dust storms, especially in places like Beijing. [ 12 ] The tree planting has helped to stabilize the soil and enhance local microclimates, thereby aiding agriculture and slowing further desertification. [ 12 ] Though these achievements are noteworthy, they have not allayed ongoing concerns about the ecological sustainability and efficacy of such massive operations. [ 32 ]
Hong Jiang, a geography professor at the University of Wisconsin, worried that the trees could soak up large amounts of groundwater , which would be extremely problematic for arid regions like northern China. [ 33 ] Dee Williams, a US Department of the Interior anthropologist, pointed to China's past failures in anti-desertification efforts and suggested that planting trees is a temporary fix that could not change the underlying behavior. [ 33 ]
In December 2003, American futurist Alex Steffen , on his website Worldchanging , strongly criticized the Green Wall project, claiming that China was not using collaborative approaches and information platforms to support local efforts. China's increasing levels of pollution have also weakened the soil, rendering it unusable in many areas. [ 14 ]
Research on reforested areas of the Loess Plateau has found that the combination of exotic tree species and high-density planting could worsen water shortages. The forests increase the loss of soil moisture compared to farmland. [ 34 ]
Furthermore, planting blocks of fast-growing trees reduces the biodiversity of forested areas, creating areas unsuitable for plants and animals normally found in forests. "China plants more trees than the rest of the world combined", says John MacKinnon, the head of the EU-China Biodiversity Programme, "but the trouble is they tend to be monoculture plantations. They are not places where birds want to live." The lack of diversity also makes the trees more susceptible to disease, as in 2000, when one billion poplar trees in Ningxia were lost to a single disease, setting back 20 years of planting efforts. [ 35 ] China's forest scientists argued that monoculture tree plantations are more effective at absorbing the greenhouse gas carbon dioxide than slow-growth forests, [ 25 ] so while diversity may be lower, the trees purportedly help to offset China's carbon emissions.
Liu Tuo, head of the Desertification Control Office in the State Forestry Administration, believed that there were huge gaps in the country's efforts to reclaim the land that had become desert. [ 36 ] By 2011, around 1.73 million km² of land had become desert in China, of which 530,000 km² was considered treatable. However, at the 2011 treatment rate of 1,717 km² per year, it would take roughly 300 years to reclaim the treatable desertified land. [ 37 ]
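As a quick check of the arithmetic behind the quoted 300-year figure (assuming the treatment rate stays constant and only the treatable area is counted):

$$\frac{530{,}000\ \text{km}^2}{1{,}717\ \text{km}^2/\text{yr}} \approx 309\ \text{years}.$$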
Critics have also questioned the project's efficacy in stopping the Gobi Desert's spread, noting that the severe arid climate and poor soil quality make it challenging to maintain tree development over the long run. [ 12 ] Furthermore, although the initiative has made significant progress in reforestation, it has not adequately addressed the fundamental socioeconomic drivers of desertification, including overgrazing and unsustainable farming methods. [ 32 ] | https://en.wikipedia.org/wiki/Great_Green_Wall_(China) |
NASA 's series of Great Observatories satellites are four large, powerful space-based astronomical telescopes launched between 1990 and 2003. They were built with different technology to examine specific wavelength/energy regions of the electromagnetic spectrum : gamma rays , X-rays , visible and ultraviolet light , and infrared light .
The Hubble Space Telescope (HST) primarily observes visible light and near-ultraviolet . It was launched in 1990 aboard the Space Shuttle Discovery during STS-31 , but its main mirror had been ground incorrectly, resulting in spherical aberration that compromised the telescope's capabilities. The optics were corrected to their intended quality by the STS-61 servicing mission in 1993. In 1997, the STS-82 servicing mission added capability in the near-infrared range, and in 2009 the STS-125 servicing mission refurbished the telescope and extended its projected service life. It remains in active operation as of October 2024 [update] .
The Compton Gamma Ray Observatory (CGRO) primarily observed gamma rays , though it extended into hard X-rays as well. It was launched in 1991 aboard Atlantis during STS-37 . It was de-orbited in 2000 after a gyroscope failed.
The Chandra X-ray Observatory (CXO) primarily observes soft X-rays . It was launched in 1999 aboard Columbia during STS-93 into an elliptical high-Earth orbit, and was initially named the Advanced X-ray Astronomical Facility (AXAF). It remains in active operation as of October 2024 [update] .
The Spitzer Space Telescope (SST) observed the infrared spectrum. It was launched in 2003 aboard a Delta II rocket into an Earth-trailing solar orbit. Depletion of its liquid helium coolant in 2009 reduced its functionality, leaving it with only two short-wavelength imaging modules. It was removed from service and placed into safe-mode on January 30, 2020.
The concept of a Great Observatory program was first proposed in the 1979 NRC report "A Strategy for Space Astronomy and Astrophysics for the 1980s". [ 1 ] This report laid the essential groundwork for the Great Observatories and was chaired by Peter Meyer (through June 1977) and then by Harlan J. Smith (through publication). In the mid-1980s, it was further advanced by all of the astrophysics Division Directors at NASA headquarters , including Frank Martin and Charlie Pellerin. NASA's "Great Observatories" program used four separate satellites, each designed to cover a different part of the spectrum in ways which terrestrial systems could not. This perspective enabled the proposed X-ray and InfraRed observatories to be appropriately seen as a continuation of the astronomical program begun with Hubble and CGRO rather than competitors or replacements. [ 2 ] [ 3 ] Two explanatory documents published by NASA and created for the NASA Astrophysics Division and the NASA Astrophysics Management Working Group laid out the rationale for the suite of observatories and questions that could be addressed across the spectrum. [ 4 ] [ 5 ] They had an important role in the campaign to win and sustain approval for the four telescopes. [ citation needed ]
The history of the Hubble Space Telescope can be traced back to 1946, when the astronomer Lyman Spitzer wrote the paper Astronomical advantages of an extraterrestrial observatory . [ 6 ] Spitzer devoted much of his career to pushing for a space telescope.
The 1966–1972 Orbiting Astronomical Observatory missions demonstrated the important role space-based observations could play in astronomy. In 1968, NASA developed firm plans for a space-based reflecting telescope with a 3-meter mirror, known provisionally as the Large Orbiting Telescope or Large Space Telescope (LST), with a launch slated for 1979. [ 7 ] Congress eventually approved funding of US$36 million for 1978, and the design of the LST began in earnest, aiming for a launch date of 1983. During the early 1980s, the telescope was named after Edwin Hubble .
Hubble was originally intended to be retrieved and returned to Earth by the Space Shuttle , but the retrieval plan was later abandoned. On 31 October 2006, NASA Administrator Michael D. Griffin gave the go-ahead for a final refurbishment mission. The 11-day STS-125 mission by Space Shuttle Atlantis , launched on 11 May 2009, [ 8 ] installed fresh batteries, replaced all gyroscopes, replaced a command computer, fixed several instruments, and installed the Wide Field Camera 3 and the Cosmic Origins Spectrograph . [ 9 ]
Gamma rays had been examined above the atmosphere by several early space missions. During its High Energy Astronomy Observatory Program in 1977, NASA announced plans to build a "great observatory" for gamma-ray astronomy . The Gamma Ray Observatory (GRO), renamed Compton Gamma-Ray Observatory (CGRO), was designed to take advantage of the major advances in detector technology during the 1980s. Following 14 years of effort, the CGRO was launched on 5 April 1991. [ 10 ] One of the three gyroscopes on the Compton Gamma Ray Observatory failed in December 1999. Although the observatory was fully functional with two gyroscopes, NASA judged that failure of a second gyroscope would result in inability to control the satellite during its eventual return to Earth due to orbital decay. NASA chose instead to preemptively de-orbit Compton on 4 June 2000. [ 11 ] Parts that survived reentry splashed into the Pacific Ocean .
In 1976 the Chandra X-ray Observatory (called AXAF at the time) was proposed to NASA by Riccardo Giacconi and Harvey Tananbaum . Preliminary work began the following year at Marshall Space Flight Center (MSFC) and the Smithsonian Astrophysical Observatory (SAO). In the meantime, in 1978, NASA launched the first imaging X-ray telescope, Einstein Observatory (HEAO-2), into orbit. Work continued on the Chandra project through the 1980s and 1990s. In 1992, to reduce costs, the spacecraft was redesigned. Four of the twelve planned mirrors were eliminated, as were two of the six scientific instruments. Chandra's planned orbit was changed to an elliptical one, reaching one third of the way to the Moon at its farthest point. This eliminated the possibility of improvement or repair by the Space Shuttle but put the observatory above the Earth's radiation belts for most of its orbit.
By the early 1970s, astronomers began to consider the possibility of placing an infrared telescope above the obscuring effects of the atmosphere of Earth . Most of the early concepts envisioned repeated flights aboard the NASA Space Shuttle. This approach was developed in an era when the Shuttle program was presumed to be capable of supporting weekly flights of up to 30 days duration. In 1979, a National Research Council of the National Academy of Sciences report, A Strategy for Space Astronomy and Astrophysics for the 1980s , identified a Shuttle Infrared Telescope Facility (SIRTF) as "one of two major astrophysics facilities [to be developed] for Spacelab ," a Shuttle-borne platform.
The launch of the Infrared Astronomical Satellite , an Explorer-class satellite designed to conduct the first infrared survey of the sky, led to anticipation of an instrument using new infrared detector technology. By September 1983, NASA was considering the "possibility of a long duration [free-flyer] SIRTF mission". The 1985 Spacelab-2 flight aboard STS-51-F confirmed that the Shuttle environment was not well suited to an onboard infrared telescope, and that a free-flying design was better. The first word of the name was changed from Shuttle so it would be called the Space Infrared Telescope Facility. [ 12 ] [ 13 ]
Spitzer was the only one of the Great Observatories not launched by the Space Shuttle. It was originally intended to be so launched, but after the Challenger disaster , the Centaur LH2 / LOX upper stage that would have been required to push it into a heliocentric orbit was banned from Shuttle use. Titan and Atlas launch vehicles were canceled for cost reasons. After redesign and lightening, it was launched in 2003 by a Delta II launch vehicle instead. It was called the Space Infrared Telescope Facility (SIRTF) before launch. The telescope was deactivated when operations ended on 30 January 2020.
Since the Earth's atmosphere prevents X-rays , gamma-rays [ 14 ] and far-infrared radiation from reaching the ground, space missions were essential for the Compton, Chandra and Spitzer observatories. Hubble also benefits from being above the atmosphere, as the atmosphere blurs ground-based observations of very faint objects, decreasing spatial resolution (brighter objects, however, can be imaged from the ground at much higher resolution than by Hubble using astronomical interferometers or adaptive optics ). Larger, ground-based telescopes have only recently matched Hubble in resolution for near-infrared wavelengths of faint objects. Being above the atmosphere eliminates the problem of airglow , allowing Hubble to make observations of ultrafaint objects. Ground-based telescopes cannot compensate for airglow on ultrafaint objects, and so very faint objects require unwieldy and inefficient exposure times. Hubble can also observe at ultraviolet wavelengths which do not penetrate the atmosphere.
Each observatory was designed to push the state of technology in its region of the electromagnetic spectrum. Compton was much larger than any gamma-ray instruments flown on the previous HEAO missions, opening entirely new areas of observation. It had four instruments covering the 20 keV to 30 GeV energy range, which complemented each other's sensitivities, resolutions, and fields of view. Gamma rays are emitted by various high-energy and high-temperature sources, such as black holes , pulsars , and supernovae .
Chandra similarly had no ground predecessors. It followed the three NASA HEAO Program satellites, notably the highly successful Einstein Observatory , which was the first to demonstrate the power of grazing-incidence, focusing X-ray optics , giving spatial resolution an order of magnitude better than collimated instruments (comparable to optical telescopes), with an enormous improvement in sensitivity. Chandra's large size, high orbit, and sensitive CCDs allowed observations of very faint X-ray sources.
Spitzer also observes at wavelengths largely inaccessible to ground telescopes. It was preceded in space by NASA's smaller IRAS mission and European Space Agency (ESA)'s large ISO telescope. Spitzer's instruments took advantage of the rapid advances in infrared detector technology since IRAS, combined with its large aperture, favorable fields of view, and long life. Science returns were accordingly outstanding. [ citation needed ] Infrared observations are necessary for very distant astronomical objects where all the visible light is redshifted to infrared wavelengths, for cool objects which emit little visible light, and for regions optically obscured by dust.
Aside from inherent mission capabilities (particularly sensitivities, which cannot be replicated by ground observatories), the Great Observatories program allows missions to interact for greater science return. Different objects shine in different wavelengths, but training two or more observatories on an object allows a deeper understanding.
High-energy studies (in X-rays and gamma rays) have had only moderate imaging resolutions so far. Studying X-ray and gamma-ray objects with Hubble, as well as Chandra and Compton, gives accurate size and positional data. In particular, Hubble's resolution can often determine whether the target is a standalone object or part of a parent galaxy, and whether a bright object is in the nucleus, arms, or halo of a spiral galaxy . Similarly, the smaller aperture of Spitzer means that Hubble can add finer spatial information to a Spitzer image. In March 2016, it was reported that Spitzer and Hubble had been used to discover the most distant known galaxy at the time, GN-z11 . This object was seen as it appeared 13.4 billion years ago. [ 15 ] [ 16 ] ( List of the most distant astronomical objects )
Ultraviolet studies with Hubble also reveal the temporal states of high-energy objects. X-rays and gamma rays are harder to detect with current technologies than visible and ultraviolet light. Therefore, Chandra and Compton needed long integration times to gather enough photons. However, objects which shine in X-rays and gamma rays can be small, and can vary on timescales of minutes or seconds. Such objects then call for follow-up with Hubble or the Rossi X-ray Timing Explorer , which, owing to their different designs, can measure details at the arcsecond level or on timescales of fractions of a second. Rossi's last full year of operation was 2011.
The ability of Spitzer to see through dust and thick gases is good for galactic nuclei observations. Massive objects at the hearts of galaxies shine in X-rays, gamma rays, and radio waves, but infrared studies into these clouded regions can reveal the number and positions of objects.
Hubble, meanwhile, has neither the field of view nor the available time to study all interesting objects. Worthwhile targets are often found with ground telescopes, which are cheaper, or with smaller space observatories, which are sometimes expressly designed to cover large areas of the sky. Also, the other three Great Observatories have found interesting new objects, which merit diversion of Hubble.
One example of observatory synergy is Solar System and asteroid studies. Small bodies, such as small moons and asteroids, are too small and/or distant to be directly resolved even by Hubble; their image appears as a diffraction pattern determined by brightness, not size. However, the minimum size can be deduced by Hubble through knowledge of the body's albedo . The maximum size can be determined by Spitzer through knowledge of the body's temperature, which is largely known from its orbit. Thus, the body's true size is bracketed. Further spectroscopy by Spitzer can determine the chemical composition of the object's surface, which limits its possible albedos, and therefore sharpens the low size estimate.
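As a rough illustration of the optical half of this bracketing, the sketch below uses the standard relation between an asteroid's absolute magnitude H, its geometric albedo p_V, and its diameter; the H value and the albedo bounds are hypothetical, and a Spitzer-style thermal measurement would independently constrain the size from the other direction.

```python
import math

def diameter_km(abs_magnitude_h: float, albedo: float) -> float:
    """Standard small-body relation: D [km] = 1329 / sqrt(p_V) * 10**(-H / 5)."""
    return 1329.0 / math.sqrt(albedo) * 10 ** (-abs_magnitude_h / 5.0)

# Hypothetical object: H derived from its apparent brightness and distance,
# albedo only assumed to lie within a plausible range.
h = 18.0
bright_albedo, dark_albedo = 0.25, 0.05  # assumed bounds, not measurements

d_min = diameter_km(h, bright_albedo)  # a bright surface implies a smaller body
d_max = diameter_km(h, dark_albedo)    # a dark surface implies a larger body
print(f"Diameter bracketed between {d_min:.2f} km and {d_max:.2f} km")
```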
At the opposite end of the cosmic distance ladder , observations made with Hubble, Spitzer and Chandra have been combined in the Great Observatories Origins Deep Survey to yield a multi-wavelength picture of galaxy formation and evolution in the early Universe .
All four telescopes have had a substantial impact on astronomy. The opening up of new wavebands to high resolution, high sensitivity observations by the Compton, Chandra and Spitzer has revolutionized our understanding of a wide range of astronomical objects, and has led to the detection of thousands of new, interesting objects. Hubble has had a much larger public and media impact than the other telescopes, although at optical wavelengths Hubble has provided a more modest improvement in sensitivity and resolution over existing instruments. Hubble's capability for uniform high-quality imaging of any astronomical object at any time has allowed accurate surveys and comparisons of large numbers of astronomical objects. The Hubble Deep Field observations have been very important for studies of distant galaxies, as they provide rest-frame ultraviolet images of these objects with a similar number of pixels across the galaxies as previous ultraviolet images of closer galaxies, allowing direct comparison.
In 2016, NASA began considering four different Flagship space telescopes : [ 21 ] the Habitable Exoplanet Imaging Mission (HabEx), the Large UV Optical Infrared Surveyor (LUVOIR), the Origins Space Telescope (OST), and the Lynx X-ray Observatory . In 2019, the four teams submitted their final reports to the National Academy of Sciences , whose independent Decadal Survey committee advises NASA on which mission should take top priority. [ 21 ]
NASA announced the Habitable Worlds Observatory (HWO) in 2023, a successor building on the Large UV Optical Infrared Surveyor (LUVOIR) and Habitable Exoplanet Imaging Mission (HabEX) proposals. [ 22 ] The administration also created the Great Observatory Maturation Program for the development of the Habitable Worlds Observatory . [ 23 ] | https://en.wikipedia.org/wiki/Great_Observatories_program |
The Great Oxidation Event ( GOE ) or Great Oxygenation Event , also called the Oxygen Catastrophe , Oxygen Revolution , Oxygen Crisis or Oxygen Holocaust , [ 2 ] was a time interval during the Earth 's Paleoproterozoic era when the Earth's atmosphere and shallow seas first experienced a rise in the concentration of free oxygen . [ 3 ] This began approximately 2.460–2.426 billion years ago (Ga) during the Siderian period and ended approximately 2.060 Ga ago during the Rhyacian . [ 4 ] Geological, isotopic and chemical evidence suggests that biologically produced molecular oxygen ( dioxygen or O 2 ) started to accumulate in the Archean prebiotic atmosphere due to microbial photosynthesis , and eventually changed it from a weakly reducing atmosphere practically devoid of oxygen into an oxidizing one containing abundant free oxygen, [ 5 ] with oxygen levels being as high as 10% of modern atmospheric level by the end of the GOE. [ 6 ]
The appearance of highly reactive free oxygen, which can oxidize organic compounds (especially genetic materials ) and thus is toxic to the then-mostly anaerobic biosphere , may have caused the extinction / extirpation of many early organisms on Earth—mostly archaeal colonies that used retinal to harness green-spectrum light energy and power a form of anoxygenic photosynthesis (see Purple Earth hypothesis ). Although the event is inferred to have constituted a mass extinction , [ 7 ] due in part to the great difficulty in surveying microscopic organisms' abundances, and in part to the extreme age of fossil remains from that time, the Great Oxidation Event is typically not counted among conventional lists of " great extinctions ", which are implicitly limited to the Phanerozoic eon. In any case, isotope geochemistry data from sulfate minerals have been interpreted to indicate a decrease in the size of the biosphere of >80% associated with changes in nutrient supplies at the end of the GOE. [ 8 ]
The GOE is inferred to have been caused by cyanobacteria , which evolved chlorophyll -based photosynthesis that releases dioxygen as a byproduct of water photolysis . The continually produced oxygen eventually depleted all the surface reducing capacity from ferrous iron , sulfur , hydrogen sulfide and atmospheric methane over nearly a billion years. The oxidative environmental change, compounded by a global glaciation , devastated the microbial mats around the Earth's surface. The subsequent adaptation of surviving archaea via symbiogenesis with aerobic proteobacteria (which went endosymbiont and became mitochondria ) may have led to the rise of eukaryotic organisms and the subsequent evolution of multicellular life-forms. [ 9 ] [ 10 ] [ 11 ]
The composition of the Earth's earliest atmosphere is not known with certainty. However, the bulk was likely nitrogen N 2 , and carbon dioxide CO 2 , which are also the predominant nitrogen- and carbon-bearing gases produced by volcanism today. These are relatively inert gases. Oxygen, O 2 , meanwhile, was present in the atmosphere at just 0.001% of its present atmospheric level. [ 12 ] [ 13 ] The Sun shone at about 70% of its current brightness 4 billion years ago, but there is strong evidence that liquid water existed on Earth at the time. A warm Earth, in spite of a faint Sun, is known as the faint young Sun paradox . [ 14 ] Either CO 2 levels were much higher at the time, providing enough of a greenhouse effect to warm the Earth, or other greenhouse gases were present. The most likely such gas is methane , CH 4 , which is a powerful greenhouse gas and was produced by early forms of life known as methanogens . Scientists continue to research how the Earth was warmed before life arose. [ 15 ]
An atmosphere of N 2 and CO 2 with trace amounts of H 2 O , CH 4 , carbon monoxide ( CO ), and hydrogen ( H 2 ) is described as a weakly reducing atmosphere . [ 16 ] Such an atmosphere contains practically no oxygen. The modern atmosphere contains abundant oxygen (nearly 21%), making it an oxidizing atmosphere. [ 17 ] The rise in oxygen is attributed to photosynthesis by cyanobacteria , which are thought to have evolved as early as 3.5 billion years ago. [ 18 ]
The current scientific understanding of when and how the Earth's atmosphere changed from a weakly reducing to a strongly oxidizing atmosphere largely began with the work of the American geologist Preston Cloud in the 1970s. [ 14 ] Cloud observed that detrital sediments older than about 2 billion years contained grains of pyrite , uraninite , [ 14 ] and siderite , [ 17 ] all minerals containing reduced forms of iron or uranium that are not found in younger sediments because they are rapidly oxidized in an oxidizing atmosphere. He further observed that continental red beds , which get their color from the oxidized ( ferric ) mineral hematite , began to appear in the geological record at about this time. Banded iron formation largely disappears from the geological record at 1.85 Ga, after peaking at about 2.5 Ga. [ 19 ] Banded iron formation can form only when abundant dissolved ferrous iron is transported into depositional basins , and an oxygenated ocean blocks such transport by oxidizing the iron to form insoluble ferric iron compounds. [ 20 ] The end of the deposition of banded iron formation at 1.85 Ga is therefore interpreted as marking the oxygenation of the deep ocean. [ 14 ] Heinrich Holland further elaborated these ideas through the 1980s, placing the main time interval of oxygenation between 2.2 and 1.9 Ga. [ 15 ]
Constraining the onset of atmospheric oxygenation has proven particularly challenging for geologists and geochemists. While there is a widespread consensus that initial oxygenation of the atmosphere happened sometime during the first half of the Paleoproterozoic , there is disagreement on the exact timing of this event. Scientific publications between 2016–2022 have differed in the inferred timing of the onset of atmospheric oxygenation by approximately 500 million years; estimates of 2.7 Ga , [ 21 ] 2.501–2.434 Ga [ 22 ] 2.501–2.225 Ga, [ 23 ] 2.460–2.426 Ga, [ 4 ] 2.430 Ga, [ 24 ] 2.33 Ga, [ 25 ] and 2.3 Ga have been given. [ 26 ] Factors limiting calculations include an incomplete sedimentary record for the Paleoproterozoic (e.g., because of subduction and metamorphism ), uncertainties in depositional ages for many ancient sedimentary units , and uncertainties related to the interpretation of different geological/geochemical proxies . While the effects of an incomplete geological record have been discussed and quantified in the field of paleontology for several decades, particularly with respect to the evolution and extinction of organisms (the Signor–Lipps effect ), this is rarely quantified when considering geochemical records and may therefore lead to uncertainties for scientists studying the timing of atmospheric oxygenation. [ 23 ]
Evidence for the Great Oxidation Event is provided by a variety of petrological and geochemical markers that define this geological event .
Paleosols , detrital grains, and red beds are evidence of low oxygen levels. [ 27 ] Paleosols (fossil soils) older than 2.4 billion years have low iron concentrations that suggest anoxic weathering . [ 28 ] Detrital grains composed of pyrite, siderite, and uraninite (redox-sensitive detrital minerals) are found in sediments older than ca. 2.4 Ga. [ 29 ] These minerals are only stable under low oxygen conditions, and so their occurrence as detrital minerals in fluvial and deltaic sediments is widely interpreted as evidence of an anoxic atmosphere. [ 29 ] [ 30 ] In contrast to redox-sensitive detrital minerals are red beds, red-colored sandstones that are coated with hematite. The occurrence of red beds indicates that there was sufficient oxygen to oxidize iron to its ferric state, and these represent a marked contrast to sandstones deposited under anoxic conditions, which are often beige, white, grey, or green. [ 31 ]
Banded iron formations are composed of thin alternating layers of chert (a fine-grained form of silica ) and iron oxides ( magnetite and hematite). Extensive deposits of this rock type are found around the world, almost all of which are more than 1.85 billion years old and most of which were deposited around 2.5 Ga . The iron in banded iron formations is partially oxidized, with roughly equal amounts of ferrous and ferric iron. [ 32 ] Deposition of a banded iron formation requires both an anoxic deep ocean capable of transporting iron in soluble ferrous form, and an oxidized shallow ocean where the ferrous iron is oxidized to insoluble ferric iron and precipitates onto the ocean floor. [ 20 ] The deposition of banded iron formations before 1.8 Ga suggests the ocean was in a persistent ferruginous state, but deposition was episodic and there may have been significant intervals of euxinia . [ 33 ] The transition from deposition of banded iron formations to manganese oxides in some strata has been considered a key tipping point in the timing of the GOE because it is believed to indicate the escape of significant molecular oxygen into the atmosphere in the absence of ferrous iron as a reducing agent. [ 34 ]
Black laminated shales , rich in organic matter, are often regarded as a marker for anoxic conditions. However, the deposition of abundant organic matter is not a sure indication of anoxia, and burrowing organisms that destroy lamination had not yet evolved during the time frame of the Great Oxygenation Event. Thus laminated black shale by itself is a poor indicator of oxygen levels. Scientists must look instead for geochemical evidence of anoxic conditions. These include ferruginous anoxia, in which dissolved ferrous iron is abundant, and euxinia, in which hydrogen sulfide is present in the water. [ 35 ]
Examples of such indicators of anoxic conditions include the degree of pyritization (DOP), which is the ratio of iron present as pyrite to the total reactive iron. Reactive iron, in turn, is defined as iron found in oxides and oxyhydroxides, carbonates, and reduced sulfur minerals such as pyrites, in contrast with iron tightly bound in silicate minerals. [ 36 ] A DOP near zero indicates oxidizing conditions, while a DOP near 1 indicates euxinic conditions. Values of 0.3 to 0.5 are transitional, suggesting anoxic bottom mud under an oxygenated ocean. Studies of the Black Sea , which is considered a modern model for ancient anoxic ocean basins, indicate that high DOP, a high ratio of reactive iron to total iron, and a high ratio of total iron to aluminum are all indicators of transport of iron into a euxinic environment. Ferruginous anoxic conditions can be distinguished from euxinic conditions by a DOP less than about 0.7. [ 35 ]
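A toy helper, sketched below, simply encodes the DOP thresholds quoted above (reading the 0.5–0.7 band as ferruginous anoxic); real facies interpretation combines DOP with the other iron and aluminum ratios mentioned.

```python
def classify_dop(fe_pyrite: float, fe_reactive: float) -> str:
    """Degree of pyritization: pyrite-bound Fe divided by total reactive Fe."""
    dop = fe_pyrite / fe_reactive
    if dop < 0.3:
        label = "oxidizing bottom-water conditions"
    elif dop <= 0.5:
        label = "transitional (anoxic mud under an oxygenated ocean)"
    elif dop < 0.7:
        label = "ferruginous anoxic"
    else:
        label = "euxinic (free H2S in the water column)"
    return f"DOP = {dop:.2f}: {label}"

print(classify_dop(fe_pyrite=0.8, fe_reactive=1.0))  # -> euxinic
```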
The currently available evidence suggests that the deep ocean remained anoxic and ferruginous as late as 580 Ma, well after the Great Oxygenation Event, remaining just short of euxinic during much of this interval of time. Deposition of banded iron formation ceased when conditions of local euxinia on continental platforms and shelves began precipitating iron out of upwelling ferruginous water as pyrite. [ 33 ] [ 27 ] [ 35 ]
Some of the most persuasive evidence for the Great Oxidation Event is provided by the mass-independent fractionation (MIF) of sulfur. The chemical signature of the MIF of sulfur is found prior to 2.4–2.3 Ga but disappears thereafter. [ 37 ] The presence of this signature all but eliminates the possibility of an oxygenated atmosphere. [ 17 ]
Different isotopes of a chemical element have slightly different atomic masses. Most of the differences in geochemistry between isotopes of the same element scale with this mass difference. These include small differences in molecular velocities and diffusion rates, which are described as mass-dependent fractionation processes. By contrast, MIF describes processes that are not proportional to the difference in mass between isotopes. The only such process likely to be significant in the geochemistry of sulfur is photodissociation . This is the process in which a molecule containing sulfur is broken up by solar ultraviolet (UV) radiation. The presence of a clear MIF signature for sulfur prior to 2.4 Ga shows that UV radiation was penetrating deep into the Earth's atmosphere. This in turn rules out an atmosphere containing more than traces of oxygen, which would have produced an ozone layer that would have shielded the lower atmosphere from UV radiation. The disappearance of the MIF signature for sulfur indicates the formation of such an ozone shield as oxygen began to accumulate in the atmosphere. [ 17 ] [ 27 ] MIF of sulphur also indicates the presence of oxygen in that oxygen is required to facilitate repeated redox cycling of sulphur. [ 38 ]
MIF provides clues to the Great Oxygenation Event. For example, oxidation of manganese in surface rocks by atmospheric oxygen leads to further reactions that oxidize chromium. The heavier 53 Cr is oxidized preferentially over the lighter 52 Cr, and the soluble oxidized chromium carried into the ocean shows this enhancement of the heavier isotope. The chromium isotope ratio in banded iron formation suggests small but significant quantities of oxygen in the atmosphere before the Great Oxidation Event, and a brief return to low oxygen abundance 500 Ma after the GOE. However, the chromium data may conflict with the sulfur isotope data, which calls the reliability of the chromium data into question. [ 39 ] [ 40 ] It is also possible that oxygen was present earlier only in localized "oxygen oases". [ 41 ] Since chromium is not easily dissolved, its release from rocks requires the presence of a powerful acid such as sulfuric acid (H 2 SO 4 ) which may have formed through bacterial oxidation of pyrite. This could provide some of the earliest evidence of oxygen-breathing life on land surfaces. [ 42 ]
Other elements whose MIF may provide clues to the GOE include carbon, nitrogen, transitional metals such as molybdenum and iron, and non-metal elements such as selenium . [ 27 ]
While the GOE is generally thought to be a result of oxygenic photosynthesis by ancestral cyanobacteria, the presence of cyanobacteria in the Archaean before the GOE is a highly controversial topic. [ 43 ] Structures that are claimed to be fossils of cyanobacteria exist in rock formed 3.5 Ga . [ 44 ] These include microfossils of supposedly cyanobacterial cells and macrofossils called stromatolites , which are interpreted as colonies of microbes, including cyanobacteria, with characteristic layered structures. Modern stromatolites, which can only be seen in harsh environments such as Shark Bay in Western Australia, are associated with cyanobacteria, and thus fossil stromatolites had long been interpreted as the evidence for cyanobacteria. [ 44 ] However, it has increasingly been inferred that at least some of these Archaean fossils were generated abiotically or produced by non-cyanobacterial phototrophic bacteria. [ 45 ]
Additionally, Archaean sedimentary rocks were once found to contain biomarkers , also known as chemical fossils , interpreted as fossilized membrane lipids from cyanobacteria and eukaryotes . For example, traces of 2α-methylhopanes and steranes that are thought to be derived from cyanobacteria and eukaryotes, respectively, were found in the Pilbara of Western Australia. [ 46 ] Steranes are diagenetic products of sterols, which are biosynthesized using molecular oxygen. Thus, steranes can additionally serve as an indicator of oxygen in the atmosphere. However, these biomarker samples have since been shown to have been contaminated, and so the results are no longer accepted. [ 47 ]
Carbonaceous microfossils from the Turee Creek Group of Western Australia, which date back to ~2.45–2.21 Ga, have been interpreted as iron-oxidising bacteria . Their presence suggests a minimum threshold of seawater oxygen content had been reached by this interval of time. [ 48 ]
Some elements in marine sediments are sensitive to different levels of oxygen in the environment such as the transition metals molybdenum [ 35 ] and rhenium . [ 49 ] Non-metal elements such as selenium and iodine are also indicators of oxygen levels. [ 50 ]
The ability to generate oxygen via photosynthesis likely first appeared in the ancestors of cyanobacteria. [ 51 ] These organisms evolved at least 2.45–2.32 Ga [ 52 ] [ 53 ] and probably as early as 2.7 Ga or earlier. [ 14 ] [ 54 ] [ 3 ] [ 55 ] [ 56 ] However, oxygen remained scarce in the atmosphere until around 2.0 Ga, [ 15 ] and banded iron formation continued to be deposited until around 1.85 Ga. [ 14 ] Given the rapid multiplication rate of cyanobacteria under ideal conditions, an explanation is needed for the delay of at least 400 million years between the evolution of oxygen-producing photosynthesis and the appearance of significant oxygen in the atmosphere. [ 15 ]
Hypotheses to explain this gap must take into consideration the balance between oxygen sources and oxygen sinks. Oxygenic photosynthesis produces organic carbon that must be segregated from oxygen to allow oxygen accumulation in the surface environment; otherwise the oxygen back-reacts with the organic carbon and does not accumulate. The burial of organic carbon, sulfide, and minerals containing ferrous iron (Fe 2+ ) is a primary factor in oxygen accumulation. [ 57 ] When organic carbon is buried without being oxidized, the oxygen is left in the atmosphere. In total, the burial of organic carbon and pyrite today creates 15.8 ± 3.3 Tmol (1 Tmol = 10¹² moles) of O 2 per year. This represents the net O 2 flux from the global oxygen sources.
The rate of change of oxygen can be calculated from the difference between global sources and sinks. [ 27 ] The oxygen sinks include reduced gases and minerals from volcanoes , metamorphism and weathering. [ 27 ] The GOE started after these oxygen-sink fluxes and reduced-gas fluxes were exceeded by the flux of O 2 associated with the burial of reductants, such as organic carbon. [ 58 ] About 12.0 ± 3.3 Tmol of O 2 per year today goes to the sinks composed of reduced minerals and gases from volcanoes, metamorphism, percolating seawater and heat vents from the seafloor. [ 27 ] On the other hand, 5.7 ± 1.2 Tmol of O 2 per year today oxidizes reduced gases in the atmosphere through photochemical reaction. [ 27 ] On the early Earth, there was visibly very little oxidative weathering of continents (e.g., a lack of red beds ), and so the weathering sink on oxygen would have been negligible compared to that from reduced gases and dissolved iron in oceans.
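Taking the modern fluxes quoted above at face value (all in Tmol of O₂ per year, uncertainties added in quadrature) gives a rough budget:

$$\frac{d\,\mathrm{O_2}}{dt} \approx \underbrace{15.8}_{\text{burial source}} \;-\; \underbrace{(12.0 + 5.7)}_{\text{geological + photochemical sinks}} \;\approx\; -1.9 \pm 4.8\ \text{Tmol/yr},$$

which is consistent with zero within the stated uncertainties; the modern atmosphere is close to steady state, whereas the GOE required the source term to persistently exceed the sinks.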
Dissolved iron in oceans exemplifies O 2 sinks. Free oxygen produced during this time was chemically captured by dissolved iron, converting Fe and Fe²⁺ into magnetite (Fe²⁺Fe³⁺₂O₄, i.e. Fe₃O₄), which is insoluble in water and sank to the bottom of the shallow seas to create banded iron formations. [ 58 ] It took 50 million years or longer to deplete the oxygen sinks. [ 59 ] The rate of photosynthesis and associated rate of organic burial also affect the rate of oxygen accumulation. When land plants spread over the continents in the Devonian , more organic carbon was buried and likely allowed higher O 2 levels to occur. [ 60 ] Today, the average time that an O 2 molecule spends in the air before it is consumed by geological sinks is about 2 million years. [ 61 ] That residence time is relatively short in geologic time; so in the Phanerozoic , there must have been feedback processes that kept the atmospheric O 2 level within bounds suitable for animal life.
Preston Cloud originally proposed that the first cyanobacteria had evolved the capacity to carry out oxygen-producing photosynthesis but had not yet evolved enzymes (such as superoxide dismutase ) for living in an oxygenated environment. These cyanobacteria would have been protected from their own poisonous oxygen waste through its rapid removal via the high levels of reduced ferrous iron, Fe(II), in the early ocean. He suggested that the oxygen released by photosynthesis oxidized the Fe(II) to ferric iron, Fe(III), which precipitated out of the sea water to form banded iron formation. [ 62 ] [ 63 ] He interpreted the great peak in deposition of banded iron formation at the end of the Archean as the signature for the evolution of mechanisms for living with oxygen. This ended self-poisoning and produced a population explosion in the cyanobacteria that rapidly oxygenated the ocean and ended banded iron formation deposition. [ 62 ] [ 63 ] However, improved dating of Precambrian strata showed that the late Archean peak of deposition was spread out over tens of millions of years, rather than taking place in a very short interval of time following the evolution of oxygen-coping mechanisms. This made Cloud's hypothesis untenable. [ 19 ]
Most modern interpretations describe the GOE as a long, protracted process that took place over hundreds of millions of years rather than a single abrupt event, with the quantity of atmospheric oxygen fluctuating in relation to the capacity of oxygen sinks and the productivity of oxygenic photosynthesisers over the course of the GOE. [ 3 ] More recently, families of bacteria have been discovered that closely resemble cyanobacteria but show no indication of ever having possessed photosynthetic capability. These may be descended from the earliest ancestors of cyanobacteria, which only later acquired photosynthetic ability by lateral gene transfer . Based on molecular clock data, the evolution of oxygen-producing photosynthesis may have occurred much later than previously thought, at around 2.5 Ga. This reduces the gap between the evolution of oxygen photosynthesis and the appearance of significant atmospheric oxygen. [ 64 ]
Another possibility is that early cyanobacteria were starved for vital nutrients, and this checked their growth. However, a lack of the scarcest nutrients, iron, nitrogen, and phosphorus, could have slowed but not prevented a cyanobacteria population explosion and rapid oxygenation. The explanation for the delay in the oxygenation of the atmosphere following the evolution of oxygen-producing photosynthesis likely lies in the presence of various oxygen sinks on the young Earth. [ 15 ]
Early chemosynthetic organisms likely produced methane , an important trap for molecular oxygen, since methane readily oxidizes to carbon dioxide (CO 2 ) and water in the presence of UV radiation . Modern methanogens require nickel as an enzyme cofactor . As the Earth's crust cooled and the supply of volcanic nickel dwindled, oxygen-producing algae began to outperform methane producers, and the oxygen percentage of the atmosphere steadily increased. [ 65 ] From 2.7 to 2.4 Ga the rate of deposition of nickel declined steadily from a level 400 times that of today. [ 66 ] This nickel famine was somewhat buffered by an uptick in sulfide weathering at the start of the GOE that brought some nickel to the oceans, without which methanogenic organisms would have declined in abundance more precipitously, plunging Earth into even more severe and long-lasting icehouse conditions than those seen during the Huronian glaciation . [ 67 ]
Another hypothesis posits that a number of large igneous provinces (LIPs) were emplaced during the GOE and fertilised the oceans with limiting nutrients, facilitating and sustaining cyanobacterial blooms. [ 68 ]
One hypothesis argues that the GOE was the immediate result of photosynthesis, although the majority of scientists suggest that a long-term increase of oxygen is more likely. [ 69 ] Several model results show possibilities of long-term increase of carbon burial, [ 70 ] but the conclusions are indeterminate. [ 71 ]
In contrast to the increasing flux hypothesis, there are several hypotheses that attempt to use a decrease of sinks to explain the GOE. [ 72 ] One theory suggests increasing lacustrine organic carbon burial as a cause; with more reduced carbon being buried, there was less of it for free oxygen to react with in the atmosphere and oceans, enabling its buildup. [ 73 ] A different theory suggests that the composition of the volatiles from volcanic gases was more oxidized. [ 57 ] Another theory suggests that the decrease of metamorphic gases and serpentinization is the main key to the GOE. Hydrogen and methane released from metamorphic processes are also lost from Earth's atmosphere over time and leave the crust oxidized. [ 74 ] Scientists realized that hydrogen would escape into space through a process called methane photolysis, in which methane decomposes under the action of ultraviolet light in the upper atmosphere and releases its hydrogen. The escape of hydrogen from the Earth into space must have oxidized the Earth because the process of hydrogen loss is chemical oxidation. [ 74 ] This process of hydrogen escape required the generation of methane by methanogens, so that methanogens actually helped create the conditions necessary for the oxidation of the atmosphere. [ 41 ]
One hypothesis suggests that the oxygen increase had to await tectonically driven changes in the Earth, including the appearance of shelf seas, where reduced organic carbon could reach the sediments and be buried. [ 75 ] The burial of reduced carbon as graphite or diamond around subduction zones released molecular oxygen into the atmosphere. [ 76 ] [ 77 ] The appearance of oxidised magmas enriched in sulphur formed around subduction zones confirms changes in tectonic regime played an important role in the oxygenation of Earth's atmosphere. [ 78 ]
The newly produced oxygen was first consumed in various chemical reactions in the oceans, primarily with iron. Evidence is found in older rocks that contain massive banded iron formations apparently laid down as this iron and oxygen first combined; most present-day iron ore lies in these deposits. It was assumed that oxygen released from cyanobacteria resulted in the chemical reactions that created rust, but it appears the iron formations were caused by anoxygenic phototrophic iron-oxidizing bacteria, which do not require oxygen. [ 79 ] Evidence suggests oxygen levels spiked each time smaller land masses collided to form a super-continent. Tectonic pressure thrust up mountain chains, which eroded, releasing nutrients into the ocean that fed photosynthetic cyanobacteria. [ 80 ]
Another hypothesis posits a model of the atmosphere that exhibits bistability : two steady states of oxygen concentration. The state of stable low oxygen concentration (0.02%) experiences a high rate of methane oxidation. If some event raises oxygen levels beyond a moderate threshold, the formation of an ozone layer shields the lower atmosphere from UV rays and decreases methane oxidation, raising oxygen further to a stable state of 21% or more. The Great Oxygenation Event can then be understood as a transition from the lower to the upper steady states. [ 81 ] [ 82 ]
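The sketch below is a minimal numerical toy of this bistability idea (illustrative constants, not the published photochemical model): oxygen is supplied at a constant rate and removed by UV-driven methane oxidation, whose rate coefficient drops once oxygen passes a threshold and an ozone layer forms. Two different starting points then settle onto two different stable states.

```python
# Toy bistable oxygen model: constant production, a strong sink below an
# ozone-forming threshold and a weak sink above it (all units arbitrary).
P = 1.0                              # oxygen production rate
K_NO_OZONE, K_OZONE = 50.0, 0.04     # sink coefficients without / with ozone
O2_THRESHOLD = 0.5                   # level above which an ozone layer forms

def d_o2_dt(o2: float) -> float:
    k = K_NO_OZONE if o2 < O2_THRESHOLD else K_OZONE
    return P - k * o2

def integrate(o2_start: float, dt: float = 1e-3, steps: int = 200_000) -> float:
    o2 = o2_start
    for _ in range(steps):
        o2 += d_o2_dt(o2) * dt
    return o2

print(integrate(0.01))  # stays at the low state,  P / K_NO_OZONE = 0.02
print(integrate(0.60))  # climbs to the high state, P / K_OZONE    = 25.0
```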
Cyanobacteria tend to consume nearly as much oxygen at night as they produce during the day. However, experiments demonstrate that cyanobacterial mats produce a greater excess of oxygen with longer photoperiods. The rotational period of the Earth was only about six hours shortly after its formation 4.5 Ga but increased to 21 hours by 2.4 Ga in the Paleoproterozoic. The rotational period increased again, starting 700 million years ago, to its present value of 24 hours. The total amount of oxygen produced by the cyanobacteria remained the same with longer days, but the longer the day, the more time oxygen has to diffuse into the water. [ 83 ] [ 84 ] [ 85 ]
One group of researchers has suggested that, if certain conditions were present (a low-productivity trajectory), it may have been plants, instead of cyanobacteria, that made the greatest contribution of oxygen to the GOE. [ 11 ]
Eventually, oxygen started to accumulate in the atmosphere, with two major consequences.
The Great Oxygenation Event triggered an explosive growth in the diversity of minerals , with many elements occurring in one or more oxidized forms near the Earth's surface. [ 90 ] It is estimated that the GOE was directly responsible for deposition of more than 2,500 of the total of about 4,500 minerals found on Earth today. Most of these new minerals were formed as hydrated and oxidized forms due to dynamic mantle and crust processes. [ 91 ]
In field studies done in Lake Fryxell , Antarctica, scientists found that mats of oxygen-producing cyanobacteria produced a thin layer, one to two millimeters thick, of oxygenated water in an otherwise anoxic environment , even under thick ice. By inference, these organisms could have adapted to oxygen even before oxygen accumulated in the atmosphere. [ 92 ] The evolution of such oxygen-dependent organisms eventually established an equilibrium in the availability of oxygen, which became a major constituent of the atmosphere. [ 92 ]
It has been proposed that a local rise in oxygen levels due to cyanobacterial photosynthesis in ancient microenvironments was highly toxic to the surrounding biota and that this selective pressure drove the evolutionary transformation of an archaeal lineage into the first eukaryotes . [ 93 ] Oxidative stress involving production of reactive oxygen species (ROS) might have acted in synergy with other environmental stresses (such as ultraviolet radiation and desiccation ) to drive selection in an early archaeal lineage towards eukaryosis. This archaeal ancestor may already have had DNA repair mechanisms based on DNA pairing and recombination , and possibly some cell fusion mechanism. [ 94 ] [ 95 ] The detrimental effects of internal ROS (produced by endosymbiont proto- mitochondria ) on the archaeal genome could have promoted the evolution of meiotic sex from these humble beginnings. [ 94 ] Selective pressure for efficient DNA repair of oxidative DNA damage may have driven the evolution of eukaryotic sex involving such features as cell-cell fusions, cytoskeleton-mediated chromosome movements, and the emergence of the nuclear membrane . [ 93 ] Thus, the evolution of eukaryotic sex and eukaryogenesis were likely inseparable processes that largely evolved to facilitate DNA repair. [ 93 ] The evolution of mitochondria, which are well suited for oxygenated environments, may have occurred during the GOE. [ 96 ]
However, other authors express skepticism that the GOE resulted in widespread eukaryotic diversification due to the lack of robust evidence, concluding that the oxygenation of the oceans and atmosphere does not necessarily lead to increases in ecological and physiological diversity. [ 97 ]
The rise in oxygen content was not linear: instead, there was a rise in oxygen content around 2.3 Ga, followed by a drop around 2.1 Ga. This rise in oxygen is called the Lomagundi-Jatuli event , Lomagundi event , [ 98 ] [ 99 ] or Lomagundi-Jatuli excursion [ 100 ] (named for a district of Southern Rhodesia ) and the time period has been termed Jatulian ; it is currently considered to be part of the Rhyacian period. [ 101 ] [ 102 ] [ 103 ] During the Lomagundi-Jatuli event, oxygen amounts in the atmosphere reached similar heights to modern levels, before returning to low levels during the following stage, which caused the deposition of black shales (rocks that contain large amounts of organic matter that would otherwise have been burned away by oxygen). This drop in oxygen levels is called the Shunga-Francevillian event . Evidence for the event has been found globally in places such as Fennoscandia and the Wyoming Craton . [ 104 ] [ 105 ] Oceans seem to have stayed rich in oxygen for some time even after the event ended. [ 102 ] [ 106 ]
It has been hypothesized that eukaryotes first evolved during the Lomagundi-Jatuli event. [ 102 ] | https://en.wikipedia.org/wiki/Great_Oxidation_Event |
The Rationality Debate —also called the Great Rationality Debate —is the question of whether humans are rational or not. This issue is a topic in the study of cognition and is important in fields such as economics where it is relevant to the theories of market efficiency .
Many studies in experimental psychology have shown that humans often reason in a way that is inaccurate or imperfect—that they do not naturally choose the ideal method or solution. [ 1 ] An example of a problem which causes difficulty and debate is the St. Petersburg paradox . [ 2 ] This is a lottery constructed so that its expected value is infinite, yet the large payoffs are so unlikely that most people will not pay a large fee to play. Gerd Gigerenzer explained that, in this case, mathematicians refined their formulae to model this pragmatic behaviour. [ 3 ] Keith Stanovich characterizes this as a Panglossian position in the debate—that humans are fundamentally rational and any variance between the normative position and empirical outcomes may be explained by such adjustments. [ 4 ]
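A minimal simulation of the standard form of the lottery (the pot starts at 2 and doubles for every consecutive head before the first tail) illustrates the tension: the expected value is infinite, yet the average winnings over even a million simulated plays remain modest.

```python
import random

def st_petersburg_payout() -> int:
    """One play: the pot starts at 2 and doubles for each consecutive head."""
    pot = 2
    while random.random() < 0.5:  # heads with probability 1/2
        pot *= 2
    return pot

# E[payout] = 2*(1/2) + 4*(1/4) + 8*(1/8) + ... diverges, but sample means grow
# only roughly logarithmically with the number of plays.
for n in (10**3, 10**5, 10**6):
    mean = sum(st_petersburg_payout() for _ in range(n)) / n
    print(f"{n:>8} plays: average payout ~ {mean:.1f}")
```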
| https://en.wikipedia.org/wiki/Great_Rationality_Debate
The Zasechnaya cherta ( Russian : Большая засечная черта ) was a chain of fortification lines, created by the Grand Duchy of Moscow and later the Tsardom of Russia to protect it from the Crimean-Nogai Raids that ravaged the southern provinces of the country via the Muravsky Trail during the Russo-Crimean Wars . [ 1 ] It was south of the original line along the Oka River . It also served as a border between the Muscovite State and the steppe nomads . As a fortification line stretching for hundreds of kilometers, the Great Abatis Border is analogous to the Great Wall of China and the Roman limes .
Abatis is a military term for a field fortification made by cutting down trees. The line was built from the felled trees arranged as a barricade and fortified by ditches and earth mounds, palisades, watch towers and natural features like lakes and swamps . The width of the abatis totalled up to several hundred meters. In the most dangerous places the abatis was doubled or trebled, and gates and small wooden fortresses were built to check those passing through. [ 2 ] Peasants who lived nearby were forbidden to settle or cut wood in the area, but were required by authorities to spend part of their time supporting and renewing the fortifications. [ 3 ] In the autumn, large areas of steppe grass beyond the line were burned to deny fodder to raiders.
Stone and wooden kremlins of the towns were also included in the Great Abatis Line. Among these towns were: Serpukhov , Kolomna , Zaraysk , Tula , Ryazan , Belyov . Other fortresses in the line were smaller ostrogs .
There were a large number of fortification lines in Russian history and it is difficult to get good information on them.
The lines naturally moved south as the Russian state expanded. The earliest reference to abatis fortifications appears to be in a Novgorod chronicle of 1137-1139. Abatis lines began appearing in southern Rus' in the 13th century. The 'Great Abatis Line' extended from Bryansk to Meschera and was nominally completed in 1566. It was guarded by a local militia of about 35,000 in the second half of the 16th century. Another source gives an annual callup of 65,000. Behind the line was a mobile army headquartered in Tula (6,279 men in 1616, 17,005 in 1636).
There are several notable lines. The oldest one (finished by 1563-1566) ran from Nizhniy Novgorod along the Oka River to Kozelsk , [ 4 ] and was built by Ivan the Terrible . The next one built followed the Alatyr - Orel - Novgorod Seversky - Putivl line. Feodor I of Russia had the abatis built along the Livny - Kursk - Voronezh - Belgorod line. The Simbirsk line [ 5 ] was built about 1640, continuing the Belgorod line from Tambov to Simbirsk on the Volga River . [ 6 ] In 1730-31 the Kama line separated Kazan from the Bashkirs. From about 1736 on, a Samara-Orenburg line closed in the Bashkirs from the south. | https://en.wikipedia.org/wiki/Great_Zasechnaya_cherta |
In mathematics , a great circle or orthodrome is the circular intersection of a sphere and a plane passing through the sphere's center point . [ 1 ] [ 2 ]
Any arc of a great circle is a geodesic of the sphere, so that great circles in spherical geometry are the natural analog of straight lines in Euclidean space . For any pair of distinct non- antipodal points on the sphere, there is a unique great circle passing through both. (Every great circle through any point also passes through its antipodal point, so there are infinitely many great circles through two antipodal points.) The shorter of the two great-circle arcs between two distinct points on the sphere is called the minor arc , and is the shortest surface-path between them. Its arc length is the great-circle distance between the points (the intrinsic distance on a sphere), and is proportional to the measure of the central angle formed by the two points and the center of the sphere.
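The great-circle distance is therefore just the sphere's radius multiplied by the central angle between the two points. A minimal sketch, assuming a spherical Earth with a mean radius of 6371 km (the function name, radius and sample coordinates are illustrative, not taken from the text):

import math

def great_circle_distance(lat1, lon1, lat2, lon2, radius=6371.0):
    """Great-circle distance: radius times the central angle between two points (degrees in, km out)."""
    phi1, lam1, phi2, lam2 = map(math.radians, (lat1, lon1, lat2, lon2))
    # Unit vectors from the sphere's centre to each point.
    u = (math.cos(phi1) * math.cos(lam1), math.cos(phi1) * math.sin(lam1), math.sin(phi1))
    v = (math.cos(phi2) * math.cos(lam2), math.cos(phi2) * math.sin(lam2), math.sin(phi2))
    dot = sum(a * b for a, b in zip(u, v))
    cross = (u[1] * v[2] - u[2] * v[1], u[2] * v[0] - u[0] * v[2], u[0] * v[1] - u[1] * v[0])
    # The atan2 form of the central angle stays accurate for nearly coincident and nearly antipodal points.
    angle = math.atan2(math.sqrt(sum(c * c for c in cross)), dot)
    return radius * angle

print(round(great_circle_distance(51.5, 0.0, 40.7, -74.0)))  # London to New York (rough coordinates): about 5,600 km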
A great circle is the largest circle that can be drawn on any given sphere. Any diameter of any great circle coincides with a diameter of the sphere, and therefore every great circle is concentric with the sphere and shares the same radius . Any other circle of the sphere is called a small circle , and is the intersection of the sphere with a plane not passing through its center. Small circles are the spherical-geometry analog of circles in Euclidean space.
Every circle in Euclidean 3-space is a great circle of exactly one sphere.
The disk bounded by a great circle is called a great disk : it is the intersection of a ball and a plane passing through its center.
In higher dimensions, the great circles on the n -sphere are the intersection of the n -sphere with 2-planes that pass through the origin in the Euclidean space R n + 1 .
Half of a great circle may be called a great semicircle (e.g., as in parts of a meridian in astronomy ).
To prove that the minor arc of a great circle is the shortest path connecting two points on the surface of a sphere, one can apply calculus of variations to it.
Consider the class of all regular paths from a point p {\displaystyle p} to another point q {\displaystyle q} . Introduce spherical coordinates so that p {\displaystyle p} coincides with the north pole. Any curve on the sphere that does not intersect either pole, except possibly at the endpoints, can be parametrized by
θ = θ ( t ) , ϕ = ϕ ( t ) , a ≤ t ≤ b , {\displaystyle \theta =\theta (t),\quad \phi =\phi (t),\quad a\leq t\leq b,}
provided ϕ {\displaystyle \phi } is allowed to take on arbitrary real values. The infinitesimal arc length in these coordinates is
d s = r θ ′ 2 + ϕ ′ 2 sin 2 θ d t . {\displaystyle ds=r{\sqrt {\theta '^{2}+\phi '^{2}\sin ^{2}\theta }}\,dt.}
So the length of a curve γ {\displaystyle \gamma } from p {\displaystyle p} to q {\displaystyle q} is a functional of the curve given by
S [ γ ] = r ∫ a b θ ′ 2 + ϕ ′ 2 sin 2 θ d t . {\displaystyle S[\gamma ]=r\int _{a}^{b}{\sqrt {\theta '^{2}+\phi '^{2}\sin ^{2}\theta }}\,dt.}
According to the Euler–Lagrange equation , S [ γ ] {\displaystyle S[\gamma ]} is minimized if and only if
sin 2 θ ϕ ′ θ ′ 2 + ϕ ′ 2 sin 2 θ = C , {\displaystyle {\frac {\sin ^{2}\theta \,\phi '}{\sqrt {\theta '^{2}+\phi '^{2}\sin ^{2}\theta }}}=C,}
where C {\displaystyle C} is a t {\displaystyle t} -independent constant, and
sin θ cos θ ϕ ′ 2 θ ′ 2 + ϕ ′ 2 sin 2 θ = d d t θ ′ θ ′ 2 + ϕ ′ 2 sin 2 θ . {\displaystyle {\frac {\sin \theta \cos \theta \,\phi '^{2}}{\sqrt {\theta '^{2}+\phi '^{2}\sin ^{2}\theta }}}={\frac {d}{dt}}{\frac {\theta '}{\sqrt {\theta '^{2}+\phi '^{2}\sin ^{2}\theta }}}.}
From the first equation of these two, it can be obtained that
ϕ ′ = C θ ′ sin θ sin 2 θ − C 2 . {\displaystyle \phi '={\frac {C\,\theta '}{\sin \theta {\sqrt {\sin ^{2}\theta -C^{2}}}}}.}
Integrating both sides and considering the boundary condition, the real solution of C {\displaystyle C} is zero. Thus, ϕ ′ = 0 {\displaystyle \phi '=0} and θ {\displaystyle \theta } can be any value between 0 and θ 0 {\displaystyle \theta _{0}} , indicating that the curve must lie on a meridian of the sphere. In a Cartesian coordinate system , this is
x sin ϕ 0 − y cos ϕ 0 = 0 {\displaystyle x\sin \phi _{0}-y\cos \phi _{0}=0} (with ϕ 0 {\displaystyle \phi _{0}} the constant value of ϕ {\displaystyle \phi } ),
which is a plane through the origin, i.e., the center of the sphere.
Some examples of great circles on the celestial sphere include the celestial horizon , the celestial equator , and the ecliptic . Great circles are also used as rather accurate approximations of geodesics on the Earth 's surface for air or sea navigation (although it is not a perfect sphere ), as well as on spheroidal celestial bodies .
The equator of the idealized earth is a great circle and any meridian and its opposite meridian form a great circle. Another great circle is the one that divides the land and water hemispheres . A great circle divides the earth into two hemispheres and if a great circle passes through a point it must pass through its antipodal point .
The Funk transform integrates a function along all great circles of the sphere. | https://en.wikipedia.org/wiki/Great_circle |
A great comet is a comet that becomes exceptionally bright. There is no official definition; often the term is attached to comets such as Halley's Comet , which during certain appearances are bright enough to be noticed by casual observers who are not looking for them, and become well known outside the astronomical community. Typically, they are as bright or brighter than a second magnitude star and have tails that are 10 degrees or longer under dark skies. [ 1 ] Great comets appear at irregular, unpredictable intervals, on average about once per decade . Although comets are officially named after their discoverers, great comets are sometimes also referred to by the year in which they appeared great, using the formulation "The Great Comet of ...", followed by the year. It can also be used as a generic name when a very bright comet is discovered by many observers simultaneously. [ 2 ]
The vast majority of comets are never bright enough to be seen by the naked eye, and generally pass through the inner Solar System unseen by anyone except astronomers . However, occasionally a comet may brighten to naked eye visibility, and even more rarely it may become as bright as or brighter than the brightest stars. The requirements for this to occur are: a large and active nucleus , a close approach to the Sun , and a close approach to the Earth . A comet fulfilling all three of these criteria will certainly be very bright. Sometimes, a comet failing on one criterion will still be bright. For example, Comet Hale–Bopp did not approach the Sun very closely, but had an exceptionally large and active nucleus. It was visible to the naked eye for several months and was very widely observed. Similarly, Comet Hyakutake was a relatively small comet, but appeared bright because it passed very close to the Earth.
Cometary nuclei vary in size from a few hundreds of metres across or less to many kilometres across. When they approach the Sun, large amounts of gas and dust are ejected by cometary nuclei, due to solar heating. A crucial factor in how bright a comet becomes is how large and how active its nucleus is. After many returns to the inner Solar System, cometary nuclei become depleted in volatile materials and thus are much less bright than comets which are making their first passage through the Solar System.
The sudden brightening of Comet Holmes in 2007 showed the importance of the activity of the nucleus in the comet's brightness. On October 23–24, 2007, the comet underwent a sudden outburst which caused it to brighten by a factor of about half a million. It unexpectedly brightened from an apparent magnitude of about 17 to about 2.8 in a period of only 42 hours, making it visible to the naked eye. The outburst temporarily made comet 17P the largest (by radius) object in the Solar System, although its nucleus is estimated to be only about 3.4 km in diameter.
The brightness of a simple reflective body varies with the inverse square of its distance from the Sun. That is, if an object's distance from the Sun is halved, its brightness is quadrupled. However, comets behave differently, due to their ejection of large amounts of volatile gas which then also reflect sunlight and may also fluoresce . Their brightness varies roughly as the inverse cube of their distance from the Sun, meaning that if a comet's distance from the Sun is halved, it will become eight times as bright.
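The figures quoted above can be checked with a couple of lines: on the astronomical magnitude scale each step of 5 magnitudes corresponds to a factor of 100 in brightness, and the distance laws are simple power laws. A small sketch (the helper name is illustrative):

def brightness_ratio(mag_before, mag_after):
    # Each magnitude is a factor of 100 ** (1/5), i.e. about 2.512, in brightness.
    return 100 ** ((mag_before - mag_after) / 5)

# Comet Holmes' 2007 outburst, using the magnitudes quoted above (about 17 to about 2.8):
print(f"{brightness_ratio(17, 2.8):.3g}")   # ~5e+05, i.e. roughly half a million

# Halving the distance from the Sun:
print(2 ** 2)  # a bare reflective body becomes 4 times brighter (inverse square)
print(2 ** 3)  # an active comet becomes roughly 8 times brighter (inverse cube)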
This means that the peak brightness of a comet depends significantly on its distance from the Sun. For most comets, the perihelion of their orbit lies outside the Earth's orbit. Any comet approaching the Sun to within 0.5 AU (75 million km ) or less may have a chance of becoming a great comet.
For a comet to become very bright, it also needs to pass close to the Earth. Halley's Comet , for example, is usually very bright when it passes through the inner Solar System every seventy-six years, but during its 1986 apparition , its closest approach to Earth was nearly as distant as possible. The comet became visible to the naked eye, but was unspectacular. On the other hand, the intrinsically small and faint Comet Hyakutake (C/1996 B2) appeared very bright and spectacular due to its very close approach to Earth at its nearest during March 1996. Its passage near the Earth was one of the closest cometary approaches on record with a distance of 0.1 AU (15 million km ; 39 LD ).
Great comets of the past two millennia include the following. This list includes multiple bright apparitions of Halley's Comet since 86 BC: | https://en.wikipedia.org/wiki/Great_comet |
In geometry , the great duoantiprism is the only uniform star- duoantiprism solution p = 5, q = 5 / 3 , in 4-dimensional geometry . It has Schläfli symbol {5}⊗{5/3}, s{5}s{5/3} or ht 0,1,2,3 {5,2,5/3}, Coxeter diagram , constructed from 10 pentagonal antiprisms , 10 pentagrammic crossed-antiprisms , and 50 tetrahedra .
Its vertices are a subset of those of the small stellated 120-cell .
The great duoantiprism can be constructed from a nonuniform variant of the 10-10/3 duoprism (a duoprism of a decagon and a decagram ) where the decagram's edge length is around 1.618 ( golden ratio ) times the edge length of the decagon via an alternation process. The decagonal prisms alternate into pentagonal antiprisms , the decagrammic prisms alternate into pentagrammic crossed-antiprisms with new regular tetrahedra created at the deleted vertices. This is the only uniform solution for the p-q duoantiprism aside from the regular 16-cell (as a 2-2 duoantiprism).
| https://en.wikipedia.org/wiki/Great_duoantiprism |
A great ellipse is an ellipse passing through two points on a spheroid and having the same center as that of the spheroid. Equivalently, it is an ellipse on the surface of a spheroid and centered on the origin , or the curve formed by intersecting the spheroid by a plane through its center. [ 1 ] For points that are separated by less than about a quarter of the circumference of the earth , about 10 000 k m {\displaystyle 10\,000\,\mathrm {km} } , the length of the great ellipse connecting the points is close (within one part in 500,000) to the geodesic distance . [ 2 ] [ 3 ] [ 4 ] The great ellipse therefore is sometimes proposed as a suitable route for marine navigation.
The great ellipse is a special case of an earth section path .
Assume that the spheroid, an ellipsoid of revolution, has an equatorial radius a {\displaystyle a} and polar semi-axis b {\displaystyle b} . Define the flattening f = ( a − b ) / a {\displaystyle f=(a-b)/a} , the eccentricity e = f ( 2 − f ) {\displaystyle e={\sqrt {f(2-f)}}} , and the second eccentricity e ′ = e / ( 1 − f ) {\displaystyle e'=e/(1-f)} . Consider two points: A {\displaystyle A} at (geographic) latitude ϕ 1 {\displaystyle \phi _{1}} and longitude λ 1 {\displaystyle \lambda _{1}} and B {\displaystyle B} at latitude ϕ 2 {\displaystyle \phi _{2}} and longitude λ 2 {\displaystyle \lambda _{2}} . The connecting great ellipse (from A {\displaystyle A} to B {\displaystyle B} ) has length s 12 {\displaystyle s_{12}} and has azimuths α 1 {\displaystyle \alpha _{1}} and α 2 {\displaystyle \alpha _{2}} at the two endpoints.
There are various ways to map an ellipsoid into a sphere of radius a {\displaystyle a} in such a way as to map the great ellipse into a great circle, allowing the methods of great-circle navigation to be used:
The last method gives an easy way to generate a succession of way-points on the great ellipse connecting two known points A {\displaystyle A} and B {\displaystyle B} . Solve for the great circle between ( ϕ 1 , λ 1 ) {\displaystyle (\phi _{1},\lambda _{1})} and ( ϕ 2 , λ 2 ) {\displaystyle (\phi _{2},\lambda _{2})} and find the way-points on the great circle . These map into way-points on the corresponding great ellipse.
If distances and headings are needed, it is simplest to use the first of the mappings. [ 5 ] In detail, the mapping is as follows (this description is taken from [ 6 ] ):
a tan β = b tan ϕ . {\displaystyle a\tan \beta =b\tan \phi .}
tan α = tan γ 1 − e 2 cos 2 β , tan γ = tan α 1 + e ′ 2 cos 2 ϕ , {\displaystyle {\begin{aligned}\tan \alpha &={\frac {\tan \gamma }{\sqrt {1-e^{2}\cos ^{2}\beta }}},\\\tan \gamma &={\frac {\tan \alpha }{\sqrt {1+e'^{2}\cos ^{2}\phi }}},\end{aligned}}}
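A rough sketch of how this first mapping can be used to generate way-points on the great ellipse: geographic latitudes are converted to parametric latitudes via a tan β = b tan ϕ, the two mapped points are joined by a great circle on the auxiliary sphere (interpolated here with a plain vector slerp rather than the full great-circle formulary), and each intermediate point is mapped back to geographic coordinates. The semi-axes, the way-point spacing and the function names are illustrative assumptions, not taken from the text:

import numpy as np

A, B = 6378137.0, 6356752.3  # equatorial and polar semi-axes in metres (WGS84-like, illustrative)

def unit_vector(beta, lam):
    return np.array([np.cos(beta) * np.cos(lam), np.cos(beta) * np.sin(lam), np.sin(beta)])

def great_ellipse_waypoints(phi1, lam1, phi2, lam2, n=5):
    """Approximate way-points (degrees) on the great ellipse between two points."""
    phi1, lam1, phi2, lam2 = map(np.radians, (phi1, lam1, phi2, lam2))
    beta1 = np.arctan(B / A * np.tan(phi1))   # a tan(beta) = b tan(phi)
    beta2 = np.arctan(B / A * np.tan(phi2))
    p, q = unit_vector(beta1, lam1), unit_vector(beta2, lam2)
    omega = np.arccos(np.clip(np.dot(p, q), -1.0, 1.0))   # spherical central angle
    points = []
    for t in np.linspace(0.0, 1.0, n):
        # Spherical linear interpolation along the great circle of the auxiliary sphere.
        v = (np.sin((1 - t) * omega) * p + np.sin(t * omega) * q) / np.sin(omega)
        beta, lam = np.arcsin(v[2]), np.arctan2(v[1], v[0])
        phi = np.arctan(A / B * np.tan(beta))  # map the parametric latitude back to geographic latitude
        points.append((np.degrees(phi), np.degrees(lam)))
    return points

for lat, lon in great_ellipse_waypoints(40.0, -73.0, 49.0, 2.0):
    print(f"{lat:9.4f} {lon:10.4f}")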
(A similar mapping to an auxiliary sphere is carried out in the solution of geodesics on an ellipsoid . The differences are that the azimuth α {\displaystyle \alpha } is conserved in the mapping, while the longitude λ {\displaystyle \lambda } maps to a "spherical" longitude ω {\displaystyle \omega } . The equivalent ellipse used for distance calculations has semi-axes b 1 + e ′ 2 cos 2 α 0 {\displaystyle b{\sqrt {1+e'^{2}\cos ^{2}\alpha _{0}}}} and b {\displaystyle b} .)
The "inverse problem" is the determination of s 12 {\displaystyle s_{12}} , α 1 {\displaystyle \alpha _{1}} , and α 2 {\displaystyle \alpha _{2}} , given the positions of A {\displaystyle A} and B {\displaystyle B} . This is solved by computing β 1 {\displaystyle \beta _{1}} and β 2 {\displaystyle \beta _{2}} and solving for the great-circle between ( β 1 , λ 1 ) {\displaystyle (\beta _{1},\lambda _{1})} and ( β 2 , λ 2 ) {\displaystyle (\beta _{2},\lambda _{2})} .
The spherical azimuths are relabeled as γ {\displaystyle \gamma } (from α {\displaystyle \alpha } ). Thus γ 0 {\displaystyle \gamma _{0}} , γ 1 {\displaystyle \gamma _{1}} , and γ 2 {\displaystyle \gamma _{2}} are the spherical azimuths at the equator and at A {\displaystyle A} and B {\displaystyle B} . The azimuths of the endpoints of the great ellipse, α 1 {\displaystyle \alpha _{1}} and α 2 {\displaystyle \alpha _{2}} , are computed from γ 1 {\displaystyle \gamma _{1}} and γ 2 {\displaystyle \gamma _{2}} .
The semi-axes of the great ellipse can be found using the value of γ 0 {\displaystyle \gamma _{0}} .
Also determined as part of the solution of the great circle problem are the arc lengths, σ 01 {\displaystyle \sigma _{01}} and σ 02 {\displaystyle \sigma _{02}} , measured from the equator crossing to A {\displaystyle A} and B {\displaystyle B} . The distance s 12 {\displaystyle s_{12}} is found by computing the length of a portion of the perimeter of the ellipse using the formula giving the meridian arc in terms of the parametric latitude . In applying this formula, use the semi-axes for the great ellipse (instead of for the meridian) and substitute σ 01 {\displaystyle \sigma _{01}} and σ 02 {\displaystyle \sigma _{02}} for β {\displaystyle \beta } .
The solution of the "direct problem", determining the position of B {\displaystyle B} given A {\displaystyle A} , α 1 {\displaystyle \alpha _{1}} , and s 12 {\displaystyle s_{12}} , can be similarly be found (this requires, in addition, the inverse meridian distance formula ). This also enables way-points (e.g., a series of equally spaced intermediate points) to be found in the solution of the inverse problem. | https://en.wikipedia.org/wiki/Great_ellipse |
In geometry , the great grand 120-cell or great grand polydodecahedron is a regular star 4-polytope with Schläfli symbol {5,5/2,3}. It is one of 10 regular Schläfli-Hess polytopes .
It has the same edge arrangement as the small stellated 120-cell .
| https://en.wikipedia.org/wiki/Great_grand_120-cell |
In geometry , the great icosahedral 120-cell , great polyicosahedron or great faceted 600-cell is a regular star 4-polytope with Schläfli symbol {3,5/2,5}. It is one of 10 regular Schläfli-Hess polytopes .
It has the same edge arrangement as the great stellated 120-cell and grand stellated 120-cell , and the same face arrangement as the grand 600-cell .
| https://en.wikipedia.org/wiki/Great_icosahedral_120-cell |
In geometry , the great stellated 120-cell or great stellated polydodecahedron is a regular star 4-polytope with Schläfli symbol {5/2,3,5}. It is one of 10 regular Schläfli-Hess polytopes .
It is one of four regular star 4-polytopes discovered by Ludwig Schläfli . It is named by John Horton Conway , extending the naming system by Arthur Cayley for the Kepler-Poinsot solids .
It has the same edge arrangement as the grand 600-cell , icosahedral 120-cell , and the same face arrangement as the grand stellated 120-cell .
| https://en.wikipedia.org/wiki/Great_stellated_120-cell |
The greater-than sign is a mathematical symbol that denotes an inequality between two values. The widely adopted form of two equal-length strokes connecting in an acute angle at the right, > , has been found in documents dated as far back as 1631. [ 1 ] In mathematical writing, the greater-than sign is typically placed between two values being compared and signifies that the first number is greater than the second number. Examples of typical usage include 1.5 > 1 and 1 > −2 . The less-than sign and greater-than sign always "point" to the smaller number. Since the development of computer programming languages , the greater-than sign and the less-than sign have been repurposed for a range of uses and operations.
The earliest known use of the symbols < and > is found in Artis Analyticae Praxis ad Aequationes Algebraicas Resolvendas ( The Analytical Arts Applied to Solving Algebraic Equations ) by Thomas Harriot , published posthumously in 1631. [ 1 ] The text states " Signum majoritatis ut a > b significet a majorem quam b (The sign of majority a > b indicates that a is greater than b)" and " Signum minoritatis ut a < b significet a minorem quam b (The sign of minority a < b indicates that a is less than b)."
According to historian Art Johnson, while Harriot was surveying North America, he saw a Native American with a symbol that resembled the greater-than sign, [ 1 ] in both backwards and forwards forms. [ 2 ] Johnson says it is likely Harriot developed the two symbols from this symbol. [ 2 ]
The greater-than sign is sometimes used for an approximation of the closing angle bracket , ⟩ . The proper Unicode character is U+232A 〉 RIGHT-POINTING ANGLE BRACKET . ASCII does not have angular brackets.
In HTML (and SGML and XML ), the greater-than sign is used at the end of tags. The greater-than sign may be included with the character entity reference &gt; , while the entity &ge; produces the greater-than or equal to sign.
In some early e-mail systems, the greater-than sign was used to denote quotations . [ 3 ] The sign is also used to denote quotations in Markdown . [ 4 ]
The 'greater-than sign' > is encoded in ASCII as character hex 3E, decimal 62. The Unicode code point is U+003E > GREATER-THAN SIGN , inherited from ASCII.
For use with HTML , the mnemonics &gt; or &GT; may also be used.
BASIC and C -family languages (including Java [ 5 ] and C++ ) use the comparison operator > to mean "greater than". In Lisp -family languages, > is a function used to mean "greater than".
In Coldfusion and Fortran , operator .GT. means "greater than".
>> is used for an approximation of the much-greater-than sign ≫ . ASCII does not have the much greater-than sign.
The double greater-than sign is also used for an approximation of the closing guillemet , » .
In Java , C , and C++ , the operator >> is the right-shift operator . In C++ it is also used to get input from a stream , similar to the C functions getchar and fgets .
In Haskell , the >> function is a monadic operator. It is used for sequentially composing two actions, discarding any value produced by the first. In that regard, it is like the statement sequencing operator in imperative languages, such as the semicolon in C.
In XPath the >> operator returns true if the left operand follows the right operand in document order; otherwise it returns false. [ 6 ]
>>> is the unsigned-right-shift operator in JavaScript . Three greater-than signs form the distinctive prompt of the firmware console in MicroVAX , VAXstation , and DEC Alpha computers (known as the SRM console in the latter). This is also the default prompt of the Python interactive shell, often seen for code examples that can be executed interactively in the interpreter:
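An illustrative session (the particular expressions are arbitrary; any Python interpreter shows the same >>> prompt):

>>> 5 > 3          # comparison operator: "greater than"
True
>>> 5 >= 5         # "greater than or equal to"
True
>>> 8 >> 2         # in Python, >> is also the bitwise right-shift operator
2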
>= is sometimes used for an approximation of the greater than or equal to sign, ≥ , which was not included in the ASCII repertoire. The sign is, however, provided in Unicode , as U+2265 ≥ GREATER-THAN OR EQUAL TO ( &GreaterEqual;, &ge;, &geq; ).
In BASIC , Lisp -family languages, Lua and C -family languages (including Java and C++ ) the operator >= means "greater than or equal to". In Sinclair BASIC it is encoded as a single-byte code point token.
In Fortran , the operator .GE. means "greater than or equal to".
In Bourne shell and Windows PowerShell , the operator -ge means "greater than or equal to".
-> is used in some programming languages (for example F# ) to create an arrow. Arrows like these could also be used in text where other arrow symbols are unavailable. In the R programming language , this can be used as the right assignment operator. In C , C++ , and PHP , this is used as a member access operator. In Swift and Python , it is used to indicate the return value type when defining a function (e.g., func foo () -> MyClass {...} in Swift, or def foo () -> MyClass: ... in Python).
In Bourne shell (and many other shells), the greater-than sign is used to redirect output to a file. Greater-than plus ampersand ( >& ) is used to redirect to a file descriptor .
The greater-than sign is used in the ' spaceship operator ', <=> .
In ECMAScript and C# , the greater-than sign is used in lambda function expressions.
In ECMAScript, for example: const square = x => x * x;
In C#, for example: Func<int, int> square = x => x * x;
In PHP , the greater-than sign is used in conjunction with the less-than sign as a not equal to operator. It is the same as the != operator.
Unicode provides various greater than symbols: [ 7 ] | https://en.wikipedia.org/wiki/Greater-than_sign |
The greater sciatic foramen is an opening ( foramen ) in the posterior human pelvis . It is formed by the sacrotuberous and sacrospinous ligaments . The piriformis muscle passes through the foramen and occupies most of its volume. The greater sciatic foramen is wider in women than in men.
It is bounded as follows:
The piriformis , which exits the pelvis through the foramen, occupies most of its volume.
The following structures also exit the pelvis through the greater sciatic foramen: [ 2 ]
This article incorporates text in the public domain from page 309 of the 20th edition of Gray's Anatomy (1918) | https://en.wikipedia.org/wiki/Greater_sciatic_foramen |
Greebles , also greeblies (singular: greebly ), [ 1 ] or " nurnies ", are parts harvested from plastic modeling kits to be applied to an original model as a detail element. The practice of using parts in this manner is called " kitbashing ". [ 2 ]
The term "greeblies" was first used by effects artists at Industrial Light & Magic in the 1970s to refer to small details added to models. According to model designer and fabricator Adam Savage , George Lucas , Industrial Light & Magic's founder, coined the term "greeblies". [ 3 ]
Ron Thornton is widely believed to have coined the term " nurnies " referring to CGI technical detail that his company Foundation Imaging produced for the Babylon 5 series, [ 2 ] while the model-making team of 2001: A Space Odyssey referred to them as " wiggets ". [ 4 ] | https://en.wikipedia.org/wiki/Greeble |
The Greek Atomic Energy Commission (EEAE) is an independent government agency of Greece which is responsible for atomic safety, development and regulations and for monitoring artificially produced ionizing and non-ionizing radiation . The seven-member board of directors operates under the supervision of the Ministry of Development through the General Secretariat of Research and Technology. [ 1 ]
The EEAE was established by act of legislation in 1954. Among other notable Greek scientists, Leonidas Zervas has served twice as president of the commission (1964–1965 & 1974–1975). [ 2 ]
| https://en.wikipedia.org/wiki/Greek_Atomic_Energy_Commission |
Greek letters are used in mathematics , science , engineering , and other areas where mathematical notation is used as symbols for constants , special functions , and also conventionally for variables representing certain quantities. In these contexts, the capital letters and the small letters represent distinct and unrelated entities. Those Greek letters which have the same form as Latin letters are rarely used: capital A, B, E, Z, H, I, K, M, N, O, P, T, Y, X. Small ι, ο and υ are also rarely used, since they closely resemble the Latin letters i, o and u. Sometimes, font variants of Greek letters are used as distinct symbols in mathematics, in particular for ε/ϵ and π/ϖ. The archaic letter digamma (Ϝ/ϝ/ϛ) is sometimes used.
The Bayer designation naming scheme for stars typically uses the first Greek letter, α, for the brightest star in each constellation, and runs through the alphabet before switching to Latin letters.
In mathematical finance , the Greeks are the variables denoted by Greek letters used to describe the risk of certain investments.
Some common conventions:
The Greek letter forms used in mathematics are often different from those used in Greek-language text: they are designed to be used in isolation, not connected to other letters, and some use variant forms which are not normally used in current Greek typography.
The OpenType font format has the feature tag "mgrk" ("Mathematical Greek") to identify a glyph as representing a Greek letter to be used in mathematical (as opposed to Greek language) contexts.
The table below shows a comparison of Greek letters rendered in TeX and HTML.
The font used in the TeX rendering is an italic style. This is in line with the convention that variables should be italicized. As Greek letters are more often than not used as variables in mathematical formulas, a Greek letter appearing similar to the TeX rendering is more likely to be encountered in works involving mathematics.
Note: The empty set symbol ∅ looks similar, but is unrelated to the Greek letter. | https://en.wikipedia.org/wiki/Greek_letters_used_in_mathematics,_science,_and_engineering |
Greek numerals , also known as Ionic , Ionian , Milesian , or Alexandrian numerals , is a system of writing numbers using the letters of the Greek alphabet . In modern Greece , they are still used for ordinal numbers and in contexts similar to those in which Roman numerals are still used in the Western world . For ordinary cardinal numbers , however, modern Greece uses Arabic numerals .
The Minoan and Mycenaean civilizations ' Linear A and Linear B alphabets used a different system, called Aegean numerals , which included number-only symbols for powers of ten: 𐄇 = 1, 𐄐 = 10, 𐄙 = 100, 𐄢 = 1000, and 𐄫 = 10000. [ 1 ]
Attic numerals composed another system that came into use perhaps in the 7th century BC. They were acrophonic , derived (after the initial one) from the first letters of the names of the numbers represented. They ran Ι = 1, Π = 5, Δ = 10, Η = 100, Χ = 1,000, and Μ = 10,000. The numbers 50, 500, 5,000, and 50,000 were represented by the letter Π with minuscule versions of Δ, Η, Χ, and Μ written in the top-right corner. [ 1 ] One-half was represented by 𐅁 (left half of a full circle) and one-quarter by ɔ (right side of a full circle). The same system was used outside of Attica , but the symbols varied with the local alphabets ; for example, the symbol for 1,000 took a different form in Boeotia . [ 2 ]
The present system probably developed around Miletus in Ionia . 19th century classicists placed its development in the 3rd century BC, the occasion of its first widespread use. [ 3 ] More thorough modern archaeology has caused the date to be pushed back at least to the 5th century BC, [ 4 ] a little before Athens abandoned its pre-Eucleidean alphabet in favour of Miletus 's in 402 BC, and it may predate that by a century or two. [ 5 ] The present system uses the 24 letters adopted under Eucleides , as well as three Phoenician and Ionic ones that had been dropped from the Athenian alphabet as letters (although kept for numbers): digamma , koppa , and sampi . The position of those characters within the numbering system implies that the first two were still in use (or at least remembered as letters) while the third was not. The exact dating, particularly for sampi , is problematic since its uncommon value means the first attested representative near Miletus does not appear until the 2nd century BC, [ 6 ] and its use is unattested in Athens until the 2nd century CE. [ 7 ] (In general, Athenians resisted using the new numerals for the longest of any Greek state, but had fully adopted them by c. 50 CE . [ 2 ] )
Greek numerals are decimal , based on powers of 10. The units from 1 to 9 are assigned to the first nine letters of the old Ionic alphabet from alpha to theta . Instead of reusing these numbers to form multiples of the higher powers of ten, however, each multiple of ten from 10 to 90 was assigned its own separate letter from the next nine letters of the Ionic alphabet from iota to koppa . Each multiple of one hundred from 100 to 900 was then assigned its own separate letter as well, from rho to sampi . [ 8 ] (That this was not the traditional location of sampi in the Ionic alphabetical order has led classicists to conclude that sampi had fallen into disuse as a letter by the time the system was created. [ citation needed ] )
This alphabetic system operates on the additive principle in which the numeric values of the letters are added together to obtain the total. For example, 241 was represented as σμα (200 + 40 + 1). (It was not always the case that the numbers ran from highest to lowest: a 4th-century BC inscription at Athens placed the units to the left of the tens. This practice continued in Asia Minor well into the Roman period . [ 9 ] ) In ancient and medieval manuscripts, these numerals were eventually distinguished from letters using overbars : α , β , γ , etc. In medieval manuscripts of the Book of Revelation , the number of the Beast 666 is written as χξϛ (600 + 60 + 6). (Numbers larger than 1,000 reused the same letters but included various marks to note the change.) Fractions were indicated as the denominator followed by a keraia (ʹ); γʹ indicated one third, δʹ one fourth and so on. As an exception, special symbol ∠ʹ indicated one half, and γ°ʹ or γoʹ was two-thirds. These fractions were additive (also known as Egyptian fractions ); for example δʹ ϛʹ indicated 1 ⁄ 4 + 1 ⁄ 6 = 5 ⁄ 12 .
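As a small illustration of the additive principle, here is a sketch that writes the numerals for 1-999 from the letter values described above (it uses ϛ for 6 and ϙ for 90, appends a modern keraia, and ignores the thousands mark and the fraction notation; the function name is illustrative):

ONES = ["", "α", "β", "γ", "δ", "ε", "ϛ", "ζ", "η", "θ"]
TENS = ["", "ι", "κ", "λ", "μ", "ν", "ξ", "ο", "π", "ϙ"]
HUNDREDS = ["", "ρ", "σ", "τ", "υ", "φ", "χ", "ψ", "ω", "ϡ"]

def greek_numeral(n):
    """Ionic (alphabetic) Greek numeral for 1-999, additive principle, trailing keraia."""
    if not 1 <= n <= 999:
        raise ValueError("this sketch only handles 1-999")
    return HUNDREDS[n // 100] + TENS[n // 10 % 10] + ONES[n % 10] + "ʹ"

print(greek_numeral(241))  # σμαʹ  (200 + 40 + 1)
print(greek_numeral(666))  # χξϛʹ  (600 + 60 + 6)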
Although the Greek alphabet began with only majuscule forms, surviving papyrus manuscripts from Egypt show that uncial and cursive minuscule forms began early. [ clarification needed ] These new letter forms sometimes replaced the former ones, especially in the case of the obscure numerals. The old Q-shaped koppa (Ϙ) began to be broken up and simplified. The numeral for 6 changed several times. During antiquity, the original letter form of digamma (Ϝ) came to be avoided in favour of a special numerical form. By the Byzantine era , the letter was known as episemon . This eventually merged with the sigma - tau ligature stigma (ϛ).
In modern Greek , a number of other changes have been made. Instead of extending an over bar over an entire number, the keraia ( κεραία , lit. "hornlike projection") is marked to its upper right, a development of the short marks formerly used for single numbers and fractions. The modern keraia (´) is a symbol similar to the acute accent (´), the tonos (U+0384,΄) and the prime symbol (U+02B9, ʹ), but has its own Unicode character as U+0374. Alexander the Great 's father Philip II of Macedon is thus known as Φίλιππος Βʹ in modern Greek. A lower left keraia (Unicode: U+0375, "Greek Lower Numeral Sign") is now standard for distinguishing thousands: 2019 is represented as ͵ΒΙΘʹ ( 2 × 1,000 + 10 + 9 ).
The declining use of ligatures in the 20th century also means that stigma is frequently written as the separate letters ΣΤʹ, although a single keraia is used for the group. [ 10 ]
The practice of adding up the number values of Greek letters of words, names and phrases, thus connecting the meaning of words, names and phrases with others with equivalent numeric sums, is called isopsephy . Similar practices for the Hebrew and English are called gematria and English Qaballa , respectively.
In his text The Sand Reckoner , the natural philosopher Archimedes gives an upper bound of the number of grains of sand required to fill the entire universe, using a contemporary estimation of its size. This would defy the then-held notion that it is impossible to name a number greater than that of the sand on a beach or on the entire world. In order to do that, he had to devise a new numeral scheme with much greater range.
Pappus of Alexandria reports that Apollonius of Perga developed a simpler system based on powers of the myriad; α Μ was 10,000, β Μ was 10,000² = 100,000,000, γ Μ was 10,000³ = 10¹², and so on. [ 11 ]
Hellenistic astronomers extended alphabetic Greek numerals into a sexagesimal positional numbering system by limiting each position to a maximum value of 50 + 9 and including a special symbol for zero , which was only used alone for a whole table cell, rather than combined with other digits, like today's modern zero, which is a placeholder in positional numeric notation. This system was probably adapted from Babylonian numerals by Hipparchus c. 140 BC . It was then used by Ptolemy ( c. 140 AD ), Theon ( c. 380 AD ) and Theon's daughter Hypatia ( d. 415 AD ). The symbol for zero is clearly different from that of the value for 70, omicron or " ο ". In a 2nd-century papyrus, one can see the symbol for zero in the lower right, and a number of larger omicrons elsewhere in the same papyrus.
In Ptolemy's table of chords , the first fairly extensive trigonometric table, there were 360 rows, portions of which looked as follows:
Each number in the first column, labeled περιφερειῶν , ["arcs"] is the number of degrees of arc on a circle. Each number in the second column, labeled εὐθειῶν , ["straight lines" or "segments"] is the length of the corresponding chord of the circle, when the diameter is 120. Thus πδ represents an 84° arc, and the ∠′ after it means one-half, so that πδ∠′ means 84 + 1 ⁄ 2 °. In the next column we see π μα γ , meaning 80 + 41 / 60 + 3 / 60² . That is the length of the chord corresponding to an arc of 84 + 1 ⁄ 2 ° when the diameter of the circle is 120. The next column, labeled ἑξηκοστῶν , for "sixtieths", is the number to be added to the chord length for each 1' increase in the arc, over the span of the next 1°. Thus that last column was used for linear interpolation .
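Because the diameter is taken as 120, the chord subtending an arc θ has length 120·sin(θ/2), so the entry above can be checked directly (a short sketch using the values quoted in the text):

import math

table_value = 80 + 41 / 60 + 3 / 60**2              # the sexagesimal entry "π μα γ"
computed = 120 * math.sin(math.radians(84.5) / 2)    # chord of an 84 + 1/2 degree arc, diameter 120

print(f"table:    {table_value:.5f}")   # 80.68417
print(f"computed: {computed:.5f}")      # about 80.68, matching to the precision tabulated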
The Greek sexagesimal placeholder or zero symbol changed over time: The symbol used on papyri during the second century was a very small circle with an overbar several diameters long, terminated or not at both ends in various ways. Later, the overbar shortened to only one diameter, similar to the modern o -macron (ō) which was still being used in late medieval Arabic manuscripts whenever alphabetic numerals were used, later the overbar was omitted in Byzantine manuscripts, leaving a bare ο (omicron). This gradual change from an invented symbol to ο does not support the hypothesis that the latter was the initial of οὐδέν meaning "nothing". [ 12 ] [ 13 ] Note that the letter ο was still used with its original numerical value of 70; however, there was no ambiguity, as 70 could not appear in the fractional part of a sexagesimal number, and zero was usually omitted when it was the integer.
Some of Ptolemy's true zeros appeared in the first line of each of his eclipse tables, where they were a measure of the angular separation between the center of the Moon and either the center of the Sun (for solar eclipses ) or the center of Earth 's shadow (for lunar eclipses ). All of these zeros took the form ο | ο ο , where Ptolemy actually used three of the symbols described in the previous paragraph. The vertical bar (|) indicates that the integral part on the left was in a separate column labeled in the headings of his tables as digits (of five arc-minutes each), whereas the fractional part was in the next column labeled minute of immersion , meaning sixtieths (and thirty-six-hundredths) of a digit. [ 14 ]
The Greek zero was added to Unicode in Version 4.1.0 at U+1018A 𐆊 GREEK ZERO SIGN . [ 15 ] | https://en.wikipedia.org/wiki/Greek_numerals |
In mathematics , a Green's function (or Green function [ 1 ] ) is the impulse response of an inhomogeneous linear differential operator defined on a domain with specified initial conditions or boundary conditions.
This means that if L {\displaystyle L} is a linear differential operator, then the Green's function G {\displaystyle G} is the solution of the equation L G = δ {\displaystyle LG=\delta } , where δ {\displaystyle \delta } is Dirac's delta function , and the solution of the initial-value problem L y = f {\displaystyle Ly=f} is the convolution G ∗ f {\displaystyle G\ast f} .
Through the superposition principle , given a linear ordinary differential equation (ODE), L y = f {\displaystyle Ly=f} , one can first solve L G = δ s {\displaystyle LG=\delta _{s}} for each s ; since the source is a sum of delta functions , the solution is, by linearity of L , a corresponding sum of Green's functions.
Green's functions are named after the British mathematician George Green , who first developed the concept in the 1820s. In the modern study of linear partial differential equations , Green's functions are studied largely from the point of view of fundamental solutions instead.
Under many-body theory , the term is also used in physics , specifically in quantum field theory , aerodynamics , aeroacoustics , electrodynamics , seismology and statistical field theory , to refer to various types of correlation functions , even those that do not fit the mathematical definition. In quantum field theory, Green's functions take the roles of propagators .
A Green's function, G ( x , s ) , of a linear differential operator L = L ( x ) acting on distributions over a subset of the Euclidean space R n {\displaystyle \mathbb {R} ^{n}} , at a point s , is any solution of
L G ( x , s ) = δ ( s − x ) {\displaystyle LG(x,s)=\delta (s-x)} (1)
where δ is the Dirac delta function . This property of a Green's function can be exploited to solve differential equations of the form
L u ( x ) = f ( x ) . {\displaystyle Lu(x)=f(x)\,.} (2)
If the kernel of L is non-trivial, then the Green's function is not unique. However, in practice, some combination of symmetry , boundary conditions and/or other externally imposed criteria will give a unique Green's function. Green's functions may be categorized, by the type of boundary conditions satisfied, by a Green's function number . Also, Green's functions in general are distributions , not necessarily functions of a real variable.
Green's functions are also useful tools in solving wave equations and diffusion equations . In quantum mechanics , Green's function of the Hamiltonian is a key concept with important links to the concept of density of states .
The Green's function as used in physics is usually defined with the opposite sign, instead. That is, L G ( x , s ) = δ ( x − s ) . {\displaystyle LG(x,s)=\delta (x-s)\,.} This definition does not significantly change any of the properties of Green's function due to the evenness of the Dirac delta function.
If the operator is translation invariant , that is, when L {\displaystyle L} has constant coefficients with respect to x , then the Green's function can be taken to be a convolution kernel , that is, G ( x , s ) = G ( x − s ) . {\displaystyle G(x,s)=G(x-s)\,.} In this case, Green's function is the same as the impulse response of linear time-invariant system theory .
Loosely speaking, if such a function G can be found for the operator L , then, if we multiply equation 1 for the Green's function by f ( s ) , and then integrate with respect to s , we obtain, ∫ L G ( x , s ) f ( s ) d s = ∫ δ ( x − s ) f ( s ) d s = f ( x ) . {\displaystyle \int LG(x,s)\,f(s)\,ds=\int \delta (x-s)\,f(s)\,ds=f(x)\,.} Because the operator L = L ( x ) {\displaystyle L=L(x)} is linear and acts only on the variable x (and not on the variable of integration s ), one may take the operator L {\displaystyle L} outside of the integration, yielding L ( ∫ G ( x , s ) f ( s ) d s ) = f ( x ) . {\displaystyle L\left(\int G(x,s)\,f(s)\,ds\right)=f(x)\,.} This means that
u ( x ) = ∫ G ( x , s ) f ( s ) d s {\displaystyle u(x)=\int G(x,s)\,f(s)\,ds} (3)
is a solution to the equation L u ( x ) = f ( x ) . {\displaystyle Lu(x)=f(x)\,.}
Thus, one may obtain the function u ( x ) through knowledge of the Green's function in equation 1 and the source term on the right-hand side in equation 2 . This process relies upon the linearity of the operator L .
In other words, the solution of equation 2 , u ( x ) , can be determined by the integration given in equation 3 . Although f ( x ) is known, this integration cannot be performed unless G is also known. The problem now lies in finding the Green's function G that satisfies equation 1 . For this reason, the Green's function is also sometimes called the fundamental solution associated to the operator L .
Not every operator L {\displaystyle L} admits a Green's function. A Green's function can also be thought of as a right inverse of L . Aside from the difficulties of finding a Green's function for a particular operator, the integral in equation 3 may be quite difficult to evaluate. However the method gives a theoretically exact result.
This can be thought of as an expansion of f according to a Dirac delta function basis (projecting f over δ ( x − s ) {\displaystyle \delta (x-s)} ) and a superposition of the solution on each projection . Such an integral equation is known as a Fredholm integral equation , the study of which constitutes Fredholm theory .
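As a purely numerical illustration of equations 1-3, one can discretize an operator, build an approximate Green's function column by column from discrete delta sources, and recover the solution of Lu = f by summation. The sketch below does this for L = d²/dx² on (0, 1) with homogeneous Dirichlet boundary conditions; the grid size, the right-hand side and the finite-difference discretization are arbitrary choices for the example:

import numpy as np

n = 200
x, h = np.linspace(0.0, 1.0, n + 2, retstep=True)
interior = x[1:-1]

# Second-difference approximation of d^2/dx^2 on the interior nodes, with u(0) = u(1) = 0.
L = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)) / h**2

# Column j of G solves L G[:, j] = discrete delta centred at interior[j] (value 1/h at a single node).
G = np.linalg.solve(L, np.eye(n) / h)

# Solve L u = f via u(x) = integral of G(x, s) f(s) ds, approximated by a sum over grid nodes.
f = np.sin(2 * np.pi * interior)
u_green = G @ f * h
u_direct = np.linalg.solve(L, f)

print(np.max(np.abs(u_green - u_direct)))   # essentially zero: the two solution routes coincide
# The exact solution of u'' = sin(2 pi x) with these boundary conditions is -sin(2 pi x) / (2 pi)^2.
print(np.max(np.abs(u_green + np.sin(2 * np.pi * interior) / (2 * np.pi) ** 2)))  # small discretization error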
The primary use of Green's functions in mathematics is to solve non-homogeneous boundary value problems . In modern theoretical physics , Green's functions are also usually used as propagators in Feynman diagrams ; the term Green's function is often further used for any correlation function .
Let L {\displaystyle L} be the Sturm–Liouville operator, a linear differential operator of the form L = d d x [ p ( x ) d d x ] + q ( x ) {\displaystyle L={\dfrac {d}{dx}}\left[p(x){\dfrac {d}{dx}}\right]+q(x)} and let D {\displaystyle \mathbf {D} } be the vector-valued boundary conditions operator D u = [ α 1 u ′ ( 0 ) + β 1 u ( 0 ) α 2 u ′ ( ℓ ) + β 2 u ( ℓ ) ] . {\displaystyle \mathbf {D} u={\begin{bmatrix}\alpha _{1}u'(0)+\beta _{1}u(0)\\\alpha _{2}u'(\ell )+\beta _{2}u(\ell )\end{bmatrix}}\,.}
Let f ( x ) {\displaystyle f(x)} be a continuous function in [ 0 , ℓ ] {\displaystyle [0,\ell ]\,} . Further suppose that the problem L u = f D u = 0 {\displaystyle {\begin{aligned}Lu&=f\\\mathbf {D} u&=\mathbf {0} \end{aligned}}} is "regular", i.e., the only solution for f ( x ) = 0 {\displaystyle f(x)=0} for all x is u ( x ) = 0 {\displaystyle u(x)=0} . [ a ]
There is one and only one solution u ( x ) {\displaystyle u(x)} that satisfies L u = f D u = 0 {\displaystyle {\begin{aligned}Lu&=f\\\mathbf {D} u&=\mathbf {0} \end{aligned}}} and it is given by u ( x ) = ∫ 0 ℓ f ( s ) G ( x , s ) d s , {\displaystyle u(x)=\int _{0}^{\ell }f(s)\,G(x,s)\,ds\,,} where G ( x , s ) {\displaystyle G(x,s)} is a Green's function satisfying the following conditions: (i) G ( x , s ) {\displaystyle G(x,s)} is continuous in x {\displaystyle x} and s {\displaystyle s} ; (ii) for x ≠ s {\displaystyle x\neq s} , L G ( x , s ) = 0 {\displaystyle LG(x,s)=0} ; (iii) for s ≠ 0 , ℓ {\displaystyle s\neq 0,\ell } , D G ( x , s ) = 0 {\displaystyle \mathbf {D} G(x,s)=\mathbf {0} } ; (iv) the derivative jumps at x = s {\displaystyle x=s} : G ′ ( s + 0 , s ) − G ′ ( s − 0 , s ) = 1 / p ( s ) {\displaystyle G'(s_{+0},s)-G'(s_{-0},s)=1/p(s)} ; and (v) symmetry: G ( x , s ) = G ( s , x ) {\displaystyle G(x,s)=G(s,x)} .
Green's function is not necessarily unique since the addition of any solution of the homogeneous equation to one Green's function results in another Green's function. Therefore if the homogeneous equation has nontrivial solutions, multiple Green's functions exist. In some cases, it is possible to find one Green's function that is nonvanishing only for s ≤ x {\displaystyle s\leq x} , which is called a retarded Green's function, and another Green's function that is nonvanishing only for s ≥ x {\displaystyle s\geq x} , which is called an advanced Green's function. In such cases, any linear combination of the two Green's functions is also a valid Green's function. The terminology advanced and retarded is especially useful when the variable x corresponds to time. In such cases, the solution provided by the use of the retarded Green's function depends only on the past sources and is causal whereas the solution provided by the use of the advanced Green's function depends only on the future sources and is acausal. In these problems, it is often the case that the causal solution is the physically important one. The use of advanced and retarded Green's function is especially common for the analysis of solutions of the inhomogeneous electromagnetic wave equation .
While it does not uniquely fix the form the Green's function will take, performing a dimensional analysis to find the units a Green's function must have is an important sanity check on any Green's function found through other means. A quick examination of the defining equation, L G ( x , s ) = δ ( x − s ) , {\displaystyle LG(x,s)=\delta (x-s),} shows that the units of G {\displaystyle G} depend not only on the units of L {\displaystyle L} but also on the number and units of the space of which the position vectors x {\displaystyle x} and s {\displaystyle s} are elements. This leads to the relationship: [ [ G ] ] = [ [ L ] ] − 1 [ [ d x ] ] − 1 , {\displaystyle [[G]]=[[L]]^{-1}[[dx]]^{-1},} where [ [ G ] ] {\displaystyle [[G]]} is defined as, "the physical units of G {\displaystyle G} " [ further explanation needed ] , and d x {\displaystyle dx} is the volume element of the space (or spacetime ).
For example, if L = ∂ t 2 {\displaystyle L=\partial _{t}^{2}} and time is the only variable then: [ [ L ] ] = [ [ time ] ] − 2 , [ [ d x ] ] = [ [ time ] ] , and [ [ G ] ] = [ [ time ] ] . {\displaystyle {\begin{aligned}[][[L]]&=[[{\text{time}}]]^{-2},\\[1ex][[dx]]&=[[{\text{time}}]],\ {\text{and}}\\[1ex][[G]]&=[[{\text{time}}]].\end{aligned}}} If L = ◻ = 1 c 2 ∂ t 2 − ∇ 2 {\displaystyle L=\square ={\tfrac {1}{c^{2}}}\partial _{t}^{2}-\nabla ^{2}} , the d'Alembert operator , and space has 3 dimensions then: [ [ L ] ] = [ [ length ] ] − 2 , [ [ d x ] ] = [ [ time ] ] [ [ length ] ] 3 , and [ [ G ] ] = [ [ time ] ] − 1 [ [ length ] ] − 1 . {\displaystyle {\begin{aligned}[][[L]]&=[[{\text{length}}]]^{-2},\\[1ex][[dx]]&=[[{\text{time}}]][[{\text{length}}]]^{3},\ {\text{and}}\\[1ex][[G]]&=[[{\text{time}}]]^{-1}[[{\text{length}}]]^{-1}.\end{aligned}}}
If a differential operator L admits a set of eigenvectors Ψ n ( x ) (i.e., a set of functions Ψ n and scalars λ n such that L Ψ n = λ n Ψ n ) that is complete, then it is possible to construct a Green's function from these eigenvectors and eigenvalues .
"Complete" means that the set of functions {Ψ n } satisfies the following completeness relation , δ ( x − x ′ ) = ∑ n = 0 ∞ Ψ n † ( x ′ ) Ψ n ( x ) . {\displaystyle \delta (x-x')=\sum _{n=0}^{\infty }\Psi _{n}^{\dagger }(x')\Psi _{n}(x).}
Then the following holds,
G ( x , x ′ ) = ∑ n = 0 ∞ Ψ n † ( x ′ ) Ψ n ( x ) λ n , {\displaystyle G(x,x')=\sum _{n=0}^{\infty }{\dfrac {\Psi _{n}^{\dagger }(x')\Psi _{n}(x)}{\lambda _{n}}},}
where † {\displaystyle \dagger } represents complex conjugation.
Applying the operator L to each side of this equation results in the completeness relation, which was assumed.
The general study of Green's function written in the above form, and its relationship to the function spaces formed by the eigenvectors, is known as Fredholm theory .
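A small numerical sketch of this expansion: for L = −d²/dx² on [0, π] with Dirichlet boundary conditions, the normalized eigenfunctions are √(2/π)·sin(nx) with eigenvalues n², and truncating the sum gives an approximation to the Green's function that can be compared with the closed form min(x, s)·(π − max(x, s))/π. The operator, interval and truncation length are choices made for the example:

import numpy as np

def green_eigen(x, s, n_terms=2000):
    """Green's function of L = -d^2/dx^2 on [0, pi] (Dirichlet), truncated eigenfunction expansion."""
    n = np.arange(1, n_terms + 1)
    # Normalized eigenfunctions sqrt(2/pi) sin(n x), eigenvalues n^2.
    return np.sum((2 / np.pi) * np.sin(n * x) * np.sin(n * s) / n**2)

def green_exact(x, s):
    """Closed form for the same operator and boundary conditions."""
    return min(x, s) * (np.pi - max(x, s)) / np.pi

x, s = 1.0, 2.0
print(green_eigen(x, s), green_exact(x, s))   # the truncated sum approaches the closed form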
There are several other methods for finding Green's functions, including the method of images , separation of variables , and Laplace transforms . [ 2 ]
If the differential operator L {\displaystyle L} can be factored as L = L 1 L 2 {\displaystyle L=L_{1}L_{2}} then the Green's function of L {\displaystyle L} can be constructed from the Green's functions for L 1 {\displaystyle L_{1}} and L 2 {\displaystyle L_{2}} : G ( x , s ) = ∫ G 2 ( x , s 1 ) G 1 ( s 1 , s ) d s 1 . {\displaystyle G(x,s)=\int G_{2}(x,s_{1})\,G_{1}(s_{1},s)\,ds_{1}.} The above identity follows immediately from taking G ( x , s ) {\displaystyle G(x,s)} to be the representation of the right operator inverse of L {\displaystyle L} , analogous to how for the invertible linear operator C {\displaystyle C} , defined by C = ( A B ) − 1 = B − 1 A − 1 {\displaystyle C=(AB)^{-1}=B^{-1}A^{-1}} , is represented by its matrix elements C i , j {\displaystyle C_{i,j}} .
A further identity follows for differential operators that are scalar polynomials of the derivative, L = P N ( ∂ x ) {\displaystyle L=P_{N}(\partial _{x})} . The fundamental theorem of algebra , combined with the fact that ∂ x {\displaystyle \partial _{x}} commutes with itself , guarantees that the polynomial can be factored, putting L {\displaystyle L} in the form: L = ∏ i = 1 N ( ∂ x − z i ) , {\displaystyle L=\prod _{i=1}^{N}\left(\partial _{x}-z_{i}\right),} where z i {\displaystyle z_{i}} are the zeros of P N ( z ) {\displaystyle P_{N}(z)} . Taking the Fourier transform of L G ( x , s ) = δ ( x − s ) {\displaystyle LG(x,s)=\delta (x-s)} with respect to both x {\displaystyle x} and s {\displaystyle s} gives: G ^ ( k x , k s ) = δ ( k x − k s ) ∏ i = 1 N ( i k x − z i ) . {\displaystyle {\widehat {G}}(k_{x},k_{s})={\frac {\delta (k_{x}-k_{s})}{\prod _{i=1}^{N}(ik_{x}-z_{i})}}.} The fraction can then be split into a sum using a partial fraction decomposition before Fourier transforming back to x {\displaystyle x} and s {\displaystyle s} space. This process yields identities that relate integrals of Green's functions and sums of the same. For example, if L = ( ∂ x + γ ) ( ∂ x + α ) 2 {\displaystyle L=\left(\partial _{x}+\gamma \right)\left(\partial _{x}+\alpha \right)^{2}} then one form for its Green's function is: G ( x , s ) = 1 ( γ − α ) 2 Θ ( x − s ) e − γ ( x − s ) − 1 ( γ − α ) 2 Θ ( x − s ) e − α ( x − s ) + 1 γ − α Θ ( x − s ) ( x − s ) e − α ( x − s ) = ∫ Θ ( x − s 1 ) ( x − s 1 ) e − α ( x − s 1 ) Θ ( s 1 − s ) e − γ ( s 1 − s ) d s 1 . {\displaystyle {\begin{aligned}G(x,s)&={\frac {1}{\left(\gamma -\alpha \right)^{2}}}\Theta (x-s)e^{-\gamma (x-s)}-{\frac {1}{\left(\gamma -\alpha \right)^{2}}}\Theta (x-s)e^{-\alpha (x-s)}+{\frac {1}{\gamma -\alpha }}\Theta (x-s)\left(x-s\right)e^{-\alpha (x-s)}\\[1ex]&=\int \Theta (x-s_{1})\left(x-s_{1}\right)e^{-\alpha (x-s_{1})}\Theta (s_{1}-s)e^{-\gamma (s_{1}-s)}\,ds_{1}.\end{aligned}}} While the example presented is tractable analytically, it illustrates a process that works when the integral is not trivial (for example, when ∇ 2 {\displaystyle \nabla ^{2}} is the operator in the polynomial).
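The factorization identity can also be spot-checked numerically in a simple case. For first-order factors L1 = d/dx + γ and L2 = d/dx + α (with α ≠ γ), the causal Green's functions are Θ(x−s)e^(−γ(x−s)) and Θ(x−s)e^(−α(x−s)), and composing them as in the identity above should reproduce the causal Green's function of the product operator, Θ(x−s)·(e^(−α(x−s)) − e^(−γ(x−s)))/(γ − α). The particular values of α and γ below are arbitrary:

import numpy as np

alpha, gamma_ = 1.3, 0.4
x, s = 2.0, 0.5

# Compose the two first-order Green's functions: G(x, s) = integral of G2(x, s1) G1(s1, s) ds1.
# On [s, x] both step functions equal 1, so only the exponentials remain; midpoint-rule quadrature.
m = 200000
s1 = np.linspace(s, x, m, endpoint=False) + (x - s) / (2 * m)
composed = np.sum(np.exp(-alpha * (x - s1)) * np.exp(-gamma_ * (s1 - s))) * (x - s) / m

# Closed-form causal Green's function of (d/dx + gamma_)(d/dx + alpha).
closed_form = (np.exp(-alpha * (x - s)) - np.exp(-gamma_ * (x - s))) / (gamma_ - alpha)

print(composed, closed_form)   # the two numbers should agree to many decimal places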
The following table gives an overview of Green's functions of frequently appearing differential operators, where r = x 2 + y 2 + z 2 {\textstyle r={\sqrt {x^{2}+y^{2}+z^{2}}}} , ρ = x 2 + y 2 {\textstyle \rho ={\sqrt {x^{2}+y^{2}}}} , Θ ( t ) {\textstyle \Theta (t)} is the Heaviside step function , J ν ( z ) {\textstyle J_{\nu }(z)} is a Bessel function , I ν ( z ) {\textstyle I_{\nu }(z)} is a modified Bessel function of the first kind , and K ν ( z ) {\textstyle K_{\nu }(z)} is a modified Bessel function of the second kind . [ 3 ] Where time ( t ) appears in the first column, the retarded (causal) Green's function is listed.
Green's functions for linear differential operators involving the Laplacian may be readily put to use using the second of Green's identities .
To derive Green's theorem, begin with the divergence theorem (otherwise known as Gauss's theorem ), ∫ V ∇ ⋅ A d V = ∫ S A ⋅ d σ ^ . {\displaystyle \int _{V}\nabla \cdot \mathbf {A} \,dV=\int _{S}\mathbf {A} \cdot d{\hat {\boldsymbol {\sigma }}}\,.}
Let A = φ ∇ ψ − ψ ∇ φ {\displaystyle \mathbf {A} =\varphi \,\nabla \psi -\psi \,\nabla \varphi } and substitute into the divergence theorem.
Compute ∇ ⋅ A {\displaystyle \nabla \cdot \mathbf {A} } and apply the product rule for the ∇ operator, ∇ ⋅ A = ∇ ⋅ ( φ ∇ ψ − ψ ∇ φ ) = ( ∇ φ ) ⋅ ( ∇ ψ ) + φ ∇ 2 ψ − ( ∇ φ ) ⋅ ( ∇ ψ ) − ψ ∇ 2 φ = φ ∇ 2 ψ − ψ ∇ 2 φ . {\displaystyle {\begin{aligned}\nabla \cdot \mathbf {A} &=\nabla \cdot \left(\varphi \,\nabla \psi \;-\;\psi \,\nabla \varphi \right)\\&=(\nabla \varphi )\cdot (\nabla \psi )\;+\;\varphi \,\nabla ^{2}\psi \;-\;(\nabla \varphi )\cdot (\nabla \psi )\;-\;\psi \nabla ^{2}\varphi \\&=\varphi \,\nabla ^{2}\psi \;-\;\psi \,\nabla ^{2}\varphi .\end{aligned}}}
Plugging this into the divergence theorem produces Green's theorem , ∫ V ( φ ∇ 2 ψ − ψ ∇ 2 φ ) d V = ∫ S ( φ ∇ ψ − ψ ∇ φ ) ⋅ d σ ^ . {\displaystyle \int _{V}\left(\varphi \,\nabla ^{2}\psi -\psi \,\nabla ^{2}\varphi \right)dV=\int _{S}\left(\varphi \,\nabla \psi -\psi \nabla \,\varphi \right)\cdot d{\hat {\boldsymbol {\sigma }}}.}
Suppose that the linear differential operator L is the Laplacian , ∇ 2 , and that there is a Green's function G for the Laplacian. The defining property of the Green's function still holds, L G ( x , x ′ ) = ∇ 2 G ( x , x ′ ) = δ ( x − x ′ ) . {\displaystyle LG(\mathbf {x} ,\mathbf {x} ')=\nabla ^{2}G(\mathbf {x} ,\mathbf {x} ')=\delta (\mathbf {x} -\mathbf {x} ').}
Let ψ = G {\displaystyle \psi =G} in Green's second identity, see Green's identities . Then, ∫ V [ φ ( x ′ ) δ ( x − x ′ ) − G ( x , x ′ ) ∇ ′ 2 φ ( x ′ ) ] d 3 x ′ = ∫ S [ φ ( x ′ ) ∇ ′ G ( x , x ′ ) − G ( x , x ′ ) ∇ ′ φ ( x ′ ) ] ⋅ d σ ^ ′ . {\displaystyle \int _{V}\left[\varphi (\mathbf {x} ')\delta (\mathbf {x} -\mathbf {x} ')-G(\mathbf {x} ,\mathbf {x} ')\,{\nabla '}^{2}\,\varphi (\mathbf {x} ')\right]d^{3}\mathbf {x} '=\int _{S}\left[\varphi (\mathbf {x} ')\,{\nabla '}G(\mathbf {x} ,\mathbf {x} ')-G(\mathbf {x} ,\mathbf {x} ')\,{\nabla '}\varphi (\mathbf {x} ')\right]\cdot d{\hat {\boldsymbol {\sigma }}}'.}
Using this expression, it is possible to solve Laplace's equation ∇ 2 φ ( x ) = 0 or Poisson's equation ∇ 2 φ ( x ) = − ρ ( x ) , subject to either Neumann or Dirichlet boundary conditions. In other words, we can solve for φ ( x ) everywhere inside a volume where either (1) the value of φ ( x ) is specified on the bounding surface of the volume (Dirichlet boundary conditions), or (2) the normal derivative of φ ( x ) is specified on the bounding surface (Neumann boundary conditions).
Suppose the problem is to solve for φ ( x ) inside the region. Then the integral ∫ V φ ( x ′ ) δ ( x − x ′ ) d 3 x ′ {\displaystyle \int _{V}\varphi (\mathbf {x} ')\,\delta (\mathbf {x} -\mathbf {x} ')\,d^{3}\mathbf {x} '} reduces to simply φ ( x ) due to the defining property of the Dirac delta function and we have φ ( x ) = − ∫ V G ( x , x ′ ) ρ ( x ′ ) d 3 x ′ + ∫ S [ φ ( x ′ ) ∇ ′ G ( x , x ′ ) − G ( x , x ′ ) ∇ ′ φ ( x ′ ) ] ⋅ d σ ^ ′ . {\displaystyle \varphi (\mathbf {x} )=-\int _{V}G(\mathbf {x} ,\mathbf {x} ')\,\rho (\mathbf {x} ')\,d^{3}\mathbf {x} '+\int _{S}\left[\varphi (\mathbf {x} ')\,\nabla 'G(\mathbf {x} ,\mathbf {x} ')-G(\mathbf {x} ,\mathbf {x} ')\,\nabla '\varphi (\mathbf {x} ')\right]\cdot d{\hat {\boldsymbol {\sigma }}}'.}
This form expresses the well-known property of harmonic functions , that if the value or normal derivative is known on a bounding surface, then the value of the function inside the volume is known everywhere .
In electrostatics , φ ( x ) is interpreted as the electric potential , ρ ( x ) as electric charge density , and the normal derivative ∇ φ ( x ′ ) ⋅ d σ ^ ′ {\displaystyle \nabla \varphi (\mathbf {x} ')\cdot d{\hat {\boldsymbol {\sigma }}}'} as the normal component of the electric field.
If the problem is to solve a Dirichlet boundary value problem, the Green's function should be chosen such that G ( x , x ′) vanishes when either x or x′ is on the bounding surface. Thus only one of the two terms in the surface integral remains. If the problem is to solve a Neumann boundary value problem, it might seem logical to choose Green's function so that its normal derivative vanishes on the bounding surface. However, application of Gauss's theorem to the differential equation defining the Green's function yields ∫ S ∇ ′ G ( x , x ′ ) ⋅ d σ ^ ′ = ∫ V ∇ ′ 2 G ( x , x ′ ) d 3 x ′ = ∫ V δ ( x − x ′ ) d 3 x ′ = 1 , {\displaystyle \int _{S}\nabla 'G(\mathbf {x} ,\mathbf {x} ')\cdot d{\hat {\boldsymbol {\sigma }}}'=\int _{V}\nabla '^{2}G(\mathbf {x} ,\mathbf {x} ')\,d^{3}\mathbf {x} '=\int _{V}\delta (\mathbf {x} -\mathbf {x} ')\,d^{3}\mathbf {x} '=1\,,} meaning the normal derivative of G ( x , x ′) cannot vanish on the surface, because it must integrate to 1 on the surface. [ 4 ]
The simplest form the normal derivative can take is that of a constant, namely 1/ S , where S is the area of the bounding surface. The surface term in the solution becomes ∫ S φ ( x ′ ) ∇ ′ G ( x , x ′ ) ⋅ d σ ^ ′ = ⟨ φ ⟩ S {\displaystyle \int _{S}\varphi (\mathbf {x} ')\,\nabla 'G(\mathbf {x} ,\mathbf {x} ')\cdot d{\hat {\boldsymbol {\sigma }}}'=\langle \varphi \rangle _{S}} where ⟨ φ ⟩ S {\displaystyle \langle \varphi \rangle _{S}} is the average value of the potential on the surface. This number is not known in general, but it is often unimportant, since the goal is usually to obtain the electric field given by the gradient of the potential, rather than the potential itself.
With no boundary conditions, the Green's function for the Laplacian ( Green's function for the three-variable Laplace equation ) is G ( x , x ′ ) = − 1 4 π | x − x ′ | . {\displaystyle G(\mathbf {x} ,\mathbf {x} ')=-{\frac {1}{4\pi \left|\mathbf {x} -\mathbf {x} '\right|}}.}
Supposing that the bounding surface goes out to infinity and plugging in this expression for the Green's function finally yields the standard expression for electric potential in terms of electric charge density as
φ ( x ) = ∫ V ρ ( x ′ ) 4 π ε | x − x ′ | d 3 x ′ . {\displaystyle \varphi (\mathbf {x} )=\int _{V}{\dfrac {\rho (\mathbf {x} ')}{4\pi \varepsilon \left|\mathbf {x} -\mathbf {x} '\right|}}\,d^{3}\mathbf {x} '\,.}
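As an illustration of how this superposition integral can be used in practice, the following minimal sketch (using NumPy; the function name, charge configuration and observation point are arbitrary choices, not part of the derivation above) evaluates the integral for a charge density made of discrete point charges:

```python
import numpy as np

EPS0 = 8.8541878128e-12  # vacuum permittivity (F/m)

def potential(obs_points, charge_points, charges, eps=EPS0):
    """Superpose the free-space Green's function of the Laplacian:
    phi(x) = sum_i q_i / (4*pi*eps*|x - x_i|), i.e. the volume integral
    of rho/(4*pi*eps*|x - x'|) for a charge density made of point charges."""
    obs = np.atleast_2d(np.asarray(obs_points, dtype=float))     # (N, 3)
    src = np.atleast_2d(np.asarray(charge_points, dtype=float))  # (M, 3)
    q = np.asarray(charges, dtype=float)                         # (M,)
    r = np.linalg.norm(obs[:, None, :] - src[None, :, :], axis=-1)  # (N, M)
    return (q / (4.0 * np.pi * eps * r)).sum(axis=-1)

# Single 1 nC charge at the origin, observed 1 m away:
print(potential([[1.0, 0.0, 0.0]], [[0.0, 0.0, 0.0]], [1e-9]))  # ~[8.99] volts
```

For a single point charge this reproduces the Coulomb potential q/(4πε₀r), which serves as a quick consistency check.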
Find the Green function for the following problem, whose Green's function number is X11: L u = u ″ + k 2 u = f ( x ) u ( 0 ) = 0 , u ( π 2 k ) = 0. {\displaystyle {\begin{aligned}Lu&=u''+k^{2}u=f(x)\\u(0)&=0,\quad u{\left({\tfrac {\pi }{2k}}\right)}=0.\end{aligned}}}
First step: The Green's function for the linear operator at hand is defined as the solution to G ″ ( x , s ) + k 2 G ( x , s ) = δ ( x − s ) {\displaystyle G''(x,s)+k^{2}G(x,s)=\delta (x-s)} (Eq. *), together with the boundary conditions G ( 0 , s ) = 0 {\displaystyle G(0,s)=0} and G ( π 2 k , s ) = 0 {\displaystyle G{\left({\tfrac {\pi }{2k}},s\right)}=0} .
If x ≠ s {\displaystyle x\neq s} , then the delta function gives zero, and the general solution is G ( x , s ) = c 1 cos k x + c 2 sin k x . {\displaystyle G(x,s)=c_{1}\cos kx+c_{2}\sin kx.}
For x < s {\displaystyle x<s} , the boundary condition at x = 0 {\displaystyle x=0} implies G ( 0 , s ) = c 1 ⋅ 1 + c 2 ⋅ 0 = 0 , c 1 = 0 {\displaystyle G(0,s)=c_{1}\cdot 1+c_{2}\cdot 0=0,\quad c_{1}=0} , provided s ≠ π 2 k {\displaystyle s\neq {\tfrac {\pi }{2k}}} (so that x = 0 indeed lies in the region x < s {\displaystyle x<s} ).
For x > s {\displaystyle x>s} , write the general solution as G ( x , s ) = c 3 cos k x + c 4 sin k x {\displaystyle G(x,s)=c_{3}\cos kx+c_{4}\sin kx} . The boundary condition at x = π 2 k {\displaystyle x={\tfrac {\pi }{2k}}} implies G ( π 2 k , s ) = c 3 ⋅ 0 + c 4 ⋅ 1 = 0 , c 4 = 0 {\displaystyle G{\left({\tfrac {\pi }{2k}},s\right)}=c_{3}\cdot 0+c_{4}\cdot 1=0,\quad c_{4}=0}
The condition G ( 0 , s ) = 0 {\displaystyle G(0,s)=0} does not constrain this branch, just as the condition at x = π 2 k {\displaystyle x={\tfrac {\pi }{2k}}} did not constrain the branch x < s {\displaystyle x<s} .
To summarize the results thus far: G ( x , s ) = { c 2 sin k x , for x < s , c 3 cos k x , for s < x . {\displaystyle G(x,s)={\begin{cases}c_{2}\sin kx,&{\text{for }}x<s,\\[0.4ex]c_{3}\cos kx,&{\text{for }}s<x.\end{cases}}}
Second step: The next task is to determine c 2 {\displaystyle c_{2}} and c 3 {\displaystyle c_{3}} .
Ensuring continuity in the Green's function at x = s {\displaystyle x=s} implies c 2 sin k s = c 3 cos k s {\displaystyle c_{2}\sin ks=c_{3}\cos ks}
One can ensure proper discontinuity in the first derivative by integrating the defining differential equation (i.e., Eq. * ) from x = s − ε {\displaystyle x=s-\varepsilon } to x = s + ε {\displaystyle x=s+\varepsilon } and taking the limit as ε {\displaystyle \varepsilon } goes to zero. Note that we only integrate the second derivative as the remaining term will be continuous by construction. c 3 ⋅ ( − k sin k s ) − c 2 ⋅ ( k cos k s ) = 1 {\displaystyle c_{3}\cdot (-k\sin ks)-c_{2}\cdot (k\cos ks)=1}
The two (dis)continuity equations can be solved for c 2 {\displaystyle c_{2}} and c 3 {\displaystyle c_{3}} to obtain c 2 = − cos k s k ; c 3 = − sin k s k {\displaystyle c_{2}=-{\frac {\cos ks}{k}}\quad ;\quad c_{3}=-{\frac {\sin ks}{k}}}
So Green's function for this problem is: G ( x , s ) = { − cos k s k sin k x , x < s , − sin k s k cos k x , s < x . {\displaystyle G(x,s)={\begin{cases}-{\frac {\cos ks}{k}}\sin kx,&x<s,\\-{\frac {\sin ks}{k}}\cos kx,&s<x.\end{cases}}} | https://en.wikipedia.org/wiki/Green's_function |
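To see the derived Green's function at work, the following sketch (assuming NumPy; the grid size, the choice k = 2 and the test forcing f(x) = sin 3kx are arbitrary) evaluates the integral of G(x, s) f(s) over 0 < s < π/(2k) with a trapezoidal rule and compares it with the closed-form solution u(x) = −[sin 3kx + sin kx]/(8k²), which satisfies u″ + k²u = f and both boundary conditions:

```python
import numpy as np

k = 2.0
L = np.pi / (2.0 * k)                     # right boundary, where u vanishes

def G(x, s):
    """Piecewise Green's function derived above."""
    return np.where(x < s,
                    -np.cos(k * s) * np.sin(k * x) / k,
                    -np.sin(k * s) * np.cos(k * x) / k)

def solve(f, n=4001):
    """u(x) = integral of G(x, s) f(s) ds over [0, L], by the trapezoidal rule."""
    s = np.linspace(0.0, L, n)
    x = np.linspace(0.0, L, n)
    w = np.full(n, s[1] - s[0])
    w[0] = w[-1] = (s[1] - s[0]) / 2.0    # trapezoid weights
    u = (G(x[:, None], s[None, :]) * f(s) * w).sum(axis=1)
    return x, u

f = lambda x: np.sin(3.0 * k * x)
x, u = solve(f)
u_exact = -(np.sin(3.0 * k * x) + np.sin(k * x)) / (8.0 * k**2)

print(np.max(np.abs(u - u_exact)))   # small discretization error
print(u[0], u[-1])                   # both boundary values are (numerically) zero
```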
In many-body theory , the term Green's function (or Green function ) is sometimes used interchangeably with correlation function , but refers specifically to correlators of field operators or creation and annihilation operators .
The name comes from the Green's functions used to solve inhomogeneous differential equations , to which they are loosely related. (Specifically, only two-point "Green's functions" in the case of a non-interacting system are Green's functions in the mathematical sense; the linear operator that they invert is the Hamiltonian operator , which in the non-interacting case is quadratic in the fields.)
We consider a many-body theory with field operator (annihilation operator written in the position basis) ψ ( x ) {\displaystyle \psi (\mathbf {x} )} .
The Heisenberg operators can be written in terms of Schrödinger operators as ψ ( x , t ) = e i K t ψ ( x ) e − i K t , {\displaystyle \psi (\mathbf {x} ,t)=e^{iKt}\psi (\mathbf {x} )e^{-iKt},} and the creation operator is ψ ¯ ( x , t ) = [ ψ ( x , t ) ] † {\displaystyle {\bar {\psi }}(\mathbf {x} ,t)=[\psi (\mathbf {x} ,t)]^{\dagger }} , where K = H − μ N {\displaystyle K=H-\mu N} is the grand-canonical Hamiltonian.
Similarly, for the imaginary-time operators, ψ ( x , τ ) = e K τ ψ ( x ) e − K τ {\displaystyle \psi (\mathbf {x} ,\tau )=e^{K\tau }\psi (\mathbf {x} )e^{-K\tau }} ψ ¯ ( x , τ ) = e K τ ψ † ( x ) e − K τ . {\displaystyle {\bar {\psi }}(\mathbf {x} ,\tau )=e^{K\tau }\psi ^{\dagger }(\mathbf {x} )e^{-K\tau }.} [Note that the imaginary-time creation operator ψ ¯ ( x , τ ) {\displaystyle {\bar {\psi }}(\mathbf {x} ,\tau )} is not the Hermitian conjugate of the annihilation operator ψ ( x , τ ) {\displaystyle \psi (\mathbf {x} ,\tau )} .]
In real time, the 2 n {\displaystyle 2n} -point Green function is defined by G ( n ) ( 1 … n ∣ 1 ′ … n ′ ) = i n ⟨ T ψ ( 1 ) … ψ ( n ) ψ ¯ ( n ′ ) … ψ ¯ ( 1 ′ ) ⟩ , {\displaystyle G^{(n)}(1\ldots n\mid 1'\ldots n')=i^{n}\langle T\psi (1)\ldots \psi (n){\bar {\psi }}(n')\ldots {\bar {\psi }}(1')\rangle ,} where we have used a condensed notation in which j {\displaystyle j} signifies ( x j , t j ) {\displaystyle (\mathbf {x} _{j},t_{j})} and j ′ {\displaystyle j'} signifies ( x j ′ , t j ′ ) {\displaystyle (\mathbf {x} _{j}',t_{j}')} . The operator T {\displaystyle T} denotes time ordering , and indicates that the field operators that follow it are to be ordered so that their time arguments increase from right to left.
In imaginary time, the corresponding definition is G ( n ) ( 1 … n ∣ 1 ′ … n ′ ) = ⟨ T ψ ( 1 ) … ψ ( n ) ψ ¯ ( n ′ ) … ψ ¯ ( 1 ′ ) ⟩ , {\displaystyle {\mathcal {G}}^{(n)}(1\ldots n\mid 1'\ldots n')=\langle T\psi (1)\ldots \psi (n){\bar {\psi }}(n')\ldots {\bar {\psi }}(1')\rangle ,} where j {\displaystyle j} signifies x j , τ j {\displaystyle \mathbf {x} _{j},\tau _{j}} . (The imaginary-time variables τ j {\displaystyle \tau _{j}} are restricted to the range from 0 {\displaystyle 0} to the inverse temperature β = 1 k B T {\textstyle \beta ={\frac {1}{k_{\text{B}}T}}} .)
Note regarding signs and normalization used in these definitions: The signs of the Green functions have been chosen so that Fourier transform of the two-point ( n = 1 {\displaystyle n=1} ) thermal Green function for a free particle is G ( k , ω n ) = 1 − i ω n + ξ k , {\displaystyle {\mathcal {G}}(\mathbf {k} ,\omega _{n})={\frac {1}{-i\omega _{n}+\xi _{\mathbf {k} }}},} and the retarded Green function is G R ( k , ω ) = 1 − ( ω + i η ) + ξ k , {\displaystyle G^{\mathrm {R} }(\mathbf {k} ,\omega )={\frac {1}{-(\omega +i\eta )+\xi _{\mathbf {k} }}},} where ω n = [ 2 n + θ ( − ζ ) ] π β {\displaystyle \omega _{n}={\frac {[2n+\theta (-\zeta )]\pi }{\beta }}} is the Matsubara frequency .
Throughout, ζ {\displaystyle \zeta } is + 1 {\displaystyle +1} for bosons and − 1 {\displaystyle -1} for fermions and [ … , … ] = [ … , … ] − ζ {\displaystyle [\ldots ,\ldots ]=[\ldots ,\ldots ]_{-\zeta }} denotes either a commutator or anticommutator as appropriate.
(See below for details.)
The Green function with a single pair of arguments ( n = 1 {\displaystyle n=1} ) is referred to as the two-point function, or propagator . In the presence of both spatial and temporal translational symmetry, it depends only on the difference of its arguments. Taking the Fourier transform with respect to both space and time gives G ( x τ ∣ x ′ τ ′ ) = ∫ k d k 1 β ∑ ω n G ( k , ω n ) e i k ⋅ ( x − x ′ ) − i ω n ( τ − τ ′ ) , {\displaystyle {\mathcal {G}}(\mathbf {x} \tau \mid \mathbf {x} '\tau ')=\int _{\mathbf {k} }d\mathbf {k} {\frac {1}{\beta }}\sum _{\omega _{n}}{\mathcal {G}}(\mathbf {k} ,\omega _{n})e^{i\mathbf {k} \cdot (\mathbf {x} -\mathbf {x} ')-i\omega _{n}(\tau -\tau ')},} where the sum is over the appropriate Matsubara frequencies (and the integral involves an implicit factor of ( L / 2 π ) d {\displaystyle (L/2\pi )^{d}} , as usual).
In real time, we will explicitly indicate the time-ordered function with a superscript T: G T ( x t ∣ x ′ t ′ ) = ∫ k d k ∫ d ω 2 π G T ( k , ω ) e i k ⋅ ( x − x ′ ) − i ω ( t − t ′ ) . {\displaystyle G^{\mathrm {T} }(\mathbf {x} t\mid \mathbf {x} 't')=\int _{\mathbf {k} }d\mathbf {k} \int {\frac {d\omega }{2\pi }}G^{\mathrm {T} }(\mathbf {k} ,\omega )e^{i\mathbf {k} \cdot (\mathbf {x} -\mathbf {x} ')-i\omega (t-t')}.}
The real-time two-point Green function can be written in terms of 'retarded' and 'advanced' Green functions, which will turn out to have simpler analyticity properties. The retarded and advanced Green functions are defined by G R ( x t ∣ x ′ t ′ ) = − i ⟨ [ ψ ( x , t ) , ψ ¯ ( x ′ , t ′ ) ] ζ ⟩ Θ ( t − t ′ ) {\displaystyle G^{\mathrm {R} }(\mathbf {x} t\mid \mathbf {x} 't')=-i\langle [\psi (\mathbf {x} ,t),{\bar {\psi }}(\mathbf {x} ',t')]_{\zeta }\rangle \Theta (t-t')} and G A ( x t ∣ x ′ t ′ ) = i ⟨ [ ψ ( x , t ) , ψ ¯ ( x ′ , t ′ ) ] ζ ⟩ Θ ( t ′ − t ) , {\displaystyle G^{\mathrm {A} }(\mathbf {x} t\mid \mathbf {x} 't')=i\langle [\psi (\mathbf {x} ,t),{\bar {\psi }}(\mathbf {x} ',t')]_{\zeta }\rangle \Theta (t'-t),} respectively.
They are related to the time-ordered Green function by G T ( k , ω ) = [ 1 + ζ n ( ω ) ] G R ( k , ω ) − ζ n ( ω ) G A ( k , ω ) , {\displaystyle G^{\mathrm {T} }(\mathbf {k} ,\omega )=[1+\zeta n(\omega )]G^{\mathrm {R} }(\mathbf {k} ,\omega )-\zeta n(\omega )G^{\mathrm {A} }(\mathbf {k} ,\omega ),} where n ( ω ) = 1 e β ω − ζ {\displaystyle n(\omega )={\frac {1}{e^{\beta \omega }-\zeta }}} is the Bose–Einstein or Fermi–Dirac distribution function.
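A small numerical sketch of these definitions (using NumPy; the parameter values β = 5, ξ = 0.7 and the broadening η are arbitrary, and ζ = −1 selects fermions) constructs the free-particle retarded and advanced functions quoted above and combines them into the time-ordered function via the relation just stated:

```python
import numpy as np

zeta = -1.0    # -1 for fermions, +1 for bosons
beta = 5.0     # inverse temperature
xi   = 0.7     # single-particle energy measured from the chemical potential
eta  = 1e-3    # positive infinitesimal

def n(omega):
    """Bose-Einstein (zeta = +1) or Fermi-Dirac (zeta = -1) distribution."""
    return 1.0 / (np.exp(beta * omega) - zeta)

def G_R(omega):
    return 1.0 / (-(omega + 1j * eta) + xi)

def G_A(omega):
    return 1.0 / (-(omega - 1j * eta) + xi)

def G_T(omega):
    """Time-ordered function assembled from the relation quoted above."""
    return (1.0 + zeta * n(omega)) * G_R(omega) - zeta * n(omega) * G_A(omega)

w = np.linspace(-3.0, 3.0, 7)
print(np.allclose(G_A(w), np.conj(G_R(w))))   # True: G^A is the conjugate of G^R on the real axis
print(G_T(xi + 0.5))                          # a sample value of the time-ordered function
```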
The thermal Green functions are defined only when both imaginary-time arguments are within the range 0 {\displaystyle 0} to β {\displaystyle \beta } . The two-point Green function has the following properties. (The position or momentum arguments are suppressed in this section.)
Firstly, it depends only on the difference of the imaginary times: G ( τ , τ ′ ) = G ( τ − τ ′ ) . {\displaystyle {\mathcal {G}}(\tau ,\tau ')={\mathcal {G}}(\tau -\tau ').} The argument τ − τ ′ {\displaystyle \tau -\tau '} is allowed to run from − β {\displaystyle -\beta } to β {\displaystyle \beta } .
Secondly, G ( τ ) {\displaystyle {\mathcal {G}}(\tau )} is (anti)periodic under shifts of β {\displaystyle \beta } . Because of the small domain within which the function is defined, this means just G ( τ − β ) = ζ G ( τ ) , {\displaystyle {\mathcal {G}}(\tau -\beta )=\zeta {\mathcal {G}}(\tau ),} for 0 < τ < β {\displaystyle 0<\tau <\beta } . Time ordering is crucial for this property, which can be proved straightforwardly, using the cyclicity of the trace operation.
These two properties allow for the Fourier transform representation and its inverse, G ( ω n ) = ∫ 0 β d τ G ( τ ) e i ω n τ {\displaystyle {\mathcal {G}}(\omega _{n})=\int _{0}^{\beta }d\tau \,{\mathcal {G}}(\tau )\,e^{i\omega _{n}\tau }} and G ( τ ) = 1 β ∑ ω n G ( ω n ) e − i ω n τ . {\displaystyle {\mathcal {G}}(\tau )={\frac {1}{\beta }}\sum _{\omega _{n}}{\mathcal {G}}(\omega _{n})\,e^{-i\omega _{n}\tau }.}
Finally, note that G ( τ ) {\displaystyle {\mathcal {G}}(\tau )} has a discontinuity at τ = 0 {\displaystyle \tau =0} ; this is consistent with the large-frequency behaviour G ( ω n ) ∼ 1 / | ω n | {\displaystyle {\mathcal {G}}(\omega _{n})\sim 1/|\omega _{n}|} .
The propagators in real and imaginary time can both be related to the spectral density (or spectral weight), given by ρ ( k , ω ) = 1 Z ∑ α , α ′ 2 π δ ( E α − E α ′ − ω ) | ⟨ α ∣ ψ k † ∣ α ′ ⟩ | 2 ( e − β E α ′ − ζ e − β E α ) , {\displaystyle \rho (\mathbf {k} ,\omega )={\frac {1}{\mathcal {Z}}}\sum _{\alpha ,\alpha '}2\pi \delta (E_{\alpha }-E_{\alpha '}-\omega )|\langle \alpha \mid \psi _{\mathbf {k} }^{\dagger }\mid \alpha '\rangle |^{2}\left(e^{-\beta E_{\alpha '}}-\zeta e^{-\beta E_{\alpha }}\right),} where | α ⟩ refers to a (many-body) eigenstate of the grand-canonical Hamiltonian H − μN , with eigenvalue E α .
The imaginary-time propagator is then given by G ( k , ω n ) = ∫ − ∞ ∞ d ω ′ 2 π ρ ( k , ω ′ ) − i ω n + ω ′ , {\displaystyle {\mathcal {G}}(\mathbf {k} ,\omega _{n})=\int _{-\infty }^{\infty }{\frac {d\omega '}{2\pi }}{\frac {\rho (\mathbf {k} ,\omega ')}{-i\omega _{n}+\omega '}}~,} and the retarded propagator by G R ( k , ω ) = ∫ − ∞ ∞ d ω ′ 2 π ρ ( k , ω ′ ) − ( ω + i η ) + ω ′ , {\displaystyle G^{\mathrm {R} }(\mathbf {k} ,\omega )=\int _{-\infty }^{\infty }{\frac {d\omega '}{2\pi }}{\frac {\rho (\mathbf {k} ,\omega ')}{-(\omega +i\eta )+\omega '}},} where the limit as η → 0 + {\displaystyle \eta \to 0^{+}} is implied.
The advanced propagator is given by the same expression, but with − i η {\displaystyle -i\eta } in the denominator.
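The spectral integral for the imaginary-time propagator can be checked numerically. In the sketch below (NumPy; the parameter values are arbitrary), the delta-function spectral weight of a free particle is represented by a narrow Lorentzian of width Γ, which still satisfies the sum rule; the numerically integrated thermal propagator then agrees with the closed form 1/(−iω_n + ξ − iΓ sgn ω_n) for that Lorentzian, and reduces to 1/(−iω_n + ξ) as Γ → 0:

```python
import numpy as np

beta, xi = 5.0, 0.7         # inverse temperature and single-particle energy
Gamma = 0.05                # broadened stand-in for the delta-function spectral weight

w = np.linspace(xi - 40.0, xi + 40.0, 80001)
rho = 2.0 * Gamma / ((w - xi) ** 2 + Gamma ** 2)     # integrates to 2*pi (sum rule)

def trapz(y, x):
    return np.sum((y[1:] + y[:-1]) * np.diff(x) / 2.0)

n = 3
wn = (2 * n + 1) * np.pi / beta                      # a fermionic Matsubara frequency
G_numeric = trapz(rho / (-1j * wn + w), w) / (2.0 * np.pi)
G_exact = 1.0 / (-1j * wn + xi - 1j * Gamma * np.sign(wn))   # closed form for this Lorentzian
print(G_numeric, G_exact)   # agree; letting Gamma -> 0 recovers 1/(-i*w_n + xi)
```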
The time-ordered function can be found in terms of G R {\displaystyle G^{\mathrm {R} }} and G A {\displaystyle G^{\mathrm {A} }} . As claimed above, G R ( ω ) {\displaystyle G^{\mathrm {R} }(\omega )} and G A ( ω ) {\displaystyle G^{\mathrm {A} }(\omega )} have simple analyticity properties: the former (latter) has all its poles and discontinuities in the lower (upper) half-plane.
The thermal propagator G ( ω n ) {\displaystyle {\mathcal {G}}(\omega _{n})} has all its poles and discontinuities on the imaginary ω n {\displaystyle \omega _{n}} axis.
The spectral density can be found very straightforwardly from G R {\displaystyle G^{\mathrm {R} }} , using the Sokhatsky–Weierstrass theorem lim η → 0 + 1 x ± i η = P 1 x ∓ i π δ ( x ) , {\displaystyle \lim _{\eta \to 0^{+}}{\frac {1}{x\pm i\eta }}=P{\frac {1}{x}}\mp i\pi \delta (x),} where P denotes the Cauchy principal part .
This gives ρ ( k , ω ) = 2 Im G R ( k , ω ) . {\displaystyle \rho (\mathbf {k} ,\omega )=2\operatorname {Im} G^{\mathrm {R} }(\mathbf {k} ,\omega ).}
This furthermore implies that G R ( k , ω ) {\displaystyle G^{\mathrm {R} }(\mathbf {k} ,\omega )} obeys the following relationship between its real and imaginary parts: Re G R ( k , ω ) = − 2 P ∫ − ∞ ∞ d ω ′ 2 π Im G R ( k , ω ′ ) ω − ω ′ , {\displaystyle \operatorname {Re} G^{\mathrm {R} }(\mathbf {k} ,\omega )=-2P\int _{-\infty }^{\infty }{\frac {d\omega '}{2\pi }}{\frac {\operatorname {Im} G^{\mathrm {R} }(\mathbf {k} ,\omega ')}{\omega -\omega '}},} where P {\displaystyle P} denotes the principal value of the integral.
The spectral density obeys a sum rule, ∫ − ∞ ∞ d ω 2 π ρ ( k , ω ) = 1 , {\displaystyle \int _{-\infty }^{\infty }{\frac {d\omega }{2\pi }}\rho (\mathbf {k} ,\omega )=1,} which gives G R ( ω ) ∼ 1 | ω | {\displaystyle G^{\mathrm {R} }(\omega )\sim {\frac {1}{|\omega |}}} as | ω | → ∞ {\displaystyle |\omega |\to \infty } .
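Both the relation ρ = 2 Im G^R and the sum rule can be verified directly for the free retarded function, whose spectral density is a narrow Lorentzian. A minimal sketch (NumPy; grid and parameter values are arbitrary):

```python
import numpy as np

xi, eta = 0.7, 0.01

w = np.linspace(xi - 200.0, xi + 200.0, 400001)
G_R = 1.0 / (-(w + 1j * eta) + xi)        # free retarded function
rho = 2.0 * np.imag(G_R)                  # spectral density, rho = 2 Im G^R

def trapz(y, x):
    return np.sum((y[1:] + y[:-1]) * np.diff(x) / 2.0)

print(trapz(rho, w) / (2.0 * np.pi))            # ~1: the sum rule
print(np.abs(G_R[-1]) * np.abs(w[-1] - xi))     # ~1: |G^R| falls off as 1/|omega|
```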
The similarity of the spectral representations of the imaginary- and real-time Green functions allows us to define the function G ( k , z ) = ∫ − ∞ ∞ d x 2 π ρ ( k , x ) − z + x , {\displaystyle G(\mathbf {k} ,z)=\int _{-\infty }^{\infty }{\frac {dx}{2\pi }}{\frac {\rho (\mathbf {k} ,x)}{-z+x}},} which is related to G {\displaystyle {\mathcal {G}}} and G R {\displaystyle G^{\mathrm {R} }} by G ( k , ω n ) = G ( k , i ω n ) {\displaystyle {\mathcal {G}}(\mathbf {k} ,\omega _{n})=G(\mathbf {k} ,i\omega _{n})} and G R ( k , ω ) = G ( k , ω + i η ) . {\displaystyle G^{\mathrm {R} }(\mathbf {k} ,\omega )=G(\mathbf {k} ,\omega +i\eta ).} A similar expression obviously holds for G A {\displaystyle G^{\mathrm {A} }} .
The relation between G ( k , z ) {\displaystyle G(\mathbf {k} ,z)} and ρ ( k , x ) {\displaystyle \rho (\mathbf {k} ,x)} is referred to as a Hilbert transform .
We demonstrate the proof of the spectral representation of the propagator in the case of the thermal Green function, defined as G ( x , τ ∣ x ′ , τ ′ ) = ⟨ T ψ ( x , τ ) ψ ¯ ( x ′ , τ ′ ) ⟩ . {\displaystyle {\mathcal {G}}(\mathbf {x} ,\tau \mid \mathbf {x} ',\tau ')=\langle T\psi (\mathbf {x} ,\tau ){\bar {\psi }}(\mathbf {x} ',\tau ')\rangle .}
Due to translational symmetry, it is only necessary to consider G ( x , τ ∣ 0 , 0 ) {\displaystyle {\mathcal {G}}(\mathbf {x} ,\tau \mid \mathbf {0} ,0)} for τ > 0 {\displaystyle \tau >0} , given by G ( x , τ ∣ 0 , 0 ) = 1 Z ∑ α ′ e − β E α ′ ⟨ α ′ ∣ ψ ( x , τ ) ψ ¯ ( 0 , 0 ) ∣ α ′ ⟩ . {\displaystyle {\mathcal {G}}(\mathbf {x} ,\tau \mid \mathbf {0} ,0)={\frac {1}{\mathcal {Z}}}\sum _{\alpha '}e^{-\beta E_{\alpha '}}\langle \alpha '\mid \psi (\mathbf {x} ,\tau ){\bar {\psi }}(\mathbf {0} ,0)\mid \alpha '\rangle .} Inserting a complete set of eigenstates gives G ( x , τ ∣ 0 , 0 ) = 1 Z ∑ α , α ′ e − β E α ′ ⟨ α ′ ∣ ψ ( x , τ ) ∣ α ⟩ ⟨ α ∣ ψ ¯ ( 0 , 0 ) ∣ α ′ ⟩ . {\displaystyle {\mathcal {G}}(\mathbf {x} ,\tau \mid \mathbf {0} ,0)={\frac {1}{\mathcal {Z}}}\sum _{\alpha ,\alpha '}e^{-\beta E_{\alpha '}}\langle \alpha '\mid \psi (\mathbf {x} ,\tau )\mid \alpha \rangle \langle \alpha \mid {\bar {\psi }}(\mathbf {0} ,0)\mid \alpha '\rangle .}
Since | α ⟩ {\displaystyle |\alpha \rangle } and | α ′ ⟩ {\displaystyle |\alpha '\rangle } are eigenstates of H − μ N {\displaystyle H-\mu N} , the Heisenberg operators can be rewritten in terms of Schrödinger operators, giving G ( x , τ | 0 , 0 ) = 1 Z ∑ α , α ′ e − β E α ′ e τ ( E α ′ − E α ) ⟨ α ′ ∣ ψ ( x ) ∣ α ⟩ ⟨ α ∣ ψ † ( 0 ) ∣ α ′ ⟩ . {\displaystyle {\mathcal {G}}(\mathbf {x} ,\tau |\mathbf {0} ,0)={\frac {1}{\mathcal {Z}}}\sum _{\alpha ,\alpha '}e^{-\beta E_{\alpha '}}e^{\tau (E_{\alpha '}-E_{\alpha })}\langle \alpha '\mid \psi (\mathbf {x} )\mid \alpha \rangle \langle \alpha \mid \psi ^{\dagger }(\mathbf {0} )\mid \alpha '\rangle .} Performing the Fourier transform then gives G ( k , ω n ) = 1 Z ∑ α , α ′ e − β E α ′ 1 − ζ e β ( E α ′ − E α ) − i ω n + E α − E α ′ ∫ k ′ d k ′ ⟨ α ∣ ψ ( k ) ∣ α ′ ⟩ ⟨ α ′ ∣ ψ † ( k ′ ) ∣ α ⟩ . {\displaystyle {\mathcal {G}}(\mathbf {k} ,\omega _{n})={\frac {1}{\mathcal {Z}}}\sum _{\alpha ,\alpha '}e^{-\beta E_{\alpha '}}{\frac {1-\zeta e^{\beta (E_{\alpha '}-E_{\alpha })}}{-i\omega _{n}+E_{\alpha }-E_{\alpha '}}}\int _{\mathbf {k} '}d\mathbf {k} '\langle \alpha \mid \psi (\mathbf {k} )\mid \alpha '\rangle \langle \alpha '\mid \psi ^{\dagger }(\mathbf {k} ')\mid \alpha \rangle .}
Momentum conservation allows the final term to be written as (up to possible factors of the volume) | ⟨ α ′ ∣ ψ † ( k ) ∣ α ⟩ | 2 , {\displaystyle |\langle \alpha '\mid \psi ^{\dagger }(\mathbf {k} )\mid \alpha \rangle |^{2},} which confirms the expressions for the Green functions in the spectral representation.
The sum rule can be proved by considering the expectation value of the commutator, 1 = 1 Z ∑ α ⟨ α ∣ e − β ( H − μ N ) [ ψ k , ψ k † ] − ζ ∣ α ⟩ , {\displaystyle 1={\frac {1}{\mathcal {Z}}}\sum _{\alpha }\langle \alpha \mid e^{-\beta (H-\mu N)}[\psi _{\mathbf {k} },\psi _{\mathbf {k} }^{\dagger }]_{-\zeta }\mid \alpha \rangle ,} and then inserting a complete set of eigenstates into both terms of the commutator: 1 = 1 Z ∑ α , α ′ e − β E α ( ⟨ α ∣ ψ k ∣ α ′ ⟩ ⟨ α ′ ∣ ψ k † ∣ α ⟩ − ζ ⟨ α ∣ ψ k † ∣ α ′ ⟩ ⟨ α ′ ∣ ψ k ∣ α ⟩ ) . {\displaystyle 1={\frac {1}{\mathcal {Z}}}\sum _{\alpha ,\alpha '}e^{-\beta E_{\alpha }}\left(\langle \alpha \mid \psi _{\mathbf {k} }\mid \alpha '\rangle \langle \alpha '\mid \psi _{\mathbf {k} }^{\dagger }\mid \alpha \rangle -\zeta \langle \alpha \mid \psi _{\mathbf {k} }^{\dagger }\mid \alpha '\rangle \langle \alpha '\mid \psi _{\mathbf {k} }\mid \alpha \rangle \right).}
Swapping the labels in the first term then gives 1 = 1 Z ∑ α , α ′ ( e − β E α ′ − ζ e − β E α ) | ⟨ α ∣ ψ k † ∣ α ′ ⟩ | 2 , {\displaystyle 1={\frac {1}{\mathcal {Z}}}\sum _{\alpha ,\alpha '}\left(e^{-\beta E_{\alpha '}}-\zeta e^{-\beta E_{\alpha }}\right)|\langle \alpha \mid \psi _{\mathbf {k} }^{\dagger }\mid \alpha '\rangle |^{2}~,} which is exactly the result of the integration of ρ .
In the non-interacting case, ψ k † ∣ α ′ ⟩ {\displaystyle \psi _{\mathbf {k} }^{\dagger }\mid \alpha '\rangle } is an eigenstate with (grand-canonical) energy E α ′ + ξ k {\displaystyle E_{\alpha '}+\xi _{\mathbf {k} }} , where ξ k = ϵ k − μ {\displaystyle \xi _{\mathbf {k} }=\epsilon _{\mathbf {k} }-\mu } is the single-particle dispersion relation measured with respect to the chemical potential. The spectral density therefore becomes ρ 0 ( k , ω ) = 1 Z 2 π δ ( ξ k − ω ) ∑ α ′ ⟨ α ′ ∣ ψ k ψ k † ∣ α ′ ⟩ ( 1 − ζ e − β ξ k ) e − β E α ′ . {\displaystyle \rho _{0}(\mathbf {k} ,\omega )={\frac {1}{\mathcal {Z}}}\,2\pi \delta (\xi _{\mathbf {k} }-\omega )\sum _{\alpha '}\langle \alpha '\mid \psi _{\mathbf {k} }\psi _{\mathbf {k} }^{\dagger }\mid \alpha '\rangle (1-\zeta e^{-\beta \xi _{\mathbf {k} }})e^{-\beta E_{\alpha '}}.}
From the commutation relations, ⟨ α ′ ∣ ψ k ψ k † ∣ α ′ ⟩ = ⟨ α ′ ∣ ( 1 + ζ ψ k † ψ k ) ∣ α ′ ⟩ , {\displaystyle \langle \alpha '\mid \psi _{\mathbf {k} }\psi _{\mathbf {k} }^{\dagger }\mid \alpha '\rangle =\langle \alpha '\mid (1+\zeta \psi _{\mathbf {k} }^{\dagger }\psi _{\mathbf {k} })\mid \alpha '\rangle ,} with possible factors of the volume again. The sum, which involves the thermal average of the number operator, then gives simply [ 1 + ζ n ( ξ k ) ] Z {\displaystyle [1+\zeta n(\xi _{\mathbf {k} })]{\mathcal {Z}}} , leaving ρ 0 ( k , ω ) = 2 π δ ( ξ k − ω ) . {\displaystyle \rho _{0}(\mathbf {k} ,\omega )=2\pi \delta (\xi _{\mathbf {k} }-\omega ).}
The imaginary-time propagator is thus G 0 ( k , ω n ) = 1 − i ω n + ξ k {\displaystyle {\mathcal {G}}_{0}(\mathbf {k} ,\omega _{n})={\frac {1}{-i\omega _{n}+\xi _{\mathbf {k} }}}} and the retarded propagator is G 0 R ( k , ω ) = 1 − ( ω + i η ) + ξ k . {\displaystyle G_{0}^{\mathrm {R} }(\mathbf {k} ,\omega )={\frac {1}{-(\omega +i\eta )+\xi _{\mathbf {k} }}}.}
As β → ∞ , the spectral density becomes ρ ( k , ω ) = 2 π ∑ α [ δ ( E α − E 0 − ω ) | ⟨ α ∣ ψ k † ∣ 0 ⟩ | 2 − ζ δ ( E 0 − E α − ω ) | ⟨ 0 ∣ ψ k † ∣ α ⟩ | 2 ] {\displaystyle \rho (\mathbf {k} ,\omega )=2\pi \sum _{\alpha }\left[\delta (E_{\alpha }-E_{0}-\omega )\left|\left\langle \alpha \mid \psi _{\mathbf {k} }^{\dagger }\mid 0\right\rangle \right|^{2}-\zeta \delta (E_{0}-E_{\alpha }-\omega )\left|\left\langle 0\mid \psi _{\mathbf {k} }^{\dagger }\mid \alpha \right\rangle \right|^{2}\right]} where α = 0 corresponds to the ground state. Note that only the first (second) term contributes when ω is positive (negative).
We can use 'field operators' as above, or creation and annihilation operators associated with other single-particle states, perhaps eigenstates of the (noninteracting) kinetic energy. We then use ψ ( x , τ ) = φ α ( x ) ψ α ( τ ) , {\displaystyle \psi (\mathbf {x} ,\tau )=\varphi _{\alpha }(\mathbf {x} )\psi _{\alpha }(\tau ),} where ψ α {\displaystyle \psi _{\alpha }} is the annihilation operator for the single-particle state α {\displaystyle \alpha } and φ α ( x ) {\displaystyle \varphi _{\alpha }(\mathbf {x} )} is that state's wavefunction in the position basis. This gives G α 1 … α n | β 1 … β n ( n ) ( τ 1 … τ n | τ 1 ′ … τ n ′ ) = ⟨ T ψ α 1 ( τ 1 ) … ψ α n ( τ n ) ψ ¯ β n ( τ n ′ ) … ψ ¯ β 1 ( τ 1 ′ ) ⟩ {\displaystyle {\mathcal {G}}_{\alpha _{1}\ldots \alpha _{n}|\beta _{1}\ldots \beta _{n}}^{(n)}(\tau _{1}\ldots \tau _{n}|\tau _{1}'\ldots \tau _{n}')=\langle T\psi _{\alpha _{1}}(\tau _{1})\ldots \psi _{\alpha _{n}}(\tau _{n}){\bar {\psi }}_{\beta _{n}}(\tau _{n}')\ldots {\bar {\psi }}_{\beta _{1}}(\tau _{1}')\rangle } with a similar expression for G ( n ) {\displaystyle G^{(n)}} .
These depend only on the difference of their time arguments, so that G α β ( τ ∣ τ ′ ) = 1 β ∑ ω n G α β ( ω n ) e − i ω n ( τ − τ ′ ) {\displaystyle {\mathcal {G}}_{\alpha \beta }(\tau \mid \tau ')={\frac {1}{\beta }}\sum _{\omega _{n}}{\mathcal {G}}_{\alpha \beta }(\omega _{n})\,e^{-i\omega _{n}(\tau -\tau ')}} and G α β ( t ∣ t ′ ) = ∫ − ∞ ∞ d ω 2 π G α β ( ω ) e − i ω ( t − t ′ ) . {\displaystyle G_{\alpha \beta }(t\mid t')=\int _{-\infty }^{\infty }{\frac {d\omega }{2\pi }}\,G_{\alpha \beta }(\omega )\,e^{-i\omega (t-t')}.}
We can again define retarded and advanced functions in the obvious way; these are related to the time-ordered function in the same way as above.
The same periodicity properties as described above apply to G α β {\displaystyle {\mathcal {G}}_{\alpha \beta }} . Specifically, G α β ( τ ∣ τ ′ ) = G α β ( τ − τ ′ ) {\displaystyle {\mathcal {G}}_{\alpha \beta }(\tau \mid \tau ')={\mathcal {G}}_{\alpha \beta }(\tau -\tau ')} and G α β ( τ ) = ζ G α β ( τ + β ) , {\displaystyle {\mathcal {G}}_{\alpha \beta }(\tau )=\zeta \,{\mathcal {G}}_{\alpha \beta }(\tau +\beta ),} for τ < 0 {\displaystyle \tau <0} .
In this case, ρ α β ( ω ) = 1 Z ∑ m , n 2 π δ ( E n − E m − ω ) ⟨ m ∣ ψ α ∣ n ⟩ ⟨ n ∣ ψ β † ∣ m ⟩ ( e − β E m − ζ e − β E n ) , {\displaystyle \rho _{\alpha \beta }(\omega )={\frac {1}{\mathcal {Z}}}\sum _{m,n}2\pi \delta (E_{n}-E_{m}-\omega )\;\langle m\mid \psi _{\alpha }\mid n\rangle \langle n\mid \psi _{\beta }^{\dagger }\mid m\rangle \left(e^{-\beta E_{m}}-\zeta e^{-\beta E_{n}}\right),} where m {\displaystyle m} and n {\displaystyle n} are many-body states.
The expressions for the Green functions are modified in the obvious ways: G α β ( ω n ) = ∫ − ∞ ∞ d ω ′ 2 π ρ α β ( ω ′ ) − i ω n + ω ′ {\displaystyle {\mathcal {G}}_{\alpha \beta }(\omega _{n})=\int _{-\infty }^{\infty }{\frac {d\omega '}{2\pi }}{\frac {\rho _{\alpha \beta }(\omega ')}{-i\omega _{n}+\omega '}}} and G α β R ( ω ) = ∫ − ∞ ∞ d ω ′ 2 π ρ α β ( ω ′ ) − ( ω + i η ) + ω ′ . {\displaystyle G_{\alpha \beta }^{\mathrm {R} }(\omega )=\int _{-\infty }^{\infty }{\frac {d\omega '}{2\pi }}{\frac {\rho _{\alpha \beta }(\omega ')}{-(\omega +i\eta )+\omega '}}.}
Their analyticity properties are identical to those of G ( k , ω n ) {\displaystyle {\mathcal {G}}(\mathbf {k} ,\omega _{n})} and G R ( k , ω ) {\displaystyle G^{\mathrm {R} }(\mathbf {k} ,\omega )} defined in the translationally invariant case. The proof follows exactly the same steps, except that the two matrix elements are no longer complex conjugates.
If the particular single-particle states that are chosen are 'single-particle energy eigenstates', i.e. [ H − μ N , ψ α † ] = ξ α ψ α † , {\displaystyle [H-\mu N,\psi _{\alpha }^{\dagger }]=\xi _{\alpha }\psi _{\alpha }^{\dagger },} then for | n ⟩ {\displaystyle |n\rangle } an eigenstate: ( H − μ N ) ∣ n ⟩ = E n ∣ n ⟩ , {\displaystyle (H-\mu N)\mid n\rangle =E_{n}\mid n\rangle ,} so is ψ α ∣ n ⟩ {\displaystyle \psi _{\alpha }\mid n\rangle } : ( H − μ N ) ψ α ∣ n ⟩ = ( E n − ξ α ) ψ α ∣ n ⟩ , {\displaystyle (H-\mu N)\psi _{\alpha }\mid n\rangle =(E_{n}-\xi _{\alpha })\psi _{\alpha }\mid n\rangle ,} and so is ψ α † ∣ n ⟩ {\displaystyle \psi _{\alpha }^{\dagger }\mid n\rangle } : ( H − μ N ) ψ α † ∣ n ⟩ = ( E n + ξ α ) ψ α † ∣ n ⟩ . {\displaystyle (H-\mu N)\psi _{\alpha }^{\dagger }\mid n\rangle =(E_{n}+\xi _{\alpha })\psi _{\alpha }^{\dagger }\mid n\rangle .}
We therefore have ⟨ m ∣ ψ α ∣ n ⟩ ⟨ n ∣ ψ β † ∣ m ⟩ = δ ξ α , ξ β δ E n , E m + ξ α ⟨ m ∣ ψ α ∣ n ⟩ ⟨ n ∣ ψ β † ∣ m ⟩ . {\displaystyle \langle m\mid \psi _{\alpha }\mid n\rangle \langle n\mid \psi _{\beta }^{\dagger }\mid m\rangle =\delta _{\xi _{\alpha },\xi _{\beta }}\delta _{E_{n},E_{m}+\xi _{\alpha }}\langle m\mid \psi _{\alpha }\mid n\rangle \langle n\mid \psi _{\beta }^{\dagger }\mid m\rangle .}
We then rewrite ρ α β ( ω ) = 1 Z ∑ m , n 2 π δ ( ξ α − ω ) δ ξ α , ξ β ⟨ m ∣ ψ α ∣ n ⟩ ⟨ n ∣ ψ β † ∣ m ⟩ e − β E m ( 1 − ζ e − β ξ α ) , {\displaystyle \rho _{\alpha \beta }(\omega )={\frac {1}{\mathcal {Z}}}\sum _{m,n}2\pi \delta (\xi _{\alpha }-\omega )\delta _{\xi _{\alpha },\xi _{\beta }}\langle m\mid \psi _{\alpha }\mid n\rangle \langle n\mid \psi _{\beta }^{\dagger }\mid m\rangle e^{-\beta E_{m}}\left(1-\zeta e^{-\beta \xi _{\alpha }}\right),} therefore ρ α β ( ω ) = 1 Z ∑ m 2 π δ ( ξ α − ω ) δ ξ α , ξ β ⟨ m ∣ ψ α ψ β † e − β ( H − μ N ) ∣ m ⟩ ( 1 − ζ e − β ξ α ) , {\displaystyle \rho _{\alpha \beta }(\omega )={\frac {1}{\mathcal {Z}}}\sum _{m}2\pi \delta (\xi _{\alpha }-\omega )\delta _{\xi _{\alpha },\xi _{\beta }}\langle m\mid \psi _{\alpha }\psi _{\beta }^{\dagger }e^{-\beta (H-\mu N)}\mid m\rangle \left(1-\zeta e^{-\beta \xi _{\alpha }}\right),} use ⟨ m ∣ ψ α ψ β † ∣ m ⟩ = δ α , β ⟨ m ∣ ζ ψ α † ψ α + 1 ∣ m ⟩ {\displaystyle \langle m\mid \psi _{\alpha }\psi _{\beta }^{\dagger }\mid m\rangle =\delta _{\alpha ,\beta }\langle m\mid \zeta \psi _{\alpha }^{\dagger }\psi _{\alpha }+1\mid m\rangle } and the fact that the thermal average of the number operator gives the Bose–Einstein or Fermi–Dirac distribution function.
Finally, the spectral density simplifies to give ρ α β = 2 π δ ( ξ α − ω ) δ α β , {\displaystyle \rho _{\alpha \beta }=2\pi \delta (\xi _{\alpha }-\omega )\delta _{\alpha \beta },} so that the thermal Green function is G α β ( ω n ) = δ α β − i ω n + ξ β {\displaystyle {\mathcal {G}}_{\alpha \beta }(\omega _{n})={\frac {\delta _{\alpha \beta }}{-i\omega _{n}+\xi _{\beta }}}} and the retarded Green function is G α β ( ω ) = δ α β − ( ω + i η ) + ξ β . {\displaystyle G_{\alpha \beta }(\omega )={\frac {\delta _{\alpha \beta }}{-(\omega +i\eta )+\xi _{\beta }}}.} Note that the noninteracting Green function is diagonal, but this will not be true in the interacting case. | https://en.wikipedia.org/wiki/Green's_function_(many-body_theory) |
In mathematical heat conduction , the Green's function number is used to uniquely categorize certain fundamental solutions of the heat equation to make existing solutions easier to identify, store, and retrieve.
Numbers have long been used to identify types of boundary conditions. [ 1 ] [ 2 ] [ 3 ] The Green's function number system was proposed by Beck and Litkouhi in 1988 [ 4 ] and has seen increasing use since then. [ 5 ] [ 6 ] [ 7 ] [ 8 ] The number system has been used to catalog a large collection of Green's functions and related solutions. [ 9 ] [ 10 ] [ 11 ]
Although the examples given below are for the heat equation , this number system applies to any phenomena described by differential equations such as diffusion , acoustics , electromagnetics , fluid dynamics , etc.
The Green's function number specifies the coordinate system and the type of boundary conditions that a Green's function satisfies. The Green's function number has two parts, a letter designation followed by a number designation. The letter(s) designate the coordinate system, while the numbers designate the type of boundary conditions that are satisfied.
Some of the designations for the Green's function number system are given next. Coordinate system designations include: X, Y, and Z for Cartesian coordinates; R, Z, φ for cylindrical coordinates; and RS, φ, θ for spherical coordinates.
Designations for several boundary conditions are given in Table 1. The zeroth boundary condition is important for identifying the presence of a coordinate boundary where no physical boundary exists, for example, far away in a semi-infinite body or at the center of a cylindrical or spherical body.
As an example, number X11 denotes the Green's function that satisfies the heat equation in the domain ( 0 < x < L ) for boundary conditions of type 1 ( Dirichlet ) at both boundaries x = 0 and x = L . Here X denotes the Cartesian coordinate and 11 denotes the type 1 boundary condition at both sides of the body. The boundary value problem for the X11 Green's function is given by
Here α {\displaystyle \alpha } is the thermal diffusivity (m 2 /s) and δ {\displaystyle \delta } is the Dirac delta function .
This GF is developed elsewhere. [ 12 ] [ 13 ]
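Because the numbering convention is purely syntactic — a coordinate designation followed by two digits for the two boundaries — it can be decoded mechanically. The following sketch (a hypothetical helper written for illustration, not part of the numbering system itself) splits a Green's function number such as X11, X10Y20 or R03 into its coordinate and boundary-condition designations:

```python
import re

# Boundary-condition types from the numbering system (cf. Table 1):
BC_NAMES = {"0": "zeroth (boundedness / no physical boundary)",
            "1": "Dirichlet", "2": "Neumann", "3": "Robin"}
COORD_NAMES = {"X": "Cartesian x", "Y": "Cartesian y", "Z": "Cartesian z",
               "R": "cylindrical radial", "RS": "spherical radial",
               "φ": "azimuthal angle", "θ": "polar angle"}

def parse_gf_number(number):
    """Split a Green's function number such as 'X11', 'X10Y20' or 'R03' into
    (coordinate, boundary condition at side 1, boundary condition at side 2) tuples."""
    return [(COORD_NAMES[c], BC_NAMES[b1], BC_NAMES[b2])
            for c, b1, b2 in re.findall(r"(RS|[XYZRφθ])(\d)(\d)", number)]

print(parse_gf_number("X11"))      # Dirichlet at both faces of a Cartesian slab
print(parse_gf_number("X10Y20"))   # the quarter-infinite body discussed below
print(parse_gf_number("R03"))      # solid cylinder, Robin condition at r = a
```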
As another Cartesian example, number X20 denotes the Green's function in the semi-infinite body ( 0 < x < ∞ {\displaystyle 0<x<\infty } ) with a Neumann (type 2) boundary at x = 0 . Here X denotes the Cartesian coordinate, 2 denotes the type 2 boundary condition at x = 0 and 0 denotes the zeroth type boundary condition (boundedness) at x = ∞ {\displaystyle x=\infty } . The boundary value problem for the X20 Green's function is given by
This GF is published elsewhere. [ 14 ] [ 15 ]
As a two-dimensional example, number X10Y20 denotes the Green's function in the quarter-infinite body ( 0 < x < ∞ {\displaystyle 0<x<\infty } , 0 < y < ∞ {\displaystyle 0<y<\infty } ) with a Dirichlet (type 1) boundary at x = 0 and a Neumann (type 2) boundary at y = 0 . The boundary value problem for the X10Y20 Green's function is given by
Applications of related half-space and quarter-space GF are available. [ 16 ]
As an example in the cylindrical coordinate system, number R03 denotes the Green's function that satisfies the heat equation in the solid cylinder ( 0 < r < a ) with a boundary condition of type 3 (Robin) at r = a . Here letter R denotes the cylindrical coordinate system, number 0 denotes the zeroth boundary condition (boundedness) at the center of the cylinder ( r = 0 ), and number 3 denotes the type 3 ( Robin ) boundary condition at r = a . The boundary value problem for R03 Green's function is given by
Here k {\displaystyle k} is thermal conductivity (W/(m K)) and h {\displaystyle h} is the heat transfer coefficient (W/(m 2 K)).
See Carslaw & Jaeger (1959 , p. 369), Cole et al. (2011 , p. 543) for this GF.
As another example, number R10 denotes the Green's function in a large body containing a cylindrical void (a < r < ∞ {\displaystyle \infty } ) with a type 1 (Dirichlet) boundary condition at r = a . Again letter R denotes the cylindrical coordinate system, number 1 denotes the type 1 boundary at r = a , and number 0 denotes the type zero boundary (boundedness) at large values of r. The boundary value problem for the R10 Green's function is given by
This GF is available elsewhere. [ 17 ] [ 18 ]
As a two-dimensional example, number R01φ00 denotes the Green's function in a solid cylinder with angular dependence, with a type 1 (Dirichlet) boundary condition at r = a . Here letter φ denotes the angular (azimuthal) coordinate, and numbers 00 denote the type zero boundaries for the angle; since there is no physical boundary in φ, the zeroth condition takes the form of a periodic boundary condition. The boundary value problem for the R01φ00 Green's function is given by
Both a transient [ 19 ] and steady form [ 20 ] of this GF are available.
As an example in the spherical coordinate system, number RS02 denotes the Green's function for a solid sphere ( 0 < r < b ) with a type 2 ( Neumann ) boundary condition at r = b . Here letters RS denote the radial-spherical coordinate system, number 0 denotes the zeroth boundary condition (boundedness) at r = 0 , and number 2 denotes the type 2 boundary at r = b . The boundary value problem for the RS02 Green's function is given by
This GF is available elsewhere. [ 21 ] | https://en.wikipedia.org/wiki/Green's_function_number |
In mathematics , Green's identities are a set of three identities in vector calculus relating the bulk with the boundary of a region on which differential operators act. They are named after the mathematician George Green , who discovered Green's theorem .
This identity is derived from the divergence theorem applied to the vector field F = ψ ∇ φ while using an extension of the product rule that ∇ ⋅ ( ψ X ) = ∇ ψ ⋅ X + ψ ∇⋅ X : Let φ and ψ be scalar functions defined on some region U ⊂ R d , and suppose that φ is twice continuously differentiable , and ψ is once continuously differentiable. Using the product rule above, but letting X = ∇ φ , integrate ∇⋅( ψ ∇ φ ) over U . Then [ 1 ] ∫ U ( ψ Δ φ + ∇ ψ ⋅ ∇ φ ) d V = ∮ ∂ U ψ ( ∇ φ ⋅ n ) d S = ∮ ∂ U ψ ∇ φ ⋅ d S {\displaystyle \int _{U}\left(\psi \,\Delta \varphi +\nabla \psi \cdot \nabla \varphi \right)\,dV=\oint _{\partial U}\psi \left(\nabla \varphi \cdot \mathbf {n} \right)\,dS=\oint _{\partial U}\psi \,\nabla \varphi \cdot d\mathbf {S} } where ∆ ≡ ∇ 2 is the Laplace operator , ∂ U is the boundary of region U , n is the outward pointing unit normal to the surface element dS and d S = n dS is the oriented surface element.
This theorem is a special case of the divergence theorem , and is essentially the higher dimensional equivalent of integration by parts with ψ and the gradient of φ replacing u and v .
Note that Green's first identity above is a special case of the more general identity derived from the divergence theorem by substituting F = ψ Γ , ∫ U ( ψ ∇ ⋅ Γ + Γ ⋅ ∇ ψ ) d V = ∮ ∂ U ψ ( Γ ⋅ n ) d S = ∮ ∂ U ψ Γ ⋅ d S . {\displaystyle \int _{U}\left(\psi \,\nabla \cdot \mathbf {\Gamma } +\mathbf {\Gamma } \cdot \nabla \psi \right)\,dV=\oint _{\partial U}\psi \left(\mathbf {\Gamma } \cdot \mathbf {n} \right)\,dS=\oint _{\partial U}\psi \mathbf {\Gamma } \cdot d\mathbf {S} ~.}
If φ and ψ are both twice continuously differentiable on U ⊂ R 3 , and ε is once continuously differentiable, one may choose F = ψε ∇ φ − φε ∇ ψ to obtain ∫ U [ ψ ∇ ⋅ ( ε ∇ φ ) − φ ∇ ⋅ ( ε ∇ ψ ) ] d V = ∮ ∂ U ε ( ψ ∂ φ ∂ n − φ ∂ ψ ∂ n ) d S . {\displaystyle \int _{U}\left[\psi \,\nabla \cdot \left(\varepsilon \,\nabla \varphi \right)-\varphi \,\nabla \cdot \left(\varepsilon \,\nabla \psi \right)\right]\,dV=\oint _{\partial U}\varepsilon \left(\psi {\partial \varphi \over \partial \mathbf {n} }-\varphi {\partial \psi \over \partial \mathbf {n} }\right)\,dS.}
For the special case of ε = 1 all across U ⊂ R 3 , then, ∫ U ( ψ ∇ 2 φ − φ ∇ 2 ψ ) d V = ∮ ∂ U ( ψ ∂ φ ∂ n − φ ∂ ψ ∂ n ) d S . {\displaystyle \int _{U}\left(\psi \,\nabla ^{2}\varphi -\varphi \,\nabla ^{2}\psi \right)\,dV=\oint _{\partial U}\left(\psi {\partial \varphi \over \partial \mathbf {n} }-\varphi {\partial \psi \over \partial \mathbf {n} }\right)\,dS.}
In the equation above, ∂ φ /∂ n is the directional derivative of φ in the direction of the outward pointing surface normal n of the surface element dS , ∂ φ ∂ n = ∇ φ ⋅ n = ∇ n φ . {\displaystyle {\partial \varphi \over \partial \mathbf {n} }=\nabla \varphi \cdot \mathbf {n} =\nabla _{\mathbf {n} }\varphi .}
Explicitly incorporating this definition in the Green's second identity with ε = 1 results in ∫ U ( ψ ∇ 2 φ − φ ∇ 2 ψ ) d V = ∮ ∂ U ( ψ ∇ φ − φ ∇ ψ ) ⋅ d S . {\displaystyle \int _{U}\left(\psi \,\nabla ^{2}\varphi -\varphi \,\nabla ^{2}\psi \right)\,dV=\oint _{\partial U}\left(\psi \nabla \varphi -\varphi \nabla \psi \right)\cdot d\mathbf {S} .}
In particular, this demonstrates that the Laplacian is a self-adjoint operator in the L 2 inner product for functions vanishing on the boundary so that the right hand side of the above identity is zero.
Green's third identity derives from the second identity by choosing φ = G , where the Green's function G is taken to be a fundamental solution of the Laplace operator , ∆. This means that: Δ G ( x , η ) = δ ( x − η ) . {\displaystyle \Delta G(\mathbf {x} ,{\boldsymbol {\eta }})=\delta (\mathbf {x} -{\boldsymbol {\eta }})~.}
For example, in R 3 , a solution has the form G ( x , η ) = − 1 4 π ‖ x − η ‖ . {\displaystyle G(\mathbf {x} ,{\boldsymbol {\eta }})={\frac {-1}{4\pi \|\mathbf {x} -{\boldsymbol {\eta }}\|}}~.}
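That this expression is indeed harmonic away from the singular point can be checked symbolically. The sketch below (using SymPy, with the pole placed at the origin for simplicity) applies the Laplacian to G = −1/(4π r) and obtains zero for r ≠ 0; the delta function in the defining relation is concentrated entirely at the singularity:

```python
import sympy as sp

x, y, z = sp.symbols('x y z', real=True)
r = sp.sqrt(x**2 + y**2 + z**2)
G = -1 / (4 * sp.pi * r)       # fundamental solution, singularity placed at the origin

laplacian_G = sp.diff(G, x, 2) + sp.diff(G, y, 2) + sp.diff(G, z, 2)
print(sp.simplify(laplacian_G))   # 0 away from r = 0; the delta function sits at the origin
```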
Green's third identity states that if ψ is a function that is twice continuously differentiable on U , then ∫ U [ G ( y , η ) Δ ψ ( y ) ] d V y − ψ ( η ) = ∮ ∂ U [ G ( y , η ) ∂ ψ ∂ n ( y ) − ψ ( y ) ∂ G ( y , η ) ∂ n ] d S y . {\displaystyle \int _{U}\left[G(\mathbf {y} ,{\boldsymbol {\eta }})\,\Delta \psi (\mathbf {y} )\right]\,dV_{\mathbf {y} }-\psi ({\boldsymbol {\eta }})=\oint _{\partial U}\left[G(\mathbf {y} ,{\boldsymbol {\eta }}){\partial \psi \over \partial \mathbf {n} }(\mathbf {y} )-\psi (\mathbf {y} ){\partial G(\mathbf {y} ,{\boldsymbol {\eta }}) \over \partial \mathbf {n} }\right]\,dS_{\mathbf {y} }.}
A simplification arises if ψ is itself a harmonic function , i.e. a solution to the Laplace equation . Then ∇ 2 ψ = 0 and the identity simplifies to ψ ( η ) = ∮ ∂ U [ ψ ( y ) ∂ G ( y , η ) ∂ n − G ( y , η ) ∂ ψ ∂ n ( y ) ] d S y . {\displaystyle \psi ({\boldsymbol {\eta }})=\oint _{\partial U}\left[\psi (\mathbf {y} ){\frac {\partial G(\mathbf {y} ,{\boldsymbol {\eta }})}{\partial \mathbf {n} }}-G(\mathbf {y} ,{\boldsymbol {\eta }}){\frac {\partial \psi }{\partial \mathbf {n} }}(\mathbf {y} )\right]\,dS_{\mathbf {y} }.}
The second term in the integral above can be eliminated if G is chosen to be the Green's function that vanishes on the boundary of U ( Dirichlet boundary condition ), ψ ( η ) = ∮ ∂ U ψ ( y ) ∂ G ( y , η ) ∂ n d S y . {\displaystyle \psi ({\boldsymbol {\eta }})=\oint _{\partial U}\psi (\mathbf {y} ){\frac {\partial G(\mathbf {y} ,{\boldsymbol {\eta }})}{\partial \mathbf {n} }}\,dS_{\mathbf {y} }~.}
This form is used to construct solutions to Dirichlet boundary condition problems. Solutions for Neumann boundary condition problems may also be simplified, though the divergence theorem applied to the differential equation defining the Green's function shows that the normal derivative of the Green's function must integrate to 1 over the boundary, and hence cannot vanish on the boundary. See Green's functions for the Laplacian, or [ 2 ], for a detailed argument and an alternative approach.
For the Neumann boundary condition, an appropriate choice of Green's function can be made to simplify the integral. [ 3 ] First note ∫ U Δ G ( y , η ) d V y = 1 = ∮ ∂ U ∂ G ( y , η ) ∂ n d S y {\displaystyle \int _{U}\Delta G(\mathbf {y} ,{\boldsymbol {\eta }})dV_{y}=1=\oint _{\partial U}{\frac {\partial G(\mathbf {y} ,{\boldsymbol {\eta }})}{\partial \mathbf {n} }}dS_{y}} and so ∂ G ( y , η ) ∂ n {\displaystyle {\frac {\partial G(\mathbf {y} ,{\boldsymbol {\eta }})}{\partial \mathbf {n} }}} cannot vanish on surface S {\displaystyle S} . A convenient choice is ∂ G ( y , η ) ∂ n = 1 / A {\displaystyle {\frac {\partial G(\mathbf {y} ,{\boldsymbol {\eta }})}{\partial \mathbf {n} }}=1/{\mathcal {A}}} , where A {\displaystyle {\mathcal {A}}} is the area of the surface S {\displaystyle S} . The integral can be simplified to ψ ( η ) = ⟨ ψ ⟩ S − ∮ ∂ U G ( y , η ) ∂ ψ ∂ n ( y ) d S y . {\displaystyle \psi ({\boldsymbol {\eta }})=\langle \psi \rangle _{S}-\oint _{\partial U}G(\mathbf {y} ,{\boldsymbol {\eta }}){\frac {\partial \psi }{\partial \mathbf {n} }}(\mathbf {y} )\,dS_{\mathbf {y} }.} where ⟨ ψ ⟩ S = 1 A ∮ ∂ U ψ ( y ) d S y {\displaystyle \langle \psi \rangle _{S}={\frac {1}{\mathcal {A}}}\oint _{\partial U}\psi (\mathbf {y} )dS_{y}} is the average value of ψ {\displaystyle \psi } on surface S {\displaystyle S} .
Furthermore, if ψ {\displaystyle \psi } is a solution to Laplace's equation, the divergence theorem implies that it must satisfy ∮ ∂ U ∂ ψ ∂ n ( y ) d S y = ∫ U Δ ψ ( y ) d V y = 0 {\displaystyle \oint _{\partial U}{\frac {\partial \psi }{\partial \mathbf {n} }}(\mathbf {y} )dS_{y}=\int _{U}\Delta \psi (\mathbf {y} )dV_{y}=0} . This is a necessary condition for the Neumann boundary problem to have a solution.
It can be further verified that the above identity also applies when ψ is a solution to the Helmholtz equation or wave equation and G is the appropriate Green's function. In such a context, this identity is the mathematical expression of the Huygens principle , and leads to Kirchhoff's diffraction formula and other approximations.
Green's identities hold on a Riemannian manifold. In this setting, the first two are ∫ M u Δ v d V + ∫ M ⟨ ∇ u , ∇ v ⟩ d V = ∫ ∂ M u N v d V ~ ∫ M ( u Δ v − v Δ u ) d V = ∫ ∂ M ( u N v − v N u ) d V ~ {\displaystyle {\begin{aligned}\int _{M}u\,\Delta v\,dV+\int _{M}\langle \nabla u,\nabla v\rangle \,dV&=\int _{\partial M}uNv\,d{\widetilde {V}}\\\int _{M}\left(u\,\Delta v-v\,\Delta u\right)\,dV&=\int _{\partial M}(uNv-vNu)\,d{\widetilde {V}}\end{aligned}}} where u and v are smooth real-valued functions on M , dV is the volume form compatible with the metric, d V ~ {\displaystyle d{\widetilde {V}}} is the induced volume form on the boundary of M , N is the outward oriented unit vector field normal to the boundary, and Δ u = div(grad u ) is the Laplacian.
Using the vector Laplacian identity and the divergence identity , [ 4 ] expand P ⋅ Δ Q {\displaystyle \mathbf {P} \cdot \Delta \mathbf {Q} }
P ⋅ Δ Q = ∇ ⋅ ( P × ∇ × Q ) − ( ∇ × P ) ⋅ ( ∇ × Q ) + P ⋅ [ ∇ ( ∇ ⋅ Q ) ] {\displaystyle \mathbf {P} \cdot \Delta \mathbf {Q} =\nabla \cdot (\mathbf {P} \times \nabla \times \mathbf {Q} )-(\nabla \times \mathbf {P} )\cdot (\nabla \times \mathbf {Q} )+\mathbf {P} \cdot [\nabla (\nabla \cdot \mathbf {Q} )]}
The last term can be simplified by expanding components P ⋅ [ ∇ ( ∇ ⋅ Q ) ] = P i [ ∇ i ( ∇ j Q j ) ] = ∇ i [ P i ( ∇ j Q j ) ] − ( ∇ i P i ) ( ∇ j Q j ) = ∇ ⋅ [ P ( ∇ ⋅ Q ) ] − ( ∇ ⋅ P ) ( ∇ ⋅ Q ) {\displaystyle {\begin{aligned}\mathbf {P} \cdot [\nabla (\nabla \cdot \mathbf {Q} )]&=P^{i}[\nabla _{i}(\nabla _{j}Q^{j})]\\&=\nabla _{i}[P^{i}(\nabla _{j}Q^{j})]-(\nabla _{i}P^{i})(\nabla _{j}Q^{j})\\&=\nabla \cdot [\mathbf {P} (\nabla \cdot \mathbf {Q} )]-(\nabla \cdot \mathbf {P} )(\nabla \cdot \mathbf {Q} )\end{aligned}}}
The identity can be rewritten as P ⋅ Δ Q = ∇ ⋅ ( P × ∇ × Q ) − ( ∇ × P ) ⋅ ( ∇ × Q ) + ∇ ⋅ [ P ( ∇ ⋅ Q ) ] − ( ∇ ⋅ P ) ( ∇ ⋅ Q ) {\displaystyle \mathbf {P} \cdot \Delta \mathbf {Q} =\nabla \cdot (\mathbf {P} \times \nabla \times \mathbf {Q} )-(\nabla \times \mathbf {P} )\cdot (\nabla \times \mathbf {Q} )+\nabla \cdot [\mathbf {P} (\nabla \cdot \mathbf {Q} )]-(\nabla \cdot \mathbf {P} )(\nabla \cdot \mathbf {Q} )}
In integral form, this is ∮ ∂ U n ⋅ [ P × ∇ × Q + P ( ∇ ⋅ Q ) ] d S = ∫ U [ P ⋅ Δ Q + ( ∇ × P ) ⋅ ( ∇ × Q ) + ( ∇ ⋅ P ) ( ∇ ⋅ Q ) ] d V {\displaystyle \oint _{\partial U}\mathbf {n} \cdot [\mathbf {P} \times \nabla \times \mathbf {Q} +\mathbf {P} (\nabla \cdot \mathbf {Q} )]dS=\int _{U}[\mathbf {P} \cdot \Delta \mathbf {Q} +(\nabla \times \mathbf {P} )\cdot (\nabla \times \mathbf {Q} )+(\nabla \cdot \mathbf {P} )(\nabla \cdot \mathbf {Q} )]dV}
Green's second identity establishes a relationship between second and (the divergence of) first order derivatives of two scalar functions. In differential form p m Δ q m − q m Δ p m = ∇ ⋅ ( p m ∇ q m − q m ∇ p m ) , {\displaystyle p_{m}\,\Delta q_{m}-q_{m}\,\Delta p_{m}=\nabla \cdot \left(p_{m}\nabla q_{m}-q_{m}\,\nabla p_{m}\right),} where p m and q m are two arbitrary twice continuously differentiable scalar fields. This identity is of great importance in physics because continuity equations can thus be established for scalar fields such as mass or energy. [ 5 ]
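The differential form of the identity is straightforward to verify symbolically for any pair of smooth scalar fields. A minimal SymPy sketch (the particular fields p and q are arbitrary choices):

```python
import sympy as sp

x, y, z = sp.symbols('x y z', real=True)
p = sp.exp(x) * sp.sin(y) + z**2      # any two smooth scalar fields will do
q = x * y * z + sp.cos(z)

def grad(f): return [sp.diff(f, v) for v in (x, y, z)]
def div(F):  return sum(sp.diff(Fi, v) for Fi, v in zip(F, (x, y, z)))
def lap(f):  return div(grad(f))

lhs = p * lap(q) - q * lap(p)
rhs = div([p * dq - q * dp for dp, dq in zip(grad(p), grad(q))])
print(sp.simplify(lhs - rhs))         # 0: the identity holds pointwise
```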
In vector diffraction theory, two versions of Green's second identity are introduced.
One variant invokes the divergence of a cross product [ 6 ] [ 7 ] [ 8 ] and states a relationship in terms of the curl-curl of the field P ⋅ ( ∇ × ∇ × Q ) − Q ⋅ ( ∇ × ∇ × P ) = ∇ ⋅ ( Q × ( ∇ × P ) − P × ( ∇ × Q ) ) . {\displaystyle \mathbf {P} \cdot \left(\nabla \times \nabla \times \mathbf {Q} \right)-\mathbf {Q} \cdot \left(\nabla \times \nabla \times \mathbf {P} \right)=\nabla \cdot \left(\mathbf {Q} \times \left(\nabla \times \mathbf {P} \right)-\mathbf {P} \times \left(\nabla \times \mathbf {Q} \right)\right).}
This equation can be written in terms of the Laplacians,
P ⋅ Δ Q − Q ⋅ Δ P + Q ⋅ [ ∇ ( ∇ ⋅ P ) ] − P ⋅ [ ∇ ( ∇ ⋅ Q ) ] = ∇ ⋅ ( P × ( ∇ × Q ) − Q × ( ∇ × P ) ) . {\displaystyle \mathbf {P} \cdot \Delta \mathbf {Q} -\mathbf {Q} \cdot \Delta \mathbf {P} +\mathbf {Q} \cdot \left[\nabla \left(\nabla \cdot \mathbf {P} \right)\right]-\mathbf {P} \cdot \left[\nabla \left(\nabla \cdot \mathbf {Q} \right)\right]=\nabla \cdot \left(\mathbf {P} \times \left(\nabla \times \mathbf {Q} \right)-\mathbf {Q} \times \left(\nabla \times \mathbf {P} \right)\right).}
However, the terms Q ⋅ [ ∇ ( ∇ ⋅ P ) ] − P ⋅ [ ∇ ( ∇ ⋅ Q ) ] , {\displaystyle \mathbf {Q} \cdot \left[\nabla \left(\nabla \cdot \mathbf {P} \right)\right]-\mathbf {P} \cdot \left[\nabla \left(\nabla \cdot \mathbf {Q} \right)\right],} cannot be readily written in terms of a divergence.
The other approach introduces bi-vectors; this formulation requires a dyadic Green's function. [ 9 ] [ 10 ] The derivation presented here avoids these problems. [ 11 ]
Consider that the scalar fields in Green's second identity are the Cartesian components of vector fields, i.e., P = ∑ m p m e ^ m , Q = ∑ m q m e ^ m . {\displaystyle \mathbf {P} =\sum _{m}p_{m}{\hat {\mathbf {e} }}_{m},\qquad \mathbf {Q} =\sum _{m}q_{m}{\hat {\mathbf {e} }}_{m}.}
Summing up the equation for each component, we obtain ∑ m [ p m Δ q m − q m Δ p m ] = ∑ m ∇ ⋅ ( p m ∇ q m − q m ∇ p m ) . {\displaystyle \sum _{m}\left[p_{m}\Delta q_{m}-q_{m}\Delta p_{m}\right]=\sum _{m}\nabla \cdot \left(p_{m}\nabla q_{m}-q_{m}\nabla p_{m}\right).}
The LHS according to the definition of the dot product may be written in vector form as ∑ m [ p m Δ q m − q m Δ p m ] = P ⋅ Δ Q − Q ⋅ Δ P . {\displaystyle \sum _{m}\left[p_{m}\,\Delta q_{m}-q_{m}\,\Delta p_{m}\right]=\mathbf {P} \cdot \Delta \mathbf {Q} -\mathbf {Q} \cdot \Delta \mathbf {P} .}
The RHS is a bit more awkward to express in terms of vector operators. Due to the distributivity of the divergence operator over addition, the sum of the divergence is equal to the divergence of the sum, i.e., ∑ m ∇ ⋅ ( p m ∇ q m − q m ∇ p m ) = ∇ ⋅ ( ∑ m p m ∇ q m − ∑ m q m ∇ p m ) . {\displaystyle \sum _{m}\nabla \cdot \left(p_{m}\nabla q_{m}-q_{m}\nabla p_{m}\right)=\nabla \cdot \left(\sum _{m}p_{m}\nabla q_{m}-\sum _{m}q_{m}\nabla p_{m}\right).}
Recall the vector identity for the gradient of a dot product , ∇ ( P ⋅ Q ) = ( P ⋅ ∇ ) Q + ( Q ⋅ ∇ ) P + P × ( ∇ × Q ) + Q × ( ∇ × P ) , {\displaystyle \nabla \left(\mathbf {P} \cdot \mathbf {Q} \right)=\left(\mathbf {P} \cdot \nabla \right)\mathbf {Q} +\left(\mathbf {Q} \cdot \nabla \right)\mathbf {P} +\mathbf {P} \times \left(\nabla \times \mathbf {Q} \right)+\mathbf {Q} \times \left(\nabla \times \mathbf {P} \right),} which, written out in vector components is given by
∇ ( P ⋅ Q ) = ∇ ∑ m p m q m = ∑ m p m ∇ q m + ∑ m q m ∇ p m . {\displaystyle \nabla \left(\mathbf {P} \cdot \mathbf {Q} \right)=\nabla \sum _{m}p_{m}q_{m}=\sum _{m}p_{m}\nabla q_{m}+\sum _{m}q_{m}\nabla p_{m}.}
This result resembles the expression we wish to obtain in vector terms, except for the minus sign. Since the differential operators in each term act either on one vector (say the p m {\displaystyle p_{m}} ’s) or on the other (the q m {\displaystyle q_{m}} ’s), the contribution to each term must be ∑ m p m ∇ q m = ( P ⋅ ∇ ) Q + P × ( ∇ × Q ) , {\displaystyle \sum _{m}p_{m}\nabla q_{m}=\left(\mathbf {P} \cdot \nabla \right)\mathbf {Q} +\mathbf {P} \times \left(\nabla \times \mathbf {Q} \right),} ∑ m q m ∇ p m = ( Q ⋅ ∇ ) P + Q × ( ∇ × P ) . {\displaystyle \sum _{m}q_{m}\nabla p_{m}=\left(\mathbf {Q} \cdot \nabla \right)\mathbf {P} +\mathbf {Q} \times \left(\nabla \times \mathbf {P} \right).}
These results can be rigorously proven to be correct through evaluation of the vector components . Therefore, the RHS can be written in vector form as
∑ m p m ∇ q m − ∑ m q m ∇ p m = ( P ⋅ ∇ ) Q + P × ( ∇ × Q ) − ( Q ⋅ ∇ ) P − Q × ( ∇ × P ) . {\displaystyle \sum _{m}p_{m}\nabla q_{m}-\sum _{m}q_{m}\nabla p_{m}=\left(\mathbf {P} \cdot \nabla \right)\mathbf {Q} +\mathbf {P} \times \left(\nabla \times \mathbf {Q} \right)-\left(\mathbf {Q} \cdot \nabla \right)\mathbf {P} -\mathbf {Q} \times \left(\nabla \times \mathbf {P} \right).}
Putting together these two results, a result analogous to Green's theorem for scalar fields is obtained, Theorem for vector fields: P ⋅ Δ Q − Q ⋅ Δ P = [ ( P ⋅ ∇ ) Q + P × ( ∇ × Q ) − ( Q ⋅ ∇ ) P − Q × ( ∇ × P ) ] . {\displaystyle \color {OliveGreen}\mathbf {P} \cdot \Delta \mathbf {Q} -\mathbf {Q} \cdot \Delta \mathbf {P} =\left[\left(\mathbf {P} \cdot \nabla \right)\mathbf {Q} +\mathbf {P} \times \left(\nabla \times \mathbf {Q} \right)-\left(\mathbf {Q} \cdot \nabla \right)\mathbf {P} -\mathbf {Q} \times \left(\nabla \times \mathbf {P} \right)\right].}
The curl of a cross product can be written as ∇ × ( P × Q ) = ( Q ⋅ ∇ ) P − ( P ⋅ ∇ ) Q + P ( ∇ ⋅ Q ) − Q ( ∇ ⋅ P ) ; {\displaystyle \nabla \times \left(\mathbf {P} \times \mathbf {Q} \right)=\left(\mathbf {Q} \cdot \nabla \right)\mathbf {P} -\left(\mathbf {P} \cdot \nabla \right)\mathbf {Q} +\mathbf {P} \left(\nabla \cdot \mathbf {Q} \right)-\mathbf {Q} \left(\nabla \cdot \mathbf {P} \right);}
Green's vector identity can then be rewritten as P ⋅ Δ Q − Q ⋅ Δ P = ∇ ⋅ [ P ( ∇ ⋅ Q ) − Q ( ∇ ⋅ P ) − ∇ × ( P × Q ) + P × ( ∇ × Q ) − Q × ( ∇ × P ) ] . {\displaystyle \mathbf {P} \cdot \Delta \mathbf {Q} -\mathbf {Q} \cdot \Delta \mathbf {P} =\nabla \cdot \left[\mathbf {P} \left(\nabla \cdot \mathbf {Q} \right)-\mathbf {Q} \left(\nabla \cdot \mathbf {P} \right)-\nabla \times \left(\mathbf {P} \times \mathbf {Q} \right)+\mathbf {P} \times \left(\nabla \times \mathbf {Q} \right)-\mathbf {Q} \times \left(\nabla \times \mathbf {P} \right)\right].}
Since the divergence of a curl is zero, the third term vanishes to yield Green's second vector identity : P ⋅ Δ Q − Q ⋅ Δ P = ∇ ⋅ [ P ( ∇ ⋅ Q ) − Q ( ∇ ⋅ P ) + P × ( ∇ × Q ) − Q × ( ∇ × P ) ] . {\displaystyle \color {OliveGreen}\mathbf {P} \cdot \Delta \mathbf {Q} -\mathbf {Q} \cdot \Delta \mathbf {P} =\nabla \cdot \left[\mathbf {P} \left(\nabla \cdot \mathbf {Q} \right)-\mathbf {Q} \left(\nabla \cdot \mathbf {P} \right)+\mathbf {P} \times \left(\nabla \times \mathbf {Q} \right)-\mathbf {Q} \times \left(\nabla \times \mathbf {P} \right)\right].}
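Green's second vector identity can likewise be checked componentwise. The following SymPy sketch (with arbitrarily chosen vector fields and small helper functions for grad, div, curl and the componentwise vector Laplacian) confirms that the left- and right-hand sides agree pointwise:

```python
import sympy as sp

x, y, z = sp.symbols('x y z', real=True)
V = (x, y, z)

def grad(f):    return [sp.diff(f, v) for v in V]
def div(F):     return sum(sp.diff(Fi, v) for Fi, v in zip(F, V))
def curl(F):
    Fx, Fy, Fz = F
    return [sp.diff(Fz, y) - sp.diff(Fy, z),
            sp.diff(Fx, z) - sp.diff(Fz, x),
            sp.diff(Fy, x) - sp.diff(Fx, y)]
def vec_lap(F): return [div(grad(Fi)) for Fi in F]          # componentwise Laplacian
def cross(A, B):
    return [A[1]*B[2] - A[2]*B[1], A[2]*B[0] - A[0]*B[2], A[0]*B[1] - A[1]*B[0]]
def dot(A, B):  return sum(a * b for a, b in zip(A, B))

P = [x**2 * y, y * z, sp.sin(z) + x]      # arbitrary smooth vector fields
Q = [z**2, x * y, sp.exp(x) * y]

lhs = dot(P, vec_lap(Q)) - dot(Q, vec_lap(P))
PxcQ = cross(P, curl(Q))
QxcP = cross(Q, curl(P))
inside = [P[i] * div(Q) - Q[i] * div(P) + PxcQ[i] - QxcP[i] for i in range(3)]
rhs = div(inside)
print(sp.simplify(lhs - rhs))             # 0: Green's second vector identity
```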
With a similar procedure, the Laplacian of the dot product can be expressed in terms of the Laplacians of the factors Δ ( P ⋅ Q ) = P ⋅ Δ Q − Q ⋅ Δ P + 2 ∇ ⋅ [ ( Q ⋅ ∇ ) P + Q × ∇ × P ] . {\displaystyle \Delta \left(\mathbf {P} \cdot \mathbf {Q} \right)=\mathbf {P} \cdot \Delta \mathbf {Q} -\mathbf {Q} \cdot \Delta \mathbf {P} +2\nabla \cdot \left[\left(\mathbf {Q} \cdot \nabla \right)\mathbf {P} +\mathbf {Q} \times \nabla \times \mathbf {P} \right].}
As a corollary, the awkward terms can now be written in terms of a divergence by comparison with the vector Green equation, P ⋅ [ ∇ ( ∇ ⋅ Q ) ] − Q ⋅ [ ∇ ( ∇ ⋅ P ) ] = ∇ ⋅ [ P ( ∇ ⋅ Q ) − Q ( ∇ ⋅ P ) ] . {\displaystyle \mathbf {P} \cdot \left[\nabla \left(\nabla \cdot \mathbf {Q} \right)\right]-\mathbf {Q} \cdot \left[\nabla \left(\nabla \cdot \mathbf {P} \right)\right]=\nabla \cdot \left[\mathbf {P} \left(\nabla \cdot \mathbf {Q} \right)-\mathbf {Q} \left(\nabla \cdot \mathbf {P} \right)\right].}
This result can be verified by expanding the divergence of a scalar times a vector on the RHS.
The third vector identity can be derived using the free space scalar Green's function. [ 4 ] Take the scalar Green's function definition Δ G ( x , η ) = δ ( x − η ) {\displaystyle \Delta G(\mathbf {x} ,{\boldsymbol {\eta }})=\delta (\mathbf {x} -{\boldsymbol {\eta }})} , multiply by P {\displaystyle \mathbf {P} } and subtract G ∇ i ∇ i P {\displaystyle G\nabla _{i}\nabla ^{i}\mathbf {P} } .
∇ i ( P ∇ i G − G ∇ i P ) = P δ ( x − η ) − G Δ P {\displaystyle \nabla _{i}(\mathbf {P} \nabla ^{i}G-G\nabla ^{i}\mathbf {P} )=\mathbf {P} \delta (\mathbf {x} -{\boldsymbol {\eta }})-G\Delta \mathbf {P} }
Integrate over volume U {\displaystyle U} and use divergence theorem. ∮ ∂ U ( P ( y ) ∂ G ( y , η ) ∂ n − G ( y , η ) ∂ P ( y ) ∂ n ) d S y + ∫ U G ( y , η ) Δ P ( y ) d V y = { P ( η ) for η ∈ U 0 for η ∉ U {\displaystyle \oint _{\partial U}{\bigg (}\mathbf {P} (\mathbf {y} ){\frac {\partial G(\mathbf {y} ,{\boldsymbol {\eta }})}{\partial \mathbf {n} }}-G(\mathbf {y} ,{\boldsymbol {\eta }}){\frac {\partial \mathbf {P} (\mathbf {y} )}{\partial \mathbf {n} }}{\bigg )}dS_{y}+\int _{U}G(\mathbf {y} ,{\boldsymbol {\eta }})\Delta \mathbf {P} (\mathbf {y} )dV_{y}=\left\{{\begin{matrix}\mathbf {P} ({\boldsymbol {\eta }})&{\text{for }}~{\boldsymbol {\eta }}\in U\\0&{\text{for }}~{\boldsymbol {\eta }}\not \in U\\\end{matrix}}\right.} | https://en.wikipedia.org/wiki/Green's_identities |
In fluid dynamics , Green's law , named for 19th-century British mathematician George Green , is a conservation law describing the evolution of non-breaking surface gravity waves propagating in shallow water of gradually varying depth and width. In its simplest form, for wavefronts and depth contours parallel to each other (and the coast), it states: H 2 H 1 = h 1 h 2 4 , {\displaystyle {\frac {H_{2}}{H_{1}}}={\sqrt[{4}]{\frac {h_{1}}{h_{2}}}},}
where H 1 {\displaystyle H_{1}} and H 2 {\displaystyle H_{2}} are the wave heights at two different locations – 1 and 2 respectively – where the wave passes, and h 1 {\displaystyle h_{1}} and h 2 {\displaystyle h_{2}} are the mean water depths at the same two locations.
Green's law is often used in coastal engineering for the modelling of long shoaling waves on a beach, with "long" meaning wavelengths in excess of about twenty times the mean water depth. [ 1 ] Tsunamis shoal (change their height) in accordance with this law, as they propagate – governed by refraction and diffraction – through the ocean and up the continental shelf . Very close to (and running up) the coast, nonlinear effects become important and Green's law no longer applies. [ 2 ] [ 3 ]
According to this law, which is based on linearized shallow water equations , the spatial variations of the wave height H {\displaystyle H} (twice the amplitude a {\displaystyle a} for sine waves , equal to the amplitude for a solitary wave ) for travelling waves in water of mean depth h {\displaystyle h} and width b {\displaystyle b} (in case of an open channel ) satisfy [ 4 ] [ 5 ]
where h 4 {\displaystyle {\sqrt[{4}]{h}}} is the fourth root of h . {\displaystyle h.} Consequently, when considering two cross sections of an open channel, labeled 1 and 2, the wave height in section 2 is:
with the subscripts 1 and 2 denoting quantities in the associated cross section. So, when the depth has decreased by a factor of sixteen, the waves become twice as high. Likewise, the wave height doubles after the channel width has gradually been reduced by a factor of four. For wave propagation perpendicular to a straight coast with depth contours parallel to the coastline, b {\displaystyle b} may be taken as a constant, say 1 metre or yard.
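The statements above can be checked with a short numerical sketch; the function below is illustrative only (its name and signature are not from any library) and simply evaluates the shoaling relation H 2 = H 1 ( h 1 / h 2 ) 1 / 4 ( b 1 / b 2 ) 1 / 2 {\displaystyle H_{2}=H_{1}\left(h_{1}/h_{2}\right)^{1/4}\left(b_{1}/b_{2}\right)^{1/2}} used above.

def green_shoaling(H1, h1, b1, h2, b2):
    # Wave height at section 2 from height/depth/width at section 1 (linear shallow-water theory).
    return H1 * (h1 / h2) ** 0.25 * (b1 / b2) ** 0.5

# Depth reduced by a factor of sixteen at constant width: the height doubles.
print(green_shoaling(1.0, 16.0, 1.0, 1.0, 1.0))  # 2.0
# Width gradually reduced by a factor of four at constant depth: the height also doubles.
print(green_shoaling(1.0, 1.0, 4.0, 1.0, 1.0))   # 2.0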
For refracting long waves in the ocean or near the coast, the width b {\displaystyle b} can be interpreted as the distance between wave rays . The rays (and the changes in spacing between them) follow from the geometrical optics approximation to the linear wave propagation. [ 6 ] In case of straight parallel depth contours this simplifies to the use of Snell's law . [ 7 ]
Green published his results in 1838, [ 8 ] based on a method – the Liouville–Green method – which would evolve into what is now known as the WKB approximation . Green's law also corresponds to constancy of the mean horizontal wave energy flux for long waves: [ 4 ] [ 5 ]
where g h {\displaystyle {\sqrt {gh}}} is the group speed (equal to the phase speed in shallow water), 1 8 ρ g H 2 = 1 2 ρ g a 2 {\displaystyle {\tfrac {1}{8}}\rho gH^{2}={\tfrac {1}{2}}\rho ga^{2}} is the mean wave energy density integrated over depth and per unit of horizontal area, g {\displaystyle g} is the gravitational acceleration and ρ {\displaystyle \rho } is the water density .
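A short way to recover the law from this statement: requiring the depth-integrated flux through a cross section of width b {\displaystyle b} to be the same at every station gives, as a sketch, g h 1 8 ρ g H 2 b = constant , {\displaystyle {\sqrt {gh}}\;{\tfrac {1}{8}}\rho gH^{2}\,b={\text{constant}},} so that H 2 ∝ 1 / ( b h ) {\displaystyle H^{2}\propto 1/(b{\sqrt {h}})} and hence H ∝ b − 1 / 2 h − 1 / 4 , {\displaystyle H\propto b^{-1/2}h^{-1/4},} which is Green's law. Here the width factor is the cross-channel extent, or ray spacing, over which the flux is integrated.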
Further, from Green's analysis, the wavelength λ {\displaystyle \lambda } of the wave shortens during shoaling into shallow water, with [ 4 ] [ 8 ]
along a wave ray . The oscillation period (and therefore also the frequency ) of shoaling waves does not change, according to Green's linear theory.
Green derived his shoaling law for water waves by use of what is now known as the Liouville–Green method, applicable to gradual variations in depth h {\displaystyle h} and width b {\displaystyle b} along the path of wave propagation. [ 9 ]
The starting point is the linearized one-dimensional Saint-Venant equations for an open channel with a rectangular cross section (vertical side walls). These equations describe the evolution of a wave with free surface elevation η ( x , t ) {\displaystyle \eta (x,t)} and horizontal flow velocity u ( x , t ) , {\displaystyle u(x,t),} with x {\displaystyle x} the horizontal coordinate along the channel axis and t {\displaystyle t} the time:
where g {\displaystyle g} is the gravity of Earth (taken as a constant), h {\displaystyle h} is the mean water depth, b {\displaystyle b} is the channel width and ∂ / ∂ t {\displaystyle \partial /\partial t} and ∂ / ∂ x {\displaystyle \partial /\partial x} are denoting partial derivatives with respect to space and time. The slow variation of width b {\displaystyle b} and depth h {\displaystyle h} with distance x {\displaystyle x} along the channel axis is brought into account by denoting them as b ( μ x ) {\displaystyle b(\mu x)} and h ( μ x ) , {\displaystyle h(\mu x),} where μ {\displaystyle \mu } is a small parameter: μ ≪ 1. {\displaystyle \mu \ll 1.} The above two equations can be combined into one wave equation for the surface elevation:
In the Liouville–Green method, the approach is to convert the above wave equation with non-homogeneous coefficients into a homogeneous one (neglecting some small remainders in terms of μ {\displaystyle \mu } ).
The next step is to apply a coordinate transformation , introducing the travel time (or wave phase ) τ {\displaystyle \tau } defined by d τ = d x / c , {\displaystyle \mathrm {d} \tau =\mathrm {d} x/c,} so that τ {\displaystyle \tau } and x {\displaystyle x} are related through the celerity c = g h . {\displaystyle c={\sqrt {gh}}.} Introducing the slow variable X = μ x {\displaystyle X=\mu x} and denoting derivatives of b ( X ) {\displaystyle b(X)} and h ( X ) {\displaystyle h(X)} with respect to X {\displaystyle X} with a prime, e.g. b ′ = d b / d X , {\displaystyle b'=\mathrm {d} b/\mathrm {d} X,} the x {\displaystyle x} -derivatives in the wave equation, Eq. ( 1 ), become:
Now the wave equation ( 1 ) transforms into:
The next step is to transform the equation in such a way that only deviations from homogeneity in the second order of approximation remain, i.e. proportional to μ 2 . {\displaystyle \mu ^{2}.}
The homogeneous wave equation (i.e. Eq. ( 2 ) when μ {\displaystyle \mu } is zero) has solutions η = F ( t ± τ ) {\displaystyle \eta =F(t\pm \tau )} for travelling waves of permanent form propagating in either the negative or positive x {\displaystyle x} -direction. For the inhomogeneous case, considering waves propagating in the positive x {\displaystyle x} -direction, Green proposes an approximate solution:
Then
Now the left-hand side of Eq. ( 2 ) becomes:
So the proposed solution in Eq. ( 3 ) satisfies Eq. ( 2 ), and thus also Eq. ( 1 ) apart from the above two terms proportional to μ {\displaystyle \mu } and μ 2 {\displaystyle \mu ^{2}} , with μ ≪ 1. {\displaystyle \mu \ll 1.} The error in the solution can be made of order O ( μ 2 ) {\displaystyle {\mathcal {O}}(\mu ^{2})} provided
This has the solution:
Using Eq. ( 3 ) and the transformation from x {\displaystyle x} to τ {\displaystyle \tau } , the approximate solution for the surface elevation η ( x , t ) {\displaystyle \eta (x,t)} is
where the constant α {\displaystyle \alpha } has been set to one, without loss of generality . Waves travelling in the negative x {\displaystyle x} -direction have the minus sign in the argument of function F {\displaystyle F} reversed to a plus sign. Since the theory is linear, solutions can be added because of the superposition principle .
Waves varying sinusoidally in time, with period T , {\displaystyle T,} are considered. That is
where a {\displaystyle a} is the amplitude , H = 2 a {\displaystyle H=2a} is the wave height , ω = 2 π / T {\displaystyle \omega =2\pi /T} is the angular frequency and ϕ ( x ) {\displaystyle \phi (x)} is the wave phase . Consequently, also F {\displaystyle F} in Eq. ( 4 ) has to be a sine wave, e.g. F = β sin ( ω t − ϕ ( x ) ) {\displaystyle F=\beta \sin(\omega t-\phi (x))} with β {\displaystyle \beta } a constant.
Applying these forms of η {\displaystyle \eta } and F {\displaystyle F} in Eq. ( 4 ) gives:
which is Green's law .
The horizontal flow velocity in the x {\displaystyle x} -direction follows directly from substituting the solution for the surface elevation η ( x , t ) {\displaystyle \eta (x,t)} from Eq. ( 4 ) into the expression for u ( x , t ) {\displaystyle u(x,t)} in Eq. ( 1 ): [ 10 ]
and Q {\displaystyle Q} an additional constant discharge .
Note that – when the width b {\displaystyle b} and depth h {\displaystyle h} are not constants – the term proportional to Φ ( x , t ) {\displaystyle \Phi (x,t)} implies an O ( μ ) {\displaystyle {\mathcal {O}}(\mu )} (small) phase difference between elevation η {\displaystyle \eta } and velocity u {\displaystyle u} .
For sinusoidal waves with velocity amplitude V , {\displaystyle V,} the flow velocities shoal to leading order as [ 8 ]
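Combining Green's law for the amplitude, a ∝ b − 1 / 2 h − 1 / 4 , {\displaystyle a\propto b^{-1/2}h^{-1/4},} with the shallow-water relation between velocity amplitude and surface amplitude gives, as a sketch of this leading-order behaviour, V ∝ 1 / ( b h 3 / 4 ) . {\displaystyle V\propto 1/\left({\sqrt {b}}\,h^{3/4}\right).}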
This could have been anticipated since for a horizontal bed V = g h ( a / h ) = g / h a {\textstyle V={\sqrt {g\,h}}\,(a/h)={\sqrt {g/h}}\,a} with a {\displaystyle a} the wave amplitude. | https://en.wikipedia.org/wiki/Green's_law |
In vector calculus, Green's theorem relates a line integral around a simple closed curve C to a double integral over the plane region D (surface in R 2 {\displaystyle \mathbb {R} ^{2}} ) bounded by C . It is the two-dimensional special case of Stokes' theorem (surface in R 3 {\displaystyle \mathbb {R} ^{3}} ). In one dimension, it is equivalent to the fundamental theorem of calculus. In three dimensions, it is equivalent to the divergence theorem .
Let C be a positively oriented , piecewise smooth , simple closed curve in a plane , and let D be the region bounded by C . If L and M are functions of ( x , y ) defined on an open region containing D and have continuous partial derivatives there, then
∮ C ( L d x + M d y ) = ∬ D ( ∂ M ∂ x − ∂ L ∂ y ) d A {\displaystyle \oint _{C}(L\,dx+M\,dy)=\iint _{D}\left({\frac {\partial M}{\partial x}}-{\frac {\partial L}{\partial y}}\right)dA}
where the path of integration along C is counterclockwise . [ 1 ] [ 2 ]
In physics, Green's theorem finds many applications. One is solving two-dimensional flow integrals, stating that the sum of fluid outflowing from a volume is equal to the total outflow summed about an enclosing area. In plane geometry , and in particular, area surveying , Green's theorem can be used to determine the area and centroid of plane figures solely by integrating over the perimeter.
The following is a proof of half of the theorem for the simplified area D , a type I region where C 1 and C 3 are curves connected by vertical lines (possibly of zero length). A similar proof exists for the other half of the theorem when D is a type II region where C 2 and C 4 are curves connected by horizontal lines (again, possibly of zero length). Putting these two parts together, the theorem is thus proven for regions of type III (defined as regions which are both type I and type II). The general case can then be deduced from this special case by decomposing D into a set of type III regions.
If it can be shown that
and
are true, then Green's theorem follows immediately for the region D. We can prove ( 1 ) easily for regions of type I, and ( 2 ) for regions of type II. Green's theorem then follows for regions of type III.
Assume region D is a type I region and can thus be characterized, as pictured on the right, by D = { ( x , y ) ∣ a ≤ x ≤ b , g 1 ( x ) ≤ y ≤ g 2 ( x ) } {\displaystyle D=\{(x,y)\mid a\leq x\leq b,g_{1}(x)\leq y\leq g_{2}(x)\}} where g 1 and g 2 are continuous functions on [ a , b ] . Compute the double integral in ( 1 ):
Now compute the line integral in ( 1 ). C can be rewritten as the union of four curves: C 1 , C 2 , C 3 , C 4 .
With C 1 , use the parametric equations : x = x , y = g 1 ( x ), a ≤ x ≤ b . Then ∫ C 1 L ( x , y ) d x = ∫ a b L ( x , g 1 ( x ) ) d x . {\displaystyle \int _{C_{1}}L(x,y)\,dx=\int _{a}^{b}L(x,g_{1}(x))\,dx.}
With C 3 , use the parametric equations: x = x , y = g 2 ( x ), a ≤ x ≤ b . Then ∫ C 3 L ( x , y ) d x = − ∫ − C 3 L ( x , y ) d x = − ∫ a b L ( x , g 2 ( x ) ) d x . {\displaystyle \int _{C_{3}}L(x,y)\,dx=-\int _{-C_{3}}L(x,y)\,dx=-\int _{a}^{b}L(x,g_{2}(x))\,dx.}
The integral over C 3 is negated because it goes in the negative direction from b to a , as C is oriented positively (anticlockwise). On C 2 and C 4 , x remains constant, meaning ∫ C 4 L ( x , y ) d x = ∫ C 2 L ( x , y ) d x = 0. {\displaystyle \int _{C_{4}}L(x,y)\,dx=\int _{C_{2}}L(x,y)\,dx=0.}
Therefore,
Combining ( 3 ) with ( 4 ), we get ( 1 ) for regions of type I. A similar treatment yields ( 2 ) for regions of type II. Putting the two together, we get the result for regions of type III.
We are going to prove the following
Theorem — Let Γ {\displaystyle \Gamma } be a rectifiable, positively oriented Jordan curve in R 2 {\displaystyle \mathbb {R} ^{2}} and let R {\displaystyle R} denote its inner region. Suppose that A , B : R ¯ → R {\displaystyle A,B:{\overline {R}}\to \mathbb {R} } are continuous functions with the property that A {\displaystyle A} has second partial derivative at every point of R {\displaystyle R} , B {\displaystyle B} has first partial derivative at every point of R {\displaystyle R} and that the functions D 1 B , D 2 A : R → R {\displaystyle D_{1}B,D_{2}A:R\to \mathbb {R} } are Riemann-integrable over R {\displaystyle R} . Then ∫ Γ ( A d x + B d y ) = ∫ R ( D 1 B ( x , y ) − D 2 A ( x , y ) ) d ( x , y ) . {\displaystyle \int _{\Gamma }(A\,dx+B\,dy)=\int _{R}\left(D_{1}B(x,y)-D_{2}A(x,y)\right)\,d(x,y).}
We need the following lemmas whose proofs can be found in: [ 3 ]
Lemma 1 (Decomposition Lemma) — Assume Γ {\displaystyle \Gamma } is a rectifiable, positively oriented Jordan curve in the plane and let R {\displaystyle R} be its inner region. For every positive real δ {\displaystyle \delta } , let F ( δ ) {\displaystyle {\mathcal {F}}(\delta )} denote the collection of squares in the plane bounded by the lines x = m δ , y = m δ {\displaystyle x=m\delta ,y=m\delta } , where m {\displaystyle m} runs through the set of integers. Then, for this δ {\displaystyle \delta } , there exists a decomposition of R ¯ {\displaystyle {\overline {R}}} into a finite number of non-overlapping subregions in such a manner that
Lemma 2 — Let Γ {\displaystyle \Gamma } be a rectifiable curve in the plane and let Δ Γ ( h ) {\displaystyle \Delta _{\Gamma }(h)} be the set of points in the plane whose distance from (the range of) Γ {\displaystyle \Gamma } is at most h {\displaystyle h} . The outer Jordan content of this set satisfies c ¯ Δ Γ ( h ) ≤ 2 h Λ + π h 2 {\displaystyle {\overline {c}}\,\,\Delta _{\Gamma }(h)\leq 2h\Lambda +\pi h^{2}} .
Lemma 3 — Let Γ {\displaystyle \Gamma } be a rectifiable curve in R 2 {\displaystyle \mathbb {R} ^{2}} and let f : range of Γ → R {\displaystyle f:{\text{range of }}\Gamma \to \mathbb {R} } be a continuous function. Then | ∫ Γ f ( x , y ) d y | ≤ 1 2 Λ Ω f , {\displaystyle \left\vert \int _{\Gamma }f(x,y)\,dy\right\vert \leq {\frac {1}{2}}\Lambda \Omega _{f},} and | ∫ Γ f ( x , y ) d x | ≤ 1 2 Λ Ω f , {\displaystyle \left\vert \int _{\Gamma }f(x,y)\,dx\right\vert \leq {\frac {1}{2}}\Lambda \Omega _{f},} where Ω f {\displaystyle \Omega _{f}} is the oscillation of f {\displaystyle f} on the range of Γ {\displaystyle \Gamma } .
Now we are in position to prove the theorem:
Proof of Theorem. Let ε {\displaystyle \varepsilon } be an arbitrary positive real number. By continuity of A {\displaystyle A} , B {\displaystyle B} and compactness of R ¯ {\displaystyle {\overline {R}}} , given ε > 0 {\displaystyle \varepsilon >0} , there exists 0 < δ < 1 {\displaystyle 0<\delta <1} such that whenever two points of R ¯ {\displaystyle {\overline {R}}} are less than 2 2 δ {\displaystyle 2{\sqrt {2}}\,\delta } apart, their images under A , B {\displaystyle A,B} are less than ε {\displaystyle \varepsilon } apart. For this δ {\displaystyle \delta } , consider the decomposition given by the previous Lemma. We have ∫ Γ A d x + B d y = ∑ i = 1 k ∫ Γ i A d x + B d y + ∑ i = k + 1 s ∫ Γ i A d x + B d y . {\displaystyle \int _{\Gamma }A\,dx+B\,dy=\sum _{i=1}^{k}\int _{\Gamma _{i}}A\,dx+B\,dy\quad +\sum _{i=k+1}^{s}\int _{\Gamma _{i}}A\,dx+B\,dy.}
Put φ := D 1 B − D 2 A {\displaystyle \varphi :=D_{1}B-D_{2}A} .
For each i ∈ { 1 , … , k } {\displaystyle i\in \{1,\ldots ,k\}} , the curve Γ i {\displaystyle \Gamma _{i}} is a positively oriented square, for which Green's formula holds. Hence ∑ i = 1 k ∫ Γ i A d x + B d y = ∑ i = 1 k ∫ R i φ = ∫ ⋃ i = 1 k R i φ . {\displaystyle \sum _{i=1}^{k}\int _{\Gamma _{i}}A\,dx+B\,dy=\sum _{i=1}^{k}\int _{R_{i}}\varphi =\int _{\bigcup _{i=1}^{k}R_{i}}\,\varphi .}
Every point of a border region is at a distance no greater than 2 2 δ {\displaystyle 2{\sqrt {2}}\,\delta } from Γ {\displaystyle \Gamma } . Thus, if K {\displaystyle K} is the union of all border regions, then K ⊂ Δ Γ ( 2 2 δ ) {\displaystyle K\subset \Delta _{\Gamma }(2{\sqrt {2}}\,\delta )} ; hence c ( K ) ≤ c ¯ Δ Γ ( 2 2 δ ) ≤ 4 2 δ + 8 π δ 2 {\displaystyle c(K)\leq {\overline {c}}\,\Delta _{\Gamma }(2{\sqrt {2}}\,\delta )\leq 4{\sqrt {2}}\,\delta +8\pi \delta ^{2}} , by Lemma 2. Notice that ∫ R φ − ∫ ⋃ i = 1 k R i φ = ∫ K φ . {\displaystyle \int _{R}\varphi \,\,-\int _{\bigcup _{i=1}^{k}R_{i}}\varphi =\int _{K}\varphi .} This yields | ∑ i = 1 k ∫ Γ i A d x + B d y − ∫ R φ | ≤ M δ ( 1 + π 2 δ ) for some M > 0. {\displaystyle \left\vert \sum _{i=1}^{k}\int _{\Gamma _{i}}A\,dx+B\,dy\quad -\int _{R}\varphi \right\vert \leq M\delta (1+\pi {\sqrt {2}}\,\delta ){\text{ for some }}M>0.}
We may as well choose δ {\displaystyle \delta } so that the RHS of the last inequality is < ε . {\displaystyle <\varepsilon .}
The remark in the beginning of this proof implies that the oscillations of A {\displaystyle A} and B {\displaystyle B} on every border region is at most ε {\displaystyle \varepsilon } . We have | ∑ i = k + 1 s ∫ Γ i A d x + B d y | ≤ 1 2 ε ∑ i = k + 1 s Λ i . {\displaystyle \left\vert \sum _{i=k+1}^{s}\int _{\Gamma _{i}}A\,dx+B\,dy\right\vert \leq {\frac {1}{2}}\varepsilon \sum _{i=k+1}^{s}\Lambda _{i}.}
By Lemma 1(iii), ∑ i = k + 1 s Λ i ≤ Λ + ( 4 δ ) 4 ( Λ δ + 1 ) ≤ 17 Λ + 16. {\displaystyle \sum _{i=k+1}^{s}\Lambda _{i}\leq \Lambda +(4\delta )\,4\!\left({\frac {\Lambda }{\delta }}+1\right)\leq 17\Lambda +16.}
Combining these, we finally get | ∫ Γ A d x + B d y − ∫ R φ | < C ε , {\displaystyle \left\vert \int _{\Gamma }A\,dx+B\,dy\quad -\int _{R}\varphi \right\vert <C\varepsilon ,} for some C > 0 {\displaystyle C>0} . Since this is true for every ε > 0 {\displaystyle \varepsilon >0} , we are done.
The hypotheses of the last theorem are not the only ones under which Green's formula is true. Another common set of conditions is the following:
The functions A , B : R ¯ → R {\displaystyle A,B:{\overline {R}}\to \mathbb {R} } are still assumed to be continuous. However, we now require them to be Fréchet-differentiable at every point of R {\displaystyle R} . This implies the existence of all directional derivatives, in particular D e i A =: D i A , D e i B =: D i B , i = 1 , 2 {\displaystyle D_{e_{i}}A=:D_{i}A,D_{e_{i}}B=:D_{i}B,\,i=1,2} , where, as usual, ( e 1 , e 2 ) {\displaystyle (e_{1},e_{2})} is the canonical ordered basis of R 2 {\displaystyle \mathbb {R} ^{2}} . In addition, we require the function D 1 B − D 2 A {\displaystyle D_{1}B-D_{2}A} to be Riemann-integrable over R {\displaystyle R} .
As a corollary of this, we get the Cauchy Integral Theorem for rectifiable Jordan curves:
Theorem (Cauchy) — If Γ {\displaystyle \Gamma } is a rectifiable Jordan curve in C {\displaystyle \mathbb {C} } and if f : closure of inner region of Γ → C {\displaystyle f:{\text{closure of inner region of }}\Gamma \to \mathbb {C} } is a continuous mapping holomorphic throughout the inner region of Γ {\displaystyle \Gamma } , then ∫ Γ f = 0 , {\displaystyle \int _{\Gamma }f=0,} the integral being a complex contour integral.
We regard the complex plane as R 2 {\displaystyle \mathbb {R} ^{2}} . Now, define u , v : R ¯ → R {\displaystyle u,v:{\overline {R}}\to \mathbb {R} } to be such that f ( x + i y ) = u ( x , y ) + i v ( x , y ) . {\displaystyle f(x+iy)=u(x,y)+iv(x,y).} These functions are clearly continuous. It is well known that u {\displaystyle u} and v {\displaystyle v} are Fréchet-differentiable and that they satisfy the Cauchy-Riemann equations: D 1 v + D 2 u = D 1 u − D 2 v = zero function {\displaystyle D_{1}v+D_{2}u=D_{1}u-D_{2}v={\text{zero function}}} .
Now, analyzing the sums used to define the complex contour integral in question, it is easy to realize that ∫ Γ f = ∫ Γ u d x − v d y + i ∫ Γ v d x + u d y , {\displaystyle \int _{\Gamma }f=\int _{\Gamma }u\,dx-v\,dy\quad +i\int _{\Gamma }v\,dx+u\,dy,} the integrals on the RHS being usual line integrals. These remarks allow us to apply Green's Theorem to each one of these line integrals, finishing the proof.
Theorem. Let Γ 0 , Γ 1 , … , Γ n {\displaystyle \Gamma _{0},\Gamma _{1},\ldots ,\Gamma _{n}} be positively oriented rectifiable Jordan curves in R 2 {\displaystyle \mathbb {R} ^{2}} satisfying Γ i ⊂ R 0 , if 1 ≤ i ≤ n Γ i ⊂ R 2 ∖ R ¯ j , if 1 ≤ i , j ≤ n and i ≠ j , {\displaystyle {\begin{aligned}\Gamma _{i}\subset R_{0},&&{\text{if }}1\leq i\leq n\\\Gamma _{i}\subset \mathbb {R} ^{2}\setminus {\overline {R}}_{j},&&{\text{if }}1\leq i,j\leq n{\text{ and }}i\neq j,\end{aligned}}} where R i {\displaystyle R_{i}} is the inner region of Γ i {\displaystyle \Gamma _{i}} . Let D = R 0 ∖ ( R ¯ 1 ∪ R ¯ 2 ∪ ⋯ ∪ R ¯ n ) . {\displaystyle D=R_{0}\setminus ({\overline {R}}_{1}\cup {\overline {R}}_{2}\cup \cdots \cup {\overline {R}}_{n}).}
Suppose p : D ¯ → R {\displaystyle p:{\overline {D}}\to \mathbb {R} } and q : D ¯ → R {\displaystyle q:{\overline {D}}\to \mathbb {R} } are continuous functions whose restriction to D {\displaystyle D} is Fréchet-differentiable. If the function ( x , y ) ⟼ ∂ q ∂ e 1 ( x , y ) − ∂ p ∂ e 2 ( x , y ) {\displaystyle (x,y)\longmapsto {\frac {\partial q}{\partial e_{1}}}(x,y)-{\frac {\partial p}{\partial e_{2}}}(x,y)} is Riemann-integrable over D {\displaystyle D} , then ∫ Γ 0 p ( x , y ) d x + q ( x , y ) d y − ∑ i = 1 n ∫ Γ i p ( x , y ) d x + q ( x , y ) d y = ∫ D { ∂ q ∂ e 1 ( x , y ) − ∂ p ∂ e 2 ( x , y ) } d ( x , y ) . {\displaystyle {\begin{aligned}&\int _{\Gamma _{0}}p(x,y)\,dx+q(x,y)\,dy-\sum _{i=1}^{n}\int _{\Gamma _{i}}p(x,y)\,dx+q(x,y)\,dy\\[5pt]={}&\int _{D}\left\{{\frac {\partial q}{\partial e_{1}}}(x,y)-{\frac {\partial p}{\partial e_{2}}}(x,y)\right\}\,d(x,y).\end{aligned}}}
Green's theorem is a special case of the Kelvin–Stokes theorem , when applied to a region in the x y {\displaystyle xy} -plane.
We can augment the two-dimensional field into a three-dimensional field with a z component that is always 0. Write F for the vector -valued function F = ( L , M , 0 ) {\displaystyle \mathbf {F} =(L,M,0)} . Start with the left side of Green's theorem: ∮ C ( L d x + M d y ) = ∮ C ( L , M , 0 ) ⋅ ( d x , d y , d z ) = ∮ C F ⋅ d r . {\displaystyle \oint _{C}(L\,dx+M\,dy)=\oint _{C}(L,M,0)\cdot (dx,dy,dz)=\oint _{C}\mathbf {F} \cdot d\mathbf {r} .}
The Kelvin–Stokes theorem: ∮ C F ⋅ d r = ∬ S ∇ × F ⋅ n ^ d S . {\displaystyle \oint _{C}\mathbf {F} \cdot d\mathbf {r} =\iint _{S}\nabla \times \mathbf {F} \cdot \mathbf {\hat {n}} \,dS.}
The surface S {\displaystyle S} is just the region in the plane D {\displaystyle D} , with the unit normal n ^ {\displaystyle \mathbf {\hat {n}} } defined (by convention) to have a positive z component in order to match the "positive orientation" definitions for both theorems.
The expression inside the integral becomes ∇ × F ⋅ n ^ = [ ( ∂ 0 ∂ y − ∂ M ∂ z ) i + ( ∂ L ∂ z − ∂ 0 ∂ x ) j + ( ∂ M ∂ x − ∂ L ∂ y ) k ] ⋅ k = ( ∂ M ∂ x − ∂ L ∂ y ) . {\displaystyle \nabla \times \mathbf {F} \cdot \mathbf {\hat {n}} =\left[\left({\frac {\partial 0}{\partial y}}-{\frac {\partial M}{\partial z}}\right)\mathbf {i} +\left({\frac {\partial L}{\partial z}}-{\frac {\partial 0}{\partial x}}\right)\mathbf {j} +\left({\frac {\partial M}{\partial x}}-{\frac {\partial L}{\partial y}}\right)\mathbf {k} \right]\cdot \mathbf {k} =\left({\frac {\partial M}{\partial x}}-{\frac {\partial L}{\partial y}}\right).}
Thus we get the right side of Green's theorem ∬ S ∇ × F ⋅ n ^ d S = ∬ D ( ∂ M ∂ x − ∂ L ∂ y ) d A . {\displaystyle \iint _{S}\nabla \times \mathbf {F} \cdot \mathbf {\hat {n}} \,dS=\iint _{D}\left({\frac {\partial M}{\partial x}}-{\frac {\partial L}{\partial y}}\right)\,dA.}
Green's theorem is also a straightforward result of the general Stokes' theorem using differential forms and exterior derivatives : ∮ C L d x + M d y = ∮ ∂ D ω = ∫ D d ω = ∫ D ∂ L ∂ y d y ∧ d x + ∂ M ∂ x d x ∧ d y = ∬ D ( ∂ M ∂ x − ∂ L ∂ y ) d x d y . {\displaystyle \oint _{C}L\,dx+M\,dy=\oint _{\partial D}\!\omega =\int _{D}d\omega =\int _{D}{\frac {\partial L}{\partial y}}\,dy\wedge \,dx+{\frac {\partial M}{\partial x}}\,dx\wedge \,dy=\iint _{D}\left({\frac {\partial M}{\partial x}}-{\frac {\partial L}{\partial y}}\right)\,dx\,dy.}
Considering only two-dimensional vector fields, Green's theorem is equivalent to the two-dimensional version of the divergence theorem :
where ∇ ⋅ F {\displaystyle \nabla \cdot \mathbf {F} } is the divergence on the two-dimensional vector field F {\displaystyle \mathbf {F} } , and n ^ {\displaystyle \mathbf {\hat {n}} } is the outward-pointing unit normal vector on the boundary.
To see this, consider the unit normal n ^ {\displaystyle \mathbf {\hat {n}} } in the right side of the equation. Since in Green's theorem d r = ( d x , d y ) {\displaystyle d\mathbf {r} =(dx,dy)} is a vector pointing tangential along the curve, and the curve C is the positively oriented (i.e. anticlockwise) curve along the boundary, an outward normal would be a vector which points 90° to the right of this; one choice would be ( d y , − d x ) {\displaystyle (dy,-dx)} . The length of this vector is d x 2 + d y 2 = d s . {\textstyle {\sqrt {dx^{2}+dy^{2}}}=ds.} So ( d y , − d x ) = n ^ d s . {\displaystyle (dy,-dx)=\mathbf {\hat {n}} \,ds.}
Start with the left side of Green's theorem: ∮ C ( L d x + M d y ) = ∮ C ( M , − L ) ⋅ ( d y , − d x ) = ∮ C ( M , − L ) ⋅ n ^ d s . {\displaystyle \oint _{C}(L\,dx+M\,dy)=\oint _{C}(M,-L)\cdot (dy,-dx)=\oint _{C}(M,-L)\cdot \mathbf {\hat {n}} \,ds.} Applying the two-dimensional divergence theorem with F = ( M , − L ) {\displaystyle \mathbf {F} =(M,-L)} , we get the right side of Green's theorem: ∮ C ( M , − L ) ⋅ n ^ d s = ∬ D ( ∇ ⋅ ( M , − L ) ) d A = ∬ D ( ∂ M ∂ x − ∂ L ∂ y ) d A . {\displaystyle \oint _{C}(M,-L)\cdot \mathbf {\hat {n}} \,ds=\iint _{D}\left(\nabla \cdot (M,-L)\right)\,dA=\iint _{D}\left({\frac {\partial M}{\partial x}}-{\frac {\partial L}{\partial y}}\right)\,dA.}
Green's theorem can be used to compute area by line integral. [ 4 ] The area of a planar region D {\displaystyle D} is given by A = ∬ D d A . {\displaystyle A=\iint _{D}dA.}
Choose L {\displaystyle L} and M {\displaystyle M} such that ∂ M ∂ x − ∂ L ∂ y = 1 {\displaystyle {\frac {\partial M}{\partial x}}-{\frac {\partial L}{\partial y}}=1} , the area is given by A = ∮ C ( L d x + M d y ) . {\displaystyle A=\oint _{C}(L\,dx+M\,dy).}
Possible formulas for the area of D {\displaystyle D} include [ 4 ] A = ∮ C x d y = − ∮ C y d x = 1 2 ∮ C ( − y d x + x d y ) . {\displaystyle A=\oint _{C}x\,dy=-\oint _{C}y\,dx={\tfrac {1}{2}}\oint _{C}(-y\,dx+x\,dy).}
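As an illustration of these formulas, the sketch below (names are illustrative, not from any library) evaluates A = 1 2 ∮ C ( − y d x + x d y ) {\displaystyle A={\tfrac {1}{2}}\oint _{C}(-y\,dx+x\,dy)} numerically for an ellipse parametrized by x = a cos t , y = b sin t {\displaystyle x=a\cos t,\;y=b\sin t} and compares the result with the exact area π a b {\displaystyle \pi ab} .

import math

def area_by_green(x, y, dx, dy, n=100000):
    # Midpoint-rule approximation of (1/2) * integral over t in [0, 2*pi] of (x*dy - y*dx) dt.
    dt = 2.0 * math.pi / n
    total = 0.0
    for k in range(n):
        t = (k + 0.5) * dt
        total += 0.5 * (x(t) * dy(t) - y(t) * dx(t)) * dt
    return total

a, b = 3.0, 2.0
approx = area_by_green(lambda t: a * math.cos(t),   # x(t)
                       lambda t: b * math.sin(t),   # y(t)
                       lambda t: -a * math.sin(t),  # dx/dt
                       lambda t: b * math.cos(t))   # dy/dt
print(approx, math.pi * a * b)  # both are approximately 18.8496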
The theorem is named after George Green , who stated a similar result in an 1828 paper titled An Essay on the Application of Mathematical Analysis to the Theories of Electricity and Magnetism (Nottingham, England: T. Wheelhouse, 1828). Green did not actually derive the form of "Green's theorem" which appears in this article; rather, he derived a form of the "divergence theorem", which appears on pages 10–12 of his Essay . The form of "Green's theorem" which appears in this article was first published, without proof, in 1846 as the penultimate sentence of an article by Augustin-Louis Cauchy : A. Cauchy (1846) "Sur les intégrales qui s'étendent à tous les points d'une courbe fermée" (On integrals that extend over all of the points of a closed curve), Comptes rendus , 23 : 251–255. (The equation appears at the bottom of page 254, where ( S ) denotes the line integral of a function k along the curve s that encloses the area S .) This is in fact the first printed version of Green's theorem in the form appearing in modern textbooks. A proof of the theorem was finally provided in 1851 by Bernhard Riemann in his inaugural dissertation: Bernhard Riemann (1851) Grundlagen für eine allgemeine Theorie der Functionen einer veränderlichen complexen Grösse (Basis for a general theory of functions of a variable complex quantity), (Göttingen, (Germany): Adalbert Rente, 1867); see pages 8–9. [ 5 ] | https://en.wikipedia.org/wiki/Green's_theorem
The green-beard effect is a thought experiment used in evolutionary biology to explain selective altruism among individuals of a species.
The idea of a green-beard gene was proposed by William D. Hamilton in his articles of 1964, [ 1 ] [ 2 ] and got the name from the example used by Richard Dawkins ( "I have a green beard and I will be altruistic to anyone else with green beard" ) in The Selfish Gene (1976). [ 3 ] [ 4 ]
A green-beard effect occurs when an allele , or a set of linked alleles, produce three expressed (or phenotypic ) effects:
The carrier of the gene (or a specific allele) is essentially recognizing copies of the same gene (or a specific allele) in other individuals. Whereas kin selection involves altruism to related individuals who share genes in a non-specific way, green-beard alleles promote altruism toward individuals who share a gene that is expressed by a specific phenotypic trait. Some authors also note that the green-beard effects can include "spite" for individuals lacking the "green-beard" gene. [ 5 ] This can have the effect of delineating a subset of organisms within a population that is characterized by members who show greater cooperation toward each other, this forming a "clique" that can be advantageous to its members who are not necessarily kin. [ 6 ]
A green-beard effect could increase altruism toward green-beard phenotypes, and therefore the allele's presence in a population, even if the genes being assisted are not exact copies; all that is required is that they express the three required characteristics. Green-beard alleles are vulnerable to mutations that produce the perceptible trait without the helping behaviour.
Altruistic behaviour is paradoxical when viewed in the light of old ideas of evolutionary theory that emphasised the role of competition. The evolution of altruism is better explained through the gene-centered view of evolution , which emphasizes an interpretation of natural selection from the point of view of the gene which acts as an agent that has the metaphorical "selfish goal" of maximizing its own propagation. A gene for (behavioral) selective altruism can be favored by (natural) selection if the altruism is primarily directed at other individuals who share the gene. Since genes are invisible, such an effect requires perceptible markers for altruistic behaviour to occur.
Evolutionary biologists have debated the potential validity of green-beard genes, suggesting that it would be extraordinarily rare for a single or even a set of linked genes to produce three complex phenotypic effects. This criticism has led some to believe that they simply cannot exist or that they only can be present in less complex organisms, such as microorganisms. This critique has been called into question in recent years.
The concept remained a merely theoretical possibility under Dawkins' selfish gene model until 1998, when a green-beard allele was first found in nature by Laurent Keller and Kenneth G. Ross in the red imported fire ant ( Solenopsis invicta ). [ 4 ] [ 7 ] Polygyne colony queens are heterozygous (Bb) at the Gp-9 gene locus. Their worker offspring can have both heterozygous (Bb) and homozygous (BB) genotypes. The investigators discovered that homozygous dominant (BB) queens, which in the wild form produce monogyne rather than polygyne colonies, are specifically killed when introduced into polygyne colonies, most often by heterozygous (Bb) and not homozygous (BB) workers. They concluded that the allele Gp-9 b is linked to a greenbeard allele which induces workers bearing this allele to kill all queens that do not have it. A final conclusion notes that the workers are able to distinguish BB queens from Bb queens based on an odor cue. [ 7 ]
The gene csA in the slime mould Dictyostelium discoideum , discovered in 2003, [ 8 ] codes for a cell adhesion protein which binds to gp80 proteins on other cells, allowing multicellular fruiting body formation on soil. Mixtures of csA knockout cells with wild-type cells yield spores , "born" from the fruiting bodies, which are 82% wild-type (WT). This is because the wild-type cells are better at adhering and more effectively combine into aggregates; knockout (KO) cells are left behind. On more adhesive but less natural substances, KO cells can adhere; WT cells, still better at adhering, sort preferentially into the stalk. [ 8 ]
In 2006, green beard-like recognition was seen in the cooperative behavior among color morphs in side-blotched lizards , although the traits appear to be encoded by multiple loci across the genome. [ 9 ]
A more recent example, found in 2008, is a gene that makes brewer's yeast clump together in response to a toxin such as alcohol. [ 10 ] By investigating flocculation , a type of self-adherence generally present in asexual aggregations, Smukalla et al. showed that S. cerevisiae is a model for cooperative behavior evolution. When this yeast expresses FLO1 in the laboratory, flocculation is restored. Flocculation is apparently protective for the FLO1+ cells, which are shielded from certain stresses (ethanol, for example). In addition FLO1+ cells preferentially adhere to each other. The authors therefore conclude that flocculation is driven by this greenbeard allele. [ 11 ]
A mammalian example appears to be the reproductive strategy of the wood mouse , which shows cooperation among spermatozoa. Single sperms hook in each other to form sperm-trains, which are able to move faster together than single sperm would do. [ 12 ]
It has been suggested that speciation could be possible through the manifestation of a green-beard effect. [ 13 ]
Additionally, it has been suggested that suicide could have evolved through green beard selection. [ 14 ] Suicide is often a reaction to an undesirable social context. Attempting suicide imposes a threat of bereavement on community members. If bereavement from many previous suicides has been felt, then the community is likely to take a new suicide attempter seriously. Accordingly, previous suicides may increase the credibility of future suicide attempts, resulting in increased effort from the community to alleviate the undesirable social context.
A study has shown that humans are about as genetically similar to their friends as they are to their fourth cousins. [ 15 ] | https://en.wikipedia.org/wiki/Green-beard_effect
The GreenScreen List Translator is a procedure for assessing chemical hazard used to identify chemicals of concern to prioritize for removal from product formulations. The List Translator assesses substances based on their presence on lists of chemicals associated with human and environmental health hazards issued by a global set of governmental and professional scientific bodies, such as the European Union’s GHS hazard statements and California's Proposition 65 .
The List Translator procedure is defined in the GreenScreen for Safer Chemicals , a transparent, open standard for chemical hazard assessment that supports alternatives assessment for toxics use reduction through identifying chemicals of concern and safer alternatives. The GreenScreen protocol is published in a Guidance document that is reviewed and updated regularly. This description of the List Translator is based upon the Hazard Assessment Guidance Version 1.4 [ 1 ]
The List Translator identifies the hazard endpoints for which a substance has been listed on each of a defined set of published hazard lists and the level of hazard. It prioritizes for avoidance those substances listed with a high hazard of any of the following endpoints:
This parallels the prioritization schemes underlying various international governmental regulatory programs such as the Substance of very high concern definition within the REACH Regulation of the European Union .
The central tools of the List Translator are the GreenScreen Specified Lists and the GreenScreen List Translator Map.
Scoring a substance is a three-part procedure:
An LT-Unk, No GSLT or NoGS score is not an indication of low hazard or safety for a substance, only that the substance has not been listed for the priority health endpoints. A full GreenScreen Assessment must be undertaken to determine if the substance qualifies as an affirmatively safer substance.
Any person can use the GreenScreen List Translator protocol to score a substance. The research required to look up the substance in each of the hazard lists is, however, substantial. Several Licensed GreenScreen List Translator™ Automators aggregate the lists and provide free online lookup services for determining List Translator scores.
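A rough sketch of the lookup logic an automator performs is shown below. It is illustrative only: the endpoint names, hazard levels and score labels other than LT-Unk and No GSLT are assumptions standing in for the official GreenScreen Specified Lists and List Translator Map, which define the real mapping.

# Priority endpoints as described above (illustrative set, not the official map).
PRIORITY_ENDPOINTS = {"carcinogenicity", "mutagenicity", "reproductive toxicity",
                      "developmental toxicity", "endocrine disruption", "PBT"}

def list_translator_score(listings):
    # listings: iterable of (endpoint, hazard_level) pairs found on the screened hazard lists.
    listed = False
    for endpoint, level in listings:
        listed = True
        if endpoint in PRIORITY_ENDPOINTS and level == "high":
            return "prioritize for avoidance"
    # Not being flagged is not an indication of low hazard, only of not being listed.
    return "LT-Unk" if listed else "No GSLT"

print(list_translator_score([("carcinogenicity", "high")]))     # prioritize for avoidance
print(list_translator_score([("acute toxicity", "moderate")]))  # LT-Unk
print(list_translator_score([]))                                # No GSLT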
The GreenScreen List Translator is the first step in a GreenScreen Assessment. It is also used as a stand alone screening protocol by health and sustainability screening and certification programs. It is widely referenced in standards and certifications related to green building products, including the Health Product Declaration Standard (HPD), [ 2 ] Portico, [ 3 ] and the "Building product disclosure and optimization - material ingredients" credits in the US Green Building Council 's LEED program. [ 4 ] | https://en.wikipedia.org/wiki/GreenScreen_List_Translator |
The GreenScreen for Safer Chemicals is a transparent, open standard for assessing chemical hazard that supports alternatives assessment for toxics use reduction through identifying chemicals of concern and safer alternatives. [ 1 ] It is used by researchers, product formulators and certifiers in a variety of industries, including building products, textiles, apparel, and consumer products.
The GreenScreen prioritizes the avoidance of substances with a high hazard as a carcinogen , mutagen , reproductive toxicant , developmental toxicant or endocrine disruptor , or that are a persistent, bioaccumulative and toxic substance (PBT).
The GreenScreen protocol is published in a Guidance document that is reviewed and updated regularly. The description here is based upon the Hazard Assessment Guidance Version 1.4 [ 2 ] An assessment using the GreenScreen has two major outputs:
The GreenScreen process has two levels of analysis:
A full GreenScreen Assessment provides a more complete hazard profile of a substance than a List Translator screening. It involves a detailed review of the scientific literature to attempt to determine hazard levels for all endpoints and calculate a GreenScreen benchmark. It may also use models and studies of analogs where direct data are scarce. Each endpoint hazard level is also assigned a confidence level based on the quality of the data.
The GreenScreen List Translator only can flag chemicals known to be of highest concern. A full GreenScreen Assessment can benchmark chemicals as being of lower concern. The Benchmark scale is:
The assessment requires data for most endpoints in order to give a substance a benchmark of lower concern than BM-1.
Benchmark 1 is reserved for substances with a high hazard of any of the following:
High hazards for other human health endpoints, such as neurotoxicity and respiratory sensitization , receive a Benchmark 2.
This parallels the prioritization schemes underlying various international governmental regulatory programs such as the Substance of very high concern definition within the REACH Regulation of the European Union .
DG - Data gaps : Strict guidelines limit the number of data gaps. Where there are data gaps, the assessment includes a worst-case scenario to determine the lowest possible Benchmark score if the data gap were filled with the highest possible hazard. These Benchmarks include a subscript of DG. A chemical that has too many data gaps receives a Benchmark U.
TP - Transformation Products : The assessment also must identify feasible and relevant environmental transformation products and benchmark them. If the Benchmark score is determined by the transformation products, the Benchmark score will include a subscript of TP.
CoHC - Chemicals of High Concern (polymer residuals & catalysts): Version 1.4 of the GreenScreen added special rules for benchmarking polymers which include analysis of residual monomers and/or catalysts present at or above 100 ppm. If the Benchmark score is determined by one of these chemicals, the Benchmark score will include a subscript of CoHC.
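The benchmark logic described above can be summarized in a highly simplified sketch. The thresholds, endpoint groupings and the default Benchmark below are assumptions drawn only from the prose in this section; the real GreenScreen rules are considerably more detailed.

# Group of endpoints that force Benchmark 1 when rated "high" (taken from the text above).
GROUP_HIGH_CONCERN = {"carcinogen", "mutagen", "reproductive toxicant",
                      "developmental toxicant", "endocrine disruptor", "PBT"}

def benchmark(hazards, max_data_gaps=3):
    # hazards: dict mapping endpoint -> "high" / "moderate" / "low" / None (None = data gap).
    gaps = [e for e, level in hazards.items() if level is None]
    if len(gaps) > max_data_gaps:               # too many data gaps
        return "BM-U"
    if any(hazards.get(e) == "high" for e in GROUP_HIGH_CONCERN):
        score = "BM-1"                          # highest concern, avoid
    elif any(level == "high" for level in hazards.values()):
        score = "BM-2"                          # e.g. neurotoxicity, respiratory sensitization
    else:
        score = "BM-3"                          # placeholder default; real rules distinguish BM-3/BM-4
    return score + ("-DG" if gaps else "")      # flag remaining data gaps with a subscript

print(benchmark({"carcinogen": "high", "PBT": "low"}))           # BM-1
print(benchmark({"neurotoxicity": "high", "carcinogen": None}))  # BM-2-DG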
GreenScreen Assessments are internally used for research and product improvement by product manufacturers in many industry sectors, including electronics, [ 3 ] building products, textiles, apparel, and consumer products. [ 4 ] For example, Apple is using the GreenScreen framework and similar approaches to find safer materials in its products and processes. [ 5 ] The GreenScreen is also referenced publicly by sustainability standards in several of these industries, including the Health Product Declaration Standard (HPD), [ 6 ] Portico, [ 7 ] the "Building product disclosure and optimization - material ingredients" credits in the US Green Building Council 's LEED program, [ 8 ] and the International Living Future Institute's Living Product Challenge [ 9 ] (related to the Living Building Challenge ), and by various governmental bodies. [ citation needed ]
The GreenScreen standard is developed, maintained and published by Clean Production Action (CPA), a non profit organization, based in the United States. CPA publishes the GreenScreen as an open standard which anyone can utilize. To make a public claim using a GreenScreen Benchmark, however, the GreenScreen assessment must be completed by a Profiler licensed by CPA. [ citation needed ]
The GreenScreen has substantial overlaps with the Globally Harmonized System of Classification and Labelling of Chemicals (GHS) and the criteria of the US EPA’s Design for the Environment . It differs from GHS, however, in some significant ways. GreenScreen has a lower threshold of analysis. GreenScreen includes endocrine activity, addresses PBTs more comprehensively and considers environmental transformation products. GreenScreen also requires and provides guidance for addressing data gaps. GHS, on the other hand, covers more physical workplace hazards than the GreenScreen and provides guidelines for identifying hazards across languages with icons. This reflects the GHS focus on workplace safety and communications. [ citation needed ]
The Cradle to Cradle (C2C) Product Certification program includes a hazard screening protocol that is similar to the GreenScreen and GHS in many ways. The C2C analysis divides endpoints differently and is integrated into a product certification. There is not a standalone public assessment of individual substances.
These programs have been the subject of analysis evaluating the relationships, differences and opportunities for harmonization. [ 10 ] | https://en.wikipedia.org/wiki/GreenScreen_for_Safer_Chemicals |
The Green Building XML schema ( gbXML ) is an open schema developed to facilitate transfer of building data stored in Building Information Models (BIMs) to engineering analysis tools. It enables interoperability between BIM and building performance simulation , which is relevant to sustainable building design and operation. [ 1 ] gbXML is being integrated into a range of Computer-aided design (CAD) software and engineering tools, supported by leading 3D BIM vendors. [ 2 ] The streamlined workflow can transfer building properties to and from engineering analysis tools, which eliminates the duplicate model generation and allows a bidirectional information update.
gbXML is the underlying architecture of Autodesk 's Green Building Studio commercial on-line energy analysis product, [ 3 ] and is the main export option for energy analysis from their modeling products. It is often used for geometry data transfer, but the quality of exported models is often poor; lighting systems, Heating, Ventilation, and Air Conditioning (HVAC) systems and internal loads are therefore often created manually by engineers in engineering analysis tools.
gbXML is a hierarchical architecture made up of elements and attributes. Elements can have sub-elements, and attributes help define the features of elements. Some attributes are necessary for building an element. For example, the gbXML tag, located at the highest level in the schema, must contain a campus element, and attributes for the temperature unit and area unit are required to define the gbXML tag.
Elements are components of a system. For example, Variable Air Volume (VAV) boxes are common components of a typical HVAC system. Defining a VAV box needs both the "HydronicLoop" tag and the "AirLoop" tag under the gbXML tag.
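A minimal sketch of what such a document looks like, built here with Python's standard ElementTree module, is shown below. The element and attribute spellings follow the description above but should be treated as assumptions; the exact names and required fields are defined by the gbXML schema itself.

import xml.etree.ElementTree as ET

# Root gbXML tag with the required unit attributes described above.
root = ET.Element("gbXML", temperatureUnit="C", areaUnit="SquareMeters")

# The root must contain a campus element; a building is nested inside it.
campus = ET.SubElement(root, "Campus", id="campus-1")
ET.SubElement(campus, "Building", id="building-1", buildingType="Office")

# HVAC components such as a VAV box are described through both loop tags.
ET.SubElement(root, "HydronicLoop", id="hydronic-loop-1")
ET.SubElement(root, "AirLoop", id="air-loop-1")

print(ET.tostring(root, encoding="unicode"))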
Attributes can help define the specialty of an element. | https://en.wikipedia.org/wiki/Green_Building_XML |
Green Chemistry Letters and Reviews is a peer-reviewed scientific journal published quarterly by Taylor & Francis . It publishes full papers and review articles on new syntheses and green chemistry. | https://en.wikipedia.org/wiki/Green_Chemistry_Letters_and_Reviews
Green Communication Challenge is an organization founded and led by Francesco De Leo that promotes the development of energy conservation technology and practices in the field of Information and Communications Technology (ICT).
Green Comm Challenge attracted worldwide attention in 2007, when it enlisted as one of the challengers in the 33rd edition of the America's Cup, [ 1 ] an effort meant to show how researchers, technologists and entrepreneurs from around the world can be brought together by an exciting vision: building the ultimate renewable energy machine, a competitive America's Cup boat.
ICT is helping society become more energy efficient: telecommuting and ecommerce, for example, have a positive impact on CO 2 emissions. [ 2 ] [ 3 ] Computers are also helping to design more energy-efficient products. But there is little doubt that, while other industries strive to become more energy efficient, computers and networks themselves risk becoming the “energy hogs” of the future unless something is done.
Powering the over 1 billion personal computers , the millions of corporate data centers , the over 4 billion fixed and mobile telephones and telecommunications networks around the world requires approximately 1.4 Petawatt-hr a year ( 1.4 × 10 15 W-hr ) of electricity, [ 4 ] approximately 8% of the global electrical energy produced in 2005. And consider that over 4 billion people around the world have never used a cell phone, almost three times as many as those who currently have access to one.
Some estimates project that the above percentage will grow to 15% by 2020, [ 4 ] but these projections may fail to take into account some of the disruptive trends we are witnessing today. Take Google for example: to power the over 75 billion searches performed in July 2009 the company needed an estimated one million servers, consuming an estimated 1.3 Terawatt-hr a year ( 1.3 × 10 12 W-hr ). The number of searches has grown over 60% between 2008 and 2009 alone. [ 5 ] It is no surprise that the company is planning to manage as many as 10 million servers in the future. [ 6 ]
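A quick back-of-the-envelope check of these figures (illustrative arithmetic only, using the estimates quoted above):

ict_wh_per_year = 1.4e15      # ~1.4 petawatt-hours a year for ICT as a whole
google_wh_per_year = 1.3e12   # ~1.3 terawatt-hours a year for Google
servers = 1.0e6               # ~one million servers

avg_power_per_server_w = google_wh_per_year / servers / 8760.0  # 8760 hours in a year
print(round(avg_power_per_server_w))                  # about 148 W of average draw per server
print(f"{google_wh_per_year / ict_wh_per_year:.2%}")  # Google is roughly 0.09% of the ICT total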
The explosion of video on the net is another disruptive element. The Amesterdam Internet exchange (AMS-IX), which handles approximately 20% of Europe’s traffic, saw its aggregate data traffic increase from 1.75 Petabyte per day in November 2007 to an expected 4 Petabyte per day in November 2009. [ 7 ] Much of this rapid increase in traffic is driven by widespread use of voice and, in particular, video over the Internet.
Green Comm Challenge’s founders believe that defining a corollary to Moore’s Law is in order: increases in processor performance must be accompanied by a less-than proportional increase in energy consumption.
This, of course, is no easy undertaking. It will require a new engineering approach to designing computers, cell phones and networks. It will also require a new management culture, capable of recognizing the attractive ROIs that green technology can generate, in addition to being more sensitive to the environmental impact of management's decisions.
This is why Green Comm has fully embraced the ICT energy-efficiency challenge by establishing an interdisciplinary approach that involves some of the most innovative thinkers around the world. We are currently involved in the following four initiatives: | https://en.wikipedia.org/wiki/Green_Comm_Challenge |
Strengthening climate resilience of rural communities in Northern Rwanda, commonly known as the Green Gicumbi Project , is a six-year governmental project , launched on 26 October 2019 by the Government of Rwanda , through the Ministry of the Environment and the Rwanda Green Fund (FONERWA) with target of strengthening climate resilience of rural communities in Northern Rwanda , especially in Gicumbi District . [ 1 ] [ 2 ] [ 3 ] [ 4 ]
The project is to be implemented by the National Fund for the Environment, with Jean Marie Vianney Kagenza as Project Director. [ 5 ] [ 3 ] [ 6 ] [ 7 ]
According to Ministry of environment of Rwanda , Green Gicumbi Project includes the following components: [ 8 ]
In January 2022, the Government of Rwanda , through the Green Gicumbi Project, started constructing 200 green and climate-resilient houses for Gicumbi residents; most of the relocated citizens will be in Ubudehe category I and category II and from high-risk zones. [ 9 ] The green housing project is located in the Rubaya and Kaniga sectors, and is considered a model village where beneficiaries will receive additional support such as cows and the resources to start horticulture farms around the village, the Project Director has stated. [ 1 ] [ 4 ] | https://en.wikipedia.org/wiki/Green_Gicumbi
Green Star is a voluntary sustainability rating system for buildings in Australia . It was launched in 2003 by the Green Building Council of Australia (GBCA).
The Green Star rating system assesses the sustainability of projects at all stages of the built environment life cycle. Ratings can be achieved at the planning phase for communities, during the design, construction or fit out phase of buildings, or during the ongoing operational phase.
The system assesses and rates buildings, fitouts and communities against a range of environmental impact categories, and aims to encourage leadership in environmentally sustainable design and construction, showcase innovation in sustainable building practices, and consider occupant health, productivity and operational cost savings.
In 2013, the GBCA released The Value of Green Star , a report that analysed data from 428 Green Star-certified projects occupying 5,746,000 square metres across Australia and compared it to the ‘average’ Australian building and minimum practice benchmarks. The research found that, on average, Green Star-certified buildings produce 62% fewer greenhouse gas emissions and use 66% less electricity than average Australian buildings. Green Star buildings use 51% less potable water than average buildings. Green Star-certified buildings have also been found to recycle 96% of their construction and demolition waste, compared to the average of 58% for new construction projects. [ 1 ]
Green Star benchmarks projects against the nine Green Star categories of: Management; Indoor Environment Quality; Energy; Transport; Water; Materials; Land Use & Ecology; Emissions and Innovation. [ 2 ]
Within each category are credits which address specific aspects of sustainable building design, construction or performance. Ratings for buildings are available at the design stage ('Design' ratings), at the post-construction phase (known as 'As Built' ratings) or for interior fitouts (‘Interiors’ ratings).
Green Star - Communities rates projects at the community or precinct scale against the categories of: Liveability; Economic Prosperity; Environment; Design; Governance and Innovation.
Green Star certification is a formal process in which an independent assessment panel reviews documentary evidence that a project meets Green Star benchmarks within each credit. The assessment panel awards points, with a Green Star rating determined by comparing the overall score with the rating scale:
Green Star rating tools for building, fitout and community design and construction reward projects that achieve best practice or above, which means ratings of 1, 2 or 3 are not awarded. Ongoing performance of a building can be rated at any of the 6 star ratings.
Buildings assessed using the Green Star – Performance rating tool will be able to achieve a Green Star rating from 1 – 6 Star Green Star. [ 3 ] [ 4 ]
More than 1900 projects around Australia have achieved Green Star ratings. The first building to achieve a Green Star rating was 8 Brindabella Circuit at Canberra Airport , which achieved a 5 Star Green Star – Office Design v1 rating in 2004. In 2005, Council House 2 in Melbourne became the first building to achieve a 6 Star Green Star – Office Design v1 rating. Flinders Medical Centre – New South Wing was the first healthcare facility in Australia to achieve a Green Star rating. Scarborough Beach Pool was the first aquatic facility to achieve a 6 star green rating. [ 5 ] Bond University Mirvac School for Sustainability achieved the first Green Star rating for an educational facility. Other well-known Green Star projects include 1 Bligh Street in Sydney and the Melbourne Convention and Exhibition Centre .
The launch of the Green Star rating system was met with some scepticism by green groups, which argued that the rating system was funded by mostly development industry companies. [ 6 ] There was controversy over a proposal to expand the forest certification of timber and composite timber products, but this issue was resolved with the release of the revised ‘Timber’ credit in 2010. [ 7 ] There has also been concern over various aspects of the timeframe for awarding of the certification, transfer of properties once awarded, and termination rights. [ 8 ] | https://en.wikipedia.org/wiki/Green_Star_(Australia) |
Green bridges are an ecotechnological in-situ bioremediation system. Their different physical and biological filters work in combination to remove suspended and dissolved impurities from water. [ 1 ] Green bridge filters help reduce suspended solids through filtration and reduce Chemical Oxygen Demand (COD) and Biochemical Oxygen Demand (BOD) through aerobic degradation. [ 2 ] Green bridges also help in the restoration of the ecological food chain.
Natural streams, rivers and lakes have their own in-built purification system: natural slopes, stones that support biological growth, and a complex food web all help in the purification process. In this food web, the waste of one organism is used by another as its food. Nature has its own living machinery of detritivorous microbes and other living species that consume wastes. These principles have been harnessed in the treatment of polluted streams. [ citation needed ]
Green bridges are developed using fibrous material with stones. Floatable and suspended solids are trapped in this biological bridge and the turbidity of the flowing water is reduced. Green plants on the bridges increase the dissolved oxygen (DO) level in the water, which in turn facilitates the growth of aerobic organisms that degrade organic pollutants. Sandeep Joshi, director of SERI (Shrishti Eco-Research Institute), developed this technology and has received a patent for it. [ citation needed ]
Other than the changes in water quality mentioned above, a multifold change in the populations of avifauna and terrestrial plants along the riverbanks has been noticed. There is an overall reduction of odour and mosquitoes and an improvement in river aesthetics, as well as an increase in the health of aquatic life in the lentic-lotic system through a reduction in the ecotoxicity of pollutants. [ 3 ] | https://en.wikipedia.org/wiki/Green_bridge_(filtration_system)
Green building (also known as green construction , sustainable building , or eco-friendly building ) refers to both a structure and the application of processes that are environmentally responsible and resource-efficient throughout a building's life-cycle: from planning to design, construction, operation, maintenance, renovation, and demolition. [ 1 ] This requires close cooperation of the contractor, the architects, the engineers, and the client at all project stages. [ 2 ] The Green Building practice expands and complements the classical building design concerns of economy, utility, durability, and comfort. [ 1 ] Green building also refers to saving resources to the maximum extent (energy, land, water, materials, etc.) during the whole life cycle of the building, protecting the environment and reducing pollution, providing people with healthy, comfortable and efficient spaces, and keeping buildings in harmony with nature. Green building technology focuses on low consumption, high efficiency, economy, environmental protection, integration and optimization. [ 3 ]
Leadership in Energy and Environmental Design (LEED) is a set of rating systems for the design, construction, operation, and maintenance of green buildings which was developed by the U.S. Green Building Council . Other certificate systems that confirm the sustainability of buildings are the British BREEAM (Building Research Establishment Environmental Assessment Method) for buildings and large-scale developments or the DGNB System ( Deutsche Gesellschaft für Nachhaltiges Bauen e.V. ) which benchmarks the sustainability performance of buildings, indoor environments and districts. Currently, the World Green Building Council is conducting research on the effects of green buildings on the health and productivity of their users and is working with the World Bank to promote Green Buildings in Emerging Markets through EDGE ( Excellence in Design for Greater Efficiencies ) Market Transformation Program and certification. [ 4 ] There are also other tools such as NABERS or Green Star in Australia, Global Sustainability Assessment System (GSAS) used in the Middle East and the Green Building Index (GBI) predominantly used in Malaysia.
Building information modeling (BIM) is a process involving the generation and management of digital representations of physical and functional characteristics of places. Building information models (BIMs) are files (often but not always in proprietary formats and containing proprietary data) which can be extracted, exchanged, or networked to support decision-making regarding a building or other built asset. Current BIM software is used by individuals, businesses, and government agencies who plan, design, construct, operate and maintain diverse physical infrastructures, such as water, refuse, electricity, gas, communication utilities, roads, railways, bridges, ports, and tunnels.
Although new technologies are constantly being developed to complement current practices in creating greener structures, the common objective of green buildings is to reduce the overall impact of the built environment on human health and the natural environment, for example by using energy, water, and other resources more efficiently, protecting occupant health and improving productivity, and reducing waste, pollution, and environmental degradation.
Natural building is a similar concept, usually on a smaller scale and focusing on the use of locally available natural materials . [ 5 ] Other related topics include sustainable design and green architecture . Sustainability may be defined as meeting the needs of present generations without compromising the ability of future generations to meet their needs. [ 6 ] Although some green building programs don't address the issue of retrofitting existing homes , others do, especially through public schemes for energy efficient refurbishment . Green construction principles can easily be applied to retrofit work as well as new construction.
A 2009 report by the U.S. General Services Administration evaluated 12 sustainably designed buildings and found that they cost less to operate and have excellent energy performance; their occupants were also more satisfied overall than those in typical commercial buildings. [ 7 ]
Buildings represent a large part of energy, electricity, water and materials consumption. As of 2020, they account for 37% of global energy use and energy-related CO 2 emissions; the United Nations estimates that they contributed 33% of overall worldwide emissions. [ 8 ] [ 9 ] When the manufacture of building materials is included, buildings account for 39% of global CO 2 emissions. [ 10 ] If new technologies in construction are not adopted during this time of rapid growth, emissions could double by 2050, according to the United Nations Environment Program .
Glass buildings, especially all-glass skyscrapers, contribute significantly to climate change due to their energy inefficiency. While these structures are visually appealing and allow abundant natural light, they also trap heat, necessitating increased use of air conditioning systems, which contribute to higher carbon emissions. Experts advocate for design modifications and potential restrictions on all-glass edifices to mitigate their detrimental environmental impact. [ 11 ] [ 12 ]
Buildings also occupy a large amount of land. According to the National Resources Inventory , approximately 107 million acres (430,000 km 2 ) of land in the United States are developed. The International Energy Agency released a publication that estimated that existing buildings are responsible for more than 40% of the world's total primary energy consumption and for 24% of global carbon dioxide emissions. [ 13 ] [ 14 ]
According to the 2016 Global Status Report, buildings consume more than 30% of all produced energy. The report states that "Under a below 2°C trajectory, effective action to improve building energy efficiency could limit building final energy demand to just above current levels, meaning that the average energy intensity of the global building stock would decrease by more than 80% by 2050". [ 15 ]
Green building practices aim to reduce the environmental impact of building, as the building sector has the greatest potential to deliver significant cuts in emissions at little or no cost. [ 16 ] General guidelines can be summarized as follows: every building should be as small as possible, and contributing to sprawl should be avoided, even if the most energy-efficient, environmentally sound methods are used in design and construction. Bioclimatic design principles can reduce energy expenditure and, by extension, carbon emissions. Bioclimatic design is a method of building design that takes the local climate into account to create comfortable conditions within the structure. [ 17 ] [ 18 ] This could be as simple as choosing a different shape for the building envelope or facing the building towards the south to maximize solar exposure for energy or lighting purposes. Given the limitations of city-planned construction, bioclimatic principles may be employed on a smaller scale; however, they remain an effective passive method of reducing environmental impact.
The concept of sustainable development can be traced to the energy (especially fossil oil ) crisis and environmental pollution concerns of the 1960s and 1970s. [ 20 ] Rachel Carson's book " Silent Spring ", [ 21 ] published in 1962, is considered one of the first efforts to describe sustainable development as related to green building. [ 20 ] The green building movement in the U.S. originated from the need and desire for more energy-efficient and environmentally friendly construction practices. There are a number of motives for building green, including environmental, economic, and social benefits. [ 1 ] However, modern sustainability initiatives call for an integrated and synergistic design approach for both new construction and the retrofitting of existing structures. Also known as sustainable design , this approach integrates the building life-cycle with each green practice employed, with a design purpose to create a synergy among the practices used.
Green building brings together a vast array of practices, techniques, and skills to reduce and ultimately eliminate the impacts of buildings on the environment and human health. It often emphasizes taking advantage of renewable resources , e.g., using sunlight through passive solar , active solar , and photovoltaic equipment, and using plants and trees through green roofs , rain gardens , and reduction of rainwater run-off. Many other techniques are used, such as using low-impact building materials or using packed gravel or permeable concrete instead of conventional concrete or asphalt to enhance replenishment of groundwater.
While the practices or technologies employed in green building are constantly evolving and may differ from region to region, fundamental principles persist from which the method is derived: siting and structure design efficiency, energy efficiency, water efficiency , materials efficiency, indoor environmental quality enhancement, operations and maintenance optimization and waste and toxics reduction. [ 22 ] [ 23 ] The essence of green building is an optimization of one or more of these principles. Also, with the proper synergistic design, individual green building technologies may work together to produce a greater cumulative effect.
On the aesthetic side of green architecture or sustainable design is the philosophy of designing a building that is in harmony with the natural features and resources surrounding the site. There are several key steps in designing sustainable buildings: specify 'green' building materials from local sources, reduce loads, optimize systems, and generate on-site renewable energy.
A life cycle assessment (LCA) can help avoid a narrow outlook on environmental, social and economic concerns [ 24 ] by assessing a full range of impacts associated with all cradle-to-grave stages of a process: from extraction of raw materials through materials processing, manufacture, distribution, use, repair and maintenance, and disposal or recycling. Impacts taken into account include (among others) embodied energy , global warming potential , resource use, air pollution , water pollution , and waste.
In terms of green building, the last few years have seen a shift away from a prescriptive approach, which assumes that certain prescribed practices are better for the environment, toward the scientific evaluation of actual performance through LCA.
Although LCA is widely recognized as the best way to evaluate the environmental impacts of buildings (ISO 14040 provides a recognized LCA methodology), [ 25 ] it is not yet a consistent requirement of green building rating systems and codes, despite the fact that embodied energy and other life cycle impacts are critical to the design of environmentally responsible buildings.
In North America, LCA is rewarded to some extent in the Green Globes rating system, and is part of the new American National Standard based on Green Globes, ANSI/GBI 01-2010: Green Building Protocol for Commercial Buildings . LCA is also included as a pilot credit in the LEED system, though a decision has not been made as to whether it will be incorporated fully into the next major revision. The state of California also included LCA as a voluntary measure in its 2010 draft Green Building Standards Code .
Although LCA is often perceived as overly complex and time-consuming for regular use by design professionals, research organizations such as BRE in the UK and the Athena Sustainable Materials Institute in North America are working to make it more accessible. [ 26 ]
In the UK, the BRE Green Guide to Specifications offers ratings for 1,500 building materials based on LCA.
The foundation of any construction project is rooted in the concept and design stages. The concept stage, in fact, is one of the major steps in a project life cycle, as it has the largest impact on cost and performance. [ 27 ] In designing environmentally optimal buildings, the objective is to minimize the total environmental impact associated with all life-cycle stages of the building project.
However, building as a process is not as streamlined as an industrial process, and varies from one building to the other, never repeating itself identically. In addition, buildings are much more complex products, composed of a multitude of materials and components each constituting various design variables to be decided at the design stage. A variation of every design variable may affect the environment during all the building's relevant life-cycle stages. [ 28 ]
Green buildings often include measures to reduce energy consumption – both the embodied energy required to extract, process, transport and install building materials and operating energy to provide services such as heating and power for equipment.
As high-performance buildings use less operating energy, embodied energy has assumed much greater importance – and may make up as much as 30% of the overall life cycle energy consumption. Studies such as the U.S. LCI Database Project [ 29 ] show buildings built primarily with wood will have a lower embodied energy than those built primarily with brick, concrete, or steel. [ 30 ]
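As an illustration of this shift, the share of embodied energy in life-cycle energy can be estimated by comparing it with cumulative operating energy. The sketch below is a minimal example; the embodied energy, annual operating energy and service life are assumed figures chosen only to show the arithmetic, not data from the cited studies.

```python
# Hypothetical illustration: share of embodied energy in life-cycle energy.
# All figures are assumed for demonstration, not taken from the cited studies.

embodied_energy_gj = 5_000          # energy to extract, process, transport and install materials (GJ)
annual_operating_energy_gj = 230    # yearly heating, cooling, lighting and equipment energy (GJ)
service_life_years = 50             # assumed building lifespan

operating_energy_gj = annual_operating_energy_gj * service_life_years
life_cycle_energy_gj = embodied_energy_gj + operating_energy_gj

embodied_share = embodied_energy_gj / life_cycle_energy_gj
print(f"Embodied share of life-cycle energy: {embodied_share:.0%}")
# With these assumed numbers the embodied share is about 30%, the order of
# magnitude cited for high-performance buildings with low operating energy.
```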
To reduce operating energy use, designers use details that reduce air leakage through the building envelope (the barrier between conditioned and unconditioned space). They also specify high-performance windows and extra insulation in walls, ceilings, and floors. Another strategy, passive solar building design , is often implemented in low-energy homes. Designers orient windows and walls and place awnings, porches, and trees [ 31 ] to shade windows and roofs during the summer while maximizing solar gain in the winter. In addition, effective window placement ( daylighting ) can provide more natural light and lessen the need for electric lighting during the day. Solar water heating further reduces energy costs.
Onsite generation of renewable energy through solar power , wind power , hydro power , or biomass can significantly reduce the environmental impact of the building. Power generation is generally the most expensive feature to add to a building.
Energy efficiency for green buildings can be evaluated from either numerical or non-numerical methods. These include use of simulation modelling, analytical or statistical tools. [ 32 ]
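A minimal sketch of one simple analytical method, a heating degree-day estimate of annual space-heating demand, is shown below; the envelope and climate figures are assumptions chosen only to illustrate the calculation, and real assessments would use detailed simulation.

```python
# Simplified degree-day estimate of annual space-heating energy.
# Inputs are assumed values for illustration only.

u_value_w_per_m2k = 0.25      # average envelope U-value (W/m^2.K), assumed
envelope_area_m2 = 600.0      # total heat-loss area (m^2), assumed
heating_degree_days = 2800    # annual heating degree-days (K.day), climate-dependent

ua = u_value_w_per_m2k * envelope_area_m2            # overall heat-loss coefficient (W/K)
annual_heating_wh = ua * heating_degree_days * 24    # W/K * K.day * 24 h/day = Wh
annual_heating_kwh = annual_heating_wh / 1000

print(f"Estimated annual space-heating demand: {annual_heating_kwh:,.0f} kWh")
```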
In a report published in April 2024, the International Energy Agency (IEA) highlighted that buildings are responsible for about 30% of global final energy consumption and over 50% of electricity demand . It noted the tripling of heat pump sales from 2015 to 2022, electric cars accounting for 20% of 2023 vehicle sales, and a potential doubling of China's peak electricity demand by mid-century. India's air conditioner ownership could see a tenfold rise by 2050, causing a sixfold increase in peak electricity demand, which could be halved with efficient practices. By 2050, demand response measures might lower household electricity bills by 7% to 12% in advanced economies and nearly 20% in developing ones, with smart device installations nearly doubling by 2030. The US could see a 116 GW reduction in peak demand, 80 million tonnes less CO2 per year by 2030, and save between USD 100 billion and USD 200 billion over twenty years with grid-interactive buildings. In Alabama , a smart neighborhood demonstrated 35% to 45% energy savings compared to traditional homes. [ 33 ] [ 34 ]
Reducing water consumption and protecting water quality are key objectives in sustainable building. One critical issue of water consumption is that in many areas the demands on the supplying aquifer exceed its ability to replenish itself. To the maximum extent feasible, facilities should increase their dependence on water that is collected, used, purified, and reused on-site. The protection and conservation of water throughout the life of a building may be accomplished by designing dual plumbing that recycles water for toilet flushing or for car washing. Waste water may be minimized by utilizing water-conserving fixtures such as ultra-low-flush toilets and low-flow shower heads. [ 35 ] Bidets help eliminate the use of toilet paper, reducing sewer traffic and increasing possibilities of re-using water on-site. Point-of-use water treatment and heating improves both water quality and energy efficiency while reducing the amount of water in circulation. The use of non-sewage and greywater for on-site uses such as site irrigation will minimize demands on the local aquifer. [ 36 ]
Large commercial buildings with water and energy efficiency can qualify for LEED certification. Philadelphia's Comcast Center is the tallest building in Philadelphia. It is also one of the tallest LEED-certified buildings in the USA. Its environmental engineering consists of a hybrid central chilled-water system which cools floor-by-floor with steam instead of water. Burn's Mechanical carried out the entire renovation of the 58-story, 1.4-million-square-foot skyscraper.
Building materials typically considered 'green' include lumber that has been certified to a third-party standard, rapidly renewable plant materials (like bamboo and straw), dimension stone , recycled stone, hempcrete , [ 37 ] recycled metal (see: copper sustainability and recyclability ), and other non-toxic, reusable, renewable, and/or recyclable products. Materials with lower embodied energy can be substituted for common building materials with high levels of energy consumption and carbon or other harmful emissions. [ 38 ] For concrete, a high-performance self-healing version is available; [ 39 ] [ 40 ] options with lower yields of polluting waste include upcycling and aggregate supplementation, replacing parts of traditional concrete mixes with slag, production waste, and recycled aggregates. [ 41 ] Insulation also offers multiple options for substitution. Commonly used fiberglass has competition from other eco-friendly, low-embodied-energy insulators with similar or higher R-values (per inch of thickness) at a competitive price, as illustrated below. Sheep wool, cellulose , and ThermaCork perform more efficiently; however, their use may be limited by transportation or installation costs.
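To make the R-value comparison concrete, the sketch below computes the total thermal resistance of an insulation layer and the resulting steady-state heat flux; the per-inch R-values and temperature difference are rough, assumed figures used only to illustrate the calculation.

```python
# Comparing insulation options by total R-value (imperial units) and heat flux.
# Per-inch R-values below are rough, assumed figures for illustration only.

r_per_inch = {           # h.ft^2.F/BTU per inch of thickness (assumed values)
    "fiberglass batt": 3.2,
    "cellulose": 3.6,
    "sheep wool": 3.5,
    "cork board": 3.7,
}
thickness_in = 6.0       # insulation thickness in inches
delta_t_f = 40.0         # indoor-outdoor temperature difference (F)

for material, r_inch in r_per_inch.items():
    r_total = r_inch * thickness_in     # total R-value of the layer
    heat_flux = delta_t_f / r_total     # BTU per hour per square foot
    print(f"{material}: R-{r_total:.1f}, heat flux {heat_flux:.2f} BTU/h.ft^2")
```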
Furthermore, embodied energy comparisons can inform the selection of building materials and their efficiency. Wood production emits less CO 2 than concrete and steel if produced in a sustainable way, just as steel can be produced more sustainably through improvements in technology (e.g. EAF) and energy recycling/carbon capture (an underutilized potential for systematically storing carbon in the built environment). [ 42 ] [ 43 ] [ 44 ]
The EPA ( Environmental Protection Agency ) also suggests using recycled industrial goods, such as coal combustion products, foundry sand, and demolition debris in construction projects. [ 22 ] Energy efficient building materials and appliances are promoted in the United States through energy rebate programs .
A 2022 report from the Boston Consulting Group found that investments in developing greener forms of cement, iron, and steel lead to bigger greenhouse gas reductions than investments in electricity and aviation. [ 45 ] In addition, some CO 2 emissions are unavoidable in the cement-making process itself. However, using pozzolans as partial clinker substitutes can reduce CO 2 emissions in cement production. [ 46 ]
The Indoor Environmental Quality (IEQ) category in LEED standards, one of the five environmental categories, was created to protect the comfort, well-being, and productivity of occupants. The LEED IEQ category addresses design and construction guidelines for, in particular, indoor air quality (IAQ), thermal quality, and lighting quality. [ 47 ] [ 48 ] [ 49 ]
Indoor Air Quality seeks to reduce volatile organic compounds , or VOCs, and other air impurities such as microbial contaminants. Buildings rely on a properly designed ventilation system (passively/naturally or mechanically powered) to provide adequate ventilation of cleaner air from outdoors or recirculated, filtered air, as well as isolation of certain operations (kitchens, dry cleaners, etc.) from other occupancies. During the design and construction process, choosing construction materials and interior finish products with zero or low VOC emissions will improve IAQ. Most building materials and cleaning/maintenance products emit gases, some of them toxic, such as many VOCs including formaldehyde. These gases can have a detrimental impact on occupants' health, comfort, and productivity. Avoiding these products will increase a building's IEQ. LEED, [ 50 ] HQE [ 51 ] and Green Star contain specifications on the use of low-emitting interior materials. Draft LEED 2012 [ 52 ] is expected to expand the scope of the products involved. BREEAM [ 53 ] limits formaldehyde emissions but not other VOCs. MAS Certified Green is a registered trademark used to identify low-VOC-emitting products in the marketplace. [ 54 ] The MAS Certified Green Program ensures that any potentially hazardous chemicals released from manufactured products have been thoroughly tested and meet rigorous standards established by independent toxicologists to address recognized long-term health concerns. These IAQ standards have been adopted by and incorporated into a number of building certification programs.
Also important to indoor air quality is the control of moisture accumulation (dampness) leading to mold growth and the presence of bacteria and viruses as well as dust mites and other organisms and microbiological concerns. Water intrusion through a building's envelope or water condensing on cold surfaces on the building's interior can enhance and sustain microbial growth. A well-insulated and tightly sealed envelope will reduce moisture problems but adequate ventilation is also necessary to eliminate moisture from sources indoors including human metabolic processes, cooking, bathing, cleaning, and other activities. [ 59 ]
Personal temperature and airflow control over the HVAC system coupled with a properly designed building envelope will also aid in increasing a building's thermal quality. Creating a high performance luminous environment through the careful integration of daylight and electrical light sources will improve on the lighting quality and energy performance of a structure. [ 36 ] [ 60 ]
Solid wood products, particularly flooring, are often specified in environments where occupants are known to have allergies to dust or other particulates. Wood itself is considered to be hypo-allergenic and its smooth surfaces prevent the buildup of particles common in soft finishes like carpet. The Asthma and Allergy Foundation of America recommends hardwood, vinyl, linoleum tile or slate flooring instead of carpet. [ 61 ] The use of wood products can also improve air quality by absorbing or releasing moisture in the air to moderate humidity. [ 62 ]
Interactions among all the indoor components and the occupants together form the processes that determine the indoor air quality. Extensive investigation of such processes is the subject of indoor air scientific research and is well documented in the journal Indoor Air. [ 63 ]
No matter how sustainable a building may have been in its design and construction, it can only remain so if it is operated responsibly and maintained properly. Ensuring operations and maintenance (O&M) personnel are part of the project's planning and development process will help retain the green criteria designed at the onset of the project. [ 64 ] Every aspect of green building is integrated into the O&M phase of a building's life. The addition of new green technologies also falls on the O&M staff. Although the goal of waste reduction may be applied during the design, construction and demolition phases of a building's life-cycle, it is in the O&M phase that green practices such as recycling and air quality enhancement take place. O&M staff should aim to establish best practices in energy efficiency, resource conservation, ecologically sensitive products and other sustainable practices. Education of building operators and occupants is key to effective implementation of sustainable strategies in O&M services. [ 65 ]
Green architecture also seeks to reduce the waste of energy, water and materials used during construction. For example, in California nearly 60% of the state's waste comes from commercial buildings. [ 66 ] During the construction phase, one goal should be to reduce the amount of material going to landfills . Well-designed buildings also help reduce the amount of waste generated by the occupants, by providing on-site solutions such as compost bins to reduce matter going to landfills.
To reduce the amount of wood that goes to landfill, Neutral Alliance (a coalition of government, NGOs and the forest industry) created the website dontwastewood.com . The site includes a variety of resources for regulators, municipalities, developers, contractors, owner/operators and individuals/homeowners looking for information on wood recycling.
When buildings reach the end of their useful life, they are typically demolished and hauled to landfills. Deconstruction is a method of harvesting what is commonly considered "waste" and reclaiming it into useful building material. [ 67 ] Extending the useful life of a structure also reduces waste – building materials such as wood that are light and easy to work with make renovations easier. [ 68 ]
To reduce the impact on wells or water treatment plants , several options exist. " Greywater ", wastewater from sources such as dishwashing or washing machines, can be used for subsurface irrigation, or if treated, for non-potable purposes, e.g., to flush toilets and wash cars. Rainwater collectors are used for similar purposes.
Centralized wastewater treatment systems can be costly and use a lot of energy. An alternative to this process is converting waste and wastewater into fertilizer, which avoids these costs and shows other benefits. By collecting human waste at the source and running it to a semi-centralized biogas plant with other biological waste, liquid fertilizer can be produced. This concept was demonstrated by a settlement in Lübeck, Germany, in the late 1990s. Practices like these provide soil with organic nutrients and create carbon sinks that remove carbon dioxide from the atmosphere, offsetting greenhouse gas emissions. Producing artificial fertilizer also requires more energy than this process. [ 69 ]
Electricity networks are built based on peak demand (also called peak load). Peak demand is measured in watts (W) and indicates how fast electrical energy is consumed. Residential electricity, by contrast, is usually billed by electrical energy used ( kilowatt hour , kWh). Green or sustainable buildings are often capable of saving electrical energy but not necessarily of reducing peak demand .
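The distinction can be seen from a daily load profile: two buildings may use roughly the same energy (kWh) while placing very different peak demands (kW) on the network. The hourly figures below are invented purely to illustrate the point.

```python
# Energy (kWh) vs peak demand (kW) from hourly load profiles (invented data).

# Hourly demand in kW over one day for two buildings using similar total energy.
conventional = [1, 1, 1, 1, 1, 2, 4, 5, 3, 2, 2, 2, 2, 2, 2, 3, 5, 8, 9, 7, 4, 3, 2, 1]
load_shifted = [2, 2, 2, 2, 2, 3, 4, 4, 3, 3, 3, 3, 3, 3, 3, 3, 4, 4, 4, 4, 3, 3, 2, 2]

for name, profile in [("conventional", conventional), ("load-shifted", load_shifted)]:
    energy_kwh = sum(profile)   # 1-hour intervals, so kW * 1 h = kWh
    peak_kw = max(profile)
    print(f"{name}: energy {energy_kwh} kWh, peak demand {peak_kw} kW")
# Similar daily energy, but the load-shifted building needs far less peak capacity.
```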
When sustainable building features are designed, constructed and operated efficiently, peak demand can be reduced, so there is less need for electricity network expansion and less impact on carbon emissions and climate change . [ 70 ] Such sustainable features include good orientation, sufficient indoor thermal mass, good insulation, photovoltaic panels , thermal or electrical energy storage systems , and smart building (home) energy management systems . [ 71 ]
The most criticized issue in constructing environmentally friendly buildings is the price. Photovoltaics , new appliances, and modern technologies tend to cost more money. Most green buildings cost a premium of less than 2%, but yield 10 times as much over the entire life of the building. [ 66 ] Regarding the financial benefits of green building, "Over 20 years, the financial payback typically exceeds the additional cost of greening by a factor of 4-6 times. And broader benefits, such as reductions in greenhouse gases (GHGs) and other pollutants have large positive impacts on surrounding communities and on the planet." [ 72 ] The perception gap lies between the up-front cost [ 73 ] and the life-cycle cost. The monetary savings come from more efficient use of utilities, which results in decreased energy bills. It is projected that different sectors could save $130 billion on energy bills. [ 74 ] Also, higher worker or student productivity can be factored into savings and cost deductions. [ citation needed ]
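A minimal worked example of the up-front premium versus life-cycle savings comparison follows; the construction cost and savings multiplier are assumed figures consistent with the ranges quoted above, not data from the cited studies.

```python
# Illustrative up-front premium vs. 20-year savings comparison.
# Numbers are assumptions consistent with the ranges quoted in the text.

construction_cost = 10_000_000   # baseline construction cost (USD), assumed
green_premium_rate = 0.02        # roughly 2% up-front premium
payback_multiplier = 5           # savings of 4-6x the premium over 20 years (midpoint)

green_premium = construction_cost * green_premium_rate
savings_20yr = green_premium * payback_multiplier
net_benefit = savings_20yr - green_premium

print(f"Green premium:       ${green_premium:,.0f}")
print(f"20-year savings:     ${savings_20yr:,.0f}")
print(f"Net life-cycle gain: ${net_benefit:,.0f}")
```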
Numerous studies have shown the measurable benefit of green building initiatives on worker productivity. In general it has been found that, "there is a direct correlation between increased productivity and employees who love being in their work space." [ 75 ] Specifically, worker productivity can be significantly impacted by certain aspects of green building design such as improved lighting, reduction of pollutants, advanced ventilation systems and the use of non-toxic building materials. [ 76 ] In " The Business Case for Green Building ", the U.S. Green Building Council gives another specific example of how commercial energy retrofits increase worker health and thus productivity, "People in the U.S. spend about 90% of their time indoors. EPA studies indicate indoor levels of pollutants may be up to ten times higher than outdoor levels. LEED-certified buildings are designed to have healthier, cleaner indoor environmental quality, which means health benefits for occupants." [ 77 ]
Studies have shown over a 20-year life period, some green buildings have yielded $53 to $71 per square foot back on investment. [ 78 ] Confirming the rentability of green building investments, further studies of the commercial real estate market have found that LEED and Energy Star certified buildings achieve significantly higher rents, sale prices and occupancy rates as well as lower capitalization rates potentially reflecting lower investment risk. [ 79 ] [ 80 ] [ 81 ]
As a result of the increased interest in green building concepts and practices, a number of organizations have developed standards, codes and rating systems for use by government regulators, building professionals and consumers. In some cases, codes are written so local governments can adopt them as bylaws to reduce the local environmental impact of buildings.
Green building rating systems such as BREEAM (United Kingdom), LEED (United States and Canada), DGNB (Germany), CASBEE (Japan), VERDE GBCe (Spain), and GRIHA (India) help consumers determine a structure's level of environmental performance. They award credits for optional building features that support green design in categories such as the location and maintenance of the building site, conservation of water , energy, and building materials, and occupant comfort and health. The number of credits generally determines the level of achievement. [ 82 ]
Green building codes and standards, such as the International Code Council's draft International Green Construction Code, [ 83 ] are sets of rules created by standards development organizations that establish minimum requirements for elements of green building such as materials or heating and cooling.
The new version of the European Construction Products Regulation (CPR) contains elements of life cycle analysis and verification of Environmental Product Declarations under the "System 3+" process. [ 84 ]
Some of the major building environmental assessment tools currently in use include BREEAM, LEED, Green Globes, DGNB, CASBEE, and Green Star.
At the beginning of the 21st century, efforts were made to implement the principles of green building not only for individual buildings but also for neighborhoods and villages. The intent is to create zero-energy neighborhoods and villages, which generate all of their own energy, reuse waste, implement sustainable transportation, and produce their own food. [ 85 ] [ 86 ] Green villages have been identified as a way to decentralize sustainable climate practices, which may prove key in areas with high rural or scattered village populations, such as India, where 74% of the population lives in over 600,000 different villages. [ 87 ]
IPCC Fourth Assessment Report
Climate Change 2007, the Fourth Assessment Report (AR4) of the United Nations Intergovernmental Panel on Climate Change ( IPCC ), is the fourth in a series of such reports. The IPCC was established by the World Meteorological Organization (WMO) and the United Nations Environment Programme (UNEP) to assess scientific, technical and socio-economic information concerning climate change, its potential effects and options for adaptation and mitigation. [ 88 ]
UNEP and Climate change
The United Nations Environment Programme (UNEP) works to facilitate the transition to low-carbon societies, support climate-proofing efforts, improve understanding of climate change science, and raise public awareness about this global challenge.
GHG Indicator
The Greenhouse Gas Indicator: UNEP Guidelines for Calculating Greenhouse Gas Emissions for Businesses and Non-Commercial Organizations
Agenda 21
Agenda 21 is a programme run by the United Nations (UN) related to sustainable development. It is a comprehensive blueprint of action to be taken globally, nationally and locally by organizations of the UN, governments, and major groups in every area in which humans impact on the environment . The number 21 refers to the 21st century.
FIDIC's PSM
The International Federation of Consulting Engineers (FIDIC) Project Sustainability Management Guidelines were created to assist project engineers and other stakeholders in setting sustainable development goals for their projects that are recognized and accepted as being in the interests of society. The process is also intended to align project goals with local conditions and priorities and assist those involved in managing projects to measure and verify their progress.
The Project Sustainability Management Guidelines are structured with Themes and Sub-Themes under the three main sustainability headings of Social, Environmental and Economic. For each individual Sub-Theme a core project indicator is defined along with guidance as to the relevance of that issue in the context of an individual project.
The Sustainability Reporting Framework provides guidance for organizations to use as the basis for disclosure about their sustainability performance, and also provides stakeholders a universally applicable, comparable framework in which to understand disclosed information.
The Reporting Framework contains the core product of the Sustainability Reporting Guidelines, as well as Protocols and Sector Supplements.
The Guidelines are used as the basis for all reporting. They are the foundation upon which all other reporting guidance is based, and outline core content for reporting that is broadly relevant to all organizations regardless of size, sector, or location. The Guidelines contain principles and guidance as well as standard disclosures – including indicators – to outline a disclosure framework that organizations can voluntarily, flexibly, and incrementally, adopt.
Protocols underpin each indicator in the Guidelines and include definitions for key terms in the indicator, compilation methodologies, intended scope of the indicator, and other technical references.
Sector Supplements respond to the limits of a one-size-fits-all approach. Sector Supplements complement the use of the core Guidelines by capturing the unique set of sustainability issues faced by different sectors such as mining, automotive, banking, public agencies and others.
IPD Environment Code
The IPD Environment Code was launched in February 2008. The Code is intended as a good practice global standard for measuring the environmental performance of corporate buildings. Its aim is to accurately measure and manage the environmental impacts of corporate buildings and enable property executives to generate high quality, comparable performance information about their buildings anywhere in the world. The Code covers a wide range of building types (from offices to airports) and aims to inform and support the following:
IPD estimates that it will take approximately three years to gather enough data to develop a robust set of baselines that could be used across a typical corporate estate.
ISO 21931
ISO/TS 21931:2006, Sustainability in building construction—Framework for methods of assessment for environmental performance of construction works—Part 1: Buildings, is intended to provide a general framework for improving the quality and comparability of methods for assessing the environmental performance of buildings. It identifies and describes issues to be taken into account when using methods for the assessment of environmental performance for new or existing building properties in the design, construction, operation, refurbishment and deconstruction stages. It is not an assessment system in itself but is intended to be used in conjunction with, and following the principles set out in, the ISO 14000 series of standards. | https://en.wikipedia.org/wiki/Green_building
Israel has a Green Building Standard for buildings with reduced environmental impact. The standard is based on a point rating system, awarding up to 5 stars based on the number of points achieved (55 – 100) in 8 categories.
Israel has had its own voluntary Green Building Standard (IS-5281) since 2005. While the 2005 version covered only new residential and office buildings, a significantly revised and updated version was approved in 2011, following pressure from professionals and market players. The new standard, written with the help of BRE, the British organisation behind BREEAM, the UK green building tool, covers new buildings and buildings under significant renovation.
This version was revised in August 2014.
A number of Israeli municipalities are currently using the standard as a mandatory part of their building licensing process. Together with the complementary standards 5282 (classification of buildings according to energy use) and 1738 (sustainable products), it provides a system for evaluating the environmental sustainability of buildings. [ 1 ]
The United States Green Building Council's LEED rating system has been implemented in some buildings in Israel, including the Intel Development Center in Haifa .
The Israeli Green Building Standard ('Buildings of Lesser Environmental Harm'), Standard 5281, was upgraded and expanded in 2011 in cooperation with the Ministry of Environmental Protection, the Standards Institute of Israel, the Ministry of Interior, Ministry of Building and Housing and the Israeli Green Building Council. While the old standard only applied to residential and office buildings, the revised version defined seven standards for seven types of buildings: residential, offices, healthcare institutions, public, commercial, education and tourism buildings.
The 5281 standard encompasses issues pertinent to every green building: energy, land, water, building materials, health and welfare of building users, waste, transportation, building site management and innovation. Each issue is divided into sub-categories that include rating and assessment criteria, and the score is determined in accordance with the project's compliance with the requirements.
A building is deemed a 'green building' if it meets the minimal requirements for each of the categories, as well as additional preconditions to minimize the building's "environmental footprint." The standard has five levels, ranging from one star to five stars.
The Israeli Green Building Council (ILGBC) publishes general and technical manuals that provide information on these standards and their implementation. [ 2 ] | https://en.wikipedia.org/wiki/Green_building_in_Israel |
Green bullet , green ammunition or green ammo are nicknames for a United States Department of Defense program to eliminate the use of hazardous materials from small arms ammunition and from small arms ammunition manufacturing. Initial objectives were elimination of ozone-depleting substances , volatile organic compounds , and heavy metals from primers and projectiles . These materials were perceived as causing difficulties through the entire life cycle of ammunition. The materials generated hazardous wastes and emissions at manufacturing facilities and use of ammunition caused contamination at shooting ranges . Potential health hazards made demilitarization and disposal of unused ammunition difficult and expensive. [ 2 ]
The Joint Working Group for Non-Toxic Ammunition was formed by the Small Caliber Ammunition Branch of the United States Army Armament Research, Development and Engineering Center in October 1995. Members of the working group included the National Guard of the United States , the United States Coast Guard , the United States Army Infantry School , the Industrial Operations Command , the Lake City Army Ammunition Plant , the Oak Ridge National Laboratory , the Los Alamos National Laboratory and the United States Department of Energy Kansas City Plant . [ 2 ]
In 2013, lead bullet production represented the second largest use of lead in the U.S., after lead-acid batteries. [ 3 ] Studies by the U.S. CDC suggest blood lead levels are correlated with self-reported consumption of game meat. [ 4 ]
On October 11, 2013, Governor Jerry Brown of California signed into law AB 711 Hunting: nonlead ammunition. [ 5 ] Cost reductions from conversion to green ammo are estimated at "$2.5 million required for waste removal at each outdoor firing range as well as the $100 thousand annual costs for lead contamination monitoring". [ 6 ]
Two green ammunition cartridges are the 5.56×45mm NATO M855A1 and the MK281 40 mm grenade . Switching to the 5.56 mm green bullet, the M855A1 Enhanced Performance Round, or EPR, in 2010 has eliminated nearly 2,000 tons of lead from the waste stream. [ 15 ] U.S. Army representatives at a 2013 House Armed Services Committee hearing credited the 5.56 mm M855A1 Enhanced Performance Round with performance capabilities "close to" those of a 7.62 mm round. [ 16 ] The longer, less dense M855A1 bullet must be seated deeper than the lead-core bullet it replaced to maintain the same exterior cartridge dimensions required for reliable functioning in self-loading firearms, and higher pressure is required to obtain the same bullet velocity with the reduced propellant volume. Increased pressure causes gas port erosion, producing a higher cyclic rate of automatic fire and making jamming malfunctions more likely. Cracks in bolt locking lugs have been observed after 3000 rounds of full automatic fire with the M855A1 cartridge. [ 17 ]
Enhanced Performance Round, Lead-Free
The Army Research Laboratory and other participants developed the M855A1, Enhanced Performance Round (EPR), by applying ballistics concepts originally used in large-caliber cartridges to small arms. The result was significant improvements to lethality of small arms. [ 18 ] [ 19 ] The 5.56-mm (M855A1) ammunition was first battle-tested in mid-2010 in Afghanistan. The 7.62-mm (M80A1) ammunition was fielded in 2014. [ 18 ]
The EPR "bronze tip" ammo – previously known generically as "Green Ammo" – was born at the kickoff meeting for Phase II of the Army's Green Ammunition replacement program in mid-2005, at the Lake City Army Ammunition Plant. Participants met to discuss problems surrounding environmentally-friendly small arms training ammunition. [ 18 ]
The program team was composed of Project Manager, Maneuver Ammunition Systems (PM-MAS), Army Research Laboratory (ARL), U.S. Army Armaments Research Development and Engineering Center (ARDEC), and other team members. Participants evaluated more than 20 potential projectile designs before moving forward with a three-piece, reverse-jacket bullet design incorporating a hardened steel penetrator and lead-free slug. [ 20 ]
The EPR produces consistent effects against soft targets; increased effectiveness at long ranges; increased defeat of hard targets; and reduced muzzle flash (to help conceal soldiers' firing positions). The lead-free cartridges also reduce environmental impact by removing more than 2,000 metric tons of lead per year that otherwise could end up in the environment. [ 18 ] [ 19 ]
The EPR contains an environmentally friendly projectile that eliminates lead from the manufacturing process, in direct support of the Army's commitment to environmental stewardship. [ 21 ] Under the Green Ammo Phase II initiative, the Army focused on lead-free ammo for stateside training ranges, in response to tightening state environmental regulations. [ 18 ]
Some of a bullet's kinetic energy is typically converted to heat if the bullet strikes a hard surface like rock. Collision debris may include high temperature bullet fragments as sparks . Steel core and solid copper ammunition have the highest potential to start wildfires . Lead core bullets are less likely to ignite surrounding vegetation. [ 22 ] [ 23 ]
Rifling is required to stabilize elongated bullets, and longer bullets require faster rotation for similar stability. The rate of rotation is determined by the twist of the lands and grooves engraved on the interior of a rifled barrel. Twist is usually expressed as the length of barrel (in inches) in which the bullet will rotate through a full 360 degrees; so bullets fired from a 1:10" twist rifle will make a complete rotation in every 10 inches (25 cm) of distance traveled. [ 24 ]
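For example, the bullet's spin rate follows directly from the muzzle velocity and the twist rate; the sketch below uses an assumed muzzle velocity typical of 5.56 mm rifles purely to illustrate the arithmetic.

```python
# Bullet spin rate from barrel twist and muzzle velocity.
# Muzzle velocity is an assumed, typical figure for illustration.

twist_in_per_rev = 7.0          # 1:7" twist barrel: one rotation per 7 inches of travel
muzzle_velocity_fps = 3000.0    # assumed muzzle velocity (feet per second)

velocity_in_per_s = muzzle_velocity_fps * 12            # convert ft/s to in/s
spin_rev_per_s = velocity_in_per_s / twist_in_per_rev   # rotations per second at the muzzle
spin_rpm = spin_rev_per_s * 60

print(f"Spin rate: {spin_rev_per_s:,.0f} rev/s ({spin_rpm:,.0f} rpm)")
```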
Since lead is a very dense material, bullets made of inexpensive, non-toxic materials will be lighter than bullets made of lead unless bullet length is increased. Inferior external ballistics cause lighter bullets to be less effective against distant targets. Increasing bullet length may require a faster rifling twist to maintain stability. Some early trial versions of the M16 rifle had 1:14" twist barrels, but this was increased to 1:12" twist in early military production to improve stability with 55-grain (3.6 g) M193 lead-core bullets in the early 5.56×45mm NATO cartridges. Twist was increased to 1:9" after combat experience demonstrated the advantages of longer 62-grain (4.0 g) M855 bullets with a portion of the lead core replaced by a less dense steel penetrator. Barrels with 1:7" twist have been used in 21st century 5.56×45mm NATO firearms and have replaced barrels of older United States military firearms to stabilize longer M856 tracer bullets and M855A1 green bullets of less dense materials. [ 25 ] | https://en.wikipedia.org/wiki/Green_bullet
Green chemistry , similar to sustainable chemistry or circular chemistry , [ 1 ] is an area of chemistry and chemical engineering focused on the design of products and processes that minimize or eliminate the use and generation of hazardous substances. [ 2 ] While environmental chemistry focuses on the effects of polluting chemicals on nature, green chemistry focuses on the environmental impact of chemistry, including lowering consumption of nonrenewable resources and technological approaches for preventing pollution . [ 3 ] [ 4 ] [ 5 ] [ 6 ] [ 7 ] [ 8 ]
The overarching goals of green chemistry—namely, more resource-efficient and inherently safer design of molecules, materials, products, and processes—can be pursued in a wide range of contexts. [ 9 ]
Green chemistry (sustainable chemistry): Design of chemical products and processes that minimize or eliminate the use or generation of substances hazardous to humans, animals, plants, and the environment.
Note 1: Modified from ref. [ 10 ] to be more general.
Note 2: Green chemistry discusses the engineering concept of pollution prevention and zero waste both at laboratory and industrial scales. It encourages the use of economical and ecocompatible techniques that not only improve the yield but also bring down the cost of disposal of wastes at the end of a chemical process. [ 11 ]
Green chemistry emerged from a variety of existing ideas and research efforts (such as atom economy and catalysis ) in the period leading up to the 1990s, in the context of increasing attention to problems of chemical pollution and resource depletion . The development of green chemistry in Europe and the United States was linked to a shift in environmental problem-solving strategies: a movement from command and control regulation and mandated lowering of industrial emissions at the "end of the pipe," toward the active prevention of pollution through the innovative design of production technologies themselves. The set of concepts now recognized as green chemistry coalesced in the mid- to late-1990s, along with broader adoption of the term (which prevailed over competing terms such as "clean" and "sustainable" chemistry). [ 12 ] [ 13 ]
In the United States, the Environmental Protection Agency played a significant early role in fostering green chemistry through its pollution prevention programs, funding, and professional coordination. At the same time in the United Kingdom, researchers at the University of York , who used the term "clean technology" in the early 1990s, contributed to the establishment of the Green Chemistry Network within the Royal Society of Chemistry , and the launch of the journal Green Chemistry . [ 13 ] In 1991, in the Netherlands, a special issue called 'green chemistry' [groene chemie] was published in Chemisch Magazine . In the Dutch context, the umbrella term green chemistry was associated with the exploitation of biomass as a renewable feedstock. [ 14 ]
In 1998, Paul Anastas (who then directed the Green Chemistry Program at the US EPA ) and John C. Warner (then of Polaroid Corporation ) published a set of principles to guide the practice of green chemistry. [ 15 ] The twelve principles address a range of ways to lower the environmental and health impacts of chemical production, and also indicate research priorities for the development of green chemistry technologies. [ 16 ]
The principles cover concepts such as designing processes so that as much of the raw material as possible ends up in the product, using safer solvents and renewable feedstocks, designing energy-efficient processes, and avoiding the creation of waste in the first place. The twelve principles of green chemistry are: [ 17 ] prevention of waste; atom economy; less hazardous chemical syntheses; designing safer chemicals; safer solvents and auxiliaries; design for energy efficiency; use of renewable feedstocks; reduction of derivatives; catalysis; design for degradation; real-time analysis for pollution prevention; and inherently safer chemistry for accident prevention.
Attempts are being made not only to quantify the greenness of a chemical process but also to factor in other variables such as chemical yield , the price of reaction components, safety in handling chemicals, hardware demands, energy profile, and ease of product workup and purification. In one quantitative study, [ 18 ] the reduction of nitrobenzene to aniline receives 64 points out of 100, marking it as an acceptable synthesis overall, whereas a synthesis of an amide using HMDS is described as only adequate with a combined 32 points.
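One of the simplest such metrics is atom economy, the fraction of total reactant mass that ends up in the desired product. The sketch below applies it to the catalytic hydrogenation of nitrobenzene to aniline (C6H5NO2 + 3 H2 → C6H5NH2 + 2 H2O); the route and molecular weights are standard textbook values used here only to illustrate the calculation, not figures from the cited study.

```python
# Atom economy: percentage of total reactant mass incorporated into the product.
# Example reaction: nitrobenzene + 3 H2 -> aniline + 2 H2O (catalytic hydrogenation).

def atom_economy(product_mw: float, reactant_mws: list[float]) -> float:
    """Return atom economy (%) = MW(product) / sum(MW(reactants)) * 100."""
    return 100.0 * product_mw / sum(reactant_mws)

MW_NITROBENZENE = 123.11   # g/mol
MW_H2 = 2.016              # g/mol
MW_ANILINE = 93.13         # g/mol

ae = atom_economy(MW_ANILINE, [MW_NITROBENZENE, MW_H2, MW_H2, MW_H2])
print(f"Atom economy of nitrobenzene hydrogenation: {ae:.1f}%")  # about 72%
```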
Green chemistry is increasingly seen as a powerful tool that researchers must use to evaluate the environmental impact of nanotechnology . [ 19 ] As nanomaterials are developed, the environmental and human health impacts of both the products themselves and the processes used to make them must be considered to ensure their long-term economic viability. Nanomaterial technology is increasingly being used in practice, but its potential nanotoxicity is often overlooked; further consideration therefore needs to be given to the legal, ethical, safety, and regulatory issues associated with nanomaterials . [ 20 ]
The major application of solvents in human activities is in paints and coatings (46% of usage). Smaller-volume applications include cleaning, de-greasing, adhesives, and chemical synthesis. [ 21 ] Traditional solvents are often toxic or are chlorinated. Green solvents, on the other hand, are generally less harmful to health and the environment and preferably more sustainable. Ideally, solvents would be derived from renewable resources and biodegrade to an innocuous, often naturally occurring, product. [ 22 ] [ 23 ] However, the manufacture of solvents from biomass can be more harmful to the environment than making the same solvents from fossil fuels. [ 24 ] Thus the environmental impact of solvent manufacture must be considered when a solvent is being selected for a product or process. [ 25 ] Another factor to consider is the fate of the solvent after use. If the solvent is being used in an enclosed situation where solvent collection and recycling is feasible, then the energy cost and environmental harm associated with recycling should be considered; in such a situation water, which is energy-intensive to purify, may not be the greenest choice. On the other hand, a solvent contained in a consumer product is likely to be released into the environment upon use, and therefore the environmental impact of the solvent itself is more important than the energy cost and impact of solvent recycling; in such a case water is very likely to be a green choice. In short, the impact of the entire lifetime of the solvent, from cradle to grave (or cradle to cradle if recycled) must be considered. Thus the most comprehensive definition of a green solvent is the following: " a green solvent is the solvent that makes a product or process have the least environmental impact over its entire life cycle. " [ 26 ]
By definition, then, a solvent might be green for one application (because it results in less environmental harm than any other solvent that could be used for that application) and yet not be a green solvent for a different application. A classic example is water , which is a very green solvent for consumer products such as toilet bowl cleaner but is not a green solvent for the manufacture of polytetrafluoroethylene . For the production of that polymer, the use of water as solvent requires the addition of perfluorinated surfactants which are highly persistent. Instead, supercritical carbon dioxide seems to be the greenest solvent for that application because it performs well without any surfactant. [ 26 ] In summary, no solvent can be declared to be a "green solvent" unless the declaration is limited to a specific application.
Novel or enhanced synthetic techniques can often provide improved environmental performance or enable better adherence to the principles of green chemistry. For example, the 2005 Nobel Prize for Chemistry was awarded to Yves Chauvin, Robert H. Grubbs and Richard R. Schrock, for the development of the metathesis method in organic synthesis, with explicit reference to its contribution to green chemistry and "smarter production." [ 27 ] A 2005 review identified three key developments in green chemistry in the field of organic synthesis : use of supercritical carbon dioxide as green solvent, aqueous hydrogen peroxide for clean oxidations and the use of hydrogen in asymmetric synthesis . [ 28 ] Some further examples of applied green chemistry are supercritical water oxidation , on water reactions , and dry media reactions . [ citation needed ]
Bioengineering is also seen as a promising technique for achieving green chemistry goals. A number of important process chemicals can be synthesized in engineered organisms, such as shikimate , a Tamiflu precursor which is fermented by Roche in bacteria. Click chemistry is often cited [ citation needed ] as a style of chemical synthesis that is consistent with the goals of green chemistry. The concept of 'green pharmacy' has recently been articulated based on similar principles. [ 29 ]
In 1996, Dow Chemical won the Greener Reaction Conditions award for its 100% carbon dioxide blowing agent for polystyrene foam production. Polystyrene foam is a common material used in packing and food transportation. Seven hundred million pounds are produced each year in the United States alone. Traditionally, CFC and other ozone -depleting chemicals were used in the production process of the foam sheets, presenting a serious environmental hazard . Flammable, explosive, and, in some cases, toxic hydrocarbons have also been used as CFC replacements, but they present their own problems. Dow Chemical discovered that supercritical carbon dioxide works equally well as a blowing agent, without the need for hazardous substances, allowing the polystyrene to be more easily recycled. The CO 2 used in the process is reused from other industries, so the net carbon released from the process is zero.
Addressing principle #2 is the peroxide process for producing hydrazine without cogenerating salt. Hydrazine is traditionally produced by the Olin Raschig process from sodium hypochlorite (the active ingredient in many bleaches ) and ammonia . The net reaction produces one equivalent of sodium chloride for every equivalent of the targeted product hydrazine: [ 30 ]
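The net reaction referred to above is not reproduced in the extracted text; the standard form of the Raschig (hypochlorite) route, supplied here for reference, is:

```latex
2\,\mathrm{NH_3} + \mathrm{NaOCl} \longrightarrow \mathrm{N_2H_4} + \mathrm{NaCl} + \mathrm{H_2O}
```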
In the greener peroxide process, hydrogen peroxide is employed as the oxidant and the side product is water. The net conversion follows:
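As with the Raschig route above, the net conversion is not shown in the extracted text; the generally cited overall reaction for the peroxide (ketazine) process is:

```latex
2\,\mathrm{NH_3} + \mathrm{H_2O_2} \longrightarrow \mathrm{N_2H_4} + 2\,\mathrm{H_2O}
```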
Addressing principle #4, this process does not require auxiliary extracting solvents. Methyl ethyl ketone is used as a carrier for the hydrazine; the intermediate ketazine phase separates from the reaction mixture, facilitating workup without the need for an extracting solvent.
Addressing principle #7 is a green route to 1,3-propanediol , which is traditionally generated from petrochemical precursors. It can be produced from renewable precursors via the bioseparation of 1,3-propanediol using a genetically modified strain of E. coli . [ 31 ] This diol is used to make new polyesters for the manufacture of carpets.
In 2002, Cargill Dow (now NatureWorks ) won the Greener Reaction Conditions Award for their improved method for the polymerization of polylactic acid . Unfortunately, lactide-based polymers do not perform well and the project was discontinued by Dow soon after the award. Lactic acid is produced by fermenting corn and converted to lactide , the cyclic dimer ester of lactic acid, using an efficient, tin-catalyzed cyclization. The L,L-lactide enantiomer is isolated by distillation and polymerized in the melt to make a crystallizable polymer , which has some applications including textiles and apparel, cutlery, and food packaging . Wal-Mart has announced that it is using or will use PLA for its produce packaging. The NatureWorks PLA process substitutes renewable materials for petroleum feedstocks, doesn't require the use of hazardous organic solvents typical in other PLA processes, and results in a high-quality polymer that is recyclable and compostable.
In 2003 Shaw Industries selected a combination of polyolefin resins as the base polymer of choice for EcoWorx due to the low toxicity of its feedstocks, superior adhesion properties, dimensional stability, and its ability to be recycled. The EcoWorx compound also had to be designed to be compatible with nylon carpet fiber. Although EcoWorx may be recovered from any fiber type, nylon-6 provides a significant advantage. Polyolefins are compatible with known nylon-6 depolymerization methods. PVC interferes with those processes. Nylon-6 chemistry is well-known and not addressed in first-generation production. From its inception, EcoWorx met all of the design criteria necessary to satisfy the needs of the marketplace from a performance, health, and environmental standpoint. Research indicated that separation of the fiber and backing through elutriation , grinding, and air separation proved to be the best way to recover the face and backing components, but an infrastructure for returning postconsumer EcoWorx to the elutriation process was necessary. Research also indicated that the postconsumer carpet tile had a positive economic value at the end of its useful life. EcoWorx is recognized by MBDC as a certified cradle-to-cradle design .
In 2005, Archer Daniels Midland (ADM) and Novozymes won the Greener Synthetic Pathways Award for their enzyme interesterification process. In response to the U.S. Food and Drug Administration (FDA) mandated labeling of trans -fats on nutritional information by January 1, 2006, Novozymes and ADM worked together to develop a clean, enzymatic process for the interesterification of oils and fats by interchanging saturated and unsaturated fatty acids. The result is commercially viable products without trans -fats. In addition to the human health benefits of eliminating trans -fats, the process has reduced the use of toxic chemicals and water, prevents vast amounts of byproducts, and reduces the amount of fats and oils wasted.
In 2011, the Outstanding Green Chemistry Accomplishments by a Small Business Award went to BioAmber Inc. for integrated production and downstream applications of bio-based succinic acid . Succinic acid is a platform chemical that is an important starting material in the formulations of everyday products. Traditionally, succinic acid is produced from petroleum-based feedstocks. BioAmber developed a process and technology that produce succinic acid from the fermentation of renewable feedstocks at a lower cost and lower energy expenditure than the petroleum equivalent, while sequestering CO 2 rather than emitting it. [ 32 ] However, lower oil prices pushed the company into bankruptcy [ 33 ] and bio-sourced succinic acid is now barely made. [ 34 ]
Several laboratory chemicals are controversial from the perspective of green chemistry. The Massachusetts Institute of Technology created a "Green" Alternatives Wizard to help identify alternatives. Ethidium bromide , xylene , mercury , and formaldehyde have been identified as "worst offenders" which have alternatives. [ 35 ] Solvents in particular make a large contribution to the environmental impact of chemical manufacturing and there is a growing focus on introducing greener solvents into the earliest stage of development of these processes: laboratory-scale reaction and purification methods. [ 36 ] In the pharmaceutical industry, both GSK [ 37 ] and Pfizer [ 38 ] have published Solvent Selection Guides for their drug discovery chemists.
In 2007, the EU put into place the Registration, Evaluation, Authorisation and Restriction of Chemicals (REACH) program, which requires companies to provide data showing that their products are safe. This regulation (1907/2006) not only ensures the assessment of chemicals' hazards and of the risks arising during their uses, but also includes measures for banning or restricting/authorising uses of specific substances. ECHA, the EU Chemicals Agency in Helsinki, implements the regulation, whereas enforcement lies with the EU member states.
The United States formed the Environmental Protection Agency (EPA) in 1970 to protect human and environmental health by creating and enforcing environmental regulation. Green chemistry builds on the EPA’s goals by encouraging chemists and engineers to design chemicals, processes, and products that avoid the creation of toxins and waste. [ 39 ]
The U.S. law that governs the majority of industrial chemicals (excluding pesticides, foods, and pharmaceuticals) is the Toxic Substances Control Act (TSCA) of 1976. Examining the role of regulatory programs in shaping the development of green chemistry in the United States, analysts have revealed structural flaws and long-standing weaknesses in TSCA; for example, a 2006 report to the California Legislature concludes that TSCA has produced a domestic chemicals market that discounts the hazardous properties of chemicals relative to their function, price, and performance. [ 40 ] Scholars have argued that such market conditions represent a key barrier to the scientific, technical, and commercial success of green chemistry in the U.S., and fundamental policy changes are needed to correct these weaknesses. [ 41 ]
Passed in 1990, the Pollution Prevention Act helped foster new approaches for dealing with pollution by preventing environmental problems before they happen.
Green chemistry grew in popularity in the United States after the Pollution Prevention Act of 1990 was passed. This Act declared that pollution should be lowered by improving designs and products rather than treatment and disposal. These regulations encouraged chemists to reimagine pollution and research ways to limit the toxins in the atmosphere. In 1991, the EPA Office of Pollution Prevention and Toxics created a research grant program encouraging the research and recreation of chemical products and processes to limit the impact on the environment and human health. [ 42 ] The EPA hosts The Green Chemistry Challenge each year to incentivize the economic and environmental benefits of developing and utilizing green chemistry. [ 43 ]
In 2008, the State of California approved two laws aiming to encourage green chemistry, launching the California Green Chemistry Initiative . One of these statutes required California's Department of Toxic Substances Control (DTSC) to develop new regulations to prioritize "chemicals of concern" and promote the substitution of hazardous chemicals with safer alternatives. The resulting regulations took effect in 2013, initiating DTSC's Safer Consumer Products Program . [ 44 ]
There are ambiguities in the definition of green chemistry and how it is understood among broader science, policy, and business communities. Even within chemistry, researchers have used the term "green chemistry" to describe a range of work independently of the framework put forward by Anastas and Warner (i.e., the 12 principles). [ 13 ] While not all uses of the term are legitimate (see greenwashing ), many are, and the authoritative status of any single definition is uncertain. More broadly, the idea of green chemistry can easily be linked (or confused) with related concepts like green engineering , environmental design , or sustainability in general. Green chemistry's complexity and multifaceted nature makes it difficult to devise clear and simple metrics . As a result, "what is green" is often open to debate. [ 45 ]
Several scientific societies have created awards to encourage research in green chemistry. | https://en.wikipedia.org/wiki/Green_chemistry |
Green chemistry metrics describe aspects of a chemical process relating to the principles of green chemistry . [ 1 ] The metrics serve to quantify the efficiency or environmental performance of chemical processes, and allow changes in performance to be measured. The motivation for using metrics is the expectation that quantifying technical and environmental improvements can make the benefits of new technologies more tangible, perceptible, or understandable. This, in turn, is likely to aid the communication of research and potentially facilitate the wider adoption of green chemistry technologies in industry.
For a non-chemist, an understandable method of describing the improvement might be a decrease of X unit cost per kilogram of compound Y . This, however, might be an over-simplification. For example, it would not allow a chemist to visualize the improvement made or to understand changes in material toxicity and process hazards. For yield improvements and selectivity increases, simple percentages are suitable, but this simplistic approach may not always be appropriate. For example, when a highly pyrophoric reagent is replaced by a benign one, a numerical value is difficult to assign but the improvement is obvious, if all other factors are similar. [ 2 ]
Numerous metrics have been formulated over time. A general problem is that the more accurate and universally applicable a metric is made, the more complex and harder to apply it becomes. A good metric must be clearly defined, simple, measurable, objective rather than subjective and must ultimately drive the desired behavior.
The fundamental purpose of metrics is to allow comparisons. If there are several economically viable ways to make a product, which one causes the least environmental harm (i.e. which is the greenest)? The metrics that have been developed to achieve that purpose fall into two groups: mass-based metrics and impact-based metrics.
The simplest metrics are based upon the mass of materials rather than their impact. Atom economy, E-factor, yield, reaction mass efficiency and effective mass efficiency are all metrics that compare the mass of desired product to the mass of waste. They do not differentiate between more harmful and less harmful wastes. A process that produces less waste may appear to be greener than the alternatives according to mass-based metrics but may in fact be less green if the waste produced is particularly harmful to the environment. This serious limitation means that mass-based metrics can not be used to determine which synthetic method is greener. [ 3 ] However, mass-based metrics have the great advantage of simplicity: they can be calculated from readily available data with few assumptions. For companies that produce thousands of products, mass-based metrics may be the only viable choice for monitoring company-wide reductions in environmental harm.
In contrast, impact-based metrics such as those used in life-cycle assessment evaluate environmental impact as well as mass, making them much more suitable for selecting the greenest of several options or synthetic pathways. Some of them, such as those for acidification, ozone depletion , and resource depletion, are just as easy to calculate as mass-based metrics but require emissions data that may not be readily available. Others, such as those for inhalation toxicity, ingestion toxicity, and various forms of aquatic ecotoxicity, are more complex to calculate in addition to requiring emissions data. [ 4 ]
Atom economy was designed by Barry Trost as a framework by which organic chemists would pursue "greener" chemistry. [ 5 ] [ 6 ] The atom economy is the percentage of the reactants' mass that remains in the final product:

\[ \text{Atom economy} = \frac{\text{molecular mass of desired product}}{\text{total molecular mass of reactants}} \times 100\% \]
For a generic multi-stage reaction used for producing R:

A + B → P + X
P + C → Q + Y
Q + D → R + Z
The atom economy is calculated by

\[ \text{Atom economy} = \frac{\text{molecular mass of R}}{\text{sum of molecular masses of A, B, C and D}} \times 100\% \]
The conservation of mass principle dictates that the total mass of the reactants is the same as the total mass of the products. In the above example, the sum of molecular masses of A, B, C and D should be equal to that of R, X, Y and Z. As only R is the useful product, the atoms of X, Y and Z are said to be wasted as by-products. Economic and environmental costs of disposing of these wastes make a reaction with low atom economy "less green".
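As a quick illustration (not taken from the cited sources), the following sketch computes atom economy from the molar masses of the stoichiometric reactants and of the desired product; the numbers are placeholders chosen only for the example.

```python
def atom_economy(product_molar_mass, reactant_molar_masses):
    """Atom economy (%) = molar mass of desired product / total molar mass of reactants x 100."""
    return 100.0 * product_molar_mass / sum(reactant_molar_masses)

# Illustrative values only: stoichiometric reactants of 80 and 100 g/mol
# giving a desired product of 120 g/mol (the remaining 60 g/mol is by-product).
print(atom_economy(120.0, [80.0, 100.0]))  # 66.7
```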
A further simplified version of this is the carbon economy . It is how much carbon ends up in the useful product compared to how much carbon was used to create the product:

\[ \text{Carbon economy} = \frac{\text{number of carbon atoms in desired product}}{\text{number of carbon atoms in reactants}} \times 100\% \]
This metric is a good simplification for use in the pharmaceutical industry as it takes into account the stoichiometry of reactants and products. Furthermore, this metric is of interest to the pharmaceutical industry where development of carbon skeletons is key to their work.
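A similarly minimal sketch for carbon economy, counting only carbon atoms; the atom counts below are illustrative and do not refer to any particular process.

```python
def carbon_economy(product_carbon_atoms, reactant_carbon_atoms):
    """Carbon economy (%) = carbons in desired product / carbons in all reactants x 100."""
    return 100.0 * product_carbon_atoms / sum(reactant_carbon_atoms)

# Illustrative values only: reactants contribute 6 and 2 carbon atoms,
# and 6 of those carbons end up in the desired product.
print(carbon_economy(6, [6, 2]))  # 75.0
```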
The atom economy calculation is a simple representation of the "greenness" of a reaction as it can be carried out without the need for experimental results. Nevertheless, it can be useful in the early stages of process synthesis and design.
The drawback of this type of analysis is that assumptions have to be made. In an ideal chemical process, the amount of starting materials or reactants equals the amount of all products generated and no atom is lost. However, in most processes, some of the consumed reactant atoms do not become part of the products, but remain as unreacted reactants or are lost in side reactions. In addition, solvents and energy used for the reaction are ignored in this calculation, but they may have non-negligible impacts on the environment.
Percentage yield is calculated by dividing the amount of the obtained desired product by the theoretical yield. [ 7 ] In a chemical process, the reaction is usually reversible, thus reactants are not completely converted into products; some reactants are also lost in undesired side reactions. [ 8 ] [ 9 ] To evaluate these losses of chemicals, the actual yield has to be measured experimentally.
\[ \text{Percentage yield} = \frac{\text{actual mass of product}}{\text{theoretical mass of product}} \times 100\% \]
As percentage yield is affected by chemical equilibrium , allowing one or more reactants to be in great excess can increase the yield. However, this may not be considered a "greener" method, as it implies that a greater amount of the excess reactant remains unreacted and is therefore wasted. To evaluate the use of excess reactants, the excess reactant factor can be calculated.
\[ \text{Excess reactant factor} = \frac{\text{stoichiometric mass of reactants} + \text{excess mass of reactant(s)}}{\text{stoichiometric mass of reactants}} \]
If this value is far greater than 1, then the excess reactants may be a large waste of chemicals and costs. This can be a concern when raw materials have high economic costs or environmental costs in extraction.
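Both quantities above can be computed directly from measured or planned masses; the short sketch below uses illustrative numbers only.

```python
def percentage_yield(actual_mass, theoretical_mass):
    """Percentage yield (%) = actual mass of product / theoretical mass of product x 100."""
    return 100.0 * actual_mass / theoretical_mass

def excess_reactant_factor(stoichiometric_mass, excess_mass):
    """Excess reactant factor = (stoichiometric mass + excess mass) / stoichiometric mass."""
    return (stoichiometric_mass + excess_mass) / stoichiometric_mass

# Illustrative values only: 8 g of product obtained out of a theoretical 10 g,
# with 30 g of reactant charged in excess of the 100 g required stoichiometrically.
print(percentage_yield(8.0, 10.0))          # 80.0
print(excess_reactant_factor(100.0, 30.0))  # 1.3
```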
In addition, increasing the temperature can also increase the yield of some endothermic reactions , but at the expense of consuming more energy. Hence this may not be an attractive method either.
The reaction mass efficiency is the percentage of the actual mass of the desired product relative to the mass of all reactants used. It takes into account both atom economy and chemical yield.
\[ \text{Reaction mass efficiency} = \frac{\text{actual mass of desired product}}{\text{total mass of reactants}} \times 100\% \]

\[ \text{Reaction mass efficiency} = \frac{\text{atom economy} \times \text{percentage yield}}{\text{excess reactant factor}} \]
Reaction mass efficiency, together with all the metrics mentioned above, shows the "greenness" of a reaction but not of a process. None of these metrics takes into account all the waste produced. For example, these metrics could present a rearrangement as "very green" but fail to address any solvent, work-up, and energy issues that make the process less attractive.
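The two equivalent forms of the calculation can be sketched as follows; the input values are placeholders carried over from the illustrative examples above, not data for a real reaction.

```python
def reaction_mass_efficiency(actual_product_mass, total_reactant_mass):
    """RME (%) = actual mass of desired product / total mass of reactants used x 100."""
    return 100.0 * actual_product_mass / total_reactant_mass

def rme_from_components(atom_economy_pct, percentage_yield_pct, excess_factor):
    """Equivalent form: RME = (atom economy x percentage yield) / excess reactant factor."""
    return atom_economy_pct * (percentage_yield_pct / 100.0) / excess_factor

# Illustrative values only: 66.7% atom economy, 80% yield, excess reactant factor 1.3.
print(rme_from_components(66.7, 80.0, 1.3))  # ~41, i.e. ~41% of the charged mass becomes product
```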
A metric similar to reaction mass efficiency is the effective mass efficiency , as suggested by Hudlicky et al . [ 10 ] It is defined as the percentage of the mass of the desired product relative to the mass of all non-benign reagents used in its synthesis. The reagents here may include any used reactant, solvent or catalyst.
\[ \text{Effective mass efficiency} = \frac{\text{actual mass of desired product}}{\text{mass of non-benign reagents}} \times 100\% \]
Note that when most reagents are benign, the effective mass efficiency can be greater than 100%. This metric requires further definition of a benign substance. Hudlicky defines it as “those by-products, reagents or solvents that have no environmental risk associated with them, for example, water, low-concentration saline, dilute ethanol, autoclaved cell mass, etc.”. This definition leaves the metric open to criticism, as nothing is absolutely benign (which is a subjective term), and even the substances listed in the definition have some environmental impact associated with them. The formula also fails to address the level of toxicity associated with a process. Until all toxicology data is available for all chemicals and a term dealing with these levels of “benign” reagents is written into the formula, the effective mass efficiency is not the best metric for chemistry.
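A minimal sketch of the calculation, assuming the user supplies a judgement of which reagents count as benign; the reagent names and masses below are hypothetical.

```python
def effective_mass_efficiency(product_mass, reagent_masses, benign):
    """EME (%) = mass of desired product / mass of non-benign reagents x 100.

    `benign` is the set of reagent names treated as environmentally benign;
    as noted above, deciding what is benign is itself a subjective call.
    """
    non_benign_mass = sum(mass for name, mass in reagent_masses.items() if name not in benign)
    return 100.0 * product_mass / non_benign_mass

# Hypothetical example in which water and dilute ethanol are treated as benign.
reagents = {"substrate": 50.0, "oxidant": 20.0, "water": 200.0, "dilute ethanol": 100.0}
print(effective_mass_efficiency(40.0, reagents, benign={"water", "dilute ethanol"}))  # ~57.1
```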
The first general metric for green chemistry remains one of the most flexible and popular ones. Roger A. Sheldon ’s environmental factor (E-factor) can be made as complex and thorough or as simple as desired and useful. [ 11 ]
The E-factor of a process is the ratio of the mass of waste to the mass of product:

\[ \text{E-factor} = \frac{\text{total mass of waste}}{\text{mass of product}} \]
As examples, Sheldon calculated E-factors of various industries:
It highlights the waste produced in the process as opposed to the reaction, thus helping those who try to fulfil one of the twelve principles of green chemistry to avoid waste production. E-factors can be combined to assess multi-step reactions step by step or in one calculation. E-factor calculations can take recycling into account, for example recycled solvents and re-used catalysts, which increases accuracy but leaves out the energy consumed in recovery (this is often handled theoretically by assuming 90% solvent recovery). The main difficulty with E-factors is the need to define system boundaries, for example, which stages of the production or product life-cycle to consider before calculations can be made.
This metric is simple to apply industrially, as a production facility can measure how much material enters the site and how much leaves as product and waste, thereby directly giving an accurate global E-factor for the site. Sheldon's analyses (see table) demonstrate that oil companies produce less waste than pharmaceuticals as a percentage of material processed. This reflects the fact that the profit margins in the oil industry require them to minimise waste and find uses for products which would normally be discarded as waste. By contrast, the pharmaceutical sector is more focused on molecule manufacture and quality. The (currently) high profit margins within the sector mean that there is less concern about the comparatively large amounts of waste that are produced (especially considering the volumes used). Despite the percentage waste and E-factor being high, the pharmaceutical sector produces a much lower tonnage of waste than any other sector. This table encouraged a number of large pharmaceutical companies to commence "green" chemistry programs. [ citation needed ]
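At its simplest, a site-level E-factor follows from the mass entering a process and the mass of product leaving it, as in the sketch below; the figures are illustrative, not Sheldon's published values.

```python
def e_factor(total_mass_in, product_mass):
    """E-factor = total mass of waste / mass of product.

    For a whole site, waste can be approximated as everything entering the
    process minus the product leaving it (water is often excluded by convention).
    """
    waste_mass = total_mass_in - product_mass
    return waste_mass / product_mass

# Illustrative values only: 1,000 kg of material enters and 40 kg of product leaves,
# giving 24 kg of waste per kg of product.
print(e_factor(1000.0, 40.0))  # 24.0
```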
The EcoScale metric was proposed in an article in the Beilstein Journal of Organic Chemistry in 2006 for evaluation of the effectiveness of a synthetic reaction. [ 12 ] It is characterized by simplicity and general applicability. Like the yield-based scale, the EcoScale gives a score from 0 to 100, but it also takes into account cost, safety, technical set-up, energy and purification aspects. It is obtained by assigning a value of 100 to an ideal reaction defined as "Compound A (substrate) undergoes a reaction with (or in the presence of) inexpensive compound(s) B to give the desired compound C in 100% yield at room temperature with a minimal risk for the operator and a minimal impact on the environment", and then subtracting penalty points for non-ideal conditions. These penalty points take into account both the advantages and disadvantages of specific reagents, set-ups and technologies. | https://en.wikipedia.org/wiki/Green_chemistry_metrics |
Green criminology is a branch of criminology that involves the study of harms and crimes against the environment broadly conceived, including the study of environmental law and policy, the study of corporate crimes against the environment, and environmental justice from a criminological perspective. [ 1 ]
The term "green criminology" was introduced by Michael J. Lynch in 1990, and expanded upon in Nancy Frank and Michael J. Lynch's 1992 book, Corporate Crime, Corporate Violence , [ 2 ] which examined the political economic origins of green crime and injustice, and the scope of environmental law. The term became more widely used following publication of a special issue on green criminology in the journal Theoretical Criminology edited by Piers Beirne and Nigel South in 1998. [ 3 ] Green criminology has recently started to feature in university-level curriculum and textbooks in criminology and other disciplinary fields. [ 4 ]
The study of green criminology has expanded significantly over time, and is supported by groups such as the International Green Criminology Working Group. [ 5 ] There are increasing interfaces and hybrid empirical and theoretical influences between the study of green criminology, which focuses on environmental harms and crimes, and mainstream criminology and criminal justice, with criminologists studying the 'greening' of criminal justice institutions and practices in efforts to become more environmentally sustainable and the involvement of people in prison or on probation in ecological justice initiatives. [ 6 ] [ 7 ] [ 8 ]
Though green criminology was originally proposed as a political economic approach for the study of environmental harm, crime, law and justice, there are now several varieties of green criminology as noted below. [ 9 ]
The initial grounding of green criminology was in political economic theory and analysis. In his original 1990 article, [ 10 ] Lynch proposed green criminology as an extension of radical criminology and its focus on political economic theory and analysis. In that view, it was essential to examine the political economic dimensions of green crime and justice in order to understand the major environmental issues of our times and how they connect with the political economy of capitalism. The political economic approach was expanded upon by Lynch and Paul B. Stretesky in two additional articles in The Critical Criminologist . [ 11 ] [ 12 ] In those articles, Lynch and Stretesky extended the scope of green criminology to apply to the study of environmental justice, and followed that work with a series of studies addressing environmental justice concerns, [ 13 ] [ 14 ] [ 15 ] [ 16 ] [ 17 ] the distribution of environmental crimes and hazards, [ 18 ] [ 19 ] and empirical studies of environmental justice movements and enforcement. [ 20 ] [ 21 ] [ 22 ] [ 23 ] [ 24 ] Later, working with Michael A. Long and then Kimberly L. Barrett, the political economic explanation and empirical studies of green crimes were adapted to include a perspective on the structural influence of the treadmill of production on the creation of green crimes [ 25 ] [ 26 ] [ 27 ] [ 28 ] drawn from the work of Allan Schnaiberg , environmental sociology , eco-socialism and ecological Marxism . Throughout the development of the political economic approach to green criminology, scholars have made significant use of scientific and ecological literatures, as well as empirical analysis, which have become characteristics of this approach and distinguish it from other varieties of green criminology.
The second major variation of green criminology is the nonspeciesist argument proposed by Piers Beirne. [ 29 ] In Beirne's view, the study of harms against nonhuman animals is an important criminological topic which requires attention and at the same time illustrates the limits of current criminological theorizing about crime/harm, law and justice, with its focus almost exclusively on humans. [ 30 ] This approach also includes discussions of animal rights . Beirne's approach to green criminology has been extremely influential, and there are now a significant number of studies within the green criminological literature focusing on nonhuman animal crimes and animal abuse. [ 31 ] In addition to studies of animal abuse, included within the scope of nonhuman animal studies are those focused on illegal wildlife trade , poaching , wildlife smuggling , animal trafficking and the international trade in endangered species. [ 32 ] [ 33 ] [ 34 ] Many of the studies green criminologists undertake in this area of research are theoretical or qualitative. Ron Clarke and several colleagues, however, have explored empirical examinations of illegal animal trade and trafficking, [ 35 ] [ 36 ] [ 37 ] [ 38 ] and this has become a useful approach for examining green crimes. Clarke's approach draws on more traditional criminological theory such as rational choice theory and crime opportunity theory , and hence is not within the mainstream of green criminological approaches. Nevertheless, Clarke's approach has drawn attention to important empirical explanations of green crimes.
Similar to the political economic approach but without grounding in political economic theory, some green criminologists have explored the issue of green crime by examining how corporate behavior impacts green crimes. [ 39 ] Among other issues, this approach has included discussions of eco-crimes and activities such as bio-piracy as discussed by Nigel South. [ 40 ] Bio-piracy is largely an effort by corporations to commodify native knowledge and to turn native knowledge and practices into for-profit products while depriving native peoples of their rights to that knowledge and those products, and in most cases, avoiding payments to natives for their knowledge or products. Bio-piracy includes issues of social and economic justice for native peoples. These kinds of crimes fall into the category of eco-crimes, a term associated with the work of Reece Walters. [ 41 ] Also included within the examination of eco-crimes is the analysis of other ecologically harmful corporate behaviors such as the production of genetically modified foods [ 42 ] and various forms of toxic pollution. [ 43 ]
Ecocide describes attempts to criminalize human activities that cause extensive damage to, destruction of or loss of ecosystems of a given territory; and which diminish the health and well-being of species within these ecosystems including humans. It involves transgressions that violate the principles of environmental justice, ecological justice and species justice. When this occurs as a result of human behaviour, advocates argue that a crime has occurred. However, this has not yet been accepted as an international crime by the United Nations. [ 44 ]
Some of those who study environmental crime and justice prefer the use of Rob White's term, eco-global criminology. [ 45 ] In proposing this term, White suggested that it is necessary to employ a critical analysis of environmental crime as it occurs in its global context and connections. [ 46 ] Similar to Lynch's political economic approach to green criminology, White has also noted that it is desirable to refer to the political economy of environmental crime, and to social and environmental justice issues.
As proposed by Avi Brisman and Nigel South, [ 47 ] green-cultural criminology attempts to integrate green and cultural criminology to explore the cultural meaning and significance of terms such as "environment" and "environmental crime". Green-cultural criminology departs from traditional criminological approaches by bringing attention to social harms and social consequences. [ 48 ]
Conservation criminology is complementary to green criminology. Originally proposed by an interdisciplinary group of scholars from the Department of Fisheries & Wildlife, School of Criminal Justice, and Environmental Science & Policy Program at Michigan State University, conservation criminology seeks to overcome limitations inherent to single-discipline science and provide practical guidance about on-the-ground reforms. [ 49 ] [ 50 ] Conservation criminology is an interdisciplinary and applied paradigm for understanding programs and policies associated with global conservation risks. By integrating natural resources management, risk and decision science, and criminology, conservation criminology-based approaches ideally result in improved environmental resilience, biodiversity conservation, and secure human livelihoods. As an interdisciplinary science, conservation criminology requires the constant and creative combination of theories, methods, and techniques from diverse disciplines throughout the entire processes of research, practice, education, and policy. Thinking about the interdisciplinary nature of conservation criminology can be quite exciting but does require patience and understanding of the different languages, epistemologies and ontologies of the core disciplines. Conservation criminology has been extensively applied to extralegal exploitation of natural resources such as wildlife poaching in Namibia [ 51 ] and Madagascar [ 52 ] corruption in conservation, [ 53 ] e-waste , [ 54 ] and general noncompliance with conservation rules. [ 55 ] By relying on multiple disciplines, conservation criminology leapfrogs this ideal; it promotes thinking about second- and third-order consequences of risks, not just isolated trends.
Media images of eco-crime can portray racism. [ 56 ] Photography is a powerful tool for shaping perspective and interpretation when representing eco-crime. The blackness of an eco-crime image, whether in the background, in the silhouettes of people at the site, or in a racially loaded title, can racialize the community where the crime occurs and create a symbolic association between green crime and blackness. [ 56 ] Reading race through an image is one useful approach for seeing how racism is pictured in images of eco-crime. [ 56 ] Moreover, the meaning of "green" is also diluted by the media. [ 57 ] Advertising tends to invoke "go green" language to sell products even when those products are neither sustainable nor environmentally friendly. [ 57 ] This practice of boosting sales by co-opting the "go green" movement is called ' greenwashing '. [ 57 ] Criminologists and the media should study how the media portrays eco-crime, in order to provide information free from gender and racial bias and to pay attention to green offenders (e.g. corporations which violate environmental laws). [ 56 ] [ 58 ]
It is often noted that green criminology is interdisciplinary and as a result, lacks its own unique theory or any preferred theoretical approach. Moreover, significant portions of the green criminological literature are qualitative and descriptive, and those studies have generally not proposed a unique or unifying theory. Despite this general lack of a singular theory, some of the approaches noted above indicate certain theoretical preferences. For example, as noted, the political economic approach to green criminology develops explanations of green crime, victimization and environmental justice consistent with several existing strains of political economic analysis. Beirne's approach takes an interdisciplinary view of theory with respect to various animal rights models and arguments. Clarke's rational choice models of animal poaching and trafficking build on the rational choice tradition found within the criminological literature. To date, these different theoretical approaches have not been examined as competing explanations for green crime and justice, a situation that is found with respect to orthodox or traditional criminological theories of street crime. | https://en.wikipedia.org/wiki/Green_criminology |
Green death is a solution used to test the resistance of metals and alloys to corrosion . It consists of a mixture of sulfuric acid , hydrochloric acid , iron(III) chloride and copper(II) chloride , and its boiling point is approximately 103 °C . Its typical chemical composition is given in the table hereafter: [ 1 ] [ 2 ] [ 3 ] [ 4 ]
The chemical composition of the green death solution allows it to achieve a particularly aggressive oxidizing chloride solution. [ 1 ] Indeed, among the four reagents, all are oxidizing species ( H 2 SO 4 , Fe 3+ , Cu 2+ ) except hydrochloric acid (HCl) in which the chlorine atom is present in its lowest oxidation state as Cl − anion. The chloride anions, also added to the solution as counter-ions of iron(III) and copper(II) species, are very aggressive for the localized corrosion of metals and alloys as they induce severe pitting corrosion problems. The green death solution is also used to determine the critical pitting temperature (CPT) and the critical crevice temperature (CCT) of metals and alloys . [ citation needed ]
| https://en.wikipedia.org/wiki/Green_death |
Green engineering approaches the design of products and processes by applying financially and technologically feasible principles to achieve one or more of the following goals: (1) decrease in the amount of pollution that is generated by a construction or operation of a facility, (2) minimization of human population exposure to potential hazards (including reducing toxicity ), (3) improved uses of matter and energy throughout the life cycle of the product and processes, and (4) maintaining economic efficiency and viability. [ 1 ] Green engineering can be an overarching framework for all design disciplines.
The concept of green engineering began between 1966 and 1970 within the Organization for Economic Cooperation and Development under the name "The Ten Ecological Commandments for Earth Citizens". [ 2 ] The idea was expressed visually as a cycle starting with the first commandment and ending with the tenth:
The idea was then presented by Peter Menke-Glückert at the United Nations Educational, Scientific, and Cultural Conference at Paris in 1968. These principles are similar to the Principles of Green Engineering in that each individual has an intrinsic responsibility to uphold these values. The Ten Ecological Commandments for Earth Citizens is thought by Dr. Płotka-Wasylka to have influenced The Principles of Green Engineering, which has been said to imply that all engineers have a duty to uphold sustainable values and practices when creating new processes.
Green engineering is a part of a larger push for sustainable practices in the creation of products such as chemical compounds. This movement is more widely known as green chemistry , and has been headed since 1991 by Paul Anastas and John C. Warner . Green chemistry, being older than green engineering, is a more researched field of study and began in 1991 with the creation of the 12 Principles of Green Chemistry.
On May 19, 2003, Paul Anastas, along with his future wife Julie Zimmerman, created the 12 Principles of Green Engineering. This expanded upon the 12 Principles of Green Chemistry to include not only guidelines for what an environmentally conscious chemical should be in theory, but also what steps should be followed to create an environmentally conscious alternative to the chemical. [ 3 ] Environmentally conscious thought can be applied to engineering disciplines such as civil and mechanical engineering when considering practices with negative environmental impacts, such as concrete hydration . These principles were still centered around chemical processes, with about half pertaining to engineers. [ 4 ] There are many ways that the 12 Principles of Green Chemistry and the 12 Principles of Green Engineering interact, referred to by Tse-Lun Chen et al. as "cross connections". Each Principle of Green Engineering has one or more corresponding "cross connections" to Principles of Green Chemistry. For example, principle 1 of green engineering is "Inherent Rather than Circumstantial", which has cross connections to principles 1, 3, and 8 of green chemistry. [ 5 ]
On May 19, 2003, during a conference at the Sandestin Resort in Florida, a group consisting of about 65 chemists, engineers, and government officials met to create a narrowed down set of green principles relating to engineers and engineering. After 4 days of debating and proposals, the Sandestin Declaration was created. [ 6 ] This declaration established the 9 Principles of Green Engineering, which narrowed down the focus to processes engineers can abide by, with a focus on designing processes and products with the future in mind. The resulting 9 Principles were later supported and recognized by The U.S. Environmental Protection Agency , National Science Foundation , Department of Energy (Los Alamos National Laboratory) , and the ACS Green Chemistry institute . [ 6 ]
Green engineering follows nine guiding principles:
In 2003, The American Chemical Society introduced a new list of twelve principles:
Many engineering disciplines engage in green engineering. This includes sustainable design , life cycle analysis (LCA), pollution prevention, design for the environment (DfE), design for disassembly (DfD), and design for recycling (DfR). As such, green engineering is a subset of sustainable engineering . [ 10 ] Green engineering involves four basic approaches to improve processes and products to make them more efficient from an environmental standpoint. [ 11 ]
Green engineering approaches design from a systematic perspective which integrates numerous professional disciplines. In addition to all engineering disciplines, green engineering includes land use planning, architecture, landscape architecture, and other design fields, as well as the social sciences (e.g. to determine how various groups of people use products and services). Green engineers are concerned with space, the sense of place, viewing the site map as a set of fluxes across the boundary, and considering the combinations of these systems over larger regions, e.g. urban areas.
The life cycle analysis is an important green engineering tool, which provides a holistic view of the entirety of a product, process or activity, encompassing raw materials, manufacturing, transportation, distribution, use, maintenance, recycling, and final disposal. Assessing its life cycle should yield a complete picture of the product. The first step in a life cycle assessment is to gather data on the flow of a material through an identifiable society. Once the quantities of various components of such a flow are known, the important functions and impacts of each step in the production, manufacture, use, and recovery/disposal are estimated. In sustainable design, engineers must optimize for variables that give the best performance in temporal frames. [ 12 ]
The system approach employed in green engineering is similar to value engineering (VE). Daniel A. Vallero has described green engineering as a form of VE because both systems require that all elements and linkages within the overall project be considered to enhance the value of the project. Every component and step of the system must be challenged. Overall value is determined not only by a project's cost-effectiveness, but also by other values, including environmental and public health factors. Thus, the broader sense of VE is compatible with and can be identical to green engineering, since VE is aimed at effectiveness, not just efficiency, i.e. a project is designed to achieve multiple objectives, without sacrificing any important values. Efficiency is an engineering and thermodynamic term for the ratio of useful output to input of energy and mass within a system. As the ratio approaches 100%, the system becomes more efficient. Effectiveness requires that efficiencies be met for each component, but also that the integration of components lead to an effective, multiple value-based design. [ 13 ] Green engineering is also a type of concurrent engineering , since tasks must be parallelized to achieve multiple design objectives.
An ionic liquid can be described simply as a salt in a liquid state, exhibiting triboelectric properties which allow it to be used as a lubricant. Traditional solvents are composed of oils or synthetic compounds, like fluorocarbons which, when airborne, can act as a greenhouse gas . Ionic liquids are nonvolatile and have high thermal stability and, as Lei states, "They present a “greener” alternative to standard solvents". [ 14 ] Ionic liquids can also be used for carbon dioxide capture or as a component in bioethanol production in the gasification process. [ 3 ]
Ceramic tile production is typically an energy and water-intensive process. Ceramic tile milling is similar to cement milling for concrete, where there is both a dry and wet milling process. Wet milling typically produces a higher quality tile at a higher cost of energy and water, while dry milling would produce a lower quality material at a lower cost. [ 3 ] | https://en.wikipedia.org/wiki/Green_engineering |
The green fluorescent protein ( GFP ) is a protein that exhibits green fluorescence when exposed to light in the blue to ultraviolet range. [ 2 ] [ 3 ] The label GFP traditionally refers to the protein first isolated from the jellyfish Aequorea victoria and is sometimes called avGFP . However, GFPs have been found in other organisms including corals , sea anemones , zoanthids , copepods and lancelets . [ 4 ]
The GFP from A. victoria has a major excitation peak at a wavelength of 395 nm and a minor one at 475 nm. Its emission peak is at 509 nm, which is in the lower green portion of the visible spectrum . The fluorescence quantum yield (QY) of GFP is 0.79. The GFP from the sea pansy ( Renilla reniformis ) has a single major excitation peak at 498 nm. GFP makes for an excellent tool in many forms of biology due to its ability to form an internal chromophore without requiring any accessory cofactors , gene products, or enzymes / substrates other than molecular oxygen. [ 5 ] [ 6 ]
In cell and molecular biology , the GFP gene is frequently used as a reporter of expression . [ 7 ] It has been used in modified forms to make biosensors , and many animals have been created that express GFP, which demonstrates a proof of concept that a gene can be expressed throughout a given organism, in selected organs, or in cells of interest. GFP can be introduced into animals or other species through transgenic techniques , and maintained in their genome and that of their offspring. GFP has been expressed in many species, including bacteria, yeasts, fungi, fish and mammals, including in human cells. Scientists Roger Y. Tsien , Osamu Shimomura , and Martin Chalfie were awarded the 2008 Nobel Prize in Chemistry on 10 October 2008 for their discovery and development of the green fluorescent protein.
Most commercially available genes for GFP and similar fluorescent proteins are around 730 base-pairs long. The natural protein has 238 amino acids. Its molecular mass is 27 kDa. [ 8 ] Therefore, fusing the GFP gene to the gene of a protein of interest can significantly increase the protein's size and molecular mass, and can impair the protein's natural function or change its location or trajectory of transport within the cell. [ 9 ]
In the 1960s and 1970s, GFP, along with the separate luminescent protein aequorin (an enzyme that catalyzes the breakdown of luciferin , releasing light), was first purified from the jellyfish Aequorea victoria and its properties studied by Osamu Shimomura . [ 10 ] In A. victoria , GFP fluorescence occurs when aequorin interacts with Ca 2+ ions, inducing a blue glow. Some of this luminescent energy is transferred to the GFP, shifting the overall color towards green. [ 11 ] However, its utility as a tool for molecular biologists did not begin to be realized until 1992 when Douglas Prasher reported the cloning and nucleotide sequence of wtGFP in Gene . [ 12 ] The funding for this project had run out, so Prasher sent cDNA samples to several labs. The lab of Martin Chalfie expressed the coding sequence of wtGFP, with the first few amino acids deleted, in heterologous cells of E. coli and C. elegans , publishing the results in Science in 1994. [ 13 ] Frederick Tsuji's lab independently reported the expression of the recombinant protein one month later. [ 14 ] Remarkably, the GFP molecule folded and was fluorescent at room temperature, without the need for exogenous cofactors specific to the jellyfish. Although this near-wtGFP was fluorescent, it had several drawbacks, including dual peaked excitation spectra, pH sensitivity, chloride sensitivity, poor fluorescence quantum yield, poor photostability and poor folding at 37 °C (99 °F).
The first reported crystal structure of a GFP was that of the S65T mutant by the Remington group in Science in 1996. [ 15 ] One month later, the Phillips group independently reported the wild-type GFP structure in Nature Biotechnology . [ 16 ] These crystal structures provided vital background on chromophore formation and neighboring residue interactions. Researchers have modified these residues by directed and random mutagenesis to produce the wide variety of GFP derivatives in use today. Further research into GFP has shown that it is resistant to detergents, proteases, guanidinium chloride (GdmCl) treatments, and drastic temperature changes. [ 17 ]
Due to the potential for widespread usage and the evolving needs of researchers, many different mutants of GFP have been engineered. [ 18 ] [ 19 ] The first major improvement was a single point mutation (S65T) reported in 1995 in Nature by Roger Tsien . [ 20 ] This mutation dramatically improved the spectral characteristics of GFP, resulting in increased fluorescence, photostability, and a shift of the major excitation peak to 488 nm, with the peak emission kept at 509 nm. This matched the spectral characteristics of commonly available FITC filter sets, increasing the practicality of use by the general researcher. A 37 °C folding efficiency (F64L) point mutant to this scaffold, yielding enhanced GFP (EGFP), was discovered in 1995 by the laboratories of Thastrup [ 21 ] and Falkow. [ 22 ] EGFP allowed the practical use of GFPs in mammalian cells. EGFP has an extinction coefficient (denoted ε) of 55,000 M −1 cm −1 . [ 23 ] The fluorescence quantum yield (QY) of EGFP is 0.60. The relative brightness, expressed as ε•QY, is 33,000 M −1 cm −1 .
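Since relative brightness is simply the product ε·QY, the figure quoted above can be reproduced with a one-line calculation; the sketch below uses only the EGFP values given in this section.

```python
def relative_brightness(extinction_coefficient, quantum_yield):
    """Relative brightness = molar extinction coefficient (M^-1 cm^-1) x fluorescence quantum yield."""
    return extinction_coefficient * quantum_yield

# EGFP values quoted above: epsilon = 55,000 M^-1 cm^-1, QY = 0.60.
print(relative_brightness(55_000, 0.60))  # 33000.0 M^-1 cm^-1
```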
Superfolder GFP (sfGFP), a series of mutations that allow GFP to rapidly fold and mature even when fused to poorly folding peptides, was reported in 2006. [ 24 ]
Many other mutations have been made, including color mutants; in particular, blue fluorescent protein (EBFP, EBFP2, Azurite, mKalama1), cyan fluorescent protein (ECFP, Cerulean, CyPet, mTurquoise2), and yellow fluorescent protein derivatives (YFP, Citrine, Venus, YPet). BFP derivatives (except mKalama1) contain the Y66H substitution. They exhibit a broad absorption band in the ultraviolet centered close to 380 nanometers and an emission maximum at 448 nanometers. A green fluorescent protein mutant (BFPms1) that preferentially binds Zn(II) and Cu(II) has been developed. BFPms1 has several important mutations, including the BFP chromophore substitution (Y66H), Y145F for higher quantum yield, H148G for creating a hole into the beta-barrel, and several other mutations that increase solubility. Zn(II) binding increases fluorescence intensity, while Cu(II) binding quenches fluorescence and shifts the absorbance maximum from 379 to 444 nm. Therefore, it can be used as a Zn biosensor. [ 25 ]
The critical mutation in cyan derivatives is the Y66W substitution, which causes the chromophore to form with an indole rather than phenol component. Several additional compensatory mutations in the surrounding barrel are required to restore brightness to this modified chromophore due to the increased bulk of the indole group. In ECFP and Cerulean, the N-terminal half of the seventh strand exhibits two conformations. These conformations both have a complex set of van der Waals interactions with the chromophore. The Y145A and H148D mutations in Cerulean stabilize these interactions and allow the chromophore to be more planar, better packed, and less prone to collisional quenching. [ 26 ]
Additional site-directed random mutagenesis in combination with fluorescence lifetime based screening has further stabilized the seventh β-strand resulting in a bright variant, mTurquoise2, with a quantum yield (QY) of 0.93. [ 27 ] The red-shifted wavelength of the YFP derivatives is accomplished by the T203Y mutation and is due to π-electron stacking interactions between the substituted tyrosine residue and the chromophore. [ 3 ] These two classes of spectral variants are often employed for Förster resonance energy transfer (FRET) experiments. Genetically encoded FRET reporters sensitive to cell signaling molecules, such as calcium or glutamate, protein phosphorylation state, protein complementation, receptor dimerization, and other processes provide highly specific optical readouts of cell activity in real time.
Semirational mutagenesis of a number of residues led to pH-sensitive mutants known as pHluorins, and later super-ecliptic pHluorins. By exploiting the rapid change in pH upon synaptic vesicle fusion, pHluorins tagged to synaptobrevin have been used to visualize synaptic activity in neurons. [ 28 ]
Redox sensitive GFP ( roGFP ) was engineered by introduction of cysteines into the beta barrel structure. The redox state of the cysteines determines the fluorescent properties of roGFP . [ 29 ]
The nomenclature of modified GFPs is often confusing due to overlapping mapping of several GFP versions onto a single name. For example, mGFP often refers to a GFP with an N-terminal palmitoylation that causes the GFP to bind to cell membranes . However, the same term is also used to refer to monomeric GFP, which is often achieved by the dimer interface breaking A206K mutation. [ 30 ] Wild-type GFP has a weak dimerization tendency at concentrations above 5 mg/mL. mGFP also stands for "modified GFP," which has been optimized through amino acid exchange for stable expression in plant cells.
The purpose of both the (primary) bioluminescence (from aequorin 's action on luciferin) and the (secondary) fluorescence of GFP in jellyfish is unknown. GFP is co-expressed with aequorin in small granules around the rim of the jellyfish bell. The secondary excitation peak (480 nm) of GFP does absorb some of the blue emission of aequorin, giving the bioluminescence a more green hue. The serine 65 residue of the GFP chromophore is responsible for the dual-peaked excitation spectra of wild-type GFP. It is conserved in all three GFP isoforms originally cloned by Prasher. Nearly all mutations of this residue consolidate the excitation spectra to a single peak at either 395 nm or 480 nm. The precise mechanism of this sensitivity is complex, but, it seems, involves donation of a hydrogen from serine 65 to glutamate 222, which influences chromophore ionization. [ 3 ] Since a single mutation can dramatically enhance the 480 nm excitation peak, making GFP a much more efficient partner of aequorin, A. victoria appears to evolutionarily prefer the less-efficient, dual-peaked excitation spectrum. Roger Tsien has speculated that varying hydrostatic pressure with depth may affect serine 65's ability to donate a hydrogen to the chromophore and shift the ratio of the two excitation peaks. Thus, the jellyfish may change the color of its bioluminescence with depth. However, a collapse in the population of jellyfish in Friday Harbor , where GFP was originally discovered, has hampered further study of the role of GFP in the jellyfish's natural environment.
Most species of lancelet are known to produce GFP in various regions of their body. [ 31 ] Unlike A. victoria , lancelets do not produce their own blue light, and the origin of their endogenous GFP is still unknown. Some speculate that it attracts plankton towards the mouth of the lancelet, serving as a passive hunting mechanism. It may also serve as a photoprotective agent in the larvae, preventing damage caused by high-intensity blue light by converting it into lower-intensity green light. However, these theories have not been tested.
GFP-like proteins have been found in multiple species of marine copepods , particularly from the Pontellidae and Aetideidae families. [ 32 ] GFP isolated from Pontella mimocerami has shown high levels of brightness with a quantum yield of 0.92, making them nearly two-fold brighter than the commonly used EGFP isolated from A. victoria. [ 33 ]
There are many GFP-like proteins that, despite being in the same protein family as GFP, are not directly derived from Aequorea victoria . These include dsRed , eqFP611, Dronpa, TagRFPs, KFP, EosFP/IrisFP, Dendra, and so on. Having been developed from proteins in different organisms, these proteins can sometimes display unanticipated approaches to chromophore formation. Some of these, such as KFP, are developed from naturally non- or weakly-fluorescent proteins to be greatly improved upon by mutagenesis. [ 34 ] When GFP-like barrels of different spectra characteristics are used, the excitation spectra of one chromophore can be used to power another chromophore (FRET), allowing for conversion between wavelengths of light. [ 35 ]
FMN-binding fluorescent proteins (FbFPs) were developed in 2007 and are a class of small (11–16 kDa), oxygen-independent fluorescent proteins that are derived from blue-light receptors. They are intended especially for the use under anaerobic or hypoxic conditions, since the formation and binding of the Flavin chromophore does not require molecular oxygen, as it is the case with the synthesis of the GFP chromophore. [ 36 ]
Fluorescent proteins with other chromophores, such as UnaG with bilirubin, can display unique properties like red-shifted emission above 600 nm or photoconversion from a green-emitting state to a red-emitting state. They can have excitation and emission wavelengths far enough apart to achieve conversion between red and green light.
A new class of fluorescent protein was evolved from a cyanobacterial ( Trichodesmium erythraeum ) phycobiliprotein , α- allophycocyanin , and named small ultra red fluorescent protein ( smURFP ) in 2016. smURFP autocatalytically self-incorporates the chromophore biliverdin without the need of an external protein , known as a lyase . [ 37 ] [ 38 ] Jellyfish - and coral -derived GFP-like proteins require oxygen and produce a stoichiometric amount of hydrogen peroxide upon chromophore formation. [ 39 ] smURFP does not require oxygen or produce hydrogen peroxide and uses the chromophore , biliverdin . smURFP has a large extinction coefficient (180,000 M −1 cm −1 ) and a modest quantum yield (0.20), which gives it biophysical brightness comparable to eGFP and ~2-fold brighter than most red or far-red fluorescent proteins derived from coral . smURFP spectral properties are similar to those of the organic dye Cy5 . [ 37 ] [ 40 ]
Reviews on new classes of fluorescent proteins and applications can be found in the cited reviews. [ 41 ] [ 42 ]
GFP has a beta barrel structure consisting of eleven β-strands with a pleated sheet arrangement, with an alpha helix containing the covalently bonded chromophore 4-( p -hydroxybenzylidene)imidazolidin-5-one (HBI) running through the center. [ 3 ] [ 15 ] [ 16 ] Five shorter alpha helices form caps on the ends of the structure. The beta barrel structure is a nearly perfect cylinder, 42Å long and 24Å in diameter (some studies have reported a diameter of 30Å [ 17 ] ), [ 15 ] creating what is referred to as a "β-can" formation, which is unique to the GFP-like family. [ 16 ] HBI, the spontaneously modified form of the tripeptide Ser65–Tyr66–Gly67, is nonfluorescent in the absence of the properly folded GFP scaffold and exists mainly in the un-ionized phenol form in wtGFP. [ 43 ] Inward-facing sidechains of the barrel induce specific cyclization reactions in Ser65–Tyr66–Gly67 that induce ionization of HBI to the phenolate form and chromophore formation. This process of post-translational modification is referred to as maturation . [ 44 ] The hydrogen-bonding network and electron-stacking interactions with these sidechains influence the color, intensity and photostability of GFP and its numerous derivatives. [ 45 ] The tightly packed nature of the barrel excludes solvent molecules, protecting the chromophore fluorescence from quenching by water. In addition to the auto-cyclization of the Ser65-Tyr66-Gly67, a 1,2-dehydrogenation reaction occurs at the Tyr66 residue. [ 17 ] Besides the three residues that form the chromophore, residues such as Gln94, Arg96, His148, Thr203, and Glu222 all act as stabilizers. The residues of Gln94, Arg96, and His148 are able to stabilize by delocalizing the chromophore charge. Arg96 is the most important stabilizing residue due to the fact that it prompts the necessary structural realignments that are necessary from the HBI ring to occur. Any mutation to the Arg96 residue would result in a decrease in the development rate of the chromophore because proper electrostatic and steric interactions would be lost. Tyr66 is the recipient of hydrogen bonds and does not ionize in order to produce favorable electrostatics. [ 46 ]
Blue fluorescent protein (BFP) is the blue variant of green fluorescent protein (GFP) and has a very similar structure. Two substitution mutations in the amino acid sequence change its fluorescence from green to blue. The first mutation occurs inside the chromophore at position 66, which changes a tyrosine to a histidine. The other mutation is at the tyrosine at position 145, which is changed to phenylalanine. The autocatalytic cyclization and oxidation of the serine, tyrosine, and glycine residues at positions 65–67 form the green fluorescent chromophore. When the tyrosine in the chromophore is substituted by a histidine, the folding structure of the protein and its emission spectrum change. The Y145F mutation is also added to increase the stability of the protein as well as to intensify the fluorescence. Together, these mutations change GFP into BFP.
Mechanistically, chromophore maturation involves base-mediated cyclization followed by dehydration and oxidation. The reaction of 7a to 8 involves the formation of an enamine from the imine, while in the reaction of 7b to 9 a proton is abstracted. [ 47 ]
The reactions are catalyzed by residues Glu222 and Arg96. [ 47 ] [ 48 ] An analogous mechanism is also possible with threonine in place of Ser65.
Green fluorescent protein may be used as a reporter gene . [ 49 ] [ 50 ]
For example, GFP can be used as a reporter for environmental toxicity levels. The protein has been shown to be an effective way to measure the toxicity of various chemicals including ethanol, p -formaldehyde, phenol, triclosan, and paraben. GFP is well suited as a reporter protein because it has no effect on the host when introduced to the host's cellular environment. Due to this, no external visualization stain, ATP, or cofactors are needed. To gauge the effect of pollutants on the host cell, both the fluorescence and the cellular density of the host cells were measured. Results from the study conducted by Song, Kim, & Seo (2016) showed that both fluorescence and cellular density decreased as pollutant levels increased, indicating that cellular activity had decreased. More research into this specific application is needed to determine the mechanism by which GFP acts as a pollutant marker. [ 51 ] Similar results have been observed in zebrafish: zebrafish injected with GFP were approximately twenty times more sensitive in reporting cellular stress than zebrafish that were not injected with GFP. [ 52 ]
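One plausible way to combine the two readouts described above is to normalize GFP fluorescence to cell density and compare the result with an untreated control. The sketch below is purely illustrative: the concentrations, fluorescence values, and optical densities are hypothetical and are not data from the cited study.

```python
# Hypothetical readout for a GFP-based toxicity assay: fluorescence is
# normalized to cell density (e.g. OD600) and compared to an untreated control.
measurements = [
    # (pollutant concentration, GFP fluorescence in a.u., cell density OD600)
    (0.0, 12000, 0.80),   # untreated control
    (0.1, 10500, 0.74),
    (0.5,  7200, 0.55),
    (1.0,  3100, 0.30),
]

control_ratio = measurements[0][1] / measurements[0][2]

for conc, fluorescence, od in measurements:
    ratio = fluorescence / od          # fluorescence per unit cell density
    relative = ratio / control_ratio   # 1.0 means no measurable effect
    print(f"conc={conc:>4}: relative signal = {relative:.2f}")
```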
The biggest advantage of GFP is that it can be heritable, depending on how it was introduced, allowing for continued study of the cells and tissues in which it is expressed. Visualizing GFP is noninvasive, requiring only illumination with blue light. GFP alone does not interfere with biological processes, but when it is fused to proteins of interest, careful design of linkers is required to maintain the function of the protein of interest. Moreover, if a monomeric variant is used, it is able to diffuse readily throughout cells. [ 53 ]
The availability of GFP and its derivatives has thoroughly redefined fluorescence microscopy and the way it is used in cell biology and other biological disciplines. [ 54 ] While most small fluorescent molecules such as FITC (fluorescein isothiocyanate) are strongly phototoxic when used in live cells, fluorescent proteins such as GFP are usually much less harmful when illuminated in living cells. This has triggered the development of highly automated live-cell fluorescence microscopy systems, which can be used to observe cells over time expressing one or more proteins tagged with fluorescent proteins.
There are many techniques to utilize GFP in a live cell imaging experiment. The most direct way of utilizing GFP is to attach it directly to a protein of interest. For example, GFP can be included in a plasmid expressing other genes to indicate a successful transfection of a gene of interest. Another method is to use a GFP that contains a mutation where the fluorescence will change from green to yellow over time, which is referred to as a fluorescent timer. With the fluorescent timer, researchers can study the state of protein production, such as recently activated, continuously activated, or recently deactivated, based on the color reported by the fluorescent protein. [ 55 ] In yet another example, scientists have modified GFP to become active only after exposure to irradiation, giving researchers a tool to selectively activate certain portions of a cell and observe where proteins tagged with the GFP move from the starting location. [ 56 ] These are only a few examples in a burgeoning field of fluorescence microscopy, and a more complete review of biosensors utilizing GFP and other fluorescent proteins can be found in the cited review. [ 57 ]
For example, GFP has been widely used in labelling the spermatozoa of various organisms for identification purposes, as in Drosophila melanogaster , where expression of GFP can be used as a marker for a particular characteristic. GFP can also be expressed in different structures, enabling morphological distinction. In such cases, the gene for the production of GFP is incorporated into the genome of the organism in the region of the DNA that codes for the target proteins and that is controlled by the same regulatory sequence ; that is, the gene's regulatory sequence now controls the production of GFP, in addition to the tagged protein(s). In cells where the gene is expressed, and the tagged proteins are produced, GFP is produced at the same time. Thus, only those cells in which the tagged gene is expressed, or the target proteins are produced, will fluoresce when observed under fluorescence microscopy. Analysis of such time-lapse movies has redefined the understanding of many biological processes including protein folding, protein transport, and RNA dynamics, which in the past had been studied using fixed (i.e., dead) material. Obtained data are also used to calibrate mathematical models of intracellular systems and to estimate rates of gene expression. [ 58 ] Similarly, GFP can be used as an indicator of protein expression in heterologous systems. In this scenario, fusion proteins containing GFP are introduced indirectly, using RNA of the construct, or directly, with the tagged protein itself. This method is useful for studying structural and functional characteristics of the tagged protein on a macromolecular or single-molecule scale with fluorescence microscopy.
The Vertico SMI microscope using the SPDM Phymod technology exploits the so-called "reversible photobleaching" effect of fluorescent dyes like GFP and its derivatives to localize them as single molecules at an optical resolution of 10 nm. This can also be performed as a co-localization of two GFP derivatives (2CLM). [ 59 ]
Another powerful use of GFP is to express the protein in small sets of specific cells. This allows researchers to optically detect specific types of cells in vitro (in a dish), or even in vivo (in the living organism). [ 60 ] GFP is considered to be a reliable reporter of gene expression in eukaryotic cells when the fluorescence is measured by flow cytometry. [ 61 ] Genetically combining several spectral variants of GFP is a useful trick for the analysis of brain circuitry ( Brainbow ). [ 62 ] Other interesting uses of fluorescent proteins in the literature include using FPs as sensors of neuron membrane potential , [ 63 ] tracking of AMPA receptors on cell membranes, [ 64 ] and following viral entry and the infection of individual influenza viruses and lentiviruses, [ 65 ] [ 66 ] among others.
It has also been found that new lines of transgenic GFP rats can be relevant for gene therapy as well as regenerative medicine. [ 67 ] By using "high-expresser" GFP, transgenic rats display high expression in most tissues, and many cells that have not been characterized or have been only poorly characterized in previous GFP-transgenic rats.
GFP has been shown to be useful in cryobiology as a viability assay . The correlation of viability as measured by trypan blue assays was 0.97. [ 68 ] Another application is the use of GFP co-transfection as an internal control for transfection efficiency in mammalian cells. [ 69 ]
A novel possible use of GFP is as a sensitive monitor of intracellular processes, via an eGFP laser system made from a human embryonic kidney cell line. The first engineered living laser was made by placing an eGFP-expressing cell inside a reflective optical cavity and hitting it with pulses of blue light. At a certain pulse threshold, the eGFP's optical output becomes brighter and completely uniform in color: pure green with a wavelength of 516 nm. Before being emitted as laser light, the light bounces back and forth within the resonator cavity and passes through the cell numerous times. By studying the changes in optical activity, researchers may better understand cellular processes. [ 70 ] [ 71 ]
GFP is used widely in cancer research to label and track cancer cells. GFP-labelled cancer cells have been used to model metastasis, the process by which cancer cells spread to distant organs. [ 72 ]
GFP can be used to analyse the colocalization of proteins. This is achieved by "splitting" the protein into two fragments which are able to self-assemble, and then fusing each of these to the two proteins of interest. Alone, these incomplete GFP fragments are unable to fluoresce. However, if the two proteins of interest colocalize, then the two GFP fragments assemble together to form a GFP-like structure which is able to fluoresce. Therefore, by measuring the level of fluorescence it is possible to determine whether the two proteins of interest colocalize. [ 73 ]
Macro-scale biological processes, such as the spread of virus infections, can be followed using GFP labeling. [ 74 ] In the past, mutagenic ultraviolet light (UV) was used to illuminate living organisms (e.g., see [ 75 ] ) to detect and photograph GFP expression. Recently, a technique using non-mutagenic LED lights [ 76 ] has been developed for macro-photography. [ 77 ] The technique uses an epifluorescence camera attachment [ 78 ] based on the same principle used in the construction of epifluorescence microscopes .
Alba , a green-fluorescent rabbit, was created by a French laboratory commissioned by Eduardo Kac using GFP for purposes of art and social commentary. [ 79 ] The US company Yorktown Technologies markets to aquarium shops green fluorescent zebrafish ( GloFish ) that were initially developed to detect pollution in waterways. NeonPets, a US-based company, has marketed green fluorescent mice to the pet industry as NeonMice. [ 80 ] Green fluorescent pigs, known as Noels, were bred by a group of researchers led by Wu Shinn-Chih at the Department of Animal Science and Technology at National Taiwan University . [ 81 ] A Japanese-American team created green-fluorescent cats as proof of concept to use them potentially as model organisms for diseases, particularly HIV . [ 82 ] In 2009 a South Korean team from Seoul National University bred the first transgenic beagles with fibroblast cells from sea anemones. The dogs give off a red fluorescent light, and they are meant to allow scientists to study the genes that cause human diseases like narcolepsy and blindness . [ 83 ]
Julian Voss-Andreae , a German-born artist specializing in "protein sculptures," [ 84 ] created sculptures based on the structure of GFP, including the 1.70 metres (5 feet 7 inches) tall "Green Fluorescent Protein" (2004) [ 85 ] and the 1.40 metres (4 feet 7 inches) tall "Steel Jellyfish" (2006). The latter sculpture is located at the place of GFP's discovery by Shimomura in 1962, the University of Washington 's Friday Harbor Laboratories . [ 86 ]
Green infrastructure or blue-green infrastructure refers to a network that provides the “ingredients” for solving urban and climatic challenges by building with nature. [ 1 ] The main components of this approach include stormwater management, climate adaptation , the reduction of heat stress , increasing biodiversity , food production , better air quality , sustainable energy production, clean water, and healthy soils , as well as more human centered functions, such as increased quality of life through recreation and the provision of shade and shelter in and around towns and cities. [ 2 ] [ 3 ] Green infrastructure also serves to provide an ecological framework for social, economic, and environmental health of the surroundings. [ 4 ] More recently scholars and activists have also called for green infrastructure that promotes social inclusion and equity rather than reinforcing pre-existing structures of unequal access to nature-based services. [ 5 ]
Green infrastructure is considered a subset of "Sustainable and Resilient Infrastructure", which is defined in standards such as SuRe , the Standard for Sustainable and Resilient Infrastructure. However, green infrastructure can also mean "low-carbon infrastructure" such as renewable energy infrastructure and public transportation systems (See "low-carbon infrastructure"). [ 6 ] Blue-green infrastructure can also be a component of " sustainable drainage systems " or " sustainable urban drainage systems " (SuDS or SUDS) designed to manage water quantity and quality, while providing improvements to biodiversity and amenity. [ 7 ]
Nature can be used to provide important services for communities by protecting them against flooding or excessive heat, or helping to improve air , soil and water quality . When nature is harnessed by people and used as an infrastructural system it is called “green infrastructure”. [ 8 ] Many such efforts take as their model prairies, where absorbent soil prevents runoff and vegetation filters out pollutants. [ 9 ] Green infrastructure occurs at all scales. It is most often associated with green stormwater management systems , which are smart and cost-effective. [ 10 ] However, green infrastructure acts as a supplemental component to other related concepts, and ultimately provides an ecological framework for social, economic, and environmental health of the surroundings. [ 11 ] [ 12 ]
"Blue infrastructure" refers to urban infrastructure relating to water. Blue infrastructure is commonly associated with green infrastructure in urban environments and may be referred to as "blue-green infrastructure" when being viewed in combination. Rivers, streams, ponds, and lakes may exist as natural features within cities, or be added to an urban environment as an aspect of its design. Coastal urban developments may also utilize pre-existing features of the coastline specifically employed in their design. Harbours, quays, piers, and other extensions of the urban environment are also often added to capture benefits associated with the marine environment. Blue infrastructure can support unique aquatic biodiversity in urban areas, including aquatic insects, [ 13 ] amphibians, [ 14 ] and water birds. [ 15 ] There may considerable co-benefits to the health and wellbeing of populations with access to blue spaces in the urban context. [ 16 ] [ 17 ] Accessible blue infrastructure in urban areas is also referred as to blue spaces .
Ideas for green urban structures began in the 1870s with concepts of urban farming and garden allotments. [ 1 ] Alternative terminology includes stormwater best management practices , source controls, and low impact development (LID) practices. [ 18 ]
Green infrastructure concepts originated in mid-1980s proposals for best management practices that would achieve more holistic stormwater quantity management goals for runoff volume reduction, erosion prevention, and aquifer recharge. [ 19 ] In 1987, amendments to the U.S. Clean Water Act introduced new provisions for management of diffuse pollutant sources from urban land uses, establishing the regulatory need for practices that, unlike conventional drainage infrastructure, managed runoff "at source." The U.S. Environmental Protection Agency (EPA) published its initial regulations for municipal separate storm sewer systems ("MS4") in 1990, requiring large MS4s to develop stormwater pollution prevention plans and implement "source control practices". [ 20 ] EPA's 1993 handbook, Urban Runoff Pollution Prevention and Control Planning , identified best management practices to consider in such plans, including vegetative controls, filtration practices and infiltration practices (trenches, porous pavement). [ 21 ] Regulations covering smaller municipalities were published in 1999. [ 22 ] MS4s serve over 80% of the US population and provide drainage for 4% of the land area. [ 23 ]
Green infrastructure is a concept that highlights the importance of the natural environment in decisions about land-use planning . [ 24 ] [ 25 ] However, the term does not have a widely recognized definition. [ 26 ] [ 27 ] Also known as “blue-green infrastructure”, [ 28 ] or “green-blue urban grids” [ 1 ] the terms are used by many design-, conservation- and planning-related disciplines and commonly feature stormwater management, climate adaptation and multifunctional green space.
The term "green infrastructure" is sometimes expanded to "multifunctional" green infrastructure. Multifunctionality in this context refers to the integration and interaction of different functions or activities on the same piece of land.
The EPA extended the concept of "green infrastructure" to apply to the management of stormwater runoff at the local level through the use of natural systems, or engineered systems that mimic natural systems, to treat polluted runoff . [ 29 ] These urban "green" best management practices contribute to the overall health of natural ecosystems, even though they are not central to the larger concept.
However, it is apparent that the term “blue-green infrastructure” is applied in an urban context and places a greater emphasis on the management of stormwater as an integral part of creating a sustainable, multifunctional urban environment. [ 28 ] At the building level, the term "blue-green architecture" is used, which implements the same principles on a smaller scale. The focus here is on building greening with water management from alternative water resources such as grey water and rainwater. [ 30 ]
Green infrastructure as a term did not appear until the early 1990s, although its underlying ideas had been used long before that. The first recorded use of the term was in a 1994 report by Buddy MacKay, chair of the Florida Greenways Commission, to Florida governor Lawton Chiles about a green infrastructure project undertaken in 1991: the Florida Greenways Project. [ 31 ] MacKay states, "Just as we carefully plan the infrastructure our communities need to support the people who live there—the roads, water and electricity—so must we begin to plan and manage Florida’s green infrastructure". [ 32 ]
Chinese literary gardens are an example of a sustainable lawn that showcased natural beauty in suburban areas. [ 33 ] These gardens, dating back to the Shang Dynasty (1600–1046 BC), were designed to allow native plant species to thrive in their natural conditions and appear untouched by humans. This created ecological havens within the city. [ 34 ]
Greece was an early adopter of the concept of green infrastructure with the invention of the Greek agora . Agoras were public meeting spaces built for social gatherings that allowed Greeks to converse in public. Many were built across Greece, and some incorporated nature as a design aspect, giving nature a space among the public. [ 35 ]
A common urban habitat, the lawn, consists of short grass and sometimes herbaceous plants. [ 36 ] While modern artificial lawns have been connected to a negative environmental impact, lawns in the past have been more sustainable, and they promoted biodiversity and the growth of native plants. These historical lawns are impacting lawn design today to create more sustainable ‘alternative lawns’. [ 34 ]
In medieval Europe, lawns rich with flowers and herbaceous plants, known as ‘flower meads’, are a good example of a more sustainable lawn. [ 34 ] The idea has been reused since. In the Edwardian era, lawns full of thyme, whose flowers attracted insects and pollinators, supported biodiversity. [ 37 ] A 20th-century take on this lawn, the ‘enamelled mead’, has been used in England and serves both aesthetic and stormwater-management purposes. [ 38 ] [ 39 ]
During the height of the Renaissance, public areas became more common in new cities and infrastructure. These areas were carefully selected and were often urban parks and gardens where the public could converse and relax. [ 35 ] Beyond social uses, urban parks and gardens were used to improve the aesthetics of the urban environment in which they were located. [ 35 ] Urban spaces also served environmental purposes, providing fresh air and reducing urban heating. [ 35 ]
Green infrastructure can be traced as far back as the 17th century in European society, beginning in France. [ 40 ] France used the presence of nature to provide social and spatial organization to its towns. [ 41 ] Originally, nature in cities was used to provide social areas to interact, and plants were grown in these spaces to provide food in close proximity to the inhabitants. [ 41 ] In this period, large open spaces were used to provide a calm setting that could give "sites of power with sites of sanctity" across France. [ 42 ] These sites were used by the French elites to bring rural country town house beauty to their new urban houses in a showcase of power and elaborate display of wealth. [ 42 ] The French implemented many different types of infrastructure throughout the 17th century that incorporated nature in some shape or form. Another example is the promenade, used by the French elites to flee the unhealthy living conditions of the cities and to avoid the filthy public areas available to the common folk. These areas were lush gardens with a wide variety of vegetation and foliage that kept the air clean for the wealthy while allowing them to relax away from the poorer members of French society. [ 42 ] Mathis states, "The first cours [or promenades] were established in the capital at the instigation of Marie de Medici : the Mail de l'Arsenal (1604) and above all the Allée du Cours-la-Reine (1616), 1300 mètres long and lined with elms, running along the Seine, from the Tuileries Garden to the high ground of Chaillot," establishing the use of nature as a symbol of power and achievement amongst French royalty and the common people at the time. [ 42 ]
Keeping and making cities green were at the forefront for city planners in France. They often incorporated design elements blending urbanism and nature, forming a relationship that showcased how the French grew alongside nature and often made it a key aspect of their expansion. [ 42 ]
In 18th century France, citizens were able to request to have old and battered city walls destroyed to make room for new gardens, vegetation sites, and green walkways. [ 42 ] This opened up new areas to the city landscape and incorporated greenery into the new areas where the walls were torn down. Along with this, the town hall as well as the city center were elaborately decorated with different types of vegetation and trees, especially rare and unique species that had been brought from other countries. Mathis goes on to state, "A French-style garden is linked to the town hall to make the view of it more sublime", showing the use of foliage as a way to impress and beautify French cities. [ 42 ]
In 1847, a speech by George Perkins Marsh called attention to negative human impacts such as deforestation. Marsh later wrote Man and Nature in 1864 based on his idea for conserving forests. [ 43 ] Around the same time, Henry David Thoreau's 1854 book Walden discussed preservation of nature and applied these ideas to urban planning saying, “I think every town should have a park,” and stated the “importance of preserving some portions of nature herself unimpaired.” [ 44 ] Frederick Law Olmsted , a landscape architect, agreed with these ideas and planned many parks, areas of preserved land, and scenic roads, and in 1887, the Emerald Necklace of Boston . The Emerald Necklace is a system of public parks linked by parkways that serves as a home to diverse wildlife and provides environmental benefits such as flood protection and water storage. [ 43 ]
In Europe, Ebenezer Howard led the garden city movement to balance development with nature. He planned agricultural greenbelts and wide, radiating boulevards surrounded by trees and shrubbery for Victoria, England. One of Howard's concepts was of the "marriage of town and country" to promote sustainable relationships between human society and nature through the planning of garden cities. [ 45 ]
The US government became more involved in conservation and land preservation in the late 1800s. This was seen in the 1864 legislation to preserve the Yosemite Valley as a California public park, and 8 years later, the United States’ first national park. [ 43 ]
Many industrial leaders in the 19th century had the goal of increasing worker's quality of life through quality sanitation and outdoor activity, which would in turn create increased productivity in the workforce. These ideas carried into the 20th century where efforts in green infrastructure were seen in industrial parks, integrated landscaping, and suburban gardens. [ 46 ]
The Anaconda Copper Mining Company was responsible for environmental damage in Montana, but a refinery in Great Falls saw this impact and used the surrounding land to create a green open space that was also used for recreation. This natural haven included a golf course, flower beds, picnic areas, a lily pond, and pedestrian paths. [ 46 ]
Proximity and access to water have been key factors in human settlement through history. [ 47 ] Water, along with the spaces around it, create a potential for transport, trade, and power generation. They also provide the human population with resources like recreation and tourism in addition to drinking water and food. Many of the world's largest cities are located near water sources, and networks of urban "blue infrastructure", such as canals, harbors and so forth, have been constructed to capture the benefits and minimize risks. Globally, cities are facing severe water uncertainties such as floods, droughts, and upstream activities on trans-boundary rivers. The increasing pressure, intensity, and speed of urbanization has led to the disappearance of any visible form of water infrastructure in most cities. [ 48 ] Urban coastal populations are growing, [ 49 ] and many cities have seen an extensive post-industrial transformation of canals, riversides, docks, etc. following changes in global trading patterns. The potential implications of such waterside regeneration in terms of public health have only recently been scientifically investigated. [ 17 ] A systematic review conducted in 2017 found consistent evidence of positive associations between exposure of people to blue space and mental health and physical activity. [ 50 ]
One-fifth of the world's population, 1.2 billion people, live in areas of water scarcity . Climate change and water-related disasters will place increasing demands on urban systems and will result in increased migration to urban areas. Cities require a very large input of freshwater and in turn have a huge impact on freshwater systems. Urban and industrial water use is projected to double by 2050. [ 51 ]
In 2010 the United Nations declared that access to clean water and sanitation is a human right. [ 52 ] New solutions for improving the sustainability of cities are being explored. Good urban water management is complex and requires not only water and wastewater infrastructure, but also pollution control and flood prevention. It requires coordination across many sectors and between different local authorities, as well as changes in governance, leading to more sustainable and equitable use of urban water resources. [ 51 ]
Urban forests are forests located in cities. They are an important component of urban green infrastructure systems. Urban forests use appropriate tree and vegetation species, instead of noxious and invasive kinds, which reduces the need for maintenance and irrigation. [ 53 ] In addition, native species provide aesthetic value while reducing cost. Diversity of plant species should also be considered in the design of urban forests to avoid monocultures ; this makes urban forests more durable and resilient to pests and other harms. [ 53 ]
Constructed wetlands are manmade wetlands , which work as a bio-filtration system. They contain wetland vegetation and are mostly built on uplands and floodplains . Constructed wetlands are built this way to avoid connection with, or damage to, natural wetlands and other aquatic resources. There are two main categories of constructed wetlands: subsurface flow systems and free water surface systems. Proper planning and operation can help avoid possible harm to the wetlands caused by alteration of natural hydrology and introduction of invasive species. [ 61 ]
Green roofs improve air and water quality while reducing energy costs. The implementation of green roofs in some regions has correlated with increased albedo, providing slightly cooler temperatures and thus lower energy consumption. [ 63 ] The plants and soil provide more green space and insulation on roofs. Green and blue roofs also help reduce city runoff by retaining rainfall, providing a potential solution for stormwater management in densely built urban areas. [ 64 ] A social benefit of green roofs is rooftop agriculture for residents. [ 42 ]
Green roofs also capture rain and sequester carbon pollution. Forty to eighty percent of the total volume of rain that falls on green roofs can be retained. [ 65 ] The water released from the roofs flows at a slow pace, reducing the amount of runoff entering the watershed at once.
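As a rough illustration of the 40–80% retention figure, the volume a green roof keeps out of the sewer during a single storm can be estimated by multiplying roof area, rainfall depth, and a retention fraction. The roof area and storm depth below are arbitrary example values, not figures from the article.

```python
# Rough estimate of rainfall retained by a green roof for a single storm.
# Retention fractions of 0.4-0.8 correspond to the 40-80% range cited above.
def retained_volume_liters(roof_area_m2: float, rainfall_mm: float,
                           retention_fraction: float) -> float:
    """1 mm of rain falling on 1 m^2 equals 1 liter of water."""
    return roof_area_m2 * rainfall_mm * retention_fraction

roof_area = 500.0     # m^2 (example roof)
storm_depth = 20.0    # mm of rain (example storm)

for fraction in (0.4, 0.8):
    volume = retained_volume_liters(roof_area, storm_depth, fraction)
    print(f"{fraction:.0%} retention: {volume:,.0f} L kept out of the sewer")
```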
Blue roofs , while not technically green infrastructure, collect and store rainfall, reducing the inrush of runoff water into sewer systems. Blue roofs use detention ponds, or detention basins , to collect rainfall before it is drained into waterways and sewers at a controlled rate. As well as saving energy by reducing cooling expenses, blue roofs reduce the urban heat island effect when coupled with reflective roofing material.
Rain gardens are a form of stormwater management using water capture. Rain gardens are shallow depressed areas in the landscape, planted with shrubs and plants, that collect rainwater from roofs or pavement and allow the stormwater to infiltrate slowly into the ground.
Ubiquitous lawn grass is not a solution for controlling runoff, so an alternative is required to reduce the highly toxic "first flush" runoff in urban and suburban areas and to slow the water down for infiltration. In residential applications, water runoff can be reduced by 30% with the use of rain gardens in the homeowner's yard. A size of roughly 150 to 300 sq. ft. is usual for a private residence, at a cost of about $5–$25 per square foot, depending on the plants used and the slope of the property. Native trees, shrubs, and herbaceous perennials of the wetland and riparian zones are the most useful for runoff detoxification. [ 66 ] [ 67 ]
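The sizing and cost figures above translate into a simple budget estimate: multiplying the low and high ends of the size range by the low and high ends of the per-square-foot cost gives a rough bracket. The sketch below uses only the ranges quoted in the text and is not a design tool.

```python
# Rough budget range for a residential rain garden using the figures above.
sizes_sqft = (150, 300)        # typical size range for a private residence
cost_per_sqft = (5.0, 25.0)    # installed cost range, depending on plants/slope

low_estimate = sizes_sqft[0] * cost_per_sqft[0]    # smallest, cheapest case
high_estimate = sizes_sqft[1] * cost_per_sqft[1]   # largest, most expensive case

print(f"Estimated cost range: ${low_estimate:,.0f} - ${high_estimate:,.0f}")
# 150 sq ft * $5/sq ft  = $750
# 300 sq ft * $25/sq ft = $7,500
```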
Downspout disconnection is a form of green infrastructure that separates roof downspouts from the sewer system and redirects roof water runoff into permeable surfaces. [ 29 ] It can be used for storing stormwater or allowing the water to penetrate the ground. Downspout disconnection is especially beneficial in cities with combined sewer systems. With high volumes of rain, downspouts on buildings can send 12 gallons of water a minute into the sewer system, which increases the risk of basement backups and sewer overflows. In attempts to reduce the amount of rainwater that enters the combined sewer systems, agencies such as the Milwaukee Metropolitan Sewerage District amended regulations that require downspout disconnection at residential areas. [ 68 ]
Bioswales are stormwater runoff systems providing an alternative to traditional storm sewers . Much like rain gardens, bioswales are vegetated or mulched channels commonly placed in long narrow spaces in urban areas. They absorb flows or carry stormwater runoff from heavy rains into sewer channels or directly to surface waters. [ 69 ] Vegetated bioswales infiltrate, slow down, and filter stormwater flows and are most beneficial along streets and parking lots. [ 29 ]
The Trust for Public Land is working in partnership with the City of Los Angeles' Community Redevelopment Agency, Bureau of Sanitation, the University of Southern California 's Center for Sustainable Cities, and Jefferson High School to convert the existing 900 miles of alleys in the city to green alleys. [ 70 ] The concept is to re-engineer existing alleyways to reflect more light to mitigate the heat island effect, capture stormwater, and make the space beautiful and usable by the neighboring communities. [ 70 ] The first alley, completed in 2015, saved more than 750,000 gallons in its first year. [ 71 ] On top of these ecological benefits, the green alleys will provide open space, converting areas that once felt unsafe or were used for dumping into playgrounds and walking/biking corridors. [ 72 ]
The Trust for Public Land has completed 183 green schoolyards across the five boroughs of New York. [ 73 ] Existing asphalt schoolyards are converted into more vibrant and exciting places while also incorporating infrastructure to capture and store rainwater: rain gardens, rain barrels, tree groves with pervious pavers, and an artificial field with a turf base. [ 74 ] The children are engaged in the design process, which lends a sense of ownership and encourages them to take better care of their schoolyard. [ 74 ] Success in New York has led other cities, such as Philadelphia and Oakland, to also convert to green schoolyards. [ 75 ] [ 76 ]
Low-impact development (also referred to as green stormwater infrastructure) refers to systems and practices that use or mimic natural processes resulting in the infiltration, evapotranspiration or use of stormwater, in order to protect water quality and associated aquatic habitat. LID practices aim to preserve, restore and create green space using soils, vegetation, and rainwater harvest techniques. It is an approach to land development (or re-development) that works with nature to manage stormwater as close to its source as possible. [ 18 ] Many low impact development tools integrate vegetation or the existing soil to reduce runoff and let rainfall enter the natural water cycle . [ 77 ]
The Green Infrastructure approach analyses the natural environment in a way that highlights its function and subsequently seeks to put in place, through regulatory or planning policy, mechanisms that safeguard critical natural areas. Where life support functions are found to be lacking, plans may propose how these can be put in place through landscaped and/or engineered improvements. [ 78 ]
Within an urban context, this can be applied to re-introducing natural waterways [ 79 ] and making a city self-sustaining, particularly with regard to water: for example, harvesting water locally, recycling it, re-using it and integrating stormwater management into everyday infrastructure. [ 80 ]
The multi-functionality of this approach is key to the efficient and sustainable use of land, especially in a compact and bustling country such as England where pressures on land are particularly acute. An example might be an urban edge river floodplain which provides a repository for flood waters, acts as a nature reserve , provides a recreational green space and could also be productively farmed (probably through grazing). There is growing evidence that the natural environment also has a positive effect on human health. [ 81 ]
In the United Kingdom, Green infrastructure planning is increasingly recognised as a valuable approach for spatial planning and is now seen in national, regional and local planning and policy documents and strategies, for example in the Milton Keynes and South Midlands Growth area. [ 82 ]
In 2009, guidance on green infrastructure planning was published by Natural England. [ 83 ] This guidance promotes the importance of green infrastructure in 'place-making', i.e. in recognizing and maintaining the character of a particular location, especially where new developments are planned. [ 84 ]
In North West England the former Regional Spatial Strategy had a specific Green Infrastructure Policy (EM3 – Green Infrastructure) as well as other references to the concept in other land use development policies (e.g. DP6). [ 85 ] The policy was supported by the North West Green Infrastructure Guide. [ 86 ] The Green Infrastructure Think Tank (GrITT) provides the support for policy development in the region and manages the web site that acts as a repository for information on Green Infrastructure. [ 87 ]
The Natural Economy Northwest programme has supported a number of projects, commissioned by The Mersey Forest, to develop the evidence base for green infrastructure in the region. In particular, work has been undertaken to look at the economic value of green infrastructure, the linkage between grey and green infrastructure, and to identify areas where green infrastructure may play a critical role in helping to overcome issues such as flood risk or poor air quality.
In March 2011, a prototype Green Infrastructure Valuation Toolkit [ 88 ] was launched. The Toolkit is available under a Creative Commons license, and provides a range of tools that provide economic valuation of green infrastructure interventions. The toolkit has been trialled in a number of areas and strategies, including the Liverpool Green Infrastructure Strategy. [ 89 ]
In 2012, the Greater London Authority published the All London Green Grid Supplementary Planning Guidance (ALGG SPG) which proposes an integrated network of green and open spaces together with the Blue Ribbon Network of rivers and waterways. The ALGG SPG aims to promote the concept of green infrastructure, and increase its delivery by boroughs, developers, and communities, to benefit areas such as sustainable travel, flood management, healthy living and the economic and social uplift these support. [ 90 ]
Green infrastructure is being promoted as an effective and efficient response to projected climate change. [ 91 ] [ 92 ]
Green infrastructure may include geodiversity objectives. [ 93 ]
Green infrastructure programs managed by EPA and partner organizations are intended to improve water quality generally through more extensive management of stormwater runoff. The practices are expected to reduce stress on traditional water drainage infrastructure ( storm sewers and combined sewers ), which in U.S. cities, towns and suburban areas typically consists of extensive networks of underground pipes and/or surface water channels. Improved stormwater management is expected to reduce the frequency of combined sewer overflows and sanitary sewer overflows , reduce the impacts of urban flooding , and provide other environmental benefits. [ 94 ] [ 95 ]
Though green infrastructure is yet to become a mainstream practice, [ 96 ] many US cities have initiated its implementation to comply with their MS4 permit requirements. For example, the City of Philadelphia has installed or supported a variety of retrofit projects in neighborhoods throughout the city. Installed improvements include:
Some of these facilities reduce the volume of runoff entering the city's aging combined sewer system, and thereby reduce the extent of system overflows during rainstorms. [ 97 ]
Another U.S. example is the State of Maryland 's promotion of a program called "GreenPrint." GreenPrint Maryland is described as the first web-enabled map in the nation that shows the relative ecological importance of every parcel of land in the state. Combining color-coded maps, information layers, and aerial photography with public openness and transparency, GreenPrint Maryland applies environmental science and geographic information systems (GIS) to the work of preserving and protecting environmentally critical lands. It is intended as a tool not only for making land conservation decisions today, but also for building a broader and better-informed public consensus for sustainable growth and land preservation decisions into the future. The program was established in 2001 with the objective to "preserve an extensive intertwined network of lands vital to the long-term protection of the State's natural resources, in concert with other Smart Growth initiatives." [ 98 ] [ 99 ]
In April 2011, EPA announced the Strategic Agenda to Protect Waters and Build More Livable Communities through Green Infrastructure and the selection of the first ten communities to be green infrastructure partners. [ 100 ] [ 101 ] The communities selected were: Austin, Texas; Chelsea, Massachusetts; the Northeast Ohio Regional Sewer District (Cleveland, Ohio); the City and County of Denver, Colorado; Jacksonville, Florida; Kansas City, Missouri; Los Angeles, California; Puyallup, Washington; Onondaga County and the City of Syracuse, New York; and Washington, D.C. [ 102 ]
The Federal Emergency Management Agency (FEMA) is also promoting green infrastructure as a means of managing urban flooding (also known as localized flooding). [ 103 ]
Since 2009, two editions of the ABC (Active, Beautiful, Clean) Waters Design Guidelines have been published by the Public Utilities Board, Singapore. The latest version (2011) contains planning and design considerations for the holistic integration of drains, canals and reservoirs with the surrounding environment. The Public Utilities Board encourages stakeholders such as landowners and private developers to incorporate ABC Waters design features into their developments, and the community to embrace these infrastructures for recreational and educational purposes.
The main benefits outlined in the ABC Waters Concept include:
A 2012 paper by the Overseas Development Institute reviewed evidence of the economic impacts of green infrastructure in fragile states.
Upfront construction costs for GI were up to 8% higher than for non-green infrastructure projects. Climate finance was not adequately captured by fragile states for GI investments, and governance issues may further hinder their capability to take full advantage of it. [ 105 ]
GI investments needed strong government participation as well as institutional capacities and capabilities that fragile states may not possess. Potential poverty reduction benefits include improved agricultural yields and higher rural electrification rates, benefits that can be transmitted to other sectors of the economy not directly linked to the GI investment. [ 105 ]
Whilst there are examples of GI investments creating new jobs in a number of sectors, it is unclear what the employment advantages are relative to traditional infrastructure investments. The correct market conditions (i.e. labour regulations or energy demand) are also required in order to maximise employment creation opportunities, and such factors may not be fully exploited by fragile state governments lacking the capacity to do so.
GI investments have a number of co-benefits, including increased energy security and improved health outcomes, with a potential reduction of a country's vulnerability to the negative effects of climate change arguably being the most important co-benefit for such investments in a fragile state context. [ 105 ]
There is some evidence that GI options are taken into consideration during project appraisal . Engagement mostly occurs in projects specifically designed with green goals, hence there is no data showing decision making that leads to a shift towards any green alternative. Comparisons of costs, co-benefits, poverty reduction benefits or employment creation benefits between the two typologies are also not evident. [ 106 ]
An international standard for green infrastructure is currently being developed: SuRe – The Standard for Sustainable and Resilient Infrastructure is a global voluntary standard which integrates key criteria of sustainability and resilience into infrastructure development and upgrade. [ 4 ] SuRe is being developed by the Swiss Global Infrastructure Basel Foundation and the French bank Natixis as part of a multi-stakeholder process and will be compliant with ISEAL guidelines. [ 107 ] The foundation has also developed the SuRe SmartScan, a simplified version of the SuRe Standard which serves as a self-assessment tool for infrastructure project developers. It provides them with a comprehensive and time-efficient analysis of the various themes covered by the SuRe Standard, offering a solid foundation for projects that are planning to become certified by the SuRe Standard in the future. Upon completion of the SmartScan, project developers receive a spider-diagram evaluation, which indicates their project's performance in the different themes and benchmarks the performance against other SmartScan-assessed projects. [ 108 ]
A good example of green infrastructure principles being applied at landscape scale is the Beijing Olympic site. First developed for the 2008 Summer Olympics but used also for the 2022 Winter Olympics, the Beijing Olympic site covers a large area of brownfield redevelopment in the northern sector of the city between the 4th and 5th ring roads. The central green infrastructure feature of the Olympic site is the "Dragon-shaped river" – a complex of retention basins and wetlands covering more than a half million square metres configured to look from the air like a traditional Chinese dragon.
In addition to referencing Chinese culture, the system is capable of significantly reducing nutrient loads from influent waters, which are provided by a nearby wastewater recycling facility. [ 109 ]
Farmers claimed that flooding of their farmlands was caused by suburban development upstream. The flooding was a result of runoff funneled into storm drains by impervious cover, which ran unmitigated and unabsorbed into their farmlands downstream. The farmers were awarded an undisclosed amount of money in the tens of millions as compensation. Low-density, highly paved residential communities redirect stormwater from impervious surfaces and pipes into streams at velocities much greater than predevelopment rates. Not only are these practices environmentally damaging, they can be costly and inefficient to maintain. In response, the city of Surrey opted to employ a green infrastructure strategy and chose a 250-hectare site called East Clayton as a demonstration project. The approach reduced the stormwater flowing downstream and allows for infiltration of rainwater close to, if not at, its point of origin. As a result, the stormwater system at East Clayton can hold one inch of rainfall per day, accounting for 90% of the annual rainfall. The incorporation of green infrastructure at Surrey, British Columbia created a sustainable environment that diminishes runoff and saves around $12,000 per household. [ 8 ]
The site of the former Nya Krokslätt factory is situated between a mountain and a stream. The Danish engineering firm Ramboll designed a concept for slowing down and guiding stormwater in the area with methods such as vegetation combined with ponds, streams and soak-away pits, as well as glazed green-blue climate zones surrounding the buildings which delay and clean roof water and greywater .
The design concept provides for a multifunctional, rich urban environment, which includes not only technical solutions for energy efficient buildings, but encompasses the implementation of blue-green infrastructure and ecosystem services in an urban area. [ 28 ]
Since 1991, the city of Zürich has had a law stating that all flat roofs (unless used as terraces) must be greened. The main advantages resulting from this policy include increased biodiversity, rainwater storage and outflow delay, and micro-climatic compensation (temperature extremes, radiation balance, evaporation and filtration efficiency). [ 110 ] Roof biotopes are stepping stones which, together with the earthbound green areas and the seeds distributed by wind and birds, make an important contribution to the urban green infrastructure. [ 1 ]
In the old industrial area of the Ruhr District in Germany, Duisburg -Nord is a landscape park which incorporates former industrial structures and natural biodiversity. The architects Latz + Partner developed the water park which now consists of the old River Emscher, subdivided into five main sections: Klarwasserkanal (Clear Water Canal), the Emschergraben (Dyke), the Emscherrinne (Channel), the Emscherschlucht (Gorge) and the Emscherbach (Stream). The open waste water canal of the “Old Emscher” river is now fed gradually by rainwater collection through a series of barrages and water shoots. This gradual supply means that, even in lengthy dry spells, water can be supplied to the Old Emscher to replenish the oxygen levels. [ 111 ] This has allowed the canalised river bed to become a valley with possibilities for nature development and recreation.
As a key part of the ecological objectives, much of the overgrown areas of the property were included in the plan as they were found to contain a wide diversity of flora and fauna, including threatened species from the red list. Another important theme in the development of the plan was to make the water system visible, in order to stimulate a relationship between visitors and the water. [ 1 ]
The Greenhouse Project was started in 2008 by a small group of public school parents and educators to facilitate hands-on learning, not only to teach about food and nutrition, but also to help children make educated choices regarding their impact on the environment.
The laboratory is typically built as a traditional greenhouse on school rooftops and accommodates a hydroponic urban farm and environmental science laboratory. It includes solar panels, hydroponic growing systems, a rainwater catchment system, a weather station and a vermicomposting station.
Main topics of education include nutrition, water resource management, efficient land use, climate change, biodiversity, conservation, contamination, pollution, waste management, and sustainable development. Students learn the relationship between humans and the environment and gain a greater appreciation of sustainable development and its direct relationship to cultural diversity. [ 112 ]
In the early 1990s, Hammarby Sjöstad had a reputation for being a run-down, polluted and unsafe industrial and residential area. [ 1 ] Now, it is a new district in Stockholm where the city has imposed tough environmental requirements on buildings, technical installations and the traffic environment.
An ‘eco-cycle’ solution named the Hammarby Model, developed by Fortum, Stockholm Water Company and the Stockholm Waste Management Administration, is an integral energy, waste and water system for both housing and offices. The goal is to create a residential environment based on sustainable resource usage. [ 113 ] Examples include waste heat from treated wastewater being used to heat water in the district heating system, rainwater runoff being returned to the natural cycle through infiltration in green roofs and treatment pools, and sludge from the local wastewater treatment being recycled as fertiliser for farming and forestry. [ 1 ] This sustainable model has been a source of inspiration to many urban development projects including the Toronto (Canada) Waterfront, London's New Wembley, and a number of cities/city areas in China. [ 114 ]
EPA supported the city of Emeryville, California in the development of "Stormwater Guidelines for Green, Dense Redevelopment." [ 115 ] Emeryville, which is a suburb of San Francisco, began in the 1990s reclaiming, remediating and redeveloping the many brownfields within its borders. These efforts sparked a successful economic rebound. The city did not stop there, and decided in the 2000s to harness the redevelopment progress for even better environmental outcomes, in particular those related to stormwater runoff, by requiring in 2005 the use of on-site GI practices in all new private development projects. The city faced several challenges, including a high water table, tidal flows, clay soils, contaminated soil and water, and few absorbent natural areas among the primarily impervious, paved parcels of existing and redeveloped industrial sites. The guidelines, and an accompanying spreadsheet model, were developed to make as much use of redevelopment sites as possible for handling stormwater. The main strategies fell into several categories:
The Gowanus Canal , in Brooklyn , New York, is bounded by several communities including Park Slope, Cobble Hill, Carroll Gardens, and Red Hook. The canal empties into New York Harbor . Completed in 1869, the canal was once a major transportation route for the then separate cities of Brooklyn and New York City. Manufactured gas plants, mills, tanneries, and chemical plants are among the many facilities that operated along the canal. As a result of years of discharges, storm water runoff, sewer outflows, and industrial pollutants, the canal has become one of the nation's most extensively contaminated water bodies. Contaminants include PCBs, coal tar wastes, heavy metals, and volatile organics. On March 2, 2010, EPA added the canal to its Superfund National Priorities List (NPL). Placing the canal on the list allows the agency to further investigate contamination at the site and develop an approach to address the contamination.
After the NPL designation, several firms tried to redesign the area surrounding the canal to meet EPA's principles. One of the proposals was the Gowanus Canal Sponge Park, suggested by Susannah Drake of DLANDstudio, an architecture and landscape architecture firm based in Brooklyn. The firm designed a public open space system that slows, absorbs, and filters surface water runoff with the goal of remediating contaminated water, activating the private canal waterfront, and revitalizing the neighborhood. The unique feature of the park is its character as a working landscape, meaning it can improve the environment of the canal over time while simultaneously supporting public engagement with the canal ecosystem. The park was cited in a professional award by the American Society of Landscape Architects (ASLA), in the Analysis and Planning category, in 2010. [ citation needed ]
The Lafitte Greenway in New Orleans, Louisiana , is a post- Hurricane Katrina revitalization effort that utilizes green infrastructure to improve water quality as well as support wildlife habitat. [ 42 ] The site was previously an industrial corridor that connected the French Quarter to Bayou St. John and Lake Pontchartrain . [ 42 ] Part of the revitalization plan was to incorporate green infrastructure for environmental sustainability. [ 42 ] One strategy to mitigate localized flooding was to create recreation fields that are carved out to hold water during times of heavy rains. [ 42 ] Another strategy was to restore the native ecology of the corridor, giving special attention to the ecotones that bisect the site. [ 42 ] The design proposed retrofitting historic buildings with stormwater management techniques, such as rainwater collection systems, which allows historic buildings to be preserved. [ 42 ] This project received the Award of Excellence from the ASLA in 2013. [ 42 ]
A geographic information system (GIS) is a computer system that allows users to capture, store, display, and analyze all kinds of spatial data on Earth. [ 117 ] GIS can gather multiple layers of information on one single map regarding streets, buildings, soil types, vegetation, and more. [ 117 ] Planners can combine or calculate useful information, such as the impervious area percentage or vegetation coverage of a specific region, to design or analyze the use of green infrastructure. The continued development of geographic information systems and their increasing level of use is particularly important in the development of green infrastructure plans, which frequently are based on GIS analysis of many layers of geographic information. [ 117 ]
According to the "Green Infrastructure Master Plan" developed by Hawkins Partners, civil engineers use GIS to model impervious surfaces with historical Nashville rainfall data within the CSS (combined sewer system) to find current rates of runoff. GIS also helps planning teams analyze the potential runoff volume reductions achievable in a specific area through green infrastructure measures, including water harvesting, green roofs, urban trees, and structural control measures. [ 118 ]
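The kind of runoff estimate described above can be illustrated outside a full GIS package. The following is a minimal Python sketch using a simple runoff-coefficient approach; the land-cover areas, coefficients, and rainfall depth are hypothetical placeholders and are not values from the Nashville plan.

```python
# Minimal sketch: estimate stormwater runoff from land-cover areas, in the
# spirit of a GIS impervious-surface analysis. All figures are hypothetical
# placeholders, not data from the Nashville "Green Infrastructure Master Plan".

GALLONS_PER_ACRE_INCH = 27_154  # volume of 1 acre-inch of water in US gallons

# Hypothetical land-cover layers (acres) with assumed runoff coefficients.
land_cover = {
    "rooftop":     {"area_acres": 120.0, "runoff_coeff": 0.95},
    "pavement":    {"area_acres": 300.0, "runoff_coeff": 0.90},
    "lawn":        {"area_acres": 450.0, "runoff_coeff": 0.20},
    "tree_canopy": {"area_acres": 130.0, "runoff_coeff": 0.10},
}

rainfall_in = 1.0  # assumed design-storm depth in inches

def runoff_gallons(cover: dict, rainfall: float) -> float:
    """Runoff volume = area x rainfall depth x runoff coefficient, summed."""
    return sum(
        layer["area_acres"] * rainfall * layer["runoff_coeff"] * GALLONS_PER_ACRE_INCH
        for layer in cover.values()
    )

total_area = sum(layer["area_acres"] for layer in land_cover.values())
impervious = sum(land_cover[k]["area_acres"] for k in ("rooftop", "pavement"))

print(f"Impervious share of the study area: {impervious / total_area:.0%}")
print(f"Estimated runoff for a {rainfall_in}-inch storm: "
      f"{runoff_gallons(land_cover, rainfall_in):,.0f} gallons")
```

In practice the per-cover areas would come from GIS layers rather than hard-coded values, and a reduction scenario (for example, converting some pavement to tree canopy) can be evaluated by editing the table and re-running the same arithmetic.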
Lack of funding is consistently cited as a barrier to the implementation of green infrastructure. One advantage of green infrastructure projects, however, is that they generate so many benefits that they can compete for a wide variety of funding sources. Some tax incentive programs administered by federal agencies can be used to attract financing to green infrastructure projects. Two examples of programs whose missions are broad enough to support green infrastructure projects are:
Green spaces are sometimes assumed to be extravagant and excessively difficult to maintain, but high-performing green spaces can provide tangible economic, ecological, and social benefits. [ 120 ] For example:
As a result, high-performing green spaces work to create a balance between built and natural environments. [ 10 ] A higher abundance of green space in communities or neighbourhoods, for example, has been observed to promote participation in physical activities among elderly men, [ 121 ] while more green space around one's house is associated with improved mental health . [ 122 ]
In addition to these benefits, recent studies have shown that residents highly value the experiential aspects of green infrastructure, emphasizing the importance of aesthetics, wellbeing, and a sense of place. This focus on cultural ecosystem services suggests that the design and implementation of green infrastructure should prioritize these elements, as they significantly contribute to the community's perception of value and overall quality of life. [ 123 ]
A 2012 study of 479 green infrastructure projects across the United States found that 44% of the projects reduced costs, compared with 31% that increased costs. The most notable cost savings were due to reduced stormwater runoff and decreased heating and cooling costs. [ 124 ] [ 125 ] Green infrastructure is often cheaper than conventional water management strategies. The city of Philadelphia, for example, found that a new green infrastructure plan would cost $1.2 billion over a 25-year period, compared to the $6 billion that would have been needed to finance a grey infrastructure plan. [ 126 ]
Philadelphia's comprehensive green infrastructure plan is projected to cost just $1.2 billion over 25 years, compared with over $6 billion for "grey" infrastructure (concrete tunnels built to move water). Under the green infrastructure plan it is expected that: [ 127 ]
A green infrastructure plan in New York City is expected to cost $1.5 billion less than a comparable grey infrastructure approach. The green stormwater management systems alone are expected to save $1 billion, at a cost of about $0.15 less per gallon. The sustainability benefits in New York City are estimated at $139–418 million over the 20-year life of the project. The plan estimates that “every fully vegetated acre of green infrastructure would provide total annual benefits of $8,522 in reduced energy demand, $166 in reduced CO2 emissions, $1,044 in improved air quality, and $4,725 in increased property value.” [ 124 ] [ 125 ] [ 128 ] [ 129 ]
In addition to infrastructure plans offering economic and health benefits, a 2016 study in the United Kingdom analyzed residents' "willingness to pay" for green infrastructure. Its findings concluded that "investment in urban [green infrastructure] that is visibly greener, that facilitates access to [green infrastructure] and other amenities, and that is perceived to promote multiple functions and benefits on a single site (i.e. multi-functionality) generate higher [willingness-to-pay] values." [ 130 ] Willingness to pay is greater where living spaces combine functionality and aesthetics, since such locations tend to attract larger amounts of social and economic capital. [ 131 ] Incentivising residents to invest in green infrastructure within their own neighbourhoods and communities creates the potential for increased revenue that can be used to fund further green infrastructure, ultimately increasing the "economic viability" of future projects. [ 130 ]
In cities such as Chicago, green infrastructure projects aim to enhance the environment through sustainability and livability, but they often create social justice concerns such as gentrification. This frequently happens when urban green spaces added in lower-income communities attract wealthier residents, causing property values to rise and displacing existing lower-income residents. The impacts of gentrification vary depending on the community and on the type, size, and location of the infrastructure implemented, such as greenspaces and transportation corridors, [ 132 ] which reshape the demographic and economic landscape of the community. The challenge of incorporating green infrastructure in a way that benefits social justice often stems from how governments fund and deliver projects. Many projects are managed by nonprofits, so social equity is not the focus and the necessary skills are not acquired, which creates larger social justice problems such as a decrease in affordable housing. [ 133 ] The result is a focus on environmental and recreational improvements that neglects the socioeconomic dimensions of sustainability. The planning process for infrastructure should consider environmental outcomes while also integrating social equity considerations. [ 133 ]
The impacts of green gentrification on local communities can ultimately contradict the benefits that sustainable and green infrastructure initially brings. Green infrastructure such as increased green space or walkability in cities can improve the well-being of individuals living in those communities, [ 134 ] but often at the expense of displacing homeless populations or those with limited housing access who live in the areas targeted for urban improvement. [ 135 ] To combat the negative effects of gentrification occurring as a byproduct of haphazard implementation of green infrastructure, the "critical barriers" that impede affordable housing must be addressed. Five major barriers that need to be addressed in future policies and legislation are "green retrofit-related; land market-related; incentive-related; housing market-related and infrastructural-related barriers." [ 136 ]
The success of implementing green infrastructure in communities that have experienced environmental injustice, such as excess exposure to pollution or limited access to affordable housing, depends on interaction and collaboration between the project managers overseeing green infrastructure sites and community residents. The most prominent concerns raised by residents of one New Jersey community related to the maintenance and upkeep of future green stormwater infrastructure (GSI), the need for future GSI projects to be tailored to individual communities rather than applied universally, and advocacy for environmental justice to be built into project outlines, as "GSI projects, as part of broader community greening initiatives, do not automatically guarantee EJ and health equity, which may be absent in many shrinking cities." [ 137 ] It is important to understand the environmental and economic capabilities that green infrastructure can provide, but environmental inequity in access to these spaces [ 138 ] must be considered when applying green infrastructure in communities. Focusing on communities with less access to ecosystem services and green infrastructure is a major part of ensuring that all communities and residents feel the benefits of implementation.
One program that has integrated green infrastructure into construction projects worldwide is the Leadership in Energy and Environmental Design (LEED) certification. This system offers a benchmark rating for green buildings and neighborhoods, credibly quantifying a project's environmental responsibility. [ 139 ] The LEED program incentivizes development that uses resources efficiently. [ 140 ] For example, it offers specific credits for reducing indoor and outdoor water use, optimizing energy performance, producing renewable energy, and minimizing or recycling project waste. Two LEED initiatives that directly promote the use of green infrastructure include the rainwater management and heat island reduction credits. [ 141 ] An example of a successfully LEED-certified neighborhood development is the 9th and Berks Street transit-oriented development (TOD) in Philadelphia, Pennsylvania , which achieved a Platinum level rating on October 12, 2017. [ 142 ]
Another approach to implementing green infrastructure has been developed by the International Living Future Institute. [ 143 ] Their Living Community Challenge [ 144 ] assesses a community or city in twenty different aspects of sustainability. [ 145 ] Notably, the Challenge considers whether the development achieves net positive water [ 146 ] and energy [ 147 ] uses and utilizes replenishable materials. [ 148 ] | https://en.wikipedia.org/wiki/Green_infrastructure |
Green leaf volatiles ( GLV ) are organic compounds released by plants. [ 1 ] Some of these chemicals function as signaling compounds between plants of the same species or of different species, or even between plants and other lifeforms such as insects. [ 2 ] [ 3 ] [ 4 ] [ 5 ]
Green leaf volatiles are involved in patterns of attack and protection between species. They have been found to increase the attractive effect of pheromones of cohabiting insect species that protect plants from attacking insect species. For example, corn plants that are being fed on by caterpillars will release GLVs that attract wasps, who then attack the caterpillars. [ 2 ] [ 4 ] GLVs also have antimicrobial properties that can prevent infection at the site of injury. [ 3 ]
GLVs include C6-aldehydes [(Z)-3-hexenal, n-hexanal] and their derivatives such as (Z)-3-hexenol, (Z)-3-hexen-1-yl acetate, and the corresponding E-isomers. [ 6 ] [ 7 ]
When a plant is attacked, it emits GLVs into the environment through the air. How a plant responds depends on the type of damage involved. Plants respond differently to damage from a purely mechanical source and damage from herbivores. Mechanical damage tends to cause damage-associated molecular patterns (DAMPs) involving plant-derived substances and breakdown products. Herbivore-associated molecular patterns (HAMPs) involve characteristic molecules left by different types of herbivores when feeding. The oral secretions of herbivores appear to play an essential role in triggering the release of species-specific herbivore-induced plant volatiles. Wounds from herbivores, and mechanical wounds that have been treated with herbivore oral secretions, both trigger the release of higher quantities of plant volatiles than mechanical damage. [ 4 ]
Volatile blends are proposed to convey a variety of information to insects and plants. "Each plant species and even each plant genotype releases its own specific blend, and the quantities and ratios in which they are released also vary with the arthropod that is feeding on a plant and may even provide information on the time of day that feeding occurs." [ 4 ] In addition to GLVs, herbivore induced plant volatiles (HIPVs) include terpenes, ethylene, methyl salicylate and other VOCs. [ 6 ]
GLVs activate the expression of genes related to the plants' defense mechanisms. [ 3 ] Different antagonists trigger different expression of genes and the biosynthesis of signaling peptides which mediate systemic defense responses. [ 4 ]
Undamaged neighboring plants have been shown, in some cases, to respond to GLV signals. [ 3 ] Both the plant emitting the GLVs and its neighboring plants can enter a primed state in which they activate their defense systems more quickly and more strongly. [ 8 ] [ 4 ]
The first study to clearly demonstrate anti-herbivore defense priming by GLVs focused on corn ( Zea mays ). Neighboring plants responded to the release of GLVs by priming against insect herbivore attack, reacting more rapidly and releasing greater levels of GLVs. [ 3 ] [ 9 ] Similar results have been shown in tomato plants. Neighboring plants reacted more strongly to GLVs from the plants exposed to the herbivore, by releasing more of the proteins related to the plants' defense mechanisms. [ 10 ]
In positive plant-insect interactions, GLVs are used as a form of defense. They attract predators to plants that are being preyed upon by herbivores . [ 4 ] For example, female parasitoid wasps from two different families, Microplitis croceipes and Netelia heroica , can be attracted to plants that are emitting GLVs due to wounding from caterpillars. [ 11 ] Maize plants emit volatiles to attract the parasitic wasps Cotesia marginiventris and Microplitis rufiventris to attack African cotton leafworm . [ 12 ] [ 13 ] In some species GLVs enhance the attraction of sex pheromones. [ 4 ] [ 14 ] For example, green leaf volatiles have been found to increase the response of tobacco budworm to sex pheromone. Budworm larvae feed on tobacco, cotton, and various flowers and weeds, and in turn can be fed on by the larvae of cohabiting species that are attracted by GLVs. [ 15 ]
In another study, a multi-plant relationship was reported. The wasps Vespula germanica and V. vulgaris prey on Pieris brassicae caterpillars infesting cabbage leaves that emit GLVs. The same GLVs are also emitted by the orchids Epipactis purpurata and E. helleborine . The orchids benefit from attracting the wasps not for protection from insects, but because the wasps aid in pollination. [ 16 ] [ 17 ]
Benefits of GLV release have also been reported in soybeans grown in Iowa. [ 18 ] When these soybean plants became heavily infested by aphids , the amount of GLVs released far surpassed normal levels, and as a result more spotted lady beetles were attracted to the emitting plants and preyed on the aphids. The stimulus of aphid predation is chemically transmitted through the plant to coordinate an increased release of GLVs. The particular chemical released is attractive specifically to these spotted lady beetles; when different species of beetles were tested, they showed no extra inclination to move towards GLV-releasing plants. [ 18 ] This indicates that these soybeans evolved the ability to release species-specific volatile signals that aid in their survival.
GLV release is correlated with fruit ripeness. [ 19 ] Although this may help attract pollinators, it can also cause problems if the GLVs attract pests. One example involves boll weevils: the increase in GLV release when plants are ripe has been found to increase the rate of attack by these beetles. [ 19 ]
Another issue with GLV release is that some herbivores can suppress GLV emissions from the plants they attack. In one case, it was noted that secretions from certain species of caterpillars significantly decrease the effective amount of GLVs emitted. [ 20 ] To determine how this suppression occurs, a study examined four species of caterpillars and measured their effectiveness in decreasing the GLV levels released by the plant being eaten. Compounds in the gut and salivary glands, along with species-specific modifications to those compounds, were found to mute much of the GLV signal released into the external environment. [ 20 ] This is achieved by stopping the flow of the volatile molecules so that they cannot interact with receptors on the leaves of other plants. [ 20 ]
GLVs can also have antimicrobial effects. [ 21 ] Some plants express HPL, the main enzyme of GLV synthesis. [ 8 ] Rates of fungal spore growth on HPL-overexpressing mutants, HPL-silenced mutants, and wild-type plants have been compared. [ 8 ] The HPL-overexpressing mutants showed lower rates of fungal growth and higher GLV emissions, while the HPL-silenced mutants showed higher rates of fungal growth and lower GLV emissions, supporting the hypothesis that GLVs have antimicrobial properties. [ 8 ]
The antimicrobial properties of GLVs have also been proposed to be part of an evolutionary arms race . During an infection, plants emit GLVs to act as antimicrobial agents, but some bacteria and viruses have adapted to use these GLVs to their own benefit. [ 22 ] The best-known example is found in the red raspberry. When the red raspberry plant is infected, the virus induces it to produce more GLVs, which attract the red raspberry aphid. [ 9 ] These GLVs attract more aphids and cause them to feed on the plant for longer, giving the virus better chances of being ingested and spread more widely. [ 9 ] Researchers are now trying to determine whether, under infectious conditions, plants emit GLVs for their own benefit or whether bacteria and viruses induce the release of these compounds for theirs. [ 23 ] Studies in this area have been inconclusive and contradictory.
A systematic review by Schuman 2023 finds that most studies on plant volatiles relate to herbivore interactions. Schuman also finds that laboratory studies are overrepresented despite the wide differences in herbivore behaviour between natural and artificial settings. [ 25 ] | https://en.wikipedia.org/wiki/Green_leaf_volatiles |
Green lightning originally referred to random flashing streaks across the screen of IBM 3278-9 computer terminals , which were produced by a hardware bug when a new symbol set was being downloaded . Instead of fixing the fault, IBM suggested that it was useful because it let the user know during the download that something was in progress.
Later IBM colour graphics terminals were microprocessor driven and would not have produced flashing streaks. IBM decided to program them to re-create the "green lightning", since the bug had become a feature – a phenomenon known as a misbug.
This is one of many terms from the Jargon File that are widely quoted but have little or no everyday usage.
| https://en.wikipedia.org/wiki/Green_lightning_(computing) |
In agriculture , a green manure is a crop specifically cultivated to be incorporated into the soil while still green. [ 1 ] Typically, the green manure's biomass is incorporated with a plow or disk, as is often done with (brown) manure . The primary goal is to add organic matter to the soil for its benefits. Green manuring is often used with legume crops to add nitrogen to the soil for following crops, especially in organic farming , but is also used in conventional farming . [ 2 ]
Farmers apply green manure by blending available plant discards into the soil. [ 3 ] Farmers begin the process of green manuring by growing legumes or collecting tree/shrub clippings. Harvesters gather the green manure crops and mix the plant material into the soil. The un-decomposed plants prepare the ground for cash crops by slowly releasing nutrients like nitrogen into the soil. [ 3 ]
Farmers may decide to add the green manure into the soil before or after cash crop planting. This variety in planting schedules can be seen in rice farming. [ 4 ]
Green manures usually perform multiple functions that include soil improvement and soil protection:
Leguminous green manures such as clover and vetch contain nitrogen-fixing symbiotic bacteria in root nodules that fix atmospheric nitrogen in a form that plants can use. This performs the vital function of fertilization .
Depending on the species of cover crop grown, the amount of nitrogen released into the soil lies between 40 and 200 pounds per acre. With green manure use, the amount of nitrogen that is available to the succeeding crop is usually in the range of 40-60% of the total amount of nitrogen that is contained within the green manure crop. [ 5 ]
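A short worked example helps make these figures concrete. The Python sketch below applies the 40–60% availability range quoted above to an assumed total nitrogen contribution; the input value of 120 lb/acre is purely illustrative.

```python
# Worked example of the nitrogen figures quoted above.
# The 40-60% availability range comes from the text; the input is hypothetical.

def available_nitrogen(total_n_lb_per_acre, availability=(0.40, 0.60)):
    """Return low/high estimates of N available to the succeeding crop."""
    low, high = availability
    return total_n_lb_per_acre * low, total_n_lb_per_acre * high

total_n = 120.0  # assumed lb N per acre released by the cover crop
low, high = available_nitrogen(total_n)
print(f"Nitrogen available to the next crop: roughly {low:.0f}-{high:.0f} lb/acre")
```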
Green manure acts mainly as soil-acidifying matter to decrease the alkalinity/pH of alkali soils by generating humic acid and acetic acid .
Incorporation of cover crops into the soil allows the nutrients held within the green manure to be released and made available to the succeeding crops. This results from an increase in the abundance of soil microorganisms, which decompose the fresh plant material. This decomposition also re-incorporates into the soil the nutrients held in the plant material, such as nitrogen (N), potassium (K), phosphorus (P), calcium (Ca), magnesium (Mg), and sulfur (S).
Microbial activity from the incorporation of cover crops into the soil leads to the formation of mycelium and viscous materials, which benefit soil health by improving soil structure (i.e. by aggregation). [ 5 ] The increased percentage of organic matter ( biomass ) improves water infiltration and retention, aeration, and other soil characteristics. The soil is more easily turned or tilled than non-aggregated soil. Further aeration of the soil results from the ability of the root systems of many green manure crops to efficiently penetrate compact soils. The amount of humus in the soil also increases with higher rates of decomposition, which benefits the growth of the crop succeeding the green manure crop. Non-leguminous crops are primarily used to increase biomass.
The root systems of some varieties of green manure grow deep in the soil and bring up nutrient resources unavailable to shallower-rooted crops.
Weed suppression is another common cover crop function; non-leguminous crops such as buckwheat are primarily used for this purpose. [ 6 ] The deep rooting properties of many green manure crops also make them efficient at suppressing weeds . [ 7 ]
Some green manure crops, when allowed to flower , provide forage for pollinating insects . Green manure crops also often provide habitat for predatory beneficial insects, which allow for a reduction in the application of insecticides where cover crops are planted.
Some green manure crops (e.g. winter wheat and winter rye ) can also be used for grazing. [ 6 ]
Erosion control is often also taken into account when selecting which green manure cover crop to plant.
Some green manure crops reduce insect pests and plant diseases . Verticillium wilt is especially reduced in potato plants. [ 8 ]
Incorporation of green manures into a farming system can drastically reduce the need for additional products such as supplemental fertilizers and pesticides.
Limitations to consider in the use of green manure are time, energy, and resources (monetary and natural) required to successfully grow and utilize these cover crops. Consequently, it is important to choose green manure crops based on the growing region and annual precipitation amounts to ensure efficient growth and use of the cover crop(s).
Green manure is broken down into plant nutrient components by heterotrophic bacteria that consume organic matter. Warmth and moisture contribute to this process, similar to the process of creating compost. The plant matter releases large amounts of carbon dioxide and weak acids that react with insoluble soil minerals to release beneficial nutrients. Soils that are high in calcium minerals, for example, can be given green manure to generate a higher phosphate content in the soil, which in turn acts as a fertilizer. [ 6 ]
The ratio of carbon to nitrogen (C:N) in a plant is a crucial factor to consider, since it affects the nutrient content of the soil and may starve a crop of nitrogen if the wrong plants are used to make green manure. The ratio differs from species to species and depends on the age of the plant. With the nitrogen value taken as one, carbon values typically range from about 10 to 90; the ratio must be less than 30:1 to prevent the bacteria decomposing the manure from depleting existing nitrogen in the soil. Rhizobium are soil organisms that interact with green manure to retain atmospheric nitrogen in the soil. [ 9 ] Legumes , such as beans, alfalfa, clover and lupines, have root systems rich in rhizobium, often making them the preferred source of green manure material. [ citation needed ]
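Because the C:N ratio governs whether soil nitrogen is temporarily locked up during decomposition, growers sometimes blend high-carbon and low-carbon materials. The Python sketch below estimates the ratio of such a blend and checks it against the 30:1 threshold mentioned above; the per-material ratios and masses are rough, illustrative assumptions, and the calculation assumes a similar carbon fraction across materials.

```python
# Minimal sketch: approximate C:N ratio of a blended green manure and the
# 30:1 check discussed above. Per-material values are illustrative guesses.

materials = [
    # (name, dry mass in kg, assumed C:N ratio)
    ("young clover", 100.0, 15.0),
    ("cereal straw",  40.0, 80.0),
]

def blend_c_to_n(items):
    """C:N of the mix, assuming a similar carbon fraction in each material,
    so that relative nitrogen in each item scales as mass / (C:N)."""
    carbon = sum(mass for _, mass, _ in items)          # relative carbon
    nitrogen = sum(mass / cn for _, mass, cn in items)  # relative nitrogen
    return carbon / nitrogen

ratio = blend_c_to_n(materials)
verdict = ("below 30:1, low immobilization risk" if ratio < 30
           else "above 30:1, risk of nitrogen immobilization")
print(f"Blend C:N is approximately {ratio:.0f}:1 ({verdict})")
```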
Many green manures are planted in autumn or winter to cover the ground before spring or summer sowing. [ 10 ]
Green manures have been used since ancient times. Before the invention of chemical nitrogen fertilizer, farmers could only use organic fertilizers. There is evidence of the Greeks plowing broad (faba) beans into the soil around 300 B.C. The Romans also used green manures such as faba beans and lupines to make their soil more fertile. [ 3 ] Chinese agricultural texts dating back hundreds of years refer to the importance of grasses and weeds in providing nutrients for farm soil. The practice was also known to early North American colonists arriving from Europe. Common colonial green manure crops were rye, buckwheat and oats. [ 6 ]
Traditionally, the incorporation of green manure into the soil is known as the fallow cycle of crop rotation , which was used to allow the soil to regain its fertility after the harvest. [ citation needed ]
Managing green manure improperly, or without additional chemical inputs, may limit crop production. Mixing green manures into the soil without enough time before crop planting can temporarily halt the supply of nitrogen (nitrogen immobilization), leaving insufficient nutrients for the next crop. [ 3 ] Farming systems with short growth spans for green manure are not usually efficient. Farmers must weigh the cost of green manures against their productivity to determine suitability. [ 4 ]
Green mix is an early step in the manufacturing of black powder for explosives . [ 1 ] It is a rough mixture of potassium nitrate , charcoal and sulfur in the correct proportions (75:15:10) for black powder , but is not milled, pressed or corned. It burns much more slowly than black powder, if it ignites at all, but can still explode if ignited in a confined place; the deflagration is usually characterized by short, uneven sizzling followed by relatively long periods of smoulder.
Green mix is merely an unfinished product and not generally used itself in any pyrotechnic or projectile applications.
| https://en.wikipedia.org/wiki/Green_mix |
Green photocatalysts are photocatalysts derived from environmentally friendly sources. [ 1 ] [ 2 ] They are synthesized from natural, renewable, and biological resources, such as plant extracts , biomass , or microorganisms , minimizing the use of toxic chemicals and reducing the environmental impact associated with conventional photocatalyst production. [ 3 ] [ 4 ]
A photocatalyst is a material that absorbs light energy to initiate or accelerate a chemical reaction without being consumed in the process. [ 5 ] Photocatalysts are semiconducting materials which generate electron-hole pairs upon light irradiation . These photogenerated charge carriers [ 6 ] then migrate to the surface of the photocatalyst and interact with adsorbed species, triggering redox reactions . [ 7 ] They are promising candidates for a wide range of applications, including the degradation of organic pollutants in wastewater , the reduction of harmful gases , and the production of hydrogen or solar fuels . [ 8 ] Photocatalysts can be produced by both conventional and greener approaches, including hydrothermal synthesis and sol-gel methods; the difference lies mainly in the material sources used.
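Since a photocatalyst can only use photons at least as energetic as its band gap, the longest usable wavelength follows from the relation λmax ≈ 1240 nm·eV / Eg. The Python sketch below applies this relation to a few approximate, commonly quoted band gaps; the listed values are ballpark figures for illustration rather than results from any specific study.

```python
# Minimal sketch: longest wavelength a semiconductor photocatalyst can absorb,
# from lambda_max [nm] ~ 1240 / band gap [eV]. Band gaps below are approximate,
# commonly quoted values and are used here only for illustration.

PLANCK_EV_NM = 1239.84  # h*c expressed in eV*nm

approx_band_gaps_ev = {
    "TiO2 (anatase)": 3.2,
    "ZnO": 3.3,
    "g-C3N4": 2.7,
}

def max_wavelength_nm(band_gap_ev):
    """Longest photon wavelength able to generate an electron-hole pair."""
    return PLANCK_EV_NM / band_gap_ev

for name, eg in approx_band_gaps_ev.items():
    lam = max_wavelength_nm(eg)
    region = "UV" if lam < 400 else "visible"
    print(f"{name}: Eg ~ {eg} eV -> lambda_max ~ {lam:.0f} nm ({region})")
```

This is why wide-gap materials respond mainly to ultraviolet light, and why doping or choosing narrower-gap materials is a common route to visible-light activity.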
A green source for photocatalyst synthesis refers to a material that is renewable , biodegradable , and has minimal environmental impact during its extraction and processing. [ 3 ] [ 4 ] This approach aligns with the principles of green chemistry , which aim to reduce or eliminate the use and generation of hazardous substances in chemical processes. [ 3 ] [ 4 ] Green sources are abundant, readily available, and often considered as waste materials, thus offering a sustainable and cost-effective alternative to conventional photocatalyst precursors . [ 9 ]
Plant extracts and agricultural waste products have emerged as promising green sources for photocatalyst production, offering attractive alternatives to conventional precursors due to their abundance, biodegradability , and cost-effectiveness. [ 10 ] Extracts from various plant parts, such as leaves , roots , and fruits , contain phytochemicals that can act as reducing and stabilizing agents in nanoparticle synthesis, [ 10 ] [ 11 ] contributing to the formation of desired photocatalyst morphologies . Meanwhile, waste materials from agricultural processes, such as rice husks and sugarcane bagasse , are rich in cellulose and lignin . [ 12 ] These components can be used as precursors for carbon-based photocatalysts or as templates for the synthesis of porous nanomaterials. [ 13 ] [ 14 ]
Utilizing bio-waste , such as food waste and animal waste , for green photocatalyst synthesis offers the dual benefit of waste management and material production. [ 27 ] These waste streams are rich in organic matter , which can be converted into valuable carbon-based photocatalysts through various thermochemical processes . [ 28 ] [ 29 ]
Seaweed is a highly promising green source for photocatalyst synthesis due to its rapid growth rates and minimal environmental requirements. [ 42 ] It does not require freshwater or fertilizers for cultivation , making it a sustainable and environmentally friendly option. [ 43 ] [ 44 ] Various seaweed species have been explored for their ability to produce nanoparticles and to act as templates for the synthesis of photocatalytic materials. [ 45 ] [ 46 ] [ 47 ]
Hydrothermal synthesis is a green method that utilizes water under high pressure and temperature to facilitate chemical reactions . [ 76 ] It often avoids the need for organic solvents and offers control over crystal size and morphology , making it a versatile approach for producing various photocatalyst materials. [ 76 ]
Microwave-assisted synthesis employs microwaves to provide rapid and uniform heating, leading to faster reaction rates and potential for significant energy savings compared to conventional heating methods. [ 77 ] This technique is increasingly favored in green synthesis due to its reduced energy consumption and potential for shorter reaction times. [ 77 ]
The sol-gel method involves the formation of a gel from a solution , followed by its conversion into a solid material through controlled drying and calcination . [ 78 ] It is a versatile technique widely used in the production of various photocatalyst materials, offering advantages in terms of controlling material composition and morphology. [ 78 ]
Different green synthesis methods vary in their advantages, potential limitations, and suitability for particular photocatalyst materials.
Green photocatalysts effectively break down organic contaminants in wastewater into less harmful products through a process known as photocatalytic oxidation . [ 82 ] Upon light irradiation, the photocatalyst generates reactive oxygen species (ROS), such as hydroxyl radicals (•OH) and superoxide radicals (O 2 •-) , which attack and decompose organic pollutants. [ 83 ] Green photocatalysts synthesized from plant extracts or agricultural waste have shown promising results in degrading various dye molecules, including methylene blue , rhodamine B , and methyl orange . [ 84 ] They have also demonstrated the ability to remove pharmaceutical contaminants such as carbamazepine , [ 85 ] ibuprofen , [ 86 ] and tetracycline [ 87 ] [ 88 ] from wastewater , and have been successfully employed in the degradation of pesticides such as alachlor . [ 89 ]
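Degradation performance in studies like those cited above is commonly reported as a removal efficiency, and dye decolorization is often described with pseudo-first-order kinetics. The Python sketch below computes both from a hypothetical concentration-versus-time series; the data are invented for illustration and do not come from any cited experiment.

```python
# Minimal sketch: removal efficiency and an apparent pseudo-first-order rate
# constant for a photocatalytic dye-degradation run. All data are hypothetical.
import math

# (time in minutes, dye concentration in mg/L) -- illustrative values only
samples = [(0, 20.0), (15, 14.1), (30, 9.9), (60, 5.0), (90, 2.4)]

c0 = samples[0][1]
c_final = samples[-1][1]
removal = (c0 - c_final) / c0
print(f"Removal efficiency after {samples[-1][0]} min: {removal:.0%}")

# Least-squares fit of ln(C0/C) = k * t through the origin.
numerator = sum(t * math.log(c0 / c) for t, c in samples if t > 0)
denominator = sum(t * t for t, _ in samples if t > 0)
k_app = numerator / denominator
print(f"Apparent rate constant k is about {k_app:.3f} per minute")
```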
In addition to degrading organic pollutants , green photocatalysts can also contribute to the removal of toxic heavy metals from wastewater . The large surface area and functional groups present on green photocatalysts, particularly those derived from carbon-based sources like bio-waste , can effectively adsorb heavy metal ions from the water. [ 98 ] Furthermore, photogenerated electrons [ 99 ] from the photocatalyst can reduce heavy metal ions to their less toxic elemental forms, which can then be more easily removed from the wastewater . [ 98 ]
Green photocatalysts exhibit potent antibacterial properties due to their ability to generate ROS upon light irradiation . [ 100 ] These ROS , including hydroxyl radicals and superoxide radicals , can damage bacterial cell walls and membranes , leading to cell death . [ 101 ]
Several green photocatalysts have shown promising antibacterial activity . ZnO nanoparticles synthesized using plant extracts have demonstrated strong antibacterial activity against a wide range of bacteria , including E. coli and Staphylococcus aureus . [ 102 ] TiO 2 -based photocatalysts , particularly those doped with silver or copper , exhibit enhanced antibacterial properties under visible light irradiation , making them suitable for disinfection applications. [ 103 ] Potential applications of these materials include water disinfection and the creation of antibacterial surfaces. Green photocatalysts can be used to disinfect water by killing harmful bacteria , offering a sustainable alternative to conventional disinfection methods. [ 103 ] Incorporating them into coatings or surfaces can create self-sterilizing materials, reducing the risk of bacterial contamination in healthcare settings and other environments . [ 103 ]
Despite their sustainable origins, a thorough evaluation of the potential toxicity of green photocatalysts is essential to ensure their safe and responsible application in various settings. Even though they are synthesized from environmentally benign materials, their unique properties and nanoscale dimensions can potentially pose risks to human health and the environment. [ 107 ] It is crucial to assess the potential for adverse effects before widespread implementation of these materials in water treatment , air purification , or biomedical applications.
Various methods are employed to assess the potential toxicity of green photocatalysts . Eco-toxicity tests expose organisms such as algae , daphnia , or fish to varying concentrations of the photocatalyst to evaluate effects on growth, reproduction , or mortality . [ 108 ] These tests provide valuable insights into the potential impact of green photocatalysts on aquatic ecosystems . Cytotoxicity assays are conducted in laboratory settings using human cell lines to evaluate the potential toxicity of green photocatalysts to human cells . [ 109 ] [ 110 ] These assays help determine the potential for adverse effects on human health upon exposure to these materials.
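Eco-toxicity tests of the kind described above usually summarize a dose-response experiment with a statistic such as the EC50, the concentration producing a 50% effect. The Python sketch below estimates an EC50 by log-linear interpolation between the two test concentrations bracketing the 50% level; the dose-response data are hypothetical and do not come from any cited assay.

```python
# Minimal sketch: EC50 estimate by log-linear interpolation from a hypothetical
# algal growth-inhibition test. Concentrations in mg/L, effects as fractions.
import math

dose_response = [  # (photocatalyst concentration, observed inhibition) -- invented
    (1.0, 0.05), (3.2, 0.18), (10.0, 0.42), (32.0, 0.71), (100.0, 0.93),
]

def ec50(points, effect_level=0.5):
    """Concentration giving `effect_level`, interpolated on a log scale."""
    for (c_lo, e_lo), (c_hi, e_hi) in zip(points, points[1:]):
        if e_lo <= effect_level <= e_hi:
            frac = (effect_level - e_lo) / (e_hi - e_lo)
            log_c = math.log10(c_lo) + frac * (math.log10(c_hi) - math.log10(c_lo))
            return 10 ** log_c
    raise ValueError("effect level not bracketed by the test concentrations")

print(f"Estimated EC50: about {ec50(dose_response):.1f} mg/L")
```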
| https://en.wikipedia.org/wiki/Green_photocatalyst |
The Green report was written by Andrew Conway Ivy , a medical researcher and vice president of the University of Illinois at Chicago . Ivy was in charge of the medical school and its hospitals. The report justified malaria experiments conducted on prisoners at Statesville Prison in Joliet, Illinois , in the 1940s. Ivy mentioned the report during the 1946 Nuremberg Medical Trial for Nazi war criminals. [ 1 ] He used it to refute any similarity between human experimentation in the United States and that conducted by the Nazis. [ 2 ]
Malaria experiments in the Statesville Prison were publicized in the June 1945 edition of LIFE, entitled "Prisoners Expose Themselves to Malaria". [ 3 ]
When Ivy testified at the 1946 Nuremberg Medical Trial for Nazi war criminals, he misled the trial about the report in order to strengthen the prosecution's case. [ 4 ] Ivy stated that the committee had debated and issued the report, when in fact the committee had not met at that time. [ 1 ] [ 5 ] The committee was only formed when Ivy departed for Nuremberg, after he asked then Illinois Governor Dwight Green to convene a group that would advise on ethical considerations concerning medical experimentation . [ 6 ] One account states that he wrote the report on his own after citing its existence at the trial. [ 4 ] It was later published in the Journal of the American Medical Association (JAMA) .
| https://en.wikipedia.org/wiki/Green_report |
Sustainable refurbishment describes working on existing buildings to improve their environmental performance using sustainable methods and materials. A refurbishment or retrofit is defined as: "any work to a building over and above maintenance to change its capacity, function or performance' in other words, any intervention to adjust, reuse, or upgrade a building to suit new conditions or requirements". [ 1 ] Refurbishment can be done to a part of a building, an entire building, or a campus. [ 2 ] Sustainable refurbishment takes this a step further to modify the existing building to perform better in terms of its environmental impact and its occupants' environment.
Most sustainable refurbishments are also green retrofits : any refurbishment of an existing building that aims to reduce the carbon emissions and environmental impact of the building. [ 3 ] This can include improving the energy efficiency of the HVAC and other mechanical systems, increasing the quality of insulation in the building envelope , implementing sustainable energy generation, and aiming to improve occupant comfort and health. [ 4 ]
Green retrofits have become increasingly prominent with their inclusion in a number of building rating systems, such as the USGBC 's LEED for Existing Buildings: Operations & Maintenance, [ 5 ] Passive House EnerPHit, [ 6 ] and Green Globes for Existing Buildings. [ 7 ] Some governments offer funding towards green retrofits as existing buildings make up a majority of operational buildings and have been identified as a growing area of consideration in the fight against climate change . [ 8 ]
Sustainable refurbishment is the equivalent of sustainable development which relates to new developments of cities, buildings or industries etc. [ 4 ] Sustainable refurbishment includes insulation and related measures to reduce the energy consumption of buildings, installation of renewable energy sources such as solar water heating and photovoltaics , measures to reduce water consumption , and changes to reduce over heating , improve ventilation and improve internal comfort. [ 9 ] The process of sustainable refurbishment includes minimizing the waste of existing components, recycling and using environmentally friendly materials, and minimizing energy use , noise and waste during the refurbishment. [ 10 ]
The importance of sustainable refurbishment is that the majority of buildings in use are not new and thus were constructed when energy standards were low or non-existent, and are otherwise incompatible with current standards or the expectations of users. [ 11 ] Much of the existing building stock is likely to be in use for many years to come since demolition and replacement is often unacceptable owing to cost, social disruption or because the building is of architectural and/or historical interest. The solution is to refurbish or renovate such buildings to make them appropriate for current and future use and to satisfy current requirements and standards of energy use and comfort. [ 10 ]
Sustainable refurbishment is not a new concept but is gaining recognition and importance owing to current concerns about high energy use leading to climate change , overheating in buildings, the need for healthy internal environments, waste and environmental damage associated with materials production. [ 9 ] Many governments are beginning to realize the importance of sustainably renovating their existing building stock, rather than just raising standards for new buildings and developments, and are producing guidance and grants and other support and stimulation activities. [ 12 ] Think-tanks, lobby groups and voluntary organizations continue to publicize and promote the need for and practice of sustainable refurbishment. For these reasons, multiple countries have started to develop demonstration projects.
The techniques of sustainable refurbishment have been developing over many years and though the principles are very similar to those used on new buildings, the practice and details appropriate for the wide range of situations found in old buildings has required development of specific solutions and guidance to optimize the process and avoid subsequent problems. [ 13 ] Recently, many government-sponsored sources have developed technical guidance regarding sustainable refurbishment practices and have published their findings.
Most retrofits can be considered somewhat "green" because rather than constructing a new building, an existing one is improved. [ 14 ] This saves resources that would otherwise be used to build an entirely new structure. A green retrofit typically aims to incorporate sustainability and save energy costs with each design decision.
Retrofitting a building inherently carries the constraints of the existing building and site. For example, the orientation of a building with regard to the sun has a great impact on its energy performance, but it is generally not within the scope of a retrofit to rotate the building. Budgetary constraints also often impact the energy conservation measures proposed. [ 15 ]
Until recently, green retrofits have generally been considered as one-off projects for specific buildings or clients, but given the increased emphasis on improving the energy efficiency of existing building stock in the face of climate change , they are beginning to be reviewed systematically and at scale. [ 14 ] [ 16 ] The main challenge this presents for governments and advocacy groups is that the existing building stock is characterized by different uses, located in disparate climatic areas, and uses different construction traditions and system technologies. [ 17 ] Because of these disparities, it is difficult to characterize strategies that apply to all buildings.
Green retrofits have recently garnered considerable research attention due to government emphasis on retrofitting old building stock to address climate change. It is estimated that, at any given time, up to half of the building stock is over 40 years old. [ 18 ] Older buildings have significantly worse energy performance than their modern counterparts due to shortcomings in their design, deterioration in mechanical system efficiency, and increases in envelope permeability. The energy use intensity of houses in the United States dropped 9% from 1985 to 2004 due to improvements in end-use energy efficiency and code improvements. [ 19 ] However, this is offset by the overall increase in the total number of houses.
One of the objectives of the United Nations Framework Convention on Climate Change (UNFCCC) is the mitigation of greenhouse gas emissions that contribute to climate change. More specifically, the UN supports the immediate reduction of building-related greenhouse gas emissions. [ 20 ] Building refurbishment plays a key role in the decarbonization of the current building stock. [ 21 ] Other than tearing down existing buildings, it is the only way to improve building performance or to develop zero-emission buildings. [ 20 ] Energy-efficient refurbishments are a tool to reduce energy consumption in buildings, [ 22 ] which will result in lower greenhouse gas emissions and resource use. [ 2 ] Studies present the significance of the possible impact of widespread refurbishment implementation on individual GHG emissions, but also worldwide emissions and energy consumption. [ 23 ]
Social sustainability relates to the impacts of a building on the surrounding or occupying society, community, and individuals. [ 2 ] This is considered in environmental impact assessment tools, such as life cycle assessment (LCA). Sustainable refurbishment integrates economic, social, and environmental needs to improve upon the existing building conditions. [ 22 ] For example, sustainable buildings are socially sustainable because they are healthier for occupants due to the use of materials that do not negatively impact health. [ 2 ]
Organizations such as 'Her Retrofit Space' in the UK focus on empowering women professionals in the retrofit industry. By providing resources, training access, and networking opportunities, they aim to close the gender gap in the industry while advancing sustainable refurbishment practices. With the need to expand the retrofit workforce in many countries, encouraging women into the industry is an important aspect of driving workforce growth.
The indoor environmental quality of the existing building stock is often more unsatisfactory and unhealthy than the outdoor environment due to the design and materials used. The leading argument for sustainable refurbishment, and sustainable building in general, is the belief that green buildings are healthier and more satisfactory for occupants. [ 2 ] The specifications for sustainable refurbishments take measures to ensure that the materials and building framework do not emit dangerous particulates and gases, such as sulfur dioxide and nitrogen dioxide, into the indoor environment, and further measures are taken to filter indoor air for inhabitants. [ 22 ] [ 24 ] The "Citizen's Healthcare Principle" states that sustainable refurbishments must ensure that buildings are safe and improve living quality for those inside. [ 22 ] The refurbishment design must consider both the indoor microclimate and the external environment around the building when developing the program. [ 22 ] The microclimate parameters that should be considered include:
The preservation of historic buildings is inherently sustainable since it maximizes the lifespan of existing materials and infrastructure. [ 20 ] Conserving the materials and existing structures reduces waste and preserves the character of small historic communities. Many argue that the epitome of sustainability is to not build at all, which equates to preservation and refurbishment.
Many resources, such as non-renewable energy sources, are finite. Non-sustainable buildings also consume a significant amount of non-renewable energy in their operation, relative to other industries. It is therefore necessary to increase the energy efficiency of the existing building stock to reduce non-renewable energy consumption, or even to replace that consumption with renewable sources entirely. [ 20 ] Efficiency can be improved through sustainable building refurbishments that modify the building systems and building operations. As buildings become more energy efficient, it is increasingly important to look at the life cycle impact of the materials that make up the building. [ 21 ] Designing and constructing buildings that are not sustainable for long lifetimes leads to the construction and demolition of buildings with short lifetimes, which wastes construction materials that are not used for their entire lifetime capacity. Reusing existing buildings enables building owners to make use of the embodied energy that is already invested in the building, rather than wasting that embodied carbon and consuming more with new construction. [ 2 ]
This section lays out a timeline and progression of the development of the goals of sustainable refurbishment from a few different authors:
The main goals of "sustainable development", by Baldwin in 1996, include minimizing the impact on human health and the environment, optimal use of non-renewable resources, utilizing renewable resources, and future planning and adaptability. [ 22 ] [ 25 ] Minimizing impact on the climate and ecological system is achieved through a reduction of emissions of greenhouse gasses, which is connected to the other goal of optimizing the use of non-renewable resources. By reducing the non-renewable energy sources used to construct and operate buildings, the embodied greenhouse gas emissions from buildings are also reduced. The refurbishment could also attempt to protect and enhance local ecology through landscape architecture. [ 22 ] Human health is preserved by increasing ventilation and air filtration of indoor spaces and by avoiding potentially harmful construction materials that can impact respiratory health. This can also be achieved by encouraging the reuse or recycling of materials to reduce or eliminate material waste. The goal of renewable resource use can be achieved in a refurbishment by electrifying the home's heating and cooling systems, by installing on-site renewables generation and storage, or by using renewable resource products as building materials – like timber. [ 22 ] Building for the future can be achieved through refurbishment by making an existing building more durable and extending the previous lifespan of the building.
In 1996, Keeping and Shiers described the goals of "green refurbishments" as having three parts. [ 22 ] [ 26 ] The first part includes lower utility costs since less energy is consumed due to a combination of efficient and passive heating and cooling systems. The second part ensures lower maintenance costs since the refurbished systems are simpler and were installed to be accessible for repairs. Finally, the third portion claims that buildings with green refurbishments are healthier and more comfortable for occupants. [ 22 ]
Ultimately, in 2006, Sitar et al. defined the principles of "sustainable refurbishments". [ 22 ] [ 27 ] The goals include decreasing the energy used during operation, which includes heating, cooling, ventilation, lighting, etc. Another goal is the utilization of both renewable energy sources and low-impact materials with regard to the indoor micro-environment as well as the exterior macro-environment. The principles also aim to improve living conditions in terms of human health, user-friendly controls, and adaptability for future needs. [ 22 ] All of this is to be achieved through innovative planning that develops a design which is environmentally, economically, and socially beneficial.
Sustainable refurbishments aim to reach "total building performance optimization" through the integration of multiple systems throughout the building and the community. [ 20 ] The refurbishment not only decreases energy consumption but also improves occupant comfort in terms of noise, temperature, lighting, etc. It extends the life cycle of the building, reduces environmental impact, and creates healthy occupant conditions. [ 22 ]
Sustainable refurbishments aim to minimize the negative environmental impacts of the renovation by reducing quantities of harmful materials, utilizing energy-saving technology, and retrofitting the building for renewable energy use, as opposed to non-renewable energy sources. [ 22 ] Some responsible environmental measures that can be incorporated in a building retrofit include energy and water efficiency, waste reduction and recycling, use of low environmental impact materials, and effective building operation. [ 2 ] An example of energy-efficient designs could include high-efficiency lighting and smart controls. Similarly, an example of water-efficient design could include dual-flushing toilets, greywater recycling, or aerating water fixtures.
A sustainable retrofit ensures that the energy performance of the building after renovation is significantly better than it was before the work. The increase in energy performance must meet the current building regulations for new buildings. [ 23 ] The sustainable approach is to design even beyond the code minimum and to plan for future requirements. A deep energy refurbishment should include the integration of on-site energy generation from renewable sources, with the goal of developing a nearly zero-energy building. The energy efficiency gained through the architectural retrofit makes the integration of renewable energy sources cost-effective. [ 23 ] A study in the United Kingdom showed that, after a refurbishment, buildings had lower operating costs even if sustainability was not a priority of the retrofit. [ 2 ]
A study of energy refurbishments of residential buildings showed that the refurbishment led to an average thermal energy savings of 59% during the heating season. [ 23 ] The savings consisted of 25% from thermal insulation addition in exterior walls and floor, 10% from window insulation improvements, 6% from a reduction in air exchange, and 18% from the installation of heating controls. [ 23 ] The retrofit of the building envelope and operation reduced the energy consumption of the building, the associated greenhouse gas emissions during the operation phase, and the overall environmental impact. [ 21 ] [ 2 ]
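The component-by-component breakdown reported in that study lends itself to a simple check, since the individual measures sum to the overall saving. The Python sketch below reproduces that arithmetic and applies it to a hypothetical baseline heating demand; the baseline figure is an assumption for illustration only.

```python
# Minimal sketch: combine the per-measure heating-season savings reported above
# and apply them to a hypothetical baseline demand. Baseline is assumed.

savings_share = {
    "wall and floor insulation": 0.25,
    "window improvements": 0.10,
    "reduced air exchange": 0.06,
    "heating controls": 0.18,
}

baseline_kwh = 25_000  # assumed annual heating demand before refurbishment

total_share = sum(savings_share.values())  # 0.59, matching the 59% reported
print(f"Total heating-season saving: {total_share:.0%}")
print(f"Estimated energy saved: {baseline_kwh * total_share:,.0f} kWh/year")
print(f"Post-refurbishment demand: {baseline_kwh * (1 - total_share):,.0f} kWh/year")
```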
Sustainable refurbishment ensures energy efficiency by improving the following systems: [ 22 ]
There are typical strategies to improve upon each of the above systems. Innovative insulation materials can be utilized that have a lower environmental impact, but even non-sustainable insulation additions can improve the energy performance of the building. [ 22 ] The building envelope can be improved by replacing the existing windows with efficient windows in terms of thermal bridging and optimizing solar gain. [ 22 ] The use of passive ventilation strategies or hybrid systems that use both passive and active strategies reduces the energy required for conditioning. [ 22 ] Buildings' heating and cooling systems can be powered with solar energy, or even use solar-heated water, which both reduce non-renewable energy consumption through heating and cooling. [ 22 ] Finally, the electricity used for lighting can be reduced by optimizing daylighting in occupiable spaces. [ 22 ]
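One way to see why insulation and window upgrades dominate such retrofits is through steady-state transmission heat loss, Q = U × A × ΔT, summed over the envelope. The Python sketch below compares a hypothetical envelope before and after refurbishment; all areas and U-values are illustrative assumptions rather than figures from any cited project.

```python
# Minimal sketch: steady-state transmission heat loss before and after an
# envelope refurbishment, Q = U * A * deltaT. All inputs are hypothetical.

delta_t = 20.0  # assumed indoor-outdoor temperature difference in kelvin

# (element, area in m^2, U-value before in W/m^2K, U-value after in W/m^2K)
envelope = [
    ("external walls", 180.0, 1.5, 0.30),
    ("roof",            90.0, 1.2, 0.20),
    ("windows",         30.0, 2.8, 1.20),
    ("ground floor",    90.0, 1.0, 0.35),
]

before_w = sum(area * u_old * delta_t for _, area, u_old, _ in envelope)
after_w = sum(area * u_new * delta_t for _, area, _, u_new in envelope)

print(f"Transmission loss before: {before_w / 1000:.1f} kW")
print(f"Transmission loss after:  {after_w / 1000:.1f} kW "
      f"({1 - after_w / before_w:.0%} reduction)")
```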
The overall environmental impact of a sustainable refurbishment is highly dependent on the material choices for the refurbishment. [ 21 ] The Rational Resources Principle for refurbishment encourages the efficient use of construction materials and natural resources. [ 22 ] This is quantified through life cycle analysis that measures the impact of a material over its lifetime, which stretches into the "D phase" that includes end-of-life waste after the building is demolished. [ 21 ] [ 22 ] Waste transportation adds costs to both construction and building maintenance. Designing refurbishments that reduce waste, and maximize reuse, minimize waste hauling costs in the short-term and long-term by using materials for their entire intended life. [ 22 ]
When materials can be compared on the basis of a common bottom line, through life cycle analysis, an optimum path for the design appears. [ 21 ] Ultimately, since the refurbishment will require additional construction materials, there will be some negative environmental impact, but the aim of sustainable refurbishment is to minimize these impacts. [ 21 ] For example, reusing on-site timber, using reclaimed timber, and using timber from renewable certified sources are sustainable material choices that take advantage of the embodied carbon already invested in those materials. [ 2 ] As mentioned previously, the life cycle analysis of building materials' primary energy demand and global warming potential is becoming more important as buildings consume less energy during their operation. [ 21 ] The human health impact of materials is also included in their life cycle assessment, meaning that a construction material cannot be sustainable if it harms occupants. Therefore, sustainable refurbishments should not include adhesives, paints, or glues that emit volatile organic compounds into the indoor air of the building. [ 2 ] Materials that harm indoor occupants or the exterior ecology are therefore not utilized in sustainable refurbishments. [ 24 ]
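At the screening stage, a life cycle comparison of material options often reduces to multiplying quantities by embodied-carbon factors. The Python sketch below compares two hypothetical options for the same refurbishment element; the quantities and emission factors are placeholders for illustration, not values from any LCA database.

```python
# Minimal sketch: screening-level embodied-carbon comparison of two material
# options for one refurbishment element. All factors are placeholder values.

options = {
    # option name -> list of (material, quantity in kg, kg CO2e per kg)
    "new steel frame": [
        ("steel sections", 2_000, 2.0),
    ],
    "reclaimed timber frame": [
        ("reclaimed timber", 1_500, 0.1),  # low factor reflects reuse
        ("new fixings", 50, 2.0),
    ],
}

def embodied_carbon_kg(bill):
    """Sum of quantity x emission factor over a bill of materials."""
    return sum(qty * factor for _, qty, factor in bill)

for name, bill in options.items():
    print(f"{name}: {embodied_carbon_kg(bill):,.0f} kg CO2e")
```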
A conceptual model of sustainable refurbishment spans several dimensions: technical, economic, architectural, social, ecological, and cultural. [ 22 ] The dimensions are all related and influence each other and the refurbishment design itself. [ 22 ]
The principle of sustainable refurbishment should be incorporated into the project development from the first blueprint through building commissioning and turnover. This section provides a generalized description of steps in the design process for a sustainable refurbishment: [ 22 ]
1. Data collection: This includes analyzing the problem formulation and developing project goals.
2. Determination of the degree of refurbishment necessary: Multiple conditions of the building must be investigated before modeling can be done, including physical deterioration, the presence of moisture, [ 20 ] any thermal bridging, [ 20 ] whether current code requirements are met, whether energy demand and consumption are high, the quality of the indoor environment and air, the quality of the outdoor air, [ 20 ] and whether the building has unsatisfied occupants.
3. Modeling phase: In order to develop a model, architects must first analyze the data collected, develop criteria on which to base the comparison of alternatives, and develop design alternatives (considering stakeholders and looking at best practices).
4. Selection Phase: Once multiple options are available for the building, architects must evaluate alternative layouts, address strengths and weaknesses, and then ultimately choose a recommendation and optimize the chosen design.
5. Implementation Phase: The broader environment impacted by the refurbishment is often accounted for during the modeling phase, and decisions are then made based on this context. [ 22 ] The social and political conditions of the community also need to be considered, especially the living conditions and standards of those within the buildings being updated. Finally, the ecological conditions need to be considered, including the average temperature, humidity, soil quality , natural resources, and topography. [ 22 ] With all of these conditions in mind, the necessary changes can finally be implemented.
Green retrofits utilize an integrated design strategy . [ 28 ] This is in opposition to the traditional waterfall design strategy, in which architects, engineers, and contractors operate independently from one another. In an integrated design strategy, these teams work together to leverage their areas of expertise and solve design problems while also considering the building as a whole. This is imperative for a green retrofit, where the design solutions are often constrained by the existing site. This could relate to the orientation and geometry of the existing building form, the size of the site, or the installation requirements of the existing and proposed mechanical systems. Because these constraints affect all aspects of building design, the only way sustainable, effective, and cost-efficient solutions can be synthesized is when project teams consider all these aspects from the project start.
Many sustainable building practices are passive and can be automated, like insulation or light controls. Others depend on the behavior of the building's occupants to realize their full energy efficiency potential. An energy-efficient heating system does very little good if the windows are left open in winter. Per Ascione et al., "the first lever of energy efficiency is a proper energy education of users". [ 29 ] Green retrofits can involve training building occupants in sustainable practices and building systems that they'll interact with, which helps ensure that any energy conservation measures used will reach their full design potential. Training can be handled by system manufacturers or the project design team.
Smart energy management can also be implemented in sustainable buildings through AI-driven HVAC, climate control, and adaptive lighting. With AI, these building functions can be controlled based on activity patterns to manage energy more efficiently. Additionally, behavior-responsive water fixtures can be implemented to limit water usage based on user habits. [ 30 ]
One of the most common forms of a green retrofit is a full or partial lighting retrofit. A lighting retrofit usually consists of replacing all or some of the lightbulbs in a building with newer, more efficient models. [ 31 ] This can also include changing light fixtures, ballasts , and drivers. LED bulbs are generally the preferred choice in a lighting retrofit because of their greatly increased efficiency compared to incandescent bulbs, but other types of bulbs like compact fluorescent or metal halides may be used as well.
Lighting retrofits are a popular form of green retrofit because, compared to other methods of improving energy efficiency, they are relatively straightforward to plan and execute, and the energy savings often provide a quick return on investment. [ 32 ] Most modern LED and compact fluorescent bulbs are designed to work with existing light fixtures and rarely involve more work than removing the old bulb and screwing in a new one. The installation is also relatively quick compared to more invasive energy conservation measures.
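A simple payback calculation illustrates why lamp replacements return their cost quickly; the figures below are hypothetical and would be replaced with local lamp costs, electricity prices, and operating hours.

```python
# Simple payback for swapping one incandescent lamp for an LED.
# All inputs are hypothetical placeholders.
old_watts, new_watts = 60, 9        # lamp power draw (W)
hours_per_year = 3000               # annual operating hours
electricity_price = 0.15            # currency units per kWh
incremental_lamp_cost = 4.00        # extra cost of the LED lamp

annual_kwh_saved = (old_watts - new_watts) * hours_per_year / 1000
annual_cost_saved = annual_kwh_saved * electricity_price
simple_payback_years = incremental_lamp_cost / annual_cost_saved

print(f"Energy saved: {annual_kwh_saved:.0f} kWh per year")
print(f"Simple payback: {simple_payback_years:.2f} years")  # well under one year here
```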
Lighting retrofits can also include implementing new lighting controls like occupancy sensors, daylight sensors, and timers. When correctly implemented, these controls can reduce the demand for lighting. However, due to the complicated nature of lighting controls, there is debate as to whether or not they are an effective energy conserving measure because of the prevalence of over-optimistic energy usage reduction estimates and the difficulty in predicting the actions of human occupants. [ 33 ]
Heating, ventilation, and air conditioning ( HVAC ) account for around 50% of a building's operating energy consumption, and HVAC retrofits can account for 40-70% of energy savings. [ 34 ] [ 15 ] Reducing this consumption can provide both energy and cost savings, so it is the main focus of many green retrofits, especially in colder climates where heating accounts for over 60% of energy use. [ 35 ] The heating system, cooling system, air handling systems, humidification systems, and ductwork in the building are often considered. [ 36 ]
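Combining the two cited figures gives a rough bound on whole-building savings from an HVAC retrofit; the sketch below is only an order-of-magnitude illustration of that multiplication.

```python
# Rough whole-building savings from an HVAC retrofit, combining the cited
# figures: HVAC is about 50% of operating energy, and HVAC retrofits can
# save roughly 40-70% of HVAC energy. Illustrative arithmetic only.
hvac_share_of_building_energy = 0.50
hvac_retrofit_savings = (0.40, 0.70)

low, high = (hvac_share_of_building_energy * s for s in hvac_retrofit_savings)
print(f"Whole-building savings: roughly {low:.0%} to {high:.0%}")  # 20% to 35%
```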
Heat recovery ventilation is recommended for newly air-sealed homes as it uses the heat from the warm, moist, stale air that is being vented from the home to warm the cool, fresh, and filtered air that is entering the home. This allows for minimal heat loss while mitigating concerns of carbon monoxide poisoning, radon gas, or harmful particulates accumulating in the home. [ 37 ]
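The benefit of heat recovery ventilation is commonly summarized by a sensible heat-recovery effectiveness, the fraction of the indoor-outdoor temperature difference recovered by the incoming air; the temperatures and effectiveness values below are illustrative assumptions, not performance data for any particular unit.

```python
# Supply-air temperature from a heat recovery ventilator using the standard
# sensible-effectiveness relation. Temperatures and effectiveness values
# are illustrative assumptions.
def supply_air_temp(t_outdoor: float, t_exhaust: float, effectiveness: float) -> float:
    """T_supply = T_outdoor + effectiveness * (T_exhaust - T_outdoor)."""
    return t_outdoor + effectiveness * (t_exhaust - t_outdoor)

t_outdoor, t_indoor = -5.0, 21.0        # degrees Celsius
for eff in (0.0, 0.70, 0.85):           # 0 = no heat recovery
    t_supply = supply_air_temp(t_outdoor, t_indoor, eff)
    print(f"effectiveness {eff:.2f}: supply air {t_supply:.1f} °C")
```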
Other green HVAC retrofits can include implementing a newer, more efficient model of the same type as the existing system, such as replacing an old water boiler with a more efficient one to feed a hydronic heating system. Sometimes a larger system overhaul is merited—for example, exchanging an old boiler for a newer ground or air source heat pump system.
Thermal insulation and building envelope performance are key to the overall energy performance of any building. [ 38 ] Many older buildings are not insulated up to current standards, let alone up to the standards recommended in many green building rating systems. Many of these buildings spend energy and money heating, cooling, or conditioning the air inside them only to see it seep out through leaks in the building envelope or through poorly insulated windows.
During many green retrofits, the first step towards improving the building's envelope is to evaluate its current shortcomings. Air-sealing is an easily accessible and cost-efficient way to improve the energy efficiency of a home that is mechanically heated or cooled. Caulking can be used to fill gaps in immobile areas like window and door frames or around poorly sealed appliances. Weather stripping can be used where moving parts meet, such as the area between the door and the doorframe or windows that can open. These drafty areas can be found by feeling for temperature differences and drafts on days when the temperature inside the house is dramatically different from the temperature outside, burning incense and watching how the smoke moves to detect drafts, or hiring a professional to perform a blower door test. [ 39 ] In a blower door test, a door with a fan and a gauge is installed into one of the doorways and the house is depressurized. The gauge can then measure the air changes per hour (ACH), or how many times the volume of air in the house is completely replaced in one hour. The draftier a house is, the higher the air changes per hour will be.
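The ACH figure reported by a blower door test is simply the measured airflow per hour divided by the conditioned volume; the numbers in the sketch below are hypothetical.

```python
# Air changes per hour (ACH) from a blower-door reading, i.e. airflow per
# hour divided by the conditioned volume. Input values are hypothetical.
measured_airflow_cfm = 1200          # cubic feet per minute at test pressure
house_volume_ft3 = 16000             # conditioned volume in cubic feet

ach = measured_airflow_cfm * 60 / house_volume_ft3
print(f"ACH at test pressure: {ach:.1f} air changes per hour")  # 4.5 here
```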
Windows are the weakest point of insulation in a building's envelope and contribute greatly to how thermally effective that envelope is. [ 40 ] Because of this, windows are another common area of focus for a green retrofit. Similar to a lighting retrofit, windows are a relatively straightforward aspect of a building to retrofit, with easy-to-calculate payback periods. Modern, efficient windows are generally sized for existing window openings and can usually be installed without much additional work on the building envelope.
Most green retrofits will replace older single-pane windows with more efficient triple-paned varieties that are filled with an inert gas such as argon or krypton. [ 41 ] These windows have greater R-values , so they insulate a space far better than single-pane windows. Some windows have low-e coatings to control the solar heat gain coefficient .
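The insulating effect can be illustrated with the steady-state conduction relation Q = U x A x dT, where the R-value is the reciprocal of the U-factor; the U-factors below are generic order-of-magnitude assumptions rather than ratings of any specific product.

```python
# Rough steady-state conduction loss through a window: Q = U * A * dT.
# The R-value is 1/U. U-factors below are generic assumptions, not
# ratings of specific products.
def heat_loss_watts(u_value: float, area_m2: float, delta_t: float) -> float:
    return u_value * area_m2 * delta_t

area_m2 = 1.5          # window area
delta_t = 25.0         # indoor-outdoor temperature difference (K)
windows = {
    "single pane (U ~ 5.5 W/m2K)": 5.5,
    "triple pane, argon fill (U ~ 0.8 W/m2K)": 0.8,
}

for name, u in windows.items():
    print(f"{name}: {heat_loss_watts(u, area_m2, delta_t):.0f} W lost")
```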
Green roofs , also called living roofs, have a number of major benefits, including reducing stormwater runoff and urban heat island effect, increasing roof insulation, improving building acoustics , [ 43 ] and providing biodiversity. [ 44 ]
There are many factors to account for when considering a green roof for a green retrofit. Extensive green roofs use a thin substrate layer for the often shorter vegetation that needs less room for roots to grow. Intensive green roofs use a thicker growing substrate to accommodate larger plant species that require more space for their roots. Semi-intensive green roofs fall somewhere in between the two. The strength of the existing structure must be considered; many existing structures were not designed for an intensive green roof, which can carry a considerable structural load. The existing roof also needs to be evaluated for stripping or re-waterproofing. Some roofs can simply be laid over with sedum mats, while others require additional work to prepare. A peaked or sloped roof does not preclude the installation of a green roofing system but can influence the installation costs and product choices available.
In general, older buildings with lower existing insulation values benefit the most from green roof retrofits, and where there are no modifications necessary to install one, green roofs have been shown to have many benefits. [ 45 ] [ 46 ]
Passive design is a design strategy that uses the shape and placement of the architecture and landscaping to heat, cool, light, ventilate, and sometimes provide power to the building. Often, this impacts the shape of the building envelope, the orientation of the building, and the placement of the building. The shape of the building can also create microclimates in which the building is designed to trap heat or funnel breezes for warming in the winter or cooling in the summer. While these passive design elements are more often applied in newly built green buildings, passive design can still be a consideration in green retrofits. For example, if there are windows that receive very little sunlight in the winter or a large amount of sunlight in the summer, those may be replaced first to reduce an undesirable amount of heat lost in the winter or gained in the summer. Using landscaping, such as planting a deciduous tree in front of south-facing windows to maximize solar heat gain in the winter while shading the windows in the summer, is also an example of passive design. [ 47 ]
A 2019 case study in Vienna explored the impact of a sustainable refurbishment that included a Multi-Active Façade System. [ 21 ] The assumption of the study was that the improvement of the outermost layer of the structure, the façade shell, was the most important regarding energy efficiency. [ 21 ] Insulation, specifically, was a major contributor to energy savings during building operation, and a life cycle analysis was required to make an informed decision about the insulation material. [ 21 ] The façade system in this study reduced the building's energy demand with insulation and corrugated board, which passively increased the solar gain in the winter, when extra heat was required to minimize energy consumption, and reduced the solar gain during the summer. [ 21 ] This was achieved by installing the façade at a strategic angle that allows solar radiation to pass through only when the sun is at its lower winter angle. The façade also integrated renewable energy generation into the shell itself, as well as energy storage for when there is no active solar radiation. [ 21 ] After the sustainable refurbishment with the new façade, the heating demand for the building was modeled to be about 53% less than the baseline value. [ 21 ] The low energy demand even exceeded the new building standard requirements for 2021 by about 45%, making the design adaptable and resilient for the future. [ 21 ]
Possible benefits of green retrofits include, but are not limited to: improved energy security, reduced air pollution, improved indoor air quality and occupant health through technology such as MVHR, reduced greenhouse gas emissions and impact on climate change, increased thermal comfort, generation of local jobs, and reduction of peak electrical demand.
Possible barriers to green retrofits include: initial cost and financing, lack of knowledge and experience of the designers, architects, construction workers, inspectors, and financial institutions involved in the project, [ 15 ] building code regulations, and lack of consumer interest.
The scope of a green retrofit can vary widely. It can involve specific building systems, like the lighting , or can be a full renovation of all non-structural components. While a lighting retrofit is straightforward to execute and relatively unobtrusive to building occupants, it will not generally carry as much of a benefit or cost as an insulation retrofit. When weighing the benefits and costs of a green retrofit, each of these components is typically considered as part of the project as a whole.
While green retrofits do have an up-front cost, the amount depends on how extensive the retrofits are. [ 48 ] Likewise, the kind of retrofit that is implemented will also impact how fast the investment is returned in savings. The economic feasibility of a green retrofit depends on the state of installed systems of the existing building, the proposed design, the energy costs of the local utility grid, and the climatic conditions of the site. Any economic incentives granted will depend on what country or state the project is in. These incentives differ regionally and can affect the total project feasibility. In Ireland, for example, "shallow" green retrofits have been found to be economically feasible, but "deep" retrofits are often not feasible without government grant aid to offset the initial capital costs. [ 49 ]
The EU has found that implementing green retrofit programs comes with the benefit of "energy security, job creation, fuel poverty alleviation, health and indoor comfort". [ 17 ]
Green retrofits can carry benefits such as the re-use of existing building material . Concrete and steel have some of the highest embodied energy impacts of any building material and can account for up to 60% of the carbon used in the construction of a building. [ 50 ] [ 51 ] They are primarily used in the structure of a building, which usually remains untouched in retrofits.
Most types of green retrofit introduce new building materials into the space which can themselves emit harmful indoor air pollutants . The amount, type, and exposure to these pollutants will depend on the material itself, what it is used for, and how it is installed. Often, green retrofits call for sealing leaks in the building envelope to prevent the escape of conditioned air, but if this is not offset by an increase in ventilation, it can contribute to higher concentrations of indoor air pollutants in the building. [ 52 ]
This gap in building improvement is increasingly being addressed by policymakers to avoid an environmental justice issue with a "two-tiered" market. [ 2 ] The premium stock has high rental prices which incentivize owners to invest in it further, which is not the case for the lower quality stock. Many argue that it is unjust that only occupants who can afford premium housing get to live in the healthy and comfortable environment ensured by a sustainable refurbishment. The Affordability Principle states that sustainable refurbishments should be affordable for the general population. [ 22 ] Making buildings more efficient may also reduce health disparities and carbon emissions, thus ensuring a more equitable future for all. [ 53 ] In addition, many call for information about sustainable refurbishment to be shared and freely available to people of all income levels, ages, and races, so that everyone has an equal opportunity to live in better conditions.
There is criticism of the efficacy of sustainable refurbishment in terms of decarbonizing the current building stock. This criticism is directed specifically toward large-scale energy refurbishments of industrial structures. One could argue that the embodied and operational carbon of those types of buildings is significantly larger than that of the smaller residential or office buildings discussed in this article. However, the technology to efficiently heat, cool, and power these structures does not yet exist, and they cannot rely completely on passive strategies due to more stringent code restrictions. [ 23 ] It cannot be expected that these large-impact buildings will be refurbished if it cannot be done economically. This argument tempers the decrease in global energy consumption that researchers hope sustainable refurbishment can deliver.
Another criticism of sustainable refurbishments is that not all existing buildings are good candidates for refurbishment. Put plainly, it is challenging to improve on buildings that were poorly designed from the start. Floor plans with typical, deep shapes have been shown to be more adaptable than irregular designs. [ 2 ] Similarly, floor-to-floor heights affect the designer's and contractor's ability to modify utility ducts, meaning that buildings with greater floor heights are easier to refurbish. [ 23 ] Research also shows that structures that qualify as "higher grade" building stock experience greater levels and frequency of sustainable refurbishment. [ 2 ] There seem to be a number of reasons for this, one being that premium buildings undergo retrofit earlier in their lifecycle in order to compete with newer sustainable buildings. [ 2 ] It can be argued that it is not sustainable to replace building systems early in their lifecycle, only to invest in additional embodied carbon and discard the old equipment into a landfill. However, there is an opportunity for "young" removed materials to be utilized in lower-quality refurbishments in low-income communities. [ 2 ] In an Australian study using data from 2007, it was found that about 89% of all premium retrofits were to buildings that were less than 25 years old, with the remaining 11% aged between 26 and 50 years old. [ 2 ] The same study showed that no refurbishments occurred in the "least desirable" stock locations. [ 2 ]
According to a study published in the Journal of International Affairs, 41% of citizens from urban areas, 34% of citizens from suburban areas, and 46% of citizens from rural areas have a high interest in smart street lighting. Smart street lighting refers to street lighting systems that use technology to improve public street lighting efficiency, safety, and management with energy-efficient LEDs that utilize sensors and wireless connectivity to allow for the remote control and monitoring of lighting levels based on real-time conditions. With these systems, the brightness of street lights is automatically adjusted based on conditions such as the time of day, level of traffic, and environmental conditions. In short, smart street lighting saves energy by managing lighting output to match actual need. In addition, 52% of citizens from both urban and suburban areas, and 53% of citizens living in rural areas, are interested in free public Wi-Fi networks as part of smart city initiatives. Free public Wi-Fi networks support the various initiatives within smart cities and enable digital inclusion by bridging the digital divide and enhancing economic opportunities. Further, 31% of citizens from urban areas, 23% of citizens from suburban areas, and 31% of citizens from rural areas are interested in smart cities implementing monitoring for critical systems. Critical systems monitoring refers to the real-time tracking and analysis of city functions using technology in order to identify and address potential issues within city operation systems. With the use of cameras, sensors, and devices, data is gathered on infrastructure, services, and public safety within urban areas, and rapid responses are formed to optimize city operations. Finally, 40% of citizens in urban areas, 23% of citizens in suburban areas, and 31% of citizens in rural areas are in favor of improved public transportation as part of smart cities. With technology, smart cities are able to improve public transportation systems by offering benefits such as real-time travel information, congestion control, and increased urban mobility. [ 54 ]
A study published in BIO Web of Conferences measured the initial carbon emissions of four cities, implemented smart city initiatives, and then measured the carbon emissions again once the technologies were in place. The study found a significant reduction in carbon emissions in all cities, especially those that had high initial emissions. For example, sample city B, which had initial emissions of 400,000 tons of carbon per year, had post-test emissions of 320,000 tons per year, a 20% reduction. Meanwhile, sample city D, which had initial emissions of 180,000 tons per year, saw post-test emissions of 160,000 tons per year, an 11% decrease. The difference is most likely explained by sample city B's much higher initial carbon emissions. [ 55 ]
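The reported reductions follow from a simple percentage-change calculation on the before and after figures, as in the short sketch below.

```python
# Percentage reduction in annual carbon emissions for the sample cities
# reported in the study cited above (tons of carbon per year).
cities = {
    "sample city B": (400_000, 320_000),
    "sample city D": (180_000, 160_000),
}

for name, (before, after) in cities.items():
    reduction = (before - after) / before
    print(f"{name}: {reduction:.0%} reduction")  # B: 20%, D: 11%
```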
A similar study published in BIO Web of Conferences measured the carbon reduction achieved by various types of initiative. Green energy projects saw carbon emission reductions of 250,000 tons per year, public transportation upgrades saw reductions of 320,000 tons per year, energy-efficient buildings saw reductions of 180,000 tons per year, and waste management improvements saw reductions of 150,000 tons per year. All of these smart city initiatives had a positive environmental impact, with high-frequency services such as public transportation producing the most substantial reductions. [ 55 ]
A green roof or living roof is a roof of a building that is partially or completely covered with vegetation and a growing medium, planted over a waterproofing membrane . It may also include additional layers such as a root barrier and drainage and irrigation systems. [ 1 ] Container gardens on roofs, where plants are maintained in pots, are not generally considered to be true green roofs, although this is debated. Rooftop ponds are another form of green roofs which are used to treat greywater . [ 2 ] Vegetation, soil, a drainage layer, a root barrier and an irrigation system constitute the green roof. [ 3 ]
Green roofs serve several purposes for a building, such as absorbing rainwater , providing insulation , creating a habitat for wildlife, [ 4 ] decreasing stress for people around the roof by providing a more aesthetically pleasing landscape, and helping to lower urban air temperatures and mitigate the heat island effect . [ 5 ] Green roofs are suitable for retrofit or redevelopment projects as well as new buildings and can be installed on small garages or larger industrial, commercial and municipal buildings. [ 1 ] They effectively use the natural functions of plants to filter water and treat air in urban and suburban landscapes. [ 6 ] There are two types of green roof: intensive roofs, which are thicker, with a minimum depth of 12.8 cm (5 1⁄16 in), can support a wider variety of plants, but are heavier and require more maintenance; and extensive roofs, which are shallower, ranging in depth from 2 to 12.7 cm (13⁄16 to 5 in), lighter than intensive green roofs, and require minimal maintenance. [ 7 ]
The term green roof may also be used to indicate roofs that use some form of green technology, such as a cool roof , a roof with solar thermal collectors or photovoltaic panels . Green roofs are also referred to as eco-roofs , oikosteges , vegetated roofs , living roofs , greenroofs and VCPH [ 8 ] (Horizontal Vegetated Complex Partitions).
Green roofs reduce building energy consumption. [ 9 ] They can reduce heating loads by adding mass and thermal resistance, and can reduce the heat island effect by increasing evapotranspiration . [ 10 ] A 2005 study by Brad Bass of the University of Toronto showed that green roofs can also reduce heat loss and energy consumption in winter conditions. [ 11 ] [ 12 ] A modeling study found that adding green roofs to 50 percent of the available surfaces in downtown Toronto would cool the entire city by 0.1 to 0.8 °C (0.2 to 1.4 °F). [ 13 ]
Through evaporative cooling , a green roof can reduce cooling loads on a building by fifty to ninety percent, [ 14 ] especially if it is glassed-in so as to act as a terrarium and passive solar heat reservoir.
A concentration of green roofs in an urban area can reduce the city's average temperatures during the summer, combating the urban heat island effect . [ 15 ] Traditional building materials soak up the sun's radiation and re-emit it as heat, making cities at least 4 °C (7.2 °F) hotter than surrounding areas. On Chicago's City Hall, by contrast, which features a green roof, roof temperatures on a hot day are typically 1.4–4.4 °C (2.5–7.9 °F) cooler than they are on traditionally roofed buildings nearby. [ 16 ] Green roofs are becoming common in Chicago, as well as in Atlanta, Portland, and other United States cities, where their use is encouraged by regulations to combat the urban heat-island effect. Green roofs are a type of low impact development . [ 17 ] A 2023 meta-analysis found that green roofs reduce rooftop surface temperatures by an average of 30 °C during summer months, providing significant mitigation of urban heat island effects. [ 18 ] In the case of Chicago, the city has passed codes offering incentives to builders who put green roofs on their buildings. The Chicago City Hall green roof is one of the earliest and most well-known examples of green roofs in the United States; it was planted as an experiment to determine the effects a green roof would have on the microclimate of the roof. Following this and other studies, it has now been estimated that if all the roofs in a major city were greened, urban temperatures could be reduced by as much as 7 °C (13 °F). [ 19 ]
Green roofs can reduce stormwater runoff [ 20 ] [ 21 ] via water-wise gardening techniques. Green roofs play a significant role in retrofitting the Low Impact Development (LID) practices in urban areas. [ 22 ] A study presented at the Green Roofs for Healthy Cities Conference in June 2004, cited by the EPA, found water runoff was reduced by over 75% during rainstorms. [ 23 ] Water is stored by the roof's substrate and then taken up by the plants, from which it is returned to the atmosphere through transpiration and evaporation.
Green roofs decrease the total amount of runoff and slow the rate of runoff from the roof. It has been found that they can retain up to 75% of rainwater, gradually releasing it back into the atmosphere via evaporation and transpiration , while retaining pollutants in their soil. [ 23 ] Many green roofs are installed to comply with local regulations and government fees, often regarding stormwater runoff management. [ 24 ] In areas with combined sewer-stormwater systems , heavy storms can overload the wastewater system and cause it to flood, dumping raw sewage into the local waterways. Phosphorus and nitrogen are often among these environmentally harmful substances, even though they stimulate the growth of plants and crops: because they are limiting factors of plant growth, adding more of them to a water system can trigger excessive biological activity. [ 25 ]
Green roofs create natural habitat as part of an urban wilderness . [ 26 ] Even in high-rise urban settings as tall as 19 stories, it has been found that green roofs can attract beneficial insects, birds, bees and butterflies. A recent list of the bee species recorded from green roofs worldwide highlights both the diversity of species and the (expected) bias towards small ground-nesting species (Hofmann and Renner, 2017). Rooftop greenery complements wild areas by providing stepping stones for songbirds, migratory birds and other wildlife facing shortages of natural habitat . Bats have also been reported to be more active over green roofs due to the foraging opportunities these roofs provide. [ 27 ] Research at the Javits Center green roof in New York, NY, has shown a correlation between higher numbers of certain insects on the roof, particularly moths, and an increased amount of bat foraging activity. Research from 2023 in St. Louis, Missouri, showed that urban rooftop food gardens support diverse bee populations, enhancing urban pollination and biodiversity. [ 28 ]
Green roofs also serve as a green wall , filtering pollutants and carbon dioxide out of the air, helping to lower rates of diseases such as asthma. [ 29 ] They can also filter pollutants and heavy metals out of rainwater.
An additional environmental benefit of a green roof is the ability to sequester carbon. Carbon is the main component of plant matter and is naturally absorbed by plant tissue. The carbon is stored in the plant tissue and the soil substrate through plant litter and root exudates. [ 30 ] A study on green roofs in Michigan and Maryland found that the above-ground biomass and the below-ground substrate stored on average 168 g C m−2 and 107 g C m−2, respectively. Variations occurred among the different species of plant used. Substrate carbon content averaged 913 g C m−2; after subtracting the original carbon content, the total sequestration was 378 g C m−2. [ 31 ] Sequestration can be improved by changing plant species, increasing substrate depth, and adjusting substrate composition and management practices. In a study done in Michigan, above-ground sequestration ranged from 64 g C m−2 to 239 g C m−2 for S. acre and S. album . [ 31 ] Increasing the substrate depth would also allow for more carbon storage and a greater diversity of plant types with higher storage potential. Direct carbon sequestration can be measured and accounted for. Green roofs also indirectly reduce CO 2 given off by power plants through their ability to insulate buildings. [ 30 ] Buildings in the US account for 38% of the country's total carbon dioxide emissions. [ 32 ] A model supported by the U.S. Department of Energy found a 2 percent reduction in electricity consumption and a 9-11% reduction in natural gas consumption when green roofs were implemented. A 2023 comprehensive review likewise highlighted that green roofs contribute to carbon dioxide reduction through both direct sequestration and indirect mechanisms, such as decreasing building energy consumption and mitigating urban heat islands. [ 33 ]
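To put the per-square-metre figures in context, the net sequestration can be scaled by roof area and converted from carbon to CO2 using the molar mass ratio 44/12; the roof size below is a hypothetical example.

```python
# Scale the reported net sequestration (378 g C per square metre) to a
# hypothetical roof and convert carbon mass to CO2 mass (ratio 44/12).
net_sequestration_g_c_per_m2 = 378
roof_area_m2 = 1000                      # hypothetical extensive roof
c_to_co2 = 44 / 12

total_kg_c = net_sequestration_g_c_per_m2 * roof_area_m2 / 1000
total_kg_co2 = total_kg_c * c_to_co2
print(f"About {total_kg_c:.0f} kg of carbon, i.e. {total_kg_co2:.0f} kg of CO2")
```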
A properly designed and installed extensive green-roof system can cost $108–$248/m² ($10–$23/sq ft), while an intensive green roof costs $355–$2,368/m² ($33–$220/sq ft). However, since most of the materials used to build the green roof can be salvaged, it is estimated that the cost of replacing a green roof is generally one third of the initial installation costs. [ 35 ]
With the initial cost of installing a green roof in mind, there are many financial benefits that accompany green roofing.
The main disadvantage of green roofs is that the initial cost of installing a green roof can be double that of a normal roof. [ 43 ] Depending on what kind of green roof it is, the maintenance costs could be higher, but some types of green roof have little or no ongoing cost. Some kinds of green roofs also place higher demands on the waterproofing system of the structure, both because water is retained on the roof and due to the possibility of roots penetrating the waterproof membrane. Another disadvantage is that the wildlife they attract may include pest insects which could easily infiltrate a residential building through open windows.
The additional mass of the soil substrate and retained water places a large strain on the structural support of a building. This makes it unlikely for intensive green roofs to become widely implemented due to a lack of buildings that are able to support such a large amount of added weight, as well as the added cost of reinforcing buildings to support that weight. [ 44 ] Some types of green roofs do have more demanding structural standards, especially in seismic regions of the world. Some existing buildings cannot be retrofitted with certain kinds of green roofs because the weight of the substrate and vegetation exceeds the permitted static loading . The weight of a green roof caused the collapse of a large sports hall roof in Hong Kong in 2016. [ 45 ] In the wake of the disaster numerous other green roofs around the territory were removed. [ 46 ]
Green roofs require significantly more maintenance and maintenance energy than a standard roof. Standard maintenance includes removing debris, controlling weeds, deadhead trimming, checking moisture levels, and fertilizing. [ 47 ] The maintenance energy use for green roofs has many variables, including climate, intensity of rainfall, type of building, type of vegetation, and external coatings. [ 40 ] The most significant effect comes from scarce rainfall, which increases the maintenance energy due to the watering required. Over a 10-year roof maintenance cycle, a house with a green roof requires more retrofit embodied energy than a house with a white roof. The individual components of a green roof also have CO 2 implications during the manufacturing process beyond those of a conventional roof. [ 48 ] The embodied carbon of green roof components is about 23.6 kg of CO 2 per m² (roughly 5 pounds per square foot) of green roof. This value is equivalent to 6448 g C m−2, which is significantly greater than the 378 g C m−2 sequestered. [ 40 ] Criteria for waste management practices when green roofs reach their end-of-life remain uncodified. [ 49 ]
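The comparison in the previous paragraph can be made explicit by converting the embodied CO2 per square metre to grams of carbon (multiplying by 12/44) and relating it to the sequestration figure; this sketch reproduces only that arithmetic and ignores the indirect savings from reduced heating and cooling energy.

```python
# Convert the embodied carbon of green roof components (23.6 kg CO2 per m2)
# to grams of carbon and compare with the 378 g C per m2 net sequestration
# figure cited earlier. Ignores indirect energy savings.
embodied_kg_co2_per_m2 = 23.6
co2_to_c = 12 / 44

embodied_g_c_per_m2 = embodied_kg_co2_per_m2 * 1000 * co2_to_c
sequestered_g_c_per_m2 = 378

print(f"Embodied: about {embodied_g_c_per_m2:.0f} g C/m2")            # roughly 6,400
print(f"Ratio to sequestered carbon: {embodied_g_c_per_m2 / sequestered_g_c_per_m2:.0f}x")
```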
Both sod roofs and LWA-based (Lightweight Aggregates) roofs have been found to have a negative impact on the quality of their resulting runoff. [ 50 ]
Green roofs can be categorized as intensive, semi-intensive, or extensive, depending on the depth of planting medium and the amount of maintenance they need. Extensive green roofs traditionally support 50–120 kg/m 2 (10–25 pounds per square foot) of vegetation [ 51 ] while intensive roofs support 390–730 kg/m 2 (80–150 pounds per square foot) of vegetation. [ 52 ] Traditional roof gardens , which require a reasonable depth of soil to grow large plants or conventional lawns, are considered intensive because they are labour-intensive, requiring irrigation, feeding, and other maintenance. Intensive roofs are more park-like with easy access and may include anything from kitchen herbs to shrubs and small trees. [ 53 ]
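Because the quoted weights are per unit area, the added structural load is simply weight per square metre times roof area; the area below is a hypothetical example, and a real project would use the saturated design weight of the chosen system.

```python
# Added structural load of a green roof: weight per unit area * roof area.
# Per-area weights come from the ranges quoted above; the roof area is a
# hypothetical example.
roof_area_m2 = 500
system_weights_kg_per_m2 = {
    "extensive, light end": 50,
    "extensive, heavy end": 120,
    "intensive, heavy end": 730,
}

for name, weight in system_weights_kg_per_m2.items():
    total_tonnes = weight * roof_area_m2 / 1000
    print(f"{name}: about {total_tonnes:.0f} tonnes of added load")
```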
Extensive green roofs, by contrast, are designed to be virtually self-sustaining and should require only a minimum of maintenance, perhaps a once-yearly weeding or an application of slow-release fertiliser to boost growth. Extensive roofs are usually only accessed for maintenance. [ 53 ] They can be established on a very thin layer of soil (most use specially formulated composts): even a thin layer of rockwool laid directly onto a watertight roof can support a planting of Sedum species and mosses . Some green roof designs incorporate both intensive and extensive elements. To protect the roof, a waterproofing membrane is often used, which is manufactured to remain watertight in extreme conditions including constant dampness, ponding water, high and low alkaline conditions and exposure to plant roots, fungi and bacterial organisms. [ 54 ]
Advances in green roof technology have led to the development of new systems that do not fit into the traditional classification of green roof types. Comprehensive green roofs bring the most advantageous qualities of extensive and intensive green roofs together. Comprehensive green roofs support plant varieties typically seen in intensive green roofs at the depth and weight of an extensive green roof system. [ 55 ]
Another important distinction is between pitched green roofs and flat green roofs. Pitched sod roofs , a traditional feature of many Scandinavian buildings, tend to be of a simpler design than flat green roofs. This is because the pitch of the roof reduces the risk of water penetrating through the roof structure, allowing the use of fewer waterproofing and drainage layers.
In ancient times, green roofs consisted of cave-like structures or sod roofs covered with earth and plants, commonly used for agriculture, dwelling, and ceremonial purposes. These early shelters provided protection from the elements, good insulation during the winter months, and a cool location in the summer, but they were neither waterproof nor equipped with any system to keep out unwanted burrowing wildlife. [ 56 ]
Modern green roofs, which are made of a system of manufactured layers deliberately placed over roofs to support growing medium and vegetation, are a relatively new phenomenon. However, green roofs or sod roofs in northern Scandinavia have been around for centuries. The modern trend started when green roofs were developed in Germany in the 1960s, and has since spread to many countries. Today, it is estimated that about 10% of all German roofs have been "greened". [ 37 ]
A number of European countries have very active associations promoting green roofs, including Germany, Switzerland, the Netherlands, Norway, Italy, Austria, Hungary, Sweden, the UK, and Greece. [ 57 ] Germany was the first country to start developing green roof systems and market them on a large scale. [ 56 ] The City of Linz in Austria has been paying developers to install green roofs since 1983, and in Switzerland green roofs have been a matter of federal law since the late 1990s. In the UK, their uptake has been slow, but a number of cities have developed policies to encourage their use, notably London and Sheffield.
Green roofs are also becoming increasingly popular in North America, although they are not as common as in some parts of Europe. Numerous North American cities offer tax incentives to developers who integrate green roofs in their buildings. Toronto and San Francisco legally mandate new buildings to include green roofs. [ 58 ] [ 59 ]
Rooftop water purification is also being implemented in green roofs. These forms of green roofs are actually treatment ponds built into the rooftops. They are built either from a simple substrate (as is being done in Dongtan [ 60 ] ) or with plant-based ponds. Plants used include calamus , Menyanthes trifoliata , Mentha aquatica , and others. [ 61 ]
Several studies have been carried out in Germany since the 1970s. Berlin is one of the most important centers of green roof research in Germany, and much more research has begun, particularly in the last 10 years. About ten green roof research centers exist in the US, and activities exist in about 40 countries. In a recent study on the impacts of green infrastructure , in particular green roofs in the Greater Manchester area, researchers found that adding green roofs can help keep temperatures down, particularly in urban areas: "adding green roofs to all buildings can have a dramatic effect on maximum surface temperatures, keeping temperatures below the 1961–1990 current form case for all time periods and emissions scenarios. Roof greening makes the biggest difference…where the building proportion is high and the evaporative fraction is low. Thus, the largest difference was made in the town centers". [ 62 ]
Industrial brownfield sites can be valuable ecosystems, supporting rare species of plants, animals and invertebrates. Increasingly in demand for redevelopment, these habitats are under threat. "Brown roofs", also known as "biodiverse roofs", [ 63 ] can partly mitigate this loss of habitat by covering the flat roofs of new developments with a layer of locally sourced material. Construction techniques for brown roofs are typically similar to those used to create flat green roofs, the main difference being the choice of growing medium (usually locally sourced rubble, gravel, soil, etc...) to meet a specific biodiversity objective. [ 64 ] In Switzerland, it is common to use alluvial gravels from the foundations; in London, a mix of brick rubble and some concrete has been used.
The original idea was to allow the roofs to self-colonise with plants, but they are sometimes seeded to increase their biodiversity potential in the short term. Such practices are derided by purists. [ 65 ] The roofs are colonised by spiders and insects (many of which are becoming extremely rare in the UK as such sites are developed) and provide a feeding site for insectivorous birds. Laban , a centre for contemporary dance in London, has a brown roof specifically designed to encourage the nationally rare black redstart . [ 66 ] A green roof 160 m (520 ft) above ground level on the Barclays Bank HQ in Canary Wharf is claimed to be the highest in the UK and Europe, "and probably in the world", to act as a nature reserve. [ 67 ] Designed by combining the principles of green and brown roofs, it is already home to a range of rare invertebrates.
Green roofs have been increasing in popularity in Australia over the past 10 years. Some of the early examples include the Freshwater Place residential tower in Melbourne (2002) with its Level 10 rooftop Half Acre Garden, CH2 building housing the Melbourne City Council (2006) – Australia's first 6-star Green Star Design commercial office building as certified by the Green Building Council of Australia , and Condor Tower (2005) with a 75-square-metre (810-square-foot) lawn on the 4th floor.
Since 2008, city councils and influential business groups in Australia have become active in promoting the benefits of green roofs. "The Blueprint to Green Roof Melbourne" is one program being run by the Committee for Melbourne . [ 69 ] In 2010, the largest Australian green roof project was announced. The Victorian Desalination Project [ 70 ] will have a "living tapestry" of 98,000 Australian indigenous plants over a roof area spanning more than 26,000 m 2 (280,000 square feet). The roof will form part of the desalination plant's sophisticated roof system, designed to blend the building into the landscape, and provide acoustic protection, corrosion resistance, thermal control, and reduced maintenance.
In June 2014, ecological artist Lloyd Godman , structural engineer Stuart Jones and environmental scientist Grant Harris collaborated to install an experiment using Tillandsia plants in extreme outdoor conditions at levels 92, 91, 65 and 56 of Eureka Tower in Melbourne, Australia. The selected air plants are extremely light and are able to grow with no soil or watering system; they have been checked at regular intervals since their installation and are still growing and flowering. One species, Tillandsia bergeri , has grown from a single shoot to several thriving colonies.
The project is now titled Tillandsia SWARM and has been expanded to include many other buildings across Australia, including Federation Square, National Gallery of Victoria and Essendon Airport. [ 71 ] Godman has also experimented with Tillandsia plant screens that can be moved across skylights to create shade in summer and to allow in sun during winter. Temperature readings taken on a 40 °C day in summer revealed that the surface temperature on the roof had reached 84 °C, while the shadows cast by the plants had reduced the surface temperature on the roof to 51 °C.
The city of Toronto approved a by-law in May 2009 [ 72 ] mandating green roofs on residential and industrial buildings. There is criticism from Green Roofs for Healthy Cities that the new laws are not stringent enough, since they only apply to residential buildings that are a minimum of six stories high. By 31 January 2011, industrial buildings were required to render 10% or 2,000 m 2 (22,000 sq ft) of their roofs green. [ 73 ] Toronto City Hall 's Podium roof was renovated to include a 3,000 m 2 (32,000 square feet) rooftop garden, the largest publicly accessible roof in the city. The green roof was opened to the public in June 2010. [ 74 ] Many green roofs in Canada also use sustainable rainwater harvesting practices.
In 2008, the Vancouver Convention Centre installed a 2.4-hectare (6-acre) living roof of indigenous plants and grasses on its West building, making it the largest green roof in Canada. [ 75 ] The new Canadian War Museum in Ottawa , opened in 2005, also features a grass-covered roof.
During the renovation of the Hamilton City Hall in Hamilton, Ontario that spanned from 2007 to 2010, many efforts were taken to enhance the environmentally friendly nature of the structure, which included the addition of a grass-covered roof. [ 76 ]
Simon Fraser University 's Burnaby campus contains a substantial number of green roofs. [ 77 ]
Canada's first LEED Platinum V4 Home in Wakefield QC, EcoHome's Edelweiss House, [ 78 ] has a living Green Roof which is sloped at 12 degrees.
Living green roofs have been built and grown at Saint Michael's Sustainable Community since 2012. Native plants, mostly flowers chosen for the local environment, maximum shade, and mass, provide a colorful and functional living roof. The community has the largest number of green roofs in the country.
In Egypt , soil-less agriculture is used to grow plants on the roofs of buildings. No soil is placed directly on the roof itself, thus eliminating the need for an insulating layer; instead, plants are grown on wooden tables. Vegetables and fruit are the most popular candidates, providing a fresh, healthy source of food that is free from pesticides. [ 79 ]
A more advanced method, aquaponics , being used experimentally in Egypt, is farming fish alongside plants in a closed cycle. This allows the plants to benefit from the ammonia excreted by the fish, helping the plants to grow better and at the same time eliminating the need for changing the water for the fish, because the plants help to keep it clean by absorbing the ammonia. The fish also get some nutrients from the roots of the plants.
In Finland, green roofs are still scarce, though some experimental green roofs have been built in big cities. The capital city of Helsinki has published guidelines for encouraging the building of green roofs in the city. There is ongoing research on the topic, as conditions in southern Europe are very different from those in the north and knowledge acquired there cannot be directly applied to colder climates. The Fifth Dimension – Green Roofs and Walls in Urban Areas research program aims to produce high-level scientific and broadly applicable knowledge on optimal green roof and wall solutions in Finland.
In France, an 8,000 m 2 (86,000 square feet) extensive, cable-supported green roof has been created on the International School in Lyon. [ 80 ] Another huge green roof of roughly 8,000 m 2 (86,000 square feet) has been incorporated into the new museum L'Historial de la Vendée which opened in June 2006 at Les Lucs-sur-Boulogne .
Germany has long-held green roof traditions that started in the early industrialization period more than 100 years ago. In the 1970s, green roof technology was elevated to the next level: serious storm-water issues made cities think about innovative solutions, preferably with living plants. Modern green roof technology with high-performance, lightweight materials was used to grow hardy vegetation even on roofs that could hardly support any additional load. In the 1980s, modern green roof technology was common knowledge in Germany while it was practically unknown in any other country in the world. In Stuttgart, with one of the most innovative Departments of Parks and Recreation and one of the world's oldest horticultural universities, modern green roof technology was perfected and implemented on a large scale. By the early 2000s, Germany had laws mandating that many metropolitan areas have green roofs. [ 81 ]
With the first green roof industry boom in Germany, quality issues were recorded. The FLL formed a committee focused on modern green roof technology. FLL stands for Forschungsgesellschaft Landschaftsentwicklung Landschaftsbau e.V. (German Landscape Research, Development and Construction Society). The FLL is an independent non-profit organization. It was founded in 1975 by eight professional organizations for "the improvement of environmental conditions through the advancement and dissemination of plant research and its planned applications". The FLL green roof working group is only one of 40 committees, which together have published a long list of guidelines and labor instructions. Some of these guidelines are also available in English, including the German FLL Guideline for the Planning, Execution and Upkeep of Green-Roof Sites. The results of the research and synthesis done by FLL members are constantly updated and promulgated using the same principles that govern the compilation of DIN standards, and are published as either guiding principles or labor instructions.
The current Green Roof Guideline was published in 2011. [ 82 ] Today most elements of the German FLL are part of standards and guidelines around the world (FM Global, ASTM, NRCA, SPRI etc.).
Fachvereinigung Bauwerksbegrünung (FBB) was founded in 1990 as the second green roof association, after the DDV (Deutscher Dachgaertner Verband), founded in 1985. FBB was established as an open forum for manufacturers, planners, merchants and operators. The organization was born from the then-visionary idea of understanding the relationship between nature and construction not as oppositional, but as an opportunity. Both the green roofing and conventional roofing industries are equally represented.
The FBB has developed into an innovative lobbying group with a strong market presence, internationally known through its cooperation with other European associations. Today, approximately 100 member companies use the multifaceted services offered by FBB, which provides a greater degree of market expertise and competitiveness ("Kompetenz im Markt", "competence in the market").
Today, about 10,000,000 m 2 (110,000,000 square feet) of new green roofs are constructed each year. According to the latest studies, about three-quarters of these are extensive; the remaining quarter are roof gardens. The cities with the most green roofs in Germany are Berlin and Stuttgart . Surveys about the status of regulation are done by the FBB. Nearly one third of all German cities have regulations to support green-roof and rain-water technology. Green-roof research institutions are located in several cities, including Hannover , Berlin, Geisenheim and Neubrandenburg .
Germany is the country with the most green roofs in the world, as well as the country with the most advanced knowledge of modern green roof technology. [ 83 ] Green roofs in Germany are part of the two- to three-year apprenticeship education of landscaping professionals.
The Greek Ministry of Finance has installed a green roof on the Treasury in Constitution Square in Athens. [ 84 ] The so-called "oikostegi" (Greek – oiko , pronounced [ˈiko] , meaning house or eco-, and stegi , pronounced staygee , meaning roof, abode or shelter) was inaugurated in September 2008. Studies of the thermodynamics of the roof in September 2008 concluded that the thermal performance of the building was significantly affected by the installation. [ 85 ] In further studies, in August 2009, energy savings of 50% were observed for air conditioning in the floor directly below the installation. The ten-floor building has a total floor space of 1.4 hectares (3.5 acres). The oikostegi covers 650 square metres (7,000 sq ft), equalling 52% of the roof space and 8% of the total floor space. Despite this, energy savings totalling €5,630 per annum were recorded, which translates to a 9% saving in air conditioning and a 4% saving in heating bills for the whole building. [ 86 ] An additional observation and conclusion of the study was that the thermodynamic performance of the oikostegi had improved as biomass was added over the 12 months between the first and second study. This suggests that further improvements will be observed as the biomass increases still further. The study also stated that while measurements were being made by thermal cameras, a plethora of beneficial insects were observed on the roof, such as butterflies, honey bees and ladybirds, which was not the case before installation. Finally, the study suggested that both the micro-climate and biodiversity of Constitution Square in Athens had been improved by the oikostegi.
Sod roofs are frequently found on traditional farmhouses and farm buildings in Iceland . [ 87 ]
Green roofs called Burze Pash were traditionally used in Kashmiri houses to provide insulation during winters. The rooftops were made from locally sourced birch wood , and covered with a fertile layer of soil, on which different flowers and even crops would be grown. The variety of flowers and crops frequently varied by region, creating diversity. [ 88 ]
Bus stops in Kuala Lumpur were fitted with green roofs in 2019. [ 89 ]
Several cities in Poland have implemented policies and incentives to encourage the installation of green roofs, including Warsaw, Krakow, and Wroclaw. These policies have helped to increase the adoption of green roofs in the country, particularly in urban areas, where they are seen as an important tool for mitigating the environmental impacts of urbanization and improving the quality of life for city residents. The University of Warsaw green roof is one of the most impressive and well-known examples of green roofs in Poland. It covers an area of approximately 10,000 square meters and includes over 30,000 plants from more than 70 different species. [ 90 ]
Singapore installed a green roof on a bus in 2019 as part of an experiment led by researchers at the National University of Singapore . [ 89 ] Green roofs on bus stops in Singapore were found to reduce ambient temperatures by up to 2 °C. [ 91 ]
Switzerland has one of Europe's oldest green roofs, created in 1914 at the Moos lake water-treatment plant, Wollishofen , Zürich . Its filter tanks have 30,000 m 2 (320,000 square feet) of flat concrete roofs. To keep the interior cool and prevent bacterial growth in the filtration beds, a drainage layer of gravel and a 15-centimetre (5.9-inch) layer of soil was spread over the roofs, which had been waterproofed with asphalt . A meadow developed from seeds already present in the soil; it is now a haven for many plant species, some of which are now otherwise extinct in the district, most notably 6,000 Orchis morio ( green-winged orchid ). More recent Swiss examples can be found at Klinikum 1 and Klinikum 2, the Cantonal Hospitals of Basel , and the Sihlpost platform at Zürich's main railway station.
What is claimed [ 92 ] to be the world's first green roof botanical garden was set up in Augustenborg , Malmö in May 1999. The International Green Roof Institute (IGRI) opened to the public in April 2001 as a research station and educational facility. (It has since been renamed the Scandinavian Green Roof Institute (SGRI), in view of the increasing number of similar organisations around the world.) Green roofs are well-established in Malmö: the Augustenborg housing development near the SGRI botanical garden incorporates green roofs and extensive landscaping of streams, ponds, and soak-ways between the buildings to deal with storm water run-off.
The new Bo01 urban residential development (in the Västra Hamnen (Western Harbour) close to the foot of the Turning Torso office and apartment block, designed by Santiago Calatrava ) is built on the site of old shipyards and industrial areas, and incorporates many green roofs.
In 2012, the shopping mall Emporia , with its 27,000-square-metre (290,000-square-foot) roof garden, was opened. The roof garden is approximately the size of four soccer fields, making it one of the biggest green roof parks in Europe that is accessible to the public.
In 2003 English Nature concluded that 'in the UK policy makers have largely ignored green roofs'. [ 93 ] However, British examples can be found with increasing frequency. The Kensington Roof Gardens , built above the former Derry & Toms department store in Kensington , London in 1938, are a notable early example. [ 94 ] More recent examples can be found at the University of Nottingham Jubilee Campus , and in London at Sainsbury's Millennium Store in Greenwich, the Horniman Museum and at Canary Wharf . The Ethelred Estate, close to the River Thames in central London, is the British capital's largest roof-greening project to date. Toxteth in Liverpool is also a candidate for a major roof-greening project.
In the United Kingdom, intensive green roofs are sometimes used in built-up city areas where residents and workers often do not have access to gardens or local parks. Extensive green roofs are sometimes used to blend buildings into rural surroundings, for example by Rolls-Royce Motor Cars , which has one of the biggest green roofs in Europe (covering more than 32,000 m 2 or 340,000 square feet) on its factory at Goodwood, West Sussex. [ 95 ]
The University of Sheffield has created a Green Roof Centre of Excellence and conducted research, particularly in a UK context, into green roofs. [ 96 ] Nigel Dunnett of Sheffield University published a UK-centric book about green roofing in 2004 (updated 2008). [ 97 ]
Fort Dunlop has had the largest green roof in the UK since its redevelopment between 2004 and 2006.
The UK also has one of the most innovative food preparation facilities in Europe, the Kanes salad factory in Evesham . [ 98 ] It is topped with a wildflower roof featuring nearly 90 species of wildflower and natural grasses. The seed mix was prepared in consultation with leading ecologists to try to minimise the impact on the local environment. [ 99 ] The pre-grown wildflower blanket sits on top of a standing seam roof and is combined with solar panels to create an eco-friendly finish to the entire factory. [ 100 ] The development also won the 2013 National Federation of Roofing Contractors Sustainable Roof Award for Green Roofing. [ 101 ] [ 102 ]
One of the largest expanses of extensive green roof is to be found in the US, at Ford Motor Company 's River Rouge Plant , Dearborn , Michigan, where 450,000 square feet (42,000 m 2 ) of assembly plant roofs are covered with sedum and other plants, designed by William McDonough ; the $18 million assembly-roof project avoids the need for what would otherwise be $50 million worth of mechanical treatment facilities on site. Built over Millennium Park Garage, Chicago's 24.5-acre (9.9 ha) Millennium Park is considered one of the largest intensive green roofs. [ 104 ] Other well-known American examples include Chicago's City Hall and the former Gap headquarters, now the headquarters of YouTube, in San Bruno, CA. The U.S. military has two major green roofs in the Washington, D.C. area : the U.S. Coast Guard headquarters (550,000 square feet or 51,000 square metres) and the Pentagon (180,000 square feet or 17,000 square metres). [ citation needed ]
An early green-roofed building (completed in 1971) is the 358,000-square-foot (33,300 m 2 ) Weyerhaeuser Corporate Headquarters building in Federal Way, Washington. Its 5-story office roof system comprises a series of stepped terraces covered in greenery. From the air, the building blends into the landscape.
The largest green roof in New York City was installed in midtown Manhattan atop the United States Postal Service 's Morgan Processing and Distribution Center. Construction on the 109,000-square-foot (10,100 m 2 ) project began in September 2008, and was finished and dedicated in July 2009. Covered in native vegetation and having an expected lifetime of fifty years, this green roof will not only save the USPS approximately $30,000 a year in heating and cooling costs, but will also significantly reduce the amount of storm water contaminants entering the municipal water system. [ 105 ] [ 106 ] In 2001, atop Chicago City Hall , the 38,800-square-foot (3,600 m 2 ) roof gardens were completed, serving as a pilot project to assess the impact green roofs would have on the heat island effect in urban areas, rainwater runoff, and the effectiveness of differing types of green roofs and plant species for Chicago's climate. Although the rooftop is not normally accessible to the public, it is visually accessible from 33 taller buildings in the area. The garden consists of 20,000 plants of more than 150 species, including shrubs, vines and two trees. The green roof design team was headed by the Chicago area firm Conservation Design Forum in conjunction with noted "green" architect William McDonough . With an abundance of flowering plants on the rooftop, beekeepers harvest approximately 200 pounds (90 kg) of honey each year from hives installed on the rooftop. Tours of the green roof are by special arrangement only. Chicago City Hall Green Roof won merit design award of the American Society of Landscape Architecture (ASLA) competition in 2002.
The 14,000 square feet (1,300 m 2 ) of outdoor space on the seventh floor of Zeckendorf Towers , formerly an undistinguished rooftop filled with potted plants, make up the largest residential green roof in New York. [ 107 ] [ 108 ] [ 109 ] The roof was transformed in 2010 as part of Mayor Michael Bloomberg 's NYC Green Infrastructure campaign, and supposedly serves to capture some of the rain that falls on it rather than letting it run off and contribute to flooding in the adjacent Union Square subway station . [ 107 ]
Some cost can also be attributed to maintenance. Extensive green roofs have low maintenance requirements, but they are generally not maintenance free. German research has quantified the effort needed to remove unwanted seedlings at approximately 6 seconds/m 2 /year. [ 110 ] Maintenance of green roofs often includes fertilization to increase flowering and succulent plant cover. If aesthetics are not an issue, fertilization and maintenance are generally not needed. Extensive green roofs should only be fertilized with controlled-release fertilizers in order to avoid pollution of the storm water; conventional fertilizers should never be used on extensive vegetated roofs. [ 111 ] [ 112 ] German studies have approximated the nutrient requirement of vegetated roofs at 5 gN/m 2 . It is also important to use a substrate that does not contain too many available nutrients; the FLL guidelines specify the maximum allowable nutrient content of substrates. [ 113 ]
One of the oldest American green roofs in existence is atop the Rockefeller Center in Manhattan, built in 1936. This roof was primarily an aesthetic undertaking for the enjoyment of the center's workers, and remains to this day, having been refurbished in 1986. [ 114 ]
With the passage of Denver's Green Roof Initiative [ 115 ] in the November 2017 elections, effective January 2018, new buildings or existing buildings meeting the initiative's thresholds are required to have rooftop gardens, optionally combined with solar photovoltaic panels. [ 116 ] [ 117 ]
Seattle is another city in which green roofs have been used on an increasing basis. This phenomenon is in large part due to efforts on behalf of the city to encourage green roofs through new and improved building codes . In 2006, the Seattle Green Factor program was approved. [ 118 ] The program rewards the incorporation of landscaping in new building developments in an attempt to reduce stormwater runoff and associated pollution, stabilize temperatures, and create habitats for birds and insects. [ 119 ] These changes were expanded in 2009 to recognize the specific stormwater benefits of green roofs, and to reward developers who used them accordingly. [ 118 ] [ 120 ]
By 2010, Seattle was home to approximately 8.25 acres (3.34 hectares) of green roofs. [ 121 ] Despite initial hiccups in the city stemming from weeds, lack of irrigation during dry summer months, and a need for continuous replanting, the project has continued to succeed as understanding around the best soils and plants and the need for monitoring and upkeep has increased. [ 118 ] A 2010 survey of the green roofs in Seattle acknowledged that while the initial costs of implementing a green roof may deter businesses or homeowners, it is likely that green roofs actually preserve the roofing material and cut costs in the long run. [ 122 ] In light of the success in Seattle, other cities such as Portland, Chicago, and Washington, D.C. have all made efforts to develop their own Green Factor programs. [ 120 ]
The Seattle City Hall has led the way by implementing a green roof project that has involved the planting of more than 22,000 pots of sedum, fescue, and grass. [ 123 ] The City hopes that the project can reduce the annual stormwater runoff for the building by 50 to 75 percent, which will in turn reduce damage to local watershed areas that provide habitats for native species such as salmon. [ 123 ] The historic Union Stables building has used green roofs alongside other efficiency based changes to reduce stormwater runoff and decrease the building's energy use by 70 percent. [ 124 ] The Park Place building in Seattle's downtown provides a leading example of the use of landscaping to recapture rain water with the hopes of cutting back spending on utilities. [ 124 ]
Washington, D.C.
Washington, D.C., started implementing incentives for green roofs within the city at the beginning of the 21st century. In 2003, the Chesapeake Bay Foundation introduced a “green roof demonstration project” in combination with the D.C. Water and Sewer Authority. [ 125 ] This program issued grants to several pilot green roofs, which assisted with the cost of construction for the building owner. From this project the city began to understand how beneficial these roofs could be, and more programs were implemented over the years. In 2007, the Riversmart Rewards Program introduced a RiverSmart Rooftops Green Roof Rebate Program that provided a $3 per square foot subsidy to potential green roof projects within the District; this assisted 12 projects that year. [ 125 ] A year later, the subsidy was raised to $5, incentivizing even more developers to use this program within their designs. The RiverSmart Rewards program also allows “residents and property owners to receive a significant discount on their water utility fees” if they install approved stormwater management features. [ 126 ] In 2016, a rebate of $10–$15 per square foot was introduced, “promoting the voluntary installation of green roofs for the purpose of reducing stormwater runoff and pollutants”. [ 127 ] Rebates of $10 per square foot were set for installations within a combined sewer system, and $15 per square foot for installations within a municipal storm sewer system. Notably, the rebate does not restrict the type of building that qualifies: there is no size cap on qualifying properties, whether residential, commercial or institutional. [ 127 ] In 2016 there was a total of 2.3 million square feet of green roofing within the district; as of 2020, there was 5.1 million square feet. [ 128 ] | https://en.wikipedia.org/wiki/Green_roof |
Green rust is a generic name for various green crystalline chemical compounds containing iron(II) and iron(III) cations, the hydroxide ( OH − ) anion, and another anion such as carbonate ( CO 2− 3 ), chloride ( Cl − ), or sulfate ( SO 2− 4 ), in a layered double hydroxide (LDH) structure. The most studied varieties are the following: [ 1 ]
Other varieties reported in the literature are bromide Br − , [ 7 ] fluoride F − , [ 7 ] iodide I − , [ 9 ] nitrate NO − 3 , [ 10 ] and selenate SeO 2− 4 . [ 11 ]
Green rust was first recognized as a corrosion crust on iron and steel surfaces. [ 2 ] It occurs in nature as the mineral fougerite . [ 1 ]
The crystal structure of green rust can be understood as the result of inserting the foreign anions and water molecules between brucite -like layers of iron(II) hydroxide , Fe(OH) 2 . The latter has an hexagonal crystal structure , with layer sequence AcBAcB... , where A and B are planes of hydroxide ions, and c those of Fe 2+ ( iron (II), ferrous) cations . In green rust, some Fe 2+ cations get oxidized to Fe 3+ (iron(III), ferric). Each triple layer AcB, which is electrically neutral in the hydroxide [ clarification needed ] , becomes positively charged. The anions then intercalate between those triple layers and restore the electroneutrality. [ 1 ]
There are two basic structures of green rust, "type 1" and "type 2". [ 12 ] Type 1 is exemplified by the chloride and carbonate varieties. It has a rhombohedral crystal structure similar to that of pyroaurite ( Mg 6 Fe 2 (OH) 16 CO 3 ·4H 2 O ). The layers are stacked in the sequence AcBiBaCjCbAkA ...; where A, B, and C represent OH − planes, a, b, and c are layers of mixed Fe 2+ and Fe 3+ cations, and i, j, and k are layers of the intercalated anions and water molecules. [ 1 ] [ 13 ] [ 14 ] The c crystallographic parameter is 22.5–22.8 Å for the carbonate, and about 24 Å for the chloride. [ 4 ]
Type 2 green rust is exemplified by the sulfate variety. It has a hexagonal crystal structure similar to that of minerals of the sjogrenite ( Mg 6 Fe 2 (OH) 16 CO 3 ·4H 2 O ) group, with layers probably stacked in the sequence AcBiAbCjA... [ 1 ] [ 7 ] [ 13 ]
In an oxidizing environment, green rust generally turns into Fe 3+ oxyhydroxides , namely α- FeOOH ( goethite ) and γ- FeOOH ( lepidocrocite ). [ 13 ]
Oxidation of the carbonate variety can be retarded by wetting the material with hydroxyl -containing organic compounds such as glycerol or glucose , even though they do not penetrate the structure. [ 3 ] Some variety of green rust is stabilized also by an atmosphere with high CO 2 partial pressure . [ 3 ] [ 15 ]
Sulfate green rust has been shown to reduce nitrate NO − 3 and nitrite NO − 2 in solution to ammonium NH + 4 , with concurrent oxidation of Fe 2+ to Fe 3+ . Depending on the cations in the solution, the nitrate anions replaced the sulfate in the intercalation layer, before the reduction. It was conjectured that green rust may be formed in the reducing alkaline conditions below the surface of marine sediments and may be connected to the disappearance of oxidized species like nitrate in that environment. [ 16 ] [ 17 ] [ 18 ]
Suspensions of carbonate green rust and orange γ- FeOOH in water react over a few days producing a black precipitate of magnetite Fe 3 O 4 . [ 19 ]
Green rust compounds were identified in green corrosion crusts that form on iron and steel surfaces, in alternating aerobic and anaerobic conditions, by water containing anions such as chloride, sulfate, carbonate, or bicarbonate . [ 2 ] [ 4 ] [ 8 ] [ 12 ] [ 13 ] [ 20 ] [ 21 ] [ 22 ] They are considered to be intermediates in the oxidative corrosion of iron to form iron(III) oxyhydroxides (ordinary brown rust ). Green rust may be formed either directly from metallic iron or from iron(II) hydroxide Fe ( OH ) 2 . [ 4 ]
On the basis of Mössbauer spectroscopy , green rust is suspected to occur as mineral in certain bluish-green soils that are formed in alternating redox conditions, and turn ochre once exposed to air. [ 23 ] [ 24 ] [ 25 ] [ 26 ] Green rust has been conjectured to be present in the form of the mineral fougerite ( [Fe 2+ 4 Fe 3+ 2 (OH) 12 ][CO 3 ]·3H 2 O ). [ 5 ]
Hexagonal crystals of green rust (carbonate and/or sulfate) have also been obtained as byproducts of bioreduction of ferric oxyhydroxides by dissimilatory iron-reducing bacteria , such as Shewanella putrefaciens , that couple the reduction of Fe 3+ with the oxidation of organic matter . [ 27 ] This process has been conjectured to occur in soil solutions and aquifers . [ 19 ]
In one experiment, a 160 mM suspension of orange lepidocrocite γ- FeOOH in a solution containing formate ( HCO − 2 ), incubated for 3 days with a culture of Shewanella putrefaciens , turned dark green due to the conversion of the oxyhydroxide to GR( CO 2− 3 ), in the form of hexagonal platelets with diameter ~7 μm. In this process, the formate was oxidized to bicarbonate HCO − 3 , which provided the carbonate anions for the formation of green rust. The active bacteria were necessary for the formation of green rust. [ 19 ]
Green rust compounds can be synthesized at ambient temperature and pressure, from solutions containing iron(II) cations, hydroxide anions, and the appropriate intercalatory anions, such as chloride, [ 6 ] [ 28 ] [ 29 ] [ 30 ] sulfate, [ 31 ] [ 32 ] [ 33 ] [ 34 ] or carbonate. [ 35 ]
The result is a suspension of ferrous hydroxide ( Fe(OH) 2 ) in a solution of the third anion. This suspension is oxidized by stirring under air, or bubbling air through it. [ 25 ] Since the product is very prone to oxidation, it is necessary to monitor the process and exclude oxygen once the desired ratio of Fe 2+ and Fe 3+ is achieved. [ 3 ]
One method first combines an iron(II) salt with sodium hydroxide (NaOH) to form the ferrous hydroxide suspension. Then the sodium salt of the third anion is added, and the suspension is oxidized by stirring under air. [ 3 ] [ 25 ] [ 36 ]
For example, carbonate green rust can be prepared by mixing solutions of iron(II) sulfate FeSO 4 and sodium hydroxide; then adding sufficient amount of sodium carbonate Na 2 CO 3 solution, followed by the air oxidation step. [ 36 ]
Sulfate green rust can be obtained by mixing solutions of FeCl 2 ·4 H 2 O and NaOH to precipitate Fe(OH) 2 then immediately adding sodium sulfate Na 2 SO 4 and proceeding to the air oxidation step. [ 8 ] [ 34 ]
A more direct method combines a solution of iron(II) sulfate FeSO 4 with NaOH, and proceeds to the oxidizing step. [ 18 ] The suspension must have a slight excess of FeSO 4 (in the ratio of 0.5833 Fe 2+ for each OH − ) for green rust to form; however, too much of it will instead produce an insoluble basic iron sulfate, iron(II) sulfate hydroxide Fe 2 (SO 4 )(OH) 2 · n H 2 O . [ 32 ] The yield of green rust decreases as the temperature increases. [ 37 ]
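For rough batch sizing only — a sketch based on the 0.5833 Fe 2+ per OH − ratio quoted above, assuming anhydrous FeSO 4 (the reagent is often supplied as the heptahydrate, which has a different molar mass) — the required mass of FeSO 4 for a given amount of NaOH can be estimated as follows:

```python
# Batch sizing for the direct FeSO4 + NaOH route, using the quoted ratio of
# 0.5833 Fe2+ per OH-. Molar masses: anhydrous FeSO4 ~151.9 g/mol, NaOH 40.0 g/mol.
FE_PER_OH = 0.5833

def feso4_grams_for_naoh(naoh_grams: float) -> float:
    n_oh = naoh_grams / 40.0   # mol OH- supplied by the NaOH
    n_fe = FE_PER_OH * n_oh    # mol Fe2+ required at the quoted ratio
    return n_fe * 151.9        # grams of anhydrous FeSO4

print(round(feso4_grams_for_naoh(10.0), 1))  # ~22.2 g FeSO4 per 10 g NaOH
```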
An alternate preparation of carbonate green rust first produces a suspension of iron(III) hydroxide Fe(OH) 3 in an iron(II) chloride FeCl 2 solution, and bubbles carbon dioxide through it. [ 3 ]
In a more recent variant, solutions of both iron(II) and iron(III) salts are first mixed, then a solution of NaOH is added, all in the stoichiometric proportions of the desired green rust. No oxidation step is then necessary. [ 34 ]
Carbonate green rust films have also been obtained from the electrochemical oxidation of iron plates. [ 35 ] | https://en.wikipedia.org/wiki/Green_rust |
Green solvents are environmentally friendly chemical solvents that are used as a part of green chemistry . They came to prominence in 2015, when the UN defined a new sustainability -focused development plan based on 17 sustainable development goals, recognizing the need for green chemistry and green solvents for a more sustainable future. [ 1 ] Green solvents are developed as more environmentally friendly solvents, derived from the processing of agricultural crops or otherwise sustainable methods as alternatives to petrochemical solvents. Some of the expected characteristics of green solvents include ease of recycling , ease of biodegradation , and low toxicity. [ 2 ]
Although not an organic solvent , water is an attractive solvent because it is non-toxic and renewable . It is a useful solvent in many industrial processes. Traditional organic solvents can sometimes be replaced by aqueous preparations. [ 3 ] Water-based coatings have largely replaced standard petroleum-based paints for the construction industry; however, solvent-based anti-corrosion paints remain among the most used today.
Supercritical water (SCW) is obtained at temperatures above 374.2 °C and pressures above 22.05 MPa. [ 4 ] It behaves as a dense gas with a dissolving power equivalent to that of organic solvents of low polarity . However, the solubility of inorganic salts in SCW is radically reduced. SCW is used as a reaction medium, especially in oxidation processes for the destruction of toxic substances such as those found in industrial aqueous effluents . The use of supercritical water has two main technical challenges, namely corrosion and salt deposition.
Supercritical carbon dioxide (CO 2 ) is the most commonly used supercritical fluid because it is relatively easy to use. Temperatures above 31 °C and pressures above 7.38 MPa are sufficient to obtain supercriticality, [ 5 ] at which point it behaves as a good nonpolar solvent .
Ethanol is used in toiletries, cosmetics, some cleaners and coatings.
Bioethanol , made industrially by fermentation of sugars, starch, and cellulose, is widely available. Biobutanol (butyl alcohol, various isomers ) is also produced by fermentation of sugars. Tetrahydrofurfuryl alcohol (THFA) is a specialty solvent that may be obtained from hemicellulose .
Ethyl lactate , made from lactic acid obtained from corn starch , is notably used as a mixture with other solvents in some paint strippers and cleaners. [ 6 ] Ethyl lactate has replaced solvents such as toluene , acetone , and xylene in some applications.
Lipids (triglycerides) themselves can be used as solvents, but are mostly hydrolyzed to fatty acids and glycerol (glycerin). Fatty acids can be esterified with an alcohol to give fatty acid esters , e.g., FAMEs ( fatty acid methyl esters ) if the esterification is performed with methanol . Usually derived from natural gas or petroleum, the methanol used to produce FAMEs can also be obtained by other routes, including gasification of biomass and household hazardous waste . Glycerol from lipid hydrolysis can be used as a solvent in synthetic chemistry , as can some of its derivatives. [ 8 ]
Deep eutectic solvents (DES) [ 9 ] [ 10 ] have low melting points and can be cheap, safe, and useful in industry. Their density depends on composition: for example, octylammonium bromide/ decanoic acid (1:2 molar ratio) has a density of 0.8889 g·cm −3 , lower than that of water, while choline chloride /trifluoroacetamide (1:2) reaches 1.4851 g·cm −3 . Their miscibility is also composition-dependent.
A mixture whose melting point is lower than that of its constituents is called a eutectic mixture. Many such mixtures can be used as solvents, especially when the melting-point depression is very large, hence the term deep eutectic solvent (DES). One of the most commonly used substances for obtaining a DES is the ammonium salt choline chloride . Smith, Abbott, and Ryder report that a mixture of urea (melting point: 133 °C) and choline chloride (melting point: 302 °C) in a 2:1 molar ratio has a melting point of 12 °C. [ 1 ]
Natural deep eutectic solvents (NADES) are also a research area relevant to green chemistry, being easy to produce from two low-cost components of well-characterized ecotoxicity: a hydrogen-bond acceptor and a hydrogen-bond donor. [ 11 ]
Solvents in a diverse class of natural substances called terpenes are obtained by extraction from certain parts of plants. All terpenes are structurally built up from isoprene units, with the general formula (C 5 H 8 ) n .
Turpentine, formerly used as a solvent in organic coatings, is now largely replaced by petroleum hydrocarbons . [ 13 ] Nowadays, it is mainly used as a source of its constituents, including α-pinene and β-pinene. [ 15 ]
Ionic liquids are molten organic salts that are generally fluid at room temperature. Frequently used cations include imidazolium , pyridinium , ammonium and phosphonium ; common anions include halides, tetrafluoroborate , hexafluorophosphate , and nitrate . Bubalo et al. (2015) argue that ionic liquids are non-flammable, and chemically, electrochemically and thermally stable. [ 16 ] These properties allow ionic liquids to be used as green solvents, as their low volatility limits VOC emissions compared to conventional solvents. However, concerns have been raised in the past about the ecotoxicity and poor degradability of ionic liquids, and about the non-renewable resources typically used for their production, such as imidazole and halogenated alkanes (derived from petroleum). Ionic liquids produced from renewable and biodegradable materials have recently emerged, but their availability is low because of high production costs. [ 11 ]
Bubbling CO 2 into water or an organic solvent changes certain properties of the liquid such as its polarity, ionic strength , and hydrophilicity . This allows an organic solvent to form a homogeneous mixture with the otherwise immiscible water. The process is reversible, and was developed by Jessop et al. (2012) for potential uses in synthetic chemistry and in the extraction and separation of various substances. How green a switchable solvent is can be measured by the energy and material savings it provides; thus, one of the advantages of switchable solvents is the potential reuse of the solvent and water in post-process applications. [ 17 ]
First-generation biorefineries exploit food-based substances such as starch and vegetable oils. [ 18 ] For example, corn grain is used to make ethanol. Second-generation biorefineries use residues or wastes generated by various industries as feedstock for the manufacture of their solvents. 2-Methyltetrahydrofuran , derived from lignocellulosic waste, would have the potential to replace tetrahydrofuran , toluene , DCM , and diethyl ether in some applications. Levulinic acid esters from the same source would have the potential to replace DCM in paint cleaners and strippers.
Used cooking oils can be used to produce FAMEs . [ 19 ] Glycerol , obtained as a byproduct of the synthesis of these, can in turn be used to produce various solvents such as 2,2-dimethyl-1,3-dioxolane-4-methanol, usable as a solvent in the formulation of inks and cleaners. [ 20 ]
Fusel oil, a mixture of amyl alcohol isomers , is a byproduct of ethanol production from sugars. Green solvents such as isoamyl acetate or isoamyl methyl carbonate can be derived from fusel oil. When these green solvents are used to manufacture nail polishes, VOC emissions are reduced by at least 68% compared to the emissions caused by using traditional solvents.
Due to the high price of new sustainable solvents, in 2017, Clark et al. listed twenty-five solvents that are currently considered acceptable to replace hazardous solvents, even if they are derived from petrochemicals. [ 21 ]
These include propylene carbonate and dibasic esters (DBEs). Propylene carbonate and DBEs have been the subject of monographs on solvent substitution. [ 22 ] [ 23 ] Propylene carbonate and two DBEs are considered green in the manufacturer GlaxoSmithKline's (GSK) Solvent Sustainability Guide, which is used in the pharmaceutical industry. [ 24 ] Propylene carbonate can be produced from renewable resources, but DBEs that have appeared on the market in recent years are obtained as by-products of the synthesis of polyamides , derived from petroleum. Other petrochemical solvents are variously referred to as green solvents, such as halogenated hydrocarbons like parachlorobenzotrifluoride , which has been used since the early 1990s in paints to replace smog-forming solvents.
Siloxanes are compounds known in industry in the form of polymers (silicones, R-SiO-R'), for their thermal stability and elastic and non-stick properties. The early 1990s saw the emergence of low molecular weight siloxanes (methylsiloxanes), which can be used as solvents in precision cleaning, replacing stratospheric ozone-depleting solvents.
A final category of petrochemical solvents that qualify as green involves polymeric solvents. The International Union of Pure and Applied Chemistry defines the term "polymer solvent" as "a polymer that acts as a solvent for low-molecular weight compounds". [ 25 ] In industrial chemistry, polyethylene glycols (PEGs, H(OCH 2 CH 2 ) n OH) are one of the most widely used polymeric solvent families. [ 26 ] PEGs, with molecular weights below 600 Da , are viscous liquids at room temperature, while heavier PEGs are waxy solids.
Soluble in water and readily biodegradable, liquid PEGs have the advantage of negligible volatility (< 0.01 mmHg or < 1.3 Pa at 20 °C). [ 27 ] PEGs are synthesized from ethylene glycol and ethylene oxide, both of which are petrochemical-derived molecules, though ethylene glycol from renewable sources ( cellulose ) is commercially available. [ 28 ]
The physical properties of solvents are important in selecting the solvent to be used according to the reaction conditions. In particular, their dissolution properties make it possible to assess the suitability of a particular solvent for a chemical process, such as a reaction, an extraction or a washing. Evaporation is also important to consider, as it can be indicative of potential volatile organic compound (VOC) emissions.
Selected properties tabulated for green solvents in each category include molar mass (g·mol −1 ), vapour pressure at various temperatures (for example 17 kPa at 100 °C, or 34.5 kPa at 50 °C), and appearance (for example, a yellow-to-brown liquid).
Other categories of green solvent have additional properties that affect their suitability in various applications:
Fatty acid methyl esters [ 50 ] [ 51 ] [ 52 ] have been investigated and compared to fossil diesel . At 20 °C or 40 °C, these solvents have a lower density than water at 4 °C (the temperature at which water is densest).
Their kinematic viscosity depends on whether they are saturated or unsaturated, and on the temperature. At 40 °C, for saturated FAMEs, it ranges from 0.340 (acetate) to 6.39 (nonadecanoate), and for unsaturated FAMEs, from 5.61 for the stearate to 7.21 for the erucate.
Their dielectric constant decreases as their alkyl chain gets longer: for example, the acetate, with a very short alkyl chain, has a dielectric constant of ε 40 = 6.852, compared with ε 40 = 2.982 for the nonadecanoate.
The properties of switchable solvents [ 53 ] [ 54 ] are governed by the pKa of their conjugate acid and by their octanol-water partition coefficient K ow . They must have a pKa above 9.5 to be protonated by carbonated water, and a log(K ow ) between 1.2 and 2.5 to be switchable; otherwise they will be hydrophilic or hydrophobic . These properties depend on the volumetric ratio of the compound to water. For example, N,N,N ′ -tributylpentanamidine is a switchable solvent, yet at a volumetric ratio of compound to water of 2:1, it has a log(K ow ) = 5.99, which is higher than 2.5.
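The two thresholds quoted above can be expressed as a small screening helper; this is only an illustration of the stated criteria (the pKa value in the example is a placeholder, and real screening also accounts for the compound-to-water ratio):

```python
def classify_switchable(pka_conjugate_acid: float, log_kow: float) -> str:
    """Classify a candidate solvent using the pKa and log(Kow) thresholds quoted in the text."""
    if pka_conjugate_acid <= 9.5:
        return "not basic enough to be protonated by carbonated water"
    if log_kow < 1.2:
        return "hydrophilic"
    if log_kow > 2.5:
        return "hydrophobic"
    return "switchable"

# Placeholder pKa of 11.0; 5.99 is the log(Kow) quoted for a 2:1 compound-to-water ratio
print(classify_switchable(pka_conjugate_acid=11.0, log_kow=1.8))   # switchable
print(classify_switchable(pka_conjugate_acid=11.0, log_kow=5.99))  # hydrophobic
```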
Ionic liquids [ 55 ] with low melting points are associated with asymmetric cations, while those with high melting points are associated with symmetric cations. Additionally, ionic liquids with branched alkyl chains have higher melting points. They are denser than water, ranging from 1.05 to 1.64 g·cm −3 at 20 °C and from 1.01 to 1.57 g·cm −3 at 90 °C.
Some green solvents, in addition to being more sustainable , have been found to offer better physicochemical properties or reaction yields than traditional solvents. However, the results obtained are for the most part observations from experiments on particular green solvents and cannot be generalized. The effectiveness of a green solvent is quantified by calculating the "E factor", the ratio of waste material to desired product produced through a process.
E f a c t o r = Mass of all waste materials Mass of desired product {\displaystyle Efactor={\frac {\text{Mass of all waste materials}}{\text{Mass of desired product}}}}
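A minimal worked example of the E factor calculation (the masses are illustrative assumptions, not data from any cited study):

```python
# Illustrative E-factor calculation with made-up masses (kg per batch).
def e_factor(total_input_mass: float, product_mass: float) -> float:
    """E factor = mass of all waste materials / mass of desired product."""
    waste_mass = total_input_mass - product_mass
    return waste_mass / product_mass

# Example: 120 kg of raw materials and solvent yield 25 kg of product
print(e_factor(total_input_mass=120.0, product_mass=25.0))  # 3.8
```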
Green solvent efficiency has mainly been proven in extractions and separations in comparison to traditional solvents. [ 56 ]
Solvent manufacturers also provide industrial companies with databases to propose green alternative solvent mixtures to those originally used in industrial processes with similar efficiency and reaction yield. However, environmental and safety requirements are not always considered in these suggestions. [ 64 ]
The use of green solvents is increasingly preferred because of their lower environmental impact. Nevertheless, these solvents can still present dangers for human health as well as for the environment, and for a number of green solvents the impact is still unclear, or at least not yet categorized.
Safety data sheets for common green solvents list hazards such as severe eye damage or irritation, respiratory tract irritation, skin irritation and possible skin allergy, harm if swallowed, and, for some solvents, very high toxicity to aquatic organisms with long-term adverse effects. [ 65 ] [ 66 ]
For ethanol, the American Conference of Governmental Industrial Hygienists , abbreviated ACGIH, advises a short-term exposure limit of 1000 ppm to avoid irritating the respiratory tract. [ 67 ]
The French National Agency for Food, Environmental, and Occupational Health Safety (ANSES) has recommended a short-term occupational exposure limit value of 100 mg/m 3 for butan-1-ol, a solvent used in paints, cleaners, and degreasers, in order to prevent irritation of the mucous membranes of the eyes and upper airways. Since 1998, the ACGIH has suggested an 8-hour exposure limit value (ELV) of 20 ppm of butan-1-ol to prevent irritation of the upper respiratory tract and eyes.
THFA causes reproductive toxicity in male rats and affects fetal and embryonic development in rats. In 1993, the American Industrial Hygiene Association suggested an ELV of 2 ppm for THFA to prevent testicular degeneration, based on the no-observed-effect level from two subchronic studies in rats and dogs.
DES components, according to Wazeer, Hayyan, and Hadj-Kali, [ 68 ] are typically non-toxic and biodegradable . According to Hayyan et al., [ 69 ] the DES they investigated were more harmful to the small crustacean artemia than each of their individual components, which could be attributed to synergy . The abbreviation NADES refers to DES that contain only materials sourced from renewable resources. Compared to other DES, these would typically be less hazardous.
Due to the recency of green solvent development, few laws related to their regulation have been developed beyond standard workplace safety precautions already in place, and laws that enforce the use of green solvents have not been widespread. | https://en.wikipedia.org/wiki/Green_solvent |
Green strength , or handling strength , can be defined as the strength of a material as it is processed, before it develops its final ultimate tensile strength . This strength is usually considerably lower than the final ultimate strength of the material. The term green strength is usually referenced when discussing non-metallic materials such as adhesives and elastomers (such as rubber). Recently, [ when? ] it has also been referenced in metallurgical applications such as powder metallurgy .
A joint made through the use of an adhesive can be referred to as an adhesive joint or bond.
The green strength of adhesives is the early development of bond strength of an adhesive. It indicates "that the adhesive bond is strong enough to be handled a short time after the adherents are mated but much before full cure is obtained." Usually, this strength is significantly lower than the final cured strength. Most adhesives typically have an initial green strength and a final ultimate tensile strength listed for their application. For household adhesives, this data is usually reflected on the packaging. [ 1 ]
The best example of this is seen in typical epoxies from a local hardware store. During curing , the epoxy enters an initial curing phase, also called the "green phase", when it begins to gel . At that point, the epoxy is no longer workable and will move from being tacky to a firm rubber-like texture. While the epoxy is only partially cured at this point, it has developed a lower green strength. Normally, this process occurs within 30 minutes to 1 hour. At this time, the part in question can be handled, but cannot carry large loads or stress. It typically takes up to 24 hours for a standard epoxy to cure to its final and complete strength. [ 2 ] [ 3 ]
Temperature is an important factor in the time it takes for an adhesive to develop green strength. While this can vary from adhesive to adhesive, generally speaking, heat can speed up both the development of green strength and the overall curing time. Time-temperature-transformation diagrams exist for various adhesives, relating time and temperature to the state of the adhesive during curing. This allows a proper understanding of when the green strength will be reached for an adhesive joint under given conditions. [ 1 ]
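As a rough illustration only — using the common rule of thumb that cure rate approximately doubles for every 10 °C increase, which is an assumption here and no substitute for the adhesive's actual time-temperature-transformation data — the time to reach green strength at a different temperature can be estimated:

```python
# Rough estimate of the time to reach green strength at a new temperature,
# assuming the cure rate doubles for every 10 degC increase (rule of thumb only).
def estimated_green_time_minutes(reference_minutes: float,
                                 reference_temp_c: float,
                                 actual_temp_c: float) -> float:
    return reference_minutes / 2 ** ((actual_temp_c - reference_temp_c) / 10.0)

# Epoxy that gels in 45 minutes at 20 degC, cured at 35 degC instead
print(round(estimated_green_time_minutes(45.0, 20.0, 35.0), 1))  # ~15.9 minutes
```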
Mechanical testing can be used to verify the green strength of a material. This allows the user to understand the amount of load that can be applied in the green phase before final cure. [ 1 ]
Tensile loading can be verified by various testing methods. Multiple ASTM specifications exist for the tensile testing of adhesives that are relatively easy to follow. Such tests involve attaching the adhesive to two adherents (typically wood or steel ) and then testing the joint with a pull-type test. One example is the use of ASTM Test Method D2095. In this test, the ends of two metal rods are polished so that they contain no burrs that could affect the adhesive bond, and machined so that the surfaces are parallel. The rods are then butted against each other with the adhesive joining them. As the adhesive cures and sets, the development of green strength can be verified by a pull test, putting the bond in full tensile load. [ 1 ]
Shear loading can also be tested with respect to green strength. Most adhesive bonds used in design place the bond primarily in shear rather than tension. Because of this, it is very important to understand the shear loading of a joint in relation to its green strength and final strength. Just as for tensile loading, ASTM provides specific testing methods for a joint in shear loading.
The standard lap shear specimen test is described in ASTM D1002. This is the most common and most widely discussed test method for adhesive bonds. In this method, the surface of each specimen is prepped and cleaned. The adhesive is then applied to the area that will be lapped; the lap length is generally 0.5" and the bond width 1". The bond is then fixtured and allowed to cure. For green strength testing, the fixture can be removed at the appropriate time and the specimen loaded in shear until it fails. This verifies the green strength of the material. [ 1 ]
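As an illustration of how such a test reduces to a strength value (the loads below are made-up examples, not values from ASTM D1002), the average lap-shear strength is simply the failure load divided by the bonded overlap area:

```python
# Lap-shear strength from a failure load and the bonded overlap area (illustrative values).
def lap_shear_strength_psi(failure_load_lbf: float,
                           overlap_in: float = 0.5,
                           width_in: float = 1.0) -> float:
    """Average shear stress at failure = load / bonded area."""
    return failure_load_lbf / (overlap_in * width_in)

# Green-strength check after 1 hour vs. full cure after 24 hours (made-up loads)
print(lap_shear_strength_psi(150.0))   # 300 psi while still "green"
print(lap_shear_strength_psi(1100.0))  # 2200 psi at full cure
```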
Other testing, such as cleavage loading and peel test, can be used to determine both the green strength and final strength of a material. These are typically not reflected on the data sheet for standard adhesives, but can be used for testing of adhesives based on their applications in residential and commercial environments. [ 1 ]
In the elastomer industry, green strength describes the strength of an elastomer in an unvulcanized, uncured state. The most popular referenced type of elastomer is rubber.
For rubber composites, green strength is essential during formation and manufacturing of materials such as radial tires , tank tracks , etc. These rubbers must be stretched from one mill to another during processing to form the final, vulcanized product. Green strength allows these transfers without tearing or wrinkling the workpiece.
To improve the green strength of elastomers and prevent issues during forming, various additives and compounds are typically added to the composite. Also, fabrication and forming techniques have been modified to reduce the amount of stress on the material before it is vulcanized. These techniques are a pertinent component of the tire making industry because it is a process that requires much forming, stretching, and bending during fabrication before the final curing is complete. [ 4 ]
Green strength of metals is typically [ weasel words ] referenced in the field of powder metallurgy .
Powder metallurgy refers to the fabrication of materials or components from powders. In powder metallurgy, the initial green strength is formed during compacting and forming. Increased complexity of parts and geometry have created a need for a higher green strength during this process. [ 5 ]
There are several limitations that restrict the ability to increase green strength in powder metallurgy components. Characteristics such as particle size and compressibility pose limits on the final green strength. [ 5 ]
Various studies have been undertaken to improve the green strength of powder metallurgy. The use of advanced lubricants and the addition of high alloys have shown that it is possible to increase the green strength of these materials. [ 5 ] | https://en.wikipedia.org/wiki/Green_strength |
In physics , in the area of quantum information theory , a Greenberger–Horne–Zeilinger ( GHZ ) state is an entangled quantum state that involves at least three subsystems (particle states, qubits , or qudits ). Named for the three authors who first described it, the GHZ state leads to experimental predictions that directly contradict those of every classical local hidden-variable theory . The state has applications in quantum computing .
The four-particle version was first studied by Daniel Greenberger , Michael Horne and Anton Zeilinger in 1989. [ 1 ] The following year Abner Shimony joined in and they published a three-particle version [ 2 ] based on suggestions
by N. David Mermin . [ 3 ] [ 4 ] Experimental measurements on such states contradict intuitive notions of locality and causality. GHZ states for large numbers of qubits are theorized to give enhanced performance for metrology compared to other qubit superposition states. [ 5 ]
The GHZ state is an entangled quantum state for 3 qubits and it can be written | G H Z ⟩ = | 000 ⟩ + | 111 ⟩ 2 . {\displaystyle |\mathrm {GHZ} \rangle ={\frac {|000\rangle +|111\rangle }{\sqrt {2}}}.} where the 0 or 1 values of the qubit correspond to any two physical states. For example the two states may correspond to spin-down and spin up along some physical axis. In physics applications the state may be written | G H Z ⟩ = | 1 , 1 , 1 ⟩ + | − 1 , − 1 , − 1 ⟩ 2 . {\displaystyle |\mathrm {GHZ} \rangle ={\frac {|1,1,1\rangle +|-1,-1,-1\rangle }{\sqrt {2}}}.} where the numbering of the states represents spin eigenvalues. [ 3 ]
Another example [ 6 ] of a GHZ state is three photons in an entangled state, with the photons being in a superposition of being all horizontally polarized (HHH) or all vertically polarized (VVV), with respect to some coordinate system . The GHZ state can be written in bra–ket notation as {\displaystyle |\mathrm {GHZ} \rangle ={\frac {|\mathrm {HHH} \rangle +|\mathrm {VVV} \rangle }{\sqrt {2}}}.}
Prior to any measurements being made, the polarizations of the photons are indeterminate. If a measurement is made on one of the photons using a two-channel polarizer aligned with the axes of the coordinate system, either orientation will be observed, each with 50% probability. However, the results of such measurements on all three photons are perfectly correlated: all three polarizations are found along the same axis.
The generalized GHZ state is an entangled quantum state of M > 2 subsystems. If each system has dimension d {\displaystyle d} , i.e., the local Hilbert space is isomorphic to C d {\displaystyle \mathbb {C} ^{d}} , then the total Hilbert space of an M {\displaystyle M} -partite system is H t o t = ( C d ) ⊗ M {\displaystyle {\mathcal {H}}_{\rm {tot}}=(\mathbb {C} ^{d})^{\otimes M}} . This GHZ state is also called an M {\displaystyle M} -partite qudit GHZ state.
Its formula as a tensor product is {\displaystyle |\mathrm {GHZ} \rangle ={\frac {1}{\sqrt {d}}}\sum _{i=0}^{d-1}|i\rangle ^{\otimes M}={\frac {|0\rangle ^{\otimes M}+\cdots +|d-1\rangle ^{\otimes M}}{\sqrt {d}}}.}
In the case of each of the subsystems being two-dimensional, that is for a collection of M qubits, it reads {\displaystyle |\mathrm {GHZ} \rangle ={\frac {|0\rangle ^{\otimes M}+|1\rangle ^{\otimes M}}{\sqrt {2}}}.}
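As a concrete illustration (not part of the source), the state vector of an M-partite, d-dimensional GHZ state can be constructed directly:

```python
import numpy as np

def ghz_state(num_parties: int, dim: int = 2) -> np.ndarray:
    """State vector of the M-partite qudit GHZ state (|0...0> + ... + |d-1...d-1>)/sqrt(d)."""
    state = np.zeros(dim ** num_parties, dtype=complex)
    for level in range(dim):
        # index of the basis state |level, level, ..., level> in base-d positional notation
        index = sum(level * dim ** k for k in range(num_parties))
        state[index] = 1 / np.sqrt(dim)
    return state

# 3-qubit GHZ state: amplitude 1/sqrt(2) on |000> and |111>
print(np.nonzero(ghz_state(3, 2))[0])   # [0 7]
# 3-qutrit GHZ state: amplitude 1/sqrt(3) on |000>, |111>, |222>
print(np.nonzero(ghz_state(3, 3))[0])   # [ 0 13 26]
```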
In the language of quantum computation , the polarization state of each photon is a qubit , the basis of which can be chosen to be {\displaystyle \{|\mathrm {H} \rangle ,|\mathrm {V} \rangle \}} .
With appropriately chosen phase factors for | H ⟩ {\displaystyle |\mathrm {H} \rangle } and | V ⟩ {\displaystyle |\mathrm {V} \rangle } , both types of measurements used in the experiment become Pauli measurements , with the two possible results represented as +1 and −1 respectively: a measurement with a circular polarizer corresponds to a Pauli Y {\displaystyle Y} measurement, and a measurement with a 45° linear polarizer to a Pauli X {\displaystyle X} measurement. [ citation needed ]
A combination of those measurements on each of the three qubits can be regarded as a destructive multi-qubit Pauli measurement, the result of which being the product of each single-qubit Pauli measurement. For example, the combination "circular polarizer on photons 1 and 2, 45° linear polarizer on photon 3" corresponds to a Y 1 Y 2 X 3 {\displaystyle Y_{1}Y_{2}X_{3}} measurement, and the four possible result combinations (RL+, LR+, RR−, LL−) are exactly the ones corresponding to an overall result of −1.
The quantum mechanical predictions of the GHZ experiment can then be summarized as {\displaystyle X_{1}X_{2}X_{3}=+1,\qquad X_{1}Y_{2}Y_{3}=Y_{1}X_{2}Y_{3}=Y_{1}Y_{2}X_{3}=-1,}
which is consistent in quantum mechanics because all these multi-qubit Paulis commute with each other, and {\displaystyle (X_{1}X_{2}X_{3})(X_{1}Y_{2}Y_{3})(Y_{1}X_{2}Y_{3})(Y_{1}Y_{2}X_{3})=-1,}
due to the anticommutativity between X {\displaystyle X} and Y {\displaystyle Y} .
These results lead to a contradiction in any local hidden variable theory, where each measurement must have definite (classical) values x i , y i = ± 1 {\displaystyle x_{i},y_{i}=\pm 1} determined by hidden variables, because {\displaystyle (x_{1}x_{2}x_{3})(x_{1}y_{2}y_{3})(y_{1}x_{2}y_{3})(y_{1}y_{2}x_{3})=x_{1}^{2}x_{2}^{2}x_{3}^{2}\,y_{1}^{2}y_{2}^{2}y_{3}^{2}}
must equal +1, not −1. [ 3 ]
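These predictions are easy to verify numerically. The following NumPy sketch (an illustration, not part of the source) checks the four quantum-mechanical expectation values and confirms that any assignment of fixed classical values x_i, y_i = ±1 forces the product of the four quantities to be +1:

```python
import itertools
import numpy as np

# Pauli matrices and the 3-qubit GHZ state (|000> + |111>)/sqrt(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
ghz = np.zeros(8, dtype=complex)
ghz[0] = ghz[7] = 1 / np.sqrt(2)

def kron3(a, b, c):
    return np.kron(np.kron(a, b), c)

# Quantum predictions: XXX -> +1, the three mixed combinations -> -1
for label, ops in [("XXX", (X, X, X)), ("XYY", (X, Y, Y)),
                   ("YXY", (Y, X, Y)), ("YYX", (Y, Y, X))]:
    value = np.real(ghz.conj() @ kron3(*ops) @ ghz)
    print(f"{label}: {value:+.0f}")

# Local hidden variables: fixed values x_i, y_i = ±1 always give product +1,
# while the quantum predictions multiply to (+1)(-1)(-1)(-1) = -1.
products = {x1*y2*y3 * y1*x2*y3 * y1*y2*x3 * x1*x2*x3
            for x1, x2, x3, y1, y2, y3 in itertools.product([1, -1], repeat=6)}
print(products)  # {1}
```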
The results of actual experiments agree with the predictions of quantum mechanics, not those of local realism. [ 7 ]
There is no standard measure of multi-partite entanglement because different, not mutually convertible, types of multi-partite entanglement exist. Nonetheless, many measures define the GHZ state to be a maximally entangled state . [ citation needed ]
Another important property of the GHZ state is that taking the partial trace over one of the three systems yields {\displaystyle \operatorname {Tr} _{3}{\big (}|\mathrm {GHZ} \rangle \langle \mathrm {GHZ} |{\big )}={\frac {|00\rangle \langle 00|+|11\rangle \langle 11|}{2}},}
which is an unentangled mixed state . It has certain two-particle (qubit) correlations, but these are of a classical nature . On the other hand, if we were to measure one of the subsystems in such a way that the measurement distinguishes between the states 0 and 1, we will leave behind either | 00 ⟩ {\displaystyle |00\rangle } or | 11 ⟩ {\displaystyle |11\rangle } , which are unentangled pure states. This is unlike the W state , which leaves bipartite entanglements even when we measure one of its subsystems. [ citation needed ]
A pure state | ψ ⟩ {\displaystyle |\psi \rangle } of N {\displaystyle N} parties is called biseparable , if one can find a partition of the parties in two nonempty disjoint subsets A {\displaystyle A} and B {\displaystyle B} with A ∪ B = { 1 , … , N } {\displaystyle A\cup B=\{1,\dots ,N\}} such that | ψ ⟩ = | ϕ ⟩ A ⊗ | γ ⟩ B {\displaystyle |\psi \rangle =|\phi \rangle _{A}\otimes |\gamma \rangle _{B}} , i.e. | ψ ⟩ {\displaystyle |\psi \rangle } is a product state with respect to the partition A | B {\displaystyle A|B} . The GHZ state is non-biseparable and is the representative of one of the two non-biseparable classes of 3-qubit states which cannot be transformed (not even probabilistically) into each other by local quantum operations , the other being the W state , | W ⟩ = ( | 001 ⟩ + | 010 ⟩ + | 100 ⟩ ) / 3 {\displaystyle |\mathrm {W} \rangle =(|001\rangle +|010\rangle +|100\rangle )/{\sqrt {3}}} . [ 8 ] : 903 Thus | G H Z ⟩ {\displaystyle |\mathrm {GHZ} \rangle } and | W ⟩ {\displaystyle |\mathrm {W} \rangle } represent two very different kinds of entanglement for three or more particles. [ 9 ] The W state is, in a certain sense "less entangled" than the GHZ state; however, that entanglement is, in a sense, more robust against single-particle measurements, in that, for an N -qubit W state, an entangled ( N − 1)-qubit state remains after a single-particle measurement. By contrast, certain measurements on the GHZ state collapse it into a mixture or a pure state.
Experiments on the GHZ state lead to striking non-classical correlations (1989). Particles prepared in this state lead to a version of Bell's theorem , which shows the internal inconsistency of the notion of elements-of-reality introduced in the famous Einstein–Podolsky–Rosen article. The first laboratory observation of GHZ correlations was by the group of Anton Zeilinger (1998), who was awarded a share of the 2022 Nobel Prize in physics for this work. [ 10 ] Many more accurate observations followed. The correlations can be utilized in some quantum information tasks. These include multipartner quantum cryptography (1998) and communication complexity tasks (1997, 2004).
Although a measurement of the third particle of the GHZ state that distinguishes the two states results in an unentangled pair, a measurement along an orthogonal direction can leave behind a maximally entangled Bell state . This is illustrated below.
The 3-qubit GHZ state can be written as {\displaystyle |\mathrm {GHZ} \rangle ={\frac {1}{2}}{\big (}(|00\rangle +|11\rangle )|+\rangle +(|00\rangle -|11\rangle )|-\rangle {\big )},}
where the third particle is written as a superposition in the X basis (as opposed to the Z basis) as | 0 ⟩ = ( | + ⟩ + | − ⟩ ) / 2 {\displaystyle |0\rangle =(|+\rangle +|-\rangle )/{\sqrt {2}}} and | 1 ⟩ = ( | + ⟩ − | − ⟩ ) / 2 {\displaystyle |1\rangle =(|+\rangle -|-\rangle )/{\sqrt {2}}} .
A measurement of the GHZ state along the X basis for the third particle then yields either | Φ + ⟩ = ( | 00 ⟩ + | 11 ⟩ ) / 2 {\displaystyle |\Phi ^{+}\rangle =(|00\rangle +|11\rangle )/{\sqrt {2}}} , if | + ⟩ {\displaystyle |+\rangle } was measured, or | Φ − ⟩ = ( | 00 ⟩ − | 11 ⟩ ) / 2 {\displaystyle |\Phi ^{-}\rangle =(|00\rangle -|11\rangle )/{\sqrt {2}}} , if | − ⟩ {\displaystyle |-\rangle } was measured. In the latter case, the phase can be rotated by applying a Z quantum gate to give | Φ + ⟩ {\displaystyle |\Phi ^{+}\rangle } , while in the former case, no additional transformations are applied. In either case, the result of the operations is a maximally entangled Bell state.
This example illustrates that what remains after measuring the GHZ state is more subtle than it first appears: a measurement along an orthogonal direction, followed by a quantum transform that depends on the measurement outcome, can leave behind a maximally entangled state .
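A minimal NumPy sketch of this measure-and-correct procedure (illustrative code, not from the source; qubits are ordered q1 q2 q3, with q3 as the last tensor factor):

```python
import numpy as np

# 3-qubit GHZ state (|000> + |111>)/sqrt(2); basis index = 4*q1 + 2*q2 + q3
ghz = np.zeros(8, dtype=complex)
ghz[0b000] = ghz[0b111] = 1 / np.sqrt(2)

plus  = np.array([1,  1], dtype=complex) / np.sqrt(2)   # |+>
minus = np.array([1, -1], dtype=complex) / np.sqrt(2)   # |->
Z = np.diag([1.0, -1.0]).astype(complex)
I = np.eye(2, dtype=complex)

def project_q3(state, outcome):
    """Project qubit 3 onto <outcome| and renormalize the remaining 2-qubit state."""
    projector = np.kron(np.eye(4), outcome.conj()[None, :])   # shape (4, 8)
    reduced = projector @ state
    return reduced / np.linalg.norm(reduced)

bell_phi_plus = np.zeros(4, dtype=complex)
bell_phi_plus[0b00] = bell_phi_plus[0b11] = 1 / np.sqrt(2)

for outcome, name in [(plus, "+"), (minus, "-")]:
    post = project_q3(ghz, outcome)
    if name == "-":                       # outcome |->: fix the phase with a Z gate on qubit 2
        post = np.kron(I, Z) @ post
    print(name, np.allclose(post, bell_phi_plus))   # True for both outcomes
```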
GHZ states are used in several protocols in quantum communication and cryptography, for example, in secret sharing [ 11 ] or in the quantum Byzantine agreement . | https://en.wikipedia.org/wiki/Greenberger–Horne–Zeilinger_state |
Developed in Sweden, the Greenfish recirculation technology is a water purification technology for sustainable aquaculture production in closed indoor freshwater systems. It was developed at University of Gothenburg by Björn Lindén in collaboration with Chalmers associate professor Torsten Wik , under the supervision of professor emeritus Gustaf Olsson at Lund University of Technology.
Several published articles [ 1 ] , [ 2 ] , [ 3 ] have appeared, as well as verification of the system in full-scale farming operations with wet feed and semi-moist fish feed. One of the most important describes an advanced simulator for a full-scale recirculating aquaculture system, with algorithms for complete mass-balance calculations involving the growth of fish, addition of fish feeds, production of waste, bacterial growth and the dynamics of the water purification system.
In the system no less than 28 different parameters of bacterial substrates are described to simulate [ clarification needed Surely the parameters do more than simulate. Or do we mean they are variables within a simulation? ] the water purification dynamics of the system.
The microbiological fundamentals and the water purification technology and engineering rest on an extensive body of scientific knowledge, as can be followed in further references
, [ 4 ] , [ 5 ] , [ 6 ] , [ 7 ] , [ 8 ] , [ 9 ] , [ 10 ] , [ 11 ] , [ 12 ] , [ 13 ] , [ 14 ] , [ 15 ] , [ 16 ] , [ 17 ] , [ 18 ] , [ 19 ] , [ 20 ] [ 21 ] [ excessive citations ] . | https://en.wikipedia.org/wiki/Greenfish_recirculation_technology |
Greenhouse Gases Observing Satellite ( GOSAT ), also known as Ibuki ( Japanese : いぶき , Hepburn : Ibuki , meaning "breath" [ 4 ] ) , is an Earth observation satellite and the world's first satellite dedicated to greenhouse gas monitoring . [ 5 ] It measures the densities of carbon dioxide and methane from 56,000 locations on the Earth's atmosphere . [ 6 ] The GOSAT was developed by the Japan Aerospace Exploration Agency ( JAXA ) and launched on 23 January 2009, from the Tanegashima Space Center . [ 6 ] Japan's Ministry of the Environment , and the National Institute for Environmental Studies (NIES) [ 7 ] use the data to track gases causing the greenhouse effect , and share the data with NASA and other international scientific organizations. [ 5 ]
GOSAT was launched along with seven other piggyback probes using the H-IIA , Japan's primary large-scale expendable launch system , at 3:54 am UTC on 23 January 2009 from Tanegashima , a small island in southern Japan, after a two-day delay due to unfavourable weather. [ 6 ] [ 5 ] At approximately 16 minutes after liftoff, the separation of Ibuki from the launch rocket was confirmed. [ 8 ]
According to JAXA, the Ibuki satellite is equipped with a greenhouse gas observation sensor (TANSO-FTS) and a cloud/aerosol sensor (TANSO-CAI) that supplements TANSO-FTS. The greenhouse gas observation sensor of Ibuki observes a wide range of wavelengths (near- infrared region–thermal infrared region) within the infrared band to enhance observation accuracy. [ 8 ] The satellite uses a spectrometer to measure different elements and compounds based on their response to certain types of light. This technology allows the satellite to measure "the concentration of greenhouse gases in the atmosphere at a super-high resolution." [ 9 ]
The Greenhouse Gases Observing Satellite-2 was launched from Tanegashima Space Center by a H-IIA rocket on October 29, 2018. [ 10 ] | https://en.wikipedia.org/wiki/Greenhouse_Gases_Observing_Satellite |
The Greenhouse Gases Observing Satellite-2 ( GOSAT-2 ), also known as Ibuki-2 ( Japanese : いぶき2号 , Hepburn : Ibuki nigō ) , is an Earth observation satellite dedicated to greenhouse gas monitoring . It is a successor of Greenhouse Gases Observing Satellite (GOSAT). The GOSAT-2 was developed as a joint project of the Japan Aerospace Exploration Agency ( JAXA ), Ministry of the Environment , and the National Institute for Environmental Studies (NIES). It was launched on 29 October 2018 from the Tanegashima Space Center aboard the H-IIA rocket. [ citation needed ]
Major changes in comparison to the previous GOSAT are: [ 4 ]
As of November 2023 [update] , GOSAT-GW (Ibuki-GW), the successor of GOSAT-2 and GCOM-W "Shizuku" , is under development for launch in JFY 2024 on the last flight of the H-IIA launch vehicle . [ 5 ] | https://en.wikipedia.org/wiki/Greenhouse_Gases_Observing_Satellite-2 |
Greenhouse gas monitoring is the direct measurement of greenhouse gas emissions and levels. There are several different methods of measuring carbon dioxide concentrations in the atmosphere , including infrared analyzing and manometry . Methane and nitrous oxide are measured by other instruments. Greenhouse gases are measured from space such as by the Orbiting Carbon Observatory and networks of ground stations such as the Integrated Carbon Observation System .
Manometry is a key measurement technique for atmospheric carbon dioxide: the volume, temperature, and pressure of a particular amount of dry air are measured first. The air sample is dried by passing it through multiple dry ice traps and then collected in a five-liter vessel. The temperature is taken via a thermometer and the pressure is measured manometrically. Then, liquid nitrogen is added, causing the carbon dioxide to condense so that it can be measured by volume. [ 1 ] The ideal gas law is accurate to 0.3% in these pressure conditions.
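As an illustration of the underlying arithmetic (a sketch with made-up example values, not data from any measurement programme), the ideal gas law converts the measured pressure, volume, and temperature into an amount of gas, from which the CO2 mole fraction follows:

```python
# Illustrative ideal-gas arithmetic for a manometric CO2 measurement.
# The numeric values are made-up examples, not data from the source.

R = 8.314462  # J / (mol K), universal gas constant

def moles(pressure_pa: float, volume_m3: float, temperature_k: float) -> float:
    """Amount of gas (mol) from the ideal gas law n = pV / (RT)."""
    return pressure_pa * volume_m3 / (R * temperature_k)

# Whole dried-air sample in the 5-litre vessel
n_air = moles(pressure_pa=101_325, volume_m3=5.0e-3, temperature_k=293.15)

# CO2 extracted from that sample, re-measured in a small calibrated volume
n_co2 = moles(pressure_pa=25_300, volume_m3=8.0e-6, temperature_k=293.15)

print(f"air: {n_air:.4f} mol, CO2: {n_co2 * 1e6:.1f} umol")
print(f"CO2 mole fraction: {n_co2 / n_air * 1e6:.0f} ppm")   # ~400 ppm
```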
Infrared analyzers were used at Mauna Loa Observatory and at Scripps Institution of Oceanography between 1958 and 2006. IR analyzers operate by pumping an unknown sample of dry air through a 40 cm long cell. A reference cell contains dry carbon dioxide -free air. [ 1 ] A glowing nichrome filament radiates broadband IR radiation , which is split into two beams that pass through the gas cells. Carbon dioxide absorbs some of the radiation , so more radiation reaches the detector through the reference cell than through the sample cell. Data is collected on a strip chart recorder. The concentration of carbon dioxide in the sample is quantified by calibrating with a standard gas of known carbon dioxide content. [ 1 ]
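In practice the calibration step amounts to fitting a response curve to detector readings taken on reference gases and then inverting it for the unknown sample. The sketch below (Python) is illustrative only; the standard concentrations, detector readings, and the choice of a quadratic fit are assumptions, not the actual Mauna Loa procedure.

import numpy as np

std_ppm = np.array([250.0, 320.0, 400.0, 480.0])      # CO2 standards of known content
std_signal = np.array([0.412, 0.529, 0.660, 0.790])   # corresponding detector readings

# Fit concentration as a function of detector signal (quadratic allows mild nonlinearity).
coeffs = np.polyfit(std_signal, std_ppm, deg=2)

sample_signal = 0.615                                  # reading for the unknown air sample
print(f"Estimated CO2 concentration ≈ {np.polyval(coeffs, sample_signal):.1f} ppm")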
Titrimetry is another method of measuring atmospheric carbon dioxide that was first used by a Scandinavian group at 15 different ground stations. The method begins by passing a 100.0 mL air sample through a solution of barium hydroxide containing cresolphthalein indicator. [ 1 ]
Range-resolved infrared differential absorption lidar (DIAL) is a means of measuring methane emissions from various sources, including active and closed landfill sites. [ 2 ] The DIAL takes vertical scans above methane sources and then spatially separates the scans to accurately measure the methane emissions from individual sources. Measuring methane emissions is a crucial aspect of climate change research , as methane is among the most impactful gaseous hydrocarbon species. [ 2 ]
Nitrous oxide is one of the most prominent anthropogenic ozone-depleting gases in the atmosphere. [ 3 ] It is released into the atmosphere primarily through natural sources such as soil and rock, as well as anthropogenic processes such as farming. Atmospheric nitrous oxide is also created in the atmosphere as a product of a reaction between nitrogen and electronically excited ozone in the lower thermosphere .
The Atmospheric Chemistry Experiment‐Fourier Transform Spectrometer ( ACE-FTS ) is a tool used for measuring nitrous oxide concentrations in the upper to lower troposphere . This instrument, which is attached to the Canadian satellite SCISAT , has shown that nitrous oxide is present throughout the entire atmosphere during all seasons, primarily due to energetic particle precipitation. [ 3 ] Measurements taken by the instrument show that different reactions create nitrous oxide in the lower thermosphere than in the mid to upper mesosphere . The ACE-FTS is a crucial resource in predicting future ozone depletion in the upper stratosphere by comparing the different ways in which nitrous oxide is released into the atmosphere. [ 3 ]
The Orbiting Carbon Observatory (OCO) was first launched in February 2009 but was lost due to launch failure. [ 4 ] The satellite was launched again in 2014, this time as the Orbiting Carbon Observatory-2 , with an estimated lifespan of about two years. The apparatus uses spectrometers to take 24 carbon dioxide concentration measurements per second of Earth's atmosphere . [ 5 ] The measurements taken by OCO-2 can be used for global atmospheric models and will allow scientists to locate carbon sources when its data is paired with wind patterns . The Orbiting Carbon Observatory-3 operates from the International Space Station (ISS). [ 4 ]
Satellite observations provide accurate readings of carbon dioxide and methane gas concentrations for short-term and long-term purposes in order to detect changes over time. [ 6 ] The goals of this satellite , launched in January 2009, are to monitor both carbon dioxide and methane gas in the atmosphere and to identify their sources. [ 6 ] GOSAT is a project of three main entities: the Japan Aerospace Exploration Agency (JAXA), the Ministry of the Environment (MOE), and the National Institute for Environmental Studies (NIES). [ 6 ]
The Integrated Carbon Observation System was established in October 2015 in Helsinki, Finland as a European Research Infrastructure Consortium (ERIC) . [ 7 ] The main task of ICOS is to establish an Integrated Carbon Observation System Research Infrastructure (ICOS RI) that facilitates research on greenhouse gas emissions , sinks , and their causes. The ICOS ERIC strives to link its own research with other greenhouse gas emissions research to produce coherent data products and to promote education and innovation . [ 7 ]
Among the common methods for measuring emissions are top-down approaches, which rely on atmospheric measurements, and bottom-up methods, which utilize ground-based sensors. Each of these methods has its advantages and limitations. An integrated real-time monitoring system can address these challenges by detecting leaks in near real-time and providing actionable insights for stakeholders to enable effective mitigation strategies. However, implementing such a system presents significant challenges and difficulties that must be carefully considered. [ 8 ] | https://en.wikipedia.org/wiki/Greenhouse_gas_monitoring |
In organometallic chemistry , the Green–Davies–Mingos rules predict the regiochemistry for nucleophilic addition to 18-electron metal complexes containing multiple unsaturated ligands . [ 1 ] The rules were published in 1978 by organometallic chemists Stephen G. Davies , Malcolm Green , and Michael Mingos . They describe how and where unsaturated hydrocarbon ligands generally become more susceptible to nucleophilic attack upon complexation. [ 1 ]
Nucleophilic attack is preferred on even-numbered polyenes (even hapticity ). [ 1 ]
Nucleophiles preferentially add to acyclic polyenes rather than cyclic polyenes. [ 1 ]
Nucleophiles preferentially add to even-hapticity polyene ligands at a terminus. [ 1 ] Nucleophiles add to odd-hapticity acyclic polyene ligands at a terminal position if the metal is highly electrophilic; otherwise they add at an internal site.
Simplified: even before odd and open before closed
The following is a diagram showing the reactivity trends of even/odd hapticity and open/closed π-ligands.
The metal center is electron-withdrawing, and this effect is enhanced if the metal is also attached to a carbonyl. Electron-poor metals do not back-bond well to the carbonyl: the more electron-withdrawing the metal, the more triple-bond character the CO ligand has, which gives the ligand a higher force constant. The force constant found for a ligated carbonyl is taken to represent the force constant that π ligands would have if they replaced the CO ligand in the same complex.
Nucleophilic addition does not occur if kCO* (the effective force constant for the CO ligand) is below a threshold value. [ 2 ]
The following figure shows a ligated metal attached to a carbonyl group. This group has a partial positive charge and therefore is susceptible to nucleophilic attack. If the ligand represented by Lₙ were a π-ligand, it would be activated toward nucleophilic attack as well.
Incoming nucleophilic attack happens at one of the termini of the π-system in the figure below:
In this example the ring system can be thought of as analogous to 1,3-butadiene. Following the Green–Davies–Mingos rules, since butadiene is an open π-ligand of even hapticity, nucleophilic attack will occur at one of the terminal positions of the π-system. This occurs because the LUMO of butadiene has larger lobes on the ends rather than the internal positions.
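As a rough illustration of how these preferences combine, the short sketch below (Python) encodes the simplified rules as a decision procedure. The function name, labels, and the coarse treatment of metal electrophilicity are illustrative assumptions, not a standard implementation of the rules.

def preferred_attack(hapticity, is_open, metal_electrophilic=True):
    """Rough Green–Davies–Mingos-style prediction of where a nucleophile adds."""
    if hapticity % 2 == 0:
        site = "terminal carbon"          # even-hapticity polyenes: attack at a terminus
    else:
        # odd-hapticity (e.g. allyl, dienyl) ligands: terminal attack only on
        # strongly electrophilic metals, otherwise internal attack
        site = "terminal carbon" if metal_electrophilic else "internal carbon"
    openness = "open (acyclic)" if is_open else "closed (cyclic)"
    return f"{openness} eta-{hapticity} ligand -> attack at {site}"

# Butadiene-like ligand from the example above: even hapticity, open chain.
print(preferred_attack(hapticity=4, is_open=True))
# Allyl ligand on an electron-rich metal bearing sigma-donor ligands.
print(preferred_attack(hapticity=3, is_open=True, metal_electrophilic=False))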
Nucleophilic attack occurs at the terminal position of allyl ligands when a π-accepting ligand is present. [ 3 ]
If σ-donating ligands are present, they push electron density onto the allyl ligand and attack occurs at the internal position.
When asymmetrical allyl ligands are present, attack occurs at the more substituted position. [ 4 ]
In this case the attack will occur on the carbon with both R groups attached to it since that is the more substituted position.
Nucleophilic addition to π ligands can be used in synthesis; one example is the preparation of cyclic metal compounds. [ 5 ] Nucleophiles add to the central carbon of the π ligand to produce a metallacyclobutane. | https://en.wikipedia.org/wiki/Green–Davies–Mingos_rules |
The Green–Kubo relations ( Melville S. Green 1954, Ryogo Kubo 1957) give the exact mathematical expression for a transport coefficient $\gamma$ in terms of the integral of the equilibrium time correlation function of the time derivative of a corresponding microscopic variable $A$ (sometimes termed a "gross variable", as in [ 1 ] ):
\[ \gamma = \int_{0}^{\infty} \left\langle \dot{A}(t)\,\dot{A}(0) \right\rangle \mathrm{d}t . \]
One intuitive way to understand this relation is that relaxations resulting from random fluctuations in equilibrium are indistinguishable from those due to an external perturbation in linear response. [ 2 ]
Green-Kubo relations are important because they relate a macroscopic transport coefficient to the correlation function of a microscopic variable. In addition, they allow one to measure the transport coefficient without perturbing the system out of equilibrium, which has found much use in molecular dynamics simulations. [ 3 ]
Thermodynamic systems may be prevented from relaxing to equilibrium because of the application of a field (e.g. electric or magnetic field), or because the boundaries of the system are in relative motion (shear) or maintained at different temperatures, etc. This generates two classes of nonequilibrium system: mechanical nonequilibrium systems and thermal nonequilibrium systems.
The standard example of an electrical transport process is Ohm's law , which states that, at least for sufficiently small applied voltages, the current I is linearly proportional to the applied voltage V ,
\[ I = GV , \]
The coefficient of proportionality G is the electrical conductance , which is the reciprocal of the electrical resistance. As the applied voltage increases, one expects to see deviations from linear behavior.
The standard example of a mechanical transport process is Newton's law of viscosity , which states that the shear stress $S_{xy}$ is linearly proportional to the strain rate. The strain rate $\gamma$ is the rate of change of the streaming velocity in the x-direction with respect to the y-coordinate, $\gamma \mathrel{\stackrel{\mathrm{def}}{=}} \partial u_x / \partial y$ . Newton's law of viscosity states
\[ S_{xy} = \eta \gamma . \]
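As a concrete illustration (using a typical handbook value, so the numbers are indicative only): water at room temperature has $\eta \approx 1.0 \times 10^{-3}\ \mathrm{Pa\,s}$, so a strain rate of $\gamma = 100\ \mathrm{s^{-1}}$ produces a shear stress of $S_{xy} = \eta\gamma \approx 0.1\ \mathrm{Pa}$.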
As the strain rate increases, we expect to see deviations from linear behavior:
\[ S_{xy} = \eta(\gamma)\,\gamma . \]
Another well known thermal transport process is Fourier's law of heat conduction, stating that the heat flux between two bodies maintained at different temperatures is proportional to the temperature gradient (the temperature difference divided by the spatial separation).
Regardless of whether transport processes are stimulated thermally or mechanically, in the small field limit it is expected that a flux will be linearly proportional to an applied field. In the linear case the flux and the force are said to be conjugate to each other. The relation between a thermodynamic force F and its conjugate thermodynamic flux J is called a linear constitutive relation,
\[ J = L(F_e{=}0)\, F_e . \]
L (0) is called a linear transport coefficient. In the case of multiple forces and fluxes acting simultaneously, the fluxes and forces will be related by a linear transport coefficient matrix. Except in special cases, this matrix is symmetric as expressed in the Onsager reciprocal relations .
In the 1950s Green and Kubo proved an exact expression for linear transport coefficients which is valid for systems of arbitrary temperature T, and density. They proved that linear transport coefficients are exactly related to the time dependence of equilibrium fluctuations in the conjugate flux,
\[ L(F_e{=}0) = \beta V \int_{0}^{\infty} \mathrm{d}s\, \left\langle J(0)\,J(s) \right\rangle_{F_e=0} , \]
where $\beta = 1/(kT)$ (with k the Boltzmann constant), and V is the system volume. The integral is over the equilibrium flux autocovariance function. At zero time the autocovariance is positive since it is the mean square value of the flux at equilibrium. Note that at equilibrium the mean value of the flux is zero by definition. At long times the flux at time t , J ( t ), is uncorrelated with its value a long time earlier J (0) and the autocorrelation function decays to zero. This remarkable relation is frequently used in molecular dynamics computer simulation to compute linear transport coefficients; see Evans and Morriss, "Statistical Mechanics of Nonequilibrium Liquids" , Academic Press, 1990.
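In a molecular dynamics setting the recipe is: record the equilibrium flux J(t), form its autocovariance, and integrate it over time. The following minimal sketch (Python) illustrates the estimator on a synthetic flux signal standing in for MD output; the values of β, V, the time step, and the synthetic signal itself are placeholder assumptions.

import numpy as np

rng = np.random.default_rng(0)
dt, n_steps, tau = 0.001, 100_000, 0.05      # time step, trajectory length, correlation time
J = np.zeros(n_steps)
for i in range(1, n_steps):                  # synthetic equilibrium flux (stand-in for MD output)
    J[i] = J[i-1] * (1 - dt / tau) + np.sqrt(2 * dt / tau) * rng.normal()

def autocovariance(x, max_lag):
    """Equilibrium autocovariance <x(0) x(s)> estimated by time averaging."""
    x = x - x.mean()
    return np.array([np.dot(x[:x.size - k], x[k:]) / (x.size - k) for k in range(max_lag)])

beta, volume = 1.0, 1.0                                   # placeholder values of 1/kT and V
acf = autocovariance(J, max_lag=int(10 * tau / dt))       # out to ~10 correlation times
L = beta * volume * dt * (0.5 * acf[0] + acf[1:].sum())   # trapezoid-rule Green–Kubo integral
print(f"Estimated transport coefficient L ≈ {L:.4f}")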
In 1985 Denis Evans and Morriss derived two exact fluctuation expressions for nonlinear transport coefficients; see Evans and Morriss, Mol. Phys. 54 , 629 (1985). Evans later argued that these are consequences of the extremization of free energy in response theory as a free energy minimum . [ 4 ]
Evans and Morriss proved that in a thermostatted system that is at equilibrium at t = 0, the nonlinear transport coefficient can be calculated from the so-called transient time correlation function expression:
\[ L(F_e) = \beta V \int_{0}^{\infty} \mathrm{d}s\, \left\langle J(0)\,J(s) \right\rangle_{F_e} , \]
where the equilibrium ( $F_e = 0$ ) flux autocorrelation function is replaced by a thermostatted, field-dependent transient autocorrelation function. At time zero $\left\langle J(0) \right\rangle_{F_e} = 0$ , but at later times, since the field is applied, $\left\langle J(t) \right\rangle_{F_e} \neq 0$ .
Another exact fluctuation expression derived by Evans and Morriss is the so-called Kawasaki expression for the nonlinear response:
\[ \left\langle J(t;F_e) \right\rangle = \left\langle J(0) \exp\!\left[ -\beta V \int_{0}^{t} J(-s)\, F_e \,\mathrm{d}s \right] \right\rangle_{F_e} . \]
The ensemble average of the right hand side of the Kawasaki expression is to be evaluated under the application of both the thermostat and the external field. At first sight the transient time correlation function (TTCF) and Kawasaki expression might appear to be of limited use because of their innate complexity. However, the TTCF is quite useful in computer simulations for calculating transport coefficients. Both expressions can be used to derive new and useful fluctuation expressions for quantities like specific heats in nonequilibrium steady states. Thus they can be used as a kind of partition function for nonequilibrium steady states.
For a thermostatted steady state, time integrals of the dissipation function are related to the dissipative flux, J, by the equation
\[ \bar{\Omega}_t = -\beta\, \overline{J}_t\, V F_e . \]
We note in passing that the long time average of the dissipation function is a product of the thermodynamic force and the average conjugate thermodynamic flux. It is therefore equal to the spontaneous entropy production in the system. The spontaneous entropy production plays a key role in linear irreversible thermodynamics – see de Groot and Mazur "Non-equilibrium thermodynamics" Dover.
The fluctuation theorem (FT) is valid for arbitrary averaging times, t. Let us apply the FT in the long time limit while simultaneously reducing the field so that the product $F_e^{2} t$ is held constant,
\[ \lim_{t\to\infty,\; F_e\to 0} \frac{1}{t} \ln\!\left( \frac{ p\!\left( \beta\overline{J}_t = A \right) }{ p\!\left( \beta\overline{J}_t = -A \right) } \right) = -\lim_{t\to\infty,\; F_e\to 0} A V F_e , \qquad F_e^{2}\, t = c . \]
Because of the particular way we take the double limit, the negative of the mean value of the flux remains a fixed number of standard deviations away from the mean as the averaging time increases (narrowing the distribution) and the field decreases. This means that as the averaging time gets longer, the distribution near the mean flux and its negative is accurately described by the central limit theorem ; that is, the distribution is Gaussian near the mean and its negative, so that
\[ \lim_{t\to\infty,\; F_e\to 0} \frac{1}{t} \ln\!\left( \frac{ p\!\left( \overline{J}_t = A \right) }{ p\!\left( \overline{J}_t = -A \right) } \right) = \lim_{t\to\infty,\; F_e\to 0} \frac{ 2A \left\langle J \right\rangle_{F_e} }{ t\, \sigma_{\overline{J}(t)}^{2} } . \]
Combining these two relations yields (after some tedious algebra!) the exact Green–Kubo relation for the linear zero field transport coefficient, namely,
\[ L(0) = \beta V \int_{0}^{\infty} \mathrm{d}t\, \left\langle J(0)\,J(t) \right\rangle_{F_e=0} . \]
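For readers who want the omitted algebra, a brief sketch follows; it assumes only the standard rescaling between the two probability ratios above and the stated Gaussian (central limit theorem) form, and the overall sign depends on the convention adopted for the dissipative flux. Equating the two limits gives
\[ \frac{2 \left\langle J \right\rangle_{F_e}}{\beta\, t\, \sigma_{\overline{J}(t)}^{2}} = -V F_e \qquad \Longrightarrow \qquad \left\langle J \right\rangle_{F_e} = -\tfrac{1}{2}\, \beta V F_e\, t\, \sigma_{\overline{J}(t)}^{2} . \]
In the zero-field limit the central limit theorem relates the variance of the time-averaged flux to the equilibrium autocovariance,
\[ \lim_{t\to\infty} t\, \sigma_{\overline{J}(t)}^{2} = 2 \int_{0}^{\infty} \mathrm{d}t\, \left\langle J(0)\,J(t) \right\rangle_{F_e=0} , \]
so the proportionality between the average flux and the field is governed by the coefficient $L(0) = \beta V \int_{0}^{\infty} \mathrm{d}t\, \langle J(0) J(t) \rangle_{F_e=0}$, which is the Green–Kubo expression above.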
Here are the details of the proof of Green–Kubo relations from the FT. [ 5 ] A proof using only elementary quantum mechanics was given by Robert Zwanzig . [ 6 ]
This shows the fundamental importance of the fluctuation theorem (FT) in nonequilibrium statistical mechanics.
The FT gives a generalisation of the second law of thermodynamics . It is then easy to prove the second law inequality and the Kawasaki identity. When combined with the central limit theorem , the FT also implies the Green–Kubo relations for linear transport coefficients close to equilibrium. The FT is, however, more general than the Green–Kubo Relations because, unlike them, the FT applies to fluctuations far from equilibrium. In spite of this fact, no one has yet been able to derive the equations for nonlinear response theory from the FT.
The FT does not imply or require that the distribution of time-averaged dissipation is Gaussian. There are many examples known when the distribution is non-Gaussian and yet the FT still correctly describes the probability ratios. | https://en.wikipedia.org/wiki/Green–Kubo_relations |
Gregory Michael Fahy [ 1 ] is a California-based cryobiologist , biogerontologist , and businessman. He is the Vice President and Chief Scientific Officer at 21st Century Medicine , Inc, and has co-founded Intervene Immune , a company developing clinical methods to reverse immune system aging. [ 2 ] He was the 2022–2023 president of the Society for Cryobiology . [ 3 ]
A native of California, Fahy holds a Bachelor of Science degree in biology from the University of California, Irvine and a PhD in pharmacology and cryobiology from the Medical College of Georgia in Augusta . [ 4 ]
He currently serves on the board of directors of two organizations [ which? ] and as a referee for numerous scientific journals and funding agencies, and holds 35 patents on cryopreservation methods, aging interventions, transplantation, and other topics. [ citation needed ]
Fahy is the world's foremost expert in organ cryopreservation by vitrification . [ 5 ] [ 6 ] [ 7 ] Fahy introduced the modern successful approach to vitrification for cryopreservation in cryobiology [ 8 ] [ 9 ] [ 10 ] [ 11 ] [ 12 ] [ 13 ] [ 14 ] [ 15 ] and he is widely credited, along with William F. Rall, for introducing vitrification into the field of reproductive biology . [ 11 ] [ 16 ]
In 2005, as a keynote speaker at the annual Society for Cryobiology meeting, Fahy announced that 21st Century Medicine had successfully cryopreserved a rabbit kidney at −130 °C by vitrification and transplanted it into a rabbit after rewarming, with subsequent long-term life support by the vitrified-rewarmed kidney as the sole kidney. This research breakthrough was later published in the peer-reviewed journal Organogenesis . [ 7 ]
Fahy is also a biogerontologist and is the originator and Editor-in-Chief of The Future of Aging: Pathways to Human Life Extension , a multi-authored book on the future of biogerontology . [ 17 ] He currently serves on the editorial boards of Rejuvenation Research and the Open Geriatric Medicine Journal and served for 16 years as a Director of the American Aging Association and for 6 years as the editor of AGE News , the organization's newsletter.
As a scientist with the American Red Cross , Fahy was the originator of the first practical method of cryopreservation by vitrification and the inventor of computer-based systems to apply this technology to whole organs . Before joining Twenty-First Century Medicine, he was the chief scientist for Organ, Inc and of LRT, Inc. He was also Head of the Tissue Cryopreservation Section of the Transfusion and Cryopreservation Research Program of the U.S. Naval Medical Research Institute in Bethesda, Maryland where he spearheaded the original concept of ice blocking agents. In 2014, he was named a Fellow of the Society for Cryobiology in recognition of the impact of his work in low temperature biology. [ 18 ]
In 2015–2017, Fahy led the TRIIM (Thymus Regeneration, Immunorestoration, and Insulin Mitigation) human clinical trial, designed to reverse aspects of human aging. The purpose of the TRIIM trial was to investigate the possibility of using recombinant human growth hormone (rhGH) to prevent or reverse signs of immunosenescence in ten 51‐ to 65‐year‐old putatively healthy men. The study observed protective immunological changes, improved risk indices for many age‐related diseases, and a mean epigenetic age approximately 1.5 years less than baseline after 1 year of treatment (−2.5‐year change compared to no treatment at the end of the study). [ 2 ]
Fahy was named as a Fellow of the Society for Cryobiology in 2014, [ 19 ] and in 2010 he received the Distinguished Scientist Award for Reproductive Biology from the Reproductive Biology Professional Group of the American Society of Reproductive Medicine. [ citation needed ] He received the Cryopreservation Award from the International Longevity and Cryopreservation Summit held in Madrid, Spain in 2017 in recognition of his career in and dedication to the field of cryobiology . Fahy also received the Grand Prize for Medicine from INPEX in 1995 for his invention of computerized organ cryoprotectant perfusion technology. In 2005, he was recognized as a Fellow of the American Aging Association . [ citation needed ] | https://en.wikipedia.org/wiki/Greg_Fahy |
Gregorio Baró (June 19, 1928 - May 28, 2012) was an Argentine scientist. He was born in Santiago Temple, Córdoba and died in Buenos Aires .
The son of Spanish immigrants from the Province of León , more precisely from Cabreros del Río , Baró married the writer María Dhialma Tiberti . He completed his Associate of Science in Chemistry degree at the Otto Krause Technical School in Buenos Aires , in 1945. Afterward, he pursued his studies at Universidad de Buenos Aires from which he obtained a Bachelor of Science, followed by a PhD in Chemistry in 1961 at the Instituut voor Kernphysisch Onderzoek , in Amsterdam . [ 1 ] In 1968, he conducted research on the production of radioisotopes in Bombay , India, organized by the International Atomic Energy Agency . [ 2 ]
Baró was additionally a professor at several universities, such as Universidad de Buenos Aires , Universidad Nacional de La Plata , Universidad Nacional de Cuyo , Universidad Nacional de Rosario , and Universidad Nacional del Litoral . [ citation needed ] He was named Emeritus Researcher of the National Atomic Energy Commission in 2010, following 40 years of institutional work and reaching the rank of Director. [ 3 ] He was also awarded a Doctor honoris causa in Radiochemistry from the Higher University of San Andres , Bolivia , notably for his work in discovering new isotopes of ruthenium , rhodium , rhenium , tungsten , and osmium , [ 4 ] [ 5 ] and for the development of a contrast agent for Magnetic Resonance Imaging during retirement.
He was the Argentinian representative of the International Union of Pure and Applied Chemistry (IUPAC) for several years. In addition, he served as a consultant for the Comisión de Energía Atómica de Bolivia, the Comisión Chilena de Energía Nuclear, the Instituto de Asuntos Nucleares de Colombia, the International Atomic Energy Agency in Asunción , Paraguay , the Centro Atómico del Perú, and the government of Uruguay . | https://en.wikipedia.org/wiki/Gregorio_Baro |
Gregory John Chaitin ( / ˈ tʃ aɪ t ɪ n / CHY -tin ; born 25 June 1947) is an Argentine - American mathematician and computer scientist . Beginning in the late 1960s, Chaitin made contributions to algorithmic information theory and metamathematics , in particular a computer-theoretic result equivalent to Gödel's incompleteness theorem . [ 2 ] He is considered to be one of the founders of what is today known as algorithmic (Solomonoff–Kolmogorov–Chaitin, Kolmogorov or program-size) complexity together with Andrei Kolmogorov and Ray Solomonoff . [ 3 ] Along with the works of e.g. Solomonoff , Kolmogorov , Martin-Löf , and Leonid Levin , algorithmic information theory became a foundational part of theoretical computer science , information theory , and mathematical logic . [ 4 ] [ 5 ] It is a common subject in several computer science curricula. Besides computer scientists, Chaitin's work has drawn the attention of many philosophers and mathematicians to fundamental problems in mathematical creativity and digital philosophy.
Gregory Chaitin is Jewish . He attended the Bronx High School of Science and the City College of New York , where he (still in his teens) developed the theory that led to his independent discovery of algorithmic complexity . [ 6 ] [ 7 ]
Chaitin has defined Chaitin's constant Ω, a real number whose digits are equidistributed and which is sometimes informally described as an expression of the probability that a random program will halt. Ω has the mathematical property that it is definable , with asymptotic approximations from below (but not from above), but not computable .
Chaitin is also the originator of using graph coloring to do register allocation in compiling , a process known as Chaitin's algorithm . [ 8 ]
He was formerly a researcher at IBM's Thomas J. Watson Research Center in New York. He has written more than 10 books that have been translated into about 15 languages. He is today interested in questions of metabiology and information-theoretic formalizations of the theory of evolution , and is a member of the Institute for Advanced Studies at Mohammed VI Polytechnic University .
Chaitin also writes about philosophy , especially metaphysics and philosophy of mathematics (particularly about epistemological matters in mathematics). In metaphysics, Chaitin claims that algorithmic information theory is the key to solving problems in the field of biology (obtaining a formal definition of 'life', its origin and evolution ) and neuroscience (the problem of consciousness and the study of the mind).
In recent writings, he defends a position known as digital philosophy . In the epistemology of mathematics, he claims that his findings in mathematical logic and algorithmic information theory show there are "mathematical facts that are true for no reason, that are true by accident". [ 9 ] Chaitin proposes that mathematicians must abandon any hope of proving those mathematical facts and adopt a quasi-empirical methodology.
In 1995 he was given the degree of doctor of science honoris causa by the University of Maine . In 2002 he was given the title of honorary professor by the University of Buenos Aires in Argentina, where his parents were born and where Chaitin spent part of his youth. In 2007 he was given a Leibniz Medal [ 10 ] by Wolfram Research . In 2009 he was given the degree of doctor of philosophy honoris causa by the National University of Córdoba . He was formerly a researcher at IBM 's Thomas J. Watson Research Center and a professor at the Federal University of Rio de Janeiro . | https://en.wikipedia.org/wiki/Gregory_Chaitin |
Gregory Markarovich Garibian (December 13, 1924 – June 8, 1991) was a Soviet Armenian physicist, academician-secretary of the Department of Physics and Mathematics of the Armenian National Academy of Sciences (1973–1991). He is known for developing the Theory of Transition Radiation and showing the feasibility of functional transition radiation detectors (TRDs). [ 1 ]
G. M. Garibian [ 2 ] [ 3 ] was born in 1924 in Tiflis (now Tbilisi , Georgia) into the family of a medical doctor and a homemaker. Eventually the family moved to Baku (Azerbaijan), where Garibian received his general education. In 1943 he graduated from school in Baku and went to Moscow . Physical science was Garibian's passion in life. Even at a very young age he followed news in the world of physics and was very excited when in 1942 he learned about the Alikhanian brothers’ expedition to Mount Aragats (Armenia) in order to search for protons in cosmic rays. Garibian was accepted into the Department of Physics and Mathematics of Moscow State University , from which he graduated in 1948, immediately leaving for Yerevan to join the Yerevan Physics Institute , which was founded by Artyom Alikhanian in 1943. From that time Garibian dedicated himself to scientific research in Theoretical physics , in the fields of Quantum electrodynamics , Cosmic rays , and High energy particles . [ 4 ] [ 5 ] [ 6 ] [ 7 ] All his life he worked at the Yerevan Physics Institute, consecutively as researcher, scientific secretary of the institute, deputy director, and head of laboratory. He actively participated in the creation of the Yerevan Synchrotron and also in the establishment of high-altitude cosmic ray stations on Mount Aragats.
Garibian's main scientific achievement was the discovery of X-Ray Transition Radiation [ 8 ] [ 9 ] and the development of the Theory of Transition Radiation. He also showed the feasibility of a functional Transition Radiation Detector (TRD) - a tool for identification of high energy ultrarelativistic particles. [ 10 ] [ 11 ] [ 12 ] [ 13 ]
In the late 1940s and early 1950s, the main points of interest of the researchers at the Yerevan Institute of Physics were cosmic rays and the physics of elementary particles. One of the problems troubling experimental physicists working with cosmic rays at that time was the measurement of the very high energies of relativistic particles in the cosmic radiation, as with increasing particle energy the available methods of registering their energies became less and less effective.
Garibian tried to develop methods that would help to resolve that problem. As a starting point of his research, he used results published in an article by Ginzburg and Frank in 1946, in which the existence of transition radiation - a new type of radiation appearing when a charged particle passes through the boundary between two media - was theoretically predicted. In 1959, Garibian discovered X-ray transition radiation, the intensity of which depends linearly on the Lorentz factor of the particle. Due to this feature, X-ray transition radiation received immediate practical implementation, as it made it possible to identify ultra-relativistic charged particles and to measure their energies. [ 14 ]
In the following years Garibian and his disciples continued the development of the Theory of Transition Radiation. Their theoretical works stimulated experimental research on transition radiation in Armenia. [ 15 ] [ 16 ]
In the beginning of the 1960s the first experiment on the registration of TR generated by muons of cosmic rays was conducted on Mount Aragats. The research of TR became more powerful after the Yerevan Synchrotron started its work in 1967. Here, theoretical investigation of TR headed by Garibian was correlated with the experimental research headed by A. Alikhanian. Those studies played a decisive role in prompting similar work throughout the world: in the Brookhaven National Laboratory (by prof. Luke Chia-Liu Yuan ), at Stanford Accelerator , and in different research centers of Europe. [ 17 ]
During his life Garibian, besides being a scientist, was also an educator: he lectured to students at the Physical-Mathematical Department of Yerevan State University from 1951 to 1964 (on the topics of classical and quantum electrodynamics, the theory of electrons, and the theory of relativity) and from 1970 to 1973 (on the topic of the passage of fast particles through matter). In the years 1965-1969 he also worked as Chief of the Theoretical Department of the Institute of Radiophysics and Electronics , Armenian AS. Under his leadership many of his disciples achieved their doctoral degrees and became prominent scientists. He was also interested in documenting the history of the progress of physics in Armenia. One of his papers on this topic was co-authored with the physicists Artsimovich , Migdal and Jelepov and was dedicated to Artyom Alikhanian's 60th birthday. [ 18 ] From 1966 Garibian also worked as an Executive Editor of the journal "Proceedings of the Academy of Sciences of the Armenian SSR. Physics ".
In 1983 Garibian, together with his disciple Yang Chi, published a monograph: “X-Ray Transition Radiation” which summarized data collected during years of theoretical and experimental research on transition radiation appearing as a result of fast charged particles passing through the boundary between media. [ 19 ]
Garibian died on June 8, 1991, in Yerevan, Armenia. | https://en.wikipedia.org/wiki/Gregory_Garibian |
Gregory Lee Hillhouse (March 1, 1955 – March 6, 2014) [ 1 ] was an inorganic chemist with a long-standing interest in the chemistry of organotransition metal compounds at the University of Chicago . Much of his work focused on creating organometallic compounds to stabilize and isolate reactive intermediates, molecules that are proposed to exist briefly during a larger catalytic reaction progress. [ 1 ]
Hillhouse was born on March 1, 1955, in Greenville, South Carolina . He attended the University of South Carolina in 1976 and received his Ph.D. from Indiana University Bloomington in 1980. He then became a postdoctoral research associate at California Institute of Technology , before taking a position in the department of chemistry at the University of Chicago in 1983.
He died from cancer at his home in Chicago on March 6, 2014, aged 59.
While the early work of Hillhouse focused on early-transition metal chemistry, his later career efforts were dedicated towards base metals. For example, in 2001 Hillhouse and co-workers synthesized a complex that refuted the notion that it was impossible for late transition metals like nickel to form multiple bonds with heteroatoms. [ 2 ] The result was a molecule that he affectionately referred to as “Double Nickel,” which possessed an indisputable nickel-nitrogen double bond. Later the group published a study showing that one can also synthesize and isolate an electronically similar phosphinidene species. [ 3 ] Additionally, using bulky N-heterocyclic carbene (NHC) ligands, Hillhouse and co-workers showed that one can stabilize a linear two-coordinate Ni-based imido species. His group also showed how some of these and similar complexes can undergo redox chemistry, forming Ni(I) and Ni(III) species.
Hillhouse was gay, although he did not come out openly in his professional career until later in his life. [ 4 ] [ 5 ] In his career as a teacher and mentor, he served as a role model for younger LGBTQ+ chemists. [ 5 ] | https://en.wikipedia.org/wiki/Gregory_L._Hillhouse |
Gregory L. Verdine (born June 10, 1959) is an American chemical biologist, biotech entrepreneur, venture capitalist and university professor. [ 3 ] He is a founder of the field of chemical biology , [ citation needed ] which deals with the application of chemical techniques to biological systems. His work has focused on mechanisms of DNA repair and cell penetrability.
Verdine is the co-inventor with Christian Schafmeister of stapled peptides , a new class of drugs that combines the versatile binding properties of monoclonal antibodies with the cell-penetrating ability of small molecules . Verdine coined the term "drugging the undruggable" to describe the unique capabilities of stapled peptides. A close analog of a stapled peptide drug invented in the Verdine Lab, sulanemadlin (ALRN-6924), is a first-in-class dual MDM2/MDMX inhibitor currently in Phase II clinical development by Aileron Therapeutics, [ 4 ] which he co-founded in 2005. FogPharma, founded in 2016, aims to further develop stapled peptide technology for therapeutic use.
He has founded numerous other drug discovery companies, including six that are listed on the NASDAQ. His companies have succeeded in developing two FDA-approved drugs, romidepsin and paritaprevir , which are, respectively, an anticancer agent used in cutaneous T-cell lymphoma (CTCL) and other peripheral T-cell lymphomas (PTCLs), and an acylsulfonamide inhibitor that is used to treat chronic hepatitis C .
Verdine received a Bachelor of Science in Chemistry from Saint Joseph's University and a PhD in Chemistry from Columbia University , working under Koji Nakanishi and Maria Tomasz. He held an NIH postdoctoral fellowship in molecular biology at MIT and Harvard Medical School , and joined the faculty of Harvard University in 1988. [ 5 ]
Over the course of his academic career at Harvard University and the Harvard Medical School , Verdine has elucidated the molecular mechanism of epigenetic DNA methylation and pathways by which certain genotoxic forms of DNA damage are surveilled in and eradicated from the genome. [ 5 ] As a professor, Verdine introduced biological principles into organic chemistry courses and helped found two fields of science that meld basic research and new medicines discovery: chemical biology, which enlists chemistry to answer biological questions; and new modalities, which works to discover and develop novel structural classes of therapeutics. [ 6 ]
He has served as the Erving Professor of Chemistry in the Departments of Stem Cell and Regenerative Biology and Chemistry and Chemical Biology at Harvard University since 1988. In 2013, he stepped down from his tenured professorship at Harvard, taking a leave of absence in order to focus full-time on steering Warp Drive Bio as CEO [ 7 ] [ 8 ] while continuing to run [ 8 ] his eponymous Verdine Laboratory at the Harvard University Department of Stem Cell & Regenerative Biology. The laboratory focused on research based in chemical biology, including synthetic biologics and genomic research. [ 9 ] He has since transitioned to a 'professor of the practice' position at Harvard.
In his academic research, Verdine made fundamental discoveries about how organisms manage their genomes: how they tag specific cell types and conduct search-and-destroy operations for cancer-causing abnormalities. [ 6 ] Verdine has published more than 190 academic articles. [ 1 ] [ 10 ] In 2005, Verdine and Anirban Banerjee published research in crystallography showing how enzymes could be used to fix flawed DNA. [ 11 ] In 2013, Verdine received a research grant to study cell-penetrating miniproteins in order to target cancer cells. [ 12 ] His work has led to the FDA approval of the drugs romidepsin and paritaprevir . [ 5 ]
Verdine is also the inventor of stapled peptide technology, which stabilizes peptides intended for therapeutic use by introducing an all-hydrocarbon “staple” into the peptide’s linear backbone. These “stapled” peptides have a higher affinity for their targets, enter cells more easily and are less readily degraded. [ 13 ]
To translate his discoveries into therapeutics, Verdine has founded or co-founded numerous public biotech companies including Variagenics, Enanta, Eleven Bio, Tokai, Wave Life Sciences, and Aileron. [ 6 ] He also founded the private company Gloucester Pharmaceuticals, which was acquired by Celgene in 2009. [ 14 ] His companies share the mission of developing molecules intended to target “hard-to-drug” endogenous targets that have remained out of reach of modern cell-penetration technologies. [ 12 ] [ 15 ]
In 2016, Verdine co-founded FogPharma with Sir David Lane to develop next-generation stapled peptides, Cell-Penetrating Miniproteins (CPMPs), a broad new class of medicines that aim to combine the cell-penetrating abilities of small molecules with the strong target engagement of biologics. [ 16 ]
Founded alongside FogPharma in 2016, LifeMine seeks to discover, characterize, and translate into medicine bioactive compounds in fungal genomes. [ 17 ]
In 2013, Verdine founded the nonprofit Gloucester Marine Genomics Institute to study marine genomes for potential therapeutic compounds and to advance fisheries science. [ 18 ] He is also the founder and director of the Gloucester Biotechnology Academy, which provides technical training in the life science industry to high school graduates in Gloucester, MA, USA. [ 5 ]
In 2012, Verdine founded Warp Drive Bio with cofounders George Church and James Wells. [ 19 ] The company maps the genomes of soil-dwelling microbes in the search for potential treatments for drug-resistant ailments. In 2013, Verdine became full-time CEO of Warp Drive Bio, [ 20 ] then handed the CEO position to Lawrence Reid in 2016 [ 21 ] in order to found two new startups, FogPharma and LifeMine.
Verdine is the Chairman of the Board of Wave Life Sciences, [ 22 ] which uses synthetic chemistry to develop nucleic acid therapeutic candidates. [ 23 ]
Verdine has worked in the venture capital industry as a Venture Partner with Apple Tree Partners, Third Rock Ventures, and WuXi Healthcare Ventures, and as a Special Advisor to Texas Pacific Group . [ 24 ]
Verdine is a member of both the Board of Scientific Consultants of the Memorial Sloan-Kettering Cancer Center , the Board of Scientific Advisors of the National Cancer Institute , [ 5 ] Advisory Board at Spinal Muscular Atrophy Foundation, and the Board of Reviewers at Bill & Melinda Gates Foundation . [ 25 ]
2019 - Honorary Doctor of Science Degree, Clarkson University [ 26 ]
2019 - Herman S. Bloch Award for Scientific Excellence in Industry, University of Chicago [ 6 ]
2011 - American Association for Cancer Research Award for Excellence in Chemistry in Cancer Research [ 27 ]
2007 - Nobel Laureate Signature Award for Graduate Education in Chemistry, with Anirban Banerjee [ 28 ]
2005 - Royal Society of Chemistry Nucleic Acid Award Lecture, Responses to DNA Damage conference [ 29 ] | https://en.wikipedia.org/wiki/Gregory_L._Verdine |
Gregory M. Odegard is a materials researcher and academic. He is the John O. Hallquist Endowed Chair in Computational Mechanics in the Department of Mechanical Engineering – Engineering Mechanics at Michigan Technological University [ 1 ] and the director of the NASA Institute for Ultra-Strong Composites by Computational Design. [ 2 ] [ 3 ]
Odegard's work is focused on computational modeling of advanced composite systems , with his research interests spanning multiscale modeling , computational chemistry , materials science, and mechanics of materials . He is the recipient of 2008 Ferdinand P. Beer and E. Russell Johnston Jr. Outstanding New Mechanics Educator Award, 2011 Ralph R. Teetor Educational Award , [ 4 ] 2021 Michigan Tech Distinguished Researcher Award, [ 5 ] [ 6 ] and 2023 NASA Outstanding Public Leadership Medal . [ 7 ]
Odegard is a Fellow of American Society of Mechanical Engineers (ASME), [ 8 ] and an Associate Fellow of American Institute of Aeronautics and Astronautics (AIAA). [ 9 ]
Odegard earned his B.S. in Mechanical Engineering from the University of Colorado Boulder in 1995. He then completed his M.S. in Mechanical Engineering at the University of Denver in 1998, followed by his Ph.D. in materials science from the same institution in 2000 under Maciej S. Kumosa, with his doctoral thesis titled, "Shear-Dominated Biaxial Failure Analysis of Polymer-Matrix Composites at Room and Elevated Temperatures." [ 10 ]
Odegard worked as a National Research Council postdoctoral research associate in the Mechanics and Durability Branch at NASA Langley Research Center , Hampton, Virginia , from 2000 to 2002. Subsequently, he held positions as a staff scientist at ICASE in 2002 and as a staff scientist at the National Institute of Aerospace from 2003 to 2004, both at NASA Langley Research Center. [ 11 ] He has been serving as a director of the NASA Space Technology Research Institute (STRI) for Ultra-Strong Composites by Computational Design (US-COMP). [ 12 ] [ 13 ] [ 14 ]
Odegard began his academic career at Michigan Technological University in 2004 as an assistant professor in the Department of Mechanical Engineering – Engineering Mechanics, [ 15 ] and served as an associate professor from 2009 to 2013. During this time, he briefly served as a Fulbright Research Scholar at the Norwegian University of Science and Technology , Trondheim , Norway . In 2014, he was named as the Richard and Elizabeth Henes Professor in Computational Mechanics in the Department of Mechanical Engineering – Engineering Mechanics at Michigan Technological University, a position he held until 2021. [ 16 ] He has held the John O. Hallquist Endowed Chair of Computational Mechanics at the same university since 2021. [ 6 ]
Odegard has led a multi-institution effort in developing ultra-strong composites for deep space exploration using carbon nanotubes (CNTs) and polymers , employing computational modeling for accurate property prediction, and has received media coverage for his contributions, including features in publications such as Chemical & Engineering News , [ 17 ] CompositesWorld , [ 18 ] Nature World News , [ 19 ] and Space.com . [ 20 ]
For his efforts in leading US-COMP to achieve its goals, Odegard was awarded the NASA Outstanding Public Leadership Medal in 2023. [ 7 ]
Odegard has conducted research on computational simulation of polymer and polymer-composite materials, and made contributions to the development of new multi-scale modeling approaches for advanced composite materials. During his time at NASA Langley Research Center, he developed techniques to connect computational chemistry with continuum mechanics. This new approach to materials modeling enabled the development of structure-property relationships in nano-structured materials. [ 21 ] In collaboration with researchers from the National Institute of Aerospace and Langley Research Center in 2005, he used this approach to develop constitutive models for polymer composite systems reinforced with single-walled CNTs. [ 22 ] [ 23 ] Additionally, he developed a multiscale model for silica nanoparticle/polyimide composites, which integrated the molecular structures of the nanoparticle, polyimide, and interfacial regions into the bulk-level constitutive behavior. [ 24 ]
Odegard and his team further developed computational simulation techniques for nanocomposite materials. He developed the simulation of polymer materials using reactive force fields. [ 25 ] [ 26 ] These force fields allow for the simulation of chemical bond breakage during mechanical deformation, thus allowing for more accurate computational predictions of polymer mechanical behavior and failure. His team used these techniques to computationally design CNT nanocomposites with improved manufacturability and mechanical behavior. [ 27 ] [ 28 ] [ 29 ] In addition, he was a contributor to the development of CNT yarn composites as part of US-COMP, which showed significant increases in mechanical stiffness and strength relative to state-of-the-art aerospace composite materials. [ 30 ] [ 31 ] | https://en.wikipedia.org/wiki/Gregory_Odegard |
Gregory “Greg” Raleigh (born 1961 in Orange, California ) is an American radio scientist , inventor , and entrepreneur who has made contributions in the fields of wireless communication , information theory , mobile operating systems , medical devices , and network virtualization . His discoveries and inventions include the first wireless communication channel model to accurately predict the performance of advanced antenna systems, [ 1 ] the MIMO-OFDM technology used in contemporary Wi-Fi and 4G wireless networks and devices, higher accuracy radiation beam therapy for cancer treatment, improved 3D surgery imaging , and a cloud-based Network Functions Virtualization platform for mobile network operators that enables users to customize and modify their smartphone services.
Raleigh received a B.S.E.E. degree from the California Polytechnic State University , an M.S.E.E. degree from Stanford University , and a Ph.D. from Stanford University . He joined Watkins-Johnson Company in 1984 as a Radio Engineer and rose to Chief Scientist and Vice President of Research and Development. Raleigh subsequently co-founded five companies: Clarity Wireless, Airgo Networks , Headwater Research, ItsOn , and Chilko Capital.
In wireless communications, Raleigh developed a comprehensive and precise channel model that works with multiple antennas. [ 2 ] He employed the model to develop smart antenna signal processing techniques for rapid fading, multipath propagation , and frequency-division duplex environments. [ 3 ] [ 4 ] [ 5 ] [ 6 ] [ 7 ] [ 8 ] As a result of this research, Raleigh found that multipath propagation could be exploited to greatly increase the capacity of wireless communications, enabling data rates competitive with wire-based networks. [ 9 ] In a paper prepared for the 1996 GLOBECOM conference in London, Raleigh presented the first rigorous mathematical proof that in the presence of naturally occurring multipath propagation multiple antennas may be used with special signal processing techniques to transmit multiple data streams at the same time on the same frequency, multiplying the information-carrying capacity (data rate) of wireless links. [ 10 ] From the time of Guglielmo Marconi , multipath propagation had always been treated as a problem to be overcome. The discovery that multipath can be harnessed to increase performance reversed a century of radio engineering practice. [ 1 ] In subsequent papers, Raleigh proposed a series of enhancements including the use of OFDM with MIMO and techniques for space-frequency coding, space-frequency-time channel estimation, and MIMO synchronization. [ 11 ] [ 12 ] [ 13 ] [ 14 ] [ 15 ] These inventions were incorporated into the LTE , WiMAX , 802.11n and 802.11ac standards.
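The capacity multiplication can be illustrated with the standard log-det expression for the Shannon capacity of a multi-antenna channel. The sketch below (Python) compares a single-antenna link with a 4x4 link through a randomly drawn channel matrix standing in for rich multipath; the antenna counts, SNR, and the i.i.d. Gaussian channel model are illustrative assumptions, not figures from Raleigh's papers.

import numpy as np

rng = np.random.default_rng(1)
n_tx, n_rx, snr = 4, 4, 100.0          # 4x4 antennas, 20 dB signal-to-noise ratio

# Random complex channel matrix modeling rich multipath between antenna pairs.
H = (rng.normal(size=(n_rx, n_tx)) + 1j * rng.normal(size=(n_rx, n_tx))) / np.sqrt(2)

# MIMO capacity (bit/s/Hz) with equal power split across transmit antennas.
mimo = np.log2(np.linalg.det(np.eye(n_rx) + (snr / n_tx) * H @ H.conj().T).real)

# Single-antenna (SISO) capacity at the same SNR, for comparison.
siso = np.log2(1 + snr)

print(f"SISO: {siso:.1f} bit/s/Hz, 4x4 MIMO: {mimo:.1f} bit/s/Hz")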
Raleigh, V.K. Jones, and Michael Pollack founded Clarity Wireless in 1996. Clarity built a MIMO demonstration link and developed a related technology, vector orthogonal frequency-division multiplexing (V-OFDM). Clarity Wireless was acquired by Cisco Systems in 1998. Raleigh, Jones, and David Johnson founded Airgo Networks in 2001 to develop MIMO-OFDM chipsets for wireless LANs . Airgo Networks proposed MIMO as the best technology for meeting the performance goals of next-generation wireless LANs and contributed to the development of the IEEE 802.11n standard. [ 16 ] The company began shipping the world’s first MIMO-OFDM chipsets in 2003. [ 17 ] While at Airgo Networks, Raleigh was named to Network World’s “The 50 most powerful people in networking.” [ 18 ] Airgo Networks was purchased by Qualcomm in 2006.
Raleigh co-founded the technology innovation firm Headwater Research in late 2008 with Charles Giancarlo and became Lead Director of its board. Raleigh’s inventions at Headwater have spanned the wireless and medical device fields. The inventions include mobile device operating system enhancements, improvements to radiation beam therapy for cancer treatment, enhanced 3-D imaging systems for surgery, and cloud-based network function virtualization (NFV) advances. [ 19 ] [ 20 ] [ 21 ] [ 22 ] [ 23 ] [ 24 ] [ 25 ] Mobile OS controls and NFV are now widely deployed. [ 26 ]
In late 2009, Raleigh and Giancarlo spun out ItsOn to license and commercialize wireless technology, with Raleigh serving as the firm’s first CEO. ItsOn developed a cloud-based network function virtualization (NFV) platform that enables operators to implement intelligent, user context-aware policies including the ability for users to customize and manage their mobile phone services. [ 27 ] ItsOn’s service, called Zact, launched in May 2013. [ 28 ]
Raleigh holds more than 200 US patents [ 29 ] and over 150 international patents [ 30 ] in the fields of radio communications, medical devices, mobile device operating systems, radar systems, and mobile network function virtualization. | https://en.wikipedia.org/wiki/Gregory_Raleigh |
Gregory S. Girolami [ 1 ] (born October 16, 1956) [ citation needed ] is the William H. and Janet G. Lycan Professor of Chemistry at the University of Illinois Urbana-Champaign . His research focuses on the synthesis, properties, and reactivity of new inorganic, organometallic, and solid state species. Girolami has been elected a fellow of the American Association for the Advancement of Science , [ 2 ] the Royal Society of Chemistry , [ 3 ] and the American Chemical Society . [ 4 ]
He was born in 1956 [ 8 ] in Honolulu, Hawaii , and grew up in California , Mexico , and Missouri . [ citation needed ] He started college at the age of 16, [ citation needed ] and four years later received B.S. degrees both in chemistry and in physics from the University of Texas at Austin . [ 9 ] He obtained his Ph.D. in 1981 from the University of California, Berkeley with Prof. Richard A. Andersen . Girolami's doctoral research centered on the chemistry of quadruply-bonded dinuclear transition metal complexes. [ 10 ] [ 11 ] [ 12 ] Thereafter, he was a NATO postdoctoral fellow with Sir Geoffrey Wilkinson at the Imperial College of Science and Technology , and his work there focused on the synthesis and chemistry of first-row transition metal -alkyl complexes. [ 13 ] [ 14 ] [ 15 ]
Girolami joined the faculty of the University of Illinois at Urbana-Champaign in 1983. He has served as Head of the Chemistry Department twice, first from 2000 until 2005 and again from 2013 to 2016.
He is the author of several textbooks, including X-ray Crystallography [ 9 ] and Synthesis and Technique in Inorganic Chemistry. [ 16 ] He was the co-editor of volume 36 of Inorganic Syntheses . [ 17 ]
Girolami is also co-founder of a university spin-off company , Tiptek LLC , which manufactures ultrasharp probe tips for use in scanning tunneling microscopy and for fault diagnosis and testing of integrated circuits . The company has patented its field-directed sputter sharpening (FDSS) technology, which was originally developed in the laboratories of Girolami and fellow UIUC Professor Joseph Lyding . [ 18 ]
To date, Girolami's independent research career has encompassed five major themes: mechanistic studies of organometallic reactions such as the polymerization of alkenes and the activation of saturated alkanes, the chemical vapor deposition of thin films from designed molecular precursors, the construction and study of molecular analogs of the photosynthetic reaction center , actinide chemistry, and the synthesis of new molecule-based magnetic materials . His research approach emphasizes the synthesis of new inorganic and organometallic compounds and materials, investigations of their mechanisms of formation, and measurements and interpretations of their physical properties.
Girolami's early work focused on the synthesis of transition metal compounds with metal-hydrogen and metal-carbon bonds, especially those possessing unusual electronic structures. In 1989, Girolami and Morse showed by X-ray crystallography that [Zr(CH₃)₆]²⁻ has a trigonal prismatic molecular geometry. [ 19 ] This rare molecular geometry was attributed to second-order Jahn-Teller distortions in this d⁰ metal complex. Girolami's group accurately predicted that other d⁰ ML₆ species such as [Nb(CH₃)₆]⁻ , [Ta(CH₃)₆]⁻ , and W(CH₃)₆ would also prove to have trigonal prismatic geometry. [ 19 ] Girolami also discovered the first titanium alkyl/alkene complex in 1993, which models the key intermediate in Ziegler-Natta catalysis . [ 20 ] Later model studies of C-H, B-H, and Si-H activation by transition metal complexes led to his current work on approaches to the isolation of stable alkane complexes.
In the mid-1980s Girolami began research on the chemical vapor deposition (CVD) of thin films, especially of phases containing transition metals. Girolami studied the chemical design of new CVD precursors. He investigated copper(I) compounds for copper CVD, [ 21 ] an approach that is now a key fabrication step for integrated circuits. [ 22 ] His mechanistic studies of CVD processes involved transition metals, and these efforts have recently resulted in the development of low-temperature CVD to achieve the deposition of conformal thin films , in work carried out in collaboration with Professor John Abelson of Illinois' Department of Materials Science and Engineering. [ 23 ] Most recently, he discovered a new class of highly volatile CVD precursors containing the aminodiboranate ligand. [ 24 ] [ 25 ]
In a now-concluded project, Girolami studied the chemistry and photophysics of bis( porphyrinate ) metal sandwich complexes in collaboration with Illinois Professor of Chemistry Kenneth S. Suslick . These complexes were proposed to mimic the conversion of light to chemical energy in photosynthesis. Girolami's group synthesized bis(porphyrin) complexes of thorium, uranium, [ 26 ] zirconium, [ 27 ] and hafnium, and showed that these complexes displayed photophysical properties similar to those of the “special pair” , a chlorophyll dimer present in the photosystem I reaction center. [ 28 ]
Overlapping with Girolami's interest in bis(porphryin) complexes that mimic the photosynthetic reaction center, the Girolami group has also studied actinide chemistry. [ 29 ]
In the mid-1990s, Girolami began an investigation of the synthesis of new magnetic solids via a building block approach, publishing in Science in 1995. [ 30 ] Girolami also reported metal-substituted analogs of Prussian blue that have magnetic ordering temperatures above 100 °C. [ 31 ]
Girolami has received numerous awards for his research, including the Office of Naval Research Young Investigator Award, a Sloan Foundation Fellowship , a Dreyfus Teacher-Scholar Award, and a University Scholar Award. [ 32 ] He has been honored by UIUC with a Campus Award for Excellence in Graduate and Professional Teaching, for the introduction of a graduate class in inorganic chemistry covering group theory and electronic correlation methods. [ 33 ] [ 34 ] | https://en.wikipedia.org/wiki/Gregory_S._Girolami |
Greg N. Stephanopoulos (born c. 1950) is an American chemical engineer and the Willard Henry Dow Professor in the department of chemical engineering at the Massachusetts Institute of Technology. He has worked at MIT , Caltech , and the University of Minnesota in the areas of biotechnology , bioinformatics , and metabolic engineering [ 1 ] especially in the areas of bioprocessing for biochemical and biofuel production. Stephanopoulos is the author of over 400 scientific publications with more than 35,000 citations (h index = 97) as of April 2018. [ 2 ] In addition, Greg has supervised more than 70 graduate students and 50 post-docs whose research has led to more than 50 patents. [ 3 ] He was elected a fellow of the American Association for the Advancement of Science (2005), a member of the National Academy of Engineering (2003), and received the ENI Prize on Renewable Energy 2011.
He completed his Ph.D. in chemical engineering at the University of Minnesota in 1975, with advisors Arnold Fredrickson and Rutherford Aris on the topic of modeling of population dynamics . His thesis was published in 1978 with the title, "Mathematical Modelling of the Dynamics of Interacting Microbial Populations. Extinction Probabilities in a Stochastic Competition and Predation". [ 4 ]
Stephanopoulos began his career as an assistant professor of chemical engineering at the California Institute of Technology in 1978. He was promoted to associate professor in 1978. In 1985, he was hired by the Massachusetts Institute of Technology as professor of chemical engineering. During his time at MIT, he has held the following positions: associate director, Biotechnology Center (1990-1997), professor of the MIT-Harvard Division of Health Science and Technology - HST (2000–Present), Bayer Professor of Chemical Engineering and Biotechnology (2000 - 2005), and the W. H. Dow Professor of Chemical Engineering and Biotechnology (2006–Present). From 2006 to 2007, he was a visiting professor at the Institute for Chemical and Bioengineering in Zürich , Switzerland. [ 5 ]
As noted in the citation for his ENI Prize, Stephanopoulos's research has addressed the advancement of multiple aspects of bioengineering:
He is best known for his studies of global transcription machinery engineering, a technology for reprogramming the gene transcription of particular bacteria so as to modify their microbial cells and increase the efficiency with which they transform raw material into hydrocarbons. To date, the most notable result has been an increase in the tolerance of microbial cultures to the toxicity of several products, with a consequent substantial increase in biofuel productivity.
Stephanopoulos has authored more than 400 journal articles on the topics of biotechnology , bioinformatics , and metabolic engineering . These include:
In 2003, Stephanopoulos was elected a member of the American National Academy of Engineering (NAE). His NAE election citation noted: [ 21 ]
For pioneering contributions in defining and advancing metabolic engineering and for leadership in incorporating biology into chemical engineering research and education.
Other awards and honors include: | https://en.wikipedia.org/wiki/Gregory_Stephanopoulos |
Gregory coefficients G n , also known as reciprocal logarithmic numbers , Bernoulli numbers of the second kind , or Cauchy numbers of the first kind , [ 1 ] [ 2 ] [ 3 ] [ 4 ] [ 5 ] [ 6 ] [ 7 ] [ 8 ] [ 9 ] [ 10 ] [ 11 ] [ 12 ] [ 13 ] are the rational numbers
that occur in the Maclaurin series expansion of the reciprocal logarithm,
$$\frac{z}{\ln(1+z)} = 1 + \sum_{n=1}^{\infty} G_n z^n = 1 + \frac{1}{2}z - \frac{1}{12}z^2 + \frac{1}{24}z^3 - \frac{19}{720}z^4 + \cdots, \qquad |z| < 1.$$
Gregory coefficients alternate in sign, $G_n = (-1)^{n-1}|G_n|$ for $n > 0$, and decrease in absolute value. These numbers are named after James Gregory who introduced them in 1670 in the numerical integration context. They were subsequently rediscovered by many mathematicians and often appear in works of modern authors, who do not always recognize them. [ 1 ] [ 5 ] [ 14 ] [ 15 ] [ 16 ] [ 17 ]
OEIS : A002207 (denominators)
The simplest way to compute Gregory coefficients is to use the recurrence formula
with $G_1 = 1/2$ . [ 14 ] [ 18 ] Gregory coefficients may also be computed explicitly via the following differential
or the integral
which can be proved by integrating $(1+z)^x$ between 0 and 1 with respect to $x$ , once directly and the second time using the binomial series expansion first.
It implies the finite summation formula
where s ( n , ℓ ) are the signed Stirling numbers of the first kind .
and Schröder's integral formula [ 19 ] [ 20 ]
The Gregory coefficients satisfy the bounds
given by Johan Steffensen . [ 15 ] These bounds were later improved by various authors. The best known bounds for them were given by Blagouchine. [ 17 ] In particular,
Asymptotically, at large index n , these numbers behave as [ 2 ] [ 17 ] [ 19 ]
More accurate description of G n at large n may be found in works of Van Veen, [ 18 ] Davis, [ 3 ] Coffey, [ 21 ] Nemes [ 6 ] and Blagouchine. [ 17 ]
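The recurrence mentioned earlier is not reproduced in this text. A minimal sketch in Python, assuming the standard convolution recurrence that follows from multiplying the generating function by ln(1+z)/z (with G_0 = 1), computes the first few coefficients exactly and can be used to examine their decay numerically:

```python
from fractions import Fraction

def gregory_coefficients(n_max):
    """First n_max Gregory coefficients G_1..G_n_max as exact fractions.

    Uses the convolution recurrence implied by
    (z / ln(1+z)) * (ln(1+z) / z) = 1, with G_0 = 1:
        G_n = -sum_{k=0}^{n-1} (-1)^(n-k) / (n-k+1) * G_k
    """
    G = [Fraction(1)]                      # G_0 = 1
    for n in range(1, n_max + 1):
        G.append(-sum(Fraction((-1) ** (n - k), n - k + 1) * G[k]
                      for k in range(n)))
    return G[1:]

print([str(g) for g in gregory_coefficients(6)])
# ['1/2', '-1/12', '1/24', '-19/720', '3/160', '-863/60480']
```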
Series involving Gregory coefficients may be often calculated in a closed-form. Basic series with these numbers include
where γ = 0.5772156649... is Euler's constant . These results are very old, and their history may be traced back to the works of Gregorio Fontana and Lorenzo Mascheroni . [ 17 ] [ 22 ] More complicated series with the Gregory coefficients were calculated by various authors. Kowalenko, [ 8 ] Alabdulmohsin [ 10 ] [ 11 ] and some other authors calculated
Alabdulmohsin [ 10 ] [ 11 ] also gives these identities with
Candelperger, Coppo [ 23 ] [ 24 ] and Young [ 7 ] showed that
where H n are the harmonic numbers .
Blagouchine [ 17 ] [ 25 ] [ 26 ] [ 27 ] provides the following identities
where li( z ) is the integral logarithm and $\tbinom{k}{m}$ is the binomial coefficient .
It is also known that the zeta function , the gamma function , the polygamma functions , the Stieltjes constants and many other special functions and constants may be expressed in terms of infinite series containing these numbers. [ 1 ] [ 17 ] [ 18 ] [ 28 ] [ 29 ]
Various generalizations are possible for the Gregory coefficients. Many of them may be obtained by modifying the parent generating equation. For example, Van Veen [ 18 ] considered
and hence
Equivalent generalizations were later proposed by Kowalenko [ 9 ] and Rubinstein. [ 30 ] In a similar manner, Gregory coefficients are related to the generalized Bernoulli numbers
see, [ 18 ] [ 28 ] so that
Jordan [ 1 ] [ 16 ] [ 31 ] defines polynomials ψ n ( s ) such that
and call them Bernoulli polynomials of the second kind . From the above, it is clear that G n = ψ n (0) .
Carlitz [ 16 ] generalized Jordan's polynomials ψ n ( s ) by introducing polynomials β
and therefore
Blagouchine [ 17 ] [ 32 ] introduced numbers G n ( k ) such that
obtained their generating function and studied their asymptotics at large n . Clearly, $G_n = G_n(1)$ . These numbers are strictly alternating, $G_n(k) = (-1)^{n-1}|G_n(k)|$ , and are involved in various expansions for the zeta-functions , Euler's constant and polygamma functions .
A different generalization of the same kind was also proposed by Komatsu [ 31 ]
so that $G_n = c_n(1)/n!$ . The numbers $c_n(k)$ are called poly-Cauchy numbers by the author. [ 31 ] Coffey [ 21 ] defines polynomials
and therefore $|G_n| = P_{n+1}(1)$ . | https://en.wikipedia.org/wiki/Gregory_coefficients |
In theoretical computer science , in particular in formal language theory , Greibach's theorem states that certain properties of formal language classes are undecidable . It is named after the computer scientist Sheila Greibach , who first proved it in 1963. [ 1 ] [ 2 ]
Given a set Σ, often called "alphabet", the (infinite) set of all strings built from members of Σ is denoted by Σ * .
A formal language is a subset of Σ * .
If L 1 and L 2 are formal languages, their product L 1 L 2 is defined as the set { w 1 w 2 : w 1 ∈ L 1 , w 2 ∈ L 2 } of all concatenations of a string w 1 from L 1 with a string w 2 from L 2 .
If L is a formal language and a is a symbol from Σ, their quotient L / a is defined as the set { w : wa ∈ L } of all strings that can be made members of L by appending an a .
Various approaches are known from formal language theory to denote a formal language by a finite description, such as a formal grammar or a finite-state machine .
For example, using an alphabet Σ = { 0, 1, 2, 3, 4, 5, 6, 7, 8, 9 }, the set Σ * consists of all (decimal representations of) natural numbers, with leading zeroes allowed, and the empty string, denoted as ε.
The set L div3 of all naturals divisible by 3 is an infinite formal language over Σ; it can be finitely described by the following regular grammar with start symbol S 0 :
Examples for finite languages are {ε,1,2} and {0,2,4,6,8}; their product {ε,1,2}{0,2,4,6,8} yields the even numbers up to 28. The quotient of the set of prime numbers up to 100 by the symbol 7, 4, and 2 yields the language {ε,1,3,4,6,9}, {}, and {ε}, respectively.
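These finite-language operations can be illustrated directly in Python (a sketch; the helper names and the use of decimal strings are ours, not part of the theorem):

```python
def product(L1, L2):
    """Concatenation product L1 L2 = { w1 w2 : w1 in L1, w2 in L2 }."""
    return {w1 + w2 for w1 in L1 for w2 in L2}

def quotient(L, a):
    """Quotient L / a = { w : w a in L } for a single symbol a."""
    return {w[:-1] for w in L if w.endswith(a)}

L1 = {"", "1", "2"}                      # "" stands for the empty string epsilon
L2 = {"0", "2", "4", "6", "8"}
print(sorted(product(L1, L2)))           # the even numbers up to 28, as decimal strings

primes = {str(p) for p in range(2, 101)
          if all(p % d for d in range(2, int(p ** 0.5) + 1))}
print(sorted(quotient(primes, "7")))     # ['', '1', '3', '4', '6', '9']
print(sorted(quotient(primes, "4")))     # []
print(sorted(quotient(primes, "2")))     # ['']
```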
Greibach's theorem is independent of a particular approach to describe a formal language.
It just considers a set C of formal languages over an alphabet Σ∪{#} such that
Let P be any nontrivial subset of C that contains all regular sets over Σ∪{#} and is closed under quotient by each single symbol in Σ∪{#}. [ note 2 ] Then the question whether L ∈ P for a given description of a language L ∈ C is undecidable.
Let M ⊆ Σ * , such that M ∈ C , but M ∉ P . [ note 3 ] For any L ∈ C with L ⊆ Σ * , define φ( L ) = ( M #Σ * ) ∪ (Σ * # L ).
From a description of L , a description of φ( L ) can be effectively computed.
Then L = Σ * if and only if φ( L ) ∈ P :
Hence, if membership in P would be decidable for φ( L ) from its description, so would be L ’s equality to Σ * from its description, which contradicts the definition of C . [ 3 ]
Using Greibach's theorem, it can be shown that the following problems are undecidable:
See also Context-free grammar#Being in a lower or higher level of the Chomsky hierarchy . | https://en.wikipedia.org/wiki/Greibach's_theorem |
The Greisen–Zatsepin–Kuzmin limit ( GZK limit or GZK cutoff ) is a theoretical upper limit on the energy of cosmic ray protons traveling from other galaxies through the intergalactic medium to our galaxy. The limit is 5 × 10 19 eV (50 EeV), or about 8 joules (the energy of a proton travelling at ≈ 99.999 999 999 999 999 999 98 % the speed of light). The limit is set by the slowing effect of interactions of the protons with the microwave background radiation over long distances (≈ 160 million light-years). The limit is of the same order of magnitude as the upper limit for energy at which cosmic rays have experimentally been detected, although some detections appear to have exceeded the limit, as noted below. For example, one extreme-energy cosmic ray , the Oh-My-God Particle , was found to possess a record-breaking 3.12 × 10 20 eV (50 joules) [ 1 ] [ 2 ] of energy (about the same as the kinetic energy of a 95 km/h baseball).
In the past, the apparent violation of the GZK limit has inspired cosmologists and theoretical physicists to suggest other ways that circumvent the limit. These theories propose that ultra-high energy cosmic rays are produced near our galaxy or that Lorentz covariance is violated in such a way that protons do not lose energy on their way to our galaxy.
The limit was independently computed in 1966 by Kenneth Greisen , [ 3 ] Georgy Zatsepin , and Vadim Kuzmin [ 4 ] based on interactions between cosmic rays and the photons of the cosmic microwave background radiation (CMB). They predicted that cosmic rays with energies over the threshold energy of 5 × 10 19 eV would interact with cosmic microwave background photons $\gamma_{\rm CMB}$ , relatively blueshifted by the speed of the cosmic rays, to produce pions through the $\Delta$ resonance ,
or
Pions produced in this manner proceed to decay in the standard pion channels – ultimately to photons for neutral pions, and photons, positrons, and various neutrinos for positive pions. Neutrons also decay to similar products, so that ultimately the energy of any cosmic ray proton is drained off by production of high-energy photons plus (in some cases) high-energy electron–positron pairs and neutrino pairs.
The pion production process begins at a higher energy than ordinary electron-positron pair production (lepton production) from protons impacting the CMB, which starts at cosmic-ray proton energies of only about 10 17 eV . However, pion production events drain 20% of the energy of a cosmic-ray proton, as compared with only 0.1% of its energy for electron–positron pair production.
This factor of 200 = 20% / 0.1% comes from two causes: the pion has a mass only about 130 times that of the leptons, but the extra energy appears as different kinetic energies of the pion or leptons, and results in relatively more kinetic energy transferred to a heavier product pion, in order to conserve momentum . The much larger total energy losses from pion production result in pion production becoming the process limiting high-energy cosmic-ray travel, rather than the lower-energy process of light-lepton production.
The pion production process continues until the cosmic ray energy falls below the threshold for pion production. Due to the mean path associated with this interaction, extragalactic cosmic ray protons traveling over distances larger than 50 Mpc ( 163 Mly ) and with energies greater than the threshold should never be observed on Earth. This distance is also known as GZK horizon.
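As a rough consistency check of the numbers above (not part of the original derivation), the threshold proton energy for exciting the Δ resonance in a head-on collision with a typical CMB photon can be estimated from relativistic kinematics; the photon energy used below is an assumed mean value for the 2.7 K background:

```python
# Head-on threshold: s = m_p^2 + 4 E_p E_gamma must reach m_Delta^2,
# so E_p ~ (m_Delta^2 - m_p^2) / (4 E_gamma).  All energies in eV.
m_p = 0.938e9            # proton rest energy
m_Delta = 1.232e9        # Delta(1232) resonance
E_gamma = 6.3e-4         # assumed mean CMB photon energy (T ~ 2.7 K)

E_p = (m_Delta ** 2 - m_p ** 2) / (4.0 * E_gamma)
print(f"threshold E_p ~ {E_p:.1e} eV")
# a few 1e20 eV for a head-on hit; averaging over the photon spectrum and
# collision angles brings the effective cutoff down toward ~5e19 eV
```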
The precise GZK limit is derived under the assumption that ultra-high energy cosmic rays , those with energies above 1 × 10 18 eV , are protons. Measurements by the largest cosmic-ray observatory , the Pierre Auger Observatory , suggest that most ultra-high energy cosmic rays are heavier elements known as HZE ions . [ 5 ] In this case, the argument behind the GZK limit does not apply in the originally simple form: however, as Greisen noted, the giant dipole resonance also occurs roughly in this energy range (at 10 EeV/nucleon) and similarly restricts very long-distance propagation.
A number of observations have been made by the largest cosmic-ray experiments Akeno Giant Air Shower Array (AGASA), High Resolution Fly's Eye Cosmic Ray Detector , the Pierre Auger Observatory and Telescope Array Project that appeared to show cosmic rays with energies above the GZK limit.
These observations appear to contradict the predictions of special relativity and particle physics as they are presently understood. However, there are a number of possible explanations for these observations that may resolve this inconsistency.
Another suggestion involves ultra-high-energy weakly interacting particles (for instance, neutrinos ), which might be created at great distances and later react locally to give rise to the particles observed. In the proposed Z-burst model, an ultra-high-energy cosmic neutrino collides with a relic anti-neutrino in our galaxy and annihilates to hadrons. [ 6 ] This process proceeds through a (virtual) Z-boson:
The cross-section for this process becomes large if the center-of-mass energy of the neutrino antineutrino pair is equal to the Z-boson mass (such a peak in the cross-section is called "resonance"). Assuming that the relic anti-neutrino is at rest, the energy of the incident cosmic neutrino has to be
$$E_\nu = \frac{m_{\rm Z}^2}{2 m_\nu},$$
where $m_{\rm Z}$ is the mass of the Z-boson, and $m_\nu$ the mass of the neutrino.
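A numerical illustration of this resonance condition (the relic-neutrino mass below is an assumed value, chosen only for illustration):

```python
# Z-burst resonance energy E_nu = m_Z^2 / (2 m_nu) for a relic neutrino at rest.
m_Z = 91.1876e9          # Z-boson mass, eV
m_nu = 0.1               # assumed relic neutrino mass, eV

E_nu = m_Z ** 2 / (2.0 * m_nu)
print(f"E_nu ~ {E_nu:.1e} eV")   # ~4e22 eV, far above the GZK cutoff
```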
A suppression of the cosmic-ray flux that can be explained with the GZK limit has been confirmed by the latest generation of cosmic-ray observatories. A former claim by the AGASA experiment that there is no suppression was overruled. It remains controversial whether the suppression is due to the GZK effect. The GZK limit only applies if ultra-high-energy cosmic rays are mostly protons.
In July 2007, during the 30th International Cosmic Ray Conference in Mérida, Yucatán, México, the High Resolution Fly's Eye Experiment (HiRes) and the Pierre Auger Observatory (Auger) presented their results on ultra-high-energy cosmic rays (UHECR). HiRes observed a suppression in the UHECR spectrum at just the right energy, observing only 13 events with an energy above the threshold, while expecting 43 with no suppression. This was interpreted as the first observation of the GZK limit. [ 7 ] Auger confirmed the flux suppression, but did not claim it to be the GZK limit: instead of the 30 events necessary to confirm the AGASA results, Auger saw only two, which are believed to be heavy-nuclei events. [ 8 ] The flux suppression was previously brought into question when the AGASA experiment found no suppression in their spectrum [ citation needed ] . According to Alan Watson , former spokesperson for the Auger Collaboration, AGASA results have been shown to be incorrect, possibly due to the systematic shift in energy assignment.
In 2010 and the following years, both the Pierre Auger Observatory and HiRes confirmed again a flux suppression, [ 9 ] [ 10 ] in case of the Pierre Auger Observatory the effect is statistically significant at the level of 20 standard deviations.
After the flux suppression was established, a heated debate ensued whether cosmic rays that violate the GZK limit are protons. The Pierre Auger Observatory, the world's largest observatory, found with high statistical significance that ultra-high-energy cosmic rays are not purely protons, but a mixture of elements, which is getting heavier with increasing energy. [ 5 ] The Telescope Array Project , a joint effort from members of the HiRes and AGASA collaborations, agrees with the former HiRes result that these cosmic rays look like protons. [ 11 ] The claim is based on data with lower statistical significance, however. The area covered by Telescope Array is about one third of the area covered by the Pierre Auger Observatory, and the latter has been running for a longer time.
The controversy was partially resolved in 2017, when a joint working group formed by members of both experiments presented a report at the 35th International Cosmic Ray Conference. [ 12 ] According to the report, the raw experimental results are not in contradiction with each other. The different interpretations are mainly based on the use of different theoretical models and the fact that Telescope Array has not collected enough events yet to distinguish the pure-proton hypothesis from the mixed-nuclei hypothesis.
EUSO , which was scheduled to fly on the International Space Station (ISS) in 2009, was designed to use the atmospheric- fluorescence technique to monitor a huge area and boost the statistics of UHECRs considerably. EUSO is to make a deep survey of UHECR-induced extensive air showers (EASs) from space, extending the measured energy spectrum well beyond the GZK cutoff. It is to search for the origin of UHECRs, determine the nature of the origin of UHECRs, make an all-sky survey of the arrival direction of UHECRs, and seek to open the astronomical window on the extreme-energy universe with neutrinos. The fate of the EUSO Observatory is still unclear, since NASA is considering early retirement of the ISS.
Launched in June 2008, the Fermi Gamma-ray Space Telescope (formerly GLAST) will also provide data that will help resolve these inconsistencies. | https://en.wikipedia.org/wiki/Greisen–Zatsepin–Kuzmin_limit |
In crystallography , a Greninger chart [ 1 ] / ˈ ɡ r ɛ n ɪ ŋ ər / is a chart that allows angular relations between zones and planes in a crystal to be directly read from an x-ray diffraction photograph.
The Greninger chart is a simple trigonometric tool to determine γ and δ for a fixed sample-to-film distance. (If one uses a 2-d detector the problem of determining γ and δ could be solved mathematically using the equations which generate the Greninger chart.) A new chart must be generated for different sample-to-detector distances. (2σ is 2θ for the diffraction peak and tan μ is x / y for the Cartesian coordinates of the diffraction peak.) The Greninger chart gives directly the two angles needed to plot poles on the Wulff net . It is critical to keep track of the relative arrangement of the sample to the film; if photographic film is used, this is achieved by cutting the corner of the film. For Polaroid film one must make a note of the arrangement of the face of the film in the camera. | https://en.wikipedia.org/wiki/Greninger_chart |
The grey atmosphere (or gray) is a useful set of approximations made for radiative transfer applications in studies of stellar atmospheres (atmospheres of stars) based on the simplified notion that the absorption coefficient $\alpha_\nu$ of matter within a star's atmosphere is constant—that is, unchanging—for all frequencies of the star's incident radiation .
The grey atmosphere approximation is the primary method astronomers use to determine the temperature and basic radiative properties of astronomical objects, including planets with atmospheres, the Sun, other stars, and interstellar clouds of gas and dust. Although the simplified model of grey atmosphere approximation demonstrates good correlation to observations, it deviates from observational results because real atmospheres are not grey, e.g. radiation absorption is frequency-dependent.
The primary approximation is based on the assumption that the absorption coefficient , typically represented by $\alpha_\nu$ , has no dependence on frequency $\nu$ for the frequency range being worked in, i.e. $\alpha_\nu \longrightarrow \alpha$ .
Typically a number of other assumptions are made simultaneously:
This set of assumptions leads directly to the mean intensity and source function being directly equivalent to a blackbody Planck function of the temperature at that optical depth .
The Eddington approximation (see next section) may also be used optionally, to solve for the source function. This greatly simplifies the model without greatly distorting results.
Deriving various quantities from the grey atmosphere model involves solving an integro-differential equation, an exact solution of which is complex. Therefore, this derivation takes advantage of a simplification known as the Eddington Approximation. Starting with an application of a plane-parallel model, we can imagine an atmospheric model built up of plane-parallel layers stacked on top of each other, where properties such as temperature are constant within a plane. This means that such parameters are functions of physical depth $z$ , where the direction of positive $z$ points towards the upper layers of the atmosphere. From this it is easy to see that a ray path $\mathrm{d}s$ at angle $\theta$ to the vertical is given by
$$\mathrm{d}s = \frac{\mathrm{d}z}{\cos\theta}$$
We now define optical depth as
$$\mathrm{d}\tau = -\alpha\,\mathrm{d}z$$
where $\alpha$ is the absorption coefficient associated with the various constituents of the atmosphere. We now turn to the radiation transfer equation
$$\frac{\mathrm{d}I}{\mathrm{d}s} = j - \alpha I$$
where $I$ is the total specific intensity and $j$ is the emission coefficient. After substituting for $\mathrm{d}s$ and dividing by $-\alpha$ we have
$$\mu\frac{\mathrm{d}I}{\mathrm{d}\tau} = I - S$$
where $S$ is the so-called total source function, defined as the ratio between the emission and absorption coefficients. This differential equation can be solved by multiplying both sides by $e^{-\tau/\mu}$ , re-writing the left-hand side as $\frac{\mathrm{d}}{\mathrm{d}\tau}\left(I e^{-\tau/\mu}\right)$ and then integrating the whole equation with respect to $\tau$ . This gives the solution
$$I(\tau,\mu) = \frac{e^{\tau/\mu}}{\mu}\int_\tau^\infty S\,e^{-\tau/\mu}\,\mathrm{d}\tau$$
where we have used the limits $\tau \in [\tau,\infty)$ as we are integrating outward from some depth within the atmosphere; therefore $\mu \in [0,1]$ . Even though we have neglected the frequency-dependence of parameters such as $S$ , we know that it is a function of optical depth, therefore in order to integrate this we need to have a method for deriving the source function. We now define some important parameters such as energy density $U$ , total flux $F$ and radiation pressure $P$ as follows
$$U = \frac{2\pi}{c}\int_{-1}^{+1} I\,\mathrm{d}\mu$$
$$F = 2\pi\int_{-1}^{+1} I\,\mu\,\mathrm{d}\mu$$
$$P = \frac{2\pi}{c}\int_{-1}^{+1} I\,\mu^2\,\mathrm{d}\mu$$
We also define the average specific intensity (averaged over all angles [ 1 ] ) as
$$J = \frac{1}{2}\int_{-1}^{+1} I\,\mathrm{d}\mu$$
We see immediately that by dividing the radiative transfer equation by 2 and integrating over $\mu$ , we have
$$\frac{1}{4\pi}\frac{\mathrm{d}F}{\mathrm{d}\tau} = J - S$$
Furthermore, by multiplying the same equation by $\frac{\mu}{2}$ and integrating with respect to $\mu$ , we have
$$\frac{\mathrm{d}P}{\mathrm{d}\tau} = \frac{F}{c}$$
By substituting the average specific intensity J into the definition of energy density, we also have the following relationship
$$J = \frac{c}{4\pi}U$$
Now, it is important to note that total flux must remain constant through the atmosphere therefore
$$\frac{\mathrm{d}F}{\mathrm{d}\tau} = 0 \iff J = S$$
This condition is known as radiative equilibrium. Taking advantage of the constancy of total flux, we now integrate $\frac{\mathrm{d}P}{\mathrm{d}\tau}$ to obtain
$$P = \frac{F}{c}(\tau + C)$$
where $C$ is a constant of integration. We know from thermodynamics that for an isotropic radiation field the following relationship holds
$$P = \frac{1}{3}U = \frac{4\pi}{3c}J$$
where we have substituted the relationship between energy density and average specific intensity derived earlier. Although this may be true for lower depths within the stellar atmosphere, near the surface it almost certainly isn't. However, the Eddington Approximation assumes this to hold at all levels within the atmosphere. Substituting this in the previous equation for pressure gives
$$J = \frac{3F}{4\pi}(\tau + C)$$
and under the condition of radiative equilibrium
$$S = \frac{3F}{4\pi}(\tau + C)$$
This means we have solved the source function except for a constant of integration. Substituting this result into the solution to the radiation transfer equation and integrating gives
$$I(\tau = 0,\mu) = \frac{3F}{4\pi}\,\frac{e^{\tau/\mu}}{\mu}\int_0^\infty(\tau + C)\,e^{-\tau/\mu}\,\mathrm{d}\tau = \frac{3F}{4\pi}(\mu + C)\qquad \mathrm{for}\ \mu > 0$$
Here we have set the lower limit of $\tau$ to zero, which is the value of optical depth at the surface of the atmosphere. This would represent radiation coming out of, say, the surface of the Sun. Finally, substituting this into the definition of total flux and integrating gives
$$F = 2\pi\int_0^1 I\,\mu\,\mathrm{d}\mu = \frac{3F}{2}\int_0^1\left(\mu^2 + C\mu\right)\mathrm{d}\mu = \frac{3F}{2}\left(\frac{1}{3} + \frac{C}{2}\right)$$
Therefore, $C = \frac{2}{3}$ and the source function is given by
$$S(\tau) = \frac{3F}{4\pi}\left(\tau + \frac{2}{3}\right)$$
Integrating the first and second moments of the radiative transfer equation, applying the above relation and the Two-Stream Limit approximation, leads to information about each of the higher moments in $\cos\theta$ . The first moment of the mean intensity, $H$ , is constant regardless of optical depth :
$$H(\tau) = H$$
The second moment of the mean intensity, $K$ , is then given by:
$$K(\tau) = \tau H + \frac{2}{3}H = \frac{1}{3}J(\tau)$$
Note that the Eddington approximation is a direct consequence of these assumptions.
Defining an effective temperature $T_{\rm eff}$ for the Eddington flux $H$ and applying the Stefan–Boltzmann law , one obtains this relation between the externally observed effective temperature and the internal blackbody temperature $T$ of the medium:
$$T^4 = T_{\rm eff}^4\,\frac{3}{4}\left(\tau + \frac{2}{3}\right)$$
The results of the grey atmosphere solution: the observed temperature $T_{\rm eff}$ is a good measure of the true temperature $T$ at an optical depth $\tau \approx 2/3$ , and the temperature at the top of the atmosphere is $\approx 0.841\,T_{\rm eff}$ .
This approximation makes the source function linear in optical depth.
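A minimal numerical sketch of these results, assuming a given effective temperature (the solar value is used purely as an example):

```python
import numpy as np

T_eff = 5772.0                                  # K, example effective temperature

def T(tau):
    """Grey-atmosphere temperature profile, T^4 = (3/4) T_eff^4 (tau + 2/3)."""
    return T_eff * (0.75 * (tau + 2.0 / 3.0)) ** 0.25

print(T(2.0 / 3.0) / T_eff)                     # 1.0: T equals T_eff at tau = 2/3
print(T(0.0) / T_eff)                           # ~0.841 at the top of the atmosphere

F = 1.0                                         # flux in arbitrary units
tau = np.linspace(0.0, 5.0, 6)
S = 3.0 * F / (4.0 * np.pi) * (tau + 2.0 / 3.0) # source function, linear in tau
print(S)
```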
Rybicki, George; Lightman, Alan (2004). Radiative Processes in Astrophysics . Wiley-VCH . ISBN 978-0-471-82759-7 . | https://en.wikipedia.org/wiki/Grey_atmosphere |
Grey relational analysis ( GRA ) was developed by Deng Julong of Huazhong University of Science and Technology . It is one of the most widely used models of grey system theory. GRA uses a specific concept of information . It defines situations with no information as black, and those with perfect information as white. However, neither of these idealized situations ever occurs in real world problems . In fact, situations between these extremes, which contain partial information, are described as being grey, hazy or fuzzy. A variant of the GRA model, the Taguchi -based GRA model, is a popular optimization method in manufacturing engineering.
Let $X_0 = (x_0(1), x_0(2), \dots, x_0(n))$ be an ideal data set and let $X_k = (x_k(1), x_k(2), \dots, x_k(n)),\ k = 1,2,3,\dots,m$ be the alternative data sets of the same length. The Grey Relational Grade (GRG) between two data sets is given by [ 1 ]
$$\Gamma_{0k} = \sum_{j=1}^{n} w(j)\,\gamma_{0k}(j)$$
where the Grey Relational Coefficients (GRC) are
$$\gamma_{0k}(j) = \frac{\min_k \min_j |x_0(j)-x_k(j)| + \xi(j)\,\max_k \max_j |x_0(j)-x_k(j)|}{|x_0(j)-x_k(j)| + \xi(j)\,\max_k \max_j |x_0(j)-x_k(j)|}$$
where $w(j)$ is the weight of the elements of the data sets, and is needed when the GRA method is used to solve multiple criteria decision-making problems . Here, $\xi(j) \in (0,1]$ denotes the Dynamic Distinguishing Coefficient. Thus, the GRA model defined in this way is called the Dynamic Grey Relational Analysis (Dynamic GRA) model. It is the generalized form of Deng's GRA model.
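A minimal sketch of the computation in Python, with a constant distinguishing coefficient ξ = 0.5, equal weights, and invented data values (the function name and all numbers are ours, chosen only for illustration):

```python
import numpy as np

def grey_relational_grade(x0, X, xi=0.5, w=None):
    """Grey relational coefficients and grade of alternatives X against ideal x0.

    A sketch of Deng's GRA with a constant distinguishing coefficient xi
    (the dynamic variant lets xi vary with j) and equal weights by default.
    """
    x0 = np.asarray(x0, dtype=float)
    X = np.asarray(X, dtype=float)
    w = np.full(x0.size, 1.0 / x0.size) if w is None else np.asarray(w, dtype=float)

    delta = np.abs(X - x0)                 # |x0(j) - xk(j)| for every k, j
    dmin, dmax = delta.min(), delta.max()  # global min/max over k and j
    gamma = (dmin + xi * dmax) / (delta + xi * dmax)   # grey relational coefficients
    return gamma @ w                       # weighted sum -> grey relational grade

# Example: rank two alternatives against an ideal reference sequence.
x0 = [1.0, 1.0, 1.0]
X = [[0.8, 0.9, 1.0],
     [0.5, 0.6, 0.7]]
print(grey_relational_grade(x0, X))  # the first alternative scores closer to the ideal
```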
GRA is an important part of grey system theory, pioneered by Deng Julong in 1982. [ 2 ] A grey system means that a system in which part of information is known and part of information is unknown. Formally, grey systems theory describes uncertainty by interval-valued unknowns called grey numbers , with the width of the interval reflecting more or less precise knowledge. [ 3 ] With this definition, information quantity and quality form a continuum from a total lack of information to complete information – from black through grey to white. Since uncertainty always exists, one is always somewhere in the middle, somewhere between the extremes, somewhere in the grey area. Grey analysis then comes to a clear set of statements about system solutions [ specify ] . At one extreme, no solution can be defined for a system with no information. At the other extreme, a system with perfect information has a unique solution. In the middle, grey systems will give a variety of available solutions. Grey relational analysis does not attempt to find the best solution, but does provide techniques for determining a good solution, an appropriate solution for real-world problems. The theory inspired many noted scholars and business leaders like Jeffrey Yi-Lin Forrest , Liu Sifeng , Ren Zhengfei and Joseph L. Badaracco , a professor at Harvard Business School .
The theory has been applied in various fields of engineering and management. Initially, the grey method was adapted to effectively study air pollution [ 4 ] and subsequently used to investigate the nonlinear multiple-dimensional model of the socio-economic activities’ impact on the city air pollution. [ 5 ] It has also been used to study the research output and growth of countries. [ 6 ]
In the world, there are many universities, associations and societies promoting grey system theory e.g., International Association of Grey Systems and Decision Sciences (IAGSUA), Chinese Grey System Association (CGSA), Grey Systems Society of China (GSSC), Grey Systems Society of Pakistan (GSSP), Polish Scientific Society of Grey Systems (PSGS), Grey Systems Committee ( IEEE Systems, Man, and Cybernetics Society ), Centre for Computational Intelligence ( De Montfort University ), etc. [ 7 ] [ 8 ] [ 9 ] [ 10 ] [ 11 ]
There are several journals dedicated to grey systems research and studies e.g., "The Journal of Grey System" (UK), [ 12 ] [ 13 ] "Grey Systems Theory and Application" ( Emerald Group Publishing ), [ 14 ] "International Journal of Grey Systems" (USA), [ 15 ] "Journal of Grey System" (Taiwan), [ 16 ] "The Grey Journal", [ 17 ] Journal of Intelligent and Fuzzy Systems , [ 18 ] Kybernetes , etc. | https://en.wikipedia.org/wiki/Grey_relational_analysis |
Greywater (or grey water , sullage , also spelled gray water in the United States) refers to domestic wastewater generated in households or office buildings from streams without fecal contamination, i.e., all streams except for the wastewater from toilets. Sources of greywater include sinks , showers , baths , washing machines or dishwashers . As greywater contains fewer pathogens than blackwater , it is generally safer to handle and easier to treat and reuse onsite for toilet flushing , landscape or crop irrigation , and other non- potable uses. Greywater may still have some pathogen content from laundering soiled clothing or cleaning the anal area in the shower or bath.
The application of greywater reuse in urban water systems provides substantial benefits for both the water supply subsystem, by reducing the demand for fresh clean water , and the wastewater subsystems by reducing the amount of conveyed and treated wastewater. [ 1 ] Treated greywater has many uses, such as toilet flushing or irrigation. [ 2 ]
Greywater usually contains some traces of human waste and is therefore not free of pathogens. [ 3 ] The excreta come from washing the anal area in the bath and shower or from the laundry (washing underwear and diapers). The quality of greywater can deteriorate rapidly during storage because it is often warm and contains some nutrients and organic matter (e.g. dead skin cells), as well as pathogens. Stored greywater also leads to odour nuisances for the same reason. [ 4 ]
Synthetic personal care products (e.g. toothpaste , face wash , and shower gel ) commonly rinsed into greywater may contain microbeads , a form of microplastics . [ 5 ] Greywater originating from washing clothes made from synthetic fabrics (e.g. nylon ) is also likely to contain microfibers . [ 5 ]
In households with conventional flush toilets, greywater makes up about 65% of the total wastewater produced by that household. [ 3 ] It may be a good source of water for reuse because there is a close relationship between the production of greywater and the potential demand for toilet flushing water.
Misconnections of pipes can cause greywater tanks to contain a percentage of blackwater. [ 6 ]
The small traces of feces that enter the greywater stream via effluent from the shower, sink, or washing machine do not pose practical hazards under normal conditions, as long as the greywater is used correctly (for example, percolated from a dry well or used correctly in farming irrigation).
The separate treatment of greywater falls under the concept of source separation , which is one principle commonly applied in ecological sanitation approaches. The main advantage of keeping greywater separate from toilet wastewater is that the pathogen load is greatly reduced, and the greywater is therefore easier to treat and reuse. [ 3 ]
When greywater is mixed with toilet wastewater, it is called sewage or blackwater and should be treated in sewage treatment plants or an onsite sewage facility, which is often a septic system.
Greywater from kitchen sinks contains fats , oils and grease , and high loads of organic matter. It should undergo preliminary treatment to remove these substances before discharge into a greywater tank. If this is difficult to apply, it could be directed to the sewage system or to an existing sewer . [ 7 ]
Most greywater is easier to treat and recycle than sewage because of lower levels of contaminants. If collected using a separate plumbing system from blackwater, domestic greywater can be recycled directly within the home, garden or company and used either immediately or processed and stored. If stored, it must be used within a very short time or it will begin to putrefy due to the organic solids in the water. Recycled greywater of this kind is never safe to drink , but a number of treatment steps can be used to provide water for washing or flushing toilets.
The treatment processes that can be used are in principle the same as those used for sewage treatment, except that they are usually installed on a smaller scale (decentralized level), often at household or building level:
In constructed wetlands, the plants use contaminants of greywater, such as food particles, as nutrients in their growth. Salt and soap residues can be toxic to microbial and plant life alike, but can be absorbed and degraded through constructed wetlands and aquatic plants such as sedges , rushes , and grasses.
Global water resource supplies are shrinking. According to a report from the United Nations, water shortages will affect 2.7 billion people by 2025, which means 1 out of every 3 people in the world will be affected by this problem. [ citation needed ] Reusing greywater has become a good way to address this problem; reused wastewater is also called recycled or reclaimed water . [ 9 ]
Demand on conventional water supplies and pressure on sewage treatment systems is reduced by the use of greywater. Re-using greywater also reduces the volume of sewage effluent entering watercourses which can be ecologically beneficial. In times of drought, especially in urban areas, greywater use in irrigation or toilet systems helps to achieve some of the goals of ecologically sustainable development .
The potential ecological benefits of greywater recycling include:
In the U.S. Southwest and the Middle East where available water supplies are limited, especially in view of a rapidly growing population, a strong imperative exists for adoption of alternative water technologies.
The potential economic benefits of greywater recycling include:
Greywater use for irrigation appears to be a safe practice. A 2015 epidemiological study found no additional burden of disease among greywater users irrigating arid regions. [ 12 ] The safety of reuse of greywater as potable water has also been studied. A few organic micropollutants including benzene were found in greywater in significant concentrations but most pollutants were in very low concentrations. [ 13 ] Fecal contamination, peripheral pathogens (e.g., skin and mucous tissue), and food-derived pathogens are the three major sources of pathogens in greywater. [ 14 ]
Greywater reuse in toilet flushing and garden irrigation may produce aerosols . These could transmit legionella disease and bring a potential health risk for people. However, the result of the research shows that the health risk due to reuse of greywater either for garden irrigation or toilet flushing was not significantly higher than the risk associated with using clear water for the same activities. [ 15 ]
Most greywater should be assumed to have some blackwater-type components, including pathogens . Greywater should be applied below the surface where possible (e.g., via drip line on top of the soil, under mulch ; or in mulch-filled trenches) and not sprayed, as there is a danger of inhaling the water as an aerosol .
In any greywater system, it is important to avoid toxic materials such as bleaches, bath salts , artificial dyes, chlorine -based cleansers, strong acids / alkali , solvents , and products containing boron , which is toxic to plants at high levels. Most cleaning agents contain sodium salts , which can cause excessive soil alkalinity , inhibit seed germination, and destroy the structure of soils by dispersing clay. Soils watered with greywater systems can be amended with gypsum ( calcium sulfate ) to reduce pH . Cleaning products containing ammonia are safe to use, as plants can use it to obtain nitrogen. [ 16 ] A 2010 study of greywater irrigation found no major health effects on plants, and suggests sodium buildup is largely dependent on the degree to which greywater migrates vertically through the soil. [ 17 ]
Some greywater may be applied directly from the sink to the garden or container field, receiving further treatment from soil life and plant roots.
The use of non-toxic and low-sodium soap and personal care products is recommended to protect vegetation when reusing greywater for irrigation purposes. [ 18 ]
Recycled greywater from showers and bathtubs can be used for flushing toilets in most European and Australian jurisdictions and in United States jurisdictions that have adopted the International Plumbing Code .
Such a system could provide an estimated 30% reduction in water use for the average household. The danger of biological contamination is avoided by using:
Greywater recycling without treatment is used in certain dwellings for applications where potable water is not required (e.g., garden and land irrigation, toilet flushing ). It may also be used in dwellings when the greywater (e.g., from rainwater ) is already fairly clean to begin with and/or has not been polluted with non-degradable chemicals such as non-natural soaps (thus using natural cleaning products instead). It is not recommended to use water that has been in the greywater filtration system for more than 24 hours as bacteria builds up, affecting the water that is being reused.
Due to the limited treatment technology, the treated greywater still contains some chemicals and bacteria, so some safety issues should be observed when using the treated greywater around the home. [ 19 ]
A clothes washer greywater system is sized to recycle the greywater of a one- or two-family home using the reclaimed water of a washing machine (which produces about 15 gallons per person per day). [ 20 ] It relies on either the pump from the washing machine or gravity to irrigate. This particular system is the most common and least restricted system. In most states within the United States, this system does not require construction permits. This system is often characterized as Laundry to Landscape (L2L). The system relies on valves that route the water to a mulch basin or to the area of irrigation for certain landscape features (a mulch basin for a tree requires 12.6 ft²). The drip system must be calibrated to avoid uneven distribution of greywater or overloading. [ 21 ]
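A rough sizing sketch using the figures quoted above (15 gallons per person per day and 12.6 ft² of mulch basin per tree); the household size and the number of basins are assumptions chosen only for illustration:

```python
# Laundry-to-landscape sizing estimate based on the rules of thumb above.
people = 4
gallons_per_person_per_day = 15
basin_area_per_tree_sqft = 12.6
trees = 6                                               # assumed irrigation zones

daily_greywater = people * gallons_per_person_per_day   # gallons/day from the washer
total_basin_area = trees * basin_area_per_tree_sqft     # square feet of mulch basin
gallons_per_tree = daily_greywater / trees

print(daily_greywater, "gal/day of washer greywater")
print(total_basin_area, "sq ft of mulch basin for", trees, "trees")
print(round(gallons_per_tree, 1), "gal/day delivered to each basin")
```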
Recycled grey water from domestic appliances also can be used to flush toilet. [ 22 ] Its application is based on standards set by plumbing codes. Indoor grey water reuse requires an efficient cleaning tank for insoluble waste, as well as a well regulated control mechanism.
The Uniform Plumbing Code , adopted in some U.S. jurisdictions, prohibits greywater use indoors. However, the California Plumbing Code, derived from the UPC, permits it.
Devices are currently available that capture heat from residential and industrial greywater through a process called drain water heat recovery, greywater heat recovery, or hot water heat recycling .
Rather than flowing directly into a water heating device, incoming cold water flows first through a heat exchanger where it is pre-warmed by heat from greywater flowing out from such activities as dish washing or showering. Typical household devices receiving greywater from a shower can recover up to 60% of the heat that would otherwise go to waste. [ citation needed ]
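An illustrative estimate of what such recovery amounts to for a single shower, assuming the 60% figure above together with invented flow, duration and temperature values:

```python
# Drain-water heat recovery estimate; all inputs below are assumptions.
flow_lpm = 9.0                   # shower flow, litres per minute
minutes = 8.0
T_cold, T_drain = 10.0, 35.0     # incoming cold and outgoing drain temperatures, deg C
effectiveness = 0.60             # "up to 60%" heat-exchanger effectiveness

litres = flow_lpm * minutes
q_available_kj = litres * 4.186 * (T_drain - T_cold)   # heat in the drain stream
q_recovered_kj = effectiveness * q_available_kj
T_preheated = T_cold + effectiveness * (T_drain - T_cold)

print(f"recovered ~{q_recovered_kj / 3600:.2f} kWh per shower")
print(f"cold feed preheated from {T_cold:.0f} C to ~{T_preheated:.0f} C")
```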
Government regulation governing domestic greywater use for landscape irrigation (diversion for reuse) is still a developing area and continues to gain wider support as the actual risks and benefits are considered and put into clearer perspective.
"Greywater" (by pure legal definition) is considered in some jurisdictions to be "sewage" (all wastewater including greywater and toilet waste), but in the U.S. states that adopt the International Plumbing Code , it can be used for subsurface irrigation and for toilet flushing, and in states that adopt the Uniform Plumbing Code , it can be used in underground disposal fields that are akin to shallow sewage disposal fields.
Wyoming allows surface and subsurface irrigation and other non-specific use of greywater under a Department of Environmental Quality policy enacted in March 2010. California , Utah , New Mexico and some other states allow true subsurface drip irrigation with greywater. Where greywater is still considered sewage, it is bound by the same regulatory procedures enacted to ensure properly engineered septic tank and effluent disposal systems are installed for long system life and to control spread of disease and pollution. In such regulatory jurisdictions, this has commonly meant domestic greywater diversion for landscape irrigation was either not permitted or was discouraged by expensive and complex sewage system approval requirements. Wider legitimate community greywater diversion for landscape irrigation has subsequently been handicapped and resulted in greywater reuse continuing to still be widely undertaken by householders outside of and in preference to the legal avenues.
However, with water conservation becoming a necessity in a growing number of jurisdictions, business, political and community pressure has made regulators seriously reconsider the actual risks against actual benefits.
It is now recognized and accepted by an increasing number of regulators [ citation needed ] that the microbiological risks of greywater reuse at the single dwelling level where inhabitants already had intimate knowledge of that greywater are in reality an insignificant risk, when properly managed without the need for onerous approval processes. This is reflected in the New South Wales Government Department of Water and Energy's newly released greywater diversion rules, and the recent passage of greywater legislation in Montana . [ 23 ] In the 2009 Legislative Session, the state of Montana passed a bill expanding greywater use into multi-family and commercial buildings. The Department of Environmental Quality has already drafted rules and design guidelines for greywater re-use systems in all these applications. Existing staff would review systems proposed for new subdivisions in conjunction with review of all other wastewater system components. [ 24 ]
Strict permit requirements in Austin, Texas , led to issuance of only one residential graywater permit since 2010. A working group formed to streamline the permitting process, and in 2013, the city created new code that has eased the requirements, resulting in four more permits. [ 25 ]
In California, a push has been made in recent years to address greywater in connection with the State's greenhouse gas reduction goals (see AB 32 ). As a large amount of energy (electricity) is used for pumping, treating and transporting potable water within the state, water conservation has been identified as one of several ways California is seeking to reduce greenhouse gas emissions. [ 26 ]
In July 2009, the California Building Standards Commission (CBSC) approved the addition of Chapter 16A "Non-potable Water Reuse Systems" to the 2007 California Plumbing Code. Emergency regulations allowing greywater reuse systems were subsequently filed with the California Secretary of State August 2009 and became effective immediately upon filing. Assembly Bill 371 (Goldberg 2006) and Senate Bill 283 (DeSaulnier 2009) directed the California Department of Water Resources (DWR), in consultation with the State Department of Health Services, to adopt and submit to the CBSC regulations for a State version of Appendix J (renamed Chapter 16 Part 2) of the Uniform Plumbing Code to provide design standards to safely plumb buildings with both potable and recycled water systems. November 2009 the CBSC unanimously voted to approve the California Dual Plumbing Code that establishes statewide standards for potable and recycled water plumbing systems in commercial, retail and office buildings, theaters, auditoriums, condominiums, schools, hotels, apartments, barracks, dormitories, jails, prisons and reformatories. In addition, the California Department of Housing and Community Development has greywater standards and DWR has also proposed dual plumbing design standards.
In Arizona, greywater is defined as water with a BOD5 of less than 380 mg/L, TSS of less than 430 mg/L, and a fats, oil, and grease (FOG) content of less than 75 mg/L. Arizona's water authorities have issued advice that people should avoid direct contact with greywater. Most greywater use is by underground drip irrigation, since surface irrigation is not permitted. There are three tiers of use in Arizona: up to a quota of 400 gpd per family (close to 1500 L per day), no permission is required for greywater use; between 400 and 3000 gpd (1500 and 11,355 L per day, respectively), permission is required; and above 3000 gpd (>11,355 L per day), it is regulated as a conventional wastewater venture.
Other limitations include restrictions on contact, restrictions on use on herbaceous food plants, exclusion of hazardous materials and effective separation from surface water run-off. [ 27 ]
Greywater recycling is relatively uncommon in the UK, largely because the financial cost and environmental impact of mains water is very low. Greywater systems should comply with BS8525 and the Water Supply (Water Fittings) Regulations in order to avoid risks to health. [ 28 ]
Greywater from single sewered premises has the potential to be reused on site for ornamental, garden and lawn irrigation, toilet flushing. The reuse options include Horizontal flow reed bed (HFRB), Vertical flow reed bed (VFRB), Green roof water recycling system (GROW), Membrane bioreactor (MBR) and Membrane chemical reactor (MCR). [ 29 ]
Although Canada is a water-rich country, the center of the country freezes in the winter and droughts happen some summers. There are locations where watering outdoors is restricted in the dry season, some water must be transported from an outside source, or on-site costs are high. At present, the standards for greywater reuse are not strict compared with other countries. [ 29 ]
The National Plumbing Code, which is adopted in whole or in part by the provinces, indicates that non-potable water systems should only be used to supply toilets and underground irrigation systems, collecting rainwater with roof gutters is included as a form of greywater. [ 30 ] [ 31 ] Health Canada has published a guideline to use greywater for toilet flushing and British Columbia's building code includes subsurface irrigation with greywater. [ 32 ] [ 33 ] In Alberta "Reclaimed wastewater from any source cannot be used domestically unless it is approved and meets water quality testing and monitoring by the local municipality." [ 34 ] Saskatchewan also treats greywater as sewage. [ 35 ]
Household greywater from a single premises may be reused on-site for ornamental garden and lawn watering, toilet flushing and laundry, depending on the type of greywater and the treatment level. Some people reuse greywater responsibly, but others use it without any treatment, for example bathing in it or simply transferring laundry water to the lawn, where children and pets may be exposed directly. The Department of Health and Community Services (DHCS) focuses on protecting public health and takes action to control and minimize the public health risks associated with greywater reuse. [ 29 ]
The government of Cyprus has implemented four water-saving subsidies: drilling installations, drilling with lavatories, installation of hot water circulation systems and installation of greywater recycling systems. [ 29 ]
The emphasis on the use of greywater in Jordan has two main purposes: water conservation and socioeconomic aspects. The Amman Islamic Water Development and Management Network (INWRDAM) in Jordan promoted research on gray water reuse in Jordan. At present, greywater research in Jordan is funded mainly by the International Development Research Center (IDRC) in Ottawa, Canada, to install and use greywater systems based on the establishment of small wetland systems in private households. The cost of this system is about 500 US dollars per household. [ 29 ] | https://en.wikipedia.org/wiki/Greywater |
A grid leak detector is an electronic circuit that demodulates an amplitude modulated alternating current and amplifies the recovered modulating voltage. The circuit utilizes the non-linear cathode to control grid conduction characteristic and the amplification factor of a vacuum tube. [ 1 ] [ 2 ] Invented by Lee De Forest around 1912, it was used as the detector (demodulator) in the first vacuum tube radio receivers until the 1930s.
Early applications of triode tubes ( Audions ) as detectors usually did not include a resistor in the grid circuit. [ 3 ] [ 4 ] [ 5 ] The first use of a resistance in the grid circuit of a vacuum tube detector may have been by Sewall Cabot in 1906. Cabot wrote that he made a pencil mark to discharge the grid condenser, after finding that touching the grid terminal of the tube would cause the detector to resume operation after having stopped. [ 6 ] Edwin H. Armstrong, in 1915, described the use of "a resistance of several hundred thousand ohms placed across the grid condenser" for the purpose of discharging the grid condenser. [ 7 ] The heyday of grid leak detectors was the 1920s, when battery-operated, multiple-dial tuned radio frequency receivers using low amplification factor triodes with directly heated cathodes were the contemporary technology. The Zenith Models 11, 12, and 14 are examples of these kinds of radios. [ 8 ] After screen-grid tubes became available for new designs in 1927, most manufacturers switched to plate detectors , [ 9 ] [ 2 ] and later to diode detectors .
The grid leak detector has been popular for many years with amateur radio operators and shortwave listeners who construct their own receivers.
The stage performs two functions: detection (demodulation) of the amplitude modulated signal, and amplification of the recovered modulating voltage.
The control grid and cathode are operated as a diode while at the same time the control grid voltage exerts its usual influence on the electron stream from cathode to plate.
In the circuit, a capacitor (the grid condenser ) couples a radio frequency signal (the carrier) to the control grid of an electron tube. [ 16 ] The capacitor also facilitates development of dc voltage on the grid. The impedance of the capacitor is small at the carrier frequency and high at the modulating frequencies. [ 17 ]
A resistor (the grid leak ) is connected either in parallel with the capacitor or from the grid to the cathode. The resistor permits dc charge to "leak" from the capacitor [ 18 ] and is utilized in setting up the grid bias. [ 19 ]
At small carrier signal levels, typically not more than 0.1 volt, [ 20 ] the grid to cathode space exhibits non-linear resistance. Grid current occurs during 360 degrees of the carrier frequency cycle. [ 21 ] The grid current increases more during the positive excursions of the carrier voltage than it decreases during the negative excursions, due to the parabolic grid current versus grid voltage curve in this region. [ 22 ] This asymmetrical grid current develops a dc grid voltage that includes the modulation frequencies. [ 23 ] [ 24 ] [ 25 ] In this region of operation, the demodulated signal is developed in series with the dynamic grid resistance R_g, which is typically in the range of 50,000 to 250,000 ohms. [ 26 ] [ 27 ] R_g and the grid condenser, along with the grid capacitance, form a low-pass filter that determines the audio frequency bandwidth at the grid. [ 26 ] [ 27 ]
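As a rough numerical sketch of this bandwidth limit, the cutoff frequency of the RC low-pass filter at the grid can be estimated as below; the component values are assumed for illustration only and are not taken from any particular receiver design.

```python
import math

# Assumed illustrative values, not from any specific receiver:
R_g = 100e3            # dynamic grid resistance, ohms (typical range 50k-250k)
C_condenser = 250e-12  # grid condenser capacitance, farads (typical 100-300 pF)
C_grid_input = 25e-12  # tube grid input capacitance, farads (assumed)

# R_g together with the grid condenser and grid capacitance forms a simple
# RC low-pass filter; its cutoff approximates the audio bandwidth at the grid.
C_total = C_condenser + C_grid_input
f_cutoff = 1.0 / (2.0 * math.pi * R_g * C_total)
print(f"Approximate audio bandwidth at the grid: {f_cutoff:.0f} Hz")  # ~5800 Hz
```

With these assumed values the grid circuit passes roughly the audio range needed for broadcast reception; larger resistance or capacitance narrows the bandwidth, as noted below.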
At carrier signal levels large enough to make conduction from cathode to grid cease during the negative excursions of the carrier, the detection action is that of a linear diode detector. [ 28 ] Grid leak detection optimized for operation in this region is known as power grid detection or grid leak power detection . [ 29 ] [ 30 ] Grid current occurs only on the positive peaks of the carrier frequency cycle. The coupling capacitor will acquire a dc charge due to the rectifying action of the cathode to grid path. [ 31 ] [ 32 ] The capacitor discharges through the resistor (thus grid leak ) during the time that the carrier voltage is decreasing. [ 33 ] [ 34 ] The dc grid voltage will vary with the modulation envelope of an amplitude modulated signal. [ 35 ]
The plate current is passed through a load impedance chosen to produce the desired amplification in conjunction with the tube characteristics. In non-regenerative receivers, a capacitor of low impedance at the carrier frequency is connected from the plate to cathode to prevent amplification of the carrier frequency. [ 36 ]
The capacitance of the grid condenser is chosen to be around ten times the grid input capacitance [ 37 ] and is typically 100 to 300 picofarads (pF), with the smaller value for screen grid and pentode tubes. [ 2 ] [ 26 ]
The resistance and electrical connection of the grid leak along with the grid current determine the grid bias . [ 19 ] For operation of the detector at maximum sensitivity, the bias is placed near the point on the grid current versus grid voltage curve where maximum rectification effect occurs, which is the point of maximum rate of change of slope of the curve. [ 38 ] [ 24 ] [ 39 ] If a dc path is provided from the grid leak to an indirectly heated cathode or to the negative end of a directly heated cathode, negative initial velocity grid bias is produced relative to the cathode determined by the product of the grid leak resistance and the grid current. [ 40 ] [ 41 ] For certain directly heated cathode tubes, the optimum grid bias is at a positive voltage relative to the negative end of the cathode. For these tubes, a dc path is provided from the grid leak to the positive side of the cathode or the positive side of the "A" battery; providing a positive fixed bias voltage at the grid determined by the dc grid current and the resistance of the grid leak. [ 42 ] [ 24 ] [ 43 ]
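A minimal numerical sketch of the bias relationship just described, with the grid leak resistance and dc grid current assumed purely for illustration:

```python
# Illustrative values only; actual figures depend on the tube and circuit.
R_leak = 2e6   # grid leak resistance, ohms (2 megohms assumed)
I_grid = 1e-6  # dc grid current, amperes (1 microampere assumed)

# With the grid leak returned to the cathode (or the negative filament end),
# the dc bias at the grid is the product of the leak resistance and the dc
# grid current, negative with respect to the cathode.
V_bias = -R_leak * I_grid
print(f"Grid bias: {V_bias:.1f} V")  # -2.0 V for these assumed values
```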
As the resistance of the grid leak is increased, the grid resistance R_g increases and the audio frequency bandwidth at the grid decreases, for a given grid condenser capacitance. [ 26 ] [ 27 ]
For triode tubes, the dc voltage at the plate is chosen for operation of the tube at the same plate current usually used in amplifier operation and is typically less than 100 volts. [ 44 ] [ 45 ] For pentode and tetrode tubes, the screen grid voltage is chosen or made adjustable to permit the desired plate current and amplification with the chosen plate load impedance. [ 46 ]
For grid leak power detection, the time constant of the grid leak and condenser must be shorter than the period of the highest audio frequency to be reproduced. [ 47 ] [ 48 ] A grid leak of around 250,000 to 500,000 ohms is suitable with a condenser of 100 pF. [ 30 ] [ 47 ] The grid leak resistance for grid leak power detection can be determined by R = 1/(6.28·C·F), where F is the highest audio frequency to be reproduced and C is the grid condenser capacitance. [ 49 ] A tube requiring comparatively large grid voltage for plate current cutoff is of advantage (usually a low amplification factor triode). [ 29 ] The peak 100 percent modulated input signal voltage the grid leak detector can demodulate without excess distortion is about one half of the projected cutoff bias voltage (E_a/μ), [ 50 ] corresponding to a peak unmodulated carrier voltage of about one quarter of the projected cutoff bias. [ 51 ] [ 29 ] For power grid detection using a directly heated cathode tube, the grid leak resistor is connected between the grid and the negative end of the filament, either directly or through the RF transformer.
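A worked sketch of these design relations follows; the capacitance, audio bandwidth, plate voltage and amplification factor are all assumed for illustration rather than taken from any specific tube.

```python
# All values are assumed for illustration; they do not describe a specific design.
C = 100e-12  # grid condenser capacitance, farads (100 pF)
F = 5000.0   # highest audio frequency to be reproduced, hertz

# Grid leak resistance for power detection per the relation R = 1/(6.28*C*F).
R = 1.0 / (6.28 * C * F)
print(f"Grid leak resistance: {R:.0f} ohms")  # about 318,000 ohms

# Largest signals handled without excess distortion, for a hypothetical
# low-mu triode with 90 V on the plate and an amplification factor of 3.
E_a = 90.0
mu = 3.0
projected_cutoff_bias = E_a / mu                       # 30 V
peak_100pct_modulated = projected_cutoff_bias / 2.0    # about 15 V peak
peak_unmodulated_carrier = projected_cutoff_bias / 4.0 # about 7.5 V peak
print(peak_100pct_modulated, peak_unmodulated_carrier)
```

The resulting grid leak value falls within the 250,000 to 500,000 ohm range quoted above, and the signal-handling figures illustrate why power grid detection suits relatively strong input signals.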
Tetrode and pentode tubes provide significantly higher grid input impedance than triodes, resulting in less loading of the circuit providing the signal to the detector. [ 52 ] Tetrode and pentode tubes also produce significantly higher audio frequency output amplitude at small carrier input signal levels (around one volt or less) in grid leak detector applications than triodes. [ 53 ] [ 54 ]
One potential disadvantage of the grid leak detector, primarily in non-regenerative circuits, is that of the load it can present to the preceding circuit. [ 36 ] The radio frequency input impedance of the grid leak detector is dominated by the tube's grid input impedance, which can be on the order of 6000 ohms or less for triodes, depending on tube characteristics and signal frequency. Other disadvantages are that it can produce more distortion and is less suitable for input signal voltages over a volt or two than the plate detector or diode detector. [ 55 ] [ 56 ] | https://en.wikipedia.org/wiki/Grid-leak_detector |
grid.org was a website and online community established in 2001 for cluster computing and grid computing software users. For six years it operated several different volunteer computing projects that allowed members to donate their spare computer cycles to worthwhile causes. In 2007, it became a community for open source cluster and grid computing software. After around 2010 it redirected to other sites.
From its establishment in April 2001 until April 27, 2007, [ 1 ] grid.org was the website and organization that ran distributed computing projects such as the United Devices Cancer Research Project, led by Jikku Venkat, Ph.D. The projects were sponsored philanthropically by United Devices (UD), and members participated in volunteer computing by running the UD Agent software (version 3.0).
The United Devices Cancer Research Project, which began in 2001, was seeking possible drugs for the treatment of cancer using distributed computing . [ 2 ] There were around 150,000 users in the United States and 170,000 in Europe along with hundreds of thousands more in other parts of the world.
The project was an alliance of several companies and organisations: [ 3 ]
United Devices released the cancer research screensaver under the principle of using spare computing power. The program, which could be set to run continually, used "virtual screening" to find possible interactions between molecules and target proteins, i.e. candidate drugs. These molecules ( ligands ) are sent to the host computer's UD Agent. When a molecule docks successfully with a target protein, the interaction is scored for further investigation.
The research consisted of two phases:
The IBM -sponsored Human Proteome Folding Project ("HPF"), phase 1, was announced on November 16, 2004 and was completed on July 3, 2006. The project operated simultaneously on both grid.org and IBM's World Community Grid . [ 7 ]
It made use of the "Rosetta" software to predict the structure of human proteins in order to help predict the function of proteins. This information may someday be used to help cure a variety of diseases and genetic defects.
According to an announcement on the grid.org forums, [ 8 ] after the HPF1 project was completed it was left to continue running on grid.org until August 9, 2006. [ 9 ] During that time, members whose computers were configured to run this project got new work and spent computing resources calculating a result, but the result was returned to grid.org for points only—it was not used for scientific research.
The status of the Human Proteome Folding Project caused some discussion on the grid.org forums. Most members wanted to see all available computing power directed toward the still-active cancer project, [ 9 ] but UD representative Robby Brewer asserted that "some [users] like the screensaver". [ 8 ] [ 10 ] As noted above, in the end the redundant HPF1 work on grid.org was halted. [ 9 ]
The Smallpox Research Grid was a part of United Devices "Patriot Grid" initiative to fight biological terrorism. This project helped analyze potential drug candidates for a medical therapy in the fight against smallpox virus. It made use of the "LigandFit" software (that had already been used by phase 2 of the Cancer Research project), but with a specialized set of target molecules that targeted the smallpox virus. [ 11 ]
The partners of the project included University of Oxford , the University of Western Ontario , Memorial Sloan–Kettering Cancer Center , Essex University , Evotec OAI , Accelrys , and IBM . [ 12 ]
The World Community Grid largely began because of the success of this project. [ citation needed ]
The Anthrax Research Project was a part of the United Devices "Patriot Grid" initiative to fight biological terrorism. It made use of the "LigandFit" software (that had already been used by phase 2 of the Cancer Research project), but with a specialized set of target molecules that targeted the advanced stages of anthrax bacterial infection.
The project operated from January 22, 2002 until February 14, 2002 and ended after a total of 3.57 billion molecules had been screened. The results of the research project were transmitted to biological scientists so that the screening begun in the computational simulations could be completed. [ 13 ]
The partners of the project included Oxford University .
The HMMER Genetic Research project made use of the Hidden Markov model to search for patterns in genetic DNA sequences. [ 14 ]
The Web Performance Testing project was operated as a commercial opportunity with select web hosting providers in order to help them test the scalability of their server infrastructures under periods of high-demand. [ 15 ]
In November 2007, grid.org was repositioned by Univa as a community to allow users to interact and discuss open source cluster and grid related topics. [ 16 ] It allowed users to download, get support for, contribute to, and report issues about the open source Globus Toolkit based products offered by Univa.
Over 100,000 unique visitors were reported in 2008. [ 17 ] Around mid 2010 it redirected to Unicluster.org (a Univa product) and by 2012 it redirected to Univa's main site. | https://en.wikipedia.org/wiki/Grid.org |
GridCase (stylized as GRiDCASE ) is a line of rugged tablets and laptops by Grid Systems Corporation released as a successor of the GRiD Compass line. The first model was released in 1985.
1590
1985; low-contrast LCD screen with a green background (instead of the orange/yellow plasma screen with a black background used on the Compass laptops). [ 4 ]
This model ran MS-DOS 2.11 with a set of applications burned into ROM that fit in the ROM tray at the front.
1986; returned to a plasma screen (but with red/orange text and a wide aspect ratio ). The internal power supply can be ejected and a battery pack installed in its place. [ 5 ]
3D view: http://vintage-laptops.com/en/grid-case-3
1986; the rugged version of the GRiDCASE 3 with electromagnetic protection.
3D view: http://vintage-laptops.com/en/grid-case-tempest
1988; could be equipped with a plasma or an LCD screen.
It was based on an Intel 80286 CPU.
3D view: http://vintage-laptops.com/en/grid-case-1520
Source: [ 6 ]
A modification of the 1530, the GRiDCASE 1535EXP is a rugged laptop with an 80386 CPU , an optional 80387 floating-point coprocessor and up to 8 Mbyte of DRAM . It was first flown into space in December 1992 on STS-53 for use with the HERCULES geolocation device. The 1535EXP was also the first rugged portable PC to attain full TEMPEST accreditation from the NSA. [ 3 ]
Another modification, the GRiDCASE 1537EXP, has a different screen (640×480 instead of 640×400, but physically smaller).
The power input is 100–240 V AC 50/60/400 Hz , 80 W . The 400 Hz utility frequency is common on airplanes and submarines .
3D view: http://vintage-laptops.com/en/grid-case-1537e
1990; [ 6 ] This model has an integrated pointing device, [ 6 ] and was based on an Intel 80386sx processor.
The 1550sx version has 4 MB RAM, a 100 MB hard disk, a 1.44 MB floppy drive, a black-and-white VGA LCD screen with poor visibility, a built-in 2400 bps modem, and weighs 5.5 kg (12 lbs.) without battery or power supply. [ 8 ]
3D view: http://vintage-laptops.com/en/grid-case-1550sx
In the period from 1990–1994, the Tandy Corporation together with GRiD Systems Corporation produced the following models:
GRiD 1450SX – 3D view: http://vintage-laptops.com/en/grid-1450sx
GRiD 1810 – 3D view: http://vintage-laptops.com/en/grid-1810
GRiD 1660 and 1660C – 3D view: http://vintage-laptops.com/en/grid-1660
GRiD 1680 and 1680C – 3D view: http://vintage-laptops.com/en/grid-1680
GRiD 1720, 1750, 1755 – 3D view: http://vintage-laptops.com/en/grid-1755
GRiD 4025N and 4025NC – 3D view: http://vintage-laptops.com/en/grid-4025nc
The line was reintroduced in 1995.
There is also a Tempest version of this model (1580T).
The initial model (1995) used a Pentium CPU and a pointing stick ; the 1580 XGA (1998) used a Pentium II and a touchpad .
3D view: http://vintage-laptops.com/en/grid-case-1580
3D view: http://vintage-laptops.com/en/grid-case-1580-tempest
3D view: https://vintage-laptops.com/en/grid-case-1580-XGA
| https://en.wikipedia.org/wiki/GridCase