Gentzen's consistency proof is a result of proof theory in mathematical logic, published by Gerhard Gentzen in 1936. It shows that the Peano axioms of first-order arithmetic do not contain a contradiction (i.e., are "consistent"), as long as a certain other system used in the proof does not contain any contradictions either. This other system, today called "primitive recursive arithmetic with the additional principle of quantifier-free transfinite induction up to the ordinal ε₀", is neither weaker nor stronger than the system of Peano axioms. Gentzen argued that it avoids the questionable modes of inference contained in Peano arithmetic and that its consistency is therefore less controversial.
Gentzen's theorem is concerned with first-order arithmetic: the theory of the natural numbers , including their addition and multiplication, axiomatized by the first-order Peano axioms . This is a "first-order" theory: the quantifiers extend over natural numbers, but not over sets or functions of natural numbers. The theory is strong enough to describe recursively defined integer functions such as exponentiation, factorials or the Fibonacci sequence .
Gentzen showed that the consistency of the first-order Peano axioms is provable over the base theory of primitive recursive arithmetic with the additional principle of quantifier-free transfinite induction up to the ordinal ε₀. Primitive recursive arithmetic is a much simplified form of arithmetic that is rather uncontroversial. The additional principle means, informally, that there is a well-ordering on the set of finite rooted trees. Formally, ε₀ is the first ordinal α such that ω^α = α, i.e. the limit of the sequence ω, ω^ω, ω^(ω^ω), …
It is a countable ordinal much smaller than large countable ordinals. To express ordinals in the language of arithmetic, an ordinal notation is needed, i.e. a way to assign natural numbers to ordinals less than ε₀. This can be done in various ways, one example provided by Cantor's normal form theorem. Gentzen's proof is based on the following assumption: for any quantifier-free formula A(x), if there is an ordinal a < ε₀ for which A(a) is false, then there is a least such ordinal.
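A minimal sketch of such a notation, assuming one standard encoding (this is not Gentzen's own formalism): an ordinal below ε₀ is written in Cantor normal form ω^e1 + … + ω^ek with e1 ≥ … ≥ ek and stored as the tuple of its exponents, each exponent being such a tuple in turn. The nested tuples are exactly finite rooted trees, which is why the induction principle can be phrased in terms of trees, and comparing two notations is a short recursive procedure.

```python
# Sketch of an ordinal notation for ordinals below epsilon_0.
# An ordinal in Cantor normal form omega^e1 + ... + omega^ek
# (with e1 >= ... >= ek) is encoded as the tuple (e1, ..., ek);
# each exponent is itself such a tuple, and () encodes 0.

def cmp_ord(a, b):
    """Compare two encoded ordinals; returns -1, 0, or 1.
    Assumes both inputs are valid (exponents non-increasing)."""
    for ea, eb in zip(a, b):
        c = cmp_ord(ea, eb)
        if c != 0:
            return c  # the first differing exponent decides
    # All shared terms are equal: the longer sum is the larger ordinal.
    return (len(a) > len(b)) - (len(a) < len(b))

ZERO  = ()
ONE   = (ZERO,)      # omega^0 = 1
TWO   = (ZERO, ZERO) # omega^0 + omega^0
OMEGA = (ONE,)       # omega^1

assert cmp_ord(TWO, OMEGA) == -1          # 2 < omega
assert cmp_ord(OMEGA, (ONE, ZERO)) == -1  # omega < omega + 1
assert cmp_ord((OMEGA,), OMEGA) == 1      # omega^omega > omega
```

In these terms, quantifier-free transfinite induction up to ε₀ amounts to the assertion that no primitive recursive operation produces an infinite strictly descending sequence under such a comparison, which is exactly what the reduction procedure described next relies on.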
Gentzen defines a notion of "reduction procedure" for proofs in Peano arithmetic. For a given proof, such a procedure produces a tree of proofs, with the given one serving as the root of the tree, and the other proofs being, in a sense, "simpler" than the given one. This increasing simplicity is formalized by attaching an ordinal less than ε₀ to every proof, and showing that, as one moves down the tree, these ordinals get smaller with every step. He then shows that if there were a proof of a contradiction, the reduction procedure would result in an infinite strictly descending sequence of ordinals smaller than ε₀, produced by a primitive recursive operation on proofs corresponding to a quantifier-free formula. [ 1 ]
Gentzen's proof highlights one commonly missed aspect of Gödel's second incompleteness theorem . It is sometimes claimed that the consistency of a theory can only be proved in a stronger theory. Gentzen's theory obtained by adding quantifier-free transfinite induction to primitive recursive arithmetic proves the consistency of first-order Peano arithmetic (PA) but does not contain PA. For example, it does not prove ordinary mathematical induction for all formulae, whereas PA does (since all instances of induction are axioms of PA). Gentzen's theory is not contained in PA, either, however, since it can prove a number-theoretical fact—the consistency of PA—that PA cannot. Therefore, the two theories are, in one sense, incomparable .
That said, there are other, finer ways to compare the strength of theories, the most important of which is defined in terms of the notion of interpretability . It can be shown that, if one theory T is interpretable in another B, then T is consistent if B is. (Indeed, this is a large point of the notion of interpretability.) And, assuming that T is not extremely weak, T itself will be able to prove this very conditional: If B is consistent, then so is T. Hence, T cannot prove that B is consistent, by the second incompleteness theorem, whereas B may well be able to prove that T is consistent. This is what motivates the idea of using interpretability to compare theories, i.e., the thought that, if B interprets T, then B is at least as strong (in the sense of 'consistency strength') as T is.
A strong form of the second incompleteness theorem, proved by Pavel Pudlák, [ 2 ] who was building on earlier work by Solomon Feferman , [ 3 ] states that no consistent theory T that contains Robinson arithmetic , Q, can interpret Q plus Con(T), the statement that T is consistent. By contrast, Q+Con(T) does interpret T, by a strong form of the arithmetized completeness theorem . So Q+Con(T) is always stronger (in one good sense) than T is. But Gentzen's theory trivially interprets Q+Con(PA), since it contains Q and proves Con(PA), and so Gentzen's theory interprets PA. But, by Pudlák's result, PA cannot interpret Gentzen's theory, since Gentzen's theory (as just said) interprets Q+Con(PA), and interpretability is transitive. That is: If PA did interpret Gentzen's theory, then it would also interpret Q+Con(PA) and so would be inconsistent, by Pudlák's result. So, in the sense of consistency strength, as characterized by interpretability, Gentzen's theory is stronger than Peano arithmetic.
Hermann Weyl made the following comment in 1946 regarding the significance of Gentzen's consistency result following the devastating impact of Gödel's 1931 incompleteness result on Hilbert's plan to prove the consistency of mathematics. [ 4 ]
Kleene (2009 , p. 479) made the following comment in 1952 on the significance of Gentzen's result, particularly in the context of the formalist program which was initiated by Hilbert.
In contrast, Bernays (1967) commented on whether Hilbert's confinement to finitary methods was too restrictive:
Gentzen's first version of his consistency proof was not published during his lifetime because Paul Bernays had objected to a method implicitly used in the proof. The modified proof, described above, was published in 1936 in the journal Mathematische Annalen. Gentzen went on to publish two more consistency proofs, one in 1938 and one in 1943. All of these are contained in ( Gentzen & Szabo 1969 ).
Kurt Gödel reinterpreted Gentzen's 1936 proof in a lecture in 1938 in what came to be known as the no-counterexample interpretation. Both the original proof and the reformulation can be understood in game-theoretic terms (Tait 2005).
In 1940 Wilhelm Ackermann published another consistency proof for Peano arithmetic, also using the ordinal ε₀.
Another consistency proof for arithmetic was published by I. N. Khlodovskii in 1959.
Gentzen's proof is the first example of what is called proof-theoretic ordinal analysis. In ordinal analysis one gauges the strength of theories by measuring how large the (constructive) ordinals are that can be proven to be well-ordered, or, equivalently, for how large a (constructive) ordinal transfinite induction can be proven. A constructive ordinal is the order type of a recursive well-ordering of natural numbers.
In this language, Gentzen's work establishes that the proof-theoretic ordinal of first-order Peano arithmetic is ε₀.
Laurence Kirby and Jeff Paris proved in 1982 that Goodstein's theorem cannot be proven in Peano arithmetic. Their proof was based on Gentzen's theorem; a computation of a Goodstein sequence is sketched below. [ 5 ] | https://en.wikipedia.org/wiki/Gentzen's_consistency_proof
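To make the connection concrete, a Goodstein sequence can be computed directly from its definition: write the current value in hereditary base-b notation, replace every occurrence of b by b + 1, then subtract one. The sketch below assumes nothing beyond that definition; Goodstein's theorem states that every such sequence eventually reaches 0, which is provable with transfinite induction up to ε₀ but not in PA.

```python
def hereditary(n, base):
    """Hereditary base-`base` notation of n, as [(exponent, coeff), ...]
    with each exponent itself in hereditary notation."""
    terms, e = [], 0
    while n > 0:
        n, r = divmod(n, base)
        if r:
            terms.append((e, r))
        e += 1
    return [(hereditary(exp, base), c) for exp, c in terms]

def evaluate(terms, base):
    """Evaluate a hereditary representation in a (possibly new) base."""
    return sum(c * base ** evaluate(exp, base) for exp, c in terms)

def goodstein(n, steps):
    """First `steps` terms of the Goodstein sequence starting at n."""
    out, base = [], 2
    for _ in range(steps):
        out.append(n)
        if n == 0:
            break
        # Bump every occurrence of the base, then subtract one.
        n = evaluate(hereditary(n, base), base + 1) - 1
        base += 1
    return out

print(goodstein(3, 8))  # [3, 3, 3, 2, 1, 0] -- reaches zero quickly
print(goodstein(4, 6))  # [4, 26, 41, 60, 83, 109] -- grows for an
                        # astronomically long time before reaching 0
```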
In mathematics, genus (pl.: genera) has a few different, but closely related, meanings. Intuitively, the genus is the number of "holes" of a surface. [ 1 ] A sphere has genus 0, while a torus has genus 1.
The genus of a connected, orientable surface is an integer representing the maximum number of cuttings along non-intersecting closed simple curves without rendering the resultant manifold disconnected. [ 2 ] It is equal to the number of handles on it. Alternatively, it can be defined in terms of the Euler characteristic χ, via the relationship χ = 2 − 2g for closed surfaces, where g is the genus. For surfaces with b boundary components, the equation reads χ = 2 − 2g − b.
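As a quick check of these relationships, the Euler characteristic of a triangulated surface is χ = V − E + F, and the genus then follows from χ = 2 − 2g − b. A minimal sketch, using the classical 7-vertex (Császár) triangulation of the torus as a test case:

```python
def genus(V, E, F, b=0):
    """Genus of a compact orientable surface from a triangulation,
    via chi = V - E + F and chi = 2 - 2g - b (b = boundary components)."""
    chi = V - E + F
    two_g = 2 - chi - b
    assert two_g % 2 == 0 and two_g >= 0, "not a valid orientable surface"
    return two_g // 2

print(genus(4, 6, 4))    # tetrahedron (a sphere): chi = 2, genus 0
print(genus(7, 21, 14))  # 7-vertex torus triangulation: chi = 0, genus 1
```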
In layman's terms, the genus is the number of "holes" an object has ("holes" interpreted in the sense of doughnut holes; a hollow sphere would be considered as having zero holes in this sense). [ 3 ] A torus has 1 such hole, while a sphere has 0.
For instance: the sphere S² and a disc both have genus zero, while a torus has genus one.
Explicit construction of surfaces of genus g is given in the article on the fundamental polygon.
The non-orientable genus , demigenus , or Euler genus of a connected, non-orientable closed surface is a positive integer representing the number of cross-caps attached to a sphere . Alternatively, it can be defined for a closed surface in terms of the Euler characteristic χ, via the relationship χ = 2 − k , where k is the non-orientable genus.
For instance: the real projective plane has non-orientable genus one, and the Klein bottle has non-orientable genus two.
The genus of a knot K is defined as the minimal genus of all Seifert surfaces for K. [ 4 ] A Seifert surface of a knot is, however, a manifold with boundary, the boundary being the knot, i.e. homeomorphic to the unit circle. The genus of such a surface is defined to be the genus of the two-manifold obtained by gluing the unit disk along the boundary.
The genus of a 3-dimensional handlebody is an integer representing the maximum number of cuttings along embedded disks without rendering the resultant manifold disconnected. It is equal to the number of handles on it.
For instance: a ball has genus zero, while a solid torus has genus one.
The genus of a graph is the minimal integer n such that the graph can be drawn without crossing itself on a sphere with n handles (i.e. an oriented surface of genus n). Thus, a planar graph has genus 0, because it can be drawn on a sphere without self-crossing.
The non-orientable genus of a graph is the minimal integer n such that the graph can be drawn without crossing itself on a sphere with n cross-caps (i.e. a non-orientable surface of (non-orientable) genus n ). (This number is also called the demigenus .)
The Euler genus is the minimal integer n such that the graph can be drawn without crossing itself on a sphere with n cross-caps or on a sphere with n/2 handles. [ 5 ]
In topological graph theory there are several definitions of the genus of a group . Arthur T. White introduced the following concept. The genus of a group G is the minimum genus of a (connected, undirected) Cayley graph for G .
The graph genus problem is NP-complete . [ 6 ]
There are two related definitions of the genus of any projective algebraic scheme X: the arithmetic genus and the geometric genus. [ 7 ] When X is an algebraic curve with field of definition the complex numbers, and if X has no singular points, then these definitions agree and coincide with the topological definition applied to the Riemann surface of X (its manifold of complex points). For example, the definition of an elliptic curve from algebraic geometry is a connected non-singular projective curve of genus 1 with a given rational point on it.
By the Riemann–Roch theorem, an irreducible plane curve of degree d given by the vanishing locus of a section s ∈ Γ(ℙ², O_{ℙ²}(d)) has geometric genus g = (d − 1)(d − 2)/2 − s, where s is here the number of singularities when properly counted.
In differential geometry, a genus of an oriented manifold M may be defined as a complex number Φ(M) subject to the conditions that Φ(M₁ ⊔ M₂) = Φ(M₁) + Φ(M₂), that Φ(M₁ × M₂) = Φ(M₁) · Φ(M₂), and that Φ(M) = 0 if M is a boundary.
In other words, Φ is a ring homomorphism R → ℂ, where R is Thom's oriented cobordism ring. [ 8 ]
The genus Φ is multiplicative for all bundles on spinor manifolds with a connected compact structure if log_Φ is an elliptic integral such as log_Φ(x) = ∫₀ˣ (1 − 2δt² + εt⁴)^(−1/2) dt for some δ, ε ∈ ℂ. This genus is called an elliptic genus.
The Euler characteristic χ(M) is not a genus in this sense, since it is not invariant under cobordism.
Genus can also be calculated for the graph spanned by the net of chemical interactions in nucleic acids or proteins. In particular, one may study the growth of the genus along the chain. Such a function (called the genus trace) shows the topological complexity and domain structure of biomolecules. [ 9 ] | https://en.wikipedia.org/wiki/Genus_(mathematics)
In mathematics, a genus g surface (also known as a g-torus or g-holed torus) is a surface formed by the connected sum of g distinct tori: the interior of a disk is removed from each of g distinct tori and the boundaries of the g disks are identified (glued together), forming a g-torus. The genus of such a surface is g.
A genus g surface is a two-dimensional manifold . The classification theorem for surfaces states that every compact connected two-dimensional manifold is homeomorphic to either the sphere, the connected sum of tori, or the connected sum of real projective planes .
The genus of a connected orientable surface is an integer representing the maximum number of cuttings along non-intersecting closed simple curves without rendering the resultant manifold disconnected. [ 1 ] It is equal to the number of handles on it. Alternatively, it can be defined in terms of the Euler characteristic χ , via the relationship χ = 2 − 2 g for closed surfaces , where g is the genus.
The genus (sometimes called the demigenus or Euler genus) of a connected non-orientable closed surface is a positive integer representing the number of cross-caps attached to a sphere. Alternatively, it can be defined for a closed surface in terms of the Euler characteristic χ , via the relationship χ = 2 − g , where g is the non-orientable genus.
An orientable surface of genus zero is the sphere S². Another surface of genus zero is the disc.
A genus one orientable surface is the ordinary torus. A non-orientable surface of genus one is the projective plane . [ 2 ]
Elliptic curves over the complex numbers can be identified with genus 1 surfaces. The formulation of elliptic curves as the embedding of a torus in the complex projective plane follows naturally from a property of Weierstrass's elliptic functions that allows elliptic curves to be obtained from the quotient of the complex plane by a lattice . [ 3 ]
The term double torus is occasionally used to denote a genus 2 surface. [ 4 ] [ 5 ] A non-orientable surface of genus two is the Klein bottle .
The Bolza surface is the most symmetric Riemann surface of genus 2, in the sense that it has the largest possible conformal automorphism group . [ 6 ]
The term triple torus is also occasionally used to denote a genus 3 surface. [ 7 ] [ 5 ]
The Klein quartic is a compact Riemann surface of genus 3 with the highest possible order automorphism group for compact Riemann surfaces of genus 3. It has 168 orientation-preserving automorphisms, and 336 automorphisms altogether. | https://en.wikipedia.org/wiki/Genus_g_surface |
GENWI is a privately held technology company based in San Jose, CA that provides a mobile content enablement platform. [ 1 ] GENWI is short for "Generation Wireless". [ 2 ]
GENWI was a free web-based news reader, or aggregator, initially released in March 2007. [ 3 ] GENWI provided a news feed service by enabling users to publish their feeds to one profile and follow others' news feeds in the feed reader – this feed reader was called "Wire" and was capable of reading RSS, Media RSS, iTunes RSS and ATOM feeds. [ 4 ] GENWI offered a suite of social networking features built into the RSS reader. Users were able to add friends, send messages, leave comments and share individual feed items. The site underwent a major redesign in November 2008 [ 5 ] and was shut down in 2009.
In January 2010, GENWI, Inc. used the same technology that built their RSS reader to launch iSites.us, [ 6 ] a smartphone app builder and management system, which enables businesses to build applications for iPhone and Android using RSS, ATOM or social feeds. [ 7 ] GENWI uses cloud-based technology to keep more than 1,500 native apps up-to-date and to instantly build HTML5 apps for iPhone. [ 8 ]
In September 2011, GENWI launched Condé Nast 's "The Daily W" app and rebranded the iSites brand back to GENWI and now helps publishers and brands create engaging native and HTML5 apps with a cloud-based mobile content management system , or mCMS.
| https://en.wikipedia.org/wiki/Genwi
GeoSpy was an outdoor recreational activity [ 2 ] that combined geographic locations and maps with photography in a location-based game. Playing the game required a camera and a mobile Global Positioning System (GPS) device. [ 3 ]
There are several goals in the game, but the primary one is to find and create objects by taking pictures of objects and places, which are uploaded to the game's website.
To create an object the participant requires complete knowledge of the object, along with GPS coordinates and photos of the object that the participant has taken. Similarly, existing objects can be secured by visiting and photographing the object and posting the photo on the game's website as proof that the competitor visited the object. The objects are divided into different categories, namely Civil, Religious & Historical, Natural, Technical and Military objects, with further sub-categories such as hospitals, museums, factories, memorials, etc. [ 4 ]
Participants in the game are called spies. | https://en.wikipedia.org/wiki/GeoSpy |
Geobiology is a field of scientific research that explores the interactions between the physical Earth and the biosphere . It is a relatively young field, and its borders are fluid. There is considerable overlap with the fields of ecology , evolutionary biology , microbiology , paleontology , and particularly soil science and biogeochemistry . Geobiology applies the principles and methods of biology, geology, and soil science to the study of the ancient history of the co-evolution of life and Earth as well as the role of life in the modern world. [ 2 ] Geobiologic studies tend to be focused on microorganisms , and on the role that life plays in altering the chemical and physical environment of the pedosphere , which exists at the intersection of the lithosphere , atmosphere , hydrosphere and/or cryosphere . It differs from biogeochemistry in that the focus is on processes and organisms over space and time rather than on global chemical cycles.
Geobiological research synthesizes the geologic record with modern biologic studies. It deals with process (how organisms affect the Earth and vice versa) as well as history (how the Earth and life have changed together). Much research is grounded in the search for fundamental understanding, but geobiology can also be applied, as in the case of microbes that clean up oil spills. [ 3 ]
Geobiology employs molecular biology, environmental microbiology, organic geochemistry, and the geologic record to investigate the evolutionary interconnectedness of life and Earth. It attempts to understand how the Earth has changed since the origin of life and what it might have been like along the way. Some definitions of geobiology even push the boundaries of this time frame, extending to understanding the origin of life and the role that humans have played and will continue to play in shaping the Earth in the Anthropocene. [ 3 ]
The term geobiology was coined by Lourens Baas Becking in 1934. In his words, geobiology "is an attempt to describe the relationship between organisms and the Earth," for "the organism is part of the Earth and its lot is interwoven with that of the Earth." Baas Becking's definition of geobiology was born of a desire to unify environmental biology with laboratory biology. The way he practiced it aligns closely with modern environmental microbial ecology, though his definition remains applicable to all of geobiology. In his book Geobiology, Baas Becking stated that he had no intention of inventing a new field of study. [ 4 ] Baas Becking's understanding of geobiology was heavily influenced by his predecessors, including Martinus Beijerinck, his teacher from the Dutch School of Microbiology. Others included Vladimir Vernadsky, who argued that life changes the surface environment of Earth in The Biosphere, his 1926 book, [ 5 ] and Sergei Vinogradsky, famous for discovering lithotrophic bacteria. [ 6 ]
The first laboratory officially dedicated to the study of geobiology was the Baas Becking Geobiological Laboratory in Australia, which opened its doors in 1965. [ 4 ] However, it took another 40 or so years for geobiology to become a firmly rooted scientific discipline, thanks in part to advances in geochemistry and genetics that enabled scientists to begin to synthesize the study of life and planet.
In the 1930s, Alfred Treibs discovered chlorophyll-like porphyrins in petroleum, confirming its biological origin, [ 7 ] thereby founding organic geochemistry and establishing the notion of biomarkers, a critical aspect of geobiology. But several decades passed before the tools were available to begin to search in earnest for chemical marks of life in the rocks. In the 1970s and '80s, scientists like Geoffrey Eglinton and Roger Summons began to find lipid biomarkers in the rock record using equipment like GC–MS. [ 8 ]
On the biology side, in 1977 Carl Woese and George Fox published a phylogeny of life on Earth that included a new domain, the Archaea. [ 9 ] In the 1990s, genetics and genomics studies became possible, broadening the scope of investigation of the interaction of life and planet.
Today, geobiology has its own journals, such as Geobiology , established in 2003, [ 10 ] and Biogeosciences , established in 2004, [ 11 ] as well as recognition at major scientific conferences. It got its own Gordon Research Conference in 2011, [ 12 ] a number of geobiology textbooks have been published, [ 3 ] [ 13 ] and many universities around the world offer degree programs in geobiology (see External links).
Perhaps the most profound geobiological event is the introduction of oxygen into the atmosphere by photosynthetic bacteria . This oxygenation of Earth 's primordial atmosphere (the so-called oxygen catastrophe or Great Oxygenation Event ) and the oxygenation of the oceans altered surface biogeochemical cycles and the types of organisms that have been evolutionarily selected for.
A subsequent major change was the advent of multicellularity . The presence of oxygen allowed eukaryotes and, later, multicellular life to evolve.
More anthropocentric geobiologic events include the origin of animals and the establishment of terrestrial plant life, which affected continental erosion and nutrient cycling , and likely changed the types of rivers observed, allowing channelization of what were previously predominantly braided rivers.
More subtle geobiological events include the role of termites in overturning sediments, coral reefs in depositing calcium carbonate and breaking waves, sponges in absorbing dissolved marine silica, the role of dinosaurs in breaching river levees and promoting flooding, and the role of large mammal dung in distributing nutrients. [ 15 ] [ 16 ]
Geobiology is founded upon a few core concepts that unite the study of Earth and life. While there are many aspects of studying past and present interactions between life and Earth that are unclear, several important ideas and concepts provide a basis of knowledge in geobiology that serve as a platform for posing researchable questions, including the evolution of life and planet and the co-evolution of the two, genetics - from both a historical and functional standpoint, the metabolic diversity of all life, the sedimentological preservation of past life, and the origin of life.
A core concept in geobiology is that life changes over time through evolution . The theory of evolution postulates that unique populations of organisms or species arose from genetic modifications in the ancestral population which were passed down by drift and natural selection . [ 17 ]
Along with standard biological evolution, life and planet co-evolve. Since the best adaptations are those that suit the ecological niche that the organism lives in, the physical and chemical characteristics of the environment drive the evolution of life by natural selection, but the opposite can also be true: with every advent of evolution, the environment changes.
A classic example of co-evolution is the evolution of oxygen -producing photosynthetic cyanobacteria which oxygenated Earth's Archean atmosphere. The ancestors of cyanobacteria began using water as an electron source to harness the energy of the sun and expelling oxygen before or during the early Paleoproterozoic . During this time, around 2.4 to 2.1 billion years ago, [ 18 ] geologic data suggests that atmospheric oxygen began to rise in what is termed the Great Oxygenation Event (GOE) . [ 19 ] [ 20 ] It is unclear for how long cyanobacteria had been doing oxygenic photosynthesis before the GOE. Some evidence suggests there were geochemical "buffers" or sinks suppressing the rise of oxygen such as volcanism [ 21 ] though cyanobacteria may have been around producing it before the GOE. [ 22 ] Other evidence indicates that the rise of oxygenic photosynthesis was coincident with the GOE. [ 23 ]
The presence of oxygen on Earth from its first production by cyanobacteria to the GOE and through today has drastically impacted the course of evolution of life and planet. [ 19 ] It may have triggered the formation of oxidized minerals [ 24 ] and the disappearance of oxidizable minerals like pyrite from ancient stream beds. [ 25 ] The presence of banded-iron formations (BIFs) have been interpreted as a clue for the rise of oxygen since small amounts of oxygen could have reacted with reduced ferrous iron (Fe(II)) in the oceans, resulting in the deposition of sediments containing Fe(III) oxide in places like Western Australia. [ 26 ] However, any oxidizing environment, including that provided by microbes such as the iron-oxidizing photoautotroph Rhodopseudomonas palustris , [ 27 ] can trigger iron oxide formation and thus BIF deposition. [ 28 ] [ 29 ] [ 30 ] Other mechanisms include oxidation by UV light . [ 31 ] Indeed, BIFs occur across large swaths of Earth's history and may not correlate with only one event. [ 30 ]
Other changes correlated with the rise of oxygen include the appearance of rust-red ancient paleosols , [ 19 ] different isotope fractionation of elements such as sulfur , [ 32 ] and global glaciations and Snowball Earth events, [ 33 ] perhaps caused by the oxidation of methane by oxygen, not to mention an overhaul of the types of organisms and metabolisms on Earth. Whereas organisms prior to the rise of oxygen were likely poisoned by oxygen gas as many anaerobes are today, [ 34 ] those that evolved ways to harness the electron-accepting and energy-giving power of oxygen were poised to thrive and colonize the aerobic environment.
Earth has not remained the same since its planetary formation 4.5 billion years ago. [ 35 ] [ 36 ] Continents have formed, broken up, and collided, offering new opportunities for and barriers to the dispersal of life. The redox state of the atmosphere and the oceans has changed, as indicated by isotope data. Fluctuating quantities of inorganic compounds such as carbon dioxide , nitrogen , methane , and oxygen have been driven by life evolving new biological metabolisms to make these chemicals and have driven the evolution of new metabolisms to use those chemicals. Earth acquired a magnetic field about 3.4 Ga [ 37 ] that has undergone a series of geomagnetic reversals on the order of millions of years. [ 38 ] The surface temperature is in constant fluctuation, falling in glaciations and Snowball Earth events due to ice–albedo feedback , [ 39 ] rising and melting due to volcanic outgassing, and stabilizing due to silicate weathering feedback . [ 40 ]
The Earth is not the only thing that has changed: the luminosity of the Sun has increased over time. Because rocks record a history of relatively constant temperatures since Earth's beginnings, there must have been more greenhouse gases to keep the temperatures up in the Archean, when the Sun was younger and fainter. [ 41 ] All these major differences in the environment of the Earth placed very different constraints on the evolution of life throughout our planet's history. Moreover, more subtle changes in the habitat of life are always occurring, shaping the organisms and traces that we observe today and in the rock record.
The genetic code is key to observing the history of evolution and understanding the capabilities of organisms. Genes are the basic unit of inheritance and function and, as such, they are the basic unit of evolution and the means behind metabolism . [ 42 ]
Phylogeny takes genetic sequences from living organisms and compares them to each other to reveal evolutionary relationships, much like a family tree reveals how individuals are connected to their distant cousins. [ 43 ] It allows us to decipher modern relationships and infer how evolution happened in the past.
Phylogeny can give some sense of history when combined with a little bit more information. Each difference in the DNA indicates divergence between one species and another. [ 43 ] This divergence, whether via drift or natural selection, is representative of some lapse of time. [ 43 ] Comparing DNA sequences alone gives a record of the history of evolution with an arbitrary measure of phylogenetic distance “dating” that last common ancestor. However, if information about the rate of genetic mutation is available or geologic markers are present to calibrate evolutionary divergence (i.e. fossils ), we have a timeline of evolution. [ 44 ] From there, with an idea about other contemporaneous changes in life and environment, we can begin to speculate why certain evolutionary paths might have been selected for. [ 45 ]
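A minimal sketch of that calibration logic, with entirely hypothetical numbers: under a strict molecular clock, genetic distance between two lineages accumulates along both diverging branches, so distance ≈ 2 × rate × time; a fossil-dated split fixes the rate, after which other divergences can be dated.

```python
def calibrated_rate(distance, split_age):
    """Substitution rate (per site per year) from a divergence of known
    age: substitutions accrue on both diverging lineages."""
    return distance / (2.0 * split_age)

def divergence_time(distance, rate):
    """Time since the last common ancestor under a strict clock."""
    return distance / (2.0 * rate)

# Hypothetical example: a calibration pair at 2% sequence divergence is
# fossil-dated to 50 million years; a second pair differs at 3% of sites.
rate = calibrated_rate(0.02, 50e6)
print(divergence_time(0.03, rate) / 1e6)  # -> 75.0 (million years)
```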
Molecular biology allows scientists to understand a gene's function using microbial culturing and mutagenesis . Searching for similar genes in other organisms and in metagenomic and metatranscriptomic data allows us to understand what processes could be relevant and important in a given ecosystem, providing insight into the biogeochemical cycles in that environment.
For example, an intriguing problem in geobiology is the role of organisms in the global cycling of methane . Genetics has revealed that the methane monooxygenase gene ( pmo ) is used for oxidizing methane and is present in all aerobic methane-oxidizers, or methanotrophs . [ 46 ] The presence of DNA sequences of the pmo gene in the environment can be used as a proxy for methanotrophy. [ 47 ] [ 48 ] A more generalizable tool is the 16S ribosomal RNA gene, which is found in bacteria and archaea. This gene evolves very slowly over time and is not usually horizontally transferred , and so it is often used to distinguish different taxonomic units of organisms in the environment. [ 9 ] [ 49 ] In this way, genes are clues to organismal metabolism and identity. Genetics enables us to ask 'who is there?' and 'what are they doing?' This approach is called metagenomics . [ 49 ]
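The sketch below is a toy illustration of the proxy idea: scan sequencing reads for a conserved marker-gene motif and count the hits. The motif is hypothetical, and real surveys use alignment or profile-HMM tools (e.g. BLAST or HMMER) rather than literal string matching.

```python
# Toy illustration of marker-gene screening in metagenomic reads:
# count reads containing a (hypothetical) conserved motif of the
# pmoA methane monooxygenase gene, where N matches any base.

PMO_MOTIF = "GGNGACTGGGACTTCTGG"  # hypothetical motif, N = any base

def matches(read, motif):
    """Does the motif occur anywhere in the read?"""
    for i in range(len(read) - len(motif) + 1):
        window = read[i:i + len(motif)]
        if all(m in ("N", b) for m, b in zip(motif, window)):
            return True
    return False

reads = [
    "TTGGAGACTGGGACTTCTGGTA",  # carries the motif (N matched by A)
    "CCCCCCCCCCCCCCCCCCCCCC",
]
hits = sum(matches(r, PMO_MOTIF) for r in reads)
print(f"{hits}/{len(reads)} reads carry the pmoA-like motif")
```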
Life harnesses chemical reactions to generate energy, perform biosynthesis, and eliminate waste. [ 52 ] Different organisms use very different metabolic approaches to meet these basic needs. [ 53 ] While animals such as ourselves are limited to aerobic respiration, other organisms can "breathe" sulfate (SO₄²⁻), nitrate (NO₃⁻), ferric iron (Fe(III)), and uranium (U(VI)), or live off energy from fermentation. [ 53 ] Some organisms, like plants, are autotrophs, meaning that they can fix carbon dioxide for biosynthesis. Plants are photoautotrophs, in that they use the energy of light to fix carbon. Microorganisms employ oxygenic and anoxygenic photoautotrophy, as well as chemoautotrophy. Microbial communities can coordinate in syntrophic metabolisms to shift reaction kinetics in their favor. Many organisms can perform multiple metabolisms to achieve the same end goal; these are called mixotrophs. [ 53 ]
Biotic metabolism is directly tied to the global cycling of elements and compounds on Earth. The geochemical environment fuels life, which then produces different molecules that go into the external environment. (This is directly relevant to biogeochemistry .) In addition, biochemical reactions are catalyzed by enzymes which sometimes prefer one isotope over others. For example, oxygenic photosynthesis is catalyzed by RuBisCO , which prefers carbon-12 over carbon-13, resulting in carbon isotope fractionation in the rock record. [ 54 ]
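Isotope fractionation of this kind is reported in delta notation, δ¹³C = (R_sample / R_standard − 1) × 1000‰, with R = ¹³C/¹²C. A short worked sketch, using the approximate VPDB standard ratio and a hypothetical sample ratio:

```python
R_VPDB = 0.011180  # approximate 13C/12C ratio of the VPDB standard

def delta13C(r_sample, r_standard=R_VPDB):
    """delta-13C in per mil (standard delta notation)."""
    return (r_sample / r_standard - 1.0) * 1000.0

# RuBisCO discriminates against 13C, so photosynthetic organic carbon is
# isotopically light; a hypothetical sample ratio of 0.01088 gives about
# -27 per mil, in the range typical of C3 photosynthesis.
print(round(delta13C(0.01088), 1))  # -> -26.8
```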
Sedimentary rocks preserve remnants of the history of life on Earth in the form of fossils , biomarkers , isotopes , and other traces. The rock record is far from perfect, and the preservation of biosignatures is a rare occurrence. Understanding what factors determine the extent of preservation and the meaning behind what is preserved are important components to detangling the ancient history of the co-evolution of life and Earth. [ 8 ] The sedimentary record allows scientists to observe changes in life and Earth in composition over time and sometimes even date major transitions, like extinction events.
Some classic examples of geobiology in the sedimentary record include stromatolites and banded-iron formations. The role of life in the origin of both of these is a heavily debated topic. [ 19 ]
The first life arose from abiotic chemical reactions . When this happened, how it happened, and even what planet it happened on are uncertain. However, life follows the rules of and arose from lifeless chemistry and physics . It is constrained by principles such as thermodynamics . This is an important concept in the field because it represents the epitome of the interconnectedness, if not sameness, of life and Earth. [ 55 ]
While often delegated to the field of astrobiology , attempts to understand how and when life arose are relevant to geobiology as well. [ 56 ] The first major strides towards understanding the “how” came with the Miller-Urey experiment , when amino acids formed out of a simulated “ primordial soup ”. Another theory is that life originated in a system much like the hydrothermal vents at mid-oceanic spreading centers . In the Fischer-Tropsch synthesis , a variety of hydrocarbons form under vent-like conditions. Other ideas include the “RNA World” hypothesis , which postulates that the first biologic molecule was RNA , and the idea that life originated elsewhere in the Solar System and was brought to Earth, perhaps via a meteorite . [ 55 ]
While geobiology is a diverse and varied field, encompassing ideas and techniques from a wide range of disciplines, there are a number of important methods that are key to the study of the interaction of life and Earth that are highlighted here. [ 3 ]
As its name suggests, geobiology is closely related to many other fields of study, and does not have clearly defined boundaries or perfect agreement on what exactly it comprises. Some practitioners take a very broad view of its boundaries, encompassing many older, more established fields such as biogeochemistry, paleontology, and microbial ecology. Others take a narrower view, assigning it to emerging research that falls between these existing fields, such as geomicrobiology. The following list includes both fields that are clearly a part of geobiology, e.g. geomicrobiology, and fields that share scientific interests but have not historically been considered a sub-discipline of geobiology, e.g. paleontology.
Astrobiology is an interdisciplinary field that uses a combination of geobiological and planetary science data to establish a context for the search for life on other planets . The origin of life from non-living chemistry and geology, or abiogenesis , is a major topic in astrobiology. Even though it is fundamentally an earth-bound concern, and therefore of great geobiological interest, getting at the origin of life necessitates considering what life requires, what, if anything, is special about Earth, what might have changed to allow life to blossom, what constitutes evidence for life, and even what constitutes life itself. These are the same questions that scientists might ask when searching for alien life. In addition, astrobiologists research the possibility of life based on other metabolisms and elements, the survivability of Earth's organisms on other planets or spacecraft, planetary and solar system evolution, and space geochemistry. [ 57 ]
Biogeochemistry is a systems science that synthesizes the study of biological, geological, and chemical processes to understand the reactions and composition of the natural environment. It is concerned primarily with global elemental cycles, such as that of nitrogen and carbon. The father of biogeochemistry was James Lovelock , whose “ Gaia hypothesis ” proposed that Earth's biological, chemical, and geologic systems interact to stabilize the conditions on Earth that support life. [ 58 ]
Geobiochemistry is similar to biogeochemistry , but differs by placing emphasis on the effects of geology on the development of life's biochemical processes, as distinct from the role of life on Earth's cycles. Its primary goal is to link biological changes, encompassing evolutionary modifications of genes and changes in the expression of genes and proteins, to changes in the temperature, pressure, and composition of geochemical processes to understand when and how metabolism evolved. Geobiochemistry is founded on the notion that life is a planetary response because metabolic catalysis enables the release of energy trapped by a cooling planet. [ 59 ]
Microbiology is a broad scientific discipline pertaining to the study of that life which is best viewed under a microscope. It encompasses several fields that are of direct relevance to geobiology, and the tools of microbiology all pertain to geobiology. Environmental microbiology is especially entangled in geobiology, since it seeks an understanding of the actual organisms and processes that are relevant in nature, as opposed to the traditional lab-based approach to microbiology. Microbial ecology is similar, but tends to focus more on lab studies and the relationships between organisms within a community, as well as within the ecosystem of their chemical and geological physical environment. Both rely on techniques such as sample collection from diverse environments, metagenomics, DNA sequencing, and statistics.
Geomicrobiology traditionally studies the interactions between microbes and minerals. While it is generally reliant on the tools of microbiology, microbial geochemistry uses geological and chemical methods to approach the same topic from the perspective of the rocks. Geomicrobiology and microbial geochemistry (GMG) is a relatively new interdisciplinary field that more broadly takes on the relationship between microbes, Earth, and environmental systems. Billed as a subset of both geobiology and geochemistry, GMG seeks to understand elemental biogeochemical cycles and the evolution of life on Earth. Specifically, it asks questions about where microbes live, their local and global abundance, their structural and functional biochemistry, how they have evolved, biomineralization, and their preservation potential and presence in the rock record. In many ways, GMG appears to be equivalent to geobiology, but differs in scope: geobiology focuses on the role of all life, while GMG is strictly microbial. Regardless, it is these tiniest creatures that have dominated the history of life integrated over time and seem to have had the most far-reaching effects. [ 60 ]
Molecular geomicrobiology takes a mechanistic approach to understanding biological processes that are geologically relevant. It can be at the level of DNA, protein, lipids, or any metabolite. One example of molecular geomicrobiology research is studying how recently created lava fields are colonized by microbes. The University of Helsinki is currently conducting research to determine what specific microbial traits are necessary for successful initial colonization, and how waves of microbial succession can transform volcanic rock into fertile soil. [ 61 ]
Organic geochemistry is the study of organic molecules that appear in the fossil record in sedimentary rocks. Research in this field concerns molecular fossils that are often lipid biomarkers. Molecules like sterols and hopanoids, membrane lipids found in eukaryotes and bacteria, respectively, can be preserved in the rock record on billion-year timescales. Following the death of the organism they came from and sedimentation, they undergo a process called diagenesis whereby many of the specific functional groups from the lipids are lost, but the hydrocarbon skeleton remains intact. These fossilized lipids are called steranes and hopanes, respectively. [ 62 ] There are also other types of molecular fossils, like porphyrins , the discovery of which in petroleum by Alfred E. Treibs actually led to the invention of the field. [ 8 ] Other aspects of geochemistry that are also pertinent to geobiology include isotope geochemistry, in which scientists search for isotope fractionation in the rock record, and the chemical analysis of biominerals , such as magnetite or microbially-precipitated gold.
Perhaps the oldest of the bunch, paleontology is the study of fossils. It involves the discovery, excavation, dating, and paleoecological understanding of any type of fossil, microbial or dinosaur, trace or body fossil. Micropaleontology is particularly relevant to geobiology. Putative bacterial microfossils and ancient stromatolites are used as evidence for the rise of metabolisms such as oxygenic photosynthesis. [ 63 ] The search for molecular fossils, such as lipid biomarkers like steranes and hopanes, has also played an important role in geobiology and organic geochemistry. [ 8 ] Relevant sub-disciplines include paleoecology and paleobiogeography.
Biogeography is the study of the geographic distribution of life through time. It can look at the present distribution of organisms across continents or between microniches, or the distribution of organisms through time, or in the past, which is called paleobiogeography.
Evolutionary biology is the study of the evolutionary processes that have shaped the diversity of life on Earth. It incorporates genetics , ecology, biogeography, and paleontology to analyze topics including natural selection , variance, adaptation , divergence, genetic drift , and speciation .
Ecohydrology is an interdisciplinary field studying the interactions between water and ecosystems. Stable isotopes of water are sometimes used as tracers of water sources and flow paths between the physical environment and the biosphere. [ 64 ] [ 65 ] | https://en.wikipedia.org/wiki/Geobiology |
Geobiology is a peer-reviewed scientific journal of geobiology published by Wiley-Blackwell . It was established in 2003 as both a print and online journal, with five issues per year. In 2011, the journal became online-only, and increased publication to six times per year. The editor-in-chief is Kurt Konhauser ( University of Alberta ).
The journal is indexed and abstracted in:
According to the Journal Citation Reports , the journal has a 2011 impact factor of 4.111, ranking it 6th out of 170 journals in the category "Geosciences, Multidisciplinary", [ 2 ] 11th out of 84 journals in the category "Biology", [ 3 ] and 19th out of 205 journals in the category "Environmental Sciences". [ 4 ]
| https://en.wikipedia.org/wiki/Geobiology_(journal)
Geocaching (/ˈdʒiːoʊˌkæʃɪŋ/, JEE-oh-KASH-ing) is an outdoor recreational activity, in which participants use a Global Positioning System (GPS) receiver or mobile device and other navigational techniques to hide and seek containers, called geocaches or caches, at specific locations marked by coordinates all over the world. [ 2 ] The first geocache was placed in 2000, and by 2023 there were over 3 million active caches worldwide. [ 3 ]
Geocaching can be considered a real-world, outdoor treasure hunting game . A typical cache is a small waterproof container containing a logbook and sometimes a pen or pencil. [ 4 ] The geocacher signs the log with their established code name/username and dates it, in order to prove that they found the cache. After signing the log, the cache must be placed back exactly where the person found it. Larger containers such as plastic storage containers ( Tupperware or similar) or ammo boxes can also contain items for trading, such as toys or trinkets, usually of more sentimental worth than financial. [ 5 ] Geocaching shares many aspects with benchmarking , trigpointing , orienteering , treasure hunting , letterboxing , trail blazing , and another type of location-based game called Munzee .
Geocaching is similar to the game letterboxing (originating in 1854), which uses clues and references to landmarks embedded in stories. [ 6 ] [ 7 ] Geocaching was conceived shortly after the removal of Selective Availability from the Global Positioning System on May 2, 2000 ( Blue Switch Day [ 8 ] ), because the improved accuracy [ 9 ] of the system allowed for a small container to be specifically placed and located. [ 7 ] [ 10 ]
The first documented placement of a GPS-located cache took place on May 3, 2000, by Dave Ulmer in Beavercreek, Oregon. [ 11 ] The location was posted on the Usenet newsgroup sci.geo.satellite-nav [ 12 ] at 45°17.460′N 122°24.800′W. Within three days, the cache had been found twice, first by Mike Teague. [ 13 ] According to Dave Ulmer's message, this cache was a black plastic bucket that was partially buried and contained various items, such as software, videos, books, money, a can of beans, and a slingshot. [ 12 ] The geocache and most of its contents were eventually destroyed by a lawn mower; the can of beans was the only item salvaged and was later turned into a trackable item known as the "Original Can of Beans". [ 14 ] [ 15 ] Another geocache and plaque, called the Original Stash Tribute Plaque, now sits at the site. [ 14 ]
Geocaching company Groundspeak allows extraterrestrial caches, e.g. the Moon or Mars , although presently, the website provides only earthbound coordinates. The first published extraterrestrial geocache was GC1BE91, which was placed on the International Space Station by Richard Garriott in 2008. [ 16 ] It used the Baikonur launch area in Kazakhstan as its position. [ 17 ] The original cache contained a Travel Bug (the first geocaching trackable item in space), which stayed on the station until it was brought back to earth in 2013. Due to fire restrictions on board the station, the geocache contained no official paper logbook. As of June 2024, only one confirmed geocacher (on November 17, 2013) has actually found the geocache, [ 18 ] although others have claimed to have found it providing varying amounts of evidence. To commemorate the occasion, Groundspeak allowed specialized geocaching events to be published across the world, allowing attendees to obtain a virtual souvenir on their profile.
The second geocaching trackable in space is TB5EFXK [ 19 ] which is attached to the SHERLOC calibration target on board the Mars Perseverance Rover , which landed on Mars on 18 February 2021. [ 20 ] Geocachers were given the opportunity to virtually discover the trackable after the WATSON camera sent back its first photographs of the calibration target that contained the tracking code number. The code is printed on a prototype helmet visor material that will be used to test how well it can withstand the Martian environment. This will help scientists in creating a viable Martian spacesuit for future crewed missions to Mars . [ citation needed ]
The activity was originally referred to as the GPS stash hunt or gpsstashing. This was changed shortly after the original hide when it was suggested in the gpsstash eGroup that "stash" could have negative connotations and the term geocaching was adopted. [ 21 ]
Over time, a variety of different hide-and-seek-type activities have been created or abandoned, so that "Geocaching" may now refer to hiding and seeking containers, or locations or information without containers. [ 22 ]
An independent accounting of the early history documents several controversial actions taken by Jeremy Irish and Grounded, Inc., a predecessor to Groundspeak, to increase "commercialization and monopolistic control over the hobby". [ 23 ] More recently, other similar hobbies such as Munzee have attracted some geocachers by rapidly adopting smart-phone technology, which has caused "some resistance from geocaching organizers about placing caches along with Munzees". [ 24 ]
For the traditional geocache, a geocacher will place a waterproof container containing a log book, often also a pen and/or pencil and trade items or trackables , then record the cache's coordinates . These coordinates, along with other details of the location, are posted on a listing site (see list of some sites below). Other geocachers obtain the coordinates from that listing site and seek out the cache using their handheld GPS receivers. [ 7 ] The finding geocachers record their exploits in the logbook and online, but then must return the cache to the same coordinates so that other geocachers may find it. Geocachers are free to take objects (except the logbook, pencil, or stamp) from the cache in exchange for leaving something of similar or higher value. [ 25 ]
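A minimal sketch of the navigation step a GPS receiver or app performs: the remaining distance to a cache is the great-circle (haversine) distance between the posted coordinates and the seeker's current fix. The cache coordinates below are those of the Original Stash from the history section; the seeker's position is hypothetical.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two lat/lon points
    (spherical approximation with the mean Earth radius)."""
    R = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * R * math.asin(math.sqrt(a))

# Original Stash coordinates (N 45 17.460, W 122 24.800), converted from
# degrees + decimal minutes, versus a hypothetical seeker a bit to the east.
cache = (45 + 17.460 / 60, -(122 + 24.800 / 60))
here = (45.2910, -122.4120)
print(round(haversine_m(*cache, *here)))  # distance to go, in metres (~105)
```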
Typical cache "treasures", also known in the geocaching world as SWAG (a backronym of "stuff we all get"), [ 26 ] [ 27 ] are not high in monetary value but may hold personal value to the finder. [ 25 ] Aside from the logbook, common cache contents are unusual coins or currency , small toys, ornamental buttons, CDs, or books. Although not required, many geocachers decide to leave behind signature items, such as personal geocoins , pins, or craft items, to mark their presence at the cache location. [ 26 ] Disposable cameras are popular as they allow for anyone who found the cache to take a picture which can be developed and uploaded to a geocaching web site listed below. [ 28 ] Also common are objects that are moved from cache to cache called "hitchhikers", such as Travel Bugs or geocoins, whose travels may be logged and followed online. [ 29 ] Cachers who initially place a Travel Bug or Geocoin(s) often assign specific goals for their trackable items. Examples of goals are to be placed in a certain cache a long distance from home, or to travel to a certain country, or to travel faster and farther than other hitchhikers in a race. Less common trends are site-specific information pages about the historic significance of the site, types of trees, birds in the area or other such information. Higher-value items are occasionally included in geocaches as a reward for the First to Find (called "FTF"), or in locations which are harder to reach.
Dangerous or illegal items, including weapons and drugs, are not allowed and are specifically against the rules of most geocache listing sites. Food is also disallowed, even if sealed, as it is considered unhygienic and can attract animals.
If a geocache has been vandalized or stolen by a person who is not familiar with geocaching, it is said to have been "muggled". [ 30 ] [ 31 ] The term plays off the fact that those not familiar with geocaching are called " muggles ", a word borrowed from the Harry Potter series of books which were rising in popularity at the same time geocaching started. [ 26 ]
Geocaches vary in size, difficulty, and location. Simple caches that are placed near a roadside are often called "drive-bys", "park 'n grabs" (PNGs), or "cache and dash". Geocaches may also be complex, involving lengthy searches, significant travel, or use of specialist equipment such as SCUBA diving , kayaking , or abseiling . Different geocaching websites list different variations per their own policies.
Container sizes range from nano , particularly magnetic nanos , which can be smaller than the tip of a finger and have only enough room to store the log sheet, to 20-liter (5 gallon) buckets or even larger containers, such as entire trucks. [ 32 ] The most common cache containers in rural areas are lunch-box-sized plastic storage containers or surplus military ammunition cans. Ammo cans are considered the gold standard of containers because they are very sturdy, waterproof, animal- and fire-resistant, and relatively cheap, and have plenty of room for trade items. Smaller containers are more common in urban areas because they can be more easily hidden.
Over time many variations of geocaches have developed. Different platforms often have their own rules on which types are allowed or how they are classified. The following cache types are supported by geocaching.com.
The simplest form of a geocache. It consists of a container with a log sheet, and is located at the posted coordinates. Cache containers come in many different sizes. [ 33 ]
These caches are intended to be found at night, usually by use of a UV torch. [ 34 ]
These caches include at least one stage in addition to the physical final container with a log sheet. The posted coordinates for a multi-cache are the first stage. At each stage, the geocacher gathers information that leads them to the next stage or to the final container. [ 33 ] [ 35 ] Multi-caches can consist of physical stages (i.e. the first stage contains coordinates for the next stage and so forth) or virtual stages (i.e. the first stage is a historical marker where geocachers have to answer questions to calculate the coordinates to the final physical container).
Also called a 'puzzle cache': players might need to solve a puzzle or bring a special tool to reveal the next waypoint or final coordinates. Most often, the final container is not at the posted coordinates, which is noted in the cache description. [ 33 ] Some puzzles can be easy and involve basic math operations, while others can be quite difficult, with some of the more challenging ones requiring a firm understanding of computer programming. Geocaching Toolbox, a website dedicated to creating and solving puzzle geocaches, provides a comprehensive list of common puzzle cache ciphers.
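As one concrete example of the simpler ciphers, hints on geocache listings are traditionally encrypted with ROT13, which rotates each letter 13 places in the alphabet. A minimal decoder, with a made-up hint:

```python
def rot13(text):
    """Decode (or encode -- ROT13 is its own inverse) a hint."""
    out = []
    for ch in text:
        if ch.isascii() and ch.isalpha():
            base = ord("A") if ch.isupper() else ord("a")
            out.append(chr((ord(ch) - base + 13) % 26 + base))
        else:
            out.append(ch)  # digits and punctuation pass through
    return "".join(out)

print(rot13("Haqre gur ybt ng gur onfr bs gur bnx"))
# -> "Under the log at the base of the oak"
```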
There are also some subcategories of the mystery cache, normally listed as a mystery type, which are described below.
This requires a geocacher to complete a reasonably attainable geocaching-related task before being able to log the cache as a find online. [ 36 ] It does not restrict geocachers from finding the cache and signing the logbook at any time. However, a geocacher is not allowed to log a find on the geocaching website unless they qualify for the challenge specified in the cache description. Examples include finding a number of caches that meet a category, completing a number of cache finds within a period of time, or finding a cache for every calendar day.
Since 2017, Groundspeak has required new challenges to have a geochecker, in which users can put their name into an algorithm to see if they qualify without the need to physically check all of one's previous finds. These geocheckers can be requested using the ProjectGC forums, where volunteers write and create scripts for specific challenges. [ 37 ] Groundspeak has also become stricter about which types of challenges are published. For example, prior to 2017 it was possible to create a challenge cache to find 10 caches that have a food item in the title. Under current guidelines, this is no longer allowed because it restricts geocachers to finding specific geocaches. Instead, Groundspeak has encouraged new challenges to be more creative. Acceptable challenges include finding caches in 10 states, finding 100 traditional geocaches, or finding 1000 geocaches with the "wheelchair accessible" attribute. [ 36 ]
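A toy sketch of the qualifying logic behind such a geochecker, for the "caches in 10 states" example above. The find list is hypothetical, and this is not the actual ProjectGC scripting interface, only an illustration of the check itself.

```python
# Hypothetical find history: one record per logged find.
finds = [
    {"code": "GC1234", "state": "Oregon"},
    {"code": "GC2345", "state": "Washington"},
    {"code": "GC3456", "state": "Oregon"},
    # ... more finds ...
]

def qualifies(finds, needed_states=10):
    """Has the player found a cache in at least `needed_states` states?"""
    states = {f["state"] for f in finds}
    return len(states) >= needed_states, sorted(states)

ok, states = qualifies(finds)
print("qualifies!" if ok else f"{len(states)}/10 states so far: {states}")
```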
A bonus cache requires the finder to have found a number of caches, usually in a specific series by the same hider, before finding the bonus cache. The cache can be of any type; however, a bonus cache cannot itself be a prerequisite for a second bonus cache. [ 38 ]
Moving or traveling caches are found at a listed set of coordinates; the finder then hides the cache in a different location and updates the listing, essentially becoming the hider, and the next finder continues the cycle. This cache type has been discontinued at geocaching.com, and the grandfathered listings are steadily declining in number as they are archived. [ 35 ] [ 39 ]
Also known as a wireless beacon cache , the Chirp is a Garmin -created innovation on multi-caches using wireless beacon technology. It is a physical game piece, about the size of a half dollar, that can be hidden anywhere. Powered by a small battery, it transmits a signal detectable on Garmin devices. The Chirp stores hints and multicache coordinates, counts visitors, and can confirm the cache is nearby. [ 40 ] [ 41 ] These caches caused considerable discussion and some controversy at Groundspeak, where they were ultimately given a new "attribute". [ 33 ] [ 42 ] These geocaches can also be listed as a traditional, multi-cache, or letterbox; it is up to the cache owner to designate the cache type for wireless beacon caches.
This is an official geocache located inside the Groundspeak headquarters office in Seattle, Washington . It is technically classified as a separate cache type under mystery caches, with its own unique icon both in the geocaching app and on one's profile statistics tab. Since its publication in 2004, it has accumulated over 20,000 finds as of June 2024. [ 43 ]
A multi-stage cache hunt that uses a Wherigo "cartridge" to guide players to find a physical cache sometime during cartridge play, usually at the end. However, not all Wherigo cartridges incorporate geocaches into gameplay. Wherigo caches are unique to the geocaching.com website. [ 33 ] Wherigo is a GPS location-aware software platform initially released in January 2008. Authors can develop self-contained story files (called "cartridges") that are read by the Wherigo player software, installed on either a GPS unit or a smartphone. The player and story take advantage of the location information provided by the GPS to trigger in-game events, such as using a virtual object or interacting with characters. Completing an adventure can require reaching different locations and solving puzzles. Cartridges are coded in Lua ; Lua may be written directly, but a builder application is usually used. The Wherigo site [ 44 ] offers a builder application and a database of adventures free for download, though the builder has remained in its alpha version since its last release in May 2008. [ 45 ] The official player is only available for Pocket PC ; a built-in player is available on Garmin Colorado and Oregon GPS models. The Wherigo Foundation [ 46 ] was organized in December 2012; the group is composed of the Wherigo application developers who, up until that time, had been working separately. Their goal is to provide a consistent Wherigo experience across platforms, connect Wherigo applications via an API , and add modern features to the Wherigo platform. While Groundspeak is aware of this project, the company has yet to take a position.
A Reverse Wherigo (RWIG) provides three lines of code, nine digits each, that a player types into the RWIG cartridge. Instead of following a story or interacting with characters, an RWIG gives the player the distance to the final cache, but not the direction, requiring geocachers to close in on the final geocache by a process of elimination. Once the player is within 25 metres, the final coordinates are revealed to provide a more accurate location for the geocache. [ 47 ]
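As a rough sketch of the elimination process such a distance-only game implies (all coordinates, readings, grid spacing, and tolerances below are invented for illustration), one can keep a grid of candidate locations and discard those inconsistent with each reading:

```python
import math

def metres_between(p, q):
    """Approximate ground distance between two (lat, lon) points, in metres."""
    lat = math.radians((p[0] + q[0]) / 2)
    dy = (q[0] - p[0]) * 111_320                  # metres per degree of latitude
    dx = (q[1] - p[1]) * 111_320 * math.cos(lat)  # longitude shrinks with latitude
    return math.hypot(dx, dy)

def eliminate(candidates, observer, reported_distance, tolerance=25.0):
    """Keep only candidates consistent with one distance-only reading."""
    return [c for c in candidates
            if abs(metres_between(c, observer) - reported_distance) <= tolerance]

# Candidate grid covering a hypothetical ~2 km search area.
grid = [(50.08 + i * 0.0005, 14.42 + j * 0.0005)
        for i in range(40) for j in range(40)]

# Two hypothetical readings taken from different standpoints.
grid = eliminate(grid, observer=(50.085, 14.425), reported_distance=600)
grid = eliminate(grid, observer=(50.090, 14.430), reported_distance=450)
print(f"{len(grid)} candidate locations remain")
```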
This is a combination of a geocache and a letterbox in the same container. Letterboxes contain a rubber stamp and a logbook, which are meant to be used in place rather than traded or taken like tradable items, though letterbox hybrids may or may not include trade items. Letterboxers carry their own stamp with them, to stamp the letterbox's logbook, and in turn stamp their personal logbook with the letterbox's stamp. The letterbox hybrid cache contains the materials needed for this. [ 33 ] [ 35 ] Typically, letterbox hybrid caches are not found at the given coordinates, which act only as a starting location; instead, a series of clues is given as to where to find the cache, such as "take a left past the bridge" or "about 25 paces past the big oak tree".
Also known as Ape caches , these are a special type of traditional cache that were hidden in conjunction with 20th Century Fox and Groundspeak to promote the 2001 remake of Planet of the Apes . There were 14 APE geocaches placed around the world, and each one contained a prop from the film. As of 2023, only 2 APE caches are still active: one near Seattle, Washington ('Tunnel of Light', GC1169) and the other in Brazil ('Southern Bowl', GCC67). Of those two, the Brazil APE cache is the only surviving original APE cache, because GC1169 was muggled in 2016. However, the original container was later found in a Groundspeak-led survey in April of that year. What remains of "Tunnel of Light" is an "official" replacement of the original ammo can that was placed in 2001. [ 48 ]
This cache type does not contain a physical logbook. Virtual caches are normally hidden at a rather interesting or unique location, usually with a described object such as an art sculpture or a scenic lookout. Validation for finding a virtual cache generally requires one to email the cache hider with information such as a date or a name on a plaque, or to post a picture of oneself at the site with a GPS receiver in hand. [ 33 ] As of 2005, new virtual caches are no longer allowed by Groundspeak, as the virtual cache is considered a legacy cache type. [ 49 ]
On August 24, 2017, Groundspeak announced "Virtual Rewards", allowing 4000 new virtual caches to be placed during the following year. [ 50 ] Each year, eligible geocachers can opt in to a drawing, and some are selected for the opportunity to submit a virtual cache for publication. From 2005 to 2017, the geocaching website did not list new caches without a physical container, including virtual and webcam caches (with the exception of earthcaches and events); however, older caches of these types have been grandfathered .
Similar to virtual geocaches, an EarthCache is published not by a local reviewer but by a volunteer regional reviewer associated with the Geological Society of America . The geocacher usually has to perform a task which teaches a lesson about the geology of the cache area. [ 51 ] [ 33 ] Visitors must answer geological questions to complete the cache; these can be as simple as describing the color and thickness of layers in an outcrop, or as complicated as taking measurements of stream velocities or fault offsets. EarthCaches cover geologic topics such as rock formation , mineralogy , earthquakes , fluvial processes , erosion , volcanology , and planetary science , among others.
Otherwise known as a Reverse cache, a locationless cache is similar to a scavenger hunt . A description is given for something to find, such as a one-room schoolhouse, and the finder locates an example of this object. The finder records the location using their GPS receiver and often takes a picture at the location showing the named object with their GPS receiver. Typically, others are not allowed to log that same location as a find. [ 33 ]
Since 2005, all locationless caches have been archived and locked, meaning they can no longer be logged. However, with geocaching's 20th anniversary in 2020, Groundspeak decided to publish a special locationless cache for geocachers to "find" at various Mega- and Giga-Events around the world. The first locationless cache in 15 years (GC8FR0G) required finders to take a picture of themselves with the geocaching mascot, Signal the Frog, at Mega- and Giga-Events during 2020. The cache was made available to log starting 1 January 2020. However, because of the COVID-19 pandemic , nearly all planned Mega- and Giga-Events were cancelled for the year, including the planned 20th anniversary celebration event in Seattle, Washington . Groundspeak therefore extended the deadline to log this geocache through 1 January 2023. With 22,500 finds, it is the second most logged geocache in history. [ 52 ]
The second published locationless cache since 2005 (GC8NEAT) required visitors to take a photo of themselves picking up trash and cleaning up their local area. [ 52 ] Geocachers were able to log this cache from 6 February 2021 through 31 December 2022. It has been logged over 33,500 times and holds the title of the most "found" geocache. On 17 August 2022, Geocaching.com made available the third locationless cache to be logged since 2005 (GC9FAVE). Instead of finding Signal or picking up trash, this cache encouraged geocachers from around the world to share their favorite geocaching story. This geocache was archived and locked on 1 January 2024. [ 53 ] [ 54 ] In 2025, Geocaching.com announced the fourth locationless cache since 2005 (GCA2025); in honor of the 25th anniversary of geocaching, geocachers were encouraged to take a photo next to a pre-existing number 25. [ 55 ]
A type of virtual cache whose coordinates provide the location of a public webcam . The finder is required to capture an image of themselves through the webcam for verification of the find. [ 33 ] New webcam caches are no longer allowed by Groundspeak, as the webcam cache is a legacy type. [ 49 ] Webcam caches are a category at Waymarking.com.
An Adventure Lab is a type of virtual cache that typically consists of a set of 5 waypoints, with each waypoint counting as a "cache find". The waypoints usually have an overall theme, such as showcasing the history of a small town, and are often created as a walking tour of a city or park. Examples include Route 66 and the Lincoln Highway , nationwide series of Adventure Lab sets of 10 that stretch the entire route across the United States . [ 56 ]
Adventure Labs were first introduced in 2014 as a way for Groundspeak to test market ideas. Initially, geocachers would find a key word at a designated site and enter it on a website to claim "credit". Soon after, Adventure Labs were made available to "find" at select Mega-Events. In 2020, Groundspeak released the "Adventure Lab" app, separate from the geocaching app. The app made it possible to enter a geofence; once inside, a question appears that can be answered either as a written answer or as a multiple-choice answer. The question can be answered at any time once activated; however, some Adventure Labs must be completed sequentially, meaning that one must answer each question to move on to the next waypoint. [ citation needed ]
Many Adventure Lab caches have an associated physical bonus cache that is listed as a "mystery cache". Coordinates to the bonus cache, if applicable, can be seen in the journal entries once a user has correctly answered the question at a waypoint.
Geocachers can create their own Adventure Lab, but must first opt in to receive an "Adventure Lab credit", which allows for the creation of one set of 5 waypoints, each counting toward a cache find. If selected, Adventure Labs can be created using the Adventure Lab builder. [ 57 ] Unlike all other geocaches, Adventure Labs are not subject to review and are published at will by the creator. However, Adventure Labs can be archived by Groundspeak at any time if they violate the terms of use; for example, placing an Adventure Lab somewhere that requires people to pay a fee to visit, such as an airport or theme park, may get the Adventure Lab permanently removed from the Adventure Lab app. [ 58 ]
There are several kinds of event geocaches. While encouraged, events do not require visitors to sign their name in a logbook to prove they attended. Attendees of event caches can log that they 'attended', which increments their number of found caches. Event caches come in several types, including standard events, the larger Mega- and Giga-Events, and Cache In Trash Out events.
GPX files containing information such as a cache description and information about recent visitors to the cache are available from various listing sites. Geocachers may download geocache data (also known as waypoints ) from various websites in various formats, most commonly the GPX file type, which uses XML . [ 59 ] Some websites allow geocachers to search (build queries) for multiple caches within a geographic area based on criteria such as ZIP code or coordinates , and to download the results as an email attachment on a schedule. More recently, Android and iPhone users have been able to download apps such as GeoBeagle [ 60 ] that allow them to use their 3G- and GPS-enabled devices to actively search for and download new caches. [ 61 ] [ 62 ]
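For a sense of what such a file carries, here is a minimal, hand-written GPX waypoint parsed with Python's standard library. The cache code, name, and coordinates are invented for illustration, and real exports include far more metadata (hints, logs, difficulty ratings, and so on):

```python
import xml.etree.ElementTree as ET

# A minimal GPX 1.0 fragment of the kind listing sites export.
GPX = """<gpx version="1.0" xmlns="http://www.topografix.com/GPX/1/0">
  <wpt lat="47.6062" lon="-122.3321">
    <name>GCXXXXX</name>
    <desc>Example Cache, Traditional Cache (D2/T1.5)</desc>
  </wpt>
</gpx>"""

ns = {"g": "http://www.topografix.com/GPX/1/0"}
for wpt in ET.fromstring(GPX).findall("g:wpt", ns):
    # Each waypoint carries its coordinates as attributes and its
    # cache code and description as child elements.
    print(wpt.findtext("g:name", namespaces=ns), wpt.get("lat"), wpt.get("lon"))
```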
A variety of geocaching applications are available for geocache data management, file-type translation, and personalization. Geocaching software can assign special icons or search (filter) for caches based on certain criteria (e.g. distance from an assigned point, difficulty, date last found).
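A distance filter of the kind such software offers can be sketched in a few lines with the haversine formula; the cache records below are invented placeholders:

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometres."""
    r = 6371.0  # mean Earth radius in km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# Hypothetical cache records: (code, lat, lon, difficulty).
caches = [("GCAAAAA", 47.61, -122.33, 1.5),
          ("GCBBBBB", 47.70, -122.10, 4.0),
          ("GCCCCCC", 47.62, -122.35, 2.0)]

# Keep caches within 5 km of home with difficulty 2.0 or lower.
home = (47.6062, -122.3321)
nearby_easy = [c for c in caches
               if haversine_km(home[0], home[1], c[1], c[2]) <= 5.0
               and c[3] <= 2.0]
print(nearby_easy)
```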
Paperless geocaching means hunting a geocache without a physical printout of the cache description. Traditionally, this means that the seeker has an electronic means of viewing the cache information in the field, such as pre-downloading the information to a PDA or other electronic device. Various applications can directly upload and read GPX files without further conversion. Newer GPS devices released by Garmin , DeLorme , and Magellan have the ability to read GPX files directly, thus eliminating the need for a PDA . [ 63 ] Other methods include viewing real-time information on a portable computer with internet access or with an Internet-enabled smart phone. The latest advancement of this practice involves installing dedicated applications on a smart phone with a built-in GPS receiver. Seekers can search for and download caches in their immediate vicinity directly to the application and use the on-board GPS receiver to find the cache.
A more controversial version of paperless caching involves mass-downloading only the coordinates and cache names (or waypoint IDs) for hundreds of caches into older receivers. This has been a common practice of some cachers for years. In many cases, however, the seeker never reads the cache description and hint before hunting the cache, leaving them unaware of potential restrictions such as limited hunt times, park open/close times, off-limits areas, and suggested parking locations.
The website geocaching.com [ 64 ] now sells mobile applications which allow users to view caches through a variety of different devices. Currently, the Android , iOS , and Windows Phone mobile platforms have applications in their respective stores. The apps also allow for a trial version with limited functionality. The site promotes mobile applications, and lists over two dozen applications (both mobile and browser/desktop based) that are using their proprietary but royalty-free public application programming interface ( API ). [ 65 ] Developers at c:geo have criticised Groundspeak for being incompatible with open-source development. [ 66 ]
Additionally, "c:geo - opensource" [ 67 ] is a free opensource full function application for Android phones that is very popular. [ 68 ] [ 69 ] [ 70 ] [ 71 ] This app includes similar features to the official Geocaching mobile application, such as: View caches on a live map ( Google Maps or OpenStreetMap ), navigation using a compass, map, or other applications, logging finds online and offline, etc. [ 72 ]
Geocaching enthusiasts have also made their own hand-held GPS devices using a Lego Mindstorms NXT GPS sensor. [ 73 ] [ 74 ]
Geocache listing websites have their own guidelines for acceptable geocache publications. Government agencies and others responsible for public use of land often publish guidelines for geocaching, and a "Geocacher's Creed" posted on the Internet asks participants to "avoid causing disruptions or public alarm". [ 75 ] [ 76 ] Generally accepted rules are to not endanger others, to minimize the impact on nature, to respect private property , and to avoid public alarm.
The reception from authorities and the general public outside geocache participants has been mixed.
Cachers have been approached by police and questioned when they were seen as acting suspiciously. [ 77 ] [ 78 ] [ 79 ] Other times, investigation of a cache location after suspicious activity was reported has resulted in police and bomb squad discovery of the geocache, [ 80 ] such as the evacuation of a busy street in Wetherby , Yorkshire , England in 2011, [ 81 ] and a street in Alvaston , Derby in 2020. [ 82 ]
Schools have been evacuated when a cache has been seen by teachers or police, such as the case of Fairview High School in Boulder, Colorado in 2009. [ 83 ] A number of caches have been destroyed by bomb squads. [ 81 ] [ 84 ] [ 85 ] [ 86 ] [ 87 ] Diverse locations, from rural cemeteries to Disneyland , have been locked down as a result of such scares. [ 88 ] [ 89 ]
The placement of geocaches occasionally draws criticism from government personnel and the public at large, who consider it littering . [ 90 ] [ 91 ] Some geocachers act to mitigate this perception by picking up litter while they search for geocaches, a practice referred to in the community as "Cache In Trash Out". [ 92 ] [ 90 ] Events and caches are often organized around this practice, with many areas seeing significant cleanup that would otherwise not take place, or would instead require federal, state, or local funds to accomplish. Geocachers are also encouraged to clean up after themselves by retrieving old containers once a cache has been removed from play.
Geocaching is legal in most countries and is usually positively received when explained to law enforcement officials. [ 93 ] [ 79 ] However, certain types of placements can be problematic. Hiders sometimes place caches on private property without adequate permission (intentionally or otherwise), although this is generally disallowed, which encourages cache finders to trespass. Historic buildings and structures have also been damaged by geocachers who wrongly believed a geocache had been or could be placed within, or on the roof of, the buildings. [ 94 ] Caches might also be hidden in places where the act of searching can make a finder look suspicious (e.g., near schools, children's playgrounds, banks, courthouses, or in residential neighborhoods), or where the container placement could be mistaken for a drug stash or a bomb (especially in urban settings, under bridges, [ 95 ] near banks, courthouses, or embassies). As a result, geocachers are strongly advised to label their geocaches where possible, so that they are not mistaken for a harmful object if discovered by non-geocachers. [ 85 ] [ 96 ]
As well as concerns about littering and bomb threats, some geocachers have hidden their caches in inappropriate locations, such as electrical boxes, which may encourage risky behavior, especially by children. Hides in these areas are discouraged, [ 83 ] and cache listing websites enforce guidelines that disallow certain types of placements. However, as cache reviewers typically cannot see exactly where and how every cache is hidden, problematic hides can slip through. Ultimately it is also up to cache finders to use discretion when attempting to search for a cache, and report any problems.
Regional rules for placement of caches have become complex. For example, in Virginia, the Virginia Department of Transportation and the Wildlife Management Agency now forbid the placement of geocaches on all land controlled by those agencies. Some cities, towns, and recreation areas allow geocaches with few or no restrictions, but others require compliance with lengthy permitting procedures. [ 97 ]
The South Carolina House of Representatives passed Bill 3777 [ 98 ] in 2005, stating, "It is unlawful for a person to engage in the activity of Geocaching or letterboxing in a cemetery or in a historic or archaeological site or property publicly identified by a historical marker without the express written consent of the owner or entity which oversees that cemetery site or property." The bill was referred to committee on first reading in the Senate and has been there ever since. [ 99 ]
The Illinois Department of Natural Resources requires geocachers who wish to place a geocache at any Illinois state park to submit the location on a USGS 7.5 minute topographical map, the name and contact information of the person(s) wishing to place the geocache, a list of the original items to be included in the geocache, and a picture of the container that is to be placed. [ 100 ]
In April 2020, during the COVID-19 pandemic , the township of Highlands East , Ontario , Canada temporarily banned geocaching, over concerns that geocache containers could not be properly disinfected between finds. [ 101 ]
Several deaths have occurred during the course of geocaching. [ 102 ] [ 103 ] [ 104 ] [ 105 ]
The death of a 21-year-old experienced cacher in December 2011 "while attempting a Groundspeak Cache that does not look all that dangerous" led to discussions of whether changes should be made, and whether cache owners or Groundspeak could be held liable. Groundspeak has since updated their geocaching.com terms of use agreement to specify that geocachers find geocaches at their own risk. [ 106 ]
In 2008, two lost hikers on Mount Hood in Oregon , U.S. stumbled across a geocache and phoned this information out to rescuers, allowing crews to locate and rescue them. [ 107 ]
Three adult geocachers, a 24-year-old woman and her parents, were trapped in a cave and rescued by firefighters in Rochester, New York , U.S. while searching for a geocache in 2012. Rochester Fire Department spokesman Lt. Ted Kuppinger said, "It's difficult, because you're invested in it, you want to find something like that, so people will probably try to push themselves more than they should, but you need to be prudent about what you're capable of doing." [ 108 ]
In 2015, members of the public called the British coastguard to check on a group of geocachers who were spotted walking into the Severn Estuary off the coast of Clevedon , England, in search of clues to locate a multi-cache. Although they felt they were safe and able to return to land, they were considered to be in danger and were airlifted back to the shore. [ 109 ]
In October 2016, four people discovered a crashed car at the bottom of a ravine in Benton County, Washington , U.S., while out geocaching. They spotted the driver still trapped inside and alerted emergency services, who rescued the driver. [ 110 ]
On 9 June 2018, four people in Prague , Czech Republic were searching for a cache in a 4 km long tunnel when a storm surge carried them through the tunnel to its terminus at the Vltava river. Two of the geocachers died, [ 111 ] while two others were rescued from the river. [ 112 ] [ 113 ]
Numerous websites list geocaches around the world. Geocaching websites vary in many ways, including subscription options, activity levels, and the volunteers available to check that registered caches remain in play for others.
The first website to list geocaches was announced by Mike Teague on May 8, 2000. [ 114 ] On September 2, 2000, Jeremy Irish emailed the gpsstash mailing list that he had registered the domain name geocaching.com and had set up his own Web site. He copied the caches from Mike Teague's database into his own. On September 6, Mike Teague announced that Jeremy Irish was taking over cache listings. As of 2012, Teague had logged only 5 caches. [ 115 ]
The largest site is Geocaching.com, owned by Groundspeak Inc., which began operating in late 2000. With a worldwide membership and a freemium business model, the website claims millions of caches and members in over 190 countries and all seven continents, including Antarctica. [ 116 ] Hides and events are reviewed by volunteer regional cache reviewers before publication. Free membership allows users access to coordinates, descriptions, and logs for some caches; for a subscription fee, users get additional search tools, the ability to download large amounts of cache information onto their GPS at once, instant email notifications about new caches, and access to premium-member-only caches (although such caches can still be accessed on the website itself; the premium restriction applies only to the application). [ 117 ] Geocaching Headquarters are located in the Fremont neighborhood of Seattle , Washington, United States. [ 118 ]
The Opencaching Network provides independent, non-commercial listing sites based in the cacher's country or region. The network lists the most types of caches, including traditional, virtual, moving, multi, quiz, webcam, BIT, guest book, USB, event, and MP3. It is less restrictive than many sites and does not charge for use, the service being community-driven. Depending on the node, listings may or may not be reviewed by community volunteers before publication, and although cross-listing is permitted, it is discouraged. Some listings also appear on other sites, but many are unique to the Opencaching Network. Features include the ability to organize one's favourite caches, build custom searches, be instantly notified of new caches in one's area, seek and create caches of all types, export GPX queries, and generate "statpics". Each Opencaching node provides the same free API (called "OKAPI" [ 119 ] ) for developers who want to create third-party applications that use the Opencaching Network's content.
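A request against OKAPI might look like the following sketch. The node URL, method path, and parameter names are assumptions drawn from OKAPI's public conventions and should be checked against the node's own documentation; a consumer key issued by the node is required:

```python
import json
import urllib.parse
import urllib.request

# Sketch of querying an Opencaching node through OKAPI. The base URL,
# method path, and parameter names below are assumptions based on the
# public OKAPI docs; verify them against the node's documentation.
BASE = "https://www.opencaching.de/okapi"   # assumed node
params = urllib.parse.urlencode({
    "consumer_key": "YOUR_KEY_HERE",        # issued by the node
    "center": "50.45|18.50",                # "lat|lon" centre point
    "limit": 5,                             # number of results to return
})
url = f"{BASE}/services/caches/search/nearest?{params}"
with urllib.request.urlopen(url) as resp:
    print(json.load(resp))                  # e.g. a list of OC cache codes
```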
Countries with associated opencaching websites include the United States at www.opencaching.us; [ 120 ] Germany at www.opencaching.de; [ 121 ] [ 122 ] Sweden at www.opencaching.se; Poland at www.opencaching.pl; [ 123 ] Czech Republic at www.opencaching.cz; The Netherlands at www.opencaching.nl; Romania at www.opencaching.ro; the United Kingdom at www.opencache.uk. [ 124 ] [ 125 ]
The main difference between opencaching and traditional listing sites is that all services are open to the users at no cost. Generally, most geocaching services or websites offer some basic information for free, but users may have to pay for premium membership that allows access to more information or advanced searching capabilities. This is not the case with opencaching; every geocache is listed and accessible to everyone for free. [ 124 ]
Additionally, Opencaching sites allow users to rate and report on existing geocaches. This allows users to see what other cachers think of the cache and it encourages participants to place higher-quality caches. The rating system also greatly reduces the problem of abandoned or unsatisfactory caches still being listed after repeated negative comments or posts in the cache logs. [ 124 ]
OpenCaching.com (short: OX) was a site created and run by Garmin from 2010 to 2015, which had the stated aim of being as free and open as possible with no paid content. Caches were approved by a community process and coordinates were available without an account. The service closed on 14 August 2015. [ 126 ]
In many countries there are regional geocaching sites, but these mostly only compile lists of caches in the area from the three main sites. Many also accept unique listings of caches for their site; these listings tend to be less popular than those on the international sites, although occasionally a regional site may have more caches than the international sites. There are exceptions: in the territory of the former Soviet Union , the site Geocaching.su remains popular because it accepts listings in the Cyrillic script . Additional international sites include Geocaching.de, a German website, and Geocaching Australia, which accepts listings of cache types deprecated by geocaching.com, such as TrigPoint and Moveable caches, as well as traditional geocache types.
GPSgames.org was an online community dedicated to all kinds of games involving Global Positioning System receivers. [ 127 ] GPSgames.org allowed traditional geocaches along with virtual, locationless, and traveler geocaches.
The site's geodashing game generated a large number of randomly positioned "dashpoints", requiring players to reach as many as possible, competing as individuals or teams. [ 128 ] Shutterspot, GeoVexilla, MinuteWar, GeoPoker, and GeoGolf were among the other GPS games available. [ 129 ]
GPSgames.org was free of charge from 2001 onward, supported by donations. [ 130 ] The site was retired on 30 June 2021.
Navicache.com started as a regional listing service in 2001. [ 131 ] While many of the website's listings have been posted to other sites, it also offers unique listings. The website lists nearly any type of geocache and does not charge to access any of the caches listed in its database. All submissions are reviewed and approved. [ 132 ] In 2012 it was announced that Navicache was under transition to new owners, who said they "plan to develop a site that geocachers want, with rules that geocachers think are suitable. Geocaching.com and OX are both backed by large enterprises, and while that means they have more funding and people, we're a much smaller team – so our advantage is the ability to be dynamic and listen to the users." [ 131 ] However, as of 2021 the site is mostly dormant, and the most recent cache listing is from 2014. [ 133 ]
Terracaching.com aims to provide high-quality caches, whether through the difficulty of the hide or the quality of the location. Membership is managed through a sponsorship system, and each cache is under continual peer review by other members. Terracaching.com embraces virtual caches alongside traditional or multi-stage caches and includes many locationless caches among the thousands in its database. It increasingly attracts members who like its point system. In Europe, TerraCaching is supported by Terracaching.eu, which is translated into different European languages and offers an extended FAQ and extra supporting tools for TerraCaching. TerraCaching strongly discourages caches that are listed on other sites (so-called double-listing). [ 134 ]
Extremcaching is a German private database for alternative geocaches with a focus on T5 / climbing caches, night caches, and lost place caches. [ 135 ] [ 136 ]
Geocaching Australia is a community website for geocachers in Australia and New Zealand. Geocaching Australia also has many unique cache types such as Burke And Wills, Moveable_cache & Podcache geocaches. [ 137 ] | https://en.wikipedia.org/wiki/Geocaching |
Geocarpy is "an extremely rare means of plant reproduction", [ 1 ] in which plants produce diaspores within the soil . [ 2 ] This may occur with subterranean flowers (protogeocarpy), or from aerial flowers, parts of which penetrate the soil after flowering (hysterocarpy). It has evolved as an effective means of ensuring a suitable environment for the plant's offspring. [ 2 ]
Geocarpy is also linked with solifluction soils, where rapid thawing and freezing of surface soil causes almost continuous movement. [ 3 ] This phenomenon is prevalent in high altitude areas of East Africa . [ 3 ] In order to reproduce, geocarpic plants bend their stems so that the fruit can be embedded in the soil during the freezing process while the fruit is still attached to the plant itself. [ 3 ]
Geocarpy is most frequent in tropical or semi-desert areas, [ 2 ] and geocarpic species may be found in the families Araceae , Begoniaceae , Brassicaceae (Cruciferae), Callitrichaceae , Convolvulaceae , Cucurbitaceae , Fabaceae (Leguminosae), Loganiaceae , Moraceae and Rubiaceae . [ 2 ] [ 4 ] [ 5 ] The best-known example is the peanut , Arachis hypogaea .
| https://en.wikipedia.org/wiki/Geocarpy |
Geocentric Coordinate Time ( TCG - Temps-coordonnée géocentrique ) is a coordinate time standard intended to be used as the independent variable of time for all calculations pertaining to precession , nutation , the Moon , and artificial satellites of the Earth . It is equivalent to the proper time experienced by a clock at rest in a coordinate frame co-moving with the center of the Earth [ citation needed ] : that is, a clock that performs exactly the same movements as the Earth but is outside the Earth's gravity well . It is therefore not influenced by the gravitational time dilation caused by the Earth. The TCG is the time coordinate for the Geocentric Celestial Reference System (GCRS). [ 1 ]
TCG was defined in 1991 by the International Astronomical Union . [ 2 ] Unlike former astronomical time scales, TCG is defined in the context of the general theory of relativity . The relationships between TCG and other relativistic time scales are defined with fully general relativistic metrics .
Because the reference frame for TCG is not rotating with the surface of the Earth and not in the gravitational potential of the Earth, TCG ticks faster than clocks on the surface of the Earth by a factor of about 7.0 × 10 −10 (about 22 milliseconds per year). Consequently, the values of physical constants to be used with calculations using TCG differ from the traditional values of physical constants. (The traditional values were in a sense wrong, incorporating corrections for the difference in time scales.) Adapting the large body of existing software to change from TDB ( Barycentric Dynamical Time ) to TCG is a formidable task, and as of 2002 many calculations continue to use TDB in some form.
Time coordinates on the TCG scale are conventionally specified using traditional means of specifying days, carried over from non-uniform time standards based on the rotation of the Earth. Specifically, both Julian Dates and the Gregorian calendar are used. For continuity with its predecessor Ephemeris Time , TCG was set to match ET at around Julian Date 2443144.5 (1977-01-01T00Z). More precisely, it was defined that TCG instant 1977-01-01T00:00:32.184 exactly corresponds to TAI instant 1977-01-01T00:00:00.000 exactly. This is also the instant at which TAI introduced corrections for gravitational time dilation.
TCG is a Platonic time scale: a theoretical ideal, not dependent on a particular realisation. For practical purposes, TCG must be realised by actual clocks in the Earth system. Because of the linear relationship between Terrestrial Time (TT) and TCG, the same clocks that realise TT also serve for TCG. See the article on TT for details of the relationship and how TT is realised.
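Since TT and TCG are related linearly by definition (IAU Resolution B1.9 of 2000), the conversion is a one-line computation. The sketch below uses the defining rate constant L_G and the 1977 epoch mentioned above:

```python
# TT = TCG - L_G * (JD_TCG - T0) * 86400 s, with the defining constant
# L_G = 6.969290134e-10 and T0 = 2443144.5003725 (TCG Julian date of the
# 1977-01-01 epoch at which TT and TCG agreed).
L_G = 6.969290134e-10
T0 = 2443144.5003725

def tt_minus_tcg_seconds(jd_tcg: float) -> float:
    """Offset TT - TCG, in seconds, at a given TCG Julian date."""
    return -L_G * (jd_tcg - T0) * 86400.0

# Roughly 25 years after the 1977 epoch, TT lags TCG by about 0.55 s,
# consistent with the ~22 ms/year rate quoted above.
print(tt_minus_tcg_seconds(T0 + 25 * 365.25))  # ≈ -0.55
```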
Barycentric Coordinate Time (TCB) is the analog of TCG, used for calculations relating to the Solar System beyond Earth orbit. TCG is defined by a different reference frame from TCB, such that they are not linearly related. Over the long term, TCG ticks more slowly than TCB by about 1.6 × 10 −8 (about 0.5 seconds per year). In addition there are periodic variations, as Earth moves within the Solar System. When the Earth is at perihelion in January, TCG ticks even more slowly than it does on average, due to gravitational time dilation from being deeper in the Sun 's gravity well and also velocity time dilation from moving faster relative to the Sun. At aphelion in July the opposite holds, with TCG ticking faster than it does on average. | https://en.wikipedia.org/wiki/Geocentric_Coordinate_Time |
In astronomy , the geocentric model (also known as geocentrism , often exemplified specifically by the Ptolemaic system ) is a superseded description of the Universe with Earth at the center. Under most geocentric models, the Sun , Moon , stars , and planets all orbit Earth. The geocentric model was the predominant description of the cosmos in many European ancient civilizations, such as those of Aristotle in Classical Greece and Ptolemy in Roman Egypt, as well as during the Islamic Golden Age .
Two observations supported the idea that Earth was the center of the Universe. First, from anywhere on Earth, the Sun appears to revolve around Earth once per day . While the Moon and the planets have their own motions, they also appear to revolve around Earth about once per day. The stars appeared to be fixed on a celestial sphere rotating once each day about an axis through the geographic poles of Earth. [ 1 ] Second, Earth seems to be unmoving from the perspective of an earthbound observer; it feels solid, stable, and stationary.
Ancient Greek , ancient Roman , and medieval philosophers usually combined the geocentric model with a spherical Earth , in contrast to the older flat-Earth model implied in some mythology . However, the Greek astronomer and mathematician Aristarchus of Samos ( c. 310 – c. 230 BC ) developed a heliocentric model placing all of the then-known planets in their correct order around the Sun. The ancient Greeks believed that the motions of the planets were circular , a view that was not challenged in Western culture until the 17th century, when Johannes Kepler postulated that orbits were heliocentric and elliptical (Kepler's first law of planetary motion ). In 1687, Isaac Newton showed that elliptical orbits could be derived from his laws of gravitation.
The astronomical predictions of Ptolemy's geocentric model , developed in the 2nd century of the Christian era, served as the basis for preparing astrological and astronomical charts for over 1,500 years. The geocentric model held sway into the early modern age, but from the late 16th century onward, it was gradually superseded by the heliocentric model of Copernicus , Galileo , and Kepler . There was much resistance to the transition between these two theories, since for a long time the geocentric postulate produced more accurate results. Additionally some felt that a new, unknown theory could not subvert an accepted consensus for geocentrism.
In the 6th century BC, Anaximander proposed a cosmology in which Earth is shaped like a section of a pillar (a cylinder), held aloft at the center of everything. The Sun, Moon, and planets were holes in invisible wheels which surround Earth, and through those holes, humans could see concealed fire. At around the same time, Pythagoras thought that Earth was a sphere (in accordance with observations of eclipses), but not at the center; he believed that it was in motion around an unseen fire. Later these two concepts were combined, so that most of the educated Greeks from the 4th century BC onwards thought that Earth was a sphere at the center of the universe. [ 2 ]
In the 4th century BC, Plato and his student Aristotle wrote works based on the geocentric model [ citation needed ] . According to Plato, the Earth was a sphere, stationary at the center of the universe. The stars and planets were carried around the Earth on spheres or circles , arranged in the order (outwards from the center): Moon, Sun, Venus, Mercury, Mars, Jupiter, Saturn, fixed stars, with the fixed stars located on the celestial sphere. In his " Myth of Er ", a section of the Republic , Plato describes the cosmos as the Spindle of Necessity , attended by the Sirens and turned by the three Fates . Eudoxus of Cnidus , who worked with Plato, developed a less mythical, more mathematical explanation of the planets' motion based on Plato's dictum stating that all phenomena in the heavens can be explained with uniform circular motion. Aristotle elaborated on Eudoxus' system.
In the fully developed Aristotelian system, the spherical Earth is at the center of the universe, and all other heavenly bodies are attached to 47–55 transparent, rotating spheres surrounding the Earth, all concentric with it. (The number is so high because several spheres are needed for each planet.) These spheres, known as crystalline spheres, all moved at different uniform speeds to create the revolution of bodies around the Earth. They were composed of an incorruptible substance called aether . Aristotle believed that the Moon was in the innermost sphere and therefore touches the realm of Earth, causing the dark spots ( maculae ) and the ability to go through lunar phases . He further described his system by explaining the natural tendencies of the terrestrial elements: earth, water, fire, air, as well as celestial aether. His system held that earth was the heaviest element, with the strongest movement towards the center, thus water formed a layer surrounding the sphere of Earth. The tendency of air and fire, on the other hand, was to move upwards, away from the center, with fire being lighter than air. Beyond the layer of fire, were the solid spheres of aether in which the celestial bodies were embedded. They were also entirely composed of aether.
Adherence to the geocentric model stemmed largely from several important observations. First of all, if the Earth moved, then one ought to be able to observe the shifting of the fixed stars due to stellar parallax : the shapes of the constellations should change considerably over the course of a year. As they did not appear to move, either the stars were much farther away than the Sun and the planets than previously conceived, making their motion undetectable, or the Earth was not moving at all. Because the stars are actually much farther away than Greek astronomers postulated (making angular movement extremely small), stellar parallax was not detected until the 19th century . Therefore, the Greeks chose the simpler of the two explanations. Another observation used in favor of the geocentric model at the time was the apparent consistency of Venus' luminosity, which implies that it is usually about the same distance from Earth, which in turn is more consistent with geocentrism than heliocentrism. (In fact, Venus' consistent brightness is because the loss of light caused by its phases is compensated for by the increase in apparent size caused by its varying distance from Earth.) Objectors to heliocentrism noted that terrestrial bodies naturally tend to come to rest as near as possible to the center of the Earth. Further, barring the opportunity to fall closer to the center, terrestrial bodies tend not to move unless forced by an outside object, or transformed to a different element by heat or moisture.
Atmospheric explanations for many phenomena were preferred because the Eudoxan–Aristotelian model based on perfectly concentric spheres was not intended to explain changes in the brightness of the planets due to a change in distance. [ 3 ] Eventually, perfectly concentric spheres were abandoned as it was impossible to develop a sufficiently accurate model under that ideal, with the mathematical methods then available. However, while providing for similar explanations, the later deferent and epicycle model was already flexible enough to accommodate observations.
Although the basic tenets of Greek geocentrism were established by the time of Aristotle, the details of his system did not become standard. The Ptolemaic system, developed by the Hellenistic astronomer Claudius Ptolemaeus in the 2nd century AD, finally standardised geocentrism. His main astronomical work, the Almagest , was the culmination of centuries of work by Hellenic , Hellenistic and Babylonian astronomers. For over a millennium, European and Islamic astronomers assumed it was the correct cosmological model. Because of its influence, people sometimes wrongly think the Ptolemaic system is identical with the geocentric model .
Ptolemy argued that the Earth was a sphere in the center of the universe, from the simple observation that half the stars were above the horizon and half were below the horizon at any time (stars on rotating stellar sphere), and the assumption that the stars were all at some modest distance from the center of the universe. If the Earth were substantially displaced from the center, this division into visible and invisible stars would not be equal. [ n 1 ]
In the Ptolemaic system, each planet is moved by a system of two spheres: one called its deferent; the other, its epicycle . The deferent is a circle whose center point, called the eccentric and marked in the diagram with an X, is distant from the Earth. The original purpose of the eccentric was to account for the difference in length of the seasons (northern autumn was about five days shorter than spring during this time period) by placing the Earth away from the center of rotation of the rest of the universe. Another sphere, the epicycle, is embedded inside the deferent sphere and is represented by the smaller dotted line to the right. A given planet then moves around the epicycle at the same time the epicycle moves along the path marked by the deferent. These combined movements cause the given planet to move closer to and further away from the Earth at different points in its orbit, and explained the observation that planets slowed down, stopped, and moved backward in retrograde motion , and then again reversed to resume normal, or prograde, motion.
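Numerically, a deferent-and-epicycle path is just the sum of two uniform circular motions. The following sketch (with made-up radii and angular speeds, not Ptolemy's actual parameters) shows how the combination produces the retrograde reversals described above:

```python
import math

# A point on an epicycle whose centre rides along a deferent, Earth at
# the origin. Radii and angular speeds are illustrative only.
R_DEFERENT = 10.0   # radius of the deferent
R_EPICYCLE = 3.0    # radius of the epicycle
W_DEFERENT = 1.0    # angular speed along the deferent (rad / time unit)
W_EPICYCLE = 6.0    # angular speed around the epicycle

def planet_position(t):
    """Geocentric (x, y) of the planet at time t."""
    cx = R_DEFERENT * math.cos(W_DEFERENT * t)   # epicycle centre
    cy = R_DEFERENT * math.sin(W_DEFERENT * t)
    x = cx + R_EPICYCLE * math.cos(W_EPICYCLE * t)
    y = cy + R_EPICYCLE * math.sin(W_EPICYCLE * t)
    return x, y

# Retrograde motion shows up as the geocentric longitude moving backwards.
retro_times = []
prev = None
for step in range(200):
    t = step * 0.05
    x, y = planet_position(t)
    lon = math.atan2(y, x)
    if prev is not None:
        delta = (lon - prev + math.pi) % (2 * math.pi) - math.pi  # wrapped step
        if delta < 0:
            retro_times.append(round(t, 2))
    prev = lon
print("retrograde near t =", retro_times[:5], "...")
```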
The deferent-and-epicycle model had been used by Greek astronomers for centuries, along with the idea of the eccentric (a deferent whose center is slightly away from the Earth), which was even older. In the illustration, the center of the deferent is not the Earth but the spot marked X, making it eccentric (from the Greek ἐκ ec- meaning "from" and κέντρον kentron meaning "center"), from which the spot takes its name. Unfortunately, the system that was available in Ptolemy's time did not quite match observations , even though it was an improvement over Hipparchus' system. Most noticeably, the size of a planet's retrograde loop (especially that of Mars) would be smaller, or sometimes larger, than expected, resulting in positional errors of as much as 30 degrees. To alleviate the problem, Ptolemy developed the equant . The equant was a point near the center of a planet's orbit where, if one were to stand there and watch, the center of the planet's epicycle would always appear to move at uniform speed; all other locations would see non-uniform speed, as on the Earth. By using an equant, Ptolemy claimed to keep motion which was uniform and circular, although it departed from the Platonic ideal of uniform circular motion . The resultant system, which eventually came to be widely accepted in the west, seems unwieldy to modern astronomers; each planet required an epicycle revolving on a deferent, offset by an equant which was different for each planet. It predicted various celestial motions, including the beginning and end of retrograde motion, to within a maximum error of 10 degrees, considerably better than without the equant.
The model with epicycles is in fact a very good model of an elliptical orbit with low eccentricity. The well-known ellipse shape does not appear to a noticeable extent when the eccentricity is less than 5%, but the offset distance of the "center" (in fact the focus occupied by the Sun) is very noticeable even with low eccentricities as possessed by the planets.
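This can be made quantitative. For semi-major axis $a$ and eccentricity $e$, the shape of the ellipse deviates from a circle only at second order in $e$, while the focus is displaced from the center at first order:

```latex
% Flattening of the ellipse is second order in e:
b = a\sqrt{1 - e^{2}} \approx a\left(1 - \tfrac{e^{2}}{2}\right)
% Offset of the focus from the centre is first order in e:
\qquad c = ae
```

For Mars ($e \approx 0.093$), the flattening is only about 0.4%, while the focus offset is about 9.3% of the semi-major axis, which is why the offset, rather than the ellipse's shape, is the conspicuous feature.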
To summarize, Ptolemy conceived a system that was compatible with Aristotelian philosophy and succeeded in tracking actual observations and predicting future movement mostly to within the limits of the next 1000 years of observations. His deferents, epicycles, eccentrics, and equants together accounted for the observed celestial motions, including the beginning and end of retrograde motion.
The geocentric model was eventually replaced by the heliocentric model . Copernican heliocentrism could remove Ptolemy's epicycles because the retrograde motion could be seen to be the result of the combination of the movements and speeds of Earth and the planets. Copernicus felt strongly that equants were a violation of Aristotelian purity, and proved that replacing the equant with a pair of new epicycles was entirely equivalent. Astronomers often continued using the equants instead of the epicycles because the former were easier to calculate and gave the same result.
It has been determined [ by whom? ] that the Copernican, Ptolemaic and even the Tychonic models provide identical results to identical inputs: they are computationally equivalent. It was not until Kepler demonstrated a physical observation that could show that the physical Sun is directly involved in determining an orbit that a new model was required.
The Ptolemaic order of spheres from Earth outward is: Moon, Mercury, Venus, Sun, Mars, Jupiter, Saturn, and the sphere of fixed stars. [ 5 ]
Ptolemy did not invent or work out this order, which aligns with the ancient Seven Heavens religious cosmology common to the major Eurasian religious traditions. It also follows the decreasing orbital periods of the Moon, Sun, planets and stars.
After the translation movement that included the translation of Almagest from Latin to Arabic, Muslims adopted and refined the geocentric model of Ptolemy , which they believed correlated with the teachings of Islam. [ 6 ] [ 7 ] [ 8 ] Muslim astronomers generally accepted the Ptolemaic system and the geocentric model, [ 9 ] but by the 10th century, texts appeared regularly whose subject matter expressed doubts concerning Ptolemy ( shukūk ). [ 10 ] Several Muslim scholars questioned Earth's apparent immobility [ 11 ] [ 12 ] and centrality within the universe. [ 13 ] Some Muslim astronomers believed that Earth rotates around its axis , such as Abu Sa'id al-Sijzi (d. circa 1020). [ 14 ] [ 15 ] According to al-Biruni , Sijzi invented an astrolabe called al-zūraqī , based upon a belief held by some of his contemporaries "that the motion we see is due to the Earth's movement and not to that of the sky". [ 15 ] [ 16 ] The prevalence of this belief is further confirmed by a reference from the 13th century that states:
According to the geometers [or engineers] ( muhandisīn ), the Earth is in constant circular motion, and what appears to be the motion of the heavens is actually due to the motion of the Earth and not the stars. [ 15 ]
Early in the 11th century, Alhazen wrote a scathing critique of Ptolemy 's model in his Doubts on Ptolemy ( c. 1028 ), which some have interpreted to imply he was criticizing Ptolemy's geocentrism, [ 17 ] but most agree that he was actually criticizing the details of Ptolemy's model rather than his geocentrism. [ 18 ]
In the 12th century, Arzachel departed from the ancient Greek idea of uniform circular motions by hypothesizing that the planet Mercury moves in an elliptic orbit , [ 19 ] [ 20 ] while Alpetragius proposed a planetary model that abandoned the equant , epicycle and eccentric mechanisms, [ 21 ] though this resulted in a system that was mathematically less accurate. [ 22 ] His alternative system spread through most of Europe during the 13th century. [ 23 ]
Fakhr al-Din al-Razi (1149–1209), in dealing with his conception of physics and the physical world in his Matalib , rejects the Aristotelian and Avicennian notion of the Earth's centrality within the universe, but instead argues that there are "a thousand thousand worlds ( alfa alfi 'awalim ) beyond this world, such that each one of those worlds be bigger and more massive than this world, as well as having the like of what this world has." To support his theological argument , he cites the Qur'anic verse, "All praise belongs to God, Lord of the Worlds", emphasizing the term "Worlds". [ 13 ]
The "Maragha Revolution" refers to the Maragha school's revolution against Ptolemaic astronomy. The "Maragha school" was an astronomical tradition beginning in the Maragha observatory and continuing with astronomers from the Damascus mosque and Samarkand observatory . Like their Andalusian predecessors, the Maragha astronomers attempted to solve the equant problem (the circle around whose circumference a planet or the center of an epicycle was conceived to move uniformly) and produce alternative configurations to the Ptolemaic model without abandoning geocentrism. They were more successful than their Andalusian predecessors in producing non-Ptolemaic configurations which eliminated the equant and eccentrics, were more accurate than the Ptolemaic model in numerically predicting planetary positions, and were in better agreement with empirical observations. [ 24 ] The most important of the Maragha astronomers included Mo'ayyeduddin Urdi (died 1266), Nasīr al-Dīn al-Tūsī (1201–1274), Qutb al-Din al-Shirazi (1236–1311), Ibn al-Shatir (1304–1375), Ali Qushji ( c. 1474 ), Al-Birjandi (died 1525), and Shams al-Din al-Khafri (died 1550). [ 25 ]
However, the Maragha school never made the paradigm shift to heliocentrism. [ 26 ] The influence of the Maragha school on Copernicus remains speculative, since there is no documentary evidence to prove it. The possibility that Copernicus independently developed the Tusi couple remains open, since no researcher has yet demonstrated that he knew about Tusi's work or that of the Maragha school. [ 26 ] [ 27 ]
Not all Greeks agreed with the geocentric model. The Pythagorean system has already been mentioned; some Pythagoreans believed the Earth to be one of several planets going around a central fire. [ 28 ] Hicetas and Ecphantus , two Pythagoreans of the 5th century BC, and Heraclides Ponticus in the 4th century BC, believed that the Earth rotated on its axis but remained at the center of the universe. [ 29 ] Such a system still qualifies as geocentric. It was revived in the Middle Ages by Jean Buridan . Heraclides Ponticus was once thought to have proposed that both Venus and Mercury went around the Sun rather than the Earth, but it is now known that he did not. [ 30 ] Martianus Capella definitely put Mercury and Venus in orbit around the Sun. [ 31 ] Aristarchus of Samos wrote a work, which has not survived, on heliocentrism , saying that the Sun was at the center of the universe, while the Earth and other planets revolved around it. [ 32 ] His theory was not popular, and he had one named follower, Seleucus of Seleucia . [ 33 ] Epicurus was the most radical. He correctly realized in the 4th century BC that the universe does not have any single center. This theory was widely accepted by the later Epicureans and was notably defended by Lucretius in his poem De rerum natura . [ 34 ]
In 1543, the geocentric system met its first serious challenge with the publication of Copernicus ' De revolutionibus orbium coelestium ( On the Revolutions of the Heavenly Spheres ), which posited that the Earth and the other planets instead revolved around the Sun. The geocentric system was still held for many years afterwards, as at the time the Copernican system did not offer better predictions than the geocentric system, and it posed problems for both natural philosophy and scripture. The Copernican system was no more accurate than Ptolemy's system, because it still used circular orbits. This was not altered until Johannes Kepler postulated that they were elliptical (Kepler's first law of planetary motion ).
Tycho Brahe (1546–1601) made more accurate determinations of the positions of planets and stars. He sought the effect of stellar parallax, which would have been empirically verifiable proof of the Earth's motion around the Sun predicted by the Copernican model. Having observed no such effect, he rejected the idea of the Earth's motion. [ 35 ]
Consequently, he introduced a new system, the Tychonic system, in which the Earth was still at the center of the universe, and around it revolved the Sun, but all the other planets revolved around the Sun in a set of epicycles. His model considered both the benefits of the Copernican model and the lack of evidence for the Earth's motion. [ 36 ]
With the invention of the telescope in 1609, observations made by Galileo Galilei (such as that Jupiter has moons) called into question some of the tenets of geocentrism but did not seriously threaten it. Having observed dark "spots" (craters) on the Moon, he remarked that the Moon was not a perfect celestial body as had been previously conceived. This was the first detailed telescopic observation of the Moon's imperfections, which Aristotle had previously explained as the Moon being contaminated by Earth and its heavier elements, in contrast to the aether of the higher spheres. Galileo could also see the moons of Jupiter, which he dedicated to Cosimo II de' Medici , and stated that they orbited Jupiter, not Earth. [ 37 ] This was a significant claim, as it would mean not only that not everything revolved around Earth as stated in the Ptolemaic model, but also that a secondary celestial body could orbit a moving celestial body, strengthening the heliocentric argument that a moving Earth could retain the Moon. [ 38 ] Galileo's observations were verified by other astronomers of the period who quickly adopted the telescope, including Christoph Scheiner , Johannes Kepler , and Giovan Paulo Lembo. [ 39 ]
In December 1610, Galileo Galilei used his telescope to observe that Venus showed all phases , just like the Moon . He reasoned that this observation was incompatible with the Ptolemaic system but followed naturally from the heliocentric system.
However, Ptolemy had placed Venus' deferent and epicycle entirely inside the sphere of the Sun (between the Sun and Mercury), and this placement was arbitrary: he could just as easily have swapped Venus and Mercury to the other side of the Sun, or made any other arrangement of the two planets, as long as they were always near a line running from the Earth through the Sun, such as placing the center of the Venus epicycle near the Sun. In that case, if the Sun is the source of all the light, the Ptolemaic system predicts:
If Venus is between Earth and the Sun, the phase of Venus must always be crescent or all dark.
If Venus is beyond the Sun, the phase of Venus must always be gibbous or full.
But Galileo saw Venus at first small and full, and later large and crescent.
This showed that with a Ptolemaic cosmology, the Venus epicycle could be neither completely inside nor completely outside the orbit of the Sun. As a result, adherents of the Ptolemaic model abandoned the idea that the epicycle of Venus was completely inside the sphere of the Sun, and later 17th-century competition between astronomical cosmologies focused on variations of the Tychonic and Copernican systems.
The famous Galileo affair pitted the geocentric model against the claims of Galileo . With regard to the theological basis for such an argument, two Popes addressed the question of whether the use of phenomenological language would compel one to admit an error in Scripture. Both taught that it would not. Pope Leo XIII wrote:
we have to contend against those who, making an evil use of physical science, minutely scrutinize the Sacred Book in order to detect the writers in a mistake, and to take occasion to vilify its contents. ... There can never, indeed, be any real discrepancy between the theologian and the physicist, as long as each confines himself within his own lines, and both are careful, as St. Augustine warns us, "not to make rash assertions, or to assert what is not known as known". If dissension should arise between them, here is the rule also laid down by St. Augustine, for the theologian: "Whatever they can really demonstrate to be true of physical nature, we must show to be capable of reconciliation with our Scriptures; and whatever they assert in their treatises which is contrary to these Scriptures of ours, that is to Catholic faith, we must either prove it as well as we can to be entirely false, or at all events we must, without the smallest hesitation, believe it to be so." To understand how just is the rule here formulated we must remember, first, that the sacred writers, or to speak more accurately, the Holy Ghost "Who spoke by them, did not intend to teach men these things (that is to say, the essential nature of the things of the visible universe), things in no way profitable unto salvation." Hence they did not seek to penetrate the secrets of nature, but rather described and dealt with things in more or less figurative language, or in terms which were commonly used at the time, and which in many instances are in daily use at this day, even by the most eminent men of science. Ordinary speech primarily and properly describes what comes under the senses; and somewhat in the same way the sacred writers – as the Angelic Doctor also reminds us – "went by what sensibly appeared", or put down what God, speaking to men, signified, in the way men could understand and were accustomed to.
Maurice Finocchiaro, author of a book on the Galileo affair, notes that this is "a view of the relationship between biblical interpretation and scientific investigation that corresponds to the one advanced by Galileo in the " Letter to the Grand Duchess Christina ". [ 40 ] Pope Pius XII repeated his predecessor's teaching:
The first and greatest care of Leo XIII was to set forth the teaching on the truth of the Sacred Books and to defend it from attack. Hence with grave words did he proclaim that there is no error whatsoever if the sacred writer, speaking of things of the physical order "went by what sensibly appeared" as the Angelic Doctor says, speaking either "in figurative language, or in terms which were commonly used at the time, and which in many instances are in daily use at this day, even among the most eminent men of science". For "the sacred writers, or to speak more accurately – the words are St. Augustine's – the Holy Spirit, Who spoke by them, did not intend to teach men these things – that is the essential nature of the things of the universe – things in no way profitable to salvation"; which principle "will apply to cognate sciences, and especially to history", that is, by refuting, "in a somewhat similar way the fallacies of the adversaries and defending the historical truth of Sacred Scripture from their attacks".
In 1664, Pope Alexander VII republished the Index Librorum Prohibitorum ( List of Prohibited Books ) and attached the various decrees connected with those books, including those concerned with heliocentrism. He stated in a papal bull that his purpose in doing so was that "the succession of things done from the beginning might be made known [ quo rei ab initio gestae series innotescat ]". [ 41 ]
The position of the curia evolved slowly over the centuries towards permitting the heliocentric view. In 1757, during the papacy of Benedict XIV, the Congregation of the Index withdrew the decree that prohibited all books teaching the Earth's motion, although the Dialogue and a few other books continued to be explicitly included. In 1820, the Congregation of the Holy Office, with the pope's approval, decreed that Catholic astronomer Giuseppe Settele was allowed to treat the Earth's motion as an established fact and removed any obstacle for Catholics to hold to the motion of the Earth:
The Assessor of the Holy Office has referred the request of Giuseppe Settele, Professor of Optics and Astronomy at La Sapienza University, regarding permission to publish his work Elements of Astronomy in which he espouses the common opinion of the astronomers of our time regarding the Earth’s daily and yearly motions, to His Holiness through Divine Providence, Pope Pius VII. Previously, His Holiness had referred this request to the Supreme Sacred Congregation and concurrently to the consideration of the Most Eminent and Most Reverend General Cardinal Inquisitor. His Holiness has decreed that no obstacles exist for those who sustain Copernicus' affirmation regarding the Earth's movement in the manner in which it is affirmed today, even by Catholic authors. He has, moreover, suggested the insertion of several notations into this work, aimed at demonstrating that the above mentioned affirmation [of Copernicus], as it has come to be understood, does not present any difficulties; difficulties that existed in times past, prior to the subsequent astronomical observations that have now occurred. [Pope Pius VII] has also recommended that the implementation [of these decisions] be given to the Cardinal Secretary of the Supreme Sacred Congregation and Master of the Sacred Apostolic Palace. He is now appointed the task of bringing to an end any concerns and criticisms regarding the printing of this book, and, at the same time, ensuring that in the future, regarding the publication of such works, permission is sought from the Cardinal Vicar whose signature will not be given without the authorization of the Superior of his Order. [ 42 ]
In 1822, the Congregation of the Holy Office removed the prohibition on the publication of books treating of the Earth's motion in accordance with modern astronomy and Pope Pius VII ratified the decision:
The most excellent [cardinals] have decreed that there must be no denial, by the present or by future Masters of the Sacred Apostolic Palace, of permission to print and to publish works which treat of the mobility of the Earth and of the immobility of the sun, according to the common opinion of modern astronomers, as long as there are no other contrary indications, on the basis of the decrees of the Sacred Congregation of the Index of 1757 and of this Supreme [Holy Office] of 1820; and that those who would show themselves to be reluctant or would disobey, should be forced under punishments at the choice of [this] Sacred Congregation, with derogation of [their] claimed privileges, where necessary. [ 43 ]
The 1835 edition of the Catholic List of Prohibited Books omitted the Dialogue from the list for the first time. [ 40 ] In his 1921 papal encyclical , In praeclara summorum , Pope Benedict XV stated that, "though this Earth on which we live may not be the center of the universe as at one time was thought, it was the scene of the original happiness of our first ancestors, witness of their unhappy fall, as too of the Redemption of mankind through the Passion and Death of Jesus Christ". [ 44 ] In 1965 the Second Vatican Council stated that, "Consequently, we cannot but deplore certain habits of mind, which are sometimes found too among Christians, which do not sufficiently attend to the rightful independence of science and which, from the arguments and controversies they spark, lead many minds to conclude that faith and science are mutually opposed." [ 45 ] The footnote on this statement is to Msgr. Pio Paschini's Vita e opere di Galileo Galilei , 2 volumes, Vatican Press (1964). Pope John Paul II regretted the treatment that Galileo received, in a speech to the Pontifical Academy of Sciences in 1992. The Pope declared the incident to be based on a "tragic mutual miscomprehension". He further stated:
Cardinal Poupard has also reminded us that the sentence of 1633 was not irreformable, and that the debate which had not ceased to evolve thereafter, was closed in 1820 with the imprimatur given to the work of Canon Settele. ... The error of the theologians of the time, when they maintained the centrality of the Earth, was to think that our understanding of the physical world's structure was, in some way, imposed by the literal sense of Sacred Scripture. Let us recall the celebrated saying attributed to Baronius "Spiritui Sancto mentem fuisse nos docere quomodo ad coelum eatur, non quomodo coelum gradiatur". In fact, the Bible does not concern itself with the details of the physical world, the understanding of which is the competence of human experience and reasoning. There exist two realms of knowledge, one which has its source in Revelation and one which reason can discover by its own power. To the latter belong especially the experimental sciences and philosophy. The distinction between the two realms of knowledge ought not to be understood as opposition. [ 46 ]
Johannes Kepler analysed Tycho Brahe 's famously accurate observations and afterwards constructed his three laws in 1609 and 1619, based upon a heliocentric model wherein the planets move in elliptical paths. Using these laws, he was the first astronomer to successfully predict a transit of Venus for the year 1631. The change from circular orbits to elliptical planetary paths dramatically improved the accuracy of celestial observations and predictions. Because the heliocentric model devised by Copernicus was no more accurate than Ptolemy's system, new observations were needed to persuade those who still adhered to the geocentric model. Kepler's laws, based upon Brahe's data, became a problem that geocentrists could not easily overcome.
In 1687, Isaac Newton stated the law of universal gravitation , which was described earlier as a hypothesis by Robert Hooke and others. His main achievement was to mathematically derive Kepler's laws of planetary motion from the law of gravitation, thus helping to prove the latter. This introduced gravitation as the force which kept Earth and the planets moving through the universe, and also kept the atmosphere from flying away. The theory of gravity allowed scientists to rapidly construct a plausible heliocentric model for the Solar System. In his Principia , Newton explained his theory of how gravity, previously thought to be a mysterious, unexplained occult force, directed the movements of celestial bodies, and kept our Solar System in working order. His descriptions of centripetal force [ 47 ] were a breakthrough in scientific thought, using the newly developed mathematical discipline of differential calculus , finally replacing the previous schools of scientific thought, which had been dominated by Aristotle and Ptolemy. However, the process was gradual.
Several empirical tests of Newton's theory, explaining the longer period of oscillation of a pendulum at the equator and the differing size of a degree of latitude, gradually became available between 1673 and 1738. In addition, stellar aberration was observed by Robert Hooke in 1674, and tested in a series of observations by Jean Picard over a period of ten years, finishing in 1680. However, it was not explained until 1729, when James Bradley provided an approximate explanation in terms of the Earth's revolution about the Sun.
In 1838, astronomer Friedrich Wilhelm Bessel successfully measured the parallax of the star 61 Cygni , disproving Ptolemy's claim that parallax motion did not exist. This finally confirmed the assumptions made by Copernicus, providing accurate, dependable scientific observations and conclusively demonstrating how distant stars are from Earth.
A geocentric frame is useful for many everyday activities and most laboratory experiments, but is a less appropriate choice for Solar System mechanics and space travel. While a heliocentric frame is most useful in those cases, galactic and extragalactic astronomy is easier if the Sun is treated as neither stationary nor the center of the universe, but orbiting the center of our galaxy, which in turn is not at rest in the cosmic background .
Albert Einstein and Leopold Infeld wrote in The Evolution of Physics (1938): "Can we formulate physical laws so that they are valid for all CS [ coordinate systems ], not only those moving uniformly, but also those moving quite arbitrarily, relative to each other? If this can be done, our difficulties will be over. We shall then be able to apply the laws of nature to any CS. The struggle, so violent in the early days of science, between the views of Ptolemy and Copernicus would then be quite meaningless. Either CS could be used with equal justification. The two sentences, 'the sun is at rest and the Earth moves', or 'the sun moves and the Earth is at rest', would simply mean two different conventions concerning two different CS.
Could we build a real relativistic physics valid in all CS; a physics in which there would be no place for absolute, but only for relative, motion? This is indeed possible!" [ 48 ]
Despite giving more respectability to the geocentric view than Newtonian physics does, [ 49 ] relativity is not geocentric. Rather, relativity states that the Sun, the Earth, the Moon, Jupiter, or any other point for that matter could be chosen as a center of the Solar System with equal validity. [ 50 ]
Relativity agrees with Newtonian predictions that regardless of whether the Sun or the Earth is chosen arbitrarily as the center of the coordinate system describing the Solar System, the paths of the planets form (roughly) ellipses with respect to the Sun, not the Earth. With respect to the average reference frame of the fixed stars , the planets do indeed move around the Sun, which, due to its much larger mass, moves far less than its own diameter; the Sun's gravity is dominant in determining the orbits of the planets (in other words, the center of mass of the Solar System is near the center of the Sun). The Earth and Moon are much closer to being a binary planet ; the center of mass around which they both rotate is still inside the Earth, but is about 4,624 km (2,873 miles) or 72.6% of the Earth's radius away from the center of the Earth (thus closer to the surface than the center). [ citation needed ]
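The barycenter figure follows from the two-body lever rule: the center of mass sits at a distance d·m2/(m1 + m2) from the primary. A minimal sketch, using approximate textbook values for the masses and mean separation (these inputs are not taken from this article, so the result differs slightly from the 4,624 km quoted above):

```python
# Two-body barycenter: distance from the primary's center is
# d_bary = d * m2 / (m1 + m2).

M_EARTH = 5.972e24   # kg, approximate mass of the Earth
M_MOON = 7.342e22    # kg, approximate mass of the Moon
D_EM = 384_400       # km, approximate mean Earth-Moon separation
R_EARTH = 6_371      # km, approximate mean Earth radius

d_bary = D_EM * M_MOON / (M_EARTH + M_MOON)
print(f"barycenter: {d_bary:.0f} km from Earth's center "
      f"({100 * d_bary / R_EARTH:.1f}% of Earth's radius)")
# -> about 4,670 km, roughly 73% of Earth's radius: inside the Earth,
#    but closer to the surface than to the center, consistent with the
#    article's figure to within the precision of these inputs.
```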
What the principle of relativity points out is that correct mathematical calculations can be made regardless of the reference frame chosen, and these will all agree with each other as to the predictions of actual motions of bodies with respect to each other. It is not necessary to choose the object in the Solar System with the largest gravitational field as the center of the coordinate system in order to predict the motions of planetary bodies, though doing so may make calculations easier to perform or interpret. A geocentric coordinate system can be more convenient when dealing only with bodies mostly influenced by the gravity of the Earth (such as artificial satellites and the Moon ), or when calculating what the sky will look like when viewed from Earth (as opposed to an imaginary observer looking down on the entire Solar System, where a different coordinate system might be more convenient). [ citation needed ]
The Ptolemaic model held sway into the early modern age ; from the late 16th century onward it was gradually replaced as the consensus description by the heliocentric model . Geocentrism as a separate religious belief, however, never completely died out. In the United States between 1870 and 1920, for example, various members of the Lutheran Church–Missouri Synod published articles disparaging Copernican astronomy and promoting geocentrism. [ 51 ] However, in the 1902 Theological Quarterly , A. L. Graebner observed that the synod had no doctrinal position on geocentrism, heliocentrism, or any scientific model, unless it were to contradict Scripture. He stated that any possible declarations of geocentrists within the synod did not set the position of the church body as a whole. [ 52 ]
Articles arguing that geocentrism was the biblical perspective appeared in some early creation science newsletters. [ which? ] Contemporary advocates for such religious beliefs include Robert Sungenis (author of the 2006 book Galileo Was Wrong and the 2014 pseudo-documentary film The Principle ). [ 53 ] Most contemporary creationist organizations reject such perspectives. [ n 2 ] A few Orthodox Jewish leaders maintain a geocentric model of the universe and an interpretation of Maimonides to the effect that he ruled that the Earth is orbited by the Sun. [ 55 ] [ 56 ] The Lubavitcher Rebbe also explained that geocentrism is defensible based on the theory of relativity . [ 57 ] While geocentrism is important in Maimonides' calendar calculations, [ 58 ] the great majority of Jewish religious scholars, who accept the divinity of the Bible and accept many of his rulings as legally binding, do not believe that the Bible or Maimonides command a belief in geocentrism. [ 56 ] [ 59 ] There have been some modern Islamic scholars who promoted geocentrism. One of them was Ahmed Raza Khan Barelvi , a Sunni scholar of the Indian subcontinent . He rejected the heliocentric model and wrote a book [ 60 ] that explains the movement of the Sun, Moon and other planets around the Earth.
According to a report released in 2014 by the National Science Foundation , 26% of Americans surveyed believe that the Sun revolves around the Earth. [ 61 ] Morris Berman quotes a 2006 survey showing that some 20% of the U.S. population believe that the Sun goes around the Earth (geocentrism) rather than the Earth going around the Sun (heliocentrism), while a further 9% claimed not to know. [ 62 ] Polls conducted by Gallup in the 1990s found that 16% of Germans, 18% of Americans and 19% of Britons hold that the Sun revolves around the Earth. [ 63 ] A study conducted in 2005 by Jon D. Miller of Northwestern University , an expert in the public understanding of science and technology, [ 64 ] found that about 20%, or one in five, of American adults believe that the Sun orbits the Earth. [ 65 ] According to a 2011 VTSIOM poll, 32% of Russians believe that the Sun orbits the Earth. [ 66 ]
Many planetariums can switch between heliocentric and geocentric models. [ 67 ] [ 68 ] In particular, the geocentric model is still used for projecting the celestial sphere and lunar phases in education [ 69 ] and sometimes for navigation.
All Islamic astronomers from Thabit ibn Qurra in the ninth century to Ibn al-Shatir in the fourteenth, and all natural philosophers from al-Kindi to Averroes and later, are known to have accepted ... the Greek picture of the world as consisting of two spheres of which one, the celestial sphere ... concentrically envelops the other. | https://en.wikipedia.org/wiki/Geocentric_model |
The Geochemical Journal is a peer-reviewed open-access scientific journal covering all aspects of geochemistry and cosmochemistry . It is published by the Geochemical Society of Japan and the editor-in-chief is Katsuhiko Suzuki .
The journal is abstracted and indexed in:
According to the Journal Citation Reports , the journal has a 2020 impact factor of 1.561. [ 5 ]
| https://en.wikipedia.org/wiki/Geochemical_Journal
The Geochemical Ocean Sections Study (GEOSECS) was a global survey of the three-dimensional distributions of chemical, isotopic , and radiochemical tracers in the ocean. [ 1 ] A key objective was to investigate the deep thermohaline circulation of the ocean, using chemical tracers, including radiotracers, to establish the pathways taken by this. [ 2 ]
Expeditions undertaken during GEOSECS took place in the Atlantic Ocean from July 1972 to May 1973, in the Pacific Ocean from August 1973 to June 1974, and in the Indian Ocean from December 1977 to March 1978. [ 3 ]
Measurements included those of physical oceanographic quantities such as temperature , salinity , pressure and density , chemical / biological quantities such as total inorganic carbon , alkalinity , nitrate , phosphate , silicic acid , oxygen and apparent oxygen utilisation (AOU), and radiochemical / isotopic quantities such as carbon-13 , carbon-14 and tritium . [ 3 ]
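Apparent oxygen utilisation is a derived quantity: the difference between the oxygen the water would hold at equilibrium with the atmosphere and the oxygen actually measured. A minimal sketch of that arithmetic (the saturation value below is purely illustrative; in practice it is computed from temperature and salinity with an empirical solubility fit, and none of these numbers are GEOSECS data):

```python
def apparent_oxygen_utilisation(o2_saturation: float, o2_measured: float) -> float:
    """AOU = O2 at atmospheric equilibrium minus O2 actually observed.

    Positive AOU indicates net oxygen consumption (e.g. by respiration)
    since the water parcel was last at the surface. Both arguments must
    share units (commonly umol/kg).
    """
    return o2_saturation - o2_measured

# Illustrative numbers only:
aou = apparent_oxygen_utilisation(o2_saturation=280.0, o2_measured=180.0)
print(aou)  # -> 100.0, i.e. 100 umol/kg consumed at depth
```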
| https://en.wikipedia.org/wiki/Geochemical_Ocean_Sections_Study
Geochemical Perspectives Letters is a peer-reviewed open access scholarly journal publishing original research in geochemistry . It is published by the European Association for Geochemistry .
The journal is abstracted and indexed in:
| https://en.wikipedia.org/wiki/Geochemical_Perspectives_Letters
The Geochemical Society is a nonprofit scientific organization founded to encourage the application of chemistry to solve problems involving geology and cosmology . The society promotes understanding of geochemistry through the annual Goldschmidt Conference, publication of a peer-reviewed journal and electronic newsletter, awards programs recognizing significant accomplishments in the field, and student development programs. The society's offices are located on the campus of the Carnegie Institution for Science in Washington, DC.
The Geochemical Society was founded in 1955 at a meeting of the Geological Society of America . Its first president was Earl Ingerson and dues started at two dollars per year. [ 1 ] It was incorporated as a 501(c)(3) nonprofit organization in 1990. [ 2 ]
In 1988, the Geochemical Society created the Goldschmidt Conferences in honor of the geochemist Victor Goldschmidt (1888–1947), [ 6 ] "considered to be the founder of modern geochemistry and crystal chemistry". [ 7 ] The conference was soon joined by the European Association of Geochemistry , [ 6 ] and at the 2014 meeting the two organizations signed a Memorandum of Understanding for the governance and trademark protection of the meeting. [ 8 ] The conference is one of the world's largest devoted to geochemistry. [ 9 ] The society's board of directors holds its annual meeting during the conference. [ 6 ]
The Geochemical Society has nearly 4,000 members from more than 70 countries. [ 9 ] Most members are students, researchers and faculty in geochemistry-related fields, although anyone with an interest in geochemistry may join. Membership runs by calendar year; dues are US$35 for professionals, US$15 for students, and US$20 for seniors. Membership includes a subscription to Elements Magazine and also offers discounts on Geochemical Society publications, Mineralogical Society of America publications and conference registration discounts at the Goldschmidt Conference, Fall AGU, and the annual GSA conference. [ 10 ]
The Geochemical Society publishes, co-publishes, or sponsors the following: [ 11 ]
The Geochemical Society presents the following annual awards: [ 14 ]
The Distinguished Service Award, which recognizes outstanding service to the Society or the geochemical community, is not awarded every year. [ 17 ]
The Geochemical Society sponsors a special lecture at the annual meeting of the Geological Society of America. Called the F. Earl Ingerson Lecture Series, it honors the first president of the Geochemical Society. At the Goldschmidt Conference, the Paul W. Gast Lecture is awarded to a mid-career scientist (under 45 years old) in honor of the first Goldschmidt medalist. [ 17 ] | https://en.wikipedia.org/wiki/Geochemical_Society |
In Earth science , a geochemical cycle is the pathway that chemical elements follow as they move between reservoirs of chemicals in the surface and crust of the Earth . [ 1 ] The term " geochemical " indicates that both geological and chemical factors are involved. The migration of heated and compressed chemical elements and compounds such as silicon , aluminium , and the alkali metals by means of subduction and volcanism is what geologists describe as geochemical cycling.
The geochemical cycle encompasses the natural separation and concentration of elements and heat-assisted recombination processes. Changes may not be apparent over a short term, such as with biogeochemical cycles , but over a long term changes of great magnitude occur, including the evolution of continents and oceans. [ 1 ]
Some [ who? ] may use the terms biogeochemical cycle and geochemical cycle interchangeably because both cycles deal with Earth's reservoirs . However, a biogeochemical cycle refers to the chemical interactions in surface reservoirs such as the atmosphere , hydrosphere , lithosphere , and biosphere [ citation needed ] whereas a geochemical cycle refers to the chemical interactions that exist in crustal and subcrustal reservoirs such as the deep earth and lithosphere . [ citation needed ]
The Earth , as a system, is open to radiation from the sun and space, but is practically closed with regard to matter . [ 2 ] Like all closed systems, it obeys the law of conservation of mass : matter can be neither created nor destroyed, so although matter is transformed and moved about, the total remains the same as when the Earth was formed. The Earth system contains seven different reservoirs, separated into the surface reservoirs, which include the atmosphere , hydrosphere , biosphere , pedosphere , and lithosphere , and the isolated reservoirs, which include the deep Earth and outer space . [ 2 ] Geochemical cycles are concerned with the interactions between the deep Earth, which consists of Earth's mantle and core, and the lithosphere , which consists of the Earth's crust.
Flux in geochemical cycles is the movement of material between the deep Earth and the surface reservoirs. This occurs through two different processes: volcanism and subduction of tectonic plates .
Subduction is the process that takes place at convergent boundaries by which one tectonic plate moves under another and sinks into the mantle as the plates converge. The sinking of the plate into the mantle drives a broad range of geochemical transformations, or cycling.
Volcanism is the process that takes place at divergent boundaries by which one tectonic plate separates from another, creating a rift through which molten rock ( magma ) erupts onto the surface of the Earth . This magma then cools and crystallizes, forming igneous rocks. If crystallization occurs at the Earth's surface, extrusive igneous rocks are formed; if crystallization occurs within the Earth's lithosphere , intrusive igneous rocks are formed, which can then be brought to the Earth's surface by denudation . [ 3 ]
Categories and examples of geochemical cycles: | https://en.wikipedia.org/wiki/Geochemical_cycle |
Geochemistry is the science that uses the tools and principles of chemistry to explain the mechanisms behind major geological systems such as the Earth's crust and its oceans . [ 1 ] : 1 The realm of geochemistry extends beyond the Earth , encompassing the entire Solar System , [ 2 ] and has made important contributions to the understanding of a number of processes including mantle convection , the formation of planets and the origins of granite and basalt . [ 1 ] : 1 It is an integrated field of chemistry and geology .
The term geochemistry was first used by the Swiss-German chemist Christian Friedrich Schönbein in 1838: "a comparative geochemistry ought to be launched, before geognosy can become geology, and before the mystery of the genesis of our planets and their inorganic matter may be revealed." [ 3 ] However, for the rest of the century the more common term was "chemical geology", and there was little contact between geologists and chemists . [ 3 ]
Geochemistry emerged as a separate discipline after major laboratories were established, starting with the United States Geological Survey (USGS) in 1884, which began systematic surveys of the chemistry of rocks and minerals. The chief USGS chemist, Frank Wigglesworth Clarke , noted that the elements generally decrease in abundance as their atomic weights increase, and summarized the work on elemental abundance in The Data of Geochemistry . [ 3 ] [ 4 ] : 2
The composition of meteorites was investigated and compared to terrestrial rocks as early as 1850. In 1901, Oliver C. Farrington hypothesised that, although there were differences, the relative abundances should still be the same. [ 3 ] This was the beginning of the field of cosmochemistry and has contributed much of what we know about the formation of the Earth and the Solar System. [ 5 ]
In the early 20th century, Max von Laue and William L. Bragg showed that X-ray scattering could be used to determine the structures of crystals. In the 1920s and 1930s, Victor Goldschmidt and associates at the University of Oslo applied these methods to many common minerals and formulated a set of rules for how elements are grouped. Goldschmidt published this work in the series Geochemische Verteilungsgesetze der Elemente [Geochemical Laws of the Distribution of Elements]. [ 4 ] : 2 [ 6 ]
The research of Manfred Schidlowski from the 1960s to around the year 2002 was concerned with the biochemistry of the Early Earth , with a focus on isotope biogeochemistry and the evidence of the earliest life processes in the Precambrian . [ 7 ] [ 8 ]
Some subfields of geochemistry are: [ 9 ]
The building blocks of materials are the chemical elements . These can be identified by their atomic number Z, which is the number of protons in the nucleus . An element can have more than one value for N, the number of neutrons in the nucleus. The sum of these is the mass number , which is roughly equal to the atomic mass . Atoms with the same atomic number but different neutron numbers are called isotopes . A given isotope is identified by a letter for the element preceded by a superscript for the mass number. For example, two common isotopes of chlorine are 35 Cl and 37 Cl. There are about 1700 known combinations of Z and N, of which only about 260 are stable. However, most of the unstable isotopes do not occur in nature. In geochemistry, stable isotopes are used to trace chemical pathways and reactions, while radioactive isotopes are primarily used to date samples. [ 4 ] : 13–17
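The bookkeeping described here (atomic number Z, neutron number N, and mass number A = Z + N) maps directly onto a small data structure; a minimal sketch, using the chlorine isotopes named above:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Isotope:
    symbol: str  # element symbol, fixed by the atomic number
    Z: int       # atomic number: protons in the nucleus
    N: int       # neutron number

    @property
    def A(self) -> int:
        """Mass number: protons plus neutrons."""
        return self.Z + self.N

    def __str__(self) -> str:
        return f"{self.A}{self.symbol}"

cl35 = Isotope("Cl", Z=17, N=18)
cl37 = Isotope("Cl", Z=17, N=20)
print(cl35, cl37)  # -> 35Cl 37Cl: same element (same Z), different isotopes
```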
The chemical behavior of an atom – its affinity for other elements and the type of bonds it forms – is determined by the arrangement of electrons in orbitals , particularly the outermost ( valence ) electrons. These arrangements are reflected in the position of elements in the periodic table . [ 4 ] : 13–17 Based on position, the elements fall into the broad groups of alkali metals , alkaline earth metals , transition metals , semi-metals (also known as metalloids ), halogens , noble gases , lanthanides and actinides . [ 4 ] : 20–23
Another useful classification scheme for geochemistry is the Goldschmidt classification , which places the elements into four main groups. Lithophiles combine easily with oxygen. These elements, which include Na , K , Si , Al , Ti , Mg and Ca , dominate in the Earth's crust , forming silicates and other oxides. Siderophile elements ( Fe , Co , Ni , Pt , Re , Os ) have an affinity for iron and tend to concentrate in the core . Chalcophile elements ( Cu , Ag , Zn , Pb , S ) form sulfides ; and atmophile elements ( O , N , H and noble gases) dominate the atmosphere. Within each group, some elements are refractory , remaining stable at high temperatures, while others are volatile , evaporating more easily, so heating can separate them. [ 1 ] : 17 [ 4 ] : 23
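The Goldschmidt classification is, in effect, a lookup table from element to group; a minimal sketch covering only the example elements named in this paragraph (the real classification spans the whole periodic table):

```python
# Goldschmidt groups, restricted to the elements named in the text.
GOLDSCHMIDT = {
    "lithophile": {"Na", "K", "Si", "Al", "Ti", "Mg", "Ca"},  # combine with oxygen; crust
    "siderophile": {"Fe", "Co", "Ni", "Pt", "Re", "Os"},      # affinity for iron; core
    "chalcophile": {"Cu", "Ag", "Zn", "Pb", "S"},             # form sulfides
    "atmophile": {"O", "N", "H", "He", "Ne", "Ar"},           # dominate the atmosphere
}

def goldschmidt_group(element: str) -> str:
    """Return the Goldschmidt group of an element in this partial table."""
    for group, members in GOLDSCHMIDT.items():
        if element in members:
            return group
    raise KeyError(f"{element} is not in this partial table")

print(goldschmidt_group("Ni"))  # -> siderophile
```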
The chemical composition of the Earth and other bodies is determined by two opposing processes: differentiation and mixing. In the Earth's mantle , differentiation occurs at mid-ocean ridges through partial melting , with more refractory materials remaining at the base of the lithosphere while the remainder rises to form basalt . After an oceanic plate descends into the mantle, convection eventually mixes the two parts together. Erosion differentiates granite , separating it into clay on the ocean floor, sandstone on the edge of the continent, and dissolved minerals in ocean waters. Metamorphism and anatexis (partial melting of crustal rocks) can mix these elements together again. In the ocean, biological organisms can cause chemical differentiation, while dissolution of the organisms and their wastes can mix the materials again. [ 1 ] : 23–24
A major source of differentiation is fractionation , an unequal distribution of elements and isotopes. This can be the result of chemical reactions, phase changes , kinetic effects, or radioactivity . [ 1 ] : 2–3 On the largest scale, planetary differentiation is a physical and chemical separation of a planet into chemically distinct regions. For example, the terrestrial planets formed iron-rich cores and silicate-rich mantles and crusts. [ 2 ] : 218 In the Earth's mantle, the primary source of chemical differentiation is partial melting , particularly near mid-ocean ridges. [ 16 ] : 68, 153 This can occur when the solid is heterogeneous or a solid solution , and part of the melt is separated from the solid. The process is known as equilibrium or batch melting if the solid and melt remain in equilibrium until the moment that the melt is removed, and fractional or Rayleigh melting if it is removed continuously. [ 17 ]
Isotopic fractionation can have mass-dependent and mass-independent forms. Molecules with heavier isotopes have lower ground state energies and are therefore more stable. As a result, chemical reactions show a small isotope dependence, with heavier isotopes preferring species or compounds with a higher oxidation state; and in phase changes, heavier isotopes tend to concentrate in the heavier phases. [ 18 ] Mass-dependent fractionation is largest in light elements because the difference in masses is a larger fraction of the total mass. [ 19 ] : 47
Ratios between isotopes are generally compared to a standard. For example, sulfur has four stable isotopes, of which the two most common are 32 S and 34 S. [ 19 ] : 98 The ratio of their concentrations, R = 34 S/ 32 S , is reported as

\[ \delta^{34}\mathrm{S} = \left( \frac{R}{R_{\mathrm{s}}} - 1 \right) \times 1000, \]

where \( R_{\mathrm{s}} \) is the same ratio for a standard. Because the differences are small, the ratio is multiplied by 1000 to make it parts per thousand (referred to as parts per mil). This is represented by the symbol ‰ . [ 18 ] : 55
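A minimal sketch of this δ-notation in code; the standard ratio below is only an assumed illustrative value, not an endorsed reference number:

```python
def delta_per_mil(r_sample: float, r_standard: float) -> float:
    """delta = (R_sample / R_standard - 1) * 1000, in parts per mil."""
    return (r_sample / r_standard - 1.0) * 1000.0

# Illustrative 34S/32S ratios: a sample slightly enriched in the
# heavy isotope relative to an assumed standard.
R_STANDARD = 0.0442000
R_SAMPLE = 0.0442884
print(f"{delta_per_mil(R_SAMPLE, R_STANDARD):+.2f} per mil")  # -> +2.00
```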
Equilibrium fractionation occurs between chemicals or phases that are in equilibrium with each other. In equilibrium fractionation between phases, heavier phases prefer the heavier isotopes. For two phases A and B, the effect can be represented by the factor

\[ \alpha_{A\text{-}B} = \frac{R_{A}}{R_{B}}. \]

In the liquid-vapor phase transition for water, \( \alpha_{l\text{-}v} \) at 20 degrees Celsius is 1.0098 for 18 O and 1.084 for 2 H. In general, fractionation is greater at lower temperatures: at 0 °C, the factors are 1.0117 and 1.111. [ 18 ] : 59
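For α close to 1, the per-mil difference between the two phases is approximately 1000(α - 1); a quick sketch checking this against the water values just quoted:

```python
def enrichment_per_mil(alpha: float) -> float:
    """Approximate per-mil difference between phases: 1000 * (alpha - 1)."""
    return 1000.0 * (alpha - 1.0)

# Liquid-vapor fractionation factors for water quoted in the text.
for label, alpha in [("18O at 20 C", 1.0098), ("2H at 20 C", 1.084),
                     ("18O at  0 C", 1.0117), ("2H at  0 C", 1.111)]:
    print(f"{label}: liquid enriched by ~{enrichment_per_mil(alpha):.1f} per mil")
# Fractionation is larger at 0 C than at 20 C, as the text notes.
```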
When there is no equilibrium between phases or chemical compounds, kinetic fractionation can occur. For example, at interfaces between liquid water and air, the forward reaction is enhanced if the humidity of the air is less than 100% or the water vapor is moved by a wind. Kinetic fractionation generally is enhanced compared to equilibrium fractionation and depends on factors such as reaction rate, reaction pathway and bond energy. Since lighter isotopes generally have weaker bonds, they tend to react faster and enrich the reaction products. [ 18 ] : 60
Biological fractionation is a form of kinetic fractionation since reactions tend to be in one direction. Biological organisms prefer lighter isotopes because there is a lower energy cost in breaking chemical bonds. In addition to the previously mentioned factors, the environment and species of the organism can have a large effect on the fractionation. [ 18 ] : 70
Through a variety of physical and chemical processes, chemical elements change in concentration and move around in what are called geochemical cycles . An understanding of these changes requires both detailed observation and theoretical models. Each chemical compound, element or isotope has a concentration that is a function C ( r , t ) of position and time, but it is impractical to model the full variability. Instead, in an approach borrowed from chemical engineering , [ 1 ] : 81 geochemists average the concentration over regions of the Earth called geochemical reservoirs . The choice of reservoir depends on the problem; for example, the ocean may be a single reservoir or be split into multiple reservoirs. [ 20 ] In a type of model called a box model , a reservoir is represented by a box with inputs and outputs. [ 1 ] : 81 [ 20 ]
Geochemical models generally involve feedback. In the simplest case of a linear cycle, either the input or the output from a reservoir is proportional to the concentration. For example, salt is removed from the ocean by formation of evaporites , and given a constant rate of evaporation in evaporite basins, the rate of removal of salt should be proportional to its concentration. For a given component C , if the input to a reservoir is a constant a and the output is kC for some constant k , then the mass balance equation is

\[ \frac{dC}{dt} = a - kC. \qquad (1) \]
This expresses the fact that any change in mass must be balanced by changes in the input or output. On a time scale of t = 1/ k , the system approaches a steady state in which \( C_{\text{steady}} = a/k \). The residence time is defined as

\[ \tau_{\mathrm{res}} = \frac{C_{\text{steady}}}{I} = \frac{C_{\text{steady}}}{O}, \]
where I and O are the input and output rates. In the above example, the steady-state input and output rates are both equal to a , so \( \tau_{\mathrm{res}} = 1/k \). [ 20 ]
If the input and output rates are nonlinear functions of C , they may still be closely balanced over time scales much greater than the residence time; otherwise, there will be large fluctuations in C . When they are closely balanced, the system is always close to a steady state, and the lowest-order expansion of the mass balance equation leads to a linear equation like Equation (1). In most systems, one or both of the input and output depend on C , resulting in feedback that tends to maintain the steady state. If an external forcing perturbs the system, it will return to the steady state on a time scale of 1/ k . [ 20 ]
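A minimal numerical sketch of the linear box model in Equation (1), with arbitrary illustrative rates, showing the approach to the steady state a/k on the 1/k time scale:

```python
def box_model(a: float, k: float, c0: float, dt: float, steps: int) -> list:
    """Integrate dC/dt = a - k*C with forward Euler steps."""
    c, history = c0, [c0]
    for _ in range(steps):
        c += (a - k * c) * dt
        history.append(c)
    return history

a, k = 2.0, 0.5  # constant input rate and first-order output constant
trajectory = box_model(a, k, c0=0.0, dt=0.1, steps=100)
print(f"steady state a/k = {a / k}; C at t = 10 is {trajectory[-1]:.3f}")
# -> C relaxes toward a/k = 4.0; the analytic solution is
#    C(t) = a/k + (c0 - a/k) * exp(-k * t), with time constant 1/k.
```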
The composition of the Solar System is similar to that of many other stars, and aside from small anomalies it can be assumed to have formed from a solar nebula that had a uniform composition, and the composition of the Sun 's photosphere is similar to that of the rest of the Solar System. The composition of the photosphere is determined by fitting the absorption lines in its spectrum to models of the Sun's atmosphere. [ 22 ] By far the largest two elements by fraction of total mass are hydrogen (74.9%) and helium (23.8%), with all the remaining elements contributing just 1.3%. [ 23 ] There is a general trend of exponential decrease in abundance with increasing atomic number, although elements with even atomic number are more common than their odd-numbered neighbors (the Oddo–Harkins rule ). Compared to the overall trend, lithium , boron and beryllium are depleted and iron is anomalously enriched. [ 24 ] : 284–285
The pattern of elemental abundance is mainly due to two factors. The hydrogen, helium, and some of the lithium were formed in about 20 minutes after the Big Bang , while the rest were created in the interiors of stars . [ 4 ] : 316–317
Meteorites come in a variety of compositions, but chemical analysis can determine whether they were once in planetesimals that melted or differentiated . [ 22 ] : 45 Chondrites are undifferentiated and have round mineral inclusions called chondrules . With ages of about 4.56 billion years, they date to the early Solar System . A particular kind, the CI chondrite , has a composition that closely matches that of the Sun's photosphere, except for depletion of some volatiles (H, He, C, N, O) and a group of elements (Li, B, Be) that are destroyed by nucleosynthesis in the Sun. [ 4 ] : 318 [ 22 ] Because of the latter group, CI chondrites are considered a better match for the composition of the early Solar System. Moreover, the chemical analysis of CI chondrites is more accurate than that of the photosphere, so it is generally used as the source for chemical abundances, despite their rarity (only five have been recovered on Earth). [ 22 ]
The planets of the Solar System are divided into two groups: the four inner planets are the terrestrial planets ( Mercury , Venus , Earth and Mars ), with relatively small sizes and rocky surfaces. The four outer planets are the giant planets , which are dominated by hydrogen and helium and have lower mean densities. These can be further subdivided into the gas giants ( Jupiter and Saturn ) and the ice giants ( Uranus and Neptune ) that have large icy cores. [ 25 ] : 26–27, 283–284
Most of our direct information on the composition of the giant planets is from spectroscopy . Since the 1930s, Jupiter has been known to contain hydrogen, methane and ammonia . In the 1960s, interferometry greatly increased the resolution and sensitivity of spectral analysis, allowing the identification of a much greater collection of molecules including ethane , acetylene , water and carbon monoxide . [ 26 ] : 138–139 However, Earth-based spectroscopy becomes increasingly difficult with more remote planets, since the reflected light of the Sun is much dimmer; and spectroscopic analysis of light from the planets can only be used to detect vibrations of molecules, which are in the infrared frequency range. This constrains the abundances of the elements H, C and N. [ 26 ] : 130 Two other elements are detected: phosphorus in the gas phosphine (PH 3 ) and germanium in germane (GeH 4 ). [ 26 ] : 131
The helium atom's spectral features lie in the ultraviolet range, which is strongly absorbed by the atmospheres of the outer planets and Earth. Thus, despite its abundance, helium was only detected once spacecraft were sent to the outer planets, and then only indirectly through collision-induced absorption in hydrogen molecules. [ 26 ] : 209 Further information on Jupiter was obtained from the Galileo probe when it was sent into the atmosphere in 1995; [ 27 ] [ 28 ] and the final mission of the Cassini probe in 2017 was to enter the atmosphere of Saturn. [ 29 ] In the atmosphere of Jupiter, He was found to be depleted by a factor of 2 compared to solar composition and Ne by a factor of 10, a surprising result since the other noble gases and the elements C, N and S were enhanced by factors of 2 to 4 (oxygen was also depleted, but this was attributed to the unusually dry region that Galileo sampled). [ 28 ]
Spectroscopic methods only penetrate the atmospheres of Jupiter and Saturn to depths where the pressure is about equal to 1 bar , approximately Earth's atmospheric pressure at sea level . [ 26 ] : 131 The Galileo probe penetrated to 22 bars. [ 28 ] This is a small fraction of the planet, which is expected to reach pressures of over 40 Mbar. To constrain the composition in the interior, thermodynamic models are constructed using the information on temperature from infrared emission spectra and equations of state for the likely compositions. [ 26 ] : 131 High-pressure experiments predict that hydrogen will be a metallic liquid in the interior of Jupiter and Saturn, while in Uranus and Neptune it remains in the molecular state. [ 26 ] : 135–136 Estimates also depend on models for the formation of the planets. Condensation of the presolar nebula would result in a gaseous planet with the same composition as the Sun, but the planets could also have formed when a solid core captured nebular gas. [ 26 ] : 136
In current models, the four giant planets have cores of rock and ice that are roughly the same size, but the proportion of hydrogen and helium decreases from about 300 Earth masses in Jupiter to 75 in Saturn and just a few in Uranus and Neptune. [ 26 ] : 220 Thus, while the gas giants are primarily composed of hydrogen and helium, the ice giants are primarily composed of heavier elements (O, C, N, S), primarily in the form of water, methane, and ammonia. The surfaces are cold enough for molecular hydrogen to be liquid, so much of each planet is likely a hydrogen ocean overlaying one of heavier compounds. [ 30 ] Outside the core, Jupiter has a mantle of liquid metallic hydrogen and an atmosphere of molecular hydrogen and helium. Metallic hydrogen does not mix well with helium, and in Saturn, it may form a separate layer below the metallic hydrogen. [ 26 ] : 138
Terrestrial planets are believed to have come from the same nebular material as the giant planets, but they have lost most of the lighter elements and have different histories. Planets closer to the Sun might be expected to have a higher fraction of refractory elements, but if their later stages of formation involved collisions of large objects with orbits that sampled different parts of the Solar System, there could be little systematic dependence on position. [ 31 ] : 3–4
Direct information on Mars, Venus and Mercury largely comes from spacecraft missions. Using gamma-ray spectrometers , the composition of the crust of Mars has been measured by the Mars Odyssey orbiter, [ 32 ] the crust of Venus by some of the Venera missions to Venus, [ 31 ] and the crust of Mercury by the MESSENGER spacecraft. [ 33 ] Additional information on Mars comes from meteorites that have landed on Earth (the Shergottites , Nakhlites , and Chassignites , collectively known as SNC meteorites). [ 34 ] : 124 Abundances are also constrained by the masses of the planets, while the internal distribution of elements is constrained by their moments of inertia. [ 4 ] : 334
The planets condensed from the solar nebula, and much of the detail of their composition is determined by fractionation as they cooled. The phases that condense fall into five groups. First to condense are materials rich in refractory elements such as Ca and Al. These are followed by nickel and iron, then magnesium silicates . Below about 700 kelvins (700 K), FeS and volatile-rich metals and silicates form a fourth group, and in the fifth group FeO enters the magnesium silicates. [ 35 ] The compositions of the planets and the Moon are chondritic , meaning that within each group the ratios between elements are the same as in carbonaceous chondrites. [ 4 ] : 334
The estimates of planetary compositions depend on the model used. In the equilibrium condensation model, each planet was formed from a feeding zone in which the compositions of solids were determined by the temperature in that zone. Thus, Mercury formed at 1400 K, where iron remained in a pure metallic form and there was little magnesium or silicon in solid form; Venus at 900 K, so all the magnesium and silicon condensed; Earth at 600 K, so it contains FeS and silicates; and Mars at 450 K, so FeO was incorporated into magnesium silicates. The greatest problem with this theory is that volatiles would not condense, so the planets would have no atmospheres and the Earth no ocean. [ 4 ] : 335–336
In chondritic mixing models, the compositions of chondrites are used to estimate planetary compositions. For example, one model mixes two components, one with the composition of C1 chondrites and one with just the refractory components of C1 chondrites. [ 4 ] : 337 In another model, the abundances of the five fractionation groups are estimated using an index element for each group. For the most refractory group, uranium is used; iron for the second; the ratios of potassium and thallium to uranium for the next two; and the molar ratio FeO/(FeO+ MgO ) for the last. Using thermal and seismic models along with heat flow and density, Fe can be constrained to within 10 percent on Earth, Venus, and Mercury. U can be constrained within about 30% on Earth, but its abundance on other planets is based on "educated guesses". One difficulty with this model is that there may be significant errors in its prediction of volatile abundances because some volatiles are only partially condensed. [ 35 ] [ 4 ] : 337–338
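The two-component mixing idea can be made concrete: pick an index element, solve for the mixing fraction that reproduces its bulk abundance, then use that fraction to predict other elements. A hedged sketch with made-up endmember concentrations (none of these numbers are real chondrite data):

```python
def mixing_fraction(c_bulk: float, c_volatile: float, c_refractory: float) -> float:
    """Solve c_bulk = f * c_volatile + (1 - f) * c_refractory for f."""
    return (c_bulk - c_refractory) / (c_volatile - c_refractory)

def predict(f: float, c_volatile: float, c_refractory: float) -> float:
    """Predict another element's bulk concentration from the same f."""
    return f * c_volatile + (1.0 - f) * c_refractory

# Made-up endmember concentrations (arbitrary units, purely illustrative):
K_VOL, K_REF = 550.0, 0.0    # index element carried by the volatile component
U_VOL, U_REF = 0.008, 0.030  # element enriched in the refractory component

f = mixing_fraction(c_bulk=330.0, c_volatile=K_VOL, c_refractory=K_REF)
print(f"mixing fraction f = {f:.2f}; predicted bulk U = {predict(f, U_VOL, U_REF):.4f}")
# -> f = 0.60; predicted bulk U = 0.0168 (in the same arbitrary units)
```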
The more common rock constituents are nearly all oxides ; chlorides , sulfides and fluorides are the only important exceptions to this and their total amount in any rock is usually much less than 1%. By 1911, F. W. Clarke had calculated that a little more than 47% of the Earth's crust consists of oxygen . It occurs principally in combination as oxides, of which the chief are silica , alumina , iron oxides , and the oxides of calcium, magnesium, sodium, and potassium (lime, magnesia, soda, and potash). The silica functions principally as an acid, forming silicates, and all the commonest minerals of igneous rocks are of this nature. From a computation based on 1672 analyses of numerous kinds of rocks Clarke arrived at the following as the average percentage composition of the Earth's crust: SiO 2 =59.71, Al 2 O 3 =15.41, Fe 2 O 3 =2.63, FeO=3.52, MgO=4.36, CaO=4.90, Na 2 O=3.55, K 2 O=2.80, H 2 O=1.52, TiO 2 =0.60, P 2 O 5 =0.22 (total 99.22%). All the other constituents occur only in very small quantities, usually much less than 1%. [ 36 ]
These oxides combine in a haphazard way. For example, potash ( potassium oxide ) and soda ( sodium oxide ) combine to produce feldspars . In some cases, they may take other forms, such as nepheline , leucite , and muscovite , but in the great majority of instances they are found as feldspar. Phosphoric acid with lime ( calcium oxide ) forms apatite . Titanium dioxide with ferrous oxide gives rise to ilmenite . Part of the lime forms lime feldspar. Magnesia and iron oxides with silica crystallize as olivine or enstatite , or with alumina and lime form the complex ferromagnesian silicates of which the pyroxenes , amphiboles , and biotites are the chief. Any excess of silica above what is required to neutralize the bases will separate out as quartz ; excess of alumina crystallizes as corundum . These must be regarded only as general tendencies. It is possible, by rock analysis, to say approximately what minerals the rock contains, but there are numerous exceptions to any rule. [ 36 ]
Except in acid or siliceous igneous rocks containing greater than 66% of silica , known as felsic rocks, quartz is not abundant in igneous rocks. In basic rocks (containing 20% of silica or less), referred to as mafic rocks, it is rare for quartz to be present. If magnesium and iron are above average while silica is low, olivine may be expected; where silica is present in greater quantity, ferromagnesian minerals such as augite , hornblende , enstatite or biotite occur rather than olivine. Unless potash is high and silica relatively low, leucite will not be present, for leucite does not occur with free quartz. Nepheline , likewise, is usually found in rocks with much soda and comparatively little silica. With high alkalis , soda-bearing pyroxenes and amphiboles may be present. The lower the percentage of silica and alkalis, the greater is the prevalence of plagioclase feldspar as contrasted with soda or potash feldspar. [ 36 ]
Earth's crust is composed of 90% silicate minerals , and their abundance in the Earth's crust is as follows: plagioclase feldspar (39%), alkali feldspar (12%), quartz (12%), pyroxene (11%), amphiboles (5%), micas (5%), clay minerals (5%); the remaining silicate minerals make up another 3% of Earth's crust. Only 8% of the Earth's crust is composed of non-silicate minerals such as carbonates , oxides , and sulfides . [ 37 ]
The other determining factor, namely the physical conditions attending consolidation, plays, on the whole, a smaller part, yet is by no means negligible. Certain minerals are practically confined to deep-seated intrusive rocks, e.g., microcline, muscovite, diallage. Leucite is very rare in plutonic masses; many minerals have special peculiarities in microscopic character according to whether they crystallized at depth or near the surface, e.g., hypersthene, orthoclase, quartz. There are some curious instances of rocks having the same chemical composition, but consisting of entirely different minerals, e.g., the hornblendite of Gran, in Norway, which contains only hornblende, has the same composition as some of the camptonites of the same locality that contain feldspar and hornblende of a different variety. In this connection, we may repeat what has been said above about the corrosion of porphyritic minerals in igneous rocks. In rhyolites and trachytes, early crystals of hornblende and biotite may be found in great numbers partially converted into augite and magnetite. Hornblende and biotite were stable under the pressures and other conditions below the surface, but unstable at higher levels. In the ground-mass of these rocks, augite is almost universally present. But the plutonic representatives of the same magma, granite, and syenite contain biotite and hornblende far more commonly than augite. [ 36 ]
Those rocks that contain the most silica, and on crystallizing yield free quartz, form a group generally designated the "felsic" rocks. Those again that contain the least silica and most magnesia and iron, so that quartz is absent while olivine is usually abundant, form the "mafic" group. The "intermediate" rocks include those characterized by the general absence of both quartz and olivine. An important subdivision of these contains a very high percentage of alkalis, especially soda, and consequently has minerals such as nepheline and leucite not common in other rocks. It is often separated from the others as the "alkali" or "soda" rocks, and there is a corresponding series of mafic rocks. Lastly, a small sub-group rich in olivine and without feldspar has been called the "ultramafic" rocks. They have very low percentages of silica but much iron and magnesia.
Except for these last, practically all rocks contain feldspars or feldspathoid minerals. In the acid rocks, the common feldspars are orthoclase, perthite, microcline, and oligoclase—all having much silica and alkalis. In the mafic rocks labradorite, anorthite, and bytownite prevail, being rich in lime and poor in silica, potash, and soda. Augite is the most common ferromagnesian mineral in mafic rocks, but biotite and hornblende are on the whole more frequent in felsic rocks. [ 36 ]
Rocks that contain leucite or nepheline, either partly or wholly replacing feldspar, are not included in this table. They are essentially of intermediate or of mafic character. We might in consequence regard them as varieties of syenite, diorite, gabbro, etc., in which feldspathoid minerals occur, and indeed there are many transitions between syenites of ordinary type and nepheline — or leucite — syenite, and between gabbro or dolerite and theralite or essexite. But, as many minerals develop in these "alkali" rocks that are uncommon elsewhere, it is convenient in a purely formal classification like that outlined here to treat the whole assemblage as a distinct series. [ 36 ]
This classification is based essentially on the mineralogical constitution of the igneous rocks. Any chemical distinctions between the different groups, though implied, are relegated to a subordinate position. It is admittedly artificial, but it has grown up with the growth of the science and is still adopted as the basis on which more minute subdivisions are erected. The subdivisions are by no means of equal value. The syenites, for example, and the peridotites, are far less important than the granites, diorites, and gabbros. Moreover, the effusive andesites do not always correspond to the plutonic diorites but partly also to the gabbros. As the different kinds of rock, regarded as aggregates of minerals, pass gradually into one another, transitional types are very common and are often so important as to receive special names. The quartz-syenites and nordmarkites may be interposed between granite and syenite, the tonalites and adamellites between granite and diorite, the monzonites between syenite and diorite, norites and hyperites between diorite and gabbro, and so on. [ 36 ]
Trace metals readily form complexes with major ions in the ocean, including hydroxide , carbonate , and chloride , and their chemical speciation changes depending on whether the environment is oxidized or reduced . [ 38 ] Benjamin (2002) defines complexes of metals with more than one type of ligand , other than water, as mixed-ligand complexes. In some cases, a ligand contains more than one donor atom, forming very strong complexes, also called chelates (the ligand is the chelator). One of the most common chelators is EDTA ( ethylenediaminetetraacetic acid ), which can replace six molecules of water and form strong bonds with metals that have a plus-two charge. [ 39 ] With stronger complexation, lower activity of the free metal ion is observed. One consequence of the lower reactivity of complexed metals, compared to the same concentration of free metal, is that chelation tends to stabilize metals in aqueous solution rather than in solids. [ 39 ]
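The quantitative effect of complexation on the free-ion activity can be seen with a one-line mass balance. The sketch below assumes a single 1:1 metal-ligand equilibrium M + L ⇌ ML with excess ligand; the stability constant and concentrations are illustrative placeholders, not measured seawater values.

```python
# Free-metal fraction under 1:1 complexation, M + L <-> ML, with
# conditional stability constant K = [ML] / ([M][L]).  Assuming the
# ligand is in excess, mass balance on the metal gives
#   [M] / M_total = 1 / (1 + K*[L]).
# K and [L] below are illustrative values only.

def free_metal_fraction(K, free_ligand):
    return 1.0 / (1.0 + K * free_ligand)

K_chelate = 1e16    # hypothetical conditional constant, EDTA-like chelator
ligand = 1e-8       # mol/L free chelator (assumed)
print(free_metal_fraction(K_chelate, ligand))  # ~1e-8: free ion nearly gone
print(free_metal_fraction(1e3, ligand))        # weak ligand: fraction ~1
```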
Concentrations of the trace metals cadmium , copper , molybdenum , manganese , rhenium , uranium and vanadium in sediments record the redox history of the oceans. [ 40 ] Within aquatic environments, cadmium(II) can either be in the form CdCl + (aq) in oxic waters or CdS(s) in a reduced environment. Thus, higher concentrations of Cd in marine sediments may indicate low redox potential conditions in the past. For copper(II), a prevalent form is CuCl + (aq) within oxic environments and CuS(s) and Cu 2 S within reduced environments. The reduced seawater environment leads to two possible oxidation states of copper, Cu(I) and Cu(II). [ 40 ] Molybdenum is present in the Mo(VI) oxidation state as MoO 4 2− (aq) in oxic environments. Mo(V) and Mo(IV) are present in reduced environments in the forms MoO 2 + (aq) and MoS 2 (s). [ 40 ] Rhenium is present in the Re(VII) oxidation state as ReO 4 − within oxic conditions, but is reduced to Re(IV), which may form ReO 2 or ReS 2 . Uranium is in oxidation state VI in UO 2 (CO 3 ) 3 4− (aq) and is found in the reduced form UO 2 (s). [ 40 ] Vanadium in oxidation state V(V) occurs in several forms: HVO 4 2− and H 2 VO 4 − . Its reduced forms can include VO 2 + , VO(OH) 3 − , and V(OH) 3 . [ 40 ] The relative dominance of these species depends on pH .
In the water column of the ocean or deep lakes, vertical profiles of dissolved trace metals are characterized as following conservative-type, nutrient-type, or scavenged-type distributions. Across these three distributions, trace metals have different residence times and are used to varying extents by planktonic microorganisms. Trace metals with conservative-type distributions have high concentrations relative to their biological use. One example of a trace metal with a conservative-type distribution is molybdenum. It has a residence time within the oceans of around 8 × 10 5 years and is generally present as the molybdate anion (MoO 4 2− ). Molybdenum interacts weakly with particles and displays an almost uniform vertical profile in the ocean. Relative to the abundance of molybdenum in the ocean, the amount required as a metal cofactor for enzymes in marine phytoplankton is negligible. [ 41 ]
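The residence time quoted here has a simple steady-state meaning: the dissolved inventory divided by the input (or removal) flux. The sketch below applies that definition; the inventory and flux numbers are placeholders chosen only to reproduce the quoted order of magnitude, not measured molybdenum data.

```python
# Steady-state residence time: tau = inventory / flux.
# Both input numbers are assumed values for illustration.

def residence_time(inventory_mol, flux_mol_per_yr):
    return inventory_mol / flux_mol_per_yr

inventory = 1.5e17   # mol of dissolved metal in the whole ocean (assumed)
flux = 1.9e11        # mol/yr delivered by rivers (assumed)
print(f"{residence_time(inventory, flux):.1e} yr")  # ~8e5 yr: conservative-type
```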
Trace metals with nutrient-type distributions are strongly associated with the internal cycles of particulate organic matter, especially the assimilation by plankton. The lowest dissolved concentrations of these metals are at the surface of the ocean, where they are assimilated by plankton . As dissolution and decomposition occur at greater depths, concentrations of these trace metals increase. Residence times of these metals, such as zinc, are several thousand to one hundred thousand years. Finally, an example of a scavenged-type trace metal is aluminium , which has strong interactions with particles as well as a short residence time in the ocean. The residence times of scavenged-type trace metals are around 100 to 1000 years. The concentrations of these metals are highest around bottom sediments, hydrothermal vents , and rivers. For aluminium, atmospheric dust provides the greatest source of external inputs into the ocean. [ 41 ]
Iron and copper show hybrid distributions in the ocean. They are influenced by recycling and intense scavenging. Iron is a limiting nutrient in vast areas of the oceans and is found in high abundance along with manganese near hydrothermal vents. Here, many iron precipitates are found, mostly in the forms of iron sulfides and oxidized iron oxyhydroxide compounds. Concentrations of iron near hydrothermal vents can be up to one million times the concentrations found in the open ocean. [ 41 ]
Using electrochemical techniques, it is possible to show that bioactive trace metals (zinc, cobalt, cadmium, iron, and copper) are bound by organic ligands in surface seawater. These ligand complexes serve to lower the bioavailability of trace metals within the ocean. For example, copper, which may be toxic to open ocean phytoplankton and bacteria, can form organic complexes. The formation of these complexes reduces the concentrations of bioavailable inorganic complexes of copper that could be toxic to sea life at high concentrations. Unlike copper, zinc toxicity in marine phytoplankton is low and there is no advantage to increasing the organic binding of Zn 2+ . In high-nutrient, low-chlorophyll regions , iron is the limiting nutrient, with the dominant species being strong organic complexes of Fe(III). [ 41 ] | https://en.wikipedia.org/wiki/Geochemistry |
The geochemistry of carbon is the study of the transformations involving the element carbon within the systems of the Earth. To a large extent this study is organic geochemistry, but it also includes the very important inorganic species carbon dioxide. Carbon is transformed by life, and moves between the major reservoirs of the Earth, including the water bodies, the atmosphere, and the rocky parts. Carbon is important in the formation of organic mineral deposits such as coal, petroleum or natural gas. Most carbon is cycled through the atmosphere into living organisms and then respired back into the atmosphere. However, an important part of the carbon cycle involves the trapping of living matter in sediments . The carbon then becomes part of a sedimentary rock when lithification happens.
Human technology, or natural processes such as weathering, underground life, or water, can return the carbon from sedimentary rocks to the atmosphere. From that point it can be transformed in the rock cycle into metamorphic rocks, or melted into igneous rocks. Carbon can return to the surface of the Earth by volcanoes or via uplift in tectonic processes. Carbon is returned to the atmosphere via volcanic gases .
Carbon undergoes transformation in the mantle under pressure to diamond and other minerals, and also exists in the Earth's outer core in solution with iron, and may also be present in the inner core. [ 1 ]
Carbon can form a huge variety of stable compounds. It is an essential component of living matter.
Living organisms can survive only within a limited range of conditions on the Earth, set chiefly by temperature and the existence of liquid water. The potential habitability of other planets or moons can also be assessed by the existence of liquid water. [ 1 ]
Carbon makes up only 0.08% of the combination of the lithosphere , hydrosphere , and atmosphere , yet it is the twelfth most common element there. In the rock of the lithosphere, carbon commonly occurs as carbonate minerals containing calcium or magnesium. It is also found as the fossil fuels coal, petroleum and gas. Native forms of carbon are much rarer, requiring pressure to form. Pure carbon exists as graphite or diamond. [ 1 ]
The deeper parts of the Earth, such as the mantle, are very hard to investigate. Few samples are known, in the form of uplifted rocks or xenoliths, and even fewer reach the surface unchanged from the much higher pressures and temperatures at depth. Some diamonds retain inclusions at the pressures at which they formed, although the temperature at the surface is much lower. Iron meteorites may represent samples of the core of an asteroid, but one that formed under different conditions from the Earth's core. Therefore, experimental studies are conducted in which minerals or substances are compressed and heated to determine what happens under conditions similar to those of the planetary interior.
The two common isotopes of carbon are stable. On Earth, carbon-12 ( 12 C) is by far the most common at 98.894%. Carbon-13 ( 13 C) is much rarer, averaging 1.106%. This percentage can vary slightly, and its value is important in isotope geochemistry, where it can suggest the origin of the carbon. [ 1 ]
Carbon can be produced in stars at least as massive as the Sun by the fusion of three helium-4 nuclei: 4 He + 4 He + 4 He → 12 C. This is the triple-alpha process .
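The energy released by the triple-alpha process follows from the mass defect. The short check below uses the standard atomic masses (helium-4 at 4.002602 u, carbon-12 at exactly 12 u) and the conversion 1 u = 931.494 MeV/c².

```python
# Q-value of 3 He-4 -> C-12 from the mass defect.
m_he4 = 4.002602          # atomic mass of helium-4 in u
m_c12 = 12.0              # carbon-12, exact by definition of the unit
u_to_mev = 931.494        # MeV per u
q = (3 * m_he4 - m_c12) * u_to_mev
print(f"Q = {q:.2f} MeV")  # ~7.27 MeV released per carbon-12 nucleus formed
```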
In stars as massive as the Sun, carbon-12 is also converted to carbon-13, and then on to nitrogen-14, by fusion with protons: 12 C + 1 H → 13 C + e + + ν, then 13 C + 1 H → 14 N. In more massive stars, two carbon nuclei can fuse to magnesium , or a carbon and an oxygen to sulfur . [ 1 ]
In molecular clouds , simple carbon molecules are formed, including carbon monoxide and dicarbon . Reactions of these simple carbon molecules with the trihydrogen cation yield carbon-containing ions that readily react to form larger organic molecules. Carbon compounds that exist as ions, or as isolated gas molecules in the interstellar medium , can condense onto dust grains. Carbonaceous dust grains consist mostly of carbon. Grains can stick together to form larger aggregates. [ 1 ]
Meteorites and interplanetary dust show the composition of solid material at the start of the Solar System, as they have not been modified since its formation. Carbonaceous chondrites are meteorites with around 5% carbon compounds. Their composition resembles the Sun's, minus the very volatile elements like hydrogen and the noble gases.
The Earth is believed to have formed by the gravitational collapse of material like meteorites. [ 1 ]
Important effects on Earth in the Hadean Eon include strong solar winds during the T-Tauri stage of the Sun. The Moon-forming impact caused major changes to the surface. Juvenile volatiles outgassed from the early molten surface of the Earth. These included carbon dioxide and carbon monoxide. The emissions probably did not include methane, but the Earth was probably free of molecular oxygen. The Late Heavy Bombardment was between 4.0 and 3.8 billion years ago (Ga). To start with, the Earth did not have a crust as it does today. Plate tectonics in its present form commenced about 2.5 Ga. [ 1 ]
Early sedimentary rocks formed under water date to 3.8 Ga. Pillow lavas dating from 3.5 Ga prove the existence of oceans. Evidence of early life is given by fossils of stromatolites, and later by chemical tracers. [ 1 ]
Organic matter continues to be added to the Earth from space via interplanetary dust, which also includes some interstellar particles. The amounts added to the Earth were around 60,000 tonnes per year about 4 Ga. [ 1 ]
Biological sequestration of carbon causes enrichment of carbon-12, so that substances originating from living organisms have a higher carbon-12 content. Due to the kinetic isotope effect, chemical reactions can happen faster with lighter isotopes, so photosynthesis fixes the lighter carbon-12 faster than carbon-13. Lighter isotopes also diffuse across a biological membrane faster. Enrichment in carbon-13 is measured as δ 13 C (‰) = [( 13 C/ 12 C) sample / ( 13 C/ 12 C) standard − 1] × 1000.
The common standard for carbon is the Cretaceous Peedee Formation belemnite. [ 1 ]
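As a worked example of the δ13C definition above, the sketch below evaluates it in Python. The standard ratio is the commonly quoted Peedee belemnite (PDB) value; the sample ratio is an assumed one, chosen to represent 12C-enriched organic matter.

```python
# delta-13C in permil: (R_sample / R_standard - 1) * 1000, with R = 13C/12C.
R_STANDARD = 0.0112372            # commonly quoted 13C/12C of the PDB standard

def delta13C(r_sample):
    return (r_sample / R_STANDARD - 1.0) * 1000.0

# Assumed ratio for photosynthetically fractionated (12C-enriched) material:
print(f"{delta13C(0.01095):+.1f} permil")   # about -25.6 permil, organic-like
```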
Complex molecules, in particular those containing carbon, can occur as stereoisomers . Abiotic processes would be expected to produce them in equal proportions, but in carbonaceous chondrites this is not the case. The reasons for this are unknown. [ 1 ]
The outer layers of the Earth, the crust and the layers above it, contain about 10 20 kg of carbon. This is enough for each square metre of the surface to have 200 tonnes of carbon. [ 2 ]
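A quick arithmetic check of these two figures, taking Earth's total surface area to be about 5.1 × 10 14 m²:

```python
# 1e20 kg of carbon spread over Earth's surface area.
carbon_kg = 1e20
surface_m2 = 5.1e14                  # Earth's total surface area, m^2
print(f"{carbon_kg / surface_m2 / 1000:.0f} t/m^2")   # ~196 tonnes per m^2
```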
Carbon added to sedimentary rocks can take the form of carbonates, or organic carbon compounds. In order of source quantity, the organic carbon comes from phytoplankton, plants, bacteria and zooplankton. However, terrestrial sediments may be mostly from higher plants, and some oxygen-deficient sediments from water may be mostly bacteria. Fungi and other animals make insignificant contributions. [ 3 ] In the oceans the main contributor of organic matter to sediments is plankton, either dead fragments or faecal pellets termed marine snow. Bacteria degrade this matter in the water column, and the amount surviving to the ocean floor is inversely proportional to the depth. This is accompanied by biominerals consisting of silicates and carbonates. The particulate organic matter in sediments is about 20% known molecules and 80% material that cannot be analysed. Detritivores consume some of the fallen organic material. Aerobic bacteria and fungi also consume organic matter in the oxic surface parts of the sediment. Coarse-grained sediments are oxygenated to about half a metre, but fine-grained clays may have only a couple of millimetres exposed to oxygen. The organic matter in the oxygenated zone will become completely mineralized if it stays there long enough. [ 4 ]
Deeper in sediments, where oxygen is exhausted, anaerobic biological processes continue at a slower rate. These include anaerobic mineralization making ammonium , phosphate and sulfide ions; fermentation making short-chain alcohols, acids or methyl amines; acetogenesis making acetic acid ; methanogenesis making methane; and sulfate, nitrite and nitrate reduction. Carbon dioxide and hydrogen are also outputs. Under fresh water, sulfate is usually very low, so methanogenesis is more important. Yet other bacteria can convert methane back into living matter by oxidising it with other substrates. Bacteria can reside at great depths in sediments. However, sedimentary organic matter accumulates the indigestible components. [ 4 ]
Deep bacteria may be lithotrophes , using hydrogen as an energy source and carbon dioxide as a carbon source. [ 4 ]
In the oceans and other waters there is much dissolved organic material . This is several thousand years old on average, and is called gelbstoff ("yellow substance"), particularly in fresh waters. Much of it is tannins . The nitrogen-containing materials here appear to be amides, perhaps from peptidoglycans from bacteria. Microorganisms have trouble consuming the high-molecular-weight dissolved substances, but quickly consume small molecules. [ 4 ]
From terrestrial sources, black carbon produced by charring is an important component. Fungi are important decomposers in soil. [ 4 ]
Proteins are normally hydrolysed slowly even without enzymes or bacteria, with a half-life of 460 years, but can be preserved if they are desiccated, pickled or frozen. Being enclosed in bone also helps preservation. Over time the amino acids tend to racemize, and those with more functional groups are lost earlier. Protein will still degrade on the timescale of a million years. DNA degrades rapidly, lasting only about four years in water. Cellulose and chitin have a half-life in water at 25 °C of about 4.7 million years. Enzymes can accelerate this by a factor of 10 17 . About 10 11 tonnes of chitin are produced each year, but it is almost all degraded. [ 5 ]
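These half-lives convert to surviving fractions through the first-order decay law N/N 0 = (1/2) t/t½ ; a minimal sketch applying it to the figures above:

```python
# Fraction surviving first-order decay with a given half-life.
def surviving_fraction(t_years, half_life_years):
    return 0.5 ** (t_years / half_life_years)

print(surviving_fraction(10_000, 460))        # protein: ~3e-7 left after 10 kyr
print(surviving_fraction(1_000_000, 4.7e6))   # cellulose/chitin: ~0.86 after 1 Myr
```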
Lignin is only efficiently degraded by fungi that cause white rot or brown rot; these require oxygen. [ 5 ]
Lipids are hydrolysed to fatty acids over long time periods. Plant cuticle waxes are very difficult to degrade, and may survive over geological time periods. [ 5 ]
More organic matter is preserved in sediments if there is high primary production, or if the sediment is fine-grained. A lack of oxygen helps preservation greatly, and that in turn is caused by a large supply of organic matter. Soil does not usually preserve organic matter; it would need to be acidified or waterlogged, as in a bog. Rapid burial ensures the material gets to an oxygen-free depth, but it also dilutes the organic matter. A low-energy environment ensures the sediment is not stirred up and oxygenated. Salt marshes and mangroves meet some of these requirements, but unless the sea level is rising they will not have a chance to accumulate much. Coral reefs are very productive, but are well oxygenated, and recycle everything before it is buried. [ 5 ]
In dead Sphagnum , sphagnan, a polysaccharide with D-lyxo-5-hexosulouronic acid, is a major remaining substance. It makes the bog very acidic, so that bacteria cannot grow. In addition, the plant ensures there is no available nitrogen. Holocellulose also absorbs any digestive enzymes around. Together this leads to a major accumulation of peat under sphagnum bogs.
Earth's mantle is a significant reservoir of carbon. The mantle contains more carbon than the crust, oceans, biosphere, and atmosphere put together. The figure is estimated to be very roughly 10 22 kg. [ 2 ] Carbon concentration in the mantle is very variable, varying by more than a factor of 100 between different parts. [ 6 ] [ 7 ]
The form carbon takes depends on its oxidation state, which depends on the oxygen fugacity of the environment. Carbon dioxide and carbonate are found where the oxygen fugacity is high. Lower oxygen fugacity results in diamond formation, first in eclogite , then peridotite , and lastly in fluid water mixtures. At even lower oxygen fugacity, methane is stable in contact with water, and even lower, metallic iron and nickel form along with carbides. Iron carbides include Fe 3 C and Fe 7 C 3 . [ 8 ]
Minerals that contain carbon include calcite and its higher-density polymorphs. Other significant carbon minerals include magnesium and iron carbonates. Dolomite is stable above 100 km depth. Below 100 km, dolomite reacts with orthopyroxene (found in peridotite) to yield magnesite (an iron magnesium carbonate). [ 2 ] Below 200 km deep, carbon dioxide is reduced by ferrous iron (Fe 2+ ), forming diamond and ferric iron (Fe 3+ ). Even deeper, pressure-induced disproportionation of iron minerals produces more ferric iron, and metallic iron. The metallic iron combines with carbon to form the mineral cohenite, with formula Fe 3 C. Cohenite also contains some nickel substituting for iron. This form of carbon is called "carbide". [ 9 ] Diamond forms in the mantle below 150 km deep, but because it is so durable, it can survive in eruptions to the surface in kimberlites , lamproites , or ultramafic lamprophyres . [ 8 ]
Xenoliths can come from the mantle, and different compositions come from different depths. Above 90 km (3.2 GPa) spinel peridotite occurs, below this garnet peridotite is found. [ 2 ]
Inclusions trapped in diamond can reveal the material and conditions much deeper in the mantle. Large gem diamonds are usually formed in the transition zone of the mantle (410 to 660 km deep) and crystallise from a molten iron-nickel-carbon solution that also contains sulfur and trace amounts of hydrogen, chromium, phosphorus and oxygen. Carbon atoms constitute about 12% of the melt (about 3% by mass). Inclusions of the crystallised metallic melt are sometimes trapped in diamonds. Diamond can be made to precipitate from the liquid metal by increasing pressure, or by adding sulfur. [ 10 ]
Fluid inclusions in crystals from the mantle have contents that most often are liquid carbon dioxide , but which also include carbon oxysulfide , methane and carbon monoxide . [ 6 ]
Material is added by subduction from the crust. This includes the major carbon-containing sediments such as limestone or coal. Each year 2×10 11 kg of CO 2 is transferred from the crust to the mantle by subduction (about 1,700 kg of carbon per second). [ 2 ]
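A unit-conversion check of this flux, using the molar masses of carbon and CO2:

```python
# 2e11 kg of CO2 per year, expressed as kg of carbon per second.
SECONDS_PER_YEAR = 3.156e7
co2_kg_per_yr = 2e11
carbon_per_co2 = 12.011 / 44.009          # mass fraction of C in CO2
print(f"{co2_kg_per_yr * carbon_per_co2 / SECONDS_PER_YEAR:.0f} kg C/s")  # ~1700
```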
Upwelling mantle material can add to the crust at mid-ocean ridges. Fluids can extract carbon from the mantle and erupt in volcanoes. At 330 km deep, a liquid consisting of carbon dioxide and water can form. It is highly corrosive, and dissolves incompatible elements from the solid mantle. These elements include uranium, thorium, potassium, helium and argon. The fluids can then go on to cause metasomatism or reach the surface in carbonatite eruptions. [ 11 ] The total mid-ocean ridge and hot-spot volcanic emissions of carbon dioxide match the loss due to subduction: 2×10 11 kg of CO 2 per year. [ 2 ]
In slowly convecting mantle rocks, diamond that rises above 150 km will gradually turn into graphite or be oxidised to carbon dioxide or carbonate minerals. [ 8 ]
Earth's core is believed to be mostly an alloy of iron and nickel . Its density indicates that it also contains a significant amount of lighter elements. Elements such as hydrogen would be stable in the Earth's core; however, the conditions at the formation of the core would not have been suitable for its inclusion. Carbon is a very likely constituent of the core. [ 12 ] Preferential partitioning of the carbon isotope 12 C into the metallic core during its formation may explain why there seems to be more 13 C on the surface and in the mantle of the Earth compared to other Solar System bodies (−5‰ compared to −20‰). The difference can also help to predict the value of the carbon proportion of the core. [ 12 ]
The outer core has a density around 11 g cm −3 and a mass of 1.3×10 24 kg. It contains roughly 10 22 kg of carbon. Carbon dissolved in liquid iron affects the solubility of other elements. Dissolved carbon changes lead from a siderophile to a lithophile . It has the opposite effect on tungsten and molybdenum , causing more tungsten or molybdenum to dissolve in the metallic phase. [ 12 ] The measured amounts of these elements in the rocks compared to the Solar System can be explained by a 0.6% carbon composition of the core. [ 12 ]
The inner core is about 1221 km in radius. It has a density of 13 g cm −3 , a total mass of 9×10 22 kg and a surface area of 18,000,000 square kilometres. Experiments with mixtures under pressure and temperature attempt to reproduce the known properties of the inner and outer core. Carbides are among the first to precipitate from a molten metal mix, and so the inner core may be mostly iron carbides, Fe 7 C 3 or Fe 3 C . [ 12 ] At atmospheric pressure (100 kPa) the iron-Fe 3 C eutectic point is at 4.1% carbon. This percentage decreases as pressure increases to around 50 GPa. Above that pressure, the percentage of carbon at the eutectic increases. [ 12 ] The pressure on the inner core ranges from 330 GPa to 360 GPa at the centre of the Earth. The temperature at the inner-core surface is about 6000 K. The material of the inner core must be stable at the pressure and temperature found there, and denser than the outer core liquid. Extrapolations show that either Fe 3 C or Fe 7 C 3 matches the requirements. [ 12 ] Fe 7 C 3 is 8.4% carbon, and Fe 3 C is 6.7% carbon. The inner core is growing by about 1 mm per year, or adding about 18 cubic kilometres per year. This is about 18×10 12 kg of carbon added to the inner core every year. It contains about 8×10 21 kg of carbon.
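A consistency check of these inner-core figures from the radius and density alone; the 6.7% carbon fraction assumes an Fe3C composition, as discussed above.

```python
# Inner-core mass, area, growth and carbon budget from r = 1221 km, rho = 13 g/cm^3.
import math

r = 1221e3                           # radius, m
rho = 13000                          # density, kg/m^3
volume = 4 / 3 * math.pi * r**3
area = 4 * math.pi * r**2
print(f"mass ~ {rho * volume:.1e} kg")        # ~9.9e22, close to the quoted 9e22
print(f"area ~ {area / 1e6:.2e} km^2")        # ~1.9e7 km^2, i.e. ~18 million

growth = area * 1e-3                 # 1 mm/yr of growth over the surface, m^3/yr
print(f"growth ~ {growth / 1e9:.0f} km^3/yr") # ~19 km^3 added per year
print(f"C added ~ {rho * growth * 0.067:.1e} kg/yr")  # ~1.6e13, same order as 18e12
```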
In order to determine the fate of natural carbon-containing substances deep in the Earth, experiments have been conducted to see what happens when high pressures and/or temperatures are applied. Such substances include carbon dioxide, carbon monoxide, graphite , methane , other hydrocarbons such as benzene , carbon dioxide–water mixtures, and carbonate minerals such as calcite , magnesium carbonate , or ferrous carbonate . Under very high pressures, carbon may take on a higher coordination number than the four found in sp 3 compounds like diamond, or the three found in carbonates. Perhaps carbon can substitute into silicates, or form a silicon oxycarbide . [ 13 ] Carbides may be possible. [ 14 ]
At 15 GPa graphite changes to a hard transparent form that is not diamond. Diamond is very resistant to pressure, but at about 1 TPa (1,000 GPa) transforms to a BC-8 form . [ 14 ] These conditions, however, are not found in the Earth.
Carbides are predicted to be more likely lower in the mantle, as experiments have shown a much lower oxygen fugacity for high-pressure iron silicates. Cohenite remains stable to over 187 GPa, but is predicted to have a denser orthorhombic Cmcm form in the inner core. [ 14 ]
Under a pressure of 0.3 GPa, carbon dioxide is stable at room temperature in the same form as dry ice. Over 0.5 GPa, carbon dioxide forms a number of different molecular solid forms. At pressures over 40 GPa and high temperatures, carbon dioxide forms a covalent solid that contains CO 4 tetrahedra and has the same structure as β- cristobalite . This is called phase V or CO 2 -V. When CO 2 -V is subjected to high temperatures, or higher pressures, experiments show it breaks down to form diamond and oxygen. In the mantle, the geotherm means that carbon dioxide would be a liquid until a pressure of 33 GPa, then adopt the solid CO 2 -V form until 43 GPa; deeper than that it would make diamond and fluid oxygen. [ 14 ]
High-pressure carbon monoxide forms a high-energy covalent polycarbonyl solid; however, it is not expected to be present inside the Earth. [ 14 ]
Under 1.59 GPa of pressure at 25 °C, methane converts to a cubic solid in which the molecules are rotationally disordered. Over 5.25 GPa, however, the molecules become locked into position and cannot spin. Other hydrocarbons under high pressure have hardly been studied. [ 14 ]
Calcite changes to calcite-II and calcite-III at pressures of 1.5 and 2.2 GPa. Siderite undergoes a chemical change at 10 GPa and 1800 K to form Fe 4 O 5 . Dolomite decomposes at 7 GPa and below 1000 °C to yield aragonite and magnesite . However, there are forms of iron-bearing dolomite stable at higher pressures and temperatures. Over 130 GPa, aragonite transforms to sp 3 tetrahedrally connected carbon in a covalent network with a C222 1 structure. Magnesite can survive 80 GPa, but above about 100 GPa (as at a depth of 1800 km) it changes to forms with three-membered rings of CO 4 tetrahedra (C 3 O 9 6− ). If iron is present in this mineral, at these pressures it will convert to magnetite and diamond. Melted carbonates with sp 3 carbon are predicted to be very viscous. [ 14 ]
Some minerals that contain both silicate and carbonate exist: spurrite and tilleyite . But their high-pressure forms have not been studied. There have been attempts to make a silicon carbonate . [ 14 ] Six-coordinated silicates mixed with carbonate should not exist on Earth, but may exist on more massive planets. [ 14 ] | https://en.wikipedia.org/wiki/Geochemistry_of_carbon |
Geochimica et Cosmochimica Acta ( Latin for 'Geochemical and Cosmochemical Journal') is a biweekly peer-reviewed scientific journal published by Elsevier . It was established in 1950 and is sponsored by the Geochemical Society and the Meteoritical Society . The editor-in-chief is Jeffrey Catalano ( Washington University in St. Louis ). The journal covers topics in Earth geochemistry , planetary geochemistry, cosmochemistry and meteoritics .
Publishing formats include original research articles, invited reviews, and occasional editorials , book reviews , and announcements. In addition, the journal publishes short comments (4 pages) targeting specific articles, designed to improve understanding of the target article by advocating a different interpretation supported by the literature, followed by a response from the author.
The journal is abstracted and indexed in several bibliographic databases. [ 1 ]
According to the Journal Citation Reports , the journal has a 2021 impact factor of 5.921. [ 2 ] | https://en.wikipedia.org/wiki/Geochimica_et_Cosmochimica_Acta |
Geochores ( Greek γῆ gé "the earth" and χώρα chora "area") are relatively large landscape areas with similar – but owing to their size not fully uniform – characteristics. They therefore consist of a tapestry of smaller landscape units , which can be hierarchically grouped: | https://en.wikipedia.org/wiki/Geochore |
In general relativity , if two objects are set in motion along two initially parallel trajectories, the presence of a tidal gravitational force will cause the trajectories to bend towards or away from each other, producing a relative acceleration between the objects. [ 1 ]
Mathematically, the tidal force in general relativity is described by the Riemann curvature tensor , [ 1 ] and the trajectory of an object solely under the influence of gravity is called a geodesic . The geodesic deviation equation relates the Riemann curvature tensor to the relative acceleration of two neighboring geodesics. In differential geometry , the geodesic deviation equation is more commonly known as the Jacobi equation .
To quantify geodesic deviation, one begins by setting up a family of closely spaced geodesics indexed by a continuous variable s and parametrized by an affine parameter τ . That is, for each fixed s , the curve swept out by γ s ( τ ) as τ varies is a geodesic. When considering the geodesic of a massive object, it is often convenient to choose τ to be the object's proper time . If x μ ( s , τ ) are the coordinates of the geodesic γ s ( τ ) , then the tangent vector of this geodesic is T μ = ∂ x μ / ∂ τ {\displaystyle T^{\mu }={\frac {\partial x^{\mu }}{\partial \tau }}} .
If τ is the proper time, then T μ is the four-velocity of the object traveling along the geodesic.
One can also define a deviation vector , which is the displacement of two objects travelling along two infinitesimally separated geodesics: X μ = ∂ x μ / ∂ s {\displaystyle X^{\mu }={\frac {\partial x^{\mu }}{\partial s}}} .
The relative acceleration A μ of the two objects is defined, roughly, as the second derivative of the separation vector X μ as the objects advance along their respective geodesics. Specifically, A μ is found by taking the directional covariant derivative of X along T twice: A μ = T α ∇ α ( T β ∇ β X μ ) {\displaystyle A^{\mu }=T^{\alpha }\nabla _{\alpha }\left(T^{\beta }\nabla _{\beta }X^{\mu }\right)} .
The geodesic deviation equation relates A μ , T μ , X μ , and the Riemann tensor R μ νρσ : [ 2 ] [ 3 ] A μ = − R μ ν ρ σ T ν X ρ T σ {\displaystyle A^{\mu }=-R^{\mu }{}_{\nu \rho \sigma }T^{\nu }X^{\rho }T^{\sigma }} .
An alternate notation for the directional covariant derivative T α ∇ α {\displaystyle T^{\alpha }\nabla _{\alpha }} is D / d τ {\displaystyle D/d\tau } , so the geodesic deviation equation may also be written as D 2 X μ / d τ 2 = − R μ ν ρ σ T ν X ρ T σ {\displaystyle {\frac {D^{2}X^{\mu }}{d\tau ^{2}}}=-R^{\mu }{}_{\nu \rho \sigma }T^{\nu }X^{\rho }T^{\sigma }} .
The geodesic deviation equation can be derived from the second variation of the point particle Lagrangian along geodesics, or from the first variation of a combined Lagrangian. The Lagrangian approach has two advantages. First, it allows various formal approaches of quantization to be applied to the geodesic deviation system. Second, it allows deviation to be formulated for much more general objects than geodesics (any dynamical system which has a one spacetime indexed momentum appears to have a corresponding generalization of geodesic deviation).
The connection between geodesic deviation and tidal acceleration can be seen more explicitly by examining geodesic deviation in the weak-field limit , where the metric is approximately Minkowski, and the velocities of test particles are assumed to be much less than c . Then the tangent vector T μ is approximately (1, 0, 0, 0); i.e., only the timelike component is nonzero.
The spatial components of the relative acceleration are then given by A i = − R i 0 j 0 X j {\displaystyle A^{i}=-R^{i}{}_{0j0}X^{j}} ,
where i and j run only over the spatial indices 1, 2, and 3.
In the particular case of a metric corresponding to the Newtonian potential Φ( x , y , z ) of a massive object at x = y = z = 0, we have R i 0 j 0 = ∂ 2 Φ / ∂ x i ∂ x j {\displaystyle R^{i}{}_{0j0}={\frac {\partial ^{2}\Phi }{\partial x^{i}\partial x^{j}}}} ,
which is the tidal tensor of the Newtonian potential. | https://en.wikipedia.org/wiki/Geodesic_deviation |
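The weak-field result can be made concrete by computing the tidal tensor symbolically for a point mass with Φ = −GM/r. The sketch below uses sympy; it recovers the familiar pattern of stretching along the separation direction and squeezing transverse to it.

```python
# Newtonian tidal tensor d^2 Phi / dx^i dx^j for Phi = -G*M/r, so that the
# relative acceleration of nearby free falls is A^i = -(tidal)_{ij} X^j.
import sympy as sp

x, y, z, G, M = sp.symbols('x y z G M', positive=True)
r = sp.sqrt(x**2 + y**2 + z**2)
Phi = -G * M / r
coords = (x, y, z)
tidal = sp.Matrix(3, 3, lambda i, j: sp.diff(Phi, coords[i], coords[j]))

# Evaluate on the x-axis (y = z = 0): diag(-2GM/x^3, GM/x^3, GM/x^3),
# i.e. radial stretching and transverse squeezing.
print(sp.simplify(tidal.subs({y: 0, z: 0})))
```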
In general relativity , a geodesic generalizes the notion of a "straight line" to curved spacetime . Importantly, the world line of a particle free from all external, non-gravitational forces is a particular type of geodesic. In other words, a freely moving or falling particle always moves along a geodesic.
In general relativity, gravity can be regarded as not a force but a consequence of a curved spacetime geometry where the source of curvature is the stress–energy tensor (representing matter, for instance). Thus, for example, the path of a planet orbiting a star is the projection of a geodesic of the curved four-dimensional (4-D) spacetime geometry around the star onto three-dimensional (3-D) space.
The full geodesic equation is d 2 x μ d s 2 + Γ μ α β d x α d s d x β d s = 0 {\displaystyle {d^{2}x^{\mu } \over ds^{2}}+\Gamma ^{\mu }{}_{\alpha \beta }{dx^{\alpha } \over ds}{dx^{\beta } \over ds}=0\ } where s is a scalar parameter of motion (e.g. the proper time ), and Γ μ α β {\displaystyle \Gamma ^{\mu }{}_{\alpha \beta }} are Christoffel symbols (sometimes called the affine connection coefficients or Levi-Civita connection coefficients) symmetric in the two lower indices. Greek indices may take the values: 0, 1, 2, 3 and the summation convention is used for repeated indices α {\displaystyle \alpha } and β {\displaystyle \beta } . The quantity on the left-hand-side of this equation is the acceleration of a particle, so this equation is analogous to Newton's laws of motion , which likewise provide formulae for the acceleration of a particle. The Christoffel symbols are functions of the four spacetime coordinates and so are independent of the velocity or acceleration or other characteristics of a test particle whose motion is described by the geodesic equation.
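As a computational aside, the Christoffel symbols that enter this equation can be generated directly from a metric. The sketch below does so symbolically with sympy for the Schwarzschild metric in units G = c = 1; the metric choice and variable names are purely illustrative.

```python
# Christoffel symbols Gamma^mu_{ab} = (1/2) g^{mu tau} (d_a g_{tau b}
# + d_b g_{tau a} - d_tau g_{ab}) for the Schwarzschild metric (G = c = 1).
import sympy as sp

t, r, th, ph, M = sp.symbols('t r theta phi M', positive=True)
x = [t, r, th, ph]
f = 1 - 2 * M / r
g = sp.diag(-f, 1 / f, r**2, r**2 * sp.sin(th)**2)   # metric g_{mu nu}
ginv = g.inv()

def christoffel(mu, a, b):
    return sp.simplify(sum(
        sp.Rational(1, 2) * ginv[mu, tau]
        * (sp.diff(g[tau, a], x[b]) + sp.diff(g[tau, b], x[a]) - sp.diff(g[a, b], x[tau]))
        for tau in range(4)))

print(christoffel(1, 0, 0))   # Gamma^r_{tt} = M*(r - 2M)/r^3
print(christoffel(0, 0, 1))   # Gamma^t_{tr} = M/(r*(r - 2M))
```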
So far the geodesic equation of motion has been written in terms of a scalar parameter s . It can alternatively be written in terms of the time coordinate, t ≡ x 0 {\displaystyle t\equiv x^{0}} (here we have used the triple bar to signify a definition). The geodesic equation of motion then becomes: d 2 x μ d t 2 = − Γ μ α β d x α d t d x β d t + Γ 0 α β d x α d t d x β d t d x μ d t . {\displaystyle {d^{2}x^{\mu } \over dt^{2}}=-\Gamma ^{\mu }{}_{\alpha \beta }{dx^{\alpha } \over dt}{dx^{\beta } \over dt}+\Gamma ^{0}{}_{\alpha \beta }{dx^{\alpha } \over dt}{dx^{\beta } \over dt}{dx^{\mu } \over dt}\ .}
This formulation of the geodesic equation of motion can be useful for computer calculations and to compare General Relativity with Newtonian Gravity. [ 1 ] It is straightforward to derive this form of the geodesic equation of motion from the form which uses proper time as a parameter using the chain rule . Notice that both sides of this last equation vanish when the mu index is set to zero. If the particle's velocity is small enough, then the geodesic equation reduces to this: d 2 x n d t 2 = − Γ n 00 . {\displaystyle {d^{2}x^{n} \over dt^{2}}=-\Gamma ^{n}{}_{00}.}
Here the Latin index n takes the values [1,2,3]. This equation simply means that all test particles at a particular place and time will have the same acceleration, which is a well-known feature of Newtonian gravity. For example, everything floating around in the International Space Station will undergo roughly the same acceleration due to gravity.
Physicist Steven Weinberg has presented a derivation of the geodesic equation of motion directly from the equivalence principle . [ 2 ] The first step in such a derivation is to suppose that a free falling particle does not accelerate in the neighborhood of a point-event with respect to a freely falling coordinate system ( X μ {\displaystyle X^{\mu }} ). Setting T ≡ X 0 {\displaystyle T\equiv X^{0}} , we have the following equation that is locally applicable in free fall: d 2 X μ d T 2 = 0. {\displaystyle {d^{2}X^{\mu } \over dT^{2}}=0.} The next step is to employ the multi-dimensional chain rule. We have: d X μ d T = d x ν d T ∂ X μ ∂ x ν {\displaystyle {dX^{\mu } \over dT}={dx^{\nu } \over dT}{\partial X^{\mu } \over \partial x^{\nu }}} Differentiating once more with respect to the time, we have: d 2 X μ d T 2 = d 2 x ν d T 2 ∂ X μ ∂ x ν + d x ν d T d x α d T ∂ 2 X μ ∂ x ν ∂ x α {\displaystyle {d^{2}X^{\mu } \over dT^{2}}={d^{2}x^{\nu } \over dT^{2}}{\partial X^{\mu } \over \partial x^{\nu }}+{dx^{\nu } \over dT}{dx^{\alpha } \over dT}{\partial ^{2}X^{\mu } \over \partial x^{\nu }\partial x^{\alpha }}} We have already said that the left-hand-side of this last equation must vanish because of the Equivalence Principle. Therefore: d 2 x ν d T 2 ∂ X μ ∂ x ν = − d x ν d T d x α d T ∂ 2 X μ ∂ x ν ∂ x α {\displaystyle {d^{2}x^{\nu } \over dT^{2}}{\partial X^{\mu } \over \partial x^{\nu }}=-{dx^{\nu } \over dT}{dx^{\alpha } \over dT}{\partial ^{2}X^{\mu } \over \partial x^{\nu }\partial x^{\alpha }}} Multiply both sides of this last equation by the following quantity: ∂ x λ ∂ X μ {\displaystyle {\partial x^{\lambda } \over \partial X^{\mu }}} Consequently, we have this: d 2 x λ d T 2 = − d x ν d T d x α d T [ ∂ 2 X μ ∂ x ν ∂ x α ∂ x λ ∂ X μ ] . {\displaystyle {d^{2}x^{\lambda } \over dT^{2}}=-{dx^{\nu } \over dT}{dx^{\alpha } \over dT}\left[{\partial ^{2}X^{\mu } \over \partial x^{\nu }\partial x^{\alpha }}{\partial x^{\lambda } \over \partial X^{\mu }}\right].}
Weinberg defines the affine connection as follows: [ 3 ] Γ λ ν α = [ ∂ 2 X μ ∂ x ν ∂ x α ∂ x λ ∂ X μ ] {\displaystyle \Gamma ^{\lambda }{}_{\nu \alpha }=\left[{\partial ^{2}X^{\mu } \over \partial x^{\nu }\partial x^{\alpha }}{\partial x^{\lambda } \over \partial X^{\mu }}\right]} which leads to this formula: d 2 x λ d T 2 = − Γ ν α λ d x ν d T d x α d T . {\displaystyle {d^{2}x^{\lambda } \over dT^{2}}=-\Gamma _{\nu \alpha }^{\lambda }{dx^{\nu } \over dT}{dx^{\alpha } \over dT}.}
Notice that, if we had used the proper time “s” as the parameter of motion, instead of using the locally inertial time coordinate “T”, then our derivation of the geodesic equation of motion would be complete. In any event, let us continue by applying the one-dimensional chain rule : d 2 x λ d t 2 ( d t d T ) 2 + d x λ d t d 2 t d T 2 = − Γ ν α λ d x ν d t d x α d t ( d t d T ) 2 . {\displaystyle {d^{2}x^{\lambda } \over dt^{2}}\left({\frac {dt}{dT}}\right)^{2}+{dx^{\lambda } \over dt}{\frac {d^{2}t}{dT^{2}}}=-\Gamma _{\nu \alpha }^{\lambda }{dx^{\nu } \over dt}{dx^{\alpha } \over dt}\left({\frac {dt}{dT}}\right)^{2}.} d 2 x λ d t 2 + d x λ d t d 2 t d T 2 ( d T d t ) 2 = − Γ ν α λ d x ν d t d x α d t . {\displaystyle {d^{2}x^{\lambda } \over dt^{2}}+{dx^{\lambda } \over dt}{\frac {d^{2}t}{dT^{2}}}\left({\frac {dT}{dt}}\right)^{2}=-\Gamma _{\nu \alpha }^{\lambda }{dx^{\nu } \over dt}{dx^{\alpha } \over dt}.}
As before, we can set t ≡ x 0 {\displaystyle t\equiv x^{0}} . Then the first derivative of x 0 with respect to t is one and the second derivative is zero. Replacing λ with zero gives: d 2 t d T 2 ( d T d t ) 2 = − Γ ν α 0 d x ν d t d x α d t . {\displaystyle {\frac {d^{2}t}{dT^{2}}}\left({\frac {dT}{dt}}\right)^{2}=-\Gamma _{\nu \alpha }^{0}{dx^{\nu } \over dt}{dx^{\alpha } \over dt}.}
Subtracting d x λ / d t times this from the previous equation gives: d 2 x λ d t 2 = − Γ ν α λ d x ν d t d x α d t + Γ ν α 0 d x ν d t d x α d t d x λ d t {\displaystyle {d^{2}x^{\lambda } \over dt^{2}}=-\Gamma _{\nu \alpha }^{\lambda }{dx^{\nu } \over dt}{dx^{\alpha } \over dt}+\Gamma _{\nu \alpha }^{0}{dx^{\nu } \over dt}{dx^{\alpha } \over dt}{dx^{\lambda } \over dt}} which is a form of the geodesic equation of motion (using the coordinate time as parameter).
The geodesic equation of motion can alternatively be derived using the concept of parallel transport . [ 4 ]
We can (and this is the most common technique) derive the geodesic equation via the action principle. Consider the case of trying to find a geodesic between two timelike-separated events.
Let the action be S = ∫ d s {\displaystyle S=\int ds} where d s = − g μ ν ( x ) d x μ d x ν {\displaystyle ds={\sqrt {-g_{\mu \nu }(x)\,dx^{\mu }\,dx^{\nu }}}} is the line element . There is a negative sign inside the square root because the curve must be timelike. To get the geodesic equation we must vary this action. To do this let us parameterize this action with respect to a parameter λ {\displaystyle \lambda } . Doing this we get: S = ∫ − g μ ν d x μ d λ d x ν d λ d λ {\displaystyle S=\int {\sqrt {-g_{\mu \nu }{\frac {dx^{\mu }}{d\lambda }}{\frac {dx^{\nu }}{d\lambda }}}}\,d\lambda }
We can now go ahead and vary this action with respect to the curve x μ {\displaystyle x^{\mu }} . By the principle of least action we get: 0 = δ S = ∫ δ ( − g μ ν d x μ d λ d x ν d λ ) d λ = ∫ δ ( − g μ ν d x μ d λ d x ν d λ ) 2 − g μ ν d x μ d λ d x ν d λ d λ {\displaystyle 0=\delta S=\int \delta \left({\sqrt {-g_{\mu \nu }{\frac {dx^{\mu }}{d\lambda }}{\frac {dx^{\nu }}{d\lambda }}}}\right)\,d\lambda =\int {\frac {\delta \left(-g_{\mu \nu }{\frac {dx^{\mu }}{d\lambda }}{\frac {dx^{\nu }}{d\lambda }}\right)}{2{\sqrt {-g_{\mu \nu }{\frac {dx^{\mu }}{d\lambda }}{\frac {dx^{\nu }}{d\lambda }}}}}}d\lambda }
Using the product rule we get: 0 = ∫ ( d x μ d λ d x ν d τ δ g μ ν + g μ ν d δ x μ d λ d x ν d τ + g μ ν d x μ d τ d δ x ν d λ ) d λ = ∫ ( d x μ d λ d x ν d τ ∂ α g μ ν δ x α + 2 g μ ν d δ x μ d λ d x ν d τ ) d λ {\displaystyle 0=\int \left({\frac {dx^{\mu }}{d\lambda }}{\frac {dx^{\nu }}{d\tau }}\delta g_{\mu \nu }+g_{\mu \nu }{\frac {d\delta x^{\mu }}{d\lambda }}{\frac {dx^{\nu }}{d\tau }}+g_{\mu \nu }{\frac {dx^{\mu }}{d\tau }}{\frac {d\delta x^{\nu }}{d\lambda }}\right)\,d\lambda =\int \left({\frac {dx^{\mu }}{d\lambda }}{\frac {dx^{\nu }}{d\tau }}\partial _{\alpha }g_{\mu \nu }\delta x^{\alpha }+2g_{\mu \nu }{\frac {d\delta x^{\mu }}{d\lambda }}{\frac {dx^{\nu }}{d\tau }}\right)\,d\lambda } where d τ d λ = − g μ ν d x μ d λ d x ν d λ {\displaystyle {\frac {d\tau }{d\lambda }}={\sqrt {-g_{\mu \nu }{\frac {dx^{\mu }}{d\lambda }}{\frac {dx^{\nu }}{d\lambda }}}}}
Integrating by-parts the last term and dropping the total derivative (which equals to zero at the boundaries) we get that: 0 = ∫ ( d x μ d τ d x ν d τ ∂ α g μ ν δ x α − 2 δ x μ d d τ ( g μ ν d x ν d τ ) ) d τ = ∫ ( d x μ d τ d x ν d τ ∂ α g μ ν δ x α − 2 δ x μ ∂ α g μ ν d x α d τ d x ν d τ − 2 δ x μ g μ ν d 2 x ν d τ 2 ) d τ {\displaystyle 0=\int \left({\frac {dx^{\mu }}{d\tau }}{\frac {dx^{\nu }}{d\tau }}\partial _{\alpha }g_{\mu \nu }\delta x^{\alpha }-2\delta x^{\mu }{\frac {d}{d\tau }}\left(g_{\mu \nu }{\frac {dx^{\nu }}{d\tau }}\right)\right)\,d\tau =\int \left({\frac {dx^{\mu }}{d\tau }}{\frac {dx^{\nu }}{d\tau }}\partial _{\alpha }g_{\mu \nu }\delta x^{\alpha }-2\delta x^{\mu }\partial _{\alpha }g_{\mu \nu }{\frac {dx^{\alpha }}{d\tau }}{\frac {dx^{\nu }}{d\tau }}-2\delta x^{\mu }g_{\mu \nu }{\frac {d^{2}x^{\nu }}{d\tau ^{2}}}\right)\,d\tau }
Simplifying a bit we see that: 0 = ∫ ( − 2 g μ ν d 2 x ν d τ 2 + d x α d τ d x ν d τ ∂ μ g α ν − 2 d x α d τ d x ν d τ ∂ α g μ ν ) δ x μ d τ {\displaystyle 0=\int \left(-2g_{\mu \nu }{\frac {d^{2}x^{\nu }}{d\tau ^{2}}}+{\frac {dx^{\alpha }}{d\tau }}{\frac {dx^{\nu }}{d\tau }}\partial _{\mu }g_{\alpha \nu }-2{\frac {dx^{\alpha }}{d\tau }}{\frac {dx^{\nu }}{d\tau }}\partial _{\alpha }g_{\mu \nu }\right)\delta x^{\mu }d\tau } so, 0 = ∫ ( − 2 g μ ν d 2 x ν d τ 2 + d x α d τ d x ν d τ ∂ μ g α ν − d x α d τ d x ν d τ ∂ α g μ ν − d x ν d τ d x α d τ ∂ ν g μ α ) δ x μ d τ {\displaystyle 0=\int \left(-2g_{\mu \nu }{\frac {d^{2}x^{\nu }}{d\tau ^{2}}}+{\frac {dx^{\alpha }}{d\tau }}{\frac {dx^{\nu }}{d\tau }}\partial _{\mu }g_{\alpha \nu }-{\frac {dx^{\alpha }}{d\tau }}{\frac {dx^{\nu }}{d\tau }}\partial _{\alpha }g_{\mu \nu }-{\frac {dx^{\nu }}{d\tau }}{\frac {dx^{\alpha }}{d\tau }}\partial _{\nu }g_{\mu \alpha }\right)\delta x^{\mu }\,d\tau } multiplying this equation by − 1 2 {\textstyle -{\frac {1}{2}}} we get: 0 = ∫ ( g μ ν d 2 x ν d τ 2 + 1 2 d x α d τ d x ν d τ ( ∂ α g μ ν + ∂ ν g μ α − ∂ μ g α ν ) ) δ x μ d τ {\displaystyle 0=\int \left(g_{\mu \nu }{\frac {d^{2}x^{\nu }}{d\tau ^{2}}}+{\frac {1}{2}}{\frac {dx^{\alpha }}{d\tau }}{\frac {dx^{\nu }}{d\tau }}\left(\partial _{\alpha }g_{\mu \nu }+\partial _{\nu }g_{\mu \alpha }-\partial _{\mu }g_{\alpha \nu }\right)\right)\delta x^{\mu }\,d\tau }
So by Hamilton's principle we find that the Euler–Lagrange equation is g μ ν d 2 x ν d τ 2 + 1 2 d x α d τ d x ν d τ ( ∂ α g μ ν + ∂ ν g μ α − ∂ μ g α ν ) = 0 {\displaystyle g_{\mu \nu }{\frac {d^{2}x^{\nu }}{d\tau ^{2}}}+{\frac {1}{2}}{\frac {dx^{\alpha }}{d\tau }}{\frac {dx^{\nu }}{d\tau }}\left(\partial _{\alpha }g_{\mu \nu }+\partial _{\nu }g_{\mu \alpha }-\partial _{\mu }g_{\alpha \nu }\right)=0}
Multiplying by the inverse metric tensor g μ β {\displaystyle g^{\mu \beta }} we get that d 2 x β d τ 2 + 1 2 g μ β ( ∂ α g μ ν + ∂ ν g μ α − ∂ μ g α ν ) d x α d τ d x ν d τ = 0 {\displaystyle {\frac {d^{2}x^{\beta }}{d\tau ^{2}}}+{\frac {1}{2}}g^{\mu \beta }\left(\partial _{\alpha }g_{\mu \nu }+\partial _{\nu }g_{\mu \alpha }-\partial _{\mu }g_{\alpha \nu }\right){\frac {dx^{\alpha }}{d\tau }}{\frac {dx^{\nu }}{d\tau }}=0}
Thus we get the geodesic equation: d 2 x β d τ 2 + Γ β α ν d x α d τ d x ν d τ = 0 {\displaystyle {\frac {d^{2}x^{\beta }}{d\tau ^{2}}}+\Gamma ^{\beta }{}_{\alpha \nu }{\frac {dx^{\alpha }}{d\tau }}{\frac {dx^{\nu }}{d\tau }}=0} with the Christoffel symbol defined in terms of the metric tensor as Γ β α ν = 1 2 g μ β ( ∂ α g μ ν + ∂ ν g μ α − ∂ μ g α ν ) {\displaystyle \Gamma ^{\beta }{}_{\alpha \nu }={\frac {1}{2}}g^{\mu \beta }\left(\partial _{\alpha }g_{\mu \nu }+\partial _{\nu }g_{\mu \alpha }-\partial _{\mu }g_{\alpha \nu }\right)}
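To see the derived equation in action, the sketch below integrates it numerically on the simplest curved example, the unit 2-sphere, whose only nonzero Christoffel symbols are Γ θ φφ = −sin θ cos θ and Γ φ θφ = cot θ. A trajectory launched along the equator stays on it, as a great circle should; the stepper and step size are arbitrary illustrative choices.

```python
# Geodesics on the unit 2-sphere: theta'' = sin(th)cos(th) phi'^2,
# phi'' = -2 cot(th) theta' phi', integrated with a plain RK4 stepper.
import math

def rhs(s):
    th, ph, vth, vph = s
    return (vth, vph,
            math.sin(th) * math.cos(th) * vph**2,
            -2.0 * math.cos(th) / math.sin(th) * vth * vph)

def rk4(s, h):
    k1 = rhs(s)
    k2 = rhs(tuple(a + 0.5 * h * b for a, b in zip(s, k1)))
    k3 = rhs(tuple(a + 0.5 * h * b for a, b in zip(s, k2)))
    k4 = rhs(tuple(a + h * b for a, b in zip(s, k3)))
    return tuple(a + h / 6 * (p + 2 * q + 2 * u + v)
                 for a, p, q, u, v in zip(s, k1, k2, k3, k4))

state = (math.pi / 2, 0.0, 0.0, 1.0)   # start on the equator, moving in phi
for _ in range(1000):
    state = rk4(state, 0.01)
print(state[0])   # stays at pi/2 up to roundoff: the equator is a geodesic
```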
(Note: Similar derivations, with minor amendments, can be used to produce analogous results for geodesics between light-like [ citation needed ] or space-like separated pairs of points.)
Albert Einstein believed that the geodesic equation of motion can be derived from the field equations for empty space , i.e. from the fact that the Ricci curvature vanishes. He wrote: [ 5 ]
It has been shown that this law of motion — generalized to the case of arbitrarily large gravitating masses — can be derived from the field equations of empty space alone. According to this derivation the law of motion is implied by the condition that the field be singular nowhere outside its generating mass points.
and [ 6 ]
One of the imperfections of the original relativistic theory of gravitation was that as a field theory it was not complete; it introduced the independent postulate that the law of motion of a particle is given by the equation of the geodesic.
A complete field theory knows only fields and not the concepts of particle and motion. For these must not exist independently from the field but are to be treated as part of it.
On the basis of the description of a particle without singularity, one has the possibility of a logically more satisfactory treatment of the combined problem: The problem of the field and that of the motion coincide.
Both physicists and philosophers have often repeated the assertion that the geodesic equation can be obtained from the field equations to describe the motion of a gravitational singularity , but this claim remains disputed. [ 7 ] According to David Malament , “Though the geodesic principle can be recovered as theorem in general relativity, it is not a consequence of Einstein’s equation (or the conservation principle) alone. Other assumptions are needed to derive the theorems in question.” [ 8 ] Less controversial is the notion that the field equations determine the motion of a fluid or dust, as distinguished from the motion of a point-singularity. [ 9 ]
In deriving the geodesic equation from the equivalence principle, it was assumed that particles in a local inertial coordinate system are not accelerating. However, in real life, the particles may be charged, and therefore may be accelerating locally in accordance with the Lorentz force . That is: d 2 X μ d s 2 = q m F μ β d X α d s η α β . {\displaystyle {d^{2}X^{\mu } \over ds^{2}}={q \over m}{F^{\mu \beta }}{dX^{\alpha } \over ds}{\eta _{\alpha \beta }}.} with η α β d X α d s d X β d s = − 1. {\displaystyle {\eta _{\alpha \beta }}{dX^{\alpha } \over ds}{dX^{\beta } \over ds}=-1.}
The Minkowski tensor η α β {\displaystyle \eta _{\alpha \beta }} is given by: η α β = ( − 1 0 0 0 0 1 0 0 0 0 1 0 0 0 0 1 ) {\displaystyle \eta _{\alpha \beta }={\begin{pmatrix}-1&0&0&0\\0&1&0&0\\0&0&1&0\\0&0&0&1\end{pmatrix}}}
These last three equations can be used as the starting point for the derivation of an equation of motion in General Relativity, instead of assuming that acceleration is zero in free fall. [ 2 ] Because the Minkowski tensor is involved here, it becomes necessary to introduce something called the metric tensor in General Relativity. The metric tensor g is symmetric, and locally reduces to the Minkowski tensor in free fall. The resulting equation of motion is as follows: [ 10 ] d 2 x μ d s 2 = − Γ μ α β d x α d s d x β d s + q m F μ β d x α d s g α β . {\displaystyle {d^{2}x^{\mu } \over ds^{2}}=-\Gamma ^{\mu }{}_{\alpha \beta }{dx^{\alpha } \over ds}{dx^{\beta } \over ds}\ +{q \over m}{F^{\mu \beta }}{dx^{\alpha } \over ds}{g_{\alpha \beta }}.} with g α β d x α d s d x β d s = − 1. {\displaystyle {g_{\alpha \beta }}{dx^{\alpha } \over ds}{dx^{\beta } \over ds}=-1.}
This last equation signifies that the particle is moving along a timelike geodesic; massless particles like the photon instead follow null geodesics (replace −1 with zero on the right-hand side of the last equation). It is important that the last two equations are consistent with each other, when the latter is differentiated with respect to proper time, and the following formula for the Christoffel symbols ensures that consistency: Γ λ α β = 1 2 g λ τ ( ∂ g τ α ∂ x β + ∂ g τ β ∂ x α − ∂ g α β ∂ x τ ) {\displaystyle \Gamma ^{\lambda }{}_{\alpha \beta }={\frac {1}{2}}g^{\lambda \tau }\left({\frac {\partial g_{\tau \alpha }}{\partial x^{\beta }}}+{\frac {\partial g_{\tau \beta }}{\partial x^{\alpha }}}-{\frac {\partial g_{\alpha \beta }}{\partial x^{\tau }}}\right)} This last equation does not involve the electromagnetic fields, and it is applicable even in the limit as the electromagnetic fields vanish. The letter g with superscripts refers to the inverse of the metric tensor. In General Relativity, indices of tensors are lowered and raised by contraction with the metric tensor or its inverse, respectively.
A geodesic between two events can also be described as the curve joining those two events which has a stationary interval (4-dimensional "length"). Stationary here is used in the sense in which that term is used in the calculus of variations , namely, that the interval along the curve varies minimally among curves that are nearby to the geodesic.
In Minkowski space there is only one geodesic that connects any given pair of events, and for a time-like geodesic, this is the curve with the longest proper time between the two events. In curved spacetime, it is possible for a pair of widely separated events to have more than one time-like geodesic between them. In such instances, the proper times along several geodesics will not in general be the same. For some geodesics in such instances, it is possible for a curve that connects the two events and is nearby to the geodesic to have either a longer or a shorter proper time than the geodesic. [ 11 ]
For a space-like geodesic through two events, there are always nearby curves which go through the two events that have either a longer or a shorter proper length than the geodesic, even in Minkowski space. In Minkowski space, the geodesic will be a straight line. Any curve that differs from the geodesic purely spatially ( i.e. does not change the time coordinate) in any inertial frame of reference will have a longer proper length than the geodesic, but a curve that differs from the geodesic purely temporally ( i.e. does not change the space coordinates) in such a frame of reference will have a shorter proper length.
The interval of a curve in spacetime is l = ∫ | g μ ν x ˙ μ x ˙ ν | d s . {\displaystyle l=\int {\sqrt {\left|g_{\mu \nu }{\dot {x}}^{\mu }{\dot {x}}^{\nu }\right|}}\,ds\ .}
Then, the Euler–Lagrange equation , d d s ∂ ∂ x ˙ α | g μ ν x ˙ μ x ˙ ν | = ∂ ∂ x α | g μ ν x ˙ μ x ˙ ν | , {\displaystyle {d \over ds}{\partial \over \partial {\dot {x}}^{\alpha }}{\sqrt {\left|g_{\mu \nu }{\dot {x}}^{\mu }{\dot {x}}^{\nu }\right|}}={\partial \over \partial x^{\alpha }}{\sqrt {\left|g_{\mu \nu }{\dot {x}}^{\mu }{\dot {x}}^{\nu }\right|}}\ ,} becomes, after some calculation, 2 ( Γ λ μ ν x ˙ μ x ˙ ν + x ¨ λ ) = U λ d d s ln | U ν U ν | , {\displaystyle 2\left(\Gamma ^{\lambda }{}_{\mu \nu }{\dot {x}}^{\mu }{\dot {x}}^{\nu }+{\ddot {x}}^{\lambda }\right)=U^{\lambda }{d \over ds}\ln |U_{\nu }U^{\nu }|\ ,} where U μ = x ˙ μ . {\displaystyle U^{\mu }={\dot {x}}^{\mu }.}
The goal being to find a curve for which the value of l = ∫ d τ = ∫ d τ d ϕ d ϕ = ∫ ( d τ ) 2 ( d ϕ ) 2 d ϕ = ∫ − g μ ν d x μ d x ν d ϕ d ϕ d ϕ = ∫ f d ϕ {\displaystyle l=\int d\tau =\int {d\tau \over d\phi }\,d\phi =\int {\sqrt {(d\tau )^{2} \over (d\phi )^{2}}}\,d\phi =\int {\sqrt {-g_{\mu \nu }dx^{\mu }dx^{\nu } \over d\phi \,d\phi }}\,d\phi =\int f\,d\phi } is stationary, where f = − g μ ν x ˙ μ x ˙ ν {\displaystyle f={\sqrt {-g_{\mu \nu }{\dot {x}}^{\mu }{\dot {x}}^{\nu }}}} such goal can be accomplished by calculating the Euler–Lagrange equation for f , which is d d τ ∂ f ∂ x ˙ λ = ∂ f ∂ x λ . {\displaystyle {d \over d\tau }{\partial f \over \partial {\dot {x}}^{\lambda }}={\partial f \over \partial x^{\lambda }}.}
Substituting the expression of f into the Euler–Lagrange equation (which makes the value of the integral l stationary), gives d d τ ∂ − g μ ν x ˙ μ x ˙ ν ∂ x ˙ λ = ∂ − g μ ν x ˙ μ x ˙ ν ∂ x λ {\displaystyle {d \over d\tau }{\partial {\sqrt {-g_{\mu \nu }{\dot {x}}^{\mu }{\dot {x}}^{\nu }}} \over \partial {\dot {x}}^{\lambda }}={\partial {\sqrt {-g_{\mu \nu }{\dot {x}}^{\mu }{\dot {x}}^{\nu }}} \over \partial x^{\lambda }}}
Now calculate the derivatives: {\displaystyle {\begin{aligned}{d \over d\tau }\left({-g_{\mu \nu }{\partial {\dot {x}}^{\mu } \over \partial {\dot {x}}^{\lambda }}{\dot {x}}^{\nu }-g_{\mu \nu }{\dot {x}}^{\mu }{\partial {\dot {x}}^{\nu } \over \partial {\dot {x}}^{\lambda }} \over 2{\sqrt {-g_{\mu \nu }{\dot {x}}^{\mu }{\dot {x}}^{\nu }}}}\right)&={-g_{\mu \nu ,\lambda }{\dot {x}}^{\mu }{\dot {x}}^{\nu } \over 2{\sqrt {-g_{\mu \nu }{\dot {x}}^{\mu }{\dot {x}}^{\nu }}}}&&(1)\\[1ex]{d \over d\tau }\left({g_{\mu \nu }\delta ^{\mu }{}_{\lambda }{\dot {x}}^{\nu }+g_{\mu \nu }{\dot {x}}^{\mu }\delta ^{\nu }{}_{\lambda } \over 2{\sqrt {-g_{\mu \nu }{\dot {x}}^{\mu }{\dot {x}}^{\nu }}}}\right)&={g_{\mu \nu ,\lambda }{\dot {x}}^{\mu }{\dot {x}}^{\nu } \over 2{\sqrt {-g_{\mu \nu }{\dot {x}}^{\mu }{\dot {x}}^{\nu }}}}&&(2)\\[1ex]{d \over d\tau }\left({g_{\lambda \nu }{\dot {x}}^{\nu }+g_{\mu \lambda }{\dot {x}}^{\mu } \over {\sqrt {-g_{\mu \nu }{\dot {x}}^{\mu }{\dot {x}}^{\nu }}}}\right)&={g_{\mu \nu ,\lambda }{\dot {x}}^{\mu }{\dot {x}}^{\nu } \over {\sqrt {-g_{\mu \nu }{\dot {x}}^{\mu }{\dot {x}}^{\nu }}}}&&(3)\\[1ex]{{\sqrt {-g_{\mu \nu }{\dot {x}}^{\mu }{\dot {x}}^{\nu }}}{d \over d\tau }(g_{\lambda \nu }{\dot {x}}^{\nu }+g_{\mu \lambda }{\dot {x}}^{\mu })-(g_{\lambda \nu }{\dot {x}}^{\nu }+g_{\mu \lambda }{\dot {x}}^{\mu }){d \over d\tau }{\sqrt {-g_{\mu \nu }{\dot {x}}^{\mu }{\dot {x}}^{\nu }}} \over -g_{\mu \nu }{\dot {x}}^{\mu }{\dot {x}}^{\nu }}&={g_{\mu \nu ,\lambda }{\dot {x}}^{\mu }{\dot {x}}^{\nu } \over {\sqrt {-g_{\mu \nu }{\dot {x}}^{\mu }{\dot {x}}^{\nu }}}}&&(4)\\[1ex]{(-g_{\mu \nu }{\dot {x}}^{\mu }{\dot {x}}^{\nu }){d \over d\tau }(g_{\lambda \nu }{\dot {x}}^{\nu }+g_{\mu \lambda }{\dot {x}}^{\mu })+{1 \over 2}(g_{\lambda \nu }{\dot {x}}^{\nu }+g_{\mu \lambda }{\dot {x}}^{\mu }){d \over d\tau }(g_{\mu \nu }{\dot {x}}^{\mu }{\dot {x}}^{\nu }) \over -g_{\mu \nu }{\dot {x}}^{\mu }{\dot {x}}^{\nu }}&=g_{\mu \nu ,\lambda }{\dot {x}}^{\mu }{\dot {x}}^{\nu }&&(5)\end{aligned}}}
{\displaystyle {\begin{aligned}&(g_{\mu \nu }{\dot {x}}^{\mu }{\dot {x}}^{\nu })(g_{\lambda \nu ,\mu }{\dot {x}}^{\nu }{\dot {x}}^{\mu }+g_{\mu \lambda ,\nu }{\dot {x}}^{\mu }{\dot {x}}^{\nu }+g_{\lambda \nu }{\ddot {x}}^{\nu }+g_{\lambda \mu }{\ddot {x}}^{\mu })\\&=(g_{\mu \nu ,\lambda }{\dot {x}}^{\mu }{\dot {x}}^{\nu })(g_{\alpha \beta }{\dot {x}}^{\alpha }{\dot {x}}^{\beta })+{1 \over 2}(g_{\lambda \nu }{\dot {x}}^{\nu }+g_{\lambda \mu }{\dot {x}}^{\mu }){d \over d\tau }(g_{\mu \nu }{\dot {x}}^{\mu }{\dot {x}}^{\nu })\qquad \qquad (6)\end{aligned}}}
{\displaystyle g_{\lambda \nu ,\mu }{\dot {x}}^{\mu }{\dot {x}}^{\nu }+g_{\lambda \mu ,\nu }{\dot {x}}^{\mu }{\dot {x}}^{\nu }-g_{\mu \nu ,\lambda }{\dot {x}}^{\mu }{\dot {x}}^{\nu }+2g_{\lambda \mu }{\ddot {x}}^{\mu }={{\dot {x}}_{\lambda }{d \over d\tau }(g_{\mu \nu }{\dot {x}}^{\mu }{\dot {x}}^{\nu }) \over g_{\alpha \beta }{\dot {x}}^{\alpha }{\dot {x}}^{\beta }}\qquad \qquad (7)}
{\displaystyle 2(\Gamma _{\lambda \mu \nu }{\dot {x}}^{\mu }{\dot {x}}^{\nu }+{\ddot {x}}_{\lambda })={{\dot {x}}_{\lambda }{d \over d\tau }({\dot {x}}_{\nu }{\dot {x}}^{\nu }) \over {\dot {x}}_{\beta }{\dot {x}}^{\beta }}={U_{\lambda }{d \over d\tau }(U_{\nu }U^{\nu }) \over U_{\beta }U^{\beta }}=U_{\lambda }{d \over d\tau }\ln |U_{\nu }U^{\nu }|\qquad \qquad (8)}
This is just one step away from the geodesic equation.
If the parameter τ is chosen to be affine, then the right side of the above equation vanishes (because $U_{\nu}U^{\nu}$ is constant). Finally, we have the geodesic equation

$$
\Gamma^{\lambda}{}_{\mu\nu}\,\dot{x}^{\mu}\dot{x}^{\nu}+\ddot{x}^{\lambda}=0\,.
$$
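Numerically, the geodesic equation is just a coupled system of second-order ODEs driven by the Christoffel symbols. The sketch below is an illustration added here, not part of the derivation: the callback name `christoffel` and the flat-plane example metric are assumptions chosen for the demonstration.

```python
# Minimal sketch: integrate the geodesic equation
#   d^2 x^l / dtau^2 = -Gamma^l_{mn} dx^m/dtau dx^n/dtau
# `christoffel` is a hypothetical user-supplied callback returning
# Gamma[l][m][n] at the point x. Example: flat 2D polar coordinates,
# whose only nonzero symbols are Gamma^r_{ff} = -r and Gamma^f_{rf} = 1/r.
import numpy as np
from scipy.integrate import solve_ivp

def christoffel(x):
    r = x[0]
    G = np.zeros((2, 2, 2))
    G[0, 1, 1] = -r                     # Gamma^r_{phi phi}
    G[1, 0, 1] = G[1, 1, 0] = 1.0 / r   # Gamma^phi_{r phi} = Gamma^phi_{phi r}
    return G

def geodesic_rhs(tau, y):
    n = len(y) // 2
    x, v = y[:n], y[n:]
    G = christoffel(x)
    a = -np.einsum('lmn,m,n->l', G, v, v)  # acceleration from the equation
    return np.concatenate([v, a])

# Initial point r = 1, phi = 0 with velocity purely in the phi direction:
y0 = np.array([1.0, 0.0, 0.0, 1.0])
sol = solve_ivp(geodesic_rhs, (0.0, 2.0), y0, rtol=1e-9, dense_output=True)
```

Because the plane is flat, the integrated curve is a straight line expressed in polar coordinates (r(τ) = √(1 + τ²), φ(τ) = arctan τ), which provides a quick sanity check of the integrator.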
The geodesic equation can be alternatively derived from the autoparallel transport of curves. The derivation is based on the lectures given by Frederic P. Schuller at the We-Heraeus International Winter School on Gravity & Light.
Let $(M, O, A, \nabla)$ be a smooth manifold with connection and $\gamma$ be a curve on the manifold. The curve is said to be autoparallely transported if and only if $\nabla_{v_{\gamma}} v_{\gamma} = 0$.
In order to derive the geodesic equation, we have to choose a chart $(U, x) \in A$:

$$
\nabla_{\dot{\gamma}^{i}\frac{\partial}{\partial x^{i}}}\left(\dot{\gamma}^{m}\frac{\partial}{\partial x^{m}}\right)=0
$$

Using the $C^{\infty}$-linearity and the Leibniz rule:

$$
\dot{\gamma}^{i}\left(\nabla_{\frac{\partial}{\partial x^{i}}}\dot{\gamma}^{m}\right)\frac{\partial}{\partial x^{m}}+\dot{\gamma}^{i}\dot{\gamma}^{m}\nabla_{\frac{\partial}{\partial x^{i}}}\left(\frac{\partial}{\partial x^{m}}\right)=0
$$
Using how the connection acts on functions ($\dot{\gamma}^{m}$) and expanding the second term with the help of the connection coefficient functions:

$$
\dot{\gamma}^{i}\frac{\partial\dot{\gamma}^{m}}{\partial x^{i}}\frac{\partial}{\partial x^{m}}+\dot{\gamma}^{i}\dot{\gamma}^{m}\Gamma_{im}^{q}\frac{\partial}{\partial x^{q}}=0
$$
The first term can be simplified to $\ddot{\gamma}^{m}\frac{\partial}{\partial x^{m}}$. Renaming the dummy indices:

$$
\ddot{\gamma}^{q}\frac{\partial}{\partial x^{q}}+\dot{\gamma}^{i}\dot{\gamma}^{m}\Gamma_{im}^{q}\frac{\partial}{\partial x^{q}}=0
$$
We finally arrive at the geodesic equation:

$$
\ddot{\gamma}^{q}+\dot{\gamma}^{i}\dot{\gamma}^{m}\Gamma_{im}^{q}=0
$$

| https://en.wikipedia.org/wiki/Geodesics_in_general_relativity |
Geodesign is a set of concepts and methods [ 1 ] used to involve all stakeholders and various professions in collaboratively designing and realizing the optimal solution for spatial challenges in the built and natural environments , utilizing all available techniques and data in an integrated process. Originally, geodesign was mainly applied during the design and planning phase. "Geodesign is a design and planning method which tightly couples the creation of design proposals with impact simulations informed by geographic contexts." [ 2 ] Now, it is also used during realization and maintenance phases and to facilitate the re-use of, for example, buildings or industrial areas. [ 3 ] [ 4 ] Geodesign includes project conceptualization, analysis, design specification, stakeholder participation and collaboration, design creation, simulation, and evaluation (among other stages).
Geodesign builds greatly on a long history of work in geographic information science , computer-aided design , landscape architecture , and other environmental design fields. See for instance, the work of Ian McHarg and Carl Steinitz .
Members of the various disciplines and practices relevant to geodesign have held defining discussions at a workshop on Spatial Concepts in GIS and Design in December 2008 and the GeoDesign Summit in January 2010. GeoDesign Summit 2010 Conference Videos from Day 1 and Day 2 are an important resource to learn about the many different aspects of GeoDesign. ESRI co-founder Jack Dangermond has introduced each of the GeoDesign Summit meetings. Designer and technologist Bran Ferren was the keynote speaker for the first and fourth Summit meetings in Redlands, California. [ 5 ] During the fourth conference he presented a provocative argument that what is needed is a 250-year plan, with GeoDesign as a key concept in making this a reality. [ 6 ] Carl Steinitz was a presenter at both the 2010 [ 7 ] and 2015 [ 8 ] Summits.
The 2013 Geodesign Summit drew a record 260 attendees from the United States and abroad. That same year, a master's degree in Geodesign — the first of its kind in the nation — began at Philadelphia University . [ 9 ] Claudia Goetz Phillips, director of Landscape Architecture and GeoDesign at Philadelphia University, said "it is very exciting to be at the forefront of this exciting and relevant paradigm shift in how we address twenty-first-century global to local design and planning issues." [ 10 ]
The theory underpinning Geodesign derives from the work of Patrick Geddes in the first half of the twentieth century and Ian McHarg in its second half. They advocated a layered approach to regional planning, landscape planning and urban planning. McHarg drew the layers on translucent overlays. Through the work of Jack Dangermond , Carl Steinitz, Henk Scholten and others the layers were modeled with Geographical Information Systems (GIS). [ 11 ] The three components of this term each say something about its character. 'Geographical' implies that the layers are geographical (geology, soils, hydrology, roads, land use etc.). 'Information' implies a positivist and scientific methodology. 'System' implies the use of computer technology for the information processing. [ 12 ] The scientific aspects of Geodesign contrast with the cultural emphasis of Landscape Urbanism but the two approaches to landscape planning share a concern for layered analysis [ 13 ] which sits comfortably with postmodern and post-postmodern theory.
Nascent geodesign technology extends geographic information systems so that in addition to analyzing existing environments and geodata , users can synthesize new environments and modify geodata. See, for example, CommunityViz or marinemap .
"GeoDesign brings geographic analysis into the design process, where initial design sketches are instantly vetted for suitability against myriad database layers describing a variety of physical and social factors for the spatial extent of the project. This on-the-fly suitability analysis provides a framework for design, giving land-use planners, engineers, transportation planners, and others involved with design, the tools to leverage geographic information within their design workflows." [ 14 ] | https://en.wikipedia.org/wiki/Geodesign |
Geodesy or geodetics [ 1 ] is the science of measuring and representing the geometry , gravity , and spatial orientation of the Earth in temporally varying 3D . It is called planetary geodesy when studying other astronomical bodies , such as planets or circumplanetary systems . [ 2 ]
Geodynamical phenomena, including crustal motion, tides , and polar motion , can be studied by designing global and national control networks , applying space geodesy and terrestrial geodetic techniques, and relying on datums and coordinate systems .
Geodetic job titles include geodesist and geodetic surveyor . [ 3 ]
Geodesy began in pre-scientific antiquity , so the very word geodesy comes from the Ancient Greek word γεωδαισία or geodaisia (literally, "division of Earth"). [ 4 ]
Early ideas about the figure of the Earth held the Earth to be flat and the heavens a physical dome spanning over it. [ 5 ] Two early arguments for a spherical Earth were that lunar eclipses appear to an observer as circular shadows and that Polaris appears lower and lower in the sky to a traveler headed south. [ 6 ]
In English , geodesy refers to the science of measuring and representing geospatial information , while geomatics encompasses practical applications of geodesy on local and regional scales, including surveying .
In German , geodesy can refer to either higher geodesy ( höhere Geodäsie or Erdmessung , literally "geomensuration") — concerned with measuring Earth on the global scale, or engineering geodesy ( Ingenieurgeodäsie ) that includes surveying — measuring parts or regions of Earth.
For the longest time, geodesy was the science of measuring and understanding Earth's geometric shape, orientation in space, and gravitational field; however, geodetic science and operations are applied to other astronomical bodies in our Solar System also. [ 2 ]
To a large extent, Earth's shape is the result of rotation , which causes its equatorial bulge , and the competition of geological processes such as the collision of plates , as well as of volcanism , resisted by Earth's gravitational field. This applies to the solid surface, the liquid surface ( dynamic sea surface topography ), and Earth's atmosphere . For this reason, the study of Earth's gravitational field is called physical geodesy .
The geoid essentially is the figure of Earth abstracted from its topographical features. It is an idealized equilibrium surface of seawater , the mean sea level surface in the absence of currents and air pressure variations, and continued under the continental masses. Unlike a reference ellipsoid , the geoid is irregular and too complicated to serve as the computational surface for solving geometrical problems like point positioning. The geometrical separation between the geoid and a reference ellipsoid is called geoidal undulation , and it varies globally within ±110 m with respect to the GRS 80 ellipsoid.
A reference ellipsoid, customarily chosen to be the same size (volume) as the geoid, is described by its semi-major axis (equatorial radius) a and flattening f . The quantity f = ( a − b ) / a , where b is the semi-minor axis (polar radius), is purely geometrical. The mechanical ellipticity of Earth (dynamical flattening, symbol J 2 ) can be determined to high precision by observation of satellite orbit perturbations . Its relationship with geometrical flattening is indirect and depends on the internal density distribution or, in simplest terms, the degree of central concentration of mass.
The 1980 Geodetic Reference System ( GRS 80 ), adopted at the XVII General Assembly of the International Union of Geodesy and Geophysics ( IUGG ), posited a 6,378,137 m semi-major axis and a 1:298.257 flattening. GRS 80 essentially constitutes the basis for geodetic positioning by the Global Positioning System (GPS) and is thus also in widespread use outside the geodetic community. Numerous systems used for mapping and charting are becoming obsolete as countries increasingly move to global, geocentric reference systems utilizing the GRS 80 reference ellipsoid.
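For concreteness, the remaining geometric constants of the ellipsoid follow directly from the two defining numbers just quoted. A minimal sketch, illustrative only and using the rounded flattening value given in the text:

```python
# Derived geometry of the GRS 80 reference ellipsoid (illustrative sketch).
a = 6_378_137.0        # semi-major axis in metres (defining constant)
f = 1.0 / 298.257      # flattening, f = (a - b) / a

b = a * (1.0 - f)      # semi-minor (polar) axis: ~6,356,752 m
e2 = f * (2.0 - f)     # first eccentricity squared: ~0.006694

print(f"b = {b:.1f} m, e^2 = {e2:.9f}")
```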
The geoid is a "realizable" surface, meaning it can be consistently located on Earth by suitable simple measurements from physical objects like a tide gauge . The geoid can, therefore, be considered a physical ("real") surface. The reference ellipsoid, however, has many possible instantiations and is not readily realizable, so it is an abstract surface. The third primary surface of geodetic interest — the topographic surface of Earth — is also realizable.
The locations of points in 3D space most conveniently are described by three cartesian or rectangular coordinates, X , Y , and Z . Since the advent of satellite positioning, such coordinate systems are typically geocentric , with the Z-axis aligned to Earth's (conventional or instantaneous) rotation axis.
Before the era of satellite geodesy , the coordinate systems associated with a geodetic datum attempted to be geocentric , but with the origin differing from the geocenter by hundreds of meters due to regional deviations in the direction of the plumbline (vertical). These regional geodetic datums, such as ED 50 (European Datum 1950) or NAD 27 (North American Datum 1927), have ellipsoids associated with them that are regional "best fits" to the geoids within their areas of validity, minimizing the deflections of the vertical over these areas.
It is only because GPS satellites orbit about the geocenter that this point becomes naturally the origin of a coordinate system defined by satellite geodetic means, as the satellite positions in space themselves get computed within such a system.
Geocentric coordinate systems used in geodesy can be divided naturally into two classes: inertial (celestial) reference systems, which are fixed with respect to the stars, and co-rotating (terrestrial) reference systems, which rotate with the Earth.
The coordinate transformation between these two systems to good approximation is described by (apparent) sidereal time , which accounts for variations in Earth's axial rotation ( length-of-day variations). A more accurate description also accounts for polar motion as a phenomenon closely monitored by geodesists.
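A minimal sketch of that transformation, with polar motion and precession and nutation deliberately ignored (assumptions made here for brevity): the co-rotating coordinates follow from the celestial ones by a rotation about the Z-axis through the sidereal angle.

```python
# Sketch: celestial -> co-rotating (terrestrial) coordinates by a rotation
# about the Z-axis through the apparent sidereal angle theta; polar motion
# and precession/nutation are ignored in this illustration.
import numpy as np

def celestial_to_terrestrial(x_celestial, theta):
    c, s = np.cos(theta), np.sin(theta)
    r3 = np.array([[ c,   s, 0.0],
                   [-s,   c, 0.0],
                   [0.0, 0.0, 1.0]])
    return r3 @ np.asarray(x_celestial)
```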
In geodetic applications like surveying and mapping , two general types of coordinate systems in the plane are in use: plano-polar coordinates, which locate a point by a distance s and a direction (azimuth) α from a reference point, and rectangular coordinates x and y .
One can intuitively use rectangular coordinates in the plane for one's current location, in which case the x -axis will point to the local north. More formally, such coordinates can be obtained from 3D coordinates using the artifice of a map projection . It is impossible to map the curved surface of Earth onto a flat map surface without deformation. The compromise most often chosen — called a conformal projection — preserves angles and length ratios so that small circles get mapped as small circles and small squares as squares.
An example of such a projection is UTM ( Universal Transverse Mercator ). Within the map plane, we have rectangular coordinates x and y . In this case, the north direction used for reference is the map north, not the local north. The difference between the two is called meridian convergence .
It is easy enough to "translate" between polar and rectangular coordinates in the plane: let, as above, direction and distance be α and s respectively; then we have x = s cos α and y = s sin α.
The reverse transformation is given by s = √(x² + y²) and α = arctan(y/x).
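A small sketch of both conversions (illustrative only; in the surveying convention the x-axis points to local north and the azimuth α is reckoned clockwise from it, which is exactly what the formulas above express). Using atan2 rather than the bare arctangent avoids the quadrant ambiguity of arctan(y/x):

```python
# Plane polar <-> rectangular conversions as used in surveying:
# the x-axis points to local north and alpha is the azimuth from it.
import math

def polar_to_rect(s, alpha):
    return s * math.cos(alpha), s * math.sin(alpha)

def rect_to_polar(x, y):
    s = math.hypot(x, y)
    alpha = math.atan2(y, x) % (2.0 * math.pi)  # quadrant-safe azimuth
    return s, alpha
```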
In geodesy, point or terrain heights are " above sea level " as an irregular, physically defined surface.
Height systems in use are orthometric heights, normal heights, and geopotential numbers, which differ in how they are derived from levelling and gravity data.
Each system has its advantages and disadvantages. Both orthometric and normal heights are expressed in metres above sea level, whereas geopotential numbers are measures of potential energy (unit: m²·s⁻²) and not metric. The reference surface is the geoid , an equigeopotential surface approximating the mean sea level as described above. For normal heights, the reference surface is the so-called quasi-geoid , which has a few-metre separation from the geoid due to the density assumption in its continuation under the continental masses. [ 7 ]
One can relate these heights through the geoid undulation concept to ellipsoidal heights (also known as geodetic heights ), representing the height of a point above the reference ellipsoid . Satellite positioning receivers typically provide ellipsoidal heights unless fitted with special conversion software based on a model of the geoid.
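In symbols (a standard relation stated here for clarity, not quoted from the text): with N the geoid undulation, the ellipsoidal height h and the orthometric height H of a point are related to good approximation by

$$
h \approx H + N.
$$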
Because coordinates and heights of geodetic points always get obtained within a system that itself was constructed based on real-world observations, geodesists introduced the concept of a "geodetic datum" (plural datums ): a physical (real-world) realization of a coordinate system used for describing point locations. This realization follows from choosing (therefore conventional) coordinate values for one or more datum points. In the case of height data, it suffices to choose one datum point — the reference benchmark, typically a tide gauge at the shore. Thus we have vertical datums, such as the NAVD 88 (North American Vertical Datum 1988), NAP ( Normaal Amsterdams Peil ), the Kronstadt datum, the Trieste datum, and numerous others.
In both mathematics and geodesy, a coordinate system is a "coordinate system" per ISO terminology, whereas the International Earth Rotation and Reference Systems Service (IERS) uses the term "reference system" for the same. When coordinates are realized by choosing datum points and fixing a geodetic datum, ISO speaks of a "coordinate reference system", whereas IERS uses a "reference frame" for the same. The ISO term for a datum transformation again is a "coordinate transformation". [ 8 ]
General geopositioning , or simply positioning, is the determination of the location of points on Earth, by myriad techniques. Geodetic positioning employs geodetic methods to determine a set of precise geodetic coordinates of a point on land, at sea, or in space. It may be done within a coordinate system ( point positioning or absolute positioning ) or relative to another point ( relative positioning ). One computes the position of a point in space from measurements linking terrestrial or extraterrestrial points of known location ("known points") with terrestrial ones of unknown location ("unknown points"). The computation may involve transformations between or among astronomical and terrestrial coordinate systems. Known points used in point positioning can be GNSS continuously operating reference stations or triangulation points of a higher-order network.
Traditionally, geodesists built a hierarchy of networks to allow point positioning within a country. The highest in this hierarchy were triangulation networks, densified into the networks of traverses ( polygons ) into which local mapping and surveying measurements, usually collected using a measuring tape, a corner prism , and the red-and-white poles, are tied.
Commonly used nowadays is GPS, except for specialized measurements (e.g., in underground or high-precision engineering). The higher-order networks are measured with static GPS , using differential measurement to determine vectors between terrestrial points. These vectors then get adjusted in a traditional network fashion. A global polyhedron of permanently operating GPS stations under the auspices of the IERS is the basis for defining a single global, geocentric reference frame that serves as the "zero-order" (global) reference to which national measurements are attached.
Real-time kinematic positioning (RTK GPS) is employed frequently in survey mapping. In that measurement technique, unknown points can get quickly tied into nearby terrestrial known points.
One purpose of point positioning is the provision of known points for mapping measurements, also known as (horizontal and vertical) control. There can be thousands of those geodetically determined points in a country, usually documented by national mapping agencies. Surveyors involved in real estate and insurance will use these to tie their local measurements.
In geometrical geodesy, there are two main problems: the first (direct) geodetic problem, in which the location of a second point is computed from a known point together with the direction and distance to it, and the second (inverse) geodetic problem, in which the direction and distance between two given points are computed.
The solutions to both problems in plane geometry reduce to simple trigonometry and are valid for small areas on Earth's surface; on a sphere, solutions become significantly more complex as, for example, in the inverse problem, the azimuths differ going between the two end points along the arc of the connecting great circle .
The general solution is called the geodesic for the surface considered, and the differential equations for the geodesic are solvable numerically. On the ellipsoid of revolution, geodesics are expressible in terms of elliptic integrals, which are usually evaluated in terms of a series expansion — see, for example, Vincenty's formulae .
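For intuition, even the spherical approximation of the inverse problem already shows the azimuth behaviour described above. The sketch below is illustrative only, with an assumed mean Earth radius; it returns the great-circle distance and forward azimuth, which ellipsoidal solutions such as Vincenty's formulae refine:

```python
# Inverse geodetic problem on a sphere (illustrative approximation only;
# ellipsoidal solutions such as Vincenty's formulae refine this).
import math

R = 6_371_000.0  # mean Earth radius in metres (assumed for illustration)

def inverse_on_sphere(lat1, lon1, lat2, lon2):
    """Return (distance in metres, forward azimuth in radians); inputs in radians."""
    dlon = lon2 - lon1
    # Central angle via the haversine formula (numerically stable).
    h = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin(dlon / 2) ** 2)
    sigma = 2 * math.asin(math.sqrt(h))
    # Forward azimuth at the first point, clockwise from north.
    az = math.atan2(math.sin(dlon) * math.cos(lat2),
                    math.cos(lat1) * math.sin(lat2)
                    - math.sin(lat1) * math.cos(lat2) * math.cos(dlon))
    return R * sigma, az % (2 * math.pi)
```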
As defined in geodesy (and also astronomy ), some basic observational concepts like angles and coordinates include (most commonly from the viewpoint of a local observer):
The reference surface (level) used to determine height differences and height reference systems is known as mean sea level . The traditional spirit level directly produces such (for practical purposes most useful) heights above sea level ; the more economical use of GPS instruments for height determination requires precise knowledge of the figure of the geoid , as GPS only gives heights above the GRS80 reference ellipsoid. As geoid determination improves, one may expect that the use of GPS in height determination shall increase, too.
The theodolite is an instrument used to measure horizontal and vertical (relative to the local vertical) angles to target points. In addition, the tachymeter determines, electronically or electro-optically , the distance to a target and is highly automated or even robotic in operations. Widely used for the same purpose is the method of free station position.
Commonly for local detail surveys, tachymeters are employed, although the old-fashioned rectangular technique using an angle prism and steel tape is still an inexpensive alternative. As mentioned, also there are quick and relatively accurate real-time kinematic (RTK) GPS techniques. Data collected are tagged and recorded digitally for entry into Geographic Information System (GIS) databases.
Geodetic GNSS (most commonly GPS ) receivers directly produce 3D coordinates in a geocentric coordinate frame. One such frame is WGS84 , as well as frames by the International Earth Rotation and Reference Systems Service ( IERS ). GNSS receivers have almost completely replaced terrestrial instruments for large-scale base network surveys.
To monitor the Earth's rotation irregularities and plate tectonic motions and for planet-wide geodetic surveys, methods of very-long-baseline interferometry (VLBI) measuring distances to quasars , lunar laser ranging (LLR) measuring distances to prisms on the Moon, and satellite laser ranging (SLR) measuring distances to prisms on artificial satellites , are employed.
Gravity is measured using gravimeters , of which there are two kinds. First are absolute gravimeters , based on measuring the acceleration of free fall (e.g., of a reflecting prism in a vacuum tube ). They are used to establish vertical geospatial control or in the field. Second, relative gravimeters are spring-based and more common. They are used in gravity surveys over large areas — to establish the figure of the geoid over these areas. The most accurate relative gravimeters are called superconducting gravimeters , which are sensitive to one-thousandth of one-billionth of Earth-surface gravity. Twenty-some superconducting gravimeters are used worldwide in studying Earth's tides , rotation , interior, oceanic and atmospheric loading, as well as in verifying the Newtonian constant of gravitation .
In the future, gravity and altitude might become measurable using the special-relativistic concept of time dilation as gauged by optical clocks .
Geographical latitude and longitude are stated in the units degree, minute of arc, and second of arc. They are angles , not metric measures, and describe the direction of the local normal to the reference ellipsoid of revolution. This direction is approximately the same as the direction of the plumbline, i.e., local gravity, which is also the normal to the geoid surface. For this reason, astronomical position determination – measuring the direction of the plumbline by astronomical means – works reasonably well when one also uses an ellipsoidal model of the figure of the Earth.
One geographical mile, defined as one minute of arc on the equator, equals 1,855.32571922 m. One nautical mile is one minute of astronomical latitude. The radius of curvature of the ellipsoid varies with latitude, being longest at the pole and shortest at the equator, and the length of the nautical mile varies accordingly.
A metre was originally defined as the 10-millionth part of the length from the equator to the North Pole along the meridian through Paris (the target was not quite reached in actual implementation, as it is off by 200 ppm in the current definitions). This situation means that one kilometre roughly equals (1/40,000) * 360 * 60 meridional minutes of arc, or 0.54 nautical miles. (This is not exactly so as the two units had been defined on different bases, so the international nautical mile is 1,852 m exactly, which corresponds to rounding the quotient from 1,000/0.54 m to four digits).
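The conversion quoted above follows from treating the 40,000 km meridian-based circumference as spanning 360 × 60 minutes of arc:

$$
1\ \text{km} \approx \frac{360 \times 60\ \text{arcmin}}{40\,000\ \text{km}} = 0.54\ \text{arcmin per km},
\qquad
\frac{1000\ \text{m}}{0.54} \approx 1852\ \text{m} = 1\ \text{international nautical mile}.
$$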
Various techniques are used in geodesy to study temporally changing surfaces, bodies of mass, physical fields, and dynamical systems. Points on Earth's surface change their location due to a variety of mechanisms: continental plate motion, periodic deformation caused by tides and tidal loading, postglacial land uplift, and local or anthropogenic movements such as subsidence caused by groundwater or hydrocarbon extraction.
Geodynamics is the discipline that studies deformations and motions of Earth's crust and its solidity as a whole. Often the study of Earth's irregular rotation is included in the above definition. Geodynamical studies require terrestrial reference frames [ 14 ] realized by the stations belonging to the Global Geodetic Observing System (GGOS [ 15 ] ).
Techniques for studying geodynamic phenomena on global scales include satellite positioning by GNSS, very-long-baseline interferometry, and satellite and lunar laser ranging, complemented on regional and local scales by levelling, gravimetric, and radar-interferometric measurements.
| https://en.wikipedia.org/wiki/Geodesy |
A geodetic airframe is a type of construction for the airframes of aircraft developed in the 1930s by British aeronautical engineer Barnes Wallis , who sometimes spelt it "geodesic". Earlier, it was used by Prof. Schütte for the Schütte Lanz Airship SL 1 in 1909. [ 1 ] It makes use of a space frame formed from a spirally crossing basket-weave of load-bearing members. [ 2 ] The principle is that two geodesic arcs can be drawn to intersect on a curving surface (the fuselage) in a manner that the torsional load on each cancels out that on the other. [ 3 ]
The "diagonal rider" structural element was used by Joshua Humphreys in the first US Navy sail frigates in 1794. [ 4 ] Diagonal riders are viewable in the interior hull structure of the preserved USS Constitution on display in Boston Harbor. [ 5 ] [ 6 ] [ 4 ] The structure was a pioneering example of placing "non- orthogonal " structural components within an otherwise conventional structure for its time. [ 6 ] The "diagonal riders" were included in these American naval vessels' construction as one of five elements to reduce the problem of hogging in the ship's hull, and did not make up the bulk of the vessel's structure, they do not constitute a completely "geodetic" space frame. [ citation needed ]
Calling any diagonal wood brace (as used on gates, buildings, ships or other structures with cantilevered or diagonal loads) an example of geodesic design is a misnomer. In a geodetic structure, the strength and structural integrity, and indeed the shape, come from the diagonal "braces" - the structure does not need the "bits in between" for part of its strength (implicit in the name space frame) as does a more conventional wooden structure.
The earliest-known use of a geodetic airframe design for any aircraft was for the pre-World War I Schütte-Lanz SL1 rigid airship's envelope structure of 1911, with the airship capable of up to a 38.3 km/h (23.8 mph) top airspeed. [ 7 ] [ unreliable source? ]
The Latécoère 6 was a French four-engined biplane bomber of the early 1920s. It was of advanced all-metal construction and probably the first aeroplane to use geodetic construction. Only one was built.
Barnes Wallis , inspired by his earlier experience with light alloy structures and the use of geodesically-arranged wiring to distribute the lifting loads of the gasbags in the design of the R100 airship, evolved the geodetic construction method (although it is commonly stated otherwise, there was no geodetic structure in R100 ). [ 8 ] Wallis used the term "geodetic" to apply to the airframe; it is referred to as "Vickers-Wallis construction" in some early company documents. [ 9 ] "Geodesic" is used in the United States for aircraft structures. [ 10 ]
The system was later used by Wallis's employer, Vickers-Armstrongs in a series of bomber aircraft, the Wellesley , Wellington , Warwick and Windsor . In these aircraft, the fuselage and wing were built up from duralumin alloy channel-beams that were formed into a large framework. Wooden battens were screwed onto the metal, to which the doped linen skin of the aircraft was fixed. The Windsor had a woven metal skin. [ citation needed ]
The metal lattice-work gave a light and very strong structure. [ 2 ] The benefit of the geodetic construction was larger internal volume for a given streamlined shape. [ 9 ] Flight magazine described a geodetic frame as a sheet-metal covering in which diamond-shaped holes have been cut, leaving behind the geodetic strips. [ 11 ] The benefit was offset by having to construct the fuselage as a complete assembly, unlike aircraft using stressed-skin construction, which could be built in sections. In addition, fabric covering on the geodetic frame was not suitable for higher-flying aircraft that had to be pressurised. The difficulty of providing a pressurised compartment in a geodetic frame was a challenge during the design of the high-altitude Wellington Mk. V. The pressure cabin, which expanded and contracted independently of the rest of the airframe, had to be attached at the nodal points of the structure. [ 12 ]
Geodetic wing and fin structures, taken from the Wellington, were used on the post-war Vickers VC.1 Viking , but with a metal stressed-skin fuselage. [ 13 ] Later production Vikings were completely stressed-skin construction marking the end of geodetic construction at Vickers. [ 14 ] | https://en.wikipedia.org/wiki/Geodetic_airframe |
Geodetic astronomy or astronomical geodesy ( astro-geodesy ) is the application of astronomical methods to geodetic networks and other technical projects of geodesy .
The most important applications are:
Important measuring techniques are:
The accuracy of these methods depends on the instrument and its spectral wavelength, the measuring or scanning method, the observing time (versus economy), the atmospheric conditions, the stability of the ground station or of the satellite, mechanical and temperature effects on the instrument, the experience and skill of the observer , and the accuracy of the physical-mathematical models .
Therefore, the accuracy ranges from 60″ (navigation, ~1 mile) to 0.001″ and better (a few cm; satellites, VLBI).
Astrogeodetic leveling is a local geoid determination method based on vertical deflection measurements. Given a starting value at one point, determining the geoid undulations for an area becomes a matter of simple integration of vertical deflection, as it represents the horizontal spatial gradient of the geoid undulation.
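In formula form (a standard statement of the method, added here for clarity rather than quoted from the article): if ε denotes the component of the vertical deflection along the levelling line, the undulation difference between points A and B follows by integrating ε along the path, or in discrete form by summing segment averages:

$$
N_B - N_A = -\int_A^B \varepsilon \, ds \approx -\sum_i \bar{\varepsilon}_i \, \Delta s_i
$$

| https://en.wikipedia.org/wiki/Geodetic_astronomy |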
A geodetic control network is a network, often of triangles , that are measured precisely by techniques of control surveying , such as terrestrial surveying or satellite geodesy . It is also known as a geodetic network , reference network , control point network , or simply control network .
A geodetic control network consists of stable, identifiable points with published datum values derived from observations that tie the points together. [ 1 ]
Classically, a control is divided into horizontal (X-Y) and vertical (Z) controls (components of the control); however, with the advent of satellite navigation systems, GPS in particular, this division is becoming obsolete.
In the U.S., there is a national control network called the National Spatial Reference System (NSRS). [ 2 ]
Many organizations contribute information to the geodetic control network. [ 3 ]
The higher-order (high precision, usually millimeter-to-decimeter on a scale of continents) control points are normally defined in both space and time using global or space techniques, and are used for "lower-order" points to be tied into. The lower-order control points are normally used for engineering , construction and navigation . The scientific discipline that deals with the establishing of coordinates of points in a control network is called geodesy .
After a cartographer registers key points in a digital map to the real world coordinates of those points on the ground, the map is then said to be "in control". Having a base map and other data in geodetic control means that they will overlay correctly.
When map layers are not in control, it requires extra work to adjust them to line up, which introduces additional error.
Those real world coordinates are generally in some particular map projection , unit, and geodetic datum . [ 4 ]
In "classical geodesy" (up to the sixties) control networks were established by triangulation using measurements of angles and of some spare distances. The precise orientation to the geographic north is achieved through methods of geodetic astronomy . The principal instruments used are theodolites and tacheometers , which nowadays are equipped with infrared distance measuring, data bases , communication systems and partly by satellite links.
Electronic distance measurement (EDM) was introduced around 1960, when the prototype instruments became small enough to be used in the field. Instead of relying on sparse and much less accurate distance measurements, some control networks were established or updated by trilateration , using dense and more accurate distance measurements and few or no angle measurements.
EDM increased network accuracies up to 1:1 million (1 cm per 10 km; today at least 10 times better), and made surveying less costly.
The geodetic use of satellites began around the same time. By using bright satellites like Echo I , Echo II and Pageos , global networks were determined, which later provided support for the theory of plate tectonics .
Another important improvement was the introduction of radio and electronic satellites like Geos A and B (1965–70), of the Transit system ( Doppler effect , 1967–1990), which was the predecessor of GPS, and of laser techniques like LAGEOS (USA, Italy) or Starlette (France). Despite the use of spacecraft, small networks for cadastral and technical projects are mainly measured terrestrially, but in many cases incorporated in national and global networks by satellite geodesy.
Nowadays, several hundred geospatial satellites are in orbit, including a large number of remote sensing satellites and navigation systems like GPS and Glonass , which was followed by the European Galileo satellites in 2020 and China's Beidou constellation .
While these developments have made satellite-based geodetic network surveying more flexible and cost effective than its terrestrial equivalent for areas free of tree canopy or urban canyons, the continued existence of fixed point networks is still needed for administrative and legal purposes on local and regional scales. Global geodetic networks cannot be defined to be fixed, since geodynamics are continuously changing the position of all continents by 2 to 20 cm per year. Therefore, modern global networks like ETRS89 or ITRF show not only coordinates of their "fixed points", but also their annual velocities . | https://en.wikipedia.org/wiki/Geodetic_control_network |
In biogeography , geodispersal is the erosion of barriers to gene flow and biological dispersal (Lieberman, 2005; [ 1 ] Albert and Crampton, 2010 [ 2 ] ). Geodispersal differs from vicariance , which reduces gene flow through the creation of geographic barriers. [ 3 ] In geodispersal, the geographical ranges of individual taxa , or of whole biotas , are merged by erosion of a physical barrier to gene flow or dispersal . [ 4 ] Multiple related geodispersal and vicariance events can be mutually responsible for differences among populations. [ 5 ] As these geographic barriers break down, organisms of the secluded ecosystems can interact, allowing gene flow between previously separated species, creating more biological variation within a region. [ 6 ]
A well documented example of geodispersal in between continental ecosystems was the Great American Biotic Interchange (GABI) between the terrestrial faunas and floras of North America and South America , that followed the formation of the Isthmus of Panama about 3 million years ago. Between 69 and 47 million years ago, the Thulean Land Bridge facilitated gene flow by allowing bees from the Old World to travel to the New World , an example of geodispersal from the Old World to the New World. [ 7 ] Another example was the formation of the modern Amazon River Basin about 10 million years ago, [ 8 ] which involved the merging of previously isolated Neotropical fish faunas to form what is now the most species-rich continental aquatic ecosystem on Earth (Oberdorff et al., 2011). [ 9 ] | https://en.wikipedia.org/wiki/Geodispersal |
Geoduck aquaculture or geoduck farming is the practice of cultivating geoducks (specifically the Pacific geoduck, Panopea generosa ) for human consumption . The geoduck is a large edible saltwater clam , a marine bivalve mollusk , that is native to the Pacific Northwest .
Juvenile geoducks are planted or seeded on the ocean floor or substrate within the soft intertidal and subtidal zones, then harvested five to seven years later when they have reached marketable size (about 1 kg or 2.2 lbs). [ 1 ] They are native to the Pacific region and are found from Baja California , through the Pacific Northwest and Southern Alaska . [ 2 ]
Most geoducks are harvested from the wild, but because of state government-instituted limits on the amount that can be harvested, [ 3 ] the need to grow geoducks in farms to meet an increasing demand has led to the growth of the geoduck aquaculture industry, particularly in Puget Sound, Washington . Geoduck meat is a prized delicacy in Asian cuisine; the majority of exports are sent to China ( Shanghai , Shenzhen , Guangzhou , Beijing , are the main Chinese markets), Hong Kong and Japan . [ 4 ]
Washington state
Wild geoducks had been harvested in Puget Sound, Washington by residents and visitors for hundreds of years, but it was not until 1970 that the Washington Department of Natural Resources (WDNR) auctioned off the first right to commercially harvest wild geoducks. [ 5 ] Research into the viability of farming geoducks began in the 1970s. [ 6 ] In 1991, the development of hatchery and grow-out methods from brood stock was initiated. By 1996, commercial aquaculture had begun. As of 2011, there were 237 commercial sites operating on 145 hectares (360 acres) of privately owned properties (including those leased from other private owners). [ 7 ] Commercial geoduck aquaculture has been primarily undertaken within the intertidal zone . [ 8 ]
British Columbia
Commercial harvesting of wild geoducks began in 1976. [ 9 ] In the early 1990s, the cultivation method developed in Washington was adopted in British Columbia by Fan Seafoods Ltd and the Underwater Harvesters Association (UHA), a group of 55 licence holders for geoduck and horse clam fishery. The UHA used this method to initiate a wild geoduck enhancement program by seeding depleted subtidal areas with cultivated juvenile geoducks, thereby ensuring continued supply in the wild. It even invented a mechanical seeder that plants cultured juvenile geoducks on subtidal beds. Through a collaboration agreement between the provincial government's Department of Fisheries and Oceans (DFO), Fan Seafoods Ltd. and UHA, five pilot sites were selected in 1996 to study the feasibility of a geoduck aquaculture venture. [ 10 ] In 2007, the provincial government of B.C. licensed UHA to operate the first commercial geoduck farm on 25.3 hectares (63 acres) off Hernando Island . [ 11 ]
Other areas
No geoduck aquaculture industry exists in Southern Alaska [ 12 ] and Mexico . [ 4 ] In New Zealand , Cawthron recently reported successful attempts at rearing juvenile geoducks. The plan is to plant them in subtidal areas in order to supplement wild geoduck harvest. [ 13 ]
Panopea generosa is the geoduck species that is found in the Pacific Northwest and Alaska . Panopea globosa , which is another species in the same genus, Panopea , is harvested in Mexico's Gulf of California .
A small wild geoduck fishery exists in New Zealand for Panopea zelandica , the "deepwater clam", and in Argentina for Panopea abbreviata , the "southern geoduck". A fifth species, Panopea japonica , the Japanese geoduck, is found in Korea and Japan , but there is no viable commercial industry in those countries for this species. [ 6 ]
Biomass densities in Southeast Alaska are estimated by divers then inflated by twenty percent to account for geoducks not visible at the time of survey. [ 14 ] This estimate is used to predict the two percent allowed for commercial harvesting. [ 14 ]
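As a worked illustration of that quota rule (all numbers hypothetical):

```python
# Hypothetical illustration of the Southeast Alaska harvest-quota rule:
# the divers' estimate is inflated by 20% for clams not visible during
# the survey, and 2% of the adjusted biomass may be harvested.
diver_estimate_kg = 100_000.0               # assumed survey figure
adjusted_biomass = diver_estimate_kg * 1.20
annual_quota = adjusted_biomass * 0.02
print(annual_quota)                         # 2400.0 kg
```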
Juvenile geoducks are susceptible to attack from predators in their first year when they have not yet burrowed deeply into the substrate. Crabs , sea stars , predatory gastropods , and flatfishes have been observed to feed on them. [ 6 ] Adult geoducks, which are already buried deep in the substrate, are out of reach of most predators except for sea otters [ 12 ] and humans.
In 2012, no infectious diseases had been observed attacking cultured juvenile geoducks planted in the wild up to that point. [ 15 ] Surface abnormalities were observed in wild adult geoducks, but the pathogen or pathogens could not be identified. However, a protozoan parasite ( Isonema sp) was believed to be the causal agent of cultured geoduck larvae mortalities at a Washington experimental hatchery. [ 16 ]
The Washington Department of Fisheries (WDF) Point Whitney Laboratory pioneered research into the aquaculture of geoducks in 1970. The initial purpose of developing the techniques was to enhance the wild population that was being depleted by commercial fishing. Their first challenge was inducing spawning from wild adult geoducks brought into the hatchery; the second challenge was the survival of the resulting larvae. [ 6 ] As of 2012, research into improving culture techniques was continuing; however, the basic environmental conditions for growth of geoducks have already been established.
Summary of optimal biophysical parameters for geoduck culture (nursery and grow-out), including substrate penetration to 1 m [ 17 ] (table not reproduced)
The techniques for culturing geoducks are similar to those of other bivalves. Modifications have been made by both academic and private laboratories through the years. [ 6 ]
Geoducks spawn from spring to late summer in the wild, peaking in June and July. Because of this timing, an equal number of male and female clams are collected starting in the early fall when gametogenesis commences. The clams are placed in milk crates and maintained in polyethylene fish totes supplied with flowing seawater (10-12 °C) for several weeks. Microalgae is added as feed and regular cleaning is carried out to remove biodeposits.
Spawning is initiated by changing the seawater, increasing the water temperature, and increasing the supply of microalgal feed. The higher temperature and abundant supply of microalgae induce spawning in males first, then in females.
Fertilized geoduck eggs remain floating in the water column for 16 to 35 days until they metamorphose and settle on the substrate. As larvae, they are kept at a water temperature of 16 °C and supplied with microalgal feed with frequent water changes.
Larvae that are ready to metamorphose are collected and placed in a primary nursery system where water temperature is kept at 15-17 °C and supplied with microalgal feed. Metamorphosed larvae are characterized by the development of an attachment mechanism known as byssal threads .
Once byssal threads have developed, the clams are moved to a secondary nursery system which contains sand as the substrate . They are kept here until they are large enough to be moved outdoors.
Tertiary nursery systems are made of large outdoor tanks or totes which have the same sand substrate and flowing seawater. The clams are kept in these systems until they reach a valve length of 5 mm, at which point they are ready to be planted.
Four to five juvenile geoducks are planted inside PVC tubes that are "wiggled" into the sandy substrate along the intertidal zone during low tide. The PVC tubes are between 5 and 15 cm in diameter, with lengths from 20 to 30 cm, about 7 cm of which remain above the substrate. The plastic tubes are covered with a mesh net to protect the clams from predators. The tubes also serve to retain seawater at low tide, which prevents dehydration of the clams. [ 18 ] After one to two growing seasons when the juvenile geoducks have burrowed themselves deep enough into the substrate to be out of reach of predators, the PVC tubes are removed. Not all tidelands are suitable for geoduck aquaculture. The sand must be deep and clean, and the water must have the right salinity and degree of cleanliness. [ 19 ]
In Washington State, the aquaculture of geoducks occurs on intertidal lands, whereas in British Columbia, geoducks are cultured in subtidal areas, which necessitates the growing of juvenile geoducks to at least 12 mm instead of 5 mm before planting. [ 17 ] Once planted in the subtidal bed, the area is covered with netting to protect the clams from predators (PVC tubes are not stable in subtidal beds due to strong currents ).
Mature geoducks are left to grow out until they are large enough to be marketable (1.0 kg). This can take from five to seven years. Wild and cultured geoducks are harvested by first loosening the substrate around them using a powerful nozzle that ejects high-pressured water. Once loosened, the clams are collected by hand and placed in crates for transport to a processing facility.
Although there is no standard grading system for quality, the color of the siphon (the whiter the better) and the size (up to 1 kg) are the main determiners of price. [ 4 ] Live geoducks are packed in coolers and shipped on the same day they are harvested.
The geoduck industry produces an estimated 6000 metric tons of clams annually, [ 4 ] of which only about 10-13% come from aquaculture. Washington is the largest producer of wild and cultured geoducks.
Average annual production 2007–2010, in metric tons [ 4 ] (table not reproduced)
Difference between wild and cultured geoduck [ 4 ] (table not reproduced)
China (mainland and Hong Kong) receives 95% of geoduck exports. [ 4 ] [ 20 ] Although the clams are priced at about $20 per pound at the point of origin, they can sell for $100 to $150 per pound at their destination. While exports to Japan have decreased in recent years because of increasing prices, the market in China is expected to soar.
Environmental groups and citizens of British Columbia have voiced their concerns about geoduck aquaculture operations in the Province, even though the industry is still in its preliminary stages. Their main issue has been the lack of peer-reviewed studies on the impact geoduck aquaculture practices will have on the environment. [ 11 ] Most concerned groups point to the situation in Puget Sound, Washington as an example of the environmental harm posed by geoduck farms. [ 21 ] Other concerns being raised include the destruction of the natural aquatic habitat, washed-up waste (such as nets), disease outbreaks, competition with wild species, and "purge fishing", or the removal of all wild geoducks in a specific area prior to the planting of cultured geoducks. This procedure is apparently necessary because of the economics of the industry. [ 10 ]
The management of Canada's aquaculture sector is headed by the Department of Fisheries and Oceans . The department shares this responsibility with 17 other departments and agencies at the federal and provincial levels. [ 22 ] The DFO works with these government offices to "create the policy and regulatory conditions necessary to ensure that the aquaculture industry develops in an environmentally responsible way while remaining economically competitive in national and international markets". [ 23 ] In the case of wild geoduck fishery, the agency co-manages the activity with the Underwater Harvesters Association . [ 24 ]
Aquaculture in Canada is regulated by three main acts: the Fisheries Act , Navigable Waters Protection Act , and Canadian Environmental Assessment Act . Other acts that control aquaculture practices include the Land Act , Health of Animals Act , Food and Drugs Act , Pest Control Products Act , and Species at Risk Act . [ 22 ] All of these acts specify regulations at the local, provincial, and federal levels, resulting in a total of 73 rules and regulations for the aquaculture industry; these rules and regulations have been described as being conflicting and contradictory. [ 25 ] The rules and regulations have resulted in the aquaculture industry being described as "one of the most heavily regulated in the world". [ 26 ] A recent survey showed that Canadians support the creation of an Aquaculture Act that specifically addresses the needs of the industry. [ 27 ] The DFO collects fees from aquaculture licences and leases, and receives government funding for its research programs. The UHA also funds research on geoduck aquaculture. [ 28 ]
To address consumer concerns regarding unsafe aquaculture practices, the DFO launched the Aquaculture Sustainability Reporting Initiative in 2011. [ 29 ] This report backs the Federal Sustainable Development Strategy implemented in 2010, and aims to provide its citizens with information on the sustainable aquaculture practices that government agencies and the aquaculture industry are undertaking or plan to undertake. There are currently 29 participants in this initiative, coming from different sectors such as academia , the aquaculture industry, government agencies, and environmental organizations .
The Canadian government and the aquaculture industry demonstrate sustainable practices by several means such as federal ( Canadian General Standards Board ) and third-party certifications ( International Organization for Standardization for traceability of produce). [ 30 ] The Aquaculture Sustainability Reporting Initiative is patterned after the Global Reporting Initiative , which emphasizes reporting transparency and the accountability of an organization's sustainability performance.
The DFO also recently released Aquaculture in Canada 2012: A Report on Aquaculture Sustainability in which it outlines its performance in terms of sustainability. The aquaculture industry has also taken steps to develop codes of practice for sustainable operations that are in line with or exceed international standards. [ 31 ] In the case of geoduck, the UHA has adopted a labeling system ("Market Approved") to ensure that the geoducks that end up in the market are safe to eat, of approved quality, and not illegally harvested. [ 32 ]
The DFO plans to undertake geoduck aquaculture in subtidal areas. No geoduck production currently occurs on private tidelands, although conversion of other shellfish aquaculture ventures operating on tidelands to geoduck is also being considered. There is no large-scale commercial production underway yet; ongoing trial farms are currently being studied and assessed. Although tenures to possible geoduck farm sites have been granted, commercial licences have not been issued, except for the one granted to the UHA. [ 11 ]
Marketing and promotion
When promoting its products, the Canadian aquaculture industry touts the environmentally sound practices it observes in producing high-quality fish and shellfish. The Canadian Aquaculture Industry Alliance recently received backing from the DFO with a $1 M investment to promote awareness of the industry and increase sales. [ 33 ] The Aquaculture Innovation and Market Access Program (AIMAP) of the DFO aims to encourage technological innovation in the industry to improve its "global competitiveness and environmental performance". [ 34 ] However, in the case of geoduck there is no formal marketing and promotion underway. Since its main market is China, this industry has relied on connections between Vancouver -based export businesses with close ties (especially familial ties) to Hong Kong and mainland China importers. The UHA however has been promoting geoducks in China with support from the federal government. [ 4 ]
Concerns have also been raised regarding the impact of geoduck aquaculture on the natural habitat, particularly in Puget Sound . Currently, geoduck aquaculture in Puget Sound occupies 80 ha of private tidelands which are either owned by aquaculture companies or leased from other landowners. [ 1 ] (another report put the area at 141 ha [ 35 ] ) Because geoduck aquaculture occurs on private lands, there is minimal government oversight, and environmental concerns raised by the residents are most often left to the aquaculture companies to address, and in some cases for the courts to arbitrate. The aquaculture companies do create their own environmental codes of conduct and best management practices to address such concerns. [ 36 ]
The state government is considering leasing public aquatic lands (state-owned) specifically for geoduck aquaculture. [ 1 ] It currently leases 849 ha for aquaculture of other shellfish, such as oysters , other kinds of clams, and mussels . Fees are collected from aquaculture companies, and the resulting revenue is used to manage and protect public aquatic lands throughout the State. Since its statehood in 1889, Washington had been selling tidelands to private individuals, initially as a source of revenue for the state. [ 37 ] By 1971, when this practice was stopped, the State had already sold about 60% of public tidelands to private ownership. The state currently owns 1 million ha of aquatic lands.
Several state aquatic land statutes [ 38 ] enacted under the Shoreline Management Act of 1971 gave authority to the DNR to "foster the commercial and recreational use of the aquatic environment for production of food, fibre, income, and public enjoyment from state-owned aquatic lands under its jurisdiction and from associated waters, and to this end the department may develop and improve production and harvesting of seaweeds and sea life attached to or growing on aquatic land or contained in aquaculture containers..."
Aquaculture is given priority in Washington: "The legislature finds that many areas of the state of Washington are scientifically and biologically suitable for aquaculture development, and therefore the legislature encourages promotion of aquacultural activities, programs, and development with the same status as other agricultural activities, programs, and development within the state". [ 39 ] At the national level in the US, the National Oceanic and Atmospheric Administration (NOAA) is the lead agency for aquaculture. In February 2011, this agency released a draft of the national policy for sustainable marine aquaculture that aims to protect but, at the same time, utilize the nation's aquatic resources in a sustainable manner as well as encourage the growth of a sustainable aquaculture industry. [ 40 ]
Commercial aquaculture in Washington is regulated by local, state, and federal government entities, each tasked with different responsibilities. Some of the agencies involved are the Environmental Protection Agency , Washington Department of Fish and Wildlife, US Army Corps of Engineers , and the Food and Drug Administration . The decisions of these agencies are governed by several federal acts, such as the Clean Water Act , Lacey Act , Federal Water Pollution Control Act , and Animal Health Protection Act . [ 36 ]
Because of the concerns raised by residents and environmental groups regarding the ecological impact of geoduck aquaculture on private tidelands, the WDNR has adopted a more cautious approach on leasing state-owned aquatic lands for geoduck aquaculture. In 2003, the State legislature instructed the WDNR to explore the feasibility of a geoduck aquaculture program on state-owned tidelands. [ 1 ] In 2007, the state passed House Bill 2220 on Shellfish Aquaculture [ 41 ] which, among other things, commissions the Washington Sea Grant (WSG) of the University of Washington to conduct "a series of scientific research studies that examines the possible effects, including the cumulative effects, of the current prevalent geoduck aquaculture techniques and practices on the natural environment in and around Puget Sound, including the Strait of Juan de Fuca". The research is expected to end on December 1, 2013. The bill further stipulates that not more than 15 ha of state-owned aquatic land be leased for commercial geoduck aquaculture every year until 2014. It also created the Shellfish Aquaculture Regulatory Committee, [ 42 ] which is composed of government agencies, aquaculture producers (2), concerned environmental organizations (2), and landowners (2). The role of the committee is to recommend guidelines and policies for shellfish aquaculture operations. In 2010, the WDNR took a further step by opening a dialogue with stakeholders and the public. They created an online forum on geoduck aquaculture to elicit concerns from residents, environmental groups and geoduck farm owners. [ 43 ]
Marketing and promotion
Half of the geoducks produced in Washington are exported to Vancouver, BC , before being re-exported to the final markets in China and Hong Kong. The remaining half are exported through Seattle, WA and Anchorage, AK . [ 4 ] These three cities have the best air connections to China and Hong Kong. Even though China is Washington's biggest market for geoduck, there is little promotion there by the state's geoduck producers. [ 4 ]
In order to address priorities set by the Washington State legislature, the WSG is conducting research in three key areas.
The WSG released its most recent progress report in February 2012 on the possible effects of geoduck aquaculture on the environment. [ 44 ] The preliminary results of some of the studies appear to show that geoduck aquaculture does not negatively affect the natural habitat. One of the studies has been completed, and its results showed that the seemingly disruptive nature of harvesting geoducks has no effect on the infaunal benthic community . The report suggested that because the infauna are already accustomed to natural disturbances such as wave action and extreme weather conditions, harvesting does not affect them any differently. [ 45 ] This report, however, has drawn criticism pointing out that the studies are not long-term, so the effect of geoduck aquaculture practices over many years still cannot be ascertained. [ 46 ] | https://en.wikipedia.org/wiki/Geoduck_aquaculture |
Geoengineering (also known as climate engineering or climate intervention ) refers to deliberate large-scale interventions in the Earth's climate system intended to counteract human-caused climate change . [ 1 ] The term commonly encompasses two broad categories: large-scale carbon dioxide removal (CDR) and solar radiation modification (SRM). CDR involves techniques to remove carbon dioxide from the atmosphere and is generally considered a form of climate change mitigation . SRM aims to reduce global warming by reflecting a small portion of sunlight (solar radiation) away from Earth and back into space. Although historically grouped together, these approaches differ substantially in mechanisms, timelines, and risk profiles, and are now typically discussed separately. [ 2 ] : 168 [ 3 ] Some other large-scale engineering proposals—such as interventions to slow the melting of polar and alpine ice—are also sometimes classified as forms of geoengineering.
Some types of climate engineering present political, social and ethical issues. One common objection is that focusing on these technologies could undermine efforts to reduce greenhouse gas emissions. Effective governance and international oversight are widely regarded as essential.
Major scientific organizations have examined the potential, risks, and governance needs of climate engineering, including the US National Academies of Sciences, Engineering, and Medicine , [ 4 ] [ 5 ] [ 6 ] the Royal Society , [ 7 ] the UN Educational, Scientific and Cultural Organization ( UNESCO ), [ 8 ] and the World Climate Research Programme . [ 1 ]
Carbon dioxide removal (CDR) is a process in which carbon dioxide (CO 2 ) is removed from the atmosphere by deliberate human activities and durably stored in geological, terrestrial, or ocean reservoirs, or in products. [ 11 ] : 2221 This process is also known as carbon removal, greenhouse gas removal or negative emissions. CDR is increasingly integrated into climate policy , as an element of climate change mitigation strategies. [ 12 ] [ 13 ] Achieving net zero emissions will require first and foremost deep and sustained cuts in emissions, and then—in addition—the use of CDR ("CDR is what puts the net into net zero emissions" [ 14 ] ). In the future, CDR may be able to counterbalance emissions that are technically difficult to eliminate, such as some agricultural and industrial emissions. [ 15 ] : 114
Solar radiation modification (SRM) (or solar geoengineering) is a group of large-scale approaches to reduce global warming by increasing the amount of sunlight that is reflected away from Earth and back to space . It is not intended to replace efforts to reduce greenhouse gas emissions , [ 19 ] but rather to complement them as a potential way to limit global warming. [ 20 ] : 1489 SRM is a form of geoengineering or climate engineering.
Glacial geoengineering is a set of proposed geoengineering approaches that focus on slowing the loss of glaciers , ice sheets , and sea ice in polar regions and, in some cases, alpine areas. Proposals are motivated by concerns that feedback loops—such as ice-albedo loss, accelerated glacier flow, and permafrost methane release—could amplify climate change and trigger climate tipping points . [ 22 ] [ 23 ]
Proposed glacial geoengineering methods include regional or local solar radiation management , thinning cirrus clouds to allow more heat to escape, and deploying mechanical or engineering structures to stabilize ice. Specific strategies under investigation are stratospheric aerosol injection focused on polar regions, [ 22 ] marine cloud brightening , [ 24 ] surface albedo modification with reflective materials, [ 25 ] basal interventions such as draining subglacial water or promoting basal freezing, [ 23 ] and ice shelf protection measures including seabed curtains. [ 26 ]
Most governance issues relating to geoengineering are specific to the category or the specific method. Nevertheless, a few international governance instruments have addressed geoengineering collectively.
The Conference of the Parties to the Convention on Biological Diversity has made several decisions regarding "climate related geoengineering." That of 2010 established "a comprehensive non-binding normative framework" [ 27 ] : 106 for "climate-related geoengineering activities that may affect biodiversity," requesting that such activities be justified by the need to gather specific scientific data, undergo prior environmental assessment, and be subject to effective regulatory oversight. [ 28 ] : 96–97 [ 29 ] : 161–162 The Parties' 2016 decision called for "more transdisciplinary research and sharing of knowledge... in order to better understand the impacts of climate-related geoengineering." [ 29 ] : 161–162 [ 30 ]
The parties to the London Convention on the Prevention of Marine Pollution by Dumping of Wastes and Other Matter and its associated London Protocol have addressed "marine geoengineering." In 2013, the parties to the London Protocol adopted an amendment to establish a legally binding framework for regulating marine geoengineering, initially limited to ocean fertilization and requiring assessment and permitting before any activity proceeds. This amendment has not yet entered into force due to insufficient ratifications. In 2022, the parties to both agreements acknowledged growing interest in marine geoengineering, identified four techniques for priority review, and encouraged careful assessment of proposed projects under existing guidelines while considering options for further regulation. In 2023, they cautioned that these techniques could pose serious environmental risks, highlighted scientific uncertainty about their effects, urged strict application of assessment frameworks, and called for broader international cooperation. [ 31 ] Their work is supported by the Joint Group of Experts on the Scientific Aspects of Marine Environmental Protection of the International Maritime Organization . | https://en.wikipedia.org/wiki/Geoengineering |
Geoethics is the branch of ethics which relates to the interaction of human activity with our physical world in general, and with the practice of the Earth sciences in particular. [ 1 ] [ 2 ] [ 3 ] [ 4 ] [ 5 ] [ 6 ] It may also have relevance to planetary sciences . [ 7 ] It is described as an emerging scientific and philosophical discipline, consisting of research and reflection on the values that serve as the bases of behaviors and practices wherever human activities interact with the Earth system. [ 3 ] [ 4 ] Moreover, geoethics promotes the ethical and social roles of geoscientists in conducting scientific and technological research and practice. [ 8 ]
For these reasons, geoethics pursues recognition of humankind's duties and responsibility towards the Earth system. A more specialized use emerged as the term came to deal with the ethical, social, and cultural implications of the behavior and professional activities of geoscientists. [ 3 ] [ 4 ] [ 9 ] [ 10 ] Some scholars have also noted that it provides a point of intersection for geosciences , sociology , economics and philosophy . [ 3 ] [ 4 ]
The International Association for Promoting Geoethics , included in the international geoethics infrastructure together with the IUGS Commission on Geoethics and the CIPSH Chair on Geoethics , is the leading organization that is carrying out studies to develop the geoethical thought and to promote geoethics outcomes worldwide.
| https://en.wikipedia.org/wiki/Geoethics |
A geofence is a virtual " perimeter " or " fence " around a given geographic feature . [ 1 ] A geofence can be dynamically generated (as in a radius around a point location) or match a predefined set of boundaries (such as school zones or neighborhood boundaries).
The use of a geofence is called geofencing , and one example of use involves a location-aware device of a location-based service (LBS) user entering or exiting a geofence. The geofencing approach is based on the observation that users move from one place to another and then stay at that place for a while. This method combines awareness of the user's current location with awareness of the user's proximity to locations that may be of interest. [ 2 ] This activity could trigger an alert to the device's user as well as messaging to the geofence operator. This information, which could contain the location of the device, could be sent to a mobile telephone or an email account.
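In the simplest case of a circular geofence, the entry/exit trigger described above reduces to a distance test against the fence's centre applied to successive position fixes. The following Python sketch illustrates the idea; the function names and coordinates are invented for this example and do not come from any particular LBS platform:

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two WGS84 points."""
    r = 6371000.0  # mean Earth radius in metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def inside_circular_geofence(lat, lon, fence_lat, fence_lon, radius_m):
    """True if a position fix falls within the fence radius."""
    return haversine_m(lat, lon, fence_lat, fence_lon) <= radius_m

# A hypothetical fence with a 200 m radius around a point of interest.
was_inside = False
for lat, lon in [(47.6065, -122.3480), (47.6101, -122.3401)]:  # position fixes
    now_inside = inside_circular_geofence(lat, lon, 47.6102, -122.3402, 200)
    if now_inside and not was_inside:
        print("entry event: notify the user and the geofence operator")
    elif was_inside and not now_inside:
        print("exit event")
    was_inside = now_inside
```

Polygonal fences (such as school zones or neighborhood boundaries) would replace the distance test with a point-in-polygon check, but the event logic stays the same.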
Geofencing was invented in the early 1990s and patented in 1995 by American inventor Michael Dimino, using the first-of-its-kind GPS and GSM technology for tracking and locating anywhere on the globe from a remote location.
The patent on cellular geofencing for global tracking has been cited in the United States Patent Office over 240 times since 1995, by major companies such as IBM and Microsoft, and is first described as follows: [ 3 ]
A global tracking system (GTS) for monitoring an alarm condition associated with and locating a movable object, the GTS comprising:
Geofencing uses technologies like GPS, or even IP address ranges, to build its virtual fence. In most cases, mobile phones are using combinations of positioning methods, e.g., Assisted GPS (A-GPS). "A-GPS uses assistance data received from the network to obtain a faster location calculation compared with GPS alone." [ 4 ] The global system of tracking and geofencing is supported by a group of subsystems based on global navigation satellite system (GNSS) services. Both horizontal and vertical accuracy of GNSS is just a few centimetres for baseline ≤ 5 km. [ 5 ] The Wide Area Augmentation System (WAAS) is used by devices equipped and used in North America—the accuracy is considered to be within 3 m at least 95% of the time. [ 6 ] These virtual fences can be used to track the physical location of the device active in the particular region or the fence area. The location of the person using the device is taken as geocoding data and can be used further for advertising purposes.
It is possible to monitor several geofences at once (multiple active geofences). The number of active geofences on Android devices is limited to 100 per app and per user. [ 7 ] It is possible to monitor a different type of triggering activity for each geofence separately: entrance, exit, or dwell in the monitored area.
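As a rough illustration of monitoring several fences with separate enter, exit, and dwell triggers, here is a minimal sketch. The `GeofenceMonitor` class and its fence format are invented for this example (it reuses `haversine_m` from the sketch above), and the 100-fence cap simply mirrors the per-app Android limit mentioned in the text:

```python
import time

class GeofenceMonitor:
    """Tracks enter/exit/dwell transitions for several circular fences.

    `fences` maps a fence id to (lat, lon, radius_m, dwell_seconds).
    Assumes haversine_m() from the previous sketch is in scope.
    """
    MAX_FENCES = 100  # mirrors the Android per-app limit noted above

    def __init__(self, fences):
        if len(fences) > self.MAX_FENCES:
            raise ValueError("too many active geofences")
        self.fences = fences
        self.entered_at = {}   # fence id -> timestamp of the entry event
        self.dwelled = set()   # fences whose dwell event has already fired

    def update(self, lat, lon, now=None):
        """Feed one position fix; returns a list of (fence id, event) pairs."""
        now = time.time() if now is None else now
        events = []
        for fid, (flat, flon, radius, dwell) in self.fences.items():
            inside = haversine_m(lat, lon, flat, flon) <= radius
            if inside and fid not in self.entered_at:
                self.entered_at[fid] = now
                events.append((fid, "enter"))
            elif inside and fid not in self.dwelled and now - self.entered_at[fid] >= dwell:
                self.dwelled.add(fid)
                events.append((fid, "dwell"))
            elif not inside and fid in self.entered_at:
                del self.entered_at[fid]
                self.dwelled.discard(fid)
                events.append((fid, "exit"))
        return events
```

Firing the dwell event only once per visit, as done here, avoids flooding the operator with repeated notifications while a device lingers inside a fence.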
There are two types of geofencing, active and passive; the choice of type depends on the purpose of using geofencing in a given situation.
Active geofencing uses GPS services the entire time the application is running and therefore consumes more battery. The higher battery consumption results from the fact that the service must run in the foreground throughout the period of use.
Passive geofencing does not require the application to be constantly active and is able to run in the background. It is well suited to data collection. Because it does not use GPS services, it cannot be used by an app that depends on real time (sending notifications immediately, etc.).
The FBI has used geofence warrants to identify rioters who participated in the 6 January Capitol attack . [ 8 ]
Geofencing, used with child location services, can notify parents if a child leaves a designated area. [ 9 ]
It is also being used for flexible home control and monitoring systems, for example setting a phone to unlock the door or turn on the heating when arriving home. [ 10 ]
Geofencing used with location-based guns can restrict those firearms to fire only in locations where their firing is permitted, rendering them unusable elsewhere.
Other applications include sending an alert if a vehicle is stolen, [ 11 ] and notifying rangers when wildlife stray into farmland. [ 12 ]
A geofence can be used for location-based messaging for tourist safety and communication. [ 13 ]
In 2015, U.S. Senator Chuck Schumer proposed a law requiring drone manufacturers to build geofencing constraints into unmanned aerial vehicle navigation systems that would override the commands of the unsophisticated operator, preventing the device from flying into protected airspace . [ 14 ] [ 15 ]
Geofencing is critical to telematics . It allows users of the system to draw zones around places of work, customers' sites, and secure areas. These geofences, when crossed by an equipped vehicle or person, can trigger a warning to the user or operator via SMS or email.
In some companies, geofencing is used by the human resources department to monitor employees working in special locations, especially those doing field work. Using a geofencing tool, an employee is allowed to log his or her attendance using a GPS-enabled device when within a designated perimeter.
Geofencing, in a security strategy model, provides security to wireless local area networks. This is done by using predefined borders (e.g., an office space with borders established by positioning technology attached to a specially programmed server). The office space becomes an authorized location for designated users and wireless mobile devices. [ 16 ] [ page needed ]
During the use of Starlink satellites in the Russo-Ukrainian War , SpaceX used geofencing to limit the use of Starlink Internet services outside the borders of Ukraine such as in Russian-occupied territories in Ukraine . [ 17 ]
Applications of geofencing extend to advertising and geomarketing . Geofencing solution providers allow marketers and advertisers to precisely choose the locations where their ads show up. Geofencing uses different types of targeting, including zip codes, street addresses, GPS coordinates (latitude and longitude), and IP addresses.
Geofencing enables competitive marketing tactics for advertisers and marketers to grab the attention of in-market shoppers at competitors' store locations and at large-scale events such as concerts, sports events, and conferences held in stadiums, convention centers, malls, outlets, parks, and neighborhoods. For example, at a concert, a digital ad relating to the performer or an affiliated company could be sent only to those people in the venue.
For example, a local auto dealership builds a virtual boundary within a few square miles of its location to target car buyers in the same neighborhood. This way it limits its ad spending to prospects who are more likely to purchase, in order to get a better return on investment (ROI). Using tracking technologies to identify devices where the ads were shown, geofencing solution providers are able to provide walk-in attribution for their advertising. This means that using a geofencing solution, companies can now track the customers who walked into the showroom after seeing the ad. This level of attribution provides better visibility and analytics for marketers to spend their advertising budget wisely.
A local service business may only be interested in (a) likely clients (b) within a service region or catchment basin. Broadcasting or advertising more extensively brings irrelevant responses and wastes energy, time, money, and opportunity. Electronic advertising can identify and target only the desired people in the market.
Target Corporation settled for $5 million with the San Diego City Attorney in April 2022, promising to audit and improve pricing procedures, after a San Diego complaint that the company used geofencing to raise prices when a customer entered a store. [ 18 ] | https://en.wikipedia.org/wiki/Geofence |
Geoffrey Norman Malcolm (23 April 1931 – 11 August 2019) was a New Zealand physical chemist . Appointed in 1969, he was the first chemistry professor at Massey University .
Born in Feilding on 23 April 1931, Malcolm was educated at Feilding Agricultural High School . [ 1 ] He then studied at Canterbury University College , graduating Master of Science with first-class honours in 1954. [ 2 ] He was awarded an 1851 Exhibition Memorial Scholarship , [ 2 ] and completed doctoral studies at the University of Manchester in 1956. [ 1 ] [ 3 ]
In 1958, Malcolm married Sheila Mary Wilson, and the couple went on to have four children. [ 1 ]
After a short period as an assistant lecturer at the University of Manchester in 1956–57, Malcolm returned to New Zealand. [ 1 ] He was appointed as a lecturer in chemistry at the University of Otago in 1958, rising to the rank of reader . [ 1 ] In 1969, he was appointed as professor of physical chemistry at Massey University , and was the first professor of chemistry at that institution. [ 4 ] He later served as dean of science from 1984 to 1994. [ 1 ] [ 5 ] Following his retirement in 1995, he was conferred the title of professor emeritus. [ 1 ] [ 3 ] [ 5 ] Malcolm was elected a Fellow of the New Zealand Institute of Chemistry (NZIC) in 1966, and served as president of the NZIC in 1977. [ 1 ] [ 3 ]
Malcolm died in Palmerston North on 11 August 2019. [ 5 ]
| https://en.wikipedia.org/wiki/Geoff_Malcolm |
Geoff Sutcliffe is a US-based computer scientist working in the field of automated reasoning . He was born in the former British colony of Northern Rhodesia (now Zambia ), grew up in South Africa , and earned his PhD in Australia . Sutcliffe currently works at the University of Miami , and is of both British and Australian nationality. [ 1 ]
Geoff Sutcliffe is the developer of the Thousands of Problems for Theorem Provers (TPTP) problem library, and of the TPTP language for formal specification of automated theorem proving problems and solutions. Since 1996 he has been organizing the annual CADE ATP System Competition (CASC), associated with the Conference on Automated Deduction and the International Joint Conference on Automated Reasoning . He has been a co-organizer of several automated reasoning challenges, including the Modal Logic $100 Challenge, [ 2 ] the MPTP $100 Challenges, [ 3 ] and the SUMO $100 Challenges. [ 4 ] : 139 Together with Stephan Schulz , Sutcliffe founded and has been organizing the ES* Workshop series, [ 5 ] a venue for presenting and publishing practically oriented automated reasoning research.
In 2025, Sutcliffe visited the Federal University of Goiás and gave a two-day lecture on TPTP. [ 6 ] | https://en.wikipedia.org/wiki/Geoff_Sutcliffe |
Geoffrey Alan Stuart Ozin FRSC is a British chemist, currently Tier 1 Canada Research Chair in Materials Chemistry and Distinguished University Professor at the University of Toronto . [ 1 ] Ozin is the recipient of numerous awards for his research on nanomaterials, including the Meldola Medal and Prize in 1972 and the Rutherford Memorial Medal in 1982. He won the Albert Einstein World Award of Science in 2011, the Royal Society of Chemistry's Centenary Prize in 2015, and the Humboldt Prize in 2005 and 2019. [ 2 ] He has co-founded three university spin-off companies: Torrovap in 1985, which manufactures metal vapor synthesis scientific instrumentation; Opalux in 2006, which develops tunable photonic crystals ; and Solistra in 2019, which develops photocatalysts and photoreactors for hydrogen production from carbon dioxide and methane. [ 3 ]
Initially planning on entering the family fashion and tailoring business, he entered King's College London in 1962, the first member of his family to attend university. [ 4 ] Ozin graduated with a first-class honours degree in chemistry from King's College London in 1965, and obtained his PhD in inorganic chemistry at Oriel College, Oxford in 1967 under Prof. Ian R. Beattie. [ 5 ] [ 6 ] He then was an Imperial Chemical Industries postdoctoral fellow at the University of Southampton from 1967 to 1969. [ 7 ] In 1969, he began his independent career at the University of Toronto as an assistant professor. He was promoted to associate professor in 1972, and full professor in 1977.
| https://en.wikipedia.org/wiki/Geoffrey_Ozin |
Worldwide Geographic Location Codes (GLCs) list the number and letter codes federal agencies should use in designating geographic locations anywhere in the United States or abroad in computer programs . Use of standard codes facilitates the interchange of machine-readable data from agency to agency within the federal community and between federal offices and state and local groups. These codes are also used by some companies as a coding standard as well, especially those that must deal with federal, state and local governments for such things as taxes . The GLCs are administered by the U.S. General Services Administration (GSA).
| https://en.wikipedia.org/wiki/Geographic_Locator_Codes |
Geographic information science ( GIScience , GISc ) or geoinformation science is a scientific discipline at the crossroads of computational science , social science , and natural science that studies geographic information , including how it represents phenomena in the real world, how it represents the way humans understand the world, and how it can be captured, organized , and analyzed . It is a sub-field of geography , specifically part of technical geography . [ 1 ] [ 2 ] [ 3 ] It has applications to both physical geography and human geography , although its techniques can be applied to many other fields of study as well as many different industries .
As a field of study or profession, it can be contrasted with geographic information systems (GIS), which are the actual repositories of geospatial data, the software tools for carrying out relevant tasks, and the profession of GIS users. That said, one of the major goals of GIScience is to find practical ways to improve GIS data, software, and professional practice; it is more focused on how GIS is applied in practice than on the geographic information system as a tool in and of itself. The field is also sometimes called geographical information science .
British geographer Michael Goodchild defined this area in the 1990s and summarized its core interests, including spatial analysis , visualization, and the representation of uncertainty. [ 4 ] GIScience is conceptually related to geomatics , information science , computer science , and data science , but it claims the status of an independent scientific discipline. [ 5 ] Recent developments in the field have expanded its focus to include studies on human dynamics in hybrid physical-virtual worlds, quantum GIScience, the development of smart cities , and the social and environmental impacts of technological innovations. [ 6 ] These advancements indicate a growing intersection of GIScience with contemporary societal and technological issues. Overlapping disciplines are: geocomputation , geoinformatics , geomatics and geovisualization . [ 7 ] Other related terms are geographic data science (after data science ) [ 8 ] [ 9 ] and geographic information science and technology (GISci&T), [ 10 ] with job titles geospatial information scientists and technologists . [ 11 ]
Since its inception in the 1990s, the boundaries between GIScience and cognate disciplines have been contested, and different communities might disagree on what GIScience is and what it studies. In particular, Goodchild stated that "information science can be defined as the systematic study according to scientific principles of the nature and properties of information. Geographic information science is the subset of information science that is about geographic information." [ 12 ] Another influential definition is that by geographic information scientist ( GIScientist ) David Mark , which states:
Geographic Information Science (GIScience) is the basic research field that seeks to redefine geographic concepts and their use in the context of geographic information systems. GIScience also examines the impacts of GIS on individuals and society, and the influences of society on GIS. GIScience re-examines some of the most fundamental themes in traditional spatially oriented fields such as geography, cartography, and geodesy, while incorporating more recent developments in cognitive and information science. It also overlaps with and draws from more specialized research fields such as computer science, statistics, mathematics, and psychology, and contributes to progress in those fields. It supports research in political science and anthropology, and draws on those fields in studies of geographic information and society. [ 13 ]
In 2009, Goodchild summarized the history of GIScience and its achievements and open challenges. [ 14 ] | https://en.wikipedia.org/wiki/Geographic_information_science |
A GIS software program is a computer program to support the use of a geographic information system , providing the ability to create, store, manage, query, analyze , and visualize geographic data , that is, data representing phenomena for which location is important. [ 1 ] [ 2 ] [ 3 ] The GIS software industry encompasses a broad range of commercial and open-source products that provide some or all of these capabilities within various information technology architectures. [ 4 ]
The earliest geographic information systems, such as the Canadian Geographic Information System started in 1963, were bespoke programs developed specifically for a single installation (usually a government agency), based on custom-designed data models. [ 5 ] During the 1950s and 1960s, academic researchers during the quantitative revolution of geography began writing computer programs to perform spatial analysis , especially at the University of Washington and the University of Michigan , but these were also custom programs that were rarely available to other potential users.
Perhaps the first general-purpose software that provided a range of GIS functionality was the Synagraphic Mapping Package (SYMAP), developed by Howard T. Fisher and others at the nascent Harvard Laboratory for Computer Graphics and Spatial Analysis starting in 1965. While not a true full-range GIS program, it included some basic mapping and analysis functions, and was freely available to other users. [ 6 ] Through the 1970s, the Harvard Lab continued to develop and publish other packages focused on automating specific operations, such as SYMVU (3-D surface visualization), CALFORM ( choropleth maps ), POLYVRT ( topological vector data management), WHIRLPOOL ( vector overlay ), GRID and IMGRID ( raster data management), and others. During the late 1970s, several of these modules were brought together into Odyssey, one of the first commercial complete GIS programs, released in 1980.
During the late 1970s and early 1980s, GIS was emerging in many large government agencies that were responsible for managing land and facilities. Particularly, federal agencies of the United States government developed software that was by definition in the public domain because of the Freedom of Information Act , and was thus released to the public. Notable examples included the Map Overlay and Statistical System (MOSS) developed by the Fish & Wildlife Service and Bureau of Land Management (BLM) starting in 1976; [ 7 ] the PROJ library developed at the United States Geological Survey (USGS), one of the first programming libraries available; and GRASS GIS originally developed by the Army Corps of Engineers starting in 1982. [ 8 ] These formed the foundation of the open source GIS software community.
The 1980s also saw the beginnings of most commercial GIS software, including Esri ARC/INFO in 1982; [ 9 ] Intergraph IGDS in 1985, and the Mapping Display and Analysis System (MIDAS), the first GIS product for MS-DOS personal computers, which later became MapInfo . [ 10 ] These would proliferate in the 1990s with the advent of more powerful personal computers, Microsoft Windows , and the 1990 U.S. Census , which raised awareness of the usefulness of geographic data to businesses and other new users.
Several trends emerged in the late 1990s that have significantly changed the GIS software ecosystem leading to the present, by moving in directions beyond the traditional full-featured desktop GIS application. First, the emergence of object-oriented programming languages facilitated the release of component libraries and application programming interfaces , both commercial and open-source, which encapsulated specific GIS functions, allowing programmers to build spatial capabilities into their own programs. Second, the development of spatial extensions to object-relational database management systems (also both open-source and commercial) created new opportunities for data storage for traditional GIS, but also enabled spatial capabilities to be integrated into enterprise information systems , including business processes such as human resources . Third, as the World Wide Web emerged, web mapping quickly became one of its most popular applications; this led to the development of server-based GIS software that could perform the same functions as a traditional GIS, but at a location remote from a client who only needed a web browser installed. All of these have combined to enable emerging trends in GIS software, such as the use of cloud computing , software as a service (SaaS), and smartphones to broaden the availability of spatial data, processing, and visualization.
The software component of a traditional geographic information system is expected to provide a wide range of functions for handling spatial data: [ 11 ] : 16
The modern GIS software ecosystem includes a variety of products that may include more or less of these capabilities, collect them in a single program, or distribute them over the Internet . These products can be grouped into the following broad classes:
The current software industry consists of many competing products of each of these types, in both open-source and commercial forms. Many of these are listed below; for a direct comparison of the characteristics of some of them, see Comparison of geographic information systems software .
The development of open source GIS software has a long tradition in terms of software history, [ 12 ] with the appearance of a first system in 1978. Numerous systems are available which cover all sectors of geospatial data handling.
The following open-source desktop GIS projects are reviewed in Steiniger and Bocher (2008/9): [ 13 ]
Besides these, there are other open source GIS tools:
Apart from desktop GIS, many other types of GIS software exist.
Note: Almost all of the companies below offer Desktop GIS and WebMap Server products. Some such as Manifold Systems and Esri offer Spatial DBMS products as well.
Many suppliers are now starting to offer Internet based services as well as or instead of downloadable software and/or data. These can be free, funded by advertising or paid for on subscription; they split into three areas: | https://en.wikipedia.org/wiki/Geographic_information_system_software |
Geographic routing (also called georouting [ 1 ] or position-based routing ) is a routing principle that relies on geographic position information. It is mainly proposed for wireless networks and based on the idea that the source sends a message to the geographic location of the destination instead of using the network address . In the area of packet radio networks, the idea of using position information for routing was first proposed in the 1980s [ 2 ] for interconnection networks. [ 3 ] Geographic routing requires that each node can determine its own location and that the source is aware of the location of the destination. With this information, a message can be routed to the destination without knowledge of the network topology or a prior route discovery.
There are various approaches, such as single-path, multi-path and flooding -based strategies (see [ 4 ] for a survey). Most single-path strategies rely on two techniques: greedy forwarding and face routing . Greedy forwarding tries to bring the message closer to the destination in each step using only local information. Thus, each node forwards the message to the neighbor that is most suitable from a local point of view. The most suitable neighbor can be the one who minimizes the distance to the destination in each step (Greedy). Alternatively, one can consider another notion of progress, namely the projected distance on the source-destination line (MFR, "most forward within radius", and NFP, "nearest with forward progress"), or the minimum angle between neighbor and destination (Compass Routing). Not all of these strategies are loop-free, i.e. a message can circulate among nodes in a certain constellation. It is known that the basic greedy strategy and MFR are loop-free, while NFP and Compass Routing are not. [ 5 ]
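A minimal sketch of basic greedy forwarding in the plane follows; the function name and coordinate format are illustrative rather than taken from any specific protocol. It returns the neighbor closest to the destination, or `None` at a local minimum where a recovery strategy such as face routing must take over:

```python
import math

def greedy_next_hop(node, neighbors, dest):
    """Basic greedy forwarding: pick the neighbor closest to the destination.

    Positions are (x, y) tuples in the plane, as in the unit disk graph model.
    Returns None at a dead end, i.e. when no neighbor improves on the current
    node's own distance to the destination.
    """
    dist = lambda a, b: math.hypot(a[0] - b[0], a[1] - b[1])
    best = min(neighbors, key=lambda n: dist(n, dest), default=None)
    if best is None or dist(best, dest) >= dist(node, dest):
        return None  # local minimum: greedy forwarding is stuck
    return best

# One forwarding step: the message moves to the neighbor nearest the target.
print(greedy_next_hop((0, 0), [(1, 1), (0, 1), (-1, 0)], (3, 3)))  # -> (1, 1)
```

The variants mentioned above (MFR, NFP, Compass Routing) differ only in the key used to rank neighbors, e.g. projected progress along the source-destination line instead of raw distance.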
Greedy forwarding can lead into a dead end, where there is no neighbor closer to the destination. Then, face routing helps to recover from that situation and find a path to another node, where greedy forwarding can be resumed. A recovery strategy such as face routing is necessary to assure that a message can be delivered to the destination. The combination of greedy forwarding and face routing was first proposed in 1999 under the name GFG (Greedy-Face-Greedy). [ 6 ] It guarantees delivery in the so-called unit disk graph network model. Various variants proposed later, [ 7 ] including ones for non-unit disk graphs, are based on the principles of GFG. [ 1 ]
Face routing depends on a planar subgraph in general; however distributed planarization is difficult for real wireless sensor networks and does not scale well to 3D environments. [ 8 ]
Although originally developed as a routing scheme that uses the physical positions of each node, geographic routing algorithms have also been applied to networks in which each node is associated with a point in a virtual space, unrelated to its physical position. The process of finding a set of virtual positions for the nodes of a network such that geographic routing using these positions is guaranteed to succeed is called greedy embedding . [ 9 ] | https://en.wikipedia.org/wiki/Geographic_routing |
In the telecommunications industry, a Geographical Operations System (GOS) is an integrated process that combines data integration with geographic mapping capabilities within telecommunications companies. This system encompasses the integration of Geographic Information Systems (GIS) and Operational Support Systems (OSS) to facilitate the seamless exchange of information among employees.
GOS software relies on a central repository for critical data to foster better communication between the various branches of a telecom. GOS software may offer companies a means to achieve technological convergence in their marketed products. Open Database Connectivity (ODBC) is utilized to create a discernible pathway for retrieving information from GOS software for a range of employees who may not be familiar with database protocols. The software creates a channel within a company for experts to share information on the various aspects of the telecommunications company, thus opening the spread of information and increasing efficiency for employees. [ 1 ]
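As an illustration of the kind of ODBC retrieval described here, the following Python sketch uses the pyodbc library; the DSN, table, and column names are hypothetical placeholders rather than part of any real GOS product:

```python
import pyodbc

# Connect through a hypothetical ODBC data source named "GOS".
conn = pyodbc.connect("DSN=GOS;UID=readonly;PWD=example")
cursor = conn.cursor()

# Fetch equipment records within a named service area so that, for example,
# a field technician can see them without knowing the underlying schema.
cursor.execute(
    "SELECT equipment_id, latitude, longitude, status "
    "FROM network_equipment WHERE service_area = ?",
    "Downtown",
)
for equipment_id, lat, lon, status in cursor.fetchall():
    print(f"{equipment_id}: ({lat}, {lon}) {status}")

conn.close()
```

Because ODBC abstracts over the underlying database, the same query would work whether the central repository sits in Oracle, SQL Server, or another backend, which is precisely the pathway the text describes.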
The increasing pressures of competition and expansion in the telecommunications market have driven many vendors in the field to reassess their internal organization and cooperation. Technological innovation has introduced greater capacity and capabilities in the telecommunications market, but has also added complexity for many companies as they attempt to develop commercial offerings with an ever-growing list of products and services. A Geographical Operations System meshes the importance of Geographical Information Systems – which provide the ability to store data in a geographically correct map – with the reliance of telecommunications companies on Operational Support Systems as a way to categorize and maintain customer and equipment records.
The Geographical Operations System simplifies interoperability in a telecommunications company by converging resources that may be stored in different programming languages from across the company into a single software program to be utilized by customer satisfaction representatives, equipment technicians, telecommunications engineers, and the accounting department, among others. Information is made general and uniform throughout a company to allow independent employees to carry out tasks without seeking out the expertise and time of coworkers. | https://en.wikipedia.org/wiki/Geographical_Operations_System |
Baku - Guba - Samur ( Azerbaijan–Russia border )
This 150 km (93 mi) toll road serves as an alternative to the existing Baku-Guba-Samur border road which is 13 km (8.1 mi) longer.
Bangladesh has five toll bridges and four toll roads. None of them use an electronic collection system. In Bangladesh, roads and bridges are built by the government. After building a road or bridge, the government invites tenders to award an operation and management (O&M) contract for five years against a fee. The O&M operator maintains the bridge and collects tolls on behalf of the government. The toll tariff of the Bangabandhu Multipurpose Bridge (formerly known as Jamuna Bridge), at 4.8 km (3.0 mi) the longest bridge in the country, is considered very high compared with other bridges. Md. Mobarak Hossain, the CEO of Marga Net One Limited (a joint venture of PT Jasa Marga (Persero) of Indonesia and Net One Solutions Ltd of Bangladesh), who was also the CEO of the second O&M operator of the Bangabandhu (Jamuna) Bridge, feels that ৳ 400 ( US$ 6.00 ) per private car is too high, while trucks and lorries pay a maximum of US$18.00 for a single trip. The Bangabandhu Bridge is a vital link connecting the eastern part of the country with its northern part.
Nearly all Chinese expressways and express routes charge tolls, although they are not often networked from one toll expressway to another. However, beginning with the Jingshen Expressway , tolls are gradually being networked. Given the size of the nation, however, the task is rather difficult.
China National Highways , which are not expressways, but "grade-A" routes, also charge tolls. Some provincial, autonomous-regional and municipal routes, as well as some major bridges, will also charge passage fees. In November 2004, legislation in China provided for a minimum length of a stretch of road or expressway in order for tolls to be charged.
In Hong Kong , most tunnels and some bridges that form part of the motorway networks are tolled to cover construction and maintenance costs. Some built recently are managed in the Build-Operate-Transfer (BOT) basis. The companies which build the tunnels or bridges are given franchise of a certain length of time (usually 30 years) to operate. Ownership will be transferred to the government when the franchise expires. An example is the Cross-Harbour Tunnel .
Access-controlled roads in India are tolled. In addition to cash tolls, toll plazas have dedicated electronic toll collection lanes for quicker operation.
In addition, most of the upgraded sections of the National Highway network are also tolled. These tolls are lower than those on expressways.
Currently, a massive project is underway to expand the highway network and the Government of India plans to add an additional 18,637 km (11,580 mi) of expressways to the network by the year 2022. [ 1 ]
Indonesia opened its first toll road, the Jagorawi Toll Road , in 1978. This linked the capital city of Jakarta to the neighboring cities of Bogor and Ciawi south of the capital. Since then, Indonesia has seen a dramatic increase on the operational length and reach of its toll road system, spanning 2,893.02 km (1,797.64 mi) by the end of June 2024, with most of it being built during the presidency of Joko Widodo . [ 2 ]
Since October 2017, all toll booths in Indonesia accept only electronic payments through payment cards. [ 3 ]
Highway 6 in Israel , widely known as the Trans-Israel Highway or Cross-Israel Highway, is to date the only electronic toll highway in Israel. Currently Highway 6 is 110 km long, all of which is a freeway. This figure will grow in the next few years as additional segments, currently undergoing statutory approvals and permitting processes, are added to the main section of the road. Highway 6 uses a system of cameras and transponders to toll vehicles automatically. There are no toll booths, allowing Highway 6 to be designed as a normal freeway with interchanges.
The vast majority of Japan's extensive expressway network consists of toll roads. Payment of the fare can be made either in cash on exit or using the electronic toll collection card system. As of 2001, the toll fee for an ordinary passenger car was ¥ 24.60 per kilometre plus a ¥150 terminal charge.
Malaysia has extensive toll roads that form the majority of the country's expressways, which span more than 1,400 km (870 mi), ranging north to the Thai border, south to the Causeway and Second Link to Singapore, west to Klang and Pulau Indah, and east towards Kuantan. Most of the toll roads are in major cities and conurbations such as the Klang Valley , Johor Bahru and Penang . All Malaysian toll roads are managed on a Build-Operate-Transfer basis, as in Hong Kong and Japan (see below ).
All motorways and a few expressways are toll roads. The first such motorway, the M2, was opened to the public in 1997. Since then the M3, M9, M10, and M1, all toll roads, have become operational. The M8 is under construction.
Currently, the Philippines has toll roads mostly on the island of Luzon and one in the Visayas . The toll roads, mostly named after their locations, comprise a total length of 626 km (389 mi). The three major concessionaires of those toll roads are San Miguel Corporation , NLEX Corporation , and Prime Asset Ventures, Inc. The Toll Regulatory Board regulates all toll roads, while the Department of Public Works and Highways plays a crucial role in the planning, design, construction, and maintenance of infrastructure facilities, including expressways.
Electronic toll collection on all Philippine expressways has been on a dry run since 2023, aiming for full implementation in 2024. [ 4 ]
In Singapore, toll stations are automated, thus reducing manpower. The automated toll stations, known to locals as ERP or Electronic Road Pricing, were introduced by the Land Transport Authority (LTA) to reduce city traffic jams. The number of toll stations is increasing rapidly, and some Singaporeans even call the system "Every Road Pay".
Sri Lanka currently operates two toll roads, the Southern Expressway (E 01) and the Katunayake Expressway (E 03). The Kandy–Colombo Expressway (E 02) was in the planning stage as of 2013. The toll revenue is used to repair and maintain the expressways.
Freeways in Taiwan are not toll roads in the usual sense, as toll gates/stations are not located at the entrances and exits of the freeway. Toll stations with weigh stations are located every thirty to forty kilometres on the No. 1 and No. 3 National Freeways of the Republic of China . There are usually no freeway exits once a toll station notification sign appears, making it necessary for the driver to be familiar with the locations of the toll stations in advance.
Other toll roads in Taiwan are usually newly built bridges and tunnels. Tolls are frequently collected to pay off the construction cost and once paid off, the tolls may be repealed.
Toll roads in Tajikistan are owned and operated by Innovative Road Solutions (IRS). The northern point is in the Sughd Viloyat and the southern point ends at Kurgan Tyube (100 km (62 mi) south of the capital of the country, Dushanbe ). While going from end to end costs roughly US$12 for regular two-axle vehicles, it can reach US$100 for semitrucks. The IRS is setting up new toll plazas that will be able to read a digital device attached to the windshield while the vehicle passes through at a speed of no more than 15 km/h (9.3 mph), similar to systems in the United States. This is the only toll road in all of Central Asia, with about 5 cars passing through each toll plaza every minute in every direction. More information can be found on the operator's homepage at www.IRS.tj
Most of the toll roads in Thailand are either within Greater Bangkok or originated from Bangkok. They are called expressways , tollways , and motorways . Two government agencies under the Ministry of Transport, namely the Expressway Authority of Thailand (EXAT) and the Department of Highways (DoH), own networks of toll roads. Some are operated by the agencies themselves; others are operated by private concessionaires. EXAT is in charge of Chaloem Mahanakhon Expressway, Si Rat Expressway, Chalong Rat Expressway, Udon Ratthaya Expressway, Burapha Withi Expressway and Kanchanphisek Outer Ring Road (southern section). DoH is in charge of Uttraphimuk Tollway (formerly Don Mueang Tollway), Motorway No. 7 (Bangkok- Chonburi ), and Motorway No. 9 (Kanchanphisek Outer Ring Road - eastern section). Both agencies have plans to build more toll roads in the future, expanding their networks to the provinces.
Electronic Toll Collection using passive RFID tags is used in Chaloem Mahanakhon, Si Rat, Chalong Rat, and Burapha Withi Expressway while Uttraphimuk Tollway employs passive IC-card-based Touch-and-Go system. There are plans to upgrade and expand ETC systems in the near future.
The toll system Salik started in Dubai in July 2007.
Morocco has an extensive system of toll roads, or autoroutes. These were for the most part recently built, and from Casablanca connect all of Morocco's major cities such as Marrakech , Rabat , and Tangier . The operator Autoroutes du Maroc runs the network on a pay-per-use basis, with toll stations placed along its length. The goal is to complete a north–south and an east–west link crossing the country. Both axes will be important sections of the Pan-African main links. [ 5 ]
In South Africa, some of the National routes have sections that are toll roads (with physical tollgates), namely the N1 , N2 , N3 , N4 & N17 . All toll roads are run by the South African National Roads Agency Limited [1] except for the N4 and part of the N3, which are run by concessionaires.
In 2013, the Ntabazinduna Toll Plaza was opened outside Bulawayo on the A5 road to Harare . The toll system was introduced by a South African group named Group Five as part of a government project to provide safer highways and to benefit the local community and the local economy. Eight more toll plazas are planned to operate in Zimbabwe.
In Zambia, every Inter-Territorial Road (designated with the letter T; except for T6), together with many Territorial Roads (designated with the letter M) and very few District Roads (designated with the letter D) are toll roads with tollgates. [ 6 ] The tollgates are run by the National Road Fund Agency (NRFA) and the Road Development Agency (RDA). [ 6 ]
Toll roads in Europe have a long history. The first turnpike road in England was authorised in the seventeenth century. The term turnpike refers to a gate on which sharp pikes would be fixed as a defence against cavalry. Early references include the (mythical) Greek ferryman Charon charging a toll to ferry (dead) people across the river Acheron . Germanic tribes charged tolls to travellers across mountain passes . Tolls were used in the Holy Roman Empire in the 14th century and 15th century.
In some European countries payment of road tolls is made using stickers which are affixed to the windscreen. Germany uses a system based on satellite technology for large vehicles. In other countries payment may be made in cash, by credit card, by pre-paid card or by an electronic toll collection system. Tolls may vary according to the distance travelled, the building and maintenance costs of the motorway and the type of vehicle.
Some of these toll roads are privately owned and operated. Others are owned by the government. Some of the government-owned toll roads are privately operated.
Major highways in Belarus are toll roads with Open road tolling (ORT) or free-flow tolling. BelToll is an electronic toll collection system (ETC), valid from 1 July 2013 in the Republic of Belarus.
Almost all Croatian highways are toll roads with the exception of the Zagreb bypass and Rijeka bypass .
There are five vehicle categories in Croatia that differ in weight, height, number of axles and trailer attachment. The toll for the use of a motorway on which a closed or open toll system has been introduced is calculated and charged according to the distance between the two toll points the vehicle passes, the group of vehicles to which the vehicle is deployed and the unit price per kilometer. The unit price per kilometer of motorway, i.e. individual sections of motorway, is determined according to construction costs, maintenance costs, management costs and costs of development of motorways and toll road facilities. The unit price per kilometer can be determined differently for each section of the motorway. [ 7 ]
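In other words, under the Croatian closed system the charge is essentially the distance between the entry and exit toll points multiplied by a per-kilometre unit price for the vehicle's group. A minimal sketch of that calculation follows; the unit prices are made up for illustration, since real tariffs vary by section and are set by the operators:

```python
# Hypothetical unit prices in EUR per km for the five vehicle groups.
UNIT_PRICE_PER_KM = {1: 0.05, 2: 0.08, 3: 0.12, 4: 0.20, 5: 0.30}

def toll(distance_km: float, vehicle_group: int) -> float:
    """Closed-system toll: distance between toll points times unit price."""
    return round(distance_km * UNIT_PRICE_PER_KM[vehicle_group], 2)

print(toll(120, 1))  # a 120 km trip for a group-1 vehicle -> 6.0
```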
Toll payment is possible in six ways: [ 8 ]
There are three Croatian companies that build and maintain highways and collect tolls: [ 9 ]
The Great Belt Fixed Link and the Øresund Bridge are toll roads.
In the Faroe Islands , the inter-island road tunnels Vágatunnilin , Norðoyatunnilin , Eysturoyartunnilin and Sandoyartunnilin have tolls (but no physical toll booths are present and the toll must be paid at nearby petrol stations).
In Europe, the most substantial use of toll roads is in France , where most of the autoroutes carry quite heavy tolls.
The Hvalfjörður Tunnel was tolled from 11 July 1998 but became toll free as of 28 September 2018. Currently the Vaðlaheiðargöng tunnel, opened in December 2018, is the only toll road in Iceland.
Ireland has three toll roads, three toll bridges, and two toll tunnels, which are operated by various independent operators. Most were built under a public-private partnership system, giving the company which arranged for the road to be built the right to collect tolls for a defined period. Tolls vary from €1.65 to €12 for cars.
Most Italian motorways are toll roads, with some exceptions such as some motorways in Southern Italy and Sicily or the Grande Raccordo Anulare ( Rome 's ring road).
On most motorways, the toll is proportional to the distance traveled and has to be paid on exit, where toll gates ( caselli in Italian) are placed. On other motorways, however, toll gates ( barriere in Italian) are placed directly along the route. In such cases, a fixed amount must be paid, regardless of the distance traveled. The A8 , A9 , and A52 are good examples of that system.
Toll can be paid in cash, by credit card, by pre-paid card, or by Telepass .
61% of the Italian motorways are handled by the "Autostrade per l'Italia S.p.A." company, and its subsidiaries. All of these carriers are now privately owned and supervised by ANAS. The network of motorways covers most of Italy : northern and central Italy are well covered, the south and Sicily are scarcely covered, Sardinia is not covered at all.
The motorway operators are required to build, operate and maintain their networks at cost and to cover their expenses from the toll they collect. The tolls vary according to the building and maintenance costs of the motorway and the type of vehicle.
Besides the motorways, only some alpine tunnels (such as the Mont Blanc Tunnel ) are tolled. Today, no toll is required on other roads, including motorway-like dual carriageways ( superstrade in Italian). The first tolled superstrada is now under construction north of Venice.
At the beginning of the 20th century, almost all communities in the Netherlands collected tolls on all passing traffic, usually including pedestrians and livestock. In 1953, the central government abolished all communal tolls.
As of 2008, there are three effective toll roads in the Netherlands: the Western Scheldt Tunnel and the Kil tunnel , both major arteries, and the "Tolbrug" (Toll Bridge) in Nieuwerbrug , a local hand-drawn bridge. In addition, for the Wijkertunnel , a "shadow toll" is paid by Rijkswaterstaat for each passing vehicle.
Norway has made extensive use of tolls to finance road infrastructure in recent decades.
There are also toll rings around some cities, where drivers have to pay to enter or leave the city, regardless of whether the road is new or old. The first such city was Bergen in 1986. The money goes to the construction of infrastructure in and around the city.
There are three toll highways in Poland, connecting the major cities and the nation's boundaries. Two routes travel east–west, one running between Łódź and the German border, the other currently connecting Katowice and Kraków , with current construction extending the roads to the German and Ukrainian boundaries. A north–south route connects Rybnik to Katowice, and Toruń to Gdańsk .
In Portugal a certain number of roads are designated toll roads. They charge a fixed value per kilometre travelled, with several classes depending on vehicle type, regulated by the government. Several authorised franchises run them, the largest at present being BRISA . For cash-free payment there exists Via Verde , an electronic toll collection system: on leaving the motorway, charges are automatically debited from a bank account.
There are a number of toll roads in Barnaul and Pskov Region : the Nevil-Velezh road (190 ₽ ($8)) and the Pechori–state border road (140 ₽), as well as a section of the M4 Don (an 18 km (11 mi) stretch close to Lipetsk costs 20 ₽ ($0.75) for cars and 40 ₽ ($1.70) for trucks).
The overall toll network is 383 kilometres (238 mi), or 0.05% of the total road network. The average price in Pskov region, which has 226 km (140 mi) of toll roads, is 2.4 ₽ to 5.5 ₽ per km for cars and 7.9 ₽ to 19.5 ₽ for trucks. This comes close to $0.50 per km for trucks.
Ordinary speed limits apply so far. A Concession Law was adopted in 2005 and a Toll Road Law in 2007 to develop this sector.
For the use of 464.7 km (288.8 mi) of Slovenian freeways and expressways, toll stickers have been obligatory for all vehicles with a permissible maximum weight of up to 3.5 tonnes (3.9 tons) since 1 July 2008. The sticker costs €15 for 7 days, €30 for a month and €95 for a year. Motorcyclists pay €7.50 for 7 days, €25 for a half year and €47.50 for a year. [ 10 ] Trucks use the existing toll road stops. [ 11 ] Use of highways and expressways without a valid and properly displayed sticker in a vehicle is a violation of the law and is punished with a fine of €300 or more. [ 12 ]
Due to the high cost of toll stickers for transit drivers going on vacation to the Croatian and Montenegrin coasts, and for others only passing through Slovenia, the highways are avoided by some travellers. [ 12 ] [ 13 ] [ 14 ] The European Commission opened a case on the grounds that the Slovenian vignette violated prevailing EU law and discriminated against road users. The European commissioner for traffic and transport, Antonio Tajani, investigated the discrimination complaint. [ 15 ] On 28 January 2010, after short-term vignettes were introduced by Slovenia and some other changes were made to the Slovenian vignette system, the European Commission concluded that the vignette system is in accordance with European law . [ 16 ]
Most Spanish toll roads are networked, so drivers must take a ticket on entering and pay when leaving the road. Technically, all roads belong to the government, although toll roads are built and maintained by private companies under a state concession; when the concession expires, the road reverts to state ownership. However, most concessions are renewed. Toll roads are called autopistas in Spanish. Freeways, often comparable to autopistas in building and ride quality, are called autovías .
There are some autovías that are actually built and maintained by private companies, such as the Pamplona-Logroño A-12 [ 17 ] [ 18 ] or the Madrid access road M45. [ 19 ] The company assumes the building costs, and the Autonomous Community where the road is located (in the given examples, Navarre and Madrid ) pays the company a yearly per-vehicle fee based on usage statistics, called a "shadow toll" (in Spanish, peaje en la sombra ). [ 20 ] The system can be regarded as a way for the Government to finance the building of new roads at the expense of the building company. Also, since payment starts only after the road is finished, construction delays are usually shorter than those of regular state-owned freeways. However, such roads cannot be classified as toll roads, since drivers do not pay any fees directly.
The border-crossing Oresund Bridge has a toll of around €50. Other bridges, including the Sundsvall Bridge (and, until 2021, the Svinesund Bridge ), charge smaller tolls. The Stockholm and Gothenburg city areas have congestion pricing on entry and exit. Before 2000, Sweden had gone several decades without road tolls.
For the use of Swiss motorways a toll sticker is obligatory. Stickers cost CHF 40 per year per vehicle (a car towing a trailer needs two stickers). There are no stickers for shorter periods, and each is valid for 14 months (the 2010 sticker, for example, is valid from 1 December 2009 until 31 January 2011). This also means, however, that a sticker bought later in the year can only be used for the time remaining until 31 January of the following year.
In the Republic of Turkey toll is collected on certain highways, the so-called Otoyol or Karayolu , through three different systems; every toll road has lanes for all three payment methods. One method is KGS (Kartlı Geçiş Sistemi) ( English : Card passage system ), which requires a prepaid card to be presented at the toll gate; each passage is deducted from the card. Another method is HGS (Hızlı Geçiş Sistemi) ( English : Fast passage system ), which uses an RFID chip stuck to the windshield of the vehicle. The chip is scanned automatically at the toll collection point and the toll is withdrawn from the linked bank account. The last system is OGS, called Otomatik Geçiş Sistemi in Turkish , which translates to Automatic passage system in English . This form of payment requires a fixed amount of money to be paid for a monthly or annual subscription. Subscribed cars are fitted with a barcoded sticker, which CCTV cameras check automatically to verify that the car is actually subscribed. The Turkish toll system cannot be evaded (except by avoiding toll roads altogether): one simply cannot pass a KGS gate without the required card, and thanks to the cameras, all cars passing OGS and HGS points with missing, expired or counterfeit chips or stickers receive a fine delivered to their home address.
Road rates were introduced in England in the seventeenth century. The first turnpike road, whereby travellers paid tolls to be used for road upkeep, was authorised in 1663 for a section of the Great North Road in Hertfordshire . The first turnpike trust was established by Parliament through a Turnpike Act in 1706. From 1751 until 1772, there was a flurry of interest in turnpike trusts and a further 390 were established. By 1825, over 1,000 trusts controlled 25,000 miles (40,000 km) of road in England and Wales .
The rise of railway transport largely halted the improving schemes of the turnpike trusts. Unable to earn sufficient revenue from tolls alone, the trusts took to levying rates on the local parishes. The system was never properly reformed; from the 1870s Parliament stopped renewing the acts and roads began to revert to local authorities, the last trust vanishing in 1895. The Local Government Act 1888 created county councils and gave them responsibility for maintaining the major roads. A small number of toll bridges remain, including Swinford toll bridge near Oxford .
Most UK roads today are maintained from general taxation, some of which is raised from motoring taxes including fuel duty and vehicle excise duty . There are now few tolls on roads in the United Kingdom, mainly on toll bridges and tunnels. Until recently there were only two toll roads to which there was a public right of way (Rye Road in Stanstead Abbotts and College Road in Dulwich ), together with another five or so private toll roads. The M6 Toll motorway to the north of Birmingham levies a usage charge.
In November 2006, the Ministry of Public Works, Services and Housing of Bolivia created Vías Bolivia, a public entity charged with pricing, managing and maintaining the toll roads across the country. [ 21 ] As of 2021, there are 141 operating toll roads in Bolivia, as well as 13 weigh stations for commercial vehicles and trucks. [ 21 ]
In Brazil , toll roads are a recent institution and were adopted mostly on non-federal highways . The state of São Paulo has the greatest length of toll roads, operated either by private companies that bought a concession from the state or by a state-owned company (see Highway system of São Paulo ). São Paulo also has a statewide electronic collection system named SemParar, which uses a plastic transponder (e-tag) attached to the windscreen. There is a growing trend towards tolling all major highways of the country, but some popular resistance is beginning to be felt, particularly over perceived abuses: because the Brazilian highway system has very few untolled local roads running parallel to highways, tolling restricts the constitutional right of free movement, and it makes some trips extremely expensive relative to average Brazilian earning power (in São Paulo, a 1,000 km (620 mi) round trip on some roads may cost upwards of two hundred Brazilian reais, more than the cost of petrol).
Most tolled roadways in Canada are bridges to the United States , although a few domestic bridges in some provinces have tolls. Toll highways disappeared, for the most part, in the 1970s and 1980s. In the 1990s, political pressure led to the removal of new tolls on an upgraded section of the Trans-Canada Highway in New Brunswick . Highway 407 in the Greater Toronto Area is a modern toll route with no collection booths, using overhead sensors instead. It has been heavily criticized because the government leased it for 99 years, giving the operating company nearly unlimited control of the highway and its tolls; as a result it is expensive, but it remains a necessity for gridlocked Toronto . Nova Scotia has a toll highway on the Trans Canada Highway between Debert and Oxford .
Many highways in Colombia charge tolls. Motorcycles are allowed to pass for free.
The Pan-American Highway in Ecuador charges tolls. Motorcycles pay a reduced fare.
Mexico has an extensive system of toll roads, or autopistas. Autopistas are funded by federal taxes and built to nearly the same standards as the US Interstate Highway System. Many Mexican states, such as Puebla, Veracruz and Nuevo León, also have their own toll roads.
All federal toll highways offer three payment options: cash, credit card and the electronic IAVE tag.
The IAVE system on all highways is operated by Caminos y Puentes Federales (CAPUFE).
Most of the toll roads in Panama were built in the mid-1990s, with the exception of the Arraijan-Chorrerra Highway . The three modern toll roads were built using the BOT formula, following the transportation plan drawn up by the Government of Japan in the mid-1980s. These highways are the Corredor Norte, north of Panama City , and the Corredor Sur, to the south. Another, the Panama-Colon Highway , was built later.
There are several toll roads in Puerto Rico, where toll roads are called "autopista" (which loosely translates to "car track") and toll houses are called "peaje".
A toll road in the United States, especially near the east coast, is often called a turnpike . The term turnpike originated from the turnstile or gate which blocked passage until the fare was paid at a toll house (or toll booth in current terminology). Most tolled facilities in the US today use an electronic toll collection system as an alternative to paying cash. Examples include the E-ZPass system used on most toll bridges and toll roads in the eastern U.S. from North Carolina to Maine and Illinois ; Houston 's EZ Tag , which also works in other parts of the state of Texas ; Oklahoma 's Pikepass (which also works in Texas and Kansas); California 's FasTrak ; Illinois ' I-Pass ; and Florida 's SunPass . As of 2006, toll roads exist in only 26 states; the majority of states without any turnpikes are in the West and South .
After a halt in toll road construction following the establishment of the Interstate Highway System in 1956, many states are returning to tolls to fund capital improvements and manage congestion. This is because the cost of expanding and maintaining the highway network is increasing faster than the revenue that the federal gasoline tax can generate for the Highway Trust Fund . Years after abolishing tolls, Kentucky and Connecticut have both re-examined the possibility of reinstating tolls on some highways, while several other states are advancing the construction of new toll roads to supplement their existing networks of toll-free expressways.
In Australia, a small number of motorways have been tolled to cover the expense of their construction. Such roads can be found in the Australian cities of Brisbane, Sydney and Melbourne. There are no toll roads in the Australian states of South Australia , Western Australia , Tasmania or any of the mainland territories . Toll collection is entirely electronic; there are no longer any cash booths in Australia.
In Brisbane , there are three tollway operators ( Brisbane City Council , Queensland Motorways , and RiverCity Motorway ). Brisbane City Council owns and operates the Go Between Bridge over the Brisbane River in the city. Queensland Motorways operates the tolls on the Sir Leo Hielscher Bridges , and another two on the Logan Motorway on the south side. RiverCity Motorway operates the Clem Jones Tunnel , which runs underneath the city between the inner southern and northern suburbs. All toll collection points are electronically operated. Another company, BrisConnections, is currently constructing a further toll tunnel (the longest tunnel in Australia), called the Airport Link , which will allow traffic to flow from the northern Clem Jones - Inner City Bypass interchange directly to Brisbane Airport . Construction is due to be complete in 2012. International travellers and people new to Brisbane should note that the penalty for non-payment of tolls is in excess of $140 per trip.
In Melbourne , two companies operate tollways within the Melbourne metropolitan area. Transurban operates CityLink , covering sections of the Monash Freeway , Southern Link, Western Link and the upgraded sections of the Tullamarine Freeway . ConnectEast operates EastLink , which runs through the eastern suburbs of Melbourne. All Melbourne tollways are electronically tolled. The West Gate Bridge opened as a toll bridge upon its completion in 1978; the toll was abolished in 1985.
In Sydney , many of the motorways contain at least one tolled section, with a mixture of government and private ownership. The State Government owns the Sydney Harbour Bridge and Sydney Harbour Tunnel , while the M2 Motorway , M4 Motorway , M5 Motorway , Eastern Distributor , Westlink M7 and Lane Cove Tunnel are privately operated by a variety of companies such as Macquarie Infrastructure and Transurban, and to a lesser extent by industry super funds such as Retail Employees Super, SunSuper, and Industry Funds Management, which partly own the M5 motorway in South Western Sydney.
As well as the tolled motorways, the Cross City Tunnel , an east–west route underneath the Sydney CBD, was opened to traffic in 2005. This road has become somewhat controversial due to the relatively high toll charge and the closure of surrounding roads, designed to funnel traffic through the tunnel.
All Sydney tollways accept E-tags. The Westlink M7 , Sydney Harbour Tunnel , Cross City Tunnel , Lane Cove Tunnel and, from 1 December 2007, the M2 Motorway [2] [3] have no cash booths: tolls are charged only electronically, either through an E-Tag or by number-plate reading, in which case the toll must be paid within a set time frame (for example, 24 hours) to avoid a fine arriving in the mail. The M5 Motorway moved to electronic-only tolling in 2013. Tolls on the M4 Motorway were abolished in 2010. An E-Tag is an RFID device that allows a driver to pass through a toll point without physically stopping: when a vehicle fitted with an E-Tag passes through a toll collection point, the E-Tag identifies the electronic account of the vehicle, and the toll-road operator recovers the toll via that account. There are four providers of E-Tag accounts in New South Wales (RTA, RoamTag, Interlink Roads, and M2 Consortium), and tags from all four providers can be used on every E-Tag-enabled tollway in Australia. | https://en.wikipedia.org/wiki/Geography_of_toll_roads
Geohashing / ˈ dʒ iː oʊ ˌ h æ ʃ ɪ ŋ / is an outdoor recreational activity inspired by the webcomic xkcd , in which participants have to reach a random location (chosen by a computer algorithm ), prove their achievement by taking a picture of a Global Positioning System (GPS) receiver or another mobile device and then tell the story of their trip online. Proof based on non-electronic navigation is also acceptable. [ 1 ]
The geohashing community and culture is extremely tongue-in-cheek, supporting any kind of humorous behavior during the practice of geohashing and resulting in a parody of traditional outdoor activities. [ 2 ] Navigating to a random point is sometimes done with a goal in mind. Some geohashers document new mapping features they find on the OpenStreetMap project, clean up litter, or create art to commemorate the trip, among other activities.
A variation on geocaching , known as geodashing , features a closely comparable principle, with participants racing between coordinate points.
On May 21, 2008, the 426th xkcd comic was published. Titled "Geohashing", it described an algorithm by which a computer can generate seemingly random Global Positioning System (GPS) coordinates each day from the Dow Jones Industrial Average and the current date. [ 3 ] The algorithm was quickly seized upon by the xkcd community, which used it as intended by xkcd creator Randall Munroe . [ 4 ]
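The published algorithm is compact enough to sketch directly. The Python version below is a minimal illustration, assuming the original comic's formulation (the function name is ours, and the later "30W rule" governing which day's Dow opening applies east of 30°W is omitted): the date and the Dow opening are concatenated, MD5-hashed, and the two halves of the digest are read as hexadecimal fractions that become the decimal parts of the graticule coordinates.

```python
import hashlib

def geohash(lat_int, lon_int, date_str, djia_open):
    """Compute the day's geohash point inside one 1-degree graticule.

    lat_int, lon_int -- signed integer parts of the graticule
    date_str         -- ISO date, e.g. "2005-05-26"
    djia_open        -- Dow Jones opening price as a string, e.g. "10458.68"
    """
    digest = hashlib.md5(f"{date_str}-{djia_open}".encode()).hexdigest()
    # Read each 16-hex-digit half of the 32-digit digest as a fraction in [0, 1).
    lat_frac = int(digest[:16], 16) / 16**16
    lon_frac = int(digest[16:], 16) / 16**16
    # Append the fraction away from zero so the point stays inside the graticule.
    lat = lat_int + lat_frac if lat_int >= 0 else lat_int - lat_frac
    lon = lon_int + lon_frac if lon_int >= 0 else lon_int - lon_frac
    return lat, lon

# The comic's worked example: graticule 37, -122 on 2005-05-26, Dow open 10458.68
print(geohash(37, -122, "2005-05-26", "10458.68"))
# -> approximately (37.857713, -122.544544)
```

On any given day the fractional offsets are identical everywhere; only the integer parts differ, so every graticule in the world gets its own copy of the same daily point.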
Originally a stub where people willing to try the algorithm in real life could post their reports, the official geohashing wiki expanded in the following weeks and was a working website as early as June 2008. [ 4 ] The current expedition protocol was established over the following years, with the creation of humorous awards, regional meetups and a hall of amazingness for the various geohasher achievements. [ 5 ]
Over time, geohashing gained fame across the internet and now counts more than 15,000 expedition reports. [ 6 ] Over a thousand users are registered on the geohashing wiki, though not all are currently active. [ 7 ] Geohashing has spread mostly in North America, Europe and Australia, especially around cities. [ 8 ]
Geohashing divides the earth into a grid of graticules one degree wide in latitude and longitude. Inside each graticule, a random location is set. Geohashers then have the opportunity to travel to the chosen location, either inside their own graticule or in a nearby one. If the location is inaccessible or on private land, geohashers are advised not to try to reach it, although seemingly inaccessible locations have been reached several times. In addition to the repeating location in each graticule, each day there is a single global hashpoint, which is much more challenging to reach. [ 9 ] | https://en.wikipedia.org/wiki/Geohashing
The Geologic Calendar is a scale in which the geological timespan of the Earth is mapped onto a calendar year; that is to say, day one of the Earth falls on a geologic January 1 at precisely midnight, and the present date and time is December 31 at midnight. [ 1 ] On this calendar, the inferred appearance of the first living single-celled organisms, prokaryotes , occurred on a geologic February 25 around 12:30 pm to 1:07 pm, [ 2 ] dinosaurs first appeared on December 13, the first flowering plants on December 22 and the first primates on December 28 at about 9:43 pm. The first anatomically modern humans did not arrive until around 11:48 pm on New Year's Eve, and all of human history since the end of the last ice age occurred in the last 82.2 seconds before midnight of the new year.
A variation of this analogy instead compresses Earth's 4.6-billion-year history into a single day: the Earth still forms at midnight, and the present day is also represented by midnight, but the first life on Earth would appear at 4:00 am, dinosaurs at 10:00 pm, the first flowers at 10:30 pm and the first primates at 11:30 pm, while modern humans would not appear until the last two seconds before midnight.
A third analogy was created by University of Washington paleontologist Peter Ward and astronomer Donald Brownlee, both known for their Rare Earth hypothesis, for their book The Life and Death of Planet Earth . It alters the calendar to include the Earth's future, leading up to the Sun's death in about 5 billion years, so that each month represents 1 of 12 billion years of the Earth's life. On this calendar, the first life appears in January and the first animals in May, with the present day falling on May 18. Even though the Sun will not destroy the Earth until December 31, all animals will have died out by the end of May.
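The arithmetic behind the year-long version is a simple linear rescaling. The sketch below is a minimal illustration, assuming a total age of 4.6 billion years and a 365-day year (rounding choices like these explain small differences between published dates):

```python
from datetime import datetime, timedelta

EARTH_AGE_MA = 4600.0              # assumed total age of the Earth, in Ma
YEAR_SECONDS = 365 * 24 * 3600     # length of the "geologic year"

def geologic_date(ma_ago):
    """Map an event `ma_ago` million years ago onto the one-year calendar."""
    elapsed = (EARTH_AGE_MA - ma_ago) / EARTH_AGE_MA   # fraction of year passed
    moment = datetime(2001, 1, 1) + timedelta(seconds=elapsed * YEAR_SECONDS)
    return moment.strftime("%B %d, %H:%M")

print(geologic_date(4600))   # formation of the Earth -> January 01, 00:00
print(geologic_date(235))    # first dinosaurs        -> around December 13
print(geologic_date(0.1))    # modern humans          -> December 31, ~23:48
```

The single-day variant is the same mapping with the year replaced by 24 hours.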
Use of the geologic calendar as a conceptual aid dates back at least to the mid 20th century, for example in Richard Carrington's 1956 book A Guide to Earth History [ 3 ] and Gove Hambidge's 1941 chapter in the book Climate and Man . [ 4 ] Some authors also used a similar imaginative device of compressing the entire history of the human species to a shorter period, whether a single year as in Ramsay Muir 's 1940 book Civilization and Liberty, [ 5 ] or fifty-year span as in James Harvey Robinson's 1921 book The Mind in the Making . [ 6 ]
| https://en.wikipedia.org/wiki/Geologic_Calendar
A geologic preliminary investigation is a survey of the subsoil conducted by an engineering geologist in conjunction with a civil engineer . Typically, the footprint of the structure is established on the proposed building site and trenches up to fourteen feet deep are dug both outside and, more importantly, inside the proposed footprint using the bucket end of a backhoe. In extreme cases, a larger, more powerful tracked excavator is used. The geologist looks for potential failure planes, expansive clays , excessive moisture, potential for proper compaction, and other variables that go into the construction of a solid foundation (such as potential for liquefaction ). Materials are also gathered to determine the maximum compaction value (a "proctor") of the subsurface. Preliminary investigations should always be conducted prior to the construction of any permanent structure. | https://en.wikipedia.org/wiki/Geologic_preliminary_investigation
The geologic record in stratigraphy , paleontology and other natural sciences refers to the entirety of the layers of rock strata . That is, deposits laid down by volcanism or by deposition of sediment derived from weathering detritus ( clays , sands etc.). This includes all its fossil content and the information it yields about the history of the Earth: its past climate , geography, geology and the evolution of life on its surface. According to the law of superposition , sedimentary and volcanic rock layers are deposited on top of each other. They harden over time to become a solidified ( competent ) rock column, that may be intruded by igneous rocks and disrupted by tectonic events.
At a certain locality on the Earth's surface, the rock column provides a cross section of the natural history in the area during the time covered by the age of the rocks. This is sometimes called the rock history and gives a window into the natural history of the location that spans many geological time units such as ages, epochs, or in some cases even multiple major geologic periods for the particular geographic region or regions. The geologic record is in no one place entirely complete, [ 1 ] for where geologic forces in one age provide a low-lying region that accumulates deposits much like a layer cake, in the next age they may uplift the region, so that the same area is instead weathered and torn down by chemistry, wind, temperature, and water. That is to say, in a given location the geologic record can be, and quite often is, interrupted as the ancient local environment was converted by geological forces into new landforms and features. Sediment core data at the mouths of large riverine drainage basins , some of which go 7 miles (11 km) deep, thoroughly support the law of superposition.
However, using widely occurring deposited layers trapped within rock columns at different locations, geologists have pieced together a system of units covering most of the geologic time scale using the law of superposition: where tectonic forces have uplifted one ridge, newly subject to erosion and weathering as its strata are folded and faulted, they have also created a nearby trough or structural basin at relatively lower elevation that can accumulate additional deposits. By comparing overall formations, geologic structures and local strata, calibrated against those layers which are widespread, a nearly complete geologic record has been constructed since the 17th century.
Correcting for discordancies can be done in a number of ways, utilizing a number of technologies as well as field research results from studies in other disciplines.
In this example, the study of layered rocks and the fossils they contain is called biostratigraphy and utilizes amassed geobiology and paleobiological knowledge. Fossils can be used to recognize rock layers of the same or different geologic ages , thereby coordinating locally occurring geologic stages to the overall geologic timeline .
(USGS figure: pictures of fossils of monocellular algae, taken with a scanning electron microscope and magnified 250 times.)
In the U.S. state of South Carolina, three marker species of fossil algae are found in a rock core, whereas in Virginia only two of the three species are found in the Eocene Series of rock layers, which spans three stages and the geologic ages from 37.2 to 55.8 Ma .
Comparing the discordant record with the full rock column shows both the non-occurrence of the missing species and that the corresponding portion of the local rock record , from the early part of the middle Eocene, is missing there. This is one form of discordancy and illustrates the means geologists use to compensate for local variations in the rock record. With the two remaining marker species it is possible to correlate rock layers of the same age (early Eocene and the latter part of the middle Eocene) in both South Carolina and Virginia, and thereby "calibrate" the local rock column into its proper place in the overall geologic record.
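The correlation logic can be made concrete with a toy example. In the Python sketch below the species identifiers and occurrence sets are hypothetical, standing in for the three Eocene marker algae: markers shared between columns tie layers of the same age together, while a marker absent from one column flags a gap in that local record.

```python
# Hypothetical marker species, listed oldest to youngest, representing the
# full regional succession recovered from the more complete core.
reference = ["marker_1", "marker_2", "marker_3"]

# Marker species actually observed in each local rock column.
columns = {
    "South Carolina": {"marker_1", "marker_2", "marker_3"},
    "Virginia":       {"marker_1", "marker_3"},   # middle interval absent
}

for site, observed in columns.items():
    shared  = [s for s in reference if s in observed]
    missing = [s for s in reference if s not in observed]
    print(f"{site}: correlate via {shared}; local gap where {missing} should occur")
```

Running this flags the missing middle marker in the Virginia column, which is exactly the inference drawn above: the early part of the middle Eocene is absent from that local record.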
Consequently, as the picture of the overall rock record emerged, and discontinuities and similarities in one place were cross-correlated to those in others, it became useful to subdivide the overall geologic record into a series of component sub-sections representing different-sized groups of layers within known geologic time, from the shortest-span stage to the thickest strata ( eonothem ) and the longest time span ( eon ). Concurrent work in other natural science fields required a time continuum to be defined, and earth scientists decided to coordinate the system of rock layers and their identification criteria with that of the geologic time scale . This pairs each physical rock unit (eonothem, erathem, system, series, stage) with the geologic time unit (eon, era, period, epoch, age) during which it was deposited. | https://en.wikipedia.org/wiki/Geologic_record
Geological engineering is a discipline of engineering concerned with the application of geological science and engineering principles to fields such as civil engineering , mining , environmental engineering , and forestry , among others. [ 1 ] The work of geological engineers often directs or supports the work of other engineering disciplines, for example by assessing the suitability of locations for civil engineering , environmental engineering , mining operations, and oil and gas projects through geological, geoenvironmental, geophysical, and geotechnical studies. [ 2 ] They are involved with impact studies for facilities and operations that affect surface and subsurface environments. The engineering design input and other recommendations made by geological engineers on these projects often have a large impact on construction and operations.

Geological engineers plan, design , and implement geotechnical, geological, geophysical, hydrogeological, and environmental data acquisition, ranging from manual ground-based methods to deep drilling, geochemical sampling , advanced geophysical techniques and satellite surveying. [ 3 ] They are also concerned with the analysis of past and future ground behaviour, mapping at all scales, and ground characterization programs for specific engineering requirements. [ 1 ] These analyses lead geological engineers to make recommendations and prepare reports which can have major effects on the foundations of construction, mining, and civil engineering projects. [ 1 ] Some examples of projects include rock excavation, building foundation consolidation, pressure grouting, hydraulic channel erosion control, slope and fill stabilization, landslide risk assessment, groundwater monitoring, and assessment and remediation of contamination.

In addition, geological engineers are included on design teams that develop solutions to surface hazards, groundwater remediation , underground and surface excavation projects, and resource management. Like mining engineers , geological engineers also conduct resource exploration campaigns, mine evaluation and feasibility assessments, and contribute to the ongoing efficiency, sustainability, and safety of active mining projects. [ 4 ]
While the term geological engineering was not coined until the 19th century, [ 5 ] the principles of geological engineering have been demonstrated throughout millennia of human history.
One of the oldest examples of geological engineering principles is the Euphrates tunnel , constructed around 2180–2160 B.C. [ 6 ] This and other tunnels and qanats from around the same period were used by ancient civilizations such as Babylon and Persia for irrigation . [ 6 ] Another famous example of geological engineering principles in an ancient engineering project is the construction of the Eupalinos aqueduct tunnel in Ancient Greece . [ 7 ] This was the first tunnel to be constructed inward from both ends using principles of geometry and trigonometry , marking a significant milestone for both civil engineering and geological engineering. [ 7 ]
Although projects applying geological engineering principles in their design and construction have existed for thousands of years, these were included within the civil engineering discipline for most of that time. Courses in geological engineering have been offered since the early 1900s; however, they remained specialized offerings until a large increase in demand arose in the mid-20th century. [ 2 ] This demand was created by problems encountered in the development of increasingly large and ambitious structures, by human-generated waste, by the scarcity of mineral and energy resources, and by anthropogenic climate change, all of which created the need for a more specialized field of engineering with professional engineers who were also experts in geological or Earth sciences .
Notable disasters credited with prompting the formal creation of the geological engineering discipline include dam failures in the United States and western Europe in the 1950s and 1960s, most famously the St Francis dam failure (1928), [ 8 ] the Malpasset dam failure (1959), [ 9 ] and the Vajont dam failure (1963), [ 10 ] where a lack of knowledge of geology resulted in almost 3,000 deaths between the latter two alone. The Malpasset dam failure is regarded as the largest civil engineering disaster of the 20th century in France, and the Vajont dam failure is still the deadliest landslide in European history.
Post-secondary degrees in geological engineering are offered at various universities around the world but are concentrated primarily in North America . Geological engineers often obtain degrees that include courses in both geological or Earth sciences and engineering . To practice as a professional geological engineer, a bachelor's degree in a related discipline from an accredited institution is required. [ 2 ] For certain positions, a Master’s or Doctorate degree in a related engineering discipline may be required. [ 2 ] After obtaining these degrees, an individual who wishes to practice as a professional geological engineer must go through the process of becoming licensed by a professional association or regulatory body in their jurisdiction.
In Canada, 8 universities are accredited by Engineers Canada to offer undergraduate degrees in geological engineering. [ 11 ] Many of these universities also offer graduate degree programs in geological engineering. These include:
In the United States there are 13 geological engineering programs recognized by the Engineering Accreditation Commission (EAC) of the Accreditation Board for Engineering and Technology (ABET) . [ 12 ] These include:
Universities in other countries that hold accreditation to offer degree programs in geological engineering from the EAC by the ABET include: [ 12 ]
In geological engineering there are multiple subdisciplines which analyze different aspects of Earth sciences and apply them to a variety of engineering projects. The subdisciplines listed below are commonly taught at the undergraduate level, and each has overlap with disciplines external to geological engineering. However, a geological engineer who specializes in one of these subdisciplines throughout their education may still be licensed to work in any of the other subdisciplines.
Geoenvironmental engineering is the subdiscipline of geological engineering that focuses on preventing or mitigating the environmental effects of anthropogenic contaminants within soil and water. [ 13 ] [ 14 ] It solves these issues via the development of processes and infrastructure for the supply of clean water , waste disposal , and control of pollution of all kinds. [ 15 ] The work of geoenvironmental engineers largely deals with investigating the migration, interaction, and result of contaminants; remediating contaminated sites ; and protecting uncontaminated sites. [ 14 ] Typical work of a geoenvironmental engineer includes:
Mineral and energy resource exploration (commonly known as MinEx for short) is the subdiscipline of geological engineering that applies modern tools and concepts to the discovery and sustainable extraction of natural mineral and energy resources. [ 4 ] A geological engineer who specializes in this field may work on several stages of mineral exploration and mining projects, including exploration and orebody delineation, mine production operations, mineral processing , and environmental impact and risk assessment programs for mine tailings and other mine waste. [ 17 ] Like mining engineers, mineral and energy resource exploration engineers may also be responsible for the design, finance, and management of mine sites.
Geophysical engineering is the subdiscipline of geological engineering that applies geophysics principles to the design of engineering projects such as tunnels, dams, and mines, or to the detection of subsurface geohazards, groundwater, and pollution. Geophysical investigations are undertaken from the ground surface, in boreholes, or from space to analyze ground conditions, composition, and structure at all scales. Geophysical techniques apply a variety of physics principles such as seismicity , magnetism , gravity , and resistivity . This subdiscipline was created in the early 1990s as a result of increased demand for more accurate subsurface information, driven by a rapidly increasing global population. [ 18 ] Geophysical engineering and applied geophysics differ from traditional geophysics primarily in their focus on marginal returns and on optimized designs and practices, as opposed to satisfying regulatory requirements at a minimum cost. [ 18 ]
Geological engineers are responsible for the planning, development, and coordination of site investigation and data acquisition programs for geological, geotechnical, geophysical, geoenvironmental, and hydrogeological studies. [ 4 ] These studies are traditionally conducted for civil engineering, mining, petroleum, waste management, and regional development projects but are becoming increasingly focused on environmental and coastal engineering projects and on more specialized projects for long-term underground nuclear waste storage. [ 3 ] Geological engineers are also responsible for analyzing and preparing recommendations and reports to improve construction of foundations for civil engineering projects such as rock and soil excavation, pressure grouting , and hydraulic channel erosion control. In addition, geological engineers analyze and prepare recommendations and reports on the settlement of buildings, stability of slopes and fills, and probable effects of landslides and earthquakes to support construction and civil engineering projects. [ 3 ] They must design means to safely excavate and stabilize the surrounding rock or soil in underground excavations and surface construction, in addition to managing water flow from, and within these excavations. [ 4 ]
Geological engineers also perform a primary role in all forms of underground infrastructure, including tunnelling , mining , hydropower projects , shafts, deep repositories and caverns for power, storage, industrial activities, and recreation. [ 4 ] Moreover, geological engineers design monitoring systems, analyze natural and induced ground response, and prepare recommendations and reports on the settlement of buildings, stability of slopes and fills, and the probable effects of natural disasters to support construction and civil engineering projects. [ 4 ] In some jobs, geological engineers conduct theoretical and applied studies of groundwater flow and contamination to develop site-specific solutions which treat the contaminants and allow for safe construction. [ 4 ] Additionally, they design means to manage and protect surface and groundwater resources, and remediation solutions in the event of contamination. [ 4 ] If working on a mine site, geological engineers may be tasked with planning, developing, coordinating, and conducting theoretical and experimental studies in mining exploration, mine evaluation, and feasibility studies for the mining industry. [ 4 ] They conduct surveys and studies of ore deposits and ore reserve calculations, and contribute mineral resource expertise, geotechnical and geomechanical design and monitoring expertise, and environmental management to developing or ongoing mining operations. [ 4 ] On a variety of projects, they may be expected to design and perform geophysical investigations from the surface, in boreholes, or from space to analyze ground conditions, composition, and structure at all scales. [ 4 ]
Professional Engineering Licenses may be issued through a municipal, provincial/state, or federal/national government organization, depending on the jurisdiction. The purpose of this licensing process is to ensure professional engineers possess the necessary technical knowledge, real-world experience, and basic understanding of the local legal system to practice engineering at a professional level. In Canada , the United States , Japan , South Korea , Bangladesh , and South Africa , the title of Professional Engineer is granted through licensure. [ 19 ] In the United Kingdom , Ireland , India , and Zimbabwe the granted title is Chartered Engineer . In Australia , the granted title is Chartered Professional Engineer. [ 19 ] Lastly, in the European Union , the granted title is European Engineer. All these titles have similar requirements for accreditation, including a recognized post-secondary degree and relevant work experience. [ 19 ]
In Canada, Professional Engineer (P.Eng.) and Professional Geoscientist (P.Geo.) licenses are regulated by provincial professional bodies which have the groundwork for their legislation laid out by Engineers Canada [ 20 ] and Geoscientists Canada . [ 21 ] The provincial organizations are listed in the table below.
In the United States, all individuals seeking to become a Professional Engineer (P.E.) must attain their license through the Engineering Accreditation Commission (EAC) of the Accreditation Board for Engineering and Technology (ABET) . [ 12 ] Licenses to be a Certified Professional Geologist in the United States are issued and regulated by the American Institute of Professional Geologists (AIPG) [ 21 ]
Professional societies in geological engineering are not-for-profit organizations that seek to advance and promote the represented profession(s) and connect professionals using networking, regular conferences, meetings, and other events, as well as provide platforms to publish technical literature through forms of conference proceedings, books, technical standards, and suggested methods, and provide opportunities for professional development such as short courses, workshops, and technical tours. Some regional, national, and international professional societies relevant to geological engineers are listed here:
Engineering geologists and geological engineers are both interested in the study of the Earth , its shifting movement, and alterations, [ 22 ] [ 23 ] and the interactions of human society and infrastructure with, on, and in Earth materials . Both disciplines require licenses from professional bodies in most jurisdictions to conduct related work. [ 22 ] [ 23 ] The primary difference between geological engineers and engineering geologists is that geological engineers are licensed professional engineers (and sometimes also professional geoscientists /geologists) with a combined understanding of Earth sciences and engineering principles, while engineering geologists are geological scientists whose work focusses on applications to engineering projects, and they may be licensed professional geoscientists /geologists, but not professional engineers . The following subsections provide more details on the differing responsibilities between engineering geologists and geological engineers.
Engineering geologists are applied geological scientists who assess problems that might arise before, during, and after an engineering project. They are trained to be aware of potential problems like:
They use a variety of field and laboratory testing techniques to characterize ground materials that might affect the construction, the long-term safety, or environmental footprint of a project. Job responsibilities of an engineering geologist include:
Geological engineers are engineers with extensive knowledge of geological or Earth sciences as well as engineering geology, engineering principles, and engineering design practices. These professionals are qualified to perform the role of or interact with engineering geologists . Their primary focus, however, is the use of engineering geology data, as well as engineering skills to:
In all these activities, the geological model , geological history , and environment, as well as measured engineering properties of relevant Earth materials are critical to engineering design and decision making. [ 23 ] | https://en.wikipedia.org/wiki/Geological_engineering |
Although oxygen is the most abundant element in Earth's crust , due to its high reactivity it mostly exists in compound ( oxide ) forms such as water , carbon dioxide , iron oxides and silicates . Before photosynthesis evolved, Earth's atmosphere had no free diatomic elemental oxygen (O 2 ). [ 2 ] Small quantities of oxygen were released by geological [ 3 ] and biological processes, but did not build up in the reducing atmosphere due to reactions with then-abundant reducing gases such as atmospheric methane and hydrogen sulfide and surface reductants such as ferrous iron .
Oxygen began building up in the prebiotic atmosphere at approximately 1.85 Ga during the Neoarchean - Paleoproterozoic boundary, a paleogeological event known as the Great Oxygenation Event (GOE). At current rates of primary production , today's concentration of oxygen could be produced by photosynthetic organisms in 2,000 years. [ 4 ] In the absence of plants , the rate of oxygen production by photosynthesis was slower in the Precambrian , and the concentrations of O 2 attained were less than 10% of today's and probably fluctuated greatly.
The increase in oxygen concentrations had wide ranging and significant impacts on Earth's biosphere . Most significantly, the rise of oxygen and the oxidative depletion of greenhouse gases (especially atmospheric methane ) due to the GOE led to an icehouse Earth that caused a mass extinction of anaerobic microbes , but paved the way for the evolution of eukaryotes and later the rise of complex lifeforms .
Photosynthetic prokaryotic organisms that produced O 2 as a byproduct lived long before the first build-up of free oxygen in the atmosphere, [ 5 ] perhaps as early as 3.5 billion years ago. The oxygen cyanobacteria produced would have been rapidly removed from the oceans by weathering of reducing minerals, [ citation needed ] most notably ferrous iron . [ 1 ] This rusting led to the deposition of the oxidized ferric iron oxide on the ocean floor, forming banded iron formations . Thus, the oceans rusted and turned red. Oxygen only began to persist in the atmosphere in small quantities about 50 million years before the start of the Great Oxygenation Event . [ 6 ]
Early fluctuations in oxygen concentration had little direct effect on life, with mass extinctions not observed until around the start of the Cambrian period, 538.8 million years ago . [ 7 ] The presence of O 2 provided life with new opportunities. Aerobic metabolism is more efficient than anaerobic pathways, and the presence of oxygen created new possibilities for life to explore. [ 8 ] [ 9 ] Since the start of the Cambrian period, atmospheric oxygen concentrations have fluctuated between 15% and 35% of atmospheric volume. [ 10 ] 430-million-year-old fossilized charcoal produced by wildfires shows that atmospheric oxygen levels in the Silurian must have been equivalent to, or possibly above, present-day levels. [ 11 ] The maximum of 35% was reached towards the end of the Carboniferous period (about 300 million years ago), a peak which may have contributed to the large size of various arthropods , including insects, millipedes and scorpions. [ 9 ] Whilst human activities, such as the burning of fossil fuels , affect relative carbon dioxide concentrations, their effect on the much larger concentration of oxygen is less significant. [ 12 ]
The Great Oxygenation Event had the first major effect on the course of evolution . Due to the rapid buildup of oxygen in the atmosphere, the mostly anaerobic microbial biosphere that existed during the Archean eon was devastated, and only the aerobes that had antioxidant capabilities to neutralize oxygen thrived out in the open. [ 9 ] This then led to symbiosis of anaerobic and aerobic organisms, which metabolically complemented each other, and eventually to endosymbiosis and the evolution of eukaryotes during the Proterozoic eon ; eukaryotes were now actually reliant on aerobic respiration to survive. After the Huronian glaciation came to an end, the Earth entered a long period of geological and climatic stability known as the Boring Billion . However, this long period was noticeably euxinic : oxygen was scarce, the ocean and atmosphere were significantly sulfidic , and evolution during this time was likely comparatively slow and quite conservative.
The Boring Billion ended during the Neoproterozoic period with a significant increase in photosynthetic activity, causing oxygen levels to rise 10- to 20-fold, to about one-tenth of the modern level. This rise in oxygen concentration, known as the Neoproterozoic oxygenation event or "Second Great Oxygenation Event", was likely caused by the evolution of nitrogen fixation in cyanobacteria and the rise of eukaryotic photoautotrophs ( green and red algae ), and is often cited as a possible contributor to later large-scale evolutionary radiations such as the Avalon explosion and the Cambrian explosion , which trended not only toward larger [ 13 ] but also toward more robust and motile multicellular organisms . The climatic changes associated with rising oxygen also produced cycles of glaciation and extinction events , [ 9 ] each of which created disturbances that sped up ecological turnover . During the Silurian and Devonian periods, the colonization and proliferation of land by early plants (which evolved from freshwater green algae) further increased the atmospheric oxygen concentration, leading to the historic peak during the Carboniferous period.
Data show an increase in biovolume soon after oxygenation events by more than 100-fold and a moderate correlation between atmospheric oxygen and maximum body size later in the geological record. [ 13 ] The large size of many arthropods in the Carboniferous period , when the oxygen concentration in the atmosphere reached 35%, has been attributed to the limiting role of diffusion in these organisms' metabolism. [ 14 ] But J.B.S. Haldane 's essay On Being the Right Size [ 15 ] points out that it would only apply to insects. However, the biological basis for this correlation is not firm, and many lines of evidence show that oxygen concentration is not size-limiting in modern insects. [ 9 ] Ecological constraints can better explain the diminutive size of post-Carboniferous dragonflies – for instance, the appearance of flying competitors such as pterosaurs , birds, and bats. [ 9 ]
Rising oxygen concentrations have been cited as one of several drivers for evolutionary diversification, although the physiological reasoning behind such arguments is questionable, and a consistent pattern between oxygen concentrations and the rate of evolution is not clearly evident. [ 9 ] The most celebrated link between oxygen and evolution occurred at the end of the last of the Snowball Earth glaciations, where complex multicellular life is first found in the fossil record. Under low oxygen concentrations and before the evolution of nitrogen fixation , biologically-available nitrogen compounds were in limited supply, [ 16 ] and periodic "nitrogen crises" could render the ocean inhospitable to life. [ 9 ] Significant concentrations of oxygen were just one of the prerequisites for the evolution of complex life. [ 9 ] Models based on uniformitarian principles (i.e. extrapolating present-day ocean dynamics into deep time) suggest that such a concentration was only reached immediately before metazoa first appeared in the fossil record. [ 9 ] Further, anoxic or otherwise chemically "inhospitable" oceanic conditions that resemble those supposed to inhibit macroscopic life re-occurred at intervals through the early Cambrian, and also in the late Cretaceous – with no apparent effect on lifeforms at these times. [ 9 ] This might suggest that the geochemical signatures found in ocean sediments reflect the atmosphere in a different way before the Cambrian – perhaps as a result of the fundamentally different mode of nutrient cycling in the absence of planktivory. [ 7 ] [ 9 ]
An oxygen-rich atmosphere can release phosphorus and iron from rock, by weathering, and these elements then become available for sustenance of new species whose metabolisms require these elements as oxides. [ 2 ]
The coupling between oxygen levels and nutrient cycling has been a critical factor in shaping Earth's biosphere. Oxygen plays a central role in the oxidation of minerals during weathering of the continents, which releases nutrients such as phosphorus and iron into rivers and oceans. Phosphorus is an essential element for ATP, DNA, and membranes, which makes its bioavailability a limiting factor for the primary productivity of ecosystems. When oxygen levels increase, weathering rates also rise, which likely expands the nutrient base of marine ecosystems; as a result, higher production becomes possible and more complex food webs can be supported. In marine environments, oxygen also regulates the redox cycling of nitrogen. In anoxic conditions, denitrification processes dominate, converting bioavailable nitrogen (nitrate and ammonium) back into inert N₂ gas and removing it from the biosphere. As oxygen concentrations rose, nitrification became more prevalent, converting ammonia into nitrate and nitrite, more stable and accessible forms for primary producers. This shift helped stabilize nitrogen availability, which can limit biological expansion if depleted.
The emergence of planktivory (organisms feeding on plankton) was another historic turning point in nutrient cycling. Prior to the evolution of complex grazing organisms, nutrient regeneration occurred mainly through microbial remineralization, a much slower process. With the rise of planktivores and active predation in the water column during the Cambrian, nutrient cycling accelerated and became more efficient. This led to tighter biological feedback loops and higher rates of primary production. In addition, the shift toward higher rates of primary production is inferred to have influenced isotopic signatures in marine sediments. The co-evolution of oxygenation and nutrient cycling created a feedback system that enabled both the expansion of biomass and the diversification of life: rising oxygen not only enabled more complex metabolisms but also transformed the nutrient landscape. The relationship was not seamless, however. Many disruptions, such as climate shifts, caused temporary breakdowns of the nutrient-oxygen dynamic in marine systems.
Such disruptions led to periods of environmental instability, as during the Snowball Earth events; the Late Devonian extinction and the oceanic anoxic events likewise demonstrate how fragile the oxygen-nutrient balance can be. During these intervals, rapid climate change or massive volcanic activity altered ocean circulation and chemistry, causing widespread deoxygenation and leading to declines in biodiversity and primary productivity. Widespread deoxygenation brought on by sudden shifts in global climate caused key nutrients to become locked in sediments or lost through inefficient nutrient recycling. These periods demonstrate that while oxygenation was essential to biological expansion, it also made the biosphere more sensitive to disruption. The resilience of nutrient cycling systems, particularly through microbial buffering and evolving trophic interactions, helped restore oxygenation. Over time, this restored nutrient cycling and allowed biodiversity to increase again. | https://en.wikipedia.org/wiki/Geological_history_of_oxygen
The geology of solar terrestrial planets mainly deals with the geological aspects of the four terrestrial planets of the Solar System – Mercury , Venus , Earth , and Mars – and one terrestrial dwarf planet : Ceres . Earth is the only terrestrial planet known to have an active hydrosphere .
Terrestrial planets are substantially different from the giant planets , which might not have solid surfaces and are composed mostly of some combination of hydrogen , helium , and water existing in various physical states . Terrestrial planets have compact, rocky surfaces, and Venus, Earth, and Mars each also has an atmosphere . Their sizes, radii, and densities are all similar.
Terrestrial planets have numerous similarities to dwarf planets (objects like Pluto ), which also have solid surfaces but are primarily composed of icy materials. During the formation of the Solar System, there were probably many more such bodies ( planetesimals ), but they all merged with or were destroyed by the four remaining worlds in the solar nebula .
The terrestrial planets all have roughly the same structure: a central metallic core, mostly iron , with a surrounding silicate mantle . The Moon is similar, but lacks a substantial iron core. [ 1 ] Three of the four solar terrestrial planets (Venus, Earth, and Mars) have substantial atmospheres ; all have impact craters and tectonic surface features such as rift valleys and volcanoes .
The term inner planet should not be confused with inferior planet , which refers to any planet that is closer to the Sun than the observer's planet is, but usually refers to Mercury and Venus.
The Solar System is believed to have formed according to the nebular hypothesis , first proposed in 1755 by Immanuel Kant and independently formulated by Pierre-Simon Laplace . [ 2 ] This theory holds that 4.6 billion years ago the Solar System formed from the gravitational collapse of a giant molecular cloud . This initial cloud was likely several light-years across and probably birthed several stars. [ 3 ]
The first solid particles were microscopic in size. These particles orbited the Sun in nearly circular orbits right next to each other, like the gas from which they condensed. Gradually, gentle collisions allowed the flakes to stick together and form larger particles which, in turn, attracted more solid particles towards them. This process is known as accretion . The objects formed by accretion are called planetesimals ; they act as seeds for planet formation. Initially, planetesimals were closely packed. They coalesced into larger objects, forming clumps up to a few kilometers across in a few million years, a short time compared to the age of the Solar System. [ 3 ] After the planetesimals grew in size, collisions became highly destructive, making further growth more difficult. Only the biggest planetesimals survived the fragmentation process and continued to slowly grow into protoplanets by accretion of planetesimals of similar composition. [ 3 ] After a protoplanet formed, the accumulation of heat from radioactive decay of short-lived elements melted the planet, allowing materials to differentiate (i.e. to separate according to their density ). [ 3 ]
In the warmer inner Solar System, planetesimals formed from rocks and metals cooked billions of years ago in the cores of massive stars .
These elements constituted only 0.6% of the material in the solar nebula . That is why the terrestrial planets could not grow very large and could not exert a strong pull on hydrogen and helium gas. [ 3 ] Also, the faster collisions among particles close to the Sun were more destructive on average. Even if the terrestrial planets had had hydrogen and helium , the Sun would have heated the gases and caused them to escape. [ 3 ] Hence, the solar terrestrial planets Mercury , Venus , Earth , and Mars are dense, small planets composed mostly of the small fraction of heavier elements contained in the solar nebula.
The four inner or terrestrial planets have dense, rocky compositions, few or no moons , and no ring systems . They are composed largely of minerals with high melting points, such as the silicates which form their solid crusts and semi-liquid mantles , and metals such as iron and nickel , which form their cores .
The Mariner 10 mission (1974) mapped about half the surface of Mercury. On the basis of that data, scientists have a first-order understanding of the geology and history of the planet. [ 4 ] [ 5 ] Mercury's surface shows intercrater plains, basins , smooth plains , craters , and tectonic features.
Mercury's oldest surface is its intercrater plains, [ 4 ] [ 6 ] which are present (but much less extensive) on the Moon . The intercrater plains are level to gently rolling terrain that occur between and around large craters. The plains predate the heavily cratered terrain, and have obliterated many of the early craters and basins of Mercury; [ 4 ] [ 7 ] they probably formed by widespread volcanism early in Mercurian history.
Mercurian craters have the morphological elements of lunar craters—the smaller craters are bowl-shaped, and with increasing size they develop scalloped rims, central peaks, and terraces on the inner walls. [ 6 ] The ejecta sheets have a hilly, lineated texture and swarms of secondary impact craters. Fresh craters of all sizes have dark or bright halos and well-developed ray systems. Although Mercurian and lunar craters are superficially similar, they show subtle differences, especially in deposit extent. The continuous ejecta and fields of secondary craters on Mercury are far less extensive (by a factor of about 0.65) for a given rim diameter than those of comparable lunar craters. This difference results from the 2.5 times higher gravitational field on Mercury compared with the Moon. [ 6 ] As on the Moon, impact craters on Mercury are progressively degraded by subsequent impacts. [ 4 ] [ 7 ] The freshest craters have ray systems and a crisp morphology. With further degradation, the craters lose their crisp morphology and rays, and features on the continuous ejecta become more blurred until only the raised rim near the crater remains recognizable. Because craters become progressively degraded with time, the degree of degradation gives a rough indication of the crater's relative age. [ 7 ] On the assumption that craters of similar size and morphology are roughly the same age, it is possible to place constraints on the ages of other underlying or overlying units and thus to globally map the relative age of craters.
At least 15 ancient basins have been identified on Mercury. [ 7 ] Tolstoj is a true multi-ring basin , displaying at least two, and possibly as many as four, concentric rings. [ 7 ] [ 8 ] It has a well-preserved ejecta blanket extending outward as much as 500 kilometres (311 mi) from its rim. The basin interior is flooded with plains that clearly postdate the ejecta deposits. Beethoven has only one, subdued massif-like rim 625 kilometres (388 mi) in diameter, but displays an impressive, well-lineated ejecta blanket that extends as far as 500 kilometres (311 mi). As at Tolstoj, Beethoven ejecta is asymmetric. The Caloris basin is defined by a ring of mountains 1,300 kilometres (808 mi) in diameter. [ 7 ] [ 9 ] [ 10 ] Individual massifs are typically 30 kilometres (19 mi) to 50 kilometres (31 mi) long; the inner edge of the unit is marked by basin-facing scarps. [ 10 ] Lineated terrain extends for about 1,000 kilometres (621 mi) out from the foot of a weak discontinuous scarp on the outer edge of the Caloris mountains; this terrain is similar to the sculpture surrounding the Imbrium basin on the Moon. [ 7 ] [ 10 ] Hummocky material forms a broad annulus about 800 kilometres (497 mi) from the Caloris mountains. It consists of low, closely spaced to scattered hills about 0.3 to 1 kilometre (0.2 to 0.6 mi) across and from tens of meters to a few hundred meters high. The outer boundary of this unit is gradational with the (younger) smooth plains that occur in the same region. A hilly and furrowed terrain is found antipodal to the Caloris basin, probably created by the antipodal convergence of intense seismic waves generated by the Caloris impact. [ 11 ]
The floor of the Caloris basin is deformed by sinuous ridges and fractures, giving the basin fill a grossly polygonal pattern. These plains may be volcanic, formed by the release of magma as part of the impact event, or a thick sheet of impact melt. Widespread areas of Mercury are covered by relatively flat, sparsely cratered plains materials. [ 7 ] [ 12 ] They fill depressions that range in size from regional troughs to crater floors. The smooth plains are similar to the maria of the Moon, an obvious difference being that the smooth plains have the same albedo as the intercrater plains. Smooth plains are most strikingly exposed in a broad annulus around the Caloris basin. No unequivocal volcanic features, such as flow lobes, leveed channels, domes, or cones, are visible. Crater densities indicate that the smooth plains are significantly younger than ejecta from the Caloris basin. [ 7 ] In addition, distinct color units, some of lobate shape, are observed in newly processed color data. [ 13 ] Such relations strongly support a volcanic origin for the Mercurian smooth plains, even in the absence of diagnostic landforms. [ 7 ] [ 12 ] [ 13 ]
Lobate scarps are widely distributed over Mercury [ 7 ] [ 12 ] [ 14 ] and consist of sinuous to arcuate scarps that transect preexisting plains and craters. They are most convincingly interpreted as thrust faults , indicating a period of global compression. [ 14 ] The lobate scarps typically transect smooth plains materials (early Calorian age) on the floors of craters, but post-Caloris craters are superposed on them. These observations suggest that lobate-scarp formation was confined to a relatively narrow interval of time, beginning in the late pre-Tolstojan period and ending in the middle to late Calorian Period. In addition to scarps, wrinkle ridges occur in the smooth plains materials. These ridges probably were formed by local to regional surface compression caused by lithospheric loading by dense stacks of volcanic lavas, as suggested for those of the lunar maria. [ 7 ] [ 14 ]
The surface of Venus is comparatively very flat. When 93% of the topography was mapped by Pioneer Venus , [ 15 ] scientists found that the total distance from the lowest point to the highest point on the entire surface was about 13 kilometres (8 mi), while on the Earth the distance from the basins to the Himalayas is about 20 kilometres (12.4 mi).
According to Pioneer altimeter data, nearly 51% of the surface lies within 500 metres (1,640 ft) of the median radius of 6,052 km (3,760 mi); only 2% of the surface lies more than 2 kilometres (1.2 mi) above the median radius.
Venus shows no evidence of active plate tectonics. There is debatable evidence of active tectonics in the planet's distant past; however, events taking place since then (such as the plausible and generally accepted hypothesis that the Venusian lithosphere has thickened greatly over the course of several hundred million years) have made constraining the course of its geologic record difficult. However, the numerous well-preserved impact craters have been utilized as a dating method to approximately date the Venusian surface (since there are thus far no known samples of Venusian rock to be dated by more reliable methods). Dates derived are primarily in the range ~500–750 Mya, although ages of up to ~1.2 Gya have been calculated. This research has led to the fairly well accepted hypothesis that Venus has undergone an essentially complete volcanic resurfacing at least once in its distant past, with the last event taking place approximately within the range of estimated surface ages. While the mechanism of such a dramatic thermal event remains a debated issue in Venusian geosciences, some scientists are advocates of processes involving plate motion to some extent. There are almost 1,000 impact craters on Venus, more or less evenly distributed across its surface.
Earth-based radar surveys made it possible to identify some topographic patterns related to craters , and the Venera 15 and Venera 16 probes identified almost 150 such features of probable impact origin. Global coverage from Magellan subsequently made it possible to identify nearly 900 impact craters.
Crater counts give an important estimate for the age of the surface of a planet. Over time, bodies in the Solar System are randomly impacted, so the more craters a surface has, the older it is. Compared to Mercury , the Moon and other such bodies, Venus has very few craters. In part, this is because Venus's dense atmosphere burns up smaller meteorites before they hit the surface. The Venera and Magellan data agree: there are very few impact craters with a diameter less than 30 kilometres (19 mi), and data from Magellan show an absence of any craters less than 2 kilometres (1.2 mi) in diameter. However, there are also fewer of the large craters, and those appear relatively young; they are rarely filled with lava, showing that they formed after volcanic activity in the area, and radar shows that they are rough and have not had time to be eroded down.
Much of Venus's surface appears to have been shaped by volcanic activity. Overall, Venus has several times as many volcanoes as Earth, and it possesses some 167 giant volcanoes that are over 100 kilometres (62 mi) across. The only volcanic complex of this size on Earth is the Big Island of Hawaii . However, this is not because Venus is more volcanically active than Earth, but because its crust is older. Earth's crust is continually recycled by subduction at the boundaries of tectonic plates , and has an average age of about 100 million years, while Venus's surface is estimated to be about 500 million years old. [ 16 ] Venusian craters range from 3 kilometres (2 mi) to 280 kilometres (174 mi) in diameter. There are no craters smaller than 3 km, because of the effects of the dense atmosphere on incoming objects. Objects with less than a certain kinetic energy are slowed down so much by the atmosphere that they do not create an impact crater. [ 17 ]
The Earth's terrain varies greatly from place to place. About 70.8% [ 18 ] of the surface is covered by water. The sea floor has mountainous features, including a globe-spanning mid-ocean ridge system, as well as undersea volcanoes , [ 19 ] oceanic trenches , submarine canyons , oceanic plateaus , and abyssal plains . The remaining 29.2% not covered by water consists of mountains , deserts , plains , plateaus , and other geomorphologies .
The planetary surface undergoes reshaping over geological time periods due to the effects of tectonics and erosion . Surface features built up or deformed through plate tectonics are subject to steady weathering from precipitation , thermal cycles, and chemical effects. Glaciation , coastal erosion , the build-up of coral reefs , and large meteorite impacts [ 20 ] also act to reshape the landscape.
As the continental plates migrate across the planet, the ocean floor is subducted under the leading edges. At the same time, upwellings of mantle material create a divergent boundary along mid-ocean ridges . The combination of these processes continually recycles the ocean plate material. Most of the ocean floor is less than 100 million years in age. The oldest ocean plate is located in the Western Pacific, and has an estimated age of about 200 million years. By comparison, the oldest fossils found on land have an age of about 3 billion years. [ 21 ] [ 22 ]
The continental plates consist of lower density material such as the igneous rocks granite and andesite . Less common is basalt , a denser volcanic rock that is the primary constituent of the ocean floors. [ 23 ] Sedimentary rock is formed from the accumulation of sediment that becomes compacted together. Nearly 75% of the continental surfaces are covered by sedimentary rocks, although they form only about 5% of the crust. [ 24 ] The third form of rock material found on Earth is metamorphic rock , which is created from the transformation of pre-existing rock types through high pressures, high temperatures, or both. The most abundant silicate minerals on the Earth's surface include quartz , the feldspars , amphibole , mica , pyroxene , and olivine . [ 25 ] Common carbonate minerals include calcite (found in limestone ), aragonite , and dolomite . [ 26 ]
The pedosphere is the outermost layer of the Earth that is composed of soil and subject to soil formation processes . It exists at the interface of the lithosphere , atmosphere , hydrosphere , and biosphere . Currently the total arable land is 13.31% of the land surface, with only 4.71% supporting permanent crops. [ 27 ] Close to 40% of the Earth's land surface is presently used for cropland and pasture, or an estimated 13 million square kilometres (5.0 million square miles) of cropland and 34 million square kilometres (13 million square miles) of pastureland. [ 28 ]
The physical features of land are remarkably varied. The largest mountain ranges—the Himalayas in Asia and the Andes in South America—extend for thousands of kilometres. The longest rivers are the river Nile in Africa (6,695 kilometres or 4,160 miles) and the Amazon river in South America (6,437 kilometres or 4,000 miles). Deserts cover about 20% of the total land area. The largest is the Sahara , which covers nearly one-third of Africa.
The elevation of the land surface of the Earth varies from the low point of −418 m (−1,371 ft) at the Dead Sea , to a 2005-estimated maximum altitude of 8,848 m (29,028 ft) at the top of Mount Everest . The mean height of land above sea level is 686 m (2,250 ft). [ 29 ]
The geological history of Earth can be broadly classified into two periods, namely:
The surface of Mars is thought to be primarily composed of basalt , based upon the observed lava flows from volcanoes, the Martian meteorite collection, and data from landers and orbital observations. The lava flows from Martian volcanoes show that the lava has a very low viscosity, typical of basalt. [ 30 ] Analysis of the soil samples collected by the Viking landers in 1976 indicates iron-rich clays consistent with weathering of basaltic rocks. [ 30 ] There is some evidence that some portion of the Martian surface might be more silica-rich than typical basalt , perhaps similar to andesitic rocks on Earth, though these observations may also be explained by silica glass, phyllosilicates, or opal. Much of the surface is deeply covered by dust as fine as talcum powder. The red/orange appearance of Mars' surface is caused by iron(III) oxide (rust). [ 31 ] [ 32 ] Mars has twice as much iron oxide in its outer layer as Earth does, despite their supposed similar origin. It is thought that Earth, being hotter, transported much of the iron downwards in the 1,800 kilometres (1,118 mi) deep, 3,200 °C (5,792 °F) lava seas of the early planet, while Mars, with a lower lava temperature of 2,200 °C (3,992 °F), was too cool for this to happen. [ 31 ]
The core is surrounded by a silicate mantle that formed many of the tectonic and volcanic features on the planet. The average thickness of the planet's crust is about 50 km, and it is no thicker than 125 kilometres (78 mi), [ 33 ] which is much thicker than Earth's crust, which varies between 5 kilometres (3 mi) and 70 kilometres (43 mi). As a result, Mars' crust does not easily deform, as was shown by the recent radar map of the south polar ice cap, which does not deform the crust despite being about 3 km thick. [ 34 ]
Crater morphology provides information about the physical structure and composition of the surface. Impact craters allow us to look deep below the surface and into Mars' geological past. Lobate ejecta blankets and central pit craters are common on Mars but uncommon on the Moon , which may indicate the presence of near-surface volatiles (ice and water) on Mars. Degraded impact structures record variations in volcanic , fluvial , and aeolian activity. [ 35 ]
The Yuty crater is an example of a rampart crater , so called because of the rampart-like edge of its ejecta. In the Yuty crater the ejecta completely covers an older crater at its side, showing that the ejected material is just a thin layer. [ 36 ]
The geological history of Mars can be broadly classified into many epochs, but the following are the three major ones:
The geology of the dwarf planet Ceres was largely unknown until the Dawn spacecraft explored it in early 2015. However, certain surface features such as "Piazzi", named after the dwarf planet's discoverer, had been resolved. [a] Ceres's oblateness is consistent with a differentiated body, a rocky core overlain with an icy mantle. This 100-kilometer-thick mantle (23%–28% of Ceres by mass; 50% by volume) contains 200 million cubic kilometers of water, which is more than the amount of fresh water on Earth. This result is supported by the observations made by the Keck telescope in 2002 and by evolutionary modeling. Also, some characteristics of its surface and history (such as its distance from the Sun, which weakened solar radiation enough to allow some fairly low-freezing-point components to be incorporated during its formation) point to the presence of volatile materials in the interior of Ceres. It has been suggested that a remnant layer of liquid water may have survived to the present under a layer of ice.
The surface composition of Ceres is broadly similar to that of C-type asteroids. Some differences do exist. The ubiquitous features of the Cererian IR spectra are those of hydrated materials, which indicate the presence of significant amounts of water in the interior. Other possible surface constituents include iron-rich clay minerals (cronstedtite) and carbonate minerals (dolomite and siderite), which are common minerals in carbonaceous chondrite meteorites. The spectral features of carbonates and clay minerals are usually absent in the spectra of other C-type asteroids. Sometimes Ceres is classified as a G-type asteroid.
The Cererian surface is relatively warm. The maximum temperature with the Sun overhead was estimated from measurements to be 235 K (about −38 °C, −36 °F) on 5 May 1991.
Prior to the Dawn mission, only a few Cererian surface features had been unambiguously detected. High-resolution ultraviolet Hubble Space Telescope images taken in 1995 showed a dark spot on its surface, which was nicknamed "Piazzi" in honor of the discoverer of Ceres. This was thought to be a crater. Later near-infrared images with a higher resolution taken over a whole rotation with the Keck telescope using adaptive optics showed several bright and dark features moving with Ceres's rotation. Two dark features had circular shapes and are presumably craters; one of them was observed to have a bright central region, whereas another was identified as the "Piazzi" feature. More recent visible-light Hubble Space Telescope images of a full rotation taken in 2003 and 2004 showed 11 recognizable surface features, the natures of which are currently unknown. One of these features corresponds to the "Piazzi" feature observed earlier.
These last observations also determined that the north pole of Ceres points in the direction of right ascension 19 h 24 min (291°), declination +59°, in the constellation Draco. This means that Ceres's axial tilt is very small—about 3°.
There are indications that Ceres may have a tenuous atmosphere and water frost on the surface. Surface water ice is unstable at distances less than 5 AU from the Sun, so it is expected to vaporize if it is exposed directly to solar radiation. Water ice can migrate from the deep layers of Ceres to the surface, but escapes in a very short time. As a result, it is difficult to detect water vaporization. Water escaping from polar regions of Ceres was possibly observed in the early 1990s, but this has not been unambiguously demonstrated. It may be possible to detect escaping water from the surroundings of a fresh impact crater or from cracks in the subsurface layers of Ceres. Ultraviolet observations by the IUE spacecraft detected statistically significant amounts of hydroxide ion, a product of water-vapor dissociation by ultraviolet solar radiation, near the Cererian north pole.
In early 2014, using data from the Herschel Space Observatory, it was discovered that there are several localized (not more than 60 km in diameter) mid-latitude sources of water vapor on Ceres, which each give off about 10²⁶ molecules (or 3 kg) of water per second. Two potential source regions, designated Piazzi (123°E, 21°N) and Region A (231°E, 23°N), have been visualized in the near infrared as dark areas (Region A also has a bright center) by the W. M. Keck Observatory. Possible mechanisms for the vapor release are sublimation from about 0.6 km² of exposed surface ice, or cryovolcanic eruptions resulting from radiogenic internal heat or from pressurization of a subsurface ocean due to growth of an overlying layer of ice. Surface sublimation would be expected to decline as Ceres recedes from the Sun in its eccentric orbit, whereas internally powered emissions should not be affected by orbital position. The limited data available are more consistent with cometary-style sublimation. The spacecraft Dawn is approaching Ceres at aphelion, which may constrain Dawn's ability to observe this phenomenon.
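As a quick back-of-the-envelope check (an illustration, not part of the original text), the quoted rate of about 10²⁶ molecules per second can be converted to mass using only the molar mass of water and the Avogadro constant:

```python
# Convert ~1e26 water molecules per second into kilograms per second.
AVOGADRO = 6.022e23      # molecules per mole
MOLAR_MASS_H2O = 18.015  # grams per mole

molecules_per_second = 1e26
kg_per_second = molecules_per_second / AVOGADRO * MOLAR_MASS_H2O / 1000.0
print(f"{kg_per_second:.1f} kg/s")  # ~3.0 kg/s, matching the figure above
```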
Asteroids, comets, and meteoroids are all debris remaining from the nebula in which the Solar System formed 4.6 billion years ago.
The asteroid belt is located between Mars and Jupiter . It is made of thousands of rocky planetesimals from 1,000 kilometres (621 mi) to a few meters across. These are thought to be debris of the formation of the Solar System that could not form a planet due to Jupiter's gravity. When asteroids collide they produce small fragments that occasionally fall on Earth. These rocks are called meteorites and provide information about the primordial solar nebula. Most of these fragments have the size of sand grains. They burn up in the Earth's atmosphere, causing them to glow like meteors .
A comet is a small Solar System body that orbits the Sun and (at least occasionally) exhibits a coma (or atmosphere) and/or a tail—both primarily from the effects of solar radiation upon the comet's nucleus , which itself is a minor body composed of rock, dust, and ice.
The Kuiper belt, sometimes called the Edgeworth–Kuiper belt, is a region of the Solar System beyond the planets extending from the orbit of Neptune (at 30 AU ) [ 37 ] to approximately 55 AU from the Sun . [ 38 ] It is similar to the asteroid belt , although it is far larger; 20 times as wide and 20–200 times as massive. [ 39 ] [ 40 ] Like the asteroid belt, it consists mainly of small bodies (remnants from the Solar System's formation) and at least one dwarf planet — Pluto , which may be geologically active. [ 41 ] But while the asteroid belt is composed primarily of rock and metal , the Kuiper belt is composed largely of ices , such as methane , ammonia , and water . The objects within the Kuiper belt, together with the members of the scattered disc and any potential Hills cloud or Oort cloud objects, are collectively referred to as trans-Neptunian objects (TNOs). [ 42 ] Two TNOs have been visited and studied at close range, Pluto and 486958 Arrokoth .
Solar System → Local Interstellar Cloud → Local Bubble → Gould Belt → Orion Arm → Milky Way → Milky Way subgroup → Local Group → Local Sheet → Virgo Supercluster → Laniakea Supercluster → Local Hole → Observable universe → Universe Each arrow ( → ) may be read as "within" or "part of". | https://en.wikipedia.org/wiki/Geology_of_solar_terrestrial_planets |
Geomatics is defined in the ISO/TC 211 series of standards as the " discipline concerned with the collection, distribution, storage, analysis, processing, presentation of geographic data or geographic information ". [ 1 ] Under another definition, it consists of products, services and tools involved in the collection, integration and management of geographic (geospatial) data. [ 2 ] In the past, surveying engineering was the widely used name for geomatics engineering. Geomatics was placed by the UNESCO Encyclopedia of Life Support Systems under the branch of technical geography . [ 3 ] [ 4 ]
The term was proposed in French ("géomatique") at the end of the 1960s by scientist Bernard Dubuisson to reflect the then-recent changes in the jobs of surveyor and photogrammetrist . [ 5 ] The term was first employed in a French Ministry of Public Works memorandum dated 1 June 1971 instituting a "standing committee of geomatics" in the government. [ 6 ]
The term was popularised in English by French-Canadian surveyor Michel Paradis in his 1981 article "The little Geodesist that could" and in a keynote address at the centennial congress of the Canadian Institute of Surveying (now known as the Canadian Institute of Geomatics ) in April 1982. He claimed that at the end of the 20th century the needs for geographical information would reach a scope without precedent in history and that, in order to address these needs, it was necessary to integrate in a new discipline both the traditional disciplines of land surveying and the new tools and techniques of data capture, manipulation, storage and diffusion. [ 7 ]
Geomatics includes the tools and techniques used in land surveying , remote sensing , cartography , geographic information systems (GIS), global navigation satellite systems ( GPS , GLONASS , Galileo , BeiDou ), photogrammetry , geophysics , geography , and related forms of earth mapping . The term was originally used in Canada but has since been adopted by the International Organization for Standardization , the Royal Institution of Chartered Surveyors , and many other international authorities, although some (especially in the United States) have shown a preference for the term geospatial technology , [ 8 ] which may be defined as synonym of "geospatial information and communications technology ". [ 9 ]
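As a minimal illustration of the kind of geospatial computation underlying these tools, the sketch below (hypothetical coordinates, spherical-Earth assumption) computes a great-circle distance from latitude/longitude pairs with the haversine formula:

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points on a spherical Earth, in km."""
    r = 6371.0  # mean Earth radius in km
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# Ottawa to Calgary, approximately
print(round(haversine_km(45.42, -75.70, 51.05, -114.07)))  # ~2873 km
```

Real geomatics work would use an ellipsoidal Earth model (e.g. WGS 84) rather than a sphere; the haversine formula is only a first approximation.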
Although many definitions of geomatics , such as the above, appear to encompass the entire discipline relating to geographic information – including geodesy , geographic information systems , remote sensing , satellite navigation , and cartography – the term is almost exclusively restricted to the perspective of surveying and engineering toward geographic information. [ citation needed ] Geoinformatics and geographic information science have been proposed as alternative comprehensive terms; however, their popularity, like that of geomatics, is largely dependent on the country. [ 10 ]
The related field of hydrogeomatics covers the area associated with surveying work carried out on, above or below the surface of the sea or other areas of water. The older term of hydrographics was considered [ by whom? ] too specific to the preparation of marine charts, and failed to include the broader concept of positioning or measurements in all marine environments. The use of different data processing technologies in hydrography does not change the purpose of its research. [ 11 ]
Health geomatics can improve our understanding of the important relationship between location and health, and thus assist us in Public Health tasks like disease prevention, and also in better healthcare service planning. [ 12 ] An important area of research is the use of open data in planning lifesaving activities. [ 13 ]
Mining geomatics is the use of information systems to integrate and process spatial data for monitoring, modelling, visualisation and design of mining operations. [ 14 ]
A growing number of university departments which were once titled "surveying", "survey engineering" or " topographic science" have re-titled themselves using the terms "geomatics" or "geomatics engineering", while others have switched to program titles such as "spatial information technology", and similar names. [ 15 ] [ 16 ]
The rapid progress and increased visibility of geomatics since the 1990s has been made possible by advances in computer hardware, computer science , and software engineering , as well as by airborne and space observation remote-sensing technologies.
Geomatics engineering is a rapidly developing engineering discipline which focuses on spatial information (i.e. information that has a location). [ 17 ] The location is the primary factor used to integrate a very wide range of data for spatial analysis and visualization. Geomatics engineers design, develop, and operate systems for collecting and analyzing spatial information about the land, the oceans, natural resources, and manmade features. [ 18 ] [ 19 ]
Geomatics engineers apply engineering principles to spatial information and implement relational data structures involving measurement sciences, thus using geomatics and acting as spatial information engineers. Geomatics engineers manage local, regional, national and global spatial data infrastructures. [ 20 ] Geomatics engineering also involves aspects of Computer Engineering, Software Engineering and Civil Engineering. [ 21 ]
Application areas include:
Geomatics integrates science and technology from both new and traditional disciplines: | https://en.wikipedia.org/wiki/Geomatics |
Geomechanics (from the Greek γεός , i.e. prefix geo- meaning " earth "; and " mechanics ") is the study of the mechanical state of the Earth's crust and the processes occurring in it under the influence of natural physical factors. It involves the study of the mechanics of soil and rock.
The two main disciplines of geomechanics are soil mechanics and rock mechanics . The former deals with soil behaviour from small scales up to the landslide scale. The latter deals with issues in the geosciences related to rock mass characterization and rock mass mechanics as applied to petroleum, mining and civil engineering problems, such as borehole stability, tunnel design, rock breakage, slope stability, foundations, and rock drilling. [ 1 ]
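As a minimal sketch of the kind of strength model shared by both disciplines, the following evaluates the classical Mohr–Coulomb failure criterion, τ = c + σₙ tan φ; the cohesion and friction-angle values are illustrative only:

```python
import math

def mohr_coulomb_shear_strength(sigma_n, cohesion, friction_angle_deg):
    """Shear strength tau = c + sigma_n * tan(phi) (Mohr-Coulomb criterion)."""
    return cohesion + sigma_n * math.tan(math.radians(friction_angle_deg))

# Illustrative values: c = 10 kPa, phi = 30 degrees,
# under an effective normal stress of 100 kPa.
print(round(mohr_coulomb_shear_strength(100.0, 10.0, 30.0), 1))  # ~67.7 kPa
```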
Many aspects of geomechanics overlap with parts of geotechnical engineering , engineering geology , and geological engineering. Modern developments relate to seismology , continuum mechanics , discontinuum mechanics, transport phenomena, numerical methods etc. [ 2 ]
In the petroleum industry geomechanics is used to:
To put into practice the geomechanics capabilities mentioned above, it is necessary to create a Geomechanical Model of the Earth (GEM) which consists of six key components that can be both calculated and estimated using field data:
Geotechnical engineers rely on various techniques to obtain reliable data for geomechanical models. These techniques include coring and core testing, seismic data and log analysis, well testing methods such as transient pressure analysis and hydraulic fracturing stress testing, and geophysical methods such as acoustic emission.
| https://en.wikipedia.org/wiki/Geomechanics
Geometallurgy relates to the practice of combining geological understanding with metallurgical test work and/or real-time processing-plant data (for extractive metallurgy ) to create a geologically based three-dimensional predictive model of mineral processing response. It is used in the hard rock mining industry for risk management and mitigation during mineral processing plant design. It is also used for production mine planning to optimize the ore feed to the processing plant.
There are four important components, or steps, in developing a geometallurgical program: [ 1 ]
The sample mass and size distribution requirements are dictated by the kind of mathematical model that will be used to simulate the process plant, and the test work required to provide the appropriate model parameters. Flotation testing usually requires several kg of sample and grinding/hardness testing can require between 2 and 300 kg. [ 2 ]
The sample selection procedure is performed to optimize granularity , sample support, and cost. Samples are usually core samples composited over the height of the mining bench. [ 3 ] For hardness parameters, the variogram often increases rapidly near the origin and can reach the sill at distances significantly smaller than the typical drill hole collar spacing. For this reason the incremental model precision due to additional test work is often simply a consequence of the central limit theorem , and secondary correlations are sought to increase the precision without incurring additional sampling and testing costs. These secondary correlations can involve multi-variable regression analysis with other, non-metallurgical, ore parameters and/or domaining by rock type, lithology, alteration, mineralogy , or structural domains. [ 4 ] [ 5 ]
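To make the variogram discussion concrete, here is a minimal sketch (synthetic data, illustrative lag choices) of a classical experimental semivariogram along a single drill hole:

```python
import numpy as np

def experimental_variogram(x, v, lags, tol):
    """Classical semivariogram estimate: for each lag h, average half the
    squared differences of all sample pairs separated by roughly h."""
    d = np.abs(x[:, None] - x[None, :])        # pairwise separations
    sq = 0.5 * (v[:, None] - v[None, :]) ** 2  # half squared differences
    upper = np.triu(np.ones_like(d, dtype=bool), 1)  # count each pair once
    gamma = []
    for h in lags:
        mask = (d > h - tol) & (d <= h + tol) & upper
        gamma.append(sq[mask].mean() if mask.any() else np.nan)
    return np.array(gamma)

# Synthetic 1-D example: positions along a hole and a hardness-like index
rng = np.random.default_rng(0)
x = np.arange(0.0, 100.0, 5.0)
v = 10.0 + rng.normal(0.0, 1.0, x.size)
print(experimental_variogram(x, v, lags=[5.0, 10.0, 20.0], tol=2.5))
```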
Geometallurgical test work is broken into those that impact comminution and those that impact recovery of the valuable component.
The following tests are commonly used to generate inputs for geometallurgical modelling of comminution parameters, that is crushing, grinding and their associated energy use:
Block kriging is the most common geostatistical method used for interpolating metallurgical index parameters and it is often applied on a domain basis. [ 17 ] Classical geostatistics require that the estimation variable be additive, and there is currently some debate on the additive nature of the metallurgical index parameters measured by the above tests. The Bond ball mill work index test is thought to be additive because of its units of energy; [ 18 ] nevertheless, experimental blending results show a non-additive behavior. [ 19 ] The SPI(R) value is known not to be an additive parameter, however errors introduced by block kriging are not thought to be significant . [ 20 ] [ 21 ] These issues, among others, are being investigated as part of the Amira P843 research program on Geometallurgical mapping and mine modelling.
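For illustration, the sketch below solves the ordinary (point) kriging system, a simpler relative of the block kriging mentioned above, for a handful of samples, assuming a hypothetical exponential variogram; all values are illustrative:

```python
import numpy as np

def gamma(h, sill=1.0, vrange=30.0):
    """Assumed exponential variogram model (illustrative parameters)."""
    return sill * (1.0 - np.exp(-3.0 * h / vrange))

x = np.array([0.0, 10.0, 25.0])   # sample locations along a line (m)
v = np.array([12.0, 14.0, 11.0])  # measured metallurgical index values
x0 = 15.0                         # point to estimate

n = x.size
A = np.ones((n + 1, n + 1))
A[:n, :n] = gamma(np.abs(x[:, None] - x[None, :]))
A[n, n] = 0.0                     # border row/column enforce unit-sum weights
b = np.append(gamma(np.abs(x - x0)), 1.0)
w = np.linalg.solve(A, b)[:n]     # kriging weights
print(w, w @ v)                   # weights and the kriged estimate
```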
The following process models are commonly applied to geometallurgy: | https://en.wikipedia.org/wiki/Geometallurgy |
Geometriae Dedicata is a mathematical journal , founded in 1972, concentrating on geometry and its relationship to topology , group theory and the theory of dynamical systems . It was created on the initiative of Hans Freudenthal in Utrecht , the Netherlands . [ 1 ] It is published by Springer Netherlands. The Editor-in-Chief is Richard Alan Wentworth . [ 2 ]
| https://en.wikipedia.org/wiki/Geometriae_Dedicata
Geometric and Functional Analysis ( GAFA ) is a mathematical journal published by Birkhäuser , an independent division of Springer-Verlag . The journal is published bi-monthly.
The journal publishes major results on a broad range of mathematical topics related to geometry and analysis. [ 1 ] GAFA is both an acronym and a part of the official full name of the journal.
GAFA was founded in 1991 by Mikhail Gromov and Vitali Milman . The idea for the journal was inspired by the long-running Israeli seminar series "Geometric Aspects of Functional Analysis" of which Vitali Milman had been one of the main organizers in the previous years. The journal retained the same acronym as the series to stress the connection between the two. [ 2 ]
The journal is reviewed cover-to-cover in Mathematical Reviews and zbMATH Open and is indexed cover-to-cover in the Web of Science . According to the Journal Citation Reports , the journal has a 2022 impact factor of 2.2. [ 3 ]
The journal has seven editors: Vitali Milman (editor-in-chief), Simon Donaldson , Mikhail Gromov , Larry Guth , Boáz Klartag , Leonid Polterovich , and Peter Sarnak . [ 4 ] | https://en.wikipedia.org/wiki/Geometric_and_Functional_Analysis |
In mathematics , geometric calculus extends geometric algebra to include differentiation and integration . The formalism is powerful and can be shown to reproduce other mathematical theories including vector calculus , differential geometry , and differential forms . [ 1 ]
With a geometric algebra given, let a {\displaystyle a} and b {\displaystyle b} be vectors and let F {\displaystyle F} be a multivector -valued function of a vector. The directional derivative of F {\displaystyle F} along b {\displaystyle b} at a {\displaystyle a} is defined as
∇ b F ( a ) = lim ϵ → 0 F ( a + ϵ b ) − F ( a ) ϵ , {\displaystyle \nabla _{b}F(a)=\lim _{\epsilon \rightarrow 0}{\frac {F(a+\epsilon b)-F(a)}{\epsilon }},}
provided that the limit exists for all b {\displaystyle b} , where the limit is taken for scalar ϵ {\displaystyle \epsilon } . This is similar to the usual definition of a directional derivative but extends it to functions that are not necessarily scalar-valued.
Next, choose a set of basis vectors { e i } {\displaystyle \{e_{i}\}} and consider the operators, denoted ∂ i {\displaystyle \partial _{i}} , that perform directional derivatives in the directions of e i {\displaystyle e_{i}} :
∂ i : F ↦ ∇ e i F . {\displaystyle \partial _{i}:F\mapsto \nabla _{e_{i}}F.}
Then, using the Einstein summation notation , consider the operator:
e i ∂ i , {\displaystyle e^{i}\partial _{i},}
where e i {\displaystyle e^{i}} denotes the reciprocal basis . This means
∇ F = e i ∂ i F , {\displaystyle \nabla F=e^{i}\partial _{i}F,}
where the geometric product is applied after the directional derivative. More verbosely:
∇ F ( x ) = e i lim ϵ → 0 F ( x + ϵ e i ) − F ( x ) ϵ . {\displaystyle \nabla F(x)=e^{i}\lim _{\epsilon \rightarrow 0}{\frac {F(x+\epsilon e_{i})-F(x)}{\epsilon }}.}
This operator is independent of the choice of frame, and can thus be used to define what in geometric calculus is called the vector derivative :
∇ = e i ∂ i . {\displaystyle \nabla =e^{i}\partial _{i}.}
This is similar to the usual definition of the gradient , but it, too, extends to functions that are not necessarily scalar-valued.
The directional derivative is linear regarding its direction, that is:
∇ α a + β b = α ∇ a + β ∇ b . {\displaystyle \nabla _{\alpha a+\beta b}=\alpha \nabla _{a}+\beta \nabla _{b}.}
From this it follows that the directional derivative is the inner product of its direction with the vector derivative. All that needs to be observed is that the direction a {\displaystyle a} can be written a = ( a ⋅ e i ) e i {\displaystyle a=(a\cdot e^{i})e_{i}} , so that:
∇ a F ( x ) = ( a ⋅ e i ) ∇ e i F ( x ) = a ⋅ ( e i ∇ e i F ( x ) ) = a ⋅ ∇ F ( x ) . {\displaystyle \nabla _{a}F(x)=(a\cdot e^{i})\nabla _{e_{i}}F(x)=a\cdot \left(e^{i}\nabla _{e_{i}}F(x)\right)=a\cdot \nabla F(x).}
For this reason, ∇ a F ( x ) {\displaystyle \nabla _{a}F(x)} is often noted a ⋅ ∇ F ( x ) {\displaystyle a\cdot \nabla F(x)} .
The standard order of operations for the vector derivative is that it acts only on the function closest to its immediate right. Given two functions F {\displaystyle F} and G {\displaystyle G} , then for example we have
∇ F G = ( ∇ F ) G . {\displaystyle \nabla FG=(\nabla F)G.}
Although the partial derivative exhibits a product rule , the vector derivative only partially inherits this property. Consider two functions F {\displaystyle F} and G {\displaystyle G} :
∇ ( F G ) = e i ∂ i ( F G ) = e i ( ( ∂ i F ) G + F ( ∂ i G ) ) . {\displaystyle \nabla (FG)=e^{i}\partial _{i}(FG)=e^{i}((\partial _{i}F)G+F(\partial _{i}G)).}
Since the geometric product is not commutative with e i F ≠ F e i {\displaystyle e^{i}F\neq Fe^{i}} in general, we need a new notation to proceed. A solution is to adopt the overdot notation , in which the scope of a vector derivative with an overdot is the multivector-valued function sharing the same overdot. In this case, if we define
∇ ˙ F G ˙ = e i F ( ∂ i G ) , {\displaystyle {\dot {\nabla }}F{\dot {G}}=e^{i}F\left(\partial _{i}G\right),}
then the product rule for the vector derivative is
∇ ( F G ) = ∇ ˙ F ˙ G + ∇ ˙ F G ˙ . {\displaystyle \nabla (FG)={\dot {\nabla }}{\dot {F}}G+{\dot {\nabla }}F{\dot {G}}.}
Let F {\displaystyle F} be an r {\displaystyle r} -grade multivector. Then we can define an additional pair of operators, the interior and exterior derivatives,
∇ ⋅ F = ⟨ ∇ F ⟩ r − 1 = e i ⋅ ∂ i F , {\displaystyle \nabla \cdot F=\langle \nabla F\rangle _{r-1}=e^{i}\cdot \partial _{i}F,}
∇ ∧ F = ⟨ ∇ F ⟩ r + 1 = e i ∧ ∂ i F . {\displaystyle \nabla \wedge F=\langle \nabla F\rangle _{r+1}=e^{i}\wedge \partial _{i}F.}
In particular, if F {\displaystyle F} is grade 1 (vector-valued function), then we can write
∇ F = ∇ ⋅ F + ∇ ∧ F {\displaystyle \nabla F=\nabla \cdot F+\nabla \wedge F}
and identify the divergence and curl as
∇ ⋅ F = div ⁡ F , ∇ ∧ F = I curl ⁡ F , {\displaystyle \nabla \cdot F=\operatorname {div} F,\qquad \nabla \wedge F=I\,\operatorname {curl} F,}
where I {\displaystyle I} is the unit pseudoscalar of three-dimensional space, in which the curl is defined.
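As a quick worked example (not part of the original text), take F {\displaystyle F} to be the identity vector field F ( x ) = x {\displaystyle F(x)=x} on an n {\displaystyle n} -dimensional space; since ∂ i x = e i {\displaystyle \partial _{i}x=e_{i}} :

```latex
\nabla x = e^i \partial_i x = e^i e_i
         = e^i \cdot e_i + e^i \wedge e_i
         = n + 0 = n ,
% so \nabla \cdot x = n  (the divergence of the position field is the
% dimension of the space) while \nabla \wedge x = 0  (its curl part
% vanishes, since g^{ij} e_j \wedge e_i = 0 by symmetry of g^{ij}).
```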
Unlike the vector derivative, neither the interior derivative operator nor the exterior derivative operator is invertible.
The derivative with respect to a vector as discussed above can be generalized to a derivative with respect to a general multivector, called the multivector derivative .
Let F {\displaystyle F} be a multivector-valued function of a multivector. The directional derivative of F {\displaystyle F} with respect to X {\displaystyle X} in the direction A {\displaystyle A} , where X {\displaystyle X} and A {\displaystyle A} are multivectors, is defined as
A ∗ ∂ X F ( X ) = lim ϵ → 0 F ( X + ϵ A ) − F ( X ) ϵ , {\displaystyle A*\partial _{X}F(X)=\lim _{\epsilon \rightarrow 0}{\frac {F(X+\epsilon A)-F(X)}{\epsilon }},}
where A ∗ B = ⟨ A B ⟩ {\displaystyle A*B=\langle AB\rangle } is the scalar product . With { e i } {\displaystyle \{e_{i}\}} a vector basis and { e i } {\displaystyle \{e^{i}\}} the corresponding dual basis , the multivector derivative is defined in terms of the directional derivative as [ 2 ]
This equation is just expressing ∂ X {\displaystyle \partial _{X}} in terms of components in a reciprocal basis of blades, as discussed in the article section Geometric algebra#Dual basis .
A key property of the multivector derivative is that
∂ X ⟨ X A ⟩ = P X ( A ) , {\displaystyle \partial _{X}\langle XA\rangle =P_{X}(A),}
where P X ( A ) {\displaystyle P_{X}(A)} is the projection of A {\displaystyle A} onto the grades contained in X {\displaystyle X} .
The multivector derivative finds applications in Lagrangian field theory .
Let { e 1 , … , e n } {\displaystyle \{e_{1},\ldots ,e_{n}\}} be a set of basis vectors that span an n {\displaystyle n} -dimensional vector space. From geometric algebra, we interpret the pseudoscalar e 1 ∧ e 2 ∧ ⋯ ∧ e n {\displaystyle e_{1}\wedge e_{2}\wedge \cdots \wedge e_{n}} to be the signed volume of the n {\displaystyle n} - parallelotope subtended by these basis vectors. If the basis vectors are orthonormal , then this is the unit pseudoscalar.
More generally, we may restrict ourselves to a subset of k {\displaystyle k} of the basis vectors, where 1 ≤ k ≤ n {\displaystyle 1\leq k\leq n} , to treat the length, area, or other general k {\displaystyle k} -volume of a subspace in the overall n {\displaystyle n} -dimensional vector space. We denote these selected basis vectors by { e i 1 , … , e i k } {\displaystyle \{e_{i_{1}},\ldots ,e_{i_{k}}\}} . A general k {\displaystyle k} -volume of the k {\displaystyle k} -parallelotope subtended by these basis vectors is the grade k {\displaystyle k} multivector e i 1 ∧ e i 2 ∧ ⋯ ∧ e i k {\displaystyle e_{i_{1}}\wedge e_{i_{2}}\wedge \cdots \wedge e_{i_{k}}} .
Even more generally, we may consider a new set of vectors { x i 1 e i 1 , … , x i k e i k } {\displaystyle \{x^{i_{1}}e_{i_{1}},\ldots ,x^{i_{k}}e_{i_{k}}\}} proportional to the k {\displaystyle k} basis vectors, where each of the { x i j } {\displaystyle \{x^{i_{j}}\}} is a component that scales one of the basis vectors. We are free to choose components as infinitesimally small as we wish as long as they remain nonzero. Since the outer product of these terms can be interpreted as a k {\displaystyle k} -volume, a natural way to define a measure is
The measure is therefore always proportional to the unit pseudoscalar of a k {\displaystyle k} -dimensional subspace of the vector space. Compare the Riemannian volume form in the theory of differential forms. The integral is taken with respect to this measure:
More formally, consider some directed volume V {\displaystyle V} of the subspace. We may divide this volume into a sum of simplices . Let { x i } {\displaystyle \{x_{i}\}} be the coordinates of the vertices. At each vertex we assign a measure Δ U i ( x ) {\displaystyle \Delta U_{i}(x)} as the average measure of the simplices sharing the vertex. Then the integral of F ( x ) {\displaystyle F(x)} with respect to U ( x ) {\displaystyle U(x)} over this volume is obtained in the limit of finer partitioning of the volume into smaller simplices:
The reason for defining the vector derivative and integral as above is that they allow a strong generalization of Stokes' theorem . Let L ( A ; x ) {\displaystyle {\mathsf {L}}(A;x)} be a multivector-valued function of r {\displaystyle r} -grade input A {\displaystyle A} and general position x {\displaystyle x} , linear in its first argument. Then the fundamental theorem of geometric calculus relates the integral of a derivative over the volume V {\displaystyle V} to the integral over its boundary:
∫ V L ˙ ( ∇ ˙ d X ; x ) = ∮ ∂ V L ( d S ; x ) . {\displaystyle \int _{V}{\dot {\mathsf {L}}}\left({\dot {\nabla }}dX;x\right)=\oint _{\partial V}{\mathsf {L}}(dS;x).}
As an example, let L ( A ; x ) = ⟨ F ( x ) A I − 1 ⟩ {\displaystyle {\mathsf {L}}(A;x)=\langle F(x)AI^{-1}\rangle } for a vector-valued function F ( x ) {\displaystyle F(x)} and a ( n − 1 {\displaystyle n-1} )-grade multivector A {\displaystyle A} . We find that
Likewise,
Thus we recover the divergence theorem ,
A sufficiently smooth k {\displaystyle k} -surface in an n {\displaystyle n} -dimensional space is deemed a manifold . To each point on the manifold, we may attach a k {\displaystyle k} -blade B {\displaystyle B} that is tangent to the manifold. Locally, B {\displaystyle B} acts as a pseudoscalar of the k {\displaystyle k} -dimensional space. This blade defines a projection of vectors onto the manifold:
P B ( A ) = ( A ⋅ B ) B − 1 . {\displaystyle {\mathcal {P}}_{B}(A)=(A\cdot B)B^{-1}.}
Just as the vector derivative ∇ {\displaystyle \nabla } is defined over the entire n {\displaystyle n} -dimensional space, we may wish to define an intrinsic derivative ∂ {\displaystyle \partial } , locally defined on the manifold:
∂ F = P B ( ∇ ) F . {\displaystyle \partial F={\mathcal {P}}_{B}(\nabla )F.}
(Note: The right hand side of the above may not lie in the tangent space to the manifold. Therefore, it is not the same as P B ( ∇ F ) {\displaystyle {\mathcal {P}}_{B}(\nabla F)} , which necessarily does lie in the tangent space.)
If a {\displaystyle a} is a vector tangent to the manifold, then indeed both the vector derivative and intrinsic derivative give the same directional derivative:
a ⋅ ∂ F = a ⋅ ∇ F . {\displaystyle a\cdot \partial F=a\cdot \nabla F.}
Although this operation is perfectly valid, it is not always useful because ∂ F {\displaystyle \partial F} itself is not necessarily on the manifold. Therefore, we define the covariant derivative to be the forced projection of the intrinsic derivative back onto the manifold:
D F = P B ( ∂ F ) . {\displaystyle DF={\mathcal {P}}_{B}(\partial F).}
Since any general multivector can be expressed as a sum of a projection and a rejection, in this case
we introduce a new function, the shape tensor S ( a ) {\displaystyle {\mathsf {S}}(a)} , which satisfies
where × {\displaystyle \times } is the commutator product . In a local coordinate basis { e i } {\displaystyle \{e_{i}\}} spanning the tangent surface, the shape tensor is given by
Importantly, on a general manifold, the covariant derivative does not commute. In particular, the commutator is related to the shape tensor by
Clearly the term S ( a ) × S ( b ) {\displaystyle {\mathsf {S}}(a)\times {\mathsf {S}}(b)} is of interest. However it, like the intrinsic derivative, is not necessarily on the manifold. Therefore, we can define the Riemann tensor to be the projection back onto the manifold:
Lastly, if F {\displaystyle F} is of grade r {\displaystyle r} , then we can define interior and exterior covariant derivatives as
D ⋅ F = ⟨ D F ⟩ r − 1 , D ∧ F = ⟨ D F ⟩ r + 1 , {\displaystyle D\cdot F=\langle DF\rangle _{r-1},\qquad D\wedge F=\langle DF\rangle _{r+1},}
and likewise for the intrinsic derivative.
On a manifold, locally we may assign a tangent surface spanned by a set of basis vectors { e i } {\displaystyle \{e_{i}\}} . We can associate the components of a metric tensor , the Christoffel symbols , and the Riemann curvature tensor as follows:
These relations embed the theory of differential geometry within geometric calculus.
In a local coordinate system ( x 1 , … , x n {\displaystyle x^{1},\ldots ,x^{n}} ), the coordinate differentials d x 1 {\displaystyle dx^{1}} , ..., d x n {\displaystyle dx^{n}} form a basic set of one-forms within the coordinate chart . Given a multi-index I = ( i 1 , … , i k ) {\displaystyle I=(i_{1},\ldots ,i_{k})} with 1 ≤ i p ≤ n {\displaystyle 1\leq i_{p}\leq n} for 1 ≤ p ≤ k {\displaystyle 1\leq p\leq k} , we can define a k {\displaystyle k} -form
We can alternatively introduce a k {\displaystyle k} -grade multivector A {\displaystyle A} as
and a measure
Apart from a subtle difference in meaning for the exterior product with respect to differential forms versus the exterior product with respect to vectors (in the former the increments are covectors, whereas in the latter they represent scalars), we see the correspondences of the differential form
its derivative
and its Hodge dual
embed the theory of differential forms within geometric calculus.
Following is a diagram summarizing the history of geometric calculus. | https://en.wikipedia.org/wiki/Geometric_calculus |
Geometric combinatorics is a branch of mathematics in general and combinatorics in particular. It includes a number of subareas such as polyhedral combinatorics (the study of faces of convex polyhedra ), convex geometry (the study of convex sets , in particular combinatorics of their intersections), and discrete geometry , which in turn has many applications to computational geometry . Other important areas include metric geometry of polyhedra , such as the Cauchy theorem on rigidity of convex polytopes. The study of regular polytopes , Archimedean solids , and kissing numbers is also a part of geometric combinatorics. Special polytopes are also considered, such as the permutohedron , associahedron and Birkhoff polytope .
| https://en.wikipedia.org/wiki/Geometric_combinatorics
Geometrical design ( GD ) is a branch of computational geometry . It deals with the construction and representation of free-form curves, surfaces, or volumes [ 1 ] and is closely related to geometric modeling . Core problems are curve and surface modelling and representation. GD studies especially the construction and manipulation of curves and surfaces given by a set of points using polynomial, rational, piecewise polynomial, or piecewise rational methods. The most important instruments here are parametric curves and parametric surfaces , such as Bézier curves , spline curves and surfaces. An important non-parametric approach is the level-set method .
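As a small illustration of the parametric-curve machinery mentioned above, the sketch below evaluates a cubic Bézier curve by de Casteljau's algorithm; the control points are arbitrary examples:

```python
def de_casteljau(points, t):
    """Evaluate a Bezier curve with the given 2-D control points at t in [0, 1]."""
    pts = [tuple(p) for p in points]
    while len(pts) > 1:  # repeatedly interpolate between consecutive points
        pts = [((1 - t) * p[0] + t * q[0], (1 - t) * p[1] + t * q[1])
               for p, q in zip(pts, pts[1:])]
    return pts[0]

ctrl = [(0.0, 0.0), (1.0, 2.0), (3.0, 2.0), (4.0, 0.0)]  # cubic Bezier
print(de_casteljau(ctrl, 0.5))  # curve midpoint: (2.0, 1.5)
```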
Application areas include the shipbuilding, aircraft, and automotive industries, as well as architectural design. The modern ubiquity and power of computers means that even perfume bottles and shampoo dispensers are designed using techniques unheard of by shipbuilders of the 1960s.
Geometric models can be built for objects of any dimension in any geometric space. Both 2D and 3D geometric models are extensively used in computer graphics . 2D models are important in computer typography and technical drawing . 3D models are central to computer-aided design and manufacturing , and many applied technical fields such as geology and medical image processing .
Geometric models are usually distinguished from procedural and object-oriented models , which define the shape implicitly by an algorithm . They are also contrasted with digital images and volumetric models ; and with mathematical models such as the zero set of an arbitrary polynomial . However, the distinction is often blurred: for instance, geometric shapes can be represented by objects ; a digital image can be interpreted as a collection of colored squares; and geometric shapes such as circles are defined by implicit mathematical equations. Also, the modeling of fractal objects often requires a combination of geometric and procedural techniques.
Geometric problems originating in architecture can lead to interesting research and results in geometry processing, computer-aided geometric design , and discrete differential geometry. [ 2 ]
In architecture, geometric design is associated with the pioneering explorations of Chuck Hoberman into transformational geometry as a design idiom, and applications of this design idiom within the domain of architectural geometry . | https://en.wikipedia.org/wiki/Geometric_design |
Geometric drawing consists of a set of processes for constructing geometric shapes and solving problems with the use of an ungraduated ruler and a compass . [ 1 ] [ 2 ] Today, such studies can be done with the aid of software , which simulates the strokes performed by these instruments. [ 3 ]
For ancient mathematicians, geometry could not do without the methods of geometric constructions, necessary for understanding, theoretical enrichment, and problem-solving.
The accuracy and precision required of geometric drawing make it an important ally in the application of geometric concepts in significant areas of human knowledge, such as architecture , engineering , industrial design , among others.
The process of geometric drawing is based on constructions with a ruler and compass, which in turn are based on the first three postulates of Euclid's Elements .
The historical importance of rulers and compasses as instruments in solving geometric problems leads many authors to limit Geometric Drawing to the representation and solution of geometric figures in the plane. [ 4 ]
With the development of computer-aided design ( CAD ) programs, geometric drawing has become more important in teaching-learning processes (the development of spatial faculties), since the precision of computer systems surpasses the more imprecise tracing offered by rulers and compasses. [ 5 ] | https://en.wikipedia.org/wiki/Geometric_drawing
Geometric Dynamic Recrystallization (GDR) is a recrystallization mechanism that has been proposed to occur in several metals and alloys, particularly aluminium , mainly at high temperatures and low strain rates. [ 1 ] It is a variant of dynamic recrystallization .
The basic mechanism is that during deformation the grains will be increasingly flattened until the boundaries on each side are separated by only a small distance. The deformation is accompanied by the serration of the grain boundaries due to surface tension effects where they are in contact with low-angle grain boundaries belonging to sub-grains.
Eventually the points of the serrations will come into contact. Since the contacting boundaries are defects of opposite 'sign' they are able to annihilate and so reduce the total energy in the system. In effect the grain will pinch in two new grains.
The grain size is known to decrease as the applied stress is increased. However, high stresses require a high strain rate and at some point statically recrystallized grains will begin to nucleate and consume the GDRX microstructure.
There are features that are unique to GDRX:
| https://en.wikipedia.org/wiki/Geometric_dynamic_recrystallization
In mathematical logic, geometric logic is an infinitary generalisation of coherent logic , a restriction of first-order logic due to Skolem that is proof-theoretically tractable. Geometric logic is capable of expressing many mathematical theories and has close connections to topos theory .
A theory of first-order logic is geometric if it can be axiomatised using only axioms of the form ⋀ i ∈ I ϕ i , 1 ∨ ⋯ ∨ ϕ i , n i ⟹ ⋁ j ∈ J ϕ j , 1 ∨ ⋯ ∨ ϕ j , m j {\displaystyle \bigwedge _{i\in I}\phi _{i,1}\vee \dots \vee \phi _{i,n_{i}}\implies \bigvee _{j\in J}\phi _{j,1}\vee \dots \vee \phi _{j,m_{j}}} where I and J are disjoint collections of formula indices, each of which may be infinite, and the formulae φ are either atoms or negations of atoms. [ citation needed ] If all the axioms are finite (i.e., for each axiom, both I and J are finite), the theory is coherent.
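For illustration (these examples are not from the original text), a coherent axiom uses only finitely many atoms on each side, while a properly geometric axiom may use an infinitary disjunction:

```latex
% Coherent: finitely many atoms on either side of the implication.
P(x) \wedge Q(x) \;\Longrightarrow\; R(x) \vee S(x)
% Geometric but not coherent: an infinitary disjunction, read as
% "x satisfies P_n for some natural number n".
\top \;\Longrightarrow\; \bigvee_{n \in \mathbb{N}} P_n(x)
```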
Every first-order theory has a coherent conservative extension. [ citation needed ]
Dyckhoff & Negri (2015) list eight consequences of the above theorem that explain its significance (omitting footnotes and most references): [ 1 ] | https://en.wikipedia.org/wiki/Geometric_logic |
In Euclidean geometry , the right triangle altitude theorem or geometric mean theorem is a relation between the altitude on the hypotenuse in a right triangle and the two line segments it creates on the hypotenuse. It states that the geometric mean of those two segments equals the altitude.
If h denotes the altitude in a right triangle and p and q the segments on the hypotenuse, then the theorem can be stated as: [ 1 ]
h = p q {\displaystyle h={\sqrt {pq}}}
or in terms of areas:
h 2 = p q . {\displaystyle h^{2}=pq.}
The converse statement is true as well. Any triangle, in which the altitude equals the geometric mean of the two line segments created by it, is a right triangle.
The theorem can also be thought of as a special case of the intersecting chords theorem for a circle, since the converse of Thales' theorem ensures that the hypotenuse of the right angled triangle is the diameter of its circumcircle . [ 1 ]
The formulation in terms of areas yields a method to square a rectangle with ruler and compass , that is to construct a square of equal area to a given rectangle. For such a rectangle with sides p and q we denote its top left vertex with D (see the Proof > Based on similarity section for a graphic of the construction). Now we extend the segment q to its left by p (using arc AE centered on D ) and draw a half circle with endpoints A and B with the new segment p + q as its diameter. Then we erect a perpendicular line to the diameter in D that intersects the half circle in C . Due to Thales' theorem C and the diameter form a right triangle with the line segment DC as its altitude, hence DC is the side of a square with the area of the rectangle. The method also allows for the construction of square roots (see constructible number ), since starting with a rectangle that has a width of 1 the constructed square will have a side length that equals the square root of the rectangle's length. [ 1 ]
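A quick numeric check of this construction (an illustration, not from the original text) places the diameter on the x-axis and verifies that the erected perpendicular has height √(pq):

```python
import math

p, q = 2.0, 8.0
r = (p + q) / 2.0            # radius of the half circle over the diameter p + q
xD = r - q                   # x-coordinate of D, with the circle centered at 0
h = math.sqrt(r**2 - xD**2)  # height of the circle above D (point C)
print(h, math.sqrt(p * q))   # both print 4.0, as the theorem predicts
```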
Another application of this theorem provides a geometrical proof of the AM–GM inequality in the case of two numbers. For the numbers p and q one constructs a half circle with diameter p + q . Now the altitude represents the geometric mean and the radius the arithmetic mean of the two numbers. Since the altitude is always smaller or equal to the radius, this yields the inequality. [ 2 ]
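The geometric argument has a one-line algebraic counterpart (added here for completeness): for p , q > 0,

```latex
(\sqrt{p}-\sqrt{q})^2 \ge 0
  \;\Longrightarrow\; p - 2\sqrt{pq} + q \ge 0
  \;\Longrightarrow\; \frac{p+q}{2} \;\ge\; \sqrt{pq},
% with equality exactly when p = q, i.e. when the altitude coincides
% with the radius of the half circle.
```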
The theorem is usually attributed to Euclid (ca. 360–280 BC), who stated it as a corollary to proposition 8 in book VI of his Elements . In proposition 14 of book II Euclid gives a method for squaring a rectangle, which essentially matches the method given here. Euclid however provides a different slightly more complicated proof for the correctness of the construction rather than relying on the geometric mean theorem. [ 1 ] [ 3 ]
Proof of theorem :
The triangles △ ADC and △ BCD are similar , since:
triangle △ ADC shares the angle ∠ A with triangle △ ABC and both contain a right angle (at D and at C , respectively), hence they are similar;
triangle △ BCD shares the angle ∠ B with triangle △ ABC and both contain a right angle, hence they are similar as well.
Therefore, both triangles △ ACD , △ BCD are similar to △ ABC and to each other, i.e. △ A C D ∼ △ A B C ∼ △ B C D . {\displaystyle \triangle ACD\sim \triangle ABC\sim \triangle BCD.}
Because of the similarity we get the following equality of ratios and its algebraic rearrangement yields the theorem: [ 1 ] h p = q h ⇔ h 2 = p q . {\displaystyle {\frac {h}{p}}={\frac {q}{h}}\,\Leftrightarrow \,h^{2}=pq.}
Proof of converse:
For the converse we have a triangle △ ABC in which h 2 = p q {\displaystyle h^{2}=pq} holds and need to show that the angle at C is a right angle. Now because of h 2 = p q {\displaystyle h^{2}=pq} we also have h p = q h . {\displaystyle {\tfrac {h}{p}}={\tfrac {q}{h}}.} Together with ∠ A D C = ∠ C D B {\displaystyle \angle ADC=\angle CDB} the triangles △ ADC , △ BDC have an angle of equal size and have corresponding pairs of legs with the same ratio. This means the triangles are similar, which yields: ∠ B A C = ∠ D C B . {\displaystyle \angle BAC=\angle DCB.} Since ∠ A D C = 90 ∘ {\displaystyle \angle ADC=90^{\circ }} , the angles ∠ B A C {\displaystyle \angle BAC} and ∠ A C D {\displaystyle \angle ACD} sum to 90°, and therefore ∠ A C B = ∠ A C D + ∠ D C B = 90 ∘ . {\displaystyle \angle ACB=\angle ACD+\angle DCB=90^{\circ }.}
In the setting of the geometric mean theorem there are three right triangles △ ABC , △ ADC and △ DBC in which the Pythagorean theorem yields: h 2 = b 2 − p 2 {\displaystyle h^{2}=b^{2}-p^{2}} , h 2 = a 2 − q 2 {\displaystyle h^{2}=a^{2}-q^{2}} and c 2 = a 2 + b 2 {\displaystyle c^{2}=a^{2}+b^{2}} (with a = BC , b = AC and c = AB ).
Adding the first two equations and then using the third leads to: 2 h 2 = a 2 + b 2 − p 2 − q 2 = c 2 − p 2 − q 2 = ( p + q ) 2 − p 2 − q 2 = 2 p q , {\displaystyle 2h^{2}=a^{2}+b^{2}-p^{2}-q^{2}=c^{2}-p^{2}-q^{2}=(p+q)^{2}-p^{2}-q^{2}=2pq,}
which finally yields the formula of the geometric mean theorem. [ 4 ]
Dissecting the right triangle along its altitude h yields two similar triangles, which can be augmented and arranged in two alternative ways into a larger right triangle with perpendicular sides of lengths p + h and q + h . One such arrangement requires a square of area h 2 to complete it, the other a rectangle of area pq . Since both arrangements yield the same triangle, the areas of the square and the rectangle must be identical.
A square constructed on the altitude can be transformed into a rectangle of equal area with sides p and q with the help of three shear mappings (shear mappings preserve the area). | https://en.wikipedia.org/wiki/Geometric_mean_theorem
In mathematics , geometric measure theory ( GMT ) is the study of geometric properties of sets (typically in Euclidean space ) through measure theory . It allows mathematicians to extend tools from differential geometry to a much larger class of surfaces that are not necessarily smooth .
Geometric measure theory was born out of the desire to solve Plateau's problem (named after Joseph Plateau ) which asks if for every smooth closed curve in R 3 {\displaystyle \mathbb {R} ^{3}} there exists a surface of least area among all surfaces whose boundary equals the given curve. Such surfaces mimic soap films .
The problem had remained open since it was posed in 1760 by Lagrange . It was solved independently in the 1930s by Jesse Douglas and Tibor Radó under certain topological restrictions. In 1960 Herbert Federer and Wendell Fleming used the theory of currents , with which they were able to solve the orientable Plateau's problem analytically without topological restrictions, thus sparking geometric measure theory. Later Jean Taylor , building on work of Fred Almgren , proved Plateau's laws for the kinds of singularities that can occur in these more general soap films and soap bubble clusters.
The following objects are central in geometric measure theory:
The following theorems and concepts are also central:
The Brunn–Minkowski inequality for the n -dimensional volumes of convex bodies K and L , [ vol ⁡ ( K + L ) ] 1 / n ≥ [ vol ⁡ ( K ) ] 1 / n + [ vol ⁡ ( L ) ] 1 / n , {\displaystyle [\operatorname {vol} (K+L)]^{1/n}\geq [\operatorname {vol} (K)]^{1/n}+[\operatorname {vol} (L)]^{1/n},}
can be proved on a single page and quickly yields the classical isoperimetric inequality . The Brunn–Minkowski inequality also leads to Anderson's theorem in statistics. The proof of the Brunn–Minkowski inequality predates modern measure theory; the development of measure theory and Lebesgue integration allowed connections to be made between geometry and analysis, to the extent that in an integral form of the Brunn–Minkowski inequality known as the Prékopa–Leindler inequality the geometry seems almost entirely absent.
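As a quick sanity check of the inequality, consider axis-aligned boxes, the simplest convex bodies: their Minkowski sum is again a box whose side lengths add coordinate-wise. The sketch below (numpy assumed; an illustration, not part of any standard library) verifies the inequality numerically for random boxes.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3  # dimension

# Axis-aligned boxes: the Minkowski sum of two boxes is the box whose
# side lengths are the coordinate-wise sums of the summands' side lengths.
a = rng.uniform(0.1, 2.0, n)  # side lengths of K
b = rng.uniform(0.1, 2.0, n)  # side lengths of L

vol = lambda sides: float(np.prod(sides))
lhs = vol(a + b) ** (1 / n)                   # vol(K + L)^{1/n}
rhs = vol(a) ** (1 / n) + vol(b) ** (1 / n)   # vol(K)^{1/n} + vol(L)^{1/n}

print(lhs, rhs)
assert lhs >= rhs - 1e-12  # Brunn-Minkowski holds (equality iff the boxes are homothetic)
```

For boxes the inequality reduces to the superadditivity of the geometric mean, which hints at why the geometric mean recurs throughout this circle of ideas. | https://en.wikipedia.org/wiki/Geometric_measure_theory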
Geometric mechanics is a branch of mathematics applying particular geometric methods to many areas of mechanics , from mechanics of particles and rigid bodies to fluid mechanics and control theory .
Geometric mechanics applies principally to systems for which the configuration space is a Lie group , or a group of diffeomorphisms , or more generally where some aspect of the configuration space has this group structure. For example, the configuration space of a rigid body such as a satellite is the group of Euclidean motions (translations and rotations in space), while the configuration space for a liquid crystal is the group of diffeomorphisms coupled with an internal state (gauge symmetry or order parameter).
One of the principal ideas of geometric mechanics is reduction , which goes back to Jacobi's elimination of the node in the 3-body problem, but in its modern form is due to K. Meyer (1973) and independently J.E. Marsden and A. Weinstein (1974), both inspired by the work of Smale (1970). Symmetry of a Hamiltonian or Lagrangian system gives rise to conserved quantities, by Noether's theorem , and these conserved quantities are the components of the momentum map J . If P is the phase space and G the symmetry group, the momentum map is a map J : P → g ∗ {\displaystyle \mathbf {J} :P\to {\mathfrak {g}}^{*}} , and the reduced spaces are quotients of the level sets of J by the subgroup of G preserving the level set in question: for μ ∈ g ∗ {\displaystyle \mu \in {\mathfrak {g}}^{*}} one defines P μ = J − 1 ( μ ) / G μ {\displaystyle P_{\mu }=\mathbf {J} ^{-1}(\mu )/G_{\mu }} , and this reduced space is a symplectic manifold if μ {\displaystyle \mu } is a regular value of J .
One of the important developments arising from the geometric approach to mechanics is the incorporation of the geometry into numerical methods.
In particular, symplectic and variational integrators are proving especially accurate for the long-term integration of Hamiltonian and Lagrangian systems.
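A minimal sketch of the phenomenon, in Python: for the harmonic oscillator, an ordinary explicit Euler step lets the energy drift systematically, while the symplectic Euler step (the same arithmetic, but using the updated momentum for the position update) keeps it bounded over very long runs. The step size and step count below are illustrative choices.

```python
# Harmonic oscillator H(q, p) = (p^2 + q^2) / 2, unit mass and frequency.
def explicit_euler(q, p, dt):
    return q + dt * p, p - dt * q          # both updates use the old state

def symplectic_euler(q, p, dt):
    p = p - dt * q                         # kick: update momentum first
    return q + dt * p, p                   # drift: use the *new* momentum

dt, steps = 0.01, 100_000
energy = lambda q, p: 0.5 * (q * q + p * p)

for step in (explicit_euler, symplectic_euler):
    q, p = 1.0, 0.0
    for _ in range(steps):
        q, p = step(q, p, dt)
    # explicit Euler: energy grows like (1 + dt^2)^steps; symplectic: stays near 0.5
    print(step.__name__, energy(q, p))
```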
The term "geometric mechanics" occasionally refers to 17th-century mechanics. [ 1 ]
As a modern subject, geometric mechanics has its roots in four works written in the 1960s. These were by Vladimir Arnold (1966), Stephen Smale (1970) and Jean-Marie Souriau (1970), and the first edition of Abraham and Marsden 's Foundations of Mechanics (1967). Arnold's fundamental work showed that Euler's equations for the free rigid body are the equations for geodesic flow on the rotation group SO(3) and carried this geometric insight over to the dynamics of ideal fluids, where the rotation group is replaced by the group of volume-preserving diffeomorphisms. Smale's paper on Topology and Mechanics investigates the conserved quantities arising from Noether's theorem when a Lie group of symmetries acts on a mechanical system, defines what is now called the momentum map (which Smale calls angular momentum), and raises questions about the topology of the energy-momentum level surfaces and the effect on the dynamics. In his book, Souriau also considers the conserved quantities arising from the action of a group of symmetries, but he concentrates more on the geometric structures involved (for example the equivariance properties of this momentum for a wide class of symmetries), and less on questions of dynamics.
These ideas, and particularly those of Smale were central in the second edition of Foundations of Mechanics (Abraham and Marsden, 1978). | https://en.wikipedia.org/wiki/Geometric_mechanics |
Geometric modeling is a branch of applied mathematics and computational geometry that studies methods and algorithms for the mathematical description of shapes .
The shapes studied in geometric modeling are mostly two- or three-dimensional ( solid figures ), although many of its tools and principles can be applied to sets of any finite dimension. Today most geometric modeling is done with computers and for computer-based applications. Two-dimensional models are important in computer typography and technical drawing . Three-dimensional models are central to computer-aided design and manufacturing (CAD/CAM), and widely used in many applied technical fields such as civil and mechanical engineering , architecture , geology and medical image processing . [ 1 ]
Geometric models are usually distinguished from procedural and object-oriented models , which define the shape implicitly by an opaque algorithm that generates its appearance. [ citation needed ] They are also contrasted with digital images and volumetric models which represent the shape as a subset of a fine regular partition of space; and with fractal models that give an infinitely recursive definition of the shape. However, these distinctions are often blurred: for instance, a digital image can be interpreted as a collection of colored squares ; and geometric shapes such as circles are defined by implicit mathematical equations. Also, a fractal model yields a parametric or implicit model when its recursive definition is truncated to a finite depth.
Notable awards of the area are the John A. Gregory Memorial Award [ 2 ] and the Bézier award. [ 3 ]
General textbooks:
For multi-resolution (multiple level of detail ) geometric modeling :
Subdivision methods (such as subdivision surfaces ):
| https://en.wikipedia.org/wiki/Geometric_modeling
The study of geometric morphometrics in anthropology has made a major impact on the field of morphometrics by aiding in some of the technological and methodological advancements. Geometric morphometrics is an approach that studies shape using Cartesian landmark and semilandmark coordinates that are capable of capturing morphologically distinct shape variables. The landmarks can be analyzed using various statistical techniques separate from size, position, and orientation so that the only variables being observed are based on morphology . Geometric morphometrics is used to observe variation in numerous formats, especially those pertaining to evolutionary and biological processes, and can be used to help answer many questions in physical anthropology . [ 1 ] [ 2 ] [ 3 ] [ 4 ] [ 5 ] [ 6 ] Geometric morphometrics is part of a larger subfield in anthropology, which has more recently been named virtual anthropology. Virtual anthropology looks at virtual morphology: the use of virtual copies of specimens to perform various quantitative analyses on shape (such as geometric morphometrics) and form. [ 7 ]
The field of geometric morphometrics grew out of the accumulation of improvements of methods and approaches over several decades beginning with Francis Galton (1822-1911). Galton was a polymath and the president of the Anthropological Institute of Great Britain. [ 6 ] In 1907 he invented a way to quantify facial shapes using a base-line registration approach for shape comparisons. [ 5 ] [ 6 ] This was later adapted by Fred Bookstein and termed “two-point coordinates” or “Bookstein-shape coordinates”. [ 4 ] [ 5 ]
In the 1940s, D’Arcy Wentworth Thompson (biologist and mathematician, 1860-1948) looked at ways in which quantification could be attached to biological shape, based on developmental and evolutionary theories. This led to the first branch of multivariate morphometrics, which emphasized matrix manipulations involving variables. [ 8 ] In the late 1970s and early 1980s, Fred Bookstein (currently a professor of Anthropology at the University of Vienna) began using Cartesian transformations and David George Kendall (statistician, 1918-2007) showed that figures that hold the same shape can be treated as separate points in a geometric space. [ 8 ] [ 9 ] Finally, in 1996, Leslie Marcus (paleontologist, 1930-2002) convinced colleagues to use morphometrics on the famous Ötzi skeleton , which helped expose the importance of the applications of these methods. [ 9 ]
Traditional morphometrics is the study of morphological variations between or within groups using multivariate statistical tools. Shape is defined by collecting and analyzing length measurements, counts, ratios, and angles. [ 1 ] [ 2 ] [ 6 ] The statistical tools are able to quantify the covariation within and between samples. Some of the typical statistical tools used for traditional morphometrics are: principal components , factor analysis , canonical variate , and discriminant function analysis . It is also possible to study allometry , which is the observed change in shape when there is change in size. However, there are problems pertaining to size correction since linear distance is highly correlated with size. There have been multiple methods put forth to correct for this correlation, but these methods disagree and can end up with different results using the same dataset. Another problem is that linear distances are not always defined by the same landmarks, making them difficult to use for comparative purposes. [ 2 ] For shape analysis itself, which is the goal of morphometrics, the biggest downside to traditional morphometrics is that it does not capture the complete variation of shape in space, which is what the measurements are supposed to be based on. [ 2 ] [ 6 ] For example, if one tried to compare the length and width of an oval and a teardrop shape with the same dimensions, they would be deemed the same using traditional morphometrics. [ 2 ] Geometric morphometrics tries to correct these problems by capturing more variability in shape.
There is a basic structure to successfully performing and completing every geometric morphometric study:
The first step is to define your landmark set. Landmarks have to be anatomically recognizable and the same for all specimens in the study. Landmarks should be selected so that they properly capture the shape under study and can be replicated. The sample size should be roughly three times the number of landmarks chosen, and the landmarks must be recorded in the same order for every specimen. [ 1 ] [ 4 ] [ 5 ]
Semilandmarks, also called sliding landmarks, are used when the location of a landmark along a curvature might not be identifiable or repeatable. [ 4 ] [ 5 ] Semilandmarks were created in order to take landmark-based geometric morphometrics to the next step by capturing the shape of difficult areas such as smooth curves and surfaces. [ 5 ] In order to obtain a semilandmark, the curvature still has to start and end on definable landmarks, capture observed morphology, remain homologous across specimens following the same steps seen above for regular landmarks, be equal in number, and be equally spaced. [ 2 ] [ 5 ] When this approach was first proposed, Bookstein suggested gaining semilandmarks by densely sampling landmarks along the surface in a mesh and slowly thinning out the landmarks until the desired curvature was obtained. [ 4 ] Newer landmark programs aid in the process but there are still some steps that must be taken in order for the semilandmarks to be the same across the whole sample. Semilandmarks are not placed on the actual curve or surface but on tangent vectors to the curve or tangent planes to the surface. The sliding of semilandmarks in new programs is performed by either selecting a specimen to be the model specimen for the rest of the specimens or using a computational sample mean from tangent vectors. Semilandmarks are automatically placed in most programs when the observer chooses a starting and ending point on definable landmarks and slides the semilandmarks between them until the shape is captured. The semilandmarks are then mapped onto the rest of the specimens in the sample. [ 5 ] Since shape will differ between specimens, the observer has to manually go through and make sure the landmarks and semilandmarks are on the surface for the rest of the specimens. If not, they must be moved to touch the surface; this process still maintains the correct location. There is still room for improvement to these methods, but this is the most consistent option at the moment. Once mapped on, these semilandmarks can be treated just like landmarks for statistical analysis.
This is a different approach to data collection than using landmarks and semilandmarks. In this approach, deformation grids are used to capture the morphological shape differences and changes. The general idea is that shape variations can be recorded from one specimen to another based on the distortion of a grid. [ 5 ] Bookstein proposed the use of a thin-plate spline (TPS) interpolation, which is a computed deformation grid that calculates a mapping function between two individuals that measures point differences. [ 4 ] Basically, the TPS interpolation has a template computed grid that is applied to specimens and the differences in shape can be read from the different deformations of the template. [ 4 ] [ 5 ] The TPS can be used for both two- and three-dimensional data; although it has proved less effective for visualizing three-dimensional differences, it can easily be applied to the pixels of an image or volumetric data from CT or MRI scans. [ 5 ]
Landmark and semilandmark coordinates can be recorded on each specimen, but size, orientation, and position can vary for each of those specimens, adding in variables that distract from the analysis of shape. This can be fixed by using superimposition, with generalized Procrustes analysis (GPA) being the most common application. GPA removes the variation of size, orientation, and position by superimposing the landmarks in a common coordinate system. [ 2 ] [ 6 ] The landmarks for all specimens are optimally translated, rotated, and scaled based on a least-squares estimation. The first step is translation and rotation to minimize the squared and summed differences (squared Procrustes distance) between landmarks on each specimen. Then the landmarks are individually scaled to the same unit Centroid size. Centroid size is the square root of the sum of squared distances of the landmarks in a configuration to their mean location. The translation, rotation, and scaling bring the landmark configurations for all specimens into a common coordinate system so that the only differing variables are based on shape alone. The new superimposed landmarks can now be analyzed in multivariate statistical analyses. [ 6 ]
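The superimposition step can be sketched in Python under simplifying assumptions (2D landmarks already in corresponding order, rotation solved by orthogonal Procrustes via the SVD; numpy assumed). This is an outline of the idea, not a replacement for dedicated morphometrics software.

```python
import numpy as np

def generalized_procrustes(configs, iters=10):
    """Superimpose a list of (k, 2) landmark arrays: remove position, size,
    and orientation, leaving shape variables in a common coordinate system."""
    X = [c - c.mean(axis=0) for c in configs]   # translate centroids to the origin
    X = [c / np.linalg.norm(c) for c in X]      # scale each to unit centroid size
    mean = X[0]
    for _ in range(iters):
        aligned = []
        for c in X:
            # best orthogonal alignment of c onto the current mean shape
            # (this simple version does not exclude reflections)
            u, _, vt = np.linalg.svd(c.T @ mean)
            aligned.append(c @ (u @ vt))
        mean = np.mean(aligned, axis=0)
        mean /= np.linalg.norm(mean)            # keep the mean at unit size
        X = aligned
    return X, mean

# Hypothetical use: noisy copies of a triangle in random poses line up closely.
rng = np.random.default_rng(1)
base = np.array([[0.0, 0.0], [1.0, 0.0], [0.4, 0.9]])
def random_pose(c):
    t = rng.uniform(0, 2 * np.pi)
    R = np.array([[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]])
    return (c + rng.normal(0, 0.01, c.shape)) @ R * rng.uniform(0.5, 2.0) + rng.uniform(-5, 5, 2)

aligned, mean_shape = generalized_procrustes([random_pose(base) for _ in range(20)])
```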
In general, principal components analysis is used to construct overarching variables that take the place of multiple correlated variables in order to reveal the underlying structure of the dataset. This is helpful in geometric morphometrics where a large set of landmarks can create correlated relationships that might be difficult to differentiate without reducing them in order to look at the overall variability in the data. [ 5 ] [ 6 ] Reducing the number of variables is also necessary because the number of variables being observed and analyzed should not exceed sample size. [ 6 ] Principal component scores are computed through an eigendecomposition of a sample’s covariance matrix; the analysis rigidly rotates the data, preserving Procrustes distances. In other words, a principal components analysis preserves the shape variables that were scaled, rotated, and translated during the generalized Procrustes analysis. The resulting principal component scores project the shape variables onto low-dimensional space based on eigenvectors. [ 5 ] The scores can be plotted in various ways to look at the shape variables, such as scatterplots. It is important to explore what shape variables are being observed to make sure the principal components being analyzed are pertinent to the questions being asked. Although the components might show shape variables not relevant to the question at hand, it is perfectly acceptable to leave those components out of any further analysis for a specific project. [ 6 ]
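Continuing the sketch above, a principal components analysis of the aligned coordinates can be written in a few lines (again an illustration with numpy assumed, not production code):

```python
import numpy as np

def shape_pca(aligned):
    """PCA of Procrustes-aligned (k, 2) landmark configurations."""
    X = np.array([c.ravel() for c in aligned])  # one row of 2k shape variables per specimen
    Xc = X - X.mean(axis=0)                     # center on the consensus (mean) shape
    cov = Xc.T @ Xc / (len(X) - 1)              # covariance matrix of the shape variables
    evals, evecs = np.linalg.eigh(cov)          # eigendecomposition (ascending eigenvalues)
    order = np.argsort(evals)[::-1]             # sort descending: PC1 explains the most variance
    return Xc @ evecs[:, order], evals[order]   # scores (specimens in shape space), variances

scores, variances = shape_pca(aligned)          # 'aligned' from the Procrustes sketch above
print(variances[:3] / variances.sum())          # proportion of shape variance per component
```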
Partial least squares is similar to principal components analysis in that it reduces the number of variables being observed so patterns are more easily observed in the data, but it uses a linear regression model. PLS is an approach that looks at two or more sets of variables measured on the same specimens and extracts the linear combinations that best represent the pattern of covariance across the sets. [ 5 ] [ 6 ] The linear combinations will optimally describe the covariances and provide a low-dimensional output to compare the different sets. With the highest shape variation covariance, mean shape, and the other shape covariances that exist among the sets, this approach is ideal for looking at the significance of group differences. PLS has been used extensively in studies that look at things such as sexual dimorphism, or other general morphological differences found at the population, subspecies, and species level. [ 6 ] It has also been used to look at functional, environmental, or behavioral differences that could influence the found shape covariance between sets. [ 5 ]
Multiple or multivariate regression is an approach to look at the relationship between several independent or predictor variables and a dependent or influential variable. It is best used in geometric morphometrics when analyzing shape variables based on an external influence. For example, it can be used in studies with attached functional or environmental variables like age or the development over time in certain environments. [ 4 ] [ 5 ] [ 6 ] The multivariate regression of shape based on the logarithm of centroid size (square root of the sum of squared distances of landmarks) is ideal for allometric studies. Allometry is the analysis of shape based on the biological parameters of growth and size. This approach is not affected by the number of dependent shape variables or their covariance, so the results of regression coefficients can be seen as a deformation in shape. [ 5 ]
The human brain differs from that of other species in the size of the visual cortex , temporal lobe , and parietal cortex , and in its increased gyrification (folds of the brain). There have been many questions as to why these changes occurred and how they contributed to cognition and behavior, which are important questions in human evolution. Geometric morphometrics has been used to explore some of these questions using virtual endocasts (casts of the inside of the cranium) to gather information since brain tissue does not preserve in the fossil record. Geometric morphometrics can reveal small shape differences between brains such as differences between modern humans and Neanderthals whose brains were similar in size. [ 10 ] Neubauer and colleagues looked at the endocasts of chimpanzees and modern humans to observe brain growth using 3D landmarks and semilandmarks. They found that there is an early “globularization phase” in human brain development that shows expansion of the parietal and cerebellar areas, which does not occur in chimpanzees. [ 10 ] [ 11 ] Gunz and colleagues extended the study further and found that the “globularization phase” does not occur in Neanderthals and instead Neanderthal brain growth is more similar to chimpanzees. This difference could point to some important changes in the human brain that led to different organization and cognitive functions. [ 10 ] [ 12 ] [ 13 ]
There have been many debates on the relationships between Middle Pleistocene hominin crania from Eurasia and Africa because they display a mosaic of both primitive and derived traits. Studies on cranial morphology for these specimens have created arguments that Eurasian fossils from the Middle Pleistocene are a transition between Homo erectus and later hominins like Neanderthals and modern humans. However, there are two sides to the argument, with one side saying that the European and African fossils are from a single taxon while others say that the Neanderthal lineage should be included. Harvati and colleagues decided to attempt to quantify the craniofacial features of Neanderthals and European Middle Pleistocene fossils using 3D landmarks to try to add to the debate. They found that some features were more Neanderthal-like while others were primitive and likely from the Middle Pleistocene African hominins, so the argument could still go either way. [ 10 ] [ 14 ] Freidline and colleagues further added to the debate by looking at both adult and subadult crania of modern and Pleistocene hominins using 3D landmarks and semilandmarks. They found similarities in facial morphology between Middle Pleistocene fossils from Europe and Africa and a divide in facial morphology during the Pleistocene based on time period. The study also found that some characteristics separating Neanderthals from Middle Pleistocene hominins, like the size of the nasal aperture and degree of midfacial prognathism, might be due to allometric differences. [ 10 ] [ 15 ]
Crania can be used to classify ancestry and sex to aid in forensic contexts such as crime scenes and mass fatalities. In 2010, Ross and colleagues were provided federal funds by the U.S. Department of Justice to compile data for population specific classification criteria using geometric morphometrics. Their aim was to create an extensive population database from 3D landmarks on human crania, to develop and validate population specific procedures for classification of unknown individuals, and to develop software to use in forensic identification. They recorded 75 craniofacial landmarks in 3D with a Microscribe digitizer on about 1000 individuals from European, African, and Hispanic populations. The software they developed, called 3D-ID , can classify unknown individuals into probable sex and ancestry, and allows for fragmentary and damaged specimens to be used. [ 16 ] A copy of the full manuscript can be found here: Geometric Morphometric Tools for the Classification of Human Skulls
Geometric morphometrics can also be used to capture the slight shape variations found in postcranial bones of the human body such as os coxae . Bierry and colleagues used 3D CT reconstructions of modern adult pelvic bones for 104 individuals to look at the shape of the obturator foramen . After a normalization technique to take out the factor of size, they outlined the obturator foramen with landmarks and semilandmarks to capture its shape. They chose the obturator foramen because it tends to be oval in males and triangular in females. The results show a classification accuracy of 88.5% for males and 80.8% for females using a Discriminant Fourier Analysis . [ 17 ] Another study done by Gonzalez and colleagues used geometric morphometrics to capture the complete shape of the ilium and ischiopubic ramus . They placed landmarks and semilandmarks on 2D photographic images of 121 left pelvic bones from a collection of undocumented skeletons at the Museu Anthropológico de Coimbra in Portugal. Since the pelvic bones were of unknown origin, they used a K-means Cluster Analysis to determine a sex category before performing a Discriminant Function analysis . The results had a classification accuracy for the greater sciatic notch of 90.9% and the ischiopubic ramus at 93.4 to 90.1%. [ 18 ]
In archaeology, geometric morphometrics is used to examine the shape variations or standardization of artifacts to answer questions about typological and technological changes. Most applications are for stone tools, to measure variations in morphology between different assemblage groups in order to understand their functions. [ 19 ] [ 20 ] [ 21 ] [ 22 ] [ 23 ] Some applications to pottery shape identify the level of standardization in order to explore ceramic production and its implications for social organization. [ 24 ] [ 25 ] [ 26 ]
The books listed below are the standard suggestions for anyone who wants to obtain a comprehensive understanding of morphometrics (referred to by colors):
- The Red Book : Bookstein, F. L., B. Chernoff, R. Elder, J. Humphries, G. Smith, and R. Strauss. 1985. Morphometrics in Evolutionary Biology
- The Blue Book : Rohlf, F. J. and F. L. Bookstein (eds.). 1990. Proceedings of the Michigan Morphometrics Workshop
- The Orange Book : Bookstein, F. L. 1991. Morphometric Tools for Landmark Data. Geometry and Biology
- The Black Book : Marcus, L. F., E. Bello, A. García-Valdecasas (eds.). 1993. Contributions to Morphometrics
- The Green Book : Zelditch, M. L., D. L. Swiderski, H. D. Sheets, and W. L. Fink. 2004. Geometric Morphometrics for biologists: A Primer
| https://en.wikipedia.org/wiki/Geometric_morphometrics_in_anthropology
In classical and quantum mechanics , geometric phase is a phase difference acquired over the course of a cycle , when a system is subjected to cyclic adiabatic processes , which results from the geometrical properties of the parameter space of the Hamiltonian . [ 1 ] The phenomenon was independently discovered by S. Pancharatnam (1956) [ 2 ] in classical optics and by H. C. Longuet-Higgins (1958) [ 3 ] in molecular physics; it was generalized by Michael Berry in 1984. [ 4 ] It is also known as the Pancharatnam–Berry phase , Pancharatnam phase , or Berry phase .
It can be seen in the conical intersection of potential energy surfaces [ 3 ] [ 5 ] and in the Aharonov–Bohm effect . Geometric phase around the conical intersection involving the ground electronic state of the C 6 H 3 F 3 + molecular ion is discussed on pages 385–386 of the textbook by Bunker and Jensen. [ 6 ] In the case of the Aharonov–Bohm effect, the adiabatic parameter is the magnetic field enclosed by two interference paths, and it is cyclic in the sense that these two paths form a loop. In the case of the conical intersection, the adiabatic parameters are the molecular coordinates . Apart from quantum mechanics, it arises in a variety of other wave systems, such as classical optics . As a rule of thumb, it can occur whenever there are at least two parameters characterizing a wave in the vicinity of some sort of singularity or hole in the topology; two parameters are required because either the set of nonsingular states will not be simply connected , or there will be nonzero holonomy .
Waves are characterized by amplitude and phase , and may vary as a function of those parameters. The geometric phase occurs when both parameters are changed simultaneously but very slowly (adiabatically), and eventually brought back to the initial configuration. In quantum mechanics, this could involve rotations but also translations of particles, which are apparently undone at the end. One might expect that the waves in the system return to the initial state, as characterized by the amplitudes and phases (and accounting for the passage of time). However, if the parameter excursions correspond to a loop instead of a self-retracing back-and-forth variation, then it is possible that the initial and final states differ in their phases. This phase difference is the geometric phase, and its occurrence typically indicates that the system's parameter dependence is singular (its state is undefined) for some combination of parameters.
To measure the geometric phase in a wave system, an interference experiment is required. The Foucault pendulum is an example from classical mechanics that is sometimes used to illustrate the geometric phase. This mechanics analogue of the geometric phase is known as the Hannay angle .
In a quantum system at the n -th eigenstate , an adiabatic evolution of the Hamiltonian sees the system remain in the n -th eigenstate of the Hamiltonian, while also obtaining a phase factor. The phase obtained has a contribution from the state's time evolution and another from the variation of the eigenstate with the changing Hamiltonian. The second term corresponds to the Berry phase, and for non-cyclical variations of the Hamiltonian it can be made to vanish by a different choice of the phase associated with the eigenstates of the Hamiltonian at each point in the evolution.
However, if the variation is cyclical, the Berry phase cannot be cancelled; it is invariant and becomes an observable property of the system. By reviewing the proof of the adiabatic theorem given by Max Born and Vladimir Fock , in Zeitschrift für Physik 51 , 165 (1928), we could characterize the whole change of the adiabatic process into a phase term. Under the adiabatic approximation, the coefficient of the n -th eigenstate under adiabatic process is given by C n ( t ) = C n ( 0 ) exp [ − ∫ 0 t ⟨ ψ n ( t ′ ) | ψ ˙ n ( t ′ ) ⟩ d t ′ ] = C n ( 0 ) e i γ n ( t ) , {\displaystyle C_{n}(t)=C_{n}(0)\exp \left[-\int _{0}^{t}\langle \psi _{n}(t')|{\dot {\psi }}_{n}(t')\rangle \,dt'\right]=C_{n}(0)e^{i\gamma _{n}(t)},} where γ n ( t ) {\displaystyle \gamma _{n}(t)} is the Berry's phase with respect to parameter t . Changing the variable t into generalized parameters, we could rewrite the Berry's phase into γ n [ C ] = i ∮ C ⟨ n , t | ( ∇ R | n , t ⟩ ) d R , {\displaystyle \gamma _{n}[C]=i\oint _{C}\langle n,t|{\big (}\nabla _{R}|n,t\rangle {\big )}\,dR,} where R {\displaystyle R} parametrizes the cyclic adiabatic process. Note that the normalization of | n , t ⟩ {\displaystyle |n,t\rangle } implies that the integrand is imaginary, so that γ n [ C ] {\displaystyle \gamma _{n}[C]} is real. It follows a closed path C {\displaystyle C} in the appropriate parameter space. Geometric phase along the closed path C {\displaystyle C} can also be calculated by integrating the Berry curvature over surface enclosed by C {\displaystyle C} .
One of the easiest examples is the Foucault pendulum . An easy explanation in terms of geometric phases is given by Wilczek and Shapere: [ 7 ]
How does the pendulum precess when it is taken around a general path C ? For transport along the equator , the pendulum will not precess. [...] Now if C is made up of geodesic segments, the precession will all come from the angles where the segments of the geodesics meet; the total precession is equal to the net deficit angle which in turn equals the solid angle enclosed by C modulo 2 π . Finally, we can approximate any loop by a sequence of geodesic segments, so the most general result (on or off the surface of the sphere) is that the net precession is equal to the enclosed solid angle.
To put it in different words, there are no inertial forces that could make the pendulum precess, so the precession (relative to the direction of motion of the path along which the pendulum is carried) is entirely due to the turning of this path. Thus the orientation of the pendulum undergoes parallel transport . For the original Foucault pendulum, the path is a circle of latitude , and by the Gauss–Bonnet theorem , the phase shift is given by the enclosed solid angle. [ 8 ]
In a near-inertial frame moving in tandem with the Earth, but not sharing the rotation of the Earth about its own axis, the suspension point of the pendulum traces out a circular path during one sidereal day.
At the latitude of Paris, 48 degrees 51 minutes north, a full precession cycle takes just under 32 hours, so after one sidereal day, when the Earth is back in the same orientation as one sidereal day before, the oscillation plane has turned by just over 270 degrees. If the plane of swing was north–south at the outset, it is east–west one sidereal day later.
This also implies that there has been exchange of momentum ; the Earth and the pendulum bob have exchanged momentum. The Earth is so much more massive than the pendulum bob that the Earth's change of momentum is unnoticeable. Nonetheless, since the pendulum bob's plane of swing has shifted, the conservation laws imply that an exchange must have occurred.
Rather than tracking the change of momentum, the precession of the oscillation plane can efficiently be described as a case of parallel transport . For that, it can be demonstrated, by composing the infinitesimal rotations, that the precession rate is proportional to the projection of the angular velocity of Earth onto the normal direction to Earth, which implies that the trace of the plane of oscillation will undergo parallel transport. After 24 hours, the difference between initial and final orientations of the trace in the Earth frame is α = −2 π sin φ , which corresponds to the value given by the Gauss–Bonnet theorem . α is also called the holonomy or geometric phase of the pendulum. When analyzing earthbound motions, the Earth frame is not an inertial frame , but rotates about the local vertical at an effective rate of 2π sin φ radians per day. A simple method employing parallel transport within cones tangent to the Earth's surface can be used to describe the rotation angle of the swing plane of Foucault's pendulum. [ 9 ] [ 10 ]
From the perspective of an Earth-bound coordinate system (in which the measuring circle and the observer are fixed to the Earth), using a rectangular coordinate system with its x axis pointing east and its y axis pointing north, the precession of the pendulum is due to the Coriolis force ; the other forces present, including gravity and the centrifugal force, have no direct precessional component, and Euler's force is low because Earth's rotation speed is nearly constant. Consider a planar pendulum with constant natural frequency ω in the small angle approximation . There are two forces acting on the pendulum bob: the restoring force provided by gravity and the wire, and the Coriolis force (the centrifugal force, opposed to the gravitational restoring force, can be neglected). The Coriolis force at latitude φ is horizontal in the small angle approximation and is given by F c , x = 2 m Ω d y d t sin φ , F c , y = − 2 m Ω d x d t sin φ , {\displaystyle {\begin{aligned}F_{{\text{c}},x}&=2m\Omega {\dfrac {dy}{dt}}\sin \varphi ,\\F_{{\text{c}},y}&=-2m\Omega {\dfrac {dx}{dt}}\sin \varphi ,\end{aligned}}} where Ω is the rotational frequency of Earth, F c, x is the component of the Coriolis force in the x direction, and F c, y is the component of the Coriolis force in the y direction.
The restoring force, in the small-angle approximation and neglecting centrifugal force, is given by F g , x = − m ω 2 x , F g , y = − m ω 2 y . {\displaystyle {\begin{aligned}F_{g,x}&=-m\omega ^{2}x,\\F_{g,y}&=-m\omega ^{2}y.\end{aligned}}}
Using Newton's laws of motion , this leads to the system of equations d 2 x d t 2 = − ω 2 x + 2 Ω d y d t sin φ , d 2 y d t 2 = − ω 2 y − 2 Ω d x d t sin φ . {\displaystyle {\begin{aligned}{\dfrac {d^{2}x}{dt^{2}}}&=-\omega ^{2}x+2\Omega {\dfrac {dy}{dt}}\sin \varphi ,\\{\dfrac {d^{2}y}{dt^{2}}}&=-\omega ^{2}y-2\Omega {\dfrac {dx}{dt}}\sin \varphi .\end{aligned}}}
Switching to complex coordinates z = x + iy , the equations read d 2 z d t 2 + 2 i Ω d z d t sin φ + ω 2 z = 0. {\displaystyle {\frac {d^{2}z}{dt^{2}}}+2i\Omega {\frac {dz}{dt}}\sin \varphi +\omega ^{2}z=0.}
To first order in Ω / ω , this equation has the solution z = e − i Ω sin φ t ( c 1 e i ω t + c 2 e − i ω t ) . {\displaystyle z=e^{-i\Omega \sin \varphi t}\left(c_{1}e^{i\omega t}+c_{2}e^{-i\omega t}\right).}
If time is measured in days, then Ω = 2 π and the pendulum rotates by an angle of −2 π sin φ during one day.
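A short numerical check of this result, integrating the complex equation of motion above with a fourth-order Runge-Kutta scheme (a sketch with illustrative parameter values; numpy assumed; time in days so that Ω = 2π):

```python
import numpy as np

Omega = 2 * np.pi                 # Earth's rotation: one turn per day
phi = np.radians(48.85)           # latitude of Paris
omega = 2 * np.pi * 100           # pendulum frequency: 100 swings/day (illustrative, >> Omega)

def rhs(y):                       # y = (z, dz/dt), from the complex equation above
    z, v = y
    return np.array([v, -(omega**2) * z - 2j * Omega * np.sin(phi) * v])

y = np.array([1.0 + 0j, 0.0 + 0j])   # released along the x axis, at rest
dt, steps = 1e-5, 100_000            # integrate over exactly one day
for _ in range(steps):               # classical RK4
    k1 = rhs(y); k2 = rhs(y + dt / 2 * k1); k3 = rhs(y + dt / 2 * k2); k4 = rhs(y + dt * k3)
    y = y + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

measured = np.angle(y[0]) % (2 * np.pi)                # orientation of the swing plane
predicted = (-2 * np.pi * np.sin(phi)) % (2 * np.pi)   # the -2*pi*sin(phi) of the text
print(measured, predicted)    # agree closely (omega was chosen as a whole number of cycles/day)
```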
A second example is linearly polarized light entering a single-mode optical fiber . Suppose the fiber traces out some path in space, and the light exits the fiber in the same direction as it entered. Then compare the initial and final polarizations. In semiclassical approximation the fiber functions as a waveguide , and the momentum of the light is at all times tangent to the fiber. The polarization can be thought of as an orientation perpendicular to the momentum. As the fiber traces out its path, the momentum vector of the light traces out a path on the sphere in momentum space . The path is closed, since initial and final directions of the light coincide, and the polarization is a vector tangent to the sphere. Going to momentum space is equivalent to taking the Gauss map . There are no forces that could make the polarization turn, just the constraint to remain tangent to the sphere. Thus the polarization undergoes parallel transport , and the phase shift is given by the enclosed solid angle (times the spin, which in case of light is 1).
A stochastic pump is a classical stochastic system that responds with nonzero, on average, currents to periodic changes of parameters.
The stochastic pump effect can be interpreted in terms of a geometric phase in evolution of the moment generating function of stochastic currents. [ 11 ]
The geometric phase can be evaluated exactly for a spin- 1 ⁄ 2 particle in a magnetic field. [ 1 ]
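For orientation, the standard worked result (Berry's original example, stated here without derivation) is the following: if the magnetic field direction traces a closed loop C on the unit sphere, the eigenstate with spin projection m = ±1/2 along the field acquires γ ± ( C ) = ∓ 1 2 Ω ( C ) , {\displaystyle \gamma _{\pm }(C)=\mp {\tfrac {1}{2}}\Omega (C),} where Ω ( C ) {\displaystyle \Omega (C)} is the solid angle the loop subtends at the origin. For a field precessing on a cone of half-angle θ this gives Ω = 2 π ( 1 − cos θ ) {\displaystyle \Omega =2\pi (1-\cos \theta )} and hence γ ± = ∓ π ( 1 − cos θ ) {\displaystyle \gamma _{\pm }=\mp \pi (1-\cos \theta )} .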
While Berry's formulation was originally defined for linear Hamiltonian systems, it was soon realized by Ning and Haken [ 12 ] that a similar geometric phase can be defined for entirely different systems such as nonlinear dissipative systems that possess certain cyclic attractors. They showed that such cyclic attractors exist in a class of nonlinear dissipative systems with certain symmetries. [ 13 ] There are several important aspects of this generalization of Berry's phase: 1) Instead of the parameter space for the original Berry phase, this Ning-Haken generalization is defined in phase space; 2) Instead of the adiabatic evolution in a quantum mechanical system, the evolution of the system in phase space need not be adiabatic. There is no restriction on the time scale of the temporal evolution; 3) Instead of a Hermitian system or a non-Hermitian system with linear damping, systems can be generally nonlinear and non-Hermitian.
There are several ways to compute the geometric phase in molecules within the Born–Oppenheimer framework. One way is through the "non-adiabatic coupling M × M {\displaystyle M\times M} matrix" defined by τ i j μ = ⟨ ψ i | ∂ μ ψ j ⟩ , {\displaystyle \tau _{ij}^{\mu }=\langle \psi _{i}|\partial ^{\mu }\psi _{j}\rangle ,} where ψ i {\displaystyle \psi _{i}} is the adiabatic electronic wave function, depending on the nuclear parameters R μ {\displaystyle R_{\mu }} . The nonadiabatic coupling can be used to define a loop integral, analogous to a Wilson loop (1974) in field theory, developed independently for the molecular framework by M. Baer (1975, 1980, 2000). Given a closed loop Γ {\displaystyle \Gamma } , parameterized by R μ ( t ) , {\displaystyle R_{\mu }(t),} where t ∈ [ 0 , 1 ] {\displaystyle t\in [0,1]} is a parameter and R μ ( t + 1 ) = R μ ( t ) {\displaystyle R_{\mu }(t+1)=R_{\mu }(t)} , the D -matrix is given by D [ Γ ] = P ^ e ∮ Γ τ μ d R μ {\displaystyle D[\Gamma ]={\hat {P}}e^{\oint _{\Gamma }\tau ^{\mu }\,dR_{\mu }}} (here P ^ {\displaystyle {\hat {P}}} is a path-ordering symbol). It can be shown that once M {\displaystyle M} is large enough (i.e. a sufficient number of electronic states is considered), this matrix is diagonal, with the diagonal elements equal to e i β j , {\displaystyle e^{i\beta _{j}},} where β j {\displaystyle \beta _{j}} are the geometric phases associated with the loop for the j {\displaystyle j} -th adiabatic electronic state.
For time-reversal symmetrical electronic Hamiltonians the geometric phase reflects the number of conical intersections encircled by the loop. More accurately, e i β j = ( − 1 ) N j , {\displaystyle e^{i\beta _{j}}=(-1)^{N_{j}},} where N j {\displaystyle N_{j}} is the number of conical intersections involving the adiabatic state ψ j {\displaystyle \psi _{j}} encircled by the loop Γ . {\displaystyle \Gamma .}
An alternative to the D -matrix approach would be a direct calculation of the Pancharatnam phase. This is especially useful if one is interested only in the geometric phases of a single adiabatic state. In this approach, one takes a number N + 1 {\displaystyle N+1} of points ( n = 0 , … , N ) {\displaystyle (n=0,\dots ,N)} along the loop R ( t n ) {\displaystyle R(t_{n})} with t 0 = 0 {\displaystyle t_{0}=0} and t N = 1 , {\displaystyle t_{N}=1,} then using only the j -th adiabatic states ψ j [ R ( t n ) ] {\displaystyle \psi _{j}[R(t_{n})]} computes the Pancharatnam product of overlaps: I j ( Γ , N ) = ∏ n = 0 N − 1 ⟨ ψ j [ R ( t n ) ] | ψ j [ R ( t n + 1 ) ] ⟩ . {\displaystyle I_{j}(\Gamma ,N)=\prod \limits _{n=0}^{N-1}\langle \psi _{j}[R(t_{n})]|\psi _{j}[R(t_{n+1})]\rangle .}
In the limit N → ∞ {\displaystyle N\to \infty } one has (see Ryb & Baer 2004 for explanation and some applications) I j ( Γ , N ) → e i β j . {\displaystyle I_{j}(\Gamma ,N)\to e^{i\beta _{j}}.}
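The discrete recipe is easy to try numerically. The sketch below (numpy assumed) applies it to the analytically solvable spin-1/2 loop mentioned earlier, in which the field direction circles a cone of half-angle θ; the resulting geometric phase has magnitude half the enclosed solid angle, π(1 − cos θ), with the overall sign depending on the conventions for τ and the sense in which the loop is traversed.

```python
import numpy as np

theta = np.radians(60.0)   # cone half-angle of the field-direction loop
N = 2000                   # number of points along the loop

def eigenstate(t):
    """Spin-up eigenstate along direction (theta, phi(t) = 2*pi*t)."""
    phi = 2 * np.pi * t
    return np.array([np.cos(theta / 2), np.exp(1j * phi) * np.sin(theta / 2)])

# Pancharatnam product of overlaps <psi(t_n)|psi(t_{n+1})> around the closed loop
I = 1.0 + 0j
for n in range(N):
    I *= np.vdot(eigenstate(n / N), eigenstate((n + 1) / N))

print(np.angle(I))                  # converges to half the solid angle, pi*(1 - cos theta)
print(np.pi * (1 - np.cos(theta)))  # = pi/2 for theta = 60 degrees
```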
An electron subjected to magnetic field B {\displaystyle B} moves on a circular (cyclotron) orbit. [2] Classically, any cyclotron radius R c {\displaystyle R_{c}} is acceptable. Quantum-mechanically, only discrete energy levels ( Landau levels ) are allowed, and since R c {\displaystyle R_{c}} is related to electron's energy, this corresponds to quantized values of R c {\displaystyle R_{c}} . The energy quantization condition obtained by solving Schrödinger's equation reads, for example, E = ( n + α ) ℏ ω c , {\displaystyle E=(n+\alpha )\hbar \omega _{c},} α = 1 / 2 {\displaystyle \alpha =1/2} for free electrons (in vacuum) or E = v 2 ( n + α ) e B ℏ , α = 0 {\textstyle E=v{\sqrt {2(n+\alpha )eB\hbar }},\quad \alpha =0} for electrons in graphene , where n = 0 , 1 , 2 , … {\displaystyle n=0,1,2,\ldots } . [3] Although the derivation of these results is not difficult, there is an alternative way of deriving them, which offers in some respect better physical insight into the Landau level quantization. This alternative way is based on the semiclassical Bohr–Sommerfeld quantization condition ℏ ∮ d r ⋅ k − e ∮ d r ⋅ A + ℏ γ = 2 π ℏ ( n + 1 / 2 ) , {\displaystyle \hbar \oint d\mathbf {r} \cdot \mathbf {k} -e\oint d\mathbf {r} \cdot \mathbf {A} +\hbar \gamma =2\pi \hbar (n+1/2),} which includes the geometric phase γ {\displaystyle \gamma } picked up by the electron while it executes its (real-space) motion along the closed loop of the cyclotron orbit. [ 14 ] For free electrons, γ = 0 , {\displaystyle \gamma =0,} while γ = π {\displaystyle \gamma =\pi } for electrons in graphene. It turns out that the geometric phase is directly linked to α = 1 / 2 {\displaystyle \alpha =1/2} of free electrons and α = 0 {\displaystyle \alpha =0} of electrons in graphene.
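The two quantization results quoted above are easy to evaluate numerically; the sketch below (numpy assumed, SI units, B = 10 T, and a typical graphene Fermi velocity of about 10^6 m/s) contrasts the evenly spaced free-electron ladder with the √n-spaced graphene ladder, whose γ = π shift puts a level at zero energy.

```python
import numpy as np

hbar = 1.054571817e-34   # J s
e = 1.602176634e-19      # C
m = 9.1093837015e-31     # kg
v = 1.0e6                # m/s, Fermi velocity in graphene (typical literature value)
B = 10.0                 # T

n = np.arange(5)
omega_c = e * B / m                                     # cyclotron frequency (free electrons)
E_free = (n + 0.5) * hbar * omega_c                     # alpha = 1/2, i.e. gamma = 0
E_graphene = v * np.sqrt(2 * (n + 0.0) * e * B * hbar)  # alpha = 0, i.e. gamma = pi

print(E_free / e)       # in eV: evenly spaced levels
print(E_graphene / e)   # in eV: sqrt(n) spacing with a zero-energy level at n = 0
```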
^ For simplicity, we consider electrons confined to a plane, such as 2DEG and magnetic field perpendicular to the plane.
^ ω c = e B / m {\displaystyle \omega _{c}=eB/m} is the cyclotron frequency (for free electrons) and v {\displaystyle v} is the Fermi velocity (of electrons in graphene). | https://en.wikipedia.org/wiki/Geometric_phase |
Geometric phase analysis is a method of digital signal processing used to determine crystallographic quantities such as d-spacing or strain from high-resolution transmission electron microscope images. [ 1 ] [ 2 ] The analysis needs to be performed using a specialized computer program .
In geometric phase analysis, local changes in the periodicity of a high resolution image of a crystalline material are quantified, resulting in a two-dimensional map. Quantities which can be mapped with geometric phase analysis include interplanar distances (d-spacing), two-dimensional deformation and strain tensors and displacement vectors. This allows strain fields to be determined at very high resolution, down to the unit cell of the material. Importantly, GPA performed on images that have sub unit-cell resolution can produce erroneous results. For example, a change in composition may appear as a component of the deformation tensor, with the result that an interface appears to have a strain field associated with it when in fact there is none. [ 3 ]
Since the calculations are performed in the frequency domain, the input image, with a periodicity of the crystal lattice , must be transformed into a spatial frequency representation using a 2D Fourier transform . From a mathematical point of view, the frequency image is a complex matrix with a size equal to the original image. From a crystallographic point of view, there is an analogy between the 2D Fourier transform, the diffraction pattern and the reciprocal lattice . Each intensity peak (or power peak) in the Fourier transform corresponds to a set of crystallographic planes depicted in the original image, specifically to a sine wave with the orientation and period of those planes. A change in the phase of this sine wave indicates a change in the position of its peaks and troughs, which can be interpreted as a component of a 2D deformation tensor.
Due to the complex nature of the frequency image, it can be used to calculate amplitude and phase . Together with a vector of one crystallographic plane depicted in the image, the amplitude and phase can be used to generate a 2D map of d-spacing. [ 1 ] If two vectors of non-parallel planes are known, the method can be used to generate maps of strain and displacement. [ 2 ]
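The core phase-extraction step can be sketched in a few lines. The example below builds a synthetic fringe image with a known smooth displacement field, isolates one peak of its 2D FFT, and recovers the displacement from the local phase (numpy assumed; the field u and all parameter values are hypothetical):

```python
import numpy as np

N = 256
y, x = np.mgrid[0:N, 0:N]
g0 = 1 / 8                              # reference fringe frequency (period 8 px)
u = 0.5 * np.sin(2 * np.pi * x / N)     # hypothetical smooth displacement field (px)
img = np.cos(2 * np.pi * g0 * (x - u))  # "lattice image": fringes displaced by u

F = np.fft.fft2(img)
mask = np.zeros_like(F)
cx = round(g0 * N)                      # column index of the +g Bragg peak
mask[:, cx - 4:cx + 5] = 1              # isolate that peak and its nearby sidebands
complex_img = np.fft.ifft2(F * mask)    # complex image carrying amplitude and phase

raw = np.angle(complex_img) - 2 * np.pi * g0 * x   # subtract the reference lattice phase
geometric_phase = np.angle(np.exp(1j * raw))       # wrap to (-pi, pi]
u_recovered = -geometric_phase / (2 * np.pi * g0)  # P_g = -2*pi*g*u, so u = -P/(2*pi*g)
strain_xx = np.gradient(u_recovered, axis=1)       # local strain from the phase gradient
print(np.abs(u_recovered - u).max())               # small: the displacement map matches u
```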
In order to perform geometric phase analysis, a computer tool is needed. Firstly, manual evaluation of the transforms between the spatial and frequency domains would be highly impractical. Secondly, a vector of a crystallographic plane is an important input parameter, and the analysis is sensitive to the accuracy of its localization. Therefore, the accuracy and repeatability of the analysis requires precise localization of diffraction spots.
The required functionalities are available in several software packages including Strain++ [ 4 ] and the crystallographic suite CrysTBox . The latter offers an interactive geometric phase analysis called gpaGUI . In both packages it is possible to locate peaks in the Fourier transform with sub-pixel precision (e.g. diffractGUI ). [ 5 ] | https://en.wikipedia.org/wiki/Geometric_phase_analysis |
Problems of the following type, and their solution techniques, were first studied in the 18th century, and the general topic became known as geometric probability .
For mathematical development see the concise monograph by Solomon. [ 1 ]
Since the late 20th century, the topic has split into two topics with different emphases. Integral geometry sprang from the principle that the mathematically natural probability models are those that are invariant under certain transformation groups. This topic emphasises systematic development of formulas for calculating expected values associated with the geometric objects derived from random points, and can in part be viewed as a sophisticated branch of multivariate calculus . Stochastic geometry emphasises the random geometrical objects themselves. For instance: different models for random lines or for random tessellations of the plane; random sets formed by making points of a spatial Poisson process be (say) centers of discs. | https://en.wikipedia.org/wiki/Geometric_probability
A geometric progression , also known as a geometric sequence , is a mathematical sequence of non-zero numbers where each term after the first is found by multiplying the previous one by a fixed number called the common ratio . For example, the sequence 2, 6, 18, 54, ... is a geometric progression with a common ratio of 3. Similarly 10, 5, 2.5, 1.25, ... is a geometric sequence with a common ratio of 1/2.
Examples of a geometric sequence are powers r k of a fixed non-zero number r , such as 2 k and 3 k . The general form of a geometric sequence is a , a r , a r 2 , a r 3 , … {\displaystyle a,\ ar,\ ar^{2},\ ar^{3},\ \ldots }
where r is the common ratio and a is the initial value.
The sum of a geometric progression's terms is called a geometric series .
The n th term of a geometric sequence with initial value a = a 1 and common ratio r is given by a n = a r n − 1 , {\displaystyle a_{n}=a\,r^{n-1},}
and in general a n = a m r n − m . {\displaystyle a_{n}=a_{m}\,r^{n-m}.}
Geometric sequences satisfy the linear recurrence relation a n = r a n − 1 {\displaystyle a_{n}=ra_{n-1}} for every integer n ≥ 2. {\displaystyle n\geq 2.}
This is a first order, homogeneous linear recurrence with constant coefficients .
Geometric sequences also satisfy the nonlinear recurrence relation
a n = a n − 1 2 / a n − 2 {\displaystyle a_{n}=a_{n-1}^{2}/a_{n-2}} for every integer n > 2. {\displaystyle n>2.}
This is a second order nonlinear recurrence with constant coefficients.
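Both recurrences are easy to check numerically; a quick sketch for the sequence 2, 6, 18, 54, ... (a = 2, r = 3):

```python
a, r, n_terms = 2, 3, 8
seq = [a * r**k for k in range(n_terms)]           # 2, 6, 18, 54, ...

for n in range(1, n_terms):
    assert seq[n] == r * seq[n - 1]                # linear: a_n = r * a_{n-1}
for n in range(2, n_terms):
    assert seq[n] * seq[n - 2] == seq[n - 1] ** 2  # nonlinear: a_n = a_{n-1}^2 / a_{n-2}
print(seq)
```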
When the common ratio of a geometric sequence is positive, the sequence's terms will all share the sign of the first term. When the common ratio of a geometric sequence is negative, the sequence's terms alternate between positive and negative; this is called an alternating sequence. For instance the sequence 1, −3, 9, −27, 81, −243, ... is an alternating geometric sequence with an initial value of 1 and a common ratio of −3. When the initial term and common ratio are complex numbers, the terms' complex arguments follow an arithmetic progression .
If the absolute value of the common ratio is smaller than 1, the terms will decrease in magnitude and approach zero via an exponential decay . If the absolute value of the common ratio is greater than 1, the terms will increase in magnitude and approach infinity via an exponential growth . If the absolute value of the common ratio equals 1, the terms will stay the same size indefinitely, though their signs or complex arguments may change.
Geometric progressions show exponential growth or exponential decline, as opposed to arithmetic progressions showing linear growth or linear decline. This comparison was taken by T.R. Malthus as the mathematical foundation of his An Essay on the Principle of Population . The two kinds of progression are related through the exponential function and the logarithm : exponentiating each term of an arithmetic progression yields a geometric progression, while taking the logarithm of each term in a geometric progression yields an arithmetic progression. The relation that the logarithm provides between a geometric progression in its argument and an arithmetic progression of values, prompted A. A. de Sarasa to make the connection of Saint-Vincent's quadrature and the tradition of logarithms in prosthaphaeresis , leading to the term "hyperbolic logarithm", a synonym for natural logarithm.
In mathematics , a geometric series is a series summing the terms of an infinite geometric sequence , in which the ratio of consecutive terms is constant. For example, the series 1 2 + 1 4 + 1 8 + ⋯ {\displaystyle {\tfrac {1}{2}}+{\tfrac {1}{4}}+{\tfrac {1}{8}}+\cdots } is a geometric series with common ratio 1 2 {\displaystyle {\tfrac {1}{2}}} , which converges to the sum of 1 {\displaystyle 1} . Each term in a geometric series is the geometric mean of the term before it and the term after it, in the same way that each term of an arithmetic series is the arithmetic mean of its neighbors.
While Greek philosopher Zeno's paradoxes about time and motion (5th century BCE) have been interpreted as involving geometric series, such series were formally studied and applied a century or two later by Greek mathematicians , for example used by Archimedes to calculate the area inside a parabola (3rd century BCE). Today, geometric series are used in mathematical finance , calculating areas of fractals, and various computer science topics.
The infinite product of a geometric progression is the product of all of its terms. The partial product of a geometric progression up to the term with power n {\displaystyle n} is
∏ k = 0 n a r k = a n + 1 r n ( n + 1 ) / 2 . {\displaystyle \prod _{k=0}^{n}ar^{k}=a^{n+1}r^{n(n+1)/2}.}
When a {\displaystyle a} and r {\displaystyle r} are positive real numbers, this is equivalent to taking the geometric mean of the partial progression's first and last individual terms and then raising that mean to the power given by the number of terms n + 1. {\displaystyle n+1.}
∏ k = 0 n a r k = a n + 1 r n ( n + 1 ) / 2 = ( a 2 r n ) n + 1 for a ≥ 0 , r ≥ 0. {\displaystyle \prod _{k=0}^{n}ar^{k}=a^{n+1}r^{n(n+1)/2}=({\sqrt {a^{2}r^{n}}})^{n+1}{\text{ for }}a\geq 0,r\geq 0.}
This corresponds to a similar property of sums of terms of a finite arithmetic sequence : the sum of an arithmetic sequence is the number of terms times the arithmetic mean of the first and last individual terms. This correspondence follows the usual pattern that any arithmetic sequence is a sequence of logarithms of terms of a geometric sequence and any geometric sequence is a sequence of exponentiations of terms of an arithmetic sequence. Sums of logarithms correspond to products of exponentiated values.
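A numerical check of the closed form and its geometric-mean rewriting, for sample values of a and r:

```python
import math

a, r, n = 3.0, 1.5, 6

direct = math.prod(a * r**k for k in range(n + 1))   # a * ar * ar^2 * ... * ar^n
closed = a**(n + 1) * r**(n * (n + 1) // 2)          # a^{n+1} r^{n(n+1)/2}
gmean_form = math.sqrt(a * a * r**n) ** (n + 1)      # (sqrt(a^2 r^n))^{n+1}

assert math.isclose(direct, closed) and math.isclose(closed, gmean_form)
print(direct)
```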
Let P n {\displaystyle P_{n}} represent the product up to power n {\displaystyle n} . Written out in full, P n = a ⋅ a r ⋅ a r 2 ⋯ a r n . {\displaystyle P_{n}=a\cdot ar\cdot ar^{2}\cdots ar^{n}.}
Carrying out the multiplications and gathering like terms, P n = a n + 1 r 1 + 2 + ⋯ + n . {\displaystyle P_{n}=a^{n+1}\,r^{1+2+\cdots +n}.}
The exponent of r is the sum of an arithmetic sequence. Substituting the formula for that sum, P n = a n + 1 r n ( n + 1 ) 2 , {\displaystyle P_{n}=a^{n+1}\,r^{\frac {n(n+1)}{2}},}
which concludes the proof.
One can rearrange this expression to P n = ( a r n 2 ) n + 1 . {\displaystyle P_{n}=\left(ar^{\frac {n}{2}}\right)^{n+1}.}
Rewriting a as a 2 {\displaystyle \textstyle {\sqrt {a^{2}}}} and r as r 2 {\displaystyle \textstyle {\sqrt {r^{2}}}} , though this is not valid for a < 0 {\displaystyle a<0} or r < 0 , {\displaystyle r<0,} gives P n = ( a 2 r n ) n + 1 , {\displaystyle P_{n}=\left({\sqrt {a^{2}r^{n}}}\right)^{n+1},}
which is the formula in terms of the geometric mean.
A clay tablet from the Early Dynastic Period in Mesopotamia (c. 2900 – c. 2350 BC), identified as MS 3047, contains a geometric progression with base 3 and multiplier 1/2. It has been suggested to be Sumerian , from the city of Shuruppak . It is the only known record of a geometric progression from before the time of old Babylonian mathematics beginning in 2000 BC. [ 1 ]
Books VIII and IX of Euclid 's Elements analyze geometric progressions (such as the powers of two , see the article for details) and give several of their properties. [ 2 ] | https://en.wikipedia.org/wiki/Geometric_progression |
In discrete geometry , geometric rigidity is a theory for determining if a geometric constraint system (GCS) has finitely many d {\displaystyle d} -dimensional solutions, or frameworks , in some metric space . A framework of a GCS is rigid in d {\displaystyle d} -dimensions, for a given d {\displaystyle d} if it is an isolated solution of the GCS, factoring out the set of trivial motions, or isometric group , of the metric space, e.g. translations and rotations in Euclidean space . In other words, a rigid framework ( G , p ) {\displaystyle (G,p)} of a GCS has no nearby framework of the GCS that is reachable via a non-trivial continuous motion of ( G , p ) {\displaystyle (G,p)} that preserves the constraints of the GCS. Structural rigidity is another theory of rigidity that concerns generic frameworks , i.e., frameworks whose rigidity properties are representative of all frameworks with the same constraint graph . Results in geometric rigidity apply to all frameworks; in particular, to non-generic frameworks.
Geometric rigidity was first explored by Euler , who conjectured that all polyhedra in 3 {\displaystyle 3} -dimensions are rigid. Much work has gone into proving the conjecture, leading to many interesting results discussed below. However, a counterexample was eventually found. There are also some generic rigidity results with no combinatorial components, so they are related to both geometric and structural rigidity.
The definitions below, which can be found in [ 1 ] , are with respect to bar-joint frameworks in d {\displaystyle d} -dimensional Euclidean space , and will be generalized for other frameworks and metric spaces as needed. Consider a linkage ( G , δ ) {\displaystyle (G,\delta )} , i.e. a constraint graph G = ( V , E ) {\displaystyle G=(V,E)} with distance constraints δ {\displaystyle \delta } assigned to its edges, and the configuration space C ( G , δ ) {\displaystyle {\mathcal {C}}(G,\delta )} consisting of frameworks ( G , p ) {\displaystyle (G,p)} of ( G , δ ) {\displaystyle (G,\delta )} . The frameworks in C ( G , δ ) {\displaystyle {\mathcal {C}}(G,\delta )} consist of maps p : V → R d {\displaystyle p:V\rightarrow \mathbb {R} ^{d}} that satisfy
‖ p ( u ) − p ( v ) ‖ 2 = δ u v , {\displaystyle \|p(u)-p(v)\|^{2}=\delta _{uv},}
for all edges ( u , v ) {\displaystyle (u,v)} of G {\displaystyle G} . In other words, p {\displaystyle p} is a placement of the vertices of G {\displaystyle G} as points in d {\displaystyle d} -dimensions that satisfy all distance constraints δ {\displaystyle \delta } . The configuration space C ( G , δ ) {\displaystyle {\mathcal {C}}(G,\delta )} is an algebraic set .
Continuous and trivial motions. A continuous motion is a continuous path in C ( G , δ ) {\displaystyle {\mathcal {C}}(G,\delta )} that describes the physical motion between two frameworks of ( G , δ ) {\displaystyle (G,\delta )} that preserves all constraints. A trivial motion is a continuous motion resulting from the ( d + 1 2 ) {\displaystyle d+1 \choose 2} Euclidean isometries , i.e. translations and rotations. In general, any metric space has a set of trivial motions coming from the isometry group of the space.
Local rigidity. A framework of a GCS is locally rigid, or just rigid, if all its continuous motions are trivial.
Testing for local rigidity is co-NP hard.
Rigidity map. The rigidity map ρ : R d | V | → R | E | {\displaystyle \rho :\mathbb {R} ^{d|V|}\rightarrow \mathbb {R} ^{|E|}} takes a framework ( G , p ) {\displaystyle (G,p)} and outputs the squared-distances ‖ p ( u ) − p ( v ) ‖ 2 {\displaystyle \|p(u)-p(v)\|^{2}} between all pairs of points that are connected by an edge.
Rigidity matrix. The Jacobian , or derivative , of the rigidity map yields a system of linear equations of the form
( p ( u ) − p ( v ) ) ⋅ ( p ′ ( v ) − p ′ ( u ) ) = 0 , {\displaystyle (p(u)-p(v))\cdot (p'(v)-p'(u))=0,}
for all edges ( u , v ) {\displaystyle (u,v)} of G {\displaystyle G} . The rigidity matrix R ( G , p ) {\displaystyle R(G,p)} is an | E | × d | V | {\displaystyle |E|\times d|V|} matrix that encodes the information in these equations. Each edge of G {\displaystyle G} corresponds to a row of R ( G , p ) {\displaystyle R(G,p)} and each vertex corresponds to d {\displaystyle d} columns of R ( G , p ) {\displaystyle R(G,p)} . The row corresponding to the edge ( u , v ) {\displaystyle (u,v)} is defined as follows.
[ … columns for u … columns for v … ⋮ ⋮ row for ( u , v ) 0 … 0 p ( u ) − p ( v ) 0 … 0 p ( v ) − p ( u ) 0 … 0 ⋮ ⋮ ] {\displaystyle {\begin{bmatrix}\,&\dots &{\text{columns for }}u&\dots &{\text{columns for }}v&\dots \\\vdots &\,&\,&\vdots &\,&\,\\{\text{row for }}(u,v)&0\dots 0&p(u)-p(v)&0\dots 0&p(v)-p(u)&0\dots 0\\\vdots &\,&\,&\vdots &\,&\,\end{bmatrix}}}
Infinitesimal motion. An infinitesimal motion is an assignment p ′ : V → R d {\displaystyle p':V\rightarrow \mathbb {R} ^{d}} of velocities to the vertices of a framework ( G , p ) {\displaystyle (G,p)} such that R ( G , p ) p ′ = 0 {\displaystyle R(G,p)p'=0} . Hence, the kernel of the rigidity matrix is the space of infinitesimal motions. A trivial infinitesimal motion is defined analogously to a trivial continuous motion.
Stress. A stress is an assignment ω : E → R {\displaystyle \omega :E\rightarrow \mathbb {R} } to the edges of a framework ( G , p ) {\displaystyle (G,p)} . A stress is proper if its entries are nonnegative and is a self stress if it satisfies ω R ( G , p ) = 0 {\displaystyle \omega R(G,p)=0} . A stress satisfying this equation is also called a resolvable stress, equilibrium stress, prestress, or sometimes just a stress.
Stress Matrix. For a stress ω {\displaystyle \omega } applied to the edges of a framework ( G , p ) {\displaystyle (G,p)} with the constraint graph G = ( V , E ) {\displaystyle G=(V,E)} , define the | V | × | V | {\displaystyle |V|\times |V|} stress matrix Ω {\displaystyle \Omega } as
Ω u v = { − ω u v if u ≠ v ∑ v ∈ V ω u v otherwise {\displaystyle \Omega _{uv}={\begin{cases}-\omega _{uv}&{\text{if }}u\neq v\\\sum _{v\in V}{\omega _{uv}}&{\text{otherwise}}\end{cases}}} .
It is easily verified that for any two p , q ∈ R d | V | {\displaystyle p,q\in \mathbb {R} ^{d|V|}} and any stress ω {\displaystyle \omega } ,
ω R ( p ) q = p T Ω q . {\displaystyle \omega R(p)q=p^{T}\Omega q.}
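This identity can be checked numerically. Below is a small Python sketch (using NumPy; the triangle framework, stress values, and vectors are illustrative examples, not data from the text) that builds the rigidity and stress matrices as defined above and evaluates p T Ω q coordinate-wise as a trace:

```python
import numpy as np

# Toy verification of the identity  omega R(p) q = p^T Omega q
# for a triangle framework in the plane (all values are examples).
V, E, d = 3, [(0, 1), (1, 2), (0, 2)], 2
P = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, 1.0]])    # placement p
Q = np.array([[0.2, -0.1], [0.0, 0.3], [-0.4, 0.5]])  # arbitrary q
omega = np.array([1.0, -2.0, 0.5])                    # a stress on the edges

# Rigidity matrix: one row per edge, d columns per vertex.
R = np.zeros((len(E), d * V))
for row, (u, v) in enumerate(E):
    R[row, d*u:d*u+d] = P[u] - P[v]
    R[row, d*v:d*v+d] = P[v] - P[u]

# Stress matrix: -omega_uv off the diagonal, row sums of omega on it.
Omega = np.zeros((V, V))
for (u, v), w in zip(E, omega):
    Omega[u, v] -= w
    Omega[v, u] -= w
    Omega[u, u] += w
    Omega[v, v] += w

lhs = omega @ R @ Q.flatten()
rhs = np.trace(P.T @ Omega @ Q)  # coordinate-wise reading of p^T Omega q
print(np.isclose(lhs, rhs))     # True
```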
The information in this section can be found in [ 1 ] . The rigidity matrix can be viewed as a linear transformation from R d | V | {\displaystyle \mathbb {R} ^{d|V|}} to R | E | {\displaystyle \mathbb {R} ^{|E|}} . The domain of this transformation is the set of d | V | × 1 {\displaystyle d|V|\times 1} column vectors, called velocity or displacement vectors, denoted by p ′ {\displaystyle p'} , and the image is the set of | E | × 1 {\displaystyle |E|\times 1} edge distortion vectors, denoted by e ′ {\displaystyle e'} . The entries of the vector p ′ {\displaystyle p'} are velocities assigned to the vertices of a framework ( G , p ) {\displaystyle (G,p)} , and the equation R ( G , p ) p ′ = e ′ {\displaystyle R(G,p)p'=e'} describes how the edges are compressed or stretched as a result of these velocities.
The dual linear transformation leads to a different physical interpretation. The codomain of the linear transformation is the set of | E | × 1 {\displaystyle |E|\times 1} column vectors, or stresses, denoted by ω {\displaystyle \omega } , that apply a stress ω u v {\displaystyle \omega _{uv}} to each edge ( u , v ) {\displaystyle (u,v)} of a framework ( G , p ) {\displaystyle (G,p)} . The stress ω u v {\displaystyle \omega _{uv}} applies forces to the vertices of ( u , v ) {\displaystyle (u,v)} that are equal in magnitude but opposite in direction, depending on whether ( u , v ) {\displaystyle (u,v)} is being compressed or stretched by ω u v {\displaystyle \omega _{uv}} . Consider the equation ω T R ( p ) = f , {\displaystyle \omega ^{T}R(p)=f,} where f {\displaystyle f} is a 1 × d | V | {\displaystyle 1\times d|V|} vector. The terms on the left corresponding to the d {\displaystyle d} columns of a vertex v {\displaystyle v} in R ( p ) {\displaystyle R(p)} yield the entry in f {\displaystyle f} that is the net force f v {\displaystyle f_{v}} applied to v {\displaystyle v} by the stresses on edges incident to v {\displaystyle v} . Hence, the domain of the dual linear transformation is the set of stresses on edges and the image is the set of net forces on vertices. A net force f {\displaystyle f} can be viewed as being able to counteract, or resolve, the force − f {\displaystyle -f} , so the image of the dual linear transformation is really the set of resolvable forces.
The relationship between these dual linear transformations is described by the work done by a velocity vector p ′ {\displaystyle p'} under a net force f {\displaystyle f} :
W = f p ′ = ( ω R ( p ) ) p ′ = ω ( R ( p ) p ′ ) = ω e ′ , {\displaystyle W=fp'=(\omega R(p))p'=\omega (R(p)p')=\omega e',}
where ω {\displaystyle \omega } is a stress and e ′ {\displaystyle e'} is an edge distortion. In terms of the stress matrix, this equation above becomes W = p T Ω p ′ {\displaystyle W=p^{T}\Omega p'} .
This section covers the various types of rigidity and how they are related. For more information, see [ 1 ] .
Infinitesimal rigidity is the strongest form of rigidity that restricts a framework from admitting even non-trivial infinitesimal motions. It is also called first-order rigidity because of its relation to the rigidity matrix. More precisely, consider the linear equations
( p ( u ) − p ( v ) ) ⋅ ( p ′ ( u ) − p ′ ( v ) ) = 0 {\displaystyle (p(u)-p(v))\cdot (p'(u)-p'(v))=0}
resulting from the equation R ( G , p ) p ′ = 0 {\displaystyle R(G,p)p'=0} . These equations state that the projections of the velocities p ′ ( u ) {\displaystyle p'(u)} and p ′ ( v ) {\displaystyle p'(v)} onto the edge ( u , v ) {\displaystyle (u,v)} cancel out. Each of the following statements is sufficient for a d {\displaystyle d} -dimensional framework to be infinitesimally rigid in d {\displaystyle d} -dimensions: the kernel of its rigidity matrix is exactly the space of trivial infinitesimal motions; or, provided the points of the framework affinely span R d {\displaystyle \mathbb {R} ^{d}} , the rank of its rigidity matrix is d | V | − ( d + 1 2 ) {\displaystyle d|V|-{d+1 \choose 2}} .
In general, any type of framework is infinitesimally rigid in d {\displaystyle d} -dimensions if the space of its infinitesimal motions is the space of trivial infinitesimal motions of the metric space. The following theorem by Asimow and Roth relates infinitesimal rigidity and rigidity.
Theorem. [ 2 ] [ 3 ] If a framework is infinitesimally rigid, then it is rigid.
The converse of this theorem is not true in general; however, it is true for generic rigid frameworks (with respect to infinitesimal rigidity), see combinatorial characterizations of generically rigid graphs .
A d {\displaystyle d} -dimensional framework ( G , p ) {\displaystyle (G,p)} is statically rigid in d {\displaystyle d} -dimensions if every force vector f {\displaystyle f} on the vertices of ( G , p ) {\displaystyle (G,p)} that is orthogonal to the trivial motions can be resolved by the net force of some proper stress ω {\displaystyle \omega } ; or written mathematically, for every such force vector f {\displaystyle f} there exists a proper stress ω {\displaystyle \omega } such that
f + ω R ( p ) = 0. {\displaystyle f+\omega R(p)=0.}
Equivalently, the rank of R ( p ) {\displaystyle R(p)} must be d | V | − ( d + 1 2 ) {\displaystyle d|V|-{d+1 \choose 2}} . Static rigidity is equivalent to infinitesimal rigidity.
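The rank condition lends itself to a direct computational test. Here is a short Python sketch (using NumPy; the helper names and the triangle example are illustrative choices, and the test assumes the points affinely span R d, as the rank formula requires):

```python
import numpy as np
from math import comb

def rigidity_matrix(P, E):
    """Rigidity matrix of a bar-joint framework: |E| x d|V|."""
    n, d = P.shape
    R = np.zeros((len(E), d * n))
    for row, (u, v) in enumerate(E):
        R[row, d*u:d*u+d] = P[u] - P[v]
        R[row, d*v:d*v+d] = P[v] - P[u]
    return R

def is_infinitesimally_rigid(P, E):
    """Rank test: rank R = d|V| - C(d+1, 2), assuming the points span R^d."""
    n, d = P.shape
    return np.linalg.matrix_rank(rigidity_matrix(P, E)) == d * n - comb(d + 1, 2)

# A triangle in the plane is rigid; remove one edge and it flexes.
P = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, 1.0]])
print(is_infinitesimally_rigid(P, [(0, 1), (1, 2), (0, 2)]))  # True
print(is_infinitesimally_rigid(P, [(0, 1), (1, 2)]))          # False
```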
Second-order rigidity is weaker than infinitesimal and static rigidity. The second derivative of the rigidity map consists of equations of the form
( p ( u ) − p ( v ) ) ⋅ ( p ″ ( u ) − p ″ ( v ) ) + ( p ′ ( u ) − p ′ ( v ) ) ⋅ ( p ′ ( u ) − p ′ ( v ) ) = 0. {\displaystyle (p(u)-p(v))\cdot (p''(u)-p''(v))+(p'(u)-p'(v))\cdot (p'(u)-p'(v))=0.}
The vector p ″ {\displaystyle p''} assigns an acceleration to each vertex of a framework ( G , p ) {\displaystyle (G,p)} . These equations can be written in terms of matrices: R ( p ) p ″ = − R ( p ′ ) p ′ {\displaystyle R(p)p''=-R(p')p'} ,
where R ( p ′ ) {\displaystyle R(p')} is defined similarly to the rigidity matrix. Each of the following statements is sufficient for a d {\displaystyle d} -dimensional framework to be second-order rigid in d {\displaystyle d} -dimensions:
The third statement shows that for each such p ′ {\displaystyle p'} , R ( p ′ ) p ′ {\displaystyle R(p')p'} is not in the column span of R ( p ) {\displaystyle R(p)} , i.e., it is not an edge distortion resulting from p ′ {\displaystyle p'} . This follows from the Fredholm alternative : since the column span of R ( p ) {\displaystyle R(p)} is orthogonal to the kernel of R ( p ) T {\displaystyle R(p)^{T}} , i.e., the set of equilibrium stresses, either R ( p ) p ″ = − R ( p ′ ) p ′ {\displaystyle R(p)p''=-R(p')p'} for some acceleration p ″ {\displaystyle p''} or there is an equilibrium stress ω {\displaystyle \omega } satisfying the third condition. The third condition can be written in terms of the stress matrix: p ′ T Ω p ′ > 0 {\displaystyle p'^{T}\Omega p'>0} . Solving for ω {\displaystyle \omega } is a non-linear problem in p ′ {\displaystyle p'} with no known efficient algorithm. [ 4 ]
Prestress stability is weaker than infinitesimal and static rigidity but stronger than second-order rigidity. Consider the third sufficient condition for second-order rigidity. A d {\displaystyle d} -dimensional framework ( G , p ) {\displaystyle (G,p)} is prestress stable if there exists an equilibrium stress ω {\displaystyle \omega } such that for all non-trivial velocities p ′ {\displaystyle p'} , p ′ T Ω p ′ > 0 {\displaystyle p'^{T}\Omega p'>0} . Prestress stability can be verified via semidefinite programming techniques. [ 4 ]
A d {\displaystyle d} -dimensional framework ( G , p ) {\displaystyle (G,p)} of a linkage ( G , δ ) {\displaystyle (G,\delta )} is globally rigid in d {\displaystyle d} -dimensions if all frameworks in the configuration space C ( G , δ ) {\displaystyle {\mathcal {C}}(G,\delta )} are equivalent up to trivial motions, i.e., factoring out the trivial motions, there is only one framework of ( G , δ ) {\displaystyle (G,\delta )} .
Theorem. Global rigidity is a generic property of graphs.
A d {\displaystyle d} -dimensional framework ( G , p ) {\displaystyle (G,p)} is minimally rigid in d {\displaystyle d} -dimensions if ( G , p ) {\displaystyle (G,p)} is rigid and removing any edge from ( G , p ) {\displaystyle (G,p)} results in a framework that is not rigid.
There are two types of redundant rigidity: vertex-redundant and edge-redundant rigidity. A d {\displaystyle d} -dimensional framework ( G , p ) {\displaystyle (G,p)} is edge-redundantly rigid in d {\displaystyle d} -dimensions if ( G , p ) {\displaystyle (G,p)} is rigid and removing any edge from ( G , p ) {\displaystyle (G,p)} results in another rigid framework. Vertex-redundant rigidity is defined analogously.
This section concerns the rigidity of polyhedra in 3 {\displaystyle 3} -dimensions, see polyhedral systems for a definition of this type of GCS. A polyhedron is rigid if its underlying bar-joint framework is rigid. One of the earliest results for rigidity was a conjecture by Euler in 1766. [ 5 ]
Conjecture. [ 5 ] A closed spatial figure allows no changes, as long as it is not ripped apart.
Much work has gone into proving this conjecture, which has now been proved false by counterexample. [ 6 ] The first major result is by Cauchy in 1813 and is known as Cauchy's theorem .
Cauchy's Theorem. [ 7 ] If there is an isometry between the surfaces of two strictly convex polyhedra which is an isometry on each of the faces, then the two polyhedra are congruent.
There were minor errors with Cauchy's proof. The first complete proof was given in [ 8 ] , and a slightly generalized result was given in [ 9 ] . The following corollary of Cauchy's theorem relates this result to rigidity.
Corollary. The 2-skeleton of a strictly convex polyhedral framework in 3 {\displaystyle 3} -dimensions is rigid.
In other words, if we treat the convex polyhedra as a set of rigid plates, i.e., as a variant of a body-bar-hinge framework , then the framework is rigid. The next result, by Bricard in 1897, shows that the strict convexity condition can be dropped for 2 {\displaystyle 2} -skeletons of the octahedron .
Theorem. [ 10 ] The 2 {\displaystyle 2} -skeleton of any polyhedral framework of the octahedron in 3 {\displaystyle 3} -dimensions is rigid. However, there exists a framework of the octahedron whose 1 {\displaystyle 1} -skeleton is not rigid in 3 {\displaystyle 3} -dimensions.
The proof of the latter part of this theorem shows that these flexible frameworks exist due to self-intersections. Progress on Euler's conjecture did not pick up again until the late 19th century. The next theorem and corollary concern triangulated polyhedra.
Theorem. [ 9 ] If vertices are inserted in the edges of a strictly convex polyhedron and the faces are triangulated, then the 1 {\displaystyle 1} -skeleton of the resulting polyhedron is infinitesimally rigid.
Corollary. If a convex polyhedron in 3 {\displaystyle 3} -dimensions has the property that the faces containing a given vertex do not all lie in the same plane, then the 2 {\displaystyle 2} -skeleton of that polyhedron is infinitesimally rigid.
The following result shows that the triangulation condition in the above theorem is necessary.
Theorem. [ 2 ] The 1 {\displaystyle 1} -skeleton of a strictly convex polyhedron embedded in 3 {\displaystyle 3} -dimensions which has at least one non-triangular face is not rigid.
The following conjecture extends Cauchy's result to more general polyhedra.
Conjecture. [ 11 ] Two combinatorially equivalent polyhedra with equal corresponding dihedral angles are isogonal .
This conjecture has been proved for some special cases. [ 12 ] The next result applies in the generic setting, i.e., to almost all polyhedra with the same combinatorial structure, see structural rigidity .
Theorem. [ 13 ] Every closed simply connected polyhedral surface with a 3 {\displaystyle 3} -dimensional framework is generically rigid.
This theorem demonstrates that Euler's conjecture is true for almost all polyhedra. However, a non-generic polyhedron was found that is not rigid in 3 {\displaystyle 3} -dimensions, disproving the conjecture. [ 6 ] This polyhedron is topologically a sphere, which shows that the generic result above is optimal. Details on how to construct this polyhedron can be found in [ 14 ] . An interesting property of this polyhedron is that its volume remains constant along any continuous motion path, leading to the following conjecture.
Bellows Conjecture. [ 15 ] Every orientable closed polyhedral surface flexes with constant volume.
This conjecture was first proven for spherical polyhedra [ 16 ] and then in general. [ 17 ]
This section concerns the rigidity of tensegrities , see tensegrity systems for a definition of this type of GCS.
The definitions below can be found in [ 1 ] .
Infinitesimal motion. An infinitesimal motion of a tensegrity framework ( G , p ) {\displaystyle (G,p)} is a velocity vector p ′ : V → R d {\displaystyle p':V\rightarrow \mathbb {R} ^{d}} such that for each edge ( u , v ) {\displaystyle (u,v)} of the framework,
Second-order motion. A second-order motion of a tensegrity framework ( G , p ) {\displaystyle (G,p)} is a solution ( p ′ , p ″ ) {\displaystyle (p',p'')} to the following constraints:
Global rigidity. A d {\displaystyle d} -dimensional tensegrity framework ( G , p ) {\displaystyle (G,p)} of a tensegrity GCS is globally rigid in d {\displaystyle d} -dimensions if every other d {\displaystyle d} -dimensional framework ( G , q ) {\displaystyle (G,q)} of the same GCS that is dominated by ( G , p ) {\displaystyle (G,p)} can be obtained via a trivial motion of ( G , p ) {\displaystyle (G,p)} .
Universal rigidity. A d {\displaystyle d} -dimensional tensegrity framework ( G , p ) {\displaystyle (G,p)} of a tensegrity GCS is universally rigid if it is globally rigid in any dimension.
Dimensional rigidity. A d {\displaystyle d} -dimensional tensegrity framework ( G , p ) {\displaystyle (G,p)} of a tensegrity GCS is dimensionally rigid in d {\displaystyle d} -dimensions if any other D {\displaystyle D} -dimensional tensegrity framework ( G , q ) {\displaystyle (G,q)} , for any D {\displaystyle D} , satisfying the constraints of the GCS has an affine span of dimension at most d {\displaystyle d} .
Super stable. A d {\displaystyle d} -dimensional tensegrity framework ( G , p ) {\displaystyle (G,p)} is super stable in d {\displaystyle d} -dimensions if it is rigid in d {\displaystyle d} -dimensions as a bar-joint framework and has a proper equilibrium stress ω {\displaystyle \omega } such that the stress matrix Ω {\displaystyle \Omega } is positive semidefinite and has rank | V | − d − 1 {\displaystyle |V|-d-1} .
Generic results.
Infinitesimal rigidity is not a generic property of tensegrities, see structural rigidity . In other words, not all generic tensegrities with the same constraint graph have the same infinitesimal rigidity properties. Hence, some work has gone into identifying specific classes of graphs for which infinitesimal rigidity is a generic property of tensegrities. Graphs satisfying this condition are called strongly rigid. Testing a graph for strong rigidity is NP-hard, even for 1 {\displaystyle 1} -dimension. [ 18 ] The following result equates generic redundant rigidity of graphs to infinitesimally rigid tensegrities.
Theorem. [ 19 ] A graph G {\displaystyle G} has an infinitesimally rigid tensegrity framework in d {\displaystyle d} -dimensions, for some partition of the edges of G {\displaystyle G} into bars, cables, and struts if and only if G {\displaystyle G} is generically edge-redundantly rigid in d {\displaystyle d} -dimensions.
The first result demonstrates when rigidity and infinitesimal rigidity of tensegrities are equivalent.
Theorem. [ 20 ] Let ( G , p ) {\displaystyle (G,p)} be a d {\displaystyle d} -dimensional tensegrity framework where: the vertices of G {\displaystyle G} are realized as a strictly convex polygon; the bars form a Hamilton cycle on the boundary of this polygon; and there are no struts. Then, ( G , p ) {\displaystyle (G,p)} is rigid in d {\displaystyle d} -dimensions if and only if it is infinitesimally rigid in d {\displaystyle d} -dimensions.
The following is a necessary condition for rigidity.
Theorem. [ 21 ] Let ( G , p ) {\displaystyle (G,p)} be a d {\displaystyle d} -dimensional tensegrity framework with at least one cable or strut. If ( G , p ) {\displaystyle (G,p)} is rigid in d {\displaystyle d} -dimensions, then it has a non-zero proper equilibrium stress.
Rigidity of tensegrities can also be written in terms of bar-joint frameworks as follows.
Theorem. [ 22 ] Let ( G , p ) {\displaystyle (G,p)} be a d {\displaystyle d} -dimensional tensegrity framework with at least one cable or strut. Then ( G , p ) {\displaystyle (G,p)} is infinitesimally rigid in d {\displaystyle d} -dimensions if it is rigid in d {\displaystyle d} -dimensions as a bar-joint framework and has a strict proper stress.
The following is a sufficient condition for second-order rigidity.
Theorem. [ 20 ] Let ( G , p ) {\displaystyle (G,p)} be a d {\displaystyle d} -dimensional tensegrity framework. If for all non-trivial infinitesimal motions p ′ {\displaystyle p'} of ( G , p ) {\displaystyle (G,p)} , there exists a proper equilibrium stress ω {\displaystyle \omega } such that
∑ u , v ∈ V ω u v ( p u ′ − p v ′ ) ⋅ ( p u ′ − p v ′ ) > 0 , {\displaystyle \sum _{u,v\in V}\omega _{uv}(p'_{u}-p'_{v})\cdot (p'_{u}-p'_{v})>0,}
then ( G , p ) {\displaystyle (G,p)} is second-order rigid.
An interesting application of tensegrities is in sphere-packings in polyhedral containers. Such a packing can be modelled as a tensegrity with struts between pairs of tangent spheres and between the boundaries of the container and the spheres tangent to them. This model has been studied to compute local maximal densities of these packings. [ 23 ] [ 24 ]
The next result demonstrates when tensegrity frameworks have the same equilibrium stresses.
Theorem. [ 25 ] Let ( G , p ) {\displaystyle (G,p)} be a d {\displaystyle d} -dimensional tensegrity framework with a proper stress ω {\displaystyle \omega } such that the stress matrix Ω {\displaystyle \Omega } is positive semidefinite . Then, ω {\displaystyle \omega } is a proper stress of all d {\displaystyle d} -dimensional tensegrity frameworks dominated by ( G , p ) {\displaystyle (G,p)} .
The following is a sufficient condition for global rigidity of generic tensegrity frameworks based on stress matrices.
Theorem. [ 26 ] Let ( G , p ) {\displaystyle (G,p)} be a d {\displaystyle d} -dimensional generic tensegrity framework with a proper equilibrium stress ω {\displaystyle \omega } . If the stress matrix Ω {\displaystyle \Omega } has rank | V | − d − 1 {\displaystyle |V|-d-1} , then ( G , p ) {\displaystyle (G,p)} is globally rigid in d {\displaystyle d} dimensions.
While this theorem is for the generic setting, it does not offer a combinatorial characterization of generic global rigidity, so it is not quite a result of structural rigidity .
Let ( G , p ) {\displaystyle (G,p)} be a d {\displaystyle d} -dimensional generic tensegrity framework, such that the affine span of p {\displaystyle p} is R d {\displaystyle \mathbb {R} ^{d}} , with a proper equilibrium stress ω {\displaystyle \omega } and the stress matrix Ω {\displaystyle \Omega } . A finite set of non-zero vectors in R d {\displaystyle \mathbb {R} ^{d}} lie on a conic at infinity if, treating them as points in ( d − 1 ) {\displaystyle (d-1)} -dimensional projective space, they lie on a conic. Consider the following three statements:
If Statements 1 and 2 hold, then ( G , p ) {\displaystyle (G,p)} is dimensionally rigid in d {\displaystyle d} -dimensions, [ 25 ] and if Statement 3 also holds, then ( G , p ) {\displaystyle (G,p)} is universally rigid in d {\displaystyle d} -dimensions. [ 27 ] | https://en.wikipedia.org/wiki/Geometric_rigidity |
A geometric separator is a line (or another shape) that partitions a collection of geometric shapes into two subsets, such that the proportion of shapes in each subset is bounded, and the number of shapes that do not belong to either subset (i.e. the shapes intersected by the separator itself) is small.
When a geometric separator exists, it can be used for building divide-and-conquer algorithms for solving various problems in computational geometry .
In 1979, Helge Tverberg [ 1 ] raised the following question. For two positive integers k , l , what is the smallest number n ( k , l ) such that, for any family of pairwise-disjoint convex objects in the plane, there exists a straight line that has at least k objects on one side and at least l on the other side?
The following results are known.
Given a set of N =4 k disjoint axis-parallel rectangles in the plane, there is a line, either horizontal or vertical, such that at least N /4 rectangles lie entirely to each side of it (thus at most N /2 rectangles are intersected by the separator line).
Define W as the most western vertical line with at least N /4 rectangles entirely to its west. There are two cases:
The number of intersected shapes, guaranteed by the above theorem, is O( N ). This upper bound is asymptotically tight even when the shapes are squares, as illustrated in the figure to the right. This is in sharp contrast to the upper bound of O( √ N ) intersected shapes, which is guaranteed when the separator is a closed shape (see previous section ).
Moreover, when the shapes are arbitrary rectangles, there are cases in which no line that separates more than a single rectangle can cross fewer than N /4 rectangles, as illustrated in the figure to the right. [ 4 ]
The above theorem can be generalized from disjoint rectangles to k -thick rectangles. Additionally, by induction on d , it is possible to generalize the above theorem to d dimensions and get the following theorem: [ 5 ]
For the special case when k = N − 1 (i.e. each point is contained in at most N − 1 boxes), the following theorem holds: [ 5 ]
The objects need not be boxes, and the separators need not be axis-parallel:
It is possible to find the hyperplanes guaranteed by the above theorems in O( Nd ) steps. Also, if the 2 d lists of the lower and upper endpoints of the intervals defining the boxes' i th coordinates are pre-sorted, then the best such hyperplane (according to a wide variety of optimality measures) may be found in O( Nd ) steps.
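A brute-force version of this search is easy to state in code. The sketch below (plain Python; the function name, the (xmin, ymin, xmax, ymax) rectangle encoding, and the sample data are illustrative assumptions, and it runs in O(N²) per axis rather than the O(Nd) achievable with pre-sorted endpoints) looks for an axis-parallel line with at least N/4 rectangles entirely on each side:

```python
def best_axis_separator(rects):
    """Axis-parallel separating line with >= N/4 rectangles fully on each side."""
    n = len(rects)
    best = None
    for axis in (0, 1):  # 0: vertical line, 1: horizontal line
        candidates = {r[axis] for r in rects} | {r[axis + 2] for r in rects}
        for c in candidates:
            side_a = sum(r[axis + 2] <= c for r in rects)  # entirely on one side
            side_b = sum(r[axis] >= c for r in rects)      # entirely on the other
            if side_a >= n / 4 and side_b >= n / 4:
                cut = n - side_a - side_b                  # rectangles crossed
                if best is None or cut < best[0]:
                    best = (cut, axis, c)
    return best  # (number crossed, axis, coordinate), or None

rects = [(i, 0, i + 1, 1) for i in range(8)]  # 8 disjoint unit squares in a row
print(best_axis_separator(rects))             # a zero-cut vertical line, e.g. (0, 0, 4)
```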
A simple case in which a separator is guaranteed to exist is the following: [ 5 ] [ 6 ]
Thus, R is a geometric separator that separates the n squares into two subsets ("inside R " and "outside R "), with a relatively small "loss" (the squares intersected by R are considered "lost" because they do not belong to any of the two subsets).
Define a 2-fat rectangle as an axis-parallel rectangle with an aspect ratio of at most 2.
Let R 0 be a minimal-area 2-fat rectangle that contains the centers of at least n /3 squares. Thus every 2-fat rectangle smaller than R 0 contains fewer than n /3 squares.
For every t in [0,1), let R t be a 2-fat rectangle with the same center as R 0 , inflated by 1 + t .
Now it remains to show that there is a t for which R t intersects at most O(sqrt( n )) squares.
First, consider all the "large squares" – the squares whose side-length is at least width ( R 0 ) / 2 n {\displaystyle \operatorname {width} (R_{0})/2{\sqrt {n}}} . For every t , the perimeter of R t is at most 2·perimeter( R 0 ) which is at most 6·width( R 0 ), so it can intersect at most 12 n {\displaystyle 12{\sqrt {n}}} large squares.
Next, consider all the "small squares" – the squares whose side-length is less than width ( R 0 ) / 2 n {\displaystyle \operatorname {width} (R_{0})/2{\sqrt {n}}} .
For every t , define: intersect( t ) as the set of small squares intersected by the boundary of R t . For every t 1 and t 2 , if | t 1 − t 2 | ≥ 1 / n {\displaystyle |t_{1}-t_{2}|\geq 1/{\sqrt {n}}} , then | width ( R t 1 ) − width ( R t 2 ) | ≥ width ( R 0 ) / n {\displaystyle |\operatorname {width} (R_{t_{1}})-\operatorname {width} (R_{t_{2}})|\geq \operatorname {width} (R_{0})/{\sqrt {n}}} . Therefore, there is a gap of at least width ( R 0 ) / 2 n {\displaystyle \operatorname {width} (R_{0})/2{\sqrt {n}}} between the boundary of R t 1 and the boundary of R t 2 . Therefore, intersect( t 1 ) and intersect( t 2 ) are disjoint. Therefore:
Therefore, by the pigeonhole principle there is a certain j 0 for which:
The separator we look for is the rectangle R t , where t = j 0 / n {\displaystyle t=j_{0}/{\sqrt {n}}} . [ 7 ]
Using this separator theorem, we can solve certain problems in computational geometry in the following way:
The above theorem can be generalized in many different ways, with possibly different constants. For example:
The ratio of 1:2, in the square separator theorem above, is the best that can be guaranteed: there are collections of shapes that cannot be separated in a better ratio using a separator that crosses only O(sqrt( n )) shapes. Here is one such collection (from theorem 34 of [ 5 ] ):
Consider an equilateral triangle . At each of its 3 vertices, put N /3 shapes arranged in an exponential spiral, such that the diameter increases by a constant factor every turn of the spiral, and each shape touches its neighbours in the spiral ordering. For example, start with a 1-by-Φ rectangle, where Φ is the golden ratio . Add an adjacent Φ-by-Φ square and get another golden rectangle. Add an adjacent (1+Φ)-by-(1+Φ) square and get a larger golden rectangle, and so on.
Now, in order to separate more than 1/3 of the shapes, the separator must separate Ω( N ) shapes from two different vertices. But to do this, the separator must intersect Ω( N ) shapes.
Define the centerpoint of Q as a point o such that every line through it has at most 2 n /3 points of Q in each side of it. The existence of a centerpoint can be proved using Helly's theorem .
For a given point p and constant a >0, define Pr(a,p,o) as the probability that a random line through o lies at a distance of less than a from p . The idea is to bound this probability and thus bound the expected number of points at a distance less than a from a random line through o . Then, by the pigeonhole principle , at least one line through o is the desired separator.
Bounded-width separators can be used for approximately solving the protein folding problem. [ 9 ] They can also be used for an exact sub-exponential algorithm to find a maximum independent set , as well as several related covering problems, in geometric graphs. [ 8 ]
The planar separator theorem may be proven by using the circle packing theorem to represent a planar graph as the contact graph of a system of disks in the plane, and then by finding a circle that forms a geometric separator for those disks. [ 10 ] | https://en.wikipedia.org/wiki/Geometric_separator |
In mathematics , a geometric series is a series summing the terms of an infinite geometric sequence , in which the ratio of consecutive terms is constant. For example, the series 1 2 + 1 4 + 1 8 + ⋯ {\displaystyle {\tfrac {1}{2}}+{\tfrac {1}{4}}+{\tfrac {1}{8}}+\cdots } is a geometric series with common ratio 1 2 {\displaystyle {\tfrac {1}{2}}} , which converges to the sum of 1 {\displaystyle 1} . Each term in a geometric series is the geometric mean of the term before it and the term after it, in the same way that each term of an arithmetic series is the arithmetic mean of its neighbors.
While Greek philosopher Zeno's paradoxes about time and motion (5th century BCE) have been interpreted as involving geometric series, such series were formally studied and applied a century or two later by Greek mathematicians , for example used by Archimedes to calculate the area inside a parabola (3rd century BCE). Today, geometric series are used in mathematical finance , calculating areas of fractals, and various computer science topics.
Though geometric series most commonly involve real or complex numbers , there are also important results and applications for matrix-valued geometric series, function-valued geometric series, p {\displaystyle p} - adic number geometric series, and most generally geometric series of elements of abstract algebraic fields , rings , and semirings .
The geometric series is an infinite series derived from a special type of sequence called a geometric progression . This means that it is the sum of infinitely many terms of a geometric progression: starting from the initial term a {\displaystyle a} , each subsequent term is the previous one multiplied by a constant number known as the common ratio r {\displaystyle r} . Multiplying repeatedly by the common ratio, the geometric series can be defined mathematically as [ 1 ] a + a r + a r 2 + a r 3 + ⋯ = ∑ k = 0 ∞ a r k . {\displaystyle a+ar+ar^{2}+ar^{3}+\cdots =\sum _{k=0}^{\infty }ar^{k}.} The sum of a finite initial segment of an infinite geometric series is called a finite geometric series , expressed as [ 2 ] a + a r + a r 2 + a r 3 + ⋯ + a r n = ∑ k = 0 n a r k . {\displaystyle a+ar+ar^{2}+ar^{3}+\cdots +ar^{n}=\sum _{k=0}^{n}ar^{k}.}
When r > 1 {\displaystyle r>1} it is often called a growth rate or rate of expansion. When 0 < r < 1 {\displaystyle 0<r<1} it is often called a decay rate or shrink rate, where the idea that it is a "rate" comes from interpreting k {\displaystyle k} as a sort of discrete time variable. When an application area has specialized vocabulary for specific types of growth, expansion, shrinkage, and decay, that vocabulary will also often be used to name r {\displaystyle r} parameters of geometric series. In economics , for instance, rates of increase and decrease of price levels are called inflation rates and deflation rates, while rates of increase in values of investments include rates of return and interest rates . [ 3 ]
When summing infinitely many terms, the geometric series can either be convergent or divergent . Convergence means that the sequence of partial sums approaches a finite value; divergence means that it does not. The convergence of a geometric series can be described depending on the value of the common ratio, see § Convergence of the series and its proof . Grandi's series is an example of a divergent series that can be expressed as 1 − 1 + 1 − 1 + ⋯ {\displaystyle 1-1+1-1+\cdots } , where the initial term is 1 {\displaystyle 1} and the common ratio is − 1 {\displaystyle -1} ; it diverges because its partial sums oscillate between 1 {\displaystyle 1} and 0 {\displaystyle 0} rather than approaching a single value.
Decimal numbers that have repeated patterns that continue forever can be interpreted as geometric series and thereby converted to expressions of the ratio of two integers . [ 4 ] For example, the repeated decimal fraction 0.7777 … {\displaystyle 0.7777\ldots } can be written as the geometric series 0.7777 … = 7 10 + 7 10 ( 1 10 ) + 7 10 ( 1 10 2 ) + 7 10 ( 1 10 3 ) + ⋯ , {\displaystyle 0.7777\ldots ={\frac {7}{10}}+{\frac {7}{10}}\left({\frac {1}{10}}\right)+{\frac {7}{10}}\left({\frac {1}{10^{2}}}\right)+{\frac {7}{10}}\left({\frac {1}{10^{3}}}\right)+\cdots ,} where the initial term is a = 7 10 {\displaystyle a={\tfrac {7}{10}}} and the common ratio is r = 1 10 {\displaystyle r={\tfrac {1}{10}}} .
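As a concrete check of this example, exact rational arithmetic reproduces the fraction directly; the following minimal Python sketch (the variable names are illustrative) applies the infinite-sum formula a/(1 − r) derived below:

```python
from fractions import Fraction

# 0.7777... as a geometric series: initial term a = 7/10, common ratio r = 1/10.
a, r = Fraction(7, 10), Fraction(1, 10)
print(a / (1 - r))  # 7/9
```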
The convergence of the infinite sequence of partial sums of the infinite geometric series depends on the magnitude of the common ratio r {\displaystyle r} alone: if | r | < 1 {\displaystyle |r|<1} , the terms approach zero and the series converges to the sum a 1 − r {\displaystyle {\tfrac {a}{1-r}}} ; if | r | = 1 {\displaystyle |r|=1} , the series diverges, since for r = 1 {\displaystyle r=1} the partial sums grow without bound and otherwise they oscillate without approaching a limit; and if | r | > 1 {\displaystyle |r|>1} , the terms grow arbitrarily large in magnitude and the series diverges.
The rate of convergence shows how quickly the sequence approaches its limit. In the case of the geometric series—the relevant sequence is S n {\displaystyle S_{n}} and its limit is S {\displaystyle S} —the rate and order are found via lim n → ∞ | S n + 1 − S | | S n − S | q , {\displaystyle \lim _{n\rightarrow \infty }{\frac {\left|S_{n+1}-S\right|}{\left|S_{n}-S\right|^{q}}},} where q {\displaystyle q} represents the order of convergence. Using | S n − S | = | a r n + 1 1 − r | {\textstyle |S_{n}-S|=\left|{\frac {ar^{n+1}}{1-r}}\right|} and choosing the order of convergence q = 1 {\displaystyle q=1} gives: [ 6 ] lim n → ∞ | a r n + 2 1 − r | | a r n + 1 1 − r | 1 = | r | . {\displaystyle \lim _{n\rightarrow \infty }{\frac {\left|{\frac {ar^{n+2}}{1-r}}\right|}{\left|{\frac {ar^{n+1}}{1-r}}\right|^{1}}}=|r|.} When the series converges, the rate of convergence gets slower as | r | {\displaystyle |r|} approaches 1 {\displaystyle 1} . [ 6 ] The pattern of convergence also depends on the sign or complex argument of the common ratio. If r > 0 {\displaystyle r>0} and | r | < 1 {\displaystyle |r|<1} then terms all share the same sign and the partial sums of the terms approach their eventual limit monotonically . If r < 0 {\displaystyle r<0} and | r | < 1 {\displaystyle |r|<1} , adjacent terms in the geometric series alternate between positive and negative, and the partial sums S n {\displaystyle S_{n}} of the terms oscillate above and below their eventual limit S {\displaystyle S} . For complex r {\displaystyle r} and | r | < 1 , {\displaystyle |r|<1,} the S n {\displaystyle S_{n}} converge in a spiraling pattern.
The convergence is proved as follows. The partial sum of the first n + 1 {\displaystyle n+1} terms of a geometric series, up to and including the r n {\displaystyle r^{n}} term, S n = a r 0 + a r 1 + ⋯ + a r n = ∑ k = 0 n a r k , {\displaystyle S_{n}=ar^{0}+ar^{1}+\cdots +ar^{n}=\sum _{k=0}^{n}ar^{k},} is given by the closed form S n = { a ( n + 1 ) r = 1 a ( 1 − r n + 1 1 − r ) otherwise {\displaystyle S_{n}={\begin{cases}a(n+1)&r=1\\a\left({\frac {1-r^{n+1}}{1-r}}\right)&{\text{otherwise}}\end{cases}}} where r {\displaystyle r} is the common ratio. The case r = 1 {\displaystyle r=1} is merely a simple addition, a case of an arithmetic series . The formula for the partial sums S n {\displaystyle S_{n}} with r ≠ 1 {\displaystyle r\neq 1} can be derived as follows: [ 7 ] [ 8 ] [ 9 ] S n = a r 0 + a r 1 + ⋯ + a r n , r S n = a r 1 + a r 2 + ⋯ + a r n + 1 , S n − r S n = a r 0 − a r n + 1 , S n ( 1 − r ) = a ( 1 − r n + 1 ) , S n = a ( 1 − r n + 1 1 − r ) , {\displaystyle {\begin{aligned}S_{n}&=ar^{0}+ar^{1}+\cdots +ar^{n},\\rS_{n}&=ar^{1}+ar^{2}+\cdots +ar^{n+1},\\S_{n}-rS_{n}&=ar^{0}-ar^{n+1},\\S_{n}\left(1-r\right)&=a\left(1-r^{n+1}\right),\\S_{n}&=a\left({\frac {1-r^{n+1}}{1-r}}\right),\end{aligned}}} for r ≠ 1 {\displaystyle r\neq 1} . As r {\displaystyle r} approaches 1, polynomial division or L'Hôpital's rule recovers the case S n = a ( n + 1 ) {\displaystyle S_{n}=a(n+1)} . [ 10 ]
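The derivation above is easy to verify numerically. A minimal Python sketch (the values of a, r, and n are arbitrary examples) compares direct summation with the closed form:

```python
# Comparing direct summation of a finite geometric series with the
# closed form S_n = a(1 - r**(n+1))/(1 - r); a, r, n are example values.
a, r, n = 2.0, 0.5, 10
direct = sum(a * r**k for k in range(n + 1))
closed = a * (1 - r**(n + 1)) / (1 - r)
print(direct, closed)   # both 3.998046875
print(a / (1 - r))      # the limit 4.0 as n grows, since |r| < 1
```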
As n {\displaystyle n} approaches infinity, the absolute value of r must be less than one for this sequence of partial sums to converge to a limit. When it does, the series converges absolutely . The infinite series then becomes S = a + a r + a r 2 + a r 3 + a r 4 + ⋯ = lim n → ∞ S n = lim n → ∞ a ( 1 − r n + 1 ) 1 − r = a 1 − r − a 1 − r lim n → ∞ r n + 1 = a 1 − r , {\displaystyle {\begin{aligned}S&=a+ar+ar^{2}+ar^{3}+ar^{4}+\cdots \\&=\lim _{n\rightarrow \infty }S_{n}\\&=\lim _{n\rightarrow \infty }{\frac {a(1-r^{n+1})}{1-r}}\\&={\frac {a}{1-r}}-{\frac {a}{1-r}}\lim _{n\rightarrow \infty }r^{n+1}\\&={\frac {a}{1-r}},\end{aligned}}} for | r | < 1 {\displaystyle |r|<1} . [ 7 ]
This convergence result is widely applied to prove the convergence of other series as well, whenever those series' terms can be bounded from above by a suitable geometric series; that proof strategy is the basis for the ratio test and root test for the convergence of infinite series. [ 11 ]
Like the geometric series, a power series has one parameter for a common variable raised to successive powers corresponding to the geometric series's r {\displaystyle r} , but it has additional parameters a 0 , a 1 , a 2 , … , {\displaystyle a_{0},a_{1},a_{2},\ldots ,} one for each term in the series, for the distinct coefficients of each x 0 , x 1 , x 2 , … {\displaystyle x^{0},x^{1},x^{2},\ldots } , rather than just a single additional parameter a {\displaystyle a} for all terms, the common coefficient of r k {\displaystyle r^{k}} in each term of a geometric series. The geometric series can therefore be considered a class of power series in which the sequence of coefficients satisfies a k = a {\displaystyle a_{k}=a} for all k {\displaystyle k} and x = r {\displaystyle x=r} . [ 12 ]
This special class of power series plays an important role in mathematics, for instance for the study of ordinary generating functions in combinatorics and the summation of divergent series in analysis. Many other power series can be written as transformations and combinations of geometric series, making the geometric series formula a convenient tool for calculating formulas for those power series as well. [ 13 ] [ 14 ]
As a power series, the geometric series has a radius of convergence of 1. [ 15 ] This could be seen as a consequence of the Cauchy–Hadamard theorem and the fact that lim n → ∞ a n = 1 {\displaystyle \lim _{n\rightarrow \infty }{\sqrt[{n}]{a}}=1} for any nonzero a {\displaystyle a} or as a consequence of the ratio test for the convergence of infinite series, with lim n → ∞ | a r n + 1 | | a r n | = | r | {\displaystyle \lim _{n\rightarrow \infty }{\frac {|ar^{n+1}|}{|ar^{n}|}}=|r|} implying convergence only for | r | < 1. {\displaystyle |r|<1.} However, both the ratio test and the Cauchy–Hadamard theorem are proven using the geometric series formula as a logically prior result, so such reasoning would be subtly circular. [ 16 ]
2,500 years ago, Greek mathematicians believed that an infinitely long list of positive numbers must sum to infinity. Therefore, Zeno of Elea created a paradox , demonstrating as follows: in order to walk from one place to another, one must first walk half the distance there, and then half of the remaining distance, and half of that remaining distance, and so on, covering infinitely many intervals before arriving. In doing so, he partitioned a fixed distance into an infinitely long list of halved remaining distances, each with a length greater than zero. Zeno's paradox revealed to the Greeks that their assumption about an infinitely long list of positive numbers needing to add up to infinity was incorrect. [ 17 ]
Euclid's Elements has the distinction of being the world's oldest continuously used mathematical textbook, and it includes a demonstration of the sum of finite geometric series in Book IX, Proposition 35, illustrated in an adjacent figure. [ 18 ]
Archimedes in his The Quadrature of the Parabola used the sum of a geometric series to compute the area enclosed by a parabola and a straight line. Archimedes' theorem states that the total area under the parabola is 4 / 3 of the area of the blue triangle. His method was to dissect the area into infinite triangles as shown in the adjacent figure. [ 19 ] He determined that each green triangle has 1 / 8 the area of the blue triangle, each yellow triangle has 1 / 8 the area of a green triangle, and so forth. Assuming that the blue triangle has area 1, then, the total area is the sum of the infinite series 1 + 2 ( 1 8 ) + 4 ( 1 8 ) 2 + 8 ( 1 8 ) 3 + ⋯ . {\displaystyle 1+2\left({\frac {1}{8}}\right)+4\left({\frac {1}{8}}\right)^{2}+8\left({\frac {1}{8}}\right)^{3}+\cdots .} Here the first term represents the area of the blue triangle, the second term is the area of the two green triangles, the third term is the area of the four yellow triangles, and so on. Simplifying the fractions gives 1 + 1 4 + 1 16 + 1 64 + ⋯ , {\displaystyle 1+{\frac {1}{4}}+{\frac {1}{16}}+{\frac {1}{64}}+\cdots ,} a geometric series with common ratio r = 1 4 {\displaystyle r={\tfrac {1}{4}}} and its sum is: [ 19 ] 1 1 − 1 4 = 4 3 . {\displaystyle {\frac {1}{1-{\frac {1}{4}}}}={\frac {4}{3}}.}
In addition to his elegantly simple proof of the divergence of the harmonic series , Nicole Oresme [ 20 ] proved that the arithmetico-geometric series known as Gabriel's Staircase [ 21 ] sums to 2: 1 2 + 2 4 + 3 8 + 4 16 + 5 32 + 6 64 + 7 128 + ⋯ = 2. {\displaystyle {\frac {1}{2}}+{\frac {2}{4}}+{\frac {3}{8}}+{\frac {4}{16}}+{\frac {5}{32}}+{\frac {6}{64}}+{\frac {7}{128}}+\cdots =2.} The diagram for his geometric proof, similar to the adjacent diagram, shows a two-dimensional geometric series. The first dimension is horizontal, in the bottom row, representing the geometric series with initial value a = 1 2 {\displaystyle a={\tfrac {1}{2}}} and common ratio r = 1 2 {\displaystyle r={\tfrac {1}{2}}} : S = 1 2 + 1 4 + 1 8 + 1 16 + 1 32 + ⋯ = 1 2 1 − 1 2 = 1. {\displaystyle S={\frac {1}{2}}+{\frac {1}{4}}+{\frac {1}{8}}+{\frac {1}{16}}+{\frac {1}{32}}+\cdots ={\frac {\frac {1}{2}}{1-{\frac {1}{2}}}}=1.} The second dimension is vertical, where the bottom row is a new initial term a = S {\displaystyle a=S} and each subsequent row above it shrinks according to the same common ratio r = 1 2 {\displaystyle r={\tfrac {1}{2}}} , making another geometric series with sum T {\displaystyle T} , T = S ( 1 + 1 2 + 1 4 + 1 8 + ⋯ ) = S 1 − r = 1 1 − 1 2 = 2. {\displaystyle {\begin{aligned}T&=S\left(1+{\frac {1}{2}}+{\frac {1}{4}}+{\frac {1}{8}}+\cdots \right)\\&={\frac {S}{1-r}}={\frac {1}{1-{\frac {1}{2}}}}=2.\end{aligned}}} This approach generalizes usefully to higher dimensions, and that generalization is described below in § Connection to the power series .
As mentioned above, the geometric series can be applied in the field of economics . There, the common ratio of a geometric series may represent rates of increase and decrease of price levels, called inflation rates and deflation rates, or rates of increase in the values of investments, such as rates of return and interest rates . More specifically in mathematical finance , geometric series can also be applied in time value of money ; that is, to represent the present values of perpetual annuities , sums of money to be paid each year indefinitely into the future. This sort of calculation is used to compute the annual percentage rate of a loan, such as a mortgage loan . It can also be used to estimate the present value of expected stock dividends , or the terminal value of a financial asset assuming a stable growth rate. However, the assumption that interest rates are constant is generally incorrect and payments are unlikely to continue forever since the issuer of the perpetual annuity may lose its ability or end its commitment to make continued payments, so estimates like these are only heuristic guidelines for decision making rather than scientific predictions of actual current values. [ 3 ]
In addition to finding the area enclosed by a parabola and a line in Archimedes ' The Quadrature of the Parabola , [ 19 ] the geometric series may also be applied in finding the Koch snowflake 's area described as the union of infinitely many equilateral triangles (see figure). Each side of the green triangle is exactly 1 / 3 the size of a side of the large blue triangle and therefore has exactly 1 / 9 the area. Similarly, each yellow triangle has 1 / 9 the area of a green triangle, and so forth. All of these triangles can be represented in terms of geometric series: the blue triangle's area is the first term, the three green triangles' area is the second term, the twelve yellow triangles' area is the third term, and so forth. Excluding the initial 1, this series has a common ratio r = 4 9 {\textstyle r={\frac {4}{9}}} , and by taking the blue triangle as a unit of area, the total area of the snowflake is: [ 22 ] 1 + 3 ( 1 9 ) + 12 ( 1 9 ) 2 + 48 ( 1 9 ) 3 + ⋯ = 1 + 1 3 1 − 4 9 = 8 5 . {\displaystyle 1+3\left({\frac {1}{9}}\right)+12\left({\frac {1}{9}}\right)^{2}+48\left({\frac {1}{9}}\right)^{3}+\cdots =1+{\frac {\frac {1}{3}}{1-{\frac {4}{9}}}}={\frac {8}{5}}.}
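This area can be double-checked by truncating the series numerically; the short Python sketch below (the loop bound is an arbitrary choice) accumulates the triangle counts and areas described above:

```python
# Truncating the Koch snowflake area series: step k adds 3 * 4**(k-1)
# new triangles, each of area (1/9)**k, to the unit-area initial triangle.
area = 1.0
for k in range(1, 60):
    area += 3 * 4 ** (k - 1) * (1 / 9) ** k
print(area)  # ≈ 1.6 = 8/5
```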
Geometric series also appear in various topics in computer science, for example in the amortized analysis of dynamically resized arrays, in divide-and-conquer recurrences whose per-level costs shrink geometrically, and in the analysis of randomized algorithms involving the geometric distribution .
While geometric series with real and complex number parameters a {\displaystyle a} and r {\displaystyle r} are most common, geometric series of more general terms such as functions , matrices , and p {\displaystyle p} - adic numbers also find application. [ 23 ] The mathematical operations used to express a geometric series given its parameters are simply addition and repeated multiplication, and so it is natural, in the context of modern algebra , to define geometric series with parameters from any ring or field . [ 24 ] Further generalization to geometric series with parameters from semirings is more unusual, but also has applications; for instance, in the study of fixed-point iteration of transformation functions , as in transformations of automata via rational series . [ 25 ]
In order to analyze the convergence of these general geometric series, one must also have, on top of addition and multiplication, some metric of distance between partial sums of the series. This can introduce new subtleties into the questions of convergence, such as the distinctions between uniform convergence and pointwise convergence in series of functions, and can lead to strong contrasts with intuitions from the real numbers, such as in the convergence of the series 1 + 2 + 4 + 8 + ⋯ {\displaystyle 1+2+4+8+\cdots } with a = 1 {\displaystyle a=1} and r = 2 {\displaystyle r=2} to a 1 − r = − 1 {\displaystyle {\frac {a}{1-r}}=-1} in the 2-adic numbers using the 2-adic absolute value as a convergence metric. In that case, the 2-adic absolute value of the common coefficient is | r | 2 = | 2 | 2 = 1 2 {\displaystyle |r|_{2}=|2|_{2}={\tfrac {1}{2}}} , and while this is counterintuitive from the perspective of real number absolute value (where | 2 | = 2 , {\displaystyle |2|=2,} naturally), it is nonetheless well-justified in the context of p-adic analysis . [ 23 ]
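The 2-adic example can be made concrete in a few lines of code. In the sketch below (plain Python; the helper name abs2 is an illustrative choice), the partial sums S_n = 2**(n+1) − 1 get 2-adically closer and closer to −1, since S_n − (−1) = 2**(n+1):

```python
# 2-adic absolute value: |x|_2 = 2**(-v), where v is the number of
# factors of 2 in x (and |0|_2 = 0 by convention).
def abs2(x):
    if x == 0:
        return 0
    v = 0
    while x % 2 == 0:
        x //= 2
        v += 1
    return 2.0 ** -v

for n in range(1, 7):
    S = 2 ** (n + 1) - 1          # partial sum 1 + 2 + 4 + ... + 2**n
    print(n, S, abs2(S - (-1)))   # distances 1/4, 1/8, ... shrink toward 0
```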
When the multiplication of the parameters is not commutative , as it often is not for matrices or general physical operators , particularly in quantum mechanics , then the standard way of writing the geometric series,
a + a r + a r 2 + a r 3 + ⋯ , {\displaystyle a+ar+ar^{2}+ar^{3}+\cdots ,}
multiplying from the right, may need to be distinguished from the alternative
a + r a + r 2 a + r 3 a + ⋯ , {\displaystyle a+ra+r^{2}a+r^{3}a+\cdots ,}
multiplying from the left, and also the symmetric
a + r 1 2 a r 1 2 + r a r + r 3 2 a r 3 2 + ⋯ , {\displaystyle a+r^{\frac {1}{2}}ar^{\frac {1}{2}}+rar+r^{\frac {3}{2}}ar^{\frac {3}{2}}+\cdots ,}
multiplying half on each side. These choices may correspond to important alternatives with different strengths and weaknesses in applications, as in the case of ordering the mutual interferences of drift and diffusion differently at infinitesimal temporal scales in Itô integration and Stratonovich integration in stochastic calculus . | https://en.wikipedia.org/wiki/Geometric_series
The geometric set cover problem is the special case of the set cover problem in geometric settings. The input is a range space Σ = ( X , R ) {\displaystyle \Sigma =(X,{\mathcal {R}})} where X {\displaystyle X} is a universe of points in R d {\displaystyle \mathbb {R} ^{d}} and R {\displaystyle {\mathcal {R}}} is a family of subsets of X {\displaystyle X} called ranges , defined by the intersection of X {\displaystyle X} and geometric shapes such as disks and axis-parallel rectangles. The goal is to select a minimum-size subset C ⊆ R {\displaystyle {\mathcal {C}}\subseteq {\mathcal {R}}} of ranges such that every point in the universe X {\displaystyle X} is covered by some range in C {\displaystyle {\mathcal {C}}} .
Given the same range space Σ {\displaystyle \Sigma } , a closely related problem is the geometric hitting set problem , where the goal is to select a minimum-size subset H ⊆ X {\displaystyle H\subseteq X} of points such that every range of R {\displaystyle {\mathcal {R}}} has nonempty intersection with H {\displaystyle H} , i.e., is hit by H {\displaystyle H} .
In the one-dimensional case, where X {\displaystyle X} contains points on the real line and R {\displaystyle {\mathcal {R}}} is defined by intervals, both the geometric set cover and hitting set problems can be solved in polynomial time using a simple greedy algorithm . However, in higher dimensions, they are known to be NP-complete even for simple shapes, i.e., when R {\displaystyle {\mathcal {R}}} is induced by unit disks or unit squares. [ 1 ] The discrete unit disc cover problem is a geometric version of the general set cover problem which is NP-hard . [ 2 ]
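To illustrate the one-dimensional case, here is a short Python sketch of the classic greedy sweep for the interval hitting-set problem (the function name and the sample intervals are illustrative; a standard exchange argument shows this sweep is optimal on the line):

```python
# Greedy minimum hitting set for intervals on the real line:
# repeatedly take the smallest right endpoint among intervals not yet hit.
def interval_hitting_set(intervals):
    pts = []
    last = None
    for lo, hi in sorted(intervals, key=lambda iv: iv[1]):
        if last is None or lo > last:  # this interval is not hit yet
            last = hi
            pts.append(hi)
    return pts

print(interval_hitting_set([(0, 2), (1, 3), (4, 6), (5, 7), (6, 8)]))  # [2, 6]
```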
Many approximation algorithms have been devised for these problems. Due to the geometric nature, the approximation ratios for these problems can be much better than the general set cover/hitting set problems. Moreover, these approximate solutions can even be computed in near-linear time. [ 3 ]
The greedy algorithm for the general set cover problem gives O ( log n ) {\displaystyle O(\log n)} approximation, where n = max { | X | , | R | } {\displaystyle n=\max\{|X|,|{\mathcal {R}}|\}} . This approximation is known to be tight up to a constant factor. [ 4 ] However, in geometric settings, better approximations can be obtained. Using a multiplicative weight algorithm , [ 5 ] Brönnimann and Goodrich [ 6 ] showed that an O ( log O P T ) {\displaystyle O(\log {\mathsf {OPT}})} -approximate set cover/hitting set for a range space Σ {\displaystyle \Sigma } with constant VC-dimension can be computed in polynomial time, where O P T ≤ n {\displaystyle {\mathsf {OPT}}\leq n} denotes the size of the optimal solution. The approximation ratio can be further improved to O ( log log O P T ) {\displaystyle O(\log \log {\mathsf {OPT}})} or O ( 1 ) {\displaystyle O(1)} when R {\displaystyle {\mathcal {R}}} is induced by axis-parallel rectangles or disks in R 2 {\displaystyle \mathbb {R} ^{2}} , respectively.
Based on the iterative-reweighting technique of Clarkson [ 7 ] and Brönnimann and Goodrich, [ 6 ] Agarwal and Pan [ 3 ] gave algorithms that compute an approximate set cover/hitting set of a geometric range space in O ( n p o l y l o g ( n ) ) {\displaystyle O(n~\mathrm {polylog} (n))} time. For example, their algorithms compute an O ( log log O P T ) {\displaystyle O(\log \log {\mathsf {OPT}})} -approximate hitting set in O ( n log 3 n log log log O P T ) {\displaystyle O(n\log ^{3}n\log \log \log {\mathsf {OPT}})} time for range spaces induced by 2D axis-parallel rectangles, and an O ( 1 ) {\displaystyle O(1)} -approximate set cover in O ( n log 4 n ) {\displaystyle O(n\log ^{4}n)} time for range spaces induced by 2D disks. | https://en.wikipedia.org/wiki/Geometric_set_cover_problem
In probability theory and statistics , the geometric standard deviation ( GSD ) describes how spread out a set of numbers is when their preferred average is the geometric mean . For such data, it may be preferred to the more usual standard deviation . Note that unlike the usual arithmetic standard deviation, the geometric standard deviation is a multiplicative factor, and thus is dimensionless , rather than having the same dimension as the input values. Thus, the geometric standard deviation may be more appropriately called the geometric SD factor . [ 1 ] [ 2 ] When using the geometric SD factor in conjunction with the geometric mean, it should be described as "the range from (the geometric mean divided by the geometric SD factor) to (the geometric mean multiplied by the geometric SD factor)", and one cannot add/subtract the "geometric SD factor" to/from the geometric mean. [ 3 ]
If the geometric mean of a set of numbers A 1 , A 2 , . . . , A n {\textstyle {A_{1},A_{2},...,A_{n}}} is denoted as μ g {\textstyle \mu _{\mathrm {g} }} , then the geometric standard deviation is
σ g = exp 1 n ∑ i = 1 n ( ln A i μ g ) 2 . {\displaystyle \sigma _{\mathrm {g} }=\exp {\sqrt {{1 \over n}\sum _{i=1}^{n}\left(\ln {A_{i} \over \mu _{\mathrm {g} }}\right)^{2}}}\,.}
If the geometric mean is
μ g = A 1 A 2 ⋯ A n n {\displaystyle \mu _{\mathrm {g} }={\sqrt[{n}]{A_{1}A_{2}\cdots A_{n}}}}
then taking the natural logarithm of both sides results in
ln μ g = 1 n ln ( A 1 A 2 ⋯ A n ) . {\displaystyle \ln \mu _{\mathrm {g} }={1 \over n}\ln(A_{1}A_{2}\cdots A_{n}).}
The logarithm of a product is a sum of logarithms (assuming A i {\textstyle A_{i}} is positive for all i {\textstyle i} ), so
{\displaystyle \ln \mu _{\mathrm {g} }={1 \over n}\left[\ln A_{1}+\ln A_{2}+\cdots +\ln A_{n}\right].}
It can now be seen that ln μ_g is the arithmetic mean of the set {ln A_1, ln A_2, ..., ln A_n}; therefore the arithmetic standard deviation of this same set is
{\displaystyle \ln \sigma _{\mathrm {g} }={\sqrt {{1 \over n}\sum _{i=1}^{n}(\ln A_{i}-\ln \mu _{\mathrm {g} })^{2}}}\,.}
Exponentiating both sides then gives
{\displaystyle \sigma _{\mathrm {g} }=\exp {\sqrt {{1 \over n}\sum _{i=1}^{n}\left(\ln {A_{i} \over \mu _{\mathrm {g} }}\right)^{2}}}\,.}
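A short numerical check of this formula (a sketch; the sample values are arbitrary, and the population form with divisor n is used, as above):

```python
import math

def geometric_mean_and_gsd(values):
    """Return (geometric mean, geometric SD factor) of positive values."""
    logs = [math.log(v) for v in values]
    mu_log = sum(logs) / len(logs)        # ln(mu_g) = arithmetic mean of logs
    sd_log = math.sqrt(sum((x - mu_log) ** 2 for x in logs) / len(logs))
    return math.exp(mu_log), math.exp(sd_log)

mu_g, sigma_g = geometric_mean_and_gsd([1.0, 2.0, 4.0, 8.0])
print(mu_g)     # 2.828... = (1*2*4*8)**(1/4)
print(sigma_g)  # 2.17..., a dimensionless multiplicative factor
```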
The geometric version of the standard score is
{\displaystyle z={{\ln x-\ln \mu _{\mathrm {g} }} \over \ln \sigma _{\mathrm {g} }}=\log _{\sigma _{\mathrm {g} }}\left({x \over \mu _{\mathrm {g} }}\right).}
If the geometric mean, standard deviation, and z-score of a datum are known, then the raw score can be reconstructed by
{\displaystyle x=\mu _{\mathrm {g} }{\sigma _{\mathrm {g} }}^{z}.}
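A sketch of the round trip between a raw datum and its geometric z-score, reusing the helper above (the numbers are hypothetical):

```python
import math

mu_g, sigma_g = 2.83, 2.17   # e.g. from geometric_mean_and_gsd above
x = 5.0                      # a raw datum

z = (math.log(x) - math.log(mu_g)) / math.log(sigma_g)  # geometric z-score
print(z, mu_g * sigma_g ** z)  # the second value reconstructs x
```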
The geometric standard deviation is used as a measure of log-normal dispersion analogously to the geometric mean. [ 3 ] As the log-transform of a log-normal distribution results in a normal distribution, the geometric standard deviation is the exponentiated value of the standard deviation of the log-transformed values, i.e. {\displaystyle \sigma _{\mathrm {g} }=\exp(\operatorname {stdev} (\ln A))} .
As such, the geometric mean and the geometric standard deviation of a sample of data from a log-normally distributed population may be used to find the bounds of confidence intervals analogously to the way the arithmetic mean and standard deviation are used to bound confidence intervals for a normal distribution. See the discussion in log-normal distribution for details. | https://en.wikipedia.org/wiki/Geometric_standard_deviation |
Geometric Symmetry is a book by mathematician E. H. Lockwood and design engineer R. H. Macmillan, published by Cambridge University Press in 1978. Its subject matter is symmetry and geometry .
The book is divided into two parts. The first part (chapters 1-13) is largely descriptive and written for the non-mathematical reader. The second part (chapters 14-27) is more mathematical, but only elementary geometrical knowledge is assumed.
In the first part the authors describe and illustrate the following topics: symmetry elements , frieze patterns , wallpaper patterns , and rod , layer and space patterns. The first part also introduces the concepts of continuous , dilation , dichromatic and polychromatic symmetry .
In the second part the authors revisit all of the topics from the first part; but in more detail, and with greater mathematical rigour. Group theory and symmetry are the foundations of the material in the second part of the book. A detailed analysis of the subject matter is given in the appendix below.
The book is printed in two colours, red and black, to facilitate the identification of colour symmetry in patterns.
In the preface the authors state: "In this book we attempt to provide a fairly comprehensive account of symmetry in a form acceptable to readers without much mathematical knowledge [...] The treatment is geometrical which should appeal to art students and to readers whose mathematical interests are that way inclined." However, Joseph H. Gehringer, in a review in The Mathematics Teacher , commented: "Clearly not intended as a popular treatment of symmetry, the style of the authors is both concise and technical [...] this volume will appeal primarily to those devoting special attention to this field." [ 1 ]
The reception of the book was mixed. | https://en.wikipedia.org/wiki/Geometric_symmetry_(book) |
Geometric terms of location describe directions or positions relative to the shape of an object. These terms are used in descriptions of engineering , physics , and other sciences, as well as ordinary day-to-day discourse.
Though these terms themselves may be somewhat ambiguous , they are usually used in a context in which their meaning is clear. For example, when referring to a drive shaft it is clear what is meant by axial or radial directions . Or, in a free body diagram , one may similarly infer a sense of orientation by the forces or other vectors represented. [ citation needed ]
Common geometric terms of location are: | https://en.wikipedia.org/wiki/Geometric_terms_of_location |
In mathematics , a geometric transformation is any bijection of a set to itself (or to another such set) with some salient geometrical underpinning, such as preserving distances, angles, or ratios (scale). More specifically, it is a function whose domain and range are sets of points – most often a real coordinate space , R 2 {\displaystyle \mathbb {R} ^{2}} or R 3 {\displaystyle \mathbb {R} ^{3}} – such that the function is bijective so that its inverse exists. [ 1 ] The study of geometry may be approached by the study of these transformations, such as in transformation geometry . [ 2 ]
Geometric transformations can be classified by the dimension of their operand sets (thus distinguishing between, say, planar transformations and spatial transformations). They can also be classified according to the properties they preserve:
Each of these classes contains the previous one. [ 8 ]
Transformations of the same type form groups that may be sub-groups of other transformation groups.
Many geometric transformations are expressed with linear algebra. The bijective linear transformations are elements of a general linear group . A linear transformation A is non-singular. For a row vector v , the matrix product vA gives another row vector w = vA .
The transpose of a row vector v is a column vector v T , and the transpose of the above equality is w T = ( v A ) T = A T v T . {\displaystyle w^{T}=(vA)^{T}=A^{T}v^{T}.} Here A T provides a left action on column vectors.
In transformation geometry there are compositions AB . Starting with a row vector v , the right action of the composed transformation is w = vAB . After transposition,

{\displaystyle w^{T}=(vAB)^{T}=B^{T}A^{T}v^{T}.}
Thus for AB the associated left group action is B T A T . {\displaystyle B^{T}A^{T}.} In the study of opposite groups , the distinction is made between opposite group actions because commutative groups are the only groups for which these opposites are equal.
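A small numerical illustration of the right action versus the transposed left action (the matrices are arbitrary; NumPy is assumed to be available):

```python
import numpy as np

A = np.array([[1.0, 2.0], [0.0, 1.0]])   # two non-singular linear maps
B = np.array([[0.0, -1.0], [1.0, 0.0]])
v = np.array([[3.0, 4.0]])               # a row vector

w = v @ A @ B                            # right action of the composition AB
w_T = B.T @ A.T @ v.T                    # left action of B^T A^T on column v^T
print(np.allclose(w.T, w_T))             # True: (vAB)^T = B^T A^T v^T
```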
Geometric transformations can be distinguished into two types: active or alibi transformations which change the physical position of a set of points relative to a fixed frame of reference or coordinate system ( alibi meaning "being somewhere else at the same time"); and passive or alias transformations which leave points fixed but change the frame of reference or coordinate system relative to which they are described ( alias meaning "going under a different name"). [ 10 ] [ 11 ] By transformation , mathematicians usually refer to active transformations, while physicists and engineers could mean either. [ citation needed ]
For instance, active transformations are useful to describe successive positions of a rigid body . On the other hand, passive transformations may be useful in human motion analysis to observe the motion of the tibia relative to the femur , that is, its motion relative to a ( local ) coordinate system which moves together with the femur, rather than a ( global ) coordinate system which is fixed to the floor. [ 11 ]
In three-dimensional Euclidean space , any proper rigid transformation , whether active or passive, can be represented as a screw displacement , the composition of a translation along an axis and a rotation about that axis. | https://en.wikipedia.org/wiki/Geometric_transformation |
Geometrical Product Specification and Verification ( GPS&V ) [ 1 ] is a set of ISO standards developed by ISO Technical Committee 213. [ 2 ] The aim of those standards is to develop a common language to specify macro geometry (size, form, orientation, location) and micro-geometry (surface texture) of products or parts of products so that the language can be used consistently worldwide.
GPS&V standards cover:
Other ISO technical committees are strongly related to ISO TC 213. ISO Technical Committee 10 [ 3 ] is in charge of the standardization and coordination of technical product documentation (TPD).
The GPS&V standards describe the rules to define geometrical specifications which are further included in the technical product documentation. The technical product documentation is defined as the:
The technical product documentation can be either a conventional documentation made of two-dimensional engineering drawings, or a documentation based on computer-aided design (CAD) models with 3D annotations. The ISO rules for writing the documentation are mainly described in the ISO 128 and ISO 129 [ 5 ] series, while the rules for 3D annotations are described in ISO 16792. [ 6 ]
ISO Technical Committee 184 [ 7 ] develops standards that are closely related to GPS&V standards. In particular ISO TC 184/SC4 [ 8 ] develops ISO 10303 standard known as STEP standard (see STEP-file ).
GPS&V is not to be confused with ASME Y14.5, which is often referred to as geometric dimensioning and tolerancing (GD&T).
ISO TC 213 was created in 1996 by merging three previous committees: [ 9 ]
GPS&V standards are built on several basic operations defined in ISO 17450-1:2011: [ 10 ]
Those operations are supposed to completely describe the process of tolerancing, from the point of view of design as well as from the point of view of measurement. They are presented in the ISO 17450 standard series. Some of them are further described in other standards, e.g. the ISO 16610 series for filtration. Those concepts are based on academic works. [ 11 ] The key idea is to start from the real part with its imperfect geometry (the skin model) and then to apply a sequence of well-defined operations to completely describe the tolerancing process.
The operations are used in the GPS&V standards to define the meaning of dimensional, geometrical or surface texture specifications.
The skin model is a representation of the surface of the real part. The model in CAD systems describes the nominal geometry of the parts of a product. The nominal geometry is perfect. However, the geometrical tolerancing has to take into account the geometrical deviations that arise inevitably from the manufacturing process in order to limit them to what is considered as acceptable by the designer for the part and the complete product to be functional. This is why a representation of the real part with geometrical deviations (skin model) is introduced as the starting point in the tolerancing process.
The skin model is a representation of a whole real part. However, the designer very often, if not always, needs to identify some specific geometrical features of the part to apply well-suited specifications. The process of identifying geometrical features from the skin model or the nominal model is called a partition. The standardization of this operation is a work in progress in ISO TC 213 (ISO 18183 series).
Several methods can be used to obtain a partition from a skin model. [ 12 ]
The skin model and the partitioned geometrical features are usually considered as continuous, however it is often necessary when measuring the part to consider only points extracted from a line or a surface. The process of e.g. selecting the number of points, their distribution over the real geometrical feature and the way to obtain them is part of the extraction operation.
This operation is described in ISO 14406:2011. [ 13 ]
Filtration is an operation used to separate features of interest from other features in the data. This operation is heavily used for surface texture specifications; however, it is a general operation that can be applied to define other specifications. It is well known in signal processing, where it can be used, for example, to isolate some specific wavelength in a raw signal.
The filtration is standardized in ISO 16610 series where a lot of different filters are described.
Association is useful when one needs to fit an ideal (perfect) geometrical feature to a real geometrical feature, e.g. to find a perfect cylinder that approximates a cloud of points extracted from a real (imperfect) cylindrical geometrical feature. This can be viewed as a mathematical optimization process, for which a criterion has to be defined. The criterion can be the minimisation of a quantity such as the sum of the squares of the distances from the points to the ideal surface. Constraints can also be added, such as a condition for the ideal geometrical feature to lie outside the material of the part, or to have a specific orientation or location relative to another geometrical feature.
Different criteria and constraints are used as defaults throughout the GPS&V standards for different purposes, such as geometrical specification on geometrical features or datum establishment. However, standardization of association as a whole is a work in progress in ISO TC 213.
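As an illustration of the association operation with a least-squares criterion, the following sketch fits an ideal plane to measured points via an SVD (a common numerical approach, not the normative ISO procedure):

```python
import numpy as np

def associate_plane_least_squares(points):
    """Fit an ideal plane to an (N, 3) cloud of measured points,
    minimising the sum of squared point-to-plane distances.
    Returns (a point of the plane, its unit normal)."""
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    # The least-squares normal is the right singular vector associated
    # with the smallest singular value of the centred cloud.
    _, _, vt = np.linalg.svd(pts - centroid)
    return centroid, vt[-1]

# A nominally planar feature with small form deviations.
pts = [[0, 0, 0.01], [1, 0, -0.02], [0, 1, 0.00], [1, 1, 0.01]]
c, n = associate_plane_least_squares(pts)
print(c, n)   # n is close to (0, 0, 1) for this nearly horizontal patch
```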
Collection is a grouping operation. The designer can define a group of geometrical features that contribute to the same function. It could be used to group two or more holes because they constitute one datum used for the assembly of a part. It could also be used to group nominally planar geometrical features that are constrained to lie inside the same flatness tolerance zone . This operation is described throughout several GPS&V standards. It is heavily used in ISO 5458:2018 for grouping planar geometrical features and cylindrical geometrical features (holes or pins).
The collection operation can be viewed as applying constraints of orientation and or constraints of location among the geometrical features of the considered group.
Construction is described as an operation used to build ideal geometrical features with perfect geometry from other geometrical features. An example, given in ISO 17450-1:2011, is the construction of a straight line resulting from the intersection of two perfect planes.
No specific standard addresses this operation; however, it is used and defined throughout many standards in the GPS&V system.
Reconstruction is an operation that builds a continuous geometrical feature from a discrete geometrical feature. It is useful, for example, when there is a need to obtain a point between two extracted points, as can be the case when identifying a dimension between two opposite points in a particular section while obtaining a linear size of a cylinder. The reconstruction operation is not yet standardized in the GPS&V system; however, the operation has been described in academic papers. [ 14 ]
Reduction is an operation that computes a new geometrical feature from an existing one. The new geometrical feature is a derived geometrical feature.
Dimensional tolerances are dealt with in ISO 14405:
The linear size is indicated above a line ended with arrows, with numerical values for the nominal size and the tolerance. The linear size of a geometrical feature of size is defined, by default, as the distances between opposite points taken from the surface of the real part. [ note 1 ] The process to build both the sections and the directions needed to identify the opposite points is defined in the ISO 14405-1 standard. This process includes the definition of an associated perfect geometrical feature of the same type as the nominal geometrical feature; by default, a least-squares criterion is used. This process is defined only for geometrical features where opposite points exist.
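A deliberately simplified sketch of the two-point size idea for one cross-section of a nominally cylindrical feature (illustrative only; the normative ISO 14405-1 procedure constructs the sections and directions from the least-squares associated cylinder):

```python
import numpy as np

def two_point_diameters(section_pts, center):
    """Approximate two-point local diameters of one measured cross-section:
    each point is paired with the sampled point closest to the
    diametrically opposite direction, and the two radii are summed."""
    v = np.asarray(section_pts, dtype=float) - np.asarray(center, dtype=float)
    r = np.hypot(v[:, 0], v[:, 1])
    ang = np.arctan2(v[:, 1], v[:, 0])
    out = []
    for i in range(len(v)):
        # angular distance of every sample to the direction opposite point i
        diff = np.abs((ang - ang[i]) % (2 * np.pi) - np.pi)
        out.append(r[i] + r[np.argmin(diff)])
    return np.array(out)

# Section of nominal diameter 10 with a small three-lobed form deviation.
t = np.linspace(0, 2 * np.pi, 72, endpoint=False)
rho = 5 + 0.02 * np.sin(3 * t)
pts = np.c_[rho * np.cos(t), rho * np.sin(t)]
d = two_point_diameters(pts, (0.0, 0.0))
print(d.min(), d.max())  # both ~10.0: a three-lobed form is invisible
                         # to two-point sizes, a classic pitfall
```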
ISO 14405-2 illustrates cases where dimensional specifications are often misused because opposite points do not exist. In these cases, the use of linear dimensions is considered ambiguous (see example ). The recommendation is to replace dimensional specifications with geometrical specifications, which properly specify the location of a geometrical feature with respect to another geometrical feature, the datum feature (see examples ).
Angular sizes are useful for cones, wedges or opposite straight lines. They are defined in ISO 14405-3. The definition requires associating perfect geometrical features, e.g. planes for a wedge, and measuring the angle between lines of those perfect geometrical features in different sections. Angular sizes are indicated with an arrow and numerical values for the nominal size and the tolerance. It is to be noted that an angular size specification is different from an angularity specification: an angularity specification controls the shape of the toleranced feature, whereas an angular size specification does not.
We consider here the specification of a size of a cylinder to illustrate the definition of a size according to ISO 14405-1. The nominal model is assumed to be a perfect cylinder with a dimensional specification of the diameter without any modifiers changing the default definition of size.
According to ISO 14405-1:2016 annex D, the process to establish a dimension between two opposite points starting from the real surface of the manufactured part which is nominally a cylinder is as follows:
See example hereafter for an illustration.
The envelope requirement is specified by adding the symbol Ⓔ after the tolerance value of a dimensional specification .
The symbol Ⓔ modifies the definition of the dimensional specification in the following way (ISO 14405-1 3.8):
The maximum inscribed dimension for a nominally cylindrical hole is defined as the maximum diameter of a perfect cylinder associated to the real surface with a constraint applied to the associated cylinder to stay outside the material of the part.
The minimum circumscribed dimension for a nominally cylindrical pin is defined as the minimum diameter of a perfect cylinder associated to the real surface with a constraint applied to the associated cylinder to stay outside the material of the part. See example hereafter for an illustration.
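For a single cross-section, the minimum circumscribed diameter can be sketched as a small optimization problem (an illustration only, not the normative procedure; SciPy is assumed to be available):

```python
import numpy as np
from scipy.optimize import minimize

def min_circumscribed_diameter(section_pts):
    """Diameter of the smallest circle enclosing all measured points of one
    section -- the 2D analogue of the minimum circumscribed cylinder."""
    pts = np.asarray(section_pts, dtype=float)
    max_radius = lambda c: np.max(np.hypot(pts[:, 0] - c[0], pts[:, 1] - c[1]))
    res = minimize(max_radius, pts.mean(axis=0), method="Nelder-Mead")
    return 2 * max_radius(res.x)

t = np.linspace(0, 2 * np.pi, 24, endpoint=False)
pts = np.c_[5.01 * np.cos(t), 4.99 * np.sin(t)]   # slightly oval pin section
print(min_circumscribed_diameter(pts))            # close to 10.02
```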
The use of the envelope symbol Ⓔ is closely related to the very common function of fitting parts together. A dimensional specification without envelope on the two parts to be fitted is not sufficient to ensure the fitting, because the shape deviation of the parts is not limited by the dimensional specifications. Fitting a cylindrical pin inside a cylindrical hole, for example, requires limiting the sizes of both geometrical features, but also limiting the straightness deviation of both, as it is the combination of the size specification and the geometrical (straightness) specification that allows the two parts to fit.
Then the cylindrical pin and the cylindrical hole will fit even in the worst conditions, without over-constraining the parts with specific form specifications.
It is to be noted that the use of a dimensional size with envelope constrains neither the orientation nor the location of the parts. The use of a geometrical specification together with the maximum material requirement (symbol Ⓜ) makes it possible to ensure the fitting of parts when additional constraints on orientation or location are required. ISO 2692:2021 [ 18 ] describes the use of the maximum material modifier.
GPS&V standards dealing with geometrical specifications are listed below:
The word geometry, as used in this paragraph, is to be understood as macrogeometry, as opposed to surface texture specifications, which are dealt with in other standards.
The main source for geometrical specifications in the GPS&V standards is ISO 1101. ISO 5459 can be considered a companion standard to ISO 1101, as it defines datums, which are heavily used in ISO 1101.
ISO 5458 and ISO 1660 focus only on subsets of ISO 1101. However, those standards are very useful for users of the GPS&V system, as they cover very common aspects of geometrical tolerancing, namely groups of cylinders or planes, and profile specifications (lines and surfaces).
A geometrical specification defines the following three objects:
The steps to read a geometrical specification can be summarised as follows:
Toleranced features are defined in ISO 1101. The toleranced feature is a real geometrical feature with imperfect geometry identified either directly from the skin model (integral feature) or by a process starting from the skin model (derived feature).
Whether the toleranced feature is an integral feature or a derived feature depends upon the precise writing of the corresponding specification: if the arrow of the leader line of the specification is in the prolongation of a dimension line, the toleranced feature is a derived feature; otherwise it is an integral feature. An Ⓐ modifier can also be used in the specification to designate a derived feature.
The nominal toleranced feature is a geometrical feature with perfect geometry defined in the TPD corresponding to the toleranced feature.
Datums are defined in ISO 5459 as a simulation of a contact partner that is missing in a single-part specification. The contact types "planar touch" and "fit of linear size" are covered by default rules. With this simulation, the specification may deviate from the actual function, which only appears through the assembly constraints.
In essence, the datum is used to link the toleranced feature (imperfect real geometry) to the tolerance zone (perfect geometry). As such, the datum object is threefold:
The link between the orientation, location or run-out specification and the datums is specified in the geometrical specification frame as follows:
Some geometrical specification may not have any datum section at all (e.g. form specification).
The content of each cell can be either:
The process to build a datum system is described first, and the process for building a common datum follows.
A datum is identified by at most three cells in the geometrical specification frame corresponding to primary, secondary and tertiary datums.
For the primary, secondary and tertiary datum, a perfect geometry feature of the same kind [ note 2 ] as the nominal feature is associated to the real feature as described hereafter:
The result is a set of associated features. Finally, this set of associated features is used to build a situation feature which is the specified datum.
The datum features are identified on the skin model from the datum components in the dash-separated list of nominal datums appearing in a particular cell of an orientation or location specification. The common datum can be used as a primary, secondary or tertiary datum. In either case, the process to build a common datum is the same; however, additional orientation constraints shall be added when the common datum is used as a secondary or tertiary datum, as is done for datum systems and explained hereafter.
The criterion for the association of a common datum is applied to all the associated features together, with the following constraints:
The result is a set of associated features. Finally, this set of associated features is used to build a situation feature, which is the specified datum.
The final step in the datum establishment process is to combine the associated features to obtain a final object, defined as a situation feature, which is identified with the specified datum (ISO 5459:2011 Table B.1). It is a member of the following set:
How to build the situation features, and therefore the specified datum, is currently mainly defined through examples in ISO 5459:2011; more specific rules are under development. The specified datum concept is closely related to classes of surfaces that are invariant under displacements. It has been shown that surfaces can be classified according to the displacements that leave them invariant. [ 23 ] The number of classes is seven. If a displacement leaves a surface invariant, then this displacement cannot be locked by the corresponding specified datum; the displacements that do not leave the surface invariant are used to lock specific degrees of freedom of the tolerance zone.
For example, a set of associated datums made of three mutually perpendicular planes corresponds to the following situation feature: a plane containing a straight line containing a point. The plane is the first associated plane obtained, the line is the intersection between the second associated plane and the first one, and the point is the intersection between the line and the third associated plane. The specified datum therefore belongs to the complex invariance class (C_X), and all the degrees of freedom of a tolerance zone can be locked with this specified datum.
The invariance class graphic symbols are not defined in ISO standards, but are only used in the literature as a useful reminder.
A helicoidal class (C_H) can also be defined; however, it is generally replaced with a cylindrical class in real-world applications.
Tolerance zones are defined in ISO 1101. The tolerance zone is a surface or a volume with perfect geometry. It is a surface when it is intended to contain a toleranced feature which is a line; it is a volume when it is intended to contain a toleranced feature which is a surface. It can often be described as a rigid body with the following attributes:
Theoretically exact dimensions (TED) are identified on a nominal model by dimensions with a framed nominal value without any tolerance. Those dimensions are not specifications by themselves, but are needed when applying constraints to build a datum, or to determine the orientation or location of the tolerance zone. TEDs can also be used for other purposes, e.g. to define the nominal shape or dimensions of a profile.
When applying constraints, generally two types of TED are to be taken into account:
The geometrical specifications are divided into three categories:
Run-out specification is another family that involves both form and location:
This paragraph contains examples of dimensional and geometrical specification to illustrate the definition and use of dimensional and positional specifications.
The dimensions and tolerance values (displayed in blue in the figures) shall be numerical values on actual drawings. d, l1, l2 are used for length values. Δd is used for a dimensional tolerance value and t, t1, t2 for positional tolerance values. For each example we present:
The deviations are enlarged compared to actual parts, in order to show as clearly as possible the steps necessary to build the GPS&V operators. First-angle projection is used in the technical drawings.
This example is often surprising for new practitioners of GPS&V. However, it is a direct consequence of the definition of a linear dimension in ISO 14405-1.
The function targeted here is probably to locate the two planes, therefore a location specification on one surface with respect to the other surface or the location of the two surfaces with respect to one another is considered the right way to achieve the function. See examples .
This specification could be useful when one surface (datum plane in this case) has a higher priority in the assembly process. For example a second part could be required to fit inside the slot being guided by the plane where the datum has been indicated.
The part is not conformant to the specification for this particular real part, as the toleranced feature (orange line segment) is not included in the tolerance zone (green).
Case 2 is similar to case 1 above; however, the toleranced feature and the datum are switched, so the result is totally different, as explained above.
This specification could be useful when one surface (datum plane) has a higher priority over the other surface in the assembly process. For example a second part could be required to fit inside the slot being guided by the plane where the datum has been indicated.
The part is not conformant to the specification for this particular real part, as the toleranced feature (orange line segment) is not included in the tolerance zone (green).
This specification could be useful when the two surfaces (plane in this case) have the same priority in the assembly process. For example a second part could be required to fit inside the slot being guided by the two planes.
The part is conformant to the specification for this particular real part, as the toleranced feature (two orange line segments) is included in the tolerance zone (green).
This specification could be useful when the hole is actually located from the edges of the plates in an assembly process, and where the A surface has a higher priority than B. If the assembly process is modified, then the datum specification shall be adapted accordingly. The order of the datums is important in a datum system, as the resulting specified datum can be very different.
The part is conformant to the specification for this particular real part, as the toleranced feature (purple line on the left, purple dot on the right) is included in the tolerance zone (green). | https://en.wikipedia.org/wiki/Geometrical_Product_Specification_and_Verification |
The concept of geometrical continuity was primarily applied to the conic sections (and related shapes) by mathematicians such as Leibniz , Kepler , and Poncelet . It was an early attempt at describing, through geometry rather than algebra, the notion of continuity as expressed through a parametric function. [ 1 ]
The basic idea behind geometric continuity was that the five conic sections were really five different versions of the same shape. An ellipse tends to a circle as the eccentricity approaches zero, or to a parabola as it approaches one; a hyperbola tends to a parabola as the eccentricity drops toward one, and it can also tend to intersecting lines . Thus, there was continuity between the conic sections. These ideas led to other concepts of continuity. For instance, if a circle and a straight line were two expressions of the same shape, perhaps a line could be thought of as a circle of infinite radius . For such to be the case, one would have to make the line closed by allowing the point x = ∞ to be a point on the circle, and for x = +∞ and x = −∞ to be identical. Such ideas were useful in crafting the modern, algebraically defined, idea of the continuity of a function and of ∞ (see projectively extended real line for more). [ 1 ]
| https://en.wikipedia.org/wiki/Geometrical_continuity |
In condensed matter physics , geometrical frustration (or in short, frustration ) is a phenomenon where the combination of conflicting inter-atomic forces leads to complex structures. Frustration can imply a plenitude of distinct ground states at zero temperature , and usual thermal ordering may be suppressed at higher temperatures. Much-studied examples include amorphous materials, glasses , and dilute magnets .
The term frustration , in the context of magnetic systems, was introduced by Gerard Toulouse in 1977. [ 1 ] [ 2 ] Frustrated magnetic systems had been studied even before. Early work includes a study of the Ising model on a triangular lattice with nearest-neighbor spins coupled antiferromagnetically , by G. H. Wannier , published in 1950. [ 3 ] Related features occur in magnets with competing interactions , where both ferromagnetic as well as antiferromagnetic couplings between pairs of spins or magnetic moments are present, with the type of interaction depending on the separation distance of the spins. In that case incommensurate spin arrangements, such as helical ones, may result, as had been discussed originally, especially, by A. Yoshimori, [ 4 ] T. A. Kaplan, [ 5 ] R. J. Elliott , [ 6 ] and others, starting in 1959, to describe experimental findings on rare-earth metals. A renewed interest in such spin systems with frustrated or competing interactions arose about two decades later, beginning in the 1970s, in the context of spin glasses and spatially modulated magnetic superstructures. In spin glasses, frustration is augmented by stochastic disorder in the interactions, as may occur experimentally in non- stoichiometric magnetic alloys . Carefully analyzed spin models with frustration include the Sherrington–Kirkpatrick model , [ 7 ] describing spin glasses, and the ANNNI model , [ 8 ] describing commensurable magnetic superstructures. Recently, the concept of frustration has been used in brain network analysis to identify the non-trivial assemblage of neural connections and highlight the adjustable elements of the brain. [ 9 ]
Geometrical frustration is an important feature in magnetism , where it stems from the relative arrangement of spins . A simple 2D example is shown in Figure 1. Three magnetic ions reside on the corners of a triangle with antiferromagnetic interactions between them; the energy is minimized when each spin is aligned opposite to neighbors. Once the first two spins align antiparallel, the third one is frustrated because its two possible orientations, up and down, give the same energy. The third spin cannot simultaneously minimize its interactions with both of the other two. Since this effect occurs for each spin, the ground state is sixfold degenerate . Only the two states where all spins are up or down have more energy.
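A brute-force check of this sixfold degeneracy, using the Ising energy E = J Σ s_i s_j with J = 1 on the three bonds of the triangle (a sketch):

```python
from itertools import product

J = 1                             # antiferromagnetic: parallel pairs cost energy
bonds = [(0, 1), (1, 2), (0, 2)]  # the three edges of the triangle

levels = {}
for spins in product([-1, +1], repeat=3):
    E = sum(J * spins[i] * spins[j] for i, j in bonds)
    levels.setdefault(E, []).append(spins)

E0 = min(levels)
print(E0, len(levels[E0]))                    # -1, 6: sixfold-degenerate ground state
print(max(levels), len(levels[max(levels)]))  # 3, 2: all-up and all-down cost more
```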
Similarly in three dimensions, four spins arranged in a tetrahedron (Figure 2) may experience geometric frustration. If there is an antiferromagnetic interaction between spins, then it is not possible to arrange the spins so that all interactions between spins are antiparallel. There are six nearest-neighbor interactions, four of which are antiparallel and thus favourable, but two of which (between 1 and 2, and between 3 and 4) are unfavourable. It is impossible to have all interactions favourable, and the system is frustrated.
Geometrical frustration is also possible if the spins are arranged in a non- collinear way. If we consider a tetrahedron with a spin on each vertex pointing along the easy axis (that is, directly towards or away from the centre of the tetrahedron), then it is possible to arrange the four spins so that there is no net spin (Figure 3). This is exactly equivalent to having an antiferromagnetic interaction between each pair of spins, so in this case there is no geometrical frustration. With these axes, geometric frustration arises if there is a ferromagnetic interaction between neighbours, where energy is minimized by parallel spins. The best possible arrangement is shown in Figure 4, with two spins pointing towards the centre and two pointing away. The net magnetic moment points upwards, maximising ferromagnetic interactions in this direction, but left and right vectors cancel out (i.e. are antiferromagnetically aligned), as do forwards and backwards. There are three different equivalent arrangements with two spins out and two in, so the ground state is three-fold degenerate.
The mathematical definition is simple (and analogous to the so-called Wilson loop in quantum chromodynamics ): One considers for example expressions ("total energies" or "Hamiltonians") of the form

{\displaystyle {\mathcal {H}}=-\sum _{\langle k_{\nu },k_{\mu }\rangle \in G}I_{k_{\nu },k_{\mu }}\,S_{k_{\nu }}\cdot S_{k_{\mu }},}
where G is the graph considered, whereas the quantities I_{kν,kμ} are the so-called "exchange energies" between nearest-neighbours, which (in the energy units considered) assume the values ±1 (mathematically, this is a signed graph ), while the S_{kν} · S_{kμ} are inner products of scalar or vectorial spins or pseudo-spins. If the graph G has quadratic or triangular faces P , the so-called "plaquette variables" P_W , "loop-products" of the following kind, appear:

{\displaystyle P_{W}=\prod _{\langle k_{\nu },k_{\mu }\rangle \in \partial P}I_{k_{\nu },k_{\mu }},}
which are also called "frustration products". One then sums these products over all plaquettes. The result for a single plaquette is either +1 or −1. In the latter case the plaquette is "geometrically frustrated".
It can be shown that the result has a simple gauge invariance : it does not change – nor do other measurable quantities, e.g. the "total energy" {\displaystyle {\mathcal {H}}} – even if locally the exchange integrals and the spins are simultaneously modified as follows:

{\displaystyle I_{i,k}\to \varepsilon _{i}\varepsilon _{k}I_{i,k},\quad S_{i}\to \varepsilon _{i}S_{i},\quad S_{k}\to \varepsilon _{k}S_{k}.}
Here the numbers ε i and ε k are arbitrary signs, i.e. +1 or −1, so that the modified structure may look totally random.
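A quick numerical sketch of this gauge invariance on a single square plaquette (the signs are chosen at random; illustrative only):

```python
import random

random.seed(1)
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]           # one square plaquette
I = {e: random.choice([-1, +1]) for e in edges}    # exchange signs
eps = [random.choice([-1, +1]) for _ in range(4)]  # local gauge signs

def plaquette_product(couplings):
    p = 1
    for e in edges:
        p *= couplings[e]
    return p   # +1: unfrustrated, -1: geometrically frustrated

I_gauged = {(i, k): eps[i] * eps[k] * I[(i, k)] for i, k in edges}
# Each eps appears twice around the loop, so the product is unchanged.
print(plaquette_product(I), plaquette_product(I_gauged))
```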
Although most previous and current research on frustration focuses on spin systems, the phenomenon was first studied in ordinary ice . In 1936 Giauque and Stout published The Entropy of Water and the Third Law of Thermodynamics. Heat Capacity of Ice from 15 K to 273 K , reporting calorimeter measurements on water through the freezing and vaporization transitions up to the high-temperature gas phase. The entropy was calculated by integrating the heat capacity and adding the latent heat contributions; the low-temperature measurements were extrapolated to zero, using Debye's then recently derived formula. [ 10 ] The resulting entropy, S 1 = 44.28 cal/(K·mol) = 185.3 J/(mol·K), was compared to the theoretical result from statistical mechanics of an ideal gas, S 2 = 45.10 cal/(K·mol) = 188.7 J/(mol·K). The two values differ by S 0 = 0.82 ± 0.05 cal/(K·mol) = 3.4 J/(mol·K). This result was then explained to an excellent approximation by Linus Pauling , [ 11 ] who showed that ice possesses a finite entropy (estimated as 0.81 cal/(K·mol), or 3.4 J/(mol·K)) at zero temperature due to the configurational disorder intrinsic to the protons in ice.
In the hexagonal or cubic ice phase the oxygen ions form a tetrahedral structure with an O–O bond length 2.76 Å (276 pm ), while the O–H bond length measures only 0.96 Å (96 pm). Every oxygen (white) ion is surrounded by four hydrogen ions (black) and each hydrogen ion is surrounded by 2 oxygen ions, as shown in Figure 5. Maintaining the internal H 2 O molecule structure, the minimum energy position of a proton is not half-way between two adjacent oxygen ions. There are two equivalent positions a hydrogen may occupy on the line of the O–O bond, a far and a near position. Thus a rule leads to the frustration of positions of the proton for a ground state configuration: for each oxygen two of the neighboring protons must reside in the far position and two of them in the near position, so-called ‘ ice rules ’. Pauling proposed that the open tetrahedral structure of ice affords many equivalent states satisfying the ice rules.
Pauling went on to compute the configurational entropy in the following way: consider one mole of ice, consisting of N O²⁻ ions and 2N protons. Each O–O bond has two positions for a proton, leading to 2^{2N} possible configurations. However, among the 16 possible configurations associated with each oxygen, only 6 are energetically favorable, maintaining the H 2 O molecule constraint. An upper bound on the number of configurations the ground state can take is then estimated as Ω < 2^{2N}(6/16)^N. Correspondingly the configurational entropy S 0 = k B ln( Ω ) = Nk B ln( 3 / 2 ) = 0.81 cal/(K·mol) = 3.4 J/(mol·K) is in amazing agreement with the missing entropy measured by Giauque and Stout.
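The arithmetic behind Pauling's estimate, as a quick check (R = N_A k_B is the gas constant; the conversion uses 1 cal = 4.184 J):

```python
import math

R = 8.314               # J/(mol K), gas constant
S0 = R * math.log(1.5)  # N k_B ln(3/2) per mole of ice
print(S0)               # about 3.37 J/(mol K)
print(S0 / 4.184)       # about 0.81 cal/(K mol), matching the measured deficit
```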
Although Pauling's calculation neglected both the global constraint on the number of protons and the local constraint arising from closed loops on the Wurtzite lattice, the estimate was subsequently shown to be of excellent accuracy.
A mathematically analogous situation to the degeneracy in water ice is found in the spin ices . A common spin ice structure is shown in Figure 6 in the cubic pyrochlore structure with one magnetic atom or ion residing on each of the four corners. Due to the strong crystal field in the material, each of the magnetic ions can be represented by an Ising ground state doublet with a large moment. This suggests a picture of Ising spins residing on the corner-sharing tetrahedral lattice with spins fixed along the local quantization axis, the <111> cubic axes , which coincide with the lines connecting each tetrahedral vertex to the center. Every tetrahedral cell must have two spins pointing in and two pointing out in order to minimize the energy. Currently the spin ice model has been approximately realized by real materials, most notably the rare earth pyrochlores Ho 2 Ti 2 O 7 , Dy 2 Ti 2 O 7 , and Ho 2 Sn 2 O 7 . These materials all show nonzero residual entropy at low temperature.
The spin ice model is only one subdivision of frustrated systems. The word frustration was initially introduced to describe a system's inability to simultaneously minimize the competing interaction energy between its components. In general frustration is caused either by competing interactions due to site disorder (see also the Villain model [ 12 ] ) or by lattice structure such as in the triangular , face-centered cubic (fcc), hexagonal-close-packed , tetrahedron , pyrochlore and kagome lattices with antiferromagnetic interaction. So frustration is divided into two categories: the first corresponds to the spin glass , which has both disorder in structure and frustration in spin; the second is the geometrical frustration with an ordered lattice structure and frustration of spin. The frustration of a spin glass is understood within the framework of the RKKY model, in which the interaction property, either ferromagnetic or anti-ferromagnetic, is dependent on the distance of the two magnetic ions. Due to the lattice disorder in the spin glass, one spin of interest and its nearest neighbors could be at different distances and have a different interaction property, which thus leads to different preferred alignment of the spin.
With the help of lithography techniques, it is possible to fabricate sub-micrometer size magnetic islands whose geometric arrangement reproduces the frustration found in naturally occurring spin ice materials. Recently R. F. Wang et al. reported [ 13 ] the discovery of an artificial geometrically frustrated magnet composed of arrays of lithographically fabricated single-domain ferromagnetic islands. These islands are manually arranged to create a two-dimensional analog to spin ice. The magnetic moments of the ordered ‘spin’ islands were imaged with magnetic force microscopy (MFM) and then the local accommodation of frustration was thoroughly studied. In their previous work on a square lattice of frustrated magnets, they observed both ice-like short-range correlations and the absence of long-range correlations, just like in spin ice at low temperature. These results solidify the uncharted ground on which the real physics of frustration can be visualized and modeled by these artificial geometrically frustrated magnets, and inspire further research activity.
These artificially frustrated ferromagnets can exhibit unique magnetic properties when studying their global response to an external field using the magneto-optical Kerr effect. [ 14 ] In particular, a non-monotonic angular dependence of the square-lattice coercivity is found to be related to disorder in the artificial spin ice system.
Another type of geometrical frustration arises from the propagation of a local order. A main question that a condensed matter physicist faces is to explain the stability of a solid.
It is sometimes possible to establish some local rules, of chemical nature, which lead to low energy configurations and therefore govern structural and chemical order. This is not generally the case and often the local order defined by local interactions cannot propagate freely, leading to geometric frustration. A common feature of all these systems is that, even with simple local rules, they present a large set of, often complex, structural realizations. Geometric frustration plays a role in fields of condensed matter, ranging from clusters and amorphous solids to complex fluids.
The general method of approach to resolve these complications follows two steps. First, the constraint of perfect space-filling is relaxed by allowing for space curvature. An ideal, unfrustrated, structure is defined in this curved space. Then, specific distortions are applied to this ideal template in order to embed it into three dimensional Euclidean space. The final structure is a mixture of ordered regions, where the local order is similar to that of the template, and defects arising from the embedding. Among the possible defects, disclinations play an important role.
Two-dimensional examples are helpful in order to get some understanding about the origin of the competition between local rules and geometry in the large. Consider first an arrangement of identical discs (a model for a hypothetical two-dimensional metal) on a plane; we suppose that the interaction between discs is isotropic and locally tends to arrange the discs in the densest way possible. The best arrangement for three discs is trivially an equilateral triangle with the disc centers located at the triangle vertices. The study of the long-range structure can therefore be reduced to that of plane tilings with equilateral triangles. A well-known solution is provided by the triangular tiling with a total compatibility between the local and global rules: the system is said to be "unfrustrated".
But now, the interaction energy is supposed to be at a minimum when atoms sit on the vertices of a regular pentagon . Trying to propagate in the long range a packing of these pentagons sharing edges (atomic bonds) and vertices (atoms) is impossible. This is due to the impossibility of tiling a plane with regular pentagons, simply because the pentagon vertex angle does not divide 2 π . Three such pentagons can easily fit at a common vertex, but a gap remains between two edges. It is this kind of discrepancy which is called "geometric frustration". There is one way to overcome this difficulty. Let the surface to be tiled be free of any presupposed topology, and let us build the tiling with a strict application of the local interaction rule. In this simple example, we observe that the surface inherits the topology of a sphere and so receives a curvature. The final structure, here a pentagonal dodecahedron, allows for a perfect propagation of the pentagonal order. It is called an "ideal" (defect-free) model for the considered structure.
The stability of metals is a longstanding question of solid state physics, which can only be understood in the quantum mechanical framework by properly taking into account the interaction between the positively charged ions and the valence and conduction electrons. It is nevertheless possible to use a very simplified picture of metallic bonding that keeps only an isotropic type of interaction, leading to structures which can be represented as densely packed spheres. And indeed the crystalline simple metal structures are often either close-packed face-centered cubic (fcc) or hexagonal close packing (hcp) lattices. To some extent amorphous metals and quasicrystals can also be modeled by close packing of spheres. The local atomic order is well modeled by a close packing of tetrahedra, leading to an imperfect icosahedral order.
A regular tetrahedron is the densest configuration for the packing of four equal spheres. The dense random packing of hard spheres problem can thus be mapped on the tetrahedral packing problem. It is a practical exercise to try to pack table tennis balls in order to form only tetrahedral configurations. One starts with four balls arranged as a perfect tetrahedron, and tries to add new spheres, while forming new tetrahedra. The next solution, with five balls, is trivially two tetrahedra sharing a common face; note that already with this solution, the fcc structure, which contains individual tetrahedral holes, does not show such a configuration (the tetrahedra share edges, not faces). With six balls, three regular tetrahedra are built, and the cluster is incompatible with all compact crystalline structures (fcc and hcp). Adding a seventh sphere gives a new cluster consisting of two "axial" balls touching each other and five others touching the latter two balls, the outer shape being an almost regular pentagonal bipyramid. However, we are facing now a real packing problem, analogous to the one encountered above with the pentagonal tiling in two dimensions. The dihedral angle of a tetrahedron is not commensurable with 2 π ; consequently, a hole remains between two faces of neighboring tetrahedra. As a consequence, a perfect tiling of the Euclidean space R 3 is impossible with regular tetrahedra. The frustration has a topological character: it is impossible to fill Euclidean space with tetrahedra, even severely distorted, if we impose that a constant number of tetrahedra (here five) share a common edge.
The next step is crucial: the search for an unfrustrated structure by allowing for curvature in the space , in order for the local configurations to propagate identically and without defects throughout the whole space.
Twenty irregular tetrahedra pack with a common vertex in such a way that the twelve outer vertices form a regular icosahedron. Indeed, the icosahedron edge length l is slightly longer than the circumsphere radius r ( l ≈ 1.05 r ). There is a solution with regular tetrahedra if the space is not Euclidean, but spherical. It is the polytope {3,3,5}, using the Schläfli notation, also known as the 600-cell .
There are one hundred and twenty vertices which all belong to the hypersphere S 3 , with radius equal to the golden ratio ( φ = (1 + √5)/2 ) if the edges are of unit length. The six hundred cells are regular tetrahedra grouped by five around a common edge and by twenty around a common vertex. This structure is called a polytope (see Coxeter ), which is the general name in higher dimensions in the series containing polygons and polyhedra. Even if this structure is embedded in four dimensions, it has been considered as a three-dimensional (curved) manifold. This point is conceptually important for the following reason. The ideal models that have been introduced in the curved space are three-dimensional curved templates. They look locally like three-dimensional Euclidean models. So, the {3,3,5} polytope, which is a tiling by tetrahedra, provides a very dense atomic structure if atoms are located on its vertices. It is therefore naturally used as a template for amorphous metals, but one should not forget that it is at the price of successive idealizations. | https://en.wikipedia.org/wiki/Geometrical_frustration |
In mathematics , a univariate polynomial of degree n with real or complex coefficients has n complex roots , if counted with their multiplicities . They form a multiset of n points in the complex plane . This article concerns the geometry of these points, that is the information about their localization in the complex plane that can be deduced from the degree and the coefficients of the polynomial.
Some of these geometrical properties are related to a single polynomial, such as upper bounds on the absolute values of the roots, which define a disk containing all roots, or lower bounds on the distance between two roots. Such bounds are widely used for root-finding algorithms for polynomials, either for tuning them, or for computing their computational complexity .
Some other properties are probabilistic, such as the expected number of real roots of a random polynomial of degree n with real coefficients, which is less than 1 + 2 π ln ( n ) {\displaystyle 1+{\frac {2}{\pi }}\ln(n)} for n sufficiently large.
In this article, the polynomial under consideration is always denoted

{\displaystyle p(x)=a_{0}+a_{1}x+\cdots +a_{n}x^{n},}

where a_0, …, a_n are real or complex numbers and a_n ≠ 0; thus n is the degree of the polynomial.
The n roots of a polynomial of degree n depend continuously on the coefficients. For simple roots, this results immediately from the implicit function theorem . This is true also for multiple roots, but some care is needed for the proof.
A small change of coefficients may induce a dramatic change of the roots, including the change of a real root into a complex root with a rather large imaginary part (see Wilkinson's polynomial ). A consequence is that, for classical numeric root-finding algorithms , the problem of approximating the roots given the coefficients can be ill-conditioned for many inputs.
The complex conjugate root theorem states that if the coefficients of a polynomial are real, then the non-real roots appear in pairs of the form ( a + ib , a – ib ) .
It follows that the roots of a polynomial with real coefficients are mirror-symmetric with respect to the real axis.
This can be extended to algebraic conjugation : the roots of a polynomial with rational coefficients are conjugate (that is, invariant) under the action of the Galois group of the polynomial. However, this symmetry can rarely be interpreted geometrically.
Upper bounds on the absolute values of polynomial roots are widely used for root-finding algorithms , either for limiting the regions where roots should be searched, or for the computation of the computational complexity of these algorithms.
Many such bounds have been given, and the sharpest one generally depends on the specific sequence of coefficients under consideration. Most bounds are greater than or equal to one, and are thus not sharp for a polynomial which has only roots of absolute value lower than one. However, such polynomials are very rare, as shown below.
Any upper bound on the absolute values of roots provides a corresponding lower bound. In fact, if a_n ≠ 0 and U is an upper bound of the absolute values of the roots of

{\displaystyle p(x)=a_{0}+a_{1}x+\cdots +a_{n}x^{n},}

then 1/ U is a lower bound of the absolute values of the roots of the reciprocal polynomial

{\displaystyle x^{n}p(1/x)=a_{n}+a_{n-1}x+\cdots +a_{0}x^{n},}

since the roots of either polynomial are the multiplicative inverses of the roots of the other. Therefore, in the remainder of the article, lower bounds will not be given explicitly .
Lagrange and Cauchy were the first to provide upper bounds on all complex roots. [ 1 ] Lagrange's bound is [ 2 ]

{\displaystyle \max \left(1,\sum _{i=0}^{n-1}\left|{\frac {a_{i}}{a_{n}}}\right|\right),}
and Cauchy's bound is [ 3 ]

{\displaystyle 1+\max \left(\left|{\frac {a_{n-1}}{a_{n}}}\right|,\left|{\frac {a_{n-2}}{a_{n}}}\right|,\ldots ,\left|{\frac {a_{0}}{a_{n}}}\right|\right).}
Lagrange's bound is sharper (smaller) than Cauchy's bound only when 1 is larger than the sum of all the ratios |a_i/a_n| but the largest. This is relatively rare in practice, and explains why Cauchy's bound is more widely used than Lagrange's.
Both bounds result from the Gershgorin circle theorem applied to the companion matrix of the polynomial and its transpose . They can also be proved by elementary methods.
If z is a root of the polynomial with | z | ≥ 1, one has

{\displaystyle |a_{n}||z|^{n}=\left|\sum _{i=0}^{n-1}a_{i}z^{i}\right|\leq \sum _{i=0}^{n-1}|a_{i}||z|^{i}\leq |z|^{n-1}\sum _{i=0}^{n-1}|a_{i}|.}
Dividing by |a_n||z|^{n−1}, one gets

{\displaystyle |z|\leq \sum _{i=0}^{n-1}\left|{\frac {a_{i}}{a_{n}}}\right|,}
which is Lagrange's bound when there is at least one root of absolute value larger than 1. Otherwise, 1 is a bound on the roots, and is not larger than Lagrange's bound.
Similarly, for Cauchy's bound, one has, if | z | > 1,

{\displaystyle |a_{n}||z|^{n}\leq \max _{0\leq i<n}|a_{i}|\,{\frac {|z|^{n}-1}{|z|-1}}<\max _{0\leq i<n}|a_{i}|\,{\frac {|z|^{n}}{|z|-1}}.}
Thus

{\displaystyle |a_{n}|(|z|-1)<\max _{0\leq i<n}|a_{i}|.}
Solving in | z | , one gets Cauchy's bound if there is a root of absolute value larger than 1. Otherwise the bound is also correct, as Cauchy's bound is larger than 1.
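A sketch computing both bounds and comparing them with the actual root moduli (NumPy's root finder is used only for the check; the example polynomial is arbitrary):

```python
import numpy as np

def lagrange_bound(a):   # a = [a_0, a_1, ..., a_n] with a[-1] != 0
    return max(1.0, sum(abs(c / a[-1]) for c in a[:-1]))

def cauchy_bound(a):
    return 1.0 + max(abs(c / a[-1]) for c in a[:-1])

a = [-2.0, 0.0, 1.0, 1.0]          # p(x) = x^3 + x^2 - 2
roots = np.roots(a[::-1])          # np.roots expects the highest degree first
print(lagrange_bound(a),           # 3.0
      cauchy_bound(a),             # 3.0
      max(abs(roots)))             # about 1.414: both bounds hold
```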
These bounds are not invariant by scaling. That is, the roots of the polynomial p ( sx ) are the quotient by s of the roots of p , and the bounds given for the roots of p ( sx ) are not the quotient by s of the bounds of p . Thus, one may get sharper bounds by minimizing over possible scalings. This gives
and
for Lagrange's and Cauchy's bounds respectively.
Another bound, originally given by Lagrange, but attributed to Zassenhaus by Donald Knuth , is [ 4 ]

{\displaystyle 2\max \left(\left|{\frac {a_{n-1}}{a_{n}}}\right|,\left|{\frac {a_{n-2}}{a_{n}}}\right|^{\frac {1}{2}},\ldots ,\left|{\frac {a_{0}}{a_{n}}}\right|^{\frac {1}{n}}\right).}
This bound is invariant by scaling.
Let A be the largest |a_i/a_n|^{1/(n−i)} for 0 ≤ i < n . Thus one has

{\displaystyle |a_{i}|\leq |a_{n}|A^{n-i}}
for 0 ≤ i < n . If z is a root of p , one has

{\displaystyle a_{n}z^{n}=-\sum _{i=0}^{n-1}a_{i}z^{i},}
and thus, after dividing by a_n,

{\displaystyle |z|^{n}\leq \sum _{i=0}^{n-1}A^{n-i}|z|^{i}.}
As we want to prove | z | ≤ 2 A , we may suppose that | z | > A (otherwise there is nothing to prove).
Thus

{\displaystyle |z|^{n}\leq \sum _{i=0}^{n-1}A^{n-i}|z|^{i}<A|z|^{n-1}\sum _{j=0}^{\infty }\left({\frac {A}{|z|}}\right)^{j}={\frac {A|z|^{n}}{|z|-A}},}
which gives the result, since | z | > A . {\displaystyle |z|>A.}
Lagrange improved this latter bound into the sum of the two largest values (possibly equal) in the sequence [ 4 ]

{\displaystyle \left[\left|{\frac {a_{0}}{a_{n}}}\right|^{\frac {1}{n}},\left|{\frac {a_{1}}{a_{n}}}\right|^{\frac {1}{n-1}},\ldots ,\left|{\frac {a_{n-1}}{a_{n}}}\right|\right].}
Lagrange also provided the bound [ citation needed ]
where a_i denotes the i -th nonzero coefficient when the terms of the polynomial are sorted by increasing degrees.
Hölder's inequality allows the extension of Lagrange's and Cauchy's bounds to every h -norm . The h -norm of a sequence s = (a_0, …, a_n) is

{\displaystyle \|s\|_{h}=\left(|a_{0}|^{h}+|a_{1}|^{h}+\cdots +|a_{n}|^{h}\right)^{\frac {1}{h}}}

for any real number h ≥ 1 , and

{\displaystyle \|s\|_{\infty }=\max _{0\leq i\leq n}|a_{i}|.}
If 1/h + 1/k = 1, with 1 ≤ h , k ≤ ∞ and 1/∞ = 0 , an upper bound on the absolute values of the roots of p is

{\displaystyle \left(1+\left\|\left({\frac {a_{0}}{a_{n}}},{\frac {a_{1}}{a_{n}}},\ldots ,{\frac {a_{n-1}}{a_{n}}}\right)\right\|_{h}^{k}\right)^{\frac {1}{k}}.}
For k = 1 and k = ∞ , one gets respectively Cauchy's and Lagrange's bounds.
For h = k = 2 , one has the bound

{\displaystyle {\frac {\sqrt {|a_{0}|^{2}+|a_{1}|^{2}+\cdots +|a_{n}|^{2}}}{|a_{n}|}}.}
This is not only a bound of the absolute values of the roots, but also a bound of the product of their absolute values larger than 1; see § Landau's inequality , below.
Let z be a root of the polynomial
Setting
we have to prove that every root z of p satisfies
If | z | ≤ 1 , {\displaystyle |z|\leq 1,} the inequality is true; so, one may suppose | z | > 1 {\displaystyle |z|>1} for the remainder of the proof.
Writing the equation as
Hölder's inequality implies
If k = ∞ , this is
Thus
In the case 1 ≤ k < ∞ , the summation formula for a geometric progression gives
Thus
which simplifies to
Thus, in all cases
which finishes the proof.
Many other upper bounds for the magnitudes of all roots have been given. [ 5 ]
Fujiwara's bound [ 6 ]
slightly improves the bound given above by dividing the last argument of the maximum by two.
Kojima's bound is [ 7 ] [ verification needed ]
where a i {\displaystyle a_{i}} denotes the i th nonzero coefficient when the terms of the polynomial are sorted by increasing degree. If all coefficients are nonzero, Fujiwara's bound is sharper, since each element in Fujiwara's bound is the geometric mean of the first elements in Kojima's bound.
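The displayed formulas are omitted in this copy. Fujiwara's bound in its standard form is 2·max(|a_{n−1}/a_n|, |a_{n−2}/a_n|^{1/2}, …, |a_0/(2a_n)|^{1/n}). The Kojima variant below is one common consecutive-ratio reading, normalized so that the geometric-mean relation just quoted holds; since the text itself carries a [verification needed] tag for Kojima's bound, treat this function as a hedged assumption to be checked against the original reference:

```python
import numpy as np

def fujiwara_bound(coeffs):
    """2 * max(|a_{n-1}/a_n|, |a_{n-2}/a_n|^(1/2), ..., |a_0/(2 a_n)|^(1/n))."""
    a = np.asarray(coeffs, dtype=float)
    n = len(a) - 1
    terms = [abs(a[n - i] / a[n]) ** (1.0 / i) for i in range(1, n)]
    terms.append(abs(a[0] / (2.0 * a[n])) ** (1.0 / n))   # last argument halved
    return 2.0 * max(terms)

def kojima_bound(coeffs):
    """Hypothetical consecutive-ratio reading of Kojima's bound
    (all coefficients assumed nonzero); see the caveat above."""
    a = np.asarray(coeffs, dtype=float)
    n = len(a) - 1
    terms = [abs(a[i - 1] / a[i]) for i in range(n, 1, -1)]
    terms.append(abs(a[0] / (2.0 * a[1])))
    return 2.0 * max(terms)

p = [-6.0, 11.0, -6.0, 1.0]                  # roots 1, 2, 3
print(fujiwara_bound(p), kojima_bound(p))    # 12.0 12.0 for this example
```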
Sun and Hsieh obtained another improvement on Cauchy's bound. [ 8 ] Assume the polynomial is monic with general term a i x i . Sun and Hsieh showed that upper bounds 1 + d 1 and 1 + d 2 could be obtained from the following equations.
d 2 is the positive root of the cubic equation
They also noted that d 2 ≤ d 1 .
The previous bounds are upper bounds for each root separately. Landau's inequality provides an upper bound for the absolute values of the product of the roots that have an absolute value greater than one. This inequality, discovered in 1905 by Edmund Landau , [ 9 ] has been forgotten and rediscovered at least three times during the 20th century. [ 10 ] [ 11 ] [ 12 ]
This bound of the product of roots is not much greater than the best preceding bounds of each root separately. [ 13 ] Let z 1 , … , z n {\displaystyle z_{1},\ldots ,z_{n}} be the n roots of the polynomial p . If
is the Mahler measure of p ,
then
Surprisingly, this bound of the product of those absolute values of the roots that are larger than 1 is not much larger than the best bounds for a single root given above. This bound is even exactly equal to one of the bounds that are obtained using Hölder's inequality .
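Landau's inequality in its standard form states M(p) ≤ ‖p‖₂, where M(p) = |a_n| Π_i max(1, |z_i|) is the Mahler measure; note that ‖p‖₂/|a_n| is exactly the h = k = 2 Hölder bound above. A numerical sanity check (the example polynomial is illustrative):

```python
import numpy as np

def mahler_measure(coeffs):
    """M(p) = |a_n| * prod_i max(1, |z_i|), evaluated numerically."""
    a = np.asarray(coeffs, dtype=float)
    roots = np.roots(a[::-1])                 # np.roots wants highest degree first
    return abs(a[-1]) * np.prod(np.maximum(1.0, np.abs(roots)))

p = [-6.0, 11.0, -6.0, 1.0]                   # roots 1, 2, 3
print(mahler_measure(p))                       # 6.0 = 1 * 2 * 3
print(np.linalg.norm(p))                       # sqrt(194) ~ 13.93 >= M(p): Landau
```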
This bound is also useful to bound the coefficients of a divisor of a polynomial with integer coefficients: [ 14 ] if
is a divisor of p , then
and, by Vieta's formulas ,
for i = 0, ..., m , where ( m i ) {\displaystyle {\binom {m}{i}}} is a binomial coefficient . Thus
and
Rouché's theorem allows defining discs centered at zero and containing a given number of roots. More precisely, if there is a positive real number R and an integer 0 ≤ k ≤ n such that
then there are exactly k roots, counted with multiplicity, of absolute value less than R .
If | z | = R , {\displaystyle |z|=R,} then
By Rouché's theorem, this implies directly that p ( z ) {\displaystyle p(z)} and z k {\displaystyle z^{k}} have the same number of roots of absolute value less than R , counted with multiplicities. As this number is k , the result is proved.
The above result may be applied if the polynomial
takes a negative value for some positive real value of x .
In the remainder of this section, suppose that a 0 ≠ 0 . If that is not the case, zero is a root, and the localization of the other roots may be studied by dividing the polynomial by a power of the indeterminate, getting a polynomial with a nonzero constant term.
For k = 0 and k = n , Descartes' rule of signs shows that the polynomial has exactly one positive real root. If R 0 {\displaystyle R_{0}} and R n {\displaystyle R_{n}} are these roots, the above result shows that all the roots satisfy
As these inequalities apply also to h 0 {\displaystyle h_{0}} and h n , {\displaystyle h_{n},} these bounds are optimal for polynomials with a given sequence of the absolute values of their coefficients. They are thus sharper than all bounds given in the preceding sections.
For 0 < k < n , Descartes' rule of signs implies that h k ( x ) {\displaystyle h_{k}(x)} either has two positive real roots that are not multiple, or is nonnegative for every positive value of x . So, the above result may be applied only in the first case. If R k , 1 < R k , 2 {\displaystyle R_{k,1}<R_{k,2}} are these two roots, the above result implies that
for k roots of p , and that
for the n – k other roots.
Instead of explicitly computing R k , 1 {\displaystyle R_{k,1}} and R k , 2 , {\displaystyle R_{k,2},} it is generally sufficient to compute a value R k {\displaystyle R_{k}} such that h k ( R k ) < 0 {\displaystyle h_{k}(R_{k})<0} (necessarily R k , 1 < R k < R k , 2 {\displaystyle R_{k,1}<R_{k}<R_{k,2}} ). These R k {\displaystyle R_{k}} have the property of separating roots in terms of their absolute values: if, for h < k , both R h {\displaystyle R_{h}} and R k {\displaystyle R_{k}} exist, there are exactly k – h roots z such that R h < | z | < R k . {\displaystyle R_{h}<|z|<R_{k}.}
For computing R k , {\displaystyle R_{k},} one can use the fact that h ( x ) x k {\displaystyle {\frac {h(x)}{x^{k}}}} is a convex function (its second derivative is positive). Thus R k {\displaystyle R_{k}} exists if and only if h ( x ) x k {\displaystyle {\frac {h(x)}{x^{k}}}} is negative at its unique minimum. For computing this minimum, one can use any optimization method, or, alternatively, Newton's method for computing the unique positive zero of the derivative of h ( x ) x k {\displaystyle {\frac {h(x)}{x^{k}}}} (it converges rapidly, as the derivative is a monotonic function ).
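A minimal sketch of this computation, assuming h_k(x) = Σ_{i≠k} |a_i| x^i − |a_k| x^k as in the Rouché condition above; a plain ternary search on the convex function h_k(x)/x^k stands in here for the Newton iteration the text suggests. All names, bracket values, and the example are illustrative assumptions:

```python
import numpy as np

def f(a_abs, k, x):
    """h_k(x)/x^k with h_k(x) = sum_{i != k} |a_i| x^i - |a_k| x^k."""
    n = len(a_abs) - 1
    return sum(a_abs[i] * x ** (i - k) for i in range(n + 1) if i != k) - a_abs[k]

def find_Rk(coeffs, k, lo=1e-6, hi=1e6, iters=300):
    """Return an R_k with h_k(R_k) < 0 if the convex minimum of h_k(x)/x^k
    is negative, else None (then this k separates no roots)."""
    a_abs = np.abs(np.asarray(coeffs, dtype=float))
    for _ in range(iters):                    # ternary search for the minimum
        m1, m2 = lo + (hi - lo) / 3.0, hi - (hi - lo) / 3.0
        if f(a_abs, k, m1) < f(a_abs, k, m2):
            hi = m2
        else:
            lo = m1
    x = 0.5 * (lo + hi)
    return x if f(a_abs, k, x) < 0.0 else None

p = [10.0, -11.0, 1.0]          # x^2 - 11x + 10, roots 1 and 10
print(find_Rk(p, 1))            # ~3.16: exactly one root inside, one outside
```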
One can increase the number of existing R k {\displaystyle R_{k}} 's by applying the root squaring operation of the Dandelin–Graeffe iteration . If the roots have distinct absolute values, one can eventually completely separate the roots in terms of their absolute values, that is, compute n + 1 positive numbers R 0 < R 1 < ⋯ < R n {\displaystyle R_{0}<R_{1}<\dots <R_{n}} such that there is exactly one root with an absolute value in the open interval ( R k − 1 , R k ) , {\displaystyle (R_{k-1},R_{k}),} for k = 1, ..., n .
The Gershgorin circle theorem applies the companion matrix of the polynomial on a basis related to Lagrange interpolation to define discs centered at the interpolation points, each containing a root of the polynomial; see Durand–Kerner method § Root inclusion via Gerschgorin's circles for details.
If the interpolation points are close to the roots of the polynomial, the radii of the discs are small, and this is a key ingredient of the Durand–Kerner method for computing polynomial roots.
For polynomials with real coefficients, it is often useful to bound only the real roots. It suffices to bound the positive roots, as the negative roots of p ( x ) are the positive roots of p (– x ) .
Clearly, every bound of all roots applies also for real roots. But in some contexts, tighter bounds of real roots are useful. For example, the efficiency of the method of continued fractions for real-root isolation strongly depends on tightness of a bound of positive roots. This has led to establishing new bounds that are tighter than the general bounds of all roots. These bounds are generally expressed not only in terms of the absolute values of the coefficients, but also in terms of their signs.
Other bounds apply only to polynomials all of whose roots are real (see below).
To give a bound of the positive roots, one can assume a n > 0 {\displaystyle a_{n}>0} without loss of generality, as changing the signs of all coefficients does not change the roots.
Every upper bound of the positive roots of
is also a bound of the real zeros of
In fact, if B is such a bound, for all x > B , one has p ( x ) ≥ q ( x ) > 0 .
Applied to Cauchy's bound, this gives the upper bound
for the real roots of a polynomial with real coefficients. If this bound is not greater than 1 , this means that all nonzero coefficients have the same sign, and that there is no positive root.
Similarly, another upper bound of the positive roots is
If all nonzero coefficients have the same sign, there is no positive root, and the maximum is then taken to be zero.
Other bounds have been recently developed, mainly for the method of continued fractions for real-root isolation . [ 15 ] [ 16 ]
If all roots of a polynomial are real, Laguerre proved the following lower and upper bounds of the roots, by using what is now called Samuelson's inequality . [ 17 ]
Let ∑ k = 0 n a k x k {\displaystyle \sum _{k=0}^{n}a_{k}x^{k}} be a polynomial with all real roots. Then its roots are located in the interval with endpoints
For example, the roots of the polynomial x 4 + 5 x 3 + 5 x 2 − 5 x − 6 = ( x + 3 ) ( x + 2 ) ( x + 1 ) ( x − 1 ) {\displaystyle x^{4}+5x^{3}+5x^{2}-5x-6=(x+3)(x+2)(x+1)(x-1)} satisfy
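The endpoint formulas are omitted in this copy; the standard Samuelson form of the endpoints is −a_{n−1}/(n a_n) ± ((n−1)/n)·sqrt((a_{n−1}/a_n)² − 2n a_{n−2}/((n−1) a_n)). The sketch below (function name assumed) reproduces the interval for the worked example just given:

```python
import numpy as np

def laguerre_samuelson_interval(coeffs):
    """Endpoints of an interval containing all roots of a polynomial
    whose roots are all real; coeffs = [a_0, ..., a_n]."""
    a = np.asarray(coeffs, dtype=float)
    n = len(a) - 1
    b1, b2 = a[n - 1] / a[n], a[n - 2] / a[n]
    center = -b1 / n
    radius = (n - 1.0) / n * np.sqrt(b1 * b1 - 2.0 * n * b2 / (n - 1.0))
    return center - radius, center + radius

p = [-6.0, -5.0, 5.0, 5.0, 1.0]   # x^4 + 5x^3 + 5x^2 - 5x - 6, roots -3, -2, -1, 1
print(laguerre_samuelson_interval(p))   # approx (-3.81, 1.31), containing all roots
```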
The root separation of a polynomial is the minimal distance between two roots, that is, the minimum of the absolute values of the differences of pairs of roots:
The root separation is a fundamental parameter of the computational complexity of root-finding algorithms for polynomials. In fact, the root separation determines the precision of number representation that is needed to be certain of distinguishing distinct roots. Also, for real-root isolation , it allows bounding the number of interval divisions that are needed for isolating all roots.
For polynomials with real or complex coefficients, it is not possible to express a lower bound of the root separation in terms of the degree and the absolute values of the coefficients only, because a small change in a single coefficient transforms a polynomial with multiple roots into a square-free polynomial with a small root separation, and essentially the same absolute values of the coefficients. However, involving the discriminant of the polynomial allows a lower bound.
For square-free polynomials with integer coefficients, the discriminant is an integer, and thus has an absolute value that is not smaller than 1 . This allows lower bounds for root separation that are independent of the discriminant.
Mignotte's separation bound is [ 18 ] [ 19 ] [ 20 ]
where Δ ( p ) {\displaystyle \Delta (p)} is the discriminant, and ‖ p ‖ 2 = a 0 2 + a 1 2 + ⋯ + a n 2 . {\displaystyle \textstyle \|p\|_{2}={\sqrt {a_{0}^{2}+a_{1}^{2}+\dots +a_{n}^{2}}}.}
For a square-free polynomial with integer coefficients, this implies
where s is the bit size of p , that is, the sum of the bit sizes of its coefficients.
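The displayed bound is omitted in this copy; Mignotte's bound is commonly stated as sep(p) > sqrt(3|Δ(p)|)/(n^{(n+2)/2} ‖p‖₂^{n−1}), and the sketch below (a hedged reconstruction, with SymPy assumed available for the discriminant) checks it against the true separation computed numerically:

```python
import numpy as np
import sympy as sp

def mignotte_lower_bound(coeffs):
    """sqrt(3 |disc(p)|) / (n^((n+2)/2) * ||p||_2^(n-1)); coeffs = [a_0, ..., a_n]."""
    x = sp.symbols('x')
    disc = float(sp.discriminant(sum(c * x**i for i, c in enumerate(coeffs)), x))
    n = len(coeffs) - 1
    return np.sqrt(3.0 * abs(disc)) / (n ** ((n + 2) / 2.0)
                                       * np.linalg.norm(coeffs) ** (n - 1))

def true_separation(coeffs):
    r = np.roots(list(reversed(coeffs)))
    return min(abs(u - v) for i, u in enumerate(r) for v in r[:i])

p = [-6, 11, -6, 1]                 # (x-1)(x-2)(x-3): true separation is 1
print(mignotte_lower_bound(p))       # ~1.1e-3, a valid (if pessimistic) lower bound
print(true_separation(p))            # 1.0
```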
The Gauss–Lucas theorem states that the convex hull of the roots of a polynomial contains the roots of the derivative of the polynomial.
A sometimes useful corollary is that, if all roots of a polynomial have positive real part, then so do the roots of all derivatives of the polynomial.
A related result is Bernstein's inequality . It states that for a polynomial P of degree n with derivative P′ we have max | z | = 1 | P ′ ( z ) | ≤ n max | z | = 1 | P ( z ) | {\displaystyle \max _{|z|=1}|P'(z)|\leq n\max _{|z|=1}|P(z)|} .
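A quick numerical sanity check of this inequality by sampling the unit circle (a sketch, not a proof; the polynomial is arbitrary):

```python
import numpy as np

coeffs = np.array([-6.0, 11.0, -6.0, 1.0])          # a_0, ..., a_n
n = len(coeffs) - 1
deriv = coeffs[1:] * np.arange(1, n + 1)             # coefficients of P'
z = np.exp(2j * np.pi * np.arange(2000) / 2000.0)    # samples on |z| = 1
maxP = np.abs(np.polyval(coeffs[::-1], z)).max()
maxdP = np.abs(np.polyval(deriv[::-1], z)).max()
print(maxdP, n * maxP, maxdP <= n * maxP)            # last value: True
```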
If the coefficients a i of a random polynomial are independently and identically distributed with a mean of zero, most complex roots are on the unit circle or close to it. In particular, the real roots are mostly located near ±1 , and, moreover, their expected number is, for a large degree, less than the natural logarithm of the degree.
If the coefficients are Gaussian distributed with a mean of zero and variance of σ then the mean density of real roots is given by the Kac formula [ 21 ] [ 22 ]
where
When the coefficients are Gaussian distributed with a non-zero mean and variance of σ , a similar but more complex formula is known. [ citation needed ]
For large n , the mean density of real roots near x is asymptotically
if x 2 − 1 ≠ 0 , {\displaystyle x^{2}-1\neq 0,} and
It follows that the expected number of real roots is, using big O notation
where C is a constant approximately equal to 0.6257358072 . [ 23 ]
In other words, the expected number of real roots of a random polynomial of high degree is lower than the natural logarithm of the degree .
Kac, Erdős and others have shown that these results are insensitive to the distribution of the coefficients, if they are independent and have the same distribution with mean zero. However, if the variance of the i th coefficient is equal to ( n i ) , {\displaystyle {\binom {n}{i}},} the expected number of real roots is n . {\displaystyle {\sqrt {n}}.} [ 23 ]
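A Monte Carlo sketch of both statements: standard Gaussian coefficients give about (2/π) ln n + C real roots, while binomially weighted variances give about sqrt(n). Real roots are detected with a crude imaginary-part tolerance, so the counts are only approximate; all parameters are illustrative:

```python
import numpy as np
from math import comb, log, pi, sqrt

rng = np.random.default_rng(0)

def n_real_roots(c, tol=1e-7):
    r = np.roots(c)
    return int((np.abs(r.imag) < tol * np.maximum(1.0, np.abs(r))).sum())

n, trials = 50, 400
flat = np.mean([n_real_roots(rng.standard_normal(n + 1)) for _ in range(trials)])
w = np.sqrt([comb(n, i) for i in range(n + 1)])      # variance binom(n, i)
kostlan = np.mean([n_real_roots(rng.standard_normal(n + 1) * w)
                   for _ in range(trials)])
print(flat, 2.0 / pi * log(n) + 0.6257358072)        # both near 3.1
print(kostlan, sqrt(n))                              # both near 7.07
```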
A polynomial p {\displaystyle p} can be written in the form
p ( x ) = a ( x − z 1 ) m 1 ⋯ ( x − z k ) m k {\displaystyle p(x)=a(x-z_{1})^{m_{1}}\cdots (x-z_{k})^{m_{k}}}
with distinct roots z 1 , … , z k {\displaystyle z_{1},\ldots ,z_{k}} and corresponding multiplicities m 1 , … , m k {\displaystyle m_{1},\ldots ,m_{k}} . A root z j {\displaystyle z_{j}} is a simple root if m j = 1 {\displaystyle m_{j}=1} or a multiple root if m j ≥ 2 {\displaystyle m_{j}\geq 2} . Simple roots are Lipschitz continuous with respect to the coefficients, but multiple roots are not. In other words, simple roots have bounded sensitivities, but multiple roots are infinitely sensitive to arbitrary perturbations of the coefficients. As a result, most root-finding algorithms suffer substantial loss of accuracy on multiple roots in numerical computation.
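A numeric illustration of that sensitivity: perturbing the constant term of (x − 1)³ by 10⁻⁹ moves the triple root by about (10⁻⁹)^{1/3} = 10⁻³, while the same perturbation of a polynomial with simple roots moves them by roughly 10⁻⁹:

```python
import numpy as np

p = np.array([1.0, -3.0, 3.0, -1.0])         # (x - 1)^3, highest degree first
q = p.copy()
q[-1] -= 1e-9                                 # perturb the constant term a_0
print(np.abs(np.roots(q) - 1.0).max())        # ~1e-3: cube-root blow-up

r = np.array([1.0, -6.0, 11.0, -6.0])         # (x-1)(x-2)(x-3), simple roots
s = r.copy()
s[-1] -= 1e-9
print(np.abs(np.sort(np.roots(s).real) - np.array([1.0, 2.0, 3.0])).max())  # ~1e-9
```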
In 1972, William Kahan proved that there is an inherent stability of multiple roots. [ 24 ] Kahan discovered that polynomials with a particular set of multiplicities form what he called a pejorative manifold and proved that a multiple root is Lipschitz continuous if the perturbation maintains its multiplicity.
This geometric property of multiple roots is crucial in numerical computation of multiple roots . | https://en.wikipedia.org/wiki/Geometrical_properties_of_polynomial_roots |
Geometrically necessary dislocations are like-signed dislocations needed to accommodate plastic bending in a crystalline material . [ 1 ] They are present when a material's plastic deformation is accompanied by internal plastic strain gradients. [ 2 ] They are in contrast to statistically stored dislocations, which occur with equal numbers of positive and negative signs and arise during plastic flow from multiplication processes like the Frank–Read source .
As straining progresses, the dislocation density increases and the dislocation mobility decreases during plastic flow. There are different ways through which dislocations can accumulate. Many of the dislocations are accumulated by multiplication, where dislocations encounter each other by chance. Dislocations stored in such processes are called statistically stored dislocations, with corresponding density ρ s {\displaystyle \rho _{s}} . [ 2 ] In other words, they are dislocations that evolve from random trapping processes during plastic deformation. [ 3 ]
In addition to statistically stored dislocations, geometrically necessary dislocations are accumulated in strain gradient fields caused by geometrical constraints of the crystal lattice. In this case, the plastic deformation is accompanied by internal plastic strain gradients. The theory of geometrically necessary dislocations was first introduced by Nye [ 4 ] in 1953. Since geometrically necessary dislocations are present in addition to statistically stored dislocations, the total density is the sum of the two densities, i.e. ρ s + ρ g {\displaystyle \rho _{s}+\rho _{g}} , where ρ g {\displaystyle \rho _{g}} is the density of geometrically necessary dislocations.
The plastic bending of a single crystal can be used to illustrate the concept of geometrically necessary dislocation, where the slip planes and crystal orientations are parallel to the direction of bending. The perfect (non-deformed) crystal has a length l {\displaystyle l} and thickness t {\displaystyle t} . When the crystal bar is bent to a radius of curvature r {\displaystyle r} , a strain gradient forms where a tensile strain occurs in the upper portion of the crystal bar, increasing the length of upper surface from l {\displaystyle l} to l + d l {\displaystyle l+dl} . Here d l {\displaystyle dl} is positive and its magnitude is assumed to be t θ / 2 {\displaystyle t\theta /2} . Similarly, the length of the opposite inner surface is decreased from l {\displaystyle l} to l − d l {\displaystyle l-dl} due to the compression strain caused by bending. Thus, the strain gradient is the strain difference between the outer and inner crystal surfaces divided by the distance over which the gradient exists
strain gradient = 2 d l / l t = 2 t θ / 2 l t = θ l {\displaystyle strain\ gradient=2{\frac {dl/l}{t}}=2{\frac {t\theta /2l}{t}}={\frac {\theta }{l}}} . Since l = r θ {\displaystyle l=r\theta } , strain gradient = 1 r {\displaystyle strain\ gradient={\frac {1}{r}}} .
The surface length divided by the interatomic spacing is the number of crystal planes on this surface. The interatomic spacing b {\displaystyle b} is equal to the magnitude of the Burgers vector b {\displaystyle b} . Thus the numbers of crystal planes on the outer (tension) surface and inner (compression) surface are ( l + d l ) / b {\displaystyle (l+dl)/b} and ( l − d l ) / b {\displaystyle (l-dl)/b} , respectively. Geometrically necessary dislocations are therefore introduced: edge dislocations of the same sign compensate for the difference in the number of atomic planes between the two surfaces. The density of geometrically necessary dislocations ρ g {\displaystyle \rho _{g}} is this difference divided by the crystal surface area
ρ g = ( l + d l ) / b − ( l − d l ) / b l t = 2 d l l t b = 1 r b = strain gradient b {\displaystyle \rho _{g}={\frac {(l+dl)/b-(l-dl)/b}{lt}}=2{\frac {dl}{ltb}}={\frac {1}{rb}}={\frac {strain\ gradient}{b}}} .
More precisely, the orientation of the slip plane and direction with respect to the bending should be considered when calculating the density of geometrically necessary dislocations. In the special case when the slip plane normals are parallel to the bending axis and the slip directions are perpendicular to this axis, ordinary dislocation glide instead of geometrically necessary dislocations occurs during the bending process. Thus, a constant of order unity α {\displaystyle \alpha } is included in the expression for the density of geometrically necessary dislocations
ρ g = α strain gradient b {\displaystyle \rho _{g}=\alpha {\frac {strain\ gradient}{b}}} .
Between the adjacent grains of a polycrystalline material, geometrically necessary dislocations can provide displacement compatibility by accommodating each crystal's strain gradient. Empirically, it can be inferred that such dislocation regions exist because crystallites in a polycrystalline material do not have voids or overlapping segments between them. In such a system, the density of geometrically necessary dislocations can be estimated by considering an average grain. Overlap between two adjacent grains is proportional to ε ¯ d {\displaystyle {\overline {\varepsilon }}d} where ε ¯ {\displaystyle {\overline {\varepsilon }}} is the average strain and d {\displaystyle d} is the diameter of the grain. The displacement d l {\displaystyle dl} is proportional to ε ¯ {\displaystyle {\overline {\varepsilon }}} multiplied by the gage length, which is taken as d {\displaystyle d} for a polycrystal. This divided by the Burgers vector , b , yields the number of dislocations, and dividing by the area ( ≅ d 2 {\displaystyle \cong d^{2}} ) yields the density
ρ g ≅ ε ¯ b d {\displaystyle \rho _{g}\cong {\frac {\overline {\varepsilon }}{bd}}}
which, with further geometrical considerations, can be refined to
ρ g = ε ¯ 4 b d {\displaystyle \rho _{g}={\frac {\overline {\varepsilon }}{4bd}}} . [ 2 ]
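A back-of-envelope evaluation of the two density estimates above, the bent single crystal (ρ_g = 1/(rb)) and the average polycrystal grain (ρ_g ≈ ε̄/(4bd)); the numerical values are illustrative assumptions, not taken from the text:

```python
# rho_g for the bent single crystal and for an average polycrystal grain.
b = 2.5e-10          # Burgers vector magnitude, m (typical metallic value)
r = 1.0e-2           # radius of curvature of the bend, m
print(1.0 / (r * b))             # ~4e11 m^-2  (rho_g = strain gradient / b = 1/(r b))

strain = 0.01        # average strain in the polycrystal
d = 10e-6            # grain diameter, m
print(strain / (4.0 * b * d))    # ~1e12 m^-2  (rho_g = strain / (4 b d))
```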
Nye introduced a tensor (the so-called Nye tensor) to calculate the geometrically necessary dislocation density. [ 4 ]
For a three-dimensional distribution of dislocations in a crystal, consider a region where the effects of the dislocations are averaged (i.e. the crystal is large enough). The dislocations can be determined by their Burgers vectors . If a Burgers circuit of unit area normal to the unit vector l j {\displaystyle l_{j}} has a Burgers vector B i {\displaystyle B_{i}}
B i = α i j l j {\displaystyle B_{i}=\alpha _{ij}l_{j}} ( i , j = 1 , 2 , 3 {\displaystyle i,j=1,2,3} )
where the coefficient α i j {\displaystyle \alpha _{ij}} is the Nye tensor relating the unit vector l j {\displaystyle l_{j}} and the Burgers vector B i {\displaystyle B_{i}} . This second-rank tensor determines the dislocation state of a given region.
Assume B i = b i ( n r j l j ) {\displaystyle B_{i}=b_{i}(nr_{j}l_{j})} , where r {\displaystyle r} is the unit vector parallel to the dislocations and b {\displaystyle b} is the Burgers vector, n is the number of dislocations crossing unit area normal to r {\displaystyle r} . Thus, α i j = n b i r j {\displaystyle \alpha _{ij}=nb_{i}r_{j}} . The total α i j {\displaystyle \alpha _{ij}} is the sum of all different values of n b i r j {\displaystyle nb_{i}r_{j}} . Assume a second-rank tensor k i j {\displaystyle k_{ij}} to describe the curvature of the lattice, d ϕ i = k i j d x j {\displaystyle d\phi _{i}=k_{ij}dx_{j}} , where d ϕ i {\displaystyle d\phi _{i}} is the small lattice rotations about the three axes and d x j {\displaystyle dx_{j}} is the displacement vector. It can be proved that k i j = α j i − 1 2 δ i j α k k {\displaystyle k_{ij}=\alpha _{ji}-{\tfrac {1}{2}}\delta _{ij}\alpha _{kk}} where δ i j = 1 {\displaystyle \delta _{ij}=1} for i = j {\displaystyle i=j} , and δ i j = 0 {\displaystyle \delta _{ij}=0} for i ≠ j {\displaystyle i\neq j} .
The equation of equilibrium yields ∂ α i j ∂ x j = 0 {\displaystyle {\frac {\partial \alpha _{ij}}{\partial x_{j}}}=0} . Since k i j = ∂ ϕ i ∂ x j {\displaystyle k_{ij}={\frac {\partial \phi _{i}}{\partial x_{j}}}} , it follows that ∂ k i j ∂ x k = ∂ 2 ∂ x j ∂ x k ϕ i = ∂ k i k ∂ x j {\displaystyle {\frac {\partial k_{ij}}{\partial x_{k}}}={\partial ^{2} \over \partial x_{j}\partial x_{k}}\phi _{i}={\frac {\partial k_{ik}}{\partial x_{j}}}} . Substituting α {\displaystyle \alpha } for k {\displaystyle k} gives ∂ α j i ∂ x k − ∂ α k i ∂ x j = 1 2 ( δ i j ∂ α l l ∂ x k − δ i k ∂ α l l ∂ x j ) {\displaystyle {\frac {\partial \alpha _{ji}}{\partial x_{k}}}-{\frac {\partial \alpha _{ki}}{\partial x_{j}}}={\frac {1}{2}}(\delta _{ij}{\frac {\partial \alpha _{ll}}{\partial x_{k}}}-\delta _{ik}{\frac {\partial \alpha _{ll}}{\partial x_{j}}})} . Because the equations with j = k {\displaystyle j=k} vanish identically, and because of the antisymmetry in j {\displaystyle j} and k {\displaystyle k} , only nine independent equations remain of all twenty-seven possible permutations of i , j , k {\displaystyle i,j,k} . The Nye tensor α i j {\displaystyle \alpha _{ij}} can be determined from these nine differential equations.
Thus the dislocation potential can be written as W = 1 2 α i j k i j {\displaystyle W={\tfrac {1}{2}}\alpha _{ij}k_{ij}} , where ∂ W ∂ k i j = 1 2 α i j + 1 2 ∂ α k l ∂ k i j k k l = 1 2 α i j + 1 2 k j i − 1 2 δ i j k k k = α i j {\displaystyle {\frac {\partial W}{\partial k_{ij}}}={\frac {1}{2}}\alpha _{ij}+{\frac {1}{2}}{\frac {\partial \alpha _{kl}}{\partial k_{ij}}}k_{kl}={\frac {1}{2}}\alpha _{ij}+{\frac {1}{2}}k_{ji}-{\frac {1}{2}}\delta _{ij}k_{kk}=\alpha _{ij}} .
The uniaxial tensile test is widely used to obtain the stress-strain relations and related mechanical properties of bulk specimens. However, there is an extra storage of defects associated with non-uniform plastic deformation in geometrically necessary dislocations, and an ordinary macroscopic test alone, e.g. the uniaxial tensile test, is not enough to capture the effects of such defects, e.g. the plastic strain gradient. Besides, geometrically necessary dislocations operate at the micron scale, so a normal bending test performed at the millimeter scale fails to detect these dislocations. [ 5 ]
Experimental measurements of geometrically necessary dislocations became possible only after the invention of spatially and angularly resolved methods to measure lattice distortion via electron backscattered diffraction by Adams et al. [ 6 ] in 1997. For example, Sun et al. [ 7 ] in 2000 studied the pattern of lattice curvature near the interface of deformed aluminum bicrystals using diffraction-based orientation imaging microscopy. Thus the observation of geometrically necessary dislocations was realized using the curvature data.
But due to experimental limitations, the density of geometrically necessary dislocations for a general deformation state was hard to measure until a lower bound method was introduced by Kysar et al. [ 8 ] in 2010. They studied wedge indentation with a 90 degree included angle into a single nickel crystal (later, included angles of 60 and 120 degrees were also analyzed by Dahlberg et al. [ 9 ] ). By comparing the orientation of the crystal lattice in the deformed configuration to that of the undeformed homogeneous sample, they were able to determine the in-plane lattice rotation and found it to be an order of magnitude larger than the out-of-plane lattice rotations, thus justifying the plane strain assumption.
The Nye dislocation density tensor [ 4 ] has only two non-zero components due to the two-dimensional deformation state, and they can be derived from the lattice rotation measurements. Since the linear relationship between the two Nye tensor components and the densities of geometrically necessary dislocations is usually under-determined, the total density of geometrically necessary dislocations is minimized subject to this relationship. This lower bound solution represents the minimum geometrically necessary dislocation density in the deformed crystal consistent with the measured lattice geometry. In regions where only one or two effective slip systems are known to be active, the lower bound solution reduces to the exact solution for geometrically necessary dislocation densities.
Because ρ g {\displaystyle \rho _{g}} adds to the density of statistically stored dislocations ρ s {\displaystyle \rho _{s}} , the increase in dislocation density due to the accommodation between grains leads to a grain size effect during strain hardening ; that is, polycrystals of finer grain size will tend to work-harden more rapidly. [ 2 ]
Geometrically necessary dislocations can provide strengthening, where two mechanisms exist in different cases. The first mechanism provides macroscopic isotropic hardening via local dislocation interaction, e.g. jog formation when an existing geometrically necessary dislocation is cut through by a moving dislocation. The second mechanism is kinematic hardening via the accumulation of long-range back stresses. [ 10 ]
Geometrically necessary dislocations can lower their free energy by stacking one atop another (see the Peach–Koehler formula for dislocation-dislocation stresses) and form low-angle tilt boundaries . This movement often requires the dislocations to climb to different glide planes, so annealing at elevated temperature is often necessary. The result is an arc that transforms from being continuously bent to discretely bent with kinks at the low-angle tilt boundaries. [ 1 ] | https://en.wikipedia.org/wiki/Geometrically_necessary_dislocations
In mathematics, Thurston's geometrization conjecture (now a theorem ) states that each of certain three-dimensional topological spaces has a unique geometric structure that can be associated with it. It is an analogue of the uniformization theorem for two-dimensional surfaces , which states that every simply connected Riemann surface can be given one of three geometries ( Euclidean , spherical , or hyperbolic ).
In three dimensions, it is not always possible to assign a single geometry to a whole topological space. Instead, the geometrization conjecture states that every closed 3-manifold can be decomposed in a canonical way into pieces that each have one of eight types of geometric structure. The conjecture was proposed by William Thurston ( 1982 ) as part of his 24 questions , and implies several other conjectures, such as the Poincaré conjecture and Thurston's elliptization conjecture .
Thurston's hyperbolization theorem implies that Haken manifolds satisfy the geometrization conjecture. Thurston announced a proof in the 1980s, and since then, several complete proofs have appeared in print.
Grigori Perelman announced a proof of the full geometrization conjecture in 2003 using Ricci flow with surgery in two papers posted at the arxiv.org preprint server. Perelman's papers were studied by several independent groups that produced books and online manuscripts filling in the complete details of his arguments. Verification was essentially complete in time for Perelman to be awarded the 2006 Fields Medal for his work, and in 2010 the Clay Mathematics Institute awarded him its 1 million USD prize for solving the Poincaré conjecture, though Perelman declined both awards.
The Poincaré conjecture and the spherical space form conjecture are corollaries of the geometrization conjecture, although there are shorter proofs of the former that do not lead to the geometrization conjecture.
A 3-manifold is called closed if it is compact – without "punctures" or "missing endpoints" – and has no boundary ("edge").
Every closed 3-manifold has a prime decomposition : this means it is the connected sum ("a gluing together") of prime 3-manifolds . [ a ] This reduces much of the study of 3-manifolds to the case of prime 3-manifolds: those that cannot be written as a non-trivial connected sum.
Here is a statement of Thurston's conjecture:
There are 8 possible geometric structures in 3 dimensions. There is a unique minimal way of cutting an irreducible oriented 3-manifold along tori into pieces that are Seifert manifolds or atoroidal, called the JSJ decomposition , which is not quite the same as the decomposition in the geometrization conjecture, because some of the pieces in the JSJ decomposition might not have finite volume geometric structures. (For example, the mapping torus of an Anosov map of a torus has a finite volume solv structure, but its JSJ decomposition cuts it open along one torus to produce a product of a torus and a unit interval, and the interior of this has no finite volume geometric structure.)
For non-oriented manifolds the easiest way to state a geometrization conjecture is to first take the oriented double cover . It is also possible to work directly with non-orientable manifolds, but this gives some extra complications: it may be necessary to cut along projective planes and Klein bottles as well as spheres and tori, and manifolds with a projective plane boundary component usually have no geometric structure.
In 2 dimensions, every closed surface has a geometric structure consisting of a metric with constant curvature; it is not necessary to cut the manifold up first. Specifically, every closed surface is diffeomorphic to a quotient of S 2 , E 2 , or H 2 . [ 1 ]
A model geometry is a simply connected smooth manifold X together with a transitive action of a Lie group G on X with compact stabilizers.
A model geometry is called maximal if G is maximal among groups acting smoothly and transitively on X with compact stabilizers. Sometimes this condition is included in the definition of a model geometry.
A geometric structure on a manifold M is a diffeomorphism from M to X /Γ for some model geometry X , where Γ is a discrete subgroup of G acting freely on X ; this is a special case of a complete ( G , X )-structure . If a given manifold admits a geometric structure, then it admits one whose model is maximal.
A 3-dimensional model geometry X is relevant to the geometrization conjecture if it is maximal and if there is at least one compact manifold with a geometric structure modelled on X . Thurston classified the 8 model geometries satisfying these conditions; they are listed below and are sometimes called Thurston geometries . (There are also uncountably many model geometries without compact quotients.)
There is some connection with the Bianchi groups : the 3-dimensional Lie groups. Most Thurston geometries can be realized as a left invariant metric on a Bianchi group. However S 2 × R cannot be, Euclidean space corresponds to two different Bianchi groups, and there are an uncountable number of solvable non-unimodular Bianchi groups, most of which give model geometries with no compact representatives.
The point stabilizer is O(3, R ), and the group G is the 6-dimensional Lie group O(4, R ), with 2 components. The corresponding manifolds are exactly the closed 3-manifolds with finite fundamental group . Examples include the 3-sphere , the Poincaré homology sphere , and lens spaces . This geometry can be modeled as a left invariant metric on the Bianchi group of type IX . Manifolds with this geometry are all compact, orientable, and have the structure of a Seifert fiber space (often in several ways). The complete list of such manifolds is given in the article on spherical 3-manifolds . Under Ricci flow, manifolds with this geometry collapse to a point in finite time.
The point stabilizer is O(3, R ), and the group G is the 6-dimensional Lie group R 3 × O(3, R ), with 2 components. Examples are the 3-torus , and more generally the mapping torus of a finite-order automorphism of the 2-torus; see torus bundle . There are exactly 10 closed 3-manifolds with this geometry, 6 orientable and 4 non-orientable. This geometry can be modeled as a left invariant metric on the Bianchi groups of type I or VII 0 . Finite volume manifolds with this geometry are all compact, and have the structure of a Seifert fiber space (sometimes in two ways). The complete list of such manifolds is given in the article on Seifert fiber spaces . Under Ricci flow, manifolds with Euclidean geometry remain invariant.
The point stabilizer is O(3, R ), and the group G is the 6-dimensional Lie group O + (1, 3, R ), with 2 components. There are enormous numbers of examples of these, and their classification is not completely understood. The example with smallest volume is the Weeks manifold . Other examples are given by the Seifert–Weber space , or "sufficiently complicated" Dehn surgeries on links , or most Haken manifolds . The geometrization conjecture implies that a closed 3-manifold is hyperbolic if and only if it is irreducible, atoroidal , and has infinite fundamental group. This geometry can be modeled as a left invariant metric on the Bianchi group of type V or VII h≠0 . Under Ricci flow, manifolds with hyperbolic geometry expand.
The point stabilizer is O(2, R ) × Z /2 Z , and the group G is O(3, R ) × R × Z /2 Z , with 4 components. The four finite volume manifolds with this geometry are: S 2 × S 1 , the mapping torus of the antipode map of S 2 , the connected sum of two copies of 3-dimensional projective space, and the product of S 1 with two-dimensional projective space. The first two are mapping tori of the identity map and antipode map of the 2-sphere, and are the only examples of 3-manifolds that are prime but not irreducible. The third is the only example of a non-trivial connected sum with a geometric structure. This is the only model geometry that cannot be realized as a left invariant metric on a 3-dimensional Lie group. Finite volume manifolds with this geometry are all compact and have the structure of a Seifert fiber space (often in several ways). Under normalized Ricci flow manifolds with this geometry converge to a 1-dimensional manifold.
The point stabilizer is O(2, R ) × Z /2 Z , and the group G is O + (1, 2, R ) × R × Z /2 Z , with 4 components. Examples include the product of a hyperbolic surface with a circle, or more generally the mapping torus of an isometry of a hyperbolic surface. Finite volume manifolds with this geometry have the structure of a Seifert fiber space if they are orientable. (If they are not orientable the natural fibration by circles is not necessarily a Seifert fibration: the problem is that some fibers may "reverse orientation"; in other words their neighborhoods look like fibered solid Klein bottles rather than solid tori. [ 2 ] ) The classification of such (oriented) manifolds is given in the article on Seifert fiber spaces . This geometry can be modeled as a left invariant metric on the Bianchi group of type III . Under normalized Ricci flow manifolds with this geometry converge to a 2-dimensional manifold.
The universal cover of SL(2, R ) is denoted S L ~ ( 2 , R ) {\displaystyle {\widetilde {\rm {SL}}}(2,\mathbf {R} )} . It fibers over H 2 , and the space is sometimes called "Twisted H 2 × R". The group G has 2 components. Its identity component has the structure ( R × S L ~ 2 ( R ) ) / Z {\displaystyle (\mathbf {R} \times {\widetilde {\rm {SL}}}_{2}(\mathbf {R} ))/\mathbf {Z} } . The point stabilizer is O(2, R ).
Examples of these manifolds include: the manifold of unit vectors of the tangent bundle of a hyperbolic surface, and more generally the Brieskorn homology spheres (excepting the 3-sphere and the Poincaré dodecahedral space ). This geometry can be modeled as a left invariant metric on the Bianchi group of type VIII or III . Finite volume manifolds with this geometry are orientable and have the structure of a Seifert fiber space . The classification of such manifolds is given in the article on Seifert fiber spaces . Under normalized Ricci flow manifolds with this geometry converge to a 2-dimensional manifold.
This fibers over E 2 , and so is sometimes known as "Twisted E 2 × R". It is the geometry of the Heisenberg group . The point stabilizer is O(2, R ). The group G has 2 components, and is a semidirect product of the 3-dimensional Heisenberg group by the group O(2, R ) of isometries of a circle. Compact manifolds with this geometry include the mapping torus of a Dehn twist of a 2-torus, or the quotient of the Heisenberg group by the "integral Heisenberg group". This geometry can be modeled as a left invariant metric on the Bianchi group of type II . Finite volume manifolds with this geometry are compact and orientable and have the structure of a Seifert fiber space . The classification of such manifolds is given in the article on Seifert fiber spaces . Under normalized Ricci flow, compact manifolds with this geometry converge to R 2 with the flat metric.
This geometry (also called Solv geometry ) fibers over the line with fiber the plane, and is the geometry of the identity component of the group G . The point stabilizer is the dihedral group of order 8. The group G has 8 components, and is the group of maps from 2-dimensional Minkowski space to itself that are either isometries or multiply the metric by −1. The identity component has a normal subgroup R 2 with quotient R , where R acts on R 2 with 2 (real) eigenspaces, with distinct real eigenvalues of product 1. This is the Bianchi group of type VI 0 and the geometry can be modeled as a left invariant metric on this group. All finite volume manifolds with solv geometry are compact. The compact manifolds with solv geometry are either the mapping torus of an Anosov map of the 2-torus (such a map is an automorphism of the 2-torus given by an invertible 2 by 2 matrix whose eigenvalues are real and distinct, such as ( 2 1 1 1 ) {\displaystyle \left({\begin{array}{*{20}c}2&1\\1&1\\\end{array}}\right)} ), or quotients of these by groups of order at most 8. The eigenvalues of the automorphism of the torus generate an order of a real quadratic field, and the solv manifolds can be classified in terms of the units and ideal classes of this order. [ 3 ] Under normalized Ricci flow compact manifolds with this geometry converge (rather slowly) to R 1 .
A closed 3-manifold has a geometric structure of at most one of the 8 types above, but finite volume non-compact 3-manifolds can occasionally have more than one type of geometric structure. (Nevertheless, a manifold can have many different geometric structures of the same type; for example, a surface of genus at least 2 has a continuum of different hyperbolic metrics.) More precisely, if M is a manifold with a finite volume geometric structure, then the type of geometric structure is almost determined as follows, in terms of the fundamental group π 1 ( M ):
Infinite volume manifolds can have many different types of geometric structure: for example, R 3 can have 6 of the different geometric structures listed above, as 6 of the 8 model geometries are homeomorphic to it. Moreover if the volume does not have to be finite there are an infinite number of new geometric structures with no compact models; for example, the geometry of almost any non-unimodular 3-dimensional Lie group.
There can be more than one way to decompose a closed 3-manifold into pieces with geometric structures. For example:
It is possible to choose a "canonical" decomposition into pieces with geometric structure, for example by first cutting the manifold into prime pieces in a minimal way, then cutting these up using the smallest possible number of tori. However this minimal decomposition is not necessarily the one produced by Ricci flow; in fact, the Ricci flow can cut up a manifold into geometric pieces in many inequivalent ways, depending on the choice of initial metric.
The Fields Medal was awarded to Thurston in 1982 partially for his proof of the geometrization conjecture for Haken manifolds .
In 1982, Richard S. Hamilton showed that given a closed 3-manifold with a metric of positive Ricci curvature , the Ricci flow would collapse the manifold to a point in finite time, which proves the geometrization conjecture for this case as the metric becomes "almost round" just before the collapse. He later developed a program to prove the geometrization conjecture by Ricci flow with surgery . The idea is that the Ricci flow will in general produce singularities, but one may be able to continue the Ricci flow past the singularity by using surgery to change the topology of the manifold. Roughly speaking, the Ricci flow contracts positive curvature regions and expands negative curvature regions, so it should kill off the pieces of the manifold with the "positive curvature" geometries S 3 and S 2 × R , while what is left at large times should have a thick–thin decomposition into a "thick" piece with hyperbolic geometry and a "thin" graph manifold .
In 2003, Grigori Perelman announced a proof of the geometrization conjecture by showing that the Ricci flow can indeed be continued past the singularities, and has the behavior described above.
One component of Perelman's proof was a novel collapsing theorem in Riemannian geometry. Perelman did not release any details on the proof of this result (Theorem 7.4 in the preprint 'Ricci flow with surgery on three-manifolds'). Beginning with Shioya and Yamaguchi, there are now several different proofs of Perelman's collapsing theorem, or variants thereof. [ 4 ] [ 5 ] [ 6 ] [ 7 ] Shioya and Yamaguchi's formulation was used in the first fully detailed formulations of Perelman's work. [ 8 ]
A second route to the last part of Perelman's proof of geometrization is the method of Laurent Bessières and co-authors, [ 9 ] [ 10 ] which uses Thurston's hyperbolization theorem for Haken manifolds and Gromov 's norm for 3-manifolds. [ 11 ] [ 12 ] A book by the same authors with complete details of their version of the proof has been published by the European Mathematical Society . [ 13 ]
In four dimensions, only a rather restricted class of closed 4-manifolds admit a geometric decomposition. [ 14 ] However, lists of maximal model geometries can still be given. [ 15 ]
The four-dimensional maximal model geometries were classified by Richard Filipkiewicz in 1983. They number eighteen, plus one countably infinite family: [ 15 ] their usual names are E 4 , Nil 4 , Nil 3 × E 1 , Sol 4 m , n (a countably infinite family), Sol 4 0 , Sol 4 1 , H 3 × E 1 , S L ~ {\displaystyle {\widetilde {\rm {SL}}}} × E 1 , H 2 × E 2 , H 2 × H 2 , H 4 , H 2 ( C ) (a complex hyperbolic space ), F 4 (the tangent bundle of the hyperbolic plane), S 2 × E 2 , S 2 × H 2 , S 3 × E 1 , S 4 , CP 2 (the complex projective plane ), and S 2 × S 2 . [ 14 ] No closed manifold admits the geometry F 4 , but there are manifolds with proper decomposition including an F 4 piece. [ 14 ]
The five-dimensional maximal model geometries were classified by Andrew Geng in 2016. There are 53 individual geometries and six infinite families. Some new phenomena not observed in lower dimensions occur, including two uncountable families of geometries and geometries with no compact quotients. [ 1 ] | https://en.wikipedia.org/wiki/Geometrization_conjecture |
In physics, geometrothermodynamics (GTD) is a formalism developed in 2007 by Hernando Quevedo to describe the properties of thermodynamic systems in terms of concepts of differential geometry. [ 1 ]
Consider a thermodynamic system in the framework of classical equilibrium thermodynamics. The states of thermodynamic equilibrium are considered as points of an abstract equilibrium space in which a Riemannian metric can be introduced in several ways. In particular, one can introduce Hessian metrics like the Fisher information metric , the Weinhold metric , the Ruppeiner metric and others, whose components are calculated as the Hessian of a particular thermodynamic potential .
Another possibility is to introduce metrics which are independent of the thermodynamic potential, a property which is shared by all thermodynamic systems in classical thermodynamics. [ 2 ] Since a change of thermodynamic potential is equivalent to a Legendre transformation , and Legendre transformations do not act in the equilibrium space, it is necessary to introduce an auxiliary space to correctly handle the Legendre transformations. This is the so-called thermodynamic phase space. If the phase space is equipped with a Legendre invariant Riemannian metric, a smooth map can be introduced that induces a thermodynamic metric in the equilibrium manifold. The thermodynamic metric can then be used with different thermodynamic potentials without changing the geometric properties of the equilibrium manifold. One expects the geometric properties of the equilibrium manifold to be related to the macroscopic physical properties.
The details of this relation can be summarized in three main points:
The main ingredient of GTD is a (2 n + 1)-dimensional manifold T {\displaystyle {\mathcal {T}}} with coordinates Z A = { Φ , E a , I a } {\displaystyle Z^{A}=\{\Phi ,E^{a},I^{a}\}} , where Φ {\displaystyle \Phi } is an arbitrary thermodynamic potential, E a {\displaystyle E^{a}} , a = 1 , 2 , … , n {\displaystyle a=1,2,\ldots ,n} , are the extensive variables, and I a {\displaystyle I^{a}} the intensive variables. It is also possible to introduce in a canonical manner the fundamental one-form Θ = d Φ − δ a b I a d E b {\displaystyle \Theta =d\Phi -\delta _{ab}I^{a}dE^{b}} (summation over repeated indices) with δ a b = d i a g ( + 1 , … , + 1 ) {\displaystyle \delta _{ab}={\rm {diag}}(+1,\ldots ,+1)} , which satisfies the condition Θ ∧ ( d Θ ) n ≠ 0 {\displaystyle \Theta \wedge (d\Theta )^{n}\neq 0} , where n {\displaystyle n} is the number of thermodynamic degrees of freedom of the system, and is invariant with respect to Legendre transformations [ 3 ]
where i ∪ j {\displaystyle i\cup j} is any disjoint decomposition of the set of indices { 1 , … , n } {\displaystyle \{1,\ldots ,n\}} , and k , l = 1 , … , i {\displaystyle k,l=1,\ldots ,i} . In particular, for i = { 1 , … , n } {\displaystyle i=\{1,\ldots ,n\}} and i = ∅ {\displaystyle i=\emptyset } we obtain the total Legendre transformation and the identity, respectively.
It is also assumed that in T {\displaystyle {\mathcal {T}}} there exists a metric G {\displaystyle G} which is also invariant with respect to Legendre transformations. The triad ( T , Θ , G ) {\displaystyle ({\mathcal {T}},\Theta ,G)} defines a Riemannian contact manifold which is called the thermodynamic phase space (phase manifold). The space of thermodynamic equilibrium states (equilibrium manifold) is an n-dimensional Riemannian submanifold E ⊂ T {\displaystyle {\mathcal {E}}\subset {\mathcal {T}}} induced by a smooth map φ : E → T {\displaystyle \varphi :{\mathcal {E}}\rightarrow {\mathcal {T}}} , i.e. φ : { E a } ↦ { Φ , E a , I a } {\displaystyle \varphi :\{E^{a}\}\mapsto \{\Phi ,E^{a},I^{a}\}} , with Φ = Φ ( E a ) {\displaystyle \Phi =\Phi (E^{a})} and I a = I a ( E a ) {\displaystyle I^{a}=I^{a}(E^{a})} , such that φ ∗ ( Θ ) = φ ∗ ( d Φ − δ a b I a d E b ) = 0 {\displaystyle \varphi ^{*}(\Theta )=\varphi ^{*}(d\Phi -\delta _{ab}I^{a}dE^{b})=0} holds, where φ ∗ {\displaystyle \varphi ^{*}} is the pullback of φ {\displaystyle \varphi } . The manifold E {\displaystyle {\mathcal {E}}} is naturally equipped with the Riemannian metric g = φ ∗ ( G ) {\displaystyle g=\varphi ^{*}(G)} . The purpose of GTD is to demonstrate that the geometric properties of E {\displaystyle {\mathcal {E}}} are related to the thermodynamic properties of a system with fundamental thermodynamic equation Φ = Φ ( E a ) {\displaystyle \Phi =\Phi (E^{a})} .
The condition of invariance with respect to total Legendre transformations leads to the metrics
where ξ a b {\displaystyle \xi _{ab}} is a constant diagonal matrix that can be expressed in terms of δ a b {\displaystyle \delta _{ab}} and η a b {\displaystyle \eta _{ab}} , and Λ {\displaystyle \Lambda } is an arbitrary Legendre invariant function of Z A {\displaystyle Z^{A}} . The metrics G I {\displaystyle G^{I}} and G I I {\displaystyle G^{II}} have been used to describe thermodynamic systems with first and second order phase transitions, respectively. The most general metric which is invariant with respect to partial Legendre transformations is
The components of the corresponding metric for the equilibrium manifold E {\displaystyle {\mathcal {E}}} can be computed as
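The displayed component formulas are omitted in this copy. As a purely illustrative stand-in, the SymPy sketch below computes the induced metric for one special case often used in the GTD literature (Λ = 1, ξ = δ), in which the pullback components reduce to g_bd = (E^a ∂Φ/∂E^a) η_b^c ∂²Φ/∂E^c∂E^d with η = diag(−1, 1); both this reduction and the ideal-gas-like potential are assumptions of the sketch, not the text's omitted formula:

```python
import sympy as sp

# Hedged sketch of the pullback g = phi*(G) for the special case described
# above (an assumption, not the article's omitted formula).  The potential
# is an illustrative ideal-gas entropy with constants dropped.
U, V = sp.symbols('U V', positive=True)
E = [U, V]
Phi = sp.Rational(3, 2) * sp.log(U) + sp.log(V)
eta = sp.diag(-1, 1)
scale = sum(E[a] * sp.diff(Phi, E[a]) for a in range(2))   # E^a dPhi/dE^a
g = sp.simplify(scale * (eta * sp.hessian(Phi, E)))
sp.pprint(g)    # diag(15/(4*U**2), -5/(2*V**2)) for this potential
```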
GTD has been applied to describe laboratory systems like the ideal gas, van der Waals gas, the Ising model, etc., more exotic systems like black holes in different gravity theories, [ 4 ] in the context of relativistic cosmology, [ 5 ] and to describe chemical reactions. [ 6 ] | https://en.wikipedia.org/wiki/Geometrothermodynamics
Geometry & Topology is a peer-refereed, international mathematics research journal devoted to geometry and topology , and their applications. It is currently based at the University of Warwick , United Kingdom , and published by Mathematical Sciences Publishers , a nonprofit academic publishing organisation.
It was founded in 1997 [ 1 ] by a group of topologists who were dissatisfied with recent substantial rises in subscription prices of journals published by major publishing corporations. The aim was to set up a high-quality journal, capable of competing with existing journals, but with substantially lower subscription fees. The journal was open-access for its first ten years of existence and was available free to individual users, although institutions were required to pay modest subscription fees for both online access and for printed volumes. At present, an online subscription is required to view full-text PDF copies of articles in the most recent three volumes; articles older than that are open-access, at which point copies of the published articles are uploaded to the arXiv . A traditional printed version is also published, at present on an annual basis.
The journal has grown to be well respected in its field, and has in recent years published a number of important papers, in particular proofs of the Property P conjecture and the Birman conjecture . | https://en.wikipedia.org/wiki/Geometry_&_Topology
Geometry Expert ( GEX ) is a Chinese software package for dynamic diagram drawing and automated geometry theorem proving and discovering.
A newer Chinese version of Geometry Expert is called MMP/Geometer .
Java Geometry Expert is free under GNU General Public License . | https://en.wikipedia.org/wiki/Geometry_Expert |
In coordination chemistry and crystallography , the geometry index or structural parameter ( τ ) is a number ranging from 0 to 1 that indicates what the geometry of the coordination center is. The first such parameter for 5-coordinate compounds was developed in 1984. [ 1 ] Later, parameters for 4-coordinate compounds were developed. [ 2 ]
To distinguish whether the geometry of the coordination center is trigonal bipyramidal or square pyramidal, the τ 5 (originally just τ ) parameter was proposed by Addison et al. : [ 1 ]
where: β > α are the two greatest valence angles of the coordination center.
When τ 5 is close to 0 the geometry is similar to square pyramidal, while if τ 5 is close to 1 the geometry is similar to trigonal bipyramidal:
In 2007 Houser et al. developed the analogous τ 4 parameter to distinguish whether the geometry of the coordination center is square planar or tetrahedral. [ 2 ] The formula is:
where: α and β are the two greatest valence angles of the coordination center; θ = cos −1 (− 1 ⁄ 3 ) ≈ 109.5° is the tetrahedral angle.
When τ 4 is close to 0 the geometry is similar to square planar, while if τ 4 is close to 1 then the geometry is similar to tetrahedral. However, in contrast to the τ 5 parameter, this does not distinguish α and β angles, so structures of significantly different geometries can have similar τ 4 values. To overcome this issue, in 2015 Okuniewski et al. developed parameter τ 4 ′ that adopts values similar to τ 4 but better differentiates the examined structures: [ 3 ]
where: β > α are the two greatest valence angles of the coordination center; θ = cos −1 (− 1 ⁄ 3 ) ≈ 109.5° is the tetrahedral angle.
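The displayed formulas are omitted in this copy; their standard forms are τ₅ = (β − α)/60°, τ₄ = (360° − (α + β))/(360° − 2θ), and τ₄′ = (β − α)/(360° − θ) + (180° − β)/(180° − θ). A sketch evaluating them at the ideal geometries (function names are illustrative):

```python
from math import acos, degrees

THETA = degrees(acos(-1.0 / 3.0))        # tetrahedral angle, ~109.47 deg

def tau5(alpha, beta):
    return (beta - alpha) / 60.0

def tau4(alpha, beta):
    return (360.0 - (alpha + beta)) / (360.0 - 2.0 * THETA)

def tau4_prime(alpha, beta):
    return (beta - alpha) / (360.0 - THETA) + (180.0 - beta) / (180.0 - THETA)

print(tau5(120, 180))                             # 1.0: ideal trigonal bipyramid
print(tau5(160, 160))                             # 0.0: square pyramid (equal trans angles)
print(tau4(THETA, THETA), tau4_prime(THETA, THETA))   # 1.0 1.0: ideal tetrahedron
print(tau4(180, 180), tau4_prime(180, 180))           # 0.0 0.0: square planar
```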
Extreme values of τ 4 and τ 4 ′ denote exactly the same geometries; however, τ 4 ′ is always less than or equal to τ 4 , so the deviation from ideal tetrahedral geometry is more visible. If, for a tetrahedral complex, the value of the τ 4 ′ parameter is low, then one should check whether there are additional interactions within the coordination sphere. For example, in complexes of mercury(II), the Hg··· π interactions were found this way. [ 4 ] | https://en.wikipedia.org/wiki/Geometry_index
In computer science , one approach to the dynamic optimality problem on online algorithms for binary search trees involves reformulating the problem geometrically, in terms of augmenting a set of points in the plane with as few additional points as possible to avoid rectangles with only two points on their boundary. [ 1 ]
As typically formulated, the online binary search tree problem involves search trees defined over a fixed key set { 1 , 2 , . . . , n } {\displaystyle \{1,2,...,n\}} . An access sequence is a sequence x 1 , x 2 , {\displaystyle x_{1},x_{2},} ... where each access x i {\displaystyle x_{i}} belongs to the key set.
Any particular algorithm for maintaining binary search trees (such as the splay tree algorithm or Iacono's working set structure ) has a cost for each access sequence that models the amount of time it would take to use the structure to search for each of the keys in the access sequence in turn. The cost of a search is modeled by assuming that the search tree algorithm has a single pointer into a binary search tree, which at the start of each search points to the root of the tree. The algorithm may then perform any sequence of the following operations:
The search is required, at some point within this sequence of operations, to move the pointer to a node containing the key, and the cost of the search is the number of operations that are performed in the sequence. The total cost cost A ( X ) for algorithm A on access sequence X is the sum of the costs of the searches for each successive key in the sequence.
As is standard in competitive analysis , the competitive ratio of an algorithm A is defined to be the maximum, over all access sequences, of the ratio of the cost for A to the best cost that any algorithm could achieve:
The dynamic optimality conjecture states that splay trees have a constant competitive ratio, but this remains unproven. The geometric view of binary search trees provides a different way of understanding the problem that has led to the development of alternative algorithms that could also (conjecturally) have a constant competitive ratio.
In the geometric view of the online binary search tree problem,
an access sequence x 1 , . . . , x m {\displaystyle x_{1},...,x_{m}} (sequence of searches performed on a binary search tree (BST) with a key set 1 , 2 , . . . , n {\displaystyle {1,2,...,n}} ) is mapped to the set of points ( x i , i ) {\displaystyle {(x_{i},i)}} , where the X-axis represents the key space and the Y-axis represents time; to which a set of touched nodes is added. By touched nodes we mean the following. Consider a BST access algorithm with a single pointer to a node in the tree. At the beginning of an access to a given key x i {\displaystyle x_{i}} , this pointer is initialized to the root of the tree. Whenever the pointer moves to or is initialized to a node, we say that the node is touched. [ 2 ] We represent a BST algorithm for a given input sequence by drawing a point for each item that gets touched.
For example, assume a BST on 4 nodes with the key set {1, 2, 3, 4} is given.
Let 3, 1, 4, 2 be the access sequence.
The touches are represented geometrically: If an item x is touched in the operations for the i th access, then a point ( x , i ) is plotted.
A point set is said to be arborally satisfied if the following property holds: for any pair of points that do not lie on the same horizontal or vertical line, there exists a third point which lies in the rectangle spanned by the first two points (either inside or on the boundary).
A point set containing the points ( x i , i ) {\displaystyle (x_{i},i)} is arborally satisfied if and only if it corresponds to a valid BST for the input sequence x 1 , x 2 , . . . , x m {\displaystyle x_{1},x_{2},...,x_{m}} .
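The property can be checked directly from its definition. Below is a minimal Python sketch; the function name and the representation of the point set as (key, time) pairs are illustrative assumptions, and the check is quadratic-time for clarity rather than speed.

```python
def is_arborally_satisfied(points):
    """Check that every pair of points not sharing a horizontal or
    vertical line spans a rectangle containing a third point of the
    set (inside or on the boundary)."""
    pts = set(points)
    for p1 in pts:
        for p2 in pts:
            (x1, t1), (x2, t2) = p1, p2
            if x1 == x2 or t1 == t2:
                continue  # same row or column: no constraint
            lox, hix = min(x1, x2), max(x1, x2)
            lot, hit = min(t1, t2), max(t1, t2)
            # Look for a third point inside or on the rectangle.
            if not any(lox <= x <= hix and lot <= t <= hit
                       and (x, t) not in (p1, p2) for (x, t) in pts):
                return False
    return True
```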
First, prove that the point set for any valid BST algorithm is arborally satisfied.
Consider points ( x , i ) {\displaystyle (x,i)} and ( y , j ) {\displaystyle (y,j)} , where x is touched at time i and y is touched at time j . Assume by symmetry that x < y {\displaystyle x<y} and i < j {\displaystyle i<j} . It needs to be shown that there exists a third point in the rectangle with corners ( x , i ) {\displaystyle (x,i)} and ( y , j ) {\displaystyle (y,j)} . Also let L C A t ( a , b ) {\displaystyle \mathrm {LCA} _{t}(a,b)} denote the lowest common ancestor of nodes a and b right before time t . There are a few cases:
Next, show the other direction: given an arborally satisfied point set, a valid BST corresponding to that point set can be constructed. Organize the BST into a treap ordered in heap order by next-touch-time. Note that next-touch-time has ties and is thus not uniquely defined, but this is not a problem as long as there is a way to break ties. When time i is reached, the nodes touched form a connected subtree at the top, by the heap-ordering property. Now, assign new next-touch-times for this subtree, and rearrange it into a new local treap.
If a pair of nodes x and y straddles the boundary between the touched and untouched parts of the treap, and y is to be touched sooner than x , then ( x , n o w ) → ( y , n e x t − t o u c h ( y ) ) {\displaystyle (x,now)\to (y,next-touch(y))} is an unsatisfied rectangle, because the leftmost such point would be the right child of x , not y .
Finding the best BST execution for the input sequence x 1 , x 2 , . . . , x m {\displaystyle x_{1},x_{2},...,x_{m}} is equivalent to finding the minimum cardinality superset of points (that contains the input in geometric representation) that is arborally satisfied. The more general problem of finding the minimum cardinality arborally satisfied superset of a general set of input points (not limited to one input point per y coordinate), is known to be NP-complete . [ 1 ]
The following greedy algorithm constructs arborally satisfied sets:
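One common formulation sweeps the point set upward in time and, at each access, projects every earlier point that forms an unsatisfied rectangle with the new point onto the current row. The Python sketch below follows that formulation; it is illustrative and unoptimized, and the names are not from the original sources (see [ 1 ] for the precise algorithm).

```python
def greedy_ass(accesses):
    """Sweep upward in time; at each access, project every stranded
    earlier point (one forming an unsatisfied rectangle with the new
    access point) up to the current row."""
    pts = set()
    for t, x in enumerate(accesses, start=1):  # times 1..m
        access = (x, t)
        to_add = set()
        for (px, pt) in pts:
            if px == x:
                continue
            lox, hix = min(px, x), max(px, x)
            # Is the rectangle spanned by (px, pt) and (x, t) satisfied?
            satisfied = any(lox <= qx <= hix and pt <= qt <= t
                            and (qx, qt) not in ((px, pt), access)
                            for (qx, qt) in pts | {access})
            if not satisfied:
                to_add.add((px, t))  # project the stranded point to row t
        pts |= to_add | {access}
    return pts
```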
The algorithm has been conjectured to be optimal within an additive term. [ 3 ]
The geometry of binary search trees has been used to provide an algorithm which is dynamically optimal if any binary search tree algorithm is dynamically optimal. [ 4 ] | https://en.wikipedia.org/wiki/Geometry_of_binary_search_trees |
In proof theory , the Geometry of Interaction (GoI) was introduced by Jean-Yves Girard shortly after his work on linear logic . In linear logic, proofs can be seen as various kinds of networks as opposed to the flat tree structures of sequent calculus . To distinguish the real proof nets from all the possible networks, Girard devised a criterion involving trips in the network. Trips can in fact be seen as a kind of operator acting on the proof. Drawing on this observation, Girard [ 1 ] described this operator directly from the proof and gave a formula, the so-called execution formula , encoding the process of cut elimination at the level of operators. Subsequent constructions by Girard proposed variants in which proofs are represented as flows, [ 2 ] or operators in von Neumann algebras . [ 3 ] Those models were later generalised by Seiller 's Interaction Graphs models. [ 4 ]
One of the first significant applications of GoI was a better analysis [ 5 ] of Lamping's algorithm [ 6 ] for optimal reduction for the lambda calculus . GoI had a strong influence on game semantics for linear logic and PCF .
Beyond the dynamic interpretation of proofs, geometry of interaction constructions provide models of linear logic , or fragments thereof. This aspect has been extensively studied by Seiller [ 7 ] under the name of linear realisability, a version of realizability accounting for linearity.
GoI has been applied to deep compiler optimisation for lambda calculi . [ 8 ] A bounded version of GoI dubbed the Geometry of Synthesis has been used to compile higher-order programming languages directly into static circuits. [ 9 ] | https://en.wikipedia.org/wiki/Geometry_of_interaction |
Geometry processing is an area of research that uses concepts from applied mathematics , computer science and engineering to design efficient algorithms for the acquisition, reconstruction , analysis , manipulation, simulation and transmission of complex 3D models. As the name implies, many of the concepts, data structures, and algorithms are directly analogous to signal processing and image processing . For example, where image smoothing might convolve an intensity signal with a blur kernel formed using the Laplace operator , geometric smoothing might be achieved by convolving a surface geometry with a blur kernel formed using the Laplace-Beltrami operator .
Applications of geometry processing algorithms already cover a wide range of areas from multimedia , entertainment and classical computer-aided design , to biomedical computing, reverse engineering , and scientific computing . [ 1 ]
Geometry processing is a common research topic at SIGGRAPH , the premier computer graphics academic conference, and the main topic of the annual Symposium on Geometry Processing .
Geometry processing involves working with a shape , usually in 2D or 3D, although the shape can live in a space of arbitrary dimensions. The processing of a shape involves three stages, known as its life cycle. At its "birth," a shape can be instantiated through one of three methods: a model , a mathematical representation , or a scan . After a shape is born, it can be analyzed and edited repeatedly in a cycle. This usually involves acquiring different measurements, such as the distances between the points of the shape, the smoothness of the shape, or its Euler characteristic . Editing may involve denoising, deforming, or performing rigid transformations . At the final stage of the shape's "life," it is consumed. This can mean it is consumed by a viewer as a rendered asset in a game or movie, for instance. The end of a shape's life can also be defined by a decision about the shape, like whether or not it satisfies some criteria. Or it can even be fabricated in the real world, through a method such as 3D printing or laser cutting.
Like any other shape, the shapes used in geometry processing have properties pertaining to their geometry and topology . The geometry of a shape concerns the position of the shape's points in space , tangents , normals , and curvature . It also includes the dimension in which the shape lives (e.g. R 2 {\displaystyle R^{2}} or R 3 {\displaystyle R^{3}} ). The topology of a shape is a collection of properties that do not change even after smooth transformations have been applied to the shape. It concerns properties such as the number of holes and boundaries , as well as the orientability of the shape. One example of a non-orientable shape is the Möbius strip .
In computers, everything must be discretized. Shapes in geometry processing are usually represented as triangle meshes , which can be seen as a graph . Each node in the graph is a vertex (usually in R 3 {\displaystyle R^{3}} ), which has a position. This encodes the geometry of the shape. Directed edges connect these vertices into triangles, which, by the right-hand rule, have a direction called the normal. Each triangle forms a face of the mesh. These are combinatoric in nature and encode the topology of the shape. In addition to triangles, a more general class of polygon meshes can also be used to represent a shape. More advanced representations like progressive meshes encode a coarse representation along with a sequence of transformations, which produce a fine or high resolution representation of the shape once applied. These meshes are useful in a variety of applications, including geomorphs, progressive transmission, mesh compression, and selective refinement. [ 2 ]
One particularly important property of a 3D shape is its Euler characteristic , which can alternatively be defined in terms of its genus . The formula for this in the continuous sense is χ = 2 c − 2 h − b {\displaystyle \chi =2c-2h-b} , where c {\displaystyle c} is the number of connected components, h {\displaystyle h} is the number of holes (as in donut holes, see torus ), and b {\displaystyle b} is the number of connected components of the boundary of the surface. A concrete example of this is a mesh of a pair of pants . There is one connected component, 0 holes, and 3 connected components of the boundary (the waist and two leg holes), so in this case the Euler characteristic is −1. To bring this into the discrete world, the Euler characteristic of a mesh is computed in terms of its vertices, edges, and faces: χ = | V | − | E | + | F | {\displaystyle \chi =|V|-|E|+|F|} .
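This discrete count is straightforward to compute. A minimal Python sketch for a triangle mesh given as a vertex list and index triples (names are illustrative; edges are derived from the faces, so every edge is assumed to belong to some face):

```python
def euler_characteristic(vertices, faces):
    """Compute chi = |V| - |E| + |F| for a triangle mesh, where each
    face is a triple of vertex indices and edges are counted once."""
    edges = set()
    for (a, b, c) in faces:
        for u, v in ((a, b), (b, c), (c, a)):
            edges.add((min(u, v), max(u, v)))  # undirected edge
    return len(vertices) - len(edges) + len(faces)
```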
Depending on how a shape is initialized or "birthed," the shape might exist only as a nebula of sampled points that represent its surface in space. To transform the surface points into a mesh, the Poisson reconstruction [ 3 ] strategy can be employed. This method states that the indicator function , a function that determines which points in space belong to the surface of the shape, can actually be computed from the sampled points. The key concept is that the gradient of the indicator function is 0 everywhere, except at the sampled points, where it equals the inward surface normal. More formally, suppose the collection of sampled points from the surface is denoted by S {\displaystyle S} , each point in the space by p i {\displaystyle p_{i}} , and the corresponding normal at that point by n i {\displaystyle n_{i}} . Then the gradient of the indicator function is defined as:
▽ g = { n i , ∀ p i ∈ S 0 , otherwise {\displaystyle \triangledown g={\begin{cases}{\textbf {n}}_{i},&\forall p_{i}\in S\\0,&{\text{otherwise}}\end{cases}}}
The task of reconstruction then becomes a variational problem. To find the indicator function of the surface, we must find a function χ {\displaystyle \chi } such that ‖ ▽ χ − V ‖ {\displaystyle \lVert \triangledown \chi -{\textbf {V}}\rVert } is minimized, where V {\displaystyle {\textbf {V}}} is the vector field defined by the samples. As a variational problem, one can view the minimizer χ {\displaystyle \chi } as a solution of Poisson's equation . [ 3 ] After obtaining a good approximation for χ {\displaystyle \chi } and a value σ {\displaystyle \sigma } for which the points ( x , y , z ) {\displaystyle (x,y,z)} with χ ( x , y , z ) = σ {\displaystyle \chi (x,y,z)=\sigma } lie on the surface to be reconstructed, the marching cubes algorithm can be used to construct a triangle mesh from the function χ {\displaystyle \chi } , which can then be applied in subsequent computer graphics applications.
One common problem encountered in geometry processing is how to merge multiple views of a single object captured from different angles or positions. This problem is known as registration . In registration, we wish to find an optimal rigid transformation that will align surface X {\displaystyle X} with surface Y {\displaystyle Y} . More formally, if P Y ( x ) {\displaystyle P_{Y}(x)} is the projection of a point x from surface X {\displaystyle X} onto surface Y {\displaystyle Y} , we want to find the optimal rotation matrix R {\displaystyle R} and translation vector t {\displaystyle t} that minimize the following objective function:
∫ x ∈ X | | R x + t − P Y ( x ) | | 2 d x {\displaystyle \int _{x\in X}||Rx+t-P_{Y}(x)||^{2}dx}
While rotations are non-linear in general, small rotations can be linearized as skew-symmetric matrices. Moreover, the distance function x − P Y ( x ) {\displaystyle x-P_{Y}(x)} is non-linear, but is amenable to linear approximations if the change in X {\displaystyle X} is small. An iterative solution such as Iterative Closest Point (ICP) is therefore employed to solve for small transformations iteratively, instead of solving for the potentially large transformation in one go. In ICP, n random sample points from X {\displaystyle X} are chosen and projected onto Y {\displaystyle Y} . In order to sample points uniformly at random across the surface of the triangle mesh, the random sampling is broken into two stages: uniformly sampling points within a triangle; and non-uniformly sampling triangles, such that each triangle's associated probability is proportional to its surface area. [ 4 ] Thereafter, the optimal transformation is calculated based on the difference between each x {\displaystyle x} and its projection. In the following iteration, the projections are calculated based on the result of applying the previous transformation on the samples. The process is repeated until convergence.
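A compact sketch of one ICP iteration is given below, using the SVD-based Kabsch/Procrustes solution for the optimal rigid motion; `project_onto_Y` stands in for an assumed closest-point projection onto the target surface, and the names are illustrative.

```python
import numpy as np

def icp_step(X, project_onto_Y):
    """One ICP iteration (sketch): project sampled points of X onto Y,
    then solve for the rigid motion minimizing squared distances."""
    P = project_onto_Y(X)                 # correspondences on Y, shape (n, 3)
    cx, cp = X.mean(axis=0), P.mean(axis=0)
    H = (X - cx).T @ (P - cp)             # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    # Correct a possible reflection so the result is a proper rotation.
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                    # optimal rotation
    t = cp - R @ cx                       # optimal translation
    return R, t

# Repeated until convergence: X = (R @ X.T).T + t after each step.
```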
When shapes are defined or scanned, there may be accompanying noise, either to a signal acting upon the surface or to the actual surface geometry. Reducing noise on the former is known as data denoising , while noise reduction on the latter is known as surface fairing . The task of geometric smoothing is analogous to signal noise reduction, and consequently employs similar approaches.
The pertinent Lagrangian to be minimized is derived by recording the conformity to the initial signal f ¯ {\displaystyle {\bar {f}}} and the smoothness of the resulting signal, which is approximated by the magnitude of the gradient with a weight λ {\displaystyle \lambda } :
L ( f ) = ∫ Ω ‖ f − f ¯ ‖ 2 + λ ‖ ∇ f ‖ 2 d x {\displaystyle {\mathcal {L}}(f)=\int _{\Omega }\|f-{\bar {f}}\|^{2}+\lambda \|\nabla f\|^{2}dx} .
Taking a variation δ f {\displaystyle \delta f} of L {\displaystyle {\mathcal {L}}} yields the necessary condition
0 = δ L ( f ) = ∫ Ω δ f ( I + λ ∇ 2 ) f − δ f f ¯ d x {\displaystyle 0=\delta {\mathcal {L}}(f)=\int _{\Omega }\delta f(\mathbf {I} +\lambda \nabla ^{2})f-\delta f{\bar {f}}dx} .
By discretizing this onto piecewise-constant elements with our signal on the vertices we obtain
∑ i M i δ f i f ¯ i = ∑ i M i δ f i ∑ j ( I + λ ∇ 2 ) f j = ∑ i δ f i ∑ j ( M + λ M ∇ 2 ) f j , {\displaystyle {\begin{aligned}\sum _{i}M_{i}\delta f_{i}{\bar {f}}_{i}&=\sum _{i}M_{i}\delta f_{i}\sum _{j}(\mathbf {I} +\lambda \nabla ^{2})f_{j}=\sum _{i}\delta f_{i}\sum _{j}(M+\lambda M\nabla ^{2})f_{j},\end{aligned}}}
where ∇ 2 {\displaystyle \nabla ^{2}} is chosen to be M − 1 L {\displaystyle M^{-1}\mathbf {L} } for the cotangent Laplacian L {\displaystyle \mathbf {L} } ; the M − 1 {\displaystyle M^{-1}} term maps the image of the Laplacian from areas to points. Because the variation is free, this results in a self-adjoint linear problem to solve with a parameter λ {\displaystyle \lambda } : f ¯ = ( M + λ L ) f . {\displaystyle {\bar {f}}=(M+\lambda \mathbf {L} )f.} When working with triangle meshes, one way to determine the values of the Laplacian matrix L {\displaystyle L} is by analyzing the geometry of the connected triangles of the mesh.
L i j = { 1 2 ( cot ( α i j ) + cot ( β i j ) ) edge ij exists − ∑ i ≠ j L i j i = j 0 otherwise {\displaystyle L_{ij}={\begin{cases}{\frac {1}{2}}(\cot(\alpha _{ij})+\cot(\beta _{ij}))&{\text{edge ij exists}}\\-\sum \limits _{i\neq j}L_{ij}&i=j\\0&{\text{otherwise}}\end{cases}}}
where α i j {\displaystyle \alpha _{ij}} and β i j {\displaystyle \beta _{ij}} are the angles opposite the edge ( i , j ) {\displaystyle (i,j)} . [ 5 ] The mass matrix M as an operator computes the local integral of a function's value and is often set, for a mesh with m triangles, as follows:
M i j = { 1 3 ∑ t = 1 m { A r e a ( t ) if triangle t contains vertex i 0 otherwise if i=j 0 otherwise {\displaystyle M_{ij}={\begin{cases}{\frac {1}{3}}\sum \limits _{t=1}^{m}{\begin{cases}Area(t)&{\text{if triangle t contains vertex i}}\\0&{\text{otherwise}}\end{cases}}&{\text{if i=j}}\\0&{\text{otherwise}}\end{cases}}}
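Both matrices can be assembled by looping over triangles. The following Python sketch builds dense versions for clarity (production code would use sparse matrices; names are illustrative):

```python
import numpy as np

def cotan_laplacian_and_mass(V, F):
    """Assemble the cotangent Laplacian L and the lumped barycentric
    mass matrix M for a triangle mesh with vertices V (n x 3) and
    faces F (index triples)."""
    n = len(V)
    L = np.zeros((n, n))
    M = np.zeros((n, n))
    for tri in F:
        for k in range(3):
            i, j, o = tri[k], tri[(k + 1) % 3], tri[(k + 2) % 3]
            # Cotangent of the angle at vertex o, opposite edge (i, j).
            u, w = V[i] - V[o], V[j] - V[o]
            cot = np.dot(u, w) / np.linalg.norm(np.cross(u, w))
            L[i, j] += 0.5 * cot
            L[j, i] += 0.5 * cot
        # Each triangle contributes one third of its area to each vertex.
        area = 0.5 * np.linalg.norm(np.cross(V[tri[1]] - V[tri[0]],
                                             V[tri[2]] - V[tri[0]]))
        for v in tri:
            M[v, v] += area / 3.0
    np.fill_diagonal(L, 0.0)
    np.fill_diagonal(L, -L.sum(axis=1))  # diagonal = minus row sum
    return L, M
```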
Occasionally, we need to flatten a 3D surface onto a flat plane. This process is known as parameterization . The goal is to find coordinates u and v onto which we can map the surface so that distortions are minimized. In this manner, parameterization can be seen as an optimization problem. One of the major applications of mesh parameterization is texture mapping .
One way to measure the distortion accrued in the mapping process is to measure how much the length of the edges on the 2D mapping differs from their lengths in the original 3D surface. In more formal terms, the objective function can be written as:
min U ∑ i j ∈ E | | u i − u j | | 2 {\displaystyle {\underset {U}{\text{min}}}\sum _{ij\in E}||u_{i}-u_{j}||^{2}}
where E {\displaystyle E} is the set of mesh edges and U {\displaystyle U} is the set of vertex positions in the plane. However, optimizing this objective function would result in a solution that maps all of the vertices to a single point in the uv -coordinates. Borrowing an idea from graph theory, we apply the Tutte Mapping and restrict the boundary vertices of the mesh onto a unit circle or other convex polygon . Doing so prevents the vertices from collapsing into a single vertex when the mapping is applied. The non-boundary vertices are then positioned at the barycentric interpolation of their neighbours. The Tutte Mapping, however, still suffers from severe distortions as it attempts to make the edge lengths equal, and hence does not correctly account for the triangle sizes on the actual surface mesh.
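A minimal sketch of the Tutte mapping with uniform weights, assuming the mesh is given by its edge list and an ordered loop of boundary vertex indices (illustrative names; dense algebra for brevity):

```python
import numpy as np

def tutte_embedding(n, edges, boundary):
    """Pin the boundary loop to the unit circle and place each interior
    vertex at the average of its neighbours by solving a linear system."""
    uv = np.zeros((n, 2))
    for k, b in enumerate(boundary):               # fix the boundary loop
        theta = 2.0 * np.pi * k / len(boundary)
        uv[b] = (np.cos(theta), np.sin(theta))
    A = np.zeros((n, n))                           # graph Laplacian
    for i, j in edges:
        A[i, j] -= 1.0; A[j, i] -= 1.0
        A[i, i] += 1.0; A[j, j] += 1.0
    interior = [i for i in range(n) if i not in set(boundary)]
    B = list(boundary)
    # Interior positions solve A_II uv_I = -A_IB uv_B.
    AII = A[np.ix_(interior, interior)]
    AIB = A[np.ix_(interior, B)]
    uv[interior] = np.linalg.solve(AII, -AIB @ uv[B])
    return uv
```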
Another way to measure the distortion is to consider the variations of the u and v coordinate functions. The wobbliness and distortion apparent in mass-spring methods are due to high variations in the u and v coordinate functions. With this approach, the objective function becomes the Dirichlet energy of u and v:
min u , v ∫ S | | ∇ u | | 2 + | | ∇ v | | 2 d A {\displaystyle {\underset {u,v}{\text{min}}}\int _{S}||\nabla u||^{2}+||\nabla v||^{2}dA}
There are a few other things to consider. We would like to minimize the angle distortion to preserve orthogonality . That means we would like ∇ u = ∇ v ⊥ {\displaystyle \nabla u=\nabla v^{\perp }} . In addition, we would also like the mapping to have proportionally similar sized regions as the original. This amounts to setting the determinant of the Jacobian of the u and v coordinate functions to 1:
det [ ∂ u ∂ x ∂ u ∂ y ∂ v ∂ x ∂ v ∂ y ] = 1 {\displaystyle \det {\begin{bmatrix}{\dfrac {\partial u}{\partial x}}&{\dfrac {\partial u}{\partial y}}\\[1em]{\dfrac {\partial v}{\partial x}}&{\dfrac {\partial v}{\partial y}}\end{bmatrix}}=1}
Putting these requirements together, we can augment the Dirichlet energy so that our objective function becomes: [ 6 ] [ 7 ]
min u , v ∫ S 1 2 | | ∇ u | | 2 + 1 2 | | ∇ v | | 2 − ∇ u ⋅ ∇ v ⊥ {\displaystyle {\underset {u,v}{\text{min}}}\int _{S}{\frac {1}{2}}||\nabla u||^{2}+{\frac {1}{2}}||\nabla v||^{2}-\nabla u\cdot \nabla v^{\perp }}
To avoid the problem of having all the vertices mapped to a single point, we also require that the solution to the optimization problem must have a non-zero norm and that it is orthogonal to the trivial solution.
Deformation is concerned with transforming some rest shape to a new shape. Typically, these transformations are continuous and do not alter the topology of the shape. Modern mesh-based shape deformation methods satisfy user deformation constraints at handles (selected vertices or regions on the mesh) and propagate these handle deformations to the rest of the shape smoothly and without removing or distorting details. Some common forms of interactive deformation are point-based, skeleton-based, and cage-based. [ 8 ] In point-based deformation, a user can apply transformations to a small set of points, called handles, on the shape. Skeleton-based deformation defines a skeleton for the shape, which allows a user to move the bones and rotate the joints. Cage-based deformation requires a cage to be drawn around all or part of a shape so that, when the user manipulates points on the cage, the volume it encloses changes accordingly.
Handles provide a sparse set of constraints for the deformation: as the user moves one point, the others must stay in place.
A rest surface S ^ {\displaystyle {\hat {S}}} immersed in R 3 {\displaystyle \mathbb {R} ^{3}} can be described with a mapping x ^ : Ω → R 3 {\displaystyle {\hat {x}}:\Omega \rightarrow \mathbb {R} ^{3}} , where Ω {\displaystyle \Omega } is a 2D parametric domain. The same can be done with another mapping x {\displaystyle x} for the transformed surface S {\displaystyle S} . Ideally, the transformed shape adds as little distortion as possible to the original. One way to model this distortion is in terms of displacements d = x − x ^ {\displaystyle d=x-{\hat {x}}} with a Laplacian-based energy. [ 9 ] Applying the Laplace operator to these mappings allows us to measure how the position of a point changes relative to its neighborhood, which keeps the handles smooth. Thus, the energy we would like to minimize can be written as:
min d ∫ Ω | | Δ d | | 2 d A {\displaystyle \min _{\textbf {d}}\int _{\Omega }||\Delta {\textbf {d}}||^{2}dA} .
While this method is translation invariant, it is unable to account for rotations. The As-Rigid-As-Possible deformation scheme [ 10 ] applies a rigid transformation x i = R x i ^ + t {\displaystyle x_{i}=R{\hat {x_{i}}}+t} to each handle i, where R ∈ S O ( 3 ) {\displaystyle R\in SO(3)} is a rotation matrix and t ∈ R 3 {\displaystyle t\in \mathbb {R} ^{3}} is a translation vector. Unfortunately, there is no way to know the rotations in advance, so instead we pick a "best" rotation that minimizes displacements. To achieve local rotation invariance, however, requires a function R : Ω → S O ( 3 ) {\displaystyle {\textbf {R}}:\Omega \rightarrow SO(3)} which outputs the best rotation for every point on the surface. The resulting energy, then, must be optimized over both x {\displaystyle {\textbf {x}}} and R {\displaystyle {\textbf {R}}} :
min x,R ∈ S O ( 3 ) ∫ Ω | | ∇ x − R ∇ x ^ | | 2 d A {\displaystyle \min _{{\textbf {x,R}}\in SO(3)}\int _{\Omega }||\nabla {\textbf {x}}-{\textbf {R}}\nabla {\hat {\textbf {x}}}||^{2}dA}
Note that the translation vector is not present in the final objective function, because the gradient of a constant translation vanishes.
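In the common local–global treatment of this energy, the rotations and the positions are optimized alternately. Below is a Python sketch of the local step, assuming each "cell" is given as a list of edges in a vertex's one-ring (an illustrative representation, not the only possible discretization):

```python
import numpy as np

def arap_local_step(V0, V, cells):
    """Local step of an as-rigid-as-possible solve: for each cell, fit
    the rotation best aligning rest edges with deformed edges via SVD."""
    rotations = []
    for edges in cells:
        S = np.zeros((3, 3))
        for (i, j) in edges:
            S += np.outer(V0[j] - V0[i], V[j] - V[i])  # edge covariance
        U, _, Vt = np.linalg.svd(S)
        # Correct a possible reflection so the result is a rotation.
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        rotations.append(Vt.T @ D @ U.T)
    return rotations

# The global step then solves a sparse Poisson-type system for the new
# vertex positions with these rotations held fixed; the steps alternate.
```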
While seemingly trivial, in many cases, determining the inside from the outside of a triangle mesh is not an easy problem. In general, given a surface S {\displaystyle S} we pose this problem as determining a function i s I n s i d e ( q ) {\displaystyle isInside(q)} which will return 1 {\displaystyle 1} if the point q {\displaystyle q} is inside S {\displaystyle S} , and 0 {\displaystyle 0} otherwise.
In the simplest case, the shape is closed. In this case, to determine whether a point q {\displaystyle q} is inside or outside the surface, we can cast a ray r {\displaystyle r} in any direction from the query point and count the number of times c o u n t r {\displaystyle count_{r}} it passes through the surface. If q {\displaystyle q} is outside S {\displaystyle S} , then the ray either does not pass through S {\displaystyle S} at all (in which case c o u n t r = 0 {\displaystyle count_{r}=0} ) or, because S {\displaystyle S} is bounded, every time the ray enters S {\displaystyle S} it must also exit, so the crossings come in pairs. Hence if q {\displaystyle q} is outside, c o u n t r {\displaystyle count_{r}} is even. Likewise, if q {\displaystyle q} is inside, the same pairing applies except that the ray crosses S {\displaystyle S} one extra time when it first leaves S {\displaystyle S} , so c o u n t r {\displaystyle count_{r}} is odd. So:
i s I n s i d e r ( q ) = { 1 c o u n t r i s o d d 0 c o u n t r i s e v e n {\displaystyle isInside_{r}(q)=\left\{{\begin{array}{ll}1&count_{r}\ is\ odd\\0&count_{r}\ is\ even\\\end{array}}\right.}
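A minimal sketch of this parity test, using the standard Möller–Trumbore ray–triangle intersection routine (the mesh representation and names are illustrative):

```python
import numpy as np

def ray_triangle_hit(q, d, a, b, c, eps=1e-12):
    """Moller-Trumbore test: does the ray q + t*d (t > 0) hit triangle abc?"""
    e1, e2 = b - a, c - a
    p = np.cross(d, e2)
    det = np.dot(e1, p)
    if abs(det) < eps:
        return False                      # ray parallel to triangle plane
    u = np.dot(q - a, p) / det
    qvec = np.cross(q - a, e1)
    v = np.dot(d, qvec) / det
    t = np.dot(e2, qvec) / det
    return (u >= 0) and (v >= 0) and (u + v <= 1) and (t > eps)

def is_inside_closed(q, V, F, d=np.array([1.0, 0.0, 0.0])):
    """Parity test for a closed mesh: an odd crossing count means inside."""
    count = sum(ray_triangle_hit(q, d, V[a], V[b], V[c]) for a, b, c in F)
    return count % 2 == 1
```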
Now, oftentimes we cannot guarantee that S {\displaystyle S} is closed. Take the pair-of-pants example from earlier in the article. This mesh clearly has a semantic inside and outside, despite there being holes at the waist and the legs.
The naive attempt to solve this problem is to shoot many rays in random directions, and classify q {\displaystyle q} as being inside if and only if most of the rays intersected S {\displaystyle S} an odd number of times. To quantify this, let us say we cast k {\displaystyle k} rays, r 1 , r 2 , … , r k {\displaystyle r_{1},r_{2},\dots ,r_{k}} . We associate a number r a y T e s t ( q ) = 1 k ∑ i = 1 k i s I n s i d e r i ( q ) {\displaystyle rayTest(q)={\frac {1}{k}}\sum _{i=1}^{k}isInside_{r_{i}}(q)} which is the average value of i s I n s i d e r {\displaystyle isInside_{r}} from each ray. Therefore:
i s I n s i d e ( q ) = { 1 r a y T e s t ( q ) ≥ 0.5 0 r a y T e s t ( q ) < 0.5 {\displaystyle isInside(q)=\left\{{\begin{array}{ll}1&rayTest(q)\geq 0.5\\0&rayTest(q)<0.5\\\end{array}}\right.}
In the limit of casting many rays, this method handles open meshes; however, far too many rays are required for it to be both accurate and computationally practical. Instead, a more robust approach is the Generalized Winding Number. [ 11 ] Inspired by the 2D winding number , this approach uses the solid angle at q {\displaystyle q} of each triangle in the mesh to determine if q {\displaystyle q} is inside or outside. The value of the Generalized Winding Number at q {\displaystyle q} , w n ( q ) {\displaystyle wn(q)} , is proportional to the sum of the solid angle contribution from each triangle in the mesh:
w n ( q ) = 1 4 π ∑ t ∈ F s o l i d A n g l e ( t ) {\displaystyle wn(q)={\frac {1}{4\pi }}\sum _{t\in F}solidAngle(t)}
For a closed mesh, w n ( q ) {\displaystyle wn(q)} is equivalent to the characteristic function for the volume represented by S {\displaystyle S} . Therefore, we say:
i s I n s i d e ( q ) = { 1 w n ( q ) ≥ 0.5 0 w n ( q ) < 0.5 {\displaystyle isInside(q)=\left\{{\begin{array}{ll}1&wn(q)\geq 0.5\\0&wn(q)<0.5\\\end{array}}\right.}
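The sum can be evaluated per triangle using the van Oosterom–Strackee formula for the solid angle of a triangle. A minimal Python sketch (illustrative names; a practical implementation would vectorize or use a fast hierarchical approximation):

```python
import numpy as np

def generalized_winding_number(q, V, F):
    """Sum the signed solid angles of all triangles as seen from q,
    divided by 4*pi (van Oosterom-Strackee formula per triangle)."""
    wn = 0.0
    for i, j, k in F:
        a, b, c = V[i] - q, V[j] - q, V[k] - q
        la, lb, lc = (np.linalg.norm(a), np.linalg.norm(b),
                      np.linalg.norm(c))
        num = np.dot(a, np.cross(b, c))           # signed volume term
        den = (la * lb * lc + np.dot(a, b) * lc
               + np.dot(b, c) * la + np.dot(c, a) * lb)
        wn += 2.0 * np.arctan2(num, den)          # solid angle of triangle
    return wn / (4.0 * np.pi)

def is_inside(q, V, F):
    return generalized_winding_number(q, V, F) >= 0.5
```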
Because w n ( q ) {\displaystyle wn(q)} is a harmonic function , it degrades gracefully, meaning the inside-outside segmentation would not change much if we poked holes in a closed mesh. For this reason, the Generalized Winding Number handles open meshes robustly. The boundary between inside and outside smoothly passes over holes in the mesh. In fact, in the limit, the Generalized Winding Number is equivalent to the ray-casting method as the number of rays goes to infinity. | https://en.wikipedia.org/wiki/Geometry_processing |
A geometry template is a piece of clear plastic with cut-out shapes for use in mathematics and other subjects in primary school through secondary school . It also has various measurements on its sides to be used like a ruler . In Australia, popular brands include Mathomat and MathAid.
Mathomat is a trademark used for a plastic stencil developed in Australia by Craig Young in 1969, who originally worked as an engineering tradesperson in the Government Aircraft Factories (GAF) in Melbourne before retraining and working as head of mathematics in a secondary school in Melbourne. Young designed Mathomat to address what he perceived as limitations of traditional mathematics drawing sets in classrooms, mainly caused by students losing parts of the sets. The Mathomat stencil has a large number of geometric shapes stencils combined with the functions of a technical drawing set (rulers, set squares, protractor and circles stencils to replace a compass).
The template made use of polycarbonate – a new type of thermoplastic polymer when Mathomat first came out – which was strong and transparent enough to allow a large number of stencil shapes to be included in its design without breaking or tearing. The first template was exhibited in 1970 at a mathematics conference in Melbourne along with a series of popular mathematics teaching lesson plans; it became an immediate success, with a large number of schools specifying it as a required student purchase. As of 2017, the stencil is widely specified in Australian schools, chiefly for students at early secondary school level. The manufacturing of Mathomat was taken over in 1989 by the W&G drawing instrument company, which had a factory in Melbourne for the manufacture of technical drawing instruments. Young also developed MathAid, which was initially produced by him when he was living in Ringwood, Victoria . He later sold the company.
W&G published a series of teacher resource books for Mathomat authored by various teachers and academics who were interested in Mathomat as a teaching product. [ 1 ] [ 2 ] [ 3 ] [ 4 ] | https://en.wikipedia.org/wiki/Geometry_template
Geophagia ( / ˌ dʒ iː ə ˈ f eɪ dʒ ( i ) ə / ), also known as geophagy ( / dʒ i ˈ ɒ f ə dʒ i / ), [ 1 ] is the intentional [ 2 ] practice of consuming earth or soil-like substances such as clay , chalk , or termite mounds. It is a behavioural adaptation that occurs in many non-human animals and has been documented in more than 100 primate species. [ 3 ] Geophagy in non-human primates is primarily used for protection from parasites, to provide mineral supplements and to help metabolize toxic compounds from leaves. [ 4 ] Geophagy also occurs in humans and is most commonly reported among children and pregnant women. [ 5 ]
Human geophagia is a form of pica – the craving and purposive consumption of non-food items – and is classified as an eating disorder in the Diagnostic and Statistical Manual of Mental Disorders (DSM) if not socially or culturally appropriate. [ 6 ] Sometimes geophagy is a consequence of carrying a hookworm infection . Although its etiology remains unknown, geophagy has many potential adaptive health benefits as well as negative consequences. [ 5 ] [ 7 ]
Geophagia is widespread in the animal kingdom. Galen , the Greek philosopher and physician, was the first to record the use of clay by sick or injured animals, in the second century AD. This type of geophagia has been documented in "many species of mammals, birds, reptiles, butterflies and isopods, especially among herbivores". [ 8 ]
Many species of South American parrots have been observed at clay licks , and sulphur-crested cockatoos have been observed ingesting clays in Papua New Guinea . Analysis of soils consumed by wild birds show that they often prefer soils with high clay content, usually with the smectite clay families being well represented. [ 9 ]
The preference for certain types of clay or soil can lead to unusual feeding behaviour. For example, Peruvian Amazon rainforest parrots congregate not just at one particular bend of the Manu River but at one specific layer of soil which runs hundreds of metres horizontally along that bend. The parrots avoid eating the substrate in layers one metre above or below the preferred layer. These parrots regularly eat seeds and unripe fruits containing alkaloids and other toxins that render the seeds and fruits bitter and even lethal. Because many of these chemicals become positively charged in the acidic stomach, they bind to clay minerals which have negatively charged cation-exchange sites, and are thereby rendered safe. Their preferred soils have a much higher cation-exchange capacity than the adjacent, rejected layers of soils because they are rich in the minerals smectite , kaolin , and mica . The preferred soils surpass the pure mineral kaolinite and surpass or approach pure bentonite in their capacity to bind quinine and tannic acid. [ 8 ]
In vitro and in vivo tests of these soils and many others from southeastern Peru indicate that they also release nutritionally important quantities of minerals such as calcium and sodium . In the Manu River example cited above, the preferred soil bands had much higher levels of sodium than those that were not chosen. Repeated studies have shown that the soils consumed most commonly by parrots in South America have higher sodium contents than those that are not consumed. [ 10 ] [ 11 ] [ 12 ]
It is unclear which factor is driving avian geophagy. [ 13 ] However, evidence is mounting that sodium is the most important driver among parrots in southeastern Peru. Parrots are known to eat toxic foods globally, but geophagy is concentrated in very specific regions. [ 14 ] Researchers Lee et al. show that parrot geophagy in South America is positively correlated to a significant degree with distance from the ocean. This suggests that overall lack of sodium in the ecosystem, not variation in food toxicity, is a better predictor of the spatial distribution of geophagy. This work, coupled with the recent findings of consistently high sodium levels in consumed soils, [ 10 ] [ 11 ] [ 12 ] make it highly likely that sodium is the primary driver of avian geophagy among parrots (and possibly other taxa) in the western Amazon Basin. This supplemental nutrients hypothesis is further supported by peak geophagy occurring during the parrots' breeding season. [ 15 ]
There are several hypotheses about the importance of geophagia in bats and primates. [ 16 ] : 436 [ 17 ] Chimpanzees in Kibale National Park , Uganda , have been observed to consume soil rich in kaolinite clay shortly before or after consuming plants including Trichilia rubescens , which possesses antimalarial properties in the laboratory. [ 18 ]
Geophagy is a behavioural adaptation seen in 136 species of nonhuman primates from the suborders Haplorrhini (81%) and Strepsirrhini (19%). [ 19 ] The most commonly ingested soils are soils from mounds, soils from tree bases, soils from termite mounds, 'Pong' soils, and forest-floor soil. [ 4 ] Studies have shown many benefits of geophagy, such as protection from parasites (4.9%), mineral supplementation (19.5%), and help in metabolizing toxic compounds from leaves (12.2%), these categories being non-exclusive. [ 4 ] Soil analysis has shown that one of the main components of the earth consumed by these primates is clay containing kaolinite, which is commonly used in medications for diarrheal and intestinal problems. [ 20 ] Geophagic behaviour plays an important role in nonhuman primates' health. [ 4 ] This kind of zoopharmacognosy differs from one species to another. For example, mountain gorillas from Rwanda tend to ingest clay soil during the dry season, when changes in the vegetation force them to feed on plants that contain more toxic compounds; in this case the ingested clay absorbs these toxins, providing digestive benefits. [ 4 ] This kind of seasonal behavioural adaptation is also seen in the red-handed howler monkeys of western Brazilian Amazonia, which likewise have to adapt to a shift towards feeding on leaves that contain more toxic compounds. [ 21 ] In other cases, geophagy is used by ring-tailed lemurs as a preventive and therapeutic behaviour for parasite control and intestinal infection. [ 19 ] These benefits of clay ingestion can also be observed among rhesus macaques. [ 20 ] In a study carried out on the island of Cayo Santiago, it was observed that the rhesus macaques harboured intestinal parasites yet suffered few gastrointestinal effects and little harm to their health. [ 20 ] The data suggest that this was due to the consumption of clay soil by this species. [ 20 ] On the other hand, observations have shown that geophagy provides mineral supplementation, as seen among Cambodia's Colobinae. [ 22 ] That study was done at the salt licks in Veun Sai-Siem Pang Conservation Area, a site visited by various species of nonhuman primates. [ 22 ] More in-depth research needs to be carried out to better understand the behavioural adaptation of geophagy among nonhuman primates.
There is debate over whether geophagia in bats is primarily for nutritional supplementation or detoxification. It is known that some species of bats regularly visit mineral or salt licks to increase mineral consumption. However, Voigt et al. demonstrated that both mineral-deficient and healthy bats visit salt licks at the same rate. [ 23 ] Therefore, mineral supplementation is unlikely to be the primary reason for geophagia in bats. Additionally, bat presence at salt licks increases during periods of high energy demand. [ 23 ] Voigt et al. concluded that the primary purpose for bat presence at salt licks is for detoxification purposes, compensating for the increased consumption of toxic fruit and seeds. [ 23 ]
Evidence for the likely origin of geophagy was found in the remains of early humans in Africa:
The oldest evidence of geophagy practised by humans comes from the prehistoric site at Kalambo Falls on the border between Zambia and Tanzania ( Root-Bernstein & Root-Bernstein, 2000). Here, a calcium-rich white clay was found alongside the bones of Homo habilis (the immediate predecessor of Homo sapiens ).
Geophagia is nearly universal around the world in tribal and traditional rural societies (although apparently it has not been documented in Japan or Korea). [ 16 ] In the ancient world , several writers noted the phenomenon of geophagia. Pliny is said to have noted the ingestion of soil on Lemnos , an island of Greece, and the use of the soils from this island was noted until the 14th century. [ 16 ] [ 24 ] The textbook of Hippocrates (460–377 BCE) mentions geophagia, and the famous medical textbook titled De Medicina edited by A. Cornelius Celsus (14–37 CE) seems to link anaemia to geophagia. [ 24 ] One of Rumi 's fables tells about a geophage being cheated by a sugar seller who leaves him alone with a weight made of clay and then waits until the man eats enough of it, thus reducing the amount of sugar he will get. [ 25 ]
The existence of geophagy among Native Americans was noted by early explorers in the Americas, including Gabriel Soares de Sousa , who in 1587 reported a tribe in Brazil using it in suicide, [ 16 ] and Alexander von Humboldt , who said that a tribe called the Otomacs ate large amounts of soil. [ 24 ] In Africa, David Livingstone wrote about slaves eating soil in Zanzibar, [ 24 ] and it is also thought that large numbers of slaves brought with them soil-eating practices when they were trafficked to the New World as part of the transatlantic slave trade. [ 16 ] Slaves who practised geophagia were nicknamed "clay-eaters" because they were known to consume clay, as well as spices, ash, chalk, grass, plaster, paint, and starch. [ 26 ]
In Africa , kaolinite , sometimes known as kalaba (in Gabon [ 27 ] and Cameroon ), [ 28 ] calaba , and calabachop (in Equatorial Guinea ), is eaten for pleasure or to suppress hunger. [ 28 ] Kaolin for human consumption is sold at most markets in Cameroon and is often flavoured with spices such as black pepper and cardamom . [ 29 ] Consumption is greatest among women, especially to cure nausea during pregnancy, in spite of the possible dangerous levels of arsenic and lead to the unborn child. [ 30 ] [ 31 ] Another example of geophagia was reported in Mangaung, Free State Province in South Africa , where the practice was geochemically investigated. [ 32 ] Calabash chalk is also eaten in west Africa. [ 33 ]
In Haiti , poor people are known to eat bonbon tè made from soil, salt, and vegetable shortening. These biscuits hold minimal nutritional value, but manage to keep the poor alive. [ 34 ] However, long-term consumption of the biscuits is reported to cause stomach pains and malnutrition, and is not recommended by doctors. [ 35 ]
In Central Java and East Java , Indonesia a food made of soil called ampo is eaten as a snack or light meal. [ 36 ] [ 37 ] [ 38 ] It consists of pure clay, without any mixture of ingredients. [ 36 ]
Bentonite clay is available worldwide as a digestive aid; kaolin is also widely used as a digestive aid and as the base for some medicines. Attapulgite , another type of clay, is an active ingredient in many anti-diarrheal medicines. [ 26 ]
Clay minerals have been reported to have beneficial microbiological effects, such as protecting the stomach against toxins, parasites, and pathogens. [ 39 ] [ 40 ] Humans are not able to synthesize vitamin B12 (cobalamin), so geophagia may be a behavioural adaptation to obtain it from bacteria in the soil. [ 41 ] Mineral content in soils varies by region, but many contain high levels of calcium , copper , magnesium , iron , and zinc , minerals that are critical for developing fetuses; deficiencies in them may underlie cravings in pregnant women for metallic tastes, soil, or chewing ice. To the extent that these cravings and the subsequent mineral consumption are therapeutically effective in decreasing infant mortality (as may also be the case for cravings for ice or other cold foods that constrict neck veins and thereby aid in increasing brain oxygen levels), the genetic predispositions and the associated environmental triggers are likely to be found in the infant as well. Likewise, multigenerationally impoverished villages or other socioeconomically homogeneous, closed genetic communities are more likely to have rewarded the expression of genes favouring soil or clay consumption cravings, by increasing the likelihood of survival through multiple pregnancies for both sexes. [ 40 ] [ 42 ]
There are obvious health risks in the consumption of soil that is contaminated by animal or human feces ; in particular, helminth eggs, such as those of Ascaris , which can stay viable in the soil for years, can lead to helminth infections . [ 43 ] [ 44 ] Tetanus poses a further risk. [ 43 ] Lead poisoning is also associated with soil ingestion, [ 45 ] and health risks associated with zinc exposure can be problematic among people who eat soil on a regular basis. [ 32 ] Gestational geophagia (geophagia in pregnancy) has been associated with various homeostatic disruptions and oxidative damage in rats. [ 46 ] | https://en.wikipedia.org/wiki/Geophagia
Geophilus bipartitus is a species of soil centipede in the family Geophilidae [ 1 ] found in Japan . It grows up to 15 millimeters in length; the males have about 35 leg pairs, the females 39. It lives in Japanese white birch . [ 2 ] | https://en.wikipedia.org/wiki/Geophilus_bipartitus
Geophilus bluncki is a species of soil centipede in the family Geophilidae found in San Remo, Italy . [ 1 ] It grows up to 23 millimeters in length; the males have about 61 leg pairs. The uniform pore fields and long antennae resemble Arctogeophilus glacialis , formerly Geophilus glacialis . [ 2 ] | https://en.wikipedia.org/wiki/Geophilus_bluncki
Geophilus monoporus is a species of soil centipede in the family Geophilidae found in Tiba, Japan . [ 1 ] This species can reach 45 mm in length and has 87 pairs of legs. [ 2 ] [ 3 ] The species name refers to the single pore at the base of each of the ultimate legs . [ 4 ] | https://en.wikipedia.org/wiki/Geophilus_monoporus
Geophysical fluid dynamics , in its broadest meaning, is the application of fluid dynamics to naturally occurring flows, such as lava, oceans , and atmospheres , on Earth and other planets . [ 1 ]
Two physical features that are common to many of the phenomena studied in geophysical fluid dynamics are rotation of the fluid due to the planetary rotation and stratification (layering).
The applications of geophysical fluid dynamics do not generally include the circulation of the mantle , which is the subject of geodynamics , or fluid phenomena in the magnetosphere . Ocean circulation and air circulation are typically studied in oceanography and meteorology.
To describe the flow of geophysical fluids, equations are needed for conservation of momentum (or Newton's second law ) and conservation of energy . The former leads to the Navier–Stokes equations , which cannot (yet) be solved analytically. Therefore, further approximations are generally made in order to be able to solve these equations. First, the fluid is assumed to be incompressible . Remarkably, this works well even for a highly compressible fluid like air as long as sound and shock waves can be ignored. [ 2 ] : 2–3 Second, the fluid is assumed to be a Newtonian fluid , meaning that there is a linear relation between the shear stress τ and the strain rate, for example τ = μ ∂ u / ∂ y {\displaystyle \tau =\mu \,\partial u/\partial y} ,
where μ is the viscosity . [ 2 ] : 2–3 Under these assumptions the Navier-Stokes equations are
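With velocity u , pressure p , density ρ , gravitational acceleration g and kinematic viscosity ν = μ/ρ, one standard incompressible form of these equations (a sketch; Coriolis and other rotating-frame terms are added when working in a frame rotating with the planet) is D u / D t = ∂ u / ∂ t + ( u ⋅ ∇ ) u = − ( 1 / ρ ) ∇ p + ν ∇ 2 u + g {\displaystyle {\frac {D\mathbf {u} }{Dt}}={\frac {\partial \mathbf {u} }{\partial t}}+(\mathbf {u} \cdot \nabla )\mathbf {u} =-{\frac {1}{\rho }}\nabla p+\nu \nabla ^{2}\mathbf {u} +\mathbf {g} } together with the incompressibility condition ∇ ⋅ u = 0 {\displaystyle \nabla \cdot \mathbf {u} =0} .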
The left hand side represents the acceleration that a small parcel of fluid would experience in a reference frame that moved with the parcel (a Lagrangian frame of reference ). In a stationary (Eulerian) frame of reference, this acceleration is divided into the local rate of change of velocity and advection , a measure of the rate of flow in or out of a small region. [ 2 ] : 44–45
The equation for energy conservation is essentially an equation for heat flow. If heat is transported by conduction , the heat flow is governed by a diffusion equation. If there are also buoyancy effects, for example hot air rising, then natural convection , also known as free convection, can occur. [ 2 ] : 171 Convection in the Earth's outer core drives the geodynamo that is the source of the Earth's magnetic field . [ 3 ] : Chapter 8 In the ocean, convection can be thermal (driven by heat), haline (where the buoyancy is due to differences in salinity), or thermohaline , a combination of the two. [ 4 ]
Fluid that is less dense than its surroundings tends to rise until it has the same density as its surroundings. If there is not much energy input to the system, it will tend to become stratified . On a large scale, Earth's atmosphere is divided into a series of layers . Going upwards from the ground, these are the troposphere , stratosphere , mesosphere , thermosphere , and exosphere . [ 5 ]
The density of air is mainly determined by temperature and water vapor content, the density of sea water by temperature and salinity , and the density of lake water by temperature. Where stratification occurs, there may be thin layers in which temperature or some other property changes more rapidly with height or depth than the surrounding fluid. Depending on the main sources of buoyancy, this layer may be called a pycnocline (density), thermocline (temperature), halocline (salinity), or chemocline (chemistry, including oxygenation).
The same buoyancy that gives rise to stratification also drives gravity waves . If the gravity waves occur within the fluid, they are called internal waves . [ 2 ] : 208–214
In modeling buoyancy-driven flows, the Navier-Stokes equations are modified using the Boussinesq approximation . This ignores variations in density except where they are multiplied by the gravitational acceleration g . [ 2 ] : 188
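As a sketch, writing ρ = ρ 0 + ρ ′ {\displaystyle \rho =\rho _{0}+\rho '} with ρ ′ ≪ ρ 0 {\displaystyle \rho '\ll \rho _{0}} and p ′ {\displaystyle p'} the corresponding pressure perturbation, the Boussinesq momentum equation takes the form D u / D t = − ( 1 / ρ 0 ) ∇ p ′ + ν ∇ 2 u − ( ρ ′ / ρ 0 ) g z ^ {\displaystyle {\frac {D\mathbf {u} }{Dt}}=-{\frac {1}{\rho _{0}}}\nabla p'+\nu \nabla ^{2}\mathbf {u} -{\frac {\rho '}{\rho _{0}}}g\,{\hat {\mathbf {z} }}} , where z ^ {\displaystyle {\hat {\mathbf {z} }}} is the upward unit vector: the density variation survives only in the buoyancy term.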
If the pressure depends only on density and vice versa, the fluid dynamics are called barotropic . In the atmosphere, this corresponds to a lack of fronts, as in the tropics . If there are fronts, the flow is baroclinic , and instabilities such as cyclones can occur. [ 6 ] | https://en.wikipedia.org/wiki/Geophysical_fluid_dynamics |
Geophysics ( / ˌ dʒ iː oʊ ˈ f ɪ z ɪ k s / ) is a subject of natural science concerned with the physical processes and properties of Earth and its surrounding space environment, and the use of quantitative methods for their analysis. Geophysicists conduct investigations across a wide range of scientific disciplines. The term geophysics classically refers to solid earth applications: Earth's shape ; its gravitational , magnetic fields , and electromagnetic fields ; its internal structure and composition ; its dynamics and their surface expression in plate tectonics , the generation of magmas , volcanism and rock formation. [ 1 ] However, modern geophysics organizations and pure scientists use a broader definition that includes the water cycle including snow and ice; fluid dynamics of the oceans and the atmosphere ; electricity and magnetism in the ionosphere and magnetosphere and solar-terrestrial physics ; and analogous problems associated with the Moon and other planets. [ 1 ] [ 2 ] [ 3 ] [ 4 ] [ 5 ] [ 6 ]
Although geophysics was only recognized as a separate discipline in the 19th century, its origins date back to ancient times. The first magnetic compasses were made from lodestones , while more modern magnetic compasses played an important role in the history of navigation. The first seismic instrument was built in 132 AD. Isaac Newton applied his theory of mechanics to the tides and the precession of the equinoxes ; and instruments were developed to measure the Earth's shape, density and gravity field, as well as the components of the water cycle. In the 20th century, geophysical methods were developed for remote exploration of the solid Earth and the ocean, and geophysics played an essential role in the development of the theory of plate tectonics.
Geophysics is pursued for fundamental understanding of the Earth and its space environment. Geophysics often addresses societal needs, such as mineral resources , assessment and mitigation of natural hazards and environmental impact assessment . [ 2 ] In exploration geophysics , geophysical survey data are used to analyze potential petroleum reservoirs and mineral deposits, locate groundwater, find archaeological relics, determine the thickness of glaciers and soils, and assess sites for environmental remediation .
Geophysics is a highly interdisciplinary subject, and geophysicists contribute to every area of the Earth sciences , while some geophysicists conduct research in the planetary sciences . To provide a clearer idea of what constitutes geophysics, this section describes phenomena that are studied in physics and how they relate to the Earth and its surroundings. Geophysicists also investigate the physical processes and properties of the Earth, its fluid layers, and magnetic field along with the near-Earth environment in the Solar System , which includes other planetary bodies.
The gravitational pull of the Moon and Sun gives rise to two high tides and two low tides every lunar day, or every 24 hours and 50 minutes. Therefore, there is a gap of 12 hours and 25 minutes between every high tide and between every low tide. [ 7 ]
Gravitational forces make rocks press down on deeper rocks, increasing their density as the depth increases. [ 8 ] Measurements of gravitational acceleration and gravitational potential at the Earth's surface and above it can be used to look for mineral deposits (see gravity anomaly and gravimetry ). [ 9 ] The surface gravitational field provides information on the dynamics of tectonic plates . The geopotential surface called the geoid is one definition of the shape of the Earth. The geoid would be the global mean sea level if the oceans were in equilibrium and could be extended through the continents (such as with very narrow canals). [ 10 ]
Seismic waves are vibrations that travel through the Earth's interior or along its surface. [ 11 ] The entire Earth can also oscillate in forms that are called normal modes or free oscillations of the Earth . Ground motions from waves or normal modes are measured using seismographs . If the waves come from a localized source such as an earthquake or explosion, measurements at more than one location can be used to locate the source. The locations of earthquakes provide information on plate tectonics and mantle convection. [ 12 ] [ 13 ]
Recording of seismic waves from controlled sources provides information on the region that the waves travel through. If the density or composition of the rock changes, waves are reflected. Reflections recorded using reflection seismology can provide a wealth of information on the structure of the earth up to several kilometers deep and are used to increase our understanding of the geology as well as to explore for oil and gas. [ 9 ] Changes in the travel direction, called refraction , can be used to infer the deep structure of the Earth . [ 13 ]
Earthquakes pose a risk to humans . Understanding their mechanisms, which depend on the type of earthquake (e.g., intraplate or deep focus ), can lead to better estimates of earthquake risk and improvements in earthquake engineering . [ 14 ]
Although we mainly notice electricity during thunderstorms , there is always a downward electric field near the surface that averages 120 volts per meter. [ 15 ] The ionization of the planet's atmosphere by penetrating galactic cosmic rays leaves the atmosphere with a net positive charge relative to the solid Earth. [ 16 ] A current of about 1800 amperes flows in the global circuit. [ 15 ] It flows downward from the ionosphere over most of the Earth and back upwards through thunderstorms. The flow is manifested by lightning below the clouds and sprites above.
A variety of electric methods are used in geophysical survey. Some measure spontaneous potential , a potential that arises in the ground because of human-made or natural disturbances. Telluric currents flow in Earth and the oceans. They have two causes: electromagnetic induction by the time-varying, external-origin geomagnetic field and motion of conducting bodies (such as seawater) across the Earth's permanent magnetic field. [ 17 ] The distribution of telluric current density can be used to detect variations in electrical resistivity of underground structures. Geophysicists can also provide the electric current themselves (see induced polarization and electrical resistivity tomography ).
Electromagnetic waves occur in the ionosphere and magnetosphere as well as in Earth's outer core . Dawn chorus is believed to be caused by high-energy electrons that get caught in the Van Allen radiation belt . Whistlers are produced by lightning strikes. Hiss may be generated by both. Electromagnetic waves may also be generated by earthquakes (see seismo-electromagnetics ).
In the highly conductive liquid iron of the outer core, magnetic fields are generated by electric currents through electromagnetic induction. Alfvén waves are magnetohydrodynamic waves in the magnetosphere or the Earth's core. In the core, they probably have little observable effect on the Earth's magnetic field, but slower waves such as magnetic Rossby waves may be one source of geomagnetic secular variation . [ 18 ]
Electromagnetic methods that are used for geophysical survey include transient electromagnetics , magnetotellurics , surface nuclear magnetic resonance and electromagnetic seabed logging. [ 19 ]
The Earth's magnetic field protects the Earth from the deadly solar wind and has long been used for navigation. It originates in the fluid motions of the outer core. [ 18 ] The magnetic field in the upper atmosphere gives rise to the auroras . [ 20 ]
The Earth's field is roughly like a tilted dipole , but it changes over time (a phenomenon called geomagnetic secular variation). Mostly the geomagnetic pole stays near the geographic pole , but at random intervals averaging 440,000 to a million years or so, the polarity of the Earth's field reverses. These geomagnetic reversals , analyzed within a Geomagnetic Polarity Time Scale , contain 184 polarity intervals in the last 83 million years, with change in frequency over time, with the most recent brief complete reversal of the Laschamp event occurring 41,000 years ago during the last glacial period . Geologists observed geomagnetic reversal recorded in volcanic rocks, through magnetostratigraphy correlation (see natural remanent magnetization ) and their signature can be seen as parallel linear magnetic anomaly stripes on the seafloor. These stripes provide quantitative information on seafloor spreading , a part of plate tectonics. They are the basis of magnetostratigraphy , which correlates magnetic reversals with other stratigraphies to construct geologic time scales. [ 22 ] In addition, the magnetization in rocks can be used to measure the motion of continents. [ 18 ]
Radioactive decay accounts for about 80% of the Earth's internal heat , powering the geodynamo and plate tectonics. [ 23 ] The main heat-producing isotopes are potassium-40 , uranium-238 , uranium-235, and thorium-232 . [ 24 ] Radioactive elements are used for radiometric dating , the primary method for establishing an absolute time scale in geochronology .
Unstable isotopes decay at predictable rates, and the decay rates of different isotopes cover several orders of magnitude, so radioactive decay can be used to accurately date both recent events and events in past geologic eras . [ 25 ] Radiometric mapping using ground and airborne gamma spectrometry can be used to map the concentration and distribution of radioisotopes near the Earth's surface, which is useful for mapping lithology and alteration. [ 26 ] [ 27 ]
Fluid motions occur in the magnetosphere, atmosphere , ocean, mantle and core. Even the mantle, though it has an enormous viscosity , flows like a fluid over long time intervals. This flow is reflected in phenomena such as isostasy , post-glacial rebound and mantle plumes . The mantle flow drives plate tectonics and the flow in the Earth's core drives the geodynamo. [ 18 ]
Geophysical fluid dynamics is a primary tool in physical oceanography and meteorology . The rotation of the Earth has profound effects on the Earth's fluid dynamics, often due to the Coriolis effect . In the atmosphere, it gives rise to large-scale patterns like Rossby waves and determines the basic circulation patterns of storms. In the ocean, it drives large-scale circulation patterns as well as Kelvin waves and Ekman spirals at the ocean surface. [ 28 ] In the Earth's core, the circulation of the molten iron is structured by Taylor columns . [ 18 ]
Waves and other phenomena in the magnetosphere can be modeled using magnetohydrodynamics .
The Earth is cooling, and the resulting heat flow generates the Earth's magnetic field through the geodynamo and plate tectonics through mantle convection . [ 29 ] The main sources of heat are primordial heat released as the Earth cools and radioactivity in the planet's upper crust. [ 30 ] There are also some contributions from phase transitions . Heat is mostly carried to the surface by thermal convection , although there are two thermal boundary layers – the core–mantle boundary and the lithosphere – in which heat is transported by conduction . [ 31 ] Some heat is carried up from the bottom of the mantle by mantle plumes . The heat flow at the Earth's surface is about 4.2 × 10 13 W , and it is a potential source of geothermal energy. [ 32 ]
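The global figure of 4.2 × 10 13 W quoted above corresponds to a modest average flux once spread over the planet's surface; the arithmetic is a one-liner:

```python
# Mean surface heat flux implied by a global heat flow of ~4.2e13 W.
import math

GLOBAL_HEAT_FLOW_W = 4.2e13
EARTH_RADIUS_M = 6.371e6

surface_area_m2 = 4.0 * math.pi * EARTH_RADIUS_M ** 2  # ~5.1e14 m^2
print(GLOBAL_HEAT_FLOW_W / surface_area_m2)            # ~0.082 W/m^2, i.e. ~82 mW/m^2
```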
The physical properties of minerals must be understood to infer the composition of the Earth's interior from seismology , the geothermal gradient and other sources of information. Mineral physicists study the elastic properties of minerals; their high-pressure phase diagrams , melting points and equations of state at high pressure; and the rheological properties of rocks, or their ability to flow. Deformation of rocks by creep makes flow possible, although over short time scales the rocks are brittle. The viscosity of rocks is affected by temperature and pressure, and in turn determines the rates at which tectonic plates move. [ 8 ]
Water is a very complex substance and its unique properties are essential for life. [ 33 ] Its physical properties shape the hydrosphere and are an essential part of the water cycle and climate . Its thermodynamic properties determine evaporation and the thermal gradient in the atmosphere . The many types of precipitation involve a complex mixture of processes such as coalescence , supercooling and supersaturation . [ 34 ] Some precipitated water becomes groundwater , and groundwater flow includes phenomena such as percolation , while the conductivity of water makes electrical and electromagnetic methods useful for tracking groundwater flow. Physical properties of water such as salinity have a large effect on its motion in the oceans. [ 28 ]
The many phases of ice form the cryosphere and come in forms like ice sheets , glaciers , sea ice , freshwater ice, snow, and frozen ground (or permafrost ). [ 35 ]
Contrary to popular belief, the Earth is not a perfect sphere but generally an ellipsoid in shape, a result of the centrifugal force generated by the planet's constant rotation. [ 36 ] This force causes the planet's diameter to bulge toward the Equator, producing the ellipsoid shape. [ 36 ] Earth's shape is also constantly changing; factors including glacial isostatic rebound (the crust rebounding after large ice sheets melt and release their pressure [ 37 ] ), geological features such as mountains and ocean trenches , tectonic plate dynamics, and natural disasters can further distort the planet's shape. [ 36 ]
Evidence from seismology , heat flow at the surface, and mineral physics is combined with the Earth's mass and moment of inertia to infer models of the Earth's interior – its composition, density, temperature, and pressure. For example, the Earth's mean specific gravity ( 5.515 ) is far higher than the typical specific gravity of rocks at the surface ( 2.7–3.3 ), implying that the deeper material is denser. This is also implied by its low moment of inertia ( 0.33 MR² , compared to 0.4 MR² for a sphere of constant density). However, some of the density increase is due to compression under the enormous pressures inside the Earth. The effect of pressure can be calculated using the Adams–Williamson equation . The conclusion is that pressure alone cannot account for the increase in density; instead, the Earth's core must be composed of a denser material, an alloy of iron and other elements. [ 8 ]
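The Adams–Williamson equation mentioned above relates the density gradient in a chemically homogeneous, self-compressing layer to gravity and the seismic parameter Φ = Vp² − (4/3)Vs². The following is a minimal numerical sketch of an inward integration; the layer boundaries, starting density, enclosed mass, and Φ are placeholder values for illustration, not a real Earth model such as PREM.

```python
# Inward integration of the Adams-Williamson equation,
#     d(rho)/dr = -rho(r) * g(r) / Phi(r),
# where g(r) = G * m(r) / r^2 and m(r) is the mass enclosed within radius r.
import math

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def integrate_density(r_top, r_bottom, rho_top, mass_enclosed_top, phi, steps=100000):
    """Density at r_bottom, assuming a constant seismic parameter phi (m^2/s^2)."""
    dr = (r_top - r_bottom) / steps
    r, rho, mass = r_top, rho_top, mass_enclosed_top
    for _ in range(steps):
        g = G * mass / r ** 2                      # local gravity
        mass -= 4.0 * math.pi * r ** 2 * rho * dr  # strip off a thin shell
        r -= dr
        rho += rho * g / phi * dr                  # density grows with depth
    return rho

# Placeholder mantle-like layer from the surface down to ~core-mantle boundary.
print(integrate_density(6.371e6, 3.48e6, 3.3e3, 5.97e24, 5.0e7))
```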
Reconstructions of seismic waves in the deep interior of the Earth show that there are no S-waves in the outer core. This indicates that the outer core is liquid, because liquids cannot support shear; the motion of this highly conductive fluid generates the Earth's field. Earth's inner core , however, is solid because of the enormous pressure. [ 10 ]
Reconstruction of seismic reflections in the deep interior indicates some major discontinuities in seismic velocities that demarcate the major zones of the Earth: inner core , outer core , mantle, lithosphere and crust . The mantle itself is divided into the upper mantle , transition zone, lower mantle and D′′ layer. Between the crust and the mantle is the Mohorovičić discontinuity . [ 10 ]
The seismic model of the Earth does not by itself determine the composition of the layers. For a complete model of the Earth, mineral physics is needed to interpret seismic velocities in terms of composition. The mineral properties are temperature-dependent, so the geotherm must also be determined. This requires physical theory for thermal conduction and convection and the heat contribution of radioactive elements . The main model for the radial structure of the interior of the Earth is the preliminary reference Earth model (PREM). Some parts of this model have been updated by recent findings in mineral physics (see post-perovskite ) and supplemented by seismic tomography . The mantle is mainly composed of silicates , and the boundaries between layers of the mantle are consistent with phase transitions. [ 8 ]
The mantle acts as a solid for seismic waves, but under high pressures and temperatures, it deforms so that over millions of years it acts like a liquid. This makes plate tectonics possible.
If a planet's magnetic field is strong enough, its interaction with the solar wind forms a magnetosphere. Early space probes mapped out the gross dimensions of the Earth's magnetic field, which extends about 10 Earth radii towards the Sun. The solar wind, a stream of charged particles, streams out and around the terrestrial magnetic field, and continues behind the magnetic tail , hundreds of Earth radii downstream. Inside the magnetosphere, there are relatively dense regions of solar wind particles called the Van Allen radiation belts. [ 20 ]
Geophysical measurements are generally at a particular time and place. Accurate measurements of position, along with earth deformation and gravity, are the province of geodesy . While geodesy and geophysics are separate fields, the two are so closely connected that many scientific organizations such as the American Geophysical Union , the Canadian Geophysical Union and the International Union of Geodesy and Geophysics encompass both. [ 38 ]
Absolute positions are most frequently determined using the global positioning system (GPS). A three-dimensional position is calculated using messages from four or more visible satellites and referred to the 1980 Geodetic Reference System . An alternative, optical astronomy , combines astronomical coordinates and the local gravity vector to get geodetic coordinates. This method only provides the position in two coordinates and is more difficult to use than GPS. However, it is useful for measuring motions of the Earth such as nutation and Chandler wobble . Relative positions of two or more points can be determined using very-long-baseline interferometry . [ 38 ] [ 39 ] [ 40 ]
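Conceptually, the GPS fix described above is a nonlinear least-squares problem: each satellite contributes one pseudorange equation in the three position coordinates plus the receiver clock bias, which is why at least four satellites are needed. Below is a minimal Gauss–Newton sketch; the satellite coordinates and pseudoranges are synthetic illustrative numbers, and real processing adds many corrections (atmospheric delays, satellite clocks, relativity).

```python
# Solve p_i = ||s_i - x|| + b for receiver position x and clock bias b
# (both in metres) from four or more satellites at known positions s_i.
import numpy as np

def solve_position(sat_positions, pseudoranges, iterations=10):
    state = np.zeros(4)  # [x, y, z, clock bias]; start at the Earth's centre
    for _ in range(iterations):
        diffs = sat_positions - state[:3]
        ranges = np.linalg.norm(diffs, axis=1)
        residuals = pseudoranges - (ranges + state[3])
        # Each Jacobian row: [-unit vector towards the satellite, 1]
        J = np.hstack([-diffs / ranges[:, None], np.ones((len(ranges), 1))])
        state += np.linalg.lstsq(J, residuals, rcond=None)[0]
    return state[:3], state[3]

# Synthetic test: invent a truth, build consistent pseudoranges, recover it.
truth, bias = np.array([1.2e6, -2.3e6, 5.9e6]), 150.0
sats = np.array([[2.6e7, 0, 0], [0, 2.6e7, 0], [0, 0, 2.6e7], [1.5e7, 1.5e7, 1.5e7]])
pr = np.linalg.norm(sats - truth, axis=1) + bias
print(solve_position(sats, pr))  # recovers truth and the 150 m clock bias
```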
Gravity measurements became part of geodesy because they were needed to relate measurements at the surface of the Earth to the reference coordinate system. Gravity measurements on land can be made using gravimeters deployed either on the surface or in helicopter flyovers. Since the 1960s, the Earth's gravity field has been measured by analyzing the motion of satellites. Sea level can also be measured by satellites using radar altimetry , contributing to a more accurate geoid . [ 38 ] In 2002, NASA launched the Gravity Recovery and Climate Experiment (GRACE), in which twin satellites map variations in Earth's gravity field by measuring the distance between the two spacecraft using GPS and a microwave ranging system. Gravity variations detected by GRACE include those caused by changes in ocean currents, runoff and groundwater depletion, and melting ice sheets and glaciers. [ 41 ]
Satellites in space have made it possible to collect data from not only the visible light region, but in other areas of the electromagnetic spectrum . The planets can be characterized by their force fields: gravity and their magnetic fields , which are studied through geophysics and space physics.
Measuring the changes in acceleration experienced by spacecraft as they orbit has allowed fine details of the gravity fields of the planets to be mapped. For example, in the 1970s, the gravity field disturbances above lunar maria were measured through lunar orbiters , which led to the discovery of concentrations of mass, mascons , beneath the Imbrium , Serenitatis , Crisium , Nectaris and Humorum basins. [ 42 ]
Since geophysics is concerned with the shape of the Earth, and by extension the mapping of features around and in the planet, geophysical measurements include high-accuracy GPS measurements. These measurements are processed to increase their accuracy through differential GPS processing. Once the geophysical measurements have been processed and inverted, the interpreted results are plotted using GIS. Programs such as ArcGIS and Geosoft were built to meet these needs and include many built-in geophysical functions, such as upward continuation and the calculation of measurement derivatives such as the first vertical derivative. [ 9 ] [ 43 ] Many geophysics companies have designed in-house geophysics programs that pre-date ArcGIS and Geosoft in order to meet the visualization requirements of a geophysical dataset.
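Upward continuation, one of the built-in operations mentioned above, has a particularly clean form in the wavenumber domain: each Fourier component of the gridded field is attenuated by exp(−|k|h) for continuation height h. A minimal sketch, assuming a regularly gridded map and ignoring the edge padding and tapering that production software applies:

```python
# FFT-based upward continuation of a gridded potential-field map.
import numpy as np

def upward_continue(grid, dx, dy, height):
    """Return the field continued upward by `height` (same units as dx, dy)."""
    ny, nx = grid.shape
    kx = 2.0 * np.pi * np.fft.fftfreq(nx, d=dx)         # angular wavenumbers
    ky = 2.0 * np.pi * np.fft.fftfreq(ny, d=dy)
    k = np.hypot(*np.meshgrid(kx, ky))                  # radial wavenumber |k|
    spectrum = np.fft.fft2(grid) * np.exp(-k * height)  # damp short wavelengths
    return np.fft.ifft2(spectrum).real

# Toy example: a sharp anomaly smooths out when viewed 100 m higher.
field = np.zeros((64, 64)); field[32, 32] = 1.0
print(upward_continue(field, 25.0, 25.0, 100.0)[32, 32])  # peak is reduced
```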
Exploration geophysics is a branch of applied geophysics that involves the development and use of seismic and electromagnetic methods with the aim of investigating energy, mineral and water resources. [ 44 ] This is done using various remote sensing platforms, such as satellites , aircraft , boats , drones , borehole sensing equipment and seismic receivers . This equipment is often used in conjunction with geophysical methods, such as magnetic , gravimetric , electromagnetic , radiometric and barometric methods, in order to gather the data. The remote sensing platforms used in exploration geophysics are not perfect and need adjustments in order to accurately account for the effects that the platform itself may have on the collected data. For example, when gathering aeromagnetic data (aircraft-gathered magnetic data) using a conventional fixed-wing aircraft, the platform has to be adjusted to account for the electromagnetic currents that it may generate as it passes through Earth's magnetic field . [ 9 ] There are also corrections related to changes in measured potential field intensity as the Earth rotates, as the Earth orbits the Sun, and as the Moon orbits the Earth. [ 9 ] [ 43 ]
Geophysical measurements are often recorded as time-series with GPS location. Signal processing involves the correction of time-series data for unwanted noise or errors introduced by the measurement platform, such as aircraft vibrations in gravity data. It also involves the reduction of sources of noise, such as diurnal corrections in magnetic data. [ 9 ] [ 43 ] In seismic, electromagnetic, and gravity data, processing continues after error corrections to include computational geophysics, which results in a final geological interpretation of the geophysical measurements. [ 9 ] [ 43 ]
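As one concrete instance of the corrections just described, a diurnal correction for magnetic data subtracts the slow daily field variation recorded at a fixed base station from the moving sensor's readings. A minimal sketch, assuming both instruments log total field in nanoteslas against a common, sorted time base (the variable names are illustrative):

```python
# Diurnal correction: interpolate the base-station record to the rover's
# timestamps and subtract it, restoring the base's mean as a datum level.
import numpy as np

def diurnal_correct(rover_t, rover_nT, base_t, base_nT):
    base_at_rover = np.interp(rover_t, base_t, base_nT)  # base_t must be sorted
    return rover_nT - base_at_rover + base_nT.mean()
```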
Geophysics emerged as a separate discipline only in the 19th century, from the intersection of physical geography , geology , astronomy , meteorology, and physics. [ 45 ] [ 46 ] The first known use of the word geophysics was in German ("Geophysik") by Julius Fröbel in 1834. [ 47 ] However, many geophysical phenomena – such as the Earth's magnetic field and earthquakes – have been investigated since the ancient era .
The magnetic compass existed in China back as far as the fourth century BC. It was used as much for feng shui as for navigation on land. It was not until good steel needles could be forged that compasses were used for navigation at sea; before that, they could not retain their magnetism long enough to be useful. The first mention of a compass in Europe was in 1190 AD. [ 48 ]
Around 240 BC, Eratosthenes of Cyrene deduced that the Earth was round and measured its circumference with great precision. [ 49 ] He developed a system of latitude and longitude . [ 50 ]
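Eratosthenes' measurement amounts to a single proportion: if the Sun's rays at two sites on the same meridian differ in angle by θ, and the sites are a distance d apart, the circumference is (360°/θ) · d. Using the classically reported figures:

```python
# Classically reported values: a 7.2 degree shadow-angle difference and
# 5000 stadia between Alexandria and Syene.
theta_deg, arc_stadia = 7.2, 5000
print(360.0 / theta_deg * arc_stadia)  # 250000.0 stadia for the circumference
```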
Perhaps the earliest contribution to seismology was the invention of a seismoscope by the prolific inventor Zhang Heng in 132 AD. [ 51 ] This instrument was designed to drop a bronze ball from the mouth of a dragon into the mouth of a toad. By looking at which of eight toads had the ball, one could determine the direction of the earthquake. It was 1571 years before the first design for a seismoscope was published in Europe, by Jean de la Hautefeuille . It was never built. [ 52 ]
The 17th century saw major milestones that marked the beginning of modern science. In 1600, William Gilbert released a publication titled De Magnete , in which he described a series of experiments on both natural magnets (called ' lodestones ') and artificially magnetized iron. [ 53 ] His experiments led to observations involving a small compass needle ( versorium ) that replicated magnetic behaviour when subjected to a spherical magnet, along with experiencing ' magnetic dips ' when pivoted on a horizontal axis. [ 53 ] His findings led to the deduction that compasses point north because the Earth itself is a giant magnet. [ 53 ]
In 1687 Isaac Newton published his work titled Principia , which was pivotal in the development of modern scientific fields such as astronomy and physics . [ 54 ] In it, Newton both laid the foundations for classical mechanics and gravitation and explained various geophysical phenomena, such as the precession of the equinoxes (the slow drift of the apparent positions of the stars along the ecliptic ). [ 55 ] Newton's theory of gravity was so successful that it changed the main objective of physics in that era to the unraveling of nature's fundamental forces and their characterization in laws. [ 54 ]
The first seismometer , an instrument capable of keeping a continuous record of seismic activity, was built by James Forbes in 1844. [ 52 ] | https://en.wikipedia.org/wiki/Geophysics |
A geopolymer is an inorganic , often ceramic -like material, that forms a stable, covalently bonded , non-crystalline to semi-crystalline network through the reaction of aluminosilicate materials with an alkaline or acidic solution. Many geopolymers may also be classified as alkali-activated cements or acid-activated binders. [ 1 ] They are mainly produced by a chemical reaction between a chemically reactive aluminosilicate powder (e.g. metakaolin or other clay-derived powders, natural pozzolan , or suitable glasses) and an aqueous solution ( alkaline or acidic ) that causes this powder to react and re-form into a solid monolith. The most common pathway to produce geopolymers is the reaction of metakaolin with sodium silicate , which is an alkaline solution, but other processes are also possible. [ 2 ]
The term geopolymer was coined by Joseph Davidovits in 1978 due to the rock-forming minerals of geological origin used in the synthesis process. [ 3 ] These materials and associated terminology were popularized over the following decades via his work with the Institut Géopolymère (Geopolymer Institute) . [ 3 ]
Geopolymers are synthesized under one of two conditions: in an alkaline medium or in an acidic medium.
The alkaline route is the most important in terms of research and development and commercial applications. Details on the acidic route have also been published. [ 4 ] [ 5 ]
Commercially produced geopolymers may be used for fire- and heat-resistant coatings and adhesives, medicinal applications, high-temperature ceramics, new binders for fire-resistant fiber composites, toxic and radioactive waste encapsulation, and as cementing components in making or repairing concretes. Due to the increasing demand for low-emission building materials, geopolymer technology is being developed as a lower-CO₂ alternative to traditional Portland cement, with the potential for widespread use in concrete production. [ 6 ] The properties and uses of geopolymers are being explored in many scientific and industrial disciplines such as modern inorganic chemistry , physical chemistry , colloid chemistry , mineralogy , geology , and in other types of engineering process technologies. In addition to their use in construction, geopolymers are utilized in resins, coatings, and adhesives for aerospace, automotive, and protective applications.
In the 1950s, Viktor Glukhovsky developed concrete materials originally known as "soil silicate concretes" and "soil cements", [ 7 ] but since the introduction of the geopolymer concept by Joseph Davidovits , the terminology and definitions of the word geopolymer have become more diverse and often conflicting. The word geopolymer is sometimes used to refer to naturally occurring organic macromolecules ; [ 8 ] that sense of the word differs from the now-more-common use of this terminology to discuss inorganic materials which can have either cement -like or ceramic -like character.
A geopolymer is essentially a mineral chemical compound or mixture of compounds consisting of repeating units, for example silico-oxide (-Si-O-Si-O-), silico-aluminate (-Si-O-Al-O-), ferro-silico-aluminate (-Fe-O-Si-O-Al-O-) or alumino-phosphate (-Al-O-P-O-), created through a process of geopolymerization. [ 9 ] This method of describing mineral synthesis (geosynthesis) was first presented by Davidovits at an IUPAC symposium in 1976. [ 10 ]
Even within the context of inorganic materials, there exist various definitions of the word geopolymer, which can include a relatively wide variety of low-temperature synthesized solid materials. [ 11 ] The most typical geopolymer is generally described as resulting from the reaction between metakaolin (calcined kaolinitic clay ) and a solution of sodium or potassium silicate ( waterglass ). Geopolymerization tends to result in a highly connected, disordered network of negatively charged tetrahedral oxide units balanced by the sodium or potassium ions.
In the simplest form, an example chemical formula for a geopolymer can be written as Na 2 O·Al 2 O 3 ·nSiO 2 ·wH 2 O, where n is usually between 2 and 4, and w is around 11-15. Geopolymers can be formulated with a wide variety of substituents in both the framework (silicon, aluminium) and non-framework (sodium) sites; most commonly potassium or calcium takes on the non-framework sites, but iron or phosphorus can in principle replace some of the aluminum or silicon. [ citation needed ]
Geopolymerization usually occurs at ambient or slightly elevated temperature; the solid aluminosilicate raw materials (e.g. metakaolin) dissolve into the alkaline solution, then cross-link and polymerize into a growing gel phase, which then continues to set, harden, and gain strength.
The fundamental unit within a geopolymer structure is a tetrahedral complex consisting of silicon or aluminum coordinated through covalent bonds to four oxygens. The geopolymer framework results from the cross-linking between these tetrahedra, which leads to a 3-dimensional aluminosilicate network, where the negative charge associated with tetrahedral aluminium is balanced by a small cationic species, most commonly an alkali metal cation (Na+, K+ etc). These alkali metal cations are often ion-exchangeable , as they are associated with, but only loosely bonded to the main covalent network, similarly to the non-framework cations present in zeolites .
Geopolymerization is the process of combining many small molecules known as oligomers into a covalently bonded network. This reaction process takes place via formation of oligomers (dimer, trimer, tetramer, pentamer) which are believed to contribute to the formation of the actual structure of the three-dimensional macromolecular framework, either through direct incorporation or through rearrangement via monomeric species. [ 12 ] These oligomers are named by some geopolymer chemists as sialates following the scheme developed by Davidovits, [ 3 ] although this terminology is not universally accepted within the research community due in part to confusion with the earlier (1952) use of the same word to refer to the salts of the important biomolecule sialic acid . [ 6 ]
The image shows five examples of small oligomeric potassium aluminosilicate species (labelled in the diagram according to the poly(sialate) / poly(sialate-siloxo) nomenclature), which are key intermediates in potassium-based alumino-silicate geopolymerization. The aqueous chemistry of aluminosilicate oligomers is complex, [ 13 ] and plays an important role in the discussion of zeolite synthesis, a process which has many details in common with geopolymerization.
Example of geopolymerization of a metakaolin precursor, in an alkaline medium [ 14 ]
The reaction process broadly involves four main stages: dissolution of the solid aluminosilicate precursor into the solution; formation and cross-linking of aluminosilicate oligomers; polymerization of the oligomers into a growing three-dimensional gel network; and setting, hardening, and strength gain.
The reaction processes involving other aluminosilicate precursors (e.g. low-calcium fly ash , crushed or synthetic glasses, natural pozzolans ) are broadly similar to the steps described above.
Geopolymerization forms aluminosilicate frameworks that are similar to those of some rock-forming minerals, but lacking in long-range crystalline order, and generally containing water both in chemically bound sites (hydroxyl groups) and in molecular form as pore water. This water can be removed at temperatures above 100–200 °C. Cation hydration, and the locations and mobility of water molecules in pores, are important for lower-temperature applications, such as the use of geopolymers as cements. [ 15 ] [ 16 ] The figure shows a geopolymer containing both bound water (Si-OH groups) and free water (left in the figure). Some water is associated with the framework similarly to zeolitic water, and some is in larger pores and can be readily released and removed. After dehydroxylation (and dehydration), generally above 250 °C, geopolymers can crystallise above 800–1000 °C (depending on the nature of the alkali cation present). [ 17 ]
There exists a wide variety of potential and existing applications. Some geopolymer applications are still in development, whereas others are already industrialized and commercialized. [ 18 ] They fall into three major categories: geopolymer cements and concretes; geopolymer ceramics; and applications in arts and archaeology.
From a terminological point of view, geopolymer cement [ 19 ] is a binding system that hardens at room temperature, like regular Portland cement .
Geopolymer cement is being developed and utilised as an alternative to conventional Portland cement for use in transportation, infrastructure, construction and offshore applications. [ citation needed ]
Production of geopolymer cement requires an aluminosilicate precursor material such as metakaolin or fly ash , a user-friendly alkaline reagent [ 20 ] [ promotional source? ] (for example, sodium or potassium soluble silicates with a molar ratio (MR) SiO 2 :M 2 O ≥ 1.65, M being sodium or potassium) and water (See the definition for "user-friendly" reagent below). Room temperature hardening is more readily achieved with the addition of a source of calcium cations, often blast furnace slag . [ citation needed ]
Geopolymer cements can be formulated to cure more rapidly than Portland-based cements; some mixes gain most of their ultimate strength within 24 hours. However, they must also set slowly enough that they can be mixed at a batch plant, either for pre-casting or delivery in a concrete mixer. Geopolymer cement also has the ability to form a strong chemical bond with silicate rock-based aggregates . [ 21 ]
There is often confusion between the meanings of the terms 'geopolymer cement' and 'geopolymer concrete'. A cement is a binder, whereas concrete is the composite material resulting from the mixing and hardening of cement with water (or an alkaline solution in the case of geopolymer cement), and stone aggregates. Materials of both types (geopolymer cements and geopolymer concretes) are commercially available in various markets internationally. [ citation needed ]
There exists some confusion in the terminology applied to geopolymers, alkali-activated cements and concretes, and related materials, which have been described by a variety of names including also "soil silicate concretes" and "soil cements". [ 7 ] Terminology related to alkali-activated materials or alkali-activated geopolymers is also in wide (but debated) use. These cements, sometimes abbreviated AAM, encompass the specific fields of alkali-activated slags, alkali-activated coal fly ashes , and various blended cementing systems.
Geopolymerization uses chemical ingredients that may be dangerous and therefore requires some safety procedures. Material Safety rules classify the alkaline products in two categories: corrosive products (named here: hostile) and irritant products (named here: friendly). [ 20 ]
The table lists some alkaline chemicals and their corresponding safety labels. [ 22 ] Alkaline reagents belonging to the second (less elevated pH) class may also be termed user-friendly , although the irritant nature of the alkaline component and the potential inhalation risk of powders still require the selection and use of appropriate personal protective equipment , as in any situation where chemicals or powders are handled.
Some alkali-activated cements , as described in numerous published recipes (especially those based on fly ashes), use alkali silicates with molar ratios SiO 2 :M 2 O below 1.20, or are based on concentrated NaOH. These conditions are not considered as user-friendly as when more moderate pH values are used, and require careful consideration of chemical safety handling laws, regulations, and state directives.
Conversely, geopolymer cement recipes employed in the field generally involve alkaline soluble silicates with starting molar ratios ranging from 1.45 to 1.95, particularly 1.60 to 1.85, i.e. user-friendly conditions. For research purposes, some laboratory recipes have molar ratios in the 1.20 to 1.45 range.
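The molar ratio used above to draw the user-friendly line can be checked from the weight percentages on a silicate solution's data sheet. A minimal sketch for a sodium silicate; the composition figures are illustrative, not those of any specific commercial product:

```python
# Molar ratio MR = SiO2 : Na2O, computed from weight percent and molar masses.
MW_SIO2, MW_NA2O = 60.08, 61.98  # g/mol

def molar_ratio(wt_pct_sio2: float, wt_pct_na2o: float) -> float:
    return (wt_pct_sio2 / MW_SIO2) / (wt_pct_na2o / MW_NA2O)

# An illustrative waterglass at 27 wt% SiO2 and 8.3 wt% Na2O has MR ~ 3.4,
# well above the 1.45-1.95 field window; blending in sodium hydroxide
# raises the Na2O content and lowers the ratio toward that range.
print(molar_ratio(27.0, 8.3))
```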
Commercial geopolymer cements were developed in the 1980s, of the type (K,Na,Ca)-aluminosilicate (or "slag-based geopolymer cement") and resulted from the research carried out by Joseph Davidovits and J.L. Sawyer at Lone Star Industries, USA, marketed as Pyrament® cement. The US patent 4,509,985 was granted on April 9, 1985 with the title 'Early high-strength mineral polymer'. [ 23 ]
In the 1990s, using knowledge of the synthesis of zeolites from fly ashes, Wastiels et al., [ 24 ] Silverstrim et al. [ 25 ] and van Jaarsveld and van Deventer [ 26 ] developed geopolymeric fly ash-based cements.
Several materials based on siliceous fly ashes (EN 197), also called class F fly ashes (ASTM C618), are known:
The properties of iron-containing "ferri-sialate"-based geopolymer cements are similar to those of rock-based geopolymer cements but involve geological elements, or metallurgical slags, with high iron oxide content. The hypothesised binder chemistry is (Ca,K)-(Fe-O)-(Si-O-Al-O). [ 29 ]
Rock-based geopolymer cements can be formed by the reaction of natural pozzolanic materials under alkaline conditions, [ 30 ] and geopolymers derived from calcined clays (e.g. metakaolin) can also be produced in the form of cements.
Geopolymer cements can be designed to have lower attributed CO 2 emissions compared to other widely-used materials such as ordinary Portland cement . [ 31 ] Geopolymers use industrial byproducts/waste containing aluminosilicate phases in manufacturing, which minimizes CO₂ emissions and therefore have a lower global warming potential (GWP) . [ 32 ] However, emissions still arise from various stages of production of geopolymer concretes. The extraction and processing of raw materials, such as fly ash, slag, or metakaolin, require energy and contribute to CO₂ emissions, though they are often industrial by-products with a lower environmental impact than clinker production in Portland concrete. [ 33 ] A significant source of emissions in geopolymer concrete manufacturing is the production of alkali activators like sodium hydroxide (NaOH) and sodium silicate, which require high-temperature processing and contribute to the overall global warming potential. [ 33 ] Additionally, energy consumption during mixing, transportation, and curing, especially when elevated temperatures are used, can further contribute to emissions. While studies suggest that geopolymer concrete can reduce global warming potential by up to 64% compared to Portland concrete through material selection and optimized activator use, the overall impact depends on the specific composition and processing methods employed. [ 33 ]
While geopolymer concrete generally has a lower global warming potential (GWP) than ordinary Portland concrete, its environmental impact varies based on the choice of raw materials and activators. [ 33 ] In particular, the production of alkali activators like sodium hydroxide plays a crucial role in determining the overall sustainability of geopolymer concrete. A life cycle assessment (LCA) study by Salas et al. (2018) shows that sodium hydroxide production is a major factor in the environmental impact of geopolymer concrete, as it is also essential for sodium silicate production. [ 34 ] The energy mix used in its production significantly influences emissions, with a 2018 mix (85% hydroelectricity) reducing impacts by 30–70% compared to a 2012 mix (62% hydroelectricity). [ 34 ] The source of sodium hydroxide also affects geopolymer concrete’s sustainability, with solar salt-based production and hydropower reducing its GWP by 64% compared to conventional concrete (CC). [ 34 ] However, geopolymer concrete has higher ozone depletion potential due to CFC emissions from the chlor-alkali process, a drawback not present in CC production. [ 34 ] Other environmental impacts vary, with geopolymer concrete slightly outperforming CC in fossil fuel depletion and eutrophication but performing slightly worse in acidification and photochemical oxidant formation. [ 34 ]
In June 2012, the institution ASTM International organized a symposium on Geopolymer Binder Systems. The introduction to the symposium states: [ citation needed ] "When performance specifications for Portland cement were written, non-portland binders were uncommon... New binders such as geopolymers are being increasingly researched, marketed as specialty products, and explored for use in structural concrete. This symposium is intended to provide an opportunity for ASTM to consider whether the existing cement standards provide, on the one hand, an effective framework for further exploration of geopolymer binders and, on the other hand, reliable protection for users of these materials."
The existing Portland cement standards are not adapted to geopolymer cements; new standards must be elaborated by an ad hoc committee. Yet, doing so requires the existence of standard geopolymer cements. Presently, every expert presents their own recipe based on local raw materials (wastes, by-products, or extracted minerals). There is a need for selecting the right geopolymer cement categories. The 2012 State of the Geopolymer R&D [ 35 ] suggested selecting two categories, namely:
along with the appropriate user-friendly geopolymeric reagent.
As with the environmental impacts, the production of geopolymer concrete has some notable human-health implications, primarily due to the use of alkaline activators such as sodium hydroxide (NaOH) and sodium silicate (Na₂SiO₃). These chemicals are highly caustic and can cause severe skin burns, respiratory issues, and eye damage if not handled properly. [ 36 ] Additionally, the manufacturing of NaOH and Na₂SiO₃ contributes to greenhouse gas emissions and releases pollutants linked to human toxicity and ozone depletion. [ 36 ] Fly ash and silica fume, commonly used in geopolymer concrete, also pose risks when not properly managed, as fine particulate matter from these materials can contribute to dust pollution and respiratory diseases. [ 33 ] However, geopolymer concrete can still provide environmental and health benefits by diverting industrial byproducts from landfills and reducing the hazardous emissions associated with traditional cement production. [ 37 ] In addition, the selection of certain precursors and alkaline activators can minimize the health risks associated with geopolymer concrete production. [ 38 ]
Geopolymers can be used as a low-cost and/or chemically flexible route to ceramic production, both to produce monolithic specimens, and as the continuous (binder) phase in composites with particulate or fibrous dispersed phases. [ 39 ]
Geopolymers produced at room temperature are typically hard, brittle , castable, and mechanically strong. This combination of characteristics offers the opportunity for their usage in a variety of applications in which other ceramics (e.g. porcelain ) are conventionally used. Some of the first patented applications of geopolymer-type materials - actually predating the coining of the term geopolymer by multiple decades - relate to use in automobile spark plugs . [ 40 ]
It is also possible to use geopolymers as a versatile pathway to produce crystalline ceramics or glass - ceramics , by forming a geopolymer through room-temperature setting, and then heating (calcining) it at the necessary temperature to convert it from the crystallographically disordered geopolymer form to achieve the desired crystalline phases (e.g. leucite , pollucite and others). [ 41 ]
Because geopolymer artifacts can look like natural stone, several artists have cast replicas of their sculptures in silicone rubber molds. For example, in the 1980s, the French artist Georges Grimal worked on several castable geopolymer stone formulations. [ 42 ]
In the mid-1980s, Joseph Davidovits presented his first analytical results carried out on samples sourced from Egyptian pyramids . He claimed that the ancient Egyptians used a geopolymeric reaction to make re-agglomerated limestone blocks. [ 43 ] [ 44 ] [ 45 ] Later on, several materials scientists and physicists took over these archaeological studies and have published results on pyramid stones, claiming synthetic origins. [ 46 ] [ 47 ] [ 48 ] [ 49 ] However, the theories of synthetic origin of pyramid stones have also been stridently disputed by other geologists, materials scientists, and archaeologists. [ 50 ]
It has also been claimed that the Roman lime-pozzolan cements used in the building of some important structures, especially works related to water storage (cisterns, aqueducts), have chemical parallels to geopolymeric materials. [ 51 ] | https://en.wikipedia.org/wiki/Geopolymer |
Geopolymer bonded wood composites ( GWC ) are similar to, and a green alternative to, cement bonded wood composites. These products are composed of a geopolymer binder and wood fibers or wood particles . Depending on the ratio of wood to geopolymer in the material, the properties of the wood-geopolymer composite vary.
The main functions of wood in the composite material are weight reduction, reduction of thermal conductivity [ 1 ] [ 2 ] and the fixture function [ 3 ] whereas the main functions of geopolymer are bonding of wood particles, improvement of fire resistance , [ 4 ] providing mechanical strength, [ 5 ] improvement of humidity resistance and protection against fungal and insect damages. [ 6 ]
They serve similar functions and purposes to other mineral bonded wood composites. The fact that the binding agent (geopolymer) is mostly produced from industrial residues and waste gives these materials an advantage over other mineral bonded wood composites. However, most of the work on this topic remains at the research and development phase. Some of the core difficulties in producing and commercializing a standardized product are the variation in the sources of the aluminosilicate binder and the cost involved in activating the binder. Currently, metakaolin remains the one key binder source used to produce or bind these products, with large variations in other binder sources such as slag and fly ash. [ citation needed ]
The inherent properties of the composite and the incorporation of wood fibers and particles have made it possible to produce GWC building materials that are lightweight and have a variety of uses due to their heat storage capacity, for example in thermal insulation and fire and noise protection. Wood-geopolymer composite material in building walls can serve as a microclimate regulator, absorbing moisture when the air humidity is high and returning it during periods of low air humidity, thus improving the hygrothermal comfort of the building. [ 7 ]
Currently, these products are not commercialized. Research on these composite materials is ongoing to ascertain their properties and how best to utilize them. [ citation needed ] | https://en.wikipedia.org/wiki/Geopolymer_bonded_wood_composite
" Geoprofessions " is a term coined by the Geoprofessional Business Association to connote various technical disciplines that involve engineering , earth and environmental services applied to below-ground ("subsurface"), ground-surface, and ground-surface-connected conditions, structures, or formations. The principal disciplines include, as major categories:
Each discipline involves specialties, many of which are recognized through professional designations that governments and societies or associations confer based upon a person's education, training, experience, and educational accomplishments. In the United States, engineers must be licensed in the state or territory where they practice engineering. Most states license geologists and several license environmental "site professionals." Several states license engineering geologists and recognize geotechnical engineering through a geotechnical-engineering titling act.
Although geotechnical engineering is applied for a variety of purposes, it is essential to foundation design. As such, geotechnical engineering is applicable to every existing or new structure on the planet: every building and every highway, bridge, tunnel, harbor, airport, water line, reservoir, or other public work. Commonly, the geotechnical-engineering service comprises a study of subsurface conditions using various sampling, in-situ testing, and/or other site-characterization techniques. The instrument of professional service in those cases typically is a report through which geotechnical engineers relate the information they have been retained to provide, typically: their findings; their opinions about subsurface materials and conditions; their judgment about how the subsurface materials and conditions assumed to exist probably will behave when subjected to loads or used as building material; and their preliminary recommendations for materials usage or appropriate foundation systems, the latter based on their knowledge of a structure's size, shape, weight, etc., and the subsurface/structure interactions likely to occur.

Civil engineers , structural engineers , and architects , possibly among other members of the project team, apply the geotechnical findings and preliminary recommendations to take the structure's design forward. They realize these preliminary recommendations are subject to change, however, because – as a matter of practical necessity related to the observational method inherent to geotechnical engineering – geotechnical engineers base their recommendations on the composition of samples taken from a tiny portion of a site whose actual subsurface conditions are unknowable before excavation, because they are hidden by earth and/or rock and/or water.

For this reason, as a key component of a complete geotechnical-engineering service, geotechnical engineers employ construction-materials engineering and testing (CoMET) to observe subsurface materials as they are exposed through excavation. To help achieve economies on their clients' behalf, geotechnical engineers assign their field representatives – specially educated and trained paraprofessionals – to observe the excavated materials and the excavations themselves in light of the conditions the geotechnical engineers opined to exist. When differences are discovered, the geotechnical engineers evaluate the new findings and, when necessary, modify their design and construction recommendations. Because such changes could require other members of the design and construction team to modify their designs, specifications, and proposed methods, many owners have their geotechnical engineers serve as active members of the project team from project inception to conclusion, working with others to help ensure appropriate application of geotechnical information and judgments.
In other cases, geotechnical engineering goes beyond a study and construction recommendations to include design of soil and rock structures. The most common of these are the pavements that make up our streets and highways, airport runways, and bridge and tunnel decks, among other paved improvements. Geotechnical engineers design the pavements in terms of the subgrade, subbase, and base layers of materials to be used, and the thickness and composition of each. Geotechnical engineers also design the earth-retention walls associated with structures such as levees, earthen dams, reservoirs, and landfills. In other cases, the design is applied to contain earth, via structures such as excavation-support systems and retaining walls. Sometimes referred to as geostructural engineering or geostructural design, these services are also intrinsic to hydraulic engineering , hydrogeologic engineering , coastal engineering , geologic engineering and water-resources engineering . Geotechnical-engineering design is also applied for structures such as tunnels, bridges, dams, and other structures beneath, on, or connected to the surface of the earth. Geotechnical engineering, like geology, engineering geology, and geologic engineering, also involves the specialties of rock mechanics and soil mechanics , and often requires knowledge of geotextiles and geosynthetics , as well as an array of instrumentation and monitoring equipment, to help ensure specified conditions are achieved and maintained.
Earthquake engineering and landslide detection, remediation, and prevention are geoprofessional services associated with specialized types of geotechnical engineering (as well as geophysics ; see below), as is forensic geotechnical engineering, a geoprofessional service applied to determine why a certain applicable type of event – usually a failure of some sort – occurred. (Virtually all geoprofessional services can be performed for forensic purposes, commonly as litigation-support/ expert witness services.) Railway-systems engineering is another type of specialized geotechnical engineering, as are the design of piers and bulkheads , drydocks , on-shore and off-shore wind-turbine systems, and systems that stabilize oil platforms and other marine structures to the sea floor.
Geotechnical engineers have long been involved in sustainability initiatives, including (among many others) the use of excavated materials; the safe application of contaminated subsurface materials; the recycling of asphalt, concrete, and building rubble and debris; and the design of permeable pavements .
All civil-engineering specialties and projects – roads and highways, bridges, rail systems, ports and other waterfront structures, airport terminals, etc. – require the involvement of geotechnical engineers and engineering, meaning that many civil-engineering pursuits are geoprofessional pursuits to a greater or lesser degree. However, geotechnical engineering has for centuries also been associated with military engineering , notably sappers and miners, whose tunneling services (known as landmining and undermining ) were used in military-siege operations.
Engineering geologist.
(a) Elements of the engineering geologist specialty.
The practice of engineering geology involves the interpretation, evaluation, analysis, and application of geological information and data to civil works. Geotechnical soil and rock units are designated, characterized, and classified, using standard engineering soil and rock classification systems. Relationships are interpreted between landform development, current and past geologic processes, ground and surface water, and the strength characteristics of soil and rock. Processes evaluated include both surficial processes (for example, slope, fluvial, and coastal processes), and deep-seated processes (for example, volcanic activity and seismicity). Geotechnical zones or domains are designated based on soil and rock geological strength characteristics, common landforms, related geologic processes, or other pertinent factors. Proposed developmental modifications are evaluated and, where appropriate, analyzed to predict potential or likely changes in types and rates of surficial geologic processes. Proposed modifications may include such things as vegetation removal, using various types of earth materials in construction, applying loads to shallow or deep foundations, constructing cut or fill slopes and other grading, and modifying ground and surface water flow. The effects of surficial and deep-seated geologic processes are evaluated and analyzed to predict their potential effect on public health, public safety, land use, or proposed development.
(b) Typical engineering geologic applications and types of projects. Engineering geology is applied during all project phases, from conception through planning, design, construction, maintenance, and, in some cases, reclamation and closure. Planning-level engineering geologic work is commonly conducted in response to forest practice regulations, critical areas ordinances, and the State Environmental Policy Act. Typical planning-level engineering geologic applications include timber harvest planning, proposed location of residential and commercial developments and other buildings and facilities, and alternative route selection for roads, rail lines, trails, and utilities. Site-specific engineering geologic applications include cuts, fills, and tunnels for roads, trails, railroads, and utility lines; foundations for bridges and other drainage structures, retaining walls and shoring, dams, buildings, water towers, slope, channel and shoreline stabilization facilities, fish ladders and hatcheries, ski lifts and other structures; landings for logging and other work platforms; airport landing strips; rock bolt systems; blasting; and other major earthwork projects such as for aggregate sources and landfills.
(Taken from Washington Administrative Code WAC 308-15-053(1))
While engineering geology is applicable principally to planning, design and construction activities, other specialties of geology are applied in a variety of geoprofessional specialty fields, such as mining geology , petroleum geology , and environmental geology . Note that mining geology and mining engineering are different geoprofessional fields.
Geological engineering is a hybrid discipline that comprises elements of civil engineering , mining engineering , petroleum engineering , and earth sciences . Geological engineers often become licensed as both engineers and geologists. There are thirteen geological-engineering (or geoengineering ) programs in the United States that are accredited by the Engineering Accreditation Commission (EAC) of ABET: (1) Colorado School of Mines, (2) Michigan Technological University, (3) Missouri University of Science and Technology, (4) Montana Tech of the University of Montana, (5) South Dakota School of Mines and Technology, (6) University of Alaska-Fairbanks, (7) University of Minnesota Twin Cities, (8) University of Mississippi , (9) University of Nevada, Reno (10) University of North Dakota, (11) University of Texas at Austin, (12) University of Utah, and (13) University of Wisconsin-Madison.
Other schools offer programs or classes in geological engineering, including the University of Arizona .
Geoengineering or geological engineering , engineering geology , and geotechnical engineering deal with the discovery, development, and production and use of subsurface earth resources, as well as the design and construction of earthworks . Geoengineering is the application of geosciences , where mechanics, mathematics, physics, chemistry, and geology are used to understand and shape our interaction with the earth.
Geoengineers work in areas of
Professional geoscience organizations such as the American Rock Mechanics Association or the Geo-Institute and academic degrees such as the bachelor of geoengineering accredited by ABET acknowledge the broad scope of work practiced by geoengineers and stress fundamentals of science and engineering methods for the solution of complex problems. Geoengineers study the mechanics of rock, soil , and fluids to improve the sustainable use of earth's finite resources, where problems appear with competing interests, for example, groundwater and waste isolation, offshore oil drilling and risk of spills, natural gas production and induced seismicity .
Geophysics is the study of the physical properties of the earth using quantitative physical methods to determine what lies beneath the earth's surface. The physical properties of concern include the propagation of elastic waves (seismic), magnetism, gravity, electrical resistivity/conductivity, and electromagnetism. Geophysics has historically been most commonly used in oil exploration and mining, but its popularity in non-destructive investigative work has flourished since the early 1990s. It is also used in groundwater exploration and protection, geo-hazard studies (e.g., faults and landslides), alignment studies (e.g., proposed roadway, underground utilities, and pipelines), foundation studies, contamination characterization and remediation, landfill investigations, unexploded-ordnance investigations, vibration monitoring, dam-safety evaluation, location of underground storage tanks, identification of subsurface voids, and assisting in archeological investigations. (definition from Association of Environmental & Engineering Geologists)
Geophysical engineering is the application of geophysics to the engineering design of facilities including roads, tunnels, wells and mines.
Environmental science and environmental engineering are the geoprofessions commonly associated with the identification, remediation, and prevention of environmental contamination. These services range from phase-one and phase-two environmental site-assessments – research designed to assess the likelihood that a property is contaminated and subsurface exploration conducted to identify the nature and extent of contamination, respectively – up through the design of processes and systems to remediate contaminated sites for the protection of human health and the environment.
Environmental geology is one of the principal geoprofessions engaged in assessing and remediating contaminated sites. Environmental geologists help identify the subsurface stratigraphy in which contaminants are located and through which they migrate. Environmental chemistry is the geoprofession that encompasses the study of chemical compounds in the soil. These compounds are categorized as pollutants or contaminants when introduced into the environment by human factors (e.g., waste, mining processes, radioactive release) and are not of natural origin. Environmental chemistry assesses interactions of these compounds with soil, rock, and water to determine their fate and transport, the techniques to measure the levels of contaminants in the environment, and technologies to destroy or reduce the toxicity of contaminants in wastes or compounds that have been released to the environment. Environmental engineering is often applied to assess contaminated sites, but more often is used in the design of systems to remediate contaminated soil and groundwater.
Hydrogeology is the geoprofession involved when environmental studies involve subsurface water. Hydrogeology applications range from securing safe, plentiful underground drinking-water sources to identifying the nature of groundwater contamination in order to facilitate remediation. Environmental toxicology is a geoprofession when used to identify the source, fate, transformation, effects, and risks of pollutants on the environment, including soil, water, and air. Wetlands science is a geoprofessional pursuit that incorporates several scientific disciplines, such as botany , biology , and limnology . It involves, among other activities, the delineation, conservation, restoration, and preservation of wetlands. These services are sometimes conducted by geoprofessional specialists called wetlands scientists . Ecology is a closely related environmental geoprofession involving studies into the distribution of organisms and biodiversity within an environmental context.
Numerous geoprofessional disciplines contribute to the redevelopment of brownfields , sites (typically urban) that are underused or abandoned because they are or are assumed to be contaminated by hazardous materials. Geoprofessionals are engaged to evaluate the degree to which such sites are contaminated and the steps that can be taken to achieve the sites' safe reuse. Environmental engineers and scientists work with developers to identify and design remediation strategies and exposure-barrier designs that protect future site users from unacceptable exposure to environmental contamination resulting from previous uses of the site. Because these previous uses often resulted in degraded soil conditions and the presence of abandoned, underground structures, geotechnical engineers often are needed to design special foundations for the new structures.
Construction-materials engineering and testing (CoMET) comprises an array of licensed-engineer-directed professional services applied principally for purposes of construction quality assurance and quality control. CoMET services commonly are provided as a separate discipline by firms that also practice geotechnical engineering, possibly among other geoprofessional disciplines. The geoprofessional-service industry has evolved in this manner because geotechnical engineering employs the observational method. Karl von Terzaghi and Ralph B. Peck – the creators of modern geotechnical engineering – used the observational method and multiple working hypotheses to expedite and economize the subsurface-exploration process, by using sampling and testing to form a judgment about subsurface conditions, and then observing excavated conditions and materials to confirm or modify those judgments and related recommendations, and then finalize them.

To economize still further, geoprofessionals educated and trained paraprofessionals to represent them on site (hence the term "field representative"), especially to apply their judgment (much as a geotechnical engineer would) in comparing observed conditions with those the geotechnical engineer believed would exist. Over time, geotechnical engineers expanded their CoMET services by providing the additional education and training their field representatives needed to evaluate constructors' attainment of conditions commonly specified by geoprofessionals; e.g., subsurface preparation for foundations of buildings, roadways, and other structures; materials used for subgrade, subbase, and base purposes; site grading; construction of earthen structures (earth dams, levees, reservoirs, landfills, et al.) and earth-retaining structures (e.g., retaining walls); and so on. Because many of the materials involved, such as concrete, are used in other elements of construction projects and structures, geoprofessional firms expanded their field representatives' skill sets still more, to encompass observation and testing of numerous additional materials (e.g., reinforced concrete, structural steel, masonry, wood, and fireproofing), processes (e.g., cutting and filling and rebar placement), and outcomes (e.g., the effectiveness of welds).

Laboratory services are a common element of many CoMET operations. Also operating under the direction of a licensed engineer, they are applied in geotechnical engineering to evaluate subsurface-material samples. In overall CoMET operations, laboratories operate with the equipment and personnel required to evaluate a variety of construction materials.
CoMET services applied to evaluate the actual composition of a site's subsurface are part of a complete geotechnical-engineering service. For purposes of short-term economy, however, some owners select a firm not associated with the geotechnical engineer of record to provide these and all other CoMET services. This approach precludes the geotechnical engineer of record from providing a complete service. It also increases risk, because the individuals engaged to evaluate actual subsurface conditions are not briefed by the geotechnical engineer of record before they go to the project site, and they seldom communicate with that engineer when they discern differences, in large part because the firm associated with the geotechnical engineer of record is regarded as a competitor of the firm employing the field representatives. In some cases, the field representatives in question also lack the specific project background information and/or the education and training required to discern those differences.
CoMET services applied to evaluate constructors' attainment of specified conditions take the form of quality-assurance (QA) or quality-control (QC) services. QA services are performed directly or indirectly for the owner, who specifies the nature and extent of the QA services the owner believes are appropriate. Some owners specify none at all, or only those required by law. Those required by law are imposed via a jurisdiction's building code . Almost all U.S. jurisdictions base their building codes on "model codes" developed by associations of building officials. The International Code Council (ICC) is the most prominent of these groups, and its International Building Code (IBC) is the most commonly used model. As a result, many jurisdictions now require IBC "Special Inspection," a term the IBC defines as "the required examination of the materials, installation, fabrication, erection, or placement of components and connections requiring special expertise to ensure compliance with approved construction documents and referenced standards." Special Inspection requirements vary from jurisdiction to jurisdiction based on the provisions adopted by the local building official. While some of the services involved may be similar to or the same as conventional CoMET services, Special Inspection is handled differently: most commonly, the owner or the owner's agent is required to retain a Special Inspection-services provider approved by the building official. Special Inspection is often required to obtain a certificate of occupancy .
QC services are those applied by or on behalf of a constructor to verify that it has attained the conditions it contractually agreed to attain. Most CoMET consultants are engaged far more for QA services than for QC services.
Many CoMET procedures are specified in standards developed by standards-developing organizations (SDOs) such as the American Society of Civil Engineers (ASCE) , ASTM International , and the American Concrete Institute (ACI) , using standards-development protocols approved by the American National Standards Institute (ANSI) and/or the International Organization for Standardization (ISO) . All such standards identify the minimum required for conformance. Likewise, several organizations have developed programs to accredit CoMET field and laboratory services to perform certain types of testing and inspection. Some of these programs are more comprehensive than others, requiring, for example, regular calibration of equipment, participation in proficiency-testing programs, and implementation and documentation of a (quality) management system to demonstrate technical competence. As with the standards, however, accreditation establishes only a floor: many CoMET laboratories go far beyond the minimum requirements in an effort to attain higher levels of quality.
A variety of organizations – including local building departments – have developed personnel-certification protocols and requirements. In many jurisdictions, only appropriately certified individuals are permitted to perform certain evaluations. Individuals typically must meet certain prerequisites for certification and pass examinations, in some cases involving performance observation in the field. The prerequisites for higher degrees of certification often include a requirement that the individual first meet the requirements for a lower degree (e.g., Soils Technician I is in some cases a prerequisite for Soils Technician II). Field representatives are sometimes referred to as "soil testers," "technicians," "technicians/technologists," or "engineering technicians." The Geoprofessional Business Association developed the term "field representative" to encompass the many types of paraprofessionals involved (e.g., those who work with specific materials, such as reinforced concrete, soil, or steel, and those who observe or inspect processes or conditions, such as welding inspectors, caisson inspectors, and foundation inspectors), and especially to underscore their significant responsibility, which titles such as "technician" fail to signify. In fact, the engineers who direct CoMET operations are personally and professionally responsible and liable for their field representatives' acts and statements while representing the engineer on site.
Because CoMET consultants have more hands-on experience with construction activities than many other design-team members, many owners involve them (among other geoprofessionals) from the outset of a project, during the design phase, to help the owner and/or design-team members develop technical specifications and establish testing and inspection requirements, instrumentation requirements and procedures, and observation programs. Geotechnical engineers also employ CoMET services during the earliest stages of a project, to oversee subsurface-sampling procedures such as drilling.
Many of the CoMET services performed for construction projects are performed for environmental projects as well, but the requirements tend to be less rigid because fewer licensing and related mandates apply. For example, individuals may perform federally mandated all-appropriate inquiries – typically a phase-one environmental site assessment – without a license of any kind.
To the extent that archeology and paleontology require systematic subsurface excavation to recover artifacts, they, too, are considered geoprofessions. Many geoprofessional-services firms offer these services to those of their clients that need to satisfy federal and/or state regulations that require paleontological and/or archeological inquiry before site development or redevelopment activities can proceed. | https://en.wikipedia.org/wiki/Geoprofessions |
The Geordie lamp was a safety lamp for use in flammable atmospheres, invented by George Stephenson in 1815 as a miner's lamp to prevent explosions due to firedamp in coal mines .
In 1815, Stephenson was the engine-wright at the Killingworth Colliery in Northumberland and had been experimenting for several years with candles close to firedamp emissions in the mine. In August, he ordered an oil lamp, which was delivered on 21 October and tested by him in the mine in the presence of explosive gases. He improved this over several weeks with the addition of capillary tubes at the base so that it gave more light, and tried new versions on 4 and 30 November. This was presented to the Literary and Philosophical Society of Newcastle upon Tyne (Lit & Phil) on 5 December 1815. [ 1 ]
Although controversy arose over the priority of Stephenson's design relative to the Davy lamp (invented by Humphry Davy in the same year), Stephenson's original design worked on significantly different principles from Davy's final design. [ 2 ] If the lamp were sealed except for a restricted air inlet (and a suitably sized chimney), then the presence of dangerous amounts of firedamp in the incoming air would, by its combustion, reduce the oxygen concentration inside the lamp so much that the flame would be extinguished. Stephenson had convinced himself of the validity of this approach by his experiments with candles near lit blowers (jets of firedamp escaping from the coal): as lit candles were placed upwind of a blower, its flame grew duller; with enough upwind candles, the blower flame went out. [ 3 ]
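The oxygen-starvation principle lends itself to a rough stoichiometric check. The sketch below is illustrative only: the 20.9% oxygen content of fresh air and the reaction CH4 + 2O2 → CO2 + 2H2O are standard figures, but complete combustion inside the lamp and a roughly 16% oxygen threshold for sustaining the flame are simplifying assumptions, not values from Stephenson's experiments.

```python
# Rough sketch of the oxygen-starvation principle behind Stephenson's lamp.
# CH4 + 2 O2 -> CO2 + 2 H2O leaves the total number of gas moles unchanged,
# so mole fractions before and after combustion can be compared directly.

O2_IN_AIR = 0.209          # mole fraction of O2 in fresh air
FLAME_O2_THRESHOLD = 0.16  # assumed minimum O2 fraction to sustain an oil flame

def o2_after_combustion(firedamp_fraction: float) -> float:
    """O2 mole fraction remaining once the methane in the intake has burned."""
    o2_supplied = O2_IN_AIR * (1.0 - firedamp_fraction)
    o2_consumed = 2.0 * firedamp_fraction  # 2 mol O2 per mol CH4
    return max(o2_supplied - o2_consumed, 0.0)

for firedamp in (0.00, 0.01, 0.02, 0.03, 0.05):
    o2 = o2_after_combustion(firedamp)
    status = "flame survives" if o2 >= FLAME_O2_THRESHOLD else "flame goes out"
    print(f"{firedamp:5.1%} firedamp -> {o2:5.1%} O2 ({status})")
```

Under these assumptions, a firedamp concentration of only a few percent starves the flame of oxygen, which matches the lamp's intended behaviour of extinguishing itself before the surrounding atmosphere becomes explosive.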
To guard against the possibility of a flame travelling back through the incoming gases (an explosive backblast ), air entered through a number of small-bore tubes, through which the ingress air flowed at a higher velocity than that of a flame fueled by a mixture of firedamp (mostly methane ) and air. These ingress tubes were physically separate from the exhaust chimney. The body of the lamp was lengthened to give the flame a greater convective draw, and thus allow a greater inlet flow restriction and make the lamp less sensitive to air currents. The lamp itself was surrounded by glass, which had an additional perforated metal tube surrounding it for protection. Davy had originally attempted a safety lamp on similar principles, before preferring to enclose the flame inside a brass gauze cylinder; he had publicly identified the importance of admitting the restricted airflow through small orifices (in which the flame velocity is lower) before Stephenson had, and he and his adherents remained convinced that Stephenson had not made this discovery independently. [ 4 ] [ a ] Later, Stephenson adopted Davy's gauze to surround the lamp (instead of the perforated metal tube), and the intake tubes were changed to holes or a gallery at the base of the lamp. It was this revised design that was used for most of the 19th century as the Geordie lamp.
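The backblast safeguard can be sketched in the same spirit: a flame cannot travel upstream through a tube if the incoming gas moves faster than the flame can propagate against it. In the sketch below, the roughly 0.4 m/s laminar flame speed of a methane-air mixture is a textbook figure, while the intake flow, tube bore, and tube count are hypothetical values chosen for illustration, not dimensions of Stephenson's lamp.

```python
import math

# Illustrative check of the backblast principle: the flame is held out of the
# ingress tubes as long as the inflow velocity exceeds the flame speed.

FLAME_SPEED = 0.4      # m/s, approximate laminar flame speed of methane-air
INTAKE_FLOW = 2.0e-5   # m^3/s, assumed air demand of the lamp flame
TUBE_BORE = 0.002      # m, assumed inner diameter of each ingress tube
N_TUBES = 4            # assumed number of ingress tubes

tube_area = math.pi * (TUBE_BORE / 2) ** 2
ingress_velocity = INTAKE_FLOW / (N_TUBES * tube_area)

print(f"air velocity in each tube: {ingress_velocity:.2f} m/s")
if ingress_velocity > FLAME_SPEED:
    print("inflow outruns the flame: no backblast through the tubes")
else:
    print("inflow too slow: a flame could travel back up the tubes")
```

Narrow bores also cool and quench a flame directly, the effect Davy exploited with his gauze; the velocity argument sketched here is the one emphasized in descriptions of Stephenson's tubes.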
One advantage of Stephenson's initial design over Davy's was that if the proportion of firedamp became too high, his lamp would be extinguished, whereas Davy's lamp could become dangerously hot. This was illustrated in the Oaks colliery at Barnsley on 20 August 1857, where both types of lamp were in use. [ 7 ]
Stephenson's design also gave better light output: the glass surrounding the flame cut out less of the light than the gauze that surrounded Davy's. [ 8 ] The glass, however, posed the danger of breakage in the harsh conditions of mineworking, a problem not resolved until the invention of safety glass .
The Geordie lamp continued to be used in the north-east of England through most of the 19th century, until the introduction of electric lighting. | https://en.wikipedia.org/wiki/Geordie_lamp |